
24.09-trtllm-python-py3: compiling the model with tensorrt-llm==0.15.0.dev2024101500 fails with "undefined symbol: _ZN3c106detail14torchCheckFailEPKcS2_jRKSs" #674

Open
xqun3 opened this issue Nov 18, 2024 · 3 comments

Comments


xqun3 commented Nov 18, 2024

Hi, following your code I tried to compile and deploy the whisper-large-v3-turbo model and got the error below. As far as I can tell, 24.09-trtllm-python-py3 ships with tensorrt-llm 0.13.0. Did the build succeed on your side?

Traceback (most recent call last):
  File "/workspace/TensorRT-LLM/examples/whisper/convert_checkpoint.py", line 24, in <module>
    import tensorrt_llm
  File "/usr/local/lib/python3.10/dist-packages/tensorrt_llm/__init__.py", line 32, in <module>
    import tensorrt_llm.functional as functional
  File "/usr/local/lib/python3.10/dist-packages/tensorrt_llm/functional.py", line 28, in <module>
    from . import graph_rewriting as gw
  File "/usr/local/lib/python3.10/dist-packages/tensorrt_llm/graph_rewriting.py", line 12, in <module>
    from .network import Network
  File "/usr/local/lib/python3.10/dist-packages/tensorrt_llm/network.py", line 27, in <module>
    from tensorrt_llm.module import Module
  File "/usr/local/lib/python3.10/dist-packages/tensorrt_llm/module.py", line 17, in <module>
    from ._common import default_net
  File "/usr/local/lib/python3.10/dist-packages/tensorrt_llm/_common.py", line 37, in <module>
    from ._utils import str_dtype_to_trt
  File "/usr/local/lib/python3.10/dist-packages/tensorrt_llm/_utils.py", line 31, in <module>
    from tensorrt_llm.bindings import GptJsonConfig
ImportError: /usr/local/lib/python3.10/dist-packages/tensorrt_llm/bindings.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c106detail14torchCheckFailEPKcS2_jRKSs
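
The missing symbol demangles to c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&), a libtorch function, so the error usually means the tensorrt_llm bindings were built against a different PyTorch version (or C++ ABI) than the torch wheel installed in the container. A minimal diagnostic sketch, assuming the packages were installed via pip inside the container, is to print the versions that have to match:

    # Print the versions of the packages that must be compatible for the
    # tensorrt_llm C++ bindings to load (assumption: a torch / tensorrt_llm
    # version mismatch is behind the undefined symbol).
    import importlib.metadata as md

    for pkg in ("torch", "tensorrt", "tensorrt_llm"):
        try:
            print(f"{pkg}: {md.version(pkg)}")
        except md.PackageNotFoundError:
            print(f"{pkg}: not installed")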
csukuangfj (Collaborator) commented

@yuekaizhang Could you have a look?

yuekaizhang (Collaborator) commented

@xqun3 See #672 and the README in that PR; it supports one-command deployment of both large-v3 and turbo via docker compose.

xqun3 (Author) commented Nov 22, 2024

@yuekaizhang Thanks for the reply. I was also able to compile and deploy successfully with the nvcr.io/nvidia/tritonserver:24.10-py3 image.
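
A quick way to confirm that the bindings load in that image before running the full conversion is a minimal import check (assuming tensorrt_llm is installed in the image or from a matching wheel):

    # If this import succeeds, the undefined-symbol problem is gone and
    # convert_checkpoint.py should get past the import stage.
    import tensorrt_llm
    from tensorrt_llm.bindings import GptJsonConfig  # the import that failed above

    print(tensorrt_llm.__version__)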
