Issues: pytorch/executorch
#7093: RFC: Improve developer experience by anchoring on multimodal use-case
Opened Nov 26, 2024 by mergennachin
#7088: RFC: PTE Size Inspector Design
Labels: rfc (Request for comment and feedback on a post, proposal, etc.)
Opened Nov 26, 2024 by Olivia-liu
#7085: "Make clean" script should kill/restart buck automatically
Labels: module: build (Related to buck2 and cmake build)
Opened Nov 26, 2024 by mergennachin
#7083: Create "make clean" equivalent script that encapsulates all necessary steps
Labels: module: build (Related to buck2 and cmake build)
Opened Nov 26, 2024 by mergennachin
#7081: nnlib-hifi4 is too big during setting up ExecuTorch from source
Labels: module: build (Related to buck2 and cmake build)
Opened Nov 26, 2024 by mergennachin
#7076: Attempting running Minibench on Android, no results generated
Opened Nov 26, 2024 by deneriz-veridas
#7053: I want to add a system prompt when I run the Llama3 test on my computer.
Opened Nov 25, 2024 by scj0709
#7032: Make Llava to be configurable so that you can swap text model
Opened Nov 22, 2024 by mergennachin
#7031: kernel 'aten::_upsample_bilinear2d_aa.out' not found.
Labels: module: kernels (Issues related to kernel libraries, e.g. portable kernels and optimized kernels)
Opened Nov 22, 2024 by My-captain
#7030: how to build a llama2 runner binary with vulkan backends in the server with intel x86 server
Labels: module: vulkan
Opened Nov 22, 2024 by l2002924700
#7029: Is sh examples/demo-apps/android/LlamaDemo/setup-with-qnn.sh supposed/equipped to generate .aar file for x86_64?
Labels: partner: qualcomm (For backend delegation, kernels, demo, etc. from the 3rd-party partner, Qualcomm), module: qnn (Related to Qualcomm's QNN delegate)
Opened Nov 22, 2024 by Astuary
#7028: Model could not load (Error Code: 32) in LlamaDemo app
Labels: Android (Android building and execution related), module: qnn (Related to Qualcomm's QNN delegate)
Opened Nov 22, 2024 by Astuary
#6975: Missing Out Variants When Running Llama3.2 Example Without XNNPack
Opened Nov 20, 2024 by sheetalarkadam
#6955: I pulled the latest code, and the model is reporting errors everywhere
Labels: bug (Something isn't working), module: qnn (Related to Qualcomm's QNN delegate)
Opened Nov 19, 2024 by yangh0597
#6911: Auto-assign reviewers for delegate PRs
Labels: actionable (Items in the backlog waiting for an appropriate impl/fix), feature (A request for a proper, new feature)
Opened Nov 15, 2024 by cbilgin
#6906: ExecuTorch Android Build Docs seem out-of-date
Labels: Android (Android building and execution related), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Opened Nov 15, 2024 by JCodeShelver
#6887: Load Llava model error: Fatal signal 11 (SIGSEGV), code 2 (SEGV_ACCERR)
Labels: module: examples (Issues related to demos under the examples directory), triage review (Items that require a triage review)
Opened Nov 15, 2024 by RustamG
#6886: The error in the XNNPACK quantize script in aot_compiler.py
Labels: actionable (Items in the backlog waiting for an appropriate impl/fix), bug (Something isn't working), good first issue (Good for newcomers), module: examples (Issues related to demos under the examples directory), module: xnnpack (Issues related to XNNPACK delegation), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Opened Nov 15, 2024 by LuckyHeart