Researcher in deep learning.
Pinned
- daizedong.github.io (forked from academicpages/academicpages.github.io): Daize Dong's personal page. JavaScript.
- pjlab-sys4nlp/llama-moe: ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024).
- OpenSparseLLMs/LLaMA-MoE-v2: 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training.
- A4Bio/GraphsGPT: The official implementation of the ICML'24 paper "A Graph is Worth K Words: Euclideanizing Graph using Pure Transformer".
- CASE-Lab-UMD/Unified-MoE-Compression: The official implementation of the paper "Demystifying the Compression of Mixture-of-Experts Through a Unified Framework".