DaizeDong/README.md

[Contribution Grid Snake animation]

Pinned

  1. daizedong.github.io (Public)

    Forked from academicpages/academicpages.github.io

    Daize Dong's personal page.

    JavaScript · 1 star

  2. pjlab-sys4nlp/llama-moe (Public)

    ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)

    Python · 887 stars · 46 forks

  3. OpenSparseLLMs/LLaMA-MoE-v2 (Public)

    🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training

    Python · 53 stars · 7 forks

  4. A4Bio/GraphsGPT (Public)

    The official implementation of the ICML'24 paper "A Graph is Worth K Words: Euclideanizing Graph using Pure Transformer".

    Python · 33 stars · 2 forks

  5. CASE-Lab-UMD/Unified-MoE-Compression (Public)

    The official implementation of the paper "Demystifying the Compression of Mixture-of-Experts Through a Unified Framework".

    Python · 49 stars · 5 forks