
Hi! Here is Coobiw 👋

πŸ™‹β€β™‚οΈ About Me:

  • πŸ‘¨β€πŸ¦° I’m currently a Master of Science candidate of Peking University (PKU).
  • πŸ‘¦ Before that, I received the Honours Bachelor, Huazhong University of Science and Technology (HUST).
  • ❀️‍πŸ”₯ Now, I am intersted in Multi-modal Learning especially MLLM.

😋 Projects:

  • 💥 In the summer of 2023, I took part in the OSPP (Open Source Promotion Plan) Summer Camp, with the honor of contributing to MMPretrain to build a prompt-based classifier.
    • Now, the implementation of the zero-shot CLIP classifier has been merged into the main branch. Codebase
    • The implementation of RAM (Recognize Anything Model) has been merged into the dev branch. Welcome to use the Gradio WebUI to test it on MMPretrain! Codebase
  • 💥 2023.11-2024.5: MPP-Qwen-Next is released! All training is conducted on 3090/4090 GPUs. To prevent poverty (24GB of VRAM) from limiting imagination, I implemented an MLLM based on DeepSpeed Pipeline Parallelism. The repo supports {video/image/multi-image} {single/multi-turn} conversations. Let's have a try! Codebase
  • 💥 2024.9: We release ChartMoE, a multimodal large language model with a Mixture-of-Experts connector, for advanced chart 1) understanding, 2) replotting, 3) editing, 4) highlighting, and 5) transformation. arXiv Project Page Codebase
  • 💥💥💥 2024.10: I am really fortunate to be involved in the development of Aria. Aria is a native multimodal MoE model with best-in-class performance across multimodal, language, and coding tasks! Blog Tech Report Code


Pinned Repositories

  1. rhymes-ai/Aria — Codebase for Aria, an open multimodal-native MoE. (Jupyter Notebook)
  2. MPP-LLaVA — Personal project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let the poverty limit your imagination! Tr… (Jupyter Notebook)
  3. IDEA-FinAI/ChartMoE (Jupyter Notebook)
  4. open-mmlab/mmpretrain — OpenMMLab Pre-training Toolbox and Benchmark. (Python)
  5. IP-IQA — [ICME2024, Official Code] for the paper "Bringing Textual Prompt to AI-Generated Image Quality Assessment". (Python)
  6. TriVQA — [CVPRW2024, Official Code] for the paper "Exploring AIGC Video Quality: A Focus on Visual Harmony, Video-Text Consistency and Domain Distribution Gap".