
[IEEE VIS 2024] LLaVA-Chart: Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning

Paper Link: https://arxiv.org/abs/2407.20174

(Figure: data generation pipeline)

Release

To-dos

  • Write a walk-through tutorial about this repo.
Data Gallery

(Figures: example charts from our data gallery)

Evaluation

You can run our evaluation with the bash scripts under scripts/ (scripts/*.sh).
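For example, an evaluation run typically looks like the snippet below; the script name is a placeholder, so check the scripts/ directory for the benchmark you want to evaluate.

ls scripts/                      # see which evaluation scripts are available
bash scripts/<eval_script>.sh    # replace <eval_script> with the script for your benchmark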

CLI Inference

Here is the command for chatting with our model without needing a Gradio interface (replace "*.jpg" with the path to your chart image).

cd model/high_resolution
python -m llava_hr.serve.cli \
    --model-path ./checkpoints/llava-hr-ChartInstruction \
    --image-file "*.jpg"

Usage and License Notices:

  • For the base model LLaVA: this project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses, including but not limited to the OpenAI Terms of Use for the dataset and the specific licenses of the base language models for checkpoints trained using the dataset (e.g., the Llama 2 Community License for LLaMA-2 and Vicuna-v1.5).

Acknowledgement

  • Vicuna: the codebase LLaVA is built upon; LLaVA's base language model is Vicuna-13B.
  • LLaVA: the codebase we built upon. LLaVA was the only open-source project with all of its training code released when we started this work.
  • LLaVA-HR: the high-resolution model we built upon.
  • SemDeDup: the sampling module our data selection is based on; SemDeDup is designed for sampling from hundreds of millions of images.
  • WYTIWYR: part of the data for our classification is collected from this work.
  • Unichart: part of the existing data was first collected by Unichart.

Contact

If you have any questions about this work, please email Xingchen Zeng at [email protected].

Citation

@article{zeng2024vis,
  author={Zeng, Xingchen and Lin, Haichuan and Ye, Yilin and Zeng, Wei},
  journal={IEEE Transactions on Visualization and Computer Graphics}, 
  title={Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning}, 
  year={2024},
  pages={1-11},
  doi={10.1109/TVCG.2024.3456159}
}
