Meme Classification for Tamil Language at the EACL 2021 Workshop

Our source code for the EACL 2021 workshop task Meme Classification for Tamil Language. We took first place in this task! πŸ₯³

Update: the source code has been released! 🀩

Repository structure

β”œβ”€β”€ MyLoss.py                          # implementation of some loss functions
β”œβ”€β”€ README.md
β”œβ”€β”€ __init__.py
β”œβ”€β”€ args.py                            # declares the arguments
β”œβ”€β”€ ckpt
β”‚   └── README.md
β”œβ”€β”€ data                               # stores the data
β”‚   └── README.md
β”œβ”€β”€ gen_data.py                        # generates the Dataset
β”œβ”€β”€ install_cli.sh                     # installs the required packages
β”œβ”€β”€ logfile                            # stores log files produced during training
β”œβ”€β”€ main.py                            # trains the model
β”œβ”€β”€ model.py                           # defines the model
β”œβ”€β”€ multimodal_attention.py            # implementation of the multimodal attention layer
β”œβ”€β”€ pred_data
β”‚   └── README.md
β”œβ”€β”€ preprocessing.py                   # preprocesses the data
β”œβ”€β”€ pretrained_weights                 # stores the pretrained weights of ResNet and XLM-RoBERTa
β”‚   └── README.md
β”œβ”€β”€ run.sh                             # runs the model
└── train.py                           # defines the training and validation loop
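
The model architecture itself lives in model.py and multimodal_attention.py. As a rough orientation only, here is a minimal sketch of one plausible way to fuse ResNet image features with XLM-RoBERTa text features through a cross-modal attention layer; the class names, dimensions, and fusion strategy below are assumptions for illustration, not the released implementation.

import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import XLMRobertaModel

class CrossModalAttention(nn.Module):
    """Text tokens attend over image region features (illustrative only)."""
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats, image_feats):
        # text_feats: (B, T, dim); image_feats: (B, R, dim)
        fused, _ = self.attn(query=text_feats, key=image_feats, value=image_feats)
        return self.norm(text_feats + fused)

class MemeClassifier(nn.Module):
    """Hypothetical ResNet + XLM-RoBERTa fusion model for meme classification."""
    def __init__(self, num_classes: int = 2, dim: int = 768):
        super().__init__()
        # Visual encoder: ResNet-50 with its pooling and classification head removed.
        backbone = resnet50(pretrained=True)
        self.visual = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 2048, 7, 7)
        self.visual_proj = nn.Linear(2048, dim)
        # Text encoder: multilingual XLM-RoBERTa.
        self.text = XLMRobertaModel.from_pretrained("xlm-roberta-base")
        self.fusion = CrossModalAttention(dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, input_ids, attention_mask, pixel_values):
        v = self.visual(pixel_values)                    # (B, 2048, 7, 7)
        v = v.flatten(2).transpose(1, 2)                 # (B, 49, 2048) image "tokens"
        v = self.visual_proj(v)                          # (B, 49, dim)
        t = self.text(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state  # (B, T, dim)
        fused = self.fusion(t, v)                        # (B, T, dim)
        return self.classifier(fused[:, 0])              # logits from the <s> token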

Installation

Use the following command to install all of the required packages:

sh install_cli.sh

Preprocessing

The first step is to preprocess the data. Just use the following command:

python3 -u preprocessing.py
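
After preprocessing, gen_data.py builds the Dataset that training consumes. The sketch below only illustrates the general shape such a dataset might take, assuming each example pairs a meme image with its Tamil caption and a troll / not-troll label; the file names, column names, and label strings are assumptions, not the actual data format.

import pandas as pd
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms
from transformers import XLMRobertaTokenizer

class MemeDataset(Dataset):
    """Hypothetical dataset pairing each meme image with its caption and label."""
    def __init__(self, csv_path: str, image_dir: str, max_len: int = 128):
        # Assumed CSV columns: image_name, caption, label.
        self.df = pd.read_csv(csv_path)
        self.image_dir = image_dir
        self.tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
        self.max_len = max_len
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])
        self.label2id = {"not_troll": 0, "troll": 1}  # assumed label set

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        image = Image.open(f"{self.image_dir}/{row['image_name']}").convert("RGB")
        encoding = self.tokenizer(row["caption"], truncation=True,
                                  padding="max_length", max_length=self.max_len,
                                  return_tensors="pt")
        return {
            "input_ids": encoding["input_ids"].squeeze(0),
            "attention_mask": encoding["attention_mask"].squeeze(0),
            "pixel_values": self.transform(image),
            "label": torch.tensor(self.label2id[row["label"]]),
        }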

Training

The second step is to train our model. Use the following command:

nohup sh run.sh > run_log.log 2>&1 &
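
run.sh wraps the call to main.py, which drives the training and validation loop defined in train.py. Reusing the hypothetical MemeClassifier and MemeDataset sketched above, a minimal training loop might look like the following; the optimizer, learning rate, batch size, and epoch count are illustrative assumptions, not the values set in args.py or run.sh.

import torch
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
model = MemeClassifier(num_classes=2).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
criterion = torch.nn.CrossEntropyLoss()
train_loader = DataLoader(MemeDataset("data/train.csv", "data/train_images"),
                          batch_size=16, shuffle=True)

model.train()
for epoch in range(5):
    for batch in train_loader:
        optimizer.zero_grad()
        logits = model(batch["input_ids"].to(device),
                       batch["attention_mask"].to(device),
                       batch["pixel_values"].to(device))
        loss = criterion(logits, batch["label"].to(device))
        loss.backward()
        optimizer.step()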

Inference

The final step is inference after training. Use the following command:

nohup python3 -u inference.py > inference.log 2>&1 &
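
inference.py is expected to load the trained checkpoint from ckpt/ and write predictions for the test memes (see pred_data/). A minimal sketch, again reusing the hypothetical classes above and with assumed file paths:

import torch
from torch.utils.data import DataLoader

# Assumed checkpoint and data paths; the real ones are configured in args.py.
model = MemeClassifier(num_classes=2)
model.load_state_dict(torch.load("ckpt/best_model.pt", map_location="cpu"))
model.eval()

test_loader = DataLoader(MemeDataset("data/test.csv", "data/test_images"),
                         batch_size=32)

predictions = []
with torch.no_grad():
    for batch in test_loader:
        logits = model(batch["input_ids"], batch["attention_mask"],
                       batch["pixel_values"])
        predictions.extend(logits.argmax(dim=-1).tolist())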

Congratulations! You now have the final results! 🀩

If you use our code, please cite this repository.
