fine-tune whisper lg

jupyter notebooks to fine-tune whisper models on luganda using kaggle (should also work on colab, but not thoroughly tested)

N.B.1 importing any Trainer or Pipeline class from transformers crashes the kaggle TPU session, so it is better to use a GPU

N.B.2 the Trainer class from transformers can automatically use multiple GPUs (e.g. kaggle's free T4×2) without code changes

scripts

evaluate accuracy (WER):
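WER counts the word-level substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the number of reference words. The evaluation notebook would normally use a library such as `jiwer` or `evaluate`; the hand-rolled Levenshtein version below is just an illustrative sketch of the metric itself, with made-up Luganda-like strings.

```python
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit-distance table over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of three reference words → WER of 1/3.
print(wer("ekibuga kya kampala", "ekibuga kye kampala"))
```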

fine-tune whisper tiny with traditional approach:

fine-tune whisper large with PEFT-LoRA + int8:

fine-tune wav2vec v2 bert: w2v-bert-v2.ipynb

docker image to fine-tune on AWS: Dockerfile
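A Dockerfile for this kind of workload typically layers Python audio/ML dependencies onto a CUDA base image. The sketch below is hypothetical: the base image tag, pinned packages, and the `train.ipynb` entry notebook are all assumptions, not the contents of the repo's actual Dockerfile.

```dockerfile
# Hypothetical sketch only; base image, packages, and notebook name are assumptions.
FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip ffmpeg && \
    rm -rf /var/lib/apt/lists/*

# Core fine-tuning dependencies (Trainer, datasets, audio I/O, WER metric).
RUN pip3 install --no-cache-dir transformers datasets accelerate \
        soundfile librosa evaluate jiwer jupyter

WORKDIR /workspace
COPY . /workspace

# Execute the chosen fine-tuning notebook headlessly ("train.ipynb" is a placeholder).
CMD ["jupyter", "nbconvert", "--to", "notebook", "--execute", "train.ipynb"]
```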

convert to openai-whisper, whisper.cpp, faster-whisper, ONNX, TensorRT: not yet

datasets
