
# fine-tune whisper lg

Jupyter notebooks to fine-tune Whisper models on Luganda using Kaggle (they should also work on Colab, but this is not thoroughly tested).

N.B.1: importing any Trainer or Pipeline class from transformers crashes the Kaggle TPU session, so it is better to use the GPU accelerators.

N.B.2: the Trainer class from transformers can automatically use multiple GPUs (such as Kaggle's free T4×2) without any code change, as sketched below.
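
For reference, a minimal sketch of what N.B.2 looks like in practice; the checkpoint name, output path, dataset, and collator below are placeholders, not files from this repo:

```python
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
)

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Trainer detects every visible GPU (e.g. both of Kaggle's free T4s) and
# wraps the model in torch.nn.DataParallel on its own; nothing below is
# multi-GPU specific.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-lg",        # hypothetical output path
    per_device_train_batch_size=8,    # per GPU, so 16 samples/step on T4 x2
    learning_rate=1e-5,
    num_train_epochs=3,
    fp16=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,      # assumed: a prepared Luganda dataset
    data_collator=data_collator,      # assumed: a padding collator for speech
)
trainer.train()
```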

## scripts

- evaluate accuracy (WER): (see the WER sketch after this list)
- fine-tune whisper tiny with the traditional approach:
- fine-tune whisper large with PEFT-LoRA + int8: (see the LoRA sketch after this list)
- fine-tune wav2vec v2 bert: w2v-bert-v2.ipynb
- docker image to fine-tune on AWS: Dockerfile
- convert to openai-whisper, whisper.cpp, faster-whisper, ONNX, TensorRT: not yet
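
For the WER item above, a minimal sketch of how accuracy is typically scored with the Hugging Face evaluate library; the transcripts are made-up examples, not data from this repo:

```python
import evaluate

wer_metric = evaluate.load("wer")

# hypothetical ground-truth transcripts and model outputs
references = ["omusajja agenda mu kibuga", "abaana basoma ku ssomero"]
predictions = ["omusajja agenda mukibuga", "abaana basoma ku ssomero"]

# WER = (substitutions + insertions + deletions) / words in the references
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2%}")
```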
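
And for the PEFT-LoRA + int8 item, a sketch of the usual recipe (8-bit weights via bitsandbytes, then LoRA adapters on the attention projections); the checkpoint name and hyperparameters are assumptions, not necessarily what the notebook uses:

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import BitsAndBytesConfig, WhisperForConditionalGeneration

# load the base model with int8 weights so whisper-large fits on one GPU
model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2",                        # assumed checkpoint
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# train low-rank adapters on the attention projections only
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters are a small fraction of the model
```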

## datasets