Large encoder and decoder model #25
Comments
Hi, would you please say how much GPU memory is required to run the model? Thank you.
Hi @Mina1368, save the model with torch.save('filename.t7', model:clearState()) to make it take up less disk space for deployment.
Hi @culurciello, I tried torch.save(filename, model:clearState():get(1)) and also calling model:clearState() before saving; both give the same result for the encoder, about 400 MB. I also tried torch.save('filename.t7', model:clearState()). Do you have any idea?
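For reference, here is a minimal Torch sketch of the save-with-clearState approach discussed above; the model construction and file name are placeholders, not part of the original thread:

require 'nn'

-- 'model' stands in for the trained encoder; any nn module works the same way.
local model = nn.Sequential()
   :add(nn.SpatialConvolution(3, 16, 3, 3, 1, 1, 1, 1))
   :add(nn.ReLU(true))

-- clearState() drops the cached output/gradInput buffers that every module
-- keeps after training; those buffers are usually what makes the saved file
-- much larger than the parameters alone.
model:clearState()
torch.save('encoder-compact.t7', model)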
When I want to train ENet on the Cityscapes or CamVid dataset, how much GPU memory do I need?
Hi,
I trained both the encoder and the decoder and got large model.net files: the encoder model is about 400 MB and the decoder model is about 19 MB. Both are much larger than yours; is this normal, or is something wrong?
First, I trained the encoder:
th run.lua --dataset cs --datapath /home/janice/Pictures/Cityscapes --model models/encoder.lua --save /home/janice/Documents/ENet-training/train/trained/encoder_model/ --imHeight 256 --imWidth 512 --labelHeight 32 --labelWidth 64 --cachepath /home/janice/Documents/ENet-training/train/trained/encoder_dataset_cache/ --nGPU 1 --learningRate 5e-4 --weightDecay 2e-4 --batchSize 5
Then I trained the decoder:
th run.lua --dataset cs --datapath /home/janice/Pictures/Cityscapes/ --model models/decoder.lua --save /home/janice/Documents/ENet-training/train/trained/decoder_model/ --imHeight 256 --imWidth 512 --labelHeight 256 --labelWidth 512 --cachepath /home/janice/Documents/ENet-training/train/trained/decoder_dataset_cache/ --nGPU 1 --learningRate 5e-4 --weightDecay 2e-4 --batchSize 5 --CNNEncoder /home/janice/Documents/ENet-training/train/trained/encoder_model/model-best.net
Thanks!!
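As a general sanity check (a sketch added here for illustration, not part of the original thread; the file paths are placeholders), you can compare the size of the parameters alone against the size of the saved file. If the file is far larger than the parameter footprint, the difference is cached module state that clearState() removes:

require 'nn'

local model = torch.load('model-best.net')            -- placeholder path
local params = model:getParameters()                   -- flattened parameter tensor
local paramMB = params:nElement() * 4 / 1024 / 1024    -- assumes 4-byte float weights
print(string.format('parameters alone: %.1f MB', paramMB))

-- Drop cached buffers and save a compact copy for comparison.
model:clearState()
torch.save('model-best-clean.net', model)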