Fixes #322 (Change bf16 to amp_bf16) #443

Status: Open. Wants to merge 1 commit into base: main.
@@ -7,7 +7,7 @@ parallel: true
# Basic run configuration, additional details will be added to this name for each GLUE task, and each random seed
base_run_name: hf-bert-base-uncased-glue-finetuning # Determines how runs are saved and logged in W&B
default_seed: 19
-precision: bf16
+precision: amp_bf16

# Tokenizer for dataset creation
tokenizer_name: bert-base-uncased
@@ -19,7 +19,7 @@ model:
tokenizer_name: ${tokenizer_name}

# Loading
-starting_checkpoint_load_path: # Fill this in with the composer checkpoint from the end of pre-training a HF BERT
+starting_checkpoint_load_path: # Fill this in with the composer checkpoint from the end of pre-training a HF BERT
local_pretrain_checkpoint_folder: ./local-bert-checkpoints/

# Saving
16 changes: 8 additions & 8 deletions examples/benchmarks/bert/yamls/finetuning/glue/mcloud_run.yaml
@@ -12,19 +12,19 @@ name: mosaic-bert-base-uncased-glue-finetuning
image: mosaicml/pytorch:1.13.1_cu117-python3.10-ubuntu20.04

compute:
-gpus: 8 # Number of GPUs to use
+gpus: 8 # Number of GPUs to use

## These configurations are optional
# cluster: TODO # Name of the cluster to use for this run
# gpu_type: a100_80gb # Type of GPU to use. We use a100_80gb in our experiments

integrations:
-- integration_type: git_repo
-git_repo: mosaicml/examples
-git_branch: v0.0.4 # use your branch
-# git_commit: # OR use your commit hash
-pip_install: -e .[bert]
-ssh_clone: false # Should be true if using a private repo
+- integration_type: git_repo
+git_repo: mosaicml/examples
+git_branch: v0.0.4 # use your branch
+# git_commit: # OR use your commit hash
+pip_install: -e .[bert]
+ssh_clone: false # Should be true if using a private repo
command: |
cd examples/examples/bert
python glue.py /mnt/config/parameters.yaml
@@ -43,7 +43,7 @@ parameters:
base_run_name: # If left blank, will be read from top YAML name

default_seed: 19
-precision: bf16
+precision: amp_bf16

# Tokenizer for dataset creation
tokenizer_name: bert-base-uncased
@@ -7,7 +7,7 @@ parallel: true
# Basic run configuration, additional details will be added to this name for each GLUE task, and each random seed
base_run_name: mosaic-bert-base-uncased-glue-finetuning # Determines how runs are saved and logged in W&B
default_seed: 19
-precision: bf16
+precision: amp_bf16

# Tokenizer for dataset creation
tokenizer_name: bert-base-uncased
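The substantive change across all three YAMLs is `precision: bf16` → `precision: amp_bf16`. In Composer, the `amp_bf16` setting selects automatic mixed precision with bfloat16: parameters stay in float32 while compute inside the autocast region runs in bfloat16, as opposed to casting the entire model to bf16. A minimal PyTorch sketch of this autocast behavior (the tiny model and shapes here are made up for illustration, not taken from the repo):

```python
import torch

# Hypothetical tiny model; illustrates bf16 autocast, which is roughly what
# Composer's `amp_bf16` precision setting enables (fp32 weights, bf16 compute).
model = torch.nn.Linear(16, 4)
x = torch.randn(2, 16)

# Inside the autocast region, matmul-heavy ops run in bfloat16 ...
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

# ... but the parameters themselves remain float32 throughout.
assert y.dtype == torch.bfloat16
assert model.weight.dtype == torch.float32
```

With a plain full-model bf16 cast, the weights themselves would be bfloat16, which changes optimizer numerics; the autocast form keeps full-precision master weights, which is why the configs switch to `amp_bf16`.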