Commit
Merge pull request #150 from georgian-io/fix-generate
Bug Fix - `llmtune generate config` not finding the yml file
benjaminye authored Apr 10, 2024
2 parents 98e1ad4 + 20be6e2 commit eec2690
Showing 2 changed files with 4 additions and 4 deletions.
llmtune/cli/toolkit.py (1 addition, 1 deletion)

@@ -126,7 +126,7 @@ def generate_config():
     """
     Generate an example `config.yml` file in current directory
     """
-    module_path = Path(llmtune.__file__).parent
+    module_path = Path(llmtune.__file__)
     example_config_path = module_path.parent / EXAMPLE_CONFIG_FNAME
     destination = Path.cwd()
     shutil.copy(example_config_path, destination)
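The bug is a path-resolution off-by-one: `Path(llmtune.__file__)` points at the package's `__init__.py`, so a single `.parent` lands inside the package directory where `config.yml` now lives, whereas the old code applied `.parent` twice and looked one level too high. A minimal, self-contained reproduction (the temporary directory layout is a stand-in for an installed site-packages tree, not llmtune's actual install):

```python
# Reproduce the buggy vs fixed lookup from generate_config().
import tempfile
from pathlib import Path

EXAMPLE_CONFIG_FNAME = "config.yml"

# Build a fake installed package: <tmp>/llmtune/{__init__.py, config.yml}
pkg_root = Path(tempfile.mkdtemp())
pkg_dir = pkg_root / "llmtune"
pkg_dir.mkdir()
(pkg_dir / "__init__.py").write_text("")
(pkg_dir / EXAMPLE_CONFIG_FNAME).write_text("model: {}\n")

module_file = pkg_dir / "__init__.py"  # stand-in for llmtune.__file__

# Buggy version: Path(llmtune.__file__).parent, then .parent again,
# climbs out of the package and misses llmtune/config.yml.
buggy_path = Path(module_file).parent.parent / EXAMPLE_CONFIG_FNAME
# Fixed version: Path(llmtune.__file__), then one .parent, stays inside.
fixed_path = Path(module_file).parent / EXAMPLE_CONFIG_FNAME

print(buggy_path.exists(), fixed_path.exists())
```

Running this prints `False True`: only the fixed lookup finds the shipped `config.yml`, which is why the second change in this commit moves the file into the `llmtune/` package directory.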
config.yml → llmtune/config.yml (3 additions, 3 deletions)

@@ -17,15 +17,15 @@ data:
   prompt_stub:
     >- # Stub to add for training at the end of prompt, for test set or inference, this is omitted; make sure only one variable is present
     {output}
-  test_size: 0.1 # Proportion of test as % of total; if integer then # of samples
-  train_size: 0.9 # Proportion of train as % of total; if integer then # of samples
+  test_size: 25 # Proportion of test as % of total; if integer then # of samples
+  train_size: 500 # Proportion of train as % of total; if integer then # of samples
   train_test_split_seed: 42
 
 # Model Definition -------------------
 model:
   hf_model_ckpt: "mistralai/Mistral-7B-Instruct-v0.2"
   torch_dtype: "bfloat16"
-  attn_implementation: "flash_attention_2"
+  #attn_implementation: "flash_attention_2"
   quantize: true
   bitsandbytes:
     load_in_4bit: true
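Per the inline comments, `test_size` and `train_size` accept two forms: a float is read as a proportion of the dataset, an integer as an absolute sample count, so this change switches the example from proportions (0.1/0.9) to fixed counts (25/500). A hedged sketch of that convention (the helper `resolve_split_size` is hypothetical, not llmtune's actual code; it mirrors the float-vs-int semantics of scikit-learn's `train_test_split`):

```python
def resolve_split_size(value, total):
    """Interpret a split-size config value against a dataset of `total` rows.

    Float in (0, 1] -> proportion of `total`; int -> absolute sample count.
    (Hypothetical helper illustrating the config comment's convention.)
    """
    if isinstance(value, float):
        return int(total * value)
    return value

total_rows = 5000
# Old example config: proportions of the dataset.
print(resolve_split_size(0.1, total_rows))  # 500 test rows
# New example config: absolute sample counts, independent of dataset size.
print(resolve_split_size(25, total_rows))   # 25 test rows
```

Absolute counts make the example config's runtime predictable on large datasets, which is a plausible motivation for the switch.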
