
Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message" #1

Open
fahdmirza opened this issue Jul 27, 2024 · 5 comments

Comments

@fahdmirza

Hi,
Thanks for the tool. I was facing an error while running it and fixed main.py as follows to get it working:

import os

def create_meta_file(local_dir, file_path):
    # Create the metafile (Modelfile) for Ollama. The template is kept in a
    # plain string, outside the f-string, so the Go-template braces need no
    # escaping (inside an f-string, {{ would collapse to a single brace).
    template = '''TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
'''
    meta_file_content = f"FROM {file_path}\n\n" + template

    print("Metafile content:")
    print(meta_file_content)

    meta_file_path = os.path.join(local_dir, "metafile.txt")
    with open(meta_file_path, "w") as meta_file:
        meta_file.write(meta_file_content)
    return meta_file_path
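One pitfall worth noting with building the metafile in an f-string: inside a Python f-string, `{{` collapses to a single `{`, so Go-template markers such as `{{ .System }}` come out as `{ .System }` unless the braces are quadrupled, or the template text is kept outside the f-string. A minimal demonstration:

```python
file_path = "model.gguf"

# Inside an f-string, doubled braces collapse to single braces,
# which is wrong for an Ollama template:
collapsed = f"FROM {file_path}\nTEMPLATE {{ .System }}"
assert "{{" not in collapsed          # markers were collapsed to { .System }

# Quadrupling the braces preserves the Go-template markers:
preserved = f"FROM {file_path}\nTEMPLATE {{{{ .System }}}}"
assert "{{ .System }}" in preserved   # double braces survive as-is
```

Keeping the TEMPLATE block in a plain (non-f) string avoids the escaping entirely.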

===============

This was the error:

(olguff) Ubuntu@0136-ict-prxmx50056:~/olguff$ python3 main.py
Enter the Hugging Face model ID: shenzhi-wang/Llama3.1-8B-Chinese-Chat
Available files in the repository:

  1. .gitattributes
  2. README.md
  3. config.json
  4. gguf/llama3.1_8b_chinese_chat_f16.gguf
  5. gguf/llama3.1_8b_chinese_chat_q4_k_m.gguf
  6. gguf/llama3.1_8b_chinese_chat_q8_0.gguf
  7. model-00001-of-00004.safetensors
  8. model-00002-of-00004.safetensors
  9. model-00003-of-00004.safetensors
  10. model-00004-of-00004.safetensors
  11. model.safetensors.index.json
  12. special_tokens_map.json
  13. tokenizer.json
  14. tokenizer_config.json
  Enter the number of the file you want to download: 5
  File 'gguf/llama3.1_8b_chinese_chat_q4_k_m.gguf' already exists. Do you want to redownload it? (yes/no): no
  Skipping download of 'gguf/llama3.1_8b_chinese_chat_q4_k_m.gguf'.
  Enter the Ollama name for the model (default: gguf/llama3.1_8b_chinese_chat_q4_k_m): myllama
  Do you want to proceed with the 'ollama create' command? (yes/no): yes

Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"
Model imported successfully!

P.S. I am also doing a video on it for my channel https://www.youtube.com/@fahdmirza; it will be published soon. Thanks.

@fahadshery

were you able to resolve this?

@fahdmirza
Author

were you able to resolve this?

This tool doesn't seem ready yet, but I have covered various other similar tools at https://www.youtube.com/@fahdmirza

@fahadshery

Thank you, Mirza sahb. I have a GPU and I am interested in safetensors files. Have you covered converting those to Ollama, or are you planning to?

@ZhangChi7

Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"

@ZhangChi7

In my case, the first line of the generated Modelfile is "failed to get console mode for stdout: The handle is invalid." Deleting that line and then running the create command again works.
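That kind of workaround can be automated. A sketch, hedged: the command list below comes from the error message in this thread, and the filtering logic is an assumption, not Ollama's actual parser. It drops leading lines that don't start with a known Modelfile command before `ollama create` is run:

```python
# Commands taken from the error message:
# 'command must be one of "from", "license", "template", "system",
#  "adapter", "parameter", or "message"'
VALID_COMMANDS = {"from", "license", "template", "system",
                  "adapter", "parameter", "message"}

def strip_stray_lines(modelfile_text):
    """Drop leading junk lines (e.g. a captured console error) that do
    not begin with a known Modelfile command."""
    lines = modelfile_text.splitlines()
    while lines and (not lines[0].strip()
                     or lines[0].split()[0].lower() not in VALID_COMMANDS):
        lines.pop(0)
    return "\n".join(lines) + "\n"

cleaned = strip_stray_lines(
    "failed to get console mode for stdout: The handle is invalid.\n"
    "FROM model.gguf\n"
)
assert cleaned.startswith("FROM model.gguf")
```

Running the generated Modelfile through a filter like this before `ollama create` would remove the stray first line without manual editing.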
