Hi,
Thanks for the tool. I was getting an error while running it, and fixed main.py as follows to get it working:
import os

def create_meta_file(local_dir, file_path):
    # Create the metafile (Modelfile) for Ollama.
    # The template lines are plain strings rather than part of the f-string,
    # so the literal {{ ... }} placeholders survive (inside an f-string,
    # "{{" collapses to a single "{").
    meta_file_content = (
        f"FROM {file_path}\n"
        'TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>\n'
        "{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>\n"
        "{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>\n"
        '{{ .Response }}<|eot_id|>"""\n'
    )
    print("Metafile content:")
    print(meta_file_content)
    meta_file_path = os.path.join(local_dir, "metafile.txt")
    with open(meta_file_path, "w") as meta_file:
        meta_file.write(meta_file_content)
    return meta_file_path
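For anyone hitting the same "command must be one of ..." error: Ollama validates the first word of each command line in the Modelfile, so the file has to begin with a line like `FROM ...`. Here is a minimal, self-contained sketch of that idea (the `write_meta_file` helper and the temp-directory demo are illustrative, not the exact main.py) that generates the Modelfile and sanity-checks its first line before you would run `ollama create`:

```python
import os
import tempfile

# Commands Ollama accepts at the start of a Modelfile line,
# per the error message quoted below.
VALID_COMMANDS = ("from", "license", "template", "system",
                  "adapter", "parameter", "message")

def write_meta_file(local_dir, file_path):
    # Template lines are plain strings so {{ ... }} stays literal.
    content = (
        f"FROM {file_path}\n"
        'TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>\n'
        "{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>\n"
        "{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>\n"
        '{{ .Response }}<|eot_id|>"""\n'
    )
    path = os.path.join(local_dir, "metafile.txt")
    with open(path, "w") as f:
        f.write(content)
    return path

with tempfile.TemporaryDirectory() as d:
    path = write_meta_file(d, "gguf/llama3.1_8b_chinese_chat_q4_k_m.gguf")
    with open(path) as f:
        first_line = f.readline().strip()
    # The first word must be a valid Modelfile command, e.g. FROM.
    print(first_line.split()[0].lower() in VALID_COMMANDS)  # True
```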
===============
The following was the error:
(olguff) Ubuntu@0136-ict-prxmx50056:~/olguff$ python3 main.py
Enter the Hugging Face model ID: shenzhi-wang/Llama3.1-8B-Chinese-Chat
Available files in the repository:
.gitattributes
README.md
config.json
gguf/llama3.1_8b_chinese_chat_f16.gguf
gguf/llama3.1_8b_chinese_chat_q4_k_m.gguf
gguf/llama3.1_8b_chinese_chat_q8_0.gguf
model-00001-of-00004.safetensors
model-00002-of-00004.safetensors
model-00003-of-00004.safetensors
model-00004-of-00004.safetensors
model.safetensors.index.json
special_tokens_map.json
tokenizer.json
tokenizer_config.json
Enter the number of the file you want to download: 5
File 'gguf/llama3.1_8b_chinese_chat_q4_k_m.gguf' already exists. Do you want to redownload it? (yes/no): no
Skipping download of 'gguf/llama3.1_8b_chinese_chat_q4_k_m.gguf'.
Enter the Ollama name for the model (default: gguf/llama3.1_8b_chinese_chat_q4_k_m): myllama
Do you want to proceed with the 'ollama create' command? (yes/no): yes
Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"
Model imported successfully!
In my case, the first line of the generated Modelfile is "failed to get console mode for stdout: The handle is invalid." Deleting that line and then rerunning the command to create a new model works.
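If others hit the same stray line, a small sketch of that cleanup step could look like the following (the `clean_meta_file` helper and the "metafile.txt" name are hypothetical, mirroring the script above; it just drops the console-mode error line before `ollama create` parses the file):

```python
import os
import tempfile

# The spurious Windows console error line that ends up as the
# first line of the generated Modelfile.
BAD_LINE = "failed to get console mode for stdout: The handle is invalid."

def clean_meta_file(meta_file_path):
    """Remove the stray console-mode error line from a Modelfile in place."""
    with open(meta_file_path) as f:
        lines = f.readlines()
    cleaned = [line for line in lines if line.strip() != BAD_LINE]
    with open(meta_file_path, "w") as f:
        f.writelines(cleaned)
    return cleaned

# Demo on a throwaway file that reproduces the broken Modelfile.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write(BAD_LINE + "\n")
    tmp.write("FROM gguf/llama3.1_8b_chinese_chat_q4_k_m.gguf\n")

cleaned = clean_meta_file(tmp.name)
print(cleaned[0].strip())  # the FROM line is now first
os.remove(tmp.name)
```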
PS. I am also doing a video on it for my channel https://www.youtube.com/@fahdmirza and it will be published soon. Thanks.