When I ran Llama3-8B-1.58-100B-tokens with TL1 on ARM, a malloc() error occurred. #143

Open
y-vectorfield opened this issue Dec 19, 2024

y-vectorfield commented Dec 19, 2024

Environment

  • CPU: NVIDIA Grace CPU (72 threads)
  • Model: Llama3-8B-1.58-100B-tokens with TL1
  • Prompt: AI is going to
  • N_Predict: 128
  • Threads: 1, 2, 4, 8, 16, 32, 64, 72
  • Context Size: 2048
  • Temperature: 0.8

When I ran Llama3-8B-1.58-100B-tokens with TL1 on ARM, a malloc() error occurred.

  • Error types
    • Type 1: malloc() error and no output
    • Type 2: malloc() error and correct output (generated text)
    • Type 3: double malloc() error and correct output (generated text)
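
For reference, each run used the following llama-cli invocation (the shell equivalent of the command lists captured in the error messages; only the -t value changed between runs):

    # -t was varied over 1, 2, 4, 8, 16, 32, 64, 72
    build/bin/llama-cli \
      -m /root/BitNet/models/Llama3-8B-1.58-100B-tokens/ggml-model-tl1.gguf \
      -n 128 -t 1 -p "AI is going to" \
      -ngl 0 -c 2048 --temp 0.8 -b 1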
Every run died with <Signals.SIGABRT: 6> ("Error occurred while running command"); the command was identical across runs except for the -t value, as shown above.

Threads   Error message                            Output          Error type
1         malloc(): invalid next size (unsorted)   Nothing         Type 1
2         free(): invalid next size (normal)       Generated text   Type 2
4         double free or corruption (!prev)        Generated text   Type 3
8         double free or corruption (!prev)        Generated text   Type 3
16        double free or corruption (!prev)        Generated text   Type 3
32        double free or corruption (!prev)        Generated text   Type 3
64        free(): invalid next size (normal)       Generated text   Type 2
72        double free or corruption (!prev)        Generated text   Type 3