This repository has been archived by the owner on Mar 30, 2024. It is now read-only.
llama-2-70B #17
Answered by chenhunghan
ranjanshivaji asked this question in Q&A
If these models don't use the GPU, how will performance be affected? Or are they the same as the official variant?
Answered by chenhunghan on Jul 28, 2023
Replies: 1 comment
There is some discussion on Reddit regarding this: https://www.reddit.com/r/LocalLLaMA/comments/12vo2rn/ggml/. Basically, the quantized GGML models use about 4x less RAM, but with lower quality.
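For a rough sense of where the "4x less RAM" figure comes from, here is a back-of-envelope sketch of weight memory for a 70B-parameter model. The bits-per-weight values are assumptions for illustration (actual GGML quantization formats vary, and this ignores the KV cache and runtime overhead):

```python
# Back-of-envelope estimate of model weight memory (assumed figures,
# ignoring KV cache and runtime overhead).
PARAMS = 70e9  # approximate llama-2-70B parameter count

def weight_gib(bits_per_param: float) -> float:
    """Approximate memory needed for the weights alone, in GiB."""
    return PARAMS * bits_per_param / 8 / 2**30

print(f"fp16 (official weights): ~{weight_gib(16):.0f} GiB")   # ~130 GiB
print(f"4-bit quantized (GGML) : ~{weight_gib(4.5):.0f} GiB")  # ~37 GiB, roughly 4x smaller
```

The quality loss comes from rounding each weight to the smaller number of bits; the Reddit thread linked above discusses how noticeable that is in practice for different quantization levels.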
0 replies
Answer selected by chenhunghan