Issues: EricLBuehler/mistral.rs
Labels: bug (Something isn't working) · new feature (New feature or request) · build (Issues relating to building mistral.rs) · models (Additions to model or architectures) · optimization · resolved

- Llama 3.2 interactive mode fails after second message (#1016, opened Dec 31, 2024 by dozingcat) [bug]
- BitNet support (#1013, opened Dec 28, 2024 by EricLBuehler) [new feature, optimization]
- Model cannot terminate itself when running single GGUF/GGML model (#1012, opened Dec 28, 2024 by guoqingbao) [bug]
- 0.3.4 #992 - #998 doesn't build (#999, opened Dec 20, 2024 by misureaudio) [bug, build]
- How do I finetune/train models with this? (#980, opened Dec 9, 2024 by Tameflame) [new feature]
- [Feature Request] -- EfficientQAT (Omniquant Successor) and/or ISTA-DASLab Higgs Quant. Models/Formatting (#977, opened Dec 7, 2024 by BuildBackBuehler) [new feature]
- create_ordering.py not supported with llama 3 loras (#976, opened Dec 7, 2024 by kkailaasa) [bug]
- fast-forward tokens with llguidance (#965, opened Dec 2, 2024 by mmoskal) [new feature]
- parallel computation of mask in constrained sampling (#964, opened Dec 2, 2024 by mmoskal) [new feature]
- Possible problem with candle 0.8.0 - doesn't build on a GTX1650 (CI 75) nor a GTX1070 (CI 61) (#954, opened Dec 1, 2024 by misureaudio) [bug, build]
- Create and load standalone quantized UQFF models (#947, opened Nov 29, 2024 by FishiaT) [new feature]
- DiffusionArchitecture not found in python package (#943, opened Nov 28, 2024 by Manojbhat09) [bug, build, resolved]
- Error: Enable to run Lora - Adapter files are empty (#929, opened Nov 23, 2024 by kkailaasa) [bug]
- Speculative decoding support for mistralrs-server (#912, opened Nov 16, 2024 by PkmX) [new feature]
- Tracking: Metal performance vs. MLX, llama.cpp (#903, opened Nov 10, 2024 by EricLBuehler) [optimization]
- Add gemma2 architecture support for GGUF (#901, opened Nov 9, 2024 by grpathak22) [models, new feature]
- How can I use two NVIDIA RTX 4090 GPUs with mistral.rs? (#892, opened Oct 28, 2024 by ricesin888) [new feature]
- Text Completion/Raw Input support? (#890, opened Oct 27, 2024 by oofdere) [new feature]
- Feature Request 「plz support InternLM2.5」 (#876, opened Oct 23, 2024 by boshallen) [new feature]