Update Llama.cpp Submodule to #9fb13f #1007
Conversation
The latest llama.cpp commit adds support for MoE models (see commit #799a1cb). This PR updates the connector to use the new llama.cpp files so that MoE models (such as Mixtral-8x7B-v0.1) can be used.
Update the Llama.cpp submodule to include commit #799a1cb, which adds support for MoE models such as Mixtral-8x7B-v0.1.
Hi @AuLaSW. I went through and tested your PR and it seems to work fine. I used
Does this also cover #1000? I read through it and it seems like it would.
I believe it does.
I love it!
Please merge it! 🙏
@AuLaSW thank you for this! I've merged the latest llama.cpp release into main and published a new release (v0.2.23) to PyPI.
Tested the new release, seems good.
This pull request is small and simple: it updates the Llama.cpp submodule to #9fb13f, which includes support for MoE models (such as the new Mixtral-8x7B-v0.1 released yesterday). I have tested this on WSL, and it works with TheBloke's quantized version of that model.
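For reference, a minimal sketch of how the updated bindings could be exercised against a quantized Mixtral GGUF file; the model path, filename, and parameter values below are assumptions, so adjust them to match the file you actually downloaded:

```python
from llama_cpp import Llama

# Hypothetical local path to a quantized Mixtral GGUF file from TheBloke;
# substitute whatever quantization level you downloaded.
MODEL_PATH = "./models/mixtral-8x7b-v0.1.Q4_K_M.gguf"

# Load the model; n_gpu_layers=0 keeps everything on the CPU,
# which matches a plain WSL setup without GPU offloading.
llm = Llama(
    model_path=MODEL_PATH,
    n_ctx=2048,       # context window for the test prompt
    n_gpu_layers=0,   # raise this if you built with GPU offload support
    verbose=False,
)

# Simple completion to confirm the MoE model loads and generates text.
output = llm("Q: What is a mixture-of-experts model? A:", max_tokens=64)
print(output["choices"][0]["text"])
```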