Tags: JamePeng/llama-cpp-python
v0.3.14-cu126-AVX2-win-20250724: Update Submodule vendor/llama.cpp f0d4d17..a86f52b
v0.3.14-cu126-AVX2-linux-20250723: Update Submodule vendor/llama.cpp f0d4d17..a86f52b
v0.3.14-cu124-AVX2-win-20250724: Update Submodule vendor/llama.cpp f0d4d17..a86f52b
v0.3.14-cu124-AVX2-linux-20250723: Update Submodule vendor/llama.cpp f0d4d17..a86f52b
v0.3.13-cu126-AVX2-win-20250717: fix memory_seq_rm crash bug
v0.3.13-cu126-AVX2-linux-20250717: fix memory_seq_rm crash bug
v0.3.13-cu124-AVX2-win-20250717: fix memory_seq_rm crash bug
v0.3.13-cu124-AVX2-linux-20250717: fix memory_seq_rm crash bug
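memory_seq_rm refers to the bindings' wrapper around llama.cpp's routine that evicts a range of cached positions for one sequence; the high-level Llama class reaches that path when a new prompt shares only a prefix with what is already cached. A minimal sketch of a call pattern that exercises it, using only the public llama-cpp-python API; the model path is a placeholder, and the wrapper name is taken solely from the commit message:

    # Hedged sketch: drive the KV-memory eviction path (memory_seq_rm) that the
    # v0.3.13 tags describe as fixed. models/model.gguf is a placeholder path.
    from llama_cpp import Llama

    llm = Llama(model_path="models/model.gguf", verbose=False)

    # First completion fills the KV memory for sequence 0.
    llm("The quick brown fox", max_tokens=8)

    # This prompt shares only a prefix with the cached tokens, so the bindings
    # must drop the stale cached positions before decoding resumes.
    llm("The quick red fox", max_tokens=8)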
v0.3.12-cu126-AVX2-win-20250714: try to use the logit_bias instead of logit_processors in test_llama
v0.3.12-cu126-AVX2-linux-20250714: try to use the logit_bias instead of logit_processors in test_llama
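The v0.3.12 change swaps a custom logits processor for the declarative logit_bias parameter in the test suite. Both parameters exist on create_completion in upstream llama-cpp-python (logits_processor takes callables over the score array; logit_bias maps a token id to an additive bias). A sketch of the two equivalent spellings, with a placeholder model path, prompt, and token:

    # Hedged sketch: the same per-token bias expressed two ways, mirroring the
    # v0.3.12 commit message. models/model.gguf and the prompt are placeholders.
    from llama_cpp import Llama, LogitsProcessorList

    llm = Llama(model_path="models/model.gguf", verbose=False)
    token_id = llm.tokenize(b" yes", add_bos=False)[0]  # token to favor

    # Processor form: a callable that edits the logits on every sampling step.
    def favor_token(input_ids, scores):
        scores[token_id] += 10.0
        return scores

    out_a = llm("Up or down? Answer:", max_tokens=4,
                logits_processor=LogitsProcessorList([favor_token]))

    # logit_bias form: the same additive adjustment, stated declaratively.
    out_b = llm("Up or down? Answer:", max_tokens=4,
                logit_bias={token_id: 10.0})
    print(out_a["choices"][0]["text"], out_b["choices"][0]["text"])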