
Commit b121b7c

Update docstring
1 parent 206efa3 commit b121b7c

File tree

1 file changed: +6 −6 lines changed


llama_cpp/llama.py

Lines changed: 6 additions & 6 deletions
@@ -24,13 +24,13 @@ def __init__(
         """Load a llama.cpp model from `model_path`.

         Args:
-            model_path: Path to the model directory.
-            n_ctx: Number of tokens to keep in memory.
+            model_path: Path to the model.
+            n_ctx: Maximum context size.
             n_parts: Number of parts to split the model into. If -1, the number of parts is automatically determined.
-            seed: Random seed.
-            f16_kv: Use half-precision for key/value matrices.
-            logits_all: Return logits for all tokens, not just the vocabulary.
-            vocab_only: Only use tokens in the vocabulary.
+            seed: Random seed. 0 for random.
+            f16_kv: Use half-precision for key/value cache.
+            logits_all: Return logits for all tokens, not just the last token.
+            vocab_only: Only load the vocabulary no weights.
             n_threads: Number of threads to use. If None, the number of threads is automatically determined.

         Raises:
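The updated docstring documents a few sentinel conventions: `seed == 0` means "pick a random seed", and `n_threads is None` means "determine the thread count automatically". As a hedged illustration only — this helper is hypothetical and not part of `llama_cpp` — the resolution logic those conventions imply could be sketched as:

```python
import os
import random

# Hypothetical helper (not in llama_cpp) illustrating the sentinel
# conventions from the docstring above: seed == 0 -> choose a random
# seed; n_threads is None -> use the machine's CPU count.
def resolve_init_params(seed=0, n_threads=None):
    resolved_seed = seed if seed != 0 else random.randrange(1, 2**31)
    resolved_threads = n_threads if n_threads is not None else os.cpu_count()
    return resolved_seed, resolved_threads
```

In actual use, values resolved this way are what `Llama.__init__` would hand down to the llama.cpp backend alongside the other keyword arguments (`n_ctx`, `f16_kv`, `logits_all`, `vocab_only`, ...).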
