The misc/llama.cpp port
llama-cpp-0.0.4706 – LLM inference system
Description
Inference of Meta's LLaMA model (and others) in pure C/C++ with
minimal setup and state-of-the-art performance on a wide range
of hardware
WWW: https://github.com/ggerganov/llama.cpp
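The port packages the llama.cpp library and tools. As a rough illustration of what the library provides, the sketch below loads a GGUF model and creates an inference context through the C API in llama.h. It assumes the port installs that header and a libllama to link against; the function names follow upstream llama.cpp and have shifted between releases (several are kept only as deprecated aliases), so treat this as an outline rather than the port's documented interface.

    /* Minimal sketch of using libllama from this port.
     * Assumptions: llama.h and -lllama are installed by the package;
     * API names track upstream llama.cpp and may differ between releases.
     * Build roughly as: cc demo.c -lllama */
    #include <stdio.h>
    #include "llama.h"

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
            return 1;
        }

        llama_backend_init();   /* initialize ggml backends */

        /* load the model file with default parameters */
        struct llama_model_params mp = llama_model_default_params();
        struct llama_model *model = llama_load_model_from_file(argv[1], mp);
        if (model == NULL) {
            fprintf(stderr, "failed to load %s\n", argv[1]);
            return 1;
        }

        /* create an inference context over the loaded model */
        struct llama_context_params cp = llama_context_default_params();
        struct llama_context *ctx = llama_new_context_with_model(model, cp);
        if (ctx == NULL) {
            fprintf(stderr, "failed to create context\n");
            llama_free_model(model);
            return 1;
        }

        printf("context size: %u tokens\n", llama_n_ctx(ctx));

        llama_free(ctx);
        llama_free_model(model);
        llama_backend_free();
        return 0;
    }

From here, a real program would tokenize a prompt and run llama_decode in a loop; the upstream examples shipped with llama.cpp show the current form of that API.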
Only for arches
aarch64
alpha
amd64
mips64
mips64el
powerpc64
riscv64
sparc64
Categories
misc
Library dependencies
Build dependencies
Run dependencies