The misc/llama.cpp port
llama.cpp-0.0.6641 – LLM inference system
Description
Inference of Meta's LLaMA model (and others) in pure C/C++ with
minimal setup and state-of-the-art performance on a wide range
of hardware.
WWW: https://github.com/ggml-org/llama.cpp
Only for arches
aarch64
alpha
amd64
arm
hppa
i386
mips64
mips64el
powerpc
powerpc64
riscv64
sparc64
Categories
misc
Library dependencies
Build dependencies
Run dependencies