LocalAI provides a variety of images to support different environments. These images are available on quay.io and Docker Hub. All-in-One (AIO) images come with a pre-configured set of models and backends, while standard images ship with no models pre-configured or installed. For GPU acceleration on Nvidia graphics cards, use the Nvidia/CUDA images; if you don't have a GPU, use the CPU images. If you have AMD hardware or Apple Silicon, see the build section.
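A minimal sketch of picking and running an image based on available hardware. The tag names here (`latest` for CPU, `latest-gpu-nvidia-cuda-12` for CUDA) are assumptions that may change between releases; check quay.io or Docker Hub for the current list.

```shell
# Pick an image tag for this machine (tags are illustrative; verify
# the current ones on quay.io or Docker Hub before pulling).
if command -v nvidia-smi >/dev/null 2>&1; then
  IMAGE="localai/localai:latest-gpu-nvidia-cuda-12"  # CUDA-enabled image
else
  IMAGE="localai/localai:latest"                     # CPU-only image
fi
echo "Selected image: $IMAGE"

# Pull and run, exposing the API on port 8080:
# docker pull "$IMAGE"
# docker run -p 8080:8080 --name local-ai -ti "$IMAGE"
```

The AIO variants (for example `localai/localai:latest-aio-cpu`) follow the same pattern but come with models preinstalled.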
Build
LocalAI can be built as a container image or as a single, portable binary. Note that some model architectures might require Python libraries, which are not included in the binary; the binary contains only the core backends written in Go and C++. LocalAI's extensible architecture allows you to add your own backends, which can be written in any language, and for that reason the container images also contain the Python dependencies needed to run all the available backends.
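The two build paths above can be sketched as follows. The exact make targets and prerequisites are assumptions; consult the build documentation for the current ones.

```shell
# Illustrative build plan for the two artifacts described above
# (commands are a sketch; check the LocalAI build docs for current
# targets and toolchain prerequisites).
PLAN=$(cat <<'EOF'
# Single portable binary (requires Go and a C/C++ toolchain):
git clone https://github.com/mudler/LocalAI
cd LocalAI
make build            # produces the local-ai binary

# Container image (also bundles the Python backend dependencies):
docker build -t local-ai .
EOF
)
echo "$PLAN"
```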
The model gallery is a curated collection of model configurations for LocalAI that enables one-click installation of models directly from the LocalAI web interface. The available models can also be browsed at the Public LocalAI Gallery.
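Besides the web interface, gallery models have historically been installable over the HTTP API. The endpoint path, payload shape, and model id below are assumptions that may not match every LocalAI release; verify them against your version's API documentation.

```shell
# Hedged example of a gallery install request (endpoint and model id
# are illustrative; confirm against your LocalAI version's API docs).
REQUEST='{"id": "model-gallery@bert-embeddings"}'
echo "POST /models/apply with body: $REQUEST"

# curl http://localhost:8080/models/apply \
#   -H "Content-Type: application/json" \
#   -d "$REQUEST"
```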