> ⚠️ Section under construction

This section contains instructions on how to use LocalAI with GPU acceleration. ⚡ Acceleration for AMD or Metal hardware is still in development; for additional details, see the build section.

### Automatic Backend Detection

When you install a model from the gallery (or a YAML file), LocalAI detects the required backend and your system's capabilities, then downloads the correct version for you. Whether you're running on a standard CPU or an NVIDIA GPU, the appropriate backend is selected automatically.
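When you want to control GPU offloading explicitly rather than rely on detection, a model YAML can set it per model. The following is a minimal sketch, assuming a llama.cpp-style backend; the model name and GGUF filename are placeholders:

```yaml
name: my-gpu-model                 # arbitrary name the API will expose
parameters:
  model: my-model.Q4_K_M.gguf     # hypothetical model file in your models directory
f16: true                          # use 16-bit floats on the GPU
gpu_layers: 35                     # number of layers to offload to the GPU
```

Raising `gpu_layers` moves more of the model onto the GPU; how many layers fit depends on the model size and your available VRAM.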
LocalAI supports the OpenAI functions and tools API with llama.cpp-compatible models. To learn more about OpenAI functions, see also the OpenAI API blog post.
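Because LocalAI exposes an OpenAI-compatible API, tool definitions use the standard OpenAI `tools` schema. The sketch below builds the JSON body you would POST to `/v1/chat/completions`; the model name and the `get_weather` function are hypothetical examples, not part of LocalAI itself:

```python
import json

# Request body in the OpenAI chat-completions format, with one tool declared.
payload = {
    "model": "my-model",  # placeholder: any installed llama.cpp-compatible model
    "messages": [
        {"role": "user", "content": "What's the weather like in Rome?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

# POST this body (Content-Type: application/json) to your LocalAI instance,
# e.g. http://localhost:8080/v1/chat/completions
body = json.dumps(payload)
```

If the model decides to call the tool, the response's `choices[0].message` contains a `tool_calls` entry with the function name and JSON-encoded arguments, exactly as in the OpenAI API.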
LocalAI supports generating embeddings for text or lists of tokens. For the API documentation, refer to the OpenAI docs: https://platform.openai.com/docs/api-reference/embeddings

### Model compatibility

The embedding endpoint is compatible with llama.cpp models, bert.cpp models, and sentence-transformers models available on Hugging Face.
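The embedding endpoint follows the OpenAI request shape, so `input` may be a single string or a list. A minimal sketch of the body you would POST to `/v1/embeddings` on a running LocalAI instance (the model name is a placeholder for whichever embedding-capable model you installed):

```python
import json

# Request body for the OpenAI-compatible /v1/embeddings endpoint.
payload = {
    "model": "my-embedding-model",          # placeholder model name
    "input": ["first document", "second document"],  # a string also works
}

# POST this (Content-Type: application/json) to e.g.
# http://localhost:8080/v1/embeddings; the response "data" field holds one
# {"object": "embedding", "index": i, "embedding": [...]} entry per input.
body = json.dumps(payload)
```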