LocalAI

Let's load the LocalAI Embedding class. To use it, you need a LocalAI service hosted somewhere with the embedding models configured. See the documentation at https://localai.io/basics/getting_started/index.html and https://localai.io/features/embeddings/index.html.

from langchain_community.embeddings import LocalAIEmbeddings
embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="embedding-model-name"
)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
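
Both calls return plain Python lists: embed_query produces a single embedding vector, and embed_documents produces one vector per input document. As a quick sanity check (assuming the service above is reachable; the vector length depends on the model):

print(len(query_result))  # dimensionality of the embedding vector
print(len(doc_result), len(doc_result[0]))  # one document, same dimensionality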

Let's load the LocalAI Embedding class with first-generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: these first-generation models are not recommended.

from langchain_community.embeddings import LocalAIEmbeddings
embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="text-search-ada-doc-001"
)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])

import os

# If you are behind an explicit proxy, set the OPENAI_PROXY environment
# variable so requests are routed through it.
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"

Limitations

langchain_community.embeddings.LocalAIEmbeddings has two issues:

  • it depends on OpenAI SDK v0, which is outdated, and
  • it requests document embeddings one at a time in embed_documents instead of batching them into a single request.

langchain-localai is a separate integration package provided to resolve these issues:
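
A minimal sketch of switching over (assuming the package is installed with pip install langchain-localai, and that its class keeps the same constructor arguments as the community version):

from langchain_localai import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="embedding-model-name"
)

# Unlike the community class, embed_documents sends the texts
# to the LocalAI service in a single batched request.
doc_result = embeddings.embed_documents(["First document.", "Second document."])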

