Embed, store, and search vectors on the client side!

Implement semantic search with only 5 lines of code

Compute embeddings on the client side that outperform OpenAI's text-embedding-ada-002

Search up to 100K vectors in under 100 ms, with no network latency

Scale with our embedding API, pay only $20/mo to embed, store, and search 10M vectors

  // npm i client-vector-search
  import { getEmbedding, EmbeddingIndex } from 'client-vector-search';

  // compute the embeddings of your data
  const initialObjects = [
    { id: 1, name: "Apple", embedding: await getEmbedding("Apple") },
  ]; // scales to ~100K embeddings

  // create an index
  const index = new EmbeddingIndex(initialObjects);

  // compute the embedding of your query
  const queryEmbedding = await getEmbedding("pear");

  // search the index
  const results = await index.search(queryEmbedding, { topK: 5 });

  // THAT'S IT!
  // you can also save these indexes and use our APIs to scale!


To see how fast embedding is, paste in a long article and compute its embeddings. To see how fast search is, load a Wikipedia dataset and search through it.
You can find the dataset we use here.
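For intuition on why client-side search stays fast at this scale, here is a minimal sketch of brute-force top-K cosine-similarity search over an in-memory array. This is an illustration of the general technique, not the library's actual implementation; the `Item` shape and function names below are made up for the example.

```typescript
// Illustrative sketch: brute-force top-K cosine-similarity search.
// Not the library's internals; names here are hypothetical.

interface Item {
  id: number;
  name: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every item against the query, sort descending, keep the top k.
function topK(items: Item[], query: number[], k: number): Item[] {
  return items
    .map((item) => ({ item, score: cosineSimilarity(item.embedding, query) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((r) => r.item);
}
```

A single linear scan like this is O(n · d) for n vectors of dimension d, which is well within budget for ~100K small vectors on a modern machine; no server round-trip means no network latency on top of it.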