text-embeddings-inference documentation

Using TEI locally with CPU

You can install text-embeddings-inference locally to run it on your own machine. Here are step-by-step installation instructions:

Step 1: Install Rust

Install Rust on your machine by running the following command in your terminal, then follow the on-screen instructions:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
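
To confirm that Rust installed correctly, you can check the toolchain versions (both binaries ship with a standard rustup installation; you may need to restart your shell first):

rustc --version
cargo --version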

Step 2: Install necessary packages

Depending on your machine’s architecture, run one of the following commands:

For x86 Machines

cargo install --path router -F mkl

For M1 or M2 Machines

cargo install --path router -F metal
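
Note that both commands use --path router, a path relative to your current directory, so they assume you are at the root of a local clone of the text-embeddings-inference repository. If you have not cloned it yet, do so first:

git clone https://github.com/huggingface/text-embeddings-inference
cd text-embeddings-inference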

Step 3: Launch Text Embeddings Inference

Once the installation has completed successfully, you can launch Text Embeddings Inference on CPU with the following command:

model=BAAI/bge-large-en-v1.5
revision=refs/pr/5

text-embeddings-router --model-id $model --revision $revision --port 8080
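
To see the full list of arguments the router accepts, you can run:

text-embeddings-router --help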

In some cases, the build may also require the OpenSSL development libraries and gcc. On Debian-based Linux machines (such as Ubuntu), install them with:

sudo apt-get install libssl-dev gcc -y

Now you are ready to use text-embeddings-inference locally on your machine. If you want to run TEI locally with a GPU, check out the Using TEI locally with GPU page.
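
As a quick test, you can request embeddings from the running server over HTTP. The example below assumes the default /embed route and the port 8080 set in the launch command above:

curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'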
