ggrn RobSteele committed on
Commit 9cf5b2b
1 Parent(s): ea8160f

Update README.md (#4)


- Update README.md (2a8fedcdf0e6e96d6728e1e7dd59bc4f8ed7f59e)


Co-authored-by: Rob Steele <RobSteele@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -12,7 +12,7 @@ _Fork of https://huggingface.co/thenlper/gte-small with ONNX weights to be compa
 
 # gte-small
 
-Gegeral Text Embeddings (GTE) model.
+General Text Embeddings (GTE) model.
 
 The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc.
 
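For context, the fork described in the hunk header exists to ship ONNX weights usable from Transformers.js. Below is a minimal sketch of computing a sentence embedding with the Transformers.js `pipeline` API; the model ID `your-org/gte-small` is a placeholder for this fork's repository ID, not a name taken from the commit.

```js
// Minimal sketch, assuming this fork's ONNX weights load via Transformers.js.
// "your-org/gte-small" is a placeholder model ID, not the actual repository name.
import { pipeline } from '@xenova/transformers';

// Build a feature-extraction pipeline backed by the ONNX model.
const extractor = await pipeline('feature-extraction', 'your-org/gte-small');

// Mean-pool and L2-normalize the token embeddings to get one sentence vector,
// which can then be used for retrieval, similarity, or reranking.
const output = await extractor('what is the capital of China?', {
  pooling: 'mean',
  normalize: true,
});

console.log(output.data.length); // gte-small produces 384-dimensional embeddings
```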