---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# KDHyun08/TAACO_STS
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('KDHyun08/TAACO_STS')
embeddings = model.encode(sentences)
print(embeddings)
```
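Since this is a similarity model, a common next step is to compare the embeddings directly. A small sketch using the `util.pytorch_cos_sim` helper from sentence-transformers (the same helper used in the semantic-search example below):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('KDHyun08/TAACO_STS')

sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)

# Cosine similarity between the two 768-dimensional embeddings
score = util.pytorch_cos_sim(embeddings[0], embeddings[1])
print(score)  # a 1x1 tensor holding the similarity value
```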
## Usage (Semantic Search)
You can also use the model for semantic search: encode a set of documents and a query, then rank the documents by the cosine similarity between their embeddings and the query embedding.
```python
from sentence_transformers import SentenceTransformer, util
import torch

model = SentenceTransformer("KDHyun08/TAACO_STS")

docs = [
    "Yesterday was my wife's birthday",
    "To celebrate her birthday, I decided to make breakfast and started preparing food at 8:30 in the morning. The main menu was steak, stir-fried octopus, seaweed soup, japchae, and soya",
    "Steak is a dish I cook often, so I wanted to prepare it myself",
    "Flip it three times, one minute per side, rest it well, and you get a juicy steak",
    "My wife loves that kind of steak. But then something completely unexpected happened",
    "I usually buy unseasoned meat for steak, but this time I bought a pre-seasoned cut",
    "I did not notice the desiccant packet inside the case and put it on the frying pan together with the meat",
    "Still unaware... I seared the front over high heat for a minute, and the moment I flipped it I saw the desiccant had been grilled along with it",
    "It was my wife's birthday and I wanted to grill it perfectly, but this absurd situation happened",
    "The desiccant had melted over the high heat and ran down like water",
    "I thought it over. I considered cutting away only the part the desiccant had touched and grilling the rest, but the packet said never to eat it, so reluctantly I threw it away",
    "It was such a shame",
    "I had wanted to prepare the steak my wife loves early in the morning and watch her enjoy it, but this completely unforeseen situation came up... Still, I pulled myself together and switched to another menu right away",
    "Soya, sausage and vegetable stir-fry..",
    "I was not sure whether my wife would like it, but seeing the hot-dog sausages in the fridge I knew right away I should make soya. The dish turned out a success",
    "My wife's 40th birthday was prepared successfully",
    "I was grateful to my wife for enjoying the meal",
    "Every year on my wife's birthday I should make her a birthday breakfast. I hope today is another happy day",
    "Because it's her birthday~",
]

# Encode each document into a dense vector
document_embeddings = model.encode(docs)

query = "To celebrate her birthday, I decided to make breakfast and started preparing food at 8:30 in the morning"
query_embedding = model.encode(query)

top_k = min(10, len(docs))

# Compute the cosine similarity between the query and every document
cos_scores = util.pytorch_cos_sim(query_embedding, document_embeddings)[0]

# Pick the sentences with the highest cosine similarity
top_results = torch.topk(cos_scores, k=top_k)

print(f"Input sentence: {query}")
print(f"\n<Top {top_k} sentences most similar to the input>\n")

for i, (score, idx) in enumerate(zip(top_results[0], top_results[1])):
    print(f"{i + 1}: {docs[idx]} (similarity: {score:.4f})\n")
```
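## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. Below is a minimal sketch following the standard mean-pooling recipe, which matches the Pooling module listed under Full Model Architecture:
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling: average the token embeddings, weighted by the attention mask
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ["This is an example sentence", "Each sentence is converted"]

tokenizer = AutoTokenizer.from_pretrained("KDHyun08/TAACO_STS")
model = AutoModel.from_pretrained("KDHyun08/TAACO_STS")

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    model_output = model(**encoded_input)

sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
print("Sentence embeddings:")
print(sentence_embeddings)
```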
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=KDHyun08/TAACO_STS)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 142 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
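For reference, a DataLoader of length 142 with batch size 32 implies roughly 4,500 training pairs. Put together, the training call presumably looked something like the sketch below; the training pairs and the starting checkpoint are placeholders, since neither is documented here:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Placeholder: the base checkpoint this model was fine-tuned from is not documented
model = SentenceTransformer("some-base-checkpoint")

# Placeholder STS-style pairs with similarity labels in [0, 1]
train_examples = [
    InputExample(texts=["sentence A", "sentence B"], label=0.8),
    InputExample(texts=["sentence C", "sentence D"], label=0.3),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)

# Hyperparameters taken from the dump above; WarmupLinear and AdamW are the defaults
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    evaluation_steps=1000,
)
```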
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
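The same details can be checked programmatically after loading the model; a small sketch:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("KDHyun08/TAACO_STS")
print(model)                                     # the module list shown above
print(model.max_seq_length)                      # expected: 512
print(model.get_sentence_embedding_dimension())  # expected: 768
```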
## Citing & Authors