KDHyun08 committed
Commit a7949b9
1 Parent(s): 1b1edd9

Upload with huggingface_hub

Files changed (4):
  1. README.md +6 -9
  2. config.json +1 -1
  3. pytorch_model.bin +1 -1
  4. tokenizer_config.json +1 -1
README.md CHANGED
````diff
@@ -2,16 +2,14 @@
 pipeline_tag: sentence-similarity
 tags:
 - sentence-transformers
+- feature-extraction
 - sentence-similarity
 - transformers
-- TAACO
-
-language: ko
 ---
 
-# TAACO_Sentence_Similarity
+# {MODEL_NAME}
 
-This is a Sentence_Similarity of TAACO with [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
+This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
 <!--- Describe your model here -->
 
@@ -55,9 +53,8 @@ def mean_pooling(model_output, attention_mask):
 sentences = ['This is an example sentence', 'Each sentence is converted']
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained("KDHyun08/TAACO_STS")
-model = AutoModel.from_pretrained("KDHyun08/TAACO_STS")
-
+tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
+model = AutoModel.from_pretrained('{MODEL_NAME}')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -87,7 +84,7 @@ The model was trained with the parameters:
 
 **DataLoader**:
 
-`torch.utils.data.dataloader.DataLoader` of length 365 with parameters:
+`torch.utils.data.dataloader.DataLoader` of length 142 with parameters:
 ```
 {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
 ```
````
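For context, the usage hunk above shows only the changed lines of the README's standard sentence-transformers (Transformers) snippet, built around the `mean_pooling` helper visible in the hunk header. Below is a minimal runnable sketch of that snippet, assuming the published repo id `KDHyun08/TAACO_STS` (the concrete id this commit swaps out for the `{MODEL_NAME}` template placeholder):

```python
# Minimal sketch of the README's usage pattern (standard sentence-transformers
# template). The repo id is taken from the lines this commit removed.
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Mean of token embeddings, ignoring padded positions.
    token_embeddings = model_output[0]  # last_hidden_state
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

sentences = ['This is an example sentence', 'Each sentence is converted']

tokenizer = AutoTokenizer.from_pretrained('KDHyun08/TAACO_STS')
model = AutoModel.from_pretrained('KDHyun08/TAACO_STS')

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)

sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```

Note also the DataLoader hunk: going from 365 to 142 batches at `batch_size: 32` implies the training set shrank from at most 11,680 to at most 4,544 examples, so this commit changed the training data as well as the weights.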
config.json CHANGED
```diff
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "klue/bert-base",
+  "_name_or_path": "C:\\Users\\DESKTOP/.cache\\torch\\sentence_transformers\\KDHyun08_TAACO_STS\\",
   "architectures": [
     "BertModel"
   ],
```
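The only functional content here, the `architectures` block, is untouched; what changed is `_name_or_path`, a provenance field that the `huggingface_hub` upload rewrote from `klue/bert-base` to the local Windows cache directory the files were pushed from. It does not affect loading from the Hub. A quick sanity check, again assuming the `KDHyun08/TAACO_STS` repo id:

```python
# Inspect the uploaded config; architecture and sizes are unchanged, only the
# provenance field _name_or_path was rewritten by the re-upload.
from transformers import AutoConfig

config = AutoConfig.from_pretrained('KDHyun08/TAACO_STS')
print(config.architectures)  # ['BertModel']
print(config.hidden_size)    # 768, the embedding width the README advertises
```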
pytorch_model.bin CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:692467d733c2b6171f65825fb19735984090afc43bea3b8b5f4829e8a83f5242
+oid sha256:fc7bbf7951004b83a5907a98ce803a92b540f1a522e429aadb7b56fa079da210
 size 442543599
```
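`pytorch_model.bin` is stored through Git LFS, so the diff touches only the pointer file: the byte size (442543599) is identical while the sha256 oid changes, meaning the weight tensors were replaced while the architecture stayed the same. A small sketch for verifying a downloaded copy against the new oid (the local file path is an assumption):

```python
# Verify a local pytorch_model.bin against the sha256 oid from the LFS pointer.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

expected = 'fc7bbf7951004b83a5907a98ce803a92b540f1a522e429aadb7b56fa079da210'
assert sha256_of('pytorch_model.bin') == expected
```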
tokenizer_config.json CHANGED
```diff
@@ -1 +1 @@
-{"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "do_basic_tokenize": true, "never_split": null, "model_max_length": 512, "special_tokens_map_file": "C:\\Users\\DESKTOP/.cache\\huggingface\\transformers\\aeaaa3afd086a040be912f92ffe7b5f85008b744624f4517c4216bcc32b51cf0.054ece8d16bd524c8a00f0e8a976c00d5de22a755ffb79e353ee2954d9289e26", "name_or_path": "klue/bert-base", "tokenizer_class": "BertTokenizer"}
+{"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "do_basic_tokenize": true, "never_split": null, "model_max_length": 512, "special_tokens_map_file": "C:\\Users\\DESKTOP/.cache\\huggingface\\transformers\\aeaaa3afd086a040be912f92ffe7b5f85008b744624f4517c4216bcc32b51cf0.054ece8d16bd524c8a00f0e8a976c00d5de22a755ffb79e353ee2954d9289e26", "name_or_path": "C:\\Users\\DESKTOP/.cache\\torch\\sentence_transformers\\KDHyun08_TAACO_STS\\", "tokenizer_class": "BertTokenizer"}
```