---
license: mit
language:
- en
tags:
- telecom
task_categories:
- text-generation
configs:
- config_name: default
data_files:
- split: data
path: Tele-Eval.jsonl
---
# Tele-Data
## Dataset Summary
Tele-Data is a comprehensive dataset of telecommunications material that revolves around four categories of sources: (1) scientific papers from arXiv, (2) 3GPP standards, (3) Wikipedia articles related to telecommunications, and (4) telecommunications-related websites extracted from Common Crawl dumps.
LLM-based filtering was used to identify the relevant material from these sources, which then underwent extensive cleaning, format unification, and standardization of equation formatting. The dataset consists of approximately 2.5 billion tokens, making it well suited for continually pretraining language models to adapt them to the telecommunications domain.
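As a rough illustration of this use case, the snippet below sketches how the corpus could be tokenized for continual pretraining with the Hugging Face `datasets` and `transformers` libraries. The tokenizer name (`gpt2`) and `max_length` are placeholders rather than recommendations, and the split name follows the `data` split declared in the YAML header above.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder tokenizer; swap in whichever base model you plan to adapt.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Streaming avoids materializing the full ~2.5B-token corpus in memory.
tele_data = load_dataset("AliMaatouk/Tele-Data", split="data", streaming=True)

def tokenize(batch):
    # Only the raw text in `Content` is needed for language-model pretraining.
    return tokenizer(batch["Content"], truncation=True, max_length=1024)

tokenized = tele_data.map(
    tokenize,
    batched=True,
    remove_columns=["ID", "Category", "Content", "Metadata"],
)
```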
## Dataset Structure
### Data Fields
The data fields are as follows:
* `ID`: Provides a unique identifier for each data sample.
* `Category`: Identifies the category of the sample.
* `Content`: Includes the full text of the data sample.
* `Metadata`: Includes a JSON object, cast as a string, with information relevant to each data sample, which varies depending on the category.
### Data Instances
An example of Tele-Data looks as follows:
```json
{
    "ID": "standard_2413",
    "Category": "standard",
    "Content": "3rd Generation Partnership Project; \n Technical Specification Group Core Network and Terminals;\n Interworking between the Public Land Mobile Network (PLMN)\n supporting packet based services with\n Wireless Local Area Network (WLAN) Access and\n Packet Data Networks (PDN)\n (Release 12)\n Foreword\n This Technical Specification (TS) has been produced...",
    "Metadata": "{\"Series\": \"29\", \"Release\": \"12\", \"File_name\": \"29161-c00\"}"
}
```
## Sample Code
Below, we share a code snippet showing how to get started quickly with the dataset. First, make sure to `pip install datasets`, then copy the snippet below and adapt it to your use case.
#### Using the whole dataset
```python
import json
from datasets import load_dataset

# Load the full dataset; it exposes a single "data" split.
Tele_Data = load_dataset("AliMaatouk/Tele-Data")
data_sample = Tele_Data['data'][0]

# Field names are capitalized, as described in the Data Fields section.
print(f"ID: {data_sample['ID']}\nCategory: {data_sample['Category']}\nContent: {data_sample['Content']}")

# `Metadata` is a JSON object stored as a string, so parse it before use.
for key, value in json.loads(data_sample['Metadata']).items():
    print(f"{key}: {value}")
```
#### Using a subset of the dataset
```python
import json
from datasets import load_dataset

# Load only the 3GPP-standards portion of the dataset.
Tele_Data = load_dataset("AliMaatouk/Tele-Data", name="standard")
data_sample = Tele_Data['data'][0]

print(f"ID: {data_sample['ID']}\nCategory: {data_sample['Category']}\nContent: {data_sample['Content']}")

# `Metadata` is a JSON object stored as a string, so parse it before use.
for key, value in json.loads(data_sample['Metadata']).items():
    print(f"{key}: {value}")
```
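If you prefer to start from the full dataset, a subset can also be obtained by filtering on the `Category` field. The snippet below is a sketch of this approach; the category label `"standard"` is taken from the example instance shown earlier.

```python
from datasets import load_dataset

# Load the full corpus, then keep only the 3GPP-standards samples.
Tele_Data = load_dataset("AliMaatouk/Tele-Data")
standards_only = Tele_Data['data'].filter(lambda sample: sample['Category'] == 'standard')
print(standards_only)
```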
## Citation
You can find the paper with all details about the dataset at https://arxiv.org/abs/2409.05314. Please cite it as follows:
```
@misc{maatouk2024telellmsseriesspecializedlarge,
title={Tele-LLMs: A Series of Specialized Large Language Models for Telecommunications},
author={Ali Maatouk and Kenny Chirino Ampudia and Rex Ying and Leandros Tassiulas},
year={2024},
eprint={2409.05314},
archivePrefix={arXiv},
primaryClass={cs.IT},
url={https://arxiv.org/abs/2409.05314},
}
```