---
license: cc-by-4.0
task_categories:
- time-series-forecasting
pretty_name: cloud
size_categories:
- 100M<n<1B
---
# Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain
[Paper](https://arxiv.org/abs/2310.05063) | [Code](https://github.com/SalesforceAIResearch/pretrain-time-series-cloudops)
Datasets accompanying the paper "Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain".
## Quick Start
```bash
pip install datasets==2.12.0 fsspec==2023.5.0
```
### azure_vm_traces_2017
```python
from datasets import load_dataset
dataset = load_dataset('Salesforce/cloudops_tsf', 'azure_vm_traces_2017')
print(dataset)
DatasetDict({
train_test: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real'],
num_rows: 17568
})
pretrain: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real'],
num_rows: 159472
})
})
```
### borg_cluster_data_2011
```python
dataset = load_dataset('Salesforce/cloudops_tsf', 'borg_cluster_data_2011')
print(dataset)
DatasetDict({
train_test: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 11117
})
pretrain: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 143386
})
})
```
### alibaba_cluster_trace_2018
```python
dataset = load_dataset('Salesforce/cloudops_tsf', 'alibaba_cluster_trace_2018')
print(dataset)
DatasetDict({
train_test: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 6048
})
pretrain: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 58409
})
})
```
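Each record in these datasets follows the GluonTS-style schema shown in the outputs above (`start`, `target`, `item_id`, plus optional static and dynamic features). As a minimal sketch, a single univariate record can be wrapped in a pandas `Series` for inspection; the record below is synthetic, not taken from the dataset:

```python
import pandas as pd

# Synthetic record for illustration; real records come from the
# `train_test` / `pretrain` splits loaded above.
record = {
    "start": "2016-11-01 00:00:00",
    "target": [0.5, 0.7, 0.6, 0.8],
    "item_id": "vm_0",
}

# Build a period index at the dataset frequency ('5T' for azure_vm_traces_2017).
index = pd.period_range(
    start=record["start"], periods=len(record["target"]), freq="5T"
)
series = pd.Series(record["target"], index=index, name=record["item_id"])
print(series)
```

The same pattern applies to the other configs; only `freq` and the feature columns differ.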
## Dataset Config
```python
from datasets import load_dataset_builder
config = load_dataset_builder('Salesforce/cloudops_tsf', 'azure_vm_traces_2017').config
print(config)
CloudOpsTSFConfig(
name='azure_vm_traces_2017',
version=1.0.0,
data_dir=None,
data_files=None,
description='',
prediction_length=48,
freq='5T',
stride=48,
univariate=True,
multivariate=False,
optional_fields=(
'feat_static_cat',
'feat_static_real',
'past_feat_dynamic_real'
),
rolling_evaluations=12,
test_split_date=Period('2016-12-13 15:55', '5T'),
_feat_static_cat_cardinalities={
'pretrain': (
('vm_id', 177040),
('subscription_id', 5514),
('deployment_id', 15208),
('vm_category', 3)
),
'train_test': (
('vm_id', 17568),
('subscription_id', 2713),
('deployment_id', 3255),
('vm_category', 3)
)
},
target_dim=1,
feat_static_real_dim=3,
past_feat_dynamic_real_dim=2
)
```
`test_split_date` is provided to reproduce the train-test split used in the paper: it is the date/time `rolling_evaluations * prediction_length` time steps before the last time step in the dataset.
Note that the pre-training split also covers the test region, so it should likewise be filtered before use.
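As a sketch of that filtering step, the cutoff index for a series can be derived from its `start` field and the config's `test_split_date`; the start date below is synthetic, while `freq` and `test_split_date` are the values from the `azure_vm_traces_2017` config above:

```python
import pandas as pd

freq = "5T"
test_split_date = pd.Period("2016-12-13 15:55", freq)  # from the config above

# Synthetic start for illustration; in practice use the record's `start` field.
start = pd.Period("2016-11-01 00:00", freq)

# Number of time steps up to and including test_split_date.
n_train_steps = (
    int((test_split_date.start_time - start.start_time) / pd.Timedelta("5min")) + 1
)

# A record's target would then be truncated to the training region, e.g.:
#     train_target = record["target"][:n_train_steps]
# The held-out test region spans rolling_evaluations * prediction_length
# = 12 * 48 = 576 steps (48 hours at 5-minute frequency).
```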
## Acknowledgements
The datasets were processed from the following original sources. Please cite the original sources if you use the datasets.
* Azure VM Traces 2017
  * Eli Cortez, Anand Bonde, Alexandre Muzio, Mark Russinovich, Marcus Fontoura, and Ricardo Bianchini. Resource central: Understanding and predicting workloads for improved resource management in large cloud platforms. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 153–167, 2017.
* https://github.com/Azure/AzurePublicDataset
* Borg Cluster Data 2011
* John Wilkes. More Google cluster data. Google research blog, November 2011. Posted at http://googleresearch.blogspot.com/2011/11/more-google-cluster-data.html.
* https://github.com/google/cluster-data
* Alibaba Cluster Trace 2018
* Jing Guo, Zihao Chang, Sa Wang, Haiyang Ding, Yihui Feng, Liang Mao, and Yungang Bao. Who limits the resource efficiency of my datacenter: An analysis of alibaba datacenter traces. In Proceedings of the International Symposium on Quality of Service, pp. 1–10, 2019.
* https://github.com/alibaba/clusterdata
## Citation
<pre>
@article{woo2023pushing,
title={Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain},
author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Sahoo, Doyen},
journal={arXiv preprint arXiv:2310.05063},
year={2023}
}
</pre>