admin committed on
Commit
4009acb
1 Parent(s): 8176d33

upl scripts

Files changed (3)
  1. .gitignore +1 -0
  2. README.md +212 -1
  3. bel_canto.py +169 -0
.gitignore ADDED
@@ -0,0 +1 @@
+ rename.sh
README.md CHANGED
@@ -1,3 +1,214 @@
  ---
- license: mit
+ license: cc-by-nc-nd-4.0
+ task_categories:
+ - audio-classification
+ - image-classification
+ language:
+ - zh
+ - en
+ tags:
+ - music
+ - art
+ pretty_name: Bel Canto and Chinese Folk Song Singing Tech
+ size_categories:
+ - 1K<n<10K
+ viewer: false
  ---
+
+ # Dataset Card for Bel Canto and Chinese Folk Song Singing Tech
+ The raw dataset, sourced from the [Bel Canto and National Singing Dataset](https://ccmusic-database.github.io/en/database/ccm.html#shou9), contains 203 a cappella singing clips performed in two styles, Bel Canto and Chinese folk singing, by professional vocalists; all clips were recorded in professional commercial recording studios.
+
+ Based on this raw dataset, we constructed the `default subset` of the current integrated version of the dataset; its data structure can be viewed in the [viewer](https://www.modelscope.cn/datasets/ccmusic-database/bel_canto/dataPeview).
+
+ Since the default subset has not been evaluated, we built the `eval subset` from it to verify its effectiveness. The evaluation results can be seen at [bel_canto](https://www.modelscope.cn/models/ccmusic-database/bel_canto). Below are the data structures and invocation methods of the subsets.
+
+ ## Dataset Structure
+ <style>
+ .belcanto td {
+ vertical-align: middle !important;
+ text-align: center;
+ }
+ .belcanto th {
+ text-align: center;
+ }
+ </style>
+
+ ### Default Subset
+ <table class="belcanto">
+ <tr>
+ <th>audio</th>
+ <th>mel (spectrogram)</th>
+ <th>label (4-class)</th>
+ <th>gender (2-class)</th>
+ <th>singing_method (2-class)</th>
+ </tr>
+ <tr>
+ <td>.wav, 22050Hz</td>
+ <td>.jpg, 22050Hz</td>
+ <td>m_bel, f_bel, m_folk, f_folk</td>
+ <td>male, female</td>
+ <td>Folk_Singing, Bel_Canto</td>
+ </tr>
+ <tr>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ </table>
+
+ ### Eval Subset
+ <table class="belcanto">
+ <tr>
+ <th>mel</th>
+ <th>cqt</th>
+ <th>chroma</th>
+ <th>label (4-class)</th>
+ <th>gender (2-class)</th>
+ <th>singing_method (2-class)</th>
+ </tr>
+ <tr>
+ <td>.jpg, 1.6s, 22050Hz</td>
+ <td>.jpg, 1.6s, 22050Hz</td>
+ <td>.jpg, 1.6s, 22050Hz</td>
+ <td>m_bel, f_bel, m_folk, f_folk</td>
+ <td>male, female</td>
+ <td>Folk_Singing, Bel_Canto</td>
+ </tr>
+ <tr>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ </table>
+
+ <img src="https://www.modelscope.cn/api/v1/datasets/ccmusic-database/bel_canto/repo?Revision=master&FilePath=.%2Fdata%2Fbel.png&View=true">
+
+ ### Data Instances
+ .zip(.wav, .jpg)
+
+ ### Data Fields
+ m_bel, f_bel, m_folk, f_folk
+
+ ### Data Splits
+ | Split (8:1:1) / Subset | default             | eval                |
+ | :--------------------: | :-----------------: | :-----------------: |
+ | train                  | 159                 | 7907                |
+ | validation             | 20                  | 988                 |
+ | test                   | 23                  | 991                 |
+ | total                  | 202                 | 9886                |
+ | total duration (s)     | `18192.37652721089` | `18192.37652721089` |
+
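+ The split sizes above can be checked directly from the loaded `DatasetDict`; the following is a minimal sketch (it assumes the subsets load as shown in the Usage section below):
+ ```python
+ from datasets import load_dataset
+
+ # Print the number of rows per split for both subsets; the counts should
+ # match the Data Splits table above.
+ for subset in ("default", "eval"):
+     ds = load_dataset("ccmusic-database/bel_canto", name=subset)
+     print(subset, {split: ds[split].num_rows for split in ds})
+ ```
+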
+ ## Viewer
+ <https://www.modelscope.cn/datasets/ccmusic-database/bel_canto/dataPeview>
+
+ ## Usage
+ ### Default Subset
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ccmusic-database/bel_canto", name="default")
+ for item in ds["train"]:
+     print(item)
+
+ for item in ds["validation"]:
+     print(item)
+
+ for item in ds["test"]:
+     print(item)
+ ```
+
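+ As a quick sanity check, the columns of a record can be decoded as follows; this is a minimal sketch, assuming the default subset loads as shown above:
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ccmusic-database/bel_canto", name="default")
+ item = ds["train"][0]
+ # The audio column decodes to a float array plus its sampling rate (22050 Hz)
+ print(item["audio"]["sampling_rate"], len(item["audio"]["array"]))
+ # The mel column decodes to a PIL image of the mel spectrogram
+ print(item["mel"].size)
+ # Class columns are stored as integers; the ClassLabel features map them back to names
+ features = ds["train"].features
+ print(
+     features["label"].int2str(item["label"]),
+     features["gender"].int2str(item["gender"]),
+     features["singing_method"].int2str(item["singing_method"]),
+ )
+ ```
+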
+ ### Eval Subset
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ccmusic-database/bel_canto", name="eval")
+ for item in ds["train"]:
+     print(item)
+
+ for item in ds["validation"]:
+     print(item)
+
+ for item in ds["test"]:
+     print(item)
+ ```
+
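+ Since the eval subset is intended for image classification, a per-class count of its training split can be obtained in the same way; a minimal sketch:
+ ```python
+ from collections import Counter
+ from datasets import load_dataset
+
+ ds = load_dataset("ccmusic-database/bel_canto", name="eval")
+ label_feature = ds["train"].features["label"]
+ # Map integer labels back to m_bel / f_bel / m_folk / f_folk and count them
+ print(Counter(label_feature.int2str(label) for label in ds["train"]["label"]))
+ ```
+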
+ ## Maintenance
+ ```bash
+ GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/bel_canto
+ cd bel_canto
+ ```
+
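+ After cloning, the loading script can be smoke-tested against the local checkout; this is a sketch that assumes a `datasets` version which still supports script-based loading (`trust_remote_code` may be required on recent versions):
+ ```python
+ from datasets import load_dataset
+
+ # Load from the local bel_canto.py script instead of the Hub repo name
+ ds = load_dataset("./bel_canto.py", name="default", trust_remote_code=True)
+ print(ds)
+ ```
+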
+ ## Dataset Description
+ - **Homepage:** <https://ccmusic-database.github.io>
+ - **Repository:** <https://huggingface.co/datasets/ccmusic-database/bel_canto>
+ - **Paper:** <https://doi.org/10.5281/zenodo.5676893>
+ - **Leaderboard:** <https://ccmusic-database.github.io/team.html>
+ - **Point of Contact:** <https://www.modelscope.cn/datasets/ccmusic-database/bel_canto>
+
+ ### Dataset Summary
+ This database contains hundreds of a cappella singing clips performed in two styles, Bel Canto and Chinese national singing, by professional vocalists; all clips were recorded in professional commercial recording studios.
+
+ ### Supported Tasks and Leaderboards
+ Audio classification, image classification, singing method classification, voice classification
+
+ ### Languages
+ Chinese, English
+
+ ## Dataset Creation
+ ### Curation Rationale
+ Lack of a dataset for Bel Canto and Chinese folk song singing techniques
+
+ ### Source Data
+ #### Initial Data Collection and Normalization
+ Zhaorui Liu, Monan Zhou
+
+ #### Who are the source language producers?
+ Students from CCMUSIC
+
+ ### Annotations
+ #### Annotation process
+ All clips are sung by professional vocalists and were recorded in professional commercial recording studios.
+
+ #### Who are the annotators?
+ Professional vocalists
+
+ ### Personal and Sensitive Information
+ None
+
+ ## Considerations for Using the Data
+ ### Social Impact of Dataset
+ Promoting the development of AI in the music industry
+
+ ### Discussion of Biases
+ The dataset covers only Chinese songs
+
+ ### Other Known Limitations
+ Some singers may not have enough professional training in classical or ethnic vocal techniques.
+
+ ## Additional Information
+ ### Dataset Curators
+ Zijin Li
+
+ ### Evaluation
+ <https://huggingface.co/ccmusic-database/bel_canto>
+
+ ### Citation Information
+ ```bibtex
+ @dataset{zhaorui_liu_2021_5676893,
+   author    = {Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng Yu and Wei Li and Baoqiang Han},
+   title     = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
+   month     = {mar},
+   year      = {2024},
+   publisher = {HuggingFace},
+   version   = {1.2},
+   url       = {https://huggingface.co/ccmusic-database}
+ }
+ ```
+
+ ### Contributions
+ Provides a dataset for distinguishing Bel Canto and Chinese folk song singing techniques
bel_canto.py ADDED
@@ -0,0 +1,169 @@
+ import os
+ import random
+ import hashlib
+ import datasets
+ from datasets.tasks import ImageClassification
+
+
+ # Label vocabularies for the three classification targets
+ _NAMES = {
+     "all": ["m_bel", "f_bel", "m_folk", "f_folk"],
+     "gender": ["female", "male"],
+     "singing_method": ["Folk_Singing", "Bel_Canto"],
+ }
+
+ _DBNAME = os.path.basename(__file__).split(".")[0]
+
+ _HOMEPAGE = f"https://www.modelscope.cn/datasets/ccmusic-database/{_DBNAME}"
+
+ _DOMAIN = f"https://www.modelscope.cn/api/v1/datasets/ccmusic-database/{_DBNAME}/repo?Revision=master&FilePath=data"
+
+
+ # Archives hosted on ModelScope: raw audio, pre-rendered mel spectrograms,
+ # and the spectrogram images of the eval subset
+ _URLS = {
+     "audio": f"{_DOMAIN}/audio.zip",
+     "mel": f"{_DOMAIN}/mel.zip",
+     "eval": f"{_DOMAIN}/eval.zip",
+ }
+
+
+ class bel_canto(datasets.GeneratorBasedBuilder):
+     def _info(self):
+         # The default subset pairs each audio clip with its mel spectrogram;
+         # the eval subset only carries mel / cqt / chroma spectrogram images
+         return datasets.DatasetInfo(
+             features=(
+                 datasets.Features(
+                     {
+                         "audio": datasets.Audio(sampling_rate=22050),
+                         "mel": datasets.Image(),
+                         "label": datasets.features.ClassLabel(names=_NAMES["all"]),
+                         "gender": datasets.features.ClassLabel(names=_NAMES["gender"]),
+                         "singing_method": datasets.features.ClassLabel(
+                             names=_NAMES["singing_method"]
+                         ),
+                     }
+                 )
+                 if self.config.name == "default"
+                 else (
+                     datasets.Features(
+                         {
+                             "mel": datasets.Image(),
+                             "cqt": datasets.Image(),
+                             "chroma": datasets.Image(),
+                             "label": datasets.features.ClassLabel(names=_NAMES["all"]),
+                             "gender": datasets.features.ClassLabel(
+                                 names=_NAMES["gender"]
+                             ),
+                             "singing_method": datasets.features.ClassLabel(
+                                 names=_NAMES["singing_method"]
+                             ),
+                         }
+                     )
+                 )
+             ),
+             supervised_keys=("mel", "label"),
+             homepage=_HOMEPAGE,
+             license="CC-BY-NC-ND",
+             version="1.2.0",
+             task_templates=[
+                 ImageClassification(
+                     task="image-classification",
+                     image_column="mel",
+                     label_column="label",
+                 )
+             ],
+         )
+
+     def _str2md5(self, original_string: str):
+         # Deterministic id used to pair a .wav with its mel .jpg
+         md5_obj = hashlib.md5()
+         md5_obj.update(original_string.encode("utf-8"))
+         return md5_obj.hexdigest()
+
+     def _split_generators(self, dl_manager):
+         dataset = []
+         if self.config.name == "default":
+             # Pair each .wav with its mel .jpg via an MD5 key of "class/stem"
+             files = {}
+             audio_files = dl_manager.download_and_extract(_URLS["audio"])
+             mel_files = dl_manager.download_and_extract(_URLS["mel"])
+             for fpath in dl_manager.iter_files([audio_files]):
+                 fname: str = os.path.basename(fpath)
+                 if fname.endswith(".wav"):
+                     cls = os.path.basename(os.path.dirname(fpath)) + "/"
+                     item_id = self._str2md5(cls + fname.split(".wa")[0])
+                     files[item_id] = {"audio": fpath}
+
+             for fpath in dl_manager.iter_files([mel_files]):
+                 fname = os.path.basename(fpath)
+                 if fname.endswith(".jpg"):
+                     cls = os.path.basename(os.path.dirname(fpath)) + "/"
+                     item_id = self._str2md5(cls + fname.split(".jp")[0])
+                     files[item_id]["mel"] = fpath
+
+             dataset = list(files.values())
+
+         else:
+             # The eval subset only needs the mel image paths; cqt / chroma
+             # paths are derived from them in _generate_examples
+             data_files = dl_manager.download_and_extract(_URLS["eval"])
+             for fpath in dl_manager.iter_files([data_files]):
+                 fname = os.path.basename(fpath)
+                 if "mel" in fpath and fname.endswith(".jpg"):
+                     dataset.append(fpath)
+
+         # Group items by class label (the parent directory name)
+         categories = {}
+         for name in _NAMES["all"]:
+             categories[name] = []
+
+         for data in dataset:
+             fpath = data["audio"] if self.config.name == "default" else data
+             label = os.path.basename(os.path.dirname(fpath))
+             categories[label].append(data)
+
+         # Stratified 8:1:1 train / validation / test split per class
+         testset, validset, trainset = [], [], []
+         for cls in categories:
+             random.shuffle(categories[cls])
+             count = len(categories[cls])
+             p80 = int(count * 0.8)
+             p90 = int(count * 0.9)
+             trainset += categories[cls][:p80]
+             validset += categories[cls][p80:p90]
+             testset += categories[cls][p90:]
+
+         random.shuffle(trainset)
+         random.shuffle(validset)
+         random.shuffle(testset)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN, gen_kwargs={"files": trainset}
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION, gen_kwargs={"files": validset}
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST, gen_kwargs={"files": testset}
+             ),
+         ]
+
+     def _generate_examples(self, files):
+         if self.config.name == "default":
+             for i, item in enumerate(files):
+                 # Gender and singing method are derived from the folder label,
+                 # e.g. "f_bel" -> female, Bel_Canto
+                 label: str = os.path.basename(os.path.dirname(item["audio"]))
+                 yield i, {
+                     "audio": item["audio"],
+                     "mel": item["mel"],
+                     "label": label,
+                     "gender": ("male" if label.split("_")[0] == "m" else "female"),
+                     "singing_method": (
+                         "Bel_Canto" if label.split("_")[1] == "bel" else "Folk_Singing"
+                     ),
+                 }
+
+         else:
+             for i, fpath in enumerate(files):
+                 label = os.path.basename(os.path.dirname(fpath))
+                 yield i, {
+                     "mel": fpath,
+                     "cqt": fpath.replace("mel", "cqt"),
+                     "chroma": fpath.replace("mel", "chroma"),
+                     "label": label,
+                     "gender": ("male" if label.split("_")[0] == "m" else "female"),
+                     "singing_method": (
+                         "Bel_Canto" if label.split("_")[1] == "bel" else "Folk_Singing"
+                     ),
+                 }