---
task_categories:
- summarization
- text-generation
language:
- en
pretty_name: BookSum Summarization Dataset Clean
size_categories:
- 1K<n<10K
configs:
- config_name: books
data_files:
- split: train
path: "books/train.jsonl"
- split: test
path: "books/test.jsonl"
- split: validation
path: "books/val.jsonl"
- config_name: chapters
data_files:
- split: train
path: "chapters/train.jsonl"
- split: test
path: "chapters/test.jsonl"
- split: validation
path: "chapters/val.jsonl"
---
# Table of Contents
1. [Description](#description)
2. [Usage](#usage)
3. [Distribution](#distribution)
- [Chapters Dataset](#chapters-dataset)
- [Books Dataset](#books-dataset)
4. [Structure](#structure)
5. [Results and Comparison with kmfoda/booksum](#results-and-comparison-with-kmfodabooksum)
# Description
This repository contains the BookSum dataset introduced in the paper [BookSum: A Collection of Datasets for Long-form Narrative Summarization](https://arxiv.org/abs/2105.08209).
This dataset includes both book and chapter summaries from BookSum (unlike kmfoda/booksum, which contains only the chapter data). Some mismatched summaries have been corrected, and unnecessary columns have been discarded, leaving minimal text-to-summary rows. Because a given text can have multiple summaries, each row contains an array of summaries.
# Usage
Note: Make sure you have [version 2.14.0 or later of the `datasets` library](https://github.com/huggingface/datasets/releases/tag/2.14.0) installed to load the dataset successfully.
```python
from datasets import load_dataset
book_data = load_dataset("ubaada/booksum-complete-cleaned", "books")
chapter_data = load_dataset("ubaada/booksum-complete-cleaned", "chapters")
# Print the 1st book
print(book_data["train"][0]['text'])
# Print the summary of the 1st book
print(book_data["train"][0]['summary'][0]['text'])
```
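Since each row stores a list of reference summaries, you may want to select one deterministically, for example the longest. A minimal sketch (the sample record below is made up; the field names follow this card's schema):

```python
# Illustrative record mirroring one row of the "chapters" config
# (field names follow this dataset card; the values are made up).
record = {
    "bid": 1,
    "book_title": "Example Book",
    "chapter_id": "example.chapter.1",
    "text": "Raw chapter text ...",
    "summary": [
        {"source": "sourceA", "text": "A short summary.", "analysis": None},
        {"source": "sourceB", "text": "A noticeably longer reference summary.", "analysis": None},
    ],
    "is_aggregate": False,
}

# Pick the longest reference summary, e.g. for single-reference training.
longest = max(record["summary"], key=lambda s: len(s["text"]))
print(longest["source"])  # sourceB
```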
# Distribution
<div style="display: inline-block; vertical-align: top; width: 45%;">

## Chapters Dataset
| Split | Total Sum. | Missing Sum. | Successfully Processed | Chapters |
|---------|------------|--------------|------------------------|------|
| Train | 9712 | 178 | 9534 (98.17%) | 5653 |
| Test | 1432 | 0 | 1432 (100.0%) | 950 |
| Val | 1485 | 0 | 1485 (100.0%) | 854 |
</div>
<div style="display: inline-block; vertical-align: top; width: 45%; margin-left: 5%;">

## Books Dataset
| Split | Total Sum. | Missing Sum. | Successfully Processed | Books |
|---------|------------|--------------|------------------------|------|
| Train | 314 | 0 | 314 (100.0%) | 151 |
| Test | 46 | 0 | 46 (100.0%) | 17 |
| Val | 45 | 0 | 45 (100.0%) | 19 |
</div>
# Structure
```
Chapters Dataset
0 - bid (book id)
1 - book_title
2 - chapter_id
3 - text (raw chapter text)
4 - summary (list of summaries from different sources)
    - {source, text (summary), analysis}
    ...
5 - is_aggregate (bool; if true, the text contains more than one chapter)

Books Dataset
0 - bid (book id)
1 - title
2 - text (raw book text)
3 - summary (list of summaries from different sources)
    - {source, text (summary), analysis}
    ...
```
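Because `is_aggregate` marks rows whose text spans more than one chapter, you may want to drop those rows for strict chapter-level work. A minimal sketch over plain dicts (field names from the structure above; the sample rows are made up):

```python
# Two illustrative rows following the chapters schema above (values made up).
rows = [
    {"bid": 1, "chapter_id": "a.1", "text": "...", "summary": [], "is_aggregate": False},
    {"bid": 1, "chapter_id": "a.1-3", "text": "...", "summary": [], "is_aggregate": True},
]

# Keep only true single-chapter rows.
single_chapters = [r for r in rows if not r["is_aggregate"]]
print(len(single_chapters))  # 1
```

On a loaded split, the same filter can be expressed with `dataset.filter(lambda r: not r["is_aggregate"])`.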
# Results and Comparison with kmfoda/booksum
Tested on the 'test' split of the chapters sub-dataset. There is a slight improvement in R1/R2 scores over the other BookSum repo, likely due to the work done on cleaning the misalignments in the alignment file. In the plot for this dataset, the first summary \[0\] is used for each chapter. Choosing the best reference summary from the list for each chapter improves the scores further, but those results are not shown here for fairness.
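The "best reference" selection can be sketched with a simplified unigram-overlap F1, used here only as a stand-in for ROUGE-1 (a real evaluation would use a proper ROUGE implementation; the prediction and references below are made up):

```python
from collections import Counter

def unigram_f1(prediction: str, reference: str) -> float:
    """Simplified ROUGE-1-style F1: unigram overlap between two strings."""
    p, r = Counter(prediction.lower().split()), Counter(reference.lower().split())
    overlap = sum((p & r).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(p.values()), overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical model output and two reference summaries for one chapter.
prediction = "the hero returns home"
references = ["the hero finally returns home", "a stranger arrives in town"]

# Score against every reference and keep the best, as opposed to
# always scoring against references[0].
best = max(unigram_f1(prediction, ref) for ref in references)
print(round(best, 2))  # 0.89
```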
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62a7d1e152aa8695f9209345/lUNes4SFXVMdtebGMEJK0.png)