---
license: mit
datasets:
- Trelis/tiny-shakespeare
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- rnn
- shakespeare
---

# Shakespeare RNN

This project implements a character-level Recurrent Neural Network (RNN) trained on Shakespeare's works. The model can generate Shakespeare-like text based on a given prompt.

## Table of Contents

- [Shakespeare RNN](#shakespeare-rnn)
  - [Table of Contents](#table-of-contents)
  - [Project Overview](#project-overview)
  - [Installation](#installation)
  - [Project Structure](#project-structure)
  - [Usage](#usage)
    - [Training](#training)
    - [Inference](#inference)
  - [Model Architecture](#model-architecture)
  - [Dataset](#dataset)
  - [Configuration](#configuration)
  - [Results](#results)
  - [Contributing](#contributing)
  - [License](#license)

## Project Overview

This project uses a character-level Long Short-Term Memory (LSTM) network to generate text in the style of Shakespeare. Trained on a corpus of Shakespeare's works, the model continues any given prompt one character at a time.

Key features:
- Character-level text generation
- LSTM-based RNN architecture
- Customizable hyperparameters
- Training with Weights & Biases logging
- Interactive inference script

## Installation

1. Clone the repository:
   ```
   git clone https://github.com/your-username/shakespeare-rnn.git
   cd shakespeare-rnn
   ```

2. Create a virtual environment:
   ```
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```

3. Install the required packages:
   ```
   pip install -r requirements.txt
   ```

## Project Structure

```
shakespeare-rnn/
β”‚
β”œβ”€β”€ data/
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── dataset.py
β”‚
β”œβ”€β”€ model/
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── rnn.py
β”‚
β”œβ”€β”€ utils/
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── tokenizer.py
β”‚
β”œβ”€β”€ config.py
β”œβ”€β”€ train.py
β”œβ”€β”€ inference.py
β”œβ”€β”€ requirements.txt
└── README.md
```

## Usage

### Training

To train the model, run:

```
python train.py
```

This will start training and log metrics to Weights & Biases, so you can monitor progress in real time on the W&B dashboard.
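
Under the hood, training is standard next-character prediction with cross-entropy loss. The sketch below shows the general shape of such a loop; the `CharRNN` class (see the architecture section) and the `make_loader` helper are illustrative names, not necessarily the ones this repository uses:

```
import torch
import torch.nn as nn
import wandb

from model.rnn import CharRNN          # illustrative class name
from data.dataset import make_loader   # hypothetical helper

wandb.init(project="shakespeare-rnn")   # project name is illustrative

vocab_size = 65                         # Tiny Shakespeare typically has ~65 unique characters
model = CharRNN(vocab_size)
loader = make_loader(batch_size=64, seq_length=100)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(20):
    for inputs, targets in loader:      # (batch, seq_len) tensors of character indices
        optimizer.zero_grad()
        logits, _ = model(inputs)       # (batch, seq_len, vocab_size)
        loss = criterion(logits.reshape(-1, vocab_size), targets.reshape(-1))
        loss.backward()
        optimizer.step()
        wandb.log({"loss": loss.item()})
```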

### Inference

To generate text using the trained model, run:

```
python inference.py
```

This will load the trained model and allow you to enter prompts for text generation. The script will also generate text for a few predefined prompts.
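
Generation itself is a sampling loop: encode the prompt, run it through the network, then repeatedly sample the next character from the softmax over the output logits. A minimal sketch, assuming a `CharRNN` as in the architecture section and a tokenizer with `encode`/`decode` methods (both assumptions):

```
import torch

def generate(model, tokenizer, prompt, length=200, temperature=0.8):
    # Continue `prompt` by sampling `length` characters one at a time.
    model.eval()
    indices = torch.tensor([tokenizer.encode(prompt)])  # (1, prompt_len); encode() is assumed
    hidden = None
    output = list(prompt)
    with torch.no_grad():
        for _ in range(length):
            logits, hidden = model(indices, hidden)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            next_idx = torch.multinomial(probs, 1)
            output.append(tokenizer.decode([next_idx.item()]))  # decode() is assumed
            indices = next_idx.view(1, 1)  # feed only the new character back in
    return "".join(output)
```

Lower temperatures make the output more conservative; higher values make it more varied.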

## Model Architecture

The model uses a character-level LSTM network with the following architecture:
- Embedding layer
- LSTM layer(s)
- Fully connected output layer

The exact architecture (number of layers, hidden dimensions, etc.) can be configured in the `config.py` file.
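
A minimal PyTorch sketch of this shape of model (the real class lives in `model/rnn.py`; the name `CharRNN` and the default sizes here are illustrative):

```
import torch.nn as nn

class CharRNN(nn.Module):
    # Character-level LSTM: embedding -> LSTM -> linear projection to the vocabulary.

    def __init__(self, vocab_size, embedding_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, hidden=None):
        emb = self.embedding(x)               # (batch, seq_len, embedding_dim)
        out, hidden = self.lstm(emb, hidden)  # (batch, seq_len, hidden_dim)
        return self.fc(out), hidden           # logits: (batch, seq_len, vocab_size)
```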

## Dataset

The model is trained on the Tiny Shakespeare dataset (`Trelis/tiny-shakespeare`), a compact corpus drawn from Shakespeare's plays. It is downloaded automatically via the Hugging Face `datasets` library.
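
For reference, pulling the dataset down takes one call (the repository's `data/dataset.py` presumably wraps this in a PyTorch dataset; the column name below is an assumption, so inspect the loaded object):

```
from datasets import load_dataset

ds = load_dataset("Trelis/tiny-shakespeare")
text = ds["train"][0]["Text"]  # column name is an assumption; check ds["train"].column_names
```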

## Configuration

You can modify the model's hyperparameters and training settings in the `config.py` file. Key configurations include (a sketch of such a file follows the list):
- Batch size
- Sequence length
- Embedding dimension
- Hidden dimension
- Number of LSTM layers
- Learning rate
- Number of training epochs
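
```
# config.py -- illustrative values, not the repository's actual settings
BATCH_SIZE = 64
SEQ_LENGTH = 100        # characters per training example
EMBEDDING_DIM = 128
HIDDEN_DIM = 256
NUM_LAYERS = 2
LEARNING_RATE = 3e-3
NUM_EPOCHS = 20
```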

## Results

After training, you can find the training logs and performance metrics on the Weights & Biases dashboard. The trained model will be saved as `shakespeare_model.pth`, and the tokenizer will be saved as `tokenizer.pkl`.
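
Reloading the saved artifacts later might look like this (assuming a `CharRNN` class as sketched above; the tokenizer's `vocab` attribute is an assumption):

```
import pickle
import torch

with open("tokenizer.pkl", "rb") as f:
    tokenizer = pickle.load(f)

model = CharRNN(vocab_size=len(tokenizer.vocab))  # attribute name is an assumption
model.load_state_dict(torch.load("shakespeare_model.pth", map_location="cpu"))
model.eval()
```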

Example generated text:

[Include some example outputs from your trained model here]

## Contributing

Contributions to this project are welcome! Please follow these steps:

1. Fork the repository
2. Create a new branch (`git checkout -b feature/your-feature-name`)
3. Make your changes
4. Commit your changes (`git commit -am 'Add some feature'`)
5. Push to the branch (`git push origin feature/your-feature-name`)
6. Create a new Pull Request

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.