amezasor committed on
Commit
275b906
1 Parent(s): a1d5513
Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -255,8 +255,8 @@ output = tokenizer.batch_decode(output)
 print(output)
 ```
 
-**Model Architeture:**
-Granite-3.0-3B-A800M-Base is based on a decoder-only sparse Mixture of Experts(MoE) transformer architecture. Core components of this architecture are: Fine-grained Experts, Dropless Token Routing, and Load Balancing Loss.
+**Model Architecture:**
+Granite-3.0-3B-A800M-Base is based on a decoder-only sparse Mixture of Experts (MoE) transformer architecture. Core components of this architecture are: Fine-grained Experts, Dropless Token Routing, and Load Balancing Loss.
 
 | Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
 | :-------- | :--------| :--------| :--------| :-------- |
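
The components named in the edited paragraph (a router sending each token to a few fine-grained experts, plus a load-balancing loss) can be illustrated with a minimal sketch. All sizes, the use of NumPy, and the Switch-Transformer-style balancing term are illustrative assumptions, not Granite's actual implementation.

```python
# Minimal sketch of top-k MoE token routing with an auxiliary
# load-balancing loss. Sizes are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

n_tokens, d_model = 8, 16
n_experts, top_k = 4, 2

tokens = rng.standard_normal((n_tokens, d_model))
router_w = rng.standard_normal((d_model, n_experts))

# Router: softmax over expert logits, then keep the top-k experts per token.
logits = tokens @ router_w
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)
topk_idx = np.argsort(probs, axis=-1)[:, -top_k:]  # (n_tokens, top_k)

# Load-balancing loss (Switch-Transformer style, assumed here): pushes the
# fraction of tokens routed to each expert toward the mean router probability,
# so no expert is starved or overloaded.
frac_tokens = np.bincount(topk_idx.ravel(), minlength=n_experts) / (n_tokens * top_k)
mean_prob = probs.mean(axis=0)
lb_loss = n_experts * np.sum(frac_tokens * mean_prob)

print(topk_idx.shape)  # each of the 8 tokens is routed to top_k experts
print(lb_loss)         # scalar added to the training loss with a small weight
```

"Dropless" routing additionally means no token is discarded when an expert's capacity is exceeded; the sketch above routes every token unconditionally, which is the degenerate dropless case without capacity limits.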