---
license: apache-2.0
---
# Edge Vision-Language Model (Moondream)
This repository contains the `Moondream` vision-language model, designed to generate captions for images. It pairs a lightweight, experimental vision encoder with a language model to produce descriptions of input images.
[![Hugging Face Model](https://img.shields.io/badge/Hugging%20Face-Model-blue)](https://huggingface.co/irotem98/edge_vlm)
[![Hugging Face Spaces](https://img.shields.io/badge/Hugging%20Face-Spaces-orange)](https://huggingface.co/spaces/irotem98/edge_vlm)
[![GitHub Repo](https://img.shields.io/badge/GitHub-Repo-green)](https://github.com/rotem154154/edge_vlm)
## Installation
1. Clone the repository:
```bash
git clone https://huggingface.co/irotem98/edge_vlm
cd edge_vlm
```
2. Install the required dependencies:
```bash
pip install -r requirements.txt
```
## Usage
Here is a minimal example that loads the model, preprocesses an image, and generates a caption:
```python
from model import MoondreamModel
import torch
# Load the model and tokenizer
model = MoondreamModel.load_model()
tokenizer = MoondreamModel.load_tokenizer()
# Load and preprocess an image
image_path = 'img.jpg' # Replace with your image path
image = MoondreamModel.preprocess_image(image_path)
# Generate the caption
caption = MoondreamModel.generate_caption(model, image, tokenizer)
print('Generated Caption:', caption)
```
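The exact preprocessing is handled internally by `MoondreamModel.preprocess_image`. For reference, the sketch below shows a typical vision-language preprocessing pipeline (resize to a square, scale to `[0, 1]`, CHW tensor with a batch dimension). The input size of 378×378 is an assumption for illustration, not a confirmed detail of this model; consult `model.py` for the real implementation.

```python
import numpy as np
import torch
from PIL import Image

def preprocess_image_sketch(image_path, size=378):
    """Illustrative preprocessing: resize, scale to [0, 1], and batch.

    This is a generic sketch, not the actual Moondream preprocessing;
    the real input size and normalization may differ.
    """
    # Resize to a square RGB image
    img = Image.open(image_path).convert('RGB').resize((size, size))
    # Convert to float32 array scaled to [0, 1]
    arr = np.asarray(img, dtype=np.float32) / 255.0
    # HWC -> CHW, then add a batch dimension: (1, 3, size, size)
    tensor = torch.from_numpy(arr).permute(2, 0, 1)
    return tensor.unsqueeze(0)
```

The resulting tensor has shape `(1, 3, size, size)`, which is the layout `generate_caption` would typically expect for a single image.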