---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- image-classification
- densenet121
language:
- ja
---

This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration.

# Densenet121-Dog-Emotions Model Card

- **Accuracy on the training data:** 0.6451
- **Accuracy on the test data:** 0.5938

# About the Model

This model classifies dog images into one of four categories: [angry, happy, relaxed, sad].
A linear classification layer was added on top of densenet121, which was then fine-tuned on the devzohaib/dog-emotions-prediction dataset. Usage instructions and references are given below.
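The original training script is not included in this card; the following is only a minimal fine-tuning sketch under assumed conditions. It reuses the `CustomDenseNet` class defined in the Usage section below, assumes the dataset has been downloaded and arranged into per-class folders such as `data/train/angry/...`, and the optimizer, learning rate, and epoch count are illustrative rather than the values actually used.

```python
# Minimal fine-tuning sketch (not the original training script).
# Assumes `CustomDenseNet` from the Usage section below is already defined and
# the Dog Emotions Prediction images are laid out as data/train/<class>/*.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_dataset = datasets.ImageFolder("data/train", transform=train_transforms)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# ImageFolder sorts class folders alphabetically: ['angry', 'happy', 'relaxed', 'sad']
model = CustomDenseNet(class_names=train_dataset.classes).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative hyperparameters

for epoch in range(5):  # epoch count is illustrative
    model.train()
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        # forward() returns a label and probabilities, so call the backbone
        # directly to get the raw logits needed for the cross-entropy loss
        logits = model.densenet(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
    print(f"epoch {epoch + 1}: loss {running_loss / len(train_dataset):.4f}")
```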
## Usage

1. Load the model

```python
from huggingface_hub import PyTorchModelHubMixin
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image
import requests


class CustomDenseNet(nn.Module, PyTorchModelHubMixin):
    def __init__(self, class_names):
        super().__init__()
        self.class_names = class_names
        # ImageNet-pretrained DenseNet-121 backbone
        self.densenet = models.densenet121(pretrained=True)
        # Replace the default 1000-class classifier with a head over the emotion classes
        num_features = self.densenet.classifier.in_features
        self.densenet.classifier = nn.Linear(num_features, len(class_names))

    def forward(self, x):
        outputs = self.densenet(x)
        _, preds = torch.max(outputs, 1)
        probabilities = torch.nn.functional.softmax(outputs, dim=1).squeeze(0)
        predicted_class = self.class_names[preds.item()]
        predicted_probabilities = {
            self.class_names[i]: probabilities[i].item()
            for i in range(len(self.class_names))
        }
        return predicted_class, predicted_probabilities


model_id = "shinyice/densenet121-dog-emotions"
class_names = ['angry', 'happy', 'relaxed', 'sad']

# from_pretrained is a classmethod provided by PyTorchModelHubMixin;
# class_names is passed explicitly in case it is not stored in the repo config
model = CustomDenseNet.from_pretrained(model_id, class_names=class_names)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
```

2. Classify an image

```python
def dog_emotion(model, url_mode=False, input_image=None):
    # Same ImageNet preprocessing as used for fine-tuning
    img_transforms = transforms.Compose([
        transforms.Resize(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Load the image from a URL or a local path
    if url_mode:
        image = Image.open(requests.get(input_image, stream=True).raw).convert('RGB')
    else:
        image = Image.open(input_image).convert('RGB')

    image_tensor = img_transforms(image).unsqueeze(0).to(device)

    model.eval()
    with torch.no_grad():
        predicted_class, predicted_probabilities = model(image_tensor)

    return predicted_class, predicted_probabilities, image


url_mode = True
input_image = ""  # URL (url_mode=True) or local path (url_mode=False) of a dog image

emotion, probabilities, image = dog_emotion(model=model, url_mode=url_mode, input_image=input_image)
print(emotion, probabilities)
image
```

## References

- [カスタムデータセットでなるべくかんたんに画像分類器をつくりたい。Pytorchで転移学習](https://qiita.com/john-rocky/items/e386f0aa5232d323db7e)
- [Dog Emotions Prediction](https://www.kaggle.com/datasets/devzohaib/dog-emotions-prediction)
- [Uploading models](https://huggingface.co/docs/hub/models-uploading)
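## Pushing a model with the mixin (sketch)

The statement at the top of this card means the checkpoint was uploaded through `PyTorchModelHubMixin` rather than by uploading files manually. The exact upload code is not part of this card; the following is a minimal sketch, assuming a trained `CustomDenseNet` instance named `model` and an authenticated Hub session (for example via `huggingface-cli login`). The local directory name is illustrative.

```python
# Minimal sketch of saving and pushing with PyTorchModelHubMixin
# (assumes `model` is a trained CustomDenseNet and you are logged in to the Hub).

# Save the weights locally (recent huggingface_hub versions also write a
# config.json built from the __init__ kwargs, e.g. class_names)
model.save_pretrained("densenet121-dog-emotions")

# Push the same files to a Hub repository; the repo is created if it does not exist
model.push_to_hub("shinyice/densenet121-dog-emotions")
```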