---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=False&allowCommercialUse=RentCivit&allowDerivatives=False&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- anime
- midjourny
- style
- midjourney anime
- the protoart
- midjourney v6
- protoart
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: egmid
widget:
- text: ' '
  output:
    url: 25452979.jpeg
- text: ' '
  output:
    url: 25439345.jpeg
- text: ' '
  output:
    url: 25439344.jpeg
- text: ' '
  output:
    url: 25439346.jpeg
- text: ' '
  output:
    url: 25451639.jpeg
- text: ' '
  output:
    url: 26131254.jpeg
---

# FLUX MidJourney Anime

## Model description

This is my first attempt at creating a MidJourney-style anime LoRA with FLUX, so it may need further fine-tuning.

## Trigger words

You should use `egmid` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/brushpenbob/flux-midjourney-anime/tree/main) them in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('brushpenbob/flux-midjourney-anime', weight_name='FLUX_MidJourney_Anime.safetensors')
image = pipeline('egmid, your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
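
If you want to control how strongly the style is applied, diffusers can register the LoRA under an adapter name and scale it. The snippet below is a minimal sketch: the adapter name `midjourney_anime` and the `0.8` weight are arbitrary choices for illustration, not values recommended by the model author.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')

# Register the LoRA under an adapter name so its strength can be adjusted later.
# "midjourney_anime" is a hypothetical name chosen for this example.
pipeline.load_lora_weights(
    'brushpenbob/flux-midjourney-anime',
    weight_name='FLUX_MidJourney_Anime.safetensors',
    adapter_name='midjourney_anime',
)

# Weights below 1.0 soften the style; higher values strengthen it.
pipeline.set_adapters(['midjourney_anime'], adapter_weights=[0.8])

image = pipeline('egmid, your prompt').images[0]
image.save('midjourney_anime.png')
```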