---
license: creativeml-openrail-m
tags:
  - imagepipeline
  - imagepipeline.io
  - text-to-image
  - ultra-realistic
pinned: false
pipeline_tag: text-to-image

---


## LEOSAMs-HelloWorld-SDXL-Base-Model
<img src="https://f005.backblazeb2.com/b2api/v2/b2_download_file_by_id?fileId=4_zfdf0a8ed59e8666b89b10713_f115c9bc406116333_d20231128_m132808_c005_v0501012_t0052_u01701178088353" alt="Generated by Image Pipeline" style="border-radius: 10px;">



**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**

Model details - HelloWorld 2.0 no longer requires trigger words, and its results are comparable in quality to version 1.0 used with trigger words. The 1.0 trigger word 'leogirl' was strongly associated with East Asian faces. With the trigger word removed, generic terms like '1girl' will still tend to produce East Asian portraits when no ethnicity is specified, but you can now steer the output with keywords such as nationality or skin tone, for example 'Chinese', 'Russian', 'Iranian', 'Jamaican', 'Kenyan', 'dark-skinned', or 'pale-skinned'.



You can also get different styles of characters by including names typical of different countries and genders in the prompt, such as Han Meimei (China), Sophie Martin (France), Priya Patel (India), Fatima Al-Hassan (Arab), or Wanjiru Mwangi (Kenya). These prompts are just examples; many other prompts and combinations work, and you are welcome to explore and share your own.
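The keyword-based steering described above is just string assembly; a minimal sketch of a helper that builds such a prompt (the function and its names are illustrative, not part of any imagepipeline API):

```python
def build_portrait_prompt(subject="1girl", descriptors=None, styles=None):
    """Assemble a comma-separated SD prompt from a subject,
    nationality/skin-tone descriptors, and quality/style tags."""
    parts = [subject]
    parts += list(descriptors or [])
    parts += list(styles or [])
    return ", ".join(parts)

prompt = build_portrait_prompt(
    subject="close-up portrait of Wanjiru Mwangi",
    descriptors=["Kenyan", "dark-skinned"],
    styles=["raw photo", "studio light"],
)
print(prompt)
```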



HelloWorld 2.0 balances quality and color and offers more style options. Version 1.0, when used with 'leogirl', tended to produce images with a strong film texture. HelloWorld 2.0 is no longer tied to a film texture and can be steered with quality-related prompts. Prompts that have been tested and work well include:

high-end fashion photoshoot, product introduction photo, popular Korean makeup, aegyo sal, Sharp High-Quality Photo, studio light, medium format photo, Mamiya photography, analog film, Medium Portrait with Soft Light, real-life image, refined editorial photograph, raw photo, real photo, Scanned Photo, film still




The training set for HelloWorld 2.0 significantly increased the proportion of full-body photos to improve SDXL's results on full-body and distant-view portraits. Although this has improved over version 1.0, it is still strongly recommended to use 'adetailer' when generating full-body photos. For users with enough video memory (24 GB), a 1.5x high-resolution (hires) fix is also recommended, as it can significantly improve facial details.
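The 1.5x hires fix mentioned above regenerates the image at a larger target resolution; a quick sketch of the arithmetic involved (1024x1024 is assumed here as a typical SDXL base resolution, which this card does not itself specify):

```python
# 1.5x hires-fix target resolution from an assumed SDXL base size.
base_w, base_h = 1024, 1024
scale = 1.5
hires_w, hires_h = int(base_w * scale), int(base_h * scale)
print(f"{base_w}x{base_h} -> {hires_w}x{hires_h}")  # 1024x1024 -> 1536x1536
```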




[![Try this model](https://img.shields.io/badge/try_this_model-image_pipeline-BD9319)](https://imagepipeline.io/models/94026682-87e0-4531-84f4-91778cb210a1/) 




## How to try this model?

You can try using it locally or send an API call to test the output quality.

Get your `API_KEY` from  [imagepipeline.io](https://imagepipeline.io/). No payment required.





Coding in `php`, `javascript`, `node`, etc.? Check out our documentation:

[![documentation](https://img.shields.io/badge/documentation-image_pipeline-blue)](https://docs.imagepipeline.io/docs/introduction) 


```python
import requests
import json

url = "https://imagepipeline.io/sdxl/text2image/v1/run"

payload = json.dumps({
    "model_id": "94026682-87e0-4531-84f4-91778cb210a1",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": False,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "embeddings": "",
    "lora_models": "",
    "lora_weights": ""
})

headers = {
    "Content-Type": "application/json",
    "API-Key": "your_api_key"
}

response = requests.post(url, headers=headers, data=payload)

print(response.text)
```
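Before using the response body, it is worth handling failures defensively. The exact JSON schema returned by the endpoint is not documented in this card, so the helper below is a hypothetical sketch that only checks for an `error`-style key:

```python
import json

def parse_generation_response(text):
    """Hypothetical response handler: the endpoint's real JSON schema is
    not documented here, so only check for an 'error' key defensively
    and otherwise return the parsed dict unchanged."""
    data = json.loads(text)
    if isinstance(data, dict) and data.get("error"):
        raise RuntimeError(f"generation failed: {data['error']}")
    return data

# Stand-in payload, not real API output:
print(parse_generation_response('{"status": "ok"}'))
```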

Get more ready-to-use models like this for `SD 1.5` and `SDXL`:

[![All models](https://img.shields.io/badge/Get%20All%20Models-image_pipeline-BD9319)](https://imagepipeline.io/models) 

### API Reference

#### Generate Image

```http
  POST https://api.imagepipeline.io/sdxl/text2image/v1
```

| Headers               | Type     | Description                                                                                                        |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key`             | `str` | Get your `API_KEY` from  [imagepipeline.io](https://imagepipeline.io/)                                             |
| `Content-Type`        | `str` | application/json - content type of the request body |


| Parameter | Type     | Description                |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in  [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in the models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
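The documented ranges above can be sanity-checked client-side before sending a request. This is a hypothetical helper, not part of any imagepipeline SDK; it only enforces the ranges stated in the table:

```python
def validate_payload(payload):
    """Check a text2image payload against the documented parameter
    ranges: num_inference_steps in [1, 50], guidance_scale in [1, 20],
    and a non-empty model_id."""
    if not payload.get("model_id"):
        raise ValueError("model_id is required")
    steps = int(payload.get("num_inference_steps", 30))
    if not 1 <= steps <= 50:
        raise ValueError("num_inference_steps must be in [1, 50]")
    scale = float(payload.get("guidance_scale", 7.5))
    if not 1 <= scale <= 20:
        raise ValueError("guidance_scale must be in [1, 20]")
    return payload
```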


### Feedback 

If you have any feedback, please reach out to us at hello@imagepipeline.io


#### 🔗 Visit Website
[![portfolio](https://img.shields.io/badge/image_pipeline-BD9319?style=for-the-badge&logo=gocd&logoColor=white)](https://imagepipeline.io/)


If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits