clementchadebec committed
Commit
92ca95d
1 Parent(s): 7239d5a

Update README.md

Files changed (1)
  1. README.md +16 -0
README.md CHANGED
@@ -60,6 +60,22 @@ image
  <img style="width:500px;" src="examples/output.jpg">
  </p>

+ 💡 Note: You can compute the conditioning map using, for instance, the `MidasDetector` from the `controlnet_aux` library:
+
+ ```python
+ from controlnet_aux import MidasDetector
+ from diffusers.utils import load_image
+
+ midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
+
+ # Load an image
+ im = load_image(
+     "https://huggingface.co/jasperai/Flux.1-dev-Controlnet-Depth/resolve/main/examples/output.jpg"
+ )
+
+ # Compute the depth conditioning map
+ depth_map = midas(im)
+ ```
+
  # Training
  This model was trained with depth maps computed with [Clipdrop's depth estimator model](https://clipdrop.co/apis/docs/portrait-depth-estimation) as well as open-source depth estimation models such as Midas or Leres.
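Before a raw depth prediction from an estimator such as Midas or Leres can serve as a conditioning map, it is typically min-max normalized and rendered as an 8-bit grayscale image. The sketch below is a minimal, hypothetical illustration of that normalization step; the `depth_to_conditioning` helper is an assumption for illustration, not part of this model's actual training code.

```python
import numpy as np
from PIL import Image

def depth_to_conditioning(depth: np.ndarray) -> Image.Image:
    """Min-max normalize a raw 2-D float depth array to an 8-bit grayscale image.

    Illustrative helper only (not the model's actual preprocessing code):
    `depth` stands in for the raw per-pixel relative depth output of a
    Midas-style estimator.
    """
    d_min, d_max = depth.min(), depth.max()
    # Scale to [0, 1], guarding against a constant (zero-range) depth map
    norm = (depth - d_min) / max(d_max - d_min, 1e-8)
    # Quantize to 8-bit grayscale ("L" mode) for use as a conditioning image
    return Image.fromarray((norm * 255.0).astype(np.uint8), mode="L")

# Example: a tiny synthetic 4x4 depth ramp
depth = np.linspace(0.0, 10.0, 16, dtype=np.float32).reshape(4, 4)
cond = depth_to_conditioning(depth)
```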