bconsolvo and Chesebrough committed
Commit 6546f18
Parent(s): 2563fd1

Update README.md (#2)


- Update README.md (e6802a40694d0dfdf6476717bed465fe9b689231)


Co-authored-by: bob chesebrough <Chesebrough@users.noreply.huggingface.co>

Files changed (1): README.md (+3 −1)
README.md CHANGED
@@ -42,11 +42,12 @@ Swin Transformer achieves strong performance on COCO object detection (58.7 box
 
 # Videos
 
-![MiDaS Depth Estimation | Intel Technology](https://cdn-uploads.huggingface.co/production/uploads/641bd18baebaa27e0753f2c9/u-KwRFIQhMWiFraSTTBkc.png)
 
+[![MiDaS Depth Estimation - Intel Technology](https://img.youtube.com/vi/UjaeNNFf9sE/0.jpg)](https://www.youtube.com/watch?v=UjaeNNFf9sE)
 MiDaS Depth Estimation is a machine learning model from Intel Labs for monocular depth estimation. It was trained on up to 12 datasets and covers both indoor and outdoor scenes. Multiple MiDaS models are available, ranging from high-quality depth estimation to lightweight models for mobile downstream tasks (https://github.com/isl-org/MiDaS).
 
 
+
 ## Model description
 
 This MiDaS 3.1 DPT model uses [SwinV2](https://huggingface.co/docs/transformers/en/model_doc/swinv2) as its backbone, taking a different approach to vision than BEiT: Swin backbones focus on a hierarchical approach.
@@ -120,6 +121,7 @@ depth
 or one can use the pipeline API:
 from transformers import pipeline
 
+```python
 pipe = pipeline(task="depth-estimation", model="Intel/dpt-swinv2-large-384")
 result = pipe("http://images.cocodataset.org/val2017/000000181816.jpg")
 result["depth"]
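For context on the model-description line in the first hunk, here is a minimal sketch of loading this checkpoint and running it directly, without the pipeline. It assumes the `transformers` DPT classes (`AutoImageProcessor`, `DPTForDepthEstimation`) accept this checkpoint, as the model card's usage section suggests; the variable names are illustrative.

```python
# Minimal sketch (assumes transformers, torch, Pillow, and requests are installed).
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DPTForDepthEstimation

checkpoint = "Intel/dpt-swinv2-large-384"  # checkpoint named in the diff
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = DPTForDepthEstimation.from_pretrained(checkpoint)  # DPT head on a SwinV2 backbone

url = "http://images.cocodataset.org/val2017/000000181816.jpg"  # image used in the diff
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Relative (inverse) depth map, shape (batch, height, width).
predicted_depth = outputs.predicted_depth
```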
 
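And the pipeline snippet that this commit wraps in a ```python fence, shown here as a self-contained version; the closing fence and the final save step are added for illustration and are not part of the README.

```python
from transformers import pipeline

# Depth-estimation pipeline using the checkpoint named in the README.
pipe = pipeline(task="depth-estimation", model="Intel/dpt-swinv2-large-384")

# The pipeline accepts an image URL directly; result["depth"] is a PIL image.
result = pipe("http://images.cocodataset.org/val2017/000000181816.jpg")
result["depth"].save("depth.png")  # illustrative; the README just displays result["depth"]
```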