Clement committed on
Commit 4ca2bc1
1 Parent(s): b4a77d0

replace depth anything v1 with v2 safetensors

README.md CHANGED
@@ -1,65 +1,108 @@
  ---
  license: cc-by-nc-4.0
-
- language:
- - en
- pipeline_tag: depth-estimation
- library_name: depth-anything-v2
  tags:
  - depth
  - relative depth
  ---

- # Depth-Anything-V2-Large

- ## Introduction
  Depth Anything V2 is trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:
  - more fine-grained details than Depth Anything V1
  - more robust than Depth Anything V1 and SD-based models (e.g., Marigold, Geowizard)
  - more efficient (10x faster) and more lightweight than SD-based models
  - impressive fine-tuned performance with our pre-trained models

- ## Installation

- ```bash
- git clone https://huggingface.co/spaces/depth-anything/Depth-Anything-V2
- cd Depth-Anything-V2
- pip install -r requirements.txt
- ```

- ## Usage

- Download the [model](https://huggingface.co/depth-anything/Depth-Anything-V2-Large/resolve/main/depth_anything_v2_vitl.pth?download=true) first and put it under the `checkpoints` directory.

  ```python
- import cv2
  import torch

- from depth_anything_v2.dpt import DepthAnythingV2

- model = DepthAnythingV2(encoder='vitl', features=256, out_channels=[256, 512, 1024, 1024])
- model.load_state_dict(torch.load('checkpoints/depth_anything_v2_vitl.pth', map_location='cpu'))
- model.eval()

- raw_img = cv2.imread('your/image/path')
- depth = model.infer_image(raw_img) # HxW raw depth map
  ```

- ## Citation

- If you find this project useful, please consider citing:

  ```bibtex
- @article{depth_anything_v2,
-   title={Depth Anything V2},
-   author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Zhao, Zhen and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
-   journal={arXiv:2406.09414},
-   year={2024}
  }
-
- @inproceedings{depth_anything_v1,
-   title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
-   author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
-   booktitle={CVPR},
-   year={2024}
- }

  ---
+ library_name: transformers
+ library: transformers
  license: cc-by-nc-4.0
  tags:
  - depth
  - relative depth
+ pipeline_tag: depth-estimation
+ widget:
+ - inference: false
  ---

+ # Depth Anything V2 Large – Transformers Version

  Depth Anything V2 is trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:
  - more fine-grained details than Depth Anything V1
  - more robust than Depth Anything V1 and SD-based models (e.g., Marigold, Geowizard)
  - more efficient (10x faster) and more lightweight than SD-based models
  - impressive fine-tuned performance with our pre-trained models

+ This model checkpoint is compatible with the transformers library.

+ Depth Anything V2 was introduced in [the paper of the same name](https://arxiv.org/abs/2406.09414) by Lihe Yang et al. It uses the same architecture as the original Depth Anything release, but is trained on synthetic data with a larger-capacity teacher model to achieve much finer and more robust depth predictions. The original Depth Anything model was introduced in the paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang et al., and was first released in [this repository](https://github.com/LiheYoung/Depth-Anything).
+
+ [Online demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-V2).
+
+ ## Model description
+
+ Depth Anything V2 leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2) backbone.
+
+ The model is trained on ~600K synthetic labeled images and ~62 million real unlabeled images, obtaining state-of-the-art results for both relative and absolute depth estimation.
+
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
+ alt="drawing" width="600"/>
+
+ <small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>
+
+ ## Intended uses & limitations
+
+ You can use the raw model for tasks like zero-shot depth estimation. See the [model hub](https://huggingface.co/models?search=depth-anything) to look for
+ other versions on a task that interests you.

+ ### How to use

+ Here is how to use this model to perform zero-shot depth estimation:

  ```python
+ from transformers import pipeline
+ from PIL import Image
+ import requests
+
+ # load pipe
+ pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Large-hf")
+
+ # load image
+ url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ # inference
+ depth = pipe(image)["depth"]
+ ```
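The pipeline output is a dict: `"depth"` is a ready-to-view PIL image and `"predicted_depth"` is the raw tensor. A minimal follow-up sketch, reusing the `pipe` and `image` objects from the snippet above:

```python
# the depth-estimation pipeline returns both a raw tensor and a visualization-ready image
result = pipe(image)
predicted_depth = result["predicted_depth"]  # torch.Tensor with the model's relative depth
result["depth"].save("depth.png")            # PIL.Image, already rescaled to 0-255
```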
+
+ Alternatively, you can use the model and processor classes:
+
+ ```python
+ from transformers import AutoImageProcessor, AutoModelForDepthEstimation
  import torch
+ import numpy as np
+ from PIL import Image
+ import requests
+
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ image_processor = AutoImageProcessor.from_pretrained("depth-anything/Depth-Anything-V2-Large-hf")
+ model = AutoModelForDepthEstimation.from_pretrained("depth-anything/Depth-Anything-V2-Large-hf")

+ # prepare image for the model
+ inputs = image_processor(images=image, return_tensors="pt")

+ with torch.no_grad():
+     outputs = model(**inputs)
+     predicted_depth = outputs.predicted_depth

+ # interpolate to original size
+ prediction = torch.nn.functional.interpolate(
+     predicted_depth.unsqueeze(1),
+     size=image.size[::-1],
+     mode="bicubic",
+     align_corners=False,
+ )
  ```
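One way to turn the interpolated `prediction` into a viewable image, sketched with the `np` and `Image` imports already present in the block above:

```python
# drop the batch and channel dimensions and move to NumPy
output = prediction.squeeze().cpu().numpy()

# normalize the relative depth to 0-255 and save it as an 8-bit grayscale image
formatted = (output * 255 / np.max(output)).astype("uint8")
Image.fromarray(formatted).save("depth.png")
```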

+ For more code examples, please refer to the [documentation](https://huggingface.co/transformers/main/model_doc/depth_anything.html#).

+
+ ### Citation

  ```bibtex
+ @misc{yang2024depth,
+   title={Depth Anything V2},
+   author={Lihe Yang and Bingyi Kang and Zilong Huang and Zhen Zhao and Xiaogang Xu and Jiashi Feng and Hengshuang Zhao},
+   year={2024},
+   eprint={2406.09414},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV}
  }
+ ```

config.json ADDED
@@ -0,0 +1,81 @@
+ {
+   "_commit_hash": null,
+   "architectures": [
+     "DepthAnythingForDepthEstimation"
+   ],
+   "backbone": null,
+   "backbone_config": {
+     "architectures": [
+       "Dinov2Model"
+     ],
+     "hidden_size": 1024,
+     "image_size": 518,
+     "model_type": "dinov2",
+     "num_attention_heads": 16,
+     "num_hidden_layers": 24,
+     "out_features": [
+       "stage5",
+       "stage12",
+       "stage18",
+       "stage24"
+     ],
+     "out_indices": [
+       5,
+       12,
+       18,
+       24
+     ],
+     "patch_size": 14,
+     "reshape_hidden_states": false,
+     "stage_names": [
+       "stem",
+       "stage1",
+       "stage2",
+       "stage3",
+       "stage4",
+       "stage5",
+       "stage6",
+       "stage7",
+       "stage8",
+       "stage9",
+       "stage10",
+       "stage11",
+       "stage12",
+       "stage13",
+       "stage14",
+       "stage15",
+       "stage16",
+       "stage17",
+       "stage18",
+       "stage19",
+       "stage20",
+       "stage21",
+       "stage22",
+       "stage23",
+       "stage24"
+     ],
+     "torch_dtype": "float32"
+   },
+   "fusion_hidden_size": 256,
+   "head_hidden_size": 32,
+   "head_in_index": -1,
+   "initializer_range": 0.02,
+   "model_type": "depth_anything",
+   "neck_hidden_sizes": [
+     256,
+     512,
+     1024,
+     1024
+   ],
+   "patch_size": 14,
+   "reassemble_factors": [
+     4,
+     2,
+     1,
+     0.5
+   ],
+   "reassemble_hidden_size": 1024,
+   "torch_dtype": "float32",
+   "transformers_version": null,
+   "use_pretrained_backbone": false
+ }
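For reference, a minimal sketch of how these values surface through the transformers config API, assuming the `depth-anything/Depth-Anything-V2-Large-hf` repo id used in the README above:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("depth-anything/Depth-Anything-V2-Large-hf")

print(config.model_type)                   # "depth_anything"
print(config.backbone_config.model_type)   # "dinov2" backbone
print(config.backbone_config.hidden_size)  # 1024 (ViT-L sized encoder)
print(config.neck_hidden_sizes)            # [256, 512, 1024, 1024]
print(config.reassemble_factors)           # [4, 2, 1, 0.5]
```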
depth_anything_v2_vitl.pth → model.safetensors RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a7ea19fa0ed99244e67b624c72b8580b7e9553043245905be58796a608eb9345
- size 1341395338
  version https://git-lfs.github.com/spec/v1
+ oid sha256:4e01e34ed5549b529b70b92d53226bc370f03041977b390d3dde45d47f516cf9
+ size 1341322868
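Since the checkpoint is now stored as safetensors rather than a pickled `.pth` file, the weights can be inspected without executing any pickled code. A sketch, assuming `model.safetensors` has been downloaded locally:

```python
from safetensors.torch import load_file

# load the tensors directly from the safetensors file (no pickle involved)
state_dict = load_file("model.safetensors")
print(len(state_dict), "tensors")
print(sorted(state_dict)[:3])  # a few parameter names
```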
preprocessor_config.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "_valid_processor_keys": [
+     "images",
+     "do_resize",
+     "size",
+     "keep_aspect_ratio",
+     "ensure_multiple_of",
+     "resample",
+     "do_rescale",
+     "rescale_factor",
+     "do_normalize",
+     "image_mean",
+     "image_std",
+     "do_pad",
+     "size_divisor",
+     "return_tensors",
+     "data_format",
+     "input_data_format"
+   ],
+   "do_normalize": true,
+   "do_pad": false,
+   "do_rescale": true,
+   "do_resize": true,
+   "ensure_multiple_of": 14,
+   "image_mean": [
+     0.485,
+     0.456,
+     0.406
+   ],
+   "image_processor_type": "DPTImageProcessor",
+   "image_std": [
+     0.229,
+     0.224,
+     0.225
+   ],
+   "keep_aspect_ratio": true,
+   "resample": 3,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 518,
+     "width": 518
+   },
+   "size_divisor": null
+ }
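A short sketch of the preprocessing this config describes (resize so both sides are multiples of 14, rescale by 1/255, normalize with ImageNet statistics), again assuming the repo id from the README:

```python
from transformers import AutoImageProcessor
from PIL import Image
import requests

processor = AutoImageProcessor.from_pretrained("depth-anything/Depth-Anything-V2-Large-hf")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# resized (keeping aspect ratio) to side lengths that are multiples of the 14-pixel patch size,
# rescaled by 1/255 and normalized with the ImageNet mean/std above
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # e.g. torch.Size([1, 3, H, W]) with H and W divisible by 14
```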