rwightman (HF staff) committed
Commit f1c4893
1 Parent(s): b530736

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -30,7 +30,7 @@ The models utilize:

 This 320x320 resolution model is a fine-tune of [CLIP-convnext_large_d.laion2B-s26B-b102K-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) at a higher resolution. It was fine-tuned from the final checkpoint of the original 256x256 training run w/ an additional ~2.5B samples and a lower learning rate.

-At 320x320, the ConvNeXt-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned. The L/14-336 model uses 2.5x the GMACs, 2.8x the activations, and 1.22x the parameters. The ConvNeXt was trained with 26B samples-seen and the L/14 with 34B.
+At 320x320, the ConvNeXt-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned. The L/14-336 model uses 2.5x the GMACs, 2.8x the activations, and 1.22x the parameters.

 | Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
 | ----- | ------- | ---------- | ------------ | --------- |
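
If you want to sanity-check the parameter ratio quoted in the edited line, a minimal sketch using open_clip is shown below. It is an illustration, not part of the model card or this commit: the config names `convnext_large_d_320` and `ViT-L-14-336` are assumed to exist in the installed `open_clip_torch` release, and no weights are downloaded since only the architectures are needed for a parameter count.

```python
# Rough sanity check of the parameter comparison above, assuming
# open_clip_torch is installed and ships model configs named
# 'convnext_large_d_320' and 'ViT-L-14-336'.
import open_clip
import torch

# Build both architectures with random weights (no checkpoint download
# is needed just to count parameters).
convnext, _, _ = open_clip.create_model_and_transforms('convnext_large_d_320')
vit_l14_336, _, _ = open_clip.create_model_and_transforms('ViT-L-14-336')

def count_params(model: torch.nn.Module) -> int:
    # Total number of parameters across image and text towers.
    return sum(p.numel() for p in model.parameters())

n_convnext = count_params(convnext)
n_vit = count_params(vit_l14_336)
print(f"convnext_large_d_320: {n_convnext / 1e6:.1f}M params")
print(f"ViT-L-14-336:         {n_vit / 1e6:.1f}M params")
print(f"L/14-336 / ConvNeXt param ratio: {n_vit / n_convnext:.2f}x")
```

GMACs and activation counts would need a profiler (for example, running a FLOP counter over `model.visual` at the respective input resolutions); they are omitted here to keep the sketch minimal.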