hysts (HF staff) committed
Commit 4a2222c
1 parent: 36fb353

commit files to HF hub

Files changed (1)
  1. papers.csv +1 -1
papers.csv CHANGED
@@ -1548,7 +1548,7 @@ TripLe: Revisiting Pretrained Model Reuse and Progressive Learning for Efficient
  DiffRate : Differentiable Compression Rate for Efficient Vision Transformers,"Chen, Mengzhao*; Shao, Wenqi; Xu, Peng; Lin, Mingbao; Zhang, Kaipeng; Chao, Fei; Ji, Rongrong; Qiao, Yu; Luo, Ping",poster,2305.17997,https://arxiv.org/abs/2305.17997,https://github.com/OpenGVLab/DiffRate,https://huggingface.co/papers/2305.17997,,,,9,0
  Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection,"Yang, Longrong; Zhou, Xianpan; Li, Xuewei; Qiao, Liang; Li, Zheyang; Yang, Ziwei; Wang, Gaoang; Li, Xi*",poster,2308.14286,https://arxiv.org/abs/2308.14286,https://github.com/TinyTigerPan/BCKD,https://huggingface.co/papers/2308.14286,,,,8,0
  From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels,"Yang, Zhendong*; Zeng, Ailing; Li, Zhe; Zhang, Tianke; Yuan, Chun; Li, Yu",poster,2303.13005,https://arxiv.org/abs/2303.13005,https://github.com/yzd-v/cls_KD,https://huggingface.co/papers/2303.13005,,,,6,0
- Efficient 3D Semantic Segmentation with Superpoint Transformer,"ROBERT, Damien*; Raguet, Hugo; Landrieu, Loic",poster,2306.08045,https://arxiv.org/abs/2306.08045,,https://huggingface.co/papers/2306.08045,,,,3,0
+ Efficient 3D Semantic Segmentation with Superpoint Transformer,"ROBERT, Damien*; Raguet, Hugo; Landrieu, Loic",poster,2306.08045,https://arxiv.org/abs/2306.08045,,https://huggingface.co/papers/2306.08045,,,,3,1
  Dataset Quantization,"Zhou, Daquan; Wang, Kai*; Gu, Jianyang; Peng, Xiangyu; Lian, Dongze; Zhang, Yifan; You, Yang; Feng, Jiashi",poster,2308.10524,https://arxiv.org/abs/2308.10524,,https://huggingface.co/papers/2308.10524,,,,8,0
  Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy,"Jie, Shibo*; Wang, Haoqing; Deng, Zhi-Hong",poster,2307.16867,https://arxiv.org/abs/2307.16867,https://github.com/JieShibo/PETL-ViT,https://huggingface.co/papers/2307.16867,,,,3,0
  RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers,"Li, Zhikai*; Xiao, Junrui; Yang, Lianwei; Gu, Qingyi",poster,,,,,,,,,
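The only substantive change in this hunk is the last numeric field of the Superpoint Transformer row (0 → 1). Judging by position, the two trailing fields look like per-paper counts tied to the linked Hugging Face paper page, but the CSV ships without a documented schema, so that reading is an assumption. A minimal sketch for pulling the changed row out of papers.csv with only the standard library, under those column assumptions:

```python
import csv

# Minimal sketch: locate the row this commit touched and print its
# trailing count fields. Column meanings are inferred from the diff
# (title first, two numeric count fields last), not from a documented schema.
with open("papers.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        if row and row[0].startswith("Efficient 3D Semantic Segmentation"):
            print(row[0])            # paper title
            print(row[-2], row[-1])  # trailing counts; the last one changed 0 -> 1
```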