Adapt to the new transformers release | adapt transformers update (https://github.com/huggingface/transformers/pull/31116)
#58 opened 3 days ago
by
HibernantBear
Alternative quantizations.
#57 opened 3 days ago
by
ZeroWw
Harness Evaluation
2
#56 opened 4 days ago
by
VityaVitalich
[AUTOMATED] Model Memory Requirements
#55 opened 8 days ago
by
model-sizer-bot
Resolved
#54 opened 10 days ago
by
zhongyi1997cn
huggingface_hub.utils._errors.FileMetadataError: Distant resource does not have a Content-Length.
1
#49 opened 15 days ago
by
alxemade
Why GLM3 is better than GLM4 on LVEval benchmark?
1
#48 opened 15 days ago
by
AnaRhisT
ETA for Flash Attention 2.0 Support in ChatGLMForConditionalGeneration
1
#46 opened 18 days ago
by
frank098
Could you provide a tool-calling prompt template?
#42 opened 23 days ago
by
WateBear
Model license
#37 opened 25 days ago
by
Andrewzhu100
What does "open source" mean? Need info on source code, training data, fine-tuning data
#36 opened 25 days ago
by
markding
Function calling and image features do not work in lobechat
#35 opened 27 days ago
by
jackies
FIX autogptq compat
#28 opened 29 days ago
by
Qubitium
Multiple/Parallel function call?
1
#27 opened 29 days ago
by
Yhyu13
[ISSUE] forward() requires input_ids even if inputs_embeds is provided alternatively
#23 opened 30 days ago
by
x5fu
Please provide a GGUF version
12
#17 opened 30 days ago
by
windkkk
FlashAttention only supports Ampere GPUs or newer.
#13 opened about 1 month ago
by
GuokLIU
Why was the quantizer part removed?
4
#10 opened about 1 month ago
by
fukai
Errors when running inference with transformers:
#4 opened about 1 month ago
by
shams123321
Has the prompt template changed?
2
#1 opened about 1 month ago
by
okcwang