#6 · The output is "!!!!!!!" when using this FP8 checkpoint in the Docker image nvcr.io/nvidia/pytorch:24.07-py3 · 1 reply · opened 8 days ago by Bobcuicui
#5 · Not able to use it with TGI · 1 reply · opened 29 days ago by Alokgupta96
#4 · Does this model only work on CUDA devices with compute capability >= 9.0 or 8.9 / ROCm MI300+? · 1 reply · opened about 1 month ago by jcfasi
#2 · How to do fast inference with FP8 · 1 reply · opened about 2 months ago by CCRss