Model failing to run

#557
by NobodyExistsOnTheInternet - opened

The model is almost certainly too big to be evaluated on the leaderboard. Even in 4-bit it would take at least a quarter of a terabyte of memory to run, and I don't think the leaderboard is set up with that kind of hardware.
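As a rough sanity check on that figure: a 4-bit quantized model needs about half a byte per parameter, so a quarter of a terabyte of weights corresponds to roughly 500B parameters. A minimal sketch of that arithmetic (the 500B figure is an illustrative assumption, not a size stated in this thread):

```python
def quantized_size_gb(num_params: float, bits_per_param: int = 4) -> float:
    """Rough weight-memory estimate: parameters * bits / 8.
    Ignores activation memory, KV cache, and quantization overhead."""
    return num_params * bits_per_param / 8 / 1e9

# A hypothetical ~500B-parameter model at 4-bit:
print(f"{quantized_size_gb(500e9):.0f} GB")  # ~250 GB of weights alone
```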

Open LLM Leaderboard org

Hi! Your models are too big to be supported on one node of our cluster (8 GPUs); the maximum we allow is around 100B parameters.
We occasionally run bigger models manually in a multi-node setup, but that's only something we do for base pretrained models (= not fine-tunes, not merges, ...), as they are the most useful for the community.
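For context on why ~100B parameters is roughly the single-node cutoff: evaluated in 16-bit precision, weights alone take about 2 bytes per parameter. A minimal sketch of that check, assuming 80 GB per GPU and a flat overhead margin (both assumptions for illustration, not details stated in this thread):

```python
def fits_on_node(num_params: float, bytes_per_param: float = 2.0,
                 gpus: int = 8, gpu_mem_gb: float = 80.0,
                 overhead_fraction: float = 0.2) -> bool:
    """Check whether model weights plus a rough overhead margin
    fit in the aggregate GPU memory of a single node."""
    weights_gb = num_params * bytes_per_param / 1e9
    return weights_gb * (1 + overhead_fraction) <= gpus * gpu_mem_gb

print(fits_on_node(100e9))  # True: ~200 GB of weights + margin on ~640 GB
print(fits_on_node(500e9))  # False: ~1 TB of weights in 16-bit
```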

clefourrier changed discussion status to closed
