---
license: llama2
---
- We released WizardCoder-15B-V1.0, which surpasses Claude-Plus (+6.8), Bard (+15.3), and InstructCodeT5+ (+22.3) on the HumanEval benchmark. For more details, please refer to WizardCoder.
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- | ---------- | ----- | --------- | ---- | ---- | ------- |
| WizardCoder-15B-V1.0 | 🤗 HF Link | 📃 [WizardCoder] | 57.3 | 51.8 | | OpenRAIL-M |
- Our WizardMath-70B-V1.0 model slightly outperforms some closed-source LLMs on GSM8K, including ChatGPT 3.5, Claude Instant 1, and PaLM 2 540B.
- Our WizardMath-70B-V1.0 model achieves 81.6 pass@1 on the GSM8K benchmark, 24.8 points higher than the SOTA open-source LLM, and 22.7 pass@1 on the MATH benchmark, 9.2 points higher than the SOTA open-source LLM.
| Model | Checkpoint | Paper | GSM8K | MATH | Online Demo | License |
| ----- | ---------- | ----- | ----- | ---- | ----------- | ------- |
| WizardMath-70B-V1.0 | 🤗 HF Link | 📃 [WizardMath] | 81.6 | 22.7 | Demo | Llama 2 |
| WizardMath-13B-V1.0 | 🤗 HF Link | 📃 [WizardMath] | 63.9 | 14.0 | Demo | Llama 2 |
| WizardMath-7B-V1.0 | 🤗 HF Link | 📃 [WizardMath] | 54.9 | 10.7 | Demo | Llama 2 |
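The pass@1 scores above are typically computed with the unbiased pass@k estimator introduced in the HumanEval paper. A minimal sketch of that estimator (the function name and prints are illustrative, not part of the official evaluation harness):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n generations (c of which are correct) passes.
    pass@k = 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        # Fewer incorrect samples than k: some draw must include a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k = 1 the estimator reduces to the fraction of correct samples, c / n.
print(pass_at_k(10, 5, 1))  # 0.5
```

Reporting "81.6 pass@1" thus means that, averaged over the GSM8K problems, 81.6% of single sampled solutions are correct.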
| Model | Checkpoint | Paper | MT-Bench | AlpacaEval | GSM8K | HumanEval | License |
| ----- | ---------- | ----- | -------- | ---------- | ----- | --------- | ------- |
| WizardLM-70B-V1.0 | 🤗 HF Link | 📃 Coming Soon | 7.78 | 92.91% | 77.6% | 50.6 | Llama 2 License |
| WizardLM-13B-V1.2 | 🤗 HF Link | | 7.06 | 89.17% | 55.3% | 36.6 | Llama 2 License |
| WizardLM-13B-V1.1 | 🤗 HF Link | | 6.76 | 86.32% | | 25.0 | Non-commercial |
| WizardLM-30B-V1.0 | 🤗 HF Link | | 7.01 | | | 37.8 | Non-commercial |
| WizardLM-13B-V1.0 | 🤗 HF Link | | 6.35 | 75.31% | | 24.0 | Non-commercial |
| WizardLM-7B-V1.0 | 🤗 HF Link | 📃 [WizardLM] | | | | 19.1 | Non-commercial |
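To query any of the chat models above, the prompt must follow the template the model was tuned on. A minimal sketch of the Vicuna-style template that WizardLM V1.x chat models reportedly use; the exact system line and the `build_prompt` helper are assumptions here, so verify against the specific model card before use:

```python
# Assumed Vicuna-style system preamble; check the model card for the exact text.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the assumed chat template; generation
    should then continue after the trailing 'ASSISTANT:' marker."""
    return f"{SYSTEM} USER: {instruction} ASSISTANT:"

print(build_prompt("Write a haiku about code."))
```

A malformed template usually degrades output quality noticeably, so this formatting step matters as much as decoding settings.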