These are the full weights; quants from TheBloke are already incoming. I'll update this post when they're fully uploaded.

From the author(s):

WizardLM-70B V1.0 achieves a substantial and comprehensive improvement in coding, mathematical reasoning, and open-domain conversation capabilities.

This model is license-friendly and follows the same license as Meta's Llama-2.

The next version is in training and will be released together with our new paper soon.

For more details, please refer to:

Model weight: https://huggingface.co/WizardLM/WizardLM-70B-V1.0

Demo and Github: https://github.com/nlpxucan/WizardLM

Twitter: https://twitter.com/WizardLM_AI

GGML quant posted: https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GGML

GPTQ quant repo posted, but still empty (GPTQ quants take a lot longer to produce): https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ
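For anyone wondering which quant they can actually fit in RAM, here's a rough back-of-envelope sketch. The bits-per-weight figures are approximations of the GGML block formats (packed low-bit weights plus per-block scales); real files also carry some unquantized tensors, so treat these as lower-bound estimates, and the 70e9 parameter count is the nominal Llama-2-70B size.

```python
# Rough size estimate for common GGML quants of a 70B model.
# Bits-per-weight values are approximate block-format costs
# (e.g. q4_0: 32 x 4-bit weights + one fp16 scale per block).

PARAMS = 70e9  # nominal Llama-2-70B parameter count

BITS_PER_WEIGHT = {
    "q4_0": 4.5,   # 4-bit weights + per-block scale
    "q5_0": 5.5,   # 5-bit weights + per-block scale
    "q8_0": 8.5,   # 8-bit weights + per-block scale
    "f16":  16.0,  # unquantized half precision, for comparison
}

def approx_size_gb(quant: str) -> float:
    """Approximate on-disk (and in-RAM) size in gigabytes."""
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 1e9

for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{approx_size_gb(quant):.0f} GB")
```

So a q4 quant of a 70B model lands around 40 GB and squeezes into 64 GB of system RAM (with room for KV cache and the OS), while q8 and f16 do not.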

  • ffhein@lemmy.world · 1 year ago

    Me a few months ago when upgrading my computer: pff, who needs 64GB of RAM? Seems like a total waste

    Me after realising you can run LLMs at home: cries