Is it just memory bandwidth? Or is it that AMD isn't supported well enough by PyTorch for most products? Or some combination of the two?

  • Kerfuffle@sh.itjust.works · 1 year ago

    If you’re using llama.cpp, some ROCm support recently got merged in. It works pretty well, at least on my 6600. I believe the pull request had instructions for getting it working on Windows.
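
    For anyone who wants to try it from Python, here's a minimal sketch using the llama-cpp-python bindings (not something from the pull itself, just an illustration; it assumes the bindings were compiled against a ROCm/hipBLAS-enabled build of llama.cpp, and the model path and layer count are placeholders):

        from llama_cpp import Llama

        # Assumes llama-cpp-python was built against a ROCm/hipBLAS-enabled
        # llama.cpp; without GPU support, n_gpu_layers is ignored and
        # inference falls back to the CPU.
        llm = Llama(
            model_path="models/7b-q4_0.gguf",  # placeholder model path
            n_gpu_layers=32,  # layers to offload to the GPU; tune for your VRAM
        )

        out = llm("Q: What limits local LLM inference speed? A:", max_tokens=64)
        print(out["choices"][0]["text"])

    How many layers you can offload on a 6600 will depend on what fits in its 8 GB of VRAM.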