I’ve been messing around with GPTQ models using ExLlama in ooba, and have gotten 33B models running smoothly at 3k context, but I was looking to try something bigger than my VRAM can hold.

However, I’m clearly doing something wrong, and the koboldcpp.exe documentation isn’t clear to me. Does anyone have a good setup guide? My understanding is that koboldcpp.exe is preferable for GGML, since ooba’s llama.cpp loader doesn’t support GGML at >4k context yet.

  • Magiwarriorx@lemmy.world (OP)

    Note this is koboldcpp.exe and not KoboldAI.

    The GitHub page describes the arguments for enabling GPU acceleration, but it is fuzzy on what the arguments do and completely neglects to mention what their values mean. I understand the --gpulayers arg, but the two ints after --useclblast are lost on me. I defaulted to “[path]\koboldcpp.exe --useclblast 0 0 --gpulayers 40”, but it seems to be completely ignoring GPU acceleration, and I’m clueless where the problem lies. I figured it would be easier to ask for a guide and just start my GGML setup from scratch.

    • actually-a-cat@sh.itjust.works
      Those are the OpenCL platform and device identifiers; you can use clinfo to find out which numbers correspond to which platform and device on your system.
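
      For example, something like this (a sketch; the clinfo listing and model filename below are illustrative, and your platform/device numbers will differ):

          clinfo -l
          Platform #0: Intel(R) OpenCL
           +-- Device #0: Intel(R) Core(TM) i7 CPU
          Platform #1: NVIDIA CUDA
           +-- Device #0: NVIDIA GeForce RTX 3090

      Here the GPU is platform 1, device 0, so the launch would be:

          koboldcpp.exe --useclblast 1 0 --gpulayers 40 --model some-33b.ggmlv3.q4_K_M.bin

      If --useclblast 0 0 happens to point at a CPU OpenCL device (as in the listing above), the GPU gets silently ignored, which would explain what you’re seeing.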

      Also note that if you’re building koboldcpp yourself, you need to build with LLAMA_CLBLAST=1 for OpenCL support to exist in the first place, or LLAMA_CUBLAS=1 for CUDA.
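
      A minimal sketch of a self-build with OpenCL enabled (assuming the usual make-based build from the koboldcpp repo; adjust for your toolchain):

          git clone https://github.com/LostRuins/koboldcpp
          cd koboldcpp
          make LLAMA_CLBLAST=1
          # or, for NVIDIA CUDA instead:
          make LLAMA_CUBLAS=1

      The prebuilt koboldcpp.exe releases already ship with CLBlast support, so this only matters if you’re compiling from source.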