Kay Ohtie 🔜 FWA

Canis Lat(ex)rans Inflatus! (inflatable latex coyote)

Some kind of weird 🎈:therian:.

Late 30s, :flag_mlm:, :valve:, :opensuse: 🔞

Yapping at my beloved walfdog! 🧡🧡 @mathias 🧡🧡

Baker, and learning guitar slowly!

Expect art and rambling about furry inflation, macro growth, latex, and nerdy nonsense.

Blimps.xyz admin; I help keep this place bouncy!

  • 4 Posts
  • 5 Comments
Joined 8 years ago
Cake day: March 20th, 2018

  • @ohlaph Home Assistant OS on a Dell Micro with an i5-6500T in it and 16 GB of RAM.

    Runs extremely well, just slow for ESPHome builds, so I don’t use the add-on anymore. Also, while TTS is plenty fast, I couldn’t use anything larger than tiny-int8 or base-int8 for faster-whisper. I offloaded that to my server with my old RTX 2070 in it, and it can run the turbo model for speech-to-text.
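    For anyone curious, the offload is roughly a Wyoming-protocol container pointed at the GPU box. This is a sketch from memory, not my exact config: the image tag, flags, and GPU stanza are assumptions (the stock rhasspy image may be CPU-only, so a CUDA-enabled build of wyoming-faster-whisper might be needed), so check the project docs before copying.

    ```yaml
    # Hypothetical sketch: wyoming-faster-whisper serving speech-to-text
    # over the Wyoming protocol on port 10300. Home Assistant's Wyoming
    # integration then connects to <server-ip>:10300.
    services:
      whisper:
        # Assumed image; a CUDA-capable build may be required for GPU use.
        image: rhasspy/wyoming-whisper
        # "turbo" here stands in for the large-v3-turbo model name.
        command: --model turbo --language en
        ports:
          - "10300:10300"
        volumes:
          - ./whisper-data:/data
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: 1
                  capabilities: [gpu]
    ```

    The point of the split is just that model downloads and inference happen on the 2070 instead of the i5-6500T, so the Home Assistant box stays responsive.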

    But no Ollama or similar, fuck using those. I’ve only ever gotten uselessness out of them and I ain’t paying someone else to use theirs to do the same thing just with slightly fewer incidents of “I didn’t find a device called <the thing you said but slightly out of order and now the exact same as it’s actually called>”.