• 1 Post
  • 432 Comments
Joined 2 years ago
Cake day: June 11th, 2024




  • If only a giant corporation like Steam made their own Linux-based OS and poured huge resources into an open-source compatibility layer (maybe even call it Proton). That would move more people to Linux and create more incentive for third-party developers to ship Linux versions of their applications, because of the growing market share of end users on Linux. That would be exciting… 👀👀👀







  • StinkySocialist@lemmy.ml to Steam Hardware@sopuli.xyz · Steam Machine · 1 month ago

    I’m seriously stoked about this, even though I’m not planning to buy any of the new hardware! It all comes down to the fact that Valve’s hardware projects force them to pump huge resources into open-source development, and we all get the benefits. That means the compatibility tools like Proton—which are essential for the high-end Steam Machine and Steam Frame—are immediately available to my desktop rig. By pushing Linux into the living room, VR, and the high-performance space, they’re pressuring game developers to finally treat Linux as a serious platform. Basically, Valve’s huge investment accelerates development and developer adoption, which makes my own Linux desktop way better and will hopefully get more people into Linux and open source.





  • Had to google it to check, but you’re right: the sum of all the energy used to prompt a model over its lifetime is usually greater than what’s needed to train it in the first place.

    I didn’t know that, but it makes sense. I meant more that prompting the thing once isn’t that big of an energy drain, whereas the initial training is: an average of around 0.34 watt-hours per prompt, versus roughly 1.3 gigawatt-hours (GWh) to train GPT-3 and an estimated 62.3 GWh for GPT-4. I keep seeing memes about how prompting an LLM once is super wasteful, and that’s the misconception I was addressing.
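    Rough back-of-the-envelope math on those two figures (both are the published estimates quoted above, not measurements of mine):

    ```python
    # How many prompts does it take for inference energy to equal
    # GPT-4's estimated one-time training cost?
    PER_PROMPT_WH = 0.34        # average energy per prompt, watt-hours (estimate)
    GPT4_TRAINING_WH = 62.3e9   # 62.3 GWh expressed in watt-hours (estimate)

    break_even_prompts = GPT4_TRAINING_WH / PER_PROMPT_WH
    print(f"{break_even_prompts:.2e}")  # ~1.83e+11, i.e. roughly 183 billion prompts
    ```

    So any single prompt is a rounding error next to training; it only flips the other way once you spread that 0.34 Wh across hundreds of billions of prompts.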



  • You and another commenter have good points about the bigger models and how many prompts users hit them with. I think it’s diminishing returns after about 8 billion parameters, and you can run models that size on old hardware. My home server is a 10-year-old desktop. It cost $200 to buy used last year, and I didn’t notice the energy costs. My wife and I try to use it for anything we’d otherwise use an online one for. It probably only gets prompted about 10 times a week between the two of us.

    🤔 I actually have an energy meter I could plug the server into. I could run 100 prompts and tell you how much energy it ate for the day. Anybody interested?
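    If I run that test, the arithmetic is simple. Here’s a sketch of how I’d turn meter readings into a per-prompt figure; every number below is a made-up placeholder, not a result:

    ```python
    # Sketch: estimating per-prompt energy from a wall-socket meter.
    # All values are hypothetical placeholders until the test is actually run.
    IDLE_WATTS = 40.0         # server draw while idle (placeholder)
    LOAD_WATTS = 150.0        # server draw while the model is generating (placeholder)
    SECONDS_PER_PROMPT = 20   # average generation time per prompt (placeholder)
    N_PROMPTS = 100

    # Extra energy attributable to inference: (load - idle) watts * time, in watt-hours
    extra_wh = (LOAD_WATTS - IDLE_WATTS) * SECONDS_PER_PROMPT * N_PROMPTS / 3600
    print(f"{extra_wh:.1f} Wh total, {extra_wh / N_PROMPTS:.3f} Wh per prompt")
    ```

    Subtracting the idle draw matters, since the box would be streaming Jellyfin either way; otherwise you’d blame the model for power the server burns regardless.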



  • I’ve definitely noticed. I find the strength of opinions odd on both sides. Conservatives treat AI like it’s a genius human with a PhD in everything, and that’s dangerously false, obviously: LLMs make basic spelling and math errors and frequently hallucinate misinformation.

    On the other hand, the libs tend to demonize the hell out of the tech. One example is the energy cost. There’s a misconception about how much energy it takes to prompt an LLM; doing so has a pretty damn low power cost. I have a 10-year-old desktop I use as a server. It can run an 8-billion-parameter model like Gemma while also streaming my music on Jellyfin, no problem. The reason for the misconception is that a huge amount of energy is needed to train these things, and having every company and their mother train their own is a huge waste.

    My personal opinion is that AI is like a small team of dumbass interns. It’s great for grunt work and busy work, and that’s about it. For example, one day my boss’s boss decided I needed to update our approved-software list with a paragraph description of each and every piece of software listed: 900+ approved and 400+ banned. He assigned this to me and one coworker and told us it was urgent… Bullshit, but I need my job. Google Sheets has a function where you can point to another cell and add a separate prompt for Gemini to fill a cell based on it. Dude, I had that whole list done in a minute. It was like commanding a small army of interns. Did the AI make up incorrect descriptions for like 50 pieces of software? Yes. Does it matter, and do I give a flying fuck? No.
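    For anyone who wants to try the same trick, the formula looks roughly like this. I’m going from memory here, so treat the exact function name and signature (Gemini’s `AI()` function in Google Sheets) as an assumption and check the current Sheets docs before relying on it:

    ```
    =AI("Write a one-paragraph description of this software product", A2)
    ```

    Put it in the first row of the description column, then drag the fill handle down so each row’s formula points at its own software name, and spot-check the output, since it will confidently make things up.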

    If you made it this far, thanks for reading my 2 cents, comrade 🫡