There is a disconnect between what computer scientists understand as AI and what the general public understands as AI. This was previously not a problem (nerds give confusing names to stuff all the time), but it became one after this latest hype cycle, where incurious laypeople are in charge of the messaging (or, in a less charitable interpretation, benefit from fear of the singularity™). Doesn’t help that scientific communication is dogshit.
They do once their depression gets better though? Anhedonia, loss of interest/libido/attention/whatever the fuck else are symptoms of depression. I’m all for self-improvement, my own mental health improved greatly as a result of trying to improve myself, to the point I consider myself no longer depressed. But we’re social creatures and no one builds self-confidence and mental resilience in a vacuum. It’s often up to the depressed person to put themselves out in situations where this can happen, but sometimes it does not work out for whatever reason and the whole thing is a long process. In this situation self-compassion is a lot better than telling yourself you’re a sack of shit.
Also, isn’t the interesting life thing all backwards? If you like a person you get curious and find them interesting. If I like a guy I’ll find whatever he’s into cool, be it singing, playing chess or knowing a lot about bugs.
No one is owed that kind of attention, but most people are worthy of compassion.
Implementing fascism as a mechanic only for it to be unsustainable gameplay-wise is a good bit tbh
u did the thing we designed the game to push you towards doing don’t you feel bad u monster lolololol
To be fair to the game that’s only the bait and switch at the very start with Toriel, designed to make the player reload and introduce the save meta-fuckery with Flowey. From then on the only incentive to do violence is getting stuck at a puzzle or completionism (which is at the heart of the meta-narrative).
The commentary on violence by itself is naive though (even the game points it out at one point) and if you don’t like the characters or roll your eyes at 4th wall stuff the whole thing falls apart pretty quick.
evaluating LLM
ask the researcher if they are testing form or meaning
they don’t understand
pull out illustrated diagram explaining what is form and what is meaning
they laugh and say “the model is demonstrating creativity sir”
looks at the test
it’s form
This reminds me of an older paper on how LLMs can’t even do basic math when examples fall outside the training distribution (note that this was GPT-J, and as far as I’m aware no such analysis is possible with GPT-4, I wonder why), so this phenomenon is not exclusive to multimodal stuff. It’s one thing to pre-train a large-capacity model on a general task that might benefit downstream tasks, but wanting these models to be general purpose is really, really silly.
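The shape of that kind of probe is roughly this (a toy sketch, not the paper’s actual setup: `model_answer` is a stand-in for querying a real model, here faked as a lookup table of “memorized” small sums plus a garbage fallback, just to show how accuracy inside the training range says nothing about accuracy outside it):

```python
import random

TRAIN_MAX = 99  # pretend the model only ever saw 1-2 digit operands in training

# stand-in "model": perfect recall of memorized in-distribution sums
memorized = {(a, b): a + b for a in range(TRAIN_MAX + 1) for b in range(TRAIN_MAX + 1)}

def model_answer(a, b):
    # recalls memorized sums, falls back to a broken heuristic off-distribution
    return memorized.get((a, b), a + b % 10)

def accuracy(lo, hi, n=500, seed=0):
    # sample n addition problems with operands in [lo, hi], score exact match
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        a, b = rng.randint(lo, hi), rng.randint(lo, hi)
        ok += model_answer(a, b) == a + b
    return ok / n

print(accuracy(0, TRAIN_MAX))   # in-distribution: looks flawless
print(accuracy(1000, 9999))     # out-of-distribution: collapses
```

The point is that a benchmark drawn from the same range as the training data would report the toy “model” as perfect at addition, which is exactly the kind of misleading result in-distribution evaluation produces.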
I’m of the opinion that we’re approaching a crisis in AI: we’ve hit a barrier on what current approaches are capable of achieving, and no amount of data, labelers, tinkering with architectural minutiae or (god forbid) “prompt engineering” can fix that. My hope is that with the bubble bursting the field will have to reckon with the need for algorithmic and architectural innovation, and with more robust standards for what constitutes a proper benchmark and for reproducibility at the very least. And maybe, just maybe, it will extend its collective knowledge of other fields of study past 1960s neuroscience and explore the ethical and societal implications of its work more deeply than the oftentimes tiny obligatory ethics section of a paper. That is definitely an overgeneralization, so sorry to any researchers out here <3, I’m just disillusioned with the general state of the field.
You’re correct about the C-suites though, all they needed to see was one of those stupid graphs that showed the line going up, with model capacity on the x axis and performance on the y axis, and their greed did the rest.