• 5 Posts
  • 201 Comments
Joined 3 years ago · Cake day: April 19th, 2022

  • chayleaf@lemmy.ml to memes@hexbear.net · 4 points · 2 months ago

    Rosa is the author of “The Accumulation of Capital”, whose thesis was dubbed something like the “theory of the automatic demise of capitalism” and was opposed in the USSR.

    The book basically argues that capitalism is impossible on its own terms: equivalent exchange of value means there’s nobody left to buy the products, so the system survives only by annexing more and more non-capitalist regions, which allows non-equivalent exchange. Lenin (and later Soviet Marxists) opposed the theory as anti-revolutionary. It downplays the internal contradictions of capitalism in favor of an analysis of external, nominally anti-imperialist contradictions (which is itself bad, since it frames imperialism as a matter of fairness), and that devalues revolutionary class struggle, even if that certainly wasn’t the intention: an “automatic demise of capitalism” implies there’s no historical need for it. Ironically, the theory was also used to oppose national liberation movements, on the pretext that it’s impossible to pursue national interests without becoming an imperialist oneself (this is basically the KKE’s “Imperialist Pyramid” line).

    Sadly, this means there are plenty of “Luxemburgist” social fascists.

  • Different neural network types excel at different tasks - image recognition was cracked way before LLMs appeared, not only because of the lack of processing power, but also because the earlier architectures didn’t work well with language. New architectures don’t appear out of thin air; they’re created with a rough idea of what we need to make a network do a certain task (e.g. NLP) better. Even tokenization isn’t blind codepoint separation - it’s based on an analysis of languages. But yes, natural languages aren’t “parsed” for neural networks; they don’t even have a formal grammar.
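To illustrate what “language-aware” tokenization means, here’s a minimal sketch of byte-pair-encoding-style merging, the general idea behind most LLM tokenizers. The toy corpus and the number of merges are made up for the example; real tokenizers are trained on huge corpora and learn tens of thousands of merges.

```python
from collections import Counter

# Toy corpus; real tokenizers train on gigabytes of text
tokens = list("low lower lowest")

def most_frequent_pair(tokens):
    # Count every adjacent pair of current tokens, pick the most common
    return Counter(zip(tokens, tokens[1:])).most_common(1)[0][0]

def merge_pair(tokens, pair):
    # Replace every occurrence of `pair` with a single merged token
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# 3 merges for the demo; frequent character sequences like "low"
# end up as single tokens, driven by the statistics of the text
for _ in range(3):
    tokens = merge_pair(tokens, most_frequent_pair(tokens))

print(tokens)
```

The merge rules come from the statistics of the training text, not from a formal grammar - which is the point being made above.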



  • While I agree that LLMs can eventually achieve human-tier efficiency at most tasks (some architectural changes will be necessary, but the core approach seems sound), it’s wrong to say they’re modeled after the human brain. We have no idea how brains work; they’re extremely complex, so we build artificial neural networks from the ground up. AI uses centuries’ worth of math, but with our current mathematical knowledge the code isn’t too complicated. Human brains aren’t like that: they can’t be summed up in a few lines of code, because DNA is a huge mess that contains much more than just “learning”, with so many inactive or redundant bits and pieces. We’re building LLMs with knowledge of how languages work, not how brains work.



  • Well, Tor (with bridges) still works just fine; I don’t really know of any other “crowdsourced” proxy networks. Telegram isn’t blocked (it used to be, but everyone kept using it anyway, including people in the government, so they unblocked it), so any info there is freely available. WireGuard and OpenVPN are blocked (even within Russia, for some reason), Shadowsocks is throttled on certain connections but works fine, and I haven’t extensively tested anything else.

    Also, mobile networks are used for testing stricter blocking measures before they’re rolled out to landline connections.