  • I mean, up to you. As I said above, it’s not like owning one of those means actively supporting Facebook or whatever. I find the whole “engaging with these companies’ products implies endorsing them” capitalist view of money as support very strange, but I know it’s popular these days, particularly in anglo cultures.

    But like I said above, it’s not like a Meta account used for a Quest that’s only used on PC will give Meta any view of your data, or like they’d be making any money off you from a device they built at a massive loss that you’re then purchasing used. But hey, you do you. There are other older, crappier headsets you can buy used, but Quest 2 listings out there start at sixty bucks, which is absolutely nuts for what they are.









  • We had enough of them around at any one time that “the expats” was a relevant group of people you needed to refer to for specific things: language lessons, HR support, what have you. I definitely heard the anglo guys refer to themselves as that frequently, and that then became the word people used.

    I had a chip on my shoulder about telling people I was a migrant, but I was pretty alone on that. The anglo guys mostly said they were “expats”.



  • It was used colloquially, for sure… by rich corporate migrants who didn’t want to self-ID as migrants. Or at least by the HR people and corpo consultants handling the international relocations and avoiding the taboo word.

    Which is what the previous post is saying and it certainly matches my experience as one of the “expats”. I always self-identified as a migrant myself, though.


  • Screw that. I already have to deal with US politics and culture in enough areas of my life without also being shamed for refusing to care about their self-harming tendencies. I don’t have a need to care about what the US do to themselves, in the same way I don’t have a need to care about what Argentina or Hungary or Russia do to themselves. At least Russians don’t have a real choice.

    Admittedly, I did feel the compulsion to write this down here, as opposed to about those other examples. In my defense, that’s because a) I literally wrote it as I clicked the “block” button on this community, and b) it’s insanely hard not to pay attention to the US. It requires active effort. This community isn’t even called “US politics”, it’s just called “Politics”. The US dominating my media is the default state of the world; I have to take aggressive action to make that not be the case.





  • The idea is having tensor acceleration built into SoCs for portable devices so they can run models locally on laptops, tablets and phones.

    Because, you know, server-side ML model calculations are expensive, so offloading that compute to the client makes them cheaper for whoever is running the service.

    But this gen can’t really run anything useful locally, as far as I can tell. Most of the demos during the ramp-up to these were thoroughly underwhelming and nowhere near what you get from server-side services.

    Of course, they could have just called the “NPU” a new GPU feature and made it work closer to how this is run on dedicated GPUs, but I suppose somebody thought that branding it as a separate device was more marketable.
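
    For what it’s worth, this is roughly what “offload it to the client” looks like from the application side with something like ONNX Runtime, which picks a hardware backend from a list of “execution providers”. The model file, input shape and provider preference below are made-up placeholders for illustration, not something any particular NPU is guaranteed to support:

    ```python
    # Minimal sketch: run a small ONNX model locally, preferring an NPU-backed
    # execution provider if the installed onnxruntime build exposes one.
    # "small_local_model.onnx" and the input shape are hypothetical placeholders.
    import numpy as np
    import onnxruntime as ort

    available = ort.get_available_providers()
    print("Providers exposed on this machine:", available)

    # Which NPU provider (if any) shows up depends on the vendor and on how
    # onnxruntime was built: QNN for Qualcomm chips, CoreML on Apple, DirectML
    # on Windows. Fall back to the plain CPU provider otherwise.
    preferred = [p for p in ("QNNExecutionProvider",
                             "CoreMLExecutionProvider",
                             "DmlExecutionProvider") if p in available]
    providers = preferred + ["CPUExecutionProvider"]

    session = ort.InferenceSession("small_local_model.onnx", providers=providers)

    # Dummy input just to show the call shape; a real app would pass tokenized
    # text, an image tensor, etc.
    input_name = session.get_inputs()[0].name
    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)
    outputs = session.run(None, {input_name: dummy})
    print("Output shape:", outputs[0].shape)
    ```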




  • The stupid difference is supposed to be that they have some tensor math accelerators like the ones that have been on GPUs for three generations now. Except they’re small and slow and can barely run anything locally, so if you care about “AI” you’re probably using a dedicated GPU instead of an “NPU”.

    And because local AI features have been largely useless, so far there is no software that will, say, take advantage of NPU processing for stuff like image upscaling while using the GPU’s tensor hardware for in-game raytracing or whatever. You’re not even offloading any workload to the NPU when you’re using your GPU, regardless of what you’re using it for.

    For Apple stuff, where it’s all integrated, it’s probably closer to what you describe, just using the integrated GPU acceleration. I think there are some specific optimizations for the kind of tensor math used in AI as opposed to graphics, but it’s mostly the same thing.
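
    To make the “nothing splits work across the NPU and GPU for you” point concrete, here’s a rough sketch of how an app would have to do it today with something like ONNX Runtime: each workload gets explicitly pinned to a backend. The model names and provider choices below are illustrative placeholders, not how any shipping game or app actually does it:

    ```python
    # Sketch of explicitly pinning separate workloads to separate accelerators.
    # Nothing in the OS or drivers schedules this split automatically; the app
    # has to choose. Model files and providers here are hypothetical examples.
    import onnxruntime as ort

    # A heavy model pinned to the discrete GPU (CUDA here, DirectML on Windows).
    gpu_session = ort.InferenceSession(
        "upscaler.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )

    # A lighter background task pinned to the NPU (Qualcomm's QNN provider here).
    npu_session = ort.InferenceSession(
        "background_blur.onnx",
        providers=["QNNExecutionProvider", "CPUExecutionProvider"],
    )

    # Any "upscale on the NPU while the GPU handles raytracing" behaviour would
    # have to be written into the application as two sessions like these, fed
    # and synchronized by hand.
    ```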