This video is perfectly applicable, the rot that sets in in a large company when you have no competition to counteract it is exactly what has happened here.
Not sure a short summary will cut it.
They had no competition for a long period and ended up with an accountant CEO who caused their R&D to stagnate massively. They struggled and failed to deliver in most areas, and they wombled about releasing CPU generations with ~4% performance uplifts, probably saving a few bucks in the process.
AMD turned back up again with Ryzen and Epyc models that were pretty good and an impressive pace of improvement ( like ~14% generational uplifts ), which caused them such a fright that they figured out they had to ditch the accountant.
Pat Gelsinger was asked to step up as CEO and fix that mess. They axed some obviously defective folks in their structure and rushed about to release 12th generation products with decent gains by cranking the power levels of the CPUs to absurd levels. This was risky, and it kind of looks like they are being bitten by it now.
Server CPU sales are way down because they are just plain uncompetitive. They have missed out on the chunk of money they could have got from the AI bubble because they never had a good GPU architecture they could leverage. They have been shutting down unprofitable and troublesome divisions like the Optane storage and NUC divisions to try and save money, but they are in a bad way.
The class actions mentioned elsewhere in the thread are probably coming because the rush to make incremental improvements to 13th generation and 14th generation CPUs resulted in issues with power levels and other problems that seem to be causing those CPUs to crash and sometimes fail altogether.
Yeah, I reckon splitting the frontend and the backend results in about half the complexity in each. If you have multiple frontends you can upgrade whichever is least important to see if there are any problems.
I didn’t really answer your original question.
When I was using NUCs I was running Linux Mint, which uses Cinnamon as the window manager by default. Originally I changed it to some really minimal window manager like twm, but at some point it became practical to not use one at all and just run Kodi directly on X.
If I was going back to a Linux frontend I’d probably evaluate LibreELEC as it has a lot of the sharp edges sorted out.
I used to run Kodi on Linux on Intel NUCs connected to all our TVs a while ago. I don’t remember it being particularly unreliable. The issue that made me change that setup was hardware decoding support for newer codecs at 4K.
What I’ve had doing that frontend function ( Kodi, Jellyfin, Disney Plus, Netflix etc ) for the last few years is three Nvidia Shield TV Pros, which have been absolutely awesome. They are an old product now and I suspect Nvidia are too busy making money to work on a newer generation of them.
The biggest surprise improvement was how good it was being able to ( easily ) configure their remotes to generate power on / off and volume up and down IR codes for the TV or the AV amp they were using so you only need a single remote.
Separating the function of the backend out from the frontend in the lounge has reduced the broken mess that happens around OS upgrades drastically.
Most hubs didn’t protect you from anything in particular.
Most of them would forward everything to every port, and some really insane ones would strip out the spanning tree frames that could have prevented a loop.
It’s been a long time since I did anything that goes as far into a network as the desktop, but 15+ years ago we had a customer ring up with the same sort of complaint. After we followed the breadcrumbs on site we found a little 8 port hub ( that we hadn’t supplied ) plugged into two wall ports that went to two different Cisco edge switches in the server room, two Cisco phones with their passthrough ports both patched into the same switch, and then two desktop PCs.
Amazing.
I replaced MythTV with Tvheadend on the backend and Kodi on the frontend like 5 or 6 years ago.
The setup and configuration in MythTV at the time was slanted towards old ( obsolete ) analog tuners and static setup. Tvheadend was like a breath of fresh air in comparison: you could point it at a DVB mux or two and it would mostly do what you wanted without having to fight it.
I’m not sure how much longer I’ll want something that can tune DVB-S2 and DVB-T though. Jellyfin and friends handle everything other than legacy TV better than Kodi these days.
I don’t have a good answer for you.
DHCPv6 is pretty well the only good way to have a prefix delegated by your ISP and have it chopped up and deployed in an automated fashion through multiple layers of an edge network. I’m also a real fan of the audit trail in the logs that results from a stateful transaction.
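For a rough illustration of what the chopped-up delegation looks like, here’s a minimal sketch of a prefix-delegation pool in an ISC Kea DHCPv6 server config. The interface name and the 2001:db8 documentation prefixes are placeholders I made up, not anything from a real deployment:

```json
{
  "Dhcp6": {
    "interfaces-config": { "interfaces": [ "eth1" ] },
    "subnet6": [ {
      "subnet": "2001:db8:0:1::/64",
      "pools": [ { "pool": "2001:db8:0:1::100 - 2001:db8:0:1::ffff" } ],
      "pd-pools": [ {
        "prefix": "2001:db8:8000::",
        "prefix-len": 48,
        "delegated-len": 56
      } ]
    } ]
  }
}
```

A pd-pool like that hands each requesting downstream router its own /56 out of the /48, and every delegation is recorded as a lease, which is where that audit trail in the logs comes from.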
Some background info, if you haven’t run into it, is described in this Google issue tracker entry: https://issuetracker.google.com/issues/36949085. The summary is that one guy at Google is obstructing DHCPv6 being implemented on Android.
I’ve built out a bunch of IPv6 networks that implement DHCPv6 on the edge. I personally use a whole lot of Android devices and none of them get IPv6 addresses; pretty well everything else does. I’m mostly cool with it at this point — eventually the guy who is obstructing IPv6 at Google will move on.
I got a pretty nice Yamaha bluray player that was an appropriate match to my home theatre amp.
Put a bluray in it, got a piracy warning, a few unskippable ads for other movies, an obnoxious excessively drawn out animated menu screen that stuttered like hell and was laggy to use.
Pulled the bluray back out of it, stuck it back in the DVD drawer and proceeded to download a copy of the movie to watch. Been doing that ever since.
The most impressed I’ve been with hardware encoding and decoding is with the built-in graphics on my little NUC.
I’m using a NUC10i5FNH, which was only barely able to transcode one vaguely decent bitrate stream in software. Passing the hardware transcoding through to a VM looked too messy for me, so I decided to reinstall Linux straight on the hardware.
The hardware encoding and decoding performance was absolutely amazing. I must have opened up about 20 Jellyfin windows that were transcoding before I gave up trying and called it good enough. I only really need about 4 maximum.
The graphics on the 10th generation NUCs are the same sort of thing as on the 9th gen and 10th gen desktop CPUs, so if you have an Intel CPU with onboard graphics give it a try.
It’s way less trouble than the last time I built a similar setup with Nvidia. I haven’t tried a Radeon card yet, but the Jellyfin docs are a bit more negative about AMD.
A couple of seagulls made their nest in the cooling vent for the radiator of one of our backup generators. I caught it on our security cameras and mentioned it to management which resulted in folks being dispatched to evict them and clean up the giant pile of sticks and other junk they had dragged in.
Not sure what would have happened next time the thing started, so it was probably for the best. I still felt bad.
Yep, any time you have a traffic cap or bill for traffic you’ve got to have data to back up what you are billing for.
More recently CDNs ( and widespread SSL adoption ) have made it a whole lot less obvious what sites the user is going to. I suspect that nice clear-cut list of porn sites from 2007 would just look like some Cloudflare, Akamai and Google these days.
There’s no way of knowing what happened there.
But back in the mid to late 2000s we had a whole bunch of residential internet customers and every so often one would blow their traffic cap by a bunch and would ring up and say “Your billing system is wrong!”.
Then whoever could be bothered in the office would do some modest analysis on their netflow data and come up with something like “18% of your traffic this month was redtube.com, 33% was pornhub.com and 9% was xhamster.com.”
We never knew if whoever was on the phone was the raging porn addict or one of their associates. Either way they would say “Oh well, I guess we will never know then. Thanks for your help. Bye.”, followed by them quietly paying the bill.
Haha, 144p @ 60 Hz is fricking hilarious.
Reminds me of seeing completely rubbish resolution RealPlayer videos embedded in websites back in the late 90s and thinking, “Well that isn’t ever going to take off”.
I just read the update to the post saying that the issue has been narrowed down to the NTFS driver. I haven’t used NTFS on Linux since the NTFS FUSE driver was brand new and still wonky as hell, something like 15 years ago, so I don’t know much about it.
However, it sounds like the in kernel driver was still pretty fresh in 5.15, so doing as you have suggested and trying out a 6.5 kernel instead is a pretty good call.
If you haven’t already, try running hdparm on your drive to get an idea of whether the drives are at least doing large raw reads straight off the disk at an appropriate performance level.
This is output from the little NUC I’m using right now:
# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda        8:0    0 465.8G  0 disk
├─sda1     8:1    0   512M  0 part /boot/efi
├─sda2     8:2    0 464.3G  0 part /
└─sda3     8:3    0   976M  0 part [SWAP]
# hdparm -i /dev/sda
/dev/sda:
Model=Samsung SSD 860 EVO 500GB, FwRev=RVT02B6Q, SerialNo=S3YANB0KB24583B
...
# hdparm -t /dev/sda
/dev/sda:
Timing buffered disk reads: 1526 MB in 3.00 seconds = 508.21 MB/sec
If your results are really poor for this test then it points more at the drive / cable / controller / Linux controller driver.
If the results are okay, then the issue is probably something more like a logical partitioning / filesystem driver issue.
I’m not sure what a good benchmark application for Linux that also tests the filesystem layer is, other than bonnie++, which has been around forever. Someone else might have a more current idea of something to use for this.
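As a quick and dirty first pass through the filesystem layer ( before reaching for bonnie++ or similar ), plain dd works. This is just a sketch; run it from a writable directory on the filesystem you suspect:

```shell
# Write a 256 MB test file and make sure it actually hits the disk
dd if=/dev/zero of=fstest.bin bs=1M count=256 conv=fsync

# Read it back with O_DIRECT to bypass the page cache; dd reports the
# throughput when it finishes. Not every filesystem supports O_DIRECT,
# in which case drop iflag=direct and flush the page cache first instead.
dd if=fstest.bin of=/dev/null bs=1M iflag=direct || echo "O_DIRECT unsupported here"

# Clean up the test file
rm fstest.bin
```

If the hdparm raw read is fine but this read-back is slow, that points at the filesystem or driver layer rather than the drive. fio is the more flexible modern tool if you want proper random/sequential read and write mixes rather than a single streaming pass.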
It might help for the folks here to know which brand and model of SSDs you have, what sort of SATA controllers the SATA ones are plugged into, and what sort of CPU and motherboard the NVMe one is connected to.
What I can say is Ubuntu 22.04 doesn’t have some mystery problem with SSDs. I work in a place where we have in the order of 100 Ubuntu 22.04 installs running with SSDs, all either older Intel ones or newer Samsung ones. They go great.
1988 Nissan Skyline GT with an RB20DET.
It was abandoned by my uncle at our place when he moved overseas and subsequently my sister drove it around a bit. Eventually it leaked coolant from the water pump, overheated and blew a head gasket because she wasn’t paying attention.
I was unemployed and bored and I decided to pull it apart and bought all the bits to fix it. I didn’t really know anything about mechanical stuff at the time, but I am good at logic and try not to be useless at practical stuff even though I’m really a computer geek. I drove it around for a bunch of years after that until I was earning enough money that I could buy something I wanted, which was a Mitsubishi EVO 1.
So to answer the question, favorite thing was that I rescued it from oblivion even though I didn’t know much about cars or engines at the time.
The situation is mostly reversed on Linux. Nvidia has fewer features, more bugs and stuff that plain won’t work at all. Even onboard Intel graphics is going to be less buggy than a pretty expensive Nvidia card.
I mention that because language model work is pretty niche and so is Linux ( maybe similar sized niches? ).
Please drink a verification can to continue.
Razer mice and keyboards can be managed with OpenRazer under Linux. I still use DeathAdders on a few Linux machines ( and one Windows games PC ) but I’ve ditched my Razer keyboards for Keychron ones, which don’t really need any software. You can configure the RGB components of them all with OpenRGB if you want, in Linux and Windows.