which is hard to decode using hardware acceleration
This is a little misleading. There is nothing fundamental about AV1 that makes it hard to decode, support is just not widespread yet (mostly because it is a relatively new codec).
Just to be clear, it is probably a good thing that YouTube re-encodes all videos. Video is a highly complex format and decoders are prone to security vulnerabilities. By transcoding everything (in a controlled sandbox) YouTube takes on most of this risk and makes it highly unlikely that the resulting video that they serve to the general public is able to exploit any bugs in decoders.
Plus YouTube serves videos in a variety of formats and resolutions (and now different bitrates within a resolution). So even if they did try to preserve the original encoding where possible you wouldn’t get it most of the time because there is a better match for your device.
From my experience it doesn’t matter whether there is an “Enhanced Bitrate” option or not. My assumption is that around the time they added this option they dropped the regular 1080p bitrate for all videos. However, they likely didn’t eagerly re-encode old videos. So old videos still look OK at “1080p”, but newer videos look like trash whether or not the “1080p Enhanced Bitrate” option is available.
It may be worth right-clicking the video and choosing “Stats for Nerds”; this will show you the video codec being used. For me 1080p is typically VP9 while 4k is usually AV1. Since AV1 is a newer codec it is quite likely that you don’t have hardware decoding support for it.
I’m pretty sure that YouTube has been compressing videos harder in general. This loosely correlates with their release of the “1080p Enhanced Bitrate” option. But even 4k videos seem to have gotten worse to my eyes.
Watching at a higher resolution is definitely a valid strategy. Optimal video compression is very complicated, and while compressing at the native resolution is more efficient, you can only go so far with fewer bits. Since the higher resolution versions have higher bitrates they fundamentally have more data available and will give an overall better picture. If you are worried about possible fuzziness you can try 4k rather than 1440p: 4k is a clean doubling of 1080p, so you won’t lose any crisp edges.
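The quick arithmetic behind that “clean doubling” point (just an illustration, nothing YouTube-specific):

```python
# 4k (2160p) is exactly 2x 1080p in each dimension, so every 1080p
# pixel maps onto a clean 2x2 block on a 1080p screen. 1440p is a
# fractional 1.33x, so edges get resampled across pixel boundaries.
for name, width, height in [("1440p", 2560, 1440), ("4k", 3840, 2160)]:
    print(name, width / 1920, height / 1080)
# 1440p 1.3333333333333333 1.3333333333333333
# 4k 2.0 2.0
```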
Your Firefox install contains a file called omni.ja. For example, on many Linux machines it will be at /usr/lib/firefox/browser/omni.ja. This file is a ZIP archive and contains your places.xhtml as well as other browser files. The exact paths are not always obvious as there is some remapping taking place (see the .manifest files in the archive) but I think the vast majority of chrome:// paths come from this archive.
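If you want to poke around yourself, any ZIP tool should do. A quick sketch in Python (assuming zipfile copes with Firefox’s slightly optimized jar layout; if it complains, unzip handles it too, and adjust the path for your install):

```python
import zipfile

# omni.ja is a ZIP archive; list entries related to places.xhtml
# and the .manifest remapping files mentioned above.
with zipfile.ZipFile("/usr/lib/firefox/browser/omni.ja") as omni:
    for name in omni.namelist():
        if "places" in name or name.endswith(".manifest"):
            print(name)
```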
Most notably, they generally pretend that nothing on the web is encrypted, whereas in practice HTTPS is nearly universal at this point.
The use case changes everything. OP is likely using much more memory than you are (especially for the disk cache) so the kernel decided to swap out some data. Maybe you aren’t using as much, so it has no need.
To put it another way, you want to be using all of your RAM and swap. It only becomes a problem if you are frequently reading from swap. (Writing isn’t usually as much of an issue, as these may be proactive writes done so that the pages can be dropped quickly if more memory needs to be freed up.)
Basically, a perfect OS would use RAM + swap such that the fewest disk reads need to be issued. This can mean swapping out some idle anonymous memory so that the space can be used as disk cache for some hotter data.
In this screenshot the OS decided that it was better to swap out 3 GiB of something and use that space for the disk cache (“Cached”). It is likely right about this decision (but it isn’t always).
3 GiB does seem a bit high. But if you have lots of processes running that are using memory but are mostly idle it could definitely happen. For example, in my case I often have lots of language servers running in my IDE, but many of them are for projects that I am not actively looking at, so they are just waiting for something to happen. These often take lots of memory and it may make sense to swap them out until they are used again.
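If you want to watch how your own kernel is making this trade-off, the numbers are all in /proc/meminfo. A minimal sketch (Linux-only; the field names are exactly as the kernel reports them):

```python
# Read swap and disk cache usage straight from the kernel (Linux).
fields = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":")
        fields[key] = int(value.split()[0])  # values are in kiB

swap_used = fields["SwapTotal"] - fields["SwapFree"]
print(f"Swap used: {swap_used / 1024:.0f} MiB")
print(f"Disk cache: {fields['Cached'] / 1024:.0f} MiB")
```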
There is an option in settings to allow trying all games. By default Steam only allows it for tested and verified games, but it is a simple checkbox and then you can download and run any Windows game.
It used to be common and useful. I did this even after Valve shipped a native Linux TF2, as at the beginning the Wine method gave better results on my hardware. But that time has long passed: Valve has integrated Wine (as Proton) and in almost all cases the native Linux builds will outperform Wine (and Steam will let you use the Windows version via Proton if you want, even if there is a native Linux build).
So while I suspect that there are still a few people doing this out of momentum, habit, or old tutorials, I am not aware of any good reasons to do this anymore.
Warning
Never extract archives from untrusted sources without prior inspection. It is possible that files are created outside of path, e.g. members that have absolute filenames starting with “/” or filenames with two dots “..”.
https://docs.python.org/3/library/tarfile.html#tarfile.TarFile.extractall
I would be careful if using this as a general purpose tool.
A better alternative would likely be to use the regular command-line tools, which have been hardened against this type of thing (and are likely much faster), and then just inspect the result. Always create a wrapper directory; then, if the result is only one directory inside of that, move it out, otherwise just keep the wrapper. I would recommend that the author update their tool to do this rather than the current approach.
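That said, if you do stay in Python, newer versions of tarfile ship extraction filters that reject exactly the absolute-path and “..” tricks the warning describes. A minimal sketch of the wrapper-directory approach (the helper name is mine; filter="data" needs Python 3.12 or a recent 3.10/3.11 point release):

```python
import tarfile
from pathlib import Path

def safe_extract(archive: str, dest: str) -> Path:
    # Always extract into a fresh wrapper directory.
    wrapper = Path(dest) / Path(archive).name.removesuffix(".tar.gz")
    wrapper.mkdir(parents=True)
    with tarfile.open(archive) as tar:
        # The "data" filter rejects absolute paths, members escaping
        # the destination via "..", and other risky member types.
        tar.extractall(wrapper, filter="data")
    # If the archive held a single top-level directory you could move
    # it up a level here; otherwise just keep the wrapper.
    return wrapper
```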
It honestly sounds more like someone convincing you that crypto is great than someone convincing you that Greenpeace is great.
We did it not because it was easy, but because we thought it would be easy.
NAT sort of accidentally includes what is called a “stateful firewall”. It blocks inbound connections because it doesn’t know where they should go. IPv6 eliminates the need for NAT but doesn’t prevent stateful firewalls. It is just as easy to implement stateful firewalls (actually a bit easier) for IPv6 without NAT. The difference is that the choice is yours, rather than being a technical limitation.
For example, if I had a smart microwave I would want to ensure that there is some sort of firewall (or, more likely for me, not connect it to the internet at all, but I digress). However I may want my gaming computer to be directly accessible so that my friends can connect to my game without going through some third-party relay, or so that my voice chat can be direct between me and my friends for extra privacy and better latency.
Also, relying on NAT for network-level protection like this is not a good idea in general. Eventually a friend is going to come over with an infected device and connect to your WiFi. With just NAT the malware on their computer can access your microwave, as they are “inside the NAT”. If you were applying a proper stateful firewall you would likely apply it to all traffic, not just internet traffic.
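To make the “stateful” part concrete, here is a toy sketch of the connection tracking a stateful firewall does (the flow names are invented for illustration; real firewalls track protocol state too):

```python
# Toy connection tracker: outbound traffic creates state, and inbound
# packets are only accepted if they match a known flow.
established = set()

def outbound(src, dst):
    established.add((dst, src))  # remember the expected reply direction
    return "allow"

def inbound(src, dst):
    return "allow" if (src, dst) in established else "drop"

print(outbound("my-pc:50000", "game-server:27015"))  # allow
print(inbound("game-server:27015", "my-pc:50000"))   # allow (a reply)
print(inbound("attacker:4444", "microwave:80"))      # drop (unsolicited)
```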
Mostly dropping the analogy as it falls apart quickly once you try to talk about more specific details.
How do I handle whether I want my phone number to be known to the world?
If you don’t want people to be able to call you then you can block incoming calls. This is sort of like the IPv4 NAT case: people can’t connect in (unless you forward ports). Or, if you want, you can allow incoming calls. The choice is up to you now rather than being forced by a technical limitation.
Does my phone number ever change on its own or can I freely change it?
Generally you will be provided a “prefix” by your ISP. In v4 this would typically be a full address. In v6 there are a huge number of addresses inside this prefix. In both cases how often the prefix changes is up to your ISP, but for v6 you can change the suffix you use inside of the prefix as often as you want.
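As a concrete illustration using Python’s ipaddress module (2001:db8::/32 is the reserved documentation range, standing in for whatever prefix your ISP actually delegates):

```python
import ipaddress

# Your ISP delegates a prefix; you choose the suffix (the remaining bits).
prefix = ipaddress.ip_network("2001:db8:1234:5600::/56")
print(prefix.num_addresses)  # 4722366482869645213696 (2**72)

# Two different suffixes inside the same delegated prefix:
print(ipaddress.ip_address("2001:db8:1234:5600::1") in prefix)            # True
print(ipaddress.ip_address("2001:db8:1234:56ff:dead:beef::2") in prefix)  # True
```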
Who has the phone book?
There are two main parts to “the phone book”: “Who owns this address?” and “How do I get to this address?” Both of these are basically identical between IPv4 and IPv6.
For “Who owns this address?” there is a global directory of assignments. This is typically delegated in multiple layers (IANA hands large blocks to regional registries, which assign smaller blocks to ISPs and other organizations).
For “How do I get to this address?” a protocol called BGP is used to advertise where an address is available from. So I may say “If you want to get to addresses 32 to 64, come talk to me”. This is sort of like how in a hotel there are signs saying which room numbers are in which direction. When two networks are connected they share this information between them to establish a “routing table”, so they know how to get to everywhere else on the internet.
This may look something like this:
Overall no single place knows how to get to every other address, but each one knows the best next step. So you don’t know where 17 is, but you know to send it to your ISP; your ISP doesn’t know where 17 is, but knows that their partner tier 1 ISP knows how to get there; the tier 1 ISP doesn’t know where 17 is, but knows that it belongs to your friend’s ISP; your friend’s ISP doesn’t know what device 17 is, but knows that it is in your friend’s house; then finally your friend’s home router actually knows that 17 is your friend’s desktop.
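The lookup each hop does is just “find the most specific route that matches, and forward to its next hop”. A toy sketch using the small numbers from the example above (the table entries are invented; real routers match IP prefixes, not integer ranges):

```python
# Toy routing table: (address range, next hop). Most specific match wins.
routes = [
    (range(0, 256), "my ISP"),       # default route: everything else
    (range(32, 65), "tier 1 ISP"),   # "addresses 32 to 64? talk to me"
    (range(16, 24), "friend's ISP"),
]

def next_hop(addr):
    # Of the routes covering this address, pick the narrowest range.
    matching = [(len(r), hop) for r, hop in routes if addr in r]
    return min(matching)[1]

print(next_hop(17))   # friend's ISP
print(next_hop(40))   # tier 1 ISP
print(next_hop(200))  # my ISP
```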
You can sort of imagine this like delivering mail. If I send mail from Canada that is addressed to England, Canada Post doesn’t really care where exactly I am sending the letter. It just knows that it needs to forward it to Royal Mail and they will handle it from there.
I switched to Immich recently and am very happy.
The bad:
Honestly, a lot of stuff in PhotoPrism feels like one developer has a weird workflow and optimized it for that. Most of it runs counter to what I actually want to do (like automatic title and description generation, the review stuff, or auto quality rating). Immich is very clearly inspired by Google Photos and takes a lot of things directly from it, but that matches my use case way better. (I was pretty happy with Google Photos until they started refusing to give access to the originals.)
Most Intel GPUs are great at transcoding: reliable, widely supported, and quite a bit of transcoding power for very little electrical power.
I think the main thing I would check is what formats are supported. If the other GPU can support newer formats like AV1 it may be worth it (if you want to store your videos in these more efficient formats or you have clients who can consume these formats and will appreciate the reduced bandwidth).
But overall I would say that if you aren’t having any problems there is no need to bother. The onboard graphics are simple and efficient.
I wouldn’t call a nail hard to use just because I don’t have a hammer. Yes, you need the right hardware, but there is no difference in difficulty. But I understand what you are trying to say; I just wanted to clarify that it isn’t hard, just not widely supported yet.