TOTP, at least, can be backed up and used on several devices.
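The reason backup works is that TOTP is pure arithmetic over a shared secret and the clock: any device holding the secret computes the same codes. A minimal sketch of RFC 6238 (HMAC-SHA1, 30-second steps, 6 digits) using only the standard library; the secret below is the RFC's own test value, not anything real:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, t: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HMAC over the big-endian count of elapsed time steps
    counter = struct.pack(">Q", t // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The secret is all a device needs: back it up, restore it anywhere,
# and every copy produces identical codes for the same time step.
secret = b"12345678901234567890"  # test secret from RFC 6238, appendix B
now = int(time.time())
print(totp(secret, now))
```

Since there is no server-side per-device state, "several devices" is just several copies of that one secret.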
Once configured, Tor Hidden Services also just work (you may need to use some fresh bridges in certain countries if ISPs block Tor there though). You don’t have to trust any specific third party in this case.
Discounting temporary tech issues, I haven’t browsed the internet without an adblocker for a single day in my entire life. Nobody is entitled to abuse my attention; no guilt, no exceptions.
If the config prompt is the system prompt, hijacking it works more often than not. The creators of a prompt injection game (https://tensortrust.ai/) found that system/user roles don’t matter much in determining the final behaviour: see appendix H in https://arxiv.org/abs/2311.01011.
Like Firefox ScreenshotGo? (I think it only supports English though)
Don’t know much about the stochastic parrot debate. Is my position a common one?
In my understanding, current language models don’t have any understanding or reflection, but the probabilistic distributions of the languages they learn do - at least to some extent. In this sense, there’s some intelligence inherently associated with language itself, and language models are just tools that help us see more aspects of nature than we could earlier, like X-rays or sonar, except that this part of nature is a bit closer to the world of ideas.
Huh, it’s actually a thing.
You can generate synthetic data matching the distribution your transformer learned. You can use this dataset to train another model. As of now, that’s about it.
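A dependency-free sketch of that idea, with a character bigram model standing in for the transformer (the corpus and names are illustrative): sample synthetic text from a trained "teacher", then fit a fresh "student" on nothing but the samples.

```python
import random
from collections import Counter, defaultdict

def fit(corpus: str):
    # Count next-character frequencies: a stand-in for "the learned distribution"
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def sample(model, start: str, n: int, rng: random.Random) -> str:
    # Draw synthetic text from the model's own distribution
    out = [start]
    for _ in range(n):
        options = model.get(out[-1])
        if not options:
            break
        chars, weights = zip(*options.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the cat sat on the mat " * 50
teacher = fit(corpus)
synthetic = sample(teacher, "t", 2000, random.Random(0))
student = fit(synthetic)  # "another model", trained purely on synthetic data

print(teacher["t"].most_common(2))
print(student["t"].most_common(2))
```

The student can only ever learn what the teacher's distribution contains, which is why "that's about it": sampling plus retraining transfers the distribution, not anything beyond it.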
The temperature here was very interesting for a second or two until I remembered some people use °F.
xkcd.com is best viewed with Netscape Navigator 4.0 or below on a Pentium 3±1 emulated in Javascript on an Apple IIGS at a screen resolution of 1024x1. Please enable your ad blockers, disable high-heat drying, and remove your device from Airplane Mode and set it to Boat Mode. For security reasons, please leave caps lock on while browsing.
CVEs are constantly found in complex software, which is why security updates are important. If not these, it would have been others a couple of weeks or months later. And government users can’t exactly opt out of security updates, even if they come with feature regressions.
You also shouldn’t keep using software with known vulnerabilities. You can find a maintained fork of Chromium with continued Manifest V2 support or choose another browser like Firefox.
You can get your hands on books3 or any other dataset that was exposed to the public at some point, but large companies have private human-filtered high-quality datasets that perform better. You’re unlikely to have the resources to do the same.
Very cool and impressive, but I’d rather be able to share arbitrary files.
And looks like you can only send images in DMs, but not in groups/forums.
Love those, plenty of experience every time.
If your CPU isn’t ancient, it’s mostly about memory speed. VRAM is very fast, DDR5 RAM is reasonably fast, swap is slow even on a modern SSD.
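Back-of-envelope arithmetic behind that claim: generating one token streams essentially all model weights through the processor, so tokens/s is roughly bandwidth divided by model size. The model size and bandwidth figures below are illustrative round numbers, not benchmarks:

```python
def tokens_per_second(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    # Memory-bound estimate: each generated token reads the full weight set once
    return bandwidth_bytes_per_s / model_bytes

GB = 1e9
model = 4.5 * GB  # e.g. a 7-8B model at ~4-5 bits per weight (illustrative)

for name, bw in [("GPU VRAM (~900 GB/s)", 900 * GB),
                 ("DDR5 RAM (~60 GB/s)", 60 * GB),
                 ("NVMe swap (~5 GB/s)", 5 * GB)]:
    print(f"{name}: ~{tokens_per_second(model, bw):.0f} tok/s")
```

The compute side barely matters on any non-ancient CPU; the three tiers differ by orders of magnitude purely on bandwidth, which is why swapping kills inference speed.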
8x7B is Mixtral, yeah.
Mostly via terminal, yeah. It’s convenient when you’re used to it - I am.
Let’s see, my inference speed now is:
As for quality, I try to avoid quantisation below Q5, or at least Q4. I also don’t see much point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.
It would. But it’s a good option when you have computationally heavy tasks and communication is relatively light.