Let’s not do anything about the unregulated technology that can spread lies faster than ever before as websites get absolutely flooded with believable bots that outnumber the actual users. Let’s make secret passwords and handshakes like we’re in a clubhouse.
Regardless, it’s not a bad idea, since it’s probably not gonna get better for a while, if at all.
The technology is out. While something should be done on that side of things, regulation doesn’t remove the technology from existence; you will still need other protections.
Regulations virtually always lag years behind technology, don’t they? In the interim period with absolutely no regulations, we must take it upon ourselves to protect ourselves and loved ones from being exploited.
Given just how wealthy the AI bubble is making some people, we may not see any common sense regulation for quite some time. Best to adapt to that reality imo. Gonna tell my friends and family to call me by my hacker alias, “X360N0_sc0peX” on the phone or I’ll assume they’re a bot.
What can be done, you can download an LLM and run it locally, they’re not going away
Websites have been full of shit, bots or not, since forever. Nothing new here.
Regulating it does nothing. Only rich people get to have deepfakes? Nah, let it be public, so everyone can have some vigilance.
vigilance
Vigilance is like, not drinking the water that comes out of a nuclear reactor.
What we’re talking about here is letting everyone run their own reactor and dump the waste into the street.
You don’t gain vigilance, you lose all habitable public space.
It’s a bit late for that. This particular nuclear reactor is open source, free to download and runs on consumer hardware. Can’t really unfry that egg and the quality is getting better all the time. Identity fraud is already illegal in most places so not sure exactly what regulation would be appropriate here.
First of all: you need giant data centres to train the models.
Identity fraud is illegal, and copyright theft is illegal as well; put the blame on the owners of the data centres.
I know from reliable sources that governments know who these folks are.
Not entirely true. You don’t need your own personal data centre, you can use GPU cloud instances for a lot of that stuff. It’s expensive but not so expensive that it would be impossible without being a huge tech company (only 1000s of dollars, not billions). This can be done by anyone with a credit card and some cash to burn. Also, you don’t need to train a model from scratch, you can build on existing models that others have published to cut down on training.
However, to impersonate someone’s voice you don’t need any of that. You only need about 5-10 seconds of audio for a zero-shot impersonation with a pre-trained model. A minute or so for few-shot. This runs on consumer hardware and in some cases even in real time.
Even to build your own model from scratch for high-quality voice audio, you don’t need a huge amount of initial training data. Something like XTTS was trained on about 10-15K hours of English audio, which is actually pretty easy to come by in the public domain. There are a lot of open and public research datasets specifically for this kind of thing, no copyright infringement necessary. If a big tech company wants more audio data than what’s publicly available, they just pay people to record it; no need to steal it, risk copyright claims, or break surveillance laws when they have a budget to get people to record whatever they want.
This tech wasn’t invented by some evil giant tech company stealing everybody’s data, it was mostly geeky computer scientists presenting things at computer speech synthesis conferences. That’s not to say there aren’t a bunch of huge evil tech companies profiting from this or contributing to this kind of tech, but in the context of audio deepfakes being accessible to scammers, it’s not on them and I don’t think that some kind of extra copyright regulation on data centres would do anything about it.
The current industry leader in this space, in terms of companies trying to monetize speech synthesis, is ElevenLabs, a private start-up with only a few dozen employees.
The current tech is not perfect but definitely good enough to fool someone who isn’t thinking too hard over a noisy phone call and a scammer doesn’t need server time or access to a data centre to do it.
Secret phrases could just get wiretapped, and then it’s no longer a secret.
You’re gonna need to change them every day, nay, every conversation.
Might need some RSA-4096 handshake for each phone call for authentication, and might as well do encryption too.
Or we might need to generate some one-time pads and do a challenge-response thing: read 5 digits, have the other person reply with the next 5 digits, then cross those numbers out.
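For fun, here’s a minimal sketch of that pad-and-cross-out scheme in Python. All the function names are made up for illustration; the assumption is both parties hold an identical list of random 5-digit challenge/response pairs, exchanged in advance over a trusted channel and never reused.

```python
import secrets

def generate_pad(groups=10):
    """Generate a shared one-time pad of (challenge, response) 5-digit pairs."""
    return [
        (f"{secrets.randbelow(100000):05d}", f"{secrets.randbelow(100000):05d}")
        for _ in range(groups)
    ]

def verify(pad, used, challenge, response):
    """Check one challenge/response pair, then cross it out to prevent replay."""
    for i, (c, r) in enumerate(pad):
        if i not in used and c == challenge:
            used.add(i)  # cross out this group; a wiretapped pair is now useless
            return r == response
    return False  # unknown or already-crossed-out challenge
```

The caller reads out the next unused challenge, the callee has to answer with the paired response, and both sides cross the pair out afterwards, so recording the call gives an eavesdropper nothing reusable.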
The future is gonna be so weird.
I feel like CSAM might go out of control.
Any video of politicians/candidates doing bad things would be responded with “CNN FAKE NEWS DEEPFAKE”.
Like, you could just murder someone on a 4K camera and claim it’s a deepfake.
We’re so fucked.
Could try the old school approach https://en.m.wikipedia.org/wiki/Shibboleth
It’s the Terminator “your mother is dead” scene.
I’m just imagining that scene, but with the dialogue being Arnold asking Connor’s mom about the scene itself.
“What does Arnold ask you in Terminator 2?”
“I think he asked me about the weather.”
“Your foster parents are dead.”
So, let’s make the formula for concentrated dark matter our secret code.
“Mom, I’m getting fed up with this orgasm!”
HA HA HA fellow humankind member.
This has given me a pointer to a disk location holding a memory of a friend back in university.
Lol, but seriously, back in uni a friend of mine got their social media hacked. The hacker was trying to beg for money and such. One person got suspicious and asked what their favorite beer was, so the scammer texted me: “hey, what is my favorite beer?”
Fortunately the account got locked for some reason, so no money was stolen. Bro still has not recovered it.
All of a sudden, “Poughkeepsie” pops up inside my head as a curious secret distress signal.
I thought ours was “Tahiti”, though.
My previous comment is a reference to the Supernatural TV series. The protagonist brothers Sam and Dean Winchester had Poughkeepsie as a distress signal whenever one of them needed to inform the other to “pack up and run”. One of the situations involved Dean telling Crowley the distress signal so Crowley could enter Sam’s mind and warn him about his ongoing angelic possession.
Now I have a reason to spend the night looking at my Klingon dictionary.
Saying the same thing over and over again in different conversations would be super useful if your goal is to train an AI to listen to calls
Just ask them about something you’ve experienced together, assuming there has been contact.
Remember that time we went out and talked about your father while having tea?
Starting to realize I wouldn’t remember most things
It’s honestly not hard.
“Son, it’s dad, I’ve had to borrow a phone…”
“Before I transfer the money, dad, when’s Gran getting out of the hospital?”
(Gran’s been dead for a decade)
(puts the phone down)
‘Your foster parents are dead.’
El Psy Congroo
Father!!!
I mean, close family members will recognise whether it’s really you, so scammers will have to contact someone else, someone they’ll most likely find through Facebook or other platforms. Still, this seems like a way to make people even more unique to each other.