Also, captchas are meant to gather data to train on. That’s why we used to get pictures of handwriting, but that’s basically a solved problem now. It’s why we see a lot of self-driving-vehicle-focused ones today, like identifying buses, bikes, traffic lights/signs, and that sort of thing.
Captchas get humans to label data so ML algorithms can train on it, eventually becoming able to pass the tests themselves.
Now it’s making me identify developed pictures from a photo negative. I’m not quite sure what they’re going to do with that training since computers can already perform that task.
Also the “select the image below containing the example image above.”
Like… we already have computers that can recognize that two images are the same.
So that’s almost certainly trying to gather data to defeat data poisoning. The other image is probably slightly altered in a way you can’t detect.
A common OCR tactic is to turn the image negative and bump the contrast to make text easier to recognize.
It could be a precursor for that step.
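That invert-and-bump-contrast preprocessing step can be sketched in a few lines. This is a toy version operating on raw 0–255 grayscale values (the helper names are hypothetical, not any real library's API; a real pipeline would use something like Pillow or OpenCV):

```python
def invert(pixels):
    """Turn a photo negative into a positive (or vice versa)."""
    return [255 - p for p in pixels]

def bump_contrast(pixels, factor=1.5):
    """Stretch pixel values away from mid-gray (128), clamped to 0-255,
    so text and background separate more cleanly for OCR."""
    return [max(0, min(255, round((p - 128) * factor + 128))) for p in pixels]

# A strip of pixels from a "negative": light text (high values)
# on a dark background (low values).
negative = [30, 40, 220, 230, 35, 210]
positive = invert(negative)          # text becomes dark on a light background
enhanced = bump_contrast(positive)   # push text and background further apart
```

After both steps, dark pixels get darker and light pixels get lighter, which is exactly the kind of cleanup that makes text easier for a recognizer to pick out.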