Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they’re a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.

ETH Zurich PhD student Andreas Plesner and his colleagues’ new research, available as a preprint paper, focuses on Google’s reCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an “invisible” reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.

Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low “human” confidence rating.

  • sudo@programming.dev · 3 months ago

    Pro-tip for web scrapers: using AI to solve CAPTCHAs is a massive waste of effort and resources. Aim not to be presented with a CAPTCHA in the first place.

    • just_an_average_joe@lemmy.dbzer0.com · 3 months ago

      I think that’s much more difficult than it seems, because usually only residential IPs are the ones that don’t get served those challenges. And if you use a residential proxy too heavily, that IP can also get flagged.

      • sudo@programming.dev · 3 months ago

        That’s when you rotate the proxy. By default, most residential proxies will give you a new exit IP for each request unless you specify otherwise.
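
        To make the “new exit IP per request” point concrete, here is a rough Python sketch using the requests library. The gateway URL and credentials are placeholders for whatever a residential proxy provider actually issues, and the per-connection rotation is behavior of the provider, not something the code guarantees:

            # Rough sketch: route each request through a residential proxy gateway
            # so the provider can assign a different exit IP per connection.
            # PROXY_GATEWAY is a placeholder; substitute the endpoint and
            # credentials your provider actually gives you.
            import requests

            PROXY_GATEWAY = "http://user:password@proxy.example.com:8000"  # placeholder

            def fetch(url: str) -> requests.Response:
                # requests.get() opens a fresh connection each call (no shared Session),
                # so a rotating gateway is free to hand out a new exit IP every time.
                proxies = {"http": PROXY_GATEWAY, "https": PROXY_GATEWAY}
                return requests.get(url, proxies=proxies, timeout=30)

            if __name__ == "__main__":
                for _ in range(3):
                    # httpbin.org/ip echoes the caller's public IP; with per-request
                    # rotation the "origin" field should differ across calls.
                    print(fetch("https://httpbin.org/ip").json())

        If you need several requests to come from the same IP (for example, to keep a login session alive), most providers offer a “sticky session” option instead of per-request rotation; the trade-off is exactly the flagging risk described above.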