Mozilla opposes this proposal because it contradicts our principles and vision for the Web.

Any browser, server, or publisher that implements common standards is automatically part of the Web:

Standards themselves aim to avoid assumptions about the underlying hardware or software that might restrict where they can be deployed. This means that no single party decides which form-factors, devices, operating systems, and browsers may access the Web. It gives people more choices, and thus more avenues to overcome personal obstacles to access. Choices in assistive technology, localization, form-factor, and price, combined with thoughtful design of the standards themselves, all permit a wildly diverse group of people to reach the same Web.

Mechanisms that attempt to restrict these choices are harmful to the openness of the Web ecosystem and are not good for users.

Additionally, the use cases listed depend on the ability to “detect non-human traffic”, which, as described, would likely obstruct many existing uses of the Web, such as assistive technologies, automatic testing, and archiving & search engine spiders. These depend on tools being able to receive content intended for humans, and then transform, test, index, and summarize that content for humans. The safeguards in the proposal (e.g., “holdback”, or randomly failing to produce an attestation) are unlikely to be effective, and are inadequate to address these concerns.
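
To make the concern about holdback concrete, here is a minimal sketch (the names and responses are illustrative assumptions, not taken from the proposal) of how a site could react to an attestation check. A client that is randomly held back looks the same as any other client that cannot attest, so a site can simply degrade or challenge all non-attested traffic, including the assistive technologies, test tools, and crawlers mentioned above.

```typescript
// Hypothetical sketch only: names are illustrative, not from the WEI proposal.
type AttestationResult = "valid" | "invalid" | "absent";

function serveRequest(attestation: AttestationResult): string {
  switch (attestation) {
    case "valid":
      // Attested browsers get the full experience.
      return "200 OK: full content";
    case "invalid":
    case "absent":
      // A client randomly held back from attesting is indistinguishable from a
      // screen reader, test harness, or archiving crawler, so a site can
      // degrade or challenge all non-attested traffic together.
      return "200 OK: degraded content plus a challenge";
  }
}

// The held-back browser and the search-engine spider receive the same treatment:
console.log(serveRequest("absent")); // "200 OK: degraded content plus a challenge"
```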

Detecting fraud and invalid traffic is a challenging problem that we’re interested in helping address. However, this proposal does not explain how it will make practical progress on the listed use cases, and there are clear downsides to adopting it.

  • HarkMahlberg@kbin.social · 1 year ago

    You know, Mozilla’s statement is actually pretty prescient. I haven’t seen much discussion about this that didn’t center around AdBlock or DRM or whatnot. But yeah, web development as a software discipline would be harmed by WEI too.

    which, as described, would likely obstruct many existing uses of the Web, such as assistive technologies, automatic testing, and archiving & search engine spiders. These depend on tools being able to receive content intended for humans, and then transform, test, index, and summarize that content for humans.

    Like, imagine if Google locked Inspect Element behind the site you’re visiting requiring the “human” signature… or the opposite!

    • MalReynolds@slrpnk.net · 1 year ago

      obstruct … search engine spiders

      An interesting way to nobble search competitors, and potential LLM competitors too. Strategic, now that their search results have gone to shit due to profiting from SEO and content farming. Could see them having an advantage on fresh data.

    • MonkderZweite@feddit.ch · 1 year ago

      Their vision was dead after HTML3. See dark mode, responsive web design, login: you have to implement everything yourself. And 90% of the implementations suck or leave something out.