In all three cases, all they are doing is providing a platform. The blame for the scale of the outages we’ve seen should be placed on the companies that opted to use these providers, and only them, without any regard for redundancy or design.

Cloudflare - There are other CDNs out there, such as Akamai and CloudFront

AWS - they have multiple regions, not just us-east-1. There are also GCP and Azure, each with multiple regions

CrowdStrike - Okay, there aren’t as many EDRs that do what they do, but it’s still the same SPOF basket as the others

In every case, I would argue it’s inexperience, greed, and the path of least resistance that drive companies to use these large providers, and then to blame the providers when something goes wrong rather than themselves for having chosen those platforms. I understand that it’s easier to blame a single entity, but that shouldn’t absolve the companies that use them of fault.

  • TheAsianDonKnots@lemmy.zip
    2 months ago

    Retail and warehouse applications. When everything was in-house, application teams were used to getting 60-70ms coast to coast, so their monitoring systems were built around that. Someone sold them on some bullshit cloud metrics and they want what they were sold; they don’t want to simply adjust their threshold for latency.

    What they REALLY want is room on my 100G private links for free.

    • ramble81@lemmy.zipOP
      16 days ago

      1G, 100G, it doesn’t much matter, since you can’t exceed the speed of light. Cross-country will always be 60-70ms.
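
      As a rough sanity check on the speed-of-light point, here is a minimal back-of-the-envelope sketch. The path length and fiber slowdown factor are assumptions for illustration, not figures from the thread:

      ```python
      # Propagation delay over a cross-country fiber path.
      # Assumed numbers: ~4,500 km of fiber between US coasts,
      # light traveling at roughly 2/3 of c in glass (index ~1.5).

      SPEED_OF_LIGHT_KM_S = 300_000   # vacuum, approximate
      FIBER_FACTOR = 2 / 3            # effective speed in fiber
      PATH_KM = 4_500                 # assumed coast-to-coast route

      one_way_ms = PATH_KM / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000
      round_trip_ms = 2 * one_way_ms

      print(f"one-way: {one_way_ms:.1f} ms, round trip: {round_trip_ms:.1f} ms")
      # → one-way: 22.5 ms, round trip: 45.0 ms
      ```

      That ~45ms round trip is the physical floor; real routes are longer than great-circle distance and add equipment delay, which is how you land in the observed 60-70ms range regardless of link capacity.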

      • TheAsianDonKnots@lemmy.zip
        15 days ago

        I’m talking about capacity, not speed. The bonus is that they’d get back down to the 60ms response times they’re looking for, the ones they had before they decided to go over the public internet, which bounces them all over the country.