• fuser@quex.cc (OP, mod) · 10 months ago

    For a small office, AWS, i.e. the “cloud,” is definitely easy and economical; however, the promised economies of scale are not easily realized in larger organizations. There are a number of reasons for this, but two of the main ones are that the provider’s interests are aligned with the subscriber spending as much as possible on compute, storage and I/O, and that most subscribers, especially the larger ones, are notoriously bad at properly measuring, managing and optimizing those resources. Additionally, the promises of manpower reductions are overblown in the glossy slides that the C-suite sees. Sticking your computers in somebody else’s data center saves a bit of upfront grunt work, but you still need everybody else, from the sysadmin up, to deliver the service.

    The transition is inevitable, of course, as organizations of all sizes globally rush to concentrate their compute and storage infrastructure into three major providers and get data centers and bare metal off their balance sheets. The premise that these providers will jack up prices once they have enough control of the market seems reasonable based on where we are today. AWS now charging for public IPv4 addresses and increasing the cost of their Email Service may just be the beginning of what they can get away with. If there is a way to squeeze out smaller providers completely, they will definitely find it.

    • carl_dungeon@lemmy.world · 10 months ago

      I regularly support clients where reserved instances, compliance, and auto scaling by themselves translate into millions of dollars of annual savings over on-prem.
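
      To make that concrete, here’s a minimal sketch of the kind of target-tracking policy behind those auto scaling savings (the group name and CPU target are placeholders, not taken from any real client):

      ```python
      import boto3  # AWS SDK for Python

      autoscaling = boto3.client("autoscaling")

      # Hypothetical Auto Scaling group; the 50% CPU target is illustrative only.
      autoscaling.put_scaling_policy(
          AutoScalingGroupName="app-tier-asg",
          PolicyName="cpu-target-tracking",
          PolicyType="TargetTrackingScaling",
          TargetTrackingConfiguration={
              "PredefinedMetricSpecification": {
                  "PredefinedMetricType": "ASGAverageCPUUtilization"
              },
              # Keep average CPU near the target, so capacity (and spend)
              # follows the actual workload instead of sitting at peak 24/7.
              "TargetValue": 50.0,
          },
      )
      ```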

      Our biggest data storage client has over 7PB in S3 with legally mandated retention and destruction policies. That data is spread across about 100 different projects, and central management of it alone probably saves $100k a month in man-hours for auditing, storage-class transitioning, purging, and inventory.
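
      For anyone who hasn’t touched it, the retention/destruction side is mostly declarative. A rough sketch of one lifecycle rule, with an invented bucket name and made-up retention periods:

      ```python
      import boto3

      s3 = boto3.client("s3")

      # Hypothetical bucket, prefix, and day counts - adjust to the real policy.
      s3.put_bucket_lifecycle_configuration(
          Bucket="example-records-bucket",
          LifecycleConfiguration={
              "Rules": [
                  {
                      "ID": "retention-and-destruction",
                      "Filter": {"Prefix": "records/"},
                      "Status": "Enabled",
                      # Transition objects to a cheaper storage class as they age...
                      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                      # ...and purge them automatically once retention expires.
                      "Expiration": {"Days": 2555},  # roughly 7 years, illustrative
                  }
              ]
          },
      )
      ```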

      I oversee one client with over 200 AWS accounts in their org, and I’m able to do it solo - in the 90s that would have involved dozens of people just to support the hardware and networks.

      You’re not wrong that orgs can do it badly and don’t understand how to leverage the services, but that doesn’t mean it can’t be done well. You know how much a NIST 800-53 ATO costs in labor hours alone in a large org? It’s $$$. Cloud tooling automates so much of that and largely eliminates the hardware and physical-control components entirely.
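
      As one small example of what I mean by tooling (not the whole ATO, obviously), pulling compliance evidence out of AWS Config is a few lines where it used to be a spreadsheet exercise. The control mapping is an assumption here:

      ```python
      import boto3

      config = boto3.client("config")

      # List every non-compliant Config rule in the account. If the rules are
      # mapped to NIST 800-53 controls (e.g. via a conformance pack), this
      # replaces a chunk of manual evidence gathering. Pagination omitted for brevity.
      resp = config.describe_compliance_by_config_rule(ComplianceTypes=["NON_COMPLIANT"])
      for rule in resp["ComplianceByConfigRules"]:
          print(rule["ConfigRuleName"], rule["Compliance"]["ComplianceType"])
      ```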

      Plus, when you consider GovCloud and FedRAMP, that’s stupidly hard to do on your own, but with the cloud you get those things as built-ins.

      • fuser@quex.cc (OP, mod) · 10 months ago

        For auto-scaling to realize material savings, the variation in the workload needs to represent a significant change in the production footprint. Many private-sector applications now being dumped into relatively expensive cloud compute and storage services don’t have that profile. A handful of virtual servers inside a corporate data center serving an internal user base is usually uneconomical to refactor or replace with a lower-cost footprint, at least for now - see the sketch below.
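
        A back-of-the-envelope sketch of that point, with all numbers invented for illustration: a fixed footprint pays for peak capacity around the clock, while an auto-scaled one pays roughly for the area under the demand curve, so a flat internal workload gains nothing.

        ```python
        HOURS_PER_MONTH = 730
        INSTANCE_HOURLY_COST = 0.10  # assumed per-instance hourly rate

        def monthly_cost(fixed_instances: int, hourly_demand: list[float]) -> tuple[float, float]:
            """Return (fixed-footprint cost, auto-scaled cost) for one month."""
            fixed = fixed_instances * INSTANCE_HOURLY_COST * HOURS_PER_MONTH
            avg_demand = sum(hourly_demand) / len(hourly_demand)
            scaled = avg_demand * INSTANCE_HOURLY_COST * HOURS_PER_MONTH
            return fixed, scaled

        # Spiky public-facing workload: 4 instances during business hours, 1 otherwise.
        spiky = [4 if 9 <= h < 17 else 1 for h in range(24)]
        # Flat internal workload: the same handful of servers around the clock.
        flat = [4] * 24

        print(monthly_cost(4, spiky))  # (292.0, 146.0) - auto-scaling halves the bill
        print(monthly_cost(4, flat))   # (292.0, 292.0) - auto-scaling saves nothing
        ```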