What do package managers do? Install packages, obviously! But that is not everything. In my opinion, package managers do enough to be characterized as general automation frameworks. For example:

  • manage configurations and configuration files
  • manage custom compilation options and flags
  • provide isolation or containerization
  • make sure a specific file is present in a specific place given specific conditions
  • change installation files or configuration based on architecture or other conditions

Not all package managers do all of the above, but you get the idea; the sketch just below makes the “file present given conditions” items concrete.
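
As a rough illustration, here is a minimal Python sketch of that kind of conditional, idempotent step (the paths, contents and architecture rule are invented; real package managers implement this through their own hook and trigger machinery):

    #!/usr/bin/env python3
    """Sketch: a conditional, idempotent step of the kind package
    managers and automation frameworks both perform."""
    import platform
    from pathlib import Path

    def ensure_file(path: Path, contents: str) -> bool:
        """Create or fix `path` only if needed; return True if it changed."""
        if path.exists() and path.read_text() == contents:
            return False  # already in the desired state, do nothing
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(contents)
        return True

    # Vary the installed config by architecture, as installers often do.
    arch = platform.machine()  # e.g. "x86_64" or "aarch64"
    conf = "threads = 8\n" if arch == "x86_64" else "threads = 4\n"
    changed = ensure_file(Path("/tmp/demo/app.conf"), conf)
    print("changed" if changed else "already up to date")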

Nix even manages your entire setup with a single configuration file.

It occurred to me that package management could theoretically be managed by an automation framework.

What do I mean by automation framework? Ansible, Chef, Puppet, or Sparrow.

Now imagine if you were to use one of those package managers as an automation framework. For most of them, it would suck (Nix is a notable exception). So maybe common package managers are just bad automation frameworks?

What if we used an automation framework as a package manager? Right now, it might also suck, but only because the package definitions don’t exist yet. Maybe it is not a bad experiment to have a distribution managed by a modern automation framework like Sparrow.

What are the benefits?

  • usable on other distributions
  • more easily create your own packages on the fly
  • greater customization and configurability
  • use a known, easy-to-read programming language to define packages and other functions, instead of a DSL
  • your package manager can easily automate just about any task using the same syntax and framework

  • iopq@lemmy.world · 5 months ago

    Well, the Nix way is declarative. You could just declare “I want a file with the following contents” and it will work.

    But it’s better if someone already wrote an option that lets you enable the features you want. So ideally, if it doesn’t exist, you write it yourself. That way you can contribute it to nixpkgs for everyone to use.
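
    To show the shape of that idea, here is a hedged sketch in plain Python (not Nix; the names are invented) of how a single module option can expand into concrete file declarations:

      """Sketch: one declarative 'enable' flag, in the spirit of a NixOS
      module option, expanding into the files the framework would write."""

      def myapp_module(config: dict) -> dict:
          # Map the high-level option onto concrete files to realize.
          if not config.get("myapp_enable", False):
              return {}
          port = config.get("myapp_port", 8080)
          return {"/etc/myapp.conf": f"port = {port}\n"}

      # The user only declares intent...
      config = {"myapp_enable": True, "myapp_port": 9090}

      # ...and the framework realizes it (printed here instead of written).
      for path, contents in myapp_module(config).items():
          print(f"would write {path}: {contents!r}")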

  • callcc@lemmy.world · 5 months ago

    Professional sysadmin here who has been trying to create Ansible roles and playbooks to re-create all of his VMs.

    I have spent a lot of time “packaging” custom web applications (and other stuff) for Ubuntu systems and building complex configurations for a system of interacting hosts. Once I had finished writing a role to deploy or update one of those applications, I often found it very hard to use for maintenance. The biggest problem was that I couldn’t remember how to invoke the roles or playbooks to get my desired outcome, or what state my systems were in. Another problem with Ansible for my use case is its slowness: installing a rather complex package might take minutes on one host.

    All in all, I found that I had been doing things the wrong way. Of course, it’s nice having all the procedures documented somehow, but if you don’t remember what state your machines are in and what tags and roles to apply, it won’t be of practical help in your day-to-day work. My workload is maintaining a bunch of VMs with mostly different sets of packages and config installed, so Ansible doesn’t get to play to its strength of executing things on multiple machines in parallel.

    I’m now switching over to a model where I only use Ansible to manage installation and the configuration tying machines together, and where I use Debian packaging for, well, packaging. Although it’s pretty tough to get into, once you have cleared the first hurdles, things fall into place easily. You can do so many things with Debian packaging: install custom systemd service units, depend on other packages, distribute customized config files, and install custom management scripts. There is even a way to ask questions during installation, both interactively and non-interactively (debconf). Since you target your package at a specific OS and version, you can rely on files being in their usual places (FHS), which makes configuration easy. The nice thing about this model is that I can now use the tools I’ve been using for ages to install, update, uninstall, inspect and configure things. On top of that, I could now easily distribute our weird-to-install software to third parties, instead of relying on a broken and long installation procedure.
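
    For anyone curious what the moving parts look like, here is a rough, hypothetical sketch of staging and building such a .deb (the package name, paths and unit contents are invented, and it assumes a Debian/Ubuntu host with dpkg-deb available):

      #!/usr/bin/env python3
      """Sketch: stage a minimal Debian package tree and build it."""
      import subprocess
      from pathlib import Path

      root = Path("mywebapp_1.0-1")  # staging directory, hypothetical name

      # Control metadata: this is where dependencies are declared.
      (root / "DEBIAN").mkdir(parents=True, exist_ok=True)
      (root / "DEBIAN" / "control").write_text(
          "Package: mywebapp\n"
          "Version: 1.0-1\n"
          "Architecture: all\n"
          "Maintainer: Example Admin <admin@example.org>\n"
          "Depends: python3, nginx\n"
          "Description: custom web application packaged as a .deb\n"
      )

      # Mark the config as a conffile so dpkg preserves local edits on upgrade.
      (root / "DEBIAN" / "conffiles").write_text("/etc/mywebapp/app.conf\n")
      (root / "etc" / "mywebapp").mkdir(parents=True, exist_ok=True)
      (root / "etc" / "mywebapp" / "app.conf").write_text("listen_port = 8080\n")

      # A custom systemd unit, shipped at its usual FHS location.
      unit = root / "lib" / "systemd" / "system" / "mywebapp.service"
      unit.parent.mkdir(parents=True, exist_ok=True)
      unit.write_text(
          "[Unit]\nDescription=My web app\n\n"
          "[Service]\nExecStart=/usr/bin/mywebapp\n\n"
          "[Install]\nWantedBy=multi-user.target\n"
      )

      # Build it; install later with: apt install ./mywebapp_1.0-1.deb
      subprocess.run(["dpkg-deb", "--build", "--root-owner-group", str(root)],
                     check=True)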

    Sometimes we should stop reinventing the wheel and just try to understand what previous generations have built (.deb, SQL, Unix, etc.). Sure, the old ways are bad in many ways, but they often get the work done.

    That being said, I’m happy for people to work on things like Nix, Guix, Ansible, etc. They are just not the right tools for my set of skills and problems.

    • MajorHavoc@programming.dev · 5 months ago

      I’m now switching over to a model where I only use Ansible to manage installation and the configuration tying machines together, and where I use Debian packaging for, well, packaging.

      Makes sense. I imagine the push model of Ansible had a lot to do with the speed issues? I can imagine how a solid .deb would be much more performant.

      Sure, the old ways are bad in many ways but they often get the work done.

      As someone who unapologetically uses Makefiles with even the newest and shiniest tech, I couldn’t agree more with this sentiment!

      • callcc@lemmy.world · 5 months ago

        Makes sense. I imagine the push model of Ansible had a lot to do with the speed issues? I can imagine how a solid .deb would be much more performant.

        It’s part of the problem, but the other part is that you have to re-do the package building all the time. Alternatively, you fiddle with tags and only run part of your roles (which is a hassle anyway, because Ansible does not really have good abstractions that help with encapsulation).

        • MajorHavoc@programming.dev · 5 months ago

          I’ve also struggled with Ansible tags, and said good riddance, at least for my use cases.

          I ended up breaking my playbooks up into my own relatively small roles, and then reusing those instead. It’s not perfect, but it feels like progress. I still usually make changes, but they’re not as invasive, since I have found it pretty easy to turn a role on or off.

    • corsicanguppy@lemmy.ca · 5 months ago

      ask questions during installation, both interactively and non-interactively (debconf)

      Debs have always had weak validation, but here you’re introducing a weakness in consistency, the second pillar of enterprise packaging. You’re not moving the art forward by kicking out its legs.

      • callcc@lemmy.world · 5 months ago

        Not sure I understand your criticism. Debs definitely help compared to how I was doing things before. Adding some form of parameters (e.g. the hostname used by some web application) to the package is necessary, and I’d rather have it in the form of debconf than have to edit a config after installation.

        Do you have an alternative?

  • hallettj@beehaw.org · 5 months ago

    Well, you’re really feeding my Nix confirmation bias here. I used to use Ansible with my dotfiles to configure my personal computers, to make it easy to get set up on a new machine or server shell account. But it wasn’t great, because I would have to remember to update my Ansible config whenever I installed stuff with my OS package manager (and usually I did not remember). Then along came Nix and Home Manager, which combined package management and configuration management in exactly the way I wanted. Now my config stays in sync, because editing it is how I install stuff.

    Nix with either Home Manager or NixOps checks off all of the benefits you listed, except arguably using a “known” programming language. What are you waiting for?

    • matcha_addictOP · 5 months ago

      I am very interested in Nix. The only thing making me hesitate is that it is a bit opinionated. There’s a “Nix way of doing things” rather than a general automation framework that can do anything. Am I wrong in thinking this as an outside observer?

      • hallettj@beehaw.org · 4 months ago

        I haven’t worked with the deployment tools, which I think would make the most direct comparison to other automation frameworks, so I don’t know how comparatively opinionated they are. I suspect it varies between those tools.

        Using NixOS or Home Manager, there is a certain way of doing things, but these tools are intended to be opinionated. Packages and modules come either from nixpkgs or from third-party flakes that package software for Nix. Services are usually orchestrated through systemd units, which come from Nix packages.

        You can do anything with Nix. The tools and frameworks encourage certain ways of doing things, but that depends on the framework, and you can always build a new framework that works differently. Since they are all based on the common concepts of Nix expressions, derivations, and in many cases flakes, you get a certain baseline of interoperability between frameworks.

  • MajorHavoc@programming.dev · 5 months ago

    I think a big part of what you’re getting at is that we (as humanity) deserve a pull-model, human-readable, platform-independent, outcome-descriptive / desired-state / idempotent computer configuration standard (RFC).

    Quite a few package managers and orchestration tools almost get us there. Ironically, since package managers have been around longer, on average, many of them scratch that itch as well as or better than many of the orchestration platforms.

    I’ve done it both ways, many times, and I currently prefer a poor-fit-to-task orchestration tool over an excellent-fit-to-task installer package. And I’ll take a well-written Makefile over both of the others combined. That may just be my “old man yells at cloud” energy, though.

    • bitwolf@lemmy.one · 5 months ago

      Funny you brought up Makefiles.

      We’ve been churning through Java technical debt for the past year, and a huge pain point is that a lot of configuration gets lost within IntelliJ.

      Most of this is env vars and JVM args. These could be wonderfully documented using a .env.example and a well-written Makefile.

      As a middle career technology professional just before reading this comment and thread I had the thought:

      “Makefiles really are the only correct way to distribute software.”

      Even with OCIs and, soon, Wasm Components, a Makefile can still cover the constant changes in development trends. Makefiles can also wire together Bash scripts used for tasks and maintenance. Bash and Make really are some of the best Swiss Army knives we have.

      • corsicanguppy@lemmy.ca · 5 months ago

        We’ve been churning through Java technical debt for the past year, and a huge pain point is that a lot of configuration gets lost within IntelliJ.

        You’ve lost Single Source of Truth.

        “Makefiles really are the only correct way to distribute software.”

        Nope. They’re the best way to build your immutable artifacts. Building packages, though, should not be done inside your Makefile but by a packaging layer that sits outside it: horse before the cart, never after. I say this knowing that older and crippled packaging formats and processes get this wrong. We have three decades of knowledge to leverage and we still get dreck like IP5, but it’s a revelation to understand that you need to keep the layers distinct.

  • MajorHavoc@programming.dev · 5 months ago

    Using a programming language to express configuration outcomes, while convenient, has caused me plenty of regret.

    I now prefer a domain-specific language, so long as it expresses the necessary desired end state rather than the steps to get there, whenever possible.

    Of course, I’ll trade a domain-specific language for an RFC-defined shared standard in a New York minute, given the chance. But of course, “now there are 16 competing standards” applies to our progress toward this ideal.

    • matcha_addictOP · 5 months ago

      Why has it caused you regret?

      Nothing is wrong with a DSL inherently; it is just harder to get right. Maybe Nix manages it (judging by its popularity; I haven’t used it enough to judge), but in general, package manager DSLs rarely end up better, except in a subset of cases.

      What are those 16 standards? I haven’t heard of this before, but it sounds interesting!

      Although this XKCD is funny, I do not agree with its premise. The presence of many competing standards is usually due to some underlying issue, or to a good reason.

      For example, the competing standards in package management exist more because distributions reinvent their own implementations of almost the same thing. Debian and Fedora, for example, aren’t doing anything drastically different. Moreover, they don’t make their packaging easy to use outside of their systems.

      • MajorHavoc@programming.dev · 5 months ago

        Why has it caused you regret?

        Any configuration coded in, for example, Python 3.12 will cease to be useful in October of 2028, when it will become quite difficult to find an active, usable Python 3.12 runtime.

        https://endoflife.date/python

        Whereas the same configuration in an Ansible playbook will likely continue to function just fine under the latest Ansible and Python versions.

        I’ve been through this rodeo enough times that I trust domain-specific meta-languages much more than direct program code for configuration.

        On the XKCD: yeah, it’s a joke. Randall Munroe isn’t actually advocating that we stop trying to fix our standards.

        I refer to it here as a reminder that there will always be fatigue with each new RFC, and that’s okay.

        Also, not every RFC will turn out as timeless and eternally useful as RFC 2324. And there’s no excuse for my bringing it up here, except that I like it.

        • callcc@lemmy.world · 5 months ago

          What about using standard shell or Bash? I know they are not easy to use correctly, but at least they won’t break every few years.

          • MajorHavoc@programming.dev · 5 months ago

            Great question!

            Bash (on vanilla Linux) lacks functionality to verify each change after making it.

            (Edit: For configuration drift detection, which I need in various contexts.)

            Yes, I can verify each change I make, but it’s a huge pain in the ass in Bash. And in my experience, there’s a 100% chance the next person to update the config won’t understand that they need to update the matching verify step.

            In contrast, in Ansible, every step has “modify” and “verify” modes built-in. If I change the “modify” step, the next “verify” run will be correct, automatically, because one config line defines both.
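
            Roughly, the pattern looks like this (a hedged Python sketch, not Ansible internals; the file list is invented), with one declaration driving both the “modify” run and the “verify” run:

              """Sketch: one desired-state declaration, two modes."""
              from pathlib import Path

              DESIRED = {Path("/tmp/demo/motd"): "welcome\n"}  # single source of truth

              def run(check_only: bool) -> list[str]:
                  drifted = []
                  for path, contents in DESIRED.items():
                      if path.exists() and path.read_text() == contents:
                          continue  # already in the desired state
                      drifted.append(str(path))
                      if not check_only:  # "modify" mode converges the drift
                          path.parent.mkdir(parents=True, exist_ok=True)
                          path.write_text(contents)
                  return drifted

              print("drifted:", run(check_only=True))   # verify: report only
              run(check_only=False)                      # modify: converge
              print("drifted:", run(check_only=True))   # verify again: empty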

            The closest thing in any shell that I am aware of is “Desired State Config”, which currently only supports PowerShell. It leaves a lot to be desired: configs in v1 and v2 are live code, which causes problems. But version 3 of Desired State Config (currently in alpha) looks quite promising, and is already designed to be an extension of Ansible and other orchestration engines.

            DSC, if it becomes an RFC, could become the best of both worlds. I dream of starting out doing DSC in my preferred shell, then dumping those configs on a full-scale orchestration pull server when I need it.

            Edit: So many typos.

            • callcc@lemmy.world · 5 months ago

              Not sure I really understand the issue here. Is it about installing or modifying parts of existing config files? I try to use config.d facilities as much as possible for this problem.

              • MajorHavoc@programming.dev · 5 months ago

                “Configuration drift” is when someone (often myself, in a moment of inattention) manually changes things from how I want them.

                I need to keep drift to a minimum, because I often build machines that are extremely similar to the previous build, and I won’t remember the manual change next time unless I detect the drift and correct it in my configuration management solution.

  • corsicanguppy@lemmy.ca · 5 months ago

    It occurred to me that package management could theoretically be managed by an automation framework.

    This is how I know you don’t understand proper package management. The validation cornerstone seems to have been forgotten here.

    But you’re close to something. The state of proper package management had us managing systems by updating packages (using files, triggers and templates), with more responsive code (they call it ‘self-healing’ now) than Ansible, about 8 years before anyone even invented the term ‘devops’.

    It’s amazing how forgetting the past lets us repeat it. That Ansible is such a dog, almost 25 years after make and sed were most of our engine, shows how much potential Mike squandered.

    • matcha_addictOP · 5 months ago

      I did not intend to make an exhaustive list of everything package managers do, but validation is a good thing to point out. It further proves that we are dealing with generic automation frameworks.

      And just to clarify, I did not mean that something like ansible in its current form is suitable.

  • gnumdk@lemmy.ml · 5 months ago

    We use Puppet to manage Linux and Windows servers, plus Linux, Mac and Windows clients (drives, registry/defaults/dconf, …). A package manager can’t handle this properly ;)

  • Eryn6844@beehaw.org · 5 months ago

    What the fuck is Sparrow? Also, SaltStack does all this. Nothing sucks, you are just doing it wrong.