I have tried, unsuccessfully, to get various AI models to create a script that will curl or wget the latest Ubuntu LTS desktop torrent, even if that LTS version updates in the future (beyond 24.04.1 LTS). The purpose is that I would like to seed whatever the latest LTS torrent is, and I don’t want to have to keep checking the Ubuntu page for updates; I want it automatic. I know that LTS is slow to change versions, but I am annoyed that AI can’t just write a decent script for this.

I have also downloaded rtorrent as a command-line client, and I will deal with how to make sure the latest LTS is used, as opposed to the prior one, with a different script later — but that’s not what I’m trying to do now.

I am not asking for a human to create this script for me. I am asking why AI models keep getting this so wrong. I’ve tried ChatGPT-4o, DeepSeek, other local models, and reasoning models. They all fail. And when I execute their code, get errors, and show those errors back to the models, they still fail, many times in a row. I want to ask Lemmy whether getting an answer is theoretically possible with the right prompt, or whether AI just sucks at coding.

This shouldn’t be too hard to do. At https://www.releases.ubuntu.com, they list the releases. When you curl the webpage, you get a list of the releases with version numbers, some marked LTS. New versions always have larger numbers. At https://ubuntu.com/download/alternative-downloads, they list the torrents. Also, all desktop release torrents follow the format https://www.releases.ubuntu.com/XX.XX/*desktop*.torrent. I’ve tried to teach these models all of this and to just create a script for me, and holy shit, it’s been annoying. The models are incredibly stupid with scripting.
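For what it’s worth, the version-picking logic described above can be sketched in a few lines of Python. This is a sketch, not a tested scraper: it assumes the releases index contains strings like “Ubuntu 24.04.1 LTS”, and the sample HTML below is illustrative, not the live page.

```python
import re

def latest_lts(index_html: str) -> str:
    """Return the highest LTS version number found in a releases index page."""
    # Match version strings immediately followed by "LTS", e.g. "24.04.1 LTS"
    versions = re.findall(r'(\d{2}\.\d{2}(?:\.\d+)?)\s+LTS', index_html)
    if not versions:
        raise ValueError("no LTS releases found")
    # Compare numerically, not as strings: "24.04.1" -> (24, 4, 1)
    return max(versions, key=lambda v: tuple(int(p) for p in v.split('.')))

# Demo on a snippet shaped like the releases index (sample data, not the real page):
sample = '''
<a href="20.04/">Ubuntu 20.04.6 LTS (Focal Fossa)</a>
<a href="22.04/">Ubuntu 22.04.5 LTS (Jammy Jellyfish)</a>
<a href="24.04/">Ubuntu 24.04.1 LTS (Noble Numbat)</a>
<a href="24.10/">Ubuntu 24.10 (Oracular Oriole)</a>
'''
ver = latest_lts(sample)
major = '.'.join(ver.split('.')[:2])  # "24.04.1" -> "24.04" (the directory name)
print(ver)                            # 24.04.1
print(f"https://www.releases.ubuntu.com/{major}/")
```

Note the non-LTS 24.10 entry is ignored because the regex requires the literal “LTS” after the version number; a real script would fetch the index with curl or urllib first.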

I’m not a computer programmer or developer; I’m picking up more coding here and there just because I want to do certain things in Linux. But I just don’t understand why this is so difficult.

So my question is, is there ANY prompt for ANY model that will output successful code for this seemingly easy task, or is AI still too stupid to do this?

  • secretlyaddictedtolinux@lemmy.world (OP) · 1 day ago

    My LLMs don’t have internet access yet. I was trying to get https://github.com/open-webui/ working in Docker but struggled to get the backend to connect to ollama, which itself was working. I’ve mostly been using LM Studio recently, which I don’t think can search online, or if it can, I haven’t figured that out yet.

    • hendrik@palaver.p3x.de · edited · 1 day ago

      Fair enough. Without internet access, though, the entire task becomes impossible: the model has to make up some URL, which will likely turn out to be incorrect. Plus, it doesn’t really know what year or month it is…

      So, with my prompt and ChatGPT, it returns the correct command.

      • secretlyaddictedtolinux@lemmy.world (OP) · 1 day ago

        So I should try to get open-webui working and then have it generate something. Is there an equally good alternative to open-webui? I could keep trying to troubleshoot why it can’t find ollama running on localhost:11434 or whatever it was, but I spent hours on it and there didn’t seem to be a logical reason for it, although there clearly was one and I just wasn’t smart enough to figure it out.

        • hendrik@palaver.p3x.de · edited · 1 day ago

          I’m not sure what you’re trying to accomplish. If you ask me, and it’s the task from your post: just write the five lines of Python or Bash yourself to scrape the URL from the website (or better, find some API endpoint, insert the current year into the known URL format, or find a URL that always returns the current file), download the torrent into the directory your torrent client watches for incoming files, and be done with it. Or find out how the mirror pages do it automatically.
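The few lines described above might look like this in Python. The filename in the sample listing is hypothetical, and a real run would fetch the version directory’s listing over the network first; here the download step is only shown in a comment.

```python
import re

def find_desktop_torrent(listing_html: str, base_url: str) -> str:
    """Pick the first link ending in 'desktop...torrent' from a directory listing."""
    m = re.search(r'href="([^"]*desktop[^"]*\.torrent)"', listing_html)
    if not m:
        raise ValueError("no desktop torrent link found")
    return base_url + m.group(1)

# Demo with a snippet shaped like a release directory listing (hypothetical filename):
sample = '<a href="ubuntu-24.04.1-desktop-amd64.iso.torrent">torrent</a>'
url = find_desktop_torrent(sample, "https://www.releases.ubuntu.com/24.04/")
print(url)

# To actually drop it into rtorrent's watch directory, something like:
#   import urllib.request
#   urllib.request.urlretrieve(url, "/path/to/watch/" + url.rsplit("/", 1)[-1])
```

rtorrent can be configured to watch a directory for incoming .torrent files, so saving the file there is all the “integration” the script needs.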

          Otherwise I’d recommend an AI framework like LangChain. You can create an agent to help with terminal commands, attach a shell, give it a Google search tool and a web-scraping tool, and come up with the few hundred lines of necessary Python code plus the prompts… Then you can attach that to one of the inference endpoints. Pretty much anything will do with these frameworks; I believe they support Open WebUI, ollama, and anything compatible with the OpenAI API…