• 2 Posts
  • 374 Comments
Joined 5 months ago
Cake day: September 9th, 2025

  • I regularly use GH Copilot with Claude Sonnet at work, and it’s a coin toss whether it’s actually useful, but overall I do find value in it. For my own use at home, I don’t do subscriptions for software, and I’m also not giving these companies my data. I would self-host something like Qwen3 with llama.cpp, but running the flagship MoE model would basically require a $10k GPU and one hell of a PSU. I could probably self-host a smaller model that wouldn’t be nearly as useful, but I’m not sure that would even be worth the effort.
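    For a sense of scale, here’s the rough memory math (the model, bits per weight, and overhead factor are my own ballpark assumptions, not measured figures):

    ```python
    # Back-of-the-envelope VRAM estimate for self-hosting a large MoE model.
    # Assumptions (mine, not exact figures): Qwen3-235B-A22B with 235B total
    # parameters, quantized to ~4.5 bits per weight (a typical Q4-class GGUF),
    # plus ~20% overhead for KV cache and runtime buffers.
    params = 235e9          # total parameters, including inactive experts
    bits_per_weight = 4.5   # effective bits/weight for a Q4-class quant
    overhead = 1.2          # KV cache, activations, runtime buffers

    vram_gb = params * bits_per_weight / 8 / 1e9 * overhead
    print(f"~{vram_gb:.0f} GB of memory needed")  # ~159 GB
    ```

    Well beyond any single consumer GPU, hence the $10k-GPU-and-huge-PSU territory.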

    Therein lies the problem. My company is paying a monthly fee for me to use Copilot that would take like 20 years to pay for even one of the $10k GPUs that I’m likely hogging for minutes at a time, and these companies are going to spend trillions building data centers full of these GPUs. It’s obvious that the price we are paying for AI now doesn’t cover the expense of actually running it, but it might when these models become less resource-intensive, to the extent that they can run on a normal machine. However, in that case, why even run them in data centers instead of just running them on the user’s local machine? I’m just not following how these new data centers are going to pay for themselves, though maybe my math is wrong, or I’m ignorant of the economies of scale of hosting these models for a large user base.
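    Sketching out that “20 years” figure (the per-seat price is an assumption, roughly in the ballpark of Copilot’s enterprise list price; actual pricing varies by plan):

    ```python
    # Rough break-even math: how long one Copilot seat takes to pay off one GPU.
    # $39/user/month is an assumed enterprise-tier price; $10k is the ballpark
    # cost of a single data-center GPU from above.
    copilot_monthly = 39
    gpu_cost = 10_000

    months_to_pay_off = gpu_cost / copilot_monthly  # ≈ 256 months
    years_to_pay_off = months_to_pay_off / 12
    print(f"~{years_to_pay_off:.0f} years")  # ~21 years
    ```

    And that ignores power, cooling, networking, and staff, so the real break-even is even further out.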





  • From a quick reading of the actual law, here are some of the AI uses it prohibits that will apparently “stifle innovation”:

    …use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation

    …to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics

    …the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques

    …the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage

    …the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation







  • Yeah, I liked Jim Crow Joe even less than orange gameshow host. Some of his policies had direct negative effects on my life. Then the election came around and the democrats decided we don’t get a primary at all this time, so we ended up with a choice between an ignorant gameshow host who somehow fancies himself an independent thinker and a cackling hen who puts on no such airs, both of whom are completely loyal to the international billionaire cabal.




  • melfie to Programmer Humor@programming.dev · Scrum · 3 points · 4 days ago

    That’s fair, and having no consequences for unfinished work certainly takes the pressure off. You’re correct, though, that I’ve been on teams where there were real consequences for not getting done what you “committed to” for the sprint, which really made me resent the process. I’ve also been on teams where we happily moved unfinished work over each sprint, and it largely felt like we were just going through the motions. To your point, I suppose the latter is perfectly acceptable, though it felt wrong based on my previous experiences. In either case, I always wonder what the point of time-boxing is in the first place when you can just take it one backlog item at a time with Kanban while still engaging in the other useful practices.


  • At least in Star Trek, the robots would say things like, “I am not programmed to respond in that area.” LLMs will just make shit up, which should really be the highest priority issue to fix if people are going to be expected to use them.

    Using coding agents, it is profoundly annoying when they generate code against an imaginary API, only to tell me that I’m “absolutely right to question this” when I ask for a link to the docs. I also generally find AI search to be useless; DuckDuckGo, for example, does link to sources, but said sources often have no trace of the information presented in the summary.

    Until LLMs can directly cite and include a link to a credible source for every piece of information they present, they’re just not reliable enough to depend on for anything important. Even with sources linked, it would also need to be able to rate and disclose the credibility of every source (e.g., is the study peer reviewed and reproduced, is the sample size adequate, etc.).






  • melfie to Programmer Humor@programming.dev · Scrum · 4 points · 4 days ago

    I started doing Scrum more than 15 years ago and I’ve worked on teams with full-time Scrum masters / project managers where the team genuinely did want to make the process work, but I have yet to be on a team where we did Scrum and figured out how to make it run like a well-oiled machine.

    I have indeed failed and learned from countless sprints over the years, and the main thing I’ve learned is that time-boxing doesn’t work. I have had success with Kanban / “Scrumban” that dispenses with the time-boxing, but does have planning, pointing, stand-up, retrospectives, backlog grooming, review and demo, etc.

    I guess I fundamentally reject the idea that a team should be able to plan and estimate weeks’ worth of work, and that if everything doesn’t go according to plan, then you did something wrong and need to figure out how to do better next time. Software development is design, not manufacturing, and design always involves a lot of exploration and learning. Things not going according to plan is normal, because if you’re doing it right, you’re more knowledgeable today than when you made the plan weeks ago.

    With Kanban, if a 3-point story runs into unforeseen issues and you end up spending an extra day, no big deal. Maybe someone will even swarm on it with you. With Scrum, if that happens one or more times, you’re either putting in extra hours or not fulfilling your commitment for the sprint.

    The problem is that committing to a fixed scope in a fixed timeframe with a fixed team size is always a bad idea, and Scrum makes an entire process out of this bad practice. It’s an improvement over full-on waterfall, but it’s still a flawed process, and in my experience, getting rid of the time-boxing makes all the difference.