Slow June, people voting with their feet amid this AI craze, or something else?

  • Platomus@lemm.ee · 38 points · 1 year ago

    It’s because it’s summer and students aren’t using it to cheat on their assignments anymore.

    • TheEllimist@lemmy.world · 4 points · 1 year ago

      It’s definitely this. Except for the kids taking summer classes, who statistically probably have higher rates of cheating.

  • wackypants@kbin.social · 12 points · 1 year ago

    It’s Summer. Students are on break, lots of people on vacation, etc. Let’s wait to see if the trend persists before declaring another AI winter.

    • twicetwotimes@lemmy.world · 2 points · 1 year ago

      Agreed. I think being between academic years is likely a much bigger factor than we realize. I’m a college professor, and at the end of spring quarter we had a lot of conversations with undergrads, grad students, and faculty about how people are actually using AI.

      Literally every undergrad student I spoke with said they use it for every written assignment (for the most part in legitimate, non-cheating ways, as an educational resource). Most students used it for all or most of their programming assignments. Most use it to summarize challenging or long readings. Some absolutely use it to just do all their work for them, though fewer than you might expect.

      I’d be pretty surprised if there isn’t a significant bounce-back in September.

      • sndrtj@feddit.nl · 2 points · 1 year ago

        This worries me, though. I’ve found ChatGPT to be wrong on basically every fact-based question I’ve asked it. Sometimes subtly, sometimes completely, but it always hallucinates. You cannot use it as a source of truth.

        • twicetwotimes@lemmy.world · 6 points · 1 year ago

          Honestly I feel like at this point its unreliability is kind of helpful for students. They have to learn how to use it most effectively as a tool for producing their own work and not a replacement. In my classes the more relevant “problem” for students is that GPT produces written work that on the surface feels composed and sensible but is actually straight up garbage. That’s good. They turn that in, it’s extremely obvious to me, and they get an F (because that’s the grade AI earned with the garbage paper).

          But they can and should use it for things it’s great at: reword this long sentence I’m having trouble phrasing concisely, help me think of a title for my paper, take my pseudocode and help me turn it into a while loop in R, generate a list of current researchers on this topic and two of their most recent publications, translate this paragraph of writing from Foucault/Marx/Bourdieu/some-good-thinker-and-bad-writer into simpler wording…

          I have a calculator in my pocket even though my teachers assured me I wouldn’t. Students will have access to and use AI forever now. The worry should be that we fail to teach them the difference between a homework-bot and an incredible, versatile tool to leverage.

      • afraid_of_zombies2@lemmy.world · 2 points · 1 year ago

        I have been using it to do deep dives into subjects, especially text analysis. Do you want to know the entire vocabulary of the Gospel of Mark in the original Greek, for example? 1,080 words. Now how does this compare to a section of Plato’s Republic of the same size? About 6-7x as large.

        So right there we can see why Mark is often viewed as a direct text while Plato is viewed as a more ambiguous writer.
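
        That kind of vocabulary count is easy to sanity-check yourself rather than trusting the model’s arithmetic. A minimal sketch (the sample strings below are made-up stand-ins, not the actual Greek texts):

```python
import re

def vocab_size(text: str) -> int:
    """Number of distinct word tokens in a text (case-folded)."""
    return len(set(re.findall(r"\w+", text.lower())))

# Toy stand-ins for the real corpora; the actual comparison would load
# the Greek text of Mark and an equal-length slice of the Republic.
mark_like = "the dog ran and the dog sat and the dog ran"
plato_like = "justice virtue soul city guardian truth form knowledge opinion good"

print(vocab_size(mark_like))   # repetitive text: small vocabulary
print(vocab_size(plato_like))  # varied text: larger vocabulary in the same span
```

A lower distinct-word count over a same-length span is one (crude) proxy for a plainer, more direct style.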

        • InverseParallax@lemmy.world · 1 point · 1 year ago

          Mark is a direct and terse narrative of a specific segment of Jesus’s life and teachings, while the Republic is an attempt to expound a philosophy and system of government.

          I agree with you, but I’m not sure I’d call him a more ambiguous writer. Mark is a “just the facts, ma’am” notation of near-contemporary verbal histories, with the other gospels being attempts to add on contemporary allegories and legends attributed by different groups to Jesus (or John, who just did his own thing).

          I’d be curious about a comparison with the Apology and Crito, similar narratives of a similar figure in a specific segment of his life (the end of it). They’re fairly direct and terse, as Socrates was portrayed as being direct and terse, but otherwise the styles are similar, as (throws on hard hat) Jesus appears to have been attributed many of the allegories of Socrates in the recorded gospels, which makes sense if you’re trying to appeal to followers of Hellenic religions such as those in Rome and Greece.

    • potustheplant@lemmy.world · 0 up, 4 down · 1 year ago

      I think you’re being a bit self-centered; it’s always going to be summer somewhere. This is a tool used globally.

      • Smatt@lemmy.world · 5 points · 1 year ago

        I see your point but:

        1. It’s not always summer somewhere: the Northern and Southern hemispheres are both in spring/fall for half the year.
        2. The global North has way more population than the South.
      • Bak@lemmy.world · 0 points · 1 year ago

        It’s summer somewhere half the time, but thank you for reminding them the southern hemisphere exists!

  • i_lost_my_bagel@seriously.iamincredibly.gay · 8 points · 1 year ago

    I tried it for about 20 minutes

    Had it do a few funny things

    Thought huh that’s neat

    Went on with life

    Since then the only times I’ve thought about ChatGPT has been seeing people using it in classes I’m in and just sitting here thinking “this is a fucking introductory course and you’re already cheating?”

    • idolofdust@lemmy.world · 2 points · 1 year ago

      I’m in discrete mathematics right now and have overheard way too many students hitting a brick wall with the current state of AI chatbots, as if that’s what they’d used almost exclusively up to this point.

  • Poob@lemmy.ca · 7 up, 1 down · 1 year ago

    It’s really fucking annoying getting “As an AI language model, I don’t have personal opinions, emotions, or preferences. I can provide you with information and different perspectives on…” at the beginning of every prompt, followed by the driest, most bland answer imaginable.

    • afraid_of_zombies2@lemmy.world · 5 points · 1 year ago

      It definitely has its uses, but it also has massive annoyances, as you pointed out. One thing really bothered me: I asked it a factual question about Mohammed, the founder of Islam. This is how I, a human not from a Muslim background, would answer:

      “Ok wikipedia says this ____”

      It answered in this long-winded way that had all these things like “blessed prophet of Allah”. Basically the answer I would expect from an imam.

      I lost a lot of trust in it when I saw that. It assumed this authoritative tone. When I heard about that case of a lawyer citing made-up case law from it, I took it as confirmation. I don’t know how it happens, but for some questions it has this very authoritative tone, like it knows the answer without any doubt.

    • theneverfox@pawb.social · 1 point · 1 year ago

      Yeah, it’s boring as shit. If you want a conversation partner there are better (if less reliable) options out there, and groups like personal.ai that repackage it for conversation. There are even scripts to break through the “guardrails”.

      I love the boring. Every other day I think, “man, I really don’t want to do this annoying task.” I’m not sure it even saves much time, since I have to look over the work, but it’s a hell of a lot less mentally exhausting.

      Plus, it’s fun having it Trumpify speeches. It’s tremendous. I’ve spent hours reading the bigglyest speeches. Historical speeches, speeches about AI, graduation speeches where bears attack midway through… Seriously, it never gets old

  • Magiwarriorx@lemmy.world · 6 points · edited · 1 year ago

    I still use free GPT-3 as a sort of high-level search engine, but lately I’m far more interested in local models. I haven’t used them for much beyond SillyTavern chatbots yet, but some aren’t terribly far off from GPT-3 from what I’ve seen (EDIT: though the models are much smaller at 13bn to 33bn parameters, vs GPT-3’s 175bn parameters). Responses are faster on my hardware than on OpenAI’s website and it’s far less restrictive: no “as a large language model…” warnings. Definitely more interesting than sanitized corporate models.

    The hardware requirements are pretty high (24GB of VRAM to run 13bn-parameter 8k-context models), but unless you plan on using it for hundreds of hours, you can rent a RunPod or something for cheaper than a used 3090.
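
    You can roughly sanity-check numbers like that yourself. A back-of-the-envelope sketch (the 1.2 overhead factor is my guess for activations/KV cache; real usage depends heavily on quantization, context length, and framework):

```python
def vram_estimate_gb(n_params_bn: float, bytes_per_param: int = 2,
                     overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold the weights plus a fudge factor for
    activations/KV cache. Not exact; quantization changes everything."""
    return n_params_bn * 1e9 * bytes_per_param * overhead / 2**30

print(round(vram_estimate_gb(13), 1))                     # 13bn at fp16
print(round(vram_estimate_gb(13, bytes_per_param=1), 1))  # same model at 8-bit
```

At fp16 a 13bn model already overflows 24GB by this estimate, which is why quantized (8-bit or lower) formats are what people actually run on a single consumer card.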

  • randon31415@lemmy.world · 4 points · 1 year ago

    On that note: what would people recommend for a locally hosted, open-source, ChatGPT-like LLM (I have a graphics card) that doesn’t require a lot of other things to install?

    (Just one CMD-line installation! That is, if you have pip, pip3, Python, PyTorch, CUDA, conda, Jupyter notebooks, Microsoft Visual Studio, C++, a Linux partition, and Docker. Other than that, it’s just a one-line installation!)

    • Flashoflight@lemmy.world · 1 point · 1 year ago

      I looked into this too and it’s pretty resource-heavy. I actually had a really good conversation with ChatGPT about running a separate instance of itself locally. It’s worth talking to it about that and some of the price options.

    • festus@lemmy.ca · 1 point · 1 year ago

      Look into llama.cpp - it’s a single C++ program that runs quantized models (basically models stored at reduced precision - you don’t really need a full 64 bits for a double). As for models to run on it, there are so many, but I think WizardLM is pretty good.
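
      The “reduced precision” idea is easy to illustrate. A toy sketch of symmetric 8-bit quantization (llama.cpp’s actual formats are more sophisticated, with per-block scales; this just shows the round-trip):

```python
def quantize_int8(xs):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127]
    via a single scale factor derived from the largest magnitude."""
    scale = max(abs(x) for x in xs) / 127
    return [round(x / scale) for x in xs], scale

def dequantize(qs, scale):
    """Recover approximate floats from the quantized integers."""
    return [q * scale for q in qs]

weights = [0.03, -1.27, 0.5, 0.9981]
qs, scale = quantize_int8(weights)
approx = dequantize(qs, scale)
# Each recovered weight is close to, but not exactly, the original:
# the trade is a little precision for 8x less memory than 64-bit floats.
```

That memory saving is the whole reason big models fit on consumer hardware at all.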

  • BonfireOvDreams@lemmy.world · 4 points · edited · 1 year ago

    It’s not just that the novelty has worn off; it’s progressively gotten less useful. Any goddamn question I ask gets 90,000 qualifiers, and it refuses to provide any data at all. I think OpenAI is so terrified of liability that they have significantly dumbed down its utility in the public release. I can’t even ask ChatGPT to provide a link to a study it references, if it references anything at all rather than making ambiguous statements.

    • afraid_of_zombies2@lemmy.world · 4 up, 1 down · edited · 1 year ago

      I got it to give me a book that was still in copyright status by selectively asking for bigger and bigger quotes. Took a while. Now it seems to have cottoned on to that trick.

    • Kerfuffle@sh.itjust.works · 1 point · 1 year ago

      Also, ChatGPT 4 came out but is still only available to people who pay (as far as I know). So using ChatGPT 3 feels like only having access to the leftovers. When it first came out, that was exciting because it felt like progress was going to be rapid, but instead it stagnated. (Luckily interesting LLM stuff is still happening, it’s just nothing to do with OpenAI.)

      • ultranaut@lemmy.world · 3 points · 1 year ago

        ChatGPT 4 has also noticeably declined in quality since it was released. I use it less because it’s become less useful and more frustrating to use. I think OpenAI has been steadily gimping it, trying to get their costs down and make it respond faster.

      • cybersandwich@lemmy.world · 2 points · 1 year ago

        I pay for it and it’s… okay for most things. It’s pretty great at nerd stuff, though*. Paste in an error code or cryptic log-file message with a bit of context and it’s better than googling for 4 days.

        *If you know enough to sus out the obviously wrong shit it produces every once in a while.

        • Kerfuffle@sh.itjust.works · 3 points · 1 year ago

          Pasting an error code or cryptic log file message with a bit of context and it’s better than googling for 4 days.

          I can usually find what I’m looking for, unless it’s really obscure even after days of searching. And if something is that obscure, it seems kind of unlikely ChatGPT is going to give a good answer either.

          If you know enough to sus out the obviously wrong shit it produces every once in a while.

          That’s one pretty big problem. If something really is difficult/complex you likely won’t be able to tell the difference between a wrong answer from ChatGPT and one that’s correct unless it just says something obviously ridiculous.

          Obviously humans make mistakes too, but at least when you search you see results in context, other can potentially call out/add context to things that might not be correct (or even misleading), etc. With ChatGPT you kind of have to trust it or not.

          • shiftybits@lemmy.world · 2 points · edited · 1 year ago

            Yeah, if it’s that hard to find, GPT is just going to hallucinate some BS into the response. I use it as a Stack Overflow at times and often run into garbage when I’m trying to solve a truly novel problem. I’ll often try to simplify the problem to something contrived, but mostly find the output useful as a sort of spark. I can’t say I ever find the raw code it generates useful or all that good.

            It’ll often give wrong answers but some of those can contain useful bits that you can arrange into a solution. It’s cool, but I still think people are oddly enamored with what is really just a talking Google. I don’t think it’s the game changer people are thinking it is.

            • pancakes@sh.itjust.works · 0 up, 1 down · 1 year ago

              It’s pretty useful if you’re in a more generalist job. I mostly work in visual design, but I sometimes deal with coding and web dev. As someone with a mostly surface understanding of these things, asking gpt to explain exact things that don’t make sense in basic terms or solve basic issues is a huge time saver for me. Googling these issues usually works but takes way longer than getting a tailored response from gpt if you know how to ask.

  • Meow.tar.gz@lemmy.goblackcat.com · 4 points · 1 year ago

    ChatGPT has mostly given me very poor or patently wrong answers. Only once did it really surprise me by showing me how I configured BGP routing wrong for a network. I was tearing my hair out and googling endlessly for hours. ChatGPT solved it in 30 seconds or less. I am sure this is the exception rather than the rule though.

    • zeppo@lemmy.world · 1 point · 1 year ago

      It all depends on the training data. If you pick a topic that it happens to have been well trained on, it will give you accurate, great answers. If not, it just makes things up. It’s been somewhat amusing, or perhaps confounding, seeing people use it thinking it’s an oracle of knowledge and wisdom that knows everything. Maybe someday.

  • gaiussabinus@lemmy.world · 3 points · 1 year ago

    I have a number of language models running locally. I am really liking the gpt4all install with the Hermes model. So in my case I used ChatGPT right up until I had one I could keep private.

    • ClemaX@lemm.ee · 2 points · 1 year ago

      How does it compare with ChatGPT (GPT 3.5), quality and speed wise?

      • gaiussabinus@lemmy.world · 2 points · 1 year ago

        Depends how you get it accomplished. If you use the Python bindings it’s slow, but the gpt4all app is quick, and there’s a gpt4all API should you wish to build a private assistant. I like that one, but it’s still run by a company, so mileage may vary; there are a few projects on GitHub for use with open-source models. I can get better quality from the Hermes model than I can with GPT 3.5, IMO, but some models are better than others depending on what you’re trying to do. If you’ve done any work with Stable Diffusion, you’ll know lots of different models are popping up right now for different use cases, like you see on civit.ai. A good coding bot is probably going to be a bit shit in a conversation.

  • dep@lemmy.world · 2 points · 1 year ago

    It was in the major TV news cycle for weeks but now it’s back to normal levels I’d say. Curious onlookers without a real need have moved on.

  • froggers@lemmy.world · 2 points · 1 year ago

    I still use it sometimes, but ohhh boy it can be a wreck. Like I’ve started using the Creation Kit for Bethesda games, and you can bet your ass that anything you ask it, you’ll have to ask again. Countless times it’s a back-and-forth of:

    Me: Hey ChatGPT, how can I do this or where is this feature?

    ChatGPT: Here is something that is either not relevant or just does not exist in the CK.

    Me: Hey that’s not right.

    ChatGPT: Oh sorry, here’s the thing you are looking for. (And then it’s still a 50/50 chance of it being real or fake.)

    Now, I realize the Creation Kit is kind of niche and the info on it can be a pain to look up, but it’s still annoying to wade through all the shit it throws in my direction.

    With things that are a lot more popular, it’s a lot better tho. (still not as good as some people want everyone to believe)

    • cassetti@kbin.social · 1 point · 1 year ago

      Lol, ChatGPT has its pros and cons. For helping me write or refine content, it’s extremely helpful.

      However, I did try to use it to write code for me. I design 3D models using a programming language (OpenSCAD) and the results are hilarious. Literally, it knows the syntax (kinda), and if I ask it to do something simple, it will essentially write the code for a general module (declaring key variables for the design), and then call a random module that doesn’t exist (like it once called a module “lerp()”, which is absolutely not a module). This magical module mysteriously does 99% of the design… but ChatGPT won’t give it to me. When I ask it to write the code for lerp(), it gives me something random like this:

      module lerp() { splice(); }

      Where it simply calls up a new module that absolutely does not exist. The results are hilarious, the code totally does not compile or work as intended. It is completely wrong.
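
      For what it’s worth, “lerp” (linear interpolation) is a standard graphics function, which is probably why the model kept reaching for the name; OpenSCAD just has no built-in by that name. The real thing is a one-liner, sketched here in Python:

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation: t=0 gives a, t=1 gives b, t=0.5 the midpoint."""
    return a + (b - a) * t

print(lerp(0.0, 10.0, 0.5))  # midpoint of 0 and 10
```

So the hallucination was at least plausible-sounding, which is what makes it so annoying: it borrows a real name and attaches imaginary behavior to it.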

      But I think people are working it out of their system - some found novelty in it that wore off fast. Others like myself use it to help embellish product descriptions for ebay listings and such.

    • american_defector@lemmy.world · 0 up, 1 down · 1 year ago

      I’ve been building a tool that uses ChatGPT behind the scenes and have found that that’s just part of the process of building a prompt and getting the results you want. It also depends on which chat model is being used. If you’re super vague, it’s going to give you rubbish every time. If you go back and forth with it though, you can keep whittling it down to give you better material. If you’re generating content, you can even tell it what format and structure to give the information back in (I learned how to make it give me JSON and markdown only).

      Additionally, you can give ChatGPT a description of what it’s role is alongside the prompt, if you’re using the API and have control of that kind of thing. I’ve found that can help shape the responses up nicely right out of the box.

      ChatGPT is very, very much a “your mileage may vary” tool. It needs to be set up well at the start, but so many companies have haphazardly jumped on using it without putting in enough work prepping it.
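
      A minimal sketch of the payload shape described above: a system message defining the role, then the user’s prompt (the model name and role text here are placeholders; the actual request would go through OpenAI’s client library):

```python
import json

def build_chat_payload(system_role: str, user_prompt: str,
                       model: str = "gpt-3.5-turbo") -> str:
    """Assemble the messages structure the chat API expects: a system
    message shaping the assistant's behavior, then the user's prompt."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_role},
            {"role": "user", "content": user_prompt},
        ],
    }
    return json.dumps(payload)

req = build_chat_payload(
    "You are a product copywriter. Respond in markdown only.",
    "Write a two-line description of a ceramic mug.",
)
```

Putting the format and persona instructions in the system message, rather than repeating them in every prompt, is the “prepping” step a lot of integrations skip.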

  • Wats0ns@sh.itjust.works · 1 point · edited · 1 year ago

    OpenAI’s models, including its GPT series, are available via APIs and Microsoft Azure, and so a drop in ChatGPT’s website use may be due to people moving to programmatic interfaces

    I feel like this is an important detail that changes the conclusion of the article: there may be a lot more end users, through 3rd-party apps, but this way of measuring won’t reveal them. That’s especially important considering that (correct me if I’m wrong) API users are the paying ones!

  • anlumo@feddit.de · 1 point · 1 year ago

    For my professional work, the training data is way too outdated by now for ChatGPT to be anywhere near being useful. The browsing feature also can’t make up for it, because it’s pretty bad at Internet search (bad search phrases etc).

    • PupBiru@kbin.social · 1 point · 1 year ago

      i find even for really complex stuff it’s pretty good as long as you direct it: it can suggest some things, you can do some searching based on that, maybe give it a few links to summarise for you, etc

      it doesn’t do the work for you, but it makes a pretty good assistant that doesn’t quite understand the subject matter

      • anlumo@feddit.de · 2 points · 1 year ago

        I’m old enough not to need a babysitter to use the Internet for research.

        It even told me a few times that its training data is too outdated and that there has probably been some progress in that area. I have to freaking push it to actually do a web search to update that knowledge, with prompts like “You have web access, use it!”. It then finds a few posts on Stack Overflow I’ve already seen and draws some incorrect conclusions from them.

        I’m way faster on my own.

          • anlumo@feddit.de · 1 point · 1 year ago

            In my experience, Bing Chat is even worse, because it skips the part where ChatGPT is trying to come up with something based on the training data and goes straight to bad web searches with incorrect summaries.

        • PupBiru@kbin.social · 0 up, 1 down · 1 year ago

          your experience does not match mine

          which is not saying that your experience is wrong or that you’re using it wrong, however i and many others have managed to get exceptionally good results out of it, and you should be aware of that fact

          referring to these experiences as “needing a babysitter” is needlessly provocative as well; we’re all just talking here: no need to insult the intelligence of anyone that has managed to use the tool in a way that works incredibly well

          i hope that at some point in the future, you’re able to have your experience match ours, and have a similar feeling of “ooooh i see now… wait… OOOOOOH I REALLY SEEEE NOW”

          • anlumo@feddit.de · 1 point · 1 year ago

            Well, I hope that some day I will have the same experience.

            I think the main problem is that I’m only prompting it with lost causes, when I was unable to find anything on my own with very thorough searches, because there just isn’t an answer available online.

            I don’t go there first, because I’m always afraid of hallucinated answers, which are very common. For example, it often just tries to guess function names of programming libraries. That’s just wasting my time.

  • pngn@lemmy.world · 1 point · 1 year ago

    I’m not really surprised at all. A lot of people I know wouldn’t stop talking about it for a grand total of maybe 2 weeks, but then it all went quiet. In fairness, this is a sample of all non-tech people, so I think a lot of it is just that they probably forgot the name of it, or how to turn their computer on (definitely the case for some).