• PixelProf@lemmy.ca
    8 months ago

    Yeah, this is the approach people are trying to take more now. The problem is generally the amount of data needed and verifying that it's high quality in the first place, but these systems are positive feedback loops both in training and in use. If you train on higher-quality code, the model will write higher-quality code, but it will be less able to handle edge cases, or to sensibly complete code that isn't at the same quality bar or style as the training data.
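    For a rough sense of what "verifying it's high quality" can mean in practice, here's a minimal sketch of heuristic filtering for a code corpus; the checks and the 0.8 threshold are just illustrative assumptions, not anyone's actual pipeline:

    ```python
    # Rough quality gate for a code-training corpus (illustrative assumptions only).
    import ast

    def looks_high_quality(source: str) -> bool:
        """Very rough proxy: code must parse, and most functions should be documented."""
        try:
            tree = ast.parse(source)
        except SyntaxError:
            return False  # unparseable samples are rejected outright

        funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
        if not funcs:
            return True  # nothing to judge; keep it
        documented = sum(1 for f in funcs if ast.get_docstring(f))
        return documented / len(funcs) >= 0.8  # assumed threshold

    corpus = [
        'def add(a: int, b: int) -> int:\n    """Return a + b."""\n    return a + b',
        'def f(x): return x * 2',
    ]
    filtered = [s for s in corpus if looks_high_quality(s)]  # keeps only the documented sample
    ```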

    On the use side, if you provide higher-quality code as input when prompting, the model is more likely to predict higher-quality code because it's continuing what was already written. Using standard approaches, documenting, and generally following good practice in your code before sending it to the LLM will majorly improve results.
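    As a concrete illustration of that last point, here's a small sketch comparing a bare, undocumented prompt with a documented, typed one; `complete` below is just a stand-in for whatever LLM call you actually use, not a real API:

    ```python
    # Two versions of the same completion request (illustrative only).

    def complete(prompt: str) -> str:
        """Placeholder for your LLM completion call (API client, local model, etc.)."""
        raise NotImplementedError

    # Low-effort context: the model has almost nothing to continue from.
    low_quality_context = "def proc(d):\n    # TODO finish\n"

    # Higher-quality context: clear name, type hints, and a docstring stating intent,
    # so the continuation tends to match that style and quality.
    high_quality_context = (
        "from collections import Counter\n\n"
        "def most_common_words(text: str, n: int = 10) -> list[tuple[str, int]]:\n"
        '    """Return the n most frequent lowercase words in text,\n'
        "    with punctuation stripped.\n"
        '    """\n'
    )

    # completion = complete(high_quality_context)  # uncomment with a real client
    ```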

    • Fraylor@lemm.ee
      8 months ago

      Interesting, that makes sense. Thank you for such a thoughtful response.