• itslilith@lemmy.blahaj.zone · 15 days ago

    Don’t have Copilot write anything longer than a function of about 15 lines. That way you can quickly see if it made mistakes. Ensure it works, then move on to the next one.

    And only do that for boring, repetitive work. The tough challenges and critical parts you’re (for now) better off solving yourself.

    • flashgnash@lemm.ee · 15 days ago

      Absolutely, I think the people who say it’s completely useless for code are in denial

      Definitely not replacing anyone, but my god it has sped up development by generating code I already know how to write 90% of

      No more having to look up “what was the for loop syntax in this language again?”

      • xavier666@lemm.ee · 14 days ago

        “Copilot is really good at things which I already know” and that is perfectly fine

        • flashgnash@lemm.ee · 15 days ago

          Exactly.

          It’s there to speed up boilerplate and save you from having to look up function names or language-specific syntax for that one feature you want to use, not to entirely do your job for you

          • Ethan@programming.dev · 14 days ago

            If I’ve been working in the same language for at least a year or two, I don’t have to look up any of that. Copilot might actually be helpful if I’m working in a language I’m not used to, but it’s been a long time since I’ve had to look up syntax or functions (excluding 3rd party packages) for the language I work in.

            • flashgnash@lemm.ee · 14 days ago

              Of course, but presumably you do occasionally work in other languages? I work in all kinds of languages, so when I’m jumping between them it’s pretty handy for bridging the gap

              I think you could definitely still get value out of generating simple stuff though; at least for me it really helps get projects done quickly without burning myself out

              For small one-off scripts it means they actually save more time than they take to write (for example, a colleague had to write the permissions of a bunch of files recursively into an Excel doc; ChatGPT did 90% of that, I did 9%, and he did 1%, lol)

              • Ethan@programming.dev · 14 days ago

                > Of course, but presumably you do occasionally work in other languages? I work in all kinds of languages, so when I’m jumping between them it’s pretty handy for bridging the gap

                If I were jumping languages a lot, I definitely think it would be helpful. But pretty much 100% of what I’ve done for the last 3-4 years is Go (mostly) or JavaScript (occasionally). I have used chatgpt the few times I needed to work in some other language, but that has been pretty rare.

                > I think you could definitely still get value out of generating simple stuff though; at least for me it really helps get projects done quickly without burning myself out

                If simple stuff == for loops and basic boilerplate, the kind of stuff that Copilot can autocomplete, I write that on autopilot and it doesn’t really register, so it doesn’t contribute to my burnout. If simple stuff == boring, boilerplate tests, I’ll admit that I don’t do nearly enough of that. But doing the ‘prompt engineering’ to get Copilot to write that wasn’t any less painful than writing it myself.

                > For small one-off scripts it means they actually save more time than they take to write

                The other day I wrote a duplicate image detector for my sister (files recovered from a dying drive). In hindsight I could have asked chatgpt to do it. But it was something I’ve never done before and an interesting problem so it was more fun to do it myself. And most of the one off stuff I’m asked to do by coworkers is tied to our code and our system and not the kind of thing chatgpt would know how to do.

      • Ethan@programming.dev · 14 days ago

        I won’t say copilot is completely useless for code. I will say that it’s near useless for me. The kind of code that it’s good at writing is the kind of code that I can write in my sleep. When I write a for-loop to iterate over an array and print it out (for example), it takes near zero brain power. I’m on autopilot, like driving to work. On the other hand, when I was trialing copilot I’d have to check each suggestion it made to verify that it wasn’t giving me garbage. Verifying copilot’s suggestions takes a lot more brain power than just writing it myself. And the difference in time is minimal. It doesn’t take me much longer to write it myself than it does to validate copilot’s work.

        • flashgnash@lemm.ee · 14 days ago

          You can think bigger than that. As an example from the other day, I got it to write a Display implementation for all of my error types in Rust; it generated nice, user-friendly error messages based on context and wrote all the boilerplate around displaying them

          I also got it to generate a function that produces a unique RGB colour from a user ID; it did it first try and I could use it straight away

          Both those things would’ve taken me maybe 15 minutes by hand but I can generate and proofread them in seconds

          That said, I don’t use Copilot, I use ChatGPT. Using it is a deliberate choice rather than something shoved in my face all the time, which might help my opinion of it

          • Ethan@programming.dev · 14 days ago

            import "crypto/sha256"
            import "encoding/binary"
            import "image/color"

            // Hash the user ID and use the first three digest bytes as the colour.
            func randomRGB(uid int) color.RGBA {
            	b := binary.BigEndian.AppendUint64(nil, uint64(uid)) // uid as 8 big-endian bytes
            	h := sha256.Sum256(b)
            	return color.RGBA{h[0], h[1], h[2], 255}
            }
            

            That took me under three minutes, and half of that was remembering that RGBA is in the color package, not the image package, and uint-to-bits is in the binary package, not the math package. I have found ChatGPT useful when I was working in a different language. But when I tried to get ChatGPT or Copilot to write tests or documentation for me (the kind of work that bores me to death), doing the prompt engineering to get it to spit out something useful was more work than just writing the tests/documentation myself. The exception was the time I needed to write about 100 tests that were all nearly the same. In that case, using ChatGPT was worth it.
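            To give a sense of what “nearly the same” means: roughly this shape, repeated over and over with different data (a hypothetical sketch; the function under test here, strings.TrimPrefix, is just a stand-in):

            import (
            	"strings"
            	"testing"
            )

            func TestTrimPrefix_Slash(t *testing.T) {
            	// Hypothetical near-identical test #1.
            	if got := strings.TrimPrefix("/api/users", "/"); got != "api/users" {
            		t.Errorf("got %q, want %q", got, "api/users")
            	}
            }

            func TestTrimPrefix_NoSlash(t *testing.T) {
            	// Hypothetical near-identical test #2: same three lines, different input.
            	if got := strings.TrimPrefix("api/users", "/"); got != "api/users" {
            		t.Errorf("got %q, want %q", got, "api/users")
            	}
            }

            // ...and so on, ~100 times with different inputs.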

    • BudgetBandit@sh.itjust.works · 15 days ago

      Tried to learn coding using ChatGPT. Wanted to make my own game engine for a phone game. Ended up looking up tutorials.

      • coffee_with_cream@sh.itjust.works · 14 days ago

        If you are using “game engine” in the industry standard way, you would want to learn object oriented programming first, then learn how to use an existing game engine, and then MAYBE, in a long time, with a big team, build your own game engine.

      • Bread@sh.itjust.works · 15 days ago

        ChatGPT, like any other programming tool, works a whole lot better when you are well versed in how the process should go. It speeds up the workflow of a professional; it doesn’t make a new worker better.

  • CodingCarpenter@lemm.ee · 15 days ago

    AI is great for finding small flaws or reciting documentation in a more succinct way. But writing new code and functions? That’s a fool’s errand if you’re hoping it works out

    • flashgnash@lemm.ee · 15 days ago

      I use it for writing functions and snippets all the time, at least in Python and Rust. As long as you describe what you want it to do properly, it works great

      Example I used recently: “Please generate me a rust function that will take a u32 user id and return a unique RGB colour”

      It generated the function, I plugged it in, and it worked perfectly first time

  • computerscientistII@lemm.ee · 15 days ago

    I haven’t been in development for nearly 20 years now, but I assumed it worked like this:

    You generate unit tests for a very specific function of rather limited magnitude, then you let AI generate the function. How could this work otherwise?
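    Roughly like this, say (a hypothetical Go sketch; Clamp and the test cases are invented for illustration): the tests are the spec, and the AI’s job is to produce a function that passes them.

    import "testing"

    // The human-written spec: a table of cases for a small, well-bounded function.
    func TestClamp(t *testing.T) {
    	cases := []struct{ v, lo, hi, want int }{
    		{5, 0, 10, 5},
    		{-3, 0, 10, 0},
    		{42, 0, 10, 10},
    	}
    	for _, c := range cases {
    		if got := Clamp(c.v, c.lo, c.hi); got != c.want {
    			t.Errorf("Clamp(%d, %d, %d) = %d, want %d", c.v, c.lo, c.hi, got, c.want)
    		}
    	}
    }

    // The part the AI would be asked to fill in, judged only by the tests above.
    func Clamp(v, lo, hi int) int {
    	if v < lo {
    		return lo
    	}
    	if v > hi {
    		return hi
    	}
    	return v
    }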

    Bonus points if you let the AI divide your overall problem into smaller problems of manageable magnitudes. That wouldn’t involve code generation as such…

    Am I wrong with this approach?

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 15 days ago

      The complexity here lies in having to craft a comprehensive enough spec. Correctness is one aspect, but another is performance. If the AI craps out code that passes your tests but does it in a really inefficient way, then it’s still a problem.

      Also worth noting that you don’t actually need AI to do such things. For example, Barliman is a tool that can do program synthesis. Given a set of tests to pass, it attempts to complete the program for you. Synthesis is performed using logic programming. Not only is it capable of generating code, but it can also reuse code it’s already come up with as a basis for solving bigger problems.

      https://github.com/webyrd/Barliman

      Here’s a talk about how it works: https://www.youtube.com/watch?v=er_lLvkklsk

    • Kairos@lemmy.today · 15 days ago

      At that point you should be able to just write the code yourself.

      The A"I" will either make mistakes even under defined bounds, or it will never make any mistakes ever in which case it’s not an autocomplete, it’s a compiler and we’ve just gone full circle.

    • I tend to write a comment describing what I want to do, and have Copilot suggest the next 1-8 lines for me. I then check whether the code is correct and fix it if necessary.

      For small tasks it’s usually good enough, and I’ve already written a comment explaining what the code does. It can also be convenient to use it to explore an unknown library or functionality quickly.
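      As a rough illustration, it goes something like this in Go (a hypothetical snippet; loadConfig, the path handling and the fallback are invented): I write the comment, Copilot proposes the next few lines, and I review them.

      import (
      	"errors"
      	"os"
      )

      func loadConfig(path string) ([]byte, error) {
      	// Read the config file; fall back to an empty JSON object if it doesn't exist.
      	// (The comment above is the part I write; the lines below are the kind of
      	// short completion Copilot tends to suggest.)
      	data, err := os.ReadFile(path)
      	if errors.Is(err, os.ErrNotExist) {
      		return []byte("{}"), nil
      	}
      	if err != nil {
      		return nil, err
      	}
      	return data, nil
      }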

      • Lucy :3@feddit.org · 15 days ago

        For me, though, “unknown library” often means a rather small, sparsely documented and little-used library, which means AI makes everything even worse by hallucinating.

  • coffee_with_cream@sh.itjust.works · 14 days ago

    I told it to generate a pretty complex React component yesterday and it worked on the first try. It even made a style sheet. And it actually looks good.

    • geneva_convenience@lemmy.ml · 14 days ago

      It’s so good when it works on the first try. But when it doesn’t work, it can really fool people with totally nonfunctional code. AI is like the genie in the bottle: you really need to ask the right question.

  • Fixbeat@lemmy.ml · 15 days ago

    Eh, if it ain’t right, I just bounce it back to ChatGPT to fix. With enough guidance and oversight it will get there.

    • hddsx@lemmy.ca · 15 days ago

      Hi ChatGPT, write code with no memory or logic errors to perform <thing you want to do>.

      I’m not sure how to talk to ChatGPT; I’m assuming it’s like Siri.

      • projectmoon@lemm.ee · 15 days ago

        LLMs are statistical word-association machines, or token-association machines more accurately. So if you tell one not to make mistakes, it’ll likely weight the output towards having validation, checks, etc. It might still produce silly output claiming no mistakes were made despite having bugs or logic errors. But LLMs are just a tool! So use them for what they’re good at and can actually do, not for what they themselves claim they can do, lol.

        • flashgnash@lemm.ee · 15 days ago

          I’ve found it behaves like a stubborn toddler

          If you tell it not to do something it will do it more; you need to give it positive instructions, not negative ones

      • Fixbeat@lemmy.ml · 15 days ago

        Just explain what you are trying to do like you would to a person and it will give you the code. It probably won’t be quite what you want, so refine your request or post a photo of the errors you get and try again.

        If you are working with a language that you aren’t familiar with, it is very helpful. Just remember that it has been trained on all the code on the internet, so it knows a lot.

        It is a great tool, with limitations, of course. As the developer, you have to know how to apply it to get the most from it. You can see from the downvotes that there’s a lot of negativity towards LLMs, but I have had positive experiences using ChatGPT.

    • MuffinHeeler@aussie.zone · 15 days ago

      I asked ChatGPT to make me a business logo. It spelt the name wrong in every instance. “OK, keep that picture but replace the words with this exact spelling.” It spelt it wrong a different way. This continued for about 20 rounds before I gave up. It won’t follow explicit instructions, so I can’t see it being that amazing at code

      • Fixbeat@lemmy.ml · 14 days ago

        I have had the same experience with text and images; not sure what is up with that. As for code, it is really helpful for generating boilerplate stuff that you probably don’t want to write yourself. Rapid prototyping and proof-of-concept creation is another good use. I suggest trying it out to see what it can do to make your life easier, but there will be instances where it just won’t do what you ask. I kind of enjoy the back and forth to get things working, and I feel it saves effort.