• Photuris
      14 days ago

      I’ve tried this on personal projects, but not work projects.

      My verdict:

      1. To be a good vibe coder, one must first be a good coder.

      2. Vibe coding is faster to draft up and POC, longer to debug and polish. Not as much time savings as one might think.

      • @vrighter@discuss.tchncs.de
        14 days ago

        Exactly, you can only really verify the code if you were capable of writing it in the first place.

        And it’s an old well known fact that reading code is much harder than writing it.

        • @brygphilomena@lemmy.dbzer0.com
          14 days ago

          I weirdly love reading code and figuring out what it’s doing. Debugging is cathartic.

          It might take a while and I might be cussing up a storm: WTF is this shit? Why the fuck would you do it this way? Why the fuck did you make this so convoluted for no reason?

          Right now it’s unfucking some vibe-coded BS where, instead of just fixing an API call to get the info we need accurately, it tries to infer it from other data. There is a super direct and simple route, but instead there are hundreds of lines working around hitting the wrong endpoint and getting back data that’s missing the details we need.

          Plus the vibe coding added so much that is literally never used, was never needed, and on top of that returns incorrect information.

        • @ulterno@programming.dev
          14 days ago

          An irrelevant but interesting take: this applies as an analogue to a lot of stuff in the electronics space.

          • It is harder to receive data than to transmit it, because you need to do things like:
            • match your receiver’s frequency with that of the transmission (which might be minutely different from the agreed-upon frequency) in order to understand it
            • know how long the data will be before feeding it into digital variables, or you might combine multiple messages or leave out some stuff without realising
          • This gets even harder when it is wireless, because now you have noise, which is often valid communication among other devices.
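
          The framing point can be made concrete with a toy sketch (all names here are made up for illustration): if the receiver doesn’t know the message length in advance, it has no way to tell where one message ends and the next begins. A common fix is a length prefix:

          ```python
          import struct

          def read_messages(stream: bytes):
              """Parse length-prefixed messages from a raw byte stream.

              Each message is a 2-byte big-endian length followed by the
              payload. Without the prefix, the receiver can't tell where one
              message ends and the next begins (the framing problem above).
              """
              messages = []
              offset = 0
              while offset + 2 <= len(stream):
                  (length,) = struct.unpack_from(">H", stream, offset)
                  offset += 2
                  if offset + length > len(stream):
                      break  # incomplete trailing message: skip it, don't guess
                  messages.append(stream[offset:offset + length])
                  offset += length
              return messages

          # Two messages concatenated on the wire: "hi" then "there"
          wire = struct.pack(">H", 2) + b"hi" + struct.pack(">H", 5) + b"there"
          print(read_messages(wire))  # [b'hi', b'there']
          ```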

          Getting back to code, you now need to get in the same “wavelength” as the one who wrote the code, at the time they wrote the code.

    • MaggiWuerze
      14 days ago

      Even if they build the AI doing it from scratch, all by themselves?

      • @vrighter@discuss.tchncs.de
        14 days ago

        Yes. Because that would still mean they didn’t code the app.

        “killing is bad!” “but what if the murderer 3d printed his own gun?”

        • @Honytawk@feddit.nl
          14 days ago

          More like: “killing is bad” “but what if the ‘murderer’ designed, built, and produced their own target?”

          You can’t kill a robot, so it isn’t killing.

      • @vrighter@discuss.tchncs.de
        14 days ago

        So? Some of the people pushing out AI slop would be perfectly capable of writing their own LLM out of widely available free tools. Contrary to popular belief, LLMs are not complex pieces of software, just extremely data-hungry. That doesn’t mean they magically understand the code the LLM spits out.

        • @Honytawk@feddit.nl
          14 days ago

          Stark would have developed their own way of training their AI. It wouldn’t be an LLM in the first place.

          • @vrighter@discuss.tchncs.de
            14 days ago

            So? Someone invented current LLMs too. Nothing like them existed before either. If they vibe coded with them, they’d still be producing slop.

            Coding an LLM is very, very easy. What’s not easy is having all the data, hardware, and cash to train it.
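
            To back that up: the core op inside every transformer LLM fits in a few lines. This is a hedged NumPy sketch of scaled dot-product attention with random, untrained weights (all sizes and names here are made up), so it computes the right shapes but predicts nothing useful; the trained parameters are the expensive part:

            ```python
            import numpy as np

            def softmax(x):
                # Numerically stable softmax over the last axis
                e = np.exp(x - x.max(axis=-1, keepdims=True))
                return e / e.sum(axis=-1, keepdims=True)

            def attention(q, k, v):
                # Scaled dot-product attention: the heart of a transformer
                scores = q @ k.T / np.sqrt(q.shape[-1])
                return softmax(scores) @ v

            rng = np.random.default_rng(0)
            seq, dim = 4, 8
            x = rng.normal(size=(seq, dim))
            # Random projection weights: the code is trivial; what costs
            # billions is training weights like these on massive data.
            wq, wk, wv = (rng.normal(size=(dim, dim)) for _ in range(3))
            out = attention(x @ wq, x @ wk, x @ wv)
            print(out.shape)  # (4, 8)
            ```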

            • @Initiateofthevoid@lemmy.dbzer0.com
              14 days ago

              The point is that no vibe coder could design an LLM without an LLM already existing. The math and tech behind machine learning is incredible, whatever you may think. Just because we can spin up new ones at will doesn’t mean we ever could have skipped ahead and built Jarvis in 2008, even if all of society was trying to do so - because they were trying.

              In the fictional universe where a human could singlehandedly invent one from scratch in 2008 with 3D image generation and voice functionality that still exceeds modern tech… yeah, that person and their fictional AI wouldn’t necessarily be producing slop.

            • @thevoidzero@lemmy.world
              14 days ago

              Yeah, but the people who made it probably understand whether to trust it to write code or not. Tony knows what the AI he wrote does best, and he trusts it to write his code. Just because it’s AI doesn’t mean it’s an LLM. I trust the errors compilers give me even though I didn’t write them, because they’re good. And I trust my scripts to do the things I wrote them for, specifically because I tested them. Same with an AI you made yourself: you’d test it, and you’d know the design principles.

              • @vrighter@discuss.tchncs.de
                14 days ago

                An AI is not a script. You can know what a script does; neural networks don’t work that way. You train them and hope you picked the right dataset for them to learn what you want them to learn. You can’t really test them. You can know that one works sometimes, but you also know it will sometimes not work, and there’s jack shit you can do about it. A couple of gigabytes of floating-point numbers is not decipherable to anyone.
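
                You can see the difference in a toy sketch (everything here is invented to illustrate the point, not a real model): a script can be asserted exactly, while a trained model can only be measured statistically on samples.

                ```python
                import random

                def double(x):
                    # A script: its behavior is fully specified, so you
                    # can assert exact outputs and actually trust it.
                    return 2 * x

                assert double(21) == 42  # exact test: passes every time

                def noisy_model(x):
                    # Toy stand-in for a trained network: right ~90% of
                    # the time, wrong otherwise, with no way to tell which
                    # call will be the wrong one.
                    return 2 * x if random.random() < 0.9 else 2 * x + 1

                random.seed(0)
                trials = 10_000
                accuracy = sum(noisy_model(i) == 2 * i for i in range(trials)) / trials
                # You can only estimate accuracy over many samples and hope
                # it generalizes; no single call is ever guaranteed.
                print(round(accuracy, 2))
                ```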