• @digitalnuisance@infosec.pub · 27 days ago (edited)

    I had a dude screaming pretty much the same thing at me yesterday on here (on a different account), despite the fact that I’m senior-level, near the top of my field, and that all the objective data, as well as anecdotal reports from tons of other people, say otherwise. Like, okay buddy, sure. People seem to just like fighting things online to feel better about themselves, even if the thing they’re fighting doesn’t really exist.

    • @Event_Horizon@lemmy.world · 27 days ago

      I’m a senior BA working on a project to replace some outdated software with a new booking management and payment system. One of our minor stakeholders is an overly eager tech bro who insists on bringing up AI in every meeting; he’s gone as far as writing up proposals and sending them to me and the project leads.

      We all just roll our eyes when a new email arrives. Especially since there’s almost no significant detail in these proposals; it’s all conjecture based on what he’s read online…on tech bro websites.

      Oh, and the best part: this guy has no experience in system development or design or anything AI-related. He doesn’t even work in IT. But he researches AI in his spare time and uses it as a side hustle…

  • @Wanpieserino@lemm.ee · 27 days ago (edited)

    My mate is applying to Amazon as a warehouse worker. He has an IT degree.

    My coworker in the bookkeeping department has two degrees, accountancy and IT. She can’t find an IT job.

    On the other side, though, my brother, an experienced software developer, is earning quite a lot of money now.

    Basically, the industry is not investing in new blood.

      • @boonhet@lemm.ee · 27 days ago

        My company was desperate to find a brand-new dev straight out of the oven, one we could still mold to our sensibilities, late last year when everything seemed doomed. Yes, it was one hire out of like 10 interviewed candidates, but the point is, there are companies still hiring. Our CTO straight up judges people who use an LLM and don’t know how the code actually works. Mr. “Just use an AI agent” would never get the job.

    • “Basically, the industry is not investing in new blood.”

      Yeah, I think it makes sense as an economic motivation. Often the code quality of a junior is worse than that of an AI, and a senior has to review either one, so they could just prompt the junior’s task directly into the AI.

      The experience and skill to quickly grasp code and intention (and to have a good initial idea of where it should be going architecturally) is what’s in demand, which is obviously something seniors are good at.

      It’s kinda sad that our profession/art is slowly dying out because juniors are being replaced by AI.

      • Terrasque · 26 days ago

        Yeah, I’ve been seeing the same. Purely economically, it doesn’t make sense to hire junior developers any more. AI is faster, cheaper, and usually writes better code too.

        The problem is that you need junior developers working and getting experience, otherwise you won’t get senior developers. I really wonder what development as a profession will look like in 10 years.

    • @Miaou@jlai.lu · 27 days ago

      Not sure how you manage to draw conclusions by comparing two different fields.

  • I Cast Fist · 27 days ago

    We’re as cooked as artists (when asked to do shit jobs for non-paying customers)

    • @skuzz@discuss.tchncs.de · 27 days ago (edited)

      I had an AI render a simple diagram for a presentation with explicit instructions. It rendered a Rube Goldberg nonsense graphic. I included it anyway for the lulz. Sure, they will get better, and maybe some day be almost as useful as the Enterprise computer. No way they’ll be Lt. Cmdr. Data this century.

  • @Anders429@programming.dev · 27 days ago

    Know a guy who tried to use AI to vibe code a simple web server. He wasn’t a programmer and kept insisting to me that programmers were done for.

    After weeks of trying to get the thing to work, he had nothing. He showed me the code, and it was the worst I’ve ever seen. Dozens of empty files where the AI had apparently added and then deleted the same code. Also some utter garbage code. Tons of functions copied and pasted instead of being defined once.

    I then showed him a web app I had made in that same amount of time. It worked perfectly. Never heard anything more about AI from him.

    • @_____@lemm.ee · 27 days ago

      “no dude he just wasn’t using [ai product] dude I use that and then send it to [another ai product]'s [buzzword like ‘pipeline’] you have to try those out dude”

    • AI is very, very neat, but it has clear, obvious limitations. I’m not a programmer, and I could tell you tons of ways I’ve tripped Ollama up already.

      But it’s a tool, and the people who can use it properly will succeed.

      • @Susaga@sh.itjust.works · 27 days ago

        Funny. Every time someone points out how god awful AI is, someone else comes along to say “It’s just a tool, and it’s good if someone can use it properly.” But nobody who uses it treats it like “just a tool.” They think it’s a workman they can claim the credit for, as if a hammer could replace the carpenter.

        Plus, the only people good enough to fix the problems caused by this “tool” don’t need to use it in the first place.

        • @CeeBee_Eh@lemmy.world · 27 days ago

          “But nobody who uses it treats it like ‘just a tool.’”

          I do. I use it to tighten up some lazy code that I wrote, or to help me figure out a potential flaw in my logic, or to suggest a “better” way to do something if I’m not happy with what I originally wrote.

          It’s always small snippets of code and I don’t always accept the answer. In fact, I’d say less than 50% of the time I get a result I can use as-is, but I will say that most of the time it gives me an idea or puts me on the right track.

      • De Lancre · 27 days ago

        This. I have no problem combining a couple of endpoints in one script and explaining to QwQ what my final CSV file, built from those JSONs, should look like. But try to go beyond that, reaching above a 32k context or showing it multiple scripts, and the poor thing has no clue what to do.

        If you can manage your project and break it down into multiple simple tasks, you can build something complicated via an LLM. But that requires some knowledge of coding, and at that point chances are you’ll have better luck writing the whole thing yourself.
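That endpoints-to-CSV job is about the size of task an LLM handles well. As a minimal sketch of what such a script amounts to (the payload shapes and field names here are hypothetical stand-ins for real endpoint responses):

```python
import csv
import io

# Hypothetical stand-ins for two JSON endpoint responses; in a real
# script these would come from HTTP calls, e.g. requests.get(...).json().
users = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
orders = [{"user_id": 1, "total": 9.5}, {"user_id": 2, "total": 3.0}]

def merge_to_csv(users, orders):
    """Join the two JSON payloads on user id and emit one CSV string."""
    totals = {o["user_id"]: o["total"] for o in orders}
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "name", "total"])
    for u in users:
        # Users with no matching order get a total of 0.
        writer.writerow([u["id"], u["name"], totals.get(u["id"], 0)])
    return buf.getvalue()

print(merge_to_csv(users, orders))
```

Each piece (fetch, join, write) is small and self-contained, which is exactly why it fits inside a model's context.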

      • Emily (she/her) · 27 days ago

        I think it’s most useful as an (often wrong) line completer more than anything else. It can take in an entire file and try to figure out the rest of what you’re currently writing. Its context window simply isn’t big enough to understand an entire project.

        That, and unit tests. Since unit tests are by design isolated, small, and unconcerned with the larger project, AI has at least a fighting chance of competently producing them. That still takes significant hand-holding, though.

        • @franzfurdinand@lemmy.world · 27 days ago

          I’ve used them for unit tests, and it still makes some really weird decisions sometimes. Like building an array of JSON objects that it feeds into one super long test with a bunch of switch conditions. When I saw that one I scratched my head for a little bit.

          • Emily (she/her) · 27 days ago

            I most often just get it straight up misunderstanding how the test framework itself works, but I’ve definitely had it make strange decisions like that. I’m a little convinced that the only reason I put up with it for unit tests is because I would probably not write them otherwise haha.

            • @franzfurdinand@lemmy.world · 27 days ago

              Oh, I am right there with you. I don’t want to write tests because they’re tedious, so I backfill with the AI at least starting me off on it. It’s a lot easier for me to fix something (even if it turns into a complete rewrite) than to start from a blank file.

        • @jorm1s@sopuli.xyz · 27 days ago

          Isn’t writing tests with AI a really bad idea? I mean, the whole point of writing separate tests is the hope that you won’t make the same mistake twice, and that you’ll therefore catch any behavior in the code that doesn’t match your intent. But if you use an LLM to write a test using said code as context (instead of the original intent you would bring yourself), there’s a risk that it’ll just write a test case that confirms the code’s wrong behavior.

          Okay, it might still be okay for regression testing, but you’re still missing most of the benefit you’d get by writing the tests manually. Unless you only care about closing tickets, that is.
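As a hypothetical illustration of that risk (the function name and numbers are invented for the example), an LLM shown only buggy code can happily produce a test that passes while cementing the bug:

```python
def discounted_price(price, percent):
    """Intent: apply a percentage discount.
    Bug: divides by 10 instead of 100, so 20% becomes 200%."""
    return price - price * percent / 10

# The kind of test an LLM might derive from the code's *actual* behavior
# rather than from the intent. It passes, and it locks the bug in:
def test_discounted_price():
    assert discounted_price(100, 20) == -100.0  # intent says this should be 80

test_discounted_price()
```

A human writing from the spec would assert `== 80` and immediately catch the bug; a test generated from the implementation can only ever confirm what the code already does.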

          • @Grazed@lemmy.world · 27 days ago

            “Unless you only care about closing tickets, that is.”

            Perfect. I’ll use it for tests at work then.

          • Emily (she/her) · 27 days ago (edited)

            I’ve used it most extensively for non-professional projects, where if I wasn’t using this kind of tooling to write tests they would simply not be written. That means no tickets to close, either. That said, I am aware that the AI is almost always, at best, testing for regression (I have had it correctly realise my logic was incorrect and write tests that caught it, but that is by no means reliable). Part of the “hand holding” I mentioned involves making sure it has sufficient coverage of use cases and edge cases, and that what it expects to be correct actually is correct according to intent.

            I essentially use the AI to generate a variety of scenarios and complementary test data, then further evaluate their validity and expand from there.

    • @frezik@midwest.social · 27 days ago

      I understand the motivated reasoning of upper management thinking programmers are done for. I understand the reasoning of other people far less. Do they see programmers as one of the few professions where you can afford a house and save money, and instead of looking for ways to make that happen for everyone, decide that programmers need to be taken down a notch?

    • I’m an engineer and can vibe code some features, but you still have to know wtf the program is doing overall. AI makes good programmers faster; it doesn’t make ignorant people know how to code.

  • katy ✨ · 27 days ago

    every time i see a twitter screenshot i just know i’m looking at the dumbest people imaginable

    • androogee (they/she) · 27 days ago

      Only if you confine “ai” to mean an LLM.

      Automation has replaced so many jobs already. More to come. Head in the sand won’t help anyone.

      • @13igTyme@lemmy.world · 27 days ago (edited)

        Today’s “AI” is just a buzzword for machine learning. ML has been around for a few decades and has been used in predictive analytics for those same decades.

        A machine that automates a job in a factory does one thing and never changes from that. It doesn’t learn and doesn’t make adjustments. When talking about “AI” no one is talking about the robot arm in a factory that does 5 total movements and repeats endlessly.

  • Rose · 26 days ago

    It’s even funnier because the guy is mocking DHH. You know, the creator of Ruby on Rails. Which 37signals obviously uses.

    I know from experience that a) Rails is a very junior-developer-friendly framework, yet incredibly powerful, and b) all Rails apps are colossal machines with a lot of moving parts. So when the scared juniors look at the apps for the first time, the senior Rails devs are like, “Eh, don’t worry about it, most of the complex stuff is happening in the background; the only way to break it is if you genuinely have no idea what you’re doing and screw things up on purpose.” Which leads to point c): using AI coding with Rails codebases is usually like pulling open the side door of this gargantuan machine and dropping a sack of wrenches into the gears.

  • @null_dot@lemmy.dbzer0.com · 27 days ago

    I take issue with the “replacing other industries” part.

    I know that this is an unpopular opinion among programmers, but all professions have roles that range from small skill sets and little cognitive ability to large skill sets and high-level cognitive ability.

    Generative AI is an incremental improvement in automation. In my industry it might make someone 10% more productive. For any role where it could make someone 20% more productive, that role could have been made more efficient in some other way: training, templates, simple conversion scripts, whatever.

    Basically, if someone’s job can be replaced by AI then they weren’t really producing any value in the first place.

    Of course, this means that in a firm with 100 staff, you could get the same output with 91 staff plus Gen AI. So yeah in that context 9 people might be replaced by AI, but that doesn’t tend to be how things go in practice.
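The headcount arithmetic there holds as a rough sketch: if each remaining worker is ~10% more productive, matching the old output of 100 staff takes about 100 / 1.1 ≈ 91 people.

```python
# If Gen AI makes each remaining employee ~10% more productive,
# how many people does it take to match 100 staff-units of output?
staff_needed = 100 / 1.1  # ≈ 90.9
print(round(staff_needed))  # 91
```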

    • @andioop@programming.dev · 27 days ago

      “I know that this is an unpopular opinion among programmers, but all professions have roles that range from small skill sets and little cognitive ability to large skill sets and high-level cognitive ability.”

      I am kind of surprised that is an unpopular opinion. I figure there is a reason we compensate people for jobs: pay people to do stuff you cannot do, or do not have the time to do, yourself. And for almost every job there is probably something that is way harder than it looks from the outside. I am not the most worldly of people, but I’ve figured that out just by trying different skills and existing.

      • @null_dot@lemmy.dbzer0.com · 27 days ago

        Programmers like to think that programming is a special profession which only super smart people can do. There’s a reluctance to admit that there are smart people in other professions.

    • 𞋴𝛂𝛋𝛆 · 27 days ago

      There are around 50 models listed as supported for function calling in llama.cpp, and a half dozen or so different APIs. How many people have tried even a few of these? There is even a single model with its own API supported in llama.cpp function calling. The Qwen VL models look very interesting if the supported image-recognition setup is built.

      • @null_dot@lemmy.dbzer0.com · 27 days ago

        I’m not really clear what you’re getting at.

        Are you suggesting that the commonly used models might only be an incremental improvement, but some of the less common models are ready to take accountants’ and lawyers’ and engineers’ and architects’ jobs?

  • @needanke@feddit.org · 27 days ago

    Tinfoil hat time:

    That Ace account is just an alt of the original guy, rage-baiting to give his posts more reach.

  • @explodicle@sh.itjust.works · 27 days ago

    Hey cool, an AI can program itself as well as a human can now. Think of how this will impact the programmer job market! That’s got to be like, the biggest implication of this development.