• NightCrawlerProMax@lemmy.world · 1 day ago

    Yeah, not gonna happen where I work. The top brass is already pushing for “moving fast” every single day. “Use AI to be 10x more productive” and “use vibe coding in your work to reduce the time taken” are the kinds of things we hear constantly. There’s no way they’ll do a 4-day work week.

    • Aganim@lemmy.world · 1 day ago

      Vibe… what now? Is that like coding in all caps when you’re pissed off, or in a bastardised flavour of “CaMel CaSe” for those moments you want to spice things up with some sarcasm?

      • NightCrawlerProMax@lemmy.world · 22 hours ago

        Vibe coding means the human works with an AI agent that’s trained to act as a software engineer. The human’s only responsibility is to write detailed prompts; the AI writes most or all of the code.

        • StarDreamer@lemmy.blahaj.zone · 19 hours ago

          As much as I hate the concept, it works. However:

          1. It only works for generalized programming (e.g. “write a Python script that parses CSV files”). For any specialized field it would NOT work (e.g. “write a DPDK program that identifies RoCEv2 packets and rewrites the IP address”; a rough sketch of what that task actually involves follows after this list).

          2. It requires the human supervising the AI agent to know how to write the expected code themselves, so they can prompt the agent to use specific techniques (e.g. use Python’s csv library instead of string.split). This is not a problem right now, since even programmers fresh out of college generally know what they are doing.
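
          To give a sense of why that DPDK example in point 1 is so specialized: RoCEv2 is just RDMA traffic carried over UDP destination port 4791, so the task boils down to matching that port in the datapath and patching the IP header. A minimal, hypothetical DPDK-style sketch (IPv4 only, no VLAN handling, checksum fix-up omitted, helper name made up):

          ```c
          /* Hypothetical sketch: flag RoCEv2 packets (UDP dst port 4791) in a DPDK
           * mbuf and rewrite the IPv4 destination address. Simplified: IPv4 only,
           * no VLANs, and the IPv4/UDP checksums are not recomputed here. */
          #include <stdint.h>
          #include <netinet/in.h>
          #include <rte_mbuf.h>
          #include <rte_ether.h>
          #include <rte_ip.h>
          #include <rte_udp.h>
          #include <rte_byteorder.h>

          #define ROCEV2_UDP_DPORT 4791

          static void rewrite_rocev2_dst(struct rte_mbuf *m, rte_be32_t new_dst)
          {
              struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
              if (eth->ether_type != rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4))
                  return;

              struct rte_ipv4_hdr *ip = (struct rte_ipv4_hdr *)(eth + 1);
              if (ip->next_proto_id != IPPROTO_UDP)
                  return;

              /* IHL is in 32-bit words, so multiply by 4 for the header length. */
              struct rte_udp_hdr *udp = (struct rte_udp_hdr *)
                  ((uint8_t *)ip + (ip->version_ihl & 0x0f) * 4);
              if (udp->dst_port != rte_cpu_to_be_16(ROCEV2_UDP_DPORT))
                  return;

              ip->dst_addr = new_dst;   /* checksum fix-up omitted for brevity */
          }
          ```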

          If companies try to use this to avoid hiring and training skilled programmers, they will have a very bad time in the future when the skilled talent pool runs dry and nobody is left who can tell correctly written code from incorrect code.

          • NightCrawlerProMax@lemmy.world · 16 hours ago

            It has gone way beyond that. Where I work, we have access to the experimental GitHub Copilot SWE agent. It’s ridiculously smart at looking at your current codebase and implementing a solution. The other day, I used it to build a page in our web app in 3 hours with prompts and minimal code changes of my own. If I had done it myself, it would have taken me at least a couple of days. But the SWE agent looked at the tech stack, patterns, structure etc. of our web app and implemented the page based on that. It asked if it should add unit tests for the new files and update the existing ones. Out of curiosity, I said yes. It kept iterating and running the tests until it had 100% coverage. To say I was impressed would be an understatement. To make things even more interesting, it said it noticed that we use Storybook testing, so it went ahead and added a couple of Storybook tests as well.

            • StarDreamer@lemmy.blahaj.zone · 15 hours ago

              I keep hearing good things; however, I have not yet seen any meaningful results for the stuff I would use such a tool for.

              I’ve been working on network function optimization at hundreds of gigabits per second for the past couple of years. Even with MTU-sized packets you only get approximately 200 ns of processing time per packet (and that is without batching). Optimizations generally involve manual prefetching and using/abusing NIC offload features to minimize atomic instructions (this also runs on ARM, where gcc compiles an atomic fetch-and-add into a function that does a load-linked/store-conditional loop and takes approximately 8 times as long as a regular memory write). Current AI-assisted agents cannot generate efficient code that runs at line rate. There are no textbooks or blogs that explain in detail how these things work, so there are no resources for them to be trained on.
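
              For a concrete (if toy) picture of those two tricks, here is a minimal sketch in plain C, with made-up struct and function names: prefetch the next packet’s header while the current one is handled, and keep the counters per core so the hot path never issues an atomic fetch-and-add:

              ```c
              /* Hypothetical sketch (names are made up): prefetch ahead and use
               * core-private counters instead of atomic read-modify-writes. */
              #include <stdint.h>

              #define MAX_CORES 64

              struct pkt { uint8_t *data; uint16_t len; };

              /* One counter block per core, padded to a cache line, so cores never
               * contend on the same line and plain stores are enough. */
              struct per_core_stats {
                  uint64_t pkts;
                  uint64_t bytes;
              } __attribute__((aligned(64)));

              static struct per_core_stats stats[MAX_CORES];

              static void process_burst(struct pkt *pkts[], unsigned n, unsigned core_id,
                                        void (*handle)(struct pkt *))
              {
                  for (unsigned i = 0; i < n; i++) {
                      /* Hide DRAM latency: start pulling the next header into cache
                       * while the current packet is still being processed. */
                      if (i + 1 < n)
                          __builtin_prefetch(pkts[i + 1]->data, 0, 3);

                      handle(pkts[i]);

                      /* Core-private counters: plain writes, no atomic fetch-and-add
                       * on the hot path. Aggregation happens off the fast path. */
                      stats[core_id].pkts  += 1;
                      stats[core_id].bytes += pkts[i]->len;
                  }
              }
              ```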

              You’ll find a similar problem if you try to prompt them to generate good RDMA code. At best you’ll get something that barely works, and almost always the code cannot efficiently exploit the latency reduction RDMA provides over traditional transport protocols. The generated code usually looks like how a graduate CS student might think RDMA works, but it is usually completely unusable: it either requires additional PCIe round-trips or has severe thrashing issues with main memory.
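
              For a flavour of the kind of detail that matters, a hand-written libibverbs path will typically post small writes inline (so the NIC reads the payload straight out of the work request instead of doing a separate DMA read over PCIe) and only request a completion every Nth post to cut completion overhead. A rough sketch, assuming the payload fits within the QP’s max_inline_data and the QP was created with sq_sig_all = 0; setup, error handling and the remote_addr/rkey exchange are omitted:

              ```c
              /* Hypothetical sketch with libibverbs: a small RDMA WRITE posted inline,
               * signaled only every SIGNAL_EVERY posts. */
              #include <stdint.h>
              #include <stddef.h>
              #include <infiniband/verbs.h>

              #define SIGNAL_EVERY 64

              static int post_small_write(struct ibv_qp *qp, void *buf, uint32_t len,
                                          uint64_t remote_addr, uint32_t rkey,
                                          uint64_t seq)
              {
                  struct ibv_sge sge = {
                      .addr   = (uintptr_t)buf,
                      .length = len,
                      .lkey   = 0,   /* lkey is not used for inline payloads */
                  };
                  struct ibv_send_wr wr = {0}, *bad_wr = NULL;

                  wr.wr_id               = seq;
                  wr.sg_list             = &sge;
                  wr.num_sge             = 1;
                  wr.opcode              = IBV_WR_RDMA_WRITE;
                  /* Inline: the CPU copies the payload into the WQE, so the NIC does
                   * not need an extra DMA read of the source buffer over PCIe. */
                  wr.send_flags          = IBV_SEND_INLINE;
                  if (seq % SIGNAL_EVERY == 0)
                      wr.send_flags     |= IBV_SEND_SIGNALED;
                  wr.wr.rdma.remote_addr = remote_addr;
                  wr.wr.rdma.rkey        = rkey;

                  return ibv_post_send(qp, &wr, &bad_wr);
              }
              ```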

              My guess is that these tools are ridiculously good at stuff they can find examples of online. However, for stuff that has no examples, they are woefully underprepared and you still need a programmer to do the work manually, line by line.