Over half of all tech industry workers view AI as overrated

  • eestileib@sh.itjust.works

    Over half of tech industry workers have seen the “great demo -> overhyped bullshit” cycle before.

    • SineSwiper@discuss.tchncs.de

      NoSQL, blockchain, crypto, the metaverse, just to name a few recent examples.

      AI is overhyped, but so far it has been more useful than any of those other examples.

      • PieMePlenty@lemmy.world

        These are useful technologies when applied where they’re called for. They aren’t all-in-one solutions like the smartphone, which killed off cameras, PDAs, media players… I think if people looked at them as tools that fix specific problems, we’d all be happier.

  • ParsnipWitch@feddit.de

    It is overrated. At least when people treat AI as some sort of brain crutch that spares them from having to learn anything.

    My boss now believes he can “program too” because he lets ChatGPT write scripts for him that, more often than not, are poor BS.

    He also pastes chunks of our code into ChatGPT when we file bugs or aren’t finished with everything within five minutes, as some kind of “gotcha” moment, ignoring that the solutions he then provides don’t work.

    Too many people see LLMs as authorities they just aren’t…

    • Spedwell@lemmy.world

      It bugs me how easily people (a) trust the accuracy of the output of ChatGPT, (b) feel like it’s somehow safe to use the output in commercial applications or to place it under their own license, as if the open questions of copyright aren’t a ten-ton liability hanging over their heads, and (c) feed sensitive data into ChatGPT, as if OpenAI isn’t going to log that interaction and train future models on it.

      I have played around a bit, but I am simply not carefree/careless enough, or am too uptight (pick your interpretation), to use it for anything serious.

    • kromem@lemmy.world

      Too many people see LLMs as authorities they just aren’t…

      This is more a ‘human’ problem than an ‘AI’ problem.

      In general it’s weird as heck that the industry is going full force into chatbots as a search replacement.

      Like, that was a neat demo for a low-hanging-fruit use case, but it’s pretty damn far from the ideal production application, given that the tech isn’t actually memorizing facts; when it gets things right, it’s a case of “wow, this is impressive, because it really shouldn’t be doing a good job at this.”

      Meanwhile nearly no one is publicly discussing their use as classifiers, which is where the current state of the tech is a slam dunk.
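
A minimal sketch of the LLM-as-classifier pattern the comment points at: constrain the model to a fixed label set and map its free-form reply back onto that set. Everything here is hypothetical illustration, not any particular vendor’s API; `ask_llm` is a stand-in callable, and a toy keyword matcher plays the model so the flow is runnable.

```python
LABELS = ["bug report", "feature request", "question"]

def build_prompt(text: str) -> str:
    # Ask for exactly one label from a closed set instead of free text.
    options = ", ".join(LABELS)
    return (
        f"Classify the following message as exactly one of: {options}.\n"
        f"Message: {text}\n"
        "Answer with the label only."
    )

def classify(text: str, ask_llm) -> str:
    # `ask_llm` is any callable mapping a prompt string to a reply string.
    reply = ask_llm(build_prompt(text)).strip().lower()
    # Map the reply back onto the label set; fall back if the model rambles.
    for label in LABELS:
        if label in reply:
            return label
    return "unknown"

# Toy stand-in "model" (keyword matching), just to make the flow runnable:
def fake_llm(prompt: str) -> str:
    return "bug report" if "crashes" in prompt else "question"

print(classify("The app crashes on startup", fake_llm))  # bug report
```

The same shape works for sentiment, intent routing, or moderation; the key design choice is forcing a closed label set so every output is checkable.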

      Overall, the past few years have opened my eyes to just how broken human thinking is, not as much the limitations of neural networks.

  • shirro@aussie.zone

    Many areas of machine learning, particularly LLMs, are making impressive progress, but the usual Y Combinator techbro types are overhyping things again. Same as every other bubble, including the original Internet one, the crypto scams, and half the bullshit companies they run that add fuck-all value to the world.

    The cult of bullshit around AI is a means to fleece investors. I’ve seen the same bullshit too many times. Machine learning is going to have a huge impact on the world, same as the Internet did, but it isn’t going to happen overnight. The only certain thing that will happen in the short term is that wealth will be transferred from our pockets to theirs. Fuck them all.

    I skip most AI/ChatGPT spam in social media with the same ruthlessness I skipped NFTs. It isn’t that ML doesn’t have huge potential but most publicity about it is clearly aimed at pumping up the market rather than being truly informative about the technology.

    • Barack_Embalmer@lemmy.world

      ML has already had a huge impact on the world (for better or worse), to the extent that Yann LeCun proposes that the tech giants would crumble if it disappeared overnight. For several years it’s been the core of speech-to-text, language translation, optical character recognition, web search, content recommendation, social media hate speech detection, to name a few.

      • shirro@aussie.zone

        ML based handwriting recognition has been powering postal routing for a couple of decades. ML completely dominates some areas and will only increase in impact as it becomes more widely applicable. Getting any technology from a lab demo to a safe and reliable real world product is difficult and only more so when there are regulatory obstacles and people being dragged around by vehicles.

        For the purposes of raising money from investors it is convenient to understate problems and generate a cult of magical thinking about technology. The hype cycle and the manipulation of the narrative has been fairly obvious with this one.

  • irotsoma@lemmy.world

    It is overrated. It has a few uses, but it’s not a generalized AI. It’s like calling a basic calculator a computer. Sure, it is an electronic computing device and makes a big difference in calculating speed for finances or retail cashiers or whatever. But it’s not a generalized computing system that can compute basically anything it’s given instructions for, which is what we think of when we hear something called a “computer”. It can only do basic math. It could never be used to display a photo, much less run a complex video game.

    Similarly, the current thing called “AI” can learn only within the narrow subject it was designed for. It can’t learn just anything, it can’t make inferences beyond its training material, and it doesn’t understand. It can’t create anything totally new; it just remixes things. It could never create a new genre of games with some kind of interface that has never been thought of, or discover the exact mechanism of how gravity works, since those things aren’t in its training material because they don’t yet exist.

      • irotsoma@lemmy.world

        Lol, those are different. I meant a little solar-powered calculator that does addition, subtraction, multiplication, division, and that’s it.

  • milkjug@lemmy.wildfyre.dev

    I have a doctorate in computer engineering, and yeah it’s overhyped to the moon.

    I’m oversimplifying, and someone will ackchyually me, but once you understand the core mechanics the magic is somewhat diminished. It’s linear algebra and matrices all the way down.

    We got really good at parallelizing matrix operations and storing large matrices, and the end result is essentially “AI”.
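
A toy illustration of the “matrices all the way down” point, assuming nothing beyond NumPy: one neural-network layer is a matrix multiply plus a cheap nonlinearity, and a model is many of these stacked. The shapes and random values here are arbitrary; the matmuls are exactly the operations that got parallelized so well on GPUs.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, W, b):
    # Matrix multiply plus bias, then ReLU; the matmul is the expensive,
    # embarrassingly parallel part that accelerators speed up.
    return np.maximum(x @ W + b, 0.0)

x = rng.normal(size=(1, 8))                     # one input vector of 8 features
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

h = dense_layer(x, W1, b1)                      # hidden activations, shape (1, 16)
out = h @ W2 + b2                               # output scores, shape (1, 4)
print(out.shape)                                # (1, 4)
```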

  • steeznson@lemmy.world

    I remember when it first came out, I asked it to help me write a MapperConfig custom strategy, and the answer it gave me was so fantastically wrong, even with prompting, that I lost an afternoon. Honestly, the only useful thing I’ve found for it is getting it to spot potential syntax errors in Terraform code that the plan might miss. It doesn’t even complement my programming skills the way a traditional search engine can; instead it assumes a solution that is usually wrong, and you are left trying to build your house on the boilerplate sand it spits out at you.

    • lloram239@feddit.de

      It’s a general problem with ChatGPT (free): the more obscure the topic, the more useless the answers. It works pretty well for Wikipedia-style general knowledge, but everything that goes even a little deeper is a mess. This is true even for things that shouldn’t be that obscure, e.g. pop-culture topics like movies. It can give you a summary of Star Wars, but anything even a little outside the mainstream it makes up on the spot.

      How much better is ChatGPT Pro when it comes to this? Can it answer /r/tipofmytongue/-style questions?

      • applebusch@lemmy.world

        I’ve found the free one can sometimes answer tip-of-my-tongue questions, but yeah, anything even remotely obscure it will just lie about and say doesn’t exist, especially if you stray a little too close to the puritanical guard rails. One time I was going down a rabbit hole researching human sex organ variations, and it flat out told me the people in South America who grow a penis at 12 don’t exist, until I found the name guevedoces on my own; and wouldn’t you know it, then it knew what I was talking about.

    • phoneymouse@lemmy.world

      I also have tried to use it to help with programming problems, and it is confidently incorrect a high percentage (50%) of the time. It will fabricate package names, functions, and more. When you ask it to correct itself, it will give another confidently incorrect answer. Do this a few more times and you could end up with it suggesting the first incorrect answer it gave you, and then you realize it is literally leading you in circles.

      It’s definitely a nice option to check something quickly, and it has given me some good information, but you really can’t blindly trust its output.

      At least with programming, you can validate fairly quickly that it is giving bad information. With other real-life applications, like cooking/baking or trip planning, the consequences of bad information could be quite a bit worse.

  • thorbot@lemmy.world

    That’s because it is overrated, and the people in the tech industry are actually qualified to make that determination. It’s a glorified assistant, nothing more. We’ve had these for years; they’re just getting a little bit better. It’s not gonna replace a network stack admin or a programmer anytime soon.

  • SuperSpruce@lemmy.ml

    It is currently overhyped and so much of it just seems to be copying the same 3 generative AI tools into as many places as possible. This won’t work out because it is expensive to run the AI models. I can’t believe nobody talks about this cost.

    Where AI shines is when something new is done with it, or there is a significant improvement in some way to an existing model (more powerful or runs on lower end chips, for example).

  • MeanEYE@lemmy.world

    Of course, because the hype didn’t come from tech people but from content writers, designers, PR people, etc., who all thought they didn’t need tech people anymore. The moment ChatGPT became popular, I started getting debugging requests from a few designers. They had gone and asked it to write a plugin or a script they needed. The only problem was that it didn’t really work like it should. Debugging that code was a nightmare.

    I’ve seen a few clever uses. A couple of our clients made a “chat bot” whose reference material was their poorly written documentation. You’d ask the bot something technical related to that documentation, and it would decipher the mess. I still claim writing better documentation would have been the smarter move, but what do I know.
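
The documentation chat bot described in the comment reduces to a simple retrieve-then-ask loop: find the most relevant doc snippet, then pass it to a model as context. This sketch uses naive word overlap for retrieval (real systems use embeddings); the sample docs are invented and `ask_llm` is a hypothetical callable, not a real API.

```python
import re

DOCS = [
    "To reset your password, open Settings and choose Security.",
    "Exports are generated nightly and stored under /var/exports.",
    "The API rate limit is 100 requests per minute per token.",
]

def words(s: str) -> set:
    # Lowercased word set, punctuation and digits stripped.
    return set(re.findall(r"[a-z]+", s.lower()))

def retrieve(question: str) -> str:
    # Pick the snippet sharing the most words with the question.
    q = words(question)
    return max(DOCS, key=lambda d: len(q & words(d)))

def answer(question: str, ask_llm) -> str:
    # Ground the model in the retrieved snippet instead of its memory.
    context = retrieve(question)
    prompt = f"Using only this documentation:\n{context}\n\nAnswer: {question}"
    return ask_llm(prompt)

print(retrieve("How do I reset my password?"))
```

Whether this beats just fixing the documentation is, as the comment says, debatable, but it shows why the pattern works at all: the model only has to paraphrase a retrieved snippet, not recall facts.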

  • online@lemmy.ml

    In a podcast I listen to, where tech people discuss security topics, they finally got to something related to AI. They hesitated, snickered, and said, “Artificial Intelligence, I guess, is what I have to say now instead of Machine Learning”; then both the host and the guest belted out laughs for a while before continuing.

  • Furbag@lemmy.world

    I’ll join in on the cacophony in this thread and say it truly is way overrated right now. Is it cool and useful? Sure. Is it going to replace all of our jobs and do all of our thinking for us from now on? Not even close.

    I, as a casual user, have already noticed some significant problems with the way that it operates such that I wouldn’t blindly trust any output that I get without some serious scrutiny. AI is mainly being pushed by upper management-types who don’t understand what it is or how it works, but they hear that it can generate stuff in a fraction of the time a person can and they start to see dollar signs.

    It’s a fun toy, but it isn’t going to change the world overnight.

  • rsuri@lemmy.world

    I use github copilot. It really is just fancy autocomplete. It’s often useful and is indeed impressive. But it’s not revolutionary.

    I’ve also played with ChatGPT and tried to use it to help me code but never successfully. The reality is I only try it if google has failed me, and then it usually makes up something that sounds right but is in fact completely wrong. Probably because it’s been trained on the same insufficient data I’ve been looking at.

    • MeanEYE@lemmy.world

      I still consider Copilot a serial license violator. So many things on GitHub are GPL-licensed, and completing your code with someone else’s, or at least a variation of it, without giving credit is a clear violation of the license.

    • 1984@lemmy.today

      For me it depends a lot on the question. For tech questions like programming language questions, it’s much faster than a search engine. But when I did research for cars and read reviews, I used Kagi.

    • thelastknowngod@lemm.ee

      Yeah agreed. I use copilot too. It’s fine for small, limited tasks/functions but that’s about it. The overwhelming majority of my work is systems design and maintenance though… There’s no AI for that…