The new global study, conducted in partnership with The Upwork Research Institute, surveyed 2,500 C-suite executives, full-time employees, and freelancers worldwide. The results show that optimistic expectations about AI’s impact are not aligning with the reality many employees face: there is a disconnect between managers’ high expectations and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it’s also hampering productivity and contributing to employee burnout.

  • Hackworth@lemmy.world · 4 months ago

    I have the opposite problem. Gen A.I. has tripled my productivity, but the C-suite here is barely catching up to 2005.

        • Flying Squid@lemmy.world · 3 months ago

          Cool, enjoy your entire industry going under thanks to cheap and free software and executives telling their middle managers to just shoot and cut it on their phone.

          Sincerely,

          A former video editor.

          • Hackworth@lemmy.world · 3 months ago

            If something can be effectively automated, why would I want to continue to invest energy into doing it manually? That’s literal busy work.

                • Flying Squid@lemmy.world · 3 months ago

                  Video editing is not busy work. You’re excusing executives telling middle managers to put out inferior videos to save money.

                  You seem to think what I used to do was just cutting and pasting and had nothing to do with things like understanding filmmaking techniques, the psychology of choosing and arranging certain shots, and making do with what you have when you don’t have enough to work with.

                  But they don’t care about that anymore because it costs money. Good luck getting an AI to do that as well as a human any time soon. They don’t care because they save money this way.

                  • Hackworth@lemmy.world · 3 months ago

                    I’ve been editing video for 30 years, 25 professionally - narrative, advertising, live, etc. I know exactly what it entails. Rough cuts can be automated right now. They still need a fair amount of work to take them to the finish line, though who knows how long that’ll remain true. I’m more interested in training an AI editor on my particular editing style and choices than lamenting the death of a job description. I’ve already seen newscasts go from needing 9 people behind the camera to only 3 and the analog film industry transition to digital, putting LOTS of people out of a career. It’s been a long time since I was under the illusion that this wouldn’t happen to my occupation.

          • Hackworth@lemmy.world · 3 months ago

            “Soup to nuts” just means I am responsible for the entirety of the process, from pre-production to post-production. Sometimes that’s like a dozen roles. Sometimes it’s me.

    • themurphy@lemmy.ml · 4 months ago

      Same, I’ve automated a lot of my tasks with AI. No way 77% are “hampered” by it.

      • Hackworth@lemmy.world · 4 months ago

        I dunno, mishandling of AI can be worse than avoiding it entirely. There’s a middle manager here that runs everything her direct-report copywriter sends through ChatGPT, then sends the response back as a revision. She doesn’t add any context to the prompt, say who the audience is, or use the custom GPT that I made and shared. That copywriter is definitely hampered, but it’s not by AI, really, just run-of-the-mill manager PEBKAC.

        • Hackworth@lemmy.world · 3 months ago (edited)

          Voiceover recording, noise reduction, rotoscoping, motion tracking, matte painting, transcription - and there’s a clear path forward to automate rough cuts and integrate all that with digital asset management. I used to do all of those things manually/practically.

          e: I imagine the downvotes coming from the same people that 20 years ago told me digital video would never match the artistry of film.

          • aesthelete@lemmy.world · 3 months ago (edited)

            imagine the downvotes coming from the same people that 20 years ago told me digital video would never match the artistry of film.

            They’re right, IMO. Practical effects still look and age better than (very obvious, IMO) digital effects. Oh, and digital de-aging looks like crap, IMO.

            But, this will always remain an opinion battle anyway, because quantifying “artistry” is in and of itself a fool’s errand.

            • Hackworth@lemmy.world · 3 months ago

              Digital video, not digital effects - I mean the guys I went to film school with that refused to touch digital videography.

          • WalnutLum@lemmy.ml · 3 months ago

            All the models I’ve used that do TTS/RVC and rotoscoping have definitely not produced professional results.

            • Hackworth@lemmy.world · 3 months ago (edited)

              What are you using? Cause if you’re a professional, and this is your experience, I’d think you’d want to ask me what I’m using.

              • WalnutLum@lemmy.ml · 3 months ago

                Coqui for TTS, RVC UI for matching the TTS to the actor’s intonation, and DWPose -> controlnet applied to SDXL for rotoscoping

                  • Hackworth@lemmy.world · 3 months ago

                  Full open source, nice! I respect the effort that went into that implementation. I pretty much exclusively use 11 Labs for TTS/RVC, turn up the style, turn down the stability, generate a few, and pick the best. I do find that longer generations tend to lose the thread, so it’s better to batch smaller script segments.

                  Unless I misunderstand ya, your controlnet setup is for what would be rigging and animation rather than roto. I do agree that while I enjoy the outputs of pretty much all the automated animators, they’re not ready for prime time yet. Although I’m about to dive into KREA’s new key framing feature and see if that’s any better for that use case.
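The segment-batching approach described above (smaller script chunks keep long TTS generations from losing the thread) can be sketched in Python. The `split_script` helper and its `max_chars` budget are illustrative assumptions, not part of any vendor API; the actual TTS call would go where the placeholder comment sits.

```python
import re

def split_script(script: str, max_chars: int = 500) -> list[str]:
    """Split a narration script into sentence-aligned segments,
    each at most max_chars long, for batched TTS generation.

    Sentences are never split mid-sentence; a single sentence
    longer than max_chars becomes its own segment.
    """
    sentences = re.split(r"(?<=[.!?])\s+", script.strip())
    segments, current = [], ""
    for sentence in sentences:
        # Start a new segment if adding this sentence would exceed the budget.
        if current and len(current) + len(sentence) + 1 > max_chars:
            segments.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        segments.append(current)
    return segments

# Each segment would then be sent to the TTS service separately,
# and the best take picked per segment, e.g.:
# for segment in split_script(script):
#     audio = tts_generate(segment)  # placeholder for the real TTS call
```

Generating per segment also makes it cheap to regenerate only the chunks that came out badly, rather than re-rendering the whole script.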

                    • WalnutLum@lemmy.ml · 3 months ago

                    I was never able to get appreciably better results from 11 Labs than from a (minorly) trained RVC model :/ The long-script problem is something pretty much any text-to-something model suffers from: the longer the context, the lower the cohesion ends up.

                    I do rotoscoping with SDXL i2i and controlnet posing together. Without I found it tends to smear. Do you just do image2image?

      • FaceDeer@fedia.io · 3 months ago

        A lot of people are keen to hear that AI is bad, though, so the clicks go through on articles like this anyway.

        • themurphy@lemmy.ml · 3 months ago

          I’m not working in tech either. Everyone relying on a computer can use this.

          Also, medicine and radiology are two areas that will benefit from this - especially the patients.