The malicious changes were submitted by JiaT75, one of the two main xz Utils developers with years of contributions to the project.

“Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system,” an official with distributor OpenWall wrote in an advisory. “Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the ‘fixes’” provided in recent updates. Those updates and fixes can be found here, here, here, and here.

On Thursday, someone using the developer’s name took to a developer site for Ubuntu to ask that the backdoored version 5.6.1 be incorporated into production versions because it fixed bugs that caused a tool known as Valgrind to malfunction.

“This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass,” the person warned, from an account that was created the same day.

One of the maintainers for Fedora said Friday that the same developer approached them in recent weeks to ask that Fedora 40, a beta release, incorporate one of the backdoored utility versions.

“We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added),” the Fedora maintainer said.

He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise.

  • Aux@lemmy.world

    And that’s why you cannot trust open source software blindly.

    • Shdwdrgn@mander.xyz

      And yet with closed-source software you have no choice but to trust it blindly. At least open source software has people looking at the code.

    • just_another_person@lemmy.world

      You are an idiot. It’s not blind. That’s how it was found.

Not having world-accessible SSH is the real fix here.
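
A minimal sketch of what “not world-accessible” could look like for OpenSSH, assuming key-only auth; the `admin` user and the `10.0.0.0/24` subnet are hypothetical placeholders, not anything from the article:

```
# /etc/ssh/sshd_config fragment -- illustrative only
PasswordAuthentication no      # keys only; no password guessing
PermitRootLogin no             # never expose root directly
AllowUsers admin@10.0.0.0/24   # hypothetical user, LAN-only (CIDR form)
```

Pairing this with a firewall rule that drops port 22 from outside the subnet gives defense in depth: even a backdoored sshd is harder to reach.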

      • Cosmic Cleric@lemmy.world

        You are an idiot. It’s not blind. That’s how it was found.

        From the article…

Will Dormann, a senior vulnerability analyst at security firm Analygence, said in an online interview: “BUT that’s only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world.”

        • 5C5C5C@programming.dev

The fact that it was discovered early due to bad actor sloppiness does not imply that it could not also have been caught before widespread usage via the security audits that take place for many enterprise-grade Linux distributions.

          • just_another_person@lemmy.world

I think it does though. They call it sloppy, I call it sophisticated. Same reason every major distro is checking shit out right now.

        • uis@lemm.ee

          Opensource = fast detection

          Opensource + sloppiness = faster detection

          Closedsource = never detected

          Closedsource + sloppiness = maybe detected

          • Cosmic Cleric@lemmy.world

You can put the pom-poms/rifle down, I’m not attacking open source, not in the slightest. I’m a big believer in open source.

But I also know that volunteer work is not always as rigorous as paid work.

The only point I’m trying to make in this conversation is to get confirmation of whether security audits are actually done, or whether everyone just assumes they’re done for “Open Source” reasons.

      • elshandra@lemmy.world

Yeah, I nearly panicked for a second there, then I remembered no one’s getting near that anyway. Back to my relaxing weekend.

      • TheKMAP@lemmynsfw.com

Not really. The most important admin interfaces are the ones you can’t lock behind an IP whitelist.

“Whitelist good IPs” - OK, but what if I need to manage the “good IP” infra, etc.?

    • promithyo@lemmynsfw.com

And that’s precisely why the exploit was found. If it were a closed source programme, a lone threat actor modifying the code and slipping it into a release could happen, and no one would find out. In that case everyone trusts the internal security team of a closed source company blindly. I really don’t see this as an open source issue. These are malicious actors.

    • RBG@discuss.tchncs.de

As opposed to what? If you had said “that’s why you cannot trust any software blindly,” it wouldn’t have been that wrong.

      • TheKMAP@lemmynsfw.com

        Single point of failure on the lone maintainer of a popular package, vs having to hack an entire company like SolarWinds and make a backdoor that bypasses their entire SDLC. Which is harder?

        • Plopp@lemmy.world

          A better way to compare the two would be a lone dev releasing open source software vs a lone dev releasing closed source. And a company releasing open source vs another company of the same size releasing closed source.

          • TheKMAP@lemmynsfw.com

SolarWinds had garbage infosec, but you gotta admit the attack chain is much longer and more complex than “kidnap one guy.”

        • jj4211@lemmy.world

There are plenty of closed-source packages or components with a single actor ultimately accountable for them.

Imagine a tester even bothering to open a bug because starting a session takes 500 ms longer than it used to. Imagine what the development manager is going to do with that defect. Imagine a customer complaining about that and the answer the company will give. At best they might identify the problematic component, then ask the sole maintainer to give the “working as designed” explanation, and that explanation won’t be held to scrutiny, because at that point it’s just a super minor performance complaint.

No, closed source is every bit as susceptible, if not more so, because management is constantly trying to make all those tech people stop wasting time on little stuff that doesn’t matter, and no one outside is allowed to volunteer their interest in investigating.

          • GreyEyedGhost@lemmy.ca

Checking login timing is more likely to happen in the security sector than anywhere else. A number of timing-based vulnerabilities have been identified and removed in the past.

    • Takios@feddit.de

      Imagine trying to make a helpdesk of a proprietary company take your “it’s taking 0.5 seconds longer to login” complaint seriously…

      • GreyEyedGhost@lemmy.ca

So many vulnerabilities were found through login timing that one common security feature is to take just as long to respond to a bad login, so an attacker can’t tell which part failed. Here’s an article I found about one such vulnerability.
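
The login-timing leaks described above usually come down to early-exit comparisons. A minimal Python sketch of the problem and the standard-library mitigation; the function names are illustrative, not from any article cited in the thread:

```python
import hmac

def leaky_equals(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: returns as soon as a byte differs,
    so running time leaks how long the matching prefix is."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    """hmac.compare_digest inspects every byte regardless of where
    the first mismatch occurs, defeating the timing measurement."""
    return hmac.compare_digest(a, b)
```

Responding to every bad login in the same amount of time, as the comment describes, is the same idea applied at the protocol level.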

      • uis@lemm.ee

        Reflections on Trusting Trust

- paper by Ken Thompson, 1984 (not that one)

    • ddh@lemmy.sdf.org

You can trust blindly whatever software you like. Most of us, even those who can code, blindly trust the software we use because we have other priorities. But what you can do only with open source software is open your eyes, if you choose.

    • webghost0101@sopuli.xyz

      Ftfy: And that’s why you cannot trust people blindly.

Just because we can’t observe the code of proprietary software, and it’s sold legally, doesn’t mean it’s all safe. Genuinely, I distrust anything with a profit incentive.