• 0 Posts
  • 32 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • I’m pretty sure I’ve seen several different clips where he repeats the same “I’m a moron” spiel.

    While I have only watched what few clips came my way, I was under the impression that this was the entire point of his podcast: invite interesting* people, then validate them in discussion by agreeing with most of their takes, regardless of how bizarre they are, so that they speak freely about their topic.

    *wherein “interesting” is usually something from the categories of fringe beliefs (often conspiracies), drugs, culturally influential people, or experts on whatever is a big topic for his viewership at the time.

    Many of the experts are also those of the fringe belief kind.


    Basically, if you take Rogan’s views significantly more seriously than the beliefs of your local meth head, you are doing it wrong.



  • The solution I’ve sort-of found is to go to communities of Arch-based systems instead of Arch itself. The same solution should work in most cases*, and the communities are more newbie-friendly.

    *Depends on how close to Arch the distro is in that aspect/subsystem. The Manjaro community is probably less likely to offer AUR-based solutions, since the AUR can be unreliable/unsafe on Manjaro.


  • Someone made a website compiling them, which you might be able to find, but here’s what I remember:

    • Shipping the extraordinarily unstable test release of a package in their normal releases. That package specifically included disclaimers that it was for testing only and very clearly not meant for general release to unsuspecting end users.

    • Getting banned from the AUR (twice?) for effectively DDoS-ing it with their faulty code. As I recall, every machine queried the AUR for updates constantly, or something like that.

    • Breaking AUR dependencies by holding back releases for a few weeks, which they do regularly in the name of stability. Basically, don’t use the AUR on Manjaro.



  • Yes and no. I’d prefer user choice: curating your own list of instances you interact with.

    However, each community also adds further burden on moderation. The communities you allow affect the culture, and some are very clearly more trouble than others.

    My current solution would be to have multiple accounts for different sections of the fediverse. Currently I only have a generic Kbin and a Lemmy account, but if you find a Lemmy instance that’s federated with the broader free-speech spectrum without just veering into insane territory itself, I’d be interested.


  • Kbin user here. It does not federate downvotes from Lemmy. So far I have a total of two (2) downvotes, and every single interaction, including the one I got downvoted for, was quite positive.

    No toxicity in normal interactions so far. The only (slightly) toxic comment sections were regarding meta topics of users complaining about toxicity elsewhere and/or wanting to defederate more communities. Even those discussions were nearly entirely polite and productive.

    The only somewhat toxic thread I participated in was when one car enthusiast complained about the fuckcars community and got called out throughout the comment section. Piling on like that was probably not the best response, and they deleted their post some time later.


  • It’s a very difficult topic, and I don’t see any satisfying real-world solutions. Two big issues:

    1. Obvious solutions are impossible. Generative AI is impossible to “undo”. Much of the basic tech, and many simpler models, are spread far and wide. Research, likewise, is spread out both globally and across all levels, from large megacorps down to small groups of researchers. Even severe attempts at restricting it would, at most, punish the small players.

    I don’t want a world where corporations like Adobe or Microsoft hold sole control over legal, “ethically trained” generative AI. However, that is where insisting on copyright for training sets, or on censored “safe” LLMs, would lead us.

    2. Many of the ethical and practical concerns are on sliding scales, and current systems sit right at the edges of those scales. When does machine assistance become unethical? When does imitating the specific style of an artist become wrong? Where does inspiration end and intellectual rights infringement begin? At what point does reducing racial and other biases in LLMs switch over to turning them into biased propaganda machines?

    There are dozens of questions like these, and I have found no satisfying answers to any of them. Yet the answers to some of them are required in order to produce reasonable solutions.