• 0 Posts
  • 33 Comments
Joined 5 months ago
Cake day: May 29th, 2024


  • There’s little to be gained in trying to make current-day nations pay reparations for things that their ancestors did.

    “We will not blame [King George] for the crimes of his ancestors if he relinquishes the royal rights of his ancestors; but as long as he claims their rights, by virtue of descent, then, by virtue of descent, he must shoulder the responsibility for their crimes.”
    -James Connolly

    How about we look forwards, instead?

    How about we look at the present? Because colonialism isn’t over. People are still suffering from it right now. The global south is still actively being colonized and exploited right now.

    You can’t drive a knife into someone’s ribs and then say “what’s in the past is in the past, we need to look forward instead” while your hand is still holding the blade. How can you hope to start the process of healing if you haven’t even taken the knife out all the way?

    Now, I don’t have all the answers for how that healing process is going to work for the world, but I’m pretty sure a billionaire dancing around in a golden hat and velvet robes with a title that says “God made my bloodline special so I can stab whoever I want” isn’t a part of it.


  • This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

    I do think the complexity of artificial neural networks is often overstated. A real neuron is a lot more complex than an artificial one, and real neurons aren’t simply feed-forward the way ANNs are (ANNs have to be, because they’re trained with back-propagation); they have their own spontaneous activity, which kinda implies that real neural networks don’t learn by stochastic gradient descent with back-propagation. But to say there’s nothing at all comparable between the way humans learn and the way ANNs learn is wrong, IMO.
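
    To make the jargon concrete, here’s a minimal sketch of what I mean by “feed-forward plus back-propagation plus gradient descent”: a tiny toy network on toy data. Everything in it (the XOR data, the layer sizes, the learning rate) is purely illustrative, not a claim about how any real model is built.

    ```python
    # Minimal sketch: a purely feed-forward network trained with
    # back-propagation and (full-batch) gradient descent on XOR.
    # All numbers here are illustrative, not tuned for anything real.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: XOR, the classic "needs a hidden layer" problem.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer; activity flows strictly forward, layer by layer.
    W1 = rng.normal(size=(2, 8))
    b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1))
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(5000):
        # Forward pass: no recurrence, no spontaneous activity.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: push the error gradient from the output back
        # toward the input (this is the back-propagation part).
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent update of the weights.
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
    ```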

    If you read books such as V.S. Ramachandran and Sandra Blakeslee’s Phantoms in the Brain or Oliver Sacks’ The Man Who Mistook His Wife for a Hat, you’ll see lots of descriptions of patients with anosognosia brought on by brain injury. These are people who, for example, are unable to see but are also incapable of recognizing that inability. If you ask them to describe what they see in front of them, they’ll make something up on the spot (in a process called confabulation) and not realize they’ve done it. They’ll tell you what they’ve made up while believing that they’re telling the truth. (Vision is just one example; anosognosia can manifest in many different cognitive domains.)

    It is V.S. Ramachandran’s belief that there are two processes at work in the brain: a confabulator (or “yes-man”, so to speak) and an anomaly detector (or “critic”). The yes-man’s job is to offer up explanations for sensory input that fit within the existing mental model of the world, whereas the critic’s job is to advocate for changing the world-model to fit the sensory input. In patients with anosognosia, something has gone wrong in the connection between the critic and the yes-man in a particular cognitive domain, and as a result the yes-man is the only one doing any work. Even in a healthy brain you can see the effects of the interplay between these two processes, such as in the placebo effect and in hallucinations brought on by sensory deprivation.

    I think ANNs in general and LLMs in particular are similar to the yes-man process, but lack a critic to go along with it.
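
    To make that two-process picture a bit more concrete, here’s a toy sketch in code. It’s purely illustrative (the one-dimensional “world model”, the threshold, and the names are all made up for the example, and it is not Ramachandran’s actual model): a yes-man that explains readings away using the current model, and a critic that only forces a model update when the mismatch gets too big.

    ```python
    # Toy sketch of the yes-man / critic split described above.
    # The "world model" is just a running estimate of a sensor value;
    # the tolerance and update rule are made up for the illustration.

    def yes_man(world_model: float, reading: float) -> str:
        """Explain the reading so it fits the existing model (never updates it)."""
        return (f"reading {reading:.1f} is basically what I expected "
                f"({world_model:.1f}); probably just noise")

    def critic(world_model: float, reading: float, tolerance: float = 2.0) -> bool:
        """Flag an anomaly when the reading strays too far from the model."""
        return abs(reading - world_model) > tolerance

    world_model = 20.0  # current belief about the sensor's typical value
    for reading in [20.3, 19.8, 27.5, 27.9, 28.1]:
        if critic(world_model, reading):
            # Critic wins: revise the world-model toward the evidence.
            world_model = 0.5 * world_model + 0.5 * reading
            print(f"anomaly! updating model -> {world_model:.1f}")
        else:
            # Yes-man wins: keep the model and explain the data away.
            print(yes_man(world_model, reading))

    # With the critic disabled (as in anosognosia), every reading, however
    # far off, would just get confabulated into the old model.
    ```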

    What implications does that have for copyright law? I don’t know. Real neurons in a petri dish have already been trained to play games like DOOM and to control the yoke of a simulated airplane. If they were instead trained to somehow draw pictures, what would the legal implications of that be?

    There’s a belief that laws and political systems are derived from some sort of deep philosophical insight, but I think most of the time they’re really just whatever works in practice. So, what I’m trying to say is that we can just agree that what OpenAI does is bad and should be illegal without having to come up with a moral imperative that forces us to ban it.

  • While I agree that it’s somewhat bad that there is no distinction between lossless and lossy jxl in the file extension, I think it’s really not a big deal compared to the present situation with jpg/png.

    The reason is that if you download a png file you have no idea whether it’s been converted from a jpg, whether it’s a screenshot of a jpg, or whether it’s been subjected to lossy re-encoding by a tool or a website’s upload process.

    The only thing you can really do to check whether the file you’ve downloaded has suffered encoding loss is to run an image search on it and see if there are any better-quality versions out there. You’d do exactly the same thing with a jxl file.
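
    For what it’s worth, here’s a rough sketch of how you might check the jxl side from a script. It assumes libjxl’s jxlinfo tool is installed and that its summary output mentions “lossless” for losslessly encoded files; the exact wording varies between libjxl versions, so treat the string match as an assumption rather than a guarantee.

    ```python
    # Rough sketch: ask libjxl's jxlinfo tool whether a .jxl file looks lossless.
    # Assumes jxlinfo is on PATH and that its output mentions "lossless" for
    # losslessly encoded images (wording may differ across libjxl versions).
    import subprocess
    import sys

    def jxl_seems_lossless(path: str) -> bool:
        result = subprocess.run(
            ["jxlinfo", path],
            capture_output=True,
            text=True,
            check=True,
        )
        return "lossless" in result.stdout.lower()

    if __name__ == "__main__":
        path = sys.argv[1]
        verdict = "looks lossless" if jxl_seems_lossless(path) else "looks lossy"
        print(f"{path}: {verdict}")
    ```

    Of course, even that only tells you how this particular file was encoded, not whether the pixels already went through a lossy step somewhere upstream, which is exactly the problem with png described above.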