A radio station in Poland fired its on-air talent and brought in A.I.-generated presenters. An outcry over a purported chat with a Nobel laureate quickly ended that experiment.
There’s this extremely cringe “museum” that OpenAI effectively paid for where they have all these AI exhibits, and one of them involves a phone you can pick up and talk to an AI-generated Mr Rogers. This was done without the knowledge or consent of Fred Rogers’s widow or family. They took his voice and his words, contorted and strung them up with software, and made them dance.
The man who spent decades teaching and entertaining children with puppets has now been turned into one, without his consent.
The woman behind this place goes around trying to sell AI to museum professionals in the form of seminars and such. She had the audacity to say “When I’m feeling down, I just pick up the phone and let Mr Rogers cheer me up” to a room full of museum professionals whose entire job is to honestly interpret and represent history and the dead, and to never, ever put words in their mouths.
She got chewed apart in the Q&A. It was glorious.
Thanks for sharing this moment about the Q&A.
The lack of self-awareness about how ghoulish, and how close to grave robbing, these ‘reanimations’ are is concerning. Glad to hear that museum professionals are not entertaining it for a moment.
Would be hilarious if they took her ideas and made a small AI exhibit section, but the first thing you read or watch is an explanation that “AI” is a marketing buzzword, followed by an explanation of LLMs and machine learning.
How very black-mirror of them…
AI is a lie.
It’s not a lie, it’s just not what most people think it is. There’s a lot of ignorance and a lot of lies about AI.
Was it Google who claimed 25% of their production code last year was written by AI? Microsoft? Anyway, I’m going to call bullshit on that right now. Or they require a curious amount of bullshit code to run their business.
To be fair to all those people who misunderstand it, they are marketing it as Artificial Intelligence, which it isn’t. So one could argue it is in fact a lie, as most marketing seems to be these days. It’s difficult for us humans to see the difference between intelligence and an “alright prediction of what might come next”. Such as when we struggle to tell the difference between the truth and a lie someone told us. It can be deceiving.
Since marketers have bastardized the term, and we’ve begun using AGI in place of the old meaning, confusion is only going to get worse until existing LLMs become somewhat boring, and marketing latches onto some other trend.
With that said, I find the utility of this thing we now call AI to be pretty useful for my own needs, but that’s not stopping people from trying to fit this square shaped solution into circle shaped holes.
trying to fit this square shaped solution into circle shaped holes
This exactly. AI is great for certain things. What everyone tries to use it for, like the bullshit in this article, is not great.
In fairness, about 50% of my code by lines is written by AI these days, and I don’t even have it linked into my code base. That claim isn’t ridiculous.
Now, of that 50%, 88% is long, repetitive crap that I could easily write but find mentally draining. Another 10% is simple stuff I would normally copy-paste from elsewhere because I forgot the exact syntax (and don’t remember where I used it last), plus things I don’t want to do myself, like restyling a page. The last 2% is me giving it a shot at business logic for shits and giggles; occasionally I’ll try to coach it through the solution, but usually I just grab bits and pieces and rewrite it myself.
Granted, this is the easiest, simplest, most repetitive code, but it’s still a godsend. Now, can AI write the other 50%? With a proper setup where it ingests the code base into a vector store, it might get up to 75%. If I were willing to coach it through my tasks carefully (taking more time than the tasks would take me), I could probably get it to 85% or 90%. But that last 10%? It just can’t. It’s not even close.
It’s not taking my job without a paradigm-shifting breakthrough or two on the scale of “Attention Is All You Need”. Even then, it only works if you write your prompts like code: if you don’t understand how to use it, and don’t understand the code well enough to communicate the goal explicitly and unambiguously, you’re not going to be able to drive it where you want it to go.
To put it another way, you can build 90% of the system in 10% of the time it takes to complete the last 10%
I have access to AI integrated with my IDE. It mostly guesses at the line I’m going to write. It probably gets it right 50% of the time.
It also very, very often suggests stuff that works but isn’t very good. Like, it offered some convoluted suggestion for adding audit fields to Firebase. Ultimately it did suggest the solution I went with, but only after starting down the road of stupid ideas.
Like, if your code base is pretty good and you just need to tweak stuff that’s already good enough, that’s one thing. But I frequently look at our code base and wonder if it was implemented by anyone who really knows Java at all.
I suppose it might be fair to assume a huge technology company would have their shit together, but technically I work for a huge tech company… just not the same core business. Tech enough that we have a whole mess of internal AI tooling to create AIs for specific things.
We can create an AI agent, but we can’t follow simple fucking REST standards.
Anyway it’s hard to quantify, but I get less mileage out of integrated AI tools than I do bouncing ideas off ChatGPT.
I think that’s fair.
I don’t have AI integration in my IDE, mostly by choice. If I pushed for it I could make it happen, but I just don’t think that’s a good idea at this point.
AI can be a crutch, one that limits you to the level of a baby developer. If you can’t effortlessly understand what it gives you, frankly, you shouldn’t be using it.
Bouncing ideas off ChatGPT? It sounds like you’ve got the right idea; your reaction sounds correct to me. You should never, ever trust it. You must only use it, and that’s the tone I get from your post.
It is a tool; you are a programmer. You exploit tools, you do not trust any tool. You are the one who turns ideas into actions. Never forget that, and you can use this new tool anywhere it makes your life easier.
You could say it’s a L.A.I.
Technically a checkers program is also AI if you check the dictionary definition. Many will say it’s not “real AI” as it doesn’t have general intelligence but it’s still AI. Snake oil salesmen love that.
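The “checkers counts as AI” point is easy to demonstrate: classic game-playing programs are just exhaustive search over future moves, no learning or neural networks involved, and that’s textbook AI. A minimal sketch of the idea, using Nim instead of checkers to keep it tiny (the function names here are my own, not from any library):

```python
# Minimax search for Nim: players alternately take 1-3 sticks;
# whoever takes the last stick wins. This is "good old-fashioned AI":
# pure game-tree search, the same family of techniques as a checkers program.

def minimax(sticks, maximizing):
    """Best achievable score (+1 win, -1 loss) from the maximizer's view."""
    if sticks == 0:
        # The previous player took the last stick and won,
        # so whoever is now to move has already lost.
        return -1 if maximizing else 1
    scores = [minimax(sticks - take, not maximizing)
              for take in (1, 2, 3) if take <= sticks]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    """Pick the take (1-3) with the best outcome for the current player."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, maximizing=False))
```

With 5, 6, or 7 sticks the search finds the winning move (leave the opponent a multiple of 4); with 4 sticks every move loses, and the search reports exactly that. No “intelligence” anywhere, yet it plays perfectly, which is why the dictionary definition of AI is so easy for salesmen to stretch.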
When I see this kind of thing, I think, “Screw that. I want to listen to real people.” But then I wonder if that’s just because I’m Gen X shaking my fist at a cloud, and whether in the future this will become normalized or even demanded.
Have you ever called a place and got the annoying automated answering voice? Have you ever sent an email to someone and got a boilerplate response? How did that make you feel?
Words aren’t math. They are how humans communicate. When you read/hear them, knowing they didn’t come from a human, they’re hollow.
It’s the facade of communication, because you’re not actually communicating to a human being. You’re using voice commands to control a computer. Asking an AI a question and getting a response is functionally no different than entering 2+2 in a calculator and getting 4 when you hit =.
If we get to a point where humanity no longer recognizes or cares about that difference, we’ll be in an extremely dark place.