Sutskever, who also co-founded OpenAI and leads its researchers, was instrumental in the ousting of Altman this week, according to multiple sources. His role in the coup suggests a power struggle between the research and product sides of the company, the sources say.
I know very little of the situation (as does everyone else not directly involved), but from experience, when the people making a thing are saying one thing and the people selling the thing (and thus running the show, for some reason) insist everything is just fine, it doesn't bode well for the final product…which in this case is the creation of sentient artificial life with unknown future ramifications…
This whole thing reads like the precursor to The Terminator.
November 17th, 2023. Sam Altman, CEO of OpenAI, is fired amid growing concerns over the safety and integrity of the ChatGPT program.
November 18th, 2023. Several key developers of ChatGPT resign in solidarity.
November 19th, 2023. Sam Altman announces a new startup called Cyberdyne, with a revolutionary new AI called Skynet.
In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 2026. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
A far more likely scenario is that they have been overstating what the software can do and how much room for progress remains with current methods.
AI has blown up so fast, with so much hype, that I'm very skeptical. I've seen what it can do, and it's impressive compared to past machine learning algorithms. But it does play on the human tendency to anthropomorphize things.
From what I understand, he was fired by the company's non-profit board, and it's the investors and money people who want him back. It sounds like the opposite: the people making it are becoming concerned about what is about to start happening with this tech.
Experts from different companies have been predicting AGI within a decade and saying that all the current issues seem solvable.
I've not been super stoked on AI, specifically because of my track record using it. Maybe it's my use case (primarily technical/programming/CLI questions that I haven't been able to answer myself), or maybe my prompts are not suited for AI assistance, but my dozens of interactions with the various AI bots (Bard, Bing, GPT-3/3.5) have been disappointing to say the least. I've never gotten a correct answer, I'm rarely given correct syntax, and it frequently just repeats answers I've already told it are incorrect and/or just don't work.
AI has been nothing more than a disappointment to me.
Skynet fights back.
Nice to know where we are on the timeline of destroying ourselves. Unfortunately, we have no John Connor.