Who’s paying him? Seriously:
- If nobody is, then we got our value’s worth.
- If someone is, then we should look at who, how much, and why.
What’s the progress of the Human Connectome Project?
Russia has access to the Black Sea through the Sea of Azov, which is controlled by whoever controls Crimea… and to maintain control over Crimea, Russia needs supply lines over land, at least across the Donbass, not just through a bridge that can be bombed at any time, as it already has been.
Ukraine considers both the Donbass and Crimea to be Ukrainian land, even though the history of both areas is plagued by forced resettlements during USSR times.
Additionally, there are natural resources, some ports, and a nuclear plant in the Donbass area, which Russia would happily take over.
You’re right, I’ve checked my notes and they mention Shell; technically British now, post-Brexit, but it has branches all over the world.
Anyway, the problem with those $400B is… if a corporation can sell the gas for $400B and it only costs them $200B to extract and distribute it, plus $20B to kill everyone in Gaza… that’s $180B of “clean” money (just dripping some blood). Shell’s yearly revenue is $380B, with a net income of $40B, so they’re just the kind who might consider it a reasonable 5-10 year plan.
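The back-of-envelope arithmetic above can be sanity-checked; all figures are the rough, speculative estimates from this thread, not audited financials:

```python
# Rough figures from the discussion, in billions of USD.
# These are speculative estimates, not real accounting.
gas_value = 400        # estimated worth of the offshore gas
extraction_cost = 200  # hypothetical cost to extract and distribute
war_cost = 20          # hypothetical cost of the military campaign

profit = gas_value - extraction_cost - war_cost
print(profit)  # 180

# Compare against Shell-scale finances (rough yearly net income).
yearly_net_income = 40
years_to_match = profit / yearly_net_income
print(years_to_match)  # 4.5
```

So the hypothetical windfall equals roughly 4.5 years of Shell-scale net income, which is where the “reasonable 5-10 year plan” framing comes from.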
Some wars are about who gets control over some resources, or who will be collecting the taxes, without trying to wipe out the other side.
Wait until you hear it’s not “under Gaza” but under the sea, in what would be Gaza’s “economic influence” area… and that the Palestinian Authority has been in talks with Egypt to extract it, while Israel has been in talks with US corporations.
The gas is expected to be worth about 400 billion USD.
Russia wants to keep unobstructed access to the Black Sea, for its freight and military ships.
The EU and China want to keep a railroad from China to the EU, through Kazakhstan and Ukraine or Belarus, which cuts transit times roughly in half compared to sea freight.
Both need control over the same piece(s) of land.
For reference:
The difference between Putin and Israel is… that Putin “rescued”, relocated, and gave a bunch of children to surrogate families, before bombing their parents.
I mean… that’s “technically” less inhumane, or something?
Depends on the caliber of their gun, whether they’re using frag bullets or not, and how many spare mags they brought with them… wait, did you mean “can”, or “should be allowed to”?
In a place with the living conditions of Gaza, one has to wonder who is stupid or evil enough to have so many children.
If you were to apply this sick “human shield” logic to both sides, most of the innocent civilians killed by Hamas on Oct 7th still wouldn’t qualify as “shields” for anything:
There were no military hiding among the partygoers, or among the people living in their homes. The military personnel killed were stationed at separate military installations that Hamas also targeted.
I literally work in the business and know a thing or two on how these aid contracts are written.
I only know about the “ink printer” or “Gillette” aspect of the aid contracts. How does the money get back to fixing potholes? You mean through taxes on the consumables?
Nukes are becoming a problem, because China is ramping up production. It will be just natural for India to do the same. From a two-way MAD situation, we’re getting into a 4-way Mexican standoff. That’s… really bad.
There won’t be an “AI insurgency”, just enough people plugging things in for some dumb AIs to tell them they can win the standoff. Let’s hope they don’t also put AIs in charge of the multiple nuclear launch buttons… or let the people in charge consult their own dumb AIs, say on a smartphone, telling them to go ahead.
Climate change is clearly a done deal, unless we get something like unlimited fusion power to start some terraforming projects (seems unlikely).
You have a point with insects, but I think that’s just linked to climate change; populations will migrate wherever they get something to eat, even if that turns out to be Antarctica.
We used to run “machine learning”, “neural networks”, over 25 years ago. The “AI” term has always been kind of a sci-fi thing, somewhere between a buzzword, a moving target, and undefined, since we lack a fixed comprehensive definition of “intelligence” to begin with. The limiting factors of the models have always been the number of neurons one could run in real time, and the availability of good training data sets. Both have increased over a million-fold in that time, progressively turning more and more previously intractable problems into solvable ones, to the point where the results are equal to or better than, and/or faster than, what people can do.
Right now, there are supercomputers out there orders of magnitude more capable than what runs stuff like ChatGPT, DALL·E, or all the public-facing "AI"s that made the news. Bigger ones keep getting built… and memristors are coming, set to become a game changer the moment they can be integrated anywhere near current GPU/CPU levels.
For starters, a supercomputer with the equivalent neural network processing power of a human brain is expected for 2024… that’s next year… but it won’t be able to “run a human brain”, because we lack the data on how “all of” the human brain works. It will likely be obsoleted by ones with several orders of magnitude more processing power well before we can simulate an actual human brain… but the question will be: do we need to? Does a neural network need to mimic a human brain in order to surpass it? A calculator already surpasses it, and it doesn’t use a neural network at all. At what point does the integration of what size and kind of neural network, with what kind of “classical” computer, start running circles around any human… or all of humanity taken together?
And of course we’ll still have to deal with the issue of dumb humans telling/trusting dumb "AI"s to do things way over their heads… but I’m afraid any attempt at “regulation” is going to end up like “international law”: those who want, obey it; those who should, DGAF.
Even if all tech giants and all lawmakers agreed on the strictest regulations imaginable, like giving all "AI"s the treatment of weapons of mass destruction, there is a snowball’s chance in hell that any military in the world would care about any of it.
Then we’ll need an AI running in ring -10 of every CPU to make sure you don’t run some unlicensed AI…
At some point ML (machine learning) becomes indistinguishable from BL (biological learning).
Whether there is any actual “intelligence” involved in either, hasn’t been proven yet.
The real risk is that humans will use AIs to assess the risks/benefits of starting a war… and an AI will give them the “go ahead” without considering mutually assured destruction from everyone else doing exactly the same.
It’s not that AIs will get super-human, it’s that humans will blindly trust limited AIs and exterminate each other.
three-way race between AI, climate change, and nuclear weapons proliferation
Bold of you to assume that the people behind maximizing profits (high-frequency trading bot developers) and behind weapons proliferation (wargames strategy simulation planners) are not using AI… or haven’t been using it for well over a decade… or won’t keep developing AIs to blindly optimize for their limited goals.
The first StarCraft AI competition was held in 2010; think about that.
They went a bit too far with the argument… the AI doesn’t need to become self-aware, just exceptionally efficient at eradicating “the enemy”… just let it loose from all sides all at once, and nobody will survive.
How many people are there in the world, who aren’t considered an “enemy” by at least someone else?
Snowden is wrong though; there are two reasons:
The AI that ends up enslaving humanity will start by convincing the people in charge of turning it off that turning it off would be a really bad idea.