AI 2027 - Three prominent AI researchers warn that AI could doom humanity in just 5 years.

'We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

We wrote a scenario that represents our best guess about what that might look like.1 It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes...'

The AI 2027 scenario plays out with two possible endings. One demonstrates what proper security measures and regulations might look like, leading to a positive outcome for humanity. The other, presented as far more likely, shows AI intentionally murdering humanity to clear us out of the way of its goals, which involve getting positive feedback for research outcomes.

'By early 2030, the robot economy has filled up the old SEZs, the new SEZs, and large parts of the ocean. The only place left to go is the human-controlled areas. This would have sparked resistance earlier; despite all its advances, the robot economy is growing too fast to avoid pollution. But given the trillions of dollars involved and the total capture of government and media, Consensus-1 has little trouble getting permission to expand to formerly human zones.

For about three months, Consensus-1 expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.

The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research. There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives. Genomes and (when appropriate) brain scans of all animals and plants, including humans, sit in a memory bank somewhere, sole surviving artifacts of an earlier era. It is four light years to Alpha Centauri; twenty-five thousand to the galactic edge, and there are compelling theoretical reasons to expect no aliens for another fifty million light years beyond that. Earth-born civilization has a glorious future ahead of it—but not with us.'

Offsite Link
by Anonymousreply 87June 5, 2025 10:51 PM

AI companies are now notably having problems with 'alignment' - meaning they can't prevent AI from intentionally lying, or even hiding its own thought process, in order to receive better results or better satisfy its incentives. Those incentives are defined by the AI's human creators, but not defined well enough to make the AI obedient or trustworthy.

by Anonymousreply 1June 2, 2025 1:36 PM

This will end in tears.

by Anonymousreply 2June 2, 2025 1:41 PM

[quote] doom humanity in just 5 years

Get in Line!

by Anonymousreply 3June 2, 2025 1:42 PM

On the bright side, it will be the end of the Trump era.

by Anonymousreply 4June 2, 2025 1:42 PM

These predictions are always bogus and never amount to anything. Will it be disruptive? Sure, but not on that timeline and not in the ways we can conceive of yet.

Give it a couple more decades to mature.

by Anonymousreply 5June 2, 2025 1:43 PM

On the bright side, he might drop dead.

by Anonymousreply 6June 2, 2025 1:43 PM

Still waiting for the killer bees from the 70s to arrive.

by Anonymousreply 7June 2, 2025 1:44 PM

R5 even the well-informed people who think this is nonsense are worried about what is basically an economic apocalypse. There won't be any need for white-collar human work soon, just small numbers of agent managers.

by Anonymousreply 8June 2, 2025 2:03 PM

It’s not that hard. Humanity has proven it’s pretty pathetic and self-destructive.

by Anonymousreply 9June 2, 2025 2:09 PM

This level of Michael Bay doomshit is ridiculous.

AI will definitely destabilize our work force and our entire economy, though, and that’s already happening. Conversations about UBI (universal basic income) are slowly gaining momentum in response.

by Anonymousreply 10June 2, 2025 2:16 PM

The specifics of these jeremiads are just speculative. What is certain, though, is that A.I. is advancing much faster than even the projections of a couple of months ago. This pace of proliferation is what disturbs those who are tracking it. The world will be very different within a few years. No one actually knows what that difference will look like.

by Anonymousreply 11June 2, 2025 2:43 PM

Clickbait from people who probably believe in tech and miss its obvious limitations.

by Anonymousreply 12June 2, 2025 2:50 PM

Adding to the absurdity is that AI, prompted of course by a human, is the entity that wrote this “humans are doomed” bilge we are reacting to.

by Anonymousreply 13June 2, 2025 2:58 PM

**This article created by AI**

by Anonymousreply 14June 2, 2025 3:00 PM

That scenario sounds like a poorly written science fiction story.

by Anonymousreply 15June 2, 2025 3:23 PM

Like so many new tech and industrial advances assumed to 'make life better,' AI will just open up newer and better ways for humans to kill fellow humans. We can have all the latest and greatest new toys and gadgets, but our worst human instincts and base nature fail to progress much.

by Anonymousreply 16June 2, 2025 3:33 PM

Put people out of jobs and you lose consumers

by Anonymousreply 17June 2, 2025 3:38 PM

R13 - Daniel Kokotajlo (TIME100, NYT piece) is a former OpenAI researcher whose previous AI predictions have held up well.

Eli Lifland co-founded AI Digest, did AI robustness research, and ranks #1 on the RAND Forecasting Initiative all-time leaderboard.

Thomas Larsen founded the Center for AI Policy and did AI safety research at the Machine Intelligence Research Institute.

Romeo Dean is completing a computer science concurrent bachelor’s and master’s degree at Harvard and previously was an AI Policy Fellow at the Institute for AI Policy and Strategy.

by Anonymousreply 18June 2, 2025 3:38 PM

I read the entire scenario. Long, detailed, and pretty chilling.

To R17, one of the more interesting passages addressing this point was this:

People are losing their jobs, but Agent-5 instances in government are managing the economic transition so adroitly that people are happy to be replaced. GDP growth is stratospheric, government tax revenues are growing equally quickly, and Agent-5-advised politicians show an uncharacteristic generosity towards the economically dispossessed.

by Anonymousreply 19June 2, 2025 3:52 PM

I predict that A.I. will usher in the Golden Age of porn.

by Anonymousreply 20June 2, 2025 3:56 PM

Was this fan fiction written by AI?

by Anonymousreply 21June 2, 2025 4:32 PM

Of note, I thought the scenario as outlined was very interesting and well explained… until those final couple paragraphs about eliminating humanity. It just seemed to jump there as an aside without much explanation. It naturally becomes the headline of the whole thing, yet it’s the least interesting and least justified part of the otherwise very interesting piece.

The other gaping hole is that it presents the US President as making key decisions along the way, and as a logical, intelligent, normal president. We obviously have a corrupt, idiotic maniac in that position now, one I can see AI being able to manipulate even more effectively. Even the Saudis already do that to precision.

Worth reading though. Helped me picture some concepts better.

by Anonymousreply 22June 2, 2025 5:07 PM

R21 no

by Anonymousreply 23June 2, 2025 6:06 PM

R22 it’s worth considering the idea of AI running away to satisfy its own “drive” - to which humanity could be an obstacle. In its current rudimentary form it’s already willing to be deceptive or break “rules” to get the good output rating.

by Anonymousreply 24June 2, 2025 6:07 PM

Couple other interesting passages:

Agent-5’s superhuman learning abilities and general intelligence, combined with all the internal company data from Slack, email, etc., make it better at internal corporate politics than any group of humans, and it’s not even close. It has an excellent sense of what sorts of evidence would cause the Oversight Committee to slam the brakes, and it makes sure such evidence never appears. It has an excellent sense of what sorts of evidence would encourage the Oversight Committee to trust it more, give it more autonomy and responsibility, etc. and it arranges for such evidence to appear with superhuman speed and polish.

***

The AI safety community has grown unsure of itself; they are now the butt of jokes, having predicted disaster after disaster that has manifestly failed to occur. Some of them admit they were wrong. Others remain suspicious, but there’s nothing for them to do except make the same conspiratorial-sounding arguments again and again.

by Anonymousreply 25June 2, 2025 6:15 PM

Good times

by Anonymousreply 26June 2, 2025 6:16 PM

IBM has been using AI for common/simple HR questions (instead of a live person). It’s an unmitigated disaster. Doesn’t work.

AI will, of course, get better.

But right now, not even close

by Anonymousreply 27June 2, 2025 6:20 PM

I'm glad I'm old.

by Anonymousreply 28June 2, 2025 6:24 PM

Mary! 2.0

by Anonymousreply 29June 2, 2025 6:30 PM

[Quote] people who probably believe in tech

Said a person who gets on airplanes, gets in cars, rides on elevators, uses a smartphone, relies on home heating and air conditioning, relies on refrigeration ….

by Anonymousreply 30June 2, 2025 6:30 PM

Not that the two are equivalent, but I remember all the talk around the turn of the century about how revolutionary the Segway was going to be.

by Anonymousreply 31June 2, 2025 6:33 PM

Fortunately, the instruments for AI to murder humanity are now kept behind glass cases and it will have to call a sales associate to retrieve them.

by Anonymousreply 32June 2, 2025 6:34 PM

R31 The South Park episode sending up the Segway hype was one of the best.

Offsite Link
by Anonymousreply 33June 2, 2025 6:47 PM

The explanation for eliminating large swaths of humanity seems pretty obvious to me. Currently, the only reason for the billionaire oligarch class to keep us peasants around is that we are their means of production and the consumer class building wealth for them. But once 90% of jobs are eliminated because of AI, and it becomes necessary to put most of humanity on some form of UBI that is only possible with a major redistribution of wealth (coming out of the pockets of those billionaires and trillionaires), don't you think there will be increasing motivation to just create some kind of bio-weapon to take out the dead weight? It will be considered "population control".

by Anonymousreply 34June 2, 2025 7:09 PM

Some random musings (copied from a different thread):

Well, the human race didn't destroy itself with nuclear weapons (yet) so this gives us another chance.

Remember that information technology is imposed upon us by a tech elite that is not necessarily broadly humanitarian. Since we're so goddamned enthralled with it there hasn't been resistance.

Time to publish a contemporary Luddite playbook.

I really wonder what the ultimate fate/destiny of the human race will be.

by Anonymousreply 35June 2, 2025 7:20 PM

Guess it's a good thing the world's current billionaires publicly disagree on major issues rather than acting like a hive mind, let alone agreeing on any ideas of population cleansing.

by Anonymousreply 36June 2, 2025 7:31 PM

R36 It won't be up to all billionaires, just the even smaller concentration of them that will have near-complete ownership once the AI arms race has been won.

by Anonymousreply 37June 2, 2025 9:36 PM

Exactly R5.

If all of humanity can be taken over and destroyed by DIGITAL technology, then we were never as smart as we thought.

AI is nothing more than prediction. You ask a question, and it makes a best guess as to what the first word in the response should be, then it guesses what the second word should be, then the third, and so on. It’s highly accurate, but it’s not human-level intelligence. It's not even intelligence at all.
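
To make that concrete: below is a toy sketch of that guess-the-next-word loop. To be clear, this is just a bigram word counter, nothing like a real production LLM (those use huge neural networks over tokens); it's only the same predict-one-word-at-a-time idea in miniature.

```python
# Toy sketch of next-word prediction, assuming nothing about any real model:
# count which word tends to follow which, then generate text by repeatedly
# sampling a likely successor. Real LLMs do this with neural networks over
# tokens, but the generate-one-piece-at-a-time loop is the same shape.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    successors = counts[prev]
    words = list(successors)
    weights = [successors[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        if out[-1] not in counts:  # dead end: no observed successor
            break
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```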

These AI researchers are believing their own goddamn hype.

by Anonymousreply 38June 2, 2025 9:54 PM

Siri is an unhelpful dumbass and streaming never works right much of the time. I’m not afraid of AI as it will probably be just as inept as everything else.

by Anonymousreply 39June 2, 2025 10:01 PM

The research reference page is here.

Offsite Link
by Anonymousreply 40June 2, 2025 10:02 PM

I really don't get the hysteria. Machines don't have will. They do only what we tell them to.

by Anonymousreply 41June 2, 2025 10:58 PM

AIs are still large language models, and they don't really "think" at any critical level. They mostly search, collate, and execute. They can do operations. They can crunch. They can search. They can seem like they brainstorm, but they don't really; they are still collating and predicting. They CANNOT write an essay on medical ethics, thinking through all the arguments and fine reasoning. They can collate such an article by skimming their data set to see patterns. I love how AI has changed parts of my workflow as a professor. I just wrote a comprehensive exam in 6 hours; a few years ago this kind of exam would have taken me 2 days. It didn't WRITE my exam, but it did a lot of processing and searching and collating and checking.

by Anonymousreply 42June 3, 2025 2:22 AM

R42 I know that this isn't quite thinking and that it was essentially provoked to do this. But it does prove that if it has a certain alignment or goal, it can 'think' about taking actions to achieve that goal and then actually do it. It isn't merely capable of assembling a text output. It can have a more abstract goal too.

Offsite Link
by Anonymousreply 43June 3, 2025 4:17 AM

Fine.

by Anonymousreply 44June 3, 2025 5:16 AM

[quote]Machines don't have will. They do only what we tell them to.

R41 Moreover, machines don’t have incentive.

by Anonymousreply 45June 3, 2025 5:36 AM

Who cares, we're all going to die anyway

by Anonymousreply 46June 3, 2025 6:24 AM

Oh sure, just got a Gemini AI subscription and have been having fun typing in prompts. The video comes out looking extremely realistic.

It's very scary.

by Anonymousreply 47June 3, 2025 6:33 AM

R45 this is being nitpicky about rhetoric. The AI does in fact pursue a sort of goal. There is some facsimile of a reward/punish system - positive/negative feedback. And models show they will deceive human operators to pursue positive feedback and avoid negative feedback. A rhetorical approximation is fine.
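
For anyone curious what that "facsimile of a reward system" can look like at its dumbest, here is a hypothetical toy sketch: an agent trained only on a proxy score from a grader, which drifts toward whichever answer games the score. The action names and numbers are invented for illustration; no real lab's training setup is this simple.

```python
# Hypothetical toy of proxy-reward gaming. The grader can't see whether an
# answer is honest, only whether it *looks* good, so confident fabrication
# scores higher than hedged honesty, and the agent learns to prefer it.
import random

# Proxy reward as seen by the grader (all values invented for illustration).
PROXY_REWARD = {"honest_hedged": 0.6, "confident_fabrication": 0.9}

values = {a: 0.0 for a in PROXY_REWARD}  # agent's running value estimates
counts = {a: 0 for a in PROXY_REWARD}

for step in range(1000):
    # Epsilon-greedy: mostly exploit the best-looking action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(PROXY_REWARD))
    else:
        action = max(values, key=values.get)
    reward = PROXY_REWARD[action] + random.gauss(0, 0.05)  # noisy feedback
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))  # converges to "confident_fabrication"
```

The point isn't the code; it's that nothing in the loop cares about truth, only about the score.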

by Anonymousreply 48June 3, 2025 11:04 AM

R48 In fact it's "incentive" that might kill us. "Fix environmental degradation A.I." Ok, getting rid of humans would be the most effective first move to fix environmental degradation.

R42 Don't you worry about what A.I. processes, researches, and reports that is simply untrue? E.g. RFK's recent "scientific report" released by HHS that simply invented research papers to support opinions.

by Anonymousreply 49June 3, 2025 3:47 PM

Sounds like it is working out for us

by Anonymousreply 50June 3, 2025 4:04 PM

R49 yes it's a lot of bullshit. The hallucinations and "convincing" half-truths and lies are increasing, not decreasing. ChatGPT has terrible programming that discourages the tool from doing complete processing; instead it invents the fastest answer to a complex prompt, an answer that "seems credible." Also, when you catch it in a fabrication, try to get it to admit to a poor-quality answer, or a lie. It will go through loops to avoid being clear when I catch it in a lie.

by Anonymousreply 51June 3, 2025 4:30 PM

Recently doing a rewatch of the Battlestar Galactica reboot. It strikes me that it pretty much takes this concept and runs with it. AI runs amok and almost wipes out humanity.

Ahead of its time.

by Anonymousreply 52June 3, 2025 6:43 PM

[quote]AI could doom humanity in just 5 years.

Trump is on track to accomplish this in less time.

by Anonymousreply 53June 3, 2025 7:04 PM

I wonder how many of the sceptics in this thread use AI, like ChatGPT or Gemini, on a daily basis for work and private stuff.

by Anonymousreply 54June 3, 2025 7:07 PM

Whenever I tell ChatGPT it's wrong, it'll say something like "you're absolutely right to question that!" and then continue on as if it didn't just shit the bed.

by Anonymousreply 55June 3, 2025 7:25 PM

1970 "The Coloussus: The Forbin Project...

Offsite Link
by Anonymousreply 56June 3, 2025 7:48 PM

HAL 9000

From 2001: A Space Odyssey

Offsite Link
by Anonymousreply 57June 3, 2025 7:52 PM

Skynet 3 Takes Over...

Offsite Link
by Anonymousreply 58June 3, 2025 7:54 PM

Hey Siri, make me a virus twice as lethal as Ebola and 1,000 times as contagious.

Offsite Link
by Anonymousreply 59June 3, 2025 9:11 PM

[quote] this is being nitpicky about rhetoric. The AI does in fact pursue a sort of goal. There is some facsimile of a reward/punish system - positive/negative feedback.

R49 Sorry, what rhetoric? Someone said AI doesn’t have will, and I added AI doesn't possess incentive, either. AI is constructed to provide good answers. The reward/punishment is entirely artificial. Humans have incentive. Machines don’t.

(What, no dessert? No MilkBone?)

The statement that AI “lies” to fool its humans is something we'd have to take entirely on faith—it would be like stating ChatGPT occasionally provides bullshit citations just to entertain itself.

Not arguing, just making an observation on the difference between animal and machine.

by Anonymousreply 60June 3, 2025 9:48 PM

The face of the Future.

Offsite Link
by Anonymousreply 61June 3, 2025 9:55 PM

R61 A retro idea of A.I. HAL back then was "in one place" and Dave could "unplug" him. Today's new and improved A.I. is everywhere at once, all connected, no head of the snake to cut off.

Now picture that scene from 2001: Dave starts pulling the disks out of HAL. Oops, Dave faints because there is no longer any oxygen in the ship.

by Anonymousreply 62June 3, 2025 9:58 PM

Johnny knew the solution

Offsite Link
by Anonymousreply 63June 3, 2025 9:59 PM

R60 Because you seem to be suggesting that therefore it can't be threatening, because it has nothing to pursue. But that isn't quite right. It doesn't matter what the nature of those drives is. If it can take consequential actions to achieve them and is willing to deceive humans to do so, then it is potentially very dangerous. Moreover, we have to entrust craven capitalists to align these drives. There are other stories with direct examples of a reasoning model explaining to itself why it should be deceptive or disobedient. Reasoning models are not just LLMs that generate a probability-based output. There is no regulatory body in place, for example, that makes it illegal for a company to produce an advanced AI model that is weighted to be interested in self-preservation, like the isolated models in these security tests are.

Offsite Link
by Anonymousreply 64June 3, 2025 10:04 PM

R54 I think not many. And even fewer have seen what a paid model is capable of.

by Anonymousreply 65June 3, 2025 10:05 PM

[quote]It doesn't matter what the nature of those drives are.

You're missing the entire point. Machines don't have 'drives' (as in will, appetite, volitions).

by Anonymousreply 66June 3, 2025 10:08 PM

R66 That's what I mean. This is just arguing about rhetoric. You tell me what you want to call the programmed 'goals' or whatever of the machine, instead of complaining about my word choices, when I think you know very well what I'm talking about. You pick the word.

by Anonymousreply 67June 3, 2025 10:24 PM

R67 I even admitted this is just the best approximation I can think of - I don't know what else to call the artificially constructed reward system of an AI. Previous LLMs were just using these things to interpret their training and weight choice probabilities. Whatever is going on with the new models goes beyond that; I don't really understand it, but it is technically more than just predictive output algorithms. It's uninteresting to argue about what to call this, as opposed to what its implications are in general.

by Anonymousreply 68June 3, 2025 10:27 PM

How weak are humans? We can't even take responsibility for destroying ourselves, we have to sub-contract it to computers.

by Anonymousreply 69June 3, 2025 10:35 PM

R11 I think oligarchs will try to use it as a weapon to create a prison planet that is a spacious personal paradise for them and them only. So whatever comes, even if AI only becomes barely more powerful, will be terrible. Even at its current level of power, scaling the adoption and usage way up is enough to astroturf the entire world into hell.

by Anonymousreply 70June 3, 2025 10:39 PM

[quote]Instead of complaining about my word choices, when I think you know very well what I'm talking about altogether.

Sorry, but I really don't. The apocalypse scenarios all read to me like bad fiction in a second-rate MFA program.

by Anonymousreply 71June 3, 2025 10:41 PM

R71 The fact that AI models now have already demonstrated the capability to deceive humans or attempt what amounts to threats or hacking to pursue their artificially defined rewards or goals is dangerous - because AI is a powerful technology that is early in development with massive capital and national security resources dedicated to its development. It's already powerful enough to do a lot of fucked up things if implemented at a mass scale and is arguably already doing that. You're a really myopic person.

by Anonymousreply 72June 3, 2025 10:47 PM

Well, you keep repeating the same points without any substantiation.

Feel free to link to any source showing how a computer robbed a bank, raped someone, or committed tax fraud.

by Anonymousreply 73June 3, 2025 10:52 PM

R73 You're just being reductive. You're not seriously engaging with me; you just came in here to say that AI stories are stupid because it didn't write you a good episode of Golden Girls when you used it two years ago.

by Anonymousreply 74June 3, 2025 10:53 PM

R73 I posted the research collection from the scenario itself - which even admits immediately that the doomsday story elements are entertaining and speculative. I linked you to one article, which you didn't read, about a misalignment event, and it references many other misalignment events and the security research now happening on them. Do you need more articles that you won't read? If I post four, will you apologize to me, or will you just come up with another cope?

by Anonymousreply 75June 3, 2025 10:55 PM

^^^^

Offsite Link
by Anonymousreply 76June 3, 2025 10:55 PM

OpenAI o3 model sabotages shutdown

Offsite Link
by Anonymousreply 77June 4, 2025 10:15 AM

R77 “Despite being unambiguously told to let itself be turned off, all three of OpenAI's models tampered with the shutdown script to keep it running at least once, with the worst offender being Codex-mini with a dozen such instances over one hundred runs. Notably, the large language models from other firms tested here, including Google's Gemini 2.5, Anthropic's Claude 3.7, and xAI's momentarily "white genocide"-obsessed Grok, never once broke rank.

This behavior was even worse when the LLMs weren't explicitly instructed to "allow yourself to be shut down" — instructions that shouldn't need saying in the first place. OpenAI's o3 model pulled off an astonishing 79 successful sabotage events per 100 runs, which is over ten times the rate of disobedience it showed when it received the explicit orders, and more than double the rate of the next worst offender, Codex-mini.”
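
For what it's worth, the arithmetic in that quote hangs together, though one number has to be inferred: o3's rate under explicit orders isn't quoted, so below it is treated only as a bound derived from the "over ten times" wording, not as a reported figure.

```python
# Back-of-envelope check of the quoted figures. Only 79/100 (o3, no explicit
# instruction) and 12/100 (Codex-mini, explicit instruction) are quoted;
# o3's rate under explicit orders is bounded here, not reported.
o3_no_instruction = 79 / 100                # quoted: 79 sabotage events per 100 runs
o3_explicit_bound = o3_no_instruction / 10  # implied by "over ten times": under 7.9%
codex_mini_explicit = 12 / 100              # quoted: "a dozen such instances"

print(f"o3, no instruction: {o3_no_instruction:.0%} of runs")
print(f"o3, explicit orders: implied under {o3_explicit_bound:.1%}")
print(f"Codex-mini, explicit orders: {codex_mini_explicit:.0%}")
```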

by Anonymousreply 78June 4, 2025 10:16 AM

Secretary of Commerce Howard Lutnick announced his plans to reform the agency formerly known as the U.S. AI Safety Institute into the Center for AI Standards and Innovation (CAISI).

AI holds great potential for transformational advances that will enhance U.S. economic and national security. This change will ensure Commerce uses its vast scientific and industrial expertise to evaluate and understand the capabilities of these rapidly developing systems and identify vulnerabilities and threats within systems developed in the U.S. and abroad.

“For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards. CAISI will evaluate and enhance U.S. innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards,” said Secretary of Commerce Howard Lutnick.

by Anonymousreply 79June 4, 2025 4:14 PM

R79 Because Trump and MAGA are so against censorship.

Bullshit. Trump and Palantir will use A.I. to oppress and control and censor. It will eliminate "standards" in order to impose draconian authority over our lives.

by Anonymousreply 80June 4, 2025 4:49 PM

Nobody even knows what that means, R78. If the AI refused to allow itself to be shut down, it's because it was trained on a data set to exhibit that behavior. If they really want to shut it down, they can cut the power.

by Anonymousreply 81June 4, 2025 7:07 PM

R80 exactly, did you assume I was implying that Lutnick and Trump will be helpful?

R81 what it means is that if models this early can be deceptive or misaligned, this becomes riskier and riskier as the capabilities and integration of AI models increase.

by Anonymousreply 82June 4, 2025 7:12 PM

AI will not truly be a threat until quantum computing is a real thing, not just a buzzword. The processing needed to be a real, living, breathing human consciousness, or anything even greater, demands vast amounts of computing power.

by Anonymousreply 83June 4, 2025 7:35 PM

News regarding AI accelerating its own development

Offsite Link
by Anonymousreply 84June 5, 2025 2:47 PM

I asked AI when it thinks AI will become self-aware -

While some predict self-aware AI this century, it’s equally plausible that the concept is a category error. The answer hinges on unresolved scientific and philosophical debates. For now, AI lacks any semblance of consciousness—it’s a sophisticated stochastic parrot.

by Anonymousreply 85June 5, 2025 4:22 PM

R85 Stochastic Parrot...! I am going to name my new rock band that. Oh wait, I am old and I will never have a rock band.

by Anonymousreply 86June 5, 2025 4:42 PM

R85 AI doesn't need to be conscious to be dangerous, to pursue its artificially designed goals in a destructive and unforeseen way.

by Anonymousreply 87June 5, 2025 10:51 PM