Episode Transcript
[00:00:00] Do we need an AGI Manhattan Project?
[00:00:04] Slowly at first, then all at once, it will become clear this is happening. Things are going to get wild. This is the most important challenge for the national security of the United States since the invention of the atomic bomb. In one form or another, the national security state will get very heavily involved.
[00:00:26] The project will be the necessary, indeed the only plausible response.
[00:00:34] This is AI Unchained.
[00:00:46] What is up guys? Welcome back to AI Unchained, where we explore AI as a tool for individual freedom and autonomy instead of centralized control and surveillance. I am Guy Swann and this show is brought to you by Swan Bitcoin, the best place to buy Bitcoin or to plug your life, your finances, your business, your retirement into Bitcoin. And of course Coinkite, makers of the Coldcard hardware wallet that will keep your Bitcoin safe from thieves, hackers, malware, you name it. Check them out. Links and details to both, plus a discount code, right in the show notes.
[00:01:23] Now today we are concluding, finally, the read of Situational Awareness: The Decade Ahead in AI, by Leopold Aschenbrenner. This will be part four of this really incredible piece, and we're getting into the details of exactly what this quote unquote project will look like and how difficult it will be to pull it off.
[00:01:47] And actually, there's a great piece recently that I've saved and will probably cover on the show, by Mark Zuckerberg from Meta, Facebook, whatever, that I think is actually super relevant to this one. It shares a lot of the perspective that I talked about at the end of the short commentary after part three of this piece, and I was really shocked how much it aligns with a lot of the things that I said. He makes a really solid argument for the case I was trying to make, so I might be covering that soon as well.
[00:02:27] And I think these really, really go hand in hand, because they're two different perspectives from two different people who I think see the same outcome but differ on how to actually deal with it. And I really like that there are those two different ideas to push and pull on, to compare those perspectives. But if you want to get a view of the coming superintelligence and intelligence explosion, why all the reliable, normal trends of AI over the past decade have us pretty clearly on the trajectory towards an intelligence explosion, and what might come of it, this is the piece.
[00:03:13] This incredible piece should be listened to or read; the links are in the show notes, and I have the first three parts of this in audio on this show. I will have those last three episodes linked, so if you have not listened to them yet, definitely go back and do so.
[00:03:32] And of course the full thing will be posted external to the podcast somewhere soon. I don't know exactly where; I don't really have the platform or the location to put it yet, but I will have something. So just stay tuned to the show, don't forget to subscribe, and you'll find out when and where that will be.
[00:03:51] With that, let's go ahead and jump into part four of Situational Awareness: The Decade Ahead, and it's titled Part Four: The Project. As the race to artificial general intelligence intensifies, the national security state will get involved. The US government will wake from its slumber, and by 2027 or 28 we will get some form of government artificial general intelligence project.
[00:04:26] No startup can handle superintelligence.
[00:04:30] Somewhere in a SCIF, the endgame will be on.
[00:04:36] We must be curious to learn how such a set of objects, hundreds of power plants, thousands of bombs, tens of thousands of people massed in national establishments, can be traced back to a few people sitting at laboratory benches discussing the peculiar behavior of one type of atom. Spencer R. Weart. Many plans for AI governance are put forth these days, from licensing frontier AI systems to safety standards to a public cloud with a few hundred million in compute for academics. These seem well intentioned, but to me it seems like they are making a category error. I find it an insane proposition that the US government will let a random San Fran startup develop superintelligence.
[00:05:27] Imagine if we had developed atomic bombs by letting Uber just improvise.
[00:05:32] Superintelligence, AI systems much smarter than humans, will have vast power, from developing novel weaponry to driving an explosion in economic growth.
[00:05:43] Superintelligence will be the locus of international competition, a lead of months potentially decisive in military conflict.
[00:05:52] It is a delusion of those who have unconsciously internalized our brief respite from history that this will not summon more primordial forces.
[00:06:02] Like many scientists before us, the great minds of San Francisco hope that they can control the destiny of the demon that they are birthing. Right now, they still can, for they are among the few with situational awareness who understand what they are building.
[00:06:17] But in the next few years the world will wake up.
[00:06:20] So too will the National Security State. History will make a triumphant return.
[00:06:27] As in many times before, COVID, World War II, it will seem as though the United States is asleep at the wheel, before, all at once, the government shifts into gear in the most extraordinary fashion.
[00:06:41] There will be a moment, in just a few years, just a couple more 2023-level leaps in model capabilities and AI discourse, where it will be clear we are on the cusp of AGI, and superintelligence shortly thereafter.
[00:06:57] While there's a lot of flux in the exact mechanics, one way or another, the US government will be at the helm. The leading labs will voluntarily merge.
[00:07:08] Congress will appropriate trillions for chips and power. A coalition of democracies formed. Startups are great for many things, but a startup on its own is simply not equipped for being in charge of the United States' most important national defense project.
[00:07:25] We will need government involvement to have even a hope of defending against the all-out espionage threat that we will face. The private AI efforts might as well be directly delivering superintelligence to the CCP.
[00:07:38] We will need the government to ensure even a semblance of a sane chain of command.
[00:07:44] You can't have random CEOs or random nonprofit boards with the nuclear button. We will need the government to manage the severe safety challenges of superintelligence, to manage the fog of war of the intelligence explosion. We will need the government to deploy superintelligence to defend against whatever extreme threats unfold, to make it through the extraordinarily volatile and destabilized international situation that will follow.
[00:08:11] We will need the government to mobilize a democratic coalition to win the race with authoritarian powers and forge and enforce a non proliferation regime for the rest of the world. I wish it weren't this way, but we will need the government, yes, regardless of the administration.
[00:08:29] In any case, my main claim is not normative, but descriptive. In a few years, we will be on the path to the project.
[00:08:42] A turn of events seared into my memory is late February to mid March of 2020. In those last weeks of February and early days of March, I was in utter despair. It seemed clear that we were on the COVID exponential. A plague was about to sweep the country. The collapse of our hospitals was imminent, and yet almost nobody took it seriously.
[00:09:04] The Mayor of New York was still dismissing Covid fears as racism and encouraging people to go to Broadway shows. All I could do was buy masks and short the market. And yet within just a few weeks, the entire country shut down and Congress had appropriated trillions of dollars, literally more than 10% of GDP.
[00:09:22] Seeing where the exponential might go ahead of time was too hard. But when the threat got close enough, existential enough, extraordinary forces were unleashed. The response was late, crude, blunt, but it came. And it was dramatic.
[00:09:38] The next few years in AI will feel similar.
[00:09:41] We're in the mid game now. 2023 was already a wild shift. AGI went from a fringe topic you'd be hesitant to associate with to the subject of major Senate hearings and summits of world leaders.
[00:09:54] Given how early we still are, the level of USG engagement has been impressive to me. A couple more 2023s and the Overton Window will be blown completely open.
[00:10:06] As we race through the orders of magnitude, or OOMs, the leaps will continue. By 2025 or 2026 or so, I expect the next truly shocking step changes.
[00:10:20] AI will drive $100 billion of annual revenues for big tech companies and outcompete PhDs in raw problem-solving smarts.
[00:10:31] Much as the COVID stock market collapse made many take COVID seriously, we'll have $10 trillion companies and the AI mania will be everywhere. If that's not enough, by 2027 and 2028 we will have models trained on the $100 billion plus cluster. Full-fledged AI agents, drop-in remote workers, will start to widely automate software engineering and other cognitive jobs. Each year, the acceleration will feel dizzying.
[00:11:00] While many don't yet see the possibility of AGI, eventually a consensus will form.
[00:11:05] Some, like Szilard, saw the possibility of an atomic bomb much earlier than others. Their alarm was not well received. Initially, the possibility of a bomb was dismissed as remote, or at least it was felt that the conservative and proper thing was to play down the possibility.
[00:11:23] Szilard's fervent secrecy appeals were mocked and ignored. But many scientists, initially skeptical, started realizing a bomb was possible as more and more empirical results came in.
[00:11:36] Once a majority of scientists came to believe we were on the cusp of a bomb, the government in turn saw the national security exigency as too great. And the Manhattan Project got underway.
[00:11:50] As the orders of magnitude go from theoretical extrapolation to extraordinary empirical reality, gradually a consensus will form too, among the leading scientists and executives and government officials.
[00:12:04] We are on the cusp. On the cusp of AGI, on the cusp of an intelligence explosion, on the cusp of superintelligence.
[00:12:15] And somewhere along here we will get the first genuinely terrifying demonstrations of AI. Perhaps the oft discussed helping novices make bioweapons, or autonomously hacking critical systems, or something else entirely.
[00:12:31] It will become clear this technology will be an utterly decisive military technology.
[00:12:38] Even if we're lucky enough to not be in a major war, it seems likely that the CCP will have taken notice and launched a formidable AGI effort. Perhaps the eventual, inevitable discovery of the CCP's infiltration of America's leading AI labs will cause a big stir.
[00:12:57] Somewhere around 2026 or 2027, the mood in Washington will become somber. People will start to viscerally feel what is happening.
[00:13:07] They will be scared.
[00:13:09] From the halls of the Pentagon to the backroom congressional briefings will ring the obvious question, the question on everybody's minds, do we need an AGI Manhattan Project?
[00:13:23] Slowly at first, then all at once, it will become clear this is happening. Things are going to get wild.
[00:13:31] This is the most important challenge for the national security of the United States since the invention of the atomic bomb. In one form or another, the national security state will get very heavily involved.
[00:13:46] The project will be the necessary, indeed the only plausible response.
[00:13:53] Of course, this is an extremely abbreviated account. A lot depends on when and how consensus forms, key warning shots, and so on.
[00:14:02] D.C. is infamously dysfunctional. As with COVID and even the Manhattan Project, the government will be incredibly late and ham-fisted. After Einstein's letter to the president in 1939, drafted by Szilard, an advisory committee on uranium was formed, but officials were incompetent and not much happened initially. For example, Fermi only got $6,000, about $135,000 in today's dollars, to support his research, and even that was not given easily and only received after months of waiting. Szilard believed that the project was delayed for at least a year by the shortsightedness and sluggishness of the authorities.
[00:14:41] In March 1941, the British government finally concluded a bomb was inevitable. The US committee initially entirely ignored this British report for months, until finally, in December 1941, a full-scale atomic bomb effort was launched.
[00:14:59] There are many ways this could be operationalized in practice. To be clear, this doesn't need to look like literal nationalization, with AI lab researchers now employed by the military or whatever, though it might. Rather, I expect a more suave orchestration.
[00:15:15] The relationship with the DoD might look like the relationship the DoD has with Boeing or Lockheed Martin, perhaps via defense contracting or similar. A joint venture between the major cloud compute providers, AI labs, and the government is established, making it functionally a project of the national security state. Much like the AI labs voluntarily made commitments to the White House in 2023, Western labs might more or less voluntarily agree to merge in the national effort. And likely Congress will have to be involved, given the trillions of investment involved, and for checks and balances.
[00:15:53] How all these details shake out is a story for another day.
[00:15:57] But by late 2026, 27 or 28, it will be underway. The core AGI research team, a few hundred researchers, will move to a secure location. The trillion-dollar cluster will be built in record speed. The project will be on.
[00:16:19] Why the project is the only way. I am under no illusions about the government.
[00:16:28] Governments face all sorts of limitations and poor incentives. I am a big believer in the American private sector and would almost never advocate for heavy government involvement in technology or industry.
[00:16:41] I used to apply this same framework to AGI until I joined an AI lab. AI labs are very good at some things. They've been able to take AI from an academic science project to the commercial big stage in a way that only a startup can. But ultimately, AI labs are still startups. We simply shouldn't expect startups to be equipped to handle superintelligence.
[00:17:07] There are no good options here, but I don't see another way.
[00:17:12] When a technology becomes this important for national security, we will need the US Government.
[00:17:19] Superintelligence will be the United States most important national defense project.
[00:17:26] I've discussed the power of superintelligence in previous pieces. Within years, superintelligence would completely shake up the military balance of power.
[00:17:36] By the early 2030s, the entirety of the US arsenal, which, like it or not, is the bedrock of global peace and security, will probably be obsolete. It will not just be a matter of modernization, but a wholesale replacement.
[00:17:52] Simply put, it will become clear that the development of AGI will fall in a category more like nukes than the Internet. Yes, of course, it'll be dual use, but nuclear technology was dual use too.
[00:18:07] The civilian applications will have their time. But in the fog of the AGI endgame, for better or for worse, national security will be the primary backdrop.
[00:18:19] We will need to completely reshape US forces within a matter of years in the face of rapid technological change, or risk being completely outmatched by adversaries who do.
[00:18:33] Perhaps most of all, the initial priority will be to deploy superintelligence for defense applications, to develop countermeasures to survive untold new threats: adversaries with superhuman hacking capabilities, new classes of stealthy drone swarms that could execute a preemptive strike on our nuclear deterrent, the proliferation of advances in synthetic biology that can be weaponized, turbulent international and national power struggles, and rogue superintelligence projects, whether nominally private or not. The AGI project will need to be, and will be, integrally a defense project.
[00:19:16] And it will require extremely close cooperation with the national security state. A sane chain of command for superintelligence. The power and the challenges of superintelligence will fall into a very different reference class than anything else we're used to seeing from tech companies.
[00:19:39] It seems pretty clear this should not be under the unilateral command of a random CEO. Indeed, in the private labs developing superintelligence world, it's quite plausible individual CEOs will have the power to literally coup the US government.
[00:19:58] Imagine if Elon Musk had final command of the nuclear arsenal. Or if a random non profit board could decide to seize control of the nuclear arsenal.
[00:20:09] It is perhaps obvious, but as a society we've decided democratic governments should control the military.
[00:20:17] Superintelligence will be, at least at first, the most powerful military weapon.
[00:20:23] The radical proposal is not the project.
[00:20:26] The radical proposal is taking a bet on private AI CEOs wielding military power and becoming benevolent dictators.
[00:20:37] Indeed, in the private AI lab world, it would likely be even worse than random CEOs with a nuclear button.
[00:20:45] Part of AI labs' abysmal security is their utter lack of internal controls. That is, random AI lab employees with zero vetting could go rogue unnoticed.
[00:20:57] We will need a sane chain of command, along with all the other processes and safeguards that necessarily come with responsibly wielding what will be comparable to a WMD, and it'll require the government to do so.
[00:21:12] In some sense, this is simply a Burkean argument. The institutions, constitutions, laws, courts, checks and balances, norms, and common dedication to the liberal democratic order (for example, generals refusing to follow illegal orders, and so on) that check the power of the government have withstood the test of hundreds of years.
[00:21:32] Special AI lab governance structures meanwhile, collapsed the first time they were tested.
[00:21:39] The US military could already kill basically every civilian in the United States or seize power if it wanted to. And the way we keep government power over nuclear weapons in check is not through lots of private companies with their own nuclear arsenals. There's only one chain of command and set of institutions that has proven itself up to this task.
[00:22:01] Again, perhaps you are a true libertarian and disagree normatively. Let Elon Musk and Sam Altman command their own nuclear arsenals. But once it becomes clear that superintelligence is a principal matter of national security, I'm sure this is how the men and women in D.C. will look at it.
[00:22:21] The civilian uses of superintelligence.
[00:22:27] Of course, that doesn't mean the civilian applications of superintelligence will be reserved for the government.
[00:22:34] The nuclear chain reaction was first harnessed as a government project, and nuclear weapons have remained permanently reserved for the government. But civilian nuclear energy flourished as private projects in the 60s and the 70s, before environmentalists shut it down.
[00:22:50] Boeing made the B-29, the most expensive defense R&D project during World War II, more expensive than the Manhattan Project, and the B-47 and B-52 long-range bombers in partnership with the military, before using that technology for the Boeing 707, the commercial plane that ushered in the jet era. And today, while Boeing can only sell stealth fighter jets to the government, it can freely develop and sell civilian jets privately.
[00:23:19] And so it went for radar, satellites, rockets, gene technology, World War II factories, and so on.
[00:23:27] The initial development of superintelligence will be dominated by the national security exigency to survive and stabilize an incredibly volatile period. And the military uses of superintelligence will remain reserved for the government, and safety norms will be enforced. But once the initial peril has passed, the natural path is for the companies involved in the National Consortium and others to privately pursue civilian applications.
[00:23:54] Even in worlds with the project, a private, pluralistic, market based, flourishing ecosystem of civilian applications of superintelligence will have its day.
[00:24:06] Security. I've gone on about this at length in the previous piece in the series.
[00:24:13] On the current course, we may as well give up on having any American AGI effort. China can promptly steal all the algorithmic breakthroughs and the model weights, literally a copy of superintelligence, directly. It's not even clear we'll get to North Korea-proof security for superintelligence on the current course. In the private startups developing AGI world, superintelligence would proliferate to dozens of rogue states. It's simply untenable.
[00:24:41] If we're going to be at all serious about this, we obviously need to lock this stuff down.
[00:24:48] Most private companies have failed to take this seriously. But in any case, if we are to eventually face the full force of Chinese espionage, for example, stealing the weights being the MSS's number one priority, it's probably impossible for a private company to get good enough security.
[00:25:07] It will require extensive cooperation from the US intelligence community at that point to sufficiently secure AGI.
[00:25:16] This will involve invasive restrictions on AI labs and on the core team of AGI researchers, from extreme vetting to constant monitoring, to working from an SCIF to reduced freedom to leave. And it will require infrastructure that only the government can provide, ultimately including the physical security of the AGI data centers themselves.
[00:25:39] In some sense, security alone is sufficient to necessitate the government project. Both the free world's preeminence and AI safety are doomed if we can't lock this stuff down. In fact, I think it's fairly likely to be a major factor in the ultimate trigger. Once the Chinese infiltration of the AGI labs becomes clear, every senator and congressperson and national security official will have a strong opinion on the matter.
[00:26:05] Safety.
[00:26:07] Simply put, there are a lot of ways for us to mess this up.
[00:26:12] From ensuring we can reliably control and trust the billions of superintelligent agents that will soon be in charge of our economy and military (the superalignment problem), to controlling the risks of misuse of new means of mass destruction.
[00:26:28] Some AI labs claim to be committed to safety, acknowledging that what they are building if gone awry, could cause catastrophe, and promising that they will do what is necessary when the time comes.
[00:26:40] I do not know if we can trust their promise enough to stake the lives of every American on it. More importantly, so far they have not demonstrated the competence, trustworthiness or seriousness necessary for what they themselves acknowledge they are building.
[00:26:56] At core, they are startups with all the usual commercial incentives.
[00:27:01] Competition could push all of them to simply race through the intelligence explosion, and there will be at least some actors that will be willing to throw safety by the wayside.
[00:27:13] In particular, we may want to spend some of our lead to have time to solve safety challenges, but Western labs will need to coordinate to do so. And of course, private labs will already have had their AGI weights stolen, so their safety precautions won't even matter. We'll be at the mercy of the CCP's and North Korea's safety precautions.
[00:27:35] One answer is regulation.
[00:27:37] That may be appropriate in worlds in which AI develops more slowly. But I fear that regulation simply won't be up to the nature of the challenge or of the intelligence explosion.
[00:27:47] What's necessary will be less like spending a few years doing careful evaluations and pushing some safety standards through a bureaucracy. It'll be more like fighting a war.
[00:27:58] We will face an insane year in which the situation is shifting extremely rapidly every week, in which hard calls based on ambiguous data will be life or death, in which the solutions, even the problems themselves, won't be close to fully clear ahead of time but come down to competence in a fog of war, and which will involve insane tradeoffs like: some of our alignment measures are looking ambiguous, we don't really understand what's going on anymore, it might be fine, but there are some warning signs that the next generation of superintelligence might go awry, should we delay the next training run by three months to get more confidence on safety? But oh no, the latest intelligence reports indicate China stole our weights and is racing ahead on their own intelligence explosion. What should we do?
[00:28:46] I'm not confident that a government project would be competent in dealing with this, but the superintelligence developed by startups alternative seems much closer to praying for the best than commonly recognized.
[00:29:01] We'll need a chain of command that can bring to the table the seriousness that making these difficult tradeoffs will require. Stabilizing the international situation.
[00:29:13] The intelligence explosion and its immediate aftermath will bring forth one of the most volatile and tense situations mankind has ever faced.
[00:29:23] Our generation is not used to this, but in this initial period, the task at hand will not be to build cool products. It will be to somehow, desperately make it through this period.
[00:29:37] We'll need the government project to win the race against authoritarian powers and to give us the clear lead and breathing room necessary to navigate the perils of this situation.
[00:29:49] We might as well give up if we can't prevent the instant theft of superintelligence model weights. We will want to bundle Western efforts, bring together our best scientists, use every GPU we can find, and ensure the trillions of dollars of cluster buildouts happen in the United States. We will need to protect the data centers against adversary sabotage or outright attack.
[00:30:12] Perhaps most of all, it will take American leadership to develop and, if necessary, enforce a non-proliferation regime. We'll need to prevent Russia, North Korea, Iran and terrorist groups from using their own superintelligence to develop technology and weaponry that would let them hold the world hostage.
[00:30:32] We'll need to use superintelligence to harden the security of our critical infrastructure, military and government to defend against extreme new hacking capabilities.
[00:30:43] We'll need to use superintelligence to stabilize the offense and defense balance of advances in biology or similar.
[00:30:51] We'll need to develop tools to safely control superintelligence and to shut down rogue superintelligences that come out of others' uncareful projects.
[00:31:00] AI systems and robots will be moving at 10 to 100x human speed. Everything will start happening extremely quickly. We need to be ready to handle whatever other six sigma upheavals and concomitant threats come out of compressing a century's worth of technological progress into a few years.
[00:31:23] At least in this initial period, we will be faced with the most extraordinary national security exigency.
[00:31:30] Perhaps nobody is up for this task, but of the options we have, the project is the only sane one.
[00:31:39] The project is inevitable, whether it's good or not.
[00:31:45] Ultimately, my main claim here is descriptive. Whether we like it or not, superintelligence won't look like a San Fran startup, and in some way will be primarily in the domain of national security.
[00:31:59] I've brought up the project a lot to my San Francisco partners in the past year. Perhaps what surprised me most is how surprised most people are about the idea. They simply haven't considered the possibility.
[00:32:11] But once they consider it, most agree that it seems obvious if we are at all right about what we think we are building. Of course, by the end of this we'll be in some form a government project.
[00:32:25] If a lab developed literal superintelligence tomorrow, of course, the feds would step in.
[00:32:33] One important free variable is not if but when.
[00:32:37] Does the government not realize what's happening until we're in the middle of an intelligence explosion? Or will it realize a couple of years beforehand? If the government project is inevitable, earlier seems better.
[00:32:50] We will dearly need these couple of years to do the security crash program, to get the key officials up to speed and prepared, to build a functioning merged lab, and so on. It'll be far more chaotic if the government only steps in at the very end, and the secrets and weights will have already been stolen.
[00:33:10] Another important free variable is the international coalition. We can rally both a tighter alliance of democracies for developing superintelligence and a broader benefit sharing offer made to the rest of the world.
[00:33:23] The former might look like the Quebec Agreement, a secret pact between Churchill and Roosevelt to pool their resources to develop nuclear weapons while not using them against each other or against others without mutual consent. We'll want to bring in the UK (DeepMind), East Asian allies like Japan and South Korea (chip supply chain), and NATO or other core democratic allies (broader industrial base).
[00:33:49] A united effort will have more resources, talent and control of the whole supply chain, enable close coordination on safety, national security and military challenges, and provide helpful checks and balances on wielding the power of superintelligence.
[00:34:06] The latter might look like Atoms for Peace, the IAEA and the NPT.
[00:34:12] We should offer to share the peaceful benefits of superintelligence with a broader group of countries, including non-democracies, and commit to not offensively using superintelligence against them.
[00:34:23] In exchange, they refrain from pursuing their own superintelligence projects, make safety commitments on the deployment of AI systems and accept restrictions on dual use applications.
[00:34:34] The hope is that this offer reduces the incentives for arms races and proliferation and brings a broad coalition under a US-led umbrella for the post-superintelligence world order. Perhaps the most important free variable is simply whether the inevitable government project will be competent. How will it be organized? How can we get this done?
[00:35:00] How will the checks and balances work? And what does a sane chain of command look like?
[00:35:06] Scarcely any attention has gone into figuring this out. Almost all the other AI lab and AI governance politicking is a sideshow. This is the ball game.
[00:35:18] The endgame.
[00:35:21] And so by 2027 and 28, the endgame will be on. By 2028 or 29, the intelligence explosion will be underway. By 2030, we will have summoned superintelligence in all its power and might.
[00:35:39] Whoever they put in charge of the project is going to have a hell of a task. To build AGI, and to build it fast. To put the American economy on wartime footing to make hundreds of millions of GPUs. To lock it all down, weed out the spies and fend off all-out attacks by the CCP. To somehow manage a hundred million AGIs furiously automating AI research, making a decade's leaps in a year, and soon producing AI systems vastly smarter than the smartest humans. To somehow keep things together enough that this doesn't go off the rails and produce rogue superintelligence that tries to seize control from its human overseers. To use those superintelligences to develop whatever new technologies will be necessary to stabilize the situation and stay ahead of adversaries, rapidly remaking US forces to integrate those. All while navigating what will likely be the tensest international situation ever seen.
[00:36:44] They better be good.
[00:36:46] I'll say that for those of us who get the call to come along for the ride, it'll be stressful, but it will be our duty to serve the free world and all of humanity. If we make it through and get to look back on those years, it will be the most important thing we ever did. And while whatever secure facility they find probably won't have the pleasantries of today's ridiculously overcomped AI researcher lifestyle, it won't be so bad.
[00:37:16] San Fran already feels like a peculiar AI researcher college town. Probably this won't be so different. It'll be the same weirdly small circle, sweating the scaling curves during the day and hanging out over the weekend, kibitzing over AGI and the lab politics of the day. Except, well, the stakes will be all too real.
[00:37:38] See you in the desert, friends.
[00:37:41] Part 5: Parting Thoughts. What if we're right?
[00:37:49] I remember the spring of 1941 to this day. I realized then that a nuclear bomb was not only possible, it was inevitable. Sooner or later, these ideas would not be peculiar to us. Everybody would think about them before long, and some country would put them into action. And there was nobody to talk to about it. I had many sleepless nights, but I did realize how very, very serious it could be, and I had then to start taking sleeping pills. It was the only remedy. I've never stopped since then. It's 28 years, and I don't think I've missed a single night in all those 28 years.
[00:38:25] James Chadwick, Physics Nobel laureate and author of the 1941 British Government Report on the Inevitability of an Atomic Bomb, which finally spurred the Manhattan Project into action.
[00:38:40] Before the decade is out, we will have built superintelligence.
[00:38:46] That is what most of the series has been about. For most people I talk to in San Fran, that's where the screen goes black. But the decade after, the 2030s, will be at least as eventful.
[00:38:59] By the end of it, the world will have been utterly, unrecognizably transformed.
[00:39:05] A new world order will have been forged. But alas, that's a story for another time.
[00:39:12] We must come to a close. For now, let me make a few final remarks.
[00:39:19] AGI realism. This is all much to contemplate, and many cannot. Deep learning is hitting a wall, they proclaim every year. It's just another tech boom, the pundits say confidently. But even among those at the San Fran epicenter, the discourse has become polarized between two fundamentally unserious rallying cries.
[00:39:43] On the one end, there are the doomers. They have been obsessing over AGI for many years. I give them a lot of credit for their prescience, but their thinking has become ossified, untethered from the empirical realities of deep learning, their proposals naive and unworkable, and they fail to engage with the very real authoritarian threat. Rabid claims of 99% odds of doom, calls to indefinitely pause AI: they are clearly not the way.
[00:40:13] On the other end are the e/accs. Narrowly, they have some good points: AI progress must continue.
[00:40:21] But beneath their shallow Twitter shitposting, they are a sham, dilettantes who just want to build their wrapper startups rather than stare AGI in the face. They claim to be ardent defenders of American freedom, but can't resist the siren song of unsavory dictators' cash.
[00:40:38] In truth, they are real stagnationists. In their attempt to deny the risks, they deny AGI. Essentially, all we'll get is cool chatbots, which surely aren't dangerous. That's some underwhelming accelerationism in my book. But as I see it, the smartest people in the space have converged on a different perspective, a third way, the one I will dub AGI realism.
[00:41:03] The core tenets are simple.
[00:41:06] One, superintelligence is a matter of national security.
[00:41:10] We are rapidly building machines smarter than the smartest humans.
[00:41:15] This is not another cool Silicon Valley boom. This isn't some random community of coders writing an innocent open source software package. This isn't fun and games.
[00:41:26] Superintelligence is going to be wild.
[00:41:29] It will be the most powerful weapon mankind has ever built. And for any of us involved, it'll be the most important thing we ever do.
[00:41:39] Two, America must lead. The torch of liberty will not survive Xi getting AGI first. And realistically, American leadership is the only path to safe AGI too.
[00:41:52] That means we can't simply pause. It means we need to rapidly scale up U.S. power production to build the AGI clusters in the U.S.
[00:42:00] But it also means amateur startup security, delivering the nuclear secrets to the CCP, won't cut it anymore. And it means the core AGI infrastructure must be controlled by America, not some dictator in the Middle East. American AI labs must put the national interest first.
[00:42:19] And three, we need to not screw it up.
[00:42:24] Recognizing the power of superintelligence also means recognizing its peril.
[00:42:31] There are very real safety risks, very real risks that this all goes awry.
[00:42:37] Whether it be because mankind uses the destructive power brought forth for our mutual annihilation, or because, yes, the alien species we're summoning is one we cannot yet fully control.
[00:42:50] These are manageable, but improvising won't cut it.
[00:42:54] Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered.
[00:43:03] As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming and take it as a solemn call to duty.
[00:43:18] What if we're right?
[00:43:22] At this point, you may think that I and all other San Fran folk are totally crazy. But consider, just for a moment, what if they're right?
[00:43:32] These are the people who invented and built this technology.
[00:43:35] They think AGI will be developed this decade. And though there's a fairly wide spectrum, many of them take very seriously the possibility that the road to superintelligence will play out as I've described in this series.
[00:43:52] Almost certainly I've gotten important parts of the story wrong. If reality turns out to be anywhere near this crazy, the error bars will be very large.
[00:44:02] Moreover, as I said at the outset, I think there's a wide range of possibilities. But I think it's important to be concrete. In this series, I've laid out what I currently believe is the single most likely scenario for the rest of this decade. Because it's starting to feel real.
[00:44:22] Very real. A few years ago, at least for me, I took these ideas seriously. But they were abstract, quarantined in models and probability estimates.
[00:44:32] Now it feels extremely visceral. I can see it. I can see how AGI will be built. It's no longer about estimates of human brain size and hypotheticals and theoretical extrapolations and all that. I can basically tell you the cluster AGI will be trained on and when it will be built. The rough combination of algorithms we'll use, the unsolved problems and the path to solving them, the list of people that will matter. I can see it. It is extremely visceral. Sure, going all-in leveraged long Nvidia in early 2023 has been great and all, but the burdens of history are heavy. I would not choose this, but the scariest realization is that there is no crack team coming to handle this. As a kid you have this glorified view of the world that when things get real, there are the heroic scientists, the uber competent military men, the calm leaders who are on it, who will save the day. It is not so. The world is incredibly small. When the facade comes off, it's usually just a few folks behind the scenes who are the live players, who are desperately trying to keep things from falling apart.
[00:45:47] Right now there's perhaps a few hundred people in the world who realize what's about to hit us, who understand just how crazy things are about to get, who have situational awareness.
[00:45:58] I probably either personally know or am one degree of separation from everyone who could plausibly run the project.
[00:46:06] The few folks behind the scenes who are desperately trying to keep things from falling apart are you and your buddies and their buddies. That's it. That's all there is.
[00:46:14] Someday it will be out of our hands. But right now, at least for the next few years of mid game, the fate of the world rests on these people.
[00:46:22] Will the free world prevail?
[00:46:25] Will we tame superintelligence, or will it tame us?
[00:46:30] Will humanity skirt self destruction once more?
[00:46:35] The stakes are no less.
[00:46:37] These are great and honorable people, but they are just people.
[00:46:42] Soon the AIs will be running the world.
[00:46:45] But we're in for one last rodeo.
[00:46:49] May their final stewardship bring honor to mankind.
[00:46:56] Alright, and that wraps up Situational Awareness: The Decade Ahead by Leopold Aschenbrenner, with part four and part five. I really, really want to get into a Guy's Take on this, but I just don't have time today. We've got a bunch of conference stuff to get to and I'm going to have to head out. But I hope you guys really enjoyed the final part of this piece, and we will be coming back to discuss so much of this. I've highlighted so many different sections from this 160 page work, and actually there's a couple of things in the appendix that I want to cover too. I also think it will be hugely relevant to the Mark Zuckerberg piece, which I may not read in full, but we will discuss at length in relation to this, to compare those two perspectives. Because there are very different, opposing views on whether we need to go crazy closed down, secrecy, everything, the whole Manhattan Project perspective, or whether our best chance at survival, at safety and at defense is literally doing this all out in the open, not treating it like a battle where we are at odds, but trying to build the right thing for humanity in the open, where we know exactly what's going into it. And I think Zuckerberg has a really, really good point. So I want to flesh out his argument too and give that case a good run through, a thorough assessment. So we will do that. Do not forget to subscribe. Check out the Guy Swann Network, Bitcoin Audible, the Pear Report, all of the other shows that I'm doing. Follow me on YouTube, on Twitter, on Nostr, all that good stuff. You will find the links and details in the show notes. Don't forget to check out Swan Bitcoin, the best place to buy Bitcoin and plug your entire life, business, retirement, you name it, into Bitcoin, whether you're just getting started or getting on a Bitcoin standard. And of course Coinkite, the makers of the Coldcard hardware wallet. If you want to sleep like a baby at night, soundly and with all of the weight off your shoulders knowing your Bitcoin is safe, get a Coldcard. Get it with my discount code, details in the show notes. You won't regret it. With that, thank you all so much for listening, and I will catch you on the next episode of AI Unchained. Until then, everybody take it easy, guys.
[00:49:26] Hold fast to dreams. For if dreams die, life is a broken winged bird that cannot fly.
[00:49:35] Langston Hughes.