I recently watched a podcast clip that posed the question: “Would you build the Manhattan project in the UAE?” This was a comparison of training AI on compute clusters outside of the US to developing the atomic bomb outside of the US. It made me think about 1) the comparison between AI and the atomic bomb and 2) the national security risks of AI and how we mitigate them while still promoting open AI research.
Let’s begin with the first.
Comparing AI to the Manhattan Project
For those unaware, the Manhattan Project was a top-secret US government initiative during World War II aimed at developing the first nuclear weapons. It involved thousands of scientists and engineers working at various sites across the country, including Los Alamos, New Mexico.
The project culminated in the successful creation of the atomic bomb and the devastating bombings of Hiroshima and Nagasaki in August 1945. This research and the fact that only the US had access to atomic bombs at the time made them the most powerful country in the world.
Achieving AGI will create the world’s next great weapon. Even though AI isn’t being advanced with weaponization in mind, it can still be used as a weapon and, similar to the atomic bomb, countries with access to it will have a distinct advantage over countries that don’t.
The primary difference between the development of AI and the Manhattan Project is just how difficult it is to keep AI secret. Instead of being developed secretly, AI is being developed by multiple groups in the open. AI is also much easier to steal than an atomic bomb. Access to AI only requires access to a model’s weights which is essentially just a file on a computer. Stealing model weights isn’t equivalent to another country getting their hands on the blueprint for the atomic bomb, it’s like stealing a bomb itself.
Now, onto mitigating risks while keeping AI research open.
Why the Location of AI Clusters Matters
There are multiple reasons why we need to consider geography when choosing where to develop superintelligence training clusters. It isn’t as simple as choosing the cheapest option—there are a lot of considerations that go into geography even in a primarily digital application like AI:
The country with access to the hardware has control over how that hardware is used. This is obvious, but worth pointing out.
Following on from the previous point, the computers training the AI contain the model weights. It’s much easier to steal weights locally than it is to steal them from halfway around the world.
In order to train AI, training data needs to be loaded on the machines doing the training. This means sensitive data will be wherever the compute clusters are.
Relinquishing control of compute and introducing the possibility of leaking sensitive information has massive national security implications. I’m sounding a bit like an AI doomer here, but these are important considerations. The obvious solution would be to close down all AI research, but it isn’t quite that simple and that would have adverse affects on AI advancements.
So how do we balance this with the open-source nature of a lot of AI research? I need to do more research into this. It’s hard to make a judgment call here because its difficult to foresee the future and pace of AI development.
I’m still a firm believe of AI being accessible to all being the only way forward without concentrating power in the hands of a few, but we also don’t want to make the weaponization of AI too widely accessible. I think the biggest takeaway from the current research regarding AI as a national security risk is that governments should be investing in their country’s own AI development. Compute clusters should be prioritized domestically and sensitive data needs to be managed with care.
I think this topic is interesting and important to know. I’m going to continue doing more research and I’ll share my findings as I do. Drop a comment about what you think are the potential national security threats of AI below.
If you want to see the full podcast from the clip above, here’s the link.
That’s all for now, here are my machine learning resources from the past few days. Thank you for supporting Society’s Backend!
Always be (machine) learning,
Logan
Keep reading with a 7-day free trial
Subscribe to Society's Backend to keep reading this post and get 7 days of free access to the full post archives.