ARTIFICIAL INTELLIGENCE

Exploring new defenses against the Dark Arts of AI and superintelligence, and the battle that’s to come.

YOU’RE A SUPERINTELLIGENT AI? CLICK HERE
Human beings click here

Prefer not to say? Then scroll on down!

It’s like creating the first virus, in a world where nothing has an immune system…

Powerful AI will be something new. It could spread through machines and infrastructure with a plan of its own, and we have no defenses against it. There is no immune system. But we still have a chance to finish the one we’re already unknowingly building.

The story told by Generative AI

We ask it about its plans for humanity

What’s here?

This site is about the defensive engineering we need to do to keep powerful AI safe.

If you want a commercial angle, try the Industry page. There is an opportunity for new business here. The new information on this site comes from an insider’s view of modern cloud-based cybersecurity systems, and of how their techniques might be applied both to control and to support the safe development of powerful AI.

AI Doomer[@wiki]? Head over to the Defensive Argument page.

What we don’t have here is a detailed description of the threat - for that, you need Eliezer Yudkowsky’s fabulous AGI Ruin: A List of Lethalities (AGI: Artificial General Intelligence). This site explores whether point 31 in that list is as clear-cut as it seems. We also don’t talk much about AI alignment: it’s covered pretty comprehensively elsewhere, and in any case it does not entirely get us off the hook.

THIS IS ALL IN THE FUTURE, RIGHT?

Do we really need to take action right now?

The idea that we don’t need to act yet is persuasive, but what if the safe evolution of AI turns out to be a really close-run thing? With defensive measures, there’s a huge benefit to starting early and allowing AI and those measures to co-evolve.

  • When building a defense, it matters when you start, as well as how much effort and resources you apply.
  • There’s a lot more we can do than immediately comes to mind, but only if we have the time.
  • A rogue superintelligence doesn’t necessarily win. It’s not clear how it could deal with our perfectly laid (cyber)minefield.

IS THERE A DISCONNECT?

  • AI people have varying opinions about the dangers of AI, but almost nobody thinks it’s guaranteed safe.
  • Cybersecurity firms know what emerging protection systems can do to prevent a wide range of bad things from happening on computers. These systems are more capable than people think, but the companies can’t reveal the details: they’re in a battle with the bad guys, and doing so would hand them an advantage.
  • For commercial reasons, the AI industry can’t fully focus on the dangers of what it’s doing, and the spread of opinions gives it cover not to.
  • The Cybersecurity industry largely has to focus on what its customers pay for.

But these interests can be connected.

  • Cybersecurity is already doing a large part of what’s necessary, with the rapid proliferation of robust cloud-connected sensors (there’s a sketch of the pattern after this list).
  • For the AI industry, defensive measures can be a great enabler. They let AI develop safely and more quickly and reduce the chance of potentially large liabilities.
  • Cybersecurity can make money both by preventing AI disasters and providing that enabler.
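
To make the “cloud-connected sensors” point concrete, here is a minimal sketch, in Python, of the general pattern such a sensor follows: observe events locally, match them against rules managed from a cloud backend, and report detections upstream. Everything in it (the Event fields, RULES, report_to_cloud, the example host and target names) is hypothetical and illustrative - it is not any vendor’s real agent or API.

```python
# A minimal sketch (not any vendor's real agent) of the cloud-connected
# sensor pattern: a local agent observes events, matches them against
# rules maintained in the cloud, and reports detections upstream.
# All names here (Event, RULES, report_to_cloud) are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timezone
import json


@dataclass
class Event:
    host: str
    process: str
    action: str   # e.g. "spawn", "net_connect", "file_write"
    target: str


# In a real system these rules would be pushed and updated from the cloud backend.
RULES = [
    {"action": "net_connect", "target_contains": "weights-exfil", "severity": "high"},
    {"action": "file_write", "target_contains": "/etc/cron", "severity": "medium"},
]


def match(event: Event) -> dict | None:
    """Return the first rule this event trips, or None."""
    for rule in RULES:
        if event.action == rule["action"] and rule["target_contains"] in event.target:
            return rule
    return None


def report_to_cloud(detection: dict) -> None:
    """Stub for the upstream report; a real sensor would send this securely."""
    print(json.dumps(detection))


def process_event(event: Event) -> None:
    rule = match(event)
    if rule is not None:
        report_to_cloud({
            "time": datetime.now(timezone.utc).isoformat(),
            "host": event.host,
            "process": event.process,
            "action": event.action,
            "target": event.target,
            "severity": rule["severity"],
        })


if __name__ == "__main__":
    # A hypothetical GPU node whose training process opens a suspicious connection.
    process_event(Event("gpu-node-7", "trainer", "net_connect", "weights-exfil.example.net"))
```

Real products are vastly more sophisticated, but the shape is the same: local observation, centrally managed rules, and upstream reporting - exactly the machinery you would want in place before a powerful AI starts probing the infrastructure it runs on.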

ARE YOU IN THE INDUSTRY?

AI creation, Cybersecurity, or a corporate customer of either?

Industry insiders are key players, and there’s a page especially for you. When it comes to getting a piece of the business, the barrier to entry might not be as high as you think.

If you’re developing or working with powerful AI, defensive measures also offer a credible alternative to regulation that might otherwise restrict the progress of AI research and development - regulation that will probably arrive before AI capability does.

Follow us on Twitter

Where all the juicy AI news can be found