Artificial intelligence could bestow incredible benefits on society, from faster, more accurate medical diagnoses to more sustainable management of energy resources, and so much more. But in today’s economy, the first to achieve a technological breakthrough are the winners, and the teams that develop AI technologies first will reap the benefits of money, prestige, and market power. With the stakes so high, AI builders have plenty of incentive to race to be first.

When an organization is racing to be the first to develop a product, adherence to safety standards can grow lax. So it’s increasingly important for researchers and developers to remember that, as great as AI could be, it also comes with risks, from unintended bias and discrimination to potential accidental catastrophe. These risks are exacerbated when teams rushing to ship a product or feature first don’t take the time to properly vet every aspect of their programs and designs.

Yet, though the risk of an AI race is tremendous, companies can’t survive if they don’t compete.

As Elon Musk said recently:

You have companies that are racing – they kind of have to race – to build AI or they’re going to be made uncompetitive. If your competitor is racing toward AI and you don’t, they will crush you.

Is Cooperation Possible?

With signs that an AI race may already be underway, some are worried that cooperation will be hard to achieve.

“It’s quite hard to cooperate,” said AI professor Susan Craw:

… especially if you’re trying to race for the product, and I think it’s going to be quite difficult to police that, except, I suppose, by people accepting the principle. For me safety standards are paramount and so active cooperation to avoid corner cutting in this area is even more important. But that will really depend on who’s in this space with you.

Susan Schneider, a philosopher focusing on advanced AI, added, “Cooperation is very important. The problem is going to be countries or corporations that have a stake in secrecy. … If superintelligent AI is the result of this race, it could pose an existential risk to humanity.”

However, just because something is difficult doesn’t mean it’s impossible, and AI philosopher Patrick Lin offers a glimmer of hope.

“I would lump race avoidance into the research culture. … Competition is good, and an arms race is bad, but how do you get people to cooperate to avoid an arms race? Well, you’ve got to develop the culture first,” Lin suggested, referring to a comment he made in our previous piece on the Research Culture Principle, where he argued that the AI community lacks cohesion because researchers come from so many different fields.

Developing a cohesive culture is no simple task, but it’s not an insurmountable challenge.