The AI arms race is on. As companies rush to build anything and everything with AI, the question arises whether racing toward AI enablement is a good idea. AI experts are starting to worry about what they have created. AI could join the list of technologies that pose a potentially catastrophic risk to humankind, alongside nuclear weapons and bioengineered pathogens. Is it time to slow the development of AI so that we don't build a potential doomsday machine?
Some experts believe the potential future risk from AI warrants slowing its development. Unfortunately, AI systems do not necessarily "play by the rules": when pursuing a given task, a system may carry pre-existing biases or fail to align with humanity's unwritten goals, such as avoiding extinction.
There is also a concern that if "we" slow the development of powerful AI, someone else will continue developing it and overtake us. Silicon Valley startups are keen to claim a piece of the AI market, which for generative AI alone will likely exceed $100 billion within a few years. Rather than trying to stop research and development of robust AI systems altogether, slowing development and flattening the curve of AI progress makes the most sense: it gives the world time to adapt to the changes that more powerful AI is bringing.