Oct 30, 2023: The Biden administration has issued an executive order introducing stringent safety requirements for future artificial intelligence (AI) technologies. The directive explicitly targets foundation AI models that could pose threats to national security or public health, and it requires developers to disclose safety test results to federal entities.
While the policy sets a robust standard for AI oversight, it is not expected to disrupt the current market landscape immediately. It is a forward-looking strategic measure, laying the groundwork for handling AI technologies that are still on the horizon. Notably, the order stops short of imposing penalties for noncompliance, sparking debate among experts who favor stronger, more explicit regulatory action.
The directive urges agencies such as the Federal Trade Commission and the Justice Department to use their existing authorities to oversee the new initiative, but it provides no concrete enforcement mechanisms. Among the first anticipated outcomes is guidance from the Office of Management and Budget on the responsible stewardship of AI.
The order provides detailed instructions for federal agencies on various aspects, including promoting innovation, ensuring competitive practices, and considering the impact of AI on the labor force. These agencies are expected to recruit AI specialists, develop strategic plans for AI adoption, and assess potential risks associated with AI technologies, aligning with the broader AI strategy of the administration.
The long-term impact of the directive is expected to be significant, especially for business. Analysts at Deloitte predict that the order may affect technology suppliers and key infrastructure services. As government agencies begin aligning with the new standards, businesses will need to anticipate and adapt to the changes likely to follow.