While Silicon Valley generally prefers that emerging technologies be left legally unregulated, an EU proposal is likely to challenge that view. The EU's draft regulation on AI sets out a binding regulatory framework that restricts, and in some cases prohibits, applications of artificial intelligence, applying a degree of oversight proportional to whether an AI system is classified as high-risk or low-risk. Under this draft, providers and users of AI systems must keep records and maintain transparency and security of information, as telegraphed in the 2020 White Paper on Artificial Intelligence.
The proposal also requires an agreed-upon definition of artificial intelligence so that AI systems can be differentiated and sorted into high-risk and low-risk categories. While the definition, as stated in the White Paper on Artificial Intelligence, reads academically and serves this purpose only vaguely, it is broad enough to cover systems built on statistical approaches such as machine learning, high-risk decision-making systems, and even simple linear regression tools.
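To see how far such a broad definition reaches, consider that even a trivial ordinary-least-squares fit is a "statistical approach." The sketch below is purely illustrative (the function name and data are invented, not drawn from the proposal); a business tool built on nothing more than this could still plausibly fall within the definition.

```python
# Hypothetical illustration: a minimal ordinary-least-squares linear
# regression, implemented with no libraries at all. Under a broad
# statistics-based definition of AI, even a tool this simple could count.
def fit_line(xs, ys):
    """Least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x (both unnormalized).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
# slope == 2.0, intercept == 0.0 for this perfectly linear toy data
```

The point is not the mathematics but the regulatory scope: classification as an "AI system" would hinge on the technique used, not on its sophistication.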
Similar to the European Data Protection Board (EDPB) under the GDPR (General Data Protection Regulation), a European Artificial Intelligence Board (EAIB) would be established, composed of national supervisory authorities and chaired by the EU Commission. Because the Commission has opted for a regulation rather than a directive, the rules would apply directly and consistently throughout Europe, and possibly serve as a template beyond it.
By introducing a binding regulation for AI, the EU hopes to wield the same governance power it gained with the GDPR, which has become the primary model for privacy regulation worldwide. If the EU's approach is adopted, or at least put forward again as a global norm, many nations with sophisticated AI systems may be compelled to regulate along the same lines if they want to continue delivering AI services globally.
As with other regimes chaired by the EU Commission, this draft is positioned to shape how AI risk is managed globally. The sooner affected organizations and providers adapt, the better their long-term success with AI. From the very beginning of any AI-driven business model, companies must implement multiple layers of effective risk management, with manual supervision integrated into their business operations. And because many AI systems process sensitive data, robust cybersecurity protocols will also be essential.
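One common way to integrate manual supervision into an automated pipeline is a human-in-the-loop gate: decisions the model is unsure about are routed to a reviewer instead of being executed automatically. The sketch below is a minimal, hypothetical example of that pattern; the threshold, function names, and record fields are all invented for illustration and are not taken from the proposal.

```python
# Hypothetical sketch of one risk-management layer: low-confidence
# automated decisions are escalated to a human reviewer, and every
# decision is marked as logged (echoing the draft's record-keeping duty).
REVIEW_THRESHOLD = 0.85  # assumed policy value, not from the proposal

def route_decision(prediction: str, confidence: float) -> dict:
    """Return an action record; low-confidence cases go to manual review."""
    if confidence >= REVIEW_THRESHOLD:
        action = "auto_approve"
    else:
        action = "manual_review"
    return {"action": action, "prediction": prediction, "logged": True}

print(route_decision("loan_approved", 0.92)["action"])  # auto_approve
print(route_decision("loan_approved", 0.40)["action"])  # manual_review
```

In practice such a gate would be one layer among several (input validation, audit trails, periodic model review), but it captures the basic idea of keeping a human in the decision loop.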