Artificial intelligence is perceived differently in today's era. AI algorithms are capable of identifying the best candidate for a given job; similarly, they help doctors prioritise their daily patient lists and assist lawyers in preparing for court. AI systems date back to the 1980s, when expert systems were built to support humans in tasks requiring a high level of expertise. What is new is that computers can now perform extremely complex tasks independently, while their designs are often no longer fully understandable.
The Draft AI Regulation sets out transparency obligations that apply mainly to low-risk AI systems. These transparency obligations apply in three cases:
- AI systems intended to interact with humans, e.g., chatbots,
- emotion recognition or biometric categorisation systems, and
- deep fakes.
There are some exceptions to these transparency obligations, for example for law enforcement purposes.
Regulatory Sandboxes
National supervisory authorities may establish AI regulatory sandboxes, which offer a controlled environment that facilitates the development, testing, and validation of AI systems under direct supervision and regulatory oversight before those systems are placed on the market or put into service. The aims of these regulatory sandboxes are to:
- help innovators enhance legal certainty and ensure compliance of their AI systems with the Draft AI Regulation, and
- increase the national competent authorities' oversight and understanding of the opportunities, emerging risks, and impacts of AI.
Like the GDPR and the proposed Digital Services Act, the Draft AI Regulation provides for substantial fines for non-compliance, along with other remedies, including the required withdrawal of the AI system from the market. The tiers of fines depend on the severity of the infringement:
- up to EUR 30 million or 6% of total worldwide annual turnover, whichever is higher, for breaching the prohibition on unacceptable-risk AI systems or infringing the data governance provisions for high-risk AI systems, and
- up to EUR 20 million or 4% of total worldwide annual turnover, whichever is higher, for non-compliance of AI systems with any other requirement under the Draft AI Regulation.
Supervision and enforcement mechanism
The Draft AI Regulation establishes a dual supervisory system: national authorities at the Member State level supervise the application and enforcement of the Draft AI Regulation, while a cooperation mechanism at the EU level ensures its consistent application. Each Member State is required to designate a national competent authority. At the EU level, the Draft AI Regulation creates a European Artificial Intelligence Board, composed of representatives of the national supervisory authorities and the European Commission, which will facilitate cooperation among the national supervisory authorities and provide guidance on various aspects of the Regulation.