The government is drafting rules to ensure that artificial intelligence (AI) systems are safe and work reliably. The rules will pay particular attention to how AI systems handle faults and failures in critical areas such as cars, drones, the Metaverse, satellite-based internet and healthcare.
Once the framework is established, companies deploying AI technologies will be evaluated against a set of criteria. The parameters include the time taken to restore systems after hacking, malware attacks, or breakdowns. Initially, companies may be asked to self-certify their AI systems and assign them a grade based on the prescribed rules.
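The report does not say how such a self-assessed grade would be calculated. As a purely illustrative sketch, assuming hypothetical recovery-time thresholds and grade bands that are not part of any published TEC standard, a rating could be derived along these lines:

```python
# Illustrative only: the thresholds and grade bands below are hypothetical
# assumptions for exposition, not values from TEC or any published standard.

def resilience_grade(recovery_minutes: float) -> str:
    """Map the time taken to restore an AI system after an incident
    (hacking, malware, breakdown) to a hypothetical letter grade."""
    if recovery_minutes <= 15:
        return "A"   # near-immediate restoration
    if recovery_minutes <= 60:
        return "B"   # restored within an hour
    if recovery_minutes <= 240:
        return "C"   # restored within a few hours
    return "D"       # prolonged outage

# Example self-assessment: a system that took 45 minutes to recover
print(resilience_grade(45))  # -> "B"
```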
Rules are needed because AI systems are widely used in areas where people are directly affected. The concern is that if these systems are compromised by malicious actors, they could produce wrong outputs and become dangerous. For instance, driverless cars rely on real-time decision-making, while automated car parking depends heavily on AI connectivity; any compromise of such systems connected to telecom networks could disrupt operations. Common standards are therefore needed to assess how quickly operations can be restored and what interim measures should apply when something goes wrong.
The Telecommunication Engineering Centre (TEC) is currently working with industry to prepare a report on the subject.
The TEC is expected to release a first report on the topic in the next two to three months, after which it will hold consultations with the public and other stakeholders.
In the interim, the TEC has already released a procedure for assessing and rating AI systems for fairness. The fairness score checks whether an AI system treats sellers and products even-handedly or favours some over others. However, compliance with the TEC standard for AI fairness is not yet mandatory for platforms.
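The TEC procedure itself is not reproduced here. As a minimal, hypothetical sketch of the kind of check a fairness score implies, one could compare how often a recommendation system surfaces each seller against that seller's share of the catalogue; the metric and example below are assumptions for illustration, not the TEC's published method.

```python
# Hypothetical illustration of a seller-level fairness check; the metric is
# an assumption for exposition, not taken from the TEC procedure.
from collections import Counter

def exposure_ratios(recommended: list[str], catalogue: list[str]) -> dict[str, float]:
    """For each seller, compare its share of recommendations to its share
    of the catalogue. A ratio near 1.0 suggests even-handed treatment;
    values far above or below 1.0 may indicate favouritism or suppression."""
    rec_counts = Counter(recommended)
    cat_counts = Counter(catalogue)
    total_rec, total_cat = len(recommended), len(catalogue)
    return {
        seller: (rec_counts[seller] / total_rec) / (cat_counts[seller] / total_cat)
        for seller in cat_counts
    }

# Example: seller_b is over-represented in recommendations relative to its listings
ratios = exposure_ratios(
    recommended=["seller_a", "seller_b", "seller_b", "seller_b"],
    catalogue=["seller_a", "seller_a", "seller_b", "seller_c"],
)
print(ratios)  # {'seller_a': 0.5, 'seller_b': 3.0, 'seller_c': 0.0}
```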