A Google Brain scientist has built a tool that can help the latest artificial intelligence systems explain how they arrived at their conclusions, a notoriously tricky task for machine learning algorithms.
The tool, called Testing with Concept Activation Vectors, or TCAV for short, can be plugged into machine learning algorithms to suss out how heavily they weighted different factors or types of data before churning out results, according to a report by Quanta Magazine.
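At a high level, TCAV works by finding a direction in a network layer's activation space that corresponds to a human-friendly concept (say, "striped"), then measuring how often a class prediction is sensitive to that direction. The sketch below is a simplified illustration of that idea, not Google's implementation: it uses a mean-difference direction as the concept vector (the actual method trains a linear classifier), and all activations and gradients are made-up toy data.

```python
import numpy as np

def compute_cav(concept_acts, random_acts):
    """Concept activation vector: a unit direction in activation space
    pointing from random examples toward concept examples.
    (Simplified: mean difference instead of a trained linear probe.)"""
    cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return cav / np.linalg.norm(cav)

def tcav_score(gradients, cav):
    """Fraction of inputs whose class-logit gradient (taken with respect
    to the layer's activations) points along the concept direction."""
    directional_derivs = gradients @ cav
    return float((directional_derivs > 0).mean())

# Toy data standing in for a real network's internals
rng = np.random.default_rng(0)
concept_acts = rng.normal(1.0, 0.5, size=(50, 8))  # activations for concept images
random_acts = rng.normal(0.0, 0.5, size=(50, 8))   # activations for random images
grads = rng.normal(0.2, 1.0, size=(100, 8))        # per-input d(logit)/d(activation)

cav = compute_cav(concept_acts, random_acts)
score = tcav_score(grads, cav)  # near 1.0 means the concept strongly influences the class
```

A score close to 1.0 would suggest the model leans heavily on the concept when making that prediction; a score near 0.5 on random concepts is the baseline one would compare against.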
Tools like TCAV are in high demand as artificial intelligence comes under greater scrutiny for the racial and gender bias that plagues both AI systems and the training data used to develop them.
With TCAV, people using a facial recognition algorithm would be able to determine how much it factored in race when, say, matching people against a database of known criminals or evaluating their job applications. That way, people would have the option to question, reject, and perhaps even fix a neural network's conclusion rather than trusting the machine to be objective and fair.
Google Brain scientist Been Kim said in the report that she is not after a tool that can fully explain an AI's decision-making process. For now, it is enough to have something that can flag potential issues and give humans insight into where something may have gone wrong.
She likened the concept to reading the warning labels on a chainsaw before cutting down a tree.
“Now, I don’t fully understand how the chainsaw works,” Kim told Quanta. “But the manual says, ‘These are the things you need to be careful of, to not cut your finger.’ So, given this manual, I’d much rather use the chainsaw than a handsaw, which is easier to understand but would make me spend five hours cutting down the tree.”