Google Chief Executive Officer Sundar Pichai said in an interview set to be published that fears about artificial intelligence are valid, but that the tech industry is up to the challenge of regulating itself.
Tech companies building artificial intelligence should factor in ethics early in the process to ensure that AI with “agency of its own” does not end up hurting people, Pichai said in the interview: “I think tech has to realize it just can’t build it and then fix it,” Pichai said. “I think that doesn’t work.”
Google is currently a leader in the development of artificial intelligence, competing in the smart-software race with titans such as IBM, Apple, Amazon, and Facebook.
Pichai said worries about harmful uses of AI are “very legitimate” but that the industry should be trusted to regulate its use. “Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he said.
“This is why we’ve tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”
In June, Google published a set of AI principles, the first being that AI should be socially beneficial. “We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a memo posted with the principles.
“As a leader in AI, we feel a deep responsibility to get this right.”
The company also noted that it would continue to work with militaries and governments in areas such as training, healthcare, cybersecurity, and search and rescue. Artificial intelligence is already used to recognize people in images, filter unwanted content from online platforms, and enable cars to drive themselves.