The Most Amazing Milestones in Artificial Intelligence

Artificial Intelligence (AI) is currently a hot topic in technology, and the driving force behind most of the major technological breakthroughs of recent years.

In fact, with all the hype and buzz we hear today, it’s easy to forget that AI is not something new. Over the last century, it has moved from the domain of science fiction into the real world. The theoretical and fundamental computer science that makes it possible has existed for decades.

Since the beginnings of computing in the early 20th century, scientists and engineers have understood that the ultimate goal is to build machines capable of thinking and learning in the way the human brain – the most sophisticated decision-making system in the known universe – does.

Today’s deep learning using artificial neural networks is the latest state of the art, but there are many milestones along the way that made it possible. Here’s my overview of the ones generally considered the most significant.

1637 – Descartes draws the dividing line:

Long before robots were even a feature of science fiction, scientist and philosopher René Descartes pondered the possibility that machines might one day think and make decisions. While he erroneously concluded that they would never be able to speak like humans, he identified the division between machines that might one day learn to perform one specific task and those that might be able to adapt to any job. Today, these two fields are known as specialized and general AI. In many ways, he set the stage for the challenge of creating AI.

1956 – Dartmouth Conference:

With the advent of ideas such as neural networks and machine learning, Dartmouth College professor John McCarthy coined the term “artificial intelligence” and organized an intensive summer workshop that brought together leading experts in the field.

During the brainstorming sessions, efforts were made to lay down a framework that would allow academic exploration and development of “thinking” machines to begin. Many fields that are fundamental to today’s AI, including natural language processing, computer vision, and neural networks, were part of the agenda.

1966 – ELIZA gives computers a voice:

ELIZA, developed at MIT by Joseph Weizenbaum, was probably the world’s first chatbot – and a direct ancestor of the likes of Alexa and Siri. ELIZA represented an early implementation of natural language processing, which aims to teach computers to communicate with us in human language, rather than requiring us to program them in computer code or interact through a user interface. ELIZA couldn’t speak like Alexa – it communicated through text – and it was unable to learn from its conversations with humans. Nevertheless, it paved the way for later efforts to break down the communication barrier between humans and machines.
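To make the idea concrete, here’s a minimal Python sketch of the kind of pattern matching a chatbot like ELIZA relied on. The rules below are invented for illustration and are far simpler than Weizenbaum’s original script:

```python
import re

# ELIZA-style rules: each pairs a regex with a response template that reuses
# the captured text. These rules are illustrative inventions, not the
# original script.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r".*", "Please tell me more."),  # fallback when nothing else matches
]

def respond(message: str) -> str:
    """Return the response of the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = re.fullmatch(pattern, message.lower().strip())
        if match:
            return template.format(*match.groups())

print(respond("I need a holiday"))    # Why do you need a holiday?
print(respond("I am feeling tired"))  # How long have you been feeling tired?
```

The point is that there’s no understanding here at all – just surface-level pattern matching – which is exactly why ELIZA couldn’t learn from its conversations.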

1980 – XCON and the rise of useful AI:

The XCON expert system was deployed by Digital Equipment Corporation in 1980, and by 1986 it was credited with generating annual savings of $40 million for the company. This is significant because, until this point, AI systems had generally been regarded as impressive technological achievements with limited real-world usefulness. Now it was clear that the rollout of smart machines into business had begun – by 1985, companies were spending $1 billion a year on AI systems.

1988 – Statistical approach:

IBM researchers published “A Statistical Approach to Language Translation”, introducing principles of probability into the field of machine learning, which until then had been driven by rules. It tackled the challenge of automated translation between two human languages – French and English.

This marked a shift in emphasis towards designing programs that determine the probability of various outcomes based on the information (data) they are trained on, rather than training them to follow rules. This is often considered a big leap in terms of mimicking the cognitive processes of the human brain, and it forms the basis of machine learning as used today.
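As a rough illustration of the statistical idea, the toy sketch below picks the English candidate that maximizes P(english) × P(french | english). The tiny probability tables are made up for illustration; real systems estimate them from large collections of translated text:

```python
# Toy statistical translation: choose the English sentence e that maximizes
# P(e) * P(f | e) for an observed French sentence f. These probability
# tables are invented for illustration; real systems estimate them from
# large parallel corpora.

# Language model: how plausible each English candidate is on its own.
p_english = {"the house": 0.6, "the home": 0.4}

# Translation model: how likely the French sentence is, given each candidate.
p_french_given_english = {
    ("la maison", "the house"): 0.7,
    ("la maison", "the home"): 0.3,
}

def translate(french: str) -> str:
    """Pick the English candidate with the highest combined probability."""
    return max(
        p_english,
        key=lambda e: p_english[e] * p_french_given_english.get((french, e), 0.0),
    )

print(translate("la maison"))  # "the house" (0.6 * 0.7 beats 0.4 * 0.3)
```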

1991 – Birth of the World Wide Web:

The importance of this cannot be overstated. In 1991, CERN researcher Tim Berners-Lee put the world’s first website online and published the workings of the hypertext transfer protocol (HTTP). Computers had been connected to share data for decades, particularly in educational institutions and large businesses. But the arrival of the world wide web was the catalyst for society at large to plug itself into the online world. Within a few short years, millions of people from every part of the world would be connecting, creating, and sharing data – the fuel of AI – at a rate previously unimaginable.

1997 – Chess champion Garry Kasparov is defeated by Deep Blue:

IBM’s chess supercomputer did not use techniques that would be considered true AI by today’s standards. Essentially, it relied on “brute force” methods of calculating every possible option at high speed, rather than analyzing gameplay and learning about the game. However, it was important from a publicity point of view – drawing attention to the fact that computers were evolving very quickly and becoming increasingly competent at activities in which humans had previously reigned unchallenged.
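To give a sense of what “brute force” means here, the sketch below runs an exhaustive minimax search on the toy game Nim (take 1–3 sticks; whoever takes the last stick wins) – a drastically simplified stand-in for chess. Deep Blue ran the same kind of exhaustive search, backed by specialized hardware and a hand-tuned evaluation function:

```python
# Exhaustive minimax search on Nim: every line of play is examined, nothing
# is "learned". Chess engines like Deep Blue used the same principle, with
# depth limits, pruning, and a hand-tuned evaluation of positions.

def minimax(sticks: int, maximizing: bool) -> int:
    """Score a position exhaustively: +1 if the maximizer wins, -1 otherwise."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else 1
    scores = [minimax(sticks - take, not maximizing)
              for take in (1, 2, 3) if take <= sticks]
    return max(scores) if maximizing else min(scores)

def best_move(sticks: int) -> int:
    """Try every legal move and keep the one with the best guaranteed outcome."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, False))

print(best_move(9))  # 1 -> leaves 8 sticks, a losing position for the opponent
```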

2005 – DARPA’s Grand Challenge:

2005 marked the second year DARPA held its Grand Challenge – a race for autonomous vehicles across more than 100 kilometres of off-road terrain in the Mojave Desert. In 2004, no entrant successfully completed the course. The following year, however, five vehicles finished, with a team from Stanford University taking the prize for the fastest time.

The race was designed to spur the development of autonomous driving technology, and it certainly did. In 2007, a simulated urban environment was built for the vehicles to navigate, which meant they had to be able to handle traffic regulations and other moving vehicles.

2011 – IBM Watson’s Jeopardy! Victory:

Watson’s cognitive computing engine took on the champions of the TV game show Jeopardy!, beating them and claiming the $1 million prize. This was significant because, while Deep Blue had proven more than a decade earlier that games in which moves can be described mathematically, such as chess, could be conquered through brute force, the idea of a computer beating humans at a language-based, creative-thinking game was unprecedented.

2012 – The true power of deep learning is introduced to the world – computers learn to identify cats:

Researchers at Stanford and Google, including Jeff Dean and Andrew Ng, published their paper “Building High-Level Features Using Large Scale Unsupervised Learning”, building on previous research into multilayered neural nets known as deep neural networks.

Their research explored unsupervised learning, which removes the costly and time-consuming task of manually labelling data before it can be used to train machine learning algorithms. This would accelerate the pace of AI development and open up a new world of possibilities when it comes to building machines to do work that, until then, could only be done by humans.

In particular, they highlighted the fact that their system had become highly competent at recognizing images of cats.

The paper described a model that would allow artificial networks to be built containing around one billion connections. It also acknowledged that while this was a significant step towards building an “artificial brain”, there was still some way to go – the neurons in a human brain are thought to be joined by a network of around 10 trillion connections.
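As a rough sketch of the unsupervised idea, the toy autoencoder below learns features from unlabelled data by trying to reconstruct its own input – no manual labels are involved. All sizes and data here are placeholders; the 2012 system was vastly larger and trained on millions of video frames:

```python
import numpy as np

# Toy autoencoder: learn a compressed representation of unlabelled data by
# reconstructing the input. The input itself acts as the training target,
# so no manual labelling is needed.
rng = np.random.default_rng(0)
X = rng.random((200, 8))            # 200 unlabelled samples of 8 "pixels"

W_enc = rng.normal(0, 0.1, (8, 3))  # encoder: compress 8 pixels to 3 features
W_dec = rng.normal(0, 0.1, (3, 8))  # decoder: expand 3 features back to 8 pixels
lr = 0.1

for _ in range(1000):
    H = np.tanh(X @ W_enc)          # hidden features (the learned code)
    X_hat = H @ W_dec               # reconstruction of the input
    err = X_hat - X                 # reconstruction error: input is the target
    # Gradient descent on the mean squared reconstruction error.
    W_dec -= lr * H.T @ err / len(X)
    W_enc -= lr * X.T @ ((err @ W_dec.T) * (1 - H**2)) / len(X)

print("reconstruction MSE:", np.mean((np.tanh(X @ W_enc) @ W_dec - X) ** 2))
```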

2015 – Machines “see” better than humans:

Researchers studying the annual ImageNet challenge – in which algorithms compete to demonstrate their ability to recognize and describe a library of 1,000 images – declared that machines were now outperforming humans.

Since the contest was launched in 2010, the accuracy of the winning algorithm had increased from 71.8% to 97.3% – prompting the researchers to declare that computers could identify objects in visual data more accurately than humans.

2016 – AlphaGo goes where no machine has gone before:

Game playing has long been the method of choice for demonstrating machine thinking, and the trend continued to make headlines in 2016 when AlphaGo, created by DeepMind (by then a Google subsidiary), defeated world Go champion Lee Sedol over a five-match series. Although Go moves can be described mathematically, the sheer number of variations makes a brute-force approach impractical – there are more than 100,000 possible opening moves in Go, compared with 400 in chess. Instead, AlphaGo used neural networks to study the game and to learn as it played.
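That branching-factor claim is easy to check with some quick arithmetic, counting the distinct two-move openings on each board:

```python
# Go: a 19x19 board has 361 intersections; the second move can go on any of
# the remaining 360. Chess: roughly 20 legal first moves for each side.
go_openings = 361 * 360      # 129,960 -> "more than 100,000"
chess_openings = 20 * 20     # 400

print(go_openings, chess_openings)
```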

2018 – Self-driving cars hit the road:

The development of self-driving cars is the flagship AI use case of today – the application that has captured the public’s imagination more than any other. Like the AI that powers them, they are not something that appeared overnight, however it may seem to someone who hasn’t been following technology trends. General Motors predicted the arrival of driverless vehicles at the 1939 World’s Fair. The Stanford Cart – originally built to explore how lunar vehicles might operate, and later repurposed as an autonomous road vehicle – was launched in 1961. And in 2018, Google spin-off Waymo launched a commercial self-driving taxi service in Phoenix, Arizona.

Written by Siva Prasanna
