When teaching language to an artificial intelligence system, researchers typically rely on sets of annotations that describe how words work. This approach has drawbacks: even when everyone agrees on the annotations, producing them takes a lot of time, and the result can still feel unnatural.
MIT researchers may have a solution. They have developed a parser that learns the way a child does, by observing scenes and making connections.
The system studies captioned videos and learns to associate words with objects and actions by determining how accurately a caption describes what it sees. It converts each potential meaning into a logical-mathematical expression and picks the expression that most closely represents what it believes is going on.
While the AI may start with a vast range of potential meanings and little idea of what it is actually seeing, it gradually whittles the possibilities down. Annotations can help speed up the process, but the technology does not strictly need them to learn.
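The whittling-down idea can be illustrated with a toy sketch. Everything here is hypothetical: the `scene`, `candidate_meanings`, and `score` names are illustrative stand-ins, not MIT's actual code, and real logical forms would be far richer than these fact sets.

```python
# Hypothetical sketch of scoring candidate meanings against an observed scene.
# A toy "scene" is a set of observed facts, e.g. {("pick_up", "ball")}.
scene = {("pick_up", "ball")}

# Candidate logical forms for the caption "the person picks up the ball".
candidate_meanings = [
    {("pick_up", "ball")},   # correct reading
    {("put_down", "ball")},  # wrong action
    {("pick_up", "cup")},    # wrong object
]

def score(meaning, scene):
    """Fraction of a candidate's facts confirmed by the observed scene."""
    return len(meaning & scene) / len(meaning)

# Keep the candidate that best matches what was observed.
best = max(candidate_meanings, key=lambda m: score(m, scene))
print(best)  # the correct reading survives
```

The point of the sketch is only that no annotated "right answer" is needed: the observed scene itself acts as the supervision signal that eliminates implausible readings.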
The approach is also more flexible. Because the system simply observes its environment, it can learn from how people actually speak, not just from formal language. MIT envisions robots that could adapt to the linguistic habits of the people around them, including sentence fragments and other hallmarks of informal dialogue.
This childlike method could speed up the learning process and lead to AI that can handle less common languages, which rarely receive AI-friendly annotations.