
2018 Most Significant Machine Learning Advances

Answer by Xavier Amatriain, former ML researcher, now leading engineering teams:

“If I had to summarize the main highlights of machine learning progress in 2018 in a few headlines, this is what I would probably say:

AI hype and fear mongering are cooling down.

More focus on concrete problems such as fairness, interpretability, or causality.

Deep learning is here to stay and is useful in practice for much more than image classification (particularly for NLP).

The battle on the AI frameworks front is heating up, and if you want to be anyone, you had better publish a framework of your own.

Let’s look at all this in more detail.

If 2017 was probably the peak of AI fear mongering and hype (as I mentioned in last year’s answer), 2018 seems to be the year where we started to calm down a little. While it is true that some figures have continued to push their AI fear messages, they may have been too busy with other problems to make this a priority on their agenda. At the same time, the press and others seem to have come to terms with the idea that while self-driving cars and similar technologies are coming, they will not happen tomorrow. That being said, there are still voices defending the bad idea that we should regulate AI itself rather than focusing on regulating its outcomes.

It is good to see that this year the focus seems to have shifted to more concrete problems that can actually be addressed. For example, there has been a lot of talk about fairness: not only were there several conferences on the topic (see FATML or ACM FAT*), but there are even online courses like the one by Google.
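
To make “fairness” slightly more concrete, here is a minimal sketch of one widely used metric, the demographic parity difference (the gap in positive-prediction rates between two groups); the function name and toy data are illustrative, not taken from any of the courses or conferences mentioned above.

```python
import numpy as np

# Illustrative sketch: demographic parity difference, i.e. how much the
# positive-prediction rate differs between two groups.
def demographic_parity_difference(y_pred, group):
    """y_pred: 0/1 predictions; group: 0/1 group membership (toy encoding)."""
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.75 vs 0.25 -> 0.5
```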

Along these lines, other issues that have been widely discussed this year include interpretability, explanations, and causality. Starting with the latter, causality seems to have come back into the spotlight largely because of the publication of Judea Pearl’s “The Book of Why”. Not only did the author decide to write his first “generally accessible” book, but he also took to Twitter to energize the discussion around causality. In fact, even the popular press has written about this as a “challenge” to existing AI approaches (see this article in The Atlantic, for example). Indeed, even the best paper award at the ACM RecSys conference went to a paper that addresses how to incorporate causality into embeddings. That said, others argue that we should focus again on more concrete problems such as interpretability or explanations. Speaking of explanations, one of the highlights in this area might be the publication of the paper and code for Anchor, the follow-up to the famous LIME model by the same authors.
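
Since LIME comes up here, a minimal usage sketch may help; it assumes the open-source `lime` package (`pip install lime`), with an illustrative scikit-learn classifier standing in for whatever model you want to explain.

```python
# Minimal sketch of a local explanation with LIME; dataset and model are stand-ins.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)
# LIME perturbs this one instance and fits a simple local surrogate model
# around it to approximate the classifier's behavior.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=3)
print(exp.as_list())  # (feature condition, weight) pairs for the local explanation
```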

While there are still questions about deep learning as the most general AI paradigm (count me among the skeptics), and while we continue to read the nth iteration of the debate between Yann LeCun and Gary Marcus, it is clear that deep learning is not only here to stay but is still far from reaching a plateau in what it can deliver. More concretely, during this year deep learning approaches have shown unprecedented success in fields as diverse as vision, language, and healthcare.

In fact, it might be in the NLP field where we have seen the most interesting progress this year. If I had to pick the most impressive AI applications of the year, both would be NLP ones (and both come from Google). The first is Google’s very useful Smart Compose, and the second is their Duplex dialog system.

Much of that progress was accelerated by the idea of using language models, popularized this year by Fast.ai’s ULMFiT (see also “Understanding ULMFiT”). We have since seen other (and better) approaches such as the Allen Institute’s ELMo, OpenAI’s transformer, or, most recently, Google’s BERT, which beat many SOTA results out of the gate. These models have been described as an “ImageNet moment for NLP” because they demonstrate the practicality of transfer learning in the language domain, providing general, pre-trained, ready-to-use models that can also be fine-tuned for specific tasks. Beyond language models, there are many other interesting advances, such as Facebook’s multilingual embeddings, just to mention one more. It is also interesting to note how quickly these and other approaches have been integrated into more general NLP frameworks such as AllenNLP or Zalando’s FLAIR.
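
As a concrete illustration of this pre-train-then-fine-tune recipe, here is a minimal sketch using today’s Hugging Face `transformers` library (which postdates this article); the texts, labels, and hyperparameters are toy stand-ins.

```python
# Minimal sketch: one fine-tuning step of pre-trained BERT on a toy
# binary sentiment task, illustrating transfer learning for NLP.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

texts = ["a wonderful, moving film", "a dull and predictable plot"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# One gradient step: the general pre-trained encoder is adapted to the task.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```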

Talking about frameworks, this year the “AI framework wars” have heated up. Surprisingly, PyTorch seems to be catching up to TensorFlow just as PyTorch 1.0 was announced. While the situation around using PyTorch in production is still suboptimal, PyTorch appears to be catching up to TensorFlow on usability, documentation, and education faster than TensorFlow is closing the gap the other way. Interestingly, the choice of PyTorch as the framework for implementing the Fast.ai library has likely played a big role. That being said, Google is aware of all this and is pushing in the right direction, for example by including Keras as a first-class citizen within the framework or by adding developer-focused leaders like Paige Bailey. At the end of the day, we all benefit from having access to all these great resources, so keep them coming!
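
For illustration, here is a minimal sketch of what “Keras as a first-class citizen” looks like in practice: a small model defined entirely through `tf.keras`, with arbitrary layer sizes and input shape chosen for the example.

```python
# Minimal sketch: Keras lives inside TensorFlow as tf.keras,
# so model definition, compilation, and training use one API.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # inspect the architecture before calling model.fit(...)
```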

Interestingly, another area that has seen many interesting developments in the frameworks space is reinforcement learning. While I don’t think RL research progress was as impressive as in previous years (only DeepMind’s recent IMPALA work comes to mind), it was surprising to see that in a single year all the major AI players released an RL framework. Google published the Dopamine research framework, while DeepMind (also within Google) published the somewhat competing TRFL framework. Facebook could not stay behind and published Horizon, while Microsoft published TextWorld, which is more specialized for training text-based agents. Hopefully, all this open-source goodness will help us see a lot of RL progress in 2019.
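
None of these frameworks’ specific APIs are shown here; instead, here is a minimal, self-contained sketch of tabular Q-learning, the textbook algorithm this family of frameworks generalizes, on a hypothetical five-state chain MDP invented for the example.

```python
import numpy as np

# Toy chain MDP: 5 states in a row; action 0 moves left, action 1 moves right.
# Reward 1 only for reaching the last state, which ends the episode.
n_states, n_actions = 5, 2
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    done = next_state == n_states - 1
    return next_state, (1.0 if done else 0.0), done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current Q-table, sometimes explore.
        action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update toward the bootstrapped target r + gamma * max_a' Q(s', a').
        q[state, action] += alpha * (reward + gamma * np.max(q[next_state]) - q[state, action])
        state = next_state

print(q)  # the learned values should favor action 1 (right) in every state
```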

Just to finish on the frameworks front, I was happy to see that Google recently published TF-Ranking on top of TensorFlow. Ranking is a very important ML application that may have been getting less love than it deserves lately.”
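
As an illustration of what learning-to-rank objectives look like, here is a minimal sketch of a pairwise hinge loss in PyTorch; this is a generic textbook formulation with made-up toy scores, not TF-Ranking’s actual API.

```python
import torch

# Pairwise hinge ranking loss: for every document pair (i, j) within one query
# where i is more relevant than j, penalize score_i not exceeding score_j by a margin.
def pairwise_hinge_loss(scores, relevance, margin=1.0):
    diff_rel = relevance.unsqueeze(1) - relevance.unsqueeze(0)  # rel_i - rel_j
    diff_score = scores.unsqueeze(1) - scores.unsqueeze(0)      # s_i - s_j
    mask = (diff_rel > 0).float()                               # pairs where i outranks j
    losses = torch.clamp(margin - diff_score, min=0.0) * mask
    return losses.sum() / mask.sum().clamp(min=1.0)

scores = torch.tensor([0.2, 1.5, 0.7], requires_grad=True)  # model outputs (toy)
relevance = torch.tensor([0.0, 2.0, 1.0])                   # graded labels (toy)
loss = pairwise_hinge_loss(scores, relevance)
loss.backward()  # gradients push relevant documents above less relevant ones
```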

Written by Srikanth

Passionate tech blogger covering emerging technologies that bring revolutionary changes to people’s lives. Interested in exploring the latest gadgets and SaaS programs.
