Image recognition has come a long way over the last few years, and perhaps more than anyone else, Google has brought those advances to end users. To see how far we’ve come, just try searching through your own images on Google Photos. But recognizing objects (and maybe basic scenes) is only a first step.
In September, Google showed how its approach, based on the currently popular deep learning methodology, could not only recognize images of single objects but also classify multiple objects in a single image (think different kinds of fruit in a fruit basket, for example).
Once you can do that, you can also try to generate a full natural-language description of the image, and that’s what Google is doing now. According to a new Google Research paper, the company has developed a system that can teach itself how to describe a…