
How Google Translate squeezes deep learning onto a phone

Five years ago, if you gave a computer an image of a cat or a dog, it had trouble telling which was which. Thanks to convolutional neural networks, not only can computers tell the difference between cats and dogs, they can even recognize different breeds of dogs. Yes, they’re good for more than just trippy art—if you're translating a foreign menu or sign with the latest version of Google's Translate app, you're now using a deep neural net. And the amazing part is it can all work on your phone, without an Internet connection. Here’s how.

First, when a camera image comes in, the Google Translate app has to find the letters in the picture. It needs to weed out background objects like trees or cars and pick out just the words we want translated. It does this by looking for blobs of pixels that are similar in color to one another and that also sit near other similar blobs. Those blobs are probably letters, and when they line up close together, they form a continuous line of text to read.
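To make that blob-finding step concrete, here is a minimal sketch using OpenCV's connected-components analysis in Python. The function name, thresholds, and area limits are illustrative assumptions for this example, not the values the Translate app actually uses.

```python
import cv2
import numpy as np

def find_letter_boxes(image_bgr, min_area=20, max_area=5000):
    """Hypothetical helper: return bounding boxes of blobs that could be letters."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Binarize so similarly colored letter pixels merge into connected blobs.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        # Discard blobs too small (speckle noise) or too large (trees, cars).
        if min_area <= area <= max_area:
            boxes.append((x, y, w, h))
    # Blobs near each other on roughly the same row likely form a line of text.
    boxes.sort(key=lambda b: (b[1] // 20, b[0]))
    return boxes
```

The real pipeline is more involved, but the core intuition is the same: group similar nearby pixels into blobs, then group nearby blobs into lines.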

Second, Translate has to recognize what each letter actually is. This is where deep learning comes in. We use a convolutional neural network, training it on letters and non-letters so it can learn what the different letters look like.
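As an illustration of what such a classifier looks like, here is a toy convolutional network sketched with Keras. The patch size, layer sizes, and class count are assumptions made for the example; the network Translate actually ships is far more carefully tuned to run fast on phone hardware.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 27  # assumed: 26 letters plus one "not a letter" class

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),          # small grayscale patches
    layers.Conv2D(16, 3, activation="relu"),  # learn stroke-like features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # combine strokes into letter shapes
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(letter_patches, labels, epochs=5)  # hypothetical training data
```

Including an explicit "not a letter" class is what lets the network reject the background blobs that slipped past the first step.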

But interestingly, if we train only on very "clean"-looking letters, we risk not learning what letters look like in real life. Letters out in the real world are marred by reflections, dirt, smudges, and all sorts of oddities. So we built our letter generator to create all kinds of fake "dirt" that convincingly mimics the noisiness of the real world: fake reflections, fake smudges, fake oddities all around.
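The "fake dirt" idea can be sketched in a few lines of Python: take a cleanly rendered letter image and layer on synthetic noise, a smudge, and a bright highlight before training. The function name and every parameter below are illustrative, not the generator Google actually built.

```python
import numpy as np

def dirty_up(clean, rng=np.random.default_rng()):
    """Hypothetical augmenter. clean: float32 grayscale letter image in [0, 1]."""
    img = clean.copy()
    # Fake sensor noise.
    img += rng.normal(0.0, 0.05, img.shape)
    # Fake smudge: darken a random rectangular patch.
    h, w = img.shape
    y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
    img[y:y + h // 4, x:x + w // 4] *= 0.6
    # Fake reflection: a soft bright highlight centered at a random point.
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    highlight = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * (h / 6) ** 2))
    img += 0.4 * highlight
    return np.clip(img, 0.0, 1.0)
```

Training on endless dirty variations like these teaches the network to see past the grime to the letter underneath.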