Google can still deliver enormous growth, and in the past year it once again reached new highs in revenue and profit.| GoogleWatchBlog
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.| arXiv.org
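As a rough illustration of the attention mechanism the abstract refers to, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the Transformer; the function name, toy shapes, and random inputs are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over key positions
    return weights @ V                                    # weighted sum of values

# Toy usage (hypothetical sizes): 3 query positions, 4 key/value positions, d_k = 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)               # shape (3, 8)
```

Because each output position is computed from all key/value positions in one matrix product rather than step by step, this operation parallelizes across the sequence, which is what lets the Transformer drop recurrence.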