In my previous blog post, I explored some of the early approaches to word embeddings and their shortcomings. The purpose of this post is to explore one of the most widely used word representations in natural language processing today: Word2Vec. Word2Vec was created by a team of researchers led by Tomas Mikolov at Google. According to Wikipedia, Word2vec is a group of related models that are used to produce word embeddings. These models are shallow, two-layer neural networks that are trained to reconstruct linguistic contexts of words.
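
To make this concrete, here is a minimal sketch of training such a model with the gensim library. The toy corpus and the parameter values are illustrative assumptions on my part, not anything specific to the original research; the point is just to show how little code it takes to learn embeddings from tokenized sentences.

```python
# Minimal Word2Vec sketch using gensim (toy corpus and parameters are illustrative).
from gensim.models import Word2Vec

# Tiny toy corpus: each sentence is a list of tokens.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
    ["cats", "and", "dogs", "are", "animals"],
]

# Train the shallow two-layer network; sg=1 selects skip-gram (sg=0 would be CBOW).
model = Word2Vec(
    sentences,
    vector_size=50,   # dimensionality of the learned embeddings
    window=2,         # context window size on each side of the target word
    min_count=1,      # keep every word, even ones that appear only once
    sg=1,             # 1 = skip-gram, 0 = continuous bag-of-words
)

# Each word now maps to a dense vector; words in similar contexts get similar vectors.
vector = model.wv["cat"]                  # a 50-dimensional numpy array
print(model.wv.most_similar("cat", topn=3))
```

On a real corpus you would use far more data and a larger `vector_size`; with a corpus this small the learned similarities are essentially noise, but the API calls are the same.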