These are a few of my notes from Eunsol Choi’s NLP class at UT Austin.

Word Embeddings
---

Word embeddings are a type of word representation that captures the semantic meaning of words in a continuous vector space. Unlike one-hot encoding, where each word is represented as a binary vector of all zeros except for a single ‘1’, word embeddings capture much richer information, including semantic relationships, word context, and even aspects of syntax.
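
To make the contrast concrete, here is a minimal sketch comparing the two representations. The vocabulary, the 3-dimensional embedding values, and the `cosine` helper are all made up for illustration; real embeddings are learned from data and typically have hundreds of dimensions.

```python
import numpy as np

# Toy vocabulary; indices are arbitrary.
vocab = ["king", "queen", "apple"]

# One-hot encoding: each word is all zeros except a single 1.
one_hot = np.eye(len(vocab))

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Distinct one-hot vectors are always orthogonal, so this representation
# carries no notion of similarity between words.
print(cosine(one_hot[0], one_hot[1]))  # 0.0 for "king" vs "queen"

# Hypothetical dense embeddings (made-up numbers purely for illustration).
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.75, 0.70, 0.15]),
    "apple": np.array([-0.20, 0.10, 0.90]),
}

# Related words end up close together; unrelated words do not.
print(cosine(embeddings["king"], embeddings["queen"]))  # high (~1.0)
print(cosine(embeddings["king"], embeddings["apple"]))  # near 0
```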