GloVe Embeddings

https://nlp.stanford.edu/pubs/glove.pdf

Reading Embeddings

  import numpy as np
  import gensim.downloader as api

  # load the 'glove-wiki-gigaword-50' pre-trained embeddings
  model = api.load('glove-wiki-gigaword-50')

  # size of the vocabulary (number of words / tokens)
  len(model)

  # get the embedding vector for any word
  model['india']
  model['king']

  # meaningful relationships via vector arithmetic
  vector1 = model['man'] + model['queen'] - model['king']
  model.most_similar(vector1, topn=1)

  vector2 = model['paris'] - model['france'] + model['india']
  model.most_similar(vector2, topn=1)
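Under the hood, most_similar ranks the vocabulary by cosine similarity to the query vector. A minimal sketch of that idea, using a made-up 3-dimensional toy vocabulary (the real GloVe vectors here are 50-dimensional) so it runs without downloading the model:

```python
import numpy as np

# Toy embedding table -- these 3-d vectors are invented purely
# for illustration; real GloVe vectors are learned from co-occurrence counts
vocab = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "queen": np.array([0.8, 0.1, 0.7]),
    "man":   np.array([0.6, 0.9, 0.1]),
    "woman": np.array([0.6, 0.2, 0.8]),
}

def most_similar(vector, vocab, topn=1):
    """Rank vocabulary words by cosine similarity to `vector`."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = sorted(((w, cos(vector, v)) for w, v in vocab.items()),
                    key=lambda wv: wv[1], reverse=True)
    return scores[:topn]

# The classic analogy: man + queen - king should land nearest "woman"
target = vocab["man"] + vocab["queen"] - vocab["king"]
print(most_similar(target, vocab, topn=1))  # → [('woman', 0.99...)]
```

With the toy vectors above, "woman" comes out on top, mirroring what gensim's most_similar returns for the same arithmetic on the full GloVe vocabulary.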
