Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jhlau/doc2vec
Python scripts for training/testing paragraph vectors
- Host: GitHub
- URL: https://github.com/jhlau/doc2vec
- Owner: jhlau
- License: apache-2.0
- Created: 2016-06-22T06:30:53.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2023-08-07T02:32:29.000Z (11 months ago)
- Last Synced: 2024-02-28T20:36:13.151Z (4 months ago)
- Language: Python
- Homepage:
- Size: 1.19 MB
- Stars: 634
- Watchers: 23
- Forks: 191
- Open Issues: 6
Metadata Files:
- Readme: README.md
- License: LICENSE
Lists
- awesome-deeplearning-resources - Pre-Trained Doc2Vec Models
- text_mining_resources - Paragraph embedding scripts and Pre-trained models - trained Doc2Vec and Word2Vec models (APIs and Libraries / Knowledge Graphs)
README
The repository contains Python scripts for training paragraph vector (doc2vec) models and for inferring vectors for test documents.
Requirements
============
* Python 2: the pre-trained models and scripts support Python 2 only.
* Gensim: best to use my [forked version](https://github.com/jhlau/gensim) of gensim; the latest gensim has changed its Doc2Vec methods slightly and so will not load the pre-trained models.

Pre-Trained Doc2Vec Models
==========================
* [English Wikipedia DBOW (1.4GB)](https://unimelbcloud-my.sharepoint.com/:f:/g/personal/jeyhan_lau_unimelb_edu_au/EgzpOQsDqjJIqN8Pd0DksgUBGXr6oW4NX1csPPWBjYFr-Q?e=3dqgGg): 2016-doc2vec/enwiki_dbow.tgz
* [Associated Press News DBOW (0.6GB)](https://unimelbcloud-my.sharepoint.com/:f:/g/personal/jeyhan_lau_unimelb_edu_au/EgzpOQsDqjJIqN8Pd0DksgUBGXr6oW4NX1csPPWBjYFr-Q?e=3dqgGg): 2016-doc2vec/apnews_dbow.tgz

Pre-Trained Word2Vec Models
===========================
For reproducibility, we have also released the word2vec skip-gram models pre-trained on English Wikipedia and AP News:
* [English Wikipedia Skip-Gram (1.4GB)](https://unimelbcloud-my.sharepoint.com/:f:/g/personal/jeyhan_lau_unimelb_edu_au/EgzpOQsDqjJIqN8Pd0DksgUBGXr6oW4NX1csPPWBjYFr-Q?e=3dqgGg): 2016-doc2vec/enwiki_sg.tgz
* [Associated Press News Skip-gram (0.6GB)](https://unimelbcloud-my.sharepoint.com/:f:/g/personal/jeyhan_lau_unimelb_edu_au/EgzpOQsDqjJIqN8Pd0DksgUBGXr6oW4NX1csPPWBjYFr-Q?e=3dqgGg): 2016-doc2vec/apnews_sg.tgz

Directory Structure and Files
=============================
* train_model.py: example Python script that trains a model on some toy data
* infer_test.py: example Python script that infers test document vectors using a trained model
* toy_data: directory containing some toy train/test documents and pre-trained word embeddings

Model Hyper-Parameter Explanation
=================================
* __sample__: this is the sub-sampling threshold to downsample frequent words; 10e-5 is usually good for DBOW, and 10e-6 for DMPV
* __hs__: 1 turns on hierarchical softmax; this is rarely turned on, as negative sampling generally works better
* __dm__: 0 = DBOW; 1 = DMPV
* __negative__: number of negative samples; 5 is a good value
* __dbow_words__: 1 turns on updating of word embeddings. In DBOW, word embeddings are technically not learnt (only document embeddings are). To learn word vectors, DBOW runs a skip-gram step before each DBOW step to update the word embeddings. With dbow_words turned off, DBOW randomly initialises the word embeddings and leaves them that way; this is rather bad in practice (the model never sees relationships between words in the embedding space), so it should be turned on
* __dm_concat__: 1 = concatenate input word vectors for DMPV; 0 = sum/average input word vectors. This setting is only used for DMPV since DBOW has only one input word
* __dm_mean__: 1 = average input word vectors; 0 = sum input word vectors. Again, this setting is only used for DMPV. The original paragraph vector paper concatenates input word vectors for DMPV, and that's the setting we used in our paper
* __iter__: number of iterations/epochs to train the model

Publications
------------
* Jey Han Lau and Timothy Baldwin (2016). [An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation](https://arxiv.org/abs/1607.05368). In Proceedings of the 1st Workshop on Representation Learning for NLP, 2016.