Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ks-avinash/word2vec
Automatically exported from code.google.com/p/word2vec
- Host: GitHub
- URL: https://github.com/ks-avinash/word2vec
- Owner: ks-avinash
- License: apache-2.0
- Archived: true
- Created: 2015-10-07T09:35:19.000Z (about 9 years ago)
- Default Branch: master
- Last Pushed: 2015-10-07T09:37:51.000Z (about 9 years ago)
- Last Synced: 2023-09-12T17:31:58.221Z (over 1 year ago)
- Language: C
- Size: 262 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 35
Metadata Files:
- Readme: README.txt
- License: LICENSE
README
Tools for computing distributed representations of words
--------------------------------------------------------

We provide an implementation of the Continuous Bag-of-Words (CBOW) and the Skip-gram (SG) models, as well as several demo scripts.
Given a text corpus, the word2vec tool learns a vector for every word in the vocabulary, using either the Continuous
Bag-of-Words or the Skip-gram neural network architecture. The user should specify the following:
- desired vector dimensionality
- the size of the context window for either the Skip-Gram or the Continuous Bag-of-Words model
- training algorithm: hierarchical softmax and / or negative sampling
- threshold for downsampling the frequent words
- number of threads to use
- the format of the output word vector file (text or binary)

Usually, the other hyper-parameters, such as the learning rate, do not need to be tuned for different training sets.
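As an illustration, the settings listed above map onto command-line flags of the word2vec binary. A minimal sketch of assembling such an invocation follows; the flag names match the tool's usage text, but the corpus and output file names are placeholders, not part of this README:

```python
# Sketch: assemble a word2vec command line from the hyper-parameters
# listed above. The paths "corpus.txt" and "vectors.bin" are placeholders.
params = {
    "-train": "corpus.txt",   # input text corpus (placeholder path)
    "-output": "vectors.bin", # where to write the learned vectors
    "-size": 200,             # desired vector dimensionality
    "-window": 5,             # context window size
    "-cbow": 0,               # 0 = Skip-gram, 1 = Continuous Bag-of-Words
    "-hs": 1,                 # 1 = use hierarchical softmax
    "-negative": 5,           # number of negative samples (0 disables)
    "-sample": 1e-4,          # downsampling threshold for frequent words
    "-threads": 8,            # number of training threads
    "-binary": 1,             # 1 = binary output format, 0 = text
}

argv = ["./word2vec"]
for flag, value in params.items():
    argv += [flag, str(value)]

command = " ".join(argv)
print(command)
```

The same flags appear in the bundled demo scripts, which is a convenient place to check their exact spelling for a given build of the tool.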
The script demo-word.sh downloads a small (100 MB) text corpus from the web and trains a small word vector model. After the training
is finished, the user can interactively explore the similarity of the words.

More information about the scripts is provided at https://code.google.com/p/word2vec/
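The interactive exploration ranks words by cosine similarity between their vectors. A small sketch of that computation, using tiny hand-made vectors rather than trained output, might look like:

```python
import math

# Illustrative only: three hand-made 3-dimensional "word vectors".
# Real word2vec output would have hundreds of dimensions.
vectors = {
    "king":  [0.9, 0.1, 0.3],
    "queen": [0.8, 0.2, 0.35],
    "apple": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, k=2):
    # Rank every other word by similarity to the query word.
    sims = [(w, cosine(vectors[query], v))
            for w, v in vectors.items() if w != query]
    sims.sort(key=lambda t: t[1], reverse=True)
    return sims[:k]

print(nearest("king"))
```

With these toy vectors, "queen" scores higher than "apple" for the query "king", which is the kind of neighborhood structure the trained model exposes.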