https://github.com/jwieting/iclr2016
Python code for training all models in the ICLR paper, "Towards Universal Paraphrastic Sentence Embeddings". These models achieve strong performance on semantic similarity tasks without any training or tuning on those tasks' training data. They also produce features that are at least as discriminative as skip-thought vectors for semantic similarity tasks. Moreover, this code can achieve state-of-the-art results on entailment and sentiment tasks.
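The paper's simplest and most robust model represents a sentence as the average of its word vectors, then compares sentences by cosine similarity. A minimal sketch of that idea (this is not the repository's actual code; the toy 4-dimensional vectors and vocabulary below are illustrative assumptions):

```python
import numpy as np

# Toy word vectors (assumed for illustration only; the paper trains
# real embeddings on paraphrase pairs from PPDB).
word_vectors = {
    "a":      np.array([0.1, 0.3, 0.0, 0.2]),
    "dog":    np.array([0.9, 0.1, 0.4, 0.7]),
    "canine": np.array([0.8, 0.2, 0.5, 0.6]),
    "runs":   np.array([0.2, 0.8, 0.3, 0.1]),
}

def embed(sentence):
    """Sentence embedding = mean of in-vocabulary word vectors."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    """Cosine similarity between two embeddings."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Paraphrastic sentences should score close to 1.0.
sim = cosine(embed("a dog runs"), embed("a canine runs"))
print(round(sim, 3))
```

The repository trains the word vectors (and the more complex projection, RNN, and LSTM variants) with a margin-based loss on paraphrase pairs, but at inference time the averaging model is exactly this simple, which is part of why it transfers so well.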
- Host: GitHub
- URL: https://github.com/jwieting/iclr2016
- Owner: jwieting
- Created: 2016-02-06T21:48:40.000Z (about 10 years ago)
- Default Branch: master
- Last Pushed: 2016-02-19T05:59:51.000Z (about 10 years ago)
- Last Synced: 2023-12-05T05:41:13.994Z (over 2 years ago)
- Language: Python
- Size: 23.4 KB
- Stars: 190
- Watchers: 12
- Forks: 52
- Open Issues: 3
Awesome Lists containing this project
- Awesome-Code - Towards Universal Paraphrastic Sentence Embeddings