https://github.com/goamegah/pytorch-stc
PyTorch implementation of a self-training approach for short text clustering
- Host: GitHub
- URL: https://github.com/goamegah/pytorch-stc
- Owner: goamegah
- License: mit
- Created: 2024-03-09T18:20:06.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-05-27T00:21:20.000Z (11 months ago)
- Last Synced: 2025-04-15T10:02:38.458Z (13 days ago)
- Topics: autoencoder, clustering, deep-clustering, deep-learning, machine-learning, pytorch, representation-learning, self-training, sentence-embeddings, short-text, stc
- Language: Python
- Homepage:
- Size: 16.9 MB
- Stars: 8
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# PyTorch-Short-Text-Clustering
PyTorch version of a self-training approach for short text clustering.

### Original Paper: [A Self-Training Approach for Short Text Clustering](https://aclanthology.org/W19-4322/)

# Self-training Steps
### Step 1 - Train autoencoder
The aim of an autoencoder is to produce an output as close as possible to its input. In our study, the mean squared error (MSE) is used to measure the reconstruction loss after the data are compressed and decompressed. The autoencoder architecture can be seen in the figure below.
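A minimal sketch of such an autoencoder follows; the exact layer sizes and input dimension are assumptions, not necessarily those used in this repository:

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Feedforward autoencoder over precomputed sentence embeddings."""

    def __init__(self, input_dim=768, latent_dim=64):  # dims are assumptions
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

embeddings = torch.randn(1000, 768)  # stand-in for real sentence embeddings
for epoch in range(50):
    recon, _ = model(embeddings)
    loss = criterion(recon, embeddings)  # reconstruction (MSE) loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```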

### Step 2 - Initialize centroids through KMeans-like clustering
Cluster centroids are initialized by applying a clustering algorithm such as K-Means over the latent vector space produced by the trained encoder.
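A sketch of this step, reusing the `model` and `embeddings` from the snippet above (`n_clusters=8` is an assumption; set it to the number of classes in your dataset):

```python
from sklearn.cluster import KMeans
import torch

# Encode the data into the latent space, then cluster it with K-Means.
with torch.no_grad():
    latent = model.encoder(embeddings)

kmeans = KMeans(n_clusters=8, n_init=20)
assignments = kmeans.fit_predict(latent.numpy())

# The K-Means centers become the initial cluster centroids.
centroids = torch.tensor(kmeans.cluster_centers_, dtype=torch.float32)
```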

### Step 3 - Joint Optimization of Feature Representations and Cluster Assignments through Self-training
Given the cluster centroids, a soft (fuzzy) assignment matrix $Q$ of the data is computed, where $q_{ij}$ measures the similarity between embedded point $z_i$ and centroid $\mu_j$ as a probability, using a Student's t-distribution kernel (the DEC-style objective the paper builds on):

$$q_{ij} = \frac{\left(1 + \lVert z_i - \mu_j \rVert^2\right)^{-1}}{\sum_{j'} \left(1 + \lVert z_i - \mu_{j'} \rVert^2\right)^{-1}}$$

Then a stricter auxiliary distribution $P$ is derived from $Q$; it puts more emphasis on data points assigned with high confidence, with the aim of improving cluster purity:

$$p_{ij} = \frac{q_{ij}^2 / f_j}{\sum_{j'} q_{ij'}^2 / f_{j'}}, \qquad f_j = \sum_i q_{ij}$$

Training then minimizes the KL divergence $\mathrm{KL}(P \,\Vert\, Q)$, jointly refining the feature representations and the cluster assignments.
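A sketch of how $Q$, $P$, and the KL objective can be computed in PyTorch, reusing `latent` and `centroids` from the previous steps (in a real training loop, `latent` would be recomputed from the encoder each step so the loss backpropagates into the representation):

```python
import torch
import torch.nn.functional as F

def soft_assign(z, mu):
    # Student's t kernel: q_ij proportional to (1 + ||z_i - mu_j||^2)^-1
    dist_sq = torch.cdist(z, mu) ** 2
    q = 1.0 / (1.0 + dist_sq)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    # p_ij proportional to q_ij^2 / f_j, sharpening confident assignments
    weight = q ** 2 / q.sum(dim=0)
    return weight / weight.sum(dim=1, keepdim=True)

q = soft_assign(latent, centroids)
p = target_distribution(q).detach()  # target is held fixed during a step
loss = F.kl_div(q.log(), p, reduction="batchmean")  # KL(P || Q)
```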

# Visualization

# Demo
Follow this [demo](REQUIRED.md) to learn how to run the scripts.