Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/vita-group/self-pu
[ICML2020] "Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training" by Xuxi Chen, Wuyang Chen, Tianlong Chen, Ye Yuan, Chen Gong, Kewei Chen, Zhangyang Wang
- Host: GitHub
- URL: https://github.com/vita-group/self-pu
- Owner: VITA-Group
- License: MIT
- Created: 2020-06-10T13:58:26.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2021-12-29T07:45:34.000Z (almost 3 years ago)
- Last Synced: 2023-08-14T01:40:23.031Z (over 1 year ago)
- Topics: pu-learning, pytorch
- Language: Python
- Homepage:
- Size: 288 KB
- Stars: 55
- Watchers: 16
- Forks: 13
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# [Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training](https://arxiv.org/abs/2006.11280)
[ICML2020] Xuxi Chen*, Wuyang Chen*, Tianlong Chen, Ye Yuan, Chen Gong, Kewei Chen, Zhangyang Wang

# Overview
We propose the Self-PU framework, which introduces self-paced, self-calibrated, and self-supervised learning to the field of positive-unlabeled (PU) learning.

# Method
- Self-paced learning: we gradually select confident samples from the unlabeled set and assign labels to them (see the sketch below).
- Self-calibrated learning: we find optimal weights for unlabeled samples in order to obtain additional sources of supervision.
- Self-supervised learning: we fully exploit the learning ability of the models through a teacher-student structure.

![framework](framework.png)
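The exact implementations live in the training scripts below; purely as an illustration (not the authors' code), here is a minimal PyTorch sketch of the self-paced selection and the teacher-student update, where `model`, `teacher`, `student`, `unlabeled_loader`, and the `top_ratio` schedule are hypothetical names:

```python
import torch

def select_confident(model, unlabeled_loader, top_ratio, device="cpu"):
    """Self-paced selection (illustrative): rank unlabeled samples by
    the model's confidence and pseudo-label the most confident fraction.
    In Self-PU the selected fraction grows over training, so more
    unlabeled samples receive labels as the model improves."""
    model.eval()
    confs, xs, ys = [], [], []
    with torch.no_grad():
        for x, _ in unlabeled_loader:  # assumes (input, target) batches
            p = torch.sigmoid(model(x.to(device))).squeeze(1)
            confs.append((p - 0.5).abs().cpu())  # distance from boundary
            xs.append(x)
            ys.append((p > 0.5).long().cpu())    # hard pseudo-labels
    confs = torch.cat(confs)
    k = max(1, int(top_ratio * len(confs)))
    top = confs.topk(k)[1]  # indices of the k most confident samples
    return torch.cat(xs)[top], torch.cat(ys)[top]

def update_teacher(teacher, student, alpha=0.99):
    """Mean-teacher update (illustrative): the teacher's weights track
    an exponential moving average of the student's weights."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s * (1.0 - alpha))
```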
# Set-up
## Environment
```bash
conda install pytorch==0.4.1 cuda92 torchvision -c pytorch
conda install matplotlib scikit-learn tqdm
pip install opencv-python
```
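A quick sanity check that the pinned build is active (the expected versions refer to the install commands above):

```python
import torch
import torchvision

print(torch.__version__)          # expect 0.4.1 in this environment
print(torchvision.__version__)
print(torch.cuda.is_available())  # True if the cuda92 build sees a GPU
```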
## Preparing Data
Download the CIFAR-10 dataset and extract it into `cifar/`.
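The README leaves the download source open; one convenient route is torchvision's built-in downloader. Whether the training code expects exactly the resulting `cifar/cifar-10-batches-py/` layout is an assumption to verify:

```python
from torchvision.datasets import CIFAR10

# Downloads and extracts the python-version CIFAR-10 archive under cifar/.
CIFAR10(root="cifar", download=True)
```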
# Evaluation
## Pretrained Model
MNIST: [Google Drive](https://drive.google.com/file/d/1RjVAIv_zPvKraLiyh8Oeshifun4zkgrm/view?usp=sharing "Google Drive"), Accuracy: 94.45%

CIFAR-10: [Google Drive](https://drive.google.com/file/d/1Ybzaph0355FYjxFlPorrJBiESo_6LfJC/view?usp=sharing "Google Drive"), Accuracy: 90.05%
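`evaluation.py` handles checkpoint loading itself; if you want to inspect a checkpoint manually, the generic PyTorch pattern is as follows (the internal key layout of these `.pth.tar` files is an assumption):

```python
import torch

# Load on CPU so no GPU is required just to inspect the file.
ckpt = torch.load("mnist.pth.tar", map_location="cpu")
print(type(ckpt))  # typically a dict holding a state_dict, but verify
```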
## Evaluation Code
MNIST:
```bash
python evaluation.py --model mnist.pth.tar
```

CIFAR-10:
```bash
python evaluation.py --model cifar.pth.tar --datapath cifar --dataset cifar
```

# Training
## Baseline
### MNIST
```bash
python train.py --self-paced False --mean-teacher False
```

### CIFAR-10
```bash
python train.py --self-paced False --mean-teacher False --dataset cifar --datapath cifar
```
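The Self-PU paper builds on the non-negative PU (nnPU) risk estimator of Kiryo et al. (2017), so the baseline commands above presumably minimize that risk. A minimal sketch of the nnPU loss with a sigmoid surrogate, where `prior` is the assumed-known positive class prior (how `train.py` actually computes its loss is not shown here):

```python
import torch

def nnpu_loss(logits_p, logits_u, prior):
    """Non-negative PU risk (sketch).
    logits_p: model outputs on labeled positives,
    logits_u: model outputs on unlabeled samples,
    prior:    class prior pi = P(y = +1)."""
    loss_p_pos = torch.sigmoid(-logits_p).mean()  # positives as +1
    loss_p_neg = torch.sigmoid(logits_p).mean()   # positives as -1
    loss_u_neg = torch.sigmoid(logits_u).mean()   # unlabeled as -1
    # Estimated negative-class risk; clamping it at zero is the
    # "non-negative" correction that curbs overfitting.
    neg_risk = loss_u_neg - prior * loss_p_neg
    return prior * loss_p_pos + torch.clamp(neg_risk, min=0.0)
```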
## Self-PU (without self-calibration)
Training with self-calibration would be expensive. A cheaper alternative:
### MNIST
```bash
python train_2s2t.py --soft-label
```
### CIFAR-10
```bash
python train_2s2t.py --dataset cifar --datapath cifar --soft-label
```

## Self-PU
### MNIST
```bash
python train_2s2t_mix.py --soft-label
```

### CIFAR-10
```bash
python train_2s2t_mix.py --dataset cifar --datapath cifar --soft-label
```

## Reproduce
| Seed | Accuracy on MNIST | Accuracy on CIFAR-10 |
| ---- | ---- | ---- |
| 3 | 93.87% | 89.68% |
| 13 | 94.68% | 90.15% |
| 23 | 94.44% | 89.38% |
| 33 | 93.84% | 89.69% |
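Each row is a single run under one random seed. How the training scripts consume the seed is not shown here; the standard PyTorch recipe for fixing one looks like:

```python
import random

import numpy as np
import torch

def set_seed(seed):
    """Fix the common RNG sources for a reproducible run (sketch)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(3)  # e.g. the first row of the table
```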