Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/saisrivatsan/deep-opt-auctions
Implementation of Optimal Auctions through Deep Learning
- Host: GitHub
- URL: https://github.com/saisrivatsan/deep-opt-auctions
- Owner: saisrivatsan
- License: mit
- Created: 2019-01-17T17:11:24.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2019-11-24T22:06:15.000Z (almost 5 years ago)
- Last Synced: 2024-10-30T16:29:37.121Z (17 days ago)
- Topics: auctions, deep-learning, economics, economics-and-computation, machine-learning, mechanism-design, multi-agent-systems
- Language: Jupyter Notebook
- Homepage:
- Size: 3.89 MB
- Stars: 110
- Watchers: 7
- Forks: 37
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Optimal Auctions through Deep Learning
Implementation of "Optimal Auctions through Deep Learning" (https://arxiv.org/pdf/1706.03459.pdf)
## Getting Started
Install the following packages:
- Python 2.7
- TensorFlow
- NumPy and Matplotlib
- Easydict: `pip install easydict`
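As a quick sanity check that the dependencies are importable (a minimal sketch; the module names below are the standard ones for these packages):
```
# Minimal environment check for the packages listed above (Python 2.7).
import tensorflow as tf
import numpy as np
import matplotlib
from easydict import EasyDict

print("TensorFlow:", tf.__version__)
print("NumPy:", np.__version__)
print("Matplotlib:", matplotlib.__version__)
```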
## Running the experiments
### RegretNet
#### For Gradient-Based approach:
Default hyperparameters are specified in regretNet/cfgs/.
#### For Sample-Based approach:
Modify the following hyperparameters in the config file specified in regretNet/cfgs/.
```
cfg.train.gd_iter = 0
cfg.train.num_misreports = 100
cfg.val.num_misreports = 100 # Number of val-misreports is always equal to the number of train-misreports
```
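The configs use easydict (listed under Getting Started); a minimal sketch of how the fields above might be defined (the surrounding structure is an assumption, only the field names are taken from the snippet above):
```
from easydict import EasyDict as edict

# Hypothetical config skeleton; only the fields discussed above are shown.
cfg = edict()
cfg.train = edict()
cfg.train.gd_iter = 0            # 0 switches off the gradient-based misreport search
cfg.train.num_misreports = 100   # number of sampled misreports during training
cfg.val = edict()
cfg.val.num_misreports = 100     # kept equal to the training value
```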
For training the network, testing the learned mechanism, and computing the baselines, run:
```
cd regretNet
python run_train.py [setting_name]
python run_test.py [setting_name]
python run_baseline.py [setting_name]
```

setting\_no | setting\_name |
:---: | :---: |
(a) | additive\_1x2\_uniform |
(b) | unit\_1x2\_uniform\_23 |
\(c\) | additive\_2x2\_uniform |
(d) | CA\_sym\_uniform\_12 |
(e) | CA\_asym\_uniform\_12\_15 |
(f) | additive\_3x10\_uniform |
(g) | additive\_5x10\_uniform |
(h) | additive\_1x2\_uniform\_416\_47 |
(i) | additive\_1x2\_uniform\_triangle |
(j) | unit\_1x2\_uniform |
(k) | additive\_1x10\_uniform |
(l) | additive\_1x2\_uniform\_04\_03 |
(m) | unit\_2x2\_uniform |
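For example, a concrete run for setting (a), assuming the setting name is passed exactly as listed above:
```
cd regretNet
python run_train.py additive_1x2_uniform
python run_test.py additive_1x2_uniform
python run_baseline.py additive_1x2_uniform
```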
### RochetNet (Single Bidder Auctions)
Default hyperparameters are specified in rochetNet/cfgs/.
For training the network, testing the learned mechanism, and computing the baselines, run:
```
cd rochetNet
python run_train.py [setting_name]
python run_test.py [setting_name]
python run_baseline.py [setting_name]
```
setting\_no | setting\_name |
:---: | :---: |
(a) | additive\_1x2\_uniform |
(b) | additive\_1x2\_uniform\_416\_47 |
\(c\) | additive\_1x2\_uniform\_triangle |
(d) | additive\_1x2\_uniform\_04\_03 |
(e) | additive\_1x10\_uniform |
(f) | unit\_1x2\_uniform |
(g) | unit\_1x2\_uniform\_23 |
### MyersonNet (Single Item Auctions)
Default hyperparameters are specified in utils/cfg.py.
For training the network, testing the learned mechanism, and computing the baselines, run:
```
cd myersonNet
python main.py -distr [setting_name]
# or
bash myerson.sh
```
setting\_no | setting\_name |
:---: | :---: |
(a) | exponential |
(b) | uniform |
\(c\) | asymmetric\_uniform |
(d) | irregular |
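For example, a concrete run for setting (b), assuming the distribution name is passed exactly as listed above:
```
cd myersonNet
python main.py -distr uniform
```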
## Settings
### Single Bidder
- **additive\_1x2\_uniform**: A single bidder with additive valuations over two items, where the item values are drawn from U\[0, 1\].
- **unit\_1x2\_uniform\_23**: A single bidder with unit-demand valuations over two items, where the item values are drawn from U\[2, 3\].
- **additive\_1x2\_uniform\_416\_47**: A single additive bidder with preferences over two non-identically distributed items, where v1 ∼ U\[4, 16\] and v2 ∼ U\[4, 7\].
- **additive\_1x2\_uniform\_triangle**: A single additive bidder with preferences over two items, where (v1, v2) are drawn jointly and uniformly from a unit triangle with vertices (0, 0), (0, 1) and (1, 0); see the sampling sketch after this list.
- **unit\_1x2\_uniform**: A single unit-demand bidder with preferences over two items, where the item values are drawn from U\[0, 1\].
- **additive\_1x2\_uniform\_04\_03**: A single additive bidder with preferences over two items, where v1 ∼ U\[0, 4\] and v2 ∼ U\[0, 3\].
- **additive\_1x10\_uniform**: A single additive bidder and 10 items, where the bidder draws its value for each item from U\[0, 1\].
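As an illustration of the less standard distributions above, a short NumPy sketch (hypothetical helper, not part of the repository) of sampling the **additive\_1x2\_uniform\_triangle** valuations:
```
import numpy as np

def sample_triangle(n, rng=np.random):
    """Sample n points (v1, v2) uniformly from the triangle with
    vertices (0, 0), (0, 1) and (1, 0), i.e. {v1, v2 >= 0, v1 + v2 <= 1}."""
    v = rng.uniform(size=(n, 2))
    above = v.sum(axis=1) > 1.0   # points above the diagonal ...
    v[above] = 1.0 - v[above]     # ... are reflected back into the triangle
    return v
```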
### Multiple Bidders
- **additive\_2x2\_uniform**: Two additive bidders and two items, where bidders draw their value for each item from U\[0, 1\].
- **unit\_2x2\_uniform**: Two unit-demand bidders and two items, where the bidders draw their value for each item i.i.d. from U\[0, 1\].
- **additive\_2x3\_uniform**: Two additive bidders and three items, where bidders draw their value for each item from U\[0, 1\].
- **CA\_sym\_uniform\_12**: Two bidders and two items, with v1,1, v1,2, v2,1, v2,2 ∼ U\[1, 2\], v1,{1,2} = v1,1 + v1,2 + C1 and v2,{1,2} = v2,1 + v2,2 + C2, where C1, C2 ∼ U\[−1, 1\]; see the sampling sketch after this list.
- **CA\_asym\_uniform\_12\_15**: Two bidders and two items, with v1,1, v1,2 ∼ U\[1, 2\], v2,1, v2,2 ∼ U\[1, 5\], v1,{1,2} = v1,1 + v1,2 + C1 and v2,{1,2} = v2,1 + v2,2 + C2, where C1, C2 ∼ U\[−1, 1\].
- **additive\_3x10\_uniform**: 3 additive bidders and 10 items, where bidders draw their value for each item from U\[0, 1\].
- **additive\_5x10\_uniform**: 5 additive bidders and 10 items, where bidders draw their value for each item from U\[0, 1\].
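Similarly, a short NumPy sketch (hypothetical helper, not part of the repository) of sampling the **CA\_sym\_uniform\_12** valuations:
```
import numpy as np

def sample_ca_sym_uniform_12(n, rng=np.random):
    """Two bidders, two items: item values v_{i,j} ~ U[1, 2]; each bidder's
    value for the bundle {1, 2} is v_{i,1} + v_{i,2} + C_i with C_i ~ U[-1, 1]."""
    v_item = rng.uniform(1.0, 2.0, size=(n, 2, 2))   # shape: (sample, bidder, item)
    c = rng.uniform(-1.0, 1.0, size=(n, 2))          # per-bidder bundle complement
    v_bundle = v_item.sum(axis=2) + c                 # value for the full bundle
    return v_item, v_bundle
```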
## Visualization
Allocation probability plots for the **unit\_1x2\_uniform\_23** setting learnt by **regretNet**:
Allocation probability plots for the **additive\_1x2\_uniform\_416\_47** setting learnt by **rochetNet**:
For other allocation probability plots, check out the IPython notebooks in the `regretNet` or `rochetNet` folders.
## Reference
Please cite our work if you find our code or paper useful for your work.
```
@article{DFNP19,
author = {Paul D{\"{u}}tting and Zhe Feng and Harikrishna Narasimhan and David C. Parkes and Sai Srivatsa Ravindranath},
title = {Optimal Auctions through Deep Learning},
journal = {arXiv preprint arXiv:1706.03459},
year = {2019},
}
```