# Optimization of Transmit Beamforming With Channel Covariances for MISO Downlink Assisted by Reconfigurable Intelligent Surfaces

## Citation
```
@INPROCEEDINGS{10595028,
author={Kyaw, Khin Thandar and Santipach, Wiroonsak and Mamat, Kritsada and Kaemarungsi, Kamol and Fukawa, Kazuhiko},
booktitle={2024 21st International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON)},
title={Optimization of Transmit Beamforming Using Channel Covariances for MISO Downlink Assisted by Reconfigurable Intelligent Surfaces},
year={2024},
volume={},
number={},
pages={1-6},
keywords={Array signal processing;Neural networks;Reconfigurable intelligent surfaces;MISO communication;Downlink;Numerical simulation;Telecommunications;Beamforming;optimization;downlink;RIS;channel covariance;MISO;neural network},
doi={10.1109/ECTI-CON60892.2024.10595028}}
```

-----

We propose an unsupervised beamforming neural network (BNN) to optimize transmit beamforming in downlink multiple-input single-output (MISO) channels. The proposed BNN uses only the channel covariances of the user equipments (UEs), which change infrequently, so the transmit beams do not need frequent updates. The BNN outperforms the zero-forcing (ZF) scheme when the UE channels are sparse with rank-one covariance. The sum-rate gain over ZF is most pronounced in heavily loaded systems, where the number of UEs approaches the number of base-station (BS) antennas. The complexity of the BNN is shown to be much lower than that of ZF. Future work includes extending the BNN to channel covariances with rank greater than one and jointly optimizing the transmit beams with the RIS elements.
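The ZF baseline compared against above pairs zero-forcing beams with water-filling power allocation. A minimal NumPy sketch of both ideas (illustrative only; the function names and the bisection water-level search are ours, not the repository's `water_filling.py`):

```python
import numpy as np

def zf_beams(H):
    """Zero-forcing beams: unit-norm columns of the channel pseudo-inverse.

    H: (K, Nt) complex matrix, one row per UE channel."""
    W = np.linalg.pinv(H)                 # (Nt, K); H @ W is ~identity
    return W / np.linalg.norm(W, axis=0)  # normalize each beam to unit power

def water_filling(gains, p_total, tol=1e-12):
    """Water-filling power allocation over parallel channels.

    gains: effective channel gains; p_total: transmit power budget."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / gains.min()  # bracket the water level
    while hi - lo > tol:                        # bisection on the level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - 1.0 / gains, 0.0).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - 1.0 / gains, 0.0)
```

Stronger channels sit above the "water level" and receive more power; channels whose inverse gain exceeds the level get none.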

### System Model
***
The implementation of the neural network model is adapted from [TianLin0509/BF-design-with-DL](https://github.com/TianLin0509/BF-design-with-DL) to meet our system requirements.

> [!IMPORTANT]
> For details on the custom Downlink Beamforming with Reconfigurable Intelligent Surface environment, please refer to the paper:
### K. T. Kyaw, W. Santipach, K. Mamat, K. Kaemarungsi and K. Fukawa, ["Optimization of Transmit Beamforming Using Channel Covariances for MISO Downlink Assisted by Reconfigurable Intelligent Surfaces"](https://ieeexplore.ieee.org/document/10595028), in _2024 21st International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON)_.



### Simulation Parameters
| Parameter | Value |
|---|---|
| Number of UEs | Default: 8; otherwise: 6 or 10 |
| Number of BS transmit antennas ($N_t$) | Default: 16; otherwise: 10 |
| Number of RIS elements ($N$) | Default: 30; otherwise: 60 |
| Downlink band | mmWave frequencies (> 30 GHz) assumed |
| Channel model | Rayleigh fading |
| Antenna configuration | MISO |
| Frequency reuse scheme | Large frequency reuse factor |
| Mobility model | Stationary |
| Learning type | Unsupervised |

### Implementation Details of the proposed BNN
| Layer Name | Output Dimension | Activation Function |
|---|---|---|
| Input layer 1 | [M+K, 2, $N_t$, $N_t$] |- |
| Input layer 2 | [1] |- |
| Input layer 3 | [M+K, 2, $N_t$, 1] |- |
| Concatenate layer | [2 $N_t$ (M+K)($N_t$+1)+1, 1] |- |
| Dense layer 1 | [256, 1] |softplus |
| Dense layer 2 | [128, 1] |softplus |
| Dense layer 3 | [64, 1] |softplus |
| Lambda layer 1 | [32, 1] |- |
| Lambda layer 2 | [32, 1] |- |
| Dense layer 4 | [M+K, 1] |softplus |
| Dense layer 5 | [M+K, 1] |softplus |
| Lambda layer 3 | [M+K, 1] |- |
| Lambda layer 4 | [M+K, 1] |- |
| Lambda layer 5 | [M+K, $N_t$, 1] |- |
| Lambda layer 6 | [1] |- |
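
As a rough, framework-free illustration of the table's dense trunk (the actual model is built in Keras; the weights here are random stand-ins, the final normalization only approximates the Lambda layers, and the values of M and K are placeholders):

```python
import numpy as np

def softplus(x):
    # numerically stable softplus: log(1 + e^x)
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def bnn_trunk(x, M=2, K=8, seed=0):
    """Toy forward pass mirroring the table: flattened input ->
    256 -> 128 -> 64 (softplus) -> (M+K)-dim softplus head, whose
    output a Lambda layer would scale to meet the power budget
    (here simply normalized to sum to 1)."""
    rng = np.random.default_rng(seed)
    h = np.ravel(x)
    for d_out in (256, 128, 64, M + K):
        W = rng.standard_normal((d_out, h.size)) / np.sqrt(h.size)
        h = softplus(W @ h)
    return h / h.sum()  # nonnegative allocation over the M+K outputs
```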

### Training Hyperparameters of the BNN
| Hyperparameter | Value |
|---|---|
| Number of episodes | Maximum: $500$ |
| Mini-batch size | $32$ samples |
| Network weight initialization | Keras defaults |
| Optimizer | Adam |
| Learning rate | Maximum: $10^{-5}$; minimum: $10^{-7}$ |
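
The table gives only the learning-rate endpoints; how the rate moves between them is not specified here. One plausible, purely hypothetical schedule is a stepped decay clamped to the floor:

```python
def lr_schedule(epoch, lr_max=1e-5, lr_min=1e-7, decay=0.5, patience=50):
    """Hypothetical stepped decay between the table's endpoints:
    halve the learning rate every `patience` episodes, never
    dropping below lr_min. Decay factor and patience are guesses."""
    return max(lr_max * decay ** (epoch // patience), lr_min)
```

In Keras such behavior would typically come from a `ReduceLROnPlateau` or `LearningRateScheduler` callback.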

### Numerical Results
***
Figures of the sum rates and computation time in the paper can be found in the [sumRates](./sumRates/) and [elapsedTime](./elapsedTime/Bar_time.png) folders, respectively, or below. The hyperparameters match those used for all figures presented in the paper.






Please modify `N`, `Nt`, `totalUsers`, `Lm`, and `Lk` in [NNUtils.py](./NNUtils.py) and in the corresponding `python` plot files to reproduce all figures in the paper.
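
For reference, the defaults from the parameter table, written as the kind of module-level constants one would edit (the names come from the instruction above; the roles of `Lm` and `Lk` are defined in `NNUtils.py` itself, so no values are suggested for them here):

```python
# Defaults from the simulation-parameter table; edit in NNUtils.py.
# These are the paper's default values, not the only valid choices.
N = 30           # number of RIS elements (alternative: 60)
Nt = 16          # number of BS transmit antennas (alternative: 10)
totalUsers = 8   # number of UEs (alternatives: 6 or 10)
```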

### How to use
***
**0. Requirements**
```text
python==3.10.10
matplotlib==3.7.1
numpy==1.24.3
tensorflow==2.15.0
keras==2.15.0
```

**1. Implementation**
* Generate the dataset:
```bash
python covariance.py
```

* Calculate the sum rate of ZF beams with water-filling power allocation:
```bash
python water_filling.py
```

* Train the model:
```bash
python train_unsuper.py
```

* Test the model:
```bash
python test_unsuper.py
```

* Check the elapsed time:
```bash
python timer_calculation.py
```

* Plot the graphs:
```bash
python plot_corresponding_number_.py
```
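
The steps above can be chained. A small hypothetical driver (not part of the repository) that runs them in order with the current interpreter; the plotting script is omitted because its filename depends on the figure being reproduced:

```python
import subprocess
import sys

# Repository scripts in their intended order (from the steps above).
STEPS = [
    "covariance.py",         # generate the dataset
    "water_filling.py",      # ZF + water-filling baseline sum rate
    "train_unsuper.py",      # train the BNN
    "test_unsuper.py",       # test the trained BNN
    "timer_calculation.py",  # check the elapsed time
]

def run_pipeline(steps=STEPS, dry_run=False):
    """Run each script in order, stopping on the first failure.
    With dry_run=True, just return the commands that would run."""
    cmds = [[sys.executable, script] for script in steps]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```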

Elapsed-time info, loss curves, and sum-rate plots can also be viewed in the `timer`, `train`, and `Plotting` folders, which are created automatically after running the scripts above.