Neural-PRNU-Extractor
=====================

A modified version of the ``PyTorch implementation of FFDNet image denoising``, created for the ``Signal, Image, and Video`` course of the master's degree program in Artificial Intelligence Systems and Computer Science at the ``University of Trento``.

About
=====

Original author
^^^^^^^^^^^^^^^

The original FFDNet implementation was provided by:

* Author: Matias Tassano [email protected]
* Copyright: (C) 2018 IPOL Image Processing On Line http://www.ipol.im/
* Licence: GPL v3+
* Source code: http://www.ipol.im/pub/art/2019/231/
* Reference paper: https://doi.org/10.5201/ipol.2019.231

Later authors
^^^^^^^^^^^^^

* Alghisi Simone
* Bortolotti Samuele
* Rizzoli Massimo

OVERVIEW
========

Introduction
^^^^^^^^^^^^

This source code provides a modified version of FFDNet image denoising, as in Zhang, Kai, Wangmeng Zuo, and Lei Zhang, ``FFDNet: Toward a Fast and Flexible Solution for CNN-Based Image Denoising`` (https://arxiv.org/abs/1710.04026). This version, unlike the original, concentrates on detecting cameras' PRNU.
It includes the option of training the network using the `Wiener filter <https://en.wikipedia.org/wiki/Wiener_filter>`_ as a strategy to detect and extract noise from images, in addition to the original method provided in the paper.
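As a rough intuition for this strategy (a minimal sketch, not the repository's actual training code; the ``wiener_noise_residual`` helper and its window size are illustrative assumptions), the noise residual of an image can be obtained by subtracting its Wiener-filtered version from the image itself:

.. code-block:: python

   import numpy as np
   from scipy.signal import wiener

   def wiener_noise_residual(image: np.ndarray, window: int = 5) -> np.ndarray:
       """Estimate the noise residual of a grayscale image.

       The adaptive Wiener filter yields a denoised estimate; subtracting
       it from the input leaves (approximately) the noise component.
       """
       image = image.astype(np.float64)
       denoised = wiener(image, mysize=window)  # local adaptive Wiener filter
       return image - denoised                  # residual = estimated noise
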
Objective
^^^^^^^^^

Noise reduction is the process of removing noise from a signal. Images taken with both digital cameras and conventional film cameras pick up noise from a variety of sources, which can be (partially) removed for practical purposes such as computer vision. ``Neural-PRNU-Extractor`` aims at predicting the noise of an image, given a noise level :math:`\sigma \in \left[0, 75 \right]`.
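For context, FFDNet conditions its denoising strength on a noise-level map supplied alongside the image; for a uniform noise level this map is just a constant plane. A hedged PyTorch sketch of that idea follows (the function name and tensor shapes are assumptions, not the repository's API):

.. code-block:: python

   import torch

   def constant_noise_map(batch: torch.Tensor, sigma: float) -> torch.Tensor:
       """Build a uniform noise-level map for a batch of images.

       batch: tensor of shape (N, C, H, W) with values in [0, 1].
       Returns an (N, 1, H, W) map filled with sigma / 255.
       """
       n, _, h, w = batch.shape
       return torch.full((n, 1, h, w), sigma / 255.0,
                         dtype=batch.dtype, device=batch.device)
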
In addition to noise extraction, ``Neural-PRNU-Extractor`` can compute the PRNU from a dataset of flat images and evaluate it against the corresponding natural images. PRNU, the acronym for ``photo response non-uniformity``, is a form of fixed-pattern noise introduced by digital image sensors, as used in cameras and optical instruments, and is used to identify which device generated an image.
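As an illustration of how a fingerprint can be computed from flat images (a simplified sketch of the standard weighted estimator, not the repository's exact implementation; ``estimate_prnu`` is a hypothetical name):

.. code-block:: python

   import numpy as np

   def estimate_prnu(flat_images, residuals):
       """Approximate a camera's PRNU fingerprint.

       flat_images, residuals: lists of equally-sized 2-D float arrays,
       where residuals[i] is the noise extracted from flat_images[i].
       Implements the classic weighting sum(W * I) / sum(I ** 2).
       """
       num = np.zeros_like(flat_images[0], dtype=np.float64)
       den = np.zeros_like(flat_images[0], dtype=np.float64)
       for img, res in zip(flat_images, residuals):
           img = img.astype(np.float64)
           num += res * img      # weight each residual by image intensity
           den += img * img
       return num / (den + 1e-12)  # small epsilon avoids division by zero
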
Schema
------

.. image:: presentation/imgs/prnu_extraction_pipeline.pdf
   :alt: https://github.com/samuelebortolotti/neural-prnu-extractor/blob/master/presentation/imgs/prnu_extraction_pipeline.pdf

USER GUIDE
==========

The code as-is runs in Python 3.9 with the following dependencies:

Dependencies
^^^^^^^^^^^^

* PyTorch v1.10.0
* scikit-image
* torchvision
* OpenCV
* HDF5
* tensorboard
* tqdm
* prnu-python

Usage
^^^^^

To facilitate the use of the application, a ``Makefile`` has been provided; to see its functions, simply call the appropriate ``help`` command with `GNU/Make <https://www.gnu.org/software/make/>`_:

.. code-block:: shell
make help
0. Set up
^^^^^^^^^

For the development phase, the Makefile provides an automatic method to create a virtual environment.
If you want a virtual environment for the project, you can run the following commands:
.. code-block:: shell
pip install --upgrade pip
Virtual environment creation in the venv folder
.. code-block:: shell
make env
Virtual environment activation
.. code-block:: shell
source ./venv/ffdnet/bin/activate
Install the requirements listed in ``requirements.txt``
.. code-block:: shell
make install
**Note:** if you have a Tesla K40c GPU, you can use the dependency file for the MMlab GPU [``requirements.mmlabgpu.txt``]:
.. code-block:: shell
make install-mmlab
1. Documentation
^^^^^^^^^^^^^^^^

The documentation is built using `Sphinx v4.3.0 <https://www.sphinx-doc.org/>`_.
If you want to build the documentation, you need to enter the project folder first:
.. code-block:: shell
cd neural-prnu-extractor
Install the development dependencies [``requirements.dev.txt``]
.. code-block:: shell
make install-dev
Build the Sphinx layout
.. code-block:: shell
make doc-layout
Build the documentation
.. code-block:: shell
make doc
Open the documentation
.. code-block:: shell
make open-doc
2. Data preparation
^^^^^^^^^^^^^^^^^^^

In order to train the provided model, it is necessary to prepare the data first.
For this purpose, a set of commands has been created; note, however, that these
commands assume the naming syntax of the VISION dataset. This code does not
include image datasets, but you can retrieve one, such as the VISION dataset.

Split into train and validation
-------------------------------

First of all, you will need to split the original dataset into training and validation sets.
You can learn more about how to perform this operation by executing
.. code-block:: shell
python -m ffdnet prepare_vision --help
Generally, any dataset with a similar structure (no subfolders, and images named
``<device>_<type>_<number>.jpg``) can be split by executing the following command:

.. code-block:: shell
python -m ffdnet prepare_vision \
SOURCE_DIR \
DESTINATION_DIR \
--train_frac 0.7

**NOTES**
* Use the ``-m`` option to move files instead of copying them
* ``--train_frac`` is used to specify the proportion of elements assigned to training versus validation

Prepare the patches
~~~~~~~~~~~~~~~~~~~

At this point, you will need to prepare the dataset composed of patches by executing
*prepare_patches.py* indicating the paths to the directories containing the
training and validation datasets by specifying as arguments *--trainset_dir* and
*--valset_dir*\ , respectively. You can learn more about how to perform this operation by executing
.. code-block:: shell
python -m ffdnet prepare_patches --help
**EXAMPLE**
To prepare a dataset of patches 44x44 with stride 20, you can execute
.. code-block:: shell
python -m ffdnet prepare_patches \
SOURCE_DIR \
DESTINATION_DIR \
--patch_size 44 \
--stride 20

**NOTES**

* To prepare a grayscale dataset, pass the ``--gray`` flag
* *--max_number_patches* can be used to set the maximum number of patches contained in the database
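For intuition, extracting patches with a given size and stride amounts to sliding a square window over each image. A minimal sketch of the idea (not the repository's actual code; ``extract_patches`` is a hypothetical helper):

.. code-block:: python

   import numpy as np

   def extract_patches(img: np.ndarray, patch_size: int = 44, stride: int = 20):
       """Yield square patches from a 2-D (grayscale) image.

       Windows of shape (patch_size, patch_size) are taken every
       `stride` pixels along both axes; borders that do not fit are skipped.
       """
       h, w = img.shape[:2]
       for i in range(0, h - patch_size + 1, stride):
           for j in range(0, w - patch_size + 1, stride):
               yield img[i:i + patch_size, j:j + patch_size]
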
3. Training
^^^^^^^^^^^

Train a model
-------------

A model can be trained after having built the training and validation databases
(i.e. *train_rgb.h5* and *val_rgb.h5* for color denoising, and *train_gray.h5*
and *val_gray.h5* for grayscale denoising).
Only training on GPU is supported.

.. code-block:: shell
python -m ffdnet train --help
**EXAMPLE**
.. code-block:: shell
python -m ffdnet train \
--batch_size 128 \
--val_batch_size 128 \
--epochs 80 \
--filter wiener \
--experiment_name en \
--gray

**NOTES**
* The training process can be monitored with TensorBoard as logs get saved
in the *experiments/experiment_name* folder
* By default, noise added at validation is set to 25 (\ *--val_noiseL* flag)
* A previous training can be resumed passing the *--resume_training* flag
* It is possible to specify a different dataset location for training (validation) with ``--traindbf`` (``--valdbf``)
* GPU resources can be limited (when using torch 1.10.0) with the ``--gpu_fraction`` option
* Training was performed on a file containing 50160 patches of size 100x100 extracted with a 50 px stride, while for validation we considered a file containing 16080 patches.
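For context, denoisers of this family are typically trained on patches corrupted with additive white Gaussian noise at a sampled level :math:`\sigma`. A hedged sketch of that corruption step (an assumption about the pipeline, with a hypothetical ``add_awgn`` helper):

.. code-block:: python

   import numpy as np

   def add_awgn(patch: np.ndarray, sigma: float = 25.0) -> np.ndarray:
       """Corrupt a patch (values in [0, 255]) with Gaussian noise of std `sigma`."""
       noise = np.random.normal(0.0, sigma, size=patch.shape)
       return np.clip(patch.astype(np.float64) + noise, 0.0, 255.0)
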
4. Testing
^^^^^^^^^^

You can learn more about the test function by calling the help of the test sub-parser:
.. code-block:: shell
python -m ffdnet test --help
If you want to denoise an image using one of the pre-trained models
found under the *models* folder, you can execute:

.. code-block:: shell
python -m ffdnet test \
INPUT_IMG1 INPUT_IMG2 ... INPUT_IMGK \
models/WEIGHTS \
DST_FOLDER

To run the algorithm on CPU instead of GPU:
.. code-block:: shell
python -m ffdnet test \
INPUT_IMG1 INPUT_IMG2 ... INPUT_IMGK \
models/WEIGHTS \
DST_FOLDER \
--device cpu

Or just change the flags' values within the Makefile and run:
.. code-block:: shell
make test
Output example
--------------

Original image

.. image:: presentation/imgs/original.jpg
   :alt: https://github.com/samuelebortolotti/neural-prnu-extractor/blob/master/presentation/imgs/original.jpg

Histogram-equalized predicted noise

.. image:: presentation/imgs/histogram_equalized_prediction_noise.jpg
   :alt: https://github.com/samuelebortolotti/neural-prnu-extractor/blob/master/presentation/imgs/histogram_equalized_prediction_noise.jpg

Denoised image

.. image:: presentation/imgs/prediction_denoised.jpg
   :alt: https://github.com/samuelebortolotti/neural-prnu-extractor/blob/master/presentation/imgs/prediction_denoised.jpg

**NOTES**
* Models have been trained for values of noise in [0, 5]
* Models have been trained with the Wiener filter as the denoising method

5. PRNU data preparation
^^^^^^^^^^^^^^^^^^^^^^^^

In order to evaluate the model according to the PRNU, it is necessary to prepare the data first.
For this purpose, a set of commands has been created; note, however, that these
commands assume the naming syntax of the VISION dataset. This code does not
include image datasets, but you can retrieve one, such as the VISION dataset.

Split into flat and nat
-----------------------

For this purpose, you will need to split the original dataset into flat and nat images.
In particular, a dataset structure as follows is required:

.. code-block:: shell
.
├── flat
│ ├── D04_I_0001.jpg
.....
│ └── D06_I_0149.jpg
└── nat
├── D04_I_0001.jpg
...
└── D06_I_0132.jpg

You can learn more about how to perform this operation by executing
.. code-block:: shell
python -m ffdnet prepare_prnu --help
Generally, any dataset with a similar structure (no subfolders, and images named
``<device>_<type>_<number>.jpg``) can be split by executing the following:

.. code-block:: shell
python -m ffdnet prepare_prnu \
SOURCE_DIR

**NOTES**
* Use the ``-m`` option to move files instead of copying them
* Use the ``--dst`` option to specify a different destination folder

6. PRNU evaluation
^^^^^^^^^^^^^^^^^^

To evaluate a model according to the PRNU, a set of commands with various options was created.
You can learn more about how to perform this operation by executing:

.. code-block:: shell
python -m ffdnet prnu --help
The evaluation uses a dataset, generated as described in the previous section, to evaluate a specific model.
.. code-block:: shell
python -m ffdnet prnu \
PREPARED_DATASET_DIR \
models/WEIGHTS

Output example
--------------

Estimated PRNU
.. image:: presentation/imgs/prnu.jpg
   :alt: https://github.com/samuelebortolotti/neural-prnu-extractor/blob/master/presentation/imgs/prnu.jpg

Statistics
.. code-block:: python
{
'cc': {
'auc': 0.9163367807608622,
'eer': 0.19040247678018576,
'fpr': array([
...
]),
'th': array([
...
])
},
'pce': {
'auc': 0.8582477067737637,
'eer': 0.22678018575851394,
'fpr': array([
...
]),
'th': array([
...
]),
'tpr': array([
...
])
}
}

Where:

* ``cc`` is the cross-correlation
* ``pce`` is the peak-to-correlation energy
* ``auc`` is the area under the curve
* ``eer`` is the equal error rate
* ``fpr`` is the false positive rate
* ``th`` are the thresholds
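For reference, the plain normalized cross-correlation between a camera fingerprint and an image's noise residual can be sketched as follows (illustrative only; the repository lists ``prnu-python`` among its dependencies for these metrics, and ``normalized_cross_correlation`` is a hypothetical name):

.. code-block:: python

   import numpy as np

   def normalized_cross_correlation(fingerprint: np.ndarray,
                                    residual: np.ndarray) -> float:
       """Normalized correlation between two equally-sized 2-D arrays."""
       f = fingerprint - fingerprint.mean()
       r = residual - residual.mean()
       denom = np.linalg.norm(f) * np.linalg.norm(r)
       return float((f * r).sum() / denom) if denom else 0.0
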
**NOTES**

* Use the ``--sigma`` option to specify a fixed noise value for the dataset (if not specified, it is estimated for every image)
* Use the ``--gray`` option if using a gray dataset
* Use the ``--cut_dim`` option to specify the crop size of the images used for the PRNU estimation
* For the fingerprint extraction, we considered a set of 3 camera models with 130 (flat) images per model

ABOUT THIS FILE
===============

Copyright 2018 IPOL Image Processing On Line http://www.ipol.im/
Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.
ACKNOWLEDGEMENTS
================

Some of the code is based on code by Yiqi Yan [email protected]