
# VR180 image converter



CI Status · Documentation Status · Test coverage · Poetry · black · pre-commit · PyPI Version · Supported Python versions · License

---

**Documentation**: https://vr180-convert.readthedocs.io

**Source Code**: https://github.com/34j/vr180-convert

---

Simple VR180 image converter on top of OpenCV and NumPy.

## Installation

Install this via pip or pipx (or your favourite package manager):

```shell
pipx install vr180-convert
```

## Usage

Simply run the following command to convert two fisheye images into a side-by-side (SBS) equirectangular VR180 image:

```shell
v1c lr left.jpg right.jpg
```

| left.jpg | right.jpg | Output |
| ------------------------------ | ------------------------------- | ---------------------------------------------------- |
| ![left](docs/_static/test.jpg) | ![right](docs/_static/test.jpg) | ![output](docs/_static/test.lr.PolynomialScaler.jpg) |

If the left and right image paths are the same, the image is divided into two halves (left and right, SBS) and each half is processed as if it were a separate image.
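
As a rough illustration of what this halving amounts to (a minimal sketch using OpenCV directly; the file names are hypothetical and this is not the tool's internal code):

```python
import cv2

# Split a side-by-side (SBS) image into its left and right halves.
sbs = cv2.imread("sbs.jpg")
width = sbs.shape[1]
left = sbs[:, : width // 2]   # left eye occupies the left half
right = sbs[:, width // 2 :]  # right eye occupies the right half
cv2.imwrite("left_half.jpg", left)
cv2.imwrite("right_half.jpg", right)
```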

## Advanced usage

### Automatic image search

If either the left or right image path is a directory, the program searches that directory for the image whose creation time is closest to that of the other image.

```shell
v1c lr left.jpg right_dir
v1c lr left_dir right.jpg
```

Since camera clocks may drift or be set inaccurately, it is recommended to check how quickly the clocks of the two cameras diverge and to synchronize them before shooting.
A known offset can also be compensated with the `-ac` option.

```shell
v1c lr left.jpg right_dir -ac 1 # the clock of the right camera is 1 second faster / ahead
v1c lr left_dir right.jpg -ac 1 # the clock of the right camera is 1 second faster / ahead
```
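
Conceptually, the search picks the file whose timestamp, corrected by the clock offset, is closest to that of the given image. The sketch below uses filesystem modification times and a hypothetical helper name; the tool's actual implementation (e.g. which timestamp it reads) may differ:

```python
from pathlib import Path


def find_closest(reference: Path, directory: Path, clock_offset_s: float = 0.0) -> Path:
    """Return the JPEG in `directory` whose (offset-corrected) mtime is closest
    to that of `reference`. Illustrative only, not the tool's actual logic."""
    ref_time = reference.stat().st_mtime
    candidates = sorted(directory.glob("*.jpg"))
    return min(
        candidates,
        key=lambda p: abs((p.stat().st_mtime - clock_offset_s) - ref_time),
    )


# Equivalent in spirit to `v1c lr left.jpg right_dir -ac 1`
# (the right camera's clock is 1 second ahead):
# closest_right = find_closest(Path("left.jpg"), Path("right_dir"), clock_offset_s=1.0)
```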

### Radius estimation

By default, the radius of the non-black (image circle) area of the input image is estimated by counting black pixels, but specifying it manually gives more stable results:

```shell
v1c lr left.jpg right.jpg --radius 1000
v1c lr left.jpg right.jpg --radius max # min(width, height) / 2
```
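
The default estimation is roughly equivalent to thresholding away near-black pixels and measuring the extent of what remains. A simplified sketch (the threshold and the exact heuristic used by the tool are assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
mask = img > 10  # treat near-black pixels as lying outside the image circle
ys, xs = np.nonzero(mask)
# Half of the larger extent of the non-black region approximates the fisheye radius.
radius = max(xs.max() - xs.min(), ys.max() - ys.min()) / 2
print(radius)
```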

### Calibration

[Rotation matching using the least-squares method](https://lisyarus.github.io/blog/posts/3d-shape-matching-with-quaternions.html) can be performed by clicking corresponding points that can be regarded as infinitely far away from the camera.

- Using AKAZE as a feature matcher:

```shell
v1c lr left.jpg right.jpg --automatch fm
```

Since AKAZE matching produces false detections, matched points with high loss are treated as outliers and the least-squares fit is repeated several times with them removed.
Checking whether the images should be swapped using feature matching is theoretically possible but not implemented.

#### See Also

- [1.1.16. Robustness regression: outliers and modeling errors](https://scikit-learn.org/stable/modules/linear_model.html#robustness-regression-outliers-and-modeling-errors)

- Manually specifying the corresponding points using the GUI:

```shell
v1c lr left.jpg right.jpg --automatch gui
```

- Manually specifying the corresponding points using the CLI:

```shell
v1c lr left.jpg right.jpg --automatch "0,0;0,0;1,1;1,1" # left_x1,left_y1;right_x1,right_y1;...
```

Given corresponding direction vectors $a_k, b_k \in \mathbb{R}^3$ (regarded as points at infinity), the calibration solves

$$
\min_{R \in SO(3)} \sum_k \lVert R a_k - b_k \rVert^2
$$

Please also refer to the [Documentation](https://vr180-convert.readthedocs.io/en/latest/math.html) for mathematical details.
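
The minimization above has a closed-form solution. The sketch below uses the SVD-based (Kabsch/Wahba) formulation as an illustration; the article linked above uses quaternions, and this package's own solver may differ. In practice the fit would be repeated with high-loss points removed, as described above.

```python
import numpy as np


def fit_rotation(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Rotation R minimizing sum_k ||R a_k - b_k||^2 for (N, 3) arrays a, b."""
    h = a.T @ b  # 3x3 cross-covariance matrix, sum_k a_k b_k^T
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T


# Directions that differ by a 90-degree rotation about the z-axis:
a = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
print(fit_rotation(a, b))  # recovers that rotation
```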

### Anaglyph

The `--merge` option (which exports an [anaglyph](https://en.wikipedia.org/wiki/Anaglyph_3D) image) can be used to check whether the calibration succeeded: if it did, infinitely far points overlap in the merged image.

```shell
v1c lr left.jpg right.jpg --automatch gui --merge
```
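
As a rough illustration of the idea (the channel assignment follows the Tips table below, but this is an assumption about the output, not the tool's exact implementation):

```python
import cv2
import numpy as np

left = cv2.imread("left_converted.jpg")    # hypothetical converted left image
right = cv2.imread("right_converted.jpg")  # hypothetical converted right image

# Blue from the left eye, green and red from the right eye (OpenCV uses BGR order).
anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]
anaglyph[..., 1] = right[..., 1]
anaglyph[..., 2] = right[..., 2]
cv2.imwrite("anaglyph.jpg", anaglyph)
```

If the calibration succeeded, distant features coincide in the merged image and show no color fringing.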

### Swap

If the camera is mounted upside down, you can simply use the `--swap` option without changing the transformer or other parameters:

```shell
v1c lr left.jpg right.jpg --swap
```

Alternatively, if you notice after conversion that the left and right images are swapped, the image can be fixed with the `swap` command:

```shell
v1c swap rl.jpg
```


### Convert to Google's format (Photo Sphere XMP Metadata)

This format is special in that the right-eye image is base64-encoded into the metadata of the left-eye image. It is required by Google Photos and similar services.

You can convert the image to this format by:

```shell
v1c xmp lr.jpg
```

The [python-xmp-toolkit](https://github.com/python-xmp-toolkit/python-xmp-toolkit) used in this command requires [exempi](https://libopenraw.freedesktop.org/exempi/) to be installed. Note that if this command is called on Windows, it will attempt to install this library and its dependencies and then run the command on WSL using `subprocess`.
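
For reference, the embedding amounts to something like the following sketch using python-xmp-toolkit. The `GImage` property names follow Google's published Photo Sphere XMP specification; the actual command also writes additional VR180/GPano metadata, so treat this as illustrative only:

```python
import base64

from libxmp import XMPFiles, XMPMeta

GIMAGE_NS = "http://ns.google.com/photos/1.0/image/"
XMPMeta.register_namespace(GIMAGE_NS, "GImage")

# Base64-encode the right-eye image...
with open("right.jpg", "rb") as f:
    right_b64 = base64.b64encode(f.read()).decode("ascii")

# ...and store it in the left-eye image's XMP metadata.
xmpfile = XMPFiles(file_path="left.jpg", open_forupdate=True)
xmp = xmpfile.get_xmp()
xmp.set_property(GIMAGE_NS, "Mime", "image/jpeg")
xmp.set_property(GIMAGE_NS, "Data", right_b64)
if xmpfile.can_put_xmp(xmp):
    xmpfile.put_xmp(xmp)
xmpfile.close_file()
```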

#### References

- [imrivera/google\-photos\-vr180\-test: Test for XMP metadata parsing for VR180 pictures in Google Photos](https://github.com/imrivera/google-photos-vr180-test)
- [temoki/make_vr180photo_py: A Python script that combines left-eye and right-eye camera images into a VR180 3D photo](https://github.com/temoki/make_vr180photo_py)

### Custom conversion model

You can also specify the conversion model by adding Python code directly to the `--transformer` option:

```shell
v1c lr left.jpg right.jpg --transformer 'EquirectangularEncoder() * Euclidean3DRotator(from_rotation_vector([0, np.pi / 4, 0])) * FisheyeDecoder("equidistant")'
```

If a tuple is given, the first transformer is applied to the left image and the second to the right image; if a single transformer is given, it is applied to both images.

Please refer to the [API documentation](https://vr180-convert.readthedocs.io/) for the available transformers and their parameters.
For `from_rotation_vector`, please refer to the [numpy-quaternion documentation](https://quaternion.readthedocs.io/en/latest/Package%20API%3A/quaternion/#from_rotation_vector).

### Single image conversion

To convert a single image, use `v1c s` instead.

### Running commands for all images in a directory

```shell
find left_dir -type f -name '*.jpg' -exec v1c lr {} right_dir --automatch fm --radius max -ac 0 --out-path out \;
```

### Help

For more information, please refer to the help or API documentation:

```shell
v1c --help
```

## Usage as a library

For more complex transformations, it is recommended to create your own `Transformer`.

Note that the transformation is applied in the inverse direction (`new[(x, y)] = old[transform(x, y)]`); for example, to decode an [orthographic](https://en.wikipedia.org/wiki/Fisheye_lens#Mapping_function) fisheye image, `transform_polar` should be `arcsin(theta)`, not `sin(theta)`.

```python
from typing import Any

from numpy.typing import NDArray

from vr180_convert import (
    EquirectangularEncoder,
    FisheyeDecoder,
    PolarRollTransformer,
    apply_lr,
)


class MyTransformer(PolarRollTransformer):
    def transform_polar(
        self, theta: NDArray, roll: NDArray, **kwargs: Any
    ) -> tuple[NDArray, NDArray]:
        return theta**0.98 + theta**1.01, roll


transformer = EquirectangularEncoder() * MyTransformer() * FisheyeDecoder("equidistant")
apply_lr(transformer, left_path="left.jpg", right_path="right.jpg", out_path="output.jpg")
```

## Tips

### How to determine which image is left or right

| | Left | Right |
| ------------------- | --------------------------- | --------------------------- |
| Subject Orientation | Right | Left |
| Film Color | ${\color{red}\text{Red}}$ | ${\color{blue}\text{Blue}}$ |
| Anaglyph Color | ${\color{blue}\text{Blue}}$ | ${\color{red}\text{Red}}$ |

- In an SBS image, the subject is oriented toward the center.

### How to edit images

This program cannot read RAW files. To deal with blown highlights and similar issues, each image has to be processed in a program such as Photoshop, Lightroom, [RawTherapee](https://rawtherapee.com/downloads/), or [Darktable](https://www.darktable.org/install/).

However, this is laborious, so it is recommended to shoot in JPEG, taking care to **avoid overexposure** and to **match the settings** of the two cameras, then convert the images with this program and edit the converted output.

Example of editing in RawTherapee (light editing)

1. Rank the left images in RawTherapee.
2. Use this program to convert the images.
3. Edit the converted images in RawTherapee.

Example of editing in Photoshop (meticulous editing)

1. Rank the left images in RawTherapee or Lightroom.
2. Open left image as a Smart Object `LRaw`.
3. Add right image as a Smart Object `RRaw`.
4. Make **minimal** corrections just to match the exposure using `Camera Raw Filter`.
5. Make each Smart Object into Smart Objects (`L`, `R`) again and do any image-dependent processing, such as removing the background.
6. Make both images into a single Smart Object (`P`) and process them as a whole.
7. Export as a PNG file.
8. In the Smart Object `P` (created in step 6), hide the other Smart Object (`L` or `R`, created in step 5), save `P`, and export as a PNG file again.
9. Use this program to convert the images.

## Contributors ✨

Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!

## Credits

This package was created with
[Copier](https://copier.readthedocs.io/) and the
[browniebroke/pypackage-template](https://github.com/browniebroke/pypackage-template)
project template.