Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/AbhiLegend/GanOpenVINO
Last synced: about 1 month ago
JSON representation
- Host: GitHub
- URL: https://github.com/AbhiLegend/GanOpenVINO
- Owner: AbhiLegend
- Created: 2023-10-29T09:57:47.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2023-10-29T09:59:51.000Z (about 1 year ago)
- Last Synced: 2024-08-01T21:46:10.927Z (4 months ago)
- Language: Jupyter Notebook
- Size: 7.81 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-openvino - Implementing GAN with OpenVINO
README
# GanOpenVINO
OpenVINO, short for Open Visual Inference & Neural Network Optimization, is a toolkit from Intel that allows developers to deploy pre-trained deep learning models through a high-performance inference engine. OpenVINO supports models from various frameworks by converting them into an Intermediate Representation (IR), which the inference engine then consumes. Let's analyze the portion of the code related to OpenVINO:
1. Model Paths Setup:
- The code sets up paths to save the ONNX and OpenVINO IR (Intermediate Representation) models. The ONNX model will have a `.onnx` extension, while the IR model will have a `.xml` extension.
2. Loading and Exporting to ONNX:
- The trained generator model is loaded and set to evaluation mode.
- The model is then exported to ONNX format if it hasn't been exported already. This is done with `torch.onnx.export()`, which requires a dummy input (here, a tensor of size [1, 100]) so it can trace the input shape and type (see the export sketch after this list).
3. Conversion to OpenVINO IR:
- If the IR model does not already exist, the ONNX model is converted to OpenVINO's Intermediate Representation (IR) format. The code calls `ov.convert_model(onnx_path)` for this step, although the corresponding import isn't visible in the provided code. Once converted, the model is saved in IR format using `ov.save_model()` (see the conversion sketch after this list).
4. Print Statements:
- There are print statements indicating the status of the export and conversion processes, for example when the ONNX model has been exported or when the IR model already exists.

It's worth noting that once the model is in OpenVINO's IR format, it can be deployed efficiently on a wide range of Intel hardware (CPUs, integrated GPUs, and VPUs) using the OpenVINO inference engine. However, the actual inference or deployment portion isn't visible in the provided code (a minimal inference sketch is included below).
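To make steps 1 and 2 concrete, here is a minimal sketch of the path setup and ONNX export. The `Generator` class, file names, and output directory are illustrative assumptions, not taken from the repository; the only detail carried over from the description is the [1, 100] latent input.

```python
from pathlib import Path

import torch
import torch.nn as nn

# Illustrative paths; the notebook's actual filenames may differ.
MODEL_DIR = Path("model")
MODEL_DIR.mkdir(exist_ok=True)
onnx_path = MODEL_DIR / "generator.onnx"
ir_path = MODEL_DIR / "generator.xml"

# Hypothetical generator: any nn.Module taking a [batch, 100] latent vector works here.
class Generator(nn.Module):
    def __init__(self, latent_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

generator = Generator()
# generator.load_state_dict(torch.load("generator.pt"))  # trained weights would be loaded here
generator.eval()  # evaluation mode, as in step 2

# Export to ONNX only if the file is not already present (step 2).
if not onnx_path.exists():
    dummy_input = torch.randn(1, 100)  # matches the [1, 100] dummy input in the description
    torch.onnx.export(
        generator,
        dummy_input,
        str(onnx_path),
        input_names=["latent"],
        output_names=["image"],
    )
    print(f"ONNX model exported to {onnx_path}.")
else:
    print(f"ONNX model {onnx_path} already exists.")
```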
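Step 3 can then be sketched with the `ov.convert_model()` and `ov.save_model()` calls the README mentions, using the OpenVINO 2023.x Python API. The paths mirror the export sketch above and are assumptions.

```python
from pathlib import Path

import openvino as ov  # the `ov` alias implied by ov.convert_model / ov.save_model

onnx_path = Path("model/generator.onnx")  # same illustrative paths as above
ir_path = Path("model/generator.xml")

# Convert the ONNX model to OpenVINO IR only if the IR file is not already present.
if not ir_path.exists():
    ov_model = ov.convert_model(onnx_path)  # returns an in-memory ov.Model
    ov.save_model(ov_model, ir_path)        # writes generator.xml plus generator.bin
    print(f"IR model saved to {ir_path}.")
else:
    print(f"IR model {ir_path} already exists.")
```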
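The inference step itself is not part of the provided code, but for completeness a minimal sketch of running the exported generator through the OpenVINO runtime might look like the following. The device name and IR path are assumptions.

```python
import numpy as np
import openvino as ov

core = ov.Core()
# Compile the IR model for a target device; "CPU" could be swapped for "GPU", etc.
compiled = core.compile_model("model/generator.xml", device_name="CPU")

# Sample one latent vector and run the generator through the OpenVINO runtime.
latent = np.random.randn(1, 100).astype(np.float32)
result = compiled(latent)                # results keyed by output port
fake_image = result[compiled.output(0)]  # generated image as a NumPy array
print(fake_image.shape)
```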