2022.3 says Ubuntu 22.04 is supported as a preview. Can this script be updated as well so the recommended/tested compute runtime version can be installed without having to do it manually?
System information (version)
OpenVINO => main branch
Operating System / Platform => Windows 64 Bit
Compiler => Visual Studio 2019
Problem classification: Compilation problem.
Detailed description
While I was trying to build OpenVINO from the main branch with wheel packages, I got the following infor...
System information (version)
OpenVINO => 2022.3.0
Operating System / Platform => Windows 64 Bit
Compiler => Visual Studio 2019
Problem classification: Model Conversion
Framework: TensorFlow
Model name: VGGFace
Detailed description
I am using the VGGFace model for face recognition, successfull...
System information (version)
OpenVINO=> ❔
Operating System / Platform => ❔
Compiler => ❔
Problem classification => ❔
Detailed description
Steps to reproduce
Issue submission checklist
I report the issue, it's not a question
I checked the problem with documentation, FAQ, open is...
Hey, I've noticed that certain operations (e.g., SoftMax) exist in different opsets (e.g., 1 and 8). I am curious whether it's possible to pass extra options/config to the Model Optimizer to make it prefer a certain version. By default, SoftMax uses opset version 8, but I'd like to export it with opset v...
System information (version)
OpenVINO => 2022.3.0 and master
Operating System / Platform => Ubuntu 22.04
CMake => 3.22.1
Problem classification: build scripts
Detailed description
The issue can be reproduced with simple CMakeLists.txt file:
cmake_minimum_required(VERSION 3.21)
project(ope...
Operating system linux ubuntu
Openvino, Openvino-dev, Openvino-tensorflow (all 2022.3.0)
Tensorflow 2.9.1
Model: Delg (https://github.com/tensorflow/models/tree/master/research/delf)
Hi,
I am trying to convert a saved TF2 model whose signatures contain multiple inputs/outputs. I use the following co...
Hi,
I was wondering whether it is possible for OpenVINO to limit the number of EUs utilized by the GPU plugin. I have an inference app that was developed on an 80-EU part, and I'd like a rough estimate of how it would perform on a 48-EU part, for instance, before buying one.
If this can't...
Hello everyone,
When I run inference with the OpenVINO Runtime and the Inference Engine, they give me very different detection confidence scores.
Here's the notable difference I can capture when running inference on my efficientdet_d0 model (trained using the TF2 API and converted to OpenVINO IR format) on...
from openvino import runtime as ov
x = ov.opset8.multiply(2,3)
print(x)
It prints this output:
<Multiply: 'Multiply_2' ({})> # pyopenvino.Node
Expected/target output:
int 6
Can it be made to return my expected output?