This repository has been archived by the owner on Aug 26, 2024. It is now read-only.

Add neural networks model performance #231

Merged · 2 commits · Jul 28, 2021

Conversation

VanDavv
Contributor

@VanDavv VanDavv commented Jul 21, 2021

This PR adds model performance figures to the available models table.

[image: models table with the new performance column]

Performance was measured by running `$ python3 depthai_demo.py -cnn <cnn_name>` without additional switches, so the results match what users would get when trying out these networks.

We could also add something like Max FPS, together with an example of how to run the demo script (or a custom one) to achieve the best performance.
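For reference, a minimal FPS counter of the kind such a benchmark relies on could look like the sketch below. This is a hypothetical standalone helper, not the demo script's actual measurement code:

```python
import time


class FPSCounter:
    """Rolling FPS estimate over all frames seen so far."""

    def __init__(self):
        self.start = None
        self.frames = 0

    def tick(self):
        """Call once per received frame."""
        if self.start is None:
            self.start = time.monotonic()
        self.frames += 1

    def fps(self):
        """Average FPS since the first frame; 0.0 until two frames arrive."""
        if self.start is None or self.frames < 2:
            return 0.0
        elapsed = time.monotonic() - self.start
        return (self.frames - 1) / elapsed if elapsed > 0 else 0.0
```

In a demo loop you would call `tick()` after every dequeued frame and read `fps()` when drawing the overlay.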

@VanDavv
Contributor Author

VanDavv commented Jul 21, 2021

One idea @szabi-luxonis / @themarpe - if we were able to expose the number of available shaves via the API, we could compile the models dynamically with the optimal number of shaves. Currently we default to 4 shaves, but this can be customized manually with the --shaves param in the demo script.
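Such dynamic shave selection could be sketched roughly as follows. The total of 16 SHAVE cores corresponds to the Myriad X, and the function names are illustrative, not part of the DepthAI API:

```python
TOTAL_SHAVES = 16  # Myriad X SHAVE core count (assumed here for illustration)


def optimal_shaves(used_by_other_nodes, reserve=0, default=4):
    """Pick the largest shave count a model could be compiled with.

    Falls back to the current demo default of 4 when the usage of the
    rest of the pipeline is unknown.
    """
    if used_by_other_nodes is None:
        return default
    free = TOTAL_SHAVES - used_by_other_nodes - reserve
    # Never go below 1; never claim more than the device has.
    return max(1, min(free, TOTAL_SHAVES))
```

The chosen count could then be passed to the model compilation step the same way the demo's --shaves param is today.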

@themarpe

I agree - in that case I'd most likely go one step further: if clean OpenVINO integration were possible, allow specifying just the IR instead. The pipeline would then compile it into a blob with the maximum number of shaves available.

Although that depends on 2 important factors:

  1. It must be possible to expose a subset of the resource manager that determines the number of available shaves by asking the nodes on the pipeline about their properties.
  2. It must be possible to cleanly integrate multiple OpenVINO versions and their subsets (otherwise we could just support the latest OpenVINO, etc.).
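Point 1 could be sketched as a small resource-manager query that sums the shave demand each pipeline node reports. The node shape and attribute name here are hypothetical, not a real DepthAI interface:

```python
def available_shaves(nodes, total=16):
    """Return the shave count left over after the pipeline's nodes claim theirs.

    `nodes` is any iterable of objects with an optional `shaves` attribute;
    nodes that declare nothing are assumed to use no shaves. This mirrors
    the idea of asking nodes about their properties, not an actual API.
    """
    used = sum(getattr(n, "shaves", 0) for n in nodes)
    return max(0, total - used)
```

A blob-compilation step could then request `available_shaves(pipeline_nodes)` instead of a hard-coded default.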

@VanDavv
Contributor Author

VanDavv commented Jul 22, 2021

I love this idea - being able to just provide the IR and have the blob compiled automatically would make the transition to DepthAI a lot easier for someone already familiar with OpenVINO.

One additional issue I see here, though, is where the compilation should happen and how to ensure we can compile the blobs. We can use the online converter, but that requires an internet connection. We can use the OpenVINO PyPI package, but it's not available on some OSes.
Overall, though, I think releasing this as an additional feature that may improve the experience for some of our users would make sense.
Thoughts @Luxonis-Brandon?
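The local-vs-online question could be handled with a simple fallback chain, trying each available compile backend in turn. The backends here are stubs and the function names are illustrative only:

```python
def compile_blob(ir_path, backends):
    """Try each compile backend in order and return the first result.

    `backends` is a list of callables that either return a blob path or
    raise on failure, e.g. a local OpenVINO compile followed by the
    online converter as a network-dependent fallback.
    """
    errors = []
    for backend in backends:
        try:
            return backend(ir_path)
        except Exception as exc:
            # Remember why this backend failed and try the next one.
            errors.append(f"{backend.__name__}: {exc}")
    raise RuntimeError("all compile backends failed: " + "; ".join(errors))
```

With this shape, users with a local OpenVINO install never need internet access, while others transparently fall back to the online converter.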

@themarpe

@VanDavv
In this case we'd be including the actual OpenVINO inside our library, so compilation would happen locally - no internet access required. Basically it'd be the same as using OpenVINO, except that in our case it would happen opaquely in the background.

As mentioned, this depends on integration difficulties of OpenVINO itself and multiple versions of it.

@VanDavv
Contributor Author

VanDavv commented Jul 28, 2021

I'll go ahead and merge this PR, as it will be useful as an overall preview of the available networks and will help with navigating them.
@themarpe @szabi-luxonis @Luxonis-Brandon - feel free to still review this PR if you want; I'll address any change requests in a separate PR.

@VanDavv VanDavv merged commit 73b2cba into master Jul 28, 2021
@VanDavv VanDavv deleted the add_model_performance branch July 28, 2021 11:02
@Luxonis-Brandon
Contributor

Sounds great thank you!
