
Integration with Unity Barracuda Inference Engine #113

Closed
GeorgeAdamon opened this issue Jun 8, 2021 · 3 comments


@GeorgeAdamon

Hello,

I'm linking to an issue I raised with Unity regarding the integration of the MiDaS .onnx model with Unity's Barracuda engine.

Unity-Technologies/barracuda-release#187

As @FlorentGuinier from Unity pointed out:

So the problem here is that some convolutions in the model use a "group" value that is neither 1 nor the input channel count. At the moment we only support those two versions (i.e. regular convolution with group == 1, and depthwise convolution where group == input channel count).

I will clarify the error message; however, the real question here is: "Do you need DepthwiseConvolution where group != input channel?"
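
For reference, the three cases map onto the `groups` argument of a standard 2D convolution. A minimal PyTorch sketch, purely illustrative and not taken from the MiDaS code:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)  # (batch, channels, height, width)

# Regular convolution: groups == 1 (supported by Barracuda).
regular = nn.Conv2d(64, 128, kernel_size=3, padding=1, groups=1)

# Depthwise convolution: groups == input channel count (supported by Barracuda).
depthwise = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64)

# Grouped convolution with 1 < groups < input channels (the unsupported case).
grouped = nn.Conv2d(64, 128, kernel_size=3, padding=1, groups=32)

for name, conv in [("regular", regular), ("depthwise", depthwise), ("grouped", grouped)]:
    print(name, conv(x).shape)
```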

I was hoping that you could shed some light on this issue.

Thanks for your time!

@ranftlr (Collaborator) commented Jun 8, 2021

Unfortunately, the grouped convolutions are an integral part of the ResNeXt backbone and can't be removed without fundamentally changing the architecture and requiring complete re-training.

You could try having a look at the "small" model to see if it is accurate enough for your application: https://github.com/intel-isl/MiDaS/releases/download/v2_1/model-small.onnx

If I remember correctly, the backbone has groups == 1 or groups == input_channel_size throughout, so this should work according to the specs mentioned by @FlorentGuinier. The small model has lower accuracy, but on the upside should be closer to your real-time requirements.
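
If it helps, one way to check is to list the `group` attribute of every Conv node in the exported .onnx file with the `onnx` Python package. A rough sketch, assuming the small model linked above has been downloaded locally as `model-small.onnx`:

```python
import onnx

# Assumes the small MiDaS model from the release link above,
# saved locally as "model-small.onnx".
model = onnx.load("model-small.onnx")

for node in model.graph.node:
    if node.op_type == "Conv":
        # The "group" attribute defaults to 1 when it is absent.
        group = next((a.i for a in node.attribute if a.name == "group"), 1)
        if group != 1:
            print(f"{node.name or node.output[0]}: group = {group}")
```

Any convolution printed here whose group count is not equal to its input channel count would be the problematic case for Barracuda.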

@GeorgeAdamon (Author)

Thanks a lot for your prompt answer, @ranftlr. I will test with the small model and report my results!

@GeorgeAdamon (Author)

https://github.com/GeorgeAdamon/monocular-depth-unity

@ranftlr It worked! I used the small .onnx model as you suggested, and it sustains 60 fps on a GTX 970 (amazing).

Thanks for your help!
