
How to obtain unposed (without the head orientation) reconstruction data? #36

Open
galib360 opened this issue Jan 20, 2023 · 1 comment

Comments

@galib360

EMOCA inference on images/videos reconstructs the face with the head orientation seen in the input. Is there a way to output "unposed" vertex data, i.e. with the head orientation cancelled out? I checked both "trans_verts" and "verts" in the vals dictionary, hoping one of them would be the unposed data, but both include the head orientation.

Or is there a rigid transformation matrix in the prediction "vals" of EMOCA inference that can be used to "unpose" the vertex data?

@radekd91
Owner

It is very easy to unpose.
The first three values of the pose code correspond to the global rotation in axis-angle representation. Setting them to zero before decoding unposes the head.
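If you would rather undo the rotation on already-decoded vertices than re-run the decoder with a zeroed pose code, a sketch like the one below could work. It assumes the convention that a posed vertex is the unposed vertex rotated about the origin by the axis-angle in `pose[:3]` (it ignores any joint-pivot offset or global translation, which may also need removing depending on which vertex tensor you use); the function name is illustrative, not part of the EMOCA API.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def unpose_vertices(verts, pose):
    """Remove a global head rotation from posed vertices.

    verts: (N, 3) array of posed vertices.
    pose:  pose code whose first three values are the global
           rotation in axis-angle representation.
    """
    R = Rotation.from_rotvec(np.asarray(pose[:3])).as_matrix()
    # With row vectors, posed = unposed @ R.T, and R is orthogonal,
    # so multiplying by R undoes the rotation.
    return np.asarray(verts) @ R
```

Zeroing `pose[:3]` before decoding, as suggested above, is the simpler route when you have access to the decoder; this post-hoc inverse rotation is only a fallback for vertices you have already exported.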
