EMOCA inference on images/videos reconstructs the face while taking into account the head orientation seen in the input. Is there a way to output the "unposed" vertex data, i.e. with the head orientation cancelled out? I checked both "trans_verts" and "verts" in the vals dictionary hoping one of them would be the unposed data, but both of them include the head orientation.
Alternatively, is there a rigid transformation matrix in the prediction "vals" of EMOCA inference that can be used to "unpose" the vertex data?
It is very easy to unpose.
The first three values in the pose code correspond to the global rotation in axis-angle representation. Just set them to zero and this unposes the head.
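A minimal sketch of what that could look like, assuming `vals` is the prediction dictionary returned by EMOCA inference and `emoca` is the loaded model; the key name `posecode` and the `decode` call follow the DECA-style API and should be checked against your version of the code:

```python
import torch

# Copy the prediction dict so the original (posed) results are kept intact.
unposed_vals = {k: v.clone() if torch.is_tensor(v) else v for k, v in vals.items()}

# The first three entries of the pose code are the global head rotation in
# axis-angle form; zeroing them cancels the head orientation. The remaining
# pose parameters (e.g. the jaw articulation) are left untouched.
unposed_vals['posecode'][:, :3] = 0.0

# Re-run the decoder so the FLAME vertices are regenerated from the modified
# pose code, i.e. without the global rotation.
unposed_vals = emoca.decode(unposed_vals)
unposed_verts = unposed_vals['verts']  # vertex data without head orientation
```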