Add four global data augmentations for 3D #1028
Conversation
2. Add wrap_angle_rad helper function.
2. Change rads -> radians
3. Good point. We could add X or Y rotation as future work. For this PR, let's check in Z rotation first.
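For reference, a minimal sketch of what a `wrap_angle_rad` helper might look like (a hypothetical implementation based only on the name used in this thread; the PR's actual version may differ), wrapping angles into the half-open interval [-pi, pi):

```python
import numpy as np

def wrap_angle_rad(angle_rad, min_val=-np.pi, max_val=np.pi):
    """Wrap an angle in radians into the half-open interval [min_val, max_val)."""
    period = max_val - min_val
    return (angle_rad - min_val) % period + min_val
```

For example, `wrap_angle_rad(3 * np.pi)` wraps around to `-np.pi`, while angles already inside the interval pass through unchanged.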
Thanks for the PR, looking great!
During inference time, the output will be identical to input. Call the layer with `training=True` to drop the input points.

Input shape:
    point_clouds: 3D (multi frames) float32 Tensor with shape
I wonder if we need to support 4D tensor [batch_size, num_frames, num_points, point_feat]?
Asking users to do tf.map_fn on their own might be too much cognitive load, WDYT?
Yes, it is supported. Please see the test case, where I tested augmentation with a batch dimension.
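To illustrate the concern, here is what mapping a per-sample augmentation over a batch dimension would look like if users had to do it by hand (a numpy sketch with a hypothetical `augment_frames` function; on Tensors, `tf.map_fn` plays the analogous role):

```python
import numpy as np

def augment_frames(point_clouds):
    """Hypothetical per-sample augmentation (illustration only): flip the
    Y coordinate of every point. Shape: [num_frames, num_points, num_features]."""
    out = point_clouds.copy()
    out[..., 1] = -out[..., 1]
    return out

# Batched input: [batch_size, num_frames, num_points, num_features].
batch = np.random.default_rng(0).random((2, 3, 5, 7)).astype(np.float32)

# The per-sample mapping users would otherwise write themselves
# (tf.map_fn is the TensorFlow equivalent of this loop).
augmented = np.stack([augment_frames(sample) for sample in batch])
```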
BOUNDING_BOXES = base_augmentation_layer_3d.BOUNDING_BOXES


class GlobalRandomFlippingY(base_augmentation_layer_3d.BaseAugmentationLayer3D):
just a discussion here: since we use "horizontal" and "vertical" in keras layers and Tensorflow in general, would it be better to be consistent with that?
https://keras.io/api/layers/preprocessing_layers/image_preprocessing/random_flip/
https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right
I'm not sure whether that image convention applies to point clouds, though.
The point coordinates are defined based on (X, Y, Z) instead of (row, col). I think it is better to use 'Y' instead of 'horizontal/vertical'.
sounds good
I think for consistency with the 2D equivalent we should call this GlobalRandomFlipY instead of GlobalRandomFlippingY
point_clouds: 3D (multi frames) float32 Tensor with shape
    [num of frames, num of points, num of point features].
    The first 5 features are [x, y, z, class, range].
bounding_boxes: 3D (multi frames) float32 Tensor with shape
@ianstenbit FYI, this would be a future improvement from our side to introduce 3d box format, given we already have 2d box format
+1 let's discuss this when the input pipeline is ready.
Agreed -- we should support box format here, even if we only have 1 format. Once we have a training script skeleton set up I can set up the box format infra.
max_scaling_factor: A float scaler or Tensor sets the maximum scaling factor.
"""

def __init__(self, min_scaling_factor, max_scaling_factor, **kwargs):
Is this a scalar, or a vector of size 3?
scalar. We scale (x, y, z) using a single scalar.
will it need to support scaling differently on different axes?
Done.
def __init__(
    self,
    min_scaling_factor_x,
nit: maybe we can make this consistent with RandomZoom? https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomZoom
So it's scaling_factor_x -- a tuple of size 2, if it means [min_x, max_x], or a single scalar, if it means the factor is fixed.
Done.
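The RandomZoom-style convention suggested above could be normalized with a small helper like this (a hypothetical sketch; `normalize_factor` is not a name from the PR):

```python
def normalize_factor(factor):
    """Normalize a RandomZoom-style factor argument (hypothetical helper):
    a single scalar means a fixed factor, a (min, max) pair means a range,
    and None means identity (no scaling on this axis)."""
    if factor is None:
        return (1.0, 1.0)
    if isinstance(factor, (int, float)):
        return (float(factor), float(factor))
    low, high = factor
    if low > high:
        raise ValueError("factor range must satisfy min <= max.")
    return (float(low), float(high))
```

With this shape, `normalize_factor(2.0)` and `normalize_factor((0.5, 1.5))` both resolve to a (min, max) pair internally, so the sampling code only ever deals with ranges.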
Thanks for the PR! These are great -- looking forward to getting them merged :)
A tuple of two Tensors (point_clouds, bounding_boxes) with the same shape as input Tensors.

Arguments:
    keep_probability: A float scaler or Tensor sets the probability threshold for keeping the points.
How would you feel about changing this argument to drop_rate (and inverting the behavior such that a high drop rate ~ low keep probability)? This seems a bit more consistent with the name of the layer as well as related conventions like dropout.
Done.
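A numpy sketch of the drop_rate convention agreed on here (names and zero-fill behavior are illustrative, not the PR's exact API):

```python
import numpy as np

def random_drop_points(point_clouds, drop_rate, rng=None):
    """Sketch of drop_rate-style point dropping: each point survives with
    probability 1 - drop_rate. Dropped points are zeroed rather than removed
    so the tensor shape stays static."""
    if not 0.0 <= drop_rate <= 1.0:
        raise ValueError("drop_rate must be between 0 and 1.")
    rng = rng or np.random.default_rng()
    # One uniform sample per point; a point is kept iff its sample >= drop_rate.
    keep_mask = rng.random(point_clouds.shape[:-1]) >= drop_rate
    return point_clouds * keep_mask[..., np.newaxis]
```

At the extremes, `drop_rate=0.0` keeps every point and `drop_rate=1.0` zeroes them all, matching the inverted keep-probability semantics discussed above.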
The first 7 features are [x, y, z, dx, dy, dz, phi].

Output shape:
    A tuple of two Tensors (point_clouds, bounding_boxes) with the same shape as input Tensors.
I think the output is really a dictionary, right?
(Same for the other 3 layers)
I thought the output shape is for augment_point_clouds_bounding_boxes function. Let me update all of them.
outputs = add_layer(inputs)
self.assertNotAllClose(inputs, outputs)

def test_not_augment_batch_point_clouds_and_bounding_boxes(self):
Optional: consider dropping this test case and the one below it.
I think this test and the one below it don't add any coverage, since we already have test cases for dropping all and dropping none of the points, as well as a single test for batched augmentation.
Done.
def __init__(
    self,
    min_scaling_factor_x,
I think we should make these tuples, like x_factor (a tuple of (min, max)).
As a user of this API, I'd also expect to be able to pass y_factor=None (probably as a default parameter value) to indicate that I want no scaling on the y axis.
Done.
class GlobalScalingTest(tf.test.TestCase):
    def test_augment_point_clouds_and_bounding_boxes(self):
(Same comment as the random flipping -- I think we should have a test case here with a positive assertion about the numerics of the scaled output)
e.g.
input_points = [some known set of points]
output_points = [the expected values of the scaled points]
Added test_2x_scaling_point_clouds_and_bounding_boxes test case.
awesome -- I still think it would be good to do this for flipping and rotation layers as well
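The kind of positive numeric assertion requested here can be sketched in numpy (with a hypothetical `scale_point_clouds` standing in for the layer, and a made-up point layout following the [x, y, z, class, range] feature order from the docstrings):

```python
import numpy as np

def scale_point_clouds(point_clouds, factor):
    """Hypothetical stand-in for the scaling layer: multiply the
    (x, y, z) coordinates (first 3 features) by a fixed factor."""
    out = point_clouds.copy()
    out[..., :3] *= factor
    return out

# A known input point and the expected 2x-scaled output:
# coordinates double, class and range stay untouched.
input_points = np.array([[[1.0, 2.0, 3.0, 0.0, 5.0]]], dtype=np.float32)
expected = np.array([[[2.0, 4.0, 6.0, 0.0, 5.0]]], dtype=np.float32)
output_points = scale_point_clouds(input_points, 2.0)
```

A test built this way pins down the exact numerics rather than only asserting that the output differs from the input.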
Arguments:
    x_translation_stddev: A float scaler or Tensor sets the translation noise standard deviation along the X axis.
    y_translation_stddev: A float scaler or Tensor sets the translation noise standard deviation along the Y axis.
    z_translation_stddev: A float scaler or Tensor sets the translation noise standard deviation along the Z axis.
nit (throughout the PR): s/scaler/scalar
Done.
""" | ||
|
||
def __init__( | ||
self, x_translation_stddev, y_translation_stddev, z_translation_stddev, **kwargs |
Here I'd also expect to be able to use a default y_translation_stddev=None to indicate that no translation on the y axis should occur.
+1
I added
x_stddev = x_stddev if x_stddev else 0.0
y_stddev = y_stddev if y_stddev else 0.0
z_stddev = z_stddev if z_stddev else 0.0
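Putting the None-means-0 defaulting together with the translation itself, a numpy sketch (illustrative names following the x_stddev convention from this thread; not the PR's exact code):

```python
import numpy as np

def random_translate(point_clouds, x_stddev=None, y_stddev=None,
                     z_stddev=None, rng=None):
    """Sketch of global random translation: one Gaussian offset per axis is
    added to every point's (x, y, z) coordinates. None means no translation
    on that axis (stddev 0)."""
    rng = rng or np.random.default_rng()
    stddevs = [s if s else 0.0 for s in (x_stddev, y_stddev, z_stddev)]
    # Sample a unit normal per axis, then scale; axes with stddev 0 get offset 0.
    offset = rng.normal(0.0, 1.0, size=3) * np.asarray(stddevs)
    out = point_clouds.copy()
    out[..., :3] += offset
    return out
```

With all stddevs left as None the output equals the input, which matches the defaulting behavior described above.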
""" | ||
|
||
def __init__( | ||
self, x_translation_stddev, y_translation_stddev, z_translation_stddev, **kwargs |
What do you think about calling these x_magnitude or something like that? (Just trying to come up with something a bit more brief.)
I can change x_translation_stddev to x_stddev. What do you think?
yeah that sgtm
/gcbrun
/gcbrun
LGTM, just a few comments.
Thank you!
super().__init__(**kwargs)
keep_probability = 1 - drop_rate
if keep_probability < 0:
    raise ValueError("keep_probability must be >=0.")
nit: since drop_rate is what's in the API, maybe this should say "drop rate must be <= 1"
Good point. Done.
def augment_point_clouds_bounding_boxes(
    self, point_clouds, bounding_boxes, transformation, **kwargs
):
    del transformation
just curious -- is this necessary?
Removed
def __init__(
    self,
    scaling_factor_x,
(here and in the other KPLs where you've added defaults) -- let's make this default to None in the constructor, like scaling_factor_x=None
Done.
""" | ||
|
||
def __init__( | ||
self, x_translation_stddev, y_translation_stddev, z_translation_stddev, **kwargs |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
yeah that sgtm
|
||
|
||
class GlobalScalingTest(tf.test.TestCase): | ||
def test_augment_point_clouds_and_bounding_boxes(self): |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
awesome -- I still think it would be good to do this for flipping and rotation layers as well
Is this PR growing too much? Why have we not contributed each augmentation in a separate PR?
super().__init__(**kwargs)
drop_rate = drop_rate if drop_rate else 0.0

if drop_rate <= 1:
This should be `if drop_rate > 1`. (Hopefully we'll have a test failure indicating this?)
Sorry for the typo.
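For clarity, the corrected guard from this exchange could look like the following sketch (a hypothetical standalone helper; in the PR the check lives in the layer's constructor):

```python
def check_drop_rate(drop_rate):
    """Validate a drop_rate argument: None is treated as 0.0,
    and values above 1 are rejected."""
    drop_rate = drop_rate if drop_rate else 0.0
    if drop_rate > 1:
        raise ValueError("drop_rate must be <= 1.")
    return drop_rate
```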
scaling_factor_x: A tuple of float scalar sets the minimum and maximum scaling factors for the X axis.
scaling_factor_y: A tuple of float scalar sets the minimum and maximum scaling factors for the Y axis.
scaling_factor_z: A tuple of float scalar sets the minimum and maximum scaling factors for the Z axis.
scaling_factor_x: A tuple of float scalar or a float scaler sets the minimum and maximum scaling factors for the X axis.
nit: s/scaler/scalar
Done.
/gcbrun
yeah it'd be nice to have separate PRs
It's again timing out, so merging it manually
* Add base augmentation layer for 3D perception.
* Fix format.
* Add copyright.
* Minor change.
* Revert the minor change in the test file.
* 1. Add global_z_rotation data augmentation. 2. Add wrap_angle_rad helper function.
* Auto format.
* 1. Standardize POINT_CLOUDS and BOUNDING_BOXES 2. Change rads -> radians 3. Good point. We could add X or Y rotation as future work. For this PR, let's check in Z rotation first.
* Format.
* Delete base_augmentation_layer_3d.py
* Delete base_augmentation_layer_3d_test.py
* Standardize POINT_CLOUDS and BOUNDING_BOXES names.
* Change GlobalZRotation to GlobalRandomZRotation
* Support rotation along X, Y and Z axes.
* Format.
* Change file name from global_rotation to global_random_rotation.
* Add four more global data augmentations for 3d.
* Format.
* Remove unused import.
* Fix a typo in GlobalRandomFlippingY.
* Support scaling x, y, and z.
* Format.
* Update random scaling.
* Modified based on comments.
* Follow up.
* Fix a typo in random_scaling_test.py
* Update.
* Fix two typos.

Co-authored-by: Leng Zhaoqi <[email protected]>