[backend-comparison] Add GitHub authentication to burnbench CLI #1285
Conversation
Force-pushed 148d629 to a7dec88.
Followed your instructions and it seemed to work as described :)
As expected, the first command fails without auth:
You need to be authenticated to be able to share benchmark results.
Run the command 'burnbench auth' to authenticate.
After authenticating the benchmark completed successfully.
―――――――― Result ―――――――――
Samples 10
Mean 232.641ms
Variance 91.636µs
Median 231.984ms
Min 219.470ms
Max 249.241ms
―――――――――――――――――――――――――
Also noted a minor typo below.
Force-pushed a7dec88 to 202247d.
I followed the steps successfully, and here are the results:
Also confirming that the permissions are set correctly on my Mac:
LGTM
Fantastic! Thanks a lot!
Two considerations (you can tick them if they are deemed valid):
- Can we replace `println` and `eprintln` with `log` functions?
- Perhaps I'm wrong, but it seems there are some tests that could be deduplicated using intermediate functions. Is this wanted?
I think …
@Luni-4 Not sure about adding a log mechanism for now, given that we already write the benchmarks to disk in the cache directory. Those prints are just part of the UI in a sense. But maybe it can be useful to add a logger for those later; feel free to open an issue about it.
My idea of logging in a library (in our case, a crate) consists of discriminating among the different events that might happen when an API is called. Anyway, I agree to maintain the current approach for now.
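For illustration, a minimal std-only sketch of what discriminating library events by severity could look like (hypothetical names, not an existing burnbench API; the `log` crate facade would normally provide this via its `error!`/`warn!`/`info!` macros):

```rust
// Hypothetical sketch: tagging library events with a severity level,
// so callers can distinguish error conditions from informational output.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Level {
    Error,
    Warn,
    Info,
}

// Format an event line the way a simple level-aware logger might.
fn format_event(level: Level, msg: &str) -> String {
    let tag = match level {
        Level::Error => "ERROR",
        Level::Warn => "WARN",
        Level::Info => "INFO",
    };
    format!("[{tag}] {msg}")
}

fn main() {
    // User-facing UI output can stay on println!, while internal
    // events are routed through the level-aware function.
    println!("{}", format_event(Level::Info, "benchmark results written to cache"));
    println!("{}", format_event(Level::Error, "authentication required"));
}
```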
Each test checks only one thing and each case is different; we cannot really do much more than the cleanup function that already exists to reduce redundancy. Maybe we could be fancier with setup and teardown, but that does not seem easy in Rust.
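One std-only way to approximate setup/teardown in Rust is an RAII guard whose `Drop` impl runs the cleanup; a hypothetical sketch (names are illustrative, not from the burnbench tests):

```rust
// Hypothetical sketch: RAII-style teardown for tests via Drop.
// Cleanup runs when the guard goes out of scope, even on panic.
struct TestContext {
    name: String,
}

impl TestContext {
    fn setup(name: &str) -> Self {
        // Setup work (e.g. creating temp files) would go here.
        TestContext { name: name.to_string() }
    }
}

impl Drop for TestContext {
    fn drop(&mut self) {
        // Teardown work (e.g. removing the files created in setup)
        // would go here; it runs automatically at end of scope.
    }
}

fn main() {
    let ctx = TestContext::setup("auth_flow");
    // Test body: assertions against the context.
    assert_eq!(ctx.name, "auth_flow");
    // `ctx` is dropped here, running the cleanup automatically.
}
```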
@syl20bnr I have used rstest in burn-import. It is pretty cool and easy to use. Check it out if you are interested: https://docs.rs/rstest/latest/rstest/
@antimora Unfortunately rstest …
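Without pulling in an external crate like rstest, the deduplication Luni-4 mentioned can also be sketched with a table of cases routed through one intermediate helper (a std-only illustration with hypothetical names, not the actual burnbench tests):

```rust
// Hypothetical sketch: table-driven test cases sharing one
// intermediate assertion helper instead of near-identical test fns.
fn normalize(input: &str) -> String {
    input.trim().to_lowercase()
}

// Shared helper: each case only supplies its input/expected pair.
fn check_normalize(input: &str, expected: &str) {
    assert_eq!(normalize(input), expected);
}

fn main() {
    // One table of cases replaces several duplicated test functions.
    let cases = [("  Foo ", "foo"), ("BAR", "bar"), ("baz", "baz")];
    for (input, expected) in cases {
        check_normalize(input, expected);
    }
}
```

The trade-off is that a failing case reports the helper's assertion rather than a distinctly named test, which is part of what frameworks like rstest improve on.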
In the run command, make the `--benches` and `--backends` arguments required. The manual check is no longer necessary.
Thanks a lot for your changes! The code has been improved a lot!
Just one last change for me.
Force-pushed c652b07 to dcc7afc.
Checklist
The `run-checks all` script has been executed.
Related Issues/PRs
Issue #1072
Changes
Add a new `auth` command to `burnbench` in order to create an access token that will be used to authenticate the user when sharing the benchmark results.
Add a `--share` argument to the `run` command to upload the benchmark results. This is a WIP; the actual upload will be added once the server side uses the access token to authenticate the user.
Found a crate called github-device-flow to help with implementing the device flow in order to create a user access token. For now the token is not refreshed when it expires; this can be implemented later.
Currently the GitHub application is a temporary one called Burnbench POC, created under my account. Once this is approved and merged we can create the official one in the Tracel org and update the client ID.
Testing
Would be great if you can test the auth flow with the following commands, in this order:
First, verify that the CLI errors out if you are not authenticated and the `--share` argument is passed.
Then follow the authentication flow with `burnbench auth`.
At last, rerun the benchmark with the `--share` argument.
Note that the actual upload is not yet implemented.