Is it possible to "parametrize" a benchmark? #48
@lelit Am I correctly understanding that you want to parametrize (or make up from parametrization) the group name?
Then you have two options:
Maybe we could have @RonnyPfannschmidt's idea of
Yes, I think Ronny is on the right path, but I think I'm missing what injects
When taking the benchmark group from a test item, the parametrization is already accessible on the test item, so that's when things can be taken out.
An example hook that you'd put in your conftest.py:

```python
from collections import defaultdict

def pytest_benchmark_group_stats(config, benchmarks, group_by):
    result = defaultdict(list)
    for bench in benchmarks:
        result["%s: %s" % (bench.params['name'], bench.name)].append(bench)
    return result.items()
```

Not sure, you might need the master branch.
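To see what that hook's grouping logic produces, here is a standalone sketch with a minimal stand-in for the benchmark objects (the `Bench` class and the sample test names are assumptions for illustration, not pytest-benchmark's real classes):

```python
from collections import defaultdict

class Bench:
    # Minimal stand-in for a pytest-benchmark result object (assumption):
    # real objects carry a .name and a .params dict for parametrized tests.
    def __init__(self, name, params):
        self.name = name
        self.params = params

def group_stats(benchmarks):
    # Same grouping logic as the hook above, minus the pytest plumbing.
    result = defaultdict(list)
    for bench in benchmarks:
        result["%s: %s" % (bench.params['name'], bench.name)].append(bench)
    return result.items()

benches = [
    Bench("test_dumps", {"name": "ujson"}),
    Bench("test_loads", {"name": "ujson"}),
    Bench("test_dumps", {"name": "simplejson"}),
]
print(sorted(key for key, _ in group_stats(benches)))
# one group per (param, test) pair
```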
Yeah, with 3.0 you'd need to parse out that specific parameter:

```python
result["%s: %s" % (bench.param.split('-')[0], bench.name)].append(bench)
```
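As a minimal illustration of that parsing (the single dash-joined `param` string, e.g. `"ujson-small"`, is an assumption about how 3.0 joins parametrized values into one id):

```python
# Assumed 3.0 format: all parametrized values joined by '-' into one string.
param = "ujson-small"
engine = param.split('-')[0]  # keep only the first parametrized value
print(engine)  # ujson
```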
Thank you! That function fulfilled my needs: I was able to reduce the original benchmarks down to a handful of functions.
It didn't initially occur to me but you can also do this:

```python
@pytest.mark.parametrize('foo', [1, 2, 3])
def test_perf(benchmark, foo):
    benchmark.group = '%s - perf' % foo
    benchmark(....)
```
Also, you may set the group from a fixture to reduce boilerplate, e.g.:

```python
@pytest.fixture(params=[1, 2, 3])
def foo(benchmark, request):
    benchmark.group = '%s - perf' % request.param
    return request.param

def test_perf(benchmark, foo):
    benchmark(....)
```

The only constraint is that the fixture needs to be function scoped (as the benchmark fixture is).
I want to benchmark different JSON engines' serialization/deserialization functions with different sets of data. More specifically, I'm trying to convert an already existing set of benchmarks to pytest-benchmark.
Here `contenders` is a list of tuples `(name, serialization_func, deserialization_func)`:

This will produce two distinct benchmark tables, one for the serialization function and one for its counterpart. I can go down the boring way of repeating that pattern for each dataset...
What I'd like to achieve is to factorize that to something like the following (that does not work):
That way I could reuse the very same code to create benchmarks against all other sets of data, without repeating the code, simply adding them to the initial `parametrize`.

Is there any trick I'm missing?
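For illustration, the kind of parametrized layout being asked for can be sketched without pytest at all; the `contenders` list and `datasets` dict below are hypothetical stand-ins (only the stdlib `json` engine is included), and the group names mirror the dataset/direction split described above:

```python
import itertools
import json

# Hypothetical stand-ins: a real suite would also list ujson, simplejson, etc.
contenders = [("stdlib json", json.dumps, json.loads)]
datasets = {"small": {"a": 1}, "nested": {"a": {"b": [1, 2, 3]}}}

# Build the benchmark groups a fully parametrized test would produce:
# one group per (dataset, direction), with one entry per contender.
groups = {}
for (name, dumps, loads), (ds_name, data) in itertools.product(
        contenders, datasets.items()):
    groups.setdefault("%s: serialize" % ds_name, []).append((name, dumps, data))
    groups.setdefault("%s: deserialize" % ds_name, []).append(
        (name, loads, dumps(data)))

print(sorted(groups))  # two groups per dataset
```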