Tags: mayel/benchee

1.0.1

* Extended statistics were displayed whenever memory measurements actually differed, even when the option was not provided. They are now correctly displayed only if the option is provided and the values actually had variance.

1.0.0

It's 0.99.0 without the deprecation warnings. Specifically:

* Old way of passing formatters (`:formatter_options`) vs. new `:formatters` with modules, tuples or functions with one arg
* The configuration needs to be passed as the second argument to `Benchee.run/2`
* `Benchee.collect/1` replaces `Benchee.measure/1`
* `unit_scaling` is a top-level configuration option, no longer an option of the console formatter
* the warning for memory measurements not working on OTP <= 18 was also dropped (we already officially dropped OTP 18 support in 0.14.0)

We're aiming to follow Semantic Versioning as we go forward. That means formatters should be safe to use `~> 1.0` (or even `>= 0.99.0 and < 2.0.0`).
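To illustrate the post-1.0 invocation style, here is a minimal sketch (the job names, data, and timings are invented for the example): the configuration goes in as the second argument, and `:formatters` takes modules, `{module, options}` tuples, or one-arity functions.

```elixir
list = Enum.to_list(1..10_000)
map_fun = fn i -> [i, i * i] end

# Benchee.run/2: jobs first, configuration second.
Benchee.run(
  %{
    "flat_map" => fn -> Enum.flat_map(list, map_fun) end,
    "map.flatten" => fn -> list |> Enum.map(map_fun) |> List.flatten() end
  },
  warmup: 2,
  time: 5,
  formatters: [
    Benchee.Formatters.Console,
    # a one-arity function receives the whole suite
    fn suite -> IO.inspect(length(suite.scenarios)) end
  ]
)
```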

0.99.0

The "we're almost 1.0!" release - all the last small features, a bag of polish and deprecation warnings. If you run this release successfully without deprecation warnings, you should be safe to upgrade to 1.0.0; if not, it's a bug :)

* changed official Elixir compatibility to `~> 1.6`, 1.4+ should still work but aren't guaranteed or tested against.

* the console comparison now also displays the absolute difference of the averages (like +12 ms), so you get an idea of how much time that translates to in your application, not just that something is 100x faster
* Overhaul of README, documentation, update samples etc. - a whole lot of things have also been marked `@doc false` as they're considered internal

* Remove double empty line after configuration display
* Fix some wrong type specs

* `Scenario` made it to the big leagues, it's no longer `Benchee.Benchmark.Scenario` but `Benchee.Scenario` - as it is arguably one of our most important data structures.
* The `Scenario` struct had some keys changed (last time before 2.0 I promise!) - instead of `:run_times`/`:run_time_statistics` you now have one `run_time_data` key that contains `Benchee.CollectionData` which has the keys `:samples` and `:statistics`. Same for `memory_usage`. This was done to be able to handle different kinds of measurements more uniformly as we will add more of them.

* `Benchee.Statistics` comes with 3 new values: `:relative_more`, `:relative_less`, `:absolute_difference` so that you don't have to calculate these relative values yourself :)
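A sketch of reading the restructured data (field names as described in this release; the job itself is a made-up example):

```elixir
suite = Benchee.run(%{"example" => fn -> Enum.sort(Enum.shuffle(1..100)) end}, time: 1)

Enum.each(suite.scenarios, fn scenario ->
  # run_time_data is a Benchee.CollectionData with :samples and :statistics;
  # memory_usage is shaped the same way.
  stats = scenario.run_time_data.statistics
  IO.puts("#{scenario.name}: average #{stats.average} ns")
  # comparison values now come precomputed:
  # stats.relative_more, stats.relative_less, stats.absolute_difference
end)
```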

0.14.0

* dropped support for Erlang 18.x

* Formatters no longer have an `output/1` function; please use `Formatter.output/3` instead
* Usage of `formatter_options` is deprecated, instead please use the new tuple way

* benchee now uses the maximum precision available to measure, which on Linux and OSX is nanoseconds instead of microseconds. Somewhat surprisingly, `:timer.tc/1` always cut measurements down to microseconds although better precision is available.
* The preferred way to specify formatters and their options is to specify them as a tuple `{module, options}` instead of using `formatter_options`.
* New `Formatter.output/1` function that takes a suite and uses all configured formatters to output their results
* Add the concept of a benchmarking title that formatters can pick up
* the displayed percentiles can now be adjusted
* inputs option can now be an ordered list of tuples, this way you can determine their order
* support FreeBSD properly (system metrics) - thanks @kimshrier

* Remove extra double quotes in operating system report line - thanks @kimshrier

* all reported times are now in nanoseconds instead of microseconds
* formatter functions `format` and `write` now take 2 arguments each, where the additional argument is the options specified for this formatter, so that you have direct access to them without digging them out of the suite
* You can no longer `use Benchee.Formatter` - just adopt the behaviour (no more auto generated `output/1` method, but `Formatter.output/3` takes that responsibility now)

* An optional title is now available in the suite for you to display
* Scenarios now come already sorted (first by run time, then by memory usage) - no need to sort them yourself!
* Add `Scenario.data_processed?/2` to check if either run time or memory data has had statistics generated
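Several of the options above can be combined; a hedged sketch (the title, job, and input data are invented):

```elixir
Benchee.run(
  %{"reverse" => fn input -> Enum.reverse(input) end},
  title: "List operations",
  # adjust which percentiles are computed and displayed
  percentiles: [50, 95, 99],
  # a list of tuples keeps the inputs in exactly this order
  inputs: [
    {"small", Enum.to_list(1..100)},
    {"large", Enum.to_list(1..100_000)}
  ],
  # formatter options travel with the formatter as a {module, options} tuple
  formatters: [{Benchee.Formatters.Console, extended_statistics: true}]
)
```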

0.13.2

bump it up 0.13.2

0.13.1

Mostly fixing memory measurement bugs and related issues :) Enjoy a better memory measurement experience from now on!

* Memory measurements now correctly take the old generation on the heap into account. In practice that means sometimes bigger results and no missing measurements. See bencheeorg#216 for details. Thanks to @michalmuskala for providing an interesting sample.
* Formatters are now more robust (aka not crashing) when dealing with partially missing memory measurements. Although it shouldn't happen anymore with the item before fixed, benchee shouldn't crash on you so we want to be on the safe side.
* It's now possible to run just memory measurements (i.e. `time: 0, warmup: 0, memory_time: 1`)
* even when you already have scenarios tagged with `-2` etc., saving again with the same base tag name still correctly produces `-3`, `-4` etc.
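A memory-only run can be sketched like this (the job is a made-up example; requires OTP 19+ per the 0.13.0 notes below):

```elixir
# time: 0 and warmup: 0 skip run time measurement entirely -
# only memory is measured, for 1 second per job.
Benchee.run(
  %{"map" => fn -> Enum.map(1..1_000, &(&1 * 2)) end},
  time: 0,
  warmup: 0,
  memory_time: 1
)
```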

0.13.0

Memory measurements are finally here! Please report problems if you experience them.

* Memory measurements, obviously ;) Memory measurement is currently limited to the process your function is run in - memory consumption of other processes will **not** be measured. More information can be found in the [README](https://github.com/PragTob/benchee#measuring-memory-consumption). Only usable on OTP 19+. Special thanks go to @devonestes and @michalmuskala
* new `pre_check` configuration option which allows users to add a dry run of all
benchmarks with each input before running the actual suite. This should save
time while actually writing the code for your benchmarks.

* Standard deviation is now correctly calculated as a sample standard deviation (dividing by `n - 1` rather than just `n`)
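The `pre_check` option described above might be used like this (job and inputs are invented for the sketch):

```elixir
# With pre_check: true each job is invoked once with every input
# before the timed suite starts, so errors surface immediately
# instead of minutes into a long benchmark.
Benchee.run(
  %{"to_string" => fn input -> Integer.to_string(input) end},
  pre_check: true,
  inputs: %{"one" => 1, "million" => 1_000_000}
)
```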

0.12.0

Adds the ability to save benchmarking results and load them again to compare against. Also fixes a bug for running benchmarks in parallel.

* Dropped support for Elixir 1.3; Elixir 1.4+ is now supported

* new `save` option specifying a path and a tag to save the results and tag them
(for instance with `"master"`) and a `load` option to load those results again
and compare them against your current results.
* runs warning-free with Elixir 1.6

* If you were running benchmarks in parallel, you would see results for each
parallel process you were running. So, if you were running **two** jobs, and
setting your configuration to `parallel: 2`, you would see **four** results in the
formatter. This is now correctly showing only the **two** jobs.

* `Scenario` has a new `name` field to be adopted for displaying the scenario names,
as it includes the tag name and potential future additions.
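The save/load workflow might look like this two-step sketch (the job, path, and tag are invented):

```elixir
jobs = %{"sort" => fn -> Enum.sort(Enum.shuffle(1..1_000)) end}

# Step 1, e.g. on master: run and save the results, tagged "master".
Benchee.run(jobs, save: [path: "benchmarks/sort.benchee", tag: "master"])

# Step 2, later on a feature branch: load the saved run so the
# comparison includes the tagged "master" scenarios.
Benchee.run(jobs, load: "benchmarks/sort.benchee")
```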