Macrobenchmark can also be used to generate baseline profiles and now Startup Profiles. Both help improve application performance.
All these features are provided by the androidx.benchmark library group. Despite the minor version increment from 1.1.1 to 1.2.0, a lot of new features have been added. Let’s dive into what’s new in Jetpack Benchmark 1.2.0.
Baseline profiles improve the execution speed of included code paths by about 30% by avoiding interpretation and the cost of class initialization through ahead-of-time compilation.
These profiles have been around for a while and are already used by many apps and games on the Google Play Store. Libraries can also contribute baseline profiles and improve app performance seamlessly. And with the new Gradle plugin, automated baseline profile generation is easier than ever.
✨ Baseline profile API is now stable ✨
With the Benchmark 1.2.0 release, baseline profiles are no longer experimental.
When baseline profiles were introduced as an experimental feature in Benchmark 1.1.0, they shipped with the collectBaselineProfile API, which has since been replaced by the simpler collect. You can now generate stable baseline profiles and use UiAutomator to capture user journeys as part of the profile.
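A generator using the stable collect API might look like the following sketch; the package name, the "For You" tab, and the scrollable column are assumptions based on the journey described below.

```kotlin
import androidx.benchmark.macro.junit4.BaselineProfileRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import androidx.test.uiautomator.By
import androidx.test.uiautomator.Direction
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class BaselineProfileGenerator {
    @get:Rule
    val baselineProfileRule = BaselineProfileRule()

    @Test
    fun generate() = baselineProfileRule.collect(
        packageName = "com.example.app" // hypothetical package name
    ) {
        // App startup journey
        pressHome()
        startActivityAndWait()

        // User journey: open the "For You" tab and scroll its content
        device.findObject(By.text("For You")).click()
        val column = device.findObject(By.scrollable(true))
        column.setGestureMargin(device.displayWidth / 5)
        column.fling(Direction.DOWN)
        column.fling(Direction.UP)
    }
}
```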
The code above collects a baseline profile for app startup and a user journey in which UiAutomator finds the "For You" text, clicks it, and then scrolls a column down and up.
Set up baseline profile modules using Android Studio Canary
We’re working on improving the set up experience for baseline profiles. This new experience is available in the Android Studio Iguana Canary builds.
The Android Studio Baseline Profile Generator module template automates the creation of a new module to generate and benchmark baseline profiles. Running the template generates most of the typical build configuration, baseline profile generation, and verification code. This helps you improve and measure app startup times.
Create a new module and select “Baseline Profile Generator”, go through the dialog and click finish.
Now you can generate baseline profiles directly from the Run dialog in Android Studio.
Automate baseline profile generation with a Gradle plugin
A high priority feature request from developers implementing baseline profiles was to help them with automating the process.
When using AGP 8.0 or newer, you can now use the Baseline Profile Gradle plugin. This plugin gives you full control over baseline profile creation and seamlessly integrates profile generation into the Gradle build pipeline. You can control when the profile is updated and make sure it's always fresh for a new release.
To use the Gradle plugin, apply it to your app and baseline profile modules. You can do this manually, following the guide or with Android Studio Iguana’s new module template for baseline profiles. The plugin offers wide configuration options for high flexibility. You can use Gradle managed devices for ease of running on a CI system. baseline-prof.txt source files no longer have to be part of your app’s source set when using the Gradle plugin. They can be considered reproducibly generated code.
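Applied manually, the app-side setup might look like this sketch; the generator module name is an assumption.

```kotlin
// app module build.gradle.kts — the ":baselineprofile" module name is hypothetical
plugins {
    id("com.android.application")
    id("androidx.baselineprofile")
}

dependencies {
    // Wires the generator module's profile output into this app's build
    baselineProfile(project(":baselineprofile"))
}
```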
Simplified profile filtering empowers complex application and library developers
Large code bases or libraries require filter mechanisms to only include specific classes in a baseline profile. The Gradle plugin provides just that. The new filter block enables including or excluding code via regular expressions.
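A filter configuration could look like this sketch; the package names are hypothetical.

```kotlin
// baselineprofile module build.gradle.kts — package names are assumptions
baselineProfile {
    filter {
        // Only include profile rules for your own code...
        include("com.example.myapp.**")
        // ...and leave out debug-only utilities
        exclude("com.example.myapp.debugtools.**")
    }
}
```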
Arguments to improve benchmarks
New profiling modes
Method tracing and stack sampling modes are now available with Macrobenchmark.
With these profiling modes you can get detailed information on specific areas of the code that's being executed. To enable them, use the androidx.benchmark.profiling.mode argument with the StackSampling or MethodTracing parameters.
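From the command line, the argument can be passed through Gradle like this; the module name is an assumption.

```shell
# Run macrobenchmarks with method tracing enabled;
# swap MethodTracing for StackSampling to sample call stacks instead
./gradlew :macrobenchmark:connectedCheck \
    -P android.testInstrumentationRunnerArguments.androidx.benchmark.profiling.mode=MethodTracing
```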
With MethodTracing you can see exactly which methods are being called while a code block executes. This can help to identify whether the app is doing what you expect it to do at a glance.
StackSampling highlights where the benchmark spends time across all call stacks. With this you can see where the app spends most of the time during a run.
Note: These two modes are known to skew performance metrics. You can rely on them to see what is going on during runtime but do take frame timing or startup results with a grain of salt when method tracing or stack sampling is enabled.
Quicker validation with dry runs
The dryRunMode.enable instrumentation argument enables quicker validation. This can be useful for presubmit environments or during development of a benchmark and brings the flag from Microbenchmark to Macrobenchmark.
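A dry run invocation might look like this sketch; the module name is an assumption.

```shell
# Run all benchmarks as quick single-iteration dry runs for validation
./gradlew :macrobenchmark:connectedCheck \
    -P android.testInstrumentationRunnerArguments.androidx.benchmark.dryRunMode.enable=true
```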
Use Perfetto SDK tracing for higher resolution metrics
To get even more details when benchmarking, you can trace app behavior with Perfetto SDK tracing. While this loads a library during app runtime and slows down StartupMode.COLD benchmarks, it gives you a more detailed and complete picture of what's happening during a benchmark. This can be very useful when you suspect areas of improvement that aren't surfaced by regular tracing.
To enable Perfetto Sdk tracing, add these dependencies to your benchmark module,
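The dependency declarations might look like this sketch; the version numbers are assumptions.

```kotlin
// benchmark module build.gradle.kts — versions are assumptions, check the
// androidx.tracing release notes for the current ones
dependencies {
    implementation("androidx.tracing:tracing-perfetto:1.0.0")
    implementation("androidx.tracing:tracing-perfetto-binary:1.0.0")
}
```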
and run the benchmarks with the perfettoSdkTracing.enable instrumentation argument.
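The instrumentation argument can be passed through Gradle like this; the module name is an assumption.

```shell
# Enable Perfetto SDK tracing for a benchmark run
./gradlew :macrobenchmark:connectedCheck \
    -P android.testInstrumentationRunnerArguments.androidx.benchmark.perfettoSdkTracing.enable=true
```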
New metrics in the Macrobenchmark library provide you with more insights into data and further empower automated monitoring.
Post-process anything with PerfettoTraceProcessor
The PerfettoTraceProcessor API brings every detail from a benchmark result into a single API. It’s highly flexible and enables synchronous and async trace sections, kernel-level scheduling timing, binder events and anything else that’s captured in a Perfetto trace.
Now you can use the same trace querying API in macrobenchmark. This is especially useful to create your own fully custom TraceMetric.
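As a sketch, post-processing a recorded trace could look like the following; the trace path and SQL query are assumptions, and the exact API shapes may differ slightly.

```kotlin
import androidx.benchmark.perfetto.PerfettoTrace
import androidx.benchmark.perfetto.PerfettoTraceProcessor

// Sketch: query frame slice durations from a recorded Perfetto trace.
// The trace path and query are hypothetical.
val frameDurationsNs: List<Long> = PerfettoTraceProcessor.runServer {
    loadTrace(PerfettoTrace("/sdcard/benchmark.perfetto-trace")) {
        query("SELECT dur FROM slice WHERE name LIKE 'Choreographer#doFrame%'")
            .map { row -> row.long("dur") }
            .toList()
    }
}
```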
Surface more details with TraceSectionMetric and TraceMetric
These trace specific APIs enable quick processing of specific sections and traces of an app.
The TraceSectionMetric enables capturing custom trace sections in an app. For example, you can use it to capture how long specific parts of activity startup take. This example shows how you can collect the duration of the onStart and onResume methods. Additionally, the sample below aggregates the time it took to render frames by capturing the doFrame trace sections using Mode.Sum.
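A macrobenchmark using these metrics might look like this sketch; the package name and section names are assumptions and must match trace sections your app actually emits.

```kotlin
// Sketch: capturing lifecycle sections and summed frame rendering time.
// benchmarkRule is a MacrobenchmarkRule; names are hypothetical.
@Test
fun startupTraceSections() = benchmarkRule.measureRepeated(
    packageName = "com.example.app",
    metrics = listOf(
        StartupTimingMetric(),
        // Duration of individual lifecycle sections
        TraceSectionMetric("MyActivity#onStart", TraceSectionMetric.Mode.First),
        TraceSectionMetric("MyActivity#onResume", TraceSectionMetric.Mode.First),
        // Total time spent rendering frames, summed across all doFrame slices
        TraceSectionMetric("Choreographer#doFrame%", TraceSectionMetric.Mode.Sum),
    ),
    iterations = 5,
    startupMode = StartupMode.COLD,
) {
    pressHome()
    startActivityAndWait()
}
```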
These metrics are all surfaced as part of the benchmark results and can help inform decisions on where to optimize as well as monitor for potential performance regressions.
Macrobenchmark now also ships with the new TraceMetric. This API enables you to create your own complex metrics from macrobenchmark data as it enables you to run arbitrary queries against Perfetto traces. You can take any data that is captured in the Perfetto trace during a benchmark run and turn it into a metric. You can learn more about how to use the available data in the Perfetto documentation.
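A custom TraceMetric could be sketched as follows; the query, metric name, and exact override signature are assumptions based on the experimental API surface.

```kotlin
// Sketch: a fully custom TraceMetric counting binder transactions in the trace.
// Query and names are hypothetical; the experimental API may differ slightly.
@OptIn(ExperimentalMetricApi::class)
class BinderTransactionCountMetric : TraceMetric() {
    override fun getResult(
        captureInfo: CaptureInfo,
        traceSession: PerfettoTraceProcessor.Session,
    ): List<Measurement> {
        val count = traceSession
            .query("SELECT COUNT(*) AS cnt FROM slice WHERE name = 'binder transaction'")
            .first()
            .long("cnt")
        return listOf(Measurement("binderTransactionCount", count.toDouble()))
    }
}
```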
Understand power consumption with PowerMetric
PowerMetric captures the change of power, energy, or battery charge metrics over a specified duration.
This metric provides details about Battery, Energy, and Power when running on a device with power rails such as Pixel 6 or newer.
With this information you can verify how resource hungry a specific part of an application is or how different polling intervals of APIs such as location services affect battery levels. And you can even see how animations or dark mode affect power consumption and tweak how your application can save power on different display types such as AMOLED or OLED.
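A benchmark measuring energy use during a journey might look like this sketch; the package name and workload are assumptions.

```kotlin
// Sketch: measure total energy consumed while exercising the app on a
// device with power rails (e.g. Pixel 6 or newer). Names are hypothetical.
@Test
fun scrollEnergy() = benchmarkRule.measureRepeated(
    packageName = "com.example.app",
    metrics = listOf(
        // Total energy per power rail subsystem
        PowerMetric(PowerMetric.Energy()),
    ),
    iterations = 5,
) {
    startActivityAndWait()
    // ...scroll or animate content here to exercise the display and CPU...
}
```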
Profiling a microbenchmark is important, so we’ve made the profiling iteration loop faster, and more configurable. Profiling now runs as a separate phase in microbenchmarks, after all metrics are collected, so you can see both profiling results and measurements, or run profiling continuously in CI.
We’ve also added an experimental API that lets you select a profiler in code while you’re iterating on the microbenchmark. It also lets you configure tracing, such as enabling the custom `trace()` sections via shouldEnableAppTagTracing; these are off by default in Microbenchmark to avoid interfering with measurements.
Configure Microbenchmark behavior from within
With MicrobenchmarkConfig you can configure the way to capture a benchmark without having to rely on instrumentation arguments.
This enables configuring profiling or trace behavior, meaning you can enable trace app tags or Perfetto SDK tracing per benchmark rule.
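In code, this could look like the following sketch; the parameter names are assumptions based on the release notes and may differ in the final API.

```kotlin
// Sketch: configuring a microbenchmark in code instead of via
// instrumentation arguments. Parameter names are assumptions.
@get:Rule
val benchmarkRule = BenchmarkRule(
    MicrobenchmarkConfig(
        traceAppTagEnabled = true,        // capture custom trace() sections
        perfettoSdkTracingEnabled = true, // capture Perfetto SDK tracing
    )
)
```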
With these enabled you see a new “Trace” and “Stack Sampling Trace” link in benchmark results, which can be inspected using Android Studio Profilers.
To enable tracing, you need to add dependencies on both the Perfetto tracing and Perfetto binary libraries to your microbenchmark tests. Failing to do so results in this runtime exception:
java.lang.RuntimeException: Issue while enabling Perfetto SDK tracing in com.example.benchmark.test:
Error: Unable to locate libtracing_perfetto.so required to enable Perfetto SDK. Tried inside /data/app/com.example.benchmark.test/base.apk.
Control benchmark state without JUnit
BenchmarkState allows querying the state of a benchmark without relying on JUnit APIs. You can instantiate BenchmarkState directly, which is particularly useful when you’re not using JUnit4.
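Driving a measurement loop directly could be sketched like this; the workload is hypothetical and the constructor is part of the experimental API.

```kotlin
// Sketch: using BenchmarkState without the JUnit4 BenchmarkRule
// (experimental API; opt-in may be required)
fun measureSort() {
    val state = BenchmarkState()
    while (state.keepRunning()) {
        listOf(5, 3, 1, 4, 2).sorted() // hypothetical workload to measure
    }
}
```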
Capture and customize Perfetto traces
The PerfettoTraceRule helps you analyze test performance in detail. When using PerfettoTrace.record you can write trace-based unit tests or create your own test infrastructure on top of Perfetto traces.
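Recording a trace around arbitrary code might look like this sketch; the file label is an assumption.

```kotlin
// Sketch: record a Perfetto trace around arbitrary test code.
// The file label is hypothetical; trace sections emitted inside the
// block end up in the recorded trace.
PerfettoTrace.record("my-custom-trace") {
    // ...code under test...
}
```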
Next to all these new features, bugs were fixed and performance of the library group improved. To see everything that changed in between version 1.1.1 and 1.2.0, read the release notes.