That means nothing will happen until the first pointer event is fired — for example, when a user presses down on the screen.
Why would we make such a significant change? Performance! 🙂
Similar to LazyColumn, we don’t want to execute work until it is needed.
The previous implementation executed each pointer input block of code up to the first awaitPointerEvent() call, regardless of whether a user ever interacted with the composable.
The Compose gesture detectors (tapping/pressing, dragging, multi-touch, and so on) are built on top of pointerInput() as well, which means that any time you use a gesture detector, a good chunk of code runs. In many cases, that code is never actually needed, for instance, if the user never interacts with your composable.
The new implementation avoids that: none of the code executes until it is needed, that is, until a user actually interacts with your composable.
Let’s revisit the example above to see how it has changed in the new implementation:
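A minimal sketch of such a pointerInput() block is shown below. The Box usage and the event-handling body are illustrative assumptions, not the article's original example; only pointerInput(), awaitPointerEventScope(), and awaitPointerEvent() are the real APIs being discussed:

```kotlin
import androidx.compose.foundation.layout.Box
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.pointer.pointerInput

@Composable
fun PointerLoggingBox() {
    Box(
        Modifier.pointerInput(Unit) {
            // With the new implementation, nothing below runs until the
            // first pointer event is dispatched to this composable.
            // If the user never touches it, this block never executes.
            awaitPointerEventScope {
                while (true) {
                    // Suspends here until the next pointer event arrives.
                    val event = awaitPointerEvent()
                    // Handle the event (placeholder).
                    println("Pointer event: ${event.type}")
                }
            }
        }
    )
}
```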
Now, the entire block (see the comments) does not execute until the first event comes in; if no pointer event ever arrives, it never executes.
Once an event is triggered, the block runs up through the first awaitPointerEvent() call (which returns that event), then suspends again at each subsequent awaitPointerEvent() call until the next event is triggered.
If you have a custom gesture handler (or any code using pointerInput()) where you rely on code in that block executing right away (before an event or even if there isn’t an event), you will want to adjust your code to take this new change into account.
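One way to adjust is to hoist eager setup work out of the pointerInput() block and into composition. In this sketch, VelocityTracker is just an illustrative stand-in for setup work you might have relied on running immediately; the composable name is hypothetical:

```kotlin
import androidx.compose.foundation.layout.Box
import androidx.compose.runtime.Composable
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.pointer.pointerInput
import androidx.compose.ui.input.pointer.util.VelocityTracker

@Composable
fun TrackedBox() {
    // Creating the tracker inside the pointerInput block would now be
    // deferred until the first pointer event. Hoisting it into composition
    // (via remember) guarantees it exists right away, event or no event.
    val tracker = remember { VelocityTracker() }
    Box(
        Modifier.pointerInput(tracker) {
            awaitPointerEventScope {
                while (true) {
                    val event = awaitPointerEvent()
                    event.changes.forEach { change ->
                        tracker.addPosition(change.uptimeMillis, change.position)
                    }
                }
            }
        }
    )
}
```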
This is probably a pretty rare occurrence for most of you, but it is something to keep in mind as you get the improvements.