Many apps provide some form of multi-select behavior, often letting you drag to select a whole range of elements. For example, Google Photos lets you easily select a whole range of photos to share, add to an album, or delete. In this blog post, we’ll implement similar behavior with this end goal:
The steps we will take to get to this end result:
Implement a basic grid
Add the selection state to the grid elements
Add gesture handling so we can select / deselect elements with drag
Finishing touches to make the elements look like photos
We implement this grid as a LazyVerticalGrid, so that the app works well on all screen sizes. Larger screens will show more columns, smaller screens fewer.
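Since the original snippet isn’t shown here, a minimal sketch of such a grid could look like this (the item count, spacing, and cell size are illustrative assumptions):

```kotlin
import androidx.compose.foundation.layout.Arrangement
import androidx.compose.foundation.layout.aspectRatio
import androidx.compose.foundation.lazy.grid.GridCells
import androidx.compose.foundation.lazy.grid.LazyVerticalGrid
import androidx.compose.foundation.lazy.grid.items
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.Surface
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun PhotoGrid(photos: List<Int> = List(100) { it }) {
    // Adaptive cells: more columns on larger screens, fewer on smaller ones
    LazyVerticalGrid(
        columns = GridCells.Adaptive(minSize = 128.dp),
        verticalArrangement = Arrangement.spacedBy(3.dp),
        horizontalArrangement = Arrangement.spacedBy(3.dp)
    ) {
        items(photos, key = { it }) {
            // Placeholder: a plain colored surface stands in for a photo
            Surface(
                modifier = Modifier.aspectRatio(1f),
                color = MaterialTheme.colorScheme.primaryContainer
            ) {}
        }
    }
}
```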
We’re already referring to the elements as photos, even though we’re just showing a simple colored Surface at this point in time. With just these couple of lines of code, we already have a nice grid that we can scroll through:
However, a simple grid doesn’t bring us very far on our multi-select journey. We need to track the currently selected items and whether we’re in selection mode, and make our elements reflect that state.
First, let’s extract our grid items into their own composable that reflects their selection state. This composable will:
Be empty if the user is not in selection mode
Show an empty radio button when the user is in selection mode and the element is not selected
Show a checkmark when the user is in selection mode and the element is selected
This composable is stateless, as it doesn’t hold any of its own state. It simply reflects the state you pass into it.
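A sketch of such a stateless item composable — the icon choices, colors, and the `ImageItem` signature here are assumptions standing in for the original snippet:

```kotlin
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.aspectRatio
import androidx.compose.foundation.layout.padding
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.CheckCircle
import androidx.compose.material.icons.outlined.RadioButtonUnchecked
import androidx.compose.material3.Icon
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.Surface
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun ImageItem(
    selected: Boolean,
    inSelectionMode: Boolean,
    modifier: Modifier = Modifier
) {
    Surface(
        modifier = modifier.aspectRatio(1f),
        color = MaterialTheme.colorScheme.primaryContainer
    ) {
        Box {
            // Nothing is drawn on top unless we're in selection mode
            if (inSelectionMode) {
                if (selected) {
                    // Checkmark for a selected element
                    Icon(
                        Icons.Filled.CheckCircle,
                        contentDescription = null,
                        modifier = Modifier.padding(4.dp)
                    )
                } else {
                    // Empty radio button for an unselected element
                    Icon(
                        Icons.Outlined.RadioButtonUnchecked,
                        contentDescription = null,
                        modifier = Modifier.padding(4.dp)
                    )
                }
            }
        }
    }
}
```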
To make the items respond to their selected states, the grid should keep track of these states. Also, the user should be able to change the selected value by interacting with the items in the grid. For now, we will simply toggle an item’s selected state when the user taps it:
We track the selected items in a set. When the user clicks one of the ImageItem instances, the id of that item is added or removed from the set.
Whether we’re in selection mode is defined by checking if there are any currently selected elements. Whenever the set of selected ids changes, this variable will automatically be recalculated.
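The state tracking described above might look like this sketch, assuming the `ImageItem` composable from the previous step (names are assumptions):

```kotlin
import androidx.compose.foundation.clickable
import androidx.compose.foundation.lazy.grid.GridCells
import androidx.compose.foundation.lazy.grid.LazyVerticalGrid
import androidx.compose.foundation.lazy.grid.items
import androidx.compose.runtime.Composable
import androidx.compose.runtime.derivedStateOf
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.saveable.rememberSaveable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun PhotoGrid(photos: List<Int> = List(100) { it }) {
    // The set of selected ids survives configuration changes
    val selectedIds = rememberSaveable { mutableStateOf(emptySet<Int>()) }
    // Recalculated automatically whenever the set of selected ids changes
    val inSelectionMode by remember { derivedStateOf { selectedIds.value.isNotEmpty() } }

    LazyVerticalGrid(columns = GridCells.Adaptive(minSize = 128.dp)) {
        items(photos, key = { it }) { id ->
            val selected = selectedIds.value.contains(id)
            ImageItem(
                selected = selected,
                inSelectionMode = inSelectionMode,
                modifier = Modifier.clickable {
                    // Toggle this item's membership in the selection set
                    selectedIds.value =
                        if (selected) selectedIds.value - id else selectedIds.value + id
                }
            )
        }
    }
}
```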
With this addition, we can now add and remove elements from the selection by clicking them:
Now that we are tracking state, we can implement the correct gestures that should add and remove elements from the selection. Our requirements are as follows:
1. Enter selection mode by long-pressing an element
2. Drag after the long press to add or remove all elements between the origin and the target element
3. When in selection mode, add or remove elements by clicking them
4. Long-pressing an already selected element doesn’t do anything
The second requirement is the trickiest. As we will have to adapt the set of selected ids during drag, we need to add the gesture handling to the grid, not the elements themselves. We need to do our own hit detection to figure out which element in the grid the pointer is currently pointing at. This is possible with a combination of LazyGridState and the drag change position.
To start, let’s hoist the LazyGridState out of the lazy grid and pass it on towards our custom gesture handler. This allows us to read grid information and use it elsewhere. More specifically, we can use it to figure out which item in the grid the user is currently pointing at.
We can utilize the pointerInput modifier and the detectDragGesturesAfterLongPress method to set up our drag handling:
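A sketch of that gesture modifier (the `photoGridDragHandler` name and its parameters are assumptions):

```kotlin
import androidx.compose.foundation.gestures.detectDragGesturesAfterLongPress
import androidx.compose.foundation.lazy.grid.LazyGridState
import androidx.compose.runtime.MutableState
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.pointer.pointerInput

fun Modifier.photoGridDragHandler(
    lazyGridState: LazyGridState,
    selectedIds: MutableState<Set<Int>>
) = pointerInput(Unit) {
    // The element the drag started from, and the element currently under the pointer
    var initialKey: Int? = null
    var currentKey: Int? = null
    detectDragGesturesAfterLongPress(
        onDragStart = { offset -> /* set the initial key, see onDragStart below */ },
        onDragCancel = { initialKey = null },
        onDragEnd = { initialKey = null },
        onDrag = { change, _ -> /* update the current key and the selection */ }
    )
}
```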
As you can see in this code snippet, we’re tracking the initialKey and the currentKey internally in the gesture handler. We’ll need to set the initial key on drag start, and update the current key whenever the user moves to a different element with their pointer.
Let’s first implement onDragStart:
Walking through this step by step, this method:
Finds the key of the item underneath the pointer, if any. This represents the element that the user is long-pressing and will start the drag gesture from.
If it finds an item (the user is pointing at an element in the grid), it checks if this item is still unselected (thereby fulfilling requirement 4).
Sets both the initial and the current key to this key value, and proactively adds it to the list of selected elements.
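Put together, `onDragStart` could look like this fragment, which slots into the `detectDragGesturesAfterLongPress` call (a sketch; exact naming is an assumption):

```kotlin
onDragStart = { offset ->
    // Find the key of the item underneath the pointer, if any
    lazyGridState.gridItemKeyAtPosition(offset)?.let { key ->
        // Only start a drag selection from an element that isn't selected yet
        // (requirement 4: long-press on a selected element does nothing)
        if (!selectedIds.value.contains(key)) {
            initialKey = key
            currentKey = key
            // Proactively add the element to the selection
            selectedIds.value += key
        }
    }
}
```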
We have to implement the helper method gridItemKeyAtPosition ourselves:
For each visible item in the grid, this method checks if the hitPoint falls within its bounds.
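A sketch of that hit detection, built on LazyGridState’s layoutInfo:

```kotlin
import androidx.compose.foundation.lazy.grid.LazyGridState
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.unit.round
import androidx.compose.ui.unit.toIntRect

fun LazyGridState.gridItemKeyAtPosition(hitPoint: Offset): Int? =
    layoutInfo.visibleItemsInfo.find { itemInfo ->
        // Translate the hit point into the item's own coordinate space
        // and check whether it falls inside the item's bounds
        itemInfo.size.toIntRect().contains(hitPoint.round() - itemInfo.offset)
    }?.key as? Int
```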
Now we only need to update the onDrag lambda, which is called regularly while the user moves their pointer over the screen:
A drag is only handled when the initial key is set. Based on the initial key and the current key, this lambda will update the set of selected items. It makes sure that all elements between the initial key and the current key are selected.
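This range logic can be sketched as follows; because one range in each `minus`/`plus` pair is empty depending on the drag direction, both directions are covered:

```kotlin
onDrag = { change, _ ->
    // Only handle the drag when a valid long-press start was detected
    if (initialKey != null) {
        lazyGridState.gridItemKeyAtPosition(change.position)?.let { key ->
            if (currentKey != key) {
                // Drop the previously selected range, then add the new range
                // between the initial element and the element under the pointer
                selectedIds.value = selectedIds.value
                    .minus(initialKey!!..currentKey!!)
                    .minus(currentKey!!..initialKey!!)
                    .plus(initialKey!!..key)
                    .plus(key..initialKey!!)
                currentKey = key
            }
        }
    }
}
```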
With this setup, we can now drag to select multiple elements:
Finally, we need to replace the clickable behavior of the individual elements, so we can add or remove them from the selection while we’re in selection mode. This is also the right time to start thinking about the accessibility of this gesture handler. The custom drag gesture we created with the pointerInput modifier does not have accessibility support, so services like TalkBack will not pick up the long-press-and-drag behavior. Instead, we can offer an alternative selection mechanism for users of accessibility services, letting them enter selection mode by long-pressing an element. We do this by setting the onLongClick semantic property.
The semantics modifier allows you to override or add properties and action handlers used by accessibility services to interact with the screen without relying on touch. Most of the time, the Compose system handles this for you automatically, but in this case we need to explicitly add the long-press behavior.
In addition, by using the toggleable modifier for the item (and only adding it when the user is in selection mode), we make sure TalkBack can inform the user about the item’s current selected state.
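A sketch of that item wiring, assuming the grid’s item content has access to `id`, `selected`, `inSelectionMode`, and `selectedIds` from the earlier state-tracking step:

```kotlin
import androidx.compose.foundation.selection.toggleable
import androidx.compose.ui.Modifier
import androidx.compose.ui.semantics.onLongClick
import androidx.compose.ui.semantics.semantics

// Inside the grid's item content lambda:
if (inSelectionMode) {
    ImageItem(
        selected = selected,
        inSelectionMode = true,
        // toggleable exposes the selected state to accessibility services
        modifier = Modifier.toggleable(
            value = selected,
            onValueChange = {
                selectedIds.value =
                    if (it) selectedIds.value + id else selectedIds.value - id
            }
        )
    )
} else {
    ImageItem(
        selected = false,
        inSelectionMode = false,
        modifier = Modifier.semantics {
            // Offer selection mode to accessibility services via long-press
            onLongClick(label = "Select") {
                selectedIds.value += id
                true
            }
        }
    )
}
```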
As you can see in the screen recording above, we currently can’t drag past the top and bottom edges of the screen, which limits the selection mechanism. We’d like the grid to scroll when the pointer approaches the edges of the screen. Additionally, we should scroll faster the closer the user moves the pointer to the edge of the screen.
The desired end result:
First, we will change our drag handler to be able to set the scroll speed based on the distance from the top or bottom of the container:
As you can see, we update the scroll speed based on the threshold and distance, and make sure to reset the scroll speed when the drag ends or is canceled.
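The speed calculation itself is plain arithmetic, so it can be sketched as a pure function (the name and threshold handling are assumptions); the drag handler would call it with the pointer’s y position on every drag event and reset the speed to zero in onDragEnd and onDragCancel:

```kotlin
/**
 * Maps a pointer y position to an auto-scroll speed: zero in the middle
 * of the container, growing the closer the pointer gets to the top edge
 * (negative, scroll up) or the bottom edge (positive, scroll down).
 */
fun calculateAutoScrollSpeed(
    pointerY: Float,
    containerHeight: Float,
    threshold: Float
): Float {
    val distFromTop = pointerY
    val distFromBottom = containerHeight - pointerY
    return when {
        distFromBottom < threshold -> threshold - distFromBottom
        distFromTop < threshold -> -(threshold - distFromTop)
        else -> 0f
    }
}
```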
However, changing this scroll speed value from the gesture handler doesn’t do anything yet. We need to update the PhotoGrid composable to start scrolling the grid when the value changes:
Whenever the value of the scroll speed variable changes, the LaunchedEffect is retriggered and the scrolling will restart.
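A sketch of that wiring, assuming an `autoScrollSpeed` state holder that the gesture handler writes to:

```kotlin
import androidx.compose.foundation.gestures.scrollBy
import androidx.compose.foundation.lazy.grid.rememberLazyGridState
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import kotlinx.coroutines.delay
import kotlinx.coroutines.isActive

// Inside the PhotoGrid composable:
val lazyGridState = rememberLazyGridState()
val autoScrollSpeed = remember { mutableStateOf(0f) }

// Retriggered whenever autoScrollSpeed changes; keeps scrolling at a
// steady pace even while the pointer is held perfectly still
LaunchedEffect(autoScrollSpeed.value) {
    if (autoScrollSpeed.value != 0f) {
        while (isActive) {
            lazyGridState.scrollBy(autoScrollSpeed.value)
            delay(10)
        }
    }
}
```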
You might wonder why we didn’t directly change the scroll level from within the onDrag handler. The reason is that the onDrag lambda is only called when the user actually moves the pointer! So if the user holds their finger very still on the screen, the scrolling would stop. You might have noticed this scrolling bug in apps before, where you need to “scrub” the bottom of your screen to let it scroll.
With this last addition, the behavior of our grid is quite solid. However, it doesn’t look much like the example we started the blog post with. Let’s make sure that the grid items reflect actual photos:
As you can see, we expanded the list of photos to have a URL in addition to the id. Using that URL, we can load an image in the grid item. When switching between selection modes, the padding and corner shape of that image changes, and we use an animation to make that change appear smoothly.
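A sketch of the final item composable, assuming a simple `Photo` data class with an id and URL and using Coil’s AsyncImage as an example image loader (the animation values are illustrative):

```kotlin
import androidx.compose.animation.core.animateDp
import androidx.compose.animation.core.updateTransition
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.aspectRatio
import androidx.compose.foundation.layout.padding
import androidx.compose.foundation.shape.RoundedCornerShape
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.Surface
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.clip
import androidx.compose.ui.layout.ContentScale
import androidx.compose.ui.unit.dp
import coil.compose.AsyncImage

data class Photo(val id: Int, val url: String)

@Composable
fun ImageItem(
    photo: Photo,
    selected: Boolean,
    inSelectionMode: Boolean,
    modifier: Modifier = Modifier
) {
    Surface(
        modifier = modifier.aspectRatio(1f),
        color = MaterialTheme.colorScheme.surfaceVariant
    ) {
        Box {
            // Animate padding and corner radius when the selection state changes
            val transition = updateTransition(selected, label = "selected")
            val padding by transition.animateDp(label = "padding") { isSelected ->
                if (isSelected) 10.dp else 0.dp
            }
            val corner by transition.animateDp(label = "corner") { isSelected ->
                if (isSelected) 16.dp else 0.dp
            }
            AsyncImage(
                model = photo.url,
                contentDescription = null,
                contentScale = ContentScale.Crop,
                modifier = Modifier
                    .padding(padding)
                    .clip(RoundedCornerShape(corner))
            )
            // The selection icons from the earlier item snippet go on top here
        }
    }
}
```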
Check the full code in this GitHub snippet. With less than 200 lines of code, we created a powerful UI that includes rich interactions.