Long before Google announced the Android Pie name, the company was talking up AI in the latest version of its mobile operating system. Two of the most interesting “AI features” are Adaptive Battery and Adaptive Brightness. Both are on by default in Pie, can be easily disabled (Settings => Battery => Adaptive Battery and Settings => Display => Adaptive Brightness), and work automatically in the background — machine learning handles all the heavy lifting.
And yet, until now Google hasn’t shared much information on how exactly these two features function or what impact Android users might see. That makes sense — the company’s internal dogfood numbers aren’t necessarily representative, and there was a lot of iteration during the beta program (Android Pie saw five developer previews). But three weeks ago, Android Pie was finalized and slowly started to roll out, so we caught up with Google to talk details.
Group product manager Ben Poiesz, in charge of a team responsible for Android’s intelligence features, sat down with us to explain how Adaptive Battery and Adaptive Brightness came about, what gains Google saw during the Android P beta, and the inner workings of the features. Each Android device user has unique preferences and use cases, but nobody in their right mind wants to actively optimize battery life and screen brightness, nor should they have to.
Poiesz’s team worked with DeepMind, an AI company which Google acquired in January 2014, on both these features. This wasn’t a short project — early brainstorming and investigation started a little before Oreo shipped in August 2017.
“Both the Android and the DeepMind teams have a presence in London,” Poiesz said. “They’re in the same building, just different floors. It enabled us to work closely together to figure out what the right strategy was, what the right API was, and to build something. It’s a good number of people from each side, but unfortunately I can’t say exact numbers.”
Where Poiesz does share numbers, keep in mind that we’re talking about the Android P Beta population, which is bigger than in previous Android betas: the beta wasn’t limited to Google phones but also included devices from Sony, Xiaomi, Nokia, Oppo, Vivo, OnePlus, and Essential.
Poiesz does note that the public roll-out of Android Pie could affect the results of Adaptive Battery and Adaptive Brightness due to self-selection bias. Users who sign up for the beta aren’t the same as the broader Android population — they use their devices differently, install different apps, and so on. That said, Poiesz promised the beta population “gave us statistically significant results.”
Adaptive Battery: 5 percent reduction in overall CPU
Adaptive Battery, in a nutshell, is about figuring out which apps you use frequently and keeping those apps in memory, while the apps you don’t use often are purged once you’re finished with them. Put another way, Android Pie can adapt to your usage patterns so that it only spends battery power on the apps Adaptive Battery thinks you’ll need.
Here’s how Poiesz explains the thinking behind the feature.
“Unfortunately, up until P, it’s been pretty top of mind for all people to be wary of what the battery life might be on their phone because of the apps they choose to install, or bugs, or whatever it may be,” he said. This is something Android users should not have to think about, Poiesz believes, but rather Android should take care of.
Google has released various battery life features for Android over the years, such as Doze and App Standby. All the groundwork for Adaptive Battery in Android Pie, however, was laid in Android Oreo with Background Execution Limits.
When apps target Oreo, they have to use jobs and alarms (which are time-bound but don’t have tightly defined start and stop times) for background work, as opposed to background services. As more and more developers adopt this practice, features like Adaptive Battery will become more effective.
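For developers, the shift looks something like this minimal Kotlin sketch of scheduling deferrable work with Android’s JobScheduler. The job ID, the SyncJobService stub, and the specific constraints are our own choices, and manifest registration is omitted:

```kotlin
import android.app.job.JobInfo
import android.app.job.JobParameters
import android.app.job.JobScheduler
import android.app.job.JobService
import android.content.ComponentName
import android.content.Context
import java.util.concurrent.TimeUnit

// Minimal stub of a job; a real app would kick off its sync work in onStartJob.
class SyncJobService : JobService() {
    override fun onStartJob(params: JobParameters?): Boolean = false // no work left pending
    override fun onStopJob(params: JobParameters?): Boolean = false  // don't reschedule
}

// Schedules deferrable background work instead of holding a background service open.
fun scheduleSync(context: Context) {
    val scheduler = context.getSystemService(Context.JOB_SCHEDULER_SERVICE) as JobScheduler
    val job = JobInfo.Builder(42, ComponentName(context, SyncJobService::class.java))
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED) // wait for Wi-Fi
        .setRequiresCharging(true) // let the OS push the work to when you're on power
        .setPeriodic(TimeUnit.HOURS.toMillis(6)) // time-bound, but no exact start/stop
        .build()
    scheduler.schedule(job)
}
```

Constraints like these give the OS exactly the latitude Adaptive Battery relies on: the work is declared, not demanded, so the system can decide when it actually runs.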
“And so Adaptive Battery in P was really looking at ‘can the OS provide a level of independent auditing of what apps you are using and how you are interacting with them?’” Poiesz explained. “And then look at how the apps want to run, and then make much more intelligent decisions about … Well, when should we run jobs? When should we execute alarms? How should this app be allowed to process in the background? Should it be allowed to run off power or wait until it’s on power or wait for some special sort of circumstances where it makes sense? Having the OS have that discretion enables the user to not have to worry about it. That was the primary goal for Adaptive Battery — to bring consistency to the device.”
Consistency doesn’t necessarily mean slower battery drain across the board. It means eliminating the days you run out of juice.
“If you just have one bad battery day a month, you will remember that one bad day,” Poiesz pointed out. “You might be great every other day but if it’s in the back of your head that ‘it could fail for me’ — that’s a stressor. That was the number one goal, was to bring that level of stability so that individual apps that you may install on your device can’t tank your battery.”
Adaptive Battery uses a deep convolutional neural net to predict which apps you’ll use in the next few hours and which apps you probably won’t use until later. DeepMind tried different strategies but settled on a convolutional neural net because it provided the right balance between making correct decisions and keeping the power overhead low. The team relied on DeepMind to choose the right machine learning approach and then build something custom for Android, but it wasn’t clear from the beginning that AI would even play a role.
Maybe Android Pie could simply be smarter about which apps to throttle using classic heuristics? If an app is running in the background, that’s arguably power wasted, and maybe the operating system could defer its work until the user plugs in their device. Doing this in a way that doesn’t break apps was the tricky part, but once that was achieved, it all came down to tuning the heuristics. The next step was to ask a few machine learning experts whether the problem was a good fit for AI.
“They looked at it and said ‘yeah, we think there’s something we could do here,’” Poiesz explained. “So we built the model in such a way where they ‘competed’ against a baseline heuristic. The baseline heuristic is [in] P today. The goal for them was, we have to be better than the baseline — so that means not only does the machine learning have to do better predictions, it has to do enough of a better prediction to outweigh the cost of running the machine learning model. It has to do a lot better, not just a little better. And they were able to. They were crazy professional about the idea of ‘it’s not just machine learning because machine learning. It has to be meaningful.’”
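Expressed as a toy decision rule, the bar Poiesz describes might look like the following sketch, where every name and number is hypothetical:

```kotlin
// Toy illustration of the evaluation bar Poiesz describes; all names and
// numbers are hypothetical. The model "wins" only if its extra savings
// outweigh the cost of running it.
data class Strategy(val name: String, val predictedSavingsMah: Double, val overheadMah: Double) {
    val netSavingsMah: Double get() = predictedSavingsMah - overheadMah
}

fun pickWinner(baseline: Strategy, mlModel: Strategy): Strategy =
    if (mlModel.netSavingsMah > baseline.netSavingsMah) mlModel else baseline

// e.g. pickWinner(Strategy("heuristic", 50.0, 0.5), Strategy("conv net", 80.0, 5.0))
// returns the conv net: 75.0 mAh net beats 49.5 mAh net.
```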
The number Google shared at its I/O developer conference earlier this year (a 30 percent reduction in “CPU wakeups for apps”) was based on internal dogfooding. But that was a vague metric, and now that the beta has wrapped up, the company has concrete results from the much broader beta population:
- A little over 5 percent reduction in overall CPU usage, one of the biggest consumers of power
- Over 15 percent reduction in CPU for certain app segments
- 10 percent reduction in background data transmission, with some apps going as high as 20 percent
For the 5 percent figure, keep in mind that the gains the DeepMind team achieved were even greater, theoretically speaking. Practically, though, the cost of running the machine learning model must be factored in.
“Say you have an app that does a lot of processing; it has a scheduled job,” Poiesz elaborated. “And you’re not using that app but it continues to do processing. This system would then throttle that app until you are on power. And all that CPU that would have been used when you were unplugged can be pushed to when you are on power. That’s just straight-up savings. Because anything you do while you are plugged in is savings. It’s highly dependent on what the app does. You could have an app that doesn’t use background, so the savings is zero. You can also have an app that is really aggressive about doing work in the background, and you could have substantial savings.”
Interestingly, mobile data savings were not a goal, but if you think about it, savings there make sense. “If the apps are running less when you’re unplugged, that means they’re not transmitting either. Data also is power. Because that’s when you’re going to spin up the modem, you’re going to go into a broadcast state, and that needs power. But it also results in consumer data plan savings.”
These numbers sound good and all, but how long does it take for these benefits to take effect?
“For Adaptive Battery, you see it effectively immediately, because some of the rules are foundationally different. There is a base model, so today, if you had a fresh phone on Oreo, you wouldn’t have any apps getting these restrictions. And when the base model is loaded for P, nothing is more lax because these rules didn’t exist before; it’s explicitly more restrictive. So you’ll get a savings immediately, and then it will continue to refine itself and get better.”
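For developers who want to see where Adaptive Battery has sorted their app, Android Pie exposes the outcome of these decisions through its App Standby Buckets API. A small sketch (the log messages and bucket descriptions are our shorthand):

```kotlin
import android.app.usage.UsageStatsManager
import android.content.Context
import android.util.Log

// Requires Android Pie (API 28). Logs the standby bucket the system has
// assigned to the calling app, the visible outcome of Adaptive Battery's predictions.
fun logStandbyBucket(context: Context) {
    val usm = context.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager
    val label = when (usm.appStandbyBucket) {
        UsageStatsManager.STANDBY_BUCKET_ACTIVE -> "active (currently in use)"
        UsageStatsManager.STANDBY_BUCKET_WORKING_SET -> "working set (used regularly)"
        UsageStatsManager.STANDBY_BUCKET_FREQUENT -> "frequent (used often, but not daily)"
        UsageStatsManager.STANDBY_BUCKET_RARE -> "rare (jobs and alarms deferred heavily)"
        else -> "unrecognized bucket"
    }
    Log.d("AdaptiveBattery", "This app's bucket: $label")
}
```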
Adaptive Brightness: 10 percent fewer adjustments
You can think of Adaptive Brightness as automatic brightness that learns. It observes how you set your brightness slider in different circumstances, and then tries to do it for you.
How long does it take for the average user to see benefits? “My personal experience is about a week when I feel like I am no longer tweaking it,” Poiesz shared. “You don’t feel like it’s worse, which is a thing that we really wanted to nail — when you get the upgrade you don’t feel like there’s a downgrade there. It’s just that it continually gets better as you interact with it. I found that after a week I was feeling like I was barely touching it.”
The reason Poiesz couldn’t provide an answer for all users is simple: Everyone is different. Adaptive Brightness takes about a week or so to stabilize, but it keeps learning as long as the user is making interactions. If you’re close to the baseline model, it can take less time. If you’re far, it might be longer.
“Users have different preferences on brightness,” Poiesz explained. “The brightness strategies we had, before the new machine learning-based one, we saw people still having to finagle with the settings a bit. The way we set up the curves just wasn’t ideal — it was still pretty good in the middle, but when you got to extreme cases, like when you were in pitch black or in dead sunlight, it was pretty hard for the user to optimize both scenarios. You had to push the system more towards the bright side or push the system more towards the dark side. Adaptive Brightness is able to leverage the full spectrum much more effectively. After the user gives it enough inputs, it gets better and better at optimizing the curve between the extremes, and how far into the extremes you want to go.”
All of this was tricky because Google was simultaneously trying to improve battery life. It’s not easy to continuously learn where the optimal brightness setting is for every situation without draining the battery.
“The goal for this wasn’t necessarily to save power, because just arbitrarily saving power makes users unhappy because they can’t see the screen well,” Poiesz said. “It was to get the screen power to where it needs to be for the user to feel comfortable with looking at the screen.”
In other words, Google found that when users shifted the dynamic brightness curve for their device, either to be brighter or darker, the system got better at identifying one end of the spectrum but was simultaneously worse at the other end. The machine learning model is more intelligent on both ends of the spectrum at the same time, rather than being biased to one or the other.
Unlike with Adaptive Battery, which requires no input beyond simply using your Android device, Adaptive Brightness really shines when you manually adjust your brightness over time.
“The baseline model that we have in there, if you want something a little bit different, that’s when you have to then give it that input … by interacting with the slider. Once you’ve given it that feedback throughout the day — most people are already messing with the slider, that’s not necessarily a new phenomenon — then it will get better and your need to interact with it goes lower and lower and lower.”
Overall, Poiesz’s team saw people adjusting the slider 10 percent less during the beta, measured simply as the number of times users went in and changed the brightness. Users also shared broadly positive feedback about the feature. At the same time — and this is critical — Android as a whole was not using more power.
“If you overshoot, people are fine. Usually, unless you are in the dark, if you’re a little too bright it’s no biggie. If you’re too dim, it’s a problem. If we’re … avoiding too dim, and not overcorrecting by going too bright, then user perception is ‘wow, this is a better, brighter screen’ even though in reality, that may not be what’s happening; you may just be being more accurate. And that’s what we’re finding: Our total power is effectively the same.”
All of this machine learning led to an interesting finding for the team: The model in Oreo was set too dim. This means that even if you don’t adjust the slider much (or at all), Pie should result in a better brightness experience. You’ll naturally be able to find someone who thought Oreo’s model was just fine, but overall it needed a boost, the team found.
“The new baseline came out with a higher brightness. Having that as a starting point helped. Now that we can use the full range more effectively and these automatic modes, because the algorithm is better — that, combined with having a better base, makes people then feel like their phone is brighter, even though it’s not.”
In short, Pie’s baseline is brighter than Oreo’s, and the new model learns where it needs to dim. Keep in mind that much of the time, your screen can probably be dimmer than you would set it yourself; often people just crank the brightness up and call it a day.
“There’s a funny stat we found: Battery Saver, in the past, we used to halve the brightness, roughly. We took that feature out in P,” Poiesz said. “The reason was, people pushed the brightness back up [after Battery Saver kicked in], and sometimes they would push it up brighter than they had it before. Because it’s hard to eyeball stuff like that. The idea was, if we get the brightness right to what you need — if you turn on Battery Saver, your eyes didn’t change. Brightness is brightness. What ends up happening with machine learning, it settles with the spot that you want, and then you feel happy.”
If you ask people to increase the brightness, they’re not going to pick the optimal level — but the machine learning model, over time, will. People aren’t going to set the screen too dim for themselves, but they are going to overshoot, which uses more power.
“That’s how we get the overall savings. If we avoid the user having to boost, to try and get out of these dim situations — every time you avoid that, and you’re at the optimal, that’s savings,” Poiesz said. “As a human, you’re going to have a hard time finding your optimal brightness. No one really wants to nitpick that. So the model is pretty forgiving in trying to help you find the right level without having to think about it.”
Adaptive Brightness has just two inputs: the ambient lighting from your device’s sensor and the brightness you set. Why not consider other factors, such as whether the input is artificial or natural light? There are a few reasons. First, and most importantly, all the machine learning and optimization is running locally on the device.
“The simpler the model is, the shorter the inference time is, and the cheaper it is to run, and so that saves power,” Poiesz emphasized. “You have to always make sure that the additional cost of running the model is outweighed by the savings.”
The heavy computation doesn’t happen every time your brightness is automatically adjusted — only when you intervene. “That’s when you are doing the heavier lifting — when you are computing what should the model be. When [the device is] saying ‘oh here’s a new input of brightness from the ambient sensor, what should the nits level for the screen be?’ — that’s relatively cheap.” As a result, you won’t notice any meaningful difference in the time it takes for Android Pie to adjust brightness compared to its predecessors.
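Putting the pieces together, here is a hedged Kotlin sketch of that division of labor: two inputs, a personal lux-to-nits curve, cheap inference on every sensor reading, and heavier work only when you touch the slider. The class name, seed curve, and interpolation scheme are our illustration, not Google’s implementation:

```kotlin
import kotlin.math.ln

// Illustrative only: a personal brightness curve built from the feature's two
// inputs, ambient lux and user-chosen nits. The seed points, log-lux
// interpolation, and all names are assumptions, not Google's implementation.
class BrightnessModel {
    // Learned (lux, nits) preference points, seeded with a made-up baseline curve.
    private val samples = mutableListOf(1.0 to 5.0, 100.0 to 80.0, 10_000.0 to 400.0)

    // Heavier path: runs only when the user intervenes with the slider.
    fun onUserAdjusted(ambientLux: Double, chosenNits: Double) {
        samples += ambientLux to chosenNits
        samples.sortBy { it.first } // "retraining" in this toy is just keeping the curve ordered
    }

    // Cheap path: runs on every ambient-light reading (assumes lux > 0).
    fun predictNits(ambientLux: Double): Double {
        val lower = samples.lastOrNull { it.first <= ambientLux } ?: samples.first()
        val upper = samples.firstOrNull { it.first >= ambientLux } ?: samples.last()
        if (lower.first == upper.first) return upper.second
        // Interpolate in log-lux space; perceived brightness is roughly logarithmic.
        val t = (ln(ambientLux) - ln(lower.first)) / (ln(upper.first) - ln(lower.first))
        return lower.second + t * (upper.second - lower.second)
    }
}
```

The asymmetry is the point: updating the curve is rare and user-triggered, while looking up a brightness for a new sensor reading stays cheap enough to run constantly.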
Next, whether you are inside or outside likely does not matter that much when it comes to optimization. But the team did investigate other factors, like what app you’re running and when you’re switching between apps.
“Suddenly having your brightness changed when you do app switches, maybe that’s really unnerving,” Poiesz said. “The most basic things we found to be rather effective, and when we did our dogfooding and then beta, users responded very positively.”
And lastly, this is just the beginning — more optimizations are on the way. “We’re going to look at more options through this year and next as to what more information should we be feeding into the model,” Poiesz explained.
Device Health Services
Arguably the biggest deal with Adaptive Battery and Adaptive Brightness is that they can be updated separately from Android. The team does not have to wait for Android 9.1 or the next over-the-air (OTA) update to improve their models. This is possible because the models reside in an Android app (an APK file): Device Health Services.
For the curious: in addition to Adaptive Battery and Adaptive Brightness, this is the same app responsible for the battery-depletion estimate you see in the Settings app.
“Both these components are updateable outside of the traditional OTA,” Poiesz shared. “That’s going to give us a lot of flexibility to keep on refining models. The [Google Play] store can update the logic. We’ll update it if we have a better model. If things shift, the baseline changes, or if we figure out a better efficiency, then we’ll update it.”
In fact, other Android device manufacturers can choose to use their own models, though Poiesz sounded doubtful that partners will opt for something other than Google’s. “I believe DeepMind will be excited if someone can do better,” he added.
Since this is an APK, could it be backported to previous versions of Android?
“No, unfortunately. We had to do a lot of hooks in the OS, to let this APK provide the data so that the OS can make those decisions. The app isn’t doing the work here, the app is the model. All the work is still happening in the OS.”
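To make that split concrete, here is a purely illustrative sketch, with every name hypothetical: the updatable APK only supplies predictions, while enforcement stays in the OS:

```kotlin
// Every name here is hypothetical; this only illustrates the division of labor
// Poiesz describes: the updatable APK answers questions, the OS keeps authority.
interface AppUsagePredictor {
    // Implemented inside the updatable package (the "model").
    fun hoursUntilNextUse(packageName: String): Double
}

// Lives in the OS: consumes predictions and makes the actual enforcement decision.
class BackgroundPolicy(private val predictor: AppUsagePredictor) {
    fun shouldDeferUntilCharging(packageName: String): Boolean =
        predictor.hoursUntilNextUse(packageName) > 8.0 // made-up threshold
}
```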
Maybe that’s a good thing. It might help push more devices to Pie.