The Sensor Wars and other futures of the smartphone



Imagine a world where battery life is no longer a concern or a restriction. Where would smartphones be, and what should they be? Is it a question of software catching up with hardware, or would it be the other way around?


A couple of months back (I've been lazy) I wrote about the Samsung Galaxy S3 being as much of a disappointment as it is a highly innovative device. Samsung kicked off the right revolution by making better use of the standard hardware any self-respecting mobile ships with these days, trying to make their phone that much smarter. And as expected, while the implementation didn't quite kill battery life, you could still extend the screen-on time by turning all those features off (with the possible exception of Smart Stay).

Google has the right idea with Jelly Bean's Google Now: it relies primarily on signals like Wi-Fi, GPS and aGPS, then combines that data and presents you with the most relevant info it can muster, e.g. your location and the current weather; your location, traffic conditions and the location of your home; your location, nearby public transport stations and their schedules. And so on. Of course, all this becomes useless when the wrong info is presented (perhaps due to sensor error) or nothing pops up at all.
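As a toy illustration of that kind of rule-based fusion: everything below, from the signal names to the rules, is invented for illustration and has nothing to do with Google's actual implementation.

```python
# Toy sketch of rule-based context fusion, in the spirit of Google Now.
# All signal names, rules and data here are made up for illustration.

def suggest_cards(context):
    """Combine raw signals into human-relevant suggestion 'cards'."""
    cards = []
    # Location plus weather feed -> a weather card.
    if "location" in context and "weather" in context:
        cards.append(f"Weather near {context['location']}: {context['weather']}")
    # At work with traffic data available -> a commute card.
    if context.get("location") == context.get("work") and "traffic_home" in context:
        cards.append(f"Traffic on your route home: {context['traffic_home']}")
    # Near a station with timetable data -> a transit card.
    if "nearby_station" in context and "next_departure" in context:
        cards.append(f"Next train from {context['nearby_station']}: {context['next_departure']}")
    return cards

cards = suggest_cards({
    "location": "work",
    "work": "work",
    "weather": "light rain",
    "traffic_home": "heavy",
})
print(cards)  # two cards: weather, then traffic
```

The point isn't the trivial code; it's that each "card" only exists because two or more independent signals agree, which is exactly why a single bad sensor reading produces wrong or missing cards.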

We’re not quite there yet.


So just how is this revolution supposed to happen?
Here are two of the most likely possibilities:

1. Super-efficient hardware combined with a perfectly optimised OS and code.

Processors, sensors and radios die-shrunk to minuscule nanometres would only be the tip of the iceberg. If you've got an inefficient LCD display running the show, everything else is a moot point. The same goes for an OS that drains all the juice with badly written garbage collection, for example, or for third-party apps that come with a resource-hogging service running in the background all the time.

2. Or it would have to be a seemingly never-ending power source.

Sure, battery cell technology seems to have stagnated over the past couple of years; we're not really seeing battery life improve in step with processor muscle, so something isn't adding up.
But don't forget there are other forms of energy out there. The most obvious is sunlight. Solar panels still haven't matured enough to be of much use in trickle-charging smartphones, but there's still hope. There has been plenty of noise about solar-embedded LCD displays for quite a number of years now; just imagine the battery savings you'd get by completely offsetting the power draw of the most energy-demanding piece of hardware on a phone. There's a reason people size up a phone's battery life by gauging total screen-on time while ignoring all other factors such as the processor and radios.
And then there's the stuff of legends: fuel cells. These have also been talked about for years but have yet to enter production due to safety concerns. Recently there's been news of a fuel-cell charger that holds a cartridge of fuel good for charging an iPhone 10-14 times.


Today we’re already at the stage where mobile phones are theoretically more powerful than us humans. Think about it, gyro, accelerometer, thermometer, proximity sensor, gps, altimeter, microphones and speakers, optics, Geiger counter, nfc etc
We humans only have the typical five sense…with the occasional sixth. At the very least, smartphones can see, hear, speak and feel to a certain extent.
Of course it would all be meaningless if no sense can be made out of all that data. Ideally all that data should be freely accessible (assumption being there are no privacy concerns), combined and cross-referenced to make be heads and tails of everything that’s going on around itself.
An example might be as follows:

front-facing camera- recognises the user (for security and other purposes), gestures, emotions, eyeball movement and tracking; privacy awareness (someone looking over your shoulder)

rear-facing camera- constantly looks around for landmarks and combines that data with GPS and aGPS for pinpoint-accurate location at all times; displays road conditions while the user is busy typing; recognises known contacts and kicks off other proximity processes (contact-details exchange, for example); acts as a general third eye, taking pictures or videos in the background of things it recognises and deems important

microphone- listens to its surroundings at all times; recognises media and displays related info when it thinks the time and place are right (e.g. at home, displays the TV schedule or a related Wikipedia entry when it hears a TV show or movie playing); voice command recognition; user recognition by voice

These are only the most common hardware sensors, and we haven't even begun to scratch the surface of the real smart stuff yet. The possibilities are simply endless.
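To make the cross-referencing idea concrete, here's a toy sketch of a phone correlating readings from several sensors; the sensor names, readings and rules are all made up for illustration.

```python
# Toy sketch of cross-referencing readings from several sensors.
# Sensor names, reading values and rules are invented for illustration.

class ContextEngine:
    def __init__(self):
        self.latest = {}  # most recent reading per sensor

    def publish(self, sensor, value):
        """Record a new sensor reading and re-evaluate the context rules."""
        self.latest[sensor] = value
        return self.cross_reference()

    def cross_reference(self):
        actions = []
        # Front camera sees the owner's face AND the microphone hears the
        # owner's voice: two independent signals agree, so unlock.
        if (self.latest.get("front_camera") == "owner_face"
                and self.latest.get("microphone") == "owner_voice"):
            actions.append("unlock")
        # Rear camera spots a landmark: use it to refine the GPS fix.
        if "landmark" in self.latest and "gps" in self.latest:
            actions.append(f"refine {self.latest['gps']} using {self.latest['landmark']}")
        return actions

engine = ContextEngine()
engine.publish("front_camera", "owner_face")   # one signal alone: no action
actions = engine.publish("microphone", "owner_voice")
print(actions)  # -> ['unlock']
```

Note that the face reading alone triggers nothing; only the combination does, which is the whole point of fusing sensors rather than reacting to each one in isolation.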

So what’s going to bring all these sensors together? Qualcomm’s Gimbal.
The SDK is supposed to talk directly to the sensors on the hardware level, making big battery savings in the process and then combining all that data to give contextual information. In fact, a quick look at their “Pricing” page should already pique any techie’s interest. It mentions “Geofencing” and “Interest Sensing” and before you label this as an advertiser’s dream come true, Gimbal has already thought of privacy concerns, also mentioned on the page.
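Geofencing itself is simple to sketch: is the device inside a circle drawn around a point of interest? Below is a generic haversine-distance check; this is a plain illustration with hypothetical coordinates, not Gimbal's actual API.

```python
# Minimal circular geofence check using the haversine formula.
# Generic illustration only; this is NOT Qualcomm Gimbal's API.
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_fence(device, centre, radius_m):
    """True if the device's (lat, lon) lies within radius_m of the centre."""
    return haversine_m(*device, *centre) <= radius_m

shop = (51.5014, -0.1419)  # hypothetical fence centre
print(inside_fence((51.5015, -0.1420), shop, 100))  # ~13 m away -> True
print(inside_fence((51.6000, -0.1419), shop, 100))  # ~11 km away -> False
```

A real SDK would layer the interesting parts on top of this: waking the radios as rarely as possible, batching fence checks, and firing enter/exit events, which is where the claimed battery savings would come from.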

The Sensor Wars

It’s most likely that every manufacturer is going to want a piece of the sensors action once it becomes more technically feasible because it truly is the likely future of smartphones. Expect to see patent fights tightly revolving around smartphone sensors or if you read it back to front: start patenting all those sensors NOW!

As wild, uneducated speculation, Apple might actually be the first to succeed, because they're in the best position to fulfil the "highly optimised OS and coding" part. iOS has always been known to be very well optimised, and Apple can always tighten their grip on third-party apps' code quality.

But in a sense, Samsung and the Galaxy S3 have a head start (you could also count HTC, who came up with "turn the phone over to silence it").
And Gimbal, while likely not the only sensor-interaction SDK in the world, does state that it supports both iOS and Android.

This is going to be rather exciting.


Do you agree with this post?
Feel free to leave me a comment.
