
Airpods as the next platform, Part deux

Posted on May 31, 2019.

A while ago I started developing a thesis on “AirPods as the next platform.” I wrote about it here. Core to that thesis were two new primitives that the hardware exposed, on which developers could build applications not previously possible. Since posting, I’ve spoken with a number of really creative founders who are designing and developing apps native to this new hardware edge. It’s an area where I am spending a fair amount of my investment energy.

I realized this week that, in addition to “always on audio access” and an “always on microphone interface,” there is potentially a third new primitive afforded by the AirPod hardware edge that is worth talking about for a minute. I say potentially because I haven’t studied the constraints and capabilities of the underlying technology yet. Please correct me if you have and I’m wrong here, but…

I’ll call this third primitive “head position.” Before I get into head position, we need to acknowledge the technical specification that enables it. AirPods have a sensor in them called an accelerometer (actually they have two accelerometers: the one I’m interested in measures motion, and the other assists the microphone with speech detection). Your iPhone, your Apple Watch, and your Fitbit have this type of sensor as well. The motion accelerometer is one of two ways, along with a light sensor, that your AirPods know when you’ve put them in your ears, or more generally their orientation and movement through space.

If your AirPods can detect how they are moving through space, and they are in your ears as they move, then for the first time developers can have a sense of a user’s head position at any moment. So who cares if your AirPods know your head position?
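To make that concrete, here is a rough sketch of the shape such a primitive could take for developers. Everything here is hypothetical: there is no public headphone-motion API as of this writing, so the HeadphoneMotionFeed protocol below is invented, modeled loosely on the attitude data Core Motion already exposes for the phone.

```swift
import CoreMotion

// Hypothetical: a feed of attitude samples from the AirPods' motion
// accelerometer, modeled on Core Motion's CMDeviceMotion. No such public
// API exists as of this writing; this is a sketch of what the primitive
// could look like if it were exposed.
protocol HeadphoneMotionFeed {
    // Delivers each new attitude sample (roll, pitch, yaw in radians).
    func onAttitude(_ handler: @escaping (CMAttitude) -> Void)
}

struct HeadPose {
    let yaw: Double    // which way the head is turned, relative to a reference frame
    let pitch: Double  // looking up (+) or down (-)
}

final class HeadPositionTracker {
    private(set) var latestPose: HeadPose?

    func start(feed: HeadphoneMotionFeed) {
        feed.onAttitude { [weak self] attitude in
            // For "what am I looking at?" the yaw and pitch of the head
            // are the interesting pieces of the orientation.
            self?.latestPose = HeadPose(yaw: attitude.yaw, pitch: attitude.pitch)
        }
    }
}
```

The point is simply that head yaw and pitch, sampled continuously, is the new data developers would get to build against.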

I do. Because with head position, applications and users can start to ask “what am I looking at?” The current state of the art in this line of questioning is “what’s here?” And that is a question that Google Maps, Foursquare, Yelp, and a myriad of other valuable applications try to help users answer. But…if you combine “head position” with off-the-shelf mapping data, for example, a user can now look at a place or a statue or anything else and ask the internet what it is without pressing a button, describing it, or even taking her phone out of her pocket.
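As a sketch of that “what am I looking at?” lookup, here is one way head heading, location, and any off-the-shelf places dataset could be combined. The Place type, the tolerance value, and the function names are all illustrative rather than any particular provider’s API: the idea is just to pick the nearby place whose bearing from the user best matches where the head is pointed.

```swift
import Foundation
import CoreLocation

// Illustrative place record; in practice this would come from a places API.
struct Place {
    let name: String
    let coordinate: CLLocationCoordinate2D
}

// Standard great-circle initial bearing from one coordinate to another,
// in degrees clockwise from true north.
func bearing(from: CLLocationCoordinate2D, to: CLLocationCoordinate2D) -> Double {
    let lat1 = from.latitude * .pi / 180, lat2 = to.latitude * .pi / 180
    let dLon = (to.longitude - from.longitude) * .pi / 180
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    let degrees = atan2(y, x) * 180 / .pi
    return (degrees + 360).truncatingRemainder(dividingBy: 360)
}

// Pick the place whose bearing from the user is closest to where the head
// is pointed, within a tolerance cone (degrees).
func whatAmILookingAt(user: CLLocationCoordinate2D,
                      headHeading: Double,          // degrees, 0 = north
                      nearby: [Place],
                      tolerance: Double = 15) -> Place? {
    return nearby
        .map { place -> (Place, Double) in
            let b = bearing(from: user, to: place.coordinate)
            // Smallest angular difference between the two headings.
            let diff = abs((b - headHeading + 540).truncatingRemainder(dividingBy: 360) - 180)
            return (place, diff)
        }
        .filter { $0.1 <= tolerance }
        .min { $0.1 < $1.1 }?
        .0
}
```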

This is interesting because it’s a commonly suggested augmented reality use case that is largely expressed today through the interaction of a user holding their open camera in front of whatever they are looking at in order to ask the same question. The open-camera UX, in my opinion, is clunky, obtrusive, and feels like more of a gimmick than a useful tool. So maybe that “what am I looking at” or “tell me more about what I’m looking at” AR use case will end up being more of an audio AR experience than a visual/camera-based UX, and “head position,” enabled by AirPods as motion sensors attached to your dome, could be the primitive that unlocks it. I’m guessing this data isn’t exposed via an API yet, but when a billion people are walking around with AirPods in their ears all day, bet that it will be, and developers and designers will build awesome native applications against it. As I’ve said before, if you’re building applications that are native to AirPods I’d love to think with you, especially as you look toward your Series A: Jordan.cooper@gmail.com
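And for the audio-AR side of the experience, the answer could be spoken straight into the AirPods rather than drawn on a screen. AVSpeechSynthesizer is a real, shipping iOS API; wiring it to the hypothetical head-position lookup above is the speculative part.

```swift
import AVFoundation

// Speak the result of the lookup into whatever audio output is active,
// which for this use case would be the AirPods themselves.
let synthesizer = AVSpeechSynthesizer()

func announce(placeName: String) {
    let utterance = AVSpeechUtterance(string: "You're looking at \(placeName).")
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}
```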

