A Counterintuitive Proposal for How AR Will Unfold…

Posted on February 4, 2020. Filed under: Uncategorized |

You are walking down Bleecker Street in New York City. You pass a store with a cool jacket in the window. You say, “Hey camera, how much is that jacket?” And a voice replies into your AirPods, “It’s $400.” I think that’s gonna be the way it goes down.

There have been many proposals for how augmented reality (AR) will solve this use case and ones like it. In most demos you see today, a person takes her phone out of her pocket, opens an app that invokes the video camera, and then metadata about whatever is in the frame appears on the screen. I don’t think that’s the way it’s gonna go down.

More futuristic versions of this use case contemplate heads-up displays, where the metadata about the jacket is written visually to a lens on your glasses or even a contact lens. I don’t think that’s the way it’s gonna go down either (at least not this decade).

I think the insight that we will use computer vision to augment the way we process our physical surroundings is more or less a given. Cars are perhaps further along than people in this regard. It seems implausible that this assistive capability will not follow us into all realms of our mobility (i.e. when we get out of our car and walk). What I don’t think is a given is a) that the camera we use to capture our surroundings will be on our phone, or b) that the response to a camera-based query will be displayed visually.

Most read/write situations don’t traverse disparate media. If you capture visual information, it tends to be displayed visually. If you capture audio information, it tends to be played back acoustically. Even if you capture tactile information, it tends to be displayed/processed tactilely.

But in the case of AR, I see the capture/write function and the read function decoupling as it relates to media type. I think we’ll use a wearable, voice-activated camera to capture and query, and I think we’ll listen to the response or results that come from that query.

Ring and other always-on cameras have desensitized us to the reality that we might be recorded when in public. There is definitely a societal learning curve, which Snapchat Spectacles began to climb, around wearing cameras in social settings. But I think consumers have learned what a “wake word” is thanks to Siri and Alexa, and have gotten comfortable with always-on sensors that are known to be asleep unless awoken. My instinct is that voice and audio are paving the way for visual sensors to do the same.

It’s not just a privacy issue that will need to be solved, but also a fashion issue. Snapchat Spectacles came close but not close enough to solving the fashion challenge of wearing sensors on your face. AirPods solved it completely. Whatever camera wins will have to thread this same fashion needle. I could actually see a not-too-distant world where your AirPods have cameras on them. If not, I guarantee you Apple is working on a Siri-controlled wearable camera that is meant for everyday use (as opposed to GoPro use cases, etc.). Interestingly, I see these cameras as being utilitarian as opposed to creative in the near term. Spectacles captured media that was meant for human consumption, and in that way, kind of fell short of the production quality we’ve come to expect from our smartphone cameras. But if the consumer of an image is a machine, the image doesn’t have to be pretty, or compelling, or even particularly high fidelity in order to be valuable…and in that way a hardware manufacturer can bend the typical constraints around size, form factor, cost, battery life, etc. to optimize for fashion and form over image quality.

I think the promise of AR is that we can look up at the world and not down at or even into a screen, and this camera/earbud architecture feels like the closest we can come to an invisible read/write interface that marries our surroundings to machine assisted computation and the internet.

If you are working on any piece of this future, be it hardware, software, or other, I’d be up to jam and maybe even lead your Series A round: jordan@pacecapital.com


    About

    I’m a NYC-based investor and entrepreneur. I've started a few companies and a venture capital firm. You can email me at Jordan.Cooper@gmail.com (p.s. i don’t use spell check…deal with it)
