ViewFinder: Framing Views In Augmented Reality

This was originally published on my thesis blog: Small Big Cities.

I'm halfway through my second year at SVA Interaction Design, and that also means I'm halfway through my grad thesis process. I've been thinking a bunch about augmented reality (AR) in public space.

*Interaction with AR elements leads to confusing user behavior for an observing bystander; this confusion is amplified in public space, where numerous bystanders from different backgrounds meet.*

Can we create public AR that minimizes such confusion? I believe so, and I spent a few weeks at the end of last semester creating Viewfinder (described later in this piece) as one attempt to do so.

It is exciting to think about what AR can do to draw attention to details in our surroundings. We're already interacting with and learning about locations in interesting ways through technology: a whole body of map-based data visualizations exists that marries location with other data to draw insights about that dataset's relationship with place. AR promises software the ability to insert itself into a higher-fidelity world, to call attention to more granular details of our surroundings. We get to go beyond working with locations on a 2D map, and actually interact with points within 3D space. Instead of just sensing that the noise level at a street corner is high, our applications can now pinpoint which particular source is contributing to the noise. Instead of detecting that there is heavy foot traffic at a park, our apps can drill down to a specific feature in that park that people visit.

Unfortunately, existing applications that annotate points in public space haven't been compelling enough to see broad adoption. An example is the use of AR in local discovery, as seen in the Monocle feature in Yelp (left).