A Small Rumination on Virtual Reality
May 3rd, 2013, 12:37 am

Earlier, I tweeted this at an old acquaintance of mine from Second Life, qDot:
@qDot VR in a nutshell: all this extra bandwidth by adding a dimension, and we still haven't figured out how to make it useful beyond porn.
@qDot The translation of the literal to the spatial is still a work in progress. That design problem is still surprisingly underexplored.
Let's unpack what's going on here. qDot and I are both big VR enthusiasts, for entirely different reasons. He's a hardware sort of guy, whereas I spend almost all of my time in software. What I'm saying here is that the spatial design that makes virtual reality actually work is missing from most current designs, and that this design space remains largely underexplored.
This seems strange to me. The human mind is very good at spatial perception and spatial memory. So much so that a common strategy for memorizing long strings of digits is to construct a memory palace that spatially organizes specific sets of digits into mnemonic objects or concepts.
It seems equally strange, then, that our common devices continue to use such a flat design. Even as I sit here typing this, I'm using a display model that's nothing more than an extension of the Xerox PARC GUI. Screen rendering is flat and fixed-axis, with the horizontal and vertical corresponding to the boundaries of my monitor. The third axis, depth, is entirely flat, constrained to compositing windows on top of one another in priority order. Depth of field is removed almost entirely.
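To make that compositing model concrete, here's a toy sketch of it in Python. This is purely illustrative, not any real window system's API; the `Window` type and `composite` function are made up for this example. The point is that "depth" contributes no geometry at all: it's just a stacking priority deciding which window's pixels win where extents overlap.

```python
# Toy painter's-algorithm compositor (illustrative only; not a real
# window system's API). Depth is reduced to a z priority: windows are
# painted bottom-to-top, and higher z simply paints over lower z.

from dataclasses import dataclass

@dataclass
class Window:
    label: str   # single character used to "paint" this window
    x: int       # left edge, in screen cells
    y: int       # top edge, in screen cells
    w: int       # width in cells
    h: int       # height in cells
    z: int       # stacking priority; higher z paints later, i.e. on top

def composite(windows, screen_w, screen_h):
    """Paint windows bottom-to-top by z order into a character grid."""
    screen = [["." for _ in range(screen_w)] for _ in range(screen_h)]
    for win in sorted(windows, key=lambda w: w.z):
        for row in range(win.y, min(win.y + win.h, screen_h)):
            for col in range(win.x, min(win.x + win.w, screen_w)):
                screen[row][col] = win.label  # overdraw anything below
    return ["".join(row) for row in screen]

windows = [
    Window("A", 0, 0, 4, 3, z=0),  # bottom of the stack
    Window("B", 2, 1, 4, 3, z=1),  # partially occludes A
]
for line in composite(windows, 8, 4):
    print(line)
```

Where the two windows overlap, B's cells simply replace A's: the third axis never exists as geometry, only as an ordering over paint operations.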
It's a nice, clean, accessible design that misses out on much of what the brain has to offer in terms of processing power. So, what I really said to qDot above was:
"I think we (as a species) can do this VR thing much better, if we focus on the right spatial design problems."
How, then, does one make existing technologies make sense spatially? The games industry certainly solved it for themselves: look at the jump, for example, between Super Metroid and Metroid Prime.
This is less of a doing and more of an undoing, however. In those older 2D platformers, we were trained to the 2D abstraction. All the newer 3D games needed was to undo the flatland perspective, while retaining (and in many cases, forward-porting) all of the concepts, art, and lessons learned along the way.
I believe that this is so with the state of computing UI, as well. All we need to do is undo the flatland abstraction, while porting what we've learned along the way. I openly have no idea what form that will take, but I believe, from simple analogy and many, many experiments, that it's entirely practical.
It shouldn't surprise anyone, then, that I'm extremely excited about Google Glass, and to a lesser extent, the Oculus Rift. The primary source of my excitement is in how they change the UI model: from a flatland perspective into an overlay of reality.
This imparts in me a sort of visceral zen that I experienced, to a lesser degree, in Second Life. Even with its clumsy interface, terrible lag, and a laundry list of other problems, Second Life provided me with one of the most compelling environments I could tinker within. The sole reason: it offered me a 3D world that I could constantly alter, letting me bring my full mental resources to bear.
I haven't experienced quite the same feeling since, beyond rare real-life operations, working in a 3D modeling tool like Blender, or playing the occasional 3D video game. I miss it. But I feel the return of this model is rapidly approaching, and it fills me with joy that others might finally get to experience it.
It seems trite of me, but I believe this small change in UI may profoundly impact how we see the world and see ourselves. That is, provided developers spend the time learning how to express their user interfaces, designs, and concepts in spatially-oriented, idiomatic ways.
Which is why I'm a software sort of dragon. I like abstraction. I enjoy playing around in the virtual ether and sharing my creations. And I just think (nay, hope) that this will let me express myself in ways that I feel are more like me.