Sustainable Virtual Reality

Pete Markiewicz


Explore features of virtual reality and augmented reality from a sustainable perspective: energy efficiency, accessibility, and UX.


Today I’m going to talk about SustainableUX in virtual reality. I want to discuss how we can use virtual reality to increase both structural and behavioral sustainability. What’s the scope of the talk? We’re going to talk about the theory of design for virtual media, because it is a little bit different from doing sustainability for physical products. We’re going to talk about some of the trends in virtual design, then go into a bit more detail about virtual reality itself and how we apply experience design for UX in VR. Finally, how can we create sustainable strategies for the virtual reality medium? A little bit about myself: I was originally a scientist and worked in biology. In the ’90s I switched over to the internet and the web, and I’ve written a bunch of books on this. Currently, my big interest is virtual reality mediated through the browser, through the new WebVR JavaScript API, which lets anyone who knows web development design virtual reality.

So, sustainable VR. How green is virtual reality? Before I answer that, let’s talk about what’s actually going on. Right now virtual and augmented reality are moving design away from graphic design toward the principles of 3D design. At the same time, we have these voice-only systems. If you look at this diagram, in the year 2000 we had webpages. In 2010 it was really quite similar, in terms of the media we were working with. But by the end of the decade, in 2020, we’re going to see a split into two more media. The first will be immersive media, which is augmented and virtual reality, which I’ve displayed there. Then we’re going to have another category where we get rid of the interface entirely except for a voice. Those are what I call “oracle systems,” things like Siri or Amazon Alexa.

The problem with all of this work is that very often, designers treat building a website or making an app as somehow weightless, as if it’s automatically green. But it’s not true that I’m green just because I replaced a book with a website. So what is the carbon footprint of virtual reality? I can say right now that a website often consumes as much or more energy than printing the same material. In the case of VR, virtual reality uses about seven times as much power as the web. That estimate comes from the fact that to do high quality VR, you have to have a very high performance computer. That computer uses more power, and currently those are rigs that can use up to the power of three refrigerators. Very high end gamers will have a higher energy budget for their virtual experiences than real people in developing countries.

But at the same time, we have a collision going on. One thing we can predict is that virtual reality is going to become much more common. UX to date has focused on 2D design, sometimes product design. In contrast, game design, which is where a lot of VR is going to come from, has operated by totally different rules. The last thing game designers think about is efficiency or sustainability. I call it the “pleasures of uninhibited excess.” To create sustainable behavior, we’re going to have to adapt VR. It can’t be like a game. Fortunately, it can be better than the web, because VR has a long form nature, as I’m going to discuss. Another feature of VR is that it can encourage sustainable behavior much better than other media. My guess is that in the next decade it’s going to be a routine focus of UX designers to encourage more sustainable behavior by creating virtual reality experiences. We can address sustainability with VR, and we cannot leave this to the gamers.

Sustainable virtual design, the principle I use to talk about these virtual media, says that we have to look at both the users and the designers who create websites and apps. We have to have these virtual media encourage users to adopt more sustainable behaviors. We also have to make sure our overall strategy for creating these media does not degrade the internet’s virtual ecosystem. Our goals are efficient use of technology when we’re working in VR. We’re going to want users to have greater focus and attention on the content, so there’s not a lot of meaningless content wasting CPU cycles. We’re going to want them to complete tasks better. In the case of VR especially, we’re going to want an enhanced transfer of the learning that people do inside virtual reality simulations to the outside world.

Now, what are the megatrends? Some of you may be saying, “VR was supposed to be big, but I haven’t seen it. Isn’t it just for gamers?” Actually, that’s not true. It is on a steady growth path. More importantly, it’s not a game. It’s going to turn into a new medium that’s going to be really important in about a decade. What makes it different? Both virtual reality and augmented reality create immersive 3D worlds, unlike a webpage. The behavioral cues, the affordances, the triggers I would use in experience design are different in 3D than they would be for a web or mobile app. The flat design tools we’re used to using for this type of work in UX cannot represent it very well. Another big issue is that people react with their whole body in VR. They don’t just use their hands, which is what we do with typical computers. In terms of whether this is going to be adopted, it is true that right now only some of the extreme gamers can stomach some of the VR out there. But everyone seems to enjoy gentle VR experiences. My prediction is this will become very popular.

What were the milestones last year? The estimate was that about 12% of consumers bought some kind of device. I think that’s been revised down to 7-8%, but it’s significant. These products are showing up at Best Buy. Interestingly enough, not the Apple store, which I’ll talk about. There are consumer devices out there, for someone willing to spring $500-600, that let you walk around a room rather than just sit there in a virtual experience. This year, in 2017, Android software and hardware are going to get built-in VR support. So it’s going to be possible to build VR experiences running on multiple devices using Google Cardboard very effectively. Finally, web browsers are going to get a new JavaScript API built in that’s going to let the people who build webpages now build virtual reality with the same tools they’ve always used. My question is, “Where’s your headset?” Because guess what? If you’re just working on the web, you’re in print design. It’s 1994 and someone is talking to you about the world wide web.

Another point is that VR can do some things better than the web, which is another reason you need to expand beyond the web. It has to do with the fact that VR is not some kind of weird webpage. It’s not a strange movie. It is its own medium. It’s not like the web, it’s not like a console game, it’s not like a movie. It’s actually most like a book, because VR allows long form storytelling, which is something you don’t do very well on the web, and you certainly can’t do in an app. But you can do it with a movie or a book. I’ve listed some of the features of VR. We can see that VR is very immersive. It gives high engagement. Distraction is much lower than on the web. The experience is more temporal, and people are locked into an experience in space and time. They have a very high feeling of presence compared to the web, and especially compared to a mobile app. My interactions are much more intuitive, concrete, and literal. Think about the web. We’ve learned to click these abstract shapes and symbols to do things, but in VR, I grab something. Finally, the one area where VR is lagging is social. Social networks on the web and in apps are really good, but in VR, it’s limited. People are just figuring out how to do that.

The big thing about VR is that the combination with that long form nature I just talked about makes it a very empathic medium. People say VR is the empathy machine. People develop very strong emotional engagement with the content, unlike the web, but like a great book. Users are willing to keep exploring. They have much less desire to leave or click somewhere else, which is one of the features of the web and apps. They have a much stronger sense of having been somewhere when they finish the experience. It’s more like the fourth wall of movies. The higher your resolution and the more 3D it is, the higher people’s empathy. You can see this enormous jump in empathy if you give people high quality VR experiences vs other media. The take home is that VR has more emotional bandwidth than the web. If you’re designing for that, you can cause very strong emotional connections to your message.

The sort of things you can do: you can put someone into a scene where they’ve never been. I call it the “knowing ghost” model, where you’re floating there and experiencing something but maybe not directly part of it. VR can actually rework your body. I have an example of two people switching their bodies in VR; they develop empathy for other individuals. It’s very effective for therapy, because I can simulate upsetting situations. Training people not to get upset by spiders is something that VR is being used for right now. We can also act out behaviors. If you design an experience that encourages sustainable behavior, it’s much better in VR than it will be if you lecture people or give them a manual of what they’re supposed to do. Finally, in VR, we can create very complicated 3D systems and let people examine these systems we always talk about in sustainability for themselves.

There is a dark side. It may be that VR is more addictive than online games. Who knows? There hasn’t been enough experience yet. VR is likely to follow a bell curve in engagement. Eventually, we’re going to get people who are addicts in the medium. It’s also true the addiction may be worse than the web. The problem with the web, compared to VR, is that the web is distracting. People click around to different websites all the time. In VR, you don’t do that. So it’s possible that people just lock in to a single experience and stay there. That may not be a good thing.

I’ve been saying VR is its own medium, so let’s explore that a little more deeply. I’ve listed a group of people who have developed the theory of experience design in VR over the last year. You can look at them here: Adrienne Hunter, Mike Alger, Josh Carpenter. They’re all great to look at. These people have a lot of real insights into how to do things in VR that will actually be effective with your users. We probably have to talk about hardware formats first. One great thing is that smartphones are powerful enough to generate VR. You can take a new iPhone or Android system, put it into something that costs almost nothing, like a Google Cardboard format, and get reasonable VR. That’s great. The higher end version of VR, which is still consumer, is the headset. It uses a dedicated desktop or laptop, and maybe about 15% of your audience at present would have something like this. Some of them will have roomscale: they’ll be able to walk around. But at present their headset will be tethered to a computer in the room, so it is a little bit clumsy. The last thing I should point out, because so many UX people have Macintosh computers: there is no Macintosh powerful enough to use a headset. You cannot plug an Oculus or an HTC Vive or anything like that into a Mac. Macs flat out are not powerful enough. What’s worrying is that Apple is really not doing anything about this. They’ve said something about working on AR, but it does mean that if you’re a Mac-only person, you may have to think outside that box if you’re going to get involved with this medium.

What are the levels of interaction that a UX designer would work with? You can be passive, looking around a world. Which is what people do sometimes with 3D video or 360 video. But what’s much more common is someone looks around and gazes at something, they select an object to interact with by looking at it. In the more complicated systems, you’ll have haptic systems, which are hardware sensors that you’ll hold in your hands or attach to your body in some way. That will allow you to interact or select. There are full body suits but most people don’t have them. Over here I’ve given some examples of what those haptics look like. On the low end systems, they’re glorified game pads. If you have a higher end system, you will have something you hold in your hand, like those rings up at the top, or the Oculus or the HTC solutions. They actually are very good at showing the position of your hands and arms when you’re working in there. They typically will also show the device inside VR so people understand what they’re doing.

Inside VR, another consideration is what people are looking at. The field of view is much wider than it would be for a movie, webpage, or mobile screen. Typically, the fields of view are at least 90°. They can be 110° in a lot of the headsets. People are pushing towards 270°, which would be pretty remarkable if they get there. The other range is depth: most people can’t focus closer than half a meter, so the range you have is between 1-20 meters for interaction in the third dimension.

Mike Alger defines zones for interaction inside there. The no-no zone is less than a meter away from your head. People hate that. They will actually pull off their headset if you do that. There’s an area he calls the hands quadrant, which is a quarter sphere below the user’s arms. Then there’s the very distant zone: you can’t interact beyond 20 meters, which corresponds to about one pixel of width. The danger zone is like the no-no zone except it’s especially worrisome: basically, anything above people’s heads freaks them out. If you have an object, and it comes down, and they look up and see it, they’ll just rip off their headsets. Finally, the higher end devices will be able to draw a wall. If the person is walking around a room and they’re about to walk into a real wall, the devices actually draw a wall so people don’t walk into it. I’m showing, again in Mike Alger’s diagram, the comfort zone, which is much wider than in the traditional media you’re probably used to. Then we have a peripheral zone, where people are aware of things and might turn their head, and the curiosity zone, which they will explore if you’ve made something interesting. In 3D it looks like this: the no-no region and the content zone, and finally the area where you can control and interact, a quarter sphere around the lower half of the body, which is the touch UI zone on this diagram.
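Those distance zones can be sketched as a simple classifier. This is a minimal sketch: the cutoff distances (1 meter and 20 meters) come from the talk, but the function name and the zone labels it returns are my own.

```javascript
// Classify an object's distance from the user's head into Alger-style zones.
// Cutoffs (1 m and 20 m) are the ones described above; labels are illustrative.
function classifyDistance(meters) {
  if (meters < 1) return "no-no";      // closer than 1 m: users will pull off the headset
  if (meters <= 20) return "content";  // comfortable range for interaction
  return "background";                 // beyond 20 m: about one pixel wide, not interactive
}
```

A layout tool could run every scene object through a check like this and flag anything that lands in the no-no zone before the user ever puts on a headset.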

One thing I should also say: text is terrible in 3D. You cannot use text to get your message across. It’s very different from the web or print media. Animated text sometimes works but you have to be careful. Static text is definitely a last resort. We’re going to need new grid systems to describe this.

Another issue is how to get in and out of it. Getting in and out of VR is part of the experience, and it has to be designed. Currently, our entry into the VR scene involves a lot of clumsy and possibly frustrating user actions that have to be designed for efficiency. The WebVR API lets us put VR into webpages. In theory, you could load a webpage and put on your headset. But the fact of the matter is that no one has worked this stuff out yet. It’s a new frontier for experience designers.

A little bit more about technique before we talk about sustainability: the techniques we would have to use given the features of the medium. We can do prototypes in VR. They’re necessary, but they will not be the traditional wireframes. That doesn’t work, because you have a 3D environment. One method people have used is called spherical gray boxing. Instead of making a gray box, the way you would if you were designing an app, you create a gray polygon that substitutes for an object. Another approach is to design a cylinder of interactions. You don’t design the full 3D space; you draw a cylinder around the person and you put your objects on that cylinder. What’s nice about that is it allows you to unroll it into a flat tool, like Adobe Illustrator, then wrap it to test it. I’ve given an example of what these sorts of templates look like. This is showing the content and curiosity zones. Here I’m showing the cylinder example, where someone has taken the flat screen and mapped it into 3D. The tool being used here is Cinema 4D, which is actually used by a lot of VR practitioners.
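The unroll-and-wrap idea has a simple mathematical core: a horizontal position on the flat layout becomes an angle on the cylinder. Here's a minimal sketch of that mapping. The function and parameter names are my own, and tools like Cinema 4D handle this internally; this just shows the geometry.

```javascript
// Map a point (x, y) on a flat layout of width layoutWidth onto a cylinder
// of radius r centered on the user. x wraps around the circumference; y is height.
function wrapToCylinder(x, y, layoutWidth, r) {
  const theta = (x / layoutWidth) * 2 * Math.PI; // fraction across the layout → angle in radians
  return {
    x: r * Math.sin(theta),
    y: y,                    // vertical position carries over unchanged
    z: -r * Math.cos(theta), // theta = 0 places the point directly ahead of the viewer
  };
}
```

Because the mapping is invertible, you can lay out interactions in a flat tool, wrap them for testing, and unroll them again to make edits.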

Responsive design is also important in VR. But here we’re not worried about screen size, because your field of view is full. What we worry about is the quality of the experience in terms of resolution and capability. Our responsive breakpoints would be low-end systems like the Google Cardboard on the left, where I can only move my head. That’s all. I can just look at things. The next level is where I have a larger field of view, about 100°, and I have a button on the headset, so I can select things by looking at them and pressing the button. Then, if I have handheld devices, that gives me a greater degree of freedom in there as a breakpoint. Finally, being able to walk in the VR simulation would be the final breakpoint in VR.
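Those four levels can be expressed as a capability check rather than a screen-width media query. A minimal sketch, where the tier names and the shape of the device object are my own assumptions:

```javascript
// Pick a VR "responsive breakpoint" from device capabilities instead of screen size.
// Each tier corresponds to one of the levels described above.
function vrBreakpoint(device) {
  if (device.roomscale)   return "roomscale";       // user can walk in the simulation
  if (device.controllers) return "controllers";     // handheld devices: more freedom
  if (device.button)      return "gaze-and-button"; // look at things, press a headset button
  return "gaze-only";                               // Cardboard class: head movement only
}
```

An experience would then branch on the tier, for example disabling grab interactions on a gaze-only device rather than leaving them unreachable.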

What are the affordances? I’m giving you some examples from the Samsung Gear VR: the buttons people use in Gear VR scenes. In addition to the physical buttons, you can put a flat overlay on the scene, which I’ve shown on the lower right, that people can select from. A map could also work there. The thing you should not do is a text input field. It’s almost impossible to do. On the other hand, if you make an object, it can behave or animate in such a way that it explains to the user what it’s for. It’s almost like sign language. Another thing you can do is make the object look like the thing it’s supposed to manipulate. Finally, if you’re using a physical haptic device that you hold in your hand, it has to be replicated in the scene as an affordance.

Now, the triggers. With prolonged gaze, you could make an object react and then tell you if it’s ready to go. It would expand or change its appearance if you stare at it. The problem is, you can’t use standard button clicks. You can have someone press a button on the headset, but if you make it where someone pushes their finger out and presses a button, it’s more like putting your finger in a container of water. It really does not work well. The thing that does work is audio. Changing audio cues in an environment is very effective at guiding the experience.
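A prolonged-gaze trigger like this is usually implemented as a dwell timer that resets when the user looks away. Here's a minimal sketch of that pattern; the names and the 1-second threshold in the test are my own illustrative choices, not values from the talk.

```javascript
// A gaze trigger: fires onTrigger once the user has gazed at the object
// continuously for dwellMs milliseconds. Looking away resets the timer.
function makeGazeTrigger(dwellMs, onTrigger) {
  let gazedMs = 0;
  let fired = false;
  return {
    // Call once per frame with the elapsed time and whether the gaze ray hits the object.
    update(deltaMs, isGazedAt) {
      if (!isGazedAt) { gazedMs = 0; return; } // glancing away resets the dwell timer
      gazedMs += deltaMs;
      if (!fired && gazedMs >= dwellMs) {
        fired = true;
        onTrigger(); // here you would expand the object or change its appearance
      }
    },
  };
}
```

In practice the object would also give continuous feedback while the timer runs, such as a reticle that fills up, so the user knows the gaze is being registered.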

Finally, feedback in a VR world. It can give tactile triggers, often a hardware button. The object you’re interacting with can move or change in some way. A new object could appear. You’d have to be very careful with scene dissolves, where you jump from scene to scene, which is the equivalent of jumping from one webpage to another. If you’re not very careful, you will make someone sick or even throw up. As I said before, though, audio changes work really well as feedback. Rather than changing the whole scene, you can open a path, a door, or a gateway to show that I am now in a new state inside my experience.

In terms of handling motion: if you move the scene around without the user’s consent, which is what we do in video when we pan, you’re just going to make them throw up. If people are looking forward and you move them forward, you minimize the negative effects. Another technique that has been used by some VR apps is to reduce the field of view during motion. That turns out to be really effective in designing a VR experience. They’re using that in the Google Earth VR app.
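The field-of-view reduction can be sketched as a simple interpolation between a full and a narrowed view, driven by the user's virtual speed. The specific angles and speed cap below are illustrative assumptions, not values from Google Earth VR.

```javascript
// Narrow the field of view as virtual motion speeds up ("tunneling").
// Returns baseFovDeg at rest, minFovDeg at or above maxSpeed, linear in between.
function comfortFov(baseFovDeg, minFovDeg, speed, maxSpeed) {
  const t = Math.min(Math.abs(speed) / maxSpeed, 1); // 0 at rest, 1 at full speed
  return baseFovDeg - t * (baseFovDeg - minFovDeg);
}
```

Shrinking the view removes peripheral motion cues, which is where most of the visually induced nausea comes from, while the center of the scene stays fully visible.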

What are the experience metaphors you can imagine, moving up to a higher level? You could have a curved wall, which is like being at a trade show and looking around at a booth. That has been used. It’s pretty low end, but maybe it’s a start. A related approach is theater in the round, where events are happening around you, but you’re essentially motionless. You can make room scale caves. That’s most similar to traditional games. It’s quite difficult; to date, I’m not really aware of anything that’s worked very effectively, which tells you, again, that VR is not like a game. One of the most interesting experience metaphors people have come up with is putting the user in the position of being disabled: the user’s limitations match the limitations of VR. At another end, you can make a workbench, where people are moving around virtual objects to create something. If you want to show people things they’ve never seen, one thing that works is what I call the ghost’s dream. You’re in a world and you’re moving like a ghost through it. I’ve seen some extremely effective experiences where you’re part of the world and you’re able to move in it, but at the same time, the people in that world can’t see you. The final metaphor is something I call gone fishing. You’re in an environment and the world moves past you, but very slowly. That turns out to work really well in certain types of experiences.

For the objects, the ground gives you orientation, the sky gives you scale. Audio fade ins can help you understand if you’re undergoing a change in your experience. In terms of what you’re going to be able to select, people will typically put a context reticle, which I’ve shown in this diagram in the lower right. The objects can create paths or rows or ways I can move, or even ways I can move my head. Finally, something that is novel but works really well, is to have a talking object. They can substitute for text that people would normally read.

There are anti-patterns. If you move the user when they aren’t moving, which is a very common error in 360 video, you make them throw up. Static text: they won’t be able to read it. If you teleport them, they barf. If you jump them to a new scene and then start the action immediately, they won’t like it. You need about 15 seconds to adjust to a scene change in VR. If you aren’t allowing feedback, like showing what’s going on with the user’s own body in VR, people tend to find the experience irritating and uncomfortable. Finally, any sort of motion towards the user, especially coming towards their head, will cause them to rip off their helmet. Here are some more anti-patterns. One thing you can put in there is a safety barrier, so that objects are blocked and people understand those objects can’t get too close to them. You might have a little fence these things can’t get past. People have to be able to control where they’re looking. You cannot take over the camera, unlike movies and video, where that works very effectively. Finally, another thing that doesn’t work well at all is to change the person’s size and perspective. The Alice in Wonderland size change or an unnatural speed of motion is likely to put them on the vomit comet.

What have we talked about so far? We’ve talked about why VR is important. The idea it has a very high energy footprint and that we can design here, but because it has such powerful features of empathy, designing in VR may allow people to have more sustainable behaviors. Our goal in SustainableUX in VR is to make the VR itself more sustainable. Which means we make it more efficient in the way that we use it. Finally, we craft messages inside VR that make people use more sustainable behaviors. I call those both structural and storytelling.

What would be the strategy for adding sustainability to a VR project? I call it the Ouroboros Strategy. What you do is create a database, by which I really mean a table, and list all of the elements you’re using to create your VR project. You look at those elements and find out if there are replacements that are more efficient or greener in some way. You score that. If you can, you swap in the greener ingredients for the less green ones. You would quantify this if you could, but it’s very difficult to do carbon footprint calculations on the web, and VR is going to be even more difficult. We know roughly that it uses a lot more power than the web, a lot more resources, but actually calculating it is really not practical. Following the ingredients model can lower the carbon footprint just because you’re picking things that are greener and you keep swapping them into your project.
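The ingredients table can be sketched as a small data structure: score each element, and swap in a greener alternative whenever one beats it. All the names and scores below are made up for illustration; only the swap-the-greener-ingredient rule comes from the strategy itself.

```javascript
// The Ouroboros Strategy as code: keep a table of project ingredients with
// green scores, and swap in any listed alternative that scores greener.
function applyOuroboros(ingredients, alternatives) {
  return ingredients.map((item) => {
    const alt = alternatives[item.name];
    return alt && alt.greenScore > item.greenScore ? alt : item;
  });
}

// Hypothetical ingredient list for a VR project (scores are illustrative).
const project = [
  { name: "hdr-skybox", greenScore: 2 },    // heavy to render
  { name: "ambient-audio", greenScore: 7 }, // cheap and effective
];
const greener = {
  "hdr-skybox": { name: "low-poly-skybox", greenScore: 8 },
};
```

Running `applyOuroboros(project, greener)` after each design iteration, and feeding the result back in as the next iteration's ingredient list, is what makes the strategy an ouroboros.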

Here are some green ingredients you could use in a VR project. One thing that would work is reducing physical discomfort, so you don’t push people into weird positions. Maybe you allow them to sit and move in a relaxed way in the environment. You can build features into VR that minimize energy consumption: you render things only at the level they need to be, and you put objects in there only as they need to be there, rather than going for super high resolution. You can do things that maximize inclusion. Make sure your VR system works for everyone, whether they’re on a low end Google Cardboard or a super high end Oculus. You create things that allow task completion in VR, so you can train and simulate environments. Most importantly, you use the incredibly powerful empathy features of VR, where people connect emotionally to their experience, so that they will then transfer that learning to the real world more effectively than they would if they read a brochure or looked at some website.

VR storytelling, in particular, can compensate for its high carbon footprint. If you’re using empathy features, you can change behavior in the real world. My prediction is that VR is going to be vastly better at changing user behavior than any other medium we’ve ever created. With that in mind, what are VR storytelling methods? One way you can tell the story is that you have people in a video and they’re like the ghost I was talking about. I also call it a God’s throne in a world of pain. You’re looking into this world and you see things that cause an emotional response in yourself. You might observe people in a situation where the environment has been damaged or there’s some other difficulty. Putting the person there, immersed in it, even though they can’t directly touch and interact, will cause a strong emotional reaction. Another way to do it is to show people things and scroll them past. If you move slowly, you can show them a lot of things. It’s almost like a 3D news feed. Yet another way would be to create a simulation. You simulate sustainable behavior, and the user practices sustainable behavior inside the VR environment. That might involve using things there, or they might create custom objects to populate an empty world and choose things that are more sustainable. The final thing you can do is that a VR scene might encourage meditation or contemplation of an idea. Because of VR’s empathy features, people are willing to sit there with one idea in their mind for many minutes, if you portray it well. You don’t have to have a lot of action or movement. They will focus on that idea. I’m showing the Zen Parade, which is a great example of a VR experience like that.

So what are the stories you could tell? You could show a complex process, explaining something about the environmental movement or climate change. You build a model, and people can walk around, explore, and look inside it. They can get inside it. It can be much more complicated than a print or web model. Then people can play with it and understand things more deeply. Another very interesting idea: we tell customer journeys in storytelling, but the fact of the matter is that a customer journey is basically a VR experience. We don’t need to translate it to a wireframe or put it on a flat screen. The customer journey is our first generation VR experience. That’s one way to think about how you would fit VR into your workflow. You place people in scenes that arouse emotional empathy, and perhaps you can even have people, to some extent, suffer the consequences of bad choices inside the VR simulation. Then you have people using VR to practice sustainable behavior.

Here’s what you need to do. Pretty short. If you’re a UX designer and you don’t know about VR and you’re interested in sustainability, all I can say is, “Where’s your headset?” You need to move into this area now. In the next decade, this is going to be the most powerful medium for creating sustainable behavior that we’ve ever had. Here’s your road. You need to get a headset. A cheap one would be something like Google Daydream or Samsung Gear VR, which might work with your current mobile phone. The next thing you can do is experiment with some of the commercially produced worlds, like the Steam community. You will have to learn a little bit about audio to do VR, because audio is so important in VR. It’s very effective as an engagement device. You probably do need to go find those blogs. There are several blogs now whose titles are variations on “UX in VR.” That would be a place you could start learning. Because VR encourages long form thinking, you can move away from the idea that on the web you’re always distracted and constantly have to reengage attention; you can do long form storytelling inside VR. That’s why I said maybe your customer journey already is a VR experience. Finally, you have to develop a strategy for creating deliverables that describe VR experiences. They won’t be traditional wireframes. They’ll be some of the examples I gave in this talk.

That’s the end of my show. I want to thank you for your attention. Thanks very much.


My background is in science. I originally graduated with a doctorate in Molecular and Cellular Biology from the University of Chicago. Later, I researched multivalent vaccines, and ultimately landed at UCLA, where I studied protein structure and evolution under Dr. Jeffry Miller. In late 1993, I made a huge career shift from biology to Interactive Design and Development. I left UCLA to co-found Indiespace (formerly Kaleidospace) with Jeannie Novak. Indiespace went online in March 1994, and was the first web-based arts & entertainment company to sell indie products. During the 1990s and early 2000s, Indiespace was the prototype for modern video and music streaming websites. The site is still available online today. In addition to Indiespace, I also co-wrote three books on the Internet, technology, and entertainment with Jeannie Novak, available on Amazon.

I have a strong interest in generations, and beginning in the early 2000s, I worked with William Strauss & Neil Howe of Lifecourse Associates, the creators of the “Millennial” generation concept. I was a co-writer of their book “Millennials and the Pop Culture” from 2006, which predicted many of the trends we see in Millennials today. I also developed seminars on the Millennial Generation, pop culture, and virtual worlds for USC’s CTM Programs at the Marshall School of Business.

In 2005, I was a Team Leader for Team Robomonster, a robotic, self-driving rock vehicle entered in the DARPA 2005 Grand Challenge. The team, composed mostly of Web Design and Development students, made it to the second of three rounds in the contest. Currently, I teach Interactive and Web Design at the Art Institute of California, Los Angeles. I also do freelance consulting work with web, game, and virtual reality companies.

In the last few years, my web experience led me to become concerned with the long-term sustainability of the online world. My main interest here is extending web sustainability beyond web performance optimization (WPO) alone, to create a framework for sustainability modeled on those found in other fields, including Architecture and Industrial Design. I’ve also been developing “Green Boilerplate,” a concept for a sustainable web architecture that can be used as a starting point for web-based projects.
Sustainable Design Blog
Green Boilerplate
Amazon Author Page (has a Robomonster video)
Millennials and Pop Culture