And now for something completely different! Following advice from an industry expert, I decided to stretch my design skills by exploring Virtual Reality, and teamed up with two fellow MS-HCI students to construct the following experience:
Alone in a strange and dark forest, you find a lantern, your sole source of light and comfort. But soon you are joined by two characters, and each makes a compelling argument as to why you should give the lantern to them. Who do you trust? You decide! This virtual reality experience places you at the penultimate moment of Cartoon Network's Emmy Award-winning "Over the Garden Wall" mini-series.
This experience was coded in Unity and implemented for Oculus Rift; scroll down to find a brief video about our implementation.
Concept
In a dramatic scene, the protagonist is asked for the lantern by each of the two non-playable characters: “the Beast” and “the Woodsman”. Our team wanted to investigate whether the appearance, actions, and voice of the two characters would affect which one the interactor (the person wearing the VR headset) would trust with the lantern. The Beast is furtive: he hides in the woods just out of the light and evades the interactor's gaze, yet he has a mellifluous voice and sounds helpful. The Woodsman steps right into view, but his voice and manner evoke a cranky old neighbor (“get off my yard, sonny”); he warns about the Beast but offers the interactor nothing helpful.
The team needed to learn and build simultaneously on a tight schedule, so we adopted an approach informed by Agile principles: pick a part of the project, build it quickly to "good enough" quality, then move on to another part (and repeat). We agreed to use prebuilt assets whenever that saved time, preferring free assets when they were of sufficient quality.
Design
The team started by creating a shared understanding of how the experience would unfold, walking through the scenario together until we agreed on how it would play out. I documented that understanding in a storyboard using Google Slides (incorporating several images created by a teammate); it also includes a “VR map” describing how the location of the interactor affects the positions of the non-playable characters. This storyboard helped us create the initial project plan and served as a useful touchstone throughout the project. Click on the thumbnail below to download a PDF of the entire document.
The team agreed that using the distinctive original voices from the series was important for measuring trust. This meant we had to construct a dialogue that fit our scenario entirely from sound clips taken from the series. I listened to each episode to identify useful quotes from the two characters, then got creative to assemble a meaningful script from those quotes. A teammate performed the audio editing to compile the sound clips. Click on the thumbnail below to download a PDF of the entire document.
Implementation
I created the forest, and made it appropriately dark and spooky. First I explored Unity’s standard assets and found free tree assets built with SpeedTree. I did a quick test build to determine whether we could get a “good enough” effect from SpeedTree: could this save the time (or cost) of other approaches? After the test build, I showed the results to the team and recommended we use SpeedTree and focus our attention elsewhere. The team agreed.
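For the curious, a quick test build along these lines can be done with a few lines of Unity C#. Below is a minimal sketch of a scatter script for filling a scene with trees; the prefab reference, counts, and radius are illustrative placeholders, not our exact values.

```csharp
using UnityEngine;

// Minimal sketch: scatter instances of a tree prefab (e.g. a free
// SpeedTree asset) at random positions to judge whether a dense
// forest reads as "good enough". All values here are placeholders.
public class ForestScatter : MonoBehaviour
{
    public GameObject treePrefab;   // a SpeedTree prefab, assigned in the Inspector
    public int treeCount = 200;
    public float radius = 50f;

    void Start()
    {
        for (int i = 0; i < treeCount; i++)
        {
            // Random point on the ground plane within the forest radius.
            Vector2 p = Random.insideUnitCircle * radius;
            Vector3 pos = new Vector3(p.x, 0f, p.y);

            // Random rotation and slight scale variation keep the forest from
            // looking obviously duplicated.
            Quaternion rot = Quaternion.Euler(0f, Random.Range(0f, 360f), 0f);
            var tree = Instantiate(treePrefab, pos, rot);
            tree.transform.localScale = Vector3.one * Random.Range(0.8f, 1.2f);
        }
    }
}
```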
To make the forest dark, I researched how to replace the default skybox material with something darker. Again I quickly found a free, suitably dark skybox material. In this image, most of the light comes from a light source attached to the lantern.
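In Unity terms, the combination of a dark skybox and a lantern-mounted light can be wired up roughly as in the sketch below. This is not our exact code; the field names and light values are assumptions for illustration.

```csharp
using UnityEngine;

// Sketch: swap in a darker skybox material and attach a warm point light
// to the lantern so most illumination travels with the interactor.
public class DarkForestLighting : MonoBehaviour
{
    public Material darkSkybox;   // the free, darker skybox material
    public Transform lantern;     // the lantern the interactor carries

    void Start()
    {
        RenderSettings.skybox = darkSkybox;       // replace the default skybox
        RenderSettings.ambientIntensity = 0.1f;   // keep ambient light very low

        // The lantern carries its own point light, making it the scene's
        // main light source.
        var lanternLight = lantern.gameObject.AddComponent<Light>();
        lanternLight.type = LightType.Point;
        lanternLight.color = new Color(1f, 0.85f, 0.6f); // warm glow
        lanternLight.range = 8f;
        lanternLight.intensity = 2f;
    }
}
```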
I delivered one of the two animated non-playable characters: the Beast. Again I started by exploring free assets, and found a character on Mixamo.com that was already rigged and came with animation clips (left). The team agreed to use this character; a teammate later added a “skin” to make him look more like the character in the Cartoon Network series. In the dark, he looked suitably creepy (right).
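To give a flavor of the Beast's furtive behavior described in the Concept section, here is an illustrative Unity sketch of gaze evasion: when the interactor looks straight at the Beast, he slips sideways, deeper into the treeline. The field names and thresholds are assumptions, not our production script.

```csharp
using UnityEngine;

// Illustrative sketch: keep the Beast just out of direct view by
// sidestepping whenever the interactor's gaze lands on him.
public class BeastLurker : MonoBehaviour
{
    public Transform headset;       // the interactor's head (VR camera), assigned in the Inspector
    public float evadeAngle = 20f;  // gaze angle (degrees) that counts as "being looked at"
    public float sideStep = 1.5f;   // sideways drift speed, units per second

    void Update()
    {
        Vector3 toBeast = (transform.position - headset.position).normalized;
        float gaze = Vector3.Angle(headset.forward, toBeast);

        // If the gaze is nearly on the Beast, drift him sideways so he
        // evades the interactor's glance.
        if (gaze < evadeAngle)
        {
            Vector3 side = Vector3.Cross(Vector3.up, toBeast);
            transform.position += side * sideStep * Time.deltaTime;
        }
    }
}
```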
The team wanted the two endings to look and feel substantially different, depending on which character the interactor chose to trust with the lantern. I researched how handing the lantern to a character could trigger sound and lighting changes to make the endings more distinctive, and collaborated with teammates on the implementation. The sound and lighting changes can be seen near the end of the video below.
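Conceptually, the handoff detection can be as simple as a trigger collider on each character. The sketch below illustrates the idea under stated assumptions: a "Lantern" tag on the lantern's collider, and hypothetical field names; it is not our exact implementation.

```csharp
using UnityEngine;

// Sketch: when the lantern enters a character's trigger zone, play that
// character's ending audio and shift the scene lighting. Assumes this
// GameObject has a Collider marked "Is Trigger" and the lantern's
// collider is tagged "Lantern".
public class EndingTrigger : MonoBehaviour
{
    public AudioSource endingAudio;   // this character's ending sound clip
    public Light sceneLight;          // a light adjusted for the ending
    public Color endingColor;         // e.g. warm for one ending, cold for the other

    bool triggered;

    void OnTriggerEnter(Collider other)
    {
        if (triggered || !other.CompareTag("Lantern")) return;
        triggered = true;

        endingAudio.Play();               // start this ending's sound
        sceneLight.color = endingColor;   // shift the lighting to match
        sceneLight.intensity = 1.5f;
    }
}
```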
Results & Retrospective
What was delivered: We successfully implemented and demonstrated the VR experience; below is a video showing selected scenes.
After completing the experience once, participants asked to replay it; they wanted to see what happened if they gave the lantern to the other character. We believe that showed successful engagement with the virtual reality experience. Interestingly, we could not establish a preference (higher trust) for either character.
What I learned:
The storyboard really helped build shared understanding among team members, and was especially valuable while planning the project, dividing work, and tracking progress.
VR development during this project (Spring 2017) was still very early-stage, with many challenges. For example, version control of Unity code and assets was not easy with git or other source code managers. We handled version control by developing individually, then meeting weekly to merge our individually developed code and assets into a shared update; that code became the basis for the next week’s development. We soon each developed a good sense of which work could be done individually and which should be deferred until we could meet.
Another challenge was Oculus’ approach to room-scale tracking. Our scenario required 360-degree operation: if the interactor turned their back to the Oculus motion sensors, the experience degraded (for example, lifting the hand holding the lantern could result in the head-mounted display not showing the hand being lifted). Fortunately, Oculus had published advice on adding a third sensor; we had the opportunity to test that setup and found it solved some of our issues.
Due to the multiple VR hardware environments available, and their incompatibilities, this experience was tested only on the Oculus Rift. As a follow-on personal project, I hope to make this experience available on the HTC Vive.
