“Congratulations to Chanel Summers, our VP of Creative Development, for her expert and insightful contributions to the book, ‘New Realities in Audio: A Practical Guide for VR, AR, MR and 360 Video’!

While I hate to give away any secret sauce, the book and this interview validate the huge impact of proper audio design on the end user experience. Our commercial VR attractions have benefited from her expertise and audio is something that we design in from the beginning of a project. As a result, our attractions set the standard for rich, compelling VR experiences.”
– Kevin Vitale, CEO, VRstudios


REVIEW of New Realities in Audio and INTERVIEW with Stephan Schütze and Chanel Summers

May 21, 2018

To read the full article and interview, go to: 

https://www.designingmusicnow.com/2018/05/21/review-of-new-realities-in-audio-and-interview-with-stephan-schutze-and-chanel-summers/

Introduction and Chanel Summers Interview Below:


Introduction

I had the chance to catch up with Stephan Schütze and his wife, Anna Irwin-Schütze, co-authors of this pioneering text, at GDC this year.

I also spoke with Chanel Summers, Vice President of Creative Development at VRstudios, who wrote an informative section of the book about location-based VR audio and storytelling with audio. In this post, both Stephan and Chanel answer a wide range of detailed questions about the nature of VR audio and some novel approaches to this young and exciting field.

The book is an extremely comprehensive and highly approachable volume on the dark arts of XR spatial audio. It covers the basics, but it also takes a very deep dive into the technical aspects of creating, recording, and implementing audio in the real 3D spaces that XR brings to us.

XR is, of course, shorthand for the three new types of reality: AR, VR, and MR. I like to think of VR as a 100% digital experience, whereas AR and MR are some mixture of digital and objective reality, roughly 50% of each.

Perhaps the ultimate goal of XR is to create a reality indistinguishable from our normal, objective reality. To accomplish this, we need an audio experience that would be indistinguishable as well. This leaves stereo and even surround sound in the dust; we need to extend sound to individual objects in the 3D space. Birds in the trees, frogs in the water, and lightning in the distance should all sound as if they are coming from their sources. Imagine an orchestra where each individual instrument has sound coming directly from it, obeying the laws of physics that instrument is constrained by. We are a long way from this, but technology that approximates it is appearing, and I would say that we are closer to modelling real 3D acoustics than we are to modelling real 3D graphics in VR.

Also at GDC, I had a chance to give a short presentation at the Google booth about a VR experience created by Runaway Play called Flutter VR, which is set in the Amazon rainforest. This game is designed to run on Daydream, and therefore on mobile devices, and I was able to demonstrate that we could have more than 50 point-source audio emitters while using only roughly 30% of the audio CPU. Theoretically, we could have had more than 100 point sources without affecting the game’s frame rate, thanks to the power of Google’s Resonance Audio. The codecs for playing back 3D audio are getting better, but are our techniques for creating sound and music in those environments keeping up?
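
For readers curious how a setup like that looks inside an engine, here is a minimal Unity C# sketch of many looping point-source emitters, assuming a spatializer plugin such as Resonance Audio is enabled in the project’s audio settings. The field names and distance values are illustrative, not taken from Flutter VR:

```csharp
using UnityEngine;

// A minimal sketch: spawning many looping point-source emitters, assuming a
// spatializer plugin such as Resonance Audio is selected under
// Project Settings > Audio. Field names and distances are illustrative.
public class PointSourceAmbience : MonoBehaviour
{
    public AudioClip[] ambienceClips;  // e.g. bird calls, frogs, water (mono files)
    public Transform[] emitterPoints;  // authored positions scattered in the scene

    void Start()
    {
        for (int i = 0; i < emitterPoints.Length; i++)
        {
            var src = emitterPoints[i].gameObject.AddComponent<AudioSource>();
            src.clip = ambienceClips[i % ambienceClips.Length];
            src.loop = true;
            src.spatialBlend = 1f;  // fully 3D: position drives panning and attenuation
            src.spatialize = true;  // route the voice through the spatializer (HRTF)
            src.rolloffMode = AudioRolloffMode.Logarithmic;
            src.maxDistance = 25f;  // keep far-away emitters from cluttering the mix
            src.Play();
        }
    }
}
```

Each emitter is an independent mono voice positioned in the world, with the spatializer performing the binaural rendering per voice; that per-voice rendering is roughly the kind of work behind the audio CPU figure quoted above.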

In addition to Chanel Summers of VRstudios, this book features a who’s who of audio professionals: Martin Dufour, CTO of Audiokinetic (makers of Wwise); Simon Goodwin of DTS; Sally-Anne Kellaway of Microsoft; Viktor Phoenix of The Sound Lab; Jay Sheen of Criterion Games; and independent sound designers Robert Rice and Garry Taylor.

To level up your abilities in this brave new world of spatial audio, I recommend this book highly. In fact, I would go so far as to say it is a must-read for all composers and sound designers who are working in XR or who will be working in XR, and if things keep going the way I think they will, that will be all of us!


DMN: What are the differences between location-based VR experiences and at-home consumer ones?

Chanel Summers: Location-based entertainment (LBE) VR experiences are typically more social than at-home consumer content and can provide much more elaborate and immersive experiences than are possible in an average consumer’s home. They can also be a powerful partner to in-home gaming VR and AR experiences, and may even accelerate consumer adoption of VR by exposing people to VR content, perhaps for the first time, and acclimating them to the new technology. These LBE VR, or out-of-home, experiences utilize many of the same audio techniques as consumer VR software, but there are also significant differences to consider when designing and implementing audio for LBE VR. I delve deeply into these in my chapter, “Creating Immersive & Aesthetic Auditory Spaces for Location-Based VR Experiences”.

DMN: How important is audio to the overall VR experience when compared with other game elements?

CS: It has been my experience that those elements that make a VR environment so immersive and compelling are created more in the audio space than in the visual space, for several reasons.

First, audio can represent all of space rather than just what the viewer is seeing, including sounds that emanate from behind the user, outside their field of view, or from sources that have not been graphically rendered.

Second, the perceptual complexity afforded by the human body in audio reception is greater than that afforded by the eyes in visual reception. The auditory system can simultaneously process multiple frequencies at a variety of amplitudes, whereas each pixel in an image corresponds to a single color and cannot represent multiple colors simultaneously.

Third, audio can be received vibrationally, and therefore physiologically, by several parts of the body simultaneously in addition to the ears. Thus, audio can affect us on a subconscious, psychological, and physiological level.

Very early in the design process, we need to consider how we approach the creation of the spaces we are building and throughout the process we need to create spaces that are coherent, consistent, and cohesive within the story and game space.

The narrative of the game and the desired gameplay set the foundation for the visuals of each setting within the game, but adding the audio brings them alive, making the experience effectively real in the player’s mind.

DMN: Why do you think spatial audio is important in VR and in gameplay? Should everything be spatialized?

CS: As you are inside the experience rather than detached from it, some things need to be spatialized in order to give the world depth, with individual ambient world sounds repositioning as you rotate your head so that you always feel a sense of direction and depth within the world. Also, if you are creating a VR game, it is crucial to have spatialized audio cues in order to play the game effectively. The effectiveness of the gameplay is greatly reduced if the players don’t look where they need to look at any point during play.

But I want to add that not all audio in a VR environment needs to be spatialized. I believe that in a VR experience, audio must take a kind of hybrid approach, where some audio is spatialized while other sounds can be in simple stereo. For instance, there may be sounds that are static or head-relative. If you are going to have user-interface sounds, those most likely should be 2D. The same goes for a musical score; this would probably be best in 2D unless there is a physical source of the music in the game world. And with low-frequency sounds, it’s harder to tell where they are emanating from. Low end is good for the feeling of a sound and affecting physiology, and great for giving an object weight, presence, and size! Sounds that are primarily low frequency, like an energy pulse or a rumble, are well suited to being stereo sounds.

One example of a hybrid approach occurred on “VR Showdown in Ghost Town”, an LBE experience that I worked on for Knott’s Berry Farm. We mixed 2D looped stereo ambiences (such as general ambience that we did not feel required spatialization) with individual 3D mono spatialized sounds, which worked quite well and felt very natural. The wind and general ambience were not designed as a quad array with emitters placed around the listener in all directions, as we didn’t find that necessary for this experience.
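
In engine terms, such a hybrid setup might look something like the following minimal Unity C# sketch: a 2D stereo bed whose spatial blend is zero, plus an individual fully spatialized mono emitter. The clip and field names are illustrative, not taken from the Knott’s project:

```csharp
using UnityEngine;

// A minimal sketch of the hybrid approach described above, using plain Unity
// audio: a 2D looped stereo bed plus an individual 3D mono emitter. The clip
// and field names are illustrative, not taken from the Knott's project.
public class HybridSoundscape : MonoBehaviour
{
    public AudioClip stereoAmbienceBed; // wind / general ambience, stereo loop
    public AudioClip monoSpotSound;     // e.g. a creaking sign, mono file
    public Transform spotSoundAnchor;   // world position of the spot sound

    void Start()
    {
        // 2D bed: spatialBlend = 0 ignores listener position and rotation,
        // so the ambience simply envelops the player.
        var bed = gameObject.AddComponent<AudioSource>();
        bed.clip = stereoAmbienceBed;
        bed.loop = true;
        bed.spatialBlend = 0f;
        bed.Play();

        // 3D spot sound: fully spatialized, so it pans and attenuates as the
        // player turns and moves, giving the world direction and depth.
        var spot = spotSoundAnchor.gameObject.AddComponent<AudioSource>();
        spot.clip = monoSpotSound;
        spot.loop = true;
        spot.spatialBlend = 1f;
        spot.spatialize = true;
        spot.Play();
    }
}
```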

DMN: What are a few tips and tricks that you use when designing the audio and music to help tell the story in LBE VR?

CS: First off, audio needs to be an integral design element from the start, conveying elements of narrative, characterization or gameplay by itself and in concert with other game elements. Audio must be more than a list of assets to be compiled and assembled like the items on a shopping list. Rather than just coupling each of the visual elements of a game with a corresponding functional sound element, audio should always further the goals of story, characterization, and the creation of a holistic ecosystem. Well-executed sounds and a brilliantly composed soundtrack have minimal value when accompanied by nothing more than surface meaning.

You need to consider the choreography of the audio, creating a cohesive, holistic unit of all the elements as well as a “rhythm” within a game experience. If you are creating a fictional world, you need to make a truly immersive environment where people feel like they are there, in the story. For instance, in the Knott’s Berry Farm project, players are transported to the future to defend a western town called Calico. Very early in the design process, we needed to consider how we would approach the creation of the space we were building so that, throughout the process, we could create a space that would be coherent, consistent, and cohesive within the story and game space. We needed to use audio to make the players believe they had actually been transported to this future western town, but we also had to supply them with very strong audio cues so that they could play the game in this super sound-rich environment. Therefore, we had to strike a balance between creating effective audio that satisfied basic gameplay requirements and building a soundscape that was cohesive within the world, supplying players with auditory cues while also making them feel they were truly in a futuristic world somehow transported from the Wild West.

A guiding philosophy that the team discussed was the idea of dramatic divides and departures, in which it would be essential to aurally capture the melding of a world of futuristic technology with the dusty grind of the Wild West. Perhaps somewhat similar to HBO’s Westworld, there would be two different “worlds” in existence here, with players starting in the futuristic Lobby scene and then teleporting to the futuristic version of Calico: the Lobby world being similar to the fictional Westworld laboratory, with clean, light, minimalistic ambiences, and Calico being similar to the fictional Westworld theme park: gritty, grimy, and very sound-rich. This philosophy even applied to the music, for which the team from Cedar Fair and Knott’s wanted us to think about having traditional western tropes meet hi-tech electro stylings. But with all of this in mind, it was absolutely imperative that the audio give the “vibe” and familiarity of Knott’s Berry Farm’s actual, real-world Ghost Town area.

DMN: What are your thoughts on the use of non-diegetic and adaptive music in VR experiences?

CS: In VR, the use of underscore is still being debated, with many disagreeing about how it should be treated. Some argue that non-diegetic music breaks immersion, while others put forth that it actually helps to create immersion by guiding the players’ emotional states and aiding in the interpretation of the actions and events they see unfolding before them. I strongly believe that how you approach music in VR will be based on the scenario you are creating and what your objectives are for your project.

My team has incorporated non-diegetic and dynamic music into several LBE VR experiences. These scores became very important aspects of these games, ending up as major contributors to the soundscapes and environments: heightening emotional impact; setting the mood, tone, and pacing of the environment; and serving as a game-design mechanic, with transitioning intensity layers acting as indicators of player success.
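
One common way to realize those transitioning intensity layers is vertical layering: synchronized music stems that all play continuously and are faded in and out as an intensity signal changes. The following minimal Unity C# sketch illustrates the idea under that assumption; the stem list and the 0-to-1 intensity value are illustrative, not details from Chanel’s projects:

```csharp
using UnityEngine;

// A minimal sketch of vertical layering, one common way to realize
// "transitioning intensity layers": synchronized stems fade in and out as a
// 0..1 intensity signal changes. Stem list and signal are illustrative.
public class IntensityLayers : MonoBehaviour
{
    public AudioClip[] layerStems;          // synchronized stems, calm -> intense
    [Range(0f, 1f)] public float intensity; // driven elsewhere by player success
    public float fadeSpeed = 1.5f;          // volume change per second

    AudioSource[] layers;

    void Start()
    {
        layers = new AudioSource[layerStems.Length];
        for (int i = 0; i < layerStems.Length; i++)
        {
            layers[i] = gameObject.AddComponent<AudioSource>();
            layers[i].clip = layerStems[i];
            layers[i].loop = true;
            layers[i].spatialBlend = 0f;           // non-diegetic score stays 2D
            layers[i].volume = (i == 0) ? 1f : 0f; // start with the base layer only
            layers[i].Play(); // all stems run together; mixing happens below
        }
    }

    void Update()
    {
        // Stem i becomes audible once intensity crosses i / stemCount; each
        // change is smoothed so layers fade rather than pop in and out.
        for (int i = 0; i < layers.Length; i++)
        {
            float target = (intensity * layers.Length >= i) ? 1f : 0f;
            layers[i].volume = Mathf.MoveTowards(layers[i].volume, target,
                                                 fadeSpeed * Time.deltaTime);
        }
    }
}
```

In production one would schedule the stems on a shared DSP clock (AudioSource.PlayScheduled) for sample-accurate sync, but the mixing logic stays the same.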

In the Knott’s experience, as there would be no physical source of the music in the game world, the music would be non-diegetic underscore and sit as a 2D stereo send that enveloped the world. This left room for the “appropriate” environmental audio to be spatialized. Also, the team from Cedar Fair and Knott’s preferred the music not to emanate from an in-game source like the saloon, as they didn’t want distracting and potentially irritating volume attenuations based on player movement.

On another project, “Barking Irons,” the developer wanted to experiment with the idea of having a fader dial attached to the players’ heads, which would cause the music to change based on ducking activity. The audio team built a real-time “Head Volume Fader” RTPC (player_ducking_musicvolume) in Wwise, but we ended up not using it, as we deemed it could sound “wrong” and disruptive to the experience if we kept changing the music based on players bobbing up and down to dodge enemy bullets.
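
Driving an RTPC like that from head height is straightforward in the Wwise Unity integration. Here is a minimal sketch of how such a head-tracked fader could work; the RTPC name comes from the interview, while the height range and mapping are illustrative assumptions:

```csharp
using UnityEngine;

// A minimal sketch of the experimental "Head Volume Fader", assuming the
// Wwise Unity integration (AkSoundEngine). The RTPC name is from the
// interview; the standing/ducked heights are illustrative assumptions.
public class HeadVolumeFader : MonoBehaviour
{
    public Transform headTransform;     // HMD / main camera transform
    public float duckedHeight = 1.0f;   // assumed head Y when fully ducked (m)
    public float standingHeight = 1.7f; // assumed head Y when standing (m)

    void Update()
    {
        // Map head height onto 0..100, a conventional Wwise RTPC range,
        // so the music volume follows the player's ducking.
        float t = Mathf.InverseLerp(duckedHeight, standingHeight,
                                    headTransform.position.y);
        AkSoundEngine.SetRTPCValue("player_ducking_musicvolume", t * 100f);
    }
}
```

As Chanel notes, the team cut the mechanic because rapid, combat-driven ducking made the constantly shifting music feel wrong; the same RTPC plumbing would still serve a gentler, slower mapping.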

DMN: I just read that China is opening up a full VR theme park. Also, there is a company that is adding VR experiences to rollercoasters. Do you think VR is going to be a predominant attraction for US-based theme parks?

CS: That’s just the tip of the iceberg! Most, if not all, US-based theme parks are either looking at adding VR or have already done so. Look to see this expand over time, particularly with the introduction of high-quality augmented and mixed reality solutions.


About Stephan Schütze

Stephan Schütze has been an audio creator in the game industry for close to 20 years. In that time, he has written music for everything from chip tunes to live orchestral scores and has created a collection of sound effects libraries used by studios around the world, such as Disney, EA, Warner Brothers, and Skywalker Sound. He has created sound content on nearly every game platform over the last two decades and is now heavily involved with new reality design and production. Having worked for Magic Leap, the Facebook Spatial Audio team, Oculus, and many smaller VR developers, Stephan is perfectly suited to write the first book on audio concepts and production for new reality media. He continues to be both an advocate and a practitioner for spatial audio and is incredibly excited about the potential of these new formats. He is always on the lookout for new challenges in audio production so he can continue to do what he loves most.

About Chanel Summers

Chanel Summers joined VRstudios in 2017 as its first Vice President of Creative Development, in which capacity she is responsible for delivering the kind of breakthrough content experiences the industry has come to expect from the leading provider of VR-enabled attractions for location-based entertainment operators. A pioneer in the field of interactive audio, Chanel has been a respected game producer and designer, Microsoft’s first audio technical evangelist, and a member of the original Xbox team, having helped to design and support the audio system for that groundbreaking console and having created the first-ever support team for content creators.

Prior to joining VRstudios, Chanel was an accomplished touring drummer and founder of the highly regarded audio production and design company Syndicate 17, specializing in sound design, music production, and audio implementation for location-based attractions and virtual, augmented, and mixed reality products.

Some of Syndicate 17’s recent work includes audio for MediaMation’s REACTIVR, first shown at IAAPA 2015; Intel Labs/5D Global/USC WbML’s Leviathan, an Official Selection at Sundance New Frontiers Festival 2016; VRstudios/VRcade’s Barking Irons, which debuted at CVR 2016, and planktOs: Crystal Guardians, which was showcased at the Immerse Technology Summit 2016; the VR Experience (“The Repository”) for Universal Studios Orlando’s 2016 Halloween Horror Nights; the large-scale VR installation “VR Showdown in Ghost Town” for Knott’s Berry Farm, the first permanent free-roaming VR experience at a U.S. theme park; and Terminal 17, an intense, multiplayer (up to 8 players) adventure game specifically designed for the VRcade Arena.

Chanel has consulted for a number of organizations and innovative technology companies, and lectured and educated around the world on subjects as diverse as the aesthetics of video game audio, world-building, and secondary-level STEM education for young women. Chanel is also a lecturer and director of the Experimental Audio Design Lab at the University of Southern California School of Cinematic Arts, was recently artistic director at Forest Ridge School of the Sacred Heart and primary faculty advisor at the Sony/USC Summer Associate Virtual Reality Innovation Program, and serves as a member of the World Building Institute, an associate member of the BAFTA VR Advisory Group, a member of the AudioVR Board of Advisors, and a member of the Virtuosity Entertainment consortium.

In 2016, Chanel contributed “Making the Most of Audio in Characterization, Narrative Structure, and Level Design” to the CRC Press book Level Design: Processes and Experiences. In 2018, Chanel contributed “Creating Immersive and Aesthetic Auditory Spaces for Location-Based VR Experiences” to another CRC Press book, New Realities in Audio: A Practical Guide for VR, AR, MR, and 360 Video.

About Dale Crowley

Dale is a veteran video game composer based in Northern California with over 20 years of experience in the video game industry. As founder and Audio Director of Gryphondale Studios, he is a prolific composer and writer. He is also a sound designer and voice-over actor for video games. He recently joined the team at Elias Software to help them realize their vision of bringing adaptive music to video games. Dale has developed video games and music for Disney, Sony Entertainment, THQ, Electronic Arts, ESPN, FOX Sports, and Mattel, to name a few.

He started studying the piano at an early age, and by the age of 14 was performing Gershwin’s Rhapsody in Blue and Rachmaninoff’s Piano Concerto No. 2. Dale has continued his music education throughout his life by taking courses at Berklee College of Music and Berkeley Jazz School. He has studied with Grammy Award winner Gary Burton, and jazz pianist and author of the definitive book on jazz theory, Mark Levine. Prior to founding Gryphondale Studios, Dale was a pioneer of mobile games, having developed the very first mobile versions of Bejeweled, Tetris, Space Invaders, Civilization, Spider-Man, and dozens of games for Disney including games for Mickey Mouse, The Lion King, and The Little Mermaid.

He has been a speaker at the Game Developers Conference and E3, as well as at numerous mobile conferences such as CTIA and a mobile summit in Austria given for the CEOs of the top wireless carriers in Europe. He has also appeared on CNN Headline News discussing video games. He has been a C++ programmer, working for independent developer Mind Control Software and then AOL Games and Psygnosis, and often programs in C# for Unity3D projects. He is a proponent of middleware such as FMOD, Wwise, Elias, and Unity.

Dale has traveled the world gathering musical influences from China, Southeast Asia, and Europe. On those travels, he even lived as a Zen Buddhist monk in China for several years. He has a degree in Physics from Purdue University, has worked on satellite technology at General Electric Aerospace, and has done high-energy particle physics research at the Stanford Linear Accelerator.