I’ve been doubling down on the VR audio mixing world lately, re-positioning my rig and studio to do 5.1 immersive and ambisonics mixing. I’ve been involved in several VR mixing projects for major video game companies, but I still don’t have anywhere near the skill set to manage an entire audio implementation project, which is my end goal. In two or three years I’d like to drop music mixing completely and move to providing services exclusively to AR/VR content publishers who require advanced format deliverables. I’ll still be using my knowledge of music mixing, and most of the audio tools are the same.
He controls the Reaper mixer and plugins from inside his helmet at 4:45.
Helmets/visors suck. So I started buying up every VR/AR system on the market, and I’m spending a lot more time listening to AR/VR content. But I’ve discovered a lot of variables that make working in this environment difficult. You can’t wear those helmets/visors for long periods of time; they’re uncomfortable and hard on your eyes. VR games are built on a normal LCD/HDMI screen and then deployed to the visor for testing, so a VR designer is not required to wear the helmet for eight hours a day. The audio can be processed (compressed, EQ’d, etc.) in stereo, but at the end of the day there’s no way around testing it with headphones.
Workflow. I thought that deploying a DAW inside of a VR helmet would be awesome. I used to dream of the day this would happen. But now that the technology is finally here, I’ve realized how problematic it is. You can’t see through your visor, and if the software requires motion controllers, you have to continuously put down the trackers and feel your way back to your DAW keyboard whenever you want to do something as simple as ‘trim region start at cursor’ (which is ‘a’ in Pro Tools) or ‘select and add adjacent region’ (which is Shift+P / ; in Pro Tools). The whole idea behind the VR mixer was that you wouldn’t have to keep taking your helmet off and putting it back on to make changes to the mix. As you see in the video, you can leave your helmet on. The problem is editing. As amazing as it is to see your DAW tracks in 3D, every feature is useless except the pan. Aim at an edit button, miss and hit the wrong one, and Cmd+Z is not at your fingertips like it is on a DAW. There’s a long way to go on the engineering here.
The lack of standardized everything: no standardized deliverable format, a lack of tools, a lack of workflow solutions, and a lack of clients make this a very big gamble. The risk constantly nags at me, but I’m gonna keep pushing forward with this until it’s clear I should call it a day. Basically, market expectations versus technology is a problem for someone investing in this skill set and toolkit. My hope is that technology FINALLY gets to where it can deliver this medium at an affordable price, though the industry has been trying for almost 50 years without success. If it does, my hope is to have a portfolio up and running, and a number of projects in place, by the time people really start needing someone to do this stuff.
As discouraging as some of that stuff is, there are still a number of good reasons to pursue this. Devices are smaller and faster than they’ve ever been. Internet bandwidth is becoming fast enough to stream VR. Augmented reality and holograms are out of reach for the average consumer, but could be right around the corner. VR is in the gutter, but gaming revenue has escalated into a half-trillion-dollar industry.