Steps toward VR mixing... off to a bit of a rough start. 3 things I can't stand

I’ve been doubling down on VR audio mixing lately, getting my rig and studio re-positioned to do 5.1 immersive and ambisonics mixing. I’ve been involved with several VR mixing projects for major video game companies, but I still don’t have anywhere near the skill set to manage an entire audio implementation project, which is my end goal. In two or three years I’d like to drop music mixing completely and move to providing services only to AR/VR content publishers who require advanced-format deliverables. I’ll still be using my knowledge of music mixing, and most of the audio tools are the same.

In the video, he controls the Reaper mixer and plugins from inside his helmet at 4:45.

  1. Helmets/visors suck. I started buying up every VR/AR system on the market and spending a lot more time listening to AR/VR content, and I’ve discovered a lot of variables that make working in this environment difficult. You can’t wear those helmets/visors for long periods of time; they’re uncomfortable and hard on your eyes. VR games are built on a normal LCD/HDMI screen and then deployed to the visor for testing, so a VR designer is not required to wear the helmet for 8 hours a day. The audio can be processed (compressed, EQ’d, etc.) in stereo, but at the end of the day there’s no way around testing it with headphones.

  2. Workflow. I thought that deploying a DAW inside a VR helmet would be awesome; I used to dream of the day this would happen. But now that the technology is finally here, I’ve realized how problematic it is. You can’t see through your visor, and if the software requires motion control, you have to continuously put down the trackers and feel your way back onto your DAW keyboard to do something as simple as ‘trim region start at cursor’ (which is ‘a’ in Pro Tools) or ‘select and add adjacent region’ (Shift+P / ; in Pro Tools). The whole idea behind the VR mixer was that you wouldn’t have to keep taking your helmet off and putting it back on to make changes to the mix, and as you see in the video, you can leave your helmet on. The problem is editing. As amazing as it is to see your DAW tracks in 3D, every feature is useless except the pan. You shoot at an edit button, miss, and hit the wrong one, and Cmd+Z is not at your fingertips like it is at a desk. There’s a long way to go on the engineering here (see the sketch after this list for the kind of controller-to-keyboard bridge that’s missing).

  3. The lack of standardized everything: no standard deliverable format, a lack of tools, a lack of workflow solutions, and a lack of clients make this a very big gamble. The risk constantly nags at me, but I’m going to keep pushing forward with this until it’s clear I should call it a day. Basically, the gap between market expectations and what the technology can actually deliver is a problem for anyone investing in this skill set and toolkit. My hope is that technology FINALLY gets to where it can deliver this medium at an affordable price; it has been trying, unsuccessfully, for almost 50 years. If it does, though, my hope is to have a portfolio up and running and a number of projects in place by the time people really start needing someone to do this stuff.
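
To make the editing complaint in item 2 concrete, here’s a minimal sketch of the kind of controller-to-shortcut bridge that doesn’t exist yet: VR controller buttons mapped to the Pro Tools key commands above. pyautogui is a real library, but the button names and the controller callback are hypothetical stand-ins, and it assumes the DAW window has keyboard focus.

```python
# Hypothetical glue, not an existing product: fire Pro Tools
# keyboard-focus shortcuts from VR controller buttons so basic
# edits don't require feeling for the physical keyboard.
import pyautogui

# Shortcuts referenced above (Commands Keyboard Focus enabled).
BUTTON_MAP = {
    "trigger": lambda: pyautogui.press("a"),             # trim region start at cursor
    "grip":    lambda: pyautogui.hotkey("shift", "p"),   # select and add adjacent region
    "pad":     lambda: pyautogui.press(";"),             # same, in the other direction
    "menu":    lambda: pyautogui.hotkey("command", "z"), # undo a missed edit
}

def on_controller_button(button: str) -> None:
    """Stand-in for the VR runtime's button callback."""
    action = BUTTON_MAP.get(button)
    if action is not None:
        action()
```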

As discouraging as some of that stuff is, there are still a number of good reasons to pursue this. Devices are smaller and faster than they’ve ever been. Internet bandwidth is becoming fast enough to stream VR. Augmented reality and holograms are out of reach for the average consumer, but could be right around the corner. VR is in the gutter, but gaming has escalated into a half-trillion-dollar industry.

I can’t think of a single reason why this would be desirable. I can see VR instruments being useful.

But then again, I don’t see why mixing is a thing at all in games. Isn’t the mixing done in the engine in real time?

It was intended to be a workflow solution. It was designed to let the mix engineer observe the relationship between the source, the ambience, obstruction (game objects between the source and the listener), and peripheral panning without ever taking the helmet off. As in non-VR mixing, every sound you mix is context dependent, and the context changes when you move from the Unity design platform to the helmet.
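
For the obstruction piece, here’s a rough sketch of what the engine side typically does: raycast between source and listener, and when something blocks the path, attenuate and low-pass the voice. None of these names come from a real engine API; they’re stand-ins to show the shape of the logic.

```python
def update_occlusion(source_pos, listener_pos, raycast_blocked, voice):
    """Crude obstruction model: if any game object sits on the line
    between the sound source and the listener, duck the level and
    darken the tone. `raycast_blocked` and the `voice` methods are
    hypothetical stand-ins, not any specific engine's API."""
    if raycast_blocked(source_pos, listener_pos):
        voice.set_gain_db(-9.0)        # obstructed: pull the level down
        voice.set_lowpass_hz(1200.0)   # and roll off the highs
    else:
        voice.set_gain_db(0.0)         # clear line of sight
        voice.set_lowpass_hz(20000.0)  # filter effectively open
```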

Something that I didn’t grasp until I started doing this is that ambisonics and 5.1 surround (or Dolby Atmos) have very little in common: ambisonics encodes a speaker-agnostic sound field that gets decoded later, while 5.1 and Atmos are channel- and object-based formats tied to a playback layout.
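
A quick way to see the difference: a first-order ambisonics encoder writes into four sound-field channels rather than speaker feeds. A minimal sketch, assuming the AmbiX convention (ACN channel order, SN3D weights):

```python
import math

def encode_foa(sample: float, azimuth: float, elevation: float):
    """Encode one mono sample into first-order ambisonics
    (AmbiX: ACN order W, Y, Z, X with SN3D normalization).
    The four channels describe the sound field around the listener,
    not feeds for particular speakers; a decoder or binaural
    renderer maps them to a layout later. Angles are in radians,
    azimuth counterclockwise from straight ahead."""
    w = sample                                            # omnidirectional
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    return [w, y, z, x]
```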

The VR DAW link (Pro Tools, Reaper, Nuendo) lets you keep the game on one screen, open DAW views (mixer, FX rack, edit window) in pop-up windows, and punch buttons and turn knobs without closing the game, making the change, saving, and re-launching the game. For example, a sound might work really well in a DAW, but as soon as the player’s position moves backwards or forwards from the source, you need the reverbs tuned just right so that the wet/dry balance tracks the player’s movement. You sometimes guess wrong, and you don’t find out until you put the helmet on and try it.
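
To give a feel for that reverb tuning, here’s a minimal sketch of a distance-to-wet/dry mapping; the curve and constants are illustrative, not from any particular DAW or middleware.

```python
def distance_mix(distance: float, ref_dist: float = 1.0, max_dist: float = 30.0):
    """Return (dry_gain, wet_gain) for a source `distance` meters away.
    Dry follows a clamped 1/d rolloff; wet rises toward 1.0 so the
    room takes over as the player backs away. In practice you end up
    auditioning these numbers in the helmet."""
    d = max(ref_dist, min(distance, max_dist))
    dry = ref_dist / d
    wet = (d - ref_dist) / (max_dist - ref_dist)
    return dry, wet

# Close up, the source is nearly dry; far away, it is mostly room.
print(distance_mix(1.0))   # (1.0, 0.0)
print(distance_mix(15.0))  # (~0.067, ~0.48)
```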

Yeah, it is, but you have to tell the engine to trigger a certain sound in a certain way. For example, say you’re looking at a stage with the drummer on the left. If you turn your head left (so you’re looking straight at the drummer), the engine needs to be told to move the drummer’s position to center. If you look up at the ceiling, you have to tell it to make the drummer sound like he’s below and in front of you. The middleware between the game engine and the motion tracker is doing all the intensive calculations, but you’ve got to be able to monitor it.
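
The drummer example mostly boils down to subtracting head yaw from the source’s world angle (plus the elevation case). A tiny sketch of that bookkeeping, ignoring elevation:

```python
import math

def relative_azimuth(source_az: float, head_yaw: float) -> float:
    """Angle of a source relative to where the listener now faces.
    Angles in radians, positive to the left; result wrapped
    into (-pi, pi]."""
    rel = source_az - head_yaw
    return math.atan2(math.sin(rel), math.cos(rel))

# Drummer at stage-left (+90 deg); you turn your head 90 deg left:
# he should now render dead center (0 deg).
print(math.degrees(relative_azimuth(math.radians(90), math.radians(90))))  # 0.0
```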

I think it was a great idea… it just doesn’t work very well at the moment lol.


Thought you’d get a kick out of this plugin! 🙂 😉