So, somehow these guys deliver analog online mastering?

Wow, and they use an operating system to do the analog mastering while no one listens to the result to hear whether it’s good or not.

I just don’t get how an automated OS can do analog mastering. It sounds like an idiot sales pitch, or am I missing something?

This is total bullshit. I believe that an OS is pumping hot air through analog mastering gear. What I can’t fucking believe is that people don’t realize the value of the mastering engineer is the dialogue between the producer and the ME. This sounds no different from that automatic mixing tool that iZotope put out. Perhaps the difference is that here a computer might be deciding which analog hardware unit to pump audio in and out of, rather than which plugin to manipulate.

A good ME charges about $150-$200/hr, and they’re worth every penny if the artist you’re producing has a track headed for the global market. I would never pay a machine to do this.

Now I’ll tell you what I really think. Compliance Exchange published an article on March 30th with a source citing that Goldman Sachs had laid off 600 bankers and replaced them with an army of 200 coding engineers to design algorithms that could beat the S&P (Standard & Poor’s) 500 Index. Robo-lawyers (computerized paralegals) have also ousted over a thousand paralegals from law firms in the last two years.

I believe the reason this works so well for finance, accounting, and law is that computers crunch huge chunks of data better than people do, and even outplay them at chess, lol. The problem is that the process of creating music is far from being reducible to a set of numeric variables. Basically, music right now is not a set of numbers we can hand to a computer and say ‘here… crunch this’. I believe computers will eventually mix music better than people, but we’re not there yet. Nowhere even fucking close. I believe computers are already capable of writing music better than people, so I see no reason they couldn’t master stuff too one day. But not today.

Mark Cuban, in an interview at Oxford, talked about how the future of technology is in automating human processes. As of now, I agree with him, to the point that I’m considering repositioning my own investment portfolio to jump on board with this technological movement.

Nah. I wouldn’t pay that computer to master a damn thing out of my studio.


That’s why it’s better to stick with poker. :wink: Computers don’t know how to bluff.
AI is taking major leaps forward, and you’ll see masses of it coming in the near future. Maybe we’re already there. They are working on ethics for self-driving cars and battle protocols for automated tanks and artillery.

The bartender in “Passengers” (Chris Pratt, Jennifer Lawrence) is a really interesting exploration of AI and androids.

As to the supposed automated mastering service: they say it’s being done in real time with analog equipment, and that they built an operating system “from the ground up”. If there were a “mastering robot” doing the work, it would indeed have to happen in real time if analog gear is involved. It sounds like the audio is quickly analyzed, a solution is determined and executed, the track is run in real time through the analog gear, and out pops the result for you.
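For what it’s worth, the “analyze, then decide” half of that loop could in principle be very simple. Here’s a purely hypothetical stdlib-only sketch — none of the thresholds or the -14 dB target come from the service; I made them up for illustration — of what the decision step might look like:

```python
import math

def rms_db(samples):
    """Root-mean-square level of a block of samples, in dBFS."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq) if mean_sq > 0 else float("-inf")

def choose_settings(samples, target_db=-14.0):
    """Pick a makeup gain and a rough compressor ratio from the measured
    level. The -10 dB threshold and the ratios are invented for this sketch."""
    level = rms_db(samples)
    gain_db = target_db - level
    ratio = 4.0 if level > -10 else 2.0
    return {"gain_db": round(gain_db, 1), "ratio": ratio}

# A quiet 440 Hz test tone, one second at 44.1 kHz
signal = [0.1 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(choose_settings(signal))  # → {'gain_db': 9.0, 'ratio': 2.0}
```

The hard part clearly isn’t this; it’s getting decisions like these to come out musical, and then getting analog hardware to obey them.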

I’m pretty skeptical. Building an operating system from the ground up would be a monumental and exorbitantly expensive task, unless they are using a modified Linux platform or something (and that’s not truly ground up). Never mind how the mastering decisions would be made — how are they controlling analog gear knobs with a computer? Robot engineers? How are they gauging signal flow and gain staging? Reading meters? The logistics of it don’t make a lot of sense to me. It kind of sounds like a lot of hogwash.

That said, until someone tries the results and compares them to a human engineer’s, who’s to say there isn’t something to the process?

What might be cool is to have a virtual machine where you could log into their server, upload your audio, and conduct the mastering operation yourself with their analog gear collection. Or at least supervise and inspect the “robot” method and tweak or approve the final product before it runs the real time processing.


I did a quick search to see if there was any ground-breaking technology that was being unveiled. Nope. I found this.

Then I found three other companies doing the same thing.

I’m sorry, but I don’t think we’re gonna get to hear this head to head with a pro mastering engineer. I can’t see how someone who needs a pro ME has any use for that Aria service.

Avid, Moog, Empirical Labs, Yamaha, Dangerous, Harrison, Studer, Neve… all those guys make analog circuitry that is digitally controlled. I imagine you could find, or possibly build, some analog gear linked to an open-source control mechanism that could respond to a set of conditions??? But OMG, does that sound like an insane amount of work. The first thing I thought of was my Dangerous 2Bus summing mixer, where you change the sound based on how hard you push the signal going in, hunting for a sweet spot that way. But there are several issues. First of all, it doesn’t really do much; it just kind of sits there and warms things up. Still, it would be incredibly easy to integrate into an AI-based rig.

It really provides more of a workflow advantage than a sonic advantage, though. I love what it does, but it doesn’t do all that much.

That’s one of the things I think they could actually automate easily. Oh!! Now that I think of it, if I had a DAW that could talk to a EuCon-enabled AI, I think you could do a mastering job like this with Avid hardware. The reason is that Avid makes true analog hardware in which every parameter is digitally controlled. So Avid’s compressors have a 100% analog signal path, but Pro Tools can send automation signals to adjust the attack, release, knee, ratio, and threshold; then it can feed two compressors in series, or rearrange them in parallel. The compressors are very, very clean, as are their EQs and preamps. Hmmmm… that is a fascinating question.
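Mechanically, “digitally controlled analog” usually just means the box listens for control messages while the audio stays analog. EuCon itself is proprietary, but as a rough illustration of the same idea using plain MIDI Control Change (a real 3-byte message per the MIDI 1.0 spec), with a completely made-up mapping — CC 20 as attack time over a 0.1–100 ms range I invented — the automation side could be as dumb as:

```python
def midi_cc(channel, controller, value):
    """Build a 3-byte MIDI Control Change message (MIDI 1.0: 0xBn cc vv)."""
    assert 0 <= channel < 16 and 0 <= controller < 128 and 0 <= value < 128
    return bytes([0xB0 | channel, controller, value])

def scale_to_cc(param, lo, hi):
    """Map a parameter in [lo, hi] onto the 7-bit 0-127 CC range."""
    return round((param - lo) / (hi - lo) * 127)

# Hypothetical mapping: CC 20 = attack time, 0.1-100 ms on an imaginary box
attack_ms = 10.0
msg = midi_cc(0, 20, scale_to_cc(attack_ms, 0.1, 100.0))
print(msg.hex())  # → b0140d
```

You’d then write those bytes to whatever MIDI output the hardware is connected to; the “AI” part is just deciding which values to send.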

Agreed. I doubt that Aria process is anything even close to what we’re imagining it is.

??? But you’d be in your room, with your monitors. That’s half the problem, and the other half is that I wouldn’t really know what to do with a mastering rig even if I had one.

Jim: My pod malfunctioned :frowning:
Arthur (AI): ??? Impossible. Does not compute!

Musician: My Pod HD malfunctioned
Guitar Center: Great! We’ll sell you our Pro Coverage warranty on your next one!

They even let you choose, from A through E, what type of mastering you want.

Yeah, that’s true, so using their great room is out. But is the robot even using their great room? For the mastering rig I was just thinking of someone who would want to use that analog gear rather than their own plugins or whatever, but yeah, the learning curve could be prohibitive.

The robot has all the digital waveforms in his head. What does he need a room for?

This isn’t like a remote session where you tap into their mastering rig from your home studio. Like Shack said two posts above, you upload your track, you select one of five mixes, and then it spits out selection D. Or selection A.

Right, but I was imagining if they could offer that kind of a service what would it be like? May be unworkable, but I was just looking at it from a different angle.


That’s a fascinating thought. I can even see how it might work: you give the robot a reference track.
From there, it’s simple coding for it to suss out which tracks have frequency build-ups that aren’t “desirable” and where to cut/boost stuff to avoid clashes.
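A toy version of that reference-matching idea — assuming nothing about how any real service does it; the probe frequencies and the 3 dB tolerance here are arbitrary — could be sketched like this:

```python
import math

def magnitude_at(samples, rate, freq):
    """Level of one frequency component via a brute-force single-bin DFT."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
    return math.hypot(re, im) / n

def suggest_eq(mix, reference, rate, probe_freqs, tol_db=3.0):
    """Compare the mix’s level to the reference’s at a few probe frequencies
    and suggest a cut (negative dB) or boost wherever they differ by > tol_db."""
    moves = {}
    for f in probe_freqs:
        m = magnitude_at(mix, rate, f)
        r = magnitude_at(reference, rate, f)
        if m > 1e-6 and r > 1e-6:  # skip bins where a track has no energy
            diff = 20 * math.log10(m / r)
            if abs(diff) > tol_db:
                moves[f] = round(-diff, 1)
    return moves

# Toy signals: the "mix" has a 200 Hz component 4x hotter than the reference.
rate = 4000
mix = [math.sin(2 * math.pi * 200 * i / rate) for i in range(rate)]
ref = [0.25 * s for s in mix]
print(suggest_eq(mix, ref, rate, [200, 1000]))  # → {200: -12.0}
```

A real tool would use an FFT over the whole spectrum and perceptual weighting, but the principle — measure, compare to the reference, emit EQ moves — is the same.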

But there’s one thing computers will never be able to do: choose to ignore a random build-up in a certain spot because it suits the song/sound/mood/emotion of the track better than clinically removing it would.

If a build-up is in a certain spot because it suits the song, it seems to me that it wouldn’t be random. And if it’s just a matter of telling the computer to ignore something, why couldn’t that be placed in the code too, along with tolerances, other algorithms, and multiple solutions to choose from?

You (as is common) miss my point. I don’t mean a build-up in a regular, identifiable spot. What if I like the way the EQ gives the vocal a nasally sound in random places where the vocalist sings in a certain way?
If that were the case, I’d have to ID each time mark, or each frequency (which, to create that effect, might be several specific frequencies), which would mean I was doing a long-winded pre-mix before the computer started its work, which would rather defeat the purpose of the computer.

So what? Then don’t use the computer. No one is going to force you to not mix by hand.

OOH. Who tugged your chain?
See: I thought we were discussing whether a computer COULD do the job or not.
I didn’t realise you had a vested interest.