Hey, does anyone here know how an analog-to-digital converter works? Question for ya

Starting with this as a source:

My question is WHAT EXACTLY does the ADC architecture consist of? Are we basically talking about a computer chip? And is the accuracy of the converter ultimately determined by the design of the chip?

Also, it's not clear to me (in the case of audio) what exactly is being converted. Is it super-detailed fluctuations within the voltage of the energy waves from the source?

I always thought it converted analog audio waves, but then it occurred to me that when you plug a guitar into an amp, the amplified analog audio wave does not exist until it comes out the speaker on the other side.

That’s a good question. I think the conversion just happens with microchips and related electronics, but as to how it converts nuance, detail and subtlety of sound, I really don’t know. The samples are just snippets of digital representations of analog ‘sound’ or voltage/current, I guess. Understanding how digital photography works seems rather simple: it’s just different color combinations in very micro pixel arrangements, and depends on the color depth and resolution. I guess it’s similar with audio, but I don’t quite get how (good) distortion and “tone”, for example, are translated from analog to digital. You only have so many bits and bytes to describe the complexities we can hear with our ears.

Hi, just wanted you to know that I read all of these posts, but don’t understand them :wink: I had someone the other day ask me if I was recording in stereo. I told them that I pan things left and right. Is that stereo? He said, “Forget I even asked the question.”

ps I do take a lot of notes

sincerely

Paul


Hmm, not sure either. Not a big tech guy. All I know is that Pre-Emphasis thread I started earlier has to do with them. Apparently converters introduce noise, the old ones being worse than the new ones. The solution the Japanese came up with was called Pre-Emphasis. It acts as a high-shelf filter (the standard uses 50/15 µs time constants, with corner frequencies around 3.2 kHz and 10.6 kHz) that adds a lot of treble before the signal goes through the A/D converter. After it’s been put on a CD, the CD player decodes this by applying the inverse EQ.

Undoing the treble boost brings the music back to normal, and at the same time it reduces the noise introduced in A/D conversion, because that noise was added to the artificially bright signal and gets cut along with it. It only works with standalone CD players, unfortunately. So if you have a really bright CD, it could be that it has pre-emphasis: if you rip the CD without decoding, you still get the bright signal because there isn’t anything that would decode it. Some of the original Pink Floyd CDs are that way, Tom Petty’s Southern Accents is like that, and so is Toto IV. That’s all I know.
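
To see the boost-then-undo trick in action, here's a toy Python model. The one-coefficient filter and the A value are my own stand-ins, not the real CD curve (that's an analog shelf defined by the 50/15 µs time constants); it just shows that the inverse EQ restores the original samples while taming anything added after the boost:

```python
# Toy model of the pre-emphasis idea: boost highs with a simple filter
# before "recording", then undo it with the exact inverse filter on
# playback. NOT the real CD curve; just the record-then-undo principle.

A = 0.9  # arbitrary coefficient; bigger = stronger treble boost

def pre_emphasis(x):
    """y[n] = x[n] - A*x[n-1]: a crude high-frequency boost."""
    y, prev = [], 0.0
    for s in x:
        y.append(s - A * prev)
        prev = s
    return y

def de_emphasis(y):
    """Exact inverse: x[n] = y[n] + A*x[n-1]."""
    x, prev = [], 0.0
    for s in y:
        prev = s + A * prev
        x.append(prev)
    return x

signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
restored = de_emphasis(pre_emphasis(signal))
print([round(v, 6) for v in restored])   # matches the original signal
```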

Here you go - the simple explanation…

https://www.soundonsound.com/sound-advice/q-how-do-converters-work

…and the very in-depth one:
https://www.soundonsound.com/techniques/digital-problems-practical-solutions


That did help a little. The article answered the question of what is being captured: it is definitely voltage fluctuations, sent from a mic diaphragm, a guitar pickup coil, or the output jack of a hardware keyboard. But it explained what the converter does, not how.

What I’m really wanting to know is what component senses the fluctuations in voltage BEFORE the computer chip quantizes the snapshot to binary code?

(Forget about the preamp.) What is in the signal path between the XLR jack on the converter box and the computer chip that begins calculating and converting the voltage fluctuations from the source???

Maybe you should ask @bozmillar

Haven’t seen him on here in a few days. (Life does that to you) lol.

If you mean the parts involved, the simplest versions have a basic input stage that converts the analog signal to voltage; apparently an op-amp can be used for this purpose. The op-amp sends voltage to the converter chip, which acts like a step ladder: if the voltage goes beyond step 1’s capacity, it flows into step 2, and so on for the number of bits in the converter.
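
To make that step-ladder picture concrete, here's a tiny Python sketch. The 3.3 V full scale and 3-bit depth are made-up numbers, not any particular chip's spec:

```python
# Toy model of the "step ladder" idea: the input voltage is compared
# against a ladder of evenly spaced reference steps, and the code is
# simply how many steps the voltage has climbed past.

FULL_SCALE = 3.3   # hypothetical ADC reference voltage, in volts
BITS = 3           # 3 bits -> 2**3 - 1 = 7 ladder steps

def ladder_code(v_in: float) -> int:
    """Return the number of ladder steps the input voltage exceeds."""
    n_steps = 2**BITS - 1
    step_size = FULL_SCALE / (n_steps + 1)
    return sum(1 for k in range(1, n_steps + 1) if v_in >= k * step_size)

for v in [0.1, 0.9, 1.7, 2.5, 3.2]:
    print(f"{v:.1f} V -> code {ladder_code(v)} of {2**BITS - 1}")
```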


Thanks Styles. I had THOUGHT it was op amps but didn’t want to guess.

What do you mean by “converts the analog signal to voltage”? When a guitar pickup (running direct) or a microphone picks up sound waves, according to the SOS article @ColdRoomStudio posted, isn’t it automatically sending fluctuating voltage down the mic or instrument cable (which corresponds to what the musician is playing)?

Yeah, pretty much. The input stage needs to scale the voltage to what the A/D chip wants, but that’s pretty much what it does. Whether it is D/A or A/D, you basically take the waveform and sample it into or out of voltage steps to recreate the analog waveform. The performance of the circuit relies very heavily on the chip.

I was selling high-end audio when the CD player first came out, and it took years for manufacturers to figure out how to build the right chips. The promise that turning everything into numbers would create flawless sound was a very long time in the making. The first players were 14-bit, then 16-bit became popular, then oversampling, then one-bit, etc. It took a ton of R&D to get it right, and then you need to sell millions of the chips for it to pay off. That’s why early audiophile CD players were thousands of dollars.

I now work for an electronics distributor, and we sell pretty good converters for about $30. I wouldn’t use them for recording, but for taking an optical output from a TV and making it analog for a receiver, they work fine. We don’t sell many A/D converters; most of the guys buying them are doing residential audio systems, where they are rarely needed.
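
For the D/A direction, here's a minimal sketch of codes turning back into voltage steps (the bit depth and output range are arbitrary example values; a real DAC would follow this with an analog smoothing filter):

```python
# Minimal D/A sketch: each stored code is mapped back to a voltage step.
# A real DAC's "stairstep" output would then be smoothed by an analog
# reconstruction filter; here we just print the raw step values.

BITS = 4                 # hypothetical bit depth
FULL_SCALE = 2.0         # hypothetical output range: 0 to 2 V

def code_to_voltage(code: int) -> float:
    """Map an integer sample code back to a voltage step."""
    return code / (2**BITS - 1) * FULL_SCALE

samples = [0, 5, 11, 15, 11, 5, 0]   # made-up digital samples
print([round(code_to_voltage(c), 3) for c in samples])
```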

I watched a few brief videos on what a microchip actually is. I can’t imagine owning and operating the machinery to produce these things, or being the dude who designs them. Some companies like Neve, Manley, and API pride themselves on making their own transformers, but manufacturing microprocessors must be an enormously different world.

Very few musicians would care about this stuff, but I would assume software designers who work in the audio field would at least need to have a basic knowledge of it. It’s one of those things I don’t really want to learn, but I can’t help my own curiosity :smiley:

I’ve always had trouble figuring out how these chips are made, since my brain doesn’t grasp things happening at 64,000 samples per second. Then try jamming all those calculations into an area the size of your thumbnail. And make it out of silicon. Then stamp them out by the thousands.

I watched the first episode of WestWorld last night. Kind of goes along with this stuff, in terms of being able to do pretty much anything you want if you can identify it and write the software. All we need now is a digital to brainwave converter.

Me too my brother. I’m fascinated by the detail these guys go into.


Hi
Yes, I often think how much better my stuff would sound if I had their skills. Humble is good

Sincerely

Paul

Didn’t read the links, so I don’t know what I’m repeating or…

There are many ways to make a digital converter, just like there are many ways to build a house (wood framed with stucco and drywall, bricks and mortar…).

Probably the most straightforward way is a flash converter: you have a series resistor string to ground with the reference voltage at the top, and at each junction a comparator checking the input against that tap—you see which ones light up, and encode that into a digital word. That’s 255 comparators for 8-bit, 65,535 for 16-bit… no, you won’t be using these for audio; it gets impractical real fast. (But I used to work for a company that sold them for $500 each for an 8-bit converter for video, decades ago—real fast.)
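
A quick Python simulation of that flash idea, assuming an ideal 3-bit ladder and a 1 V reference (both numbers arbitrary):

```python
# Rough flash-converter simulation: the resistor string divides the
# reference into equally spaced tap voltages, one comparator per tap
# "lights up" if the input is above it, and the thermometer code is
# turned into a binary word by counting the lit comparators.

BITS = 3
V_REF = 1.0
N_COMPARATORS = 2**BITS - 1   # 7 comparators for 3 bits

def flash_adc(v_in: float) -> int:
    taps = [V_REF * k / 2**BITS for k in range(1, N_COMPARATORS + 1)]
    thermometer = [v_in >= t for t in taps]   # comparator outputs
    return sum(thermometer)                   # encode to a binary word

print(flash_adc(0.4))   # -> 3 (above the taps at 0.125, 0.25, 0.375 V)
```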

The most basic realistic converter for audio is a successive-approximation converter (based on a SAR—successive approximation register). A basic and accurate DAC is pretty simple to make, so this method leverages a DAC. A SAR ADC basically takes a digital guess, then compares the voltage output of the DAC to the input voltage. Too low, it guesses higher, and too high it guesses lower. Using a binary search, it doesn’t take that many guesses. And there’s plenty of time between samples. To get the input voltage to hold still over that time, there is a “sample and hold” step—basically charging a small capacitor quickly then disconnecting it to hold that value.
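
In code, the SAR guessing game looks roughly like this. An 8-bit depth and 1 V reference are assumed just for the example, and the DAC is treated as ideal:

```python
# Sketch of a successive-approximation ADC: the SAR sets bits from the
# most significant down, using an ideal DAC and one comparator. Each
# guess keeps the bit if the DAC output is still at or below the input,
# so the answer converges in BITS guesses (a binary search).

BITS = 8
V_REF = 1.0

def dac(code: int) -> float:
    """Ideal DAC: map a code to a voltage within the reference range."""
    return code / 2**BITS * V_REF

def sar_adc(v_in: float) -> int:
    code = 0
    for bit in range(BITS - 1, -1, -1):   # MSB first
        trial = code | (1 << bit)         # guess this bit high
        if dac(trial) <= v_in:            # comparator: still below input?
            code = trial                  # keep the bit
    return code

print(sar_adc(0.7031))   # -> 179, the largest code whose DAC output
                         #    stays at or below 0.7031 V
```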

There are other variations of that, running at a much higher clock rate and using a smaller DAC—all the way down to one bit. But that’s it in a nutshell.
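
That one-bit end of the spectrum is the delta-sigma approach. Here's a crude first-order modulator in Python to show the principle; real chips add filtering and much higher clock rates:

```python
# Crude first-order delta-sigma modulator, the idea behind "one-bit"
# converters: run much faster than the audio rate, output only +1/-1,
# and let an integrator accumulate the quantization error so that the
# *average* of the bitstream tracks the input.

def delta_sigma(samples):
    integrator = 0.0
    bits = []
    for x in samples:                    # x assumed in [-1, +1]
        out = 1.0 if integrator >= 0 else -1.0
        integrator += x - out            # accumulate the error
        bits.append(out)
    return bits

# Feed in a constant 0.25 and check the bitstream averages to ~0.25.
stream = delta_sigma([0.25] * 1000)
print(sum(stream) / len(stream))         # -> 0.25
```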


Dude, that’s amazing. Thanks so much!!!


I was just shooting for, “ok, that’s a start”, so thanks for letting me know!

This is a complex question, as there are many different types of A/D and D/A conversion. Typical audio conversion is time-based.

Time-based conversion uses a reference clock to divide the signal into slices based on a fixed time segment. At each point in time, the signal level is measured. At 44.1 kHz, the signal is measured 44,100 times a second.

When the signal is measured, the value is stored as a number whose range is set by the bit depth.

The higher the reference clock, the better the time resolution, resulting in higher bandwidth. Likewise, the higher the bit depth, the better the amplitude resolution, resulting in better detail.

This is a very simple explanation, but it should suffice here.
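
A small Python demo of that time-based scheme: 8 samples of a 1 kHz tone at 4-bit depth, using deliberately tiny numbers (CD audio would be 44,100 samples per second at 16 bits):

```python
# Sample a 1 kHz sine wave at a fixed clock rate and store each
# measurement as an integer limited by the bit depth. The rates and
# depths here are small only so the output is short enough to read.

import math

SAMPLE_RATE = 8000    # reference clock, samples per second
BIT_DEPTH = 4         # 4 bits -> 16 possible levels
FREQ = 1000           # input tone, Hz

levels = 2**BIT_DEPTH
for n in range(8):
    t = n / SAMPLE_RATE                       # the time "slice"
    v = math.sin(2 * math.pi * FREQ * t)      # measured level, -1..1
    code = min(int((v + 1) / 2 * levels), levels - 1)  # quantize
    print(f"t={t*1000:.3f} ms  v={v:+.3f}  stored as {code:2d}")
```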

Hey @TeeThomas54…thanks for chiming in. The first question was what type of signal is measured before the microprocessor initiates the conversion. Bob and Andrew both answered…voltage.

The answer to the second part of the question (that Styles referenced) was that op-amps are placed between the input and the microprocessor.