My question is WHAT EXACTLY does the ADC architecture consist of? Are we basically talking about a computer chip? And is the accuracy of the converter ultimately determined by the design of the chip?
Also, it's not clear to me (in the case of audio) what exactly is being converted. Is it the fine fluctuations in the voltage coming from the source?
I always thought it converted analog audio waves, but then it occurred to me that when you plug a guitar into an amp, the amplified analog audio wave does not exist until it comes out the speaker on the other side.
That's a good question. I think the conversion just happens with microchips and related electronics, but as to how it converts nuance, detail and subtlety of sound I really don't know. The samples are just snippets of digital representations of analog "sound" or voltage/current, I guess. Understanding how digital photography works seems rather simple: it's just different color combinations in very micro pixel arrangements, and depends on the depth resolution. I guess it's similar with audio, but I don't quite get how (good) distortion and "tone", for example, are translated from analog to digital. You only have so many bits and bytes to describe the complexities we can hear with our ears.
Hi, just wanted you to know that I read all of these posts but don't understand them. I had someone the other day ask me if I was recording in stereo. I told them that I pan things left and right; is that stereo? He said "forget I even asked the question."
Hmm, not sure either. Not a big tech guy. All I know is that the Pre-Emphasis thread I started earlier has to do with them. Apparently converters introduce noise, the old ones being worse than the new ones. So what the Japanese came up with was a solution called Pre-Emphasis. It acts as a high-shelf filter around 5238 Hz that adds a lot of treble before the signal goes through the A/D converter. After it's been put on a CD, the CD player decodes this by applying the inverse EQ. Undoing the treble boost brings the music back to normal, and because the converter's noise was added to an artificially bright signal, the same treble cut also reduces that noise. It only works with standalone CD players, unfortunately, so if you have a really bright CD it could be that it has pre-emphasis. If you rip the CD without decoding, you still get the bright signal, because there isn't anything that would decode it. Some of the original Pink Floyd CDs are that way, Tom Petty's Southern Accents is like that, and so is Toto IV. That's all I know.
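To make that boost-then-undo trick concrete, here's a rough sketch assuming the standard Red Book pre-emphasis time constants of 50 µs and 15 µs (a first-order treble shelf of roughly +10 dB; the ~5 kHz figure mentioned above falls between the shelf's two corner frequencies). This is just an illustration, not anyone's actual hardware:

```python
import numpy as np
from scipy.signal import bilinear, lfilter

fs = 44100.0
t1, t2 = 50e-6, 15e-6   # standard CD pre-emphasis time constants (assumed)

# Analog shelf H(s) = (1 + s*t1) / (1 + s*t2), mapped to a digital filter
b, a = bilinear([t1, 1.0], [t2, 1.0], fs)

x = np.random.randn(int(fs))          # one second of test signal
boosted = lfilter(b, a, x)            # pre-emphasis applied before the A/D
restored = lfilter(a, b, boosted)     # de-emphasis: the player's inverse EQ
```

Any noise the converter adds after the boost gets its treble cut by the same inverse filter, which is where the noise reduction comes from.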
That did help a little. That article answered the question: it is definitely voltage fluctuations being captured, which are sent from a mic diaphragm, a guitar pickup coil, or the output jack of a hardware keyboard. It explained what the converter does, but it didn't say anything about how.
What I'm really wanting to know is: what component senses the fluctuations in voltage BEFORE the computer chip quantizes the snapshot to binary code?
(Forget about the preamp.) What is in the signal path between the XLR jack on the converter box and the computer chip that begins calculating and converting the voltage fluctuations from the source?
If you mean the parts involved, the simplest versions have a basic input stage that converts the analog signal to voltage; apparently an op-amp can be used for this purpose. The op-amp sends voltage to the converter chip, which acts like a step ladder: if the voltage goes beyond step 1's capacity, it flows into step 2, and so on for the number of bits in the converter.
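For what it's worth, here's a toy sketch of that "step ladder" idea: the conditioned voltage gets mapped to the nearest of 2^bits rungs. The function name and numbers are made up for illustration:

```python
def quantize(voltage, vref=1.0, bits=16):
    """Map a voltage in the range -vref..+vref onto one of 2**bits
    integer 'rungs' of the ladder."""
    levels = 2 ** bits
    step = 2 * vref / levels                 # height of one rung
    code = int(round(voltage / step))        # nearest rung
    return max(-(levels // 2), min(levels // 2 - 1, code))

print(quantize(0.5))   # half of full scale -> 16384
```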
Thanks Styles. I had THOUGHT it was op amps but didnât want to guess.
What do you mean by analog signal to voltage? When a guitar pickup (running direct) or a microphone picks up sound waves, according to the SOS article @ColdRoomStudio posted, isnât it automatically sending fluctuating voltage pulses down the mic or instrument cable? (Which correspond with what the musician is playing)?
Yeah, pretty much. The input stage needs to control the voltage to what the A/D chip wants, but that's pretty much what it does. Whether it is D/A or A/D, you basically take the waveform and sample it into or out of voltage steps to recreate the analog waveform. The performance of the circuit relies very heavily on the chip.

I was selling high-end audio when the CD player first came out, and it took years for manufacturers to figure out how to build the right chips. The theory that turning everything into numbers would create flawless sound was a very long time in the making. The first players were 14-bit, then 16-bit became popular, then oversampling, then one-bit, etc. etc. It took a ton of R&D to get it right, and then you need to sell millions of the chips for it to pay off. That's why early audiophile CD players were thousands of dollars.

I now work for an electronics distributor, and we sell pretty good converters for about $30. I wouldn't use them for recording, but for taking an optical output from a TV and making it analog for a receiver, they work fine. We don't sell many A/D converters; most of the guys buying them are doing residential audio systems, and they are rarely needed.
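Since D/A is the mirror image of A/D, here's the other direction as a toy sketch (a hypothetical function, not any real chip's behavior): each stored code becomes one voltage step, and a real converter follows this with an analog filter that smooths the staircase back into a waveform.

```python
def dac(codes, vref=1.0, bits=16):
    """Toy D/A: turn each stored integer code back into a voltage step.
    A real converter smooths the result with an analog filter."""
    full_scale = 2 ** (bits - 1)
    return [c * vref / full_scale for c in codes]

print(dac([16384, -8192]))   # -> [0.5, -0.25]
```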
I watched a few brief videos on what a microchip actually is. I can't imagine owning and operating the machinery to produce these things, or being the dude that designs them. Some companies like Neve, Manley, and API pride themselves on making their own transformers, but manufacturing microprocessors must be an enormously different world.
Very few musicians would care about this stuff, but I would assume software designers who work in the audio field would at least need to have a basic knowledge of it. It's one of those things I don't really want to learn, but I can't help my own curiosity.
I've always had trouble figuring out how these chips are made, since my brain doesn't grasp things happening at 64,000 samples per second. Then try jamming all those calculations into an area the size of your thumbnail. And make it out of silicon. Then stamp them out by the thousands.
I watched the first episode of WestWorld last night. Kind of goes along with this stuff, in terms of being able to do pretty much anything you want if you can identify it and write the software. All we need now is a digital to brainwave converter.
Didn't read the links, so I don't know what I'm repeating or…
There are many ways to make a digital converter, just like there are many ways to build a house (wood-framed with stucco and drywall, bricks and mortar…).
Probably the most straightforward way is a flash converter: you have a series resistor string to ground with the input voltage at the top, and at each junction a comparator. You see which ones light up and encode that into a digital word. That takes 2^N - 1 comparators: 255 for 8-bit, 65,535 for 16-bit… no, you won't be using these for audio; it gets impractical real fast. (But I used to work for a company that sold them for $500 each for an 8-bit converter for video, decades ago. Real fast.)
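Here's what that resistor-string-plus-comparators arrangement looks like as a toy simulation (3-bit for readability; the comparator count roughly doubles per added bit, which is why it gets impractical):

```python
def flash_adc(vin, vref=1.0, bits=3):
    """Toy flash converter: 2**bits - 1 evenly spaced thresholds from a
    resistor string; each comparator fires if vin is above its tap, and
    the resulting 'thermometer code' is encoded as a binary count."""
    n = 2 ** bits - 1                        # comparator count
    thresholds = [(i + 1) * vref / (n + 1) for i in range(n)]
    fired = [vin > t for t in thresholds]    # which comparators light up
    return sum(fired)                        # thermometer code -> binary word

print(flash_adc(0.6))   # 0.6 V trips 4 of the 7 comparators -> code 4
```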
The most basic realistic converter for audio is a successive-approximation converter (based on a SAR: successive-approximation register). A basic, accurate DAC is pretty simple to make, so this method leverages a DAC. A SAR ADC basically takes a digital guess, then compares the voltage output of the DAC to the input voltage. Too low, it guesses higher; too high, it guesses lower. Using a binary search, it doesn't take that many guesses, and there's plenty of time between samples. To get the input voltage to hold still over that time, there is a "sample and hold" step: basically charging a small capacitor quickly, then disconnecting it to hold that value.
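That binary search is easy to show in code. A hedged sketch of the guessing loop (real SAR chips do this in hardware, with the sample-and-hold keeping the input frozen while the guesses run):

```python
def sar_adc(vin, vref=1.0, bits=8):
    """Toy successive-approximation converter: trial each bit from MSB
    to LSB, comparing the internal DAC's output against the held input."""
    code = 0
    for bit in reversed(range(bits)):         # most significant bit first
        trial = code | (1 << bit)             # guess with this bit set
        dac_out = trial * vref / (1 << bits)  # what the internal DAC produces
        if dac_out <= vin:                    # comparator: guess not too high?
            code = trial                      # keep the bit
    return code

print(sar_adc(0.6))   # eight guesses converge on 153 (~0.6 * 256)
```

Note it only takes one guess per bit: 16 comparisons for a 16-bit word, versus the 65,535 comparators a flash design would need.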
There are other variations of that, running at a higher sample rate and using a smaller DAC, all the way down to one bit. But that's it in a nutshell.
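The one-bit end of that spectrum is the delta-sigma family (I'm naming it; the post above just says "one bit"). A toy first-order modulator: the feedback loop pushes out +1s and -1s whose running average tracks the input, and heavy oversampling plus filtering recovers the signal:

```python
def delta_sigma_1bit(samples):
    """Toy first-order delta-sigma modulator: a one-bit quantizer in a
    feedback loop; the density of +1 bits tracks the input level."""
    integrator, feedback, bits = 0.0, 0.0, []
    for x in samples:                        # x assumed within -1..+1
        integrator += x - feedback           # accumulate the error
        bit = 1.0 if integrator >= 0 else -1.0
        bits.append(bit)
        feedback = bit                       # the output is fed back
    return bits

out = delta_sigma_1bit([0.5] * 1000)
print(sum(out) / len(out))   # ~0.5: the bit density matches the input
```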
This is a complex question, as there are many different types of A/D and D/A conversion. Typical audio conversion is time-based.
Time-based conversion uses a reference clock to divide the signal into slices of a fixed time segment. At each point in time the signal level is measured. At 44.1 kHz, the signal is measured 44,100 times a second.
When the signal is measured, the value is stored as a number based on the bit depth.
The higher the reference clock, the better the time resolution, resulting in higher bandwidth. The higher the bit depth, the better the amplitude resolution, resulting in better detail.
This is a very simple explanation, but it should suffice here.
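As a concrete picture of those time slices, here's a hedged sketch of one second at 44.1 kHz / 16-bit (a made-up 440 Hz test tone, nothing from the thread):

```python
import numpy as np

fs = 44100                        # the reference clock: 44,100 slices per second
bits = 16
t = np.arange(fs) / fs            # the instants where the level is measured
x = np.sin(2 * np.pi * 440 * t)   # a 440 Hz test tone in the range -1..+1
codes = np.round(x * (2 ** (bits - 1) - 1)).astype(np.int16)
print(codes[:4])                  # the stored numbers for the first few slices
```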
Hey @TeeThomas54, thanks for chiming in. The first question was what type of signal is measured before the microprocessor initiates the conversion. Bob and Andrew both answered: voltage.
The answer to the second part of the question (that Styles referenced) was that op-amps are placed between the input and the microprocessor.