The Art of Imaging: I. Stereophony
Welcome to the first article in the Art of Imaging blog series. We hope you’ll enjoy these articles, and that they help you understand a bit more about how recordings are made. In this first article, we’re going to focus on tricking your brain and ears through a field of science called psychoacoustics. Psychoacoustics is all about how your brain perceives and processes the information it gets from your ears, and how it interprets this information to give you an idea of what’s going on around you.
Where’s the lion?
After all, we humans are just a bunch of animals. Sure, we’ve got these cool opposable thumbs going for us, but in the end our hearing isn’t super advanced or anything. We don’t really need it to be: our senses have developed to give us, like all animals, a sense of the world, an abstract model from which we can extract whatever information we need at any given moment. Just imagine being a caveman, listening carefully for any dangerous animals that might be around. Do we need to know the exact sound pressure level in dB at 3kHz of this lion’s roar? Or do we just roughly need to know where it is and whether it’s actually a lion or something else? The former might make for great watercooler talk with your fellow audiophile cavemen, but the latter makes the difference between being a caveman and being some lion’s lunch.
So how do we make sure we know where this lion is? That’s right, psychoacoustics. Our ears pick up the big cat’s roar, and our brain interprets these signals to give us some clue about the lion’s whereabouts. Two of the cues it relies on are small differences in the intensity and in the timing of those signals.
Intensity-based stereophony
Let’s start with intensity-based stereophony. Quite a fancy term, but in simple terms this means we’re using the relative intensity or loudness of signals to trick our brains into localizing a sound.
As a true audiophile, you must have heard countless recordings with a singer right smack dab in the center of the stereo image, as though they’re actually standing precisely in between your speakers. This happens when the microphone they’re singing into is sent to the left and right channels of the mix at exactly the same level. In other words, it’s panned to the center of the mix. If this balance were changed so that the left channel is louder than the right, it would sound as though the singer had moved to the left side of your system.
One of the ways our brain interprets where a sound source comes from is simply by looking at the difference in the intensity of the sound: if it’s louder on the left, the source is localized on the left, and if it’s louder on the right, it’s localized on the right.
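If you like to tinker, here’s a tiny sketch of what a pan control does under the hood. It uses an equal-power pan law, which is just one common convention; real consoles and DAWs each have their own flavour, so treat this as an illustration rather than a reference implementation.

```python
import numpy as np

def pan_mono_to_stereo(signal, pan):
    """Equal-power pan of a mono signal into left/right channels.
    pan = -1.0 is hard left, 0.0 is dead center, +1.0 is hard right."""
    angle = (pan + 1.0) * np.pi / 4.0          # map [-1, 1] onto [0, 90] degrees
    return np.cos(angle) * signal, np.sin(angle) * signal

# A 1kHz test tone, panned dead center and then halfway to the left:
t = np.linspace(0, 1, 48000, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t)

for pan in (0.0, -0.5):
    left, right = pan_mono_to_stereo(tone, pan)
    diff_db = 20 * np.log10(np.max(np.abs(left)) / np.max(np.abs(right)))
    print(f"pan {pan:+.1f}: left channel is {diff_db:+.1f} dB louder than right")
```

Panned halfway to the left, the left channel ends up roughly 7.7dB louder than the right, and that difference alone is enough for your brain to shift the singer off-center.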
The example we’ve just looked at takes a monophonic source and pans it somewhere in the stereophonic image, but as you might imagine, things get a lot more fun if we actually start to record in stereo rather than taking a bunch of mono things and mashing them together into a sort of pseudo-stereo patchwork. So, let’s have a look at how we can use this psychoacoustic phenomenon, where the difference in intensity between two signals helps your brain determine where the source is located.
Intensity-based stereo recording
Let’s have a look at our first microphone technique, shall we? This one, called the XY microphone array, uses differences in intensity to record a stereophonic image.
XY microphone array, image courtesy of DPA Microphones.
In this image you see two microphones, perfectly aligned but with a 90º angle between them so that one “looks” more to the right, and the other “looks” more to the left. If you place a sound source like a singer exactly in the center in front of these microphones, both mics pick up their voice equally loudly. When played back on a set of speakers, the singer is reproduced through the left speaker exactly as loudly as through the right one, so your brain interprets this as “the singer is standing in the middle between my loudspeakers.”
Now, let’s move our singer a bit, shall we? When we move them to the right, suddenly the microphone pointing right will pick our singer up louder than the one pointing left. This is then reproduced by your speakers in the same way, with the loudspeaker on the right reproducing the singer louder than the left one, and thus your brain interprets this as “the singer is standing to the right of the center between my loudspeakers.” If the singer walks the other way, the left mic will pick them up louder, making the left speaker reproduce them louder, which in turn makes your brain think “the singer just walked to the left between my loudspeakers.”
The difference in level between the left and right channels can be seen in the graph below: a sound source 90º to the left of the microphone position will be roughly 15dB louder on the left channel than on the right. The sound level in the left and right channels is exactly the same if the sound source is standing exactly in the middle of the setup, at 0º.
Difference in sound level between channels in an XY microphone array.
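If you’re curious where a curve like this comes from, here’s a rough sketch of the underlying maths. It assumes two ideal cardioid microphones angled 45º to the left and right; that’s a textbook idealization of an XY pair, so the real measurement above will differ in the details.

```python
import numpy as np

def cardioid_gain(source_angle_deg, mic_angle_deg):
    """Sensitivity of an ideal cardioid aimed at mic_angle_deg
    for a source arriving from source_angle_deg."""
    theta = np.radians(source_angle_deg - mic_angle_deg)
    return 0.5 * (1.0 + np.cos(theta))

# XY pair: left mic aimed 45 degrees to the left, right mic 45 degrees right.
for src in (0, -30, -60, -90):                 # negative angles = to the left
    gain_l = cardioid_gain(src, -45)
    gain_r = cardioid_gain(src, +45)
    diff_db = 20 * np.log10(gain_l / gain_r)
    print(f"source at {src:+4d} deg: left is {diff_db:+5.1f} dB louder than right")
```

For a source 90º to the left, this idealized model lands at roughly 15dB of level difference, in the same ballpark as the graph above.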
Let’s have a look at another graph, showing the relationship between where the source is standing in relation to this XY microphone array, and its apparent location between two loudspeakers.
The localization graph for an XY microphone array.
The X-axis indicates where the original (recorded) source is standing in reality, and the Y-axis indicates its apparent localization between the reproducing speakers. For example, a source recorded at a 60º angle to the right of the mics will be reproduced at roughly 19º to the right between your speakers. An average orchestra, heard from the first row of an average concert hall, spans a total angle of approximately 135º, which means that if you record it with an XY-array like this, it will be reproduced between -22.5º and +22.5º between your speakers. Quite narrow, right? So, let’s have a look at another option, which is delay-based stereophony.
Delay-based stereophony
Our brain uses more than just the difference in intensity between the left- and right-ear signals to infer a sound source’s location; it also uses differences in arrival time. You see, sound travels relatively slowly (roughly 34.3 centimeters or 13.5 inches per millisecond), which means that a sound coming from your left will arrive at your left ear slightly earlier than at your right ear. If your brain recognizes the same sound, slightly delayed between left and right, it uses that delay to work out where the source is coming from.
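To give you a feel for the numbers involved, here’s a quick back-of-the-envelope sketch. It assumes the source is far away (a plane-wave approximation) and takes an ear spacing of roughly 17 centimeters, which is just a typical ballpark figure rather than anyone’s actual head.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second, roughly, at room temperature

def arrival_time_difference(spacing_m, source_angle_deg):
    """Delay (in seconds) between the near and far receiver of a spaced pair,
    assuming the source is far enough away to treat the wavefront as planar."""
    extra_path = spacing_m * np.sin(np.radians(abs(source_angle_deg)))
    return extra_path / SPEED_OF_SOUND

# Two ears roughly 17cm apart, source 90 degrees to one side:
print(f"ears, 17cm apart: {1000 * arrival_time_difference(0.17, 90):.2f} ms")
# Two microphones spaced 40cm apart, same source angle:
print(f"mics, 40cm apart: {1000 * arrival_time_difference(0.40, 90):.2f} ms")
```

Half a millisecond between your ears doesn’t sound like much, but it’s plenty for your brain to work with; a pair of microphones spaced 40cm apart, like the AB array we’re about to look at, gets about 1.2ms to play with.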
We can use this phenomenon to our advantage when we’re recording as well: if we take two microphones and space them out a bit, then a sound coming from the left of the pair will be picked up by the closest mic first, followed by its counterpart. When reproduced through your speakers, one speaker will then play back this signal first, followed by the other. Your one ear, in turn, picks it up first, followed by your other ear, and voilà: your brain localizes the source. Let’s have a look at a simple microphone technique that makes use of this phenomenon, shall we?
AB microphone array, image courtesy of DPA Microphones.
This microphone technique is called the AB-array. Contrary to the XY-array, it doesn’t use intensity differences between the left and right signals, but rather the tiny differences in timing which your brain interprets as stereophonic information.
Difference in sound level and timing between channels in an AB microphone array.
In this graph, you can see that the differences in sound intensity are negligible, but there are definitely differences in the timing of the signals. For example, a sound source 90º to the left arrives at the left channel of this particular pair of microphones roughly 1.2ms earlier than at the right. A source exactly in the middle arrives at both microphones at the same time, so there’s no timing difference at all. Below is a graph showing how your brain interprets this delay-based stereophony, and how sources are localized depending on their timing.
Localization graph of the AB array.
This looks quite different from the XY-array’s graph, right? The graph shows that sources 30º to the left are reproduced 22.5º to the left on your speaker setup, much closer to the original stereo width. The orchestra from our previous example would now completely fill all the space between your speakers, and then some: a bit of the sides is even clipped into the left and right speakers. That being said, I personally rarely enjoy orchestral performances this close-up and wide; I usually like to sit a couple of rows back for a more natural perspective.
Lastly, let’s have a look at something called coherency. As you may know, low-frequency waves are big and high-frequency ones are tiny. With an XY-array, the size of the wave doesn’t really matter at all, but when our microphone technique uses timing or phase differences between signals, the size of the wave suddenly makes a big difference. A low-frequency wave, for example, might be so big that there are no discernible phase or timing differences between left and right, which means that low frequencies tend to end up more monophonic than their high-frequency counterparts. See the graph below.
Coherency graph for an AB array.
Without going into too much technical detail, this graph shows the input frequency on the X-axis and the coherency index on the Y-axis. In simple terms, the lower the coherency index, the more “spatial” the sound will be, and the higher the coherency index, the more monophonic it will be. As you can see, frequencies near or under 100Hz will be quite monophonic, and frequencies near or above 300Hz will be quite spatial.
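For the technically inclined, you can get a feel for this behaviour from a very simple model: two omnidirectional microphones in a perfectly diffuse sound field. That’s a big simplification, so the sketch below reproduces the trend (wider spacing means lower coherency at low frequencies) rather than the exact values in these graphs, which depend on the actual microphones, the hall, and how the coherency index is defined. It compares the 40cm pair we’ve been looking at with the wider 100cm pair we’ll try in a moment.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second

def diffuse_field_coherence(freq_hz, spacing_m):
    """Coherence between two omnidirectional mics spaced spacing_m apart
    in an idealized, perfectly diffuse sound field.
    np.sinc(x) is sin(pi*x) / (pi*x), hence the 2*f*d/c argument."""
    return np.abs(np.sinc(2.0 * freq_hz * spacing_m / SPEED_OF_SOUND))

for spacing in (0.40, 1.00):
    for freq in (100, 300, 1000):
        coh = diffuse_field_coherence(freq, spacing)
        print(f"{spacing * 100:.0f}cm spacing, {freq:4d}Hz: coherence ~ {coh:.2f}")
```

Even in this crude model you can see the narrow pair staying highly coherent (read: monophonic) in the lows, while widening the pair pushes the low-frequency coherency down.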
It’s all about awesome
During my master’s degree research into stereo and multichannel recording techniques, I found that listeners tend to favor recordings that sound “impressive” in their stereo image, but not so impressive that the image becomes distorted and sources start clipping into the speakers. So, what constitutes this “impressiveness” in a recording?
What I found was that one of the attributes that make a recording sound impressive is a relatively low coherency index at low frequencies. And since this AB-array has quite a high coherency index at low frequencies, the participants in my listening tests found recordings made with it to sound a bit underwhelming.
Now, how can we change this AB-array to sound more impressive? Well, we can do this by spacing the microphones farther apart, perhaps from its original 40cm to 100cm.
Coherency graph of an AB array with the microphones spaced 100cm apart.
Here, you can see that sounds at 100Hz have a coherency index of about 0.25, much lower than the 0.85 of the previous microphone array. Therefore, this recording technique will likely sound more impressive and spacious. Let’s have a look at the localization curve of this technique, however, to see how it picks up a stereophonic image.
Localization graph of an AB array with the microphones spaced apart 100cm.
As you can see, this curve is much steeper than its 40cm-spaced counterpart. When reproduced, sources 17.5º to the left of this microphone array will be completely clipped into the left loudspeaker. Very impressive indeed! Spacing the microphones even further apart means that low frequencies will be less monophonic and that the recording will sound more impressive, but it unfortunately also means that the stereophonic width of higher frequencies may be exaggerated. One way to fix this is to introduce an additional microphone to the mix.
Front and center
We can create stability in the stereo image by adding a microphone in the center, between the left and right mics of the array we’ve just created. We then feed this microphone’s signal to both left and right loudspeakers, so as to create some sort of “virtual speaker” in the middle. See the picture below.
An LCR array, based on an AB array but with an additional microphone in the center.
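On the routing side, that “virtual speaker” is nothing more exotic than the center mic being sent equally to both channels. Here’s a minimal sketch of an LCR-to-stereo fold-down; the 3dB trim on the center is simply a sensible starting point I’ve assumed here, not a fixed rule, and certainly not a claim about how any particular engineer balances it.

```python
import numpy as np

def lcr_to_stereo(mic_l, mic_c, mic_r, center_trim_db=-3.0):
    """Fold an L/C/R microphone array down to two-channel stereo by sending
    the center mic to both sides at a chosen trim (default -3 dB)."""
    center_gain = 10.0 ** (center_trim_db / 20.0)
    return mic_l + center_gain * mic_c, mic_r + center_gain * mic_c

# Example with three one-second dummy "recordings" of quiet noise:
rng = np.random.default_rng(0)
mic_l, mic_c, mic_r = (0.1 * rng.standard_normal(48000) for _ in range(3))
left_out, right_out = lcr_to_stereo(mic_l, mic_c, mic_r)
```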
Having these three microphones, also called an LCR-array, means we can have both impressive low-frequency width and a good stereophonic spread for the mid and high frequencies. Let’s see how this works by adding a mic exactly in the middle of our previous setup, shall we?
Localization graph of an LCR array where all mics are aligned.
Whoa, wait a minute — suddenly we have not one but two stereo images? Well, yes, but also no. Your brain still “sees” the one image, but because the differences between these images are quite large, the image is a bit diffuse or unclear. But before we delve too much into that, let’s first have a look at the two images we’ve just created. The green line shows the localization between the left and center channels, and the blue line shows the localization between the center and right channels. Your brain adds these up, creating a diffuse localization between those two lines.
A sound source recorded at 30º to the left of the microphones will be very diffusely localized between roughly 28º to the left, all the way to 6º to the right. In order to gain more focus, we would need to try to get these two images closer together. We can do this by moving the center microphone forward towards the sound source, creating an imaginary triangle shape between the three mics. If we move the center mic forward 25cm, our localization graph looks like this:
Localization graph of an LCR array with the center microphone placed 25cm forward.
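To see geometrically why nudging the center mic forward helps, here’s a small sketch that computes the relative arrival times at the three capsules for a source 30º to the left. It assumes a distant source (a plane-wave approximation again) and uses the 100cm outer spacing with the 25cm center offset from the text.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second

def arrival_times_ms(source_angle_deg, mic_positions):
    """Relative arrival times (in ms) at each mic for a distant source.
    Positions are (x, y) in metres, +x to the right, +y towards the source;
    0 ms is assigned to whichever mic the sound reaches first."""
    angle = np.radians(source_angle_deg)
    towards_source = np.array([np.sin(angle), np.cos(angle)])
    delays = np.array([-np.dot(p, towards_source) / SPEED_OF_SOUND
                       for p in mic_positions])
    return 1000.0 * (delays - delays.min())

mics_aligned = [(-0.5, 0.00), (0.0, 0.00), (0.5, 0.00)]  # L, C, R in one line
mics_forward = [(-0.5, 0.00), (0.0, 0.25), (0.5, 0.00)]  # center 25cm forward

for name, mics in (("aligned", mics_aligned), ("center forward", mics_forward)):
    t_l, t_c, t_r = arrival_times_ms(-30, mics)          # source 30 deg left
    print(f"{name:14s}: L {t_l:.2f} ms, C {t_c:.2f} ms, R {t_r:.2f} ms")
```

With the mics in a straight line, the center capsule lags the left one by roughly 0.7ms for this source; pushed 25cm forward, that lag shrinks to about 0.1ms, so the left and center channels become almost coincident in time for sources on that side. In rough terms, that is why the green and blue curves move towards each other.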
Slowly but surely, we’re getting there! The blue and green lines are much closer together, meaning that there’s much less of a diffuse combined stereo image. Here, this same sound source at 30º to the left of the microphones will be localized between 15º to the left and dead center in your speaker setup, much less diffuse than before. However, despite making the image more focused and clearer, we’ve also made it a lot narrower. Let’s see if we can fix that by making the whole array a bit wider, say 125cm instead of 100cm.
Localization of the OOA array.
This is beginning to look promising; the image is a bit wider and still not too diffuse. And surprise, surprise, we’ve just arrived at what I’ve coined OOA (Optimized Omnidirectional Array). In a large number of listening tests with a large group of participants, OOA was picked out as one of the most realistic-sounding microphone techniques, and for good reason. Aside from its balance of focus and sonic impressiveness, it also works very well in stereo, surround and immersive mixes! Now, you might say it’s still quite narrow: sources 30º to the left of the mics will be localized between 19º left and dead center when reproduced, right? Well, not really. You see, by using three microphones rather than two, we’ve created not just one more stereo image, but two. Whereas two mics result in one image between them, three mics result in three images: one between left and center, one between center and right, and one between left and right. So, let’s graph that one too, shall we?
LCR Localization of the OOA array.
In the graph above, you can now see an additional (purple) line indicating the (pretty extreme) stereo image between the left and right channels. This image, however, introduces such strong stereophonic cues to our brain that it pulls the resulting stereo image towards itself. The combined image is therefore much less diffuse than you would assume from looking at the graph, and it actually has a near 1:1 relationship between the position and angle of the source and the position and angle of the reproduced signal! Without getting into the specifics of weighted localization in the field of psychoacoustics, the resulting localization of a sound source recorded at 30º to the left is almost precisely 30º to the left. Furthermore, have a look at the coherency graph below. Impressively impressive, right?
Coherency in an OOA array.
Here, the green line represents the coherency index between the left and right signals, and the orange-red line the coherency index between the left (or right) and center signals.
And this, my dear reader, is the birth of OOA, the (not so) secret sauce every single recording we do at TRPTK is based on. By using OOA, we strike a balance between diffuseness and clarity, and between localization and impressiveness. During my master’s thesis research, I found that this is a big part of why so many people prefer it, and I think it’s at least partly why TRPTK’s recordings are so often described as “realistic”. And in the end, isn’t realism all we should strive for? To add nothing to, and remove nothing from, what’s already there: natural beauty.
Join us in the next blog article, where we delve a bit more into the different kinds of microphones we use, and how they affect imaging and stereophony. See you there!