X-Git-Url: http://shamusworld.gotdns.org/cgi-bin/gitweb.cgi?a=blobdiff_plain;f=_manual%2F17_mixing%2F02_panning%2F01_stereo_panner.html;h=ebb0d043d03e3714cee8074c504093aa58a7548e;hb=4801f478cfd119257a1e190955def7e237f66a72;hp=1b7cd6d7b8644fae4375eb47e0edf22f2bbabeda;hpb=3727aad197181c86efdfc3251f10c96aa175f7e9;p=ardour-manual-diverged diff --git a/_manual/17_mixing/02_panning/01_stereo_panner.html b/_manual/17_mixing/02_panning/01_stereo_panner.html index 1b7cd6d..ebb0d04 100644 --- a/_manual/17_mixing/02_panning/01_stereo_panner.html +++ b/_manual/17_mixing/02_panning/01_stereo_panner.html @@ -4,14 +4,16 @@ title: Stereo Panner ---
- The default stereo panner distributes 2 inputs to 2 outputs. Its - behaviour is controlled by two parameters, width and position. The - default settings for the stereo panner are width=100%, - position=center (L=50%, R=50%). This panner assumes that the signals - you wish to distribute are either uncorrelated (that means totally - independent), or they contain a stereo image which is - mono-compatible, such as a co-incident microphone recording, or a - stereo image that has been created with pan pots.* + The default stereo panner distributes two inputs to two outputs. Its + behaviour is controlled by two parameters, width and + position. The + default settings for the stereo panner are width=100% and + position=center. + This stereo panner assumes that the signals + you wish to distribute are either uncorrelated (i.e. totally + independent), or that they contain a stereo image which is + mono-compatible, such as a co-incident microphone recording, or a + sound stage that has been created with pan pots.*
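The width/position behaviour described in the changed paragraph can be illustrated with a small numeric model. This is only a sketch of how a width/position panner can map two parameters onto a 2×2 gain matrix, using a simple linear pan law; it is not Ardour's actual DSP, and the function name `stereo_pan_gains` is invented for illustration.

```python
# Illustrative model of a width/position stereo panner.
# NOTE: a sketch only -- Ardour's real implementation differs in detail
# (e.g. its pan law); this just demonstrates the two parameters.
def stereo_pan_gains(position=0.5, width=1.0):
    """position: 0.0 = hard left, 0.5 = center, 1.0 = hard right.
    width: 1.0 = full stereo, 0.0 = mono.
    Returns ((gLL, gLR), (gRL, gRR)): gains from each input to each output."""
    # Each input channel gets its own virtual position, spread around
    # the overall position by half the width on each side.
    pos_l = position - width / 2.0
    pos_r = position + width / 2.0

    def pan(p):
        # Simple linear pan law, clamped to the speaker pair.
        p = min(1.0, max(0.0, p))
        return (1.0 - p, p)  # (gain to left output, gain to right output)

    return pan(pos_l), pan(pos_r)

# Defaults (width=100%, position=center): inputs pass straight through.
print(stereo_pan_gains(0.5, 1.0))  # ((1.0, 0.0), (0.0, 1.0))
# width=0: both inputs collapse to one point -- the "M" (mono) indicator.
print(stereo_pan_gains(0.5, 0.0))  # ((0.5, 0.5), (0.5, 0.5))
```

With the defaults, the matrix is the identity, which matches the manual's statement that the panner leaves a full-width, centered image untouched.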
@@ -26,7 +28,7 @@ title: Stereo Panner
- The panner user interface consists of 3 elements, divided between + The panner user interface consists of three elements, divided between the top and bottom half. Click and/or drag in the top half to control position; click and/or drag in the bottom half to control width (see below for details). @@ -42,7 +44,7 @@ title: Stereo Panner In the bottom half are two signal indicators, one marked "L" and the other "R". The distance between these two shows the width of the stereo image. If the width is reduced to zero, there will only be a - single signal indicator marked "M" (for mono), and whose color will + single signal indicator marked "M" (for mono), whose color will change to indicate the special state.
@@ -66,7 +68,7 @@ title: Stereo Panner left and right speakers
-Let's take a look at what happens when you record a source at 45° to the -right side with an ORTF array and then manipulate the width. +Let's take a closer look at what happens when you record a source at 45° to the +right side with an ORTF stereo microphone array and then manipulate the width.
For testing, we apply a pink noise signal to both inputs of an Ardour stereo bus with the stereo panner, and feed the bus output to a two-channel analyser. -Since pink noise contains equal energy per octave, the readout is a straight line: +Since pink noise contains equal energy per octave, the expected readout is a +straight line, which would indicate that our signal chain does not color the +sound:
@@ -219,10 +223,11 @@ analyser.
Recall that an ORTF microphone pair consists of two cardioids spaced 17 cm
-apart, with an opening angle of 110°.
-For a source at 45° to the right, the time difference between the capsules
-is 350 usecs or approximately 15 samples at 44.1 kHz. The level difference
-due to the directivity of the microphones is about 7.5dB.
+apart, with an opening angle of 110°.
+For a far source at 45° to the right, the time difference between the capsules
+is 350 μs or approximately 15 samples at 44.1 kHz. The level difference
+due to the directivity of the microphones is about 7.5 dB (indicated by the
+distance between the blue and red lines in the analyser).
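The 350 μs / ~15 sample figures in the added lines can be checked with back-of-envelope geometry: for a far source, the path difference between the two capsules is the spacing times the sine of the source angle. This sketch assumes a speed of sound of 343 m/s (not stated in the text).

```python
import math

# Verify the ORTF figures quoted above (assumes c = 343 m/s).
d = 0.17                   # capsule spacing, metres
theta = math.radians(45)   # source angle off-axis
c = 343.0                  # speed of sound, m/s
fs = 44100                 # sample rate, Hz

path_diff = d * math.sin(theta)   # extra distance to the far capsule
itd = path_diff / c               # inter-capsule time difference, seconds
print(f"time difference: {itd * 1e6:.0f} us")     # ~350 us
print(f"at 44.1 kHz: {itd * fs:.1f} samples")     # ~15.5 samples
```

This reproduces the manual's numbers; the level difference of about 7.5 dB comes from the cardioids' directivity, not from this geometry.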
Now for the interesting part: if we reduce the width of the signal to 50%, @@ -231,17 +236,18 @@ happens to the frequency response of the left and right outputs:
-You may argue that all spaced microphone recordings will get comb filters
-later, when the two channels recombine in the air between the speakers. But
-perceptually, this is a world of difference, since our hearing system is
-very good at eliminating comb filters in the real world, if their component
-signals are spatially separated. But once you combine two delayed signals
-inside your signal chain, this spatial separation is lost. As usual, you
+You may argue that all spaced microphone recordings will undergo comb
+filtering later, when the two channels recombine in the air between the speakers.
+Perceptually, however, there is a huge difference: our hearing system is
+very good at eliminating comb filters in the real world, where their component
+signals are spatially separated. But once you combine them
+inside your signal chain, this spatial separation is lost and the brain will
+no longer be able to sort out the timbral mess. As usual, you
 get to keep the pieces.
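The comb filtering discussed in this hunk is easy to quantify: summing a signal with a copy of itself delayed by Δt at equal gain produces spectral notches at odd multiples of 1/(2Δt). A sketch using the ~350 μs delay from the ORTF example (the formula is standard signal-processing fact, not taken from this manual):

```python
# Comb-filter notch frequencies when a signal is mixed with a copy
# delayed by dt: notches fall at f = (2k + 1) / (2 * dt).
dt = 350e-6  # inter-capsule delay from the ORTF example, seconds

notches = [(2 * k + 1) / (2 * dt) for k in range(4)]
print([round(f) for f in notches])  # ~1429, 4286, 7143, 10000 Hz
```

The first notch near 1.4 kHz sits squarely in the ear's most sensitive range, which is why narrowing the width of a spaced-pair recording inside the signal chain is so audible.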