behaviour is controlled by two parameters, <dfn>width</dfn> and
<dfn>position</dfn>. By default, the panner is centered at full width.
</p>
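To make the interaction of the two parameters concrete, here is a minimal sketch using a simple linear pan law. The function, its signature and the pan law are illustrative assumptions, not Ardour's actual implementation:

```python
import numpy as np

def stereo_pan(left, right, position=0.0, width=1.0):
    """Illustrative width/position stereo panner (linear pan law).

    position is in [-1, 1] (0 = centred), width in [0, 1]
    (1 = full width). A hypothetical sketch, not Ardour's code.
    """
    # The two input channels sit at virtual positions whose midpoint is
    # `position` and whose separation is set by `width`.  At full width
    # the position is pinned to the centre, which is why the default
    # panner cannot be moved until the width is reduced.
    position = np.clip(position, -(1.0 - width), 1.0 - width)
    l_pos = position - width
    r_pos = position + width

    def gains(p):
        # Linear pan law: gain pair (to left output, to right output).
        return (1.0 - p) / 2.0, (1.0 + p) / 2.0

    gl_l, gr_l = gains(l_pos)
    gl_r, gr_r = gains(r_pos)
    return gl_l * left + gl_r * right, gr_l * left + gr_r * right
```

At full width and centre position the inputs pass through unchanged; at zero width both outputs carry the equal-gain sum of the inputs, i.e. the mono state.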
<p>
The stereo panner assumes that the signals
you wish to distribute are either uncorrelated (i.e. totally
different), or correlated and fully <dfn>mono-compatible</dfn>, such as a
co-incident microphone recording or a sound stage that has been created
with pan pots.<sup><a href="#caveat">*</a></sup>
</p>
<p>
With the default values it is not possible to alter the position,
since the width is already spread entirely across both outputs. To
alter the position, first reduce the width.
</p>

<h2>Stereo Panner User Interface</h2>
<img src="/images/stereo-panner-annotated.png" alt="Annotated view of the stereo panner user interface"/>
<p>
The <dfn>panner user interface</dfn> consists of three elements, divided between
the top and bottom half. Click and/or drag in the top half to
control position; click and/or drag in the bottom half to control
width (see below for details).
</p>
<p>
In the top half is the position indicator, which shows where the
center of the stereo image is relative to the left and right
outputs; by default, it is centered between them. When it is all the
way to the left, the stereo image collapses to just the left speaker.
</p>
<p>
In the bottom half are two signal indicators, one marked "L" and the
other "R". The distance between these two shows the width of the
stereo image. When the width is reduced to zero, they merge into a
single signal indicator marked "M" (for mono), whose color will
change to indicate the special state.
</p>
<p>
It is possible to invert the outputs (see below) so that whatever
would have gone to the right channel goes to the left and vice
versa.
</p>

<h2><a name="caveat"></a>Stereo panning caveats</h2>
<p class="warning">
The stereo panner will introduce unwanted side effects on
material that includes a time difference between the channels, such
as A/B, ORTF or NOS microphone recordings, or delay-panned mixes.<br />
When you reduce the width, you are effectively summing two highly
correlated signals with a delay, which will cause <dfn>comb filtering</dfn>.
</p>
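The mechanism is easy to verify numerically: summing a signal with a copy of itself delayed by &tau; has the transfer function H(f) = 1 + e<sup>&minus;2&pi;if&tau;</sup>, which cancels completely wherever the delay amounts to an odd number of half periods. A small NumPy sketch (illustrative, nothing Ardour-specific; the 350 &mu;s value anticipates the ORTF example below):

```python
import numpy as np

tau = 350e-6  # an example inter-channel delay of 350 microseconds

def comb_magnitude(f, tau):
    """|H(f)| for y(t) = x(t) + x(t - tau)."""
    return np.abs(1.0 + np.exp(-2j * np.pi * f * tau))

first_null = 1.0 / (2.0 * tau)    # deepest notch: delay = half a period
print(round(first_null, 1))                        # 1428.6 Hz
print(round(comb_magnitude(first_null, tau), 6))   # 0.0 (full cancellation)
print(round(comb_magnitude(1.0 / tau, tau), 6))    # 2.0 (full reinforcement)
```

The nulls repeat at every odd multiple of that first frequency, which is what gives the response its comb-like shape.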
<p>
Let's take a closer look at what happens when you record a source at 45° to the
right side with an ORTF stereo microphone array and then manipulate the width.
</p>
<p>
For testing, we apply a <dfn>pink noise</dfn> signal to both inputs of an Ardour stereo
bus with the stereo panner, and feed the bus output to a two-channel analyser.
Since pink noise contains equal energy per octave, the expected readout is a
straight line, which would indicate that our signal chain does not color the
sound:
</p>
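The "equal energy per octave" property can be checked numerically. The sketch below makes pink noise by weighting a white spectrum with 1/&radic;f (one common construction, not the only one; the octave band edges are arbitrary choices) and sums the energy in each band:

```python
import numpy as np

rng = np.random.default_rng(42)
fs, n = 44100, 2**18

# Shape white noise to a 1/f power spectrum (1/sqrt(f) in amplitude).
spec = np.fft.rfft(rng.standard_normal(n))
freq = np.fft.rfftfreq(n, d=1.0 / fs)
spec[1:] /= np.sqrt(freq[1:])
spec[0] = 0.0                      # drop the DC component
pink = np.fft.irfft(spec, n)

# Sum the energy in each octave band; for pink noise these come out
# roughly equal, whereas white noise doubles its energy per octave.
psd = np.abs(np.fft.rfft(pink)) ** 2
edges = [86, 172, 344, 689, 1378, 2756, 5512, 11025]
octave_energy = [psd[(freq >= lo) & (freq < hi)].sum()
                 for lo, hi in zip(edges[:-1], edges[1:])]
```

Plotting `octave_energy` on a per-octave analyser would give the flat line described above.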
<img src="/images/stereo-panner-with-ORTF-fullwidth.png" alt="Analyser readout of the simulated ORTF signal at full width" />
<p>
To simulate an ORTF recording, we use Robin Gareus' stereo balance
control LV2 to set the level difference and time delay. Ignore the
Trim/Gain; its purpose is just to align the test signal with the 0 dB
line of the analyser.
</p>

<p>
Recall that an <dfn>ORTF</dfn> microphone pair consists of two cardioids
spaced 17 cm apart, with an opening angle of 110°. For a far source at
45° to the right, the time difference between the capsules is 350 μs
or approximately 15 samples at 44.1 kHz. The level difference due to the
directivity of the microphones is about 7.5 dB (indicated by the
distance between the blue and red lines in the analyser).
</p>
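These figures follow from simple geometry: for a distant source, the extra path length to the far capsule is d&middot;sin&thinsp;&theta;. A quick check, assuming a speed of sound of about 343 m/s:

```python
import math

d = 0.17                  # ORTF capsule spacing in metres
theta = math.radians(45)  # source angle off-axis
c = 343.0                 # assumed speed of sound in m/s (about 20 degrees C)
fs = 44100                # sample rate

delay = d * math.sin(theta) / c   # extra travel time to the far capsule
print(round(delay * 1e6))   # 350 microseconds
print(round(delay * fs))    # 15 samples at 44.1 kHz
```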
<p>
Now for the interesting part: if we reduce the width of the signal to 50%,
the time-delayed signals will be combined in the panner. Observe what
happens to the frequency response of the left and right outputs:
</p>
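Given the 350 μs inter-channel delay, the notch positions can be predicted: cancellation occurs where the delay equals an odd number of half periods, i.e. at f = (2k+1)/(2&tau;). Because the panner mixes the two channels at unequal gains rather than summing them equally, the dips are partial rather than complete. A quick sketch:

```python
# Predicted notch frequencies for an inter-channel delay of 350 microseconds.
tau = 350e-6
notches = [(2 * k + 1) / (2 * tau) for k in range(3)]
print([round(f) for f in notches])  # [1429, 4286, 7143]
```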
<img src="/images/stereo-panner-with-ORTF-halfwidth.png" alt="Analyser readout of the simulated ORTF signal at 50% width, showing comb filtering" />
<p>
You may argue that all spaced microphone recordings will undergo comb
filtering later, when the two channels recombine in the air between the
speakers. Perceptually, however, there is a huge difference: our hearing
system is very good at eliminating comb filters in the real world, where
their component signals are spatially separated. But once you combine them
inside your signal chain, this spatial separation is lost and the brain will
no longer be able to sort out the timbral mess. As usual, you
get to keep the pieces.
</p>
<p class="note">
Depending on your material and on how much you need to manipulate the width,
some degree of comb filtering may be acceptable. Then again, it may not. Listen
carefully for artefacts if you manipulate unknown stereo signals; many
orchestra sample libraries, for example, do contain time-delay components.
</p>