Acoustic analysis with soundgen

Andrey Anikin

2020-05-24

1 Purpose

There are numerous programs out there for performing acoustic analysis, including several open-source options and R packages. For in-depth analysis of individual mammalian sounds it’s hard to beat PRAAT (batch processing is possible, but a bit tricky, because PRAAT uses its own, rather unusual scripting language). For bird sounds, a sophisticated tool is Sound Analysis Pro. In R, the most general-purpose acoustic toolkit is the seewave package. Soundgen builds upon the functionality of seewave, adding high-level functions for sound synthesis (see the vignette on sound synthesis), manipulation, and analysis.

Reasons to use soundgen for acoustic analysis might be:

  1. User-friendly approach: a single call to the analyzeFolder function will give you a dataframe containing dozens of commonly used acoustic descriptors for each file in an entire folder. So if you’d rather get started with model-building without delving too deeply into acoustics, you are one line of code away from your dataset.
  2. Flexible pitch tracking: soundgen uses several popular methods of pitch detection in parallel, followed by their integration and postprocessing. While the abundance of control parameters may initially seem daunting, for those who do wish to delve deeply this makes soundgen’s pitch tracker very versatile and offers a lot of power for high-precision analysis.
  3. An interactive app for manual correction of pitch contours - pitch_app().
  4. Audio segmentation with in-built optimization: the tools for syllable segmentation and detection of energy bursts are fast and simple (based on smoothed intensity contours) but quite flexible. Control parameters can also be optimized automatically as long as you have a manually segmented training sample.
  5. Additional specialized tools for acoustic analysis such as modulation spectra and self-similarity matrices.

Many of the large variety of existing tools for acoustic analysis were designed with a particular type of sound in mind, usually human speech or bird songs. Soundgen’s pitch tracker was written to analyze human non-linguistic vocalizations like screams and laughs. These sounds are much harsher and noisier than ordinary speech and stand much closer to the vocalizations of other mammals than to human speech. In addition, the original corpus (Anikin & Persson, 2017) was collected from online videos, so that both sampling rate and microphone settings varied tremendously. From the very beginning, the focus has thus been on developing a pitch tracker and a segmenting tool that would be robust to noise and recording conditions. This makes soundgen highly suitable for performing acoustic analysis of animal vocalizations. You can of course apply soundgen to speech, but note that it was not optimized for speech, unlike specialized phonetic software like Praat.

To summarize, you might want to look at soundgen’s tools for acoustic analysis if you are extracting a large number of acoustic predictors from a large number of audio files.

The most relevant functions are analyze and analyzeFolder for frame-by-frame acoustic analysis, pitch_app for interactive verification of pitch contours, segment and segmentFolder for syllable segmentation, and modulationSpectrum and ssm for modulation spectra and self-similarity matrices.

TIP Soundgen’s functions for acoustic analysis are not meant to be exhaustive. MFCC extraction is readily available in R (e.g., with tuneR::melfcc), so there was no need to duplicate it in soundgen. Linear predictive coding (LPC) is also implemented in R (see phonTools::lpc and phonTools::findformants). As a convenience, soundgen::analyze shows the output of phonTools::findformants, but for serious formant analysis you might want to use an interactive program like PRAAT and check everything manually. A good approach may be to start with soundgen::analyze to get a table of many common acoustic predictors and then add some more using other R packages, software, or manual measurements.

This vignette is designed to show how soundgen can be used effectively to perform acoustic analysis. It assumes that the reader is already familiar with key concepts of phonetics and bioacoustics.

TIP This vignette mostly covers acoustic analysis with soundgen. In many cases, there are related R functions from other packages. For a tour-de-force overview of alternatives, together with highly accessible theoretical explanations of sound characteristics, see Sueur (2018), “Sound analysis and synthesis with R”.

2 Acoustic analysis with analyze

To demonstrate acoustic analysis in practice, let’s begin by generating a sound with a known pitch contour. To make pitch tracking less trivial and demonstrate some of its challenges, let’s add some noise, subharmonics, and jitter:

library(soundgen)
## Loading required package: shinyBS
s1 = soundgen(sylLen = 900, temperature = 0,
              pitch = list(time = c(0, .3, .8, 1), 
                           value = c(300, 900, 400, 1300)),
              noise = c(-40, -20), 
              subFreq = 100, subDep = 20, jitterDep = 0.5, 
              plot = TRUE, ylim = c(0, 4))

# playme(s1)  # replay as many times as needed w/o re-synthesizing the sound

The contour of f0 is determined by our pitch anchors, so we can calculate the true median pitch:

true_pitch = getSmoothContour(anchors = list(time = c(0, .3, .8, 1),
                                             value = c(300, 900, 400, 1300)),
                              len = 1000)  # any length will do
median(true_pitch)  # 633 Hz
## [1] 633.2559

2.1 Basic principles

At the heart of acoustic analysis with soundgen is the short-time Fourier transform (STFT): we look at one short segment of sound at a time (one STFT frame), analyze its spectrum using the Fast Fourier Transform (FFT), and then move on to the next - perhaps overlapping - frame. As the analysis window slides along the signal, the STFT shows which frequencies the sound contains at different points of time. The nuts and bolts of STFT are beyond the scope of this vignette, but they can be found in just about any textbook on phonetics, acoustics, digital signal processing, etc. For a quick R-friendly introduction, see the seewave vignette on acoustic analysis.

Putting the spectra of all frames together, we get a spectrogram. analyze calls another function from the soundgen package, spectrogram, to produce a spectrogram and then plot pitch candidates on top of it. See the examples in ?spectrogram for plot customization like color themes, contrast, brightness, etc. To analyze a sound with default settings and plot its spectrogram, all we need to specify is its sampling rate (the default in soundgen is 16000 Hz):

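# a sketch: a1 holds the frame-by-frame output of analyze()
a1 = analyze(s1, samplingRate = 16000, plot = TRUE, ylim = c(0, 4))
median(true_pitch)              # the true median pitch
median(a1$pitch, na.rm = TRUE)  # the median of the estimated pitch contour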
## Scale not specified. Assuming that max amplitude is 1
## [1] 633.2559
## [1] 562.0305

There are several key parameters that control the behavior of the STFT and affect all extracted acoustic variables, chiefly the window length (windowLength, in ms) and step (step, in ms). The same parameters serve as arguments to spectrogram. As a result, you can immediately see what frame-by-frame input you have fed into the algorithm for acoustic analysis by visually inspecting the produced spectrogram. If you can hear f0, but can’t see individual harmonics in the spectrogram, the pitch tracker probably will not see them, either, and will therefore fail to detect f0 correctly. The first remedy is thus to adjust the STFT settings, using the spectrogram for visual feedback.
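
For instance, a sketch of how these settings might be adjusted (the values are illustrative, not prescriptive):

a2 = analyze(s1, samplingRate = 16000,
             windowLength = 40,  # longer window: better frequency resolution
             step = 10,          # shorter step: more overlap between frames
             plot = TRUE, ylim = c(0, 4))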

2.2 Basic spectral descriptives

Apart from pitch tracking, analyze calculates and returns several acoustic characteristics of each non-silent STFT frame, such as RMS amplitude, loudness, the lowest dominant frequency, spectral entropy, and harmonics-to-noise ratio (HNR).
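
To see exactly what is returned per frame, it may be easiest to inspect the column names of the output (a sketch; the exact set of columns depends on the version of soundgen):

a = analyze(s1, samplingRate = 16000, plot = FALSE)
colnames(a)  # one row per STFT frame, one column per acoustic descriptor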

2.3 Custom spectral descriptives

The function soundgen::analyze returns a few spectral descriptives that make sense for nonverbal vocalizations, but additional predictors may be useful for other applications (bird songs, non-biological sounds, etc.). One way to obtain extra predictors is to add the necessary code to the internal function soundgen:::analyzeFrame() and to soundgen::analyze(). If you want deltas, they can be extracted directly from the output of analyze(..., summary = FALSE). But in many cases the easiest solution may be to just extract the spectra and then process them manually, without calling analyze(). In fact, many popular spectral descriptors are mathematically trivial to derive - all you need is the spectrum for each STFT frame, or perhaps even the average spectrum of the entire sound. Here is how you can get these spectra.

For the average spectrum of an entire sound, go no further than seewave::spec or seewave::meanspec:

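# a sketch: f = sampling rate (Hz); columns: frequency (kHz) and mean amplitude
library(seewave)
spec_avg = meanspec(s1, f = 16000, plot = FALSE)
head(spec_avg)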
##            x            y
## [1,] 0.00000 0.0001049391
## [2,] 0.03125 0.0001847369
## [3,] 0.06250 0.0003998262
## [4,] 0.09375 0.0010422899
## [5,] 0.12500 0.0027282554
## [6,] 0.15625 0.0038266912

If you are interested in how the spectrum changes over time, extract frame-by-frame spectra - for example, with spectrogram(..., output = 'original'):

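# a sketch: one row per frequency bin, one column per STFT frame
spgm = spectrogram(s1, samplingRate = 16000, output = 'original', plot = FALSE)
str(spgm)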
##  num [1:400, 1:77] 1.38e-04 1.07e-04 5.36e-05 3.28e-05 2.20e-05 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : chr [1:400] "0.005" "0.0250250626566416" "0.0450501253132832" "0.0650751879699248" ...
##   ..$ : chr [1:77] "0" "15" "30" "45" ...

Let’s say you are working with frame-by-frame spectra and want to calculate skewness, the 66.6th percentile, and the ratio of energy above/below 500 Hz. Before you go hunting for a piece of software that returns exactly those descriptors, consider this. Once you have normalized the spectrum to add up to 1, it basically becomes a probability density function (pdf), so you can summarize it in the same way as you would any other distribution of a random variable. Look up the formulas you need and just do the raw math:

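# a sketch, assuming the frame-by-frame spectra in spgm from above
# (frequencies are in kHz, so 500 Hz = 0.5)
freqs = as.numeric(rownames(spgm))  # central frequency of each bin, kHz
out = data.frame(skew = rep(NA, ncol(spgm)), quantile66 = NA, ratio500 = NA)
for (i in 1:ncol(spgm)) {
  df = data.frame(freq = freqs, d = spgm[, i] / sum(spgm[, i]))  # normalize to a pdf
  m = sum(df$freq * df$d)                  # mean (spectral centroid)
  s = sqrt(sum((df$freq - m) ^ 2 * df$d))  # standard deviation
  out$skew[i] = sum((df$freq - m) ^ 3 * df$d) / s ^ 3
  out$quantile66[i] = df$freq[min(which(cumsum(df$d) >= 2/3))]
  out$ratio500[i] = sum(df$d[df$freq >= .5]) / sum(df$d[df$freq < .5])
}
summary(out)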
## Warning in min(which(cumsum(df$d) >= 2/3)): no non-missing arguments to min;
## returning Inf
##       skew            quantile66         ratio500        
##  Min.   : 0.04044   Min.   :0.02503   Min.   :  0.00162  
##  1st Qu.: 0.30685   1st Qu.:0.90613   1st Qu.: 12.74491  
##  Median : 0.71210   Median :1.17647   Median : 28.01190  
##  Mean   : 1.66136   Mean   :1.01126   Mean   : 56.02645  
##  3rd Qu.: 1.22995   3rd Qu.:1.26658   3rd Qu.: 85.98513  
##  Max.   :13.72659   Max.   :1.90738   Max.   :296.09418  
##  NA's   :1          NA's   :1         NA's   :1

If you need to do this analysis repeatedly, just wrap the code into your own function that takes a wav file as input and returns all these spectral descriptives. You can also save the actual spectra of different sound files and add them up to obtain an average spectrum across multiple sound files, work with cochleograms instead of raw spectra (check out tuneR::melfcc), etc. Be your own boss!
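
For example, a minimal wrapper might look like this (ratio500_file is a hypothetical helper, not part of soundgen):

ratio500_file = function(file, cutoff = 0.5) {  # cutoff in kHz
  sound = tuneR::readWave(file)
  spgm = spectrogram(as.numeric(sound@left), samplingRate = sound@samp.rate,
                     output = 'original', plot = FALSE)
  freqs = as.numeric(rownames(spgm))  # kHz
  ratios = apply(spgm, 2, function(s) sum(s[freqs >= cutoff]) / sum(s[freqs < cutoff]))
  mean(ratios, na.rm = TRUE)  # average ratio of energy above/below the cutoff
}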

2.4 Loudness

The digital representation of a sound is a long vector of numbers on some arbitrary scale, say [-1, 1]. Values further from zero correspond to a higher amplitude - in physical terms, to greater perturbations of sound pressure caused by the propagating sound wave. A smoothed line following peak amplitude values is known as an amplitude envelope. However, there is no simple correspondence between the absolute height of amplitude peaks and the subjectively experienced loudness of the corresponding sound. A commonly reported measure of sound intensity is its root mean square (RMS) amplitude, which takes into account the average value of sound pressure, and not only the height of peaks. More sophisticated estimates of loudness also take into account the relative sensitivity of human hearing to different frequencies, masking of adjacent tones in the time and frequency domains, etc.

To illustrate the differences between these estimates, let’s look at a pure tone sweeping with fixed absolute amplitude from 100 to 4000 Hz over 2 s:
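
# a sketch: the phase is the cumulative sum of the instantaneous frequency
samplingRate = 16000
freq = seq(100, 4000, length.out = 2 * samplingRate)  # instantaneous frequency, Hz
sweep = sin(2 * pi * cumsum(freq) / samplingRate)     # fixed absolute amplitude
# playme(sweep, samplingRate)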

Smoothed absolute amplitude envelope (flat):
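
# a sketch using seewave; msmooth = c(window length in points, overlap %)
env_sweep = seewave::env(sweep, f = samplingRate, envt = 'abs',
                         msmooth = c(200, 0), plot = FALSE)
plot(as.numeric(env_sweep), type = 'l', ylab = 'Amplitude')  # essentially flat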

RMS amplitude per STFT frame, as returned by analyze(), column “ampl”:

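# a sketch: analyze() returns one row per STFT frame
a_sweep = analyze(sweep, samplingRate = samplingRate, plot = FALSE)
plot(a_sweep$ampl, type = 'l', ylab = 'RMS amplitude')  # also flat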
## Scale not specified. Assuming that max amplitude is 1

An estimate of subjectively experienced loudness in sone, column “loudness”:
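
# a sketch: loudness rises and falls, unlike the flat RMS amplitude
plot(a_sweep$loudness, type = 'l', ylab = 'Loudness, sone')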

Soundgen also has a dedicated function for calculating loudness and plotting the output, getLoudness(). Loudness values are overlaid on the spectrogram - observe how the loudness peaks as f0 reaches about 2-3 kHz and then drops. The absolute values in sone are only an approximation, since the actual sound pressure level depends on the playback device (e.g., your headphones), but the change of loudness within one sound, or across different sounds analyzed with the same settings, is informative.

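For example:

l = getLoudness(sweep, samplingRate = samplingRate)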
## Warning in getLoudness(sweep, samplingRate = samplingRate): Scale not specified.
## Assuming that max amplitude is 1

2.5 Pitch tracking

If you look at the source code of soundgen::analyze() and embedded functions, you will see that almost all of this code deals with a single acoustic characteristic: fundamental frequency (f0) or its perceptual equivalent, pitch. That’s because pitch is both highly salient to listeners and notoriously difficult to measure accurately. The approach followed by soundgen’s pitch tracker is to use several different estimates of f0, each of which is better suited to certain types of sounds. You can use any pitch tracker individually, but their output is also automatically integrated and postprocessed so as to generate the best overall estimate of frame-by-frame pitch. There are five currently implemented classes of pitch estimates in soundgen: autocorrelation, lowest dominant frequency, cepstrum, spectrum (ratios of harmonics), and harmonic product spectrum. These methods of pitch estimation are not treated as completely independent in soundgen. Autocorrelation is performed first to provide an initial guess at the likely pitch and harmonics-to-noise ratio (HNR) of an STFT frame, and then this information is used to adjust the expectations of the cepstral and spectral algorithms. In particular, if autocorrelation suggests that the pitch is high, confidence in cepstral estimates is attenuated; and if autocorrelation suggests that HNR is low, thresholds for spectral peak detection are raised, making spectral pitch estimates more conservative.

The plot below shows a spectrogram of the sound with overlaid pitch candidates generated by five different methods (listed in pitchMethods), with a very vague prior - that is, with no specific expectations regarding the true range of pitch values. The size of each point shows the certainty of estimation: smaller points are calculated with lower certainty and have less weight when all candidates are integrated into the final pitch contour (blue line).

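Such a plot might be produced by a call like this (a sketch; priorSD = 24 semitones, i.e. plus-minus two octaves, makes for a very vague prior):

a_five = analyze(s1, samplingRate = 16000,
                 pitchMethods = c('autocor', 'cep', 'dom', 'spec', 'hps'),
                 priorMean = 300, priorSD = 24,  # a very vague prior
                 plot = TRUE, ylim = c(0, 4))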
## Scale not specified. Assuming that max amplitude is 1

Different pitch tracking methods have their own pros and cons. Cepstrum is helpful for speech but pretty useless for high-frequency whistles or screams, harmonic product spectrum (hps) is easily misled by subharmonics (as in this example), the lowest dominant frequency band (dom) can’t handle low-frequency wind noise, etc. The default is to use “dom” and “autocor” as the most generally applicable, but you can experiment with all the methods and check which ones perform best with the specific type of audio that you are analyzing. Each method can also be fine-tuned (see below), but first it is worth considering the general pitch-related settings.

2.5.1 General settings

analyze has a few arguments that affect all methods of pitch tracking:

  • entropyThres: all non-silent frames are analyzed to produce basic spectral descriptives. However, pitch tracking is computationally costly and can be misleading if applied to obviously voiceless frames. To define what an “obviously voiceless” frame is, we set some cutoff value of Wiener entropy, above which we don’t even try pitch tracking. To disable this feature and track pitch in all non-silent frames, set entropyThres to 1.
  • pitchFloor, pitchCeiling: absolute thresholds for pitch candidates. No values outside these bounds will be considered.
  • priorMean and priorSD specify the mean and standard deviation of a gamma distribution describing our prior knowledge about the most likely pitch values. The prior works by scaling the certainties associated with particular pitch candidates. If you are working with a single type of sound, such as speech by a male speaker or cricket sounds, specifying a strong prior can greatly improve the quality of the resulting pitch contour. When batch-processing a large number of sounds with analyzeFolder(), the recommended approach is to set a vague, but still mildly informative prior. priorMean is specified in Hz, but the expected deviation from this typical value is calculated on a musical scale, so priorSD is in semitones. For example, if we expect f0 values of about 300 Hz plus or minus half an octave (6 semitones), the prior can be defined as priorMean = 300, priorSD = 6. For convenience, the prior can be plotted with getPrior:
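
# a sketch using the values just mentioned
getPrior(priorMean = 300, priorSD = 6)   # f0 about 300 Hz, plus-minus half an octave
getPrior(priorMean = 300, priorSD = 24)  # a much vaguer prior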

TIP The final pitch contour can still pass through low-certainty candidates, so the prior is a soft alternative (or addition) to the inflexible bounds of pitchFloor and pitchCeiling. But the prior has a major impact on pitch tracking, so it is by default shown in every plot.

  • nCands: maximum number of pitch candidates to use per method. This only affects pitchAutocor, pitchCep, and pitchSpec.
  • minVoicedCands: minimum number of pitch candidates that have to be defined to consider a frame voiced. It defaults to ‘autom’, which means 2 if dom is among the candidates and 1 otherwise. The reason is that dom is usually defined, even if the frame is clearly voiceless, so we want another pitch candidate in addition to dom before we classify the frame as voiced.
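
Putting these general settings together, a call might look like this (a sketch; the values are illustrative, not recommendations):

a_gen = analyze(s1, samplingRate = 16000,
                entropyThres = 0.6,                    # track pitch if entropy < 0.6
                pitchFloor = 75, pitchCeiling = 2000,  # hard bounds, Hz
                priorMean = 300, priorSD = 6,          # soft prior
                nCands = 2, minVoicedCands = 2,
                plot = TRUE, ylim = c(0, 4))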

2.5.2 Pitch tracking methods

Having looked at the general settings, it is time to consider the theoretical principles behind each pitch tracking method, together with arguments to analyze that can be used to tweak each one.

2.5.2.1 Autocorrelation

Time domain: pitch by autocorrelation, PRAAT, pitchAutocor.

This is an R implementation of the algorithm used in the popular open-source program PRAAT (Boersma, 1993). The basic idea is that a harmonic signal correlates with itself most strongly at a delay equal to the period of its fundamental frequency (f0). Peaks in the autocorrelation function are thus treated as potential pitch candidates. The main trick is to choose an appropriate windowing function and adjust for its own autocorrelation. Compared to other methods implemented in soundgen, pitch estimates based on autocorrelation appear to be particularly accurate for relatively high values of f0. The settings that control pitchAutocor are:

  • autocorThres: voicing threshold, defaults to 0.7. This means that peaks in the autocorrelation function have to be at least 0.7 in height (1 = perfect autocorrelation). A lower threshold produces more false positives (f0 is detected in voiceless, noisy frames), whereas a higher threshold produces more accurate values of f0 at the expense of failing to detect f0 in noisier frames.
  • autocorSmooth: the width of the smoothing interval (in bins) for finding peaks in the autocorrelation function. If left NULL, it defaults to 7 at a sampling rate of 44100 Hz and smaller odd numbers at lower sampling rates.
  • autocorUpsample: upsamples the autocorrelation function in high frequencies in order to improve the resolution of analysis.
  • autocorBestPeak: amplitude of the lowest best candidate relative to the absolute maximum of the autocorrelation function.

To use only autocorrelation pitch tracking, but with lower-than-default voicing threshold and more candidates, we can do something like this (prior is disabled so as not to influence the certainties of different pitch candidates):

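# a sketch: the exact values are illustrative
a_autocor = analyze(s1, samplingRate = 16000,
                    pitchMethods = 'autocor',
                    autocorThres = 0.45,  # lower-than-default voicing threshold
                    nCands = 3,           # up to 3 candidates per frame
                    priorMean = NA,       # disable the prior
                    plot = TRUE, ylim = c(0, 4))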
## Scale not specified. Assuming that max amplitude is 1

2.5.2.2 Dominant frequency

Frequency domain: the lowest dominant frequency band, dom.

If the sound is harmonic and relatively noise-free, the spectrum of a frame typically has little energy below f0. It is therefore likely that the first sizable peak in the spectrum is in fact f0, and all we have to do is choose a reasonable threshold. Naturally, there are cases of missing f0 and misleading low-frequency noises. Nevertheless, this simple estimate is often surprisingly accurate, and it may be our best shot when the vocal cords are vibrating in a chaotic fashion (deterministic chaos). For example, sounds such as roars lack clear harmonics but are perceived as voiced, and the lowest dominant frequency band often corresponds to perceived pitch.

The settings that control dom are:

  • domThres (defaults to 0.1, range 0 to 1): to find the lowest dominant frequency band, we look for the lowest frequency with amplitude at least domThres. This key setting has to be high enough to exclude accidental low-frequency noises, but low enough not to miss f0. As a result, the optimal level depends a lot on the type of sound analyzed and recording conditions.
  • domSmooth (defaults to 220 Hz): the width of smoothing interval (Hz) for finding the lowest spectral peak. The idea is that we are less likely to hit upon some accidental spectral noise and find the lowest harmonic (or the lowest spectral band with significant power) if we apply some smoothing to the spectrum of an STFT frame, in this case a moving median.

For the sound we are trying to analyze, we can increase domSmooth and/or raise domThres to ignore the subharmonics and trace the true pitch contour:

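# a sketch: raise domThres and/or domSmooth to skip the subharmonics
a_dom = analyze(s1, samplingRate = 16000,
                pitchMethods = 'dom',
                domThres = 0.2,   # higher-than-default (illustrative)
                domSmooth = 500,  # heavier smoothing of the spectrum, Hz
                priorMean = NA, plot = TRUE, ylim = c(0, 4))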
## Scale not specified. Assuming that max amplitude is 1

2.5.2.3 Cepstrum

Frequency domain: pitch by cepstrum, pitchCep.

Cepstrum is the FFT of the log-spectrum. It may be a bit challenging to wrap one’s head around, but the main idea is quite simple: just as the FFT is a way to find periodicity in a signal, the cepstrum is a way to find periodicity in the spectrum. In other words, if the spectrum contains regularly spaced harmonics, its FFT will contain a peak corresponding to this regularity. And since the distance between harmonics equals the fundamental frequency, this cepstral peak gives us f0. In soundgen the FFT is actually applied to the raw spectrum, not the log-spectrum, since this appears to produce better results. Cepstrum is not very useful when f0 is so high that the spectrum contains only a few harmonics, so soundgen automatically discounts the contribution of high-frequency cepstral estimates.

The settings that control pitchCep are:

  • cepThres: voicing threshold (defaults to 0.3).
  • cepSmooth: the width of the smoothing interval (in Hz) for finding peaks in the cepstrum. If left NULL, it defaults to the equivalent of 31 bins at a sampling rate of 44100 Hz and smaller odd numbers at lower sampling rates.
  • cepZp (defaults to 0): zero-padding of the spectrum used for cepstral pitch detection (points). Zero-padding may improve the precision of cepstral pitch detection, but it also slows down the algorithm.
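
For example, a sketch of cepstral pitch tracking on its own (illustrative settings):

a_cep = analyze(s1, samplingRate = 16000,
                pitchMethods = 'cep',
                cepThres = 0.3, nCands = 2,
                priorMean = NA, plot = TRUE, ylim = c(0, 4))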
## Scale not specified. Assuming that max amplitude is 1

2.5.2.4 Ratio of harmonics

Frequency domain: ratios of harmonics, BaNa, pitchSpec.

All harmonics are multiples of the fundamental frequency. The ratio of two neighboring harmonics is thus predictably related to their rank relative to f0. For example, (3 * f0) / (2 * f0) = 1.5, so if we find two harmonics in the spectrum that have a ratio of exactly 1.5, it is likely that f0 is half the lower one (Ba et al., 2012). This is the principle behind the spectral pitch estimate in soundgen, which seems to be particularly useful for noisy, relatively low-pitched sounds.

The settings that control pitchSpec are:

  • specThres (0 to 1, defaults to 0.3): voicing threshold for pitch candidates suggested by the spectral method. The scale is 0 to 1, as usual, but it is the result of a rather arbitrary normalization. The “strength” of spectral pitch candidates is basically calculated as a sigmoid function of the number of harmonic ratios that together converge on the same f0 value. Setting specThres too low may produce garbage, while setting it too high makes the spectral method excessively conservative.
  • specPeak (0 to 1, defaults to 0.35), specHNRslope (0 to Inf, defaults to 0.8): when looking for putative harmonics in the spectrum, the threshold for peak detection is calculated as specPeak * (1 - HNR * specHNRslope). For noisy sounds the threshold is high to avoid false subharmonics, while for tonal sounds it is low to catch weak harmonics. If HNR (harmonics-to-noise ratio) is not known, say if we have disabled the autocorrelation pitch tracker or if it returns NA for a frame, then the threshold defaults to simply specPeak. This key parameter strongly affects how many pitch candidates the spectral method suggests.
  • specSmooth (0 to Inf, defaults to 150 Hz): the width of window for detecting peaks in the spectrum, in Hz. You may want to adjust it if you are working with sounds with a specific f0 range, especially if it is unusually high or low compared to human sounds.
  • specMerge (0 to Inf semitones, defaults to 1): pitch candidates within specMerge semitones are merged with boosted certainty. Since the idea behind the spectral pitch tracker is that multiple harmonic ratios should converge on the same f0, we have to decide what counts as “the same” f0.
  • specSinglePeakCert (0 to 1, defaults to 0.4): if a pitchSpec candidate is calculated based on a single harmonic ratio (as opposed to several ratios converging on the same candidate), its weight (certainty) is taken to be specSinglePeakCert. This mainly has implications for how much we trust spectral vs. other pitch estimates.
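
For example, a sketch of spectral pitch tracking on its own (illustrative settings):

a_spec = analyze(s1, samplingRate = 16000,
                 pitchMethods = 'spec',
                 specThres = 0.2, nCands = 2,
                 priorMean = NA, plot = TRUE, ylim = c(0, 4))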
## Scale not specified. Assuming that max amplitude is 1

TIP As you can guess by now, any pitch tracking method can be tweaked to produce reasonable results for any one particular sound (read: to agree with human intuition). The real trick is to find settings that are accurate on average, across a wide range of sounds and recording conditions. The default settings in analyze are the result of optimization against manually verified pitch measurements of a corpus of 260 human non-linguistic vocalizations. For other types of sounds, you will need to perform your own manual tweaking and/or formal optimization.

2.5.2.5 Harmonic product spectrum

Frequency domain: pitchHps.

This is a simple spectral method based on downsampling the spectrum several times and multiplying the resulting spectra. This emphasizes the lowest harmonic present in the signal, which is hopefully f0. By definition, this method is easily misled by subharmonics (additional harmonics between the main harmonics of f0), but it can be useful in situations when the subharmonic frequency is actually of interest.

The settings that control pitchHps are:

  • hpsThres (0 to 1, defaults to 0.3): voicing threshold for pitch candidates suggested by the hps method.
  • hpsNum (defaults to 5): the number of times the spectrum is downsampled. Increasing this number improves sensitivity in the sense that the method converges on the lowest harmonic, which is generally (but not always) desirable.
  • hpsNorm (defaults to 2, 0 = none): the amount of inflation of hps pitch certainty. Because the downsampled spectra are multiplied, the height of the resulting peak tends to be rather low; hpsNorm compensates for this, otherwise the hps method would have very low confidence compared to other pitch trackers.
  • hpsPenalty (defaults to 2, 0 = none): the amount of penalizing hps candidates in low frequencies. HPS does not perform very well at low frequencies, so the certainty of low-frequency candidates is attenuated.
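
For example, a sketch of hps pitch tracking on its own (illustrative settings):

a_hps = analyze(s1, samplingRate = 16000,
                pitchMethods = 'hps',
                hpsThres = 0.2, hpsNum = 5,
                priorMean = NA, plot = TRUE, ylim = c(0, 4))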
## Scale not specified. Assuming that max amplitude is 1

2.5.3 Missing fundamental

The perception of pitch does not depend on the presence of the lowest partial corresponding to the actual fundamental frequency: even if it is removed or masked by low-frequency noise, the pitch remains unchanged. By definition, the “dom” estimate of pitch cannot function when this lowest partial is missing (it works by literally tracking the lowest dominant frequency band). However, the remaining four pitch tracking methods - autocorrelation, cepstrum, BaNa, and HPS - have no problem dealing with a missing fundamental frequency because they take the entire spectrum into account, not only the lowest partial.

A sound with four partials at 300 Hz (f0), 600 Hz, 900 Hz, and 1200 Hz:
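
# a sketch: build the sound directly from four sine waves
samplingRate = 16000
t = seq(0, 1, length.out = samplingRate)
s2 = sin(2 * pi * 300 * t) + sin(2 * pi * 600 * t) +
     sin(2 * pi * 900 * t) + sin(2 * pi * 1200 * t)
s2 = s2 / max(abs(s2))  # normalize to [-1, 1]
# playme(s2, samplingRate)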

The pitch is tracked correctly:

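# a sketch with default settings
a_s2 = analyze(s2, samplingRate = samplingRate, plot = TRUE, ylim = c(0, 2))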
## Scale not specified. Assuming that max amplitude is 1

The same sound, but without the first partial (f0).
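
A sketch, simply dropping the 300 Hz component:

s3 = sin(2 * pi * 600 * t) + sin(2 * pi * 900 * t) + sin(2 * pi * 1200 * t)
s3 = s3 / max(abs(s3))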

Again, no problem with pitch tracking, although now the pitch contour is following a partial that is no longer there:

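# a sketch: dom is excluded, since the lowest partial is gone
a_s3 = analyze(s3, samplingRate = samplingRate,
               pitchMethods = c('autocor', 'cep', 'spec', 'hps'),
               plot = TRUE, ylim = c(0, 2))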
## Scale not specified. Assuming that max amplitude is 1

The implications are as follows: if the lower part of your signal is degraded (wind noise, an engine running, somebody else talking in the background, etc.), you can apply a high-pass filter to remove low frequencies. Even if you filter out the first partial by doing so, pitch tracking will still be possible. BUT: do NOT use the “dom” pitch estimate if the f0 is either filtered out or invisible because of noise!

2.6 Postprocessing of pitch contour

Pitch postprocessing in soundgen includes a whole battery of distinct operations through which the pitch candidates generated by one or more tracking methods are integrated into the final pitch contour. We will look at them one by one, in the order in which they are performed in analyze. But first of all, here is how to disable them all:

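# a sketch: snakeStep and smooth control further postprocessing steps
# covered later in the vignette; 0 disables them
a_nopost = analyze(s1, samplingRate = 16000, plot = TRUE, ylim = c(0, 4),
                   shortestSyl = 0, shortestPause = 0,  # no continuity assumptions
                   interpolWin = 0,       # no interpolation
                   pathfinding = 'none',  # no pathfinding
                   snakeStep = 0, smooth = 0)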
## Scale not specified. Assuming that max amplitude is 1

When the sound is not too tricky and enough pitch candidates are available, postprocessing actually makes little difference. In terms of the accuracy of the median estimate of f0, you are likely to get a good result even with postprocessing completely disabled. However, if you are interested in the actual intonation contours, and not just the global average, postprocessing can help a lot.

2.6.1 Continuous voiced fragments

It often makes sense to make assumptions about the possible temporal structure of voiced fragments, such as their minimum expected length (shortestSyl) and spacing (shortestPause). If these two parameters are positive numbers, the first stage of postprocessing is to divide the sound into continuous voiced fragments that satisfy these assumptions. The default minimum length of a voiced fragment is a single STFT frame. If shortestSyl is longer than a single frame, then we need at least two adjacent voiced frames to start a new voiced fragment. A single voiced frame surrounded by unvoiced frames then gets discarded (assumed to be unvoiced). If two voiced fragments are separated by less than shortestPause, they are merged. What this means is simply that they are processed as a single syllable by pathfinder() (see below). No interpolation takes place at this stage.
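
For example (a sketch; both durations are in ms, and the values are illustrative):

a_syl = analyze(s1, samplingRate = 16000,
                shortestSyl = 100,   # minimum expected length of a voiced fragment
                shortestPause = 80,  # fragments separated by less than this are merged
                plot = TRUE, ylim = c(0, 4))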

The next few blocks of postprocessing are performed by an internal function, soundgen:::pathfinder(). Its input is a matrix of pitch candidates for each frame of a single voiced syllable, usually with multiple candidates per frame. Each candidate is also associated with a different certainty. We want to find a good path through these candidates - that is, a pitch contour that both passes close to the strongest candidates and minimizes pitch jumps, producing a relatively smooth contour. The simplest first approximation is to take a mean of all pitch candidates per frame weighted by their certainty - the “center of gravity” of pitch candidates - and for each frame to select the candidate that lies closest to this center of gravity. This initial guess at a reasonable path may or may not be processed further, depending on the settings described below.

2.6.2 Interpolation

To make sure we have at least one pitch candidate for every frame in the supposedly continuous voiced fragment, we interpolate to fill in any missing values. The same algorithm also adds new pitch candidates with certainty interpolCert if a frame has no pitch candidates within interpolTol of the median of the “center of gravity” estimate over plus-minus interpolWin frames. The frequency of new candidates is equal to this median. For example, if interpolTol = 0.05, new candidates are calculated if there are none within 0.95 to 1.05 times the median over the interpolation window. You can also enable interpolation to fill in unvoiced frames, but without adding new pitch candidates in voiced frames. To do so, set interpolTol = Inf.

Here is an example (interpolated segments are shown with a dotted line):

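# a sketch comparing a single tracker without and with interpolation
# (cepThres is left at its default; the interpolation settings are illustrative)
a_noInterpol = analyze(s1, samplingRate = 16000, pitchMethods = 'cep',
                       interpolWin = 0,  # no interpolation
                       pathfinding = 'none', snakeStep = 0, smooth = 0,
                       plot = TRUE, ylim = c(0, 4))
a_interpol = analyze(s1, samplingRate = 16000, pitchMethods = 'cep',
                     interpolWin = 3, interpolTol = 0.05, interpolCert = 0.3,
                     pathfinding = 'none', snakeStep = 0, smooth = 0,
                     plot = TRUE, ylim = c(0, 4))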
## Scale not specified. Assuming that max amplitude is 1
## Scale not specified. Assuming that max amplitude is 1

2.6.3 Pathfinding

The next step after interpolation is pathfinding proper - searching for the optimal path through pitch candidates. If pathfinding = "none", this step is skipped, so we just continue working with the path that lies as close as possible to the (possibly interpolated) center of gravity of pitch candidates. If pathfinding = "fast" (the default option), a simple heuristic is employed, in which we walk down the path twice, first left to right and then right to left, trying to minimize the cost measured as a weighted mean of the distance from the center of gravity and the deviation from a smooth contour. The key setting is certWeight, which specifies how much we prioritize the certainty of pitch candidates vs. pitch jumps / the internal tension of the resulting pitch curve. Low certWeight (close to 0): we are mostly concerned with avoiding rapid pitch fluctuations in our contour. High certWeight (close to 1): we mostly pay attention to our certainty in particular pitch candidates. The example below is intended as an illustration of how pathfinding works, so all other types of smoothing are disabled, forcing the final pitch contour to pass strictly through existing candidates.

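For example, compare low and high certWeight (a sketch; interpolation and other smoothing are disabled, as described above):

# certWeight close to 0: prioritize a smooth contour
a_smooth = analyze(s1, samplingRate = 16000, pathfinding = 'fast',
                   certWeight = 0.1, interpolWin = 0,
                   snakeStep = 0, smooth = 0,
                   plot = TRUE, ylim = c(0, 4))
# certWeight close to 1: prioritize strong pitch candidates
a_cert = analyze(s1, samplingRate = 16000, pathfinding = 'fast',
                 certWeight = 0.9, interpolWin = 0,
                 snakeStep = 0, smooth = 0,
                 plot = TRUE, ylim = c(0, 4))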
## Scale not specified. Assuming that max amplitude is 1
## Scale not specified. Assuming that max amplitude is 1