# Intro

## Purpose

The function soundgen is intended for the synthesis of animal vocalizations, including human non-linguistic vocalizations like sighs, moans, screams, etc. It can also create non-biological sounds that require precise control over spectral and temporal modulations, such as special sound effects in computer games or acoustic stimuli for scientific experiments. Soundgen is NOT meant to be used for text-to-speech conversion. It can be adapted for this purpose, but existing specialized tools will probably serve better.

Soundgen uses a parametric algorithm, which means that sounds are synthesized de novo, and the output is completely determined by the values of control parameters, as opposed to concatenating or modifying existing audio recordings. Under the hood, the current version of soundgen generates and filters two sources of excitation: sine waves and white noise.

The rest of this vignette will unpack this last statement and demonstrate how soundgen can be used in practice. To simplify setting the control parameters and visualizing the output, the soundgen library includes an interactive Shiny app. To start the app, type soundgen_app() from R or try it online at cogsci.se/soundgen.html. To generate sounds from the console, use the function soundgen. Each section of the vignette focuses on a particular aspect of sound generation, both describing the relevant arguments of soundgen and explaining how they can be set in the Shiny app.

## Before you proceed: consider the alternatives

In R, there are at least three other packages that offer sound synthesis: tuneR, seewave, and phonTools. Both seewave and tuneR implement straightforward ways to synthesize pulses and square, triangular, or sine waves, as well as noise with an adjustable (linear) spectral slope. You can also create multiple harmonics with both amplitude and frequency modulation using seewave::synth() and seewave::synth2(). There is even a function for adding formants and thus creating different vowels: phonTools::vowelsynth(). If this is enough for your needs, try these packages first.
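For orientation, here is a minimal sketch of what these alternatives look like in practice. Only the basic arguments of seewave::synth() and seewave::noisew() are assumed here, and phonTools::vowelsynth() is called with its defaults:

```r
library(seewave)
tone = synth(f = 16000, d = 1, cf = 440)  # 1 s pure tone at 440 Hz, 16 kHz sampling
noise = noisew(f = 16000, d = 1)          # 1 s of white noise

# library(phonTools)
# v = vowelsynth()  # a synthetic vowel with default settings
```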

So why bother with soundgen? First, it takes customization and flexibility of sound synthesis much further. You will appreciate this flexibility if your aim is to produce convincing biological sounds. And second, it’s a higher-level tool with dedicated subroutines for things like controlling the rolloff (relative energy of different harmonics), adding moving formants and antiformants, mixing harmonic and noise components, controlling voice changes over multiple syllables, adding stochasticity to imitate unpredictable voice changes common in biological sound production, and more. In other words, soundgen offers powerful control over low-level acoustic characteristics of synthesized sounds with the benefit of also offering transparent, meaningful high-level parameters intended for rapid and straightforward specification of whole bouts of vocalizing.

Because of this high-level control, you don’t really have to think about the math of sound synthesis in order to use soundgen (although if you do, that helps). This vignette also assumes that the reader has some training in phonetics or bioacoustics, particularly for sections on formants and subharmonics.

## Basic principles of sound synthesis in soundgen

Feel free to skip this section if you are only interested in using soundgen, not in how it works under the hood.

Soundgen’s credo is to start with a few control parameters (e.g., the intonation contour, the amount of noise, the number of syllables and their duration, etc.) and to generate a corresponding audio stream, which will sound like a biological vocalization (a bark, a laugh, etc.). The core algorithm for generating a single voiced segment implements the standard source-filter model (Fant, 1971). The voiced component is generated as a sum of sine waves and the noise component as filtered white noise, and both components are then passed through a frequency filter simulating the effect of the vocal tract. This process can be conceptually divided into three stages:

1. Generation of the harmonic component (glottal source). At this crucial stage, we “paint” the spectrogram of the glottal source based on the desired intonation contour and spectral envelope by specifying the frequencies, phases, and amplitudes of a number of sine waves, one for each harmonic of the fundamental frequency (f0). If needed, we also add stochastic and non-linear effects at this stage: jitter and shimmer (random fluctuation in frequency and amplitude), subharmonics, slower random drift of control parameters, etc. Once the spectrogram “painting” is complete, we synthesize the corresponding waveform by generating and adding up as many sine waves as there are harmonics in the spectrum.

Note that soundgen currently implements only sine wave synthesis of voiced fragments. This differs from modeling the glottal cycles themselves, as in phonetic models and some popular text-to-speech engines (e.g., Klatt, 1980). Normally, multiple glottal cycles are generated simultaneously, with no pauses between them (no closed phase) and with a continuously changing f0. Starting from version 1.1.0, it is also possible to add a closed phase, in which case each glottal cycle is generated separately, with f0 held stable within each cycle. Future versions of soundgen may offer the option to use a particular parametric model of the glottal cycle as the excitation source, as an alternative to generating a separate sine wave for each harmonic.

2. Generation of the turbulent noise component (aspiration, hissing, etc.). In addition to harmonic oscillations of the vocal cords, there are other sources of excitation, notably turbulent noise. For example, aspiration noise may be synthesized as white noise with rolloff -6 dB/octave (Klatt & Klatt, 1990) and added to the glottal source before formant filtering. It is similarly straightforward to add other types of noise, which may originate higher up in the vocal tract and thus display a different formant structure from the glottal source (e.g., high-frequency hissing, broadband clicks for tongue smacking, etc.).

Some form of noise is synthesized in most sound generators. In soundgen, noise is created in the frequency domain (i.e., as a spectrogram) and then converted into a time series via inverse FFT. Noise is generated with a flat spectrum up to a certain threshold, followed by a user-specified linear rolloff (Johnson, 2011).

3. Spectral filtering (formants and lip radiation). The vocal tract acts as a resonator that modifies the source spectrum by amplifying certain frequencies and dampening others. In speech, time-varying resonance frequencies (formants) are responsible for the distinctions between different vowels, but formants are also ubiquitous in animal vocalizations. Just as we “painted” a spectrogram for the acoustic source in (1), we now “paint” a spectral filter with a specified number of stationary or moving formants. We then take a fast Fourier transform (FFT) of the generated waveform to convert it back to a spectrogram, multiply the latter by our filter, and then take an inverse FFT to go back to the time domain. This filtering can be applied to harmonic and noise components separately or - for noise sources close to the glottis - the harmonic component and the noise component can be added first and then filtered together.

Note that this FFT-mediated method of adding formants is different from the more traditional convolution, but with multiple formants it is both considerably faster and (arguably) more intuitive. If you are wondering why we should bother to do an iFFT and then an FFT again before filtering the voiced component, rather than simply applying the filter to the rolloff matrix before the iFFT, this is an annoying consequence of some complexities of the temporal structure of a bout, especially of applying non-stationary filters (moving formants) that span multiple syllables. With noise, however, this extra step can be avoided, and we only do iFFT once. A minimal sketch of all three stages follows below.
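To make the three stages concrete, here is a bare-bones base-R sketch. This is NOT soundgen's actual code: the rolloff, the noise level, and the single gaussian "formant" are arbitrary illustrative choices.

```r
sr = 16000                            # sampling rate, Hz
t = seq(0, 0.5, length.out = sr / 2)  # 500 ms time vector
f0 = 220                              # fundamental frequency, Hz

# Stage 1: voiced source as a sum of sine waves, one per harmonic,
# with amplitudes falling off at roughly -12 dB/octave
nHarm = 30
voiced = rowSums(sapply(1:nHarm, function(h)
  10 ^ (-12 * log2(h) / 20) * sin(2 * pi * h * f0 * t)))

# Stage 2: add a little white noise standing in for aspiration
excitation = voiced + 0.1 * rnorm(length(t))

# Stage 3: FFT, multiply the spectrum by a crude one-formant filter, inverse FFT
sp = fft(excitation)
freqs = (0:(length(sp) - 1)) * sr / length(sp)
flt = 1 + 2 * exp(-(freqs - 800) ^ 2 / (2 * 150 ^ 2)) +         # "formant" at 800 Hz
  2 * exp(-(freqs - (sr - 800)) ^ 2 / (2 * 150 ^ 2))            # mirror image, so the output stays real
out = Re(fft(sp * flt, inverse = TRUE)) / length(sp)
# playme(out / max(abs(out)), sr)  # uncomment if playback works on your system
```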

Having briefly looked at the fundamental principles of sound generation, we proceed to control parameters. The aim of the following presentation is to offer practical tips on using soundgen. For further information on more fundamental principles of acoustics and sound synthesis, you may find the vignettes in seewave very helpful, and look out for the upcoming book on sound synthesis in R by Jerome Sueur, the author of the seewave package. Some essential references are also listed at the end of this vignette, especially those sources that have inspired particular routines in soundgen.

# Using soundgen

## Where to start

To generate a sound, you can either type soundgen_app() to open an interactive Shiny app or call soundgen() from the R console with manually specified parameters. The object presets contains a collection of presets that demonstrate some of the possibilities.

## Audio playback

Audio playback may fail, depending on your platform and installed software. Soundgen relies on the tuneR library for audio playback, via a wrapper function called playme() that accepts both Wave objects and simple numeric vectors. If soundgen(play = TRUE) throws an error, make sure audio can be played before you proceed with using soundgen. To do so, save some sound as a vector first: sound = soundgen(play = FALSE), or even simply sound = rnorm(10000). Then try to find a way to play this vector. You may need to change the default player in tuneR or install additional software. See the seewave vignette on sound input/output for an in-depth discussion of audio playback in R.
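A minimal check might look like this. Setting a system player via tuneR::setWavPlayer() is one common fix; the right command depends on your OS:

```r
sound = rnorm(10000)                 # any vector will do - here, just white noise
playme(sound, samplingRate = 16000)

# if nothing plays, try pointing tuneR to a command-line player, e.g.:
# tuneR::setWavPlayer('aplay')   # Linux
# tuneR::setWavPlayer('afplay')  # macOS
```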

Because of these possible errors, audio playback is disabled by default in the rest of this vignette. To turn it on without changing any other code, set the global variable playback to TRUE by changing the index below from 2 to 1:

```r
playback = c(TRUE, FALSE)[2]  # change [2] to [1] to enable playback
```

## From the console

The basic workflow from R console is as follows:

```r
library(soundgen)
s = soundgen(play = playback)  # default sound: a short [a] by a male speaker
# 's' is a numeric vector - the waveform. You can save it, play it, plot it, ...

# names(presets)  # speakers in the preset library
```

### Closed glottis

If f0 is very low, as in vocal fry or some animal vocalizations like crocodile roaring or elephant rumbling, individual glottal pulses can be both seen on a spectrogram and perceived as distinct percussion-like acoustic events separated by noticeable pauses. Soundgen can create such sounds by switching to a new mode of production: instead of synthesizing continuous sine waves spanning the entire syllable, it creates each glottal pulse individually (each with its full set of harmonics) and then puts them together with pauses in between.

This is much slower than continuous sine wave synthesis and is mostly justified for very low-pitched sounds, since at higher pitch there will be too few points per glottal cycle to sound convincing without increasing the samplingRate to astronomical values. For example:

```r
# Not a good idea: samplingRate is too low
s1 = soundgen(pitchAnchors = c(1500, 800), glottisAnchors = 75,
              samplingRate = 16000, play = playback)

# This sounds better but takes a long time to synthesize:
s2 = suppressWarnings(soundgen(pitchAnchors = c(1500, 800), glottisAnchors = 75,
                               samplingRate = 80000, play = playback,
                               invalidArgAction = 'ignore'))
# NB: invalidArgAction = 'ignore' forces a "weird" samplingRate value
# to be accepted without question

# Now this is what this feature is meant for: vocal fry
s3 = soundgen(sylLen = 1500, pitchAnchors = c(75, 40),
              glottisAnchors = c(0, 700),
              samplingRate = 16000, play = playback)
plot(s3, type = 'l', xlab = '', ylab = '')
```

## Nonlinear effects

Soundgen can add subharmonics and sidebands, or even approximate deterministic chaos by adding strong jitter and shimmer. These effects basically make the sound harsh. Jitter and shimmer are created by adding random noise to the periods and amplitudes, respectively, of the “glottal cycles”. Subharmonics could be created by adding rapid amplitude and/or frequency modulation, but for maximum flexibility soundgen uses a different - slightly hacky, but powerful - technique of literally setting up an additional sine wave for each subharmonic, based on the desired frequency of subharmonics (subFreq). The actual frequency will not be exactly equal to subFreq, since it must be a fraction of f0 at all time points (one half, one third, etc.). The amplitude of each subharmonic is a function of its distance from the nearest harmonic of the f0 stack and the desired width of sidebands (subDep). This way we can create either subharmonics or narrow sidebands that vary naturally as f0 changes over time, producing bifurcations and switching between different subharmonic regimes (see Wilden et al., 1998).
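The gist of this weighting can be sketched in a few lines. This is an illustrative toy calculation, not soundgen's exact formula; the exponential decay is an arbitrary stand-in:

```r
f0 = 800; subFreq = 400; subDep = 150
g0 = f0 / round(f0 / subFreq)           # forced to an integer fraction of f0
sub = seq(g0, 4000, by = g0)            # candidate subharmonic stack
sub = sub[sub %% f0 != 0]               # drop frequencies that coincide with f0 harmonics
dist = pmin(sub %% f0, f0 - sub %% f0)  # distance to the nearest f0 harmonic
amp = exp(-dist / subDep)               # wider sidebands (larger subDep) = stronger subharmonics
data.frame(freq = sub, amp = round(amp, 3))
```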

The main limitation of this approach is that it is too computationally costly to generate variable numbers of subharmonics for the entire bout. The solution currently adopted in soundgen is to break longer sounds into so-called “epochs” with a constant number of subharmonics in each. The epochs are synthesized separately, trimmed to the nearest zero crossing, and then glued together with a rapid crossFade(). This is suboptimal, since it shortens the sound and may introduce audible artifacts at transitions between epochs. shortestEpoch controls the approximate minimum length of each epoch. Longer epochs minimize problems with transitions, but the behavior of subharmonics then becomes less variable, since their number is constrained to be constant within each epoch.

### Nonlinear regimes

To add nonlinear effects, you can use just two parameters – nonlinBalance and nonlinDep – that together regulate what proportion of the sound is modified and to what extent. However, for best results it is advisable to set advanced settings manually (see below). At temperature > 0, nonlinBalance creates a random walk that divides each syllable into epochs defined by their regime, using two thresholds to determine when a new regime begins (see Fitch et al., 2002):

1. Regime 1: no nonlinear effects. If nonlinBalance = 0%, the whole syllable is in regime 1.

2. Regime 2: subharmonics only. Note that subharmonics are only added to segments with subFreq < f0 / 2.

3. Regime 3: subharmonics and jitter. If nonlinBalance = 100%, the whole syllable is in regime 3.

nonlinDep is a hyper-parameter that adjusts several settings at once, making the voice harsher in pitch regimes 2 and 3, but without affecting the balance between regimes.
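For example, here is a quick sketch with an intermediate balance. The parameter values are arbitrary, and because the underlying random walk is stochastic, the exact extent of the nonlinear epochs varies from run to run:

```r
s = soundgen(sylLen = 1000, pitchAnchors = c(300, 280),
             nonlinBalance = 50,          # ~half the syllable in regimes 2-3
             subFreq = 150, subDep = 80,  # subharmonics in regime 2
             jitterDep = 1,               # jitter in regime 3
             temperature = 0.2, play = playback, plot = TRUE)
```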

### Subharmonics

Moving on to advanced nonlinear effects settings, subFreq (“Subharmonic frequency, Hz” in the app) and subDep (“Width of sidebands, Hz”) define the properties of subharmonics in pitch regimes 2 and 3. Say your vocalization has a relatively flat intonation contour with a fundamental frequency of about 800 Hz, and you want to add a single subharmonic (g0). You then set the expected subharmonic frequency to 400 Hz. Since g0 is forced to be an integer fraction of f0 at all time points, it will not be exactly 400 Hz, but it will produce a single subharmonic at f0 / 2 (as long as f0 stays close to 800 Hz: if f0 goes up to 1200 Hz, you will get two subharmonics instead, since 1200 / 400 = 3). The width of sidebands defines how quickly the energy of subharmonics dissipates with distance from the nearest harmonic of f0. For example, our single subharmonic is audible but weak at sideband width = 150 Hz, while it becomes strong enough to be perceived as the new f0 at sideband width = 400 Hz, effectively halving the pitch:

```r
s1 = soundgen(subFreq = 400, subDep = 150, nonlinBalance = 100,
              jitterDep = 0, shimmerDep = 0, temperature = 0,
              sylLen = 500, pitchAnchors = c(800, 900),
              play = playback, plot = TRUE, ylim = c(0, 3))
s2 = soundgen(subFreq = 400, subDep = 400, nonlinBalance = 100,
              jitterDep = 0, shimmerDep = 0, temperature = 0,
              sylLen = 500, pitchAnchors = c(800, 900),
              play = playback, plot = TRUE, ylim = c(0, 3))
```

Sidebands may be easier to understand for high-pitched sounds with low subharmonic frequencies. For example, chimpanzees emit piercing screams with narrow subharmonic bands. If we set subFreq to 75 Hz and subDep to 130 Hz, subharmonics literally form a band around each harmonic of the main stack, creating a very distinct, immediately recognizable sound quality:

```r
s = soundgen(subFreq = 75, subDep = 130, nonlinBalance = 100,
             jitterDep = 0, shimmerDep = 0, temperature = 0,
             sylLen = 800, formants = NULL, play = playback,
             pitchAnchors = data.frame(time = c(0, .3, .9, 1),
                                       value = c(1200, 1547, 1487, 1154)))
spectrogram(s, 16000, windowLength = 50, ylim = c(0, 5), contrast = .7)
```

### Jitter / shimmer

As for jitter in pitch regime 3, it wiggles both f0 and g0 harmonic stacks, blurring the spectrum. Parameter jitterDep (“Jitter depth, semitones” in the app) defines how much the pitch fluctuates, while jitterLen (“Jitter period, ms”) defines how rapid these fluctuations are. Slow jitter with a period of ~50 ms produces the effect of a shaky, unsteady voice. It may sound similar to a vibrato, but jitter is irregular. Rapid jitter with a period of ~1 ms, especially in combination with subharmonics, may be used to imitate deterministic chaos, which is found in voiced but highly irregular animal sounds such as barks, roars, noisy screams, etc. This works best for high-pitched sounds like screams.

```r
# To get jitter without subharmonics, set temperature = 0, subDep = 0
# and specify the required jitter depth and period
s1 = soundgen(jitterLen = 50, jitterDep = 1,  # shaky voice
              sylLen = 1000, subDep = 0, nonlinBalance = 100,
              pitchAnchors = c(150, 170), play = playback)
s2 = soundgen(jitterLen = 1, jitterDep = 1,   # harsh voice
              sylLen = 1000, subDep = 0, nonlinBalance = 100,
              pitchAnchors = c(150, 170), play = playback)
```

To get jitter + shimmer + subharmonics, set temperature to 0 (nonlinear effects are then applied to the entire sound) or use nonlinBalance close to 100% with temperature > 0 (effectively the same, but preserving stochastic behavior of other parameters). For example, barks of a small, annoying dog can be roughly approximated with this minimal code (ignoring respiration to keep things simple):

```r
s = soundgen(repeatBout = 2, sylLen = 140, pauseLen = 100,
             vocalTract = 8, formants = NULL, rolloff = 0,
             pitchAnchors = c(559, 785, 557), mouthAnchors = c(0, 0.5, 0),
             nonlinBalance = 100, jitterDep = 1, subDep = 60, play = playback)
```

### Chaos

There is no way to synthesize true deterministic chaos with residual harmonic structure in soundgen. However, there are several roundabout ways to achieve a comparable effect. As already mentioned, strong jitter and shimmer create harsh sounds that are perceptually similar to deterministic chaos, especially at higher f0 values. Another method is to encode very rapid pitch jumps between harmonically related values, like this:

```r
s = soundgen(sylLen = 1200,
             pitchAnchors = list(
               time = c(0, 80, 81, 230, 231, 385,
                        # 500 time anchors here - an episode of "chaos"
                        seq(385, 850, length.out = 500),
                        851, 1020, 1021, 1085),
               value = c(700, 1130, 1000, 1200, 1860, 1840,
                         # random f0 jumps b/w 1.2 & 1.8 kHz
                         sample(c(1200, 1800), size = 500, replace = TRUE),
                         1620, 1540, 1220, 900)),
             temperature = 0.05,
             tempEffects = list(pitchAnchorsDep = 0),
             nonlinBalance = 100, subDep = 0, jitterDep = .3,
             rolloffKHz = 0, rolloff = 0, formants = c(900, 1300, 3300, 4300),
             samplingRate = 22000, play = playback, plot = TRUE, osc = TRUE)
```

Incidentally, you can use similar tricks for introducing variation in any soundgen parameter. For example, you can use runif() or rnorm() to randomly vary things like mouth opening, pitch, amplitude. That’s the best part of working in R!

```r
s = soundgen(sylLen = 800,
             mouthAnchors = rnorm(n = 5, mean = .5, sd = .3),
             play = playback)
```

## Unvoiced component (noise)

In addition to the tonal (harmonic, voiced) component, which is synthesized as a stack of harmonics (sine waves), soundgen produces broad-spectrum turbulent noise (unvoiced component). This noise can be added to the voiced component to create breathing, sniffing, snuffling, hissing, gargling, etc. This can be done in two ways (in the app, go to “Tract / Unvoiced type”):

1. Breathing. This noise type is generated as white noise with spectral rolloff given by rolloffNoise (“Noise rolloff, dB/octave” in the app) above a certain cutoff value (flatSpectrum, the default is currently 1200 Hz). It is added to the voiced component before formant filtering. As a result, it follows exactly the same formant structure as the voiced component, and you cannot modify its spectrum beyond the basic rolloff setting. This is useful for adding noise that originates deep in the throat, close to the vocal cords. To generate breathing, specify noiseAnchors, but leave formantsNoise blank (NA, which is its default value). Soundgen then assumes that the unvoiced component should have the same formant structure as the voiced component.
```r
s = soundgen(noiseAnchors = data.frame(time = c(0, 500), value = c(-40, 20)),
             formantsNoise = NA,  # breathing - same formants as for voiced
             sylLen = 500, play = playback)
```

2. Any other noise type is added to the voiced component after formant filtering, and can therefore be filtered independently of the voiced component. To generate such noise, you can use one of the available presets in the app (for now, only a few human consonants) or specify the formants for the unvoiced component manually, in exactly the same format (formantsNoise) as for the voiced component (formants).

```r
s = soundgen(noiseAnchors = data.frame(time = c(0, 500), value = c(-40, 20)),
             # specify noise filter ≠ voiced filter to get ~[s]
             formantsNoise = list(
               f1 = data.frame(freq = 6000,
                               amp = 50,
                               width = 1000)
             ),
             rolloffNoise = 0, pitchAnchors = NA,
             sylLen = 500, play = playback, plot = TRUE)
```

TIP: pitchAnchors = NA or NULL removes the voiced component, so that only turbulent noise is synthesized. In the app, untick the box Intonation / Intonation syllable / Generate voiced component?

In the shiny app, the tab “Source / Unvoiced timing” is for specifying the amplitude contour of the unvoiced component. It shows the timing of noise relative to the voiced component of a typical syllable. Note that noise is allowed to fill the pauses between syllables, but not between bouts. For example, in this two-syllable bout noise carries over after the end of each voiced component, since syllable duration is 120 ms and the last breathing time anchor is 209 ms:

```r
s = soundgen(nSyl = 2, sylLen = 120, pauseLen = 120,
             temperature = 0, rolloffNoise = -5,
             noiseAnchors = data.frame(time = c(39, 56, 209),
                                       value = c(-80, -10, -50)),
             formants = list(f1 = c(860, 530), f2 = c(1280, 2400)),
             formantsNoise = c(420, 1200),
             plot = TRUE, osc = TRUE, play = playback)
```

Both the timing and the amplitude of noise anchors are defined relative to the voiced component. Because noise can extend beyond voiced fragments, however, time anchors for noise MUST be specified in ms (unlike all the other contours, which accept time anchors on any arbitrary scale, say 0 to 1). If the noise starts before the voiced part, the first time anchor will be negative. This is easier to specify in the app, which provides a preview. From the R console, you can also preview the noise amplitude contour implied by your anchors by calling getSmoothContour, for example:

```r
a = getSmoothContour(anchors = data.frame(time = c(-50, 200, 300),
                                          value = c(-80, 20, -80)),
                     voiced = 200, plot = TRUE, ylim = c(-80, 40), main = '')
```

TIP: if the voiced part is shorter than permittedValues['sylLen', 'low'], it is not synthesized at all, so you only get the unvoiced component (if any). The voiced part is also not synthesized if the noise is at its loudest, namely permittedValues['noiseAmpl', 'high'] (40 dB).
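These limits can be looked up directly in the built-in permittedValues table (rows are parameters; the columns include 'low' and 'high'):

```r
# the two limits referenced in the TIP above
permittedValues[c('sylLen', 'noiseAmpl'), ]
```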

Unlike the voiced component, breathing noise is NOT enriched with stochastically added formants if formantsNoise is specified explicitly (i.e., if this is not aspiration noise). If formantsNoise = NA or NULL (i.e., if this is aspiration noise), formant structure is calculated based on vocal tract length, as usual. For example, to create simple sighs, you can just specify the length of your creature's vocal tract:

```r
s1 = soundgen(vocalTract = 17.5,  # ~human throat (17.5 cm)
              formants = NULL, attackLen = 200, play = playback,
              noiseAnchors = list(time = c(0, 800), value = c(40, 40)))
# NB: since there is no voiced component, the only way to control syllable length
# is by specifying the appropriate noiseAnchors$time, in this case 0 to 800 ms

s2 = soundgen(vocalTract = 30,  # a large animal
              formants = NULL, attackLen = 200, play = playback,
              noiseAnchors = list(time = c(0, 800), value = c(40, 40)))
# NB: voiced component not generated, since noiseAnchors$value >= 40 dB
# Another way to remove the voiced component is to write pitchAnchors = NULL
```

In contrast, the following will produce only the two specified formants, ignoring the specified vocal tract length:

```r
s3 = soundgen(vocalTract = 17.5, formantsNoise = c(1000, 2000),
              noiseAnchors = list(time = c(0, 800), value = 40),
              play = playback, plot = TRUE)
```

Finally, the excitation source for the unvoiced component can be synthesized as white noise (if rolloffNoise = 0) or as turbulent noise with a spectrum that linearly (not exponentially!) loses power above a certain threshold, which is currently fixed at 1200 Hz. The parameter rolloffNoise thus controls the source spectrum of the unvoiced component:

```r
s1 = soundgen(vocalTract = 17.5, rolloffNoise = 0,
              formants = NULL, attackLen = 200, play = playback,
              noiseAnchors = list(time = c(0, 800), value = c(40, 40)))

s2 = soundgen(vocalTract = 17.5, rolloffNoise = -10,
              formants = NULL, attackLen = 200, play = playback,
              noiseAnchors = list(time = c(0, 800), value = c(40, 40)))
# NB: voiced component not generated, since noiseAnchors$value >= 40 dB
```

# Combining two sounds

To achieve a complex vocalization, sometimes it may be necessary - or easier - to synthesize two or more sounds separately and then combine them. If the components are strictly consecutive, you can simply concatenate them with c(). If there is no silence in between, it is safer to use crossFade(), otherwise there can be transients like clicks at the junction between the two sounds:

```r
par(mfrow = c(1, 2))
sound1 = sin(2 * pi * 1:5000 * 100 / 16000)  # pure tone, 100 Hz
sound2 = sin(2 * pi * 1:5000 * 200 / 16000)  # pure tone, 200 Hz

# simple concatenation
comb1 = c(sound1, sound2)
# playme(comb1)  # note the click
plot(comb1[4000:5500], type = 'l', xlab = '', ylab = '')
# note the abrupt transition, which creates the click
# spectrogram(comb1, 16000)

# cross-fade
comb2 = crossFade(sound1, sound2, samplingRate = 16000, crossLen = 50)
# playme(comb2)  # no click
plot(comb2[4000:5500], type = 'l', xlab = '', ylab = '')  # gradual transition
# spectrogram(comb2, 16000)
par(mfrow = c(1, 1))
```

Here is a more elaborate example, in which two components of the same syllable are so different that it's easier to synthesize them separately and then cross-fade, rather than to try to find a set of parameters that will generate the entire syllable in one go:

```r
cow1 = soundgen(sylLen = 1400,
                pitchAnchors = list(time = c(0, 11/14, 1),
                                    value = c(75, 130, 200)),
                temperature = 0.1, rolloff = -6, rolloffOct = -3,
                rolloffParab = 12, mouthOpenThres = 0.6,
                formants = NULL, vocalTract = 36.5,
                mouthAnchors = list(time = c(0, 0.82, 1),
                                    value = c(0.6, 0, 1)),
                noiseAnchors = list(time = c(0, 1400), value = c(-25, -25)),
                rolloffNoise = -4, addSilence = 0)
cow2 = soundgen(sylLen = 310, pitchAnchors = c(359, 359),
                temperature = 0.05, nonlinBalance = 100,
                subFreq = 150, subDep = 70, jitterDep = 1.3,
                rolloff = -6, rolloffOct = -3, rolloffKHz = 0,
                formants = NULL, vocalTract = 36.5,
                noiseAnchors = list(time = c(0, 26, 317, 562),
                                    value = c(-80, -23, -22, -80)),
                rolloffNoise = -6, attackLen = 0, addSilence = 0)
s = crossFade(cow1 * 3, cow2,  # adjust the relative volume by scaling
              samplingRate = 16000, crossLen = 150)
# playme(s, 16000)
spectrogram(s, 16000, osc = TRUE, ylim = c(0, 4))
```

If you want the two sounds to overlap, you can use addVectors(), which simply makes sure the two waveforms are padded with zeros to the same length and then overlaps them intelligently. Note that in this case cross-fading is not appropriate, so it may be safer to apply fade-in/out to both sounds to soften the attack. For example, here is how to add the chirping of birds in the background:

```r
sound1 = soundgen(sylLen = 700, pitchAnchors = 250:180,
                  formants = 'aaao', addSilence = 100, play = playback)
sound2 = soundgen(nSyl = 2, sylLen = 150, pitchAnchors = 4300:2200,
                  attackLen = 10, formants = NA, temperature = 0,
                  addSilence = 0, play = playback)
insertionTime = .1 + .15  # silence + 150 ms
samplingRate = 16000
insertionPoint = insertionTime * samplingRate
comb = addVectors(sound1,
                  sound2 * .05,  # to make sound2 quieter relative to sound1
                  insertionPoint = insertionPoint)
# soundgen softens attack by default, so no clicks
# playme(comb)
spectrogram(comb, 16000, windowLength = 10, ylim = c(0, 5),
            contrast = .5, colorTheme = 'seewave')
```

# Morphing two sounds

Sometimes it is desirable to combine the characteristics of two different stimuli, producing some kind of intermediate form - a hybrid or blend.
This technique is called morphing. It is employed regularly and successfully with visual stimuli, but not so often with sounds, because it turns out to be rather tricky to morph audio. Since soundgen creates sounds parametrically, however, morphing becomes much more straightforward: all we need to do is define the rules for interpolating between all control parameters. For example, say we have sound A (100 ms) and sound B (500 ms), which only differ in their duration. To morph them, we could generate five otherwise identical sounds that are 100, 200, 300, 400, and 500 ms long, giving us the originals and three equidistant intermediate forms - that is, if we assume that linear interpolation is the natural way to take perceptually equal steps between parameter values. In practice this assumption is often unwarranted. For example, the natural scale for pitch is logarithmic: the perceived distance between 100 Hz and 200 Hz is 12 semitones, while from 200 Hz to 300 Hz it is only 7 semitones. To make pitch values equidistant, we would need to think in terms of semitones, not Hz. For other soundgen parameters it is hard to make an educated guess about the natural scale, so the most appropriate interpolation rules remain obscure. For best results, morphing should be performed by hand, pre-testing each parameter of interest and creating the appropriate formulas for each morph. However, for a "quick fix" there is a built-in function, morph. It takes two calls to soundgen (as a character string or a list of arguments) and creates several morphs using linear interpolation for all parameters except pitch and formant frequencies, which are log-transformed prior to interpolation and then exponentiated to go back to Hz. The morphing algorithm can also deal with arbitrary contours, either by taking a weighted mean of each curve (method = 'smooth') or by attempting to match and morph individual anchors (method = 'perAnchor'):

```r
a = data.frame(time = c(0, .2, .9, 1), value = c(100, 110, 180, 110))
b = data.frame(time = c(0, .3, .5, .8, 1), value = c(300, 220, 190, 400, 350))
par(mfrow = c(1, 3))
plot(a, type = 'b', ylim = c(100, 400), main = 'Original curves')
points(b, type = 'b', col = 'blue')
m = soundgen:::morphDF(a, b, nMorphs = 15, method = 'smooth',
                       plot = TRUE, main = 'Morphing curves')
m = soundgen:::morphDF(a, b, nMorphs = 15, method = 'perAnchor',
                       plot = TRUE, main = 'Morphing anchors')
par(mfrow = c(1, 1))
```

Here is an example of morphing the default neutral [a] into a dog's bark:

```r
m = morph(formula1 = list(repeatBout = 2),
          # equivalently: formula1 = 'soundgen(repeatBout = 2)',
          formula2 = presets$Misc$Dog_bark,
          nMorphs = 5, playMorphs = playback)
# use $formulas to access formulas for each morph, $sounds for waveforms
# m$formulas[[4]]
# playme(m$sounds[[3]])
```

TIP: Morphing a completely unvoiced sound with a voiced sound is currently not implemented. Add a very quiet voiced component to avoid glitches. Also try to make formants and formantsNoise compatible in both formulas: either leave both NULL or specify both in the same way (e.g., with or without explicitly defined amplitudes and bandwidths).

# Matching an existing sound

When synthesizing a new sound with the function soundgen(), a serious challenge is to find the values of all its many arguments that will together produce the result you want. Below I discuss three methods for adjusting soundgen settings: (1) manual matching by ear, (2) matching by acoustic analysis, and (3) matching by formal optimization.

## Matching by ear

If the sound you are trying to create exists only in your imagination, there is nothing for it but to tinker with argument values until a satisfactory result is achieved. Even if you have an existing audio recording that you wish to duplicate, the fastest and surest way to find the appropriate soundgen settings - in my experience - is to do it manually, using soundgen_app() and/or typing and editing R scripts with calls to soundgen(). I prefer to work with scripts and suggest that you proceed as follows.

1. Open the target sound, if you have one, in an interactive audio editor of your choice. I mostly use Audacity, although Praat offers the nice feature of being able to click on the spectrogram and see the exact frequency of a particular spectral element.

2. Match the temporal parameters. If there are several stereotypical syllables, set repeatBout. If syllables are repetitive but not identical, with an overall drift of f0 and formants, set nSyl. Note that sylLen and pauseLen refer to the duration of voiced segments and the pauses between them - unvoiced segments do not count. If the syllables are very different, synthesize them one by one with separate calls to soundgen() and then concatenate them as described in the section "Combining two sounds". Biphonic sounds with more than one fundamental frequency can be synthesized separately and overlaid.

3. Match the fundamental frequency. No existing pitch tracker is reliable enough, so just find f0 manually using your ears and a narrow-band spectrogram (a window length of 40-50 ms is usually about right). Use as few pitch anchors as possible: pitchAnchors = 440 for flat intonation, pitchAnchors = c(440, 300) for a linear slide, pitchAnchors = c(300, 440, 300) for a rising-falling contour, or pitchAnchors = data.frame(time = c(0, .1, 1), value = c(300, 440, 300)) for more complex contours with values specified at arbitrary time points. For multiple syllables, describe how f0 changes across syllables using pitchAnchorsGlobal. Remember that you don't need to manually code every tiny fluctuation of f0: you can also add (regular) vibrato, (irregular) jitter with a large jitterLen, or increase the effect of temperature on f0 with tempEffects = list(pitchDriftDep = ..., pitchDriftFreq = ..., pitchAnchorsDep = ...).

4. Match the formants. No existing algorithm for finding formants even remotely approaches the sensitivity of human perception, so again, just do it manually. Often it may be tricky to find the formants by eyeballing the spectrogram, especially if the sound is short, tonal, and high-pitched. In the worst case, try a schwa with formants = NULL, vocalTract = my-best-guess-in-cm (for humans, vocal tract length is between 10 and 20 cm). If you can hear or see the first few formants, specify them using as few anchors as possible, always starting at F1. For example, for stationary F1-F3, type formants = c(600, 1700, 3000) (F4 and above will be added automatically based on the estimated vocal tract length); for a moving F1, type formants = list(f1 = c(500, 700), f2 = 1700, f3 = 3000); for more complicated cases, see the section on formants above. Remember that formant transitions apply to the entire bout, i.e., across multiple syllables if nSyl > 1. If formant tracks are roughly parallel (e.g., all formants descend together), it's easier to write stationary formants and add something like mouthAnchors = c(0.6, 0.4).

5. Match the nonlinear effects by adding subharmonics / sidebands, jitter, and shimmer. Don't forget to set nonlinBalance to a positive number.

6. Match the turbulent noise component by adjusting noiseAnchors. Often the formant structure of turbulent noise is similar enough to the voiced component to leave the default formantsNoise = NULL; if not, specify formantsNoise separately. A bit of breathing provides excellent glue between syllables - set the last value of noiseAnchors$time to more than sylLen to extend breathing beyond the voiced part.
7. Match the spectral envelope by adjusting rolloff and rolloffNoise. Plot the long-term average spectrum of the target and of the candidate sound using seewave::meanspec() and try to match the two spectra (see the sketch after this list). Don't start with this until you are satisfied that you have got the formants right, because the spectral slope depends strongly on formant frequencies.
8. Match the amplitude envelope with amplAnchors, attack with attackLen, and/or amplitude modulation with amDep = ..., amFreq = .... This is best done once you are happy with other settings, since amplitude envelope is affected by the chosen values of f0, formants, noise, and rolloff. For really sharp attack, reduce windowLength.
9. Adjust the amount of stochasticity by generating the sound repeatedly and varying temperature and tempEffects = list(...).
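For step 7, the comparison might look like this. This is only a sketch: the two soundgen() calls stand in for your actual target recording and current candidate, and a sampling rate of 16 kHz is assumed:

```r
target = soundgen(rolloff = -6)   # stand-in for the sound you are matching
cand = soundgen(rolloff = -12)    # current candidate settings

par(mfrow = c(1, 2))
seewave::meanspec(target, f = 16000, dB = 'max0')  # long-term average spectrum
seewave::meanspec(cand, f = 16000, dB = 'max0')
par(mfrow = c(1, 1))
```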

TIP: Every time you change something, call soundgen(...your-pars..., play = TRUE, plot = TRUE) to get immediate visual and auditory feedback.

## Matching by acoustic analysis

In addition to manual matching, there are two ways to find the optimal values of control parameters semi-automatically: (1) perform acoustic analysis of the target sound to guide the choice of soundgen settings, and (2) automatically optimize some soundgen settings to match the target. Below are some tools and tips for doing this.

DISCLAIMER: what follows is work in progress and not guaranteed to produce the desired results. Above all, don't expect a magic bullet that will completely solve the matching problem without any manual intervention.

The first thing you might want to do with your target audio recording is to analyze it acoustically and extract precise measurements of syllable number and duration, pitch contour, and formant structure. You can use any tool of your choice to do this, including soundgen’s functions segment and analyze, which are described in the vignette on acoustic analysis. Once you have the measurements, you can convert them into appropriate values of soundgen arguments. An even easier solution is to use the function matchPars without optimization (maxIter = 0), which will perform a quick acoustic analysis and translate the results into soundgen settings, as follows:

```r
target = soundgen(repeatBout = 3, sylLen = 120, pauseLen = 70,
                  pitchAnchors = c(300, 200),
                  rolloff = -5, play = playback)  # we hope to reproduce this sound
# playme(target)

m1 = matchPars(target = target,
               samplingRate = 16000,
               maxIter = 0)  # no optimization, only acoustic analysis
## [1] "Failed to improve fit to target! Try increasing maxIter."
# ignore the warning about failing to improve the fit: we don't want to optimize yet

# m1$pars contains a list of soundgen settings
cand1 = do.call(soundgen, c(m1$pars, list(play = playback, temperature = 0)))
```

Without optimization, we simply match soundgen parameters based on acoustic analysis. In particular, matchPars() calls segment() and analyze() to get some basic descriptives of the target sound and to choose the appropriate settings for soundgen based on these measurements. If you are very lucky, this might in fact accurately match the temporal structure, pitch, and (stationary) formants of your target. Most likely, it won’t. In particular, for animal vocalizations a better option is often to estimate the vocal tract length from the dispersion of a few consecutive formants you can identify on the spectrogram (use estimateVTL()) and set vocalTract = your_estimate, formants = NULL.
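For example, here is a sketch with made-up formant measurements, assuming you have read F1-F3 off the spectrogram:

```r
# hypothetical formant frequencies measured from a spectrogram, Hz
estimateVTL(formants = c(850, 2800, 4600))  # returns the estimated vocal tract length, cm
```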

At this point you can copy-paste your call to soundgen into the Shiny app and adjust these settings in an interactive environment, rather than from the console. For example, to use the parameters in m1$pars, type call('soundgen', m1$pars), remove the "list()" part from the output, and you have your formula:

```r
call('soundgen', m1$pars)
# copy-paste from the console and remove "list(...)" to get your call to soundgen():
# soundgen(samplingRate = 16000, nSyl = 3, sylLen = 79, pauseLen = 114,
#          pitchAnchors = list(time = c(0, 0.5, 1), value = c(274, 253, 216)),
#          formants = list(f1 = list(freq = 821, width = 122),
#                          f2 = list(freq = 1266, width = 36),
#                          f3 = list(freq = 2888, width = 117)))
```

Load this formula into the Shiny app. To do so, run soundgen_app(), click "Load new preset" on the right-hand side of the screen, copy-paste the formula above (no quotes), and click "Update sliders". If all goes well, all the settings should be updated, so that clicking "Generate" produces the same sound as cand1 above. Now you can tinker with the settings in the app, improving them further.

TIP: It can be very helpful to have the Shiny app running while also having access to the R console. Start two R sessions to achieve that.

## Matching by optimization

Let's assume that you have a working version of your candidate sound, which resembles the target in terms of its temporal structure, pitch contour, and perhaps even the formant structure. You can also add some non-tonal noise manually in the app, experiment with effects like subharmonics and jitter, and make other modifications. But the number of possible combinations of soundgen settings is enormous, making the process of matching a target sound very time-consuming. You can sometimes speed things up with formal optimization. The same function as above, matchPars, offers a simple way to optimize several parameters by randomly varying their values, generating the corresponding sound, and comparing it with the target. The currently implemented version uses simple hill climbing and is best regarded as experimental.

```r
m2 = matchPars(target = target,
               samplingRate = 16000,
               pars = 'rolloff',
               maxIter = 100)

# rolloff should be moving from default (-12) to target (-5):
sapply(m2$history, function(x) {
  paste('Rolloff:', round(x$pars$rolloff, 1),
        '; fit to target:', round(x$sim, 2))
})
cand2 = do.call(soundgen, c(m2$pars, list(play = playback, temperature = 0)))
```

# References

Fant, G. (1971). Acoustic theory of speech production: with calculations based on X-ray studies of Russian articulations (Vol. 2). Walter de Gruyter.

Fitch, W. T., Neubauer, J., & Herzel, H. (2002). Calls out of chaos: the adaptive significance of nonlinear phenomena in mammalian vocal production. Animal Behaviour, 63(3), 407-418.

Hawkins, S., & Stevens, K. N. (1985). Acoustic and perceptual correlates of the non-nasal–nasal distinction for vowels. The Journal of the Acoustical Society of America, 77(4), 1560-1575.

Johnson, K. (2011). Acoustic and auditory phonetics, 3rd ed. Wiley-Blackwell.

Khodai-Joopari, M., & Clermont, F. (2002). A comparative study of empirical formulae for estimating vowel-formant bandwidths. In Proceedings of the 9th Australian International Conference on Speech, Science, and Technology (pp. 130-135).

Klatt, D. H. (1980). Software for a cascade/parallel formant synthesizer. The Journal of the Acoustical Society of America, 67(3), 971-995.

Klatt, D. H., & Klatt, L. C. (1990). Analysis, synthesis, and perception of voice quality variations among female and male talkers. The Journal of the Acoustical Society of America, 87(2), 820-857.

Moore, R. K. (2016). A real-time parametric general-purpose mammalian vocal synthesiser. In INTERSPEECH (pp. 2636-2640).

Stevens, K. (2000). Acoustic phonetics. MIT Press.

Sueur, J. (Forthcoming). Sound in R. Springer.

Tappert, C. C., Martony, J., & Fant, G. (1963). Spectrum envelopes for synthetic vowels. Speech Transm. Lab. Q. Progr. Status Rep., 4, 2-6.

Wilden, I., Herzel, H., Peters, G., & Tembrock, G. (1998). Subharmonics, biphonation, and deterministic chaos in mammal vocalization. Bioacoustics, 9(3), 171-196.