Generating smooth sine tone output on iOS
20 December 2018

It has been several years since I studied signal processing in depth. I’ve been working on a pet project recently and needed to generate a sine tone on iOS. I almost gave up and just dragged in the AudioKit framework but decided it would be more rewarding to try and create it myself.

So — as a net positive, I’m going to describe how I built the sine tone generator. There are probably some mistakes or inaccuracies in the mathematical notation or use of code, even though it works to my satisfaction.

For the problem statement: generate a 200Hz sinusoidal PCM tone.

Theory

Let’s start with a simple sine wave, defined by y = sin(x).

We can see this wave has an amplitude of 1, a frequency of 1/(2π) Hz (treating x as time in seconds), and a phase of 0. It’s an instance of the classic equation y(t) = A sin(2πft + φ), where A is the amplitude, f is the frequency in Hz, and φ is the phase.

Now, when considering a 200Hz tone, the frequency, or number of oscillations per second, becomes 200, giving y(t) = sin(2π · 200 · t). Note that we need to increase the number of points in the linearly spaced time vector to resolve the faster oscillation.

Great. In this figure, we show one second of the sine wave’s output. Typically, for a CD-quality PCM tone, we need to sample this one-second representation 44100 times. As the frequency of this wave is only 200 Hz, well below the Nyquist limit of 22050 Hz, we satisfy the sampling theorem.
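As a quick sketch of the sampling step (the variable names here are my own, not from the final listing), one second of the tone at CD quality is just the sine equation evaluated at each sample index:

```swift
import Foundation

// Sample one second of a 200 Hz sine at CD quality (44.1 kHz):
// y[n] = A * sin(2 * pi * f * n / Fs)
let sampleRate: Float = 44_100.0
let frequency: Float = 200.0
let amplitude: Float = 1.0

// One second of audio is exactly `sampleRate` samples.
let samples: [Float] = (0..<Int(sampleRate)).map { n in
    amplitude * sin(2.0 * .pi * frequency * Float(n) / sampleRate)
}
```

One full cycle of the 200 Hz tone spans 44100 / 200 = 220.5 samples, so each cycle is represented by plenty of points.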

Let’s zoom into the graph and see the sampling of the wave. (Strictly speaking, sampling discretises time; quantisation refers to discretising the amplitude of each sample.)

Usually, audio is placed into buffers that are then queued for playback. In this example, one buffer is filled and enqueued while the other plays, which provides smooth, gapless playback. The buffers are kept small to reduce memory impact.

Assuming each buffer is 512 samples, let’s see how the wave is partitioned after 0.05 seconds.

Here are the 5 buffers, drawn as different coloured lines, that segment 0.05 seconds of the sinusoidal tone.
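The buffer count follows directly from the numbers above; a small sketch of the arithmetic:

```swift
import Foundation

// Partition 0.05 s of audio at 44.1 kHz into 512-sample buffers.
let sampleRate = 44_100.0
let duration = 0.05
let samplesPerBuffer = 512

// 44100 * 0.05 = 2205 samples in total.
let totalSamples = Int((sampleRate * duration).rounded())

// Integer ceiling division: 2205 / 512 rounds up to 5 buffers,
// with the last buffer only partially filled.
let bufferCount = (totalSamples + samplesPerBuffer - 1) / samplesPerBuffer
```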

Code

Let’s try and build up to the above in a real code example. We’ll first build the skeleton in Swift Playgrounds to generate the tone and then we’ll optimise the execution.

We’ll use the AVAudio classes of the AVFoundation framework for the initial concept — in particular AVAudioEngine, AVAudioPlayerNode, and AVAudioPCMBuffer.

Let’s initialise an engine that we’ll use for playback.
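Something along these lines, where the variable names are illustrative:

```swift
import AVFoundation

// The engine hosts a graph of audio nodes and drives the output
// hardware. Referencing mainMixerNode lazily creates the default
// mixer-to-output connection for us.
let engine = AVAudioEngine()
let mixer = engine.mainMixerNode
```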

Next, we’ll create the audio buffer that we’ll use for the sine tone. We’ll also create a player node that we’ll use to pipe the audio to the engine and declare the format of the PCM buffer.
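A sketch of that setup (again, the names are mine rather than from the original listing):

```swift
import AVFoundation

let sampleRate = 44_100.0

// The player node feeds PCM buffers into the engine's graph.
let player = AVAudioPlayerNode()

// Standard format: deinterleaved 32-bit float PCM, stereo.
let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate,
                           channels: 2)!

// Capacity for one full second of audio; we'll scale this back later.
let buffer = AVAudioPCMBuffer(pcmFormat: format,
                              frameCapacity: AVAudioFrameCount(sampleRate))!

// Attach the player to the engine and connect it to the mixer.
let engine = AVAudioEngine()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
```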

Sorted! Now, let’s generate the 200Hz tone, taking into account the theory above. Initially we’ll use a really large buffer and scale it back later. Note, we’re being a bit naughty here and assuming all the allocation succeeds.
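Putting the theory together, this step might look like the following sketch. The force-unwrapped allocations are the “naughty” part; a single one-second buffer stands in for the really large buffer:

```swift
import AVFoundation

// Fill one large buffer with a 200 Hz sine: y[n] = A * sin(2πfn/Fs).
let frequency: Float = 200.0
let amplitude: Float = 0.2
let sampleRate = 44_100.0
let frameCount = AVAudioFrameCount(sampleRate)  // one second of audio

let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate,
                           channels: 1)!
// Assuming allocation succeeds, as noted above.
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
buffer.frameLength = frameCount

for n in 0..<Int(frameCount) {
    buffer.floatChannelData![0][n] =
        amplitude * sin(2.0 * .pi * frequency * Float(n) / Float(sampleRate))
}

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
try engine.start()
player.scheduleBuffer(buffer, at: nil)
player.play()
```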

Awesome. It sounds good, although it runs a bit slowly in Playgrounds. We can vectorise the equation later for better performance. Before we abstract the implementation into something reusable, let’s try using multiple buffers.

That was pretty simple: we just added another for loop around the buffer generation.
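A sketch of that multi-buffer loop, using the 512-sample buffers from the theory section. The important detail is using the absolute sample index so the phase stays continuous across buffer boundaries:

```swift
import AVFoundation

let frequency: Float = 200.0
let amplitude: Float = 0.2
let sampleRate = 44_100.0
let framesPerBuffer: AVAudioFrameCount = 512
let bufferCount = 5

let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate,
                           channels: 1)!
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
try engine.start()

for b in 0..<bufferCount {
    let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                  frameCapacity: framesPerBuffer)!
    buffer.frameLength = framesPerBuffer
    for n in 0..<Int(framesPerBuffer) {
        // Absolute sample index across all buffers, so the sine
        // continues seamlessly from one buffer to the next.
        let i = b * Int(framesPerBuffer) + n
        buffer.floatChannelData![0][n] =
            amplitude * sin(2.0 * .pi * frequency * Float(i) / Float(sampleRate))
    }
    player.scheduleBuffer(buffer, at: nil)
}
player.play()
```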

Now that we understand the serial flow, we can start building a class to simplify things.

This following code will play a tone given a certain frequency, phase, and amplitude. Here is the final code that generates a 200Hz tone for five seconds.

There’s more opportunity here to handle scheduling of multiple buffers, overlapping with system sounds, ramping of frequencies, etc., but this works as a good first pass.


// Antony Jepson
// A simple sine tone generator
// Developed in Xcode 10.1 for iOS 12

import Foundation
import AVFoundation

public struct Constants {
    static let pcmSampleRateFloat: Float = 44100.0
    static let pcmSampleRateFloat32: Float32 = 44100.0
    static let pcmSampleRateDouble: Double = 44100.0
    static let defaultBufferLengthInSeconds: Double = 0.2
}

public enum ToneTypes {
    case sine
    case cosine
}

// Tone: representation of a sinusoidal wave
public struct Tone {
    private let twoPi: Float32 = Float32.pi * 2
    public var ToneType: ToneTypes = .sine
    public var frequency: Float32 = 100.0
    public var amplitude: Float32 = 0.2
    public var phase: Float32 = 0.0

    // Note: fillBuffer() below pre-scales `frequency` to f * n / Fs
    // for sample index n, so eval() yields A * sin(2π * f * n / Fs + φ).
    public func eval() -> Float32 {
        switch ToneType {
        case .sine:
            return amplitude * sin(twoPi * frequency + phase)
        case .cosine:
            return amplitude * cos(twoPi * frequency + phase)
        }
    }
}

public class ToneGenerator {
    private let audioBufferSize: AVAudioFrameCount = AVAudioFrameCount(Constants.pcmSampleRateDouble * Constants.defaultBufferLengthInSeconds)
    private let audioFormat: AVAudioFormat = AVAudioFormat(standardFormatWithSampleRate: Constants.pcmSampleRateDouble, channels: 2)!
    private var eng: AVAudioEngine = AVAudioEngine()
    private var pn: AVAudioPlayerNode = AVAudioPlayerNode()
    private var ab: AVAudioPCMBuffer
    private var isPlaying: Bool = false
    private var dq: DispatchQueue = DispatchQueue(label: "ToneGenerator")
    private var tone: Tone

    init(tone: Tone) {
        self.tone = tone
        ab = AVAudioPCMBuffer(pcmFormat: audioFormat,
                              frameCapacity: audioBufferSize)!
        eng.attach(pn)
        eng.connect(pn, to: eng.mainMixerNode, format: audioFormat)

        do {
            try eng.start()
        } catch {
            print("AVAudioEngine didn't start.")
        }
    }

    private func fillBuffer(_ buffer: AVAudioPCMBuffer) -> Void {
        // Sample indices 0, 1, ..., audioBufferSize - 1.
        var initialisedBuffer: [Float32] = Array(
            stride(from: 0.0 as Float32,
                   through: Float32(self.audioBufferSize - 1),
                   by: 1.0 as Float32)
        )

        initialisedBuffer = initialisedBuffer.map {
            (sampleSeekTime) -> Float32 in
            return Tone(ToneType: self.tone.ToneType,
                        frequency: self.tone.frequency / Constants.pcmSampleRateFloat32 * sampleSeekTime,
                        amplitude: self.tone.amplitude,
                        phase: self.tone.phase).eval()
        }

        buffer.frameLength = self.audioBufferSize
        // Copy the same samples into both stereo channels.
        buffer.floatChannelData![0].initialize(from: &initialisedBuffer,
                                               count: Int(self.audioBufferSize))
        buffer.floatChannelData![1].initialize(from: &initialisedBuffer,
                                               count: Int(self.audioBufferSize))
    }

    private func scheduleLoopingBuffer(_ buffer: AVAudioPCMBuffer) -> Void {
        pn.scheduleBuffer(buffer, at: nil, options: AVAudioPlayerNodeBufferOptions.loops) {
            // code to execute upon completion
        }
    }

    private func scheduleBuffer(_ buffer: AVAudioPCMBuffer) -> Void {
        pn.scheduleBuffer(buffer) {
            // code to execute upon completion
        }
    }

    public func playLoop() {
        dq.async {
            self.fillBuffer(self.ab)
            self.scheduleLoopingBuffer(self.ab)
            self.pn.play()
        }
    }

    public func playSingle() {
        dq.async {
            self.fillBuffer(self.ab)
            self.scheduleBuffer(self.ab)
            self.pn.play()
        }
    }

    public func stop() {
        pn.stop()
        pn.reset()
    }

    deinit {
        eng.stop()
    }
}

let tone: Tone = Tone(ToneType: .sine,
                      frequency: 200,
                      amplitude: 0.1,
                      phase: 0.0)

let toneGenerator: ToneGenerator = ToneGenerator(tone: tone)

toneGenerator.playLoop()
sleep(5)
toneGenerator.stop()