*Praat* generally does an excellent job of identifying formants, in contrast to my own feeble attempts.

Nevertheless, I keep getting sidetracked on the task of correctly identifying formants. My current method is fairly simple: I find all the peaks in a fairly coarse DFT and sort them by frequency. A “peak” here is just a local maximum — a point whose immediate neighbors are both below it.
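That peak-picking step can be sketched in a few lines (Python here for illustration; the function name and test values are mine, not code from this project):

```python
# A bin counts as a peak when both neighbors are strictly lower.
# Returns bin indices, which are already in frequency order.
def find_peaks(mags):
    peaks = []
    for i in range(1, len(mags) - 1):
        if mags[i - 1] < mags[i] and mags[i + 1] < mags[i]:
            peaks.append(i)
    return peaks

print(find_peaks([0, 3, 1, 2, 5, 2, 0]))  # [1, 4]
```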

The results are fairly good, but there are occasional spots where the formants “jump around”, so I need to add a tracking mechanism of some sort, which I haven’t gotten around to writing.
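One common way to do that tracking, purely as a hedged sketch (this is not code from this project, and the threshold is an arbitrary guess): match each frame’s candidate formants to the nearest formant in the previous frame, and reject candidates that jump too far.

```python
# Keep only candidates within max_jump Hz of some formant in the
# previous frame; a first frame (empty prev) passes everything.
def track(prev, curr, max_jump=300.0):
    kept = []
    for f in curr:
        if not prev:
            kept.append(f)
            continue
        nearest = min(prev, key=lambda p: abs(p - f))
        if abs(nearest - f) <= max_jump:
            kept.append(f)
    return kept

# A spurious 3000 Hz candidate is dropped; the rest survive.
print(track([500.0, 1500.0], [520.0, 1480.0, 3000.0]))  # [520.0, 1480.0]
```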

For fun, I decided to overlay the formants on a display of the FFT, to see how accurate it was.

I’d played around with rendering an FFT before, but could never render one in a way that resembled the sonograms in other programs.

After convincing myself that the code actually was working, I finally settled on using the *square root* of the magnitude, along with a fudge factor that reduces the amplitude of lower frequencies. The pseudo-color algorithm was reverse-engineered from an image I found on Wikipedia:

    local function lerp(a, b, t)
        return a + (b - a) * t
    end

    -- n is the normalized magnitude, in the range [0, 1]
    local r = math.floor(lerp(19, 252, n))
    local b = math.floor(lerp(108, 52, n))
    local g = 0
    if n > 0.5 then
        g = math.floor(lerp(19, 252, (n * 2) - 1))
    end

The result was slow, but pretty:

Estimated formants overlaid onto an FFT of the wave.

I’ve added cosine windowing, and that cleaned up the results.
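A raised-cosine (Hann) window is one common form of cosine windowing; applied to each frame before the DFT, it tapers the edges to zero and reduces spectral leakage. A minimal sketch (not the project’s code):

```python
import math

# Hann window of a given size: 0.5 - 0.5*cos(2*pi*i/(size-1)),
# zero at both ends and 1.0 in the middle.
def hann(size):
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (size - 1))
            for i in range(size)]

print([round(x, 3) for x in hann(5)])  # [0.0, 0.5, 1.0, 0.5, 0.0]
```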

This version of the code merely identifies peaks; adding peak tracking seems fairly doable.

So I think it’s worth taking a bit more time to see if I can roll this code myself, instead of relying on *Praat*.