I posted a long time ago about a study that tried to apply both linguistic and musical analysis to the composer Leos Janacek’s notations of speech melodies. Janacek transcribed the speech of those around him for some 30 years, at a time when prosody was barely even considered by linguists. The field itself was still relatively young, and it seems prosody remains one of the more poorly understood aspects of phonetics even today.

Initially, this interested me greatly, since the intersection of music and linguistics is what I plan on focusing my studies on, but halfway through reading the study I was sort of wishing it would just end. The thing is, this wasn’t so much a scientific study as a whimsical look at Janacek’s pet habit. Occasionally, Jonathan Secora Pearl, the author, compares what Janacek notated to how the phrase in question might actually be said based on our modern understanding of Czech prosody, but more often than not he simply describes what was notated. These descriptions are complete with tonal analyses, as if the transcriptions were literal musical scores in A minor, or whatever.

What was more troubling was that, even though Pearl acknowledges multiple times that we have no recordings of what Janacek heard to check the accuracy of the transcriptions, he still attempts to draw conclusions from them. At one point, he describes an oddly placed rest in the middle of a phrase, stating that such a pause could actually happen in speech but would be very difficult to notice. This was meant as a remark on Janacek’s keen ear but, really, we don’t know what Janacek was actually hearing. Even if he wrote down something entirely possible, even common, we don’t know whether he notated the phrase accurately or just coincidentally notated something plausible.

The paper reads as if the author is desperately searching for ways to connect music and linguistics via Janacek’s speech melodies but, ultimately, none of the attempts holds up, precisely because there’s no way to be certain of the accuracy of the transcriptions. Maybe my expectations were too high, since I too would’ve liked to connect the two fields, but it seems this is the wrong way to go about it.

I’m still hopeful, though. My own attempt at analyzing speech through software was fairly eye-opening. One thing I’ve done is take my own speech and convert the first three formants of each vowel into musical pitches to create chords. The results were pretty dissonant for the most part, or simply full of octaves. I was hoping they would line up with chords found in tonal harmony in a fairly regular way, but this doesn’t seem to be the case. Of course, I also used an equal-tempered tuning system for reference, which is probably not the best way to do this. I’ll be reworking the comparison using just intonation soon enough to see if the results stay the same and, either way, I think I might just make some music out of the chords I do get. Because, ya know, why not?
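For anyone curious what that conversion step looks like, here’s a minimal sketch of the formant-to-pitch mapping, assuming the formant frequencies have already been measured elsewhere (say, in Praat). The frequency values and function names below are just illustrative, not the actual tooling I used.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_nearest_midi(freq_hz: float, a4_hz: float = 440.0) -> tuple[int, float]:
    """Map a frequency to the nearest 12-TET MIDI note and the rounding error in cents."""
    midi_exact = 69 + 12 * math.log2(freq_hz / a4_hz)
    midi_nearest = round(midi_exact)
    cents_off = (midi_exact - midi_nearest) * 100
    return midi_nearest, cents_off

def midi_to_name(midi: int) -> str:
    """Turn a MIDI number into a pitch name like 'F#5'."""
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

# Hypothetical formant values (Hz) for one [a]-like vowel;
# real values would come from an analysis tool such as Praat.
formants = [730.0, 1090.0, 2440.0]  # F1, F2, F3

chord = []
for f in formants:
    midi, cents = freq_to_nearest_midi(f)
    chord.append(midi_to_name(midi))
    print(f"{f:7.1f} Hz -> {midi_to_name(midi):4s} ({cents:+.0f} cents)")

print("Chord:", " ".join(chord))
```

Swapping in just intonation would mostly mean replacing the log-of-frequency rounding with a lookup against pure-ratio intervals above some chosen reference pitch; the rest of the pipeline stays the same.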