There were only a few papers about lyrics at ISMIR this year, but some of them were really interesting.
Matt McVicar tested a rather profound hypothesis about the relationship between music and lyrics: that the common determining factor for both is the mood that the songwriter and composer hope to create together. Matt showed how principal axes apparently representing the valence (sad-happy) and arousal (calm-excited) scales commonly used to describe emotions fall directly out of a Canonical Correlation Analysis applied to lyrics and audio features for 120,000 songs. As Matt admitted in his presentation, the principal axes found by CCA in the audio space are strongly correlated with energy and loudness, so what's been found so far may simply reflect the association of happy, active lyrics with loud music and vice versa. But this is an elegant line of research that I'd enjoy seeing pursued in more depth.
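For the curious, here's a minimal sketch of what that kind of analysis looks like in code, using scikit-learn's CCA on stand-in feature matrices. This is my own illustration of the general technique, not McVicar's actual pipeline, and the random matrices here stand in for real lyric and audio features:

```python
# Fit CCA between lyric features and audio features, then inspect the
# leading canonical axes. With real data, the first axes are the
# candidates for valence- and arousal-like dimensions.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_songs = 1000

# Stand-in features: rows are songs (random placeholders, not real data).
lyric_feats = rng.random((n_songs, 50))   # e.g. TF-IDF weights for mood words
audio_feats = rng.random((n_songs, 10))   # e.g. loudness, tempo, energy

# Find paired projections of the two spaces with maximal correlation.
cca = CCA(n_components=2)
lyric_proj, audio_proj = cca.fit_transform(lyric_feats, audio_feats)

# Correlation between lyric and audio projections along each axis.
for i in range(2):
    r = np.corrcoef(lyric_proj[:, i], audio_proj[:, i])[0, 1]
    print(f"canonical axis {i}: correlation {r:.3f}")
```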
Xiao Hu from the University of Denver looked only at the lyrics of 2,700 songs from the MIREX Mood Tag Dataset, which carry mood annotations based on Last.fm tags. Her investigations concern the relationship between mood and creativity, with creativity defined in terms of simple textual features, mainly the size and diversity of the vocabulary used in a song. In a nutshell, it seems that there are more ways to feel sad than there are to feel happy. But maybe you knew that already.
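To give a concrete sense of how simple these textual features can be, here's a small sketch of vocabulary size and type-token ratio as crude proxies for lexical diversity. The tokenizer and the example lyric are my own, not taken from Hu's paper:

```python
# Basic vocabulary statistics for one song's lyrics.
import re

def lyric_features(lyrics: str) -> dict:
    """Return simple vocabulary-based features for a lyric string."""
    tokens = re.findall(r"[a-z']+", lyrics.lower())
    vocab = set(tokens)
    return {
        "n_tokens": len(tokens),       # total word count
        "vocab_size": len(vocab),      # distinct words used
        "type_token_ratio": len(vocab) / max(len(tokens), 1),  # diversity
    }

print(lyric_features("happy happy joy joy, happy happy joy joy"))
```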
Another take-home message for me was that even simple textual features may turn out to be pretty useful for some music classification and prediction tasks. Rudolf Mayer and Andreas Rauber of
TUW report
some fancy methods of combining features for genre classification. They see a hefty increase in accuracy when statistical features summarising the style of lyrics are used alongside audio features, presumably because musical genres have their own characteristic lyrical styles and poetic forms too.
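The simplest way to combine the two modalities is early fusion: concatenate the feature vectors and train one classifier. Here's a minimal sketch of that idea with stand-in data; it illustrates the general approach, not Mayer and Rauber's specific method or features (so don't expect the random placeholders below to show any real accuracy gain):

```python
# Compare audio-only features with audio + lyric-style features for
# genre classification, using simple feature concatenation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_songs, n_genres = 600, 4

# Stand-in feature matrices and labels (random placeholders).
audio = rng.random((n_songs, 30))        # e.g. timbre/rhythm descriptors
lyric_style = rng.random((n_songs, 12))  # e.g. rhyme, part-of-speech stats
genres = rng.integers(0, n_genres, n_songs)

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Early fusion: concatenate the two feature sets and compare.
for name, X in [("audio only", audio),
                ("audio + lyric style", np.hstack([audio, lyric_style]))]:
    acc = cross_val_score(clf, X, genres, cv=5).mean()
    print(f"{name}: {acc:.3f} accuracy")
```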