Music and morality
We often like songs whose lyrics align with our empathy levels and moral needs, and enable us to express our social values. Because songs have propositional content, the role of the music is to clarify or amplify – to guide – the intended emotional response to the moral stance of the sung words. Computational experiments with 200 English song lyrics, spanning a balanced mix of genres with titles from the 1960s to the 2010s, showed that certain timbral, harmonic and melodic features align with specific lyrical moral themes. Songs whose lyrics express care or kindness featured clearer melodic lines and smoother, more harmonically stable timbral structures (see Figure 1, top panel). In contrast, lyrics associated with harm or aggression were more often accompanied by rougher or noisier timbres and wider intervallic leaps, that is, larger jumps between successive pitches (see Figure 2). Psychologically, wide pitch ranges tend to be strongly associated with high-arousal emotions. Importantly, these associations could be identified from the sound of the music alone: lyrics were annotated in isolation, without the accompanying music, and machine learning models were then trained only on the audio content of the songs to predict those lyric annotations.
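As an illustration of that last step, here is a minimal Python sketch of such an audio-only prediction experiment – not the authors' actual pipeline. It assumes librosa for feature extraction and scikit-learn for classification; the file names, label encoding and choice of logistic regression are all placeholders.

    # Sketch: predict lyric-derived moral labels from audio alone.
    # File paths and the binary label encoding below are hypothetical.
    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def audio_features(path):
        """Summarise timbral, harmonic and melodic properties of one track."""
        y, sr = librosa.load(path, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
        chroma = librosa.feature.chroma_cqt(y=y, sr=sr)           # harmony
        contrast = librosa.feature.spectral_contrast(y=y, sr=sr)  # spectral smoothness vs roughness
        feats = np.vstack([mfcc, chroma, contrast])
        # Mean and standard deviation over time give one fixed-length vector per song.
        return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])

    # 'tracks' stands in for the full corpus; 'labels' are moral themes
    # annotated from the lyrics ALONE (e.g. 1 = care/kindness, 0 = harm).
    tracks = ["song_001.wav", "song_002.wav"]  # ... one entry per song
    labels = [1, 0]

    X = np.array([audio_features(p) for p in tracks])
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    # Cross-validated accuracy tests whether sound alone predicts the lyric labels.
    print(cross_val_score(clf, X, labels, cv=5).mean())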
Music and gender
A parallel story unfolds in a different context: toy TV advertisements for children. Here music also acts as an emotional guide, this time alongside images and narration, reinforcing multimodal associations between sound and gendered meaning to signal who (which gender) a toy is ‘for’. An extensive analysis of more than 600 toy commercials aired in the UK between 2012 and 2022 showed that ads targeted at girls used music that was rhythmically simpler and harmonically clearer, with a smoother spectrum and higher tonal key clarity. By contrast, commercials aimed at boys were typically louder and more timbrally rough, with higher spectral entropy and noisiness. Mixed-audience ads fell somewhere in between but often leaned closer to the ‘masculine’ musical profile. These differences are not subtle: machine learning models predicted the gender target of a toy commercial (determined by the gender of the majority of actors, after accounting for tokenism) with high accuracy using musical audio features alone. A similarly strong polarisation emerged in perceptual ratings of the soundtracks by 152 musically trained listeners: ads targeting boys were perceived as more ‘electric’ than ‘acoustic’, more distorted, louder and ‘punchier’, with stronger beats, higher rhythmic complexity and a less clear melodic contour than those targeting girls.
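The descriptor-level contrasts can be illustrated in the same way. The sketch below, again hypothetical rather than the study's code, computes a loudness proxy (RMS energy), spectral flatness as a noisiness measure, and spectral entropy for each soundtrack, then compares the two target groups with a simple two-sample test; file paths and group sizes are placeholders.

    # Sketch: compare audio descriptors between gender-targeted ad groups.
    # File paths are placeholders; descriptors approximate those named above.
    import numpy as np
    import librosa
    from scipy import stats

    def ad_descriptors(path):
        """Loudness proxy, noisiness and spectral entropy for one soundtrack."""
        y, sr = librosa.load(path, mono=True)
        rms = librosa.feature.rms(y=y).mean()                     # loudness proxy
        flatness = librosa.feature.spectral_flatness(y=y).mean()  # noisiness
        S = np.abs(librosa.stft(y)) ** 2
        p = S / (S.sum(axis=0, keepdims=True) + 1e-12)            # per-frame spectral distribution
        entropy = float((-(p * np.log2(p + 1e-12)).sum(axis=0)).mean())
        return np.array([rms, flatness, entropy])

    girl_ads = np.array([ad_descriptors(p) for p in ["ad_g1.wav", "ad_g2.wav"]])
    boy_ads = np.array([ad_descriptors(p) for p in ["ad_b1.wav", "ad_b2.wav"]])

    # Two-sample test per descriptor: do boy-targeted ads score higher?
    for i, name in enumerate(["rms", "flatness", "entropy"]):
        t, pval = stats.ttest_ind(boy_ads[:, i], girl_ads[:, i])
        print(f"{name}: t = {t:.2f}, p = {pval:.3f}")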
Algorithms and echo chambers
Viewed together, these findings reveal recurring cultural parallels between how music is used to encode moral meaning and how it is used to encode gender: musical consonance and smoothness signal empathy and femininity; dissonance and roughness signal aggression and masculinity. Music thus becomes a shared expressive medium through which different worldviews are mapped onto the same psychoacoustical cues. This makes music a powerful site for cultural expression and influence – especially when access to it is increasingly shaped by algorithmically generated recommendations. Music streaming algorithms influence listening habits and create echo chambers at a scale never seen before, raising urgent questions about social progress in an era of AI-driven cultural content recommendations, with implications for entertainment, wellbeing and communication campaigns.
The studies described in this article are part of the PhD work of Vjosa Preniqi and Luca Marinelli, carried out under the supervision of the author, Charalampos Saitis.
This article was published in the Institute of Acoustics' Bulletin of March/April 2026.