Meet Alexander Lerch!
Alexander Lerch is Assistant Professor at the Georgia Tech Center for Music Technology.
He works on the design of new digital signal processing algorithms for music analysis and synthesis. Lerch studied Electrical Engineering at the Technical University of Berlin and Tonmeister at the University of Arts Berlin and received his Diplom-Ingenieur degree and his PhD in 2000 and 2008, respectively. He is co-founder of the company zplane.development – a research-driven technology provider for the music industry. His book “An Introduction to Audio Content Analysis: Applications in Signal Processing and Music Informatics” was published by Wiley/IEEE Press in 2012.
Were you interested in both music and engineering as a child? Did one lead to the other?
As a kid, my main interest was in making music. Computers held a certain fascination (at that time, it was not yet standard for every household, let alone every person, to have one), and I wrote a few small programs, but I wouldn’t say I spent a lot of time at the computer. Later in high school, however, I started to ask questions that relate music to engineering: How do acoustic and electric instruments produce sound? How can I record an instrument and get the sound I want? How do microphones and speakers work?
I started with Electrical Engineering mostly because, at that time, the EE programs offered the most advanced classes in acoustics and sound. But I have to admit I wasn’t very happy with the focus of my undergraduate studies. I did fine, but I was bored by the subjects taught and wanted to learn something more related to audio and sound. I decided to apply to the University of Arts for their Tonmeister program. Tonmeister is a very specific German degree, probably best translated as “music producer” – classes focus on music history, theory, performance, and ear training. But just when I started at the University of Arts, the EE classes suddenly became quite interesting as well, with topics like signal processing, signal theory, and acoustics; I ended up studying at both universities simultaneously.
How has the interaction between music and engineering changed since you’ve started?
In the past, technology was used to automate processes or to increase audio quality, and it was often complicated to use correctly. Nowadays, engineers are beginning to teach computers to listen to and understand music; the computer interacts with listeners and musicians on a musical rather than a technical level. It recommends music matching your current mood, it suggests the pitches for a vocal arrangement, it extracts a lead sheet from a recording, and it accompanies live performances, to name just a few examples.
What do you love about the combination of music and engineering?
It is great to work in interdisciplinary teams with musicians, developers, engineers, and others. It is great to design tools that musicians and producers can use to create new music.
What challenges do you face working in the field you are in?
One of the characteristics of music is that there is no absolute or objectively measurable scale of quality. What is the right sound? What is good music? Which music performance is better? What emotions are evoked when you listen to specific songs? These are examples of things that are highly subjective and subject to change. Deriving models that are both general and adaptable to the individual user, and evaluating these models in a meaningful way, is challenging.
Over the last several decades, how we listen to music has changed considerably…from records to tape, to CDs to mp3 players, and digital distribution – what do you think will come next?
Listening habits have indeed changed; nowadays, we have access to any kind of music anytime and anywhere. What I hope to see in the future are more creative listening environments: What if you could speed up and slow down music based on your work-out pace? What if you like a song but want to hear it with the drum track from another song? What if you, even without playing a musical instrument, could easily create a new and unique piece of music?
I work on two facets of music technology: algorithms to change or produce music, and algorithms to analyze music. The first category encompasses audio effects and related processes; examples are algorithms that create a whole chorus from one vocal recording, automatically correct a singer’s intonation, or synchronize the tempo of two songs. The analysis of music – a research field referred to as Music Information Retrieval – aims to extract information such as tempo, chords, and musical mood from the audio signal and to use this information for, e.g., music recommendation systems and intelligent music software in general.
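To make the tempo-extraction idea concrete, here is a toy sketch (not any production Music Information Retrieval algorithm, and not taken from Lerch’s work): periodic bursts of energy in a recording reveal the beat period, which an autocorrelation over a plausible BPM range can pick out. The hop size, BPM bounds, and function name are all illustrative assumptions.

```python
def estimate_tempo(x, sample_rate, hop=512):
    """Toy tempo estimator: autocorrelate an onset-strength envelope.

    A sketch of the core idea only -- real systems use far more robust
    onset detection and periodicity analysis.
    """
    # Short-time energy envelope, one value per hop-sized frame.
    env = [sum(s * s for s in x[i:i + hop]) for i in range(0, len(x) - hop, hop)]
    # Onset strength: keep only increases in energy (note attacks).
    onset = [max(0.0, env[i + 1] - env[i]) for i in range(len(env) - 1)]
    frame_rate = sample_rate / hop
    # Search beat periods corresponding to 60-200 BPM.
    lo = max(1, int(frame_rate * 60.0 / 200.0))
    hi = int(frame_rate * 60.0 / 60.0)
    best_lag, best_r = lo, -1.0
    for lag in range(lo, hi + 1):
        r = sum(onset[i] * onset[i + lag] for i in range(len(onset) - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return 60.0 * frame_rate / best_lag  # beat period in frames -> BPM
```

Feeding it a click track with clicks every half second should yield an estimate near 120 BPM (up to the coarse frame resolution).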
Were there other career paths you thought about following? What impacted your decision?
It was always very clear to me that I wanted to work in a field that has at least a vague relation to music. In school I was very interested in musical acoustics and room acoustics. After university, I could have easily made the choice to be an audio engineer or music producer. But, and I know that sounds a bit geeky, the final decision to focus on signal processing and music technology in my career was because I found the work on music processing algorithms to be more creative and more fun.
What programs or activities would you recommend to pre-university students wanting to know more about how engineering and music interact?
Get your hands dirty. Play around with DJ software, digital audio workstations, and audio plugins on your computer. Make music, record it, and share it with your friends. If you are into programming, try out tools that let you create your own processors (Max/MSP, PD, Reaktor, etc.). If you are a seasoned programmer already, consider developing your first audio plugin (e.g., a VST or AudioUnit). There are tutorials that can get you started with something simple such as a delay effect. You can explore the world of music technology and software on many different levels, and each level gives you plenty of creative options.
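Before tackling a real plugin API, the delay effect mentioned above can be sketched in a few lines of plain Python. This is a hypothetical minimal feedback delay; the function and parameter names and the default values are illustrative, not taken from any plugin SDK or tutorial:

```python
def feedback_delay(x, sample_rate=44100, delay_ms=300.0, feedback=0.4, mix=0.5):
    """Minimal echo: the wet signal repeats the input after `delay_ms`,
    with each successive repetition scaled by `feedback`."""
    d = max(1, int(sample_rate * delay_ms / 1000.0))  # delay in samples
    wet = [0.0] * len(x)
    for n in range(d, len(x)):
        # Delayed input plus a fed-back copy of the earlier wet signal.
        wet[n] = x[n - d] + feedback * wet[n - d]
    # Blend the dry input with the wet echoes.
    return [xn + mix * wn for xn, wn in zip(x, wet)]
```

In a real VST or AudioUnit, the same recurrence would run block by block inside the plugin’s process callback, with a circular buffer carrying the delay-line state across blocks.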
How long have you been a member of IEEE? What prompted you to join?
I have been a member of the IEEE since I was a graduate student. The main reason for joining IEEE was the possibility to get in touch and to interact with a huge international network of professionals.
It is terrific to see musicians and producers use the tools you helped develop to create a musical work or sound effect that is new and unexpected. It is rewarding to see your algorithms used by millions of people worldwide, even if they don’t know you were involved.
Can you share a story about how the work you do has impacted the world?
Creating tools for music production is fun and rewarding, but although music plays an important role in our everyday life, the impact on the world will only be indirect and hard to measure. But you can make a difference in related areas. One small example: we looked into using our music analysis in language training and found that visualizing the pitch, timbre, and tempo of a student’s speech in comparison with the teacher’s greatly improved the student’s pronunciation and the natural melody of their speech compared to traditional learning methods.
What advice would you give a pre-university student who is interested in blending a career in engineering and music or entertainment?
There are many ways of combining engineering and music. To name a few examples: Mechanical Engineering might open your way to room acoustics, bio-acoustics, and related fields; Electrical Engineering can give you the fundamentals to develop new hardware and software for music production; and Audio Engineering teaches you how to produce a record. There is a growing number of Music Technology Bachelor’s and Master’s programs with varying focus – some target the creation of music with technology, while others put more weight on the technological side. It should be relatively easy to find a program that fits your goals. But my main advice is: try to identify a career path that you feel passionate about.