The Creative-AI (AI and the Artistic Imaginary – WASP-HS) and MUSAiC project teams at KTH kindly welcome you to the first seminar in our series “dialogues: probing the future of creative technology” on Thursday 31 March, 10:00 (sharp)–11:00.
In this seminar, we will talk about “Interaction with generative music frameworks” with our guests Dorien Herremans and Kıvanç Tatar. We start with short presentations by both guests (more info below), followed by a discussion.
The seminar will be held on Zoom (https://kth-se.zoom.us/j/67706212115).
We look forward to seeing you all!
On behalf of the project teams,
Andre Holzapfel & Bob Sturm
Read about AI and the Artistic Imaginary – the Creative-AI project
Read about the MUSAiC project
Talk abstracts and short biographies of the speakers:
Dorien Herremans: Controllable deep music generation with emotion
Abstract: In their more than 60-year history, music generation systems have never been more popular than today. Yet while the number of music AI startups is rising, generated music still has a few issues. Firstly, it is notoriously hard to enforce long-term structure (e.g. earworms) in the music. Secondly, making the systems controllable in terms of meta-attributes like emotion would make them practically useful for music producers. In this talk, I will discuss several deep learning-based controllable music generation systems that have been developed in our lab over the last few years. These include TensionVAE, a music generation system guided by tonal tension; MusicFaderNets, a variational autoencoder model that allows for controllable arousal; and a controllable seq2seq lead sheet generator built with Transformers. Finally, I will discuss some more recent projects by our AMAAI lab, including generating music that matches a video.
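As a rough illustration of the “fader” idea behind controllable generation (a minimal sketch in the spirit of, but not taken from, MusicFaderNets; the model, names, and dimensions below are invented for the example), a decoder can be conditioned on a scalar attribute such as arousal alongside its latent code:

    import torch
    import torch.nn as nn

    class ConditionalDecoder(nn.Module):
        """Toy decoder: a latent code z plus a scalar "fader" (e.g. arousal)
        are concatenated and mapped to a sequence of pitch logits."""
        def __init__(self, latent_dim=32, seq_len=16, vocab=128):
            super().__init__()
            self.seq_len, self.vocab = seq_len, vocab
            self.net = nn.Sequential(
                nn.Linear(latent_dim + 1, 256),
                nn.ReLU(),
                nn.Linear(256, seq_len * vocab),
            )

        def forward(self, z, fader):
            # fader has shape (batch, 1), e.g. arousal in [0, 1]
            h = torch.cat([z, fader], dim=-1)
            return self.net(h).view(-1, self.seq_len, self.vocab)

    decoder = ConditionalDecoder()
    z = torch.randn(1, 32)                     # one sample from the prior
    calm = decoder(z, torch.tensor([[0.1]]))   # same z, low arousal
    tense = decoder(z, torch.tensor([[0.9]]))  # same z, high arousal

During training the fader would be tied to an arousal label, so that at generation time moving it changes that one attribute while the latent code keeps the rest of the material fixed.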
Bio: Dorien Herremans is an Assistant Professor at the Singapore University of Technology and Design (SUTD), where she is also Director of Game Lab. At SUTD she teaches Computational Data Science, AI, and Applied Deep Learning. Before joining SUTD, she was a Marie Skłodowska-Curie Postdoctoral Fellow at the Centre for Digital Music at Queen Mary University of London. She received her Ph.D. in Applied Economics on the topic of Computer Generation and Classification of Music through Operations Research Methods, and graduated as a business engineer in management information systems at the University of Antwerp in 2005. After that, she worked as a Drupal consultant and was an IT lecturer at Les Roches University in Bluche, Switzerland. Dr. Herremans' research interests focus on AI for novel applications such as music and audio.
Kıvanç Tatar: Musical Artificial Intelligence Architectures with Unsupervised Learning in Improvisation, Audio-Visual Performance, Interactive Arts, Dance, and Live Coding
Abstract: A generalized conceptualization of music suggests that music is “nothing but organised sound”, involving multiple layers where any sound can be used to produce music, and strong connections exist between pitch, noise, timbre, and rhythm. This conceptualization points to two kinds of organization of sound: (1) organization in latent space, to relate one sound to another, and (2) organization in time, to model musical actions and form. This talk covers different Artificial Intelligence architectures that were developed from this generalized understanding of music. These architectures train on a dataset of audio recordings using unsupervised learning, which allows them to cover a wide range of aesthetic possibilities and to be incorporated into various musical practices. The example projects span musical agents in live performances of musical improvisation and audiovisual performance, interactive arts and virtual reality installations, music-dance experiments, and live coding approaches.
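To make the two kinds of organization concrete, here is a hypothetical minimal sketch (not code from any of Tatar's architectures; the feature data is synthetic): unsupervised clustering organizes sound frames in a feature/latent space, and a first-order Markov chain over the cluster labels organizes them in time:

    import numpy as np
    from sklearn.cluster import KMeans

    # Stand-in for audio features (e.g. MFCC frames of a recording)
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(1000, 13))

    # 1) Organization in latent space: cluster frames so that
    #    similar sounds share a label.
    labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(frames)

    # 2) Organization in time: count label-to-label transitions
    #    (add-one smoothing keeps every row a valid distribution).
    transitions = np.ones((8, 8))
    for a, b in zip(labels[:-1], labels[1:]):
        transitions[a, b] += 1
    transitions /= transitions.sum(axis=1, keepdims=True)

    # Generate a new sequence of sound labels by walking the chain;
    # each label would be rendered by playing a sound from its cluster.
    state, sequence = int(labels[0]), []
    for _ in range(32):
        state = int(rng.choice(8, p=transitions[state]))
        sequence.append(state)
    print(sequence)

An agent built this way inherits its aesthetics from whatever recordings it is trained on, which is what lets the same kind of architecture move between improvisation, installation, and live-coding contexts.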
Bio: Kıvanç Tatar works on advanced Artificial Intelligence in Arts and Music, active both as a researcher, with theoretical and technical contributions, and as an artistic practitioner, performing as an experimental musician and audiovisual artist, often in artistic collaborations. His research has expanded to multimodal applications that combine music with movement computation and visual arts, and his computational approaches have been integrated into musical performances, interactive artworks, and immersive environments, including virtual reality. Tatar has a dual educational background in music and technology. He received his PhD from Simon Fraser University in Canada in 2019 and started as Assistant Professor in Interactive AI in Music and Art at Chalmers in 2021, funded by a WASP-HS grant until 2026.