The MUSICA project has two main research areas. The first is music and natural language: we investigate computational models that facilitate human-machine conversations about music. The second is jazz and musical improvisation: we focus on how musicians communicate with each other through motifs and other musical features.
Music and Natural Language
Natural language has been only sparsely studied in musical contexts. We are interested in exploring this intersection both because of its potential for novel musician-facing interfaces and because it may offer insights into how language works in other spatial-temporal domains.
Composition by Conversation (CbC) is a type of interaction in which a human and a machine collaboratively compose music, primarily through natural language commands. The task involves modeling natural language for musical concepts, developing computational models of music that integrate well with natural language processing algorithms, and developing algorithms to query and manipulate scores based on potentially ambiguous or incomplete information. In our software, CbC manifests as a chat interface where users can both leverage the system's music generation abilities and request specific changes to the score.
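To make the command-to-score loop concrete, here is a minimal sketch in which one narrow command shape is parsed and applied as an edit to a toy score. The names (Note, Score, apply_command) and the command grammar are invented for illustration; they are not the actual CbC interface.

```python
import re
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int    # MIDI pitch number
    measure: int  # 1-based measure index

@dataclass
class Score:
    notes: list

def apply_command(score: Score, command: str) -> None:
    """Handle one narrow command shape: 'transpose measures M-N up/down K semitones'."""
    m = re.match(r"transpose measures (\d+)-(\d+) (up|down) (\d+) semitones", command)
    if not m:
        raise ValueError(f"unrecognized command: {command!r}")
    start, end = int(m.group(1)), int(m.group(2))
    delta = int(m.group(4)) * (1 if m.group(3) == "up" else -1)
    for note in score.notes:
        if start <= note.measure <= end:
            note.pitch += delta

score = Score([Note(60, 1), Note(64, 2), Note(67, 3)])
apply_command(score, "transpose measures 2-3 up 2 semitones")
print([n.pitch for n in score.notes])  # [60, 66, 69]
```

A real system replaces the regular expression with a full natural language parser and must cope with ambiguity (e.g., "make that part brighter"), but the query-then-edit shape of the interaction is the same.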

A major focus of our work in music and natural language has been the development of a collection of elementary composable ideas, or ECIs, for both musical and general concepts. ECIs are simple concepts that can be assembled hierarchically (composed) to represent more complex ones. They are used both in our parser and in our representations of musical scores.
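The following toy sketch illustrates the spirit of hierarchical composition: simple labeled ideas are nested to build up a richer musical concept. The class name, the concepts, and the labels are all invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class ECI:
    name: str
    children: list = field(default_factory=list)

    def compose(self, *parts: "ECI") -> "ECI":
        """Attach simpler ECIs as parts of this one; return self for chaining."""
        self.children.extend(parts)
        return self

    def describe(self, depth: int = 0) -> str:
        lines = ["  " * depth + self.name]
        for child in self.children:
            lines.append(child.describe(depth + 1))
        return "\n".join(lines)

# "an ascending major arpeggio in measure 3" built from simpler ideas
phrase = ECI("arpeggio").compose(
    ECI("direction: ascending"),
    ECI("quality: major"),
    ECI("location").compose(ECI("measure: 3")),
)
print(phrase.describe())
```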
Jazz and Musical Improvisation
Our work on jazz improvisation originally focused on trading fours, in which participants alternate improvising four-measure solo passages. In these practice sessions between two musicians, communication takes place through the creation and adoption of musical motifs. We have since generalized this work to other musical situations and are exploring other styles as well. The computational challenges in this domain include pattern detection (locating musical motifs), pattern-based generation, and making stylistically appropriate decisions.
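As a simple illustration of the pattern detection problem, one basic approach treats a motif as a recurring sequence of melodic intervals. The sketch below finds such repetitions in a melody given as MIDI pitches; it is a minimal stand-in for the idea, not our actual detection algorithm, which must also account for rhythm, transformation, and approximate matches.

```python
from collections import defaultdict

def find_motifs(pitches, length=3, min_count=2):
    """Return interval patterns of the given length that recur,
    mapped to the note indices where they start."""
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    seen = defaultdict(list)
    for i in range(len(intervals) - length + 1):
        pattern = tuple(intervals[i : i + length])
        seen[pattern].append(i)
    return {p: starts for p, starts in seen.items() if len(starts) >= min_count}

melody = [60, 62, 64, 67, 65, 60, 62, 64, 67, 69]
for pattern, starts in find_motifs(melody).items():
    print(pattern, "starts at note indices", starts)  # (2, 2, 3) at [0, 5]
```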
We have since expanded our work to interactive jazz in a much broader sense. This includes generating accompanying bass and harmony parts to provide harmonic and rhythmic context, and it extends to multiple styles and musical forms. In our online demo, users can set the structure of the piece, determining where computer-performed melodies and improvised solos fall relative to their own solo sections. All of the computer-generated parts involve real-time decision making, so the experience is a bit different each time a piece is performed.
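The sketch below conveys why repeated performances differ: each time through the form, the accompaniment chooses among several valid realizations of each measure. The chord progression, bass-line options, and MIDI roots are invented for illustration and do not reflect our actual generation models.

```python
import random

FORM = ["Cmaj7", "A7", "Dm7", "G7"]  # a simple four-bar progression
ROOTS = {"Cmaj7": 36, "A7": 33, "Dm7": 38, "G7": 31}  # MIDI bass roots

def walking_bass_measure(root: int) -> list:
    """Pick one of several plausible bass lines (as MIDI pitches) for this measure."""
    options = [
        [root, root + 4, root + 7, root + 9],   # arpeggiated
        [root, root + 2, root + 4, root + 5],   # scalar ascent
        [root, root - 1, root - 3, root - 5],   # chromatic descent
    ]
    return random.choice(options)

for chorus in range(2):  # two passes through the form differ from each other
    for chord in FORM:
        print(chord, walking_bass_measure(ROOTS[chord]))
```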
Our algorithms for generating jazz solos and accompaniment can also stand on their own. Below is an example of an algorithmically performed piece in which all parts are played and improvised by the computer using professional-quality virtual instruments.
Our strategy for generating music in real-time, interactive scenarios is also applicable to other domains. Recently, we applied it to interactive/adaptive soundtrack generation for a Unity-based game. You can read more about this project here, and you can also try the game yourself online.
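At its core, adaptive soundtrack generation maps game state to musical parameters in real time. The following language-agnostic sketch (written in Python for consistency with the examples above) uses invented state fields and mappings purely to illustrate that idea.

```python
def music_parameters(game_state: dict) -> dict:
    """Map game state to parameters a real-time music generator could consume."""
    intensity = min(1.0, game_state["enemies_nearby"] / 5)
    return {
        "tempo_bpm": 90 + int(60 * intensity),                  # faster when tense
        "density": "sparse" if intensity < 0.3 else "busy",     # more notes when tense
        "mode": "minor" if game_state["player_health"] < 0.5 else "major",
    }

print(music_parameters({"enemies_nearby": 4, "player_health": 0.8}))
# {'tempo_bpm': 138, 'density': 'busy', 'mode': 'major'}
```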