Transactions of the International Society for Music Information Retrieval (Mar 2022)
Steerable Music Generation which Satisfies Long-Range Dependency Constraints
Abstract
Although music is full of repetitive motifs and themes, artificially intelligent temporal sequence models have yet to demonstrate the ability to model or generate musical compositions that satisfy the steerable, long-range constraints needed to evoke such repetitions. Markovian approaches inherently assume a strictly limited range of memory, while neural approaches, despite recent advances in evoking long-range dependencies, remain largely unsteerable. More recent models attempt to evoke repetitive motifs by imposing unary constraints at intervals or by collating copies of musical segments. Although the results of these methods satisfy long-range dependencies, they come with significant, potentially prohibitive, sacrifices in the musical coherence of the generated composition or in the breadth of satisfying compositions the model can create. We present regular non-homogeneous Markov models as a solution to the long-range dependency problem, using relational automata to enforce binary constraints and thereby compose music with repeating motifs. Our solution preserves musical coherence (i.e., Markovian constraints) for the duration of the generated composition and significantly increases the range of satisfying compositions that can be generated.
Keywords