Machine-learning models have been successfully applied to music in a variety of forms, including audio classification, recognition, and synthesis. The capability of algorithms to learn complex musical elements allows composers to investigate the development of their aesthetic more deeply. Coupled with the history of interdisciplinary solutions found in computer music and systems aesthetics, this capability has led to an exploration of the integration of machine learning and music composition. Composition systems that take advantage of this integration have the opportunity to be connected with algorithms in theory, application, and art.
In my systems, conditional restricted Boltzmann machines (CRBMs) synthesize musical timbre by learning autoregressive connections among the current output, an abstracted non-linear hidden feature layer, and past outputs. This provides a creative space where composers can synthesize audio spectra in collaboration with machines, defining novel creative systems that explore compositional material in an abstract, non-linear paradigm.
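The conditioning described above can be sketched as follows. In the standard formulation of the CRBM (Taylor and Hinton's model), a window of past output frames shifts the visible and hidden biases through autoregressive weight matrices, and the current frame then interacts with the hidden feature layer as in an ordinary RBM. This is a minimal illustrative sketch with Gaussian visible units, not the author's implementation; all class and parameter names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CRBM:
    """Sketch of a conditional RBM with Gaussian visible units.

    Past output frames condition the current frame through the
    autoregressive weights A (past -> visible) and B (past -> hidden);
    W couples the current visible frame to the hidden feature layer.
    All names here are illustrative, not the author's code.
    """

    def __init__(self, n_vis, n_hid, n_past, seed=0):
        self.rng = np.random.default_rng(seed)
        scale = 0.01
        self.W = self.rng.normal(0.0, scale, (n_vis, n_hid))
        self.A = self.rng.normal(0.0, scale, (n_vis * n_past, n_vis))
        self.B = self.rng.normal(0.0, scale, (n_vis * n_past, n_hid))
        self.bv = np.zeros(n_vis)
        self.bh = np.zeros(n_hid)

    def _dynamic_biases(self, past):
        # The flattened past window shifts both bias vectors.
        return self.bv + past @ self.A, self.bh + past @ self.B

    def cd1(self, v, past, lr=1e-3):
        """One contrastive-divergence (CD-1) update on a single frame."""
        bv, bh = self._dynamic_biases(past)
        h0 = sigmoid(v @ self.W + bh)                       # positive phase
        h_samp = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = h_samp @ self.W.T + bv                         # mean reconstruction
        h1 = sigmoid(v1 @ self.W + bh)                      # negative phase
        self.W += lr * (np.outer(v, h0) - np.outer(v1, h1))
        self.A += lr * np.outer(past, v - v1)
        self.B += lr * np.outer(past, h0 - h1)
        self.bv += lr * (v - v1)
        self.bh += lr * (h0 - h1)
        return v1

    def generate(self, past, n_gibbs=20):
        """Sample the next frame conditioned on the past window."""
        bv, bh = self._dynamic_biases(past)
        v = bv.copy()
        for _ in range(n_gibbs):
            h_prob = sigmoid(v @ self.W + bh)
            h = (self.rng.random(h_prob.shape) < h_prob).astype(float)
            v = h @ self.W.T + bv
        return v
```

Iterating `generate` and appending each new frame to the past window yields a frame-by-frame synthesis loop, which is the basic generative use of such a model.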
By implementing CRBMs in timbral-synthesis composition systems, I provide concrete evidence that such an integration advances art as well as machine learning. I demonstrate this in a variety of audio synthesis experiments. I begin by accurately synthesizing specific instrumental timbres and different musical pitches, demonstrating the aural capabilities of using the algorithms directly. I then build on these experiments, creating a variety of compositional utilities that provide the composer with a rich palette for provoking aesthetic introspection. These synthesis and compositional utilities are then applied in artistic contexts, where the algorithms themselves are manipulated and explored as a means to realize the works.
I validate the capability of these models by performing several audio synthesis experiments, comparing the performance of two algorithmic structures that provide the basis for my composition systems: a single-layer conditional restricted Boltzmann machine (CRBM) and a single-layer factored conditional restricted Boltzmann machine (FCRBM).
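The two structures differ in how the conditioning drives the hidden units. In the plain CRBM the past window contributes additive bias shifts, while the FCRBM (following Taylor and Hinton's factored formulation) replaces the full three-way interaction tensor with a product of low-rank factor matrices, so the context gates the visible-hidden coupling multiplicatively. A minimal sketch of the factored hidden activation, with all matrix names and sizes hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fcrbm_hidden(v, z, Wv, Wz, Wh, bh):
    """Hidden activation of a factored CRBM (illustrative sketch).

    The three-way weight tensor W[i, j, k] coupling visible unit i,
    hidden unit j, and context unit k is approximated by the factor
    matrices Wv, Wh, Wz, so the context z (e.g. the flattened past
    window) gates the visible-hidden interaction multiplicatively
    rather than adding a bias shift as in the plain CRBM.
    """
    factors = (v @ Wv) * (z @ Wz)   # elementwise product in factor space
    return sigmoid(factors @ Wh.T + bh)

# Hypothetical sizes: 8 visible units, 4 hidden units, 24 context
# units (three past frames of 8 bins), 6 factors.
rng = np.random.default_rng(0)
h = fcrbm_hidden(
    v=rng.normal(size=8),
    z=rng.normal(size=24),
    Wv=rng.normal(scale=0.01, size=(8, 6)),
    Wz=rng.normal(scale=0.01, size=(24, 6)),
    Wh=rng.normal(scale=0.01, size=(4, 6)),
    bh=np.zeros(4),
)
```

The factored form trades some expressiveness for far fewer parameters than a dense three-way tensor, which is one practical reason to compare the two structures empirically.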
Upon validation, I demonstrate these models in artistic application, synthesizing dynamic timbral textures and provoking aesthetic introspection. The resulting systems show the power and potential of integrating music composition and machine learning, endorsing an interdisciplinary approach to the development of art and technology.