Research

Here is a list of selected research works, exploring the intersection of music and art with computer science, human-computer interaction, and artificial intelligence.  They are arranged in reverse chronological order.

Nereides / Νηρηΐδες (2023)

This piano study demonstrates how algorithmic techniques introduced by Iannis Xenakis can be integrated with more traditional, open-ended compositional styles.  Conceptually, the piece captures the ever-changing interplay between light and clouds – how clouds scatter light, how they absorb it, and how they create different shades of light across a stormy sky.

A reduced version of this piece, for two hands, was performed at the Megaron / The Athens Concert Hall in November 2023, as part of the celebration of the 70th anniversary of The Friends of Music Society of Greece.

For more information, see this online presentation at the 2023 International Computer Music Conference, in Shenzhen, China:

Also, see this associated publication:

on the Fractal Nature of Being… (2022)

This piece explores how stochastic and aleatoric techniques introduced by Iannis Xenakis may be combined with classical music theory, modern mathematics (fractal geometry), and modern technology.  It consists of a minute-long harmonic theme interwoven into a fractal.  The theme is introduced on piano, then repeated by different instruments (cello, smartphones, bassoon, and guitar) at different levels of granularity (higher tempi and registers), allowing the fractal to unfold.  The audience participates via smartphones (speakers and accelerometers).
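
As an illustration of this kind of fractal unfolding (a sketch only – the theme, transposition, and tempo-scaling choices below are hypothetical, not the actual score), each note of a short theme can spawn a faster, higher-register copy of the whole theme, a few levels deep:

```python
# Illustrative sketch: a short theme is unfolded fractally -- every note
# spawns a faster, higher-register copy of the whole theme.

THEME = [(60, 4.0), (64, 4.0), (67, 4.0), (65, 4.0)]  # (MIDI pitch, beats), hypothetical theme

def unfold(theme, start=0.0, transpose=0, stretch=1.0, depth=2):
    """Return a list of (onset, pitch, duration) events for a fractal unfolding."""
    events = []
    t = start
    for pitch, dur in theme:
        d = dur * stretch
        events.append((t, pitch + transpose, d))
        if depth > 0:
            # nest the whole theme inside this note: one octave up, four times faster
            events += unfold(theme, start=t, transpose=transpose + 12,
                             stretch=stretch / 4.0, depth=depth - 1)
        t += d
    return events

if __name__ == "__main__":
    for onset, pitch, dur in sorted(unfold(THEME))[:12]:
        print("t=%5.2f  pitch=%3d  dur=%.3f" % (onset, pitch, dur))
```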

A Xenakian probability density function guides a continuous transition from consonance to dissonance.  This is best described visually through M.C. Escher’s “Day and Night” (1938).

Day and Night, 1938 - M.C. Escher

This smooth transition is a meta-Xenakian idea, as Xenakis mainly focused on statistical probabilities between sound and silence.  The piece was composed for the 2022 Meta-Xenakis transcontinental celebration, and first performed at the Music Library of Greece in May 2022.
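
One simple way to sketch such a transition (purely illustrative – the piece’s actual Xenakian density function is more elaborate than the linear ramp assumed here) is to let the probability of choosing a dissonant interval grow over time:

```python
import random

CONSONANT = [0, 3, 4, 7, 12]   # unisons, thirds, fifths, octaves (semitones)
DISSONANT = [1, 2, 6, 10, 11]  # seconds, tritone, sevenths

def interval_at(t, total, root=60):
    """Pick an interval above `root`; the chance of dissonance grows with time t."""
    p_dissonant = t / float(total)          # placeholder for the Xenakian density curve
    pool = DISSONANT if random.random() < p_dissonant else CONSONANT
    return root + random.choice(pool)

if __name__ == "__main__":
    random.seed(1)
    print([interval_at(t, 16) for t in range(16)])
```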

Here is the complete keynote lecture (May 31, 2022):

For more information on the technology, see

Éolienne PH (2022)

“Éolienne PH” (also known as “Be the Wind”) uses recordings of sounds found in nature (such as birdsong and flowing water) to create a restorative, meditative experience.  It invites the audience to participate through their smartphones.  It is inspired by Iannis Xenakis’s 1958 piece, “Concret PH” (see below).  As in its sister piece, sounds are partitioned into small fragments, then pitch-shifted and overlaid to create a granular, ever-unfolding sound texture.

The piece uses audience smartphones to deliver sounds.  Audience members are asked to move around freely.  This creates infinite possibilities for sound placement – aleatoric sound trajectories – as in nature.  Participants may also produce (high-quality, binaural) wind chime sounds by tapping on their smartphone screens.  This invites deep listening, and possibly collaboration in the unfolding soundscape.  The piece was composed for the 2022 Meta-Xenakis transcontinental celebration, and performed at the International Symposium on Electronic Art (ISEA 2022) in Barcelona, Spain, in June 2022.

For more information on the technology, see

Concret PH – A Retelling (2022)

This is a recreation of Iannis Xenakis’s avant-garde piece, “Concret PH” (1958).  The original was created for the 1958 World’s Fair and performed at the Philips Pavilion, using hundreds of speakers.  This recreation utilizes speakers on audience smartphones.

Xenakis used tape recordings of burning charcoal – partitioned into one-second fragments, pitch-shifted and overlaid – to generate a granular, unfolding sound texture.  The recreation uses sounds from the original, as well as hammer-on-anvil sounds.  Audience members are asked to move around freely, resembling visitors moving through the Philips Pavilion in 1958.  A probability function maintains the density of the unfolding sound texture, regardless of how many smartphones are participating.  The piece was presented during a lecture at the University of Maryland, College Park, USA, in April 2022.  It was captured via a high-quality, binaural microphone – use headphones.
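
The density-maintenance idea can be sketched as follows (an illustrative sketch, not the actual implementation – the target grain rate is an assumption): each smartphone decides independently whether to play a grain, with a probability that shrinks as more devices join, so the expected overall grain rate stays roughly constant.

```python
import random

TARGET_GRAINS_PER_SEC = 8.0   # desired overall texture density (assumption)

def should_play(num_devices):
    """Each device decides independently, once per second, whether to play a
    grain, so the expected total stays near TARGET_GRAINS_PER_SEC no matter
    how many smartphones participate."""
    p = min(1.0, TARGET_GRAINS_PER_SEC / max(1, num_devices))
    return random.random() < p

def simulate(num_devices, seconds=10):
    trials = seconds * num_devices            # one decision per device per second
    return sum(should_play(num_devices) for _ in range(trials)) / float(seconds)

if __name__ == "__main__":
    random.seed(0)
    for n in (5, 50, 500):
        print("devices=%4d  ~grains/sec=%.1f" % (n, simulate(n)))
```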

For more information on the technology, see

Liminal Space (2018)

Liminal Space is an aleatoric piece for cello, motion capture, and interactive software. It explores what happens when the past – J.S. Bach’s Sarabande from Cello Suite No. 1 in G major (BWV1007) – meets the present, i.e., movement computing, stochastic music, and interaction design. Through aleatoric means, the composition creates an interface between a cellist and a dancer. As the dancer moves, she creates sounds. The two performers engage in a musical dialog, utilizing Bach’s original material.
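
A rough sketch of one possible motion-to-sound mapping in this spirit (the thresholds and fragment groupings below are hypothetical, not the actual interface) might gate pre-segmented Sarabande fragments by the dancer’s hand height and speed:

```python
import random

# Hypothetical fragment indices into the Sarabande (BWV 1007), grouped by register
FRAGMENTS = {"low": [0, 1, 2], "mid": [3, 4, 5], "high": [6, 7, 8]}

def pick_fragment(hand_height, speed):
    """Map hand height (0..1) to register; movement speed (0..1) gates whether
    a fragment actually sounds (the aleatoric element)."""
    register = "low" if hand_height < 0.33 else "mid" if hand_height < 0.66 else "high"
    if random.random() < speed:          # faster movement -> denser response
        return random.choice(FRAGMENTS[register])
    return None

if __name__ == "__main__":
    random.seed(2)
    print([pick_fragment(h / 10.0, 0.7) for h in range(10)])
```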

Liminal Space was performed as part of the 15th Sound & Music Computing Conference (SMC 2018) music program in Limassol, Cyprus, in July 2018.

For more information on the technology, see

The Veil (2017)

The Veil is an experiment in musical group dynamics, i.e., musical interaction and collaboration among performers. It was first presented at the Music Library of Greece, in Athens, Greece, in December 2017.

The Veil framework stitches several Kinect and other motion sensors (e.g., LeapMotion) into a cohesive whole, with a common coordinate system to register user movement and to assign semantics for musical interaction. It is used to explore the musical language and gestures that may emerge, given a particular set of mappings for interaction.
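
A minimal sketch of the stitching idea (the calibration values and the position-to-music mapping below are hypothetical, not the actual framework): each sensor’s readings are transformed into a shared room frame, and room coordinates are then mapped to musical parameters.

```python
import math

# Hypothetical calibration: each sensor's position (x, y) and yaw in the room frame
SENSORS = {
    "kinect_1": {"offset": (0.0, 0.0), "yaw": 0.0},
    "kinect_2": {"offset": (3.5, 0.0), "yaw": math.pi / 2},
}

def to_room(sensor, x, y):
    """Transform a point from a sensor's local frame into the shared room frame."""
    cal = SENSORS[sensor]
    c, s = math.cos(cal["yaw"]), math.sin(cal["yaw"])
    ox, oy = cal["offset"]
    return (ox + c * x - s * y, oy + s * x + c * y)

def to_music(x_room, y_room, width=7.0, depth=5.0):
    """One possible semantic mapping: x -> MIDI pitch, y -> loudness (velocity)."""
    pitch = int(48 + 36 * min(max(x_room / width, 0.0), 1.0))
    velocity = int(30 + 90 * min(max(y_room / depth, 0.0), 1.0))
    return pitch, velocity

if __name__ == "__main__":
    print(to_music(*to_room("kinect_2", 1.0, 2.0)))
```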

See a brief demonstration at the Music Library of Greece (Dec. 2017):

For more information on the technology, see

Also, see an article (in Greek) on The Veil – and Computing in the Arts, in general.


SoundMorpheus (2016)

SoundMorpheus is a sound spatialization and shaping interface, which allows the placement of sounds in space, as well as the altering of sound characteristics, via arm movements that resemble those of a conductor. The interface displays sounds (or their attributes) to the user, who reaches for them with one or both hands, grabs them, and gently or forcefully sends them around in space, in a 360° circle. The system combines MIDI and traditional instruments with one or more myoelectric sensors.

These components may be physically collocated or distributed across various locales connected via the Internet. The system also supports the performance of acousmatic and electronic music, enabling performances where the traditionally central mixing board need not be touched at all (or only minimally, for calibration). Finally, the system may facilitate the recording of a visual score of a performance, which can be stored for later playback and additional manipulation.
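
One way to sketch the arm-to-space mapping (illustrative only – the speaker count and panning law are assumptions, not the actual system) is equal-power panning between the two speakers nearest the direction the arm points on a 360° ring:

```python
import math

NUM_SPEAKERS = 8  # assumption: a ring of 8 speakers around the audience

def speaker_gains(arm_azimuth_deg):
    """Distribute a grabbed sound between the two speakers nearest to the
    direction the arm is pointing (simple pairwise panning sketch)."""
    step = 360.0 / NUM_SPEAKERS
    pos = (arm_azimuth_deg % 360.0) / step
    lo, frac = int(pos) % NUM_SPEAKERS, pos - int(pos)
    hi = (lo + 1) % NUM_SPEAKERS
    gains = [0.0] * NUM_SPEAKERS
    gains[lo] = math.cos(frac * math.pi / 2)    # equal-power crossfade
    gains[hi] = math.sin(frac * math.pi / 2)
    return gains

if __name__ == "__main__":
    print(["%.2f" % g for g in speaker_gains(100.0)])
```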

For more information on the technology, see


Time is All I Have Now (2016)

This piece combines aesthetic image sonification and computer programming (i.e., sonifying an aesthetically pleasing image with the intent of preserving / mapping that aesthetic onto sound) with traditional music composition techniques (the latter provided by Maggie Dimogiannopoulou).  Leslie Jones on cello.  Recorded at the 2016 NSF Workshop on Computing in the Arts, UNC Asheville, May 2016.
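
As a rough sketch of image sonification in this spirit (a toy mapping, not the one used in the piece): scan a grayscale image column by column, turning each column’s brightest row into pitch and its brightness into loudness.

```python
def sonify_columns(image):
    """`image` is a list of rows of brightness values in 0..255."""
    height = len(image)
    events = []
    for x in range(len(image[0])):
        column = [image[y][x] for y in range(height)]
        y_max = column.index(max(column))
        pitch = 84 - int(36 * y_max / float(height - 1))   # higher rows -> higher pitch
        velocity = int(20 + 100 * max(column) / 255.0)     # brighter -> louder
        events.append((x, pitch, velocity))
    return events

if __name__ == "__main__":
    toy = [[0, 40, 0], [200, 0, 0], [0, 0, 255]]  # tiny 3x3 stand-in for an image
    print(sonify_columns(toy))
```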

For more information, see


Migrant (2015)

Migrant is a cyclic piece combining data sonification, interactivity, and sound spatialization. It utilizes migration data collected over 23 years from 56,976 people across 545 US counties and 43 states. From these, 120 people were selected at random.  Each person becomes a single note.  Melody, harmony, and dynamics are driven by the data.  The composition plays with the golden ratio against harmonic density, dissonance, and resolution (hint – listen carefully to the sounds behind the sounds).  It looks at people’s lives as interweaving – sometimes consonant, sometimes dissonant, and many times somewhere in between – unresolved.
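
A minimal sketch of this person-to-note idea (all records, state assignments, and mapping choices below are hypothetical stand-ins, not the actual dataset or mapping):

```python
RECORDS = [  # hypothetical (year, destination_state, number_of_moves)
    (1993, "SC", 1), (1998, "CA", 3), (2005, "NY", 2), (2014, "SC", 1),
]

STATE_PITCH_CLASS = {"SC": 0, "CA": 4, "NY": 7}   # hypothetical assignment

def person_to_note(year, state, moves, base_year=1990, base_pitch=48):
    """One migration record becomes one note: year -> onset, state -> pitch
    class, number of moves -> register and loudness."""
    onset = (year - base_year) * 0.5                      # half a beat per year
    pitch = base_pitch + STATE_PITCH_CLASS[state] + 12 * (moves % 3)
    velocity = min(127, 40 + 25 * moves)
    return onset, pitch, velocity

if __name__ == "__main__":
    for rec in RECORDS:
        print(person_to_note(*rec))
```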

Migrant was originally composed for Undomesticated, a public-art installation in the context of ArtFields 2015, Lake City, SC, USA (http://www.artfieldssc.org).

Here is a live performance at the American College of Greece, Mar. 2016 (John Bafaloukas, piano):

And here is the original live performance, as part of the ISMIR 2015 music program, in Málaga, Spain, Oct. 2015 (Bill Manaris, guitar).

For more information on the technology, see


Diving Into Infinity (2015)

Diving into Infinity is a Kinect-based system which explores ways to interactively navigate M.C. Escher’s works involving infinite regression. It focuses on Print Gallery, an intriguing, self-similar work created by M.C. Escher in 1956.

The interaction design allows a user to zoom in and out, as well as rotate the image to reveal its self-similarity, by navigating prerecorded video material. This material is based on previous mathematical analyses of Print Gallery that reveal and explain the artist’s depiction of infinity. The system utilizes a Model-View-Controller architecture over OSC.
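
The controller side of such an MVC-over-OSC design might look like the following sketch (assuming the third-party python-osc package; the OSC addresses, port, and gesture names are hypothetical):

```python
from pythonosc.udp_client import SimpleUDPClient

view = SimpleUDPClient("127.0.0.1", 9000)   # the view: a video-playback process

def on_gesture(kind, amount):
    """Controller: translate a recognized Kinect gesture into a view update."""
    if kind == "zoom":
        view.send_message("/print_gallery/zoom", float(amount))
    elif kind == "rotate":
        view.send_message("/print_gallery/rotate", float(amount))

on_gesture("zoom", 1.25)     # e.g., hands moving apart
on_gesture("rotate", -15.0)  # e.g., circular hand motion, in degrees
```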

For more information, see


Time Jitters (2014)

Time Jitters is a four-projector interactive installation, which was designed by Los Angeles-based visual artist Jody Zellen for the Halsey Institute of Contemporary Art in Charleston, SC, USA, Jan. 2014.

Time Jitters includes two walls displaying video animation, and two walls with interactive elements. The concept is to create an immersive experience for participants, which confronts them with a bombardment of visual and sound elements. This project synthesizes AI, interaction, music and visual art. It utilizes invisible, computer-based intelligent agents, which interact with participants. Each person entering the installation space is tracked by a computer-based agent. The agent presents a unique image and sounds, which change as the person moves through the space.
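
Conceptually, the agent-per-visitor idea can be sketched like this (an illustrative stand-in only – the asset names and the wall-selection logic are assumptions):

```python
import random

class VisitorAgent(object):
    """One invisible agent per tracked visitor: it owns an image/sound pairing
    and reports where to present them as the visitor moves."""
    IMAGES = ["crowd", "city", "newsprint", "hands"]      # hypothetical assets
    SOUNDS = ["murmur", "traffic", "static", "clapping"]

    def __init__(self, person_id):
        self.person_id = person_id
        self.image = random.choice(self.IMAGES)
        self.sound = random.choice(self.SOUNDS)

    def update(self, x, y):
        """Map a normalized room position (0..1, 0..1) to one of the four walls."""
        wall = ("north" if y < 0.5 else "south") if abs(y - 0.5) > abs(x - 0.5) \
               else ("west" if x < 0.5 else "east")
        return {"person": self.person_id, "wall": wall,
                "image": self.image, "sound": self.sound}

if __name__ == "__main__":
    agent = VisitorAgent(1)
    print(agent.update(0.2, 0.9))
```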

For more information, see


JythonMusic (2014)

JythonMusic is a software environment for developing interactive musical experiences and systems. It is based on jMusic, a software environment for computer-assisted composition, which has been extended over the last decade into a more comprehensive framework providing composers and software developers with libraries for music making, image manipulation, building graphical user interfaces, and interacting with external devices via MIDI and OSC, among others. The environment is free and open source, and is meant for musicians and programmers alike, of all levels and backgrounds. For instance, here is a first-year university class performing Terry Riley’s “In C”.

JythonMusic is based on Python, so it provides more economical syntax relative to Java- and C/C++-like languages. Because JythonMusic rests on top of Java, it also provides access to the complete Java API and to external Java-based libraries as needed. It works seamlessly with other tools, such as Pd, Max/MSP, and Processing, among others. It is being actively used to develop interactive sound art installations, new interfaces for sound manipulation and spatialization, and various explorations of mappings among motion, gesture, and music.
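
For a flavor of the environment, here is a minimal sketch in the JythonMusic idiom (it assumes the library’s Phrase/Play classes and its pitch and duration constants, such as C4 and QN; see the JythonMusic documentation for the authoritative API):

```python
from music import *

# Build a short phrase from parallel lists of pitches and durations, and play it.
theme = Phrase()
theme.addNoteList([C4, E4, G4, C5], [QN, QN, QN, HN])
theme.setTempo(100)
Play.midi(theme)
```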

For more information, see


Harmonic Navigator (2013)

Harmonic Navigator is a real-time, interactive system for navigating vast harmonic spaces in music corpora. It provides a high-level view of the harmonic (chord) changes that occur in such corpora, and may be used to generate new pieces by stitching together chords in meaningful ways.
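
The navigation idea can be sketched as a simple chord-transition walk (the toy corpus and Roman-numeral chord labels below stand in for the actual 371-chorale data and its richer representation):

```python
import random
from collections import defaultdict

# Toy corpus: learn which chord follows which, then "navigate" by asking for
# a corpus-weighted suggestion at every step.
CORPUS = [["I", "IV", "V", "I"], ["I", "ii", "V", "I"], ["I", "IV", "ii", "V", "I"]]

transitions = defaultdict(list)
for piece in CORPUS:
    for a, b in zip(piece, piece[1:]):
        transitions[a].append(b)

def suggest(chord):
    """Return a suggestion for the next chord, weighted by corpus frequency."""
    return random.choice(transitions[chord]) if transitions[chord] else None

def navigate(start="I", length=8):
    path, chord = [start], start
    for _ in range(length - 1):
        chord = suggest(chord) or start     # the user may instead override this
        path.append(chord)
    return path

if __name__ == "__main__":
    random.seed(3)
    print(navigate())
```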

A Piece

This piece was generated by the system by exploring the harmonic space of 371 Bach chorales.  In this recording, it is performed by the Student String Orchestra at the College of Charleston (conducted by Yiorgos Vassilandonakis).

 

Visual Navigation – example 1

Here is one user interface for navigating harmonic spaces.  In this example we use 371 Bach chorales.  The system makes suggestions (yellow circle), which the user may follow or ignore.  Red denotes dissonance, blue denotes consonance, and shades of purple denote everything in between.

Interesting possibilities emerge, as harmonies flow into new and unexpected places, yielding new ideas for inspiration and exploration.

 

Visual Navigation – example 2

Here is another user interface for navigating harmonic spaces.  In this example we use 371 Bach chorales.  The system generates a harmonic flow, presenting all alternatives at every step (chord).  This user interface allows the user to scrub back and forth and to select different alternatives.  Red denotes dissonance, blue denotes consonance, and shades of purple denote everything in between.

The user makes selections, and the system outputs the generated chord sequence.

 

For more information, see


Monterey Mirror (2011)

Monterey Mirror is an experiment in interactive music performance. It engages a human performer and a computer (the mirror) in a game of playing, listening, and exchanging musical ideas.

The computer player employs an interactive stochastic music generator, which incorporates Markov models, genetic algorithms, and power-law metrics. This approach combines the predictive power of Markov models with the innovative power of genetic algorithms, using power-law metrics for fitness evaluation.

For more information, see


Armonique (2009)

Armonique is a music search engine in which users navigate through large musical collections based solely on the similarity of the music itself, as measured by hundreds of music-similarity metrics based on Zipf’s Law.  In contrast, the majority of online music similarity engines are based on user listening habits and tagging by humans.  These include systems like Pandora, where musicologists listen to and carefully tag every new song across numerous dimensions, and other systems that capture the listening preferences and ratings of users.

Our approach uses 250+ metrics based on power laws, which have been shown to correlate with aspects of human aesthetics. Through these metrics, we are able to automatically create our own metadata (e.g., artist, style, or timbre data) by analyzing the song content and finding patterns within the music. Since this extraction does not require interaction by humans (musicologists or listeners), it is capable of scaling with rapidly increasing data sets.  The main advantages of this technique are that (a) it requires no human pre-processing, and (b) it allows users to discover songs of interest that are rarely listened to and are hard to find otherwise.
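
As a sketch of what one such power-law metric might look like (a toy example, not Armonique’s actual metric set): fit a line to log-rank vs. log-frequency of a piece’s pitches, and compare pieces by the distance between their metric vectors.

```python
import math
from collections import Counter

def zipf_slope(events):
    """Fit a line to log(rank) vs. log(frequency) of the events -- the slope is
    one simple power-law metric (near -1.0 suggests a Zipfian distribution)."""
    freqs = sorted(Counter(events).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs) or 1.0
    return num / den

def similarity(metrics_a, metrics_b):
    """Compare two pieces by the Euclidean distance between their metric vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(metrics_a, metrics_b)))

if __name__ == "__main__":
    piece_a = [60, 62, 60, 64, 60, 62, 65, 60]   # toy pitch sequences
    piece_b = [60, 61, 62, 63, 64, 65, 66, 67]
    va, vb = [zipf_slope(piece_a)], [zipf_slope(piece_b)]   # one-metric "vectors"
    print("slopes:", va[0], vb[0], " distance:", similarity(va, vb))
```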

For more information, see


NEvMuse (2007)

NEvMuse (Neuro-Evolutionary Music environment) is a prototype of an evolutionary music composer, which evolves music using artificial music critics based on power laws.
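
A heavily reduced sketch of this evolutionary loop (the critic below uses a single toy power-law measure in place of NEvMuse’s actual metric ensemble, and all parameters are assumptions):

```python
import math
import random
from collections import Counter

def critic(melody):
    """Toy power-law critic: reward pitch distributions whose Zipf slope is
    close to -1.0 (a very reduced stand-in for an ensemble of metrics)."""
    freqs = sorted(Counter(melody).values(), reverse=True)
    if len(freqs) < 2:
        return -10.0
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return -abs(slope + 1.0)              # closer to -1.0 -> higher fitness

def mutate(melody):
    m = list(melody)
    m[random.randrange(len(m))] += random.choice([-2, -1, 1, 2])
    return m

def evolve(length=16, pop_size=30, generations=50):
    pop = [[random.randint(55, 79) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=critic, reverse=True)
        parents = pop[: pop_size // 2]                      # keep the fittest half
        pop = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
    return max(pop, key=critic)

if __name__ == "__main__":
    random.seed(7)
    print(evolve())
```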

Tools based on this framework could be utilized by human composers to

  • help generate new ideas,
  • help overcome “writer’s block”, and
  • help explore new compositional spaces.

Several experiments have been conducted, exploring the system’s ability to “compose” novel, aesthetically pleasing music.  For example, here are two pieces composed by humans utilizing output from the above tool:

Tranquility

A piece composed by Bill Manaris using NEvMuse’s Variation H.  For more info, see here.

 

Daydream

A piece composed by Patrick Roos using NEvMuse’s Variation Z and Variation Q. For more info, see here.

 

Approach

We use 250+ metrics based on power laws, which have been shown to correlate with aspects of human aesthetics. Through these metrics, we can automatically classify music according to style, composer, and even perceived pleasantness (or popularity).  For example, these figures show calculated differences between J.S. Bach’s pieces BWV 500 through BWV 599, and Beethoven’s piano sonatas (1 through 32).  For more info, see here.

BachScape – a 3D contour map of six metrics over 32 Bach pieces.

BeethovenScape – a 3D contour map of six metrics over 32 Beethoven pieces.

For more information, see


Other Projects

For earlier projects, see Zipf’s Law, SUITEKeys, and NALIGE.

 
