Here is a list of selected research works exploring the intersection of music and art with computer science, human-computer interaction, and artificial intelligence. They are arranged in reverse chronological order.
Contents
- 1 Nereides / Νηρηΐδες (2023)
- 2 on the Fractal Nature of Being… (2022)
- 3 Éolienne PH (2022)
- 4 Concret PH – A Retelling (2022)
- 5 Liminal Space (2018)
- 6 The Veil (2017)
- 7 SoundMorpheus (2016)
- 8 Time is All I Have Now (2016)
- 9 Migrant (2015)
- 10 Diving Into Infinity (2015)
- 11 Time Jitters (2014)
- 12 JythonMusic (2014)
- 13 Harmonic Navigator (2013)
- 14 Monterey Mirror (2011)
- 15 Armonique (2009)
- 16 NEvMuse (2007)
- 17 Other Projects
Nereides / Νηρηΐδες (2023)
This piano study demonstrates how algorithmic techniques introduced by Iannis Xenakis can be integrated with more traditional, open-ended compositional styles. Conceptually, the piece captures the ever-changing interplay between light and clouds – how clouds scatter light, how they absorb it, and how they create different shades of light across a stormy sky (see image below).
A reduced version of this piece, for two hands, was performed at the Megaron / The Athens Concert Hall in November 2023, as part of the celebration of the 70th anniversary of The Friends of Music Society of Greece.
For more information, see this online presentation at the 2023 International Computer Music Conference, in Shenzhen, China:
Also, see this associated publication:
- B. Manaris and A. Forgette, “Meta-Xenakis: Developing Style-Agnostic, Stochastic Algorithmic Music”, Proceedings of the 49th International Computer Music Conference (ICMC 2023), Shenzhen, China, Oct. 2023.
on the Fractal Nature of Being… (2022)
This piece explores how stochastic and aleatoric techniques introduced by Iannis Xenakis may be combined with classical music theory, modern mathematics – fractal geometry, and modern technology. It consists of a minute-long harmonic theme, interwoven into a fractal. The theme is introduced on piano, then repeated by different instruments (cello, smartphones, bassoon, and guitar), at different levels of granularity (higher tempi and registers) – for the fractal to unfold. The audience participates via smartphones (speakers and accelerometers).
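The self-similar unfolding can be sketched in a few lines of Python. The toy theme and the transposition/speed-up factors below are illustrative stand-ins – they are not the piece's actual material.

```python
# A minimal sketch (not the actual score): a theme unfolds fractally by
# re-stating itself at progressively higher registers and faster tempi.

THEME = [(60, 1.0), (64, 0.5), (67, 0.5), (72, 2.0)]  # (MIDI pitch, beats) - illustrative

def unfold(theme, levels, transpose=12, speedup=2.0):
    """Return (pitch, duration) events for `levels` self-similar statements."""
    events = []
    for level in range(levels):
        for pitch, beats in theme:
            events.append((pitch + level * transpose,    # higher register
                           beats / (speedup ** level)))  # higher tempo
    return events

for pitch, duration in unfold(THEME, levels=3):
    print(pitch, round(duration, 3))
```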
A Xenakian probability density function guides a continuous transition from consonance to dissonance. This is perhaps best described visually through M.C. Escher’s “Day and Night” (1938).
This smooth transition is a meta-Xenakian idea, as Xenakis himself focused mainly on statistical probabilities between sound and silence. The piece was composed for the 2022 Meta-Xenakis transcontinental celebration, and first performed at the Music Library of Greece in May 2022.
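To make the consonance-to-dissonance idea concrete, here is a minimal sketch of a probability-driven transition. The interval sets and the linear ramp are stand-ins – the piece's actual density function is not reproduced here.

```python
import random

CONSONANT = [0, 3, 4, 5, 7, 8, 9, 12]   # intervals in semitones (unison, 3rds, ...)
DISSONANT = [1, 2, 6, 10, 11]           # seconds, tritone, sevenths

def next_interval(t):
    """t in [0, 1]: the probability of dissonance rises smoothly with time.
    A linear ramp stands in for the (unspecified) Xenakian density function."""
    p_dissonant = t
    pool = DISSONANT if random.random() < p_dissonant else CONSONANT
    return random.choice(pool)

steps = 16
print([next_interval(i / (steps - 1)) for i in range(steps)])
```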
Here is the complete keynote lecture (May 31, 2022):
For more information on the technology, see
- A. Forgette, B. Manaris, M. Gillikin, and S. Ramdsen, “Speakers, More Speakers!!! – Developing Interactive, Distributed, Smartphone-Based, Immersive Experiences for Music and Art”, Proceedings of the 28th International Symposium on Electronic Art (ISEA 2022), Barcelona, Spain, Jun. 2022.
Éolienne PH (2022)
“Éolienne PH” (also known as “Be the Wind”) uses recordings of sounds found in nature (such as birdsong and flowing water) to create a restorative, meditative experience. It invites the audience to participate through their smartphones. It is inspired by Iannis Xenakis’ 1958 piece, “Concret PH” (see below). As in its sister piece, sounds are partitioned into small fragments, then pitch-shifted and overlaid to create a granular, ever-unfolding sound texture.
The piece uses audience smartphones to deliver sounds. Audience members are asked to move around freely. This creates infinite possibilities for sound placement – aleatoric sound trajectories – as in nature. Participants may contribute high-quality, binaural wind-chime sounds by tapping on their smartphone screens. This invites deep listening, and possibly collaboration in the unfolding soundscape. The piece was composed for the 2022 Meta-Xenakis transcontinental celebration, and performed at the International Symposium on Electronic Art (ISEA 2022), Barcelona, Spain, June 2022.
For more information on the technology, see
- A. Forgette, B. Manaris, M. Gillikin, and S. Ramdsen, “Speakers, More Speakers!!! – Developing Interactive, Distributed, Smartphone-Based, Immersive Experiences for Music and Art”, Proceedings of the 28th International Symposium on Electronic Art (ISEA 2022), Barcelona, Spain, Jun. 2022.
Concret PH – A Retelling (2022)
This is a recreation of Iannis Xenakis’s avant-garde piece, “Concret PH” (1958). The original was created for the 1958 World’s Fair and performed at the Philips Pavilion, using hundreds of speakers. This recreation utilizes speakers on audience smartphones.
Xenakis used tape recordings of burning charcoal – partitioned into one-second fragments, pitch-shifted and overlaid – to generate a granular, unfolding sound texture. The recreation uses sounds from the original, along with hammer-on-anvil sounds. Audience members are asked to move around freely, echoing the way people moved inside the Philips Pavilion in 1958. The system uses a probability function to maintain the density of the unfolding sound texture, regardless of how many smartphones are participating. The piece was presented during a lecture at the University of Maryland, College Park, USA, April 2022. It was captured with a high-quality, binaural microphone – headphones are recommended.
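The density-maintaining idea can be sketched in a few lines of Python: each phone fires a grain with a probability chosen so that the expected total stays constant. The target value and simulation below are illustrative; the actual implementation is not reproduced here.

```python
import random

TARGET_GRAINS_PER_SEC = 20.0   # desired overall texture density (illustrative)

def grains_this_second(n_phones, rng=random):
    """Each phone independently decides whether to fire a one-second grain,
    with probability chosen so the expected total stays near the target
    (capped by the number of phones)."""
    p = min(1.0, TARGET_GRAINS_PER_SEC / max(n_phones, 1))
    return sum(1 for _ in range(n_phones) if rng.random() < p)

for n in (5, 50, 500):
    average = sum(grains_this_second(n) for _ in range(100)) / 100
    print(f"{n:4d} phones -> ~{average:.1f} grains/sec")
```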
For more information on the technology, see
- A. Forgette, B. Manaris, M. Gillikin, and S. Ramdsen, “Speakers, More Speakers!!! – Developing Interactive, Distributed, Smartphone-Based, Immersive Experiences for Music and Art”, Proceedings of the 28th International Symposium on Electronic Art (ISEA 2022), Barcelona, Spain, Jun. 2022.
Liminal Space (2018)
Liminal Space is an aleatoric piece for cello, motion capture, and interactive software. It explores what happens when the past – J.S. Bach’s Sarabande from Cello Suite No. 1 in G major (BWV1007) – meets the present, i.e., movement computing, stochastic music, and interaction design. Through aleatoric means, the composition creates an interface between a cellist and a dancer. As the dancer moves, she creates sounds. The two performers engage in a musical dialog, utilizing Bach’s original material.
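Here is a hedged sketch of the dancer-to-sound interface: movement energy selects, aleatorically, among fragments of Bach's material. The fragments and threshold below are hypothetical; the actual mappings belong to the composition.

```python
import random

# Hypothetical fragments of the Sarabande, as (pitch, beats) lists;
# the real mapping between movement and material is the composition's own.
FRAGMENTS = {
    "calm":   [[(62, 2.0), (66, 1.0)], [(57, 3.0)]],
    "active": [[(69, 0.5), (67, 0.5), (66, 0.5)], [(74, 0.25), (73, 0.25)]],
}

def on_dancer_motion(speed):
    """Faster movement draws, aleatorically, on busier Bach material."""
    pool = FRAGMENTS["active"] if speed > 0.5 else FRAGMENTS["calm"]
    return random.choice(pool)

print(on_dancer_motion(speed=0.8))
```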
Liminal Space was performed as part of the 15th Sound & Music Computing Conference (SMC 2018) music program, in Limassol, Cyprus, July 2018.
For more information on the technology, see
- K. Stewart, B. Manaris, and T. Kohn, “K-Multiscope: Combining Multiple Kinect Sensors into a Common 3D Coordinate System”, Proceedings of the 6th International Conference on Movement and Computing (MOCO 2019), Tempe, AZ, Oct. 2019.
The Veil (2017)
The Veil is an experiment in musical group dynamics, i.e., musical interaction and collaboration among performers. It was first presented at the Music Library of Greece, in Athens, Greece, Dec. 2017.
The Veil framework stitches several Kinect and other motion sensors (e.g., LeapMotion) into a cohesive whole, with a common coordinate system for registering user movement and assigning semantics for musical interaction. It is used to explore the musical language and gestures that may emerge, given a particular set of mappings for interaction.
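The core of such stitching is a calibrated rigid transform per sensor, mapping each sensor's readings into a shared frame. The sketch below (using numpy, with made-up calibration values) illustrates the idea; it is not the K-Multiscope implementation.

```python
import numpy as np

# One rigid transform per sensor (rotation R, translation t), found during
# calibration -- e.g., by observing known shared landmarks. The values here
# are illustrative, not from an actual multi-Kinect setup.
def make_transform(yaw_deg, tx, ty, tz):
    a = np.radians(yaw_deg)
    R = np.array([[ np.cos(a), 0, np.sin(a)],
                  [ 0,         1, 0        ],
                  [-np.sin(a), 0, np.cos(a)]])
    t = np.array([tx, ty, tz])
    return R, t

SENSORS = {
    "kinect_left":  make_transform(yaw_deg=30.0,  tx=-2.0, ty=0.0, tz=0.0),
    "kinect_right": make_transform(yaw_deg=-30.0, tx=2.0,  ty=0.0, tz=0.0),
}

def to_common(sensor_id, joint_xyz):
    """Map a joint position from one sensor's frame into the shared frame."""
    R, t = SENSORS[sensor_id]
    return R @ np.asarray(joint_xyz) + t

print(to_common("kinect_left", [0.0, 1.5, 2.5]))  # same point, shared frame
```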
See a brief demonstration at the Music Library of Greece (Dec. 2017):
For more information on the technology, see
- K. Stewart, B. Manaris, and T. Kohn, “K-Multiscope: Combining Multiple Kinect Sensors into a Common 3D Coordinate System”, Proceedings of the 6th International Conference on Movement and Computing (MOCO 2019), Tempe, AZ, Oct. 2019.
Also, see an article (in Greek) on The Veil – and Computing in the Arts, in general.
SoundMorpheus (2016)
SoundMorpheus is a sound spatialization and shaping interface, which allows the placement of sounds in space, as well as the altering of sound characteristics, via arm movements that resemble those of a conductor. The interface displays sounds (or their attributes) to the user, who reaches for them with one or both hands, grabs them, and gently or forcefully sends them around in space, in a 360° circle. The system combines MIDI and traditional instruments with one or more myoelectric sensors.
These components may be physically collocated, or distributed across locales connected via the Internet. The system also supports the performance of acousmatic and electronic music, enabling performances where the traditionally central mixing board need not be touched at all (or only minimally, for calibration). Finally, the system may facilitate the recording of a visual score of a performance, which can be stored for later playback and additional manipulation.
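As a simple illustration of the arm-to-space mapping, here is a constant-power panning sketch. A real 360° rig would spread gains across many speakers (e.g., via vector-base amplitude panning), and SoundMorpheus's actual mapping is richer than this two-channel version.

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power stereo gains from an arm azimuth in [-90, 90] degrees.
    Two channels only -- an illustration of the mapping, not the real rig."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)  # map to [0, 90] degrees
    return math.cos(theta), math.sin(theta)           # (left, right)

for az in (-90, -45, 0, 45, 90):
    left, right = pan_gains(az)
    print(f"azimuth {az:4d} -> L={left:.2f}  R={right:.2f}")
```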
For more information on the technology, see
- C. Benson, B. Manaris, S. Stoudenmier, and T. Ward, “SoundMorpheus: A Myoelectric-Sensor Based Interface for Sound Spatialization and Shaping”, Proceedings of the 16th International Conference on New Interfaces for Musical Expression (NIME 2016), Brisbane, Australia, Jul. 2016.
Time is All I Have Now (2016)
This piece combines aesthetic image sonification through computer programming (i.e., sonifying an aesthetically pleasing image with the intent of preserving / mapping that aesthetic onto sound) with traditional music composition techniques (the latter provided by Maggie Dimogiannopoulou). Leslie Jones performs on cello. Recorded at the 2016 NSF Workshop on Computing in the Arts, UNC Asheville, May 2016.
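One simple way to sonify an image – among many – is to scan it column by column and map brightness to pitch. The sketch below (using the Pillow library) shows that flavor of mapping; the piece's actual mapping is more refined.

```python
from PIL import Image

def sonify(image, low=48, high=84):
    """Scan an image left to right; each column becomes one note whose MIDI
    pitch follows the column's average brightness. One simple mapping among
    many -- not the specific mapping used in the piece."""
    gray = image.convert("L").resize((32, 32))
    w, h = gray.size
    notes = []
    for x in range(w):
        column = [gray.getpixel((x, y)) for y in range(h)]
        level = sum(column) / (len(column) * 255.0)      # 0.0 dark .. 1.0 bright
        notes.append(low + round(level * (high - low)))  # brightness -> pitch
    return notes

img = Image.linear_gradient("L").rotate(90)  # stand-in for an actual image
print(sonify(img))
```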
For more information, see
- B. Manaris and A.R. Brown, Making Music with Computers: Creative Programming in Python, Chapman & Hall/CRC Textbooks in Computing, 502 pp., May 2014.
Migrant (2015)
Migrant is a cyclic piece combining data sonification, interactivity, and sound spatialization. It utilizes migration data collected over 23 years from 56,976 people across 545 US counties and 43 states. Of these, 120 people were selected at random. Each person becomes a single note. Melody, harmony, and dynamics are driven by the data. The composition plays the golden ratio against harmonic density, dissonance, and resolution (hint – listen carefully for the sounds behind the sounds). It looks at people’s lives as interweaving – sometimes consonant, sometimes dissonant, and many times somewhere in between – unresolved.
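A hedged sketch of the person-to-note idea follows. The record fields and mappings are hypothetical – the actual dataset and mappings are not described in detail here.

```python
# Each selected person becomes one note; field names below are made up
# for illustration, not taken from the actual migration dataset.
people = [
    {"id": 1, "moves": 3, "years_tracked": 23, "last_state": 40},
    {"id": 2, "moves": 0, "years_tracked": 23, "last_state": 7},
    {"id": 3, "moves": 6, "years_tracked": 18, "last_state": 22},
]

def person_to_note(p):
    pitch = 36 + (p["last_state"] % 48)          # where they ended up -> register
    duration = p["years_tracked"] / 23.0 * 4.0   # how long we knew them -> beats
    loudness = min(127, 40 + p["moves"] * 15)    # how much they moved -> dynamics
    return pitch, duration, loudness

for p in people:
    print(person_to_note(p))
```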
Migrant was originally composed for Undomesticated, a public-art installation in the context of ArtFields 2015, Lake City, SC, USA (http://www.artfieldssc.org).
Here is a live performance at the American College of Greece, Mar. 2016 (John Bafaloukas, piano):
And here is the original live performance, as part of the ISMIR 2015 music program, in Málaga, Spain, Oct. 2015 (Bill Manaris, guitar).
For more information on the technology, see
- B. Manaris and S. Stoudenmier, “Specter: Combining Music Information Retrieval with Sound Spatialization”, Proceedings of the 16th International Conference on Music Information Retrieval (ISMIR 2015), Malaga, Spain, Oct. 2015.
Diving Into Infinity (2015)
Diving into Infinity is a Kinect-based system that explores ways to interactively navigate M.C. Escher’s works involving infinite regression. It focuses on “Print Gallery”, an intriguing, self-similar work created by Escher in 1956.
The interaction design allows a user to zoom in and out, as well as rotate the image to reveal its self-similarity, by navigating prerecorded video material. This material is based on previous mathematical analyses of “Print Gallery” that reveal / explain the artist’s depiction of infinity. The system utilizes a Model-View-Controller architecture, with components communicating over OSC.
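The controller/view split over OSC might look like the sketch below, using the python-osc package. The address names (e.g., /escher/zoom) are hypothetical, not the system's actual protocol, and the two halves are shown together for brevity – in practice they run as separate processes.

```python
# Controller side: translate a Kinect gesture into OSC messages for the view.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)   # the view process listens here

def on_gesture(zoom_delta, rotation_delta):
    client.send_message("/escher/zoom", zoom_delta)
    client.send_message("/escher/rotate", rotation_delta)

on_gesture(1.05, 2.5)   # zoom in 5%, rotate 2.5 degrees

# View side (a separate process): seek within the prerecorded video material.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

dispatcher = Dispatcher()
dispatcher.map("/escher/zoom", lambda addr, z: print("seek video to zoom", z))
dispatcher.map("/escher/rotate", lambda addr, r: print("seek video to angle", r))

server = BlockingOSCUDPServer(("127.0.0.1", 8000), dispatcher)
# server.serve_forever()   # left commented so the sketch exits cleanly
```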
For more information, see
- B. Manaris, D. Johnson, and M. Rourk, “Diving into Infinity: A Motion-Based, Immersive Interface for M.C. Escher’s Works”, 21st International Symposium on Electronic Art (ISEA 2015), Vancouver, Canada, Aug. 2015.
Time Jitters (2014)
Time Jitters is a four-projector interactive installation, which was designed by Los Angeles-based visual artist Jody Zellen for the Halsey Institute of Contemporary Art in Charleston, SC, USA, Jan. 2014.
Time Jitters includes two walls displaying video animation, and two walls with interactive elements. The concept is to create an immersive experience that confronts participants with a bombardment of visual and sound elements. The project synthesizes AI, interaction, music, and visual art. It utilizes invisible, computer-based intelligent agents, which interact with participants. Each person entering the installation space is tracked by an agent, which presents a unique image and sounds that change as the person moves through the space.
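The agent-per-participant pattern can be sketched as follows. Class and field names are illustrative – this is not the Kuatro implementation (see the reference below).

```python
import random

class Agent:
    """One invisible agent per tracked participant (a sketch of the idea)."""
    def __init__(self, person_id):
        self.person_id = person_id
        self.image = random.choice(["collage_a", "collage_b", "collage_c"])

    def update(self, x, y):
        # Position steers what the participant sees and hears.
        pan = x                     # left-right position -> stereo placement
        pitch = 48 + int(y * 36)    # depth -> register
        print(f"agent {self.person_id}: show {self.image}, pan={pan:.2f}, pitch={pitch}")

agents = {}

def on_tracker_frame(people):       # people: {id: (x, y)}, from the tracker
    for pid, (x, y) in people.items():
        agents.setdefault(pid, Agent(pid)).update(pid and x, y) if False else \
            agents.setdefault(pid, Agent(pid)).update(x, y)
    for gone in set(agents) - set(people):
        del agents[gone]            # participant left the space

on_tracker_frame({1: (0.2, 0.5), 2: (0.9, 0.1)})
```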
For more information, see
- D. Johnson, B. Manaris, Y. Vassilandonakis, and S. Stoudenmier, “Kuatro: A Motion-Based Framework for Interactive Music Installations”, 40th International Computer Music Conference (ICMC 2014), Athens, Greece, Sep. 2014.
JythonMusic (2014)
JythonMusic is a software environment for developing interactive musical experiences and systems. It is based on jMusic, a software environment for computer-assisted composition, which has been extended over the last decade into a more comprehensive framework, providing composers and software developers with libraries for music making, image manipulation, building graphical user interfaces, and interacting with external devices via MIDI and OSC, among others. The environment is free and open source, and is meant for musicians and programmers alike, of all levels and backgrounds. For instance, here is a first-year university class performing Terry Riley’s “In C”.
JythonMusic is based on Python, so it provides more economical syntax than Java- and C/C++-like languages. It rests on top of Java, providing access to the complete Java API and to external Java-based libraries as needed. It also works seamlessly with other tools, such as Pd, Max/MSP, and Processing. It is being actively used to develop interactive sound art installations, new interfaces for sound manipulation and spatialization, and various explorations of mappings among motion, gesture, and music.
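To give a flavor of the environment, here is a short program in JythonMusic's idiom, in the style of the textbook cited below (it runs inside the JythonMusic environment, which bundles the music library):

```python
# A short JythonMusic program (run inside the JythonMusic environment).
from music import *

theme = Phrase()
theme.addNoteList([C4, E4, G4, C5], [QN, QN, QN, HN])  # pitches, rhythms

Mod.transpose(theme, 12)   # move the theme up an octave
Play.midi(theme)           # render it via MIDI
```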
For more information, see
- B. Manaris, B. Stevens, and A.R. Brown, “JythonMusic: An Environment for Teaching Algorithmic Music Composition, Dynamic Coding, and Musical Performativity”, Journal of Music, Technology & Education 9(1), pp. 55-78, May 2016.
- B. Manaris and A.R. Brown, Making Music with Computers: Creative Programming in Python, Chapman & Hall/CRC Textbooks in Computing, 502 pp., May 2014.
- The JythonMusic website.
Harmonic Navigator (2013)
Harmonic Navigator is a real-time, interactive system for navigating vast harmonic spaces in music corpora. It provides a high-level view of the harmonic (chord) changes that occur in such corpora, and may be used to generate new pieces by stitching together chords in meaningful ways.
A Piece
This piece was generated by the system by exploring the harmonic space of 371 Bach chorales. In this recording, it is performed by the Student String Orchestra at the College of Charleston (conducted by Yiorgos Vassilandonakis).
Visual Navigation – example 1
Here is one user interface for navigating harmonic spaces. In this example we use 371 Bach chorales. The system makes suggestions (yellow circle), which the user chooses to follow or ignore. Red denotes dissonance, blue denotes consonance, and shades of purple denote everything in between.
Interesting possibilities emerge, as harmonies flow into new and unexpected places, yielding new ideas for inspiration and exploration.
Visual Navigation – example 2
Here is another user interface for navigating harmonic spaces. In this example we use 371 Bach chorales. The system generates a harmonic flow, presenting all alternatives at every step (chord). This user interface allows the user to scrub back and forth, and to select different alternatives. Red denotes dissonance, blue denotes consonance, and shades of purple denote everything in between.
The user makes selections, and the system outputs the generated chord sequence.
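At its simplest, such navigation can be viewed as walking a chord-transition graph learned from a corpus. The sketch below uses a toy corpus of Roman-numeral progressions; the actual system works over the 371 Bach chorales and adds gesture-driven interaction.

```python
import random
from collections import defaultdict

# A toy corpus of chord progressions (illustrative only).
corpus = [["I", "IV", "V", "I"],
          ["I", "ii", "V", "I"],
          ["I", "IV", "ii", "V", "vi"]]

transitions = defaultdict(list)
for piece in corpus:
    for a, b in zip(piece, piece[1:]):
        transitions[a].append(b)

def suggestions(chord):
    """All corpus-attested continuations; the user follows or ignores them."""
    return sorted(set(transitions[chord]))

def navigate(start, steps):
    """Walk the harmonic space, choosing randomly where a user would choose."""
    path = [start]
    for _ in range(steps):
        options = transitions.get(path[-1])
        if not options:
            break
        path.append(random.choice(options))
    return path

print(suggestions("I"))   # e.g., ['IV', 'ii']
print(navigate("I", 6))   # one possible generated chord sequence
```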
For more information, see
- D. Johnson, B. Manaris, and Y. Vassilandonakis, “Harmonic Navigator: An Innovative, Gesture-Driven User Interface for Exploring Harmonic Spaces in Musical Corpora”, 16th International Conference on Human-Computer Interaction (HCII 2014), Heraklion, Crete, Greece, Human-Computer Interaction, Advanced Interaction Modalities and Techniques, Lecture Notes in Computer Science, LNCS 8511, Springer-Verlag, pp. 58-68, Jun. 2014.
- D. Johnson, B. Manaris, and Y. Vassilandonakis, “A Novelty Search and Power-Law-Based Genetic Algorithm for Exploring Harmonic Spaces in J.S. Bach Chorales”, EvoMUSART 2014 – 3rd International Conference on Evolutionary and Biologically Inspired Music, Sound, Art and Design, Granada, Spain, Lecture Notes in Computer Science, LNCS 8601, Springer-Verlag, pp. 95-106, Apr. 2014.
- B. Manaris, D. Johnson, and Y. Vassilandonakis, “Harmonic Navigator: A Gesture-Driven, Corpus-Based Approach to Music Analysis, Composition, and Performance”, 2nd International Workshop on Musical Metacreation (MUME 2013), Proceedings of AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE’13), Boston, MA, pp. 67-74, Oct. 2013.
Monterey Mirror (2011)
Monterey Mirror is an experiment in interactive music performance. It engages a human performer and a computer (the mirror) in a game of playing, listening, and exchanging musical ideas.
The computer player employs an interactive stochastic music generator, which incorporates Markov models, genetic algorithms, and power-law metrics. This approach combines the predictive power of Markov models with the innovative power of genetic algorithms, using power-law metrics for fitness evaluation (a minimal sketch follows the references below). For more information, see
- B. Manaris, D. Hughes, and Y. Vassilandonakis, “Monterey Mirror: an experiment in interactive music performance combining evolutionary computation and Zipf’s law”, Evolutionary Intelligence 8(1), Springer-Verlag, pp. 23-35, Mar. 2015.
- B. Manaris, D. Hughes, and Y. Vassilandonakis, “Monterey Mirror: Combining Markov Models, Genetic Algorithms, and Power Laws”, Proceedings of the 1st Workshop in Evolutionary Music, 2011 IEEE Congress on Evolutionary Computation (CEC 2011), New Orleans, LA, USA, pp. 33-40, Jun. 2011.
- B. Manaris, P. Roos, D. Krehbiel, T. Zalonis, and J.R. Armstrong, “Zipf’s Law, Power Laws and Music Aesthetics”, in T. Li, M. Ogihara, G. Tzanetakis (eds.), Music Data Mining, pp. 169-216, CRC Press – Taylor & Francis, July 2011.
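Here is the promised sketch. It pairs a first-order Markov model of the performer's input with a Zipf-based fitness measure; a full genetic algorithm would add crossover and mutation, so selection over random candidates stands in for the evolutionary loop here.

```python
import math
import random
from collections import Counter, defaultdict

def build_markov(melody):
    """First-order Markov model of the performer's pitches."""
    model = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        model[a].append(b)
    return model

def generate(model, start, length):
    """Random walk over the model (the 'predictive' half of the approach)."""
    out = [start]
    while len(out) < length:
        successors = model.get(out[-1]) or list(model)
        out.append(random.choice(successors))
    return out

def zipf_fitness(melody):
    """Closeness of the pitch rank-frequency slope to -1 (the Zipf ideal)
    on a log-log plot; 0.0 is a perfect fit."""
    freqs = sorted(Counter(melody).values(), reverse=True)
    if len(freqs) < 2:
        return -1.0
    xs = [math.log(rank + 1) for rank in range(len(freqs))]
    ys = [math.log(f) for f in freqs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return -abs(slope + 1.0)

heard = [60, 62, 64, 62, 60, 67, 65, 64, 62, 60, 60, 62]  # performer's phrase
model = build_markov(heard)
candidates = [generate(model, 60, 12) for _ in range(50)]  # a "population"
best = max(candidates, key=zipf_fitness)                   # selection by fitness
print(best, round(zipf_fitness(best), 3))
```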
Armonique (2009)
Armonique is a music search engine where users navigate through large musical collections based solely on the similarity of the music itself, as measured by hundreds of music-similarity metrics based on Zipf’s Law. In contrast, the majority of online music-similarity engines are based on user listening habits and tagging by humans. These include systems like Pandora, where musicologists listen to and carefully tag every new song across numerous dimensions, as well as other systems that capture the listening preferences and ratings of users.
Our approach uses 250+ metrics based on power laws, which have been shown to correlate with aspects of human aesthetics. Through these metrics, we are able to automatically create our own metadata (e.g., artist, style, or timbre data) by analyzing song content and finding patterns within the music. Since this extraction requires no human interaction (by musicologists or listeners), it can scale with rapidly growing data sets. The main advantages of this technique are that (a) it requires no human pre-processing, and (b) it allows users to discover songs of interest that are rarely listened to and are hard to find otherwise.
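The flavor of a power-law metric, and of similarity search over metric vectors, can be sketched as follows. Two toy metrics stand in for Armonique's 250+, and the "songs" are tiny made-up note lists.

```python
import math
from collections import Counter

def zipf_slope(values):
    """Slope of the rank-frequency distribution on a log-log plot."""
    freqs = sorted(Counter(values).values(), reverse=True)
    xs = [math.log(rank + 1) for rank in range(len(freqs))]
    ys = [math.log(f) for f in freqs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    denom = sum((x - mx) ** 2 for x in xs) or 1.0
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom

def features(song):
    """Two toy metrics standing in for Armonique's 250+ power-law metrics."""
    pitches = [pitch for pitch, duration in song]
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    return (zipf_slope(pitches), zipf_slope(intervals))

def most_similar(query, library):
    """Nearest neighbor in metric space -- no human-supplied metadata."""
    return min(library, key=lambda name:
               math.dist(features(library[name]), features(query)))

# Tiny made-up "songs": lists of (pitch, duration) pairs.
library = {
    "song_a": [(60, 1), (62, 1), (60, 1), (64, 1), (60, 1), (62, 1)],
    "song_b": [(60, 1), (61, 1), (66, 1), (70, 1), (59, 1), (63, 1)],
}
query = [(72, 1), (74, 1), (72, 1), (76, 1), (72, 1), (74, 1)]
print(most_similar(query, library))   # song_a: same contour, different register
```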
For more information, see
- D. Hughes and B. Manaris, “Fractal Dimensions of Music and Automatic Playlist Generation – Similarity Search via MP3 Song Uploads”, Proceedings of the 8th IEEE International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIHMSP 2012), Piraeus-Athens, Greece, pp. 436-440, July 2012.
- B. Manaris, P. Roos, D. Krehbiel, T. Zalonis, and J.R. Armstrong, “Zipf’s Law, Power Laws and Music Aesthetics”, in T. Li, M. Ogihara, G. Tzanetakis (eds.), Music Data Mining, pp. 169-216, CRC Press – Taylor & Francis, July 2011.
- B. Manaris, J.R. Armstrong, T. Zalonis, and D. Krehbiel, “Armonique: a framework for Web audio archiving, searching, and metadata extraction”, International Association of Sound and Audiovisual Archives (IASA) Journal, vol. 35, pp. 57-68, June 2010. Based on a presentation at the 40th Annual Conference of the International Association of Sound and Audiovisual Archives (IASA 2009), Athens, Greece, Sep. 2009 (video).
- B. Manaris, D. Krehbiel, P. Roos, and T. Zalonis, “Armonique: Experiments in Content-Based Similarity Retrieval Using Power-Law Melodic and Timbre Metrics”, Proceedings of the Ninth International Conference on Music Information Retrieval (ISMIR 2008), Philadelphia, PA, pp. 343-348, Sep. 2008.
- P. Roos and B. Manaris, “A Music Information Retrieval Approach Based on Power Laws,” Proceedings of 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI-07), Patras, Greece, vol. 2, pp. 27-31, Oct. 2007.
NEvMuse (2007)
NEvMuse (Neuro-Evolutionary Music environment) is a prototype evolutionary music composer, which evolves music using artificial music critics based on power laws.
Tools based on this framework could be utilized by human composers to
- help generate new ideas,
- help overcome “writer’s block”, and
- help explore new compositional spaces.
Several experiments have been conducted, exploring the system’s ability to “compose” novel, aesthetically pleasing music. For example, here are two pieces composed by humans utilizing output from the above tool:
Tranquility
A piece composed by Bill Manaris using NEvMuse’s Variation H. For more info, see here.
Daydream
A piece composed by Patrick Roos using NEvMuse’s Variation Z and Variation Q. For more info, see here.
Approach
We use 250+ metrics based on power laws, which have been shown to correlate with aspects of human aesthetics. Through these metrics, we can automatically classify music according to style, composer, and even perceived pleasantness (or popularity). For example, these figures show calculated differences between J.S. Bach’s pieces BWV500 through BWV599, and Beethoven’s piano sonatas (1 through 32). For more info, see here.
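Classification over such metric vectors can be as simple as nearest-centroid matching. The two-dimensional vectors below are made up for illustration; real ones would come from the full metric set.

```python
import math

# Nearest-centroid classification over power-law metric vectors.
# These training vectors are invented for illustration only.
training = {
    "Bach":      [(-1.05, -0.98), (-1.10, -1.02), (-0.97, -0.95)],
    "Beethoven": [(-1.40, -1.25), (-1.35, -1.30), (-1.45, -1.22)],
}

# Average each composer's vectors dimension-by-dimension.
centroids = {composer: tuple(sum(dim) / len(vectors) for dim in zip(*vectors))
             for composer, vectors in training.items()}

def classify(vec):
    """Assign a piece to the composer with the closest metric centroid."""
    return min(centroids, key=lambda c: math.dist(centroids[c], vec))

print(classify((-1.0, -1.0)))   # -> Bach
```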
For more information, see
- B. Manaris, P. Roos, P. Machado, D. Krehbiel, L. Pellicoro, and J. Romero, “A Corpus-Based Hybrid Approach to Music Analysis and Composition,” Proceedings of 22nd Conference on Artificial Intelligence (AAAI-07), Vancouver, BC, pp. 839-845, Jul. 2007.
- B. Manaris, J. Romero, P. Machado, D. Krehbiel, T. Hirzel, W. Pharr, and R.B. Davis, “Zipf’s Law, Music Classification and Aesthetics,” Computer Music Journal 29(1), MIT Press, pp. 55-69, Spring 2005.
- B. Manaris, P. Machado, C. McCauley, J. Romero, and D. Krehbiel, “Developing Fitness Functions for Pleasant Music: Zipf’s Law and Interactive Evolution Systems,” EvoMUSART2005 – 3rd European Workshop on Evolutionary Music and Art, Lausanne, Switzerland, Lecture Notes in Computer Science, Applications of Evolutionary Computing, LNCS 3449, Springer-Verlag, pp. 498-507, Mar. 2005.
- P. Machado, J. Romero, M.L. Santos, A. Cardoso, and B. Manaris, “Adaptive Critics for Evolutionary Artists,” EvoMUSART2004 – 2nd European Workshop on Evolutionary Music and Art, Coimbra, Portugal, Lecture Notes in Computer Science, Applications of Evolutionary Computing, LNCS 3005, Springer-Verlag, pp. 437-446, Apr. 2004.
- P. Machado, J. Romero, B. Manaris, A. Santos, and A. Cardoso, “Power to the Critics – A Framework for the Development of Artificial Critics,” Proceedings of 3rd Workshop on Creative Systems, 18th International Joint Conference on Artificial Intelligence (IJCAI 2003), Acapulco, Mexico, pp. 55-64, Aug. 2003.
- B. Manaris, D. Vaughan, C. Wagner, J. Romero, and R.B. Davis, “Evolutionary Music and the Zipf–Mandelbrot Law: Progress towards Developing Fitness Functions for Pleasant Music,” EvoMUSART2003 – 1st European Workshop on Evolutionary Music and Art, Essex, UK, Lecture Notes in Computer Science, Applications of Evolutionary Computing, LNCS 2611, Springer-Verlag, pp. 522-534, Apr. 2003.
- B. Manaris, T. Purewal, and C. McCormick, “Progress Towards Recognizing and Classifying Beautiful Music with Computers – MIDI-Encoded Music and the Zipf-Mandelbrot Law,” Proceedings of the IEEE SoutheastCon 2002 Conference, Columbia, SC, pp. 52-57, Apr. 2002.
Other Projects
For earlier projects, see Zipf’s Law, SUITEKeys, and NALIGE.