-
Wallace, Benedikte; Nymoen, Kristian; Martin, Charles Patrick & Tørresen, Jim
(2020).
Towards Movement Generation with Audio Features.
-
Wallace, Benedikte; Nymoen, Kristian & Martin, Charles Patrick
(2019).
Tracing from Sound to Movement with Mixture Density Recurrent Neural Networks.
-
Næss, Torgrim Rudland; Tørresen, Jim & Martin, Charles Patrick
(2019).
A Physical Intelligent Instrument using Recurrent Neural Networks.
-
Martin, Charles Patrick & Tørresen, Jim
(2019).
An Interactive Music Prediction System with Mixture Density Recurrent Neural Networks.
-
Faitas, Andrei; Baumann, Synne Engdahl; Tørresen, Jim & Martin, Charles Patrick
(2019).
Generating Convincing Harmony Parts with Simple Long Short-Term Memory Networks.
-
Martin, Charles Patrick; Næss, Torgrim Rudland; Faitas, Andrei & Baumann, Synne Engdahl
(2019).
Session on Musical Prediction and Generation with Deep Learning.
-
Martin, Charles Patrick & Tørresen, Jim
(2019).
An Interactive Musical Prediction System with Mixture Density Recurrent Neural Networks.
-
Martin, Charles Patrick
(2019).
Workshop on Making Predictive NIMEs with Neural Networks.
-
Nygaard, Tønnes Frostad; Nordmoen, Jørgen Halvorsen; Martin, Charles Patrick; Tørresen, Jim & Glette, Kyrre
(2019).
Lessons Learned from Real-World Experiments with DyRET: the Dynamic Robot for Embodied Testing.
-
Nygaard, Tønnes Frostad; Martin, Charles Patrick; Tørresen, Jim & Glette, Kyrre
(2019).
Self-Modifying Morphology Experiments with DyRET: Dynamic Robot for Embodied Testing.
-
Nygaard, Tønnes Frostad; Nordmoen, Jørgen Halvorsen; Ellefsen, Kai Olav; Martin, Charles Patrick; Tørresen, Jim & Glette, Kyrre
(2019).
Experiences from Real-World Evolution with DyRET: Dynamic Robot for Embodied Testing.
-
Jensenius, Alexander Refsum; Martin, Charles Patrick; Erdem, Cagri; Lan, Qichao; Fuhrer, Julian Peter; Gonzalez Sanchez, Victor Evaristo et al. (9 contributors)
(2019).
Self-playing Guitars.
Summary:
In this installation we explore how six self-playing guitars can entrain to each other. When they are left alone they will revert to playing a common pulse. As soon as they sense people in their surroundings they will start entraining to other pulses. The result is a fascinating exploration of a basic physical and cognitive concept, and the musically interesting patterns that emerge on the border between order and chaos.
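For readers curious about the mechanism, the pulse entrainment described here can be illustrated with a Kuramoto-style coupled-oscillator update. This is a generic textbook model, not the installation's actual code, and all constants below are assumed:
```python
import numpy as np

# Kuramoto-style sketch of pulse entrainment (a generic textbook model,
# not the installation's actual code; all constants are assumed).
# Each "guitar" is an oscillator with its own preferred pulse rate;
# coupling K pulls the phases toward a common pulse.
N = 6                                   # six self-playing guitars
K = 1.5                                 # coupling strength (assumed)
rates = np.array([0.9, 1.0, 1.1, 0.95, 1.05, 1.0])   # preferred pulses (Hz)
omega = 2 * np.pi * rates
theta = np.random.uniform(0, 2 * np.pi, N)           # initial phases
dt = 0.01

for _ in range(10_000):
    # Each oscillator nudges its phase toward the others' phases.
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += (omega + coupling) * dt

# With strong enough K the phases lock and a common pulse emerges; with
# weaker coupling, patterns sit on the border between order and chaos.
print(np.sort(np.mod(theta, 2 * np.pi)))
```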
-
Martin, Charles Patrick & Tørresen, Jim
(2018).
Predictive Musical Interaction with MDRNNs.
-
Søyseth, Vegard Dønnem; Nygaard, Tønnes Frostad; Martin, Charles Patrick; Uddin, Md Zia & Ellefsen, Kai Olav
(2018).
ROBIN stand at Cutting Edge 2018.
-
Martin, Charles Patrick; Lesteberg, Mari; Jawad, Karolina; Aandahl, Eigil; Xambó, Anna & Jensenius, Alexander Refsum
(2018).
Stillness under Tension.
-
Martin, Charles Patrick; Gonzalez Sanchez, Victor Evaristo; Zelechowska, Agata; Erdem, Cagri & Jensenius, Alexander Refsum
(2018).
Stillness under Tension.
-
Tørresen, Jim; Garcia Ceja, Enrique Alejandro; Ellefsen, Kai Olav & Martin, Charles Patrick
(2018).
Equipping Systems with Forecasting Capabilities.
-
Martin, Charles Patrick
(2018).
Deep Predictive Models in Interactive Music.
-
Martin, Charles Patrick; Glette, Kyrre; Nygaard, Tønnes Frostad & Tørresen, Jim
(2018).
Self-Awareness in a Cyber-Physical Predictive Musical Interface.
Summary:
We introduce a new self-contained and self-aware interface for musical expression where a recurrent neural network (RNN) is integrated into a physical instrument design. The system includes levers for physical input and output, a speaker system, and an integrated single-board computer. The RNN serves as an internal model of the user’s physical input, and predictions can replace or complement direct sonic and physical control by the user. We explore this device in terms of different interaction configurations and learned models according to frameworks of self-aware cyber-physical systems.
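As a loose illustration of the configuration where predictions replace or complement direct control, the following sketch shows a predict-or-follow loop. The function names and the toy random-walk model are hypothetical stand-ins, not the instrument's actual RNN or lever interface:
```python
import numpy as np

# Hypothetical sketch of the predict-or-follow interaction described
# above; rnn_predict and read_lever stand in for the instrument's actual
# RNN model and lever sensors.
def read_lever():
    """Placeholder for reading the physical lever position (0.0-1.0)."""
    return 0.5

def rnn_predict(history):
    """Placeholder model: a toy random walk from the last position."""
    return float(np.clip(history[-1] + np.random.normal(0.0, 0.05), 0.0, 1.0))

history = [0.5]
user_active = False   # e.g. set True while the lever sensors register motion

for _ in range(100):
    if user_active:
        position = read_lever()          # direct physical control
    else:
        position = rnn_predict(history)  # the model replaces user input
    history.append(position)
    # `position` would drive both the sound synthesis and the output levers
```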
-
Nygaard, Tønnes Frostad; Martin, Charles Patrick; Tørresen, Jim & Glette, Kyrre
(2018).
Exploring Mechanically Self-Reconfiguring Robots for Autonomous Design.
Summary:
Evolutionary robotics has aimed to optimize robot control and morphology to produce better and more robust robots. Most previous research only addresses optimization of control, and does this only in simulation. We have developed a four-legged mammal-inspired robot that features a self-reconfiguring morphology. In this paper, we discuss the possibilities opened up by being able to efficiently do experiments on a changing morphology in the real world. We discuss present challenges for such a platform and potential experimental designs that could unlock new discoveries. Finally, we place our robot in its context within general developments in the field of evolutionary robotics, and consider what advances the future might hold.
-
Martin, Charles Patrick
(2018).
MicroJam.
Summary:
MicroJam is a mobile app for sharing tiny touch-screen performances. Mobile applications that streamline creativity and social interaction have enabled a very broad audience to develop their own creative practices. While these apps have been very successful in visual arts (particularly photography), the idea of social music-making has not had such a broad impact. MicroJam includes several novel performance concepts intended to engage the casual music maker and inspired by current trends in social creativity support tools. Touch-screen performances are limited to 5-seconds, instrument settings are posed as sonic "filters", and past performances are arranged as a timeline with replies and layers. These features of MicroJam encourage users not only to perform music more frequently, but to engage with others in impromptu ensemble music making.
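To give a sense of how 5-second performances with replies and layers might be represented, here is a hypothetical data structure; the field names are illustrative, not MicroJam's actual schema:
```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical representation of a 5-second MicroJam-style performance;
# field names are illustrative, not the app's actual schema.
@dataclass
class TouchEvent:
    t: float          # seconds since the start, 0.0 <= t < 5.0
    x: float          # normalised touch position
    y: float
    moving: bool      # swipe (True) or tap (False)

@dataclass
class Performance:
    events: List[TouchEvent] = field(default_factory=list)
    instrument: str = "chirp"                  # sonic "filter" for playback
    reply_to: Optional["Performance"] = None   # layered reply in the timeline

    def add(self, ev: TouchEvent) -> None:
        if 0.0 <= ev.t < 5.0:                  # enforce the 5-second limit
            self.events.append(ev)

jam = Performance()
jam.add(TouchEvent(t=0.3, x=0.5, y=0.5, moving=False))
reply = Performance(instrument="keys", reply_to=jam)
```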
-
Garcia Ceja, Enrique Alejandro; Ellefsen, Kai Olav; Martin, Charles Patrick & Tørresen, Jim
(2018).
Prediction, Interaction, and User Behaviour.
Summary:
The goal of this tutorial is to apply predictive machine learning models to human behaviour through a human computer interface. We will introduce participants to the key stages for developing predictive interaction in user-facing technologies: collecting and identifying data, applying machine learning models, and developing predictive interactions. Many of us are aware of recent advances in deep neural networks (DNNs) and other machine learning (ML) techniques; however, it is not always clear how we can apply these techniques in interactive and real-time applications. Apart from well-known examples such as image classification and speech recognition, what else can predictive ML models be used for? How can these computational intelligence techniques be deployed to help users?
In this tutorial, we will show that ML models can be applied to many interactive applications to enhance users’ experience and engagement. We will demonstrate how sensor and user interaction data can be collected and investigated, modelled using classical ML and DNNs, and where predictions of these models can feed back into an interface. We will walk through these processes using live-coded demonstrations with Python code in Jupyter Notebooks so participants will be able to see our investigations live and take the example code home to apply in their own projects.
Our demonstrations will be motivated from examples from our own research in creativity support tools, robotics, and modelling user behaviour. In creativity, we will show how streams of interaction data from a creative musical interface can be modelled with deep recurrent neural networks (RNNs). From this data, we can predict users’ future interactions, or the potential interactions of other users. This enables us to “fill in” parts of a tablet-based musical ensemble when other users are not available, or to continue a user’s composition with potential musical parts. In user behaviour, we will show how smartphone sensor data can be used to infer user contextual information such as physical activities. This contextual information can be used to trigger interactions in smart home or internet of things (IoT) environments, to help tune interactive applications to users’ needs, or to help track health data.
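A minimal sketch of the activity-recognition step mentioned above, using scikit-learn on synthetic stand-in data (the features, activity labels, and classifier choice are illustrative, not the tutorial's actual materials):
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for windowed smartphone accelerometer features
# (e.g. mean and standard deviation per axis); labels are illustrative.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))             # 500 windows, 6 summary features
y = rng.choice(["walking", "sitting", "standing"], size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = RandomForestClassifier().fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))   # near chance on random data
```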
-
Martin, Charles Patrick; Glette, Kyrre & Tørresen, Jim
(2018).
Creative Prediction with Neural Networks.
Summary:
The goal of this tutorial is to apply predictive machine learning models to creative data. The focus of the tutorial will be recurrent neural networks (RNNs), deep learning models that can be used to generate sequential and temporal data. RNNs can be applied to many kinds of creative data including text and music. They can learn the long-range structure from a corpus of data and “create” new sequences by predicting one element at a time. When embedded in a creative interface, they can be used for “predictive interaction” where a human collaborates with, influences, and is influenced by a generative neural network.
We will walk through the fundamental steps for training creative RNNs using live-coded demonstrations with Python code in Jupyter Notebooks. These steps are: collecting and cleaning data, building and training an RNN, and developing predictive interactions. We will also have live demonstrations and interactive live-hacking of our creative RNN systems!
You’re welcome to bring a laptop with Python to the tutorial and load up our code examples, or to follow along with us on the screen!
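For a flavour of the tutorial's steps (build and train an RNN, then generate by predicting one element at a time), here is a self-contained toy sketch in Keras; the "corpus" is random stand-in data, not the tutorial's own notebooks:
```python
import numpy as np
import tensorflow as tf

# Toy version of the tutorial steps: build and train an RNN, then
# generate by predicting one element at a time.
VOCAB, SEQ_LEN = 16, 8
data = np.random.randint(0, VOCAB, size=(1000, SEQ_LEN + 1))
X, y = data[:, :-1], data[:, -1]      # predict the element after each window

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 32),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=2, verbose=0)

# Generation loop: sample a next element, append it, and feed it back in.
seq = list(X[0])
for _ in range(8):
    probs = model.predict(np.array([seq[-SEQ_LEN:]]), verbose=0)[0]
    probs = probs.astype("float64")
    probs /= probs.sum()              # renormalise for sampling
    seq.append(int(np.random.choice(VOCAB, p=probs)))
print(seq)
```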
-
Martin, Charles Patrick
(2018).
Predictive Music Systems for Interactive Performance.
Summary:
Automatic music generation is a compelling task where much recent progress has been made with deep learning models. But how can these models be integrated into interactive music systems, and how can they encourage or enhance the music-making of human users?
Musical performance requires prediction: to operate instruments and to play together in groups. Predictive models can help interactive systems understand their temporal context and ensemble behaviour, and deep learning allows data-driven models with a long memory of past states.
This process could be termed "predictive musical interaction", where a predictive model is embedded in a musical interface, assisting users by predicting unknown states of musical processes. I’ll discuss a framework for predictive musical interaction including examples from our lab, and consider how this work could be applied more broadly in HCI and robotics. This talk will cover material from this paper: https://arxiv.org/abs/1801.10492
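The core MDRNN prediction step can be sketched as sampling from a Gaussian mixture: pick a component by its weight, then draw from that Gaussian. This is the generic mixture-density formulation, with illustrative parameter values rather than the paper's exact parameterisation:
```python
import numpy as np

# Generic mixture-density sampling step: choose a component by its
# weight, then draw from that Gaussian. Values are illustrative.
def sample_mdn(pi, mu, sigma, rng=np.random.default_rng()):
    k = rng.choice(len(pi), p=pi)         # pick a mixture component
    return rng.normal(mu[k], sigma[k])    # sample from that component

pi = np.array([0.7, 0.2, 0.1])            # mixture weights (sum to 1)
mu = np.array([0.1, 0.5, 0.9])            # component means
sigma = np.array([0.05, 0.10, 0.20])      # component standard deviations
print(sample_mdn(pi, mu, sigma))          # one predicted next value
```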
-
Jensenius, Alexander Refsum; Martin, Charles Patrick; Bjerkestrand, Kari Anne Vadstensvik & Johnson, Victoria
(2018).
Stillness under Tension.
-
Martin, Charles Patrick; Jensenius, Alexander Refsum & Tørresen, Jim
(2018).
Composing an ensemble standstill work for Myo and Bela.
Summary:
This paper describes the process of developing a standstill performance work using the Myo gesture control armband and the Bela embedded computing platform. The combination of Myo and Bela allows a portable and extensible version of the standstill performance concept while introducing muscle tension as an additional control parameter. We describe the technical details of our setup and introduce Myo-to-Bela and Myo-to-OSC software bridges that assist with prototyping compositions using the Myo controller.
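To give a sense of what such a bridge involves, here is a minimal OSC-sending sketch using the python-osc library; the address pattern and port are assumptions for illustration, not the released Myo-to-Bela or Myo-to-OSC software:
```python
from pythonosc.udp_client import SimpleUDPClient

# Minimal OSC-sending sketch with the python-osc library. The address
# pattern "/myo/emg" and port 5000 are assumptions for illustration,
# not the released Myo-to-Bela / Myo-to-OSC bridges.
client = SimpleUDPClient("127.0.0.1", 5000)

def on_emg(emg_samples):
    """Forward one frame of eight EMG channel values as an OSC message."""
    client.send_message("/myo/emg", [int(v) for v in emg_samples])

on_emg([12, -3, 40, 7, -18, 25, 0, 9])    # example frame
```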
-
Martin, Charles Patrick; Xambó, Anna; Visi, Federico; Morreale, Fabio & Jensenius, Alexander Refsum
(2018).
Stillness under Tension.
Summary:
Stillness Under Tension is an ensemble standstill work for Myo gesture control armband and Bela embedded music platform. Humans are incapable of standing completely still due to breathing and other involuntary micromotions. This work explores the expressive space of standing still through an inverse action-sound mapping: less movement leads to more sound. Four performers stand as still as possible on stage, each wearing a Myo armband connected to a Bela embedded sound processing platform. The Myo is used to measure the performers' movement and the muscle activity in their forearms, which they can use, both voluntarily and involuntarily, to control a synthesised sound world. Each performer uses one Myo and Bela in a musical space defined by their physical position and posture while standing still.
-
Gonzalez Sanchez, Victor Evaristo; Martin, Charles Patrick; Zelechowska, Agata; Bjerkestrand, Kari Anne Vadstensvik; Johnson, Victoria & Jensenius, Alexander Refsum
(2018).
Bela-based augmented acoustic guitars for sonic microinteraction.
Summary:
This article describes the design and construction of a collection of digitally-controlled augmented acoustic guitars, and the use of these guitars in the installation Sverm-Resonans. The installation was built around the idea of exploring 'inverse' sonic microinteraction, that is, controlling sounds by the micromotion observed when attempting to stand still. It consisted of six acoustic guitars, each equipped with a Bela embedded computer for sound processing (in Pure Data), an infrared distance sensor to detect the presence of users, and an actuator attached to the guitar body to produce sound. With an attached battery pack, the result was a set of completely autonomous instruments that were easy to hang in a gallery space. The installation encouraged explorations on the boundary between the tactile and the kinesthetic, the body and the mind, and between motion and sound. The use of guitars, albeit with an untraditional 'performance' technique, made the experience both familiar and unfamiliar at the same time. Many users reported heightened sensations of stillness, sound, and vibration, and that the 'inverse' control of the instrument was both challenging and pleasant.
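The 'inverse' mapping can be sketched as a gain that falls as measured motion rises; the installation itself implements this in Pure Data on the Bela, so the Python below is only a conceptual sketch with assumed sensor ranges:
```python
# Conceptual sketch of the 'inverse' action-sound mapping: amplitude
# falls as measured motion rises. The installation implements this in
# Pure Data on the Bela; ranges here are assumed.
def inverse_gain(qom, qom_max=1.0):
    """Map quantity of motion (QoM) to an amplitude in [0, 1]."""
    qom = min(max(qom, 0.0), qom_max)     # clamp to the expected sensor range
    return 1.0 - qom / qom_max            # stillness -> 1.0, motion -> 0.0

for qom in (0.0, 0.2, 0.8):
    print(qom, "->", inverse_gain(qom))
```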
-
Jensenius, Alexander Refsum; Martin, Charles Patrick; Bjerkestrand, Kari Anne Vadstensvik & Johnson, Victoria
(2017).
Sverm-Muscle.
-
Jensenius, Alexander Refsum; Martin, Charles Patrick; Gonzalez Sanchez, Victor Evaristo; Zelechowska, Agata & Johnson, Victoria
(2017).
Sverm-Resonans.
-
Jensenius, Alexander Refsum; Bjerkestrand, Kari Anne Vadstensvik; Johnson, Victoria; Gonzalez Sanchez, Victor Evaristo; Zelechowska, Agata & Martin, Charles Patrick
(2017).
Sverm-resonans.
Summary:
An installation that gives you access to heightened sensations of stillness, sound and vibration. Unlike traditional instruments, these guitars are “played” by (you) trying to stand still. The living body interacts with an electronic sound system played through the acoustic instrument. In this way, Sverm-Resonans explores the meeting points between the tactile and the kinesthetic, the body and the mind, and between motion and sound.
-
Jensenius, Alexander Refsum; Bjerkestrand, Kari Anne Vadstensvik; Johnson, Victoria; Gonzalez Sanchez, Victor Evaristo; Zelechowska, Agata & Martin, Charles Patrick
(2017).
Sverm-Resonans.
-
Martin, Charles Patrick
(2017).
Musical Networks: Using Recurrent Neural Networks to Model and Complement Musical Creativity.
Summary:
The use of artificial neural networks and deep learning systems to generate visual artistic expressions has become common in recent times. However, musical neural networks have not been applied to the same extent. While image-generation systems often use convolutional networks, musical generation systems rely on less well-developed recurrent neural networks (RNNs) that are trained to model sequences of data through time.
RNNs usually apply special artificial neurons known as Long Short-Term Memory (LSTM) cells that can store information over several time-steps and learn when and how to update their memory during training. As RNNs model sequences and include a kind of memory, they are very applicable to the temporal structure of music, where patterns may be regularly repeated. In musical applications, these networks are usually applied as sequence generators; that is, given a sequence of notes, the network generates a possible next note.
In this talk, I will discuss current designs for RNNs and the latest applications in Google's Magenta project, and in our own Neural Touch-Screen Ensemble developed at the University of Oslo, Department of Informatics. Both of these projects are notable for focusing on interactive applications of musical networks. Magenta implements a call-response improvisation system that allows performers to probe the musical affordances of an RNN. The Neural Touch-Screen Ensemble simulates an ensemble response to a single live performer using models trained on data captured over several years of collaborative iPad improvisations. These interactive musical AI systems point to future possibilities for integrating musical networks into musical performance and production, where they could be seen as "intelligent instruments" that assist and enhance musicians and casual music makers alike.
-
Martin, Charles Patrick
(2017).
Making Social Music with MicroJam.
-
Jensenius, Alexander Refsum; Bjerkestrand, Kari Anne Vadstensvik; Johnson, Victoria; Gonzalez Sanchez, Victor Evaristo; Zelechowska, Agata & Martin, Charles Patrick
(2017).
Sverm-Resonans.
-
Jensenius, Alexander Refsum; Bjerkestrand, Kari Anne Vadstensvik; Johnson, Victoria; Gonzalez Sanchez, Victor Evaristo; Zelechowska, Agata & Martin, Charles Patrick
(2017).
Sverm-Puls.
Summary:
An installation that gives you access to heightened sensations of stillness, sound and vibration.
Approach one of the guitars. Place yourself in front of it and stand still. Feel free to put your hands on the body of the instrument. Listen to the sounds appearing from the instrument. As opposed to a traditional instrument, these guitars are “played” by (you) trying to stand still. The living body interacts with an electronic sound system played through the acoustic instrument. In this way, Sverm-Puls explores the meeting points between the tactile and the kinesthetic, the body and the mind, and between motion and sound.
-
Jensenius, Alexander Refsum; Bjerkestrand, Kari Anne Vadstensvik; Johnson, Victoria; Gonzalez Sanchez, Victor Evaristo; Zelechowska, Agata & Martin, Charles Patrick
(2017).
Sverm-Resonans.
Summary:
An installation that gives you access to stillness, sound and vibration.
Stand still. Listen. Find the sound. Move. Stand still. Listen. Hear the tension. Notice your own movements. Relax. Stand even more still. Listen more deeply. Feel the boundary between the known and the unknown, the controllable and the uncontrollable. How does the body meet the sound? How does the sound meet the body? What do you hear?
Walk up to one of the guitars. Place yourself in front of it and sense your own stillness. If you like, place your hands on the body of the instrument. Try closing your eyes. Open your senses to the sound vibrations you feel and hear. Stand as long as you like and notice how the sound develops, along with your inner experiences, images and associations. Unlike a traditional instrument, these guitars are "played" by standing still. The living body interacts with an electronic sound system played through an acoustic instrument. Sverm-resonans explores the meeting point between the tactile and the kinesthetic, the body and the mind, and between motion and sound.
-
Martin, Charles Patrick
(2017).
Virtuosic Interactions / Performing with a Neural iPad Band.
-
Martin, Charles Patrick
(2017).
MicroJam: A Social App for Making Music.
-
Martin, Charles Patrick
(2017).
Pursuing a Sonigraphical Ideal at the Dawn of the NIME Epoch.
In Jensenius, Alexander Refsum & Lyons, Michael J. (Eds.),
A NIME Reader: Fifteen Years of New Interfaces for Musical Expression.
Springer Science+Business Media B.V.
ISBN 978-3-319-47213-3.
p. 103–105.
doi: 10.1007/978-3-319-47214-0.
Summary:
What is a musical instrument? What are the musical instruments of the future? This anthology presents thirty papers selected from the fifteen year long history of the International Conference on New Interfaces for Musical Expression (NIME). NIME is a leading music technology conference, and an important venue for researchers and artists to present and discuss their explorations of musical instruments and technologies.
Each of the papers is followed by commentaries written by the original authors and by leading experts. The volume covers important developments in the field, including the earliest reports of instruments like the reacTable, Overtone Violin, Pebblebox, and Plank. There are also numerous papers presenting new development platforms and technologies, as well as critical reflections, theoretical analyses and artistic experiences.
The anthology is intended for newcomers who want to get an overview of recent advances in music technology. The historical traces, meta-discussions and reflections will also be of interest for longtime NIME participants. The book thus serves both as a survey of influential past work and as a starting point for new and exciting future developments.
-
Nygaard, Tønnes Frostad; Glette, Kyrre; Tørresen, Jim & Martin, Charles Patrick
(2020).
Legging It: An Evolutionary Approach to Morphological Adaptation for a Real-World Quadruped Robot.
Universitetet i Oslo.
ISSN 1501-7710.
-
Næss, Torgrim Rudland; Martin, Charles Patrick & Tørresen, Jim
(2019).
A Physical Intelligent Instrument using Recurrent Neural Networks.
Universitetet i Oslo.
-
Wallace, Benedikte & Martin, Charles Patrick
(2018).
Predictive songwriting with concatenative accompaniment.
Universitetet i Oslo.
-