Generative/Reactive Music API for Games

I have been working on a new API for generative/reactive music for a long time now, and I think it is time to show something.

New: Source code available! See webpage for more info.

The editor is available here: Windows installer, Executable jar

Webpage for the editor with some examples, specification (very sparse at the moment), and some tutorials:
http://www.springworldgames.com/rmleditor

A video that demonstrates a module:

Another video that demonstrates two modules and the editor


The second demo is at 6:08 and the editor demo is at 8:00 if you want to skip forward :slight_smile:

The Music Generator produces one musical measure (bar) at a time, given a large set of possible property values. It also keeps track of a harmonic rhythm.
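To give a feel for what "one measure at a time with a harmonic rhythm" means, here is a minimal sketch. All class and method names are invented for illustration; this is not the actual API.

```java
public class MeasureSketch {
    // A toy harmonic rhythm: one chord degree per measure (I-IV-V-I),
    // cycling as the piece advances measure by measure.
    static final int[] HARMONIC_RHYTHM = {0, 3, 4, 0};

    // Chord degree in effect for a given measure index.
    static int chordDegreeFor(int measure) {
        return HARMONIC_RHYTHM[measure % HARMONIC_RHYTHM.length];
    }

    public static void main(String[] args) {
        // Generating a measure only needs the current degree plus whatever
        // property values are active at that point in the piece.
        for (int m = 0; m < 8; m++) {
            System.out.println("measure " + m + ": chord degree " + chordDegreeFor(m));
        }
    }
}
```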

The Reactive Music Language (RML) supports scripting, action execution (parallel, conditional, etc.), external parameter mapping, and states with reactions.

The examples with multiple channels are the most interesting ones to listen to, the others are more tutorial-like and demonstrate one particular functionality.

I think that the most interesting application for this API is to put RML modules inside games that produce music depending on what happens.

Wow, that is pretty awesome! Is there a way to save the sounds you create?

Yes, I have just disabled it in the applets. I’ll probably add “save sound” and “save image” functions using the JWS API, without resorting to signed applets.

I guess that you can just download rml.jar and run the com.springworldgames.musicgenerator.MusicGeneratorTesterApplication class, but I haven’t tested that myself.

Thanks for trying it!

The more I use it, the more I love it. <3
Could you please provide a download link to the rml.jar :slight_smile:

Here is the rml.jar link:
www.springworldgames.com/musicgenerator/rml.jar

Use: java -cp rml.jar com.springworldgames.musicgenerator.MusicGeneratorTesterApplication

Here is also the XML schema for the RML format:
http://www.springworldgames.com/musicgenerator/song_schema.xsd

I am currently working on an editor for RML-files.

Wow, neat! Wish I knew more about music to play around with this properly.

Can you make a random-music-generation button which fiddles with all the settings randomly?

I will probably add more intuitive layers above the Music Generator and RML that hide much of the music theory. One such layer could, for example, control the Music Generator in a random fashion while making sure that the settings are “reasonable”.
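A layer like that could be as simple as drawing each setting from a hand-picked "musical" range instead of the full parameter space. A sketch with invented names and guessed ranges:

```java
import java.util.Random;

public class ReasonableRandomSketch {
    // Draw settings only from ranges that tend to sound musical,
    // rather than from everything the generator accepts.
    // The concrete ranges here are guesses, not tuned values.
    static int randomTempo(Random rng) {
        return 60 + rng.nextInt(81); // 60..140 BPM inclusive
    }

    static int randomScale(Random rng) {
        // Index into a short list of "safe" scales
        // (say: major, natural minor, pentatonic).
        return rng.nextInt(3);
    }

    public static void main(String[] args) {
        Random rng = new Random();
        System.out.println("tempo: " + randomTempo(rng)
                + " BPM, scale #" + randomScale(rng));
    }
}
```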

Another option is to add wizards where you pick a musical style and then the settings are randomized. I will probably need some help from a proper musician to make this happen though :). I am just a programmer with minor knowledge of music theory and zero knowledge of musical styles.

Very interesting. I could really use this for my RPG sessions: a GUI with five buttons: happy, moody, tension, picking up pace, furious! I guess it would mostly be the composing part that remains to be done on top of your project?

Yes, pretty much. But the composing is very different from normal, static composing.
The most difficult part of reactive music is perhaps to make the switches between moods sound right. But if you are a skilled composer you might know how to do this. A simple way out is to add a “stinger” sound that hides abrupt switches.

Planned features:

  • Real-time control of certain parameters (cutoff frequency, volume, etc.) that is combined with the composed part but with much shorter delay
  • Useful editor for RML-modules. This one is in active development since it is a pain to write the XML manually
  • VST instrument support. Already tested a Java wrapper of this and it works really well. Only works on PCs though…
  • Ornamentation (trills, grace notes etc.)

It’s a very interesting project! You’ve encoded and made accessible a LOT of musical functionality.

Is it possible to translate some of your structures to different scale sizes? For example, I often write in pentatonic, hexatonic or octatonic. In each, building “triads” results in rather different sounds. Just a side idea to explore. An ascending scale of triads in pentatonic might be (using CDEGA as the scale) CEA, DGC’, EAD’, GC’E’, AD’G’, C’E’A’. Two six-note scales I’ve explored are (1) adding one more step to the pentatonic, for example, B-flat or B to the above, (2) an intriguing Miles Davis sort of thing using majors and minors, as in C, e-flat, E, G, a-flat, B, C’.

An interesting resource for generative/responsive music might be some of the manuals written for silent film accompaniment from the 1920s. Pretty cool that they actually had how-to’s in those days! One title I recall was by Edith Lang, “Musical Accompaniment of Moving Pictures”. It has examples of selections that are altered slightly to get different moods: happy, sad, “mysterioso” if I recall correctly. Might be a source of good ideas. (I just looked it up and it’s still available. I think I’m going to get a copy for myself! I had used it to research and write a score for a silent film during my last year at UC Berkeley.)

(1) the use of Laban notation. It is a notational system used in dance that tracks “effort qualities” using three scales (time, direction, force), to oversimplify dramatically. There are definite correlates between motion qualities and emotion. Picture the difference between a “Jab” (short:direct:strong) and a “Float” (long:meandering:light). I think it would be interesting to explore translating these qualities (or a set of parameters) into things like tempo, melodic direction, and harmonic context.

(2) branching music: scores written with various branch points/decision points. For example, if one has theme A for inside a certain dungeon and theme B for outside, write a “set” of bridges that can tap into various points along theme A and end up at theme B. Using the info in your pattern and scale structures, perhaps one can tell what the current harmony is in A when the transition occurs, and use that to manufacture a pattern that smoothly bridges to B from that particular chord. I think FMOD has something along these lines, but I haven’t researched it half as much as I’d like to.

Such an interesting topic! I’ve been planning to get more into it after getting my first 2D puzzle game written–a teach-myself-Java-learning project “Hexara”.

Thank you philfrei for your very good pointers and ideas!

I have to look up that silent film stuff!
Edit: That Edith Lang reference seems to be extremely useful :slight_smile:

Also, I have not read about the Laban notation before and I will look it up as well.

Different scale sizes are already supported via custom scales. You can set your own scales and use the diatonic triads from those. It is also possible to define a different scale and harmony for each part of the harmonic rhythm. Another option is to go for a normal scale of which your pentatonic scale is a subset. Then you just use the settings for the diatonic harmonic rhythm and select those notes that you want in your harmony.
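For the curious: stacking every other step of an arbitrary scale reproduces your pentatonic “triads” exactly. A small sketch (not part of the API); an apostrophe marks notes that wrapped into the next octave:

```java
public class PentatonicTriads {
    static final String[] PENT = {"C", "D", "E", "G", "A"};

    // Build a "triad" on the given degree by stacking two scale-thirds
    // (every other step), wrapping around the scale as needed.
    static String triad(String[] scale, int degree) {
        StringBuilder sb = new StringBuilder();
        for (int k = 0; k < 3; k++) {
            int idx = degree + 2 * k;
            sb.append(scale[idx % scale.length]);
            if (idx >= scale.length) sb.append("'"); // wrapped into next octave
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Prints CEA, DGC', EAD', GC'E', AD'G' for the CDEGA pentatonic.
        for (int d = 0; d < PENT.length; d++) {
            System.out.println(triad(PENT, d));
        }
    }
}
```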

The music is already branching with the help of the RML modules. It is basically a state machine that contains actions such as repetition, parallelization, conditional execution (if clauses), and state switches. With the help of scripting, a lot of interesting reactive/generative modules can be constructed. You can implement the smooth transitions you talked about by making many of the properties change with the script functions.
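In spirit, a module reacting to an external parameter looks something like this toy state machine. All names here are invented for illustration; real RML modules are XML plus scripting, not Java:

```java
public class MoodMachineSketch {
    enum Mood { CALM, TENSE }

    private Mood state = Mood.CALM;

    // Called once per bar with an external game parameter; switches state
    // with a little hysteresis so the music doesn't flip-flop every bar.
    Mood react(int enemiesNearby) {
        if (state == Mood.CALM && enemiesNearby > 2) {
            state = Mood.TENSE;
        } else if (state == Mood.TENSE && enemiesNearby == 0) {
            state = Mood.CALM;
        }
        return state;
    }

    public static void main(String[] args) {
        MoodMachineSketch m = new MoodMachineSketch();
        int[] enemies = {0, 1, 3, 3, 1, 0};
        for (int e : enemies) {
            System.out.println(e + " enemies -> " + m.react(e));
        }
    }
}
```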

[quote]Effort:
Effort, or what Laban sometimes described as dynamics, is a system for understanding the more subtle characteristics about the way a movement is done with respect to inner intention. The difference between punching someone in anger and reaching for a glass is slight in terms of body organization - both rely on extension of the arm. The attention to the strength of the movement, the control of the movement and the timing of the movement are very different. Effort has four subcategories, each of which has two opposite polarities.
[/quote]
from Wikipedia http://en.wikipedia.org/wiki/Laban_notation

Something as simple as C-D-E-F-G could be realized as a punch or a reach, if you come up with a way to encode these effort parameters. Obvious first steps: force maps to loud vs. soft, and the time quality is enhanced if you can create the illusion of mass or momentum as the notes “move about”. In terms of direction, one has start and end notes, and the path between either is direct or meanders, e.g., with start and end C/G: CDEDFEFAG is more meandering than CDEFG, or the most direct, CG.
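That direct-vs-meandering distinction is easy to quantify: count how often the line reverses direction. A sketch, using C-major scale steps (C=0, D=1, E=2, F=3, G=4, A=5):

```java
public class EffortSketch {
    // Directness of a melodic path: 1.0 means it never changes direction,
    // lower values mean it meanders more. A crude stand-in for Laban's
    // direct-vs-indirect axis; by this measure CDEFG and CG are equally direct.
    static double directness(int[] scaleSteps) {
        int changes = 0, segments = 0;
        int prevSign = 0;
        for (int i = 1; i < scaleSteps.length; i++) {
            int sign = Integer.signum(scaleSteps[i] - scaleSteps[i - 1]);
            if (sign == 0) continue;       // repeated note: no motion
            segments++;
            if (prevSign != 0 && sign != prevSign) changes++;
            prevSign = sign;
        }
        return segments == 0 ? 1.0 : 1.0 - (double) changes / segments;
    }

    public static void main(String[] args) {
        int[] direct  = {0, 1, 2, 3, 4};              // C-D-E-F-G
        int[] meander = {0, 1, 2, 1, 3, 2, 3, 5, 4};  // C-D-E-D-F-E-F-A-G
        System.out.println("CDEFG:     " + directness(direct));
        System.out.println("CDEDFEFAG: " + directness(meander));
    }
}
```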

Nice, just saw this quote [quote]The Action Efforts have been used extensively in some acting schools to train the ability to change quickly between physical manifestations of emotion.
[/quote]
If one is able to encode and use these effort parameters, there’s also the possibility of encoding light levels and mapping them to brightness in timbre. Brassy notes with lots of overtones are very bright gold, in a way, yes? And something more hushed and muted, with a lot of rolloff in the filtering, is a better instrument choice for dark scenes.

Good composers have made these connections and, I think, are composing with at least a subconscious awareness of all of this and more. It’s only a matter of time before it all gets rationalized and encoded, for better or worse.

Very interesting!

Are these Laban terms used in the music world or is it currently mostly used for dance?

Are the terms abstractions for dancers to reason about the music they dance to or are there also composers that use these terms (and speak them) when they compose?

I have heard musicians talk about “tension”, “melody movement”, “intensity”, and the connection between a melody and a story. Musicians also talk about the important balance between structure and variation, which seems to be important in all creative areas.

TLDR alert…

[quote]Are these Laban terms used in the music world or is it currently mostly used for dance?
[/quote]
I’ve not seen them used in the music world, except in one instance, coming across them as a way to generate ideas for improvising accompaniments for dance classes. Often a dance class will have a hired musician to provide a beat, and the musician will do this via improvisation.

[quote]Are the terms abstractions for dancers to reason about the music they dance to or are there also composers that use these terms (and speak them) when they compose?
[/quote]
Laban notation is mostly used by choreographers (but interestingly, has been picked up by acting schools as a way to teach the ability to move from one emotional state to another), but not even all choreographers use it by any means. But I imagine that to the extent words are used (mostly dance is communicated by demonstration), there would be a tendency to gravitate towards this conceptual framework. I haven’t explored much in dance theory besides Laban. In the music world, these terms are NOT generally formally used or analysed, with a few exceptions here and there.

[quote]I have heard musicians talk about “tension”, “melody movement”, “intensity”, and the connection between a melody and a story. Musicians also talk about the important balance between structure and variation, which seems to be important in all creative areas.
[/quote]
Absolutely!

It’s really hard for me not to go off the deep end here. If you look at most musical terms, they are metaphors that map mere vibrations in the air to physical space and body sensation. There are musical theorists (Kerman, McClary) that are happy to make use of mappings to the physical & emotional world in their analysis, and others like Stravinsky as a classic example of those who say that music is purely abstract form. Suzanne Langer was important in providing a philosophical link, and I think the work on the metaphorical basis of language by G. Lakoff is starting to find some music theorists that are directly applying his linguistic theories to music.

But that doesn’t mean some clever programmers have to wait for these folks to sort all this stuff out in order to apply it to game programming.

Why not do an angular-velocity analysis of a melodic line? Lay out X in time and Y via scale steps (or the log of pitch). From this one can infer whether the melody is acting “as if” it were an object behaving in a physical space or not. Our minds automatically try to map observed behavior and action to intention; it is how we are built. So, constrain the building of the melodic motion to create shapes that fit the game state.
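A first step along those lines: with onsets as X and scale steps as Y, the average absolute slope already separates calm lines from leaping ones. A sketch (the data and naming are mine, just for illustration):

```java
public class MelodyMotionSketch {
    // Treat a melody as a path: X = note onset in beats, Y = scale step.
    // Average absolute slope (steps per beat) is a crude measure of how
    // energetic the implied physical motion is.
    static double averageSpeed(double[] beats, int[] steps) {
        double total = 0;
        for (int i = 1; i < steps.length; i++) {
            total += Math.abs(steps[i] - steps[i - 1]) / (beats[i] - beats[i - 1]);
        }
        return total / (steps.length - 1);
    }

    public static void main(String[] args) {
        double[] beats = {0, 1, 2, 3, 4};   // one note per beat
        int[] calm    = {0, 1, 2, 3, 4};    // stepwise ascent
        int[] leaping = {0, 4, 0, 5, 1};    // wide leaps back and forth
        System.out.println("calm:    " + averageSpeed(beats, calm));
        System.out.println("leaping: " + averageSpeed(beats, leaping));
    }
}
```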

Overlay this with another map of dissonance/consonance to the current tone center, and you have degrees of restfulness. Some melodies stay low near the tone centers, others tend to come to resting points only on relatively dissonant tones, creating a sense of relative instability or restlessness. Game state can determine which way to tend.
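The dissonance overlay could start as a simple lookup relative to the tone center. The ranking below is invented and very rough, purely to show the shape of the idea:

```java
public class RestfulnessSketch {
    // Invented, very rough "tension" ranking for the twelve intervals above
    // a tone center: 0 = the center itself, low = consonant/restful,
    // high = dissonant/restless. Not music-theory gospel, just a sketch.
    static final int[] TENSION = {0, 5, 3, 2, 1, 1, 5, 1, 2, 1, 3, 4};

    // Tension of a MIDI pitch relative to a tone-center pitch.
    static int tension(int pitch, int toneCenter) {
        return TENSION[Math.floorMod(pitch - toneCenter, 12)];
    }

    public static void main(String[] args) {
        int center = 60; // middle C as tone center
        int[] restingPoints = {60, 67, 66, 64};
        for (int p : restingPoints) {
            System.out.println("pitch " + p + " -> tension " + tension(p, center));
        }
    }
}
```

Game state could then bias which tension levels melodic resting points are allowed to land on.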

Composing is a bit like being a mime. You create illusions. For example, a melodic line (that has indeed established that it is a line by behaving in a line-like way, rather than a random way) first has to create the space, perhaps a wall, by bumping into it (think of a melody that bounces off of a certain note rather than progressing beyond it) and then, with a more vigorous approach, bursts past that “barrier.” Or, using our Laban Effort scales translated into transformational methods for the motivic material… vary the approach to that barrier in a game-appropriate way.

I would love to collaborate on laying out some of these ideas into code form. It’s an ambitious project, and there’s a lot of virgin territory here, though I think there are already some pretty sophisticated programs that are able to identify the composer of a composition by purely analytical methods, or even generate new compositions “in the style of.”

Thanks for the very good answers to my questions :slight_smile:

I just have to get an editor up and running and provide more examples so that musicians like yourself can start to experiment with this software. It would be great to implement new features and add better abstractions with good feedback as well.

You’ve got some really good ideas there! I’d love to cooperate. But as I said before, a reasonably good editor is necessary to endure this type of composition :).

Very cool!

One thing I was wishing my editor could do (I use either Finale or Sonar Home Studio) was to take a “finished” piece and automatically chunk it into overlapping WAVs (or whatever format) to aid branching structures. Not sure what direction you are going with your editor, but something like this: if one could set it to take a 20-measure section, for example, and export 20 “single” measures, but in such a fashion that each export includes the reverb tails, the instrument decays, et cetera, for each measure. The segmented playback stream could then either be played in an overlapping fashion and sound seamless, or be interrupted at any measure and not sound like it was suddenly cut off.
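The bookkeeping for that kind of export is simple; the hard part is the audio rendering itself. The segment boundaries, as a sketch with made-up numbers:

```java
public class SegmentSketch {
    // Start/end times (in seconds) for per-measure exports, where every
    // segment is extended by tailSec so reverb tails and instrument decays
    // aren't cut off; consecutive segments therefore overlap.
    static double[][] segments(int measures, double measureSec, double tailSec) {
        double[][] out = new double[measures][2];
        for (int m = 0; m < measures; m++) {
            out[m][0] = m * measureSec;                 // segment start
            out[m][1] = (m + 1) * measureSec + tailSec; // end including tail
        }
        return out;
    }

    public static void main(String[] args) {
        // 20 measures of 2 s each, with a 1.5 s tail per segment.
        for (double[] seg : segments(20, 2.0, 1.5)) {
            System.out.printf("%.1f - %.1f%n", seg[0], seg[1]);
        }
    }
}
```

Played back-to-back with the next segment starting on its measure boundary, the tails overlap and mask the cut points.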

Or any other idea that supports branching music or ability to easily view/audit the sorts of transforms that one might try to do. That is on my wish list. I will try to stop now and let you get on with your work!

The editor that I am building now is all about creating the reactive components that produce MIDI-like data one measure at a time. The result can be synthesized by a MIDI synthesizer (such as Gervill or any VST instrument) or converted to a MIDI file.

It would not be difficult to create a segmented result that can be used for loops, but then you probably want to use FMOD or some other tool for sequencing the segments. My editor is more about creating branching/parallel/sequence actions that control the MIDI music generation in real-time.

I have made a long video that demonstrates two modules and the editor:

The video is long and if you want to skip forward, the second demo is at 6:08 and the editor demo is at 8:00.

amazing stuff

So cool how you condensed music into just 3 controls.

Love the high-intensity, high-harshness unhappy music, feels like the Nintendo music when time’s running out in Mario or something.

How much CPU does it take?

And if you and I team up and hook it up to some AI (or my little brother has lots of free time) could we pump out tunes for top-20 pop hits?! :smiley: