Generative/Reactive Music API for Games

[quote]Effort:
Effort, or what Laban sometimes described as dynamics, is a system for understanding the more subtle characteristics about the way a movement is done with respect to inner intention. The difference between punching someone in anger and reaching for a glass is slight in terms of body organization - both rely on extension of the arm. The attention to the strength of the movement, the control of the movement and the timing of the movement are very different. Effort has four subcategories, each of which has two opposite polarities.
[/quote]
from Wikipedia http://en.wikipedia.org/wiki/Laban_notation

Something as simple as C-D-E-F-G could be realized as a punch or a reach, if you come up with a way to encode these effort parameters. Obvious first steps: force maps to loud vs. soft, and the time dimension is enhanced if you can create the illusion of mass or momentum as the notes “move about”. In terms of direction, one has start and end notes, and the path between either is direct or meanders, e.g., with start and end notes C/G: CDEDFEFAG is more meandering than CDEFG, and the most direct is CG.
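That directness idea can be made concrete. Here is a minimal Java sketch (class and method names are my own, not part of any existing API) that scores a melodic path by the ratio of total scale-step distance travelled to the net distance between the start and end notes: 1.0 means perfectly direct, higher means more meandering.

```java
// Hypothetical sketch of a "directness" measure for a melodic path.
public class Meander {
    // degrees: the melody as scale degrees, e.g. C=0, D=1, E=2, F=3, G=4, A=5.
    // Returns travelled distance / net distance (1.0 = perfectly direct).
    public static double meanderRatio(int[] degrees) {
        int travelled = 0;
        for (int i = 1; i < degrees.length; i++) {
            travelled += Math.abs(degrees[i] - degrees[i - 1]);
        }
        int net = Math.abs(degrees[degrees.length - 1] - degrees[0]);
        return net == 0 ? travelled : (double) travelled / net;
    }

    public static void main(String[] args) {
        int[] cg         = {0, 4};                      // C G
        int[] cdefg      = {0, 1, 2, 3, 4};             // C D E F G
        int[] meandering = {0, 1, 2, 1, 3, 2, 3, 5, 4}; // C D E D F E F A G
        System.out.println(Meander.meanderRatio(cg));         // 1.0
        System.out.println(Meander.meanderRatio(cdefg));      // 1.0
        System.out.println(Meander.meanderRatio(meandering)); // 2.5
    }
}
```

Both CG and CDEFG score 1.0 here, so a complete encoding would also need a note count or rate to distinguish the leap from the scale run.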

Nice, just saw this quote [quote]The Action Efforts have been used extensively in some acting schools to train the ability to change quickly between physical manifestations of emotion.
[/quote]
If one is able to encode and use these effort parameters, there’s also the possibility of encoding light levels and mapping them to brightness in timbre. Brassy notes with lots of overtones are very bright gold, in a way, yes? And something more hushed and muted, with a lot of rolloff in the filtering, is a better instrument choice for dark scenes.
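One trivial way to encode that light-to-timbre mapping, as a sketch (the class, the cutoff bounds, and the idea of driving a low-pass filter this way are all my own assumptions, not anything from the discussion): map a scene light level in [0, 1] to a filter cutoff, so bright scenes keep the overtones and dark scenes roll them off.

```java
// Illustrative brightness-to-timbre mapping; all constants are assumptions.
public class Brightness {
    static final double DARK_HZ   = 400.0;  // assumed cutoff for the darkest scene
    static final double BRIGHT_HZ = 8000.0; // assumed cutoff for the brightest scene

    // Exponential interpolation between the two bounds, since filter
    // brightness is perceived roughly logarithmically in frequency.
    public static double cutoffHz(double light) {
        return DARK_HZ * Math.pow(BRIGHT_HZ / DARK_HZ, light);
    }
}
```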

Good composers have made these connections, I think, and are composing with at least a subconscious awareness of all of this and more. It’s only a matter of time before it all gets rationalized and encoded, for better or worse.

Very interesting!

Are these Laban terms used in the music world or is it currently mostly used for dance?

Are the terms abstractions for dancers to reason about the music they dance to or are there also composers that use these terms (and speak them) when they compose?

I have heard musicians talk about “tension”, “melody movement”, “intensity”, and the connection between a melody and a story. Musicians also talk about the important balance between structure and variation, which seems to be important in all creative areas.

TLDR alert…

[quote]Are these Laban terms used in the music world or is it currently mostly used for dance?
[/quote]
I’ve not seen them used in the music world, except in one instance coming across them as a way to generate ideas for improvising accompaniments for dance classes. Often a dance class will have a hired musician to provide a beat, and the musician will do this via improvisation.

[quote]Are the terms abstractions for dancers to reason about the music they dance to or are there also composers that use these terms (and speak them) when they compose?
[/quote]
Laban notation is mostly used by choreographers (but interestingly, has been picked up by acting schools as a way to teach the ability to move from one emotional state to another), but not even all choreographers use it by any means. But I imagine that to the extent words are used (mostly dance is communicated by demonstration), there would be a tendency to gravitate towards this conceptual framework. I haven’t explored much in dance theory besides Laban. In the music world, these terms are NOT generally formally used or analysed, with a few exceptions here and there.

[quote]I have heard musicians talk about “tension”, “melody movement”, “intensity”, and the connection between a melody and a story. Musicians also talk about the important balance between structure and variation, which seems to be important in all creative areas.
[/quote]
Absolutely!

It’s really hard for me not to go off the deep end here. If you look at most musical terms, they are metaphors that map mere vibrations in the air to physical space and body sensation. There are music theorists (Kerman, McClary) who are happy to make use of mappings to the physical and emotional world in their analysis, while others (Stravinsky is the classic example) hold that music is purely abstract form. Suzanne Langer was important in providing a philosophical link, and I think the work on the metaphorical basis of language by G. Lakoff is starting to find some music theorists that are directly applying his linguistic theories to music.

But that doesn’t mean some clever programmers have to wait for these folks to sort all this stuff out in order to apply it to game programming.

Why not do an angular velocity analysis of a melodic line? Lay out time on the X axis and scale steps (or the log of pitch) on the Y axis. From this one can infer whether the melody is acting “as if” it is an object behaving in a physical space or not. Our minds automatically try to map observed behavior and action to intention; it is how we are built. So, constrain the building of the melodic motion to create shapes that fit the game state.
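A minimal version of that analysis might look like this in Java (my own sketch; the names and the choice of mean absolute turn angle as the summary statistic are assumptions): treat the melody as (time, scaleStep) points and measure how sharply the line changes direction.

```java
// Sketch of "angular velocity" analysis of a melodic contour.
public class MelodicMotion {
    // Angle of each segment in the time/pitch plane, in radians.
    public static double[] segmentAngles(double[] times, double[] steps) {
        double[] angles = new double[times.length - 1];
        for (int i = 0; i < angles.length; i++) {
            angles[i] = Math.atan2(steps[i + 1] - steps[i],
                                   times[i + 1] - times[i]);
        }
        return angles;
    }

    // Mean absolute change of angle between consecutive segments:
    // 0 for a straight line, larger for jagged, "nervous" contours.
    public static double meanTurn(double[] times, double[] steps) {
        double[] a = segmentAngles(times, steps);
        double sum = 0;
        for (int i = 1; i < a.length; i++) {
            sum += Math.abs(a[i] - a[i - 1]);
        }
        return a.length > 1 ? sum / (a.length - 1) : 0;
    }
}
```

A steadily rising scale scores 0; a one-step zigzag at the same tempo scores about pi/2 per note. Game state could then set a target range for this number.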

Overlay this with another map of dissonance/consonance against the current tone center, and you have degrees of restfulness. Some melodies stay close to the tone center; others tend to come to resting points only on relatively dissonant tones, creating a sense of relative instability or restlessness. Game state can determine which way to tend.
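That overlay could be as simple as a lookup table. In this sketch (my own; the specific weights are illustrative, reflecting one conventional ranking of interval consonance rather than anything prescribed here), each pitch is scored by its interval above the current tone center:

```java
// Hypothetical restfulness map: consonance of a note against a tone center.
public class Restfulness {
    // Index = interval in semitones above the tone center (0..11).
    // Higher = more restful. The values are illustrative, not canonical.
    static final double[] CONSONANCE = {
        1.0, // unison
        0.1, // minor 2nd
        0.3, // major 2nd
        0.6, // minor 3rd
        0.7, // major 3rd
        0.8, // perfect 4th
        0.2, // tritone
        0.9, // perfect 5th
        0.6, // minor 6th
        0.7, // major 6th
        0.3, // minor 7th
        0.2  // major 7th
    };

    public static double restfulness(int midiNote, int toneCenter) {
        int interval = ((midiNote - toneCenter) % 12 + 12) % 12; // pitch-class interval
        return CONSONANCE[interval];
    }
}
```

A melody generator could then bias its choice of resting note toward high or low restfulness depending on game state.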

Composing is a bit like being a mime: you create illusions. For example, a melodic line (one that has established it is a line by behaving in a line-like way, rather than a random way) first has to create the space, perhaps a wall, by bumping into it (think of a melody that bounces off of a certain note rather than progressing beyond it) and then, with a more vigorous approach, bursts past that “barrier.” Or, using our Laban Effort scales translated into transformational methods for the motivic material, vary the approach to that barrier in a game-appropriate way.
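Here is a deterministic toy version of that wall illusion (entirely my own sketch, nothing from any real generator): a line climbs one scale step at a time, bounces off a barrier note a fixed number of times, and then finally pushes past it.

```java
// Toy "bounce off the wall, then break through" melodic shape generator.
public class Barrier {
    // start: lowest degree; barrier: the wall note; bounces: how many
    // times to turn back at the wall before finally passing it.
    public static int[] bounceThenBreak(int start, int barrier,
                                        int bounces, int length) {
        int[] out = new int[length];
        int pos = start, dir = 1, hits = 0;
        out[0] = pos;
        for (int i = 1; i < length; i++) {
            int next = pos + dir;
            if (next > barrier && hits < bounces) {
                hits++;                 // hit the wall: turn back down
                dir = -1;
                next = pos + dir;
            } else if (next < start) {
                dir = 1;                // reached the floor: climb again
                next = pos + dir;
            }
            pos = next;
            out[i] = pos;
        }
        return out;
    }
}
```

With start 0, barrier 4, one bounce, and 16 notes this yields 0 1 2 3 4 3 2 1 0 1 2 3 4 5 6 7: the line touches the wall, retreats, and breaks through on the second approach. An Effort-driven version would vary the step sizes and velocities of each approach instead of keeping them uniform.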

I would love to collaborate on laying out some of these ideas into code form. It’s an ambitious project, and there’s a lot of virgin territory here, though I think there are already some pretty sophisticated programs that are able to identify the composer of a composition by purely analytical methods, or even generate new compositions “in the style of.”

Thanks for the very good answers to my questions :slight_smile:

I just have to get an editor up and running and provide more examples so that musicians like yourself can start to experiment with this software. It would be great to implement new features and add better abstractions with good feedback as well.

You’ve got some really good ideas there! I’d love to cooperate. But as I said before, a reasonably good editor is necessary for this type of composition :).

Very cool!

One thing I was wishing my editor could do (I use either Finale or Sonar HomeStudio) is take a “finished” piece and automatically chunk it into overlapping wavs (or whatever format) to aid branching structures. Not sure what direction you are going with your editor, but something like this: one could set it to take a 20-measure section, for example, and export 20 “single” measures, but in such a fashion that each export includes the reverb tails, the instrument decays, et cetera for each measure. The segmented playback stream could then either be played in an overlapping fashion and sound seamless, or be interrupted at any measure without sounding like it was suddenly cut off.
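The bookkeeping for that kind of export is simple. As a sketch (my own; not a feature of Finale, Sonar, or the editor under discussion), each measure's render window starts on its own downbeat but extends a fixed tail past the barline to capture reverb and instrument decay:

```java
// Sketch of overlapping-segment export windows with decay tails.
public class Segmenter {
    // Returns {startSec, endSec} for each measure. endSec overlaps
    // the following measures by tailSec so reverb/decay is included.
    public static double[][] segments(int measures, double secPerMeasure,
                                      double tailSec) {
        double[][] out = new double[measures][2];
        for (int i = 0; i < measures; i++) {
            out[i][0] = i * secPerMeasure;
            out[i][1] = (i + 1) * secPerMeasure + tailSec;
        }
        return out;
    }
}
```

At playback, each segment is started on its measure's downbeat; the tails of earlier segments keep ringing underneath, so a branch at any barline decays naturally instead of cutting off.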

Or any other idea that supports branching music or ability to easily view/audit the sorts of transforms that one might try to do. That is on my wish list. I will try to stop now and let you get on with your work!

The editor that I am building now is all about creating the reactive components that produce Midi-like data one measure at a time. The result can be synthesized by a Midi synthesizer (such as Gervill or any VST instrument) or converted to a Midi file.

It would not be difficult to create a segmented result that can be used for loops but then you probably want to use FMOD or some other tool for sequencing the segments. My editor is more about creating branching/parallel/sequence actions that control the Midi music generation in real-time.

I have made a long video that demonstrates two modules and the editor:

The video is long and if you want to skip forward, the second demo is at 6:08 and the editor demo is at 8:00.

amazing stuff

So cool how you condensed music into just 3 controls.

Love the high-intensity, high-harshness unhappy music; it feels like the Nintendo music when time’s running out in Mario or something.

How much CPU does it take?

And if you and I team up and hook it up to some AI (or my little brother has lots of free time) could we pump out tunes for top-20 pop hits?! :smiley:

CPU usage: 2% for the first demo when everything is maxed.

The CPU usage can easily be made to be dominated by the synthesizer, if it is of the software kind.

The algorithms I use are not so complicated (no search or constraint processing involved), and the music generator only runs every 1-2 seconds, when a new measure is generated.
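That measure-at-a-time scheme is why the generator's cost stays negligible: it wakes once per measure rather than per audio buffer. A rough sketch of the pattern (my own; the `generateMeasure` callback is hypothetical, not the actual implementation):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a measure-at-a-time generation loop.
public class MeasureClock {
    // Milliseconds per measure for a given tempo and meter.
    public static long measureMillis(double bpm, int beatsPerMeasure) {
        return Math.round(beatsPerMeasure * 60_000.0 / bpm);
    }

    // Fire the (hypothetical) generator callback once per measure;
    // all synthesis happens elsewhere, so this thread is almost idle.
    public static ScheduledExecutorService start(double bpm, int beats,
                                                 Runnable generateMeasure) {
        ScheduledExecutorService ses =
            Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(generateMeasure, 0,
                                measureMillis(bpm, beats),
                                TimeUnit.MILLISECONDS);
        return ses;
    }
}
```

At 120 BPM in 4/4 this is one callback every 2 seconds, which matches the 1-2 second cadence described above.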

It is now possible to test the editor:

http://www.springworldgames.com/rmleditor

Cool, will there be any tutorials on how to use it? :smiley:

That is certainly my intention. It is very difficult to use it without knowing where to start :slight_smile:


if (externalInterest + myEnergy > tutorialThreshold) {
  spawnTutorial();
}

There is now a tutorial draft available here:

http://springworldgames.com/rmleditor/tutorials/tutorial_1.html

New video available:

Some new features implemented:

  • Ornament zones (for flams, rolls, grace notes etc.)
  • Settings for reverb + chorus types
  • Chromatic connection notes
  • Midi echo
  • Improved note velocity pattern generation

Preset 1 in the RML editor is my current game music prototype. The editor is available here:
http://www.springworldgames.com/rmleditor/

Glad to see you’re still on this :stuck_out_tongue: I’ll definitely use this, once I add music :smiley:

Nice to hear, Mads!
The current plan is to release the source for music generation once it becomes clean enough and add useful classes for playing the result with minimal dependencies.

I love this! The full track/video was quite wonderful.

I hope you are able to continue to expand upon this!

Source code for the Music Generator and RML is now available.

I have two versions of the source code:

The first one requires Gervill to compile out of the box, but can be used without it with some magic.
The second will compile directly.

There are not many examples available for using the API, but that will change.