Subject: MAX Digest - 25 Aug 1998 to 26 Aug 1998
Date: Thu, 27 Aug 1998 00:00:00 -0400
From: Automatic digest processor 
Reply-To: MAX - Interactive Music/Multimedia Standard Environments
To: Recipients of MAX digests 

There are 5 messages totalling 146 lines in this issue.

Topics of the day:

  1. MAX/msp in performance
  2. polyphonic voice allocation in MSP
  3. Fast time-stretching with MSP? (2)
  4. HyperPrism Vst

Email to MAX should now be sent to
LISTSERV commands should be sent to
Information is available on the WEB at


Date:    Tue, 25 Aug 1998 10:12:15 +0100
From:    Lawrence Casserley 
Subject: MAX/msp in performance


Just in case any of you are in the London area in September, I thought
you might like to know about a performance using MAX/msp at the 1998
Colourscape Music Festival on Clapham Common, off Rookery Road, London
SW4, UK.

A new version of my music-theatre piece "Labyrinth" will use ISPW to
process the sound of Simon Desorgher's flute, and MAX/msp on the G3 to
process my voice and percussion. The performance also includes dancer Jo
Shapland in a re-telling of the Theseus and the Minotaur legend.

The Mac will also be used to do some of the processing for Simon
Desorgher's "Chakras", which uses sounds of the human body to represent
the colour Chakras and relate them to the colours of Colourscape.

More information about the festival at:

End of commercial and apologies to those not interested! :-)


Lawrence Electronic Operations -Tel +44 1494 481381 -FAX +44 1494 481454
Signal Processing for Contemporary Music -email


Date:    Wed, 26 Aug 1998 11:58:52 -0700
From:    Matt Wright 
Subject: polyphonic voice allocation in MSP

I noticed that Max's poly object seems to work fine with any
integer as the "key number", not necessarily in the range 0-127.

That means you could assign arbitrary "note IDs" to each note and
still use poly to do voice stealing.  And of course the "split
and join it later" patch posted recently will let you have a big
arbitrary list of floats (or symbols or whatever) with all the
info for each note.
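The idea above — voice stealing keyed by arbitrary integer note IDs rather than MIDI key numbers 0-127 — can be sketched in Python. This is an illustrative model only, not how poly is actually implemented; the class name, oldest-note stealing policy, and retrigger behaviour are all my assumptions:

```python
class VoiceAllocator:
    """Hypothetical sketch of poly-style voice allocation with
    arbitrary note IDs (assumed oldest-note stealing policy)."""

    def __init__(self, num_voices):
        self.active = {}                    # note_id -> voice number
        self.order = []                     # note IDs, oldest first
        self.free = list(range(num_voices))

    def note_on(self, note_id):
        if note_id in self.active:          # retrigger same ID on same voice
            return self.active[note_id]
        if self.free:
            voice = self.free.pop(0)
        else:                               # all voices busy: steal the oldest
            oldest = self.order.pop(0)
            voice = self.active.pop(oldest)
        self.active[note_id] = voice
        self.order.append(note_id)
        return voice

    def note_off(self, note_id):
        voice = self.active.pop(note_id, None)
        if voice is not None:
            self.order.remove(note_id)
            self.free.append(voice)
        return voice
```

Nothing here cares that the IDs fit in 0-127 — negative numbers or huge integers work equally well, which is the point of the observation above.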



Date:    Wed, 26 Aug 1998 12:04:31 -0700
From:    Matt Wright 
Subject: Re: Fast time-stretching with MSP?

(Forgive my unattributed quoting of earlier messages on this topic...)

>i think the most straightforward and effective way to do this is to simply
>vary the playback speed of the audio (from a buffer~, presumably), and
>shift the pitch simultaneously to compensate for the variation. getting
>this to sound good is a bit of a challenge...

The pitch shifting algorithms that I know of do in fact vary the playback
speed of the audio, and then compensate for the change in timing by some
sort of granular synthesis-ish windowing or overlap-add.  (If you're pitch
shifting up, you have to play your sample faster, so somewhere you have to
find a way to repeat little sections of your sample often enough so that
the overall timing comes out about right.  Conversely, when pitch-shifting
down, you need to find little bits to omit entirely.)
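A minimal sketch of that idea in Python (not MSP code; the function name, Hann window, and grain/hop sizes are my assumptions): grains are taken from the input at the original rate, but the playback speed *inside* each grain is varied, so overlapping grains repeat or omit little bits of the sample exactly as described above.

```python
import math

def pitch_shift_granular(x, ratio, grain=256, hop=64):
    """Naive granular pitch shifter (illustrative sketch).
    ratio > 1 shifts up: each grain is read faster, and the
    overlapping grains re-cover the skipped material so the
    overall timing stays roughly the same."""
    out = [0.0] * len(x)
    # Periodic Hann window; at hop = grain/4 the overlapped windows sum to 2
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / grain) for n in range(grain)]
    for start in range(0, len(x) - grain, hop):
        for n in range(grain):
            src = start + n * ratio          # read faster/slower within grain
            i = int(src)
            if i + 1 >= len(x):
                break
            frac = src - i
            sample = x[i] * (1 - frac) + x[i + 1] * frac  # linear interp
            out[start + n] += win[n] * sample
    return out
```

With ratio = 1 the overlap-add reconstructs the input (scaled by the constant window-overlap sum of 2), which is a handy sanity check; getting it to sound *good* at other ratios is, as noted above, the hard part.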

>>user can influence the speed "conducting"
>>([using]...some MIDI ...device).

You might be interested in the paper "An Adaptive Conductor Follower" by
Mike Lee, Guy Garnett, and David Wessel, from the 1992 San Jose ICMC.  I
believe they used a Buchla Lightning as a baton, then decoded the MIDI
spatial positioning information in Max via the neural net object (thanks
for the plug, Les) to produce tempo information.  I know that Guy did a
piece for orchestra and electronics where the one human conductor used this
technology and the human orchestra members and the Max-controlled
electronics both followed him.

>How about using an fft analysis/resynthesis approach ?

Once you get into the frequency domain it's much easier to have independent
control of pitch and timing, but the "price of admission" to the frequency
domain is getting an analysis that accurately represents the original sound
in a mutable way, and that can be a bit tricky.  (!)  One big issue is that
once you start varying the playback speed you need to worry about the



Date:    Wed, 26 Aug 1998 12:35:23 -0700
From:    dudas 
Subject: Re: Fast time-stretching with MSP?

Oeyvind Brandtsegg writes:
>How about using an fft analysis/resynthesis approach ?
[to do real-time time stretching]

as far as I know and understand (and maybe I'm totally and completely
wrong) the csound fft deals with fft bins of amplitudes and instantaneous
frequencies (the difference between fft analysis window phases with respect
to the distance between successive analysis windows).
this makes time stretching easy because you just change the analysis window
step, and presto! the csound pvoc function recreates the correct phases for
each successive resynthesis - so you do not get heavy amplitude modulation
due to phase cancellation.
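The amplitude/instantaneous-frequency representation described above can be sketched in a few lines of Python (illustrative only, not csound internals; the function name and arguments are my invention): the deviation of a bin's measured phase advance from the advance you would expect at its center frequency, taken over one analysis hop, gives the bin's true frequency.

```python
import math

def instantaneous_freq(phase_prev, phase_cur, bin_index, fft_size, hop, sr):
    """Instantaneous frequency (Hz) of one FFT bin from the phases of
    two successive analysis frames spaced `hop` samples apart."""
    # Phase advance expected if the bin held exactly its center frequency
    expected = 2 * math.pi * bin_index * hop / fft_size
    # Measured advance minus expected, wrapped into (-pi, pi]
    delta = (phase_cur - phase_prev) - expected
    delta -= 2 * math.pi * round(delta / (2 * math.pi))
    # True frequency = center frequency + deviation
    return (expected + delta) * sr / (2 * math.pi * hop)
```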

doing time-stretching with the msp fft~ object would be a bit tricky,
because it hands you real and imaginary fft bins which you have to then
convert to amplitude and phase, and THEN keep running tabs on the phase
part in order to calculate the instantaneous frequency. the result would
need to be stored in a buffer~ someplace and the reading of it delayed
before doing the whole thing in reverse!
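The "running tabs on the phase" part can also be sketched (again a hypothetical Python model, not an fft~ patch): once you have each bin's instantaneous frequency per frame, resynthesis just re-integrates phase frame by frame at the *synthesis* hop, and choosing a synthesis hop different from the analysis hop is exactly where the time stretch happens.

```python
import math

def resynthesis_phases(inst_freq_frames, synth_hop, sr):
    """Re-integrate per-bin phases at the synthesis hop (the
    phase-vocoder resynthesis step, sketched).

    inst_freq_frames: one list of per-bin instantaneous frequencies
    (Hz) per analysis frame. A synthesis hop larger than the analysis
    hop stretches time; smaller compresses it."""
    num_bins = len(inst_freq_frames[0])
    phases = [0.0] * num_bins
    out = []
    for frame in inst_freq_frames:
        out.append(list(phases))
        for k in range(num_bins):
            # advance each bin by its true frequency over one synthesis hop
            phases[k] += 2 * math.pi * frame[k] * synth_hop / sr
    return out
```

Because each frame's phases are rebuilt from accumulated instantaneous frequency rather than copied from the analysis, successive frames stay phase-coherent at the new hop, avoiding the amplitude modulation from phase cancellation mentioned above.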

if anyone is dedicated enough to attempt to do this, please send me the
patch, because I'm too damn lazy to do it myself.

if anyone finds flaws in my understanding of the whole fft business, please
correct me, because I am by no means an authority on the subject.



Date:    Wed, 26 Aug 1998 17:42:26 +0000
From:    Mark 
Subject: HyperPrism Vst

Has anyone had experience using HyperPrism Vst plug-ins
with MSP yet? I am considering using them and would appreciate any
------------------------------ End of MAX Digest - 25 Aug 1998 to 26 Aug 1998 **********************************************