Thursday, September 17, 2009

NAudio Tutorial 7 – The Basics of MIDI Files


After the invigorating ride with the MIDI interface, I've done what I didn't originally set out to do and fallen for MIDI. It's been a bit of an arm's length association for me; I actually started developing OpenSebJ (and BeatIt before that) many years ago because I didn't want to buy a MIDI keyboard, and because I admittedly wasn't impressed with what I associated with MIDI – that tinny sound that streamed through your speakers when you browsed the internet and found someone's home page on a free hosting site, where they thought it would be wonderful to share a piece of music that could only be pitifully rendered through the inbuilt wave table on your Sound Blaster 16 (if you were so fortunate).

I digress; however, that history is somewhat important, as my focus has shifted since those humble beginnings. I now understand that MIDI does have a role in my future, for two primary reasons:

1) It's the industry standard for interfacing Audio Equipment with a computer

2) It's a standard file format that can be read and written by most audio applications, which means that layouts and scores using this information are almost universally transferable.

Don't get me wrong, I'm still a sucker for samples and that's where I'll end up targeting all of my development and time anyway, but MIDI, in and of itself, is certainly an assisting means to that end.

This NAudio tutorial will be focusing on the MIDI File Format; we will start with the basics before moving on to the more intricate elements within the format. If you haven't had a chance to review the other posts in the NAudio Tutorial series yet, you can find them here:

The Format of Events

We can basically think of a MIDI file as a collection of events. These events are the same type of events which were introduced in the previous tutorial. The NoteOnEvent is arguably the most important and it is made up of:

AbsoluteTime – The time when this event will occur, measured in ticks (see DeltaTicksPerQuarterNote below), not milliseconds
Channel – The channel (or you can think of it as the instrument), which this event relates to
NoteNumber – The number for the note; basically each note is assigned a number and this is how we work out which note on the scale will be played. Have a look at this nice SVG on Wikipedia which explains it.
Velocity – How hard we want to play the note
NoteLength (Duration) – How long the note is to be played for

So to put this together and create an event:

int AbsoluteTime = 1000; // The position on the track, in ticks
int Channel = 1; // Channel needs to be between 1 and 16
int NoteNumber = 54;
int Velocity = 127; // Velocity ranges from 0, which is considered off, to 127, which is the maximum
int Duration = 250;

NoteOnEvent note1On = new NoteOnEvent(AbsoluteTime, Channel, NoteNumber, Velocity, Duration);

NoteOnEvent note1Off = new NoteOnEvent(AbsoluteTime + Duration, Channel, NoteNumber, 0, 0); // This is in effect a note off event – letting us know that the note can stop playing now.

Each NoteOn needs a corresponding NoteOff. A note off is defined by the Velocity == 0. If we don't have a corresponding NoteOff for a NoteOn event we will get a lovely exception thrown informing us of our civic duty to add a NoteOff for every NoteOn.

One note on and note off event by itself is interesting but not very useful. If we want to keep a set of events together then we should use the MidiEventCollection.

The Collection of Events

A MidiEventCollection is exactly what the name suggests, a collection of MIDI events. However it is a very sophisticated collection and is structured in such a way that allows for easy translation to a Midi file when required. If we have a look at the constructor we have the following:

MidiEventCollection events = new MidiEventCollection(FileType, DeltaTicksPerQuarterNote);

The file type refers to what format we will be using for the MIDI file – we can set this to 1 for the purposes of this demonstration (format 1 supports multiple tracks).

DeltaTicksPerQuarterNote is what it implies – the number of ticks that make up a quarter note – but we won't be going into detail on this item in this tutorial; for now you can just set it to a value of 120.


The MIDI specification allows a number of tracks (think separate instruments) within the one file. Therefore each event needs to be associated with a track. In the file format we are working with in this example (format 1), Track 0 is used to store basic metadata about the composition. We add tracks to the MidiEventCollection like so:

int outputTrackCount = 2;
for (int track = 0; track < outputTrackCount; track++)
    events.AddTrack();

Add Events to the Collection

To add an event to a track all we need to do is use the Add method of the MidiEventCollection class. The track number is used as the array position identifier and the method then stores the events on that track – like so:
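The original snippet is missing here; a minimal sketch of that call, reusing the events collection and the two note events created earlier (assuming the MidiEventCollection indexer returns the track's event list, as in the NAudio API of the time):

```csharp
// Track 1 holds our note; the collection's indexer returns
// the list of events for that track
events[1].Add(note1On);
events[1].Add(note1Off);
```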


Export the Collection to a file (Save MIDI File)

Quick recap: we now have a single note defined, made up of two events, a NoteOn event and a corresponding NoteOff event. We have added two tracks to the MidiEventCollection, Track 0 and Track 1, and finally we have added the two events to Track 1. Before we export our lone-note composition, end markers need to be appended to each track. Fortunately there is a pre-supplied function for this which makes it rather straightforward; you will need to add it to your class though:

private void AppendEndMarker(IList<MidiEvent> eventList)
{
    long absoluteTime = 0;
    if (eventList.Count > 0)
        absoluteTime = eventList[eventList.Count - 1].AbsoluteTime;
    eventList.Add(new MetaEvent(MetaEventType.EndTrack, 0, absoluteTime));
}

Then it's just a matter of calling the method:
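The original call was not shown in the post; a sketch, assuming we want an end marker on every track and that the collection exposes a Tracks count (as the NAudio MidiEventCollection of the time did):

```csharp
// Append an end marker to every track in the collection
for (int track = 0; track < events.Tracks; track++)
    AppendEndMarker(events[track]);
```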


After this it's a matter of calling the Export function, passing in the file name where the file is to be saved and the MidiEventCollection storing all of the events, aka:

MidiFile.Export(filename, events);

That's seriously it. We have saved our Midi file to some location. Go play it and hear a single note, exciting.
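For reference, here is the whole tutorial rolled into one minimal sketch. The names and values are the ones used above; the AddTrack and indexer usage are assumptions based on the NAudio API of the time, and the output file name is a placeholder:

```csharp
using System.Collections.Generic;
using NAudio.Midi;

class MidiFileBasics
{
    static void Main()
    {
        // Format 1 file, 120 delta ticks per quarter note
        MidiEventCollection events = new MidiEventCollection(1, 120);
        events.AddTrack(); // Track 0 - meta data
        events.AddTrack(); // Track 1 - our note

        // One note: on at tick 1000, off 250 ticks later (velocity 0 == note off)
        NoteOnEvent note1On = new NoteOnEvent(1000, 1, 54, 127, 250);
        NoteOnEvent note1Off = new NoteOnEvent(1250, 1, 54, 0, 0);
        events[1].Add(note1On);
        events[1].Add(note1Off);

        // End markers for both tracks, then export
        AppendEndMarker(events[0]);
        AppendEndMarker(events[1]);
        MidiFile.Export("single-note.mid", events);
    }

    static void AppendEndMarker(IList<MidiEvent> eventList)
    {
        long absoluteTime = 0;
        if (eventList.Count > 0)
            absoluteTime = eventList[eventList.Count - 1].AbsoluteTime;
        eventList.Add(new MetaEvent(MetaEventType.EndTrack, 0, absoluteTime));
    }
}
```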

Other NAudio Tutorials

For more tutorials in this series, please see the following:

Sunday, August 30, 2009

UI Threading on WPF

While trying to hook up a number of development components, I needed to call a method in a custom control that would update the fill of a rectangle. Calling this method from within a ButtonClick event worked fine, however when I attempted to programmatically access the same method within the custom control I encountered an error:

InvalidOperationException - “The calling thread cannot access this object because a different thread owns it.”

It’s all due to the multi-threaded nature of what’s going on in the background – and thankfully that multithreading is going on, giving us our dynamic, performing environment. But there is a slight addition in complexity to resolve this issue, which is the Dispatcher.

The Dispatcher is linked to the UI thread to handle events which require access to UI elements, and in the WPF world it looks a bit simpler than the world of Win Forms. To utilise the Dispatcher there is a bit of setup to complete.

To put this example in context, look at the following screen shot:

<Screen shot to come another day, just imagine a piano keyboard for now>

This is part of a larger custom control, representing a keyboard scale. Each key is made up of a rectangle. The fill can be replaced with an alternate fill to indicate that the key is down and subsequently reverted back to the original fill when the key is released. This control has 2 methods to invoke this functionality:

ColourKey(int key);
UnColourKey(int key);

An instance of this control has been created and added on to a WPF window and linked, via the MIDI framework provided by NAudio, to a MIDI controller keyboard. These methods on the custom control can’t be called directly from a method not being invoked from the UI – i.e. not a button click. As such we need to utilise a delegate; I’ve opted for two in this example.

// This delegate enables asynchronous calls for setting
// the Colour of the key
delegate void dColourKey(int key);
delegate void dUnColourKey(int key);

These delegates provide a safe way to call the methods which have been set up on the custom control:

/// <summary>
/// Called via the delegate after using this.Dispatcher.Invoke
/// </summary>
/// <param name="key">The key to colour on the scale control</param>
private void ColourKey(int key)

/// <summary>
/// Called via the delegate after using this.Dispatcher.Invoke
/// </summary>
/// <param name="key">The key to UnColour on the scale control</param>
private void UnColourKey(int key)

To access these methods via the delegate we need to use the following:

this.Dispatcher.Invoke(new dColourKey(ColourKey),ne.NoteNumber);

A small explanation is in order. The first part is obviously calling the Invoke method on the Dispatcher. The two parameters are where it all happens. The first is initialising the delegate, and passing in the name of the method which is to be called – in this example, the name of the method is ColourKey:

new dColourKey(ColourKey)

The second is the argument which will be passed to the called method – in this case it’s the number of the note which is being played:

ne.NoteNumber
This small overhead ensures that we don’t have a situation where our InvalidOperationException will be generated.

Extended Learning:

One of the reasons I like to write up items like this is so that I can increase my own understanding of the topic. Being able to explain a concept to someone else not only assists the person you’re explaining it to but reinforces your own understanding, and through writing this post I’ve opened my own eyes further. I’ve refactored my own code and included the Dispatcher and delegate code in the custom control, which means it’s no longer required by the calling WPF form. Amazing.
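A sketch of what that refactoring can look like – this is my reading, not the exact code from the post. The control's method checks Dispatcher.CheckAccess itself and re-invokes through its own Dispatcher when called from a non-UI thread (such as the MIDI callback); the field names in the comment are hypothetical:

```csharp
public void ColourKey(int key)
{
    // If called from a non-UI thread, marshal the call back
    // through the control's own Dispatcher and bail out
    if (!this.Dispatcher.CheckAccess())
    {
        this.Dispatcher.Invoke(new dColourKey(ColourKey), key);
        return;
    }

    // Safe to touch UI elements from here:
    // swap the rectangle fill to show the key as pressed, e.g.
    // keyRectangles[key].Fill = pressedFill; (hypothetical fields)
}
```

With this in place the calling WPF form simply calls ColourKey(key) from any thread and the control sorts out the marshalling itself.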

Thursday, August 27, 2009

NAudio Tutorial 6 - MIDI Interfacing

It’s been a while.

So, kicking off from where we left off, this instalment is going to look at MIDI and how we can interpret control messages sent through the MIDI interface in our application. This tutorial, as per the others in the series, assumes that you have read the previous tutorials, as we will be building upon concepts and understanding from each of those sections. Feel free to read through this tutorial cold, but if you're looking for background on anything not covered here it would be best to check out the other tutorials as a first stop.

As you may by now expect, NAudio has a set of functions for this as well. You will find them in the NAudio.Midi namespace.

Setting It

Let's create a class to encapsulate the bulk of the MIDI functionality that we will be calling upon for this tutorial.

using NAudio.Midi;

namespace AudioInterface
{
    public class NAudioMIDI
    {
        public MidiIn midiIn;
        private bool monitoring;
        private int midiInDevice;

        // ... the methods shown below all live inside this class

Our midiInDevice represents which MIDI device on the system we want to use for this interface, in case you have more than one MIDI device connected. I only have a single MIDI device, however going through this process is obviously useful for those who have more than one, and it's useful to check that the MIDI device I have is actually plugged in and switched on.

Once we have defined what MIDI device we will be using, it will be initiated and the midiIn instance will relate to that device.

/// <summary>
/// Get a list of MIDI Devices
/// </summary>
/// <returns>string[] of MIDI Device Names</returns>
public string[] GetMIDIInDevices()
{
     // Create an array sized to the number of devices
     string[] returnDevices = new string[MidiIn.NumberOfDevices];

     // Get the product name for each device found
     for (int device = 0; device < MidiIn.NumberOfDevices; device++)
          returnDevices[device] = MidiIn.DeviceInfo(device).ProductName;

     return returnDevices;
}

Assuming that we want to allow the user to select a device from a list of devices, we would pass this list back to a control which will be populated with the available devices – with something like this from our Load method on the form class:

private void NAudioTutorial6_Load(object sender, EventArgs e)
{
      try
      {
           // Populate the devices available for the MIDI interface
           string[] MIDIDevices = AudioInterface.NAudioInterface.nMIDI.GetMIDIInDevices();
           foreach (string device in MIDIDevices)
                boxMIDIIn.Items.Add(device);
           boxMIDIIn.SelectedIndex = 0;
      }
      catch (Exception except)
      {
           System.Windows.Forms.MessageBox.Show("No MIDI Device Detected");
      }
}

Starting It

Brilliant – so now we have a list of available MIDI devices loaded into a list box control that the user can select from. Now we need to know when the user has actually chosen the MIDI device they would like us to monitor, so let's put a button on our UI to trigger this.

private void cmbMonitor_Click(object sender, EventArgs e)
{
    // Setup the MIDI interface to start monitoring the selected device
    AudioInterface.NAudioInterface.nMIDI.StartMonitoring(boxMIDIIn.SelectedIndex);

    // Add the event handler, to handle the MIDI messages received
    AudioInterface.NAudioInterface.nMIDI.midiIn.MessageReceived += new EventHandler<MidiInMessageEventArgs>(midiIn_MessageReceived);
}

When StartMonitoring is called we head back to the nMIDI instance we created earlier and (using the selected MIDI device) set up the midiIn device and set it to Start – which in turn kicks NAudio into gear to start monitoring MIDI messages received from the device.

public void StartMonitoring(int MIDIInDevice)
{
      if (midiIn == null)
          midiIn = new MidiIn(MIDIInDevice);

      midiIn.Start();
      monitoring = true;
}

Going back to cmbMonitor(…), we next set up the EventHandler for the MIDI messages which are going to be received:

// Add the event handler, to handle the MIDI messages received
AudioInterface.NAudioInterface.nMIDI.midiIn.MessageReceived += new EventHandler<MidiInMessageEventArgs>(midiIn_MessageReceived);

Playing It

For this to work we need an event handler method, set up to receive the messages, within the same class. From the line above you can see that the method is midiIn_MessageReceived, which we will have a look at now:

public void midiIn_MessageReceived(object sender, MidiInMessageEventArgs e)
{
     // Exit if the MidiEvent is null or is the AutoSensing command code
     if (e.MidiEvent == null || e.MidiEvent.CommandCode == MidiCommandCode.AutoSensing)
          return;

Assuming the MIDI event's command code represents a NoteOn event, we need to interpret which NoteOn event has been sent. To do this we cast the MidiEvent to a NoteOnEvent:

     if (e.MidiEvent.CommandCode == MidiCommandCode.NoteOn)
     {
          // As the Command Code is a NoteOn we need
          // to cast the MidiEvent to a NoteOnEvent
          NoteOnEvent ne = (NoteOnEvent)e.MidiEvent;

ne is now a NoteOnEvent, which has some specific MIDI attributes: a NoteNumber, an int that represents a single note from the full scale, and a Velocity, which represents how hard the MIDI note has been played – in this example, how hard the MIDI controller was pressed (assuming the MIDI controller you have can report this information, i.e. levels of sensitivity).

Each NoteNumber represents an incremental note on the scale, starting with C0 == 0, Db0 == 1 (aka C#0), D0 == 2, Eb0 == 3 (aka D#0), E0 == 4 and so on. For practical purposes (read: the set of piano samples I have tops out at 96, which is C8), two sets of notes have been mapped within the NAudioInterface class, in a single array. The first set of notes, 0 – 100, are considered mf (quiet). Notes 100 – 200 represent the same positions but contain samples that are ff (loud). Separating by a round 100 makes all the additions and subtractions needed to interface with these notes rather straightforward. This mapping is contained within the vKeys class and is a whole heap of excitement, if a long list of static mappings is your thing. A snippet of the class:

public static class vKeys


vFFKeysFileNames[48] = "ff.C4.wav";
vFFKeysFileNames[49] = "ff.Db4.wav";
vFFKeysFileNames[50] = "ff.D4.wav";
vFFKeysFileNames[51] = "ff.Eb4.wav";
vFFKeysFileNames[52] = "ff.E4.wav";

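The file names above follow directly from the note-number convention just described; a hypothetical helper (not part of vKeys) that derives a note name from a note number could look like this:

```csharp
// Map a MIDI-style note number (C0 == 0) to a name like "Db4",
// matching the sample file naming used in vKeys
static readonly string[] NoteNames =
    { "C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B" };

static string NoteName(int noteNumber)
{
    // 12 semitones per octave: index within the octave, then the octave number
    return NoteNames[noteNumber % 12] + (noteNumber / 12);
}

// e.g. NoteName(50) gives "D4", so the loud sample file is "ff.D4.wav"
```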

Ohh Ahh..

Back to the velocity: we have a number, ne.Velocity, which represents how hard the note has been played, and we use that to work out which sample should be played. If it's less than 50, the quiet sample is played, else the loud one.

          if (ne.Velocity < 50)
               AudioInterface.NAudioInterface.Play(ne.NoteNumber);        // quiet (mf) sample
          else
               AudioInterface.NAudioInterface.Play(ne.NoteNumber + 100); // loud (ff) sample

Stopping It

This means we can now play a note, and conversely we need to be able to stop playing a note. To fulfil this requirement we have the following, which is effectively the converse, with the exception that no checking of the velocity is required; instead all related samples are requested to be faded out, both the loud and the soft. One may ask why that is – basically, it's a model of the real instrument. When a single note on an instrument stops playing, all of the note stops playing. If it had first been played softly and then loudly but has then stopped being played, the note is no longer being played – regardless of original velocity. To this end, both sets of the notes are faded out.

     if (e.MidiEvent.CommandCode == MidiCommandCode.NoteOff)
     {
          NoteEvent ne = (NoteEvent)e.MidiEvent;

          // Fade out both the quiet and the loud samples for this note
          AudioInterface.NAudioInterface.FadeOut(ne.NoteNumber);
          AudioInterface.NAudioInterface.FadeOut(ne.NoteNumber + 100);
     }

Changing It

The home stretch, and in fact this could easily be left off. This last section relates to a controller value being changed. The controller, at least on the MIDI device I have, represents a set of buttons and knobs – the following code is more fixed than you would put in production code, but it suits the purpose of a tutorial and, most importantly, scratches an itch.

Determining if this is a ControlChange event, and assuming it is, the MidiEvent needs to be cast to a ControlChangeEvent:

     if (e.MidiEvent.CommandCode == MidiCommandCode.ControlChange)
     {
          ControlChangeEvent cce = (ControlChangeEvent)e.MidiEvent;

Similar to the NoteOnEvent, the ControlChangeEvent has a numerical value, the Controller attribute, which is used to determine which controller's value has been changed. For this example we are only monitoring one specific controller, 71. Just as the individual notes above carry a velocity, controllers have a ControllerValue, in the range 0 – 127. This controller has been used to define the fade-out time for the notes which are played; the longer the fade out, the more of the note's duration is heard.

          if ((int)cce.Controller == 71)
          {
               int timeOutValue;
               if (cce.ControllerValue < 127)
               {
                    // Calculate a sliding value for the fade out based on the
                    // ControllerValue. This could be dramatically improved..
                    // It is meant to be very granular at one end and more extreme
                    // at the other but the calculation could surely be improved.
                    timeOutValue = (int)Math.Exp(Math.Log(cce.ControllerValue) * 1.75);
               }
               else
               {
                    timeOutValue = 100000;
               }
          }

Finishing It

Tha, tha, that’s all folks.

For more NAudio guidance, please review the other NAudio tutorials in the series.

Thursday, April 09, 2009

MIDI Controller Acquired

Finally I've entered the world of MIDI. I have stood my distance ever since I started looking at Audio Composition, Programming and Development because of a perceived cost barrier and maybe I was right at the time, however things change and if the price hasn't then the accessibility of the price has.

Yesterday I handed over AUS $185 for a UMX49 made by Behringer, which is just a MIDI controller with some bundled software;

Why so compelling a purchase? Well, it comes with a cut-down copy of Ableton Live 4 (Lite Behringer Edition) and some other free and Open Source programs – which you can actually download from their website: it looks like a nice collection of VST plugins, along with everybody's favourite (or at least mine) Open Source audio sample production tool, Audacity. But some software by itself isn't that compelling, especially if OpenSebJ is missing, so what else? Well, the package also includes a USB "sound card" – the UCA200, which doesn't appear to be sold separately but seems to be basically a UCA202 minus the optical output, headphone jack and volume slider. No idea if the internals are the same, but for comparison the UCA202 is priced around AUS $50, and I was thinking of getting one of those previously to test if I'd get a reduction in latency for ASIO.

So what? Well, it looks nice too. The real key thing for me is that now I have a way to test MIDI signals and hook them up in the audio tools that I am developing. Soon you will be able to forget Ableton – OpenSebJ will be coming to a home near you, with MIDI interface support. vScaleNotes will get a face lift and start allowing all the keys on the fully sampled piano to be playable via MIDI, so people who actually know how to play a keyboard, rather than type, will be able to use it and produce some wonderful sounds.

Bring on the MIDI.

A word from our sponsors: We have none but if you would like to become a sponsor of OpenSebJ and vScaleNotes, please let us know what Audio Hardware you could supply to feed our new Audio Hardware craving, hmm or a shirt and hat would be nice too.

Sunday, April 05, 2009

NAudio Tutorial 5 - Recording Audio

Time for another instalment of the NAudio tutorials; this week we will be looking at how to record audio using NAudio, in two different recording scenarios. The first is using NAudio to record any and all sound coming from the local sound card input, whether that be from a microphone, the line-in device or the sound card's onboard wave mixer. The second approach is recording only the audio that has been mixed by NAudio, regardless of what other audio is being played on the system at the time. This is useful for scenarios where you want to play over a backing track, or play your samples against a click track from another program but don't want to record the click track. The additional advantage of recording the audio mixed directly from NAudio is that there is zero degradation in quality through the process; no audio playing means pure silence, rather than the almost-silence your average audio hardware would produce – there is always some level of noise when working with an analog signal.

This NAudio Recording audio tutorial builds upon the concepts presented in previous NAudio Tutorials, if you haven't yet had the opportunity to review them I suggest that you venture there first and resume reading this tutorial after you have understood the basic NAudio concepts.

Time for another disclaimer: the second approach discussed here, recording the mix directly from NAudio, has been suggested as a feature for inclusion in the main branch. I'm not sure if it fits into the long-term direction for the WaveMixerStream32 class; in any case, the code for these modifications has been included in this tutorial, and thanks to the Open Source nature of NAudio you can make these same changes to an instance of the library for yourself. You can find the specific details of this suggestion in this forum post:

If you have any feedback on this tutorial, drop me a line or post a question in the comments section below.

Download the full article (AbiWord and RTF Format), example C#.Net Source Code and tutorial program here.

Recording from the Sound Card

This is remarkably simple to achieve in NAudio, short of having a big red button which we push before it leaves the factory. First step is to setup.. ah forget the steps here is the code:

// WaveIn Streams for recording
WaveIn waveInStream;
WaveFileWriter writer;

waveInStream = new WaveIn(44100,2);
writer = new WaveFileWriter(outputFilename, waveInStream.WaveFormat);

waveInStream.DataAvailable += new EventHandler<WaveInEventArgs>(waveInStream_DataAvailable);

No joke, that's almost it. The only interesting thing here is that we have added an EventHandler, which needs to be set up to hand the data off to the WaveFileWriter when it's ready:

void waveInStream_DataAvailable(object sender, WaveInEventArgs e)
{
   writer.WriteData(e.Buffer, 0, e.BytesRecorded);
}

Er, that's it to start recording – well, almost; we also need to tell the device to begin:

waveInStream.StartRecording();

We can stop the recording as such, stopping the device and closing the writer before releasing them:

waveInStream.StopRecording();
waveInStream.Dispose();
waveInStream = null;
writer.Close();
writer = null;

See, it would have been too simple a tutorial if we stopped here, but feel free to stop reading and give it a crack. Using this method, any audio which isn't muted on your input mixer will be recorded; it's up to you and the Windows mixer API to decide what you want to record – isn't that nice. Except that you can't record audio from only your audio application if there are other applications playing sounds in the background – say you get a call on your VOIP connection right in the middle of the hottest composition ever, or someone PMs you in IRC, or you click around on your PC looking for that cool new sample to load, with all the button clicks and other useless sounds being saved into your mixed composition – oh no. Let's now look at how this unfortunate situation can be avoided.

Direct-To-Disk Recording via the NAudio WaveMixerStream32 Class

Now this is slightly more complicated, but much more fun, and presents you with a superior audio recording (especially on lousy or average audio hardware – say, my PC for instance). We will cover the code required within the calling application first, and then review the changes required within the NAudio library, just so we have some comparison of the amount of effort required for both approaches.


Assuming you already have the mixer defined (a WaveMixerStream32 instance, here called mixer), we start recording by naming the output file and kicking the stream off:

mixer.StreamMixToDisk(outputFilename);
mixer.StartStreamingToDisk();

That's all that is required to start recording. We can pause the streaming to disk by:

mixer.PauseStreamingToDisk();

Or resume by:

mixer.ResumeStreamingToDisk();

And finally stop by:

mixer.StopStreamingToDisk();
Easy enough but we should cover whats required for this to actually work right? So lets dive in to the modifications in the WaveMixerStream32.cs file and hack till our hearts are content. In the declaration section of the class we need to add the following:

// Declarations to support the streamToDisk recording methodology
private bool streamToDisk;
private string streamToDiskFileName;
WaveFileWriter writer;

Now we add in the methods that support our calls:

/// <summary>
/// Starts the Stream To Disk recording if a file name to save the stream to has been set up
/// </summary>
public void StartStreamingToDisk()
{
   if (streamToDiskFileName != "")
       streamToDisk = true;
}

/// <summary>
/// Pauses the stream to disk recording (no further blocks are written during the mixing)
/// </summary>
public void PauseStreamingToDisk()
{
   streamToDisk = false;
}

/// <summary>
/// Resume streaming to disk
/// </summary>
public void ResumeStreamingToDisk()
{
   streamToDisk = true;
}

/// <summary>
/// Stop the streaming to disk and clean up
/// </summary>
public void StopStreamingToDisk()
{
   streamToDisk = false;
   if (writer != null)
   {
       writer.Close();
       writer = null;
   }
}

/// <summary>
/// Set up the StreamMixToDisk file and initialise the WaveFileWriter
/// </summary>
/// <param name="FileName">FileName to save the mixed stream</param>
public void StreamMixToDisk(string FileName)
{
   streamToDiskFileName = FileName;
   writer = new WaveFileWriter(FileName, this.WaveFormat);
}

/// <summary>
/// Passes the final mixed data from the overridden Read method through to the WaveFileWriter
/// </summary>
/// <param name="buffer">Data to be written</param>
/// <param name="offset">The offset; should be 0 as we are taking the mixed data to write and want it all</param>
/// <param name="count">The total count of all the mixed data in the buffer</param>
private void WriteMixStreamOut(byte[] buffer, int offset, int count)
{
   // Write the data to the file
   writer.WriteData(buffer, offset, count);
}

All that's left is the modification to the Read method to pass this data to the WriteMixStreamOut method. Rather than pasting in the whole Read method – even though it may make it look like I've done some extra work – I'll just copy in the last 8 or so lines:

position += count;

// If streamToDisk has been enabled the mixed audio will be streamed directly
// to a wave file, so we need to send the data to the wave file writer
if (streamToDisk)
   WriteMixStreamOut(readBuffer, 0, count);

return count;

Having jammed the check for streaming out to disk after the final calculation and before the method exits gives us everything we need to stream to our file. So now we have two methods of recording audio data – and do you want to know my favourite part?

You can actually use both at the same time and get multi-track / multi-channel audio recording on the same machine with a fairly standard sound card!
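A sketch of wiring both recording paths up at once, reusing the names from the snippets above; the file names are placeholders and the mixer variable is assumed to be an existing WaveMixerStream32:

```csharp
// Record the sound card input (e.g. a guitar riff on line-in) ...
waveInStream = new WaveIn(44100, 2);
writer = new WaveFileWriter("lineIn.wav", waveInStream.WaveFormat);
waveInStream.DataAvailable += new EventHandler<WaveInEventArgs>(waveInStream_DataAvailable);
waveInStream.StartRecording();

// ... while simultaneously streaming the NAudio mix straight to disk,
// untouched by whatever else the sound card is doing
mixer.StreamMixToDisk("mix.wav");
mixer.StartStreamingToDisk();
```

Two separate wave files result: one with the line-in performance, one with the pure NAudio mix.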

I normally refrain from using exclamation points, but I was actually quite excited when I tested this. It means that someone can be jamming along on, say, a C# audio synthesizer / beat box or composition tool like OpenSebJ while another person is singing vocals or playing a guitar riff through line-in. I guess if you're really talented you could be doing both at the same time – perhaps singing along to the jam is more likely. Whatever it is, it can actually work; you can record both sets of audio separately, because the NAudio stream-to-disk method is not actually using your sound card to save the mixed result. Cool – well, I think so.

Download the example program and have a look for yourself.


As per usual, I've packaged up a copy of the entire article, along with a copy of the example program and source, for your consumption. For the modifications required to the NAudio library, I have also copied into the zip the modified version of WaveMixerStream32.cs for your convenience. Let me know if you have any questions or comments, or if you're keen to contribute to a project like OpenSebJ.

Until next time, when we look at – well, I haven't actually decided yet. There are two things still on the list from Tutorial 3, however I don't think they are currently the items piquing my interest, so let's assume it will most likely be something from the list below:

  •  Adding Audio Effects to a Stream 

  •  Transposing the frequency of the stream being played back

  •  Using MIDI to trigger audio samples

  • Playing compressed Audio (MP3 & OGG)

  •  Or something else that takes my fancy, write to me and suggest what that may be.

If you haven't already; Download the full article (AbiWord and RTF Format), example C#.Net Source Code and tutorial program here.