MIDI, as you may know, is an integral part of the music production process, and understanding it can unlock the power of your audio software and electronic instruments. It stands for Musical Instrument Digital Interface, and is a protocol that was developed in the early 1980s.
When some people think of MIDI, they cringe at the memory of the amateurish sounds of General MIDI. Since the 80s, however, sampling and synthesis have only gotten better, and bedroom producers can now create soundscapes and play instruments very realistically at a fraction of the old cost.
The standardization of the protocol is what allowed the technology to evolve so quickly. Imagine if we constantly had different standards competing against each other: it would be the typical VHS versus Betamax, HD DVD versus Blu-ray, and so on.
Competition is great for progress, but I think we can agree to be thankful for the small community in the 80s that allowed MIDI to prosper and become the standard.
The Difference Between MIDI and Audio
MIDI is really nothing other than a language that allows musical devices to communicate with each other. There is no audio in the MIDI messages. The messages are there to tell the synthesizer or sampler which audio file or patch to play, which notes to play, and other performance information.
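To make "no audio, just messages" concrete, here is a minimal sketch of what a controller actually sends when a key is struck: three bytes, nothing more. The helper function is illustrative, not from any real library.

```python
# Status byte 0x90 means "Note On, channel 1"; the next two bytes are the
# note number and the velocity, each in the range 0-127.
NOTE_ON_CH1 = 0x90

def note_on(note, velocity):
    """Build the three raw bytes a controller sends when a key is struck."""
    assert 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([NOTE_ON_CH1, note, velocity])

msg = note_on(60, 100)   # middle C, played fairly hard
print(msg.hex())         # → 903c64
```

Everything a synth needs to trigger the right patch at the right loudness fits in those three bytes; the actual sound lives entirely on the receiving end.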
A common scenario today pairs MIDI hardware, such as a keyboard controller with no built-in sounds, with a software synthesizer that has thousands upon thousands of different samples to choose from.
Why Use MIDI?
The advantages of using MIDI are numerous:
Any instrument, at any time – Software synthesizers are so advanced today that they give the player an amazing amount of control over their arrangement. At a recent NAMM panel, Craig Anderton said that MIDI gives players a lot of compositional power: keyboard players can play a multitude of different instruments to compose their pieces. If they want an orchestra, they can have one at their fingertips, without having to call up Al Schmitt and book the San Francisco symphony orchestra.
Size – Since MIDI is just pure information, songs composed entirely of MIDI messages are very small. Obviously, you'll run into trouble if you transfer your MIDI song to another computer that doesn't have the same synthesizers for reproducing your sounds, but overall the tiny file size is a real convenience.
Options – Even if you do transport your MIDI song from computer to computer, you can always find another synthesizer to substitute for your old one. Since it is just information, swapping in a different synth is simple. Say you don't have the best soft synths in your bedroom studio, but you compose your song with what you have. You could then take it to someone with the most expensive samples and software synths and instantly jazz it up.
Manipulation – MIDI allows you total flexibility when it comes to manipulating your performance. No need to re-record a performance that was only lacking in a few parts; just move the notes around to make it fit. Quantization gets a lot of flak for making performances sound robotic, but the flip side is how useful and time-saving it is to be able to manipulate your musical data directly.
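The note-moving and quantization described above amount to simple arithmetic on note start times, which is exactly why it is so cheap to do with MIDI data. A minimal sketch, with times measured in beats and a sixteenth-note grid as an assumed setting:

```python
def quantize(start_times, grid=0.25):
    """Snap each note's start time (in beats) to the nearest grid line.
    With grid=0.25, notes snap to sixteenth notes in 4/4."""
    return [round(t / grid) * grid for t in start_times]

# A slightly sloppy performance...
played = [0.02, 0.27, 0.49, 0.77]
# ...snapped to a sixteenth-note grid:
print(quantize(played))  # → [0.0, 0.25, 0.5, 0.75]
```

Doing the same cleanup on recorded audio would mean slicing and crossfading waveforms; on MIDI it is just rounding numbers.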
What Can You Manipulate?
As mentioned above, you can use MIDI to play a plethora of different instruments. But that's not all.
You can also use MIDI for any number of functions and program your keyboard to do many things besides play music. You can use your keyboard for transport control, for muting aux sends, and the like. I've even seen Nintendo Wii controllers used as MIDI controllers in DAWs, so it's definitely a very flexible protocol.
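Those non-note functions typically ride on Control Change messages, which a DAW can map to almost anything. Here is a sketch in the same three-byte style as a note message; the mute mapping is an assumption for illustration (CC 7 is conventionally channel volume).

```python
# Status byte 0xB0 means "Control Change, channel 1"; the next two bytes
# are the controller number and its value, each 0-127.
CONTROL_CHANGE_CH1 = 0xB0

def control_change(controller, value):
    """Build a three-byte Control Change message."""
    assert 0 <= controller <= 127 and 0 <= value <= 127
    return bytes([CONTROL_CHANGE_CH1, controller, value])

# e.g. a knob sending CC 7 at zero, which a DAW could map to muting an aux:
print(control_change(7, 0).hex())  # → b00700
```

The receiving software decides what the controller number means, which is exactly how a keyboard fader, a Wii controller, or anything else that emits these bytes can drive a DAW.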
To quote one of the guys at the NAMM panel about MIDI:
"MIDI is just a great way to make music."
An Introduction to MIDI Parameters
MIDI parameters have a data range of 0–127. This is most easily explained with the note parameter and velocity.
Simply put, note 0 is a C below the bottom of a piano keyboard, while note 127 is a G more than ten octaves above it. Middle C sits at 60, so if you play a middle C, MIDI registers it as simply '60'. Robotic and efficient.
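The note numbering follows a fixed pattern of twelve semitones per octave, so converting between numbers and pitch names is mechanical. A small sketch, using the common convention where note 60 is called C4 (some manufacturers label middle C as C3 instead):

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def name_of(note):
    """Convert a MIDI note number (0-127) to a pitch name,
    with note 60 as C4 (middle C)."""
    octave = note // 12 - 1   # note 0 falls in octave -1
    return f"{NOTE_NAMES[note % 12]}{octave}"

print(name_of(60))   # → C4
print(name_of(0))    # → C-1
print(name_of(127))  # → G9
```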
Velocity is one of the integral parameters of sampling with MIDI. Whenever you play a note on a MIDI controller, a velocity value between 0 and 127 is sent along with it. The harder you play, the closer the value is to 127. So when someone creates samples for a MIDI instrument, they record multiple audio files of the same note played at different volumes.
And the better the software synth, the more work went into creating the samples. Imagine hiring a pianist to create the samples for your software synth. First you would ask him to play very softly, almost inaudibly, and record that sample for the lowest velocity values. From there you would record him playing progressively harder and harder until you had filled out all the velocity values and ended up with a thoroughly sampled piano synth.
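The sampling process above ends with the velocity range carved into layers, each mapped to one recording. A minimal sketch of how a multi-sampled synth might pick a file; the filenames and split points are hypothetical, not from any real instrument:

```python
# A hypothetical sampler with four velocity layers for one piano note.
LAYERS = [
    (31, "piano_C4_pp.wav"),   # velocities 0-31: played very softly
    (63, "piano_C4_mp.wav"),   # 32-63
    (95, "piano_C4_mf.wav"),   # 64-95
    (127, "piano_C4_ff.wav"),  # 96-127: played hard
]

def sample_for(velocity):
    """Pick which recorded sample to trigger for an incoming velocity."""
    for top, filename in LAYERS:
        if velocity <= top:
            return filename

print(sample_for(100))  # → piano_C4_ff.wav
```

A deeply sampled instrument simply has more, finer layers (often dozens), so adjacent velocities trigger genuinely different recordings rather than one recording at different volumes.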
Then there are synths that trigger the same audio sample regardless of how hard you play. Remember those old snare samples that all sounded identical, giving you that machine-gun snare effect? That sound has been used creatively, but not much thought went into velocity: the only thing that changed was the volume, not the touch and feel.
Velocity also appears in other MIDI parameters. For instance, 'Note On' and 'Note Off', the messages MIDI registers when you press and release a key, can each carry their own velocity value. A pad may swell differently and have a different attack depending on how hard you press down on the keyboard, so the velocity attached to 'Note On' captures this.
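Putting the pieces together, a sketch of how a receiver might read those messages, keeping the velocity byte in both directions: attack velocity on Note On, release velocity on Note Off. One wrinkle worth noting is that a Note On with velocity 0 is conventionally treated as a Note Off.

```python
def describe(msg):
    """Interpret a three-byte note message on channel 1."""
    status, note, velocity = msg
    kind = status & 0xF0          # upper nibble selects the message type
    if kind == 0x90 and velocity > 0:
        return f"Note On {note}, attack velocity {velocity}"
    if kind == 0x80 or (kind == 0x90 and velocity == 0):
        return f"Note Off {note}, release velocity {velocity}"
    return "other message"

print(describe(bytes([0x90, 60, 112])))  # → Note On 60, attack velocity 112
print(describe(bytes([0x80, 60, 40])))   # → Note Off 60, release velocity 40
```

Release velocity is less widely supported than attack velocity, but instruments that do respond to it can shape how a note decays based on how quickly you let go of the key.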