Wee have also Sound-houses, where we practise and demonstrate all Sounds, and the Generation. Wee have harmonies which you have not, of Quarter-Sounds, and lesser Slides of Sounds. Diverse Instruments of Musick likewise to you unknowne, some sweeter than any you have; Together with Bells and Rings that are dainty and sweet. Wee represent Small Sounds as well as Great and Deepe; Likewise Great Sounds, Extenuate and Sharpe; Wee make diverse Tremblings and Warblings of Sounds, which in their Originalle are Entire. Wee represent and imitate all Articulate Sounds and Letters, and the Voices and Notes of Beasts and Birds. Wee have certain Helps, which sett to the Eare doe further the Hearing greatly. Wee have also diverse Strange and Artificiall Echo's, Reflecting the Voice many times, and as it were Tossing it: And some that give back the Voice lowder than it came, some Shriller, some Deeper; Yea some rendering the Voice, Differing in the letters or Articulate Sound, from that they receyve. Wee have also means to convey Sounds in Trunks and Pipes, in strange Lines, and Distances.

Francis Bacon, The New Atlantis, 1624.

I worked at Electronic Music Studios Ltd (EMS) in London, England from 1969 until 1973. (The web pages for EMS are now maintained by Graham Hinton.) My principal contribution to the studio was the MUSYS system for the composition and performance of electronic music. EMS was owned and controlled by Peter Zinovieff. David Cockerell built hardware for the studio and I wrote software. One of my rewards was to become an early owner of a VCS-3 (at left). I sold mine cheaply, not realizing that, after more than 30 years, VCS-3s would become valuable again.

At left, Peter Zinovieff sets up the Synthi 100. There are larger pictures of the studio: in black and white, and colour.

I met many composers at EMS and had the privilege of working with a few of them: see left.

EMS was unusual for the time, because it had two minicomputers, named after their owner's children. The first computer, a PDP8/S (Sofka), was acquired in 1967; the second, a PDP8/L (Leo), in 1968. (In these notes, "Sofka" and "Leo" refer to these computers, not to people.) In the photo, Leo is in Rack #1, below the tape unit with four spools, and Sofka is in Rack #3, near the top.

Both computers were slow by modern standards: Sofka executed about 100,000 instructions per second and Leo about 600,000. They were also small: Sofka had 8K 12-bit words of RAM (about 12 KB) and Leo had 4K 12-bit words (about 6 KB). Digital signal processing was far beyond their capabilities. Instead, the existing voltage-controlled devices were modified for digital control. Equipment built after the computers had been installed was designed for digital control.
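The arithmetic behind those memory figures is easy to check. A short modern sketch (Python, assuming the usual 8-bit byte, which of course was not a PDP-8 concept):

```python
# Converting PDP-8 core sizes (12-bit words) to modern 8-bit bytes.
def words_to_bytes(words, word_bits=12):
    """Total storage of `words` words of `word_bits` bits, in 8-bit bytes."""
    return words * word_bits // 8

sofka = words_to_bytes(8 * 1024)  # 8K 12-bit words
leo = words_to_bytes(4 * 1024)    # 4K 12-bit words
print(sofka, leo)  # 12288 and 6144 bytes, i.e. 12 KB and 6 KB
```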

Sofka was originally used simply as a glorified sequencer. Composers were required to type in long lists of numbers, and a simple program would deliver the numbers to the devices. Of course, the lists could be generated by another computer. For example, Alan Sutcliffe used an ICL computer to realize a stochastic composition that was performed at the Queen Elizabeth Hall in 1968 by Sofka.

When I joined EMS, early in 1969, Leo had just been acquired. Since Leo was an order of magnitude faster than Sofka, and Sofka was already connected to various electronic music instruments, we decided that Leo should do the computing and then pass results to Sofka for delivery. My first job was to write code for Leo to cooperate with code that Peter Zinovieff would write for Sofka. This code eventually became MUSYS/1. For later versions of MUSYS, I wrote the code for both computers.

Sofka controlled a motley assortment of equipment. The main sound source was a bank of 64 oscillators providing, effectively, an "organ" with three "manuals" and a range of 5 1/2 octaves. The oscillators could be tuned arbitrarily but were usually tuned to the conventional chromatic scale. The three outputs from the oscillator bank were passed through envelope shapers, filters, and a mixer to produce the final signal.

In addition to these instruments, Sofka also controlled six amplifiers, three digital/analog converters, three "integrators" (devices that generated voltages that varied linearly with time), twelve audio switches, six DC switches, and a 4-track Ampex tape-deck. EMS used patch panels rather than patch cables; the patch panel for the computer-controlled equipment is below Sofka in Rack #3. Inserting a pin into a patch panel links an input to an output. Since the audio and DC switches were routed through the patch panel, parts of the patch could be changed by the computer during performance.

These devices could be controlled by a low-bandwidth data stream. For example, a single note could be specified by: pitch, waveform, amplitude, filtering, attack rate, sustain rate, and decay time. Some of these parameters, such as filtering, would often be constant during a musical phrase, and would be transmitted only once. Some notes might require more parameters, to specify a more complicated envelope, for instance. For most purposes, however, a hundred or so events per second, with a time precision of about 1 msec, was sufficient. (These requirements are somewhat similar to those of the MIDI interface which, of course, did not exist in 1970.)
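The trick of transmitting a constant parameter only once is a form of delta encoding. A minimal sketch in Python (the field names are illustrative, not the actual Sofka device codes):

```python
# Delta-encoding note parameters: only fields whose value changed are
# transmitted, which keeps the control stream low-bandwidth.
def encode(notes):
    events, last = [], {}
    for note in notes:
        for field, value in note.items():
            if last.get(field) != value:
                events.append((field, value))
                last[field] = value
    return events

phrase = [
    {"pitch": 56, "amplitude": 12, "filter": 3},
    {"pitch": 58, "amplitude": 12, "filter": 3},  # only the pitch changes
    {"pitch": 60, "amplitude": 10, "filter": 3},
]
print(encode(phrase))
```

The second and third notes generate two events and one event respectively, instead of three each.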

With these devices under software control, many things became possible. Obviously, music could be synthesized under computer control. Less obviously, the computer could be used to tune the oscillator bank to equal temperament, just intonation, or any other desired tuning system. This was possible because the computer could turn oscillators on or off, fine tune them, and measure their frequencies using the zero-crossing detector.
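The frequency-measurement idea can be sketched in modern terms: count zero crossings over a known interval and divide by two (each cycle crosses zero twice). The sample rate and test signal below are illustrative; the studio's detector worked on the analogue signal directly, not on samples:

```python
import math

# Estimating a tone's frequency from zero crossings.
def frequency_from_zero_crossings(samples, sample_rate):
    """Each full cycle has two zero crossings."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

rate = 48000
# One second of a 440 Hz tone (small phase offset avoids samples landing on zero).
tone = [math.sin(2 * math.pi * 440 * n / rate + 0.1) for n in range(rate)]
print(round(frequency_from_zero_crossings(tone, rate)))  # 440
```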

During a MUSYS performance, Leo read data from the disk and passed it to Sofka; Sofka delivered the data to the appropriate devices. Sofka, interrupted once per millisecond, controlled the timing of the performance, requesting data as necessary.

The composition component of MUSYS was written for Leo. The problem was to write a program that could translate a "score", written in a format to be determined, into a data stream suitable for Sofka. The memory constraints were quite severe. There was a FORTRAN compiler, but after discovering that a 15-line program filled the memory, I decided not to use it. The alternative was assembly language.

The MUSYS compiler (interpreter might be a better word, but we always called it a compiler) was based on two ideas. The first, macro expansion, was inspired by Christopher Strachey's Macrogenerator, written in 1965 to bootstrap CPL onto the Titan (successor of Atlas) computer at Cambridge University. The second idea was to use Leo to generate commands and Sofka to implement them.
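The essence of macro expansion is textual substitution of arguments into a stored body. A minimal modern sketch (Python; the %A, %B notation follows the NOTE example below, but the mechanics here are simplified and illustrative, not MUSYS's own):

```python
import re

# Textual macro expansion in the spirit of Strachey's Macrogenerator:
# occurrences of %A, %B, ... in the body are replaced by the arguments.
def expand(body, args):
    return re.sub(
        r"%([A-Z])",
        lambda m: str(args[ord(m.group(1)) - ord("A")]),
        body,
    )

print(expand("O1.%A. A1.%B.", [56, 12]))  # O1.56. A1.12.
```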

A MUSYS user would typically start by defining some macros. Here is a typical macro that corresponds to a single note of music:

      NOTE O1.%A. A1.%B. E1.%B/2+7. T1.%C-1. E1.%B/2+2<7. T1.1 T=T+%C @

The two-letter codes identify devices. For example: O1 is oscillator 1, A1 is amplifier 1, and E1 is envelope shaper 1. The definition of NOTE assumes that oscillator 1 has been patched through amplifier 1 and envelope shaper 1. (It was possible to alter the patch dynamically using electronic switches, but not many composers used this facility.) There are also pseudo-devices: T1, for example, indicates a delay.

A composition consisting of a single note might look like this:

      #NOTE 56, 12, 15;

This note has pitch 56 (chosen from an eight-octave chromatic scale with notes numbered from 0 to 63), loudness 12 (on a logarithmic scale from 0 to 15), and duration 15/100 = 0.15 seconds. The loudness value also determines the envelope of the note.

The following program is more elaborate. It plays fifty random tone rows:

      50 (N = 0 X = 0
      1  M=12^  K=1  M-1 [ M (K = K*2) ]
         X & K[G1]
         X = X+K  N = N+1  #NOTE M, 15^, 10^>3;
      12 - N[G1] )

Compositions consisting of random notes are generally not very interesting, although very occasionally you get lucky. Usually, the composer had to write a list of the notes to be played in a separate file that was read by the MUSYS compiler.
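The logic of the tone-row program above can be sketched in Python. The bitmask X and the retry loop at label 1 ensure that no pitch class is repeated within a row; my reading of the MUSYS idioms is hedged in the comments (I assume n^ yields a random value, and that > selects the larger of its operands, symmetrically with < as minimum):

```python
import random

# One random twelve-tone row: each pitch class 0..11 exactly once,
# with a random loudness and duration per note.
def random_tone_row():
    row, used = [], 0
    while len(row) < 12:
        m = random.randrange(12)    # M = 12^  (assumed: random 0..11)
        k = 1 << m                  # K = 2 ** M, built by the loop in MUSYS
        if used & k:                # X & K [G1]: pitch class taken, retry
            continue
        used |= k                   # X = X + K
        loudness = random.randrange(16)            # 15^ (assumed: 0..15)
        duration = max(random.randrange(11), 3)    # 10^>3 (assumed: at least 3)
        row.append((m, loudness, duration))
    return row

print([pitch for pitch, _, _ in random_tone_row()])
```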

The MUSYS compiler read the program file and data file and compiled a list of device/data pairs. Most pairs consisted of two 6-bit bytes, but a few devices required more data than 6 bits. The single note composition above, for example, would generate (codes such as "O1" would actually be 6-bit numbers):

      O1 56 A1 12 E1 13 T1 14 E1 7 T1 1
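Working through the NOTE macro with arguments 56, 12, 15 shows where each pair comes from. A sketch in Python, following the worked example above (integer division assumed, and < read as "minimum", which is my interpretation of the macro body):

```python
# Expanding #NOTE 56, 12, 15; into device/data pairs, mirroring
# NOTE O1.%A. A1.%B. E1.%B/2+7. T1.%C-1. E1.%B/2+2<7. T1.1
def note(a, b, c):
    return [
        ("O1", a),                   # oscillator 1: pitch
        ("A1", b),                   # amplifier 1: loudness
        ("E1", b // 2 + 7),          # envelope shaper 1: attack
        ("T1", c - 1),               # delay for most of the note
        ("E1", min(b // 2 + 2, 7)),  # envelope shaper 1: decay ("<" as minimum)
        ("T1", 1),                   # final delay
    ]

print(note(56, 12, 15))
# [('O1', 56), ('A1', 12), ('E1', 13), ('T1', 14), ('E1', 7), ('T1', 1)]
```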

When the list of pairs had been generated and stored on disk, the composition could be "performed". Performance consisted of Leo sending the data in the list to Sofka. In fact, since Sofka kept control of the time, she asked Leo for data when she needed it.

The compiler could generate up to six interleaved streams of data called, for some obscure reason, "buses". This feature simplified programming considerably. Apart from the obvious application of providing six independent voices, there were many other applications. For example, you could put the notes, without dynamics, on one bus and the dynamics on a second bus, to provide smoother phrasing than is possible if a particular dynamic is associated with each note.
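Interleaving buses amounts to merging several time-sorted streams into one. A modern sketch (Python; the (time, device, value) event format is illustrative, not the actual bus encoding):

```python
import heapq

# Merge several time-sorted "buses" into one time-ordered stream,
# e.g. notes on one bus and finer-grained dynamics on another.
def merge_buses(buses):
    return list(heapq.merge(*buses, key=lambda event: event[0]))

notes    = [(0, "O1", 56), (10, "O1", 58), (20, "O1", 60)]
dynamics = [(0, "A1", 8), (5, "A1", 9), (15, "A1", 11)]
print(merge_buses([notes, dynamics]))
```

The dynamics bus can carry more events than the note bus, which is what makes the smoother phrasing possible.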

MUSYS was used for many compositions. Those that I was involved in to a greater or lesser extent, with date and place of premiere, where known, are:

MUSYS was awarded a 1,000,000 lire prize for Electronic Music Software by Radio Milano in 1972.

A technical description of MUSYS appeared as "MUSYS: Software for an Electronic Music Studio" in Software: Practice and Experience, 3, pages 369-383, 1973.

The MUSYS language lives on, but it is now called Mouse.