Computer Science Honours
Honours Thesis 1987
Basser Department of Computer Science
Madsen Building F09, University of Sydney, NSW 2006, Australia
I would like to thank the following people for their help, companionship and criticisms during the duration of my honours project and thesis: First of all, my supervisor Dr. Allan G. Bromley, Associate Professor at the Basser Department of Computer Science, University of Sydney, for his many helpful comments during all parts of the project (but especially during the design phase) and his extremely useful criticisms of the initial drafts of the thesis; Mr. Ian Fredericks from the Music Department, University of Sydney, for his constructive comments on the viability of the Rubato language from a musician's point of view; Bruce Ellis, for an interesting afternoon in which he showed me his synthesizer collection; my fellow Computer Science Honours students for providing companionship and light entertainment throughout the year, especially during the late nights; my flatmate Vladimir for keeping out of the way; and last, but not least, Ling Hwa Cheah for being my best friend and partner.
(Italian = robbed)
Over the centuries the cry of Ancora rubato (robbed again) has echoed through the corridors of opera-houses as the orchestral musicians opened their pay-packets. In time, the word became so associated with the sight of players hanging about the stage door waiting to argue with the manager that it seemed natural to apply it to hanging about while playing an expressive melody. It is in fact the subtle art of flexing the rhythm in such a way as to enhance its expressiveness, sometimes retarding, sometimes accelerating, but always preserving a coherent musical shape. Its tasteful use almost invariably depends on an awareness of the relation between a melody and its supporting harmony. Questions of rubato should usually not be worked out but left to the inspiration of the moment. Abused, it can result in gross distortions of the music.
- Anthony Hopkins, Downbeat Music Guide, Oxford University Press, London (1977)
Computer music is an interdisciplinary field combining aspects of art, science and technology. Under the aegis of computer science as well as musicology, it has been an actively pursued research topic for some twenty years.
Some of the currently active areas in computer music research include:
This thesis presents a description of the design and implementation of the Rubato system, which consists of a music input language coupled to a music performance system. Rubato is a set of tools and a programming environment that collectively allow a user to enter music into a computer using a language (called the Rubato language) that is compact, flexible and modelled closely on conventional (common practice) musical notation. Once a piece of music has been entered in this manner, it can be transformed, analysed, or performed on a music synthesizer, provided suitable hardware and interfacing circuits are available. Current members of the Rubato family are a compiler, an assembler and linker, an interpreter and debugger, and finally a player.
It was inevitable that the notion of programming languages specifically designed for musical applications developed not long after research had begun in the field of computer music. Solving musical problems and generating music with computers involves questions of efficiency, representation and modelling. While it is possible for a musician or composer to encode a musical composition by embedding it within an existing general purpose programming language(1) (see Moxc : real-time programming package for more details on a set of library functions that allow musical programs to be written in the C programming language in a style similar to Moxie(2)), using a computer language which has been specifically designed for musical applications and which embodies musical paradigms allows a wider range of compositional strategies to be realized.
There is currently a widely accepted means of notating music developed by Western musical tradition which will be referred to in this thesis as Conventional Music Notation or CMN for short. This notation was devised as a visual or graphical means of encoding the interrelated properties of musical sound, including pitch, intensity, time, timbre and pace, as pictorial symbols on paper.
CMN encodes a static representation of musical compositions as note elements (note-heads, stems, flags and beams) together with accents, dynamics and phrase marks, upon a staff(3). It quantizes the continuous musical stream into discrete event specifications (called notes) that code parameters such as pitch, onset time and duration. If each of these parameters is regarded as a perceptual dimension in music, then notes may be regarded as points placed on the perception space encompassing the music.
CMN has been criticized for its inadequacy in representation. What is denoted by CMN is but the tip of a much greater body of oral knowledge and tradition in the practices of composition, performance, and analysis. The information lost by the abstraction or quantization process is recovered through a set of implicit rules known as performance interpretation, most of which are automatically applied by trained performers realizing an encoded work.
The foremost observation that can be made about CMN is that it is designed by musicians for musicians. It combines many levels of perceptual dimensions together, resulting in a concise and effective symbolic representation. However, CMN is very difficult to represent in a formal manner due to the implicit rules of interpretation. A computer performance of a musical score omitting the interpretation rules often results in a wooden rendition of the musical piece.
Alternative means of notating music have been proposed, as early as 1742 by Jean-Jacques Rousseau, but it can safely be asserted that CMN remains the dominant means of representing music among musicians.
The above discussion would imply that using CMN as the basis of any specialized computer language for music input or performance would be difficult. Yet it is certainly possible to design and implement graphical editors that display and edit musical scores using a highly stylized subset of CMN.(4) Such editors have been implemented in the past. However, the extent of CMN embodied by these editors is often minimal, and furthermore the problems generated by the implicit rules are almost always ignored or deferred. Music performance systems that do attempt to recognize the implicit rules often employ concepts and paradigms that differ substantially from CMN. Experience has shown that the design and implementation of a substantially complete music editor or music input system incorporating most of the major features of CMN, in addition to a music performance system that recognizes performance interpretation rules, is a non-trivial exercise in artificial intelligence techniques, i.e. knowledge representation.
For this reason, it is usual for compromises to be made when designing a music input language. A music input language is often regarded as an intermediary means of allowing musical information to be represented in the computer. The musical information, once represented, may then be used for musical analysis, computer music typesetting, or even music generation and performance. Music input languages are sometimes an alphanumeric representation of a limited subset of CMN with irregularities removed and simplifications effected, but often they use representation schemes substantially different from CMN that are idiosyncratic in appearance and style.
The paradigms employed in the design of music input languages are often derived from the 'model' used to represent the continuous musical stream(5), and this model has a profound effect on the subsequent 'style' of the language. Currently, a wide variety of models are used to represent musical data, including mathematical (stochastic, combinatorial and statistical), linguistic, algorithmic, process (object oriented) and models derived from artificial intelligence.
The Rubato system is a partial attempt at solving the music representation problem mentioned in the last section. It seeks not to defy the concepts and abstract principles embodied in CMN but to embrace them. Instead of finding an alternative means of representing music without the limitations imposed by CMN, it shares the same paradigms as CMN in the recognition of CMN's universal acceptance in the musical world. Hence Rubato shares with CMN some of the defects inherent to the model of representation common to both systems of music notation. However, Rubato is more formally consistent than CMN as it does not attempt to duplicate every aspect of CMN, only the concepts and axioms behind the symbolic notation rules.
The main problem with alternative representations of music is the learning curve associated with mastering the representation, faced by the very people who would benefit the most from a music input and performance system - the electronic music synthesist, the casual composer and the music analyst. Also, it is inherently difficult to transcribe music written in CMN (which comprises most of Western music!) into an alternative representation scheme. Rubato does not currently solve the problem of implicit performance interpretation rules which are missing from CMN (and hence from the Rubato language), but it is hoped that some of these rules will be encoded into the interpreter and/or player in the future so that the performance of Rubato encoded pieces will be more realistic.
The Rubato system is also an attempt at creating a music input and performance system using established techniques in compiler construction and language design and taking advantage of recent research into parallel computation and the development of concurrent languages. Rubato attempts to draw as many analogies as possible between music performance and the execution of computer software. Hence the components of the Rubato system directly parallel similar tools available in the programming environment for a computer system. Writing music into a computer is a process that can be likened to writing software that will be executed on a computing machine. For example, the music itself can be thought of as a set of algorithms for 'playing' a piece and the computer as a virtual machine designed to 'execute' these algorithms, i.e. 'play' music.
Given these analogies, the Rubato language can therefore be viewed as a computer language for expressing musical algorithms. The Rubato system can be viewed as a virtual computer system that plays music. The compiler will attempt to translate user input in a high level language into a low level language that resembles machine language on physical computers. This machine language will be further translated by the assembler into 'executable' code. Multiple music source files may also be 'linked' or merged together from portions of music compiled and assembled separately into a whole. The interpreter attempts to execute Rubato machine code by simulating a virtual music playing machine using the computer's native code and supporting hardware. If something goes wrong, the debugger may be used to isolate problems in the music representation. Finally, at the end of the chain, the player takes the output of the interpreter and generates codes that will drive a music synthesizer connected to the computer(6). This performance will then be interpreted by human ears as the realization of the musical piece.
The Rubato system has been successively implemented on a DEC VAX 11/780 running Unix Version 8 and an IBM PC/XT running MS-DOS Version 3.30. Currently, music performance is only possible on the IBM PC/XT via a player program called adagio(1) (see The Adagio Language) and the Roland MPU-401 MIDI Processing Unit.
The Rubato system is the union of the following subsystems:
This is the user interface to the system. The Rubato language is a typographic language with features similar to CMN but at the same time less inconsistent. It is highly algorithmic and employs block structuring of declarations to facilitate music analysis. The design of the Rubato language extends the grand tradition of 'structured' computer languages that began with Algol and continues today in general purpose computer languages such as Pascal and Ada. In addition, the language features concurrency and synchronization primitives derived from research in process concurrency that resulted in experimental languages such as Communicating Sequential Processes (CSP) and Occam.
This is again subdivided into components which, taken as a whole, accept as input musical pieces encoded in the Rubato language by a user and play them on a music synthesizer. The compiler scans text files written in the Rubato language and generates intermediate assembly code suitable for processing by the assembler, which in turn will generate machine code suitable for the interpreter. The interpreter simulates the execution of the machine code. This results in a file suitable for input by the player. The player performs the music piece on an electronic synthesizer interfaced to the computer.
The following are the major modules of the Rubato system and what they do:
This compiles a source file written in the Rubato language into the assembly language suitable for the assembler. A Rubato source file is created using a text editor on the computer system and is just a normal text file. The Rubato language and compiler are portable and stable across implementations.
The Rubato assembler accepts assembly instructions in the one instruction per line format common to most assemblers for computer systems. Each assembler instruction corresponds to a musical event, such as a command to start playing a note, set a particular attribute to a certain value, or change the internal state of the virtual machine. The design of the assembly language is also portable across implementations; it is in fact independent of the Rubato language itself, so it is possible to modify the design of either the language or the virtual machine without affecting the other.
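To illustrate the one-instruction-per-line format, the sketch below parses a few event instructions of this general shape. The mnemonics (NOTE, SETV) and their operands are invented for this illustration and are not the actual Rubato assembly language:

```python
def parse_event_line(line):
    """Split one 'instruction per line' event into its opcode
    (a mnemonic) and its numeric operands, e.g. 'NOTE 60 480'."""
    fields = line.split()
    return fields[0], [int(f) for f in fields[1:]]

# Hypothetical listing: start a note, change a state attribute,
# start another note. Operand meanings are invented for this sketch.
program = ["NOTE 60 480",   # pitch 60, duration 480
           "SETV 90",       # set the current velocity attribute
           "NOTE 64 240"]

events = [parse_event_line(line) for line in program]
assert events[0] == ("NOTE", [60, 480])
```

Because each line is a self-contained event, an assembler of this kind can translate the listing a line at a time without look-ahead.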
The linker will link up separately compiled and assembled source files into a musical piece that can be 'executed' by the interpreter. External declarations and references can be resolved at this stage. The linker is also portable across implementations.
The interpreter will simulate the execution of the Rubato music performance machine. Essentially, it executes Rubato machine code in the host environment. Currently, the interpreter generates a text file which is then read in by a player program that actually plays the music. Hence it is portable across implementations provided each implementation uses the same format for the player file. Ideally, the interpreter should execute in 'real-time' and play the music directly onto the synthesizer hardware interface. In other words, the functionality of the interpreter and the player should be combined into one unit at the expense of portability. Besides efficiency considerations, merging the interpreter and the player will allow the language and the system to be extended in the future for human/computer interaction during the performance of the music.
The debugger allows machine code, whether hand-written or generated by the compiler, to be debugged. Commands typical of a system debugger, such as single-step, trace, disassembly, and memory and state examination, are available. At this stage, the debugger is quite useful for debugging both the compiler and interpreter!
Currently, the player is an independent program which is separate from the interpreter. The player used in the current implementation is called Adagio and has been developed by Roger Dannenberg at Carnegie-Mellon University.
The most important component of the Rubato system is undoubtedly the Rubato language. It is essentially the user interface to the Rubato system. This chapter of the thesis describes the goals, assumptions, design approaches and decisions taken leading to the specification of the Rubato language.
The following goals were instrumental in the shaping of the nature of the Rubato language:
The language should be loosely based on the structural organization, abstract concepts and principles of CMN, i.e. it should employ many of the same abstractions (the quantization of the musical stream into notes with psychoacoustical properties such as pitch, rhythm, loudness and timbre being a primary CMN concept that must be retained by the language). The rationale behind this goal is to minimize the effort required to code musical pieces written in CMN. Ideally, there should be an injective relationship between entities and concepts in CMN and their equivalent counterparts in the language and it should be possible to transcribe musical pieces in CMN directly into the language with minimal effort and by following a few simple rules.
The language should allow the encoding of as many CMN concepts as possible, if not in the same style and conventions, then at least in the same principle. In particular, the language should permit easy and painless specification of articulation marks, dynamics (including continuous specifications such as crescendo and ritardando), rhythmic stresses, key, meter and time signatures, barlines and various other notational devices of CMN traditionally ignored by computer music input languages. Any notation in CMN not explicitly handled by the language should be easy to simulate within the language by the user through devices resembling macros or procedures in computer programming languages. The purpose of this goal is to make the class of music specifiable in the language as wide as possible.
In spite of the above goals, the language should be mainly a typographical language, i.e. it should be possible to encode a piece of music as a sequence of alphanumeric and special characters commonly available on a typewriter keyboard. This allows music pieces to be coded on a simple computer terminal without special music input facilities or a graphical user interface. The reason for this is to allow the language to be implementable across a wide variety of computer hardware and operating systems. The benefits of a graphically based language are heavily outweighed by the special hardware and software requirements necessary to support a graphical interface.
The language should encode musical pieces as compactly as possible, to minimize the time required to code musical pieces as well as the number of keystrokes required to represent a particular musical entity. A compact representation is also beneficial in that it requires minimal storage within the computer.
The language should allow a hierarchical organization of components in a musical piece when coded in the language representation. This parallels the (implied) concept of hierarchy in CMN. Traditionally, large pieces of music are broken up into smaller and smaller components such as movements, sections, themes, melodies, motives, phrases down to the individual notes. The representation of a musical piece in the language should mirror the piece's internal hierarchical organization as far as possible. The hierarchical organization should be flexible enough to allow common sections in the hierarchy, e.g. a phrase or melody interspersed throughout the piece across sections, to be represented or stored only once. Future invocations of a previously specified section of a piece should be easy to perform.
Hierarchical levels within a music piece should be nameable in the language, i.e. it should be possible to assign a sequence of characters forming a word to any component of the music. Specifying the name at a later stage should invoke the section of the music assigned to the name. Name assignment in the hierarchical structure should be nestable, rather like declaration nesting in block-structured languages such as Algol or Pascal. In particular, a local assignment of a name to a musical structure should be visible to all lower layers of the hierarchy but invisible to the higher layers. This allows different sections of a piece to assign the same name to different structures independently without the concern that one assignment may overwrite the other.
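This block-structured visibility rule can be sketched with a chain of namespaces, one per nesting level: an inner section's binding shadows an outer one without overwriting it. The section names and bound values here are purely illustrative:

```python
from collections import ChainMap

# Outermost section of the (hypothetical) piece binds the name "theme".
outer = ChainMap({"theme": "outer theme notes"})

# Entering a nested section pushes a new local namespace; its binding
# of "theme" shadows the outer one for all lower layers...
inner = outer.new_child({"theme": "local theme notes"})
assert inner["theme"] == "local theme notes"

# ...but the outer layer's binding is left untouched, so two sections
# may reuse the same name independently.
assert outer["theme"] == "outer theme notes"

# A name not bound locally is looked up in the enclosing layers.
inner2 = outer.new_child({"motif": "local motif notes"})
assert inner2["theme"] == "outer theme notes"
```

This is exactly the lookup discipline of block-structured languages such as Algol or Pascal, applied to named musical structures.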
The language should be inherently concurrent, i.e. it should be possible to specify sections which will be performed simultaneously. Music is not usually considered as a linear stream of notes but rather a set of note streams operating in parallel. This is exemplified in pieces written for a symphony orchestra. During the performance of a symphony, for example, each member of the orchestra can be considered a process that spews out musical notes independently of the other performers, and yet the collection of independent players forms a joint entity that can bring pleasure to a listener's ears. Often performers share a musical part; for example, members of the cello section may play a common phrase in unison. This can be likened to processes which are separate instances of a single process specification.
This is a major difference between the new language and most other computer music input languages.
Typically, a computer music input language forces a piece of music with many parts to be encoded into a sequential stream which is processed sequentially. The language should allow each musical part to be specified separately, together with a notation to bind the separate pieces so that they will be performed simultaneously. Hence, the language can take advantage of upcoming parallel computer architectures because of its inherent concurrency. The implementation of the interpreter and player on a typical sequential computer entails time-sharing the computational power of the computer between active entities in the music in a round-robin fashion.
Finally, musical pieces written in the language should be aesthetic and pleasing to look at. To achieve this goal, the language may have to allow free-form input, i.e. spaces and blank lines and the typographical format of elements of the language are not syntactically significant.
In designing the language, some assumptions about the nature of music and the structure of musical pieces were made and these assumptions guided the approaches taken and the design decisions made. The assumptions may not be correct or even generally valid, but it is hoped that they are at least consistent and prove to be useful.
Music is composed of notes which are locally homogeneous. Locally within a piece, all notes typically lie relatively close to one another in pitch, and are usually similar in other attributes. For example, all notes in a phrase or melody are likely to be spaced equally apart in time and have the same duration, loudness, timbre, etc. A language that exploits this feature will simplify the keying in of musical phrases because the attributes of a note which are common across a phrase need only be specified for the phrase rather than for each individual note. Similarly, a group of phrases may possess similar attributes in the same way that notes do. Hence, the savings achieved can be replicated up the hierarchical structure.
Most musical pieces will repeat portions of themselves. Indeed, certain parts or types of music will often repeat endlessly, possibly with variations or in juxtaposition with another section which may not repeat at all. Even in cases where the melody does not repeat, the rhythmic pattern of the notes may be replicated. Repetition of musical sections is often intimately linked with the hierarchical structure of the musical piece. The language should be able to handle repetitions in form as well as content intelligently.
The performance of a musical piece can be likened to the sequential execution of a computer program. For instance, when a piece of music is being composed or performed, there is often an acute awareness in the mind of the composer or performer of the 'flow' of the musical piece with respect to time. This flow of a musical piece can be diverted within CMN using repetition marks and there are even constructs directly analogous to control flow statements in programming languages such as goto or if..then..else! Hence, the flow of music is very similar to the flow of execution within a computer program. A language for representing music should therefore be algorithmic and possess iteration and control flow primitives akin to the control flow statements available in a typical computer programming language.
The concepts used in Rubato are often just as applicable to other areas of real-time control systems as they are to music representation and performance. While the language should not be designed as a general purpose real-time control language, the design of the language must be flexible enough to allow the language to be used for nonmusical applications. To this end, there must exist alternative means of event specification that do not correspond to CMN. For example, there should be at least two different ways of specifying the pitch of a note. It can either be specified as a key which is relative to the current key signature and the current octave, or as an absolute pitch number which corresponds to the number that will be sent out by the interface when playing the note. Similarly, the duration of a note can be specified as a time period which is relative to the current musical tempo, or as an absolute time interval.
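The relative and absolute forms of specification can be sketched as simple conversions. Since the current implementation drives a MIDI interface, the absolute pitch number below follows the common MIDI convention (middle C = 60, twelve semitones per octave); the function names are illustrative, not part of the Rubato language:

```python
# Semitone offsets of the natural key letters within an octave.
SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def pitch_number(key, octave):
    """Relative spec (key letter + current octave) -> the absolute
    pitch number actually sent to the interface (MIDI convention)."""
    return 12 * (octave + 1) + SEMITONE[key]

def duration_seconds(beats, tempo_bpm):
    """Relative spec (beats at the current tempo) -> an absolute
    time interval in seconds."""
    return beats * 60.0 / tempo_bpm

assert pitch_number("C", 4) == 60        # middle C
assert duration_seconds(1, 120) == 0.5   # one beat at 120 beats/min
```

A nonmusical real-time application would simply bypass these conversions and supply the absolute forms directly.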
With the previous goals and assumptions at hand, an initial design of the language was completed. The language was then modified and the design phase reiterated until it was felt that the language was stable enough for a lexical analyser and parser to be constructed. It was found that the design often had to be modified in order to simplify the lexical analyser and/or parser. However, any modifications made were mostly cosmetic. Hence, by the time the specification of the language was drawn out, a rudimentary parser existed which could traverse sample music representations written in the grammar of the language.
The following concepts were the result of the approaches examined during the design phase of the language:
The language views a musical stream as a sequence of notes possessing attributes. Each note must possess a pitch, along with attributes such as delay, duration, velocity, patch and channel. These attributes will be fully discussed in the chapter entitled A Specification of the Rubato Language, but it suffices for now to view notes as event specifications with each attribute of a note regarded as a dimension on which the note may be placed.
When defining a note, the only attribute that needs to be specified is the pitch. All other attributes will take on default values if not specified. The use of default attributes is a means of allowing note representations to be more compact than they would normally be. Since the default values do not change from note to note unless done explicitly, note specifications are largely context free and independent of one another.
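The defaulting scheme can be sketched as follows. The attribute names (duration, velocity, patch, channel) are those listed earlier; the particular default values chosen here are illustrative only:

```python
# Illustrative default values; only the pitch is mandatory.
DEFAULTS = {"delay": 0, "duration": 480, "velocity": 64,
            "patch": 0, "channel": 1}

def make_note(pitch, **overrides):
    """Build a note: start from the current defaults, then apply any
    explicitly specified attributes on top."""
    return {"pitch": pitch, **DEFAULTS, **overrides}

n1 = make_note(60)                 # every attribute defaulted
n2 = make_note(64, velocity=100)   # one attribute overridden
assert n1["velocity"] == 64
assert n2["velocity"] == 100 and n2["duration"] == 480
```

Because the defaults are consulted afresh for each note, a phrase of locally homogeneous notes needs only its pitches spelled out.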
There are two fundamental groupings of notes. A phrase is a collection of entities (such as notes) that will be played sequentially, one after another. In a sense, the 'execution' of a phrase will yield a melody or a musical phrase. A chord is a collection of entities that will be played simultaneously. These are the two fundamental 'building blocks' of hierarchy in the Rubato language. Phrases and chords may be nested, containing other phrases or chords. The Rubato language also allows phrases and chords to possess attributes which become the default attributes for all entities within the phrase or chord. Phrases in the Rubato language are analogous to subroutine calls in a programming language and chords are analogous to a primitive that spawns new processes in concurrent programming languages.
Within a phrase, execution proceeds sequentially unless a control flow statement is encountered which may or may not divert the 'execution flow' within the phrase. Control flow statements are similar to those in programming languages. There are iterative control flow statements as well as conditional statements.
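The sequential semantics of phrases and the simultaneous semantics of chords can be sketched as a small scheduler that assigns onset times (in beats) to labelled notes. The tuple representation used here is invented for the illustration and is not the Rubato machine's actual one:

```python
def schedule(group, start=0.0):
    """Assign onsets to a nested (kind, members) group, where kind is
    'phrase' or 'chord' and a plain note is (duration, label).
    Returns (events, end_time); events are (onset, label) pairs."""
    kind, members = group
    events, t, end = [], start, start
    for m in members:
        if isinstance(m[0], str):                  # nested phrase/chord
            sub, sub_end = schedule(m, t)
        else:                                      # a plain note
            dur, label = m
            sub, sub_end = [(t, label)], t + dur
        events += sub
        if kind == "phrase":
            t = sub_end        # phrase: next member starts afterwards
        end = max(end, sub_end)  # chord: members all start together
    return events, end

# A chord of a two-note phrase against a sustained note.
piece = ("chord", [("phrase", [(1, "C"), (1, "E")]), (2, "G")])
events, end = schedule(piece)
assert events == [(0, "C"), (1, "E"), (0, "G")] and end == 2
```

Note how the phrase advances the clock between members while the chord does not, mirroring the subroutine-call versus process-spawn analogy above.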
There are currently a wide variety of diverse methods of entering music into a computer in a format suitable for musical analysis and/or performance. Conventional music notation has been found to be less than ideal for music input and representation, hence a variety of music input languages have been developed. This chapter of the thesis looks at existing approaches to music notation and input languages, compares the features of different designs, and examines the restrictions due to the paradigms employed in those designs. Music performance systems employing one or more of these input languages will also be reviewed. Whenever convenient, a comparison between the Rubato system and the system being reviewed will also be made.
This review will cover existing music input languages, music generation and performance languages, as well as musical programming languages that implement musical data types and operations and allow a representation of time in a musical composition. The languages surveyed are presented here in no particular order, although some effort has been made to present related languages in sequence for easy comparison.
Early programs developed for computer sound and music synthesis, generation and performance often defined a musical note as the "specification of an acoustic event". This is mainly due to the history of acoustic experimentation in electronic music moving into the digital computer domain. Interest in timbre was a primary impetus in the growth of electronic music and, later on, computer music synthesis. The advantage of specifying music as a collection of acoustic parameters that is synthesized into waveforms is the freedom of expression beyond that of conventional musical instruments. However, every detail of a music performance had to be specified painstakingly.
The first, and most influential, of this family of programs was a suite of programs, named Music I to Music V, developed by Max Mathews and his colleagues at AT&T Bell Laboratories. These programs spawned a whole tree of descendants with similar names, such as Music 4B, Music 4BF and Music 10. I will simply take Music V as a representative and refer to the whole class of programs for the remainder of this thesis as MUSIC N.
The Music V environment is a fast general purpose computer with mass storage and digital-to-analogue converters. Music V takes a set of synthesis algorithms and note specifications as input, and generates waveform samples for the entire musical score onto tape. An auxiliary program then reads the samples off the tape and sends them to the digital-to-analogue converters, thus producing music.
Synthesis algorithms are specified by interconnecting unit generator modules, resulting in an instrument. A series of note statements containing a list of expressions specifying the note parameters are then parsed by Music V and converted into waveforms using the instruments defined previously.
Music V processes its input in three passes: a parameter conversion pass, a note sorting pass, and the actual synthesis pass. Instruments can be reentrant (i.e. the same instrument may be playing more than one note), a feature new to Music V and not present in Music IV.
In pass I, the score is read and data statements are interpreted into operations. Statements are free form in that they are terminated by semicolons and more than one statement may coexist on the same line. The first field is the operation code, the second the action time that specifies when the operation is to be done. Further fields are specific to the operation type. Fields are separated by white space or commas.
The operation code is a three-letter mnemonic which is converted to a numeric equivalent.
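Pass I's parsing can be sketched as follows. The numeric opcode values here are purely illustrative, not Music V's actual encoding, and the helper name parse_score is my own:

```python
# Illustrative opcode table -- the numbers are NOT Music V's real codes.
OPCODES = {"NOT": 1, "INS": 2, "GEN": 3, "TER": 4}

def parse_score(text):
    """Split free-form input at semicolons, split each statement's fields
    on whitespace or commas, and convert the mnemonic to its number."""
    ops = []
    for stmt in text.split(";"):
        fields = stmt.replace(",", " ").split()
        if not fields:
            continue  # blank fragment after a trailing semicolon
        opcode = OPCODES[fields[0]]
        action_time = float(fields[1])
        ops.append((opcode, action_time, fields[2:]))
    return ops

ops = parse_score("NOT 0 1 0.50 125; NOT 0.75 1 0.25 250; TER 8;")
```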
Pass I writes its output to a temporary file, which is directed on to pass II. There the statements are sorted by action time in ascending order, and a metronome function is applied to change the time scale.
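Pass II's two jobs can be sketched briefly. The constant tempo_scale below is a simplifying assumption of mine; Music V's metronome function can vary over the course of the score:

```python
def pass_two(ops, tempo_scale=1.0):
    """Sort (opcode, action_time, fields) operations by action time,
    then rescale the times -- a stand-in for the metronome function."""
    ordered = sorted(ops, key=lambda op: op[1])
    return [(code, t * tempo_scale, rest) for code, t, rest in ordered]

events = pass_two([(1, 2.0, []), (1, 0.0, []), (1, 1.0, [])], tempo_scale=0.5)
```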
Finally, pass III computes the actual acoustic samples by organizing unit generators into instruments and playing the instruments.
MUSIC N's complete lack of structure in the event (note) specifications allowed new approaches to composition. However, the acoustical model can sometimes constrain musical composition. Transcribing from CMN to MUSIC N is possible but quite tedious for humans (a graphical interface to MUSIC IV has been built). MUSIC N does not operate in real time: a one-minute composition may take up to twenty minutes of computation. Performance interpretation, which is missing from CMN, is very difficult to code in MUSIC N, and the implicit assumption that notes are static entities which cannot vary once instantiated is not valid for certain types of music.
MUSPEC (J. P. Citron, MUSPEC pp. 97-111) is a high-level tool for musical composition. It may also be used as a music input language, although it is not a true one: input to MUSPEC is only loosely related to musical notation. The paradigms employed in MUSPEC are musicological rather than numerical in nature. The user is expected to think in terms of musical ideas such as pitches in tone systems, chord structures and rhythms.
MUSPEC grew out of research on the use of computers in aural pattern recognition. As a result, MUSPEC views a musical composition as "... a voiced harmonic continuity and its subsequent melodization."
The output of MUSPEC is a printed listing of notes and time durations, not in musical notation. There is currently no means of performing (realizing) MUSPEC output, as MUSPEC has no supporting hardware for musical performance. The only way of "listening" to MUSPEC output is to transcribe the results into conventional music notation for instrumental performance, or into a computer music performance system.
MUSPEC has been used to generate and compare the musical characteristics of persistent lines of chemical substances, phases of seismic disturbances, electrocardiograms, stellar luminosity plots and other phenomenological recordings. In fact, the name MUSPEC is derived from the projected use of the program as a "MUsical SPECtroscope".
MUSPEC input is divided into two blocks: a declarative block and an 'executable' block.
This is the declarative section of the input. Some possible declarations are:
This is the pitch system of the music, given in user-defined symbols. Note that MUSPEC translates the symbols into numbers for internal processing and reconverts the resultant numbers back into symbols for output. The actual significance of the symbols as pitches in some musical scale is ignored by MUSPEC itself. An example of a tonal system declaration is:
TONSYS C DF D EF E F GF G AF A BF B
which defines the twelve-tone scale.
This is the allowed set of root tones upon which chords may be constructed. The entries are numeric offsets to the symbols named in the TONSYS statement.
ROOTS 1 3 5 6 8 10 12
The statement

CYCLES 5 2 3 5 2 5

gives the allowed root progressions according to the root tone scale in ROOTS.
Structures are lists of numbers which enumerate the intervals, in TONSYS scale steps, separating consecutive notes of the structure. For example,
STRCTR 4 3 3
specifies a seventh chord on allowable root tones. Structures are used for harmonic composition as well as melodization.
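The way a STRCTR entry builds a chord from a root tone can be sketched as follows. The 1-based root offset and the wrap-around within the octave are my assumed conventions, and the function name chord is hypothetical:

```python
TONSYS = ["C", "DF", "D", "EF", "E", "F", "GF", "G", "AF", "A", "BF", "B"]

def chord(root, structure):
    """Stack the STRCTR intervals above a root tone (given as a 1-based
    offset into TONSYS, as in a ROOTS entry), wrapping within the octave."""
    tones = [root - 1]
    for step in structure:
        tones.append((tones[-1] + step) % len(TONSYS))
    return [TONSYS[t] for t in tones]

chord(1, [4, 3, 3])   # the seventh chord of the STRCTR example, on root C
```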
Voicing statements specify the harmonic continuity of the music and control the voice leading between chords. The entry at a particular voicing position specifies the structure note of the next chord to which the note in the current structure component must move. An example is
VOICNG 1 3 4 2
The first chord is voiced using either CHORD1 or CHORDP. For example,
CHORD1 1 3 4 2
states that the initial reference chord is ordered root, third, fourth and second structure tone. CHORDP instead causes structures and root tones to be "phased" in the calculations.
The Basic Rhythmic Group specifies an overall or "macrorhythmic" control:
establishes 2 units of time with a maximum of 6 attacks; 1 unit of time with a chord change (indicated by the minus sign) and a maximum of 4 attacks; and 1 unit of time with a maximum of 3 attacks and a minimum of 2 attacks.
"Microrhythmic" attack patterns are specified as relative duration groups, i.e. duration is an integer with respect to a minimum value. A negative duration value indicates a rest:
RELDUR 3 1 2 2 -1 3 2 2
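Expanding such a group into absolute durations is straightforward. In this sketch the minimum duration unit (a quarter of a time unit) is an assumption of mine, not part of MUSPEC:

```python
def expand(reldur, unit=0.25):
    """Expand a RELDUR group into (duration, is_rest) pairs: each entry's
    magnitude multiplies the minimum unit, and negative entries are rests."""
    return [(abs(n) * unit, n < 0) for n in reldur]

events = expand([3, 1, 2, 2, -1, 3, 2, 2])   # the RELDUR example above
```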
A MELODY statement is a string of integers representing notes selected from interval structures.
The second data block may contain any set of numbers at all, whether abstractly chosen or purposely contrived. This block begins with a line with the word LINES starting in column one. The numbers in the line data are used to select material from the musical data: first a root tone cycle is selected from the available CYCLES scales, followed by the voicing choice, the macro- and microrhythmic selections, and the melody. Each line of numbers thus triggers a set of selections, so this block corresponds to the 'executable' section of a programming language.
One of the reasons behind the implementation of MUSPEC was to allow composers and arrangers to communicate musical thoughts to the computer at a relatively high level, and in this respect it is fairly successful. MUSPEC employs a different set of paradigms from CMN, so converting musical scores into MUSPEC input requires some thought. The biggest disadvantage of