[Humanist] 31.85 automated musicianship

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Wed Jun 7 07:08:00 CEST 2017


                  Humanist Discussion Group, Vol. 31, No. 85.
            Department of Digital Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org



        Date: Tue, 6 Jun 2017 17:19:32 +0200
        From: Tim Smithers <tim.smithers at cantab.net>
        Subject: Re:  31.67 automated musicianship ...
        In-Reply-To: <20170531055338.F39D61B65 at digitalhumanities.org>


Dear Willard,

May I add to Joris' and James' thoughts on automated
musicianship, following Henry Schaffer's pointer to "Computers
Are The New Composers."

Your two consequences:

 (1) more human composers of music will be put out of work
     than have already;

 (2) we will find out more than we already know about music as
     a creative activity.

are not, I think, necessary ones.  I'll begin with the second.

If attempts to build computational composing systems are to
tell us anything about how humans compose music, the work
requires, I would say, approaches like David Cope's EMI
(Experiments in Musical Intelligence).  He started this in
1981, as a response to composer's block [1].  Essentially,
what he did was to develop a computational representation of
compositional style, and then build a computational mechanism
that used this style representation to generate new music "in
the style of."  See
the EMI "Vivaldi" (Option 1) on the webpage Henry pointed us
to, for example, and compare this with some real Vivaldi
(Option 2) [2].  Over the years, Cope compiled a large
database of different composing styles that EMI could use to
make its music.
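
To caricature the idea in a few lines of Python (and it is
only a caricature: the toy corpus, the pitch-only encoding,
and the first-order "style" below are my inventions for
illustration, not Cope's machinery, which analyses and
recombines signatures found in a composer's works [3]):

  import random
  from collections import defaultdict

  def learn_style(melodies):
      # A toy "style representation": counts of which pitch
      # follows which in the sample melodies.
      transitions = defaultdict(list)
      for melody in melodies:
          for a, b in zip(melody, melody[1:]):
              transitions[a].append(b)
      return transitions

  def compose_in_style(style, start, length=16):
      # Generate a new sequence "in the style of" the corpus
      # by a random walk over the learned transitions.
      note, piece = start, [start]
      for _ in range(length - 1):
          note = random.choice(style[note])  # weighted by counts
          piece.append(note)
      return piece

  # Toy corpus: two short melodies as MIDI pitch numbers.
  corpus = [[67, 71, 74, 71, 67, 64, 67],
            [67, 69, 71, 72, 71, 69, 67]]
  print(compose_in_style(learn_style(corpus), start=67))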

In the 1990s, Cope built Emily Howell, a program that uses EMI
to generate music, but which has a user interface that allows
him (or others) to adjust the work of the EMI part, using
musical notation or comments in English, to "teach" it to
compose music more to his liking.  Then, from about 2003, Cope
set out on a yet more radical path.  He kept all the EMI
compositions, but discarded the styles database.  He then gave
collections of these EMI compositions to the Emily Howell
program and, again using its interface, guided it towards new
styles and kinds of composition; see [3] for the full details
of all this.
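
Cope's interface takes musical notation and comments in
English; the sketch below shrinks that "teaching" to a bare
accept-or-reject loop, which is my simplification, not his
design, and the generator and the teacher's taste here are
invented for the purpose:

  import random

  def guide(generate, approve, rounds=200):
      # Toy human-in-the-loop guidance: keep the phrases the
      # teacher approves, and bias later rounds towards
      # varying those rather than starting afresh.
      liked = []
      for _ in range(rounds):
          if liked and random.random() < 0.7:
              phrase = mutate(random.choice(liked))
          else:
              phrase = generate()
          if approve(phrase):
              liked.append(phrase)
      return liked

  def mutate(phrase):
      # Nudge one note of an approved phrase up or down a tone.
      i = random.randrange(len(phrase))
      return phrase[:i] + [phrase[i] + random.choice([-2, 2])] + phrase[i + 1:]

  # Invented stand-ins: random 8-note phrases, and a teacher
  # who likes phrases that end higher than they start.
  generate = lambda: [random.randint(60, 72) for _ in range(8)]
  approve = lambda p: p[-1] > p[0]
  approved = guide(generate, approve)
  print(approved[-1] if approved else "nothing approved")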

David Cope's work with EMI generated controversy, and some
opposition, but I think this and his later work count as some
of the most significant, and certainly the most sustained,
efforts to use computation to explore musical composition in
humans and machines.  Other composers have done, and are
doing, similar things: working closely and in detail with
different kinds of computation to explore what musical
composition can be, but I'll not attempt to list them here.
Harold Cohen's four decades of work to develop his AARON
painting machine, starting in 1973, is another example of this
kind of sustained work to understand something people can do,
painting; see [4].

What, for me, is important in these attempts is that they are
not just examples of pushing lots of samples of music into
machine learning systems that can then be run backwards to
generate music "like" the samples they have been trained on.
This is what the Jukedeck system does; see Henry's post.  It
uses machine learning techniques to build an automated
(artificial) music generator.  You tell it how long you want
your soundtrack to be (for your video, say), and it makes the
music.  This kind of work, I think, tells us nothing about how
humans compose music.  It does, on the other hand, tell us
something about what some people are prepared to tolerate as
music.
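
Jukedeck's internals are not public, so the sketch below gives
only the outline of the task as reported in [2]: a model whose
statistics have been fitted to a corpus is sampled until the
requested duration is filled.  The note vocabulary and the
weights here are made up.

  import random

  # Stand-in for a trained model: (MIDI pitch, seconds) events
  # with weights pretending to have been learned from a corpus.
  VOCAB = [(60, 0.5), (64, 0.5), (67, 1.0), (72, 2.0)]
  WEIGHTS = [4, 3, 2, 1]

  def track_of_length(seconds):
      # The user supplies one thing, the length of the video's
      # soundtrack; the "model" fills it with sampled notes.
      track, t = [], 0.0
      while t < seconds:
          event = random.choices(VOCAB, weights=WEIGHTS)[0]
          track.append(event)
          t += event[1]
      return track

  print(track_of_length(30))  # music for a 30-second video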

Which brings me to your first consequence: Jukedeck and its
cousins will put more human composers out of work.  Perhaps,
yes.  But it doesn't have to be like this.

Take a look at and listen to these two recent video works.

 LUNAR <https://vimeo.com/217051213> [7m20s]
 by Christian Stangl ( http://www.christianstangl.at )
 music by Wolfgang Stangl ( http://www.wolfgangstangl.com ) 

and 

 2016 AICP Sponsor Reel - Dir Cut [2m47s]
 <https://vimeo.com/169599296> 
 music by Major Lazer, Light It Up (Remix)

and compare the experience with watching any of the many, many
videos you can find on Vimeo and YouTube that use artificial
Jukedeck music as soundtracks.

Or, if you've got an hour to spare, try this.

 Music for 18 Musicians [56m34s]
 by  Damien Henry
 <https://youtu.be/-wmtsTuHkt0>
 music by Steve Reich

Here a machine learning system is used to generate the visual
part, to go with Steve Reich's minimalist score "Music for 18
Musicians" (1974-6).

If you don't feel the difference between videos with real
music and videos with artificial music, then human-composed
music isn't needed.  But if you do, as most people do (perhaps
without realising how important the music is in making the
experience), then human-composed music is needed, and will
always be needed.

Given that sites like YouTube and Vimeo allow viewers to vote
down (in the case of YouTube) and comment on what they see, if
more (and more and more ...) people left comments saying
they'd prefer real music on the videos they watch, then maybe,
just maybe, we (collectively) could teach our (collective)
selves to demand human-composed music, and reject machine-made
artificial music.

Best regards,

Tim

References

[1] David Cope, Experiments in Musical Intelligence
    http://artsites.ucsc.edu/faculty/cope/experiments.htm

[2] For Video Soundtracks, Computers Are The New Composers,
    NPR, 2017
    http://www.npr.org/sections/alltechconsidered/2017/05/29/530259126/for-video-soundtracks-computers-are-the-new-composers

[3] David Cope, 2005: Computer Models of Musical Creativity,
    MIT Press.

[4] Harold Cohen, 1995: The further exploits of AARON, Painter
    http://web.stanford.edu/group/SHR/4-2/text/cohen.html
    and
    Jane Wakefield, 2015: Intelligent Machines: AI art is
    taking on the experts, BBC Technology News report
    http://www.bbc.com/news/technology-33677271

> On 31 May 2017, at 07:53, Humanist Discussion Group <willard.mccarty at mccarty.org.uk> wrote:
> 
> 
>                  Humanist Discussion Group, Vol. 31, No. 67.
>            Department of Digital Humanities, King's College London
>                       www.digitalhumanities.org/humanist
>                Submit to: humanist at lists.digitalhumanities.org
> 
>  [1]   From:    Willard McCarty <willard.mccarty at mccarty.org.uk>          (18)
>        Subject: automation
> 
> --[1]------------------------------------------------------------------------
>        Date: Tue, 30 May 2017 06:04:16 +0100
>        From: Willard McCarty <willard.mccarty at mccarty.org.uk>
>        Subject: automation
> 
> In response to Henry Schaffer's pointer to an article on the automation 
> of musical composition, there are two consequences I think will hold -- 
> though I'll be glad for arguments to the contrary:
> 
> (1) more human composers of music will be put out of work than have already;
> (2) we will find out more than we already know about music as a creative 
> activity.
> 
> The 70 year-old fear of automation has, as we all know, not proven 
> groundless, as Shoshana Zuboff and others have demonstrated. The life of 
> a musician is in general not an easy one; many find themselves working 
> as musical hacks, doing just the sort of thing that software can now do. 
> Can we say that the net gain has been worth the cost?
> 
> Yours,
> WM
> -- 
> Willard McCarty (www.mccarty.org.uk/), Professor, Department of Digital
> Humanities, King's College London; Adjunct Professor, Western Sydney
> University and North Carolina State University; Editor,
> Interdisciplinary Science Reviews (www.tandfonline.com/loi/yisr20)
> 




