The Semiotics of Music: From Peirce to AI

Abstract

Music is the art of sound. It has accompanied humans for a very long time and conveys rich meanings among people and across time. Music is often called a “universal language of mankind” because of its affective power across linguistic boundaries. That power is rooted in music’s symbolic systems. This paper discusses the semiotics of music within the theoretical framework developed by C. S. Peirce. I then examine the similarities and differences between language and music: evidence suggests that music and language share neural resources, yet they differ greatly, especially in formal structure. I also analyze how music serves as a cognitive artifact distributed among the members of social groups. Finally, I briefly review the history of music digitalization and the great potential of computer-generated music, and I explore the parallel between human and computer pattern recognition in music as a useful direction for future studies in both music cognition and computer science.

1. Introduction

Music is an art form that uses sound as a medium to convey meaning. We all have personal experience of music. Why does music have so much power? The reason lies in its symbolic systems. C. S. Peirce, a co-founder of semiotics, provided us with theoretical tools for analyzing signs that are also useful for examining music. In addition, as the most researched branch of semiotics, linguistics has the potential to offer music useful methods and frameworks, though there are many differences between music and language. Ray Jackendoff and others have written detailed papers and books examining the parallels and nonparallels between music and language from an interdisciplinary perspective. This paper also discusses how digitalization impacts the music industry and the great potential of computer composition.

2. Music as a symbolic system

Music has accompanied us since the very dawn of human civilization. The oldest known musical instrument is thought to be a bone flute some forty thousand years old[1]. Its function remains unknown, though it probably served religious, military, or entertainment purposes.

We all listen to some kind of music in our lives. Some people listen to music as background while walking or driving, while others immerse themselves in it at concerts or clubs. Music is also ubiquitous: churches, restaurants, yoga clubs, shopping malls, and other places all play different music to build different atmospheres. The secret lies in the rich meanings of music that humans can understand. As a symbolic species, we give meaning to and take meaning from all perceptible signals, be they visual, olfactory, tactile, or acoustic. Music is an acoustic art, and thus unmistakably a symbolic system.

Some music evokes emotional responses, some tells stories, and some can even induce religious ecstasy. These meanings are all conveyed through musical signs.

2.1. Peircean signs in music

C. S. Peirce is known as a co-founder of semiotics, and he developed many valuable concepts and methods for studying signs. As he put it, “we think only in signs.” Anything can be a sign as long as someone interprets it[2]. Peirce’s semiotic theories are useful for analyzing all kinds of symbolic systems, music included.

Peirce’s model of signs consists of three components—representamen, object, and interpretant.

  • The representamen, also called the “sign” or “sign vehicle” by some scholars, is the form of the sign. In music, the representamen can be many things: the music itself, a movement, a melody, a beat, a genre, a score, a performance, a recording, a sound effect, the listener’s environment, the stage design, the clothes the performers wear, or even a mistake. As long as it is interpreted as something other than itself, it can be seen as a musical sign. For example, American ethnomusicologist Thomas Turino once analyzed how musical meanings were built in Jimi Hendrix’s Woodstock performance of “The Star-Spangled Banner.” He treated many things as signs, including Hendrix’s seemingly contradictory outfit of a tuxedo and tennis sneakers, his use of a loud electric guitar with “feedback and distortion,” and the sound effects of airplanes and sirens[4]. These signs were interpreted in a meaning-rich context: a specific concert (Woodstock) in a historical period with a unique international situation and ideological trend, which had a huge impact on people’s understanding of Hendrix’s music.
  • An object is that to which the sign refers, often an abstract concept. For example, “A Morning of the Slag Ravine,” a piece by Joe Hisaishi from Hayao Miyazaki’s film Laputa: Castle in the Sky, is a sign for the morning; its object is therefore the abstract concept of “morning.” People familiar with Miyazaki’s films may picture the morning sun rising over the eastern horizon when they hear it, even without visual cues.

A Morning of the Slag Ravine by Joe Hisaishi

Beginning of A Morning of the Slag Ravine

  • An interpretant is the sense made of the sign in the observer’s mind, where the representamen and the object are brought together.

In addition, Peirce distinguished three modes of signs, representing three kinds of relationship between the signifier and the signified[1]:

  • An icon is a mode based on resemblance, such as a portrait. For example, at the beginning of Garth Brooks’s The Thunder Rolls, there is a sound effect imitating thunder, which is an iconic sign.

Garth Brooks – The Thunder Rolls

  • An index is a mode that is “mediated by some physical or temporal connection between sign and objects[3].” For example, ripples on the surface of water are indexical to the wind. A musical example is the famous opening of Beethoven’s Symphony No.5 in C minor. It sounds like someone knocking at a door, but it does not literally imitate a knocking sound the way the beginning of The Thunder Rolls imitates thunder. The sound of knocking is an index of someone at the door; the opening of Symphony No.5 indicates fate knocking at the door.

Beginning of Beethoven’s Symphony No.5

  • A symbol is a mode in which the sign and object are connected by social convention, such as language and numbers. In this mode, the relationship between sign and object is arbitrary and must be “agreed upon and learned[2].” According to Thomas Turino, unlike linguistic signs, most musical signs function as icons and indices, but there also exist many musical symbols[4]. For instance, in his book Signs of Music: A Guide to Musical Semiotics, Finnish musicologist and semiotician Eero Tarasti gave the example of J. S. Bach’s Fugue in C sharp minor from Book I of the Well-Tempered Clavier, shown below. For listeners of the Baroque period, this subject was a symbol of the “cross and thus the Christ,” and it was quoted in much other music with the same symbolic meaning[5]. The relationship between this melody and its meaning was built on historical religious convention; most people in the twenty-first century no longer recognize it as a symbol of Christ. Another example is the opening of Beethoven’s Symphony No.5 mentioned above. It became so famous that it was adopted and quoted in other genres of music as a symbol of victory, partly because of its resemblance to the Morse code for the letter V, “dit-dit-dit-dah” (another symbolic sign). During World War II, the BBC even used these four notes to open its broadcasts[6].
The subject of Bach’s Fugue in C sharp minor represented the cross, and thus Christ, in the Baroque period. Credit: Eero Tarasti

Musicians themselves can become symbols, too. A perfect example is Sixto Rodriguez, the American singer depicted in the documentary Searching for Sugar Man, a winner at the 85th Academy Awards. He remained unknown in his home country but earned significant fame in Australia, Botswana, New Zealand, Zimbabwe, and especially South Africa, where he became a symbol of anti-apartheid activism and influenced many musicians protesting against the government[7]. Another example is Jian Cui, the first and most famous rock star in mainland China, often called the godfather of Chinese rock and roll. His songs and performances teem with musical signs. For instance, he was well known for covering his eyes with a piece of red cloth during performances, and he was adept at combining Western rock with traditional Chinese instruments such as the suona horn, building an unexpectedly conflicting yet subtly harmonious atmosphere. These behaviors were interpreted as signs of rebellion against traditional values and political realities. “I covered my eyes with a red cloth to symbolize my feelings,” he wrote in an article for Time in 1999[8]. In China, he was considered a symbol of rock spirit and freedom.

Jian Cui singing with a piece of red cloth covering his eyes.

Another notable symbolic system in music is the set of marks and symbols we use to notate music in scores. Many musical notations are based on historical and cultural conventions and must be learned in order to be understood. The most widely used system today is the five-line staff, which originated in Europe. In ancient China, however, people used a totally different notation called “Gongchepu” (工尺谱). Unlike the five-line staff, which mostly uses non-word characters, Gongchepu uses Chinese characters and punctuation marks to notate music, and it was written from top to bottom and then from right to left, just like ancient Chinese writing on bamboo slips: another demonstration of the arbitrariness of symbols.

Two scanned pages from the second volume of the song “阳关三叠” in an 1864 scorebook in Gongchepu notation, written by He Zhang. Credit: Wikimedia

The first two lines of the same song “阳关三叠” using five-line staff notation. Credit: http://www.gepuwang.net/gangqinpu/80658.html

However, as Daniel Chandler pointed out in his Semiotics: The Basics, the three Peircean modes are not mutually exclusive. That is to say, a sign could be any combination of the three. For example, as I mentioned before, the beginning of Beethoven’s Symphony No.5 is an index for someone knocking at the door, as well as a symbol for victory.

Another effect characterizing the interaction among musical signs is the “semantic snowballing” proposed by Thomas Turino[4]. He suggested that music can simultaneously comprise many signs that interact with one another and mix together over time, like a snowball. Moreover, musical semiosis has a chain effect, in which the object of one sign becomes the representamen of another, and so on[4]. I find this snowballing and chain reaction happening not only inside the musical domain but also between music and other symbolic systems. For example, in Chinese Gongchepu notation, there are two symbols for beats: “板” (bǎn), notated “。”, for strong beats and “眼” (yǎn), notated “、”, for weak beats. Gradually, the characters “板” and “眼” themselves became symbols for beats, and many idioms developed from them. For instance, the idiom “一板一眼,” which literally means strictly following the standard beat, is used to describe scrupulousness or stiffness. Likewise, the English word “offbeat,” which originally meant not following the beat, became a synonym for “unconventional.” Thus, musical symbols snowball into linguistic symbols.

An interesting theory stressed by Thomas Turino in his Signs of Imagination, Identity and Experience is the identity-building function of musical indexical signs in what Edward Hall called “high-context” communication. A shared music-related experience among the members of an intimate social group can serve as a source of affective power; in this way, great meaning is stored in and transmitted through music[4]. For example, a couple who watched Titanic on their first date might, in later life, consider the song My Heart Will Go On an index of their relationship.

Similarly, I find that game music has extraordinarily strong indexical power. Many people find that the background music of Super Mario Bros and Contra evokes an intense reminiscent mood. The reason, I think, lies in the repeated reinforcement of the immersive game experience. As Koji Kondo, the composer of the Super Mario Bros soundtrack, put it, he had two goals for his music: “to convey an unambiguous sonic image of the game world,” and “to enhance the emotional and physical experience of the gamer[9].” This emotional enhancement is so strong that such music is sometimes upgraded to the level of a symbol: the quality of the synthesized tones in early game music (a “qualisign” in Peirce’s framework[10]) is considered the token of a music genre called “chiptune” or “8-bit music,” which greatly influenced later electronic dance music[11].

2.2. A linguistic perspective of music

Among human symbolic systems, the most sophisticated and characteristic is language, the jewel in the crown of human cognition. According to American linguist Ray Jackendoff in his book Foundations of Language, human language is unique and far more complex than other communication systems, such as the sounds of whales and birds, because human utterances can transmit unlimited information in unlimited, arbitrary forms generated from a limited set of rules and a finite lexicon[12]. As another highly sophisticated symbolic system, music shares many features with language, and it even seems more competent in some respects, such as emotional arousal and affect enhancement[10]. It can also function across linguistic boundaries: research suggests that music induces universal emotion-related responses[13]. For these reasons, many people consider music another kind of language, even a “universal language of mankind[14].”

But strictly speaking, how similar is music to language? Can linguistic methods be used to approach music? How can a linguistic perspective help us understand music? Much research has been conducted on these questions.

In Music as a Language, one of my weekly essays for the course Semiotics and Cognitive Technologies, I discussed some similarities between music and language. For example, both consist of sequences of basic sound units, such as phonemes in language. Both have structural rules, for example, syntax in language and chord progressions in music. In addition, people from different areas tend to develop their own dialects and grammars in both language and music[15]. However, I did not examine these parallels in detail, nor did I inspect the differences.

In his Parallels and Nonparallels between Language and Music, Ray Jackendoff offered detailed observations on music and language in many respects, including general capacities, ecological functions, and formal structures[16]. He emphasized that language and music differ in their functions in human life: as he put it, language can be put to both propositional and affective use, while music can only convey affective content, though the distinction sometimes blurs, for instance in poetry.

In particular, Jackendoff revisited the generative theory of tonal music (GTTM) that he proposed with music theorist Fred Lerdahl[17]. He considered the metrical grid a capacity shared by music and language in the rhythmic domain, but saw no credible analogue in the use of pitch space, even in tone languages such as Chinese and many West African languages. He therefore concluded that the capacity for linguistic pitch is entirely different from the capacity for musical pitch.

However, other literature provides evidence pointing the other way. In his Musicophilia: Tales of Music and the Brain, British neurologist Oliver Sacks scrutinized the intriguing correlations between musical absolute pitch (AP) and linguistic background[18]. He specifically mentioned research conducted by Diana Deutsch, a cognitive psychologist at the University of California, San Diego, and her colleagues. Deutsch observed that “native speakers of tone languages – Mandarin and Vietnamese – were found to display a remarkably precise and stable form of absolute pitch in enunciating words[19].” In a further comparative study of AP in two populations of first-year music students, one at the Central Conservatory of Music in Beijing and the other at the Eastman School of Music in New York, Deutsch found the percentage of students possessing AP far higher in Beijing: 60% vs. 14% among students who began musical training at ages 4 and 5, 55% vs. 6% at ages 6 and 7, and 42% vs. 0% at ages 8 and 9[20]. This is clear evidence that the capacity for musical pitch correlates with the capacity for linguistic pitch.

Returning to Jackendoff’s theory: despite structural rules such as bars and chord progressions, he did not think music has a counterpart to linguistic syntax of comparable complexity and strictness. Even GTTM’s prolongational structure, which he himself proposed and which exhibits a “recursive headed hierarchy” like that of language, was not considered common ground between music and language. However, partially inspired by the hierarchical structure of actions in robotics, Jackendoff suggested that complex actions integrating “many subactions stored in long-term memory” could serve as a candidate for a “more general, evolutionarily older function” shared by language and music, since all three are evidently implemented in Broca’s area of the brain[21].

Following this action-related road, Rie Asano and Cedric Boeckx took “the grammar of action” into account and developed a more general syntactic framework in terms of “action-related components,” in which they suggested that the difference between the syntax of music and that of language boils down to different goals in hierarchical plans for action[22].

I conclude that linguistic methods can provide useful perspectives for understanding music, because music and language share many cognitive capacities. But we must remember that they are two different symbolic systems, each with its own features. As philosopher Susanne Langer put it, symbolic systems other than language lack a “vocabulary of units with independent meanings,” and their laws are entirely different from the syntax that governs language. We should therefore not blindly apply linguistic principles and methods to other media such as photography, painting, and music[2].

2.3. The signs of genres

Despite the many differences between music and language, both tend to develop many dialects, or genres as we call them in music. Basically, a music genre is a conventional category of music that shares recognizable features and patterns. Those features and patterns are musical signs, too.

The signs we use to determine genres vary widely. Sometimes a genre is recognized through the instruments, or more precisely, the sound qualities musicians use. For example, heavy (often distorted) electric guitar may indicate rock music, while a song using acoustic guitar is probably country music.

Chord progressions sometimes serve as signs for genres. For example, the 12-bar blues progression is considered a sign for blues music.

An example of a 12-bar blues progression in C, chord roots in red. Credit: Wikimedia

Scale sometimes has the power to determine a genre as well, especially in traditional music, since most modern music uses the diatonic scale. For example, we easily recognize the Japanese style of the famous piece Sakura Sakura, largely because of the unique pentatonic Japanese scale: major second, minor second, major third, minor second, and major third (for example, the notes A, B, C, E, F, and up to A)[23]. Similarly, Chinese music, Indian raga music, Arabic music, jazz, and blues all have identifiable scales. A rough sense of how a scale can act as a computable sign is sketched below.
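To make the idea concrete, here is a minimal sketch in Python (my own illustration, not from the cited literature) that builds a scale from an interval pattern and measures how well a melody fits it. The short Sakura Sakura transcription is approximate, and the scales and root notes are assumptions made for the example.

```python
# Intervals in semitones: major second = 2, minor second = 1, major third = 4.
JAPANESE_PENTATONIC = [2, 1, 4, 1, 4]  # e.g., A, B, C, E, F, up to A
MAJOR_PENTATONIC = [2, 2, 3, 2, 3]     # e.g., A, B, C#, E, F#, up to A

def scale_pitch_classes(root, intervals):
    """Return the set of pitch classes (0-11) generated by an interval pattern."""
    pcs, note = {root % 12}, root
    for step in intervals[:-1]:  # the last step just returns to the octave
        note += step
        pcs.add(note % 12)
    return pcs

def fit(melody, root, intervals):
    """Fraction of melody notes that fall inside the scale."""
    scale = scale_pitch_classes(root, intervals)
    return sum(1 for n in melody if n % 12 in scale) / len(melody)

# Approximate opening of "Sakura Sakura" as MIDI note numbers (A4 = 69).
sakura = [69, 69, 71, 69, 69, 71, 69, 71, 72, 71, 69, 71, 69, 65]
print(fit(sakura, 69, JAPANESE_PENTATONIC))  # 1.0: every note is in the scale
print(fit(sakura, 69, MAJOR_PENTATONIC))     # lower: C and F fall outside
```

Real genre recognition needs far more than scale membership, of course, since many scales overlap; this only illustrates how a scale can function as a recognizable pattern.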

Sometimes vocal quality also allows us to identify a genre. For example, the deep vocals of Amy Winehouse sound soulful and jazzy, while the rapid rapping of Eminem indicates rap music.

However, digitalization allows for a fusion of music genres, and today we often hear more than one genre-specific sign in a single song. For example, Karen Mok, a Chinese singer from Hong Kong, released a jazz album called “Somewhere I Belong” in 2013, in which she adapted twelve songs of different genres into jazz. One of them is While My Guitar Gently Weeps, originally by the Beatles, which Mok played in a jazz style on the guzheng, a traditional Chinese plucked string instrument with over 2,500 years of history, producing a genuinely creative style of world music.

Karen Mok / While My Guitar Gently Weeps

3. Music as a cognitive artifact

In his Cognitive Artifacts, Donald Norman defined a cognitive artifact as “an artificial device designed to maintain, display, or operate upon information in order to serve a representational function[24].” In this view, just like language, music is definitely a cognitive artifact.

First of all, music has many cognitive functions. When we offload cognitive effort onto music, the performance of the whole system improves, partly because of music’s affective power. For example, religious music helps gather people into a community with a transcendent purpose without much verbal persuasion. Love songs enhance lovers’ emotions, positively or negatively. Music can even serve as a political weapon; Rodriguez’s music, mentioned above, is an example.

Second, music can transmit information and emotion through space and time. Gloomy Sunday transmits a gloomy mood to people in different parts of the world and, according to urban legend, was even blamed for several suicides. Johnny B. Goode stores a story from the 1950s and still passes it on to people in the twenty-first century.

In addition, music plays an important role in every culture. It helps build a collective identity. Even four-month-old infants can recognize and prefer the music of their own culture, according to research[25].

Music is therefore a perfect example of distributed cognition, since, as analyzed above, it is distributed across the members of social groups, coordinates internal and external structure, and is distributed through time[26].

4. Digitalization of music

Starting in the 1950s, musicians began using electronic instruments to record, produce, transmit, and store music. The Fourier Transform was used to analyze acoustic vibrations as digital signals[27]. In the 1970s, for example, Alan Kay’s Smalltalk language was used to create programs that captured tones played on a keyboard and produced editable music scores, with different colors representing different timbres[28].
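As a small illustration of the idea (my own sketch, assuming NumPy, not drawn from the cited sources), the following code samples a pure tone the way an analog-to-digital converter would, then uses the Fourier Transform to recover its pitch from the digital signal:

```python
import numpy as np

sample_rate = 44100                       # CD-quality samples per second
t = np.arange(sample_rate) / sample_rate  # one second of timestamps
signal = np.sin(2 * np.pi * 440 * t)      # a pure A4 tone (440 Hz)

spectrum = np.abs(np.fft.rfft(signal))    # magnitude of each frequency component
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
print(f"dominant frequency: {freqs[spectrum.argmax()]:.1f} Hz")  # ~440.0 Hz
```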

At first, computers could complete only limited tasks, and the range of computer-synthesized sounds was restricted. As computing power grew under Moore’s Law, computers became metamedia and revolutionized the production and distribution of music with new hardware and software tools.

Gradually, computers could not only imitate existing instruments with increasing precision but also create brand-new sound effects and combinations that had never existed before. A vocoder, for example, is a machine designed to record, compress, and digitize human voices into editable formats that can be stored and manipulated in unprecedented ways. There is no doubt that digitalization opens up new possibilities for musical creation, providing musicians with a nearly infinite repository of materials. For example, in their song Contact from the album “Random Access Memories,” the French electronic music duo Daft Punk used an audio sample from the Apollo 17 mission in which NASA astronaut Eugene Cernan talks about something strange outside the spacecraft, as well as a sample from the song “We Ride Tonight” by the Australian rock band The Sherbs. The combination of existing materials creates a novel musical experience.

In this digitalization process, new genres keep emerging, such as electronic dance music and ambient music. An interesting demonstration of the power of digitalization is that, using software, someone slowed a fast-tempo Justin Bieber song down by eight times, so that it sounds ethereal, like Sigur Rós, and nothing like Justin Bieber himself.

Justin Bieber’s U Smile 800% Slower
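The same effect is easy to reproduce today. Below is a minimal sketch assuming the librosa and soundfile libraries; the file name is a hypothetical placeholder, and librosa’s phase-vocoder stretch will sound somewhat smoother than the extreme-stretching tools often used for such remixes.

```python
import librosa
import soundfile as sf

# Load the track at its original sample rate ("song.wav" is a placeholder).
y, sr = librosa.load("song.wav", sr=None)

# rate < 1 slows the audio down; 0.125 makes it 8x longer ("800% slower")
# while preserving pitch, unlike simply lowering the playback speed.
stretched = librosa.effects.time_stretch(y, rate=0.125)

sf.write("song_800pct_slower.wav", stretched, sr)
```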

As discussed above, music is full of signs, which are, in essence, recognizable patterns. Computers are good at pattern recognition and matching, and they are increasingly capable of identifying patterns we previously thought only humans could recognize, from subtle signs of genre to a musician’s personal style. Using machine learning algorithms, computers can even “create” music in certain styles or genres. A sketch of genre recognition as pattern matching follows.
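To illustrate genre recognition as statistical pattern matching (my own sketch, not a specific published system), the following code summarizes each track by its average MFCC timbre features and trains a standard classifier on labeled examples. The file names and labels are hypothetical placeholders, and a real system would need far more data and features.

```python
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def timbre_features(path):
    """Summarize a track as the mean of its MFCC frames: a crude timbre signature."""
    y, sr = librosa.load(path, duration=30.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled training tracks.
train_files = ["blues_1.wav", "blues_2.wav", "rock_1.wav", "rock_2.wav"]
train_labels = ["blues", "blues", "rock", "rock"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(np.stack([timbre_features(f) for f in train_files]), train_labels)

print(clf.predict([timbre_features("unknown_song.wav")]))  # e.g., ["blues"]
```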

One example of computer-created music comes from David Cope, a composer and scientist at the University of California, Santa Cruz, who writes programs and algorithms that analyze existing music and create new music in the same style. In his patent US 7696426 B2, he described the logical framework of his software Emmy, which consists of a pattern-matching step, a segmentation step, a hierarchical-analysis step, and a non-linear recombination step that produces the output. His software takes many factors into account, such as pitch, duration, channel number, and dynamics. As he put it, “style is inherent in recurrent patterns of the relationships between the musical events, in more than one work.” Following this logic and based on probabilistic principles, his software captures and ranks recurrent patterns as signatures of styles and creates new pieces of music[29]. The following video is one of Emmy’s works, in the style of Bach.

Bach-style chorale by musical intelligence computer program created by David Cope. 
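To give a flavor of recombination, here is a deliberately simplified sketch. It is not Cope’s patented pipeline, just a toy Markov chain that learns which note tends to follow which in a small corpus and then samples frequent transitions into a new melody:

```python
import random
from collections import defaultdict

def train(corpus):
    """Record, for each note, every note that follows it anywhere in the corpus."""
    transitions = defaultdict(list)
    for melody in corpus:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)  # duplicates encode how recurrent a move is
    return transitions

def recombine(transitions, start, length):
    """Walk the transition table, sampling successors in proportion to frequency."""
    out = [start]
    for _ in range(length - 1):
        successors = transitions.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return out

corpus = [["C", "E", "G", "E", "C"], ["C", "E", "G", "A", "G", "E"]]  # toy melodies
print(recombine(train(corpus), "C", 8))  # e.g., ['C', 'E', 'G', 'A', 'G', 'E', 'C', 'E']
```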

Another example is a Beatles-style song called “Daddy’s Car,” composed by artificial intelligence software at the SONY CSL Research Lab[30]. Today, some software can produce jazz as well.

Daddy’s Car: a song composed by Artificial Intelligence – in the style of the Beatles

Computers cannot feel music as we do. They cannot feel the affect in music, sway to it, or develop their own preferences. They can only break music into 0s and 1s and look for patterns in it. However, I think the ways they learn and create music are not necessarily different from ours. Both involve storing and retrieving information and matching patterns: drawing patterns from perceivable signals, storing them in long-term memory, ranking them by how often they recur, and matching new input against stored patterns, though the details differ greatly. This potential parallel might be a future research direction for understanding music cognition and developing music-related algorithms, as the sketch below suggests.
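As a final illustration of the “store, rank by recurrence, match” loop described above (again my own sketch, not a model from the cited literature), the following code extracts every three-note pattern from a few melodies, ranks the patterns by how often they recur, and checks whether a new phrase matches the stored memory:

```python
from collections import Counter

def ngrams(melody, n=3):
    """Every n-note pattern occurring in the melody, in order."""
    return [tuple(melody[i:i + n]) for i in range(len(melody) - n + 1)]

# "Long-term memory": counts of how often each pattern has been heard.
memory = Counter()
for melody in [["C", "E", "G", "C", "E", "G", "A"],
               ["E", "G", "A", "C", "E", "G"]]:
    memory.update(ngrams(melody))

print(memory.most_common(3))             # the most recurrent "signatures"
print(tuple(["C", "E", "G"]) in memory)  # match a new phrase: True
```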

5. Conclusion

Music is a symbolic system that can be approached through C. S. Peirce’s semiotic framework. Despite its similarities with language, music has unique structures with no counterpart in linguistics, so it requires its own theoretical framework. Music also serves as a cognitive artifact distributed through time and among people. The digitalization of music creates unlimited possibilities for the music industry, the most notable achievement today being computer-generated music produced by algorithms. I suggest that computers and the human brain may share similar models of musical sign recognition.


References

[1] Massey, Reginald, and Jamila Massey. The Music of India. Abhinav Publications, 1996.

[2] Chandler, Daniel. Semiotics: The Basics. 2nd ed. London; New York: Routledge, 2007.

[3] Deacon, Terrence William. The Symbolic Species: The Co-Evolution of Language and the Brain. 1st ed. New York: W.W. Norton, 1997.

[4] Turino, Thomas. “Signs of Imagination, Identity, and Experience: A Peircian Semiotic Theory for Music.” Ethnomusicology 43, no. 2 (Spring 1999): 221–55.

[5] Tarasti, Eero. Signs of Music: A Guide to Musical Semiotics. Approaches to Applied Semiotics. Berlin/Boston: De Gruyter Mouton, 2002. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10755238.

[6] MacDonald, James. “British Open ‘V’ Nerve War; Churchill Spurs Resistance.” The New York Times, July 20, 1941. http://www.nytimes.com/learning/general/onthisday/big/0719.html#article.

[7] Bartholomew-Strydom, Craig, and Stephen Segerman. Sugar Man: The Life, Death, and Resurrection of Sixto Rodriguez. London: Bantam Press, an imprint of Transworld Publishers, 2015.

[8] Cui, Jian. “Rock ‘N’ Roll.” Time, September 27, 1999. http://content.time.com/time/world/article/0,8599,2054475,00.html#ixzz2VFsiqp88.

[9] Schartmann, Andrew. Koji Kondo’s Super Mario Bros. Soundtrack. New York: Bloomsbury Academic, 2015.

[10] Parmentier, Richard J. Signs in Society: Studies in Semiotic Anthropology. Advances in Semiotics. Bloomington: Indiana University Press, 1994.

[11] Driscoll, Kevin, and Joshua Diaz. “Endless Loop: A Brief History of Chiptunes.” Transformative Works and Cultures 2, no. 0 (February 17, 2009). http://journal.transformativeworks.org/index.php/twc/article/view/96.

[12] Jackendoff, Ray. Foundations of Language: Brain, Meaning, Grammar, Evolution. OUP Oxford, 2002.

[13] Egermann, Hauke, Nathalie Fernando, Lorraine Chuen, and Stephen McAdams. “Music Induces Universal Emotion-Related Psychophysiological Responses: Comparing Canadian Listeners to Congolese Pygmies.” Frontiers in Psychology 5 (2015). doi:10.3389/fpsyg.2014.01341.

[14] Newton, Colin. “In the Words of American Poet Henry Wadsworth Longfellow, ‘Music Is the Universal Language of Mankind.’” Sunday Mail (Adelaide), May 16, 2004.

[15] Wang, Jieshu. “Music as a Language – Jieshu Wang | CCTP711: Semiotics and Cognitive Technology.” Accessed December 13, 2016. https://blogs.commons.georgetown.edu/cctp-711-fall2016/2016/09/22/music-as-a-language-jieshu-wang/.

[16] Jackendoff, Ray. “Parallels and Nonparallels Between Language and Music.” Music Perception 26, no. 3 (February 2009): 195–204.

[17] Jackendoff, Ray. A Generative Theory of Tonal Music. MIT Press, 1985.

[18] Sacks, Oliver W. Musicophilia: Tales of Music and the Brain. 1st ed. New York: Alfred A. Knopf, 2007.

[19] Henthorn, Trevor, Mark Dolson, and Diana Deutsch. “Absolute Pitch, Speech, and Tone Language: Some Experiments and a Proposed Framework.” Music Perception 21, no. 3 (Spring 2004): 339–56.

[20] Deutsch, Diana, Trevor Henthorn, Elizabeth Marvin, and HongShuai Xu. “Absolute Pitch among American and Chinese Conservatory Students: Prevalence Differences, and Evidence for a Speech-Related Critical Period.” The Journal of the Acoustical Society of America 119, no. 2 (January 31, 2006): 719–22. doi:10.1121/1.2151799.

[21] Patel, Aniruddh D. “Language, Music, Syntax and the Brain.” Nature Neuroscience 6, no. 7 (July 2003): 674–81. doi:10.1038/nn1082.

[22] Asano, Rie, and Cedric Boeckx. “Syntax in Language and Music: What Is the Right Level of Comparison?” Frontiers in Psychology 6 (2015). doi:10.3389/fpsyg.2015.00942.

[23] Harich-Schneider, Eta. A History of Japanese Music. London: Oxford University Press, 1973.

[24] Norman, Donald A. “Cognitive Artifacts.” In Designing Interaction, 17–23. New York: Cambridge University Press, 1991.

[25] Soley, Gaye, and Erin E. Hannon. “Infants Prefer the Musical Meter of Their Own Culture: A Cross-Cultural Comparison.” Developmental Psychology 46, no. 1 (n.d.): 286–92.

[26] Hollan, James, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research.” ACM Trans. Comput.-Hum. Interact. 7, no. 2 (June 2000): 174–196. doi:10.1145/353485.353487.

[27] Chagas, Paulo C. Unsayable Music: Six Reflections on Musical Semiotics, Electroacoustic and Digital Music. Leuven University Press, 2014. http://www.jstor.org.proxy.library.georgetown.edu/stable/j.ctt9qf0qh.

[28] Kay, Alan, and Adele Goldberg. “Personal Dynamic Media.” Edited by Noah Wardrip-Fruin and Nick Montfort. Computer 10, no. 3 (March 1977): 31–41.

[29] Cope, David. Recombinant Music Composition Algorithm and Method of Using the Same. US Patent 7696426 B2. Accessed December 14, 2016. http://www.google.com/patents/US7696426.

[30] “AI Makes Pop Music in Different Music Styles.” Flow Machines, September 19, 2016. http://www.flow-machines.com/ai-makes-pop-music/.
