Full metadata record
DC Field | Value | Language
dc.contributor: Department of Computing (en_US)
dc.contributor.advisor: Liu, Yan (COMP)
dc.creator: Chen, Gong
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/10270
dc.language: English (en_US)
dc.publisher: Hong Kong Polytechnic University
dc.rights: All rights reserved (en_US)
dc.title: Artificial intelligence in music : composition and emotion (en_US)
dcterms.abstract: Music is a universal feature of all human societies. People compose music to express their minds, and in turn, music evokes emotion in its listeners. This thesis investigates new artificial intelligence techniques for music composition and emotion analysis. Algorithmic composition enables a computer to compose music much as a human musician does. We apply deep learning, a powerful artificial intelligence technology, to algorithmic composition. To achieve both good musicality and novelty in machine-composed music, we propose a musicality-novelty generative adversarial network (MNGAN) model. Two games, the musicality game and the novelty game, are designed and optimized alternately. Specifically, the musicality game makes the output music samples follow the distribution of human-composed examples, while the novelty game pushes the output samples away from the nearest human-composed example. A series of experiments validates the effectiveness of the proposed algorithmic composition technique. To provide a more convincing evaluation of machine-composed music, we propose to analyze the brainwaves of the audience using electroencephalography (EEG). A new psychological paradigm employing auditory stimuli with different degrees of musicality is designed. Based on this paradigm, we propose a novel computational model to evaluate the musicality of machine-composed music. Several empirical experiments are performed, and the results validate the effectiveness of the proposed method. Some interesting neuroscience observations are also reported. For music emotion analysis, we present two pieces of work. The first is a novel multi-label emotion classification model named the quantum convolutional neural network (QCNN), which classifies music according to its audio signal. Inspired by the concepts of superposition and collapse in quantum mechanics, we model the emotion measurement process with a newly designed convolutional neural network. Empirical experiments validate that our method achieves good classification results by learning from human-provided labels. The second work on music emotion analysis is based on the EEG of the audience. Since EEG-based music emotion classification can be influenced by the inter-trial effect, we design a novel single-trial music listening paradigm that avoids this effect in the EEG data. To address the severe inter-subject differences in the single-trial setting, we propose a novel computational model named resting-state alignment, which aims to eliminate the differences between subjects while finding the features that contribute to emotion recognition. Empirical experiments demonstrate the performance of the proposed technique, and the observations are consistent with previous neuroscience studies. Future work will explore algorithmic composition that evokes specific emotions, which has both academic value and commercial potential. (en_US)
dcterms.extent: xviii, 111 pages : color illustrations (en_US)
dcterms.isPartOf: PolyU Electronic Theses (en_US)
dcterms.issued: 2019 (en_US)
dcterms.educationalLevel: Ph.D. (en_US)
dcterms.educationalLevel: All Doctorate (en_US)
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations (en_US)
dcterms.LCSH: Artificial intelligence -- Musical applications (en_US)
dcterms.LCSH: Computer music (en_US)
dcterms.LCSH: Computer composition (Music) (en_US)
dcterms.LCSH: Emotions in music (en_US)
dcterms.accessRights: open access (en_US)

Files in This Item:
File | Description | Size | Format
991022289507603411.pdf | For All Users | 6.38 MB | Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/10270