dc.contributor.author | Dadman, Shayan | |
dc.contributor.author | Bremdal, Bernt Arild | |
dc.contributor.author | Bang, Børre | |
dc.contributor.author | Dalmo, Rune | |
dc.date.accessioned | 2023-01-04T12:34:54Z | |
dc.date.available | 2023-01-04T12:34:54Z | |
dc.date.issued | 2022-11-30 | |
dc.description.abstract | Music generation using deep learning has received considerable attention in recent years. Researchers have developed various generative models capable of imitating musical conventions, comprehending musical corpora, and generating new samples based on the learning outcome. Although the samples generated by these models are convincing, they often lack musical structure and creativity. For instance, a vanilla end-to-end approach, which deals with all levels of music representation at once, does not offer human-level control and interaction during the learning process, leading to constrained results. Indeed, music creation is an iterative process in which a musician follows certain principles, reusing or adapting various musical features. Moreover, a musical piece adheres to a musical style, which breaks down into the distinct concepts of timbre style, performance style, and composition style, together with the coherence between these aspects. Here, we study and analyze the current advances in music generation using deep learning models with respect to different criteria. We discuss the shortcomings and limitations of these models with regard to interactivity and adaptability. Finally, we outline potential future research directions, addressing multi-agent systems and reinforcement learning algorithms, to alleviate these shortcomings and limitations. | en_US
dc.identifier.citation | Dadman, S., Bremdal, B. A., Bang, B., & Dalmo, R. (2022). Toward Interactive Music Generation: A Position Paper. IEEE Access. | en_US
dc.identifier.cristinID | FRIDAID 2090539 | |
dc.identifier.doi | 10.1109/ACCESS.2022.3225689 | |
dc.identifier.issn | 2169-3536 | |
dc.identifier.uri | https://hdl.handle.net/10037/28026 | |
dc.language.iso | eng | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.relation.journal | IEEE Access | |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2022 The Author(s) | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0 | en_US |
dc.rights | Attribution 4.0 International (CC BY 4.0) | en_US |
dc.title | Toward Interactive Music Generation: A Position Paper | en_US |
dc.type.version | publishedVersion | en_US |
dc.type | Journal article | en_US |
dc.type | Tidsskriftartikkel | en_US |
dc.type | Peer reviewed | en_US |