
Enhanced Evaluation Method of Musical Instrument Digital Interface Data based on Random Masking and Seq2Seq Model

Author

Listed:
  • Zhe Jiang

    (Department of Autonomous Things Intelligence, Graduate School, Dongguk University–Seoul, Seoul 04620, Korea)

  • Shuyu Li

    (Department of Multimedia Engineering, Graduate School, Dongguk University–Seoul, Seoul 04620, Korea)

  • Yunsick Sung

    (Department of Multimedia Engineering, Dongguk University–Seoul, Seoul 04620, Korea)

Abstract

With developments in artificial intelligence (AI), novel applications can use deep learning to compose music in the musical instrument digital interface (MIDI) format, even without any knowledge of music theory. The composed music is generally evaluated with a human-based Turing test, which is subjective and provides no quantitative criteria. Therefore, objective evaluation approaches with many general descriptive parameters are applied to MIDI data, considering MIDI features such as pitch distances, chord rates, tone spans, and drum patterns. However, setting several general descriptive parameters manually on large datasets is difficult and generalizes poorly. In this paper, an enhanced evaluation method based on random masking and a sequence-to-sequence (Seq2Seq) model is proposed to evaluate MIDI data. An experiment was conducted on real MIDI data, generated MIDI data, and random MIDI data. The bilingual evaluation understudy (BLEU) metric, a common approach for evaluating MIDI data, is used to benchmark the proposed method in a comparative study. With the proposed method, the ratio of the average evaluation score of the generated MIDI data to that of the real MIDI data was 31%, whereas with BLEU it was 79%. The lower the ratio, the greater the difference between the real and generated MIDI data. This implies that the proposed method quantified the gap while accurately distinguishing real from generated MIDI data.
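
As a rough illustration of the masking-based evaluation idea described in the abstract, the sketch below randomly masks tokens of a MIDI event sequence, asks a pre-trained Seq2Seq model to restore them, and scores the sequence by restoration accuracy; the ratio of average scores for generated versus real data then plays the role of the 31% and 79% figures reported above. This is a minimal Python sketch under assumed interfaces (MASK_TOKEN, Seq2SeqPredictor, evaluate_sequence, and score_ratio are hypothetical names), not the authors' implementation.

    # Minimal sketch of a masking-based evaluation score, assuming a
    # pre-trained Seq2Seq model is available behind a simple predict()
    # interface.  All names here are illustrative, not from the paper.
    import random
    from typing import List, Protocol

    MASK_TOKEN = -1  # hypothetical sentinel for a masked MIDI event token


    class Seq2SeqPredictor(Protocol):
        def predict(self, masked_tokens: List[int]) -> List[int]:
            """Return a reconstruction of the full token sequence."""
            ...


    def evaluate_sequence(tokens: List[int], model: Seq2SeqPredictor,
                          mask_ratio: float = 0.15, seed: int = 0) -> float:
        """Randomly mask a fraction of tokens and score the sequence by how
        accurately the Seq2Seq model restores the masked positions."""
        rng = random.Random(seed)
        n_mask = max(1, int(len(tokens) * mask_ratio))
        masked_positions = rng.sample(range(len(tokens)), n_mask)

        masked = list(tokens)
        for pos in masked_positions:
            masked[pos] = MASK_TOKEN

        restored = model.predict(masked)
        hits = sum(1 for pos in masked_positions if restored[pos] == tokens[pos])
        return hits / n_mask  # higher score = sequence looks more like real MIDI


    def score_ratio(generated: List[List[int]], real: List[List[int]],
                    model: Seq2SeqPredictor) -> float:
        """Ratio of average scores (generated / real); the abstract reports
        31% for the proposed method versus 79% for BLEU."""
        def avg(seqs: List[List[int]]) -> float:
            return sum(evaluate_sequence(s, model) for s in seqs) / len(seqs)
        return avg(generated) / avg(real)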

Suggested Citation

  • Zhe Jiang & Shuyu Li & Yunsick Sung, 2022. "Enhanced Evaluation Method of Musical Instrument Digital Interface Data based on Random Masking and Seq2Seq Model," Mathematics, MDPI, vol. 10(15), pages 1-17, August.
  • Handle: RePEc:gam:jmathe:v:10:y:2022:i:15:p:2747-:d:879242

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/10/15/2747/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/10/15/2747/
    Download Restriction: no


    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Shuyu Li & Yunsick Sung, 2023. "MRBERT: Pre-Training of Melody and Rhythm for Automatic Music Generation," Mathematics, MDPI, vol. 11(4), pages 1-14, February.
    2. Shuyu Li & Yunsick Sung, 2023. "Transformer-Based Seq2Seq Model for Chord Progression Generation," Mathematics, MDPI, vol. 11(5), pages 1-14, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yihang Zhang & Yunsick Sung, 2023. "Traffic Accident Detection Using Background Subtraction and CNN Encoder–Transformer Decoder in Video Frames," Mathematics, MDPI, vol. 11(13), pages 1-15, June.
    2. Shuyu Li & Yunsick Sung, 2023. "MRBERT: Pre-Training of Melody and Rhythm for Automatic Music Generation," Mathematics, MDPI, vol. 11(4), pages 1-14, February.
    3. Yihang Zhang & Yunsick Sung, 2023. "Traffic Accident Detection Method Using Trajectory Tracking and Influence Maps," Mathematics, MDPI, vol. 11(7), pages 1-14, April.
    4. Yu-Huei Cheng & Che-Nan Kuo, 2022. "Machine Learning for Music Genre Classification Using Visual Mel Spectrum," Mathematics, MDPI, vol. 10(23), pages 1-19, November.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:10:y:2022:i:15:p:2747-:d:879242. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.