Justin Kerobo

J. Kerobo, I. Bukvic, “Exploring the Intersection of Affect and Musical Engagement to Define Affective Computing in Musical Collaboration,” Psychology and Music – Interdisciplinary Encounters (PAM-IE 2024), 2024, Zagreb, Croatia.

Abstract

This paper explores the interplay between affect and musical engagement, drawing on psychological theories and empirical studies. We propose extending this exploration into telematic music, investigating the potential for creating an emotional AI collaborator. The study aims to compile a machine-learning dataset by pairing physiological responses and self-reported emotions with musical passages; this data will allow symbolic music to be generated using machine-learning techniques. The exploration involves a social-psychological inquiry into human perceptions of and feelings toward musical engagement, together with a deeper investigation into the physiological and neuroscientific responses elicited by musical stimuli. The paper introduces Affective Computing in Musical Collaboration, examining the intersection of physiological factors and music, particularly in telematic settings, to foster emotional expression and collaboration. It employs flow theory to understand musical engagement, using cognitive, affective, and psychophysiological indicators to characterize flow states. We propose integrating AI, machine learning, and human feedback to deepen this understanding and to enable continuous measurement of physiological patterns, thereby advancing music information retrieval and enhancing emotional expression, collaboration, and engagement in musical performance.
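As a rough illustration of the dataset structure described above, the sketch below pairs a symbolic (MIDI) passage with a self-reported affect label and maps it to a quadrant of Russell's circumplex model of affect, the labeling scheme used by emotion-conditioned datasets such as EMOPIA. The class name, field names, and value ranges here are illustrative assumptions, not the paper's actual data schema.

```python
from dataclasses import dataclass

@dataclass
class LabeledPassage:
    """A musical passage paired with a self-reported affect label (hypothetical schema)."""
    midi_path: str   # path to the symbolic (MIDI) excerpt
    valence: float   # self-reported valence in [-1, 1]
    arousal: float   # self-reported arousal in [-1, 1]

def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair to a quadrant of Russell's circumplex
    model of affect: Q1 = positive valence / high arousal, Q2 = negative /
    high, Q3 = negative / low, Q4 = positive / low."""
    if valence >= 0:
        return "Q1" if arousal >= 0 else "Q4"
    return "Q2" if arousal >= 0 else "Q3"

# Example: an excited, positive passage falls in quadrant Q1.
passage = LabeledPassage("excerpt_001.mid", valence=0.7, arousal=0.6)
print(circumplex_quadrant(passage.valence, passage.arousal))  # → Q1
```

Discretizing continuous valence–arousal ratings into quadrants is one common way to turn self-reports into categorical conditioning labels for symbolic music generation; continuous-valued conditioning (as in Sulun et al., 2022) is an alternative.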

Bibliography

Blackwood, D. H. R., & Muir, W. J. (1990). Cognitive Brain Potentials and their Application. The British Journal of Psychiatry, 157(S9), 96–101. https://doi.org/10.1192/S0007125000291897

Bukvic, I. (2009, January 1). L2Ork » Linux Laptop Orchestra. http://l2ork.music.vt.edu/main/

Bukvic, I. (2020, January 1). L2Ork » L2Ork Tweeter. https://l2ork.music.vt.edu/main/make-your-own-l2ork/tweeter/

Csikszentmihalyi, M. (2009). Flow: The Psychology of Optimal Experience. Harper Collins.

de Manzano, O., Theorell, T., Harmat, L., & Ullén, F. (2010). The psychophysiology of flow during piano playing. Emotion (Washington, D.C.), 10(3), 301–311. https://doi.org/10.1037/a0018432

Eck, D., & Schmidhuber, J. (2002a). A First Look at Music Composition using LSTM Recurrent Neural Networks [Technical Report]. Istituto Dalle Molle di Studi sull'Intelligenza Artificiale.

Eck, D., & Schmidhuber, J. (2002b). Finding temporal structure in music: Blues improvisation with LSTM recurrent networks. Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, 747–756. https://doi.org/10.1109/NNSP.2002.1030094

Ferreira, L., & Whitehead, J. (2019). Learning to Generate Music With Sentiment. Proceedings of the 20th International Society for Music Information Retrieval Conference, ISMIR 2019, Delft, The Netherlands, November 4-8, 2019. http://archives.ismir.net/ismir2019/paper/000045.pdf

Gifford, T., Knotts, S., Kalonaris, S., & Mccormack, J. (2017). Evaluating Improvisational Interfaces. https://www.semanticscholar.org/paper/Evaluating-Improvisational-Interfaces-Gifford-Knotts/cc4f195806907b7be79cff4d3e4708e2b9fa8c2e

Huang, C.-Z. A., Cooijmans, T., Roberts, A., Courville, A., & Eck, D. (2019). Counterpoint by Convolution (No. arXiv:1903.07227). arXiv. https://doi.org/10.48550/arXiv.1903.07227

Huang, C.-Z. A., Hawthorne, C., Roberts, A., Dinculescu, M., Wexler, J., Hong, L., & Howcroft, J. (2019). The Bach Doodle: Approachable music composition with machine learning at scale (No. arXiv:1907.06637). arXiv. https://doi.org/10.48550/arXiv.1907.06637

Huang, C.-Z. A., Vaswani, A., Uszkoreit, J., Shazeer, N., Simon, I., Hawthorne, C., Dai, A. M., Hoffman, M. D., Dinculescu, M., & Eck, D. (2018). Music Transformer. arXiv:1809.04281 [Cs, Eess, Stat]. http://arxiv.org/abs/1809.04281

Hung, H.-T., Ching, J., Doh, S., Kim, N., Nam, J., & Yang, Y.-H. (2021). EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation (No. arXiv:2108.01374). arXiv. https://doi.org/10.48550/arXiv.2108.01374

Jackson, H. (2002). Chapter 11—Toward a Symbiotic Coevolutionary Approach to Architecture. In P. J. Bentley & D. W. Corne (Eds.), Creative Evolutionary Systems (pp. 299–313). Morgan Kaufmann. https://doi.org/10.1016/B978-155860673-9/50049-5

Janisse, M. P. (1970). Attitudinal effects of mere exposure: A replication and extension. Psychonomic Science, 19(2), 77–78. https://doi.org/10.3758/BF03337428

Kerobo, J. A., & Bukvic, I. I. (2024). Real-Time Human-Classified Emotional MIDI Dataset Integration for Symbolic Music Generation. 2024 International Conference on Machine Learning and Applications (ICMLA), 520–527. https://doi.org/10.1109/ICMLA61862.2024.00076

Kim, T., Chung, M., Jeong, E., Cho, Y. S., Kwon, O.-S., & Kim, S.-P. (2023). Cortical representation of musical pitch in event-related potentials. Biomedical Engineering Letters, 13(3), 441–454. https://doi.org/10.1007/s13534-023-00274-y

Landhäußer, A., & Keller, J. (2012). Flow and Its Affective, Cognitive, and Performance-Related Consequences. In S. Engeser (Ed.), Advances in Flow Research (pp. 65–85). Springer. https://doi.org/10.1007/978-1-4614-2359-1_4

Medsker, L., & Jain, L. C. (1999). Recurrent neural networks: Design and applications. CRC press. https://books.google.com/books?hl=en&lr=&id=ME1SAkN0PyMC&oi=fnd&pg=PA1&dq=recurrent+neural+network&ots=7dwydO7PSq&sig=jwgxc7HRnV3beTkq6cm86a66X24

MIDI.org – Expanding, promoting, and protecting MIDI technology for the benefit of artists and musicians around the world. (n.d.). Retrieved October 5, 2024, from https://midi.org/

Nakamura, J., & Roberts, S. (2016). The Hypo-egoic Component of Flow. In K. W. Brown & M. R. Leary (Eds.), The Oxford Handbook of Hypo-egoic Phenomena. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199328079.013.9

Oore, S., Simon, I., Dieleman, S., Eck, D., & Simonyan, K. (2020). This time with feeling: Learning expressive musical performance. Neural Computing and Applications, 32(4), 955–967. https://doi.org/10.1007/s00521-018-3758-9

Pachet, F. (2006). Enhancing individual creativity with interactive musical reflexive systems. In Musical Creativity (pp. 375–391). Psychology Press. https://doi.org/10.4324/9780203088111-35

Panda, R. E. S., Malheiro, R., Rocha, B., Oliveira, A. P., & Paiva, R. P. (2013). Multi-Modal Music Emotion Recognition: A New Dataset, Methodology and Comparative Analysis. 10th International Symposium on Computer Music Multidisciplinary Research (CMMR 2013), 570–582. https://estudogeral.uc.pt/handle/10316/94095

Peifer, C., Wolters, G., Harmat, L., Heutte, J., Tan, J., Freire, T., Tavares, D., Fonte, C., Andersen, F. O., van den Hout, J., Šimleša, M., Pola, L., Ceja, L., & Triberti, S. (2022). A Scoping Review of Flow Research. Frontiers in Psychology, 13, 815665. https://doi.org/10.3389/fpsyg.2022.815665

Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178. https://doi.org/10.1037/h0077714

Schlagowski, R., Nazarenko, D., Can, Y., Gupta, K., Mertes, S., Billinghurst, M., & André, E. (2023). Wish You Were Here: Mental and Physiological Effects of Remote Music Collaboration in Mixed Reality. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3544548.3581162

Šimleša, M., Guegan, J., Blanchard, E., Tarpin-Bernard, F., & Buisine, S. (2018). The Flow Engine Framework: A Cognitive Model of Optimal Human Experience. Europe’s Journal of Psychology, 14(1), 232–253. https://doi.org/10.5964/ejop.v14i1.1370

Sulun, S., Davies, M. E. P., & Viana, P. (2022). Symbolic Music Generation Conditioned on Continuous-Valued Emotions. IEEE Access, 10, 44617–44626. IEEE Access. https://doi.org/10.1109/ACCESS.2022.3169744

Sur, S., & Sinha, V. K. (2009). Event-related potential: An overview. Industrial Psychiatry Journal, 18(1), 70–73. https://doi.org/10.4103/0972-6748.57865

Wang, S., Wang, T., Chen, N., & Luo, J. (2020). The preconditions and event-related potentials correlates of flow experience in an educational context. Learning and Motivation, 72, 101678. https://doi.org/10.1016/j.lmot.2020.101678

Wright, M., & Freed, A. (1997). Open SoundControl: A new protocol for communicating with sound synthesizers. ICMC. https://www.adrianfreed.com/sites/default/files/open-soundcontrol-a-new-protocol-for-communicating-with.pdf

Wrigley, W. J., & Emmerson, S. B. (2013). The experience of the flow state in live music performance. Psychology of Music, 41(3), 292–305. https://doi.org/10.1177/0305735611425903

Yu, Y., Si, X., Hu, C., & Zhang, J. (2019). A review of recurrent neural networks: LSTM cells and network architectures. Neural Computation, 31(7), 1235–1270.

Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology, 9(2, Pt.2), 1–27. https://doi.org/10.1037/h0025848