PI: Emmanuel Vincent (firstname.lastname@example.org)
Start: between December 2015 and March 2016
Duration: 16 months
To apply: send a CV, a motivation letter, a list of publications, and one or more recommendation letters to email@example.com by October 9, 2015
Automatic music improvisation aims to enable a machine to listen to other musicians and improvise with them in real time. While recurrent neural networks (RNNs) have shown their benefit for the generation of pitch sequences [1] and polyphonic music [2,3], current improvisation systems still rely on variable-order N-grams of pitch sequences, which can be learned in real time [4].
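As background, a variable-order N-gram model of pitch sequences can be updated online as new notes arrive, backing off from the longest matching context to shorter ones when predicting. The sketch below is purely illustrative (class and parameter names are my own, not from any cited system; pitches are assumed to be MIDI note numbers):

```python
import random
from collections import defaultdict

class VariableOrderNGram:
    """Hypothetical sketch of a variable-order N-gram over pitches,
    learned incrementally (one update per incoming note)."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        # counts[context_tuple][next_pitch] -> occurrence count
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, history, pitch):
        # Count the incoming pitch under every context suffix,
        # from the longest allowed down to the empty context.
        for k in range(min(len(history), self.max_order), -1, -1):
            ctx = tuple(history[len(history) - k:])
            self.counts[ctx][pitch] += 1

    def predict(self, history, rng=random):
        # Back off: try the longest matching context first.
        for k in range(min(len(history), self.max_order), -1, -1):
            ctx = tuple(history[len(history) - k:])
            nexts = self.counts.get(ctx)
            if nexts:
                pitches = list(nexts)
                weights = [nexts[p] for p in pitches]
                return rng.choices(pitches, weights=weights)[0]
        return None  # nothing learned yet
```

Such a model can be trained note by note during a performance, which is what makes N-grams attractive for real-time use despite their limited expressiveness compared to RNNs.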
The goal of this postdoc position is to introduce the use of (potentially deep) RNNs in the context of automatic music improvisation. One or more of the following challenges shall be investigated:
– learn the RNN from a small amount of data using musically-motivated network architectures and parameter tying strategies
– update the RNN and generate meaningful music in real time given input by the other musicians
– jointly model heterogeneous musical dimensions (pitch, rhythm, harmony…) along the lines of [5]
– jointly account for multiple time scales (tatum, beat, bar, structural block…)
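To make the modeling target concrete, a vanilla RNN over pitch sequences maintains a hidden state that is updated at each note and emits a distribution over the next pitch. The minimal numpy sketch below (random, untrained weights; all names and sizes are illustrative assumptions, not the project's actual model) shows the forward recurrence only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pitch, n_hidden = 128, 32                     # MIDI pitch vocabulary, hidden units

Wxh = rng.normal(0, 0.1, (n_hidden, n_pitch))   # input-to-hidden weights
Whh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden-to-hidden (recurrence)
Why = rng.normal(0, 0.1, (n_pitch, n_hidden))   # hidden-to-output weights

def step(h, pitch):
    """One RNN time step: consume a pitch, return the new hidden state
    and a softmax distribution over the next pitch."""
    x = np.zeros(n_pitch)
    x[pitch] = 1.0                              # one-hot encode the input pitch
    h = np.tanh(Wxh @ x + Whh @ h)              # recurrent state update
    logits = Why @ h
    p = np.exp(logits - logits.max())           # numerically stable softmax
    return h, p / p.sum()

h = np.zeros(n_hidden)
for pitch in [60, 64, 67]:                      # feed a C major arpeggio
    h, probs = step(h, pitch)
next_pitch = int(probs.argmax())                # or sample, for improvisation
```

The challenges listed above all concern what surrounds this recurrence: how to learn the weights from little data, update them in real time, and extend the state and output to cover multiple musical dimensions and time scales.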
To do so, we will leverage recent advances both in deep learning and in music modeling, e.g., [6,7].
This position is part of a funded project with Ircam. The successful candidate will collaborate with a PhD student and participate in project meetings at Ircam.
Salary: 2600 €/month gross, plus free health insurance and additional benefits
Prospective candidates should hold or be about to obtain a PhD in machine learning or in speech and music processing. Knowledge of RNNs and hands-on RNN programming experience (e.g., with Theano) are required. Previous experience with music is not required but would be an asset.
[1] D. Eck and J. Schmidhuber, "Finding temporal structure in music: Blues improvisation with LSTM recurrent networks", in Proc. NNSP, 2002.
[2] N. Boulanger-Lewandowski, Y. Bengio, and P. Vincent, "Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription", in Proc. ICML, 2012.
[3] I.-T. Liu and B. Ramakrishnan, "Bach in 2014: Music composition with recurrent neural network", arXiv:1412.3191, 2014.
[4] G. Assayag and S. Dubnov, "Using factor oracles for machine improvisation", Soft Computing, 2004.
[5] G. Bickerman, S. Bosley, P. Swire, and R. M. Keller, "Learning to create jazz melodies using deep belief nets", in Proc. ICCC, 2010.
[6] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions", in Proc. CVPR, 2015.
[7] F. Bimbot, G. Sargent, E. Deruty, C. Guichaoua, and E. Vincent, "Semiotic description of music structure: An introduction to the Quaero/Metiss structural annotations", in Proc. AES 53rd Int. Conf. on Semantic Audio, 2014.
Posted by: Aditya Arie Nugraha <firstname.lastname@example.org>