Paper Title
Hierarchical Timbre-Painting and Articulation Generation
Paper Authors
Paper Abstract
We present a fast and high-fidelity method for music generation, based on specified f0 and loudness, such that the synthesized audio mimics the timbre and articulation of a target instrument. The generation process consists of learned source-filtering networks, which reconstruct the signal at increasing resolutions. The model optimizes a multi-resolution spectral loss as the reconstruction loss, an adversarial loss to make the audio sound more realistic, and a perceptual f0 loss to align the output to the desired input pitch contour. The proposed architecture enables high-quality fitting of an instrument, given a sample that can be as short as a few minutes, and the method demonstrates state-of-the-art timbre transfer capabilities. Code and audio samples are shared at https://github.com/mosheman5/timbre_painting.
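Below is a minimal, illustrative sketch (not the authors' released code) of the multi-resolution spectral loss described in the abstract: an L1 distance between log-magnitude spectrograms computed at several STFT resolutions and averaged. The FFT sizes, hop lengths, and the log epsilon are assumptions chosen for illustration only.

```python
# Minimal sketch of a multi-resolution spectral (STFT) loss, assuming PyTorch.
# FFT sizes, hop lengths, and eps are illustrative; the paper's exact settings
# are in the linked repository.
import torch


def multi_resolution_spectral_loss(pred: torch.Tensor,
                                   target: torch.Tensor,
                                   fft_sizes=(2048, 1024, 512, 256),
                                   eps: float = 1e-7) -> torch.Tensor:
    """Average L1 distance between log-magnitude STFTs at several resolutions."""
    loss = 0.0
    for n_fft in fft_sizes:
        hop = n_fft // 4
        window = torch.hann_window(n_fft, device=pred.device)
        # Magnitude spectrograms of generated and target audio at this resolution.
        spec_p = torch.stft(pred, n_fft, hop_length=hop, window=window,
                            return_complex=True).abs()
        spec_t = torch.stft(target, n_fft, hop_length=hop, window=window,
                            return_complex=True).abs()
        loss = loss + torch.mean(torch.abs(torch.log(spec_p + eps)
                                           - torch.log(spec_t + eps)))
    return loss / len(fft_sizes)


if __name__ == "__main__":
    # Example usage on random 1-second mono batches at 16 kHz (placeholder data).
    generated = torch.randn(2, 16000)
    reference = torch.randn(2, 16000)
    print(multi_resolution_spectral_loss(generated, reference).item())
```

In the paper's full objective this reconstruction term is combined with an adversarial loss and a perceptual f0 loss; only the spectral term is sketched here.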