Generative autoregressive networks for 3D dancing move synthesis from music

Hyemin Ahn, Jaehun Kim, Kihyun Kim, Songhwai Oh*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

31 Citations (Scopus)

Abstract

This letter proposes a framework that generates a sequence of three-dimensional human dance poses for a given piece of music. The proposed framework consists of three components: a music feature encoder, a pose generator, and a music genre classifier. We focus on integrating these components to generate realistic 3D human dance moves from music, which can be applied to artificial agents and humanoid robots. The trained dance pose generator, a generative autoregressive model, can synthesize a dance sequence longer than 1,000 pose frames. Experimental results on dance sequences generated from various songs show how the proposed method produces human-like dance moves for a given piece of music. In addition, a generated 3D dance sequence is applied to a humanoid robot, showing that the proposed framework can make a robot dance simply by listening to music.
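To make the three-component pipeline described above concrete, the following is a minimal PyTorch sketch of one plausible realization: a music feature encoder, an autoregressive pose generator conditioned on the music features and the previously generated pose, and a genre classifier on pooled music features. All module names, the GRU-based design, and the dimensions (40-D audio features such as MFCCs, a 63-D pose for 21 joints × 3 coordinates) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the paper's actual model).
import torch
import torch.nn as nn

class MusicEncoder(nn.Module):
    """Encodes per-frame audio features (e.g., MFCCs) into a latent sequence."""
    def __init__(self, audio_dim=40, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, hidden_dim, batch_first=True)

    def forward(self, audio):                  # audio: (B, T, audio_dim)
        feats, _ = self.rnn(audio)             # feats: (B, T, hidden_dim)
        return feats

class PoseGenerator(nn.Module):
    """Autoregressive generator: each pose is conditioned on the current
    music feature and the previously generated pose."""
    def __init__(self, music_dim=128, pose_dim=63, hidden_dim=256):
        super().__init__()
        self.cell = nn.GRUCell(music_dim + pose_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, pose_dim)
        self.pose_dim = pose_dim

    def forward(self, music_feats):            # music_feats: (B, T, music_dim)
        B, T, _ = music_feats.shape
        h = music_feats.new_zeros(B, self.cell.hidden_size)
        pose = music_feats.new_zeros(B, self.pose_dim)   # initial rest pose
        poses = []
        for t in range(T):                     # one pose per music frame
            h = self.cell(torch.cat([music_feats[:, t], pose], dim=-1), h)
            pose = self.out(h)                 # feed back into the next step
            poses.append(pose)
        return torch.stack(poses, dim=1)       # (B, T, pose_dim)

class GenreClassifier(nn.Module):
    """Predicts a music genre from temporally pooled music features."""
    def __init__(self, music_dim=128, n_genres=4):
        super().__init__()
        self.fc = nn.Linear(music_dim, n_genres)

    def forward(self, music_feats):
        return self.fc(music_feats.mean(dim=1))          # (B, n_genres)

# Synthesize a sequence longer than 1,000 pose frames from dummy audio features.
audio = torch.randn(1, 1200, 40)
encoder, generator = MusicEncoder(), PoseGenerator()
dance = generator(encoder(audio))
print(dance.shape)                             # torch.Size([1, 1200, 63])
print(GenreClassifier()(encoder(audio)).shape) # torch.Size([1, 4])
```

Because generation is autoregressive, the sequence length is bounded only by the music's duration, which is consistent with the abstract's claim of synthesizing more than 1,000 pose frames.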

Original language: English
Article number: 9019619
Pages (from-to): 3501-3508
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 2
DOIs
Publication status: Published - 1 Apr 2020

Keywords

  • Entertainment robotics
  • Gesture, posture and facial expressions
  • Novel deep learning methods
