Researchers at UC Berkeley have found a new application for AI: helping anyone bust a move. Described as a sort of deepfake for dance, the new method was first spotted by The Verge, and the research was published on arXiv. The paper describes how deep learning techniques can be used to transfer movement between people in different videos. In simple terms, the method transfers dance moves from a source video onto a subject in a target video. In practice, though, it takes a number of steps to achieve good results.
The whole process of transferring a dance routine can be divided into three stages: pose detection, global pose normalization, and mapping from normalized pose stick figures to the target subject. To detect poses, the system needs a source video and a target video onto which the routine will be superimposed. A subprogram then converts the movements in the videos into stick figures. For the target video, about 20 minutes of good-quality footage at 120 frames per second is required for the program to produce a clean transfer of poses to stick figures. In addition, loose clothing is not suitable: subjects in the target video must wear tight clothes with minimal wrinkles.
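To make the first stage concrete, here is a minimal Python sketch of what pose detection hands to the rest of the pipeline: a detector (such as OpenPose) returns 2D joint coordinates per frame, and connecting related joints yields the stick figure used as the intermediate representation. The joint names and skeleton edges below are illustrative assumptions, not the paper's exact keypoint set.

```python
# Illustrative skeleton: pairs of joints connected by a line in the stick figure.
SKELETON = [
    ("head", "neck"), ("neck", "l_shoulder"), ("neck", "r_shoulder"),
    ("l_shoulder", "l_elbow"), ("l_elbow", "l_wrist"),
    ("r_shoulder", "r_elbow"), ("r_elbow", "r_wrist"),
    ("neck", "hip"), ("hip", "l_knee"), ("l_knee", "l_ankle"),
    ("hip", "r_knee"), ("r_knee", "r_ankle"),
]

def stick_figure(keypoints):
    """Turn a {joint: (x, y)} dict into drawable line segments.

    Joints the detector missed (absent from the dict) are simply skipped,
    mirroring how real pose detectors return partial poses per frame.
    """
    segments = []
    for a, b in SKELETON:
        if a in keypoints and b in keypoints:
            segments.append((keypoints[a], keypoints[b]))
    return segments
```

Rendering these segments onto a blank canvas, frame by frame, produces the stick-figure video that the later stages consume.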
It is a given that there will be differences between the people dancing in the source video and the target video: their bodies differ, and so do their positions in the frame. This is where the tricky part, global pose normalization, comes into play, adjusting the source poses to account for those differences. Finally, the team built a system to learn the mapping from the normalized pose stick figures, created in the first stage of the process, to images of the target person, using adversarial training.
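The normalization step can be sketched in a few lines of Python. This is a deliberately simplified version under stated assumptions: rescale the source keypoints by the ratio of the two subjects' body heights, then translate them so the feet land where the target's feet are. The function name and parameters are hypothetical; the actual system uses a more careful per-frame mapping between the poses.

```python
def normalize_pose(src_pts, src_height, src_ankle_y, tgt_height, tgt_ankle_y):
    """Linearly rescale and translate source keypoints into the target's frame.

    src_pts: {joint: (x, y)} keypoints from one source-video frame.
    src_height / tgt_height: overall body heights of the two subjects.
    src_ankle_y / tgt_ankle_y: vertical ankle positions used as the anchor.
    """
    scale = tgt_height / src_height
    out = {}
    for joint, (x, y) in src_pts.items():
        # Scale around the ankle line, then shift onto the target's ankle line.
        out[joint] = (x * scale, tgt_ankle_y + (y - src_ankle_y) * scale)
    return out
```

Without this step, a tall dancer's pose pasted onto a short subject would float above the floor or clip through it, which is exactly the mismatch the paper's normalization is designed to remove.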
The whole process is assisted by subroutines that smooth the movements so that there are fewer jerks. There is also a dedicated neural network for generating the face of the person in the target video, to increase realism. However, there are cases where the AI fails to transfer the dance moves. This is especially evident when the input motion or the speed of movement differs between the source and the target: rendering errors and jitter appear when the source's movements are too fast to map onto the person dancing in the target video. Nevertheless, the results are both impressive and worrying, showing how accurate, and how easy, AI-driven video manipulation can be.
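One common way to damp frame-to-frame jerkiness of the kind described above is a simple exponential moving average over the keypoints. The sketch below is an illustrative stand-in, not the paper's actual smoothing mechanism (which conditions the image generator on the previous frame rather than filtering coordinates).

```python
def smooth_keypoints(frames, alpha=0.5):
    """Exponential moving average over per-frame keypoints to damp jitter.

    frames: list of {joint: (x, y)} dicts, one per video frame.
    alpha:  weight of the current frame; lower values smooth more aggressively.
    """
    smoothed, prev = [], {}
    for frame in frames:
        cur = {}
        for joint, (x, y) in frame.items():
            if joint in prev:
                px, py = prev[joint]
                # Blend the new position with the previous smoothed position.
                cur[joint] = (alpha * x + (1 - alpha) * px,
                              alpha * y + (1 - alpha) * py)
            else:
                cur[joint] = (x, y)  # first sighting of this joint
        smoothed.append(cur)
        prev = cur
    return smoothed
```

The trade-off is the one the article hints at: heavier smoothing hides jitter but lags behind fast movements, which is exactly when the transfer already struggles.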