
surat

Implementation based on this publication:

https://research.nvidia.com/publication/2017-07_Audio-Driven-Facial-Animation

Modified to try morph targets as output; however, that data wasn't enough, so my latest commits use per-vertex data instead. Also modified to use MFCC features instead of autocorrelations.
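For context, here is a minimal sketch of how MFCC input features for one animation frame might be extracted, assuming librosa; the window length, sample rate, frame rate, and coefficient count below are illustrative assumptions, not the repo's actual preprocessing (the paper pairs each output frame with roughly a 520 ms audio window).

```python
# Minimal sketch (not the repo's code): MFCC features for the audio
# window centered on one animation frame, assuming librosa.
import numpy as np
import librosa

def mfcc_for_frame(wav_path, frame_index, fps=30.0,
                   window_s=0.52, sr=16000, n_mfcc=13):
    """Return an (n_mfcc, T) MFCC block for the window around a frame."""
    y, sr = librosa.load(wav_path, sr=sr)
    half = int(window_s * sr / 2)
    center = int(frame_index / fps * sr) + half  # shift for the padding below
    y = np.pad(y, (half, half))                  # keep edge windows full-sized
    chunk = y[center - half:center + half]
    return librosa.feature.mfcc(y=chunk, sr=sr, n_mfcc=n_mfcc)
```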

https://vimeo.com/338394571

The model in this video was trained with about 30 seconds of data, roughly a tenth of what was used for the publication above. The clips in the video are for validation; they don't appear in the training data.

The rig is from this iOS app:

http://www.bannaflak.com/face-cap/
