FaceFX Documentation and support
Here is how analysis works: select a WAV file and, presto, out comes your animation. Under the hood, the following operations are performed for each audio file (in batch operations, these steps are repeated once per file):
- The preloadaudio Python callback is called as the audio is loaded, to verify that it is valid.
- The preloadaudio Python callback is called a second time when the audio is loaded for real.
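The two-phase pre-load callback can be sketched as follows. The function names and the `validate_only` signature here are illustrative assumptions for the sketch, not the actual FaceFX scripting API.

```python
# Hypothetical sketch of the two-phase pre-load callback described above.
# Names and signatures are illustrative, not the real FaceFX API.

def preloadaudio(path, validate_only):
    """User callback: return False to reject the audio file."""
    if validate_only:
        # First call: a cheap validation pass before any real work happens.
        return path.lower().endswith(".wav")
    # Second call: the audio is being loaded for real; a callback could
    # fix up metadata or log the load here.
    return True

def load_audio(path, callback=preloadaudio):
    """Drive the callback twice, mirroring the documented behavior."""
    if not callback(path, validate_only=True):
        raise ValueError(f"{path} failed pre-load validation")
    callback(path, validate_only=False)
    return path  # stand-in for the loaded audio object
```

Calling the callback once for validation and once for the real load lets a script reject bad files cheaply before any analysis work is done.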
- The text is analyzed for chunking tags. Then, for each text chunk (or for the entire text file if no chunk tags exist):
    - Punctuation is stripped from the text prior to text tag processing.
    - The text chunk is stripped of valid text tags.
    - Invalid text tags, ones that are not in the expected format but do have an opening and closing text tag marker, are also removed and are effectively ignored by the system.
    - The chunk is analyzed, producing the final phoneme list.
    - Coarticulation is run on the final phoneme list, adding the curves specified in the mapping to the output animation.
    - Curve text tags are analyzed, and the curves they specify are inserted into the output animation.
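The per-chunk text cleanup above can be sketched in plain Python. This is only an illustration: the curly-brace tag delimiters and the `{curve <name> ...}` "expected format" are assumptions made for the example, not the actual FaceFX text tag syntax, so consult the text tagging documentation for the real rules.

```python
import re
import string

# Assumed tag syntax for this sketch only: anything between "{" and "}" is a
# text tag, and a tag matching "{curve <name> ...}" counts as "valid".
TAG = re.compile(r"\{[^{}]*\}")
VALID_TAG = re.compile(r"\{curve\s+\w+.*\}")

def clean_chunk(text):
    """Strip punctuation and text tags from one chunk, keeping valid tags."""
    # Punctuation (other than the tag markers themselves) is stripped first,
    # mirroring the "prior to text tag processing" step above.
    punct = string.punctuation.replace("{", "").replace("}", "")
    text = text.translate(str.maketrans("", "", punct))
    # Tags in the expected format are collected for later curve insertion.
    curve_tags = [t for t in TAG.findall(text) if VALID_TAG.match(t)]
    # Valid and invalid tags alike are removed from the text; invalid ones
    # are simply ignored rather than raising an error.
    text = TAG.sub("", text)
    return " ".join(text.split()), curve_tags
```

For example, `clean_chunk("Hello, world! {curve blink} {bogus} How are you?")` returns the cleaned text along with only the well-formed tag, silently dropping the malformed one.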
- Analysis events are generated from the audio and added to the gesture animation.
- An event take is performed on the gesture animation.
- The gesture animation and its events are baked, and analysis curves are generated for every node in the analysis actor whose name does not start with an underscore. The resulting gesture curves are copied from the gesture animation to the output animation.
- Events from the gesture animation take are copied to the output animation's event template, except that events from groups whose names begin with an underscore are left behind.
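The underscore convention used in the last two steps can be illustrated with plain Python dictionaries standing in for FaceFX nodes and event groups; this is a sketch of the naming rule, not the actual FaceFX data model.

```python
# Names beginning with "_" stay private to the gesture pass; everything else
# is carried over to the output animation. Dicts stand in for FaceFX objects.

def split_by_underscore(named_items):
    """Partition a name -> value mapping into (copied, left_behind)."""
    copied = {n: v for n, v in named_items.items() if not n.startswith("_")}
    private = {n: v for n, v in named_items.items() if n.startswith("_")}
    return copied, private

# The same rule applies to analysis-actor nodes when baking curves...
node_curves = {"Blink": [0.0, 1.0, 0.0], "_Scratch": [0.5]}
copied_curves, _ = split_by_underscore(node_curves)

# ...and to event groups when copying events to the event template.
event_groups = {"Gestures": ["nod"], "_Internal": ["debug"]}
copied_events, left_behind = split_by_underscore(event_groups)
```

Prefixing a node or group name with an underscore is thus a simple way to keep intermediate gesture data out of the final output animation.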