Jenkin, Michael
Dates: 2019-07-02; 2019-07-02; 2019-02-04; 2019-07-02
http://hdl.handle.net/10315/36254

Abstract:
Adding an interactive avatar to a human-robot interface requires the development of tools that animate the avatar so as to simulate an intelligent conversation partner. Here we describe a toolkit that supports interactive avatar modeling for human-computer interaction. The toolkit utilizes cloud-based speech-to-text software that provides active listening, a cloud-based AI to generate appropriate textual responses to user queries, and a cloud-based text-to-speech generation engine to generate utterances for this text. This output is combined with a cloud-based 3D avatar animation synchronized to the spoken response. Generated text responses are embedded within an XML structure that allows for tuning the nature of the avatar animation to simulate different emotional states. An expression package controls the avatar's facial expressions. The introduced rendering latency is obscured through parallel processing and an idle loop process that animates the avatar between utterances. The efficiency of the approach is validated through a formal user study.

Language: en
Rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
Title: A Cloud-Based Extensible Avatar For Human Robot Interaction
Type: Electronic Thesis or Dissertation
Date: 2019-07-02
Subjects: HCI; HRI; Robotics; Avatar; Text-to-speech; Speech-to-text; AI; Cloud-Based; Parallel processing; Distributed processing; Artificial intelligence; Human-computer Interaction; Human-robot interaction; Rendering; Animation; XML
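The abstract describes a pipeline of cloud services (speech-to-text, AI response generation, text-to-speech, avatar animation) with responses wrapped in XML for emotional tuning, and with parallel processing used to hide rendering latency. A minimal sketch of that flow, with the cloud services replaced by hypothetical stand-in functions since the thesis does not name specific APIs here:

```python
# Hypothetical sketch of the described pipeline. All function names and the
# XML element/attribute names are assumptions for illustration; the actual
# cloud services and schema are detailed in the thesis itself.
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor


def speech_to_text(audio: bytes) -> str:
    # Stand-in for the cloud speech-to-text ("active listening") service.
    return "hello avatar"


def generate_response(text: str) -> tuple[str, str]:
    # Stand-in for the cloud AI: returns reply text plus an emotion label.
    return "Hello! How can I help?", "happy"


def wrap_in_xml(reply: str, emotion: str) -> str:
    # Responses are embedded in an XML structure so the animation can be
    # tuned per-utterance to simulate different emotional states.
    utterance = ET.Element("utterance", emotion=emotion)
    utterance.text = reply
    return ET.tostring(utterance, encoding="unicode")


def text_to_speech(xml_reply: str) -> bytes:
    # Stand-in for the cloud text-to-speech engine.
    return b"synthesized-audio"


def animate_avatar(xml_reply: str) -> str:
    # Stand-in for the cloud 3D avatar animation, driven by the same
    # XML-tagged reply so expression and speech stay synchronized.
    return "animation-frames"


def respond(audio: bytes) -> tuple[bytes, str]:
    text = speech_to_text(audio)
    reply, emotion = generate_response(text)
    xml_reply = wrap_in_xml(reply, emotion)
    # Speech synthesis and animation run in parallel, which is one way
    # the introduced rendering latency can be obscured.
    with ThreadPoolExecutor(max_workers=2) as pool:
        audio_future = pool.submit(text_to_speech, xml_reply)
        frames_future = pool.submit(animate_avatar, xml_reply)
        return audio_future.result(), frames_future.result()
```

Between utterances, an idle-loop process (not shown) would keep animating the avatar so it never appears frozen while waiting for the cloud round trips.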