AIR/Java Augmented Reality Talking Head

I have been working on this project for the past few weeks, and it is now coming together nicely, so I thought I would write a short blog entry about it. The idea is to create an Augmented Reality head with which one can have a conversation.

So far, I can type something and have the augmented reality head answer me back.

Under the hood, an AIR client (built using Cairngorm) communicates with a Java server over BlazeDS using Remote Objects. The typed text is sent to the Java application, where a response is generated using AIML and a Java chatbot framework. That text response is then passed to a text-to-speech (TTS) socket server, which generates both an MP3 byte array and something called MBROLA input format: a stream of text symbols (phonemes), each with a duration in milliseconds, that map to visemes (mouth shapes).
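The server-side flow above can be sketched roughly as follows. This is a minimal, hypothetical Java sketch: the class and method names (`TalkingHeadService`, `ChatResponse`, `askBot`, `synthesize`) are placeholders, and the chatbot and TTS calls are stubbed rather than the real AIML framework and socket server.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the server-side pipeline. ChatResponse is the
// value object returned to the AIR client over BlazeDS; askBot and
// synthesize stand in for the AIML chatbot and the TTS socket server.
public class TalkingHeadService {

    public static class ChatResponse {
        public final byte[] mp3;     // synthesized speech as an MP3 byte array
        public final String mbrola;  // phoneme/duration stream for the visemes

        public ChatResponse(byte[] mp3, String mbrola) {
            this.mp3 = mp3;
            this.mbrola = mbrola;
        }
    }

    // Stand-in for the AIML chatbot call.
    static String askBot(String input) {
        return "Hello, I heard: " + input;
    }

    // Stand-in for the TTS socket server: returns audio plus MBROLA text.
    // Each MBROLA line is "phoneme durationMs"; "_" is silence.
    static ChatResponse synthesize(String reply) {
        String mbrola = "h 55\ne 80\nl 60\no 120\n_ 100";
        return new ChatResponse(reply.getBytes(StandardCharsets.UTF_8), mbrola);
    }

    // The remote method the AIR client would invoke via a RemoteObject.
    public static ChatResponse chat(String input) {
        return synthesize(askBot(input));
    }
}
```

In the real application the two stubs would call out to the AIML engine and the TTS socket server, but the shape of the response object is the key point: audio and viseme timing travel back together in one payload.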

The whole lot is packaged and sent back over the wire via BlazeDS to an Augmented Reality viewer built as an advanced Flex visual component (using Papervision3D and FLARToolkit). The head model was created in Maya and is an animated Collada with 13 different mouth shapes, which have been mapped to the output received from the MBROLA stream.
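The phoneme-to-mouth-shape mapping might look something like the Java sketch below. The post does not show the real 13-shape table, so these groupings are purely illustrative: many phonemes share one viseme (for example, the bilabials p/b/m all produce a closed-lips shape).

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative mapping from MBROLA phoneme symbols to the head's
// 13 mouth-shape frames. The actual table used in the project is
// not shown in the post; these groupings are assumptions.
public class VisemeMap {

    static final Map<String, Integer> PHONEME_TO_VISEME = new HashMap<>();
    static {
        PHONEME_TO_VISEME.put("_", 0);  // silence -> closed/rest mouth
        PHONEME_TO_VISEME.put("p", 1);  // bilabials share one shape
        PHONEME_TO_VISEME.put("b", 1);
        PHONEME_TO_VISEME.put("m", 1);
        PHONEME_TO_VISEME.put("f", 2);  // labiodentals
        PHONEME_TO_VISEME.put("v", 2);
        PHONEME_TO_VISEME.put("A", 3);  // open vowel
        PHONEME_TO_VISEME.put("O", 4);  // rounded vowel
        // ... remaining phonemes would fill frames 5-12
    }

    // Unknown symbols fall back to the rest shape.
    static int visemeFor(String phoneme) {
        return PHONEME_TO_VISEME.getOrDefault(phoneme, 0);
    }
}
```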

To play the speech response in AIR, the MP3 byte array is written to a temporary file, read into a Sound object, and played back. At the same time, the MBROLA stream is parsed into an ArrayCollection of frames (mouth shapes for the model head) and durations, which is then iterated over in the handler method of a timer.
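The parse-and-step logic can be sketched like this. The sketch is in Java standing in for the AIR-side ActionScript (a plain `List` in place of the ArrayCollection, and a lookup function in place of the Timer handler); the names are mine, not the project's.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of parsing the MBROLA stream into (phoneme, duration) frames
// and finding the frame active at a given elapsed time, which is what
// a timer handler would do on each tick to pick the mouth shape.
public class MbrolaFrames {

    static class Frame {
        final String phoneme;
        final int durationMs;

        Frame(String phoneme, int durationMs) {
            this.phoneme = phoneme;
            this.durationMs = durationMs;
        }
    }

    // Each MBROLA line is "phoneme durationMs [optional pitch targets]";
    // anything after the duration is ignored here.
    static List<Frame> parse(String stream) {
        List<Frame> frames = new ArrayList<>();
        for (String line : stream.split("\n")) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length >= 2) {
                frames.add(new Frame(parts[0], Integer.parseInt(parts[1])));
            }
        }
        return frames;
    }

    // Walk the cumulative durations to find the frame covering elapsedMs.
    static String phonemeAt(List<Frame> frames, int elapsedMs) {
        int clock = 0;
        for (Frame f : frames) {
            clock += f.durationMs;
            if (elapsedMs < clock) {
                return f.phoneme;
            }
        }
        return "_"; // past the end of the utterance: rest shape
    }
}
```

For example, with the stream `"h 55\ne 80\nl 60"`, an elapsed time of 60 ms falls inside the second frame, so the head would show the mouth shape for `e`.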

Coming soon, hopefully, will be speech recognition via the Merapi Java/AIR bridge, so that you can talk to the head.


8 comments so far

  1. […] of sources.  Javad is showing his contribution through this admirable project.  Stop by his blog and say hello. […]

  2. Thomas K Carpenter

    Nice video Javad. I’m impressed by how you got all those different programs to work together to achieve the resulting talking head. I featured the video on my AR blog. If you have any more updates, let me know and I’ll be happy to help get the word out.

    • javadz

      Thanks Thomas. Am working on speech recognition using the merapi Java/AIR Bridge so hopefully will have a demo before too long.

  3. Roxy Chaney

    Fabulous! I haven’t tried anything this ambitious yet. Are you using an ALICE-type bot for the response text?

  4. Kirky

    Hey Javad – really impressive – esp. how you are linking the languages so fluently. I am in Colombo working on language operating systems. Are there optional responses in an AIML query – so given an input A are there more possibilities than B? Or is it currently just In and Out? K

  5. flashplayer

    Great post!
