Augmented Reality Girl

It has been a while since my last blog post, as I have been busy working out of Paris and London on a very interesting and high-profile project: creating fully integrated Flex front ends as part of a global SAP implementation for one of the world's leading advertising agency groups.

Anyway, once I finished the Talking Head I decided it would be fun to push the boat out a little further and create a full-body character with a wider range of interactions. The result is this augmented reality girl. Pressing the Cmd key gets her to dance, and you can walk her around using the arrow keys. She also has all the lip-sync functionality from the talking head.
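Each of these actions plays back an animation clip that lives at known frames in the model's timeline. As a minimal Java sketch of that lookup (the class, clip names, and frame numbers are all invented for illustration; the real project maps phoneme mouth shapes and set pieces like the dance to its own frame ranges):

```java
import java.util.Map;

// Illustrative only: clip name -> {startFrame, endFrame} (inclusive).
// The actual frame ranges live in the exported model, not in code like this.
public class AnimationClips {
    static final Map<String, int[]> CLIPS = Map.of(
        "mouth_A", new int[]{0, 4},
        "mouth_O", new int[]{5, 9},
        "walk",    new int[]{10, 39},
        "dance",   new int[]{40, 139}
    );

    // Frame to display at time t (seconds) while looping a clip at `fps`.
    public static int frameAt(String clip, double t, double fps) {
        int[] range = CLIPS.get(clip);
        int length = range[1] - range[0] + 1;
        return range[0] + (int) (t * fps) % length;
    }
}
```

The client then only needs a clip name and a clock to decide which frame of the model to show on each tick.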

The technology used is the same as for the head: in fact, it is the same custom-written Java web application, deployed to Tomcat, using BlazeDS to communicate with the AIR application.
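For readers unfamiliar with BlazeDS: it exposes a plain Java class as a "remoting destination", whose public methods the AIR client can call over AMF through a Flex RemoteObject. A minimal sketch of such a class (the class name, method, and body are assumptions for illustration, not the project's actual code; in the real application something like this would return the encoded audio for a phrase):

```java
// Hypothetical BlazeDS remoting service. Once registered as a destination in
// WEB-INF/flex/remoting-config.xml and deployed to Tomcat, its public methods
// become callable from the Flex/AIR client.
public class SpeechService {

    // Placeholder body: echoes the phrase as bytes. The real service would
    // return the MP3 byte array for the synthesized speech.
    public byte[] synthesize(String phrase) {
        byte[] audio = new byte[phrase.length()];
        for (int i = 0; i < phrase.length(); i++) {
            audio[i] = (byte) phrase.charAt(i);
        }
        return audio;
    }
}
```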

I also briefly looked at the new sound API in the Flash 10.1 beta, but unfortunately it doesn't look like you can feed it an MP3 byte array; it requires raw sound samples instead. Being able to manipulate and dynamically generate audio is a great new feature, but it would be awesome to be able to generate MP3 byte arrays on a server and just load and play them directly from within the Flash Player. The drawback at the moment is that we still need to write the byte array out as an MP3 file on the local drive and then load it into a Sound object before playing it.
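To make the distinction concrete: the dynamic sound API works with raw PCM samples (floating-point values in the range -1 to 1, at 44.1 kHz), which is quite different data from a compressed MP3 byte array. A small Java sketch of generating that kind of raw sample data server-side (names and parameters are illustrative):

```java
// Generates raw PCM samples -- the kind of data the new dynamic sound API
// consumes directly, as opposed to the MP3 byte arrays our server produces.
public class PcmGenerator {
    static final int SAMPLE_RATE = 44100;

    // One sine tone of the given frequency and duration, as floats in [-1, 1].
    public static float[] sineTone(double frequencyHz, double seconds) {
        int n = (int) (SAMPLE_RATE * seconds);
        float[] samples = new float[n];
        for (int i = 0; i < n; i++) {
            samples[i] = (float) Math.sin(2 * Math.PI * frequencyHz * i / SAMPLE_RATE);
        }
        return samples;
    }
}
```

Shipping data like this over the wire instead of MP3 would be far bulkier, which is exactly why server-generated MP3 plus direct playback would be the nicer option.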


3 comments so far

  1. Asim Ahmed

    I like your work; I am also working in augmented reality, which you can view at http://www.charag.wordpress.com

    Do share your knowledge and guidelines. Thanks

  2. Alex

    Hi, this is very interesting work. I'm trying to understand how you set up the girl's animation, because I'm involved in an AR project with character animation. Did you animate her in Max and then export the data?

    • javadz

      The animations are in the model, which is exported in COLLADA format as a .dae file. The Flex application then uses the Papervision3D library to manipulate the model: rotating it, moving it forward, and so on. The animations themselves sit at known frames; these exist for specific mouth shapes for speech and for set pieces like the dance.

