Archive for the ‘BlazeDS’ Tag

Lego Mindstorm Flex AIR2

This is something I dreamt up and actually finished in July, but I haven't had time to write up a post on it until now. The original idea was to use Flex/AIR on the desktop to control an NXT Lego Mindstorm robot, but then things got a little more ambitious with the addition of a bit of Augmented Reality together with real-time Speech Generation.

The first step was to replace the firmware on the Mindstorm brick with LeJOS, which includes a Java Virtual Machine that allows the Mindstorm to be programmed in Java. This was a tense moment but actually went relatively smoothly.

I then created a Java application on my MacBook Pro that could control the robot over a Bluetooth link. I discovered a nice little library called icommand that is designed to let you do just this.

Once I had that connection up and running, the next step was to make sure I could use this application as a bridge to control the robot from an AIR application.

I therefore created an interface to my desktop Java application using Merapi, which is the same Java/AIR bridge that I used in a previous Flex speech recognition project.

Using this, I then created an AIR application that could send and receive Merapi messages across the bridge, which allowed me to get sensor information back from the Mindstorm and to send it messages for navigation and pincer/sensor control.
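
To give a rough idea of what the ActionScript side of the bridge looks like, here is a little sketch that sends a command message and registers for sensor messages coming back. The message type names and payload fields are placeholders of my own, and the handler registration call is based on the Merapi samples as I remember them, so treat this as a sketch rather than the exact API.

    package robot
    {
        import merapi.Bridge;
        import merapi.messages.IMessage;
        import merapi.messages.Message;

        // Sketch only: the message types and payload fields are placeholders,
        // and registerMessageHandler() follows the Merapi samples as I remember
        // them (the handler may need to be an IMessageHandler rather than a
        // plain function), so check it against the Merapi AS3 source.
        public class RobotBridge
        {
            public static const NAVIGATE:String      = "navigate";
            public static const SENSOR_UPDATE:String = "sensorUpdate";

            public function RobotBridge()
            {
                // Listen for sensor readings serialised back from the Java controller.
                Bridge.instance.registerMessageHandler( SENSOR_UPDATE, onSensorMessage );
            }

            // Send a navigation or pincer command across the bridge to the Java side.
            public function sendCommand( command:String, value:Number ):void
            {
                var message:Message = new Message();
                message.type = NAVIGATE;
                message.data = { command: command, value: value };
                message.send();
            }

            private function onSensorMessage( message:IMessage ):void
            {
                // message.data holds whatever the Java controller sent back,
                // e.g. an ultrasonic distance or a touch sensor state.
                trace( "sensor data:", message.data );
            }
        }
    }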

Having got all that working I couldn't resist putting in some Augmented Reality, so I added an iPhone to the robot and used it as a wireless webcam to get a video feed into my AIR application.

I then adapted and incorporated my previous augmented reality projects to allow me to have interactive avatars that I can switch at runtime. These are integrated with a remote Java server using BlazeDS, which gives them the artificial intelligence they need to provide realistic answers and speech.

Around this time AIR 2 was released, so I decided to build the whole thing on that instead. Rather than having separate AIR and Java applications, the whole lot is bundled together and deployed as a native application. This is very cool and I think adds a whole new dimension to what can be done with AIR. What actually happens is that the AIR application carries, on its source path, the executable Java JAR file for the Mindstorm's controller. Once the AIR 2 application is launched, it launches the Java application as a native process and then communicates with it across the bridge using Merapi, serialising objects between Java and ActionScript.
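
For anyone curious, the launch step itself is only a few lines of ActionScript. In the sketch below the JAR file name and the path to the java executable are placeholders; the one real requirement is that NativeProcess needs the extendedDesktop profile, which is why the application has to be packaged as a native installer rather than a plain .air file.

    import flash.desktop.NativeProcess;
    import flash.desktop.NativeProcessStartupInfo;
    import flash.filesystem.File;

    private var process:NativeProcess;

    // Sketch only: "MindstormController.jar" and the java path are placeholders.
    private function launchRobotController():void
    {
        if ( !NativeProcess.isSupported )
        {
            trace( "NativeProcess not supported - was the app installed natively?" );
            return;
        }

        // The controller JAR ships inside the AIR application directory.
        var jar:File  = File.applicationDirectory.resolvePath( "MindstormController.jar" );
        var java:File = new File( "/usr/bin/java" ); // assumed JVM location on OS X

        var startupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
        startupInfo.executable = java;

        var args:Vector.<String> = new Vector.<String>();
        args.push( "-jar" );
        args.push( jar.nativePath );
        startupInfo.arguments = args;

        process = new NativeProcess();
        process.start( startupInfo );
        // From here on Merapi does the talking between the two processes;
        // NativeProcess is only used to get the Java side running.
    }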

AIR/Java Augmented Reality Talking Head

I have been working on this project for the past few weeks and it is now coming together nicely, so I thought I would create a little blog entry about it. The idea is to create an Augmented Reality head with which one can have a conversation.

So far I have got as far as being able to type something and have the augmented reality head answer me back.

What takes place is that we have an AIR client (built using Cairngorm) communicating with a Java server side using Remote Objects over BlazeDS. The text is sent to the Java server application over a remote object call, and a text response is generated using AIML and a Java chatbot framework. This text response is then passed to a text-to-speech (TTS) socket server to generate both an MP3 byte array and something called MBROLA input format. MBROLA input format is a stream of text symbols (phonemes), each with a duration in milliseconds, that represent visemes (mouth shapes).
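
On the Flex side the round trip is a standard RemoteObject call. In the sketch below the destination name, the AMF endpoint URL, the getResponse() method and the fields on the returned object are all placeholders for whatever the real BlazeDS configuration defines, and playSpeech() is the playback routine described further down.

    import mx.rpc.events.FaultEvent;
    import mx.rpc.events.ResultEvent;
    import mx.rpc.remoting.RemoteObject;

    private var chatService:RemoteObject;

    // Sketch only: destination, endpoint, method and result fields are placeholders.
    private function initService():void
    {
        chatService = new RemoteObject( "chatbot" );
        chatService.endpoint = "http://localhost:8080/talkinghead/messagebroker/amf";
        chatService.addEventListener( ResultEvent.RESULT, onChatResult );
        chatService.addEventListener( FaultEvent.FAULT, onChatFault );
    }

    // Called when the user submits a line of text to the head.
    private function ask( text:String ):void
    {
        chatService.getResponse( text ); // RemoteObject is dynamic, so this maps onto the Java method
    }

    private function onChatResult( event:ResultEvent ):void
    {
        // The server bundles the AIML text reply, the MP3 bytes and the
        // MBROLA phoneme/duration stream into a single value object.
        var reply:Object = event.result;
        playSpeech( reply.mp3Bytes, reply.mbrola );
    }

    private function onChatFault( event:FaultEvent ):void
    {
        trace( "remote call failed: " + event.fault.faultString );
    }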

The whole lot is packaged up and sent back over the wire via BlazeDS to an Augmented Reality viewer created as an advanced Flex visual component (using Papervision3D and FLARToolkit). The model head was created in Maya and is an animated Collada model with 13 different mouth shapes that are mapped to the output received from the MBROLA stream.
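
The mapping itself is nothing more than a lookup from phoneme symbols to mouth-shape frames. The sketch below is purely illustrative: MBROLA uses SAMPA-style phoneme symbols, but these particular symbols and frame names are examples rather than the real mapping to the 13 shapes in my model.

    // Illustrative only: example phonemes and frame names, not the real mapping.
    private static const VISEME_MAP:Object =
    {
        "p": "lipsClosed",   "b": "lipsClosed",   "m": "lipsClosed",
        "f": "lowerLipBite", "v": "lowerLipBite",
        "i": "wide",         "e": "wide",
        "o": "round",        "u": "round",
        "a": "open",
        "_": "rest"          // silence
    };

    private function frameForPhoneme( phoneme:String ):String
    {
        return VISEME_MAP[ phoneme ] != null ? VISEME_MAP[ phoneme ] : "rest";
    }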

To play the speech response in AIR, the MP3 byte array is written to a temporary file, read into a Sound object and then played back. At the same time, the MBROLA stream is parsed into an ArrayCollection of frames (for the model head) and durations, which is then iterated over in the handler method of a Timer.
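
In code this step looks roughly like the sketch below. The playSpeech() name matches the placeholder used in the RemoteObject sketch above, showFrame() stands in for whatever actually switches the Collada head to a mouth shape, and I am assuming each MBROLA line is simply "phoneme duration-in-ms", so adjust the parsing to match the real TTS output.

    import flash.events.TimerEvent;
    import flash.filesystem.File;
    import flash.filesystem.FileMode;
    import flash.filesystem.FileStream;
    import flash.media.Sound;
    import flash.net.URLRequest;
    import flash.utils.ByteArray;
    import flash.utils.Timer;
    import mx.collections.ArrayCollection;

    private var visemes:ArrayCollection;
    private var visemeIndex:int;
    private var visemeTimer:Timer;

    // Sketch only: assumes each MBROLA line is "<phoneme> <duration-in-ms>".
    private function playSpeech( mp3Bytes:ByteArray, mbrola:String ):void
    {
        // 1. Write the MP3 bytes to a temp file and play them back via a Sound.
        var tempFile:File = File.createTempFile();
        var stream:FileStream = new FileStream();
        stream.open( tempFile, FileMode.WRITE );
        stream.writeBytes( mp3Bytes );
        stream.close();

        var sound:Sound = new Sound( new URLRequest( tempFile.url ) );
        sound.play();

        // 2. Parse the MBROLA stream into { phoneme, duration } entries.
        visemes = new ArrayCollection();
        for each ( var line:String in mbrola.split( "\n" ) )
        {
            var parts:Array = line.split( /\s+/ );
            if ( parts.length >= 2 )
            {
                visemes.addItem( { phoneme: parts[ 0 ], duration: Number( parts[ 1 ] ) } );
            }
        }

        // 3. Step through the mouth shapes, holding each for its duration.
        visemeIndex = 0;
        visemeTimer = new Timer( 1, 1 );
        visemeTimer.addEventListener( TimerEvent.TIMER_COMPLETE, onVisemeTimer );
        nextViseme();
    }

    private function nextViseme():void
    {
        if ( visemeIndex >= visemes.length )
            return;

        var viseme:Object = visemes.getItemAt( visemeIndex++ );
        showFrame( viseme.phoneme );

        // Hold this mouth shape for its duration, then advance to the next one.
        visemeTimer.delay = Math.max( 1, viseme.duration );
        visemeTimer.reset();
        visemeTimer.start();
    }

    private function onVisemeTimer( event:TimerEvent ):void
    {
        nextViseme();
    }

    private function showFrame( phoneme:String ):void
    {
        // Placeholder: the real viewer looks the phoneme up in the viseme map
        // and switches the Collada head to that mouth shape.
        trace( "viseme:", phoneme );
    }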

Coming soon, hopefully, will be speech recognition via the Merapi Java/AIR bridge, so that you can talk to the head.