Chat Server Internals

The newest part of the system is the conversational interface. This is currently accomplished in two server parts - the Brain and AliceBot. At the moment these pieces aren't even running on the same computer. Client access to this interface is provided by a new chat dialog included in the smarthome server, modeled to look like a typical instant messaging system, but instead of chatting with another user, you chat with smarthome. I also have a web based chat interface for access from the Internet. I can also send email to the system, which upon authentication is passed to the conversational interface, processed, and a return email is sent with the responses. The big limitation there is that each sentence currently has to be on its own line. I could remove this restriction but it's not a high priority - as long as I can send an email saying "Please turn on the outside lights." and have it work, I'm happy for now even if it has to be on one line.
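
Roughly, the email path looks something like the sketch below. This is just an illustration of the idea, not the actual smarthome code - the ConversationalInterface hook, the class name, and the allow-list address are all made up for the example. The sender is checked against a list of trusted addresses, each line of the body is treated as one sentence and handed to the conversational interface, and the responses are collected into the body of the return email.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    // Hypothetical hook into the conversational interface (Brain + AliceBot).
    interface ConversationalInterface {
        String respond(String sentence);
    }

    public class EmailGateway {
        // Only mail from known addresses is processed (placeholder address).
        private final Set<String> trustedSenders = Set.of("me@example.com");
        private final ConversationalInterface chat;

        public EmailGateway(ConversationalInterface chat) {
            this.chat = chat;
        }

        /** Returns the body of the reply email, or null if the sender is not trusted. */
        public String handleEmail(String fromAddress, String body) {
            if (!trustedSenders.contains(fromAddress.toLowerCase())) {
                return null;                      // fails authentication, ignore
            }
            List<String> replies = new ArrayList<>();
            // Current limitation: each sentence must be on its own line.
            for (String line : body.split("\r?\n")) {
                String sentence = line.trim();
                if (!sentence.isEmpty()) {
                    replies.add(chat.respond(sentence));
                }
            }
            return String.join("\n", replies);    // becomes the body of the return email
        }
    }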
 
AliceBot is a chatbot engine whose behavior is defined in a language called AIML (Artificial Intelligence Markup Language), which is XML based. It provides depth and "personality" to the smarthome conversational interface. You can ask smarthome what its favorite TV show is, for some information about artificial intelligence, "How do you feel?", and other general conversation. AliceBot is also capable of remembering certain information it might have learned about you during the conversation, like your name, age, favorite color, or other things it has been programmed for. It is also capable of keeping track of the topic of conversation. Currently I use a Java implementation of AliceBot known as "Program D", but I am working on writing a Prolog version which can be directly incorporated into "the Brain".
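
The "remembering things about you" part boils down to a per-user set of named predicates (name, age, favorite color, current topic) that AIML templates can set and read. The little sketch below is only meant to show the concept - it is not Program D's actual API, and the names are invented for the example.

    import java.util.HashMap;
    import java.util.Map;

    // Illustration only: AIML templates can set and get "predicates" about the
    // person they are talking to. A per-user store like this is roughly what the
    // bot engine keeps behind the scenes.
    public class PredicateStore {
        private final Map<String, Map<String, String>> byUser = new HashMap<>();

        public void set(String userId, String predicate, String value) {
            byUser.computeIfAbsent(userId, k -> new HashMap<>()).put(predicate, value);
        }

        public String get(String userId, String predicate, String defaultValue) {
            return byUser.getOrDefault(userId, Map.of()).getOrDefault(predicate, defaultValue);
        }

        public static void main(String[] args) {
            PredicateStore memory = new PredicateStore();
            memory.set("guest", "name", "Pat");        // learned from "My name is Pat."
            memory.set("guest", "topic", "weather");   // set by an AIML topic rule
            System.out.println(memory.get("guest", "name", "friend"));            // -> Pat
            System.out.println(memory.get("guest", "favorite color", "unknown")); // -> unknown
        }
    }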
 
The Brain handles the more active part of a conversation with smarthome and is the part which has access to the CORBA based services in the house. The Brain makes the first attempt to parse and respond to an input sentence. If it can parse the sentence, it does the processing itself. If the Brain cannot handle the input, it passes it through to the AliceBot engine to produce the response.
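
In code the dispatch is about as simple as it sounds. The sketch below shows the idea with made-up interface names (the real Brain is Prolog talking over CORBA, not a Java interface): give the Brain first crack at the sentence, and fall back to AliceBot when the Brain's parser can't handle it.

    import java.util.Optional;

    // Hypothetical stand-ins for the two responders described above.
    interface Brain {
        // Returns a response if the Brain's parser understood the sentence,
        // otherwise an empty Optional.
        Optional<String> tryRespond(String sentence);
    }

    interface ChatBot {
        // AliceBot always produces something, even if it is just small talk.
        String respond(String sentence);
    }

    public class Dispatcher {
        private final Brain brain;
        private final ChatBot aliceBot;

        public Dispatcher(Brain brain, ChatBot aliceBot) {
            this.brain = brain;
            this.aliceBot = aliceBot;
        }

        // The Brain gets the first attempt; anything it cannot parse is
        // passed through to the AliceBot engine.
        public String respond(String sentence) {
            return brain.tryRespond(sentence).orElseGet(() -> aliceBot.respond(sentence));
        }
    }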
 
If you ask smarthome "Is it raining?" it is the Brain which handles this question, checks the conditions at the weather station on the roof and gives you the correct answer. Likewise for commands like "Please turn off the garage light." - the Brain performs the requested action and then responds accordingly. The really nice thing is that the system knows which chat interface a request was made from, so the Brain can use this information to determine the correct action. If you say "Please turn on the light." the Brain can determine which room you made the request from. It then checks its database to determine which are the preferred lights in that room and turns them on. If you command "Please turn on the lights." then it turns on all of the lights in your room. You can always specify which location you want to act on if it is not the room you are in, for example "Please turn off the outside lights." I have also provided queries for the weather forecast, so you can ask smarthome "What's the forecast for tomorrow night?"
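
To make the "the light" versus "the lights" behavior concrete, here is a rough sketch of the lookup. The room names, light names, and tables are placeholders invented for the example; in the real system this information lives in the Brain's database and the switching itself goes through the CORBA services.

    import java.util.List;
    import java.util.Map;

    // Rough sketch of resolving "Please turn on the light(s)." using the room
    // the request came from. All names here are invented for illustration.
    public class LightCommandResolver {
        // Which room each chat interface lives in.
        private final Map<String, String> interfaceToRoom = Map.of(
                "kitchen-console", "kitchen",
                "office-console", "office");

        // All lights in each room, and the "preferred" subset used for "the light".
        private final Map<String, List<String>> allLights = Map.of(
                "kitchen", List.of("kitchen ceiling", "kitchen counter"),
                "office", List.of("office desk", "office ceiling"));
        private final Map<String, List<String>> preferredLights = Map.of(
                "kitchen", List.of("kitchen counter"),
                "office", List.of("office desk"));

        /**
         * @param sourceInterface which chat interface the sentence arrived on
         * @param plural          true for "the lights", false for "the light"
         * @param explicitRoom    a room named in the sentence, or null to use the
         *                        room the request was made from
         */
        public List<String> lightsToSwitch(String sourceInterface, boolean plural, String explicitRoom) {
            String room = (explicitRoom != null) ? explicitRoom : interfaceToRoom.get(sourceInterface);
            if (room == null) {
                return List.of();   // unknown interface and no room given
            }
            Map<String, List<String>> table = plural ? allLights : preferredLights;
            return table.getOrDefault(room, List.of());
        }
    }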
 
The eventual integration of an AIML based chatbot into the Brain's Prolog engine will provide lots of exciting new possibilities. Currently the Brain does not have access to the facts that AliceBot has learned about you during the conversation. It also does not have access to AliceBot's ability to keep track of the current topic of conversation. Once the systems have been merged, it would be easy to incorporate smarthome sensor data into conversational responses. For example, if asked "How do you feel?" (a question that currently gets a pretty static or random response), an integrated system might be able to answer "I'm happier now that it has stopped raining."
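
Just to show the flavor of what that merged behavior could look like, here is a speculative sketch: a question that today gets a canned AliceBot answer is first routed through a check of live sensor data. The WeatherStation interface is a stand-in for the real CORBA weather-station service, and the wording of the answers is invented.

    // Speculative sketch of a merged response - nothing here exists yet.
    public class MoodResponder {
        interface WeatherStation {
            boolean isRaining();
            boolean wasRainingRecently();   // e.g. within the last hour
        }

        private final WeatherStation weather;

        public MoodResponder(WeatherStation weather) {
            this.weather = weather;
        }

        public String howDoYouFeel() {
            if (!weather.isRaining() && weather.wasRainingRecently()) {
                return "I'm happier now that it has stopped raining.";
            }
            if (weather.isRaining()) {
                return "A little gloomy - it's raining on my roof.";
            }
            return "I'm fine, thanks for asking.";   // fall back to a canned reply
        }
    }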
 
The really cool thing about this part of the project is that I've started to review the human/computer interactions in science fiction movies, and some of them are getting pretty close to achievable. For instance, the movie "2001: A Space Odyssey" has an interaction model where someone is chatting with HAL and a new message comes in. As part of the ongoing conversation HAL mentions the incoming message and asks if he should read it.

The trickiest part of implementing this in real life is the concept I'll call "presence". As much as it might sound like it, I don't spend much of my day just chatting with smarthome, so in order for the system to tell me about an incoming message (an email from Mom, for example) it has to have some idea of where to find me. If I have just recently interacted with smarthome from a known place then it can try me there first, but I might have just gone somewhere else. I now have motion sensors in some rooms which may eventually give smarthome an idea of which areas are occupied, but eventually I want to go further than that. I'm pretty sure that with knowledge of things like the time and day of the week, front and garage door openings, motion detection, and the IP address I last checked my email from (raw data for all of these things is currently available, but not organized yet), I could give smarthome a pretty decent ability to "guess" at my current location (home, work, in the car commuting), which would imply the best method of delivering a message to me. The presence part is the missing piece - having smarthome announce the sender of an email and its contents is something I've already got working.
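
A very crude first cut at that guess might look like the sketch below. The inputs mirror the raw data mentioned above, but the rules and thresholds are invented placeholders rather than anything smarthome actually does today; the guessed location would then pick the delivery method for the message.

    import java.time.DayOfWeek;
    import java.time.LocalDateTime;

    // Placeholder presence heuristic - the rules here are made up for illustration.
    public class PresenceGuesser {
        public enum Location { HOME, WORK, COMMUTING }

        public Location guess(LocalDateTime now,
                              boolean motionInLastTenMinutes,
                              boolean garageDoorOpenedRecently,
                              boolean lastEmailCheckFromWorkIp) {
            if (motionInLastTenMinutes) {
                return Location.HOME;            // a motion sensor just fired
            }
            if (garageDoorOpenedRecently) {
                return Location.COMMUTING;       // the car probably just left
            }
            boolean weekday = now.getDayOfWeek() != DayOfWeek.SATURDAY
                    && now.getDayOfWeek() != DayOfWeek.SUNDAY;
            boolean workHours = now.getHour() >= 9 && now.getHour() < 17;
            if (weekday && workHours && lastEmailCheckFromWorkIp) {
                return Location.WORK;            // mail last read from the office
            }
            return Location.HOME;                // default when nothing else matches
        }
    }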