Ongoing Projects

Structured Narrative Processing and Generation

Current researchers: Naren Sivakumar

Although I have worked on neurosymbolic automated story generation for a while, I believe neurosymbolic techniques are becoming even more important in today’s era of large language models. Symbolic techniques are older AI methods that often require a significant amount of hand engineering but can produce predictable and interpretable output and behavior; these range from rule-based systems to planners. Neurosymbolic storytelling gives systems the flexibility and broad range of output that LLMs can generate while maintaining essential storytelling qualities such as plot, coherence, and consistency.
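To make the pairing concrete, here is a minimal, purely illustrative sketch of how the two halves might fit together: a symbolic planner chains rule-checked plot points (guaranteeing a coherent plot), while a neural model, stubbed out here as a placeholder function, would realize each point as prose. The rule format, state sets, and function names are all hypothetical, not from any specific system.

```python
def plan_plot(goal, rules):
    """Symbolic side: chain actions whose preconditions hold, STRIPS-style.

    `rules` maps an action name to (preconditions, effects), both sets of
    state facts. A real planner would search; this sketch just chains.
    """
    state, plot = {"hero_home"}, []
    for action, (pre, post) in rules.items():
        if pre <= state:
            state |= post
            plot.append(action)
        if goal in state:
            break
    return plot

def realize(plot_point):
    """Neural side (stub): an LLM would turn the symbolic step into prose."""
    return f"Scene: the {plot_point.replace('_', ' ')}."

# Hypothetical two-step plot: each action's effects enable the next one.
rules = {
    "call_to_adventure": ({"hero_home"}, {"hero_on_quest"}),
    "face_villain":      ({"hero_on_quest"}, {"villain_defeated"}),
}
story = [realize(p) for p in plan_plot("villain_defeated", rules)]
```

The planner guarantees the plot-level properties (every scene's preconditions are satisfied by earlier scenes), while the surface text can be as varied as the neural realizer allows.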

AI Agents for Playing Dungeons and Dragons

Current researchers: Patty Delafuente, Arya Honraopatil, Saksham Kumar Sharma

Dungeons & Dragons (D&D) is a tabletop roleplaying game originally created in the 1970s. Now that we’ve seen AI that can play games like Go, Dota 2, or Diplomacy, I believe D&D is the next big challenge in AI. The game lends itself to a variety of interesting problems in which players have to use skills in Theory of Mind and improvisational storytelling, all while following the rules of the game.

Community-Sensitive NLP Tools for Augmentative and Alternative Communication (AAC)

Current researchers: Shadab Choudhury, Asha Kumar, Marcus McAllister

Augmentative and Alternative Communication (AAC) tools have been largely ignored by the natural language processing community. Even though this is an exciting time in NLP, we must implement such tools with caution, taking into consideration what AAC users actually want. I am interested in improving AAC by bringing its technology into the era of ChatGPT without sacrificing users’ autonomy, privacy, or personality.

Customizable and Emotive Speech Synthesis

Current researchers: Arya Honraopatil, June Young, Shawn Bray, Sai Vallurupalli

Now that transformer models have greatly improved speech synthesis (also called text-to-speech/TTS), I believe the next step is to create synthetic voices that can emulate the rich tapestry of qualities found in human voices.