LARA Lab Theses

PhD

Lara J. Martin,

PhD Human-Centered Computing at Georgia Institute of Technology

Neurosymbolic Automated Story Generation


Defended: 2020-12-07

Abstract: Although we are currently riding a technological wave of personal assistants, many of these agents still struggle to communicate appropriately. Humans are natural storytellers, so it would be fitting if artificial intelligence (AI) could tell stories as well. Automated story generation is an area of AI research that aims to create agents that tell good stories. Since goodness is subjective and hard to define, I focus on the perceived coherence of stories in this thesis. Previous story generation systems use planning and symbolic representations to create new stories, but these systems require a vast amount of knowledge engineering. The stories created by these systems are coherent, but only a finite set of stories can be generated. In contrast, very large neural language models have recently made headlines in the natural language processing community. Though impressive on the surface, even the most sophisticated of these models begins to lose coherence over time. My research looks at both neural and symbolic techniques for automated story generation. In this dissertation, I created automated story generation systems that improved coherence by leveraging symbolic approaches within neural systems. I did this through a collection of techniques: separating semantic event generation from syntactic sentence generation, manipulating neural event generation to become goal-driven, improving syntactic sentence generation to be more interesting and coherent, and creating a rule-based infrastructure to aid neural networks in causal reasoning.
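The separation of semantic event generation from syntactic sentence generation can be illustrated with a minimal sketch. The four-field event shape and the stub functions below are simplifications for illustration only, not the dissertation's actual trained models (which used neural sequence-to-sequence networks for both stages):

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Simplified semantic event: who did what to whom, and how."""
    subject: str
    verb: str
    obj: str
    modifier: str

def generate_next_event(prev: Event) -> Event:
    """Stub for an event-to-event model: given the previous event,
    propose the next one. A real system would sample from a trained
    sequence model; here we hard-code a toy continuation."""
    return Event(prev.obj, "flee", prev.subject, "quickly")

def realize_sentence(e: Event) -> str:
    """Stub for an event-to-sentence model: turn an abstract event
    back into surface text. A real system would use a trained
    generator; a template suffices here to show the separation."""
    return f"The {e.subject} {e.verb}s the {e.obj} {e.modifier}."

story = [Event("dragon", "attack", "village", "fiercely")]
for _ in range(2):
    story.append(generate_next_event(story[-1]))
print(" ".join(realize_sentence(e) for e in story))
```

Keeping the plot in the abstract event space lets the system reason about coherence without being distracted by surface wording, which is then restored in a separate step.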

MS

Naren Sivakumar,

MS Computer Science at University of Maryland, Baltimore County

Emulating Rational Decisions with Traditional and Contemporary AI


Defended: 2025-04-21

Abstract: Since the introduction of Large Language Models (LLMs) into society, they have been used for increasingly complex and important tasks in various governmental and corporate settings. This trend has accelerated with the introduction of "thinking" models, which are designed for deliberate, logical reasoning. With this in mind, this thesis studies conflict resolution on a national scale, accounting for factors that affect decision-making such as the level of cooperation, existing relationships, and type of governance. Each country is first grouped according to its type of governance, which allows us to extract government-specific actions that governments have used in the past to achieve their goals. Each country is then given a set of actions, relationships, and resources to use throughout the conflict resolution. We aim to reach a Nash equilibrium as the resolution point, where no party can improve its outcome by modifying only its own decisions. This serves as an adequate completion point because, hypothetically, the only step that can be taken after reaching the Nash equilibrium is to declare war, an outcome to be avoided as much as possible. Finally, the thesis conducts head-to-head matches between a Monte Carlo tree search (MCTS) algorithm and LLMs, with each party being either cooperative or uncooperative.
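As a concrete reference point, the pure-strategy Nash equilibrium condition can be checked directly on a toy payoff table. The two-action game below is invented for illustration (the thesis derives its action sets from real government behavior); a cell is an equilibrium when neither side can raise its own payoff by unilaterally switching actions:

```python
import itertools

# Toy 2-player payoff table: (action_a, action_b) -> (payoff_a, payoff_b).
# Actions and payoffs are invented for illustration.
ACTIONS = ["cooperate", "escalate"]
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "escalate"):  (0, 4),
    ("escalate",  "cooperate"): (4, 0),
    ("escalate",  "escalate"):  (1, 1),
}

def is_nash(a: str, b: str) -> bool:
    """Pure-strategy Nash check: neither party can raise its own
    payoff by unilaterally switching to a different action."""
    u_a, u_b = PAYOFF[(a, b)]
    a_stays = all(PAYOFF[(a2, b)][0] <= u_a for a2 in ACTIONS)
    b_stays = all(PAYOFF[(a, b2)][1] <= u_b for b2 in ACTIONS)
    return a_stays and b_stays

for a, b in itertools.product(ACTIONS, ACTIONS):
    if is_nash(a, b):
        print(f"Nash equilibrium: ({a}, {b})")  # -> (escalate, escalate)
```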

Shadab Choudhury,

MS Computer Science at University of Maryland, Baltimore County

Connecting Language and Emotion in Large Language Models for Human-AI Collaboration


Defended: 2025-04-18

Abstract: Large Language Models (LLMs) demonstrate linguistic abilities on par with humans, generating short texts, stories, instructions, and even code that is often indistinguishable from human-written content. This allows humans to use LLMs collaboratively, as communication aides or writing assistants. However, humans cannot always assume an LLM will behave the same way another person would. This is particularly evident in subjective scenarios, such as those where emotion is involved. In this work, we explore how deeply LLMs perceive and understand human emotions and look at ways of describing an emotion to an LLM for collaborative work. First, we study the problem of classifying emotions and show that LLMs perform well on their own and can also improve smaller models at the same task. Second, we focus on generating emotions, using the problem space of keyword-constrained generation and a human-participant study to see where human expectations and LLM outputs diverge and how we can minimize any such misalignment. Here, we find that using English words and lexical expressions of valence-arousal-dominance (VAD) scales leads to good alignment and generation quality, while emojis or numeric representations of VAD fare worse.
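To make the compared representations concrete, here is a minimal sketch of how the same target emotion might be handed to an LLM either as a plain English word or as raw VAD numbers. The prompt wording, keyword task, and VAD values below are hypothetical stand-ins, not the study's actual materials:

```python
# Hypothetical VAD values on a 0-1 scale; the study's exact scale
# and lexicon may differ.
EMOTIONS = {"joy": (0.95, 0.60, 0.70), "fear": (0.10, 0.80, 0.25)}

def prompt_with_word(emotion: str, keywords: list[str]) -> str:
    """Describe the target emotion with a plain English word,
    the kind of representation found to align well."""
    return (f"Write one sentence using the words {', '.join(keywords)} "
            f"that expresses {emotion}.")

def prompt_with_vad(emotion: str, keywords: list[str]) -> str:
    """Describe the same target as raw valence-arousal-dominance
    numbers, the kind of representation found to fare worse."""
    v, a, d = EMOTIONS[emotion]
    return (f"Write one sentence using the words {', '.join(keywords)} "
            f"with valence={v}, arousal={a}, dominance={d}.")

print(prompt_with_word("fear", ["storm", "window"]))
print(prompt_with_vad("fear", ["storm", "window"]))
```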