Thursday, August 29, 2024
Welcome to the class! In this introductory module, you will become acquainted with interactive fiction (since you’re probably too young to know what it is) and learn about the field of automated story generation (since it’s a small subfield of AI and you probably haven’t heard of it). You’ll even get a chance to make your own mini interactive fiction game the old-school way!
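If you’ve never played parser-based interactive fiction, the basic interaction loop is easy to picture in code. Below is a minimal, purely illustrative Python sketch of a two-room text adventure; the mini-game you build for this module may use an entirely different tool, and the rooms and commands here are invented for illustration.

```python
# A bare-bones, parser-style interactive fiction loop, just to illustrate the genre.
# The world model (two rooms) and the recognized commands are invented for this sketch.
rooms = {
    "cave": {"description": "You are in a damp cave. A tunnel leads north.",
             "north": "forest"},
    "forest": {"description": "You stand in a moonlit forest. The cave mouth is south.",
               "south": "cave"},
}

location = "cave"
print(rooms[location]["description"])
while True:
    command = input("> ").strip().lower()
    if command in ("quit", "q"):
        break
    elif command in ("look", "l"):
        print(rooms[location]["description"])
    elif command in rooms[location]:  # a direction like "north" or "south"
        location = rooms[location][command]
        print(rooms[location]["description"])
    else:
        print("I don't understand that.")
```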
Academic Papers:
Peter A. Jansen, A Systematic Survey of Text Worlds as Embodied Natural Language Environments
Marc-Alexandre Côté, Ákos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Ruo Yu Tao, Layla El Asri, Mahmoud Adada, Wendy Tay, and Adam Trischler, TextWorld: A Learning Environment for Text-based Games
Matthew Hausknecht, Prithviraj Ammanabrolu, Marc-Alexandre Côté, Xingdi Yuan, Interactive Fiction Games: A Colossal Adventure
Supplemental Media:
Jason Scott, GET LAMP: The Text Adventure Documentary (video, 2 hours)
Mark Riedl, An Introduction to AI Story Generation
Tuesday, September 3, 2024 to Thursday, September 19, 2024
As large language models became more popular within Natural Language Processing/Generation (NLP/NLG), automated story generation researchers realized how much easier it had become to generate fluent text. (And this also helped NLP researchers get interested in story generation!) Here, you’ll learn about neural language models, particularly the transformer, how to work with them, and how they are used to generate stories.
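As a first taste of what “working with” these models looks like, here is a minimal sketch that prompts the publicly released GPT-2 checkpoint to continue a story opening, assuming the Hugging Face transformers library is installed; the prompt and sampling settings (nucleus sampling, discussed in Holtzman et al. below) are purely illustrative.

```python
# A minimal sketch of prompting a pretrained transformer language model to continue
# a story. Assumes the Hugging Face `transformers` package and the public "gpt2"
# checkpoint; any causal LM checkpoint could be swapped in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The knight pushed open the heavy door and found"
outputs = generator(
    prompt,
    max_new_tokens=60,   # length of the continuation
    do_sample=True,      # sample rather than decode greedily
    top_p=0.9,           # nucleus sampling (see Holtzman et al.)
    temperature=0.8,     # soften the next-token distribution
)
print(outputs[0]["generated_text"])
```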
Academic Papers:
Ilya Sutskever, Oriol Vinyals, Quoc V. Le, Sequence to Sequence Learning with Neural Networks
Alex Graves, Generating Sequences With Recurrent Neural Networks
Supplemental Media:
Elle Hunt (The Guardian), Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter
Tom Simonite (Wired), The AI Text Generator That’s Too Dangerous to Make Public
Academic Papers:
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Margaret Mitchell, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin, [Transformer paper] Attention is All You Need
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, [GPT-2 paper] Language Models are Unsupervised Multitask Learners
Roy Schwartz, Jesse Dodge, Noah A. Smith, Oren Etzioni, Green AI
Supplemental Media:
Hugh Laurie, Stephen Fry, A Bit of Fry & Laurie: Concerning Language (video, 7 minutes, 23 seconds)
Daniel Jurafsky, James H. Martin, Speech and Language Processing, Chapter 6: Vector Semantics and Embeddings
Academic Papers:
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi, The Curious Case of Neural Text Degeneration
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, et al., [GPT-3 paper] Language Models are Few-Shot Learners
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig, Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou, Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Supplemental Media:
Kory Mathewson, What is Prompting, Really?
Janelle Shane, AI Weirdness (Newsletter)
Academic Papers:
Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, [word2vec paper] Efficient Estimation of Word Representations in Vector Space
Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov, [FastText paper] Enriching Word Vectors with Subword Information
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu, [T5 paper] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Paper Presentations:
Ashish Vaswani, et al. (2017). Attention is All You Need. [paper] [slides] - presented by Pavan Sanjana Cirruguri
Colin Raffel, et al. (2020). Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. [paper] [slides] - presented by Rohith Mada
Jacob Devlin, et al. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. [paper] [slides] - presented by Varun Magotra
Roy Schwartz, et al. (2019). Green AI. [paper] [slides] - presented by Sukhbir Singh Sardar
Academic Papers:
Angela Fan, Mike Lewis, Yann Dauphin, Hierarchical Neural Story Generation
Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, Christopher D. Manning, Do Massively Pretrained Language Models Make Better Storytellers?
Thursday, September 19, 2024 to Thursday, October 3, 2024
Scripts can be considered the backbone of storytelling. They help us fill in gaps of knowledge that a story leaves unstated, and they help us reason about why events happen and what order they happen in. This module will teach you about scripts, causal chains, and events. We’ll also look at how people have been using these techniques in the age of the neural network.
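To make the idea concrete, here is a toy sketch of a script as an ordered chain of events, loosely in the spirit of Schank & Abelson’s restaurant script and the tuple-style event representations in the readings below; the field names and example events are invented for illustration rather than taken from any particular paper.

```python
# A toy script: an ordered chain of events that lets us infer steps a story skips.
# The Event fields and the restaurant script below are illustrative only.
from typing import List, NamedTuple

class Event(NamedTuple):
    subject: str
    verb: str
    obj: str
    modifier: str = ""

RESTAURANT_SCRIPT: List[Event] = [
    Event("customer", "enter", "restaurant"),
    Event("customer", "order", "food", "from waiter"),
    Event("waiter", "bring", "food", "to customer"),
    Event("customer", "eat", "food"),
    Event("customer", "pay", "bill"),
    Event("customer", "leave", "restaurant"),
]

def fill_gap(observed: List[Event], script: List[Event]) -> List[Event]:
    """Return the script events implied between the first and last observed events."""
    positions = [script.index(e) for e in observed if e in script]
    if len(positions) < 2:
        return []
    return script[min(positions) + 1 : max(positions)]

# "The customer ordered, then paid." -> the script implies bring and eat in between.
observed = [RESTAURANT_SCRIPT[1], RESTAURANT_SCRIPT[4]]
for event in fill_gap(observed, RESTAURANT_SCRIPT):
    print(event)
```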
Academic Papers:
Roger Schank, Robert Abelson, Scripts, Plans, Goals, and Understanding: An Inquiry Into Human Knowledge Structures (Chapter 3: Scripts)
Karl Pichotta, Raymond Mooney, Learning Statistical Scripts with LSTM Recurrent Neural Networks
Lara J. Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, Mark O. Riedl, Event Representations for Automated Story Generation with Deep Neural Nets
Nathanael Chambers and Dan Jurafsky, Unsupervised Learning of Narrative Schemas and their Participants
Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Niket Tandon, Peter Clark, Yejin Choi, proScript: Partially Ordered Scripts Generation via Pre-trained Language Models
Abhilasha Sancheti, Rachel Rudinger, What do Large Language Models Learn about Scripts?
Marie-Laure Ryan, Fiction, Non-Factuals, and the Principle of Minimal Departure
Yidan Sun, Qin Chao, Boyang Li, Event Causality Is Key to Computational Story Understanding
Academic Papers:
Belinda Z. Li, Maxwell Nye, Jacob Andreas, Implicit Representations of Meaning in Neural Language Models
Qing Lyu, Li Zhang, Chris Callison-Burch, Goal-Oriented Script Construction
Li Zhang, Qing Lyu, Chris Callison-Burch, Reasoning about Goals, Steps, and Temporal Ordering with WikiHow
Li Zhang, Qing Lyu, Chris Callison-Burch, Intent Detection with WikiHow
Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi, SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
Academic Papers:
Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, Rui Yan, Plan-And-Write: Towards Better Automatic Storytelling
Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, Jianfeng Gao, PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking
Pradyumna Tambwekar, Murtaza Dhuliawala, Lara J. Martin, Animesh Mehta, Brent Harrison, Mark O. Riedl, Controllable Neural Story Plot Generation via Reward Shaping
Shanshan Huang, Kenny Q. Zhu, Qianzi Liao, Libin Shen, Yinggong Zhao, Enhanced Story Representation by ConceptNet for Predicting Story Endings
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen, A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories
Paper Presentations:
Pradyumna Tambwekar, et al. (2019). Controllable Neural Story Plot Generation via Reward Shaping. [paper] [slides] - presented by Patty Delafuente
Abhilasha Sancheti & Rachel Rudinger. (2022). What do Large Language Models Learn about Scripts?. [paper] [slides] - presented by Pooja Guttal
Belinda Z. Li, et al. (2021). Implicit Representations of Meaning in Neural Language Models. [paper] [slides] - presented by Arya Honraopatil
Shanshan Huang, et al. (2020). Enhanced Story Representation by ConceptNet for Predicting Story Endings. [paper] [slides] - presented by Hanuma Sashank Samudrala
Tuesday, October 8, 2024 to Tuesday, October 22, 2024
Supplemental Media:
Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Chapter 3
Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Chapter 11
Wikipedia, Stanford Research Institute Problem Solver
Kory Becker, Artificial Intelligence Planning with STRIPS, A Gentle Introduction
Academic Papers:
Rogelio Cardona-Rivera, Arnav Jhala, Julie Porteous, R. Michael Young, The Story So Far on Narrative Planning
Michael Lebowitz, Story-Telling as Planning and Learning
Michael Lebowitz, Planning Stories
Stephen G. Ware, R. Michael Young, CPOCL: A Narrative Planner Supporting Conflict
Mark O. Riedl, R. Michael Young, Open-world planning for story generation
R. Michael Young, Stephen Ware, Brad Cassell, Justus Robertson, Plans and planning in narrative generation: a review of plan-based approaches to the generation of story, discourse and interactivity in narratives
Julie Porteous, Marc Cavazza, Controlling narrative generation with planning trajectories: The role of constraints
Stephen G. Ware, Cory Siler, Sabre: A Narrative Planner Supporting Intention and Deep Theory of Mind
Stephen G. Ware, R. Michael Young, Glaive: A State-Space Narrative Planner Supporting Intentionality and Conflict
Mihai Polceanu, Julie Porteous, Alan Lindsay, Marc Cavazza, Narrative Plan Generation with Self-Supervised Learning
Michael Mateas, Andrew Stern, Integrating Plot, Character and Natural Language Processing in the Interactive Drama Façade
Manu Sharma, Santiago Ontañón, Manish Mehta, Ashwin Ram, Drama Management and Player Modeling for Interactive Fiction Games
Stephen G. Ware, Edward T. Garcia, Alireza Shirvani, Rachelyn Farrell, Multi-Agent Narrative Experience Management as Story Graph Pruning
Guest Lecturer: Nisha Simon
Academic Papers:
Nisha Simon & Christian Muise, Want To Choose Your Own Adventure? Then First Make a Plan.
Yi Wang, Qian Zhou, David Ledo, StoryVerse: Towards Co-authoring Dynamic Plot with LLM-based Character Simulation via Narrative Planning
Evgeniia Razumovskaia, Joshua Maynez, Annie Louis, Mirella Lapata, Shashi Narayan, Little Red Riding Hood Goes Around the Globe: Crosslingual Story Planning and Generation with Large Language Models
Nisha Simon, Does Robin Hood Use a Lightsaber?: Automated Planning for Storytelling
Anbang Ye, Christopher Cui, Taiwei Shi, Mark O. Riedl, Neural Story Planning
Paper Presentations:
Anbang Ye, et al. (2023). Neural Story Planning. [paper] [slides] - presented by Shawn Bray
Evgeniia Razumovskaia, et al. (2024). Little Red Riding Hood Goes Around the Globe: Crosslingual Story Planning and Generation with Large Language Models. [paper] [slides] - presented by Vishwanth Reddy Jakka
Yi Wang, et al. (2024). StoryVerse: Towards Co-authoring Dynamic Plot with LLM-based Character Simulation via Narrative Planning. [paper] [slides] - presented by Sathvik Reddy Musku
Academic Papers:
Md Sultan Al Nahian, Spencer Frazier, Mark Riedl, Brent Harrison, Learning Norms from Stories: A Prior for Value Aligned Agents
Markus Eger & Kory W. Mathewson, dAIrector: Automatic Story Beat Generation through Knowledge Synthesis
Haoyu Wang, Muhao Chen, Hongming Zhang, Dan Roth, Joint Constrained Learning for Event-Event Relation Extraction
Supplemental Media:
Michael Mateas & Andrew Stern, Façade
Michael S. Gentry, Anchorhead
William Wallace Cook, PLOTTO: A New Method of Plot Suggestion for Writers of Creative Fiction
Thursday, October 17, 2024 to Tuesday, November 5, 2024
Academic Papers:
Robyn Speer, Joshua Chin, Catherine Havasi, ConceptNet 5.5: An Open Multilingual Graph of General Knowledge
Martha Palmer, Claire Bonial, Jena Hwang, VerbNet: Capturing English verb behavior, meaning and usage
Christiane Fellbaum, WordNet
Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, Yejin Choi, ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
Lei Shi & Rada Mihalcea, Putting Pieces Together: Combining FrameNet, VerbNet and WordNet for Robust Semantic Parsing
David Gunning, Machine Common Sense Concept Paper
Kathy Panton, Cynthia Matuszek, Douglas Lenat, Dave Schneider, Michael Witbrock, Nick Siegel, and Blake Shepard, Common Sense Reasoning – From Cyc to Intelligent Assistant
Shane Storks, Qiaozi Gao, Joyce Y. Chai, Commonsense Reasoning for Natural Language Understanding: A Survey of Benchmarks, Resources, and Approaches
Gabor Angeli & Chris Manning, NaturalLI: Natural Logic Inference for Common Sense Reasoning
Supplemental Media:
Allen Institute for Artificial Intelligence, AI2 Common Sense leaderboards
Yejin Choi, Vered Shwartz, Maarten Sap, Antoine Bosselut, Dan Roth, ACL 2020 Commonsense Tutorial
Academic Papers:
Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, Yejin Choi, (COMET-)ATOMIC2020: On Symbolic and Neural Commonsense Knowledge Graphs
Nasrin Mostafazadeh, Aditya Kalyanpur, Lori Moon, David Buchanan, Lauren Berkowitz, Or Biran, Jennifer Chu-Carroll, GLUCOSE: GeneraLized and COntextualized Story Explanations
Zhongyang Li, Xiao Ding, Ting Liu, J. Edward Hu, Benjamin Van Durme, Guided Generation of Cause and Effect
Peter Clark, Bhavana Dalvi, Niket Tandon, What Happened? Leveraging VerbNet to Predict the Effects of Actions in Procedural Text
Peter West, Ronan Le Bras, Taylor Sorensen, Bill Lin, Liwei Jiang, Ximing Lu, Khyathi Chandu, Jack Hessel, Ashutosh Baheti, Chandra Bhagavatula, Yejin Choi, NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation
Paper Presentations:
Prithviraj Ammanabrolu & Mark Riedl (2019). Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning. [paper] [slides] - presented by Lalith Avinash Donkina
Maarten Sap, et al. (2019). ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. [paper] [slides] - presented by KV Yaswanth
Gabor Angeli & Chris Manning (2014). NaturalLI: Natural Logic Inference for Common Sense Reasoning. [paper] [slides] - presented by Josh Li
Kathy Panton, et al. (2006). Common Sense Reasoning – From Cyc to Intelligent Assistant. [paper] [slides] - presented by Reece Robertson
Rachel Chambers, et al. (2024). BERALL: Towards Generating Retrieval-augmented State-based Interactive Fiction Games. [paper] [slides] - presented by Saksham Kumar Sharma
Thursday, October 31, 2024 to Thursday, November 7, 2024
No homework for this module.
Academic Papers:
Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, Jason Weston, Learning to Speak and Act in a Fantasy Text Adventure Game
Wai Man Si, Prithviraj Ammanabrolu, Mark O. Riedl, Telling Stories through Multi-User Dialogue by Modeling Character Relations
Shrimai Prabhumoye, Margaret Li, Jack Urbanek, Emily Dinan, Douwe Kiela, Jason Weston, Arthur Szlam, I love your chain mail! Making knights smile in a fantasy game world: Open-domain goal-oriented dialogue agents
Chris Callison-Burch, Gaurav Singh Tomar, Lara J. Martin, Daphne Ippolito, Suma Bailis, David Reitter, Dungeons and Dragons as a Dialogue Challenge for Artificial Intelligence
Maarten Sap, Marcella Cindy Prasettio, Ari Holtzman, Hannah Rashkin, Yejin Choi, Connotation Frames of Power and Agency in Modern Films
Anvesh Rao Vijjini, Faeze Brahman, Snigdha Chaturvedi, Towards Inter-character Relationship-driven Story Generation
Xuhui Zhou, Zhe Su, Tiwalayo Eisape, Hyunwoo Kim, Maarten Sap, Is this the real life? Is this just fantasy? The Misleading Success of Simulating Social Interactions With LLMs
Ben Samuel, James Ryan, Adam J. Summerville, Michael Mateas, Noah Wardrip-Fruin, Bad News: An Experiment in Computationally Assisted Performance
Paper Presentations:
João Sedoc, et al. (2019). ChatEval: A Tool for Chatbot Evaluation. [paper] [slides] - presented by Srinivas Badiga
Annie Louis & Charles Sutton (2018). Deep Dungeons and Dragons: Learning Character-Action Interactions from Role-Playing Game Transcripts. [paper] - presented by Tristan Galcik
Stephen Roller, et al. (2021). Recipes for Building an Open-Domain Chatbot. [paper] [slides] - presented by Dayakar Reddy Kadasani
Oriol Vinyals & Quoc Le (2015). A Neural Conversational Model. [paper] [slides] - presented by Maitri Mistry
Jack Urbanek, et al. (2019). Learning to Speak and Act in a Fantasy Text Adventure Game. [paper] [slides] - presented by Mukesh Kumar Vidam
Tuesday, November 12, 2024 to Tuesday, December 10, 2024
No homework for this module.
Guest Lecturer: Peiqi “Patrick” Sui
Academic Papers:
Peiqi Sui, Eamon Duede, Sophie Wu, Richard So, Confabulation: The Surprising Value of Large Language Model Hallucinations
Saidiya Hartman, Venus in Two Acts
Peiqi Sui, et al., Mrs. Dalloway Said She Would Segment the Chapters Herself
Guest Lecturer: Kory Mathewson
Academic Papers:
Piotr Mirowski, Juliette Love, Kory Mathewson, Shakir Mohamed, A Robot Walks into a Bar: Can Language Models Serve as Creativity Support Tools for Comedy? An Evaluation of LLMs’ Humour Alignment with Comedians
Kory W. Mathewson, et al., Communicative capital: a key resource for human–machine shared agency and collaborative capacity
Piotr Mirowski, Kory W. Mathewson, Jaylen Pittman, Richard Evans, Co-writing screenplays and theatre scripts with language models: Evaluation by industry professionals
Guest Lecturer: Sai Vallurupalli
Academic Papers:
Sai Vallurupalli, Katrin Erk, Francis Ferraro, SAGA: A Participant-specific Examination of Story Alternatives and Goal Applicability for a Deeper Understanding of Complex Events
Sai Vallurupalli, Sayontan Ghosh, Katrin Erk, Niranjan Balasubramanian, Francis Ferraro, POQue: Asking participant-specific outcome questions for a deeper understanding of complex events
Guest Lecturer: Max Kreminski
Academic Papers:
John Joon Young Chung & Max Kreminski, Patchview: LLM-Powered Worldbuilding with Generative Dust and Magnet Visualization
Barrett R. Anderson, Jash Hemant Shah, Max Kreminski, Homogenization Effects of Large Language Models on Human Creative Ideation
John Joon Young Chung, Melissa Roemmele, Max Kreminski, Toyteller: Toy-Playing with Character Symbols for AI-Powered Visual Storytelling
Guest Lecturer: Zhiyu Lin (林之雨)
Academic Papers:
Zhiyu Lin, et al., Beyond Following: Mixing Active Initiative into Computational Creativity
Zhiyu Lin & Mark Riedl, An Ontology of Co-Creative AI Systems
Zhiyu Lin, et al., Beyond prompts: Exploring the design space of mixed-initiative co-creativity systems
Zhiyu Lin, Rohan Agarwal, Mark Riedl, Creative wand: a system to study effects of communications in co-creative settings