What if you could build your own story world for video games and play with the characters you have created? Sounds exciting, doesn’t it? Well, scientists, including one of Indian origin, are developing a new Disney tool that allows people to easily build their own digital story worlds for video games, consisting of ‘smart’ characters and objects.
This is all possible with a new, easy-to-use digital interface being developed by Disney Research. The graphical interface guides users through the process of creating a story world, helping them populate the domain with “smart” characters and objects, determine their relationships and how they interact with each other, and define events that can drive compelling narratives.
“We are experiencing a shift from simple, linear stories to complex, participatory story worlds that are told across different media types,” said Markus Gross, vice president of research at Disney Research. “As the boundaries between content consumers and content creators continue to blur, we want to democratise story world creation and expand the pool of authors by making it possible for both experts and novice users to construct a space for compelling narrative content,” he added.
“Our story world builder is designed to build up components of a full story world with the semantics required for automatic story generation,” Steven Poulakos, a researcher at Disney Research, explained.
To enable a computer programme to reason, infer and ultimately generate stories, people must first provide the computer with a wealth of domain knowledge – information on places, characters and objects, how they relate to one another, and how they interact. However, the languages and interfaces used for specifying this domain knowledge are highly specialised, making them difficult to use for anyone who lacks equally specialised knowledge, said Poulakos.
“Our long-term mission is to empower anyone to create their own digital stories by providing easy-to-use, intuitive visual authoring interfaces,” said Mubbasir Kapadia, an assistant professor in the Computer Science Department at Rutgers University.
The graphical interface leads the user through three main steps. The first step is story world creation, in which the user configures the scene and establishes all of the possible states and relationships.
In a bank robbery scenario that the researchers used to demonstrate their system, this step included specifications such as a bank vault being locked, a character being designated as a robber, or a relationship being defined, such as two characters being allies.
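To make the idea of states and relationships concrete, here is a minimal sketch of how the bank-robbery world might be encoded as simple attribute and relationship facts. All class and entity names here are hypothetical illustrations, not Disney Research’s actual data model.

```python
# Toy encoding of a story-world state as facts:
#   - attributes: (entity, attribute) pairs, e.g. the vault is locked
#   - relations:  (relation, entity_a, entity_b) triples, e.g. two allies

class StoryWorld:
    def __init__(self):
        self.attributes = set()   # (entity, attribute) pairs
        self.relations = set()    # (relation, entity_a, entity_b) triples

    def set_attribute(self, entity, attribute):
        self.attributes.add((entity, attribute))

    def relate(self, relation, a, b):
        self.relations.add((relation, a, b))

    def has(self, entity, attribute):
        return (entity, attribute) in self.attributes

# The bank-robbery specifications from the article:
world = StoryWorld()
world.set_attribute("vault", "locked")     # the bank vault is locked
world.set_attribute("bob", "robber")       # a character designated as a robber
world.relate("allies", "bob", "alice")     # two characters are allies

print(world.has("vault", "locked"))  # True
```

A story-generation engine could then reason over these facts, for instance checking preconditions such as whether the vault is locked before planning an “open vault” event.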
In the next step, users author “smart characters” and “smart objects” by defining how characters and objects interact with each other. For example, a robber is able to grab an object or press a button that might open a vault door. In the final step, event creation, the user designs Parameterised Behaviour Trees, which provide a graphical, hierarchical representation for complex, multi-character interactions.
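The hierarchical structure of such an event can be sketched as a tiny behaviour tree whose actions are parameterised by the characters and objects bound to them at run time. The node types and semantics below are illustrative assumptions, not the actual Disney Research implementation.

```python
# Toy Parameterised Behaviour Tree: composite nodes sequence actions,
# and leaf actions are bound to concrete participants when run.

class Sequence:
    """Composite node: succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children

    def run(self, bindings):
        return all(child.run(bindings) for child in self.children)

class Action:
    """Leaf node: applies a named interaction to its bound participants."""
    def __init__(self, name, *roles):
        self.name = name
        self.roles = roles

    def run(self, bindings):
        participants = [bindings[role] for role in self.roles]
        print(f"{self.name}: {', '.join(participants)}")
        return True

# Event from the article: a robber presses a button to open the vault door.
open_vault = Sequence(
    Action("approach", "robber", "button"),
    Action("press", "robber", "button"),
    Action("open", "vault"),
)

open_vault.run({"robber": "Bob", "button": "vault button", "vault": "vault door"})
```

Because the roles are parameters rather than fixed characters, the same tree could be reused for any character the author designates as a robber, which is what makes multi-character interactions composable.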
The research team, which includes researchers from Rutgers University and the Swiss Federal Institute of Technology, presented the findings at the eighth Workshop on Intelligent Narrative Technologies in Santa Cruz, USA.