Along with the completion of Chapter 3's development, all of the core elements for dialogue have been created as well. I always forget that something so simple can be so complex until I remember what the design document calls for: portraits fading in and out as the speaker changes; support for keyboards, controllers, and touch screens to advance the text; a map of the emotions that need to be displayed over portraits; a system for tracking how many lines of text are in the current conversation so it knows when to end; and popups in case something needs to display as a text message instead of a face-to-face exchange.
It’s so important to design these things to be modular. A rich system that displays all of this should transfer easily into new conversations, not become a burden on your time. Here I’m using one main variable, "global.conversation", to track the progression of the text, and basing everything else around it. When a button is pressed for the next line, everything runs a quick check to see whether it needs to change its image or create something new. For example, all I have to do is add a line saying that at this point Cole is talking and he’s mad. Everything else takes over and knows that the character portrait should change to display Cole if it isn’t already. Because we’re setting "global.emotion" (a variable for tracking character reactions) to 1, the system knows that a value of 1 means create the anger effect and then clear itself out. Stuff like this has let me build long and complex conversations in just a few minutes by simply mapping out these key pieces of information.
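To make the idea concrete, here's a minimal sketch of that flow written in JavaScript rather than GML (the two look similar enough that it translates almost line for line). Every name here — `advanceConversation`, the `script` array format, the emotion constants — is an illustrative assumption, not the game's actual code; it just shows the pattern of one progression counter driving portrait swaps and one-shot emotion effects.

```javascript
// Illustrative sketch only — names and structure are assumptions,
// not the game's real GML code.
const EMOTION_NONE = 0;
const EMOTION_ANGER = 1;

// A conversation is an ordered list of lines, each tagged with a
// speaker and an optional emotion to flash over their portrait.
const script = [
  { speaker: "Cole", emotion: EMOTION_ANGER, text: "Hey! Watch it!" },
  { speaker: "Ava",  emotion: EMOTION_NONE,  text: "Sorry, sorry." },
];

const state = {
  conversation: 0,       // plays the role of global.conversation
  portrait: null,        // which portrait is currently on screen
  emotion: EMOTION_NONE, // plays the role of global.emotion
};

// Called whenever any "advance" input fires (key, button, or tap).
// Returns false once the script runs out, so the caller can close the box.
function advanceConversation(state, script) {
  if (state.conversation >= script.length) return false;
  const line = script[state.conversation];

  // Swap the portrait only if the speaker actually changed
  // (a real implementation would start the fade-out/fade-in here).
  if (state.portrait !== line.speaker) {
    state.portrait = line.speaker;
  }

  // A nonzero emotion spawns its effect once, then clears itself out.
  state.emotion = line.emotion;
  if (state.emotion !== EMOTION_NONE) {
    // spawnEmotionEffect(state.emotion); // e.g. anger burst over the portrait
    state.emotion = EMOTION_NONE;
  }

  state.conversation += 1;
  return true;
}
```

The nice part of this shape is exactly what's described above: adding a new conversation is just writing a new `script` array, while the advance logic stays untouched.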