Category Archives: tech considerations

Building Workflows for Authoring Digital Fiction

When Haig Armen and I set about building a technical infrastructure for our workshop (and for digital storytelling generally), we scoped out a presentation layer (focused on the reader’s experience) and an authoring layer (for the writer’s experience). We’ve happily been building upon Caleb Troughton’s excellent Deck.js framework, which provides an open, layered, web-native toolkit for building sequential stories. But the authoring environment has been a different story.

We thought that a good candidate for an authoring and content-management system would be the fabulously flexible TiddlyWiki, a “personal wiki” toolkit written entirely as a browser-based Javascript application, and very, very malleable. Malleable, yes, but all designs carry their own constraints, and over the past few weeks I’ve learned that the places we needed to go with this project don’t mesh well with TiddlyWiki’s assumptions. To put it briefly, TiddlyWiki provides a layer of abstraction over the usual building blocks of the web (HTML, CSS, Javascript), whereas we wanted to work with those blocks directly. So I spent a couple of weeks learning where my assumptions conflicted with the assumptions built into TiddlyWiki. Eventually, I got to a point where – despite my admiration for the way TiddlyWiki has been designed, and the flexibility it affords – I needed to move on.

I moved on – or rather back – to my document-production standby, Pandoc, a “swiss-army knife” for document conversion. Pandoc can convert just about any structured format into any other structured format. In our case, I need it to start with a simply formatted story, in plain text, with some annotations marking narrative structures, media, and events, and build that into the HTML+Javascript structures that Deck.js brings to life.
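
Purely for illustration (the cue syntax is still in flux, and the attribute names below are made up), a story source in this scheme might look something like the following, with each heading opening a new section and attributes on the heading calling in background media:

    # At the gate  {.slide data-bg-image="img/terminal.jpg" data-audio="audio/hum.mp3"}

    She had been waiting at the gate for an hour.

    # Departures  {.slide data-bg-image="img/board.jpg"}

    The board flickered, and her flight was gone.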

To do so, I built a custom HTML template that defines all the boilerplate CSS and Javascript needed by Deck.js (and its extensions). Next, I crafted some simple structural cues that allow sections to be defined, background media to be called, and on-screen formatting to be set up. The result is a very simple text file that an author creates, and a one-step build process to turn it into a rich media environment. That took only about an hour to craft, using Pandoc’s rich toolkit. What a difference compared with last week’s hacking!
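
For the curious, here is a heavily stripped-down sketch of what such a template can look like. The Deck.js file paths, theme, and container class below simply follow the framework’s stock boilerplate and stand in for wherever those files actually live; the real template also pulls in the extensions’ CSS and Javascript:

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>$title$</title>
      <!-- Deck.js boilerplate; paths below are placeholders -->
      <link rel="stylesheet" href="deck.js/core/deck.core.css">
      <link rel="stylesheet" href="deck.js/themes/style/web-2.0.css">
      <script src="deck.js/modernizr.custom.js"></script>
    </head>
    <body class="deck-container">
    $body$
      <script src="deck.js/jquery.min.js"></script>
      <script src="deck.js/core/deck.core.js"></script>
      <script>
        // a literal "$" is written "$$" inside a Pandoc template
        $$(function() { $$.deck('.slide'); });
      </script>
    </body>
    </html>

The one-step build is then a single Pandoc call along these lines (file names are placeholders):

    pandoc -s -t html5 --section-divs --template=deck-template.html story.txt -o story.html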

My next steps are:

  1. to further tweak the Pandoc process so that writers can write less code (or at least write media directions with a minimum of formal syntax); and
  2. to embed the authoring and build process in a wiki or other web-based CMS so that we can have a collaborative writing experience rather than people working in isolation.

More to come! Stay tuned…

Of cards and decks

We’ve been thinking more about cards as the metaphor for the visual nodes of a story; the analogy gets reinforced everywhere, including today, when we had a good look at the excellent deck.js framework as a building block for our technology infrastructure. Deck.js was originally designed as an HTML5 slideshow/presentation tool, but it’s so nicely put together, modular, and well documented that we’ve begun adapting it as a basic platform for digital fiction (DF). A deck of slides or a deck of cards?
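
In deck.js’s terms, each card is simply an element carrying the class “slide” (conventionally a section), which the framework then sequences and transitions. A hypothetical story node, for instance:

    <!-- one story "card" in Deck.js: an element with the class "slide" -->
    <section class="slide" id="arrival">
      <h2>Arrival</h2>
      <p>She had been waiting at the gate for an hour.</p>
    </section>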

Similarly, we’ve been looking at the almost ineffably amazing TiddlyWiki5 as a content store and editorial management system; again, the wiki’s origins were as a “deck of cards” in software. TiddlyWiki5 is a stunning piece of recursive architecture, with the entire system built out of “tiddlers” (cards, that is) that hold content and/or the Javascript code that makes up the system itself. As Alan Kay would say, “it’s turtles all the way down.”

Beyond the technical details themselves, our goal is to put together a flexible system for assembling layers of media for digital fiction. We want to be able to support, on the fly, the creative directions set (or discovered) by our workshop participants, and to do so in an open, connective, and collaborative system that’s unconstrained by proprietary software or external limitations. That said, it is a lovely thing that such powerful toolkits are available on the open web (“Fork me on GitHub”).

Watch this space.


On Units of Meaning in DF

What’s the proper atomic unit of DF?

In last week’s post Towards a Technical Infrastructure, we worked from a model of an Episode made up of a number of Scenes (a term with deep dramaturgical roots), the level where the majority of web-infrastructural things pertain: URL, content-management hooks, media-file associations, and so on. But Scenes are made up of what? At this finer-grained level, the more minute details of rhetoric, pacing, and media layers take hold; this is where a good deal of the “user experience” considerations are grounded. In our provisional model we called these Shots, a term taken from cinema, but none of us was particularly happy with that term.

Kate suggested Clips as an alternative, a term similar enough to Shots but perhaps more of the era of YouTube than of 8mm film. As Kate pointed out, Clip “still has the weaponry association, but less so.” Haig noted that the CYOA platform Twinery.org uses the much more active term “Passages,” which evokes both a section of text and a way through it.

We’re also aware of the resurgence of the term “cards” in the past year or two; from Twitter to Inkling, the card metaphor again finds its place, more than two decades after Apple’s unparalleled hypermedia toolkit, HyperCard. Indeed, the card metaphor has deep roots; Haig and I wrote a paper last year that looked into the substantial history and possible futures of cards and cardplay.

Does Card make sense for the unit of immediate engagement in digital fiction? It is certainly friendlier and more concrete than analytical language like “lexia” or “actemes,” or Aarseth’s “scriptons and textons.” And yet, a card is a static piece; the card metaphor denies any flux or dynamic play in the reader’s engagement. It is all about the node, and not about the link. By contrast, Clip, as Kate pointed out to me the other day, is also a verb; things can be clipped, and clipped together.

Towards a technical infrastructure

In planning for the Pathways workshop and the technical infrastructure it requires, we’ve been looking to a few historical precedents for guidance. We’ve taken stock of Kate Pullinger’s digital fiction projects Inanimate Alice and Flight Paths, and also the original CBC Radio 3 Digital Magazine circa 2002–2005, which Haig Armen was a part of.

All these publications were created in Flash—state of the art at the time—and have a good deal in common, architecturally. In particular, all three examples make use of the strategic layering of text, image (often animated), and audio to tell a story. All three play with the reader’s relative attention to these three modes: to what extent does the work direct or orchestrate the reader’s attention to one layer or another, and to what extent does it leave the reader free to attend as she likes?


We want first and foremost to create an HTML5-based environment capable of providing a platform like Flight Paths or a CBCR3 story, with that same weaving of media layers. To that end, Haig and I whiteboarded a provisional ‘object model’ for the environment, like so:

EPISODE, composed of one or more:

    - SCENE, which has the following:
        - straightforward URL
        - master audio track
        - and is composed of one or more:

        - SHOT, which can have the following:
            - transitions in and out
            - triggers (for audio, for nav buttons to show, etc.)
            - maximum duration
            - text
            - image/animation
            - audio track
            - nav elements (buttons, game elements)
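
Expressed as a provisional data sketch in Javascript (field names ours, purely illustrative, not a settled schema), that outline might become:

    // provisional sketch only: names and values are illustrative
    var episode = {
      scenes: [
        {
          url: "/episode-1/scene-1",        // straightforward URL
          masterAudio: "audio/scene-1.mp3", // master audio track
          shots: [
            {
              transitionIn: "fade",
              transitionOut: "cut",
              triggers: ["audio:announcement", "nav:show-next"],
              maxDuration: 20,              // seconds
              text: "She had been waiting at the gate for an hour.",
              image: "img/gate.jpg",
              audio: "audio/footsteps.mp3",
              nav: ["next-button"]
            }
          ]
        }
      ]
    };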

So, a SHOT (of all the terminology we’ve used, I’m least happy with that one) has three visual layers: fg, mid, and bg – each of which may be populated or not – as well as access to a SCENE-level background and audio as a fourth layer.

This chunking and layering of media elements is designed to allow a fair bit of flexibility in assembling orchestrated pieces. In many cases, not all the layers would be used, allowing a background image or soundtrack to simply flow across a sequence of SHOTs.
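
Concretely, here is one hypothetical way a single SHOT might be marked up as a Deck.js slide, with the three visual layers as stacked elements (positioned and layered in CSS) and the SCENE-level media sitting on a wrapper. Every class and attribute name below is ours and purely illustrative:

    <!-- SCENE wrapper: its background image and audio persist across the SHOTs inside -->
    <div class="scene" id="scene-1" data-audio="audio/scene-1.mp3">
      <img class="scene-bg" src="img/terminal.jpg" alt="">

      <!-- one SHOT: three stackable visual layers, any of which may be left empty -->
      <section class="slide shot" data-max-duration="20">
        <div class="layer bg"><img src="img/runway.jpg" alt=""></div>
        <div class="layer mid"><img src="img/departures-board.png" alt=""></div>
        <div class="layer fg"><p>She had been waiting at the gate for an hour.</p></div>
      </section>
    </div>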

The design of this will evolve over the next few weeks as we scaffold it into existence. So far, this isn’t terribly different from what HTML5 slideshow frameworks allow, but with more flexibility for bringing things in and out of focus.