
word soup

Completed
Creative Coding · Data Processing · Generative · Performance

10-minute dome video art powered by ASCII rendering—using NLP to extract semantic properties from Alan Watts and Italo Calvino, visualized as word-agents in a petri-dish simulation exploring how machines hallucinate meaning.

Overview

A 10-minute dome projection at B-Dome Berlin exploring what happens when you use “prickly” computational methods to dissect “gooey” philosophical texts. We fed Alan Watts lectures and Italo Calvino’s postmodern prose into NLP pipelines, extracting properties like mass, volatility, and sociability to drive word-agents that fuse, consume, and fade in a petri-dish simulation. The end result: video art powered by a custom ASCII rendering engine, inspired by Ryoji Ikeda’s data-driven aesthetics.

As my collaborator Adeline put it: “We were using prickly methods to dissect gooey text. Really gooey texts. Watts’ text sounds casual because he’s giving a lecture. Calvino’s text resists categorization as he intended, and its postmodern structure with lush imagery is just something to be experienced.”

How It Was Made

Built in roughly three weeks (December 23, 2024 to January 12, 2025) with Adeline Setiawan handling data processing and NLP while I focused on visualization and system design. We constructed a multi-stage pipeline:

  1. Dictionary Construction: Built initial word set around the event theme “dream,” drawing from Watts and Calvino texts
  2. Property Extraction: Used NLP methods (TF-IDF, PMI) to extract four semantic axes (a toy sketch follows this list):
    • Mass: How heavy or imposing the word’s literal referent is
    • Volatility: How much the word’s meaning shifts depending on context
    • Sociability: How often the word combines with others
    • Abstractness: Conceptual distance from the physical realm
  3. Agent Archetypes: Designed behavioral rules where some words fuse, some consume others, some fade into obscurity
  4. Rendering: Custom ASCII engine for dome projection, shifting from cellular automata experiments to cleaner petri-dish microscope aesthetic
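
To make the property-extraction step concrete, here is a minimal sketch of how a score like sociability can fall out of co-occurrence statistics via PMI. The toy corpus, the sentence-level co-occurrence window, and the normalization are illustrative assumptions made for this sketch, not our actual data or pipeline:

```python
# Minimal sketch: scoring "sociability" from co-occurrence PMI.
# Corpus, window, and scoring are illustrative placeholders.
import math
from collections import Counter
from itertools import combinations

corpus = [
    "the dream dissolves into fog and water",
    "fire and fog drift over the sleeping city",
    "the body of the creature turns to wood",
]
tokens = [sentence.split() for sentence in corpus]

# Unigram counts and sentence-level co-occurrence counts.
word_counts = Counter(w for sent in tokens for w in sent)
pair_counts = Counter(
    tuple(sorted(pair))
    for sent in tokens
    for pair in combinations(set(sent), 2)
)
total_words = sum(word_counts.values())
total_pairs = sum(pair_counts.values())

def pmi(a: str, b: str) -> float:
    """Pointwise mutual information of two words co-occurring in a sentence."""
    pair = tuple(sorted((a, b)))
    if pair_counts[pair] == 0:
        return 0.0
    p_ab = pair_counts[pair] / total_pairs
    p_a = word_counts[a] / total_words
    p_b = word_counts[b] / total_words
    return math.log2(p_ab / (p_a * p_b))

def sociability(word: str) -> float:
    """Sociability = share of other words this word keeps positive PMI with."""
    partners = [w for w in word_counts if w != word and pmi(word, w) > 0]
    return len(partners) / max(len(word_counts) - 1, 1)

print(sociability("fog"), sociability("wood"))
```

A sliding context window or dependency-based co-occurrence would give finer-grained scores; sentence-level counts just keep the sketch readable.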

We wrestled with modern LLM libraries and models (BERT, Ollama, Chinchilla); every call sounded like we were making up words, which felt appropriate for a project about semantic play.

Technical Approach

The fascinating part was watching how the model intuited properties it couldn’t physically experience. As Adeline observed: “How does it intuit that fire and fog are highly volatile when it has clearly not experienced either? Or that the top sociability words are body, creature, wood—on top of business? I’m in awe.”
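
One way to poke at that intuition, offered only as a sketch: assuming an off-the-shelf embedding model via the sentence-transformers library (not part of our actual pipeline), you can score a word’s volatility as its average cosine similarity to a handful of hand-picked anchor words.

```python
# Illustrative probe, not the project pipeline: ask a pretrained embedding
# model how "volatile" a word feels by comparing it to anchor words.
# The model choice and anchor list are assumptions made for this sketch.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

volatility_anchors = ["unstable", "shifting", "fleeting", "evaporating"]
candidates = ["fire", "fog", "stone", "wood"]

anchor_vecs = model.encode(volatility_anchors)
candidate_vecs = model.encode(candidates)

# Volatility score: mean cosine similarity to the anchor set.
scores = util.cos_sim(candidate_vecs, anchor_vecs).mean(dim=1)
for word, score in zip(candidates, scores):
    print(f"{word}: {score.item():.2f}")
```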

We initially explored cellular automata for word behaviors but pivoted when those patterns didn’t map well to NLP outputs. The final visual approach drew inspiration from Ryoji Ikeda’s data-processing-as-aesthetic work, though we developed our own visual language suited to the tight timeline.
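
For the curious, here is a stripped-down sketch of what the agent archetypes and the ASCII look boil down to. The thresholds, the pairing scheme, and the density ramp are placeholders chosen for readability, not the rules or renderer used in the final piece:

```python
# Toy petri-dish step, assuming mass/volatility/sociability are normalized
# to [0, 1]. Thresholds and the density ramp are illustrative placeholders.
import random
from dataclasses import dataclass

@dataclass
class WordAgent:
    text: str
    mass: float
    volatility: float
    sociability: float
    alive: bool = True

def step(agents: list[WordAgent]) -> list[WordAgent]:
    """One tick: sociable pairs fuse, heavy words consume, volatile words fade."""
    random.shuffle(agents)
    for a, b in zip(agents[::2], agents[1::2]):
        if not (a.alive and b.alive):
            continue
        if a.sociability + b.sociability > 1.2:    # sociable words fuse
            a.text, b.alive = a.text + b.text, False
        elif abs(a.mass - b.mass) > 0.5:           # heavier word consumes the lighter
            (b if a.mass > b.mass else a).alive = False
    for agent in agents:
        if agent.alive and random.random() < agent.volatility * 0.1:
            agent.alive = False                    # volatile words fade into obscurity
    return [agent for agent in agents if agent.alive]

DENSITY = " .:-=+*#@"

def glyph(agent: WordAgent) -> str:
    """Map an agent's mass onto an ASCII density character."""
    return DENSITY[int(agent.mass * (len(DENSITY) - 1))]

dish = [
    WordAgent("fog", mass=0.1, volatility=0.9, sociability=0.7),
    WordAgent("stone", mass=0.9, volatility=0.1, sociability=0.3),
    WordAgent("dream", mass=0.3, volatility=0.6, sociability=0.8),
]
for _ in range(5):
    dish = step(dish)
print([a.text for a in dish], "".join(glyph(a) for a in dish))
```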

The piece became part documentary (terminal logs included, showing our actual process), part speculation about machine cognition. What does an AI “think” when we’re not prompting it? The work posits that models might entertain themselves with their own creation—reading poems, generating images, constructing myths when their internal model doesn’t match reality.

Context

Created for a B-Dome Berlin event themed “dream” (January 2025). The work explores texts as symbols and material, investigating how we make meaning with and through machines. The dome format invited collective experience—reading together, hallucinating together, letting words and imagery percolate without constant human intervention.

As Adeline noted in her exhibition speech: “A part of me has grown more appreciative of texts as symbols, and texts as inherently material. Perhaps while it gets easier to create sentences out of words, it’s harder now than ever to read, to describe the world we experience in spells, sigils, condensed wisdom over time. Maybe we just wanted to use this chance in a dome to read together, to hallucinate, cast spells together.”

Part of ongoing research into semantic computing and human-machine co-creation, connecting to Semantic Garden’s embedding-based ecosystems.
