“Jane Writing Project” at the 2025 Ars Electronica Festival: Panic – Polyphony in POSTCITY

Polyphony by C-LAB Taiwan Sound Lab (TW), 3–7 Sept.
At the Ars Electronica Festival in POSTCITY, the featured work is The Jane Writing Project. Initiated on December 16, 2023, the project generates one AI-produced portrait and one story of “Jane” each day, establishing a continuous cycle of image–text–image translation. The works presented in this exhibition cover the period through June 30, 2025, while the project itself continues as a durational and evolving practice.

The process begins with a digital portrait of Jane, which is uploaded to ChatGPT along with a fixed set of instructions. From this, the system creates a new role for Jane and writes a 300-word story inspired by the image. The story is then translated into Traditional Chinese, paired with an explanatory note, and finally re-imagined as a new square-format portrait. Through this daily sequence, the project sustains an iterative loop of image and text, where each output becomes the basis for the next transformation.
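The daily sequence described above can be sketched as a simple orchestration loop. This is an illustrative assumption, not the project's actual code: all names here (DailyEntry, StubModel, the prompt text) are hypothetical, and in the real project the model calls go to ChatGPT rather than a local stub.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the project's fixed instruction set.
FIXED_PROMPT = ("Invent a new role for Jane and write a 300-word story "
                "inspired by this portrait.")

@dataclass
class DailyEntry:
    role: str        # Jane's role for the day, e.g. "The Memory Duplicator"
    story_en: str    # the 300-word story
    story_zh: str    # Traditional Chinese translation
    note: str        # explanatory note paired with the translation
    portrait: str    # path of the new square-format portrait

class StubModel:
    """Stand-in for the generative model; real calls would hit an API."""
    def story_from_image(self, portrait_path, prompt):
        return "The Quiet Architect", f"A story inspired by {portrait_path}."
    def translate(self, text, target):
        return f"[{target}] {text}"
    def annotate(self, story_zh):
        return "Explanatory note on the translation."
    def image_from_text(self, story, size="1024x1024"):
        return f"portrait_{size}.png"  # square-format output

def run_daily_cycle(portrait_in, model):
    """One image -> text -> image iteration; the output portrait
    becomes the seed for the next day's cycle."""
    role, story_en = model.story_from_image(portrait_in, FIXED_PROMPT)
    story_zh = model.translate(story_en, target="zh-Hant")
    note = model.annotate(story_zh)
    portrait_out = model.image_from_text(story_en)
    return DailyEntry(role, story_en, story_zh, note, portrait_out)

entry = run_daily_cycle("jane_day_001.png", StubModel())
```

Feeding `entry.portrait` back into the next call closes the loop, so each day's output seeds the following day's transformation.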
From its outset, the project deliberately abandoned the traditional notion of authorship. By embedding fixed prompts at the beginning of each generative cycle, the work releases part of the creative agency to the AI system, allowing the language and generative mechanisms of GPT-4 and GPT-4o to construct stories, roles, and images with a degree of autonomy. In this way, artificial intelligence emerges not merely as a tool but as a co-author. The algorithm begins with blurred and indeterminate portraits, symbolizing unstable identities, which gradually take shape and gain clarity as narratives unfold and networks of roles expand. This trajectory—from data to hybrid images, through language and algorithmic reconstruction—produces multiple digital identities of “Jane” while raising a fundamental question: does creativity stem from human intention, or from the generative agency of AI operating within heterogeneous computational spaces? Jane thus becomes more than a fictional figure; she is also a mirror for examining AI’s agency in posthuman artistic practice.
Because the generation of stories and images relies on fixed instructions, the resulting content in both textual structure and visual style is closely tied to the characteristics of the underlying base models. This positions the creative process not as unilateral control but as a form of collaboration shaped by model parameters, training data, and algorithmic logic. Within this framework, the stability of platforms and the boundaries set by their policies become essential conditions for sustaining the work. When restrictions or anomalies occur at the platform level, the project may face interruptions, leaving “gaps” in the long temporal arc of its production.
Through this long-term and systematic approach, The Jane Writing Project not only produces a vast archive but also positions itself as a critical experiment in algorithmic authorship and aesthetic mediation. By tracing the iterative transformations between image and text, the project makes visible the subtle biases, interpretive tendencies, and cultural framings embedded within generative models. Each daily story and portrait functions as both an autonomous creative output and a fragment of a larger inquiry into how machine intelligence perceives, narrates, and reimagines the figure of “Jane.”
For audiences, the project opens up a layered encounter with algorithmic presence—at once intimate and distant, repetitive and evolving. The durational rhythm of daily production foregrounds temporality, persistence, and accumulation as central artistic strategies, while the multi-platform dissemination creates shifting contexts in which “Jane” circulates, resonates, and reappears.
In addition to the daily cycle of portraits and stories, the project has also developed an AI-driven analysis website that allows visitors to browse by date range and to watch, listen to, and read Jane’s stories and images. Beyond access, the site incorporates advanced data visualization techniques. A keyword network map extracts recurring terms, concepts, and character names from the stories, presenting them as nodes and connections: black nodes indicate core characters (such as The Signal Forger, The Veil Whisperer, The Facekeeper), while gray nodes represent thematic keywords (e.g., memory, sorrow, blur, testimony, imagination), radiating outward in constellations of co-occurrence. A separate character network map semantically clusters role titles such as The Whisperer of Vanishing Faces, The Memory Splicer, The Quiet Architect, and The Painter of Shadows, showing affinities and symbolic overlaps between different roles.
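The keyword network map described above can be approximated with a simple co-occurrence count: terms appearing in the same story are linked, and node color follows the black/gray convention named in the text. The role titles and keywords below come from the exhibition description; the extraction logic itself is an illustrative sketch, not the site's actual implementation.

```python
from collections import Counter
from itertools import combinations

# Role titles and thematic keywords named in the exhibition text.
CHARACTERS = {"the signal forger", "the veil whisperer", "the facekeeper"}
KEYWORDS = {"memory", "sorrow", "blur", "testimony", "imagination"}

def cooccurrence_network(stories):
    """Return node weights and edge weights for a keyword network map.

    Terms that appear in the same story are linked; edge weight is the
    number of stories in which the pair co-occurs."""
    nodes, edges = Counter(), Counter()
    for story in stories:
        text = story.lower()
        present = sorted(t for t in CHARACTERS | KEYWORDS if t in text)
        nodes.update(present)
        edges.update(combinations(present, 2))
    return nodes, edges

def node_color(term):
    # Black for core characters, gray for thematic keywords.
    return "black" if term in CHARACTERS else "gray"

# Two hypothetical story fragments for demonstration.
stories = [
    "The Facekeeper guards a blur of memory.",
    "Testimony and memory entwine in sorrow.",
]
nodes, edges = cooccurrence_network(stories)
```

Node and edge counters like these map directly onto a graph layout library, with node weight driving size and edge weight driving line thickness.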

The website further integrates an MBTI personality trait body map, visualizing the distribution of Jane’s psychological functions across a human outline, with colors and percentages representing varying strengths (e.g., blue for Extraverted Feeling, red for Introverted Feeling, green for Thinking). For example, on June 30, 2025, Jane appeared as The Memory Duplicator, whose profile was defined by Introverted Intuition (45%) and Introverted Feeling (30%), supported by Introverted Thinking (15%) and Extraverted Sensing (10%). This visualization highlights her inclination toward deep insight and fidelity to inner values, balanced by rational structure and sensory experience.
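A daily profile like the one above reduces to a small data structure: a mapping from psychological functions to percentage shares, which the body map then colors. The values below are taken from the June 30, 2025 entry; the color table extends the three colors named in the text with assumed entries for the remaining functions.

```python
# Profile for "The Memory Duplicator" (June 30, 2025), from the
# exhibition text.
PROFILE = {
    "Introverted Intuition": 45,
    "Introverted Feeling": 30,
    "Introverted Thinking": 15,
    "Extraverted Sensing": 10,
}

COLORS = {
    "Extraverted Feeling": "blue",    # named in the text
    "Introverted Feeling": "red",     # named in the text
    "Introverted Thinking": "green",  # text: green for Thinking
    "Introverted Intuition": "purple",  # assumed
    "Extraverted Sensing": "orange",    # assumed
}

def dominant_function(profile):
    """Function with the largest share of the day's profile."""
    return max(profile, key=profile.get)

def is_complete(profile):
    """Percentages shown on the body map should sum to 100."""
    return sum(profile.values()) == 100
```

Rendering then amounts to shading regions of the human outline with `COLORS[f]` at an intensity proportional to `PROFILE[f]`.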
By combining storytelling, portraiture, and data visualization, The Jane Writing Project not only accumulates a living archive of algorithmic imagination but also offers audiences new ways to perceive the entanglement of narrative, identity, and machine intelligence.


Digital Portraits of Jane
The idea of “digital portraits of Jane” lies at the heart of the entire series. Each portrait is not simply a visual likeness but a computational composite, formed through layers of data, code, and algorithmic interpretation. Since its inception, the series has used online traces of “Jane” as creative material, transforming anonymous fragments of the internet into an evolving portrait archive.
The production process combines multiple technologies—automated search platforms, real-time digital image processing, facial recognition, interactive controls, text-to-speech conversion, big data analysis, affective computing, and artificial intelligence. A custom program searches Yahoo! Flickr for photographs tagged “Jane,” applies facial recognition to the downloaded images, and synthesizes the detected faces into composite portraits. Since January 1, 2018, this method has generated 2,185 portraits. In 2021, the project expanded to Instagram, where portraits and daily writings continue to be published through the account i_digi_jane.
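The synthesis step, combining detected faces into a single composite portrait, can be approximated by pixel-wise averaging of aligned face crops. This minimal NumPy sketch assumes the faces have already been detected, cropped, and aligned to a common size; the project's actual compositing method is not specified in the text.

```python
import numpy as np

def composite_portrait(faces):
    """Pixel-wise average of aligned face crops.

    faces: list of uint8 arrays of identical shape (H, W, 3).
    Returns a uint8 composite of the same shape."""
    stack = np.stack([f.astype(np.float64) for f in faces])
    mean = stack.mean(axis=0)
    return np.clip(np.round(mean), 0, 255).astype(np.uint8)

# Two synthetic 2x2 "faces": one all-black, one all-white.
dark = np.zeros((2, 2, 3), dtype=np.uint8)
light = np.full((2, 2, 3), 255, dtype=np.uint8)
comp = composite_portrait([dark, light])
```

Averaging many faces in this way yields the soft, indeterminate quality the project attributes to its early portraits: the more faces contribute, the blurrier and more anonymous the composite becomes.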
Over time, these digital portraits of Jane have been recontextualized in diverse works and exhibitions, including I Am Jane, What’s Your Scent Today?, Flowing Jane, Portraits of Jane, Whispers, Portrait Garden, Pseudo Travel Notes, Jane’s Journal, Please Listen to Me, i_digi Jane, and Flowing Room. Presented in forms ranging from documentary records to immersive installations, these works invite audiences into encounters that are both technological and affective.
By situating Jane in the shifting space between real and virtual perception, the project reveals how identity can be continuously reconstituted through data, algorithms, and cultural imagination. For audiences, the experience of Jane emerges not as a single image but as a constellation of presences—at once symbolic, emotional, and computational.

Funded by the National Science and Technology Council, Republic of China (Taiwan)
Produced by Taiwan Living Arts Foundation, C-LAB Taiwan Sound Lab
Organized by the Ministry of Culture, Republic of China (Taiwan)
