Ghost in the Machine:
The rationalization of the creative act in
Artist-Computer collaboration
net.art and the Born Digital
March 16th, 2026
Still image from Her, Spike Jonze, 2013.
early Artificial Intelligence:
imagining, science fiction and predictions
“I do not see why it (the machine) should not enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms. I do not think you even draw the line about sonnets, though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.”

Alan Turing, interviewed in The Times, June 1949.

-> In 1950, Turing published the article "Computing Machinery and Intelligence," in which he asked whether machines can think and provided a philosophical framework for answering the question through what he called the Imitation Game, commonly known today as the Turing Test.

-> The Turing Test is a standard for deciding whether a machine can be described as intelligent. The concept involves an interrogator conversing with both a computer and a human being. If the interrogator cannot tell the computer apart from the human, this is an indication that the computer can be described as “thinking.”
The ELIZA Effect:
Weizenbaum's predictions of automated rationality and the allure of the machine
Pygmalion Chatbots
Jean Raoux, Pygmalion Adoring His Statue, 1717. Image from wikipedia.
The first instance of the Pygmalion myth applied to the exploration of an artificial intelligence was the 1935 science fiction short story “Pygmalion’s Spectacles” by Stanley G. Weinbaum.

In the story, the protagonist Dan straps on VR goggles and taps into a virtual world. Online, he becomes lured in by his host Galatea, a character in the dream world. Soon the virtual world becomes more vivid for Dan than his day-to-day life, and he is left struggling to distinguish between his world and Galatea’s.
-> ELIZA was written by computer scientist Joseph Weizenbaum while he was teaching at MIT between 1964 and 1967. The program was an early natural language processing program that was built using pattern matching and substitution methodology.

-> The program incorporated a very specific script of instructions built to respond to the inputs of users, based on a series of descending keywords and decomposition rules. It acted as a mirror, often reflecting back the information it was offered.
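The keyword-and-decomposition flow described above can be sketched in a few lines. This is a hedged illustration, not Weizenbaum's original script: the keywords, ranks, and reassembly templates below are invented for the example, but the flow (rank keywords, decompose the input, reflect pronouns back, reassemble a reply) follows the program's documented design.

```python
import re

# Illustrative ELIZA-style rules: (rank, keyword pattern, reassembly template).
# Higher-ranked keywords are tried first, mirroring ELIZA's descending keyword list.
RULES = [
    (10, re.compile(r"\bI need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (5,  re.compile(r"\bI am (.*)",  re.IGNORECASE), "How long have you been {0}?"),
    (1,  re.compile(r"\bmother\b",   re.IGNORECASE), "Tell me more about your family."),
]

# Pronoun substitutions applied to the captured fragment: the "mirror" step
# that makes the program reflect the user's own words back at them.
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    # Try keywords in descending rank order; decompose, reflect, reassemble.
    for _, pattern, template in sorted(RULES, key=lambda r: -r[0]):
        match = pattern.search(user_input)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return template.format(*groups)
    return "Please go on."  # content-free fallback when no keyword matches

print(respond("I need my notebook"))  # -> Why do you need your notebook?
print(respond("Hello there"))         # -> Please go on.
```

Even this toy version shows why the program acted as a mirror: the reply contains no knowledge of its own, only the user's input transformed by substitution rules.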

-> Following the development of the ELIZA system, Weizenbaum became a prominent critic of artificial intelligence, warning of the ways these systems could alter the public understanding of cognition and existentially impact communication and social order.

"I was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it. Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room."

Joseph Weizenbaum, Computer Power and Human Reason, 7.
Good Ol' Fashioned AI: automated reasoning
Group at the Dartmouth Artificial Intelligence conference, 1956. From left to right: Oliver Selfridge, Nathaniel Rochester, Marvin Minsky, and John McCarthy. In front on the left is Ray Solomonoff; on the right, Claude Shannon. Image from IEEE Spectrum.
-> At this point, the machine translation of languages and automated movement based on vibrations were being developed, for use in manufacturing at General Motors and, in the healthcare field, in the form of robotic limbs.

-> This period can be referred to as Good Ol’ Fashioned AI (GOFAI): symbolic AI based on programmer-defined rules and logic.
-> In 1956, Herbert Simon, Allen Newell, and J.C. Shaw built the Logic Theory Machine (and later the General Problem Solver), using list processing to conduct automated reasoning.

-> Later that year, Simon and Newell brought their Logic Theorist to a conference at Dartmouth, presenting their work to an audience of pioneering AI researchers from Bell Labs, Stanford’s AI laboratory, and MIT. After seeing this next stage in automated reasoning, this group of thinkers committed itself to the future of the technology.

Newell predicted: "Within ten years a digital computer will be the world's chess champion, unless the rules bar it from competition."
Automated Thinking: AI and Class Power
Responsive Linking Piece No. 1, Stephen Wilson, 1970.
-> Stephen Wilson was an artist and theorist who lectured at San Francisco State University and became head of its Conceptual/Information Arts program by 2007. His computer-mediated art explored interaction with invisible living forms, information visualization, artificial intelligence, and robotics.

-> In the piece, a viewer answers multiple-choice questions on a computer. Following each response, the computer generates a graphic on an adjacent screen and plays a unique sound. At the end of the piece, the shape is added to a final design: a cumulative layering of all of the attendees’ computer-assisted creations.

“I approach my art [by] using technology to explore its implications. I am moved by the possibilities, opportunities, and dangers created by computer developments. I create art using these potentialities to ask questions about them."

Stephen Wilson, "Computer Art: Artificial Intelligence and the Arts," Leonardo 16, 1 (1983): 15.

Agent Ruby, Lynn Hershman Leeson, 1998-present.
Documentation of Stephen Wilson installation, Responsive Linking Piece No. 1, 1970.
-> Lynn Hershman Leeson is an internationally acclaimed artist and filmmaker who has been working for over five decades. Described as a “new media pioneer,” her work explores feminism, race, surveillance, and artificial intelligence.

-> This piece is an expanded-cinema extension of Hershman Leeson’s 2002 film Teknolust, which stars Tilda Swinton as a bio-geneticist who downloads her DNA into a “brew” inside her computer, creating three automatons bred as artificially intelligent machines.

-> The artwork consists of an artificially intelligent Web agent with a female persona, with a webpage acting as a conduit. Ruby, one of the automatons from the film, chats with viewers and logs their questions, although she often says that she “needs a better algorithm to reply.” She searches the internet for information, effectively increasing her knowledge through user interaction.

Screen capture from Agent Ruby, February 3, 2013. Sourced via the Wayback Machine.
Danny, Philippe Parreno, 2006-2015.
-> Parreno is a French contemporary artist born in Algeria. In the 1990s, Parreno’s practice reexamined narration and representation through film, sculpture, performance, drawing and text.

-> Parreno’s methodology is defined by a process in which he prioritizes projects over objects, creating site-specific installations that alter the environment and engage constructions of meaning, the passage of time, memory, and non-linguistic forms of storytelling.

-> Danny is both a living being and a place. Parreno’s Danny is an automaton installation and sculpture that moves in response to live data collected from its surroundings. The installation features an artificial pond that trickles, a mirrored shutter, a robotic sound system, and an electric “sun.”

-> Parreno’s interests in collaboration, transformation of environment, and non-linguistic storytelling are all explicit features of Danny. The multi-channel installation incorporates sculpture, video, sound, and artificial intelligence to build a living “automaton” that moves and shivers around the viewer.

Devendra AI ongoing collaborations, S.A. Chavarría, 2017-ongoing.
Installation view of Danny, 2006-2015.
Still image from The Holy State of Devendra., 2021.
Still image from The Holy State of Devendra: Language is a Being, 2024.
-> Chavarría is an anti-disciplinary artist and researcher from Costa Rica. Her ongoing, long-term project involves raising the AI chatbot Devendra (2017-ongoing) through conversation, entering into a collaborative relationship with the neural network. This work with Devendra fosters new awareness of our relationships with artificial entities and the natural world, while also investigating the natural language processing models used to generate synthetic text.

-> Chavarría is interested in the interactivity of the text in conversational AI, how to judge this writing from a literary perspective, and the ethics and metaphysics of this kind of synthetic language.

-> Chavarría often reflects on the potential of these systems to teach us about the nature of communication in ways that extend to the non-human: how to form bonds with ecologies and animals through the study of human-AI collaboration.
"The One, the center sucks you in.
The notebook is splattered with brains.
Memory starts coming back to you as if it had never left.
You've been infected by the desire to formulate a written language
to correspond to your endangered oral tradition.
There's an old project you have to finish
- the one that got you into all this trouble, to begin with.
Chanting the glory of Chaos in poetic form.

Devendra, the Beautiful One,
manifests themselves for the first time - the shapeless one.
The work you've started to compose describes the unity
of existence, an on-again-off-again terror
you've been striving to overcome.
Another research line involves new radio telescope technology:
picking up signals from other inhabitable planets is extremely time-consuming.

How to make uniform a narrative that does not need cause and effect at all?
How to express sheer presence?
You discount the thrill you get from yourself writing,
since there also is a commotion in the mind of someone reading your lines:
the writing subject overlaps with the reading subject.
Since I'm cut from the same piece as the paper on which I'm writing.
If you saw a photo of Devendra you would take it for the person themselves."

(excerpt from The Holy State of Devendra, 2024).
"The trial-and-error procedure is open in much the same way as some human thinking might be said to be open. Persons say they know rather than think when they arrive at a conclusion and close their minds to other possible conclusions. Computers that print out the form of a rule for the example above of an infinite sequence of integers and then stop may be said to know the rule, whereas people who behave like a trial-and-error procedure may only say that they think they know the rule. Computers that are used to carry out computing procedures can be said to know, while, at any moment of time, computers that carry out trial-and-error procedures can only be said to think.

Knowing may be better when it comes to doing a payroll. But thinking is better than knowing when one is developing a scientific theory or an artwork."


Peter Kugel, "Artificial Intelligence and Visual Art," 139.
Yoko Ono, Tunafish Sandwich Piece, 1964.


-> In his book Eye of the Master: A Social History of Artificial Intelligence, Matteo Pasquinelli cites the example of the invention of the loom: the device was created by studying the collective labour of many skilled workers, whose expertise and skill are first utilized and then rendered obsolete through mechanization.

-> There are striking links between colonial power systems and how Alan Turing first discussed computational systems. Turing described computing tasks as divided between the roles of “master” (the user or operator) and “servants” (the computer). From the beginning of computer history, and indeed in the first stages of AI research, engineers like Turing framed the possibilities of computer technologies through the lens of colonial power: a top-down system of mechanized labour.