00:04 |
SpokenWeb Podcast Intro |
[Instrumental music overlapped with feminine voice]
Can you hear me? I don’t know how much projection to do here. |
00:17 |
Maia Harris: |
What does literature sound like? What stories will we hear if we listen to the archive?
Welcome to the SpokenWeb podcast, stories about how literature sounds. |
00:31 |
Maia Harris: |
My name is Maia Harris, subbing in for our usual hosts for a very special edition of the SpokenWeb podcast, recorded live at the 2024 SpokenWeb symposium here on Treaty Seven Land.
Each month, we bring you different stories of Canadian literary history and our contemporary responses to it created by scholars, poets, students, and artists from across Canada.
How can artists harness algorithmic processes to generate poetry, music, and dance? And what can we learn from the longer history of creative coding and early experiments in human-computer collaboration?
In this episode of the SpokenWeb podcast, we will venture into the roots and future directions of algorithmic art. |
01:18 |
Chelsea Miya: |
Thanks, Maia. Hi everyone. I am Chelsea Miya. |
01:22 |
Nicholas Beauchesne: |
And I’m Nick Beauchesne. And this is our live studio audience. . . |
01:28 |
Live Studio Audience: |
[Cheers and applause] |
01:36 |
Chelsea Miya: |
[Beat music plays and fades]
Thanks to the “algos,” or algorithms, used in social media to curate content and drive engagement, most people have at least heard the term, even if they have little understanding of what it means.
The concept of an “algorithm” predates computers, dating back to the ninth century. An “algorithm” is simply a set of rules for executing a particular task or sequence of operations. You can create an algorithm for getting ready in the morning, baking a cake, or driving to work. As we’ll see later in the episode, algorithms can even be used to generate poetry, compose music, and choreograph dances. |
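For readers following the transcript who want to see what such a rule set looks like in code, here is a minimal sketch in Python. It is an illustrative toy only, not a method used by any of the artists in this episode; the word lists and function name are invented for the example.

```python
import random

# A toy algorithm: a fixed set of rules that assembles short "poems"
# at random from a small, made-up vocabulary. Illustrative only.
NOUNS = ["archive", "signal", "voice", "machine", "ice", "moon"]
VERBS = ["hums", "cracks", "listens", "unfolds", "remembers"]

def tiny_poem(lines=3, seed=None):
    rng = random.Random(seed)  # a fixed seed makes the output repeatable
    return "\n".join(
        f"the {rng.choice(NOUNS)} {rng.choice(VERBS)}" for _ in range(lines)
    )

print(tiny_poem(seed=1981))
```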
02:14 |
Nicholas Beauchesne: |
The clip you’re about to hear is from the University of Alberta campus radio show “Voiceprint.” You can learn more about the series and its early contributions to experimental literary radio in the SpokenWeb podcast episode “Academics on Air.”
This particular Voiceprint episode, “Printing and Poetry in the Computer Era,” aired in 1981. The archival recording anticipates hopes and fears about automated, computer-generated art that have, in some ways, come to be realized in the present. |
02:45 |
Audio from the “Popular Poetics” segment of the Voiceprint episode “Printing and Poetry in the Computer Era,” 20 May 1981; read by Anna Altmann. |
Although documentation is lacking, it is probable that computer poetry was invented simultaneously at various locations in the 1950s by engineers occupied in such language tasks as mechanical translation. During the 1960s, however, these developments came to the attention of poets and literary scholars, who then began to explore the literary possibilities of computer technology.
Although somewhat disturbed by the implications of such activity, these pioneers were more fascinated by the superhuman inventiveness of the computer and by the inability of the reader to distinguish with certainty between machine and human products. Although no recognized masterpieces of cybernetic literature have yet been produced, it seems only a matter of time before computer poetry becomes a respected form of verse in its own right. Indeed, the possibility exists that a future Milton or Shakespeare is at this very moment studying computer science at a technical school or university. |
03:44 |
Nicholas Beauchesne: |
The Milton or Shakespeare of computer poetry may not have arisen yet, but one contender could be OpenAI’s ChatGPT, which debuted in 2022. Other AI chatbots entered the mix soon after: Google’s Gemini, Microsoft Copilot, and even Adobe Photoshop now has an AI-assisted editing mode. These technologies raise fundamental ethical and existential questions about what constitutes art.
Can a programmer or a program be a poet? They can certainly try. As ChatGPT told us in the form of haiku:
craft with words untold
ChatGPT offers aid
poetry unfolds |
04:26 |
Chelsea Miya: |
Our first guest is Mike O’Driscoll, the Director of SpokenWeb at the University of Alberta. He’s an authority on early experiments in procedural, or algorithmic, poetry. As he explains, the “Dada” movement—and that’s “Dada” with a “d,” not a “t” as in “data”—was an anti-art movement. These early “coders” became infamous for their avant-garde performance pieces. The instructions were generated randomly, not with digital tech (this was before bits and bytes) but with everyday analog tools: paper, a pen, and, oddly, a hat. |
05:04 |
Mike O’Driscoll: |
Tristan Tzara, one of the leaders of the “Dadaist” movement, would pass a hat around the room—think of a Vienna café in 1916—and invite audience members to put a word into the hat. Then the hat would be gathered, and as the words came out of the hat, that would construct the poem. That’s a “procedural” poetic. That is a way of making a poem according to a particular rule-driven methodology that might or might not be modified before, during, or after, in terms of human intention and other creative roles that the human participants might play. |
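A rough computational analogue of the hat procedure O’Driscoll describes might look like the short Python sketch below. It is a simplification for illustration; the sample word list is invented, and Tzara, of course, needed nothing but paper and a hat.

```python
import random

def hat_poem(contributions, rng=None):
    """Tzara's hat, roughly: every contributed word goes into the 'hat'
    (a list), the hat is shaken (shuffled), and the words are read out
    in the order they happen to emerge. Chance does the composing."""
    rng = rng or random.Random()
    hat = list(contributions)  # the audience's words
    rng.shuffle(hat)           # shake the hat
    return " ".join(hat)

words = ["cafe", "noise", "velvet", "umbrella", "static", "1916"]
print(hat_poem(words, random.Random(7)))
```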
05:48 |
Chelsea Miya: |
Fast-forward to the 1960s. IBM [“International Business Machines”] had just debuted powerful new computing machines, and almost from the get-go, the company founders imagined using these machines to create art. They invited a number of artists to their laboratories, including Jackson Mac Low. |
06:08 |
Mike O’Driscoll: |
In Southern California in 1969, Jackson was invited to participate alongside computer technologists in the production of some poetry, which he dubbed the “PFR-3 Poems” [PFR-3: the Programmable Film Reader-3]. These used a programmable film reader driven by a computer program that would essentially take the inputs he produced and randomize them in different ways. He could enter up to a hundred lines of text, with up to 48 characters per line. The program would identify units of that text, whether words or sentences, then randomize those and produce poems by displaying on a screen every tenth line produced through that algorithmic procedure. So, that was a very early instance of Jackson Mac Low engaging computer technology to produce a poem. |
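Mac Low’s original PFR-3 programs are not reproduced in the episode, but a loose sketch of the procedure as O’Driscoll describes it (randomize the input units, keep every tenth generated line) might look something like this Python toy. The function name, sample lines, and line-length choices are illustrative assumptions, not the historical code.

```python
import random

MAX_LINES, MAX_CHARS = 100, 48  # the input limits O'Driscoll mentions

def pfr3_sketch(source_lines, n_outputs=5, keep_every=10, seed=None):
    """Split the source lines into word units, shuffle them into new
    lines, and keep only every tenth line generated, as described."""
    rng = random.Random(seed)
    units = [w for line in source_lines[:MAX_LINES]
             for w in line[:MAX_CHARS].split()]
    kept, generated = [], 0
    while len(kept) < n_outputs:
        rng.shuffle(units)
        line = " ".join(units[:rng.randint(3, 8)])
        generated += 1
        if generated % keep_every == 0:  # display every tenth line
            kept.append(line)
    return kept

for line in pfr3_sketch(["the sludge has cracked",
                         "all green in the leaves"], seed=3):
    print(line)
```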
07:09 |
Chelsea Miya: |
As Michael O’Driscoll explains, this was not Mac Low’s first experiment with computational art. Mac Low has been described as a poet who worked like a computer before computers: he had been experimenting with rule-based language games for years. |
07:25 |
Mike O’Driscoll: |
Jackson had already been working by hand for six years before that on what he called his “diastic writing-through” method, which was essentially an algorithmic procedure that uses source texts and seed texts, or index texts, to determine which words are pulled out of the source text and displayed on the page of the poem. That procedure depends specifically on the very exacting rule of matching letter positions in words in the seed text to letter positions in words in the source text to determine the material that becomes the poem. That’s a process Jackson was doing by hand from 1963 and continued doing by hand for the next 26 years. And if anyone wants to try this, I welcome them to try it. But the manic patience it takes to do this is astounding and impressive. |
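For the curious, a bare-bones version of a diastic “reading-through” rule can be expressed in a few lines of Python. This is a simplification of Mac Low’s own exacting procedure, offered only to show the letter-position matching O’Driscoll describes; the sample texts are invented.

```python
def diastic(source_text, seed_text):
    """For each letter of the seed text, scan forward through the source
    for the next word carrying that letter in the same position the
    letter occupies in its seed word. (A simplified sketch, not Mac
    Low's full set of rules.)"""
    source_words = source_text.lower().split()
    out, i = [], 0
    for seed_word in seed_text.lower().split():
        for pos, letter in enumerate(seed_word):
            while i < len(source_words):
                word = source_words[i]
                i += 1
                if len(word) > pos and word[pos] == letter:
                    out.append(word)
                    break
    return out

print(diastic("a rose is a rose is a rose and so is everything else",
              "rose"))
```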
08:25 |
Nicholas Beauchesne: |
Jackson Mac Low was not the only artist who experimented with algorithmic methods. He was part of an experimental art movement called “Fluxus.” Like the “Dadaists” who came before them, the Fluxus artists of the 1960s were more interested in the process of making art than in the finished piece. In fact, the art was never finished. |
08:46 |
Mike O’Driscoll: |
It’s important to note that much of that performance work was done through collaborative processes that demanded or asked of the performers and the artists a certain level of attentiveness and attunement to each other in terms of what was going on in the moment. So there’s this deeply relational aspect to what’s going on there. There is also a modelling of certain kinds of social or political formations. And so what Jackson is doing there is bringing the procedural into contact with human agency and with human community. |
09:25 |
Nicholas Beauchesne: |
One of the best examples of Mac Low’s process in action is “A Vocabulary for Sharon Belle Mattlin.” Here’s a clip of a live performance featuring an all-star cast of readers, including Susan Musgrave, George MacBeth, Sean O’Huigin, and the unmatchable bpNichol. |
09:46 |
Audio from “A Vocabulary for Sharon Belle Mattlin” by Jackson Mac Low; performed by Susan Musgrave, George MacBeth, Sean O’Huigin, bpNichol, and Jackson Mac Low, 1974. |
Share name, nation share name, nation share name, nation share name, belly Battle, battle Bay, west Marsh, marble Linen, melon, melon, noble, bitter liberal meat bite, bite meat. Tell them tell us anymore. Tell them Stare, stare. Helen, stare. Tell stare. Stare hen. Be lamb eel. Tell, tell them. Tell them laws tell them rain eel brain reliable metal la, reliable trash, reliable trash, reliable trash, stellar trash, reliable trash, termination. See, stellar trash. She Athens, taste me, taste me. |
10:55 |
Chelsea Miya: |
As our live audience can see projected above us, here is the “score” for the performance. The page, as I’ll describe it for our listeners, is a jumble of words: some written in tiny, cramped lettering, others larger, some angled in different directions or flipped upside down. Each word is a variation or riff on the name of the person the poem is dedicated to: Sharon Belle Mattlin. Configurations of letters from her name morph into “elation,” “emanation,” “mint,” “share,” “shame,” and so on. The performers were free to interpret, explore, and respond to these freewheeling scores in the moment of performance, but always within the bounds of agreed-upon rules. |
11:42 |
Mike O’Driscoll: |
It’s a brilliant field of text in which what Jackson has done is write in, by hand, all of the words derived from the name of the dedicatee of the piece. The performers can then move across that page in ways that they are inclined to do, whether they are articulating words or singing or, in the case of the instrumentalists you could hear playing the flute music, transposing the letters to particular notes that Jackson has determined for them in advance. And so, what you’re getting, in that case, is, again, quite a rule-bound production of the text and its performance, but also that opportunity for the performers themselves to move across and through that work in ways that they intuit and that they conduct in response to their fellow performers. |
12:46 |
Chelsea Miya: |
Algorithmic processes are increasingly reshaping our world. So, we asked Mike what Mac Low’s work can teach us about the role of human decision-makers in our data-driven society. |
13:00 |
Mike O’Driscoll: |
Jackson works deliberately at the limit between “chance” and “choice,” between “procedure” and “intention.” He does so in part to trouble that boundary, to disturb or even deconstruct the boundary between the machine and the human, between the automatic and the agential. And he does so for very deliberate political reasons.
|
|
|
In part, I contend that Jackson draws attention to what I’ve been calling the ideology of machine agency: that notion that machines, that algorithms, that computers are somehow themselves operative, are somehow themselves agential.
|
|
|
But this is, in many ways, a kind of illusion. The presumption of machine agency is itself ideological, is itself something of which we should beware. |
14:02 |
Nicholas Beauchesne: |
Mike O’Driscoll is editing The Complete Stein Poems, a new collection of Jackson Mac Low’s work that will feature over 100 never-before-published poems. The new volume, from MIT Press, will hit the shelves in Fall 2025. |
14:17 |
Audio from “Tape Mark I” by Nanni Balestrini [Voiceprint episode “Printing and Poetry in the Computer Era,” 20 May 1981]; performed by Roman Onufrijchuk. |
Aeons deep in the ice. I paint all time in a whirl bang. The sludge has cracked aeons deep in the ice. I see gelled time in a whirl. The sludge has cracked all green in the leaves. I smell dark pools in the trees crash. The moon has fled all white in the buds. I flash snow peaks in the spring bang, the sun has fogged |
14:52 |
Chelsea Miya: |
Mac Low’s computer poems continue to be performed and encoded in new ways. Next, we’ll hear from Kevin William Davis, a contemporary composer and cellist based at the University of Virginia. Davis is a big fan of Jackson Mac Low, and he was particularly captivated by the computer poems. |
15:12 |
Kevin Davis: |
Yeah, poetry is actually a really big inspiration of mine. I mean, to me, I can read orchestral scores, I can kind of like see them and imagine them in the way that one might sit with a book of poetry: maybe sound some of it out, as you would a score on the piano, maybe actually read some of the poetry out loud. |
15:34 |
Chelsea Miya: |
Davis’s musicology students didn’t at first share his enthusiasm for poetry, and they were kind of baffled when he brought a book of poems to practice. But when they started scoring Mac Low’s computer poems, working line by line to transform the words into sounds, something clicked. |
15:51 |
Kevin Davis: |
As a music teacher, I see people struggle with notation constantly. It’s a very difficult thing to turn symbols into movements, in time. And when they were doing these, this Mac Low stuff, it was effortless. That directly, I think, inspired my thinking about, “OK, what if I then turned speech back into music?” Can I get these . . . can I get these uh [laughs] percussionists to execute rhythms that are more complex than they could with actual musical notation? |
16:29 |
Nicholas Beauchesne: |
Mac Low adopted methods not only from computing but also from music theory. He studied with composer John Cage, and sound, as we heard, was integral to the performance of his work. The Fluxus movement itself spanned multiple countries and multiple fields of practice—not just poetry, but also sculpture, dance, and music.
|
|
|
So, when Davis and his students decided to remake Mac Low’s PFR-3 (Programmable Film Reader) poems in a different genre, creating music from the printed words, it was a very Fluxus thing to do.
|
|
|
Instead of transcribing the words into notes, they created a series of sonic doodles. The new, re-created score looks on the page like a series of loops and squiggles, each shape corresponding to lines from the poem. |
17:15 |
Kevin Davis: |
My concept of this was the transformation of elements of the poem into movement, which then would result in sound. And so, literally, the drummers are tracing out the letters of the poem on the surface of their instruments. And so just different ways, some of them almost silly, just different ways of transforming this movement into sound in that process. Yeah, I spent a lot of time with the words, like saying the words. The four poems that were in the collection that I have were each very different. They were very much like movements of a musical work.
|
|
|
Are we allowed to pause for a second? I think this would be an easier discussion to have with the book, which I was like, I should have grabbed that book.
|
|
|
Hold on just a second. I know it’s around here somewhere. |
18:06 |
Chelsea Miya: |
So behind me are stills from the interview that I did with Kevin over Zoom. And at this point in the interview, Davis left the frame and rummaged around in the background. |
18:18 |
Kevin Davis: |
Oh, here it is. |
18:20 |
Chelsea Miya: |
And he pulled out a copy of Jackson Mac Low’s collected works, Thing of Beauty (2008). The pages are scribbled with notes for his performance, just like he would do for a score. Davis’s favourite poem, the one with a lot of annotations, is “From ‘David.’” He confesses he was more than a little nervous about performing the speaking parts. But for this particular poem, he felt it was important to read the actual text. |
18:48 |
Kevin Davis: |
I’m so much more comfortable playing a musical instrument than speaking, and especially speaking as performance. There are things you find in the experience of reading one of those kinds of texts over and over. It seems like, in a lot of ways, it’s more about language itself than about any kind of emotional idea he’s trying to get across. It’s a kind of anti-narrative, really.
|
|
|
Like I said before, in the reading I tried to strike a tone. The funniness is just being, like, kind of pummelled by this absurdity of, you know, just these different transformations of this very simple idea of, like, David asking what happened. [laughs] |
19:26 |
[Audio from “From ‘David,’” composed by Kevin Davis, from Three PFR-3 Poems by Jackson Mac Low for Percussion Quartet and Speaker, 2017; performed by the UVA Percussion Quartet.] |
Where did David ask what happened? How did David ask? Where did David happen to have asked me? Asking what had been, happened. David asks, had anything happened when David asked who was there? When David asked, how did David ask what happened, what had been happening when David was asking what had been happening, what was happening when David was asking happened? How had David been asked what had happened? When did David ask what had happened? Whom— |
20:08 |
Kevin Davis: |
[Live reading from the interview] It’s from David. David asked whether anything had been happening. Whom did David ask? What happened?
|
|
|
Well, it’s like I messed up a couple times. I really, when I did it, especially in the recording and performance, I had to practise some to be ready. It’s not a tongue twister exactly, but it almost gets in that territory. There’s just so much repetition, it can get a little difficult. This one more than any of them is really a lot like reading music. Even the most notated classical piece involves improvisation on the part of the performer. It may be just in small ways. |
20:44 |
Kevin Davis: |
And it made me think about that. This feels like the kind of improvising you do when you play Mozart or Bach or something, and then you kind of, like, put little ends on phrases that are you. But in the moment, if you know it well enough, you’re able to play with it. You’ll do this ending this way this time.
|
|
|
And what I love about this one is that some of the lines have question marks and some don’t. And so you can play around with this thing that’s often unconscious that we do, where we indicate a question through raising the pitch. |
21:23 |
Nicholas Beauchesne: |
Davis’s reading of Mac Low’s computer scores was, in part, inspired by his experiences growing up in Appalachia. One of his first experiences with the live performance of music and voice was at his Baptist church. When he read Mac Low’s poems, he imagined the relationship between instruments and the voice, the way the spoken text echoes the sounds, as a kind of congregation. |
21:47 |
Kevin Davis: |
We did these things called responsive readings. Have you ever heard of these? So there’ll be whatever text, or sometimes Bible verses, and the pastor will read a line, and then the congregation reads the words in bold, and you go back and forth. And there are all these heterophonic artifacts of, like, people sort of speaking together. I found them compellingly odd. It’s such a different way of interacting than singing.
|
|
|
Me, as a little kid, I thought it was really interesting. Well, it’s just this sound of like 200 people’s voices of all ages kind of like having this resonance together. But like it’s all soft on the edges because of the different ways that people are speaking. And whenever they hit like a “tee” then it’s like “tuh-tuh-tuh.” Right. It’s kind of like dancing around the room, whereas the vowels will all be kind of like these kinds of flowing singing things, you know, like sounds. |
22:51 |
Chelsea Miya: |
Davis doesn’t just perform computer poems. He also, on occasion, helps write computer programs. Interactive sonic events, people sounding together, have always intrigued him. After reflecting on the parallel practices of church congregations and Fluxus artists, he got to thinking: could these social dynamics of sonic performance be captured and re-created computationally? |
23:20 |
Audio from “Elegia” from On Remembrance, 2020; composed by Kevin William Davis using the Murmurator software in collaboration with Eli Stine. |
I worked with a friend, Eli Stine, who’s a fantastic programmer. We came up with something that’s a flocking algorithm, a bird-flocking algorithm. Fifty little particles of sound, and then they just kind of flock around. You just use that flocking as kind of like a starting point, an agent of kind of chaos to spread things out, and then you can stop them, freeze them. |
23:56 |
Chelsea Miya: |
Have you ever seen flocks of starlings? They move together in this hypnotic way, dancing across the sky, almost like jellyfish or giant misshapen bubbles, stretching and contracting. That behaviour is called murmuration. And that’s what Davis and his partner dubbed the software: the Murmurator.
|
|
|
It’s a tool for creating interactive, multi-channel sound installations. In developing the software, they experimented with increasingly elaborate speaker set-ups, bigger “flocks” so to speak: 50 speakers, then 100, in various configurations.
|
|
|
Once you execute the program and the flock takes flight, the particles of sound will move, seemingly independently. The human user, however, is working behind the scenes, “conducting” the performance as it happens by adjusting the settings and creating different flocking patterns. |
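The Murmurator itself is a multi-channel sound environment, and its code is not reproduced here. But the flocking idea at its core, simple local rules producing collective motion, can be sketched in a few lines of Python. Everything below (the class name, the single "cohesion" rule, the parameter values) is an illustrative assumption, not Davis and Stine's implementation.

```python
import random

class Agent:
    """One 'particle of sound': a position and a velocity in 2-D space."""
    def __init__(self, rng):
        self.x, self.y = rng.uniform(0, 1), rng.uniform(0, 1)
        self.vx, self.vy = rng.uniform(-0.01, 0.01), rng.uniform(-0.01, 0.01)

def step(flock, cohesion=0.01, damping=0.95):
    """One update of a bare-bones flocking rule: each agent drifts a
    little toward the centre of the flock, then moves. Full boids-style
    flocking adds separation and alignment terms; this is only the
    skeleton of the idea."""
    cx = sum(a.x for a in flock) / len(flock)
    cy = sum(a.y for a in flock) / len(flock)
    for a in flock:
        a.vx = (a.vx + cohesion * (cx - a.x)) * damping
        a.vy = (a.vy + cohesion * (cy - a.y)) * damping
        a.x += a.vx
        a.y += a.vy

rng = random.Random(50)
flock = [Agent(rng) for _ in range(50)]  # fifty particles, as in the piece
for _ in range(100):
    step(flock)
cx = sum(a.x for a in flock) / 50
cy = sum(a.y for a in flock) / 50
print(f"flock centre after 100 steps: ({cx:.3f}, {cy:.3f})")
```

In a sound installation, each agent's position might then be mapped to parameters like panning, pitch, or loudness across the speaker array, with the performer "conducting" by changing the rule weights while the flock runs.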
24:57 |
Nicholas Beauchesne: |
There are echoes of these sonic dances in the Jackson Mac Low performances we’ve been hearing. Lately, Davis has been thinking more and more about human-computer interaction and its implications for art and creativity. He’s particularly fascinated by watching computers play games. |
25:16 |
Kevin Davis: |
I think a lot about chess and how, you know, people were at first very disturbed that no human could beat a computer at chess anymore. But there’s been this evolution of chess playing computers, especially through machine learning, where they’re starting to come up with chess ideas that are coming from an alien planet or something. It’s not things that anybody would have thought of.
|
|
|
I would wind up watching these games on YouTube that were, like, computers playing computers. And first of all, that’s existentially weird to watch, right? But there’s a strange, alien kind of beauty that comes out of these games.
|
|
|
[START MUSIC]
|
|
|
So what has happened is that now those ideas have reintroduced all kinds of, like, openings that people maybe had forgotten. There are these ways that technologies can inspire creativity and actually give people ideas, solutions to artistic or creative problems that they hadn’t considered. Maybe you find a part of yourself that you were not able to access. |
26:43 |
Audio from “Tape Mark I” by Nanni Balestrini [Voiceprint episode “Printing and Poetry in the Computer Era,” 20 May 1981]; performed by Roman Onufrijchuk. |
The landscape of your clay mitigates me coldly by your recognizable shape. I am wronged the perspective of your frog feeds me dimly by your wet love. I am raked. |
26:59 |
Chelsea Miya: |
Our last guest is choreographer and performer Kate Sicchio, Associate Professor of Dance and Media Technology at Virginia Commonwealth University. Sicchio explores the interface between choreography and technology through wearable tech, live coding, and real-time systems (“About”). We asked her how she made the leap from dancing in her own human body to dancing virtually with technology. |
27:27 |
Kate Sicchio: |
Way back when I was a high schooler, I had this internship. It was the nineties, the dot-com boom, so I worked at what was then a web start-up. It’s so different than what web start-ups are now. [Laughs] But basically, I had this internship where I had taught myself some HTML to make my own GeoCities page. And so they’d give me giant Photoshop files and I would code them into HTML. So that was like my after-school job. And then I also was a dancer, so I would go from my after-school job to dance class and, um, did a lot of ballet and modern. And then went to do a BFA (Bachelor of Fine Arts) in dance and was like, I don’t, I’m not interested in this technology thing, whatever. I’m just gonna be a dancer. And then about halfway through my undergraduate degree, I got injured and I had a bunch of knee surgeries. |
28:20 |
Kate Sicchio: |
I still have knee problems. My knee is really swollen right now as we speak. Um but I had to take six months off from dancing. So I went to my school’s multimedia department. I was like, I know HTML. Do you have any classes I can take? And they were like, take anything. [Laughs] So I started doing actually a lot of video work at the time and then these other sorts of different interactive classes and then when I was well enough to dance, someone kind of mentioned kind of offhand to me like, oh, well, why don’t you combine the dance courses and the multimedia courses? Why don’t these two things come together? And that was my epiphany moment. Like oh yeah, these things could come together. I really started, yeah, working a lot with um in particular video projections and making them interactive in real time. From there I went to the UK to do a master’s degree in digital performance. I kind of kept going on that trajectory and now I’m still doing it like 20 years later. |
29:30 |
Chelsea Miya: |
Can you describe some of the collaborations that you’ve done with robots and the things that are exciting but also challenging about working with robot collaborators and duetting with them in a sense? |
29:43 |
Kate Sicchio: |
I work a lot with, um, Dr. Patrick Martin, a roboticist who’s now at the University of Richmond. We created our first piece together, performed in 2022, called Amelia and the Machine.
|
|
|
[Audio starts playing. From “Amelia and the Machine,” 2022; danced by Amelia Virtue; robotics by Patrick Martin, Charles Dietzel, Alicia Olivo; music by Melody Loveless and Kate Sicchio.]
|
|
|
So, that was a duet for a small manipulator robot, which is basically a Roomba with an arm. [Laughs] And it’s not very tall, it’s under two feet tall. Um, and then Amelia is the dancer, Amelia Virtue. So, the aim of that piece was just, like, can we do this, can we put a robot and a person on stage together, and what will that mean?
|
|
|
So, we’re really interested in the idea of human-robot teams. And a big part of that for me is I want them to improvise together. How can they like inform each other’s decision-making about movement together? We actually created this machine learning algorithm where Amelia could teach the robot a new gesture on stage by manipulating its arm. |
30:48 |
Kate Sicchio: |
So she literally like grabs the arm, there’s sensors on the motors that can see where she’s put it. She only has to do that three times and then it’s learned it, it stored it, it can call it back later in the performance. So that was our small moment of improv in that piece.
|
|
|
But actually to do that became its own engineering accomplishment, and it actually became, like, a new machine learning algorithm, which we call the “dancing from demonstration” algorithm. So we had this small discovery of an algorithm in the process of making this piece. |
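The team's "dancing from demonstration" algorithm has engineering details the episode doesn't spell out; the Python toy below only sketches the general shape Sicchio describes: record the motor readings from three guided demonstrations, average them into a stored gesture, and replay it later. The class, data format, and averaging step are all illustrative assumptions, not the published method.

```python
from statistics import mean

class GestureMemory:
    """Learning a gesture from demonstration, in miniature: each
    demonstration is a sequence of joint-angle frames read from the
    robot's motors while the dancer moves the arm. After three
    demonstrations the gesture is 'learned' by averaging them, stored,
    and available to recall later in the performance."""
    def __init__(self, demos_needed=3):
        self.demos_needed = demos_needed
        self.demos = []
        self.learned = None

    def record(self, joint_trajectory):
        self.demos.append(joint_trajectory)
        if len(self.demos) == self.demos_needed:
            n_frames = len(self.demos[0])
            n_joints = len(self.demos[0][0])
            # average the demonstrations frame by frame, joint by joint
            self.learned = [
                tuple(mean(d[f][j] for d in self.demos)
                      for j in range(n_joints))
                for f in range(n_frames)
            ]

    def recall(self):
        return self.learned  # the trajectory the robot replays on stage

mem = GestureMemory()
for demo in [
    [(0.0, 0.1), (0.5, 0.4)],  # demonstration 1: two frames, two joints
    [(0.1, 0.1), (0.6, 0.5)],  # demonstration 2
    [(0.2, 0.0), (0.4, 0.6)],  # demonstration 3
]:
    mem.record(demo)
print(mem.recall())
```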
31:26 |
Chelsea Miya: |
The role of empathy seems to come up in your design process in terms of imagining how these robots and robot bodies would move differently and perceive the world differently and us differently. |
31:43 |
Kate Sicchio: |
Yeah, I think that’s part of it. Well, I guess you just realize so quickly that they’re not human. [Laughs] And like it’s a thing that comes up a lot. I’m asked like, why don’t you put costumes on your robots? And I’m like, they’re not people, they shouldn’t be seen as people. Let’s not like make them cute little characters. [Laughs]
|
|
|
Even like the moving of the robot arm, we call it an arm, but it’s nothing like our arm, it doesn’t have the same joints or the same movement pathways. So, even when you’re choreographing the robot arm, you’re just moving five motors. And you become very aware of that very quickly. Like, it’s not an arm. |
32:24 |
Nicholas Beauchesne: |
Kate explains that audiences connect to the robot performers in surprising ways. Often, the people who come to her shows will respond in emotional, affective ways to the machines on stage. |
32:36 |
Kate Sicchio: |
So, I think because Amelia and the Machine starts with her physically touching the robot, it really sets up this, like, very intimate relationship with the robot. And she’s very careful. She’s, like, very intentional, right, in teaching it the gesture. She wants to get it just right. So here’s this person touching and teaching this robot. And it does become this, like, yeah, they clearly have established this relationship together, Amelia and the robot. And people have read this in all kinds of ways. So, I have a young son. So, um, he was a toddler when that piece came out. So everyone was like, this is about you and your son, because the robot’s the size of a toddler. And I was like, no, it’s not! [Laughs] But, um, yeah, they just saw a woman and this toddler-sized machine and this intimate thing of teaching a toddler-sized thing. So it automatically read like that to a lot of people. And then also this, um, yeah, this clear thing where they’re dancing together but, like, often not in unison, that sets up this relationship that they’re different but working together, um, that people really read into as well, yeah. |
33:49 |
Chelsea Miya: |
How does being a choreographer give you different insights into technology and code that might not occur to a traditional coder? |
33:57 |
Kate Sicchio: |
Yeah, there are a few ways, I think. One is just that dancers are expert movers. So I try to teach this to my dance students all the time: you’re an expert mover, and people need you to share your insights on the body. So there are a lot of, like, systems being made now. Even our phones, right? Like, we carry around a computer on our body all the time. We have all these gestures that we do to make it work. But these aren’t necessarily being come up with by people who are very into using their body, right? They might be computer scientists, or, if you’re lucky, a UX designer who’s interested in the body. But usually they’re a UX designer who’s more like, oh, well, if it takes more than three clicks, people get bored. [Laughs] Right? But our interfaces are becoming more and more about the body. |
34:55 |
Kate Sicchio: |
And so there’s this place where dancers’ knowledge really could feed into how we design our technologies. Also, how we understand them. So um I’m really interested in things like how gestures hold meaning or even like an emotion, right?
|
|
|
So like if I’m like doing something really heavy and sudden it’s gonna look like a punch, right? So like if I’m gonna design like a gesture on my phone that’s heavy and sudden it’s like I’m angry. That has a whole yeah, design approach to it, right? Or I love to pick on the gesture of Tinder, right?
|
|
|
So you’re constantly flicking just like light and indirect and kind of careless. When we say, oh yeah, yeah, I’m swiping. There is a carelessness to that. This isn’t how you’re gonna find a spouse [laughs]. Because you’re just throwing people away. [Laughs]
|
|
|
So, yeah, I think about dancers as being able to bring that knowledge to tech and design. |
35:56 |
Chelsea Miya: |
I was curious, too, about whether your work changes the way you observe and perceive technology in the world. Do you ever, like, see machines or tech and be like, wow, that’s a beautiful dance? |
36:06 |
Kate Sicchio: |
Yeah. Yeah. Actually, I do all the time. [Laughs] Yeah, I’m trying to think of something I’ve seen recently where I was like, oh, I love this. But yeah, I have. I see this, like, machine choreography everywhere.
|
|
|
[Audio of a crane loading at a construction site.]
|
|
|
Oh, I saw some really beautiful—they’re always building. Oh, I guess in every city now. But in Richmond we have a lot of building going on. So these cranes were moving, um, and sort of like shifting. They were like counterpoint cranes on the skyline. [Laughs] And I was like, oh look at that dance. [Laughs] Yeah. |
36:39 |
Chelsea Miya: |
There is something hypnotic about technology and the way that it moves and this sort of kinetic aspect. |
36:46 |
Kate Sicchio: |
Yeah. I think that’s like a draw as a choreographer for me for sure. Because you, you say robot and everyone assumes these kinds of like sudden jerky movements, but they’re so smooth and they do have dynamics and they do have potential for like moving in different ways. That’s what gets exciting as a choreographer. It’s not like just sequencing. You can make a range of dynamics and all the stuff that gets exciting as a mover. Yeah. |
37:18 |
Nicholas Beauchesne: |
Kate performs live-coded dances where the code itself is projected in real time on the walls, the ceiling, even the performers’ bodies. She’s sometimes seated at the side of the stage at a desk with her laptop. Yet even when she decentres herself, her embodied interactions with the computer program, her finger strikes on the keys, even the sips of water she takes, are a crucial extension of the dance in this nexus of performer, performance, and audience, of process and product. We again think of the Fluxus movement. We asked her about that movement’s enduring legacy today. |
37:57 |
Kate Sicchio: |
Yeah. And I was also talking about Fluxus prompts the other day, in terms of, like, people talking about AI prompts, like, oh, for Midjourney or whatever, giving it a prompt. And I was like, is this just a new way of doing Fluxus art? Like, that’s all they did. They just wrote prompts, right? [Laughs]
|
|
|
Are we all just Fluxus artists now? Yeah [Laughs]. |
38:19 |
Nicholas Beauchesne: |
Whether used for poetry, music, or dance, or any other creative medium, algorithms have such generative potential. Algorithmic art is so peculiar in that it is seemingly chaotic, random, and illogical, yet intensely rule-bound and orderly.
|
|
|
We would like to leave the last word to another computer artist, the Italian poet and programmer Nanni Balestrini. The following poem, entitled “Tape Mark I,” is a computer-generated remix of three source texts: Michihiko Hachiya’s Hiroshima Diary, Paul Goldwin’s The Mystery of the Elevator, and the philosophical treatise attributed to the sage Lao Tzu, the Tao Te Ching (Balestrini 55). The original “experiment” was performed on an IBM 7070 computer at the Electronic Centre of the Lombard Provinces Savings Bank in Milan in October 1961 (55). The reader is Voiceprint producer Roman Onufrijchuk, who also read the previous two interludes of computer poetry. Onufrijchuk has an admirable knack for mimicking the monotone, mechanical voice of an imagined computer author and reader. |
39:28 |
Audio from “Tape Mark I” by Nanni Balestrini [Voiceprint episode “Printing and Poetry in the Computer Era,” 20 May 1981]; performed by Roman Onufrijchuk. |
While the multitude of things comes into being in the blinding fireball, they all returned to their roots. They expand rapidly until he moved his fingers slowly when it reached the stratosphere and lay motionless without speaking 30 times brighter than the sun endeavouring to grasp. I envisaged their return until he moved his fingers slowly in the blinding fireball, they all returned to their roots, hair between lips and 30 times brighter than the sun lay motionless. Without speaking, they expand rapidly. Endeavouring to grasp the summit. |
40:08 |
SpokenWeb Theme Song |
Can you hear me? |
40:11 |
Maia Harris: |
The SpokenWeb podcast is a monthly podcast produced by the SpokenWeb team as part of distributing the audio collected from and created using Canadian literary archival recordings found at universities across Canada. Our producers this month are Chelsea Miya, a postdoctoral fellow at McMaster University’s Sherman Centre for Digital Scholarship, and Nicholas Beauchesne, a musician and instructor at the University of Alberta, who also engineered this episode’s audio. The score was created by Nix Nihil through remixing samples from Kevin William Davis and Voiceprint and adding synthesizers and sound effects. Additional score sampled from performances by Davis and Kate Sicchio.
|
|
|
Nick Beauchesne engineered the audio for this episode and for the 2024 SpokenWeb symposium.
|
|
|
Symposium participants are our live studio audience. |
41:08 |
Live Audience |
[Cheers and applause] |
41:11 |
Maia Harris: |
Our usual hosts are Hannah McGregor and Katherine McLeod. Our supervising producer is me, Maia Harris. Our sound designer is James Healy, and our transcriptionist is Yara Ajeeb.
|
|
|
To find out more about SpokenWeb, visit spokenweb.ca. Subscribe to the SpokenWeb podcast on Apple Podcasts, Spotify, or wherever you may listen. If you love us, let us know. Rate us and leave a comment on Apple Podcasts, or say hi on our social media at SpokenWeb Canada.
|
|
|
Stay tuned to your podcast feed later this month for ShortCuts, with the amazing Katherine McLeod: short stories about how literature sounds.
|
|
|
You were a wonderful audience. |
41:52 |
Live Audience |
[Cheers and applause] |