This interview leapt to mind when I read this post
on controlling randomness in AI output with “temperature”:
Of course, you can overdo this and get gibberish,
and I think it’s really interesting that Gibson calls out exquisite corpse
as sometimes producing the same result.
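(For the unfamiliar: temperature is just a divisor applied to the model’s output scores before sampling. Raise it and the probability distribution flattens toward uniform noise, which is where the gibberish comes from. Here’s a minimal Python sketch of the idea; the function name and numbers are my own, not anything from the linked post.)

```python
import numpy as np

rng = np.random.default_rng()

def sample_with_temperature(logits, temperature=1.0):
    """Sample one token index from logits rescaled by temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # softmax, shifted for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Near 0: almost always the top choice. Very high: close to uniform, the gibberish regime.
for t in (0.1, 1.0, 10.0):
    picks = [sample_with_temperature([2.0, 1.0, 0.1], t) for _ in range(5)]
    print(t, picks)
```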
I also think of much dumber randomness engines,
like Markov chain bots and pre-OpenAI neural networks.
They were fun tools that were leagues away from even GPT-2,
yet sometimes produced output that could be really great.
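(In case the term is unfamiliar: a word-level Markov chain bot is barely more than a lookup table recording which words follow which, sampled at random. A toy sketch of my own, not any particular bot:)

```python
import random
from collections import defaultdict

def build_chain(text):
    """Record, for each word, every word that follows it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def babble(chain, start, length=12):
    """Walk the chain, choosing each next word uniformly at random."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat"
print(babble(build_chain(corpus), "the"))
```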
It strikes me how derogatory “AI generated” could sound back then.
Part of the humor was finding meaning in what was obviously nonsense.
I wonder if anyone will even remember this kind of thing
now that vastly more capable large language models have redefined what “AI” means.
Some of my favorites back then were from AI Weirdness, with entries like this recipe:
Anyway, what’s going on here?
Why does randomness have this effect on the quality of creative output?
According to some theories,
the difference between what we expect and what actually happens
is a defining feature of our psychology.
One, called perceptual control theory,
goes so far as to claim it is a fundamental feature of life itself:
In other words, we act to minimize surprise.
Naturally, this means we pay a lot of attention to surprise.
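(To make that concrete, here’s a caricature of a control loop in Python: the agent acts in proportion to the error between its perception and a reference value, so the “surprise” shrinks toward zero. This is my own simplified sketch, not PCT’s formal model.)

```python
def control_loop(perceived, reference, gain=0.5, steps=10):
    """Act in proportion to the error until perception matches the reference."""
    for step in range(steps):
        error = reference - perceived      # the "surprise"
        if abs(error) < 0.01:
            break
        perceived += gain * error          # action nudges the perceived state
        print(f"step {step}: perceived={perceived:.3f}, surprise was {error:.3f}")
    return perceived

control_loop(perceived=0.0, reference=1.0)
```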
And that’s why, I suppose, creativity benefits from randomness.
If an author can surprise their reader, that reader becomes engaged,
trying to work out what is so new and different about the author’s writing.
The best way to close this post is with a meta example of this.
We are so divorced from our own relationship to surprise
that being described in terms of that relationship is itself surprising.
I came across perceptual control theory
via Slate Star Codex,
whose author also wrote this: