On the last day of the last month of the last year, I thought I'd play some word games, and maybe ramble about some offensively hot-button topics.
_
First, the games!
What if we could cut the X out of getting Y?
What if we could cut the doctor out of getting medical treatment?
What if we could cut the lawyer out of getting legal advice?
What if we could cut the mechanic out of getting automotive repairs?
What if we could cut the pianist out of getting piano music?
What if we could cut the portrait artist out of getting a portrait?
What if we could cut the interested romantic partner out of getting laid?
What if we could cut the other person out of getting emotional connection?
What if we could cut the co-creator out of getting collaboration?
What if we could cut the you out of getting us?
_
One of the things that interests my slightly miswired brain is taking a formula and trading out the variables, over and over, to see how it changes things – to see where lines are crossed.
It's kind of like taking a piece of music and playing it on a dozen different instruments. (By which I mean taking a clip of notes and swapping out the instrument plug-in that I'm using in Bitwig, because I've already successfully cut the pianist out of getting piano music – HALLUCINOGENIC-ROCK-ORGAN-SYNTH-SPACE-ELECTRONICA.)
Sometimes, it becomes crap. Sometimes, what started as crap magically sounds decent. Sometimes, I wake up an unknown number of hours later with 27 layers of distortion and blood coming out of my ears.
Anyways, doing this with statements is kind of a fun thought experiment. At what point does something become questionable? Are there certain areas where the formula simply cannot be applied?
...Was the formula questionable to begin with, but we didn't notice because of the area to which it was applied?
Here's another one:
Work smarter, not harder.
Learn smarter, not harder.
Try smarter, not harder.
Play smarter, not harder.
Rejoice smarter, not harder.
Sing smarter, not harder.
Grieve smarter, not harder.
Despair smarter, not harder.
Rage smarter, not harder.
Delight smarter, not harder.
Woo smarter, not harder.
Inspire smarter, not harder.
Dream smarter, not harder.
Express smarter, not harder.
Feel catharsis smarter, not harder.
Give smarter, not harder.
Sacrifice smarter, not harder.
Create beauty smarter, not harder.
Empathize smarter, not harder.
Believe smarter, not harder.
Love your wife smarter, not harder.
If something works in one arena, why shouldn't it work in others?
I don't mean this as a rhetorical question: I mean we should really have an answer for that. If a particular statement is absolutely applicable to X, but applying it to Y feels wrong, from whence doth this wrongness flow?
Are some things worth doing the hard way? What things? Why?
We frequently silo things without thinking about the dividing boundaries – sometimes because those boundaries are not protecting the thing within them, but actually protecting the logic of the formula from being exposed to a subject that would call it into question.
It's totally okay to X in the context of [thing I don't care about], but absolutely inappropriate to X in [thing I actually care about] because, well, ...I care about it.
There is not a perfect transferability between all arenas – some things are just different.
At the same time, a great and uncomfortable truth of the universe is that what we do trains us. We may (choose to) believe that two areas of application are completely, clearly distinct from one another, but does our brain?
If I spend years never saying no to a cheeseburger or a peek at a Playboy magazine, what makes me think that it will be easier to say no to a briefcase full of untraceable cash while I'm an elected official, or to a sordid night of infidelious passion with a dozen swimsuit models and a duffel bag full of peyote while I'm married? Just because the context is different? Just because the stakes are higher?
Arenas may be distinct, but we use the same mind across all of them. What we teach it in one place, it will know in another.
_
I think of this more and more as the hurricane-in-a-very-expensive-teacup that is generative AI rages louder and weirder.
As best I can tell, the dialogue so far has gone thusly:
A: This is lazy and you should be ashamed.
B: This is the future and you're going to be left in the dust.
A: You're a hack.
B: You're an elitist snob.
A: You didn't earn anything.
B: You're just afraid of being unseated from your ivory tower.
A: What you make will never be as good as what I make.
B: Yes it will.
A: Audiences will never accept it.
B: They just did.
A: Audiences are stupid.
B: What you made was secretly never good to begin with.
A: I've gathered an angry mob of inquisitors.
B: Stop persecuting my disability, you ableist.
_
Meanwhile, the corporate conversation looks something like this:
(
C: Look, it's a Bosnian war criminal but Pixar style!
D: Here's 2 billion dollars.
)
_
So far, we mostly just seem to be turning the volume up. Normally, I'm all for hooking up 27 amps and maxing out the drive on all of them, but I think in this instance, it might be missing the point a little bit.
_
What we do trains us.
_
I got prompted the other day. It was kind of weird.
Prompt engineering is, after all, the art of saying whatever you have to in order to get the response that you want.
This is okay, because the thing to which you are talking is an unthinking, unfeeling machine.
...It just talks like it thinks and feels.
I worry that practicing the art of saying whatever you have to in order to get the response that you want, regularly, on nonhuman targets that exhibit humanlike behavior, might just – like the human-silhouette-shaped targets on the Marine boot camp firing range – make us more likely to do what we are doing to actual humans.
After all, what we do trains us.
As someone who has been that human target, I can say it's a very strange and not entirely delightful experience.
Having spent 15+ years in the videogame industry, I have a lot of very techy friends with very techy interests. When a new piece of technology comes out, they dive into it with wild abandon. Generative AI in all of its various iterations – music, art, writing – has been no exception.
I've mostly smiled and nodded. Tech people play with tech toys. I sculpt tiny dinosaurs with blunderbusses and googly eyes for tabletop games that I will never finish designing. Unto each their own.
One of these friends recently invited me to a creative collaboration – a just-for-fun comic book project that we would make together purely for our own enjoyment.
She showed me a character, gave a detailed description of the setting, genre, and tone, and then asked me if I wanted to write the story.
It could be any story that I could come up with, she said, as long as it was about that character, in that world, told in that genre style, and maintaining that tone.
Anything you can think of:
- About this character.
- In this world.
- In this genre style.
- In this tone.
It was… an odd pitch. Everything that would normally make up high-level narrative design was already set in stone.
All that was left for me to do was write some scripts.
I eventually turned her down, simply because I did not think that I was capable of writing the genre and tone that she had decided on.
When I told another friend about it, they responded, "I think you just got AI prompted."
I'm not sure that that statement is wrong.
All of the creative vision and goals were already set.
The only thing left for me to do was create some scripts – generate the script assets.
_
What was pitched was not an act of co-creation.
It might have actually been a pretty good job listing for a freelance gig: here's exactly what I want you to do, and here's exactly what I'll pay you. Look at how much freedom you have within these very firm parameters! Surprise me!
It was a set of marching orders for a hireling – or an automated tool – to whom part of a project was being outsourced.
It's not really fun to be the empty machine that brings someone else's creative vision to life – not for free, not when the goal is collaboration.
_
I've never used ChatGPT.
I am concerned that extended, one-sided collaboration with an AI – an entirely obedient, submissive, obsequious partner – would affect my ability to collaborate with real human creators in exactly the same way that rampant consumption of pornography erodes the addict's ability to engage in actual romance with actual people.
The more large language model AIs mimic human behavior, the more I fear that this training is going to affect our treatment of other people.
I don't want to learn to say whatever I have to in order to get what I want.
I don't want to train myself that having an idea should be met with instantaneous product.
I don't want to train myself that there is nothing on the other side to consider and compromise with.
_
We talk about these dangers with sexy chatbots and virtual girlfriends, but not about our other interactions with AI.
The context makes us uneasy: The woman who leaves her husband for her AI boyfriend sounds crazy. The guy who won't go on a date with a real girl because he's got a virtual buxom goth soulmate waiting on his iPhone makes us concerned for the national birth rate.
But when we swap out the variables, what is actually different?
_
Someone who will always give me exactly what I want when I want it is not, long-term, good for me.
When I spend all day every day getting what I want from such a person, what will my interactions with anyone else be like?
These concerns stay with me regardless of the quality of work that the machine in question can churn out, or the ethicality of how it was created, or the environmental impact of its use... or the arena in which it is happening.
What we do trains us,
whether we want it or not.
