Andrei Vyktor Georgescu

When I hear the word ‘wisdom’, I bare my fangs!

Seedless Flowers — Artificial Intelligence and Creativity Fetishism

Marguerite by Andrei Vyktor Georgescu (2023)

We are staring down the barrel of the destruction of a spirituality which founded itself on improvisational grounds. People who haven’t gone through a conscious process of understanding the material nature of their existence are liable to freak out. Schwab saying “you will own nothing and be happy” isn’t even getting at how bad it will be. “You will be nothing and be happy.”

Karl Nord

Chapters

  1. Flowers Born Without Seeds
  2. Creativity Fetishism
  3. The Mind of a Genius
  4. Humility in the Light of the Computer
  5. The Ethical Dilemma
  6. The Curious Case of Getty
  7. Conclusion

1. Flowers Born Without Seeds

What is creativity and where does it come from?

Judging by many reactions to AI art, any attempt to frame creativity as a computable function is impossible, stupid and/or disgusting. Critics wax poetic about the untouchable grandeur of human creativity, and internet mobs are giddy to see people’s careers destroyed for dabbling with generative AI.

My favourite encapsulation of this position comes from Nick Cave, who described ChatGPT-generated lyrics as ‘a grotesque mockery of what it is to be human’.

Nick’s comment is in line with the Romantic conception of creativity; the 18th-century poet Edward Young was convinced that ‘learning is borrowed knowledge’, while ‘genius is knowledge innate, and quite our own’. The modern version accuses generative AI of stealing ideas, while claiming that human creativity comes from innate genius.

Edward was upset that writers in his day & age were obsessed with copying the old Greco-Roman masters, rather than letting their own expression guide their work, so he wrote a book called Conjectures on Original Composition, where he argued that there are originals and imitations—with original work being scarcer and more valuable.

I’m sympathetic to Edward’s idea that we shouldn’t blindly imitate the past or shun our own perspectives, but I’m less sympathetic to his claims about what creativity is and where it comes from. As a Romantic, Edward wasn’t too concerned with empirical appraisals of reality, but rather with expressing himself. He was fond of Latin quotes from Ovid like ‘natos sine semine flores’, meaning ‘flowers born without seeds’.

It’s not that hard to see why he’d think creative thoughts are flowers born without seeds. The mind’s products don’t come with receipts—it’s not as if ideas are pulled from an inner GitHub, replete with source code that can be inspected. Instead, thoughts just appear, in an endless stream, from an unknown source, like flowers materializing out of thin air—or thick pink brain-meat, to be more precise.

It seems sensible to make ambitious extrapolations from this intuition; perhaps this unknown source of thoughts is something so special that it is different than everything else that exists in nature—something that doesn’t ever die, and that isn’t made of stuff.

In ancient India, the school of Vedanta went even further and described the universe itself as resting on brahman, a kind of pure consciousness which is infinite and eternal. Brahman is the origin of all physical things, which are secondary and, frankly, kinda fake, since physical things are always changing and breaking.

Not everyone went quite that far, but it was generally believed that there’s an intelligent and immortal entity inside of everybody which isn’t affected by the apparent vicissitudes of the physical world, such as death. However, as early as the 6th century BC, prominent thinkers began throwing cold water on the idea. Ajita Kesakambali, for example, denied the satta opapatika—spontaneously born beings—and gave succinct refutations of lofty notions regarding the mind:

The human person consists of the four great elements. When he dies, the solidity returns to the body of earth, fluidity to the body of water, caloric to the body of heat, and viscosity to the body of air.

Ajita is helpful in taking us through a conscious process of understanding the material nature of existence—and creativity—because his materialism is about as simple as it gets. There’s no knowledge of the periodic table, randomized controlled trials or p-values, but rather, a basic mistrust of appearances that is necessary for intellectual integrity and exploration, especially when it comes to consciousness and creativity.

The Nick Caveites of the world understandably find expressive feelings to be more resonant than skeptical and materialist philosophy; feelings are the bread and butter of poets and musicians, and I’ll admit that the neurochemical cascade evoked by a great song tends to be more satisfying and resonant than, say, studying Navier–Stokes equations.

There aren’t many ballads about incompressible fluids on the radio—although WAP by Cardi B might have broken new ground in this regard. Feelings are just much more potent, as Emil Cioran can attest: ‘I do not struggle against the world, I struggle against a greater force, against my weariness of the world.’

The more that thoughts and images appear in tandem with intense feelings, the stronger the impression of spontaneous inner genius, especially if its creations are coupled with social status and financial success.

If only it were so simple! But as ol’ Karly Marx puts it in his inimitably torturous language: all science would be superfluous if the form of appearance of things directly coincided with their essence.

2. Creativity Fetishism

Adding a pinch of skepticism regarding the appearance of creativity means confronting the fetishism involved in the Romantic perspective. When Marx talked about fetishism, he meant the way in which people tend to attribute magical powers to objects (for example, praying to a statue for wealth). When talking about commodities, Marx noticed that objects are discussed as if they have value in themselves, rather than deriving it from the social and material conditions which give rise to them—in other words, commodities are being fetishized.

My favourite example of fetishism is the worship of gold, best exemplified in the story of Hatuey, a revolutionary Cuban folk hero.

Hatuey was a Taíno chief who escaped from Hispaniola (the island now shared by Haiti and the Dominican Republic) to Cuba with a few hundred followers to try and warn the locals about what they could expect from the incoming Spaniards.

He told the local Cubans that the Spaniards worship and adore gold as a god, and that the only solution was to perform a sacred dance in honor of this god and toss it into the river. That way, the Spaniards would finally stop killing and mutilating them.

In the case of the Spanish lust for gold, it’s not just that the object was seen to have a value, but rather acted as the standard of value itself. This kind of fetishism can drive otherwise sane people to act like silly dogs; the total gold output from 1900 – 1950 was about 35,000 metric tons, and the total amount of monetary gold held by banks and treasury departments in 1950 was also about… 35,000 metric tons.

Which means that all of the incredible effort involved in extracting gold out of the earth (over the course of five decades) was offset by the fact that the same amount was buried back underground, like a magical bone that radiated value.

Such fetishism still lives on in the criticism of ‘fiat currency’—flimsy paper and plastic sheets issued by central banks that have no ‘intrinsic value’, unlike the shiny magic thing buried underground.

You’ll have noticed, however, that fiat currency, despite having no ‘intrinsic value’, does a great job of performing the role of money. Dollars can be exchanged for mansions, fighter jets, and peanut butter (crunchy or smooth), despite the fact that dollars are numbers shifted around in a database.

How could something without any intrinsic value be exchanged for actually valuable things?

What sorcery is this?

Value rests in a social relationship, rather than any magic contained in the dollars themselves, though it’s predominantly a relationship based on violence. Here’s a simple and silly version that nonetheless would work: imagine that you wanted to establish a new currency out of drawings of pink dogs. Phoning your bank and asking them to cancel your debts in exchange for 5 PDD (pink dog drawings) wouldn’t get you very far. However, if you threatened to burn down their houses and hurt their families, they would start to listen—especially if you had an army behind you.

In a more realistic scenario, if China were to take over North America and insist that we pay our taxes in Yuan (or else they’ll beat us and throw us in prison), it wouldn’t matter if we believed that the Yuan was a real currency or not. Pain and the threat of injury is real enough to make us hustle and grind for Yuan, which would certainly make it seem as if the Yuan itself has some magical power.

Conquistadora by Andrei Vyktor Georgescu (2023)

Also, it’s not as if Hatuey and the other islanders didn’t like gold—Columbus wrote that they wore it as jewelry. But they had a different history and social arrangement that precluded them from worshipping gold the same way the Spaniards did, as evinced by King Ferdinand’s letter to the colonists of Hispaniola, where he wrote: Get gold, humanely if you can, but at all hazards get gold.

The fetishism of gold is not a flower without seed; precious metals in general have a bloody history. The genocidal policies towards rival tribes in the Old Testament were the result of trying to control copper production, and the word soldier itself comes from the Late Latin solidus, a type of coin.

Consider ancient Athens—by paying soldiers in coins and making it so that workers had to pay taxes in those coins, the bulk of the city-state’s productive labourers would be forced to provide a permanent supply of goods to the market. This not only allowed the establishment of a permanent naval fleet—it spurred the growth of military power that bloomed after every war, as slaves were captured and goodies were looted.

It was important that the Athenian state owned the mines in Laurion (some 40 miles away from Athens) in order to stop people from having access to the means of making their own coins and gaming the system. Working in the mines was notoriously crappy, involving ten-hour shifts around the clock in three-foot tunnels. The only reason people could be convinced to work there was by enslaving them, either through wars or debt peonage.

Spain was at war with the Moors for almost eight hundred years before they conquered Hispaniola—and much of that involved killing people for gold.

After witnessing frenzied murder in the pursuit of a shiny thing, it’s understandable to conclude that the shiny thing is really important, but this conclusion would obscure the fact that the interest in gold is also a historically situated phenomenon. Historians of metallurgy like Paul Craddock mention that there’s a noticeable lack of interest in gold in New Stone Age sites from a few thousand years ago. The interest sharpened, however, as class society began to ossify, since a scarce resource like gold made a great status symbol, and a perfect gift for those looking to ingratiate themselves with powerful people.

After coins were first minted in the 7th century BC, there wasn’t much of a distinction between economics and politics. Since there wasn’t any stable authority around, rich families looking for political power would stamp precious metal with the family’s personal seal and hand the tokens out to their followers. Aside from flaunting wealth, this would reinforce their social status as generous and sophisticated. Such coins would have been circulated not as we would use coins (i.e., representing an abstract unit of value) but as part of the traditional system of gift-exchange, which carried with it a rich density of sociopolitical meaning.

3. The Mind of a Genius

Let’s return, for a moment, to our beloved English poet Edward Young, and his idea that ‘learning is borrowed knowledge’, while ‘genius is knowledge innate, and quite our own’. If the fetishism of gold is a mistaken but understandable result of its social and historical circumstances, what’s contributing to Edward’s claim regarding innate genius?

Broadly speaking, this is a form of pre-Darwinian thinking.

A pernicious effect of apparently fully-formed content in the mind is that it makes its complexity seem irreducible. In order to write a poem, Edward’s mind spontaneously created complex thoughts fully formed ex nihilo—so why shouldn’t this be the case with the creation of the universe as a whole? Indeed, as Daniel Dennett points out, creationists are fond of asking us questions like, have you ever seen a painting without a painter?, or have you ever seen a building without a builder?

But ol’ Charles “Barnacle Enthusiast” Darwin came up with One Weird Trick that theologians HATED—evolution by natural selection.
As one of his critics put it:

In the theory with which we have to deal, Absolute Ignorance is the artificer; so that we may enunciate as the fundamental principle of the whole system, that, IN ORDER TO MAKE A PERFECT AND BEAUTIFUL MACHINE, IT IS NOT REQUISITE TO KNOW HOW TO MAKE IT.

This proposition will be found, on careful examination, to express, in condensed form, the essential purport of the Theory, and to express in a few words all Mr. Darwin's meaning; who, by a strange inversion of reasoning, seems to think Absolute Ignorance fully qualified to take the place of Absolute Wisdom in all of the achievements of creative skill.

Since thoughts are spontaneous and tend to lack receipts, some folks are convinced that the origin of thoughts is somewhere else—perhaps a voice from another dimension.

I once heard a medical doctor express the belief that the brain might be a receiver, like a radio, picking up consciousness beamed from the true source of ultimate complex conscious thought—a modern tech version of ancient Vedantic superstitions.

A starting point for the actual seeds behind the magical flower of creativity is Nietzsche’s description of the mind’s origin:

Consciousness is properly only a connecting network between man and man, — it is only as such that it has had to develop; the recluse and wild-beast species of men would not have needed it. […] The sign-inventing man is at the same time the man who is always more acutely self-conscious; it is only as a social animal that man has learned to become conscious of himself, — he is doing so still, and doing so more and more.

Even with this quick sketch, it becomes clear that our ability to think in symbols isn’t exactly a faculty that’s quite our own, but rather the process of eons of socialization with others.

Fleshing it out further, Dennett speculates that our internal monologues developed as a side effect of primates soliciting each other for help using basic vocalizations, presumably remarking on useful or interesting things.

Eventually these early ancestors of ours realized they could vocalize and provide the answers to themselves—the auditory signal gets processed by more parts of the brain, acting as a sort of malleable wire between disconnected parts that couldn’t process the problem in isolation.

This explains the otherwise confusing phenomenon of speaking to ourselves and saying surprising and insightful new things. It also explains why psychedelics can offer such a revolutionary way of looking at the world; many parts of the brain simply wouldn’t communicate without a nudge from an exogenous chemical.

This kind of cognitive autostimulation developed in the population via the Baldwin Effect—a way in which the ability to learn new behaviors can have an indirect but significant impact on genetic selection.

Dennett asks us to imagine a Good Trick—a behavioural talent that offers protection or significantly higher reproductive success—which we could represent as the simple pattern ABC123.

Imagine a creature that’s wired to allow for a certain degree of plasticity. If it had a similar wiring pattern, like ABD123, this would offer no particular benefit, since ABC123 is the proper wiring. However, if there’s behavioural variation within a few degrees, then the wiring has a chance to change into the golden combination where C is in the third position, instead of D.

From natural selection’s point of view, the default starting point ABD123 is just as unsuccessful as XYZ789, but because the closer configuration is more likely to hit on the Good Trick by behavioural plasticity, it’ll outbreed the latter, acting as an indirect form of genetic selection.
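To make this concrete, here is a minimal simulation of the idea (my own toy sketch, not Dennett’s; the alphabet, lifetime length, and one-tweak-per-attempt rule are all invented for illustration):

```python
import random

GOOD_TRICK = "ABC123"  # the target wiring from the example above

def finds_trick(genome: str, tries: int = 50) -> bool:
    """Can this creature hit the Good Trick during its lifetime by
    randomly re-wiring one position at a time (behavioural plasticity)?"""
    alphabet = "ABCDXYZ0123789"
    for _ in range(tries):
        candidate = list(genome)
        i = random.randrange(len(candidate))
        candidate[i] = random.choice(alphabet)
        if "".join(candidate) == GOOD_TRICK:
            return True
    return False

# Neither genome is born with the trick, but the near-miss (ABD123) is one
# lucky tweak away, while XYZ789 can never reach it in a single tweak.
for genome in ["ABD123", "XYZ789"]:
    hits = sum(finds_trick(genome) for _ in range(1000))
    print(f"{genome}: hit the Good Trick in {hits}/1000 lifetimes")
```

The near-miss genome stumbles onto the trick in a large fraction of lifetimes while the distant one never does, so selection favours genomes close to the trick even though the trick itself is never directly inherited.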

The ability to learn a particular language is probably too specific for genetic evolution, but the ability to learn some kind of language apparently isn’t—after just a few years, humans can get their brains turbocharged with a large set of symbols that lets them manipulate the world with incredible granularity, fine-tuning their communication with themselves and others.

Language is clearly a useful add-on, but it’s worth considering just how much of a difference it makes to be indoctrinated into the social world of symbols.

Helen Keller, born in the late 19th century, became both blind and deaf shortly before her second birthday. After great difficulty and devotion, a woman named Anne Sullivan taught Helen how to read and write, letting us peer into the ‘pure mind’ of somebody without language.

Here’s Anne Sullivan describing the process:

"[M]ug" and "milk" had given Helen more trouble than all the rest. She confused the nouns with the verb "drink." She didn't know the word for "drink," but went through the pantomime of drinking whenever she spelled "mug" or "milk." This morning, while she was washing, she wanted to know the name for "water." When she wants to know the name of anything, she points to it and pats my hand. I spelled "w-a-t-e-r" and thought no more about it until after breakfast.

Then it occurred to me that with the help of this new word I might succeed in straightening out the "milk-mug" difficulty. We went out to the pump-house, and I made Helen hold her mug under the spout while I pumped. As the cold water gushed forth, filling the mug, I spelled "w-a-t-e-r" in Helen's free hand. The word coming so close upon the sensation of cold water rushing over her hand seemed to startle her. She dropped the mug and stood as one transfixed. A new light came into her face.

Now Helen’s version:

I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten — a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that w-a-t-e-r meant the wonderful cool something that was flowing over my hand. The living word awakened my soul, gave it light, hope, set it free!

[…]

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness... Since I had no power of thought, I did not compare one mental state with another.

This suggests that all of the words which we imbibe as infants are not just convenient labels for our genius minds to toss around, but actually shape our very experience of the world.

Is the genius in Helen, Anne, or the words themselves?

At least part of the answer is: yes. Strangely enough, Nietzsche refused to recognize the interdependence present in this phenomenon, insisting that the real genius is irreducible and individual, undermining his own insight expressed earlier:

My idea is clearly that consciousness actually belongs not to man's existence as an individual but rather to the community- and herd-aspects of his nature; that accordingly, it is finely developed only in relation to its usefulness to community or herd; and that consequently each of us, even with the best will in the world to understand ourselves as individually as possible, 'to know ourselves', will always bring to consciousness precisely that in ourselves which is 'non-individual', that which is 'average'; that due to the nature of consciousness - to the 'genius of the species' governing it - our thoughts themselves are continually as it were outvoted and translated back into the herd perspectives. At bottom, all our actions are incomparably and utterly personal, unique, and boundlessly individual, there is no doubt; but as soon as we translate them into consciousness, they no longer seem to be...

This is what I consider to be true phenomenalism and perspectivism: that due to the nature of animal consciousness, the world of which we can become conscious is merely a surface- and sign-world, a world turned into generalities and thereby debased to its lowest common denominator - that everything which enters consciousness thereby becomes shallow, thin, relatively stupid, general, a sign, a herd-mark; that all becoming conscious involves a vast and thorough corruption, falsification, superficialization, and generalization.

To recap—we only became conscious because of the bottom-up process of complex social intercourse—a frail faculty which was built up slowly and gradually, bringing us from mute beasts into awakened souls—but actually, we are super complex and unique and smart in our own inner genius, and everything to do with the herd will only dumb us down.

I don’t buy it. Besides the irony of saying ‘yes, you’re right, I am totally unique’ while reading a passage that a million people have read and agreed with, his conclusion throws cold water on the idea that you could learn anything profound at all, even the very lesson offered, since 'the world of which we can become conscious is merely a surface- and sign-world'.

Therefore—why bother with reading—or science—or with my conscious mind at all? It’s an outrageous thing to say, but then again, I (a visual artist who has been coached to be an artist from a very early age) am writing an essay which, in part, will criticize creativity as a self-indulgent, unethical and childish activity which can be outsourced to machines—though of course, it isn’t only that.

What he’s saying has its appeal; genius is my unconsciousness—I don’t need to prove anything to anybody, I don’t owe anything to anyone, I’m amazing exactly as I am, in my stupid beastly self; words and concepts are like so many little nuts that I can crack open.

More succinctly: When I hear the word ‘wisdom’, I bare my fangs!

This could lead to a deliciously self-affirming feeling, but talking about uniqueness and individuality sounds too much like reverting to mystical Brahmanistic concepts of immortal magical entities.

If, at bottom, all our actions are incomparably and utterly personal, unique, and boundlessly individual, we must ask:

Why are there any actions or individuals to begin with?

It’s not to fulfill the criteria of being personal, unique and individual, but rather to survive long enough to reproduce. Behaviour is highly conserved from one generation to the next and there’s only a tiny fraction of unique mutations. Individuality is a late flower that’s still developing, not something at bottom.

Pastel Programmer by Andrei Vyktor Georgescu (2023)

4. Humility in the Light of the Computer

Darwin’s insight into the creation of living things was echoed by Alan Turing in his observation that it’s not necessary to understand what math is in order to successfully add up numbers.

Historically, a ‘computer’ was a person who manually computed mathematical problems, and esteemed scientists throughout history such as Johannes Kepler (who formulated the three laws of planetary motion in the 17th century) cut their teeth as computers.

But rather than getting seduced by his thought-flowers magically popping up, Alan Turing decided to formalize the exact sequence of actions necessary to complete a math problem, so that every step was dumbed down to the point that it could be easily reproduced by a machine—now known as a Turing Machine.

A Turing Machine executes a sequence of steps one-by-one in a restricted workspace, using a reliable memory to provide data and instructions for a set of basic operations. John von Neumann took this idea and actually implemented it in a way that is now familiar across the world as the von Neumann architecture, including defining features such as a CPU (Central Processing Unit) for executing instructions sequentially, and a shared memory for storing both instructions and data (Random Access Memory or RAM).

Von Neumann’s model not only made the concept of a universal computer a reality (a machine capable of performing any computable task), but it also served as a blueprint for virtually all digital computers that followed.
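To appreciate just how dumbed-down each step is, here is a toy machine in that spirit (a sketch of the general idea rather than Turing’s original formalism): a tape, a read/write head, a few states, and a lookup table of rules, which together increment a binary number.

```python
# (state, symbol read) -> (symbol to write, head movement, next state)
RULES = {
    ("seek_end", "0"): ("0", +1, "seek_end"),
    ("seek_end", "1"): ("1", +1, "seek_end"),
    ("seek_end", " "): (" ", -1, "add_one"),  # ran off the end; back up
    ("add_one", "1"):  ("0", -1, "add_one"),  # 1 + 1 = 0, carry the 1
    ("add_one", "0"):  ("1",  0, "halt"),
    ("add_one", " "):  ("1",  0, "halt"),     # carry spilled past the front
}

def run(tape: str) -> str:
    cells = dict(enumerate(tape))  # the 'reliable memory'
    head, state = 0, "seek_end"
    while state != "halt":
        symbol = cells.get(head, " ")
        cells[head], move, state = RULES[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

print(run("1011"))  # -> '1100' (11 + 1 = 12, in binary)
```

No rule in the table knows what addition is; the arithmetic falls out of blind symbol-shuffling, which is exactly the point.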

It’s common to think of the flexibility of modern digital computers as a given, but a hundred years ago, it would be like someone today claiming that they’ve invented a single home appliance that can do everything from washing clothes to making toast—our natural reaction would be: what in tarnation?

Under the hood, computers are working with sequences of binary digits or "bits" like 001101. Despite this fact, it certainly seems as if they’re doing a lot more when displaying video and music, but that’s because there’s no functional reason for the end-user to be aware of the billions of CPU cycles per second, in the same way that none of us are strenuously keeping track of the activity of trillions of cells.

As ol’ Danny Dennett explains, the first digital computers were originally described as electronic brains, but it would be more accurate to call them electronic minds, since they’re severely stripped down versions of cognition, in contrast to the wild, wacky world of neurons, which are still too complex to accurately model.

But electronic minds invading the territory of human minds is a problem that’s been brewing for at least a century. If you were to walk into a movie theater in the early 1920s, you wouldn’t just be sitting and watching a movie like today, but rather, you’d be treated to a series of short films, live music, and vaudeville acts.

In the mid ’20s, innovative new technology from Western Electric allowed the Warner Brothers to take the ethereal mystery of music down to earth by encoding the signal into electronic impulses; the earlier mechanical phonograph could only project sound loudly enough in a small room. As the Film Daily explained in 1926, this meant that the Warner brothers ‘could profit further by firing all the musicians and entertainers it currently had to pay to perform at their theaters’.

Predictably, this ruffled feathers to the point that the American Federation of Musicians formed the Music Defense League in 1930. They spent almost 10 million dollars to create propaganda against this new technology, using concepts that are remarkably similar to anti-AI arguments today. This ad from 1930 invokes mystical notions of the soul:

The robot (singing): "O, soul of my soul, I love thee--". But the Robot has no soul. And having no soul It cannot love. Small wonder the lady spurns Its suit. Now, if the Robot cuts a ridiculous figure beneath a lady's balcony, why expect IT to thrill intelligent theatre goers in the character of Canned Music? Music is an emotional art. By means of it feeling may be translated into all tongues. The Robot, having no capacity for feeling, cannot produce music in a true sense.

In other words, a grotesque mockery of what it means to be human.

The soul argument is a popular one, and my favourite contemporary example is an experiment someone ran on an internet forum. In a thread where people complained about the lack of soul in AI art, he posted a child’s drawing of Sonic the Hedgehog. The reaction was as he predicted: ‘Unfiltered childlike sovl’, said one user.

‘Soul, this mogs all the AI trash posted in this thread’, added another—only to later be shown a screenshot from Niji Journey demonstrating that it was generated with a diffusion model. GOTTEM!

Although the robots from the 1920s couldn’t generate new creations, they could still take artwork emanating from soulful creatures and accurately reproduce it electromechanically, something that’s completely taken for granted by ‘intelligent theater goers’ in the 21st century. When’s the last time you read a movie review where the critic said they couldn’t feel anything because the sound system and projector lacked a soul? [Maybe this observation will inspire a new wave of trolls]

In the 2020s, the effect is even more pronounced, since all of the equipment is digital, whereas in the 1920s there was still some magic to hold on to because of the analog equipment, a magic that persists among those who insist that vinyl has some irreducible quality.

I’ve seen someone post a photo of a Surrealist painting, explaining that a computer could never describe the complexity and genius of that painting, even though the image they posted is itself a meticulously specific set of computer code that accurately displays it.

In other words, the battle against generative AI was lost long ago, since accepting the existence of any kind of computer-encoded art (e.g., music, movies, pictures, or books) means accepting there is a digital formula for the production of artistic objects—copies though they may be.

It was only a matter of time before software got sophisticated enough to learn patterns in this digitally produced artwork. That’s essentially how generative AI works; in the case of generative text like ChatGPT, it translates words into numbers, looks through billions of examples of number-sets in the form of existing text, finds the right equations to predict numbers when you give it a certain prompt, and translates the results back into words. It’s like, it’s all maths, innit, mate?
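Here is that loop in miniature (a toy bigram counter of my own invention; real models like ChatGPT use vastly larger datasets and learned equations, but the words-to-numbers-to-words shape is the same):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# 1. Translate words into numbers (token ids).
vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# 2. 'Training': count which number tends to follow which.
follows = defaultdict(Counter)
for a, b in zip(ids, ids[1:]):
    follows[a][b] += 1

# 3. 'Prompting': predict the likeliest next number,
#    then translate it back into a word.
id_to_word = {i: w for w, i in vocab.items()}
next_id = follows[vocab["the"]].most_common(1)[0][0]
print(id_to_word[next_id])  # -> 'cat'
```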

The same principle applies to visual art; each pixel in an image is a series of numbers specifying the intensity of red, green and blue light, and when an array of numbers is linked to a meaningful set of labels, patterns can be found when you have enough examples. Given enough computing power, you can summarize five billion images into a few gigabytes, like in Stable Diffusion, meaning that each image contributes less than a byte on average, since the computer has figured out efficient equations to summarize the essence of the concept—it has learned.
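To see that an image really is nothing but numbers, consider a hypothetical 2×2 picture:

```python
import numpy as np

# A 2x2 'image': every pixel is three numbers (red, green, blue), 0-255.
image = np.array([
    [[255, 0, 0],  [0, 255, 0]],       # a red pixel, a green pixel
    [[0, 0, 255],  [255, 255, 255]],   # a blue pixel, a white pixel
], dtype=np.uint8)

print(image.shape)  # (2, 2, 3): height, width, colour channels
print(image[0, 0])  # [255   0   0] -- 'red' is just this triplet
```

Scale that up to millions of pixels and five billion labelled examples, and pattern-finding becomes a matter of compute.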

In machine learning, there’s something called a ‘loss function’, which measures how far off the equation is from hitting the target in the dataset, meaning that ‘zero loss’ is a perfect hit.

Let’s say you have an idea you want to instantiate as a painting. You start off with a blank canvas, but by looking online for inspiration, sketching, blocking out shapes, gathering references to flesh out details, and so on, you start lowering the ‘loss’, as the external image iteratively matches the internal one.

For people working in traditional media, reducing ‘loss’ requires extremely high levels of manual dexterity; you need agile ballistic control of your hand at all times when painting in watercolour, for example. Simulating watercolour with digital brushes makes for more forgiving loss minimization, but prompts go a step further, bypassing manual dexterity as a ‘loss’ bottleneck, and directly illustrating semantic content—although the specificity of the semantic content is proportional to the difficulty of loss reduction. This applies to ideas in general:

Idea → Behaviour → Error Calculation → Revised Behaviour → … → Goal Success
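The whole loop fits in a few lines of code (a deliberately trivial stand-in: one number plays the ‘canvas’, squared error plays the ‘loss’, and gradient descent plays the revision step):

```python
target = 7.0   # the image in your head
canvas = 0.0   # the blank canvas
rate = 0.1     # how boldly each revision is made

for step in range(50):
    loss = (canvas - target) ** 2      # zero loss = a perfect hit
    gradient = 2 * (canvas - target)   # which way is 'off', and by how much
    canvas -= rate * gradient          # revise the behaviour

print(round(canvas, 3))  # ~7.0: the external result matches the internal idea
```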

Although this process has an explicit architecture, the sheer scale of the dataset and computation means that it’s practically impossible to have a complete understanding of how the model works at a granular level of detail. Modern training runs are measured in petaflops, where a petaflop is a quadrillion (1,000,000,000,000,000, or fifteen zeros) floating point operations per second (FLOPS), and that has to run for months. Supercomputers like Summit at the Oak Ridge National Laboratory can run 200 petaflops at peak, using about 27,000 GPUs. We’re talkin’ Big Boy Toys.

An important difference between human learning and machine learning, at least at this stage of technology, is the fact that human learning is mostly a black-box affair. When an artist is in the process of imagining something, their working dataset is obscure even to themselves, since there are thousands of influences which are surreptitiously distilled and encoded through life experience; an advertisement here, a movie clip there, and eventually you have enough to give birth to an endless stream of apparently seedless flowers.

Prayer by Andrei Vyktor Georgescu (2023)

5. The Ethical Dilemma

The belief that machine learning steals rather than learns when it summarizes a dataset with equations is a distinction which can only be made because human theft is impossible to detect in anything but the most brazen cases, allowing delusions of seedless flowers.

In its lawsuit against Stability AI, Getty Images alleges that by accessing its publicly available website, Stability AI broke the rule against ‘using any data mining, robots or similar data gathering or extraction methods’. Leaving aside Google Images (which Getty ruined), we might wonder what constitutes similar data gathering—does the human mind gather data in its creative process in a way that’s comparable to computer software?

Even Getty Images seems to think so, since their own lawsuit explains that Stable Diffusion ‘understand[s] the relationships between text and associated images and [uses] that knowledge to computationally produce images in response to text prompts.’

Condemning the use of copyrighted images in training requires a certain blindness (willful or otherwise) to one’s own creative sources, which were formed through the same process of understanding the relationship between concepts and images. After all, it’s not possible to have any record of the ‘scraping’ done by artists or engineers when they watch TV or browse the internet.

That might change in the future; scientists like Yu Takagi and Shinji Nishimoto have demonstrated the ability to reconstruct images from human brain activity using functional magnetic resonance imaging and diffusion models.

If litigation sets a precedent forbidding unauthorized content in machine learning, then, for the sake of consistency, artists ought to have their brains monitored for unauthorized access to content—maybe that Disney cartoon that an artist watched obsessively as a child warrants some royalties or even criminal charges.

Mind you, this presumes that people are fully entitled to the claims they’ve staked out. Walt Disney himself alleged that the idea for Mickey Mouse magically appeared in his mind while he was on a train ride—a magnificent seedless flower! Unfortunately, this is a somewhat vindictive lie; he had a falling out with the designer of Mickey, Ubbe Iwerks, which inspired Walt to cut Ubbe out of the story completely.

Walt and Ubbe lost rights to their cartoon character—Oswald the Lucky Rabbit—so they needed to come up with a new one. Although Walt helped in the initial stages, Ubbe was responsible for the final design, and managed to crank out hundreds of drawings a day to finish the first Mickey Mouse cartoon in a couple of weeks (Plane Crazy from 1928).

However, if we take a look at the difference between Oswald and Mickey, they’re kinda lucky that the distributor didn’t sue them, since the two characters are not very different:

Mickey Mouse and Oswald the Lucky Rabbit © Disney

As silly as it might be to fight over the credit and legitimacy of a cartoon mouse, given the competitive nature of the market, any little niche that affords a living will be aggressively defended.

The American Federation of Musicians didn’t just stick to propaganda about souls—they also made more practical economic claims. In the early 1930s, its president Joseph N. Weber said:

The time is coming fast when the only living thing around a motion picture house will be the person who sells you your ticket. Everything else will be mechanical. Canned drama, canned music, canned vaudeville. We think the public will tire of mechanical music and will want the real thing. We are not against scientific development of any kind, but it must not come at the expense of art. We are not opposing industrial progress. We are not even opposing mechanical music except where it is used as a profiteering instrument for artistic debasement.

He hit the nail on the head regarding the canned nature of movies in the future, but why is it unethical to be a movie studio that seeks to make as much money as possible, but ethical to be a musician that seeks to make as much money as possible?

Consider an even broader question: is it ethical to be an artist?

As Lewington Pitsos pointed out in his video, if we assume that ethical behaviour means aiming for the greatest amount of good for the greatest number of people, then every penny spent on luxuries like art should instead be diverted to social goods, such as feeding starving children around the world, housing the homeless, providing medical care for the elderly, etcetera.

Therefore, spending less money on art by using more efficient tools like Stable Diffusion or ChatGPT would always be the ethical choice—since the money saved can be used for more ethical purposes.

But if that’s the case, then why is it that poets like Edward Young had the temerity to sit around and put words together like a child playing with colorful alphabet blocks, while others around him had to clean chimneys, mine coal and empty cesspits? Why does Getty Images feel it’s appropriate to sit on pictures and charge rent?

Nietzsche offers us a clue in his early essay, The Greek State:

[W]e may compare this grand Culture with a blood-stained victor, who in his triumphal procession carries the defeated along as slaves chained to his chariot, slaves whom a beneficent power has so blinded that, almost crushed by the wheels of the chariot, they nevertheless still exclaim: "Dignity of labour!" "Dignity of Man!" The voluptuous Cleopatra-Culture throws ever again the most priceless pearls, the tears of compassion for the misery of slaves, into her golden goblet. 

In other words, the majority of people in a society need to be enslaved—and enjoy their slavery—in order to allow for the birth of ‘Culture-men‘.

If this sounds harsh, then let’s consider an alternative view: the Universal Declaration of Human Rights, which is a veritable fountain of rights bestowed on individuals by dint of their species. For example, Article 25 states that ‘everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing, and medical care and necessary social services.’

It goes on, but I’d like to pause on just the right to food: the average human being, over the course of 80 years, demands about 65 million calories for basic metabolic function, which would translate to about 800,000 slices of bread or 200,000 cheeseburgers.

If you were to split it up into a more balanced diet, you would need, say, 1000 kilos of oatmeal, 4300 kilos of blueberries, 3200 kilos of chicken breast, and so on—not accounting for all of the effort required to grow, package, transport, and cook the food, along with the necessary equipment, training and fuel.
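The arithmetic behind those figures is easy to check (assuming roughly 2,200 dietary calories a day; the per-item counts are ballpark):

```python
kcal_per_day = 2200
lifetime_kcal = kcal_per_day * 365 * 80
print(f"{lifetime_kcal:,} kcal")  # 64,240,000 -- 'about 65 million'

print(lifetime_kcal // 80)    # slices of bread at ~80 kcal  -> ~800,000
print(lifetime_kcal // 320)   # cheeseburgers at ~320 kcal   -> ~200,000
```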

With this in mind, the Universal Declaration of Human Rights becomes a declaration of my obligatory servitude to all strangers everywhere on the planet, an uncomfortable fate which is only ameliorated by the dubious assumption that everyone else will forgo any self-interest or allegiances to their family—for the sake of assuaging the roars of my belly.

Really, it’s pretty much the opposite; unless I make it abundantly clear that my existence is beneficial to a large group of people, I’m a nuisance and burden for anyone who isn’t in my immediate family circle. Indeed, there’s a reason that millions of people die from hunger each year, and it isn’t because we’ve all missed the UN’s memo.

And even within the family, there are many hoops to jump through to avoid being dead weight. Parents tend to sacrifice as much as they can in the quest to make their children as indispensable and competent as possible, guiding them to pro-social careers like medicine and engineering.

The reason children want to become artists (and why parents typically try to convince them otherwise) is that it’s a form of deliciously luxurious selfishness. We can extend the logic to any form of self-directed creative agency, because that’s the opposite of being an employee; to be employed means to be given tasks by others. Put more simply, sitting around and making art is fun—a whole lot more fun than our human rights obligation of picking cabbage from a field at noon in the hot summer.

Those who are able to carry through this selfishness to adulthood enter a world of the most rarefied joy; at the upper echelons of the fine art world, the stupidest and ugliest things imaginable are bought and sold for the annual salary of a heart surgeon.

Yes, your kid could paint that, and is painting that—but while you make PowerPoint edits until midnight and sit in three-hour meetings that could have been an email, someone else is hurling splatters on a canvas for twice the pay, and drinking your tears in their golden goblet. Others are treating these self-indulgent abominations as investments and watching the numbers go up while they smoke cigars on a penthouse balcony. La dolce vita!

The con artists at the top of the game will not be affected by artificial intelligence, especially since technical skill stopped being relevant about a hundred years ago. Quite the contrary, in fact. The abundance of beautiful and technically masterful images generated with computers will mean that traditional media is even more scarce, and therefore more valuable—since scarcity is an important factor for fetishism.

However, those who struggle to reach the heights of selfish luxury by grinding through commissions are liable to freak out when the ‘real world’ intervenes, especially if they’ve spent a lot of effort trying to move past mediocrity.

In machine learning, Rich Sutton’s Bitter Lesson is that methods which leverage computing power are much better than trying to find that One Weird Trick based on clever insight:

Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other.

The bitter lesson in the commercial art world is that artists seek to leverage their egoistic pleasure in the creation of art, even though the only thing that matters is the leveraging of tools to produce good results quickly. These two shouldn’t run counter to each other, but they tend to. Time spent indulging in the fantasy of seedless flowers is time not spent improving efficiency and quality.

It might seem that artificial intelligence will make life easier, allowing more time for leisure. Certainly, this is true to some degree, although my experience as a professional illustrator in the past decade makes me temper my hopes. More powerful software & hardware translates to greater demands on speed and quality, rather than free time; three days for a good illustration turns into three days for ten great illustrations.

I’m hardly alone in having noticed this. Jim Blinn, a computer scientist who has made important contributions to computer graphics like the Blinn-Phong shading model, noticed that even though computers kept getting better, graphic artists kept throwing more stuff into the mix. Rather than using improved hardware to render the same things faster (so as to have more time to relax), they created more complex scenes which required approximately the same render time as before—an observation dubbed Blinn’s Law.

It might not always be true in every detail, since clients sometimes skimp on quality to get things done faster, but the principle is solid:

Human desire is infinitely expansive—so be careful with desires, the Buddha might add, lest they devour you.

Aristotle speculated that slavery would disappear if fabrics could weave themselves, since there would be no more need for masters and slaves to produce goods. But after more than two thousand years of innovation in the field of fabric production, the conclusion seems to align with Blinn’s law—we are producing more complicated fabrics than ever, at greater speed, using millions of workers who would probably prefer to be doing something else.

6. The Curious Case of Getty

Speaking of the preference of workers to be doing something else, let’s return to the Getty Images lawsuit.

My favourite illustration of the ethical integrity of Getty Images is the fact that they take public domain images and charge unsuspecting customers large sums of money to download them.

For example, here’s the famous ‘Migrant Mother’ by Dorothea Lange on the Library of Congress website, which is free to download in a lossless format (TIFF) at a resolution of about 6500 x 8500:

Screenshot of the destitute mother photo available on the Library of Congress website

However, the ethical paragons at Getty charge $575 for an image that is less than half of the original resolution (2500 x 3750):

Let’s recap—Getty Images has a public domain image of a mother suffering destitution, with a watermark on it, at less than half quality—and they charge hundreds of dollars for the privilege of downloading it.

Carl Malamud, a public domain advocate and founder of Public.Resource.Org, argues that this is ‘immoral, but not illegal’. Well, I’m not so sure about its legality, since every indication available on the purchase page makes it look as if this is an image that Getty owns, and the customer is paying for the license.

They offer legal protection, explicitly forbid use in magazine covers (for some reason), and list the license type as rights-managed—the link takes you to a wall of legalese which explains that rights-managed content has all sorts of limits and restrictions. They don’t make it clear in any way that this is a public domain image—rather, they hide that fact. The contributor, who vandalized the image with his watermark, does this for countless public domain images, hoping to catch suckers in his trap.

You might say that the customer had to do their research—yes, Getty is lying and misleading customers, but if you’re dumb enough to believe what they tell you, then you’re to blame. At the risk of sounding like an authoritarian extremist, I think companies shouldn’t be allowed to make false or misleading claims about the products they sell. Yes, I know that this would be just like George Orwell’s 1984.

Another tragicomical example is the case of Carol Highsmith, who donated tens of thousands of images to the Library of Congress, prompting Getty Images to take the pictures, slap their copyright on them, and issue a threatening letter to Carol demanding that she pay Getty for using her freely available images. They also went after anyone else who used Carol’s images.

This is one of many reasons why Getty Images has a bad reputation as a copyright troll that throws its weight around to shake people down for unearned cash. Aside from the philosophical issues regarding what constitutes learning and creativity, they would have to first prove ownership of the images that they’ve falsely flagged as belonging to them.

To understand why Getty Images engages in this behaviour, I’m tempted to look at the material circumstances of its genesis; Mark Getty, the co-founder, was born into one of the richest families in the world. His grandfather, Getty Sr., became a millionaire in his early 20s with his royalties from the family oil fields, and Mark’s father was made the president of one of the family’s oil company subsidiaries.

Mark’s brother, John Paul Getty III, was kidnapped and held for ransom for $100 million (in today’s money), and after his grandfather refused to pay up, the kidnappers cut his ear off and pumped him with tons of penicillin to stave off an infection. Eventually Getty Sr. agreed to give the kidnappers the maximum tax deductible amount, but expected his son to pay him 4% interest on a chunk of the cash. John was found alive, but the experience precipitated some substance abuse issues, and he suffered a stroke from a cocktail of drugs that left him permanently disabled.

There’s a couple of things I imagine you’d learn from these kinds of experiences—one: you have a birthright to wealth acquired from sitting on valuable things—and two: there’s evil in the world which will not hesitate to destroy you and take everything you have, so you have to play offense.

And indeed, accruing royalties like his grandfather was his confirmed strategy. In an interview from 2000, Mark explained that ‘intellectual property is the oil of the 21st century‘:

Look at the richest men a hundred years ago: they all made their money extracting natural resources or moving them around. All today’s richest men have made their money out of intellectual property.

I like this kind of honesty—no bombast about the value and joy of supporting and empowering a strong community of creators. Just the straight dope about how having lots of money is awesome and hoarding intellectual property’s probably the best way to do it. Work smart, not hard—I get it.

But an interview from 2011 saw a different side of Mark, where he proclaimed the dignity of labour: ‘Most people, if they have an opportunity to do little, will do just that, particularly if you’re born into money. I could have easily done nothing, but it would have been boring and unfulfilling’. He added that his kids are required to work, ‘not because they have to make their money, but because they have to have a purpose’.

When Getty Images was founded in 1993, its purpose was to gobble up businesses both large and small, including giants like Art.com and the Hulton Press Library, which gave them rights to millions of photos. By the year 2000, Getty Images outperformed oil share prices by more than double, which confirmed Mark’s bet on IP, and pleased the family members who invested in his company, including his father, two uncles, and thirteen cousins—presumably an investment made out of an abstract love of the game of life, rather than an interest in making money.

And it’s not like their hunger has abated; just a couple of years ago, Getty Images bought Unsplash, which offers lots of free stock photos. The latest Unsplash license forbids users from selling these free images—but who would dare to do such a thing?

One of the rumors surrounding Mark’s grandfather was that he installed a payphone in the guest room because he was such a cheapskate. Because of this, Mark decided to buy a Banksy sculpture of a smashed phone box to plop in front of his library, a joke which cost him about half a million pounds in 2023 money (£370,000 in 2011). His collection includes millions of dollars spent on artists like Ed Ruscha and Jeff Koons—artwork that looks like it belongs in plastic wrap on a dollar-store shelf.

7. Conclusion

It’s hard to say whether these elites demonstrate an exuberant rascally vitality or an exhausted sort of hedonism that befalls one when money starts to lose its meaning, and luxury goods offer diminishing returns. Certainly, I’d like to have the bank account required to experiment with the dilemma myself.

In his book on the Getty family, John Pearson explained that ‘money effectively remove[s] the moral imperative from everyday existence’—he was discussing the way in which Getty Sr. retired in his early 20s and decided to focus his attention on hot girls, cool cars, and bootlegged gin.

However, in §40 of Will to Power, Nietzsche pointed out that this kind of decadence is a natural part of life:

It is a disgrace for all socialist systematizers that they suppose there could be circumstances—social combinations—in which vice, disease, prostitution, distress would no longer grow.—But that means condemning life.— A society is not free to remain young. And even at the height of its strength it has to form refuse and waste materials. The more energetically and boldly it advances, the richer it will be in failures and deformities, the closer to decline.— Age is not abolished by means of institutions. Neither is disease. Nor vice.

Take, for example, Prof. Dr. Björn Ommer at the Ludwig Maximilian University of Munich, who released the original Stable Diffusion, called Latent Diffusion, with the help of Robin Rombach, Andreas Blattmann, Dominik Lorenz, and Patrick Esser—completely for free.

Björn was the son of a physics teacher, and wrote his first program when he was nine years old. He studied physics as an undergrad, wrote a thesis in computational neuroscience for his PhD, and researches things ranging from the algorithmic analysis of medieval paintings to immunotherapy for regaining motor control after strokes. Björn’s motivation seems to stem—in part—from an endless curiosity about the world, driven by fascinatingly difficult questions like: how can we teach computers to see?

The fact that we live in a world where people like Björn exist, and can freely share their research, means we must also come to terms with the existence of aggressive decadents who will do their utmost to either exploit, parasitize, downplay, insult or destroy whatever is built—the best flowers require lots of manure, and might be suffocated by it.

I should say, the research is mostly free—sometimes you need to ask the author for a paper if it’s paywalled by parasitic journals. There are many people around the world, plenty of them anonymous, who are currently spending most of their waking life building and fine-tuning resources for AI tools like Stable Diffusion at no cost to the user.

Getty Images could have incorporated the freely available Stable Diffusion in their business; given the resources and talent they wield, they had the opportunity to produce an industry-leading proprietary model. But that would take ingenuity and effort, so they opted for their usual route of shakedowns, asking for $1.8 trillion in damages. Then again, this might be a tactic to try and bankrupt Stability AI, forcing the ownership of their entire infrastructure into Getty’s hands. After all, it might have been cool to sit on a database of images for profit in the 1990s, but that’s going to go the way of compact discs in the 2020s.

Whether or not copyright content belongs in AI models is something of a moot point, and not just because there are millions of finetuned and untraceable models that have already been downloaded to people’s personal computers across the world (ChilloutMix by itself is close to a million downloads).

Rather, given the fact that training methods will continue to improve, it won’t be long before the days of scraping the internet are a long-lost memory; high-resolution video with segmentation algorithms will feed on a rich stream of data in the same way humans do.

Just as canned music, canned drama, canned vaudeville, and canned pictures have become commonplace, artificial intelligence and artificial creativity will eventually seem no more unusual than computer screens or speakers. Until then, amidst the harsh anti-AI noise, and possible lobotomization and criminalization of AI tools, my advice is: stop and smell the artificial flowers!

The Princess by Andrei Vyktor Georgescu (2023)