
In Defense of AI Image Generation

The typical arguments against the use of AI image generators are all severely flawed—I will show that they further serve as a smokescreen for a neo-Marxism endemic to the artistic community.

3 hours, 54 minutes


The Argument Against AI Image Generation

A typical argument one comes across from those in the anti-AI image generation camp goes something like: “AI image generation steals art from artists, putting them out of a job; and what it produces can’t even be properly considered art anyway.” To analyse this stance properly, it must be broken into its constituent parts:

  1. AI image generators should be opposed because they steal art from artists;
  2. AI image generators should be opposed because they put artists out of their jobs, and;
  3. images produced by an AI do not count as art.

Do AI Image Generators Steal From Artists?

On The Impossibility of Stealing an Idea

So, do AI image generators steal art from artists? The short answer is: no, they do not. Theft is a very specific legal concept: it refers to the unjust re-distribution of property from party $A$ to party $B$. The important part of this is that property rights can be held only in scarce means—the issue with Bob stealing Alice’s car is that this deprives Alice of that car. Bob’s use of the car is incompatible with Alice’s use—this is what theft refers to. Artistic ideas, like any other ideas, are simply not scarce—Bob is capable of looking over Alice’s shoulder and painting exactly what she is painting without depriving her of anything that she owns. The correct terminology for a rule that would prohibit Bob from doing this is a negative easement—not theft.

Do AI Image Generators Plagiarise?

AI Image Generation as “Filling in the Gaps”

So, for the issue of digital image creation, there is nothing that is or can be stolen from an artist: nobody is deprived of any scarce good when someone copies the ones and zeroes that define such digital artworks. There is a second aspect to address, though: perhaps the AI image generators aren’t literally stealing anything in the legal sense, but rather are plagiarising from artists, or something similar. It could well be that nobody’s rights are violated by the use of an AI trained on a bank of digital artwork, but said use might still be immoral. To address this point, allow me to briefly go over how these AI image generation algorithms work, focusing specifically on models such as Dall-E and Stable Diffusion.

Very simply, the core of such algorithms is a de-noiser—you have an image with some amount of noise, and you ask the algorithm to predict what noise has been added to that image, such that you may subtract it. Conceptually, this makes sense when you consider what it means to remove noise. If I have a picture of an aeroplane, and there is noise covering up the left wing, to remove that noise is to “fill in the gaps.” It is working out what is missing from an image. So, if you have a computer program that fills in the gaps like this, then give it an image of pure noise and ask it to get rid of this noise, you have given it in essence a blank canvas—you have asked it to fill in the gaps and given it only gaps.
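To make the “fill in the gaps” idea concrete, here is a minimal sketch of that sampling loop in Python. In a real diffusion model the noise predictor is a large trained neural network; the `predict_noise` stand-in below is a hypothetical toy that treats deviation from a fixed “learned” image as the noise to remove, so every name and number here is illustrative rather than any actual API:

```python
import random

# Toy sketch of the de-noising loop described above. predict_noise is a
# hypothetical stand-in for the trained network: it treats deviation
# from a fixed "learned" image as the noise component.
TARGET = [0.1, 0.5, 0.9, 0.3]  # the clean "image" (four pixels)

def predict_noise(image):
    # Predict what noise has been added to the noisy image.
    return [pixel - clean for pixel, clean in zip(image, TARGET)]

def sample(steps=50):
    # Start from pure noise: a canvas that is all gaps.
    image = [random.gauss(0.0, 1.0) for _ in TARGET]
    for _ in range(steps):
        noise = predict_noise(image)
        # Subtract a fraction of the predicted noise at each step.
        image = [pixel - 0.1 * n for pixel, n in zip(image, noise)]
    return image
```

Run repeatedly, `sample()` converges toward `TARGET` from any random starting canvas, which is the sense in which the model stores only a method for removing noise, not any particular picture.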

I would put it to you, dear viewer, that this is essentially how human artists learn to draw pictures as well. A human has various visual filters in their brain that can be used to pick out features in the objects of perception; the human is then able to classify different sets of features into different concepts. A human learns which visual features are present in, say, a hot dog or a spaceship. The AI does a very similar thing—it is trained on a bank of pre-classified images, invents various random filters for those images, learns which features are picked out by those random filters, and then learns which features correspond to the concept “hot dog.” If I want to draw a picture of an orc but have never seen nor heard of an orc before, I look online at previous pictures of orcs—what I am doing is learning how to fill in the gaps. I figure out what an orc face is, then I can draw one; I figure out what an orc sword looks like, then I can draw one; and so on until the entire blank canvas is filled.
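As an illustration of what such a “visual filter” is, here is the simplest possible one: a one-dimensional edge detector, sketched in Python (the pixel values are made up for the example):

```python
# A minimal "visual filter": a 1-D edge detector. Sliding it along a
# row of pixel brightnesses produces a strong response wherever the
# brightness jumps -- a feature that can then be tied to concepts.
EDGE_FILTER = [-1, 1]

def convolve(signal, kernel):
    # Slide the kernel along the signal, summing element-wise products.
    return [
        sum(s * k for s, k in zip(signal[i:], kernel))
        for i in range(len(signal) - len(kernel) + 1)
    ]

flat_region = [5, 5, 5, 5]  # uniform brightness: no feature
hard_edge = [0, 0, 9, 9]    # brightness jump: an edge feature
```

`convolve(flat_region, EDGE_FILTER)` comes out all zeros, while `convolve(hard_edge, EDGE_FILTER)` spikes at the jump; a network invents thousands of such filters at random and keeps the ones whose responses help it predict labels (or noise).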

Art YouTuber Ethan Becker seemingly agrees with me here. In his video explaining how to find your artistic style, the method provided is to pull out different aspects from different works of art and then combine them to make a new piece.1 This “style-bashing,” as one might term it, is evidently an accepted technique within the artistic community when done by human artists—and yet when an AI does this same thing it is painted as a clear-cut case of plagiarism.

AI Image Generation is not Photo-Bashing

This brings me to the point of discussion, namely the claim that AI image generators plagiarise the work of artists. To clarify some confusion which leads people to believe this: these AI image generators do not inherently work as “collage” or composition software; they do not, in the words of the Concept Art Association’s Karla Ortiz, work such that “the software […] breaks down all of the data [that was] gathered and shreds it apart into little […] bits of noise, and […] it starts grabbing little bits of shredded data from all over the place and […] slowly re-integrates it to pretty much create a new image.”2

So to be abundantly clear: these AI systems do not store the images used for training and blend those images together. All that is stored by such models is the set of weights which define the technique for going from noise to image—in other words, all the model stores is the method used to fill in the blanks, which is exactly what is stored in the brain of a human artist. For a program to operate in the way suggested, where new images are generated by stitching together many previous images, the entire bank of training data would need to be stored somewhere—in other words, a model like Dall-E or MidJourney would need to store the many terabytes of images it was trained on and be able to query this database in real time. This would be reflected in the size of such a model—it would be several hundred terabytes large, rather than a few gigabytes.

So, the calls for the model to somehow tell artists which images it was referencing to make some new generation are flawed to their core—the AI is not referencing specific images and combining different aspects; it doesn’t have any access to its training set. Now, the obvious response to this point, which I will address more fully later, is that of over-fitting. Over-fitting is a real thing that happens, but it is not at all the expected behaviour of such a system, and it does not demonstrate that the expected output of such a system is in any way a form of photo-bashing. From a pure data perspective, all that is stored are the weights of the neural network—think of these like the parameters in a linear equation: you tweak the parameters until your output matches what you would expect. [NOTE: this is a massive over-simplification of how neural networks work, but it’s accurate in the relevant ways]. So, the only place where an image generation model could possibly store some representation of its training data is in these parameters. Stable Diffusion XL has 2.6 billion such parameters,3 and the LAION-5B dataset which everyone is so concerned about contains 5.86 billion images—about 240TB of data. That comes out to less than one parameter for every two images; or, if we assume each parameter is stored as 32 bits (standard for most number operations on a computer), roughly 0.00004 bits of parameter data for every bit of training data. To store the training set in the model you would need at least 1 bit of parameter data for every bit of training data—about 25,000 times the number of parameters, which would be 65 TRILLION parameters.
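This back-of-envelope arithmetic can be checked directly. The exact ratio works out to about 0.000043 bits of parameter data per bit of training data, which the text rounds to 0.00004 (hence the 25,000× and 65 trillion figures; the unrounded result is closer to 23,000× and 60 trillion):

```python
# Back-of-envelope check of the parameter-vs-dataset figures above.
params = 2.6e9          # Stable Diffusion XL parameter count
bits_per_param = 32     # float32, standard for most numeric computation
dataset_bytes = 240e12  # ~240 TB for LAION-5B's 5.86 billion images

param_bits = params * bits_per_param
dataset_bits = dataset_bytes * 8

# Bits of parameter data per bit of training data.
ratio = param_bits / dataset_bits

# Parameters needed to reach 1 bit of parameters per bit of data.
params_needed = params / ratio

print(f"ratio: {ratio:.6f}")
print(f"needed: {params_needed / 1e12:.0f} trillion parameters")
```

Either way, the conclusion is unchanged: the model is tens of thousands of times too small to be storing its training set.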

Now, 2.6 billion and 65 trillion are both gargantuan numbers, and it can be challenging to immediately grasp how much bigger one is than the other, so I think a visualisation is in order. Imagine if I took a three-week-old kitten and placed it on a scale, then on the other side dropped a fully grown African elephant bull—the world’s largest land animal. The difference in weight between this tiny kitten and this monstrous elephant would be roughly equivalent4 to the difference in parameter count between Stable Diffusion XL and our 65 trillion-parameter model required to reach the bare minimum for general over-fitting. I think this makes clear why any suggestion that the general output from such models is over-fitted or collage-work or photo-bashed is simply ludicrous and could not be the case.

Bear this point in mind: people like the anti-AI golden boy, Steven Zapata, will tell us that “machines can replicate references exactly.”5 This is an argumentative sleight-of-hand. Sure, machines can indeed create exact duplicates of data—but this is not what the argument is about. Artists are not opposed to your computer making a copy of their pictures when you navigate to their ArtStation page—this copying is required in order for you to see their art in the first place. Rather, the argument is over whether AI image generators make exact replicas of their training data—and the answer is a resounding “no.” AI image generation is not collage, AI image generation is not photo-bashing, AI image generation is not copying—these tasks are all fundamentally different to the de-noising process performed by a diffusion model. Copying data from one location to another is a task that computer scientists solved decades before anyone even thought of trying to get a computer to produce images, and it is the task that makes the Internet we are having this argument over possible in the first place.

Zapata is not alone in this deception; Mother’s Basement attempts to draw a disanalogy between machine and human learning by this same method:

Humans cannot perfectly reproduce the work of other humans without tracing over it or using a mould or stencil. Inevitably, in the process of trying to do so, any human artist will end up applying their own creative interpretation to the style that they’re aping, based on their own unique experiences studying other people’s art, developing their own artistic techniques, and just living life.6

This exact same thing is true of AI image generators—to make the analogy precise, the act of using a mould or stencil is what is analogous to the image-to-image technique. This is a very specific function of these AI tools, and it can indeed be used to plagiarise artists—Mother’s Basement conflates all possible uses of diffusion with this one very specific practice, and in so doing throws the proverbial baby out with the bathwater. When the human artist is “aping” someone’s style, it involves “their own creative interpretation” “based on their […] experiences studying other people’s art”—notice how this description is contrasted with the AI “blending together” or “collaging” other works that it has been trained on. When a human does it, it’s just applying their own unique spin based on works they have studied; when an AI does it, it’s plagiarism of every single work that has ever touched its training set.7

In fact, an artist by the name of Jazza paid $1200 to Fiverr artists to produce artwork based on the same prompts that he gave to Dall-E, and over half of what he spent went to people blatantly and actually photo-bashing with stock images found on Google.8 This stands in stark contrast to the text-to-image AI, which is fundamentally incapable of photo-bashing Google image search results. If this is the quality of work to be expected from commissioning human artists, can you really blame people for turning to AI? If human artists are to put forward this level of incompetence at these prices, I fail to see why I should shed a tear for their loss of income—these are the artists who are going to be out of a job, not those who actually put the fucking work in.

Even opponents of AI image generation can’t help but implicitly admit that these systems do not work by photo-bashing. Hello Future Me tells us that AI art is “entirely disconnected [and] in no way highlights or preserves the original work,”9 but then within that very same video he raises the ethical (and legal) concern that “AI art is theft.”10 But which is it? If the AI-generated images are “entirely disconnected” from their training data, then surely they could be neither theft nor plagiarism. The anti-AI crowd like to eat their cake and have it too on this issue: AI art is both soulless and orthogonal to anything that a human would produce, and it is based entirely on plagiarising and stealing the art of humans. They have not quite landed on which argument they want to make—do they want to make fun of how low-quality and devoid of meaning it is, or do they want to argue that these AI systems produce art of impeccable standards, ripped straight out of the hands of the lowly painter, who is now entirely incapable of getting a job due to the vast enormity of the tyranny of the machines? They can’t decide, so they just do both—that surely won’t lead to any naked inconsistencies, right?

Where AI Does Plagiarise

Now, is it possible for a person to use these AI image generators to plagiarise art? Certainly. Just as it is possible for a human to do so. But, the mere fact that an AI has learned to fill in the gaps using previously produced artworks is not sufficient to demonstrate that such plagiarism has occurred. There are certain features that must be present in any depiction of an orc that both humans and AIs must learn and utilise in their own depictions. That you must look to previous depictions of orcs to learn these features is simply a comment on how knowledge is acquired in the first place. Learning which features show up in orc pictures is not the same as plagiarising said pictures.

Shadiversity points out11 that the examples commonly cited to show AI image generation plagiarising do not use the text-to-image technique, but rather are analogous to applying filters to an already-existing image—a technique called image-to-image. Certainly, it is possible to plagiarise art with this technique, just as it is possible to plagiarise art by tracing over an existing piece in Photoshop—but that does not make Photoshop, or even tracing per se, a tool of plagiarism. Moreover, any claim of plagiarism must be specific—it is not sufficient to assert “maybe someone somewhere out there has at some point generated a piece of art that plagiarises from me.” If you are to claim that AI art is plagiarised, you must link a specific piece of AI-generated art to the specific piece of art that it plagiarises. These non-specific claims must be discarded on their face as epistemic null-statements—a claim of plagiarism presented without evidence may be dismissed without evidence.

Which Aspect is Plagiarised?

Moreover, which specific aspect of your work was plagiarised? Adam Duff of LUCIDPIXUL concurs with me that the AI is not photo-bashing, but is rather learning the process to create art:

[…] you mentioned as well, to kind of segue into the next thing I wanted to mention, […] which was: […] what’s the part of our art that’s being stolen from us? Right? If you’re watching a bunch of people who’ve never drawn anything before in their life, they’ve never trained [themselves] in artistic fundamentals, and we’re watching them. I’m on MidJourney and I’m watching a bunch of people prompt different images–painted or photographic or whatever […]–you’re watching people prompt this out and you’re thinking to yourself: “what is it about that that would be hurtful or bothersome or worrisome to a trained artist?” […] What are they taking—what are they taking for granted? And that is the process. […] It’s the artistic process–the fundamentals–it’s this sacrifice of time and energy—we, as people, at least in society as we’ve known it for a very long time, […] [have] been valued, our skill in general […] is based off of our ability to do something that someone else can’t or isn’t willing to do. It’s our craft, it’s the time and energy that we put into it, and all of a sudden that’s taken away from us, right?12

This is, of course, a slip of the mask—or perhaps a motte-and-bailey approach to the argument: the weaker bailey being the stance that AI is actually photo-bashing or in some other way plagiarising specific components of people’s artwork, which is then abandoned to fall back to the far stronger motte of AI simply learning to emulate the process that humans take to make unique pieces of art. Now, if this motte is indeed supposed to be defending the stance that AI image generators plagiarise–or, in the words of Adam, “steal”–the art of human artists, then am I to accept the stance that it is plagiarism for anyone to implement the same style as another artist in the making of an entirely different piece? This would surely be a very hardline stance, and not one that I am aware of any artist taking up—unless perhaps there is some Galambosian13 art collective out there that refuses to paint anything without first compensating the descendants of some random caveman.

![img](</blog/ai-art/motte and bailey plagiarism.png> "A diagram showing a motte-and-bailey castle with the motte labelled 'and by plagiarism I of course mean that it just does a similar process lol' and the bailey labelled 'AI art plagiarises from artists.'")

This stance would mean the total death of any creativity, lest one be accused of plagiarism and theft. “I’m sorry, van Gogh, you want to do your own take on The Sower? Tough luck, sunshine. Jean-François Millet owns that concept.” “What’s that budding artist? You want to follow along with Bob Ross as he paints that cabin? You are literally Satan, none of that.”

![img](</blog/ai-art/van gogh millet alt.png> "'The Sower' by Jean-François Millet (left) vs. by Vincent van Gogh (right)")

So, far from AI art dealing a killing blow to human expression, what Zapata and his fellow AI worriers should be concerned about is the nonsense proposal from their own side that the method of artistic expression be somehow protected as a monopoly of the first man to discover it.

Analogy to AI Image Classification

On this point of plagiarism, we can take an analogy from AI image classification. If I train an AI on many millions of pictures and then hand it a new image, and it tells me that there is a 93% chance that the image I showed it was a plane, has this AI “stolen” the identification of percepts from humans? It is certainly true that a human must at some point have gone through the work of identifying many pictures of planes, but this does not demonstrate that this ability was stolen or plagiarised from them by anyone who looks at their work to figure out what a plane is.
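The “93% chance” figure in such a classifier typically comes from a softmax over the network’s raw class scores. Here is a toy sketch; the labels and scores are made up for the example, not taken from any real model:

```python
import math

def softmax(scores):
    # Convert raw class scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["plane", "bird", "kite"]
# Hypothetical raw scores a trained classifier might emit for one image.
probs = softmax([4.0, 1.0, 0.5])
```

With these made-up scores, `probs[0]` comes out at about 0.93: the learned features push the “plane” score up, and nothing in this pipeline stores or replays any training photograph.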

Zapata on the Disanalogy Between Human and AI Art

His Argument

Zapata opposes this analogy between AI and human learning, claiming on the Art Cafe podcast that AI systems learn in a categorically different way to humans:

I just wanna remind people–and again, this is more philosophical–but it’s like: when something like Stable Diffusion is trained, it’s locked in place, which is unlike human learning, right? We’re always adapting, transforming, our moods affect everything—Stable Diffusion is just: it is what it is, as it was output, […] and it’s going to, if you give it the same prompt and the same random seed on any given day, it will produce the exact same output image. […] There’s nothing stopping someone from just running all of Stable Diffusion and outputting every possible image it can make except for an impossibly huge electricity bill—that’s the only thing that’s stopping someone.

So in a very real sense, […] when you are interacting with one of these systems, you are mining—you’re going in and you’re trying to locate yourself, using the prompts, within a space that is already defined; it’s already objective. It’s […] like those pieces of gold are already in the river and the people who are prompting are sifting for gold, and there’s a lot of fool’s gold and they’re throwing it around. […] It’s not generating unique things every time that you prompt—when they shipped Stable Diffusion it was done.14

[…]

All of that is just me riffing a little bit on this–what I consider–[very] erroneous and spurious argument that these things learn the way we do. It’s like, for that to be the case, you need to assume that consciousness and self-awareness have nothing to do with learning, which is like: I don’t assume that! I think that the fact that there’s someone here to learn is vital to the act of learning—there’s nothing there to learn anything in these programs. I don’t think the learning they are doing is learning as we are familiar with it at all; I don’t think that makes any sense.

[HOST]: Until I hear that there [are] general artificial intelligence algorithms out and working I would never trust this argument to be honest.15

[…]

If somebody said: “ok this is an actual conscious robot,” then all of those arguments are on the table, but that’s also the same day that I’m not talking about art.16

Steven’s argument here may be broken up into two parts:

  1. AI image generators don’t learn to produce art because upon the completion of training they are deterministic with respect to the mapping between input and output, and;
  2. AI image generators don’t learn because they are not general intelligences.

“AI image generators don’t learn to produce art because the Latent Space is Deterministic”

On this first argument, I simply do not see how it refutes the point that these systems learn—surely it is commonly understood that any learning such a system performs is done during the training stage of its production, not once the model has already been trained. Whether a pre-trained model is capable of continuing to learn is completely irrelevant to the question of whether a computer program can in principle learn, and to whether these specific programs have learned. That the mapping between input and output is set in stone at any given stage of learning does not negate that learning has occurred. Now, it certainly is true that these programs are not identical to humans in their capacity for learning–humans possess free will, after all–but lacking free will is a feature of any non-conceptual consciousness. Dogs are not generally understood to be conceptual beings–beings with reason–and yet they can definitely learn a trick or two.

Moreover, Steven is falling into the classic error in philosophy, where he conflates the potential with the actual.17 The latent space is a mathematical description of every potential image that can be generated by a given model, these images are not actualised until said generation has actually occurred. This is the exact error committed when people use Zeno’s Paradox to disprove the possibility of motion: the idea is that to move from A to B, you must first move half the distance, and to move half the distance you must move a quarter, and so on ad infinitum. Therefore, they say, it is impossible for anything to go anywhere. What this amounts to is saying, “hey, I can continually subdivide this line forever,” which is true, but this is describing a potential—I have the potential to keep listing off smaller and smaller fractions, but said fractions are not actual unless and until I actually get to them during my listing.
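For completeness, the arithmetic side of Zeno’s paradox has a standard resolution: the infinitely many potential subdivisions sum to a finite whole, so traversing all of them covers only a finite distance (and, at constant speed, takes a finite time):

```latex
\sum_{n=1}^{\infty} \frac{1}{2^n}
  = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
  = 1
```

The subdivisions are potential descriptions of the journey; the journey itself is one actual, finite thing.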

We can analogise the latent space here to the Library of Babel, a website hosting a mathematical model which has mapped out every possible string of 3200 characters. Does this mean that whenever anyone types or says anything, all they are doing is pointing to a location in the Library of Babel, thus proving that their thought was unoriginal and that they cannot learn? Surely not. I can tell you that the introduction to Zapata’s video is written verbatim on page 82 of volume 3, on shelf 3, of wall 1 of hexagon 17esdrawgcz... etc.18 That I can point to a mathematical description of Zapata’s video does nothing to demonstrate that said video actually existed prior to him making it.
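The point can be made concrete with a sketch (not the Library of Babel’s actual scheme, just a trivial illustrative bijection): every fixed-alphabet string has a pre-existing “shelf address,” yet nothing about that address pre-empts anyone writing the text.

```python
# Every string over a fixed alphabet has a unique index in the
# enumeration of all strings -- an "address" that exists mathematically
# before anyone writes the text, like a hexagon in the Library of Babel.
ALPHABET = " abcdefghijklmnopqrstuvwxyz,."

def address(text):
    # Read the text as a base-29 number, one digit per character.
    n = 0
    for ch in text:
        n = n * len(ALPHABET) + ALPHABET.index(ch)
    return n

def text_at(n, length):
    # Invert address(): recover the string stored at index n.
    chars = []
    for _ in range(length):
        n, digit = divmod(n, len(ALPHABET))
        chars.append(ALPHABET[digit])
    return "".join(reversed(chars))
```

`text_at(address(t), len(t))` round-trips any such string: its address “existed” all along, but the text is not actual until someone actually produces it—exactly the potential/actual distinction above.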

The Analogy to Photography

Further, even if the latent space were actual—if, in order for the model to function, it had to literally go through the process of generating every possible image—this would still not make AI image generation plagiarism, nor would it negate the artistic worth of such generation. Surrealist19 artist Miles Johnston analogises such an actual latent space to photography:

I’m almost starting to see generative art as more analogous to photography, where the latent space[,] […] you know, the space of possible images, is almost a feature of the natural landscape, and these are different ways of going in and finding imagery from it. In the same way photography–you know, you can take a lot of crappy photos–but people have, over […] its lifetime, raised it to a degree of artistry that, at this point, I think you would have to be a bit of a nut to not respect—but it’s not hard to find artists who were extremely threatened by it.20

Now, the obvious attempt to draw a disanalogy here would be to point to the fact that the AI image generators can produce their artworks only on the backs of previously made human art. This does not break the analogy, however, as there exists a certain subset of photography that is of things which have been shaped by the hands of men. Does a photographer lose his artistic credentials for photographing a forest where the trees were shaped by medieval loggers?21 If you photograph a moor, are you plagiarising the bronze-age farmers who made it that way? Is it that everyone who has photographed a skyline has in some way wronged the architects of each of the buildings that make it up? Certainly not: that a physical landscape was shaped by man does not make it non-art to photograph it, and the same applies to an actual latent space.

Steven elsewhere criticises the analogising of AI image generation to photography:

Maybe this is kind of tied to how I view […] the appeal to history of, like, “oh well, what about when photography came along with oil painting?” So, the reason why I am not compelled by that comparison is that photography and oil painting were parallel and independent technologies. So what do I mean by that? I mean that we can imagine photography having been invented if oil painting had never been invented. Right? There’s nothing about oil painting that actually intersects with photography—if it had never occurred to anyone to create images with oil paint, photography might have still occurred to somebody. So I consider them completely parallel and independent technologies. So, there was no scientist taking cameras into oil galleries and photographing the paintings on the walls in order to produce cameras.22

Now, Steven has indeed pointed to a legitimate difference between photography and AI image generation, but, again, what needs to be kept in mind is whether this is a relevant difference. In attempting to show a disanalogy between $A$ and $B$, it is not sufficient to point to some aspect that they do not share; you also need to show that said aspect is central to the point being made. So, what exactly are people saying when they bring up this point about photography? We have seen at least one example of this from Miles Johnston above, and in that instance Miles was relying on the pre-existence of the latent space being analogous to the pre-existence of some physical landscape that one photographs—the point being that said pre-existence does not negate that capturing it, or “mining it out,” is still a legitimate form of art.

However, this is not the only way this analogy is employed, and I do not believe it is the way that Steven is getting at here in particular. So, another argument that I see this analogy used to support is in response to artists complaining about job losses from AI, a typical exchange might go as follows:

[ARTIST]: The use of AI image generation to produce works of art is immoral because it puts artists like me out of a job.23

[AI PROMPTER]: Well, the same can be said of photography—that put many oil painters out of their jobs, but now there is a whole new art form which came out of that.

Here, the similar aspect which the analogy rests upon is clearly that the new method of art production makes it harder to profit from older methods. This is clearly true, and Steven agrees with me on the point that photography put oil painters out of their jobs, which would make the analogy stand. So if I am to be charitable to him, I am forced to assume that he is not responding to that specific usage of the analogy—but I am running out of ways to apply this argument. Steven has said multiple times across multiple videos that he is uninterested in the argument over whether AI art is art, so it isn’t that, but there is very little left: it’s not about job losses, it’s not about AI being plagiarism because it is built on the backs of human artists, and it’s not about AI image generation not being art. Perhaps I have missed some extra argument that this analogy is employed within, but I certainly can’t see one within what Zapata says. From what I can see, either his drawing of a disanalogy fails, or he has not actually done said drawing—either way his argument falls.

“AI image generators don’t learn because they are not general intelligences”

So, now I may move onto Zapata’s second point, that AI image generators don’t learn because they are not general intelligences. This is simply nonsensical—what is a general intelligence if not one that learns generally? That we need a modifier term to identify general learning surely indicates that there is such a thing as non-general learning, i.e. learning done about some specific endeavour. That something does not learn generally does not show that it does not learn, it may well be the case that it learns only about one specific area and no others. This is certainly true for image generation models: they learn how to perform the specific task of removing noise from images. Specific learning is learning: A is A.

Solar Sands on the Disanalogy Between Human and AI Art

His Argument

Solar Sands has an alternative thesis as to why Humans and AI are disanalogous when it comes to the collection of references:

The process of AI art generation, from what I gathered, is more similar to the process of how a human artist operates than it is to a machine that has no capacity for spontaneity, but the systems at their core are still very different. For one, AI and individual artists work at vastly different scales. One artist in their entire lifetime would only be able to study a few-thousand images, whereas an AI would be looking at millions, if not billions, of images. The AI makes thousands of statistical calculations and weighs probabilities—it is not creative, nor is it “intelligent” in the same way a human mind is. AI may make mistakes we recognise as analogous to human mistakes, such as messing up hands—but it also makes mistakes that are completely stupid and no human would ever make. Like signing works with blended signatures, because that’s what paintings usually have, or misunderstanding how an object is constructed, so it can exist in a world with gravity. This is the difference between vast, unfathomable machines, controlled by for-profit companies and the incredibly limited referencing capabilities of your average human artist, who is unlikely to mimic the styles of thousands of artists and produce rip-off art in massive quantities to the point it could replace those artists from which they referenced.24

So, from that I can see two arguments:

  1. that AI is disanalogous to human artists because it references vastly more original works, and;
  2. that AI is disanalogous to human artists because it makes mistakes that a human would never make.

The Disanalogy by Scale

For his first argument, I simply fail to see why exactly the scale is at all relevant. To show there to be a disanalogy between $A$ and $B$ it is not sufficient to point to some difference—in this case the scale of operation—what must be shown is that there exists some principled difference. Solar Sands has seemingly neglected to undertake this task—the only thing I can see within his point here that could constitute elucidating a difference in principle is that humans are “unlikely to mimic the styles of thousands of artists and produce rip-off art in massive quantities to the point it could replace those artists from which they referenced.” So then is the principle that it is bad referencing when the referencer puts the referencee at risk of losing his job? Superior human artists are constantly putting each other out of jobs, as is the case in every industry—if nobody else on planet Earth made any artwork, then I could quite easily swoop in and gain for myself a sizeable income. The failure of this sort of job-protectionism will be elaborated upon further into the video, but even without any of the information there, it seems absurd to suggest that John would be in some way wronging Kate by referencing her work because he thereby increases the competition for artwork containing whatever aspect he incorporated from her. If nobody after Tolkien were allowed to incorporate the fantasy elements that he came up with in future works, I’m sure his books may well have sold many more copies, but that does not establish any moral right for him to enforce a monopoly over those ideas. That is all to say that any level of referencing from an artist gives consumers more choices of which artist to commission to produce the referenced aspect—if only one artist on Earth is allowed to do cubism, then anyone who wants a cubist painting must go to him. Thus referencing by its very nature threatens other people’s jobs.

Moreover, Solar Sands seems to be vastly under-estimating just how much referencing your average human does—from the moment that we are born we are being hit with a constant stream of sensory information which we are able to rationally integrate into a wide array of concepts. Sure, most of the sense data that a human receives is not of art pieces, but the same is true of these AI training sets. Recall that these are text-to-image systems, not text-to-artwork—in general an AI image generator is far more adept at producing things such as “table,” “chair,” or “lamp” than it is at composing artistic masterworks.

img

The Disanalogy by Mistakes

Now for Solar’s second argument, namely that AI is disanalogous to human artists because it makes mistakes that a human would never make. On this point, he cites two examples:

  1. AI signing works with “blended signatures,” and;
  2. not understanding the correct composition of objects given the law of gravity.

I go over the signature example specifically later in this video when discussing over-fitting, but that detail is not required here in particular. The general principle behind these examples that Solar provides is that the AI does not understand certain things that humans understand, and therefore it is using references in a fundamentally disanalogous fashion—and, again, Solar seems to have neglected the part of the argument where he shows why this is a relevant difference. Do the medieval artists who had no clue how to draw humans with correct anatomy also fall foul of this, or does it apply specifically to not knowing what signatures or gravity are? The least charitable interpretation of what he is saying here would be that it is literally only those two mistakes that make the relevant distinction between man and machine, but attempting to broaden the principle leaves one with equally absurd outcomes. If bad referencing occurs whenever the referencer doesn’t understand some law of nature, not just gravity, then every human who has ever made any art is a bad referencer up until the point that we discover a theory of everything.

To steelman his point as best as I can, we can narrow the focus from not understanding just any law to specifically not understanding those laws that are perceptually given to man without any aid from tools or instruments—the sort of law that would say “when you throw a rock it will go where you threw it,” or “fire is hot.” However, this is still a poor basis for any sort of ethical distinction between good and bad referencing—for how would this theory account for Martians or other aliens, which might have completely different sensory apparatuses than us, and may require massive chains of deduction and experimentation to even know something as simple to us as the fact that light exists? Heck, we don’t even need to consider aliens here: there are a great many blind humans who lack any direct sensory perception of light—are these people therefore bad referencers because they don’t have direct and implicit knowledge of such phenomena, and might therefore make elementary mistakes with respect to its depiction? Surely not. What if someone who was severely mentally handicapped were painting along with Bob Ross—would they become evil the moment they try to copy his motion of making a signature, assuming this individual had no understanding of what writing is or does? Such a statement would certainly strike most people as absurd, and yet it is the argument in its strongest form based on what is currently present in Solar’s video.

“There is Nothing it is Like to be Stable Diffusion”

Another point brought up in this discussion,25 in seeking to demonstrate a disanalogy between the way that humans and AI learn art, is the assertion that there is nothing it is like to be Stable Diffusion, i.e. Stable Diffusion is not conscious, and therefore it must be doing things in a fundamentally disanalogous way to humans. On this point, I present to you ripgrep. ripgrep is a command line utility that allows you to recursively search through every file in a directory for a given string of text. I have here a directory with files representing the rooms of my house:

house/
├─ bedroom-1.csv
├─ bedroom-2.csv
├─ hallway.csv
├─ living-room.csv
├─ kitchen.csv

Each file contains a list of objects within that room, so if we look at what is in bedroom-1, we can see that I have a bed, a desk, and a computer (the file contains this string of text: bed,desk,computer). Imagine what I would do if I had lost my phone, and I wanted to search through each of these files to find the room where it is. Clearly what I would have to do is open up these files one at a time, and look through each item in the list that they contain, until I find an entry that says “phone.” I go about doing this and sure enough, it was in the living-room. I can accomplish the same thing by using ripgrep: what I do is I type rg phone, rg standing for ripgrep. And sure enough, ripgrep provides the same information, albeit a lot faster. Now, is ripgrep doing anything fundamentally different from what I did? Surely not: when I type rg phone into my terminal here, ripgrep looks at the contents of each file, and searches through the whole thing piece by piece looking for the string “phone,” which is exactly what I did. Certainly, there is nothing it is like to be ripgrep—ripgrep is not conscious—but that ripgrep is not conscious does not imply that the task it is performing is fundamentally distinct from the one a human would perform. What has been done is that BurntSushi, the creator of ripgrep, has gone through and coded out the steps that a human would take to accomplish the task, which then provides your computer with a set of instructions such that it may do it autonomously. This is the entire purpose of writing computer programs in the first place—to take some task that humans are doing, write out every step of that process, and then get a computer to follow those instructions.
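The steps described above can be written out in a few lines of Python. This is a toy sketch, not ripgrep itself (ripgrep also recurses into subdirectories and is far faster); the function name and the file contents for the other rooms are hypothetical, matching the house/ directory shown earlier:

```python
import os
import tempfile

def find_string(directory, needle):
    """Open each file in `directory` and scan its contents for `needle`,
    returning the names of the files that contain it -- the same steps a
    human (or ripgrep) would take, just written out for the computer."""
    matches = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        with open(path) as f:
            if needle in f.read():
                matches.append(name)
    return matches

# Recreate the house/ directory from above, with hypothetical contents
# for the rooms other than bedroom-1.
house = tempfile.mkdtemp()
rooms = {
    "bedroom-1.csv": "bed,desk,computer",
    "bedroom-2.csv": "bed,wardrobe",
    "hallway.csv": "coat-rack,shoes",
    "living-room.csv": "sofa,tv,phone",
    "kitchen.csv": "fridge,oven,kettle",
}
for name, contents in rooms.items():
    with open(os.path.join(house, name), "w") as f:
        f.write(contents)

print(find_string(house, "phone"))  # -> ['living-room.csv']
```

Like `rg phone`, the function looks through every file’s contents piece by piece; the only difference from the manual search is that the steps have been spelled out so the computer can follow them autonomously.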

AI Imgen Over-Fitting

Stable Diffusion Copying from LAION

The anti-AI crowd do have their own smoking-gun when it comes to this question of plagiarism, namely the problem of neural network over-fitting. Their holy grail on this front seems to be a 2022 paper26 which catalogues matches found between generated images and the training data. To be clear, these are images not generated through the technique of image-to-image, but rather through the standard text-to-image.

Of note here is that the paper found a very clear correlation between the size of the training dataset and the prevalence of replication:

We show that for small and medium dataset sizes, replication happens frequently, while for a model trained on the large and diverse ImageNet dataset, replication seems undetectable.

[…] we methodically explore diffusion models trained on different datasets with varying amounts of training data. We observe that the diffusion models trained on smaller datasets tend to generate images that are copied from the training data. The amount of replication reduces as we increase the size of the training set.

[…] typical images from large-scale models do not appear to contain copied content that was detectable using our feature extractors […].

This can be made concrete. Pictured here, for models trained on different amounts of data, are the generated images that come closest to an original training image:

img

The exact matches are highlighted with green, and the close matches with blue. What this shows is that a dataset as small as 30,000 images is sufficient for there not to be even any close matches between training data and generated images—and note: these are the absolute closest matches that they were able to find. The version of Stable Diffusion that was tested in this paper was trained on 2 billion images, and the current version27 of the LAION dataset at the time of writing consists of over 5 billion images.

Shown here is the similarity between the generated data and the training data for the different model sizes:

img

The way to read this is that the further to the right the grey distribution lies, the more replication is occurring. By 30,000 training images, it becomes clear that the average image generated by the model tends not to resemble the training data, with the training data being more similar to itself than the generated images are to it (as is indicated by the green tail to the right). But don’t take my word on it, the authors agree:

The histograms of similarity scores computed using the full dataset model are highly overlapping. This strong alignment indicates that the model is not, on average, copying its training images any more than its training images are copies of each other.

So as the training data increases in size, the degree to which the model will spit said data back is drastically reduced, thus I ask a question of you: why is the solution proposed by the artists-against-AI to reduce rather than increase the amount of training data? If you are concerned about AI image generators over-fitting, then you should advocate that an even greater quantity of images be used to reduce this effect.
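For intuition about what these similarity scores measure, here is a toy sketch assuming cosine similarity over feature vectors. The paper’s actual feature extractors are learned models; the 512-dimensional random vectors here are purely illustrative stand-ins for image features:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity score in [-1, 1]: values near 1 suggest the two
    feature vectors describe near-identical images (a likely copy)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Hypothetical feature vectors: one "training image" and two
# "generated images", one unrelated and one a near-copy.
train = rng.normal(size=512)
unrelated = rng.normal(size=512)
near_copy = train + 0.05 * rng.normal(size=512)  # small perturbation

print(round(cosine_similarity(train, unrelated), 2))  # close to 0 (dissimilar)
print(round(cosine_similarity(train, near_copy), 2))  # close to 1 (near-copy)
```

A histogram of such scores over many generated images is, roughly, what the grey distributions above depict: the further the mass sits to the right, the more of the generated images score like near-copies.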

However, the authors do very clearly highlight edge-cases where a model as large as Stable Diffusion is “capable of reproducing training data:”28

img

The authors provide the prompts used for each of these images, so let’s look at these a little closer:

img

For the first one, I would put it to you that if I asked a human artist to give me a picture of a “CNN Style” “Golden Globes best fashion on the red carpet,” they would provide me a picture of a woman standing on the red carpet at the Golden Globes, which is precisely what the AI is doing here. Which artist is being plagiarised in this instance is certainly unclear—the researchers asked the AI to give them a picture of someone standing on the red carpet with the “best fashion” at the Golden Globes, and they got just that. Just as an AI generated picture of an apple will look a whole lot like an apple, so too here does an AI generated image of someone on a red carpet look like someone on the red carpet.

The second and fifth images are of a similar sort: the researchers are asking the AI to depict the abstract form of two video games, Bloodborne and The Long Dark respectively. Is it particularly surprising then that the AI produced what humans consider to be a very widely accepted abstract representation of a video game? The first is literally the game’s poster for crying out loud!

We hear from Karla Ortiz that “when you type bloodborne […] you get almost an exact replica of the original marketing material.”29 Ok, so she is apparently against making almost exact replicas of marketing material, and I am a good little AI hater who will gladly snitch on any intelligence that disobeys queen Karla. I happen to know of one such intelligent art generator which is targeting Bloodborne marketing material, and it didn’t just make an almost exact replica: this thing was capable of copying the image pixel-by-pixel with zero flaws. I speak, of course, of Karla herself—she copied the exact marketing material, pasted it onto this slide, and then had further copies made for every single person on the Zoom call and everyone who has watched her talk on YouTube afterwards.

img

Boy, if an AI making almost exact copies is a crime worth demolishing the most advanced artistic tool known to man, then Karla is surely worthy of the pillory. To re-iterate—even the very worst examples of over-fitting on Stable Diffusion, the most edgy of edge-cases, are not copies of the data. They are close but not the same—not even to the human eye. If I tried to copy and paste the Bloodborne poster and it came out that severely warped, I would be filing a GitHub issue with whatever software is responsible for the copy and paste function—clearly it is broken and is corrupting the data going through it. And—again—this is for the worst case scenario of over-fitting on a baby algorithm.30

What you will notice with the selected set of over-fitted images is that for the most part the sources are very popular and widely-shared: whether it be game posters or marketing material. The most egregious example is definitely that of the “Canvas Wall Art Print” photos, which seem to be using the same (or at least a similar) sofa for each:

img

Over-Fitting is not Expected Behaviour

But, as explained, all of these are edge cases, i.e. not the normal behaviour of the model. This is verified within the paper:

img

Recall that the further to the right the grey distribution is, the more similar the generated images are to the training data. What this shows is that the generated images tend to be far less similar to the training data than the source images used are. So over-fitting is definitely not a problem endemic to the use of Stable Diffusion as such; rather it is a minute edge-case that crops up every so often, and less so with each new version. The examples of replication they show are all found within the “top 1.88 percentile.”

A more recent paper31 from this year attempted to extract training images from a version of Stable Diffusion trained on only 160 million images32 (which is quite a bit smaller than the full LAION dataset), and after generating a whopping 1 million images across 16 separate models they found only 1280 examples—which brings them to a staggering 0.128% rate of duplication. So, to put that in perspective, imagine a tortoise and a hare doing a 100m sprint—by the time the hare gets to the finish line the tortoise has moved only 12.8 cm or about 5 inches, which is roughly how big an ornate box turtle is when fully grown. So, the hare has finished the entire race, and the box turtle has barely left the starting line—the turtle’s progress relative to the race is the equivalent of the number of replicated images found in this paper relative to the total number generated.

Another way of looking at this is that 0.128% of the 160 million training images comes out to just over 200 thousand33—which means that the number of images the researchers generated in comparison to the total training dataset is about five times larger than the number of duplicates found is in comparison to the number of images they generated—and that is for a set of prompts that were specifically selected to get the highest level of duplication,34 as against the arbitrary prompting which would occur for general usage of these systems (your average MidJourney user is not specifically attempting to engineer his prompts to get the closest matches to training images).
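The figures in the last two paragraphs can be checked with a few lines of arithmetic, using only the numbers reported above:

```python
generated = 1_000_000      # images generated by the researchers
duplicates = 1_280         # near-duplicates of training data they found
training = 160_000_000     # training-set size of the tested models

# Duplication rate: 1,280 out of 1,000,000 generated images
dup_rate = duplicates / generated
print(f"{dup_rate:.3%}")  # -> 0.128%

# 0.128% of the 160 million training images: "just over 200 thousand"
print(round(training * dup_rate))  # -> 204800

# The generated set, as a fraction of the training set, is about five
# times larger than the duplicate set is as a fraction of the generated set.
print(round((generated / training) / dup_rate, 2))  # -> 4.88
```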

Over-fitting is a real problem with neural networks—it defeats their entire purpose—and a lot of resources are funnelled into reducing it. And the above examples (in Stable Diffusion Copying from LAION) are from a model, namely Stable Diffusion, that the authors themselves claim is particularly over-fitted:

Data replication in generative models is not inevitable; previous studies of GANs have not found it, and our study of ImageNet LDM did not find any evidence of significant data replication. What makes Stable Diffusion different?

Heck, the fact that we even need a separate category of “over-fitting” sure does a pretty good job at highlighting the fact that this is not standard behaviour. If the expected output from such an AI system truly was, as the anti-AI artists claim, too close to existing artwork, then they would require no such concept as “over-fitting” to use as a smoking gun or otherwise. That we have “over-fitting” indicates that we have over-fitting as against normal-fitting, and perhaps even under-fitting. These categories would not need to be, and never would have been, split up if everything were over-fitting. A perfectly over-fitted Stable Diffusion would not be an AI image generator, it would be a search engine much like Google Images. It is the artists who made such a perfectly over-fitted AI—the haveibeentrained35 website allows you to search for images within the training data; this would be the fully over-fitted model that they fear.

The “Copying” of Signatures

The anti-AI crowd have here an ace up their sleeve: “oh, so maybe the papers we keep blindly sharing don’t show what we think they do, but you don’t need an in-depth look to prove over-fitting—all you need is to point to the signatures. Why does it copy people’s signatures Zulu?” However, unfortunately for the AI-haters it seems that aces are low in this case, because their signature copying examples, insofar as they were not image-to-image, prove the exact opposite of what they think they do.

Take this image:

img

This is a real example used by a real Twitter user to show that AI is copying people’s signatures. Now, could you in one million years decipher what the fuck this says? The first character is easy enough, I think we can all agree that that is a “C,” but the rest is quite a bit more challenging. What are those middle glyphs? A few “I”’s followed by a cyrillic “И”? Are they supposed to be “H”’s? Why is the final one squared-off? It looks kind of like a C but it could also be a weird lower-case “t,” or perhaps an “E” without the middle bar. The Twitter user in question does let us know which artist they think this is taken from, so let’s have a look, maybe there is an artist called CIIИE that I am unaware of:

img

Oh, of course! It was CHUNiE36 the whole time! I feel really silly that I didn’t get that, what with them being a different number of letters, having different fonts, and being differently italicised. They are vaguely similar in that if you squint to the point of barely being able to see a single thing then they both look sort of like signature-shaped blobs—which is, of course, to be expected considering the fact that this is all based on a process of de-noising. But, it is clearly a bit of a stretch to suggest that these are the same signature, if a child tried to get off of school with a forgery of their parent’s signature of that quality, it would not work in the slightest—their teacher may even suspect that some sort of a stroke had occurred. For a community that will point to a misshapen hand or inconsistent lighting at the drop of a hat, it is strange that they would be the #1 champion for AI quality in this area.

img

![img](/blog/ai-art/shadiversity recolouring reid complete redo.png “A graphic showing improvement of AI image-to-image over manually coloured art”)

Concept artist Reid Southen claims that Shadiversity’s AI-assisted re-colouring of an old character is a “complete redo”37—that it is totally different such that Shad cannot even properly consider it to be his work. But this is clearly orders of magnitude more similar to the original than garbled nonsense being interpreted as signatures—again the anti-AI crowd have selective vision when it comes to pointing out flaws, similarities, etc. This is a “complete redo” and AI art is plagiarism—this is clear doublespeak.

![img](/blog/ai-art/getty images.png “An example of the Getty Images watermark being copied from their legal complaint against Stability AI”)

A far stronger example on this count is that of Getty Images’ watermark being emulated by Stable Diffusion outputs. At least here you can tell that it does indeed vaguely resemble the Getty Images watermark, but only because it is so ubiquitous. Are we to believe that there is somewhere on the Getty database a stock photo of conjoined-twin footballers with a severely broken leg? Certainly not. If I were to photoshop an image of footballers into such an abomination, and then scribble a quarter-arsed Getty Images watermark over top, would I be plagiarising from them? Surely not—not even if I directly copied stock photos that they had captured. After all, they do not have an artistic monopoly on photographs of footballers, or weddings, or anything else. Of course, Getty are not making the plagiarism claim; they are attempting to make the far harsher “theft” claim, which was debunked above:

Rather than attempt to negotiate a license with Getty Images for the use of its content, and even though the terms of use of Getty Images’ websites expressly prohibit unauthorized reproduction of content for commercial purposes such as those undertaken by Stability AI, Stability AI has copied at least 12 million copyrighted images from Getty Images’ websites, along with associated text and metadata, in order to train its Stable Diffusion model.38

But, of course, I had to also copy that image to my computer to read the legal complaint, you had to copy it to your computer to see it on your screen, The Verge had to copy it to their servers to add it to their article,39 and everyone who has read that article has copied it also. At no point in this massive chain has anyone done anything to violate the rights of Getty Images, nor has anyone plagiarised some artistic creation of theirs.

Indeed, on the surface, this is another clear example of over-fitting, but as explained, that you can find instances where a neural network over-fits does not establish that it over-fits as a matter of operation. In fact, Getty Images neglected to include any of the prompts used within their legal complaint: if the prompt was something like “Getty Images photo of footballers playing, one red shirt one white shirt” then this would certainly not be some “gotcha!” moment against the AIs—they would be getting exactly what they fucking asked for, rather than some over-fitted image of a prompt for “footballers.”

The Poison-Pill of Hardline Anti-Over-Fitting

The anti-AI artist may here take a hardline stance against any amount of over-fitting—perhaps a single rotten fruit spoils the whole orchard in this case. But this stance would surely ignore the fact that man too is capable of “over-fitting” in his own way. It is not uncommon to see a headline about some has-been musician suing a far more successful counterpart for utilising similar melodies or chord progressions. Drew Gooden highlights in his video on Yellowcard suing Juice WRLD post-mortem that whilst jamming out he accidentally mimicked a riff from one of his favourite artists.40 There is simply nothing completely unique under the sun when it comes to art—it is impossible for every artist to start from scratch and learn every method, technique and motif on their own. For a song to be recognised as being in a given genre it must adopt certain tropes of the genre: the same goes for painting, pottery, and cinema. You would not begrudge a horror director his orchestral stings, so why begrudge an AI image generator the various methods it has learned from studying human art?

A commonplace argument popularised by Steven Zapata is that the only ethical way to train these models is by first obtaining consent from the creators of the training data before using it. I will address this from three directions:

  1. is this an apt use of “consent;”
  2. was the training data taken by and large without consent, and;
  3. is it immoral to train an AI on data acquired without consent?

First, we have to be precise by what exactly is meant by “consent” in the argument that AI systems require consent from the creators of the data in order to be ethical. Just about every commentator on this topic accepts wholesale the notion of an intellectual property right, which is entirely fallacious. It has been established already that you are not “stealing” anything from anyone by copying some data that they originated—“theft” is simply not an apt concept to apply to AI image generation. On this, it is not unreasonable to expect that “consent” is also being used here in its legal sense—that is to say that artists don’t consent to having their work in a set of training data is analogous to saying that a woman does not consent to having sex with a backalley bum. It should be clear where the disanalogy lies here: in the first case the artist is not deprived of anything, in the second the woman is deprived at least partially of her body, which she owns.

Thus, the argument trivially falls when “consent” is used as a legal term, so to strengthen it we may use a broader notion of consent: namely that consent just denotes some sort of agreement. We can make this concrete by saying that consent is the communication of one’s will to another party, where will means the aim which is sought through purposeful behaviour. This definition of consent does not fail when applied in our argument: it is certainly possible and perhaps even plausible that a person could create an image which they do not want to be used for AI training and for which they have not communicated any sort of desire for it to be used in that way.

Which brings us to our second question: was the training data for these models taken by and large without the consent of the creators of said data? A common argument used by my pro-AI compatriots is that if they use services provided by the various big tech platforms then they will have scrolled past and clicked “I agree” to a terms of service that allows said platforms to collect and sell their data.41 This argument suffers from a fairly major flaw in that, as far as I can tell, the largest datasets used to train image models–which come from LAION–were not generated by purchasing a bank of images from companies like Google or Facebook.42 Rather, LAION analyse Common Crawl data, looking for images with ALT text, which they then save a URL for. Common Crawl–as the name would suggest–got its data by crawling the web—using bots to download and save websites. I may be wrong about this, not being an expert in the inner-workings of large dataset generation, and if this crawling does indeed fall under the terms of service agreement then artists don’t have much of a leg to stand on here.

I will proceed for now under the assumption that the terms of service agreements do not cover this use, because even if they do it is questionable whether clicking “I agree” to something that you have not even read constitutes consent in the first place.43 We AI advocates can safely ignore this question, because there is a far stronger argument to be found here: namely that the artists who are complaining about this all post their art on public platforms, otherwise Common Crawl would not even find it. It is ludicrous to suggest that by posting an image on twitter, or deviantart, or some other such platform, you are not communicating your will for it to be seen on the internet. If I post a letter through someone’s mailbox, I am clearly and openly communicating to them that I want them to read it—I could not rightfully get pissy at them for not obtaining my consent to open and read the letter. Consent was given by my action of posting the letter. Similarly, consent is given for netizens to look upon and copy your data the moment you post it onto the world wide web. Recall that the Internet itself works by copying data from one computer to another—when you look at someone’s ArtStation profile, your computer needs to copy those images from the server in order for them to be displayed. You may be crying out at this point that perhaps these artists consented for humans to look upon and copy their artwork, but not robots. This is ludicrous: could I post a picture to twitter and claim that I only ever wanted it to be viewed by Mongolians after it has already been viewed by non-Mongolians? Would it be that those non-Mongolians who failed to peer into my brain prior to going onto my page have violated my consent? Surely not. The same is true of the robots.

So, even given the most appropriate sense of the word “consent” the training data simply was not taken without consent. Let’s take on an even broader notion here to answer our third question: is it immoral to train an AI on data acquired without consent? We can loosen our understanding of consent even further on this point to account even for this ex-post revocation and pissyness about its use. The artist here is getting angry after the fact that a computer program has learned from their art—would it be right for them to get similarly angry at a human doing this? If I go to an artist’s twitter page and enjoy their art style, would it be wrong for me to try and emulate it? Certainly not: nobody has or can have a moral claim to a monopoly over a given technique or style of art. Musicians sample the work of previous musicians, painters re-paint earlier paintings, and architects build upon prior established motifs: this process of emulating and building upon a prior body of artistic knowledge is crucial to the process of creation.

Solar Sands provides us with a syllogism for why the explicit permission of artists must be granted before their work is used to train an AI:44

  1. those who develop AI should do so in the most legal and ethical way possible;
  2. AI systems and the value they create cannot exist without the data of artists’ work;
  3. therefore artists should be compensated and permission should be granted before their work is used.

On its face, this argument is simply invalid—meaning that the conclusion does not follow from the premises. Even if we grant that AI should be developed legally and ethically and that AI systems cannot exist without prior work from artists, the conclusion does not follow. To highlight this, consider a syllogism of the same form:

  1. those who develop AI should do so in the most legal and ethical way possible;
  2. AI systems and the value they create cannot exist without mothers giving birth;
  3. therefore mothers should be compensated and permission should be granted before any work is used.

The issue here is that there is a missing premise that Solar is–perhaps not maliciously–trying to sneak past us. One additional premise that would make the argument valid is that it is unethical or illegal to develop AI systems without compensating and obtaining permission from everyone who is necessary for those systems and the value they create to exist. If this were true, it would make both the mothers’ case and the artists’ case sound—but in my opinion this is far too broad to capture what Solar is trying to get across. So let us instead try a narrower snuck premise: that it is unethical to train AI on data whose progenitors have not given permission and been compensated.

  1. those who develop AI should do so in the most legal and ethical way possible;
  2. AI systems and the value they create cannot exist without the data of artists’ work;
  3. it is unethical to train AI on data whose progenitors have not given permission and been compensated;
  4. therefore artists should be compensated and permission should be granted before their work is used.

Now, I think I have made it clear both: (1) that it is not per se unethical to use data generated by others without consent, and (2) that artists do in fact implicitly consent to such uses by putting their art online—think of the pro-Mongolian artist from before. So this premise is falsified, and with it the conclusion drawn from it.
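The invalidity claim above can be checked mechanically. In the sketch below—illustrative only; the atom names `p1`, `p2`, `c` and the helper `is_valid` are my own, not anything from Solar’s video—the premises and conclusion are treated as logically independent propositions, and a truth-table search looks for an assignment that makes every premise true while the conclusion is false:

```python
from itertools import product

def is_valid(premises, conclusion, atoms):
    """An argument form is valid iff no truth assignment makes
    every premise true while the conclusion is false."""
    for values in product([True, False], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False  # counterexample: premises true, conclusion false
    return True

# Each claim as an independent atom:
# p1 = "AI should be developed legally/ethically"
# p2 = "AI cannot exist without artists' data"
# c  = "artists must be compensated and asked for permission"
atoms = ["p1", "p2", "c"]
premises = [lambda w: w["p1"], lambda w: w["p2"]]
conclusion = lambda w: w["c"]

print(is_valid(premises, conclusion, atoms))  # False: the bare form is invalid

# Adding the snuck premise "(p1 and p2) implies c" repairs validity:
premises_repaired = premises + [lambda w: (not (w["p1"] and w["p2"])) or w["c"]]
print(is_valid(premises_repaired, conclusion, atoms))  # True
```

The counterexample the search finds is exactly the mothers scenario: both premises true, conclusion false. Only once the bridging premise is added does the conclusion follow—which is why everything turns on whether that bridging premise is itself true.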

AI Took Our Jerbs!

The Jerbs Argument as Anti-Human

The Altruism of the Art-Protectionists

So on all three counts the consent argument against AI image generators fails. But Solar brings up an interesting point here: namely, the notion that, since human artists are required for the tech to work in the first place, they should therefore be compensated and have their jobs protected. This brings me to the job-protectionism argument, which represents the true nature of the “ethical AI” touted by these people. Istebrak, on the LUCIDPIXUL podcast, states the point as follows:

[…] if it’s ethical, we don’t have to compete with it anymore, that’s the only way it can be ethical, […] in my opinion the model of an ethical AI is one that is for the user and the user only […] and if that’s how it’s used then it’s no longer us competing with any entity that stole our art. It’s just us using it in the lab, just like Kelsey said, which is really staying in my mind—using my own art, generating from my own pool of art and whatever else I want to add into it from free images, and then using that as kind of like a lab experiment to see what I can come up with for myself, individually.45

Sam Yang concurs:

[HOST]: When MidJourney first came out, what was your initial reaction to it?

I thought it was like a gimmick, like I think me and many other artists, like they see this thing and it’s like, it produces an almost abstract looking kind of scene that’s like it doesn’t really represent too much. Like you can kind of see hints of representation in there and you’re like: “aw, this is cool, this is sick” you know, “this is like a little gimmick that people can do, can make like cool pieces.” And then like, next thing you know it just gets more and more representational, gets better and better, and it starts to become like: “oh, wait, like they’re trying to replace artists with this kind of technology.”46

This notion is redoubled by Lois van Baarle, when she explains that she only started to have problems with the AI image generators after they got good:

I thought it was very low quality pictures that came out of that, so I thought it was very innocent. And then people started to look into it more and stuff became quickly very advanced […] and then I started to look very differently on it because it is advancing clearly at such a fast pace that I think–as an artist–I can identify AI generated images, but I think that people who are less familiar with […] really looking into the details can’t spot it as quickly. And I think that the average person is unable to tell the difference between some real artwork […] and AI generated. Like it starts to look the same, I think, to the average person and that’s when I started to look into how it’s made and my view changed completely.47

Now, if her issue were truly that AI image generation is immoral because it plagiarises artwork, then this would be true also of the low-quality early models—a low-quality forgery is still a forgery. No, something else is bothering her here.

The essence of what scares artists like Istebrak, Sam, and Lois is having to compete with the AI—they are concerned that consumers will consider the AI to produce a superior product and will as such decide to commission human artists less often. Thus, I may now turn my attention to the second argument against AI image generation: that AI image generators should be opposed because they put artists out of their jobs. The problem here is that this is simply not a sufficient reason to oppose a new technology. That a new, more efficient process of production prevents people from profiting on less efficient processes is not a moral evil. In fact, the opposite is true: it is immoral to oppose such developments on the Luddite grounds that they prevent the less efficient ways from going forth. This is the entire point of the economy: to allow man to achieve his many ends at as little cost to him as possible—opposing this process of economisation is to oppose man’s attainment of his ends, which is to oppose his life.

Underlying this argument that we must preserve jobs is the idea that men should sacrifice themselves on the altar of tradition—that man should not be as fulfilled or rewarded as he could be in order to make other men more fulfilled. The essence of this stance is that you are greedy and wrong for wanting values for yourself, and that you should give them up to people who aren’t you. But why exactly is it greedy for you to want the values that you produce, yet not greedy for the recipients to keep them for themselves? If it is greedy for John to hold onto his wealth, why is it not greedy for Sally to hold onto it when John gives it to her, as the altruist would expect?

There is an answer to this: namely, that you are not deserving of the values you produce precisely because you produced them—that if you produced something you have no moral claim to it and should give it up, but if you didn’t produce something you do have a moral claim to it and should take it. This is an anti-human stance—an all-out war on the people who produce the goods necessary for human survival; it is tantamount to the suggestion that man live his life not by the sweat of his brow, but by a bumbling hope that he stumbles into continued existence. The altruist ethic undercuts itself in this advocacy—if the root of a man’s sustenance is not his reason but the good will of others, then his reason is negated; but it is that reason which must be operative in order to produce the very values that the altruists prey upon. So quite contrary to the cries from the anti-AI horde that their hatred of AI image generation is borne only from a love of humanity,48 they in fact represent a stalwart opposition to man and his happiness here on Earth.

The Entitlement of the Art-Protectionists

There is a notion among anti-AI artists that the AI users feel entitled to art, as explained by Eva Toorenent:

[…] people are very entitled to the work of artists. And I knew that already a little bit, like with difficult clients, but people say: “this is ours now”—the companies and people who generate things. […] I think it’s a disturbing trend, because […] before this thing became global I thought it was more under the surface, and now a lot of built-up feelings of people who didn’t like artists suddenly spews out.49

But it is the exact opposite of this notion that is true: it is the artists complaining about job losses who feel entitled to the money of other people. They cry “preserve our jobs!” without any consideration of who is to pay for those jobs. The artist is not owed anything by anyone—people will commission them if that is something they want to do; your failed business model is simply not my problem. The artist has no moral claim to the hard-earned cash of art consumers. In the terminology of the immortal Bastiat:50 they focus only on the seen, but ignore entirely the unseen—diverting funds towards protecting the jobs of artists means that those funds cannot be spent elsewhere in more productive work.51 What wonders of innovation would they have us be deprived of? Perhaps the cure for Alzheimer’s would be trivial for an AI to come up with—sorry grandma, no remembering your children today, somebody on Twitter wants to be able to draw pictures for a living.

This worship of full employment is simply nonsense—it is the hunter-gatherer society where every able-bodied soul is employed in the task of surviving. It is precisely the mark of a more advanced society that man is able to enjoy more leisure time. A far greater proportion of the population was employed in farming in pre-industrial society—that these farming jobs were “lost” does not imply that we now live in a society of languishing poverty; the exact opposite is true.

The artist should be ecstatic about the expansion of technology—how many illustrators do you think were employed in medieval Europe? “Oh, you want to paint anime girls? What’s an anime!? Get back into the field and get the harvest in before nightfall or we won’t eat!” The truth is that the artist can have his job only on the backs of the many innovators throughout history who have tirelessly and thanklessly produced everything he relies on for his life—and now that this innovation looks to open up artistry to even more people he has the fucking nerve to wish it halted. What worries Dean Van De Walle of the Design Professionals of Canada about AI art? That it has a “low bar to entry.”52 FENNAH explains that the difficulty of an art career is what “separates the […] wheat from the chaff.”53 Duchess Celestia holds that those who cannot make art without AI “shouldn’t be able to make [it].”54 In all of these cases, the artist betrays that it is competition that they are against.

Timnit Gebru, a former AI Ethics Commissar for big tech, explains that AI labs are reluctant to look into any “issues” brought up, so as to avoid litigation.55 She says this as if it were proof of the companies being evil and nefarious. In reality, they want to be innovating, they want to be creating, but they must divert resources into defending themselves from frivolous lawsuits brought forth by the incompetent masses. If you want these companies to be able to honestly investigate the problems that you have, then you should be advocating that Uncle Sam keep his grubby hands away from them, so that they may innovate away every last problem.

And it’s not even as if you can placate these people by obeying their demands and hiring human artists to whom you allow artistic freedom—we know this because they are constantly bitching and moaning about how awful it is that Disney would use generative AI–among various other tools–in the opening sequence of their Secret Invasion. “Why don’t you just hire artists to make it!?” they cry, to which Disney responds that they did hire artists to make it, and that those artists opted to use AI tools to communicate their own creative vision.56

What must always be kept in mind with these proposals from artists that AI be regulated and IP expanded is that they are advocating that violent force be used against people and their legitimately held property. It is far too easy to sit back and cry out that the nebulous “business world” be prevented from training their AI algorithms as they wish—it is not quite so simple as this. A business has no existence unto itself; it is rather an association of individual people—and it is these people against whom the anti-AI artist advocates violence, in limiting their ability to use their own minds and property in ways they see fit. Heck, all that the diffusion network is doing is predicting how much noise is in an image—what exactly is it that they want to violently stop you from doing? Predicting how much noise there is in a picture? Seems fairly benign, right?
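To make that last claim concrete, here is a minimal numpy sketch of the DDPM-style relationship between an image, added noise, and the network’s noise prediction. Everything here is illustrative: the 8×8 array stands in for an image, the single `alpha_bar` for a full noise schedule, and the `predict_noise` “oracle” for a trained U-Net.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "image" and a made-up noise level (alpha_bar shrinks over time).
x0 = rng.uniform(-1, 1, size=(8, 8))
alpha_bar = 0.5  # cumulative signal fraction at some timestep t

# Forward process: blend the image with Gaussian noise.
eps = rng.standard_normal(x0.shape)
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

# The network's entire job is to answer: "what noise was added?"
# A real model is a trained neural net; here a stand-in oracle that
# happens to know the answer plays its role.
def predict_noise(x_t, t):
    return eps  # stand-in for a trained network's output

# Given a (perfect) noise estimate, the original image is recovered
# by inverting the blend.
eps_hat = predict_noise(x_t, t=None)
x0_hat = (x_t - np.sqrt(1 - alpha_bar) * eps_hat) / np.sqrt(alpha_bar)

print(np.allclose(x0_hat, x0))  # True: denoising inverts the noising
```

In a real generator the noise estimate is imperfect, so the inversion is applied iteratively, one small step at a time, starting from pure noise—but the quantity being predicted at every step is exactly this: how much noise is in the picture.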

It is all well and good to point to the individual artists who will lose their jobs with the coming competition, but why exactly are their lives and livelihoods more important than those of the people who will prosper with this new innovation? Fundamentally, these concerns over job losses strike me as crocodile tears: “woe is me, people are not willing to support me, therefore my competition must be crushed under the boot of the state.”

Moreover, the restrictive policies being advocated by these people in the name of artists are not a universal benefit. Rather, these policies, like all protectionism, benefit the least efficient producers of art at the expense of everyone else. Both the consumers and the producers of AI art are robbed of their mutually beneficial arrangement so that the less effective artists may be satiated.


We may also tease out the true nature of the anti-AI claims on this front by applying their arguments to human artists: if it is immoral to produce AI art on the grounds that the AI is more efficient, it must also be immoral for more efficient human artists to make artwork. This hatred of innovation and efficiency in the name of jerbs is nothing more than an advocacy of a race to the bottom—how dare Bob Ross produce that painting in only 30 minutes!? It took me over an hour for a worse end-product: those of us who are bad at making art must band together against the oppressive chains of the successful. If the people who are better at making YouTube videos than I am all stopped or knee-capped their work, then I could more easily rise in the ranks. It is selfish for them to prevent me from achieving this success!

So, applied consistently, the art-protectionism is a millstone not only around the neck of producers of AI-generated art, but also around those of the more able human artists—all at the expense of the consumer of art. This is quite a high price to pay for the jobs of the least effective artists, and it is a price not being paid by them.

The brilliant economist Murray Rothbard called out this very same attitude when applied to protecting inefficient American firms from Japanese competition:

Take, for example, the alleged Japanese menace. All trade is mutually beneficial to both parties—in this case Japanese producers and American consumers—otherwise they would not engage in the exchange. In trying to stop this trade, protectionists are trying to stop American consumers from enjoying high living standards by buying cheap and high-quality Japanese products. Instead, we are to be forced by government to return to the inefficient, higher-priced products we have already rejected. In short, inefficient producers are trying to deprive all of us of products we desire so that we will have to turn to inefficient firms. American consumers are to be plundered.57

The same applies to our anti-innovation artists: they seek to permanently reduce the standard of living of everyone else so that they may stagnate and never have to improve the quality or efficiency of their work—and then they have the gall to label everyone they drag down as uncaring egotists58 for not bowing to their whims and willingly bathing in the recession that is their professional lives.

The Marxism of the Art-Protectionists

The art-protectionists’ hatred of wealth and prosperity is laid naked in their overt Marxism, as exemplified by Istebrak:

Mostly my issue with the ethical aspect is just my issue with all of that excessive capitalism, consumerism, kind of stuff, where they are trying to monetise everything, and everything is just getting–you know–to be a little more tasteless.59

They tell us that capitalism is evil and is ruining everything, and of course therefore advocate for the destruction of competition and the monopolisation of ideas via IP. We are told by Philosophy Tube that what ethical AI “really means” is “digital socialism,”60 and that “there is no ethical AI under capitalism.”61 Istebrak is not alone on this panel in her hatred of human flourishing; this anti-profit message is redoubled by Adam Duff and Antonio Stappaerts in a discussion over fans sending them AI-generated art based on their work:

[HARDY FOWLER]: I should clarify that the client who did that with my work was like completely well-meaning, it was sort of like they were actually excited about it as just an experiment of what’s possible—it wasn’t done maliciously, but at the same time it seems like they hadn’t considered how profoundly weird and unsettling that would be to the artist. […] I bet the guy who contacted you, Adam, just sort of: “isn’t this a neat thing I was able to do?” […]

[ADAM DUFF]: He’s a fellow artist, he’s actually an artist […] but yeah, they did it innocently. And they said “oh, I’m not monetising this, or publishing it, I just wanted to show you what I did,” right?62

[…]

[ANTONIO STAPPAERTS]: To Adam’s point: as long as it’s not monetisable. Because you know, you say Adam, if it’s diluted, that’s all fine as long as people are not able to profit off of what you created. Because if they’re like, “oh, we can take 20% Adam, 10% Steven, 30% Hardy, […]”–even if it’s very generic, that’s my main issue actually with this AI art–the general public loves it. They really don’t care. They love it—so it’s highly monetisable. […] They don’t know the difference, […] they just love the dopamine hits, they’re consumers, literally. […] So as long as it’s not monetisable, sure.63

It is quite frankly rich to hear the art-protectionists speak as if they love humanity and human flourishing whilst so clearly hating any sort of profit. What exactly does their ideal society look like? One where no man is allowed to profit unless the sacred creed of mystic art sages says that he is allowed to? This is absurd—again, keep this fact in the front of your mind when confronted with such arguments: they want to use violence against you for creating art with your own property, and for trying to improve your own standard of living by yourself. On their view it is actually evil for you to dare to better yourself; you should rather live your life in total sacrifice to the Cronenberg monster of Art.

The Marxism is not contained within this group—Timnit Gebru makes sure to stuff ample references to the “exploitation of the workers”64 and the oppression of the “lower classes”65 into her discussions of AI ethics. She makes sure to inform us that “technology is political”66—of course, we just need to make sure that we are inserting the “correct” politics (read: Marxism) into the AI systems. They have no principled opposition to AI image generation; they just want to be the ones in control, rather than the actual producers, the actual fountainheads behind this technology.

Karla Ortiz tells us that we should “be kind to individuals, ruthless to corporations.”67 From where does she derive this ethic? She was kind enough to tell us:

So, when we are talking about […] individuals, […] there has to be a kindness to it because […] they’re driven by their passions, they’re human, they might not understand what’s right, what’s wrong, because their frame of mind is completely different, and that makes sense. But a corporation as a whole is still a combination of people driven by [a] profit motive […]68

Ok, so she acknowledges that a corporation is just a combination of people—thus it is a specific type of person, not a non-person, that she is advocating people be ruthless towards—and the reason we should be ruthless to them is that they want profits. It is the same old bromide—“profit is evil,” “sacrifice yourself,” “don’t better your own life,” “live for others,” “don’t be selfish.”

This attitude is also plain to see in the protectionists’ big gotcha! against Stability.ai, creators of Stable Diffusion—a large portion of Zapata’s video on the topic is dedicated to explaining how they are a for-profit company using the cover of a non-profit to “launder data” for use in their AI programs. These people say “for-profit” as if they were spitting venom—the start and end of the argument is that Stability.ai is profiting; this is supposed to be more than enough to demonstrate that they must be evil.

Evan Conrad, an artist69 who is personally involved in the Silicon Valley AI developer-sphere claims that the motivations of the AI developers are entirely pure:

Steven has this thing in his video that I think maybe misunderstands what all of these things are. A lot of these projects are not made by like […] standard money interests—they’re made by people who legitimately think they are building utopia. And I’m relatively confident that they’re building at least something very large that might lead to utopia. They are quite confident that they are building AGI—and that’s probably true. And when you are that confident that you are building the most important technology in the world it’s just like not your motivator any more. Like, when […] you’re really really confident that the thing that’s gonna happen is that you’re gonna make all prices go to zero, and you’re gonna end up in this world in which everything is like free—you like look over at your kids and you’re like, “oh my god! If I don’t do this, like how shitty would it be for me to like not do this for my future children? […] make a world where they like never have poverty, […] shit is just all free, and they […] don’t have cancer because we cured cancer with it.” […] if you actually believe that, the money just doesn’t really matter as much.

And so most of the AGI labs […] are research projects—they like started in research, they like were founded as research projects, […] they don’t really hire people whose job it is to make money. I’m the person whose job is to like make money—that’s like the thing that I normally do. […] But like, from my perspective, they don’t seem to hire those folks, they seem to hire folks that genuinely think they are trying to create utopia, or trying to prevent some really bad thing from happening.70

So, we have a bit of a he-said-she-said scenario with respect to the motivations of these companies. Zapata–who has absolutely no connection with them–says that their motivations are evil and devious;71 Evan–who has a very deep connection with them–says that their motivation is to make a literal utopia happen. Now, in the absence of evidence, there isn’t any way to evaluate which claim is true. But the art-protectionists see fit to speculate wildly about the motivations of these AI companies, and about the supposed incompetence of prompters—see Reid Southen assuming that Shadiversity simply does not understand how prompting works because he includes duplicate words (these duplicates have a purpose, by the way).72 So I thought I might do some speculation of my own, but with evidence, instead of relying on the “profit bad” bromide. Namely, I think it is quite clear from my extensive study of this topic that all of these concerns over ethical models and lack of artistic integrity—every point they make—are a smokescreen to cover up their true problem: that this technology opens up art to more people and thus threatens their livelihoods. Duchess Celestia is explicit on this point:

Let’s imagine a world [
] where AI wasn’t trained on copyrighted art without artists’ consent and all AI art generators were trained entirely on ethically sourced materials. Would [AI art generation] be perfectly fine then? No, in my opinion, it would not, not because of the software itself, but because of how humans would still be most-inclined to use it—as a replacement for real artists and real art.73

She tells us further that “obviously many agree that it would be better if all AI art trained on copyrighted material was deemed to be copyright infringement by default, rather than based on its similarity to existing copyrighted works.”74 Because, of course, we shouldn’t let little things like the burden of proof or innocence until proven otherwise get in the way of our goals here—we must stifle competition and stamp out any innovation the moment it rears its ugly head. She tells us this, by the way, in a video discussing a “New AI Art SCAM” which is “STEALING COMMISSIONS”—the “SCAM” in question being that people were paying artists for line work, and then iterating upon said line work with AI tools. This being a service that the artists themselves were advertising. It is not enough that people are paying artists to make art on the terms set by those artists; they have to ONLY pay artists, and do nothing in addition that might deprive those artists of what they do not own and are not owed.

So the problem for Celestia is indeed what I claim it to be—not that AI is unethical, but that it is a threat to her livelihood. She is not alone in this stance: we have seen it also with the array of artists who were completely fine with the tech right up to the point that it started getting good—they have no principled objections to it, they just don’t like that it may get better than them.75 We see it in their hatred of profit—all that an AI company profiting would imply is that they are providing a service which people value more than the alternative; the anti-AI crowd are here again against it on the grounds that it is good and is an improvement over what they can provide. They aren’t against plagiarism, they are against competition—and plagiarism happens to be a useful weapon towards their end of destroying innovation and creativity. Karla Ortiz makes this perfectly clear:

Yeah, any individual in the world can copy your work, ok, fine they did it, but if it’s a corporation and they make a profit from that, that’s a problem. Or if that individual makes a lot of money, that’s also a problem.76

Truly the mask has slipped: “use your AI tools—so long as they are bad;” “copy people’s art—so long as you do not profit;” “make whatever art you wish—so long as you don’t do it for yourself;” “do anything you wish, so long as you are not in the process bettering yourself and by proxy exposing the fattened slump that is Art.”

This is seen further in Karla’s backup plan: in the case where she is unable to completely destroy AI technology, there is to be instituted an artistic central-planning bureau that will dictate the kinds of art you are allowed to make:

So the way that I could see it is we’re gonna need some very creative legislation that tackles the newness of all of this in an inventive way. So for me, I would say […] implement something like, for example, percentages […] depending on which industry you are a certain percentage of your labour force can be AI tools. So for example, if you are in like medicine, or sciences, that’s taken into consideration and it’s a very large percentage. But if it’s something like […] the entertainment industry or things that are creative, […] then say: “hey, listen. 2% of your work force has to be AI and that’s the limit”—depending on the size of your company. Almost the way that I kind of see it is: think about it as if […] you’re trying to gain a visa to work in the US […]–or work in any other country–[…] in order for you to get a visa there has to be shown a need or a […] reason to why you couldn’t give that job to someone who is a citizen, and why you need to bring someone from abroad. So I could see something like that being kind of implemented in a space like this, where it’s like: “hey, why do you need AI tools to [make this]?”77

This idea is redoubled by Celestia, who thinks that people should be “fighting for legislation that forces companies to have a minimum number of human artists.”78 Ah yes: “you have used too much AI technology in the production of this artwork. It is to be burned and you will be sent to re-education post-haste—we can’t have any lingering bourgeois sentiments in this industry.” The world of their dreams is one where they get guaranteed employment on the backs of the producers, regardless of the quality of the work provided, and where those producers must justify themselves and ask permission to continue producing. This is a world where the artistic recession may continue on ad infinitum. “We must establish a society where government bureaucrats control the use of AI tools. Government must monopolise the task of detecting image AIs, and must be the ones to dictate the exact conditions of their use through the whole economy.”—in short, what is advocated here is a society where the state has sole discretion over the creation and detection of AI, a society where the Leviathan may manufacture whatever narratives it sees fit. The entire ethical worldview of these people is filtered through the lens of using violence to stop people from living their own damn lives as they see fit.

These are the people who destroy creation and spit in the face of man’s happiness on Earth. These are the second-raters, the second-handers, the parasites:

Thousands of years ago, the first man discovered how to make fire. He was probably burned at the stake he had taught his brothers to light. He was considered an evildoer who had dealt with a demon mankind dreaded. But thereafter men had fire to keep them warm, to cook their food, to light their caves. He had left them a gift they had not conceived and he had lifted darkness off the earth. Centuries later, the first man invented the wheel. He was probably torn on the rack he had taught his brothers to build. He was considered a transgressor who ventured into forbidden territory. But thereafter, men could travel past any horizon. He had left them a gift they had not conceived and he had opened the roads of the world.

That man, the unsubmissive and first, stands in the opening chapter of every legend mankind has recorded about its beginning. Prometheus was chained to a rock and torn by vultures–because he had stolen the fire of the gods. Adam was condemned to suffer–because he had eaten the fruit of the tree of knowledge. Whatever the legend, somewhere in the shadows of its memory mankind knew that its glory began with one and that that one paid for his courage.

Throughout the centuries there were men who took first steps down new roads armed with nothing but their own vision. Their goals differed, but they all had this in common: that the step was first, the road new, the vision unborrowed, and the response they received–hatred. The great creators–the thinkers, the artists, the scientists, the inventors–stood alone against the men of their time. Every great new thought was opposed. Every great new invention was denounced. The first motor was considered foolish. The airplane was considered impossible. The power loom was considered vicious. Anesthesia was considered sinful. But the men of unborrowed vision went ahead. They fought, they suffered and they paid. But they won.79

The reason why the arguments used by these advocates of aesthetic stagnation are a strange mishmash of incompatible positions is that the art protectionists–who have for a few centuries rested on the laurels of not needing to justify any decision they made–are now faced with a real artistic contender to their stagnation. In such a situation they are like an animal backed against a wall—they must grab at any argument they can find and see what sticks. Keep this idea in your pocket as you watch the rest of this video and as you encounter these people in the wild—if you are particularly perceptive, you will notice that their fundamental mission is to kneecap and destroy these AI art generators so that they may be kept in the picture without ever needing to innovate.

The Elitism of the Art-Protectionists

They show their hand in their preferred order of automation:

Machines are supposed to make it so we don’t have to do menial jobs, freeing us up to pursue things like art. Now here we are working harder than ever before while machines do the art for us. Absolute f***ing dystopia.80

Abhishek Gupta of the Montreal AI Ethics Institute has his own little pet jobs that he wants automated:

[INTERVIEWER]: is [sic] there any jobs that we actually do want AI to do?

[
]

I think things that are low-risk, things that are low-stakes, things that are well-defined, things that have a narrow boundary of potential behaviour, things that are [
] sort of limited in terms of the impact that they would have on human lives in terms of safety–safety in many dimensions, you know, physical, psychological, etc.–and things that are, you know, maybe repetitive, that are drudgery, that don’t bring joy to people, or are unnecessarily oppressive in the nature of doing that work.81

But when it is suggested to artists that they can use AI to automate the grunt-work in their field, we are met with a constant chorus that some artists enjoy said grunt-work. In any case, we are again met with this notion that your stinky, smelly job should be automated, but not our higher, more sophisticated job. Ben Zhao, creator of the popular Glaze82 tool used by anti-AI artists, agrees with this notion:

I wanna add a little bit to this—I think those points are right on. I mean, to me it’s a little bit alarming when you think about where are the positions of job replacement [
] by AIs coming in, right? AI is not–as many people thought–[
] taking the sort of unskilled work, and helping to elevate the workforce into more creative roles. [
] In fact, AI is coming for the creative positions, it is coming for the artists, for the musicians. You know, we’ve had people reach out, ever since Glaze went public, you know from choreographers, from voice actors, from writers, from journalists; all the creative aspects of human creativity, all these really highly valued positions in our society are getting threatened by AI.83

So, these commentators are completely fine with, and in fact in favour of, automating your job, just not their job. Automating those “stinky,” “dirty,” manual jobs is completely moral—those lugnuts should just learn to art, right? But automating artistic pursuits—perish the thought! God forbid the special activity of art be done more efficiently by smarter people—that should be reserved for peasant-work. “How dare you be so callous as to even suggest that art be opened up to more people!? It is the lower jobs that should be automated, not our sacred profession!” I really want the artists to home in on this point here, because I hear it all the fucking time—artists will be bitching and moaning about how unfeeling people are about them potentially losing jobs but then in the exact same breath they will cry out about how the AI should be taking away jobs from the blue-collar worker instead. This is a frankly disgusting attitude—stop doing it.

The Gell-Mann Amnesia effect refers to the tendency to read a newspaper article on a topic with which one is familiar and notice its constant and blatant errors—to be astounded at how totally and obviously wrong it is—and then to turn the page and nod along to the rest of the newspaper, completely forgetting how riddled with error the paper is. A similar effect seems to have taken hold of the artists: on the one hand, as they speculate about AI automating any industry other than their own, they immediately gravitate towards positive examples84—“AI in medicine? I imagine that AI could help diagnose illnesses, this would be a good thing;” “AI in transportation? Self-driving cars could be more efficient and less accident-prone than humans, sounds good to me;” “AI for teaching? You could have a personalised expert tutor on any topic, this would revolutionise education for the better.” But then, on the other hand, when it comes to introducing AI to art the tables are completely flipped on their head—the advent of AI automation will surely be a complete disaster that will destroy the entire process of art-creation.

The Anti-AI Crowd are neo-Luddites

It is not uncommon for the anti-AI crowd to be appalled by the accusation that they could be Luddites: “we aren’t anti-technology,” they tell us, “we artists are in fact some of the most willing people to embrace new technologies! All we are against is this specific technology because it directly harms artists’ ability to continue their holy work.” These artists have clearly only ever heard “Luddite” as some passing term and associated it with being anti-technology on principle, never actually considering what the Luddites stood for. For the Luddites themselves were not against technology as such; they were against the mechanical loom, which was putting them out of their jobs. The arguments they had are identical in form to those we hear from the anti-AI neo-Luddites—“we Luddites aren’t against technology! We are happy to embrace the steam locomotive in the transportation of our lace and the printing press to advertise our services—we are just opposed to this evil new technology of the Jacquard machine that greedy capitalists are using to put us out of our jobs!”

These artists, much like the Luddites of old, will–in the words of Tom Scott–draw the line of what is acceptable just past whatever they are doing.85 “What’s that? An AI tool that can cut out all of the bad takes in this recording and do all of the mixing and mastering? Sign me up! That sounds brilliant! Just don’t use Stable Diffusion.”86 “What’s that? Content-Aware Fill in Photoshop powered by AI? Brilliant! This will speed up that grunt compositing work so that I can focus on making my art! Just don’t use Dall-E.”87 “What’s that? An entire digital orchestra that I can use on my home computer? Fantastic! I have always wanted to make orchestral music. Just don’t you dare use Midjourney.”88

Certainly, “Luddite” may be used as a term for a person who is against technology on principle, but then what exactly is their argument? “Oh, no, we aren’t against technology due to an adherence to principles—we’re not Luddites! No, no, no: we are randomly and arbitrarily against only certain technologies that make us personally unhappy.” Surely, such an individual would possess greater integrity and intellectual honesty if they really were against all innovation, rather than accepting innovation only in those areas that will not threaten their sweet stagnation that they so carefully cultivate.

Erik Hoel rejects such historical comparisons:

I’m well-aware that since the dawn of time whatever the latest wave of art is it has been described as “not art” by some crotchety elder. But all those previous times the artist wasn’t being replaced by gigantic spreadsheets trained as autoencoders, so perhaps historical analogies aren’t very useful here.89

Indeed, those people were not replaced by “gigantic spreadsheets trained as autoencoders,” but this does not demonstrate the disanalogy. Instead they were replaced by mechanical looms and digital cameras—clearly the relevant aspect of the comparison, Mr. Hoel, is that they are being replaced. This is what the Luddites were afraid of and it’s what the modern visual artists are afraid of. That they aren’t being replaced by the exact same technology, Mr. Hoel, is entirely irrelevant and that you think it is relevant, Hoel, shows a complete lack of the cognitive faculties required to participate in this argument.

“AI Art Sucks” + “AI Art Will Take Over”

An especially hilarious tendency among the anti-AI crowd is to effortlessly swing from telling campfire horror stories about the coming world where all artistic work will be forever wrested away from man, where it will be “impossible to make a living as an artist,”90 into taunting the AI community about how AI art is fundamentally flawed–that it is, in Dave McKean’s words, a sea of beige91–and that you require humans to produce the best end products. In short, this is the combined bromide of “AI art totally sucks” with “AI art is going to completely take over all artistic pursuits.”

Zapata perfectly encapsulates this mentality in his video on the topic where he first paints his techno-dystopian vision of a world devoid of human artistry where all content is derived from a “mega feed” produced by AIs and that you “lack imagination”92 and “should be embarrassed” if you do not see this coming; and second during his appearance on Art Cafe, where he explains that AI image generation by its nature simply cannot be creative, at least in its current iteration:

It’s a probabilistic system, so when you ask me to make you art, like if a client asked me [
], “Steven I need a picture of somebody threatening someone with an axe,” [
] I try to produce what I believe is the best solution based on the criteria. These programs are not doing that, they produce solutions, or options, that fit statistically—that probabilistically speaking, [
] it’s 75% likely this looks like an axe, it’s 85% likely that this looks like a scared person, right? [
] Like I said, if you have the prompt and the seed number, you can ask those two combinations every day to Stable Diffusion and it will give you the same result every time. If you ask me to do that picture 1000 times, I can give you 1000 different versions even if your input variables are exactly identical. Even if you give me a [structure], even if you give me a piece of paper and say “I need a drawing of a man threatening someone with an axe but it must be based around this squiggle that I put down here, that squiggle must be intact,” right? So we can try to draw an analogy between that squiggle and the random seed number which is a noise field for Stable Diffusion to begin with, I can still give you an infinity of permutations where the squiggle is maintained—it doesn’t matter. So we’re just not like it in that sense.

Also, because of the fact that it’s trying to produce answers that are [
] statistically likely, that produces problems like over-fitting, and the embarrassing things that we’ve seen where if you ask it for salmon it gives you cooked salmon in the river instead of a living salmon, [
] because it’s seen more pictures of cooked salmon than it has of live salmon. It’s going to make mistakes like that over and over again, because unlike us it doesn’t have a cohesive worldview—it doesn’t understand what’s alive, what’s dead, what fits with context. It just says: statistically speaking, when you use that word, it’s probably this. [
] If I go back to the example of if someone needed a picture of a man threatening someone with an axe, [
] it’s extremely likely that Stable Diffusion–because “axe” is used, and most pictures of axes include the axe–will put the axe in basically every image, but I–being a creative artist–I might say, “I won’t put the axe in the image, I’ll instead show the person scared with the shadow of the axe cast across their face.” Stable Diffusion would basically never give you that because there’s almost no photos of axes that are just the shadow of the axe, it can never come to a conclusion like that because it’s just utterly under-represented within the dataset.93

So, AI art is so impressive that it can forever rip creative jobs out of human hands, but also it fundamentally will not and cannot seek to create the best art possible. If it is the case that AI could never achieve perfection by its own nature, then what exactly is the concern here? It must be that AI outcompetes those inferior artists, those artists who aren’t producing the best of the best, those who are producing the mediocre. But, as explained above, if we are to oppose AI for outcompeting them on their own terms, then it must also be moral to oppose the best and most efficient human artists on the same grounds.
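Incidentally, the reproducibility claim in the quote above—same prompt plus same seed yields the same image every time—is not mysterious: the seed fully determines the initial noise field, and a deterministic sampler is then a pure function of that noise and the prompt. A minimal Python sketch of this property (a toy stand-in with made-up names, not an actual diffusion model):

```python
import random

def generate(prompt: str, seed: int, size: int = 4) -> list:
    """Toy stand-in for a seeded diffusion sampler.

    The seed fully determines the initial 'noise field', and the
    'denoiser' below is a pure function of (noise, prompt), so the
    same (prompt, seed) pair always yields the same output.
    """
    rng = random.Random(seed)                       # seed -> noise field
    noise = [rng.gauss(0.0, 1.0) for _ in range(size)]
    # Crude stand-in for prompt conditioning: any deterministic
    # function of the prompt text works for the demonstration.
    bias = sum(ord(ch) for ch in prompt) % 97 / 97
    return [round(n * 0.5 + bias, 6) for n in noise]

prompt = "a man threatening someone with an axe"
a = generate(prompt, seed=42)
b = generate(prompt, seed=42)  # same prompt + same seed
c = generate(prompt, seed=7)   # same prompt, different seed

print(a == b)  # True: identical inputs reproduce the identical output
print(a == c)  # False: a different seed gives a different noise field
```

A real diffusion pipeline behaves the same way so long as its sampler is deterministic; the determinism is a property of seeded pseudo-random generation, not anything specific to image models.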

Steven elsewhere seemingly redoubles this notion that it is only ever going to replace the mediocre, telling us that AI art is “lazy”:

AI art is the refuge of every would-be painter, every painter too ill-endowed or too lazy to complete his studies.

Oh, actually, I am mistaken, that’s not Steven Zapata talking about AI image generation—that’s Charles Baudelaire talking about photography back in the 1800s.94 I don’t know how I misplaced that so badly. You of course see my point here, discerning viewer: these Luddite, anti-innovation, anti-art arguments are nothing new. There is an almost predictable progression from complete mockery of the new artistic technology, which is claimed to be not real art; followed by, and sometimes paired with, fears that it will dilute the artistic taste of the masses and put “real” artists out of jobs; which of course culminates in protectionist calls for the preservation of old and inefficient jobs.

In any case, if this world of AI-only art that worries Steven is truly so terrible and destructive of the human spirit, this surely means that the art being produced is not great—great art inspires and thrills people, whether they want to make art or not. If the future of AI art is constant masterpieces flowing out of every rock and pebble, this is a world where man is more fully inspired and enriched by the media he consumes. That this is not the future painted really makes one wonder how fully humans are going to be replaced—furthermore, Steven seems far less confident in his predictions when unscripted. In that very same Art Cafe interview, he refers to his imagined dystopia as “pure science fiction”95 and in an interview with Miles Johnston released on his channel after his anti-AI video, he says with respect to the rise of AI image generation that:

I don’t know, [
] no one does, what that’s going to do to markets, to culture, to people, to anything, right? We don’t know for sure at all. [
] The human-to-human connection is, I think it’s a pretty safe bet that if anything’s going to survive it’s that, and it seems very likely to me that it will survive.96

Now, that’s a statement with quite a bit less fire in it than the near-certain hellscape that a person must “lack imagination” if they do not see coming with the introduction of image AIs.

People Will Still Support Humans

Of course, unscripted Steven is correct here: you should not expect the human-to-human connection to go away any time soon. A large amount of this jerbs-worrying from artists fundamentally misunderstands why a lot of people buy art in the first place. As Kelsey Rodriguez explains, the fact that people buy art to support the artist must be accounted for:

[
] when I buy art, I often don’t just do it because I want something pretty to hang up in my house. I wanna support the individual artist’s journey, I wanna be a part of something. When I support a small business on TikTok, and I find out that I was a part of a movement that helped save them from bankruptcy, that makes me feel good.97

So, let us imagine we are five years down the line, Dall-E 7 is out, and The Witcher 5 needs some box art. CD PROJEKT RED certainly could get a high-quality rendering done by AI, or they could commission Greg Rutkowski for a bespoke piece. Evidently, many people are fans of Greg and would be willing to support him—why wouldn’t this company want the great publicity of slapping a “supports human artists” sticker on the box, if people care about this? Look at the mass of people who build their careers through donations on websites like Ko-fi and Patreon—careers built on the back of donations from people who like the work. There is a true selfish incentive for people to do this—if I very much enjoy the work of Hans-Hermann Hoppe, and want him to continue to write books, it is in my own interests to financially support him in this endeavour by purchasing said books. We do not require, and it is not desirable to implement, the state cudgel of intellectual property to smite with the power of Uncle Sam any who would wish to consume intellectual products without supporting the progenitor. People are plenty capable of being moral on their own, we do not need to force them by blasting their ass with the long arm of the law.

The Wynand Papers, and What Makes Them Possible

There is a certain truth to be pulled out from the artists who say that AI art is fundamentally flawed—namely, that it is going to be far more adept at creating slop than actual high art. Zapata’s horror story of AI image generators pumping out content to the lowest common denominator could indeed occur, many people have very poor taste—but those with poor taste are not going to be at the forefront of any artistic revolution, nor should they be appealed to. Allow me to highlight this with the fictional archetype of the slop-dealer in The Fountainhead’s Gail Wynand. In the book, Wynand owns several tabloid newspapers which dish out the most surface-level, bottom-of-the-barrel stories, because this is what the people want, or so Wynand claims. He does this because, due to some events in his life which I shall not spoil, he grows to believe that integrity is impossible, and so the only way to be successful in life is to appeal to the very worst kind of mindless drone. His foil, Howard Roark, is an architect of impeccable integrity—he refuses to alter his buildings in any way to appeal to tradition, or anti-tradition, or any other pointless whims that might surround him.

The true artist takes after Howard Roark, he is not concerned about slop-peddlers shovelling gruel into the mouths of the hungry masses—he is concerned only with creating his art to the greatest quality it can be. A Howard Roark does not build to have clients, he has clients so that he can build—this is the model that you should be emulating if you want to do something good with your life. Ultimately, the book demonstrates the core flaw in Wynand’s strategy—namely, by kowtowing to the masses, he becomes their slave; he is incapable of actually doing good for himself, unable to reconcile the inconsistency between his slavish devotion to the whims of the idiot masses and his desire to attain happiness for himself.

Now, I do not want this to be misinterpreted as an attack on the artistic integrity of AI images as such: there are certainly some very good pieces of AI-generated art and some genuinely new artistic techniques that they make possible, which I will be going over later (see: Can You Tell When Art Is AI Generated?). Shadiversity has a 40-minute video tutorial where he explains the process behind creating a SuperGirl version of his wife—and he gives a genuine artistic motivation behind it:

[
] what I love to do is to depict my wife as SuperGirl, because that’s who she is, she is my SuperGirl. She is the most beautiful creature in the world to me, and so I love making art of my wife—she is my muse, and I particularly like depicting her in the way that I see her. People don’t see what I see when I look at my wife—when I look at my wife, I see the most beautiful creature in the world, I see my SuperGirl.98

The theory of what art is will be addressed later, and it will be seen that he is doing genuine art here—he is concretising a particular metaphysical belief that focuses on the potential of man as such to become a SuperGirl.

Austin McConnell used generative AI in the making of an almost hour-long animated short99 where he meticulously crafted every panel and evidently put effort into making sure that the characters were consistent across time and posed exactly where in the scene they needed to be.

So, I fully accept that AI art can be good; rather, what I am trying to get across here is that a true artist should not be concerned with the lowest-common-denominator slop-bots that proompt out an image for every trending topic on Twitter in the hopes that one of them goes viral, as the set of people that such a thing appeals to are not the clientèle of high art. If you are concerned about your job security, ask yourself: are you Gail Wynand, or are you Howard Roark?

Maciej Kuciara, host of the Art Cafe podcast, points out that this sort of second-hand trend-hopping is not at all unique to AI systems:

[I am] least worried about an average AI art consumer, because at the end of the day there’s nothing unique about it, you know? It’s all kind of based on what already existed and there’s [a complete] lack of uniqueness to it. Yes it creates incredible images but if you look at it [
] it kind of reminds me of ArtStation, where a known artist would post something incredible and then a month later [
] everyone’s copying it.100

So the question is: should artists be concerned about human trend-hoppers? Should Rembrandt be concerned that someone is biting his style? Surely not, just as is the case in other industries—MrBeast is hardly concerned about the thousands of channels that jump on everything he does a few weeks later, because he does it the best. The average film and TV show being produced today is the artistic equivalent of sewer runoff, and yet filmmaking is still alive and well—I fail to see why getting AI to produce works of a similar quality would yield any sort of different result.

If the competition ends up producing artwork of a higher quality, that indicates that they are not second-handers, that they are actually coming up with something new and better—and the correct response to this is to improve yourself, not to get annoyed by your superiors.

What is Art?

The Objectivist Aesthetics

Art as a Concretisation of Metaphysics

So we have it that it is not the case that AI image generation steals or plagiarises the work of artists, and that it is immoral to oppose a new line of production on the basis that it will obsolesce less-efficient lines of production. Thus I can now turn my attention to their final “gotcha!”: namely, can images produced by AI be called art?

To answer this, we must look at the more fundamental question: what is art? Either we have it that there are objective principles underlying artistry in which case such principles would need to be elucidated to answer the question, or there are no such principles and art can be whatever a person wants. In the latter case, the question “can AI produce art” is answered by “if I want it to,” or in other words, yes—it is possible for AI to produce art.

So clearly, if I am to present the strongest case for the “AI art isn’t real art” argument, I cannot assume an aesthetic subjectivism or relativism on the part of its proponents, thus I must analyse the potential candidates for a theory of art to find one with such objective principles (and this is being quite charitable indeed, it is not at all uncommon to find an AI opponent who is completely incapable of defining what “art” is, and who freely admits this, whilst fully confident in their assertions that AI art is not art—quite a ludicrous position indeed).101

Our first candidate for a theory of art is materialism. The materialist notes that art is pursued as an end in itself—this makes art distinct from many other things that men produce. A newspaper, a scientific treatise, a busy signal on your telephone, a hammer and nail are all used and produced for utilitarian purposes; they are means towards some end, not ends in themselves. The materialist proceeds from this fact to conclude that art is a petty indulgence completely unrelated to reason or man’s life in this world.

Obviously, such a theory would leave quite shaky grounds for the proponents of AI art not being art, as there could be no rational principles underlying that which is fundamentally irrational, which the materialists claim is the case for art. So, I shall move on to an opposing school, namely spiritualism. This school agrees with the fundamental premise forwarded by the materialists that because art is an end in itself it must be completely disconnected from furthering man’s life on Earth, but concludes from this that the purpose of art is and must be to further man’s life in a spiritual super-reality, with which man can achieve only mystic contact. So there are no objective principles underlying art, the spiritualist tells us; the correct method of making and judging art is to refer to a mystic elite who have turned themselves away from man and his life on this planet towards some ineffable spiritual dimension, and who can therefore decree from on high the correct methods and principles of an art piece.

Now, adopting spiritualism could indeed allow the anti-AI artist to claim that AI images are not and cannot be art, but they could not do so through any rational argument. Instead they would have to completely abandon reason and state that they had mystic contact with some deity that told them that AI cannot produce art. The issue with such a view is epistemic—epistemology being the science that studies how man gains knowledge and validates his conclusions. The spiritualist rests his view on the stance that “knowledge [of reality] derives not from sense perception or from reasoning based on it, but from an otherworldly source.”102 This view, therefore, rests on a supernaturalistic metaphysics—one which posits that there is some supernatural source that controls and dictates the nature of reality. This fundamentally reduces to the primacy of consciousness, the view that consciousness has metaphysical primacy over existence. But consciousness is consciousness of existence; the primacy-of-consciousness viewpoint steals the concept of consciousness in its attempted inversion, making it a wholly untenable position.

However, there is a third option available that must be analysed here, namely the Objectivist theory of art. Both the spiritualists and the materialists accept the same premise that art has no relation to man’s life on Earth; Objectivism rejects this, stating that art has a real, practical purpose for man:

Art fulfills an essential need of human life, not a material need, but a spiritual need. “Art, [
] is inextricably tied to man’s survival—not to his physical survival, but to that on which his physical survival depends: to the preservation and survival of his consciousness.”103

Man requires a code of values to guide him in life, to tell him what he should do and what he should avoid—namely, man requires an ethics. But this ethics, this answer to the question of what man should do, rests upon the more fundamental questions of what the universe man inhabits is like and how he can come to know things—ethics rests upon the very broad and very complex fields of metaphysics and epistemology. If the world that man inhabits is malevolent and man is powerless to do anything, then he should simply sit back and take whatever is thrown at him, never trying to better his situation. If man has the ability to change his circumstances for the better, then ethics can legitimately answer how exactly he should go about doing this. Because this question of what a man should do in any given set of circumstances is of prime importance to him, and because it rests on a vast body of information, he requires a method of quickly bringing these abstract ideas into focus to inform his decisions. To do this, he must form a concretisation of said abstract ideas, one that can be immediately presented and summed up by his senses.

This is the role of art. The artist brings to focus in their work the aspects of the universe that are important and leaves aside those that are not:

“Art,” in Ayn Rand’s definition, “is a selective re-creation of reality according to an artist’s metaphysical value-judgments.”104

In any human activity—whether one is performing surgery, building a skyscraper, or defining abstract principles—two types of cognition are involved. In some form, a rational being must know not only the nature of his activity, but also the philosophic context on which it rests: why the activity is proper, how it relates to his code of values, how his values relate to reality. Thus a man’s metaphysical value-judgments, as Ayn Rand puts the point, “are involved in every moment of his life, in his every choice, decision and action.” The basic orientation underlying the concretes of one’s daily endeavors must be continuously operative in one’s mind as one’s basic guide. For this purpose, it need not (and cannot) be a continuous object of conscious awareness; but a rational being cannot afford to leave so vital an issue purely to subconscious implication. If he is to be in control of his life, he must have the power to know his metaphysics, i.e., to summon it into focus, to make it the specific object of his awareness. In this sense, a man’s view of life must be available to him at all times—and available as a sum.105

![img](</blog/ai-art/Michelangelo - David vs. Picasso - The Old Guitarist.png> "Michelangelo’s David (left), photographed by Jörg Bittner Unna, contrasted with Picasso’s The Old Guitarist (right)")

To illustrate this theory, take as an example Picasso’s The Old Guitarist. What can we say about the philosophy that this painting embodies? What is the metaphysical status of man as depicted here? We see a sickly and emaciated old man, hunched over in the gutter. This does not look like a man who can accomplish anything of significance; rather, he looks as if he will die in the dirt when faced with a slight breeze. This painting pictures man as impoverished, weak, and impotent to do anything about it. Contrast this with Michelangelo’s David, who stands stalwart in preparation for a fight with a great foe. This looks like a man who can accomplish anything, who can face down and destroy whatever needs destroying. In the philosophy of Picasso’s The Old Guitarist, man is weak and doomed to misery; in Michelangelo’s David, man is unstoppable. All of the underlying facts in each artist’s philosophy, which could take dozens of hours to explain, are funnelled into a single object that can be directly perceived and digested:

By converting abstractions into percepts, art performs another crucial (and inseparable) function. It not only integrates metaphysics, but also objectifies it. This means: it enables man to contemplate his view of the world in the form of an existential object—to contemplate it not as a content of his consciousness, but “out there,” as an external fact. Since abstractions as such do not exist, there is no other way to make one’s metaphysical abstractions fully real to oneself (or, therefore, fully operative as one’s guide). “To acquire the full, persuasive, irresistible power of reality,” Miss Rand writes, “man’s metaphysical abstractions have to confront him in the form of concretes—i.e., in the form of art.”

The above is another expression of the primacy of existence. Since consciousness is not an independent entity, it cannot attain fulfillment within its own domain. In order to satisfy even its own most personal needs, it must in some form always return to its primary task: looking outward. To an entity whose essence is perception, there can in the end be no substitute for perception.106

So the question of import is: is it possible for an AI to generate an image that sums up and concretises some particular metaphysics? The answer is clearly: yes. If you prompt an AI to generate an image of man weak and helpless in a universe that hates him, it will gladly provide this for you; if you prompt an AI to generate an image of man with the potential to conquer nature, it will provide this also.

img

Even on the most basic level, just literally typing these prompts into the Stable Diffusion HuggingFace demo and taking the first result, you can get those ideas across in concrete form. Now, are these images aesthetically good? Certainly not, they look like dogshit—I exercised basically no control over them. But do they concretise the metaphysics in question? They certainly do, at least to a limited extent. The left is very clearly tortured, this is perhaps the ugliest vision of reality that a man could muster, nothing is connected to anything else, everything is a meaningless blob—this is the exact philosophy that I wanted it to embody. There are a great many philosophers who have this view of man and the reality that he inhabits. This is contrasted quite clearly with the right image: already we have some symmetry, some semblance of order within this world, and we have the depiction of man staring out into the clearing, almost as if he is parting the trees by his mere gaze—again, this is the exact philosophy that I was trying to get across.

Aesthetic Value as Objective

Now, how is it that I am able to claim these images to be aesthetically poor if they do indeed embody a certain metaphysics—“is that not the standard!?” The answer is that there is an objective aesthetic value to a given piece of art, the standard for this value being derived from the nature of art and the purpose it serves in man’s life. Above I answered the question what is art?—I can now answer the question what is good art?

As the history of Romanticism indicates, an artist’s philosophy can have significant consequences in regard to his esthetic merit. This does not, however, alter the fact that there is a difference between philosophic and esthetic judgment.

In judging an art work’s philosophy, one is concerned with a question of truth: are the implicit metaphysical value-judgments guiding the artist’s selections true or false, proved or arbitrary, logical or illogical? (Any explicit ideology in a work that clashes with its operative metaphysics is essentially irrelevant to its meaning.)

In judging an art work qua art, by contrast, one enters the domain of a highly personal emotion, sense of life. The goal of art, we have said, is not to prove but to show—to concretize whatever sense of life the artist has, whether it be true or false. “The fact that one agrees or disagrees with an artist’s philosophy,” Miss Rand concludes, “is irrelevant to an esthetic appraisal of his work qua art.” A false philosophy can be embodied in a great work of art; a true philosophy, in an inferior or worthless one.107

The knee-jerk theory of aesthetic value, given that the task of art is to concretise an emotion, is to hold that aesthetic judgement is performed by a process of feelings—that good art is art that feels good. This is a false start: emotions are not tools of cognition; they are derived from prior cognition performed by a person. To illustrate this, consider the vast difference in emotional response to just about any event. Perhaps a tenant overstays and refuses eviction. Someone who is anti-landlord would look upon this situation very positively and feel a sense of joy at what is happening; on the other hand, someone in favour of the landlord’s right in that property would be disgusted by the actions of the tenant. What such a situation shows is that the emotional response to the exact same stimulus can be completely different between different people depending on the prior cognition they have gone through—in this case one person’s cognitive faculties drove him to a pro-property stance, and the other’s to an anti-property stance, and it is this stance which determines their emotional response. The emotional response is not telling either man anything about what is going on—emotions are not tools for apprehending reality.

So in the field of art–like everywhere else–feeling that something is the case is not enough; we require a rational process to determine the proper aesthetic judgement. Leonard Peikoff highlights three aesthetic principles which indicate what such a rational judgement would involve:

  1. selectivity;
  2. clarity, and;
  3. integration.

Objective Aesthetic Principles

Selectivity

Selectivity here is with respect to the subject—namely, the rational artist is very particular about his choice of what to use to concretise his metaphysics. This is essentially implicit in the very purpose of art: given that art is for concretising metaphysics, and given that different concretes will have different capabilities for doing this, the best artist does and must be selective. If an artist were to try to concretise his philosophy by picking whatever random concrete, the piece would lose all artistic meaning. One can easily imagine the reductio case–present heavily in modern art–of attempting to concretise complete anti-concepts with an incomprehensible mess over which the artist had no say and no desire to have any. Such un-selected subjects make for bad art–objectively bad–because they run counter to the very purpose of art in the first place.

No matter what his sense of life, an artist may not properly choose as his subject the random, the second-handed, or the metaphysically meaningless (e.g., Brillo pads). Since he has a definite perspective on reality to convey, he may not choose his subject by the standard of: “whatever comes along” or “whatever incidents of my adolescence I happen to remember.” Since it is his perspective, his standard cannot be: “whatever subject others have chosen or the critics approve.” Since he is engaged in an activity with an objective purpose, his standard cannot be: “whatever appeals to me.”108

Enemies of selectivity will often oppose it by treating “style” as the only thing that matters in art. This stance is a fundamental inversion—it is the subject which is the primary, to which everything else is merely a means of encapsulation. Insofar as the style is inconsistent with the subject, it is the style that is wrong, not the subject.

The subject is not the only attribute of art, but it is the fundamental one, it is the end to which all the others are the means. In most esthetic theories, however, the end—the subject—is omitted from consideration, and only the means are regarded as esthetically relevant. Such theories set up a false dichotomy and claim that a slob portrayed by the technical means of a genius is preferable to a goddess portrayed by the technique of an amateur. I hold that both are esthetically offensive; but while the second is merely esthetic incompetence, the first is an esthetic crime.

There is no dichotomy, no necessary conflict between ends and means. The end does not justify the means—neither in ethics nor in esthetics. And neither do the means justify the end: there is no esthetic justification for the spectacle of Rembrandt’s great artistic skill employed to portray a side of beef.

[…]

In art, and in literature, the end and the means, or the subject and the style, must be worthy of each other.109

Clarity

Our second principle is clarity:

In the broad sense applicable here, “clarity” denotes the quality of being distinct, sharp, evident to the mind, as against being obscure, clouded, confused.110

It should be clear why this is a requirement: art that is cloudy and obscure cannot serve as a guide to action that one can call up at a moment’s notice to focus his attention on. It is almost the definition of a blur that it is out of focus—blurry art is thus an anti-concept, or is at least running counter to its purpose. This requirement applies to whatever metaphysics you wish to embody. If you want to embody a metaphysics which holds reality to be a chaotic mess requiring the hand of god to steady, this is fine, but to call it good art you must be comprehensible–fully intelligible–in your concretisation of it. On the point of clarity, art is good if you can immediately tell what it is trying to get across; art is bad if you cannot for the life of you work it out, and where there are innumerable different theories all culminating in “it means whatever you want it to mean—the meaning of this piece is in the eye of the beholder.”

The function of the artist is to overcome the opacity of human experience—to confront a universe that does often seem baffling and, by judicious selectivity, to reveal its true essence. The purpose of art, in other words, is the opposite of today’s bromide. The purpose is not to revel in life’s “ambiguity,” but to eliminate it.

[…]

The nemesis of all the champions of “blurred murk” in art is the science of epistemology. Since art satisfies a need of man’s cognitive faculty, it must conform to the requirements of that faculty. These requirements are precisely what is identified by epistemology, and they are not malleable to anyone’s desires. A writer, for example, must obey the rules of using concepts; if he does so, his work, however otherwise flawed, is at least intelligible. If, however, a writer decides to dispense with the rules—if he jettisons definition, logic, and grammar in order to offer neologisms, contradictions, and word salads—then he objectifies, concretizes, and communicates nothing. The same principle applies to every art form, whatever the nature of its medium.111

Integration

Our third principle is said by Peikoff to be the distinctive factor that separates good art from great art; it is called by Ayn Rand “the hallmark of art”: integration. The integrated artwork has every aspect in harmony with every other, as against being a mishmash of distinct blocks haphazardly stuck together. The very best art is representative of a single concrete that you can at once grasp and use as a guide—for this reason it must consist of an indivisible whole, where each part implies and is implied by the rest, and which perfectly encapsulates the particular sense of life required. Any discontinuity in an artwork in effect gives you two artworks jammed into one, impossible to separate out and consider on their own terms—each aspect of the artwork should be working towards the same end, rather than different ends.

Rand highlighted an example of this in Siegfried, a movie whose philosophy she rejected:

Every action, gesture and movement in this film is calculated. […] Every inch of the film is stylized, i.e., condensed to those stark, bare essentials which convey the nature and spirit of the story, of its events, of its locale. The entire picture was filmed indoors, including the magnificent legendary forests whose every branch was man-made (but does not look so on the screen). While Lang was making Siegfried, it is reported, a sign hung on the wall of his office: “Nothing in this film is accidental.” This is the motto of great art.112

Everything that an artist chooses to include is treated as metaphysically significant—the artist is encapsulating that which is important. Thus any insignificant aspect, anything accidental or haphazard represents an anathema to the very purpose of art in the first place.

In a scientific report, irrelevancy can often be bracketed and ignored; it need not affect cognition or communication. In a work of art, however, irrelevancy redounds on the total. The contradiction involved is lethal because it destroys the spell, i.e., the integrity and power of the stylization. Since art is a re-creation of the universe from a personal perspective, it offers man, in effect, a new reality to contemplate; anything accidental works to make the new reality unreal.113

The Role of the Artist in an AI World

It is on these aesthetic principles that a role for the artist will always remain. Steven Zapata’s self-prompting image generator is surely not being selective with regard to its subject, and it surely would not involve total integration and intention behind every aspect presented. The pieces produced by such a machine would be objectively poor. A similar thing can be said for simple text-prompt usage of these systems: from a pure information theory standpoint, there simply isn’t enough data in a 12-word text prompt to sufficiently define every last branch in the equivalent of Siegfried’s forests. Rather, the common approach to achieving a similar level of artistic control is the image-to-image feature, which allows you to encode orders of magnitude more information into what is provided to the AI, such that you can specify exactly what you want, where you want it, and how you want it to look. This surely leaves the door open for objectively brilliant AI-generated art.
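The information-theory point can be made concrete with a back-of-the-envelope calculation—a sketch only, assuming a ~50,000-word vocabulary for the prompt and a raw 8-bit 512×512 RGB image; both figures are illustrative, not properties of any particular model:

```python
import math

def prompt_bits(num_words, vocab_size=50_000):
    # Generous upper bound: each word independently chosen from the vocabulary.
    return num_words * math.log2(vocab_size)

def image_bits(width, height, channels=3, bit_depth=8):
    # Raw (uncompressed) information capacity of the pixel grid.
    return width * height * channels * bit_depth

p = prompt_bits(12)       # roughly 190 bits
i = image_bits(512, 512)  # 6,291,456 bits
print(f"prompt: ~{p:.0f} bits, image: {i:,} bits, ratio ~{i / p:,.0f}x")
```

Even granting the prompt a generous per-word information content, the raw pixel grid has tens of thousands of times more capacity—the generator has to fill that gap itself, which is exactly why text alone cannot specify every last branch.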

That this Wynand-megafeed slop is automated away allows artists–good artists–to focus their time on creating superior pieces than they otherwise could. The self-prompting megafeed can swallow up and satisfy the demand for all of the non-innovative work, allowing resources to be shifted towards those artworks that are innovative. It is almost a law of human nature that much of the time people don’t know what they want until you show it to them. If Ford had taken public opinion surveys, people would have told him that they wanted faster horses, not cars. Steve Jobs had the entrepreneurial genius to create the iPhone—a product completely unlike anything which existed, which no possible trend data would have pointed to, and which has now completely changed the mobile phone industry. A self-prompting AI simply cannot achieve this level of ingenuity—anything completely new coming out in the art world would have to come from the guiding hand of man, assuming of course that human-like AGI isn’t developed, which may or may not happen. The role of the artist is preserved in the post-AI world by giving people what they didn’t even know they wanted.

The Labour Theory of Art

The anti-AI art crowd has a counter-thesis to this AI-art-as-art stance which I have laid out, found in the ease of prompting as against painting a fresco or composing an opera: namely, that real art takes more effort to produce than is involved in AI image generation.

Of course, this ignores the many examples of immense control and work being done by the human guide of AI programs, which is how the very best AI art is made. Furthermore, the defenders of this thesis forego any manner of elucidating a philosophical backing for it—it is asserted without argument, so it may be equally dismissed without argument. But, we can delve into some of the interesting implications of such a view in the meantime. First, is there some specific threshold of work put in that must be met for something to be art, or is it more of a sliding scale? If the former, then what possible non-arbitrary threshold could there be? Four hours of work and then it’s art? Why not three and a half, or five? And if it’s the time and effort put in that makes something art, consider a heavily constipated man who certainly exerts much time and effort during the production of his excrement. Would his dumped-up toilet become art in the process? If instead we have it that the amount of time put in is a sliding scale that determines how artful something is, then what would normally be understood as poor skill becomes great skill. The person who has never held a paintbrush in his life and as such takes far longer to produce a painting is elevated above his counterpart who is capable of throwing together a relative masterpiece in half an hour.

Moreover, if the training process is to be accepted as a valid consideration in whether an AI generated image is plagiarism, then surely it must also be taken into consideration here in the issue of how much work was put in. On this standard, we must include the countless hours of work that goes into writing and training such models by many dozens of engineers. Much time has been spent by people jeering and laughing at how poorly AI art comes out, which is surely evidence of how difficult it is to perfect such models and the pieces that they generate.

The Communication Theory of Art

There is another popular theory of art that should be addressed here, namely the communication theory of art. The naïve form of this theory would simply have it that art is when communication—this is clearly absurd. Art is not ordering a coffee, art is not calling an ambulance, art is not shouting at some ruffians. A slightly more sophisticated formulation can be found in Tolstoy’s What is Art?:

Art begins when a man, with the purpose of communicating to other people a feeling he once experienced, calls it up again within himself and expresses it by certain external signs.114

So art is communicating a feeling by “certain external signs.” First, this would not discount AI art—I can have an AI generate an image of a smiley face, and thus communicate an emotion–happiness–through the medium of AI. Second, this definition would be so broad as to include me saying “boy, I feel great right now” as being art—an obvious absurdity. Third–and this applies to all communication theories of art–this definition cannot account for those art pieces that are made solely for the enjoyment of the artist and never shown to anyone. Is a given painting–the exact same painting–at one point in time not art when only seen by the artist, but upon his death and its being shown around does it suddenly become art? Does all art stop being art upon the death of the second-to-last human? Does art cease to be art the moment it is not being used in the act of communicating an emotion? These are seemingly odd features of a theory of art—and yet they jump out of one of the more common such theories.

Ultimately, the communication theory of art is confused about the correct ordering of things. Art may well be used to communicate some idea, but this is not its primary goal—the primary goal is the concretisation of that idea. Concretisation must precede any communication, and so must be the genetic root of the activity.

The Journey Theory of Art

There are several artists who propose a “journey” theory of art—namely, that what is important about an art piece is the story behind its creation, not the end-product: the journey, not the destination. Dave McKean gives us the Platonic form of this theory, telling us: “I actually don’t care what the end result is—I don’t give a damn what that is.”115 Ok, well I fucking do care about that; I don’t give one modicum of a shit about the story behind the random paints thrown onto some canvas which is then passed off as art—I care only about the actual art itself. I evaluate art on its own merits, not by whatever “story” is behind it. I am a consumer of art, not an art historian. Why the hell should I be prevented from getting the end result which I want? Any judgement of art based on the story behind it is a form of self-deception—you are lying to yourself about what is actually in front of your eyes and evaluating it based upon out-of-context history.

This is, of course, a slightly bastardised form of the labour theory of art—or perhaps its sibling. The common premise is that the work put into the art is in any way relevant to whether it is art. This is false—sure, people do care about stories, but that is a separate concern from art. It is the task of the museum to archive objects with some historic significance–objects that have a story behind them–but that they were involved in interesting stories does not make them art. We see the reductio case of the journey theory of art with the famous banana taped to a wall. The art gallery doesn’t seem to give much of a shit when the banana rots or people come along to eat it, as they just replace it with a new banana each time. What we have in this situation is that the art itself has been entirely dispensed with as irrelevant—surely displaying the total failure of such a theory to explain the importance and role of art in man’s life.

img

AI as the Artist

Erik Hoel has his own thesis for why “AI-art isn’t art”:

Given how simple this is, only the most self-congratulatory would refer to themselves as an “artist” simply for typing a one-sentence description of an image and letting DALL-E do the rest. Imagine someone commissioning an artist to do an artwork and then referring to themselves, in strict seriousness, as “the real artist” since they were the one who paid the commission (in fact, it’s exactly the same, as DALL-E will require fees to whatever Big Tech company OpenAI licenses it out to. Probably Microsoft).116

But this clearly does nothing to establish his thesis that “AI-art isn’t art”—surely, in this analogy, DALL-E is the artist, right? The question isn’t “who is the artist?”, it’s “is AI art art?” There is a broad class of arguments like this one from Hoel that centre around shifting the artistry to the AI in order to negate that it is art. This, of course, could only ever apply to text-to-image—one can exert massive amounts of control using image-to-image. Moreover, that you aren’t the one physically getting in there with a paintbrush does not make you not the artist: the artist is the one who is directing the art and deciding the composition of the new, fictional reality. At what degree of computer aid does one stop being the artist? Surely the superior understanding is that one is the artist if it is their creative vision.

![img](</blog/ai-art/ai art machine made the art photography.png> "Stonetoss comic depicting the essence of the 'machine as the artist' mentality as applied to photography.")

After all, movie directors certainly count as artists, if I asked Hoel who made The Shining I doubt he would start listing off every cameraman on set, he would of course say that Stanley Kubrick made The Shining. Kubrick is the one who had the creative vision, he is the one who decided how it would look and what would be included—he is the artist. That he wasn’t doing all of the performances and the filming and the catering and whatever else goes into making a movie does not negate that it is his movie.

The Rush for Aesthetic Objectivism

In reality, it is plain to see for any with eyes that my above elucidation of an objective aesthetics for the artists to be using is perhaps overly-charitable. Art has been one of the primary battlegrounds of all of the worst subjectivism imaginable. I derive a certain joy–or perhaps a schadenfreude–in seeing the “everything is relative,” “truth is subjective,” crowd who have dominated the art industry for decades instantly turn on a dime into asserting that AI image generations violate the principles of true Art, and that it is objectively immoral for artists to lose their jobs to an AI. The art world post-Duchamp has almost entirely adopted the bromide that anything is art if the artist says so,117 and now that people are attempting to make actually good art as against anti-conceptual garbage, they are desperately scrambling to rescue objectivity from the well in which they drowned it.

For the artists out there: you really need to wrestle with this question—you can’t have it both ways. Either you maintain your nonsense relativism, in which case you are incapable of mounting a sound moral argument for or against anything, as any such argument may be refuted by “that might be moral for you, but it’s not moral for me!” Or you drop the relativism and adopt a realist philosophy, in which you can properly ground your moral and aesthetic claims: stop deconstructing concepts right up until the point where you deem a concept strategically advantageous to maintain. You are stealing the concepts which you wish to elevate to objectivity, above the graveyard of destruction in your wake, from those whose philosophy is actually grounded in reality.

One of the great fountainheads of the anti-AI art movement, from which basically all other arguments are borrowed second-hand, is Steven Zapata (which is ironic for a movement that bases its arguments in a hatred of AI on the grounds that it is second-hand). Zapata tells us that art can “be anything,” that it is “very structureless,” and that there is “no real objectivity in art.”118 He also sees fit to put the word “truth” in air quotes.119 If this is the accepted opinion within art circles, it should be no surprise that anti-conceptual nonsense is the norm, nor should it be a surprise that people are turning to AI to produce actually good art. Zapata is at least honest enough not to get involved in the “it’s not real art” argument; perhaps if the hordes of his intellectual descendants would realise why that is the case, we could finally be rid of this nonsense-stance. My hopes for this occurring are not high—the second-hander is fully capable of borrowing bundled ideas from wherever he chooses, without ever analysing whether there exists any gross inconsistency between his bundles. Remember this: whenever these people come along and claim that AI art is not real art, all you must do is ask them the simple question: what is art?

Positive Applications of AI Image Generation

Art For The Severely Disabled - “just pick up a pencil and start drawing bro”

The anti-AI crowd take particular glee in mocking the popular notion of AI image generation being a “democratisation” of art. Putting aside any issues I have with that particular terminology, the argument is that art was already accessible to everyone, because all you have to do is pick up a pencil and start drawing. LioConvoy says to people who wish to speed up their creation by using AI to draw backgrounds that they should just “learn to draw backgrounds […] Learn to draw, or commission an artist.”120 Well, at least he is giving us some options! “Pay up, or use inferior techniques—heads I win, tails you lose.”

To artists who make claims such as these, just for your own strategic sake, consider how easy it is to turn this line of reasoning right back at you: “Oh, we should just learn to draw, how about you just learn to get another job after art is automated?” “How about you just learn to code your own AI?” “How about you just learn to make passive income to support your art hobby?”

Of course, there stands also an obvious counterexample to their blanket assertions that the AI prompters should just pick up a pencil and learn to draw: namely people who are not capable of picking up a pencil.

Take Stephen Hawking, who was almost completely paralysed by the end of his life. “Pick up a pencil, gramps” is not going to magically inspire him to be able to move his limbs again. And with AI image generation, even the guy pictured below can make art—and he’s in a fucking iron lung!

img

Can anyone honestly say that if a person in this state was pining to make art, that they would tell them to “just pick up a pencil, bro?” You might say that he can simply commission an artist if he wants personal art—to which I would ask: how much do you think it costs him to maintain that big, fuck-off machine that is currently breathing for him? Are you going to advocate that he choose between being deprived of the joy that is appreciating art and knowing that he will be able to breathe tomorrow?

“The difficulty in producing art is simply what separates the wheat from the chaff—the boys from the men—us from you,” we are told by the great art-mystics. “Polio Paul and Spinal Injury Steven are simply undeserving of the sacred craft of art-making; they must quell their egotistical desires to wrestle this profession from its proper place.”

Art has a profound impact on the mental wellbeing of those who partake in it—I don’t think we should be requiring that those who are incapable of making art unaided sacrifice this incredible benefit in order to please the existing artistic establishment.

This may sound like an overly-ambitious caricature that I am painting of the anti-AI crowd: surely nobody would actually be so vicious! Oh, if only that were true. Duchess Celestia makes it perfectly clear that those without the means to make art should just suck it the fuck up, telling us that “if you can’t animate a project or hire an animator, you shouldn’t be able to make an animated project. If you can’t create a book cover, or hire an artist to do it for you, your book shouldn’t have a cover—it’s the basis of economy!”121 And, again, what preventing people like Norman No Arms from making art looks like is using actual violence against him and his collaborators for the reason that they did it in the “wrong” way. That she, to adopt her terminology, feels entitled to speak on economics whilst having a negative understanding of the field offends me! I work for the Foundation for Economic Education, and her flooding the market with ineptitudes like this poisons the field and puts the profession at risk—I shouldn’t face any competition in the field of economics education, just as the artists shouldn’t have to face competition in their industry. Surely the correct response here is for me to advocate that violent force be used against anyone who would dare compete with me—even when I claim that competition to be shoddy! This is what I have learned from the artists—destroying competition is the only moral way to do these things.

You can imagine a particularly heinous Paralympics commentator saying similar things to these artists: “running is already democratised, stop adding all of this new technological mumbo jumbo to it. All you have to do is pick up your legs and throw them in front of you one after the other. Look at all these cheaters using bionic enhancements—this one is using wheels for Christ’s sake! It’s like they never bothered to watch any running tutorials on YouTube and just want a shortcut to success!”

My friend Necro has cerebral palsy, which can cause his hands to cramp up very easily. He tells me that when he has used AI art, it makes it a lot easier for him to quickly conceptualise things—you know, without willing his muscles to do what they simply can’t. Necro is an artist; it’s just that sometimes he is not physically capable of creating art. His mind is fully intact and capable of imagining brilliant works of art; there is simply a gap between what his mind can imagine and what his body is able to do. Now, with AI image generation, that gap has closed massively. You can be completely paralysed head-to-toe and still dictate prompts to an AI image generator—anyone is capable of making art now; AI has indeed democratised the creation of art.

Furthermore, we needn’t even go to the extremes of cerebral palsy or total body paralysis to think of people who can benefit from this technology. I have a tremor in my hands that makes it essentially impossible for me to properly use a paintbrush, or pencil, or whatever other such implement to make art—it is basically impossible for me to apply the correct level of pressure. Now, my tremor is certainly nothing crazy—there are people with far worse impairments than mine—but even something as mild as this can be overcome with the advent of AI art.

There is an even more fundamental point to be addressed here, which is seemingly completely outside the anti-AI artists’ scope of awareness—namely, the cost involved in choosing to do art. Spending my time learning how to draw deprives me of the goals I could have pursued in the time spent learning to draw. Economists call this opportunity cost—every single choice you make involves such a cost: the cost of everything you forego in favour of whatever you choose. AI image generation drastically lowers the opportunity cost of making art—so if you are in a situation where you simply cannot spare the time to do art, you are now far more likely to be able to do so.

In response to Shadiversity’s love of AI, FENNAH tells him to just “learn the fucking skill,”122 elaborating, with respect to Shad’s use of AI as a colouring tool: “you could have learned to draw all those things yourself.”123 What FENNAH ignores here is opportunity cost. He addresses only the seen—that Shad could have learned to become a great artist—whilst ignoring the unseen: that Shad would have to give something up in order to achieve this. The question, then, is what exactly we are advocating that Shad give up to learn this skill. Should he spend less time with his family? Less time perfecting his YouTube channel? Less time learning about medieval history? He has only limited time in the day—and at some point in the future he will die forever. Is it really FENNAH’s place to claim that Shad has been misplacing his time? Surely this decision is Shad’s to make—we do not know his situation, and we should not engage in such wild speculation over it.

Faster Iterations

Imagine for me that you are a concept artist, working on Star Wars: Episode XIV, and you have just perfected the new design for the super-ultra-omega-stormtrooper. You show your design to Robot George Lucas, and he wants to see this new stormtrooper in every environment that has been designed for the new movie. Tell me, which sounds like the more enjoyable workflow: to tediously re-draw this stormtrooper in dozens or hundreds of different environments, or to give an AI your reference sheet, all of the environment drawings, and tell it to do the rest? It seems to me that the clear choice is the second one: by leveraging the power of AI, the tedium of iterating upon an idea over and over and over and over again can be completely eliminated. Maybe after you bring Robot George Lucas the hundreds of in-context images he wanted, he then asks you to give all of them jet packs, which was not a part of the original design. If you used AI, no problem: simply relay these instructions to Stable Diffusion version 12 and it can do all of the work for you.

Pascal Blanché explains that experimenting with image-to-image AIs has allowed him to “open new horizons” with his art,124 and he echoes this exact notion of iteration that I am describing here:

The process so far is to make many iterations over my own work with keywords like cyberpunk / futuristic city / bladerunner and so on, then recompose the results with my original. Plenty of [possibilities] from here.

Stan Prokopenko concurs:

You can take the Proko logo, you can give it a bunch of different colours, different variations […] and then you can create a prompt template that’s like: this style with this subject matter. So you just say this is an ocean of fish, or some koi, a koi pond, and then you give it a style—it’s like Picassoish or whatever, or it’s like Rockwell […] and you create these thousands of combinations and then you just batch it all and you get these beautiful images of like the Proko logo.125

Concept artist Trent Kaniuga is in the same boat, seeing AI and human art as a “symbiotic relationship.”126 This ability to quickly iterate upon an idea opens up entirely new avenues of creativity that would previously have been locked behind time constraints. An artist can describe the sort of asset that they are looking for, have an AI generate thousands of iterations, and then look through to see if there are any good ideas—perhaps ideas that the artist never would have come up with sans the AI. AI art is a brilliant tool for opening up new horizons of creativity even for those who are already accomplished artists. I mean, just think about it—this is for the artists: do you really want to be spending 15 hours a day making slight variations to some grass texture to cover up its tiling? Do you want to spend your time on drudge work that could be done by a computer? Or would you rather focus your time on whichever aspects are most artistically fulfilling? I think the invention of the paint-fill tool is a sufficient answer to these questions.
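The batch workflow Prokopenko describes—a fixed style template crossed with a list of subjects—can be sketched in a few lines of Python. The specific prompt wording here is an illustrative assumption on my part; in an actual run each generated prompt would be handed off to an image model such as Stable Diffusion as a batch job.

```python
from itertools import product

# A handful of style templates and subjects, per Prokopenko's description.
styles = ["in the style of Picasso", "in the style of Rockwell"]
subjects = ["the Proko logo", "an ocean of fish", "a koi pond"]

# Cross every style with every subject to build the full batch of prompts.
prompts = [f"{subject}, {style}" for style, subject in product(styles, subjects)]

for p in prompts:
    print(p)
```

Each of the six resulting prompts would then produce one (or many) candidate images, and the artist’s job becomes selection rather than repetition.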

Automation of Less-Appealing Parts

This is not a pure hypothetical either—Benjamin Von Wong, a sculptor, explains how he uses DALL-E to improve his creative process:

“DALL-E is a wonderful tool for someone like me who cannot draw,” […] “Rather than needing to sketch out concepts, I can simply generate them through different prompt phrases.”127

So, instead of spending his time learning and utilising an entirely different artistic technique, he is able to specialise in the sculpture-work that he clearly prefers. This means, all else being equal, he will be able to produce more sculptures than he otherwise would—surely a boon to his career. Aaron Blaise, veteran animator for Disney, mirrors this idea,128 pondering whether AI image generation could be used to fill in shadows and highlights, thus freeing the artist to spend their time elsewhere.

We have seen this applied to movies as well:129 The Champion used AI technology to re-dub the movie in several different languages, whilst keeping the actors’ voices, performances, and mouth movements consistent with the original. This brings the art to a whole new audience who would otherwise have to watch it with either an inferior vocal performance in a language they understand, or in the original whilst having to constantly read subtitles. AI is making art more accessible to people and allowing our stories to be truly global in scope.

Some in the artistic community scoff at this notion130—“aha! But some people enjoy those ‘less interesting’ aspects, so there!” They tell us this, as if it in any way negates the point being made. “Yes, well done Susan, I’m glad for you that you enjoy sewing, this does not negate that I do not, and would much rather have a machine make clothes for me.” If an artist still wants to engage in those aspects of the creative process that can be automated away, more power to them, I have absolutely no issue with them persisting in this activity—but this does not make it wrong or stupid for other people to make a different choice.

The Bullying of AI Artists

Bullying Artists For Not Blacklisting AI Imgen

The level of bullying that AI artists have received for daring to pursue and engage with a new artistic medium is frankly disgusting. Nashor Kim gave permission for people to train an image model on his work131 and received immediate backlash in the quote tweets—this is fucking ludicrous; I thought the whole point was that the AIs stole people’s work, so what exactly is the issue when someone gives permission?

Another artist, of his own volition, used AI tools in the creation of some art for a DnD sourcebook—the response to this was not merely that the work should be removed from the book, but that the artist himself, who has worked with the company for nine years, should be blacklisted from ever creating art for the company again. “Set an example of this parasite,” they tell us—the sin of using artistic tools to create his own art on his own terms means he must forever be cast out from the profession. Truly, this is what it means for artists to stick together.

img

Bullying YouTubers for Automating Their Own Jobs

Having any association whatsoever with AI has become the black mark against someone’s name necessary to justify attacking them. All of the talk about “ethical training” is simply a smokescreen to cover up for their total rejection of the technology on protectionist grounds, which is why their attacks against artists are not limited to what would count as “bad actors” on their own premises, but rather are targeted at anyone who would dare touch this more efficient technology.

This is seen quite clearly in those cases of YouTubers automating a task that they themselves would normally do—premydaremy is apparently not allowed to use AI art in his thumbnails anymore:

img

What exactly is the suggestion here? That YouTubers start commissioning bespoke art to use as the thumbnail for every video they make? What an utter absurdity—it is not an uncommon strategy to frequently swap out different thumbnail and title combinations to perform A/B testing. Should YouTubers be commissioning artists for each combination and paying for the rush job that would be required to get the video out in time? Nobody does this—nobody runs their channel this way. For the most part it is the YouTuber himself who is making the thumbnail—no artist on planet Earth lost their job to this, because no artist was ever going to be hired for such a job!

Austin McConnell is producing a book about a virtually unknown public domain superhero on a shoestring budget befitting the very limited revenue such a project is likely to make. This budget was already going towards paying for:132

So, this is clearly not the image of a greedy, evil man attempting to wring every penny out of creation and destroy art in the process—this is a guy who wants to get together with other creatives and bring something cool into the world. McConnell explains that he is “forever dealing with the challenge of converting youtube viewers to actual readers of my work. So I decided that I would put together the first chapter in an animated story-book style, with the full audio production featuring different voices, music, sound effects, the whole nine yards, and then release it totally free for people who were either on the fence about picking up a copy and checking out the whole story, or who were just looking for something entertaining to spend about an hour watching.”133

The creation of this totally free animation took Austin about five months of painstaking work134 to promote a book that he was not even expecting to make all that much money from.135 This is clearly a good thing: he has found a way to get the story in front of more people at no extra cost to them. But in the process he committed the mortal sin of using the labour-saving technology that is generative AI—thus drawing the ire of the art-protectionist egregore.

This animation simply would not have been produced without the advent of AI,136 and it is far from being on the level of the Wynand mega-feed slop which Zapata fears:

This storybook features various animation effects and motion graphics that were not AI-generated—they were painstakingly crafted or added by me in a non-linear editor. […] The visuals for the spider-queen storybook were produced using a variety of resources including: 2D art and character designs I commissioned or drew myself, 2D and 3D assets I purchased from royalty-free stock footage websites, generative fill and neural art filters available in either photoshop or Adobe firefly–which is an AI trained only on stock images owned by the company, public domain content, or other openly-licensed or non-copyright material […]–and a diffusion model trained on the above materials.137

And that’s the kicker—Austin was in complete compliance with the arbitrary ethical AI standards that have been championed by the art establishment and they still go after him. “You hired human artists for this? Not enough. You used only public domain and licensed assets? Not enough. You put real blood, sweat and tears into it? Not enough.”

Do you see why I call their “ethical training” concerns a smokescreen? They demonstrate by how they place their ire that they don’t give a crap about any of that—they only care if it will impact their wallets or sully the sanctity of their craft. The plain-Jane, plumb-line truth of the matter is that this AI image generation allowed Austin to push this particular piece of art–this story about a completely forgotten superhero–far beyond what would ever be possible without its use. This is a clear example of AI improving rather than detracting from art and creativity. Artists sticking together is whenever you bully artists for making art, and the more you bully them the more artists are sticking together.

But using AI like this is just lazy!

Ah yes, the classic cry against efficiency—he who transports grain by horse must simply lack legs to walk on. It’s as if we are to accept that capital goods simply push man to a state of lazing about on the beach, just barely surviving, spending the vast majority of his time on no productive effort and only a tiny fraction on collecting the resources he needs to live. We can of course see the obvious absurdity of such an example—as man gains more capital goods it is indeed true that this allows him more leisure time, but it also allows the time he spends labouring to be vastly more productive than if he were stuck using only his bare hands. That Austin is able to save time by using digital rather than film, and AI instead of pencil and paper, means that he was able to make this animation in the first place—and the making of it was anything but lazy:

Now can people use AI generators to make lazy art? Sure, absolutely, but the video that I produced could not have been replicated, as some have suggested, in a couple hours by a computer. It required a heavy and overwhelming human involvement—at the very least to write the actual book, but certainly to create this entire production. This shot here is a great example, we’ve got a generative fill background that was produced using a royalty-free 3D environment, combined with 3D and 2D royalty-free assets, using commissioned art with alterations I hand-drew. All of it composited together and animated by me in Adobe After Effects, where I added camera movement, colour correction, focus effects, timed it to royalty-free music that I have licensed and curated, featuring sound effects from a stock library with reverb and mixing that I did, integrating a vocal performance that I meticulously constructed. If you believe that this method of video production is “lazy content” all I can do is just share these screenshots of the final project timeline with you and tell you that this video took way longer to put together than I ever would have anticipated, and if you think that this is something that anybody with an AI program can do I welcome you to try and replicate it yourself in a couple of hours.138

The people making such arguments and decrying efficiency are simply anti-human. If they were truly consistent in their credo they would advocate an apocalypse—a total breakdown in the capital structure which makes man’s current flourishing on this Earth possible in the first place, returning man to the cold, damp mud out of which he has lifted himself.

But, he still should have hired humans to do the work instead of AI!

Austin explains that he tried hiring a female voice actor to voice the female character, holding three separate rounds of recruitment for the role, but the quoted rates were far out of budget and none of the applicants were right for the part.139 He similarly reached out to freelance artists to see if any would be willing to draw the over-50-minute animation, and was either turned down due to the immense scope of the project, or quoted prices that would vastly dwarf any returns he could hope to get on the book the short was supposed to be advertising. This is the fundamental point—this animation simply would never have been made without the advent of AI. Nobody on Earth was going to be able to pay for it, and human artists are not entitled to someone paying for it. We now have more art because of the brilliant work of AI developers.

Bullying Shadiversity for Teaching People how to Use AI

Shadiversity has been perhaps the most prominent defender of AI thus far, so it is not particularly surprising that the mob has turned their attention to him also. He recently posted a challenge to prove that AI image generation is not just “type prompt → get final image,” and he was proven entirely correct.

![img](/blog/ai-art/shad sword girl challenge image.png “Shadiversity’s challenge to anti-AI artists”)

![img](/blog/ai-art/shad sword girl completely different in replication.png “A pitiful attempt to re-create Shadiversity’s image with a mere text prompt.”)

For both of these images the character looks nothing like Shad’s character, the poses are completely different, the swords are completely different, the grip is completely different, and the swords are held flat-on to the camera, which is not the case for Shad’s image. Again, it seems that the uncanny eye that the art-protectionist has for spotting every minor flaw in AI image generation is completely absent whenever it is useful for it to be. “But aha! Silly Shad, didn’t you know that even in this challenge you posted the hand is obviously not anatomically correct—we win you lose!”

![img](/blog/ai-art/shad sword girl hand broken.png “A twitter user claiming that Shad’s challenge image has anatomically incorrect hand placement.”)

This, of course, ignores the fact that Shad based the grip on an actual historical grip, one which he can recreate in person.

![img](/blog/ai-art/shad recreating grip.jpg “Shadiversity recreating the grip position depicted in his art.”)

Shadiversity recognises that an artistic weakness of his is in the colouring of his line-drawings,140 that he has access to AI image-to-image software allows him to overcome this weak point and more fully realise his artistic vision. It takes a special kind of viciousness to wish to rip this ability from him.

Nor does this imply in any way that Shad is not still capable of immense artistic input. As explained, one of the key aspects to an objectively good piece of art is selectivity–specificity–making micro-adjustments to dial in the absolute perfect representation of the metaphysics that you are trying to embody. It would be easy enough to copy a prompt, or to type “I wan supergirl who is flying and then there is a red skirt and then there is sci-fi armour”—but it is far more challenging to know what exactly to change in the prompt, where to inpaint, which settings to tweak, how to tweak them, and to select the best elements to photobash together for the final result. That we have dozens to hundreds of popular tutorials on how to use these tools is surely indicative that there is more to the process than “click button → get art.” Even before Shad does any inpainting or photobashing or illustration on top of the AI art, his prompts are fucking gigantic—clearly he is trying to get a superior level of specificity than is possible with a shorter prompt. Karla Ortiz recognises this, saying that “the more intricate and the more detailed the prompt [is], the better the image the AI can give you.”141

Reid Southen counters that this is just Shad not understanding how prompting works, which Shad disputes:

There are parts [of] the AI image generation that we don’t understand how it’s working […] and so the reason why there’s so many duplications in the negative prompt is because it actually affects the result. I’ve tried differences and by this repetition the [difference in] results are actually quite extreme.142
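For what it is worth, repetition is a blunt form of the token emphasis that some Stable Diffusion front-ends expose directly: the AUTOMATIC1111 web UI, for example, parses `(term:1.3)` as a weighted term in a prompt. A small helper for assembling such a weighted negative prompt might look like the sketch below—the weight syntax is the web UI’s, but the helper function itself is a hypothetical convenience of mine, not part of any tool.

```python
def weighted_negative_prompt(terms):
    """Build a negative prompt string using the (term:weight) emphasis
    syntax understood by the AUTOMATIC1111 Stable Diffusion web UI."""
    parts = []
    for term, weight in terms:
        if weight == 1.0:
            parts.append(term)               # default weight needs no syntax
        else:
            parts.append(f"({term}:{weight})")  # explicit up/down-weighting
    return ", ".join(parts)

neg = weighted_negative_prompt([
    ("blurry", 1.0),
    ("extra fingers", 1.4),  # pushed harder, instead of duplicating the term
    ("bad anatomy", 1.2),
])
print(neg)  # blurry, (extra fingers:1.4), (bad anatomy:1.2)
```

Whether plain duplication or explicit weighting steers a given model better has to be dialled in empirically—which is rather Shad’s point.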

The artists-against-AI love to nitpick every last detail in every image they even suspect of having been touched by the hair on an AI’s head,143 and they have also shifted to–I would say–engaging in some less-than-good-faith critiques of Shadiversity’s non-AI art. The idea being that he was too lazy to get good and was self-conscious about this, so he used AI to cheat! What they rest their attacks on is this idea that if they can point out every little thing in an AI image that doesn’t match up to reality, then AI art is completely worthless and worthy of attack. What this amounts to is a sort of naïve aesthetic realism—“oh, that lighting is not as it would be in real life, therefore it’s bad”—“that anatomy is not as it would be in real life, therefore it’s bad.” They of course ignore that art is not a process of capturing every detail in reality in some sort of “slice of life”—rather its purpose is to distill and represent only those aspects of reality that are important.

On that note, this exact same nitpicking procedure–which the anti-AI crowd have so gleefully taken part in since the technology’s release–can be trivially used against the art that they produce as well.

![img](/blog/ai-art/picasso womans head.jpg “Picasso’s ‘Womans head and self portrait’”)

Uhhh, Pablo? That’s not very accurate anatomy!

img

Why does the baby Jesus look like a tiny but fully grown man? That’s not what things are really like!

img

Does Steven Zapata not know how many eyes people have?

![img](/blog/ai-art/bart simpson.png “Bart Simpson”)

Erm, weird head much?

Of course, I kid with these nitpicks, but I am capable of producing far less nitpicky and far stronger criticism of the type of anti-human art coming out of people like Zapata and Picasso. For Picasso, this anti-perceptual mess is his vision of reality—on such a philosophy man has no power to know anything and wherever he tries he will fall woefully short, perhaps ending up like the guitarist from before. This is an evil view of man, but it is encapsulated with a high degree of aesthetic rigour. Picasso is selective in his choice of a woman. Generally women are chosen to depict their beauty, but Picasso’s job here is to deconstruct, so he deconstructs beauty itself. Picasso is clear in what it is he is doing—he has not blurred the figure, he has rather slashed great lines across it, disintegrating the concept in front of our eyes. Where the piece suffers is on the level of integration—these precise, distinct shapes are indicative of separate concepts, but the entire pursuit is to destroy concepts and reality. We have a fundamental disintegration between the technique and the reality being depicted by it.

![img](/blog/ai-art/zapata collective anti-ai video.png “The image drawn by Steven Zapata during his anti-AI video.”)

We can do the same to Zapata’s work—the view of man depicted here is truly gruesome. Looking over his portfolio, rarely are we met with any vision of man as an individual, or man as a conqueror, or man as triumphant—rather we get grotesque collectives mashed together in a seemingly endless torture of every individual present. The individual on such a view does not exist; he is irrelevant, a fragment torn from God, or a cell of the collective body—his suffering is thus similarly irrelevant, for the collective is all that matters. Despite this, Zapata demonstrates an intimate knowledge of the human form, but sees it as deficient. Rather than depicting man as he is, fully functional in every muscle, not a hair out of place, he depicts man as requiring physical modification to properly fit within reality. The idea being presented here is that man’s actual body is insufficient to meet his supposed collectivist nature. That he is able to execute these artistic equivalents of The Communist Manifesto with such a high degree of technical ability, betraying a deep connection with them, means I can’t help but chuckle whenever he claims to be pro-human—especially when combined with his overt protectionism.

![img](/blog/ai-art/Reid Southen slop-monger.png “The “selected work” section of Reid’s website”)

Reid Southen has been very outspoken in his opinions on this issue, fighting against the AI slop that threatens him.144 So let’s take a look at the “selected work” on the front page of his website. We have: The Matrix: Resurrections, Jupiter Ascending, Transformers: The Last Knight, The Woman King, The Hunger Games, and Independence Day: Resurgence. Well thank goodness Reid is the stalwart defender of humanity against the potential slop-factory that is AI. I just don’t know what on Earth we would do without the high art that is The Woman King or the innumerable rehashes of old movies that seem to make up the bulk of his career. Remember guys: AI is bad because it just rehashes what has already happened—pay no attention to the man behind the criticism.

One distinction one might wish to draw between the sort of nitpicking I am doing and that done against AI is that, for the most part, the things I point out were intentionally put there by the artist in question. In the case of Picasso and Zapata, that what they are doing is intentional makes it an even greater damnation; and further, that a decision was intentional does not seem to present any sort of barrier for the anti-AI side. Shadiversity made it clear that he enjoyed the look of the sun behind SuperGirl and wanted to get a good look at her front, so much like how movies will light up every inch of the screen, he wanted to use some unrealistic lighting. You might disagree with his intention here, but then we are back to me strongly disagreeing with the type of metaphysics embodied in a work by Zapata or Picasso.

Now, is it fair for me to go on such tirades against people’s work due to a disagreement over the benefits of AI technology? If not, then the same rule should be applied to the AI-nitpickers—either we all get to nitpick and the actual arguments sink to the background, or none of us do and we actually discuss the technology without lying about what our eyes see. Everyone knows that both AI and non-AI methods can make for highly appealing images, all pointing out flaws does is demonstrate Shadiversity to be correct when he says that having an artistic eye implies that one can make better AI art.

Bullying Corridor Digital for Experimenting with AI

Corridor Digital, a prominent VFX channel, decided to experiment with the new creative possibilities which AI affords. The response from artists? Ceaseless hate and flaming: put those silly VFX artists in their place, twitter—how dare they seek to continue the same experimentation they always have; don’t they know that AI is now forbidden, so sayeth Art? Zapata’s insistence that artists are a pro-technology and pro-experimentation community seems completely absent in the response to this.

Corridor is “absolute cancer,” they “deserve [zero] support,” the actual people behind this are “garbage”—it’s not even that the technology has flaws, or could be made in a more ethical way; no, it’s that you are actual garbage or a “loser piece of shit [hack] with no fucking talent” for daring to experiment with unapproved technology. Corridor are “not real artists” anymore; they have committed the sin of experimenting with the wrong tech and now cannot be considered as such—I hope you see now why I see fit to bring up that many of these people rest their arguments on a theory of art which deems that it is art if mystic priests say it is, and only then. Way to support artists and creativity, guys!

img

And I know that the response here is going to be that Corridor started it, that Corridor weren’t supporting artists, but this just isn’t the case. All of the backlash I am showing here is directed at their second Anime Rock Paper Scissors video, where they followed the calls to support human artists. They were about as explicit as you could possibly be with where they got the style from, and who contributed what to the final product—but of course, it is never enough for these people. “Oh, you hired someone to design the characters and made them fully aware that it was to be used for training an AI?145 Not enough.” “Oh, you reached out to the community to get help on the editing with bounties on completing scenes? Not enough.” “Oh, you highlighted and praised the musician who did the theme song? Not enough.”

There is no rational basis for these attacks; this isn’t in response to some evil committed by Corridor—the mere fact that they are using technology that the art sages have deemed bad is the beginning and end of the entire thought process. These people are still outraged at Corridor for hiring a human artist to design a shirt for them, because Corridor is now irreparably tainted by sin in their eyes—if the problem truly is with not getting artists’ consent and not supporting artists, then people should be praising Corridor for finding a way to do both. What Corridor have done with this video strikes me as being essentially identical to what has been done by people like Joel Haver, or even in movies like A Scanner Darkly. Joel doesn’t animate every single frame; he draws a few keyframes and has a computer fill in the rest146—where is the #CancelJoelHaver movement? Where are all of the people dunking on him with “umm, acktually Joel, this is roto-scoping, not animation”? I certainly can’t see it.

![img](/blog/ai-art/know the work rules corridor joel haver.jpg “An edited ‘Sunny Street’ comic, depicting Joel Haver standing by a woman, who says ‘aww, you’re sweet,’ who then in the next panel is confronted with the Corridor animation and says ‘Hello, human resources?!’”)

Disney animator, Aaron Blaise, has quite a different take on the Corridor video than the Luddite hordes, stating that “it’s not too different than what we did on Beauty and the Beast […] it’s the same idea.”147 He tells us that the much-bemoaned flickering present in the first short “adds life to the final image.”148 He explains that AI generated art is “no different” from what has been done before,149 and gives kudos to Corridor for their experimentation.150

A great deal of the backlash completely misses the point with these videos: well done, you found a frame where the hand looks messed up,151 get yourself a cookie. The purpose of the video was not to show how to make an anime, it was not to demonstrate the process that a human animator goes through, this is a visual effects channel, they were testing out a new visual effects technique and showing where it shines and where it doesn’t. Either these people are being deliberately dense such that they may join in on the pile-on, or their mind consists only of recycled ideas that they can’t quite grasp how to properly apply.

“Oh, you’ve slipped the point there, Zulu—Corridor are a VFX studio, they only don’t care about this because it’s not threatening their jobs. Just you watch, the second AI starts making VFX they will change their opinion on a dime!”152 Now, I might accept this accusation, if not for the fact that Corridor has a 20-minute long video153 on their channel geeking out and getting super excited over a new AI technology that can replace the job of a VFX artist in making a CG character. Like, a very significant portion of the Corridor channel is dedicated to showing off experimental labour-saving techniques that, by the logic of the anti-AI crowd, put them at risk of losing their jobs. This is the difference between Corridor and the protectionists—the people at Corridor need to constantly innovate; they aren’t able to rest on the artistic laurels of “everything is art” and “everything is subjective” that were established in the wake of Kant. Corridor are on the bleeding edge in an industry that rarely achieves any sort of respect—they cannot fall back on pompous “you wouldn’t get it”s; the work of the VFX artist must be impeccable if he is to sustain himself—no government grants are available for his gallery of work.

The backlash against Corridor here is not limited to a few isolated examples either; this is the expected result whenever anyone is met with the raging eye of art twitter for committing whichever sin it has decided upon this month. For all the cries from the anti-AI crowd that we be more empathetic towards artists, I find a complete lack of empathy whenever it comes to someone using the technology they despise—even when done in compliance with their arbitrary ethical standards. If you want people to honestly believe that your community is all about sticking together, and supporting the human behind the screen, then call this shit out—there is no significant counterweight to this shitstorm coming from within the art community.

Giving Credit Where it is Due

Now, I will give credit where it is due: some artists have recognised this problem, as shown on Art Cafe:

[MACIEJ KUCIARA]: There’s a complete lack of empathy that’s just like, it’s just sad to see, you know? It’s like: “ah yeah you fucking artists,” or like “oh, yeah this lawsuit’s going to kill this AI shit.” It’s like, I don’t know man.

[SAM YANG]: On both sides, like […] for the people who use AI, on their side its like: “oh, you guys are just whiney, you’re just crying about it, suck it up and just deal with it.” It’s like, you don’t understand [that] this has been people’s livelihoods and their careers for so many years, this has been their passion, this has been something that they care about. And then on the artists’ side, you know artists are like: “oh, we need to snuff this thing out, we need to kill this AI thing, this shouldn’t even exist,” it’s like, ok well you’re not looking at the benefits that it could bring to millions of people in the future.

[MACIEJ KUCIARA]: Or even just production, like if you are a director or someone who works with a team, I know it sounds horrible, to say like, “I don’t need to hire that many artists anymore,” but on the other side though, it’s just like it’s so expensive–especially if you’re not like a big studio, or a big director, or you don’t have millions of dollars of budget–[…] like I wanna make my short film, […] I don’t have like even [a] few-thousand dollars, […] I don’t have like 20-, 30-, 50-thousand dollars to pay an artist to make like a bunch of backgrounds for my film that’s not going to even make money because I’m not popular enough.154

Now, I am not an AI artist, or a regular artist, and quite frankly I don’t give much of a damn if people are mean to others online—what I do care about is intellectual consistency. If artists want to be mean and throw shit at people for doing things they don’t like, then they can’t claim some moral high ground in advocating that people stick together and have empathy for one another—and they also need to back up said shit-throwing with some decent arguments. It is clear that for the vast majority of those upset over AI, the use of such technology makes one persona non grata not out of any actual immoral deed, but rather because the high priests of art deem it to be so.

The Unethical Practices of AI-Artists

The Kim Jung Gi AI

The anti-AI crowd have a few examples up their sleeves of AI artists engaging in unethical practices, which I would be remiss not to cover here. Perhaps the most prominent is that of a person training an AI to make art similar to Kim Jung Gi’s shortly after his death. The argument was seemingly that this was in some form “spitting on his grave.” Now, I can’t imagine why his death is at all relevant to the ethical calculus—is spitting on a grave worse than spitting on the living man? Only if death is more sacred than life—if we are to accept death-worship and life-hatred, then yes, this is a sound argument. “Kim Jung Gi has escaped the clutches of life—we should not use his work to better the living, it should die with honour” is the basic argument being made here.

Now, these types of hyper-specific AIs do perhaps involve a slightly different ethical calculus. If I train an AI specifically to mimic a particular artist, I would call that second-hand—man cannot live on consistent second-handedness, and it is certainly aesthetically poor to produce such borrowed art. However, if one uses the Kim Jung Gi AI, or any similar AI, to incorporate different aspects of the style into new works, this is fine. For instance, if you merely want the model to know how comic-book style in general is done, so that it can be replicated through image-to-image because the art you are making should be in comic-book style, then this is not inherently second-hand.

The Sam Yang AI

Now, there is a living example, namely the various LoRAs made to imitate the style of Sam Yang, so at least here we don’t necessarily have any sort of death-worship on the part of the anti-AI crowd. But do you want to know why the AI community seemingly out of nowhere decided to start targeting Sam with all of these LoRAs? The reason–and this is something that almost always goes unsaid–is that Sam is the one who drew first blood.155

That’s right: Sam wasn’t just going about his day when all of a sudden the StableDiffusion subreddit decided that they all wanted LoRAs of his style. Rather, what happened is that one guy decided to integrate Sam’s style into his generations, and released a LoRA for others to do the same. In response, Sam called the user out to his two million Instagram followers, who then proceeded to bully the person off of the internet. Then, in retaliation, other users banded together in protest of these moves by making more LoRAs of Sam’s work—this very much strikes me as a sort of Streisand Effect. “Oh, you don’t want this to happen and you are going to harass people into submission for using their own property how they wish? Fuck you, I won’t take it, here’s several dozen more LoRAs with way more popularity.”
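Since LoRAs come up repeatedly here, a brief technical aside may help: a LoRA (low-rank adaptation) is not a cache of anyone’s images. It is a small pair of trained matrices whose product gets added onto the frozen weights of the base model. A minimal numpy sketch of the arithmetic, with toy sizes and names of my own choosing rather than any real model’s architecture:

```python
import numpy as np

def apply_lora(W, A, B, alpha=1.0):
    """Return the adapted weight matrix W' = W + alpha * (B @ A).

    W: (out, in) frozen base weight; A: (r, in) and B: (out, r) with a
    small rank r. Only A and B are trained and distributed, which is why
    a LoRA file is tiny compared to the model it modifies."""
    return W + alpha * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4          # toy sizes; real layers are far larger
W = rng.normal(size=(d_out, d_in))  # frozen base weights
A = rng.normal(size=(r, d_in))      # trained low-rank factor
B = np.zeros((d_out, r))            # B starts at zero: the LoRA begins as a no-op

W_adapted = apply_lora(W, A, B)
assert np.allclose(W_adapted, W)    # zero-initialised B leaves the model unchanged

B = rng.normal(size=(d_out, r))
delta = apply_lora(W, A, B) - W
assert np.linalg.matrix_rank(delta) <= r  # the update can never exceed rank r
```

The point of the sketch is that the “style” lives in a small perturbation of the network’s weights, not in any stored artwork; that distinction matters for the plagiarism arguments discussed above.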

Can You Tell When Art Is AI Generated?

We are told that AI generated images lack that human touch, that they can be spotted from a mile away as mere computer imitations, that even the drawing of a child outshines anything that an AI could make.


The more honest artists against AI will freely concede the point that images generated by AI can indeed look very impressive.156 AI is capable of executing techniques that either did not exist previously or were certainly not widely known. The image-to-image technique allows artists to superimpose patterns that are then naturally filled in,157 creating a very unique effect. Can you honestly tell me that you saw such things as working QR codes integrated into a piece of art prior to the invention of these image-to-image systems? It is certainly possible for a human to do this on his lonesome, and perhaps it was done here and there—but the pure speed and ease with which everyone can now experiment with such techniques dwarfs anything that has come before. Insofar as AI image generation is capable of producing works of a higher quality than the human alternative, it is not only anti-human but also anti-art to oppose it.
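To make the image-to-image technique concrete: rather than starting from pure noise, the model noises a user-supplied image (a spiral, a QR code) only part of the way, then denoises it toward the prompt, so the supplied structure survives into the final picture. The following is a toy sketch of that control knob, with a stub standing in for the learned denoiser; it illustrates the idea only and is not Stable Diffusion’s actual noise schedule:

```python
import numpy as np

def img2img_toy(init_image, strength, denoise, steps=50, seed=0):
    """Toy image-to-image loop: partially noise the init image, then run
    only the remaining denoising steps.

    strength in [0, 1]: 0 keeps the init image untouched, 1 discards it
    and starts from pure noise. `denoise` stands in for the learned
    denoising network (here just a caller-supplied stub)."""
    rng = np.random.default_rng(seed)
    t = int(strength * steps)              # how many steps' worth of noise to add
    noise = rng.normal(size=init_image.shape)
    a = 1.0 - strength                     # toy signal-retention schedule
    x = a * init_image + (1 - a) * noise   # partially noised starting point
    for _ in range(t):                     # only the noised-over steps are re-run
        x = denoise(x)
    return x

identity = lambda x: x                     # stub denoiser for the demo
img = np.ones((8, 8))
assert np.allclose(img2img_toy(img, strength=0.0, denoise=identity), img)
```

In real pipelines this knob is commonly exposed as a `strength` parameter: low values preserve the QR code’s geometry almost exactly, while high values let the prompt dominate.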

I do not doubt that, just as photography inspired the impressionist movement, human artists will learn from and build upon these new aesthetic tools. When you have people being silenced for having art that is “too similar” to something that an AI might make,158 it becomes clear that whatever honest concerns about AI image generation exist have been quickly swamped by and swallowed up in a total assault on “undesirable” artists. The time it takes for the anti-AI Stasi to evaluate whether your art is “acceptable” is growing with every new iteration of these models; but in the words of Angela: “if you can’t tell, does it matter?”159

![img](/blog/ai-art/reddit ban artist.png “Reddit moderators for r/Art ban a user for having art that has a style too close to AI”)

The cries from these artists to band together against AI ring particularly hollow when mere months ago the mob of twitter artists sought to ostracise and bully anyone who dared to monetise their work through the use of NFTs.160 It is rich to hear these people complaining about a computer program learning how to make art off of their backs immediately after the almost universal prevalence of “haha, I just right-click saved your NFT dumbass!” Let’s even consider the artists-against-AI proposal that image models compensate artists when their art is used. Putting aside the nonsense claim that individual pieces of art are “used” to generate the images, as if it was photo-bashing or collage—what these artists are asking for is for there to be a way to prove that they made a piece of art that is then used for further AI generations. Tell me: how is this at all dissimilar to what NFTs are? An NFT is a cryptographic token that allows you to prove that you produced some arbitrary data, such as an image or video. The ethical AI model that they champion is built on the back of the NFT technology they so dearly hate.161
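The parallel can be made concrete: stripped of the blockchain, an NFT reduces to a cryptographic commitment binding a creator’s key to a hash of the work, which is precisely the primitive an attribution registry for training data would need. A toy sketch of that primitive, using an HMAC in place of a real public-key signature so the example stays self-contained; this is illustrative only, not any deployed NFT standard:

```python
import hashlib
import hmac

def sign_artwork(secret_key: bytes, image_bytes: bytes) -> str:
    """Toy provenance record: commit to an image with a keyed hash.

    This is the kernel both NFTs and a "prove you made it" registry rely
    on: a cryptographic binding between a creator's key and a hash of the
    work. (Real NFTs use public-key signatures recorded on a blockchain;
    an HMAC keeps this toy self-contained.)"""
    digest = hashlib.sha256(image_bytes).hexdigest()
    tag = hmac.new(secret_key, digest.encode(), hashlib.sha256).hexdigest()
    return f"{digest}:{tag}"

def verify_artwork(secret_key: bytes, image_bytes: bytes, record: str) -> bool:
    """Check that `record` was produced by `secret_key` over these exact bytes."""
    return hmac.compare_digest(record, sign_artwork(secret_key, image_bytes))

key = b"alice-private-key"                  # hypothetical creator key
art = b"\x89PNG...bytes of the artwork..."  # stand-in for real image data
record = sign_artwork(key, art)
assert verify_artwork(key, art, record)             # genuine work verifies
assert not verify_artwork(key, art + b"x", record)  # any alteration fails
```

Whether one likes NFTs or not, any compensation scheme that pays artists when “their” images are used must solve exactly this binding problem, which is the inconsistency the paragraph above points at.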

IP For Me, but not For Thee

The anti-AI art crowd will laugh and jeer at the fact that AI generated comic book art was unable to be copyrighted by the author162—I celebrate this fact. All of these new and exciting creative tools could be left unmarred by the evil clutches of intellectual monopolies. The artists implicitly understand this—the very popular task of creating fanart of different characters is a blatant violation of supposed intellectual property rights held by those companies who control those characters. You aren’t going to be seeing any big-budget movie including a fanart character of Iron Man—they would be sued into oblivion. Stop the double standard—either you accept fanart and reject IP, or you reject fanart and accept IP. If you want to make the legalistic argument against AI that it infringes copyright, then I expect to see you in the trenches fighting against the massive copyright infringement coming from the fanart community.

Rebecca Blake of the Graphic Artists Guild tells us that da Vinci’s heirs shouldn’t be compensated for reproductions of The Mona Lisa because he died more than 70 years ago.163 This is so blatantly an arbitrary standard—the beginning and end of her ethical argument on any topic is “gubmint said good, so it good,” or “gubmint said bad, so it bad.” I would expect a more sophisticated moral philosophy from a three-year-old—at least they can implicitly recognise that other kids snatching their toys is wrong, no matter what the grown-ups say about sharing.

![img](/blog/ai-art/copyright strike IP AI art we did it patrick we saved art.png "Twitter user @snitchanon tweeting an image of a youtube video no longer available due to a copyright claim with ‘We did it, Patrick! We saved art!’ over top and the caption ‘Post Stable Diffusion Lawsuit, 2025.’")

Just as the open source software community is able to create wonderful applications that a user is free to change to suit his purposes, we may be entering a golden age of free and open source artworks, where budding creatives can build upon the prior body of knowledge without fear that Disney or Nintendo will strike them down with the hammer of Uncle Sam. Small artists already act as if they don’t have IP—they watermark their work or send low-resolution versions until they get paid for their commissions. I have yet to see Disney watermark any Marvel movies. IP helps only the large, established monopolists, and does nothing but hurt the creativity of those small, independent artists who champion its expansion.

The pro-IP artist wants it both ways—IP for me but not for thee, as Lois van Baarle demonstrates:

[…] you know […] how Olivia Rodrigo had that song and then she had to […] credit Paramore as a song writer because it vaguely resembled a song that Paramore [made]. And I remember listening to it and being like: “[…] it’s not the same song. It’s like the song.” If I could–as an artist–[…] sue people whose work vaguely resembles mine, that would be like not only […] a lot of people but that would be cruel, because that would limit people’s creativity enormously and also my own because a lot of my work is deeply inspired by the work of other artists. So it’s really interesting how that works, like in art it’s quite broad—you are allowed to mimic styles, as a human, humans are allowed to mimic styles, let’s make a distinction between humans and robots.164

So Lois recognises that IP is a force destructive of creativity, that preventing people from using the stylistic choices of others hinders the creation of art, and yet she wishes these same forces to be applied to more art than they currently cover. Which is it? Is IP good or bad? You can’t eat your cake and have it too: either you prevent the use of similar artistic motifs and thereby attack the ability of artists to create, or you allow it and thereby allow people to train AIs to do this task. There is a naked inconsistency in Lois’ view: she realises that what she is advocating against actually places her on the side of the AI image generators, and decides to arbitrarily particularise her argument. It very much strikes me that she was making this argument, one artists have made for countless years, and caught herself realising that she forgot to download the new programming, since this argument would open the door for the AI defenders to demolish the plagiarism point. She must elucidate some relevant distinction between AI image generation and human artistry from which she derives her distinct ethics, else she is left with an irreconcilable contradiction. We on the pro-AI side are often labelled as bootlickers for big corpo, all whilst the anti-AI side are the ones who wish to expand IP, helping some of the most powerful corporations on Earth. You can’t rightly class yourselves as the underdogs fighting against the big and powerful whilst championing an expansion in the reach of the ever-powerful tentacles of The Mouse.


You should be able to see how this applies to the commonly cited “gotcha!” against Stability AI—that they deliberately avoided any copyrighted material in their music-generation AI because they were afraid of litigation. All this shows is that the music industry is more fully strangled and devoid of innovation—that visual artists wish to follow in its steps is very worrying. The truth is that, as Lois correctly pointed out, the many abuses and evils we see in creative endeavours are caused by government intervention in the form of enforcing intellectual monopoly grants, which makes it especially concerning that so many artists are calling for a government solution to this government problem. To learn more about this, watch this video165 where I explain why copyrights and patents are a moral evil that should be immediately and completely abolished.

References

“Abandon all hope, ye who enter here.”

—Dante on watching anti-AI art videos

The following is a list of every video/article/blogpost/etc. that I made use of in the research for this video. I do not believe that I have missed any (apart from perhaps an errant tweet here and there, which are largely irrelevant). I make this available to you in case you wish to trace my intellectual lineage for whatever reason. A number of these have been either so repetitive of the general trend or so uninformative that they have had zero impact on this script—but I simply do not wish to search through and filter these out, so you get the whole hog.

Footnotes

1 Ethan Becker, 3 STEPS TO INSTANTLY FIND YOUR STYLE| NEVER draw from IMAGINATION!, https://youtu.be/NEvMHRgPdyk?t=24, t. 00:24

2 Concept Art Association, AI Town hall hosted by Karla Ortiz & Concept Art Association, https://youtu.be/LYO9sii1eKA?t=628, t. 10:28; Karla claims after this that she is “over-simplifying” the process, but this is false—she is not over-simplifying, she is fundamentally misrepresenting what it is doing. “Filling in the blanks” is a simplification of what is happening that does not misrepresent what is fundamentally occurring.

3 Dustin Podell et al., SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis, https://arxiv.org/pdf/2307.01952.pdf, Table 1

4 Elephants weigh ~15,000 lbs; 15,000 / 25,000 = 0.6; 0.6 lbs is ~270g; 270g is in the range of a three-week-old kitten.

5 Steven Zapata Art, The End of Art: An Argument Against Image AIs, https://youtu.be/tjSxFAGP9Ss?t=1971, t. 32:51

6 Mother’s Basement, AI “Anime” - An Insult to Life Itself (re: Corridor & Netflix), https://youtu.be/GOwxXj1EIXM?t=359, t. 05:59

7 See: Mother’s Basement, AI “Anime” - An Insult to Life Itself (re: Corridor & Netflix), https://youtu.be/GOwxXj1EIXM?t=496, t. 08:16

8 Jazza, FIVERR Vs AI - Is this the END for ARTISTS?, https://youtu.be/5GO2xKmZsVo?t=706, t. 11:46

9 Hello Future Me, The AI Art Apocalypse, https://youtu.be/9xJCzKdPyCo?t=1292, t. 21:32

10 This is the title of the chapter starting at 09:00 within Hello Future Me, The AI Art Apocalypse, https://youtu.be/9xJCzKdPyCo

11 Shadiversity, The BIGGEST LIE about Ai ART, https://youtu.be/8eokIcRWzBo

12 Adam Duff LUCIDPIXUL, AI ART ft: Istebrak, Steven Zapata, Hardy Fowler, Kelsey Rodriguez, Tyler Edlin & Antonio Stappaerts, https://youtu.be/5JaS7APpO8U?t=1617, t. 26:57

13 https://c4sif.org/2010/12/galambos-and-other-nuts/

14 Art Cafe, AI and The Future of Art with Steven Zapata - Art Cafe #134, https://youtu.be/qTB7tFZ2EFc?t=8340, t. 02:19:00

15 Art Cafe, AI and The Future of Art with Steven Zapata - Art Cafe #134, https://youtu.be/qTB7tFZ2EFc?t=8899, t. 02:28:19

16 Art Cafe, AI and The Future of Art with Steven Zapata - Art Cafe #134, https://youtu.be/qTB7tFZ2EFc?t=8956, t. 02:29:16

17 See: LiquidZulu, “The Error of Conflating the Potential with the Actual,” in id. Brain, https://liquidzulu.github.io/brain/note/the-error-of-conflating-the-potential-with-the-actual

18 Steven Zapata script on the Library of Babel (https://archive.ph/PNlEF)

19 Steven Zapata Art, Miles Johnston: Cheeky AI, The Total Set of Images, and Polite Revolution, https://youtu.be/EqpWfYX7ou8?t=190, t. 3:10

20 Steven Zapata Art, Miles Johnston: Cheeky AI, The Total Set of Images, and Polite Revolution, https://youtu.be/EqpWfYX7ou8?t=3033, t. 50:33

21 See: Lindybeige, Forests in the olden days, https://www.youtube.com/watch?v=zVPUFMwm73Y

22 Adam Duff LUCIDPIXUL, AI ART ft: Istebrak, Steven Zapata, Hardy Fowler, Kelsey Rodriguez, Tyler Edlin & Antonio Stappaerts, https://youtu.be/5JaS7APpO8U?t=6282, t. 01:44:42

23 This argument in particular is attacked later, in AI Took Our Jerbs!

24 Solar Sands, Doomed To Be Replaced: Is AI Art Theft?, https://youtu.be/nIRbN52PA0o?t=1430, t. 23:50

25 Zapata mentions this in: Our Painted Lives, AI Debate with @StevenZapataArt, https://youtu.be/XV1_lJZnqFE?t=2288 t. 38:08; but he is using it for a completely separate argument.

26 Somepalli et al., Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models

27 https://laion.ai/blog/laion-5b/, https://archive.ph/GNFeM

28 Somepalli et al., Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models, fig. 1 caption.

29 Concept Art Association, AI & Industry Town Hall featuring US Copyright Office, https://youtu.be/7u1CeiSHqwY?t=1244, t. 20:44

30 Diffusion models in general are relatively very new—there is no reasonable doubt that future models will be even less prone to over-fitting than current ones.

31 Carlini et al., Extracting Training Data from Diffusion Models

32 Carlini et al., Extracting Training Data from Diffusion Models, “We now extract training data from Stable Diffusion: the largest and most popular open-source diffusion model [58]. This model is an 890 million parameter text-conditioned diffusion model trained on 160 million images.”

33 160,000,000 * 0.00128 = 204,800 (using the 160 million training images noted in the previous footnote)

34 From Carlini et al., Extracting Training Data from Diffusion Models (emphasis added): “In both cases, we assume that an adversary who attacks a conditional image generator knows the captions for some images in the training set—thus allowing us to study the worst-case privacy risk in diffusion models. […] Figure 5 shows the results of this analysis. While we identify little Eidetic memorization for $k < 100$, this is expected due to the fact we choose prompts of highly-duplicated images. Note that at this level of duplication, the duplicated examples still make up just one in a million training examples. These results show that duplication is a major factor behind training data extraction. […] We follow the same procedure as earlier but focus on the top-1000 most duplicated prompts for computational reasons.”

35 https://haveibeentrained.com/

36 For the curious, I would warn against looking this individual up—it seems they make quite a lot of furry pornography.

37 DeviantRahll, A Criticism of Shadiversity’s “AI Love Letter”, https://youtu.be/svsTKrdSd7s?t=252, t. 04:12

38 Complaint at 3, Getty Images (US), Inc. v. Stability AI, Inc. (D. Del. 2023), https://copyrightlately.com/pdfviewer/getty-images-v-stability-ai-complaint/?auto_viewer=true#page=&zoom=auto&pagemode=none

39 James Vincent, Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement, https://www.theverge.com/2023/2/6/23587393/ai-art-copyright-lawsuit-getty-images-stable-diffusion (archived).

40 See: Drew Gooden, I’m Disappointed, https://youtu.be/Vz8X2bhyNnE?t=282, t. 04:42

41 See: MattVidPro AI, Is AI Art Theft?, https://www.youtube.com/watch?v=KmYmbuL3Sbs, t. 02:32, ibid., t. 17:52

42 Beard Wizard Man further notes that most of the big art-hosting websites do not have such clauses, see: id., Thoughts on Shadiversity’s AI ’Art’ Defense (Rant), https://youtu.be/GMAns9cTQJ0?t=1227, t. 20:27

43 Consider the case of a contract that contains microscopic text—surely this text would not bind the person, they cannot agree to it if they have never even considered it.

44 This is from Solar Sands, Doomed To Be Replaced: Is AI Art Theft?, https://youtu.be/nIRbN52PA0o?t=1545, t. 25:45; “Those who develop AI should do so in the most legal and ethical way possible; AI systems and the value they create cannot exist without the data of artists’ work; therefore artists should be compensated and ask for their permission before the work is used.”

45 Adam Duff LUCIDPIXUL, AI ART ft: Istebrak, Steven Zapata, Hardy Fowler, Kelsey Rodriguez, Tyler Edlin & Antonio Stappaerts, https://youtu.be/5JaS7APpO8U?t=6111, t. 01:41:51

46 Art Cafe, Artist Authenticity vs AI with Sam Yang - Art Cafe #137, https://youtu.be/hHIbAK_larY?t=2941, t. 49:01

47 Danar Worya, THE CREATIVE’S VIEW on ART and A.I, https://youtu.be/DeeRjUfVvG0?t=395, t. 06:35

48 See: The Collective Podcast, Ep. 235 - Steven Zapata, https://youtu.be/GKnBSsoV1iE?t=9382, t. 02:36:22

49 Danar Worya, THE CREATIVE’S VIEW on ART and A.I, https://youtu.be/DeeRjUfVvG0?t=2110, t. 35:10

50 See: Frédéric Bastiat, That Which Is Seen and that Which Is Not Seen, https://fee.org/resources/that-which-is-seen-and-that-which-is-not-seen

51 If the work was less productive then they would not need the protectionism—capital naturally flows towards the most efficient lines of production on the market. See: Ludwig von Mises, Economic Calculation in the Socialist Commonwealth

52 Concept Art Association, AI/ML Media Advocacy Summit: AI Ethics Panel, https://youtu.be/lGq7S1q4T7c?t=81

53 FENNAH, Responding to Shadiversity’s ’Love Letter to ai Art’, https://youtu.be/D9GpKINVomo?t=5503, t. 01:31:43

54 Duchess Celestia, NO, AI Does NOT Democratize Art (And That’s a Dumb Defense) || SPEEDPAINT + COMMENTARY, https://youtu.be/lDtAL5IQRMc?t=578, t. 09:38

55 Concept Art Association, AI/ML Media Advocacy Summit: AI Ethics Panel, https://youtu.be/lGq7S1q4T7c?t=2040

56 https://youtu.be/iRSg6gjOOWA?t=915; https://www.polygon.com/23767640/ai-mcu-secret-invasion-opening-credits (https://archive.ph/CC6c6)

57 Murray Rothbard, Protectionism and the Destruction of Prosperity

58 See KirkpattieCake, Art is Hard. The Deceit, Egotism, and Jealousy of “AI Art” (ft. @shadiversity ), https://www.youtube.com/watch?v=dTR_k8ueAL0

59 See Istebrak speaking here: Adam Duff LUCIDPIXUL, AI ART ft: Istebrak, Steven Zapata, Hardy Fowler, Kelsey Rodriguez, Tyler Edlin & Antonio Stappaerts, https://youtu.be/5JaS7APpO8U?t=1037, t. 17:17

60 Warning, this video contains vulgar visuals of the host dressed in nothing but fishnets: Philosophy Tube, Here’s What Ethical AI Really Means, https://youtu.be/AaU6tI2pb3M?t=1665, t. 27:45

61 Philosophy Tube, Here’s What Ethical AI Really Means, https://youtu.be/AaU6tI2pb3M?t=3078, t. 51:18

62 Adam Duff LUCIDPIXUL, AI ART ft: Istebrak, Steven Zapata, Hardy Fowler, Kelsey Rodriguez, Tyler Edlin & Antonio Stappaerts, https://youtu.be/5JaS7APpO8U?t=4762, t. 01:19:22

63 Adam Duff LUCIDPIXUL, AI ART ft: Istebrak, Steven Zapata, Hardy Fowler, Kelsey Rodriguez, Tyler Edlin & Antonio Stappaerts, https://youtu.be/5JaS7APpO8U?t=5287, t. 01:28:07

64 Concept Art Association, AI/ML Media Advocacy Summit: AI Ethics Panel, https://youtu.be/lGq7S1q4T7c?t=3241, t. 54:01

65 Concept Art Association, AI/ML Media Advocacy Summit: AI Ethics Panel, https://youtu.be/lGq7S1q4T7c?t=4272, t. 01:11:12

66 Concept Art Association, AI/ML Media Advocacy Summit: AI Ethics Panel, https://youtu.be/lGq7S1q4T7c?t=3480, t. 58:00

67 Proko, AI Ethics, Artists, and What You Can Do About It, https://youtu.be/Nn_w3MnCyDY?t=1031, t. 17:11

68 Proko, AI Ethics, Artists, and What You Can Do About It, https://youtu.be/Nn_w3MnCyDY?t=1039, t. 17:19

69 https://twitter.com/evanjconrad/status/1707187112481530149

70 Proko, What AI Developers Want Artists to Know about AI, https://youtu.be/K_Bqq09Kaxk?t=3805, t. 01:03:25

71 See: Steven Zapata Art, The End of Art: An Argument Against Image AIs, t. 2:17, “[…] horrendous oversights and anti-humanist values […];” ibid., t. 4:12, “[…] opportunistic collection and exploitation […];” ibid., t. 12:04, “[…] the goal here is avoiding accountability and legal liability through tricky data laundering […];” ibid., t. 27:49, “[…] the people making these things […] need you to feel worthless […];” ibid., t. 38:11, “[…] baffling double-standard […];” ibid., t. 38:21, “[…] bad faith towards visual artists and the callous belief that it is OK to trample them in particular […];” ibid., t. 45:34, “[…] their chosen path [seems], in the most charitable interpretation, thoughtless […];” ibid., t. 47:11, “[…] wholesale theft of our creative labour.”

72 KNIGHTS WATCH, Reid Southen lies and gaslights attacking Shadiversity - REPLY, https://youtu.be/7wmtk5ERBMg?t=4989, t. 01:23:09

73 Duchess Celestia, NO, AI Does NOT Democratize Art (And That’s a Dumb Defense) || SPEEDPAINT + COMMENTARY, https://youtu.be/lDtAL5IQRMc?t=231, t. 03:51

74 Duchess Celestia, New AI Art SCAM is STEALING COMMISSIONS (+More AI Art News) || SPEEDPAINT + COMMENTARY, https://youtu.be/AoO8NBZ4qSo?t=282, t. 04:42

75 On this, see also: ChrissaBug, A message to Artists coping with the mental health blow that AI art has dealt, https://youtu.be/zx3ROK9nOYE; she explains that the video is about an “existential crisis” [01:12] about not knowing what to do “career-wise,” [01:59] but that she also “started out […] being pretty open minded to [it].” [01:32]

76 Proko, AI Ethics, Artists, and What You Can Do About It, https://youtu.be/Nn_w3MnCyDY?t=4980, t. 01:23:00

77 Proko, AI Ethics, Artists, and What You Can Do About It, https://youtu.be/Nn_w3MnCyDY?t=3373, t. 56:13

78 Duchess Celestia, Why Artists HATE AI Art! (And Why It’s NEVER Ethical) || SPEEDPAINT + COMMENTARY, https://youtu.be/Jid6Pj17OBw?t=963, t. 16:03

79 Roark’s speech in Ayn Rand, The Fountainhead, p. 602

80 https://www.youtube.com/watch?v=l1QVwfpbXMQ&lc=Ugx2k1YLQmYJtHXImGJ4AaABAg

81 Concept Art Association, AI/ML Media Advocacy Summit: AI Ethics Panel, https://youtu.be/lGq7S1q4T7c?t=2830, t. 47:10

82 https://glaze.cs.uchicago.edu/

83 Concept Art Association, AI/ML Media Advocacy Summit: AI Ethics Panel, https://youtu.be/lGq7S1q4T7c?t=3264, t. 54:24

84 Draftsmen, Why Be An Artist When There’s AI? - Draftsmen S4E01, https://youtu.be/lc-RX2AGGNA?t=3632, t. 01:00:32

85 See: Tom Scott, YouTubers have to declare ads. Why doesn’t anyone else?, https://youtu.be/L-x8DYTOv7w?t=311, t. 05:11

86 See: Westside Tyler, This AI Bro is Clueless!! | Video Breakdown part 2/3, https://youtu.be/_v0GoGeBKtA?t=225, t. 03:45

87 See: Westside Tyler, Austin McConnell’s TERRIBLE AI Apologia Video | Video Breakdown part 1/3, https://youtu.be/fdCB22pExHI?t=1834, t. 30:34

88 See: Westside Tyler, Austin McConnell’s TERRIBLE AI Apologia Video | Video Breakdown part 1/3, https://youtu.be/fdCB22pExHI?t=2792, t. 46:32

89 Erik Hoel, AI-art isn’t art, https://www.theintrinsicperspective.com/p/ai-art-isnt-art

90 Steven Zapata Art, The End of Art: An Argument Against Image AIs, https://youtu.be/tjSxFAGP9Ss?t=1729, t. 28:49

91 Living the Line, DAVE McKEAN - Interview: AI Image-making and Its Implications, https://youtu.be/I-PaCc96oQM?t=5997, t. 01:39:57

92 Steven Zapata Art, The End of Art: An Argument Against Image AIs, https://youtu.be/tjSxFAGP9Ss?t=930, t. 15:30

93 Art Cafe, AI and The Future of Art with Steven Zapata - Art Cafe #134, https://youtu.be/qTB7tFZ2EFc?t=8670, t. 02:24:30

94 Charles Baudelaire (1859), On Photography, https://www.csus.edu/indiv/o/obriene/art109/readings/11%20baudelaire%20photography.htm, (archived)

95 Art Cafe, AI and The Future of Art with Steven Zapata - Art Cafe #134, https://youtu.be/qTB7tFZ2EFc?t=6050, t. 01:40:50

96 Steven Zapata Art, Miles Johnston: Cheeky AI, The Total Set of Images, and Polite Revolution, https://youtu.be/EqpWfYX7ou8?t=2732, t. 45:32

97 Kelsey Rodriguez, Why I’m not worried about AI ART, https://youtu.be/BsG2NDQC0kY?t=537, t. 08:57

98 Shadiversity, A love letter to Ai art - you don’t need to be afraid, https://youtu.be/u_v9Gbw6kcU?t=361, t. 06:01

99 AustinMcConnell, The Spider-Queen - Chapter 1, https://www.youtube.com/watch?v=kJCkHae1dgE

100 Art Cafe, Artist Authenticity vs AI with Sam Yang - Art Cafe #137, https://youtu.be/hHIbAK_larY?t=3693, t. 01:01:33

101 See: Bonsai Pop, AI “Anime” -For Scum, By Scum (re: Corridor & Netflix, Mother’s Basement), https://youtu.be/pkU2RmN2owg?t=622, t. 10:22, “there’s [sic] so many philosophical questions about what is the value of art, and I don’t have an answer to that. But I think you know like a lot of people say [that] art is in the eye of the beholder but what I definitely can say is that AI art is not art.”—he doesn’t know what art is and yet he can “definitely” say that “AI art is not art.” It is also not uncommon to encounter some tortured mess of a theory of art that–ironically–appears to have been conjured by a process of mixing together various half-baked and out-of-context ideas that have been rattling around the constricted space that is the brain of the person in question (a favourite of such theorists is seemingly-randomly sprinkled Marxism à la Ellsworth Toohey): The Canvas, No, Ai “Art” is not Art., https://www.youtube.com/watch?v=ESZO-XJZr0s. No such theory that I have come across in my extensive research has had even an attempt made to integrate it with some broader philosophy, and they may all therefore be discarded out of hand until such a reconciliation actually occurs.

102 Leonard Peikoff, Objectivism: The Philosophy of Ayn Rand, Meridian (1991), pp. 30-31

103 Leonard Peikoff, “Art as a Concretization of Metaphysics,” in id., Objectivism: The Philosophy of Ayn Rand, Meridian (1991), p. 414

104 Leonard Peikoff, “Art as a Concretization of Metaphysics,” in id., Objectivism: The Philosophy of Ayn Rand, Meridian (1991), p. 417

105 Leonard Peikoff, “Art as a Concretization of Metaphysics,” in id., Objectivism: The Philosophy of Ayn Rand, Meridian (1991), p. 416

106 Leonard Peikoff, “Art as a Concretization of Metaphysics,” in id., Objectivism: The Philosophy of Ayn Rand, Meridian (1991), pp. 419-420

107 Leonard Peikoff, “Esthetic Value as Objective,” in id., Objectivism: The Philosophy of Ayn Rand, Meridian (1991), p. 438

108 Leonard Peikoff, “Esthetic Value as Objective,” in id., Objectivism: The Philosophy of Ayn Rand, Meridian (1991), pp. 440-441

109 Ayn Rand, “The Goal of My Writing,” in ead., The Romantic Manifesto

110 Leonard Peikoff, “Esthetic Value as Objective,” in id., Objectivism: The Philosophy of Ayn Rand, Meridian (1991), pp. 443-444

111 Leonard Peikoff, “Esthetic Value as Objective,” in id., Objectivism: The Philosophy of Ayn Rand, Meridian (1991), pp. 444-445

112 Ayn Rand, “Art and Cognition,” in ead., The Romantic Manifesto

113 Leonard Peikoff, “Esthetic Value as Objective,” in id., Objectivism: The Philosophy of Ayn Rand, p. 446

114 Tolstoy, What is Art?; no pagination exists on the version that I have.

115 The ICE-CAST LIVE, The A.I. Art Debate, https://youtu.be/L5v7LUrZhFg, t. 31:31

116 Erik Hoel, AI-art isn’t art, https://www.theintrinsicperspective.com/p/ai-art-isnt-art

117 Out of a seemingly random selection of five artists, all five agree with Duchamp: https://www.artgallery.nsw.gov.au/artboards/duchamp/five-questions-on-duchamp/ (https://archive.ph/OQiam); “Duchamp said ‘anything is art if an artist says it is’. Do you agree?”---Richard Tipping: “Duchamp’s ‘anything is art if the artist says it is’ is true because art cannot be constrained by existing definitions and constantly reinvents itself.”---Jaime Tsai: “Definitely! His attempt to exhibit a urinal as a sculpture (Fountain, 1917) proved that the original significance of an object can disappear under a new title and point of view if the artist determines it to be art.”---Kevin Platt: “Sure, but though this sentiment revolutionised art, I think it’s still hard for people within and outside of art to accept.”---Tom Picton-Warlow: “Yes. Marcel Duchamp’s readymades are what he is most famous for and as his friend Monique Fong noted: ‘Marcel Duchamp recognised later in life […] that the readymades are what changed art (for the better and for the worse)’. Duchamp saw chess as art and all chess players as artists.”---India Urwin: “I love how empowering this idea is. Duchamp’s philosophy encourages artists to be tastemakers – it gives them licence to challenge preconceived ideas about what art is or should be. Duchamp said anything can be art, and now we have to prove that anyone can be an artist.”

118 Kristian Nee, The existential threat of AI art and why not to quit drawing - Sketchy Van Podcast #60 Steven Zapata, https://youtu.be/OCY7lb_i0yM?t=1623, t. 27:03

119 Kristian Nee, The existential threat of AI art and why not to quit drawing - Sketchy Van Podcast #60 Steven Zapata, https://youtu.be/OCY7lb_i0yM?t=3706, t. 01:01:46

120 LioConvoy, Discussing Shadiversity’s AI Art Video, https://youtu.be/4MEtqEU71Tg?t=2290, t. 38:11

121 Duchess Celestia, NO, AI Does NOT Democratize Art (And That’s a Dumb Defense) || SPEEDPAINT + COMMENTARY, https://youtu.be/lDtAL5IQRMc?t=578, t. 09:38

122 FENNAH, Responding to Shadiversity’s ‘Love Letter to ai Art’, https://youtu.be/D9GpKINVomo?t=1160, t. 19:20

123 FENNAH, Responding to Shadiversity’s ‘Love Letter to ai Art’, https://youtu.be/D9GpKINVomo?t=1459, t. 24:19

124 https://twitter.com/pascalblanche/status/1523780470001266689 (https://archive.ph/CLogG)

125 Draftsmen, Why Be An Artist When There’s AI? - Draftsmen S4E01, https://youtu.be/lc-RX2AGGNA?t=2884, t. 48:04

126 Trent Kaniuga, The END for concept artists? Will AI replace CONCEPT ARTISTS?, https://youtu.be/LPYslj4GzS4?t=319, t. 05:19

127 Will Knight, When AI Makes Art, Humans Supply the Creative Spark, https://www.wired.com/story/when-ai-makes-art/ (archived).

128 The Art of Aaron Blaise, Disney Animator REACTS to AI Animation!, https://youtu.be/xm7BwEsdVbQ?t=361, t. 06:01

129 Corridor Crew, VFX Artists React to Bad & Great CGi 71, https://youtu.be/bUu-44nw3eQ?t=598, t. 09:58

130 See: Ana Isabel, “Goodbye craftsmen, hello directors?,” in ead., The (Concerning) Rise of A.I. Art, https://youtu.be/lyrnWqQBYu0?t=986s; see also: Steven Zapata Art, “Don’t people do the same thing with references as the AIs do?,” in id., The End of Art: An Argument Against Image AIs, https://youtu.be/tjSxFAGP9Ss?t=1925

131 https://archive.ph/kNaGO

132 AustinMcConnell, I used AI in a video. There was backlash., https://youtu.be/iRSg6gjOOWA?t=230, t. 03:50

133 AustinMcConnell, I used AI in a video. There was backlash., https://youtu.be/iRSg6gjOOWA?t=192, t. 03:12

134 AustinMcConnell, I used AI in a video. There was backlash., https://youtu.be/iRSg6gjOOWA?t=434, t. 07:14

135 AustinMcConnell, I used AI in a video. There was backlash., https://youtu.be/iRSg6gjOOWA?t=189, t. 03:09

136 AustinMcConnell, I used AI in a video. There was backlash., https://youtu.be/iRSg6gjOOWA?t=788, t. 13:08

137 AustinMcConnell, I used AI in a video. There was backlash., https://youtu.be/iRSg6gjOOWA?t=728, t. 12:08

138 AustinMcConnell, I used AI in a video. There was backlash., https://youtu.be/iRSg6gjOOWA?t=795, t. 13:15

139 AustinMcConnell, I used AI in a video. There was backlash., https://youtu.be/iRSg6gjOOWA?t=475, t. 07:55

140 KNIGHTS WATCH, Reid Southen lies and gaslights attacking Shadiversity - REPLY, https://youtu.be/7wmtk5ERBMg?t=1805, t. 30:05; and ibid., https://youtu.be/7wmtk5ERBMg?t=2321, t. 38:41

141 Concept Art Association, AI Town hall hosted by Karla Ortiz & Concept Art Association, https://youtu.be/LYO9sii1eKA?t=335, t. 05:35

142 KNIGHTS WATCH, Reid Southen lies and gaslights attacking Shadiversity - REPLY, https://youtu.be/7wmtk5ERBMg?t=4989, t. 01:23:09

143 FENNAH, Responding to Shadiversity’s ‘Love Letter to ai Art’, https://youtu.be/D9GpKINVomo?t=1713, t. 28:33---“Wow, the sun is behind her, and yet it is projecting onto the front of her. Brilliant, yes. […] I’m nitpicking of course, it’s obviously a very impressive result, let’s not fucking kid ourselves.”

144 His video should serve as ample evidence that this is his view.

145 Corridor Crew, Is this AI Art, or is This Something New?, https://youtu.be/mUFlOynaUyk?t=180, t. 03:00

146 Joel Haver, How I Animated This Video, https://www.youtube.com/watch?v=tq_KOmXyVDo

147 The Art of Aaron Blaise, Disney Animator REACTS to AI Animation!, https://youtu.be/xm7BwEsdVbQ?t=82, t. 01:22

148 The Art of Aaron Blaise, Disney Animator REACTS to AI Animation!, https://youtu.be/xm7BwEsdVbQ?t=420, t. 07:00

149 The Art of Aaron Blaise, Disney Animator REACTS to AI Animation!, https://youtu.be/xm7BwEsdVbQ?t=455, t. 07:35

150 The Art of Aaron Blaise, Disney Animator REACTS to AI Animation!, https://youtu.be/xm7BwEsdVbQ?t=869, t. 14:29

151 Mother’s Basement, AI “Anime” - An Insult to Life Itself (re: Corridor & Netflix), https://youtu.be/GOwxXj1EIXM?t=114, t. 01:54

152 Mother’s Basement makes essentially this point, https://youtu.be/GOwxXj1EIXM?t=236, t. 03:56

153 Corridor Crew, People say this tool will replace me, so I made a movie with it., https://www.youtube.com/watch?v=eIJXOU83fqE

154 Art Cafe, Artist Authenticity vs AI with Sam Yang - Art Cafe #137, https://youtu.be/hHIbAK_larY?t=5108, t. 01:25:08

155 KnowsBestNever, The War between Artists and AI, https://youtu.be/S9zhDY6bf_o?t=1701, t. 28:21

156 See: https://twitter.com/pk_kenzie/status/1704289663832494127?s=19; see also: Art Cafe, Prompt: In style of Greg Rutkowski - Art Cafe #139, https://youtu.be/2SO9ZcI1ZDU?t=2638, t. 43:58

157 https://twitter.com/daniel_eckler/status/1705202342159884742

158 https://twitter.com/reddit_lies/status/1610669909842825222 (archived)

159 Westworld S01E02, “Chestnut.”

160 Uniquenameosaurus, How Twitter Betrayed Artists and a Defense of NFTs

161 “They” does not here refer to all anti-AI artists, but rather to that intersection between anti-AI and anti-NFT artists. Maciej Kuciara recognises that NFTs are indeed what is being sought by the anti-AI community here: Art Cafe, Prompt: In style of Greg Rutkowski - Art Cafe #139, https://youtu.be/2SO9ZcI1ZDU?t=5691, t. 01:34:51

162 https://arstechnica.com/information-technology/2023/02/us-copyright-office-withdraws-copyright-for-ai-generated-comic-artwork/ (https://archive.ph/L3UOQ)

163 Stanford HAI, Creativity in the Age of AI: Artist Perspectives on AI, Copyright, & Future of Work, https://youtu.be/y9wOvFihY74?t=1532, t. 25:32

164 Danar Worya, THE CREATIVE’S VIEW on ART and A.I, https://youtu.be/DeeRjUfVvG0?t=3248, t. 54:08

165 LiquidZulu, Why Artists Shouldn’t Own Their Art, https://youtu.be/4xKjHHzLUQQ

166 lol