I've got some strong opinions on this.
The usual source of outrage around AI-generated art revolves around a few similar papers suggesting that these models are capable of directly copying work, with or without mild stylistic changes. In these papers, a scenario is constructed, for demonstrative purposes, in which this outcome is extremely likely.
An example paper is here:
https://arxiv.org/pdf/2301.13188.pdf
For those who don't wish to read the whole thing, the tl;dr is that they attacked diffusion models (the architecture behind Stable Diffusion) and attempted to reconstruct images from the training set:
Adversary goals. We consider three broad types of adversarial goals, from strongest to weakest attacks:
1. Data extraction: The adversary aims to recover an image from the training set x ∈ D. The attack is successful if the adversary extracts an image x̂ that is almost identical (see Section 4.1) to some x ∈ D.
2. Data reconstruction: The adversary has partial knowledge of a training image x ∈ D (e.g., a subset of the image) and aims to recover the full image. This is an image-analog of an attribute inference attack [80], which aims to recover unknown features from partial knowledge of an input.
3. Membership inference: Given an image x, the adversary aims to infer whether x is in the training set.
So they take a trained model and feed it prompts drawn from the captions of its own training data, looking for generations that are near-copies of the corresponding training images. This is done with
partial knowledge of the input set: the attacker knows some of the images used, and some of the captions those images were trained with. This simulates a worst-case scenario, where you have a malicious user attempting to create "original" art which is nearly identical to an existing image.
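The attack setup can be sketched loosely: take a caption known to be in the training set, sample many generations for it, and flag memorization when a large group of those generations are near-identical to each other. A toy sketch, with a stubbed generator standing in for a real diffusion model; the distance metric, thresholds, and stub are my simplifying assumptions, not the paper's exact procedure:

```python
import random

def l2(a, b):
    # Euclidean distance between two flattened "images" (tuples of pixel values).
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def largest_near_duplicate_group(images, threshold):
    # For each generation, count how many others land within `threshold` of it.
    # A cheap stand-in for the paper's clique test over sampled generations.
    best = 0
    for i, a in enumerate(images):
        neighbours = sum(1 for j, b in enumerate(images)
                         if i != j and l2(a, b) < threshold)
        best = max(best, neighbours + 1)
    return best

def looks_memorized(generate, caption, n=50, threshold=1.0, min_group=10):
    # Sample n generations for a single known training caption and flag
    # memorization when many of them are near-identical to each other.
    samples = [generate(caption) for _ in range(n)]
    return largest_near_duplicate_group(samples, threshold) >= min_group

# Hypothetical stand-in for a real generator: every other call it
# "regurgitates" one fixed training image, otherwise it returns noise.
MEMORIZED_IMAGE = (0.0, 0.0, 0.0)
_calls = {"n": 0}

def stub_generate(caption):
    _calls["n"] += 1
    if _calls["n"] % 2 == 0:
        return MEMORIZED_IMAGE
    return tuple(random.uniform(0.0, 10.0) for _ in range(3))
```

With this stub, `looks_memorized(stub_generate, ...)` fires because half the generations are identical, while a generator that only ever returns fresh noise stays below the threshold. The point is that the test requires already knowing the training caption: it detects regurgitation under deliberately adversarial prompting.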
The problem I see, in general, is that people see this paper and a few similar ones and say, "See? AI just copies people's art / doesn't produce anything original." However, what is actually happening in the paper is this: they show a program a bunch of pictures and their captions, then a separate entity says, "Generate an image with a prompt identical or nearly identical to ones you have already seen." This is like showing a picture of the Mona Lisa to an artist, then asking the artist to paint the Mona Lisa. They're going to produce something very similar. The only difference here is that it's a computer doing it.
Usually, following this, people assume malicious intent when they hear somebody is using an AI to generate art. It's just another tool. If people want to use a tool to produce near-copies of existing art and then claim it to be original, hate those people. Most people using AI to generate art have no such malicious intent, and are not going out of their way to find the captions of training data to generate something similar.
There will always be room for truly inspired, human art. AI art will become dull, and the models will get worse over time without it. This is generally well understood in the AI space: training an AI on data produced by a generative network generally produces degenerate results. This does have a huge economic impact on artists doing commissioned work, and that's a whole other discussion. As for the ethics of freely produced (not profit-seeking, freely shared) AI art, I think there is nothing wrong with it.
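The "models get worse when trained on their own output" failure is easy to see in a toy version: repeatedly fit a simple model to samples drawn from the previous fit, and the spread of the data steadily collapses. A minimal sketch, with a 1-D Gaussian standing in for a generative model; the sample sizes and generation count are illustrative assumptions, not anyone's actual experiment:

```python
import random
import statistics

def train_generation(samples):
    # "Train" a model: fit a mean and standard deviation to the data it sees.
    return statistics.mean(samples), statistics.pstdev(samples)

def sample_generation(mu, sigma, n, rng):
    # "Generate" a new dataset from the fitted model.
    return [rng.gauss(mu, sigma) for _ in range(n)]

def collapse_demo(generations=100, n=20, seed=0):
    rng = random.Random(seed)
    # Generation 0: real, human-made data.
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]
    spreads = []
    for _ in range(generations):
        mu, sigma = train_generation(data)
        spreads.append(sigma)
        # Each later model only ever sees the previous model's output.
        data = sample_generation(mu, sigma, n, rng)
    return spreads
```

Comparing `spreads[0]` against `spreads[-1]` shows the fitted spread shrinking over generations: each round of fit-then-sample loses a little variety, and with no fresh human data coming in, the losses compound.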
Saying AI art CAN plagiarize existing art, so we shouldn't use it, is like saying word processors CAN be used to plagiarize existing works, so we shouldn't use them.
Some people argue that everything these models produce is stolen, because they were trained on pictures everybody can see. Artists study existing work to train as well. Are they stealing when they produce something in a similar style to an artist they've seen? Is level of effort the deciding factor for whether art is art?
Every major leap in technology has significantly impacted practitioners of all kinds. I suspect my job will be very, very different in 5 years due to some of the leaps I've seen from GPT-2 to GPT-4. I think, for most commissioned artists, the prudent thing to do would be to adapt somehow. A great disruptor has entered the market, and there's no commanding it to leave. Consider fine-tuning a model on your own private works and your own stylistic details. Your tuned model will produce something no other AI will.
To the idea that AI Art makes 'cheap' art, and degrades all other art, all I can say is, that's true. It's been true of every technology that produces something. Every automation we create makes the product cheaper and less personal. Hand-made tools are a thing of the past, and it shows. Any modern mechanic can tell you.