
proposal: mark/punish ai imagery

I know this is radical, but hear me out.

First off, this reflects existing practice. I know multiple users of this site
who immediately skip[1] articles once they notice obviously AI-generated imagery.
A way to tell upfront whether an article is likely to contain AI art would save
them some time.

Second off… a bit of context. While many people in tech don’t really
appreciate this, generative AI is a deeply exploitative technology. AI companies have
taken the hard work of artists without their consent and used it to put them
out of their jobs (the starving artist having been a stereotype since
the dawn of time) and to enable fascist propaganda in the style of Studio Ghibli. It’s also known to directly plagiarize images it was trained on – obviously, without attribution.

Think about how it must feel to be one of these artists right now.

Actually, Cat and Girl made an amazing short comic about how it feels to
be on the receiving end of this. I really recommend reading it; it’s a
perspective that’s a bit alien to those of us in tech. Do it now. I
promise it’s worth your time.

Almost all uses of AI art add nothing to the article[2], but they do
show implicit support for the Torment Nexus and a casual disregard for the rights and well-being of artists. I presume goodwill on the part of
the bloggers, so I take it they’re doing this because they’re unaware of the
issues.

Thus, there’s an issue. Comments about a site’s use of AI imagery are (rightfully)
off-topic here. The best we can do is ignore the article. That solves point
one, but we’re still quietly accepting the ever-growing support for this
evil tech, and bloggers stay unaware. What if we had a way to complain
about anti-social sites, without derailing the discussion, and in a way that
might actually make authors care?

Objectively deciding whether something is AI art is hard, so a tag (think
/t/rant) wouldn’t really work here; that’s not
something mods should have to adjudicate. For the most part, though, “we can tell”. I
thus propose adding an “ai imagery” flag for stories, and making it work as a
downvote.

Lobste.rs is popular, so I’m hoping this could actually make a difference in
awareness of the ethical issues. It’s really fucking depressing to keep seeing yet another blogger I respect using cutesy AI imagery.


[1] At one point, this felt like it expressed a lack of care for the
quality of the content, so reading the rest was a waste of time. With more and
more bloggers using AI art, this has shifted to ethical reasons. Why engage
with someone who supports destructive, exploitative tech?

[2] For example, people were complaining about hero images being useless (Medium
being the worst offender) long before generative AI became a thing.
Also see: the website obesity crisis.
Also see: there is no website obesity crisis.


Honorable mentions:

I’m intentionally not tagging this with vibecoding, because this is especially relevant to people who have filtered it out.
