For a long time, X’s algorithms didn’t seem to surface my content for many users. I’m guessing it’s because after Twitter became “X” I basically left the platform and rarely posted, perhaps also because I had my Bluesky handle in my bio and kept doing stuff like advertising this Substack. Whatever the reason, even though I had 9,000 followers, it was pretty evident that no one saw my tweets. Over the last few weeks, though, the algorithm has clearly reconsidered its position, because my little snarky missives suddenly get a huge amount of engagement. Most of it — as tends to be the case on “X” — is negative.
So this essay is an attempt to think through one of my threads — and to make sure I didn’t leave it at the snark inherent in a three-tweet mini-storm, but actually worked through what I believe. It’s also a bit of a follow-up to an earlier post about what it means to say that people “on Twitter” / “on X” are saying specific things. This one is about AI, Instagram, virality and the things we can miss by fixating on all of these. Like my post about Twitter/X, it’s not intended to be about the fact that we supposedly can’t say certain things about what people online are doing and saying. It’s about the limitations and caveats inherent in traditional gatekeepers’ view of social media — whether those gatekeepers are journalists, experts or university researchers like myself. I come at this in two roles: in my job at the Clayman Institute for Gender Research, I analyze how people online make sense of questions around gender and sexuality — whether that’s in YouTube comments, Wikipedia edits, or Twitter DMs; and in my job as a media critic, I think about how we as a society talk about social media, and how the way we talk about social media has become, in a certain sense, an intrinsic aspect of our politics.
This installment is about the debate over a picture with the slogan “All Eyes on Rafah” that went viral last week. It showed up a lot in my social media feeds, above all on Instagram.
But what really began to interest me was the backlash to the image. And it was a real backlash, one that — in a way that has frankly come to seem typical — bypassed what the tile was saying in favor of the image itself. Today I want to analyze how the fakeness of the AI image was made to stand in for a supposed fakeness of the message. It’s a process I think is quite instructive, because it shows an underappreciated risk in an environment in which media literacy itself becomes a political tool (not that I am entirely critical of that; I’m just saying this is an attendant risk). The risk is simply that we dissolve fellow human beings’ earnest expressions of their political opinions into the constituent media by which we take them to have been created — their media diet, the visual rhetoric of their presentation, their affect in creating, sharing or seeing a particular piece of media, etc. I am keenly aware that for someone who has been lecturing people for four years about their lack of media literacy when it comes to “cancel culture” stories, gender studies, critical race theory, etc., this might seem … a bit rich. But I actually like to think through these limit cases to test my own practice, to make sure that I’m not falling into a kind of debunking mania. So think of this post as part of that.
Significantly, I almost exclusively saw this tile in people’s “Story” feed, meaning at least the folks I follow didn’t post it as a separate image, but usually shared it (temporarily) for their followers to see. Which also meant it was an image mostly to be interacted with, and interacted with quickly — not, for instance, an image to contemplate. People could comment on it or share it to their own story. Its spread was indeed spectacular — not since the black squares of 2020, to my recollection, has there been an image as omnipresent in my feed as this one. According to the BBC it was shared over 47 million times.
I didn’t think much of it at the time, but then noticed a second trend — mostly not on IG, but rather in legacy media, in particular in Germany. A tendency to … well, not quite debunk, but sort of debunk around the image? Der Tagesspiegel noted that the person who had first posted the image seemed to be an Islamist and warned that there was “the danger that the millions of demonstrations of sympathy for the suffering of war victims will ultimately have a subtle anti-Israel effect clearly intended on the part of the creator.” And that sort of stopped me in my tracks: it’s a kind of remarkable description of how images work! Something about the image — which is, after all, really quite simple (hence its virality) — might carry the “subtle anti-Israel effect” “intended” by its creator. It’s a conception of “virality” that seems to owe a lot to the term’s semantic source, the virus: you get infected by the intent carefully camouflaged in the image you innocently consume. The text ends by worrying that the image might become something like a gateway drug to “the BDS movement”.
In a video on the website of Der Stern, we learn that “the image may be shared by people who mean more by it” than just what the text says, and that “most Instagram users of course don’t have the time, and don’t have the competence, to start figuring out: who else is sharing this? Where does it come from?” The fear being (I think?) that one might accidentally “really” be doing something else when one is just trying to say: “hey, let’s pay attention to what’s happening in Rafah.” The analysis in the video is overall very good — the expert (a specialist in digital media from Leipzig) has very smart things to say about the way we interact with images on Instagram and how that can be abused. But the overall tone of the video (which seems edited down from a longer conversation) suggests that the people who shared this image had been in some way incompetent, had been dupes of the platform and of clever manipulations embedded in the content. The questions, and the editing of the answers, felt like a discussion of people who had fallen for some sort of scam, or been victims of some sort of fraud. What the video didn’t seem to admit as a possibility was that people had used the image to express their opinion and had successfully done so, and that one now needed to engage with what they had expressed. The overall framing seemed to be that people should not have shared the picture — though neither the interviewers nor the interviewee ever came out and said why.
As Jakob von Lindern at Die Zeit put it: the German-language reception of this image felt like “debate cosplay”. And the debate itself seems to be mostly a German one. There are English-language reports on the picture as well, but they are all quite different. CNN reported on how “a likely AI-generated image of Gaza took over the internet” (“likely”??) — CNN mostly quotes activists questioning the political value of sharing pictures on Instagram, questioning the use of an obviously fake picture to illustrate a conflict that is generating no shortage of images. The Washington Post likewise led its explainer with this criticism: “Some on social media have criticized the image as replacing distressing footage of what’s actually happening in Gaza — from photographers and people on the ground — with a fake image generated by technology.” The ethical implications here are entirely different from those in the German articles. And none of them debunk the picture — they wonder what using AI images does to the actual photographers working (and dying) in Gaza. Not whether Bella Hadid got infected with anti-Israel bias by sharing an inauthentic pic on her Insta-story.
Eventually, the debunking work made it back onto IG, for instance in this set of tiles by Der Spiegel.
I’ve reproduced three of the six tiles here. The title asks “Who is behind this AI-image?”, and the second explains:
“It becomes obvious that it’s an AI image when you look at the snow-covered mountains in the background. Rafah is in the south of Gaza and abuts the desert. The city also doesn’t consist entirely of tents, nor are these arranged in as orderly and geometric a fashion as the image suggests.”
Good to know, I guess.
The third tile reads:
“Who started the trend? Behind the [image] is a user who calls himself Shahv4012 on Insta, and who has the flags of Malaysia and Singapore in his profile. His pictures show a young man who is interested in cars, professional photography and the natural beauty of South East Asia. He does not appear to be local to Israel or Gaza, and doesn’t appear to have ever been there according to his profile.”
I found these tiles very interesting because they pretend to perform media literacy for the internet age, while not-so-covertly assuming everyone on the internet is an absolute idiot. As an act of visual analysis, the second slide is kind of telling in its slippages: really, is it the snow-capped peaks in the background, the suspiciously orderly tents, that should tip us off that this image is not a news photograph? What about the fact that the tents spell out “All Eyes on Rafah”? The idea that someone might need to be told that an image is an artistic representation, and that the residents of Rafah didn’t spend time in an active war zone arranging their (“disorderly”) tents in the shape of a slogan, seems itself kind of baffling. The critique of this as “fake” is itself kind of incoherent, insofar as it imagines a viewer who takes as real something that quite calmly says “I am not real”.
All of this having been said: it IS a weird image, isn’t it? If we don’t try to convince ourselves that people sharing the image are signing up to the agenda of a nature-loving, Islamist Malaysian guy, that doesn’t mean we can’t critically reflect on the image or on the act of sharing it. Let’s stipulate that most people sharing the image knew it was a symbolic image. But why a symbolic image at all? Why not a real picture from Rafah, or — if that was too upsetting — a black background? It is indeed strange to combine a slogan emphasizing the importance of witnessing — “All Eyes on Rafah” — with a picture that so flagrantly declines to be an act of witnessing.
We might also think about the particular mode of fakeness this AI image exemplifies. Once we get past the dumb fact-check syntax (along the lines of: “Oh, you rubes thought there were Himalayan-style mountains in Gaza?”), we can actually interrogate what this AI image draws on. Some experts might even be able to speculate about what Shahv4012’s prompt to DALL-E or whatever might have been. I can’t do that. But I want to point to a few features that seem to me interesting and troubling. And that’s fair even with a generated picture. After all, at some point Shahv4012 — whoever he is — looked at what the AI had come up with and said: “yeah, that’s it — let’s put that on my Insta-story”.
The first thing to note is that the image is kitsch. Most AI art is. But more than that: the rugged mountains in the background seem to reach for a kind of sublimity, almost unreality — they’re frankly giving Middle Earth. They feel spectacular, heroic and uncommon — in a way that so many of the actual images coming out of Gaza since November are frankly not. In that respect the mountains function much the same way as the tents. As Der Spiegel noted, the tents seem very regular. What if we don’t use that to dunk on people sharing this image, but to reflect on the image itself? Shahv4012’s image is all about neatness and order. The juxtaposition of the refugee camp, the valley floor and the towering peaks, the tents in orderly rows, and the depopulated visual field common in AI images: all of these conspire to create an arresting, but overly pat, composition.
So it’s not that there’s nothing to say or criticize here. In a society as deeply moved by images as our own, critical attention to how exactly images do move us seems absolutely essential. But why not examine the mode of unreality as political rhetoric? Why fixate on the unreality and imply that it delegitimates the message? Yes, this image is kitsch. Another article in Der Spiegel asked: “Where does the urge to turn even war and massacres into kitsch come from?” I liked the article overall — the question of why this image outdid the real ones is interesting, although the answer is probably somewhat prosaic: the algorithm, which is supposed to throttle political and upsetting content, both of which almost anything real coming out of Rafah right now is bound to be. But it’s a good article. I just want to pause briefly on that final question, which ends up with our “urge”, our desire for kitsch, and finds in this yet another way not to engage with what the people sharing the image most fervently wanted others to engage with. The AI-image may be about our inability to look at what is happening; but the discourse about the AI-image is about our inability to look others in the eye who are horrified by what is happening.
Speaking of the AI part of the AI-image: it’s not quite clear what AI has to do with it. If this were photoshopped, would it be less fake, or fake in a different way? If anything, the weird shimmer and blur that seems to come with (most?) AI images might be more telling than a really impressive photoshop job. As many of the reports note, the reason AI can go viral where real images can’t is that it’s less likely to get tripped up in content moderation — but something similar would likely be true for a fake-looking photoshop as well. (I asked two friends who used to work in content moderation about this, but they couldn’t tell me for sure, and neither of them has ever worked at IG.)
In several ways this image says “I am symbolic”. The tiles published by Der Spiegel, however, seem to want to read it as something else: as an image that fails to represent reality not because it’s symbolic, but because it’s fake. This of course involves a misunderstanding: it treats the picture as a document, not as a piece of political rhetoric. And it deliberately miscasts those who share it as underinformed dupes. Rather than casting them as political actors you may disagree with, it casts them as no political actors at all.
Now, I am sure that among the millions of users who shared this image, there are almost inevitably some who didn’t reflect on the fact that this is not a real image. What I think is interesting is imagining a world in which that matters — or matters more than what 99.9999% of those millions wanted to express by sharing this image: that they wanted “all eyes on Rafah”. Because seeing the AI trees means you don’t have to see the forest: that this is a piece of political rhetoric asking you to do something, and you can either do that or not do it. What you can’t do is pretend that the people who shared it weren’t trying to say something fairly clear and obvious. I’ve been getting very interested in the ways in which the era of the “fact check”, “community notes”, etc. can create a simulacrum of accuracy and informedness, one that eventually simply dissolves everything into an all-encompassing puddle of skepticism. These fact checks pick out select details with a view to disproving a whole, without asking — in fact, without being able to ask — whether the argument made depends on the specific relationship between part and whole. After all, that relationship is usually not explicitly named, and therefore cannot be fact-checked.
The tiles put together by Der Spiegel seem like a perfect encapsulation of this kind of positivistic fact check, one that pretends to dissolve political speech into a set of factual claims to either aver or negate. Several people I follow on IG shared Der Spiegel’s tiles with the caption “do your homework”. That’s what these kinds of fact checks are: information hygiene, a pragmatic way of dealing with the torrent of information that arrives in our feeds — homework. But given that they are part of a praxis, it’s worth asking: what sort of homework is one really doing in swiping through these tiles? What sort of important analytic work is Der Spiegel doing by going through the original creator’s posts celebrating the “natural beauty of South East Asia”? Why do the German-language articles note the origin of the image, but ignore the origin of the phrase “All Eyes [are] on Rafah” — WHO functionary Richard Peeperkorn — which the BBC, for instance, readily supplies? What Der Spiegel is doing is not disinformation by any stretch. But it’s information seemingly cobbled together to avoid encountering directly the question raised by the image: should all eyes be on Rafah? And if not, why not?
I should say that I do take the dangers posed by internet disinformation quite seriously. Having done quite a bit of research on Americans who lived in their own reality, with their own facts, in the 1950s, ’60s, ’70s and ’80s, I do wonder how new alternative facts really are. And as someone who has been watching (and, during especially cursed times, writing about) Fox News for going on twenty years, I understand you don’t need internet disinfo to goad Americans into supporting a second invasion of Iraq. But I do think the pandemic brought home to all of us the dangers of people disappearing down their online rabbit holes. Speaking personally, I was surprised by what the pandemic revealed about media literacy among family members, students, even colleagues.
On the other hand, as someone who has spent the last three years studying the implicit demonization of online discourses during the “cancel culture” panic, I am quite aware that beating up on the form of internet discourse (a pithy tweet, a sanctimonious Insta tile, a cantankerous YouTube video) can be a way to demonize an entire generation’s way of arriving at and expressing political understandings. To put it quite plainly: yes, a bunch of stuff on the internet is bullshit, and yes, a bunch of folks probably need to learn how to verify facts before sharing some online infographic. But something else is just as true: people are also often quite bad at parsing what’s in their daily newspaper or on their evening news, at assessing its veracity, its value as news, its implicit biases. Studying #MeToo and “Cancel Culture” will do that to a motherfucker: sure, people pick up bullshit online; but they are in some ways far less inoculated against bullshit proffered by serious-looking white dudes in suits on the evening news, or by a pundit at a national newspaper.
More to the point, there’s an implicit dividing line being drawn here — especially when legacy media report on internet phenomena — which doesn’t take into account how terminally online our “offline” media have become. As non-internet-native journalism shrinks, as it gets casualized and outsourced, as fact checking gives way to clickbait, and as it comes to rely mostly on the internet for its sourcing and information, there is a problem both in implicitly denigrating the way most people gather their information and form their opinions today, and in implicitly elevating journalism that easily falls prey to many (if not all) of the same pitfalls as everyday users, and that doesn’t have the advantage of some snarky asshole putting a response tweet right under it to set it straight. If your buddy shares a dumb, obviously fake statistic online, you can clap back. If a Twitter account shares this hilarious map of the “historical borders of India”, there’s immediate comeuppance. Once false information is in the newspaper, on the telly, or in the Congressional Record — well, the retraction will be consumed by a fraction of the people who were fed that information.
Most observers correctly note that the original offending tile doesn’t have any information on it; it is a piece of political rhetoric. It makes no factual claims about Rafah or about the war. It is exhorting you to pay attention. But somehow the tiles from Der Spiegel seem to want us to assume that what you’re “really” doing in sharing this tile is x, y, and z. But it’s worth asking: is that “really” what you’re doing? In what sense of “really”? I don’t know anything about the creative process behind the slogans about nuclear power that I put on placards throughout my youth; I know who made the rainbow flag, but I don’t know whether he was, I don’t know, hugely anti-vaxx. And, more to the point, I don’t see how, if he should turn out to have been that, it would have any practical bearing on my relationship to the cause embodied by the symbol he created. To transfer this question to social media: just imagine if the first person to make an Instagram tile with the hashtag #FridaysForFuture or #MeToo also happened to be a 9/11 truther. What the heck would that mean? If retweets are barely endorsements, Instagram tiles are icons: they just kind of say what they mean. That can be somewhere between not very much and nothing at all, but it’s odd to claim that they are imbued by their creators with some secret charge that can slumber in the depths until it is set off in your timeline.
Why would someone think that they do? This leads to two mainstays of “disinformation” discourse that have sort of become untethered from their analytic origins and now float freely, ready to discredit various bits of online discourse. One concerns Instagram and worries about virality. There is indeed an ease with which content goes viral on Instagram that can feel a little scary. The radicalization of various Instagram influencers is indeed scary — think of the #SaveTheChildren hashtag, which washed QAnon content into the feeds of millions of average users who had never before come into contact with it. That is a real risk. But in some ways “All Eyes on Rafah” is the opposite of #SaveTheChildren — the latter took a thing people can’t really disagree with (children! they are nice!) and gave it a very particular spin, while the picture created by Shahv4012 works exactly the opposite way. It is one of many possible images people could have picked to express a feeling — it’s just the one that happened to go viral. The question is basically: what came first, a widespread concern or the virality? Did one express the other? And the framing around social media debates (especially discourses about “virtue signaling”) can make it seem as though the virality basically invalidates the genuineness of the concern. Which it can, of course — but as a basic assumption that’s both wrong and deeply problematic.
The other concerns memes. The provenance of memes is indeed sometimes used to discredit them, for instance when Elon Musk burps up yet another skull-measuring “race science” monstrosity onto his Twitter/X feed. But there are three important things to notice. First: those memes usually not-so-subtly invoke a far-right worldview. The things they teach us to pay attention to, to care about, are things that stand orthogonal to the attention of “normies”. They are almost impossible to disseminate without reinforcing the values and worldview of their creators. If you come across a tile with a hashtag like #SaveTheChildren and you search for that hashtag, you will be inundated (or were once inundated) with QAnon content. If you google “Rafah” right now, you get the BBC, the New York Times, or Der Spiegel. That would seem to be … either absolutely okay, or even a good thing? Surely even people who think that looking at the suffering in Rafah is somehow immoral would not begrudge others consuming news about what is happening there.
Second: pointing to the origin of these memes on far-right message boards is not, or not primarily, provenance research — it doesn’t matter that the person who created them is also active on 4chan. It matters that these kinds of memes are the communal currency of 4chan. They import entire debates, affective styles and framings. That’s a spillover researchers are interested in, and not infrequently worried about — my friend and colleague Simon Strick works on the way far-right memes travel, and especially on how they don’t travel alone. In the case of “All Eyes on Rafah”, all the articles about the tile note just how far it has traveled — it started with an Instagram user and was shared by many other Instagram users, none of whom seem to be part of some inauthentic effort (“astroturfing”). The people whom I saw sharing the image were models, influencers, journalists, etc. In some ways that suggests that whatever the original intent behind the image’s creation was, it matters less. This isn’t a group of trolls trying to get a bigger group of people to care about their weird cherry-picked data points, or take on their particular preoccupations. This is a massive number of users latching on to an idea and probably attaching a fairly varied set of positions and preoccupations to it. What users took themselves to be expressing by sharing the image was probably extremely varied — why wouldn’t that be the reality of the image, rather than the intention of its nebulous creator?
Third, and most important: the reason it’s creepy when, say, a politician posts a far-right meme is not that they believe in far-right stuff (in most cases this will not exactly come as a surprise). It’s that the meme is evidence of their information diet; it shows where on the internet they hang out. Think of the current hubbub about SCOTUS Justice Samuel Alito’s various flags. The problem is not that Alito loves Trump; that’s been in evidence for a long time. The problem is that Alito is clearly getting much of his information and worldview from Trumpist fever swamps, which isn’t ideal for one of nine justices who might, for instance, get to decide the next election. As important as images are, we fixate on them at our peril if we ignore how people use them and what for.
If you ask me, “well, where is the line between that kind of analysis and what Der Spiegel was doing with the Rafah tile?”, I’d have to truthfully answer that I don’t know. But it is important that there is a line to be drawn. And it is important to admit that at times it’s okay to share an image simply because you feel a certain way. And that we have to engage with that as political speech, not as some social media artifact of algorithms sliding past each other. Because otherwise our image of social media “competence” amounts to exclusion: it imposes an unreasonable amount of due diligence on ordinary social media users who see something and want to say something.
There’s already a kind of dark side to the “fact-check industrial complex”. Joe Bernstein, in a long essay in Harper’s, once pointed out that the idea of human manipulability that seems to subtend a lot of disinformation discourse is basically indistinguishable from the advertising industry’s sales pitch about itself. The picture of human beings in either case is that we have very little autonomy and are essentially sheep. Most importantly — or perhaps most offensively — the disinformation framing allows us to understand the “output” (the opinion) purely in terms of the “input” (misinformation). It forecloses the possibility, in other words, that the people who hold opinions we don’t like hold those opinions genuinely and authentically, having gotten there without needing to be manipulated. This is doubly true for something like Rafah. I would assume that even the people going after folks for sharing the Accursed Tile would stipulate that what’s happening in Rafah is not great, and that it might help to have — you know — eyes on Rafah. But they don’t have to, since they can frame any expression of that opinion as a sort of social media boo-boo. You only said it because you were digitally illiterate. Because you can’t tell AI-generated images from the real thing. Because you got caught up in the virtue signaling, etc. etc.
Because in the end the handwringing about the particulars around Shahv4012, and around the image he created, amounts to this: immense efforts of fact checking and analysis marshaled to drive home one meta-point: you should not have shared this message. This worry about virality, about AI, about digital literacy is not only not independent of the content of the message; it is likely in large part about the content of the message. During the 2020 George Floyd protests there was a meme that went around:
“Don’t sit.
Don’t kneel.
Don’t stand.
Don’t speak.
Don’t lock arms.
Don’t write about it.
Don’t protest.”
For 2024 we could rewrite this as:
“Don’t hit like.”
“Don’t post political slogans unless those political slogans are also dissertations.”
“Don’t speak out if you haven’t spoken out an equivalent amount about other things.”
“Don’t post a tile until you have clearly established who came up with its design.”
“Don’t protest.”