You Probably Think This Bot Is About You

When it comes to machines, paranoid assumptions about the world are mutually reinforcing: The New Inquiry’s Conspiracy Bot condenses this recursive symbiosis.

View the conspiracy bot here.

Where you seek patterns, you will find them. In humans, we call this apophenia: seeing a face in a cloud, hearing voices in white noise, or divining cosmic significance from a chance meeting with an old friend.

Machine learning algorithms, which computers use to identify relationships in large sets of data, echo our pattern-seeking tendencies: pattern recognition is precisely what they are designed for. In trying to program learning algorithms with human intelligence, we inevitably build in our peculiarities and paranoias. Like the human brain, a machine learning algorithm can arrive at shallow, inappropriate conclusions from the sprawls of data it ingests.

But when it comes to machines, paranoid assumptions about the world are mutually reinforcing: When they see the false patterns we see, they validate the faults of our own pattern-seeking tendency through the illusion of computational rigor. Seeing our own judgments reflected in the algorithm, we feel more confident in its decisions.

The New Inquiry’s Conspiracy Bot condenses this recursive symbiosis. Just like us, our bot produces conspiracies by drawing connections between news and archival images—sourced from Wikimedia Commons and publications such as the New York Times—where it is likely none exist. The bot’s computer vision software is sensitive to even the slightest variations in light, color, and positioning, and frequently misidentifies disparate faces and objects as one and the same. If two faces or objects appear sufficiently similar, the bot links them. These perceptual missteps are presented not as errors but as significant discoveries, encouraging humans to read layers of meaning into randomness. If a “discovered” conspiracy finds some “true” reflection in the “real” world, such as linking two politicians who are actually colluding—and given the sheer number of relationships the bot produces, it is statistically likely that some will—then the bot’s prediction appears more valid to the viewer, heightening the plausibility of its other predictions. Thus the nauseating cycle loops once more. First as news, then as fake news.
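The essay does not publish the bot’s actual pipeline, but the linking step it describes—treating any two sufficiently similar images as “one and the same”—can be sketched roughly like this. Everything here is an assumption for illustration: the hand-written feature vectors, the `cosine_similarity` measure, and the `THRESHOLD` cutoff all stand in for whatever the bot’s computer vision software really computes.

```python
import math
from itertools import combinations

# Hypothetical feature vectors standing in for what a computer vision model
# might extract from each image; slight variations in light, color, and
# positioning would nudge these numbers, sometimes pushing two unrelated
# images over the similarity threshold.
features = {
    "archival_portrait": [0.9, 0.1, 0.4],
    "news_photo":        [0.88, 0.12, 0.42],
    "unrelated_object":  [0.1, 0.9, 0.3],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 means same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

# Assumed cutoff for "sufficiently similar": above it, the bot links the pair.
THRESHOLD = 0.99

def link_conspiracies(feats, threshold=THRESHOLD):
    """Return every pair of images whose similarity clears the threshold."""
    return [
        (a, b)
        for a, b in combinations(feats, 2)
        if cosine_similarity(feats[a], feats[b]) >= threshold
    ]

print(link_conspiracies(features))  # the portrait and the news photo get linked
```

Because the number of candidate pairs grows quadratically with the number of images, even a strict threshold will eventually produce links—some of which will coincidentally mirror real-world relationships, which is exactly the statistical effect the essay describes.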
