Two years ago, Gareth Williams was found naked and decomposing, padlocked inside a duffel bag in the bathroom of his London apartment. He had been employed as a codebreaker for the Government Communications Headquarters (Britain’s NSA counterpart), so before the investigation into his death began, the chief of the Secret Intelligence Service met with the commissioner of the Metropolitan Police to ensure that none of the sensitive materials he’d had access to would be compromised. The investigation was instead.
Mysteries emerged and remained mysteries. Who locked him in the duffel bag? Who wiped his phone hours before his death? Whose long orange woman’s wig was that? Why was his apartment unusually clean of fingerprints? Why was his heat on in August? Why did his fellow spooks at MI6 not report his absence for a whole week? A two-year investigation into his death has not answered these and other questions.
The detective assigned to Williams’s case was barred from directly questioning his co-workers as witnesses or suspects. She instead had to rely on security-cleared counterterrorism officers to question them for her. The Guardian reports that “SO15 [counter-terrorism] officers, not those from homicide command, interviewed SIS witnesses, in the presence of their line managers and legal representatives,” and “instead of producing signed, sworn, verbatim statements, SO15 produced ‘anonymized’ notes, drawn up after the interview.”
The first explanation for his death was suggested by a fellow officer, identified in court only as F, who initially notified the police that Williams had been missing for a week. A recording of the call played in court: “after a question about his state of mind, [F] said he had been recalled from a job he had wanted to do, and was uncertain about how he had taken the news,” writes Nigel West in the Telegraph. “The implication was obvious.” Williams was supposed to have killed himself.
The second line of speculation, incorporating the duffel bag, the wig, and some of Williams’s browsing history, was a tabloid’s wet dream: SUB SPY SNUFFED OUT IN SICK SEX GAME GONE WRONG. That Williams was a claustrophiliac was tentatively confirmed by his landlady, who’d once had to untie him from his bedposts. Collateral kinks were brought in to support this one. Who was he going to give the £20,000 of women’s designer clothes in his apartment to, anyway, seeing as he had no friends at work and no known romantic interests, male or female?
Williams’s family, for their part, suspects the “dark arts” of the world of espionage, and various experts have come forth to assure the grieving Williamses that the Russians or the Iranians would have targeted a cypher such as their son for his proclivities and blackmailed, then eliminated him. That, or these proclivities indicated a coat already turned, and the SIS would have had to do it themselves.
Whatever the direct cause of Williams’s death, the circumstances made public share formal properties with the fates of two other queers with high-level security clearances, and rely on recognizable and eagerly consumed tropes about intelligence professionals. There’s something queer in the intelligence project itself, occupied as it is with imitation and the intimate details of other men’s lives. But it has odd echoes in the machinic social world that compels self-knowledge and self-disclosure. Is the entire Internet a honeypot trap? Are the machines themselves spying on us in order to learn our social mores and ultimately pass themselves off as us? The fates of various queer nodes in the surveillance system indicate that artificial intelligence and the modern intelligence services are related in a deep way, beyond that of their shared origin.
Alan Turing was found dead in his home in Wilmslow in 1954. Froth around his mouth, a bitten apple by his bed, and jars of cyanide in the house led to a verdict of suicide, though his mother, who had been in Italy at the time, maintained it must have been an accident: He was always so careless. But all the same, his favorite movie was Snow White and the Seven Dwarfs, with its poisoned apple, and he had lost his post doing research with the Government Communications Headquarters, Gareth Williams’s place of employ 60 years later. Friends bought the story.
Turing had been the lead cryptanalyst at Bletchley Park during World War II, working for GCHQ’s wartime predecessor, the Government Code and Cypher School. There he devised, among other things, the method required to decrypt German military communications, contributing significantly to the development of the digital computer along the way. Emerging from the open, jocular homosexuality of Cambridge University, Turing certainly wasn’t closeted in any strict sense. After the war, however, the discovery of the Red treachery of the Cambridge spy ring, a high-class group of KGB agents within Britain’s elite, and the trial of Klaus Fuchs, a nuclear scientist who passed Manhattan Project secrets to Russia, made securing the nation’s precious intelligence a paramount concern, so the British government adopted for itself the U.S.’s new restrictions on employing homosexuals.
Thus when Turing brought a youth back to his home, and that youth later brought a friend who burgled him, reporting the details of the incident led to a trial for gross indecency that was taken seriously as a national-security threat. Convicted, Turing was sentenced to undergo a year of estrogen injections, which made him buxom and impotent. Most who knew him surmised that this humiliation ultimately caused his suicide. Other friends entertained the speculation that the secret service had conspired to kill him because he knew so much — they were afraid he’d go abroad and fall prey to some comely young spy.
Though most accounts of Turing’s nature indicate a deep guilelessness, he still had to fend off suspicion that he could compromise sensitive intelligence. But Turing was never a spy himself. His approach to self-disclosure — he volunteered to the police investigating his burglary that he’d brought the guy home for sex — would have made him a terrible one. This unforced confession makes a bit more sense if we take a remark he makes in his 1950 paper “Computing Machinery and Intelligence” as indicative of his attitude toward subterfuge in general.
The conceit of this seminal paper is to investigate the upper limits of machine intelligence, which Turing believed could be made equal to that of humans. Even though one of his earliest intellectual triumphs was a proof, built on Gödel’s incompleteness theorems, that no single algorithm can decide every mathematical question, a result that could easily have led him to believe in the “superiority of mind to mechanism,” his biographer Andrew Hodges writes that Turing
had proved that there was no “miraculous machine” that could solve all mathematical problems, but in the process he had discovered something almost equally miraculous, the idea of the universal machine that could take over the work of any machine. And he had argued that anything performed by a human computer could be done by a machine. So there could be a single machine which, by reading the descriptions of other machines placed upon its tape, could perform the equivalent of human mental activity. A single machine, to replace the human computer. An electric brain!
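The universal machine Hodges describes, a single program that reads the description of any other machine as data and then does that machine’s work, can be sketched in miniature. The encoding below (a Python transition table, with names like `run` and `flipper`) is an illustrative assumption, not Turing’s own notation.

```python
# A miniature universal machine: one program that takes the description
# of any other machine as data and carries out its work.

def run(machine, tape, state="start", head=0, limit=10_000):
    """Simulate the machine described by `machine` on `tape`.

    machine: dict mapping (state, symbol) -> (written symbol, move, next state)
    tape:    dict mapping integer positions to symbols ("_" is blank)
    """
    tape = dict(tape)
    for _ in range(limit):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        written, move, state = machine[(state, symbol)]
        tape[head] = written
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# One machine description among the infinitely many this program can run:
# flip every bit, then halt at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flipper, {0: "1", 1: "0", 2: "1"}))  # prints "010"
```

The same `run` function will execute any machine description handed to it; that generality, not the particular machine, is the discovery Hodges is pointing at.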
The ramifications of this discovery were wide-reaching, but they were predicated on the truth or falsity of discrete utterances. Turing’s attitude toward deceit thus played a significant role in his theorizing of what would become the model for the digital computer and artificial intelligence, and it is laid out in the paper’s opening paragraph.
The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs.
Two things are at work here. First, Turing believes that human behavior is ultimately regular enough that a sufficiently powerful computer could conceal its identity as a machine from a human interlocutor. Second, he is unconcerned with his own irregular behavior — thinking that machines can think — and sees no point in concealing it. These two positions are the outer bounds of a thought experiment that has come to guard the border between machine and human intelligence, an increasingly contorted position given the deep integration of social media into how we subjectivate ourselves.
This is the paper in which Turing proposes his famous test to determine whether it would be true to say that machines are capable of intelligence or thought. He does this by turning the question slightly, so that “thinking” can be ascertained by the machine’s playing a version of a game as well as a human could. This game, the “imitation game,” consists of an interrogator trying to tell which of two other players, in a room apart from him or her, is a man and which is a woman, by asking questions through some sort of intermediary. One of the two other players tries to help the interrogator pick correctly; the other tries to frustrate the identification.
In Turing’s proposed version of the game, the male and female players become a single one, assumed to be human but potentially a machine, confronted by the interrogator, who is unsure whether she is addressing an honest human, a deceitful human, or a machine programmed to be deceitful. Turing assumes that the interrogator will ask the possible machine questions only a human could answer (a smart computer would know not to answer a question only a computer could), but a sufficiently well-endowed machine could still plausibly beg incapacity on humanlike grounds. Thus he puts forth an exchange like the following.
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
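Notably, the machine’s arithmetic here is both slow and wrong: 34957 + 70764 is 105721, not the 105621 given after the pause, and the slip is often read as part of the imitation. A minimal sketch of that strategy might look like the following; Turing specifies no mechanism, so the function `humanlike_sum` and its `slip_rate` parameter are hypothetical.

```python
import random
import time

def humanlike_sum(a, b, slip_rate=0.2, rng=None):
    """Add two numbers the way a fallible human might: slowly, and
    occasionally with a single-digit slip, as in Turing's sample answer."""
    rng = rng or random.Random()
    answer = a + b
    if rng.random() < slip_rate:
        digits = list(str(answer))
        i = rng.randrange(len(digits))
        digits[i] = str((int(digits[i]) + 1) % 10)  # perturb one digit
        answer = int("".join(digits))
    time.sleep(0.03)  # a stand-in for the "pause about 30 seconds"
    return answer

# With slips disabled, the answer is honest arithmetic:
print(humanlike_sum(34957, 70764, slip_rate=0.0))  # prints 105721
```

The point of the sketch is that passing requires performing weakness: the delay and the error are features, not bugs, of the imitation.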
The bulk of the rest of the paper is dedicated to responding to critics who offer various reasons why a machine could never pass for a human. Their explanations vary, but the force of their objections comes from how deeply destabilizing it is to find oneself on equal discursive footing with a machine.
Of course, we now do so regularly. Spam email and spambots on Twitter account for a vast share of the messages sent on the Internet. And while most of these machines fail the Turing Test very quickly, accounts like @horse_ebooks inspire widespread fascination for the poetic quality of their attempts.
A problem with Turing’s concept of machine intelligence, though, is the sort of subject it might be trying to emulate. Will a computer programmed to pass as human be more or less effective if it is aware that it is trying to pass? The answer probably depends on whether such reflexive activity is common among humans. Duplicitous or part-hidden subjects, such as spies or queers, have long had to know a thing or two about the intelligence required to make affected naturalness pass for the real thing, though Turing himself was fatally untroubled by the need for such cover.
The tension between his two positions — insouciant self-disclosure, à la Eric Schmidt’s “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place,” and the belief that a kind of transcendence through anonymity is ultimately possible — is unresolved today. Think of the detective investigating Gareth Williams’s death, who had to interrogate her witnesses through the intermediary of counterterrorism officers and received only anonymized summaries of their responses in return. Machines may be assumed to be identifiable as machines, but they can also be assumed to be gathering intelligence that would still require humans to act on it.
A tragic instance of this type of dialogue emerged when Wired magazine published the chat log of the conversations between B. Manning, the Army private accused of passing military and diplomatic secrets to WikiLeaks, and Adrian Lamo, the hacker turned snitch who informed on her. In this exchange, Manning treats Lamo as the interrogator, and she will be the assumed deceiver.
(1:40:51 PM) bradass87 has not been authenticated yet. You should authenticate this buddy.
(1:40:51 PM) Unverified conversation with bradass87 started.
(1:41:12 PM) bradass87: hi
(1:44:04 PM) bradass87: how are you?
(1:47:01 PM) bradass87: im an army intelligence analyst, deployed to eastern baghdad, pending discharge for “adjustment disorder” in lieu of “gender identity disorder”
(1:56:24 PM) bradass87: im sure you’re pretty busy…
(1:58:31 PM) bradass87: if you had unprecedented access to classified networks 14 hours a day 7 days a week for 8+ months, what would you do?
(1:58:31 PM) [email protected] <AUTOREPLY>: Tired of being tired
(2:17:29 PM) bradass87: ?
(6:07:29 PM) [email protected]: What’s your MOS?
(3:16:24 AM) bradass87: re: “What’s your MOS?” — Intelligence Analyst (35F)
(3:16:24 AM) [email protected] <AUTOREPLY>: Tired of being tired
Easy enough to tell the machine lines from the human. But we know Lamo’s conduct is untrustworthy — he’s the FBI informant who turned Manning in — and Manning’s need for a confidante is precisely the reason the intelligence services tried to de-risk themselves by expelling “perverts” in Turing’s time.
A Foucauldian story could be told about the transition from a mode of governance that compelled self-transparent subjects to one that assumed multiply layered ones, with enough confidence in its own longevity to allow for nonsimultaneous disclosure of subjects’ every facet. The long game, in which the modes of capture are laid in place and the fullness of time assures their efficacy, is the same maneuver that Turing deploys to lay the theoretical foundations of the digital computer, which he calls universal for its capacity to perform any possible computational move.
The multiple identity is a keystone of the knowledge economy, which is predicated on the computability and universalizability of workers’ discrete actions — at once profitably constituting the social and underpinned by military ends. Its worldview posits a layered and partially duplicitous identity to be continually exposed and managed.
Multiply layered and part-hidden subjects and an affected naturalness under acute knowledge of surveillance are the traits that must be mastered by spies, queer subjects, and machine intelligence. Is it a surprise that they have become generalized in a world which rests on Turing’s theoretical advances, when his history binds all three together?
Despite working in the seeming objectivity of pure mathematics, Turing was nevertheless evaluating its uses based on what he thought one could or should do with truth or falsity. As his biographer noted, he could have used his early proof, which built on Gödel’s incompleteness theorems, to maintain a view of human activity that exceeded description or capture by algorithm. He didn’t, though, and now human activity has been vastly digitized.
The veil of anonymity a computer affords the identity of its user is now known to be threadbare, given that the ad-based Internet is predicated on turning personal information about consumer preferences into metrics. But early hackers took the Internet seriously as a sort of post-racial, post-gender, even post-state utopia where the right sort of speech acts did real work. Though those dreams of transcending identity digitally have since faded, the world of work is increasingly structured by the parameters of machine intelligence doing the tasks a human computer would do. It’s important to remember that Turing was specifically proposing to replace human computers with digital computers, and female human computers at that: “computer” being at the time a gendered occupation, like “typist.”
It is Alan Turing’s centenary this month, and petitions are circulating to have him replace Charles Darwin on the £10 note. His position in the histories of computing and artificial intelligence is secure. In 2009, Gordon Brown issued an official apology, proclaiming himself both proud to say sorry to a war hero such as Turing and proud that the homophobic laws under which Turing was convicted have since been overturned. But the strange balance of artificial and military intelligence to which Turing’s sexuality played spoiler still functions the same way, and it still catches others like him in its net despite the repeal of those laws. Turing’s contributions to the two fields illuminate the laws of each, as well as their strange nexus.