There is good reason to be concerned about the various pools of personal data being gathered by communications and social-media companies. That data is used to shape the material conditions of our lives -- what we see, what we're permitted to do, who will talk to us, what sort of service we'll receive. It functions as our proxy, so it makes sense to at least know how extensive it is and whether we need to try to rein in this shadow self.
As the Economist reported recently, reputation-management companies like Reputation.com are eager to help with this, offering to monitor mentions of you and develop mop-up strategies for problems. They also hope in the future to somehow sequester your personal data in a "data vault" and let you charge outside parties to access it. I don't know what would prevent the data from leaking out of this vault, or how it could ever all be put in the vault in the first place. Much of the data captured about us is collected through surveillance, making it extremely difficult to know what people have on you, let alone centralize it all.
This sort of service, at this juncture, offers merely a fantasy of control over one's reputation, something no one ever has. It treats reputation as something one just posits, however one sees fit -- as though people could never be derogated or discredited against their will. But you can't simply declare your reputation with data exhaust.
The Economist article claims that a data vault will be "like a bank vault containing all the data that constitute a person’s reputation." But reputation isn't a matter of data; it's an exercise of social judgment. It's what other people say about your data. The relationships that dictate reputation are impossible to reduce to data points, and even if they could be, they would still be subject to further interpretation. These and other pieces of reputation-related data are never inert and neutral, never self-evidently positive or negative. No data point intrinsically conveys some simple and single piece of information.
As Gina Neff and Brittany Fiore-Silfvast explain in this presentation, data have different valences; data are always mediated. They must be contextualized by an interpretive community -- pieces of data don't automatically dictate how they must be interpreted by anyone who sees them. They are available to be put to whatever use those with the authority to contextualize them see fit. And more data doesn't automatically make for a clearer picture. It just makes for more interpretive work, more exercises of power by the interpreters, more occasions where power might need to be resisted.
In other words, data are not inherently a weapon against power, as transparency advocates sometimes seem to suggest; they are also a tool of power. A reputation is constituted by who gets to interpret data and for what reasons; it is determined by power relations. Amassing more data won't somehow undo the hierarchy; it just gives people in the position to impose social judgments more information to rationalize their prejudices and protect their privileges.
If you are assigned a reputation score, someone is making an effort to exercise power over you, and you had better be able to marshal enough power to overwrite that score or, better still, ignore it. Reputation scores are a tool of domination in search of an application. The chief market for reputation data (or rather, data contextualized as being relevant to reputation) is not necessarily the person that data is assigned to but those who want to use it for racketeering purposes ("it would be a real shame if this data about you hurt your job chances") or those who want to use it for social risk management.
It's no coincidence that Reputation.com is joining forces with the credit-score agencies, as the Economist reports. It's an extension of the same racket, to create a reputation score that is as actionable as a credit score. It will have the same effect of obviating the contingencies surrounding social judgment and making it seem like one convenient number can convey enough information about a person to seal their fate.
If the reputation score is applied to you, you will have to pay to try to improve it or "clean it up." But for others, such a score can be used to guide decisions about whether you are worth knowing, worth having as a roommate, worth friending on Facebook, worth offering a microloan to, worth renting a space on Airbnb to, etc., etc., etc. Just as banks don't want to lend money to bad credit risks, individuals may conceivably balk at lending time to bad social risks. Why waste your time on outcasts or low-status losers who can't improve your reputation? Why take a chance on their surprising you with some undocumented and unproved talent? There are so many other people in the world, especially now that we have technology to assemble them in networks and collapse the distance between all of us.
So expect the reputation-management companies (Facebook pre-eminent among them) to be reminding us of the risks of unhedged sociality. It's not prudent or profitable to be associating with people in the wild, with no tools to contextualize them -- no tools with which to exercise your power to contextualize them and their data so it all works for you and your quantified reputation. Friendship is too valuable a resource to spend on unworthy people. You worked hard to carve out your favorable node in the network; you don't want to associate with people willy-nilly and surrender that, have them drag you away from the center of the action. Think of your EdgeRank! You have important things to share, and you want to be sure the right people will hear them and reward you for them.
Of course, if reputation scores and reputation management gain traction, social media will become an even more egregiously self-promotional space, one where we must make explicit all the social capital that the flow of everyday life allows to remain implicit, for better or worse. The discriminatory implications of our various social decisions will seem measurable. We can shame and be shamed in real time. Perhaps this will make us feel lousy about such social judging; perhaps it will make us even more judgmental. Probably it will do both. (Remember: every time you pass over an opportunity to "like" something, it counts as a de facto "dislike.") We'll feel terrible as we play up the value of our social connections to the hilt and try to build fences around them. Facebook would serve as an audition space for middle-class eligibility. If we share enough, and appropriately, and find enough of the right sort of people to sponsor our online profile, we might just qualify for that home loan, or that party invitation, or that job where the "office culture" is very delicately maintained.
Thanks to social media, the pressure to conform leads not to occasional, discrete moments of inner struggle but to the demand for a constant performance of credible normality. It's exhausting work; luckily, algorithms are assessing our data to tell us just what would be normal for us to do next.