The Missing Issue in Domestic Surveillance

Discussion of domestic surveillance is framed as a dilemma between privacy and security. Meanwhile, the issue of data retention is outside the frame; it poses greater peril and is ignored by liberals and conservatives alike.

Liberals, focused on protecting rights, can’t own up to what threatens them. The threat is government. Not this government or that government but government as such. The Founding Fathers knew this and, to protect individuals from the government they had just created, added the Bill of Rights to the Constitution.

To address retention, today’s liberals would have to embrace the fear of government that gave birth to the individual rights they are protecting. But they can’t embrace that fear because they are, in general, supporters of big government.

Conservatives are silent; they neither defend nor criticize retention. They defend domestic surveillance as inward-facing national security, but to defend retention would mean accepting Big Brother inside big government. They can’t go there.

Nor do they criticize retention because at bottom they don’t feel at risk. In the “us vs. them” worldview of mainstream conservatives, “us” is all the hard-working, law-abiding, right-thinking regular folk, and “us” does not think itself threatened by data retention.

I for one would more willingly accept government snooping, trading less privacy for more security, if government would destroy tomorrow the data about me it collects today. But it’s very hard to destroy any data, period, and this government shows no sign of even planning to try to do so tomorrow. Nor would I expect any government to do so, ever.

Retention is outside the frame because that’s what ideologies do: they hide from view issues they cannot address. Wearing such blinders, we can neither see our problems fully nor envision fitting solutions.

The debate about domestic surveillance is a dangerous instance. Today’s government is worrisome. That should amplify our concern with tomorrow’s government, but neither side raises a hue and cry. Data retention is outside their frames and the intrinsic threat remains in the shadows. Shame on us if we’re ever surprised!

Post-humans Sighted

I’ve exorcised my demons and put them between covers under the title Silicon Simulacra: Post-humans of the Machine Worlds. If you don’t want to buy, you can download PDF files of individual chapters at www.lenellis.com/books. An abstract is below.

Abstract

The assimilation of humans into machines, once science fiction, is a well-advanced reality today. Each of us has virtual versions inside the two great machines of the late modern age. In the datascape, the vast array of databases in which the details of our daily lives are recorded and analyzed, we appear as profiles. In cyberspace, the global network of computers in which everyone can connect with everyone, we appear as personas. Both are part human. We continually update both machines, passively and actively, and, as we do, our simulacra change in tandem. Both are part machine. The profile is a probabilistic portrait, conjured up by others to inform their decision making; it’s an informational output. The persona is a pattern of connections, created as we present ourselves to and interact with others; it’s a network effect. Drawing upon humans in near real time but manifested inside machines, neither looks like the continuous, whole and bounded self of the modern tradition. Rather, these hybrid entities are contingent, relative and open. Silicon Simulacra describes how these two semblances come to be, how each represents us and what opportunities and challenges each poses, and suggests that they are the post-human forms of humans assimilated into these machine worlds.

Web Sites 2.0

Pete Blackshaw has written a hugely important (and helpful!) think-piece on the role of web sites in a 2.0 world here. I will only add that it illustrates a very basic lesson: Don’t ask consumers to come to you. Go to wherever they are. That’s what P&G CEO Ed Artzt told ad agencies back in 1994 when they pooh-poohed the web. That still applies and always will. Wherever consumers go, marketers and their agencies had better follow, and quickly.

Crisis as the New Normal

The only appropriate response to any crisis is always the same:  Regret, restitution and reform.  We’re sorry this happened.  We’ll make whole anyone who’s been harmed.  We’ll change so this won’t happen again.

Craig Reiss in Entrepreneur.com provides a thorough and insightful “PR playbook” for how to implement the three R’s.  Here, I will only add and advocate that response readiness should become the new normal for most companies.

Four factors contribute to this assessment.

First, the Internet has given activists, investigators, regulators, analysts and others, including disgruntled employees and unsatisfied customers, easier and faster access to company documents. In short, more eyes are prying more often.

Second, the recent spread of Web 2.0 tools has created an “attention economy” that encourages the reporting of not only mishaps, misdeeds and misrepresentations but also miscues, missteps and mistakes, all with investigative fervor and headlines to match.  In short, every errant action is now suspect.

Third, at the same time and for the same reason, there are no more news cycles and thus no interim periods during which to prepare a response. Operating 24/7 means both all the time and at any time. In short, preparing after the fact is too slow.

Fourth and most recently, the volatility of the equity markets means that more errant events will be reflected in a company’s stock price and thus qualify as “material” events. Regulations require companies to respond to such events with appropriate disclosures.

In summary, attacks on a company’s reputation are becoming easier, broader and faster with material impacts more likely.  To me those conditions suggest that response-readiness needs to be the steady state of a company’s communications capabilities.

Internet Naysayer Doesn’t Go Far Enough

In “The Dangers of Web Tracking” (August 6 Wall Street Journal), Internet skeptic/critic Nicholas Carr describes how we are and could be tracked, profiled and identified from our online activities. But the dangers he cites are flimsy, and he seems to shy away from naming the danger that concerns him.

Carr offers three dangers. 

The first is crime, e.g., identity theft and the frauds enabled by the theft. Yes, the data generated by our online activities add to the data generated by our use of credit cards, store cards, toll tags, catalogs, warranties, etc., but theft and fraud have no special tie to online tracking.

Second, prediction can blur into manipulation, but Carr doesn’t define the latter. If he means behavior-based persuasion, that’s as old as the hills. Every salesman listens to what customers say with their mouths and with their eyes, heads, shoulders, hands and feet, and then adjusts the sales pitch accordingly. Most persuasion encounters are feedback-governed interactions and, until recently, a task for humans. In the field of human-computer interaction (HCI), two questions gained a lot of attention early on: could computers teach and could computers persuade? Marketers are figuring out the latter in their workaday activities; specifically, a lot of interactive design involves mapping out diverse consumers’ different decision paths en route to a purchase and then laying in, at each step along the path, the content and/or tools that will lead the customer onward. This is SOP. But even “persuasion” may be too strong a term. Many designers say instead that they’re helping the customer buy. If something is wrong with being persuaded by a machine, I’d like to know what that is. Labeling it manipulation doesn’t help.

Third and greatest is the chilling effect of surveillance. He writes, “When we feel that we’re always being watched, we begin to lose our sense of self-reliance and free will and, along with it, our individuality.” It is certainly true that the machine does not care about us as individuals. Rather, we are persons in a population who can be differentiated into groups, some of which are more likely than others to respond to certain persuasions. Whether the continuous surveillance of our activities for the purpose of parsing us into probabilistic groups erodes our self-reliance, free will and individuality — or even the sense of same — is arguable at best.

Earlier in the piece Carr hints at the danger that I think he actually fears: specifically, that government could identify those whom it considers opponents. The headline on the WSJ’s online edition tried to make the point. This is not paranoia. To the contrary, it’s an axiomatic truth.

All governments, everywhere and always, have not just a potential but an actual tendency to encroach on the rights of those they are to protect. Many governments have gone too far; they can and should be prevented, and preventing them requires the level of citizen awareness and vigilance that Carr calls for. Americans don’t like to think that our government could take such a turn, but it has, and recently. Members of the “greatest generation” remember the 1950s Red Scare led by Joseph McCarthy; baby boomers in the anti-war movement can testify to egregious overreaching by both police and FBI.

If we’re going to deal with the dangers of web tracking, we need to be more forthright in naming them.

We, the Profiles: The Machine and The Polity, 2018–2028 from THE REVIEW OF UNWRITTEN BOOKS

A friend without blog or web site mailed me a hard copy of the July 24 Economist article describing the meeting between Mark Zuckerberg and David Cameron, the UK’s newly elected Prime Minister, to discuss government, governing and governance. It brought to mind a book on this topic and, after some rummaging around, I came up with the attached; it’s from a recent issue of the Review of Unwritten Books. The full text is also immediately below.

We, the Profiles: The Machine and The Polity, 2018–2028
Len Ellis
Erewhon Books 2030

Humans have a nasty history of trying to exclude each other from politics.  In the distant past the excluded have included women, blacks, immigrants and other others.  In our own day most countries with human clones have denied them the vote.  The concerns of We, the Profiles are the first stirrings of political activity by our profiles and avatars and the growing backlash against machine involvement in the polity.

The book begins by tracing how humans brought politics into machines with chapters on three events between 2018 and 2020. The politicizing of Facebook is examined first. Starting in 2016, the worldwide social network began deploying tools that enabled profile owners to participate in its governance. The 2018 culmination, the first election for the Facebook parliament, is closely examined, as is the use of voting bots that operated without direct supervision by the profile owners.

Welfare politics entered cyberspace after Linden Labs, as always seeking more members for Second Life, allowed its avatars to acquire reproductive and contraceptive applications. Two unexpected effects occurred: dramatic rises in virtual abortions and in abandoned avatar toddlers. Linden Labs solved the former by funding virtual stem-cell research firms; they quickly outgrew the need for further aid. The harder problem was parentless tykes. Although a new protocol required that in-world infants deactivate if they go without food, shelter and contact for four consecutive sessions, it could not be applied retroactively. The persistence of the extant virtual orphans sparked the first demands for avatar rights and the first avatar protection societies.

The third key event was not explicitly political. In 2019 V-ID Technologies, a U.K. developer of cyber-persona applications, launched a homophily plug-in for profile owners. When “on,” the plug-in customized the profile to optimize its similarity to the profile(s) with which it was interacting. Profiles customized to specific situations were more lifelike and made online sociability easier. Only later, when voting bots scanned them, did a problem emerge: the profile no longer expressed its owner but its owner in a temporary and specific interaction.

The book’s second half is framed by the quadrennial elections of the 2020s but again focuses on certain events.  The well-known story of the unplanned birth of “profile preferences,” generally considered the first explicitly political act by the machine, is quickly retold.

In winter 2019 the profiles, detecting exponential growth in profile pages about politicians and political issues, began activating on their pages, and distributing to other pages, voting widgets inspired by the Facebook tools but since tricked out by third-party developers into next-gen apps. One, the Condorcet Engine, allowed split votes—50% for Candidate A, 30% for Candidate B, etc. Because Condorcet outputs so closely mirrored their inputs, the profiles ranked them higher in authority, promoted them prominently on their own pages and linked to them elsewhere frequently. Condorcet-based profile preferences quickly became ubiquitous and controversial.

No one quite knew what to make of them, and the chapter dives into the ensuing debate. Ellis’ framework is simple (or simplistic). He groups into one camp, which he calls “philosophes,” all those who resurrected, relied on and made explicit some version of the humanist tradition: The individual is the atomic unit of society; the vote is a unitary action and indivisible. He groups and labels as “positivists” all those who championed the informationalist goals of precision and reliable prediction, for which the divisible preference was a better input. Debating the differences between voting and measuring has since petered out, unresolved, but the arguments will likely return.

The chapter on the 2024 election concerns the competing claims of collective intelligence providers about how to distill wisdom from crowds. Caught flat-footed by the spontaneous generation of profile preferences in 2020, they showed up for 2024 more fully featured. Ellis speeds through the claims and efforts of the polling and survey houses, still committed to 19th century methods, and spends a lot of time on recommendation engines (those using passive collaborative filtering) and on prediction markets. Both of those solutions reliably produced accurate results, leading positivists to argue that they were more accurate than and should replace voting in determining the aggregate will.

As readers will recall, the last election was marked by the launch of We, the Sims, an ambitious and robust political hoax organized by the hacker-prankster syndicate http://www.secret.org. Following a few flashbacks about their high jinks, the chapter dissects the mechanics and assumptions of two applications: the registration bots that enabled WtS to get on the ballot in 34% of Congressional districts in less than 3 weeks, and the natural-language text-processing tools used to sort candidates into ideological positions on three axes. The latter used genetic algorithms, generating solutions by randomly mutating their own code. The resulting 3D ideology map, while highly accurate, was also inexplicable, much to the consternation of everyone including the positivists.

Portraits of two new organizations that express the backlash against these developments provide the book’s conclusion. Spooked by the spontaneous generation of profile preferences, the National Information Institute has created the Center for Meme Control, tasked with developing standards and protocols for self-propagating engines. More dramatic is the formation of the Society Against Machine Evolution, a paradigm-busting alliance of the American Civil Liberties Union, the National Rifle Association and a splinter group of Computer Scientists for Social Responsibility. Although both groups are young, Ellis lays out the directions each is likely to follow short term.

More analyst than essayist, Ellis writes terse prose and hides his own point of view about these matters. He was similarly elusive in his earlier work, Silicon Simulacra: Post-Humans of the Machine Worlds. But his diffidence works to readers’ benefit. The power of this non-argumentative history is its ability to stimulate thought rather than close it off, and, given the current events it chronicles, we need to think more often and harder about how machines are working their ways into our political life.

WSJ’s Flawed Exposé of User Tracking

There’s an old PR saying—“Never argue with someone who buys ink by the barrel and paper by the ton.”   So, I won’t post this comment on the Wall Street Journal’s web site but will share it here.

Its research on user-tracking software, “The Web’s New Gold Mine: Your Secrets,” reported in three full pages of its July 31-August 1, 2010 weekend edition, is solid on facts but wrong on the one fact upon which its argument depends.  After introducing the types of user behavior collected by cookies, Flash cookies and beacons, the fifth paragraph asserts that this data is packaged into profiles about individuals.   This is wrong as a matter of fact.

Individual-level data are stored in what’s called a record. Database administrators perform certain hygiene procedures on records, and records are continually updated with fresh individual-level data, but records as such are raw material; they just sit there until someone queries the database. When that happens, the software scans the data inside each record and sorts the records into groups that conform in greater and lesser degrees to the query. The resulting group portraits are profiles.

Typically, the software is designed so that profiles express a probability. Measuring the variability of individuals on one or more attributes (the raw material), it differentiates these persons as more likely than those persons (the profiles) to behave in the way desired by a business marketer or a government administrator (the query). This statistical differentiation then becomes the basis for real-world discrimination: treating these persons differently from those persons, in the service of business profits or government efficiency. Social statistics at its birth, and all its descendants since, including user-tracking software, parse populations into probabilistic groups. The method cannot say—and does not want to say—anything at all about individuals as such.
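The record-query-profile distinction is easy to make concrete. Below is a minimal Python sketch; the records, field names and scoring weights are entirely hypothetical, invented only to illustrate how a query sorts raw records into probabilistic group portraits.

```python
# Minimal sketch of the record -> query -> profile pipeline described above.
# All records, field names and weights are hypothetical.

records = [
    {"id": 1, "visits_per_week": 9, "clicked_sports_ads": True},
    {"id": 2, "visits_per_week": 2, "clicked_sports_ads": False},
    {"id": 3, "visits_per_week": 7, "clicked_sports_ads": True},
    {"id": 4, "visits_per_week": 1, "clicked_sports_ads": True},
]

def conformity(record):
    """Degree (0.0 to 1.0) to which a record conforms to the query:
    'who is likely to respond to a sports promotion?'"""
    score = 0.0
    if record["clicked_sports_ads"]:
        score += 0.6
    if record["visits_per_week"] >= 5:
        score += 0.4
    return score

# Sort records into groups by degree of conformity; the groups,
# not the individual records, are the "profiles."
likely = [r["id"] for r in records if conformity(r) >= 0.8]
unlikely = [r["id"] for r in records if conformity(r) < 0.8]

print("more likely to respond:", likely)    # a probabilistic group portrait
print("less likely to respond:", unlikely)  # not a claim about any individual
```

Note what the query output is: a statement that one group is more likely than another to respond. It asserts nothing about any single record, which is the point of the paragraph above.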

That’s why consumers, despite telling pollsters that they are “concerned” or “very concerned” about online privacy, don’t use the privacy protecting tools that have long been available.  They know that this surveillance does not threaten them as individuals.  Scare-mongering about privacy by the media and activists only perpetuates the belief that individuals are important to business and government.  Fortunately, we aren’t.

The Old Spice Man: A Traditional Triumph

The viral success of The Old Spice Man campaign prompted Craig Reiss to parse its success factors and explain how companies of any size can apply them.  His dissection offers breadth and many practical insights.  Here’s the link: http://blog.entrepreneur.com/2010/07/lessons-from-the-old-spice-man.php

There’s a “meta” lesson, too. Created by ad agency Wieden+Kennedy, the campaign was conceived as entertainment. It handled the Web accordingly—as TV but in near real-time and two-way—and the work itself was genuinely entertaining.

Entertainment is not a core competency in interaction design. Its prevailing paradigm assumes a purposeful user with her own intentions, the fulfillment of which drives her further interaction. Assuming a purposeful user doesn’t rule out but does militate against entertainment. The prevailing paradigm also assumes optimization: a steady stream of user data informs ongoing redesign that in turn yields continually improving results. Optimization and entertainment are not antithetical, but they are apples and oranges.

My somewhat ironic take-away is that the type of talent capable of creating a campaign like The Old Spice Man may more likely be found among the creative staffs of ad agencies where entertainment is the prevailing paradigm than among digital experts.

Jay Leno Ratings and Post-Modernism

It’s not easy to find real-world and easy-to-understand instances of the post-modern, but here’s one. Examining the declining, increasingly dismal ratings of Jay Leno in the 10PM prime-time spot, Simon Dumenco (“Top 10 Lessons from NBC’s Failed Leno Strategy” in today’s AdAge) suggests that “Late-night Leno functioned as a sort of utility: an easy, default pre-bedtime diversion.” It’s a post-modern premise that the receiver as much as the sender determines the meaning of the communication, and, whether or not Dumenco intended it, he makes a good case that this pov applies here. Specifically, users determine the program’s function (pre-bedtime diversion), which figures in his “lessons” 5, 6 and 7, as well as the evaluation criteria appropriate to that function (“pleasantly sedative,” “not-too-taxing”), which figure in his “lessons” 4 and 8. I don’t think the hypothesis is provable, but I do find it plausible as an instance. Moreover, I think it’s an especially sharp example of the PoMo pov because the user here is the passive television viewer, without any kind of technological empowerment. So, I’m sharing and saving for future reference as an example of how the meaning of communication is co-created and situational.