It's not (just) about Facebook. Or Cambridge Analytica.
In the Facebook-Cambridge Analytica (FB-CA) saga, the three important players so far are: First, Facebook, with the data of millions of people, couched in an architecture with dubious privacy permissions. Second, Aleksandr Kogan, a psychology professor at Cambridge, whose app “thisisyourdigitallife” harvested the data of the people who downloaded it as well as the data of their Facebook friends. Third, Cambridge Analytica, a UK company to which Kogan sold this data, and which used it to create profiles of individuals and micro-target messages to manipulate their voting behaviour. Chris Wylie, who helped set up Cambridge Analytica, is the whistle-blower who has revealed the questionable practices that emerged out of this nexus.
Zooming out from this specific case is important. One possible silver lining to the FB-CA scandal is that it will lift the halo around data analytics, artificial intelligence (AI) and machine learning (ML), allowing us to see them with all their warts. FB-CA are not the only ones to exploit these techniques, which have been projected over the past few years as “the future”.
AI, ML and data analytics are generally heralded as a force to “do good”. A clutch of studies using these techniques have led to instructive insights related to crime, traffic management, health, etc. What appears to have happened subsequently is that the seminal studies are repeatedly cited, even oversold at times. The dark side of these techniques, that algorithms and data can be “weaponised” to hurt rather than help the weak, tends to be underplayed. It is this ugly face of algorithms that Cathy O’Neil’s book, Weapons of Math Destruction, documents. She makes two other important points: some claims are overstated, and sometimes, contrary to the claims, these techniques replicate, even exacerbate, inequalities and biases. She demonstrates how algorithms can be deployed against the weak in car insurance, pay-day loans, the screening of job candidates, etc.
It is one thing for data, as a by-product of our activities, to be used to understand, with the intent of improving, human life. Others, such as Zeynep Tufekci and Bruce Schneier (author of Data and Goliath), warn of another alarming shift: to nudge, even push, us to increase our digital footprint. This is done, they warn, with the sole purpose of enhancing data-mining opportunities for targeted advertising. The recent evangelising of digital payments is a good example. Cash expenditures do not log how we spend our money in as much detail as digital payments do. Metadata (anonymised data such as the frequency, timing and size of spending, rather than specific purchases) from digital wallets provides useful signals for advertising. Ditto with aggregator apps (food, taxis, etc). Insurance companies are becoming notorious for using such techniques to identify susceptible customers. In this sense, the emerging data economy poses new and serious challenges to privacy.
Schneier warns against the corporate and government surveillance potential of the digital economy. Until now, the main concern with the deployment of algorithms by corporations was targeted advertising. Wylie’s FB-CA exposé shows that businesses may not be squeamish about going further, undermining democratic practices through manipulation and coercion.
Edward Snowden’s revelations in 2013 about mass government surveillance also demonstrated the dark side of these techniques. Working with Glenn Greenwald and others, he exposed the unprecedented mass surveillance power of NSA programmes such as PRISM, XKeyscore, MYSTIC, etc.
One contrast between Wylie and Snowden, the two whistle-blowers, is worth flagging. Snowden, having exposed his own government, has paid a heavy price: he had to leave the country and seek asylum elsewhere (where he remains). For Wylie, thankfully, such a situation has not arisen.
One key reason for this contrast is that Wylie was blowing the whistle against corporations, whereas Snowden spoke out against government surveillance. Though FB-CA are powerful corporations, we have seen some semblance of action by the government against them. Whether the corporations will be held to account remains to be seen, as they tend to have the resources and influence to have their own way. Tech giants have been spending record sums on lobbying the US government, especially against anti-trust and privacy regulations.
This has lessons for us in India too, where Aadhaar is seen as a drill to get at the “new oil”, data. Recently, some businesses built on the Aadhaar platform intervened in the Supreme Court arguing that “access to the Aadhaar eco-system…is critical” and that declaring Aadhaar illegal will cause them “grave and irreparable harm”.
Like Facebook’s, Aadhaar’s consent architecture is weak at best, non-existent at worst (remember how Airtel opened Airtel bank accounts when people linked their Aadhaar numbers). Even if we get a robust data privacy and data sharing policy, do we have the enforcement machinery to prevent abuse (there was no serious action against Airtel)? Furthermore, there are genuine anxieties about how many of us are adequately equipped to successfully navigate a world of privacy permissions and protections.
The UIDAI, of course, maintains that all is well. In the face of the exposés by hackers (for example, ZDNet and Robert Baptiste) and journalists (for example, The Tribune), this assertion is vacuous, even laughable. What is alarming is that the UIDAI initiated legal action against the whistle-blowers, not the offenders.
Without pushing the parallel too far, one could view the emerging Aadhaar eco-system as the foundation of a state-sanctioned mega-Facebook project, or of something like China’s scary social credit system. With Aadhaar we are not up against the might of corporations (as Wylie is) or that of the state (as Snowden is), but against the combined might of the state and corporations.