Consider this: An external advisory council, the Advanced Technology External Advisory Council (ATEAC) — essentially an ethics council to guide new technologies — was set up by Google in the last week of March and disbanded in the first week of April, within less than a fortnight. However one wants to spin it, this was a major goof-up by the company — a failure of conception, planning and execution. Google could have avoided this huge embarrassment if it had only googled to find out what lay ahead. But to do so, one must know what questions to ask. Even Google, it seems, asked the wrong ones.
The ATEAC comprised eight members from across disciplines and organisations. It was supposed to discuss issues that Google confronted in the new and fast-growing area of Artificial Intelligence, and in related technologies such as facial recognition and fairness in machine learning. ATEAC was to meet four times a year and complement the advice of an internal body to help Google navigate the ethical challenges that were emerging. It was to align the AI work of Google with the seven principles that the CEO, Sundar Pichai, had enunciated in June 2018. These are: (i) be socially beneficial, (ii) avoid creating or reinforcing unfair bias, (iii) be built and tested for safety, (iv) be accountable to people, (v) incorporate privacy design principles, (vi) uphold high standards of scientific excellence, (vii) be made available for uses that accord with these principles. A cursory glance at these elegantly set out principles will immediately tell you that words such as “unfair bias”, “accountable”, “beneficial” and “privacy” are ethically loaded and contested concepts and, therefore, some tough ethical questions lay ahead. Google’s initiative came not a moment too soon. Those of us who belong to the humanities were delighted. When it was disbanded, we were dismayed. So why did this happen?
The simple explanation is that the composition of the committee was challenged by at least 1,800 Google employees: Not just because the process employed was flawed — no consultation with the staff and no consultation with other members of the committee — but also because of who the members were. Kay Coles James, president of the Heritage Foundation, a conservative think tank, was strongly opposed because of her anti-environment, anti-LGBTQ and anti-immigrant public views. Dyan Gibbens, CEO of a drone company, Trumbull Unmanned, was challenged because of the debate on the use of AI for military applications. Privacy expert Alessandro Acquisti resigned because he didn’t believe this was “the right forum to engage with issues of fairness, rights and inclusion in AI”. Further, the task of the committee was not clearly set out. Its powers vis-à-vis Google were not properly defined. Could it, for example, veto technologies considered inappropriate? Were four meetings a year adequate? Would its recommendations be secret or made public? No clear answers were given, and many suspected that it was a PR exercise rather than a genuine attempt to begin a conversation on ethics and advanced technology.
But instead of letting the episode pass into the pages of history now that the committee has been disbanded, let me draw four lessons from it that are pertinent to India's largely absent debate on tech ethics.
The first is poor planning. The fact that prospective members were not told who else would be on the council, or that the political implications of their public positions would be weighed, and that clear terms of reference were never formulated — all this shows how casually the task of setting up the council was taken. Deep thought, which Google excels in, would have shown how fundamentally flawed the building blocks of the council were.
The second is the heartening fact that politics is back in certain corporate spaces. Google employees have been at the forefront of raising issues relating to ethics and new technologies. They opposed the work done by the company on drone technologies for the Pentagon. They challenged the inclusion on the council of a member with openly anti-LGBTQ and anti-environment views. They demanded more robust public debate on the ethical issues involved. This is great news. Politics has reemerged to counter the technology-first attitude that appeared to be settling over society and colonising our minds. Ethical issues, so important for the well-being of human societies, it now appears — thanks to Google employees — cannot be ignored by policy makers in organisations, universities or the state. And, what is most uplifting is the political activism that technology workers are prepared to demonstrate. Is this the 1960s generation being reborn? Here is a group of people who are highly paid, materially comfortable and enjoying good social status. Yet, they are willing to challenge their bosses, publicly too, on issues of ethics. Just when political sociologists such as myself were lamenting global capitalism's subtle strategies for co-opting young people, grieving for the apparent death of the protest politics of a Dylan or a Freddie Mercury, or a Bob Geldof, along comes this protest from young men and women, thousands of them, from Google. Salut, my friends. But when will our own IT community get publicly involved in ethical issues in India — such as the misuse of Aadhaar by the state, or the cooperation of IT companies with the government on the National Population Register in Assam?
Which brings me to the third issue. I looked at the ethical manifestos of Indian IT companies such as TCS and Infosys. The former only listed the ethical code of the house of Tatas: Integrity, honesty, accountability, and all that motherhood-and-apple-pie sort of stuff. There is no statement on ethics and technology such as Google’s seven principles. The same is the case with Infosys. These companies seem to be living in another age, where such ethical issues have not yet emerged. Despite huge grants by the Tatas and Infosys to Harvard and Yale, the humanities debates taking place there have not reached these companies. Perhaps some employee activism is now in order.
Finally, the fourth issue. I looked at the teaching curricula of the IITs: No engagement with ethics and technology. We have humanities programmes, but no mainstreaming of ethics with robotics or biotechnology or artificial intelligence. Our IITs produce technologists, not humanists with excellent technology capabilities. I was disappointed to find no Indian among the experts interviewed by the MIT Technology Review of April 6 on why ATEAC bombed. This despite many Indians leading technology companies in Silicon Valley. Perhaps it has to do with the absence of ethical sensibilities among Indian technologists. Perhaps it has to do with the IIT training they get. Perhaps the humanities scholars in the US are all postmodernists. Perhaps it has to do with our dharmic philosophy. Which one is it, Sundar Pichai?
This article first appeared in the print edition on April 27, 2019, under the title ‘Ethics in the time of technology’. The writer is professor at the Centre for the Study of Developing Societies, New Delhi, and the author of In the Hall of Mirrors: Reflections on Indian Democracy.