I was checking in for a flight when the desk manager asked me if I would like to participate in a beta programme that the airline was deploying for its frequent flyers. “No more checking in, no more boarding passes, no more verification queues,” she said with a beaming smile. Given the amount of travelling I do, and the continued frustrations of travelling with a passport that is not welcome everywhere, I was immediately intrigued. Anything that makes the way to a flight easier and reduces the variable scrutiny of systemically biased algorithmic checks was welcome. I asked about the programme.
It is a biometric facial recognition programme. It recognises my face from the minute I present myself at the airport, and from there on, until I am on the flight, it tracks me, locates me, offers me a visual map of my traces, and gives me seamless mobility, alerting the systems I am transacting with that I am pre-approved. I saw some mock-ups and imagined the ease of no longer fishing out passports and boarding passes at every interaction in the airport. I could also see how this could eventually be linked to my credit card or bank account, so that even purchases I make would be seamlessly charged to me, and if there were ever any change of schedule or emergency, I could be located and given the assistance I needed. It was an easy fantasy.
I was almost tempted to sign up for it, when out came a data consent form. It was about two pages long, with tiny print that makes you think of ants crawling on paper in orchestrated unison. I stared at those pages for a while, and turned to the manager. “How exactly does this system work?” She was startled for a second and then gave me a long, reassuring answer. It didn’t have much information, but it did have all the buzzwords — “machine learning”, “artificial intelligence”, “self-learning”, “data-driven”, “intuitive”, “algorithmic” and “customised” were used multiple times. That is the equivalent of asking somebody what a piece of poetry could mean and being told, “nouns”, “verbs”, “adjectives”, “adverbs”, and “participles”.
Her answer was a non-answer. So I cut through it and asked her to tell me who would collect my data, how it would be stored, and whether I would be able to see how it would be used. She pointed at the unreadable two pages in front of me and said that I would find all the information I needed in there. I walked off to my flight without signing on the dotted line, struck by how uncanny the entire experience had been. I had just been asked to submit myself to extreme surveillance for a trade-off that would have saved a few hours a year of my life and enabled some imagined ease of mobility in purchasing things. It wasn’t enough that I was going to pay money; I was also going to pay with my data.
In that fluffy dream of easy movement and transacting, I had accepted that this service, presented as a privilege, was an extremely invasive process of surveillance. I had also skipped the due diligence of asking who would use this data of my body and being, and for what purposes. When I asked for information, I was given a black box: a legal contract as inscrutable as it is unreadable, and empty words that pretend to describe a system when all they produce is an opaque string of concept-words. Had I not asked those couple of extra questions, and had I not been persistent in getting actual information, I would have voluntarily entered a system that would track, trace, and record me at a level that turns the airport into a zoo.
This is the trope that SMART technologies have perfected — trading surveillance for promised convenience. The airport is already a highly surveilled space, but when these SMART technologies enter our everyday spaces, the amount of information they collect and store about us is alarming. The possibility that every surface in the city is an observation unit, that every move we make is recorded, that our lives are an endless process of silent verifications that seamlessly authorise us, is scary. By corollary, when we become deviant, unintelligible, or undesirable, the same checks can turn hostile and be used for extreme persecution and punishment. I am not a technology sceptic, but I am growing wary of smart technologies being presented as magic, where we are asked not to worry about how it is done and simply watch the sleight of hand that keeps showing us the illusion while hiding the menace.
Nishant Shah is a professor of new media and the co-founder of The Centre for Internet & Society, Bengaluru. This article appeared in the print edition with the headline ‘Digital Native: In Your Face’