How does the common man judge which exit poll is most reliable? Rely on the one whose numbers you like the most and dismiss the one whose numbers you dislike? Today, some even judge the accuracy of an exit poll by the survey agency that conducted it, or by the television channel that commissioned it. Others make a judgment from the sample size — a commonly shared notion is that the bigger the sample size, the more reliable the exit poll.
In reality, these should not be the indicators for judging the accuracy of an exit poll. Then how should one read these numbers, and should we even rely on these exit polls?
The numbers thrown up by various exit polls can neither be rejected outright nor all be accepted without a pinch of salt. There are good exit polls and there are some not-so-good exit polls. Just as the least we expect from a physician is to take the patient's temperature, the least one should expect from an exit poll is an estimate of the vote share.
The science of surveys, which includes exit polls, works on the assumption that the data have been collected by interviewing a large number of voters using a structured questionnaire. It is a different matter whether the interview was conducted over the telephone, or face to face using either pen and paper or a gadget (an iPad or a mobile app).
This method is not new; it goes back to 1957, when the Indian Institute of Public Opinion conducted a poll during the second Lok Sabha elections. But not even the best guesswork can substitute for the methodology that is required. Without a structured questionnaire, the data can neither be collected coherently nor analysed systematically to arrive at vote share estimates.
Sample size, representativeness
Since exit polls began in 1957, there has been enormous improvement in at least one aspect, which is sample size. Gone are the days when a national sample of 20,000-30,000 looked like a very large sample. Even those who pioneered psephology in India, like Prannoy Roy and Yogendra Yadav, worked with similar samples from the early 1980s until the late 1990s. Though the Centre for the Study of Developing Societies (CSDS) does not normally conduct exit polls, it did conduct a few — I recall CSDS’s first exit poll during the 1996 Lok Sabha polls with Nalini Singh and Doordarshan using a sample size of 17,604.
We went on to make a very accurate national projection of both vote share and seats. The CSDS has continued its voting behaviour study (National Election Study) using post-poll surveys as a tool, which is a much larger study, and has also used the study for making projections of vote share and seats at times. Our samples for post-poll surveys during the 1998 and 1999 elections remained below 10,000, and our projections were not off the mark. With changing times and for state-level projections and analysis, we increased our sample size in 2004, 2009 and 2014 to a little more than 20,000 (the biggest sample size was about 37,000 in 2009). When we complete our post-poll survey for the 2019 Lok Sabha election, we should have a sample of about 22,000. Our seat projections may have been off the mark on some occasions but the vote share estimates have been very close on many occasions.
I am happy to note that the sample sizes of various exit polls for 2019 run into several lakhs. I only wish I had learnt the art of collecting a properly randomised sample of such a large size. Yes, a large sample size is important, but based on my experience, I can say for sure that it is far more important to have a sample that is representative of the profile of voters. But in recent years, the pressure on television channels (which in most cases are the sponsors of these exit polls) to have the largest sample has resulted in exit polls with bigger and bigger samples.
Earlier, the competition among the channels was only about which channel aired its exit poll the earliest; now, it is also about whose poll has the largest sample size.
In recent times, seat projections by the CSDS based on post-poll surveys have gone completely wrong: in Chhattisgarh we predicted the winner wrong, and in the UP Assembly elections we predicted the winner right but were way off the mark on the final tallies for different parties. We tried to introspect on what may have gone wrong in these surveys, as the methodology remained the same and the data were collected from representative samples. If someone asked whether we might have got those post-poll survey estimates right with bigger samples, I would not hesitate to say no; a bigger sample would not have helped. Certainly, something else went wrong with those post-poll surveys — perhaps some fake interviews filed by the investigators, which we could not detect in time. Technology — call-backs to respondents, images of interviews being conducted, phone calls from the field, WhatsApp groups and similar tools — has helped us overcome such shortcomings, yet there is no rule of thumb for getting the prediction right.
Swing model & complexities
There are other challenges in conducting a pre-poll survey, a post-poll survey or an exit poll. The prediction of seats is based on a swing model: the current poll estimates vote shares for different parties and alliances by interviewing selected respondents, and the seat forecast is made by applying the change from the previous election's vote shares (the swing) to the previous election's results.
Estimating the vote share is not an easy task either, given various diversities in India — diversity of location, caste, religion, language, different levels of educational attainment, different levels of economic class — and all of these have a bearing on voting behaviour. Over- or under-representation of any of these diverse sections of voters can affect the accuracy of vote share estimates.
If these were not enough, there are other difficulties. Since the swing model is applied on the previous vote shares, a change in alliances, or a split or a merger of parties, between two elections poses a difficulty in making this estimate of past vote shares. During the 2014 Lok Sabha election, the JD(U) was not an ally of the BJP and polled 15.7% of the vote in Bihar, while the NDA together polled 38.7% and the UPA polled 29.7%.
Now the alliances have changed and the JD(U) is part of the NDA. Since the JD(U) contested against the NDA in 2014, it is difficult to estimate what the NDA's vote share would have been in 2014 had the JD(U) been part of it. To put it simply: if 38.7% of the votes gave the NDA 31 Lok Sabha seats in 2014, how many seats might it get if that vote share increases or decreases? This is the story of just one state; imagine the complexity of working out this forecast state-wise for 29 states.
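As a rough illustration of the arithmetic involved, the 2014 Bihar figures can be re-baselined under the new alliance. This is only a sketch: the 2014 vote shares are those cited above, but the transfer rate is an assumption, since in practice not every JD(U) voter of 2014 would vote for the NDA.

```python
# Naive re-baselining of the 2014 Bihar vote shares after the JD(U)
# joined the NDA. The 2014 figures are from the text; the 70%
# transfer rate is an illustrative assumption, not a measured number.
nda_2014 = 38.7   # NDA vote share in Bihar, 2014 (%)
jdu_2014 = 15.7   # JD(U) vote share in Bihar, 2014 (%)

# If every 2014 JD(U) voter moved with the party into the NDA:
naive_base = nda_2014 + jdu_2014          # 54.4%

# Transfers are rarely perfect; suppose only 70% of JD(U) votes carry over:
transfer_rate = 0.7
adjusted_base = nda_2014 + transfer_rate * jdu_2014

print(round(naive_base, 1), round(adjusted_base, 1))
```

The gap between the naive and the adjusted baselines (here about five percentage points) is exactly the kind of uncertainty the pollster must resolve before any swing can be applied.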
The task of applying the swing is far more complex than one might think. Measuring swing and electoral change is easier when the contest is limited to two parties; the complexity increases as more political players are added. For example, the swing from the Congress to the BJP, or vice versa, is easier to measure than the swings among the BJP, the Congress, the SP and the BSP in the same election.
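For readers who want to see the mechanics, here is a minimal sketch of a uniform-swing seat projection. The constituency names and vote shares below are invented for illustration; a real model works with actual past results, state-specific swings and far messier data.

```python
# A minimal sketch of the uniform-swing seat model described above.
# All constituency-level numbers here are hypothetical.

def project_seats(previous, swing):
    """previous: {seat: {party: vote share % last election}};
    swing: {party: estimated change in % points from the current poll}.
    Applies the same statewide swing to every seat and counts winners."""
    tally = {}
    for seat, shares in previous.items():
        projected = {p: s + swing.get(p, 0.0) for p, s in shares.items()}
        winner = max(projected, key=projected.get)
        tally[winner] = tally.get(winner, 0) + 1
    return tally

# Three invented constituencies:
previous = {
    "Seat A": {"NDA": 45.0, "UPA": 35.0, "Others": 20.0},
    "Seat B": {"NDA": 48.0, "UPA": 32.0, "Others": 20.0},
    "Seat C": {"NDA": 33.0, "UPA": 25.0, "Others": 42.0},
}
# Suppose the current poll estimates the NDA down 6 points and the UPA up 6:
result = project_seats(previous, {"NDA": -6.0, "UPA": +6.0})
print(result)
```

Note how a single statewide swing flips only the seats that were close last time; this is why the model breaks down when alliances change or when swings differ sharply across regions and parties.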
This model can be fully appreciated only by those still engaged in conducting such polls (pre-poll, post-poll and exit) using conventional methodology. In the current phase of exit polls, the exercise is more an estimation of seats alone, which can be done by a method other than the swing method: the count method.
How comprehensive is it?
The count method is itself time-consuming and labour-intensive, as one is expected to make an estimate for each seat. When agencies claim to have made seat-wise estimates, the poll is presented as the most comprehensive one; this is when the sample size runs as high as several lakhs. Some agencies have innovated on the count method, maximising gains while spending relatively little time and money.
While an exit poll might claim to have covered all constituencies, in practice a poll is not required in some seats — why would one waste time and energy polling in Varanasi, where the Prime Minister is contesting, or in Gandhinagar, where the BJP president is contesting? If one looks carefully at constituencies, state-wise, many such seats can be eliminated from the survey and one can still make an accurate estimate. Once this elimination is combined with the count method, a survey is required only in a limited number of difficult (swing) constituencies. Such an exit poll can be far more accurate than polls conducted using traditional methodology. But while polls using traditional methodology estimate vote share and help us analyse voting behaviour across different socio-economic backgrounds, the count method can hardly give an estimate of vote shares, and any systematic analysis of voting behaviour can only be a dream.
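The elimination step can be sketched simply: treat seats whose previous winning margin exceeds some threshold as safe (and assign them without a survey), and field the survey only in the rest. The seat names, margins and the 15-point threshold below are all illustrative assumptions, not figures from any agency.

```python
# Sketch of the elimination idea: survey only the close ("swing") seats.
# All margins and the 15-point threshold are invented for illustration.

def split_seats(previous_margins, threshold=15.0):
    """previous_margins: {seat: winning margin (% points) last election}.
    Seats at or above the threshold are treated as safe and skipped;
    the rest go into the field survey."""
    safe = [s for s, m in previous_margins.items() if m >= threshold]
    to_survey = [s for s, m in previous_margins.items() if m < threshold]
    return safe, to_survey

margins = {"Seat P": 35.0, "Seat Q": 22.0, "Seat R": 3.1, "Seat S": 9.5}
safe, to_survey = split_seats(margins)
print(safe, to_survey)
```

The efficiency gain is obvious: the agency spends its interviewers only where the outcome is genuinely in doubt, which is why the count method can cover "all" constituencies while polling in far fewer.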
Time to reflect
The numbers from various exit polls for the 2019 Lok Sabha elections are out. The question is, will these numbers meet the same fate on May 23 as the projections of 2004, or will these exit polls be more accurate than those of 2014? During the 2004 elections, all exit polls predicted a comfortable victory for the NDA, but finally we had a fractured mandate with the Congress emerging as the single largest party.
The average of all exit poll projections for the NDA was 255 seats, and it won 187; the average of predictions for the UPA was 183 seats, and it ended up with 219. Will the latest exit polls resemble those of 2014, when most predicted the winner right but failed to assess the extent of the BJP's victory (though a few also predicted the extent of the victory with great accuracy)? At the moment, we do not know how seriously these numbers should be taken, or which poll may prove more accurate than another.
The CSDS voting behaviour study using post-poll survey techniques will be completed in the next couple of days. We are not in a position to estimate the vote share yet, as CSDS has not conducted an exit poll outside the polling booth on election day.
Professor Sanjay Kumar is currently Director, Centre for the Study of Developing Societies. Views expressed are personal.