The Indraprastha Institute of Information Technology Delhi (IIIT-Delhi) and global tech company Logically have extended their collaboration on countering misinformation and hate speech until 2026.
“As part of the partnership, the two organisations will conduct further research on developing advanced technologies to counter hate speech and online mis- and disinformation. The partnership will also enhance multimedia analytical capabilities, including video, images and memes, as well as build multilingual models that understand regional languages in India,” they said in a statement.
The partnership began in 2020 between Logically and the Laboratory for Computational Social Systems (LCS2) at IIIT-Delhi. The two have been collaborating on "fundamental technical research on understanding the provenance, motivations, and psychology of online misinformation".
“Research from the first two years of collaboration has already been converted into multilingual capabilities that have been deployed in Logically’s flagship threat intelligence platform – Logically Intelligence – to detect and analyse mis- and disinformation and online harms more quickly. In 2021, outputs from the research secured recognition in prestigious academic conferences,” the two said in a statement.
Commenting on the partnership, Dr Anil Bandhakavi, Head of Data Science at Logically, said, “We are thrilled with the impact from the first two years of our research collaboration with IIIT-Delhi. As expected, we have been able to show quantifiable results in the space of research to curb hate speech and mis/disinformation. Given the success from the first phase of our collaboration, we are excited to further strengthen our partnership with a prestigious institution like IIIT-Delhi.”
Dr Tanmoy Chakraborty, the director of the Laboratory for Computational Social Systems and the head of the Centre for AI at IIIT-Delhi, said, “We look forward to building further on our research successes and growing our research teams in the next phase of the collaboration. Our research capabilities and Logically’s industry experience will enable us to develop better insights in understanding online harm and its prevention across languages and various forms of media”.
In a joint statement, the two said their research partnership had succeeded in designing “predictive models to forecast the likelihood of a social media post attracting harmful content over social media discourse, enabling content moderators to more quickly identify social media posts that may invite online harm”.
“Additionally, to further understand and identify community-level threats, the teams modelled how hateful online echo chambers are formed, observing that a small number of echo chambers are responsible for the spread of the majority of harmful online content,” they said.