Prof. Kalina Bontcheva is leading the Sheffield team, which is working with leading experts from across Europe to develop the multi-million pound Vera.ai project (Verification Assisted by Artificial Intelligence).
What is digital disinformation?
Digital disinformation can come in various forms - including audio, video, images, and text - and can be loosely defined as information that is deliberately created to mislead, deceive, sow discord and cause harm. As such, digital disinformation poses a grave threat to the functioning of open democracies, public discourse, the economy, social cohesion and more.
With rapid advancements in the technologies used to create and disseminate disinformation, its use is becoming increasingly dangerous, as evidenced by the sophisticated disinformation campaigns around the Covid-19 pandemic and the war in Ukraine.
Furthermore, technological advances mean the creation of manipulations and deceptions, such as forged or doctored images, requires less expertise and is becoming increasingly difficult to detect. The vera.ai team aims to keep pace with these advances and ensure the solutions are open, easy to use and accessible to most people.
How can we combat it?
Bringing together partners from areas including computer science, research, journalism and technology, Vera.ai's multidisciplinary approach aims to deliver solutions from the widest possible community. The tools developed through the partnership are expected to deal with a variety of content types, while also laying the foundations for future research into how AI can be used in the fight against disinformation.
Prof. Kalina Bontcheva, Head of the Natural Language Processing (NLP) research group in the University of Sheffield's Department of Computer Science, is the vera.ai scientific director, as well as leading the Sheffield research team.
"With disinformation growing continuously in terms of its volume, spread and sophistication, as well as reaching more and more social media platforms, new tools are urgently needed to help verification professionals with detection, analysis and exposure of disinformation campaigns in a timely and reliable manner," said Kalina.
"To that end, our team here in Sheffield will be researching novel AI models that are fair, transparent and adaptable to new kinds of disinformation. Put simply, this means that the solutions we are working towards will not discriminate and are capable of explaining their decisions and outputs."
Building on successful solutions
Vera.ai will also serve as a continuation of the WeVerify project, which developed seminal open source algorithms used by journalists and others to detect disinformation. Seven of the 14 Vera.ai project partners were involved in WeVerify, which ran from 2018 to 2021.
As well as the open source algorithms, the WeVerify team (including contributions from Sheffield researchers) developed an extremely successful browser extension, which has more than 70,000 monthly users and has become one of the de facto tools for journalists and other investigators to check the veracity of information. The Vera.ai team plan to build on the plug-in's success and incorporate it into the new tools that are being developed.
The three-year project is co-funded by the European Union under Horizon Europe (vera.ai, Grant Agreement number 101070093), the UK's innovation agency (Innovate UK) under grant 10039055, and the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number 22.00245.
Vera.ai is being led by the Centre for Research and Technology Hellas, and the partner organisations are the University of Sheffield, Università di Urbino Carlo Bo, Fraunhofer Institute for Digital Media Technology IDMT, Universiteit van Amsterdam, Kempelen Institute of Intelligent Technologies, University of Naples Federico II, Borelli Center, ENS Paris-Saclay, Athens Technology Centre, Sirma AI EAD (trading as Ontotext), AFP news agency, Deutsche Welle, EU DisinfoLab and the European Broadcasting Union.