Deepfakes and Disinformation

How the news-polygraph research project is using an AI platform to counter media manipulation

The Pope as a hip-hop mogul in a white down jacket, the violent arrest of Donald Trump or a supposed video call between Berlin's former Governing Mayor Franziska Giffey and Kyiv's mayor Vitali Klitschko: artificial intelligence is now capable of presenting us with a false but perfectly staged reality in seemingly real photos or videos. This may have entertainment value, but with many articles and social media posts we need to ask ourselves: what's true and what isn't? Because nowadays anyone can publish. Social media requires neither references nor multi-level verification. This is a challenge for journalists and media companies - they have to decide in a matter of hours whether material they've received is newsworthy and should be included in their news coverage.

How does one verify these sources? What standards does one set for classification? Good tools and awareness-raising are needed at all levels in order to counter propaganda and fake news. Such a toolkit is currently being developed by the news-polygraph research project, which is coordinated by transfermedia in the MediaTech Hub. The platform, in effect a digital lie detector, aims to facilitate the work of journalists in the future.

Checking facts is and always has been one of the central tasks of journalism. What is new is the massive amount of data - good, reliable tools and greater awareness are needed here in order to be able to make valid statements. Many media companies have fact-checking editorial teams or offer further training. Some have even put together a toolbox of sorts, such as a folder with various programmes from which users must make a somewhat awkward selection. news-polygraph builds on existing processes, optimises them and brings them together in one interface. The research alliance has already entered into a dialogue with media partners and asked them about their needs. The goal of the project is to provide a platform offering fast, intuitive support.

Which content is being manipulated - and which isn't?

transfermedia COO Claudia Wolf, who is responsible for the alliance's project coordination, explains: "From a scientific point of view, we would like to dispense with the popular term 'fake news' in our research project. For us, it's about verifying information. At the end of the day, people are the ones who decide whether or not something is 'fake'. We have designed a platform that gives media professionals and journalists tools to make a quick evaluation of texts, videos, audio and photos. That's particularly important for newsrooms."

Questionable media content can be entered via an intuitive drag-and-drop user interface and checked by an artificial intelligence programme. The models and services run in the background and remain invisible to the users. The journalists simply have to decide in advance what they would like to have checked, determine the running order for the queries - and whether they want to integrate so-called crowd panels.
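The workflow described above - pick the checks, set their running order, optionally add a crowd panel - can be sketched roughly as follows. This is a minimal illustration of the idea only; the class and check names are invented assumptions, not the actual news-polygraph API.

```python
# Hypothetical sketch of a journalist-configured verification job.
# All names (VerificationJob, "audio_forensics", ...) are illustrative
# assumptions, not the real news-polygraph interface.
from dataclasses import dataclass, field

@dataclass
class VerificationJob:
    media_path: str                             # the uploaded photo, video or audio file
    checks: list = field(default_factory=list)  # checks in the order the journalist chose
    use_crowd_panel: bool = False               # optionally queue a human crowd query

    def run(self) -> dict:
        """Run each check in order; backend models stay invisible to the user."""
        results = {}
        for check in self.checks:
            # placeholder verdict standing in for a background AI service
            results[check] = f"analysing {self.media_path}"
        if self.use_crowd_panel:
            results["crowd_panel"] = "crowd query queued"
        return results

job = VerificationJob("suspect_clip.mp4",
                      checks=["audio_forensics", "image_forensics"],
                      use_crowd_panel=True)
print(job.run())
```

The point of the sketch is the ordering: the journalist, not the platform, decides which checks run and in what sequence, and whether the human crowd is consulted at all.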

While text and image verification in editorial offices is already advanced, the verification of video sequences is much more complex. This is where the Fraunhofer Institute is getting involved as a partner. It specialises in audio sequences, including producing forensic audio analyses in the area of law enforcement. Other alliance partners such as rbb or Deutsche Welle are making equally important contributions.

The human crowd as a verifier

Fact-checking with the support of a human crowd is a special feature of news-polygraph. The integrated crowd panels kick into action if, for example, material is made available by whistleblowers to newsrooms which have no access to contacts in the respective country to confirm its authenticity. A mountain range, trees, vegetation at a certain time of year: people living on site are able to clearly identify pictures and videos without having to resort to technical aids. Such a line of questioning can work if volunteers are grouped into crowds in advance and qualify after a registration process to participate according to their specific knowledge. The input for the crowd approach comes from the academic environment, including the TU Berlin, which had already been conducting a test phase with Ukrainian native speakers during the initial stages of the war in Ukraine. Following a crowd query, the programme then reports back on whether a predetermined, high percentage of participants have made the same statement.
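The consensus rule at the end of that paragraph - report agreement only when a predetermined share of participants gave the same answer - can be expressed in a few lines. This is a minimal sketch of the general idea; the function name, threshold value and example answers are assumptions for illustration, not the project's actual logic.

```python
# Minimal sketch of a crowd-consensus threshold check.
# crowd_consensus() and the 0.8 threshold are illustrative assumptions.
from collections import Counter

def crowd_consensus(answers, threshold=0.8):
    """Return (answer, share) if one answer reaches the threshold, else None."""
    if not answers:
        return None
    top_answer, count = Counter(answers).most_common(1)[0]
    share = count / len(answers)
    return (top_answer, share) if share >= threshold else None

# four out of five participants identify the same mountain range
votes = ["Carpathians", "Carpathians", "Carpathians", "Alps", "Carpathians"]
print(crowd_consensus(votes))  # → ("Carpathians", 0.8)
```

If no single answer clears the threshold, the platform would simply report that the crowd was inconclusive - the final judgement stays with the journalist either way.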

Here again, it's the journalists who ultimately decide whether they agree with the assessment. As Claudia Wolf points out, "people are the ones who have to make the decision." That's also what the project's title "news-polygraph" is referring to: like a lie detector, it is only a means to an end. The tool is not meant to diminish the original work of the journalists - that would be neither possible nor desirable: "It is in the DNA of journalists to classify and evaluate content. We are just providing a simplified, fast, well-functioning and trustworthy tool." The enhanced approach through crowd support not only makes it possible to respond quickly, but also to go into much more detail with another line of research.

An alliance of specialists

The semantic linking is being handled by the transfermedia team, which had already installed an ontology for metadata in the dwerft research project and is also responsible for project management and coordination of the ten partners. The technological part is based at Fraunhofer IDMT. Other partner companies are Ubermetrics, which specialises in social media posts; neurocat, which tests AI models for safety compliance; and the German Research Centre for Artificial Intelligence and Delphai, which provide a Google alternative for B2B purposes.

Looking to the future, MediaTech Hub will work with news-polygraph on the technical-media level, while the Medienanstalt Berlin-Brandenburg (mabb) will handle legal matters. The successful completion of the research project should see the development of marketable software which media companies can integrate into their own systems, with constant updates being made available.

The project began on 3 May as it moved from the preparatory phase to implementation. Media professionals can now look forward to an event in November when the project's partners will show what one can expect from news-polygraph in the coming years.

Photo: DeepMind on Unsplash

About MTH Blog

The media technologies of the future are already being used today – not only in the entertainment sector, but also in a wide variety of industries. Christine Lentz meets up with tech enthusiasts, established companies and researchers for our monthly MediaTech Hub Potsdam blog to tell the stories behind the innovative business models.