Democracy and participation, Transparency and Data literacy

TraceMap - Fighting Misinformation with Transparency (winner)

Creators

Who is behind this?

Allegra Pochtler

TraceMap

http://www.tracemap.info

Germany

Who is joining forces?

iRights Lab

https://irights-lab.de/en/

Germany


The Good Lobby

http://www.thegoodlobby.eu/

Belgium


Nettz

http://Nettz.com

Germany


Proposal

Idea pitch

In social media, populist parties have found the perfect platforms for their deliberate use of misinformation to disrupt and inflame political debates. In this setting, fact-based discussions have become virtually impossible. To tackle this problem, we are developing an open-source web tool that will empower social media users to investigate and discuss online content and to debunk misinformation collectively.

Why does Europe need your idea?

Before the advent of social media, the power to shape political opinion was highly centralized in traditional media. While social media have made discussions more democratic and participatory, they also provide users with actions and reactions that facilitate the diffusion of hate speech and misinformation. This has allowed populist parties to implement highly effective and dishonest online strategies.

Our idea is to introduce the right tools for more transparent and fact-based online discussions - which, we believe, is fundamental for democracy. In this way, we counteract dishonest social media strategies in political debates and unlock the full potential of social media for participatory and democratic opinion formation.

What is your impact?

TraceMap will provide the means to qualify and verify online content. As a consequence, not only should misinformation be inhibited from spreading, but the overall quality of information circulating on social media should increase. The collective process of flagging will turn users into information curators, increasing data and digital news literacy.
Once this dynamic is established, misinformation and hate speech will be easily detected, undermining the success of dishonest online strategies and enabling fact-based and respectful political discussions to take place.

How do you get there?

1. Data Collection: the database we created already stores Twitter users’ social relations and their tweets.
2. Transparency: we are currently finishing the development of an interactive visualization interface that allows people to navigate through tracemaps - diagrams that transparently depict the diffusion of tweets (see the sketch after this list).
3. Analysis: we will develop a series of functionalities to assist users in interpreting what they read - such as hashtag tracking, hate-speech flagging and the option to reorder timelines by relevance.
4. Crowd-Debunking: the platform will allow users to interactively flag and share annotations regarding the quality or veracity of a tweet. Thus, we guarantee that fact checking becomes a collective process.
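
To make steps 1, 2 and 4 more concrete, here is a minimal, self-contained sketch of how stored follow relations and retweet timestamps could be turned into a tracemap. It is an illustration only: the class, the in-memory storage and the source-inference rule are simplifying assumptions, not the actual TraceMap implementation, which works against a proper database and live Twitter data.

```python
# Illustrative sketch only: an in-memory stand-in for the TraceMap data model.
# The inference rule (attribute each retweet to the earliest account the
# retweeter follows that already had the tweet) is an assumption for clarity.
from dataclasses import dataclass, field

@dataclass
class DiffusionStore:
    follows: dict = field(default_factory=dict)   # user -> set of accounts they follow
    retweets: dict = field(default_factory=dict)  # tweet_id -> list of (user, timestamp)

    def add_follow(self, user: str, followed: str) -> None:
        self.follows.setdefault(user, set()).add(followed)

    def add_retweet(self, tweet_id: str, user: str, ts: int) -> None:
        self.retweets.setdefault(tweet_id, []).append((user, ts))

    def tracemap(self, tweet_id: str, author: str) -> list:
        """Return (source, retweeter) edges approximating how the tweet spread."""
        edges = []
        seen = [(author, -1)]  # accounts that already had the tweet, with timestamps
        for user, ts in sorted(self.retweets.get(tweet_id, []), key=lambda x: x[1]):
            followed = self.follows.get(user, set())
            candidates = [(s, t) for s, t in seen if s in followed]
            # attribute the retweet to the earliest such account, else to the author
            source = min(candidates, key=lambda x: x[1])[0] if candidates else author
            edges.append((source, user))
            seen.append((user, ts))
        return edges

store = DiffusionStore()
store.add_follow("bob", "alice")
store.add_follow("carol", "bob")
store.add_retweet("t1", "bob", 10)
store.add_retweet("t1", "carol", 20)
print(store.tracemap("t1", author="alice"))  # [('alice', 'bob'), ('bob', 'carol')]
```

The point of the sketch is only the principle behind the visualization: diffusion paths can be reconstructed from who follows whom and who retweeted when, and the crowd-debunking flags of step 4 can then be attached to the nodes and edges of that graph.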

What is your story?

As engaged European citizens, we believe that access to unbiased information is key to democratic participation. TraceMap originated in a think tank for social and digital transformation, where we met and sketched the concept of a tool to combat misinformation on social media. Later on, our idea won first prize in the #disruptpopulism competition, awarded by the Bundeszentrale für politische Bildung. Since then, the four of us have been working intensively to implement it.

Who are you doing it for?

Social media have become central spaces for news consumption and vital for political debates. We are therefore carrying out this project for all social media users in Europe, whom we split into two broad groups.
The first consists of engaged users: those who proactively detect and flag ill-intentioned content and who seek to contribute to a culture of respectful and fact-based online discussions. The second consists of users who mainly consume those flags to interpret online news and political debates.
Our interactive platform is designed to connect political activists, multipliers, journalists, researchers and engaged social media users across Europe in their fight against misinformation and hate speech.

What makes your idea stand apart?

We propose an innovative approach to fight misinformation on social media: instead of focusing on manual content analysis, we visually expose the diffusion of information and give people many tools to do their own research and analyses.

TraceMap empowers users to become change-makers themselves. It is designed to transform the way people consume digital news: from passive and disoriented readers into investigative fact-checkers who are aware of the mechanisms of digital news.

Not only do we aim to make news diffusion transparent, we are also transparent in our work: our code and algorithms are open source.

€ 50000,-

Funding requested from Advocate Europe

€ 125000,-

Total budget

€ 50000,-

Funding granted from Advocate Europe

Major expenses

We develop our tool in modules; each module is an independent set of functionalities built in three-to-four-month cycles. With the Advocate Europe funding, we will cover the educational program and legal consulting and implement two analysis modules, which will already allow us to release the tool. Additional modules will depend on further funding. Personnel costs: EUR 88,200; content production for the educational program: EUR 10,000; web tool design: EUR 5,000; legal consulting: EUR 3,000.

What do you need from the Advocate Europe community?

We'd love your feedback and support! Do you know initiatives working on similar issues in your country? Do you know networks and potential partners for content creation? Do you know how to program? Join our team! Do you have a Twitter account? Donate a token at twitter.tracemap.info!

Project Journey

Join forces!

How the tangle between online knowledge and the ad industry induced a collective dementia

 

The Social Media Arena

You are reading this text online. How did you get here? Through a search engine? Did you find it on social media, posted by someone? Does the way you will interpret this text depend on who recommended it to you? Are the topic and style going to keep you interested until the end? Are you going to visit the pages I linked in the text? And how will you organize your thoughts about this text for future reference, in case you want to?

Now apply these questions to every single text you have ever encountered, on- and offline. And to every human being that has ever lived on this planet. At this point, you should have formed a mental image of an intricate network of knowledge production, distribution, (access?) and consumption. How will a piece of content operate on you? And when does it become established as common knowledge?

In a society that has become highly digital and in which people have very little time, social media have become the central arena for the collective construction of knowledge. That involves, of course, knowledge about mathematics or cooking, but also discussions about politics and historical facts. And that is where things get tangled.

Collective Memory Formation

Think of every piece of information circulating on social media as a package. It doesn't matter whether it is an image, a text, a video, an interactive app or some combination of them. After a package is produced, it will eventually be distributed – sometimes going viral and establishing a collective memory. The spreading mechanisms are quite complex.

First, the packages that reach you depend on your acquaintances. At this stage, the process I am describing still looks like a transport network, with traffic lights, controllers and jams – nothing too complicated, except that the underlying structure is a complex social network instead of streets. Once a package is delivered, it challenges the reader: is the headline strong enough? Will it convince me to click further and read the text? Can I trust its content? Now things start to get subjective.

Who delivered that package to me? Most likely, I will use the answer to this question to interpret what I am reading. Can I trust that person? (Was it that uncle of mine who only posts suspicious texts?) Trusting the person who recommended a text usually adds a layer of trust to the text itself. How much knowledge do I have about the topic? Which sentences are stated as facts? Some people will trust their gut feeling more than anything, falling prey to many of the brain's cognitive biases and misinterpretations – flat-earthers included. Others will try to use sophisticated criteria to assess the trustworthiness of the stated facts, without really getting rid of their biases. Are there really any true facts at all?

Repeat a lie often enough and it becomes the truth. It doesn't matter what is really true or false; virality determines what will become a collective memory. And what drives virality is emotion.

The Financial Fabric of our Short-Term Digital Memories

The economic powerhouse of the internet is the ad industry – which, in turn, feeds on people's attention. Once something goes viral, it draws attention to specific facts – "alternative" or not. What is in the spotlight? Cat memes, Kim Kardashian or the latest news in nuclear fusion? It depends on how the package is framed and on the marketing (psychological) strategies behind its aesthetics. The thing is, the viral stuff that will constitute tomorrow's collective memories triggers lots of clicks and drives whole economies, generating revenue – oftentimes for the fake-news industry. And what drives the economy also drives politics.

We have seen the vote-influencing machinery developed by Cambridge Analytica (a.k.a. behavioral microtargeting) radically change the way political campaigns operate. Political lies and promises were always part of the landscape, but the precision with which the "right" packages were delivered to specific groups of people to manipulate their opinions made it explicit how fragile our democracies are. Well-crafted and laser-targeted packages of fake news are misleading a considerable number of people. Automated bots detect and surf the ephemeral attention waves, generating revenue and spreading misinformation. How well-informed must a person be to choose their own representative and not somebody else's?

Microtargeting can only work because our private data and metadata have become a product for third-party enterprises. This means that someone else is using the value we generate as users to produce revenue. It also means that our data might be used against us if it ends up in our enemies' hands – enough money can buy all sorts of sensitive, private data.

Selling private data is the core of the toxic business models around the web, social media included. Facebook, Twitter, Google, Amazon and co. have several lock-in mechanisms that keep their users imprisoned and dependent, even when they want to leave. Scandal after scandal, their data silos and power keep growing, and their monopolies are not being curbed as they should be. How do we repair this mess?

A Light at the End of the Dystopian Tunnel

Let's zoom out to the big picture: the architectures of social media are shaping our social and political interactions to a great extent – and threatening very basic principles of democracy. Most of the content published on social media cannot be easily found or referenced, provoking a sort of collective memory loss or attention disorder. The current models upon which the giants are built are problematic for several reasons, and it is fundamental to diagnose where they fail in order to design the antidotes we need. How do we build a sustainable digital ecosystem?

We should be designing online spaces not to maximize growth or shareholder profits, but to allow a safe, diverse and open construction of collective knowledge. We need social media that foster non-violent communication, in which users have a deep sense of responsibility for what they say. We need transparent regulation mechanisms that give users full control of their content filters. We need to build a common linguistic ground so that we can keep track of history, even if the narratives differ and we politely agree to disagree. We also need mechanisms to collaboratively detect and flag false narratives as soon as they are produced.

We need decentralized technologies that empower users to use their data for their own benefit – micropayments included. We need private data to be portable, so that users can carry it with them wherever they decide to go. We need interoperable technologies that allow users to switch between services without getting locked inside specific platforms, letting the natural selection of market competition and cooperation operate instead of pretending the market is actually free. We need better-organized platforms on which to build our knowledge – with full content and meta-content search, precise citation and an intuitive interface.

Opening up knowledge is the most obvious way out of this systemic crisis. It involves making education inclusive and bringing knowledge paywalls down. But unlocking data silos and democratizing knowledge means decentralizing power – which always bothers the lobbying tech giants. Luckily, there are many purpose-driven projects, organizations, initiatives and digital activists contributing to the technological building blocks that will enable us to reconfigure how we organize ourselves as a society.

And that is what we are doing in association with WorldBrain. We are now using Memex – a technology developed by WorldBrain – as the infrastructure of a systemic model for collective knowledge organization, empowering independent journalists, content creators, researchers and fact-checkers in all the steps necessary for the construction of well-founded narratives. It is a fundamentally different way of organizing and processing information to fight misinformation: from the underlying economic and revenue models, through the way data and metadata are treated, all the way up to the ability to organize, classify, analyze and annotate content – individually or collectively.
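
As a purely illustrative sketch of what annotating content collectively could look like in data terms, consider the following hypothetical annotation record and shared collection. The field names and structure are assumptions made for the sake of the example and do not describe Memex's actual schema or API.

```python
# Hypothetical example of a collaborative annotation record; the fields are
# illustrative assumptions, not WorldBrain/Memex's real data model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    url: str                 # the annotated page
    quote: str               # the highlighted passage
    comment: str             # the annotator's note or fact-check verdict
    author: str              # who wrote the annotation
    tags: List[str] = field(default_factory=list)  # e.g. ["misleading", "source-needed"]

@dataclass
class SharedCollection:
    """Annotations that a group of fact-checkers curates and searches together."""
    annotations: List[Annotation] = field(default_factory=list)

    def add(self, annotation: Annotation) -> None:
        self.annotations.append(annotation)

    def by_tag(self, tag: str) -> List[Annotation]:
        return [a for a in self.annotations if tag in a.tags]

collection = SharedCollection()
collection.add(Annotation(
    url="https://example.org/article",
    quote="a claim made in the article",
    comment="Contradicted by the primary source linked here.",
    author="fact_checker_42",
    tags=["misleading"],
))
print(len(collection.by_tag("misleading")))  # 1
```

The design point is that annotations, not platforms, become the unit of collaboration: they can be searched, tagged and shared independently of where the annotated content lives.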

Remembering is the substance of knowledge. Strengthening our collective memories involves decoupling them from economic interests. Giving users ownership over their data by design must be the basis of innovative solutions that aim at sustainability. The challenges are obviously many, but a revolution has already started.

bruno_pace on May 27, 2019
Failing forward

How Twitter’s Policy stopped us from helping them

Today, discussions around social and political topics are so centered on social media that investigative journalists must track a huge amount of information there to detect misleading content. Social media analysis has become a fundamental piece of the fact-checking puzzle. For that reason, we spent a little more than a year developing a digital technology to equip everyone engaged in that fight with better tools to analyze the news.

Three days after we launched our closed beta came a very unpleasant surprise: Twitter blocked our access to its data, claiming (in an automated email) that we had violated paragraph II.B of the Developer Agreement and Policy and were therefore blocked. In the same email they wrote an extra sentence, not present in the above-mentioned agreement:

“Users of your service must specifically initiate and authorize requests made on their behalf (with authorization granted per-request, not as a generic grant of token).”

This extra sentence was where our "violation" took place. And it changes everything. It means that all our work to collect tokens was thrown away and that we would have to rewrite a big part of our code to respect Twitter's opaque policy. It also means that our tool would be so slow that it would hardly be of any use to fact-checkers, who need fast tools to keep up with the production of misleading content and debunk it in real time.
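
To illustrate the practical difference the quoted sentence makes, compare the two authorization patterns below. The function names are hypothetical placeholders and not Twitter's real API; the sketch only contrasts a generic grant of donated tokens with per-request authorization.

```python
# Hypothetical sketch of the two authorization patterns; function names and
# behavior are illustrative assumptions, not Twitter's actual API.

def fetch_followers(token: str, user_id: str) -> list:
    """Placeholder for one rate-limited API call made with a user token."""
    return []  # a real implementation would call the platform here

# The pattern we had relied on (now disallowed): a generic grant of donated tokens.
# Tokens sit in a pool and the backend fans out requests on its own,
# so a whole diffusion graph can be crawled quickly in the background.
def crawl_with_token_pool(token_pool: list, user_ids: list) -> dict:
    followers = {}
    for i, user_id in enumerate(user_ids):
        token = token_pool[i % len(token_pool)]  # reuse whichever donated token is next
        followers[user_id] = fetch_followers(token, user_id)
    return followers

# The pattern the policy requires: per-request authorization.
# Each call must be specifically initiated by the user who owns the token,
# so data arrives only as fast as individual users trigger requests.
def crawl_per_request(user_token: str, user_id: str) -> list:
    return fetch_followers(user_token, user_id)  # one user action, one request
```

This is why the policy change matters so much for speed: without a pool of generically granted tokens, the tool can no longer assemble a tracemap in the background fast enough for real-time fact-checking.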

Our initial approach was to contact Twitter - also not a trivial task. Despite our good connections to high-ranking people there, the only channel through which they actually answered was the commercial one (which took us a while to try). We followed all the necessary steps for enterprises to buy data, and they promptly set up a meeting in which they answered our pressing questions: the free Standard API (which we are using) was not designed to work for products; paying for the public data we need is necessary (and expensive, and not an option if you are an NGO); and Twitter is not willing to help us help them with their fake-news problem by facilitating access to their public data in any way.

But what doesn't kill you makes you stronger. We learned during the process that being restricted to the scope of Twitter is, in fact, quite a limitation. There are other social media (Facebook and WhatsApp included) on which misinformation circulates, and there are cross-platform phenomena that cannot be ignored in serious journalistic analysis. We also learned that depending on an API is complicated - for many reasons - and that getting rid of that dependency might be necessary. Finally, we learned that our technology is indeed helpful to journalists, but that tools enabling content analysis might be even more fundamental.

With all those lessons, we found a radically different approach to continue pursuing our mission. We started an important collaboration that will allow us to move forward with a model that, we believe, is more robust - allowing users to collaboratively analyze, organize and classify content all over the web! This is how our journey continues from now on. We are nevertheless adapting our code to keep TraceMap's tool available, whether for people analyzing tweet dynamics without time constraints or to show the world how limited access to Twitter's data is if you don't have the money.

bruno_pace on April 18, 2019
Road to impact

First version: complete!

We started TraceMap to tackle the complex challenge of informing our civil society, countering the mass production of fake news and other misleading forms of content online.

Our mission involves developing digital infrastructures that can empower fact-checkers, journalists, activists and knowledge producers to analyze the news and spread their understanding of the facts. To onboard these actors, we plan to develop workshops for community building and to increase our outreach.

In the first months of this funding period, we developed the first version of our tool (check it out!): an open-source application designed specifically for Twitter - the social network known for its political engagement. Rather than analyzing content, it displays maps that show how information spreads. These maps, together with a set of metrics and analytic tools, are designed to help our users make sense of the news they are reading.

bruno_pace on Feb. 7, 2019

Why this idea?

Collective fact-checking: Tackling the role of social media in the public sphere, this project develops a digital tool where the user community debates, decides and debunks fake news. It’s up to the users to investigate social media content together.

Team

Allegra Pochtler

Philipp Beyerlin

Eike

bruno_pace

Idea created on Jan. 19, 2018
Last edit on July 19, 2018
