Wrangling at Mozilla Festival 2024

In February, I was selected for this year's cohort of Wranglers for Mozilla Festival House 2024 in Amsterdam.




April 16, 2024

Since then, a lot has happened. I stumbled into the team by attending an online workshop on "Speculative F(r)iction in Generative AI", held by Bobi Rakova, in which we explored possible futures of AI in the health sector using a speculative everything approach. Completely new to the field of futures studies, I was intrigued by the positive atmosphere in the group and the playful way it let us think about possible futures, our futures. Don't get me wrong: there are plenty of harms and risks in the way we currently use "Artificial Intelligence", but instead of focusing only on what will go wrong, this speculative approach allowed us to also think about what could go right and how we could get there.

Mozilla Festival seemed like a place where like-minded people would gather: people who also grew up with the Internet, or who at least value the Internet and its health the same way I do. So I thought I'd give it a try, applied for the Wrangler role, and was, to my surprise, accepted. Since the Mozilla Foundation, the non-profit organization behind the Mozilla Corporation and Mozilla Festival itself, put its focus on Trustworthy AI with its 2020 white paper on the topic, that is also the overarching theme for 2024. This year's particular focus, however, is on solidarity and togetherness, reflecting Mozilla's community-driven ethos. I am now part of the festival curation team, co-creating the narrative of MozFest under the Fair & Inclusive AI theme. We started off by brainstorming pressing issues in the current field of AI and quickly identified bias and fairness as one of the key topics.

Main thoughts

AI technology is the product of human decisions and behavior, and bias in AI is likewise the result of human bias, whether in the people building the systems or in the data they collect. AI is not aware of what it sees; it merely replicates patterns in the data, and in doing so it also replicates bias. So, how do we prevent this unwanted propagation of bias?

Technological solutions on the algorithmic layer might be one answer, but we know that unless the right algorithm and the right training data have been chosen to train an AI model, it will inevitably still be biased in one way or another. Since choosing the right data and algorithms is again human decision-making, we need to start there.
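To make this concrete, here is a minimal sketch, using synthetic data and scikit-learn, of how a model trained on biased historical decisions simply reproduces them. The scenario, numbers, and variable names are hypothetical illustrations, not a claim about any real system.

```python
# Minimal sketch: a model trained on biased data replicates that bias.
# The synthetic data and all names are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "group" is a protected attribute; "skill" is the genuinely relevant feature.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Biased historical labels: past decisions favored group 1,
# so the label depends on group membership, not only on skill.
hired = (skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two equally skilled candidates, differing only in group membership:
print(model.predict_proba([[0, 0.0]])[0, 1])  # group 0
print(model.predict_proba([[1, 0.0]])[0, 1])  # group 1: noticeably higher
```

The model has learned nothing "wrong" in a statistical sense; it has faithfully replicated the pattern in the data, bias included, which is exactly the problem.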

“It may not be possible to have what some call an “unbiased brain”, but it is possible to bring diverse brains to the table to help prevent it.”

— Christian Thilmany, Director, AI Strategy, Microsoft (Source)

Diversity

Our goal is to bring together people from different backgrounds across Europe to collaborate on AI solutions that are ethical, inclusive, and fair. We believe that diversity is required during the whole process: from the idea, to design and development, to deployment and post-launch monitoring.

Part of this, of course, also demands technical solutions: how can datasets be gathered from diverse, decentralized sources, for example?

“No single institution is fully representative of our diverse population and practice patterns, so AI models that are trained on single institutional data may replicate biases that are present in that source data. Access to diverse and representative data is a foundational step in efforts to reduce bias in AI, and federated learning has the capacity to reduce the barriers to building broad, diverse datasets across institutions while preserving privacy and security.”

— Accelerating artificial intelligence: How federated learning can protect privacy, facilitate collaboration, and improve outcomes (Source)
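To make the federated learning idea from this quote slightly more concrete, here is a minimal sketch of federated averaging (FedAvg, the canonical federated learning algorithm): each institution trains on its own private data, and only model weights, never raw records, are shared and averaged. Everything here, the plain linear model, the data, the three "institutions", is a hypothetical illustration, not a production protocol.

```python
# Minimal sketch of federated averaging (FedAvg): each institution trains
# locally on its own data and only shares model weights, never raw records.
import numpy as np

rng = np.random.default_rng(42)

def local_update(weights, X, y, lr=0.1, steps=10):
    """Run a few gradient-descent steps on one institution's private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Three "institutions", each holding its own private dataset.
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    datasets.append((X, y))

# Federated rounds: broadcast global weights, train locally, average updates.
global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in datasets]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # approaches true_w without raw data ever leaving a site
```

Real deployments add secure aggregation, differential privacy, and handling of non-identically distributed data on top, but the core loop really is this simple.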

However, a major part of this is also about the people who are involved in creating AI.

Transparency

Another main area we see here is Transparency: not just in the ideation or design phase, but also when it comes to funding. Where do the funds come from, who designs the AI system, where does the data come from, which people are impacted, who owns it …?

Transparency also applies to the people who use the outputs of AI systems. With deep learning models in particular, it can be very difficult to determine why a given prediction was made. How can humans determine whether the data used in the model is correct and representative? A big research area here is Explainability and Interpretability in ML/AI.
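As a small taste of that research area, here is a minimal sketch of one of the most basic interpretability techniques, permutation feature importance, using scikit-learn; the dataset and model are placeholders chosen purely for illustration.

```python
# Minimal sketch of a basic interpretability technique: permutation feature
# importance. Shuffling one feature and measuring how much the model's score
# drops hints at how much the model relies on that feature.
# Dataset and model here are placeholders for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Importance = drop in accuracy when one feature's values are shuffled.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model leans on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: -t[1])
for name, mean in ranked[:5]:
    print(f"{name}: {mean:.3f}")
```

Techniques like this don't fully answer "why did the model decide this?", but they are a first step toward making model behavior inspectable by the people affected by it.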

What's next?

We are still in the early stages of planning the festival, but I am excited to see where this journey will take us. So keep an eye on the Mozilla Festival website and connect with me on LinkedIn for updates on the Fair & Inclusive AI track.



