Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse
Problems of bias and fairness are central to data justice, as they speak directly to the threat that ‘big data’ and algorithmic decision-making may worsen already existing injustices. In the United...
The Siren Song of Technological Remedies for Social Problems: Defining, Demarcating, and Evaluating Techno-Fixes and Techno-Solutionism
Can technology resolve social problems by reducing them to engineering challenges? In the 1960s, Alvin Weinberg answered yes, popularizing the term “techno-fix”
The Bumpy Road Toward Global AI Governance | NOEMA
Overheated rhetoric about a U.S.-China AI arms race need not distract us from common ground on how advanced technology can be regulated across cultural and national boundaries.
I’ve been struggling to articulate this idea, and maybe the problem is that it’s actually kind of simple: once you put the thought on paper, there’s really no good reason to unpack a whole case for it.
How AI Cheapens Design (At a Great Ecological Cost)
It doesn’t come as much of a surprise to learn that, environmentally speaking, AI is an extremely wasteful and destructive technology. It’s rare that you can nail down the ecological cost of an image. Still, researchers have assessed that generating a single AI image requires the same amount of energy needed to fully charge your iPhone […]
Seeing Like a Data Structure | Belfer Center for Science and International Affairs
Our data-centric way of seeing the world isn't serving us well. Barath Raghavan and Bruce Schneier argue that we need new socio-technical systems that leave room for the inherent messiness of reality.
🦜Stochastic Parrots Day Reading List🦜 On March 17, 2023, Stochastic Parrots Day, organized by T Gebru, M Mitchell, and E Bender and hosted by the Distributed AI Research Institute (DAIR), was held online to commemorate the 2nd anniversary of the paper’s publication. Below are the readings which po...
A New York Times Book Review Editors' Choice. "In Daub’s hands the founding concepts of Silicon Valley don’t make money; they fall apart." --The New York T...
Uncanny Valley: A Memoir, Anna Wiener | 9781250785695 | bol
Uncanny Valley: A Memoir (Paperback). A New York Times bestseller and one of The New York Times's 10 Best Books of 2020. Named one of the Best Books of 2020...
Waag | Dutch population sets priorities for AI research agenda
Survey of Dutch opinions on AI: 58% of the Dutch population considers the theme "Fake news, fake photos and polarization" crucial when it comes to the development of Artificial Intelligence (AI) and research into it.
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday
The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.
AI Nationalism(s): Global Industrial Policy Approaches to AI
Our latest report diagnoses concentration of power in the tech industry as a pressing challenge – and points the path forward to seize this moment of change.
Ecosystem - Future Art Ecosystems 4: Art x Public AI
Future Art Ecosystems 4: Art x Public AI provides analyses, concepts and strategies for responding to the transformations of AI systems on culture and society.