Procedural justice can address generative AI’s trust/legitimacy problem



Tracey Meares

Tracey Meares is the Walton Hale Hamilton Professor and Faculty Director of the Justice Collaboratory at Yale Law School.


Sudhir Venkatesh

Sudhir Venkatesh is William B. Ransford Professor of Sociology at Columbia University, where he directs the SIGNAL tech lab. He previously directed Integrity Research at Facebook and built out Twitter’s first Social Science Innovation Team.


Matt Katsaros

Matt Katsaros is the Director of the Social Media Governance Initiative at the Justice Collaboratory at Yale Law School and a former researcher with Twitter and Facebook on online governance.


The much-touted arrival of generative AI has reignited a familiar debate about trust and safety: Can tech executives be trusted to keep society’s best interests at heart?
Because its training data is created by humans, AI is inherently prone to bias and therefore subject to our own imperfect, emotionally driven ways of seeing the world. We know the risks too well, from reinforcing discrimination and racial inequities to promoting polarization.
OpenAI CEO Sam Altman has requested our “patience and good faith” as the company works to “get it right.”
For decades, we’ve patiently placed our faith in tech execs at our peril: They created it, so we believed them when they said they could fix it. Trust in tech co …