Bias in AI and Machine Learning
How training data and models reinforce inequalities.
Pinned · Approved in Bias in AI and Machine Learning

SUMMARY - Bias in AI and Machine Learning

A language model trained on decades of internet text associates certain names with criminality, certain professions with specific genders, and certain neighborhoods with poverty, reproducing stereotypes absorbed from its training corpus. An image recognition system labels a photo of a Black man holding a phone as threatening while labeling an identical pose by a white man as neutral, having learned associations between race and danger from millions of captioned images.
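These associations arise from simple corpus statistics: if a profession co-occurs far more often with one gendered pronoun than another in the training text, a model trained on that text will reproduce the skew. A minimal sketch of the mechanism, using a hypothetical five-sentence toy corpus and an illustrative counting function (not any particular model's training procedure):

```python
from collections import Counter

# Toy corpus standing in for scraped web text; the skew below is deliberate.
corpus = [
    "the nurse said she would help",
    "the engineer said he fixed it",
    "the engineer said he was late",
    "the nurse said she was tired",
    "the engineer said she wrote the spec",
]

def pronoun_counts(profession):
    """Count gendered pronouns in sentences mentioning a profession."""
    counts = Counter()
    for sentence in corpus:
        if profession in sentence.split():
            for token in sentence.split():
                if token in ("he", "she"):
                    counts[token] += 1
    return counts

print(pronoun_counts("engineer"))  # Counter({'he': 2, 'she': 1})
print(pronoun_counts("nurse"))     # Counter({'she': 2})
```

A model minimizing prediction error on such a corpus has every incentive to internalize these skewed co-occurrence rates, which is how stereotypes in the data become stereotypes in the model.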

Alberta

RIPPLE

This thread documents how changes to Bias in AI and Machine Learning may affect other areas of Canadian civic life. Share your knowledge: What happens downstream when this topic changes? What industries, communities, services, or systems feel the impact?

Guidelines:
- Describe indirect or non-obvious connections
- Explain the causal chain (A leads to B because...)
- Real-world examples strengthen your contribution

Comments are ranked by community votes. Well-supported causal relationships inform our simulation and planning tools.