Example: A hiring algorithm trained primarily on data
from male employees might unfairly evaluate female
candidates.
Discussion Question: How might we detect and correct bias in
AI systems?
The Challenge of Bias in AI Systems – Part 1
One of the most significant risks associated with AI is that bias can become
embedded in AI systems, leading to unfair or discriminatory outcomes.
What is Bias in AI?
Bias in AI occurs when the data used to train a system reflects
existing societal biases, causing the AI to perpetuate or even
amplify those biases in its predictions and decisions.
Example: If a hiring algorithm is trained on historical hiring
data that favors specific demographics, it may unintentionally
discriminate against qualified candidates from
underrepresented groups.
Example: Facial recognition systems trained on imbalanced
datasets may be far less accurate for some ethnic groups than
for others, leading to unfair outcomes.
Since AI systems learn from data, ensuring fairness, diversity,
and ethical oversight in AI development is critical for mitigating
bias.
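As a starting point for the Discussion Question above, one common detection check is to compare how often a model selects candidates from different groups. The sketch below uses made-up hiring decisions, hypothetical group names, and the widely cited "four-fifths" rule of thumb as an assumed threshold; it is an illustration, not a complete fairness audit.

decisions = [
    # (group, model_decision): 1 = recommended for interview, 0 = not recommended.
    # All records are invented for this example.
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rate(records, group):
    # Fraction of candidates in the given group that the model recommended.
    outcomes = [decision for g, decision in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(decisions, "group_a")
rate_b = selection_rate(decisions, "group_b")

# Disparate-impact ratio: the lower selection rate divided by the higher one.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rates: {rate_a:.2f} vs {rate_b:.2f} (ratio {ratio:.2f})")

if ratio < 0.8:  # the "four-fifths" rule of thumb, assumed here as a threshold
    print("Possible disparate impact -- the model's decisions need review.")

A ratio below the threshold does not prove discrimination, but it signals that the model's decisions deserve closer human review and that the training data should be re-examined.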