
Unveiling Bias in AI: The Impact of Prospect Theory and Steps Towards Fairness





It is widely recognized in the data and AI industry that traditional datasets inherently contain biases, and that these same datasets are used to train AI models. What captivated my interest was tracing the origins of these biases, which led me to Prospect Theory as a significant factor behind their presence in the data.


Prospect Theory

Prospect theory, a behavioral economics concept developed by Daniel Kahneman and Amos Tversky, posits that people evaluate potential outcomes as gains or losses relative to a reference point, often the status quo, and are asymmetrically sensitive to losses. It explains how individuals make decisions under uncertainty, emphasizing the effects of framing and loss aversion, which can influence stakeholder decision-making in many contexts. Because the way a scenario is presented can significantly sway a choice, the theory predicts that individuals tend to be risk-averse when outcomes are framed as gains but risk-seeking when facing potential losses, shaped by their subjective weighting of probabilities.
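The asymmetry can be made concrete with Tversky and Kahneman's value function, which is concave for gains and convex but steeper for losses. A minimal sketch, using their 1992 median parameter estimates (α = β ≈ 0.88 for diminishing sensitivity, λ ≈ 2.25 for loss aversion):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: outcomes are valued relative
    to a reference point (x = 0), concavely for gains and convexly,
    with extra weight lam, for losses."""
    if x >= 0:
        return x ** alpha            # diminishing sensitivity to gains
    return -lam * (-x) ** beta       # losses loom larger than gains

# A $100 loss feels roughly 2.25x as intense as a $100 gain:
gain = prospect_value(100)    # ~57.5
loss = prospect_value(-100)   # ~-129.5
```

The λ > 1 coefficient is what produces the asymmetry described above: a loss is felt more strongly than a gain of the same size, so how an outcome is framed changes how it is valued.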


Tangible Example

Consider a decision maker faced with two presentations of the same revenue numbers. The first advisor highlights that revenue has maintained average growth of 10% over the last three years. The second advisor notes that although revenue growth has been above average over the past decade, it has slowed in the last three years. According to prospect theory, even though both advisors are presenting the same figures, the decision maker is more likely to favor the first presentation. This preference arises because the first advisor frames the numbers purely as gains, whereas the second advisor's framing highlights a recent loss alongside the high revenues.


Importance

Decision makers benefit from a comprehensive understanding of their cognitive biases, particularly the fact that losses tend to elicit a stronger emotional response than equivalent gains. Prospect theory offers insight into the psychological mechanisms underlying their decision-making processes. By recognizing and navigating these biases, decision makers can make more informed and nuanced choices across contexts, contributing to more effective and rational decision-making strategies.


Impact

Historically, behavioral factors have played a crucial role in shaping laws, formulating policies, disbursing loans, and offering opportunities, especially in the absence of data-driven decision-making. Even with the advent of data-driven techniques, these behavioral aspects continue to exert influence. Consequently, the existing datasets inherently reflect biases stemming from historical decisions. Now, as we train our latest AI models using this data, it raises concerns about perpetuating and potentially amplifying these biases within the technology.


Actions

To mitigate biases in the data used to train AI models, prioritize diverse data collection and conduct bias audits. Use data pre-processing techniques to achieve balanced representation. Employ feature engineering that emphasizes interpretability and transparency. Continuously monitor the model's real-world performance, encourage user feedback, and educate the team on ethical considerations. Emphasizing interpretability ensures a clearer understanding of model decisions, which helps in identifying and addressing biases effectively.
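One concrete pre-processing technique for balanced representation is reweighing, which assigns each training sample a weight so that group membership and outcome label become statistically independent in the weighted data. A minimal sketch of a Kamiran-Calders-style scheme (the function name and NumPy interface are illustrative, not from a specific library):

```python
import numpy as np

def reweighing_weights(groups, labels):
    """Weight each sample by P(group) * P(label) / P(group, label),
    so the weighted data shows no association between group membership
    and outcome label."""
    groups, labels = np.asarray(groups), np.asarray(labels)
    weights = np.ones(len(groups), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            if mask.any():
                expected = (groups == g).mean() * (labels == y).mean()
                observed = mask.mean()
                weights[mask] = expected / observed
    return weights

# Group "A" receives the positive label more often than group "B":
w = reweighing_weights(["A", "A", "A", "A", "B", "B"],
                       [1, 1, 1, 0, 1, 0])
# Over-represented combinations get weights below 1,
# under-represented combinations get weights above 1.
```

After reweighing, both groups have the same weighted positive-label rate, so a learner trained with these sample weights (e.g. via a `sample_weight` argument) no longer sees the historical group-label correlation.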

Apply relevant fairness metrics to training data to assess and quantify biases.

Some common fairness metrics include:

  • Disparate Impact (DI): Measures the ratio of the positive outcome rate for the disadvantaged group to that of the advantaged group. A DI close to 1 indicates fairness; values below 0.8 are commonly flagged as adverse impact.

  • Statistical Parity Difference (SPD): Compares the difference in the positive outcome rates between different groups. A value of 0 indicates perfect fairness.

  • Equalized Odds (EO): Examines whether both true positive rates and false positive rates are balanced across groups, ensuring the odds of being correctly (and incorrectly) classified as positive are equal for all groups.

  • Calibration: Assesses the agreement between predicted probabilities and observed outcomes. A well-calibrated model provides accurate probability estimates across different groups.

  • Confusion Matrix Disparities: Analyzes differences in metrics like accuracy, precision, recall, and F1 score across different groups.

  • Treatment Equality: Compares the ratio of false negatives to false positives across groups. Equal error ratios indicate fair treatment.

  • Predictive Parity: Compares precision (positive predictive value) across groups: among individuals predicted positive, the fraction that is actually positive should be similar. Balanced predictive parity indicates fairness.

  • Fairness-aware ROC Curve: Analyzes the Receiver Operating Characteristic (ROC) curve for disparate impact across different groups.

  • Mean Difference: Measures the average difference in outcomes between groups.

  • Generalized Entropy Index (GEI): Quantifies inequality in how predicted outcomes (benefits) are distributed across individuals and groups.

When applying fairness metrics, it's crucial to consider the specific context, goals, and potential impact on different groups. These metrics help identify and address biases in training data, contributing to the development of fair and ethical AI models.
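Several of the metrics above reduce to simple arithmetic on predictions grouped by a protected attribute. A minimal sketch of Disparate Impact, Statistical Parity Difference, and an equalized-odds gap (the function names and binary-group interface are illustrative):

```python
import numpy as np

def _pos_rate(y, mask):
    """Fraction of positive predictions within the masked subgroup."""
    return np.asarray(y)[mask].mean()

def disparate_impact(y_pred, groups, privileged):
    """Ratio of the disadvantaged group's positive rate to the
    advantaged group's; 1.0 is fair, below 0.8 is often flagged."""
    g = np.asarray(groups)
    return _pos_rate(y_pred, g != privileged) / _pos_rate(y_pred, g == privileged)

def statistical_parity_difference(y_pred, groups, privileged):
    """Difference in positive rates between groups; 0 is fair."""
    g = np.asarray(groups)
    return _pos_rate(y_pred, g != privileged) - _pos_rate(y_pred, g == privileged)

def equalized_odds_gap(y_true, y_pred, groups, privileged):
    """Largest gap in true-positive or false-positive rate across groups."""
    y_true, y_pred, g = map(np.asarray, (y_true, y_pred, groups))
    gaps = []
    for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
        rate = lambda mask: y_pred[(y_true == label) & mask].mean()
        gaps.append(abs(rate(g != privileged) - rate(g == privileged)))
    return max(gaps)

# Toy data: group "A" is approved 75% of the time, group "B" only 25%.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(y_pred, groups, privileged="A"))               # 0.333...
print(statistical_parity_difference(y_pred, groups, privileged="A"))  # -0.5
```

A DI of 0.33 falls well below the 0.8 threshold, and the SPD of -0.5 confirms the direction of the disparity: the disadvantaged group receives far fewer positive outcomes.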


Conclusion

The exploration into the intrinsic biases within traditional datasets used for training AI models unveils the influential role of Prospect Theory. As we uncover the asymmetrical sensitivity to gains and losses, particularly in the context of decision makers and revenue presentations, it becomes apparent how these biases permeate into our AI technologies. Recognizing the importance and impact of these biases, there is a call to action. Mitigating biases in AI data involves a multifaceted approach—prioritizing diverse data collection, conducting bias audits, and emphasizing interpretability. By applying relevant fairness metrics and continuous monitoring, we pave the way for more ethical, transparent, and unbiased AI models, ultimately fostering effective and rational decision-making strategies in the evolving landscape of data and artificial intelligence.









 
 
 