AI is Deciding for Me – But Who's Watching AI?
In a world where AI is increasingly integrated into our daily lives, it is imperative that we understand its use cases and implications, and recognize our role in shaping its ethical development.
3/24/2025 | 5 min read
A Citizen's Perspective on Fairness, Transparency, and AI's Hidden Influence
Artificial Intelligence (AI) has been quietly influencing our lives for decades, often without our conscious awareness. However, recent advancements, particularly in Generative AI, have thrust AI into the spotlight, marking a pivotal moment reminiscent of the iPhone's transformative impact.
AI is fundamentally changing how we live – faster and more invisibly than ever before. It's not just in labs or sci-fi movies anymore. It's in our pockets, on our screens, influencing what we see, the choices we make, and the opportunities available to us.
Internet searches are filtered by AI, prioritizing the information we see first.
Social media algorithms shape our worldview, reinforcing certain opinions while hiding others, creating Social Media Filter Bubbles.
AI hiring tools decide if our job applications are viewed.
Banking AI is affecting whether we get a loan, a mortgage, or financial support.
Healthcare AI helps diagnose diseases – but it might not work equally well for all patients.
AI does not merely aid decision-making; it increasingly makes decisions on our behalf. Worse, these decisions often happen without our awareness, and without transparency, accountability, or public oversight.
So, the real question is: Who's watching AI?
Why It Matters:
Fairness in Decision-Making
AI systems are now an integral part of high-stakes areas of our lives such as hiring, lending, law enforcement, and healthcare. However, the historical data used to train these systems is often skewed, leading to inherent biases.
Facial Recognition Errors: Studies have shown that facial recognition software often misidentifies individuals from certain racial groups, leading to false arrests and other grave consequences. (AIMultiple)
Job Recommendation Disparities: Research shows that AI-driven job recommendation algorithms can favor certain racial groups over others, perpetuating existing inequalities in employment opportunities. (AIMultiple)
Job Screening Discrimination: Amazon's AI hiring tool was scrapped after it was found to favor male applicants over female ones because it was trained on past hiring patterns that were already biased. (Reuters)
Healthcare Bias: An AI tool designed to detect skin cancer performed significantly worse for patients with darker skin tones because its training data mostly came from lighter-skinned individuals. This means people of color are less likely to receive correct diagnoses. (Prolific)
Soap Dispenser Issue: An AI-powered soap dispenser was unable to detect darker skin tones, failing to dispense soap. Although subtle, this example shows how bias can be built into everyday technologies. (Policy Options)
Transparency and Accountability
AI’s biggest problem? It’s a black box.
The complex nature of AI algorithms makes it challenging to understand their decision-making processes and difficult to address potentially biased or unfair outcomes.
AI Image Generation Bias: In 2024, Google's Gemini chatbot faced backlash after generating racially inappropriate images, highlighting the need for greater oversight and ethical considerations in AI development. (The Wall Street Journal)
AI in Immigration Screening: The UK’s Home Office uses AI to prioritize immigration cases, but critics say the system favors certain nationalities while fast-tracking others, leading to unfair outcomes. (The Guardian)
Facial Recognition & Surveillance: Many cities—including some in Canada—use AI-powered surveillance. But studies show these systems misidentify people of color more often, leading to higher rates of wrongful identification and profiling. (Knight Columbia)
AI in Search Results: AI algorithms decide what information you see first. In certain scenarios, search engines have been found to amplify racial and gender biases, linking certain groups to negative stereotypes.
Without transparency, how can we trust AI to make fair decisions?
Personal Impact: AI is Making Decisions About You
AI isn't just about data; it's about humans and the society we live in. Bias or flaws in AI have real consequences for real people.
AI Amplifying Misinformation: AI applications like X's Grok have been linked to an increase in online racist abuse, showing how AI can be misused to amplify harmful content. (The Guardian)
Housing Discrimination: An AI-driven tenant screening system wrongfully denied housing to renters based on biased data. A class-action lawsuit forced the company to pay over $2.2 million in settlements. (AP News)
AI in Credit Scoring: AI systems have assigned lower credit scores to minority applicants, even when their financial histories were similar to those of white applicants, deepening financial inequality. (Arxiv)
AI in the Workplace: Some companies use AI to monitor employee productivity, tracking mouse movements, keystrokes, and webcam activity. But who decides what counts as "productive"? (The Australian)
Social Equity
AI is more than just a tool; it's a decision-maker. Used to enhance our strengths, it can advance humanity, but left unchecked, it could reinforce inequality, limit opportunities, and erode privacy.
Addressing bias in AI is crucial for promoting inclusivity:
AI in Beauty Standards: The rise of AI-generated models and influencers has sparked debates about reinforcing unattainable beauty standards, potentially affecting self-perception and perpetuating biases. (Vogue)
Echo Chambers & Misinformation: AI-powered algorithms prioritize content that keeps users engaged, which often means amplifying divisive, sensational, or misleading content. This has contributed to political polarization and the spread of misinformation. (The Guardian)
Final Thoughts: Embracing Our Role in AI's Evolution
AI is experiencing a transformative moment, akin to the advent of the iPhone, rapidly integrating into many aspects of our lives. This evolution presents both opportunities and challenges. By staying informed, actively taking part in public consultations, and advocating for fairness and transparency, we can ensure that AI development aligns with societal values and promotes equity for all. Our collective actions today will shape the ethical landscape of AI for future generations.
How You Can Participate
1. Engage in Public Consultations:
The Canadian government has introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, aiming to regulate AI development and use. Public participation is crucial to ensure the legislation addresses societal concerns. Have your say before it's too late. (Canada.ca)
2. Join or Support Advocacy Groups
Organizations like the Montreal AI Ethics Institute and GlobalNARI are actively working to ensure AI is ethical and fair. Supporting such groups can amplify efforts toward responsible AI development. (Montreal AI Ethics Institute)
3. Stay Informed
Educate yourself on how AI affects your life. The "Learning Together for Responsible AI" initiative offers resources to improve AI literacy. (ISED Canada)
4. Talk to Your Representatives
Reach out to your MPs, MPPs, and other lawmakers, and let them know that AI bias isn't just a tech issue – it's a social issue. Advocate for fairness and transparency in AI policies.
5. Leverage Digital Platforms
Use social media, blogs, and public forums to raise awareness about AI’s impact. Share articles and information about AI bias on social media. Your participation can influence public discourse and policymaking.
© 2025. All rights reserved.