Algorithmic Bias and the Ethical Implications of AI in Decision-Making Systems: Navigating the Tension Between Technological Innovation and Social Justice


Parshv Patel

7/28/2025

Abstract

This report examines the ethical challenges raised by artificial intelligence (AI) in data science, particularly how biased algorithms can lead to discrimination in high-stakes areas like hiring, criminal justice, and finance. It explores the dangers of treating AI as neutral when, in reality, it often reflects and amplifies human and historical biases. Further, the report includes research into algorithmic bias through detailed case studies, as well as creative explorations through rhetorical poetry and dialogue that together provide emotional and personal perspectives on algorithmic injustice and highlight the human consequences of biased AI decision-making. By blending academic analysis with poetic and narrative lenses, the report gives the reader a deeper understanding of how ethical flaws in AI impact real lives and amplify historical inequalities while being perceived as objective. Overall, it argues that while AI has the power to transform society, it must be developed and used with transparency, fairness, and human oversight to prevent it from reinforcing systemic inequalities.

Introduction

Mechanical eyes scanning resumes, an AI hiring tool instantly rejects a candidate, not for lack of skill, but because their name “sounds female.” Imagine a world where algorithms decide who gets a job, who qualifies for a mortgage, and even who is more likely to commit a crime, all without human intervention. Now, imagine that these decisions are filled with bias that disproportionately harms marginalized communities. These are not dystopian nightmares. These are real consequences of artificial intelligence (AI)-powered decision-making in data science today. AI systems are being integrated and utilized rapidly in high-stakes areas such as criminal justice, hiring, and finance. Yet, beneath their promise lie serious ethical issues. While AI has revolutionized data science by increasing efficiency and objectivity, it has created ethical vulnerabilities and poses a significant risk of perpetuating and amplifying societal bias.

Consequently, AI systems’ reliance on biased and flawed training data and their lack of accountability have produced discriminatory outcomes in key decision-making areas. Rather than eliminating bias, AI often amplifies and reinforces historical inequalities while being seen as neutral and objective. Cases like the COMPAS recidivism algorithm, Amazon’s hiring tool, and biased mortgage lending reveal how algorithmic bias reinforces existing inequalities (Larson et al. sec. 2). To ensure that AI serves as a tool for equity rather than oppression, ethical frameworks must prioritize transparency through explainable AI, diverse and representative datasets, and human oversight (“Data Ethics in AI: 6 Key Principles for Responsible Machine Learning” par. 5). Achieving this balance requires resolving the tension between technological innovation and ethical responsibility while addressing the limitations of current governance structures in keeping pace with AI’s rapid evolution, before its invisible roots deepen the very inequalities it was meant to solve. Ensuring fairness in AI decision-making is critical for creating responsible and trustworthy technology.

Data Science and AI: Transforming the Digital World

The rise of data science and artificial intelligence has profoundly reshaped industries, driving innovation, efficiency, and automation. In today’s digital world, businesses and governments rely on AI to make decisions, predict trends, and optimize resources and actions. AI is capable of processing vast amounts of data faster and more accurately than humans, making it an essential tool across various sectors. As Washington State University notes, “It would appear that we are now entering the fourth industrial revolution, which is driven by AI, robotics, quantum computing, and other innovations. This Fourth Industrial Revolution promises to disrupt our world once again” (“The Fourth Industrial Revolution” par. 1). However, with this rapid advancement comes a growing concern about ethical implications, such as data privacy, algorithmic bias, and accountability (IBM par. 4). As IBM states, “AI ethics is a multidisciplinary field that studies how to optimize the beneficial impact of artificial intelligence while reducing risks and adverse outcomes” (IBM par. 5). These concerns highlight the importance of understanding how AI is transforming industries and why ethical considerations must evolve alongside technological advancements.

Furthermore, one of the most significant ways data science and AI have revolutionized industries is through their ability to analyze and extract meaningful insights from massive datasets. According to UC Berkeley’s School of Information, “The ability to take data—to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it—that’s going to be a hugely important skill in the next decades” (Varian par. 4). Companies utilize AI to improve customer experiences, streamline operations, and make data-driven decisions. For example, AI-powered recommendation systems personalize content on platforms such as Netflix and Amazon, whereas financial companies use machine learning to detect fraudulent transactions in real time (IBM par. 5). The growing reliance on AI demonstrates its potential to reshape industries as part of the fourth industrial revolution, but it also raises ethical concerns about transparency and fairness in decision-making.

Indeed, AI’s impact is evident in critical fields such as healthcare, finance, and education. In healthcare, AI-driven predictive models assist in diagnosing diseases, personalizing treatments, and even forecasting disease outcomes (IBM par. 7). IBM showcases an example of AI’s role in medicine, where a platform analyzes medical records to categorize patients based on their risk of stroke and predicts the success of treatment plans (par. 8). In finance, AI enables algorithmic trading, fraud detection, and risk assessment, improving both security and efficiency (UChicago par. 2). Meanwhile, in education, AI-driven tools personalize learning experiences, helping students grasp concepts at their own pace (IBM par. 6). While these advancements offer immense benefits and efficiency, they also introduce ethical issues. Biased training data can lead to unfair medical diagnoses, financial discrimination, or inequalities in educational resources, underscoring the need for ethical oversight in AI development.

Despite AI’s transformative potential, its unchecked growth raises concerns that cannot be ignored. The increasing use of AI in decision-making demands greater transparency and accountability from companies and policymakers. As IBM notes, “Lack of diligence in this area can result in reputational, regulatory, and legal exposure, resulting in costly penalties” (par. 10). While AI continues to revolutionize industries, addressing its ethical challenges is crucial to ensuring it remains a force for good rather than a tool for exploitation. The balance between innovation and ethical responsibility will define the future of AI in our digital world (Rahwan et al. 485).

The Ethical Challenges of AI and Data Science

Significantly, the increasing use of AI in data science has exposed serious ethical concerns, particularly regarding privacy, surveillance, and bias in decision-making. AI systems often rely on vast amounts of personal data, raising issues about how that data is collected, stored, and used (IABAC® sec. 5). While companies claim AI enhances objectivity, the reality is that many models operate as “black boxes,” making it difficult to understand how decisions are made (IABAC® sec. 6). The lack of transparency in AI decision-making processes means that users often do not know why they were denied a loan, flagged for fraud, or rejected from a job. This secrecy weakens public trust and raises concerns about accountability, especially when AI is used in high-stakes areas such as law enforcement and healthcare. As Nick Bilton, a tech columnist from The New York Times, warns, “Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease” (Gesikowski par. 5). Similarly, without clear guidelines on data ethics and accountability, AI has the potential to reinforce systemic biases and violate fundamental rights, making it crucial to address these concerns now (IBM par. 13).

Additionally, one of the most pressing ethical issues in AI is its impact on privacy and surveillance. AI-driven data collection has become widespread, with companies and governments using AI to track user behavior, monitor online activity, and even predict future actions. This level of surveillance raises significant privacy concerns, especially when sensitive data such as health records, financial information, or biometric data is involved (IABAC® sec. 9). In many cases, AI collects data without explicit consent or repurposes existing data for new applications without informing users (IABAC® sec. 10). For instance, a healthcare AI system initially used for diagnosing illnesses may later expand to predicting mental health conditions without obtaining new consent from patients, making the expansion an ethical violation (“Data Ethics in AI: 6 Key Principles for Responsible Machine Learning” par. 7). The European Union’s General Data Protection Regulation (GDPR) has attempted to address some of these issues by enforcing strict data protection laws, but many countries lack similar regulations, leaving users vulnerable to exploitation (IABAC® sec. 11).

Beyond privacy, AI also plays a significant role in spreading misinformation and amplifying biases. Many AI algorithms, particularly those used in content recommendation and news aggregation, prioritize engagement over accuracy (IBM par. 15). This has led to the widespread sharing of fake news and propaganda, influencing public opinion and even election outcomes (IBM par. 16). AI models trained on biased datasets reinforce those biases, leading to discriminatory outcomes in hiring, policing, and lending practices. For example, facial recognition software has been shown to misidentify individuals from minority groups at significantly higher rates than white individuals due to biased training data (“Data Ethics in AI: 6 Key Principles for Responsible Machine Learning” par. 8). Furthermore, AI models used in hiring processes often favor male candidates over equally qualified female applicants because they are trained on historical hiring data that reflects past discrimination. These examples demonstrate how AI, rather than eliminating human bias, often amplifies and validates existing inequalities, raising urgent ethical concerns.

The lack of accountability in AI decision-making further complicates these ethical dilemmas. Many AI models operate in ways that even their creators do not fully understand. The “black box” nature of machine learning makes it difficult to determine how decisions are reached, making it nearly impossible to hold anyone accountable when AI systems cause harm (IABAC® sec. 14). If an AI-driven hiring tool discriminates against certain applicants or an autonomous vehicle makes a fatal error, who is responsible? The company that developed the AI? The engineers who trained it? The users who deployed it? These unanswered questions highlight the need for clearer accountability frameworks in AI governance. Without them, companies can deflect blame onto AI, avoiding responsibility for unethical decisions. This lack of accountability has real-world consequences, as seen in cases where AI-driven sentencing algorithms in the criminal justice system disproportionately assign harsher sentences to minority defendants (Larson et al. sec. 4). As AI continues to be integrated into critical aspects of life, ensuring transparency and accountability is more important than ever (IBM par. 18).

Addressing the ethical challenges of AI and data science is no longer optional. It is an urgent necessity. The rapid pace of AI innovation has far outstripped regulation, leaving society to deal with the consequences of biased decision-making, mass surveillance, and misinformation (IABAC® sec. 15). Ethical AI development requires a commitment to fairness, transparency, and user protection. This means adopting measures such as explainable AI models, stronger data privacy laws, and diverse training datasets to prevent bias. Companies and policymakers must take immediate steps to establish ethical guidelines that ensure AI serves humanity rather than magnifies existing inequalities. If these issues remain unaddressed, the risks of AI will outweigh its benefits, making it a threat rather than a tool for progress.
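
To make the idea of an “explainable AI model” concrete, the short sketch below shows how a transparent linear score can tell an applicant which factors drove a decision, something a black-box model cannot do without extra interpretability tooling. The feature names and weights are hypothetical and not drawn from any system cited in this report.

```python
# A minimal sketch of per-decision explainability (hypothetical feature names
# and weights, not from any cited system). A transparent linear score lets the
# system report which factors pushed an applicant's score up or down.

weights = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
applicant = {"income": 0.4, "debt_ratio": 0.9, "years_employed": 0.2}

# Contribution of each factor = weight * applicant value.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())

print(f"Overall score: {score:.2f}")
for name, value in sorted(contributions.items(), key=lambda item: item[1]):
    print(f"  {name}: {value:+.2f}")
# The printed breakdown is the kind of explanation a "black box" model cannot
# offer without additional interpretability tooling.
```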

Biased Algorithms: The Root of AI Discrimination

Importantly, artificial intelligence is often perceived as a neutral and objective decision-making tool, yet the reality is that AI systems are only as fair as the data they are trained on. When machine learning models rely on biased datasets, they produce skewed outcomes that reinforce historical inequalities (Larson et al. sec. 5). This bias originates from training data that reflects existing societal prejudices, including racial, gender, and socioeconomic disparities. For example, an investigation by ProPublica into the COMPAS recidivism algorithm found that Black defendants were twice as likely as white defendants to be misclassified as high-risk offenders, leading to harsher sentencing outcomes (Larson et al. sec. 7). Because the algorithm was trained on historical arrest and conviction data, which is already influenced by systemic racial disparities, it learned to associate race with criminality rather than evaluating defendants based on behavioral factors. This demonstrates how AI, rather than eliminating human bias, can amplify and validate discrimination in critical areas such as the criminal justice system.
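
The disparity ProPublica described can be expressed as a gap in false positive rates between groups. The sketch below is a minimal illustration with made-up records, not ProPublica's data or method; it only shows how such a per-group error-rate check might be computed.

```python
# Illustrative only: made-up records showing how a per-group false positive
# rate check could be computed. A "false positive" is a person labeled
# high-risk who did not go on to reoffend.

records = [
    # (group, predicted_high_risk, reoffended) -- hypothetical values
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(rows):
    """Share of non-reoffenders in `rows` who were labeled high-risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return float("nan")
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    group_rows = [r for r in records if r[0] == group]
    print(f"Group {group}: false positive rate = {false_positive_rate(group_rows):.2f}")
# A large gap between the two rates (here 0.67 vs. 0.33) is the kind of
# imbalance that indicates one group bears more wrongful high-risk labels.
```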

Additionally, the consequences of biased AI can affect job recruitment, financial services, and even access to housing. A striking example is Amazon’s automated hiring tool, which systematically discriminated against women because it was trained on resumes from past hires who were predominantly male software engineers (Goodman par. 5). The algorithm downgraded resumes that included words like “women’s,” such as “women’s rugby team,” and favored candidates who used action verbs more commonly found in men’s resumes (Goodman par. 6). Similarly, in the financial sector, an AI mortgage system disproportionately charged Black and Latinx borrowers higher interest rates than white borrowers for identical loans (UChicago par. 3). These examples illustrate how AI systems inherit and perpetuate existing social inequalities, making it harder for marginalized groups to access opportunities and financial stability. Because AI decisions often appear objective and data-driven, they can be more difficult to challenge than human biases.

Moreover, the biggest challenge in creating unbiased AI is that bias is deeply embedded in both the data and the development process. AI systems do not inherently understand fairness; they merely identify statistical patterns in the data they are given. When datasets reflect historical inequalities, AI models learn and reinforce those patterns (IBM par. 21). Additionally, biases can arise from algorithmic design choices, such as how certain variables are weighted in decision-making processes (Jonker and Rogers par. 10). Even when developers attempt to correct for bias, they may unintentionally introduce new forms of discrimination, as seen in attempts to “de-bias” hiring algorithms by removing gender-based data, which can still leave features such as education history or work experience that correlate with gender (IBM par. 22). Furthermore, AI bias is compounded by a lack of transparency and accountability. Without an understanding of how AI systems reach decisions, it is difficult to detect or correct biased outcomes (“Data Ethics in AI” par. 10). Without comprehensive oversight and diverse representation in AI development teams, ensuring fairness in machine learning remains a persistent and complex challenge.
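
The proxy problem can be shown in miniature. In the hypothetical sketch below, the records, the college labels, and the scoring rule are all invented for illustration: gender is excluded from the features, yet a correlated proxy lets the score reproduce the same gender gap.

```python
# Hypothetical sketch: gender is excluded from the scoring features, but a
# correlated proxy ("college") still lets the score reproduce the gender gap.

applicants = [
    {"gender": "M", "college": "X", "hired": 1},
    {"gender": "M", "college": "X", "hired": 1},
    {"gender": "M", "college": "X", "hired": 1},
    {"gender": "F", "college": "Y", "hired": 0},
    {"gender": "F", "college": "Y", "hired": 0},
    {"gender": "F", "college": "X", "hired": 1},
]

def college_hire_rate(college):
    """Naive 'de-biased' score: historical hire rate for a college, gender ignored."""
    rows = [a for a in applicants if a["college"] == college]
    return sum(a["hired"] for a in rows) / len(rows)

scores = {c: college_hire_rate(c) for c in ("X", "Y")}
print("Score by college:", scores)  # X scores high, Y scores low

# Because college correlates with gender in this (invented) history,
# the supposedly gender-blind scores still track gender.
for gender in ("M", "F"):
    rows = [a for a in applicants if a["gender"] == gender]
    average = sum(scores[a["college"]] for a in rows) / len(rows)
    print(f"Average score for gender {gender}: {average:.2f}")
```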

Equally important, regulatory frameworks, such as the European Union’s AI Act, have begun imposing strict penalties on companies that fail to reduce algorithmic bias, recognizing that biased AI erodes public trust in technology (IBM par. 23). While AI has revolutionized data science, it has also revealed deep ethical vulnerabilities, making it crucial to approach AI development with accountability and fairness at the forefront. Without proactive intervention, biased AI will continue to amplify historical injustices, reinforcing discrimination under the facade of technological objectivity.

The Role of Regulation and Ethical AI Development

Consequently, the rapid growth of AI in data science has raised serious ethical concerns regarding personal data collection and usage. Companies and governments gather vast amounts of personal data from users, often without explicit consent or understanding of its implications (IABAC® sec. 18). For instance, biometric data, health records, and financial information are frequently stored in centralized databases, making them vulnerable to breaches and misuse (IABAC® sec. 19). Governments also use AI for mass surveillance, raising concerns about privacy and civil liberties (IABAC® sec. 20). Without clear regulations, companies can exploit user data for profit.

To address these concerns, laws and regulations governing AI and data science have begun to emerge. For example, the EU AI Act imposes severe penalties on companies that fail to comply with its ethical AI guidelines, with fines reaching up to EUR 35 million or 7% of a company’s global revenue (IBM par. 24). Tech companies also bear significant responsibility for ensuring fairness and transparency in AI development. Many corporations prioritize efficiency and profit over ethical considerations, leading to rushed AI deployments that perpetuate bias. Companies must implement internal oversight mechanisms, such as third-party audits and bias-detection tools, to identify and remove discriminatory patterns in AI models (Bandy 20). Ethical AI should not be an afterthought but a fundamental aspect of development.
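
As an illustration of what a basic bias-detection check inside such an audit might look like, the sketch below applies the common “four-fifths rule,” which compares selection rates across groups. The counts, group labels, and threshold handling are illustrative, not taken from any audit cited here.

```python
# Illustrative audit check with hypothetical counts: the "four-fifths rule"
# compares selection rates across groups; a ratio below 0.8 is a common
# red flag that an automated screen may have disparate impact.

applied = {"group_a": 100, "group_b": 80}    # applicants per group
selected = {"group_a": 40, "group_b": 15}    # applicants the screen advanced

rates = {g: selected[g] / applied[g] for g in applied}
ratio = min(rates.values()) / max(rates.values())

print("Selection rates:", rates)                 # group_a 0.40, group_b ~0.19
print(f"Disparate impact ratio: {ratio:.2f}")    # ~0.47 in this example
if ratio < 0.8:
    print("Below the 0.8 threshold: flag the model for human review.")
```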

The Future of AI: Balancing Innovation and Ethics

Looking ahead, AI development must be approached responsibly to ensure that innovation does not come at the cost of ethical integrity. Even Elon Musk, the CEO of Tesla and SpaceX, remarks, “With artificial intelligence, we’re summoning the demon” (Gesikowski par. 6). However, a key solution is integrating fairness-focused regulations with AI research rather than imposing restrictions that hinder innovation (Floridi and Cowls par. 13). For example, policymakers can implement “regulatory sandboxes” where AI companies can experiment with new models under ethical oversight before full-scale deployment (Floridi and Cowls par. 14). Additionally, interdisciplinary collaboration between ethicists, technologists, and social scientists can help create AI systems that align with human values (Rahwan et al. 492). Instead of treating AI as an entirely autonomous decision-maker, developers should design AI as an assistive tool that complements human judgment, particularly in high-stakes fields. This approach allows AI to enhance efficiency without amplifying discriminatory patterns. Likewise, third-party audits and bias-detection algorithms should be standard practice in AI deployment, ensuring companies remain accountable for their systems’ decisions (Bandy 21).

Moreover, striking a balance between progress and ethical concerns is crucial because AI’s impact extends beyond technology. It shapes economies, societies, and human rights. Ethical AI ensures that advancements in automation, healthcare, and data science benefit everyone. On the other hand, overly restrictive regulations could prevent innovation, slowing advancements in fields like medical research and climate modeling. The solution lies in adaptive governance, with regulations that evolve alongside AI technology while maintaining ethical safeguards (Rahwan et al. 496). By prioritizing transparency, fairness, and human oversight, society can harness AI’s potential without sacrificing justice.

Conclusion

Ultimately, while artificial intelligence may seem like an impartial force driving progress in data science, it has deepened ethical concerns by reinforcing systemic biases under the illusion of neutrality. AI’s dependence on flawed historical data and lack of accountability have led to discriminatory outcomes, proving that technology does not eliminate human prejudice but often magnifies it. Addressing these ethical challenges is not just about improving technology; rather, it is about protecting fundamental human rights and ensuring fairness in an increasingly automated society. Without transparency, accountability, and diverse representation in AI development, these biases will only become more deeply rooted. This reality demands urgent attention because unchecked AI threatens to deepen inequality, erode public trust, and undermine the very progress it promises. As historian Yuval Noah Harari warns, “Technology is not good or bad. It is up to us to decide how to use it” (115). The path forward requires diverse data representation and robust regulatory frameworks to ensure technology serves humanity and not the other way around. Prioritizing ethical AI development will define not just the future of technology but the future of equality itself. The future of AI is still unwritten, and it is up to us to decide what kind of world we want it to create.

Works Cited

“AI Is Making Housing Discrimination Easier than Ever Before.” Kreisman Initiative for Housing Law & Policy at UChicago, 2024, kreismaninitiative.uchicago.edu/2024/02/12/ai-is-making-housing-discrimination-easier-than-ever-before/.

Bandy, Jack. “Problematic Machine Behavior: A Systematic Literature Review of Algorithm Audits.” Proceedings of the ACM on Human-Computer Interaction, vol. 4, no. CSCW2, 2020, pp. 1-34. doi.org/10.1145/3415192.

“Data Ethics in AI: 6 Key Principles for Responsible Machine Learning.” Alation.com, 2024, www.alation.com/blog/data-ethics-in-ai-6-key-principles-for-responsible-machine-learning/.

Floridi, Luciano, and Josh Cowls. “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review, 2019, doi.org/10.1162/99608f92.8cd550d3.

“The Fourth Industrial Revolution.” Engineering and Technology Management, Washington State University, 30 Aug. 2023, etm.wsu.edu/2023/08/30/the-fourth-industrial-revolution/.

Gesikowski, Cezary. “AI — the Good, the Bad, and the Transformative.” Medium, 17 Mar. 2023, gesikowski.medium.com/ai-the-good-the-bad-and-the-transformative-7c9661267f63.

Goodman, Rachel. “Why Amazon’s Automated Hiring Tool Discriminated against Women.” American Civil Liberties Union, 12 Oct. 2018, www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against.

Harari, Yuval Noah. 21 Lessons for the 21st Century. Spiegel & Grau, 2019.

IABAC®. “The Ethics of AI and Data Analytics.” IABAC®, 4 Oct. 2023, iabac.org/blog/the-ethics-of-ai-and-data-analytics.

IBM. “AI Ethics.” IBM.com, 17 Sept. 2021, www.ibm.com/think/topics/ai-ethics.

IBM. “Data Science.” IBM.com, 21 Sept. 2021, www.ibm.com/think/topics/data-science.

Jonker, Alexandra, and Julie Rogers. “Algorithmic Bias.” IBM.com, 20 Sept. 2024, www.ibm.com/think/topics/algorithmic-bias.

Larson, Jeff, et al. “How We Analyzed the COMPAS Recidivism Algorithm.” ProPublica, 23 May 2016, www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.

Rahwan, Iyad, et al. “Machine Behavior.” Nature, vol. 568, no. 7753, 2019, pp. 477-486. doi.org/10.1038/s41586-019-1138-y.

“What Is Data Science?” Ischoolonline.berkeley.edu, Dec. 2023, ischoolonline.berkeley.edu/data-science/what-is-data-science/.

Varian, Hal. Quoted in “What Is Data Science?” Ischoolonline.berkeley.edu, Dec. 2023, ischoolonline.berkeley.edu/data-science/what-is-data-science/.

The Invisible Judge

A line of code, so neat, so pure,
Makes silent decisions dressed as secure.
No heart to feel, no eyes to see,
Yet it shapes the fate of you and me.

A name on paper, turned into vapor,
No value to the maker, graded by a faker.
Four years of college and experience stripped of skin,
A resume lost, before life could begin.

A loan denied with numbers cold,
A future traded, a story untold.
The judge who wears neither robe nor face,
It lives in wires and holds no place.

Yet verdicts fall without a sound,
Lost the purpose of AI that seems to be found.
Bias hidden in the data’s breath,
An algorithm that signs your death.

History’s weight in a digital world,
Discrimination recycled, the harm has been blurred.
Oh fair machine, you seem so wise,
But the truth is hidden in your disguise.

Behind each choice, an invisible hand,
Of flawed designers across the land.
So check the code like a pencil, not a pen,
Before injustice strikes again.

Explanation: This poem, “The Invisible Judge,” explores the emotional reality behind algorithmic bias and the illusion of fairness that AI creates. It uses personification to give the algorithm human-like power, calling it a “judge” that lacks eyes or a heart yet holds real-world authority over human futures. Further, the poem uses repetition, such as “No heart to feel, no eyes to see,” to emphasize the indifferent nature of AI decision-making and its emotional disconnect from the lives it affects. Moreover, the use of metaphors such as “invisible hand” highlights the human designers whose unconscious biases enter the algorithms, turning machines into silent enforcers of historic discrimination. Ultimately, the poem aims to raise awareness about the unseen but powerful consequences of data-driven decisions, forcing readers to consider how easily technology can become a modern-day destroyer of opportunity and fairness.

A Mirror Made of Math

In numbers they trust, in patterns they see,
A mirror of us, but who holds the key?
It calculates worth, it measures your face,
But fairness is lost in the algorithm’s race.

Data whispers from decades past,
Teaching machines truths that don’t last.
Old wounds coded in endless loops,
Carve the future hollow like a scoop.

From job hiring to courtrooms wide,
The algorithm stands by the judge’s side.
Its choices hidden in logic and fact,
But justice and empathy are what it lacks.

A mirror made of math reflects our sins,
While power smiles and progress begins.
The future runs on lines of code,
But whose stories were erased remains untold.

Rewrite the code, rethink the plan,
Bias will linger unless checked by man.
Machines can assist, but not replace,
The human heart and moral grace.

Explanation: The second poem, “A Mirror Made of Math,” builds on the first by deepening the idea that AI is not an unbiased authority but rather a mirror reflecting human flaws. The central metaphor of the mirror symbolizes how AI replicates human biases instead of erasing them. It also uses rhyme and rhythm to create a flow that imitates the patterns found in both algorithms and societal prejudice. Phrases like “Old wounds coded in endless loops” reflect the repeating nature of bias, especially when historical data is used to train AI models without critical oversight. In short, the poem suggests that while AI systems may appear logical and objective, they lack empathy and accountability, which are essential for justice. This connects with the first poem by emphasizing that ethical AI is about human responsibility and conscious moral design.

The Interview That Wasn’t Fair

Setting: A quiet In-N-Out shop in downtown San Francisco. It is early evening. Jade, a recent college graduate, sips coffee across from Dr. Parshv Patel, a Chief Data Science Officer (CDO) and longtime family friend.

Jade (sighing as she puts her phone down): I’ve got to be honest, Dr. Patel. I don’t know what went wrong. I nailed the interview. My degree is solid, my experience lines up, and even the recruiter said I was a top contender. But then the rejection email came, cold and short, like I was never considered in the first place.

Dr. Parshv Patel (nodding slowly): I am sorry to hear that, Jade. Did they mention anything about why?

Jade (sipping her coffee with tired eyes): Just the usual line: “We’ve selected a candidate whose qualifications align more closely with the role.” It didn’t even sound like a human wrote it.

Dr. Parshv Patel (looking concerned): I looked into it out of curiosity. Companies use an AI screening system before human eyes even see the applications. You know, resumes go through an algorithm first.

Jade (leaning forward): That explains a lot. So a machine tossed me out before anyone even read my name. Have you ever wondered what the AI was trained on?

Dr. Parshv Patel: I found a developer forum online. Apparently, their system was trained on data from past successful employees. However, most of their selected candidates, especially for this role, have been white men from big-name universities like UC Berkeley, where I graduated, and Princeton, Carnegie Mellon, Cornell, UCLA, and USC. The AI just “learned” to prefer more of the same.

Jade (worried): That is the real hidden danger, Dr. Patel. AI systems aren’t inherently fair, no matter how smart they sound.

Dr. Parshv Patel: Correct! They reflect the data they’re fed, which often carries human prejudice and bias. If a company has always hired a particular type, the AI will keep repeating the cycle.

Jade (voice cracking slightly): It hit me hard. I thought data was neutral. I believed in merit and spent countless hours to get here. I stayed up late, took extra shifts, and juggled family responsibilities; I earned that chance. But now I’m starting to see the flaw. The algorithm didn’t even see me. It saw statistics. I wasn’t rejected for who I am; I was rejected for not fitting a biased mold.

Dr. Parshv Patel (softly): Exactly. AI often wears the mask of objectivity. Companies say, “Oh, the algorithm chose the best candidate,” when really it’s just automating past discrimination. It’s like an old bias dressed up in new code.

Jade (looking down at her hands): So we’re trusting machines to be gatekeepers for jobs, for loans, and even for court decisions, and they could be just as flawed as the people who programmed them?

Dr. Parshv Patel: Sometimes even worse. At least with people, you can challenge a decision face-to-face. Algorithms are black boxes. No explanations, no appeals. The more society leans on them, the more invisible the injustice becomes, and the harder it is to fix.

Jade (frustrated): It is just so disheartening. I worked so hard, thinking I would finally break through and land a job. But it feels like the future is being written by past mistakes.

Dr. Parshv Patel (firmly): It doesn’t have to be. That’s why we need people like you, who are young, aware, and ready to question the system. AI doesn’t write its own rules. Humans do. If we change the rules, the algorithms will follow.

Jade (after a long pause): I hope you’re right. But right now, I feel like the world is letting machines decide our fate.

Dr. Parshv Patel: That is why ethics has to stand shoulder to shoulder with technology. Without it, we’re not advancing; rather, we’re just automating inequality.

The shop grows darker as they finish their drinks. The conversation continues, heavy and unresolved, much like the ethical future of AI.

Explanation: This dialogue, “The Interview That Wasn’t Fair,” explores the emotional and personal impact of algorithmic bias in AI-driven hiring processes. Through the conversation between Jade, a recent job seeker, and Dr. Parshv Patel, a Chief Data Science Officer (CDO), the dialogue humanizes the abstract issue of data discrimination. The dialogue uses rhetorical strategies like ethos by presenting the CDO’s expert insight, pathos through Jade’s disappointment and frustration, and logos by utilizing logical explanations of how AI bias develops to present the complex reality of algorithmic injustice in a relatable form. The use of figurative language, such as “an old bias dressed up in new code,” captures the misleading nature of AI’s objectivity and highlights how technological systems often amplify human prejudice under the illusion of fairness. The dialogue also contrasts the optimism of a hardworking graduate with the cold, automated rejection to highlight the gap between human potential and machine judgment. By making the conversation casual, the dialogue allows readers to see the human cost of AI bias, urging them to think critically about how algorithms affect real people’s futures. The dialogue underscores the research report’s claim that AI, without ethical oversight, does not erase discrimination but magnifies it.