illuminAI

We bring together curious people exploring the interdisciplinary challenges of AI ethics and its societal impacts.

A Brief Introduction to AI Ethics

by Kassidy McDonald and Shirley Zhang

From bias and privacy to deepfakes and superintelligence, AI ethics is no longer a niche concern. This piece breaks down why ethical AI matters, how it affects everyday life, and why shaping its future requires voices beyond tech alone.


Photo by Danny Choi

January 27, 2026

Artificial intelligence (AI) has touched nearly every aspect of our personal and professional lives, making its presence increasingly inescapable. From recommendation systems to large language models (LLMs), many of these applications are opportunistic, and many are genuinely beneficial. Just as early insights about computation came from asking philosophical questions, we must ask the same questions about the computability and intelligence of modern AI systems [3]. While we recognize the potential of these technologies, we must also maintain a balanced perspective on their emergence and advancement. This requires approaching AI development thoughtfully and fully considering the trajectory of both its limitations and possibilities [15].

Professor Steven Coyne brings a uniquely interdisciplinary perspective to these conversations. He earned a Bachelor of Science degree in Mathematics alongside an Honours Bachelor of Arts in Philosophy, and now teaches philosophy with a particular interest in reason and morality. Steven is cross-appointed with the Department of Computer Science and the Department of Philosophy, and he also prepares and delivers ethics modules for the Embedded Ethics Education Initiative (E3I) at the University of Toronto (UofT) [6].

E3I has become a cornerstone of UofT’s computer science curriculum, aiming to instill in future educators, scientists, and tech developers the skills and motivation to incorporate ethical considerations into their work [8]. The initiative arose from discussions and concerns that the field of computer science has often overlooked context-sensitive and culturally appropriate technology, exposing the field’s unchecked hubris [2].

Ethical AI has been identified as a critical component of responsible system design because technology alone cannot address the societal consequences of AI. ACM FAccT calls for “an increased focus on ethical analysis grounded in concrete use-cases, people’s experiences, and applications” [2]. In other words, ethical consideration should be embedded in the contexts in which AI systems operate, not treated as an afterthought.

What is AI Ethics?


AI systems and LLMs must be trained on large datasets to carry out their functions effectively. They learn from audio, text, images, and video to recognize patterns and make predictions, but the scale and design of training do not guarantee that a system will treat all users equally. AI ethics confronts the biases and forms of discrimination that emerge in these systems: the developers who build them, alongside the data they rely on, are shaped by particular cultural, social, and historical perspectives, so these tools can reproduce the same assumptions. This creates blind spots. AI ethics, in this sense, is the application of values and principles, widely accepted standards of right and wrong, to guide moral conduct in developing and using AI [9].

Short-term questions. Steven approaches AI ethics with the understanding that the field can be divided into two halves. The first involves short-term questions about how AI is used today: are current AI systems biased or discriminatory in the decisions they make about people?

Virginia Eubanks’ Automating Inequality argues that automated decision-making in US public services profiles and punishes the poor. This trend operates at scale: complex technology and algorithms obscure the decision-making process, while flawed metrics embed existing social biases into the system, producing what she calls a “digital poorhouse.” High-tech tools such as AI become the new digital infrastructure for perpetuating historical class- and race-based inequalities [1, pg 142].

We must also address issues such as bias, deepfakes, and misinformation, which require insight and knowledge from disciplines outside computer science to reach a deeper understanding of what bias and discrimination are and why they are wrong. We understand, of course, that bias instilled in AI systems is harmful; what we may need to study further are its consequences for fairness for all [4].

Long-term questions. The second half concerns long-term questions about AI safety, particularly debates about artificial general intelligence (AGI) and what may happen when AI systems reach or surpass human-level intelligence and potentially outperform humans [16]. Considering such possibilities makes us question how we understand our own contributions to writing, thinking, and other intellectual activities.

AI ethics requires asking existential questions about human replacement, responsibility, and possibly even survival. If superintelligent AI systems no longer need humanity and come to operate independently of human needs and interests, should we focus on preserving human existence in the future? And if these systems can essentially outthink us, does that mean they have reached a form of consciousness [16]?

Long-term questions push us to contemplate these challenges, especially as our reliance on this technology increases. There is a reason the debate over superintelligent AI is so contentious: if such systems exceeded human intelligence, maintaining control over them would be difficult. There is also the question of whether a superintelligent AI would share human values or diverge from them in ways that create conflict instead of alignment.

AI Ethics Affects Us All


Ethical AI is not just a topic for computer scientists, developers, and researchers to be concerned about. AI is woven into nearly every part of life: whether we use it directly through large language models and generative AI tools, or feel its effects as corporations continue to embed AI into day-to-day processes, it is a constant presence.

Bias. One of the most heavily discussed ethical AI considerations is bias. As AI systems are trained on increasingly massive amounts of data, it becomes harder for them to separate useful inferences from the societal biases embedded in historical data. These biases become encoded in AI algorithms, which can perpetuate and amplify discriminatory outcomes and undermine fairness in critical areas such as hiring, criminal justice, and resource allocation. These concerns aren’t baseless: a hiring algorithm might learn and perpetuate biases when screening job applicants, inadvertently discriminating against and disqualifying qualified candidates. For example, a 2024 study found that “resumes with Black male names are only preferred to Black female names and White male names in 14.8% and 0% of bias tests, respectively” [4]. Healthcare algorithms can likewise systematically underestimate the needs of, and produce inaccurate results for, patients of colour: another study found that “CNNs that provide high accuracy in skin lesion classification are often trained with images of skin lesion samples of white patients, using datasets in which the estimated proportion of Black patients is approximately 5% to 10%”, which meant that “when tested with images of Black patients, the networks have approximately half the diagnostic accuracy [as] originally claimed” [5]. These findings highlight that biased AI systems have tangible consequences that can quickly become more persistent, widespread, and harmful to us all.

Privacy. Another universal ethical consideration is privacy. Modern AI depends on vast volumes of personal data, so even the simple act of accessing the internet raises ethical concerns about data access and consent. Personal data once collected to tailor recommendations can now be repurposed to train AI systems [17], and data breaches are more prevalent than ever. Stanford’s 2025 AI Index found that “AI incidents jumped by 56.4% in a single year, [with incidents spanning] from data breaches to algorithmic failures that compromise sensitive information”. This raises a fundamental question: “Who has the right to say what is allowed and what is not” [18]? When our data shapes the systems that shape our world, everyone, regardless of background or discipline, has reason to join the conversation.

Trust. Trust in information ecosystems has also been destabilized by AI-driven media manipulation. In 2019, former House Speaker Nancy Pelosi was at the centre of a controversy over doctored videos that portrayed her in a seemingly impaired state [11]. Though these were low-tech manipulations rather than true deepfakes, they still fooled many viewers, and they foreshadowed the core threat of deepfakes: altering an individual’s speech and behaviour to further an agenda. Facebook, now part of Meta, had no policy against distributing this type of media at the time, though it now removes such misleading videos [12]. The episode showed that our perception of deepfakes had been only technical, not social, which left digital literacy around manipulated media underdeveloped.

In short, ethical AI isn’t niche, nor is it insignificant. It’s personal, and it affects everyone.

Your perspective matters.


Beyond the technical work of computer scientists who implement and design these systems, "AI ethics conversations require participation from people across the disciplinary spectrum”, as we cannot begin to meaningfully address ethical AI until we understand "how people interact with AI systems” (Prof. Coyne).
While computer scientists and researchers may lay the groundwork for building and deploying AI systems, the "development of ethical AI is not just a technological challenge”. It involves "navigating complex social, philosophical, and legal questions” [19], which means that ethical AI can’t be approached or solved from a technical angle alone. Decisions regarding what data to use, which values to prioritize, and what risks are acceptable cannot be made by one group alone. They are reflections of social, cultural, and moral judgements that require different perspectives across disciplines.

This is where diversity becomes essential. Philosophers can help articulate the meanings of terms such as “fairness” and “harm”, or question why humans make certain decisions, which in turn informs how machines should reason and decide when it comes to humans. Legal experts, who ensure that AI systems comply with laws and regulations, can help shape the policies that protect our rights. Social scientists can contribute insights into human-AI interaction and explore AI’s impact on society and culture. Even fields that seem far removed from the AI conversation bring valuable insight: healthcare workers may understand AI’s patient biases through firsthand experience, and educators may witness its effects on younger generations.

In other words, it is evident that ethical AI requires more than technical skill and advancements; achieving productive conversations requires a collective and diverse understanding of how these systems shape our society, and how we can shape them in return.

How can you get involved?


If you are new to AI systems and the ethics we must consider in their implementation, development, and use, Steven suggests finding something that connects to your personal interests. For example, if you have an interest in art, you might question the ethics of relying on AI-generated art. The Toronto Maple Leafs received backlash on X for posting apparently AI-generated content depicting the team’s Legends Row statues coming to life [13]. The video contained incorrect logos and misspellings, and many hockey fans saw it as an unethical form of media.

Explore how your personal interests intersect with AI and how they may be affected by the issues it raises; different issues will attract your attention and may move you to act. Browsing social media discussions about AI and its potential implications is a great way to start gauging both your interest in particular topics and your current level of knowledge about them.

The Schwartz Reisman Institute for Technology and Society at UofT hosts various talks, workshops, and regular seminar series, all of which are recorded and posted to its YouTube channel [10]. Steven suggests browsing the library; something may strike your interest.

Alternatively, if you already have a background in AI and are interested in getting more involved in AI ethics, consider registering for courses that build a systematic approach to AI ethics and fill gaps in your knowledge.

Here at UofT, there is PHL277 (‘Data Ethics’), taught by Steven Coyne himself, which introduces the ethical problems posed by Big Data and algorithmic decision-making [7]. The Philosophy department offers further courses on long-term, big-picture questions about human existence, and the Computer Science department offers courses such as CSC300 (‘Computers and Society’).

The best way to discover your particular interest or niche in AI, whether you are starting out or deepening your knowledge, is to find like-minded people who want to discuss AI ethics. Consider joining an extracurricular club of students in similar positions who want to talk through these issues.

Where Do We Go From Here?

The lingering question remains: where do we go from here? The field of AI is evolving, and the technology rising from it is innovative and transformative, but it is also uncharted territory, one that necessitates further discussion of its development, deployment, and use to ensure it is ethical. Steven is featured in the first of many IlluminAI videos, each with different speakers, different topics, and more opportunities to join the conversation on AI ethics. Stay tuned.

REFERENCES


[1] Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
https://tetrazolelover.at.ua/virginia_eubanks-automating_inequality-how_high-te.pdf
[2] D’Ignazio, C., & Klein, L. F. (2024). Data Feminism for AI. ACM FAccT Conference.
https://facctconference.org/static/papers24/facct24-7.pdf
[3] Schrage, M., & Kiron, D. (2025). Philosophy Eats AI. MIT Sloan Management Review.
https://sloanreview.mit.edu/article/philosophy-eats-ai/
[4] Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval.
[5] Using Artificial Intelligence on Dermatology Conditions in Uganda: A Case for Diversity in Training Data Sets for Machine Learning. bioRxiv.
[6] Embedded Ethics Education Initiative (E3I), University of Toronto.
https://www.cs.toronto.edu/embedded-ethics/about.html
[7] University of Toronto Faculty of Arts & Science. PHL277H1 – Data Ethics.
https://artsci.calendar.utoronto.ca/course/phl277h1
[8] University of Toronto. U of T CS Ethics Education Initiative Recognized with D2L Innovation Award.
https://web.cs.toronto.edu/news-events/news/u-of-t-cs-ethics-education-initiative-recognized-with-prestigious-d2l-innovation-award-in-teaching-and-learning
[9] ISO. Responsible AI and Ethics.
https://www.iso.org/artificial-intelligence/responsible-ai-ethics
[10] Schwartz Reisman Institute for Technology & Society (YouTube).
https://www.youtube.com/c/SchwartzReismanInstitute
[11] CBS News. Doctored Nancy Pelosi Video Highlights Threat of Deepfake Tech (2019).
https://www.cbsnews.com/news/doctored-nancy-pelosi-video-highlights-threat-of-deepfake-tech-2019-05-25/
[12] Euronews. Does Facebook’s New Policy on Deepfake Videos Go Far Enough? (2020).
https://www.euronews.com/my-europe/2020/01/07/does-facebook-s-new-policy-on-deepfake-videos-go-far-enough-thecube
[13] Hockey Patrol. Fans React After Maple Leafs Use AI in 2025–26 Season Hype Video.
https://www.hockeypatrol.com/nhl-team/toronto-maple-leafs/fans-react-after-maple-leafs-use-ai-in-2025-26-season-hype-video
[14] Toronto Star. These Ads Near Union Station Could Be Recording You.
https://www.thestar.com/news/gta/these-ads-near-union-station-and-other-places-around-toronto-could-be-recording-you-what/article_7af7c920-1ce7-4b19-98db-4c22d742f202.html
[15] IBM Think. Examining Superintelligence.
https://www.ibm.com/think/insights/examining-superintelligence
[16] Towards Data Science. Stop Worrying About AGI — The Immediate Danger Is Reduced General Intelligence.
https://towardsdatascience.com/stop-worrying-about-agi-the-immediate-danger-is-reduced-general-intelligence-rgi/
[17] Stanford HAI. Privacy in an AI Era: How Do We Protect Our Personal Information?
[18] Harvard Division of Continuing Education. Ethics in AI: Why It Matters.
https://professional.dce.harvard.edu/blog/ethics-in-ai-why-it-matters/
[19] Why Is Interdisciplinary Collaboration Essential for AI Ethics?
