
AI has recently dominated discussions on the future of just about every sector, and its impact on education is no exception. Yet one in five teachers doesn’t think AI is appropriate for a K-12 setting.

This article examines the main ethical considerations when teaching AI to high school students (along with some suggestions for how to navigate them).

Ethical Considerations When Teaching AI

Let’s look at the following ethical issues:

  • Responsible use
  • Data privacy
  • Algorithmic bias
  • Transparency
  • Inclusivity

Responsible Use

Teaching how to use AI responsibly is crucial. While AI has the potential to help students learn, it also carries risks.

One of the most prominent concerns around the misuse of AI is cheating and plagiarism. This can certainly be problematic, and the solution is not as simple as running work through AI detectors (many of which are notoriously inaccurate). Instead, teachers should focus on monitoring how their students use AI and incorporate a variety of assessment types, such as group projects, discussions, and presentations.

Yet responsible use of AI goes beyond cheating. Overreliance on tools like ChatGPT can keep students from developing critical thinking and creativity, and too much time spent with AI and other digital tools can leave less room for building face-to-face communication skills.

Teachers can mitigate these possibilities by ensuring they incorporate a variety of activities into the classroom — not just AI but also problem-solving and team-building exercises. They may also wish to incorporate creative expression, such as drawing or music.

Data Privacy

When students use AI, they are by default handing their data to whoever owns the tool. This poses a serious ethical dilemma, especially when students are required to use certain tools as part of their education and may enter sensitive information without realizing the implications.

Concerns about data privacy don't necessarily mean AI should be avoided altogether, but teachers are responsible for properly vetting the technology they use. Some aspects to consider include:

  • Type of student data stored
  • Who accesses the data and where it is stored
  • How the data is protected
  • Any past data breaches
  • Whether third parties can access the data

To ensure informed consent, teachers should educate both students and parents about what data is collected, who can access it, and how it is protected. This may mean creating guidelines that explain the points above for students and parents, or even giving them a role in the decision-making itself.

Similarly, schools should work only with reliable vendors, make sure student data is properly encrypted, and carry out regular security audits to reduce the chance of a data breach.

Algorithmic Bias

Over the last few decades, there has been growing dialogue about teachers' biases in how they treat students based on factors like ethnicity. Unfortunately, using AI doesn't eliminate these concerns: models are built on the data and information that humans feed them, so AI can be just as biased as the people behind it. When you're teaching AI, it's essential to address these potential biases.

Using AI as a teacher's assistant for decision-making or outcome prediction can be especially problematic, since a biased model may discriminate against certain groups or make unfair assumptions about individual students.

Teachers should learn how these algorithms work, including where the training data comes from, how models are trained, and how decisions are made, so they can recognize potential biases.

They can also educate students on the subject by fostering open conversations in the classroom.

Transparency

When approaching all of the above, don’t neglect the importance of transparency.

As touched on already, educators should ensure that students and their guardians understand how schools store and use data, and give them the chance to opt in or out of AI use.

Being transparent about AI use is also an opportunity to educate students and guardians about the models in question and the issues around algorithmic bias.

Inclusivity

As technology becomes more advanced, the divide between those who can access it and those who can’t has the potential to become a greater concern. AI education can play a key role in reducing this issue by giving all students the opportunity to learn and use artificial intelligence. But there’s still the issue of what happens outside of the classroom.

Some students have a strong internet connection and up-to-date devices at home, meaning they can continue to use AI as a learning tool or for help with assignments, while others cannot.

Teachers can bridge this gap by offering alternatives to students who lack access, especially when it comes to homework. For instance, they can provide resources on a USB stick or on paper, or give students the opportunity to use the school's devices and resources.

Ethics for Success

As exciting as new technology is, it’s crucial not to get so wrapped up in the software that we forget about the humans using it. By being aware of the ethical considerations outlined here when teaching AI, educators can best prepare their high school students for the decades ahead.

Looking for more resources to help? At Learning.com, we offer an AI curriculum for students in grades 4 through 12, covering both technology literacy and skills for using artificial intelligence.