Why Our Learner Tricia Believes Ethics Belongs in Every AI Decision

August 15, 2025

“When you build models that affect patients, ethics isn’t a side topic. It’s the whole thing.”

When Tricia Govindasamy joined our AI Ethics course, she was looking for something practical — tools she could use.

As a Senior Information Analyst at Public Health Scotland, Tricia works with sensitive patient-level data every day. She’s also spent years in civic tech, leading data projects in South Africa. What she needed was a way to bring ethical thinking into the real-life decisions that come with building and managing AI systems.

“While I had the technical skills to build AI models, I realized I didn’t fully understand how AI ethics fits into that development process,” she says.

Finding the Missing Link

Tricia’s interest in data ethics grew during her master’s in data science for politics and policymaking, where questions of governance and fairness often came up. But it wasn’t until she started following the rollout of government AI strategies, many still light on practical detail, that something clicked.

“I saw an opportunity to contribute by combining my background in policy, product management, and technical AI development,” she explains.

She was looking for something that could connect those dots. Turing College's course stood out because it offered a chance to explore ethics not as an abstract concept, but as part of the actual development process.

Ethics in Action

Working in public health, Tricia sees firsthand how ethics can’t be an afterthought.

To access sensitive data, her team carries out data protection impact assessments, plans for data linkages, and follows GDPR-aligned practices like anonymization. Transparency is key. Only high-level, aggregated data is published, often as open data.

But technical safeguards are only part of the story. Through the course, Tricia started looking beyond those, especially when it came to fairness.

“Before the course, I understood bias existed in data,” she says. “But I hadn’t fully appreciated how deeply historical and structural inequalities can shape datasets, even before modeling begins.”

Having lived and worked in both South Africa and Scotland, she's seen how local context leaves its mark. In South Africa, the legacy of apartheid still shapes many social systems, including the data that comes out of them. In Scotland, the small size of ethnic minority populations can lead to models that unintentionally exclude or misrepresent them.

A Shift in Perspective

What changed most for Tricia was how she now thinks about intersectionality — not as a buzzword, but as a core part of designing fair systems.

“Rather than treating fairness as a checklist of isolated attributes, I now approach it as a systemic issue,” she says. “It requires understanding how overlapping identities shape people’s risks, opportunities, and outcomes in relation to AI systems.”

This idea has influenced how she approaches her work. She has started thinking more deeply about edge cases, overlooked groups, and how seemingly neutral decisions can have unequal impacts.

Building Toward Something Better

Tricia’s next goal is clear: she wants to support organizations in embedding ethics into AI development from the start. That means bringing together governance, policy, and real-world data practices — not just talking about fairness, but operationalizing it.

“This mindset is essential not just for avoiding harm, but for designing AI that actively contributes to reducing inequality,” she says. 

More voices like Tricia’s are reshaping how AI gets built. Want to join them?

Ready to start learning?

Explore the AI Ethics course
