AI is changing the way we work and live. While exciting, it also opens up worrying opportunities for bias and unethical use. We must have candid conversations about the ways we engage with and shape AI—especially as we continue to push for gender equality. 

As experts in the data space, passionate about current tech trends, and committed to continuous learning of contemporary diversity and inclusion issues, we saw the opportunity to host a panel which showcases our employees’ differentiated view of the evolving world.

During International Women’s Month, we were joined by some of our talented Kin for a panel on the ethics of AI, its impact on gender equality, and the opportunities and challenges that lie ahead. 

Moderated by Mishaal Ijaz, Data Analyst at Kin + Carta, the panel included Katerina Nishan, Debbie Gee, Bowen Shi, Guadalupe Calvo, and Stephanie Shine.

Here are some of the key moments from our fascinating panel:

What are your first thoughts when you hear the words 'AI' and 'bias in AI'?

Katerina Nishan: On one hand, AI is amazing. It has enormous potential to make our lives easier and more efficient. But on the other hand, there are real concerns. We've seen cases where AI picks up on our biases and amplifies them, especially when it comes to gender.

One example I have seen affect women specifically is AI image generators, which often portray women in a way that's just not realistic. They're all super sexy and glamorous, which isn't exactly how most of us walk around every day. This stereotype limits how people see women and ignores all the incredible things we contribute to society. Deepfakes are another scary development. These fake videos look totally real, and they can be used to create compromising content of women, especially those in the public eye. It's a massive invasion of privacy and a form of digital violence, used to silence, humiliate, and blackmail women.

AI picks up on our biases and amplifies them, especially when it comes to gender.

Considering your experience working with data in different industries, how would you define bias?

Debbie Gee: Bias in AI is a systematic and unfair disadvantage that can take different forms and enter at different stages, including data collection, model training, algorithm design, and decision-making. These biases show up in the data itself, in the algorithms, and in the people building the models.

Bias in AI is a systematic and unfair disadvantage that can take different forms.

How can we mitigate bias in machine learning?

Bowen Shi: Bias can creep into every step of your ML pipeline. There is no magical tool that can be applied to your model after it's built to eliminate the bias all at once. To help mitigate bias, we must integrate this thinking into every step of developing an AI system.

Some of the critical questions to ask at each stage include:

  • At data collection: Have we considered sources of bias introduced during data collection and survey design? 
  • During data analysis and pre-processing: Have we examined the data for possible sources of bias and taken steps to mitigate it? 
  • During model development: Have we communicated the limitations and biases of the model to stakeholders?
  • During model deployment: Is there a way to turn off or roll back the harmful model in production if needed?

To help mitigate bias, we must integrate the thought process into every step when developing an AI system.
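To make the data-analysis question above concrete, here is a minimal sketch of one common bias check: the demographic parity gap, the difference in positive-outcome rates between groups. The column names (`gender`, `approved`), the data, and the function itself are illustrative assumptions, not something the panel prescribed.

```python
def demographic_parity_gap(rows, group_key, outcome_key):
    """Return the largest difference in positive-outcome rates between groups."""
    totals, positives = {}, {}
    for row in rows:
        group = row[group_key]
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if row[outcome_key] else 0)
    # Positive-outcome rate per group, then the spread between best and worst.
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Illustrative example: loan-approval predictions broken down by gender.
predictions = [
    {"gender": "female", "approved": True},
    {"gender": "female", "approved": False},
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": True},
]
gap = demographic_parity_gap(predictions, "gender", "approved")
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50
```

A large gap does not prove the model is unfair on its own, but it is exactly the kind of signal that should trigger the follow-up questions in the checklist above. Libraries such as Fairlearn provide production-grade versions of metrics like this.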

How do existing regulations on AI address ethical concerns?

Guadalupe Calvo: When talking about ethics regulation, we shouldn't just think about AI-specific regulations. Existing legal frameworks, like data privacy laws, indirectly address AI ethics. But just because something is legal, doesn’t mean that it is ethical. We need to think outside the box thoughtfully and not follow the maxim that everything that is not forbidden is allowed. 

More than legislation is needed to do things ethically, so we must highlight the importance of governance principles. Kin + Carta is working on an AI Governance Policy and an AI Ethics Council. The new policy aims to set out ethical and responsible principles for AI systems and platforms, along with limitations on their use and development.

Just because something is legal, doesn’t mean that it is ethical.

From your experience, how can organizations integrate considerations for GenAI into their existing people and process frameworks?

Stephanie Shine: If a company doesn't already have guiding principles for technology development, now is the time to get your cross-functional team together. There are leaders in this space, like the National Institute of Standards and Technology and EqualAI, whose guidance is a useful starting point. However, you still need to collaborate internally with experts to define what these global standards mean for you.

It can't just be technologists or leadership in a silo. Bring together technologists, legal and compliance experts, data governance experts, and employees from the frontline who understand the business.

If a company doesn't already have guiding principles for technology development, now is the time to get your cross-functional team together.

Speaking of company culture, how can organizations promote education and awareness among stakeholders?

Katerina Nishan: The key is embedding AI ethics awareness not as a niche concern but as an essential organizational competency:

  • Be transparent about AI systems and development processes: Openly share details about the capabilities, limitations, and methodologies behind AI offerings with stakeholders.
  • Invest in AI literacy programs and training materials: Develop educational resources, workshops, or certifications to help stakeholders understand how AI works and its potential impacts.
  • Form diverse AI ethics review boards: This group can review AI systems for hidden biases, make sure company values on fairness are reflected in how AI is built and used, and serve as guardians of responsible AI practices.
  • Proactively develop AI codes of conduct: Rather than reacting after issues arise, receive stakeholder input early to establish agreed-upon principles and safeguards.
  • Lead by example: Senior leadership should be vocal champions for building fair and unbiased AI. Their words and stories carry weight within an organization and show everyone that ethical AI is a top priority.

How can we work together to responsibly shape the future of AI?

Katerina Nishan: AI's impacts are felt across society, so we must create forums where voices from government, industry, academia, civil society groups, impacted communities, and the public can all be heard. An AI future shaped behind closed doors by a small subset of actors is a concerning prospect.

Within that dialogue, we must prioritize diversity and inclusion. Ensuring gender equity, racial diversity, representation for disadvantaged groups, and a plurality of viewpoints and disciplines at the table is imperative. Dominance by one demographic could quickly breed biases and blind spots.

An AI future shaped behind closed doors by a small subset of actors is a concerning prospect.

Want to explore how AI can empower your journey to data-driven decision making?

At Kin + Carta, we offer AI services designed to build trust and understanding around your data—using AI expertise to craft solutions tailored to your biggest challenges and goals.

Our approach is flexible and adaptable, allowing us to start small while planning for long-term success. Get in touch to assess your AI potential and discover how AI can make a meaningful impact in your organization.
