

Applied Intelligence Live!

17 Aug 2023

Ethical considerations in the design, development and deployment of AI

Alley Lyles-Jenkins, Principal Consultant, Product Strategy & Experience Design - Slalom Consulting

Given the rapid developments of AI in the last six months, ethical considerations around the implementation and application of AI are still very much on the agenda for all. When it comes to the design, development and deployment of AI, it is imperative that ethical considerations are factored into the conversation right from the beginning.

Informa Tech’s Applied Intelligence Group got the chance to speak with Alley Lyles-Jenkins, Principal, Product Strategy & Experience Design at Slalom Consulting about this in more detail, ahead of her participation at Applied Intelligence Live! Austin, September 21-22.

Can you give a brief explanation or outline of what the ‘ethical debt’ concept in AI is and the topline implications we need to be aware of?

"Ethical debt" in AI is when ethical considerations are not considered during its design, development, and deployment, causing risks and challenges. Like technical debt, taking shortcuts during software development can cause problems later.

Organizations should consider three topline implications of AI ethical debt in the near term:

  • Regulation and compliance: As governments and regulatory bodies increasingly focus on AI ethics, organizations that accumulate ethical debt may face difficulties complying with developing rules, potentially leading to fines and legal consequences.
  • Mitigation strategies: Proactive ethical design, including diverse teams, robust testing, ongoing monitoring, and transparency, can help organizations manage and pay down this debt.
  • Public perception: Organizations accumulating ethical debt risk damaging their reputation and customer relationships. A lack of ethical foresight can result in negative media coverage and public backlash.

What do you think are the key ethical considerations or challenges when it comes to AI development and deployment?

When it comes to AI development and deployment, there are many critical ethical aspects to keep in mind. First, bias and fairness are prime examples of how technology moves faster than the law. AI can pick up biases from the data it's trained on, leading to unfair outcomes for specific groups, most notably minorities. For businesses, the continued use of biased models may result in reputational damage, legal and regulatory issues, customer alienation, and limited market adoption.

Second is the consideration of privacy and data protection. With AI needing large amounts of data to learn from, questions of ultimate accountability arise around respecting privacy rights and handling sensitive information. It's not always clear who's responsible when an AI makes a mistake or a bad call. Companies that rely heavily on data-driven business models, such as targeted advertising or personalized recommendations, may need to adapt their strategies to align with stricter privacy regulations.

A scary third factor is fake and manipulated content. Most notably, and ahead of the 2024 election cycle, the rise of deep fakes leaves people less able to debate based on facts. As AI keeps evolving, we must consider its long-term effects and potential unintended consequences – things that might not show up until way down the line.

It's an exciting field, but we must navigate these waters thoughtfully to ensure AI is a positive force for all.

What are the major consequences (if any) if companies ignore ethical considerations when implementing AI?

Ignoring ethical considerations in AI implementation can negatively affect a company's standing, relationships, financial stability, and long-term viability. Taking a proactive and responsible approach to ethical AI can help companies avoid potential pitfalls and build a positive reputation as ethical technology leaders.

What do you think the future of ethical AI looks like?

The future of AI is exciting! I see AI ingenuity that both excites and scares me every day. The future of ethical AI is a complex and nuanced landscape with both positive and negative potential outcomes, and the actions of society, governments, organizations, and individuals will determine AI's impact.

Still, the future brings optimism. I am optimistic about fair and inclusive AI, as ethical AI practices can lead to AI systems that are fair, unbiased, and inclusive. By addressing biases in data and algorithms, AI can reduce inequalities and promote diversity.

Also, there’s the promise of enhanced decision-making. Ethical AI could help individuals and organizations make better decisions by providing insights and recommendations based on unbiased data analysis. For example, industry experts foresee these capabilities leading to improved healthcare diagnoses and financial predictions.

Finally, humans and AI systems could collaborate harmoniously in an ethically conscious AI future. AI could assist humans in tasks that require data processing and analysis, allowing humans to focus on creativity, empathy, and complex decision-making.

What could the impact on society be if considerations are ignored or not properly implemented?

Ignoring or inadequately implementing ethical considerations in AI could have significant and far-reaching impacts on society.

The possibility of reinforcing inequalities keeps me up at night. AI systems not properly designed to account for bias and fairness can perpetuate and even amplify existing disparities in areas such as race, gender, and socioeconomic status. And neglecting human oversight and control over autonomous AI systems can lead to situations where critical decisions are made without human intervention, potentially causing unforeseen harm.

Neglecting ethical considerations in AI development and deployment can lead to a range of negative consequences that affect societal values, human rights, and the well-being of individuals. All stakeholders need to recognize the potential effects and work together to ensure that we integrate ethical considerations into every stage of AI development and deployment.


Alley Lyles-Jenkins is focused on widgets that change how people interact, experience technology and discover information. Her role at Slalom Consulting is not to design products; it's to help an organization win by creating a competitive advantage with Design. As a Principal at Slalom Consulting, she collaborates with talented people to increase product value by leveraging human-centered design and testing market fit.

Her award-winning efforts have contributed to many forward-thinking organizations: the City of New York under former Mayor Michael Bloomberg, Budweiser at Super Bowl LII, Amazon's Echo Dot launch, USAA's Chief Design Office, Dell Technologies, Columbia University's "1,000 Cut Journey" VR project for immersive storytelling, and advising FemTech start-ups at FemTech Focus.

You can join Alley at Applied Intelligence Live! Austin this September 21-22, where she will be part of a panel discussing ‘Ethical Considerations in AI Expansion: Allocating Responsibility for Sustainable Growth’. Secure your place now.
