

Applied Intelligence Live

04 Sep 2023

Creating a Robust Digital Identity Foundation for Responsible AI

Interview with Aditya Khurjekar, CEO & Founder of Medici Global, now focused on AI-enabled markets at Prove

With the rapid acceleration of AI, it's critical that businesses have a solid foundation in place to adopt and implement this technology in a positive and responsible way.

Whilst the power of AI seems to be understood by most, it is important that advances in AI serve the interests of wider society.

Informa Tech’s Applied Intelligence Group caught up with Aditya Khurjekar, CEO & Founder of Medici Global, ahead of this year’s Applied Intelligence Live! Austin (previously the AI Summit Austin and IoT World Expo and Conference), taking place 20-21 September, to dive into the digital identity of AI and how we can ensure it remains responsible.

Can you tell us about your background?

My last two ventures were focused on the FinTech industry, and through our work at Money20/20 and Medici (now part of Prove, the global leader in identity & authentication), we were able to shape the contours of a new and impactful industry across the globe. FinTech went from a niche innovative sector to a mass-market enabler of new experiences over the course of a decade.

Similarly, as AI now seeks to enter the mainstream narrative with overwhelming momentum, it is important to lay a robust foundation for its adoption across all sectors of the economy, so that we can maximize the equitable and positive impact of this powerful technology.

What is digital identity in responsible AI? And why is this seen as an increasingly critical component of responsible AI?

The rapid and ubiquitous proliferation of generative Artificial Intelligence is changing the nature of the internet in a fundamental way. Generative AI enabled content is flooding our social and news feeds, our shopping recommendations, our entertainment reels, our business communications, and also our academic literature! 

While this content has the potential to inform, educate and entertain, it also has the ability to misinform, perpetuate falsehoods, influence with destructive intention, and cause real human suffering through erroneous medical and wellness recommendations. The assumption of responsibility in human generated content cannot be taken for granted when it is purely AI-generated.

This is where identity plays a central role in ascertaining that the output of AI is still responsible, and those consuming it are able to verify the human authorship of the content that is produced. 
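The idea above, that consumers should be able to verify the human authorship of content, can be sketched in code. The sketch below is a hypothetical illustration, not Prove's actual system: real digital-identity schemes bind public-key signatures to a verified identity, but here an HMAC over a content hash stands in for the signature so the example runs on the Python standard library alone. The `attest`/`verify` function names and the creator key are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical sketch: a creator attests to authorship by keying a MAC
# over a hash of their content. Anyone holding the creator's key can
# later check that the content is unmodified and genuinely attributed.
# (A production system would use asymmetric signatures instead of a
# shared secret, so verification needs no secret material.)

def attest(content: str, creator_key: bytes) -> str:
    """Return an authorship tag binding this content to the creator's key."""
    digest = hashlib.sha256(content.encode()).digest()
    return hmac.new(creator_key, digest, hashlib.sha256).hexdigest()

def verify(content: str, creator_key: bytes, tag: str) -> bool:
    """Check that the tag matches both the content and the creator's key."""
    return hmac.compare_digest(attest(content, creator_key), tag)

key = b"creator-secret"
tag = attest("An article written by a human.", key)
assert verify("An article written by a human.", key, tag)
assert not verify("Tampered or AI-substituted text.", key, tag)
```

Any edit to the content, or any attempt to pass off content under a different creator's key, makes verification fail, which is the property authorship verification relies on.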

Why, in your opinion, could the fear of AI be either overblown or underestimated?

Unlike prior tech innovations, the evolution of AI is intrinsically nondeterministic. In fact, if it were able to be fully controlled by the humans who design it, its intelligence would not be "artificial". We are still a ways away from truly autonomous intelligent beings, aka Artificial General Intelligence (AGI), so it is premature to speculate exactly what to expect.

We seem to be in a hype cycle for AI, and understandably so just as with any other major transformative tech, but we still don't know if AI will take over and rule the human species or end up as merely yet another productivity tool that every human benefits from. Along the way, there is clearly a plethora of AI-enabled advancements that we can put to good use, so it's important not to be distracted by the terminal state speculations.

Advances in AI need to serve the interests of society at large. What can companies or those implementing AI at a broader scale do to ensure AI remains responsible?

There are many facets of responsibility in the design and use of AI: the selection of, and transparency in the use of, training data; the weighting of models; respect for the preferences of the sources of that data; and guardrails against uncontrolled dissemination of AI-generated output. There will also be legal, business, and broader economic considerations as AI is deployed and utilized, but the moral and ethical hazards should be of paramount concern as we seek to make AI responsible.

My primary focus at Prove is to address the new AI-enabled use cases in various industries, and contribute towards making AI responsible via our digital identity and authentication capabilities.

What are the tools needed to support responsible AI, and what benefits will implementing these tools provide?

There are a variety of initiatives originating from AI companies, infrastructure & security providers, content platforms and even regulators, to address the needs of responsible AI. I expect a combination of industry led protocols and best practices, as well as legally mandated requirements that will be enforced by the courts and the government, spanning both the core models as well as the applications of AI.

We are specifically building tools to solve two main issues in the use of AI, related to individuals' self-sovereign control over their identity and their creations:

- Managing and enforcing authors' consent in the use of training content by AI models

- Value attribution and accountability for the human creators of content
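The first of these two tools, enforcing authors' consent before their work is used to train models, can be sketched as a simple opt-in filter over a training corpus. Everything below is a hypothetical illustration under assumed names (`Work`, `CONSENT_REGISTRY`, `filter_training_set`), not Prove's actual API; the key design choice shown is that unknown authors default to excluded rather than included.

```python
from dataclasses import dataclass

@dataclass
class Work:
    author_id: str
    text: str

# Illustrative consent registry: author id -> has the author opted in
# to their content being used for model training?
CONSENT_REGISTRY = {
    "author-1": True,
    "author-2": False,
}

def filter_training_set(works: list[Work]) -> list[Work]:
    """Keep only works whose authors opted in; absent authors are excluded."""
    return [w for w in works if CONSENT_REGISTRY.get(w.author_id, False)]

corpus = [
    Work("author-1", "Opted-in essay."),
    Work("author-2", "Opted-out essay."),
    Work("author-3", "Unknown author, excluded by default."),
]
allowed = filter_training_set(corpus)
```

Here `allowed` retains only author-1's work; the opt-out and the unregistered author are both excluded, mirroring a consent-first default.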


As well as CEO and founder of Medici, Aditya also co-founded Money20/20, the world’s largest FinTech conference. His career spans more than 25 years, from semiconductors to leadership positions in enterprise software and mobile devices, including roles at Lucent Technologies Bell Labs and CSG Systems, and a stint as a founding portfolio advisor at Blume Ventures from 2010-2015. This knowledge and experience enables him to advise and invest in global initiatives across the tech landscape.

With a passion to solve real-world problems and make a difference, Aditya will be discussing the role of digital identity in responsible AI at Applied Intelligence Live! Austin on 20 September.
