Artificial Intelligence: How to approach the ethical cost of innovation

Head of Innovation, Christopher Pereira, takes a look at how and why we should ensure the use of AI in healthcare remains ethical as well as innovative.

Produced by: Christopher Pereira

The whole history of human identity rests on the idea that our choices define us: we are what we choose to do. No matter what your principles are, at some point in your life you’ve had to sit back and ask, “Should I really do this? Is it the right choice?”

Our choices have always been impactful, but in the age of information it’s hard to avoid considering the ripple effect. And with the misinformation so prominent in today’s digital world, it’s even harder to know whether you’re making the right choice.

We all have a social identity shaped by many different things, making it impossible to pin down any one set of beliefs that is universally correct. But, as with all things, we choose what we are and aren’t okay with. Whatever standard we are willing to hold ourselves to, none of us are strangers to an ethical dilemma.

Using the new wave of artificial intelligence and automation offerings, as with all advances in technology, comes with a similar dilemma: “Is it the right thing to do?” It’s a fair question when you consider that these systems can consume immense resources, replace people’s skills and operate with a level of autonomy that can severely impact day-to-day life.

The Responsible AI Framework  

Microsoft developed an initial Responsible AI standard[1] which approaches the use of AI systems within the context of: 

  • fairness 

  • reliability 

  • safety 

  • privacy  

  • security 

  • inclusiveness 

  • transparency 

  • accountability.   

This framework is a good starting point for tackling the ethical dilemma of whether we should be using AI and how.  

Fairness and inclusivity: Build your systems to be fair. The age-old argument against bias when dealing with data is an important consideration. The lens of societal exclusion that applies to every other aspect of society applies here too. We face it in academia, in research and in socio-economic disparity, so it’s no surprise that we face it in data processing systems as well. But the data is just the data until we decide what it should mean; that’s where the bias creeps in. Checks and balances for a fairer AI system go a long way in mitigating its impact.

Reliability and safety: Safety with AI systems must be consistent and reliable. We build our systems to operate within a secure design, plan for contingencies, consider potential impacts and resist harmful change. The Secure by Design[2] principles are a good way to improve security culture by making security everyone’s collective responsibility from the outset.  

Transparency: If an AI system helps inform people’s life decisions, they must have full disclosure on how those systems work and why. The right to informed consent is a principle that should be considered inviolable.   

Accountability: The people who design and deploy AI systems must be accountable for how their systems operate. AI systems should never be the final authority on any decision that affects people’s lives. Accountability may also mean ensuring that humans maintain meaningful control over otherwise highly autonomous AI systems.

Innovation hinges on our collective advancements, and the idea of AI systems disrupting our lives is unlikely to simply go away if we ignore it. As the innovators creating and using these systems, we cannot afford to turn a blind eye to the way they are developed. As the user of these systems, you can’t afford to, either.  

At Mednet, our innovation team is dedicated to using AI in a responsible and ethical way to support the health outcomes of patients across a range of therapy areas all around the world. By combining our innovative creativity with our patient-centric goals, we’re driving change across the pharma landscape and helping transform patients’ lives.

References:
 
1. Microsoft, 2024. Responsible AI in Azure Machine Learning. [online] Available at: https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai [Accessed 17 December 2024]. 

2. Government Security, 2024. Secure by Design Principles. [online] Available at: https://www.security.gov.uk/policy-and-guidance/secure-by-design/principles/ [Accessed 17 December 2024]. 

Seeking to elevate the way customers interact with your brand?
