Hello, HealthTech community of marketers, product innovators, academicians, and researchers. Welcome to the next edition of the HealthTech Top Voice interview series with Prof. Aldo Faisal, Professor of AI & Neuroscience at Imperial College London. Ahead of Imperial Global USA’s AI for Healthcare Event, we sat down with Professor Aldo to discuss the role of new-age technologies and the inauguration of Imperial Global USA, a new hub focused on building transatlantic partnerships in emerging technologies and scientific discovery.
The latest HealthTech Top Voice Interview features an enlightening discussion with Aldo Faisal, a leading expert in AI and Neuroscience, and Professor at Imperial College London. In this engaging conversation, Professor Faisal, who is also the Founding Director of the UKRI Centre for Doctoral Training in AI for Healthcare (AI4Health), shares his journey from computational neuroscience to the impactful field of digital health.
Professor Faisal dives deep into the intersection of machine learning, human learning, and AI in healthcare, explaining how his research has shaped innovations in the medical sector. He discusses the significance of “sovereign AI” in healthcare and contrasts it with other frameworks like Ethical, Responsible, and Trustworthy AI. His insights on how AI can transform patient care and clinical decision-making through initiatives like Nightingale AI are particularly thought-provoking.
Join Prof. Aldo Faisal as he explores the future of AI in healthcare, the potential for sovereign AI models, and the pivotal role of collaborative innovation in advancing medical technologies.
HealthTech Insights: Hi, Professor Aldo. Tell us about your journey in AI and Neuroscience. How did your research trajectory lead you into the digital health space?
Prof. Aldo: I came into the digital health space from the intersection of machine learning and human learning, an area called computational neuroscience. It’s where we try to understand the algorithms the human brain uses and use that as inspiration to build better AI methods. I started in the domain of neuroscience, which naturally led to applications in neurology. In my quest to find more and better data, I explored other areas of medicine like intensive care, pediatrics, cardiovascular medicine, and ultimately public health, where we began to see the real impact we could have.
How do you define “sovereign AI” for health data training models? How does this compare to Ethical AI, Responsible AI, and Trustworthy AI in your research areas?
Prof. Aldo: Sovereign AI, particularly in healthcare, refers to the ability to design, develop, and operate AI systems independently, without relying on foreign infrastructure, technologies, or data. Modern AI systems are highly complex and often depend on components sourced from different parts of the world, such as compute infrastructure, datasets, software platforms, and skilled personnel. Sovereign AI means reducing or removing those dependencies so that critical systems, like those in healthcare, can function irrespective of international political or economic shifts.
This approach is distinct from, but complementary to, Ethical AI, Responsible AI, and Trustworthy AI. While those frameworks focus on fairness, transparency, bias mitigation, and governance, Sovereign AI introduces the dimensions of resilience and strategic autonomy. In a healthcare context, where lives are at stake, this is essential. It is not just about whether an AI system is fair or interpretable; it is about ensuring that it can be maintained and operated consistently to give people the care they need when they need it, regardless of the global environment.
Our approach in projects such as Nightingale AI is to encourage a federation of sovereign AIs, where countries and organizations collaborate, but without creating critical dependencies.
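To make the federation idea concrete, here is a minimal federated-averaging sketch (a generic illustration of the technique, not Nightingale AI's actual implementation; all names and data here are hypothetical): each site trains on its own data and shares only model parameters, so raw patient records never leave the sovereign boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One participant refines the shared model on its own data.
    Only the resulting weights leave the site -- raw records never do."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Aggregate site models, weighting each by its local dataset size."""
    return np.average(local_weights, axis=0, weights=np.asarray(sizes, float))

# Two hypothetical 'sovereign' sites with private data from the same process
true_w = np.array([2.0, -1.0])
X_a, X_b = rng.normal(size=(100, 2)), rng.normal(size=(300, 2))
y_a, y_b = X_a @ true_w, X_b @ true_w

global_w = np.zeros(2)
for _ in range(10):  # federation rounds
    w_a = local_update(global_w, X_a, y_a)
    w_b = local_update(global_w, X_b, y_b)
    global_w = federated_average([w_a, w_b], sizes=[100, 300])

print(np.round(global_w, 2))  # converges toward [2, -1] without pooling any data
```

The design choice mirrors the federation principle above: collaboration happens at the level of aggregated parameters, while each participant retains full control of its own data.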
In what ways does leveraging NHS data to train independent AI models provide a more effective and contextually relevant foundation for clinical decision support? How does this compare, technically and ethically, to pursuing a fully sovereign AI infrastructure?
Prof. Aldo: The NHS provides a uniquely comprehensive and high-quality dataset for AI development because it is a publicly funded healthcare system that serves the entire population. This means the data reflects the complete healthcare journeys of individuals representing the full spectrum of the population. This covers primary care through to hospital treatment, clinical trial participation, and beyond: every interaction from cradle to grave.
Training AI models on NHS data allows us to develop systems that are both clinically relevant and contextually appropriate. This public healthcare system serves approximately 65 million individuals, and its scale is unparalleled. This kind of dataset is unmatched in its depth and breadth, and it is continuously improving as more NHS systems become digitized.
Technically, this gives us a far better foundation for developing decision-support tools that can be tailored to the needs of each section of the medical community. Ethically, it ensures the data remains under public stewardship, with transparency and alignment with public values. And, when combined with sovereign compute infrastructure, such as the Isambard-AI supercomputing facility in Bristol, we can create fully sovereign healthcare models. But again, the goal is not to isolate; it is to ensure that we can act independently when necessary while still being open and encouraging international collaborations.
Nightingale AI is being hailed as a strategic breakthrough for the NHS – what motivated its development, and how does it differ from commercial large language models?
Prof. Aldo: Commercial large language models are impressive in their general capabilities, but they are not tailored to the needs of the medical community. They are trained on heterogeneous text data, often drawn from open web sources, and are not optimized for clinical safety, regulatory compliance, or the ethical standards expected in a public health setting.
Nightingale AI is fundamentally different because it is designed to leverage vast repositories of electronic patient records, biomedical data, and published medical literature to develop an advanced health-focused AI model.
It is also multimodal, which means it will be able to read X-rays, electrocardiograms, genetic data, electronic health records, doctors’ letters, data from wearable tech, clinical trial results, and every other type of health data you can think of. All of this is made possible because it is set to be trained on NHS data, which is comprehensive, high-quality, and representative of our diverse UK population.
It is also being developed in collaboration with both academic and clinical partners, ensuring that domain expertise and insight into how humans in the medical community make decisions, such as how doctors make and revise treatment plans, inform every stage of development. Importantly, it is supported by sovereign compute infrastructure, including significant government investment in AI capabilities such as the Isambard-AI facility.
Nightingale AI is designed to be a safe, effective, and resilient AI-powered tool with the potential to enhance medical research, streamline clinical decision-making, and facilitate drug discovery, all while reducing the time and cost burden on health professionals and administration.
You are speaking at the upcoming event with experts from Stanford, UCSF, Sanofi, and Imperial College joining forces. What unique conversations do you anticipate emerging around the intersection of healthcare, academia, and AI?
Prof. Aldo: This event brings together Nigam Shah, Professor of Medicine at Stanford University and Chief Data Scientist at Stanford Health Care; Professor Marina Sirota, Professor and Acting Director at the Bakar Computational Health Sciences Institute UC San Francisco; and Jared Josleyn, VP, Global Head of Digital Healthcare at Sanofi.
Stanford and UCSF are internationally recognized for their work in clinical AI applications, and Sanofi is one of the most forward-thinking pharmaceutical companies in the digital health space. The conversation will focus on how we can collaborate internationally to advance the responsible development and deployment of AI in healthcare, and how different healthcare systems can complement one another to accelerate progress in AI-enabled medicine.
We also want to highlight the distinctive strengths of the UK healthcare system.
The NHS, combined with our academic and clinical ecosystem at Imperial, creates a uniquely integrated environment for developing and testing digital health solutions. We have the infrastructure to serve as a proving ground for medical AI – whether through initiatives like Nightingale AI or our broader network of digital health test beds.
My Imperial colleague Prof. Anthony Gordon will provide a clinical perspective, exploring how to bridge academic innovation and real-world implementation and sharing firsthand insights into how these models can transform patient care.
Nightingale AI will be a highlight of this event. How do you think this model changes the global conversation about what sovereign AI in healthcare should look like?
Prof. Aldo: Rather than framing this solely in terms of sovereignty, I believe the more significant shift is toward what we call agentic AI. This refers to AI systems that go beyond data analysis or prediction and can take meaningful action, in other words, ‘AI with agency’. It is not just about interpreting information but about acting on it in real-world clinical contexts.
At Imperial, we have been pioneering this approach through the AI Clinician programme, which I co-lead with my colleague Prof. Anthony Gordon. This is one of the few AI systems globally that has progressed through clinical trials and is currently being deployed in four hospitals across London to assist with the treatment of patients in intensive care. This marks a fundamental shift in how AI is used in medicine: not just as a diagnostic or decision-support tool, but as a system that actively participates in patient care.
With Nightingale AI, we are building on this foundation and our vision is to expand this agentic approach across all branches of medicine. The convergence of agentic AI and sovereign infrastructure enables us to design systems that are both actionable and resilient while maintaining public trust and regulatory alignment.
If self-driving cars in San Francisco are the global benchmark for agency in transport, then our work in London represents the equivalent benchmark for AI in medicine!
With many nations exploring sovereign AI initiatives, what are the most common misconceptions about what AI sovereignty entails?
Prof. Aldo: A common misconception is that AI sovereignty is simply about hosting compute or data centres within national borders. While infrastructure is one part of the equation, it does not capture the full scope or intent of what sovereign AI requires. True sovereignty in AI is about establishing control and independence across the entire AI lifecycle.
In our work, we define four key dimensions of AI sovereignty. The first is data sovereignty, which means having authority over how data is sourced, governed, and used in AI systems.
The second is model sovereignty, which ensures that models are developed in alignment with national objectives, that their parameters are transparent, and that their behavior is predictable and safe.
Third is update sovereignty: the ability to control how and when models are updated or adapted. This is essential to maintain performance and safety over time without becoming dependent on external providers.
Finally, governance sovereignty ensures clear oversight over how the system is used, who is accountable, and how ethical and regulatory standards are upheld.
Focusing only on infrastructure misses these critical layers. Sovereign AI is not just about localization; it is about achieving resilience, transparency, and strategic alignment in how AI systems are developed, deployed, and maintained.
For further details, these principles are outlined in our White Paper which you can read at sovereign-ai.org.
Why do you believe full AI sovereignty is impractical — and possibly unnecessary — for most countries?
Prof. Aldo: Full AI sovereignty is impractical and unnecessary for most countries because it is about far more than local data centres or storage. True sovereignty covers data, model, inference, skills, and governance.
While local control over data and compute is important, global collaboration is essential for innovation and advancement. AI systems benefit from a federated approach where nations collaborate but maintain control over critical aspects, like healthcare models, without isolating themselves.
Complete self-sufficiency in AI is unrealistic, as most countries can’t build fully independent systems. Instead, nations should focus on ensuring sovereignty in key areas while remaining open to international cooperation. This allows for local control and global collaboration, leveraging shared expertise and technology, for maximum benefit globally.
How do you ensure that health-specific large models maintain both patient safety and clinical utility in real-world deployment?
Prof. Aldo: The key to this is the continuous integration of operational data. Unlike traditional settings, where data is extracted periodically and models are trained based on static datasets, our approach integrates live operational data from healthcare systems. This means that as new data comes in, such as new diseases, emerging patterns, or even real-time events like the COVID-19 pandemic, the model automatically adapts and updates itself.
This constant flow of fresh data ensures that the model remains current and relevant to the realities of clinical practice, providing clinicians with more accurate, timely insights. Since the model is trained on operational data, it can respond more quickly to changes in patient populations or disease patterns, which is a significant advantage over models that are only trained periodically.
Patient safety is always maintained because the model evolves alongside the changing healthcare landscape. By continuously updating with real-time operational data, the system can remain aligned with the latest clinical realities, improving both patient outcomes and the model’s clinical utility in everyday practice.
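As a toy sketch of this continuous-learning idea (a generic illustration on assumed, synthetic data, not the actual Nightingale AI pipeline), an online logistic-regression model can update on each new batch of operational data as it arrives, tracking a drifting clinical relationship instead of going stale between periodic retrains:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def daily_batch(shift, n=200):
    """Hypothetical stream of operational data: each batch is one day's
    new records (two features, binary outcome). The underlying decision
    rule drifts over time, standing in for changing disease patterns."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(float)
    return X, y

w = np.zeros(2)
lr = 0.5

# Online updating: the model ingests each new batch as it arrives,
# rather than being retrained periodically on a static snapshot.
for day in range(60):
    X, y = daily_batch(shift=day / 60)
    grad = X.T @ (sigmoid(X @ w) - y) / len(y)  # logistic-loss gradient
    w -= lr * grad

# Evaluate against the latest regime the stream has drifted into
X_eval, y_eval = daily_batch(shift=1.0, n=1000)
acc = ((sigmoid(X_eval @ w) > 0.5) == (y_eval > 0.5)).mean()
print(f"accuracy on latest regime: {acc:.2f}")
```

Because recent batches dominate the most recent gradient steps, the model stays aligned with the current data-generating regime, which is the essence of the operational-data argument above.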
Beyond healthcare, what potential do you see for Nightingale AI’s decision-making architecture to impact sectors like aerospace or energy?
Prof. Aldo: Nightingale AI is designed to analyze multimodal data and reason about outcomes, which means it can be applied to any sector that involves complex decision-making processes.
The fundamental strength of Nightingale AI lies in its ability to understand and predict outcomes, as well as steer decisions towards desirable results. This means the principle could be applied to any industry that requires intelligent decision-making based on a range of inputs, whether those inputs are data from operations, historical trends, or real-time events.
The key differentiator of Nightingale AI is its human-in-the-loop design. This means the system is built to work alongside human decision-makers, factoring in how humans reason and interact with technology. This is crucial in fields where collaboration between AI systems and humans is needed to drive successful outcomes.
For governments or organizations beginning to invest in national AI infrastructure, what’s the single most strategic question they should ask before starting?
Prof. Aldo: The single most strategic question they should ask is: “How do we ensure sovereignty across the entire AI system?”
This involves thinking beyond just the technology and focusing on the essential layers that enable long-term sustainability. The five critical aspects to consider are:
- Data Sovereignty: Where is the data sourced from, and how can access to it be controlled and retained?
- Model Sovereignty: How can you ensure you have control over the design and updates of AI models?
- Inference Sovereignty: How will the AI’s decisions and outputs be managed and validated so that they meet national needs?
- Skills Sovereignty: Do you have the necessary expertise and skilled personnel to build and maintain these systems?
- Governance Sovereignty: How will you ensure transparency, accountability, and ethical oversight throughout the AI lifecycle?
Data and compute are critical starting points, but it’s essential to approach AI infrastructure with a holistic view that covers these foundational areas of sovereignty.
Tag a person in the industry whose answers you would like to see in the HealthTech Top Voice interview series:
Prof. Aldo: All our panellists have fascinating insights to share:
Professor Nigam Shah, Professor of Medicine at Stanford University and Chief Data Scientist, Stanford Health Care
Professor Marina Sirota, Professor and Acting Director at the Bakar Computational Health Sciences Institute, UC San Francisco
Jared Josleyn, VP, Global Head of Digital Healthcare, Sanofi
For the latest news, events, and campaigns from Imperial Global USA, check out LinkedIn
About Prof. Aldo Faisal
Professor Aldo Faisal is Professor of AI & Neuroscience in the Dept. of Computing and the Dept. of Bioengineering at Imperial College London. He was awarded a prestigious UKRI Turing AI Fellowship (£2 million, including industry partners). Aldo is the Founding Director of the £20 million UKRI Centre for Doctoral Training in AI for Healthcare, which aims to transform AI for Healthcare research and train 100 PhD and Clinical PhD Fellows. He also holds a Chair in Digital Health at the University of Bayreuth (Germany).
At his two departments, Aldo leads the Brain & Behaviour Lab focussing on AI & Neuroscience and the Behaviour Analytics Lab at the Data Science Institute. He is Associate Investigator at the MRC London Institute of Medical Sciences and is affiliated faculty at the Gatsby Computational Neuroscience Unit (University College London).
He was the first elected Speaker of the Cross-Faculty Network in Artificial Intelligence, representing AI in College on behalf of over 200 academic members. Aldo serves as an Associate Editor for Nature Scientific Data and PLOS Computational Biology and has acted as conference, program, and area chair at key conferences in the field (e.g. Neurotechnix, KDD, NIPS, IEEE BSN). In 2016 he was elected to the Global Futures Council of the World Economic Forum.
Read more about Prof. Aldo Faisal’s journey here.
About Imperial Global USA

We are Imperial – a world-leading university for science, technology, engineering, medicine, and business (STEMB), where scientific imagination leads to world-changing impact.
Imperial Global is a network of dynamic hubs in strategic regions to support high-impact research, education, innovation, recruitment, and student experience opportunities. Our first hubs are in Singapore, Ghana, the USA, and India.
As a global top ten university (2nd in the world – QS World University Rankings 2025), we use science to try to understand more of the universe and improve the lives of more people in it.
Follow Imperial Global USA on LinkedIn.