How technology can support daily practice and why human presence with children and families remains irreplaceable.
Jan Romportl is a Czech AI researcher and philosopher with nearly 20 years of experience. After a distinguished academic career, including a PhD in AI, a master’s in cybernetics and philosophy, and leadership at the University of West Bohemia, he transitioned to industry as Chief Data Scientist at O2 Czech Republic. He later founded Dataclair.ai, the AI Center of Excellence for PPF/O2, and now offers independent consulting focused on AI strategy, ethics, safety, and societal impact. He remains actively engaged in AI alignment and safety communities and is known for his ethical insight and visionary thinking.
In September 2025, the EAPC Reference Group Children & Young People spoke with Jan about the role of AI in healthcare. We asked him a question many professionals have on their minds:
Jan, is there any way AI can support me as a healthcare professional?
AI will mostly help in areas related to curative care, where it will have tremendous impact. Palliative care will benefit wherever it uses the same tools as curative care, such as medication and pain management, and even the physical support of patients as robotics catches up with non-physical AI and its development accelerates. Another important field with direct impact is AI-powered training tools for palliative professionals (e.g., support for communication training).
However, there will also be important indirect effects. As AI transforms curative healthcare, the role of medical staff will shift from “technicians routinely fixing human machines” to “real caregivers and companions to human beings.” Much of the diagnosing and fixing will eventually be done by AI. So what will be the new role of many doctors? They will provide care and human guidance to patients. And they will have to learn this from palliative professionals, whose core job is exactly this. This transformation will open significant opportunities for palliative care to become more deeply integrated into the healthcare system.

You have already mentioned some areas, but are there any specific tools, platforms, or innovations being tested or used already?
Specifically for palliative care, I know of some excellent projects around communication training. The shared space with curative care is huge, with projects ranging from pharma to robotics. And there is one more field emerging specifically around palliative care—but this one is ethically tricky: digital immortality. Near-future AI tools will be very good at simulating people’s behaviour, including voice and visual appearance, based on the data generated by them when they were alive.
Do you mean that the digital traces we leave today might make us immortal in the future? That is both scary and promising.
Yes, exactly. Those digital traces will be the raw material: AI will be able to reconstruct a person’s behaviour, voice, and appearance from the data they generated while they were alive.
What are the main ethical concerns or limitations when using AI in palliative care in general, not only in the context of digital immortality?
Human presence is absolutely crucial and must remain, and even flourish, rather than be replaced by AI in the moments where “being human” makes the real difference. For example, “using ChatGPT to give hospice patients fake human care” is a very dangerous road. Digital immortality also requires serious ethical attention because it is closer than many people are willing to believe.
It seems that many dilemmas will arise soon. Can we do something to avoid losing our human touch?
AI can actually help caring professions to grow and flourish, because “human touch” and “being human” will, in the long run, be the only domains where humans can excel over machines. To put it in a simplified yet illustrative way: “When I’m telling you about my pain or joy and you are a person close to me, my mirror neurons must be able to mirror that your mirror neurons mirror my pain or joy.” If this loop is missing, we suffer emotionally. And this loop is created only in human-to-human connection, not human-to-AI. The main role of AI here is to free humans from many other tasks and give them space, energy, and time to fully care for one another.
That sounds promising and gives me hope that more people will work in caring professions, as other jobs will be done by AI.
That is indeed optimistic, and I see where you are going. AI can also be present with a patient whenever a human caregiver cannot. This 24/7 availability is valuable, as long as it does not replace human touch. We can already see parallels in psychotherapy: AI can lower the threshold for clients to find a human therapist, help bridge waiting times before appointments, and provide support between sessions.
You mentioned psychotherapy. Do you know of anything specifically relevant to children’s palliative care?
I am mostly aware of applications shared with curative care, such as:
- caregiver chatbots (guidance on symptom severity, next steps, post-discharge support, etc.),
- prototypes of Retrieval-Augmented Generation (RAG) AI agents for 24/7 on-demand information about medications, conditions, and care decisions (a minimal sketch of this pattern follows this answer),
- Natural Language Processing (NLP) techniques for documentation processing and quality measurement,
- AI assistants for family education and information access.
I have also seen indirect applications where AI supports administration, back-office processes, and even fundraising for children’s palliative care.
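To make the RAG idea mentioned above concrete: the pattern retrieves relevant snippets from a trusted knowledge base and then lets a language model answer using only those snippets, so the answer stays grounded in curated content. The sketch below is purely illustrative, with a toy in-memory knowledge base, keyword-overlap retrieval, and a placeholder generate() function standing in for the language-model call; none of the names or content comes from a real product.

```python
# Minimal, illustrative sketch of the Retrieval-Augmented Generation (RAG) pattern.
# Assumptions: a toy in-memory knowledge base and a placeholder generate() step
# standing in for a real language-model call; all content here is hypothetical.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

# Hypothetical, hand-written care-information snippets (not clinical guidance).
KNOWLEDGE_BASE = [
    Document("Medication timing", "Give the prescribed analgesic every 6 hours; do not double doses."),
    Document("Fever guidance", "Contact the care team if fever persists above 38.5 C for more than 24 hours."),
    Document("Feeding support", "Small, frequent feeds are usually better tolerated after discharge."),
]

def retrieve(query: str, k: int = 2) -> list:
    """Rank documents by simple word overlap with the query (toy retrieval step)."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set((doc.title + " " + doc.text).lower().split())), doc)
        for doc in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def generate(query: str, context: list) -> str:
    """Placeholder for the generation step: a real agent would pass the retrieved
    snippets to a language model; here we just compose a grounded answer stub."""
    if not context:
        return "No relevant guidance found; please contact the care team."
    sources = "; ".join(doc.title for doc in context)
    return f"Based on: {sources}. " + " ".join(doc.text for doc in context)

if __name__ == "__main__":
    question = "How often should the analgesic be given?"
    print(generate(question, retrieve(question)))
```

In a real deployment the retrieval step would use embeddings and a clinically curated, regularly reviewed knowledge base, and answers would always point families back to the care team; that curation is where most of the safety work lies.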
Jan, thank you for your time. Read more about Jan on his webpage.