How Cloud AI Infrastructure Enables Advances in Radiation Therapy at Elekta

Brought to you by Microsoft + NVIDIA

Despite a number of challenges, some of the most successful examples of putting innovative AI applications into production have come from healthcare. In this VB Spotlight event, learn how organizations in any industry can follow proven practices and leverage cloud-based AI infrastructure to accelerate their AI efforts.

Sign up to watch free, on demand.

From pilot to production, AI is a challenge for all industries. But as a highly regulated, high-risk sector, healthcare faces especially complex obstacles. Cloud-based infrastructure that is “purpose-built” and optimized for AI has become a key foundation for innovation and deployment. By leveraging the flexibility of cloud and high-performance computing (HPC), companies in all industries are successfully expanding proofs of concept (PoCs) and pilots into production workloads.

VB Spotlight brought together Silvain Beriault, AI Strategy Leader and Principal Research Scientist at Elekta, one of the world’s leading innovators of precision radiation therapy systems for cancer treatment, and John K. Lee, Senior Platform and Infrastructure Leader for AI on Microsoft Azure. They joined VB Consulting Analyst Joe Maglitta to discuss how cloud-based AI infrastructure has fueled better collaboration and innovation for Elekta’s worldwide R&D efforts aimed at improving and expanding the company’s brain imaging and MR-guided radiation therapy.

The three big benefits

Elasticity, flexibility and simplicity top the list of benefits of end-to-end, on-demand, cloud-based infrastructure as a service (IaaS) for AI, according to Lee.

Because enterprise AI typically starts with a proof of concept, Lee says, “the cloud is a perfect place to start. You can start with just one credit card. As models become more complex and the need for additional computing capacity increases, the cloud is the perfect place to scale that work.” That includes scaling up, increasing the number of GPUs interconnected within a single host to boost server capacity, and scaling out, increasing the number of host instances to raise overall system performance.

The flexibility of the cloud allows organizations to manage workloads of any size, from huge enterprise projects to smaller efforts that require less processing power. For any endeavor, purpose-built cloud infrastructure services offer much faster time-to-value and better TCO and ROI than building an on-premises AI architecture from scratch, Lee explains.

When it comes to simplicity, Lee says that pre-tested, integrated, and pre-optimized hardware and software stacks, platforms, development environments, and tools make it easy for businesses to start.

COVID accelerates Elekta’s cloud-based AI journey

Elekta is a medical technology company that develops image-guided clinical solutions for the management of brain disorders and improved cancer care. When the COVID pandemic forced researchers out of their labs, company leaders saw an opportunity to accelerate and expand efforts, begun a few years earlier, to move AI R&D to the cloud.

The division’s head of AI knew that a more robust and accessible cloud-based architecture to enhance its suite of AI-powered solutions would help Elekta advance its mission to increase access to healthcare, including in underserved countries.

On the cost side, Elekta also knew it would be difficult to estimate its current and future high-performance computing needs. The team weighed the expense and limitations of maintaining local infrastructure for AI; the overall cost and complexity extend well beyond buying GPUs and servers, Beriault says.

“Trying to do that yourself can get difficult pretty quickly. With a framework like Azure and Azure ML, you get much more than access to the GPUs,” he explains. “You get a complete ecosystem to do AI experiments, document your AI experiments, share data between different R&D centers. You have a common ML operations tool.”

The pilot was simple: automate the contouring of organs on MRI images to speed up the task of delineating both the treatment target and the organs at risk that must be spared radiation exposure.

The ability to scale up and down was crucial to the project. In the past, “there were times when we would run up to 10 training experiments in parallel to do some hyperparameter tuning of our model,” Beriault recalls. “Other times, we just waited for the data selection to be ready, so we didn’t train at all. This flexibility was very important to us as we were quite a small team at the time.”

Since the company was already using the Azure framework, the team turned to Azure ML for its infrastructure, as well as for crucial support as researchers learned to use the platform portal and APIs to start launching jobs in the cloud. Microsoft worked with the team to build a data infrastructure specific to Elekta’s domain and addressed crucial data privacy and security issues.

“As of today, we have expanded auto-contouring, all using cloud-based systems,” Beriault says. “Using this infrastructure has allowed us to expand our research activities to more than 100 organs for multiple tumor sites. Additionally, scaling has allowed us to expand into other, more complex AI investigations in RT beyond simple segmentation, increasing the potential to positively impact patient treatments in the future.”

Choosing the right infrastructure partner

In the end, Beriault says that adopting a cloud-based architecture allows researchers to focus on their work and develop the best possible AI models instead of building and “taking care of” the AI infrastructure.

Choosing a partner that can provide that kind of service is crucial, Lee says. A strong vendor must bring strategic partnerships that keep its products and services on the cutting edge. He says Microsoft’s collaboration with NVIDIA to develop the foundation for enterprise AI would be critical for clients like Elekta. But there are other considerations, he adds.

“You should remind yourself that it’s not just about the product offerings or the infrastructure. Do they have the whole ecosystem? Do they have the community? Do they have the right people to help you?”

Sign up to view on demand now!

You’ll learn:
  • First-hand experience and advice on the best ways to accelerate the development, testing, deployment and operation of AI models and services
  • The critical role AI infrastructure plays in moving from POCs and pilots to production workloads and applications
  • How a cloud-based “AI-first approach” and proven, front-line best practices can help your organization, regardless of industry, scale AI faster and more effectively across departments or around the world

Presenters:
  • Silvain Beriault, AI Strategy Leader and Principal Research Scientist, Elekta
  • John K. Lee, Principal Director of AI Platform and Infrastructure, Microsoft Azure
  • Joe Maglitta, host and moderator, VentureBeat

