UK alliance to build ‘federated learning’ hospital network

The new system, stakeholders say, will enable algorithms to travel from one hospital to another to train on local datasets, then provide each hospital with a blockchain-distributed ledger that captures and traces all data used for model training.

A consortium including NVIDIA, King's College London and AI platform provider Owkin has announced an initiative to connect hospitals across the UK using federated learning, a distributed training technique that builds machine learning models while protecting patient privacy.

According to NVIDIA, “The Owkin Connect platform running on NVIDIA Clara enables algorithms to travel from one hospital to another, training on local datasets. It provides each hospital a blockchain-distributed ledger that captures and traces all data used for model training.”

The project is initially connecting four of London’s premier teaching hospitals, offering AI services to accelerate work in areas such as cancer, heart failure and neurodegenerative disease, and will expand to at least 12 UK hospitals in 2020.

According to reports, the collaboration will gradually create a dataset spanning healthcare organizations throughout the UK, which could form a valuable reference point for research, while concurrently ensuring protection of patients’ privacy.

Clara FL is a reference application for distributed AI training designed to run on NVIDIA’s recently announced EGX intelligent edge computing platform. These systems can perform deep learning training locally at the “network edge,” where the data resides, without moving it.

Clara FL is also collaborative, which means multiple systems can work together at different locations to create more accurate, global models. Clara FL has already been put to use by radiologists at several top healthcare providers, including the American College of Radiology, King’s College London and UCLA Health.

As explained by NVIDIA, “federated learning decentralizes deep learning by removing the need to pool data into a single location. Instead, the model is trained in multiple iterations at different sites.”

For example, the company continues, say three hospitals decide to team up and build a model to help automatically analyze brain tumor images. If they choose to work with a client-server federated approach, a centralized server would maintain the global deep neural network and each participating hospital would be given a copy to train on its own dataset.

Once the model had been trained locally, the participants would send their updated version of the model back to the centralized server and keep their dataset within their own secure infrastructure. The central server would then aggregate the contributions from all of the participants, and “the updated parameters would then be shared with the participating institutes, so that they could continue local training.”
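The client-server workflow described above can be sketched in a few lines of Python. This is a toy simulation, not Clara FL code: the "model" is reduced to a plain parameter vector, the local training step is a made-up update rule, and function names such as `local_update` and `aggregate` are hypothetical. It only illustrates the key property that raw datasets stay at each hospital while model parameters travel.

```python
# Toy client-server federated averaging round: each "hospital" trains a
# copy of the global parameters on its private data, and only the
# updated parameters are sent back for aggregation.

def local_update(global_params, local_data, lr=0.1):
    """Hospital-side step: nudge each parameter toward the mean of the
    local dataset (a stand-in for real gradient-based training)."""
    target = sum(local_data) / len(local_data)
    return [p + lr * (target - p) for p in global_params]

def aggregate(updates, sizes):
    """Server-side step: average the returned parameter vectors,
    weighted by each participant's dataset size."""
    total = sum(sizes)
    return [
        sum(u[i] * s for u, s in zip(updates, sizes)) / total
        for i in range(len(updates[0]))
    ]

def federated_round(global_params, hospital_datasets):
    """One training iteration: distribute the global model, train
    locally, aggregate. Raw data never leaves a hospital."""
    updates = [local_update(list(global_params), d) for d in hospital_datasets]
    sizes = [len(d) for d in hospital_datasets]
    return aggregate(updates, sizes)

# Three hospitals with private datasets; only `params` is shared.
datasets = [[1.0, 2.0], [2.0, 4.0, 6.0], [3.0]]
params = [0.0, 0.0]
for _ in range(5):
    params = federated_round(params, datasets)
```

After each round the server redistributes the aggregated parameters, so all participants continue local training from the same improved global model, exactly as the quoted description says.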