Hey! My name is Felix and I am currently a Principal AI Scientist/Engineer within Surgical Robotics at Medtronic.
I am currently researching machine learning and computer vision methods for visual reasoning in surgical robotics. In particular, I focus on developing novel transformer architectures for multi-task learning, and I also investigate spatiotemporal models and semi-supervised training. I have helped develop novel models powering real-time surgical video processing algorithms and have written production-ready code for training, validating, deploying, and exporting our models.
Previously, I was at Babylon Health, where I developed methods for learning patient representations from health records and researched modular and disentangled representation learning.
Before Babylon Health, I was a post-doc at UCL working with M. Jorge Cardoso, where I developed fundamental methods in deep learning, including multiple learning schemes for multi-task learning, with oral presentations at MICCAI 2018 and ICCV 2019.
I was initially a PhD student under the supervision of Prof. David Hawkes at CMIC, UCL, and Prof. John Hurst at the Royal Free Hospital. I developed numerous algorithms for the quantitative analysis of Chronic Obstructive Pulmonary Disease (COPD) from three-dimensional computed tomography (CT) scans using supervised and unsupervised learning methods.
PhD in Medical Computer Vision, 2017
University College London
MSc in Biomedical Engineering, 2012
University of Oxford
BEng in Mechanical Engineering, 2011
University College London
For a complete list, please visit my Google Scholar page.
In this paper, we present a probabilistic approach to learning task-specific and shared representations in CNNs for multi-task learning. We propose stochastic filter groups (SFG), a mechanism to assign convolution kernels in each layer to specialist or generalist groups, which are specific to or shared across different tasks, respectively. The SFG modules determine the connectivity between layers and the structures of task-specific and shared representations in the network. We employ variational inference to learn the posterior distribution over the possible grouping of kernels and network parameters.
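The core idea of stochastic filter groups can be sketched as sampling, for each convolution kernel, an assignment to a task-1-specific, shared, or task-2-specific group from a learned categorical posterior, and then routing each task only through its own and the shared kernels. The following is a minimal NumPy sketch of that routing idea under assumed toy shapes; the function and variable names are hypothetical and this is not the paper's actual variational implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_filter_groups(logits, rng):
    """Sample a group assignment for each convolution kernel from a
    categorical posterior over {0: task-1-specific, 1: shared,
    2: task-2-specific}. `logits` has shape (n_kernels, 3)."""
    # softmax over the group dimension to get posterior probabilities
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # one categorical draw per kernel
    return np.array([rng.choice(3, p=p) for p in probs])

# toy posterior logits for 8 kernels over the 3 groups (hypothetical)
logits = rng.normal(size=(8, 3))
groups = sample_filter_groups(logits, rng)

# route kernels: task 1 sees its specialists plus the shared group,
# task 2 likewise, so the grouping defines the layer connectivity
task1_kernels = np.flatnonzero(groups != 2)
task2_kernels = np.flatnonzero(groups != 0)
```

In the actual SFG module the assignment probabilities are learned jointly with the network weights via variational inference; the sketch above only illustrates how a sampled grouping induces task-specific and shared connectivity.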
We identified subtypes of patients with COPD with distinct longitudinal progression patterns using a novel machine-learning tool called Subtype and Stage Inference (SuStaIn) and evaluated the potential of unsupervised disease progression modelling for patient stratification in COPD. We demonstrated two distinct patterns of disease progression in COPD using SuStaIn, likely representing different endotypes. One third of healthy smokers have detectable imaging changes, suggesting a new biomarker of early COPD.
We propose a probabilistic multi-task network that estimates: 1) intrinsic uncertainty through a heteroscedastic noise model for spatially-adaptive task loss weighting and 2) parameter uncertainty through approximate Bayesian inference. This allows sampling of multiple segmentations and synCTs that share their network representation.
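The heteroscedastic noise model behind the spatially adaptive task weighting can be illustrated as a per-pixel loss in which each task's error is down-weighted by a predicted log-variance, with an additive log-variance penalty to keep the predicted uncertainty from growing unboundedly. This is a minimal sketch of that weighting scheme under hypothetical variable names (a segmentation negative log-likelihood map and a synCT squared-error map), not the paper's exact implementation.

```python
import numpy as np

def heteroscedastic_multitask_loss(seg_nll, synct_sq_err,
                                   log_var_seg, log_var_ct):
    """Combine two per-pixel task losses with spatially adaptive,
    uncertainty-based weighting. Pixels predicted as high-variance
    (noisy) contribute less; the +log_var terms penalise claiming
    high uncertainty everywhere."""
    seg_term = np.exp(-log_var_seg) * seg_nll + log_var_seg
    ct_term = 0.5 * (np.exp(-log_var_ct) * synct_sq_err + log_var_ct)
    return (seg_term + ct_term).mean()
```

With all predicted log-variances at zero the expression reduces to the ordinary unweighted sum of the two task losses, which makes the effect of the learned weighting easy to check.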
A fully automated, unsupervised lobe segmentation algorithm is presented based on a probabilistic segmentation of the fissures and the simultaneous construction of a population model of the fissures. A two-class probabilistic segmentation segments the lung into candidate fissure voxels and the surrounding parenchyma. This was combined with anatomical information and a groupwise fissure prior to drive non-parametric surface fitting to obtain the final segmentation.
We present local disease and deformation distributions to address this limitation. The disease distribution aims to quantify two aspects of parenchymal damage: locally diffuse/dense disease and global homogeneity/heterogeneity. The deformation distribution links parenchymal damage to local volume change. These distributions are exploited to quantify inter-patient differences. We used manifold learning to model variations of these distributions and applied manifold fusion to combine distinct aspects of COPD into a single model.
Developing machine learning models for processing surgical videos for AI-assisted surgery. Responsibilities include:
Developing machine learning models to help build products for AI-driven health care. Responsibilities include:
Deep learning and multi-task learning algorithms for computer vision and medical image computing.