Confocal Laser Endomicroscopy (CLE) is an emerging technique that visualises cellular structures during surgery. The interpretation of CLE data for tissue characterisation during brain tumour resection is challenging even among experts and can lead to considerable inter-observer variability.
Various deep learning models have been developed to support the interpretation of such cellular findings. In this work, a few-shot learning framework is proposed to assess the diagnostic value of CLE data and to classify them into healthy tissue and different brain tumour types, namely glioblastoma, meningioma or astrocytoma.
Performance evaluation on ex vivo and in vivo data shows that rejecting data of low diagnostic value improves the classification accuracy by 37.5%, while the proposed tissue characterisation framework achieves 96.20% classification accuracy.
Complete resection of brain tumours is more demanding than for any other organ system. Brain cancer is the 8th most common cause of cancer deaths in the UK [1] and it has the highest Average Years of Life Lost (AYLL) of all common tumours, just over 20 years [2]. Current state-of-the-art technologies used to facilitate brain tumour identification, such as neuronavigation [3], iMRI and fluorescence imaging [4], have significant limitations intraoperatively or cannot provide complete tumour identification.
Confocal Laser Endomicroscopy (CLE) has enabled direct visualisation of tissue at a microscopic level, with recent pilot studies suggesting it may have a role in identifying residual cancer tissue and improving resection rates of brain tumours [5]. In neurosurgery, CLE allows the surgeon to see and operate intraoperatively in real time at the cellular level, to look directly at the underlying tumour biology, and to delineate tumour, normal tissue and important structures (nerves, vessels) as precisely as possible, thereby helping to preserve their functionality.
However, the interpretation of endomicroscopic information remains challenging, particularly for surgeons who do not routinely review histopathology themselves. Brain tumour interpretation is also based on molecular pathology findings. Even among experts, the diagnosis can be examiner-dependent, leading to considerable inter-observer variability. Therefore, automatic tissue characterisation with CLE, based on a database of data previously annotated by expert physicians with histologically confirmed diagnoses, would support the surgeon in establishing a diagnosis and could guide autonomous robotic tissue scanning to focus locally on pathological areas. The main challenge for such tissue characterisation is the quality of the CLE data, which is prone to degradation during scanning due to debris on the tissue surface or motion instability of the imaging probe.
In addition, there is significant intra- and inter-class variability in the appearance of CLE images. Furthermore, neurosurgeons have to interpret the intraoperative images of cells on the monitor themselves and decide whether or not to resect further tumour tissue.
Recently, deep learning has gained a lot of attention for the design of Computer-Aided Diagnosis (CAD) systems for brain tissue characterisation [6]. These architectures enable the learning of data-driven and highly representative image features for efficient tissue state classification. However, a large set of data is required to train these models, which can be a limitation for systems focusing on the analysis of in vivo data captured intraoperatively. The number of CLE videos and images that can be acquired intraoperatively is significantly limited.
To overcome the lack of large training datasets, few-shot learning techniques can be adopted, as they can learn to classify medical data even when only a few training examples are available for each class.
In this work, we propose a few-shot learning framework to assess the diagnostic value of CLE images and classify them into four different brain tissue states, namely Glioblastoma tumour, Meningioma tumour, Astrocytoma tumour and healthy tissue, as shown in figure 1.
The aim of the data quality assessment is to remove from the CLE dataset images which do not carry significant diagnostic value due to motion artifacts or imprecise probe-tissue contact. This assessment is done in two stages. In the first stage, images with Shannon entropy lower than a predefined threshold are rejected, as they do not contain sufficient information for tissue characterisation. In the second stage, the Deep Nearest Neighbour Neural Network (DN4) [7] is used to classify the images which survived the first quality assessment stage as “diagnosable” or “non-diagnosable”. An example of “non-diagnosable” CLE data is shown in figure 2.
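The first-stage rejection can be implemented as a simple histogram-entropy filter. The sketch below is a minimal Python/NumPy illustration; the 8-bit grey-level range and the threshold of 4.0 bits are assumptions for illustration only, as the paper states only that a predefined threshold is used.

```python
import numpy as np

def shannon_entropy(frame: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of the grey-level histogram of a CLE frame."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-np.sum(p * np.log2(p)))

def entropy_filter(frames, threshold: float = 4.0):
    """Keep only frames whose entropy exceeds the threshold.

    The threshold value is illustrative; in practice it would be tuned
    on annotated diagnosable/non-diagnosable frames.
    """
    return [f for f in frames if shannon_entropy(f) >= threshold]
```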
The DN4 classifier consists of an embedding module which uses Convolutional Neural Networks to learn local image descriptors. An image-to-class module is then used to measure the similarity between a query image and each image class; for this purpose, a k-Nearest Neighbour search based on cosine similarity is employed. The two modules are trained in an end-to-end manner. During training, a support set S is created which contains a few labelled samples for each image class. Given a query set Q, the classifier is trained to classify each sample in Q according to the support set S. In the test stage, the learned model is used together with a support set to classify a query image. In this work, the DN4 model has been enhanced by constraining the support and query sets not to contain data from the same patient, to avoid bias. The CLE data with high diagnostic value are further processed for tissue characterisation, while the “non-diagnosable” data are removed from our dataset, as shown in figure 3.
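The image-to-class measure at the core of DN4 can be sketched as follows: each local descriptor of the query image is matched to its k most similar support descriptors of a class, and the cosine similarities are accumulated into a class score. The PyTorch snippet below is a minimal illustration of this mechanism, following the description in [7]; tensor shapes and the choice k = 3 are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def image_to_class_score(query_feat, support_feats, k: int = 3):
    """DN4-style image-to-class similarity.

    query_feat:    (C, H, W) local descriptors of one query image.
    support_feats: (N, C, H, W) local descriptors of the support images of one class.
    Returns the sum over query descriptors of their top-k cosine similarities
    to the pooled support descriptors of that class.
    """
    C = query_feat.shape[0]
    q = query_feat.reshape(C, -1).t()                          # (H*W, C)
    s = support_feats.permute(1, 0, 2, 3).reshape(C, -1).t()   # (N*H*W, C)
    q = F.normalize(q, dim=1)
    s = F.normalize(s, dim=1)
    sim = q @ s.t()                                            # cosine similarities
    return sim.topk(k, dim=1).values.sum()

def classify_query(query_feat, class_support_feats, k: int = 3):
    """Assign the query image to the class with the highest image-to-class score."""
    scores = torch.stack([image_to_class_score(query_feat, s, k)
                          for s in class_support_feats])
    return int(scores.argmax())
```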
To characterise brain tissue, a DN4-based architecture is designed to classify CLE images into the tumour and healthy tissue states defined above. To improve the classification performance, a more detailed data representation is estimated by increasing the number of filters in the convolutional layers comprising the embedding module of the DN4 architecture. Similar to the quality assessment model above, the support and query sets of the proposed classification model do not contain data from the same patient.
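A Conv-4 style embedding module of the kind used by DN4 is sketched below. The standard DN4 backbone uses 64 filters per convolutional block; the wider setting shown here (128) merely illustrates the idea of increasing the filter count, and the exact widths, input channels and block structure used in this work are assumptions of the sketch.

```python
import torch.nn as nn

class EmbeddingModule(nn.Module):
    """Conv-4 style embedding producing local descriptors for DN4."""

    def __init__(self, in_channels: int = 1, width: int = 128):
        # in_channels=1 assumes greyscale CLE frames; width=128 is illustrative.
        super().__init__()

        def block(c_in, c_out, pool=True):
            layers = [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                      nn.BatchNorm2d(c_out),
                      nn.LeakyReLU(0.2)]
            if pool:
                layers.append(nn.MaxPool2d(2))
            return nn.Sequential(*layers)

        self.features = nn.Sequential(
            block(in_channels, width),
            block(width, width),
            block(width, width, pool=False),
            block(width, width, pool=False),
        )

    def forward(self, x):
        # Output: (B, width, H', W') local descriptors. No global pooling,
        # so spatial descriptors remain available to the image-to-class module.
        return self.features(x)
```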
The tissue characterisation framework proposed in this work is based on the analysis of ex vivo and in vivo CLE data of brain tissue, collected using the Cellvizio system (Mauna Kea Technologies, Paris, France) during brain tumour resection procedures at the Merheim Hospital in Cologne, Germany. The ex vivo dataset described in [6] contains Meningioma and Glioblastoma brain tumours. The in vivo data was collected using Indocyanine Green (ICG) as contrast agent and was classified into Glioblastoma, Meningioma, Astrocytoma and healthy tissue by expert histopathologists. Each in vivo CLE video represents one tissue state and our dataset includes 6, 20, 9 and 27 videos of Glioblastoma, Meningioma, Astrocytoma and healthy tissue, respectively, resulting in 27472 frames in total.
To train the diagnostic quality assessment model, the ex vivo CLE dataset with Glioblastoma and Meningioma images is first used to learn an efficient data representation on a large, high-quality dataset. We then capitalise on the strength of few-shot learning, which allows the classifier to adapt to new classes with minimal training. For the training of the tissue characterisation model, only the in vivo data which survived the quality assessment process is used for training, validation and testing, without placing videos from the same patient in multiple sets, to avoid bias.
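To keep the training, validation and test sets patient-disjoint, the videos can be grouped by patient before splitting. The helper below is a minimal sketch of this idea; the 'patient_id' field name and the split fractions are illustrative assumptions, not details reported in the paper.

```python
from collections import defaultdict
import random

def patient_disjoint_split(videos, train_frac=0.6, val_frac=0.2, seed=0):
    """Split CLE videos so that no patient contributes to more than one set.

    `videos` is a list of dicts, each carrying at least a 'patient_id' key
    (assumed field name). Returns a dict with 'train', 'val' and 'test' lists.
    """
    by_patient = defaultdict(list)
    for v in videos:
        by_patient[v['patient_id']].append(v)

    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)

    n = len(patients)
    n_train = int(train_frac * n)
    n_val = int(val_frac * n)
    split_patients = {'train': patients[:n_train],
                      'val': patients[n_train:n_train + n_val],
                      'test': patients[n_train + n_val:]}

    return {name: [v for p in ps for v in by_patient[p]]
            for name, ps in split_patients.items()}
```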
Intraoperative tumour diagnosis and definition of tumour borders are currently based on the visualisation modalities the surgeon is accustomed to, as well as on the histopathologic examination of a limited number of biopsy specimens. Furthermore, optimal surgical therapy combines maximal, near-total resection with minimal injury to normal tissue; this can only be achieved if cellular structures can be identified intraoperatively and tumour can be differentiated from normal functional tissue, so that the tumour is resected completely while normal tissue is protected. Achieving this goal requires new technological equipment combined with new surgical concepts.
In oncological diagnosis and surgery, CLE would allow, on the one hand, the intraoperative detection and differentiation of single tumour cells (without the need for rapid biopsies) and, on the other hand, the definition of borders between tumour and normal tissue at a cellular level, making surgical resection more accurate than ever before [8-11]. The application and implementation of CLE-assisted surgery in surgical oncology increases not only the diagnostic but also the therapeutic options, by extending the resection borders of cancer at a cellular level and, more importantly, by automatically protecting the functionality of normal tissue in eloquent areas of the human body [12].
The performance of the proposed diagnostic quality assessment model has been validated on a set of 382 “non-diagnosable” images selected from the in vivo Glioblastoma-Meningioma dataset and on all the images from the ex vivo dataset, which are considered “diagnosable”. Further validation has also been done on 4534 “diagnosable” images from the in vivo dataset. The performance evaluation results in table 1 verify the ability of the model to identify data of high diagnostic quality and reject images of low diagnostic value. The data quality assessment process led to the rejection of 34.3% of the in vivo data and resulted in a 37.5% improvement in classification accuracy, as shown in table 2. The proposed tissue characterisation framework classifies endomicroscopy data into three brain tumour types and healthy tissue, achieving 96.20% accuracy, which is also verified in the confusion matrix shown in figure 4.
Table 1: Performance evaluation of the diagnostic quality assessment model.

Data | Avg. Accuracy (%) | Precision (%) | Recall (%) | F1-score (%)
Ex vivo | 93.44 | 88.78 | 99.45 | 93.38
In vivo | 84.35 | 77.68 | 96.40 | 86.03
Table 2: Classification performance improvement for in vivo Glioblastoma/Meningioma data due to the rejection of low diagnostic quality data.

Data | Avg. Accuracy (%) | Precision (%) | Recall (%) | F1-score (%)
Without quality assessment | 62.36 | 62.36 | 62.36 | 62.36
With quality assessment | 99.92 | 99.66 | 100 | 99.83
Interpretation of cellular images during surgery is a new and challenging situation in the daily routine of neurosurgery. Different kinds of tumours can be characterised by different cellular markers and optical appearances. Optical findings are important not only for the characterisation and differentiation of tumours but also for their classification within the same group of tumours. Intraoperatively, the amount of CLE data which can be generated is limited.
Therefore, in this paper, a tissue characterisation framework based on few-shot learning has been proposed to classify endomicroscopy data of brain tissue into three types of tumour and healthy tissue. Images with low diagnostic value due to motion artifacts or imprecise probe-tissue contact are eliminated, improving the tissue characterisation accuracy by 37.5%. The performance evaluation study has shown that the proposed framework achieves 96.20% accuracy on the classification of in vivo CLE data.
Dr. Stamatia Giannarou is supported by the Royal Society (UF140290).
All studies on human subjects were performed according to the requirements of the local ethic committee and in agreement with the Declaration of Helsinki.