“Multiple Sclerosis Spinal Cord Lesions Detection from MultiSequence MRIs”
An uncommon but relevant task: spinal cord lesion detection
Multisequence: four sequences potentially available, with varying resolutions
Missing modalities: not all four sequences are available for each patient at training and at testing time
Challenge Rationale
Lesion detection: let’s not forget the spinal cord
The identification of Multiple Sclerosis (MS) lesions on Magnetic Resonance Images (MRI) is a complex and mentally demanding task that often leads to an underestimation of disease activity, even for the most experienced radiologists. There is thus a need for automated tools that can provide clinicians with an aid for accurate and robust identification and quantification of MS lesions.
To date, the medical imaging community has concentrated its efforts on the detection/segmentation of lesions in brain MRI. For this purpose, several challenges have been organized over the past years to assess the ability of automated methods to detect multiple sclerosis (MS) lesions as compared to manual delineation. These have allowed the community to explore innovative directions.
Clinically, however, the presence of lesions in the spinal cord has a major prognostic value compared to brain lesions, and their detection is a hard task for radiologists. Indeed, MS lesion detection/segmentation in spinal cord MRI is complex due to specific characteristics of this anatomy (e.g. the lack of sharp contrast between healthy and pathological tissue, and the high occurrence of significant artifacts). As a result, despite its clinical importance, spinal cord MRI is currently under-exploited in patients with MS. Providing clinicians with tools capable of reliably identifying these spinal cord lesions would therefore be a major added value.
A Multisequence “Missing-Modality” Setting
Spinal cord lesion segmentation raises a specific methodological challenge. In clinical practice, it is highly recommended to acquire at least two sequences among a set of available ones, but there are no specific guidelines to date: depending on the center and context, any combination of existing MR sequences may be provided. This challenge thus represents a concrete, complex case of multisequence data. We focus on four commonly used sequences, provided in different combinations: the sagittal T2 (always provided in the challenge, and considered as the reference to segment), the sagittal STIR, the sagittal PSIR and the 3D MP2RAGE.
In practice, the training set will consist of the following pairs of acquisitions: 50 pairs of T2 and STIR data, 25 pairs of T2 and PSIR data, and 25 pairs of T2 and MP2RAGE data. These data originate from various 1.5T and 3T scanners from two different brands. The testing set (not provided to challengers) will consist of: 40 pairs of T2 and STIR data, 20 pairs of T2 and PSIR data, 20 pairs of T2 and MP2RAGE data, and 20 triplets of T2, STIR and MP2RAGE data. Moreover, some of its data will originate from a third scanner brand.
A Probabilistic Instance Segmentation Setting
Only fully automated methods will be allowed. Methodologically, the setting is that of an instance segmentation problem in which probabilities are assigned to inferred instances and ground truths are provided as segmentation masks. Methods will thus be asked to output both a mask of labels (with a unique label for each voxel of a given identified lesion) and a CSV file with the probability associated with each lesion label.
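As an illustration of this output format, the sketch below writes a per-lesion probability file with Python’s standard `csv` module. The column names (`lesion_label`, `probability`) and the file name are assumptions for the example only; the official format will be specified by the organizers.

```python
import csv

# Hypothetical mapping from a lesion's label in the output mask to the
# probability the method assigns to that lesion being a true lesion.
lesion_probs = {1: 0.92, 2: 0.41, 3: 0.76}

# Write one row per identified lesion, alongside the label mask produced
# separately by the segmentation method.
with open("lesion_probabilities.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["lesion_label", "probability"])
    for label, prob in sorted(lesion_probs.items()):
        writer.writerow([label, prob])
```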
To assess the lesion-wise detection performance of the challengers’ methods, we will use a FROC-based metric: the mean sensitivity averaged over the five false-positive rates of 0.25, 0.5, 1, 2 and 3 false positives per image. The sensitivity achieved at each of the five levels will be estimated, as part of the evaluation procedure, individually for each pipeline, based on the probability assigned to each lesion.
The code for the evaluation will be provided with the first release of the training set.
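Pending the official evaluation code, the metric can be sketched as follows. This is a minimal illustration, not the organizers’ implementation: it assumes that each predicted lesion has already been matched against the ground truth (the matching criterion is left out), and simply sweeps the confidence ranking to read off the sensitivity at each target false-positive rate.

```python
def froc_mean_sensitivity(detections, n_gt_lesions, n_images,
                          fp_rates=(0.25, 0.5, 1, 2, 3)):
    """detections: list of (probability, is_true_positive) pairs pooled
    over all images, one per predicted lesion. Returns the sensitivity
    averaged over the target false-positive-per-image rates."""
    # Rank candidate lesions by decreasing confidence.
    dets = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    operating_points = []  # (false positives per image, sensitivity)
    for _prob, is_tp in dets:
        if is_tp:
            tp += 1
        else:
            fp += 1
        operating_points.append((fp / n_images, tp / n_gt_lesions))
    mean = 0.0
    for rate in fp_rates:
        # Best sensitivity achievable without exceeding this FP rate.
        best = max((s for f, s in operating_points if f <= rate), default=0.0)
        mean += best / len(fp_rates)
    return mean
```

With four images, four ground-truth lesions and five ranked detections of which three are true positives, the ranking yields at most 0.5 false positives per image, so all five rates share the same best sensitivity of 0.75.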
Important Dates & Submission Process
Important dates:
- Challenge website opens: July 2, 2024
- Registration period: November 2024 – May 2025
- Training data first release: December 1, 2024
- Training data updated version release: March 1, 2025
- Short paper submission deadline: June 15, 2025
- Docker submission deadline: June 15, 2025
- Announcement of the results: end of September 2025
Resources for challengers:
- Access to the training data set: coming soon
- Performance evaluation code: coming soon
- Detailed instructions for the Docker interface: coming soon
Submission process:
The detailed instructions for the Docker interface and submission will be provided soon. In a nutshell, the following four steps will be required:
1. Build a Docker or Singularity image containing the method,
2. Create a Boutiques descriptor of the tool,
3. Make the image and descriptor available to the VIP team,
4. Validate its integration with the VIP team.
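To give an idea of step 2, here is a minimal sketch of a Boutiques descriptor. The tool name, container image, command line and output paths are all hypothetical placeholders; the actual interface (input sequences, output file names) will follow the detailed instructions to come.

```json
{
  "name": "ms-sc-lesion-segmenter",
  "description": "Example MS spinal cord lesion segmentation tool (illustrative)",
  "tool-version": "0.1.0",
  "schema-version": "0.5",
  "container-image": {"type": "docker", "image": "myteam/ms-sc-seg:0.1.0"},
  "command-line": "segment [T2_IMAGE] [OUTPUT_DIR]",
  "inputs": [
    {"id": "t2_image", "name": "Sagittal T2 image", "type": "File",
     "value-key": "[T2_IMAGE]"},
    {"id": "output_dir", "name": "Output directory", "type": "String",
     "value-key": "[OUTPUT_DIR]"}
  ],
  "output-files": [
    {"id": "label_mask", "name": "Lesion label mask",
     "path-template": "[OUTPUT_DIR]/labels.nii.gz"},
    {"id": "probabilities", "name": "Per-lesion probabilities",
     "path-template": "[OUTPUT_DIR]/lesion_probabilities.csv"}
  ]
}
```

A descriptor like this can be checked locally with the Boutiques `bosh validate` command before being sent to the VIP team.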
Submitted pipelines will be integrated in the Virtual Imaging Platform (VIP), allowing for their execution and evaluation on the test set (not released to the challengers).
We are thankful to our institutions, partners and sponsors for making this challenge possible.