Timeline
We will use the following timeline for the challenge:
- March 15, 2021: release date of the training data
- May 17 – May 25, 2021: registration for the challenge
- June 30, 2021: paper submission and integration start deadline
- Pipeline integration in the platform can be a long process, so we require you to be in contact with us before June 30. The earlier, the better, so please contact us as soon as you are ready.
- July 16, 2021: Docker submission deadline
- Be careful: pipeline integration in the platform can be a long process. Your pipeline will have to be integrated and tested by that date for you to participate, so please plan ahead as much as possible.
- Summer 2021: we evaluate the methods
- September 3, 2021: deadline for registering for the challenge online event (see https://portal.fli-iam.irisa.fr/msseg-2/challenge-day/)
- September 23, 2021: challenge day! Results release and presentations.
Challenge registration
To register for the challenge, you must create at least one Shanoir account to get the training data (see the data page) and send us an email at challenges-iam@inria.fr with the following information:
- a team name
- a contact person (name plus email)
- main affiliation (please indicate whether you are a company or an academic institution)
- Shanoir accounts linked to this team: give us the email addresses of the people in your team who have already registered a Shanoir account to access the training data
- number of pipelines proposed by the team
- GPU needs: this mainly concerns teams working with machine learning algorithms, but please answer this point even if it does not apply to you. Training will be done on whatever architecture you like (GPU or CPU) using your own resources; in the evaluation phase we will only run the testing part of your algorithm. For executing your solution on our computing platform, we strongly encourage the use of CPUs instead of GPUs (this switch is transparent in quite a few deep learning libraries, for example). If your solution cannot run on a CPU, we cannot yet guarantee that GPUs will be available at the time of its execution. We therefore ask you to tell us whether you absolutely need a GPU for the evaluation phase, or whether your method runs on a CPU or can be adapted to it.
- Matlab usage: again for platform-related reasons, Matlab is complicated to support because of licensing problems on a computing platform. If you absolutely need Matlab, even after considering compiling your Matlab scripts into standalone executables, please tell us so. If you do not need Matlab at all, or can use compiled versions of your tools, tell us so as well.
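As noted in the GPU point above, quite a few deep learning libraries make the CPU/GPU switch transparent. A minimal, hypothetical sketch of device-agnostic selection in Python (the function name and the PyTorch-style check are illustrative assumptions, not part of the challenge requirements):

```python
import importlib.util

def pick_device() -> str:
    """Return 'cuda' when a usable GPU stack is present, else fall back to 'cpu'."""
    # Only import torch if it is installed at all.
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    # Default: run everything on CPU, as the organizers encourage.
    return "cpu"

device = pick_device()
print(device)
```

A pipeline written this way runs unchanged on the evaluation platform whether or not a GPU is present, which is exactly the adaptability the organizers ask you to report.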
Once you send us this email, we will acknowledge your participation.
Paper instructions
To participate in the challenge you will have, by June 30, to have started discussing your pipeline integration with us and to submit a 3-page paper (excluding references) briefly describing your method and the parameters used for the challenge. These papers will be reviewed by the organizing committee. When preparing your paper, please follow the MICCAI template, in either LaTeX or Word; in any case, only PDF submissions will be accepted.
Pipeline integration
The challengers will integrate their segmentation pipeline in the France Life Imaging platform with the help of the FLI team. Please contact challenges-iam@inria.fr with any questions. The pipeline will be implemented in the VIP platform; you can refer to the guidelines described at https://gitlab.inria.fr/amasson/lesion-segmentation-challenge-miccai21/
Once the pipeline is integrated in the VIP platform, the challenger (or us, depending on GPU usage) will:
- Run the pipeline on the training data to test and adjust parameters (phase I).
- Save the parameters that will be used during the testing phase (phase II).
The testing phase (phase II) will not require any involvement from the challengers. The integrated pipelines will be run on the testing datasets in the VIP platform, using a connection to the Shanoir database where the testing data are stored. Segmentation results on training and testing data will also be stored in this database and made available to the general public after the challenge day.
By no means will the organizers use these pipelines for any application other than the challenge. Once the challenge is over, the pipelines will be removed from or kept in the platform, depending on the challengers’ wishes.