Phases and Rules
The LUNA25 challenge takes place in two phases:
- Open Development Phase (Duration: 4 months): Anyone can participate in this phase of the challenge. Interested teams can create an account on grand-challenge.org and register for the LUNA25 challenge at luna25.grand-challenge.org. They will then be given access to download the public training dataset and can start developing and training AI algorithms using their private or public compute resources. Participating teams can also use additional data to train their algorithms, but such data must be publicly available under a permissive open-source license three months prior to the submission deadline, and its source must be clearly stated. Teams can upload and submit their trained algorithms (as Docker containers) for evaluation a maximum of 15 times throughout the challenge. During evaluation, the algorithms are executed on the grand-challenge.org platform, their performance is measured on the hidden tuning cohort, and team rankings are updated accordingly on a live, public leaderboard. Running validation in this manner ensures that every image used for evaluation remains truly unseen and that AI predictions cannot be tampered with, allowing for bias-free performance estimation.
- Closed Testing Phase (Duration: 1 month): After the Development Phase closes, each registered team can choose to submit a single AI algorithm (presumably their top-performing one) for evaluation on the hidden testing cohort. Based on performance on this cohort, new rankings will be drawn and the top 5 algorithms of the LUNA25 challenge will be determined and announced. To qualify as one of the top teams, participants must also submit a short paper on their methodology (2–3 pages) and a public/private URL to their source code on GitHub, to ensure fairness, traceability, and reproducibility of all proposed solutions.
Rules:
- All participants must form teams (even if the team is composed of a single participant), and each participant can only be a member of a single team.
- Any individual participating with multiple or duplicate Grand Challenge profiles will be disqualified.
- Anonymous participation is not allowed. To qualify for ranking on the validation/testing leaderboards, all participants must accurately display their true names and affiliations [university, institute, or company (if any), and country] on verified Grand Challenge profiles.
- Members of all sponsoring or organizing entities (i.e., Radboud University Medical Center, University Medical Center Groningen, University of Copenhagen) can freely participate in the challenge, but are not eligible for awards or the final ranking in the testing phase.
- This challenge only supports the submission of fully automated methods in Docker containers. It is not possible to submit semi-automated or interactive methods.
- All Docker containers submitted to the challenge will be run offline (i.e., they will not have access to the internet and cannot download/upload any resources). All necessary resources (e.g., pre-trained weights) must be encapsulated in the submitted containers a priori; see the sketch after this list for a minimal example.
- Participants competing for prizes can use pre-trained AI models based on computer vision and/or medical imaging datasets (e.g., ImageNet, Medical Segmentation Decathlon). They can also use external datasets to train their AI algorithms. However, such data and/or models must be published under a permissive license (within 3 months of the Open Development Phase deadline) to give all other participants a fair chance to compete on equal footing. They must also clearly state the use of external data in their submission, via the algorithm name [e.g., "LUNA25 Classification Model (trained w/ external data)"], the algorithm page, and/or a supporting publication/URL.
- Researchers and companies interested in benchmarking their institutional AI models or products, but not competing for prizes, can freely use private or unpublished external datasets to train their AI algorithms. They must clearly state the use of external data in their submission, via the algorithm name [e.g., "LUNA25 Classification Model (trained w/ private data)"], the algorithm page, and/or a supporting publication/URL. They are not obligated to publish their AI models and/or datasets before or at any time after the submission.
- To participate in the Closed Testing Phase, participants must submit a short arXiv paper on their methodology (2–3 pages) and a public/private URL to their source code on GitHub (hosted with a permissive license). We take these measures to ensure the credibility and reproducibility of all proposed solutions and to promote open-source AI development.
- The top 5 winning algorithms of the LUNA25 challenge, as trained on the Public Training and Development Dataset and evaluated on the Hidden Testing Cohort in the Closed Testing Phase, will be made publicly available as Grand Challenge Algorithms once the challenge has officially concluded.
- Participants of the LUNA25 challenge, as well as non-participating researchers using the LUNA25 public training dataset, can publish their own results separately at any time; they do not have to adhere to any embargo period. When doing so, they are requested to cite the Study Protocol document (the BIAS preregistration form for the LUNA25 challenge), which is coming soon. Once a study protocol and/or a challenge paper has been published, they are requested to refer to those publications instead.
- Organizers of the LUNA25 challenge reserve the right to disqualify any participant or participating team at any time on grounds of unfair or dishonest practices.
- All participants reserve the right to drop out of the LUNA25 challenge and forgo any further participation. However, they will not be able to retract their prior submissions or any results published up to that point.
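
To illustrate the offline-execution rule above, here is a minimal sketch of how submitted code might load all of its resources from inside the container rather than from the internet. It assumes a PyTorch/torchvision-based classifier; the weights path, file name, and model choice are hypothetical examples for illustration, not challenge requirements.

```python
# Minimal sketch of an offline-safe model loader (hypothetical paths/classes).
# The weights file is copied into the image at build time (e.g., via a COPY
# instruction in the Dockerfile), so no network access is needed when the
# container runs on the grand-challenge.org evaluation platform.
from pathlib import Path

import torch
import torchvision.models as models

# Assumed location of the weights inside the container; adjust to your setup.
WEIGHTS_PATH = Path("/opt/app/resources/luna25_classifier.pth")


def load_model() -> torch.nn.Module:
    """Build the network and load locally stored weights (no downloads)."""
    # weights=None prevents torchvision from fetching pre-trained weights
    # online, which would fail in the offline evaluation environment.
    model = models.resnet50(weights=None)
    state_dict = torch.load(WEIGHTS_PATH, map_location="cpu")
    model.load_state_dict(state_dict)
    model.eval()
    return model
```

Any call that implicitly downloads resources at runtime (pre-trained weight hubs, dataset fetchers, pip installs) will fail in the sandboxed environment, so all such assets should be baked into the image and loaded from local paths as sketched here.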