Welcome to the Ariel Machine Learning Data Challenge. The Ariel Space mission is a European Space Agency mission to be launched in 2029. Ariel will observe the atmospheres of 1000 extrasolar planets - planets around other stars - to determine how they are made, how they evolve, and how to put our own Solar System in the galactic context.

Announcement - Final Winners announced!

We are very happy to announce the winners (see here for the Leader Board). Congratulations to everyone - this has been a very high-scoring competition. We would like to thank every participant for helping us improve the state of the art of exoplanet atmospheric modelling. We hope you had fun, and we hope to see you at the next Ariel Machine Learning Data Challenge!

Announcement - Deadline extended to 24th Oct!

We have made further changes to the Regular Track metric to minimise the issues of non-convergence and abnormally high scores. To allow participants time to test with the newly modified metric, we are postponing the final evaluation deadline by a week. The new deadline for the competition will be 24th October. We are now extending the invitation to the final evaluation round to all participants with scores above the baseline solution. Please see the timeline for the latest schedule of the competition. We apologise for any inconvenience caused.

Announcement - Updates on the Regular Track

Recently we have noticed a surge in Regular Track scores, with some submissions achieving near-perfect results. While this could potentially mean the challenge is solved, it may also mean that the metric is vulnerable to extreme values and large sample sizes. These could be manipulated to achieve a high score without providing a physically meaningful solution, which runs against the spirit of the competition. To ensure a fair evaluation of the solutions provided by the participants, we are announcing the following changes to the regular metric:

    1. Submissions to the Regular Track are limited to 1000 - 5000 sample points per submission. In other words, for a solution matrix of shape (N, M, 6), where N is the number of examples and M is the number of sample points, M must fall between 1000 and 5000.
    2. Any values that fall outside the prior range specified here will be reset to the respective boundary values.
The metric has been updated to reflect these changes, which means any submission to the system will be subject to them.
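The two rules above can be sketched in code. The following is a minimal, illustrative check of a candidate solution matrix - the array names and the placeholder prior bounds are hypothetical (the actual per-target bounds are given in the challenge documentation), but the shape constraint and the boundary-clipping behaviour follow the announcement.

```python
import numpy as np

# Placeholder prior bounds for the 6 targets - purely illustrative values;
# the real bounds are published in the challenge documentation.
PRIOR_LOW = np.array([0.0, -12.0, -12.0, -12.0, -12.0, 100.0])
PRIOR_HIGH = np.array([1.0, -1.0, -1.0, -1.0, -1.0, 4000.0])


def check_and_clip(solution):
    """Validate a solution matrix of shape (N, M, 6) and clip it to the prior.

    N is the number of examples, M is the number of sample points
    (1000 <= M <= 5000). Values outside the prior range are reset to
    the respective boundary values.
    """
    if solution.ndim != 3 or solution.shape[2] != 6:
        raise ValueError("expected a solution matrix of shape (N, M, 6)")
    m = solution.shape[1]
    if not 1000 <= m <= 5000:
        raise ValueError("M (sample points) must be between 1000 and 5000")
    # np.clip broadcasts the (6,) bound arrays over the last axis.
    return np.clip(solution, PRIOR_LOW, PRIOR_HIGH)
```

For example, a submission with M = 500 sample points would be rejected outright, while a value above the upper prior bound would simply be reset to that bound.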

As for the final evaluation round, we will use the updated metric but with an increased number of iterations to ensure the metric converges. We want to reiterate that any unphysical solutions will NOT be accepted. We sincerely apologise for any inconvenience caused. We hope you understand that we must guard against uncompetitive abuses of the evaluation metric and unfair advantages over other participants.


News
  • 17-Oct - Deadline extended
  • 11-Oct - Default prior bounds for the targets are updated
  • 06-Oct - RT Metric updated (see announcement above)
  • 29-Jul - Test Data Documentation released
  • 29-Jul - Timeline announced, see below.
  • 18-Jul - Light Track scoring metric updated - Upload your solution to get the latest score!
  • 15-Jul - New Slack channel for discussions and Q&A.
  • 15-Jul - Full release of the baseline model and scoring metric on our GitHub repository


Timeline
  • 30-Jun - Challenge begins
  • 30-Jun - Baseline solution and other documentation released
  • 30-Jun - Challenge is live!
  • 30-Jun - 1st release of test data
  • 01-Sept - 2nd release of test data; scores on the leaderboard will reset
  • 24-Oct - Invitation to Final Evaluation round
  • 24-Oct to 28-Oct - Final Evaluation window (please note late submissions will not be accepted)
  • 28-Oct to 13-Nov - Evaluation period
  • 14-Nov to 18-Nov - Winners are informed & announced
  • 5-9 Dec (nominal) - Winning solutions presented at NeurIPS 2022 Workshop

Understanding worlds in our Milky Way

Today we know of roughly 5000 exoplanets in our Milky Way galaxy. Given that the first planet was only conclusively discovered in the mid-1990s, this is an impressive achievement. Yet simple number counting does not tell us much about the nature of these worlds. One of the best ways to understand their formation and evolution histories is to understand the composition of their atmospheres. What are the chemistry, temperatures, cloud coverage, etc.? Can we see signs of possible biomarkers in the smaller Earth and super-Earth planets? Since we can't get in-situ measurements (even the closest exoplanet is light-years away), we rely on remote sensing and interpreting the stellar light that shines through the atmospheres of these planets. Model fitting these atmospheric exoplanet spectra is tricky and requires significant computational time. This is where you can help!

Help us to speed up our model fitting!

Today, our atmospheric models are fit to the data using MCMC-type approaches. This is sufficient if your atmospheric forward models are fast to run, but convergence becomes problematic if this is not the case. This challenge looks at inverse modelling using machine learning. For more information on why we need your help, we provide more background in the about page and the documentation.

There are prizes

The first prizes for the light and regular tracks are $1000 and $2000 respectively. The second prizes are $500 each. The first prize winners will also be invited to attend our NeurIPS workshop.

Many thanks to...

NeurIPS 2022 for hosting the data challenge, and to the UK Space Agency and the European Research Council for supporting this effort. Also many thanks to the data challenge team and partnering institutes (see here for some info on the team members), and of course thanks to the Ariel team for technical support and for building the space mission in the first place!

Any questions or something gone wrong? Contact us at: exoai.ucl [at]
