Tracks

Track 1: Score-based
What do you do?

In this track, you will:

  • propose and implement multilateral agreements to augment the simulator,
  • train AI agents that negotiate with each other and optimize their utility, and
  • evaluate the learned policies and the resulting economic and climate metrics, e.g., global equality, productivity, and temperature increase.
What do you submit?

Each submission should have the following:

  • Modified code with your negotiation protocol (a hypothetical sketch follows this list)
  • RL agents trained on that protocol
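
To illustrate, below is a hypothetical sketch of what a simple negotiation protocol might look like in code. The names here (Proposal, TitForTatProtocol, propose, respond) are illustrative assumptions, not the competition's actual API; a real submission should plug into the simulator's documented hooks.

    from dataclasses import dataclass

    @dataclass
    class Proposal:
        sender: int             # region making the offer
        receiver: int           # region receiving it
        mitigation_rate: float  # minimum mitigation the receiver must commit to

    class TitForTatProtocol:
        """Toy bilateral protocol: a region accepts a proposal only if the
        sender commits to at least the same mitigation rate itself."""

        def propose(self, sender, receiver, own_rate):
            # Offer the receiver the same rate the sender is willing to adopt.
            return Proposal(sender, receiver, mitigation_rate=own_rate)

        def respond(self, proposal, sender_committed_rate):
            # Accept only symmetric (or better) commitments.
            return sender_committed_rate >= proposal.mitigation_rate

    protocol = TitForTatProtocol()
    offer = protocol.propose(sender=0, receiver=1, own_rate=0.3)
    print(protocol.respond(offer, sender_committed_rate=0.3))  # True
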
How do we evaluate?

Each team is scored by computing the hypervolume enclosed by its 10 most recent solutions. This is a lower-bound approximation of the area under the Pareto curve defined by the submitted solutions.
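
To make the metric concrete, here is a minimal sketch of a 2-D hypervolume computation. It assumes two maximized objectives and a fixed reference point that every counted solution dominates; the official scoring script is authoritative and may differ in dimensionality and normalization.

    def hypervolume_2d(points, ref):
        """Area dominated by `points` and bounded below by `ref`
        (both objectives maximized)."""
        # Keep only points that strictly improve on the reference.
        pts = [p for p in points if p[0] > ref[0] and p[1] > ref[1]]
        # Sweep from best to worst on the first objective.
        pts.sort(key=lambda p: p[0], reverse=True)
        area, y_covered = 0.0, ref[1]
        for x, y in pts:
            if y > y_covered:  # this point adds a new horizontal slab
                area += (x - ref[0]) * (y - y_covered)
                y_covered = y
        return area

    # Example: three (economic, climate) scores against a (0, 0) reference.
    solutions = [(0.8, 0.2), (0.5, 0.5), (0.3, 0.9)]
    print(hypervolume_2d(solutions, ref=(0.0, 0.0)))  # 0.43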

Participants in Track 1 are not required to submit a technical report; however, we will invite the top scorers to write one for publication in the Proceedings.

Guidelines

Code, documentation, and technical instructions.

Submit here!

Submit your evaluation metrics using this Google Form. This will update the leaderboard on this website!

Track 2: Score and real-world relevance
What do you do?

In this track, you will argue why your solution is practically relevant and usable in the real world. As we aim to bring the insights from the competition to policymakers, entries in this track should include a high-level summary written for a policy audience.

What do you submit?

In addition to the requirements of Track 1, you should submit a written summary, justification, and explanation of your solution and insights. Your write-up should argue why your solution is feasible, technically sound, and attractive. For instance, from a game-theoretic perspective, a good multilateral agreement might punish free-riders and should be difficult to “game” with unrealistic behaviours. Please see the submission guidelines for a template submission with suggested aspects to discuss.
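
As a toy illustration of the free-rider point (the payoff structure and numbers below are illustrative, not part of the simulator): without punishment, free-riding can dominate mitigating, while a penalty on free-riders can make mutual mitigation the better response.

    # Two regions; each either mitigates (cost c, shared benefit b per
    # mitigator) or free-rides. A penalty p on free-riders flips the incentive.
    b, c, p = 3.0, 4.0, 2.0  # illustrative numbers only

    def payoff(mitigate, other_mitigates, punish_free_riders):
        n_mitigators = int(mitigate) + int(other_mitigates)
        u = b * n_mitigators - (c if mitigate else 0.0)
        if punish_free_riders and not mitigate:
            u -= p
        return u

    # Without punishment, free-riding beats mitigating (3.0 > 2.0):
    print(payoff(False, True, False), payoff(True, True, False))
    # With punishment, mitigating is the better response (2.0 > 1.0):
    print(payoff(True, True, True), payoff(False, True, True))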

How do we evaluate?

An expert jury will review your submission against a scoring rubric and will also assess the real-world relevance and impact of your proposed solution.

Guidelines

Submission guidelines, evaluation rubrics, and suggested discussion topics.

Submit here!

Submit your evaluation metrics using this Google Form. This will update the leaderboard on this website.

Submit your essay via OpenReview

Track 3: Critiques and improvements
What do you do?

We strive to simulate real-world dynamics closely, but no simulator is perfect. We therefore invite you to point out potential improvements and loopholes.

What do you submit?

This is a free-form submission: you may include a write-up, example code, or any other material that demonstrates your insights.

How do we evaluate?

An expert jury will review your submission and evaluate the significance of your suggested improvements and/or discoveries.
Guidelines

Submission guidelines and suggested topics to investigate.
Submit here!

Submit your essay via OpenReview