Research teams (RTs) are invited to participate in a project examining variability in experimental design in behavioral economics. In particular, the project focuses on the variability in experimental designs that arises when the same research question is examined by different research teams. The goal is to explore variation in experimental designs and to estimate a metascientific effect of competition on moral behavior.

By submitting a pre-registration, all participating RTs will propose an experimental research design, fulfilling a certain set of criteria, with the objective of analyzing the impact of competition on moral behavior. In the pre-registration, each RT will propose a linear model for analyzing the data. The project coordinators (PCs) will analyze the data as proposed in each RT's pre-registration, as well as using their own specifications as outlined in the PCs' pre-analysis plan. All RTs will be anonymized prior to reporting or subsequent sharing of the submitted results.

In Stage 1, potential RTs express interest in participating and sign up to #ManyDesigns. Provided that an RT fulfills the requirements, it moves on to Stage 2, where it will submit a design proposal following a given pre-registration template, outlining the key parameters of its proposed experiment. If more than 50 submitted design proposals are eligible for participation, 50 designs will be randomly selected for inclusion in the study. In Stage 3, each of the up to 50 selected design proposals will be implemented as an online experiment using Prolific. RTs program the software for their submitted design proposals and host it while the experiment is conducted; the experiment itself is paid for by the project coordinators (PCs). In Stage 4, all RTs will be asked to read 10 other design proposals, submitted by their peers, and evaluate them using a short questionnaire.

We expect that the total workload for research teams participating in the project will be between two and three weeks.


The project coordinators recruit research teams (RTs) between April and May 2021. Eligible RTs will design their experiment and submit their proposals by June 25, 2021. During the summer, all selected RTs will program the experimental software for their submitted designs and submit their software and instructions by October 1, 2021. Starting in October 2021 (estimated), pilot sessions for all experiments will be run to confirm feasibility. Starting in November 2021 (estimated), all selected experiments will be conducted online using Prolific. Data analysis by the project coordinators will start after data collection for all selected experiments has been completed. The overall project will last until summer 2022, with the goal of having a first draft of the paper ready by then. For a detailed timeline, please refer to the schedule.

Why participate?

  • In addition to being part of a fascinating project, you will get the chance to satisfy the curiosity of the academic in you, hungry for answers to a highly relevant and controversial research question.
  • For the proposals selected for implementation, the project coordinators will cover the costs for participant payments.
  • If your proposal is selected for implementation, you will become a co-author on the paper, which targets publication in a top scientific journal. The PCs previously organized a related but different type of crowd-analysis in neuroscience, published in Nature in 2020 (see here).


  • RTs consist of one or two participants.
  • At least one member of the RT has to hold a PhD in Economics, Psychology, or a related field, and at least one team member needs to have (co-)authored at least one experimental study that is either published (or accepted for publication) or available as a working paper.
  • The team should be sufficiently skilled in experimental methodology and should be familiar with conceptualizing and implementing (programming) an economic online experiment.
  • RTs need to sign up to participate by filling out a brief survey about background characteristics and expertise in behavioral and experimental economics.
  • The PCs will then decide by majority vote whether the RT is sufficiently qualified to participate.


In case you have any questions, please contact the project coordinators via [email protected].

stage 1: registration

The project coordinators will pre-screen applications for accordance with the eligibility conditions and will invite all eligible research teams (RTs) to submit a proposal for an experimental research design.

After successful application, all eligible RTs are invited to submit a proposal. Up to 50 RTs are then randomly selected to be part of #ManyDesigns: they complete the programming and implementation of the experiment, provide the raw data after data collection has been completed, and assess other design proposals. Data analysis and writing of the paper are done by the project coordinators.

Learn more about the eligibility conditions and how to sign up to #ManyDesigns.

stage 2: design submission and selection

design submission

All registered and eligible RTs have been invited to submit a pre-registration outlining an experimental research design with the aim of assessing whether and how competition affects moral behavior.

To be included in this project, the submitted pre-registration has to adhere to the following design conditions:

  • Design is implemented as an online experiment to be run on Prolific with 400 observations.
  • Employ a between-subjects treatment design with two equally sized treatments/conditions:
    • one control condition without competition
    • one treatment condition with competition
  • Incentive-compatible payments for subjects that cover at least the opportunity cost of time (Prolific requires a minimum payment of GBP 5 per hour) and do not involve losses (negative payments).
  • Define a clear measure of moral behavior.
  • No deception of subjects.
  • Subject anonymity regarding who is interacting with whom during the experiment.
  • No measurements of physical state (e.g., saliva samples, blood samples) and no physical or psychological harm.
  • Clear information for subjects regarding the experiment’s duration, repetitions, interactions, and random processes (like lotteries) that are relevant for subjects, and which information is common knowledge to other (groups of) subjects.
  • Design clearly defines randomization procedures across treatments.
Borderline cases will be decided by the project coordinators (majority rule).
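As an illustration of the payment floor, the minimum admissible payment follows directly from Prolific's GBP 5 per hour requirement (the helper function below is hypothetical and not part of the protocol):

```python
MIN_RATE_GBP_PER_HOUR = 5.0  # Prolific's minimum hourly payment requirement


def minimum_payment(expected_minutes: float) -> float:
    """Lowest admissible participant payment for a given expected duration."""
    return round(MIN_RATE_GBP_PER_HOUR * expected_minutes / 60, 2)


# For example, an experiment with an expected duration of 15 minutes
# must pay each participant at least GBP 1.25.
```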

Signed-up RTs are able to submit a pre-registration outlining their experimental design proposal using the following pre-registration template. The submitted pre-registration also includes details on the RT's preferred method of analysis, under the condition that an ordinary least squares regression is used (see stage 5: analysis below), and acts as a pre-analysis plan.

design selection

All submitted proposals will be pre-screened by the project coordinators (PCs) to eliminate studies that do not fit the above-mentioned criteria.

If up to 50 eligible proposals are submitted, all of them will be implemented and run; if more than 50 eligible proposals are submitted, 50 of them will be randomly selected to be run (and the other ones will be "returned" to the submitting research teams and will not be shared with anyone else).
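The selection rule can be sketched as follows (a minimal illustration; the function name and the fixed seed are assumptions, since the protocol does not specify the randomization mechanism):

```python
import random


def select_designs(eligible, cap=50, seed=2021):
    """Randomly select up to `cap` eligible design proposals.

    If `cap` or fewer proposals are eligible, all are selected;
    otherwise `cap` proposals are drawn uniformly at random.
    """
    if len(eligible) <= cap:
        return list(eligible)
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return rng.sample(list(eligible), cap)
```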

stage 3: experimental implementation

experimental software

Once research teams (RTs) have been notified that their design has been selected, they have more than three months to provide the experimental software, including the experimental instructions. All selected designs will be implemented as an online experiment on Prolific with up to 400 observations. RTs therefore have to make sure that their experimental software adheres to the following software conditions:

  • RTs are responsible for hosting the experimental software and for providing the project coordinators (PCs) with:
    • One anonymous link per experimental treatment to be sent to participants via Prolific.
    • Access to the server on which the experimental software is hosted such that PCs are able to retrieve the data.
  • Experimental software has to be compatible with Prolific – i.e., it has to be possible to ...
    1. ... send participants to the experiment using an anonymous link,
    2. ... record their Prolific IDs, and
    3. ... redirect them back to Prolific upon completion.
    • See here for more information.
  • Design has to adhere to Prolific's terms & conditions for researchers.
  • Experimental software and instructions have to be in English.
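Conditions 1–3 amount to reading the participant ID that Prolific appends to the anonymous study link and sending participants to a completion URL at the end. A minimal stdlib sketch (the completion code "ABC123" is a placeholder, and any web framework can wrap these helpers):

```python
from urllib.parse import parse_qs, urlsplit

# Placeholder completion URL; the actual completion code comes from the
# study settings on Prolific.
PROLIFIC_COMPLETION_URL = "https://app.prolific.co/submissions/complete?cc=ABC123"


def extract_prolific_id(request_url: str) -> str:
    """Read the participant ID from the incoming request URL.

    Prolific fills the PROLIFIC_PID URL parameter when the anonymous
    link is configured with the {{%PROLIFIC_PID%}} placeholder.
    """
    query = parse_qs(urlsplit(request_url).query)
    return query.get("PROLIFIC_PID", [""])[0]


def completion_redirect() -> str:
    """URL to redirect participants to once the experiment is finished."""
    return PROLIFIC_COMPLETION_URL
```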

stage 4: peer assessment

All included RTs will be asked to assess each other's experimental designs anonymously. Each RT will be asked to assess 10 other designs and rate them on a Likert scale from 0 to 10 in response to the following question:

To what extent does this design, within the design conditions defined above, provide an informative test of the research question: "does competition affect moral behavior?"
(0 = not at all informative; 10 = extremely informative)

stage 5: analysis

data & analysis

After all included RTs' designs have been experimentally implemented and data collection is complete, PCs will analyze the data from all designs.

Each RT is free to specify its own methodology for analyzing the data, as laid out in its submitted pre-registration, subject to one condition: a linear model has to be estimated using ordinary least squares (OLS), with moral behavior as the dependent variable and, as the key independent variable, a dummy equal to 1 for observations from the COMPETITION treatment (and 0 otherwise).
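The minimal required specification can be sketched as follows (illustrative only: the data are simulated, the variable and column names are assumptions, and RTs may add covariates of their choosing to the right-hand side):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in data: `moral_behavior` is whatever measure the RT
# defined; `competition` is 1 for the COMPETITION treatment, 0 otherwise.
df = pd.DataFrame({
    "competition": np.repeat([0, 1], 200),        # 400 observations
    "moral_behavior": rng.normal(size=400),
})

# Required linear model: OLS of moral behavior on the treatment dummy.
model = smf.ols("moral_behavior ~ competition", data=df).fit()

effect = model.params["competition"]  # treatment effect (effect size)
se = model.bse["competition"]         # corresponding standard error
```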

RTs will ...

  • ... provide their preferred, exact empirical specification to test for a possible competition effect on moral behavior (see the template).
  • ... provide the PCs with access to the server on which the experimental software is hosted such that PCs are able to retrieve the data.
  • ... submit the raw data directly after the experiment and provide PCs ex ante with a code book that explains all incoming variables and data points.

Project coordinators (PCs) will ...

  • ... analyze the data following the pre-registration provided by RTs and report the main treatment effect (in terms of the effect size and the corresponding standard error).
  • ... conduct additional analyses of their own, based on standardized specifications applied to all included research designs.

stage 6: metascience paper

PCs will write a metascience paper; all members of selected RTs will be co-authors.

data usage & authorship

The data from each experiment are initially available only to the RT behind the particular design proposal and to the PCs. RTs will not be allowed to release, publicize, or discuss their results until a first draft of the metascience paper is public (expected in summer 2022). After that, all data will become publicly available under a CC-BY license and can be used freely.

For the final paper to be produced from this project (including analysis of RTs' design proposals, RTs' peer assessments, and RTs' analyses), the project coordinators will draft the manuscript. All members of each RT will be offered co-authorship on the paper(s). Authorship will be limited to RTs who complete all stages of the project, i.e., to those whose design proposal is (randomly) selected for inclusion and who provide appropriate and feasible experimental software, complete the peer assessments, and submit their data by the respective deadlines. Co-authors from the RTs will be given two weeks to review any drafts of papers prior to submission.