Less of a burden, but less fairness and accuracy: Some thoughts on the new ARC Discovery Scheme two-step application process
The Australian Research Council’s new Expression of Interest (EOI) process for Discovery Project (DP) funding may impose less of an administrative burden on researchers and universities, but it fails to address another key issue with research funding in Australia: fairness. In fact, the new two-stage application process, whose initial EOI stage does not involve specialist peer reviewers, appears to be less fair and accurate than the process it will replace. A basic peer review process that applies aggregative decision-making methods at the EOI stage would both reduce the administrative burden of the single-stage grant process and enshrine a fair process.
In April of this year, the government released a review of the ARC, highlighting the challenges posed by the significant administrative burdens embedded in the grant application process. The existing procedure demanded considerable time and institutional resources from researchers, resulting in extensive applications for a grant program with a success rate of only 16.4% across the 2,592 submissions in the latest round. In response, the ARC has announced a two-stage process for its 2025 DP scheme, with applications closing in February 2024. Under the new process, applicants will submit a concise EOI for an expedited initial review before potentially submitting a full application. Notably, in contrast to the previous scheme, in which all applications underwent peer review before assessment by the College of Experts, EOI-stage applications will be assessed exclusively by members of the College of Experts.
While the two-stage process addresses the administrative burden of a single-stage process, the absence of peer review at the EOI stage poses a notable challenge to the fairness of the grant assessment process.
In framing the ARC review, the authors emphasized the necessity of reforms to "enshrine the importance of academic expertise, peer review, and transparent, fair competitive assessments." The new DP assessment approach jeopardizes these principles, however. The (albeit expanded) College of Experts comprises just 269 senior researchers, and the vast majority of applications will be rejected at the EOI stage (the ARC has not said how many, but the ARC review suggested an EOI process with just a 10% success rate). The College cannot possess the expertise required to assess all applications on equal and fair grounds: it is reasonable to expect that its members will be unable to adequately assess applications outside their areas of expertise on their merits.

To illustrate, in my own discipline of philosophy, the current Field of Research (FoR) code does not encompass the breadth of work conducted in philosophy departments, which includes applied ethics, philosophy of science, political philosophy, and mathematical logic. Even within the philosophy FoR code, the discipline is highly diverse, covering topics ranging from the nature of consciousness to the truth of moral theories. There are currently two philosophers in the College of Experts. They are highly regarded in their specific sub-fields, of course, but they would not claim expertise in all areas of philosophy. While every EOI will be assessed by three members of the College, having just two philosophers on the College poses a challenge to the fair assessment of work across the breadth of the discipline. Philosophy is not an outlier: all disciplines are heterogeneous at the six-digit FoR code level and have just one or two representatives on the College. This problem is arguably only made worse for interdisciplinary projects, which require expertise in multiple fields.
In response to these concerns, the ARC could further expand the College of Experts, but this remains a blunt means of ensuring a fair assessment of all applications. Instead, I argue, the ARC should draw on research in aggregative decision making showing that in situations with poor or limited evidence and high judgment variability (as would reasonably be expected from non-specialist assessors), aggregative judgment processes tend to yield more consistent assessments. Requesting simple scores of feasibility and track record from multiple reviewers for each EOI in their broad area of research, alongside a self-assessment of each reviewer's confidence in their score, and then aggregating those scores with a confidence-weighted metric, would provide a more equitable basis for deciding which applications proceed to the next stage for full peer review.
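To make the proposal concrete, here is a minimal sketch of one such confidence-weighted aggregation. It is an illustration of the general technique, not the ARC's method or a specific proposal from the decision-making literature; the `Review` structure, the 1–5 score scale, and the choice to weight by self-assessed confidence are all assumptions introduced for the example.

```python
from dataclasses import dataclass


@dataclass
class Review:
    score: float       # reviewer's score for the EOI, e.g. on a 1-5 scale
    confidence: float  # reviewer's self-assessed confidence, from 0.0 to 1.0


def aggregate(reviews: list[Review]) -> float:
    """Return the confidence-weighted mean of the reviewers' scores.

    Weighting by self-assessed confidence means that tentative
    judgments from non-specialists contribute less, damping the
    judgment variability described above.
    """
    total_weight = sum(r.confidence for r in reviews)
    if total_weight == 0:
        # No reviewer expressed any confidence: fall back to a plain mean.
        return sum(r.score for r in reviews) / len(reviews)
    return sum(r.score * r.confidence for r in reviews) / total_weight


# Example: one confident specialist score alongside two tentative
# non-specialist scores. The weighted mean (3.4) sits closer to the
# specialist's 4.0 than the unweighted mean (3.0) would.
reviews = [Review(4.0, 0.9), Review(2.0, 0.3), Review(3.0, 0.3)]
print(aggregate(reviews))  # 3.4
```

Many refinements are possible within the same scheme (separate feasibility and track-record scores, normalising each reviewer's scores, or discounting extreme confidence values); the point is only that aggregation across several broad-area reviewers is cheap to administer while still pooling judgment, unlike a single assessment by one or two College members.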