By Peter Bridgewater (University of Canberra), Associate Editor for People and Nature.

Read the article discussed in this blogpost here.

Macleod, Brandt, and Dicks offer an interesting deep dive into how evidence is gathered and used, focussing on biodiversity management in New Zealand’s extensive agricultural landscapes. They start from the premise that collective action at all levels of government is needed to achieve positive trends in biodiversity, rather than the negative trends we see at present; that international goals and targets frame the actions required at national and local levels; and yet, in the end, it comes down to individual stakeholders. Of course, to achieve realistic actions, quality evidence needs to be available, easily findable, and dependable.

The paper underlines the complexity that comes from having more than 2.5 million papers published annually – a total that just keeps rising as journals reinvent themselves or spin off new titles that are more of the same. That is why, as an aside, I feel privileged to work with the BES stable of journals, whose ancient lineage remains strong, with new titles like PaN reinforcing 21st-century developments on those strong legs. And while the assessment industry, exemplified by IPBES, is now gathering steam in putting degrees of certainty around conclusions, this is rarely evident in individual papers.

So, the authors review two of what they see as the most persistent deficiencies in decision-making – “evidence disparity” and “evidence complacency.” The former is the familiar problem of the science empire creating knowledge that may be valuable but does not answer the pressing questions posed by decision makers (the lack of links between science and policy); the latter is the equally problematic issue of evidence that exists, perhaps even in assessments, but is not sought or used by the policy empire (think of all the IPCC reports). A worse problem they identify is what I would call evidence selectivity, where out-of-date or biased knowledge is used to substantiate a particular policy position. And we have all seen examples of that!

The authors seek to remedy these problems through boundary science, which needs boundary-spanning thinking, or even institutional structures, to link evidence and policy. Here they posit that the production of “accurate, concise and unbiased syntheses” of evidence can be helpful. They recognise that science and its practice are not necessarily unbiased, adding that the use of structured processes (expert panels, for example) can help draw out the fuzziness in some of the available evidence. And so, they set the scene for a fascinating case study from New Zealand (Aotearoa).

In the case study they assess giving local stakeholders a voice in setting biodiversity priorities, bringing global evidence to local realities, and making “wise use” of local expertise. They used the International Conservation Evidence Initiative as a basis, adapted to New Zealand’s needs.

The paper is well-structured and has plenty of excellent figures to help illustrate the key points. The authors make a particular point that resonated with me: “the crucial role that a boundary-spanning team plays in gathering, organising, summarising, and integrating datasets to address evidence disparity and complacency issues affecting local biodiversity management decisions required by global policy.” We absolutely need more boundary spanners: people who are comfortable working between the natural and social sciences, those who can work between science sensu lato and policy, and those who can link back to the “big data” emerging in biodiversity through examples like the Global Biodiversity Information Facility (GBIF) and the many national initiatives that feed it.

Their key conclusion is that “giving local stakeholders from a diverse range of roles and interests a voice in setting priorities, tailoring global evidence systematically to meet local needs, and making wise use of local biodiversity specialists to enhance the accuracy and reliability of their judgements directly addresses the three principles of good evidence synthesis.” They also identify a fourth principle, “making evidence accessible”, recognising that a lack of infrastructure for discovering, retrieving, and processing relevant information from the scientific literature contributes to evidence complacency.

So, a very nice piece of work. My only beef is with the phrase “biodiversity outcomes,” increasingly used (presumably) to mean positive biodiversity change but, as written, essentially meaningless. Which is why, I guess, it is so popular with politicians…

Finally, there is, it seems, some way to go to resolve evidence disparity and complacency, but this study sets us on the path to doing so. It is a must-read (and act on!) for all those taking part in the burgeoning pile of biodiversity-related assessments, whose mass seems to grow even as the rate of negative biodiversity change keeps increasing.