24 May 2018
Ulf Sandström, Peter van den Besselaar Policymakers and researchers have long sought measures to compare countries’ scientific performance. The
most widely used has been to divide levels of public R&D spending by numbers of publications or citations. Simply dividing funding by outputs, however, is not likely to give an accurate portrait of a research system’s efficiency. Countries differ greatly in how their research budgets are organised and administered, in how PhD studentships are financed, for example. And while in theory the data for OECD statistics are collected in the same way everywhere, in practice this is not the case. These factors and others make it unwise to use R&D spending levels when comparing the performance of national research systems. In this paper we propose an alternative.
24 April 2018
Ulf Sandström, Jörg Müller, Anne Laure Humbert, Sandra Klatt The present paper reports the findings of the cross-country survey on gender diversity in R&D teams across Europe and its link to performance indicators, carried out as part of the GEDII project. The empirical evidence is based on 1,357 complete questionnaire submissions across 159 teams in the following 17 countries: Austria, Belgium, Czech Republic, Denmark, Finland, France, Germany, Italy, Lithuania, the Netherlands, Norway, Poland, Portugal, Spain, Sweden, Switzerland and the UK.
Most teams were recruited from Spain (approximately 500 individual responses) and Sweden (approximately 300 responses), followed by Germany, the UK and the Netherlands with approximately 100 individual responses each. The fieldwork was conducted between March 2017 and January 2018. Despite concerted efforts, response from the private sector was negligible.
R&D teams reaching a sufficiently high response rate threshold were included in the analysis of the diversity-performance link.
Web of Science publications as well as patents were collected for all members of the participating groups. Bibliometric indicators, including size-dependent measures such as Field Adjusted Performance (FAP) and the Percentile Model (PModel), were calculated in order to compare the performance of research groups across scientific fields. Patent indicators counted the number of patents per team.
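The exact FAP and PModel formulas are defined elsewhere in the bibliometric literature and are not spelled out here. As a rough, hypothetical illustration of what a size-dependent, field-adjusted indicator does, one common normalization divides each paper's citation count by the mean citation rate of its field and sums the result per team, so that additional papers can only raise the score; the field names and reference values below are invented for the example.

```python
# Illustrative field normalization, NOT the exact FAP formula used in
# the paper: each paper's citations are divided by an assumed mean
# citation rate for its field, then summed per team (size-dependent).

field_means = {"physics": 10.0, "sociology": 4.0}  # assumed reference values

def normalized_output(papers):
    """Sum of field-normalized citation scores for a team's papers."""
    return sum(p["citations"] / field_means[p["field"]] for p in papers)

team_papers = [
    {"field": "physics", "citations": 20},   # 2.0 field-normalized
    {"field": "sociology", "citations": 2},  # 0.5 field-normalized
]
score = normalized_output(team_papers)  # 2.5
```

Because the score is a sum rather than an average, it rewards both the volume and the field-relative impact of a team's output, which is what "size-dependent" means in this context.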
Gendered processes within teams were captured through the Gender Diversity Index (GDI), a composite indicator developed in another part of this project. The GDI measures the representation and attrition of women and men within teams along seven dimensions of diversity: education, age, marital status, care responsibilities, team tenure, seniority and contract type. The GDI provides a score bounded between 0 and 1, where 1 signals a more inclusive team.
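The GDI's precise construction is defined by the GEDII project and not reproduced here, but the general shape of such a bounded composite can be sketched: score each dimension by how evenly women and men are represented within its categories, then average across dimensions. Everything below (the balance formula, the dimensions, the team data) is an invented illustration, not the project's formula.

```python
# Hypothetical sketch of a [0, 1]-bounded diversity composite;
# the actual GDI formula is defined in the GEDII project.

def diversity_score(team, dimensions):
    """Average, over dimensions, of how evenly women and men are
    represented within each dimension's categories
    (0 = entirely one-sided, 1 = perfectly even). Illustrative only."""
    dim_scores = []
    for dim in dimensions:
        categories = {}
        for member in team:
            categories.setdefault(member[dim], []).append(member["gender"])
        per_cat = []
        for genders in categories.values():
            share_w = genders.count("F") / len(genders)
            per_cat.append(1 - abs(share_w - 0.5) * 2)  # 1 when 50/50
        dim_scores.append(sum(per_cat) / len(per_cat))
    return sum(dim_scores) / len(dim_scores)

team = [
    {"gender": "F", "seniority": "senior", "contract": "permanent"},
    {"gender": "M", "seniority": "senior", "contract": "permanent"},
    {"gender": "F", "seniority": "junior", "contract": "fixed-term"},
    {"gender": "M", "seniority": "junior", "contract": "fixed-term"},
]
score = diversity_score(team, ["seniority", "contract"])  # 1.0: balanced everywhere
```

Averaging per-dimension balance scores keeps the composite bounded in [0, 1] regardless of team size, which is the property the GDI description above relies on.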
Our preliminary analysis shows that more inclusive teams – that is, teams with a score close to 1 on the Gender Diversity Index – tend to perform better and generate more research output. When controlling for gender stereotypes, gender balance and the representation of women within teams, a score of 1 on the GDI is associated with an increase of 0.91 FAP. Less inclusive teams need on average an additional 0.91 senior researchers in order to perform as well as more inclusive teams.
There is no statistically significant effect on the quality rank of the published research (Percentile Model). Initial modelling also does not indicate a significant mediation effect of team processes such as team climate, power disparity, perception of leadership style or diversity climate.
4 December 2017
Ulf Sandström, Peter van den Besselaar Understanding the quality of science systems requires international comparative studies, which are difficult because of the lack of comparable data, especially about inputs in research. In this study, we deploy an approach based on reasonably comparable data that focuses on change instead of on levels of inputs and outputs, as this largely eliminates the problem of measurement differences between countries. Relating input data to output data (top publications in the Web of Science), we first show which national science systems are more efficient (performance increases more strongly than expected from the change in funding) and which are less efficient. We then discuss our findings in light of popular explanations of performance differences: differences in the level of competition, in the level of university autonomy, and in the level of academic freedom. Interestingly, the available data do not support these common explanations. Well-functioning systems are characterized by a well-developed ex post evaluation system combined with relatively high institutional funding and low university autonomy (meaning high autonomy for the professionals). The less efficient systems, on the other hand, have strong ex ante control, either through a high level of so-called competitive project funding or through a powerful university management.
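The change-based comparison described above can be sketched in minimal form: a system counts as more efficient when its growth in top publications exceeds the growth in its funding over the same window. This is our reading of the abstract, not the paper's exact model, and the numbers below are invented.

```python
# Minimal sketch of a change-based efficiency comparison (our reading
# of the approach, not the paper's exact model; data are invented).

def growth(series):
    """Relative change from the first to the last observation."""
    return (series[-1] - series[0]) / series[0]

def efficiency_gap(top_pubs, funding):
    """Positive when top-publication output grew faster than funding."""
    return growth(top_pubs) - growth(funding)

# Hypothetical country: top publications grew 30%, funding grew 10%.
gap = efficiency_gap(top_pubs=[100, 130], funding=[50, 55])  # 0.30 - 0.10 = 0.20
```

Because both terms are relative changes, level differences between countries (currency, accounting conventions, how PhD studentships are booked) cancel out, which is the stated motivation for focusing on change rather than levels.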
28 September 2017
Peter van den Besselaar, Ulf Sandström A recent paper in this journal compares the Norwegian model of using publication counts for university funding with a similar intervention in Australia in the mid-1990s. The authors argue that the Norwegian model (which takes the quality of publications into account) performs better than the Australian one (which disregarded paper quality beyond the requirement of peer review). We argue that these conclusions contradict the evidence provided in the article, and should therefore be considered incorrect.
11 September 2017
Ulf Sandström, Peter van den Besselaar The selection of grant applications is generally based on peer and panel review, but as many studies have shown, the outcome of this process depends not only on scientific merit or excellence, but also on social factors and on how the decision-making process is organized. A major criticism of the peer review process is that it is inherently conservative, with panel members inclined to select applications that are in line with their own theoretical perspective. In this paper we define 'cognitive distance' and operationalize it. We apply the concept and investigate whether it influences the probability of getting funded.
Influence of cognitive distance on grant decisions: https://www.researchgate.net/publication/319546972_Influence_of_cognitive_distance_on_grant_decisions
26 August 2017
Ulf Sandström, Peter van den Besselaar It is often argued that female researchers publish on average less than male researchers do, but that male- and female-authored papers have equal impact. In this paper we try to better understand this phenomenon by (i) comparing the share of male and female researchers within different productivity classes, and (ii) comparing productivity while controlling for a series of relevant covariates. The study is based on a disambiguated Swedish author dataset consisting of 47,000 researchers and their WoS publications during the period 2008-2011, with citations until 2015. As the analysis shows, in order to have impact, quantity does make a difference for male and female researchers alike, but women are vastly underrepresented in the group of most productive researchers. We discuss and test several possible explanations of this finding, using data on personal characteristics from several Swedish universities. Gender differences in age, authorship position, and academic rank explain a considerable part of the productivity differences.
12 August 2017
Ulf Sandström This book portrays Byggforskningsrådet (the Swedish Council for Building Research) as an organisation and research-funding body, in relation to both research-policy and broader political issues, during the period 1960-1992. Six years after the book was published, BFR was wound up as the state reorganised the Swedish research system and ended the traditional sectoral research policy. In the present edition, some sections judged to be of lesser interest have been removed, and minor linguistic adjustments have been made.
Mellan politik och forskning: Byggforskningsrådet 1960-1992: https://www.researchgate.net/publication/319624608_Mellan_politik_och_forskning_Byggforskningsradet_1960-1992
4 August 2017
Peter van den Besselaar, Ulf Sandström, Ulf Heyman More than ten years ago, Linda Butler (2003a) published a well-cited article claiming that Australian science policy in the early 1990s made a mistake by introducing output-based funding. According to Butler, the policy stimulated researchers to publish more, but poorer, papers, resulting in lower total impact of Australian research compared to other countries. We redo and extend the analysis using longer time series, and show that Butler's main conclusions are not correct. We conclude in this paper (i) that the currently available data reject Butler's claim that "journal publication productivity has increased significantly… but its impact has declined", and (ii) that such evidence is hard to find even with a reconstruction of her data. On the contrary, after implementing evaluation systems and performance-based funding, Australia not only improved its share of research output but also increased research quality, implying that total impact was greatly increased. Our findings show that if output-based research funding has an effect on research quality, it is positive, not negative. This finding has implications for the discussions about research evaluation and about the assumed perverse effects of incentives, as the Australian case plays a major role in those debates.
1 July 2017
Peter van den Besselaar, Ulf Heyman, Ulf Sandström In Van den Besselaar et al. (2017) we tested the claim of Linda Butler (2003) that funding systems based on output counts have a negative effect on impact as well as quality. Using new data and improved indicators, we indeed reject Butler's claim. The impact of Australian research improved after the introduction of such a system, and did not decline as Butler states. In their comments on our findings, Linda Butler, Jochen Gläser, Kaare Aagaard & Jesper Schneider, Ben Martin, and Diana Hicks put forward many arguments, but do not dispute our basic finding: the citation impact of Australian research went up immediately after the output-based performance system was introduced. It is important to test Butler's findings about Australia, as they are part of the accepted knowledge in the field, heavily cited and often used in policy reports, but hardly confirmed in other studies. We found that Butler's conclusions are wrong, and that many of the policy implications based on them are simply unfounded. In our study, we used better indicators and the same concept of causality as our opponents, and our findings are independent of the exact timing of the policy intervention. Furthermore, our commenters have not addressed our main conclusions at all, and some even claim that observations do not really matter in the social sciences. We find this position problematic: why would the taxpayer fund science policy studies if they were merely about opinions? Let's take science seriously, including our own field.
21 November 2016
Ulf Sandström, Peter van den Besselaar Do highly productive researchers have a significantly higher probability of producing top-cited papers? Or do highly productive researchers mainly produce a sea of irrelevant papers; in other words, do we find a diminishing marginal return from productivity? The answer to these questions is important, as it may help determine whether the increased competition and the increased use of indicators for research evaluation and accountability have perverse effects or not. We use a disambiguated Swedish author dataset consisting of 48,000 researchers and their WoS publications during the period 2008–2011, with citations until 2014, to investigate the relation between productivity and the production of highly cited papers. As the analysis shows, quantity does make a difference.