Ad verba per numeros


Thursday, July 4, 2019, 08:11 AM
...And when I say “the state of Denmark” I mean current practices of research and publishing.

But first of all, some background on myself:

  • I was a first-generation university student (actually, I was also a first-generation high-school student). Getting a PhD and teaching at a university have far exceeded any expectations I (or anyone in my family) could have held. Therefore, I'm a working-class academic, and most of my concerns about the current state of affairs derive from those personal circumstances.
  • I'm "profesor titular" at the Department of Computer Science of the University of Oviedo (Spain). That means I'm an associate professor (i.e., tenured) but also a civil servant.
  • I'm neither the most productive nor the most brilliant computer scientist in the world, but I think I have made a couple of interesting contributions, and they have received what I consider a fair share of attention (and thus citations).
  • During my career I have received virtually no funding and, thus, I have not been able to offer PhD or postdoc positions. This means that, except for some collaborations with a few coauthors, quite a few of my papers have been solo work. Also, because of the research evaluation approach in Spain, many of them are published in (pay-walled) journals rather than in conferences, which are more common in CS.
  • Having published in pay-walled journals does not mean I don't care about open access (in fact, I provide preprints or author's versions of my papers on my website); it means I simply cannot pay to publish.
  • At this moment, no one has yet completed a PhD under my supervision, and I have not been the principal investigator of any major research project.
  • Therefore, I'm very conscious that my chances of getting a full professorship are close to zero (but I can live with that).
All of this means that I have quite a bit of experience with research, publishing, and research evaluation while, at the same time, I feel free to speak about the issues I find problematic in the wide world of research.

Certainly, I have no solutions for most of them, but that does not mean we cannot openly discuss them. Maybe if we (as a community) come to understand the flaws in the system (and the very different, and usually unfair, circumstances faced by researchers in different countries), we may try to do something to fix them.

In this post I will go through many of those issues, but I cannot promise a coherent review; take it as an assorted selection of the most pressing problems that you may have already suffered, or will suffer, along your career.

Needless to say, I'm not trying to convince you to stop playing the academic game (I'm not pulling the ladder up behind me). What I'm attempting here is to help you understand the hand you have been dealt in this game. That way you may realize that your cards are especially good, which does not make you better, smarter, or harder-working than anyone else; or that they are merely adequate, in which case you still have to play with them, knowing that, no matter how smart or hard-working you are, you will probably not reach as far as other people.

That said, let’s start with the many flaws in the current world of research.

Some of the first problems I want to pay attention to are those derived from the publish-or-perish mentality. Because of it, we are more interested in writing our own stuff than in reading other people's research.

To start with, it is simply ridiculous that major conferences are receiving thousands of papers. For instance, NeurIPS 2019 received almost 7000 submissions.

Those numbers are simply unmanageable and they require thousands of qualified reviewers. NeurIPS 2019, for instance, needed 4500!

I could, of course, accept that conference organizers manage to find all of them but, frankly, I cannot believe it; indeed, I highly doubt that many experts even exist. This means that many reviews at major conferences are actually conducted by graduate students who are still finding their footing in the field. Of course, we could accept that as a necessary evil, but we shouldn't call it peer review.

A consequence of not having real peer review, compounded by the single-blind approach used in most conferences, is that authors' fame and pedigree become strong predictors of acceptance, while underdogs (my kind) tend to be rejected.

Another approach to tackling so many submissions is to go a step further and simply remove the peers from peer review by automating part of the system. I think that is stupid, dangerous, and unfair, but I'm afraid we will eventually suffer it in some parts of the "peer"-review pipeline. Still, I would feel more "appreciated" as a human being if I were desk-rejected by a person and not by a robot.

Another consequence of publishing so many papers, with no one really reading most of them, is that we need automated metrics to evaluate research outcomes in quantifiable ways.

In that regard we have the dumb and dumber approaches: I'm talking about impact factors and acceptance rates. I know the impact factor is stupid and unfair (I live by that standard), but you simply cannot take acceptance rates as proxies for quality either.

Maybe you feel that taking into account the actual citations you have received, or your h-index, would be an improvement. Still, that probably means that you think of yourself as "well-endowed", not that you have carefully pondered the usefulness of those metrics... Indeed, I can't complain in that regard, but I still consider the h-index harmful.
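
For illustration only, here is a minimal sketch of how an h-index is typically computed from a list of per-paper citation counts (the counts below are made up, not mine):

    def h_index(citations):
        """Largest h such that at least h papers have h or more citations each."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Hypothetical citation counts for eight papers.
    print(h_index([120, 45, 30, 12, 7, 5, 2, 0]))  # prints 5

Note that a single very highly cited paper barely moves the number, which is part of why the metric rewards steady volume over the occasional outstanding result.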

We have, of course, altmetrics (I must confess I use them on my website), but they are no better than traditional scientometrics and they can easily become an undesirable incentive: if buzz is important for you as a researcher, you are prone to inflate the importance of your research and contribute to the already worrisome hype in research, or you may try to achieve celebrity status with no relation whatsoever to your actual scientific contributions.

If all this were not bad enough, we must also take into consideration the impact on actual human beings. After all, who is writing all those papers? [1][2]

The truth is that they are mostly the product of the lower castes of academia: myrmidons, Oompa Loompas, minions... Sure, all of them should be grateful to Achilles, Willy Wonka, and Gru; after all, those are the actual breadwinners. But who is getting the glory, and who is putting in most of the labor, to the point of risking their health and even their lives? The hordes of underpaid, temporary, fungible staff.

Of course, the underlings (no offence intended) do it under the premise that, eventually, they will reach the top of the trophic pyramid. Unfortunately, this sheds new light on academia and paints an unpleasant picture of it: academia is a mix of The Hunger Games, drug gangs, and Ponzi schemes. For sure, someone is going to win, but not everybody, and not necessarily you.

There is an additional factor introducing even more noise into an already convoluted situation: funding money. The truth is that not everybody has it: it is unequally distributed among countries and, within countries, it is unevenly distributed among researchers. Indeed, the Matthew effect (the rich get richer) is well known in science: it affects credit (a.k.a. fame and prestige) and, of course, real money.

All of this implies that if you have early access to funding money you are going to be more successful than people who are just as smart and hard-working as you. Needless to say, early access to funding money usually implies later access to funding money; after all, with that money you can hire underlings to help you with your research and, thus, produce more and better publications.

As someone without easy access to funding money, I feel that huge grants are unfair, and I find very attractive some ideas that could disrupt research for the better. In that regard, there are two ideas I find appealing: funding lotteries and peer-to-peer funding allocation. Of course, I'll be upfront here: I like those models because my chances of getting funding under them would be far better than under the current ones. The opposite is also true: if you think current funding models are preferable, it is very likely because you have benefited from them, not because you have real arguments for them being fair or accurate at selecting which projects to fund; actually, they are not.

When talking about money, we cannot forget so-called open access: the noble idea of tearing down the paywalls of journals and digital libraries by having researchers assume the costs of publishing. Pardon my French, but that's bullshit: with fees on the order of thousands of dollars per paper, if you publish in open-access journals it is because you have the money to do it, full stop; underfunded labs and researchers have no way to publish in those journals. So, once again, circumstances that cannot be attributed to the actual merits of researchers induce unfair differences among them.

Hence, we are confusing fame, pedigree, and early access to funding money with research excellence. In other words, meritocracy is a myth, and the sooner you understand that the better, particularly if you are not working at a well-funded, top-tier institution.

I know, it sucks.

You may be wondering what you should do now. I have very little advice to offer; after all, any advice would be tainted by survivorship bias, and I have enough survivor's guilt not to fall for that. What I can do is refer you to two opposite positions: either you cannot be a professor or you can; read both, assess your hand, and decide on that basis.

Finally, read this paper and follow the next rule:

Reach for the minimum (i.e. good enough is the new perfect). Rather than getting caught up in measuring worth by the number of peer-reviewed journal articles published or grant dollars procured, reach instead for the minimum numbers necessary to achieve important benchmarks (such as tenure and promotion). Reaching for the minimum allows for a focus on quality – rather than quantity – and acknowledges the need for balance.

Good luck!

(As usual you can find me on Twitter: @PFCdgayo).


