Responding to reviews of rejected conference papers

This post is concerned with this overall question:

How to make good use of reviews for a rejected conference paper?

The obvious answer is presumably something like this:

Extract TODOs from the reviews. Do your work. Resubmit.

In this post, I'd like to advocate an additional element:

Write a commentary on the reviews.

Why would you respond to reviews of a rejected conference paper?

Here are the reasons I can think of:
  • R1: You received a review that is clearly weak and you want to complain publicly. I recommend against this complaint model. It is unfriendly toward the conference, the chairs, and the reviewers. If one really needs to complain, then one should do so in a friendly manner by communicating directly with the conference chair.
  • R2: You spot factual errors in an otherwise serious review and you want to defend yourself publicly. There is one good reason for doing this: just to get it off your chest. There are two good reasons for not doing it. Firstly, chances are that your defense is perceived as an unfriendly complaint; see above. Secondly, why bother and who cares? For instance, sending your defense to the chairs would be weird and useless, I guess.
  • R3: You want to make good use of the reviews during revision and document this properly.

R3 makes sense to me. 

R3 is what this post is about.

We respond to reviews anyway when working on revisions of journal submissions because we have to. One does not make it through a major revision request for a journal paper unless one makes a real effort to properly address the reviewers' requests.

Some conferences run a rebuttal model, but this is quite different. A rebuttal is about making reviewers understand the paper; revision of a journal paper is more about making enough presentational improvements, doing bits of extra thinking, and even carrying out additional research so that the revision is ultimately accepted.

In the case of a rejected conference paper and its revision, I suggest that the commentary be written as if the original reviewers were to decide on the revision, even though this will not happen, of course. It remains to be decided on a case-by-case basis whether, how, when, and to whom the commentary should be made available, and for what purpose.

Not that I want my submissions to be rejected, but it happens, because of competition and because of real issues in a paper or the underlying research. My ICMT 2016 submission was received kindly enough, but rightly rejected. The paper has now been revised, and the paper's website features the ICMT 2016 notification and my commentary on the reviews. In this particular case, I estimated that public access to the notification and my commentary would do more good than harm. At the very least, I can provide a showcase for what I am talking about in this blog post.

With the commentary approach, there are some problems that I can think of:
  • P1: Reviewers or conference chairs feel offended. Without being too paranoid, the reviewers or the chairs could perceive the commentary as criticism of their work. For instance, a chair may think that some review was not strong enough to be publicly exposed as a data point for the conference. I have two answers. Firstly, an author should make an effort to avoid explicit or subliminal criticism. (We know how to do this from dealing with journal reviews.) Secondly, dear reviewers and chairs, maybe the world would be a better place if more of the review process were transparent?
  • P2: Prior reviews and your commentary could be misused by reviewers. There is good reason for not exposing reviewers to other reviews of the same paper (or a prior version thereof), at least not until they have cast their vote, because they may get biased or they may adopt these other views without performing a thorough analysis of their own. This is a valid concern. This problem may call for some rules as to i) which conferences are eligible for passing reviews and commentary to each other and ii) when and how commentary can be used by reviewers.
  • P3: Your commentary is perceived as putting pressure on the reviewers of the revision. At this stage, I don't propose that reviewers be required in any way to consider the commentary on a previous version of a paper, because reviewing already takes too much time. All I am saying is that reviewers should be given the opportunity to access previous reviews and the author's commentary, at least at some stage of the review process. Reviewers are welcome to ignore the commentary. In fact, some reviewing models may be hard to reconcile with the notion of commentary. For instance, I don't know whether it would work for the double-blind model.

In summary, commentary on rejected conference submissions is a bit like unit testing. We should do it because it helps us test our interpretation of the reviews in a systematic manner. Without such testing we are likely to i) complain non-constructively about the reviews; ii) ignore legitimate and critical issues pointed out by the reviews; and iii) as a consequence, resubmit suboptimal revisions and keep program committees busy. So we do not really write this commentary for future reviewers; rather, we write the commentary for ourselves. However, we write it in a style such that it could be put to good use by future reviewers.

Once the community gets a bit more used to this idea, we could deal with commentaries pretty much in the same way as with the optional appendices of some conferences. One risk is that of bias when reviewers are exposed to previous reviews and author claims in the commentary. Another risk is that a badly implemented process for commentaries would just cause more work for both program committees and authors. Maybe I am being a bit too revolutionary here, but I look forward to a system where we break out of the static nature of program committees and allow review results and author responses to be passed on from conference to conference. I am thinking of a more social process of reviewing and revision.

Regards,
Ralf
