On Integrity in Reviewing

Image credit: OtherRealisms (Flickr)

After my trip to New York City, I attended the ICDM conference in Atlantic City. I presented my paper Learning Semantic Similarity for Very Short Texts at the ReLSD workshop on Saturday - which stands for Representation Learning for Semantic Data. The workshop was organized by, among others, Patrick Gallinari and hosted by Sheng Gao, and some notable names on the (tentative) program committee were the Bengio brothers, Ronan Collobert, Hugo Larochelle, Yann LeCun, and Jason Weston (pardon me if I have forgotten some).

The paper was initially submitted as a poster to a top conference in information retrieval, but it was rejected. In fact, the first three reviewers were in favour of accepting the paper, and the meta-review concluded that

“[The] work presented in this short paper [is] interesting. While there were some minor concerns about the potential impact of the work and the details of the experimental evaluation, the reviewers felt that this makes for a good short paper. This paper should spark some good discussions at the poster session.”

Well, yay, that's what a poster session is all about, no?

However, the chairs decided to reject my paper in the end. Why? I decided to send an e-mail to the chairs, who responded quickly:

“Once all the reviews and the metareview were in, [we] did a first round of quality control to examine papers that were borderline […] Your paper fell in this category. There was substantial disagreement among the reviewers, with the expert reviewer being more negative than the two reviewers with less expertise. The metareviewer’s recommendation was based on the first three reviews only.”

Okay, I agree: the reviewers disagreed (oxymoron score +1!). I got an accept, a weak accept, and a weak reject (which still averages out to a weak accept), but the last reviewer had indicated being an expert in the field.

The e-mail, however, went on:

“We then brought in a second expert reviewer. This expert reviewer recommended to reject your paper. After re-calibrating the scores for your paper with the scores for other papers, we decided not to accept your paper. Unfortunately, at that stage we could no longer reach the first metareviewer. A second metareview explaining the decision has now been added for your paper.”

Wait, what? Bringing in a fourth (expert!) reviewer? Rejecting my paper based (only) on this review? After that, not being able to contact the first meta-reviewer - I mean, really? And then a second meta-review just appearing out of thin air (which, above all, greatly resembled the e-mail the chairs had just sent to me)?

At this point it was already half past midnight, and, being tired, I accepted the e-mail as it came. I sent a short, polite message back: “Thank you for your investigation and the explanation.” My paper just needs more experiments, some overall improvements, and a better motivational context to get accepted at a major conference. But the story is not finished yet.

Coffee helps the paper reviewing process. Image courtesy: Steve Hanna (Flickr).

A month later, the titles of the accepted papers were published on the conference's website. I went through the list, looking for potentially related work and interesting applications. At this point I came across a paper submitted by one of the conference chairs; in fact, by the very person I had e-mailed after the reviews came in. To my surprise, the subject of this paper greatly resembled that of my own rejected paper. Even the title was somewhat similar. I skimmed the paper, and indeed, we were both trying to solve the same kind of problem in short text similarity; luckily, the authors used different techniques than I did.

I am going to let you draw your own conclusions. Did the chair write the second meta-review him- or herself? Did he or she deliberately reject my paper in favour of his or her own? It is well known that papers from conference organizers and chairs tend to be disproportionately represented compared to papers from other organizations and research groups. This was the case at ICDM, and it will probably have been the case at the other conference as well. But is this a sound practice?

Notice that I have not linked any specific person or event to this story. As this is a plea for integrity in reviewing and in science in general, I try to act with integrity myself. Nor is this an incentive to start an argument over the entire issue; it is meant to address a general problem in paper reviewing.

Despite the entire story, we decided to clean up the paper based on the comments from the original reviewers - which only took a day or so. We added some extra experiments, motivation, and context, and sent it to the ReLSD workshop mentioned earlier. If you are interested in the paper itself, you can currently find it only on arXiv, as it has not yet appeared on IEEE Xplore.

As always, you can leave comments below!