The NIPS Experiment

So, this blog post has been floating around Facebook for the last couple of days. The author describes an experiment that was run at a recent comp sci conference. Essentially, the organizers split the submitted papers between two independent program committees, each with its own set of reviewers, but arranged for 10% of the papers to be reviewed by both. So each committee actually examined 55% of the papers, not just 50%. Afterwards, they compared the decisions on the overlapping papers and found the two committees disagreed on about 25% of them. The blog post goes into some detail about how this actually works out to a lot worse than "just 25%," because of how many papers they were supposed to accept and so on (it's worth a read; it's really not long).
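
To make that "a lot worse than 25%" point concrete, here's a rough back-of-the-envelope sketch of my own (not taken from the post); the numbers are assumptions based on the figures mentioned here: a ~22.5% acceptance rate and ~25% disagreement on the overlapping papers.

```python
# Rough sketch (my own assumptions, not the post's exact analysis):
# why a ~25% raw disagreement rate is worse than it sounds, assuming
# each committee accepts ~22.5% of papers.

accept_rate = 0.225   # assumed fraction of submissions each committee accepts
disagreement = 0.25   # assumed fraction of overlap papers with split decisions

# Every split decision is one accept plus one reject, so half of the
# disagreements land in each committee's "accept" pile.
flipped = (disagreement / 2) / accept_rate
print(f"{flipped:.0%} of accepted papers would have been "
      "rejected by the other committee")   # ~56%

# Baseline: two committees accepting 22.5% of papers uniformly at
# random would disagree on 2 * p * (1 - p) of papers.
random_disagreement = 2 * accept_rate * (1 - accept_rate)
print(f"Random-committee disagreement: {random_disagreement:.0%}")  # ~35%
```

In other words, under these assumptions more than half of the accepted papers would have been rejected by the other committee, which sits much closer to the coin-flip baseline (~35% disagreement) than to perfect agreement (0%).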

I think the reason it's getting a lot of attention is that it points out more concretely what everyone already knew: getting a paper accepted to a conference has as much to do with chance and luck as anything else. It's something I've definitely noticed myself; when you get a paper back with a positive set of reviews and yet those same reviewers recommend rejection, you've got to wonder about the system.

Anyway, it also re-raises the question of how selective conferences should be. In comp sci it's tricky, because conferences are often weighted as heavily as journals (if not more so) when it comes to getting a tenure-track position or a promotion. But if your odds are really just a roll of the dice, it starts to seem very unfair to judge people on whether person A got in and person B didn't. Or, as a friend put it: her supervisor has about 14 (yes, 14!) students and so has a good chance of landing a bunch of publications each year just from the odds. Someone with only one or two students is naturally going to have far fewer publications, but that doesn't mean, and shouldn't be read as, their work being "worse."

Personally, these artificial limits of "can only accept 22.5%" always seem exactly that: artificial. Such strict limits leave no room for flexibility, so the conference ends up either rejecting papers that should get out there or accepting papers that shouldn't. There's nothing to say that exactly 22.5% of the papers, and only 22.5% of the papers, are worth accepting each year.

This reminds me of grade curving: the idea that only some preset percentage of students should be allowed to get an "A." This, in my opinion, goes completely against what we should be trying to do with learning. It should never be about ranking who learned more, but about getting everyone to a level of mastery of the subject. It has always made more sense to me to let everyone who reaches some milestone receive the same grade. And I feel the same should hold for conferences: any paper that clears some bar should get in (as a poster, a talk, a lightning talk, something).
