[D] Many times I feel additional experiments during the rebuttal make my paper worse
Back when I first started reviewing for major conferences, it was common to give and receive reviews saying "I don't have major concerns".
In the past 3-5 years, the field has spent significant effort cracking down on low-quality reviews, which is great. But a side effect is that we don't see these kinds of "easy" reviews anymore. It feels like reviewers are obliged to find something wrong with the paper to show they are doing their job. Even on papers where all reviewers lean accept, it's common for the authors to be asked for 5-10 additional numbers/plots during the rebuttal.
Many times, these experiments are detrimental. Most of them are "what ifs": how about a different backbone, task, dataset, or some specific setting? And whenever something doesn't work (especially within the rebuttal timeframe), the reviewer gets a good "gotcha" moment. I'm not only complaining as an author but also as a reviewer. Several times, I've had to step in during the discussion and say, "I don't think X experiment suggested by Reviewer Y is important," and every time the AC sided with me.
The requirement for experiments should always be "sufficient to support the core claims," not "exhaustively covers every barely applicable case." Folks, it's OK to say "the paper passes the bar, but I have curiosity questions that do not affect my rating" (I have written this line many times in my reviews).