Review comments play an important role in the improvement of scientific articles. A submission typically goes through multiple rounds of review and revision before reviewers with varying backgrounds arrive at a consensus. However, reviews are not always helpful; sometimes reviewers are unnecessarily critical of the work without justifying their comments. Peer reviews are meant to be critical yet constructive feedback on the scientific merit of a submitted article. However, as the rising number of paper submissions draws novice or less experienced reviewers into the reviewing process, reviewers tend to spend less expert time on their voluntary reviewing duties. This results in lackluster reviews that offer authors few takeaways. The entire scientific enterprise depends heavily on this very human peer-review process. In this paper, we attempt to automatically distinguish between constructive and non-constructive peer reviews. We deem a comment constructive if, despite being critical, it is polite and provides feedback that helps the authors improve their submission. To this end, we present BetterPR, a manually annotated dataset for estimating the constructiveness of peer-review comments. We collect peer reviews from open-access forums and design an annotation scheme to label each review comment as constructive or non-constructive. We further benchmark BetterPR with standard baselines and analyze their performance. We provide our dataset and code (https://github.com/PrabhatkrBharti/BetterPR.git) for further exploration by the community.