The National University of Singapore has announced the winner of a 24-hour coding competition. Here’s the goal:
…a 24-hour programming competition, encourages students and faculty to develop customized applications that advance the search and discovery process of scientific information.
So you might be thinking some cool new scientific tools would come out of this. Maybe students would bring fresh perspectives and new ways of thinking about problems of data discovery? No, instead the winner is a modification of Google search: a tool that ranks scientific output based on reputation.
First place: Zhao Shanheng and Zhong Zhi, of NUS, developed the iRank App, an application that ranks institutes by the number of their papers returned among the top search results. This tool can help students decide where to apply for their PhD or pursue postdoctoral research in their chosen field.
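To make the counting idea concrete, here is a minimal sketch of the ranking logic as described above. The function name, the data shape, and the mock results are all hypothetical; the article does not describe the app's actual data source or implementation.

```python
from collections import Counter

def rank_institutes(top_results):
    """Rank institutes by how many of their papers appear in the top search results.

    `top_results` is assumed (hypothetically) to be a list of dicts, each
    carrying the institute a paper came from. Returns (institute, count)
    pairs in descending order of count.
    """
    counts = Counter(result["institute"] for result in top_results)
    return counts.most_common()

# Mock top search results for some query -- stand-ins for real data.
results = [
    {"title": "Paper A", "institute": "NUS"},
    {"title": "Paper B", "institute": "MIT"},
    {"title": "Paper C", "institute": "NUS"},
    {"title": "Paper D", "institute": "Stanford"},
    {"title": "Paper E", "institute": "NUS"},
]

print(rank_institutes(results))  # NUS first, with 3 papers in the top results
```

Note that nothing here looks at the papers themselves: the ranking is entirely a function of what the search engine chose to surface, which is exactly the point made below.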
Google’s reputation-based system has advantages, but it also has risks. It requires you to trust external sources of verification — the peer-review system depends on the quality of the peers. That seems highly unscientific. It’s a second-person or even third-person view of data.
So the first place award goes to an app that tells you what the search engines tell you about what the peers tell you about papers from institutes. It is a popularity-contest app that is at least three steps removed from an actual review of the source material.
The huge irony, of course, is that the contest appears to have had a controlled/trusted system to determine the winner. How quaint. Perhaps they should have instead thrown the apps into public search engines and let the one that hits the top search results win. Otherwise, I would say their decision to review and vote for iRank App in a closed system contradicts the mission of the iRank App…