Thursday 30 September 2010

RecSys 2010: industry

One striking feature of this year's ACM Recommender Systems conference was a large industry presence: from SAP, Bloomberg, IBM and Sony, to Yahoo! and Google, as well as a host of smaller outfits. I particularly enjoyed comparing notes with the teams from YouTube, Microsoft Bing, and the B2B recommendation company MediaUnbound (now Rovi), which has provided music recommendation services for MTV and Ericsson. Some clear themes emerged from those conversations, as well as from the various panels in which we were speaking.

Some researchers in the field might be surprised to hear that the predominant industry view is that predicting ratings for items isn't a very useful way to build or evaluate a recommender system. This is partly because "users don't get ratings", as Palash Nandy from Google put it at the opening panel, "they always rate 1 or 5", and partly because improving the prediction of users' ratings is at best weakly aligned with the business aims of most systems. In fact, all of the industry people I spoke to were suspicious of offline evaluation metrics for recommendations in general. The YouTube team had even compared some standard offline metrics evaluating two recommendation algorithms with results from online controlled experiments on their live system. They found no correlation between the two: the offline metrics gave no indication of which algorithm would do better in practice. Finally, there was a strong industry concern that the design of the Netflix Prize competition had led many researchers into developing methods that could not possibly scale to real-world scenarios. Most industry people I spoke to described scalability as their number one consideration.
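
To make the contrast concrete, here is a minimal sketch, in Python and entirely my own illustration rather than anyone's production code, of the two kinds of measurement at issue: the offline accuracy score the Netflix Prize rewarded, and the sort of engagement figure a live controlled experiment reports.

    import math

    def rmse(predicted, actual):
        """Offline metric: root-mean-square error between predicted and
        held-out ratings, the quantity the Netflix Prize optimised."""
        n = len(actual)
        return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

    def ctr(impressions, clicks):
        """Online metric: the fraction of recommendations shown to real
        users that they actually clicked, as measured in a live test."""
        return clicks / impressions if impressions else 0.0

The YouTube result, restated in these terms, was that the algorithm scoring better on the first kind of measure was no more likely to score better on the second.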

There were two strong beliefs that I heard stated many times over the last few days: improvements to the overall user experience dominate any likely enhancements to current algorithms; and recommendations from friends remain far more powerful than those from machines. So the sort of research that will excite industry in the near future is likely to focus on topics such as:
  • providing more compelling explanations for machine recommendations
  • exploiting social relationships to make better recommendations
  • optimising the user's interaction with particular recommenders
Where does this leave the academic researcher with good ideas but no access to real users? This question was raised eloquently by Joe Konstan from the pioneering GroupLens team at the University of Minnesota, in a panel debating the value of competitions for scientific research. Joe's plea was for someone to offer researchers not a big cash prize or even a huge dataset, but rather an experimental platform where their systems could be deployed to a modest group of users under controlled conditions, the uptake of their recommendations recorded, and the users in the test group asked directly for feedback. This approach would meet most of industry's concerns about academic research head-on, but it raises obvious issues of privacy, security, maintenance, and trust between businesses and their customers: probably enough to scare away even the most research-friendly commercial organisation.
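
The mechanics of what Joe asked for are well understood, even if the trust issues are not. A toy sketch, with every name hypothetical, might assign each user deterministically to one candidate recommender and record whether its suggestions are taken up:

    import hashlib

    ALGORITHMS = ["baseline", "candidate"]  # hypothetical arms of the trial

    def assign_bucket(user_id, experiment):
        """Assign a user deterministically to one arm of the experiment,
        so that they see the same algorithm on every visit."""
        digest = hashlib.sha1((experiment + ":" + user_id).encode()).hexdigest()
        return ALGORITHMS[int(digest, 16) % len(ALGORITHMS)]

    def log_event(user_id, item_id, event):
        """Record an impression, a click (uptake) or explicit feedback;
        a real platform would write these to durable storage."""
        print(user_id, item_id, event, sep="\t")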

One final thing that might interest researchers with concerns about any of these issues: pretty much everyone said that they were hiring.

1 comment:

  1. Hi Mark

    At Xyggy (www.xyggy.com) we are addressing the issues highlighted by the industry. Have a look at the Music Recommender demo (http://www.xyggy.com/music.php), which is based on Oscar Celma's last.fm data set.

    Xyggy is not CF-based and operates on the premise: given one or more things, find other similar things. The results are dynamic (real-time), the UI is fully interactive, and the system can scale to any size.

    Enter an artist to get started; drag one or more artists from the results into the query box to improve relevance; drag items out of the query box; toggle items on and off in the query box; change keywords mid-stream.

    The demo is a work in progress; in a production service, the user would also be able to drag one or more songs into the query box to get started.

    Using "david bowie" as an example, the results also contain "david bowie" because david bowie is featured with other artists. Drag "david bowie" into the query box and see the relevance change. To overcome this we plan to provide autosuggestion.

    Dinesh
