This year's tasks employ the Netflix Prize training data set. This data set consists of more than 100 million ratings from over 480 thousand randomly chosen, anonymous customers on nearly 18 thousand movie titles. The data were collected between October 1998 and December 2005 and reflect the distribution of all ratings received by Netflix during this period. The ratings are integers on a scale from 1 to 5 stars.
This year's competition consists of two tasks. Each team may participate in either task or in both.
Task 1 (Who Rated What in 2006): Your task is to predict which users rated which movies in 2006. We will provide a list of 100,000 (user_id, movie_id) pairs, where the users and movies are drawn from the Netflix Prize training data set. None of these pairs appears as a rating in the training set. For each pair, you must predict the probability that it was rated in 2006 (i.e., the probability that user_id rated movie_id at some point during 2006). The actual rating value is irrelevant; we want only whether that user rated the movie sometime in 2006. The specific date in 2006 on which the rating was given is also irrelevant.
Task 2 (How Many Ratings in 2006): Your task is to predict how many additional ratings a subset of the movies in the Netflix Prize training data set received in 2006. We provide a list of 8,863 movie_ids drawn from the training data set; for each of these titles, predict the total number of additional ratings given in 2006 by all users in the training data set. (Again, the actual rating values are irrelevant; we want only the number of times each movie was rated in 2006. The date in 2006 on which a rating was given is also irrelevant.)
Winners will be determined, for both tasks, by computing the root mean squared error (RMSE) between your individual predictions and the correct answers. That is, if your prediction for an item is Y, the correct answer for that item is X, and there are n items, then RMSE = sqrt((sum over all items of (X - Y)^2) / n). The entry with the smallest RMSE will be judged the winner; in case of a tie, the entry with the earliest submission date wins.
- In the case of "Who Rated What in 2006", the correct answer is 1 if the user rated the movie in 2006, and 0 otherwise.
- In the case of "How Many Ratings in 2006", the correct answer is the actual number of ratings received. However, RMSE is computed slightly differently than in the first task: both the actual and the predicted counts are log-transformed first. That is, if the actual number of ratings is X and your predicted number is Y, we compute the RMSE between ln(1+X) and ln(1+Y), where "ln" is the natural logarithm.
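The scoring described above can be sketched as follows. This is an illustrative computation, not official scoring code; all of the example predictions and answers are hypothetical.

```python
import math

def rmse(predictions, answers):
    """Root mean squared error between two parallel lists of numbers."""
    n = len(predictions)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(answers, predictions)) / n)

# Task 1: answers are 0/1 indicators, predictions are probabilities.
task1_answers = [1, 0, 1, 1]            # hypothetical truth values
task1_preds = [0.8, 0.1, 0.6, 0.9]      # hypothetical predicted probabilities
task1_score = rmse(task1_preds, task1_answers)

# Task 2: both actual and predicted counts pass through ln(1 + x) first.
task2_answers = [120, 3, 45]            # hypothetical 2006 rating counts
task2_preds = [100, 5, 40]              # hypothetical predicted counts
task2_score = rmse([math.log1p(y) for y in task2_preds],
                   [math.log1p(x) for x in task2_answers])
```

Note that for Task 2 the log transform compresses large counts, so an error of 20 ratings on a popular title is penalized far less than an error of 2 ratings on an obscure one.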
Note: We reserve the right to use a different evaluation criterion if no team can achieve the baseline result for a task. For example, for Task 1 the baseline assigns every pair the same probability, namely the base rate: the proportion of pairs in the test set that were actually rated (a quantity unknown to contestants).
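As a sketch of why this baseline is natural, predicting the base rate p for every pair minimizes the expected squared error against 0/1 labels, and its RMSE equals sqrt(p * (1 - p)). The labels below are hypothetical:

```python
import math

# Hypothetical 0/1 test labels (1 = the pair was rated in 2006).
labels = [1, 0, 0, 1, 0]

# Base rate: the fraction of pairs that were rated.
p = sum(labels) / len(labels)

# Baseline submission: predict p for every pair.
baseline_preds = [p] * len(labels)
baseline_rmse = math.sqrt(
    sum((x - y) ** 2 for x, y in zip(labels, baseline_preds)) / len(labels)
)

# For a constant prediction equal to the base rate, the RMSE is exactly
# sqrt(p * (1 - p)), the standard deviation of a Bernoulli(p) variable.
```

Any entry must beat this constant prediction to clear the baseline.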
Following the award of the KDD Cup prizes, the answer sets will be made available at the KDD Cup website and the Netflix Prize website.