Accepted Papers

Dual Sequential Prediction Models Linking Sequential Recommendation and Information Dissemination

Qitian Wu (Shanghai Jiao Tong University); Yirui Gao (Shanghai Jiao Tong University); Xiaofeng Gao (Shanghai Jiao Tong University); Paul Weng (Shanghai Jiao Tong University); Guihai Chen (Shanghai Jiao Tong University)

Sequential recommendation and information dissemination are two traditional problems in sequential information retrieval. The common goal of the two problems is to predict future user-item interactions based on past observed interactions. The difference is that the former deals with users' histories of clicked items, while the latter focuses on items' histories of infected users. In this paper, we take a fresh view and propose dual sequential prediction models that unify these two thinking paradigms. A user-centered model takes a user's historical sequence of interactions as input, captures the user's dynamic states, and approximates the conditional probability of the next interaction for a given item based on the user's past clicking logs. By contrast, an item-centered model leverages an item's history, captures the item's dynamic states, and approximates the conditional probability of the next interaction for a given user based on the item's past infection records. To take advantage of the dual information, we design a new training mechanism that lets the two models play a game with each other, using the predicted score from the opponent as a feedback signal to guide training. We show that the dual models can better distinguish false negative samples from true negative samples than single sequential recommendation or information dissemination models. Experiments on four real-world datasets demonstrate the superiority of the proposed model over strong baselines, as well as the effectiveness of the dual training mechanism between the two models.
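The opponent-feedback idea from the abstract might be sketched as follows. This is a toy illustration only, not the authors' implementation: plain embedding dot-products stand in for the sequential (user- and item-centered) encoders, and the re-weighting rule, where each model down-weights a negative sample by the opponent's predicted score, is an assumed simple form of the feedback signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 8 users, 10 items, 4-dim embeddings (all hypothetical).
n_users, n_items, dim = 8, 10, 4

# Two independent scorers stand in for the user-centered and item-centered
# models; in the paper these would be sequential encoders over histories.
U_user = rng.normal(scale=0.1, size=(n_users, dim))
I_user = rng.normal(scale=0.1, size=(n_items, dim))
U_item = rng.normal(scale=0.1, size=(n_users, dim))
I_item = rng.normal(scale=0.1, size=(n_items, dim))

def score(U, I, u, i):
    """Predicted interaction probability: sigmoid of an embedding dot-product."""
    return 1.0 / (1.0 + np.exp(-(U[u] @ I[i])))

def dual_step(u, i, label, lr=0.5):
    """One dual-training step. On a negative sample, each model's loss is
    re-weighted by (1 - opponent's score), so candidates the opponent rates
    highly -- likely false negatives -- contribute less gradient. The exact
    feedback rule here is assumed for illustration."""
    p_user = score(U_user, I_user, u, i)  # user-centered prediction
    p_item = score(U_item, I_item, u, i)  # item-centered prediction
    w_user = 1.0 if label == 1 else 1.0 - p_item
    w_item = 1.0 if label == 1 else 1.0 - p_user
    # d(weighted BCE)/d(logit) = w * (p - y); chain rule gives the
    # embedding gradients below (computed before updating either side).
    g_user = w_user * (p_user - label)
    g_item = w_item * (p_item - label)
    dU, dI = g_user * I_user[i], g_user * U_user[u]
    U_user[u] -= lr * dU
    I_user[i] -= lr * dI
    dU, dI = g_item * I_item[i], g_item * U_item[u]
    U_item[u] -= lr * dU
    I_item[i] -= lr * dI
```

Repeated calls to `dual_step` with `label=1` raise a pair's predicted probability under both models, while negative samples that the opposite model scores highly are effectively discounted rather than pushed down at full strength.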

