Publications

The project presents “Showcases”: publications that we highlight as illustrations of the project’s results. Showcases are mature, significant publications, selected because they represent the nature and diversity of the results the project has achieved.

Full publications list:

  • M. Larson, D. Tikk, and R. Turrin, “Crowdsourcing and Human Computation for Recommender Systems,” in Proceedings of the ACM RecSys CrowdRec 2015 Workshop on Crowdsourcing and Human Computation for Recommender Systems, 2015.
    [BibTeX] [Abstract] [Download PDF]

    The CrowdRec 2015 workshop is the third in a series of international workshops that is dedicated to promoting the effective and responsible integration of human intelligence into recommender systems. The success of today’s recommender systems can be attributed to their ability to exploit human input in the form of user-expressed preferences and the implicit information inherent in user-item interactions. Recommender systems that leverage “The Crowd” go beyond conventional approaches in that they actively elicit input, requesting specific information that will improve recommendations, and motivating users to contribute. “The Crowd” takes on a variety of forms: Crowdworkers carrying out microtasks on large-scale commercial crowdsourcing platforms, professional annotators hired by content providers to annotate or curate content, or also members of the community of users whom the recommender system is designed to serve. The program of the CrowdRec 2015 workshop represents the wide variety of ways in which crowdsourcing and human intelligence can contribute to recommendation. We would like to thank the authors who submitted to the workshop, as well as our reviewers and panelists. Without their contributions, the CrowdRec workshop would not be possible. We would also like to express our appreciation for the support of the CrowdRec Workshop Steering Board, Kuan-Ta Chen (Academia Sinica, Taiwan) and Irwin King (The Chinese University of Hong Kong), who founded the CrowdRec Workshop series in 2013. Finally, we would like to express our thanks to the European Union for their funding of the CrowdRec project, which coincidentally bears the same name as the workshop series. This project supports, in part, the work of the members of the organizing committee. We hope that you find the proceedings of CrowdRec 2015 informative, thought provoking, and a source of promising ideas with great future potential.

    @inproceedings{larson_crowdsourcing_2015,
      title = {Crowdsourcing and {Human} {Computation} for {Recommender} {Systems}},
      url = {http://crowdrecworkshop.org/papers/CrowdRec2015_Proceedings.pdf},
      abstract = {The CrowdRec 2015 workshop is the third in a series of international
    workshops that is dedicated to promoting the effective and responsible
    integration of human intelligence into recommender systems. The success of
    today’s recommender systems can be attributed to their ability to exploit
    human input in the form of user-expressed preferences and the implicit
    information inherent in user-item interactions. Recommender systems that
    leverage “The Crowd” go beyond conventional approaches in that they
    actively elicit input, requesting specific information that will improve
    recommendations, and motivating users to contribute. “The Crowd” takes on a
    variety of forms: Crowdworkers carrying out microtasks on large-scale
    commercial crowdsourcing platforms, professional annotators hired by content
    providers to annotate or curate content, or also members of the community of
    users whom the recommender system is designed to serve. The program of
    the CrowdRec 2015 workshop represents the wide variety of ways in which
    crowdsourcing and human intelligence can contribute to recommendation.
    We would like to thank the authors who submitted to the workshop, as well as
    our reviewers and panelists. Without their contributions, the CrowdRec
    workshop would not be possible. We would also like to express our
    appreciation for the support of the CrowdRec Workshop Steering Board,
    Kuan-Ta Chen (Academia Sinica, Taiwan) and Irwin King (The Chinese
    University of Hong Kong), who founded the CrowdRec Workshop series in
    2013. Finally, we would like to express our thanks to the European Union for
    their funding of the CrowdRec project, which coincidentally bears the same
    name as the workshop series. This project supports, in part, the work of the
    members of the organizing committee.
    We hope that you find the proceedings of CrowdRec 2015 informative,
    thought provoking, and a source of promising ideas with great future potential.},
      booktitle = {Proceedings of the {ACM} {RecSys} {CrowdRec} 2015 {Workshop} on {Crowdsourcing} and {Human} {Computation} for {Recommender} {Systems}},
      author = {Larson, Martha and Tikk, Domonkos and Turrin, Roberto},
      month = oct,
      year = {2015}
    }

  • M. Riegler, M. Larson, C. Spampinato, J. Markussen, P. Halvorsen, and C. Griwodz, “Introduction to a Task on Context of Experience: Recommending Videos Suiting a Watching Situation,” in Proceedings of the MediaEval 2015 Workshop, 2015.
    [BibTeX] [Abstract] [Download PDF]

    We propose a Context of Experience task, whose aim it is to explore the suitability of video content for watching in certain situations. Specifically, we look at the situation of watching movies on an airplane. As a viewing context, airplanes are characterized by small screens and distracting viewing conditions. We assume that movies have properties that make them more or less suitable to this context. We are interested in developing systems that are able to reproduce a general judgment of viewers about whether a given movie is a good movie to watch during a flight. We provide a data set including a list of movies and human judgments concerning their suitability for airplanes. The goal of the task is to use movie metadata and audio-visual features extracted from movie trailers in order to automatically reproduce these judgments. A basic classification system demonstrates the feasibility and viability of the task.

    @inproceedings{riegler_introduction_2015,
      title = {Introduction to a {Task} on {Context} of {Experience}: {Recommending} {Videos} {Suiting} a {Watching} {Situation}},
      url = {http://ceur-ws.org/Vol-1436/Paper5.pdf},
      abstract = {We propose a Context of Experience task, whose aim it is to explore the suitability of video content for watching in certain situations. Specifically, we look at the situation of watching movies on an airplane. As a viewing context, airplanes are characterized by small screens and distracting viewing conditions. We assume that movies have properties that make them more or less suitable to this context. We are interested in developing systems that are able to reproduce a general judgment of viewers about whether a given movie
    is a good movie to watch during a flight. We provide a data set including a list of movies and human judgments concerning their suitability for airplanes. The goal of the task is to use movie metadata and audio-visual features extracted
    from movie trailers in order to automatically reproduce these judgments. A basic classification system demonstrates the feasibility and viability of the task.},
      booktitle = {Proceedings of the {MediaEval} 2015 {Workshop}},
      author = {Riegler, Michael and Larson, Martha and Spampinato, Concetto and Markussen, Jonas and Halvorsen, Pål and Griwodz, Carsten},
      month = sep,
      year = {2015}
    }
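
    The task described above reduces to a binary classification problem: predict from metadata and trailer features whether viewers judge a movie suitable for in-flight viewing. The following minimal scikit-learn sketch illustrates such a baseline; the feature columns and the tiny toy matrix are illustrative assumptions, not the official task data.

      # Toy baseline for the Context of Experience task: predict in-flight suitability.
      # Feature layout and values are invented for illustration only.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      # Rows: [runtime_min, is_action, is_drama, trailer_brightness, trailer_tempo]
      X = np.array([
          [ 95, 1, 0, 0.62, 0.80],
          [148, 0, 1, 0.35, 0.40],
          [102, 0, 1, 0.55, 0.30],
          [130, 1, 0, 0.70, 0.90],
          [ 89, 0, 0, 0.50, 0.45],
          [115, 1, 0, 0.66, 0.85],
      ])
      y = np.array([1, 0, 1, 0, 1, 0])  # 1 = judged a good movie to watch on a plane

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:-1], y[:-1])
      print(clf.predict(X[-1:]))        # predicted suitability of the held-out movie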

  • F. Abel, “We Know Where You Should Work Next Summer: Job Recommendations,” in Proceedings of the 9th ACM Conference on Recommender Systems, 2015.
    [BibTeX] [Abstract] [Download PDF]

    Business-oriented social networks like LinkedIn or XING support people in discovering career opportunities. In this talk, we will focus on the problem of recommending job offers to millions of XING users. We will discuss challenges of building a job recommendation system that has to satisfy the demands of both job seekers who have certain wishes concerning their next career step and recruiters who aim to hire the most appropriate candidate for a job. Based on insights gained from a large-scale analysis of usage data and profile data such as curriculum vitae, we will study features of the recommendation algorithms that aim to solve the problem. Job advertisements typically describe the job role that the candidate will need to fill, required skills, the expected educational background that candidates should have and the company and environment in which candidates will be working. Users of professional social networks curate their profile and curriculum vitae in which they describe their skills, interests and previous career steps. Recommending jobs to users is however a non-trivial task for which pure content-based features that would just match the aforementioned properties are not sufficient. For example, we often observe that there is a gap between what people specify in their profiles and what they are actually interested in. Moreover, profile and CV typically describe the past and current situation of a user but do not sufficiently reflect the actual demands that users have with respect to their next career step. Therefore, it is crucial to also analyze the behavior of the users and exploit interaction data such as search queries, clicks on jobs, bookmarks, clicks that similar users performed, etc. Our job recommendation system exploits various features in order to estimate whether a job posting is relevant for a user or not. Some of these features rather reflect social aspects (e.g. does the user have contacts that are living in the city in which the job is offered?) while others capture to what extent the user fulfills the requirements of the role that is described in the job advertisement (e.g. similarity of user’s skills and required skills). To better understand appropriate next career steps, we mine the CVs of the users and learn association rules that describe the typical career paths. This information is also made publicly available via FutureMe – a tool that allows people to explore possible career opportunities and identify professions that may be interesting for them to work in. One of the challenges when developing the job recommendation system is to collect explicit feedback and thus to understand (i) whether a recommended job was relevant for a user and (ii) whether the user was a good candidate for the job. We thus started to involve users more strongly in providing feedback and to build a feedback cycle that allows the recommender system to automatically adapt to the feedback that the crowd of users is providing. By displaying explanations about why certain items were suggested, we furthermore aim to increase transparency of how the recommender system works.

    @inproceedings{abel_we_2015,
      title = {We {Know} {Where} {You} {Should} {Work} {Next} {Summer}: {Job} {Recommendations}},
      url = {http://dl.acm.org/citation.cfm?id=2799496},
      abstract = {Business-oriented social networks like LinkedIn or XING support people in discovering career opportunities. In this talk, we will focus on the problem of recommending job offers to Millions of XING users. We will discuss challenges of building a job recommendation system that has to satisfy the demands of both job seekers who have certain wishes concerning their next career step and recruiters who aim to hire the most appropriate candidate for a job. Based on insights gained from a large-scale analysis of usage data and profile data such as curriculum vitae, we will study features of the recommendation algorithms that aim to solve the problem.
    
    Job advertisements typically describe the job role that the candidate will need to fill, required skills, the expected educational background that candidates should have and the company and environment in which candidates will be working. Users of professional social networks curate their profile and curriculum vitae in which they describe their skills, interests and previous career steps. Recommending jobs to users is however a non-trivial task for which pure content-based features that would just match the aforementioned properties are not sufficient. For example, we often observe that there is a gap between what people specify in their profiles and what they are actually interested in. Moreover, profile and CV typically describe the past and current situation of a user but do not reflect enough the actual demands that users have with respect to their next career step. Therefore, it is crucial to also analyze the behavior of the users and exploit interaction data such as search queries, clicks on jobs, bookmarks, clicks that similar users performed, etc.
    
    Our job recommendation system exploits various features in order to estimate whether a job posting is relevant for a user or not. Some of these features rather reflect social aspects (e.g. does the user have contacts that are living in the city in which the job is offered?) while others capture to what extent the user fulfills the requirements of the role that is described in the job advertisement (e.g. similarity of user's skills and required skills). To better understand appropriate next career steps, we mine the CVs of the users and learn association rules that describe the typical career paths. This information is also made publicly available via FutureMe - a tool that allows people to explore possible career opportunities and identify professions that may be interesting for them to work in.
    
    One of the challenges when developing the job recommendation system is to collect explicit feedback and thus understanding (i) whether a recommended job was relevant for a user and (ii) whether the user was a good candidate for the job. We thus started to stronger involve users in providing feedback and build a feedback cycle that allows the recommender system to automatically adapt to the feedback that the crowd of users is providing. By displaying explanations about why certain items were suggested, we furthermore aim to increase transparency of how the recommender system works.},
      booktitle = {Proceedings of the 9th {ACM} {Conference} on {Recommender} {Systems}},
      author = {Abel, Fabian},
      month = sep,
      year = {2015}
    }
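
    Among the content-based features mentioned above is the similarity between a user's skills and a job's required skills. The sketch below computes one such feature as a plain Jaccard overlap; it is an illustrative stand-in for a single signal, not XING's production feature set.

      # Illustrative skill-overlap feature (Jaccard similarity); one signal a job
      # recommender can combine with behavioural and social features.
      def skill_overlap(user_skills, job_skills):
          """Jaccard similarity between two skill sets, in [0.0, 1.0]."""
          user, job = set(user_skills), set(job_skills)
          if not user or not job:
              return 0.0
          return len(user & job) / len(user | job)

      user = {"python", "machine learning", "sql"}
      job = {"python", "sql", "spark", "etl"}
      print(skill_overlap(user, job))  # 0.4 -> fed into the ranking model as one feature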

  • B. Kille and F. Abel, “Engaging the Crowd for Better Job Recommendations,” in Proceedings of Workshop on Crowdsourcing and Human Computation for Recommender Systems (CrowdRec 2015), 2015.
    [BibTeX] [Download PDF]
    @inproceedings{kille_engaging_2015,
      title = {Engaging the {Crowd} for {Better} {Job} {Recommendations}},
      url = {http://crowdrecworkshop.org/papers/CrowdRec2015_Proceedings.pdf},
      booktitle = {Proceedings of {Workshop} on {Crowdsourcing} and {Human} {Computation} for {Recommender} {Systems} ({CrowdRec} 2015)},
      author = {Kille, Benjamin and Abel, Fabian},
      month = sep,
      year = {2015}
    }

  • F. Hopfgartner, T. Heintz, and R. Turrin, “Real-time Recommendation of Streamed Data,” in Proceedings of the 9th ACM Conference on Recommender Systems, 2015, pp. 361-362.
    [BibTeX] [Abstract] [Download PDF]

    This tutorial addressed two trending topics in the field of recommender systems research, namely A/B testing and real-time recommendations of streamed data. Focusing on the news domain, participants learned how to benchmark the performance of stream-based recommendation algorithms in a live recommender system and in a simulated environment.

    @inproceedings{hopfgartner_real-time_2015,
      title = {Real-time {Recommendation} of {Streamed} {Data}},
      url = {http://dl.acm.org/citation.cfm?id=2792839},
      abstract = {This tutorial addressed two trending topics in the field of recommender systems research, namely A/B testing and real-time recommendations of streamed data. Focusing on the news domain, participants learned how to benchmark the performance of stream-based recommendation algorithms in a live recommender system and in a simulated environment.},
      booktitle = {Proceedings of the 9th {ACM} {Conference} on {Recommender} {Systems}},
      author = {Hopfgartner, Frank and Heintz, Tobias and Turrin, Roberto},
      month = sep,
      year = {2015},
      pages = {361--362}
    }

  • A. Sen and M. Larson, “From Sensors to Songs: A learning-free novel music recommendation system using contextual sensor data,” in Proceedings of Workshop on Location-Aware Recommendations (LocalRec 2015), 2015.
    [BibTeX] [Abstract] [Download PDF]

    Traditional approaches for music recommender systems face the known challenges of providing new recommendations that users perceive as novel and serendipitous discoveries. Even with all the music content available on the web and commercial music streaming services, discovering new music remains a time consuming and taxing activity for the average user. The goal for our proposed system is to provide novel music recommendations based on contextual sensor information. For example, contextual place information can be inferred with intelligent use of techniques such as geo-fencing and using lightweight sensors like accelerometers and compass to monitor location. The inspiration behind our system is that music is not in the past, neither in the future, but rather enjoyed in the present. For this reason, the system does not rely on learning the user’s listening history. Raw sensor data is fused with information from the web, passed through a cascade of Fuzzy Logic models to infer the user’s context, which is then used to recommend music from an online music streaming service (SoundCloud) after filtering out songs based on genre preferences that the user dislikes. This paper motivates and describes the design for a mobile application along with a description of tests that will be carried out for validation.

    @inproceedings{sen_sensors_2015,
      title = {From {Sensors} to {Songs}: {A} learning-free novel music recommendation system using contextual sensor data},
      url = {http://ceur-ws.org/Vol-1405/paper-07.pdf},
      abstract = {Traditional approaches for music recommender systems face the known challenges of providing new recommendations that users perceive as novel and serendipitous discoveries. Even with all the music content available on the web and commercial music streaming services, discovering new music remains a time consuming and taxing activity for the average user. The goal for our proposed system is to provide novel music recommendations based on contextual sensor information. For example, contextual place information can be inferred with intelligent use of techniques such as geo-fencing and using lightweight sensors like accelerometers and compass to monitor location. The inspiration behind our system is that music is not in the past, neither in the future, but rather enjoyed in the present. For this reason, the system does not rely on learning the user’s listening history. Raw sensor data is fused with information from the web, passed through a cascade of Fuzzy Logic models to infer the user’s context, which is then used to recommend music from an online music streaming service (SoundCloud) after filtering out songs based on genre preferences that the user dislikes. This paper motivates and describes the design for a mobile application along with a description of tests that will be carried out for validation.},
      booktitle = {Proceedings of {Workshop} on {Location}-{Aware} {Recommendations} ({LocalRec} 2015)},
      author = {Sen, Abishek and Larson, Martha},
      month = sep,
      year = {2015}
    }
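
    The system sketched in the abstract infers the listening context from sensor data via a cascade of fuzzy logic models and then filters a streaming catalogue by disliked genres. The toy sketch below shows one fuzzy membership function and the genre filter; the breakpoints and candidate tracks are assumptions, not the authors' actual design.

      # Toy sketch: one fuzzy context dimension (running) plus a genre filter.
      def running_membership(accel_magnitude):
          """Fuzzy degree (0..1) that the user is running, from accelerometer magnitude."""
          if accel_magnitude <= 1.2:
              return 0.0
          if accel_magnitude >= 2.5:
              return 1.0
          return (accel_magnitude - 1.2) / (2.5 - 1.2)  # linear ramp between rest and run

      def recommend(candidates, disliked_genres, accel_magnitude):
          """Drop disliked genres; prefer energetic tracks the more the user is running."""
          mu_run = running_membership(accel_magnitude)
          kept = [t for t in candidates if t["genre"] not in disliked_genres]
          return sorted(kept, key=lambda t: mu_run * t["energy"], reverse=True)

      tracks = [
          {"title": "A", "genre": "ambient", "energy": 0.2},
          {"title": "B", "genre": "techno",  "energy": 0.9},
          {"title": "C", "genre": "metal",   "energy": 0.8},
      ]
      print(recommend(tracks, disliked_genres={"metal"}, accel_magnitude=2.1))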

  • J. Yuan, F. Sivrikaya, F. Hopfgartner, A. Lommatzsch, and M. Mu, “Context-aware LDA: Balancing Relevance and Diversity in TV Content Recommenders,” in Proceedings of the 2nd Workshop on Recommendation Systems for Television and Online Videos (RecSysTV 2015), 2015.
    [BibTeX] [Abstract] [Download PDF]

    In the vast and expanding ocean of digital content, users are hardly satisfied with recommended programs solely based on static user patterns and common statistics. Therefore, there is growing interest in recommendation approaches that aim to provide a certain level of diversity, besides precision and ranking. Context-awareness, which is an effective way to express dynamics and adaptivity, is widely used in recommender systems to set a proper balance between ranking and diversity. In light of these observations, we introduce a recommender with a context-aware probabilistic graphical model and apply it to a campus-wide TV content delivery system named “Vision”. Within this recommender, selection criteria of candidate fields and contextual factors are designed and users’ dependencies on their personal preference or the aforementioned contextual influences can be distinguished. Most importantly, as to the role of balancing relevance and diversity, final experiment results prove that context-aware LDA can evidently outperform other algorithms on both metrics. Thus this scalable model can be flexibly used for different recommendation purposes.

    @inproceedings{yuan_context-aware_2015,
      title = {Context-aware {LDA}: {Balancing} {Relevance} and {Diversity} in {TV} {Content} {Recommenders}},
      url = {https://comcast.app.box.com/recsystv-2015-yuan},
      abstract = {In the vast and expanding ocean of digital content, users are hardly satisfied with recommended programs solely based on static user patterns and common statistics. Therefore, there is growing interest in recommendation approaches that aim to provide a certain level of diversity, besides precision and ranking. Context-awareness, which is an effective way to express dynamics and adaptivity, is widely used in recommender systems to set a proper balance between ranking and diversity. In light of these observations, we introduce a recommender with a context-aware probabilistic graphical model and apply it to a campus-wide TV content delivery system named “Vision”. Within this recommender, selection criteria of candidate fields and contextual factors are designed and users’ dependencies on their personal preference or the aforementioned contextual influences can be distinguished. Most importantly, as to the role of balancing relevance and diversity, final experiment results prove that context-aware LDA can evidently outperform other algorithms on both metrics. Thus this scalable model can be flexibly used for different recommendation purposes.},
      booktitle = {Proceedings of the 2nd {Workshop} on {Recommendation} {Systems} for {Television} and {Online} {Videos} ({RecSysTV} 2015)},
      author = {Yuan, Jing and Sivrikaya, Fikret and Hopfgartner, Frank and Lommatzsch, Andreas and Mu, Mu},
      month = sep,
      year = {2015}
    }
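
    The paper tunes the balance between relevance and diversity inside a context-aware LDA model. As a generic illustration of the same trade-off (and explicitly not the authors' model), the sketch below re-ranks candidates in a maximal-marginal-relevance style, where a single weight shifts the list between relevant and diverse.

      # Generic relevance-diversity re-ranking (MMR style), shown only to illustrate
      # the trade-off; the paper itself controls it through context-aware LDA.
      def rerank(candidates, similarity, lam=0.7, k=3):
          """candidates: {item: relevance}; similarity(a, b) in [0, 1]; higher lam favours relevance."""
          selected, pool = [], dict(candidates)
          while pool and len(selected) < k:
              def mmr(item):
                  max_sim = max((similarity(item, s) for s in selected), default=0.0)
                  return lam * pool[item] - (1 - lam) * max_sim
              best = max(pool, key=mmr)
              selected.append(best)
              del pool[best]
          return selected

      sim = lambda a, b: 1.0 if a.split("_")[0] == b.split("_")[0] else 0.0  # toy topic similarity
      print(rerank({"news_1": 0.9, "news_2": 0.85, "sports_1": 0.6}, sim, lam=0.5))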

  • B. Hidasi, “Context-aware preference modeling with factorization,” in Proceedings of the 9th ACM Conference on Recommender Systems, 2015, pp. 371-374.
    [BibTeX] [Abstract] [Download PDF]

    This work focuses on solving the context-aware implicit feedback based recommendation task with factorization and is heavily influenced by practical considerations. I propose context-aware factorization algorithms that can efficiently work on implicit data. I generalize these algorithms and propose the General Factorization Framework (GFF) in which experimentation with novel preference models is possible. This practically useful, yet neglected feature results in models that are more appropriate for context-aware recommendations than the ones used by the state-of-the-art. I also propose a way to speed up and enhance the scalability of the training process, which makes it viable to use the more accurate high-factor models with reasonable training times.

    @inproceedings{hidasi_context-aware_2015,
      title = {Context-aware preference modeling with factorization},
      url = {http://dl.acm.org/citation.cfm?id=2796543},
      abstract = {This work focuses on solving the context-aware implicit feedback
    based recommendation task with factorization and is heavily influenced by the practical considerations. I propose context-aware factorization algorithms that can efficiently work on implicit data. I generalize these algorithms and propose the General Factorization Framework (GFF) in which experimentation with novel preference models is possible. This practically useful, yet neglected feature results in models that are more appropriate for context-aware recommendations than the ones used by the state-of-the-art. I also propose a way to speed up and enhance scalability of the training process, that makes it viable to use the more accurate high factor models with reasonable training times.},
      booktitle = {Proceedings of the 9th {ACM} {Conference} on {Recommender} {Systems}},
      author = {Hidasi, Balázs},
      month = sep,
      year = {2015},
      pages = {371--374}
    }
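
    The central idea behind the General Factorization Framework is that the preference model, i.e. how user, item and context factor vectors are combined into a score, is an input to the algorithm rather than fixed. The numpy sketch below contrasts two such linear models; the factor vectors are random placeholders rather than trained values.

      # Two example preference models over user (u), item (i) and context (c) factors,
      # in the spirit of a general factorization framework; factors are random placeholders.
      import numpy as np

      rng = np.random.default_rng(0)
      k = 4                                # number of latent factors
      u, i, c = rng.normal(size=(3, k))    # one user, one item, one context condition

      # Pairwise interaction model: every pair of entities interacts.
      score_pairwise = u @ i + u @ c + i @ c

      # Context as elementwise reweighting (gating) of the user-item interaction.
      score_gated = np.sum(u * i * c)

      print(score_pairwise, score_gated)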

  • A. Lommatzsch and S. Werner, “Optimizing and Evaluating Stream-Based News Recommendation Algorithms.” 2015, pp. 376-388.
    [BibTeX] [Abstract] [Download PDF]

    Due to the overwhelming amount of items and information, users need support in finding the information matching their individual preferences and expectations. Real-time stream-based recommender systems get in the focus of research allowing the adaptation of recommendations to the user’s context and the current set of relevant items. In this paper we focus on recommending news articles. In contrast to most traditional recommender systems, our system must handle several additional challenges: News articles have a short lifecycle forcing the recommender system to continuously adapt to the set of news articles. In addition, the recommender algorithms should work efficiently: On the one hand, news recommendations must be provided within milliseconds since the recommendations must be embedded in news article pages. On the other hand, the news algorithms must be able to handle a huge amount of recommendation requests in order to process load peaks without violating the time constraints. We present algorithms optimized for providing real-time news recommendation given limited hardware resources. We present an offline evaluation framework allowing the efficient optimization of recommender algorithms, taking into account the available hardware resources. The evaluation shows that our approach allows us to find optimal recommender algorithms for a given hardware setting.

    @inproceedings{lommatzsch_optimizing_2015,
      title = {Optimizing and {Evaluating} {Stream}-{Based} {News} {Recommendation} {Algorithms}},
      url = {http://ceur-ws.org/Vol-1180/CLEF2014wn-Newsreel-WernerEt2014.pdf},
      abstract = {Due to the overwhelming amount of items and information users need
    support in finding the information matching the individual preferences and expectations. Real-time stream-based recommender systems get in the focus of research allowing the adaption of recommendations to the user’s context and the current set of relevant items. In this paper we focus on recommending news articles. In contrast to most traditional recommender systems, our system must handle several additional challenges: News articles have a short lifecycle forcing the recommender system to continuously adapt to the set of news articles. In addition, the recommender algorithms should work efficiently: On the one hand, news recommendations must be provided within milliseconds since the recommendations must be embedded in news article pages. On the other hand, the news algorithms must be able to handle a huge amount of recommendation requests in order to process load peaks without violating the time constraints. We present algorithms optimized for providing real-time news recommendation given limited hardware resources. We present an offline evaluating framework allowing us the efficient optimizing of recommender algorithms taking into account the available hardware resources. The evaluation shows that our approach allows us to find optimal recommender algorithms for a given hardware setting.},
      publisher = {Springer},
      author = {Lommatzsch, Andreas and Werner, Sebastian},
      month = sep,
      year = {2015},
      pages = {376--388}
    }

  • B. Loni, M. Larson, A. Karatzoglou, and A. Hanjalic, “Recommendation using the Right Slice: Speeding up Collaborative Filtering with Factorization Machines,” in Poster Proceedings of the 9th ACM Conference on Recommender Systems, 2015.
    [BibTeX] [Abstract] [Download PDF]

    We propose an alternative way to efficiently exploit rating data for collaborative filtering with Factorization Machines (FMs). Our approach partitions the user-item matrix into ‘slices’ which are mutually exclusive with respect to items. The training phase makes direct use of the slice of interest (target slice), while incorporating information from other slices indirectly. FMs represent user-item interactions as feature vectors, and they offer the advantage of easy incorporation of complementary information. We exploit this advantage to integrate information from other auxiliary slices. We demonstrate, using experiments on two benchmark datasets, that improved performance can be achieved, while the time complexity of training can be reduced significantly.

    @inproceedings{loni_recommendation_2015,
      title = {Recommendation using the {Right} {Slice}: {Speeding} up {Collaborative} {Filtering} with {Factorization} {Machines}},
      url = {http://ceur-ws.org/Vol-1441/recsys2015_poster17.pdf},
      abstract = {We propose an alternative way to efficiently exploit rating data for
    collaborative filtering with Factorization Machines (FMs). Our approach
    partitions user-item matrix into ‘slices’ which are mutually exclusive with respect to items. The training phase makes direct use of the slice of interest (target slice), while incorporating information from other slices indirectly. FMs represent user-item interactions as feature vectors, and they offer the advantage of easy incorporation of complementary information. We exploit this advantage to integrate information from other auxiliary slices. We
    demonstrate, using experiments on two benchmark datasets, that
    improved performance can be achieved, while the time complexity
    of training can be reduced significantly.},
      booktitle = {Poster {Proceedings} of the 9th {ACM} {Conference} on {Recommender} {Systems}},
      author = {Loni, Babak and Larson, Martha and Karatzoglou, Alexandros and Hanjalic, Alan},
      month = sep,
      year = {2015}
    }
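
    A Factorization Machine scores a sparse feature vector, and the slicing approach builds that vector from the target slice plus indicator features taken from auxiliary slices. Below is a hedged sketch of the standard second-order FM scoring equation applied to such a vector; the weights are random placeholders and the feature layout is an assumption.

      # Second-order Factorization Machine score, computed with the usual O(k*n) trick:
      # y(x) = w0 + sum_i w_i x_i + 0.5 * (||V^T x||^2 - sum_i ||v_i||^2 x_i^2)
      import numpy as np

      def fm_score(x, w0, w, V):
          linear = w0 + w @ x
          Vx = V.T @ x
          interactions = 0.5 * (Vx @ Vx - np.sum((V ** 2).T @ (x ** 2)))
          return linear + interactions

      n, k = 6, 3
      rng = np.random.default_rng(1)
      w0, w, V = 0.1, rng.normal(size=n), rng.normal(size=(n, k))
      # One-hot user and item from the target slice, plus one auxiliary-slice indicator.
      x = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 1.0])
      print(fm_score(x, w0, w, V))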

  • R. Turrin, A. Condorelli, R. Pagano, M. Quadrana, and P. Cremonesi, “Large scale music recommendation,” in Proceedings of the Large Scale Recommender System workshop (LSRS 2015), 2015.
    [BibTeX] [Download PDF]
    @inproceedings{turrin_large_2015,
      title = {Large scale music recommendation},
      url = {http://robertopagano.tk/wp-content/uploads/2015/09/Large-Scale-Music-Recommendation.pdf},
      booktitle = {Proceedings of the {Large} {Scale} {Recommender} {System} workshop ({LSRS} 2015)},
      author = {Turrin, Roberto and Condorelli, Andrea and Pagano, Roberto and Quadrana, Massimo and Cremonesi, Paolo},
      month = sep,
      year = {2015}
    }

  • R. Turrin, M. Quadrana, A. Condorelli, R. Pagano, and P. Cremonesi, “30Music listening and playlists dataset,” in Poster Proceedings of the 9th ACM Conference on Recommender Systems, 2015.
    [BibTeX] [Abstract] [Download PDF]

    We introduce the 30Music dataset, a collection of listening and playlists data retrieved from Internet radio stations through the Last.fm API. In this paper we describe the creation process, its content, and its possible uses. Attractive features of the 30Music dataset that differentiate it from existing public datasets include, among others, (i) the user listening sessions complete with contextual time information, (ii) the user playlists, and (iii) the positive user ratings, key information to experiment with the task of modeling user taste and recommending playlists.

    @inproceedings{turrin_30music_2015,
      title = {30Music listening and playlists dataset},
      url = {http://ceur-ws.org/Vol-1441/recsys2015_poster13.pdf},
      abstract = {We introduce the 30Music dataset, a collection of listening and playlists data retrieved from Internet radio stations through Last.fm API. In this paper we describe the creation process, its content, and its possible uses. Attractive features of the 30Music dataset that differentiate it from existing public datasets include, among the others, (i) the user listening sessions complete of contextual time information, (ii) the user playlists, and (iii) the positive user ratings, key information to experiment with the task of modeling user
    taste and recommending playlists.},
      booktitle = {Poster {Proceedings} of the 9th {ACM} {Conference} on {Recommender} {Systems}},
      author = {Turrin, Roberto and Quadrana, Massimo and Condorelli, Andrea and Pagano, Roberto and Cremonesi, Paolo},
      month = sep,
      year = {2015}
    }
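
    A typical way to work with such listening logs is to cut each user's event stream into sessions wherever the gap between consecutive plays exceeds a threshold. The sketch below does exactly that; the 30-minute threshold and the toy log are assumptions, not the dataset's actual construction rules.

      # Illustrative sessionisation of a listening log: events closer together than
      # SESSION_GAP belong to the same session.
      SESSION_GAP = 30 * 60  # seconds; an assumed threshold

      def split_sessions(events):
          """events: list of (timestamp, track_id) sorted by timestamp."""
          sessions, current = [], []
          for ts, track in events:
              if current and ts - current[-1][0] > SESSION_GAP:
                  sessions.append(current)
                  current = []
              current.append((ts, track))
          if current:
              sessions.append(current)
          return sessions

      log = [(0, "t1"), (200, "t2"), (5000, "t3"), (5100, "t4")]
      print(len(split_sessions(log)))  # 2 sessions: the 4800 s gap splits the log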

  • B. Kille, A. Lommatzsch, R. Turrin, A. Serény, M. Larson, T. Brodt, J. Seiler, and F. Hopfgartner, “Stream-based recommendations: Online and offline evaluation as a service.” 2015, pp. 497-517.
    [BibTeX] [Abstract] [Download PDF]

    Providing high-quality news recommendations is a challenging task because the set of potentially relevant news items changes continuously, the relevance of news highly depends on the context, and there are tight time constraints for computing recommendations. The CLEF NewsREEL challenge is a campaign-style evaluation lab allowing participants to evaluate and optimize news recommender algorithms online and offline. In this paper, we discuss the objectives and challenges of the NewsREEL lab. We motivate the metrics used for benchmarking the recommender algorithms and explain the challenge dataset. In addition, we introduce the evaluation framework that we have developed. The framework makes possible the reproducible evaluation of recommender algorithms for stream data, taking into account recommender precision as well as the technical complexity of the recommender algorithms.

    @inproceedings{kille_stream-based_2015,
      title = {Stream-based recommendations: {Online} and offline evaluation as a service},
      url = {http://www.dai-labor.de/fileadmin/Files/Publikationen/Buchdatei/kille-et-al--StreamBasedRecommendations.pdf},
      abstract = {Providing high-quality news recommendations is a challenging task
    because the set of potentially relevant news items changes continuously, the relevance of news highly depends on the context, and there are tight time constraints for computing recommendations. The CLEF NewsREEL challenge is a campaign-style evaluation lab allowing participants to evaluate and optimize news recommender algorithms online and offline. In this paper, we discuss the objectives and challenges of the NewsREEL lab. We motivate the metrics used for benchmarking the recommender algorithms and explain the challenge dataset. In addition, we introduce the evaluation framework that we have developed. The framework makes possible the reproducible evaluation of recommender algorithms for stream data, taking into account recommender precision as well as the technical complexity of the recommender algorithms.},
      publisher = {Springer},
      author = {Kille, Benjamin and Lommatzsch, Andreas and Turrin, Roberto and Serény, András and Larson, Martha and Brodt, Torben and Seiler, Jonas and Hopfgartner, Frank},
      month = sep,
      year = {2015},
      pages = {497--517}
    }
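
    In the online (living-lab) setting, participants operate a service that answers recommendation requests over HTTP within tight response-time limits. The sketch below shows a minimal endpoint of that kind using only the Python standard library; the JSON request/response format is a placeholder, not the actual message schema used by the challenge.

      # Minimal HTTP recommendation endpoint; the JSON format is a placeholder, not
      # the real challenge protocol. The handler answers from precomputed results so
      # that responses stay fast.
      import json
      from http.server import BaseHTTPRequestHandler, HTTPServer

      RECENT_POPULAR = ["item42", "item17", "item99"]  # stand-in for a live model

      class RecHandler(BaseHTTPRequestHandler):
          def do_POST(self):
              body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
              request = json.loads(body or b"{}")
              payload = json.dumps({"user": request.get("user"),
                                    "items": RECENT_POPULAR[:3]}).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.send_header("Content-Length", str(len(payload)))
              self.end_headers()
              self.wfile.write(payload)

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), RecHandler).serve_forever()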

  • I. Verbitskiy, P. Probst, and A. Lommatzsch, “Development and Evaluation of a Highly Scalable News Recommender System,” in Working Notes of CLEF 2015 – Conference and Labs of the Evaluation forum, 2015.
    [BibTeX] [Abstract] [Download PDF]

    The development of highly scalable recommender systems, able to deliver recommendations in real time, is a challenging task. In contrast to traditional recommender systems, recommending news entails additional requirements. These requirements include tight response times, heavy load peaks, and continuously changing collections of users and items. In this paper we describe our participation at the CLEF NewsREEL challenge 2015. We present our highly scalable implementation of a news recommendation algorithm. The developed approach alleviates all the specific challenges of news recommender systems. We use the Akka framework to build an asynchronous, distributable system able to run concurrently on multiple machines. Based on the framework, a time-window-based most-popular algorithm for recommending news articles is implemented. The evaluation shows that our system implemented using the Akka framework scales well with the restrictions and outperforms the recommendation precision of the baseline recommender.

    @inproceedings{verbitskiy_development_2015,
      title = {Development and {Evaluation} of a {Highly} {Scalable} {News} {Recommender} {System}},
      url = {http://ceur-ws.org/Vol-1391/149-CR.pdf},
      abstract = {The development of highly scalable recommender systems, able to deliver recommendations in real time, is a challenging task. In contrast to traditional recommender systems, recommending news entails additional requirements. These requirements include tight response times, heavy load peaks, and continuously changing collections of users and items. In this paper we describe our participation at the CLEF NewsREEL challenge 2015. We present our highly scalable implementation of a news recommendation algorithm. The developed approach alleviates all the specific challenges of news recommender systems. We use the Akka framework to build an asynchronous, distributable system able to run concurrently on multiple machines. Based on the framework a time window-based, most popular algorithm for recommending news articles is implemented. The evaluation shows that our system implemented using the Akka framework scales well with the restrictions and outperforms the recommendation precision of the baseline recommender.},
      booktitle = {Working {Notes} of {CLEF} 2015 - {Conference} and {Labs} of the {Evaluation} forum},
      author = {Verbitskiy, Ilya and Probst, Patrick and Lommatzsch, Andreas},
      month = aug,
      year = {2015}
    }
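
    The core of the approach above is a most-popular recommender restricted to a sliding time window over the impression stream. A single-process sketch of that idea follows; the window length and toy events are assumptions, and the actual system distributes this logic over Akka actors.

      # Sliding-window most-popular recommender: count impressions inside the window,
      # expire older ones, and return the current top-N articles.
      import time
      from collections import Counter, deque

      class WindowedMostPopular:
          def __init__(self, window_seconds=3600):
              self.window = window_seconds
              self.events = deque()          # (timestamp, item_id)
              self.counts = Counter()

          def record(self, item_id, ts=None):
              ts = time.time() if ts is None else ts
              self.events.append((ts, item_id))
              self.counts[item_id] += 1
              self._expire(ts)

          def _expire(self, now):
              while self.events and now - self.events[0][0] > self.window:
                  _, old_item = self.events.popleft()
                  self.counts[old_item] -= 1
                  if self.counts[old_item] == 0:
                      del self.counts[old_item]

          def recommend(self, n=5):
              return [item for item, _ in self.counts.most_common(n)]

      rec = WindowedMostPopular(window_seconds=60)
      for item in ["a", "b", "a", "c", "a", "b"]:
          rec.record(item)
      print(rec.recommend(3))  # ['a', 'b', 'c']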

  • B. Kille, A. Lommatzsch, R. Turrin, A. Sereny, and M. Larson, “Overview of CLEF NewsREEL 2015: News Recommendation Evaluation Lab,” in Working Notes of CLEF 2015 – Conference and Labs of the Evaluation forum, 2015.
    [BibTeX] [Abstract] [Download PDF]

    This paper summarises objectives, organisation, and results of the first news recommendation evaluation lab (NEWSREEL 2014). NEWSREEL targeted the evaluation of news recommendation algorithms in the form of a campaign-style evaluation lab. Participants had the chance to apply two types of evaluation schemes. On the one hand, participants could apply their algorithms onto a data set. We refer to this setting as off-line evaluation. On the other hand, participants could deploy their algorithms on a server to interactively receive recommendation requests. We refer to this setting as on-line evaluation. This setting ought to reveal the actual performance of recommendation methods. The competition strived to illustrate differences between evaluation with historical data and actual users. The on-line evaluation does reflect all requirements which active recommender systems face in practice. These requirements include real-time responses and large-scale data volumes. We present the competition’s results and discuss commonalities regarding participants’ approaches.

    @inproceedings{kille_overview_2015,
      title = {Overview of {CLEF} {NewsREEL} 2015: {News} {Recommendation} {Evaluation} {Lab}},
      url = {http://ceur-ws.org/Vol-1180/CLEF2014wn-Newsreel-Kille2014.pdf},
      abstract = {This paper summarises objectives, organisation, and results of the first
    news recommendation evaluation lab (NEWSREEL 2014). NEWSREEL targeted the evaluation of news recommendation algorithms in the form of a campaign-style evaluation lab. Participants had the chance to apply two types of evaluation schemes. On the one hand, participants could apply their algorithms onto a data set. We refer to this setting as off-line evaluation. On the other hand, participants could deploy their algorithms on a server to interactively receive recommendation requests. We refer to this setting as on-line evaluation. This setting ought to reveal the actual performance of recommendation methods. The competition strived to illustrate differences between evaluation with historical data and actual users. The
    on-line evaluation does reflect all requirements which active recommender systems face in practice. These requirements include real-time responses and large-scale data volumes. We present the competition’s results and discuss commonalities regarding participants’ approaches.},
      booktitle = {Working {Notes} of {CLEF} 2015 - {Conference} and {Labs} of the {Evaluation} forum},
      author = {Kille, Benjamin and Lommatzsch, Andreas and Turrin, Roberto and Sereny, Andras and Larson, Martha},
      month = aug,
      year = {2015}
    }

  • B. Hidasi and D. Tikk, “Speeding up ALS learning via approximate methods for context-aware recommendations,” in Journal of Knowledge and Information Systems, 2015.
    [BibTeX] [Abstract] [Download PDF]

    Implicit feedback-based recommendation problems, typically set in real-world applications, recently have been receiving more attention in the research community. From the practical point of view, scalability of such methods is crucial. However, factorization-based algorithms efficient in explicit rating data applied directly to implicit data are computationally inefficient; therefore, different techniques are needed to adapt to implicit feedback. For alternating least squares (ALS) learning, several research contributions have proposed efficient adaptation techniques for implicit feedback. These algorithms scale linearly with the number of nonzero data points, but cubically in the number of features, which is a computational bottleneck that prevents the efficient usage of accurate high factor models. Also, map-reduce type big data techniques are not viable with ALS learning, because there is no known technique that solves the high communication overhead required for random access of the feature matrices. To overcome this drawback, here we present two generic approximate variants for fast ALS learning, using conjugate gradient (CG) and coordinate descent (CD). Both CG and CD can be coupled with all methods using ALS learning. We demonstrate the advantages of fast ALS variants on iTALS, a generic context-aware algorithm, which applies ALS learning for tensor factorization on implicit data. In the experiments, we compare the approximate techniques with the base ALS learning in terms of training time, scalability, recommendation accuracy, and convergence. We show that the proposed solutions offer a trade-off between recommendation accuracy and speed of training time; this makes it possible to apply ALS-based methods efficiently even for billions of data points.

    @inproceedings{hidasi_speeding_2015,
      title = {Speeding up {ALS} learning via approximate methods for context-aware recommendations},
      url = {http://link.springer.com/article/10.1007/s10115-015-0863-2#page-1},
      abstract = {Implicit feedback-based recommendation problems, typically set in real-world applications, recently have been receiving more attention in the research community. From the practical point of view, scalability of such methods is crucial. However, factorization-based algorithms efficient in explicit rating data applied directly to implicit data are computationally inefficient; therefore, different techniques are needed to adapt to implicit feedback. For alternating least squares (ALS) learning, several research contributions have proposed efficient adaptation techniques for implicit feedback. These algorithms scale linearly with the number of nonzero data points, but cubically in the number of features, which is a computational bottleneck that prevents the efficient usage of accurate high factor models. Also, map-reduce type big data techniques are not viable with ALS learning, because there is no known technique that solves the high communication overhead required for random access of the feature matrices. To overcome this drawback, here we present two generic approximate variants for fast ALS learning, using conjugate gradient (CG) and coordinate descent (CD). Both CG and CD can be coupled with all methods using ALS learning. We demonstrate the advantages of fast ALS variants on iTALS, a generic context-aware algorithm, which applies ALS learning for tensor factorization on implicit data. In the experiments, we compare the approximate techniques with the base ALS learning in terms of training time, scalability, recommendation accuracy, and convergence. We show that the proposed solutions offer a trade-off between recommendation accuracy and speed of training time; this makes it possible to apply ALS-based methods efficiently even for billions of data points.},
      booktitle = {Journal of {Knowledge} and {Information} {Systems}},
      publisher = {Springer},
      author = {Hidasi, Balázs and Tikk, Domonkos},
      month = jul,
      year = {2015}
    }
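
    In implicit-feedback ALS, every user (or item) update requires solving a k-by-k linear system, and the paper's speed-up replaces that exact solve with a few conjugate-gradient or coordinate-descent steps. The sketch below shows one user update both ways on random toy data; it follows the standard implicit-ALS formulation rather than the iTALS tensor model itself.

      # One user-factor update in implicit ALS: exact solve vs. a few CG iterations.
      # Random toy data; confidences c_ui = 1 + alpha * r_ui with binary r.
      import numpy as np
      from scipy.sparse.linalg import cg

      rng = np.random.default_rng(0)
      n_items, k, lam, alpha = 50, 10, 0.1, 40.0
      Y = rng.normal(scale=0.1, size=(n_items, k))   # item factors
      consumed = np.array([3, 7, 19, 30])            # items this user interacted with

      # Solve (Y^T C_u Y + lam I) x_u = Y^T C_u p_u
      Yc = Y[consumed]
      A = Y.T @ Y + alpha * (Yc.T @ Yc) + lam * np.eye(k)
      b = (1.0 + alpha) * Yc.sum(axis=0)

      x_exact = np.linalg.solve(A, b)
      x_cg, _ = cg(A, b, maxiter=3)                  # a few CG steps approximate the solve
      print(np.linalg.norm(x_exact - x_cg))          # approximation error of the fast update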

  • B. Hidasi and D. Tikk, “General factorization framework for context-aware recommendations,” in Journal of Data Mining and Knowledge Discovery, 2015.
    [BibTeX] [Abstract] [Download PDF]

    Context-aware recommendation algorithms focus on refining recommendations by considering additional information available to the system. This topic has gained a lot of attention recently. Among others, several factorization methods were proposed to solve the problem, although most of them assume explicit feedback which strongly limits their real-world applicability. While these algorithms apply various loss functions and optimization strategies, the preference modeling under context is less explored due to the lack of tools allowing for easy experimentation with various models. As context dimensions are introduced beyond users and items, the space of possible preference models and the importance of proper modeling largely increases. In this paper we propose a general factorization framework (GFF), a single flexible algorithm that takes the preference model as an input and computes latent feature matrices for the input dimensions. GFF allows us to easily experiment with various linear models on any context-aware recommendation task, be it explicit or implicit feedback based. The scaling properties make it usable under real life circumstances as well. We demonstrate the framework’s potential by exploring various preference models on a 4-dimensional context-aware problem with contexts that are available for almost any real life datasets. We show in our experiments—performed on five real life, implicit feedback datasets—that proper preference modelling significantly increases recommendation accuracy, and previously unused models outperform the traditional ones. Novel models in GFF also outperform state-of-the-art factorization algorithms. We also extend the method to be fully compliant to the Multidimensional Dataspace Model, one of the most extensive data models of context-enriched data. Extended GFF allows the seamless incorporation of information into the factorization framework beyond context, like item metadata, social networks, session information, etc. Preliminary experiments show great potential of this capability.

    @inproceedings{hidasi_general_2015,
      title = {General factorization framework for context-aware recommendations},
      url = {http://link.springer.com/article/10.1007/s10618-015-0417-y#/page-1},
      abstract = {Context-aware recommendation algorithms focus on refining recommendations by considering additional information, available to the system. This topic has gained a lot of attention recently. Among others, several factorization methods were proposed to solve the problem, although most of them assume explicit feedback which strongly limits their real-world applicability. While these algorithms apply various loss functions and optimization strategies, the preference modeling under context is less explored due to the lack of tools allowing for easy experimentation with various models. As context dimensions are introduced beyond users and items, the space of possible preference models and the importance of proper modeling largely increases. In this paper we propose a general factorization framework (GFF), a single flexible algorithm that takes the preference model as an input and computes latent feature matrices for the input dimensions. GFF allows us to easily experiment with various linear models on any context-aware recommendation task, be it explicit or implicit feedback based. The scaling properties makes it usable under real life circumstances as well. We demonstrate the framework’s potential by exploring various preference models on a 4-dimensional context-aware problem with contexts that are available for almost any real life datasets. We show in our experiments—performed on five real life, implicit feedback datasets—that proper preference modelling significantly increases recommendation accuracy, and previously unused models outperform the traditional ones. Novel models in GFF also outperform state-of-the-art factorization algorithms. We also extend the method to be fully compliant to the Multidimensional Dataspace Model, one of the most extensive data models of context-enriched data. Extended GFF allows the seamless incorporation of information into the factorization framework beyond context, like item metadata, social networks, session information, etc. Preliminary experiments show great potential of this capability.},
      booktitle = {Journal of {Data} {Mining} and {Knowledge} {Discovery}},
      publisher = {Springer},
      author = {Hidasi, Balázs and Tikk, Domonkos},
      month = may,
      year = {2015}
    }

  • A. Lommatzsch and S. Albayrak, “Real-time recommendations for user-item streams,” in Proceedings of the 30th Annual ACM Symposium on Applied Computing, 2015, pp. 1039-1046.
    [BibTeX] [Abstract] [Download PDF]

    Recommender systems support users in finding items or users matching their individual preferences or interests. With the growing importance of social networks and the ubiquitous availability of internet connectivity, data streams become one of the most important information sources. Popular streamed data sources are micro blogging services (e.g. "twitter"), update messages in social networks, or articles on online news portals. Traditional recommender algorithms focus on large user-item matrixes applying complex algorithms (e.g. "factorization machines") for extracting the dominant knowledge and reducing the noise. In stream-based scenarios these algorithms cannot be applied due to tight time-constraints and limited resources. In this paper we present a framework optimized for providing recommendations based on streams. We analyze the user-item interaction stream for several online news portals and present the computed characteristics of these streams. Subsequently, we develop several different algorithms optimized for providing recommendations based on streams fulfilling the requirements according to quality, robustness, scalability, and tight time-constraints. We evaluate the algorithms and combine different algorithms in ensembles in order to handle the context-dependent user expectations. The evaluation results show that the developed algorithms outperform traditional recommender approaches and allow us to provide context-aware relevant recommendations.

    @inproceedings{lommatzsch_real-time_2015,
      title = {Real-time recommendations for user-item streams},
      url = {http://dl.acm.org/citation.cfm?id=2695678},
      abstract = {Recommender systems support users in finding items or users matching their individual preferences or interests. With the growing importance of social networks and the ubiquitous availability of internet connectivity, data streams become one of the most important information sources. Popular streamed data sources are micro blogging services (e.g. "twitter"), update messages in social networks, or articles on online news portals. Traditional recommender algorithms focus on large user-item matrixes applying complex algorithms (e.g. "factorization machines") for extracting the dominant knowledge and reducing the noise. In stream-based scenarios these algorithms cannot be applied due to tight time-constraints and limited resources. In this paper we present a framework optimized for providing recommendations based on streams. We analyze the user-item interaction stream for several online news portals and present the computed characteristics of these streams. Subsequently, we develop several different algorithms optimized for providing recommendations based on streams fulfilling the requirements according to quality, robustness, scalability, and tight time-constraints. We evaluate the algorithms and combine different algorithms in ensembles in order to handle the context-dependent user expectations. The evaluation results show that the developed algorithms outperform traditional recommender approaches and allow us to provide context-aware relevant recommendations.},
      booktitle = {Proceedings of the 30th {Annual} {ACM} {Symposium} on {Applied} {Computing}},
      author = {Lommatzsch, Andreas and Albayrak, Sahin},
      month = apr,
      year = {2015},
      pages = {1039--1046}
    }
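
    The framework described above combines several stream-based algorithms into an ensemble so that the mix can be adapted to the current context. The sketch below blends the ranked lists of individual recommenders with a reciprocal-rank vote; the member algorithms and their weights are placeholder assumptions.

      # Weighted ensemble over the ranked lists of individual stream recommenders.
      from collections import defaultdict

      def ensemble(recommendation_lists, weights, n=5):
          """recommendation_lists: {name: [items best-first]}; weights: {name: float}."""
          scores = defaultdict(float)
          for name, items in recommendation_lists.items():
              for rank, item in enumerate(items):
                  scores[item] += weights.get(name, 0.0) / (rank + 1)  # reciprocal-rank vote
          return sorted(scores, key=scores.get, reverse=True)[:n]

      lists = {
          "most_popular":  ["a", "b", "c"],
          "most_recent":   ["d", "a", "e"],
          "content_based": ["b", "d", "f"],
      }
      print(ensemble(lists, {"most_popular": 0.5, "most_recent": 0.3, "content_based": 0.2}))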

  • F. Hopfgartner, G. Kazai, U. Kruschwitz, and M. Meder, “GamifIR 2015: Second International Workshop on Gamification for Information Retrieval,” in Proceedings of the Second International Workshop on Gamification for Information Retrieval co-located with the 37th European Conference on Information Retrieval, 2015.
    [BibTeX] [Download PDF]
    @inproceedings{hopfgartner_gamifir_2015,
      title = {{GamifIR} 2015: {Second} {International} {Workshop} on {Gamification} for {Information} {Retrieval}},
      url = {http://ceur-ws.org/Vol-1345},
      booktitle = {Proceedings of the {Second} {International} {Workshop} on {Gamification} for {Information} {Retrieval} co-located with the 37th {European} {Conference} on {Information} {Retrieval}},
      author = {Hopfgartner, Frank and Kazai, Gabriella and Kruschwitz, Udo and Meder, Michael},
      month = mar,
      year = {2015}
    }

  • M. Meder, B. J. Jain, T. Plumbaum, and F. Hopfgartner, “Gamification of Workplace Activities,” in Smart Information Services – Computational Intelligence for Real-Life Applications, Springer International Publishing, 2015, pp. 239-268.
    [BibTeX] [Abstract] [Download PDF]

    Gamification (taking game design patterns and principles out of video games to apply them in non-game environments) has become a popular idea in the last 4 years. It has also successfully been applied to workplace environments, but it still remains unclear how employees really feel about the introduction of a gamified system. We address this matter by comparing the employees’ subjective perception of gamification with their actual usage behavior in an enterprise application software. As a result of the experiment, we find there is a strong relationship visible. Following up on this observation, we pose the gamification design problem under the assumptions that (i) gamification consists of various types of users that experience game design elements differently; and (ii) gamification is deployed in order to achieve some goals in the broadest sense, as the problem of assigning each user a game design element that maximizes their expected contribution to achieve these goals. We show that this problem can be reduced to a statistical learning problem and suggest matrix factorization as one solution when user interaction data is given. The hypothesis is that predictive models as intelligent tools for supporting users in decision-making may have the potential to support the design process in gamification.

    @incollection{meder_gamification_2015,
      series = {Advances in {Computer} {Vision} and {Pattern} {Recognition}},
      title = {Gamification of {Workplace} {Activities}},
      url = {http://link.springer.com/chapter/10.1007/978-3-319-14178-7_9},
      abstract = {Gamification—taking game design patterns and principles out of video games to apply them in non-game environments has become a popular idea in the last 4 years. It has also successfully been applied to workplace environments, but it still remains unclear how employees really feel about the introduction of a gamified system. We address this matter by comparing the employees’ subjective perception of gamification with their actual usage behavior in an enterprise application software. As a result of the experiment, we find there is a strong relationship visible. Following up on this observation, we pose the gamification design problem under the assumptions that (i) gamification consists of various types of users that experience game design elements differently; and (ii) gamification is deployed in order to achieve some goals in the broadest sense, as the problem of assigning each user a game design element that maximizes their expected contribution to achieve these goals. We show that this problem can be reduced to a statistical learning problem and suggest matrix factorization as one solution when user interaction data is given. The hypothesis is that predictive models as intelligent tools for supporting users in decision-making may have the potential to support the design process in gamification.},
      number = {2191-6586},
      booktitle = {Smart {Information} {Services} - {Computational} {Intelligence} for {Real}-{Life} {Applications}},
      publisher = {Springer International Publishing},
      author = {Meder, Michael and Jain, Brijnesh Johannes and Plumbaum, Till and Hopfgartner, Frank},
      month = jan,
      year = {2015},
      pages = {239--268}
    }

  • B. Hidasi, “Factorization models for context-aware recommendations,” in Infocommunications Journal, 2014, pp. 27-34.
    [BibTeX] [Abstract] [Download PDF]

    The field of implicit-feedback-based recommender algorithms has gained increasing interest in the last few years, driven by the needs of many practical applications where no explicit feedback is available. The main difficulty of this recommendation task is the lack of information on the negative preferences of the users, which may lead to inaccurate recommendations and scalability issues. In this paper, we adopt the use of context-awareness to improve the accuracy of implicit models—a model extension technique that has been applied successfully to explicit algorithms. We present a modified version of the iTALS algorithm (coined iTALSx) that uses a different underlying factorization model. We explore the key differences between these approaches and conduct experiments on five data sets to experimentally determine the advantages of the underlying models. We show that iTALSx outperforms the other method on sparser data sets and is able to model complex user–item relations with fewer factors.

    @inproceedings{hidasi_factorization_2014,
      title = {Factorization models for context-aware recommendations},
      volume = {6(4)},
      url = {http://www.researchgate.net/profile/Balazs_Hidasi/publication/276059025_Factorization_models_for_context-aware_recommendations/links/554f95a108ae12808b37a6a6.pdf},
      abstract = {The field of implicit feedback based recommender algorithms have gained increased interest in the last few years, driven by the need of many practical applications where no explicit feedback is available. The main difficulty of this recommendation task is the lack of information on the negative preferences
    of the users that may lead to inaccurate recommendations and scalability issues. In this paper, we adopt the use of contextawareness to improve the accuracy of implicit models—a model extension technique that was applied successfully for explicit algorithms. We present a modified version of the iTALS algorithm (coined iTALSx) that uses a different underlying factorization
    model. We explore the key differences between these approaches and conduct experiments on five data sets to experimentally determine the advantages of the underlying models. We show that iTALSx outperforms the other method on sparser data sets and is able to model complex user–item relations with fewer factors.},
      booktitle = {Infocommunications {Journal}},
      author = {Hidasi, Balázs},
      month = dec,
      year = {2014},
      pages = {27--34}
    }
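
    The Hidasi paper above compares factorization models defined over (user, item, context) events. As a rough, hedged sketch of the general idea behind pairwise-interaction context-aware models of this kind (not the iTALSx implementation itself), the fragment below scores a triple as the sum of the three dot products between latent vectors; the dimensions are placeholders and, in practice, the factors would be learned from implicit feedback rather than initialised at random.

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, n_contexts, k = 100, 500, 7, 16

    # Latent factor matrices; here random, in practice learned from implicit feedback.
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    C = rng.normal(scale=0.1, size=(n_contexts, k))

    def score(user, item, context):
        """Pairwise-interaction score for a (user, item, context) event:
        the sum of the user-item, user-context and item-context dot products."""
        return float(U[user] @ V[item] + U[user] @ C[context] + V[item] @ C[context])

    print(score(3, 42, 5))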

  • J. S. Pedro and A. Karatzoglou, “Question Recommendation for Collaborative Question Answering Systems with RankSLDA,” in 8th International Conference on Recommender Systems (RecSys ’14), 2014.
    [BibTeX] [Abstract] [Download PDF]

    Collaborative question answering (CQA) communities rely on user participation for their success. This paper presents a supervised Bayesian approach to model expertise in online CQA communities with application to question recommendation, aimed at reducing waiting times for responses and avoiding question starvation. We propose a novel algorithm called RankSLDA which extends the supervised Latent Dirichlet Allocation model by considering a learning-to-rank paradigm. This allows us to exploit the inherent collaborative effects that are present in CQA communities where users tend to answer questions in their topics of expertise. Users can thus be modeled on the basis of the topics in which they demonstrate expertise. In the supervised stage of the method we model the pairwise order of expertise of users on a given question. We compare RankSLDA against several alternative methods on data from the Cross Validate community, part of the Stack Exchange network. RankSLDA outperforms all alternative methods by a significant margin.

    @inproceedings{pedro_question_2014,
      title = {Question {Recommendation} for {Collaborative} {Question} {Answering} {Systems} with {RankSLDA}},
      url = {http://www.researchgate.net/profile/Alexandros_Karatzoglou/publication/267335046_Question_Recommendation_for_Collaborative_Question_Answering_Systems_with_RankSLDA/links/544cc6ef0cf24b5d6c40d8e1},
      abstract = {Collaborative question answering (CQA) communities rely on user participation for their success. This paper presents a supervised Bayesian approach to model expertise in online CQA communities with application to question recommendation, aimed at reducing waiting times for responses and avoiding question starvation. We propose a novel algorithm called RankSLDA which extends the supervised Latent Dirichlet Allocation model by considering a learning-to-rank paradigm. This allows us to exploit the inherent collaborative effects that are present in CQA communities where users tend to answer questions in their topics of expertise. Users can thus be modeled on the basis of the topics in which they demonstrate expertise. In the supervised stage of the method we model the pairwise order of expertise of users on a given question. We compare RankSLDA against several alternative methods on data from the Cross Validate community, part of the Stack Exchange network. RankSLDA outperforms all alternative methods by a significant margin.},
      booktitle = {8th international {Conference} on {Recommender} {Systems} ({RecSys} ’14)},
      author = {Pedro, Jose San and Karatzoglou, Alexandros},
      month = oct,
      year = {2014},
      annote = {(to appear)}
    }

  • B. Loni and A. Said, “WrapRec: An Easy Extension of Recommender System Libraries,” in Proceedings of the 8th ACM Conference on Recommender Systems, 2014.
    [BibTeX] [Abstract] [Download PDF]

    WrapRec is an easy-to-use recommender systems toolkit which allows users to easily implement or wrap recommendation algorithms from other frameworks. The main goals of WrapRec are to provide flexible I/O, an evaluation mechanism, and code reusability. WrapRec provides a rich data model which makes it easy to implement algorithms for different recommender system problems, such as context-aware and cross-domain recommendation. The toolkit is written in C# and the source code is publicly available on GitHub under the GPL license.

    @inproceedings{loni_wraprec:_2014,
      series = {{RecSys}},
      title = {{WrapRec}: {An} {Easy} {Extension} of {Recommender} {System} {Libraries}},
      isbn = {978-1-4503-2668-1},
      url = {http://dl.acm.org/citation.cfm?id=2645717&dl=ACM&coll=DL&CFID=455589397&CFTOKEN=86434242},
      abstract = {WrapRec is an easy-to-use Recommender Systems toolkit, which allows users to easily implement or wrap recommendation algorithms from other frameworks. The main goals of WrapRec are to provide a flexible I/O, evaluation mechanism and code reusability. WrapRec provides a rich data model which makes it easy to implement algorithms for different recommender system problems, such as context-aware and cross-domain recommendation. The toolkit is written in C\# and the source code is publicly available on Github under the GPL license.},
      booktitle = {Proceedings of the 8th {ACM} {Conference} on {Recommender} {Systems}},
      publisher = {ACM},
      author = {Loni, Babak and Said, Alan},
      month = oct,
      year = {2014}
    }

  • A. Said and A. Bellogín, “Comparative Recommender System Evaluation: Benchmarking Recommendation Frameworks,” in Proceedings of the 8th ACM Conference on Recommender Systems, 2014.
    [BibTeX] [Abstract] [Download PDF]

    Recommender systems research is often based on comparisons of predictive accuracy: the better the evaluation scores, the better the recommender. However, it is difficult to compare results from different recommender systems due to the many options in design and implementation of an evaluation strategy. Additionally, algorithm implementations can diverge from the standard formulation due to manual tuning and modifications that work better in some situations. In this work we compare common recommendation algorithms as implemented in three popular recommendation frameworks used by industry and academia. To provide a fair comparison, we have complete control of the evaluation dimensions being benchmarked: dataset, data splitting, evaluation strategies, and metrics. We also include results using the internal evaluation mechanisms of these frameworks. Our analysis points to large differences in recommendation accuracy across frameworks and strategies, i.e. the same baselines may perform orders of magnitude better or worse across frameworks. Our results show the necessity of clear guidelines when reporting evaluation of recommender systems to ensure reproducibility and comparison of results.

    @inproceedings{said_comparative_2014,
      series = {{RecSys}},
      title = {Comparative {Recommender} {System} {Evaluation}: {Benchmarking} {Recommendation} {Frameworks}},
      url = {http://dl.acm.org/citation.cfm?id=2645746},
      abstract = {Recommender systems research is often based on comparisons of predictive accuracy: the better the evaluation scores, the better the recommender. However, it is difficult to compare results from different recommender systems due to the many options in design and implementation of an evaluation strategy. Additionally, algorithm implementations can diverge from the standard formulation due to manual tuning and modifications that work better in some situations. In this work we compare common recommendation algorithms as implemented in three popular recommendation frameworks.\% used by industry and academia. To provide a fair comparison, we have complete control of the evaluation dimensions being benchmarked: dataset, data splitting, evaluation strategies, and metrics. We also include results using the internal evaluation mechanisms of these frameworks. Our analysis points to large differences in recommendation accuracy across frameworks and strategies, i.e. the same baselines may perform orders of magnitude better or worse across frameworks. Our results show the necessity of clear guidelines when reporting evaluation of recommender systems to ensure reproducibility and comparison of results.},
      booktitle = {Proceedings of the 8th {ACM} {Conference} on {Recommender} {Systems}},
      publisher = {ACM},
      author = {Said, Alan and Bellogín, Alejandro},
      month = oct,
      year = {2014}
    }

  • A. Said, S. Dooms, B. Loni, and D. Tikk, “Recommender Systems Challenge 2014,” in Proceedings of the 8th ACM Conference on Recommender Systems, 2014.
    [BibTeX] [Abstract] [Download PDF]

    The 2014 ACM Recommender Systems Challenge invited researchers and practitioners to work towards a common goal: predicting users’ engagement with movie ratings expressed on Twitter. More than 200 participants sought to join the challenge and work on the new dataset released in its scope. The participants were asked to develop new algorithms to predict user engagement and to evaluate them in a common setting, ensuring that the comparison within the challenge was objective and unbiased.

    @inproceedings{said_recommender_2014,
      series = {{RecSys}},
      title = {Recommender {Systems} {Challenge} 2014},
      url = {http://dl.acm.org/citation.cfm?id=2645779},
      abstract = {The 2014 ACM Recommender Systems Challenge invited researchers and practitioners to work towards a common goal, this goal being the prediction of users engagement in movie ratings expressed on Twitter. More than 200 participants sought to join the challenge and work on the new dataset released in its scope. The participants were asked to develop new algorithms to predict user engagement and evaluate them in a common setting, ensuring that the comparison was objective and unbiased, within the challenge.},
      booktitle = {Proceedings of the 8th {ACM} {Conference} on {Recommender} {Systems}},
      publisher = {ACM},
      author = {Said, Alan and Dooms, Simon and Loni, Babak and Tikk, Domonkos},
      month = oct,
      year = {2014}
    }

  • F. Hopfgartner, G. Kazai, U. Kruschwitz, and M. Meder, “Gamification for Information Retrieval,” in ECIR’14: Proceedings of the 36th European Conference on Information Retrieval, Amsterdam, The Netherlands, 2014, pp. 806-809.
    [BibTeX] [Abstract] [Download PDF]

    Gamification is the application of game mechanics, such as leader boards, badges or achievement points, in non-gaming environments with the aim to increase user engagement, data quality or cost effectiveness. A core aspect of gamification solutions is to infuse intrinsic motivations to participate by leveraging people’s natural desires for achievement and competition. While gamification, on the one hand, is emerging as the next big thing in industry, e.g., an effective way to generate business, on the other hand, it is also becoming a major research area. However, its adoption in Information Retrieval (IR) is still in its infancy, despite the wide ranging IR tasks that may benefit from gamification techniques. These include the manual annotation of documents for IR evaluation, the participation in user studies to study interactive IR challenges, or the shift from single-user search to social search, just to mention a few. This context provided the motivation to organise the GamifIR’14 workshop at ECIR.

    @inproceedings{hopfgartner_gamification_2014,
      address = {Amsterdam, The Netherlands},
      title = {Gamification for {Information} {Retrieval}},
      url = {http://dx.doi.org/10.1007/978-3-319-06028-6_101},
      abstract = {Gamification is the application of game mechanics, such as leader boards, badges or achievement points, in non-gaming environments with the aim to increase user engagement, data quality or cost effectiveness. A core aspect of gamification solutions is to infuse intrinsic motivations to participate by leveraging people’s natural desires for achievement and competition. While gamification, on the one hand, is emerging as the next big thing in industry, e.g., an effective way to generate business, on the other hand, it is also becoming a major research area. However, its adoption in Information Retrieval (IR) is still in its infancy, despite the wide ranging IR tasks that may benefit from gamification techniques. These include the manual annotation of documents for IR evaluation, the participation in user studies to study interactive IR challenges, or the shift from single-user search to social search, just to mention a few. This context provided the motivation to organise the GamifIR’14 workshop at ECIR.},
      booktitle = {{ECIR}'14: {Proceedings} of the 36th {European} {Conference} on {Information} {Retrieval}},
      publisher = {Springer Verlag},
      author = {Hopfgartner, Frank and Kazai, Gabriella and Kruschwitz, Udo and Meder, Michael},
      month = oct,
      year = {2014},
      pages = {806--809}
    }

  • K. Yadati, M. Larson, C. C. S. Liem, and A. Hanjalic, “Detecting Drops in Electronic Dance Music: Content based approaches to a socially significant music event,” in The 15th International Society for Music Information Retrieval Conference (ISMIR), 2014.
    [BibTeX] [Abstract] [Download PDF]

    Electronic dance music (EDM) is a popular genre of music. In this paper, we propose a method to automatically detect the characteristic event in an EDM recording that is referred to as a drop. Its importance is reflected in the number of users who leave comments in the general neighborhood of drop events in music on online audio distribution platforms like SoundCloud. The variability that characterizes realizations of drop events in EDM makes automatic drop detection challenging. We propose a two-stage approach to drop detection that first models the sound characteristics during drop events and then incorporates temporal structure by zeroing in on a watershed moment. We also explore the possibility of using the drop-related social comments on the SoundCloud platform as weak reference labels to improve drop detection. The method is evaluated using data from SoundCloud. Performance is measured as the overlap between tolerance windows centered around the hypothesized and the actual drop. Initial experimental results are promising, revealing the potential of the proposed method for combining content analysis and social activity to detect events in music recordings.

    @inproceedings{karthik_yadati_detecting_2014,
      title = {Detecting {Drops} in {Electronic} {Dance} {Music}: {Content} based approaches to a socially significant music event},
      url = {http://www.terasoft.com.tw/conf/ismir2014/proceedings/T026_297_Paper.pdf},
      abstract = {Electronic dance music (EDM) is a popular genre of music. In this paper, we propose a method to automatically detect the characteristic event in an EDM recording that is referred to as a drop. Its importance is reflected in the number of users who leave comments in the general neighborhood of drop events in music on online audio distribution platforms like SoundCloud. The variability that characterizes realizations of drop events in EDM makes automatic drop detection challenging. We propose a two-stage approach to drop detection that first models the sound characteristics during drop events and then incorporates temporal structure by zeroing in on a watershed moment. We also explore the possibility of using the drop-related social comments on the SoundCloud platform as weak reference labels to improve drop detection. The method is evaluated using data from SoundCloud. Performance is measured as the overlap between tolerance windows centered around the hypothesized and the actual drop. Initial experimental results are promising, revealing the potential of the proposed method for combining content analysis and social activity to detect events in music recordings.},
      booktitle = {The 15th {International} {Society} for {Music} {Information} {Retrieval} {Conference} ({ISMIR})},
      author = {Yadati, Karthik and Larson, Martha and Liem, Cynthia C. S. and Hanjalic, Alan},
      month = oct,
      year = {2014}
    }
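
    The ISMIR paper above evaluates drop detection as the overlap between tolerance windows centred on the hypothesised and the actual drop positions. A minimal sketch of such a metric is given below; the 15-second tolerance and the normalisation are assumptions made for illustration, not the exact evaluation protocol used in the paper.

    def window_overlap(predicted, actual, tolerance=15.0):
        """Overlap (in seconds) between tolerance windows centred on the
        hypothesised and the ground-truth drop positions."""
        lo = max(predicted - tolerance, actual - tolerance)
        hi = min(predicted + tolerance, actual + tolerance)
        return max(0.0, hi - lo)

    def normalised_overlap(predicted, actual, tolerance=15.0):
        """Overlap scaled to [0, 1] by the window length."""
        return window_overlap(predicted, actual, tolerance) / (2 * tolerance)

    # A drop hypothesised at 62 s against a ground-truth drop at 70 s.
    print(normalised_overlap(62.0, 70.0))   # ~0.733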

  • A. Said, S. Dooms, B. Loni, and D. Tikk, “Proceedings of the 2014 Recommender Systems Challenge.” 2014.
    [BibTeX] [Abstract] [Download PDF]

    This volume contains the papers presented at the ACM RecSys Challenge 2014: User Engagement as Evaluation held on September 10, 2014 in Foster City, CA, USA.

    @inproceedings{said_proceedings_2014,
      title = {Proceedings of the 2014 {Recommender} {Systems} {Challenge}},
      url = {http://dl.acm.org/citation.cfm?id=2668067},
      abstract = {This volume contains the papers presented at the ACM RecSys Challenge 2014: User Engagement as Evaluation held on September 10, 2014 in Foster City, CA, USA.},
      author = {Said, Alan and Dooms, Simon and Loni, Babak and Tikk, Domonkos},
      month = oct,
      year = {2014}
    }

  • B. Loni, A. Said, M. Larson, and A. Hanjalic, “‘Free Lunch’ Enhancement for Collaborative Filtering with Factorization Machines,” in Proceedings of the 8th ACM Conference on Recommender Systems, 2014.
    [BibTeX] [Abstract] [Download PDF]

    The advantage of Factorization Machines over other factorization models is their ability to easily integrate and efficiently exploit auxiliary information to improve Collaborative Filtering. Until now, this auxiliary information has been drawn from external knowledge sources beyond the user-item matrix. In this paper, we demonstrate that Factorization Machines can exploit additional representations of information inherent in the user-item matrix to improve recommendation performance. We refer to our approach as ‘Free Lunch’ enhancement since it leverages clusters that are based on information that is present in the user-item matrix, but not otherwise directly exploited during matrix factorization. Borrowing clustering concepts from codebook sharing, our approach can also make use of ‘Free Lunch’ information inherent in a user-item matrix from an auxiliary domain that is different from the target domain of the recommender. Our approach improves performance both in the joint case, in which the auxiliary and target domains share users, and in the disjoint case, in which they do not. Although ‘Free Lunch’ enhancement does not apply equally well to any given domain or domain combination, our overall conclusion is that Factorization Machines present an opportunity to exploit information that is ubiquitously present, but commonly under-appreciated by Collaborative Filtering algorithms.

    @inproceedings{loni_`free_2014,
      series = {{RecSys}},
      title = {`{Free} {Lunch}’ {Enhancement} for {Collaborative} {Filtering} with {Factorization} {Machines}},
      url = {http://dl.acm.org/citation.cfm?id=2645771},
      abstract = {The advantage of Factorization Machines over other factorization models is their ability to easily integrate and efficiently exploit auxiliary information to improve Collaborative Filtering. Until now, this auxiliary information has been drawn from external knowledge sources beyond the user-item matrix. In this paper, we demonstrate that Factorization Machines can exploit additional representations of information inherent in the user-item matrix to improve recommendation performance. We refer to our approach as 'Free Lunch' enhancement since it leverages clusters that are based on information that is present in the user-item matrix, but not otherwise directly exploited during matrix factorization. Borrowing clustering concepts from codebook sharing, our approach can also make use of 'Free Lunch' information inherent in a user-item matrix from a auxiliary domain that is different from the target domain of the recommender. Our approach improves performance both in the joint case, in which the auxiliary and target domains share users, and in the disjoint case, in which they do not. Although 'Free Lunch' enhancement does not apply equally well to any given domain or domain combination, our overall conclusion is that Factorization Machines present an opportunity to exploit information that is ubiquitously present, but commonly under-appreciated by Collaborative Filtering algorithms.},
      booktitle = {Proceedings of the 8th {ACM} {Conference} on {Recommender} {Systems}},
      publisher = {ACM},
      author = {Loni, Babak and Said, Alan and Larson, Martha and Hanjalic, Alan},
      month = oct,
      year = {2014}
    }
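
    The ‘Free Lunch’ idea above augments Factorization Machine inputs with cluster information derived from the user-item matrix itself. The sketch below illustrates one plausible construction under stated assumptions: users are clustered by their own rating rows with k-means (scikit-learn), and the cluster id is appended to the usual one-hot user and item indicators. The paper's actual codebook-style clustering may differ.

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy user-item rating matrix; zeros stand for "not rated".
    R = np.array([
        [5, 4, 0, 0, 1],
        [4, 5, 0, 0, 2],
        [0, 0, 4, 5, 0],
        [0, 1, 5, 4, 0],
    ], dtype=float)
    n_users, n_items = R.shape

    # 'Free lunch' information: cluster users by their own rating rows,
    # i.e. by information already present in the user-item matrix.
    n_clusters = 2
    user_cluster = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(R)

    def fm_features(user, item):
        """One-hot user + one-hot item + one-hot user-cluster indicator,
        i.e. a feature vector that a Factorization Machine could consume."""
        x = np.zeros(n_users + n_items + n_clusters)
        x[user] = 1.0
        x[n_users + item] = 1.0
        x[n_users + n_items + user_cluster[user]] = 1.0
        return x

    print(fm_features(0, 3))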

  • K. Yadati, P. S. N. Chandrasekaran Ayyanathan, and M. Larson, “Crowdsorting Timed Comments about Music: Foundations for a New Crowdsourcing Task,” in Proceedings of the MediaEval 2014 Workshop, 2014.
    [BibTeX] [Abstract] [Download PDF]

    This paper provides an overview of the Crowdsorting Timed Comments about Music Task, a new task in the area of crowdsourcing for social media offered by the MediaEval 2014 Multimedia Benchmark. Data for this task is a set of Electronic Dance Music (EDM) tracks, collected from online music sharing platform Soundcloud. Given a set of noisy labels for segments of Electronic Dance Music (EDM) that were collected on Amazon Mechanical Turk, the task is to predict a single ‘correct’ label. The labels indicate whether or not a ‘drop’ occurs in the particular music segment. The larger aim of this task is to contribute to the development of hybrid human/conventional computation techniques to generate accurate labels for social multimedia content. For this reason, participants are also encouraged to predict labels by combining input from the crowd (i.e., human computation) with automatic computation (i.e., processing techniques applied to textual metadata and/or audio signal analysis).

    @inproceedings{yadati_crowdsorting_2014,
      title = {Crowdsorting {Timed} {Comments} about {Music}: {Foundations} for a {New} {Crowdsourcing} {Task}},
      url = {http://ceur-ws.org/Vol-1263/mediaeval2014_submission_78.pdf},
      abstract = {This paper provides an overview of the Crowdsorting Timed
    Comments about Music Task, a new task in the area of
    crowdsourcing for social media offered by the MediaEval
    2014 Multimedia Benchmark. Data for this task is a set
    of Electronic Dance Music (EDM) tracks, collected from online
    music sharing platform Soundcloud. Given a set of noisy
    labels for segments of Electronic Dance Music (EDM) that
    were collected on Amazon Mechanical Turk, the task is to
    predict a single ‘correct’ label. The labels indicate whether
    or not a ‘drop’ occurs in the particular music segment. The
    larger aim of this task is to contribute to the development of
    hybrid human/conventional computation techniques to generate
    accurate labels for social multimedia content. For this
    reason, participants are also encouraged to predict labels by
    combining input from the crowd (i.e., human computation)
    with automatic computation (i.e., processing techniques applied
    to textual metadata and/or audio signal analysis).},
      booktitle = {Proceedings of the {MediaEval} 2014 {Workshop}},
      author = {Yadati, Karthik and Chandrasekaran Ayyanathan, Pavala S.N. and Larson, Martha},
      month = oct,
      year = {2014}
    }

  • A. Said, B. Loni, R. Turrin, and A. Lommatzsch, “An Extended Data Model Format for Composite Recommendation,” in Proceedings of the 8th RecSys conference 2014, Foster City, Silicon Valley, CA, USA, 2014.
    [BibTeX] [Abstract] [Download PDF]

    Current de facto data model standards in the recommender systems field do not support easy encoding of heterogeneous data aspects such as context, content, social ties, etc. In order to facilitate a simpler means of sharing and using the rich datasets used by research as well as production systems today, in this paper we propose a data model standard for heterogeneous datasets in the recommender systems domain. The data model is based on the classical tab separated value (TSV) data model with additional fields for encoding relational data in JSON format. By using already established data sharing formats, we intend to make the usage of the data model as effortless as possible, i.e. there already exist generic tools for parsing and managing the data format in most programming languages. We invite the RecSys community to contribute to the proposed data model in order to increase ease of use and adoption.

    @inproceedings{said_extended_2014,
      address = {Foster City, Silicon Valley, CA, USA},
      title = {An {Extended} {Data} {Model} {Format} for {Composite} {Recommendation}},
      url = {http://ceur-ws.org/Vol-1247/recsys14_poster20.pdf},
      abstract = {Current de facto data model standards in the recommender systems field do not support easy encoding of heterogeneous data aspects such as context, content, social ties, etc. In order to facilitate a simpler means of sharing and using the rich datasets used by researchas well as production systems today, in this paper we propose a data model standard for heterogeneous datasets in the recommender systems domain. The data model is based on the classical tab separated value (TSV) data model with additional fields for encoding relational data in JSON format. Through using already established data sharing formats, we intend to make the usage of the data model as effortless as possible, i.e. there already exist generic tools for parsing and managing the data format in most programming languages. We invite the RecSys community to contribute to the proposed data model in order to increase ease of use and adoption.},
      booktitle = {Proceedings of the 8th {RecSys} conference 2014},
      author = {Said, Alan and Loni, Babak and Turrin, Roberto and Lommatzsch, Andreas},
      month = oct,
      year = {2014}
    }
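
    The proposed data model above combines classical TSV columns with a JSON field for relational and contextual data. The snippet below parses one hypothetical record in such a format; the column order and field names are assumptions made for illustration, not the format specified in the paper.

    import json

    # One hypothetical record: classical TSV columns (user, item, rating, timestamp)
    # followed by a JSON field carrying heterogeneous relational data.
    line = '42\t1337\t4.5\t1412035200\t{"context": {"device": "tablet"}, "friends": [7, 19]}'

    def parse_record(line):
        user, item, rating, timestamp, extra = line.rstrip("\n").split("\t", 4)
        return {
            "user": int(user),
            "item": int(item),
            "rating": float(rating),
            "timestamp": int(timestamp),
            "extra": json.loads(extra),   # contextual / social information as JSON
        }

    print(parse_record(line))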

  • S. Vargas, L. Baltrunas, A. Karatzoglou, and P. Castells, “Coverage, redundancy and size-awareness in genre diversity for recommender systems,” in Proceedings of the 8th ACM Conference on Recommender systems, 2014, pp. 209-216.
    [BibTeX] [Download PDF]
    @inproceedings{vargas_coverage_2014,
      title = {Coverage, redundancy and size-awareness in genre diversity for recommender systems},
      url = {http://dl.acm.org/citation.cfm?id=2645743},
      booktitle = {Proceedings of the 8th {ACM} {Conference} on {Recommender} systems},
      author = {Vargas, Saúl and Baltrunas, Linas and Karatzoglou, Alexandros and Castells, Pablo},
      month = oct,
      year = {2014},
      pages = {209--216}
    }

  • M. Larson, P. Cremonesi, and A. Karatzoglou, “Overview of ACM RecSys CrowdRec 2014 Workshop: Crowdsourcing and Human Computation for Recommender Systems,” in Proceedings of the 8th ACM Conference on Recommender systems (RecSys ’14), ACM, New York, NY, USA, 2014, pp. 381-382.
    [BibTeX] [Abstract] [Download PDF]

    The CrowdRec workshop brings together the recommender system community for discussion and exchange of ideas. Its goal is to allow the potential of human computation and crowdsourcing to be exploited fully and sustainably, leading to the development of improved recommendation and information filtering technologies. Currently, the complete range of possible intelligent contributions that recommender systems could elicit from users is under-explored, and its full extent is unknown. Critical questions addressed in the workshop include how to: formulate crowdtasks, match tasks with crowdmembers, ensure the quality of crowd input, and integrate feedback from the crowd in an optimal manner to improve recommendation. Further, crowdsourcing can also be exploited for system design and system evaluation.

    @inproceedings{larson_overview_2014,
      address = {ACM, New York, NY, USA},
      title = {Overview of {ACM} {RecSys} {CrowdRec} 2014 {Workshop}: {Crowdsourcing} and {Human} {Computation} for {Recommender} {Systems}},
      url = {http://dl.acm.org/citation.cfm?id=2645783},
      abstract = {The CrowdRec workshop brings together the recommender system community for discussion and exchange of ideas. Its goal is to allow the potential of human computation and crowdsourcing to be exploited fully and sustainably, leading to the development of improved recommendation and information filtering technologies. Currently, the complete range of possible intelligent contributions that recommender systems could elicit from users is under-explored, and its full extent is unknown. Critical questions addressed in the workshop include how to: formulate crowdtasks, match tasks with crowdmembers, ensure the quality of crowd input, and integrate feedback from the crowd in an optimal manner to improve recommendation. Further, crowdsourcing can also be exploited for system design and system evaluation.},
      booktitle = {Proceedings of the 8th {ACM} {Conference} on {Recommender} systems ({RecSys} '14)},
      author = {Larson, M. and Cremonesi, P. and Karatzoglou, A.},
      month = oct,
      year = {2014},
      pages = {381--382},
      annote = {(to appear)}
    }

  • A. Said and A. Bellogín, “RiVal – A Toolkit to Foster Reproducibility in Recommender System Evaluation,” in Proceedings of the 8th ACM Conference on Recommender Systems, 2014.
    [BibTeX] [Abstract] [Download PDF]

    Currently, it is difficult to put in context and compare the results from a given evaluation of a recommender system, mainly because too many alternatives exist when designing and implementing an evaluation strategy. Furthermore, the actual implementation of a recommendation algorithm sometimes diverges considerably from the well-known ideal formulation due to manual tuning and modifications observed to work better in some situations. RiVal – a recommender system evaluation toolkit – allows complete control of the different evaluation dimensions that take place in any experimental evaluation of a recommender system: data splitting, definition of evaluation strategies, and computation of evaluation metrics. In this demo we present some of the functionality of RiVal and show step-by-step how RiVal can be used to evaluate the results from any recommendation framework and make sure that the results are comparable and reproducible.

    @inproceedings{said_rival_2014,
      series = {{RecSys}},
      title = {{RiVal} - {A} {Toolkit} to {Foster} {Reproducibility} in {Recommender} {System} {Evaluation}},
      url = {http://dl.acm.org/citation.cfm?id=2645712},
      abstract = {Currently, it is difficult to put in context and compare the results from a given evaluation of a recommender system, mainly because too many alternatives exist when designing and implementing an evaluation strategy. Furthermore, the actual implementation of a recommendation algorithm sometimes diverges considerably from the well-known ideal formulation due to manual tuning and modifications observed to work better in some situations. RiVal – a recommender system evaluation toolkit – allows a complete control of the different evaluation dimensions that take place in any experimental evaluation of a recommender system: data splitting, definition of evaluation strategies, and computation of evaluation metrics. In this demo we present some of the functionality of RiVal and show step-by-step how RiVal can be used to evaluate the results from any recommendation framework and make sure that the results are comparable and reproducible.},
      booktitle = {Proceedings of the 8th {ACM} {Conference} on {Recommender} {Systems}},
      publisher = {ACM},
      author = {Said, Alan and Bellogín, Alejandro},
      month = oct,
      year = {2014}
    }
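
    RiVal itself is a Java toolkit; the Python sketch below is only a hedged illustration of why controlling the evaluation dimensions it exposes (data splitting, evaluation strategy, metrics) matters: a deterministic split plus an explicit metric implementation makes a reported number reproducible. All names and the toy rating data are assumptions, not RiVal's API.

    import random
    from math import sqrt

    def split_ratings(ratings, test_ratio=0.2, seed=42):
        """Deterministic random split of (user, item, rating) tuples; the seed is
        one of the evaluation dimensions that has to be reported for reproducibility."""
        rng = random.Random(seed)
        shuffled = list(ratings)
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * (1 - test_ratio))
        return shuffled[:cut], shuffled[cut:]

    def rmse(predict, test):
        """Root mean squared error of a rating predictor over the test split."""
        errors = [(predict(u, i) - r) ** 2 for u, i, r in test]
        return sqrt(sum(errors) / len(errors))

    # Toy data and a trivial global-mean predictor, evaluated on a fixed split.
    ratings = [(u, i, float((u * i) % 5 + 1)) for u in range(20) for i in range(15)]
    train, test = split_ratings(ratings)
    global_mean = sum(r for _, _, r in train) / len(train)
    print(rmse(lambda u, i: global_mean, test))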

  • D. Loiacono, A. Lommatzsch, and R. Turrin, “An Analysis of the 2014 RecSys Challenge,” in Proceedings of the Recommender Systems Challenge 2014, at ACM RecSys 2014, Foster City, CA, USA, 2014.
    [BibTeX] [Abstract] [Download PDF]

    The RecSys challenge 2014 focuses on the engagement generated by the tweets posted by users of the IMDb application for smartphones. Such engagement depends on attributes concerning the user who posts the message (e.g., his role in the social network), the tweet content (e.g., the rating), and the movie that is the object of the tweet (e.g., the popularity of the movie). In this work we provide an analysis of the dataset and of the task to help participants better understand the challenge. Furthermore, we propose a baseline prediction algorithm. We split our analysis into three stages: (i) data enrichment, (ii) knowledge extraction, and (iii) engagement prediction. Initially, we enriched the dataset with additional movie attributes extracted from IMDb and Freebase. Subsequently, we analyzed the statistics of the main tweet attributes and we applied some machine learning techniques to extract additional knowledge from the data. Finally, we defined a predictor on the basis of the main outcomes of these analyses: a linear regression model on attributes such as the user rating score, the presence of mentions, and whether the tweet is a retweet or has already been retweeted. This predictor achieved an nDCG@10 of 0.8352.

    @inproceedings{loiacono_analysis_2014,
      address = {Foster City, CA, USA},
      series = {{RecSys} '14},
      title = {An {Analysis} of the 2014 {RecSys} {Challenge}},
      url = {http://www.contentwise.tv/files/Turrin_RecSysChallenge_presentation_RecSys2014.pdf},
      abstract = {The RecSys challenge 2014 focuses on the engagement generated by the tweets posted by the users of the IMDb application for smartphones. Such engagement depends on attributes concerning: the user who posts the message (e.g., his role in the social network), the tweet content (e.g., the rating), and the movie object of the tweet (e.g., the popularity of the movie). In this work we provide an analysis of the dataset and of the task to help participants better understand the challenge. Furthermore, we propose a baseline prediction algorithm. We split our analysis into three stages: (i) data enrichment, (ii) knowledge extraction, and (iii) engagement prediction. Initially, we enriched the dataset with additional movie attributes extracted from IMDb and Free-base. Subsequently, we analyzed the statistics of the main tweet attributes and we applied some machine learning techniques to extract additional knowledge from the data. Finally, we defined a predictor on the basis of the main outcomes of these analyses. We define a linear regression model on attributes such as: user rating score, the presence of mentions, and whether the tweet is a retweet or it has already been retweeted. Such predictor led to an nDCG@10 equals to 0.8352.},
      booktitle = {Proceedings of the {Recommender} {Systems} {Challenge} 2014, at {ACM} {RecSys} 2014},
      publisher = {ACM},
      author = {Loiacono, Daniele and Lommatzsch, Andreas and Turrin, Roberto},
      month = oct,
      year = {2014}
    }
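
    The challenge analysis above reports results as nDCG@10 over the ranked tweets. For readers unfamiliar with the metric, the snippet below computes one common formulation of nDCG@k for a single ranked list of graded engagement values; the gain and discount choices are assumptions and may differ from the challenge's exact evaluation script.

    from math import log2

    def ndcg_at_k(relevances, k=10):
        """nDCG@k for one ranked list of graded relevance (engagement) values."""
        gains = relevances[:k]
        dcg = sum(rel / log2(pos + 2) for pos, rel in enumerate(gains))
        ideal = sorted(relevances, reverse=True)[:k]
        idcg = sum(rel / log2(pos + 2) for pos, rel in enumerate(ideal))
        return dcg / idcg if idcg > 0 else 0.0

    # Hypothetical engagement values, in the order a predictor ranked the tweets.
    print(ndcg_at_k([3, 0, 2, 5, 0, 0, 1, 0, 0, 0, 4]))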

  • R. Turrin, R. Pagano, P. Cremonesi, and A. Condorelli, “Time-based TV programs prediction,” in 1st Workshop on Recommender Systems for Television and Online Video at ACM RecSys 2014, 2014.
    [BibTeX] [Abstract] [Download PDF]

    This paper addresses the problem of providing recommendations to TV viewers. In contrast to standard recommender systems operating on static datasets, recommending TV shows must take into consideration that items are scheduled at specific times. Consequently, the catalog of items is particularly dynamic and users’ consumption patterns are strongly affected by time context and channel preferences. In this work, we extend common state-of-the-art recommendation methods to exploit and integrate the current watching context into the user viewing model. Empirical experiments over a large-scale linear TV dataset demonstrate a significant improvement in recommendation quality when context is considered.

    @inproceedings{turrin_time-based_2014,
      title = {Time-based {TV} programs prediction},
      url = {http://www.contentwise.tv/files/Time_based_TV_programs_prediction_Paper.pdf},
      abstract = {This paper addresses the problem of providing recommendations to TV viewers. Conversely to standard recommender systems operating in the settings of static datasets, recommending TV shows must take into consideration that items are scheduled at specific times. Consequently, the catalog of items is particularly dynamic and users consumption pattern is strongly affected by time context and channel preferences. In this work, we extend common state-of-the-art recommendation methods to exploit and integrate the current watching context into the user viewing model. Empirical experiments over a large-scale linear TV dataset demonstrate a significant improvement in recommendation quality when context is considered.},
      booktitle = {1st {Workshop} on {Recommender} {Systems} for {Television} and {Online} {Video} at {ACM} {RecSys} 2014},
      author = {Turrin, Roberto and Pagano, Roberto and Cremonesi, Paolo and Condorelli, Andrea},
      month = sep,
      year = {2014}
    }

  • F. Hopfgartner, B. Kille, A. Lommatzsch, T. Brodt, and T. Heintz, “Benchmarking News Recommendations in a Living Lab,” in CLEF’14: Proceedings of the 5th International Conference of the CLEF Initiative, Sheffield, UK, 2014, pp. 250-267.
    [BibTeX] [Abstract] [Download PDF]

    Most user-centric studies of information access systems in literature suffer from unrealistic settings or limited numbers of users who participate in the study. In order to address this issue, the idea of a living lab has been promoted. Living labs allow us to evaluate research hypotheses using a large number of users who satisfy their information need in a real context. In this paper, we introduce a living lab on news recommendation in real time. The living lab has first been organized as News Recommendation Challenge at ACM RecSys’13 and then as campaign-style evaluation lab NEWSREEL at CLEF’14. Within this lab, researchers were asked to provide news article recommendations to millions of users in real time. Different from user studies which have been performed in a laboratory, these users are following their own agenda. Consequently, laboratory bias on their behavior can be neglected. We outline the living lab scenario and the experimental setup of the two benchmarking events. We argue that the living lab can serve as reference point for the implementation of living labs for the evaluation of information access systems.

    @inproceedings{hopfgartner_benchmarking_2014,
      address = {Sheffield, UK},
      title = {Benchmarking {News} {Recommendations} in a {Living} {Lab}},
      url = {http://dx.doi.org/10.1007/978-3-319-11382-1_21},
      abstract = {Most user-centric studies of information access systems in literature suffer from unrealistic settings or limited numbers of users who participate in the study. In order to address this issue, the idea of a living lab has been promoted. Living labs allow us to evaluate research hypotheses using a large number of users who satisfy their information need in a real context. In this paper, we introduce a living lab on news recommendation in real time. The living lab has first been organized as News Recommendation Challenge at ACM RecSys’13 and then as campaign-style evaluation lab NEWSREEL at CLEF’14. Within this lab, researchers were asked to provide news article recommendations to millions of users in real time. Different from user studies which have been performed in a laboratory, these users are following their own agenda. Consequently, laboratory bias on their behavior can be neglected. We outline the living lab scenario and the experimental setup of the two benchmarking events. We argue that the living lab can serve as reference point for the implementation of living labs for the evaluation of information access systems.},
      booktitle = {{CLEF}'14: {Proceedings} of the 5th {International} {Conference} of the {CLEF} {Initiative}},
      publisher = {Springer Verlag},
      author = {Hopfgartner, Frank and Kille, Benjamin and Lommatzsch, Andreas and Brodt, Torben and Heintz, Tobias},
      month = sep,
      year = {2014},
      pages = {250--267}
    }

  • B. Kille, T. Brodt, T. Heintz, F. Hopfgartner, A. Lommatzsch, and J. Seiler, “Overview of CLEF NEWSREEL 2014: News Recommendations Evaluation Labs,” in CLEF 2014 Evaluation Labs and Workshop, Online Working Notes, Sheffield, UK, 2014, pp. 790-801.
    [BibTeX] [Abstract] [Download PDF]

    This paper summarises the objectives, organisation, and results of the first news recommendation evaluation lab (NEWSREEL 2014). NEWSREEL targeted the evaluation of news recommendation algorithms in the form of a campaign-style evaluation lab. Participants had the chance to apply two types of evaluation schemes. On the one hand, participants could apply their algorithms to a data set. We refer to this setting as off-line evaluation. On the other hand, participants could deploy their algorithms on a server to interactively receive recommendation requests. We refer to this setting as on-line evaluation. This setting ought to reveal the actual performance of recommendation methods. The competition strived to illustrate the differences between evaluation with historical data and evaluation with actual users. The on-line evaluation reflects all the requirements which active recommender systems face in practice, including real-time responses and large-scale data volumes. We present the competition’s results and discuss commonalities regarding participants’ approaches.

    @inproceedings{kille_overview_2014,
      address = {Sheffield, UK},
      title = {Overview of {CLEF} {NEWSREEL} 2014: {News} {Recommendations} {Evaluation} {Labs}},
      url = {http://ceur-ws.org/Vol-1180/CLEF2014wn-Newsreel-Kille2014.pdf},
      abstract = {This paper summarises objectives, organisation, and results of the first news recommendation evaluation lab (NEWSREEL 2014). NEWSREEL targeted the evaluation of news recommendation algorithms in the form of a campaignstyle evaluation lab. Participants had the chance to apply two types of evaluation schemes. On the one hand, participants could apply their algorithms onto a data set. We refer to this setting as off-line evaluation. On the other hand, participants could deploy their algorithms on a server to interactively receive recommendation requests. We refer to this setting as on-line evaluation. This setting ought to reveal the actual performance of recommendation methods. The competition strived to illustrate differences between evaluation with historical data and actual users. The on-line evaluation does reflect all requirements which active recommender systems face in practise. These requirements include real-time responses and large-scale data volumes. We present the competition’s results and discuss commonalities regarding participants’ approaches.},
      booktitle = {{CLEF} 2014 {Evaluation} {Labs} and {Workshop}, {Online} {Working} {Notes}},
      publisher = {CEUR},
      author = {Kille, Benjamin and Brodt, Torben and Heintz, Tobias and Hopfgartner, Frank and Lommatzsch, Andreas and Seiler, Jonas},
      month = sep,
      year = {2014},
      pages = {790--801}
    }

  • D. Tikk, R. Turrin, M. Larson, D. Zibriczky, D. Malagoli, A. Said, A. Lommatzsch, V. Gál, and S. Székely, “Comparative Evaluation of Recommendation Systems for Digital Media,” in Proceedings of the IBC conference 2014, Amsterdam, Netherlands, 2014.
    [BibTeX] [Abstract] [Download PDF]

    TV operators and content providers use recommender systems to connect consumers directly with content that fits their needs, their different devices, and the context in which the content is being consumed. Choosing the right recommender algorithms is critical, and becomes more difficult as content offerings continue to radically expand. Because different algorithms respond differently depending on the use-case, including the content and the consumer base, theoretical estimates of performance are not sufficient. Rather, evaluation must be carried out in a realistic environment. The Reference Framework described here is an evaluation platform that enables TV operators to compare impartially not just the qualitative aspects of recommendation algorithms, but also non-functional requirements of complete recommendation solutions. The Reference Framework is being created by the CrowdRec project which includes the most innovative recommendation system vendors and university researchers in the specific fields of recommendation systems and their evaluation. It provides batch-based evaluation modes and looks forward to supporting stream-based modes in the future. It is also able to encapsulate open source recommender and evaluation frameworks, making it suitable for a wide scope of evaluation needs.

    @inproceedings{tikk_comparative_2014,
      address = {Amsterdam, Netherlands},
      title = {Comparative {Evaluation} of {Recommendation} {Systems} for {Digital} {Media}},
      url = {http://digital-library.theiet.org/content/conferences/10.1049/ib.2014.0015},
      abstract = {TV operators and content providers use recommender systems to connect consumers directly with content that fits their needs, their different devices, and the context in which the content is being consumed. Choosing the right recommender algorithms is critical, and becomes more difficult as content offerings continue to radically expand. Because different algorithms respond differently depending on the use-case, including the content and the consumer base, theoretical estimates of performance are not sufficient. Rather, evaluation must be carried out in a realistic environment. The Reference Framework described here is an evaluation platform that enables TV operators to compare impartially not just the qualitative aspects of recommendation algorithms, but also non-functional requirements of complete recommendation solutions. The Reference Framework is being created by the CrowdRec project which includes the most innovative recommendation system vendors and university researchers in the specific fields of recommendation systems and their evaluation. It provides batch-based evaluation modes and looks forward to supporting stream-based modes in the future. It is also able to encapsulate open source recommender and evaluation frameworks, making it suitable for a wide scope of evaluation needs.},
      booktitle = {Proceedings of the {IBC} conference 2014},
      author = {Tikk, Domonkos and Turrin, Roberto and Larson, Martha and Zibriczky, David and Malagoli, Davide and Said, Alan and Lommatzsch, Andreas and Gál, Viktor and Székely, Sandor},
      month = sep,
      year = {2014}
    }

  • T. Brodt, A. Lommatzsch, and F. Hopfgartner, “Shedding Light on a Living Lab: The CLEF NEWSREEL Open Recommendation Platform,” in IIiX’14: Proceedings of Information Interaction in Context Conference, Regensburg, Germany, 2014, pp. 223-226.
    [BibTeX] [Abstract] [Download PDF]

    In the CLEF NEWSREEL lab, participants are invited to evaluate news recommendation techniques in real-time by providing news recommendations to actual users that visit commercial news portals to satisfy their information needs. A central role within this lab is the communication between participants and the users. This is enabled by The Open Recommendation Platform (ORP), a web-based platform which distributes users’ impressions of news articles to the participants and returns their recommendations to the readers. In this demo, we illustrate the platform and show how requests are handled to provide relevant news articles in real-time.

    @inproceedings{brodt_shedding_2014,
      address = {Regensburg, Germany},
      title = {Shedding {Light} on a {Living} {Lab}: {The} {CLEF} {NEWSREEL} {Open} {Recommendation} {Platform}},
      url = {http://dx.doi.org/10.1145/2637002.2637028},
      abstract = {In the CLEF NEWSREEL lab, participants are invited to evaluate news recommendation techniques in real-time by providing news recommendations to actual users that visit commercial news portals to satisfy their information needs. A central role within this lab is the communication between participants and the users. This is enabled by The Open Recommendation Platform (ORP), a web-based platform which distributes users' impressions of news articles to the participants and returns their recommendations to the readers. In this demo, we illustrate the platform and show how requests are handled to provide relevant news articles in real-time.},
      booktitle = {{IIiX}'14: {Proceedings} of {Information} {Interaction} in {Context} {Conference}},
      publisher = {ACM},
      author = {Brodt, Torben and Lommatzsch, Andreas and Hopfgartner, Frank},
      month = aug,
      year = {2014},
      pages = {223--226}
    }

  • C. Esiyok, B. Kille, B. J. Jain, F. Hopfgartner, and S. Albayrak, “Users’ Reading Habits in Online News Portals,” in IIiX’14: Proceedings of Information Interaction in Context Conference, Regensburg, Germany, 2014, pp. 263-266.
    [BibTeX] [Abstract] [Download PDF]

    The aim of this study is to survey the reading habits of users of an online news portal. The assumption motivating this study is that insight into the reading habits of users can be helpful for designing better news recommendation systems. We estimated the transition probabilities that users who read an article of one news category will move on to read an article of another (not necessarily distinct) news category. For this, we analyzed the users’ click behavior within the plista data set. Key findings are the popularity of the local category, the loyalty of readers to the same category, similar results when considering enforced click streams, and the observation that click behavior is highly influenced by the news category.

    @inproceedings{esiyok_users_2014,
      address = {Regensburg, Germany},
      title = {Users' {Reading} {Habits} in {Online} {News} {Portals}},
      url = {http://dl.acm.org/citation.cfm?id=2637038},
      abstract = {The aim of this study is to survey reading habits of users of an online news portal. The assumption motivating this study is that insight into the reading habits of users can be helpful to design better news recommendation systems. We estimated the transition probabilities that users who read an article of one news category will move to read an article of another (not necessarily distinct) news category. For this, we analyzed the users' click behavior within plista data set. Key findings are the popularity of category local, loyalty of readers to the same category, observing similar results when addressing enforced click streams, and the case that click behavior is highly influenced by the news category.},
      booktitle = {{IIiX}'14: {Proceedings} of {Information} {Interaction} in {Context} {Conference}},
      publisher = {ACM},
      author = {Esiyok, Cagdas and Kille, Benjamin and Jain, Brijnesh Johannes and Hopfgartner, Frank and Albayrak, Sahin},
      month = aug,
      year = {2014},
      pages = {263--266}
    }
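
    The study above estimates category-to-category transition probabilities from users' click sequences. A minimal sketch of that estimation is shown below, assuming sessions are already available as ordered lists of category labels; the hypothetical data and the row normalisation are illustrative only, not the authors' analysis code.

    from collections import Counter, defaultdict

    def transition_matrix(sessions):
        """Row-normalised category-to-category transition probabilities estimated
        from per-user sequences of read article categories."""
        counts = defaultdict(Counter)
        for sequence in sessions:
            for src, dst in zip(sequence, sequence[1:]):
                counts[src][dst] += 1
        return {
            src: {dst: n / sum(row.values()) for dst, n in row.items()}
            for src, row in counts.items()
        }

    # Hypothetical per-session sequences of article categories.
    sessions = [
        ["local", "local", "sports", "local"],
        ["politics", "local", "local"],
        ["sports", "local", "sports"],
    ]
    print(transition_matrix(sessions))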

  • T. V. Nguyen, A. Karatzoglou, and L. Baltrunas, “Gaussian Process Factorization Machines for Context-aware Recommendations,” in Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval (SIGIR ’14), ACM, New York, NY, USA, 2014, pp. 63-72.
    [BibTeX] [Abstract] [Download PDF]

    Context-aware recommendation can lead to significant improvements in the relevance of the recommended items by modeling the nuanced ways in which context influences preferences. The dominant approach in context-aware recommendation has been the multidimensional latent factors approach in which users, items, and context variables are represented as latent features in a low-dimensional space. An interaction between a user, item, and a context variable is typically modeled as some linear combination of their latent features. However, given the many possible types of interactions between user, items and contextual variables, it may seem unrealistic to restrict the interactions among them to linearity. To address this limitation, we develop a novel and powerful non-linear probabilistic algorithm for context-aware recommendation using Gaussian processes. The method, which we call Gaussian Process Factorization Machines (GPFM), is applicable to both the explicit feedback setting (e.g. numerical ratings as in the Netflix dataset) and the implicit feedback setting (i.e. purchases, clicks). We derive stochastic gradient descent optimization to allow scalability of the model. We test GPFM on five different benchmark contextual datasets. Experimental results demonstrate that GPFM outperforms state-of-the-art context-aware recommendation methods and thus confirm the advantage of accounting for complex non-linear interactions in latent factor-based models.

    @inproceedings{nguyen_gaussian_2014,
      address = {ACM, New York, NY, USA},
      title = {Gaussian {Process} {Factorization} {Machines} for {Context}-aware {Recommendations}},
      url = {http://dl.acm.org/citation.cfm?id=2609623},
      abstract = {Context-aware recommendation can lead to significant improvements in the relevance of the recommended items by modeling the nuanced ways in which context influences preferences. The dominant approach in context-aware recommendation has been the multidimensional latent factors approach in which users, items, and context variables are represented as latent features in a low-dimensional space. An interaction between a user, item, and a context variable is typically modeled as some linear combination of their latent features. However, given the many possible types of interactions between users, items and contextual variables, it may seem unrealistic to restrict the interactions among them to linearity. To address this limitation, we develop a novel and powerful non-linear probabilistic algorithm for context-aware recommendation using Gaussian processes. The method, which we call Gaussian Process Factorization Machines (GPFM), is applicable to both the explicit feedback setting (e.g. numerical ratings as in the Netflix dataset) and the implicit feedback setting (i.e. purchases, clicks). We derive stochastic gradient descent optimization to allow scalability of the model. We test GPFM on five different benchmark contextual datasets. Experimental results demonstrate that GPFM outperforms state-of-the-art context-aware recommendation methods and thus confirm the advantage of accounting for complex non-linear interactions in latent factor-based models.},
      booktitle = {Proceedings of the 37th international {ACM} {SIGIR} conference on {Research} \& development in information retrieval ({SIGIR} '14)},
      author = {Nguyen, Trung V. and Karatzoglou, Alexandros and Baltrunas, Linas},
      month = jul,
      year = {2014},
      pages = {63--72}
    }
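
    The following toy snippet illustrates only the contrast at the heart of the entry above: a linear latent-factor interaction versus a non-linear, kernel-based score. It is not an implementation of GPFM; the latent vectors and the RBF inducing point are random stand-ins for parameters that GPFM would learn.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 8
    user, item, context = rng.normal(size=(3, d))  # stand-ins for learned latent features

    # Linear latent-factor interaction: a sum of pairwise dot products.
    linear_score = user @ item + user @ context + item @ context

    # Non-linear alternative: an RBF kernel between the concatenated latent
    # representation and a reference ("inducing") point in the same space.
    x = np.concatenate([user, item, context])
    inducing_point = rng.normal(size=3 * d)
    lengthscale = 1.0
    rbf_score = np.exp(-np.sum((x - inducing_point) ** 2) / (2 * lengthscale ** 2))

    print(float(linear_score), float(rbf_score))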

  • A. Bellogín, A. Said, and A. de Vries, “The Magic Barrier of Recommender Systems – No Magic, Just Ratings,” in Proceedings of the 22nd International Conference on User Modeling, Adaptation, and Personalization, 2014.
    [BibTeX] [Abstract] [Download PDF]

    Recommender Systems have to deal with different types of users, who represent their preferences in many ways. This difference in user’s behaviour has a deep impact on the final performance of the recommender system, where some users may receive better or worse recommendations depending, mostly, on the quantity and the quality of the information the system knows about the user. Specifically, the inconsistencies of the user impose a lower bound on the error the system may achieve when predicting ratings for that particular user. In this work, we analyse how the consistency of user ratings (coherence) may predict the performance of recommendation methods. More specifically, our results show that our definition of coherence is correlated with the so-called magic barrier, and thus, it could be used to discriminate between easy users (those with a lower magic barrier) and difficult ones (those with a higher magic barrier). We report experiments where the recommendation error for the more coherent users is lower than that of the less coherent ones. We further validate these results by using a public dataset, where the magic barrier is not available, in which we obtain similar performance improvements.

    @inproceedings{bellogin_magic_2014,
      series = {{UMAP}},
      title = {The {Magic} {Barrier} of {Recommender} {Systems} – {No} {Magic}, {Just} {Ratings}},
      url = {http://www.dai-labor.de/fileadmin/Files/Publikationen/Buchdatei/UMAP2012_Said_MagicBarrier.pdf},
      abstract = {Recommender Systems have to deal with different types of users, who represent their preferences in many ways. This difference in user's behaviour has a deep impact on the final performance of the recommender system, where some users may receive better or worse recommendations depending, mostly, on the quantity and the quality of the information the system knows about the user. Specifically, the inconsistencies of the user impose a lower bound on the error the system may achieve when predicting ratings for that particular user. In this work, we analyse how the consistency of user ratings (coherence) may predict the performance of recommendation methods. More specifically, our results show that our definition of coherence is correlated with the so-called magic barrier, and thus, it could be used to discriminate between easy users (those with a lower magic barrier) and difficult ones (those with a higher magic barrier). We report experiments where the recommendation error for the more coherent users is lower than that of the less coherent ones. We further validate these results by using a public dataset, where the magic barrier is not available, in which we obtain similar performance improvements.},
      booktitle = {Proceedings of the 22nd {International} {Conference} on {User} {Modeling}, {Adaptation}, and {Personalization}},
      publisher = {Springer},
      author = {Bellogín, Alejandro and Said, Alan and de Vries, Arjen},
      month = jul,
      year = {2014}
    }
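
    To make the notion of a per-user lower bound more tangible, the simplified sketch below computes a magic-barrier-style bound from repeated ratings: if a user rates the same item several times, no predictor can achieve an RMSE below the spread of those ratings around their mean. This is an illustrative simplification, not the paper's exact formulation, and the repeated-rating data is invented.

    import numpy as np

    # user -> item -> repeated ratings given by that user for that item
    repeated_ratings = {
        "easy_user":      {"item_a": [4, 4, 4], "item_b": [2, 2]},
        "difficult_user": {"item_a": [5, 2, 4], "item_b": [1, 4]},
    }

    for user, items in repeated_ratings.items():
        squared_deviations = []
        for ratings in items.values():
            mean = np.mean(ratings)
            squared_deviations.extend((r - mean) ** 2 for r in ratings)
        barrier = np.sqrt(np.mean(squared_deviations))
        print(user, "lower bound on achievable RMSE:", round(barrier, 2))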

  • Y. Shi, M. Larson, and A. Hanjalic, “Collaborative Filtering beyond the User-item Matrix: A survey of State of the Art and Future Challenges,” ACM Computing Surveys, 2014.
    [BibTeX] [Abstract] [Download PDF]

    Over the past two decades, a large amount of research effort has been devoted to developing algorithms that generate recommendations. The resulting research progress has established the importance of the user-item matrix, which encodes the individual preferences of users for items in a collection, for recommender systems. The user-item matrix provides the basis for collaborative filtering techniques, the dominant framework for recommender systems. Currently, new recommendation scenarios are emerging that offer promising new information that goes beyond the user-item matrix. This information can be divided into two categories related to its source: rich side information concerning users and items, and interaction information associated with the interplay of users and items. In this survey, we summarize and analyze recommendation scenarios involving information sources and the collaborative filtering algorithms that have been recently developed to address them. We provide a comprehensive introduction to a large body of research, over 200 key references, with the aim of supporting the further development of recommender systems exploiting information beyond the user-item matrix. On the basis of this material, we identify and discuss what we see as the central challenges lying ahead for recommender system technology, both in terms of extensions of existing techniques as well as of the integration of techniques and technologies drawn from other research areas.

    @article{shi_collaborative_2014,
      title = {Collaborative {Filtering} beyond the {User}-item {Matrix}: {A} survey of {State} of the {Art} and {Future} {Challenges}},
      url = {http://dl.acm.org/citation.cfm?id=2556270},
      abstract = {Over the past two decades, a large amount of research effort has been devoted to developing algorithms that generate recommendations. The resulting research progress has established the importance of the user-item matrix, which encodes the individual preferences of users for items in a collection, for recommender systems. The user-item matrix provides the basis for collaborative filtering techniques, the dominant framework for recommender systems. Currently, new recommendation scenarios are emerging that offer promising new information that goes beyond the user-item matrix. This information can be divided into two categories related to its source: rich side information concerning users and items, and interaction information associated with the interplay of users and items. In this survey, we summarize and analyze recommendation scenarios involving information sources and the collaborative filtering algorithms that have been recently developed to address them. We provide a comprehensive introduction to a large body of research, over 200 key references, with the aim of supporting the further development of recommender systems exploiting information beyond the user-item matrix. On the basis of this material, we identify and discuss what we see as the central challenges lying ahead for recommender system technology, both in terms of extensions of existing techniques as well as of the integration of techniques and technologies drawn from other research areas.},
      journal = {ACM Computing Surveys},
      author = {Shi, Yue and Larson, Martha and Hanjalic, Alan},
      month = jul,
      year = {2014}
    }

  • A. Said, M. Larson, D. Tikk, P. Cremonesi, A. Karatzoglou, F. Hopfgartner, R. Turrin, and J. Geurts, “User-Item Reciprocity in Recommender Systems: Incentivizing the Crowd,” in UMAP Project Synergy (UMAP ProS) Workshop “A forum for UMAP related projects to exchange ideas and practices”, 2014.
    [BibTeX] [Abstract] [Download PDF]

    Data consumption has changed significantly in the last 10 years. The digital revolution and the World Wide Web have brought an ocean of information for users to choose from. Recommender systems are a popular means of finding content that is both relevant and personalized. However, today’s users require better recommender systems, capable of producing continuous data feeds keeping up with their moment-to-moment needs in fast-moving mobile worlds. The CrowdRec project addresses this demand by providing recommendations that are context-aware, resource-combining, socially-informed, interactive and scalable. The key insight of CrowdRec is that, in order to achieve the dense, high-quality, timely information required for such systems, it is necessary to move from collecting information passively from users, to more active techniques fostering user engagement. CrowdRec activates the crowd, soliciting input and feedback from the wider community.

    @inproceedings{said_user-item_2014,
      title = {User-{Item} {Reciprocity} in {Recommender} {Systems}: {Incentivizing} the {Crowd}},
      url = {http://ceur-ws.org/Vol-1181/pros2014_paper_06.pdf},
      abstract = {Data consumption has changed significantly in the last 10 years. The digital revolution and the World Wide Web have brought an ocean of information for users to choose from. Recommender systems are a popular means of finding content that is both relevant and personalized. However, today's users require better recommender systems, capable of producing continuous data feeds keeping up with their moment-to-moment needs in fast-moving mobile worlds. The CrowdRec project addresses this demand by providing recommendations that are context-aware, resource-combining, socially-informed, interactive and scalable. The key insight of CrowdRec is that, in order to achieve the dense, high-quality, timely information required for such systems, it is necessary to move from collecting information passively from users, to more active techniques fostering user engagement. CrowdRec activates the crowd, soliciting input and feedback from the wider community.},
      booktitle = {{UMAP} {Project} {Synergy} ({UMAP} {ProS}) {Workshop} “{A} forum for {UMAP} related projects to exchange ideas and practices”},
      author = {Said, Alan and Larson, Martha and Tikk, Domonkos and Cremonesi, Paolo and Karatzoglou, Alexandros and Hopfgartner, Frank and Turrin, Roberto and Geurts, Joost},
      month = jul,
      year = {2014}
    }

  • P. Cremonesi, F. Garzotto, R. Pagano, R. Facendola, M. Guarnerio, and M. Natali, “Polarized Review Summarization as Decision Making Tool,” in Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces, Como, 2014.
    [BibTeX] [Abstract] [Download PDF]

    When choosing a hotel, a restaurant or a movie, many people rely on the reviews available on the Web. However, this huge amount of opinions makes it difficult for users to have a comprehensive vision of the crowd judgments and to make an optimal decision. In this work we provide evidence that automatic text summarization of reviews can be used to design Web applications able to effectively reduce the decision making effort in domains where decisions are based upon the opinion of the crowd.

    @inproceedings{cremonesi_polarized_2014,
      address = {Como},
      series = {{AVI}},
      title = {Polarized {Review} {Summarization} as {Decision} {Making} {Tool}},
      url = {http://dl.acm.org/citation.cfm?id=2600047},
      abstract = {When choosing a hotel, a restaurant or a movie, many people rely on the reviews available on the Web. However, this huge amount of opinions makes it difficult for users to have a comprehensive vision of the crowd judgments and to make an optimal decision. In this work we provide evidence that automatic text summarization of reviews can be used to design Web applications able to effectively reduce the decision making effort in domains where decisions are based upon the opinion of the crowd.},
      booktitle = {Proceedings of the 2014 {International} {Working} {Conference} on {Advanced} {Visual} {Interfaces}},
      author = {Cremonesi, Paolo and Garzotto, Franca and Pagano, Roberto and Facendola, Raffaele and Guarnerio, Matteo and Natali, Mattia},
      month = may,
      year = {2014}
    }

  • B. Hidasi and D. Tikk, “Approximate modeling of continuous context in factorization algorithms,” in Workshop on Context-awareness in Retrieval and Recommendation, Amsterdam, 2014.
    [BibTeX] [Abstract] [Download PDF]

    Factorization based algorithms, such as matrix or tensor factorization, are widely used in the field of recommender systems. These methods model the relations between the entities of two or more dimensions. The entity based approach is suitable for dimensions such as users, items and several context types, where the domain of the context is nominal. Continuous and ordinal context dimensions are usually discretized and their values are used as nominal entities. While this enables the usage of continuous context in factorization methods, still much information is lost during the process. In this paper we propose two approaches for better modeling of the continuous context dimensions. Fuzzy event modeling tackles the problem through the uncertainty of the value of the observation in the context dimension. Fuzzy context modeling, on the other hand, enables context-states to overlap, thus certain observations are influenced by multiple context-states. Throughout the paper seasonality is used as an example of continuous context. We incorporate the modeling concepts into the iTALS algorithm, without degrading its scalability. The effect of the two approaches on recommendation accuracy is measured on five implicit feedback databases.

    @inproceedings{hidasi_approximate_2014,
      address = {Amsterdam},
      title = {Approximate modeling of continuous context in factorization algorithms.},
      url = {http://www.hidasi.eu/content/fuzzy_context_carr14.pdf},
      abstract = {Factorization based algorithms, such as matrix or tensor factorization, are widely used in the field of recommender systems. These methods model the relations between the entities of two or more dimensions. The entity based approach is suitable for dimensions such as users, items and several context types, where the domain of the context is nominal. Continuous and ordinal context dimensions are usually discretized and their values are used as nominal entities. While this enables the usage of continuous context in factorization methods, still much information is lost during the process. In this paper we propose two approaches for better modeling of the continuous context dimensions. Fuzzy event modeling tackles the problem through the uncertainty of the value of the observation in the context dimension. Fuzzy context modeling, on the other hand, enables context-states to overlap, thus certain observations are influenced by multiple context-states. Throughout the paper seasonality is used as an example of continuous context. We incorporate the modeling concepts into the iTALS algorithm, without degrading its scalability. The effect of the two approaches on recommendation accuracy is measured on five implicit feedback databases.},
      booktitle = {Workshop on {Context}-awareness in {Retrieval} and {Recommendation}},
      author = {Hidasi, Balázs and Tikk, Domonkos},
      month = apr,
      year = {2014}
    }
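
    The short sketch below illustrates the fuzzy context-state idea from the entry above: instead of assigning an observation to exactly one discretised context state, a continuous context value (here, the hour of day) receives graded membership in several overlapping states. The state centres, the triangular membership function and the width are illustrative choices, not the paper's, and wrap-around at midnight is ignored for brevity.

    def triangular_membership(x, centre, width):
        """Membership in [0, 1], peaking at `centre` and reaching zero `width` away."""
        return max(0.0, 1.0 - abs(x - centre) / width)

    context_state_centres = {"night": 2, "morning": 9, "afternoon": 15, "evening": 21}

    hour = 10.5  # continuous context value of one observed event
    memberships = {
        state: triangular_membership(hour, centre, width=6.0)
        for state, centre in context_state_centres.items()
    }
    total = sum(memberships.values())
    weights = {state: m / total for state, m in memberships.items() if m > 0}
    print(weights)  # the observation contributes, with weights, to overlapping context-states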

  • M. Meder, T. Plumbaum, and F. Hopfgartner, “DAIKnow: A Gamified Enterprise Bookmarking System,” in ECIR’14: Proceedings of the 36th European Conference on Information Retrieval, Amsterdam, The Netherlands, 2014, pp. 759-762.
    [BibTeX] [Abstract] [Download PDF]

    One of the core ideas of gamification in an enterprise setting is to engage employees, i.e., to motivate them to fulfil boring, but necessary tasks. In this demo paper, we present a gamified enterprise bookmarking system which incorporates points, badges and a leaderboard. Preliminary studies indicate that these gamification methods result in an increased user engagement.

    @inproceedings{meder_daiknow:_2014,
      address = {Amsterdam, The Netherlands},
      title = {{DAIKnow}: {A} {Gamified} {Enterprise} {Bookmarking} {System}},
      url = {http://dx.doi.org/10.1007/978-3-319-06028-6_90},
      abstract = {One of the core ideas of gamification in an enterprise setting is to engage employees, i.e., to motivate them to fulfil boring, but necessary tasks. In this demo paper, we present a gamified enterprise bookmarking system which incorporates points, badges and a leaderboard. Preliminary studies indicate that these gamification methods result in an increased user engagement.},
      booktitle = {{ECIR}'14: {Proceedings} of the 36th {European} {Conference} on {Information} {Retrieval}},
      publisher = {Springer Verlag},
      author = {Meder, Michael and Plumbaum, Till and Hopfgartner, Frank},
      month = apr,
      year = {2014},
      pages = {759--762}
    }

  • F. Hopfgartner, G. Kazai, U. Kruschwitz, and M. Meder, Proceedings of the Workshop on Gamification for Information Retrieval, Amsterdam, The Netherlands: ACM ICPS, 2014.
    [BibTeX] [Abstract] [Download PDF]

    We are delighted to welcome you to the First International Workshop on Gamification for Information Retrieval, held on April 13, 2014 in conjunction with ECIR’14 in Amsterdam. This workshop focuses on the challenges and opportunities that gamification can present for the information retrieval (IR) community. Gamification is the application of game mechanics, such as leader boards, badges or achievement points, in non-gaming environments with the aim to increase user engagement, data quality or cost effectiveness. A core aspect of gamification solutions is to infuse intrinsic motivations to participate by leveraging people’s natural desires for achievement and competition. While gamification, on the one hand, is emerging as the next big thing in industry, e.g., an effective way to generate business, on the other hand, it is also becoming a major research area. However, its adoption in Information Retrieval is still in its infancy, despite the wide ranging IR tasks that may benefit from gamification techniques. These include the manual annotation of documents for IR evaluation, the participation in user studies to study interactive IR challenges, or the shift from single-user search to social search, just to mention a few.

    @book{hopfgartner_proceedings_2014,
      address = {Amsterdam, The Netherlands},
      title = {Proceedings of the {Workshop} on {Gamification} for {Information} {Retrieval}},
      url = {http://dl.acm.org/citation.cfm?id=2594776},
      abstract = {We are delighted to welcome you to the First International Workshop on Gamification for Information Retrieval, held on April 13, 2014 in conjunction with ECIR'14 in Amsterdam. This workshop focuses on the challenges and opportunities that gamification can present for the information retrieval (IR) community. Gamification is the application of game mechanics, such as leader boards, badges or achievement points, in non-gaming environments with the aim to increase user engagement, data quality or cost effectiveness. A core aspect of gamification solutions is to infuse intrinsic motivations to participate by leveraging people's natural desires for achievement and competition. While gamification, on the one hand, is emerging as the next big thing in industry, e.g., an effective way to generate business, on the other hand, it is also becoming a major research area. However, its adoption in Information Retrieval is still in its infancy, despite the wide ranging IR tasks that may benefit from gamification techniques. These include the manual annotation of documents for IR evaluation, the participation in user studies to study interactive IR challenges, or the shift from single-user search to social search, just to mention a few.},
      publisher = {ACM ICPS},
      author = {Hopfgartner, Frank and Kazai, Gabriella and Kruschwitz, Udo and Meder, Michael},
      month = apr,
      year = {2014}
    }

  • B. Dumelic, M. Larson, and A. Bozzon, “Moody Closet: Exploring Intriguing New Views on Wardrobe Recommendation,” in Proceedings of the First International Workshop on Gamification for Information Retrieval, 2014.
    [BibTeX] [Abstract] [Download PDF]

    This paper introduces Moody Closet, a mobile application for the management of a personal wardrobe with a personalized outfit recommender. To provide incentive for the users to add content and express their preferences, the system provides an easy and enjoyable interaction, which delivers new perspectives on their closets. In particular, we focus on the mood of the wearer, which is considered to be an intriguing trigger capable of prompting the contribution of information needed to feed a recommendation system. An exploratory study with a small set of users provides an initial demonstration that the concept has the potential to fascinate users and motivate them to contribute. We demonstrate a working prototype which showcases the addition of content and the triggers provided to motivate this process.

    @inproceedings{dumelic_moody_2014,
      series = {{GamifIR}},
      title = {Moody {Closet}: {Exploring} {Intriguing} {New} {Views} on {Wardrobe} {Recommendation}},
      url = {http://dl.acm.org/citation.cfm?id=2594790},
      abstract = {This paper introduces Moody Closet, a mobile application for the management of a personal wardrobe with a personalized outfit recommender. To provide incentive for the users to add content and express their preferences, the system provides an easy and enjoyable interaction, which delivers new perspectives on their closets. In particular, we focus on the mood of the wearer, which is considered to be an intriguing trigger capable of prompting the contribution of information needed to feed a recommendation system. An exploratory study with a small set of users provides an initial demonstration that the concept has the potential to fascinate users and motivate them to contribute. We demonstrate a working prototype which showcases the addition of content and the triggers provided to motivate this process.},
      booktitle = {Proceedings of the {First} {International} {Workshop} on {Gamification} for {Information} {Retrieval}},
      author = {Dumelic, Bojana and Larson, Martha and Bozzon, Alessandro},
      month = apr,
      year = {2014}
    }

  • P. Cremonesi, F. Garzotto, R. Pagano, and M. Quadrana, “Recommending Without Short Head,” in Proceedings of the Companion Publication of the 23rd International Conference on World Wide Web Companion, Republic and Canton of Geneva, Switzerland, 2014, pp. 245-246.
    [BibTeX] [Abstract] [Download PDF]

    We discuss a comprehensive study exploring the impact of recommender systems when recommendations are forced to omit popular items (short head) and to use niche products only (long tail). This is an interesting issue in domains, such as e-tourism, where product availability is constrained, "best sellers" (the most popular items) are the first ones to be consumed, and the short head may eventually become unavailable for recommendation purposes. Our work provides evidence that the effects resulting from item consumption may increase the utility of personalized recommendations.

    @inproceedings{cremonesi_recommending_2014,
      address = {Republic and Canton of Geneva, Switzerland},
      series = {{WWW} {Companion} '14},
      title = {Recommending {Without} {Short} {Head}},
      isbn = {978-1-4503-2745-9},
      url = {http://wwwconference.org/proceedings/www2014/companion/p245.pdf},
      abstract = {We discuss a comprehensive study exploring the impact of recommender systems when recommendations are forced to omit popular items (short head) and to use niche products only (long tail). This is an interesting issue in domains, such as e-tourism, where product availability is constrained, "best sellers" (the most popular items) are the first ones to be consumed, and the short head may eventually become unavailable for recommendation purposes. Our work provides evidence that the effects resulting from item consumption may increase the utility of personalized recommendations.},
      booktitle = {Proceedings of the {Companion} {Publication} of the 23rd {International} {Conference} on {World} {Wide} {Web} {Companion}},
      publisher = {International World Wide Web Conferences Steering Committee},
      author = {Cremonesi, Paolo and Garzotto, Franca and Pagano, Roberto and Quadrana, Massimo},
      month = apr,
      year = {2014},
      keywords = {e-tourism, evaluation, quality, recommender systems},
      pages = {245--246}
    }
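
    The basic setup of the study above can be sketched in a few lines: rank items by popularity, drop the most popular fraction (the short head), and recommend from the remaining long tail only. The interaction data and the 25% cut-off below are invented for illustration; they are not the paper's experimental configuration.

    from collections import Counter

    interactions = [  # (user, item) consumption events
        ("u1", "i1"), ("u2", "i1"), ("u3", "i1"), ("u4", "i1"),
        ("u1", "i2"), ("u2", "i2"), ("u3", "i3"), ("u4", "i4"),
    ]

    popularity = Counter(item for _, item in interactions)
    catalogue = sorted(popularity, key=popularity.get, reverse=True)

    short_head_fraction = 0.25  # e.g. exclude the top 25% most popular items
    cutoff = max(1, int(len(catalogue) * short_head_fraction))
    long_tail_candidates = set(catalogue[cutoff:])

    print("excluded short head:", catalogue[:cutoff])
    print("remaining candidates:", long_tail_candidates)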

  • A. Lommatzsch and S. Albayrak, “Real-Time News Recommendation Using Context-Aware Ensembles,” in Advances in Information Retrieval, Springer International Publishing, 2014, vol. 8416, pp. 51-62.
    [BibTeX] [Abstract] [Download PDF]

    With the rapidly growing amount of items and news articles on the internet, recommender systems are one of the key technologies to cope with the information overload and to assist users in finding information matching their individual preferences. News and domain-specific information portals are important knowledge sources on the Web frequently accessed by millions of users. In contrast to product recommender systems, news recommender systems must address additional challenges, e.g. short news article lifecycles, heterogeneous user interests, strict time constraints, and context-dependent article relevance. Since news articles have only a short time to live, recommender models have to be continuously adapted, ensuring that the recommendations are always up-to-date, hampering the pre-computation of suggestions. In this paper we present our framework for providing real-time news recommendations. We discuss the implemented algorithms optimized for the news domain and present an approach for estimating the recommender performance. Based on our analysis we implement an agent-based recommender system, aggregating several different recommender strategies. We learn a context-aware delegation strategy, allowing us to select the best recommender algorithm for each request. The evaluation shows that the implemented framework outperforms traditional recommender approaches and allows us to adapt to the specific properties of the considered news portals and recommendation requests.

    @incollection{lommatzsch_real-time_2014,
      series = {Lecture {Notes} in {Computer} {Science}},
      title = {Real-{Time} {News} {Recommendation} {Using} {Context}-{Aware} {Ensembles}},
      volume = {8416},
      isbn = {978-3-319-06027-9},
      url = {http://dx.doi.org/10.1007/978-3-319-06028-6_5},
      abstract = {With the rapidly growing amount of items and news articles on the internet, recommender systems are one of the key technologies to cope with the information overload and to assist users in finding information matching their individual preferences. News and domain-specific information portals are important knowledge sources on the Web frequently accessed by millions of users. In contrast to product recommender systems, news recommender systems must address additional challenges, e.g. short news article lifecycles, heterogeneous user interests, strict time constraints, and context-dependent article relevance. Since news articles have only a short time to live, recommender models have to be continuously adapted, ensuring that the recommendations are always up-to-date, hampering the pre-computation of suggestions. In this paper we present our framework for providing real-time news recommendations. We discuss the implemented algorithms optimized for the news domain and present an approach for estimating the recommender performance. Based on our analysis we implement an agent-based recommender system, aggregating several different recommender strategies. We learn a context-aware delegation strategy, allowing us to select the best recommender algorithm for each request. The evaluation shows that the implemented framework outperforms traditional recommender approaches and allows us to adapt to the specific properties of the considered news portals and recommendation requests.},
      booktitle = {Advances in {Information} {Retrieval}},
      publisher = {Springer International Publishing},
      author = {Lommatzsch, Andreas and Albayrak, Sahin},
      month = apr,
      year = {2014},
      keywords = {context-aware ensemble, online evaluation, real-time recommendations},
      pages = {51--62}
    }
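
    The delegation idea in the entry above can be sketched as a simple dispatcher that, for each request context, hands the request to the recommender with the best observed performance in that context. The recommenders, the context features and the performance table below are placeholders, not the framework's actual components.

    def most_popular(request):
        return ["article_1", "article_2"]

    def most_recent(request):
        return ["article_9", "article_8"]

    recommenders = {"most_popular": most_popular, "most_recent": most_recent}

    # Estimated click-through rate per (portal, time-of-day) context,
    # e.g. obtained from online evaluation.
    performance = {
        ("sport_portal", "morning"): {"most_popular": 0.031, "most_recent": 0.024},
        ("sport_portal", "evening"): {"most_popular": 0.019, "most_recent": 0.027},
    }

    def recommend(request):
        context = (request["portal"], request["time_of_day"])
        scores = performance.get(context, {})
        best = max(scores, key=scores.get) if scores else "most_popular"  # fallback strategy
        return recommenders[best](request)

    print(recommend({"portal": "sport_portal", "time_of_day": "evening"}))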

  • Y. Shi, A. Karatzoglou, L. Baltrunas, M. Larson, and A. Hanjalic, “CARS2: Learning Context-aware Representations for Context-Aware Recommendations.” 2014.
    [BibTeX] [Abstract] [Download PDF]

    Rich contextual information is typically available in many recommendation domains allowing recommender systems to model the subtle effects of context on preferences. Most contextual models assume that the context shares the same latent space with the users and items. In this work we propose CARS2, a novel approach for learning context-aware representations for context-aware recommendations. We show that the context-aware representations can be learned using an appropriate model that aims to represent the type of interactions between context variables, users and items. We adapt the CARS2 algorithms to explicit feedback data by using a quadratic loss function for rating prediction, and to implicit feedback data by using pairwise and listwise ranking loss functions for top-N recommendations. By using stochastic gradient descent for parameter estimation we ensure scalability. Experimental evaluation shows that our CARS2 models achieve competitive recommendation performance, compared to several state-of-the-art approaches.

    @inproceedings{shi_cars2:_2014,
      title = {{CARS}2: {Learning} {Context}-aware {Representations} for {Context}-{Aware} {Recommendations}},
      url = {http://dl.acm.org/citation.cfm?id=2662070},
      abstract = {Rich contextual information is typically available in many recommendation domains allowing recommender systems to model the subtle effects of context on preferences. Most contextual models assume that the context shares the same latent space with the users and items. In this work we propose CARS2, a novel approach for learning context-aware representations for context-aware recommendations. We show that the context-aware representations can be learned using an appropriate model that aims to represent the type of interactions between context variables, users and items. We adapt the CARS2 algorithms to explicit feedback data by using a quadratic loss function for rating prediction, and to implicit feedback data by using pairwise and listwise ranking loss functions for top-N recommendations. By using stochastic gradient descent for parameter estimation we ensure scalability. Experimental evaluation shows that our CARS2 models achieve competitive recommendation performance, compared to several state-of-the-art approaches.},
      author = {Shi, Yue and Karatzoglou, Alexandros and Baltrunas, Linas and Larson, Martha and Hanjalic, Alan},
      month = jan,
      year = {2014},
      annote = {(to appear)}
    }
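
    The snippet below sketches the generic ingredients named in the abstract above, i.e. a context-aware scoring function trained with a pairwise ranking loss by stochastic gradient descent. It is not the CARS2 model: the context-modulated dot product, the dimensions and the randomly drawn "positive" items are all simplifications made for this page.

    import numpy as np

    rng = np.random.default_rng(1)
    n_users, n_items, n_contexts, d = 5, 10, 3, 4
    U = rng.normal(0.0, 0.1, (n_users, d))     # user factors
    V = rng.normal(0.0, 0.1, (n_items, d))     # item factors
    C = rng.normal(0.0, 0.1, (n_contexts, d))  # context factors

    def score(u, i, c):
        # Simple context-modulated dot product; CARS2 itself uses richer interactions.
        return U[u] @ (V[i] * (1.0 + C[c]))

    lr = 0.05
    for _ in range(1000):
        u = rng.integers(n_users)
        c = rng.integers(n_contexts)
        pos, neg = rng.choice(n_items, size=2, replace=False)  # pretend `pos` was consumed
        x = score(u, pos, c) - score(u, neg, c)
        g = -1.0 / (1.0 + np.exp(x))  # gradient of -log(sigmoid(x)) with respect to x
        d_u = g * (V[pos] - V[neg]) * (1.0 + C[c])
        d_pos = g * U[u] * (1.0 + C[c])
        d_c = g * U[u] * (V[pos] - V[neg])
        U[u] -= lr * d_u
        V[pos] -= lr * d_pos
        V[neg] += lr * d_pos
        C[c] -= lr * d_c

    print("sample score after training:", round(float(score(0, 1, 0)), 3))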

  • B. Loni, Y. Shi, M. Larson, and A. Hanjalic, “Cross-Domain Collaborative Filtering with Factorization Machines,” in Advances in Information Retrieval, M. Rijke, T. Kenter, A. Vries, C. X. Zhai, F. Jong, K. Radinsky, and K. Hofmann, Eds., Springer International Publishing, 2014, vol. 8416, pp. 656-661.
    [BibTeX] [Abstract] [Download PDF]

    Factorization machines offer an advantage over other existing collaborative filtering approaches to recommendation. They make it possible to work with any auxiliary information that can be encoded as a real-valued feature vector as a supplement to the information in the user-item matrix. We build on the assumption that different patterns characterize the way that users interact with (i.e., rate or download) items of a certain type (e.g., movies or books). We view interactions with a specific type of item as constituting a particular domain and allow interaction information from an auxiliary domain to inform recommendation in a target domain. Our proposed approach is tested on a data set from Amazon and compared with a state-of-the-art approach that has been proposed for Cross-Domain Collaborative Filtering. Experimental results demonstrate that our approach, which has a lower computational complexity, is able to achieve performance improvements.

    @incollection{loni_cross-domain_2014,
      series = {Lecture {Notes} in {Computer} {Science}},
      title = {Cross-{Domain} {Collaborative} {Filtering} with {Factorization} {Machines}},
      volume = {8416},
      isbn = {978-3-319-06027-9},
      url = {http://prlab.tudelft.nl/sites/default/files/Cross-Domain%20Collabotive%20Filtering%20with%20Factorization%20Machines.pdf},
      abstract = {Factorization machines offer an advantage over other existing collaborative filtering approaches to recommendation. They make it possible to work with any auxiliary information that can be encoded as a real-valued feature vector as a supplement to the information in the user-item matrix. We build on the assumption that different patterns characterize the way that users interact with (i.e., rate or download) items of a certain type (e.g., movies or books). We view interactions with a specific type of item as constituting a particular domain and allow interaction information from an auxiliary domain to inform recommendation in a target domain. Our proposed approach is tested on a data set from Amazon and compared with a state-of-the-art approach that has been proposed for Cross-Domain Collaborative Filtering. Experimental results demonstrate that our approach, which has a lower computational complexity, is able to achieve performance improvements.},
      booktitle = {Advances in {Information} {Retrieval}},
      publisher = {Springer International Publishing},
      author = {Loni, Babak and Shi, Yue and Larson, Martha and Hanjalic, Alan},
      editor = {Rijke, Maarten and Kenter, Tom and Vries, ArjenP. and Zhai, Cheng Xiang and Jong, Franciska and Radinsky, Kira and Hofmann, Katja},
      year = {2014},
      pages = {656--661}
    }
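
    The core encoding trick described above can be sketched as follows: build the factorization-machine input vector from a one-hot user block, a one-hot target-domain item block, and normalised indicators for the items the same user interacted with in an auxiliary domain. The domains, users and interactions below are invented, and the encoding is only meant to mirror the general idea, not the paper's exact scheme.

    import numpy as np

    users = ["alice", "bob"]
    movies = ["movie_1", "movie_2"]          # target domain
    books = ["book_1", "book_2", "book_3"]   # auxiliary domain

    aux_interactions = {"alice": ["book_1", "book_3"], "bob": ["book_2"]}

    def fm_feature_vector(user, movie):
        x = np.zeros(len(users) + len(movies) + len(books))
        x[users.index(user)] = 1.0                                  # user block
        x[len(users) + movies.index(movie)] = 1.0                   # target-item block
        aux = aux_interactions.get(user, [])
        for book in aux:                                            # auxiliary-domain block
            x[len(users) + len(movies) + books.index(book)] = 1.0 / len(aux)
        return x

    print(fm_feature_vector("alice", "movie_2"))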

  • M. Larson, M. Melenhorst, M. Menéndez, and P. Xu, “Using Crowdsourcing to Capture Complexity in Human Interpretations of Multimedia Content,” in Fusion in Computer Vision, 2014, pp. 229-269.
    [BibTeX] [Abstract] [Download PDF]

    Large-scale crowdsourcing platforms are a key tool allowing researchers in the area of multimedia content analysis to gain insight into how users interpret social multimedia. The goal of this article is to support this process in a practical manner that opens the path for productive exploitation of complex human interpretations of multimedia content within multimedia systems. We first discuss in detail the nature of complexity in human interpretations of multimedia, and why we, as researchers, should look outward to the crowd, rather than inward to ourselves, to determine what users consider important about the content of images and videos. Then, we present strategies and insights from our own experience in designing tasks for crowdworkers. Our techniques are useful to researchers interested in eliciting information about the elements and aspects of multimedia that are important in the contexts in which humans use social multimedia.

    @inproceedings{larson_using_2014,
      title = {Using {Crowdsourcing} to {Capture} {Complexity} in {Human} {Interpretations} of {Multimedia} {Content}.},
      url = {http://link.springer.com/chapter/10.1007/978-3-319-05696-8_10#page-1},
      abstract = {Large-scale crowdsourcing platforms are a key tool allowing researchers in the area of multimedia content analysis to gain insight into how users interpret social multimedia. The goal of this article is to support this process in a practical manner that opens the path for productive exploitation of complex human interpretations of multimedia content within multimedia systems. We first discuss in detail the nature of complexity in human interpretations of multimedia, and why we, as researchers, should look outward to the crowd, rather than inward to ourselves, to determine what users consider important about the content of images and videos. Then, we present strategies and insights from our own experience in designing tasks for crowdworkers. Our techniques are useful to researchers interested in eliciting information about the elements and aspects of multimedia that are important in the contexts in which humans use social multimedia.},
      booktitle = {Fusion in {Computer} {Vision}},
      publisher = {Springer International Publishing},
      author = {Larson, M. and Melenhorst, M. and Menéndez, M. and Xu, P.},
      year = {2014},
      pages = {229--269}
    }

  • A. Said and A. Bellogín, “RiVal: A New Benchmarking Toolkit for Recommender Systems,” ERCIM News, vol. 2014, iss. 99, 2014.
    [BibTeX] [Abstract] [Download PDF]

    RiVal is a newly released toolkit, developed during two ERCIM fellowships at Centrum Wiskunde & Informatica (CWI), for transparent and objective benchmarking of recommender systems software such as Apache Mahout, LensKit and MyMediaLite. This will ensure that robust and comparable assessments of their recommendation quality can be made.

    @article{said_rival:_2014,
      title = {{RiVal}: {A} {New} {Benchmarking} {Toolkit} for {Recommender} {Systems}},
      volume = {2014},
      url = {http://ercim-news.ercim.eu/en99/special/rival-a-new-benchmarking-toolkit-for-recommender-systems},
      abstract = {RiVal is a newly released toolkit, developed during two ERCIM fellowships at Centrum Wiskunde \& Informatica (CWI), for transparent and objective benchmarking of recommender systems software such as Apache Mahout, LensKit and MyMediaLite. This will ensure that robust and comparable assessments of their recommendation quality can be made.},
      number = {99},
      journal = {ERCIM News},
      author = {Said, Alan and Bellogín, Alejandro},
      year = {2014}
    }

  • M. Meder, T. Plumbaum, and F. Hopfgartner, “Perceived and Actual Role of Gamification Principles,” in UCC’13: Proceedings of the IEEE/ACM 6th International Conference on Utility and Cloud Computing, Dresden, Germany, 2013, pp. 488-493.
    [BibTeX] [Abstract] [Download PDF]

    Although gamification has successfully been applied in office scenarios, it remains unclear how employees really feel about the introduction of a gamified system at their workplace. In this paper, we address this issue from two directions. First, we present the outcome of an online survey where we analyze users’ opinion about gamification in a workplace environment. Then, we analyze the interaction logs of a re-designed gamified enterprise bookmarking system to compare the employees’ subjective perception of gamification with their actual behavior when using a gamified system. Results indicate that there is a strong relationship between employees’ perception of gamification and their actual interaction with such a system.

    @inproceedings{meder_perceived_2013,
      address = {Dresden, Germany},
      title = {Perceived and {Actual} {Role} of {Gamification} {Principles}},
      url = {http://dx.doi.org/10.1109/UCC.2013.95},
      abstract = {Although gamification has successfully been applied in office scenarios, it remains unclear how employees really feel about the introduction of a gamified system at their workplace. In this paper, we address this issue from two directions. First, we present the outcome of an online survey where we analyze users' opinion about gamification in a workplace environment. Then, we analyze the interaction logs of a re-designed gamified enterprise bookmarking system to compare the employees' subjective perception of gamification with their actual behavior when using a gamified system. Results indicate that there is a strong relationship between employees' perception of gamification and their actual interaction with such a system.},
      booktitle = {{UCC}'13: {Proceedings} of the {IEEE}/{ACM} 6th {International} {Conference} on {Utility} and {Cloud} {Computing}},
      publisher = {IEEE},
      author = {Meder, Michael and Plumbaum, Till and Hopfgartner, Frank},
      month = dec,
      year = {2013},
      pages = {488--493}
    }

  • M. Larson, A. Said, Y. Shi, P. Cremonesi, D. Tikk, A. Karatzoglou, L. Baltrunas, J. Geurts, X. Anguera, and F. Hopfgartner, “Activating the Crowd: Exploiting User-Item Reciprocity for Recommendation,” in CrowdRec Workshop, Hong Kong, China, 2013.
    [BibTeX] [Abstract] [Download PDF]

    Recommender systems have always faced the problem of sparse data. In the current era, however, with its demand for highly personalized, real-time, context-aware recommendation, the sparse data problem only threatens to grow worse. Crowdsourcing, specifically, outsourcing micro-requests for information to the crowd, opens new possibilities to fight the sparse data challenge. In this paper, we lay out a vision for recommender systems that, instead of consulting an external crowd, rely on their own user base to actively supply the rich information needed to improve recommendations. We propose that recommender systems should create and exploit reciprocity between users and items. Specifically, recommender systems should not only recommend items for users (who would like to watch or buy them), but also recommend users for items (that need additional information in order that they can be better recommended by the system). Reciprocal recommendations provide a gentle incentivization that can be deployed non-invasively, yet is powerful enough to promote a productive symbiosis between users and items. By exploiting reciprocity, recommender systems can “look inwards” and activate their own user base to contribute the information needed to improve recommendations for the entire user community.

    @inproceedings{larson_activating_2013,
      address = {Hong Kong, China},
      title = {Activating the {Crowd}: {Exploiting} {User}-{Item} {Reciprocity} for {Recommendation}},
      url = {http://crowdrec2013.noahlab.com.hk/papers/crowdrec2013_Larson.pdf},
      abstract = {Recommender systems have always faced the problem of sparse data. In the current era, however, with its demand for highly personalized, real-time, context-aware recommendation, the sparse data problem only threatens to grow worse. Crowdsourcing, specifically, outsourcing micro-requests for information to the crowd, opens new possibilities to fight the sparse data challenge. In this paper, we lay out a vision for recommender systems that, instead of consulting an external crowd, rely on their own user base to actively supply the rich information needed to improve recommendations. We propose that recommender systems should create and exploit reciprocity between users and items. Specifically, recommender systems should not only recommend items for users (who would like to watch or buy them), but also recommend users for items (that need additional information in order that they can be better recommended by the system). Reciprocal recommendations provide a gentle incentivization that can be deployed non-invasively, yet is powerful enough to promote a productive symbiosis between users and items. By exploiting reciprocity, recommender systems can “look inwards” and activate their own user base to contribute the information needed to improve recommendations for the entire user community.},
      booktitle = {{CrowdRec} {Workshop}},
      author = {Larson, Martha and Said, Alan and Shi, Yue and Cremonesi, Paolo and Tikk, Domonkos and Karatzoglou, Alexandros and Baltrunas, Linas and Geurts, Joost and Anguera, Xavier and Hopfgartner, Frank},
      month = oct,
      year = {2013}
    }