The Third International Workshop on Recommendation Systems for Software Engineering was one of the most exciting events co-located with ICSE’12 – at least in my personal experience. I liked the discussions in this workshop, especially the fundamental question:
How can we measure whether a recommendation system actually improves the recommendees’ lives?
From the discussion I got the impression that the software engineering community tries to improve the lives of less experienced software engineering professionals. In this case, I think it is important to cover the learning experience – a less experienced professional learns from the recommendations (s)he receives.
Another important question, raised by Walid, was:
What is the next recommendation system?
I would summarize the input from the audience as follows:
- It brings together end-users and developers
- It focuses on requirements
- It focuses on security and architecture
There was quite some work on using recommendations from Stack Overflow. For example, Alberto Bacchelli, Luca Ponzanelli and Michele Lanza use the imports of a class to derive the context, and the code to refine this context. The context is then used to find relevant content on Stack Overflow and to display it in the IDE. Apparently, this approach performs badly on bad code. I especially liked a suggestion from the discussion to feed back to Stack Overflow which suggestions were useful or actually used.
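The core idea of deriving a query context from a class's imports can be sketched in a few lines. This is only my own illustration, not the authors' implementation: the function names, the regular expression, and the plain space-joined query are assumptions.

```python
import re

def derive_context(java_source):
    """Collect the imported type names of a Java class as a rough
    description of its technical context."""
    imports = re.findall(r"^import\s+([\w.]+);", java_source, re.MULTILINE)
    # Keep only the simple type name, e.g. java.sql.Connection -> Connection
    return [imp.split(".")[-1] for imp in imports]

def build_query(context):
    """Turn the context terms into a plain-text search query."""
    return " ".join(context)

source = """
import java.sql.Connection;
import java.sql.DriverManager;

public class Db {}
"""
print(build_query(derive_context(source)))  # Connection DriverManager
```

In the actual approach, the resulting query would be refined with terms from the code itself and sent to Stack Overflow, with the matching discussions shown inside the IDE.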
Giuseppe Valetto presented on Actionable Identification of Emergent Teams. These emergent teams are visualized as graphs, where the team members are the nodes. An edge is included when two team members work on similar tasks or artifacts. Looking at the resulting social networks, the audience asked about the roles of the team members, and Peppo was able to point out the architect and an intern. Thinking about this, I wonder: in an emergent team, shouldn’t the roles be emergent, too? Especially when seeing the central role of both the intern and the architect, who both seem to have an overview of the whole project, compared to the other team members, who seem to be specialists.
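The graph construction described above is easy to sketch. This is a minimal illustration with entirely made-up edit data, not the presented tool: I simply link two members whenever they touched a shared artifact, and use the node degree as a crude hint at centrality.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical log of who touched which artifacts; all names are invented.
edits = {
    "architect": {"core.api", "core.model", "ui.view", "db.schema"},
    "intern":    {"core.api", "ui.view", "db.schema", "build.xml"},
    "dev_a":     {"core.model"},
    "dev_b":     {"ui.view"},
}

# Build the emergent-team graph: an edge links two members
# who worked on at least one common artifact.
graph = defaultdict(set)
for a, b in combinations(edits, 2):
    if edits[a] & edits[b]:
        graph[a].add(b)
        graph[b].add(a)

# Degree hints at roles: members connected to many others sit at the centre.
for member in sorted(graph, key=lambda m: -len(graph[m])):
    print(member, len(graph[member]))
```

In this toy data the "architect" and "intern" end up with the highest degrees, mirroring the observation from the talk that the generalists sit at the centre of the network while the specialists stay at the periphery.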
The final discussion started to highlight the research challenges of the field. Self-assessment of recommendation systems and privacy issues were mentioned. Again, one of the pressing questions was:
How much precision is enough?
I had a wonderful discussion with Alexander Felfernig on this matter. According to him, the general recommender systems community has stopped using these metrics (i.e. precision and recall), because the recommendations influence the behaviour of the recommendees. As a result, precision becomes rather meaningless. An interesting thought – but how can we measure the performance of a recommendation system otherwise?
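To make concrete what is being abandoned here, the classic offline definitions are: precision is the fraction of recommended items that were relevant, and recall is the fraction of relevant items that were recommended. A minimal sketch (the function and the example data are my own):

```python
def precision_recall(recommended, relevant):
    """Offline metrics: how many recommendations were relevant (precision)
    and how many relevant items were recommended (recall)."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Three of four recommendations hit. What the user did *after* seeing
# them is invisible to the metric -- which is exactly the criticism.
p, r = precision_recall(["a", "b", "c", "d"], ["a", "b", "c", "e", "f"])
print(p, r)  # 0.75 0.6
```

The numbers look tidy, but they assume a fixed ground truth of "relevant" items; once the recommendations themselves change what the recommendee considers relevant, that assumption breaks down.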
Finally, we started to discuss how to define a recommender system. Apparently, it is not easy to distinguish between search engines and recommender systems. One proposal from the audience was that recommender systems push their information, even if the recommendee is not aware of an information need, whereas search engines let users pull information. But then, of course, the ranking of search results is a recommendation. The definition by Robillard, Walker, and Zimmermann from 2009 seems to work fine for the software engineering community:
A recommendation system for software engineering is a software application that provides information items estimated to be valuable for a software engineering task in a given context.
I guess ending with more questions than answers is a good result for a workshop. I enjoyed the discussions and the very creative and inspiring presentations and posters, and I look forward to meeting this community again!