MSR’12 Summary

This is a summary of my impressions from the 9th Working Conference on Mining Software Repositories. Note that I focus on sharing my opinions and impressions, which naturally reflect my personal perspective.

MSR (Mining Software Repositories)

Session 1

Rafael Lotufo, Leonardo Passos, and Krzysztof Czarnecki discussed how game mechanisms can improve bug tracking systems. For their research they mined Stack Overflow, because it is easy to access. To me, it seems a stretch to call Stack Overflow a bug tracking system, but I can see the similarities: a question can be regarded as a bug that needs to be solved, and the answers as part of its solution. For me, the takeaway message of this talk was: rewards work quite well if they have value for the users (reputation and privileges, e.g. moderation rights on Stack Overflow).

Tse-Hsun Chen, Stephen W. Thomas, Meiyappan Nagappan, and Ahmed E. Hassan work on explaining bugs based on topic analysis. One of their findings is that only a few topics are error-prone. This can be used to explain defects, e.g. files that cover more topics tend to contain more defects.

Iman Keivanloo presented work about SeCold, a platform for sharing data within the MSR community.

Christian Rodriguez reported work on the impact of distributed version control systems on open source projects. According to this work, DVCS lower the barrier for external/new/non-core contributors. At the same time, contributions by the core team decrease, presumably because the core team needs to deal with more organizational tasks (e.g. merging).

Discussion of Session 1: The discussion started on the topic of data sharing and its impact on publications. The dangers of gamification were also discussed. The audience asked whether it might be better to focus on education instead of motivation to create better bug reports. Apparently, this decision depends on the context, e.g. whether the actors are in companies or in the wild. There was a complaint about the MSR vision being unclear: the conference (and Session 1 in particular) is focused on research methodology, not on a specific research topic. Except for infrastructure concerns (e.g. sharing data, exploring new data sources), there is no apparent common problem. A related discussion thread expressed the fear that metrics proposed at this conference focus on things that are easy to measure, but fail to establish what those measurements actually mean.

Keynote: Margaret-Anne Storey

Margaret-Anne Storey's keynote was praised in various tweets as well as during the coffee breaks throughout the ICSE week, and I agree: she did a great job presenting the topic, focusing on the field rather than on her own group's work. I was especially interested in her discussion of the dangers of social media:

  • Lobbying instead of stakeholding for requirements
  • Spaghetti code, bad integrations
  • Reputation in social media being more important than degree

Session V

Vladimir Vilkov talked about MIC Check: the Maximal Information Coefficient (from Reshef et al.) is a statistic that makes it possible to detect non-functional and non-linear relationships between variables, which classical linear correlation measures miss.
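To illustrate the idea (this is my own toy sketch, not the authors' implementation and only a crude approximation of the real MIC estimator), one can maximize grid-normalized mutual information over a range of grid sizes. A symmetric parabola has zero Pearson correlation, yet a sketch like this still assigns it a clearly non-zero score:

```python
import math

def mutual_information(xs, ys, nx, ny):
    """Plug-in estimate of I(X;Y) in bits from an nx-by-ny equal-width grid."""
    n = len(xs)
    def bin_index(v, lo, hi, k):
        if hi == lo:
            return 0
        return min(int((v - lo) / (hi - lo) * k), k - 1)
    lox, hix = min(xs), max(xs)
    loy, hiy = min(ys), max(ys)
    joint = {}
    px = [0] * nx
    py = [0] * ny
    for x, y in zip(xs, ys):
        i = bin_index(x, lox, hix, nx)
        j = bin_index(y, loy, hiy, ny)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    mi = 0.0
    for (i, j), c in joint.items():
        pxy = c / n
        mi += pxy * math.log(pxy / ((px[i] / n) * (py[j] / n)), 2)
    return mi

def mic_sketch(xs, ys, max_bins=8):
    """Crude MIC-style score: max over grid sizes of MI / log2(min(nx, ny))."""
    best = 0.0
    for nx in range(2, max_bins + 1):
        for ny in range(2, max_bins + 1):
            score = mutual_information(xs, ys, nx, ny) / math.log(min(nx, ny), 2)
            best = max(best, score)
    return best

# y = x^2 on a symmetric sample: Pearson correlation is exactly 0,
# but the relationship is clearly visible to a grid-based score.
xs = [i / 50 - 1 for i in range(101)]   # x in [-1, 1]
ys = [x * x for x in xs]
print(mic_sketch(xs, ys))               # well above the ~0 of linear correlation
```

The real estimator searches grids far more cleverly and bounds the grid resolution by the sample size; this sketch only conveys the normalization-over-grids intuition.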

Bram Adams presented a qualitative study about a particular kind of bug: performance bugs. One of their characteristics is that they often receive a "works for me" after a long time. This happens when other bug fixes accidentally solve the problem: performance bugs are hard to trace. I found it interesting that performance bugs tend to attract more discussion than other bugs, because this relates to one of the many open questions in our clarification pattern research theme: what are the difficult requirement types?

Session VI

Yossi Gil, Maayan Goldstein, and Dany Moshkovich discussed how software properties change over time. For example, they observed fewer changes in boolean metrics before major releases, but more changes in numerical metrics. They also observed that software metrics seem to be robust: most metrics hardly change between different versions of a software system.

Christian Bird presented work about distributed development. I found it interesting that Microsoft Research was presenting about their competitors' projects. Christian explained that this work continues the work presented at ICSE'09 in Vancouver. He observed that 90% of the commits to the Eclipse project originate from IBM (UVic is among the top-10 contributors, though). Most Eclipse plugins are developed by collocated teams; the others have about 50% more bugs. I also found it noteworthy that Christian was presenting with a MacBook Pro running Windows 7.

Acknowledgements

Special thanks go to Adrian Schroeter for helping me get started with this blogging business. I also want to thank the organizers of ICSE'12 for creating a perfect environment for scientific exchange in Zurich!
