
PROMISE 2012: Predictive Models in Software Engineering
This international conference seeks repeatable methods for building verifiable models that are useful for the implementation, evaluation, and management of software development projects, both in general and in specific domains such as telecom, finance, and scientific applications.
ACCEPTED PAPERS:
- Leandro Minku and Xin Yao. Can Cross-company Data Improve Performance in Software Effort Estimation?
- Damir Azhar, Emilia Mendes and Patricia Riddle. A Systematic Review of Web Resource Estimation
- Ekrem Kocaguneli, Tim Menzies, Jairus Hihn and Byeong Ho Kang. Size Doesn’t Matter? On the Value of Software Size Features for Effort Estimation
- Nikolaos Mittas, Ioannis Mamalikidis and Lefteris Angelis. StatREC: A Graphical User Interface Tool for Visual Hypothesis Testing of Cost Prediction Models
- Makrina-Viola Kosti, Nikolaos Mittas and Lefteris Angelis. Alternative methods using similarities in software effort estimation
- Raymond Borges and Tim Menzies. Learning to Change Projects
- Filomena Ferrucci, Emilia Mendes and Federica Sarro. Web Effort Estimation: the Value of Cross-company Data Set Compared to Single-company Data Set
- Huihua Lu and Bojan Cukic. An Adaptive Approach with Active Learning in Software Fault Prediction
- Tilmann Hampp. A Cost-Benefit Model for Software Quality Assurance Activities
- David Bowes, Tracy Hall and David Gray. Comparing the performance of fault prediction models which report multiple performance measures: reconstructing the confusion matrix
- Bora Çağlayan, Ayse Tosun Misirli, Andriy Miranskyy, Burak Turhan and Ayse Bener. Factors Characterizing Reopened Issues: A Case Study
- Xihao Xie, Wen Zhang, Ye Yang and Qing Wang. DRETOM: Developer Recommendation based on Topic Models for Bug Resolution
General Statistics
- Submissions: 24
- Accepted: 12
- Acceptance rate: 0.50
- Reviews: 72
- External reviews: 1
- Reviews per paper: 3
TOPICS: Our topics of interest include, but are not limited to:
- Effort prediction models
- Defect prediction models
- Meta-analysis and generalizations of predictive models exploring certain questions
- Replicated studies
- Predicting intermediate or final outcomes of interest regarding the business, team, people, process, and organizational aspects of software engineering
- Privacy and ethical issues in sharing and modeling
- Qualitative research guiding and informing the process of building future predictive models
- Instance-based models predicting outcomes by examining similarities to past experiences
- Industrial experience reports detailing the application of software technologies (processes, methods, or tools) and their effectiveness in industrial settings.
- Tools for software researchers that effectively gather and analyze data to support reproducible and verifiable research.
THEME: The theme of PROMISE’12 is the next generation of empirical SE (“next-gen”). While we encourage submission of traditional-style PROMISE papers, we also seek next-gen papers that extend this area in significant new directions (see the notes below).
KEYNOTE SPEAKERS:
Martin Shepperd: "The scientific basis for prediction research":
In recent years there has been a huge growth in using statistical and machine learning methods to find useful prediction systems for software engineers. Of particular interest is predicting project effort and duration and defect behaviour. Unfortunately, though results are often promising, no single technique dominates, and there are clearly complex interactions between technique, training methods, and the problem domain. Since we lack deep theory, our research is of necessity experimental. Minimally, as scientists, we need reproducible studies. We also need comparable studies. I will show, through a meta-analysis of many primary studies, that we are not presently in that situation, and so the scientific basis for our collective research remains in doubt. By way of remedy, I will argue that we need to address issues of expertise and reporting protocols, and ensure that blind analysis is routine.
Sung Kim: "Defect, Defect, Defect: Defect Prediction 2.0":
Software prediction leveraging repositories has received a tremendous amount of attention within the software engineering community, including PROMISE. In this talk, I will first present major achievements in defect prediction research, including new defect prediction features, promising algorithms, and interesting analysis results. However, many challenges in defect prediction remain. I will discuss these challenges and potential solutions to them, leveraging defect prediction 2.0.
Data
PROMISE 2012 will give the highest priority to empirical studies based on publicly available datasets. Conference attendees are therefore encouraged, though not required, to contribute the data used in their analyses to the on-line PROMISE data repository. The repository currently holds 142 data sets, which can be used to repeat/confirm/refute/improve previous results.
Important Dates
Abstracts due: March 26, 2012
Paper submission: April 16, 2012 (extended from April 2)
Notification: June 11, 2012 (extended from May 14)
Camera-ready papers: July 9, 2012 (extended from June 11)
Kinds of Papers
This conference encourages both standard papers and next-gen papers; note that only next-gen papers may be submitted for consideration to the special journal issue associated with this conference.
Standard papers focus on prediction systems; e.g. L learners applied to D data sets in some M*N cross-validation (a minimal sketch of such a setup appears after the list below). For excellent examples of L*D*M*N studies, see the TSE pre-prints of the papers by Hall et al. at http://goo.gl/XRWuk (for defect prediction) and Dejaeger et al. at http://goo.gl/UNO4E (for effort prediction). For such standard papers, we strongly discourage results based on:
- just a few data sets in domains where many data sets are available in the PROMISE repository;
- tiny effect sizes: e.g. an MMRE improvement of 10% in data sets where the MMRE can range up to 10,000%;
- the “broken” PROMISE data sets (see comments at http://promisedata.org/?p=30).
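To make the L*D*M*N terminology concrete, the following is a minimal sketch of an M*N cross-validation loop evaluating a single learner on a single effort data set, scored with MMRE (the mean magnitude of relative error, mean(|actual - predicted| / actual)). The learner, data, and parameter choices here are illustrative assumptions, not requirements of this call; a full L*D*M*N study would repeat this loop for each of L learners over D data sets.

```python
# Minimal M*N cross-validation sketch: N-fold cross-validation repeated
# M times with different shuffles, scored with MMRE (lower is better).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def mmre(actual, predicted):
    """Mean magnitude of relative error: mean(|actual - predicted| / actual)."""
    return np.mean(np.abs(actual - predicted) / actual)

def m_by_n_cv(X, y, M=10, N=3):
    """Return the MMRE of every hold-out fold across M repeats of N-fold CV."""
    scores = []
    for m in range(M):
        folds = KFold(n_splits=N, shuffle=True, random_state=m)
        for train, test in folds.split(X):
            model = LinearRegression().fit(X[train], y[train])
            scores.append(mmre(y[test], model.predict(X[test])))
    return np.array(scores)

# Hypothetical stand-in for an effort data set (features plus effort values);
# in practice, load one of the PROMISE repository data sets instead.
rng = np.random.default_rng(0)
X = rng.lognormal(size=(60, 3))
y = 2.0 * X[:, 0] + rng.lognormal(size=60)
scores = m_by_n_cv(X, y, M=10, N=3)
print(f"MMRE over {scores.size} folds: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With M=10 and N=3 this yields 30 hold-out evaluations per learner per data set, which is what makes comparisons across the L learners and D data sets of an L*D*M*N study statistically meaningful.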
Next-gen papers focus on all the issues that surround predictive models. For discussions of next-generation predictive modeling, see (a) the ICSE’11 tutorial on Empirical SE, version 2.0 at http://goo.gl/MWzlq; or (b) the “Special Issue Notes” at http://goo.gl/b3E05. Issues relevant to next-gen papers include, but are not restricted to, the following:
- Before a predictive model is built:
- Privacy concerns of individuals and corporations must be addressed.
- Training data quality must be assessed; see http://goo.gl/QE5au.
- When building a predictive model:
- It is important that the tools are run correctly, as discussed at http://goo.gl/qtc9o.
- After the predictors are built:
- Prediction systems could be used to support project managers' decision making (e.g. as done in http://goo.gl/AIqC4 or http://goo.gl/y7Agm).
Submission
Submissions must be original work, not published or under review elsewhere.
Submissions must conform to the ACM SIG proceedings templates from http://goo.gl/wE1k.
Papers must not exceed 10 pages (including references).
Papers should be submitted via EasyChair: http://www.easychair.org/conferences/?conf=promise2012.
Accepted papers will be published in the ACM digital library.

Special Issue
Papers accepted to PROMISE’12 may also be submitted to a forthcoming special journal issue on “Empirical Software Engineering, version 2.0”.
Authors with good reviews from PROMISE’12 are strongly encouraged to submit to this special issue, since several of the PROMISE’12 reviewers will also review papers for the issue.
All submissions to the special issue must include a section called “Empirical SE, V2.0” that discusses next-gen issues, i.e. how the work fits into the broader picture beyond just building a predictor (see the notes above).
Venue for Special Issue
The venue for that special issue is TBD.
Previous PROMISE special issues have appeared in IEEE Software, the Empirical Software Engineering journal, and the Information and Software Technology journal.
Programme Committee
- Lefteris Angelis Aristotle University of Thessaloniki
- Ayse Bener Ryerson University
- David Bowes University of Hertfordshire
- Daniela da Cruz University of Minho
- Bojan Cukic West Virginia University
- Bernd Fischer University of Southampton
- Harald Gall University of Zürich
- Dragan Gašević Athabasca University
- Greg Gay University of Minnesota
- Tracy Hall Brunel University
- Mark Harman University College London
- Rachel Harrison Oxford Brookes University
- Jacky Keung Hong Kong Polytechnic University
- Rainer Koschke University of Bremen
- Ken-ichi Matsumoto Nara Institute of Science and Technology
- Thilo Mende University of Bremen
- Tim Menzies West Virginia University
- Leandro Minku University of Birmingham
- Sandro Morasca University of Insubria
- Tom Ostrand AT&T
- Massimiliano di Penta University of Sannio
- Daniel Rodriguez University of Alcalá
- Alessandra Russo Imperial College London
- Alessandro Sarcia University of Rome
- Martin Shepperd Brunel University
- Burak Turhan University of Oulu
- Stefan Wagner University of Stuttgart
- Laurie Williams North Carolina State University
- Ye Yang Chinese Academy of Sciences
- Du Zhang California State University, Sacramento
- Hongyu Zhang Tsinghua University
- Tom Zimmermann Microsoft
Program Committee Chair
- Stefan Wagner, U Stuttgart, Germany
Special Issue Editor
- Tim Menzies, West Virginia University
Local Organiser
- Du Zhang, Sacramento State, USA
Publicity Chair
- Burak Turhan, U Oulu, Finland
Proceedings
- Ye Yang, Chinese Academy of Sciences
Webmaster
- Tim Menzies, West Virginia University
Steering Committee
- Ayse Bener, Ryerson U, Canada
- Tim Menzies, West Virginia University, USA (chair)
- Burak Turhan, U Oulu, Finland
- Stefan Wagner, U Stuttgart, Germany
- Ye Yang, Chinese Academy of Sciences, China
- Du Zhang, Sacramento State, USA