PROMISE is an annual forum for researchers and practitioners to present, discuss, and exchange ideas, results, expertise, and experiences in the construction and/or application of predictive models and data analytics in software engineering. PROMISE encourages researchers to publicly share their data to foster interdisciplinary research between the software engineering and data mining communities, and seeks verifiable and repeatable experiments that are useful in practice.

Please see the ESEIW website for venue, registration, and visa information.

Program

[08.00-08.30] Registration
[08.30-10.00] Opening and Keynote
Welcome from Chairs
Keynote: Prof. Tracy Hall, Lancaster University: "Getting Defect Prediction into Industrial Practice: Just a Dream?"
[10.00-10.30] Coffee Break
[10.30-12.00] Session I: Defects
Rudolf Ferenc, Zoltán Tóth, Gergely Ladányi, Istvan Siket and Tibor Gyimóthy. A Public Unified Bug Dataset for Java
Tapajit Dey and Audris Mockus. Modeling Relationship between Post-Release Faults and Usage in Mobile Software
Sousuke Amasaki. Cross-Version Defect Prediction using Cross-Project Defect Prediction Approaches: Does it work?
Giuseppe Destefanis, Saed Qaderi, David Bowes, Jean Petrić and Marco Ortu. A Longitudinal Study of Anti Micro Patterns in 113 versions of Tomcat
[12.00-13.30] Lunch Break
[13.30-15.00] Session II: Testing and Decision Making
Jean Petric, Tracy Hall and David Bowes. How Effectively Is Defective Code Actually Tested? An Analysis of JUnit Tests in Seven Open Source Systems
Francis Palma, Tamer Abdou, John Maidens, Ayse Bener and Stella Liu. An Improvement to Test Case Failure Prediction in the Context of Test Case Prioritization
Fernando López de la Mora and Sarah Nadi. An Empirical Study of Using Metric-based Comparisons to Select Libraries
Martí Manzano, Emilia Mendes, Cristina Gómez, Claudia P. Ayala and Xavier Franch. Using Bayesian Networks to estimate Strategic Indicators in the context of Rapid Software Development
[15.00-15.30] Coffee Break
[15.30-16.40] Session III: Communication and Popularity
Marco Ortu, Tracy Hall, David Bowes, Giuseppe Destefanis, Michele Marchesi and Roberto Tonelli. Mining Communication Patterns in Software Development: A GitHub Analysis
Mohammad Alahmadi, Jonathan Hassel, Biswas Parajuli, Sonia Haiduc and Piyush Kumar. Accurately Predicting the Location of Code Fragments in Programming Video Tutorials Using Deep Learning
Tapajit Dey and Audris Mockus. Are Software Dependency Supply Chain Metrics Useful in Predicting Change of Popularity of NPM Packages?
[16.40-17.00] Closing

Keynote: "Getting Defect Prediction into Industrial Practice: Just a Dream?"

by Prof. Tracy Hall, Lancaster University, UK

Abstract: Despite many researchers over many years developing increasingly sophisticated defect prediction models, there is little evidence that these models are being used by industry. This talk explores the possible reasons for this lack of industrial impact. The challenges practitioners face in building reliable defect prediction models are discussed, as well as the commercial overhead of training models on high-quality data. The talk goes on to consider whether the kinds of problems addressed by defect prediction researchers are high on industry's agenda. Experiences with industrial collaborators are presented and ways forward for defect prediction are explored.

Bio: Tracy Hall is Professor of Software Engineering at Lancaster University in the UK. She was previously Professor of Software Engineering at Brunel University London and Head of Brunel's Computer Science Department. Over the last 20 years Professor Hall has conducted many empirical software engineering studies with a variety of industrial collaborators. Her current research activities focus on software defect prediction and the development of tools for use by software engineers. She has published over 100 international peer-reviewed journal and conference papers and has won numerous best paper awards. Professor Hall has been Principal Investigator on a variety of funded research projects. She is Associate Editor of the Information and Software Technology journal and the Software Quality Journal. She has contributed to a range of conference organising committees and is a long-standing member of many international conference programme committees.


Accepted Papers

  • Mohammad Alahmadi, Jonathan Hassel, Biswas Parajuli, Sonia Haiduc and Piyush Kumar. Accurately Predicting the Location of Code Fragments in Programming Video Tutorials Using Deep Learning
  • Rudolf Ferenc, Zoltán Tóth, Gergely Ladányi, Istvan Siket and Tibor Gyimóthy. A Public Unified Bug Dataset for Java
  • Fernando López de la Mora and Sarah Nadi. An Empirical Study of Using Metric-based Comparisons to Select Libraries
  • Sousuke Amasaki. Cross-Version Defect Prediction using Cross-Project Defect Prediction Approaches: Does it work?
  • Jean Petric, Tracy Hall and David Bowes. How Effectively Is Defective Code Actually Tested? An Analysis of JUnit Tests in Seven Open Source Systems
  • Martí Manzano, Emilia Mendes, Cristina Gómez, Claudia P. Ayala and Xavier Franch. Using Bayesian Networks to estimate Strategic Indicators in the context of Rapid Software Development
  • Tapajit Dey and Audris Mockus. Modeling Relationship between Post-Release Faults and Usage in Mobile Software
  • Tapajit Dey and Audris Mockus. Are Software Dependency Supply Chain Metrics Useful in Predicting Change of Popularity of NPM Packages?
  • Marco Ortu, Tracy Hall, David Bowes, Giuseppe Destefanis, Michele Marchesi and Roberto Tonelli. Mining Communication Patterns in Software Development: A GitHub Analysis
  • Francis Palma, Tamer Abdou, John Maidens, Ayse Bener and Stella Liu. An Improvement to Test Case Failure Prediction in the Context of Test Case Prioritization
  • Giuseppe Destefanis, Saed Qaderi, David Bowes, Jean Petrić and Marco Ortu. A Longitudinal Study of Anti Micro Patterns in 113 versions of Tomcat

Topics of Interest

Application oriented:

  • prediction of cost, effort, quality, defects, business value;
  • quantification and prediction of other intermediate or final properties of interest in software development regarding people, process or product aspects;
  • using predictive models and data analytics in different settings, e.g. lean/agile, waterfall, distributed, community-based software development;
  • dealing with changing environments in software engineering tasks;
  • dealing with multiple-objectives in software engineering tasks;
  • using predictive models and software data analytics in policy and decision-making.

Theory oriented:

  • model construction, evaluation, sharing and reusability;
  • interdisciplinary and novel approaches to predictive modelling and data analytics that contribute to the theoretical body of knowledge in software engineering;
  • verifying/refuting/challenging previous theory and results;
  • combinations of predictive models and search-based software engineering;
  • the effectiveness of human experts vs. automated models in predictions.

Data oriented:

  • data quality, sharing, and privacy;
  • curated data sets made available for the community to use;
  • ethical issues related to data collection and sharing;
  • metrics;
  • tools and frameworks to support researchers and practitioners to collect data and construct models to share/repeat experiments and results.

Validity oriented:

  • replication and repeatability of previous work using predictive modelling and data analytics in software engineering;
  • assessment of measurement metrics for reporting the performance of predictive models;
  • evaluation of predictive models with industrial collaborators.

 

Important Dates

  • Abstracts due: July 16, 2018
  • Submissions due: July 20, 2018
  • Author notification: August 21, 2018
  • Conference Date: October 10, 2018

 

Journal Special Issue

  • Following the conference, the authors of the best papers will be invited to submit extended versions for consideration in a special issue of the Empirical Software Engineering journal (Springer).

 

Keynote

  • Prof. Tracy Hall, Lancaster University

Kinds of Papers

We invite theoretical and empirical studies on the topics of interest (e.g. case studies, meta-analyses, replications, experiments, simulations, surveys, etc.), as well as industrial experience reports detailing the application of predictive modelling and data analytics in industrial settings. Both positive and negative results are welcome, though negative results should still be based on rigorous research and provide details on lessons learned. It is encouraged, but not mandatory, that authors make the data used in their analysis available online. Submissions can be of the following kinds:

  • Full papers (oral presentation): papers with novel and complete results.
  • Short papers (oral presentation): papers to disseminate ongoing work and preliminary results for early feedback, or vision papers about the future of predictive modelling and data analytics in software engineering.
Note about GitHub research: Given that PROMISE papers rely heavily on software data, we would like to draw the attention of authors who leverage data scraped from GitHub to GitHub's Terms of Service, which require that “publications resulting from that research are open access”. Similar to other leading SE conferences, PROMISE supports and encourages Green Open Access, i.e., self-archiving. Authors can archive their papers on their personal home page, an institutional repository of their employer, or at an e-print server such as arXiv (preferred).

 

Submissions

PROMISE 2018 submissions must meet the following criteria:
  • be original work, neither published nor under review elsewhere while under consideration;
  • conform to the ACM SIG proceedings template;
  • not exceed 10 (4) pages for full (short) papers, including references;
  • use a structured abstract with the headings Background, Aims, Method, Results, and Conclusions;
  • be written in English;
  • be submitted via EasyChair (please choose the paper category appropriately).
Submissions will be peer reviewed by at least three experts from the international program committee. Accepted papers will be published in the ACM International Conference Proceedings Series and will be available electronically via the ACM Digital Library. Each accepted paper requires one registration at the full conference rate and must be presented in person at the conference.

Programme Committee

  • Gabriele Bavota, University of Lugano
  • Ricardo Britto, Blekinge Institute of Technology
  • Massimiliano (Max) Di Penta, University of Sannio
  • Carmine Gravino, University of Salerno
  • Rachel Harrison, Oxford Brookes University
  • Hoa Khanh Dam, University of Wollongong
  • Foutse Khomh, Polytechnique Montreal
  • Ekrem Kocaguneli, Microsoft
  • Gernot Liebchen, Bournemouth University
  • Lech Madeyski, Wroclaw University of Science and Technology
  • Tim Menzies, North Carolina State University
  • Leandro Minku, University of Leicester
  • Jaechang Nam, Handong Global University
  • Daniel Rodriguez, University of Alcalá
  • Martin Shepperd, Brunel University London
  • Chakkrit Tantithamthavorn, University of Adelaide
  • Hironori Washizaki, Waseda University
  • Xin Xia, Monash University
  • Yuming Zhou, Nanjing University

Steering Committee

General Chair

PC Co-Chairs

Publication Chair

Publicity and Social Media Chair

Local Organization Chair