Although very powerful, many current static analysis tools are misused or even abandoned because they are not written with the end user in mind. My research focuses on improving the usability of analysis tools for code developers, covering aspects that range from the analysis algorithm to the implementation of its framework to the usability of its interface. In particular, my research interests are scalable static analysis, usable tooling, and secure software engineering.
Below are a few projects that I have participated or am participating in, along with the artifacts generated during those projects.
VisuFlow is a debugging environment designed to help static analysis writers understand and debug an analysis. It is written as an Eclipse plugin and supports static data-flow analyses written on top of the Soot analysis framework.
- Source code: https://github.com/VisuFlow
- Video demonstration: https://www.youtube.com/watch?v=51iimUDaOPQ
- Survey questions: https://lisanqd.files.wordpress.com/2017/08/survey_questions.pdf
- Survey results: https://lisanqd.files.wordpress.com/2017/08/survey-answers.xlsx
- The first sheet contains the raw answers.
- Each of the other sheets contains the answers to one question and their corresponding classification information.
- User study questionnaire: https://lisanqd.files.wordpress.com/2017/08/questionnaire.pdf
- User study results: https://lisanqd.files.wordpress.com/2017/08/user-study-results.xlsx
- The first sheet contains the focus times on the different views of the coding environments.
- The second sheet contains the number of errors found by the participants.
- The third sheet contains the raw answers to the user study questionnaire.
- Each of the other sheets contains the answers to one question of the questionnaire and their corresponding classification information.
- ICSE 2018 Demonstration: VisuFlow: a Debugging Environment for Static Analyses (Lisa Nguyen Quang Do, Stefan Krüger, Patrick Hill, Karim Ali, Eric Bodden).
- Technical Report: Debugging Static Analysis (Lisa Nguyen Quang Do, Stefan Krüger, Patrick Hill, Karim Ali, Eric Bodden).
The Just-in-Time analysis concept aims at making static analysis more usable to the end user, often the code developer. It allows analysis writers to encode prioritization properties into the analysis: at runtime, certain paths are analyzed before others, allowing important results to be returned first. CHEETAH is an implementation of the Just-in-Time analysis concept for taint analysis of Android applications. It is integrated into the Eclipse IDE as a plugin.
- Source code of CHEETAH: https://github.com/secure-software-engineering/cheetah
- Video demonstration: https://www.youtube.com/watch?v=AMq9sFo7gjc
- User study documents: https://blogs.uni-paderborn.de/sse/files/2016/08/JITA_UserStudy.pdf
- Survey template, participants’ responses, interview protocol.
- ISSTA 2017: Just-in-Time Static Analysis (Lisa Nguyen Quang Do, Karim Ali, Benjamin Livshits, Eric Bodden, Justin Smith, and Emerson Murphy-Hill).
Awarded: Distinguished Paper Award, Artifact Evaluation Award.
- ICSE 2017 Demonstration: Cheetah: Just-in-Time Taint Analysis for Android Apps (Lisa Nguyen Quang Do, Karim Ali, Benjamin Livshits, Eric Bodden, Justin Smith, and Emerson Murphy-Hill).
- Technical Report: Just-in-Time Static Analysis (Lisa Nguyen Quang Do, Karim Ali, Benjamin Livshits, Eric Bodden, Justin Smith, and Emerson Murphy-Hill).
- Technical Report: Toward a Just-In-Time Static Analysis (Lisa Nguyen Quang Do, Karim Ali, Eric Bodden, and Benjamin Livshits).
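The prioritization idea behind Just-in-Time analysis can be illustrated with a small worklist sketch. This is a hypothetical, simplified illustration, not CHEETAH's actual implementation: analysis tasks carry a "layer" (for instance, 0 for the method currently being edited, higher numbers for code further away), and a priority queue drains lower layers first, so results near the developer's working location are reported early.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch of Just-in-Time prioritization: tasks closer to
// the developer's edit location (lower layer) are analyzed first.
public class JitWorklist {
    // A unit of analysis work: a program point plus its prioritization layer.
    static final class Task {
        final String programPoint;
        final int layer; // smaller = closer to the developer's current edit

        Task(String programPoint, int layer) {
            this.programPoint = programPoint;
            this.layer = layer;
        }
    }

    private final PriorityQueue<Task> worklist =
        new PriorityQueue<>(Comparator.comparingInt((Task t) -> t.layer));

    void add(String programPoint, int layer) {
        worklist.add(new Task(programPoint, layer));
    }

    // Drains the worklist in priority order, returning the order in which
    // program points would be analyzed (and their results reported).
    List<String> drain() {
        List<String> order = new ArrayList<>();
        while (!worklist.isEmpty()) {
            order.add(worklist.poll().programPoint);
        }
        return order;
    }

    public static void main(String[] args) {
        JitWorklist wl = new JitWorklist();
        wl.add("otherClass.method()", 2);  // far away: analyzed last
        wl.add("currentMethod.stmt", 0);   // current method: analyzed first
        wl.add("sameClass.helper()", 1);   // same class: analyzed second
        System.out.println(wl.drain());
        // prints [currentMethod.stmt, sameClass.helper(), otherClass.method()]
    }
}
```

The layering scheme and its granularity (method, class, package) are the prioritization properties an analysis writer would encode; the queue itself stays generic.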
Automated Benchmark Management
When empirically evaluating one's tools, one can either use well-established benchmark suites, create one's own micro-benchmark, or mine open-source repositories for real-life projects. In the first case, benchmark suites are often created by hand for one single purpose and remain unchanged for years, making them ill-adapted to the tool under test and non-representative of real-life software. In the second case, the fact that the tool authors also craft the benchmark is often considered a threat to the validity of the evaluation. The Automated Benchmark Management (ABM) methodology has been designed to semi-automatically build and maintain benchmark collections that correspond to a user specification. It mines GitHub for up-to-date projects, runs user-specified filters, and rules out projects that do not fit the specification or are not buildable. The final collection contains the source code and executables of buildable, current, and user-specific GitHub projects.
- Old web page: http://www.st.informatik.tu-darmstadt.de/artifacts/webapps/
- One instantiation of ABM: Java web applications.
- Source code
- New web application: https://abm.cs.upb.de/abm/
- Currently being built.
- Source code
- SOAP 2016: Toward an Automated Benchmark Management System (Lisa Nguyen Quang Do, Michael Eichberg, and Eric Bodden).
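The filtering stage of the methodology can be sketched as a small pipeline. The following is a hypothetical illustration, not ABM's actual code: the `Project` fields, the star-count filter, and the boolean buildability flag are assumptions standing in for the mined GitHub metadata and build check.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of the ABM filtering pipeline: given candidate
// projects mined from GitHub, rule out those that do not build, then
// keep only those that satisfy every user-specified filter.
public class BenchmarkFilter {
    static final class Project {
        final String name;
        final int stars;
        final boolean buildable; // result of an automated build attempt

        Project(String name, int stars, boolean buildable) {
            this.name = name;
            this.stars = stars;
            this.buildable = buildable;
        }
    }

    static List<Project> select(List<Project> candidates,
                                List<Predicate<Project>> userFilters) {
        List<Project> kept = new ArrayList<>();
        for (Project p : candidates) {
            if (!p.buildable) continue; // rule out projects that do not build
            if (userFilters.stream().allMatch(f -> f.test(p))) {
                kept.add(p);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Project> candidates = List.of(
            new Project("app-a", 120, true),
            new Project("app-b", 3, true),     // too few stars
            new Project("app-c", 500, false)); // does not build
        // Example user specification: at least 50 stars.
        List<Predicate<Project>> filters = List.of(p -> p.stars >= 50);
        for (Project p : select(candidates, filters)) {
            System.out.println(p.name); // prints "app-a"
        }
    }
}
```

Expressing the user specification as a list of predicates keeps the pipeline open-ended: new filters (language, size, activity) compose without changing the selection logic.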
Pointer analysis is a building block of static analysis. Whether for building call graphs or for guaranteeing the soundness of other analyses, points-to and alias information is important to provide. Most points-to and alias analyses do not return all-alias information, meaning that in order to find all variables that alias another, the user must iterate over all variables in the program and query the analysis for each of them. Boomerang is the first analysis that provides all-alias sets. It is also an on-demand analysis, which allows it to return results quickly.
- Source code: https://github.com/johspaeth/boomerang-artifact
- Video: https://www.youtube.com/watch?v=aTt4M2_TGPI
- ECOOP 2016: Boomerang: Demand-Driven Flow- and Context-Sensitive Pointer Analysis for Java (Johannes Späth, Lisa Nguyen Quang Do, Karim Ali, and Eric Bodden).
Awarded: Artifact Evaluation Award.
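The difference between pairwise alias queries and all-alias queries can be sketched as follows. This is a hypothetical illustration, not Boomerang's API: the alias facts are hard-coded here, whereas a real analysis computes them on demand from the program.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch contrasting pairwise alias queries with the
// all-alias queries that a Boomerang-style analysis provides.
public class AliasQueries {
    private final Map<String, Set<String>> aliasSets = new HashMap<>();

    // Records that the given variables all alias each other.
    void recordAliases(String... vars) {
        Set<String> set = new HashSet<>(Arrays.asList(vars));
        for (String v : vars) {
            aliasSets.put(v, set);
        }
    }

    // Pairwise query: do a and b alias? Finding all aliases of a this
    // way requires one query per variable in the program.
    boolean mayAlias(String a, String b) {
        return aliasSets.getOrDefault(a, Set.of()).contains(b);
    }

    // All-alias query: a single query returns the whole alias set of v.
    Set<String> allAliases(String v) {
        return aliasSets.getOrDefault(v, Set.of(v));
    }

    public static void main(String[] args) {
        AliasQueries aq = new AliasQueries();
        aq.recordAliases("x", "y", "z");
        System.out.println(aq.mayAlias("x", "y")); // prints true
        System.out.println(aq.allAliases("x"));    // {x, y, z} in some order
    }
}
```

With only pairwise queries, a client needing all aliases of one variable pays a cost proportional to the number of variables in the program; an all-alias query answers the same question in one call.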