QoP List of Accepted Papers

Title: Vulnerability Scoring for Security Configuration Settings (short)
Author(s): Karen Scarfone, Peter Mell
Abstract: The
best-known vulnerability scoring standard, the Common
Vulnerability Scoring System (CVSS), is designed to quantify
the severity of security-related software flaw vulnerabilities.
This paper describes our efforts to determine if CVSS could be
adapted for use with a different type of vulnerability:
security configuration settings. We have identified significant differences between scoring configuration settings and scoring software flaws, and have proposed methods for accommodating those
differences. We also generated scores for 187 configuration
settings to evaluate the new specification.
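
For context, the CVSS v2 base-score equation that the paper adapts can be sketched in a few lines. The code below is a generic illustration of that equation only; it is not the authors' scoring tool and the example vector is chosen for illustration.

```python
# Minimal sketch of the CVSS v2 base-score equation (illustrative only;
# not the authors' tooling for configuration settings).
AV = {"L": 0.395, "A": 0.646, "N": 1.0}      # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}       # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}      # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}     # Conf./Integ./Avail. impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# The vector AV:N/AC:L/Au:N/C:P/I:P/A:P evaluates to 7.5 under this equation.
print(cvss2_base("N", "L", "N", "P", "P", "P"))
```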

Title: Enforcing a Security Pattern in Stakeholder Goal Models (short)
Author(s): Yijun Yu, Haruhiko Kaiya, Hironori Washizaki, Yingfei Xiong, Zhenjiang Hu
Abstract: Patterns capture useful knowledge about recurring problems and their solutions. Detecting a security problem using patterns in requirements models may lead to its early resolution. In order to facilitate early detection and resolution of security problems, in this paper we formally describe role-based access control (RBAC) as a pattern that may occur in stakeholder requirements models. We also implemented the formally described pattern in our goal-oriented modeling tool using model-driven queries and transformations. Applied to a number of requirements models published in the literature, the tool automates the detection and resolution of the security pattern in several goal-oriented stakeholder requirements models.
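
As a point of reference for the pattern being enforced, a minimal RBAC access check looks like the sketch below; the users, roles, and permissions are made up for illustration, and this is not the authors' goal-model notation or tool.

```python
# Generic RBAC sketch: access is granted only through roles, never directly
# to users (illustrative; names are assumptions).
user_roles = {"alice": {"manager"}, "bob": {"auditor"}}
role_perms = {"manager": {"approve_order"}, "auditor": {"read_report"}}

def permitted(user: str, permission: str) -> bool:
    """A user holds a permission only via some role assigned to that user."""
    return any(permission in role_perms.get(role, set())
               for role in user_roles.get(user, set()))

print(permitted("alice", "approve_order"))  # True
print(permitted("bob", "approve_order"))    # False: no assigned role grants it
```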

Title: Strata-Gem: Risk Assessment Through Mission Modeling
Author(s): K. Clark, E. Singleton, S. Tyree, J. Hale
Abstract: Strata-Gem
utilizes mission trees to perform risk assessments by linking
an organization’s objectives to the IT assets that
implement them. Critical states are identified that indicate goals a potential attacker can achieve to prevent each asset from completing its objective. Those goals are then used as states to drive attack tree and fault analysis to determine the likelihood of an attack. This allows a quantitative risk measurement to be calculated for each asset, each objective, and the overall organization.
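
The kind of likelihood aggregation the abstract refers to can be illustrated with a small AND/OR attack tree. The sketch below assumes independent leaf probabilities and made-up numbers; it is not the Strata-Gem implementation.

```python
# Propagate attack likelihood through an AND/OR attack tree
# (toy example with assumed independence; not Strata-Gem itself).
from math import prod

def likelihood(node):
    """node is a leaf {'p': float} or {'op': 'AND'|'OR', 'children': [...]}."""
    if "p" in node:
        return node["p"]
    child_ps = [likelihood(c) for c in node["children"]]
    if node["op"] == "AND":                      # attacker must achieve every subgoal
        return prod(child_ps)
    return 1 - prod(1 - p for p in child_ps)     # OR: any single subgoal suffices

tree = {"op": "OR", "children": [
    {"op": "AND", "children": [{"p": 0.6}, {"p": 0.5}]},  # e.g. phish creds AND reach host
    {"p": 0.1},                                           # e.g. exploit exposed service
]}
print(round(likelihood(tree), 3))  # 0.37
```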

Title: Measuring Network Security Using Dynamic Bayesian Network
Author(s): Marcel Frigault, Lingyu Wang, Anoop Singhal, Sushil Jajodia
Abstract: Given the
increasing dependence of our societies on networked information
systems, the overall security of these systems should be
measured and improved. Existing security metrics have generally
focused on measuring individual vulnerabilities without
considering their combined effects. Our previous work tackled
this issue by exploring the causal relationships between
vulnerabilities encoded in an attack graph. However, the
evolving nature of vulnerabilities and networks has largely
been ignored. In this paper, we propose a Dynamic Bayesian
Networks (DBNs)-based model to incorporate temporal factors,
such as the availability of exploit code or patches. Starting
from the model, we study two concrete cases to demonstrate the
potential applications. This novel model provides a theoretical
foundation and a practical framework for continuously measuring
network security in a dynamic environment.
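
A toy illustration of the temporal idea: the probability that exploit code is available evolves across time slices and changes the compromise probability of a vulnerable host. All numbers below are assumptions for illustration, not the authors' model or conditional probability tables.

```python
# Two-slice temporal update sketch: exploit availability evolves over time and
# drives the compromise probability (all probabilities are made up).
p_exploit = 0.1                     # P(exploit code available) at slice 0
p_become_available = 0.3            # P(available_t | not available_{t-1})
p_compromise_given = {True: 0.8, False: 0.2}   # P(compromise | availability)

for t in range(5):
    p_compromise = (p_exploit * p_compromise_given[True]
                    + (1 - p_exploit) * p_compromise_given[False])
    print(f"slice {t}: P(exploit)={p_exploit:.2f}  P(compromise)={p_compromise:.2f}")
    # transition to the next slice: availability persists once gained, else may appear
    p_exploit = p_exploit + (1 - p_exploit) * p_become_available
```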

Title: Prioritizing Software Security Fortification through Code-Level Security Metrics
Author(s): Michael Gegick, Laurie Williams, Jason Osborne, Mladen Vouk
Abstract: Limited
resources preclude software engineers from finding and fixing
all vulnerabilities in their software system. We create
predictive models to identify which components are likely to
have the most security risk. Software engineers can use these
models to make measurement-based risk management decisions and
to prioritize software security fortification efforts, such as
redesign and additional inspection and testing. We mined and
analyzed data from a large commercial telecommunications
software system containing over one million lines of code that
had been deployed to the field for two years. Using recursive
partitioning, we built attack-prone prediction models with the
following code-level metrics: static analysis tool output, code
churn, and source lines of code. A model which used all three
of these as predictors identified 100% of the attack-prone
components (40% of the total number of components) with an 8%
false positive rate. As such, the model could be used to prioritize fortification efforts and increase their effectiveness.
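
The modeling step can be sketched with an off-the-shelf decision tree (recursive partitioning) over the three code-level metrics named in the abstract. The data below is synthetic, since the commercial dataset is not public, and the sketch is not the authors' actual model.

```python
# Recursive-partitioning sketch: classify components as attack-prone from
# static-analysis warnings, code churn, and SLOC (synthetic data).
from sklearn.tree import DecisionTreeClassifier

# columns: [static-analysis warnings, code churn (changed lines), SLOC]
X = [[12, 400, 3000], [0, 10, 200], [7, 150, 1200], [1, 30, 500],
     [20, 900, 8000], [2, 60, 700], [15, 500, 4000], [0, 5, 100]]
y = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = attack-prone component, 0 = not

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(model.predict([[10, 300, 2500]]))  # classify a new component
```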

Title: Perceived Risk Assessment (short)
Author(s): Yudistira Asnar, Nicola Zannone
Abstract: In recent years, IT systems have played an increasingly fundamental role in human activities and, in particular, in critical activities such as the management of Air Traffic Control and Nuclear Power Plants. This has spurred several researchers to develop models,
metrics, and methodologies for analyzing and measuring the
security and dependability of critical systems. Their objective
is to understand whether the risks affecting the system are
acceptable or not. If risks are too high, analysts need to
identify adequate treatments to mitigate them. Existing proposals, however, fail to consider risks within multi-actor settings, where different actors participating in the system might have different perceptions of risk and react accordingly. In this paper, we introduce the concept of perceived risk and discuss how it differs from actual risk. We
also investigate the concepts necessary to capture and analyze
perceived risk.

Title: Is Complexity Really the Enemy of Software Security? (short)
Author(s): Yonghee Shin and Laurie Williams
Abstract: Software
complexity is often hypothesized to be the enemy of software
security. We performed statistical analysis on nine code
complexity metrics from the JavaScript Engine in the Mozilla
application framework to investigate if this hypothesis is
true. Our initial results show that the nine complexity
measures have a weak correlation (rho = 0.21) with security problems for the Mozilla JavaScript Engine. The study should be
replicated on more products with design and code-level metrics.
It may be necessary to create new complexity metrics to embody
the type of complexity that leads to security problems.
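
The reported rho = 0.21 is a Spearman rank correlation. The sketch below shows the computation on made-up per-file numbers; it is not the study's data.

```python
# Spearman rank correlation between a complexity metric and security-problem
# counts per file (illustrative numbers only).
from scipy.stats import spearmanr

cyclomatic_complexity = [3, 15, 7, 22, 5, 9, 30, 4]   # per-file complexity
vulnerability_fixes   = [0, 1, 0, 1, 0, 0, 2, 1]      # per-file security fixes

rho, p_value = spearmanr(cyclomatic_complexity, vulnerability_fixes)
print(f"rho={rho:.2f}, p={p_value:.3f}")
```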

Title: The Risks with Security Metrics (short)
Author(s): Marco Aime, Andrea Atzeni, Paolo Carlo Pomi
Abstract: Security
metrics and measurements are processes of obtaining information
about the effectiveness of an ISMS, control objectives, and controls using a measurement method, a measurement function, an analytical model, and decision criteria. Unfortunately, identifying effective security metrics has proven a hard
challenge: every automatic security evaluation technique has
failed to match the performance of security experts. Our
studies have shown that security metrics are by nature highly unstable, both over time and depending on the specific target of evaluation. In this paper, we first elaborate on this finding, then describe the experimental framework we used, and present
some validation results.

Title: Towards Experimental Evaluation of Code Obfuscation Techniques
Author(s): Mariano Ceccato, Massimiliano Di Penta, Jasvir Nagra, Paolo Falcarin, Filippo Ricca, Marco Torchiano, Paolo Tonella
Abstract: Although there are no general-purpose obfuscation algorithms satisfying any strong definition of obfuscation, and some argue they are impossible to construct, in practice available code obfuscation is considered a useful protection against malicious reverse engineering because it obstructs code comprehension. In previous work, the difficulty of reverse engineering has been mainly estimated by means of code metrics, by the computational complexity of static analysis, or by comparing the output of de-obfuscating tools. In this paper, we take a different approach and assess the difficulties
attackers have in understanding and modifying obfuscated code
through controlled experiments involving human subjects.
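
As a reminder of what obstructing code comprehension means in practice, the toy example below shows two behaviourally identical functions, the second with scrambled identifiers and structure. It is only an illustration, not one of the obfuscation techniques evaluated in the paper.

```python
# Two equivalent functions; the second is a toy "obfuscated" version whose
# meaningless names and flattened structure hinder comprehension.
def average_order_value(orders):
    total = sum(order["price"] * order["quantity"] for order in orders)
    return total / len(orders)

def l1(l2):
    l3 = 0
    for l4 in l2:
        l3 = l3 + l4["price"] * l4["quantity"]
    return l3 * (1.0 / len(l2))

orders = [{"price": 10.0, "quantity": 2}, {"price": 5.0, "quantity": 4}]
print(average_order_value(orders), l1(orders))  # same result: 20.0 20.0
```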

Title: Does enforcing anonymity mean decreasing data usefulness?
Author(s): Aaron Visaggio, Gerardo Canfora
Abstract: Preserving data privacy is becoming an urgent issue. Among different technologies, anonymization techniques offer many advantages, even if preliminary investigations suggest that they could deteriorate the usefulness of data. We carried out an empirical study in order to understand to what extent it is possible to enforce anonymization, and thus protect sensitive information, without degrading the usefulness of data below acceptable thresholds. Moreover, we also analyzed whether rewriting queries could help reduce the drawbacks of anonymization.
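
One common anonymization step, generalization of quasi-identifiers, illustrates the privacy/usefulness trade-off the study measures. The rules and records below are assumptions for illustration, not the specific technique or data evaluated in the paper.

```python
# Toy generalization of quasi-identifiers: coarser values protect identity but
# reduce the precision (usefulness) of the released data.
records = [
    {"zip": "37203", "age": 34, "diagnosis": "flu"},
    {"zip": "37205", "age": 36, "diagnosis": "cold"},
    {"zip": "37208", "age": 41, "diagnosis": "flu"},
]

def generalize(record):
    decade = record["age"] // 10 * 10
    return {
        "zip": record["zip"][:3] + "**",          # coarsen ZIP code to a prefix
        "age": f"{decade}-{decade + 9}",          # replace exact age with a bucket
        "diagnosis": record["diagnosis"],         # sensitive value kept for analysis
    }

for r in records:
    print(generalize(r))
```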