Title:
AMBRA - Automated Model-Based Risk Analysis
Author(s):
Marco Domenico Aime, Andrea Atzeni and Paolo Carlo Pomi
|
Abstract: Risk
analysis is the starting baseline for choosing which technical and
procedural security measures an agency must employ. Despite its
importance, because of its complexity and relative immaturity the
process still rests on the shoulders of security experts, with
little automation. In this work we present a methodology based on
existing standards, highlight the tasks that can be performed
automatically, and describe how we automate them in our model.
|
Title:
Program Obfuscation: A Quantitative Approach
Author(s):
Bertrand Anckaert, Matias Madou, Bjorn De Sutter, Bruno De Bus, Koen De Bosschere and Bart Preneel
|
Abstract: Despite
the recent advances in the theory underlying obfuscation, there
still is a need to evaluate the quality of practical
obfuscating transformations more quickly and easily. This paper
presents the first steps toward a comprehensive evaluation
suite consisting of a number of deobfuscating transformations
and complexity metrics that can be readily applied to existing
and future transformations in the domain of binary obfuscation.
In particular, a framework based on software complexity metrics
measuring four program properties: code, control flow, data and
data flow is suggested. A number of well-known obfuscating and
deobfuscating transformations are evaluated based upon their
impact on a set of complexity metrics. This enables us to
quantitatively evaluate and compare the potency of the
obfuscating and deobfuscating transformations.
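As a rough illustration of the kind of quantitative evaluation described above (a minimal sketch, not the paper's metric suite): the potency of a transformation can be expressed as the relative increase of a complexity metric, here a crude control-flow metric standing in for cyclomatic complexity. The opcode lists are invented examples.

```python
# Sketch: quantify an obfuscating transformation's potency as the
# relative increase of a complexity metric, E(P')/E(P) - 1.

def control_flow_metric(instructions):
    """Crude control-flow complexity: 1 + number of branch opcodes."""
    branches = {"jmp", "je", "jne", "call"}
    return 1 + sum(1 for op in instructions if op in branches)

def potency(original, obfuscated, metric):
    """Relative metric increase introduced by the transformation."""
    return metric(obfuscated) / metric(original) - 1.0

original = ["mov", "add", "je", "mov", "ret"]
# Control-flow flattening inserts extra (opaque) branches:
obfuscated = ["mov", "jmp", "add", "je", "jne", "mov", "jmp", "ret"]

print(potency(original, obfuscated, control_flow_metric))  # 1.5
```

A deobfuscating transformation would be evaluated the same way, with a negative potency indicating that it successfully reduced the measured complexity.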
|
Title:
Experimental verification of DoS counter-measure performance
Author(s):
Daniel Boteanu, Edouard Reich, Jose M. Fernandez and John McHugh
|
Abstract: Among the
different quality attributes of software artifacts, security
has lately gained a lot of interest. However, both qualitative
and quantitative methodologies to assess security are still
missing. This is possibly due to the lack of knowledge about
which properties must be considered when it comes to evaluating
security. The above-mentioned gap is even larger when one
considers key software development phases such as architectural
and detailed design. This position paper highlights the
fundamental questions that need to be answered in order to
bridge the gap and proposes an initial approach.
|
Title:
A technique for self-certifying tamper resistant software
Author(s):
Hongxia Jin and Ginger Myles
|
Abstract: Until
recently the use of software tamper resistance was rather
limited. However, as the music and movie industries have
increased their reliance on content protection systems, the
importance placed on and the use of tamper resistance has also
increased. Unfortunately, the nature of tamper resistance can
make it difficult for developers to determine if a protection
mechanism is actually robust and which attacks it can protect
against. To address this issue we have designed a tool for
self-certifying the strength of a tamper resistance
implementation that is based on a hybrid attack-defense graph.
This approach to tamper resistance evaluation is advantageous
in that it enables certification without leaking confidential
implementation details and it assists developers in designing
more robust implementations.
|
Title:
An algorithm for the security appraisal for complex business processes
Author(s):
Fabio Massacci and Artsiom Yautsiukhin
|
Abstract: In order
to provide certified security services we must provide
indicators that can measure the level of assurance that a
complex business process can offer. Unfortunately the standard
formulation of security indicators is not amenable to efficient
algorithms that can evaluate the level of assurance of a complex
process from its components. In this paper we present an algorithm
based on FD-Graphs (a variant of directed hyper-graphs) that
can be used to compute in polynomial time i) the overall
assurance indicator of a complex business process from its
components for arbitrary monotone composition functions, ii)
the subpart of the business process that is responsible for
such assurance indicator (i.e. the weakest security link).
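As intuition for composing assurance indicators from components (a toy sketch only; the paper's contribution is an FD-Graph algorithm that does this in polynomial time on hypergraphs, which this naive recursion does not reproduce): evaluate a process tree bottom-up with a monotone composition function. Using min() yields the classic weakest-link reading. The process names and indicator values are invented.

```python
# Sketch: bottom-up assurance composition over a process tree with a
# monotone composition function (min = weakest link), also returning
# the sub-process responsible for the overall indicator.

def assurance(node, children, indicator, compose=min):
    """Leaves report their indicator; composites compose their children."""
    kids = children.get(node, [])
    if not kids:
        return indicator[node], node
    results = [assurance(k, children, indicator, compose) for k in kids]
    value = compose(r[0] for r in results)
    # For min/max composition, the responsible child attains the value.
    weakest = next(n for v, n in results if v == value)
    return value, weakest

children = {"process": ["auth", "payment"], "payment": ["encrypt", "log"]}
indicator = {"auth": 0.9, "encrypt": 0.8, "log": 0.5}

print(assurance("process", children, indicator))  # (0.5, 'log')
```

Any monotone function (e.g. a product of component indicators) could be substituted for min; the weakest-link read-back shown here is specific to min/max composition.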
|
Title:
Quantitative Software Security Risk Assessment Model
Author(s):
Idongesit Mkpong-Ruffin, David Umphress, John Hamilton and Juan Gilbert
|
Abstract: Risk
analysis is a process for considering possible risks and
determining which are the most significant for any particular
effort. Determining which risks to address and the optimum
strategy for mitigating said risks is often an intuitive and
qualitative process. An objective view of the risks inherent in
a development effort requires a quantitative risk model.
Quantitative risk models used to determine which risk factors to
focus on tend to follow the traditional approach of annualized
loss expectancy (ALE), which is based on the frequency of occurrence
and the exposure factor (EF), the percentage of asset loss due to
the potential threat in question. This research
uses empirical data that reflects the security posture of each
vulnerability to calculate Loss Expectancy, a risk impact
estimator. Data from open source vulnerability databases and
results of predicted threat models are used as input to the
risk model. Security factors that take into account the innate
characteristics of each vulnerability are incorporated into the
calculation of the risk model. The result of this model is an
assessment of the potential threats to a development effort,
and ranking of these threats based on the risk metric
calculation.
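The traditional ALE formulation the abstract contrasts with can be sketched in a few lines. The threat names and figures below are invented for illustration, not drawn from the paper's vulnerability data.

```python
# Sketch of the classic ALE calculation: loss expectancy per threat =
# annualized rate of occurrence (ARO) x single loss expectancy (SLE),
# where SLE = asset value x exposure factor (EF).

def annualized_loss_expectancy(asset_value, exposure_factor, aro):
    sle = asset_value * exposure_factor  # single loss expectancy
    return aro * sle

threats = [
    ("sql_injection", 100_000, 0.4, 2.0),  # (name, asset value, EF, ARO)
    ("dos",           100_000, 0.1, 6.0),
]

# Rank threats by ALE, highest risk first.
ranked = sorted(
    threats,
    key=lambda t: annualized_loss_expectancy(t[1], t[2], t[3]),
    reverse=True,
)
for name, av, ef, aro in ranked:
    print(name, annualized_loss_expectancy(av, ef, aro))
```

The model proposed in the abstract goes beyond this by feeding open-source vulnerability data and per-vulnerability security factors into the loss-expectancy estimate rather than relying on fixed EF and frequency guesses.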
|
Title:
Effect of Static Analysis Tools on Software Security: Preliminary Investigation
Author(s):
Vadim Okun, William F. Guthrie, Romain Gaucher and Paul E. Black
|
Abstract: Static
analysis tools can handle large-scale software and find
thousands of defects. But do they improve software security? We
evaluate the effect of static analysis tool use on software
security in open source projects. We measure security by
vulnerability reports in the National Vulnerability Database.
|
Title:
Improving Vulnerability Discovery Models: Problems with definitions and assumptions
Author(s):
Andy Ozment
|
Abstract: Security
researchers are applying software reliability models to
vulnerability data, in an attempt to model the vulnerability
discovery process. I show that most current work on these
vulnerability discovery models (VDMs) is theoretically unsound.
I propose a standard set of definitions relevant to measuring
characteristics of vulnerabilities and their discovery process.
I then describe the theoretical requirements of VDMs and
highlight the shortcomings of existing work, particularly the
assumption that vulnerability discovery is an independent
process.
|
Title:
Measuring Up: How to Keep Security Metrics Useful and Realistic
Author(s):
Shari Lawrence Pfleeger
|
Abstract: Software
quality measurement has a long and not always happy history.
Eager to measure many aspects of software quality, researchers
sometimes have measured what was expedient or available instead
of what was useful and realistic. In this talk, Shari Lawrence
Pfleeger reviews software quality measurement, pointing out
lessons that can be applied to current attempts to measure the
security of systems and networks. She offers guidelines for
effective security measurement that take into account not only
the technology but also the business context in which the
measurement is done.
|
Title:
Defining Categories to Select Representative Attack Test-Cases
Author(s):
Mohammed S. Gad El Rab, Anas Abou El Kalam and Yves Deswarte
|
Abstract: To
improve the quality of protection provided by intrusion
detection systems (IDS), we urgently need effective
evaluation and testing procedures. Evaluating an IDS against
all known and unknown attacks is probably impossible.
Nevertheless, a sensible selection of representative attacks is
necessary to obtain an unbiased evaluation of such systems. To
help in this selection, this paper suggests applying the same
approach as in software testing: to overcome the problem of an
unmanageably large set of possible inputs, software testers
usually divide the data input domain into categories (or
equivalence classes), and select representative instances from
each category as test cases. We believe that the same principle
could be applied to IDS testing if we have a reasonable
classification. In this paper we make a thorough analysis of
existing attack classifications in order to determine whether
they could be helpful in selecting attack test cases. Based on
our analysis, we construct a new scheme to classify attacks
relying on those attributes that appear to be the best
classification criteria. The proposed classification is mainly
intended to be used for testing and evaluating IDS although it
can be used for other purposes such as incident handling and
intrusion reporting. We also apply the Classification Tree
Method (CTM) to select attack test cases. As far as we know,
this is the first time that this method is applied for this
purpose.
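The category-partition idea behind this approach can be sketched as follows (the attack attributes and their values here are invented examples, not the classification criteria the paper derives): partition each attribute of the attack domain into equivalence classes, then take one representative test case per combination.

```python
# Sketch: derive IDS test cases from an attack classification by
# enumerating one representative per combination of attribute classes.
from itertools import product

attack_attributes = {
    "source": ["local", "remote"],
    "target": ["network_stack", "application"],
    "privilege_gained": ["none", "user", "root"],
}

test_cases = [
    dict(zip(attack_attributes, combo))
    for combo in product(*attack_attributes.values())
]

print(len(test_cases))  # 2 * 2 * 3 = 12 combinations
```

The full Classification Tree Method additionally prunes infeasible combinations and structures the attributes as a tree, which a plain Cartesian product does not capture.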
|
Title:
A Taxonomy for Information Security Metrics Development for ICT Product Industry
Author(s):
Reijo Savola
|
Title:
Measuring Network Security Using Attack Graphs
Author(s):
Lingyu Wang, Anoop Singhal and Sushil Jajodia
|
Abstract: In
measuring the overall security of a network, a crucial issue is
to correctly compose the measure of individual components.
Incorrect compositions may lead to misleading results. For
example, a network with fewer vulnerabilities or a more
diversified configuration is not necessarily more secure. To
obtain correct compositions of individual measures, we first need
to understand the interplay between network components, for
example, how vulnerabilities can be combined by attackers in
advancing an intrusion. Such an understanding becomes possible
with recent advances in modeling network security using attack
graphs. Based on our experiences with attack graph analysis, we
propose an integrated framework for measuring various aspects
of network security. We first outline our principles and
methodologies. We then describe concrete examples to build
intuitions. Finally, we present our formal framework. It is our
belief that metrics developed based on the proposed framework
will lead to novel quantitative approaches to vulnerability
analysis, network hardening, and attack response.
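One common way to compose individual measures over an attack graph (an illustrative sketch under simplifying independence assumptions, not the authors' formal framework) is to propagate per-vulnerability exploit likelihoods: AND-nodes require all preconditions, OR-nodes require any one. The CVE names and probabilities are invented.

```python
# Sketch: compose per-vulnerability exploit likelihoods over a small
# attack graph. AND = all preconditions (probabilities multiply,
# assuming independence); OR = any one suffices (noisy-OR).
from math import prod

def success_probability(node, graph, p_exploit):
    """Probability an attacker satisfies `node` in the attack graph."""
    kind, children = graph.get(node, ("leaf", []))
    if kind == "leaf":
        return p_exploit[node]
    probs = [success_probability(c, graph, p_exploit) for c in children]
    if kind == "and":
        return prod(probs)
    return 1.0 - prod(1.0 - p for p in probs)  # "or" node

graph = {
    "root_on_db": ("and", ["shell_on_web", "cve_db"]),
    "shell_on_web": ("or", ["cve_web_a", "cve_web_b"]),
}
p_exploit = {"cve_web_a": 0.5, "cve_web_b": 0.5, "cve_db": 0.5}

print(success_probability("root_on_db", graph, p_exploit))  # 0.375
```

This toy composition shows why counting vulnerabilities is misleading: the two web CVEs together raise the attacker's chances (OR), while the database step gates them (AND), so the overall measure depends on the graph structure, not the raw count.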
|