Swapna Gokhale and Robert
Mullen. A Discrete Lognormal Model for
Software Defects affecting QoP
|
Abstract: Many
computer and network security crises arise due to the
exploitation of software defects and are only remedied by their
repair. Thus the effect of security related software defects
and their occurrence rates is an important aspect of Quality of
Protection (QoP). Existing arguments and evidence suggest that
the distribution of occurrence rates of software defects is
lognormal and that the first occurrence times of defects
follow the Laplace transform of the lognormal. We
extend this research to hypothesize that the distribution of
occurrence counts of security related defects follows the
Discrete Lognormal. We find that the observed occurrence counts
for three sets of defect data relating specifically to network
security are consistent with our hypothesis. The
paper thus demonstrates how the existing concepts and
techniques in software reliability engineering may be applied
to study the occurrence phenomenon of security related defects
that impact QoP.
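As a concrete illustration, one common way to obtain a discrete lognormal distribution is to evaluate the lognormal density at the positive integers and normalize; the paper's exact construction may differ, and the parameters below are purely illustrative. A minimal sketch in Python:

```python
import math

def discrete_lognormal_pmf(k_max, mu, sigma):
    """Probability mass at k = 1..k_max, proportional to the
    lognormal density evaluated at the integers (one common
    discretization; the paper's exact construction may differ)."""
    weights = [
        math.exp(-(math.log(k) - mu) ** 2 / (2 * sigma ** 2)) / k
        for k in range(1, k_max + 1)
    ]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative parameters: a heavy right tail, typical of the
# highly skewed occurrence rates reported for software defects.
pmf = discrete_lognormal_pmf(1000, mu=1.0, sigma=1.5)
```

Occurrence counts observed in the defect data can then be compared against such a pmf with a standard goodness-of-fit test.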
|
Ernesto
Damiani, Sabrina
De Capitani di Vimercati, Sara Foresti, Pierangela
Samarati and Marco Viviani. Measuring
Inference Exposure in Outsourced Encrypted Databases
|
Abstract: Database
outsourcing is becoming increasingly popular, introducing a new
paradigm, called database-as-a-service, where a client's
encrypted database is stored at an external service provider.
Existing proposals for querying encrypted databases are based on
the association, with each encrypted tuple, of additional
indexing information obtained from the plaintext values of
attributes that can be used in the queries. However, the
relationship between indexes and data should not open the door
to inference and linking attacks that can compromise the
protection granted by encryption. In this paper, we present a
simple yet robust indexing technique and investigate
quantitative measures to model inference exposure. We present
different techniques to compute an aggregate measure from the
inference exposure associated with each single index. Our
approach can take into account the importance of plaintext
attributes associated with indexes and/or can allow the user to
weight the inference exposure values supplied in relation to
their relative ordering.
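For illustration, one simple aggregate of per-index exposure values is an importance-weighted mean; the function, exposure values, and weights below are hypothetical and only sketch the idea, not the paper's specific measures:

```python
def aggregate_exposure(exposures, weights):
    """Importance-weighted mean of per-index inference exposure
    values (one plausible aggregate; the paper discusses several
    alternatives, including order-based weighting)."""
    assert len(exposures) == len(weights)
    total_w = sum(weights)
    return sum(e * w for e, w in zip(exposures, weights)) / total_w

# Hypothetical exposure values for three indexes; the second
# attribute is assumed three times as important as the others.
agg = aggregate_exposure([0.2, 0.7, 0.4], [1.0, 3.0, 1.0])
# (0.2 + 2.1 + 0.4) / 5 = 0.54
```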
|
Simon
Foley, Stefano Bistarelli, Barry
O'Sullivan, John
Herbert and Garret Swart. Multilevel
Security and Quality of Protection
|
Abstract: Constraining
how information may flow within a system is at the heart of
many protection mechanisms and many security policies have
direct interpretations in terms of information flow
and multilevel security style controls. However, while
conceptually simple, multilevel security controls have
been difficult to achieve in practice. In this paper
we explore how the traditional assurance measures that are used
in the network multilevel security model can be re-interpreted
and generalised to provide the basis of a framework for
reasoning about the quality of protection provided by a secure
system configuration.
|
Iliano Cervesato. Towards
a Notion of Quantitative Security Analysis
|
Abstract: The
traditional Dolev-Yao model of security limits attacks to
``computationally feasible'' operations. We depart
from this model by assigning a cost to protocol actions, both
of the Dolev-Yao kind as well as non traditional forms such as
computationally-hard operations, guessing, principal
subversion, and failure. This quantitative approach
enables evaluating protocol resilience to various forms of
denial of service, guessing attacks, and resource
limitation. While the methodology is general, we
demonstrate it through a low-level variant of the MSR
specification language.
|
Dogan Kesdogan and Lexi
Pimenidis. The Lower Bound of Attacks on
Anonymity Systems -- A Unicity Distance Approach
|
Abstract: In recent
years a number of attacks on generic anonymity protocols have
emerged, such as the hitting-set attack. These attacks make
use of information gained by passively monitoring anonymizing
networks to disclose the communication profile of the users. It
has been proven that the longer a person, whom we call Alice,
communicates over an anonymizing infrastructure using the same
set of peers (i.e. following a fixed profile), the more
likely it becomes that a link between her and her peers can be
detected. If, on the other hand, she changes her peers
dynamically, detection becomes harder. In this work we
present a method to calculate a lower bound on the number of
observations needed to identify all peer partners of Alice
(i.e. a total break), assuming a fixed personal profile for
Alice. We claim that this number is comparable to the
well-known measure of 'unicity distance' in the area of
cryptography.
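For reference, Shannon's unicity distance, to which the authors compare their bound, can be computed directly from the key-space entropy and the plaintext redundancy; the numbers below are illustrative:

```python
import math

def unicity_distance(key_space_size, redundancy_bits_per_symbol):
    """Shannon's unicity distance: the expected amount of
    ciphertext after which a brute-force attacker is left with
    only one plausible key."""
    return math.log2(key_space_size) / redundancy_bits_per_symbol

# Illustrative numbers: a 56-bit key space and roughly 3.2 bits
# of redundancy per character of English plaintext.
n = unicity_distance(2 ** 56, 3.2)
# 56 / 3.2 = 17.5 characters
```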
|
Davide
Balzarotti, Mattia
Monga and Sabrina
Sicari. Assessing
the risk of using vulnerable components
|
Abstract: This paper
discusses how information about the architecture and the
vulnerabilities affecting a distributed system can be used to
quantitatively assess the risk to which the system is
exposed. Our approach to risk evaluation can be used
to assess how much one should believe in system
trustworthiness and to compare different solutions,
providing a tool for deciding whether the additional cost of a
more secure component is worth affording.
|
Judith E. Y. Rossebo, Mass
Soldal Lund, Knut Eilif Husa and Atle Refsdal. A
Conceptual Model for Service Availability
|
Abstract: Traditionally,
availability has been seen as an atomic property asserting the
average time a system is "up" or "down". In
order to model and analyse the availability of computerized
systems in a world where the dependency on and complexity of
such systems are increasing, this notion of availability is no
longer sufficient. This paper presents a conceptual model for
availability designed to handle these challenges. The core of
this model is a characterization of availability by means of
accessibility properties and exclusivity properties, which is
further specialized into measurable aspects of availability. We
outline how this conceptual model may be refined to a framework
for specifying and analysing availability requirements.
|
Valentina Casola, Antonino
Mazzeo, Nicola Mazzocca and Massimiliano Rak. A
SLA evaluation methodology in Service Oriented Architectures
|
Abstract: Cooperative
services in Service Oriented Architectures (SOA) interact and
delegate jobs to each other; when they must respect a Service
Level Agreement (SLA), they need to manage it explicitly among
themselves. SLAs and, above all, security SLAs are
usually expressed in ambiguous ways and this implies that they
need to be manually evaluated both in a mutual agreement to
"qualify a service" and in the monitoring process.
As a consequence, service composition usually cannot vary
dynamically. In this paper we introduce a methodology which
helps in the automatic evaluation and comparison of security
SLAs. The methodology is founded on the adoption of policies
both for service behavior and SLA description, and on the
definition of a metric function for the evaluation and
comparison of policies. We will illustrate the applicability of
the proposed methodology in different contexts of great
interest for e-government projects.
|
Andy
Ozment. Software
Security Growth Modeling: Examining Vulnerabilities with
Reliability Growth Models
|
Abstract: The
insecurity of commercial software is due in part to the lack of
useful metrics of software security. However, the software
engineering tools historically used to examine faults can also
be used to examine vulnerabilities. This work proposes that
security growth modeling can provide a useful absolute
and relative metric of software security: an estimate of the
total number of vulnerabilities in a product. Unfortunately,
obtaining accurate vulnerability data in order to model
software security is difficult. The challenges of the
collection process are considered and a set of vulnerability
characterization criteria is proposed. Fifty-four months of
vulnerability data for OpenBSD 2.2 were collected, and ten
reliability growth models were applied to this data. Seven of
the models tested had successful prequential likelihood
(accuracy) and goodness-of-fit results. Security growth
modeling thus shows promise in providing a measurement for
security. However, some impediments to its use remain: these
challenges and several avenues of potential research are
discussed.
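As an illustration of the kind of model applied, the classic Goel-Okumoto reliability growth model estimates the total number of faults (here, vulnerabilities) as the parameter a of its mean value function m(t) = a(1 - e^(-bt)); the parameter values below are hypothetical, not the paper's estimates:

```python
import math

def goel_okumoto_mean(t, a, b):
    """Mean cumulative number of vulnerabilities found by time t
    under the Goel-Okumoto NHPP model: m(t) = a * (1 - e^(-b*t)).
    Parameter a is the estimated total number of vulnerabilities;
    b is the detection rate."""
    return a * (1.0 - math.exp(-b * t))

# Hypothetical fitted parameters (not the paper's estimates):
# a = 60 total vulnerabilities, b = 0.05 per month.
found = goel_okumoto_mean(54, a=60.0, b=0.05)
remaining = 60.0 - found
```

In this sketch, a - m(t) is the estimated number of vulnerabilities still undiscovered after t months, which is exactly the kind of absolute metric the abstract proposes.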
|
Miles McQueen, Wayne Boyer,
Mark Flynn and George Beitel. Time-to-compromise
Model for Cyber Risk Reduction Estimation
|
Abstract: We propose
a new model for estimating the time to compromise a system
component that is visible to an attacker. The model provides an
estimate of the expected value of the time-to-compromise as a
function of known and visible vulnerabilities, and attacker
skill level. The time-to-compromise random process model is a
composite of three subprocesses associated with attacker
actions aimed at the exploitation of vulnerabilities. In a
case study, the model was used to aid in a risk reduction
estimate between a baseline Supervisory Control and Data
Acquisition (SCADA) system and the baseline system enhanced
through a specific set of control system security remedial
actions. For our case study, the total number of system
vulnerabilities was reduced by 86% but the dominant attack path
was through a component where the number of vulnerabilities
was reduced by only 42% and the time-to-compromise of that
component was increased by only 13% to 30% depending on
attacker skill level.
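As a heavily simplified sketch (not the paper's actual composite model, whose subprocess definitions and weighting are more involved), an expected time-to-compromise can be formed as a probability-weighted mix of subprocess times; all values below are hypothetical:

```python
def expected_time_to_compromise(subprocess_times, probabilities):
    """Probability-weighted mix of per-subprocess compromise
    times. This is a simplified stand-in, NOT the paper's actual
    composite model."""
    assert abs(sum(probabilities) - 1.0) < 1e-9
    return sum(t * p for t, p in zip(subprocess_times, probabilities))

# Hypothetical days-to-compromise for three attacker subprocesses
# and assumed probabilities of each path being taken.
ettc = expected_time_to_compromise([1.0, 5.4, 30.4], [0.3, 0.5, 0.2])
```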
|
Reine Lundin, Stefan
Lindskog, Anna Brunstrom and Simone Fischer-Hübner. Using
Guesswork as a Measure for Confidentiality of Selectively
Encrypted Messages
|
Abstract: In this
paper, we start to investigate the security implications of
selective encryption. We do this by using the measure guesswork,
which gives us the expected number of guesses that an attacker
must perform in a brute force attack to reveal an encrypted
message. The concept of reduction chains is used to describe
how the search (message) space changes for the attacker with
different encryption levels. The characteristic of the proposed
measure is investigated for zero-order languages.
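For illustration, guesswork is the expected number of guesses when an optimal attacker tries candidate messages in decreasing order of probability; a minimal sketch with made-up distributions:

```python
def guesswork(probabilities):
    """Expected number of guesses in an optimal brute-force
    attack: candidate messages are tried in decreasing order of
    probability, so guesswork = sum over i of i * p_(i)."""
    ordered = sorted(probabilities, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

# A uniform distribution over N messages needs (N + 1) / 2
# guesses on average; a skewed distribution is easier to guess.
uniform = guesswork([0.25] * 4)
skewed = guesswork([0.7, 0.1, 0.1, 0.1])
```

Selective encryption changes the attacker's search space, and hence the distribution fed to such a measure, which is what the reduction chains describe.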
|
Eric Alata, Marc Dacier,
Yves Deswarte, Mohamed Kaâniche, Kostya Kortchinsky, Vincent
Nicomette, Van-Hau Pham and Fabien Pouget. Collection
and analysis of attack data based on honeypots deployed on the
Internet
|
Abstract: The CADHo
project (Collection and Analysis of Data from Honeypots) is an
ongoing research action funded by the French ACI "Sécurité &
Informatique" [1]. It aims at building an
environment to better understand threats on the Internet and
also at providing models to analyze the observed phenomena. Our
approach consists in deploying and sharing with the scientific
community a distributed platform based on honeypots that
gathers data suitable to analyze the attack processes targeting
machines connected to the Internet. This distributed platform,
called Leurré.com and administered by Institut Eurécom,
offers each partner collaborating in this initiative access to
all collected data in order to carry out statistical analyses
and modeling activities. So far, about thirty honeypots have
been operational for several months in twenty countries across
the five continents. This paper presents a brief overview of this
distributed platform and examples of results derived from the
data. It also outlines the approach investigated to model
observed attack processes and to describe intruders'
behaviors once they manage to gain access to a target machine.
|
Günter
Karjoth, Birgit Pfitzmann, Matthias Schunter and Michael
Waidner. Service-oriented
Assurance - Comprehensive Security by Explicit Assurances
|
Abstract: Flexibility
to adapt to changing business needs is a core requirement
of today's enterprises. This is addressed
by decomposing business processes into services that can
be provided by scalable service-oriented
architectures. Service-oriented architectures enable
requesters to dynamically discover and use
sub-services. Today, service selection does not
consider security. In this paper, we introduce the concept
of Service Oriented Assurance (SOAS), in which services
articulate their offered security assurances as well as
assess the security of their sub-services. Products
and services with well-specified and verifiable assurances
provide guarantees about their security properties. As
a consequence, SOAS enables discovery of the sub-services with
the ``right'' level of security. Applied to business
installations, it enables enterprises to perform a well-founded
security/price trade-off for the services used in their
business processes.
|
Andrea Atzeni and Antonio
Lioy. Why to adopt a security metric? A
little survey
|
Abstract: Security
is a hot topic among recent computer system issues. Many
security experts advocate the urgent need to improve and
increase investment in security, as well as the necessity
to protect user privacy. On the other hand, the same security
experts admit the difficulty of quantifying security. This paper
tries to highlight the motivation to improve security
measurement techniques, and discusses whether the status quo is
promising enough to justify security investments in the hope of
significant gains.
|
Dogan Kesdogan, Lexi
Pimenidis and Tobias Kölsch. Intersection
Attacks on Web-Mixes: Bringing the Theory into Praxis
|
Abstract: In the
past, different intersection attacks on Chaum Mixes have been
proposed and shown to work well in simulation environments. In
this work we describe intersection attacks that have been
performed on data from anonymized proxy log files. This
approach raises entirely new problems that occur in real systems,
where real-world users do not behave like those in the
idealized model; for example, the attack algorithm has to cope
with a fixed number of observations. From these first
experiments on the ``dirty'' real-world data we gain valuable
insight into the theory and practice of real anonymizers.
|