Submission 2: Framework for Malware Resistance Metrics
Author(s): Hanno Langweg
Abstract: We survey existing security metrics in software architecture and software engineering. The metrics are adapted to indicate the resistance of an application against malicious software (malware) attacks. A repository of generic attacks is presented, as well as the concept of resistance classes for software products.
Submission 6: Using model-based security assessment in component-oriented system development. A case-based evaluation
Author(s): Gyrd Brændeland and Ketil Stølen
Abstract: We propose an integrated process for component-based system development and security risk analysis. The integrated process is evaluated in a case study involving an instant messaging component for smart phones. We specify the risk behaviour and functional behaviour of components using the same kinds of description techniques. We represent main security risk analysis concepts, such as assets, stakeholders, threats and risks, at the component level.
Submission 8: A Weakest-Adversary Security Metric for Network Configuration Security Analysis
Author(s): Joseph Pamula, Paul Ammann, Sushil Jajodia and Vipin Swarup
Abstract: A security metric measures or assesses the extent to which a system meets its security objectives. Since meaningful quantitative security metrics are largely unavailable, the security community primarily uses qualitative metrics for security. In this paper, we present a novel quantitative metric for the security of computer networks that is based on an analysis of attack graphs. The metric measures the security strength of a network in terms of the strength of the weakest adversary who can successfully penetrate the network. We present an algorithm that computes the minimal sets of initial attributes the weakest adversary must possess in order to successfully compromise a network, given a specific network configuration, a set of known exploits, a specific goal state, and an attacker class (represented by a set of all initial attacker attributes). We also demonstrate, by example, that diverse network configurations are not always beneficial for network security in terms of penetrability.
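As a rough illustration of the weakest-adversary idea (not the authors' algorithm, whose details are in the paper), the sketch below models exploits as precondition/postcondition rules over attacker attributes and brute-forces the minimal initial attribute sets that still reach a goal state; all attribute and exploit names are hypothetical.

```python
# Hypothetical sketch of the weakest-adversary idea (not the authors' algorithm):
# exploits are precondition/postcondition rules over attacker attributes, and we
# search for minimal sets of initial attributes that still reach the goal state.
from itertools import combinations

def reachable(initial_attrs, exploits):
    """Forward closure: repeatedly apply any exploit whose preconditions hold."""
    attrs = set(initial_attrs)
    changed = True
    while changed:
        changed = False
        for pre, post in exploits:
            if pre <= attrs and not post <= attrs:
                attrs |= post
                changed = True
    return attrs

def minimal_initial_sets(attacker_class, exploits, goal):
    """Enumerate subsets of the attacker class by size; keep the minimal ones
    whose closure contains the goal attribute (illustrative brute force only)."""
    minimal = []
    for size in range(len(attacker_class) + 1):
        for subset in combinations(sorted(attacker_class), size):
            s = set(subset)
            if any(m <= s for m in minimal):
                continue  # a smaller sufficient set already covers this one
            if goal in reachable(s, exploits):
                minimal.append(s)
    return minimal

# Toy network: attributes are strings, exploits are (preconditions, postconditions).
exploits = [
    ({"user_on_host_A"}, {"user_on_host_B"}),                    # trust relationship
    ({"user_on_host_B", "local_exploit"}, {"root_on_host_B"}),   # local escalation
    ({"remote_exploit"}, {"root_on_host_B"}),                    # direct remote exploit
]
attacker_class = {"user_on_host_A", "local_exploit", "remote_exploit"}
print(minimal_initial_sets(attacker_class, exploits, goal="root_on_host_B"))
# Two minimal sufficient sets: {remote_exploit} and {user_on_host_A, local_exploit}
```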
Submission 10: Towards a measuring framework for security properties of software
Author(s): Riccardo Scandariato, Bart De Win and Wouter Joosen
Abstract: Among the different quality attributes of software artifacts, security has lately gained a lot of interest. However, both qualitative and quantitative methodologies to assess security are still missing. This is possibly due to the lack of knowledge about which properties must be considered when it comes to evaluating security. The above-mentioned gap is even larger when one considers key software development phases such as architectural and detailed design. This position paper highlights the fundamental questions that need to be answered in order to bridge the gap and proposes an initial approach.
Submission 11: Contracting over the Quality aspect of Security in Software Product Markets
Author(s): Jari Råman
Abstract: Secure software development has gained momentum during the past couple of years and improvements have been made. Buyers have started to demand secure software, and contractual practices for taking security into consideration in the software purchasing context have been developed. Software houses are naturally very keen to provide what their potential customers desire with respect to the security and quality of their products, because companies that fail to deliver on customer requirements may soon find themselves without any customers. The capacity of private bargaining to incentivise secure software development is analysed and methods for improvement are suggested. This study argues, however, that without appropriate regulatory intervention the level of security will not improve to meet the needs of the network society as a whole. There are no appropriate incentives for secure development in the market for software products: the software houses do not have to bear the costs resulting from vulnerabilities in their software, and the buyers' capability to separate a secure product from an insecure one is limited.
Submission 17: Measuring the Attack Surfaces of Two FTP Daemons
Author(s): Pratyusa K. Manadhata, Jeannette M. Wing, Mark A. Flynn and Miles A. McQueen
Abstract: Software consumers often need to choose between different software systems that provide the same functionality. Today, security is a quality that many consumers, especially system administrators, care about and will use in choosing one software system over another. An attack surface metric is a security metric for comparing the relative security of similar software systems. The measure of a system's attack surface is an indicator of the system's security: given two systems, we compare their attack surface measurements to decide whether one is more secure than the other along each of three dimensions: methods, channels, and data. In this paper, we use the attack surface metric to measure the attack surfaces of two open source FTP daemons, ProFTPD 1.2.10 and Wu-FTPD 2.6.2. Our measurements show that ProFTPD is more secure along the method dimension, ProFTPD is as secure as Wu-FTPD along the channel dimension, and Wu-FTPD is more secure along the data dimension. We also demonstrate how software consumers can use the attack surface metric to choose between the two FTP daemons.
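The comparison step can be pictured with the minimal sketch below; the per-dimension numbers are invented placeholders, not the paper's measurements, which are derived from the daemons' code and runtime environments.

```python
# Illustrative sketch only: compare two systems' attack surface measurements
# along the three dimensions named in the abstract (methods, channels, data).
# The numbers below are made up for the example.
from dataclasses import dataclass

@dataclass
class AttackSurface:
    methods: float   # contribution of entry/exit points (e.g. callable methods)
    channels: float  # contribution of communication channels (e.g. sockets)
    data: float      # contribution of data items (e.g. files, configuration)

def compare(name_a, a, name_b, b):
    """Report, per dimension, which system has the smaller (more secure) measure."""
    for dim in ("methods", "channels", "data"):
        va, vb = getattr(a, dim), getattr(b, dim)
        if va < vb:
            verdict = f"{name_a} is more secure"
        elif va > vb:
            verdict = f"{name_b} is more secure"
        else:
            verdict = "both are equally secure"
        print(f"{dim:8s}: {verdict} ({va} vs {vb})")

# Hypothetical measurements, shaped to mirror the qualitative conclusion above.
compare("ProFTPD", AttackSurface(methods=300, channels=1, data=60),
        "Wu-FTPD", AttackSurface(methods=390, channels=1, data=50))
```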
Submission 18: Modelling the Relative Strength of Security Protocols
Author(s): Ho Chung and Clifford Neuman
Abstract: In this paper, we present a way of thinking about the relative strength of security protocols using SoS, a lattice-theoretic representation of security strength. In particular, we discuss how the model can be used. We present a compelling real-world example (the TLS protocol), show how it is modelled, and then explain how lattice-theoretic properties created using the model can be used to evaluate security protocols.
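Since the abstract does not detail SoS itself, the generic sketch below only illustrates the kind of lattice-based comparison described: configurations are ordered componentwise, so some pairs are incomparable yet still have a least upper bound; all component names and strength ranks are assumptions.

```python
# Generic illustration of comparing protocol configurations in a lattice of
# "strength vectors" (this is NOT the authors' SoS model, whose details are
# not given in the abstract). One configuration dominates another only if it
# is at least as strong in every component, so some pairs are incomparable.

def at_least_as_strong(a, b):
    """Componentwise comparison: a >= b in the product order."""
    return all(a[k] >= b[k] for k in b)

def join(a, b):
    """Least upper bound under the product order (componentwise maximum)."""
    return {k: max(a[k], b[k]) for k in a}

# Hypothetical strength ranks for cipher-suite components (higher = stronger).
tls_conf_1 = {"key_exchange": 2, "cipher": 3, "hash": 1}
tls_conf_2 = {"key_exchange": 3, "cipher": 2, "hash": 1}

print(at_least_as_strong(tls_conf_1, tls_conf_2))  # False
print(at_least_as_strong(tls_conf_2, tls_conf_1))  # False -> incomparable
print(join(tls_conf_1, tls_conf_2))                # least upper bound of both
```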
Submission 20: Measuring Denial of Service
Author(s): Jelena Mirkovic, Peter Reiher, Sonia Fahmy, Roshan Thomas, Alefiya Hussain, Stephen Schwab and Calvin Ko
Abstract: Denial-of-service (DoS) attacks significantly degrade the service quality experienced by legitimate users by introducing large delays, excessive losses, and service interruptions. The main goal of DoS defenses is to neutralize this effect and to quickly and fully restore the quality of various services to levels acceptable to users. To objectively evaluate a variety of proposed defenses, we must be able to precisely measure the damage created by the attack, i.e., the denial of service itself, in controlled testbed experiments. Current evaluation methodologies summarize DoS impact superficially and partially by measuring a single traffic parameter, such as duration, loss, or throughput, and showing the variation of this parameter during the attack from its baseline. These measures fail to capture the overall service quality experienced by end users, the quality-of-service requirements of different applications, and how they map into specific thresholds for various traffic parameters.
We propose a series of DoS impact metrics that are derived from traffic traces gathered at the source and destination networks. We segment a trace into higher-level user tasks, called transactions, that require a certain service quality to satisfy users' expectations. Each transaction is classified into one of several proposed application categories, and we define quality-of-service (QoS) requirements for each category via thresholds imposed on several traffic parameters. We measure DoS impact as the percentage of transactions that have not met their QoS requirements and aggregate this measure into several metrics that expose the level of service denial and its variation over time. We evaluate the proposed metrics on a series of experiments with a wide range of background traffic, and our results show that our metrics capture DoS impact more precisely than the partial measures used in the past.
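A minimal sketch of this measurement idea, assuming invented category names and QoS thresholds (the paper defines its own categories, parameters, and thresholds):

```python
# Simplified sketch of the measurement idea described above (field names and
# thresholds are invented for illustration): each transaction is classified
# into an application category with QoS thresholds, and the DoS impact is the
# percentage of transactions that fail their category's requirements.
from dataclasses import dataclass

# Hypothetical QoS thresholds per application category.
QOS = {
    "web":   {"max_delay_s": 4.0, "max_loss": 0.05},
    "media": {"max_delay_s": 0.15, "max_loss": 0.01},
}

@dataclass
class Transaction:
    category: str
    delay_s: float   # observed response time
    loss: float      # observed packet loss fraction

def failed(t: Transaction) -> bool:
    """A transaction fails if any observed parameter exceeds its threshold."""
    req = QOS[t.category]
    return t.delay_s > req["max_delay_s"] or t.loss > req["max_loss"]

def dos_impact(transactions) -> float:
    """Percentage of transactions that did not meet their QoS requirements."""
    if not transactions:
        return 0.0
    return 100.0 * sum(failed(t) for t in transactions) / len(transactions)

trace = [Transaction("web", 1.2, 0.0),
         Transaction("web", 9.5, 0.10),     # fails: delay and loss too high
         Transaction("media", 0.08, 0.02)]  # fails: loss above threshold
print(f"DoS impact: {dos_impact(trace):.1f}% of transactions failed")  # 66.7%
```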
Submission 22: Vulnerability Analysis For Evaluating Quality of Protection of Security Policies
Author(s): Muhammad Abedin, Syeda Nessa, Ehab Al-Shaer and Latifur Khan
Abstract: Evaluation of security policies, specifically access control policies such as firewall and IPSec rules, plays an important part in securing the network. It is important not only to ensure that policies are correct and consistent, but also to evaluate policies comprehensively in order to assess the quality of protection (QoP) with respect to constantly increasing threats and service vulnerabilities, and the dynamically changing traffic environment in protected networks. The quality of a policy depends on a number of factors, which makes it desirable to have one unified score that combines the quantification of these factors to help judge the quality of the policy and allow one to compare policies. In this context, we present our method of calculating the Policy Security Score, a metric based on a number of factors such as the vulnerabilities present in the system, the vulnerability history of the services and their exposure to the network, and traffic patterns. We measure the existing vulnerability by combining the severity scores of the vulnerabilities present in the system. We mine the National Vulnerability Database (NVD), provided by NIST, to find the vulnerability history of the services running on the system, and from the frequency and severity of the past vulnerabilities, we measure the historical vulnerability of the policy using a decay factor. In both cases, we take into account the exposure of the service to the network and the traffic volume handled by the service. Finally, we combine the two scores into one Policy Security Score. We also provide an example of the application of the method to illustrate how it can be applied to practical situations, and conclude by showing how this can be the cornerstone of a full-fledged automated network security evaluation tool.
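A simplified sketch of how such factors might be combined is shown below; the normalisation, decay function, and weights are assumptions for illustration, not the formulas from the paper.

```python
# Illustrative sketch of combining the factors mentioned above into a single
# score. The weighting scheme, decay function, and value ranges here are
# invented; the paper defines its own formulas over NVD severity data.
import math

def existing_vulnerability(severities, exposure, traffic_share):
    """Combine severity scores (e.g. CVSS, 0-10) of currently present
    vulnerabilities, scaled by how exposed and how busy the service is."""
    if not severities:
        return 0.0
    base = sum(severities) / (10.0 * len(severities))   # normalise to [0, 1]
    return base * exposure * traffic_share

def historical_vulnerability(history, exposure, traffic_share, decay=0.5):
    """Decayed sum over past vulnerabilities, given as (age_years, severity)
    pairs; older entries contribute exponentially less via the decay factor."""
    if not history:
        return 0.0
    weighted = sum(sev / 10.0 * math.exp(-decay * age) for age, sev in history)
    return (weighted / len(history)) * exposure * traffic_share

def policy_security_score(severities, history, exposure, traffic_share):
    """Combine the two components; a simple weighted sum as a placeholder."""
    ev = existing_vulnerability(severities, exposure, traffic_share)
    hv = historical_vulnerability(history, exposure, traffic_share)
    return 0.7 * ev + 0.3 * hv   # arbitrary weights for illustration

# A hypothetical service: two open vulnerabilities, three past ones, fully
# exposed to the network, carrying 40% of the observed traffic.
score = policy_security_score(severities=[7.5, 5.0],
                              history=[(0.5, 9.0), (1.0, 4.0), (2.5, 6.5)],
                              exposure=1.0, traffic_share=0.4)
# In this sketch a higher value means more residual vulnerability.
print(f"Policy Security Score (sketch): {score:.3f}")
```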