Panda Public Auditing for Shared Data with Efficient User Revocation in the Cloud
- ABSTRACT:
With data storage and sharing services in the cloud, users can easily modify and share data as a group. To ensure that shared data integrity can be verified publicly, users in the group need to compute signatures on all the blocks in shared data. Different blocks in shared data are generally signed by different users due to data modifications performed by different users. For security reasons, once a user is revoked from the group, the blocks which were previously signed by this revoked user must be re-signed by an existing user. The straightforward method, which allows an existing user to download the corresponding part of shared data and re-sign it during user revocation, is inefficient due to the large size of shared data in the cloud. In this paper, we propose a novel public auditing mechanism for the integrity of shared data with efficient user revocation in mind. By utilizing the idea of proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks by themselves. In addition, a public verifier is always able to audit the integrity of shared data without retrieving the entire data from the cloud, even if some part of shared data has been re-signed by the cloud. Moreover, our mechanism is able to support batch auditing by verifying multiple auditing tasks simultaneously. Experimental results show that our mechanism can significantly improve the efficiency of user revocation.
- INTRODUCTION
With data storage and sharing services (such as Dropbox and Google Drive) provided by the cloud, people can easily work together as a group by sharing data with each other. More specifically, once a user creates shared data in the cloud, every user in the group is able to not only access and modify shared data, but also share the latest version of the shared data with the rest of the group. Although cloud providers promise a more secure and reliable environment to the users, the integrity of data in the cloud may still be compromised, due to the existence of hardware/software failures and human errors.
To protect the integrity of data in the cloud, a number of mechanisms have been proposed. In these mechanisms, a signature is attached to each block in the data, and the integrity of the data relies on the correctness of all the signatures. One of the most significant and common features of these mechanisms is to allow a public verifier to efficiently check data integrity in the cloud without downloading the entire data, referred to as public auditing (or denoted as Provable Data Possession). This public verifier could be a client who would like to utilize cloud data for particular purposes (e.g., search, computation, data mining, etc.) or a third-party auditor (TPA) who is able to provide verification services on data integrity to users. Most of the previous works focus on auditing the integrity of personal data. Different from these works, several recent works focus on how to preserve identity privacy from public verifiers when auditing the integrity of shared data. Unfortunately, none of the above mechanisms considers the efficiency of user revocation when auditing the correctness of shared data in the cloud.
With shared data, once a user modifies a block, she also needs to compute a new signature for the modified block. Due to the modifications from different users, different blocks are signed by different users. For security reasons, when a user leaves the group or misbehaves, this user must be revoked from the group. As a result, this revoked user should no longer be able to access and modify shared data, and the signatures generated by this revoked user are no longer valid to the group. Therefore, although the content of shared data is not changed during user revocation, the blocks, which were previously signed by the revoked user, still need to be re-signed by an existing user in the group. As a result, the integrity of the entire data can still be verified with the public keys of existing users only.
Since shared data is outsourced to the cloud and users no longer store it on local devices, a straightforward method to re-compute these signatures during user revocation is to ask an existing user to first download the blocks previously signed by the revoked user, verify the correctness of these blocks, then re-sign them, and finally upload the new signatures to the cloud. However, this straightforward method may cost the existing user a huge amount of communication and computation resources by downloading and verifying blocks, and by re-computing and uploading signatures, especially when the number of re-signed blocks is quite large or the membership of the group is frequently changing. To make this matter even worse, existing users may access their data sharing services provided by the cloud with resource-limited devices, such as mobile phones, which further prevents existing users from maintaining the correctness of shared data efficiently during user revocation.
Clearly, if the cloud could possess each user's private key, it could easily finish the re-signing task for existing users without asking them to download and re-sign blocks. However, since the cloud is not in the same trusted domain as each user in the group, outsourcing every user's private key to the cloud would introduce significant security issues. Another important problem we need to consider is that the re-computation of any signature during user revocation should not affect the most attractive property of public auditing: auditing data integrity publicly without retrieving the entire data. Therefore, how to efficiently reduce the significant burden on existing users introduced by user revocation, and still allow a public verifier to check the integrity of shared data without downloading the entire data from the cloud, is a challenging task.
In this paper, we propose Panda, a novel public auditing mechanism for the integrity of shared data with efficient user revocation in the cloud. In our mechanism, by utilizing the idea of proxy re-signatures, once a user in the group is revoked, the cloud is able to re-sign the blocks, which were signed by the revoked user, with a re-signing key. As a result, the efficiency of user revocation can be significantly improved, and computation and communication resources of existing users can be easily saved. Meanwhile, the cloud, which is not in the same trusted domain as each user, is only able to convert a signature of the revoked user into a signature of an existing user on the same block, but it cannot sign arbitrary blocks on behalf of either the revoked user or an existing user. By designing a new proxy re-signature scheme with nice properties that traditional proxy re-signatures do not have, our mechanism is always able to check the integrity of shared data without retrieving the entire data from the cloud.
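To make this workflow concrete, the following minimal C# sketch models the revocation flow described above. The Signature, ReSignKey, and Translate names are hypothetical placeholders invented for illustration; the actual scheme performs the translation with pairing-based proxy re-signature cryptography, which this sketch deliberately does not implement.

```csharp
// Illustrative sketch only: models the Panda revocation workflow, not the
// underlying pairing-based cryptography. Signature, ReSignKey and Translate()
// are hypothetical stand-ins for the scheme's real primitives.
using System;
using System.Collections.Generic;

record Signature(string SignerId, int BlockIndex);
record ReSignKey(string FromUser, string ToUser);

class CloudStore
{
    private readonly Dictionary<int, Signature> signatures = new();

    public void Attach(int blockIndex, Signature sig) => signatures[blockIndex] = sig;

    // On revocation, the cloud converts every signature of the revoked user
    // into a signature of an existing user, using only the re-signing key;
    // it never learns any private key and cannot sign arbitrary blocks.
    public void RevokeAndResign(string revokedUser, ReSignKey key)
    {
        foreach (var index in new List<int>(signatures.Keys))
        {
            var sig = signatures[index];
            if (sig.SignerId == revokedUser && key.FromUser == revokedUser)
                signatures[index] = Translate(sig, key); // proxy re-signature
        }
    }

    // Placeholder for the pairing-based translation; the real operation yields
    // a signature verifiable under the existing user's public key.
    private static Signature Translate(Signature sig, ReSignKey key) =>
        new(key.ToUser, sig.BlockIndex);

    public Signature Get(int blockIndex) => signatures[blockIndex];
}

class RevocationDemo
{
    static void Main()
    {
        var cloud = new CloudStore();
        cloud.Attach(0, new Signature("alice", 0));
        cloud.Attach(1, new Signature("bob", 1));

        // Alice is revoked: the cloud re-signs her block on Bob's behalf,
        // so Bob never has to download and re-sign it himself.
        cloud.RevokeAndResign("alice", new ReSignKey("alice", "bob"));
        Console.WriteLine(cloud.Get(0)); // Signature { SignerId = bob, BlockIndex = 0 }
    }
}
```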
- LITERATURE SURVEY
PUBLIC AUDITING FOR SHARED DATA WITH EFFICIENT USER REVOCATION IN THE CLOUD
PUBLICATION: B. Wang, B. Li, and H. Li, in the Proceedings of IEEE INFOCOM 2013, 2013, pp. 2904–2912.
With data storage and sharing services in the cloud, users can easily modify and share data as a group. To ensure shared data integrity can be verified publicly, users in the group need to compute signatures on all the blocks in shared data. Different blocks in shared data are generally signed by different users due to data modifications performed by different users. For security reasons, once a user is revoked from the group, the blocks which were previously signed by this revoked user must be re-signed by an existing user. The straightforward method, which allows an existing user to download the corresponding part of shared data and re-sign it during user revocation, is inefficient due to the large size of shared data in the cloud. In this paper, we propose a novel public auditing mechanism for the integrity of shared data with efficient user revocation in mind. By utilizing the idea of proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks by themselves. In addition, a public verifier is always able to audit the integrity of shared data without retrieving the entire data from the cloud, even if some part of shared data has been re-signed by the cloud. Moreover, our mechanism is able to support batch auditing by verifying multiple auditing tasks simultaneously. Experimental results show that our mechanism can significantly improve the efficiency of user revocation.
A VIEW OF CLOUD COMPUTING, COMMUNICATIONS OF THE ACM
PUBLICATION: M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, vol. 53, no. 4, pp. 50–58, April 2010.
Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1,000 servers for one hour costs no more than using one server for 1,000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT.
PROVABLE DATA POSSESSION AT UNTRUSTED STORES
PUBLICATION: G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, in the Proceedings of ACM CCS 2007, 2007, pp. 598–610.
We introduce a model for provable data possession (PDP) that allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof. The challenge/response protocol transmits a small, constant amount of data, which minimizes network communication. Thus, the PDP model for remote data checking supports large data sets in widely-distributed storage systems. We present two provably-secure PDP schemes that are more efficient than previous solutions, even when compared with schemes that achieve weaker guarantees. In particular, the overhead at the server is low (or even constant), as opposed to linear in the size of the data. Experiments using our implementation verify the practicality of PDP and reveal that the performance of PDP is bounded by disk I/O and not by cryptographic computation.
COMPACT PROOFS OF RETRIEVABILITY
PUBLICATION: H. Shacham and B. Waters, in the Proceedings of ASIACRYPT 2008, Springer-Verlag, 2008, pp. 90–107.
In a proof-of-retrievability system, a data storage center must prove to a verifier that he is actually storing all of a client's data. The central challenge is to build systems that are both efficient and provably secure; that is, it should be possible to extract the client's data from any prover that passes a verification check. In this paper, we give the first proof-of-retrievability schemes with full proofs of security against arbitrary adversaries in the strongest model, that of Juels and Kaliski. Our first scheme, built from BLS signatures and secure in the random oracle model, features a proof-of-retrievability protocol in which the client's query and server's response are both extremely short. This scheme allows public verifiability: anyone can act as a verifier, not just the file owner. Our second scheme, which builds on pseudorandom functions (PRFs) and is secure in the standard model, allows only private verification. It features a proof-of-retrievability protocol with an even shorter server's response than our first scheme, but the client's query is long. Both schemes rely on homomorphic properties to aggregate a proof into one small authenticator value.
CHAPTER 2
2.0 SYSTEM ANALYSIS
2.1 EXISTING SYSTEM:
In the existing system, files uploaded to the cloud are not signed by the user on each upload, so the integrity of shared data cannot be verified. Furthermore, since the cloud is not in the same trusted domain as each user in the group, outsourcing every user's private key to the cloud would introduce significant security issues.
2.1.1 DISADVANTAGES:
- Shared data integrity cannot be verified publicly, because blocks are not signed on each upload.
- During user revocation, an existing user must download, verify, re-sign, and re-upload the revoked user's blocks, which wastes communication and computation resources.
- Outsourcing every user's private key to the cloud would introduce significant security issues.
2.2 PROPOSED SYSTEM:
In our proposed system, the cloud is semi-trusted: it may lie to verifiers about the incorrectness of shared data in order to save the reputation of its data services and avoid losing money on those services. In addition, we assume there is no collusion between the cloud and any user during the design of our mechanism. Generally, the incorrectness of shared data under the above semi-trusted model can be introduced by hardware/software failures or human errors in the cloud. Considering these factors, users do not fully trust the cloud with the integrity of shared data.
2.2.1 ADVANTAGES:
1. Blocking user account
2. Security question
3. Login with secret key each time
2.3 HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1 HARDWARE REQUIREMENT:
- Processor – Pentium IV
- Speed – 1.1 GHz
- RAM – 256 MB (min)
- Hard Disk – 20 GB
- Floppy Drive – 1.44 MB
- Keyboard – Standard Windows keyboard
- Mouse – Two- or three-button mouse
- Monitor – SVGA
2.3.2 SOFTWARE REQUIREMENTS:
- Operating System : Windows XP
- Front End : Microsoft Visual Studio .NET 2008
- Back End : MS-SQL Server 2005
- Document : MS-Office 2007
CHAPTER 3
3.0 SYSTEM DESIGN:
Data Flow Diagram / Use Case Diagram / Flow Diagram:
- The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
- The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
- DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
- A DFD may be used to represent a system at any level of abstraction, and may be partitioned into levels that represent increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION OF DATA:
External sources or destinations, which may be people or organizations or other entities
DATA SOURCE:
Here the data referenced by a process is stored and retrieved.
PROCESS:
People, procedures or devices that produce data. The physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.
MODELING RULES:
There are several common modeling rules when creating DFDs:
- All processes must have at least one data flow in and one data flow out.
- All processes should modify the incoming data, producing new forms of outgoing data.
- Each data store must be involved with at least one data flow.
- Each external entity must be involved with at least one data flow.
- A data flow must be attached to at least one process.
3.1 BLOCK DIAGRAM
3.2 DATAFLOW DIAGRAM
UML DIAGRAMS:
3.3 USE CASE DIAGRAM:
3.4 CLASS DIAGRAM:
3.5 SEQUENCE DIAGRAM:
3.6 ACTIVITY DIAGRAM:
CHAPTER 4
4.0 IMPLEMENTATION:
4.1 ALGORITHM
4.2 MODULES:
1. USER MODULE:
Registration
File Upload
Download
Reupload
Unblock module
2. AUDITOR MODULE:
File Verification module
View File
3. ADMIN MODULE:
View Files
Block user
4.3 MODULE DESCRIPTION:
- USER MODULE:
Registration:
In this module, each user registers his details before using files. Only a registered user is able to log in to the cloud server.
File Upload:
In this module, the user uploads blocks of files to the cloud, encrypted using his secret key. This ensures that the files are protected from unauthorized users.
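A minimal sketch of this encryption step is given below, assuming AES-CBC with a user-held key; the report does not fix a cipher, so the algorithm choice and the BlockEncryptor name are illustrative assumptions.

```csharp
// Sketch of the File Upload step: encrypt a block with the user's secret key
// before sending it to the cloud. AES-CBC with a random IV is assumed here.
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class BlockEncryptor
{
    // Returns the IV followed by the ciphertext so the owner can decrypt later.
    public static byte[] Encrypt(byte[] block, byte[] secretKey)
    {
        using var aes = Aes.Create();
        aes.Key = secretKey;                 // 16, 24 or 32 bytes
        aes.GenerateIV();

        using var ms = new MemoryStream();
        ms.Write(aes.IV, 0, aes.IV.Length);
        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
            cs.Write(block, 0, block.Length);
        return ms.ToArray();
    }
}

class UploadDemo
{
    static void Main()
    {
        var key = new byte[32];
        RandomNumberGenerator.Fill(key);     // demo key; a real system derives it per user
        var payload = BlockEncryptor.Encrypt(Encoding.UTF8.GetBytes("block 0"), key);
        Console.WriteLine($"Uploading {payload.Length} encrypted bytes to the cloud...");
    }
}
```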
Download:
This module allows the user to download a file and decrypt it using his secret key. For a blocked user's data, the downloading user verifies the blocks and re-uploads them to the cloud server with encryption, which ensures that the files remain protected from unauthorized users.
Reupload:
This module allows the user to re-upload the downloaded files of a blocked user to the cloud server after re-signing them; that is, the files are uploaded with a new signature (a new secret key) and encryption, protecting the data from unauthorized users.
Unblock Module:
This module allows a user to unblock his account by answering the security question that he provided at registration time. Only when the answer matches the registered answer is the account unlocked.
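A minimal sketch of the answer check follows, assuming the registration-time answer is stored as a salted SHA-256 hash; the storage format is an assumption for illustration, not taken from the report.

```csharp
// Sketch of the Unblock step: the account is unlocked only when the hash of
// the supplied answer matches the salted hash stored at registration time.
using System;
using System.Security.Cryptography;
using System.Text;

static class SecurityQuestion
{
    public static byte[] HashAnswer(string answer, byte[] salt)
    {
        using var sha = SHA256.Create();
        var data = Encoding.UTF8.GetBytes(answer.Trim().ToLowerInvariant());
        var salted = new byte[salt.Length + data.Length];
        Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
        Buffer.BlockCopy(data, 0, salted, salt.Length, data.Length);
        return sha.ComputeHash(salted);
    }

    public static bool Matches(string suppliedAnswer, byte[] salt, byte[] storedHash) =>
        CryptographicOperations.FixedTimeEquals(HashAnswer(suppliedAnswer, salt), storedHash);
}

class UnblockDemo
{
    static void Main()
    {
        var salt = new byte[16];
        RandomNumberGenerator.Fill(salt);
        var stored = SecurityQuestion.HashAnswer("blue", salt);             // saved at registration
        Console.WriteLine(SecurityQuestion.Matches("Blue ", salt, stored)); // True
    }
}
```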
- AUDITOR MODULE:
File Verification module:
The public verifier is able to correctly check the integrity of shared data. The public verifier can audit the integrity of shared data without retrieving the entire data from the cloud, even if some blocks in shared data have been re-signed by the cloud.
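The sketch below illustrates the sampling idea behind such verification: the verifier challenges a random subset of blocks instead of downloading all of them. A per-block HMAC stands in for the scheme's homomorphic authenticators; in the real mechanism the verifier does not even receive the blocks themselves, so this is only an illustration.

```csharp
// Illustrative sampled audit: verify a random subset of block tags rather
// than the entire data. HMAC tags are an assumed stand-in for the scheme's
// homomorphic authenticators.
using System;
using System.Security.Cryptography;

static class Auditor
{
    public static bool AuditSample(Func<int, (byte[] Block, byte[] Tag)> cloudFetch,
                                   int totalBlocks, int sampleSize, byte[] auditKey)
    {
        var rng = new Random();
        using var hmac = new HMACSHA256(auditKey);
        for (int i = 0; i < sampleSize; i++)
        {
            int index = rng.Next(totalBlocks);    // random challenge
            var (block, tag) = cloudFetch(index); // cloud's response
            var expected = hmac.ComputeHash(block);
            if (!CryptographicOperations.FixedTimeEquals(expected, tag))
                return false;                     // integrity violated
        }
        return true;
    }
}

class AuditDemo
{
    static void Main()
    {
        var key = new byte[32];
        var blocks = new byte[100][];
        var tags = new byte[100][];
        using (var hmac = new HMACSHA256(key))
            for (int i = 0; i < blocks.Length; i++)
            {
                blocks[i] = new byte[] { (byte)i };
                tags[i] = hmac.ComputeHash(blocks[i]); // honest cloud stores valid tags
            }
        Console.WriteLine(Auditor.AuditSample(i => (blocks[i], tags[i]),
                                              blocks.Length, 10, key)); // True
    }
}
```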
Files View:
In this module, the public auditor views all details of uploads, downloads, blocked users, and re-uploads.
- ADMIN MODULE:
View Files:
In this module, the admin views all details of uploads, downloads, blocked users, and re-uploads.
Block User:
In this module, the admin blocks a misbehaving user's account to protect the integrity of shared data.
CHAPTER 5
5.0 SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out, to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are
- ECONOMICAL FEASIBILITY
- TECHNICAL FEASIBILITY
- SOCIAL FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.
5.2 SYSTEM TESTING:
Testing is the process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes the logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later. This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best program is worthless if it does not produce the correct outputs.
5.2.1 UNIT TESTING:
A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logic. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.
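As a small illustration, a unit test compares a unit's actual output against its expected output; the Add method below is a hypothetical stand-in for any program element under test.

```csharp
// A minimal unit-test sketch: one unit, one expected result, one comparison.
using System;

static class MathUnit
{
    public static int Add(int a, int b) => a + b;
}

class MathUnitTest
{
    static void Main()
    {
        int expected = 5;
        int actual = MathUnit.Add(2, 3);
        Console.WriteLine(actual == expected
            ? "PASS: Add(2, 3) == 5"
            : $"FAIL: expected {expected}, got {actual}");
    }
}
```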
UNIT TESTING:
| Description | Expected result |
| --- | --- |
| Test for application window properties. | All the properties of the windows are to be properly aligned and displayed. |
| Test for mouse operations. | All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions. |
5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.
FUNCTIONAL TESTING:
| Description | Expected result |
| --- | --- |
| Test for all modules. | All peers should communicate in the group. |
| Test for various peers in a distributed network framework as it displays all users available in the group. | The result after execution should give the accurate result. |
5.2.3 NON-FUNCTIONAL TESTING:
Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:
- Load testing
- Performance testing
- Usability testing
- Reliability testing
- Security testing
5.2.4 LOAD TESTING:
An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under test to real usage by having actual users connected to it, who generate test input data for the system test.
LOAD TESTING:
| Description | Expected result |
| --- | --- |
| It is necessary to ascertain that the application behaves correctly under loads when a 'Server busy' response is received. | Should designate another active node as the server. |
5.2.5 PERFORMANCE TESTING:
Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.
PERFORMANCE TESTING:
| Description | Expected result |
| --- | --- |
| This is required to assure that the application performs adequately: it must handle many peers, deliver its results in the expected time, and use an acceptable level of resources; this is an aspect of operational management. | Should handle large input values, and produce accurate results in the expected time. |
5.2.6 RELIABILITY TESTING:
Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and this is what reliability testing ensures. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. This forms a part of the software quality control work.
RELIABILITY TESTING:
| Description | Expected result |
| --- | --- |
| This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application. | In case of failure of the server, an alternate server should take over the job. |
5.2.7 SECURITY TESTING:
Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.
SECURITY TESTING:
| Description | Expected result |
| --- | --- |
| Checking that the user identification is authenticated. | In case of failure, it should not be connected in the framework. |
| Check whether group keys in a tree are shared by all peers. | The peers should know the group key in the same group. |
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test-case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.
WHITE BOX TESTING:
| Description | Expected result |
| --- | --- |
| Exercise all logical decisions on their true and false sides. | All the logical decisions must be valid. |
| Execute all loops at their boundaries and within their operational bounds. | All the loops must be finite. |
| Exercise internal data structures to ensure their validity. | All the data structures must be valid. |
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques; rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors with a focus on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.
BLACK BOX TESTING:
| Description | Expected result |
| --- | --- |
| To check for incorrect or missing functions. | All the functions must be valid. |
| To check for interface errors. | The entire interface must function normally. |
| To check for errors in data structures or external database access. | Database updates and retrievals must be performed correctly. |
| To check for initialization and termination errors. | All the functions and data structures must be initialized properly and terminated normally. |
All of the above system testing strategies are carried out during development, as the documentation and institutionalization of the proposed goals and related policies are essential.
CHAPTER 6
6.0 SOFTWARE SPECIFICATION:
6.1 FEATURES OF .NET:
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There's no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic and JScript.
The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.
“.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio .NET and Windows .NET Server, for instance) and services (like Passport, .NET My Services, and so on).
6.2 THE .NET FRAMEWORK
The .NET Framework has two main parts:
1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.
The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. The most important features are
- Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
- Memory management, notably including garbage collection.
- Checking and enforcing security restrictions on the running code.
- Loading and executing programs, with version control and other such features.
- The following features of the .NET framework are also worth description:
Managed Code
The code that targets .NET, and which contains certain extra information – “metadata” – to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data
With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications – data that doesn't get garbage collected but instead is looked after by unmanaged code.
Common Type System
The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.
Common Language Specification
The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.
6.3 THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
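A short example of the value-type/object relationship described above, showing a value type converted to an object type (boxing) and back:

```csharp
// Boxing a value type into an object and unboxing it again.
using System;

class BoxingDemo
{
    static void Main()
    {
        int i = 42;          // value type
        object boxed = i;    // boxing: a heap object now wraps the value
        int j = (int)boxed;  // unboxing back to a value type
        Console.WriteLine($"{boxed.GetType()} -> {j}"); // System.Int32 -> 42
    }
}
```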
The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.
6.4 LANGUAGES SUPPORTED BY .NET
The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.
Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic .NET also supports structured exception handling, custom attributes, and multithreading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.
C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.
ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.
Other languages for which .NET compilers are available include
- FORTRAN
- COBOL
- Eiffel
| ASP.NET: XML Web Services | Windows Forms |
| --- | --- |
| Base Class Libraries | |
| Common Language Runtime | |
| Operating System | |
Fig. 1: The .NET Framework
C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.
C#.NET is a CLS-compliant language. Any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET, a finalizer is available: it is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the finalizer can be called only from the class it belongs to or from derived classes.
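A minimal C# example of a constructor and a finalizer (the C# counterpart of the finalize routine described above); the GC calls in Main merely encourage finalization for demonstration purposes.

```csharp
// A constructor initializes the object; the finalizer runs when the garbage
// collector destroys it.
using System;

class Connection
{
    public Connection()            // constructor
    {
        Console.WriteLine("Connection opened.");
    }

    ~Connection()                  // finalizer (destructor)
    {
        Console.WriteLine("Connection closed by finalizer.");
    }
}

class FinalizerDemo
{
    static void Main()
    {
        new Connection();          // becomes unreachable immediately
        GC.Collect();              // demo only: request a collection
        GC.WaitForPendingFinalizers();
    }
}
```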
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.
OVERLOADING
Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
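For example, three procedures can share the name Area while taking different argument lists:

```csharp
// Overloading: the compiler picks the Area overload that matches the arguments.
using System;

static class Geometry
{
    public static double Area(double radius) => Math.PI * radius * radius; // circle
    public static double Area(double w, double h) => w * h;                // rectangle
    public static int Area(int side) => side * side;                       // square

    static void Main()
    {
        Console.WriteLine(Area(2.0));      // circle
        Console.WriteLine(Area(3.0, 4.0)); // rectangle
        Console.WriteLine(Area(5));        // square
    }
}
```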
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
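A minimal example in which a worker thread handles a long-running task while the main thread remains free to respond to the user:

```csharp
// One background thread for the slow work; the main thread stays responsive.
using System;
using System.Threading;

class ThreadingDemo
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            Thread.Sleep(500); // simulate a long-running task
            Console.WriteLine("Background task finished.");
        });
        worker.Start();

        Console.WriteLine("Main thread is free to respond to the user.");
        worker.Join();         // wait for the worker before exiting
    }
}
```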
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use Try…Catch…Finally statements to create exception handlers. Using Try…Catch…Finally statements, we can create robust and effective exception handlers to improve the performance of our application.
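A short example of a Try…Catch…Finally handler that detects an error at runtime and performs cleanup regardless of the outcome:

```csharp
// Structured exception handling: the error is caught at runtime and the
// finally block always runs.
using System;

class ExceptionDemo
{
    static void Main()
    {
        try
        {
            int[] values = { 1, 2, 3 };
            Console.WriteLine(values[10]);  // raises IndexOutOfRangeException
        }
        catch (IndexOutOfRangeException ex)
        {
            Console.WriteLine($"Handled: {ex.Message}");
        }
        finally
        {
            Console.WriteLine("Cleanup always runs here.");
        }
    }
}
```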
6.5 THE .NET FRAMEWORK
The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.
OBJECTIVES OF .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment, whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment and guarantees safe execution of code.
3. To eliminate performance problems.
There are different types of applications, such as Windows-based applications and Web-based applications.
6.6 FEATURES OF SQL-SERVER
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services
An SQL Server database consists of the following types of objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE:
A table is a collection of data about a specific topic.
VIEWS OF TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view. We can specify what kind of data will be held.
Datasheet View
To add, edit, or analyze the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that has to be asked of the data. Access gathers data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
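As an illustration of running such a query against the project's SQL Server back end from C#, the sketch below uses the classic System.Data.SqlClient provider; the connection string, database, and table names are assumptions for illustration, not values taken from this project.

```csharp
// Query sketch: read rows (the "dynaset") from SQL Server via ADO.NET.
using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        const string connectionString =
            @"Server=.\SQLEXPRESS;Database=CloudAudit;Integrated Security=true"; // assumed

        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var command = new SqlCommand(
            "SELECT FileName, Owner FROM UploadedFiles WHERE IsBlocked = 0", connection);
        using var reader = command.ExecuteReader();
        while (reader.Read()) // iterate the result set row by row
            Console.WriteLine($"{reader["FileName"]} owned by {reader["Owner"]}");
    }
}
```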
CHAPTER 7
APPENDIX
7.1 SAMPLE SOURCE CODE
7.2 SAMPLE OUTPUT
CHAPTER 8
8.1 CONCLUSION
In this paper, we proposed a new public auditing mechanism for shared data with efficient user revocation in the cloud. When a user in the group is revoked, we allow the semi-trusted cloud to re-sign blocks that were signed by the revoked user with proxy re-signatures. Experimental results show that the cloud can improve the efficiency of user revocation, and existing users in the group can save a significant amount of computation and communication resources during user revocation.
CHAPTER 9
Neighbor Similarity Trust against Sybil Attack in P2P E-Commerce
In this paper, we present a distributed, structured approach to the Sybil attack, derived from the fact that our approach is based on the neighbor similarity trust relationship among neighbor peers. Given a P2P e-commerce trust relationship based on interest, the transactions among peers are flexible, as each peer can decide to trade with another peer at any time. A peer does not have to consult others in a group unless a recommendation is needed. This approach shows the advantage of exploiting the similarity trust relationship among peers, in which the peers are able to monitor each other.
Our contribution in this paper is threefold:
1) We propose SybilTrust, which can identify and protect honest peers from Sybil attack. Sybil peers can have their trust revoked and be dismissed from a group.
2) Based on the group infrastructure in P2P e-commerce, each neighbor is connected to the peers by the success of the transactions it makes or by its trust evaluation level. A peer can only be recognized as a neighbor if its trust level is sustained above a threshold value.
3) SybilTrust enables neighbor peers to carry recommendation identifiers among the peers in a group. This ensures that the group detection algorithms that identify Sybil attack peers are efficient and scalable in large P2P e-commerce networks.
- GOAL OF THE PROJECT:
The goal of trust systems is to ensure that honest peers are accurately identified as trustworthy and Sybil peers as untrustworthy. To unify terminology, we call all identities created by malicious users Sybil peers. In a P2P e-commerce application scenario, most of the trust considerations depend on the historical factors of the peers. The influence of Sybil identities can be reduced based on historical behavior and recommendations from other peers, which diminishes the influence of Sybil identities and hence reduces Sybil attacks. For example, a peer which has been giving dishonest recommendations, such as a positive recommendation for a peer that is discovered to be a Sybil or malicious peer, will have its trust level reduced; if it reaches a certain threshold level, the peer can be expelled from the group. Each peer has an identity, which is either honest or Sybil.
A Sybil identity can be an identity owned by a malicious user, a bribed/stolen identity, or a fake identity obtained through a Sybil attack. These Sybil attack peers are employed to target honest peers and hence subvert the system. In a Sybil attack, a single malicious user creates a large number of peer identities called sybils. These sybils are used to launch security attacks, both at the application level and at the overlay level. At the application level, sybils can target other honest peers while transacting with them, whereas at the overlay level, sybils can disrupt the services offered by the overlay layer, like routing, data storage, lookup, etc. In trust systems, colluding Sybil peers may artificially increase a (malicious) peer's rating (e.g., on eBay).
1.2 INTRODUCTION:
P2P networks range from communication systems like email and instant messaging to collaborative content rating, recommendation, and delivery systems such as YouTube, Gnutella, Facebook, Digg, and BitTorrent. They allow any user to join the system easily at the expense of trust, with very little validation control. P2P overlay networks are known for their many desired attributes like openness, anonymity, decentralized nature, self-organization, scalability, and fault tolerance. Each peer plays the dual role of client as well as server, meaning that each has its own control. All the resources utilized in the P2P infrastructure are contributed by the peers themselves, unlike traditional methods where a central authority control is used. Peers can collude and do all sorts of malicious activities in open-access distributed systems. These malicious behaviors lead to service quality degradation and monetary loss among business partners. Peers are vulnerable to exploitation, due to the open and near-zero cost of creating new identities. The peer identities are then utilized to influence the behavior of the system.
However, if a single defective entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining the redundancy. The number of identities that an attacker can generate depends on the attacker's resources, such as bandwidth, memory, and computational power.
Systems like Credence rely on a trusted central authority to prevent maliciousness.
Defending against a Sybil attack is quite a challenging task. A peer can pretend to be trusted with a hidden motive, and can pollute the system with bogus information, which interferes with genuine business transactions and the functioning of the systems. This must be prevented to protect the honest peers. The link between an honest peer and a Sybil peer is known as an attack edge. As each edge involved resembles a human-established trust, it is difficult for the adversary to introduce an excessive number of attack edges. The only known promising defense against Sybil attack is to use social networks to perform user admission control and limit the number of bogus identities admitted to a system. The use of social networks between two peers represents a real-world trust relationship between users. In addition, authentication-based mechanisms are used to verify the identities of the peers using shared encryption keys or location information.
1.3 LITERATURE SURVEY:
KEEP YOUR FRIENDS CLOSE: INCORPORATING TRUST INTO SOCIAL NETWORK-BASED SYBIL DEFENSES
AUTHOR: A. Mohaisen, N. Hopper, and Y. Kim
PUBLISH: Proc. IEEE Int. Conf. Comput. Commun., 2011, pp. 1–9.
EXPLANATION:
Social network-based Sybil defenses exploit the algorithmic properties of social graphs to infer the extent to which an arbitrary node in such a graph should be trusted. However, these systems do not consider the different amounts of trust represented by different graphs, and the different levels of trust between nodes, even though trust is a crucial requirement in these systems. For instance, co-authors in an academic collaboration graph are trusted in a different manner than social friends. Furthermore, some social friends are more trusted than others. However, previous designs for social network-based Sybil defenses have not considered the inherent trust properties of the graphs they use. In this paper we introduce several designs to tune the performance of Sybil defenses by accounting for differential trust in social graphs and modeling these trust values by biasing random walks performed on these graphs. Surprisingly, we find that the cost function, the required length of random walks to accept all honest nodes with overwhelming probability, is much greater in graphs with high trust values, such as co-author graphs, than in graphs with low trust values, such as online social networks. We show that this behavior is due to the community structure in high-trust graphs, requiring longer walks to traverse multiple communities. Furthermore, we show that our proposed designs to account for trust, while increasing the cost function of graphs with low trust values, decrease the advantage of the attacker.
FOOTPRINT: DETECTING SYBIL ATTACKS IN URBAN VEHICULAR NETWORKS
AUTHOR: S. Chang, Y. Qi, H. Zhu, J. Zhao, and X. Shen
PUBLISH: IEEE Trans. Parallel Distrib. Syst., vol. 23, no. 6, pp. 1103–1114, Jun. 2012.
EXPLANATION:
In urban vehicular networks, where privacy, especially the location privacy of anonymous vehicles, is a major concern, anonymous verification of vehicles is indispensable. Consequently, an attacker who succeeds in forging multiple hostile identities can easily launch a Sybil attack, gaining a disproportionately large influence. In this paper, we propose a novel Sybil attack detection mechanism, Footprint, using the trajectories of vehicles for identification while still preserving their location privacy. More specifically, when a vehicle approaches a road-side unit (RSU), it actively demands an authorized message from the RSU as proof of its appearance time at this RSU. We design a location-hidden authorized message generation scheme for two objectives: first, RSU signatures on messages are signer-ambiguous so that the RSU location information is concealed from the resulting authorized message; second, two authorized messages signed by the same RSU within the same given period of time (temporarily linkable) are recognizable so that they can be used for identification. With the temporal limitation on the linkability of two authorized messages, authorized messages used for long-term identification are prohibited. With this scheme, vehicles can generate a location-hidden trajectory for location-privacy-preserved identification by collecting a consecutive series of authorized messages. Utilizing the social relationship among trajectories according to the similarity definition of two trajectories, Footprint can recognize and therefore dismiss "communities" of Sybil trajectories. Rigorous security analysis and extensive trace-driven simulations demonstrate the efficacy of Footprint.
SYBILLIMIT: A NEAR-OPTIMAL SOCIAL NETWORK DEFENSE AGAINST SYBIL ATTACK
AUTHOR: H. Yu, P. Gibbons, M. Kaminsky, and F. Xiao
PUBLISH: IEEE/ACM Trans. Netw., vol. 18, no. 3, pp. 3–17, Jun. 2010.
EXPLANATION:
Decentralized distributed systems such as peer-to-peer systems are particularly vulnerable to Sybil attacks, where a malicious user pretends to have multiple identities (called Sybil nodes). Without a trusted central authority, defending against Sybil attacks is quite challenging. Among the small number of decentralized approaches, our recent SybilGuard protocol [H. Yu et al., 2006] leverages a key insight on social networks to bound the number of Sybil nodes accepted. Although its direction is promising, SybilGuard can allow a large number of Sybil nodes to be accepted. Furthermore, SybilGuard assumes that social networks are fast mixing, which has never been confirmed in the real world. This paper presents the novel SybilLimit protocol that leverages the same insight as SybilGuard but offers dramatically improved and near-optimal guarantees. The number of Sybil nodes accepted is reduced by a factor of Θ(√n), or around 200 times in our experiments for a million-node system. We further prove that SybilLimit's guarantee is at most a log n factor away from optimal when considering approaches based on fast-mixing social networks. Finally, based on three large-scale real-world social networks, we provide the first evidence that real-world social networks are indeed fast mixing. This validates the fundamental assumption behind SybilLimit's and SybilGuard's approach.
CHAPTER 2
2.0 SYSTEM ANALYSIS
2.1 EXISTING SYSTEM:
Existing work on Sybil attack makes use of social networks to eliminate Sybil attack, and the findings are based on preventing Sybil identities. In this paper, we propose the use of neighbor similarity trust in a group-based P2P e-commerce network built on interest relationships, to eliminate maliciousness among the peers. This is referred to as SybilTrust. In SybilTrust, the peers in the interest-based group infrastructure have a neighbor similarity trust between each other, hence they are able to prevent Sybil attack. SybilTrust gives a better relationship in e-commerce transactions, as the peers create a link between peer neighbors. This provides an important avenue for peers to advertise their products to other interested peers and to discover new market destinations and contacts as well. In addition, the group enables a peer to join the P2P e-commerce network and makes forging identities more difficult.
Peers use self-certifying identifiers that are exchanged when they initially come into contact. These can be used as public keys to verify digital signatures on the messages sent by their neighbors. We note that all communications between peers are digitally signed. In this kind of relationship, we use neighbors as our point of reference to address Sybil attack. In a group, whatever admission we set, there are honest, malicious, and Sybil peers who are authenticated by an admission control mechanism to join the group. More honest peers are admitted compared to malicious peers, where the trust association is aimed at positive results. The knowledge of the graph may reside in a single party, or be distributed across all users.
2.1.0 DISADVANTAGES:
Because a Sybil peer trades with very few successful transactions, we can deduce that the peer is a Sybil peer. This is supported by our approach, which proposes that peers existing in a group hold six types of keys.
The keys which exist mostly are pairwise keys, supported by the group keys. We also note that if an honest group has a link with another group which has Sybil peers, the Sybil group tends to have incomplete information.
- Fake users can enter the system easily.
- This enables Sybil attacks.
2.2 PROPOSED SYSTEM:
In this paper, we assume there are three kinds of peers in the system: legitimate peers, malicious peers, and Sybil peers. Each malicious peer cheats its neighbors by creating multiple identities, referred to as Sybil peers. In this paper, P2P e-commerce communities are organized into several groups. A group can be either open or restrictive, depending on the interest of the peers. We investigate the peers belonging to a certain interest group. In each group, there is a group leader who is responsible for managing the coordination of activities in the group.
The principal building block of the SybilTrust approach is the identifier distribution process. In this approach, all the peers with similar behavior in a group can be used as identifier sources. They can send identifiers to others as the system regulates. If a peer sends fewer or more identifiers than the system regulates, it may be a Sybil attack peer, and this information can be broadcast to the rest of the peers in the group. When peers join a group, they acquire different identities in reference to the group. Each peer has neighbors in the group and outside the group. Sybil attack peers forged by the same malicious peer have the same set of physical neighbors that the malicious peer has.
Each neighbor is connected to the peers by the success of the transactions it makes or by the trust evaluation level. To detect the Sybil attack, where a peer can have different identities, a peer is evaluated in reference to its trustworthiness and its similarity to the neighbors. If the neighbors do not have the same trust data as the concerned peer, including its position, it can be detected that the peer has multiple identities and is cheating.
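The following sketch illustrates this detection idea under stated assumptions: trust data is reduced to a per-neighbor score map, and a peer whose claimed trust data disagrees with what most of its neighbors hold is flagged. The Peer type, the tolerance, and the majority rule are hypothetical simplifications, not the paper's exact algorithm.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical model: each peer stores the trust scores it holds about others.
class Peer
{
    public string Id;
    public Dictionary<string, double> TrustOf = new Dictionary<string, double>();
}

static class SybilDetection
{
    // Flag a peer whose self-reported trust data disagrees with the
    // trust data its neighbors actually hold about it.
    public static bool LooksLikeSybil(Peer suspect, List<Peer> neighbors,
                                      double tolerance = 0.2)
    {
        int disagreements = 0;
        foreach (var n in neighbors)
        {
            double claimed = suspect.TrustOf.TryGetValue(n.Id, out var c) ? c : 0.0;
            double held = n.TrustOf.TryGetValue(suspect.Id, out var h) ? h : 0.0;
            if (Math.Abs(claimed - held) > tolerance)
                disagreements++;
        }
        // If most neighbors disagree with the suspect's view, flag it.
        return neighbors.Count > 0 && disagreements > neighbors.Count / 2;
    }
}
```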
2.2.0 ADVANTAGES:
Our perception is that the attacker controls a number of neighbor similarity peers, whereby a randomly chosen identifier source is relatively "far away" from most Sybil attack peer relationships. Every peer uses a "reversed" routing table. The source peer will always send some information to the peers which have neighbor similarity trust. If they do not reply, it can blacklist them. If they do reply and the source is overwhelmed by the overhead of such replies, then the adversary is effectively launching a DoS attack. Notice that either way the adversary can launch a DoS attack against the source. This mechanism enables two peers to propagate their public keys and IP addresses backward along the route to learn about each other.
- It helps to detect Sybil attacks.
- It can be used to find fake user IDs.
- It is feasible to limit the number of attack edges in online social networks by relationship rating.
2.3 HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1 HARDWARE REQUIREMENTS:
- Processor – Pentium IV
- Speed – 1.1 GHz
- RAM – 256 MB (min)
- Hard Disk – 20 GB
- Floppy Drive – 1.44 MB
- Keyboard – Standard Windows Keyboard
- Mouse – Two or Three Button Mouse
- Monitor – SVGA
2.3.2 SOFTWARE REQUIREMENTS:
- Operating System : Windows XP or Win7
- Front End : Microsoft Visual Studio .NET
- Language : C#
- Document : MS-Office 2007
CHAPTER 3
3.0 SYSTEM DESIGN
3.1 ARCHITECTURE DIAGRAM:
3.2 DATAFLOW DIAGRAM:
LEVEL 0: [Data flow diagram: Source, Neighbor Nodes]
LEVEL 1: [Data flow diagram: P2P SybilTrust Mode, Send Data Request]
LEVEL 2: [Data flow diagram: Send Data Request, Data Receive, P2P ACK, Active Attack (Malicious Node)]
LEVEL 3: [Data flow diagram]
3.3 UML DIAGRAMS
3.3.0 USE CASE DIAGRAM:
[Use case diagram: Server and Client actors]
3.3.1 CLASS DIAGRAM:
3.3.2 SEQUENCE DIAGRAM:
3.4 ACTIVITY DIAGRAM:
CHAPTER 4
4.0 IMPLEMENTATION:
In this paper, P2P e-commerce communities are organized in several groups. A group can be either open or restrictive depending on the interest of the peers. We investigate the peers belonging to a certain interest group. In each group, there is a group leader who is responsible for managing the coordination of activities in the group. When peers join a group, they acquire different identities in reference to the group. Each peer has neighbors in the group and outside the group. Sybil attack peers forged by the same malicious peer have the same set of physical neighbors that the malicious peer has. Each neighbor is connected to the peers by the success of the transactions it makes or by the trust evaluation level. To detect the Sybil attack, where a peer can have different identities, a peer is evaluated in reference to its trustworthiness and its similarity to the neighbors. If the neighbors do not have the same trust data as the concerned peer, including its position, it can be detected that the peer has multiple identities and is cheating. The method of detection of the Sybil attack is depicted in Fig. 2, where A1 and A2 refer to the same peer but with different identities.
In our approach, the identifiers are only propagated by the peers who exhibit neighbor similarity trust. Our perception is that the attacker controls a number of neighbor similarity peers, whereby a randomly chosen identifier source is relatively "far away" from most Sybil attack peer relationships. Every peer uses a "reversed" routing table. The source peer will always send some information to the peers which have neighbor similarity trust. If they do not reply, it can blacklist them. If they do reply and the source is overwhelmed by the overhead of such replies, then the adversary is effectively launching a DoS attack; notice that either way the adversary can launch a DoS attack against the source. This mechanism enables two peers to propagate their public keys and IP addresses backward along the route to learn about each other. SybilTrust proposes that an honest peer should not have an excessive number of neighbors; the neighbors we refer to should be member peers existing in a group. The restriction helps to bound the number of peers against any additional attack among the neighbors. If there are too many neighbors, SybilTrust will (internally) only use a subset of the peer's edges while ignoring all others. Following Liben-Nowell and Kleinberg, we define the attributes of a given pair of peers as the intersection of their sets of similar products.
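A minimal sketch of the reply-or-blacklist behavior just described, under stated assumptions: whether a probed peer replied would in practice come from a network round-trip with a timeout, which is abstracted away here, and the SourcePeer type is illustrative.

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: the source keeps a blacklist of trusted-but-silent peers.
class SourcePeer
{
    private readonly HashSet<string> blacklist = new HashSet<string>();

    // 'replied' stands in for the outcome of a real probe with a timeout.
    public void Probe(IEnumerable<(string peerId, bool replied)> probeResults)
    {
        foreach (var (peerId, replied) in probeResults)
        {
            if (!replied)
                blacklist.Add(peerId);   // silent peers are blacklisted
        }
    }

    public bool IsBlacklisted(string peerId) => blacklist.Contains(peerId);
}
```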
4.1 MODULES:
SIMILARITY TRUST RELATIONSHIP:
NEIGHBOR SIMILARITY TRUST:
DETECTION OF SYBIL ATTACK:
SECURITY AND PERFORMANCE:
4.2 MODULES DESCRIPTION:
SIMILARITY TRUST RELATIONSHIP:
We focus on the active attacks in P2P e-commerce. When a peer is compromised, all its information will be extracted. In our work, we have proposed the use of SybilTrust, which is based on the neighbor similarity relationship of the peers. SybilTrust is efficient and scalable for group P2P e-commerce networks. Sybil attack peers may attempt to compromise the edges or the peers of the group P2P e-commerce network, and can then execute further malicious actions in the network. The threat being addressed is identity-based active attacks: as peers are continuously carrying out transactions, each controller must show that it admitted only honest peers.
Our method assumes that the controller undergoes synchronization to prove whether the peers which acted as distributors of identifiers had similarity or not. If a peer never had similarity, the peer is assumed to have been a Sybil attack peer. A pairing method is used to generate an expander graph with a given expansion factor with high probability. Every pair of neighbor peers shares a unique symmetric secret key (the edge key), established out of band, for authenticating each other. Peers may deliberately cause Byzantine faults in which their multiple identities and incorrect behavior end up undetected.
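As a hedged sketch of the edge-key authentication just described, the code below uses HMAC-SHA256 with a shared symmetric key to tag and check a message between two neighbor peers. Key establishment itself is assumed to happen out of band, as the text says, and all names and message contents are illustrative.

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Sketch: both endpoints of an edge share a symmetric "edge key" and use it
// to tag and verify every message exchanged over that edge.
class EdgeKeyAuth
{
    static byte[] Tag(byte[] edgeKey, string message)
    {
        using (var hmac = new HMACSHA256(edgeKey))
            return hmac.ComputeHash(Encoding.UTF8.GetBytes(message));
    }

    static void Main()
    {
        // In the real system this key is established out of band.
        byte[] edgeKey = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(edgeKey);

        string msg = "txn: peerA -> peerB, item#7";
        byte[] tag = Tag(edgeKey, msg);

        // The receiver recomputes the tag and compares to authenticate.
        bool authentic = Tag(edgeKey, msg).SequenceEqual(tag);
        Console.WriteLine(authentic ? "message authenticated" : "rejected");
    }
}
```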
The Sybil attack peers can create additional non-existent links. The protocols and services for P2P, such as routing protocols, must operate efficiently regardless of the group size. In the neighbor similarity trust, peers must be self-healing in order to recover automatically from any state. A Sybil attack can defeat the replication and fragmentation performed in distributed hash tables. Geographic routing in P2P is another routing mechanism that can be compromised by Sybil peers.
NEIGHBOR SIMILARITY TRUST:
We present a Sybil identification algorithm that operates on the neighbor similarity trust. The directed graph has edges and vertices; in our work, we assume V is the set of peers and E is the set of edges. The edges in a neighbor similarity trust include attack edges which must be safeguarded against Sybil attacks. A peer u and a Sybil peer v can trade whether or not one of them is Sybil. Being in a group, a comparison can be done to determine the number of peers which trade with a given peer. If the peer trades with very few unsuccessful transactions, we can deduce the peer is a Sybil peer. This is supported by our approach, which proposes that a peer existing in a group has six types of keys. The keys which exist mostly are pairwise keys, supported by the group keys. We also note that if an honest group has a link with another group which has Sybil peers, the Sybil group tends to have incomplete information. Our algorithm adaptively tests the suspected peer while maintaining the neighbor similarity trust connection based on time.
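The sketch below gives one plausible reading of this identification step, under loudly stated assumptions: peers are vertices whose edges carry transaction outcome counts, and a peer whose trades are dominated by unsuccessful transactions is flagged. The TradeEdge model and the threshold are hypothetical, not the paper's exact algorithm.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Assumed model: V = peers, E = trade edges labelled with outcome counts.
class TradeEdge
{
    public string From, To;
    public int Successful, Unsuccessful;
}

static class SybilIdentifier
{
    // Flag peers whose trading record is dominated by unsuccessful
    // transactions across their neighbor similarity edges.
    public static List<string> SuspectedSybils(List<TradeEdge> edges,
                                               double minSuccessRatio = 0.5)
    {
        var suspects = new List<string>();
        foreach (var group in edges.GroupBy(e => e.From))
        {
            int ok = group.Sum(e => e.Successful);
            int bad = group.Sum(e => e.Unsuccessful);
            int total = ok + bad;
            if (total > 0 && (double)ok / total < minSuccessRatio)
                suspects.Add(group.Key);   // mostly failed trades -> suspect
        }
        return suspects;
    }
}
```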
DETECTION OF SYBIL ATTACK:
In a Sybil attack, a malicious peer must try to present multiple distinct identities. This can be achieved either by generating legal identities or by impersonating other normal peers. Some peers may launch arbitrary attacks to interfere with P2P e-commerce operations, or with the normal functioning of the network. An attacker can succeed in launching a Sybil attack through:
- Heterogeneous configuration: in this case, malicious peers can have more communication and computation resources than the honest peers.
- Message manipulation: the attacker can eavesdrop on nearby communications with other parties. This means an attacker obtains and interpolates the information needed to impersonate others. Major attacks in P2P e-commerce can be classified as passive and active attacks.
- Passive attack: it listens to incoming and outgoing messages in order to infer the relevant information from the transmitted recommendations, i.e., eavesdropping, but does not harm the system. A peer can be in passive mode and later in active mode.
- Active attack: when a malicious peer receives a recommendation for forwarding, it can modify it; or, when requested to provide recommendations on another peer, it can inflate them or bad-mouth. Bad-mouthing is a situation where a malicious peer may collude with other malicious peers to take revenge on an honest peer. In the Sybil attack, a malicious peer generates a large number of identities and uses them together to disrupt normal operation.
SECURITY AND PERFORMANCE:
We evaluate the performance of the proposed SybilTrust. We measure two metrics, namely the non-trustworthy rate and the detection rate. The non-trustworthy rate is the ratio of the number of honest peers which are erroneously marked as Sybil/malicious peers to the total number of honest peers. The detection rate is the proportion of detected Sybil/malicious peers to the total number of Sybil/malicious peers. Communication cost: the trust level is sent with the recommendation feedback from one peer to another. If a peer is compromised, the information is broadcast to all peers as revocation of the trust level is carried out. Computation cost: the SybilTrust approach is efficient in the computation of polynomial evaluation. The calculation of the trust level evaluation is based on a pseudo-random function (PRF), which is a deterministic function.
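The two metrics above are simple ratios; a minimal sketch of their computation is shown below, with hypothetical set types standing in for the ground-truth labels and the detector's output.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Metrics
{
    // detectionRate      = detected Sybil/malicious peers / all Sybil/malicious peers
    // nonTrustworthyRate = honest peers wrongly flagged / all honest peers
    public static (double detectionRate, double nonTrustworthyRate) Evaluate(
        HashSet<string> actualSybil, HashSet<string> honest, HashSet<string> flagged)
    {
        double detected = flagged.Count(p => actualSybil.Contains(p));
        double wronglyFlagged = flagged.Count(p => honest.Contains(p));
        return (detected / actualSybil.Count, wronglyFlagged / honest.Count);
    }
}
```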
In our simulation, we use the C# .NET tool. Each honest and malicious peer interacted with a random number of peers defined by a uniform distribution. All the peers are restricted to the group. In our approach, the P2P e-commerce community has a total of three different categories of interest. The transaction interactions between peers with similar interests can be defined as successful or unsuccessful, expressed as positive or negative respectively. The impact of the first two parameters on the performance of the mechanism is evaluated; the percentage of requests replied to is randomly chosen by each malicious peer. Transactions with 10 to 40 percent malicious peers are carried out.
Our SybilTrust approach detects more malicious peers compared to EigenTrust and EigenGroupTrust [26], as shown in Fig. 4, which presents the detection rates of the P2P network as the number of malicious peers increases. When the number of deployed peers is small, e.g., 40 peers, the chance that no peers are around a malicious peer is high. Fig. 4 also illustrates the variation of non-trustworthy rates for different numbers of honest peers as the number of malicious peers increases. It is shown that the non-trustworthy rate increases as the numbers of honest peers and malicious peers increase. The reason is that when there are more malicious peers, the number of target groups is larger; moreover, the neighbor relationship is used to categorize peers in the proposed approach. The number of target groups also increases when the number of honest peers is higher. As a result, the honest peers are examined more times, and the chance that an honest peer is erroneously determined to be a Sybil/malicious peer increases, although more Sybil attack peers can also be identified. Fig. 4 also displays the detection rate when the reply rate of each malicious peer is the same. The detection rate does not decrease when the reply rate is more than 80 percent, because of the enhancement.
The enhancement can still be observed even when a malicious peer replies to almost all of its Sybil attack peer requests. Furthermore, the detection rate is higher as the number of malicious peers grows, which means the proposed mechanism is able to resist the Sybil attack from more malicious peers. The detection rate is still more than 80 percent in a sparse network, and it reaches 95 percent when the number of legitimate nodes is 300. This is also because the number of target groups increases as the number of malicious peers increases, and the honest peers are examined more times. Therefore, the rate at which an honest peer is erroneously identified as a Sybil/malicious peer also increases.
CHAPTER 5
5.0 SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are
- ECONOMICAL FEASIBILITY
- TECHNICAL FEASIBILITY
- SOCIAL FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funding that the company can pour into the research and development of the system is limited. The expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to make constructive criticism, which is welcomed, as they are the final users of the system.
5.2 SYSTEM TESTING:
Testing is a process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later. This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best program is worthless if it does not produce correct outputs.
5.2.1 UNIT TESTING:
A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logic. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.
UNIT TESTING:
Description | Expected result |
Test for application window properties. | All the properties of the windows are to be properly aligned and displayed. |
Test for mouse operations. | All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions. |
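As a hedged illustration of unit testing in the document's own language, the sketch below shows how such checks might be expressed with the NUnit framework; the PeerGroup class and its behavior are hypothetical stand-ins for the application under test, not part of the original project.

```csharp
using NUnit.Framework;
using System.Collections.Generic;

// Hypothetical unit under test: a group that admits peers by identifier.
public class PeerGroup
{
    private readonly HashSet<string> members = new HashSet<string>();
    public bool Admit(string peerId) => members.Add(peerId);
    public int Count => members.Count;
}

[TestFixture]
public class PeerGroupTests
{
    [Test]
    public void Admit_NewPeer_IncreasesCount()
    {
        var group = new PeerGroup();
        Assert.IsTrue(group.Admit("peerA"));
        Assert.AreEqual(1, group.Count);
    }

    [Test]
    public void Admit_DuplicatePeer_IsRejected()
    {
        var group = new PeerGroup();
        group.Admit("peerA");
        Assert.IsFalse(group.Admit("peerA"));  // same identity cannot join twice
    }
}
```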
5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.
FUNCTIONAL TESTING:
Description | Expected result |
Test for all modules. | All peers should communicate in the group. |
Test for various peers in a distributed network framework, as it displays all users available in the group. | The result after execution should give the accurate result. |
5.2.3 NON-FUNCTIONAL TESTING:
Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:
- Load testing
- Performance testing
- Usability testing
- Reliability testing
- Security testing
5.2.4 LOAD TESTING:
An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under test with real usage by having actual telephone users connected to it. They will generate test input data for the system test.
LOAD TESTING:
Description | Expected result |
It is necessary to ascertain that the application behaves correctly under loads when ‘Server busy’ response is received. | Should designate another active node as a Server. |
5.2.5 PERFORMANCE TESTING:
Performance tests are utilized in order to determine the widely defined performance of the software system, such as the execution time associated with various parts of the code, response time, and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.
PERFORMANCE TESTING:
Description | Expected result |
This is required to assure that an application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management. | Should handle large input values, and produce accurate results in the expected time. |
5.2.6 RELIABILITY TESTING:
Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms part of the work of the software quality control team.
RELIABILITY TESTING:
Description | Expected result |
This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application. | In case of failure of the server, an alternate server should take over the job. |
5.2.7 SECURITY TESTING:
Security testing evaluates system characteristics that relate to the availability, integrity, and confidentiality of the system data and services. Users/clients should be encouraged to make sure their security needs are clearly known at requirements time, so that the security issues can be addressed by the designers and testers.
SECURITY TESTING:
Description | Expected result |
Checking that the user identification is authenticated. | In case of failure, it should not be connected to the framework. |
Check whether group keys in a tree are shared by all peers. | The peers in the same group should know the group key. |
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.
WHITE BOX TESTING:
Description | Expected result |
Exercise all logical decisions on their true and false sides. | All the logical decisions must be valid. |
Execute all loops at their boundaries and within their operational bounds. | All the loops must be finite. |
Exercise internal data structures to ensure their validity. | All the data structures must be valid. |
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.
BLACK BOX TESTING:
Description | Expected result |
To check for incorrect or missing functions. | All the functions must be valid. |
To check for interface errors. | The entire interface must function normally. |
To check for errors in data structures or external database access. | Database update and retrieval must function correctly. |
To check for initialization and termination errors. | All the functions and data structures must be initialized properly and terminated normally. |
All the above system testing strategies are carried out during development, as the documentation and institutionalization of the proposed goals and related policies is essential.
CHAPTER 6
6.0 SOFTWARE SPECIFICATION:
6.1 FEATURES OF .NET:
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There is no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic, and JScript.
The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.
".NET" is also the collective name given to various software components built upon the .NET platform. These are both products (Visual Studio .NET and Windows .NET Server, for instance) and services (like Passport, .NET My Services, and so on).
6.2 THE .NET FRAMEWORK
The .NET Framework has two main parts:
1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.
The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. The most important features are
- Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
- Memory management, notably including garbage collection.
- Checking and enforcing security restrictions on the running code.
- Loading and executing programs, with version control and other such features.
The following features of the .NET Framework are also worth describing:
Managed Code
The code that targets .NET, and which contains certain extra information – "metadata" – to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data
With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications – data that doesn't get garbage collected but instead is looked after by unmanaged code.
Common Type System
The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.
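To make the CTS point concrete, here is a small hedged example: the C# keyword int is simply an alias for the CTS type System.Int32, so the two are interchangeable, and the runtime rejects invalid casts at execution time.

```csharp
using System;

class CtsDemo
{
    static void Main()
    {
        int a = 42;                 // 'int' is a C# alias for the CTS type System.Int32
        System.Int32 b = a;         // identical type, so assignment is direct
        Console.WriteLine(a.GetType() == b.GetType());   // True

        // The runtime enforces type safety across such conversions:
        object o = "hello";
        // int bad = (int)o;        // compiles, but throws InvalidCastException at run time
        Console.WriteLine(o is string);                  // True
    }
}
```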
Common Language Specification
The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.
6.3 THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
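A brief hedged illustration of the value-type/object conversion mentioned above: assigning a value type to an object boxes it on the heap, and casting back unboxes it.

```csharp
using System;

class BoxingDemo
{
    static void Main()
    {
        double d = 3.14;               // value type, typically stack-allocated
        object boxed = d;              // boxing: the value is copied into a heap object
        double back = (double)boxed;   // unboxing: copy back out to a value type
        Console.WriteLine(back);       // 3.14
    }
}
```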
The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.
6.4 LANGUAGES SUPPORTED BY .NET
The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.
Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic now also supports structured exception handling, custom attributes, and multithreading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.
C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.
Active State has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for Active State’s Perl Dev Kit.
Other languages for which .NET compilers are available include
- FORTRAN
- COBOL
- Eiffel
[Fig. 1. The .NET Framework: ASP.NET XML Web Services and Windows Forms, on top of the Base Class Libraries, the Common Language Runtime, and the Operating System.]
C#.NET is also compliant with CLS (Common Language Specification) and supports structured exception handling. CLS is set of rules and constructs that are supported by the CLR (Common Language Runtime). CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.
C#.NET is a CLS-compliant language. Any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of CLS ensures complete interoperability among applications, regardless of the languages used to create them.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET, the Finalize procedure is available. The Finalize procedure is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the Finalize procedure can be called only from the class it belongs to or from derived classes.
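A hedged minimal example of a constructor and a finalizer (C#'s destructor syntax); the Connection class and its "resource" are illustrative, and the explicit GC calls exist only to make the finalizer observable in a demo.

```csharp
using System;

class Connection
{
    private bool open;

    // Constructor: initializes the object when it is created.
    public Connection()
    {
        open = true;
        Console.WriteLine("connection opened");
    }

    // Destructor (finalizer): releases resources when the object is destroyed.
    ~Connection()
    {
        if (open)
            Console.WriteLine("connection released by finalizer");
    }
}

class Program
{
    static void Main()
    {
        new Connection();                 // becomes unreachable immediately
        GC.Collect();                     // request a collection (demo only)
        GC.WaitForPendingFinalizers();    // let the finalizer run before exit
    }
}
```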
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.
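A small hedged demonstration of the collector reclaiming memory: GC.GetTotalMemory and GC.Collect are standard .NET calls, and the exact numbers printed will vary by runtime.

```csharp
using System;

class GcDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(false);

        // Allocate a large object and then drop the only reference to it.
        byte[] buffer = new byte[10_000_000];
        buffer = null;   // the array is now eligible for collection

        GC.Collect();                      // force a collection (demo only)
        GC.WaitForPendingFinalizers();
        long after = GC.GetTotalMemory(true);

        Console.WriteLine($"before: {before} bytes, after: {after} bytes");
    }
}
```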
OVERLOADING
Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
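A minimal example of the overloading just described: three Add methods share one name but differ in their argument lists, and the compiler picks the overload that matches the call. The Calculator class is illustrative.

```csharp
using System;

class Calculator
{
    // Three procedures share one name but differ in their argument lists.
    public int Add(int a, int b) => a + b;
    public double Add(double a, double b) => a + b;
    public int Add(int a, int b, int c) => a + b + c;
}

class Program
{
    static void Main()
    {
        var calc = new Calculator();
        Console.WriteLine(calc.Add(1, 2));        // (int, int) overload
        Console.WriteLine(calc.Add(1.5, 2.5));    // (double, double) overload
        Console.WriteLine(calc.Add(1, 2, 3));     // three-argument overload
    }
}
```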
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
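A hedged sketch of multithreading with the standard System.Threading API: two worker threads run concurrently while the main thread waits for both; the work itself is a placeholder loop.

```csharp
using System;
using System.Threading;

class ThreadDemo
{
    static void Work(object name)
    {
        for (int i = 0; i < 3; i++)
        {
            Console.WriteLine($"{name}: step {i}");
            Thread.Sleep(100);   // simulate work
        }
    }

    static void Main()
    {
        var t1 = new Thread(Work);
        var t2 = new Thread(Work);
        t1.Start("worker-1");    // both threads execute Work concurrently
        t2.Start("worker-2");
        t1.Join();               // wait for both to finish
        t2.Join();
        Console.WriteLine("all tasks finished");
    }
}
```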
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and handle errors at runtime. In C#.NET, we use try…catch…finally statements to create exception handlers. Using try…catch…finally statements, we can create robust and effective exception handlers to improve the behavior of our application.
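A minimal try…catch…finally example matching the description above; the failing division and the messages are illustrative.

```csharp
using System;

class ExceptionDemo
{
    static void Main()
    {
        try
        {
            int[] values = { 10, 0 };
            Console.WriteLine(values[0] / values[1]);   // throws DivideByZeroException
        }
        catch (DivideByZeroException ex)
        {
            Console.WriteLine($"handled at runtime: {ex.Message}");
        }
        finally
        {
            Console.WriteLine("cleanup always runs");   // executes on every path
        }
    }
}
```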
6.5 THE .NET FRAMEWORK
The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.
OBJECTIVES OF .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.
3. To eliminate performance problems.
There are different types of applications, such as Windows-based applications and Web-based applications.
6.6 FEATURES OF SQL-SERVER
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services
A SQL-SERVER database consists of several types of objects. They are:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
6.7 TABLE:
A database is a collection of data about a specific topic.
VIEWS OF TABLE:
We can work with a table in two types,
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view. We can specify what kind of data will be held.
Datasheet View
To add, edit, or analyze the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that has to be asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
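As a hedged sketch of running such a query from the document's C#/.NET front end, the snippet below uses the standard ADO.NET SqlConnection/SqlCommand classes; the connection string, database, and Peers table are illustrative assumptions, not part of the original project.

```csharp
using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        // Hypothetical connection string and schema.
        string connStr = "Server=.;Database=SybilTrustDb;Integrated Security=true;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT PeerId, TrustLevel FROM Peers WHERE TrustLevel > @min", conn))
        {
            cmd.Parameters.AddWithValue("@min", 0.5);
            conn.Open();
            using (var reader = cmd.ExecuteReader())   // a read-only result set
            {
                while (reader.Read())
                    Console.WriteLine($"{reader["PeerId"]}: {reader["TrustLevel"]}");
            }
        }
    }
}
```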
CHAPTER 7
APPENDIX
7.1 SAMPLE SOURCE CODE
7.2 SAMPLE OUTPUT
CHAPTER 8
8.0 CONCLUSION AND FUTURE WORK:
We presented SybilTrust, a defense against the Sybil attack in P2P e-commerce. Compared to other approaches, our approach is based on neighborhood similarity trust in a group P2P e-commerce community, and it exploits the relationship between peers in a neighborhood setting. Our results on real-world P2P e-commerce confirmed the fast-mixing property, and hence validated the fundamental assumption behind SybilGuard's approach. We also described defense types such as key validation, distribution, and position verification; these can be applied simultaneously with neighbor similarity trust, which gives a better defense mechanism. For future work, we intend to implement SybilTrust within the context of peers which exist in many groups. Neighbor similarity trust helps to weed out the Sybil peers and isolate maliciousness to specific Sybil peer groups, rather than allowing attacks in honest groups with all honest peers.
Malware Propagation in Large-Scale Networks

Abstract—Malware is pervasive in networks, and poses a critical threat to network security. However, we have very limited understanding of malware behavior in networks to date. In this paper, we investigate how malware propagates in networks from a global perspective. We formulate the problem, and establish a rigorous two layer epidemic model for malware propagation from network to network. Based on the proposed model, our analysis indicates that the distribution of a given malware follows an exponential distribution, a power law distribution with a short exponential tail, and a power law distribution at its early, late and final stages, respectively. Extensive experiments have been performed through two real-world global scale malware data sets, and the results confirm our theoretical findings.

Index Terms—Malware, propagation, modelling, power law

1 INTRODUCTION

Malware are malicious software programs deployed by cyber attackers to compromise computer systems by exploiting their security vulnerabilities. Motivated by extraordinary financial or political rewards, malware owners are exhausting their energy to compromise as many networked computers as they can in order to achieve their malicious goals. A compromised computer is called a bot, and all bots compromised by a malware form a botnet. Botnets have become the attack engine of cyber attackers, and they pose critical challenges to cyber defenders. In order to fight against cyber criminals, it is important for defenders to understand malware behavior, such as propagation or membership recruitment patterns, the size of botnets, and the distribution of bots.

To date, we do not have a solid understanding about the size and distribution of malware or botnets. Researchers have employed various methods to measure the size of botnets, such as botnet infiltration [1], DNS redirection [3], and external information [2]. These efforts indicate that the size of botnets varies from millions to a few thousand. There are no dominant principles to explain these variations. As a result, researchers desperately desire effective models and explanations for the chaos. Dagon et al. [3] revealed that time zone has an obvious impact on the number of available bots. Mieghem et al. [4] indicated that network topology has an important impact on malware spreading through their rigorous mathematical analysis. Recently, the emergence of mobile malware, such as Cabir [5], Ikee [6], and Brador [7], further increases the difficulty of our understanding of how they propagate. More details about mobile malware can be found in a recent survey paper [8]. To the best of our knowledge, the best finding about malware distribution in large-scale networks comes from Chen and Ji [9]: the distribution is non-uniform. All this indicates that the research in this field is in its early stage.

The epidemic theory plays a leading role in malware propagation modelling. The current models for malware spread fall into two categories: the epidemiology model and the control theoretic model. The control system theory based models try to detect and contain the spread of malware [10], [11]. The epidemiology models are more focused on the number of compromised hosts and their distributions, and they have been explored extensively in the computer science community [12], [13], [14]. Zou et al. [15] used a susceptible-infected (SI) model to predict the growth of Internet worms at the early stage. Gao and Liu [16] recently employed a susceptible-infected-recovered (SIR) model to describe mobile virus propagation.
One critical condition for the epidemic models is a large vulnerable population, because their principle is based on differential equations. More details of epidemic modelling can be found in [17]. As pointed out by Willinger et al. [18], the findings which we extract from a set of observed data usually reflect parts of the studied objects. It is more reliable to extract theoretical results from appropriate models with confirmation from sufficient real world data set experiments. We practice this principle in this study.

In this paper, we study the distribution of malware in terms of networks (e.g., autonomous systems (AS), ISP domains, abstract networks of smartphones which share the same vulnerabilities) at large scales. In this kind of setting, we have a sufficient volume of data at a large enough scale to meet the requirements of the SI model. Different from the traditional epidemic models, we break our model into two layers. First of all, for a given time since the breakout of a malware, we calculate how many networks have been compromised based on the SI model. Second, for a compromised network, we calculate how many hosts have been compromised since the time that the network was compromised. With this two layer model in place, we can determine the total number of compromised hosts and their distribution in terms of networks. Through our rigorous analysis, we find that the distribution of a given malware follows an exponential distribution at its early stage, obeys a power law distribution with a short exponential tail at its late stage, and finally converges to a power law distribution. We examine our theoretical findings through two large-scale real-world data sets: the Android based malware [19] and the Conficker worm [20]. The experimental results strongly support our theoretical claims. To the best of our knowledge, the proposed two layer epidemic model and the findings are the first work in the field.

Our contributions are summarized as follows.
- We propose a two layer malware propagation model to describe the development of a given malware at the Internet level. Compared with the existing single layer epidemic models, the proposed model represents malware propagation better in large-scale networks.
- We find the malware distribution in terms of networks varies from exponential, to power law with a short exponential tail, and to power law distribution at its early, late, and final stage, respectively. These findings are first theoretically proved based on the proposed model, and then confirmed by the experiments through the two large-scale real-world data sets.

The rest of the paper is structured as follows. Related work is briefly listed in Section 2. We present the preliminaries for the proposed model in Section 3. The studied problem is discussed in Section 4. A two layer malware propagation model is established in Section 5, followed by a rigorous mathematical analysis in Section 6. Experiments are conducted to confirm our findings in Section 7. In Section 8, we provide a further discussion about the study. Finally, we summarize the paper and present future work in Section 9.

2 RELATED WORK

The basic story of malware is as follows. A malware programmer writes a program, called a bot or agent, and then installs the bots at compromised computers on the Internet using various network virus-like techniques. All of his bots form a botnet, which is controlled by its owner to commit illegal tasks, such as launching DDoS attacks, sending spam emails, performing phishing activities, and collecting sensitive information. There is a command and control (C&C) server (or servers) to communicate with the bots and collect data from them. In order to disguise himself from legal forces, the botmaster changes the URL of his C&C frequently, e.g., weekly. An excellent explanation of this can be found in [1].

With the significant growth of smartphones, we have witnessed an increasing number of mobile malware. Malware writers have developed many mobile malware in recent years. Cabir [5] was developed in 2004, and was the first malware targeting the Symbian operating system for mobile devices; it was also the first malware propagating via Bluetooth. Ikee [6] was the first mobile malware against Apple iPhones, while Brador [7] was developed against Windows CE operating systems. The attack vectors for mobile malware are diverse, such as SMS, MMS, Bluetooth, WiFi, and Web browsing. Peng et al. [8] presented the short history of mobile malware since 2004, and surveyed their propagation models.

A direct method to count the number of bots is to use botnet infiltration to count the bot IDs or IP addresses. Stone-Gross et al. [1] registered the URL of the Torpig botnet before the botmaster, and were therefore able to hijack the C&C server for ten days and collect about 70G of data from the bots of the Torpig botnet. They reported that the footprint of the Torpig botnet was 182,800, and the median and average size of Torpig's live population was 49,272 and 48,532, respectively. They found 49,294 new infections during the ten-day takeover. Their research also indicated that the live population fluctuates periodically as users switch between being online and offline. This issue was also tackled by Dagon et al. in [3].

Another method is to use DNS redirection. Dagon et al. [3] analyzed bots captured by a honeypot, and then identified the C&C server using source code reverse engineering tools. They then manipulated the DNS entry related to the botnet's IRC server, and redirected the DNS requests to a local sinkhole. They could therefore count the number of bots in the botnet.
As discussed previously, their method counts the footprint of the botnet, which was 350,000 in their report.

In this paper, we use two large scale malware data sets for our experiments. Conficker is a well-known and one of the most recently widespread malware. Shin et al. [20] collected a data set of about 25 million Conficker victims from all over the world at different levels. At the same time, malware targeting Android based mobile systems has been developing quickly in recent years. Zhou and Jiang [19] collected a large data set of Android based malware.

In [2], Rajab et al. pointed out that it is inaccurate to count the unique IP addresses of bots because DHCP and NAT techniques are employed extensively on the Internet ([1] confirms this by their observation that 78.9 percent of the infected machines were behind a NAT, VPN, proxy, or firewall). They therefore proposed to examine the hits of DNS caches to find the lower bound of the size of a given botnet.

Rajab et al. [21] reported that botnets can be categorized into two major genres in terms of membership recruitment: worm-like botnets and variable scanning botnets. The latter accounts for about 82 percent of the 192 IRC bots that they investigated, and is the more prevalent class seen currently. Such botnets usually perform localized and non-uniform scanning, and are difficult to track due to their intermittent and continuously changing behavior. The statistics on the lifetime of bots are also reported as 25 minutes on average, with 90 percent of them staying for less than 50 minutes.

Malware propagation modelling has been extensively explored. Based on epidemiology research, Zou et al. [15] proposed a number of models for malware monitoring at the early stage. They pointed out that these kinds of models are appropriate for a system that consists of a large number of vulnerable hosts; in other words, the model is effective at the early stage of the outbreak of malware, and the accuracy of the model drops when the malware develops further. As a variant of the epidemic category, Sellke et al. [12] proposed a stochastic branching process model for characterizing the propagation of Internet worms, which especially focuses on the number of compromised computers against the number of worm scans, and presented a closed form expression for the relationship. Dagon et al. [3] extended the model of [15] by introducing time zone information $a(t)$, and built a model to describe the impact of the diurnal effect on the number of live members of botnets.

The impact of side information on the spreading behavior of network viruses has also been explored. Ganesh et al. [22] thoroughly investigated the effect of network topology on the spread of epidemics. By combining graph theory and a SIS (susceptible-infective-susceptible) model, they found that if the ratio of cure to infection rates is smaller than the spectral radius of the graph of the studied network, then the average epidemic lifetime is of order $\log n$, where $n$ is the number of nodes. On the other hand, if the ratio is larger than a generalization of the isoperimetric constant of the graph, then the average epidemic lifetime is of order $e^{n^a}$, where $a$ is a positive constant. Similarly, Mieghem et al. [4] applied the N-intertwined Markov chain model, an application of mean field theory, to analyze the spread of viruses in networks. They found that $\tau_c = 1/\lambda_{\max}(A)$, where $\tau_c$ is the sharp epidemic threshold, and $\lambda_{\max}(A)$ is the largest eigenvalue of the adjacency matrix $A$ of the studied network.
Moreover, there have been many other methodologies to tackle the problem, such as game theory [23].

3 PRELIMINARIES

Preliminaries of epidemic modelling and complex networks are presented in this section, as this work is mainly based on these two fields. For the sake of convenience, the symbols used in this paper are summarized in Table 1.

[Table 1: Notations of Symbols in This Paper]

3.1 Deterministic Epidemic Models

After nearly 100 years of development, the epidemic models [17] have proved effective and appropriate for a system that possesses a large number of vulnerable hosts. In other words, they are suitable at a macro level. Zou et al. [15] demonstrated that they were suitable for studies of Internet based virus propagation at the early stage. We note that there are many factors that impact malware propagation or botnet membership recruitment, such as network topology, recruitment frequency, and the connection status of vulnerable hosts. All these factors contribute to the speed of malware propagation. Fortunately, we can fold all these factors into one parameter, the infection rate $\beta$ of epidemic theory. Therefore, in our study, let $N$ be the total number of vulnerable hosts of a large-scale network (e.g., the Internet) for a given malware. There are two statuses for any one of the $N$ hosts, either infected or susceptible. Let $I(t)$ be the number of infected hosts at time $t$; then we have

$$\frac{dI(t)}{dt} = \beta(t)\,[N - R(t) - I(t) - Q(t)]\,I(t) - \frac{dR(t)}{dt}, \qquad (1)$$

where $R(t)$ and $Q(t)$ represent the number of removed hosts from the infected population and from the susceptible population at time $t$, respectively. The variable $\beta(t)$ is the infection rate at time $t$.

For our study, model (1) is too detailed and not necessary, as we expect to know the propagation and distribution of a given malware. As a result, we employ the following susceptible-infected model:

$$\frac{dI(t)}{dt} = \beta\,I(t)\,[N - I(t)], \qquad (2)$$

where the infection rate $\beta$ is a constant for a given malware for any network.

We note that the variable $t$ is continuous in models (1) and (2). In practice, we measure $I(t)$ at discrete time points, $t = 0, 1, 2, \ldots$. We can interpret each time point as a new round of malware membership recruitment, such as vulnerable host scanning. As a result, we can transform model (2) into the discrete form

$$I(t) = (1 + a\Delta)\,I(t-1) - \beta\Delta\,(I(t-1))^2, \qquad (3)$$

where $t = 0, 1, 2, \ldots$, $\Delta$ is the unit of time, $I(0)$ is the initial number of infected hosts (we also call them seeds in this paper), and $a = \beta N$, which represents the average number of vulnerable hosts that can be infected by one infected host per time unit.

In order to simplify our analysis, let $\Delta = 1$; it could be one second, one minute, one day, or even one month or one year, depending on the time scale in a given context. Hence, we have a simpler discrete form given by

$$I(t) = (1 + a)\,I(t-1) - \beta\,(I(t-1))^2. \qquad (4)$$

Based on Equation (4), we define the increase of infected hosts for each time unit as

$$\Delta I(t) \triangleq I(t) - I(t-1), \qquad t = 1, 2, \ldots \qquad (5)$$

To date, many researches are confined to the "early stage" of an epidemic, such as [15]. Under the early stage condition, $I(t) \ll N$, and therefore $N - I(t) \approx N$. As a result, a closed form solution is obtained as follows:

$$I(t) = I(0)\,e^{\beta N t}. \qquad (6)$$

When we take the $\ln$ operation on both sides of Equation (6), we have

$$\ln I(t) = \beta N t + \ln I(0). \qquad (7)$$

For a given vulnerable network, $\beta$, $N$ and $I(0)$ are constants; therefore, the graphical representation of Equation (7) is a straight line.

Based on the definition in Equation (5), we obtain the increase of new members of a malware at the early stage as

$$\Delta I(t) = (e^{\beta N} - 1)\,I(t-1) = (e^{\beta N} - 1)\,I(0)\,e^{\beta N (t-1)}. \qquad (8)$$

Taking the $\ln$ operation on both sides of (8), we have

$$\ln \Delta I(t) = \beta N (t-1) + \ln\!\big((e^{\beta N} - 1)\,I(0)\big). \qquad (9)$$

Similar to Equation (7), the graphical representation of Equation (9) is also a straight line. In other words, the number of recruited members for each round follows an exponential distribution at the early stage. We have to note that it is hard for us to know whether an epidemic is at its early stage or not in practice; moreover, there is no mathematical definition of the term early stage.

In epidemic models, the infection rate $\beta$ has a critical impact on the membership recruitment progress, and $\beta$ is usually a small positive number, such as 0.00084 for the worm Code Red [12]. For example, for a network with $N = 10{,}000$ vulnerable hosts, we show the recruited members under different infection rates in Fig. 1. From this diagram, we can see that the recruitment goes slowly when $\beta = 0.0001$; however, all vulnerable hosts have been compromised in less than 7 time units when $\beta = 0.0003$, and the recruitment progresses in an exponential fashion.

[Fig. 1. The impact of the infection rate $\beta$ on the recruitment progress for a given vulnerable network with $N = 10{,}000$.]

This reflects the malware propagation styles in practice. For malware based on "contact", such as Bluetooth contacts, or viruses depending on emails to propagate, the infection rate is usually small, and it takes a long time to compromise a large number of vulnerable hosts in a given network. On the other hand, for some malware which take active actions for recruitment, such as vulnerable host scanning, it may take one or a few rounds of scanning to recruit all or a majority of the vulnerable hosts in a given network. We will apply this in the following analysis and performance evaluation.
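Before moving on, a brief hedged numerical sketch: the snippet below iterates the discrete SI recurrence of Equation (4) for $N = 10{,}000$ and the two infection rates discussed around Fig. 1. The clamping to $N$ is a numerical guard added here for the sketch, not part of the model.

```csharp
using System;

class SiModelDemo
{
    // Discrete SI recurrence, Equation (4): I(t) = (1 + a) I(t-1) - b I(t-1)^2,
    // with a = b * N. Mirrors the Fig. 1 setting (N = 10,000 vulnerable hosts).
    static double[] Simulate(double b, int N, double i0, int steps)
    {
        var infected = new double[steps + 1];
        infected[0] = i0;
        double a = b * N;
        for (int t = 1; t <= steps; t++)
        {
            double prev = infected[t - 1];
            infected[t] = Math.Min(N, (1 + a) * prev - b * prev * prev);
        }
        return infected;
    }

    static void Main()
    {
        foreach (double b in new[] { 0.0001, 0.0003 })
        {
            double[] curve = Simulate(b, 10_000, 1, 10);
            // With b = 0.0003 the population saturates within ~7 time units.
            Console.WriteLine($"beta = {b}: I(10) = {curve[10]:F0}");
        }
    }
}
```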
The Zipf-Mandelbrot distribution becomes theZipf distribution when q ¼ 0.Currently, the metric to say a distribution is a powerlaw is to take the loglog plot of the data, and we usuallysay it is a power law if the result shows a straight line.We have to note that this is not a rigorous method, however,it is widely applied in practice. Power law distributionsenjoy one important property, scale free. We referinterested readers to [28] about the power law and itsproperties.4 PROBLEM DESCRIPTIONIn this section, we describe the malware propagation problemin general.As shown in Fig. 2, we study the malware propagationissue at two levels, the Internet level and the network level.We note that at the network level, a network could bedefined in many different ways, it could be an ISP domain,a country network, the group of a specific mobile devices,and so on. At the Internet level, we treat every network ofthe network level as one element.Fig. 1. The impact from infection rate b on the recruitment progress for agiven vulnerable network with N ¼ 10,000.YU ET AL.: MALWARE PROPAGATION IN LARGE-SCALE NETWORKS 173At the Internet level, we suppose, there are M networks,each network is denoted as Lið1 _ i _ MÞ. For anynetwork Li, we suppose it physically possesses Ni hosts.Moreover, we suppose the possibility of vulnerable hostsof Li is denoted as pið0 _ pi _ 1Þ. In general, it is highlypossible that Ni 6¼ Nj, and pi 6¼ pj for i 6¼ j; 1 _ i; j _ M.Moreover, due to differences in network topology, operatingsystem, security investment and so on, the infectionrates are different from network to network. We denote itas bi for Li. Similarly, it is highly possible that bi 6¼ bj fori 6¼ j; 1 _ i; j _ M.For any given network Li with pi _ Ni vulnerable hostsand infection rate bi. We suppose the malware propagationstarts at time 0. Based on Equation (4), we obtain the numberof infected hosts, IiðtÞ, of Li at time t as follows:IiðtÞ ¼ ð1 þ aiÞIiðt _ 1Þ _ biðIiðt _ 1ÞÞ2¼ ð1 þ bipiNiÞIiðt _ 1Þ _ biðIiðt _ 1ÞÞ2:(12)In this paper, we are interested in a global sense of malwarepropagation. We study the following question.For a given time t since the outbreak of a malware, whatare the characteristics of the number of compromised hostsfor each network in the view of the whole Internet. In otherwords, to find a function F about IiðtÞð1 _ i _ MÞ. Namely,the pattern ofFðI1ðtÞ; I2ðtÞ; . . . ; IMðtÞÞ: (13)For simplicity of presentation, we use SðLi; tÞ to replaceIiðtÞ at the network level, and IðtÞ is dedicated for the Internetlevel. Following Equation (13), for any networkLið1 _ i _ MÞ, we haveSðLi; tÞ ¼ ð1 þ bipiNiÞSðLi; t _ 1Þ _ biðSðLi; t _ 1ÞÞ2: (14)At the Internet level, we suppose there are k1; k2; . . . ; ktnetworks that have been compromised at each round foreach time unit from 1 to t. Any kið1 _ i _ tÞ is decided byEquation (4) as follows:ki ¼ ð1 þ bnMÞIði _ 1Þ _ bnðIði _ 1ÞÞ2; (15)where M is the total number of networks over the Internet,and bn is the infection rate among networks. Moreover,suppose the number of seeds, k0, is known.At this time point t, the landscape of the compromisedhosts in terms of networks is as follows.S_L1k1; t_; S_L2k1; t_; . . . ; S_Lk1k1; t_|fflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl}k1S_L1k2; t _ 1_; S_L2k2; t _ 1_; . . . 
$\underbrace{S(L^1_{k_1}, t), S(L^2_{k_1}, t), \ldots, S(L^{k_1}_{k_1}, t)}_{k_1}$
$\underbrace{S(L^1_{k_2}, t-1), S(L^2_{k_2}, t-1), \ldots, S(L^{k_2}_{k_2}, t-1)}_{k_2}$
$\ldots$
$\underbrace{S(L^1_{k_t}, 1), S(L^2_{k_t}, 1), \ldots, S(L^{k_t}_{k_t}, 1)}_{k_t}$,  (16)

where $L^j_{k_i}$ represents the $j$th network that was compromised at round $i$. In other words, there are $k_1$ compromised networks, each of which has progressed $t$ time units; $k_2$ compromised networks, each of which has progressed $t-1$ time units; and $k_t$ compromised networks, each of which has progressed 1 time unit.

It is natural to have the total number of compromised hosts at the Internet level as

$I(t) = \underbrace{S(L^1_{k_1}, t) + \cdots + S(L^{k_1}_{k_1}, t)}_{k_1} + \underbrace{S(L^1_{k_2}, t-1) + \cdots + S(L^{k_2}_{k_2}, t-1)}_{k_2} + \cdots + \underbrace{S(L^1_{k_t}, 1) + \cdots + S(L^{k_t}_{k_t}, 1)}_{k_t}$.  (17)

Suppose $k_i$ ($i = 1, 2, \ldots$) follows one distribution with a probability distribution $p_n$ ($n$ stands for number), and the size of a compromised network, $S(L_i, t)$, follows another probability distribution $p_s$ ($s$ stands for size). Let $p_I$ be the probability distribution of $I(t)$ ($t = 0, 1, \ldots$). Based on Equation (17), we find that $p_I$ is exactly the convolution of $p_n$ and $p_s$:

$p_I = p_n * p_s$,  (18)

where $*$ is the convolution operation. Our goal is to find a pattern of $p_I$ of Equation (18).

5 MALWARE PROPAGATION MODELLING

As shown in Fig. 2, we abstract the $M$ networks of the Internet into $M$ basic elements in our model. As a result, any two large networks, $L_i$ and $L_j$ ($i \neq j$), are similar to each other at this level. Therefore, we can model the studied problem as a homogeneous system: all the $M$ networks share the same vulnerability probability (denoted as $p$) and the same infection rate (denoted as $\beta$). A simple way to obtain these two parameters is to use the means:

$p = \dfrac{1}{M} \sum_{i=1}^{M} p_i, \qquad \beta = \dfrac{1}{M} \sum_{i=1}^{M} \beta_i$.  (19)

Fig. 2. The system architecture of the studied malware propagation.

For any network $L_i$, let $N_i$ be the total number of vulnerable hosts; then we have

$N_i = p \cdot \bar{N}_i, \quad i = 1, 2, \ldots, M$,  (20)

where $\bar{N}_i$ is the total number of computers of network $L_i$. As discussed in Section 3, we know that $\bar{N}_i$ ($i = 1, 2, \ldots, M$) follows the power law. As $p$ is a constant in Equation (20), $N_i$ ($i = 1, 2, \ldots, M$) follows the power law as well. Without loss of generality, let $L_i$ represent the $i$th largest network in terms of total vulnerable hosts ($N_i$).
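To make the two-layer construction concrete, here is a small Python sketch (our own simplification, with illustrative names and parameters, not the paper's code): it draws Zipf-like network sizes, recruits networks round by round at the Internet level as in Equation (15), and accumulates the total number of compromised hosts in the spirit of Equations (16)-(17), with the largest remaining networks recruited first, as Equation (21) below will justify.

```python
# Illustrative two-layer sketch (our own simplification of Eqs. (14)-(17)):
# network sizes follow a Zipf law; networks are recruited round by round,
# and each compromised network then runs its own SI recursion internally.
def zipf_sizes(M, alpha, n_max):
    """Vulnerable-host counts N_1 >= N_2 >= ... following Pr{x=i} ~ C/i^alpha."""
    weights = [1.0 / (i ** alpha) for i in range(1, M + 1)]
    return [max(1, int(n_max * w)) for w in weights]  # largest network = n_max

def si_step(I_prev, beta, N):
    return min((1.0 + beta * N) * I_prev - beta * I_prev * I_prev, N)

def internet_level(M, beta_n, t, k0=1):
    """Number of networks newly compromised per round, k_1..k_t (Eq. (15))."""
    I, ks = float(k0), []
    for _ in range(t):
        I_next = si_step(I, beta_n, M)
        ks.append(int(round(I_next - I)))
        I = I_next
    return ks

def total_compromised(sizes, ks, beta, t):
    """I(t): sum of S(L, age) over networks recruited at each round (Eq. (17))."""
    total, idx = 0.0, 0
    for rnd, k in enumerate(ks, start=1):      # networks recruited at round `rnd`
        for N in sizes[idx:idx + k]:           # largest remaining networks first
            S = 1.0                            # one seed per compromised network
            for _ in range(t - rnd + 1):       # progressed t-rnd+1 time units
                S = si_step(S, beta, N)
            total += S
        idx += k
    return total

sizes = zipf_sizes(M=200, alpha=1.0, n_max=10_000)
ks = internet_level(M=200, beta_n=0.01, t=5)
print("recruited per round:", ks)
print("I(5) ~", int(total_compromised(sizes, ks, beta=0.0005, t=5)))
```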
Based on the Zipf distribution, if we randomly choose a network $X$, the probability that it is network $L_j$ is

$\Pr\{X = L_j\} = p_z(j) = \dfrac{N_j}{\sum_{i=1}^{M} N_i} = \dfrac{C}{j^{\alpha}}$.  (21)

Equation (21) shows clearly that a network with a larger number of vulnerable hosts has a higher probability to be compromised.

Following Equation (15), at time $t$ we have $k_1 + k_2 + \cdots + k_t$ networks that have been compromised. Combining this with Equation (21), in general, we know the first round of recruitment takes the largest $k_1$ networks, the second round takes the $k_2$ largest networks among the remaining networks, and so on. We therefore can simplify Equation (17) as

$I(t) = \sum_{j=1}^{k_1} S(N_j, t)\, p_z(j) + \sum_{j=1}^{k_2} S(N_{k_1+j}, t-1)\, p_z(k_1+j) + \cdots + \sum_{j=1}^{k_t} S(N_{k_1+\cdots+k_{t-1}+j}, 1)\, p_z(k_1+\cdots+k_{t-1}+j)$.  (22)

From Equation (22), we know the total number of compromised hosts and their distribution in terms of networks for a given time point $t$.

6 ANALYSIS ON THE PROPOSED MALWARE PROPAGATION MODEL

In this section, we try to extract the pattern of $I(t)$ in terms of $S(L_i, t')$, or $p_I$ of Equation (18). We make the following definitions before we proceed with the analysis.

1) Early stage. An early stage of the breakout of a malware means only a small percentage of vulnerable hosts have been compromised, and the propagation follows exponential distributions.
2) Final stage. The final stage of the propagation of a malware means that all vulnerable hosts of a given network have been compromised.
3) Late stage. A late stage means the time interval between the early stage and the final stage.

We note that much research focuses on the early stage, and we define the early stage to meet the pervasively accepted condition; we coin the other two terms for the convenience of our following discussion. Moreover, we set variable $T_e$ as the time point at which a malware's progress transfers from its early stage to its late stage. In terms of mathematical expressions, we express the early, late and final stages as $0 \leq t < T_e$, $T_e \leq t < \infty$, and $t = \infty$, respectively.

Due to the complexity of Equation (22), it is difficult to obtain conclusions in a dynamic style. However, we are able to extract some conclusions under certain special conditions.

Lemma 1. If distributions $p(x)$ and $q(x)$ follow exponential distributions, then $p(x) * q(x)$ follows an exponential distribution as well.

Due to the space limitation, we skip the proof and refer interested readers to [29].

At the early stage of a malware breakout, we are well placed to obtain a clear conclusion.

Theorem 1. For large scale networks, such as the Internet, at the early stage of a malware propagation, the malware distribution in terms of networks follows exponential distributions.

Proof. At a time point of the early stage ($0 \leq t < T_e$) of a malware breakout, following Equation (6), we obtain the number of compromised networks as

$I(t) = I(0) e^{\beta_n M t}$.  (23)

It is clear that $I(t)$ follows an exponential distribution. For any of the compromised networks, we suppose it has progressed $t'$ ($0 < t' \leq t < T_e$) time units, and its size is

$S(L_i, t') = I_i(0) e^{\beta N_i t'}$.  (24)

Based on Equation (24), we find that the size of any compromised network follows an exponential distribution. As a result, all the sizes of compromised networks follow exponential distributions at the early stage. Based on Lemma 1, we obtain that the malware distribution in terms of networks follows exponential distributions at its early stage. □

Moreover, we can obtain a concrete conclusion on the propagation of malware at the final stage.

Theorem 2.
For large scale networks, such as the Internet, at the final stage ($t = \infty$) of a malware propagation, the malware distribution in terms of networks follows the power law distribution.

Proof. At the final stage, all vulnerable hosts have been compromised, namely,

$S(L_i, \infty) = N_i, \quad i = 1, 2, \ldots, M$.

Based on our previous discussion, we know $N_i$ ($i = 1, 2, \ldots, M$) follows the power law. As a result, the theorem holds. □

Now, we move our study to the late stage of malware propagation.

Theorem 3. For large scale networks, such as the Internet, at the late stage ($T_e \leq t < \infty$) of a malware breakout, the malware distribution includes two parts: a dominant power law body and a short exponential tail.

Proof. Suppose a malware propagation has progressed for $t$ ($t \gg T_e$) time units. Let $t' = t - T_e$. If we separate all the compromised $I(t)$ hosts by time point $t'$, we have two groups of compromised hosts. Following Theorem 2, as $t' \gg T_e$, the hosts compromised before $t'$ follow the power law. At the same time, all the networks compromised after $t'$ are still in their early stage; therefore, these recently compromised networks follow exponential distributions.

Now, we need to prove that the networks compromised after time point $t'$ are at the tail of the distribution. First of all, for a given network $L_i$ and $t_1 > t_2$, we have

$S(L_i, t_1) \geq S(L_i, t_2)$.  (25)

For two networks, $L_i$ and $L_j$, if $N_i \geq N_j$, then $L_i$ should be compromised earlier than $L_j$. Combining this with (25), we know the later compromised networks usually lie at the tail of the distribution. Due to the fact that $t' \gg T_e$, the length of the exponential tail is much shorter than the length of the main body of the distribution. □

7 PERFORMANCE EVALUATION

In this section, we examine our theoretical analysis through two well-known large-scale malware families: Android malware and Conficker. Android malware is a recent, fast developing and dominant smartphone based malware [19]. Different from Android malware, the Conficker worm is an Internet based state-of-the-art botnet [20]. Both data sets have been widely used by the community.

From the Android malware data set, we have an overview of the malware development from August 2010 to October 2011. There are 1,260 samples in total from 49 different Android malware families in the data set. A given Android malware program only targets one or a number of specific vulnerabilities; therefore, all smartphones sharing these vulnerabilities form a specific network for that Android malware. In other words, there are 49 networks in the data set, and it is reasonable to assume that the population of each network is huge. We sort the malware subclasses according to their size (number of samples in the data set) and present them in a log-log format in Fig. 3; the diagram is roughly a straight line. In other words, we can say that the Android malware distribution in terms of networks follows the power law.

We now examine the growth pattern of the total number of compromised hosts of Android malware against time, namely, the pattern of $I(t)$. We extract the data from the data set and present it in Table 2. We further transform the data into a graph as shown in Fig. 4. It shows that the member recruitment of Android malware follows an exponential distribution nicely during the 15-month time interval. We have to note that our experiments also indicate that this data does not fit the power law (we do not show them here due to space limitation).

In Fig. 4, we match a straight line to the real data through the least squares method.
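The least-squares fit just mentioned can be reproduced with a few lines of Python. The sketch below is our own illustration; the monthly counts are hypothetical, not the paper's data set. It fits a straight line to $\ln I(t)$ and reads off the growth rate and the implied number of seeds $I(0)$.

```python
# Sketch of the straight-line test used in the figures: fit y = a*x + b by
# least squares on transformed data. For a power-law test both axes are
# logged; for an exponential test only the ordinate is logged (Eq. (7)).
import math

def least_squares(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx  # slope, intercept

# Hypothetical per-month infection counts, growing roughly exponentially.
counts = [10, 13, 16, 21, 26, 33, 42, 52, 66, 83]
months = list(range(1, len(counts) + 1))

slope, intercept = least_squares(months, [math.log(c) for c in counts])
print("estimated rate a = %.4f, estimated seeds I(0) = %.1f"
      % (slope, math.exp(intercept)))
```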
Based on the data, we can estimate that the number of seeds ($I(0)$) is 10, and $a = 0.2349$. Following our previous discussion, we infer that the propagation of Android malware was in its early stage. This is reasonable, as the size of each Android vulnerable network is huge and the infection rate is quite low (the infection is basically based on contacts).

We also collected a large data set of Conficker from various aspects. Due to the space limitation, we can only present a few of them here to examine our theoretical analysis. First of all, we treat ASs as networks in the Internet; in general, ASs are large scale elements of the Internet. A few key statistics from the data set are listed in Table 3.

Fig. 3. The probability distribution of Android malware in terms of networks.
Table 2. The number of different Android malware against time (months) in 2010-2011.
Fig. 4. The growth of total compromised hosts by Android malware against time from August 2010 to October 2011.
Table 3. Statistics for Conficker distribution in terms of ASs.

We present the data in a log-log format in Fig. 5, which indicates that the distribution does follow the power law.

A unique feature of the power law is the scale free property. In order to examine this feature, we measure the compromised hosts in terms of domain names at three different domain levels: the top level, level 1, and level 2, respectively. Some statistics of this experiment are listed in Table 4. Once again, we present the data in a log-log format in Figs. 6a, 6b and 6c, respectively. The diagrams show that the main bodies of the three scale measures are roughly straight lines; in other words, they all fall into power law distributions. We note that the flat head in Fig. 6 can be explained through a Zipf-Mandelbrot distribution. Therefore, Theorem 2 holds.

Fig. 5. Power law distribution of Conficker in terms of autonomous networks.
Table 4. Statistics for Conficker distribution in terms of domain names at the three top levels.
Fig. 6. Power law distribution of the Conficker botnet in the top three levels of domain names.

In order to examine whether the tails are exponential, we take the smallest six data points from each tail of the three levels. It is reasonable to say that they are the networks compromised at the last 6 time units; the details are listed in Table 5 (we note that $t = 1$ is the sixth last time point, and $t = 6$ is the last time point). When we present the data of Table 5 in a graph as shown in Fig. 7, we find that they fit an exponential distribution very well, especially for the level 2 and level 3 domain name cases. This experiment confirms our claim in Theorem 3.

8 FURTHER DISCUSSION

In this paper, we have explored the problem of malware distribution in large-scale networks. There are many directions that could be further explored. We list some important ones as follows.

1) The dynamics of the late stage. We have found that the main body of malware distribution follows the power law with a short exponential tail at the late stage. It is very attractive to explore the mathematical mechanism of how the propagation leads to such kinds of mixed distributions.
2) The transition from exponential distribution to power law distribution. It is necessary to investigate when and how a malware distribution moves from an exponential distribution to the power law. In other words, how can we clearly define the transition point between the early stage and the late stage?
3) Multiple layer modelling. We employ the fluid model in both of the two layers in our study, as both layers are sufficiently large and meet the conditions for the modelling methods. In order to improve the accuracy of malware propagation, we may extend our work to $n$ ($n > 2$) layers. In another scenario, we
may expect to model a malware distribution for middle-size networks, e.g., an ISP network with many subnetworks. In these cases, the conditions for the fluid model may not hold; therefore, we need to seek suitable models to address the problem.
4) Epidemic models for the proposed two layer model. In this paper, we use the SI model, which is the simplest for epidemic analysis. More practical models, e.g., SIS or SIR, could be chosen to serve the same problem.
5) Distribution of multiple coexisting malware in networks. In reality, multiple malware programs may coexist in the same networks. Due to the fact that different malware focus on different vulnerabilities, the distributions of different malware should not be the same. It is challenging and interesting to establish mathematical models for the distribution of multiple malware in terms of networks.

9 SUMMARY AND FUTURE WORK

In this paper, we thoroughly explore the problem of malware distribution in large-scale networks. The solution to this problem is desperately desired by cyber defenders, as the network security community does not yet have solid answers. Different from previous modelling methods, we propose a two layer epidemic model: the upper layer focuses on networks of a large scale, for example, domains of the Internet; the lower layer focuses on the hosts of a given network. This two layer model improves the accuracy compared with the available single layer epidemic models in malware modelling. Moreover, the proposed two layer model offers us the distribution of malware in terms of the lower layer networks.

We perform a restricted analysis based on the proposed model, and obtain three conclusions: the distribution for a given malware in terms of networks follows an exponential distribution, a power law distribution with a short exponential tail, and a power law distribution, at its early, late, and final stage, respectively. In order to examine our theoretical findings, we have conducted extensive experiments based on two real-world large-scale malware families, and the results confirm our theoretical claims.

In regards to future work, we will first further investigate the dynamics of the late stage. More details of the findings are expected to be further studied, such as the length of the exponential tail of a power law distribution at the late stage. Second, defenders may care more about their own networks, e.g., the distribution of a given malware at their ISP domains, where the conditions for the two layer model may not hold; we need to seek appropriate models to address this problem. Finally, we are interested in studying the distribution of multiple malware on large-scale networks, as we only focus on one malware in this paper. We believe it is not a simple linear relationship in the multiple malware case compared to the single malware one.

ACKNOWLEDGMENTS

Dr Yu's work is partially supported by the National Natural Science Foundation of China (grant No. 61379041), Prof. Stojmenovic's work is partially supported by an NSERC Canada Discovery grant (grant No. 41801-2010), and the KAU Distinguished Scientists Program.

Shui Yu (M'05-SM'12) received the BEng and MEng degrees from the University of Electronic Science and Technology of China, Chengdu, P.R.
China, in 1993 and 1999, respectively, and the PhD degree from Deakin University, Victoria, Australia, in 2004. He is currently a senior lecturer with the School of Information Technology, Deakin University, Victoria, Australia. He has published nearly 100 peer reviewed papers, including papers in top journals and top conferences such as IEEE TPDS, IEEE TIFS, IEEE TFS, IEEE TMC, and IEEE INFOCOM. His research interests include networking theory, network security, and mathematical modeling. He actively serves his research communities in various roles, which include the editorial boards of the IEEE Transactions on Parallel and Distributed Systems, IEEE Communications Surveys and Tutorials, and IEEE Access, IEEE INFOCOM TPC member 2012-2015, symposium co-chair of IEEE ICC 2014 and IEEE ICNC 2013-2015, and many different roles on international conference organizing committees. He is a senior member of the IEEE, and a member of the AAAS.

Guofei Gu (S'06-M'08) received the PhD degree in computer science from the College of Computing, Georgia Institute of Technology. He is an assistant professor in the Department of Computer Science and Engineering, Texas A&M University (TAMU), College Station, TX. His research interests are in network and system security, such as malware analysis, detection, defense, intrusion and anomaly detection, and web and social networking security. He is currently directing the Secure Communication and Computer Systems (SUCCESS) Laboratory at TAMU. He received the 2010 National Science Foundation (NSF) CAREER Award and is a co-recipient of the 2010 IEEE Symposium on Security and Privacy (Oakland '10) Best Student Paper Award. He is a member of the IEEE.

Ahmed Barnawi received the PhD degree from the University of Bradford, United Kingdom, in 2006. He is an associate professor at the Faculty of Computing and IT, King Abdulaziz University, Jeddah, Saudi Arabia, where he has worked since 2007. He was a visiting professor at the University of Calgary in 2009. His research areas are cellular and mobile communications, mobile ad hoc and sensor networks, cognitive radio networks, and security. He received three strategic research grants and registered two patents in the US. He is a member of the IEEE.

Song Guo (M'02-SM'11) received the PhD degree in computer science from the University of Ottawa, Canada, in 2005. He is currently a senior associate professor at the School of Computer Science and Engineering, the University of Aizu, Japan. His research interests are mainly in the areas of protocol design and performance analysis for reliable, energy-efficient, and cost effective communications in wireless networks. He is an associate editor of the IEEE Transactions on Parallel and Distributed Systems and an editor of Wireless Communications and Mobile Computing. He is a senior member of the IEEE and the ACM.

Ivan Stojmenovic was editor-in-chief of the IEEE Transactions on Parallel and Distributed Systems (2010-13), and is the founder of three journals. He is an editor of the IEEE Transactions on Computers, IEEE Network, IEEE Transactions on Cloud Computing, and ACM Wireless Networks, and a steering committee member of the IEEE Transactions on Emerging Topics in Computing. He is on the Thomson Reuters list of Highly Cited Researchers from 2013, has the top h-index in Canada for mathematics and statistics, and has more than 15,000 citations. He received five Best Paper Awards. He is a fellow of the IEEE, the Canadian Academy of Engineering, and Academia Europaea.
He has received the Humboldt Research Award.
Lossless and Reversible Data Hiding in Encrypted Images with Public Key Cryptography
Xinpeng Zhang, Jing Long, Zichi Wang, and Hang Cheng
Abstract—This paper proposes a lossless, a reversible, and a combined data hiding scheme for ciphertext images encrypted by public key cryptosystems with probabilistic and homomorphic properties. In the lossless scheme, the ciphertext pixels are replaced with new values to embed the additional data into several LSB-planes of the ciphertext pixels by multi-layer wet paper coding. The embedded data can then be directly extracted from the encrypted domain, and the data embedding operation does not affect the decryption of the original plaintext image. In the reversible scheme, a preprocessing is employed to shrink the image histogram before image encryption, so that the modification on encrypted images for data embedding will not cause any pixel oversaturation in the plaintext domain. Although a slight distortion is introduced, the embedded data can be extracted and the original image can be recovered from the directly decrypted image. Due to the compatibility between the lossless and reversible schemes, the data embedding operations of the two manners can be simultaneously performed in an encrypted image. With the combined technique, a receiver may extract a part of the embedded data before decryption, and extract another part of the embedded data and recover the original plaintext image after decryption.
Index Terms—reversible data hiding, lossless data hiding, image encryption
I. INTRODUCTION
Encryption and data hiding are two effective means of data protection. While encryption techniques convert plaintext content into unreadable ciphertext, data hiding techniques embed additional data into cover media by introducing slight modifications. In some distortion-unacceptable scenarios, data hiding may be performed in a lossless or reversible manner. Although the terms "lossless" and "reversible" have the same meaning in a number of previous references, we distinguish them in this work.
We say a data hiding method is lossless if the display of the cover signal containing embedded data is the same as that of the original cover, even though the cover data have been modified for data embedding. For example, in [1], the pixels with the most used color in a palette image are assigned to some unused color indices for carrying the additional data, and these indices are redirected to the most used color. This way, although the indices of these pixels are altered, the actual colors of the pixels are kept unchanged. On the other hand, we say a data hiding method is reversible if the original cover content can be perfectly recovered from the cover version containing embedded data, even though a slight distortion has been introduced by the data embedding procedure. A number of mechanisms, such as difference expansion [2], histogram shift [3] and lossless compression [4], have been employed to develop reversible data hiding techniques for digital images. Recently, several good prediction approaches [5] and optimal transition probabilities under a payload-distortion criterion [6, 7] have been introduced to improve the performance of reversible data hiding.
The combination of data hiding and encryption has been studied in recent years. In some works, data hiding and encryption are joined in a simple manner. For example, a part of the cover data is used for carrying additional data and the rest of the data are encrypted for privacy protection [8, 9]. Alternatively, the additional data are embedded into a data space that is invariant to encryption operations [10]. In another type of work, data embedding is performed in the encrypted domain, and an authorized receiver can recover the original plaintext cover image and extract the embedded data. This technique is termed reversible data hiding in encrypted images (RDHEI). In some scenarios, for securely sharing secret images, a content owner may encrypt the images before transmission, and an inferior assistant or a channel administrator hopes to append some additional messages, such as the origin information, image notations or authentication data, within the encrypted images, though he does not know the image content. For example, when medical images have been encrypted for protecting patient privacy, a database administrator may aim to embed personal information into the corresponding encrypted images. Here, it may be hoped that the original content can be recovered without any error after decryption and retrieval of the additional message at the receiver side. In [11], the original image is encrypted by an exclusive-or operation with pseudo-random bits, and then the additional data are embedded by flipping a part of the least significant bits (LSB) of the encrypted image. By exploiting the spatial correlation in natural images, the embedded data and the original content can be retrieved at the receiver side. The performance of RDHEI can be further
improved by introducing an implementation order [12] or a flipping ratio [13]. In [14], each additional bit is embedded into a block of data encrypted by the Advanced Encryption Standard (AES). When a receiver decrypts the encrypted image containing additional data, however, the quality of the decrypted image is significantly degraded due to the disturbance of the additional data. In [15], the data-hider compresses the LSB of the encrypted image to generate a sparse space for carrying the additional data. Since only the LSB is changed in the data embedding phase, the quality of the directly decrypted image is satisfactory. Reversible data hiding schemes for encrypted JPEG images have also been presented [16]. In [17], a sparse data space for accommodating additional data is directly created by compressing the encrypted data. If the creation of the sparse data space or the compression is implemented before encryption, a better performance can be achieved [18, 19].
While the additional data are embedded into images encrypted with symmetric cryptosystems in the above-mentioned RDHEI methods, an RDHEI method with a public key cryptosystem is proposed in [20]. Although the computational complexity is higher, the establishment of a secret key through a secure channel between the sender and the receiver becomes unnecessary. With the method in [20], each pixel is divided into two parts, an even integer and a bit, and the two parts are encrypted using the Paillier mechanism [21], respectively. Then, the ciphertext values of the second parts of two adjacent pixels are modified to accommodate an additional bit. Due to the homomorphic property of the cryptosystem, the embedded bit can be extracted by comparing the corresponding decrypted values on the receiver side. In fact, the homomorphic property may be further exploited to implement signal processing in the encrypted domain [22, 23, 24]. For recovering the original plaintext image, an inverse operation to retrieve the second part of each pixel in the plaintext domain is required, and then the two decrypted parts of each pixel must be reorganized into a pixel.
This paper proposes a lossless, a reversible, and a combined data hiding scheme for public-key-encrypted images by exploiting the probabilistic and homomorphic properties of cryptosystems. With these schemes, the pixel division/reorganization is avoided and the encryption/decryption is performed on the cover pixels directly, so that the amount of encrypted data and the computational complexity are lowered. In the lossless scheme, due to the probabilistic property, although the data of the encrypted image are modified for data embedding, a direct decryption can still result in the original plaintext image, while the embedded data can be extracted in the encrypted domain. In the reversible scheme, a histogram shrink is realized before encryption so that the modification on the encrypted image for data embedding does not cause any pixel oversaturation in the plaintext domain. Although the data embedding in the encrypted domain may result in a slight distortion in the plaintext domain due to the homomorphic property, the embedded data can be extracted and the original content can be recovered from the directly decrypted image. Furthermore, the data embedding operations of the lossless and the reversible schemes can be simultaneously performed in an encrypted image. With the combined technique, a receiver may extract a part of the embedded data before decryption, and extract another part of the embedded data and recover the original plaintext image after decryption.
II. LOSSLESS DATA HIDING SCHEME
In this section, a lossless data hiding scheme for public-key-encrypted images is proposed. There are three parties in the scheme: an image provider, a data-hider, and a receiver. With a cryptosystem possessing the probabilistic property, the image provider encrypts each pixel of the original plaintext image using the public key of the receiver, and a data-hider who does not know the original image can modify the ciphertext pixel-values to embed some additional data into the encrypted image by multi-layer wet paper coding, under the condition that the decrypted values of the new and original ciphertext pixel values must be the same. When having the encrypted image containing the additional data, a receiver knowing the data hiding key may extract the embedded data, while a receiver with the private key of the cryptosystem may perform decryption to retrieve the original plaintext image. In other words, the embedded data can be extracted in the encrypted domain, but cannot be extracted after decryption, since the decrypted image would be the same as the original plaintext image due to the probabilistic property. That also means the data embedding does not affect the decryption of the plaintext image. The sketch of the lossless data hiding scheme is shown in Figure 1.
Figure 1. Sketch of the lossless data hiding scheme for public-key-encrypted images
A. Image encryption
In this phase, the image provider encrypts the plaintext image using the public key pk of a probabilistic cryptosystem. For each pixel value m(i, j), where (i, j) indicates the pixel position, the image provider calculates its ciphertext value
$c(i, j) = E\big[pk, m(i, j), r(i, j)\big]$,  (1)
where E is the encryption operation and r(i, j) is a random value. Then, the image provider collects the ciphertext values of all pixels to form an encrypted image.
Actually, the proposed scheme is compatible with various probabilistic public-key cryptosystems, such as the Paillier [18] and Damgard-Jurik [25] cryptosystems. With the Paillier cryptosystem [18], for two large primes p and q, calculate $n = p \cdot q$ and $\lambda = \mathrm{lcm}(p-1, q-1)$, where lcm means the least common multiple. Here, it should hold that $\gcd(n, (p-1) \cdot (q-1)) = 1$, where gcd means the greatest common divisor. The public key is composed of n and a randomly selected integer g in $Z^*_{n^2}$, while the private key is composed of $\lambda$ and
$\mu = \big(L(g^{\lambda} \bmod n^2)\big)^{-1} \bmod n$,  (2)
where
$L(x) = \dfrac{x - 1}{n}$.  (3)
In this case, (1) implies
$c(i, j) = g^{m(i,j)} \cdot r(i, j)^n \bmod n^2$,  (4)
where r(i, j) is a random integer in Z*n. The plaintext pixel value can be obtained using the private key,
$m(i, j) = L\big(c(i, j)^{\lambda} \bmod n^2\big) \cdot \mu \bmod n$.  (5)
As a generalization of the Paillier cryptosystem, the Damgard-Jurik cryptosystem [25] can also be used to encrypt the plaintext image. Here, the public key is composed of n and an element g in $Z^*_{n^{s+1}}$ such that $g = (1+n)^j \cdot x \bmod n^{s+1}$ for a known j relatively prime to n and an x belonging to a group isomorphic to $Z^*_n$, and we may choose d as the private key when it meets $d \bmod n \in Z^*_n$ and $d = 0 \bmod \lambda$. Then, the encryption in (1) can be rewritten as
$c(i, j) = g^{m(i,j)} \cdot r(i, j)^{n^s} \bmod n^{s+1}$,  (6)
where r(i, j) is a random integer in $Z^*_{n^{s+1}}$. By applying a recursive version of the Paillier decryption, the plaintext value can be obtained from the ciphertext value using the private key. Note that, because of the probabilistic property of the two cryptosystems, the same gray values at different positions may correspond to different ciphertext values.
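For concreteness, the following Python sketch (a toy, insecure illustration of ours with tiny primes; variable names are our own, and it requires Python 3.9+) implements Paillier key generation, encryption per Equation (4) and decryption per Equation (5), and demonstrates the probabilistic property: encrypting the same pixel value twice yields different ciphertexts that decrypt identically. It also previews the re-randomization of Equation (7) used by the data-hider.

```python
# Minimal (toy, insecure) Paillier sketch illustrating Eqs. (1)-(5).
# Tiny primes are used for readability; real deployments need large primes.
import math, random

p, q = 499, 547                    # toy primes with gcd(n, (p-1)(q-1)) = 1
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)       # lambda = lcm(p-1, q-1)
g = n + 1                          # a standard valid choice of g

def L(x):                          # L(x) = (x - 1) / n, Eq. (3)
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # Eq. (2)

def encrypt(m, r=None):               # Eq. (4): c = g^m * r^n mod n^2
    r = r or random.randrange(1, n)   # coprimality with n is near-certain here
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):                       # Eq. (5): m = L(c^lambda mod n^2)*mu mod n
    return (L(pow(c, lam, n2)) * mu) % n

pixel = 137
c1, c2 = encrypt(pixel), encrypt(pixel)
print(c1 != c2, decrypt(c1) == pixel)        # probabilistic, yet both decrypt
# Re-randomisation as in Eq. (7): c' = c * r'^n mod n^2 decrypts identically.
c_prime = (c1 * pow(random.randrange(1, n), n, n2)) % n2
print(decrypt(c_prime) == pixel)             # True
```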
B. Data embedding
When having the encrypted image, the data-hider may embed some additional data into it in a lossless manner. The pixels in the encrypted image are reorganized as a sequence according to the data hiding key. For each encrypted pixel, the data-hider selects a random integer r’(i, j) in Z*n and calculates
$c'(i, j) = c(i, j) \cdot r'(i, j)^n \bmod n^2$  (7)
if the Paillier cryptosystem is used for image encryption, while the data-hider selects a random integer r′(i, j) in $Z^*_{n^{s+1}}$ and calculates
$c'(i, j) = c(i, j) \cdot r'(i, j)^{n^s} \bmod n^{s+1}$  (8)
if the Damgard-Jurik cryptosystem is used for image encryption. We denote the k-th bits of the binary representations of c(i, j) and c′(i, j) as $b_k(i, j)$ and $b'_k(i, j)$, respectively:
$b_k(i, j) = \lfloor c(i, j) / 2^{k-1} \rfloor \bmod 2, \quad k = 1, 2, \ldots$  (9)
$b'_k(i, j) = \lfloor c'(i, j) / 2^{k-1} \rfloor \bmod 2, \quad k = 1, 2, \ldots$  (10)
Clearly, the probability of $b_k(i, j) = b'_k(i, j)$ ($k = 1, 2, \ldots$) is 1/2. We also define the sets
$S_1 = \{(i, j) \mid b_1(i, j) \neq b'_1(i, j)\}$,
$S_2 = \{(i, j) \mid b_1(i, j) = b'_1(i, j),\ b_2(i, j) \neq b'_2(i, j)\}$,
$\ldots$
$S_K = \{(i, j) \mid b_k(i, j) = b'_k(i, j),\ k = 1, 2, \ldots, K-1,\ b_K(i, j) \neq b'_K(i, j)\}$.  (11)
By viewing the k-th LSBs of the encrypted pixels as a wet paper channel (WPC) [26] and the k-th LSBs in $S_k$ as the "dry" elements of the wet paper channel, the data-hider may employ wet paper coding [26] to embed the additional data by replacing a part of the c(i, j) with c′(i, j). The details are given in the following.
Considering the first LSB-layer: if c(i, j) are replaced with c′(i, j), the first LSBs in $S_1$ would be flipped and the rest of the first LSBs would be unchanged. So, the first LSBs of the encrypted pixels can be regarded as a WPC, which includes changeable (dry) elements and unchangeable (wet) elements. In other words, the first LSBs in $S_1$ are dry elements and the rest of the first LSBs are wet elements. By using wet paper coding [26], one can represent on average $N_d$ bits by only flipping a part of the dry elements, where $N_d$ is the number of dry elements. In this scenario, the data-hider may flip the dry elements by replacing c(i, j) with c′(i, j). Denoting the number of pixels in the image as N, the data-hider may embed on average N/2 bits in the first LSB-layer using wet paper coding.
Considering the second LSB (SLSB) layer, we call the SLSBs in $S_2$ dry elements and the rest of the SLSBs wet elements. Note that the first LSBs of the ciphertext pixels in $S_1$ have been determined by replacing c(i, j) with c′(i, j) or keeping c(i, j) unchanged in the first LSB-layer embedding, meaning that the SLSBs in $S_1$ are unchangeable in the second layer. Then, the data-hider may flip a part of the SLSBs in $S_2$ by replacing c(i, j) with c′(i, j) to embed on average N/4 bits using wet paper coding.
Similarly, in the k-th LSB layer, the data-hider may flip a part of the k-th LSBs in $S_k$ to embed on average $N/2^k$ bits. When the data embedding is implemented in K layers, a total of $N \cdot (1 - 1/2^K)$ bits, on average, are embedded. That implies the embedding rate, the ratio between the number of embedded bits and the number of pixels in the cover image, is approximately $(1 - 1/2^K)$, so the upper bound of the embedding rate is 1 bit per pixel. The next subsection will show that, although a part of the c(i, j) is replaced with c′(i, j), the original plaintext image can still be obtained by decryption.
C. Data extraction and image decryption
After receiving an encrypted image containing the additional data, if the receiver knows the data-hiding key, he may calculate the k-th LSBs of the encrypted pixels, and then extract the embedded data from the K LSB-layers using wet paper coding. On the other hand, if the receiver knows the private key of the used cryptosystem, he may perform decryption to obtain the original plaintext image. When the Paillier cryptosystem is used, Equation (4) implies
(12) ()()()()2,,,njirgjicnjim⋅+⋅=α
where α is an integer. By substituting (12) into (7), there is
$c'(i, j) = g^{m(i,j)} \cdot \big(r(i, j) \cdot r'(i, j)\big)^n \bmod n^2$.  (13)
Since $r(i, j) \cdot r'(i, j)$ can be viewed as another random integer in $Z^*_n$, the decryption of c′(i, j) will result in the plaintext value,
$m(i, j) = L\big(c'(i, j)^{\lambda} \bmod n^2\big) \cdot \mu \bmod n$.  (14)
Similarly, when Damgard-Jurik cryptosystem is used,
$c'(i, j) = g^{m(i,j)} \cdot \big(r(i, j) \cdot r'(i, j)\big)^{n^s} \bmod n^{s+1}$.  (15)
The decryption on c’(i, j) will also result in the plaintext value. In other words, the replacement of ciphertext pixel values for data embedding does not affect the decryption result.
III. REVERSIBLE DATA HIDING SCHEME
This section proposes a reversible data hiding scheme for public-key-encrypted images. In the reversible scheme, a preprocessing is employed to shrink the image histogram, and then each pixel is encrypted with an additive homomorphic cryptosystem by the image provider. When having the encrypted image, the data-hider modifies the ciphertext pixel values to embed a bit-sequence generated from the additional data and error-correction codes. Due to the homomorphic property, the modification in the encrypted domain will result in a slight increase/decrease of the plaintext pixel values, implying that a decryption can be implemented to obtain an image similar to the original plaintext image on the receiver side. Because of the histogram shrink before encryption, the data embedding operation does not cause any overflow/underflow in the directly decrypted image. Then, the original plaintext image can be recovered and the embedded additional data can be extracted from the directly decrypted image. Note that the data-extraction and content-recovery of the reversible scheme are performed in the plaintext domain, while the data extraction of the previous lossless scheme is performed in the encrypted domain and content recovery is needless. The sketch of the reversible data hiding scheme is given in Figure 2.
Figure 2. Sketch of the reversible data hiding scheme for public-key-encrypted images
A. Histogram shrink and image encryption
In the reversible scheme, a small integer δ shared by the image provider, the data-hider and the receiver will be used; its value will be discussed later. Denote the number of pixels in the original plaintext image with gray value v as $h_v$, implying
$\sum_{v=0}^{255} h_v = N$,  (16)
where N is the number of all pixels in the image. The image provider collects the pixels with gray values in [0, δ+1], and represents their values as a binary stream BS1. When an efficient lossless source coding is used, the length of BS1 is
$l_1 \approx \Big(\sum_{v=0}^{\delta+1} h_v\Big) \cdot H\Big(\dfrac{h_0}{\sum_{v=0}^{\delta+1} h_v}, \dfrac{h_1}{\sum_{v=0}^{\delta+1} h_v}, \ldots, \dfrac{h_{\delta+1}}{\sum_{v=0}^{\delta+1} h_v}\Big)$,  (17)
where $H(\cdot)$ is the entropy function. The image provider also collects the pixels with gray values in [255−δ, 255], and represents their values as a binary stream BS2 with a length $l_2$. Similarly,
$l_2 \approx \Big(\sum_{v=255-\delta}^{255} h_v\Big) \cdot H\Big(\dfrac{h_{255-\delta}}{\sum_{v=255-\delta}^{255} h_v}, \ldots, \dfrac{h_{255}}{\sum_{v=255-\delta}^{255} h_v}\Big)$.  (18)
Then, the gray values of all pixels are enforced into [δ+1, 255−δ]:

$m_S(i, j) = \begin{cases} 255-\delta, & \text{if } m(i, j) \geq 255-\delta \\ m(i, j), & \text{if } \delta+1 < m(i, j) < 255-\delta \\ \delta+1, & \text{if } m(i, j) \leq \delta+1 \end{cases}$  (19)
Denoting the new histogram as $h'_v$, there must be
$h'_v = \begin{cases} 0, & v \leq \delta \\ \sum_{u=0}^{\delta+1} h_u, & v = \delta+1 \\ h_v, & \delta+1 < v < 255-\delta \\ \sum_{u=255-\delta}^{255} h_u, & v = 255-\delta \\ 0, & v > 255-\delta \end{cases}$  (20)
The image provider finds the peak of the new histogram,
$V = \arg\max_{\delta+1 \leq v \leq 255-\delta} h'_v$.  (21)
The image provider also divides all pixels into two sets: the first set including (N−8) pixels and the second set including the remaining 8 pixels, and maps each bit of BS1, BS2 and the LSBs of the pixels in the second set to a pixel in the first set with gray value V. Since gray values close to extreme black/white are rare, there is
$h'_V \geq l_1 + l_2 + 16$  (22)
when δ is not too large. In this case, the mapping operation is feasible. Here, the 8 pixels in the second set cannot be used to carry BS1/BS2 since their LSBs are used to carry the value of V, while 8 pixels in the first set cannot be used to carry BS1/BS2 since their LSBs are used to carry the original LSBs of the second set. So, a total of 16 pixels cannot be used for carrying BS1/BS2; that is the reason for the value 16 in (22). An experiment on 1000 natural images shows that (22) always holds when δ is less than 15, so we recommend the parameter δ < 15. Then, a histogram shift operation is made:
$m_T(i, j) = \begin{cases} m_S(i, j), & \text{if } m_S(i, j) > V \\ V, & \text{if } m_S(i, j) = V \text{ and the corresponding bit is } 0 \\ V-1, & \text{if } m_S(i, j) = V \text{ and the corresponding bit is } 1 \\ m_S(i, j) - 1, & \text{if } m_S(i, j) < V \end{cases}$  (23)
In other words, BS1, BS2 and the LSBs of the pixels in the second set are carried by the pixels in the first set. After this, the image provider represents the value of V as 8 bits and maps them to the pixels in the second set in a one-to-one manner. Then, the values of the pixels in the second set are modified as follows:
$m_T(i, j) = \begin{cases} m_S(i, j), & \text{if the LSB of } m_S(i, j) \text{ is the same as the corresponding bit} \\ m_S(i, j) - 1, & \text{if the LSB of } m_S(i, j) \text{ differs from the corresponding bit} \end{cases}$  (24)
That means the value of V is embedded into the LSB of the second set. This way, all pixel values must fall into [δ, 255−δ].
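As a small illustration of the preprocessing, the Python sketch below (our own, with hypothetical pixel values) applies the shrink mapping of Equation (19); the squeezed-out extreme values are exactly the ones the image provider serializes into BS1/BS2 so that recovery is perfect.

```python
# Sketch of the histogram-shrink mapping of Eq. (19): gray values are
# forced into [delta+1, 255-delta]; the original extreme values are the
# ones later serialised into the streams BS1/BS2 for perfect recovery.
def shrink_pixel(m, delta):
    if m >= 255 - delta:
        return 255 - delta
    if m <= delta + 1:
        return delta + 1
    return m

delta = 4
image = [0, 3, 5, 128, 250, 252, 255]            # toy pixel values
print([shrink_pixel(m, delta) for m in image])   # [5, 5, 5, 128, 250, 251, 251]
```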
At last, the image provider encrypts all pixels using a public key cryptosystem with additive homomorphic property, such as Paillier and Damgard-Jurik cryptosystems. When Paillier cryptosystem is used, the ciphertext pixel is
$c(i, j) = g^{m_T(i,j)} \cdot r(i, j)^n \bmod n^2$.  (25)
And, when Damgard-Jurik cryptosystem is used, the ciphertext pixel is
$c(i, j) = g^{m_T(i,j)} \cdot r(i, j)^{n^s} \bmod n^{s+1}$.  (26)
Then, the ciphertext values of all pixels are collected to form an encrypted image.
B. Data embedding
With the encrypted image, the data-hider divides the ciphertext pixels into two sets: Set A includes c(i, j) with odd values of (i+j), and Set B includes c(i, j) with even values of (i+j). Without loss of generality, we suppose the number of pixels in Set A is N/2. Then, the data-hider employs error-correction codes to expand the additional data into a bit-sequence of length N/2, and maps the bits of the coded bit-sequence to the ciphertext pixels in Set A in a one-to-one manner, which is determined by the data-hiding key. When the Paillier cryptosystem is used, if the bit is 0, the corresponding ciphertext pixel is modified as
$c'(i, j) = c(i, j) \cdot g^{n-\delta} \cdot r'(i, j)^n \bmod n^2$,  (27)
where r′(i, j) is an integer randomly selected in $Z^*_n$. If the bit is 1, the corresponding ciphertext pixel is modified as
$c'(i, j) = c(i, j) \cdot g^{\delta} \cdot r'(i, j)^n \bmod n^2$.  (28)
When Damgard-Jurik cryptosystem is used, if the bit is 0, the corresponding ciphertext pixel is modified as
$c'(i, j) = c(i, j) \cdot g^{n^s-\delta} \cdot r'(i, j)^{n^s} \bmod n^{s+1}$,  (29)
where r′(i, j) is an integer randomly selected in $Z^*_{n^{s+1}}$. If the bit is 1, the corresponding ciphertext pixel is modified as
$c'(i, j) = c(i, j) \cdot g^{\delta} \cdot r'(i, j)^{n^s} \bmod n^{s+1}$.  (30)
This way, an encrypted image containing additional data is produced. Note that the additional data are embedded into Set A. Although the pixels in Set B may provide side information about the pixel-values in Set A, which will be used for data extraction, the pixel-values in Set A are difficult to obtain precisely on the receiver side, leading to possible errors in the directly extracted data. So, an error-correction coding mechanism is employed here to ensure successful data extraction and perfect image recovery.
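The homomorphic embedding of Equations (27)-(28) can be demonstrated with the toy Paillier helpers sketched in Section II (reusing those names is an assumption of this illustration): multiplying a ciphertext by $g^{\delta}$ adds δ to the hidden pixel, and multiplying by $g^{n-\delta}$ subtracts δ, since $n - \delta \equiv -\delta \pmod n$.

```python
# Sketch of Eqs. (27)-(28), reusing the toy Paillier helpers defined in the
# earlier sketch (n, n2, g, encrypt, decrypt, and the random module):
# multiplying a ciphertext by g^delta (resp. g^(n-delta)) adds (resp.
# subtracts) delta to the hidden pixel, thanks to additive homomorphism.
def embed_bit(c, bit, delta):
    shift = delta if bit == 1 else n - delta     # g^(n-delta) acts as -delta
    r2 = random.randrange(1, n)                  # fresh randomiser r'(i,j)
    return (c * pow(g, shift, n2) * pow(r2, n, n2)) % n2

delta, pixel = 4, 137
c = encrypt(pixel)
print(decrypt(embed_bit(c, 1, delta)))   # 141 = pixel + delta
print(decrypt(embed_bit(c, 0, delta)))   # 133 = pixel - delta
```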
C. Image decryption, data extraction and content recovery
After receiving an encrypted image containing additional data, the receiver firstly performs decryption using his private key. We denote the decrypted pixels as m’(i, j). Due to the homomorphic property, the decrypted pixel values in Set A meet
$m'(i, j) = \begin{cases} m_T(i, j) + \delta, & \text{if the corresponding bit is } 1 \\ m_T(i, j) - \delta, & \text{if the corresponding bit is } 0 \end{cases}$  (31)
On the other hand, the decrypted pixel values in Set B are just $m_T(i, j)$, since their ciphertext values are unchanged in the data embedding phase. When δ is small, the decrypted image is perceptually similar to the original plaintext image.
Then, the receiver with the data-hiding key can extract the embedded data from the directly decrypted image. He estimates the pixel values in Set A using their neighbors,
$\hat{m}_T(i, j) = \dfrac{m_T(i-1, j) + m_T(i+1, j) + m_T(i, j-1) + m_T(i, j+1)}{4}$,  (32)
and obtains an estimated version of the coded bit-sequence by comparing the decrypted and estimated pixel values in Set A. That means the coded bit is estimated as 0 if $m'(i, j) \leq \hat{m}_T(i, j)$, or as 1 if $m'(i, j) > \hat{m}_T(i, j)$. With the estimate of the coded
bit-sequence, the receiver may employ the error-correction method to retrieve the original coded bit-sequence and the embedded additional data. Note that, with a larger δ, the error rate in the estimate of the coded bits would be lower, so that more additional data can be embedded while still ensuring successful error correction and data extraction. In other words, a smaller δ would result in a higher error rate in the estimate of the coded bits, so that the error correction may be unsuccessful when an excessive payload is embedded. That means the embedding capacity of the reversible data hiding scheme depends on the value of δ.
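The receiver-side estimation of Equations (31)-(32) amounts to a four-neighbor average followed by a comparison. The sketch below is our own toy illustration with a hypothetical 3×3 neighborhood; it guesses the coded bit of a Set-A pixel from its Set-B neighbors.

```python
# Sketch of Eqs. (31)-(32): a Set-A pixel is predicted from its four Set-B
# neighbours, and the coded bit is guessed by comparing the decrypted value
# with the prediction.
def estimate(img, i, j):                   # Eq. (32): four-neighbour mean
    return (img[i-1][j] + img[i+1][j] + img[i][j-1] + img[i][j+1]) / 4.0

def guess_bit(img, i, j):                  # 1 if shifted up by delta, else 0
    return 1 if img[i][j] > estimate(img, i, j) else 0

# Toy 3x3 patch: the centre (Set A) pixel was shifted by +delta = 4 (bit 1).
patch = [[100, 101, 100],
         [ 99, 105,  98],
         [100, 100, 101]]
print(guess_bit(patch, 1, 1))              # 1, since 105 > 99.5
```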
After retrieving the original coded bit-sequence and the embedded additional data, the original plaintext image may be further recovered. For the pixels in Set A, mT(i, j) are retrieved according to the coded bit-sequence,
$m_T(i, j) = \begin{cases} m'(i, j) - \delta, & \text{if the corresponding bit is } 1 \\ m'(i, j) + \delta, & \text{if the corresponding bit is } 0 \end{cases}$  (33)
For the pixels in Set B, as mentioned above, $m_T(i, j)$ are just m′(i, j). Then, the receiver divides all $m_T(i, j)$ into two sets: the first one including (N−8) pixels and the second one including the remaining 8 pixels. The receiver may obtain the value of V from the LSBs in the second set, and retrieve $m_S(i, j)$ of the first set:
$m_S(i, j) = \begin{cases} m_T(i, j), & \text{if } m_T(i, j) > V \\ V, & \text{if } m_T(i, j) = V \text{ or } V-1 \\ m_T(i, j) + 1, & \text{if } m_T(i, j) < V-1 \end{cases}$  (34)
Meanwhile, the receiver extracts a bit 0 from a pixel with mT(i, j) = V and a bit 1 from a pixel with mT(i, j) = V−1. After decomposing the extracted data into BS1, BS2 and the LSB of mS(i, j) in the second set, the receiver retrieves mS(i, j) of the second set,
$m_S(i, j) = \begin{cases} m_T(i, j), & \text{if the LSB of } m_T(i, j) \text{ and the corresponding bit are the same} \\ m_T(i, j) + 1, & \text{if the LSB of } m_T(i, j) \text{ and the corresponding bit are different} \end{cases}$  (35)
The receiver then collects all pixels with $m_S(i, j) = \delta+1$ and, according to BS1, recovers their original values within [0, δ+1]. Similarly, the original values of the pixels with $m_S(i, j) = 255-\delta$ are recovered within [255−δ, 255] according to BS2. This way, the original plaintext image is recovered.
IV. COMBINED DATA HIDING SCHEME
As described in Sections II and III, a lossless and a reversible data hiding scheme for public-key-encrypted images have been proposed. In both schemes, the data embedding operations are performed in the encrypted domain, but the data extraction procedures of the two schemes are very different. With the lossless scheme, data embedding does not affect the plaintext content, and data extraction is also performed in the encrypted domain. With the reversible scheme, there is a slight distortion in the directly decrypted image caused by data embedding, and data extraction and image recovery must be performed in the plaintext domain. That implies, on the receiver side, the additional data embedded by the lossless scheme cannot be extracted after decryption, while the additional data embedded by the reversible scheme cannot be extracted before decryption. In this section, we combine the lossless and reversible schemes to construct a new scheme in which data extraction in either of the two domains is feasible. That means additional data for various purposes may be embedded into an encrypted image, and a part of the additional data can be extracted before decryption while another part can be extracted after decryption.
In the combined scheme, the image provider performs histogram shrink and image encryption as described in Subsection III.A. When having the encrypted image, the data-hider may embed the first part of the additional data using the method described in Subsection III.B. Denoting the ciphertext pixel values containing the first part of additional data as c′(i, j), the data-hider calculates
$c''(i, j) = c'(i, j) \cdot r''(i, j)^n \bmod n^2$  (36)
or
$c''(i, j) = c'(i, j) \cdot r''(i, j)^{n^s} \bmod n^{s+1}$,  (37)
where r″(i, j) is randomly selected in $Z^*_n$ or $Z^*_{n^{s+1}}$ for the Paillier and Damgard-Jurik cryptosystems, respectively. Then, he may employ wet paper coding in several LSB-planes of the ciphertext pixel values to embed the second part of the additional data by replacing a part of the c′(i, j) with c″(i, j); in other words, the method described in Subsection II.B is used to embed the second part of the additional data. On the receiver side, the receiver firstly extracts the second part of the additional data from the LSB-planes of the encrypted domain. Then, after decryption with his private key, he extracts the first part of the additional data and recovers the original plaintext image from the directly decrypted image as described in Subsection III.C. The sketch of the combined scheme is shown in Figure 3. Note that, since the reversibly embedded data should be extracted in the plaintext domain and the lossless embedding does not affect the decrypted result, the lossless embedding should be implemented after the reversible embedding in the combined scheme.
Figure 3. Sketch of combined scheme
V. EXPERIMENTAL RESULTS
Four gray images sized 512×512, Lena, Man, Plane and Crowd, shown in Figure 4, and 50 natural gray images sized 1920×2560, which contain landscape and people, were used as the original plaintext covers in the experiment. With the lossless scheme, all pixels in the cover images were firstly encrypted using the Paillier cryptosystem, and then the additional data were embedded into the LSB-planes of the ciphertext pixel-values using multi-layer wet paper coding as in Subsection II.B. Table 1 lists the average embedding rates when K LSB-planes were used for carrying the additional data in the 54 encrypted images; in fact, the average embedding rate is very close to $(1 - 1/2^K)$. On the receiver side, the embedded data can be extracted from the encrypted domain. Also, the original plaintext images can be retrieved by direct decryption; in other words, when the decryption was performed on the encrypted images containing additional data, the original plaintext images were obtained.
With the reversible scheme, all pixels were encrypted after histogram shrink as in Subsection III.A. Then, half of the ciphertext pixels were modified to carry the additional data as in Subsection III.B, and, after decryption, we implemented the data extraction and image recovery in the plaintext domain. Here, low-density parity-check (LDPC) coding was used to expand the additional data into a bit-sequence in the data embedding phase, and to retrieve the coded bit-sequence and the embedded additional data on the receiver side. Although the error-correction mechanism was employed, an excessive payload may cause the failure of data extraction and image recovery. With a larger value of δ, a higher embedding capacity can be ensured, while a higher distortion is introduced into the directly decrypted image. For instance, when using Lena as the cover and δ = 4, a total of 4.6×10^4 bits were embedded and the PSNR of the directly decrypted image was 40.3 dB. When using δ = 7, a total of 7.7×10^4 bits were embedded and the PSNR of the directly decrypted image was 36.3 dB. In both cases, the embedded additional data and the original plaintext image were extracted and recovered without any error. Figure 5 gives the two directly decrypted images. Figure 6 shows the rate-distortion curves generated from different cover images and various values of δ under the condition of successful data-extraction/image-recovery; the abscissa represents the pure embedding rate, and the ordinate is the PSNR of the directly decrypted image. The rate-distortion curves on the four test images, Lena, Man, Plane and Crowd, are given in Figure 6. We also used the 50 natural gray images sized 1920×2560 as the original plaintext covers, and calculated the average embedding rates and PSNR values, which are shown as a curve marked by asterisks in the figure. Furthermore, Figure 7 compares the average rate-PSNR performance between the proposed reversible scheme with public-key cryptosystems and several previous methods with symmetric cryptosystems, under the condition that the original plaintext image can be recovered without any error using the data-hiding and encryption keys. In [11] and [12], each block of the encrypted image with a given size is used to carry one additional bit, so the embedding rates of these two works are fixed and low. With various parameters, we obtain the performance curves of the method in [15] and the proposed reversible scheme, which are shown in the figure. It can be seen that the proposed reversible scheme significantly outperforms the previous methods when the embedding rate is larger than 0.01 bpp. With the combined scheme, we implemented the histogram shrink operation with a value of the parameter δ, and encrypted the preprocessed image.
Efficient Top-k Retrieval on Massive Data
Abstract:
Top-k query is an important operation that returns a set of interesting points from a potentially huge data space. This paper shows that the existing algorithms cannot process top-k queries on massive data efficiently, and proposes a novel table-scan-based algorithm, T2S, to compute top-k results on massive data efficiently. T2S first constructs a presorted table whose tuples are arranged in the order of the round-robin retrieval on the sorted lists. T2S maintains only a fixed number of tuples to compute the results. The early-termination checking for T2S is presented in this paper, along with an analysis of the scan depth. Selective retrieval is devised to skip the tuples in the presorted table which cannot be top-k results. The theoretical analysis proves that selective retrieval can reduce the number of retrieved tuples significantly. The construction and incremental-update/batch-processing methods for the used structures are also proposed.
Introduction:
Top-k query is an important operation to return a set of interesting points from a potentially huge data space. In a top-k query, a ranking function F is provided to determine the score of each tuple, and the k tuples with the largest scores are returned. Due to its practical importance, top-k query has attracted extensive attention. This paper proposes a novel table-scan-based algorithm, T2S (Top-k by Table Scan), to compute top-k results on massive data efficiently.
An analysis of the scan depth in T2S is also developed. Since the result size k is usually small and the vast majority of the tuples retrieved in PT are not top-k results, this paper devises selective retrieval to skip the tuples in PT which are not query results. The theoretical analysis proves that selective retrieval can reduce the number of retrieved tuples significantly.
The construction and incremental-update/batch-processing methods for the data structures are proposed in this paper. Extensive experiments are conducted on synthetic and real-life data sets.
Existing System:
Due to its practical importance, top-k query has attracted extensive attention. The existing top-k algorithms can be classified into three types: index-based methods, view-based methods and sorted-list-based methods. Index-based methods (or view-based methods) make use of pre-constructed indexes or views to process top-k queries.
Since a concrete index or view is constructed on a specific subset of attributes, indexes or views of exponential order with respect to the attribute number have to be built to cover the actual queries, which is prohibitively expensive. In practice, the indexes or views can only be built on a small and selective set of attribute combinations.
Sorted-list-based methods retrieve the sorted lists in a round-robin fashion, maintain the retrieved tuples, and update their lower-bound and upper-bound scores. When the kth largest lower-bound score is not less than the upper-bound scores of the other candidates, the k candidates with the largest lower-bound scores are the top-k results.
Sorted-list-based methods compute top-k results by retrieving the involved sorted lists, and naturally can support the actual queries. However, it is shown in this paper that the numbers of tuples retrieved and maintained in these methods increase exponentially with the attribute number, and polynomially with the tuple number and the result size. A minimal sketch of this family of methods is given below.
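The sorted-list-based procedure just described can be sketched in a few lines of Python. The sketch is illustrative only (names such as nra_topk are ours): it assumes non-negative attribute values and a plain-sum ranking function F, and follows the round-robin retrieval with lower/upper-bound maintenance and the termination test described above.

    # Simplified sorted-list-based (NRA-style) top-k sketch.
    def nra_topk(tuples: dict, k: int):
        """tuples: id -> tuple of attribute values; returns top-k ids by sum."""
        m = len(next(iter(tuples.values())))
        # one list per attribute, ids sorted by that attribute descending
        lists = [sorted(tuples, key=lambda t, a=a: -tuples[t][a]) for a in range(m)]
        seen = {}                # id -> {attribute index: known value}
        last = [None] * m        # last value read from each sorted list
        for depth in range(len(tuples)):
            for a in range(m):   # one round-robin retrieval step
                tid = lists[a][depth]
                last[a] = tuples[tid][a]
                seen.setdefault(tid, {})[a] = tuples[tid][a]
            # lower bound: unknown attributes count as 0 (non-negative values);
            # upper bound: unknown attributes count as the last value read
            lower = {t: sum(vs.values()) for t, vs in seen.items()}
            upper = {t: sum(vs.get(a, last[a]) for a in range(m))
                     for t, vs in seen.items()}
            unseen_ub = sum(last)          # bound for any still-unseen tuple
            best = sorted(lower, key=lower.get, reverse=True)[:k]
            if len(best) == k:
                kth = lower[best[-1]]
                rest = [upper[t] for t in seen if t not in best]
                if len(seen) < len(tuples):
                    rest.append(unseen_ub)
                if not rest or kth >= max(rest):   # early termination check
                    return best
        return sorted(seen, key=lambda t: sum(seen[t].values()), reverse=True)[:k]

    # Example: nra_topk({"t1": (9, 8), "t2": (7, 7), "t3": (8, 2),
    #                    "t4": (1, 9)}, 2) returns ["t1", "t2"].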
Disadvantages:
- High computational overhead.
- High data redundancy.
- Time-consuming query processing.
Problem Definition:
Ranking is a central part of many information retrieval problems, such as document retrieval, collaborative filtering, sentiment analysis, and computational advertising (online ad placement).
Training data consists of queries and documents matched together with a relevance degree for each match. It may be prepared manually by human assessors (or raters, as Google calls them), who check results for some queries and determine the relevance of each result. It is not feasible to check the relevance of all documents, so typically a technique called pooling is used: only the top few documents, retrieved by some existing ranking models, are checked.
Typically, users expect a search query to complete in a short time (such as a few hundred milliseconds for web search), which makes it impossible to evaluate a complex ranking model on each document in the corpus, and so a two-phase scheme is used.
Proposed System:
Our proposed system uses layered indexing to organize the tuples into multiple consecutive layers; the top-k results can be computed using at most k layers of tuples. We also propose a layer-based Pareto-based dominant graph to express the dominance relationships between records, so that a top-k query is implemented as a graph traversal problem.
We then propose a dual-resolution layer structure, with which a top-k query can be processed efficiently by traversing the dual-resolution layers through the relationships between tuples. We further propose the Hybrid-Layer Index, which integrates layer-level filtering and list-level filtering to significantly reduce the number of tuples retrieved in query processing, and view-based algorithms that pre-construct specified materialized views according to some ranking functions.
Given a top-k query, one or more optimal materialized views are selected to return the top-k results efficiently. We propose LPTA+ to significantly improve the efficiency of the state-of-the-art LPTA algorithm. The materialized views are cached in memory; LPTA+ reduces the iterative calling of the linear-programming sub-procedure, thus greatly improving efficiency over the LPTA algorithm. In practical applications, a concrete index (or view) is built on a specific subset of attributes. Due to the prohibitively expensive overhead of covering all attribute combinations, the indexes (or views) can only be built on a small and selective set of attribute combinations.
If the attribute combinations of top-k queries are fixed, index-based or view-based methods can provide superior performance. However, on massive data, users often issue ad-hoc queries, and it is very likely that the indexes (or views) involved in those queries have not been built; the practicability of these methods is thus limited greatly.
Correspondingly, T2S only builds a presorted table, with which a top-k query on any attribute combination can be processed. This reduces the space overhead significantly compared with index-based (or view-based) methods, and makes T2S practical; a sketch of the idea follows.
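The following is a minimal sketch of the presorted-table idea described above, under stated assumptions: a plain-sum scoring function, in-memory dictionaries, and with selective retrieval and the exact early-termination bound omitted. The function names are ours, not from the paper.

    import heapq

    def build_pt(tuples: dict) -> list:
        """Arrange tuple ids in round-robin order over per-attribute sorted lists."""
        m = len(next(iter(tuples.values())))
        lists = [sorted(tuples, key=lambda t, a=a: -tuples[t][a]) for a in range(m)]
        pt, emitted = [], set()
        for depth in range(len(tuples)):
            for a in range(m):
                tid = lists[a][depth]
                if tid not in emitted:      # first appearance wins
                    emitted.add(tid)
                    pt.append(tid)
        return pt

    def t2s_topk(tuples: dict, pt: list, k: int):
        heap = []                           # min-heap of (score, id), size <= k
        for tid in pt:
            score = sum(tuples[tid])        # F = sum, for illustration
            if len(heap) < k:
                heapq.heappush(heap, (score, tid))
            elif score > heap[0][0]:
                heapq.heapreplace(heap, (score, tid))
            # a full T2S implementation would also apply the early-termination
            # check here, comparing heap[0][0] against an upper bound for the
            # remaining depth of PT, and skip non-result tuples selectively
        return [tid for _, tid in sorted(heap, reverse=True)]

Because PT is built once and scanned sequentially, only the k-entry heap has to be kept in memory, which is the fixed-candidate property claimed for T2S.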
Advantages:
- The evaluation of an information retrieval system is the process of assessing how well a system meets the information needs of its users.
- Traditional evaluation metrics, designed for Boolean retrieval or top-k retrieval, include precision and recall.
- All common measures described here assume a ground truth notion of relevancy: every document is known to be either relevant or non-relevant to a particular query.
Modules:
Multi-keyword ranked search:
To design search schemes which allow multi-keyword queries and provide result-similarity ranking for effective data retrieval, instead of returning undifferentiated results.
Privacy-preserving:
To prevent the cloud server from learning additional information from the data set and the index, and to meet privacy requirements. If the cloud server deduces any association between keywords and encrypted documents from the index, it may learn the major subject of a document, even the content of a short document. Therefore, the searchable index should be constructed to prevent the cloud server from performing this kind of association attack.
Efficiency:
The above goals on functionality and privacy should be achieved with low communication and computation overhead. Let x_i denote the number of query keywords appearing in a document; the final similarity score is then a linear function of x_i, where the coefficient r is set as a positive random number. However, because a random factor ε_i is introduced as a part of the similarity score, the final search result, obtained by sorting the similarity scores, may not be as accurate as that of the original scheme. For the sake of search accuracy, we can let ε_i follow a normal distribution, where the standard deviation functions as a flexible tradeoff parameter between search accuracy and security. A sketch of this randomization follows.
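The accuracy/security tradeoff can be illustrated with a few lines of Python; this is our own sketch (the name randomized_scores is hypothetical), where a zero-mean Gaussian term with standard deviation sigma is added to each score before ranking.

    import random

    # Sketch: rank documents by similarity score plus Gaussian noise.
    def randomized_scores(scores: dict, sigma: float) -> list:
        """scores: doc id -> true similarity; returns ids ranked with noise."""
        noisy = {doc: s + random.gauss(0.0, sigma) for doc, s in scores.items()}
        return sorted(noisy, key=noisy.get, reverse=True)

    # Larger sigma hides the true scores better but perturbs the rank order
    # more, which is exactly the tradeoff described above.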
Conclusion:
The proposed novel T2S algorithm was successfully implemented to efficiently return top-k results on massive data by sequentially scanning the presorted table, in which the tuples are arranged in the order of round-robin retrieval on the sorted lists. Only a fixed number of candidates needs to be maintained in T2S. This paper proposes early-termination checking and an analysis of the scan depth. Selective retrieval is devised in T2S, and it is shown that most of the candidates in the presorted table can be skipped. The experimental results show that T2S significantly outperforms the existing algorithms.
Future Enhancement:
Future development of the multi-keyword ranked search scheme should explore checking the integrity of the rank order in the search results returned from the untrusted network server infrastructure.
Feature Enhancement:
A novel table-scan-based T2S algorithm was implemented successfully to compute top-k results on massive data efficiently. Given a table T, T2S first presorts T to obtain the table PT (Presorted Table), whose tuples are arranged in the order of the round-robin retrieval on the sorted lists. During its execution, T2S only maintains a fixed and small number of tuples to compute the results. It is proved that T2S has the characteristic of early termination: it does not need to examine all tuples in PT to return the results.
Continuous and Transparent User Identity Verification for Secure Internet Services
Andrea Ceccarelli, Leonardo Montecchi, Francesco Brancati, Paolo Lollini, Angelo Marguglio, and Andrea Bondavalli, Member, IEEE
Abstract—Session management in distributed Internet services is traditionally based on username and password, explicit logouts and mechanisms of user session expiration using classic timeouts. Emerging biometric solutions allow substituting username and password with biometric data during session establishment, but in such an approach a single verification is still deemed sufficient, and the identity of a user is considered immutable during the entire session. Additionally, the length of the session timeout may impact on the usability of the service and consequent client satisfaction. This paper explores promising alternatives offered by applying biometrics in the management of sessions. A secure protocol is defined for perpetual authentication through continuous user verification. The protocol determines adaptive timeouts based on the quality, frequency and type of biometric data transparently acquired from the user. The functional behavior of the protocol is illustrated through Matlab simulations, while model-based quantitative analysis is carried out to assess the ability of the protocol to contrast security attacks exercised by different kinds of attackers. Finally, the current prototype for PCs and Android smartphones is discussed.
Index Terms—Security, web servers, mobile environments, authentication

1 INTRODUCTION

Secure user authentication is fundamental in most modern ICT systems. User authentication systems are traditionally based on pairs of username and password and verify the identity of the user only at the login phase. No checks are performed during working sessions, which are terminated by an explicit logout or expire after an idle activity period of the user.
Security of web-based applications is a serious concern, due to the recent increase in the frequency and complexity of cyber-attacks; biometric techniques [10] offer an emerging solution for secure and trusted authentication, where username and password are replaced by biometric data. However, parallel to the spreading usage of biometric systems, the incentive for their misuse is also growing, especially considering their possible application in the financial and banking sectors [20], [11].
Such observations lead to arguing that a single authentication point and a single biometric trait cannot guarantee a sufficient degree of security [5], [7]. In fact, similarly to traditional authentication processes which rely on username and password, biometric user authentication is typically formulated as a "single shot" [8], providing user verification only during the login phase, when one or more biometric traits may be required. Once the user's identity has been verified, the system resources are available for a fixed period of time or until explicit logout from the user. This approach assumes that a single verification (at the beginning of the session) is sufficient, and that the identity of the user is constant during the whole session. For instance, consider this simple scenario: a user has already logged into a security-critical service, and then leaves the PC unattended in the work area for a while.
This problem is even trickier in the context of mobile devices, often used in public and crowded environments, where the device itself can be lost or forcibly stolen while the user session is active, allowing impostors to impersonate the user and access strictly personal data. In these scenarios, the services where the users are authenticated can be misused easily [8], [5]. A basic solution is to use very short session timeouts and periodically request the user to input his/her credentials over and over, but this is not a definitive solution and heavily penalizes the service usability and ultimately the satisfaction of users.
To timely detect misuses of computer resources and prevent an unauthorized user from maliciously replacing an authorized one, solutions based on multi-modal biometric continuous authentication [5] have been proposed, turning user verification into a continuous process rather than a one-time occurrence [8]. To avoid that a single biometric trait is forged, biometric authentication can rely on multiple biometric traits. Finally, the use of biometric authentication allows credentials to be acquired transparently, i.e., without explicitly notifying the user or requiring his/her interaction, which is essential to guarantee better service usability. We present some examples of transparent acquisition of biometric data. A face can be acquired while the user is located in front of the camera, but not purposely for the acquisition of the biometric data; e.g., the user may be reading a textual SMS or watching a movie on the mobile phone. Voice can be acquired when the user speaks on the phone, or with other people nearby if the microphone always captures background sound. Keystroke data can be acquired whenever the user types on the keyboard, for example, when writing an SMS, chatting, or browsing on the Internet. This approach differs from traditional authentication processes, where username/password are requested only once at login time or explicitly required at confirmation steps; such traditional authentication approaches impair usability for enhanced security, and offer no solution against forgery or stealing of passwords.
This paper presents a new approach for user verification and session management that is applied in the context-aware security by hierarchical multilevel architectures (CASHMA) [1] system for secure biometric authentication on the Internet.
CASHMA is able to operate securely with any kind of web service, including services with high security demands such as online banking, and it is intended to be used from different client devices, e.g., smartphones, desktop PCs or even biometric kiosks placed at the entrance of secure areas. Depending on the preferences and requirements of the owner of the web service, the CASHMA authentication service can complement a traditional authentication service, or can replace it.
The approach we introduced in CASHMA for usable and highly secure user sessions is a continuous sequential (a single biometric modality at once is presented to the system [22]) multi-modal biometric authentication protocol, which adaptively computes and refreshes session timeouts on the basis of the trust put in the client. Such global trust is evaluated as a numeric value, computed by continuously evaluating the trust both in the user and in the (biometric) subsystems used for acquiring biometric data. In the CASHMA context, each subsystem comprises all the hardware/software elements necessary to acquire and verify the authenticity of one biometric trait, including sensors, comparison algorithms and all the facilities for data transmission and management. Trust in the user is determined on the basis of the frequency of updates of fresh biometric samples, while trust in each subsystem is computed on the basis of the quality and variety of sensors used for the acquisition of biometric samples, and on the risk of the subsystem being intruded.
Exemplary runs carried out using Matlab are reported, and a quantitative model-based security analysis of the protocol is performed combining the stochastic activity networks (SANs [16]) and ADversary VIew Security Evaluation (ADVISE [12]) formalisms.
The driving principles behind our protocol were briefly discussed in the short paper [18], together with minor qualitative evaluations. This paper extends [18] both in the design and the evaluation parts, by providing an in-depth description of the protocol and presenting extensive qualitative and quantitative analysis.
The rest of the paper is organized as follows. Section 2 introduces the preliminaries to our work. Section 3 illustrates the architecture of the CASHMA system, while Section 4 describes our continuous authentication protocol. Exemplary simulations of the protocol using Matlab are shown in Section 5, while Section 6 presents a quantitative model-based analysis of the security properties of the protocol. Section 7 presents the running prototype, while concluding remarks are in Section 8.

2 PRELIMINARIES

2.1 Continuous Authentication

A significant problem that continuous authentication aims to tackle is the possibility that the user device (smartphone, tablet, laptop, etc.) is used, stolen or forcibly taken after the user has already logged into a security-critical service, or that the communication channels or the biometric sensors are hacked.
In [7] a multi-modal biometric verification system is designed and developed to detect the physical presence of the user logged into a computer. The proposed approach assumes that first the user logs in using a strong authentication procedure, then a continuous verification process is started based on multi-modal biometrics. Verification failure, together with a conservative estimate of the time required to subvert the computer, can automatically lock it up. Similarly, in [5] a multi-modal biometric verification system is presented, which continuously verifies the presence of a user working with a computer.
If the verification fails, the system reacts by locking the computer and by delaying or freezing the user's processes.
The work in [8] proposes a multi-modal biometric continuous authentication solution for local access to high-security systems such as ATMs, where the raw data acquired are weighted in the user verification process based on i) the type of the biometric traits and ii) time, since different sensors are able to provide raw data with different timings. Point ii) introduces the need for a temporal integration method which depends on the availability of past observations, based on the assumption that, as time passes, the confidence in the acquired (aging) values decreases. The paper applies a degeneracy function that measures the uncertainty of the score computed by the verification function. In [22], although the focus is not on continuous authentication, an automatic tuning of decision parameters (thresholds) for sequential multi-biometric score fusion is presented: the principle to achieve multimodality is to consider monomodal biometric subsystems sequentially.
In [3] a wearable authentication device (a wristband) is presented for continuous user authentication and a transparent login procedure in applications where users are nomadic. By wearing the authentication device, the user can log in transparently through a wireless channel, and can transmit the authentication data to computers simply by approaching them.

2.2 Quantitative Security Evaluation

Security assessment relied for several years on qualitative analyses only. Leaving aside experimental evaluation and data analysis [26], [25], model-based quantitative security assessment is still far from being an established technique, despite being an active research area.
Specific formalisms for security evaluation have been introduced in the literature, enabling to some extent the quantification of security. Attack trees are closely related to fault trees: they consider a security breach as a system failure, and describe sets of events that can lead to system failure in a combinatorial way [14]; they however do not consider the notion of time. Attack graphs [13] extend attack trees by introducing the notion of state, thus allowing more complex relations between attacks to be described. Mission oriented risk and design analysis (MORDA) assesses system risk by calculating attack scores for a set of system attacks. The scores are based on adversary attack preferences and the impact of the attack on the system [23].
The recently introduced ADversary VIew Security Evaluation formalism [12] extends the attack graph concept with quantitative information and supports the definition of different attacker profiles.
In the CASHMA assessment, the choice of ADVISE was mainly due to: i) its ability to model detailed adversary profiles, ii) the possibility to combine it with other stochastic formalisms in the Möbius multi-formalism [15], and iii) the ability to define ad-hoc metrics for the system we were targeting. This aspect is explored in Section 6.

2.3 Novelty of Our Approach

Our continuous authentication approach is grounded on transparent acquisition of biometric data and on adaptive timeout management on the basis of the trust posed in the user and in the different subsystems used for authentication. The user session is kept open and secure despite possible idle activity of the user, while potential misuses are detected by continuously confirming the presence of the proper user.
Our continuous authentication protocol significantly differs from the work we surveyed in the biometric field as it operates in a very different context. In fact, it is integrated in a distributed architecture to realize a secure and usable authentication service, and it supports security-critical web services accessible over the Internet. We remark that although some very recent initiatives for multi-modal biometric authentication over the Internet exist (e.g., BioID BaaS—Biometric Authentication as a Service, presented in 2011 as the first multi-biometric authentication service based on Single Sign-On [4]), to the authors' knowledge none of such approaches supports continuous authentication.
Another major difference with works [5] and [7] is that our approach does not require that the reaction to a user verification mismatch is executed by the user device (e.g., the logout procedure); it is transparently handled by the CASHMA authentication service and the web services, which apply their own reaction procedures.
The length of the session timeout in CASHMA is calculated according to the trust in the users and the biometric subsystems, and tailored to the security requirements of the service. This provides a tradeoff between usability and security. Although there are similarities with the overall objectives of the decay function in [8] and the approach for sequential multi-modal systems in [22], the reference systems are significantly different. Consequently, different requirements in terms of data availability, frequency, quality, and security threats lead to different solutions [27].

2.4 Basic Definitions

In this section we introduce the basic definitions adopted in this paper. Given n unimodal biometric subsystems S_k, with k = 1, 2, ..., n, that are able to decide independently on the authenticity of a user, the False Non-Match Rate FNMR_k is the proportion of genuine comparisons that result in false non-matches. A false non-match is the decision of non-match when comparing biometric samples that are from the same biometric source (i.e., a genuine comparison) [10]. It is the probability that the unimodal system S_k wrongly rejects a legitimate user. Conversely, the False Match Rate FMR_k is the probability that the unimodal subsystem S_k makes a false match error [10], i.e., it wrongly decides that a non-legitimate user is instead a legitimate one (assuming fault-free and attack-free operation). Obviously, a false match error in a unimodal system would lead to authenticating a non-legitimate user.
To simplify the discussion, but without losing the general applicability of the approach, hereafter we consider that each sensor allows acquiring only one biometric trait; e.g., having n sensors means that at most n biometric traits are used in our sequential multimodal biometric system.
The subsystem trust level m(S_k, t) is the probability that the unimodal subsystem S_k at time t does not authenticate an impostor (a non-legitimate user), considering both the quality of the sensor (i.e., FMR_k) and the risk that the subsystem is intruded.
The user trust level g(u, t) indicates the trust placed by the CASHMA authentication service in the user u at time t, i.e., the probability that the user u is a legitimate user just considering his behavior in terms of device utilization (e.g., time since last keystroke or other action) and the time since the last acquisition of biometric data.
The global trust level trust(u, t) describes the belief that at time t the user u in the system is actually a legitimate user, considering the combination of all subsystem trust levels m(S_k, t), k = 1, ..., n, and of the user trust level g(u, t).
The trust threshold g_min is a lower threshold on the global trust level required by a specific web service; if the resulting global trust level at time t is smaller than g_min (i.e., g(u, t) < g_min), the user u is not allowed to access the service. Otherwise, if g(u, t) >= g_min, the user u is authenticated and is granted access to the service.

3 THE CASHMA ARCHITECTURE

3.1 Overall View of the System

The overall system is composed of the CASHMA authentication service, the clients and the web services (Fig. 1), connected through communication channels. Each communication channel in Fig. 1 implements specific security measures which are not discussed here for brevity.
(Fig. 1. Overall view of the CASHMA architecture.)
The CASHMA authentication service includes: i) an authentication server, which interacts with the clients, ii) a set of high-performing computational servers that perform comparisons of biometric data for verification of the enrolled users, and iii) databases of templates that contain the biometric templates of the enrolled users (these are required for user authentication/verification). The web services are the various services that use the CASHMA authentication service and demand the authentication of enrolled users to the CASHMA authentication server. These services are potentially any kind of Internet service or application with requirements on user authenticity. They have to be registered with the CASHMA authentication service, expressing also their trust threshold. If the web services adopt the continuous authentication protocol, during the registration process they shall agree with the CASHMA registration office on values for the parameters h, k and s used in Section 4.2.
Finally, by clients we mean the users' devices (laptop and desktop PCs, smartphones, tablets, etc.) that acquire the biometric data (the raw data) corresponding to the various biometric traits from the users, and transmit those data to the CASHMA authentication server as part of the authentication procedure towards the target web service. A client contains: i) sensors to acquire the raw data, and ii) the CASHMA application, which transmits the biometric data to the authentication server.
The CASHMA authentication server exploits such data to apply user authentication and successive verification procedures that compare the raw data with the stored biometric templates.
Transmitting raw data has been a design decision applied to the CASHMA system, to reduce to a minimum the size, intrusiveness and complexity of the application installed on the client device, although we are aware that the transmission of raw data may be restricted, for example, due to national legislation.
CASHMA includes countermeasures to protect the biometric data and to guarantee users' privacy, including policies and procedures for proper registration; protection of the acquired data during its transmission to the authentication and computational servers and its storage; and robustness improvement of the algorithm for biometric verification [24]. Privacy issues still exist due to the acquisition of data from the surrounding environment (for example, voices of people near the CASHMA user), but are considered out of scope for this paper.
The continuous authentication protocol explored in this paper is independent from the selected architectural choices and can work with no differences if templates and feature sets are used instead of transmitting raw data, or independently from the set of adopted countermeasures.

3.2 Sample Application Scenario

CASHMA can authenticate to web services ranging from services with strict security requirements, such as online banking services, to services with reduced security requirements, such as forums or social networks. Additionally, it can grant access to physical secure areas such as a restricted zone in an airport, or a military zone (in such cases the authentication system can be supported by a biometric kiosk placed at the entrance of the secure area). We explain the usage of the CASHMA authentication service by discussing the sample application scenario in Fig. 2, where a user u wants to log into an online banking service using a smartphone.
It is required that the user and the web service are enrolled with the CASHMA authentication service. We assume that the user is using a smartphone where a CASHMA application is installed.
The smartphone contacts the online banking service, which replies requesting the client to contact the CASHMA authentication server and get an authentication certificate. Using the CASHMA application, the smartphone sends its unique identifier and biometric data to the authentication server for verification. The authentication server verifies the user identity, and grants access if: i) the user is enrolled in the CASHMA authentication service, ii) the user has the rights to access the online banking service and, iii) the acquired biometric data match those stored in the templates database associated with the provided identifier. In case of successful user verification, the CASHMA authentication server releases an authentication certificate to the client, proving its identity to third parties, and includes a timeout that sets the maximum duration of the user session. The client presents this certificate to the web service, which verifies it and grants access to the client.
The CASHMA application operates to continuously maintain the session open: it transparently acquires biometric data from the user, and sends them to the CASHMA authentication server to get a new certificate.
Such a certificate, which includes a new timeout, is forwarded to the web service to further extend the user session.

3.3 The CASHMA Certificate

In the following we present the information contained in the body of the CASHMA certificate transmitted to the client by the CASHMA authentication server, necessary to understand the details of the protocol.
Time stamp and sequence number univocally identify each certificate, and protect from replay attacks.
ID is the user ID, e.g., a number.
Decision represents the outcome of the verification procedure carried out on the server side. It includes the expiration time of the session, dynamically assigned by the CASHMA authentication server. In fact, the global trust level and the session timeout are always computed considering the time instant in which the CASHMA application acquires the biometric data, to avoid potential problems related to unknown delays in communication and computation. Since such delays are not predictable, simply delivering a relative timeout value to the client is not feasible: the CASHMA server therefore provides the absolute instant of time at which the session should expire.
(Fig. 2. Example scenario: accessing an online banking service using a smartphone.)

4 THE CONTINUOUS AUTHENTICATION PROTOCOL

The continuous authentication protocol provides adaptive session timeouts to a web service to set up and maintain a secure session with a client. The timeout is adapted on the basis of the trust that the CASHMA authentication system puts in the biometric subsystems and in the user. Details on the mechanisms to compute the adaptive session timeout are presented in Section 4.2.

4.1 Description of the Protocol

The proposed protocol requires a sequential multi-modal biometric system composed of n unimodal biometric subsystems that are able to decide independently on the authenticity of a user. For example, these subsystems can be one subsystem for keystroke recognition and one for face recognition.
The idea behind the execution of the protocol is that the client continuously and transparently acquires and transmits evidence of the user identity to maintain access to a web service. The main task of the proposed protocol is to create and then maintain the user session, adjusting the session timeout on the basis of the confidence that the identity of the user in the system is genuine.
The execution of the protocol is composed of two consecutive phases: the initial phase and the maintenance phase. The initial phase aims to authenticate the user into the system and establish the session with the web service. During the maintenance phase, the session timeout is adaptively updated when user identity verification is performed using fresh raw data provided by the client to the CASHMA authentication server. These two phases are detailed hereafter with the help of Figs. 3 and 4.
Initial phase. This phase is structured as follows:
- The user (the client) contacts the web service with a service request; the web service replies that a valid certificate from the CASHMA authentication service is required for authentication.
- Using the CASHMA application, the client contacts the CASHMA authentication server.
The first step consists in acquiring and sending at time t0 the data for the different biometric traits, specifically selected to perform a strong authentication procedure (step 1). The application explicitly indicates to the user the biometric traits to be provided and possible retries.
- The CASHMA authentication server analyzes the biometric data received and performs an authentication procedure. Two different possibilities arise here. If the user identity is not verified (the global trust level is below the trust threshold g_min), new or additional biometric data are requested (back to step 1) until the minimum trust threshold g_min is reached. Instead, if the user identity is successfully verified, the CASHMA authentication server authenticates the user, computes an initial timeout of length T0 for the user session, sets the expiration time at t0 + T0, creates the CASHMA certificate and sends it to the client (step 2).
- The client forwards the CASHMA certificate to the web service (step 3), coupling it with its request.
- The web service reads the certificate and authorizes the client to use the requested service (step 4) until time t0 + T0.
For clarity, steps 1-4 are represented in Fig. 3 for the case of successful user verification only.
Maintenance phase. It is composed of three steps repeated iteratively:
- When at time ti the client application acquires fresh (new) raw data (corresponding to one biometric trait), it communicates them to the CASHMA authentication server (step 5). The biometric data can be acquired transparently to the user; the user may however decide to provide biometric data which are unlikely to be acquired in a transparent way (e.g., fingerprint). Finally, when the session timeout is about to expire, the client may explicitly notify the user that fresh biometric data are needed.
- The CASHMA authentication server receives the biometric data from the client and verifies the identity of the user. If verification is not successful, the user is marked as not legitimate, and consequently the CASHMA authentication server does not operate to refresh the session timeout. This does not imply that the user is cut off from the current session: if other biometric data are provided before the timeout expires, it is still possible to get a new certificate and refresh the timeout. If verification is successful, the CASHMA authentication server applies the algorithm detailed in Section 4.2 to adaptively compute a new timeout of length Ti and the expiration time of the session at time ti + Ti, and then it creates and sends a new certificate to the client (step 6).
- The client receives the certificate and forwards it to the web service; the web service reads the certificate and sets the session timeout to expire at time ti + Ti (step 7).
The steps of the maintenance phase are represented in Fig. 4 for the case of successful user verification (step 6b).
(Fig. 3. Initial phase in case of successful user authentication. Fig. 4. Maintenance phase in case of successful user verification.)

4.2 Trust Levels and Timeout Computation

The algorithm to evaluate the expiration time of the session executes iteratively on the CASHMA authentication server. It computes a new timeout, and consequently the expiration time, each time the CASHMA authentication server receives fresh biometric data from a user.
Let us assume that the initial phase occurs at time t0, when biometric data is acquired and transmitted by the CASHMA application of the user u, and that during the maintenance phase at times ti > t0, for i = 1, ..., m, new biometric data is acquired by the CASHMA application of the user u (we assume these data are transmitted to the CASHMA authentication server and lead to successful verification, i.e., we are in the conditions of Fig. 4). The steps of the algorithm described hereafter are executed.
To ease the readability of the notation, in the following the user u is often omitted; for example, g(ti) = g(u, ti).

4.2.1 Computation of Trust in the Subsystems

The algorithm starts by computing the trust in the subsystems. Intuitively, the subsystem trust level could simply be set to the static value m(S_k, t) = 1 - FMR(S_k) for each unimodal subsystem S_k and any time t (we assume that information on the subsystems used, including their FMRs, is contained in a repository accessible by the CASHMA authentication server). Instead, we apply a penalty function to calibrate the trust in a subsystem on the basis of its usage. Basically, in our approach the more a subsystem is used, the less it is trusted: to avoid that a malicious user needs to manipulate only one biometric trait (e.g., through sensor spoofing [10]) to stay authenticated to the online service, we decrease the trust in those subsystems which are repeatedly used to acquire the biometric data.
In the initial phase m(S_k, t0) is set to 1 - FMR(S_k) for each subsystem S_k used. During the maintenance phase, a penalty function is associated to consecutive authentications performed using the same subsystem, as follows:

    penalty(x, h) = e^(x·h),

where x is the number of consecutive authentication attempts using the same subsystem and h > 0 is a parameter used to tune the penalty function. This function increases exponentially; this means that using the same subsystem for several authentications heavily increases the penalty.
The computation of the penalty is the first step in the computation of the subsystem trust level. If the same subsystem is used in consecutive authentications, the subsystem trust level is the multiplication of i) the subsystem trust level m(S_k, t_{i-1}) computed in the previous execution of the algorithm, and ii) the inverse of the penalty function (the higher the penalty, the lower the subsystem trust level):

    m(S_k, ti) = m(S_k, t_{i-1}) · (penalty(x, h))^(-1).

Otherwise, if the subsystem is used for the first time or in non-consecutive user identity verifications, m(S_k, ti) is set to 1 - FMR(S_k). This computation of the penalty is intuitive but fails if more than one subsystem is compromised (e.g., two fake biometric data can be provided in an alternating way). Other formulations that include the history of subsystem usage can be identified, but are outside the scope of this paper.
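The penalty mechanism above is small enough to state in code; the sketch below is illustrative, with names of our own choosing, and returns the updated subsystem trust level.

    import math

    # Sketch of the subsystem trust computation described above.
    def penalty(x: int, h: float) -> float:
        # x consecutive authentications on the same subsystem, h > 0
        return math.exp(x * h)

    def subsystem_trust(prev_m: float, fmr: float, x: int, h: float,
                        consecutive: bool) -> float:
        if not consecutive:
            # first or non-consecutive use of the subsystem
            return 1.0 - fmr
        # repeated use: divide the previous trust level by the penalty
        return prev_m / penalty(x, h)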
4.2.2 Computation of Trust in the User

As time passes from the most recent user identity verification, the probability that an attacker has substituted the legitimate user increases, i.e., the level of trust in the user decreases. This leads us to model the user trust level through time using a function which is asymptotically decreasing towards zero. Among the possible models we selected the function in (1), which: i) asymptotically decreases towards zero; ii) yields trust(t_{i-1}) for Δti = 0; and iii) can be tuned with two parameters which control the delay (s) and the slope (k) with which the trust level decreases over time (Figs. 5 and 6). Different functions may be preferred under specific conditions or user requirements; in this paper we focus on introducing the protocol, which can be realized also with other functions.
During the initial phase, the user trust level is simply set to g(t0) = 1. During the maintenance phase, the user trust level is computed for each received fresh biometric data item. The user trust level at time ti is given by:

    g(ti) = [ -arctan((Δti - s) · k) + π/2 ] · trust(t_{i-1}) / [ -arctan(-s · k) + π/2 ].    (1)

(Fig. 5. Evolution of the user trust level when k = [0.01, 0.05, 0.1] and s = 40. Fig. 6. Evolution of the user trust level when k = 0.05 and s = [20, 40, 60].)
The value Δti = ti - t_{i-1} is the time interval between two data transmissions; trust(t_{i-1}) instead is the global trust level computed in the previous iteration of the algorithm. Parameters k and s are introduced to tune the decreasing function: k impacts the inclination towards the falling inflection point, while s translates the inflection point horizontally, i.e., allows anticipating or delaying the decay.
Figs. 5 and 6 show the user trust level for different values of s and k. Note that s and k allow adapting the algorithm to different services: for example, services with strict security requirements, such as banking services, may adopt a high k value and a small s value to have a faster decrease of the user trust level. We also clarify that in Figs. 5, 6 and in the rest of the paper we intentionally avoid using measurement units for time quantities (e.g., seconds), since they depend upon the involved application and do not add significant value to the discussion.

4.2.3 Merging User Trust and Subsystems Trust: The Global Trust Level

The global trust level is finally computed by combining the user trust level with the subsystem trust level.
In the initial phase, multiple subsystems may be used to perform an initial strong authentication. Let n be the number of different subsystems; the global trust level is first computed during the initial phase as follows:

    trust(t0) = 1 - Π_{k=1,...,n} (1 - m(S_k, t0)).    (2)

Equation (2) includes the subsystem trust levels of all subsystems used in the initial phase. We remind that for the first authentication m(S_k, t0) is set to 1 - FMR(S_k). The different subsystem trust levels are combined adopting the OR-rule from [2], considering only the false acceptance rate: each subsystem proposes a score, and the combined score is more accurate than the score of each individual subsystem. The first authentication does not consider trust in the user behavior, and only weights the trust in the subsystems. The FNMR is not considered in this computation because it only impacts the reliability of the session, while the user trust level is intended only for security.
Instead, the global trust level in the maintenance phase is a combination of the user trust level and the subsystem trust level. Given the user trust level g(ti) and the subsystem trust level m(S_k, ti), the global trust level is computed again adopting the OR-rule from [2], this time with only two input values. The result is as follows:

    trust(ti) = 1 - (1 - g(ti)) · (1 - m(S_k, ti))
              = g(ti) + m(S_k, ti) - g(ti) · m(S_k, ti)
              = g(ti) + (1 - g(ti)) · m(S_k, ti).    (3)
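As a concrete reading of equations (1)-(3), the following sketch (our own illustrative code, not from the paper) computes the decayed user trust level and fuses it with a subsystem trust level via the OR-rule.

    import math

    # Sketch of equation (1): user trust level after delta_t time units,
    # decaying from the previous global trust level; k tunes the slope,
    # s the delay of the decay.
    def g_user(delta_t: float, prev_trust: float, k: float, s: float) -> float:
        num = (-math.atan((delta_t - s) * k) + math.pi / 2) * prev_trust
        den = -math.atan(-s * k) + math.pi / 2
        return num / den

    # Sketch of equations (2)/(3): OR-rule fusion of user trust g and
    # subsystem trust m (both probabilities in [0, 1]).
    def fuse(g: float, m: float) -> float:
        return g + (1.0 - g) * m

    # Example: global trust 0.94, 20 time units since the last sample,
    # k = 0.05, s = 100; then fuse with a subsystem of trust 0.95.
    print(fuse(g_user(20, 0.94, 0.05, 100), 0.95))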
4.2.4 Computation of the Session Timeout

The last step is the computation of the length Ti of the session timeout. This value represents the time required by the global trust level to decrease to the trust threshold g_min (if no more biometric data are received). Such a value can be determined by inverting the user trust level function (1) and solving it for Δti.
Starting from a given instant of time ti, we consider t_{i+1} as the instant of time at which the global trust level reaches the minimum threshold g_min, i.e., g(t_{i+1}) = g_min. The timeout is then given by Ti = Δti = t_{i+1} - ti. To obtain a closed formula for such a value we first instantiated (1) for i + 1, i.e., we substituted trust(t_{i-1}) with trust(ti), Δti = Ti and g(ti) = g_min.
By solving for Ti, we finally obtain Equation (4), which allows the CASHMA service to dynamically compute the session timeout based on the current global trust level. The initial phase and the maintenance phase are computed in the same way: the length Ti of the timeout at time ti for the user u is:

    Ti = tan( g_min · (arctan(-s · k) - π/2) / trust(ti) + π/2 ) · (1/k) + s,  if Ti > 0;
    Ti = 0, otherwise.    (4)

It is then trivial to set the expiration time of the certificate at ti + Ti.
In Fig. 7 the length Ti of the timeout for different values of g_min is shown; the higher g_min is, the higher are the security requirements of the web service, and consequently the shorter is the timeout.
(Fig. 7. Timeout values for g_min in [0.1, 0.9], k = 0.05 and s = 40.)
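Equation (4) can be transcribed directly; the guard for trust at or below g_min reflects the "Ti > 0, otherwise 0" case. Again, this is an illustrative sketch with our own names.

    import math

    # Sketch of equation (4): time for the user trust level, decaying from
    # the current global trust, to reach the threshold g_min.
    def session_timeout(trust_now: float, g_min: float, k: float, s: float) -> float:
        if trust_now <= g_min:
            return 0.0   # trust already at/below threshold: no positive timeout
        inner = g_min * (math.atan(-s * k) - math.pi / 2) / trust_now + math.pi / 2
        return max(math.tan(inner) / k + s, 0.0)

    # Example: session_timeout(0.94, 0.7, k=0.05, s=100) gives a timeout of
    # roughly 86 time units under these parameters.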
5 EXEMPLARY RUNS

This section reports Matlab executions of the protocol. Four different biometric traits, acquired through four different subsystems, are considered for biometric verification: voice, keystroke, fingerprint, and face.
We associate the following FMRs to each of them: 0.06 to the voice recognition system (vocal data is acquired through a microphone), 0.03 to the fingerprint recognition system (the involved sensor is a fingerprint reader; the corresponding biometric data are not acquired transparently but are explicitly provided by the user), 0.05 to the facial recognition system (the involved sensor is a camera), and 0.08 to keystroke recognition (a keyboard or a touch/tactile screen can be used for data acquisition). Note that the FMRs must be set on the basis of the sensors and technologies used. We also assume that the initial phase of the protocol needs only one item of raw data.
The first scenario, depicted in Fig. 8, is a simple but representative execution of the protocol: in 900 time units, the CASHMA authentication server receives 20 fresh biometric data items from a user and performs successful verifications. The upper part of Fig. 8 shows the behavior of the user trust level (the continuous line) with the g_min threshold (the dashed line) set to g_min = 0.7. In the lower graph the evolution of the session timeout is shown (the continuous line). When the continuous line intersects the dashed line, the timeout expires. The time units are reported on the x-axis. The k and s parameters are set to k = 0.05 and s = 100. The first authentication is at time unit 112, followed by a second one at time unit 124. The global trust level after these first two authentications is 0.94. The corresponding session timeout is set to expire at time unit 213: if no fresh biometric data are received before time unit 213, the global trust level intersects the threshold g_min. Indeed, this actually happens: the session closes, and the global trust level is set to 0. The session remains closed until a new authentication at time unit 309 is performed. The rest of the experiment runs in a similar way.
The next two runs provide two examples of how the threshold g_min and the parameters k and s can be selected to meet the security requirements of the web service. We represent the execution of the protocol to authenticate to two web services with very different security requirements: the first with low security requirements, and the second with severe security requirements.
Fig. 9 describes the continuous authentication protocol for the first system. The required trust in the legitimacy of the user is consequently reduced; session availability and transparency to the user are favored. The protocol is tuned to maintain the session open with sparse authentications. Given g_min = 0.6, and parameters s = 200 and k = 0.005 set for a slow decrease of the user trust level, the plot in Fig. 9 contains 10 authentications in 1,000 time units, showing a unique timeout expiration 190 time units after the first authentication.
Fig. 10 describes the continuous authentication protocol applied to a web service with severe security requirements. In this case, session security is preferred to session availability or transparency to the user: the protocol is tuned to maintain the session open only if biometric data are provided frequently and with sufficient alternation between the available biometric traits. Fig. 10 represents the global trust level of a session in which authentication data are provided 40 times in 1,000 time units using g_min = 0.9, and the parameters s = 90 and k = 0.003 set for a rapid decrease. Maintaining the session open requires very frequent transmissions of biometric data for authentication. This comes at the cost of reduced usability, because a user who does not use the device continuously will most likely incur timeout expiration.
(Fig. 8. Global trust level (top) and session timeout (bottom) in a nominal scenario. Fig. 9. Global trust level and 10 authentications for a service with low security requirements. Fig. 10. Global trust level and 40 authentications for a service with high security requirements.)

6 SECURITY EVALUATION

A complete analysis of the CASHMA system was carried out during the CASHMA project [1], complementing traditional security analysis techniques with techniques for quantitative security evaluation. Qualitative security analysis, having the objective to identify threats to CASHMA and select countermeasures, was guided by general and accepted schemas of biometric attacks and attack points as in [9], [10], [11], [21]. A quantitative security analysis of the whole CASHMA system was also performed [6]. As this paper focuses on the continuous authentication protocol rather than the CASHMA architecture, we briefly summarize the main threats to the system identified within the project (Section 6.1), while the rest of this section (Section 6.2) focuses on the quantitative security assessment of the continuous authentication protocol.

6.1 Threats to the CASHMA System

Security threats to the CASHMA system have been analyzed both for the enrollment procedure (i.e., the initial registration of a user within the system) and for the authentication procedure itself. We report here only on authentication. The biometric system has been considered as decomposed into functions from [10]. For authentication, we considered: collection of biometric traits, transmission of (raw) data, feature extraction, matching function, template search and repository management, transmission of the matching score, decision function, and communication of the recognition result (accept/reject decision).
Several relevant threats exist for each function identified [9], [10], [11].
For brevity, we do not consider threats generic to ICT systems and not specific to biometrics (e.g., attacks aimed at denial of service, eavesdropping, man-in-the-middle, etc.). We thus mention the following. For the collection of biometric traits, we identified sensor spoofing and untrusted devices, reuse of residuals to create fake biometric data, impersonation, mimicry and presentation of poor images (for face recognition). For the transmission of (raw) data, we selected fake digital biometrics, where an attacker submits false digital biometric data. For the feature extraction, we considered insertion of imposter data, component replacement, override of feature extraction (the attacker is able to interfere with the extraction of the feature set), and exploitation of vulnerabilities of the extraction algorithm. For the matching function, the attacks we considered are insertion of imposter data, component replacement, guessing, and manipulation of match scores. For template search and repository management, all attacks considered are generic for repositories and not specific to biometric systems. For the transmission of the matching score, we considered manipulation of the match score. For the decision function, we considered hill climbing (the attacker has access to the matching score, and iteratively submits modified data in an attempt to raise the resulting matching score), system parameter override/modification (the attacker has the possibility to change key parameters such as system tolerances in feature matching), component replacement, and decision manipulation. For the communication of the recognition result, we considered only attacks typical of Internet communications.
Countermeasures were selected appropriately for each function on the basis of the threats identified.

6.2 Quantitative Security Evaluation

6.2.1 Scenario and Measures of Interest

For the quantitative security evaluation of the proposed protocol we consider a mobile scenario, where a registered user uses the CASHMA service through a client installed on a mobile device like a laptop, a smartphone or a similar device. The user may therefore lose the device, or equivalently leave it unattended for a time long enough for attackers to compromise it and obtain authentication. Moreover, the user may lose control of the device (e.g., he/she may be forced to hand it over) while a session has already been established, thus reducing the effort needed by the attacker. In the considered scenario the system works with three biometric traits: voice, face, and fingerprint.
A security analysis of the first authentication, performed to acquire the first certificate and open a secure session, has been provided in [6]. We assume here that the attacker has already been able to perform the initial authentication (or to access an already established session), and we aim to evaluate how long he is able to keep the session alive, varying the parameters of the continuous authentication algorithm and the characteristics of the attacker. The measures of interest that we evaluate in this paper are the following: i) Pk(t), the probability that the attacker is able to keep the session alive until the instant t, given that the session has been established at the instant t = 0; and ii) Tk, the mean time for which the attacker is able to keep the session alive.
Since most of the computation is performed server-side, we focus on attacks targeting the mobile device. In order to provide fresh biometric data, the attacker has to compromise one of the three biometric modalities.
This can be accomplished in several ways; for example, by spoofing the biometric sensors (e.g., by submitting a recorded audio sample, or a picture of the accounted user), or by exploiting cyber-vulnerabilities of the device (e.g., through a "reuse of residuals" attack [9]). We consider three kinds of abilities for attackers: spoofing, as the ability to perform sensor spoofing attacks; hacking, as the ability to perform cyber attacks; and lawfulness, as the degree to which the attacker is prepared to break the law.
The actual skills of the attacker influence the chance of a successful attack, and the time required to perform it. For example, having a high hacking skill reduces the time required to perform the attack, and also increases the success probability: an attacker having high technological skills may be able to compromise the system in such a way that the effort required to spoof sensors is reduced (e.g., by altering the data transmitted by the client device).

6.2.2 The ADVISE [12] Formalism

The analysis method supported by ADVISE relies on creating executable security models that can be solved using discrete-event simulation to provide quantitative metrics. One of the most significant features introduced by this formalism is the precise characterization of the attacker (the "adversary") and the influence of its decisions on the final measures of interest.
The specification of an ADVISE model is composed of two parts: an Attack Execution Graph (AEG), describing how the adversary can attack the system, and an adversary profile, describing the characteristics of the attacker. An AEG is a particular kind of attack graph comprising different kinds of nodes: attack steps, access domains, knowledge items, attack skills, and attack goals. Attack steps describe the possible attacks that the adversary may attempt, while the other elements describe items that can be owned by attackers (e.g., intranet access). Each attack step requires a certain combination of such items to be held by the adversary; the set of what has been achieved by the adversary defines the current state of the model. ADVISE attack steps also have additional properties, which allow creating executable models for quantitative analysis. The adversary profile defines the set of items that are initially owned by the adversary, as well as his proficiency in attack skills. The adversary starts without having reached any goal, and works towards them. To each attack goal a payoff value is assigned, which specifies the value that the adversary assigns to reaching that goal. Three weights define the relative preference of the adversary in: i) maximizing the payoff, ii) minimizing costs, or iii) minimizing the probability of being detected. Finally, the planning horizon defines the number of steps in the future that the adversary is able to take into account for his decisions; this value can be thought of as modeling the "smartness" of the adversary.
The ADVISE execution algorithm evaluates the reachable states based on enabled attack steps, and selects the most appealing to the adversary based on the above-described weights. The execution of the attack is then simulated, leading the model to a new state. Metrics are defined using reward structures [14].
By means of the Rep/Join composition formalism [15], ADVISE models can be composed with models expressed in other formalisms supported by the Möbius framework, in particular with stochastic activity network (SAN) [16] models.

6.2.3 Modeling Approach

The model that is used for the analysis combines an ADVISE model, which takes into account the attackers' behavior, and a SAN model, which models the evolution of trust over time due to the continuous authentication protocol. Both models include a set of parameters, which allow evaluating metrics under different conditions and performing sensitivity analysis. The protocol parameters used for the analysis are reported in the upper labels of Figs. 13 and 14; the parameters describing attackers are shown in Table 1 and their values are discussed in Section 6.2.4.

ADVISE model. The AEG of the ADVISE model is composed of one attack goal, three attack steps, three attack skills, and five access domains. Its graphical representation is shown in Fig. 11, using the notation introduced in [12]. The only attack goal present in the model, “RenewSession”, represents the renewal of the session timeout by submitting fresh biometric data to the CASHMA server.

To reach its goal, the attacker has at its disposal three attack steps, each one representing the compromise of one of the three biometric traits: “Compromise_Voice”, “Compromise_Face”, and “Compromise_Fingerprint”. Each of them requires the “SessionOpen” access domain, which represents an already established session. The three abilities of attackers are represented by three attack skills: “SpoofingSkill”, “HackSkill” and “Lawfulness”.

The success probability of such attack steps is a combination of the spoofing skill of the attacker and the false non-match rate (FNMR) of the involved biometric subsystem. In fact, even if the attacker were able to perfectly mimic the user's biometric trait, a reject would still be possible in case of a false non-match of the subsystem. For example, the success probability of the “Compromise_Voice” attack step is obtained as

FNMR_Voice * (SpoofingSkill->Mark() / 1000.0),

where “FNMR_Voice” is the false non-match rate of the voice subsystem, and SpoofingSkill ranges from a minimum of 0 to a maximum of 1,000. It should be noted that the actual value assigned to the spoofing skill is a relative value, which also depends on the technological measures implemented to counter such attacks. Based on the skill value, the success probability ranges from 0 (spoofing is not possible) to the FNMR of the subsystem (the same probability of a non-match for a “genuine” user). The time required to perform the attack is exponentially distributed, and its rate also depends on the attacker's skills.

When one of the three attack steps succeeds, the corresponding “OK_X” access domain is granted to the attacker. Owning one of such access domains means that the system has correctly recognized the biometric data, and that it is updating the global trust level; in this state all the attack steps are disabled. A successful execution of the attack steps also grants the attacker the “RenewSession” goal. The “LastSensor” access domain is used to record the last subsystem that has been used for authentication.
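To make the attack-step mechanics concrete, the following is a minimal Python sketch of how such a step could be simulated outside of Möbius. The function names and the scaling of the exponential rate by the hacking skill are our own illustrative assumptions; only the success-probability expression FNMR * (SpoofingSkill/1000) and the exponentially distributed attack time come from the model description above.

import random

def attack_step_success_prob(fnmr: float, spoofing_skill: float) -> float:
    """Success probability of a Compromise_X attack step: the subsystem
    FNMR scaled by the relative spoofing skill (0..1000)."""
    return fnmr * (spoofing_skill / 1000.0)

def sample_attack_time(base_rate: float, hack_skill: float) -> float:
    """Time to carry out the attack step, exponentially distributed.
    The linear dependence of the rate on hack_skill is an assumption
    used only for illustration."""
    rate = base_rate * (1.0 + hack_skill / 1000.0)
    return random.expovariate(rate)

# Example: voice subsystem with FNMR = 0.06, a highly skilled attacker.
p = attack_step_success_prob(fnmr=0.06, spoofing_skill=800)
t = sample_attack_time(base_rate=0.05, hack_skill=900)
print(f"success prob = {p:.4f}, sampled attack time = {t:.1f} time units")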
SAN model. The SAN model in Fig. 12 models the management of the session timeout and its extension through the continuous authentication mechanism. The evolution of the trust level over time is modeled using the functions introduced in Section 4.2; it should be noted that the model introduced in this section can also be adapted to other functions that might be used for realizing the protocol.

Fig. 11. AEG of the ADVISE model used for security evaluations.
Table 1. Attackers and their characteristics.
Fig. 12. SAN model for the continuous authentication mechanism.

Place “SessionOpen” is shared with the ADVISE model, and therefore it contains one token if the attacker has already established a session (i.e., it holds the “SessionOpen” access domain). The extended places “LastTime” and “LastTrust” are used to keep track of the last time at which the session timeout has been updated, and the corresponding global trust level. These values correspond, respectively, to the quantities $t_0$ and $g(t_0)$, and can therefore be used to compute the current global trust level $g(t)$. Whenever the session is renewed, the extended place “AuthScore” is updated with the trust level $P(S_k)$ of the subsystem that has been used to renew the session. The extended place “CurrentTimeout” is used to store the current session timeout, previously calculated at time $t_0$. The activity “Timeout” models the elapsing of the session timeout, and it fires with a deterministic delay, which is given by the value contained in the extended place “CurrentTimeout”. Such activity is enabled only when the session is open (i.e., place “SessionOpen” contains one token). Places “OK_Voice”, “OK_Face” and “OK_Fingerprint” are shared with the respective access domains in the ADVISE model. Places “Voice_Consecutive”, “Face_Consecutive”, and “Fingerprint_Consecutive” are used to track the number of consecutive authentications performed using the same biometric subsystem; this information is used to evaluate the penalty function.

When place “OK_Voice” contains a token, the instantaneous activity “CalculateScore1” is enabled and fires; the output gate “OGScoreVoice” then sets the marking of place “AuthScore” to the authentication score of the voice subsystem, possibly applying the penalty. The marking of “Voice_Consecutive” is then updated, while the count for the other two biometric traits is reset. Finally, a token is added in place “Update”, which enables the immediate activity “UpdateTrust”. The model has the same behavior for the other two biometric traits.

When the activity “UpdateTrust” fires, the gate “OGTrustUpdate” updates the user trust level, which is computed based on the values in places “LastTrust” and “LastTime”, using (1). Using (3), the current user trust level is then fused with the score of the authentication that is being processed, which has been stored in place “AuthScore”. Finally, the new timeout is computed using (4) and the result is stored in the extended place “CurrentTimeout”. The reactivation predicate of the activity “Timeout” forces the resampling of its firing time, and the new session timeout value is therefore adopted.

Composed model. The ADVISE and SAN models are then composed using the Join formalism [15]. Places “SessionOpen”, “OK_Voice”, “OK_Face”, and “OK_Fingerprint” are shared with the corresponding access domains in the ADVISE model. The attack goal “RenewSession” is shared with place “RenewSession”.
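The trust-update logic that the SAN gates implement can be paraphrased in a few lines of Python. Since equations (1), (3) and (4) from Section 4.2 are not reproduced in this excerpt, the decay, fusion and timeout rules below (exponential decay with rate k, a weighted fusion, and a time-to-threshold timeout) are stated as plain assumptions chosen to be consistent with the parameters k and g_min used later; only the structure (decay the trust, fuse it with the new score, recompute the timeout) is what the model prescribes.

import math

K_DECAY = 0.003   # decay rate k (value used in Section 6.2.5)
G_MIN = 0.9       # session threshold g_min

def current_trust(last_trust: float, last_time: float, t: float) -> float:
    # Illustrative stand-in for (1): exponential decay of trust since t0.
    return last_trust * math.exp(-K_DECAY * (t - last_time))

def fuse(trust: float, auth_score: float) -> float:
    # Illustrative stand-in for (3): fuse current trust with the new score.
    return trust + (1.0 - trust) * auth_score

def new_timeout(trust: float) -> float:
    # Illustrative stand-in for (4): time until the decayed trust hits g_min.
    if trust <= G_MIN:
        return 0.0
    return math.log(trust / G_MIN) / K_DECAY

# One "UpdateTrust" firing: decay, fuse with the subsystem score, re-arm timer.
g = fuse(current_trust(last_trust=0.98, last_time=0.0, t=12.0), auth_score=0.94)
print(f"new trust = {g:.3f}, new session timeout = {new_timeout(g):.1f} time units")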
6.2.4 Definition of Attackers

One of the main challenges in security analysis is the identification of the possible human agents that could pose security threats to information systems. The work in [17] defined a Threat Agent Library (TAL) that provides a standardized set of agent definitions ranging from government spies to untrained employees. TAL classifies agents based on their access, outcomes, limits, resources, skills, objectives, and visibility, defining qualitative levels to characterize the different properties of attackers. For example, to characterize the proficiency of attackers in skills, four levels are adopted: “none” (no proficiency), “minimal” (can use existing techniques), “operational” (can create new attacks within a narrow domain) and “adept” (broad expert in such technology). The “limits” dimension describes legal and ethical limits that may constrain the attacker. The “resources” dimension defines the organizational level at which an attacker operates, which in turn determines the amount of resources available to it for use in an attack. “Visibility” describes the extent to which the attacker intends to hide its identity or attacks.

Agent threats in the TAL can be mapped to ADVISE adversary profiles with relatively low effort. The “access” attribute is reproduced by assigning different sets of access domains to the adversary; the “skills” attribute is mapped to one or more attack skills; the “resources” attribute can be used to set the weight assigned to reducing costs in the ADVISE model. Similarly, “visibility” is modeled by the weight assigned to the adversary in avoiding the possibility of being detected. The attributes “outcomes” and “objectives” are reproduced by attack goals, their payoff, and the weight assigned to maximizing the payoff. Finally, the “limits” attribute can be thought of as a specific attack skill describing the extent to which the attacker is prepared to break the law. In this paper, it is represented by the “Lawfulness” attack skill.

Fig. 13. Effect of the continuous authentication mechanism on different attackers.
Fig. 14. Effect of varying the threshold g_min on the TMA attacker.

In our work we have abstracted four macro-agents that summarize the agents identified in TAL, and we have mapped their characteristics to adversary profiles in the ADVISE formalism. To identify such macro-agents we first discarded those attributes that are not applicable to our scenario; then we aggregated into a single agent those attackers that after this process resulted in similar profiles. Indeed, it should be noted that not all the properties are applicable in our evaluation; most notably, the “objectives” are the same for all the agents, i.e., extending the session timeout as much as possible. Similarly, “outcome” is not addressed, since it depends upon the application to which the CASHMA authentication service provides access. Moreover, in our work we consider hostile threat agents only (i.e., we do not consider agents 1, 2 and 3 in [17]), as opposed to non-hostile ones, which include, for example, the “Untrained Employee”.

The attributes of the four identified agents are summarized in Table 1. As discussed in [17], the names have the only purpose of identifying agents; their characteristics should be derived from the agent properties.
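As an illustration of this mapping, one adversary profile could be encoded as a small record before being translated into ADVISE weights. The field names and the TMA values below are illustrative placeholders inferred from the qualitative description, not values taken from Table 1.

from dataclasses import dataclass

@dataclass
class AdversaryProfile:
    """TAL attributes mapped onto ADVISE adversary-profile knobs."""
    access_domains: tuple      # TAL "access"
    spoofing_skill: int        # TAL "skills" (0..1000)
    hack_skill: int            # TAL "skills" (0..1000)
    lawfulness: int            # TAL "limits" as an attack skill
    cost_weight: float         # TAL "resources"
    detection_weight: float    # TAL "visibility"
    payoff_weight: float       # TAL "outcomes"/"objectives"

# Illustrative profile for the "Technology Master Individual" (TMA):
# high technological skill, moderate/low resources, strong wish to stay hidden.
tma = AdversaryProfile(
    access_domains=("SessionOpen",),
    spoofing_skill=600,
    hack_skill=900,
    lawfulness=400,
    cost_weight=0.4,
    detection_weight=0.5,
    payoff_weight=0.1,
)
print(tma)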
“Adverse Organization” (ORG) represents an external attacker with government-level resources (e.g., a terrorist organization or an adverse nation-state entity), having good proficiency in both the “Hack” and “Spoofing” skills. It intends to keep its identity secret, although it does not intend to hide the attack itself. It does not have particular limits, and is prepared to use violence and commit major extra-legal actions. This attacker maps agents 6, 7, 10, 15, and 18 in [17].

“Technology Master Individual” (TMA) represents the attacker for which the term “hacker” is commonly used: an external individual having high technological skills, moderate/low resources, and a strong will to hide himself and his attacks. This attacker maps agents 5, 8, 14, 16, and 21 in [17]. “Generic Individual” (GEN) is an external individual with low skills and resources, but high motivation—either rational or not—that may lead him to use violence. This kind of attacker does not take care of hiding its actions. The GEN attacker maps agents 4, 13, 17, 19, and 20 in [17]. Finally, the “Insider” attacker (INS) is an internal attacker, having minimal skill proficiency and organization-level resources; it is prepared to commit only minimal extra-legal actions, and one of its main concerns is avoiding himself or his attacks being detected. This attacker maps agents 9, 11, and 12 in [17].

6.2.5 Evaluations

The composed model has been solved using the discrete-event simulator provided by the Möbius tool [15]. All the measures have been evaluated by collecting at least 100,000 samples, using a relative confidence interval of ±1% and a confidence level of 99 percent. For consistency, the parameters of the decreasing functions are the same as in Fig. 10 ($s = 90$ and $k = 0.003$); the FMRs of the subsystems are also the same as used in the simulations of Section 5 (voice: 0.06, fingerprint: 0.03, face: 0.05); for all subsystems, the FNMR has been assumed to be equal to its FMR.

The results in Fig. 13 show the effectiveness of the algorithm in countering the four attackers. The left part of the figure depicts the measure $P_k(t)$, while $T_k$ is shown in the right part. All the attackers maintain the session alive with probability 1 for about 60 time units. Such a delay is given by the initial session timeout, which depends upon the characteristics of the biometric subsystems, the decreasing function (1) and the threshold $g_{min}$. With the same parameters, a similar value was also obtained in the MATLAB simulations described in Section 5 (see Fig. 10): from the highest value of $g(u,t)$, if no fresh biometric data is received, the global trust level reaches the threshold in slightly more than 50 time units. By submitting fresh biometric data, all four attackers are able to renew the authentication and extend the session timeout. The extent to which they are able to maintain the session alive is based on their abilities and characteristics.

The GEN attacker has about a 40 percent probability of being able to renew the authentication, and on average he is able to maintain the session for 80 time units. Moreover, after 300 time units he has been disconnected by the system with probability 1. The INS and ORG attackers are able to renew the session for 140 and 170 time units on average, respectively, due to their greater abilities in the spoofing skill. However, the most threatening agent is the TMA attacker, who has about a 90 percent chance to renew the authentication and is able, on average, to extend his session up to 260 time units, which in this setup is more than four times the initial session timeout.
Moreover, the probability that TMA is able to keep the session alive beyond 300 time units is about 30 percent, i.e., on average once every three attempts the TMA attacker is able to extend the session beyond 300 time units, which is roughly five times the initial session timeout.

Possible countermeasures consist in the correct tuning of the algorithm parameters based on the attackers to which the system is likely to be subject. As an example, Fig. 14 shows the impact of varying the threshold $g_{min}$ on the two measures of interest, $P_k(t)$ and $T_k$, with respect to the TMA attacker. The results in the figure show that increasing the threshold is an effective countermeasure to reduce the average time for which the TMA attacker is able to keep the session alive. By progressively increasing $g_{min}$, the measure $T_k$ decreases considerably; this is due both to a reduced initial session timeout, and to the fact that the attacker has less time at his disposal to perform the required attack steps. As shown in the figure, by setting the threshold to 0.95, the probability that the TMA attacker is able to keep the session alive beyond 300 time units approaches zero, while it is over 30 percent when $g_{min}$ is set to 0.9.

7 PROTOTYPE IMPLEMENTATION

The implementation of the CASHMA prototype includes face, voice, iris, fingerprint and online dynamic handwritten signature as biometric traits for biometric kiosks and PCs/laptops, relying on on-board devices when available or pluggable accessories if needed. On smartphones only face and voice recognition are applied: iris recognition was discarded due to the difficulties in acquiring high-quality iris scans using the camera of commercial devices, and handwritten signature recognition is impractical on most smartphones available on the market today (larger displays are required). Finally, fingerprint recognition was discarded because few smartphones include a fingerprint reader. The selected biometric traits (face and voice) suit the need to be acquired transparently for the continuous authentication protocol described.

A prototype of the CASHMA architecture is currently available, providing mobile components to access a secured web application. The client is based on the Adobe Flash [19] technology: it is a specific client, written in Adobe ActionScript 3, able to access and control the on-board devices in order to acquire the raw data needed for biometric authentication. In the case of smartphones, the CASHMA client component is realized as a native Android application (using the Android SDK API 12). Tests were conducted on the smartphones Samsung Galaxy S II, HTC Desire, HTC Desire HD and HTC Sensation with OS Android 4.0.x. On average, from the executed tests, for the smartphones considered we achieved FMR = 2.58% for face recognition and FMR = 10% for voice. The size of the biometric data acquired using the considered smartphones and exchanged is approximately 500 KB. As expected from such a limited data size, the acquisition, compression and transmission of these data using the mentioned smartphones did not raise issues regarding performance or communication bandwidth. In particular, the time required to establish a secure session and transmit the biometric data was deemed sufficiently short not to compromise the usability of the mobile device. Regarding the authentication service, it runs on Apache Tomcat 6 servers and Postgres 8.4 databases.
The web services are realized using the Jersey library (a JAX-RS/JSR 311 reference implementation) for building RESTful web services. Finally, the example application is a custom portal developed as a Rich Internet Application using the Sencha ExtJS 4 JavaScript framework, integrating different external online services (e.g., Gmail, YouTube, Twitter, Flickr) made accessible dynamically according to the current trust value of the continuous authentication protocol.

8 CONCLUDING REMARKS

We exploited the novel possibilities introduced by biometrics to define a protocol for continuous authentication that improves the security and usability of user sessions. The protocol computes adaptive timeouts on the basis of the trust placed in the user activity and in the quality and kind of biometric data acquired transparently, through background monitoring of the user's actions.

Some architectural design decisions of CASHMA are discussed here. First, the system exchanges raw data and not the features extracted from them or templates, while crypto-token approaches are not considered; as debated in Section 3.1, this is due to architectural decisions where the client is kept very simple. We remark that our proposed protocol works with no changes using features, templates or raw data. Second, privacy concerns should be addressed considering national legislations. At present, our prototype only performs some checks on face recognition, where only one face (the biggest one resulting from the face detection phase, run directly on the client device) is considered for identity verification and the others are deleted. Third, when data is acquired in an uncontrolled environment, the quality of the biometric data could strongly depend on the surroundings. While performing a client-side quality analysis of the acquired data would be a reasonable approach to reduce the computational burden on the server, and it is compatible with our objective of designing a protocol independent of quality ratings of images (we just consider a sensor trust), it goes against the CASHMA requirement of having a light client.

We now discuss the usability of our proposed protocol. In our approach, the client device uses part of its sensors extensively through time, and transmits data over the Internet. This introduces the problem of battery consumption, which has not been quantified in this paper: as discussed in Section 7, we developed and exercised a prototype to verify the feasibility of the approach, but a complete assessment of the solution through experimental evaluation is not reported. Also, the frequency of the acquisition of biometric data is fundamental for the protocol usage; if biometric data are acquired too sparingly, the protocol would be basically useless. This mostly depends on the profile of the client and consequently on his usage of the device. Summarizing, battery consumption and user profile may constitute limitations of our approach, which in the worst case may require narrowing the applicability of the solution to specific cases, for example, only when accessing specific websites and for a limited time window, or to grant access to restricted areas (see also the examples in Section 3.2). This characterization has not been investigated in this paper and constitutes part of our future work.

It has to be noticed that the functions proposed for the evaluation of the session timeout are selected from amongst a very large set of possible alternatives.
Although in the literature we could not identify comparable functions used in very similar contexts, we acknowledge that different functions may be identified, compared and preferred under specific conditions or user requirements; this analysis is left out as it goes beyond the scope of the paper, which is the introduction of the continuous authentication approach for Internet services.

ACKNOWLEDGMENTS

This work was partially supported by the Italian MIUR through the projects FIRB 2005 CASHMA (DM1621 18 July 2005) and PRIN 2010-3P34XC TENACE.
Collision Tolerant and Collision Free Packet Scheduling for Underwater Acoustic Localization
Abstract—This article considers the joint problem of packet scheduling and self-localization in an underwater acoustic sensor network with randomly distributed nodes. In terms of packet scheduling, our goal is to minimize the localization time, and to do so we consider two packet transmission schemes, namely a collision-free scheme (CFS) and a collision-tolerant scheme (CTS). The required localization time is formulated for these schemes, and through analytical results and numerical examples their performances are shown to be dependent on the circumstances. When the packet duration is short (as is the case for a localization packet), the operating area is large (above 3 km in at least one dimension), and the average probability of packet loss is not close to zero, the collision-tolerant scheme is found to require a shorter localization time. At the same time, its implementation complexity is lower than that of the collision-free scheme, because in CTS the anchors work independently. CTS consumes slightly more energy to make up for packet collisions, but it is shown to provide a better localization accuracy. An iterative Gauss-Newton algorithm is employed by each sensor node for self-localization, and the Cramér-Rao lower bound is evaluated as a benchmark.

Index Terms—Underwater acoustic networks, localization, packet scheduling, collision.

I. INTRODUCTION

After the emergence of autonomous underwater vehicles (AUVs) in the 70s, developments in computer systems and networking have been paving a way toward fully autonomous underwater acoustic sensor networks (UASNs) [1], [2]. Modern underwater networks are expected to handle many tasks automatically. To enable applications such as tsunami monitoring, oil field inspection, bathymetry mapping, or shoreline surveillance, the sensor nodes measure various environmental parameters, encode them into data packets, and exchange the packets with other sensor nodes or send them to a fusion center. In many underwater applications, the sensed data has to be labeled with the time and the location of its origin to provide meaningful information.
Therefore, sensor nodes that explore the environment and gather data have to know their position, and this makes localization an important task for the network.

Due to the challenges of underwater acoustic communications, such as low data rates and long propagation delays with variable sound speed [3], a variety of localization algorithms have been introduced and analyzed in the literature [4], [5]. In contrast to underwater systems, sensor nodes in terrestrial wireless sensor networks (WSNs) can be equipped with a GPS module to determine location. GPS signals (radio-frequency signals), however, cannot propagate more than a few meters underwater, and acoustic signals are used instead. In addition, radio signals experience negligible propagation delays compared to sound (acoustic) waves.

An underwater sensor node can determine its location by measuring the time of flight (ToF) to several anchors with known positions, and performing multilateration. Other approaches may be employed for self-localization, such as fingerprinting [6] or angle-of-arrival estimation [7]. All these approaches require packet transmission from anchors.

Many factors determine the accuracy of self-localization. Other than noise, the number of anchors, their constellation and the relative position of the sensor node [8], propagation losses and fading also affect the localization accuracy. Some of these parameters can be adjusted to improve the localization accuracy, but others cannot.

Although a great deal of research exists on underwater localization algorithms [1], little work has been done to determine how the anchors should transmit their packets to the sensor nodes. In long base-line (LBL) systems, where transponders are fixed on the sea floor, an underwater node interrogates the transponders for round-trip delay estimation [9]. In the underwater positioning scheme of [10], a master anchor sends a beacon signal periodically, and the other anchors transmit their packets in a given order after the reception of the beacon from the previous anchor. The localization algorithm in [11] addresses the problem of joint node discovery and collaborative localization without the aid of GPS. The algorithm starts with a few anchors as primary seed nodes, and as it progresses, suitable sensor nodes are converted to seed nodes to help in discovering more sensor nodes. The algorithm works by broadcasting command packets which the nodes use for time-of-flight measurements. The authors evaluate the performance of the algorithm in terms of the average network set-up time and coverage. However, physical factors such as packet loss due to fading or shadowing and collisions are not included, and it is not established whether this algorithm is optimal for localization. In reactive localization [12], an underwater node initiates the process by transmitting a “hello” message to the anchors in its vicinity, and those anchors that receive the message transmit their packets. An existing medium access control (MAC) protocol may be used for packet exchanging [13]; however, there is no guarantee that it will perform satisfactorily for the localization task.
The performance of localization under different MAC protocols is evaluated in [14], where it is shown that a simple carrier sense multiple access (CSMA) protocol performs better than recently introduced underwater MAC protocols such as T-Lohi [15].

In our previous work, we considered optimal collision-free packet scheduling in a UASN for the localization task in single-channel (L-MAC) [16] and multi-channel (DMC-MAC) [17] scenarios. In these algorithms, the position information of the anchors is used to minimize the localization time. In spite of the remarkable performance of L-MAC and DMC-MAC over other algorithms (or MAC protocols), they are highly demanding. The main drawback of L-MAC or DMC-MAC is that they require a fusion center which gathers the positions of all the anchors, and decides on the time of packet transmission for each anchor. In addition, these two collision-free algorithms need the anchors to be synchronized and equipped with radio modems to exchange information fast.

In this paper, we also consider packet scheduling algorithms that do not need a fusion center. Although the synchronization of anchors which are equipped with GPS is not difficult, the proposed algorithms can work with asynchronous anchors if there is a request from a sensor node.

We assume a single-hop UASN where anchors are equipped with half-duplex acoustic modems, and can broadcast their packets based on two classes of scheduling: a collision-free scheme (CFS), where the transmitted packets never collide with each other at the receiver, and a collision-tolerant scheme (CTS), where the collision probability is controlled by the packet transmission rate in such a way that each sensor node can receive sufficiently many error-free packets for self-localization. Our contributions are listed below.

• Assuming packet loss and collisions, the localization time is formulated for each scheme, and its minimum is obtained analytically for a predetermined probability of successful localization for each sensor node. A shorter localization time allows for a more dynamic network, and leads to a better network efficiency in terms of throughput.
• It is shown how the minimum number of anchors can be determined to reach the desired probability of self-localization.
• An iterative Gauss-Newton self-localization algorithm is introduced for a sensor node which experiences packet loss or collisions. Furthermore, the way in which this algorithm can be used with each packet scheduling scheme is outlined.
• The Cramér-Rao lower bound (CRB) on localization is derived for each scheme. Other than the distance-dependent signal-to-noise ratio, the effects of packet loss due to fading or shadowing, collisions, and the probability of successful self-localization are included in this derivation.

The structure of the paper is as follows. Section II describes the system model, and outlines the self-localization process. The problem of minimizing the localization time in the collision-free and collision-tolerant packet transmission schemes is formulated and analyzed in Section III-A and Section III-B, respectively. The self-localization algorithm is introduced in Section IV. The average energy consumption is analyzed in Section V, and Section VI compares the two classes of localization packet scheduling through several numerical examples. Finally, we conclude the paper in Section VII, and outline topics for future work.

II. SYSTEM MODEL

We consider a UASN consisting of $M$ sensor nodes and $N$ anchors. The anchor index starts from 1, whereas the sensor node index starts from $N+1$.
Each anchor in the network encapsulates its ID, its location, the time of packet transmission, and a predetermined training sequence for the time-of-flight estimation. The so-obtained localization packet is broadcast to the network based on a given protocol, e.g., periodically, or upon the reception of a request from a sensor node. The system structure is specified as follows.

• Anchors and sensor nodes are equipped with half-duplex acoustic modems, i.e., they cannot transmit and receive simultaneously.
• Anchors are placed randomly on the surface, and have the ability to move within the operating area. The anchors are equipped with GPS and can determine their positions, which will be broadcast to the sensor nodes. It is assumed that the probability density function (pdf) of the distance between the anchors, $f_D(z)$, is known. It is further assumed that the sensor nodes are located randomly in an operating area according to some probability density function. The sensor nodes can move in the area, but during the localization process their position is assumed to be constant. The pdf of the distance between a sensor node and an anchor is $g_D(z)$. These pdfs can be estimated from the empirical data gathered during past network operations.
• We consider a single-hop network where all the nodes are within the communication range of each other.
• The received signal strength (which is influenced by path loss, fading and shadowing) is a function of the transmission distance. Consequently, the probability of a packet loss is a function of the distance between any pair of nodes in the network.

The considered localization algorithms are assumed to be based on ranging, whereby a sensor node determines its distance to several anchors via ToF or round-trip time (RTT). Each sensor node can determine its location if it receives at least $K$ different localization packets from $K$ different anchors. The value of $K$ depends on the geometry (2-D or 3-D), and other factors such as whether the depth of the sensor node is available, or whether sound speed estimation is required. The value of $K$ is usually 3 for a 2-D operating environment with known sound speed and 4 for a 3-D one. In a situation where the underwater nodes are equipped with pressure sensors, three different successful packets would be enough for a 3-D localization algorithm [18].

The localization procedure starts either periodically for a predetermined duration (in a synchronized network), or upon receiving a request from a sensor node (in any kind of network, synchronous or asynchronous), as explained below.

Periodic localization: If all the nodes in the network, including anchors and sensor nodes, are synchronized with each other, a periodic localization approach may be employed. In this approach, after the arrival of a packet from the $j$th anchor, the $m$th sensor node estimates its distance to that anchor as $\hat{d}_{m,j} = c(\hat{t}^R_{m,j} - t^T_j)$, where $c$ is the sound speed, $t^T_j$ is the time at which the anchor transmits its packet, and $\hat{t}^R_{m,j}$ is the estimated time at which the sensor node receives this packet. The departure time $t^T_j$ is obtained by decoding the received packet (the anchor inserts this information into the localization packet), and the arrival time $\hat{t}^R_{m,j}$ can be calculated by correlating the received signal with the known training sequence (or similar procedures).
The estimated time of arrival is related to the actual arrival time through $\hat{t}^R_{m,j} = t^R_{m,j} + n_{m,j}$, where $n_{m,j}$ is zero-mean Gaussian noise with power $\sigma^2_{m,j}$, which varies with distance and can be modeled as [19]

$\sigma^2_{m,j} = k_E\, d^{n_0}_{m,j}$,   (1)

with $d_{m,j}$ the distance between the $j$th anchor and the sensor node, $n_0$ the path-loss exponent (spreading factor), and $k_E$ a constant that depends on system parameters (such as signal bandwidth, sampling frequency, channel characteristics, and noise level). In periodic localization, sensor nodes are not required to be synchronized with the anchors. If they are not synchronized, they can calculate the time-differences of arrival (TDoAs) from the measured ToFs; however, we will not consider this situation in our calculations.

On-demand localization: In this procedure (which can be applied to a synchronous or an asynchronous network), a sensor node initiates the localization process. It transmits a high-power frequency tone immediately before the request packet. The tone wakes up the anchors from their idle mode, and puts them into the listening mode. The request packet may also be used for a more accurate estimation of the arrival time. We assume that all the anchors have been correctly notified by this frequency tone. After the anchors have received the wake-up tone, they reply with localization packets. The time at which the request has been received by an anchor, $t^R_{j,m}$, and the time $t^T_j$ at which a localization packet is transmitted are included in the localization packet. This information will be used by the sensor node to estimate its round-trip time (which is proportional to twice the distance) to the anchor. The round-trip time can be modeled as

$\hat{t}^{RTT}_{m,j} = (t^R_{m,j} - t^T_m) - (t^R_{j,m} - t^T_j) + n_{j,m} + n_{m,j}$,   (2)

where $t^T_m$ is the transmission time of the request signal from the sensor node. Therefore, the estimated distance to anchor $j$ is

$\hat{d}_{m,j} = \frac{1}{2} c\, \hat{t}^{RTT}_{m,j}$.   (3)

After the sensor node estimates its location, it broadcasts its position to the other sensor nodes. This enables the sensor nodes which have overheard the localization process to estimate their positions without initiating another localization task [20].

The time it takes for an underwater node to gather at least $K$ different packets from $K$ different anchors is called the localization time. In the next section, we formally define the localization time, and show how it can be minimized for the collision-free and collision-tolerant packet transmission schemes.
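As a quick illustration of the ranging model in (1)-(3), the following Python sketch simulates one noisy RTT measurement and the resulting distance estimate. The values of c, k_E and n_0 follow those used later in the numerical section; the helper names are our own.

import math
import random

C = 1500.0      # sound speed (m/s)
K_E = 1e-10     # noise constant k_E (value used in Section VI)
N0_EXP = 1.4    # path-loss exponent n_0

def tof_noise_std(distance: float) -> float:
    """Standard deviation of the ToF noise: sigma^2 = k_E * d^{n_0}, eq. (1)."""
    return math.sqrt(K_E * distance ** N0_EXP)

def simulate_rtt(distance: float) -> float:
    """Noisy round-trip time: 2d/c plus independent noise on each leg, eq. (2)."""
    return 2.0 * distance / C + random.gauss(0, tof_noise_std(distance)) \
                              + random.gauss(0, tof_noise_std(distance))

true_d = 1000.0
d_hat = 0.5 * C * simulate_rtt(true_d)   # eq. (3)
print(f"true distance = {true_d:.1f} m, estimated = {d_hat:.1f} m")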
III. PACKET SCHEDULING

A. Collision-Free Packet Scheduling

Collision-free localization packet transmission is analyzed in [16], where it is shown that in a fully-connected (single-hop) network, based on a given sequence of the anchors' indices, each anchor has to transmit immediately after receiving the previous anchor's packet. Furthermore, it is shown that there exists an optimal ordering sequence which minimizes the localization time. However, to obtain that sequence, a fusion center is required which knows the positions of all the anchors. In a situation where this information is not available, we may assume that the anchors simply transmit in order of their ID numbers, as illustrated in Fig. 1.

Fig. 1. Packet transmission from anchors in the collision-free scheme. Here, each anchor transmits its packets according to its index value (ID number). All links between anchors are assumed to function properly in this figure (there are no missing links).

In the event of a packet loss, a subsequent anchor will not know when to transmit. If an anchor does not receive a packet from the previous anchor, it waits for a predefined time (counting from the starting time of the localization process), and then transmits its packet, similarly as introduced in [21]. With a slight modification of the result from [21], the waiting time for the $j$th anchor which has not received a packet from its previous anchor could be as short as $t_k + (j-k)(T_p + \frac{D_{aa}}{c})$, where $k$ is the index of the anchor whose packet is the last one that has been received by the $j$th anchor, $t_k$ is the time at which this packet was transmitted from the $k$th anchor (counting from the starting time of the localization process), $c$ is the sound speed, $\frac{D_{aa}}{c}$ is the maximum propagation delay between two anchors, and $T_p$ is the packet length. The packet length is related to the system bandwidth $B$ (or symbol time $T_s \approx \frac{1}{B}$), the number of bits per symbol $b_s$, the number of bits per packet $b_p$, and the guard time $T_g$ as

$T_p = T_g + \frac{b_p}{b_s} T_s$.   (4)

Under this condition, the transmission time of the $j$th anchor, $t_j$, can be selected from one of the values listed in Table I, where $D_r = D_{sa}$ in on-demand localization, which is the distance corresponding to the maximally separated sensor-anchor pair, and $D_r = 0$ in periodic localization; $t_1 = 0$ for periodic localization, and $t_1 = \frac{d_s}{c}$ for on-demand localization, with $d_s$ the distance between the first anchor and the sensor that sent the request packet; and $p_l(d_{i,j})$ is the probability of packet loss between two anchors located $d_{i,j}$ meters away from each other. The packet loss probability can be defined as

$p_l(d) = \int_{0}^{\gamma_0 N_0 B} f_{X_0|d}(x)\, dx$,   (5)

where $N_0 B$ is the noise power, $\gamma_0$ is the minimum SNR at which a received packet can be detected at the receiver, and, given the distance $d$ between two nodes, $f_{X_0|d}(x)$ is the conditional pdf of the received signal power, which will be derived in the next subsection.

Fig. 2. Packet transmission from anchors in the collision-tolerant scheme. Here, each anchor transmits its packets at random according to a Poisson distribution.

Table I. Possible times at which anchor j transmits its packet.

The first row of Table I indicates that no packet loss occurs between the $j$th and $(j-1)$th anchors (with probability $1 - p_l(d_{j,j-1})$), and the $j$th anchor transmits after it receives the packet from the $(j-1)$th anchor. The second row denotes that there is a packet loss between the $j$th and $(j-1)$th anchors (with probability $p_l(d_{j,j-1})$), but there is no packet loss between the $j$th and $(j-2)$th anchors (with probability $1 - p_l(d_{j,j-2})$); therefore, according to the protocol, the $j$th anchor waits until $t_{j-2} + 2(\frac{D_{aa}}{c} + T_p)$ before it transmits its packet. The last row of Table I specifies that the $j$th anchor has lost the packets from all previous anchors, and as a result transmits at the latest possible time in order to avoid any collision.
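The fallback rule just described is easy to state in code. The sketch below computes the j-th anchor's transmit time from the index k and transmit time t_k of the last packet it actually received; it is a direct transcription of the expression t_k + (j-k)(T_p + D_aa/c), with illustrative parameter values.

C = 1500.0        # sound speed (m/s)
T_P = 0.1         # packet duration T_p (s)
D_AA = 3000.0     # maximum anchor-anchor distance D_aa (m)

def transmit_time(j: int, k: int, t_k: float) -> float:
    """Waiting-time rule for anchor j when anchor k's packet was the last
    one it received (k = j-1 means no loss: transmit right after t_k)."""
    return t_k + (j - k) * (T_P + D_AA / C)

# Anchor 5 heard nothing after anchor 2's packet, sent at t = 4.2 s:
print(f"anchor 5 transmits at t = {transmit_time(5, 2, 4.2):.2f} s")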
Since the $d_{i,j}$, for $j = 1, \ldots, N-1$, and $d_s$ are independent of each other, the average time at which the $j$th anchor transmits its packet can be obtained as

$\bar{t}_j = (1-\bar{p}_l) \sum_{k=1}^{j-1} \bar{t}_k\, \bar{p}_l^{\,j-k-1} + T_p (1-\bar{p}_l) + \frac{\bar{d}}{c} - \frac{\overline{d p_l}}{c} + (1-\bar{p}_l)\left(\frac{D_{aa}}{c} + T_p\right) \sum_{k=2}^{j-1} k\, \bar{p}_l^{\,k-1} + (j-1)\left(\frac{D_{aa}}{c} + T_p\right) \bar{p}_l^{\,j-1} + \frac{D_r}{c}\, \bar{p}_l^{\,j-1}$,   (6)

where $\bar{p}_l$, $\bar{d}$, and $\overline{d p_l}$ are the expected values of $p_l(d_{i,j})$, $d_{i,j}$, and $d_{i,j}\, p_l(d_{i,j})$, respectively.

The average localization time of the collision-free scheme can be obtained as

$T^{avg}_{CF} = \bar{t}_N + T_p + \frac{D_{sa}}{c}$,   (7)

where $\frac{D_{sa}}{c}$ is added to ensure that the last transmitted packet from the $N$th anchor reaches the furthest point in the operating area. In the best case there is no packet loss between the anchors, and the average localization time reaches its minimum value

$T^{low}_{CF} = (N-1)\frac{\bar{d}}{c} + \frac{\bar{d}_s}{c} + N T_p + \frac{D_{sa}}{c}$,   (8)

where $\bar{d}_s$ is the average distance between a sensor node and an anchor. In the worst case, all the packets between the anchors are lost, and the requesting sensor node is the farthest from the initiating anchor. This case yields the longest localization time, given by

$T^{upp}_{CF} = N T_p + (N-1)\frac{D_{aa}}{c} + \frac{D_{sa}}{c} + \frac{D_{sa}}{c}$,   (9)

which is equivalent to a packet transmission based on time-division multiple access (TDMA) with time-slot duration $T_p + \frac{D}{c}$ (assuming $D = D_{sa} = D_{aa}$).

Another figure of merit is the probability with which a node can localize itself. If this probability is required to be above a design value $P_{ss}$, the necessary number of anchors, which also minimizes $T^{avg}_{CF}$ ($T^{avg}_{CF}$ is an increasing function of $N$), is determined as the smallest $N$ for which

$P^{loc}_{CF} = \sum_{k=K}^{N} \binom{N}{k}\, p_{CF}^k\, (1-p_{CF})^{N-k} \geq P_{ss}$,   (10)

where $p_{CF}$ is the probability that a transmitted packet reaches a sensor node correctly, which can be calculated as

$p_{CF} = \int_{\gamma_0 N_0 B}^{\infty} f_{X_0}(x)\, dx$,   (11)

where $f_{X_0}(x)$ is the pdf of the received signal power.
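A short sketch of how (8)-(10) might be used in practice: given p_CF, find the smallest N meeting the target P_ss, then evaluate the best- and worst-case localization times. All numeric inputs below are illustrative placeholders.

from math import comb

def p_loc_cf(n: int, k_min: int, p_cf: float) -> float:
    """Probability of successful self-localization, eq. (10)."""
    return sum(comb(n, k) * p_cf**k * (1 - p_cf)**(n - k)
               for k in range(k_min, n + 1))

def min_anchors(k_min: int, p_cf: float, p_ss: float, n_max: int = 50) -> int:
    """Smallest N whose localization probability meets the target P_ss."""
    for n in range(k_min, n_max + 1):
        if p_loc_cf(n, k_min, p_cf) >= p_ss:
            return n
    raise ValueError("target not reachable with n_max anchors")

# Illustrative numbers: K = 3 (2-D, known sound speed), p_CF = 0.9, P_ss = 0.99
N = min_anchors(k_min=3, p_cf=0.9, p_ss=0.99)

c, Tp, d_bar, ds_bar, D_sa, D_aa = 1500.0, 0.1, 2000.0, 2000.0, 3500.0, 3500.0
t_low = (N - 1) * d_bar / c + ds_bar / c + N * Tp + D_sa / c       # eq. (8)
t_upp = N * Tp + (N - 1) * D_aa / c + 2 * D_sa / c                 # eq. (9)
print(f"N = {N}, T_CF in [{t_low:.1f}, {t_upp:.1f}] s")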
B. Collision-Tolerant Packet Scheduling

To avoid the need for coordination among anchor nodes, in collision-tolerant packet scheduling the anchors work independently of each other. During a localization period, or upon receiving a request from a sensor node, they transmit randomly, e.g., according to a Poisson distribution with an average transmission rate of $\lambda$ packets per second. Packets transmitted from different anchors may now collide at a sensor node, and the question arises as to what the probability of successful reception is. This problem is a mirror image of the one investigated in [22], where sensor nodes transmit their packets to a common fusion center. Unlike [22], however, where the sensors know their location and power control fully compensates for the known path loss, the path loss is not known in the present scenario, and there is no power control. The average received signal strength is thus different for different links (this signal strength, along with a given fading model, determines the probability of packet loss). In this regard, the signal received at the $m$th sensor node from the $j$th anchor is

$v_{m,j}(t) = c_{m,j}\, v_j(t) + i_m(t) + w_m(t)$,   (12)

where $v_j(t)$ is the signal transmitted from the $j$th anchor, $c_{m,j}$ is the channel gain, $w_m(t)$ is additive white Gaussian noise with power $N_0 B$, and $i_m(t)$ is the interference caused by other anchors whose packets overlap with the desired packet,

$i_m(t) = \sum_{k \neq j} c_{m,k}\, v_k(t - \tau_k)$,   (13)

with $\tau_k$ the difference in the arrival times of the interfering signals w.r.t. the desired signal, which is modeled as an exponentially distributed random variable. The signal-to-interference-plus-noise ratio (SINR) at the receiver depends on the interference level, and is given by

$\gamma = \frac{X_0}{I_0 + N_0 B}$,   (14)

where $X_0 = |c_{m,j}|^2 P_0$ is the power of the signal of interest, with $P_0$ the anchor's transmit power, and where $I_0$ is the total interference power, which can be expressed as

$I_0 = \sum_{i=1}^{q} |c_{m,k_i}|^2 P_0$,   (15)

with $q$ the number of interferers, and $k_i$ the index of the $i$th interferer. We can express the signal power as

$|c_{m,j}|^2 = a_{PL}^{-1}(d_{m,j})\, e^{g_{m,j}}\, |h_{m,j}|^2$,   (16)

where $g_{m,j} \sim \mathcal{N}(0, \sigma_g^2)$ models the large-scale log-normal shadowing, $h_{m,j} \sim \mathcal{CN}(\bar{h}, \sigma_h^2)$ models the small-scale fading, and $a_{PL}$ models the path-loss attenuation, which can be formulated as [23]

$a_{PL}(d_{i,j}) = \alpha_0 \left(\frac{d_{i,j}}{d_0}\right)^{n_0} a(f)^{d_{i,j}}$,   (17)

where $\alpha_0$ is a constant, $d_0$ is the reference distance, $n_0$ is the path-loss exponent, and $a(f)$ is the frequency-dependent absorption coefficient. For localization, where the bandwidth is not large, $a(f)$ can be approximated by a constant.

The pdf of the received signal power, $f_{X_0}(x)$, can be obtained numerically. Since $a_{PL}$, $g_{m,j}$ and $h_{m,j}$ are independent random variables, we calculate the pdfs of $10\log|h_{m,j}|^2$, $10\log e^{g_{m,j}}$, and $-10\log a_{PL}$ separately. Then we convolve them, which results in $f_{X_0,dB}(x_{dB})$. With a simple change of variables, $x = 10^{0.1 x_{dB}}$, we can find $f_{X_0}(x)$, and the pdf of the interference can be obtained as

$f_{I_0}(x) = \underbrace{f_{X_0}(x) \ast f_{X_0}(x) \ast \ldots \ast f_{X_0}(x)}_{q\ \text{times}}$.   (18)

The probability that a packet is received correctly by a sensor node is then [22]

$p_s = \sum_{q=0}^{N-1} P(q)\, p_{s|q}$,   (19)

where $P(q) = \frac{(2N\lambda T_p)^q}{q!} e^{-2N\lambda T_p}$ is the probability that $q$ packets interfere with the desired packet, and $p_{s|q}$ is the probability that the desired packet “survives” under this condition,

$p_{s|q} = \begin{cases} \int_{\gamma_0 N_0 B}^{\infty} f_{X_0}(x)\, dx, & q = 0 \\ \int_{\gamma_0}^{\infty} \int_{N_0 B}^{\infty} f_{X_0}(\gamma w)\, f_{I_0}(w - N_0 B)\, w\, dw\, d\gamma, & q \geq 1 \end{cases}$   (20)

where $w = I_0 + N_0 B$.

In addition, it should be noted that multiple receptions of a packet from an anchor do not affect the probability of self-localization (localization coverage); however, in the case where a sensor node is able to localize itself, multiple receptions of a packet from an anchor affect the accuracy of the localization (see Section IV).

If we assume that the packets transmitted from the same anchor fade independently, the probability of receiving a useful packet from an anchor during the transmission time $T_T$ can now be approximated by [22]

$p_{CT} = 1 - e^{-p_s \lambda T_T}$,   (21)

and the probability that a sensor node accomplishes self-localization using $N$ anchors can be obtained as

$P^{loc}_{CT} = \sum_{k=K}^{N} \binom{N}{k}\, p_{CT}^k\, (1-p_{CT})^{N-k}$,   (22)

which is equivalent to the probability that a node receives at least $K$ different localization packets.

Fig. 3. Probability of successful localization for different values of $\lambda$ and $T_{CT}$.

It can be shown that $P^{loc}_{CT}$ is an increasing function of $T_T$ (see Appendix A); as a result, for any value of $p_s \lambda \neq 0$, there is a $T_T$ that leads to a probability of self-localization equal to or greater than $P_{ss}$. The minimum value of the required $T_T$ is obtained at the point where $p_s \lambda$ is maximum ($\lambda_{opt}$). It can be proven that the lower bound on $\lambda_{opt}$ is $\lambda^{low}_{opt} = \frac{1}{2NT_p}$, and its upper bound is $\frac{N+1}{2NT_p}$ (see Appendix B). These points will be illustrated via numerical examples in Section VI (cf. Fig. 3).
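To see how (19), (21) and (22) interact, the following sketch evaluates the localization probability on a grid of rates λ and picks the smallest transmission time T_T that meets the target P_ss. For brevity, the survival probabilities p_{s|q} are supplied as fixed numbers rather than computed from the pdfs in (18) and (20); those values and all other inputs are illustrative.

from math import comb, exp, factorial

def p_success(lam, n, t_p, p_s_given_q):
    """Packet survival probability p_s, eq. (19), with Poisson-distributed
    interferer count P(q) = (2 N lam T_p)^q / q! * exp(-2 N lam T_p)."""
    mu = 2 * n * lam * t_p
    return sum((mu**q / factorial(q)) * exp(-mu) * p_s_given_q[q]
               for q in range(len(p_s_given_q)))

def p_loc_ct(lam, t_t, n, k_min, t_p, p_s_given_q):
    p_ct = 1 - exp(-p_success(lam, n, t_p, p_s_given_q) * lam * t_t)  # (21)
    return sum(comb(n, k) * p_ct**k * (1 - p_ct)**(n - k)
               for k in range(k_min, n + 1))                          # (22)

# Illustrative inputs: N = 5 anchors, K = 3, T_p = 0.1 s, and assumed
# survival probabilities for q = 0..4 interferers (sharply decreasing).
N, K, T_P = 5, 3, 0.1
P_S_GIVEN_Q = [0.9, 0.13, 0.02, 0.003, 0.0]
P_SS = 0.99

lam_grid = [0.1 * i for i in range(1, 120)]
for t_t in [5, 10, 20, 40, 80, 160]:
    if any(p_loc_ct(lam, t_t, N, K, T_P, P_S_GIVEN_Q) >= P_SS for lam in lam_grid):
        print(f"minimum usable T_T on this grid: {t_t} s")
        break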
Given the number of anchors $N$ and a desired probability of successful self-localization $P_{ss}$, one can determine $p_{CT}$ from (22), while $\lambda$ and the minimum localization time can be determined jointly from (19) and (21). Similarly as in the collision-free scheme, we then add the time of the request, $\frac{d_s}{c}$, and the maximum propagation delay between a sensor-anchor pair, $\frac{D_{sa}}{c}$, to the (minimum) $T_T$ that is obtained from (19) and (21). The so-obtained value represents the (minimum) localization time ($T^{min}_{CT}$) $T_{CT}$ for the collision-tolerant scheme.

IV. SELF-LOCALIZATION PROCESS

We have seen that a sensor node requires at least $K$ distinct packets (or time-of-flight measurements) to determine its location. However, it may receive more than $K$ different packets, as well as some replicas, i.e., $q_j$ packets from anchor $j$, where $j = 1, \ldots, N$. In this case, a sensor uses all of this information for self-localization. Note that in the collision-free scheme $q_j$ is either zero or one; however, in the collision-tolerant scheme $q_j$ can be more than 1. Packets received from the $j$th anchor can be used to estimate the sensor node's distance to that anchor, and the redundant packets add diversity (or reduce measurement noise) for this estimate. In the next two subsections, we show how all of the correctly received packets can be used in a localization algorithm, and how the CRB of the location estimate can be obtained for the proposed scheduling schemes.

A. Localization Algorithm

After the anchors transmit their localization packets, each sensor node has $Q$ measurements. Each measurement is contaminated by noise whose power is related to the distance between the sensor and the anchor from which the measurement has been obtained. The $l$th measurement obtained from the $j$th anchor is related to the sensor's position $\mathbf{x}$ (the sensor index is omitted for simplicity) as

$\hat{t}_l = f(\mathbf{x}) + n_l$,   (23)

where $n_l$ is the measurement noise (see (1)) and

$f(\mathbf{x}) = \frac{1}{c}\,\|\mathbf{x} - \mathbf{x}_j\|_2$,   (24)

where $\mathbf{x}_j$ is the $j$th anchor's position. Stacking all the measurements gives us a $Q \times 1$ vector $\hat{\mathbf{t}}$. The number of measurements is given by

$Q = \sum_{j=1}^{N} q_j$,   (25)

where $q_j$ is the number of measurements obtained correctly from the $j$th anchor. In CFS, $q_j$ is a Bernoulli random variable with success probability $P^j_1 = P(q_j = 1) = 1 - p_l(d_j)$, where $d_j$ is the distance between the sensor node and the $j$th anchor. In CTS, $q_j$ is a Poisson random variable with distribution

$P^j_n = P(q_j = n) = \frac{(p_s \lambda T_T)^n}{n!}\, e^{-\lambda T_T p^j_{s|d}}$,   (26)

where $p^j_{s|d}$ is the conditional probability that a sensor node correctly receives a packet from the $j$th anchor, given its distance from all anchors (the elements of $\mathbf{d}$). This pdf can be found from the conditional pdfs of the received signal and the interference power (see (19) and (20)).

Since the measurement errors are independent of each other, the maximum likelihood solution for $\mathbf{x}$ is given by

$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\hat{\mathbf{t}} - \mathbf{f}(\mathbf{x})\|^2$,   (27)

which can be calculated using a method such as the Gauss-Newton algorithm specified in Algorithm 1. In this algorithm, $\eta$ controls the convergence speed, $\nabla \mathbf{f}(\mathbf{x}^{(i)}) = [\frac{\partial f_1}{\partial \mathbf{x}}, \frac{\partial f_2}{\partial \mathbf{x}}, \ldots, \frac{\partial f_Q}{\partial \mathbf{x}}]^T_{\mathbf{x} = \mathbf{x}^{(i)}}$ represents the gradient of the vector $\mathbf{f}$ w.r.t. the variable $\mathbf{x}$ at $\mathbf{x}^{(i)}$, $\mathbf{x}^{(i)}$ is the estimate in the $i$th iteration, and $\frac{\partial f_l}{\partial \mathbf{x}} = [\frac{\partial f_l}{\partial x}, \frac{\partial f_l}{\partial y}, \frac{\partial f_l}{\partial z}]^T$, where $l = 1, \ldots, Q$. Here, $I$ and $\epsilon$ are the user-defined limits of the stopping criterion. The initial guess is also an important factor and can be determined through triangulation, similarly as explained in [24].

Algorithm 1 Gauss-Newton Algorithm
  Start with an initial location guess.
  Set $i = 1$ and $E = \infty$.
  while $i \leq I$ and $E \geq \epsilon$ do
    Next state: $\mathbf{x}^{(i+1)} = \mathbf{x}^{(i)} - \eta\left(\nabla \mathbf{f}(\mathbf{x}^{(i)})^T \nabla \mathbf{f}(\mathbf{x}^{(i)})\right)^{-1} \nabla \mathbf{f}(\mathbf{x}^{(i)})^T \left(\mathbf{f}(\mathbf{x}^{(i)}) - \hat{\mathbf{t}}\right)$
    $E = \|\mathbf{x}^{(i+1)} - \mathbf{x}^{(i)}\|$
    $i = i + 1$
  end while
  $\hat{\mathbf{x}} = \mathbf{x}^{(i)}$
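A direct NumPy transcription of Algorithm 1 follows (2-D case for brevity). The measurement stacking and the damped Gauss-Newton update mirror (23)-(27); the synthetic anchor layout, step size and noise level are illustrative choices, not values from the paper.

import numpy as np

C = 1500.0  # sound speed (m/s)

def gauss_newton(t_hat, anchors, x0, eta=0.2, max_iter=50, eps=1e-6):
    """Damped Gauss-Newton solver for (27): t_hat holds Q ToF measurements,
    anchors holds the matching Q anchor positions (repeats allowed in CTS)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        diff = x - anchors                       # Q x 2
        dist = np.linalg.norm(diff, axis=1)      # ||x - x_j||
        f = dist / C                             # f(x), eq. (24)
        J = diff / (C * dist[:, None])           # gradient of f, Q x 2
        step = eta * np.linalg.solve(J.T @ J, J.T @ (f - t_hat))
        x -= step
        if np.linalg.norm(step) < eps:
            break
    return x

# Synthetic example: 4 anchors, a true position, and slightly noisy ToFs.
rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [3000.0, 0.0], [0.0, 3000.0], [3000.0, 3000.0]])
x_true = np.array([1200.0, 700.0])
t_hat = np.linalg.norm(x_true - anchors, axis=1) / C + rng.normal(0, 1e-3, 4)
print(gauss_newton(t_hat, anchors, x0=np.array([1500.0, 1500.0])))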
B. Cramér-Rao Bound

The Cramér-Rao bound is a lower bound on the variance of any unbiased estimator of a deterministic parameter. In this subsection, we derive the CRB for the location estimate of a sensor node.

To find the CRB, the Fisher information matrix (FIM) has to be calculated. The Fisher information is a measure of the information that an observable random variable $\hat{\mathbf{t}}$ carries about an unknown parameter $\mathbf{x}$ upon which the pdf of $\hat{\mathbf{t}}$ depends. The elements of the FIM are defined as

$I(\mathbf{x})_{i,j} = -E\left[\frac{\partial^2 \log h(\hat{\mathbf{t}}; \mathbf{x})}{\partial x_i \partial x_j}\right]$,   (28)

where $\mathbf{x}$ is the location of the sensor node, $h(\hat{\mathbf{t}}; \mathbf{x})$ is the pdf of the measurements parametrized by the value of $\mathbf{x}$, and the expected value is over the cases where the sensor is localizable.

In a situation where the measurements (ToFs or RTTs between a sensor node and the anchors) are contaminated with Gaussian noise (whose power is related to the mutual distance between a sensor-anchor pair), the elements of the FIM can be formulated as

$I(\mathbf{x})_{i,j} = \frac{1}{P_{loc}} \sum_{q_N=0}^{Q_N} \cdots \sum_{q_2=0}^{Q_2} \sum_{q_1=0}^{Q_1} \left[\frac{\partial \mathbf{f}}{\partial x_i}^T \mathbf{R}_w^{-1} \frac{\partial \mathbf{f}}{\partial x_j} + \frac{1}{2} \mathrm{tr}\left(\mathbf{R}_w^{-1} \frac{\partial \mathbf{R}_w}{\partial x_i} \mathbf{R}_w^{-1} \frac{\partial \mathbf{R}_w}{\partial x_j}\right)\right] \prod_{j=1}^{N} P^j_{q_j}$, s.t. $\{q_1, \ldots, q_N\}$ enable self-localization,   (29)

where $P_{loc}$ is the localization probability (see (10) and (22)), $Q_i = 1$ for CFS and $Q_i = \infty$ for CTS, $\mathbf{R}_w$ is the $Q \times Q$ noise covariance matrix,

$\frac{\partial \mathbf{R}_w}{\partial x_i} = \mathrm{diag}\left(\frac{\partial [\mathbf{R}_w]_{11}}{\partial x_i}, \frac{\partial [\mathbf{R}_w]_{22}}{\partial x_i}, \ldots, \frac{\partial [\mathbf{R}_w]_{QQ}}{\partial x_i}\right)$,   (30)

and

$\frac{\partial \mathbf{f}}{\partial x_i} = \left[\frac{\partial f_1}{\partial x_i}, \frac{\partial f_2}{\partial x_i}, \ldots, \frac{\partial f_Q}{\partial x_i}\right]^T$,   (31)

with $f_i$ a ToF or RTT measurement.

Once the FIM has been computed, the lower bound on the variance of the estimation error can be expressed as $CRB = \sum_{i=1}^{3} CRB_{x_i}$, where $CRB_{x_i}$ is the variance of the estimation error in the $i$th variable, defined as

$CRB_{x_i} = \left[\mathbf{I}^{-1}(\mathbf{x})\right]_{ii}$.   (32)

Note that the CRB is meaningful only if a node is localizable (hence the $\frac{1}{P_{loc}}$ in (29)), meaning that a sensor node has at least $K$ different measurements. Hence, only $\sum_{k=K}^{N} \binom{N}{k}$ possible states have to be considered to calculate (29) for collision-free scheduling, while the number of states is countless for collision-tolerant scheduling. Nonetheless, it can be shown that the number of possible states in CTS can be reduced to that of CFS (see Appendix C).

V. ENERGY CONSUMPTION

In this section, we investigate the average energy consumed by all the anchors during the localization. In CFS, the receiver of anchor $j$ is on for $t_j$ seconds, and its transmitter is on only for $T_p$ seconds. With power consumption $P_L$ in listening mode and $P_T$ in transmitting mode, the average energy consumption in CFS is

$E^{avg}_{CF} = N T_p P_T + \sum_{j=1}^{N} \bar{t}_j P_L$,   (33)

where the energy consumed for processing is ignored. As is clear from (6), an anchor with a higher index value has to listen to the channel longer, and consequently consumes more energy in comparison with one that has a lower index. To overcome this problem, anchors can swap indices between localization procedures.

In CTS, the anchors do not need to listen to the channel, and they only transmit at an average rate of $\lambda$ packets per second. The average energy consumption is thus

$E^{avg}_{CT} = \lambda T_T N T_p P_T$.   (34)

For $\frac{P_L}{P_T} < \frac{N T_p (\lambda T_T - 1)}{\sum_{j=1}^{N} \bar{t}_j}$, the average energy consumption of CTS is always greater than that of CFS. However, as $\lambda$ gets smaller (or equivalently, as $T_{CT}$ gets larger), the energy consumption of CTS is reduced.
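A few lines suffice to compare (33) and (34) for a candidate configuration; the listening/transmit powers and mean listening times below are placeholders, not values from the paper.

def energy_cfs(n, t_p, p_t, p_l, t_bar):
    """Average CFS energy, eq. (33): N transmissions plus listening time."""
    return n * t_p * p_t + sum(t_bar) * p_l

def energy_cts(lam, t_t, n, t_p, p_t):
    """Average CTS energy, eq. (34): lambda*T_T packets per anchor."""
    return lam * t_t * n * t_p * p_t

# Illustrative configuration: 5 anchors, 100 ms packets, P_T = 40 W, P_L = 0.4 W
N, T_P, P_T, P_L = 5, 0.1, 40.0, 0.4
t_bar = [2.0, 4.5, 7.0, 9.5, 12.0]          # assumed mean listening times (s)
print(f"E_CFS = {energy_cfs(N, T_P, P_T, P_L, t_bar):.1f} J")
print(f"E_CTS = {energy_cts(lam=1.0, t_t=10.0, n=N, t_p=T_P, p_t=P_T):.1f} J")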
VI. NUMERICAL RESULTS

To illustrate the results, a 2-D rectangular operating area with length $D_x$ and width $D_y$ is considered, with uniformly distributed anchors and sensors. There is no difference in how the anchors and sensor nodes are distributed, and therefore we have $f_D(d) = g_D(d)$, which can be obtained as [26]

$f_D(d) = \frac{2d}{D_x^2 D_y^2} \left[ d^2 (\sin^2\theta_e - \sin^2\theta_s) + 2 D_x D_y (\theta_e - \theta_s) + 2 D_x d (\cos\theta_e - \cos\theta_s) - 2 D_y d (\sin\theta_e - \sin\theta_s) \right]$,   (35)

where $\theta_s$ and $\theta_e$ are related to $d$ as given in Table II.

Table II. Values of $\theta_s$ and $\theta_e$ based on distance $d$.

The parameter values for the numerical results are listed in Table III, and are used for all examples.

Table III. Simulation parameters. Note that in this table some parameters, such as $N$, $D_{aa}$, $T_g$, etc., are related to other parameters; e.g., $N$ depends on the values of $\bar{p}_l$ and $P_{ss}$.

The number of bits in each packet is set to $b_p = 200$, which is sufficient for the position information of each anchor, the time of transmission (and the arrival time of the request packet), and the training sequence. Assuming QPSK modulation ($b_s = 2$), a guard time $T_g = 50$ ms, and a bandwidth of $B = 2$ kHz, the localization packet length is $T_p = 100$ ms (see (4)). In addition, $k_E$ is set to $10^{-10}$, which is approximately equivalent to a 1.9 m range accuracy at 1 km away from an anchor. Moreover, to keep the packets transmitted from an anchor independent of each other, we set $\sigma_g = 0$ (no shadowing effect) for the simulations.

Fig. 3 shows the probability of successful self-localization in the collision-tolerant scheme as a function of $\lambda$ and the indicated value of $T_{CT}$. It can be observed that there is an optimal value of $\lambda$ (denoted by $\lambda_{opt}$) which corresponds to the minimal value of $T_{CT}$ ($T^{min}_{CT}$) that satisfies $P^{loc}_{CT} \geq P_{ss}$. The highlighted area in Fig. 3 shows the predicted region of $\lambda_{opt}$ (obtained in Appendix B). As can be seen, $\lambda_{opt}$ is close to $\lambda^{low}_{opt}$, and it gets closer to this value as $P_{s|q>0}$ gets smaller. In addition, for values of $T_{CT}$ greater than $T^{min}_{CT}$, a range of values $\lambda \in [\lambda_{low}, \lambda_{upp}]$ can attain the desired probability of self-localization. In this case, the lowest value of $\lambda$ should be selected to minimize the energy consumption.

Fig. 4 shows the probability of correct packet reception versus the number of interferers (the desired $P_{ss}$ is set to 0.90 in this example) for different values of the path-loss exponent $n_0$. When there is no interference, the probability of packet reception is high. Yet, when there is an interferer, the chance of correct reception of a packet becomes small (0.126 for $n_0 = 1.4$), and as the number of interferers grows, it gets smaller. The probability that two or more packets overlap is also depicted in part (b) of this figure for the three values of $\lambda$ shown in Fig. 3. It can be seen that as the value of $\lambda$ is reduced from $\lambda_{opt}$ (which is equivalent to a larger $T_{CT}$), the probability of collision becomes smaller. The chance of correct packet reception thus increases, and the energy consumption is reduced, as explained in Section V. In addition, it can be observed that although using $\lambda_{upp}$ results in the same performance as $\lambda_{low}$, it relies on packets that have survived collisions, which is not energy-efficient in practical situations, neither for anchors (energy required for multiple packet transmissions) nor for sensor nodes (processing energy needed for packet detection).

Fig. 4. (a) Probability of successful packet reception versus the number of interferers. (b) Probability that $q$ interferers collide with the desired packet. For this figure, $\lambda_{low}$, $\lambda_{opt}$ and $\lambda_{upp}$ are chosen from Fig. 3.
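Distributions like (35) are easy to sanity-check by Monte Carlo: draw pairs of uniform points in the D_x by D_y rectangle and compare the empirical distance histogram against the analytic pdf. The sketch below builds only the empirical side; the area dimensions, bin grid and sample count are arbitrary choices.

import random

DX, DY = 5000.0, 5000.0   # operating-area dimensions (m), illustrative
N_SAMPLES, N_BINS = 200_000, 20

def random_pair_distance() -> float:
    """Distance between two independent uniform points in the rectangle."""
    x1, y1 = random.uniform(0, DX), random.uniform(0, DY)
    x2, y2 = random.uniform(0, DX), random.uniform(0, DY)
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

d_max = (DX**2 + DY**2) ** 0.5
counts = [0] * N_BINS
for _ in range(N_SAMPLES):
    counts[min(int(random_pair_distance() / d_max * N_BINS), N_BINS - 1)] += 1

bin_w = d_max / N_BINS
for i, c in enumerate(counts):
    print(f"d in [{i*bin_w:7.0f},{(i+1)*bin_w:7.0f}) m: f_D ~ {c/(N_SAMPLES*bin_w):.2e}")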
Fig. 4 shows the probability of correct packet reception versus the number of interferers (the desired P_ss is set to 0.90 in this example) for different values of the path-loss exponent n_0. When there is no interference, the probability of packet reception is high. Yet, when there is one interferer, the chance of correct reception of a packet becomes small (0.126 for n_0 = 1.4), and as the number of interferers grows, it gets smaller. The probability that two or more packets overlap is also depicted in part (b) of this figure for the three values of λ shown in Fig. 3. It can be seen that as the value of λ is reduced from λ_opt (which is equivalent to a larger T_CT), the probability of collision becomes smaller. The chance of correct packet reception thus increases, and the energy consumption is reduced, as explained in Section V. In addition, it can be observed that although using λ_upp results in the same performance as λ_low, it relies on the packets that have survived collisions, which is not energy-efficient in practical situations, neither for anchors (energy required for multiple packet transmissions) nor for sensor nodes (processing energy needed for packet detection).

Fig. 4. (a) Probability of successful packet reception versus the number of interferers. (b) Probability that q interferers collide with the desired packet. For this figure, λ_low, λ_opt and λ_upp are chosen from Fig. 3.

Part (a) of Fig. 5 shows the time required for localization versus the transmit power. As P_0 increases, p̄_l gets smaller, and consequently fewer anchors are required for collision-free localization. In Fig. 5, for a given P_0, the number of anchors N is calculated using (10), which is then used to calculate the minimum required time for collision-free and collision-tolerant localization. Each fall in T_CF^upp in CFS indicates that the number of anchors has decreased by one. We also note that for a given number of anchors, the upper and lower bounds of T_CF are constant over a range of P_0 values; however, the actual performance of both schemes improves as P_0 grows. The collision-tolerant approach performs better for a wide range of P_0 values, and as the number of anchors decreases, its performance slightly degrades. In part (b) of Fig. 5, we calculate the ratio P_L/P_T below which the average energy of CTS is greater than that of CFS. The ratio E_avg^CF / E_avg^CT is a linear function of P_L/P_T, and as P_0 increases, for larger values of P_L/P_T the average energy consumption of CTS becomes greater than that of CFS. In practice, for a range of 6 km, P_L/P_T is less than 1/100 [25], which means that CTS consumes more energy.

Fig. 5. (a) Effect of transmit power on the minimum time required for localization, and the average probability of packet loss p̄_l (dashed line). (b) The minimum value of P_L/P_T in dB below which the average energy consumption of CTS is greater than that of CFS.

Many factors, such as noise power and packet length, depend directly on the operating frequency and the system bandwidth. Assuming single-hop communication among the sensor nodes, an optimum frequency band exists for a given operating area. As the size of the operating area increases, a lower operating frequency (with less bandwidth) is used to compensate for the increased attenuation. Furthermore, as the distance increases, the amount of available bandwidth at the optimum operating frequency also gets smaller [23]. As mentioned before, the localization packet is usually short in terms of the number of bits, but its duration (in seconds) still depends on the system bandwidth. Below, we investigate the effect of packet length (or, equivalently, system bandwidth) on the localization time.

As shown in Fig. 6, the length of the localization packet plays a significant role in the collision-tolerant algorithm. The minimum localization time grows almost linearly with T_p in all cases; however, the rate of growth is much higher for the collision-tolerant system than for the collision-free one. At the same time, as shown in Fig. 7, the size of the operating area has a major influence on the performance of CFS, while that of CTS does not change very much. It can be deduced that in a network where the ratio of packet length to the maximum propagation delay is low, the collision-tolerant algorithm outperforms the collision-free one in terms of localization time.

Fig. 6. Effect of packet length on the minimum time required for localization.

Fig. 7. Effect of the operating area size on the time required for localization.

The localization accuracy is related to the noise level at which a ToF measurement is taken, and to the anchors' constellation. If a sensor node in a 2-D operating area receives packets from anchors that are (approximately) located on a line, the sensor node is unable to localize itself (or it experiences a large error). To evaluate the localization accuracy of each algorithm, we considered M = 100 sensor nodes and
ran a Monte Carlo simulation (10^3 runs) to extract the results. The number of iterations in Algorithm 1 is set to I = 50, and the convergence rate is γ = 15. T_CT was set equal to the average localization time of CFS. In this special case, where T_CT^min is lower than T_CF^avg, the successful localization probability (P_loc) of CTS is greater than that of CFS. The probability distribution of the localization error ||x̂ − x|| is illustrated in Fig. 8 for both schemes. In this figure, the root mean square error (RMSE) and the root CRB (R-CRB) are also shown, with dashed and dash-dotted lines, respectively. It can be observed that in CTS the pdf is concentrated at lower values of the localization error than in CFS, because each sensor in CTS has a chance of receiving multiple copies of the same packet, thus reducing the range estimation error.

Fig. 8. Probability distribution of the localization error, and its corresponding CRB, for CTS and CFS.

VII. CONCLUSION

We have considered two classes of packet scheduling for self-localization in an underwater acoustic sensor network, one based on a collision-free design and another based on a collision-tolerant design. In collision-free packet scheduling, the time of the packet transmission from each anchor is set in such a way that none of the sensor nodes experiences a collision. In contrast, collision-tolerant algorithms are designed so as to control the probability of collision to ensure successful localization with a pre-specified reliability. We have also proposed a simple Gauss-Newton based localization algorithm for these schemes, and derived their Cramér-Rao lower bounds. The performance of the two classes of algorithms in terms of the time required for localization was shown to depend on the circumstances. When the ratio of the packet length to the maximum propagation delay is low, as is the case with localization, and the average probability of packet loss is not close to zero, the collision-tolerant protocol requires less time for localization than the collision-free one for the same probability of successful localization. Except for the average energy consumed by the anchors, the collision-tolerant scheme has multiple advantages. The major one is its simplicity of implementation, due to the fact that anchors work independently of each other; as a result, the scheme is spatially scalable, with no need for a fusion center. Furthermore, its localization accuracy is always better than that of the collision-free scheme due to multiple receptions of desired packets from anchors. These features make the collision-tolerant localization scheme appealing from a practical implementation viewpoint. In the future, we will extend our work to a multi-hop network where the communication range of the acoustic modems is much shorter than the size of the operating area.

APPENDIX A
P_loc^CT IS AN INCREASING FUNCTION OF T_CT

In this appendix, we show that the probability of successful localization is an increasing function of the localization time. According to (21), and the fact that p_sλ is independent of T_T, it is clear that p_CT is an increasing function of T_T. Therefore, P_loc^CT is an increasing function of T_T if P_loc^CT is an increasing function of p_CT. The derivative of P_loc^CT w.r.t. p_CT is

$$ \frac{\partial P_{\mathrm{loc}}^{CT}}{\partial p_{CT}} = \sum_{k=K}^{N} \binom{N}{k} (k - N p_{CT}) \, p_{CT}^{k-1} (1 - p_{CT})^{N-k-1}. \qquad (36) $$
With a simple modification we have

$$ \frac{\partial P_{\mathrm{loc}}^{CT}}{\partial p_{CT}} = \frac{1}{p_{CT}(1 - p_{CT})} \left[ \left( \sum_{k=0}^{N} \binom{N}{k} k \, p_{CT}^{k} (1-p_{CT})^{N-k} - \sum_{k=0}^{K-1} \binom{N}{k} k \, p_{CT}^{k} (1-p_{CT})^{N-k} \right) - N p_{CT} \left( \sum_{k=0}^{N} \binom{N}{k} p_{CT}^{k} (1-p_{CT})^{N-k} - \sum_{k=0}^{K-1} \binom{N}{k} p_{CT}^{k} (1-p_{CT})^{N-k} \right) \right]. \qquad (37) $$

Using the properties of binomial random variables we have

$$ \sum_{k=0}^{N} \binom{N}{k} k \, p_{CT}^{k} (1 - p_{CT})^{N-k} = N p_{CT}, \qquad (38) $$

and

$$ \sum_{k=0}^{N} \binom{N}{k} p_{CT}^{k} (1 - p_{CT})^{N-k} = 1. \qquad (39) $$

Now, equation (37) (or, equivalently, (36)) is equal to

$$ \frac{\partial P_{\mathrm{loc}}^{CT}}{\partial p_{CT}} = \sum_{k=0}^{K-1} \binom{N}{k} (N p_{CT} - k) \, p_{CT}^{k-1} (1 - p_{CT})^{N-k-1}. \qquad (40) $$

It can be observed that (36) is always positive for p_CT < K/N, and (40) is always positive for p_CT > K/N. As a result, ∂P_loc^CT/∂p_CT is positive for any value of p_CT; therefore, P_loc^CT is an increasing function of p_CT, and consequently of T_T.

APPENDIX B
MAXIMUM VALUE OF p_sλ

The first and second derivatives of p_sλ w.r.t. λ can be obtained as

$$ \frac{\partial p_{s\lambda}}{\partial \lambda} = \sum_{q=0}^{N} p_{s|q} \frac{x^{q} e^{-x}}{q!} (q - x + 1), \qquad (41) $$

$$ \frac{\partial^2 p_{s\lambda}}{\partial \lambda^2} = \sum_{q=0}^{N} p_{s|q} \frac{x^{q-1} e^{-x}}{q!} \left[ (q-x)(q-x+1) - x \right], \qquad (42) $$

where x = 2NλT_p. For x < 1 the derivative in (41) is positive, and for x > N + 1 it is negative. Therefore, p_sλ has at least one maximum within x ∈ [1, N+1]. In practical scenarios the value of p_s|q for q > 0 is usually small, so that it can be approximated by zero. For the special case where p_s|q>0 = 0, (41) is zero if x = 1, and (42) is then negative; as a result, λ_opt^low = 1/(2NT_p) maximizes P_loc^CT. This corresponds to a lower bound on the optimal point in the general problem (i.e., p_s|q>0 ≠ 0).

APPENDIX C
CRAMÉR-RAO LOWER BOUND FOR CTS

The upper bound on the sum operation in (29) for CTS is ∞ (note that in practice at most T_T/T_p packets can be transmitted from an anchor), and this makes the CRB calculation very difficult even if it is implemented numerically. To reduce the complexity of the problem, the observation of a sensor node from the jth anchor is divided into two parts: either the sensor node does not receive any packet from this anchor (no information is obtained), or it receives one or more packets. Since the anchor and the sensor node do not move very much during the localization procedure, their distance can be assumed almost constant, and therefore the noise power is the same for all measurements obtained from an anchor. When a sensor node gathers multiple measurements contaminated with independent noise of the same power (diagonal covariance matrix), the CRB can be computed with less complexity. We explain the complexity reduction for the first anchor, and then generalize to the other anchors. Considering the first anchor, each element of the FIM can be calculated in two parts: no correct packet reception, and one or more correct packet receptions from this anchor, which can be formulated as

$$ I(\mathbf{x})_{i,j} = P_1^{0} \, I(\mathbf{x} \mid q_1 = 0)_{i,j} + P_1^{>0} \, I(\mathbf{x} \mid q_1 > 0)_{i,j}, \qquad (43) $$

where P_1^0 is the probability that no packet is received from the first anchor, and P_1^{>0} = Σ_{q_1=1}^{∞} P_1^{q_1} is the probability that one or more packets are received from the first anchor, which depends on the distance between the sensor node and the anchor. The second term in (43) can be expanded as

$$ I(\mathbf{x} \mid q_1 > 0)_{i,j} = \frac{1}{P_{\mathrm{loc}}} \sum_{q_2=0}^{Q_2} \cdots \sum_{q_N=0}^{Q_N} \sum_{k=1}^{\infty} \left[ k\left( \sigma_1^{-2} \frac{\partial f_1}{\partial x_i}\frac{\partial f_1}{\partial x_j} + \sigma_1^{-4} \frac{\partial \sigma_1^2}{\partial x_i}\frac{\partial \sigma_1^2}{\partial x_j} \right) + c_1 + c_2 \right] \frac{P_1^{k}}{P_1^{>0}} \prod_{j=2}^{N} P_j^{q_j}, \qquad (44) $$

where the summation runs over the sets {q_1, ..., q_N} that enable self-localization, and c_1 and c_2 are affected only by measurements from the other anchors. Using a simple factorization we have
$$ I(\mathbf{x} \mid q_1 > 0)_{i,j} = \frac{1}{P_{\mathrm{loc}}} \sum_{q_2=0}^{Q_2} \cdots \sum_{q_N=0}^{Q_N} \left[ g_1\left( \sigma_1^{-2} \frac{\partial f_1}{\partial x_i}\frac{\partial f_1}{\partial x_j} + \sigma_1^{-4} \frac{\partial \sigma_1^2}{\partial x_i}\frac{\partial \sigma_1^2}{\partial x_j} \right) + c_1 + c_2 \right] \prod_{j=2}^{N} P_j^{q_j}, \qquad (45) $$

where the summation again runs over the sets that enable self-localization, and

$$ g_j = \frac{\sum_{k=1}^{\infty} k P_j^{k}}{\sum_{k=1}^{\infty} P_j^{k}} = \frac{\lambda T_T \, p_{s|d_j}}{1 - P_j^{0}}. \qquad (46) $$

Now, we define a vector a of size N × 1 with its kth element a_k either zero (if q_k = 0) or g_k (if q_k > 0). We also define a vector b of size N × 1 with its kth element b_k = σ_k^{-2} (∂f_k/∂x_i)(∂f_k/∂x_j) + σ_k^{-4} (∂σ_k²/∂x_i)(∂σ_k²/∂x_j). Then, we have

$$ I(\mathbf{x} \mid \mathbf{a})_{i,j} = \frac{1}{P_{\mathrm{loc}}} \left( \mathbf{a}^{T} \mathbf{b} \right) \left( \prod_{k:\,a_k=0} P_k^{0} \right) \left( \prod_{k:\,a_k>0} (1 - P_k^{0}) \right), \qquad (47) $$

where the first product runs over the N − n_a anchors with a_k = 0 and the second over the n_a anchors with a_k > 0, with n_a the number of non-zero elements in a. Hence, to evaluate I(x)_{i,j} for the localizable scenarios, only Σ_{k=K}^{N} C(N, k) possible states (the different realizations of a which lead to localizable scenarios) have to be considered. This number is the same as that of CFS.

Hamid Ramezani (S'11) was born in Tehran, Iran. He received the B.Sc. degree in electrical engineering from Tehran University, Tehran, and the M.Sc. degree in telecommunications engineering from the Iran University of Science and Technology, Tehran, in 2007. He worked at several companies focusing on the implementation of wireless system standards such as DVB-T and DVB-H. He is currently pursuing the Ph.D. degree with the Electrical Engineering Department, Delft University of Technology (TU Delft), Delft, The Netherlands. His current research interests include underwater acoustic communications and networking.

Fatemeh Fazel (S'05–M'07) received the B.Sc. degree from Sharif University, Tehran, Iran, the M.Sc. degree from the University of Southern California, and the Ph.D. degree from the University of California, Irvine, all in electrical engineering. She is currently an Associate Research Scientist in the Electrical and Computer Engineering Department, Northeastern University, Boston, MA, USA. Her research interests are in signal processing methods for wireless communications and sensor networks.

Milica Stojanovic (S'90–M'93–SM'08–F'10) received the B.S. degree from the University of Belgrade, Serbia, in 1988, and the M.S. and Ph.D. degrees in electrical engineering from Northeastern University, Boston, MA, USA, in 1991 and 1993, respectively. She was a Principal Scientist at the Massachusetts Institute of Technology, and in 2008 joined Northeastern University, where she is currently a Professor of electrical and computer engineering. She is also a Guest Investigator at the Woods Hole Oceanographic Institution, and a Visiting Scientist at MIT. Her research interests include digital communications theory, statistical signal processing and wireless networks, and their applications to underwater acoustic systems. She is an Associate Editor for the IEEE JOURNAL OF OCEANIC ENGINEERING and a past Associate Editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING and TRANSACTIONS ON VEHICULAR TECHNOLOGY. She also serves on the Advisory Board of the IEEE COMMUNICATION LETTERS, and chairs the IEEE Ocean Engineering Society's Technical Committee for Underwater Communication, Navigation and Positioning.

Geert Leus (M'01–SM'05–F'12) received the M.Sc. and Ph.D. degrees in applied sciences from the Katholieke Universiteit Leuven, Belgium, in June 1996 and May 2000, respectively. Currently, he is an "Antoni van Leeuwenhoek" Full Professor with the Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands. His research interests are in the area of signal processing for communications. He received a 2002 IEEE Signal Processing Society Young Author Best Paper Award and a 2005 IEEE Signal Processing Society Best Paper Award.
He was the Chair of the IEEE Signal Processing for Communications and Networking Technical Committee, and an Associate Editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING, the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, the IEEE SIGNAL PROCESSING LETTERS, and the EURASIP Journal on Advances in Signal Processing. Currently, he is a Member-at-Large of the Board of Governors of the IEEE Signal Processing Society and a member of the IEEE Sensor Array and Multichannel Technical Committee. Finally, he serves as the Editor-in-Chief of the EURASIP Journal on Advances in Signal Processing.
BRACER: A Distributed Broadcast Protocol in Multi-Hop Cognitive Radio Ad Hoc Networks
SECURITY OPTIMIZATION OF DYNAMIC NETWORKS WITH PROBABILISTIC GRAPH MODELING AND LINEAR PROGRAMMING

By

A PROJECT REPORT

Submitted to the Department of Computer Science & Engineering in the FACULTY OF ENGINEERING & TECHNOLOGY, in partial fulfillment of the requirements for the award of the degree of MASTER OF TECHNOLOGY IN COMPUTER SCIENCE & ENGINEERING

APRIL 2016
CERTIFICATE

Certified that this project report titled "SECURITY OPTIMIZATION OF DYNAMIC NETWORKS WITH PROBABILISTIC GRAPH MODELING AND LINEAR PROGRAMMING" is the bonafide work of Mr. _____________, who carried out the research under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other project report or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

Signature of the Guide          Signature of the H.O.D
Name                            Name
DECLARATION

I hereby declare that the project work entitled "SECURITY OPTIMIZATION OF DYNAMIC NETWORKS WITH PROBABILISTIC GRAPH MODELING AND LINEAR PROGRAMMING", submitted to BHARATHIDASAN UNIVERSITY in partial fulfillment of the requirements for the award of the Degree of MASTER OF SCIENCE IN COMPUTER SCIENCE, is a record of original work done by me under the guidance of Prof. A.Vinayagam M.Sc., M.Phil., M.E., and that, to the best of my knowledge, the work reported here is not part of any other thesis or work on the basis of which a degree or award was conferred on an earlier occasion to me or any other candidate.

(Student Name)
(Reg. No)

Place:
Date:
ACKNOWLEDGEMENT

I am extremely glad to present my project "SECURITY OPTIMIZATION OF DYNAMIC NETWORKS WITH PROBABILISTIC GRAPH MODELING AND LINEAR PROGRAMMING", which is a part of my curriculum of the third semester of the Master of Science in Computer Science. I take this opportunity to express my sincere gratitude to those who helped me in bringing out this project work.

I would like to express my gratitude to our Director, Dr. K. ANANDAN, M.A.(Eco.), M.Ed., M.Phil.,(Edn.), PGDCA., CGT., M.A.(Psy.), who gave me the opportunity to undertake this project. I am highly indebted to the Co-ordinator, Prof. Muniappan, Department of Physics, whom I thank from the depth of my heart for the valuable comments I received through my project. I wish to express my deep sense of gratitude to my guide, Prof. A.Vinayagam M.Sc., M.Phil., M.E., for her immense help and encouragement toward the successful completion of this project. I also express my sincere thanks to all the staff members of Computer Science for their kind advice. And last, but not least, I express my deep gratitude to my parents and friends for their encouragement and support throughout the project.

CHAPTER 1

1.1 ABSTRACT
Securing the networks of large organizations is technically challenging due to their complex configurations and constraints. Managing these networks requires rigorous and comprehensive analysis tools. A network administrator needs to identify vulnerable configurations, as well as tools for hardening the networks. Such networks usually have dynamic and fluid structures, so one may have incomplete information about the connectivity and availability of hosts. In this paper, we address the problem of statically performing a rigorous assessment of a set of network security defense strategies, with the goal of reducing the probability of a successful large-scale attack in a dynamically changing and complex network architecture. We describe a probabilistic graph model and algorithms for analyzing the security of complex networks, with the ultimate goal of reducing the probability of successful attacks. Our model naturally utilizes a scalable state-of-the-art optimization technique called sequential linear programming that is extensively applied and studied in various engineering problems. In comparison to related solutions based on attack graphs, our probabilistic model provides mechanisms for expressing uncertainties in network configurations, which is not reported elsewhere. We have performed comprehensive experimental validation with real-world network configuration data of a sizable organization.
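As a toy illustration of the kind of linearized step that sequential linear programming iterates (the model, variable names, and numbers here are invented for illustration and are not taken from the report), one such step can be solved with scipy.optimize.linprog:

    # Toy single step of a sequential-linear-programming loop (illustration only).
    # Minimize a linearized attack-success measure c^T x subject to a budget,
    # where x_i is the hardening effort applied to host i.
    from scipy.optimize import linprog

    c = [-0.8, -0.5, -0.3]       # negative gradients: risk reduction per unit effort
    A_ub = [[1.0, 1.0, 1.0]]     # total effort limited by the budget
    b_ub = [2.0]
    bounds = [(0.0, 1.0)] * 3    # per-host effort limits

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x)                 # effort concentrates on the highest-risk hosts
    # In sequential LP, the objective would be re-linearized around res.x
    # and the LP re-solved until convergence.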
1.2 INTRODUCTION

BRACER: A Distributed Broadcast Protocol in Multi-Hop Cognitive Radio Ad Hoc Networks with Collision Avoidance

Yi Song and Jiang Xie, Senior Member, IEEE

Abstract—Broadcast is an important operation in wireless ad hoc networks, where control information is usually propagated as broadcasts for the realization of most networking protocols. In traditional ad hoc networks, since the spectrum availability is uniform, broadcasts are delivered via a common channel which can be heard by all users in a network. However, in cognitive radio (CR) ad hoc networks, different unlicensed users may acquire different available channel sets. This non-uniform spectrum availability imposes special design challenges for broadcasting in CR ad hoc networks. In this paper, a fully-distributed Broadcast protocol in multi-hop Cognitive Radio ad hoc networks with collision avoidance, BRACER, is proposed. In our design, we consider practical scenarios in which each unlicensed user is not assumed to be aware of the global network topology, the spectrum availability information of other users, or time synchronization information. By intelligently downsizing the original available channel set and designing the broadcasting sequences and scheduling schemes, our proposed broadcast protocol can provide a very high successful broadcast ratio while achieving very short average broadcast delay. It can also avoid broadcast collisions. To the best of our knowledge, this is the first work that addresses the unique broadcasting challenges in multi-hop CR ad hoc networks with collision avoidance.

Index Terms—Cognitive radio ad hoc networks, distributed broadcast, channel hopping, broadcast collision avoidance
1 INTRODUCTION

COGNITIVE radio (CR) technology has been proposed as an enabling solution to alleviate the spectrum underutilization problem [1]. With the capability of sensing the frequency bands in a time- and location-varying spectrum environment and adjusting the operating parameters based on the sensing outcome, CR technology allows an unlicensed user (or, secondary user (SU)) to exploit those frequency bands unused by licensed users (or, primary users) in an opportunistic manner [2]. Secondary users can form a CR infrastructure-based network or a CR ad hoc network. Recently, CR ad hoc networks have attracted plentiful research attention due to their various applications [3], [4].

Broadcast is an important operation in ad hoc networks, especially in distributed multi-hop multi-channel networks. Control information exchange among nodes, such as channel availability and routing information, is crucial for the realization of most networking protocols in an ad hoc network. This control information is often sent out as network-wide broadcasts, messages that are sent to all other nodes in a network. In addition, some exigent data packets such as emergency messages and alarm signals are also delivered as network-wide broadcasts [5]. Due to the importance of the broadcast operation, in this paper we address the broadcasting issue in multi-hop CR ad hoc networks. Since broadcast messages often need to be disseminated to all destinations as quickly as possible, we aim to achieve a very high successful broadcast ratio and very short broadcast delay.

The broadcasting issue has been studied extensively in traditional ad hoc networks [6], [7], [8], [9]. However, unlike traditional single-channel or multi-channel ad hoc networks, where the channel availability is uniform, in CR ad hoc networks different SUs may acquire different sets of available channels. This non-uniform channel availability imposes special design challenges for broadcasting in CR ad hoc networks. First of all, for traditional single-channel and multi-channel ad hoc networks, due to the uniformity of channel availability, all nodes can tune to the same channel. Thus, broadcast messages can be conveyed through a single common channel which can be heard by all nodes in a network. However, in CR ad hoc networks, a common channel for all nodes may not exist. More importantly, before any control information is exchanged, a SU is unaware of the available channels of its neighboring nodes. Therefore, broadcasting messages on a global common channel is not feasible in CR ad hoc networks.
To further illustrate the challenges of broadcasting in CR ad hoc networks, we consider the single-hop scenario shown in Fig. 1, where node A is the source node. For traditional single-channel and multi-channel ad hoc networks, as shown in Fig. 1a, nodes can tune to the same channel (e.g., channel 1) for broadcasting. Thus, node A only needs one time slot to let all its neighboring nodes receive the broadcast message in an error-free environment. However, in CR ad hoc networks, where the channel availability is heterogeneous and SUs are unaware of the available channels of
each other, as shown in Fig. 1b, node A may have to use multiple channels for broadcasting and may not be able to finish the broadcast within one time slot. In fact, the exact broadcast delay for all single-hop neighboring nodes to successfully receive the broadcast message in CR ad hoc networks relies on various factors (e.g., channel availability and the number of neighboring nodes) and is random. Furthermore, since multiple channels may be used for broadcasting and the exact time for all single-hop neighboring nodes to successfully receive the broadcast message is random, avoiding broadcast collisions (i.e., a node receiving multiple copies of the broadcast message simultaneously) is much more complicated in CR ad hoc networks than in traditional ad hoc networks. In traditional ad hoc networks, numerous broadcast scheduling schemes have been proposed to reduce the probability of broadcast collisions while optimizing the network performance [10], [11], [12], [13], [14], [15]. All these proposals rest on the premise that all nodes use a single channel for broadcasting and that the exact delay for a single-hop broadcast is one time slot. However, in CR ad hoc networks, without information about the channel used for broadcasting and the exact delay for a single-hop broadcast, predicting when and on which channel a broadcast collision occurs is extremely difficult. Hence, designing a broadcast protocol which can avoid broadcast collisions, as well as provide a high successful broadcast ratio and short broadcast delay, is a very challenging issue for multi-hop CR ad hoc networks under practical scenarios. Simply extending existing broadcast protocols to CR ad hoc networks cannot yield the optimal performance.
networkscannot yield the optimal performance.Currently, research on
broadcasting in multi-hop CRad hoc networks is still in its infant stage. There
are onlylimited papers addressing the broadcasting issue in CRad hoc networks
[16], [17], [18], [19]. However, in [16]and [17], the global network topology
and the availablechannel information of all SUs are assumed to be known.Additionally,
in [17], a common signaling channel for thewhole network is employed which is
also not practical.These two papers adopt impractical assumptions whichmake
them inadequate to be used in practical scenarios.In [18], a Quality-of-Service
(QoS)-based broadcast protocolunder blind information is proposed. However,
thisscheme does not consider optimizing the network performance.Moreover, it
ignores the broadcast collision issue.Other proposals aiming to locally
establish a commoncontrol channel may also be considered for broadcasting[20],
[21], [22], [23]. However, these proposals need a-priorichannel availability
information of all SUs which isusually obtained via broadcasts. In addition,
althoughsome schemes on channel hopping in CR networks can beused for finding a
common channel between two nodes[24], [25], [26], they still suffer various
limitations andcannot be used in broadcast scenarios. In [24] and [25],the
proposed channel hopping schemes cannot guaranteerendezvous under some special
circumstances. In addition,one of the proposed schemes in [24] only workswhen
two SUs have exactly the same available channelsets. Furthermore, in [26], a
jump-stay based channel hoppingalgorithm is proposed for guaranteed rendezvous.However,
the expected rendezvous time for the asymmetricmodel (i.e., different users
have different availablechannels) is of polynomial complexity with respect to
thetotal number of channels. Thus, it is unsuitable for broadcastscenarios in
CR ad hoc networks where channelavailability is usually non-uniform and short
broadcastdelay is often required. Other channel hopping algorithmsexplained in
[27] require tight time synchronizationwhich is also not feasible before any
controlinformation is exchanged.In this paper, a fully-distributed broadcast protocol in amulti-hop CR ad hoc network, BRACER, is
proposed. Weconsider practical scenarios in our design: 1) no global andlocal
common control channel is assumed to exist; 2) theglobal network topology is
not known; 3) the channel informationof any other SUs is not known; 4) the
available channelsets of different SUs are not assumed to be the same;and 5)
tight time synchronization is not required. Our proposedBRACER protocol can
provide very high successfuldelivery ratio while achieving very short broadcast
delay. Itcan also avoid broadcast collisions. To the best of ourknowledge, this
is the first work that addresses the broadcastingchallenges specifically in
multi-hop CR ad hoc networkswith a solution for broadcast collision avoidance.The
remainder of this paper is organized as follows.In Section 2, the proposed
broadcast protocol for multihopCR ad hoc networks, BRACER, is presented. Thederivation
of an important system parameter is given inSection 3. Two implementation
issues of the proposedprotocol are further discussed in Section 4. Simulationresults
are shown in Section 5, followed by the conclusionsin Section 6.2 THE PROPOSED
2 THE PROPOSED BRACER PROTOCOL

In this section, we introduce the proposed broadcast protocol for multi-hop CR ad hoc networks, BRACER. There are three components of the proposed BRACER protocol: 1) the construction of the broadcasting sequences; 2) the distributed broadcast scheduling scheme; and 3) the broadcast collision avoidance scheme. We assume that a time-slotted system is adopted for SUs, where the length of a time slot is long enough to transmit a broadcast packet [28]. In addition, we assume that the locations of SUs are static. We also assume that each SU knows the locations of all its two-hop neighbors. We claim that this is a more valid assumption than the knowledge of the global network topology. We provide a detailed discussion of this issue in Section 4. In the rest of the paper, we use the term "sender" to indicate a SU who has just received a message and will rebroadcast the message. In addition, we use the term "receiver" to indicate a SU who has not received the message. The notations used in our protocol design are listed in Table 1.

Fig. 1. The single-hop broadcast scenario.
2.1 Construction of the Broadcasting Sequences

The broadcasting sequences are the sequences of channels by which a sender and its receivers hop for successful broadcasts. First of all, we consider the single-hop broadcast scenario. As explained in Section 1, due to the non-uniform channel availability in CR ad hoc networks, a SU sender may have to use multiple channels for broadcasting in order to let all its neighboring nodes receive the broadcast message. Accordingly, the neighboring nodes may also have to listen to multiple channels in order to receive the broadcast message. Hence, the first issue in designing a broadcast protocol is which channels should be used for broadcasting. One possible method is to broadcast on all the available channels of the SU sender. However, this method is quite costly in terms of the broadcast delay when the number of available channels is large. Therefore, we propose to select a subset of available channels from the original available channel set of each SU. First, the available channels of each SU are ranked based on the channel indexes. Then, each SU selects the first w channels from the ranked channel list and forms a downsized available channel set. The value of w needs to be carefully designed to ensure that at least one common channel exists between the downsized available channel sets of the SU sender and each of its neighboring nodes. The detailed derivation process to obtain a proper w is given in Section 3. Based on the derivation process, each SU can calculate the value of w of its own and of its one-hop neighbors before a broadcast starts.

On the other hand, the second issue is the sequences of channels by which a sender and its receivers hop for successful broadcasts. In this paper, we design different broadcasting sequences for a SU sender and its receivers to guarantee a successful broadcast in the single-hop scenario as long as they have at least one common channel. The sender hops and broadcasts a message on each channel in a time slot following its own sequence. On the other hand, the receiver hops and listens on each channel following its own sequence. The pseudo-codes for constructing the broadcasting sequences are shown in Algorithms 1 and 2, where w(v) is the initial w of node v.

Algorithm 1: Construction of the Broadcasting Sequence BS_v for a SU Sender v.
Input: w(v); L_v. Output: BS_v.
1  randomize the order of elements in L_v;
2  BS_v <- {};              /* initialization */
3  i <- 1;
4  while i <= w(v)^2 do
5    BS_v(i) <- L_v((i mod w(v)) + 1);
6    i <- i + 1;            /* repeat L_v for w(v) times */
7  return BS_v;

Algorithm 2: Construction of the Broadcasting Sequence RS_v for a SU Receiver v.
Input: w(v); L_v. Output: RS_v.
1  randomize the order of elements in L_v;
2  RS_v <- {};              /* initialization */
3  i <- 1;
4  while i <= w(v) do
5    j <- 1;
6    while j <= w(v) do
7      RS_v((i - 1) w(v) + j) <- L_v(i);
8      j <- j + 1;          /* repeat an element for w(v) times */
9    i <- i + 1;            /* repeat for every element in L_v */
10 return RS_v;
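A compact Python rendering of Algorithms 1 and 2 may make the constructions concrete; the data layout (plain lists, a w*w-slot sender cycle and a w*w-slot receiver cycle) follows the pseudo-code above, while the function names are ours.

    import random

    def sender_sequence(L, w, rng=random):
        """Algorithm 1: the sender cycles through its w downsized channels w times."""
        L = list(L)
        rng.shuffle(L)                            # step 1: randomize the channel order
        return [L[i % w] for i in range(w * w)]   # w*w slots: L repeated w times

    def receiver_sequence(L, w, rng=random):
        """Algorithm 2: the receiver dwells w slots on each of its w channels."""
        L = list(L)
        rng.shuffle(L)
        return [L[i] for i in range(w) for _ in range(w)]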
From Algorithms 1 and 2, a SU sender hops periodically over its w available channels for w periods (i.e., w^2 time slots). Each receiver stays on one of its w available channels for w time slots, and then repeats this for every channel among its w available channels. Fig. 2 gives an example to illustrate the construction of the broadcasting sequences for SU senders and receivers. In Fig. 2, the downsized available channel sets of a sender and a receiver are {1, 2} and {2, 3, 4}, respectively. Based on Algorithm 1, the broadcasting sequence of the sender is {2, 1, 2, 1}. Similarly, based on Algorithm 2, the broadcasting sequence of the receiver is {4, 4, 4, 3, 3, 3, 2, 2, 2}. Since a sender usually does not know the length of the broadcasting sequence of the receiver, it broadcasts the message following its broadcasting sequence for ⌊M^2/w^2⌋ + 1 cycles, where M is the total number of channels. In this way, the total length of time slots for which the sender broadcasts is bound to be longer than one cycle of the receiver's broadcasting sequence. As shown in Fig. 2, the shaded part represents a successful broadcast.

Table 1. Notations used in the protocol:
N(v)      the set of the neighboring nodes of node v
N(N(v))   the set of the neighbors of the neighboring nodes of node v
d(v, u)   the Euclidean distance between nodes v and u
r_c       the radius of the transmission range of each node
|.|       the number of elements in a set
L_v       the downsized available channel set of node v
w(v)      the size of the downsized available channel set of node v
C         the set of the initial w of intermediate nodes
BS_v      the broadcasting sequence for a sender v
RS_v      the broadcasting sequence for a receiver v
DS_v      the default sequence of a sender v
st_v      the starting time slot of a sender v
rt_v      the time slot in which a receiver v receives the message
R_v       the random number assigned to a receiver v by its sender

Fig. 2. An example of the broadcasting sequences.
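Replaying the Fig. 2 example with the sketch above (one possible outcome, since the channel orders are randomized):

    sender_set, receiver_set = [1, 2], [2, 3, 4]
    bs = sender_sequence(sender_set, 2)      # e.g. [2, 1, 2, 1]
    rs = receiver_sequence(receiver_set, 3)  # e.g. [4, 4, 4, 3, 3, 3, 2, 2, 2]
    cycles = len(rs) // len(bs) + 1          # enough sender cycles to span one receiver cycle
    hits = [t for t, (a, b) in enumerate(zip(bs * cycles, rs)) if a == b]
    print(bs, rs, "first common slot:", hits[0] if hits else None)

Whatever the random orders, a meeting slot always exists here because channel 2 is common to both downsized sets, as Theorem 1 below formalizes.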
Since every SU calculates its initial value of w based on its local information and the derivation process in Section 3, different SUs may obtain different values of w. We further denote w_s and w_r as the w used by the sender and the receiver, respectively, to construct their broadcasting sequences. Note that w_s and w_r may not necessarily be the same as the initial w calculated by each SU. They also depend on the initial w of the neighboring nodes. The following theorem gives an upper bound on the single-hop broadcast delay.

Theorem 1. If w_s <= w_r, the single-hop broadcast is a guaranteed success within w_r^2 time slots, as long as the sender and the receiver have at least one common channel between their downsized available channel sets.

Proof. Based on Algorithm 1, a SU sender broadcasts on all the channels in its downsized available channel set in w_s consecutive time slots. Based on Algorithm 2, a SU receiver listens to every channel in its downsized available channel set for w_r consecutive time slots. If w_s <= w_r, then during the w_r consecutive time slots in which the SU receiver stays on the same channel, every channel of the SU sender must appear at least once. Thus, as long as the SU sender and the receiver have at least one common channel, there must exist a time slot in which the sender and the receiver hop on the same channel during one cycle of the broadcasting sequence of the receiver (i.e., w_r^2 time slots). Since we let the total length of time slots for which the sender broadcasts be longer than one cycle of the receiver's broadcasting sequence, the broadcast is guaranteed to be successful. □
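A randomized sanity check of Theorem 1, using the earlier sequence sketches with assumed channel sets (ours, not the paper's):

    # With ws <= wr and exactly one common channel, sender and receiver
    # always meet within wr*wr slots, for any randomized channel orders.
    import random

    rng = random.Random(7)
    for _ in range(1000):
        ws, wr = sorted(rng.sample(range(2, 6), 2))          # ws <= wr
        common = rng.randrange(1, 20)                        # the shared channel
        Ls = rng.sample(range(20, 40), ws - 1) + [common]    # disjoint otherwise
        Lr = rng.sample(range(40, 60), wr - 1) + [common]
        bs = sender_sequence(Ls, ws, rng)
        rs = receiver_sequence(Lr, wr, rng)
        cycles = (wr * wr) // (ws * ws) + 1                  # cover one receiver cycle
        assert any(a == b for a, b in zip(bs * cycles, rs))
    print("no counterexample found")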
Then, how are w_s and w_r to be determined? From Theorem 1, w_s <= w_r is a sufficient condition for a successful single-hop broadcast. Therefore, in order to satisfy this condition, a proper w_r needs to be selected by any SU who has not received the broadcast message, to ensure the reception of the broadcast message sent from any potential neighbor. Since w_r depends on w_s, and a SU receiver usually does not know which neighboring node is sending until it receives the broadcast message, it selects the largest initial w of all its one-hop neighbors as its w_r. That is, for a SU receiver v, w_r(v) = max{w(u) | u in N(v)}. On the other hand, the sender uses its calculated initial w as w_s to broadcast. Therefore, the w_s selected by the actual sender is bound to be smaller than or equal to this w_r. Thus, according to Theorem 1, the single-hop broadcast is a guaranteed success as long as the sender and its receiver have at least one common channel between their downsized available channel sets.

To illustrate the above operation, we consider the multi-hop scenario shown in Fig. 3. The initial w calculated by each SU before the broadcast starts, based on its local information, is shown. Every node also calculates the initial w of its one-hop neighbors. Without loss of generality, node A is assumed to be the source node. Based on Theorem 1, the values of w_r employed by each receiver can be obtained. For instance, since node B knows the initial w of its neighbors (i.e., w(A) = 3, w(D) = 4, and w(F) = 4), it selects the largest initial w as its own w_r (i.e., w_r(B) = 4). Similarly, we have w_r(C) = 4, w_r(D) = 3, w_r(E) = 4, and w_r(F) = 5. Then, all nodes except node A use their w_r to construct the broadcasting sequences based on Algorithm 2. On the other hand, since each sender uses its calculated initial w as w_s, we have w_s(A) = 3, w_s(B) = 3, w_s(C) = 5, w_s(D) = 4, w_s(E) = 2, and w_s(F) = 4. Then, if a node needs to broadcast a message, it uses its w_s to construct the broadcasting sequence based on Algorithm 1.
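In code form, the w_r selection is just a maximum over the neighbors' initial w. Only node B's neighbor list is spelled out in the text, so the example below is limited to it (values from the Fig. 3 example):

    # Each node takes the largest initial w among its one-hop neighbors as wr,
    # and its own initial w as ws.
    w_init = {'A': 3, 'B': 3, 'C': 5, 'D': 4, 'E': 2, 'F': 4}
    neighbors_of_B = ['A', 'D', 'F']
    wr_B = max(w_init[u] for u in neighbors_of_B)
    print(wr_B)   # 4, matching wr(B) = 4 in the text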
2.2 The Distributed Broadcast Scheduling Scheme

Next, we consider the broadcast scheduling issue in the multi-hop broadcast scenario. The goal of the proposed distributed broadcast scheduling scheme is to intelligently select SU nodes for rebroadcasting in order to achieve the shortest broadcast delay. First, Fig. 4 shows the simulation results using the parameters given in Section 5. From Fig. 4, we observe that the single-hop broadcast delay increases when w increases. Therefore, in a multi-hop broadcast scenario, if there are multiple intermediate nodes with the same child node, the intermediate node with the smallest w is selected to rebroadcast. If there is more than one intermediate node with the smallest w, all these nodes should rebroadcast, and a broadcast collision avoidance scheme (explained in detail in Section 2.3) is executed before they rebroadcast the message. The pseudo-code of the proposed scheduling scheme is shown in Algorithm 3, where node v has just received the broadcast message from node q and needs to decide whether to rebroadcast. Node q includes the calculated initial w of its one-hop neighbors in the broadcast message. Algorithm 3 indicates that each SU should know the locations of its one-hop neighbors (in order to obtain N(v)) and its two-hop neighbors (in order to obtain N(q) and d(u, k)). Once a node receives the message, it executes Algorithm 3 to decide whether it should rebroadcast or not. If it needs to rebroadcast, it uses its calculated initial w as w_s to construct the broadcasting sequence based on Algorithm 1. Thus, as illustrated in Fig. 3, the message deliveries are shown by the arrows.

Fig. 3. A multi-hop broadcast scenario.

Fig. 4. The single-hop broadcast delay when w_s = w_r = w.

Algorithm 3: The Pseudo-Code of the Broadcast Scheduling Scheme for a SU Sender v.
Input: q; N(v); N(N(v)); {w(u) | u in N(q)}. Output: decision on rebroadcasting.
1  C <- {w(v)};
2  if {k | k in (N(v) - N(v) ∩ N(q))} != {} then      /* v has at least one receiver */
3    foreach k do
4      if {u | u in N(q), d(u, k) <= r_c, u != v} != {} then
                                                       /* there are multiple paths from q -> k */
5        foreach u do
6          C <- {C, w(u)};
7        if w(v) = min C and |{e | e = min C}| = 1 then
                                                       /* v is the only node with the smallest w */
8          return TRUE;
9        else if w(v) = min C and |{e | e = min C}| > 1 then
                                                       /* v is one of multiple nodes with the same smallest w */
10         run Algorithm 4;
11         return TRUE;
12       else
13         return FALSE;                               /* v does not rebroadcast */
14     else
15       return TRUE;                                  /* v rebroadcasts the message */
16 else
17   return FALSE;
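A hedged Python rendering of Algorithm 3 follows; the data layout (neighbor sets in a dict, a distance callback) is our assumption, and the tie among smallest-w candidates is only flagged in a comment rather than counted via the multiset C of the pseudo-code.

    def should_rebroadcast(v, q, N, w, d, rc):
        """N[x]: neighbor set of node x; w[x]: its initial w;
        d(u, k): Euclidean distance; rc: transmission radius."""
        receivers = N[v] - (N[v] & N[q])       # v's neighbors not covered by q
        if not receivers:
            return False                       # v has no receiver: do not forward
        C = [w[v]]
        for k in receivers:
            others = [u for u in N[q] if u != v and d(u, k) <= rc]
            if not others:
                return True                    # v is the only path from q to k
            C.extend(w[u] for u in others)
        if w[v] == min(C):
            # if C.count(min(C)) > 1, Algorithm 4 is run first to avoid a
            # collision among the tied candidates (not shown here)
            return True
        return False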
From the above design, it is noted that each SU (whether sending or receiving) follows the same rules, and no centralized entity or prior information about the sender is required. Thus, the proposed broadcast scheduling scheme is fully distributed. In addition, since the node with the smallest w is selected for rebroadcasting, the broadcast delay is the shortest. Moreover, because only a subset of intermediate nodes is selected to rebroadcast, the number of intermediate nodes that need to forward the message is reduced. Thus, the probability that multiple senders broadcast to the same receiver simultaneously can be reduced. Hence, the proposed broadcast scheduling scheme also contributes to broadcast collision avoidance.
2.3 The Broadcast Collision Avoidance Scheme

From Algorithm 3, if there are multiple intermediate nodes with the same child node, only the intermediate node with the smallest w should rebroadcast. However, if there is more than one intermediate node with the same smallest w, all these intermediate nodes should rebroadcast, and a broadcast collision may occur if these nodes deliver the messages on the same channel at the same time. For instance, in the example shown in Fig. 5, where node A is the source node, nodes B and C have the same w, which may lead to a broadcast collision when they rebroadcast simultaneously.

Fig. 5. The broadcast scenario where a broadcast collision may occur.

Most broadcast collision avoidance methods in traditional ad hoc networks assign different time slots to different intermediate nodes to avoid simultaneous transmissions. However, as explained in Section 1, these methods cannot be applied to CR ad hoc networks, because the exact time for the intermediate nodes to receive the broadcast message is random. As a result, assigning different time slots to different intermediate nodes is very challenging. In addition, since the intermediate nodes use multiple channels for broadcasting, the channel on which a broadcast collision occurs is also unknown. To the best of our knowledge, no existing collision avoidance scheme can address these challenges in CR ad hoc networks. In this paper, we propose a broadcast collision avoidance scheme for CR ad hoc networks. The main idea is to prohibit intermediate nodes from rebroadcasting on the same channel at the same time. Our proposed broadcast collision avoidance scheme works in a scenario where the intermediate nodes have the same parent node, as shown in Fig. 5. The procedure of the proposed broadcast collision avoidance scheme is summarized as follows:
Step 1: generating a default sequence. When a source node (e.g., node A in Fig. 5) broadcasts the message, it includes its own original available channel set in the message. Hence, if an intermediate node receives the message, it obtains the original available channel information of its parent node. Then, the intermediate node uses the first w available channels of its parent node to generate a default sequence, where w is its own calculated initial w (which may not be the same as the initial w of its parent node). If a channel in the default sequence is not available for this intermediate node, a void channel is assigned to replace the corresponding channel. For instance, if nodes B and C both obtain w = 3 and the original available channels of nodes A, B, and C are {1, 2, 3, 4, 5}, {2, 3, 4, 5}, and {1, 3, 4, 6}, respectively, nodes B and C only use the first three available channels of node A to generate their default sequences. Therefore, the default sequence of node B is {0, 2, 3} and the default sequence of node C is {1, 0, 3}, where 0 denotes a void channel. A node does not send anything on a void channel.

Step 2: circular-shifting the default sequence by a random number. Apart from the available channel set, the source node also includes a distinctive integer for each intermediate node v, randomly selected from [1, w(v)]. If there are more than w(v) intermediate nodes, the parent node randomly selects w(v) of them and assigns a random integer to each. Only those intermediate nodes that acquire a random integer will rebroadcast the packet. Then, each intermediate node generates a new sequence from its default sequence using a circular shift and the random integer. If we denote the default sequence as DS and the random integer as R, the intermediate node performs a circular shift on DS R times (there is no difference between right-shift and left-shift). For instance, if nodes B and C get 3 and 1 as their random integers, respectively, the new sequences they generate by left circular shift are {0, 2, 3} and {0, 3, 1}, respectively.

Step 3: forming the broadcasting sequence. Denote the starting time slot of the source node's broadcasting sequence as st, and the time slot in which an intermediate node receives the broadcast message as rt. The source node includes its st in the broadcast message. Then, the intermediate node performs a circular shift on the new sequence generated in Step 2 another (rt − st + 1) times. It repeats that sequence w(v) times to form one cycle of its broadcasting sequence.
The pseudo-code of the broadcast collision avoidance scheme is shown in Algorithm 4, where q is the source node and Circshift() is the circular-shift function. To further elaborate the scheme, Fig. 6 shows an example of the proposed broadcast collision avoidance scheme. Without loss of generality, the starting time slot of the source node is 1. While nodes B and C have not yet received the broadcast message, they hop through the channels based on the broadcasting sequences generated by Algorithm 2. Then, nodes B and C receive the broadcast message in time slots 4 and 1, respectively. Based on Algorithm 4, and with random integers 3 and 1 for nodes B and C, respectively, node B forms the broadcasting sequence {2, 3, 0, 2, 3, 0, 2, 3, 0} and node C forms the broadcasting sequence {3, 1, 0, 3, 1, 0, 3, 1, 0}. They then start rebroadcasting from time slots 5 and 2, respectively, using these broadcasting sequences. The underlined channels are those a node hops on if it starts from time slot 1.

Algorithm 4: The Pseudo-Code of the Broadcast Collision Avoidance Scheme for SU v.
Input: q; L_q; L_v; st_q; rt_v; R_v; w(v). Output: BS'_v.
1  BS'_v <- {};             /* initialization */
2  i <- 1;
3  l <- 1;
4  while i <= w(v) do       /* generating a default sequence */
5    j <- 1;
6    while j <= w(v) do
7      if L_v(i) = L_q(j) then
8        DS_v(j) <- L_q(j);
9  T_v <- Circshift(DS_v, R_v);   /* circular shifting */
10 while l <= w(v)^2 do     /* forming a broadcast sequence */
11   BS'_v(l) <- T_v((l + (rt_v − st_q) + 1) mod w(v));
12   l <- l + 1;
13 return BS'_v;
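The three steps of the scheme (and hence Algorithm 4) condense to a few lines of Python; the function below is our rendering, with 0 marking a void channel as in Step 1, and it reproduces the worked example above.

    def collision_free_sequence(Lq, Lv, w, R, st, rt):
        Lv_set = set(Lv)
        default = [c if c in Lv_set else 0 for c in Lq[:w]]  # Step 1: 0 = void channel
        T = default[R % w:] + default[:R % w]                # Step 2: shift by the random integer
        k = (rt - st + 1) % w                                # Step 3: align to the receive slot
        T = T[k:] + T[:k]
        return T * w                                         # one cycle spans w*w slots

    LA = [1, 2, 3, 4, 5]   # parent A's original available channels
    print(collision_free_sequence(LA, [2, 3, 4, 5], w=3, R=3, st=1, rt=4))  # node B
    print(collision_free_sequence(LA, [1, 3, 4, 6], w=3, R=1, st=1, rt=1))  # node C

Running it prints {2, 3, 0, ...} for node B and {3, 1, 0, ...} for node C, matching the sequences derived in the example.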
Therefore, by constructing the broadcasting sequences from the same channel set (the channel set of the common parent node, node A) but circular-shifting them a different number of times for different nodes, the intermediate nodes are guaranteed not to send on the same channel at the same time. Thus, broadcast collisions can be avoided. In addition, the proposed broadcast collision avoidance scheme still works when intermediate nodes are not synchronized. They can be synchronized based on the time stamp received from the common parent node. In this way, the time slots of the intermediate nodes are perfectly aligned, and broadcast collisions are resolved. A tradeoff of the proposed broadcast collision avoidance scheme is that fewer available channels are used for broadcasting, because some void channels may be assigned. However, the benefit (e.g., the increase in the successful broadcast ratio) gained from eliminating broadcast collisions is greater than the loss of a very small number of channels. Hence, the only issue left is the derivation of the initial w, which is introduced in Section 3.
2.4 Protocol Flow Chart

In this section, we summarize the procedure of the proposed BRACER protocol. Fig. 7 shows the flow chart of the BRACER protocol. As shown in Fig. 7, before a broadcast starts, every SU node first calculates its own initial w and the initial w of its one-hop neighboring nodes using the two-hop location information. If this node is the source node, it uses its own initial w as its w_s and constructs the broadcasting sequence based on Algorithm 1. Then, it hops and broadcasts a message on each channel during one time slot, following its sequence. On the other hand, if this node is not the source node, it is by default a receiver. It then uses the maximum w of its one-hop neighboring nodes as its w_r and constructs the broadcasting sequence based on Algorithm 2. It hops and listens on each channel during one time slot, following its sequence. If the node receives the broadcast message from a sender, it runs the broadcast scheduling scheme based on Algorithm 3 to determine whether it needs to rebroadcast this message. If it needs to rebroadcast and there is only one smallest w, it uses its own w as w_s and runs Algorithm 1 to rebroadcast. If it needs to rebroadcast and more than one node has the smallest w, it runs the broadcast collision avoidance scheme based on Algorithm 4 to rebroadcast the message.

Fig. 6. An example of the proposed broadcast collision avoidance scheme.

Fig. 7. The flow chart of the proposed BRACER protocol.
3 THE DERIVATION OF THE VALUE OF w

In this section, we first introduce the network model we consider. Then, based on this model, we present the derivation process for the size of the downsized available channel set, w.

3.1 The Network Model

In this paper, we consider a CR ad hoc network where N SUs and K primary users (PUs) co-exist in an a × a area. PUs are evenly distributed within the area. The SUs opportunistically access M licensed channels. Each SU has a circular transmission range with a radius of r_c. The SUs within the transmission range are considered the neighboring nodes of the corresponding SU. That is, only when a SU receiver is within the transmission range of a SU transmitter is the signal-to-noise ratio (SNR) at the SU receiver considered acceptable for reliable communications. In addition, apart from broadcast collisions, other factors may also contribute to packet errors (e.g., channel quality, modulation schemes, and coding rate). However, in this paper we only consider broadcast collisions as the cause of packet errors. We claim that this is a valid assumption in most broadcast scenarios [6], [7], [8], [9], [10], [11], [12], [14], [15], [16], [17], [29], [30]. Each SU also has a circular sensing range with a radius of r_s. That is, if a PU is currently active within the sensing range of a SU, the corresponding SU is able to detect its appearance. Since different SUs have different local sensing ranges which include different PUs, their acquired available channels may be different [31], [32]. In addition, because the available channels of a SU are obtained based on the sensing outcome within the sensing range, a SU is not allowed to communicate with other SUs outside its sensing range, since it might mistakenly use a channel occupied by a PU, which would result in interference to the PU. Therefore, in this paper, we assume that r_c <= r_s.

In this paper, we model the PU activity as an ON/OFF process, where the length of the ON period is the length of a PU packet. The lengths of the ON period and the OFF period can follow arbitrary distributions. We assume that each PU randomly selects a channel from the spectrum band to transmit one packet, which consists of multiple time slots. Moreover, because PUs at different locations can claim any channels for communications, the packets on the same channel do not necessarily belong to the same PU. This is a more practical scenario, as compared to some papers which assume that each channel is associated with a different PU. Under such a practical scenario, only those PUs that are within the sensing range of a SU and are active during the broadcast process contribute to the unavailable channels of the SU [18].
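A minimal simulation of this model (with assumed parameter values, not values from the paper) shows how SUs at different positions end up with different available channel sets:

    # PUs uniform in an a-by-a area, each active with probability r on a random
    # channel; an SU's available set excludes channels of active PUs within rs.
    import random
    rng = random.Random(1)

    a, K, M, rs = 1000.0, 20, 10, 300.0
    r = 0.4                                  # eq. (2): E[ON] / (E[ON] + E[OFF])
    pus = [(rng.uniform(0, a), rng.uniform(0, a)) for _ in range(K)]

    def available_channels(sx, sy):
        busy = set()
        for (px, py) in pus:
            if (px - sx)**2 + (py - sy)**2 <= rs**2 and rng.random() < r:
                busy.add(rng.randrange(1, M + 1))   # the active PU's channel
        return [c for c in range(1, M + 1) if c not in busy]

    print(available_channels(300.0, 300.0))
    print(available_channels(700.0, 700.0))  # generally a different set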
3.2 The Derivation of the Value of w

As explained in Section 2, the value of w is essential to ensure a successful single-hop broadcast. Denote the probability of a successful single-hop broadcast as P_succ(w), where P_succ(w) is a function of w. Our goal is to obtain an appropriate w that satisfies the condition P_succ(w) >= 1 − ε, where ε is a small pre-defined value. From Theorem 1, the existence of at least one common channel between the downsized available channel sets of a SU pair is a necessary condition for a successful single-hop broadcast. Therefore, if we denote the source SU of a single-hop broadcast as S_0 and the neighbors of S_0 as {S_1, S_2, ..., S_H}, where H is the number of neighbors, P_succ(w) is equal to the probability that there is at least one common channel between S_0 and each of its neighbors in their downsized available channel sets.
3.2.1 The Single-Pair Scenario

We first calculate the probability that there is at least one common channel between the downsized available channel sets of S_0 and one of its neighbors, S_i. The relative locations of the two SUs and their sensing ranges are shown in Fig. 8a. As illustrated in Fig. 8a, the sensing ranges are divided into three areas: A_1, A_2, and A_3. Note that PUs in different areas have a different impact on the channel availability of the two SUs. For instance, if a PU is active within A_3, the channel used by this PU is unavailable for both SUs. However, if a PU is active within A_1, the channel used by this PU is only unavailable for S_0. Thus, we first calculate the probability that a channel is available within each area, P_k, k ∈ {1, 2, 3}. The size of the total network area is denoted by A_L (i.e., A_L = a^2). Since the locations of the PUs are evenly distributed, the probability that p PUs are within A_k is

$$ \Pr(p) = \binom{K}{p} \left( \frac{A_k}{A_L} \right)^{p} \left( \frac{A_L - A_k}{A_L} \right)^{K-p}, \qquad (1) $$

where C(K, p) represents the number of combinations of K choose p. In addition, we define the probability that a PU is active, r, as

$$ r = \frac{E[\text{ON duration}]}{E[\text{ON duration}] + E[\text{OFF duration}]}, \qquad (2) $$

where E[·] represents the expectation of the random variable. Therefore, given that there are p PUs within A_k, the probability that b PUs are active is

$$ \Pr(b \mid p) = \binom{p}{b} r^{b} (1 - r)^{p-b}. \qquad (3) $$

Fig. 8. The single-hop broadcast scenario.

Furthermore, given that there are p PUs and b active PUs within A_k, the probability that there are c available channels is denoted by Pr(c | p, b). Since the number of available channels is only related to the number of active PUs, c is independent of p. In addition, since an active PU randomly selects a channel from the M channels in the band, Pr(c | p, b) is equivalent to the probability that there are exactly c empty boxes given that b balls are randomly put into a total of M boxes, where a box can hold more than one ball (because we do not limit a channel to only one PU). Thus, Pr(c | p, b) can be expressed as

$$ \Pr(c \mid p, b) = \frac{\binom{M}{c} (M - c)! \, S(b, M - c)}{M^{b}}, \qquad c \in [\max(0, M - b), M], \qquad (4) $$

where S(b, M − c) is the Stirling number of the second kind, defined as

$$ S(b, M - c) = \frac{1}{(M - c)!} \sum_{i=0}^{M-c} (-1)^{i} \binom{M - c}{i} (M - c - i)^{b}. \qquad (5) $$

Hence, the probability that there are c available channels and there are p PUs and b active PUs within A_k is the product of (1), (3), and (4). Then, the probability that a channel is available within A_k is obtained from (6):

$$ P_k = \frac{1}{M} \sum_{p=0}^{K} \sum_{b=0}^{p} \sum_{c=\max(0, M-b)}^{M} c \, \binom{M}{c} \frac{(M - c)! \, S(b, M - c)}{M^{b}} \binom{p}{b} r^{b} (1-r)^{p-b} \binom{K}{p} \left( \frac{A_k}{A_L} \right)^{p} \left( \frac{A_L - A_k}{A_L} \right)^{K-p}. \qquad (6) $$
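Equation (6) is straightforward to evaluate numerically; the sketch below implements (1), (3)-(5) directly, with illustrative arguments (the area ratio A_k/A_L is passed in as a single number):

    from math import comb, factorial

    def stirling2(b, n):
        # eq. (5): S(b, n) = (1/n!) * sum_{i=0}^{n} (-1)^i C(n, i) (n - i)^b
        return sum((-1)**i * comb(n, i) * (n - i)**b for i in range(n + 1)) // factorial(n)

    def channel_available_prob(K, M, r, area_ratio):
        # eq. (6): expected fraction of idle channels within an area A_k,
        # where area_ratio = A_k / A_L
        total = 0.0
        for p in range(K + 1):                                 # eq. (1)
            pr_p = comb(K, p) * area_ratio**p * (1 - area_ratio)**(K - p)
            for b in range(p + 1):                             # eq. (3)
                pr_b = comb(p, b) * r**b * (1 - r)**(p - b)
                for c in range(max(0, M - b), M + 1):          # eq. (4)
                    pr_c = comb(M, c) * factorial(M - c) * stirling2(b, M - c) / M**b
                    total += c * pr_c * pr_b * pr_p
        return total / M

    print(channel_available_prob(K=10, M=8, r=0.3, area_ratio=0.05))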
Next, we consider the relationship between the downsized available channel sets of the two SUs. In our derivation, we only consider the scenario where the sender and its receiver have the same w (i.e., $w_s = w_r$). If $w_r > w_s$, the channels after the first $w_s$ channels do not affect the number of common channels, so the derivation process is the same. Fig. 9 shows an example of the channel availability status of two SUs when $w(S_0) = 3$, where a shaded square indicates an idle channel and a white square indicates a busy channel. A square with a cross means that a channel can be either idle or busy. Since each SU only selects the first w available channels to form a downsized available channel set, the availability status of the channels after the first w available channels is not specified. Then, without loss of generality, we denote t and h as the indices of the last available channel in the downsized available channel sets of $S_0$ and $S_i$, respectively. We first assume that $t \leq h$. Hence, from channel 1 to t, there are four possible scenarios for every channel in terms of its availability for the two SUs: 1) the channel is available for both SUs (denoted as $C_1$); 2) the channel is unavailable for both SUs (denoted as $C_2$); 3) the channel is only available for $S_0$ (denoted as $C_3$); and 4) the channel is only available for $S_i$ (denoted as $C_4$). In addition, from channel $t+1$ to h (if $t < h$), there are two possible scenarios: 1) the channel is available for $S_i$ but can be of any status for $S_0$ (denoted as $C_5$); and 2) the channel is unavailable for $S_i$ but can be of any status for $S_0$ (denoted as $C_6$). Based on Fig. 8a, the probabilities of the above six scenarios can be obtained: 1) $P_{C_1} = P_1 P_2 P_3$; 2) $P_{C_2} = (1 - P_3) + (1 - P_1)(1 - P_2) P_3$; 3) $P_{C_3} = P_1 P_3 (1 - P_2)$; 4) $P_{C_4} = (1 - P_1) P_2 P_3$; 5) $P_{C_5} = P_{C_1} + P_{C_4}$; and 6) $P_{C_6} = P_{C_2} + P_{C_3}$.
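For reference, the six scenario probabilities transcribe directly into code. This fragment (method and variable names ours) assumes p1, p2, p3 have already been computed via Eq. (6), and could sit in the same class as the sketch above:

```java
// Scenario probabilities of Section 3.2.1, given per-area availability
// probabilities P1, P2, P3 from Eq. (6).
static double[] scenarioProbabilities(double p1, double p2, double p3) {
    double pC1 = p1 * p2 * p3;                        // available for both SUs
    double pC2 = (1 - p3) + (1 - p1) * (1 - p2) * p3; // unavailable for both
    double pC3 = p1 * p3 * (1 - p2);                  // available only for S0
    double pC4 = (1 - p1) * p2 * p3;                  // available only for Si
    double pC5 = pC1 + pC4;  // available for Si, either status for S0
    double pC6 = pC2 + pC3;  // unavailable for Si, either status for S0
    return new double[] { pC1, pC2, pC3, pC4, pC5, pC6 };
}
```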
Denote $Z(0,i)$ as the number of common channels between $S_0$ and $S_i$ in their downsized available channel sets. In order to obtain $\Pr(Z(0,i) = z)$, we need to consider all the combinations of the channel status for every channel from channel 1 to h. There are two possible cases: 1) $t = h$ and 2) $t < h$. For the first case, channel h is a common channel between the two SUs. In addition, from channel 1 to channel $h-1$, there must be $z-1$ channels in scenario $C_1$, $h - 2w + z$ channels in $C_2$, and $w - z$ channels in each of $C_3$ and $C_4$. Since $t = h$, no channel is in scenario $C_5$ or $C_6$. Thus, the probability that there are z ($z > 0$) common channels in the first case is

$$P'(h) = \binom{h-1}{z-1} \binom{h-z}{w-z} \binom{h-w}{w-z} P_{C_1}^{z} P_{C_2}^{h-2w+z} P_{C_3}^{w-z} P_{C_4}^{w-z}. \qquad (7)$$

For the second case, since $t < h$, the common available channels can only lie between channel 1 and channel t. We denote the number of available channels for $S_i$ from channel 1 to t as x. Thus, from channel 1 to t, similar to the first case, there are z channels in $C_1$, $t - w - x + z$ channels in $C_2$, $w - z$ channels in $C_3$, and $x - z$ channels in $C_4$. In addition, from channel $t+1$ to h, there are $w - x$ channels in $C_5$ and $h - t - w + x$ channels in $C_6$. Therefore, the probability that there are z common channels under this ordering is

$$P''_1(h) = P_{C_1}^{z} P_{C_3}^{w-z} \sum_{t=w}^{h-1} \sum_{x=\max(0,\, w+t-h)}^{t-w} \binom{t-1}{w-1} \binom{w}{z} \binom{t-w}{x-z} \binom{h-t-1}{w-x-1} P_{C_4}^{x-z} P_{C_2}^{t-w-x+z} (P_{C_1} + P_{C_4})^{w-x} (P_{C_2} + P_{C_3})^{h-t-w+x}. \qquad (8)$$

[Fig. 9. An example of the channel availability status when $w(S_0) = 3$.]

If we switch $S_0$ and $S_i$ in Fig. 9, we can obtain the probability for the dual case. Hence, the probability that there are z common channels in the second case is

$$P''(h) = P''_1(h) + P_{C_1}^{z} P_{C_4}^{w-z} \sum_{t=w}^{h-1} \sum_{x=\max(0,\, w+t-h)}^{t-w} \binom{t-1}{w-1} \binom{w}{z} \binom{t-w}{x-z} \binom{h-t-1}{w-x-1} P_{C_3}^{x-z} P_{C_2}^{t-w-x+z} (P_{C_1} + P_{C_3})^{w-x} (P_{C_2} + P_{C_4})^{h-t-w+x}. \qquad (9)$$

Therefore, the probability that there are z common channels among the first w available channels of each SU is

$$\Pr(Z(0,i) = z) = \sum_{h=2w-z}^{M} \left[ P'(h) + P''(h) \right]. \qquad (10)$$

Thus, the probability of a successful single-hop broadcast from $S_0$ to $S_i$ is

$$P_{succ}(w) = 1 - \Pr(Z(0,i) = 0). \qquad (11)$$
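Rather than evaluating the closed forms (7)-(10), a Monte Carlo sketch can estimate $P_{succ}(w)$ under the same per-channel independence assumption used in the derivation, and then search for the smallest w meeting the target. The scenario probabilities in main are placeholder values of our own, not results from the paper:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SuccessProbability {
    static final Random RNG = new Random(1);

    // Estimate Psucc(w): the probability that the downsized available channel
    // sets (first w available channels) of S0 and Si share a channel. Each of
    // the M channels independently falls into scenario C1..C4 with
    // probabilities pC1..pC4, where pC4 = 1 - pC1 - pC2 - pC3.
    static double psucc(int M, int w, double pC1, double pC2, double pC3,
                        int trials) {
        int success = 0;
        for (int t = 0; t < trials; t++) {
            List<Integer> s0 = new ArrayList<>(), si = new ArrayList<>();
            for (int ch = 1; ch <= M; ch++) {
                double u = RNG.nextDouble();
                boolean availS0 = u < pC1                                  // C1
                        || (u >= pC1 + pC2 && u < pC1 + pC2 + pC3);        // C3
                boolean availSi = u < pC1 || u >= pC1 + pC2 + pC3;         // C1 or C4
                if (availS0 && s0.size() < w) s0.add(ch);
                if (availSi && si.size() < w) si.add(ch);
            }
            for (int c : s0) if (si.contains(c)) { success++; break; }
        }
        return (double) success / trials;
    }

    public static void main(String[] args) {
        int M = 20;
        double eps = 0.001;                        // pre-defined epsilon
        double pC1 = 0.40, pC2 = 0.30, pC3 = 0.15; // placeholder values
        for (int w = 1; w <= M; w++) {
            // More trials tighten the estimate near the 1 - eps threshold.
            double ps = psucc(M, w, pC1, pC2, pC3, 200_000);
            if (ps >= 1 - eps) {                   // Psucc(w) >= 1 - eps
                System.out.printf("smallest w = %d (Psucc ~ %.4f)%n", w, ps);
                return;
            }
        }
        System.out.println("no w <= M reaches the target");
    }
}
```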
Fig. 10a shows the analytical and simulation results of $P_{succ}(w)$ in the single-pair scenario under various w and different M. To obtain these results, the number of PUs is K = 40 and the probability that a PU is active is r = 0.9. In addition, the side length of the network area is a = 10 (unit length), and the two neighboring SUs are at the border of each other's sensing range, where $r_s = 2$ (unit length). As shown in Fig. 10a, the simulation results match extremely well with the analytical results.

3.2.2 The Multi-Pair Scenario

We extend the above results to a multi-pair scenario, as shown in Fig. 8b, where $S_i$ and $S_j$ are two neighbors of $S_0$. Based on the inclusion-exclusion principle, the probability of a successful broadcast in the multi-pair scenario shown in Fig. 8b is

$$P_{succ}(w) = 1 - \Pr(Z(0,i) = 0) - \Pr(Z(0,j) = 0) + \Pr(Z(0,i,j) = 0), \qquad (12)$$

where $\Pr(Z(0,i,j) = 0)$ is the probability that neither $S_i$ nor $S_j$ has any common channel with $S_0$ in the downsized available channel sets. Since the other two terms in (12) (i.e., $\Pr(Z(0,i) = 0)$ and $\Pr(Z(0,j) = 0)$) can be obtained from (10), we only need to calculate $\Pr(Z(0,i,j) = 0)$. To calculate it, we use the same idea as in the single-pair scenario. That is, we consider $S_i$ and $S_j$ together as one new neighboring node. The sensing range of the new neighboring node is the union of the sensing ranges of the two original nodes (i.e., the shaded area in Fig. 8b). Therefore, we can obtain new $P_1$, $P_2$, and $P_3$ for the multi-pair scenario based on the new size of the sensing range. Moreover, the probabilities of every channel-status scenario can also be obtained accordingly. Therefore, by using (7)-(10), we can calculate $\Pr(Z(0,i,j) = 0)$. Then, given the locations of the H neighbors, each SU can get the probability of a successful single-hop broadcast by performing the same procedure iteratively H times. Finally, by letting $P_{succ}(w) \geq 1 - \epsilon$, a proper w can be acquired for $S_0$. Fig. 10b shows the analytical and simulation results of $P_{succ}(w)$ in the two-pair scenario under various w and different M; the simulation results match very well with the analytical results.

[Fig. 10. Analytical and simulation results of $P_{succ}(w)$ under various w and different M.]
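Once the three zero-overlap probabilities are available, Eq. (12) is a direct inclusion-exclusion. A one-method fragment (names ours):

```java
// Eq. (12): probability of a successful single-hop broadcast to both
// neighbors Si and Sj. prZ0i and prZ0j come from Eq. (10); prZ0ij is the
// joint term computed with the enlarged (union) sensing area.
static double psuccTwoPair(double prZ0i, double prZ0j, double prZ0ij) {
    return 1.0 - prZ0i - prZ0j + prZ0ij;
}
```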
4 DISCUSSION ON THE PROPOSED BRACER PROTOCOL

It is noted that our proposed BRACER protocol is particularly designed for broadcast scenarios in multi-hop CR ad hoc networks without a common control channel. As described in Sections 1 and 2, there are two implementation issues that are essential to the performance of our proposed distributed broadcast protocol: 1) the two-hop location information; and 2) the time synchronization. In this section, we provide a further discussion on these two issues.

4.1 Two-Hop Location Information

From Section 2, in our proposed BRACER protocol, every SU node needs the location information of its two-hop neighboring nodes in order to calculate the size of the downsized available channel sets of its one-hop neighboring nodes. Even though the localization issue for CR ad hoc networks is out of the scope of this paper, we hereby introduce several solutions for obtaining the two-hop location information. Generally speaking, the location information for a traditional ad hoc network can be obtained either from external positioning techniques (e.g., the Global Positioning System (GPS) [33]) or from localization algorithms that do not require external positioning techniques [34], [35]. Hence, GPS is an option for obtaining the location information of the two-hop neighboring nodes in CR ad hoc networks. However, GPS requires additional hardware and consumes extra energy, which may not be efficient in CR ad hoc networks where cost and power constraints often apply.

On the other hand, a number of localization algorithms that do not rely on GPS have been proposed for CR ad hoc networks [36], [37]. In these works, the legacy localization algorithms proposed for traditional ad hoc networks, such as time-of-arrival (TOA)-based, angle-of-arrival (AOA)-based, and received-signal-strength (RSS)-based methods, are improved and adopted in CR ad hoc networks. These localization algorithms often require assistance from certain special nodes with known location information (named reference nodes). However, all these algorithms ignore the control message exchange issue between the reference nodes and the regular nodes in CR ad hoc networks. The control message exchange issue is either not considered or is simplified by using a common control channel. Based on Section 1, transmitting messages on a global common channel without any additional control information is not feasible in CR ad hoc networks. Therefore, in order to receive the control message containing the location information from the reference nodes, a communication mechanism that does not rely on any other control information (i.e., under blind information) between the reference nodes and the regular nodes is needed. As mentioned before, a QoS-based broadcast protocol under blind information is proposed in [18]. We can use this scheme as the communication scheme between the reference nodes and the regular nodes to obtain the two-hop location information. Since the broadcast protocol proposed in [18] can only support QoS provisioning, the successful broadcast ratio and average broadcast delay of this scheme for the whole network are not optimized. Therefore, this scheme is suitable for use in the early stage of a broadcast procedure. After every node in the network acquires the two-hop location information, the proposed BRACER protocol can be executed.

4.2 Time Synchronization
From Section 1, an advantage of our proposed BRACER protocol is that it does not require tight time synchronization. This advantage is essential since tight time synchronization is extremely difficult to achieve in a real ad hoc network system. In this paper, we define tight time synchronization as the scenario where the time slots of different nodes are precisely aligned. Thus, the proposed BRACER protocol should guarantee the successful reception of a whole broadcast message even if the time slots of the sender and the receiver have an offset. Denote the length of the offset as d; without loss of generality, d is less than a time slot. Based on Theorem 1, in order to guarantee a successful single-hop broadcast, $w_s$ must be smaller than or equal to $w_r$. Thus, we consider the time synchronization issue under the following two scenarios.

4.2.1 Scenario I: $w_s$ is strictly smaller than $w_r$

If $w_s < w_r$ and the sender and the receiver have at least one common channel between their downsized available channel sets, we have the following theorem:

Theorem 2. If $w_s < w_r$, the single-hop broadcast is a guaranteed success within $w_r^2$ time slots even if the time slots of the sender and the receiver have an offset.

Proof. Similar to the proof of Theorem 1, if $w_s < w_r$, during the $w_r$ consecutive time slots for which the receiver stays on the same channel, every channel of the sender must appear at least once. More importantly, since d is less than a time slot, at least one whole time slot of the common channel between the sender and the receiver must be completely covered by the $w_r$ consecutive time slots on the common channel. That is, the receiver can hear a whole time slot of the common channel when the sender broadcasts the message. Thus, a successful single-hop broadcast is guaranteed. □

Fig. 11 shows an example of Scenario I where $w_s < w_r$. We assume that the time slots of the sender are ahead of those of the receiver with an offset of d. As illustrated in Fig. 11, on the 9th slot of the sender's broadcasting sequence, the sender and the receiver are on the same channel (i.e., channel 2). In addition, this time slot is completely covered by the three consecutive time slots during which the receiver is on channel 2. Hence, the broadcast message can be successfully received by the receiver.

[Fig. 11. An example of Scenario I when time slots are unsynchronized.]

4.2.2 Scenario II: $w_s$ is equal to $w_r$

If $w_s = w_r$, there are two sub-cases: 1) Case 1: a time slot of the common channel is completely covered by the $w_r$ consecutive time slots of the receiver on the same channel; and 2) Case 2: a time slot of the common channel is only partially covered by the $w_r$ consecutive time slots of the receiver on the same channel. Fig. 12 shows an example of Case 1 in Scenario II. Similar to Scenario I, the broadcast message can still be successfully received even if an offset exists.

[Fig. 12. An example of Case 1 in Scenario II when time slots are unsynchronized.]

On the other hand, Fig. 13 shows an example of Case 2 in Scenario II. This case occurs when the time slot of the common channel of the sender is partially covered by the first and the last time slots of the $w_r$ consecutive time slots of the receiver. From communication theory, if a node only receives part of a packet, it cannot decode the packet correctly and will drop it at the physical (PHY) layer. Thus, even if the sender and the receiver have a common channel, the receiver cannot successfully receive the broadcast message within $w_r^2$ time slots in Case 2.

[Fig. 13. An example of Case 2 in Scenario II when time slots are unsynchronized.]

We provide two simple modifications of our proposed BRACER protocol for this case. The first is that the receiver always shifts the whole cycle of its broadcasting sequence one slot forward or one slot backward after it hops for one cycle (i.e., $w_r^2$ time slots) without receiving the broadcast message. At the same time, the total number of time slots during which the sender broadcasts needs to be longer than three cycles of the receiver's broadcasting sequence; that is, the sender broadcasts the message following its broadcasting sequence for $\lfloor 3M^2 / w_s^2 \rfloor + 1$ cycles. In this way, Case 2 becomes Case 1. Then, even if the receiver does not receive the message within one cycle, it can still successfully receive the message in the following cycle, as shown in Fig. 14.

[Fig. 14. An example of the first modification for Case 2 in Scenario II when time slots are unsynchronized.]

On the other hand, the second modification is that the receiver v selects $w_r(v)$ to be $\max\{w(u) \mid u \in N(v)\} + 1$, where $N(v)$ is the set of neighboring nodes of the receiver v. Therefore, the $w_r$ of the receiver is always larger than the $w_s$ used by the sender. In this way, Case 2 becomes Scenario I. Based on Theorem 2, a successful broadcast is guaranteed within $w_r^2$ time slots, as shown in Fig. 15.

[Fig. 15. An example of the second modification for Case 2 in Scenario II when time slots are unsynchronized.]
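The second modification reduces to a one-line rule. A Java sketch, assuming each node already knows its neighbors' calculated w values from the two-hop location information (the map-based representation is our own):

```java
import java.util.List;
import java.util.Map;

public class ReceiverWSelection {
    // Second modification for Case 2 (Section 4.2.2): the receiver v selects
    // w_r(v) = max{w(u) : u in N(v)} + 1, so w_r is strictly larger than any
    // sender's w_s and Case 2 turns into Scenario I; Theorem 2 then
    // guarantees reception within w_r^2 time slots.
    static int receiverW(String v, Map<String, Integer> w,
                         Map<String, List<String>> neighbors) {
        int max = 0;
        for (String u : neighbors.get(v)) {
            max = Math.max(max, w.get(u));
        }
        return max + 1;
    }
}
```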
To sum up, from the above analysis, our proposed BRACER protocol can be used in an environment where tight time synchronization is not required.

5 PERFORMANCE EVALUATION

In this section, we evaluate the performance of the proposed broadcast protocol. We consider two types of PU traffic models in the simulation [38]. The first PU traffic model is discrete-time, where the PU packet inter-arrival time follows the biased-geometric distribution [39]. The second PU traffic model is continuous-time, where the PU packet inter-arrival time follows the Pareto distribution [39]. We assume that the probability that a PU is active is fixed (i.e., r = 0.9). In addition, the side length of the network area is a = 10 (unit length). We assume that the radii of the sensing range and the transmission range are the same (i.e., $r_s = r_c = 2$ (unit length)). In this paper, we mainly investigate the following two performance metrics: 1) successful broadcast ratio: the probability that all nodes in a network successfully receive the broadcast message; and 2) average broadcast delay: the average duration from the moment a broadcast starts to the moment the last node receives the broadcast message. In addition, we compare our proposed broadcast protocol with five other schemes: 1) Random+Flooding: each SU randomly selects a channel to hop on and uses flooding (i.e., a SU is obligated to rebroadcast once it receives the message); 2) Sequence+Flooding (1/3 of our design): each SU downsizes its available channel set, constructs broadcasting sequences based on our scheme, and uses flooding; 3) Sequence+Schedule (2/3 of our design): each SU constructs broadcasting sequences based on our scheme and uses our broadcast scheduling scheme; 4) Basic QoS Scheme: each SU uses the basic scheme of the QoS-based broadcast protocol [18] to broadcast; and 5) JS+Flooding: each SU uses the jump-stay scheme [26] to construct the broadcasting sequences and uses flooding.

5.1 Successful Broadcast Ratio
Since the single-hop successful broadcast ratio depends on w, which is related to the pre-defined value $\epsilon$, we define $\epsilon = 0.001$. In fact, $\epsilon$ can be an arbitrarily small value. Thus, based on Section 3, each SU calculates a proper w before the broadcast starts in our scheme, the Sequence+Flooding scheme, and the Sequence+Schedule scheme. Tables 2 and 3 show the simulation results of the successful broadcast ratio under different numbers of SUs and PUs, where the upper value in each cell pair is for the discrete-time PU traffic and the lower value is for the continuous-time PU traffic. In Table 2, M = 20 and K = 40. In Table 3, M = 20 and N = 20.

TABLE 2: Successful Broadcast Ratio under Different Numbers of SUs (upper: discrete-time; lower: continuous-time PU traffic)

Scheme            | N = 5  | N = 10 | N = 15 | N = 20 | N = 25
Random+Flooding   | 0.8801 | 0.8180 | 0.8100 | 0.8726 | 0.8821
                  | 0.8630 | 0.9148 | 0.9075 | 0.8698 | 0.8708
Sequence+Flooding | 0.9849 | 0.9839 | 0.9828 | 0.9823 | 0.9863
                  | 0.9762 | 0.9769 | 0.9777 | 0.9773 | 0.9719
Sequence+Schedule | 0.9859 | 0.9864 | 0.9823 | 0.9857 | 0.9855
                  | 0.9812 | 0.9845 | 0.9849 | 0.9876 | 0.9861
Basic QoS Scheme  | 0.8915 | 0.9022 | 0.8543 | 0.9314 | 0.9317
                  | 0.8739 | 0.8386 | 0.8952 | 0.8498 | 0.8624
Proposed Scheme   | 0.9991 | 0.9973 | 0.9969 | 0.9982 | 0.9909
                  | 0.9994 | 0.9959 | 0.9954 | 0.9967 | 0.9951

TABLE 3: Successful Broadcast Ratio under Different Numbers of PUs (upper: discrete-time; lower: continuous-time PU traffic)

Scheme            | K = 20 | K = 30 | K = 40 | K = 50 | K = 60
Random+Flooding   | 0.8189 | 0.8326 | 0.8842 | 0.9208 | 0.8907
                  | 0.7980 | 0.8738 | 0.9191 | 0.9139 | 0.8849
Sequence+Flooding | 0.9866 | 0.9863 | 0.9823 | 0.9819 | 0.9871
                  | 0.9742 | 0.9765 | 0.9773 | 0.9711 | 0.9797
Sequence+Schedule | 0.9868 | 0.9872 | 0.9857 | 0.9881 | 0.9872
                  | 0.9874 | 0.9885 | 0.9876 | 0.9833 | 0.9850
Basic QoS Scheme  | 0.9502 | 0.9167 | 0.9314 | 0.8222 | 0.7884
                  | 0.8950 | 0.8921 | 0.8498 | 0.8792 | 0.8463
Proposed Scheme   | 0.9978 | 0.9976 | 0.9982 | 0.9951 | 0.9921
                  | 0.9946 | 0.9941 | 0.9967 | 0.9977 | 0.9969

As shown in Tables 2 and 3, the successful broadcast ratio is higher than 99 percent under our proposed broadcast protocol in all scenarios. In addition, the proposed broadcast protocol outperforms the other schemes in terms of higher successful broadcast ratio. Since the jump-stay scheme requires that the i-th available channel in the available channel set also be channel i, it cannot utilize the technique in our scheme to downsize the original available channel set. In addition, the jump-stay scheme can guarantee rendezvous within 6MP(P-G) time slots, where P is the smallest prime number larger than M and G is the number of common channels between two SUs. Thus, in order to ensure a successful broadcast, each SU broadcasts the message for 6MP(P-G) slots. However, 6MP(P-G) is usually a very large number when M is large. Hence, to better illustrate the trade-off between the successful broadcast ratio and the broadcast delay, we compare our scheme with JS+Flooding in Section 5.2.
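To give a sense of scale for this bound, a small Java sketch (helper names and example values are ours):

```java
public class JumpStayBound {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++) if (n % d == 0) return false;
        return true;
    }

    // Guaranteed-rendezvous bound of the jump-stay scheme [26]:
    // 6 * M * P * (P - G) slots, with P the smallest prime larger than M
    // and G the number of common channels between two SUs.
    static long bound(int M, int G) {
        int P = M + 1;
        while (!isPrime(P)) P++;
        return 6L * M * P * (P - G);
    }

    public static void main(String[] args) {
        // e.g., M = 20, G = 1: P = 23, so 6*20*23*22 = 60,720 slots.
        System.out.println(bound(20, 1));
    }
}
```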
5.2 Average Broadcast Delay

Tables 4 and 5 show the simulation results of the average broadcast delay under different numbers of SUs and PUs. As with the successful broadcast ratio, in Table 4, M = 20 and K = 40; in Table 5, M = 20 and N = 20.

TABLE 4: Average Broadcast Delay under Different Numbers of SUs (unit: slots; upper: discrete-time; lower: continuous-time PU traffic)

Scheme            | N = 5  | N = 10 | N = 15 | N = 20 | N = 25
Random+Flooding   | 19.781 | 26.483 | 28.003 | 29.252 | 31.203
                  | 20.981 | 23.765 | 27.686 | 33.153 | 32.883
Sequence+Flooding |  8.458 | 11.168 | 12.744 | 14.243 | 15.909
                  |  7.712 | 11.799 | 12.903 | 14.534 | 17.257
Sequence+Schedule |  7.811 | 10.995 | 13.324 | 13.896 | 15.823
                  |  7.155 | 11.457 | 13.553 | 14.551 | 15.078
Basic QoS Scheme  | 15.576 | 19.642 | 26.447 | 22.745 | 24.599
                  | 16.093 | 23.164 | 21.698 | 26.834 | 32.078
Proposed Scheme   |  7.066 | 10.532 | 12.259 | 13.353 | 15.198
                  |  6.545 | 11.097 | 12.786 | 13.639 | 14.801

TABLE 5: Average Broadcast Delay under Different Numbers of PUs (unit: slots; upper: discrete-time; lower: continuous-time PU traffic)

Scheme            | K = 20 | K = 30 | K = 40 | K = 50 | K = 60
Random+Flooding   | 29.189 | 31.459 | 25.737 | 25.361 | 24.243
                  | 34.547 | 30.629 | 27.617 | 28.424 | 26.399
Sequence+Flooding | 13.918 | 14.886 | 14.243 | 14.649 | 14.259
                  | 14.413 | 13.958 | 14.534 | 14.867 | 14.389
Sequence+Schedule | 12.747 | 14.206 | 13.896 | 14.361 | 14.014
                  | 13.652 | 14.086 | 14.551 | 14.521 | 14.237
Basic QoS Scheme  | 25.148 | 25.187 | 22.745 | 27.182 | 28.533
                  | 29.111 | 24.931 | 26.834 | 24.639 | 24.907
Proposed Scheme   | 12.322 | 13.555 | 13.352 | 14.279 | 13.597
                  | 13.249 | 13.401 | 13.639 | 13.335 | 13.471

As shown in Tables 4 and 5, the proposed broadcast protocol outperforms the other schemes in terms of shorter average broadcast delay. Furthermore, Figs. 16 and 17 show the successful broadcast ratio and the average broadcast delay under different numbers of channels when N = 10 and K = 40. As explained in Section 1, besides our proposed scheme, we also compare with JS+Flooding and with our scheme without downsizing the available channel set (i.e., w = M). It is shown that even though the successful broadcast ratio is similar, the broadcast delay under JS+Flooding is much longer than under our proposed scheme.

[Fig. 16. Successful broadcast ratio under different numbers of channels.]
[Fig. 17. Average broadcast delay under different numbers of channels.]

To sum up, our proposed broadcast protocol outperforms Random+Flooding in terms of higher successful broadcast ratio and shorter broadcast delay. It also outperforms JS+Flooding in terms of shorter broadcast delay. In addition, even with the tradeoff in our proposed broadcast collision avoidance scheme as explained in Section 2.3, and with limited overhead, our proposed scheme and the schemes that use part of our design (e.g., Sequence+Flooding) can still achieve better performance than Random+Flooding for both metrics and than JS+Flooding for the broadcast delay.

5.3 The Impact of Unsynchronized Time Slots
From the discussion in Section 4.2, our proposed BRACER protocol has the advantage that tight time synchronization is not required. Accordingly, we provide two modifications of our proposed protocol for when time slots are unsynchronized. In this section, we evaluate the impact of unsynchronized time slots on the performance of the proposed BRACER protocol.

Figs. 18 and 19 show the single-hop successful broadcast ratio and the average broadcast delay under different scenarios. In the first modification, we let $w_s = w_r = w$, whereas in the second modification, we let $w_s = w$ and $w_r = w + 1$. It is shown that unsynchronized scenarios usually lead to a lower successful broadcast ratio and a longer average broadcast delay than the synchronized scenario. However, with the modifications of our proposed protocol, the low successful broadcast ratio can be significantly improved. From the figures, we can see that the second modification outperforms the first in terms of higher successful broadcast ratio; however, it also results in a longer average broadcast delay than the first modification.

[Fig. 18. The impact of unsynchronized time slots on the single-hop successful broadcast ratio.]
[Fig. 19. The impact of unsynchronized time slots on the single-hop average broadcast delay.]

Furthermore, when w > 5, the performance of the two modifications is very close to that of the unsynchronized scenario without modification. This is because when w is large enough, more than one common channel exists between the sender and the receiver. Thus, there is at least one time slot on a common channel that is completely covered by the $w_r$ consecutive time slots. Hence, the receiver can successfully receive the message without any modification.

Figs. 20 and 21 show the multi-hop successful broadcast ratio and average broadcast delay under different scenarios. It is illustrated in Fig. 20 that when the number of SUs is small (e.g., N < 20), the synchronized scenario outperforms all the unsynchronized scenarios in terms of higher successful broadcast ratio. This is because when N is small, each SU usually selects a small w for broadcasting. Thus, from Fig. 18, the successful broadcast ratio of the unsynchronized scenarios is lower than that of the synchronized scenario. However, when N is large (e.g., N > 20), the unsynchronized scenarios with both modifications outperform the synchronized scenario in terms of higher successful broadcast ratio. This is because when N is large, a receiver often has more than one sender, and these senders broadcast the message on different channels to the receiver. Thus, the impact of unsynchronized time slots is diminished.

[Fig. 20. The impact of unsynchronized time slots on the multi-hop successful broadcast ratio.]
[Fig. 21. The impact of unsynchronized time slots on the multi-hop average broadcast delay.]

5.4 Broadcast Collision Analysis
broadcastcollisions for our proposed BRACER protocol. Sincebroadcast collisions
usually lead to the waste of networkresources, they should be efficiently
avoided to save networkresources. In this paper, we use the average numberof
broadcast collisions in a broadcast procedure per SUnode as the performance
metric.Fig. 22 shows the average number of broadcast collisionsunder different
numbers of channels. It is illustratedthat the Proposed Scheme outperforms the SequenceþFloodingand SequenceþSchedule schemes in terms of fewer
broadcastcollisions on average. This means that the broadcast collisionavoidance
scheme in the Proposed Scheme
can effectivelyavoid broadcast
collisions. In addition, the ProposedScheme
also incurs fewer broadcast
collisions than the RandomþFloodingscheme when M _ 20. That is, the RandomþFloodingscheme performs better than the Proposed Schemeonly when M is
very large. This is because that in the RandomþFloodingscheme, each sender randomly
selects anavailable channel in the band to broadcast. If the numberof channels
is large, the probability that two senders selectthe same channel is fairly
low. However, when M is small,the RandomþFlooding
scheme leads to the highest numberof
broadcast collisions among the four schemes (e.g.,M ¼ 5). Even though the RandomþFlooding scheme causesthe fewest broadcast
collisions when M is large, the successfulbroadcast
ratio and average broadcast delay of theRandomþFlooding scheme are not acceptable, as
shown inTables 2, 3, 4, and 5. Additionally, the SequenceþSchedulescheme performs better than the SequenceþFlooding
scheme,as shown in Fig. 22. This
means that our proposedFig.
19. The impact of unsynchronized time slots on the single-hop averagebroadcast
delay.Fig. 20. The impact of unsynchronized time slots on the multi-hop
successfulbroadcast ratio.Fig. 21. The impact of unsynchronized time slots on
the multi-hop averagebroadcast delay.Fig. 22. Average number of broadcast
collisions under different numbersof chennels when N ¼ 10.SONG
AND XIE: BRACER: A DISTRIBUTED BROADCAST PROTOCOL IN MULTI-HOP COGNITIVE RADIO
AD HOC NETWORKS… 521distributed
broadcast scheduling scheme also contributesto the collision avoidance.5.5 Overhead AnalysisOverhead is an important metric to
Overhead is an important metric to evaluate the efficiency of a broadcast protocol. To evaluate the impact of overhead, we use normalized overhead as the performance metric [40], [41]. Normalized overhead is defined as the ratio of the total broadcast packets (in bits) propagated by every node in the network to the total broadcast packets (in bits) received by the receivers [40], [41].

We denote the length of the original broadcast packet as $L_b$. Based on Section 2.2, extra information needs to be added to the original broadcast packet in order to realize the proposed BRACER protocol. The extra information in a broadcast packet mainly consists of three parts. First of all, as mentioned in Section 2.2, the sender should include the calculated initial w of its one-hop neighbors in the broadcast message. Second, as described in Section 2.3, the sender should include its own channel availability information and the starting time slot of its broadcasting sequence in the message. Third, the sender should include random integers for the intermediate nodes that need to rebroadcast to the same node. Thus, if we define the lengths of the initial w, the starting time slot, and the random integer as 8 bits each, the length of the total extra information in a broadcast packet (in bits) for a node is

$$Q = 8N_a + M + 8 + 8N_b, \qquad (13)$$

where $N_a$ is the number of one-hop neighbors of the node and $N_b$ is the number of intermediate nodes that need to rebroadcast to the same node. Therefore, the total length of a broadcast packet of the proposed BRACER protocol is $L_b + Q$.
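Eq. (13) and the resulting packet length translate directly into code; the example values in main are our own:

```java
public class BracerOverhead {
    // Eq. (13): extra control bits in a BRACER broadcast packet.
    // Na: one-hop neighbors (8 bits of initial w each); M: channel
    // availability information; 8: starting time slot; Nb: intermediate
    // nodes rebroadcasting to the same node (8-bit random integer each).
    static int extraBits(int Na, int M, int Nb) {
        return 8 * Na + M + 8 + 8 * Nb;
    }

    static int totalPacketBits(int Lb, int Na, int M, int Nb) {
        return Lb + extraBits(Na, M, Nb); // L_b + Q
    }

    public static void main(String[] args) {
        // e.g., Lb = 192 bits (an AODV RREQ), Na = 4, M = 20, Nb = 2:
        // Q = 32 + 20 + 8 + 16 = 76 bits, so the packet is 268 bits.
        System.out.println(totalPacketBits(192, 4, 20, 2));
    }
}
```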
Fig. 23 shows the normalized overhead under different lengths of the original broadcast packet. We set the range of the original broadcast packet length to [50, 500] bits. Since broadcast packets are control packets, which are often very short, they mainly fall within this range. In addition, we compare our proposed scheme with the Sequence+Flooding and Sequence+Schedule schemes. The Random+Flooding scheme does not require the two-hop location information, so we exclude it for fair comparison. From Section 2, the lengths of the extra information in a broadcast packet for the Sequence+Flooding and Sequence+Schedule schemes are Q = 0 and Q = 8N_a, respectively. Thus, the Proposed Scheme has the longest broadcast packets among the three schemes. Even though the Proposed Scheme carries the longest extra information in a packet, it outperforms the other two schemes in terms of lower normalized overhead, as shown in Fig. 23. The Proposed Scheme can achieve up to 106 percent and 12.5 percent less normalized overhead than the Sequence+Flooding and Sequence+Schedule schemes, respectively.

Fig. 24 shows the normalized overhead under different numbers of SUs. We use the AODV route request (RREQ) packet as a typical original broadcast packet (i.e., $L_b = 192$ bits) [42]. From Fig. 24, it is shown that the proposed BRACER broadcast protocol outperforms the other two schemes in terms of lower normalized overhead under various numbers of SUs. More importantly, when the number of SUs increases by 400 percent, the normalized overhead of the Proposed Scheme only increases by 115 percent. Thus, the scalability of the proposed BRACER protocol is satisfactory.

[Fig. 23. Normalized overhead under different lengths of the original broadcast packet.]
[Fig. 24. Normalized overhead under different numbers of SUs when $L_b = 192$ bits.]

6 CONCLUSION

In this paper, the broadcasting challenges specific to multi-hop CR ad hoc networks under practical scenarios with collision avoidance have been addressed for the first time. A fully-distributed broadcast protocol named BRACER is proposed that does not rely on the existence of a global or local common control channel. By intelligently downsizing the original available channel set and designing the broadcasting sequences and broadcast scheduling schemes, our proposed broadcast protocol can provide a very high successful broadcast ratio while achieving very short broadcast delay. In addition, it can also avoid broadcast collisions. Simulation results show that our proposed BRACER protocol outperforms other possible broadcast schemes in terms of higher successful broadcast ratio and shorter average broadcast delay.

ACKNOWLEDGMENTS

This work was supported in part by the US National Science Foundation (NSF) under Grants No. CNS-0953644, CNS-1218751, and CNS-1343355. The authors would like to thank the anonymous reviewers for their constructive comments, which greatly improved the quality of this work.
REFERENCES (excerpt)

[41] … networks," IEEE Pers. Commun., vol. 8, no. 1, pp. 16-28, Feb. 2001.

[42] C. E. Perkins, E. M. Belding-Royer, and S. Das, "Ad hoc on-demand distance vector (AODV) routing," Request for Comments (RFC) 3561, Internet Eng. Task Force (IETF), Jul. 2003.

Yi Song received the BS degree in electrical engineering from Wuhan University, Wuhan, China, in 2006, the ME degree in electrical engineering from Tongji University, Shanghai, China, in 2008, and the PhD degree in electrical engineering from the University of North Carolina at Charlotte, Charlotte, in 2013. He joined the Department of Electrical Engineering and Computer Science, Wichita State University, as an assistant professor in August 2013. He received the Kansas National Science Foundation EPSCoR First Award in 2014. His research interests include protocol design, modeling, and analysis of spectrum management and spectrum mobility in cognitive radio networks.

Jiang Xie received the BE degree from Tsinghua University, Beijing, China, in 1997, the MPhil degree from the Hong Kong University of Science and Technology in 1999, and the MS and PhD degrees from the Georgia Institute of Technology, in 2002 and 2004, respectively, all in electrical and computer engineering. She joined the Department of Electrical and Computer Engineering at the University of North Carolina at Charlotte (UNC-Charlotte) as an assistant professor in August 2004, where she is currently an associate professor. Her current research interests include resource and mobility management in wireless networks, QoS provisioning, and the next-generation Internet. She is on the Editorial Boards of the IEEE Transactions on Mobile Computing, IEEE Communications Surveys and Tutorials, Computer Networks (Elsevier), Journal of Network and Computer Applications (Elsevier), and the Journal of Communications (ETPub). She received the US National Science Foundation (NSF) Faculty Early Career Development (CAREER) Award in 2010, a Best Paper Award from the IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2010), and a Graduate Teaching Excellence Award from the College of Engineering at UNC-Charlotte in 2007. She is a senior member of the IEEE and the ACM.
Statistical Dissemination Control in Large Machine-to-Machine Communication Networks
Cloud-based machine-to-machine (M2M) communications have emerged to achieve ubiquitous and autonomous data transportation for future daily life in the cyber-physical world. In light of the need for network characterization, we analyze the connected M2M network in the machine swarm under a geometric random graph topology, including its degree distribution, network diameter, and average distance (i.e., hops). Because end-to-end information cannot be maintained at each machine without catastrophic complexity, information dissemination appears to be an effective transportation method in the machine swarm. To fully understand practical data transportation, a G/G/1 queuing network model is exploited to obtain the average end-to-end delay and the maximum achievable system throughput.
Furthermore, as real applications may require dependable networking performance across the swarm, quality of service (QoS) requirements combined with a large network diameter create a new intellectual challenge. We extend the concept of the small-world network to form shortcuts among data aggregators in an infrastructure-swarm two-tier heterogeneous network architecture, and then leverage the statistical concept of network control, instead of precise network optimization, to innovatively achieve QoS guarantees. Simulation results further confirm that the proposed heterogeneous network architecture effectively controls delay guarantees in a statistical way and facilitates a new design paradigm for reliable M2M communications.
1.2 INTRODUCTION:
Cloud-based machine-to-machine (M2M) communications have emerged to enable services through interaction between the cyber and physical worlds, achieving ubiquitous and autonomous data transportation among objects and the surrounding environment in our daily lives. A wireless network involving so many machines that end-to-end information cannot be made available at each machine is referred to as a large M2M network, and such networks are gaining importance in next-generation wireless systems. While these numerous machines have short-range communication capabilities, multi-hop networking is a must for information dissemination over the machine swarm. Connectivity and low delivery latency in the machine swarm are consequently crucial to achieving reliable communications.
However, a complete understanding of large-network characteristics is lacking, and effective traffic control for message delivery remains open; a proper routing control scheme with a quality-of-service (QoS) guarantee on end-to-end delay therefore becomes an urgent need to practically facilitate M2M communications. This is even more challenging due to the scalability requirements of multi-hop ad hoc networks and the energy-efficient and spectrally efficient operation required of each machine. To investigate routing mechanisms for large-scale networks, network topology analysis can be scientifically exploited: random network analysis provides a comprehensive study of network structure and functions from the complex-networks perspective. Aiming at social communities mediated by network technologies, the literature reviews the historical research on community analysis and community discovery methods in social media.
The literature also develops unbiased sampling of users in an online social network by crawling the social graph, and further examines multiple underlying relations in such networks to introduce random-walk sampling. For research related to social networks, information-centric networking has been proposed, as it brings advantages to the network operator and the end users. Exploring various research challenges in context management, a context management architecture has been presented that is suitable for social networking systems enhanced with pervasive features. Through a survey of current routing solutions, the trend toward social-based routing protocols, classified by the employed network graph, has been discussed.
In addition, employing social network analysis in message delivery remarkably pioneered the methodology of exercising the small-world phenomenon of social networks in navigation, successfully creating transmissions with less delay. The small-world phenomenon plays a crucial role in social networks: each individual in such a network links to others by a short chain of acquaintances, which has great potential for improving spectral and energy efficiency by shortening the end-to-end delay. The literature also presents a thorough examination of the average message delivery time for small-world networks in the continuum limit. Via random network analysis, the properties of the giant component in wireless multi-hop networks have been studied, a heterogeneous structure for such networks has been provided, and throughput and delay analyses have been conducted. Furthermore, the concepts of rumor and gossip routing algorithms are widely employed in sensor networks, disconnected delay-tolerant MANETs, and generalized complex networks, which respectively provide social network analysis for information flow and epidemic information dissemination.
In this paper, inspired by the small-world phenomenon, we connect data aggregators (DAs) to the machine swarm and propose a promising two-tier heterogeneous architecture with the DAs' small-world network for statistical traffic control in large M2M communication networks. To address efficient dissemination control for routing and QoS in applications such as surveillance, we first
analytically supply the condition to establish connected M2M networks and
explore some essential geometric properties (i.e., degree distribution, network
diameter, and average distance) for the networks. Analytic bounds of average
distance characterize the average number of hops that machines’ packets need to
traverse over the swarm, thus dominating the QoS guarantee capability for
reliable communications. Furthermore, through a G/G/1 queuing network model (i.e., one in which both the inter-arrival time and service time distributions of a traffic queue are arbitrary) for traffic modeling, practical data transportation over connected M2M networks is captured. Both the average end-to-end delay and the maximum achievable throughput per machine under information dissemination in machine-swarm multi-hop networking are examined.
1.3 LITERATURE SURVEY
TOWARD UBIQUITOUS MASSIVE ACCESS IN 3GPP MACHINE-TO-MACHINE COMMUNICATIONS
AUTHOR: S. Lien, K. C. Chen, and Y. Lin,
PUBLISH: IEEE Commun. Mag., vol. 49, no. 4, pp. 66–74, Apr. 2011.
EXPLANATION:
To enable full
mechanical automation where each smart device can play multiple roles among
sensor, decision maker, and action executor, it is essential to construct
scrupulous connections among all devices. Machine-to-machine communications
thus emerge to achieve ubiquitous communications among all devices. With the
merit of providing higher-layer connections, scenarios of 3GPP have been
regarded as the promising solution facilitating M2M communications, which is
being standardized as an emphatic application to be supported by LTE-Advanced.
However, distinct features in M2M communications create diverse challenges from
those in human-to-human communications. To deeply understand M2M communications
in 3GPP, in this article, we provide an overview of the network architecture
and features of M2M communications in 3GPP, and identify potential issues on
the air interface, including physical layer transmissions, the random access
procedure, and radio resources allocation supporting the most critical QoS
provisioning. An effective solution is further proposed to provide QoS
guarantees to facilitate M2M applications with inviolable hard timing
constraints.
SMALL-WORLD NETWORKS EMPOWERED LARGE MACHINE-TO-MACHINE COMMUNICATIONS
AUTHOR: L. Gu, S. C. Lin, and K. C. Chen
PUBLISH: IEEE WCNC, 2013, pp. 1–6.
EXPLANATION:
Cloud-based
machine-to-machine communications emerge to facilitate services through linkage
between cyber and physical worlds. In addition to great challenges in a large
network of machine/sensor swarm, effective network architecture involving
interconnection of wireless infrastructure and multi-hop ad hoc networking in
the machine swarm remains open. Inspired by the small-world phenomenon in
social networks, we may establish a short-cut path under heterogeneous network
architecture through wireless infrastructure and cloud, by connecting to data
aggregators or access points in the machine swarm, such that end-to-end delay
can be significantly reduced. Our mathematical analysis on network diameter and
average delay, along with verifications by simulations, demonstrate spectral
and energy efficiency of our proposed heterogeneous network architecture in
large machine-to-machine communication networks.
COGNITIVE MACHINE-TO-MACHINE COMMUNICATIONS: VISIONS AND POTENTIALS FOR THE SMART GRID
AUTHOR: Y. Zhang et al.,
PUBLISH: IEEE Netw., vol. 26, no. 3, pp. 6–13, May/Jun. 2012.
EXPLANATION:
Visual capability
introduced to Wireless Sensor Networks (WSNs) render many novel applications
that would otherwise be infeasible. However, unlike legacy WSNs which are commercially
deployed in applications, visual sensor networks create additional research
problems that delay real-world implementations. Conveying real-time video streams over resource-constrained sensor hardware remains a challenging task. As a remedy, we propose a fairness-based approach to enhance the event
reporting and detection performance of the Video Surveillance Sensor Networks.
Instead of achieving fairness only for flows or for nodes as investigated in
the literature, we concentrate on the whole application requirement.
Accordingly, our Event-Based Fairness (EBF) scheme aims at fair resource
allocation for the application level messaging units called events. We identify
the crucial network-wide resources as the in-queue processing turn of the
frames and the channel access opportunities of the nodes. We show that fair
treatment of events, as opposed to regular flow of frames, results in enhanced
performance in terms of the number of frames reported per event and the
reporting latency. EBF is a robust mechanism that can be used as a stand-alone
or as a complementary method to other possible performance enhancement methods
for video sensor networks implemented at other communication layers.
CHAPTER 2
2.0 SYSTEM ANALYSIS
2.1 EXISTING SYSTEM:
In the existing literature, machine-to-machine communications emerge to facilitate services through linkage between the cyber and physical worlds. In addition to the great challenges in a large network of machine/sensor swarm, an effective network architecture involving the interconnection of wireless infrastructure and multi-hop ad hoc networking in the machine swarm remains open. Inspired by the small-world phenomenon in social networks, a short-cut path may be established under a heterogeneous network architecture.
Previous discussions note this tradeoff; nevertheless, heterogeneous schemes are able to provide promising guaranteed throughput even under a strict QoS demand (tight τ). Moreover, Fig. 8 further provides an exhaustive throughput comparison among different scenarios to complete the evaluation: while the QoS-guaranteed throughput is upper-bounded by the maximum achievable throughput, a great throughput improvement is provided by the heterogeneous architecture as compared with the plain machine swarm.
Existing QoS schemes pursue fair resource allocation for application-level messaging units called events. The crucial network-wide resources are identified as the in-queue processing turn of the frames and the channel access opportunities of the nodes. Fair treatment of events, as opposed to a regular flow of frames, results in enhanced performance in terms of the number of frames reported per event and the reporting latency, and can be used as a stand-alone method or as a complement to other possible performance enhancement methods for video sensor networks implemented at other communication layers.
2.1.1 DISADVANTAGES:
- For a single source-destination pair, there exist a source machine, a destination machine, and several relay machines that forward traffic from the source to the destination.
- Without loss of generality, it is assumed that sequences of packets follow a general arrival process and a general service time, and each transmission link is modeled as a queue.
- Such a queue represents a queuing system with a single server and infinite buffer size, in which inter-arrival times have a general (meaning arbitrary) distribution and service times have a (different) general distribution.
2.2 PROPOSED SYSTEM:
Machine-to-machine (M2M) communications emerge to operate autonomously and to link interactions between the Internet cyber world and physical systems. We present the technological scenario of M2M communications, consisting of a wireless infrastructure to the cloud and a machine swarm of tremendous numbers of devices. Related technologies toward practical realization are explored to complete the fundamental understanding and engineering knowledge of this new communication and networking technology front. We connect data aggregators (DAs) to the machine swarm and propose a promising two-tier heterogeneous architecture with the DAs' small-world network for statistical traffic control in large M2M communication networks, addressing efficient dissemination control for routing and QoS in applications such as surveillance.
We first analytically supply the condition to establish connected M2M networks and explore some essential geometric properties (i.e., degree distribution, network diameter, and average distance) of these networks. Analytic bounds on the average distance characterize the average number of hops that machines' packets need to traverse over the swarm, thus dominating the QoS guarantee capability for reliable communications. Furthermore, through a G/G/1 queuing network model (i.e., both the inter-arrival time and service time distributions of a traffic queue are arbitrary) for traffic modeling, practical data transportation over connected M2M networks is captured.
Aiming at statistical performance in large M2M networks, we propose a statistical control mechanism for these networks by establishing the heterogeneous network architecture and exploiting statistical QoS guarantees for end-to-end transmissions without the need for feedback control at each link. By forming the DAs' network with the small-world property and linking machines to DAs, this novel heterogeneous architecture significantly improves the performance of end-to-end traffic within tolerable delay and makes dependable communications possible by guaranteeing traffic QoS, with extremely simple network operation for each machine.
2.2.1 ADVANTAGES:
- To understand the geometric properties of large M2M networks and thus benchmark performance, we first analytically examine network connectivity, degree distribution, network diameter, and average distance under a Poisson Point Process (PPP) machine distribution.
- Introducing queuing network theory into such network analysis for practical data transportation, the average delay and achievable throughput for message delivery in connected M2M networks are analytically obtained as well as the QoS guaranteed throughput in real applications.
- Standing on hereby established analysis, statistical dissemination control is proposed that incorporates DA’s network with machine swarm (or sensor swarm) for favorable heterogeneous network architecture.
- Due to infeasible end-to-end information exchange and the consequent impossibility of precise control, we exploit statistical QoS guarantees over the two-tier heterogeneous network architecture to exhibit a remarkable enhancement of system performance and to bring the merits of the small-world phenomenon into engineering reality.
2.3 HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1 HARDWARE REQUIREMENT:
- Processor – Pentium IV
- Speed –
1.1 GHz
- RAM – 256 MB (min)
- Hard Disk – 20 GB
- Floppy Drive – 1.44 MB
- Key Board – Standard Windows Keyboard
- Mouse – Two or Three Button Mouse
- Monitor – SVGA
2.3.2 SOFTWARE REQUIREMENTS:
- Operating System : Windows XP or Win7
- Front End : JAVA JDK 1.7
- Script : Java Script
- Tools : Netbeans 7
- Document : MS-Office 2007
CHAPTER 3
3.0 SYSTEM DESIGN:
Data Flow Diagram / Use Case Diagram / Flow Diagram:
- The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
- The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
- DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
- DFD is also known as bubble chart. A DFD may be used to represent a system at any level of abstraction. DFD may be partitioned into levels that represent increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION OF DATA:
External sources or destinations, which may be people or organizations or other entities
DATA SOURCE:
Here the data referenced by a process is stored and retrieved.
PROCESS:
People, procedures, or devices that produce data. The physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.
There are several common modeling rules when creating DFDs:
- All processes must have at least one data flow in and one data flow out.
- All processes should modify the incoming data, producing new forms of outgoing data.
- Each data store must be involved with at least one data flow.
- Each external entity must be involved with at least one data flow.
- A data flow must be attached to at least one process.
3.1 ARCHITECTURE DIAGRAM
3.2 DATAFLOW DIAGRAM
UML DIAGRAMS:
3.3 USE CASE DIAGRAM:
3.4 CLASS DIAGRAM:
3.5 SEQUENCE DIAGRAM:
3.6 ACTIVITY DIAGRAM:
CHAPTER 4
4.0 IMPLEMENTATION:
GEOMETRIC RANDOM GRAPH (GRG):
An M2M communication network consists of tremendous numbers of self-organized machines/sensors and enables autonomous connections among different applications for ubiquitous communications over such a large swarm system. To bring this scenario into practice, providing connectivity accompanied by reliable transportation is a must for such a large network. In the following, we highlight the relevant research and introduce the M2M network model using the geometric random graph (GRG) as its topology, since its local clustering property makes it suitable for benchmarking large wireless ad hoc sensor networks; a small sketch of the model follows.
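The following self-contained Java sketch illustrates the GRG model (it is not the paper's simulator): machines are dropped uniformly in a square, an edge connects any pair within range r, and the empirical average degree is compared with the PPP prediction (density × πr²). All parameter values are illustrative assumptions.

```java
import java.util.Random;

public class GeometricRandomGraph {
    public static void main(String[] args) {
        int n = 500;                 // number of machines (assumed)
        double L = 10.0, r = 1.0;    // square side and communication range (assumed)
        Random rng = new Random(7);

        // Drop machines uniformly at random (a PPP conditioned on n points).
        double[] x = new double[n], y = new double[n];
        for (int i = 0; i < n; i++) {
            x[i] = rng.nextDouble() * L;
            y[i] = rng.nextDouble() * L;
        }

        // Connect every pair of machines within distance r.
        long edges = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                double dx = x[i] - x[j], dy = y[i] - y[j];
                if (dx * dx + dy * dy <= r * r) edges++;
            }
        }

        double avgDegree = 2.0 * edges / n;
        double pppPrediction = n / (L * L) * Math.PI * r * r; // density * pi * r^2
        // The empirical value is slightly lower due to boundary effects.
        System.out.printf("average degree = %.2f (PPP prediction = %.2f)%n",
                avgDegree, pppPrediction);
    }
}
```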
Without the need for end-to-end information, which would incur catastrophic complexity, information dissemination becomes the only practical way to deliver data in the machine swarm. We exploit an open G/G/1 queuing network model for the delay and throughput analysis of M2M networks. Furthermore, the diffusion approximation is used to analyze the queuing network. Our analytical methodology deals with wireless networks having general inter-arrival and service time distributions, providing closed-form expressions for the end-to-end delay and the maximum achievable throughput per node. In the following, to fully understand practical data transportation, we present the traffic model and an equivalent queuing network model in connected M2M networks.
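The paper's delay results rely on the diffusion approximation for the G/G/1 queuing network; as a rough, hedged stand-in (our substitute formula, not the paper's derivation), the classical Allen-Cunneen approximation conveys how utilization and traffic variability drive per-hop delay:

```java
public class GG1DelaySketch {
    // Approximate end-to-end delay over 'hops' identical G/G/1 queues using
    // the Allen-Cunneen waiting-time approximation:
    //   Wq ~ ((ca2 + cs2) / 2) * (rho / (1 - rho)) * E[S], rho = lambda * E[S],
    // where ca2 and cs2 are the squared coefficients of variation of the
    // inter-arrival and service times. Per-hop delay = waiting + service.
    static double endToEndDelay(double lambda, double meanService,
                                double ca2, double cs2, int hops) {
        double rho = lambda * meanService;               // per-queue utilization
        if (rho >= 1.0) return Double.POSITIVE_INFINITY; // unstable queue
        double wq = ((ca2 + cs2) / 2.0) * (rho / (1.0 - rho)) * meanService;
        return hops * (wq + meanService);
    }
}
```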
4.1 ALGORITHM
M2M ROUTING ALGORITHM:
For the M2M routing algorithm, this paper studies the asymptotic performance of several statistical QoS requirements, such as end-to-end delay and maximum throughput, as well as the throughput under guaranteed delay, for a general forwarding scheme in an M2M network. More importantly, our previous work focuses on obtaining the traffic performance under a specific scenario setting, which simplifies the analysis but fails to maintain the same level of transmission quality when the scenario changes, e.g., when the network topology or the traffic pattern becomes different.
The proposed algorithms solve this challenge through statistical dissemination control by leveraging the heterogeneous network architecture. In particular, the upper layer of the DAs' network enables shortcut transmissions to reduce the excess end-to-end delay caused by long-route transmissions in the lower layer of the machine swarm. A comprehensive performance analysis of such a heterogeneous architecture is also included in this paper. With these accomplishments, we provide an original and significant paradigm to facilitate M2M communications, practically realizing information dissemination control to meet the needs of time-sensitive applications in next-generation wireless standards.
4.2 MODULES:
NETWORK TOPOLOGY DESIGN:
SERVER CLIENT MODULE:
STATISTICAL QOS GUARANTEE:
M2M COMMUNICATION CONTROL:
END-TO-END DELAY ANALYSIS:
4.3 MODULE DESCRIPTION:
NETWORK TOPOLOGY DESIGN:
This module develops a wireless-mesh-based topology design in which all nodes are placed at particular distances. Without using any cables, packet data are transmitted and received over fully wireless equipment. The distance and transmission range between each node and the wireless sensor are calculated, and all nodes are physically interconnected. The sink is at the center of the circular sensing area.
This module also performs node creation: more than 20 nodes are placed at particular distances, and wireless sensors are placed in the intermediate area. Each node knows its location relative to the sink, and each node is programmed with the total number of nodes in the network.
SERVER CLIENT MODULE:
Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Often clients and servers operate over a computer network on separate hardware. A server machine is a high-performance host that runs one or more server programs which share their resources with clients. A client, in contrast, does not share its own resources; clients instead initiate communication sessions with servers, which await (listen for) incoming requests.
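The module description maps directly onto Python's standard socket API. The sketch below is a generic client-server illustration (the port and payload are arbitrary choices, not the project's actual values): the server listens for incoming requests, and the client initiates the session:

import socket
import threading

def server(port=50007):
    """Server: awaits (listens for) one incoming request and answers it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", port))
        s.listen()
        conn, _ = s.accept()
        with conn:
            conn.sendall(b"ack:" + conn.recv(1024))

def client(port=50007):
    """Client: initiates the communication session with the server."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("127.0.0.1", port))
        s.sendall(b"packet-001")
        print(s.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
client()
t.join()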
STATISTICAL QOS GUARANTEE:
M2M COMMUNICATION CONTROL:
We consider three classes of links: machine-to-machine communication with low data rate and energy cost, machine-to-DA communication with medium data rate, and DA-to-DA communication with high data rate. We adopt the related parameter values as shown in Table II and set up the experiment as follows. 1 Mb of data is sent from the source machine to the destination machine, in the plain machine swarm and in the heterogeneous architecture separately. Moreover, the DAs' communication capabilities are characterized by the number of machines z that can be served simultaneously by each single DA.
Regarding the number of DAs required by the heterogeneous architecture: as the number of machines within a single DA's capability increases linearly, the required number of DAs drops exponentially. This suggests that a few powerful DAs are preferable to a bunch of DAs with limited capability. Furthermore, Fig. 10 shows the average end-to-end delay with respect to different area sizes of the Metropolis scenario. As the area size (and with it the number of machines in each block) increases, the heterogeneous architecture incurs much lower traffic delay than the plain machine swarm.
For example, with an area size of 60 km² and 10^8 machines, the delay of the heterogeneous architecture is 115 s, as compared to 2,500 s for the plain swarm. Moreover, the linear curves in the log scale of Fig. 10(b) confirm our asymptotic results, and suggest that the heterogeneous architecture outperforms the plain machine swarm with about 95% delay reduction for 10 billion machines. To conclude, by efficiently connecting a few DAs to construct small-world shortcuts, the proposed statistical control, accompanied by the heterogeneous architecture, resolves the undependable end-to-end transmissions.
END-TO-END DELAY ANALYSIS:
We compare the performance of the proposed heterogeneous network architecture with the plain machine swarm. Simulation results confirm that the heterogeneous architecture achieves remarkable delay reduction as well as high throughput gain with only a few DAs installed, which favors practical implementation in large M2M networks. All simulation parameters and value settings are listed in Table I. In particular, to ensure that every packet can be sent from its source to the corresponding destination, a connected M2M network is first established via the proposed analysis (i.e., by selecting the appropriate machine communication range r with respect to the total machine number n). When a source machine generates a packet, it routes the packet to a specific destination, uniformly selected among the other machines.
Moreover, in the plain machine swarm, the source simply hops forward based on sensing and relaying; in the heterogeneous architecture, it employs dissemination without selecting a particular DA. In the following, we first evaluate the average distance to DAs and the end-to-end distance for the plain machine swarm and the heterogeneous architecture. Next, the end-to-end packet delay, the maximum system throughput, and the throughput under guaranteed delay are thoroughly examined for the two architectures and compared against simulation validation; the Metropolis scenario is further established to bring our design to an even more practical stage.
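For the step of selecting the communication range r with respect to the machine number n, a standard rule for geometric random graphs on a unit square is to pick r slightly above the connectivity threshold, which grows as sqrt(log n / n); the safety factor below is an assumed constant, not a value from the paper:

import math

def communication_range(n, c=1.5):
    """Pick r a factor c above the GRG connectivity threshold
    ~ sqrt(log n / n) for n nodes on the unit square (c is assumed)."""
    return c * math.sqrt(math.log(n) / n)

for n in (10**3, 10**4, 10**5):
    print(n, round(communication_range(n), 4))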
CHAPTER 8
8.1 CONCLUSION AND FUTURE WORK:
In this paper, we resolve the most critical challenge of providing statistical control for reliable information dissemination over large M2M communication networks. Examining the network topology of M2M networks, the geometric properties of such large networks are studied in depth to analytically characterize message delivery over connected M2M networks.
Moreover, by leveraging a queuing network model, practical data transportation is captured, and both the average end-to-end delay and the maximum achievable throughput of these connected networks become accessible. Based on the above explorations, the promising statistical control, together with a sophisticated small-world network of data aggregators and thus the heterogeneous architecture, is proposed to establish shortcut paths among machine communications.
Performance evaluation verifies that, instead of exploiting a long concatenation of multi-hop transmissions in the machine swarm, our heterogeneous network architecture enables machines to communicate through an overlaid ultra-fast "highway", like a shortcut in small-world networks, with the desired throughput. This is particularly crucial for next-generation networks with tremendous numbers of machines. Therefore, we successfully achieve reliable communications via the proposed methodology and facilitate novel traffic control in M2M communication networks.
Shared Authority Based Privacy-Preserving Authentication Protocol in Cloud Computing
Hong Liu, Student Member, IEEE, Huansheng Ning, Senior Member, IEEE, Qingxu Xiong, Member, IEEE, and Laurence T. Yang, Member, IEEE
- ABSTRACT:
Cloud computing is an emerging data interactive paradigm to realize users' data remotely stored in an online cloud server. Cloud services provide great conveniences for the users to enjoy on-demand cloud applications without considering the local infrastructure limitations. During the data accessing, different users may be in a collaborative relationship, and thus data sharing becomes significant to achieve productive benefits. The existing security solutions mainly focus on authentication to realize that a user's private data cannot be illegally accessed, but neglect a subtle privacy issue during a user challenging the cloud server to request other users for data sharing. The challenged access request itself may reveal the user's privacy, no matter whether or not it can obtain the data access permissions. In this paper, we propose a shared authority based privacy-preserving authentication protocol (SAPA) to address the above privacy issue for cloud storage. In the SAPA, 1) shared access authority is achieved by an anonymous access request matching mechanism with security and privacy considerations (e.g., authentication, data anonymity, user privacy, and forward security); 2) attribute based access control is adopted to realize that the user can only access its own data fields; 3) proxy re-encryption is applied to provide data sharing among the multiple users. Meanwhile, the universal composability (UC) model is established to prove that the SAPA theoretically has the design correctness. It indicates that the proposed protocol is attractive for multi-user collaborative cloud applications.
Index Terms—Cloud computing, authentication protocol, privacy preservation, shared authority, universal composability
1 INTRODUCTION
Cloud computing is a promising information technology architecture for both enterprises and individuals. It launches an attractive data storage and interactive paradigm with obvious advantages, including on-demand self-services, ubiquitous network access, and location independent resource pooling [1]. Towards the cloud computing, a typical service architecture is anything as a service (XaaS), in which infrastructures, platform, software, and others are applied for ubiquitous interconnections. Recent studies have worked to promote cloud computing to evolve towards the internet of services [2], [3]. Subsequently, security and privacy issues are becoming key concerns with the increasing popularity of cloud services. Conventional security approaches mainly focus on the strong authentication to realize that a user can remotely access its own data in on-demand mode. Along with the diversity of the application requirements, users may want to access and share each other's authorized data fields to achieve productive benefits, which brings new security and privacy challenges for the cloud storage.
An example is introduced to identify the main motivation. In the cloud storage based supply chain management, there are various interest groups (e.g., supplier, carrier, and retailer) in the system. Each group owns its users which are permitted to access the authorized data fields, and different users own relatively independent access authorities. It means that any two users from diverse groups should access different data fields of the same file.
Thereinto, a supplier may want to access a carrier's data fields, but it is not sure whether the carrier will allow its access request. If the carrier refuses its request, the supplier's access desire will be revealed along with nothing obtained towards the desired data fields. Actually, the supplier may not send the access request, or may withdraw the unaccepted request in advance, if it firmly knows that its request will be refused by the carrier. It is unreasonable to thoroughly disclose the supplier's private information without any privacy considerations. Fig. 1 illustrates three revised cases to address the above imperceptible privacy issue.
- Case 1. The carrier also wants to access the supplier's data fields, and the cloud server should inform each other and transmit the shared access authority to both users;
- Case 2. The carrier has no interest in other users' data fields, therefore its authorized data fields should be properly protected, meanwhile the supplier's access request will also be concealed;
- Case 3. The carrier may want to access the retailer's data fields, but it is not certain whether the retailer will accept its request or not. The retailer's authorized data fields should not be public if the retailer has no interest in the carrier's data fields, and the carrier's request is also privately hidden.
Fig. 1. Three possible cases during data accessing and data sharing in cloud applications.
Towards the above three cases, security protection and privacy preservation are both considered without revealing sensitive access desire related information.
In the cloud environments, a reasonable security protocol should achieve the following requirements. 1) Authentication: a legal user can access its own data fields; only the authorized partial or entire data fields can be identified by the legal user, and any forged or tampered data fields cannot deceive the legal user. 2) Data anonymity: any irrelevant entity cannot recognize the exchanged data and communication state even if it intercepts the exchanged messages via an open channel. 3) User privacy: any irrelevant entity cannot know or guess a user's access desire, which represents a user's interest in another user's authorized data fields.
If and only if both users have mutual interests in each other's authorized data fields will the cloud server inform the two users to realize the access permission sharing. 4) Forward security: any adversary cannot correlate two communication sessions to derive the prior interrogations according to the currently captured messages.
Research has worked to strengthen security protection and privacy preservation in cloud applications, and there are various cryptographic algorithms to address potential security and privacy problems, including security architectures [4], [5], data possession protocols [6], [7], data public auditing protocols [8], [9], [10], secure data storage and data sharing protocols [11], [12], [13], [14], [15], [16], access control mechanisms [17], [18], [19], privacy preserving protocols [20], [21], [22], [23], and key management [24], [25], [26], [27]. However, most previous research focuses on the authentication to realize that only a legal user can access its authorized data, which ignores that different users may want to access and share each other's authorized data fields to achieve productive benefits. When a user challenges the cloud server to request other users for data sharing, the access request itself may reveal the user's privacy, no matter whether or not it can obtain the data access permissions. In this work, we aim to address a user's sensitive access desire related privacy during data sharing in the cloud environments, and it is significant to design a humanistic security scheme to simultaneously achieve data access control, access authority sharing, and privacy preservation.
In this paper, we address the aforementioned privacy issue to propose a shared authority based privacy-preserving authentication protocol (SAPA) for the cloud data storage, which realizes authentication and authorization without compromising a user's private information. The main contributions are as follows.
1) Identify a new privacy challenge in cloud storage, and address a subtle privacy issue during a user challenging the cloud server for data sharing, in which the challenged request itself cannot reveal the user's privacy no matter whether or not it can obtain the access authority.
2) Propose an authentication protocol to enhance a user's access request related privacy, in which the shared access authority is achieved by an anonymous access request matching mechanism.
3) Apply ciphertext-policy attribute based access control to realize that a user can reliably access its own data fields, and adopt the proxy re-encryption to provide temp authorized data sharing among multiple users.
The remainder of the paper is organized as follows. Section 2 introduces related works. Section 3 introduces the system model, and Section 4 presents the proposed authentication protocol. The universal composability (UC) model based formal security analysis is performed in Section 5. Finally, Section 6 draws a conclusion.
2 RELATED WORK
Dunning and Kresman [11] proposed an anonymous ID assignment based data sharing algorithm (AIDA) for multiparty oriented cloud and distributed computing systems. In the AIDA, an integer data sharing algorithm is designed on top of a secure sum data mining operation, and adopts a variable and unbounded number of iterations for anonymous assignment.
Specifically, Newton's identities and Sturm's theorem are used for the data mining, a distributed solution of certain polynomials over finite fields enhances the algorithm's scalability, and Markov chain representations are used to determine statistics on the required number of iterations.
Liu et al. [12] proposed a multi-owner data sharing secure scheme (Mona) for dynamic groups in the cloud applications. The Mona aims to realize that a user can securely share its data with other users via the untrusted cloud server, and can efficiently support dynamic group interactions. In the scheme, a new granted user can directly decrypt data files without pre-contacting with data owners, and user revocation is achieved by a revocation list without updating the secret keys of the remaining users. Access control is applied to ensure that any user in a group can anonymously utilize the cloud resources, and the data owners' real identities can only be revealed by the group manager for dispute arbitration. It indicates that the storage overhead and encryption computation cost are independent of the number of users.
Grzonkowski and Corcoran [13] proposed a zero-knowledge proof (ZKP) based authentication scheme for cloud services. Based on the social home networks, a user-centric approach is applied to enable the sharing of personalized content and sophisticated network-based services via TCP/IP infrastructures, in which a trusted third party is introduced for decentralized interactions.
Nabeel et al. [14] proposed a broadcast group key management (BGKM) scheme to improve the weakness of symmetric key cryptosystems in public clouds; the BGKM realizes that a user need not utilize public key cryptography, and can dynamically derive the symmetric keys during decryption. Accordingly, an attribute based access control mechanism is designed to achieve that a user can decrypt the contents if and only if its identity attributes satisfy the content provider's policies. The fine-grained algorithm applies an access control vector (ACV) for assigning secrets to users based on the identity attributes, and allows the users to derive actual symmetric keys based on their secrets and other public information. The BGKM has an obvious advantage during adding/revoking users and updating access control policies.
Wang et al. [15] proposed a distributed storage integrity auditing mechanism, which introduces the homomorphic token and distributed erasure-coded data to enhance secure and dependable storage services in cloud computing. The scheme allows users to audit the cloud storage with lightweight communication overloads and computation cost, and the auditing result ensures strong cloud storage correctness and fast data error localization. Towards the dynamic cloud data, the scheme supports dynamic outsourced data operations. It indicates that the scheme is resilient against Byzantine failure, malicious data modification attack, and server colluding attacks.
Sundareswaran et al. [16] established a decentralized information accountability framework to track the users' actual data usage in the cloud, and proposed an object-centered approach to enable enclosing the logging mechanism with the users' data and policies. The Java ARchives (JAR) programmable capability is leveraged to create a dynamic and mobile object, and to ensure that the users' data access will launch authentication.
Additionally, distributed auditing mechanisms are also provided to strengthen the user's data control, and experiments demonstrate the approach's efficiency and effectiveness.
In the aforementioned works, various security issues are addressed. However, a user's subtle access request related privacy problem caused by data accessing and data sharing has not been studied yet in the literature. Here, we identify a new privacy challenge, and propose a protocol not only focusing on authentication to realize the valid data accessing, but also considering authorization to provide the privacy-preserving access authority sharing. The attribute based access control and proxy re-encryption mechanisms are jointly applied for authentication and authorization.
3 SYSTEM MODEL
Fig. 2 illustrates a system model for the cloud storage architecture, which includes three main network entities: users (Ux), a cloud server (S), and a trusted third party.
Fig. 2. The cloud storage system model.
- User. An individual or group entity, which owns its data stored in the cloud for online data storage and computing. Different users may be affiliated with a common organization, and are assigned with independent authorities on certain data fields.
- Cloud server. An entity, which is managed by a particular cloud service provider or cloud application operator to provide data storage and computing services. The cloud server is regarded as an entity with unrestricted storage and computational resources.
- Trusted third party. An optional and neutral entity, which has advanced capabilities on behalf of the users, to perform data public auditing and dispute arbitration.
In the cloud storage, a user remotely stores its data via online infrastructures, platforms, or software for cloud services, which are operated in the distributed, parallel, and cooperative modes. During cloud data accessing, the user autonomously interacts with the cloud server without external interferences, and is assigned with the full and independent authority on its own data fields. It is necessary to guarantee that the users' outsourced data cannot be accessed by unauthorized users, and it is of critical importance to protect the private information during the users' data access challenges. In some scenarios, there are multiple users in a system (e.g., supply chain management), and the users could have different affiliation attributes from different interest groups. One of the users may want to access other associated users' data fields to achieve bi-directional data sharing, but it cares about two aspects: whether the aimed user would like to share its data fields, and how to avoid exposing its access request if the aimed user declines or ignores its challenge. In this paper, we pay more attention to the process of data access control and access authority sharing rather than the specific file oriented cloud data management.
In the system model, assume that point-to-point communication channels between users and a cloud server are reliable with the protection of the secure shell protocol (SSH). The related authentication handshakes are not highlighted in the following protocol presentation.
Towards the trust model, there are no full trust relationships between a cloud server S and a user Ux.
- S is semi-honest and curious. Being semi-honest means that S can be regarded as an entity that appropriately follows the protocol procedure. Being curious
means that S may attempt to obtain Ux's private information (e.g., data content, and user preferences). It means that S is under the supervision of its cloud provider or operator, but may be interested in viewing users' privacy. In the passive or honest-but-curious model, S cannot tamper with the users' data, in order to maintain the system's normal operation with undetected monitoring.
- Ux is rational and sensitive. Being rational means that Ux's behavior would never be based on experience or emotion, and misbehavior may only occur for selfish interests. Being sensitive means that Ux is reluctant to disclose its own sensitive data, but has strong interests in other users' privacy.
Towards the threat model, it covers the possible security threats and system vulnerabilities during cloud data interactions. The communication channels are exposed in public, and both internal and external attacks exist in the cloud applications [15]. The internal attacks mainly refer to the interactive entities (i.e., S and Ux). Thereinto, S may be self-centered and utilitarian, and aims to obtain more user data contents and the associated user behaviors/habits for the maximization of commercial interests; Ux may attempt to capture other users' sensitive data fields for certain purposes (e.g., curiosity, and malicious intent). The external attacks mainly consider the data CIA triad (i.e., confidentiality, integrity, and availability) threats from outside adversaries, which could compromise the cloud data storage servers, and subsequently modify (e.g., insert, or delete) the users' data fields.
4 THE SHARED AUTHORITY BASED PRIVACY-PRESERVING AUTHENTICATION PROTOCOL
4.1 System Initialization
The cloud storage system includes a cloud server S and users {U_x} (x ∈ {1, ..., m}, m ∈ N*). Thereinto, U_a and U_b are two users, which have independent access authorities on their own data fields. It means that a user has an access permission for particular data fields stored by S, and the user cannot exceed its authority to access other users' data fields. Here, we consider S and {U_a, U_b} to present the protocol for data access control and access authority sharing with enhanced privacy considerations. The main notations are introduced in Table 1.
Let $BG = (q, g, h, \mathbb{G}, \mathbb{G}', e, H)$ be a pairing group, in which $q$ is a large prime, $\mathbb{G}$ and $\mathbb{G}'$ are of prime order $q$, $\mathbb{G} = \langle g \rangle = \langle h \rangle$, and $H$ is a collision-resistant hash function. The bilinear map $e: \mathbb{G} \times \mathbb{G} \to \mathbb{G}'$ satisfies the bilinear and non-degenerate properties: for all $g, h \in \mathbb{G}$ and $a, b \in \mathbb{Z}_q^*$, it turns out that $e(g^a, h^b) = e(g, h)^{ab}$ and $e(g, h) \neq 1$. Meanwhile, $e(g, h)$ can be efficiently computed for all $g, h \in \mathbb{G}$, and it is a generator of $\mathbb{G}'$.
Let S and U_x respectively own the pairwise keys {pk_S, sk_S} and {pk_Ux, sk_Ux}. Besides, S is assigned with all users' public keys {pk_U1, ..., pk_Um}, and U_x is assigned with pk_S. Here, the public key $pk_t = g^{sk_t} \pmod q$ ($t \in \{S, U_x\}$) and the corresponding private key $sk_t \in \mathbb{Z}_q^*$ are defined according to the generator $g$.
Let $F(R_{U_x}^{U_y}(R_{U_y}^{U_x})^T) = Cont \in \mathbb{Z}_q$ describe the algebraic relation of $\{R_{U_x}^{U_y}, R_{U_y}^{U_x}\}$, which are mutually inverse access requests challenged by {U_x, U_y}, and $Cont$ is a constant.
Here, $F(\cdot)$ is a collision-resistant function: for any randomized polynomial-time algorithm $\mathcal{A}$, there is a negligible function $p(k)$ for a sufficiently large value $k$ such that

$$\Pr\Big[\{(x,x'),(y,y')\} \leftarrow \mathcal{A}(1^k) : (x \neq x',\ y \neq y') \wedge F\big(R_{U_x}^{U_y}(R_{U'_y}^{U'_x})^T\big) = Cont\Big] \leq p(k).$$

Note that $R_{U_y}^{U_\xi}$ is an m-dimensional Boolean vector, in which only the $\xi$-th pointed-element and the $y$-th self-element are 1, and the other elements are 0. It turns out that:
- $F(R_{U_x}^{U_y}(R_{U_y}^{U_x})^T) = F(2) = Cont$ means that both U_x and U_y are interested in each other's data fields, and the two access requests are matched;
- $F(R_{U_x}^{U_y}(R_{U_y}^{U_{\tilde{x}}})^T) = F(R_{U_x}^{U_{\tilde{y}}}(R_{U_y}^{U_x})^T) = F(1)$ means that only one user (i.e., U_x or U_y) is interested in the other's data fields, and the access requests are not matched. Note that $U_{\tilde{x}}$/$U_{\tilde{y}}$ represents that the user is not U_x/U_y;
- $F(R_{U_x}^{U_{\tilde{y}}}(R_{U_y}^{U_{\tilde{x}}})^T) = F(0)$ means that neither U_x nor U_y is interested in each other's data fields, and the two access requests are not matched.
Let A be the attribute set; there are n attributes A = {A_1, A_2, ..., A_n} for all users, and U_x has its own attribute set A_Ux ⊆ A for data accessing. Let A_Ux and P_Ux be monotone Boolean matrices that represent U_x's data attribute access list and data access policy.
- Assume that U_x has A_Ux = [a_ij]_{n×m}, which satisfies that a_ij = 1 for A_i ∈ A_Ux, and a_ij = 0 for A_i ∉ A_Ux.
- Assume that S owns P_Ux = [p_ij]_{n×m}, which is applied to restrain U_x's access authority, and satisfies that p_ij = 1 for A_i ∈ P_Ux, and p_ij = 0 for A_i ∉ P_Ux. If a_ij ≤ p_ij holds for all i ∈ {1, ..., n} and j ∈ {1, ..., m}, it is regarded that A_Ux is within P_Ux's access authority limitation.
Table 1. Notations.
Note that full-fledged cryptographic algorithms (e.g., attribute based access control, and proxy re-encryption) can be exploited to support the SAPA.
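Because the access requests are m-dimensional Boolean vectors, the matching test and the authority check reduce to elementary operations; the toy indices and matrices below are our own illustration of the F(2)/F(1)/F(0) semantics and of the element-wise a_ij <= p_ij condition:

def request_vector(m, self_idx, target_idx):
    """Boolean request R: the requester's own index (self-element)
    and the pointed target index are set to 1."""
    r = [0] * m
    r[self_idx] = r[target_idx] = 1
    return r

def match_value(r1, r2):
    """Inner product R1 . R2: equals 2 exactly when the two requests
    point at each other (mutual interest)."""
    return sum(x * y for x, y in zip(r1, r2))

m = 5
r_ab = request_vector(m, self_idx=0, target_idx=1)   # Ua requests Ub
r_ba = request_vector(m, self_idx=1, target_idx=0)   # Ub requests Ua
r_bc = request_vector(m, self_idx=1, target_idx=2)   # Ub requests Uc
print(match_value(r_ab, r_ba))   # 2: matched
print(match_value(r_ab, r_bc))   # 1: one-sided, not matched

def within_policy(A, P):
    """Element-wise a_ij <= p_ij: the attribute access list must stay
    inside the server-held access policy."""
    return all(a <= p for ra, rp in zip(A, P) for a, p in zip(ra, rp))

print(within_policy([[1, 0], [0, 0]], [[1, 0], [1, 0]]))   # True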
4.2 The Proposed Protocol Descriptions
Fig. 3 shows the interactions among {U_a, U_b, S}, in which both U_a and U_b have interests in each other's authorized data fields for data sharing. (Fig. 3. The shared authority based privacy-preserving authentication protocol.) Note that the presented interactions may not be synchronously launched, and a certain time interval is allowable.
4.2.1 {U_a, U_b}'s Access Challenges and S's Responses
{U_a, U_b} respectively generate the session identifiers {sid_Ua, sid_Ub}, extract the identity tokens {T_Ua, T_Ub}, and transmit {sid_Ua‖T_Ua, sid_Ub‖T_Ub} to S as an access query to initiate a new session. Accordingly, we take the interactions of U_a and S as an example to introduce the following authentication phase. Upon receiving U_a's challenge, S first generates a session identifier sid_Sa, and establishes the master public key mpk = (g_i, h, h_i, BG, e(g,h), H) and master private key msk = (a, g). Thereinto, S randomly chooses $a \in \mathbb{Z}_q$, and computes $g_i = g^{a^i}$ and $h_i = h^{a^{i-1}}$ for $i \in \{1, \dots, n\}$.
S randomly chooses $s \in \{0,1\}^*$, and extracts U_a's access authority policy P_Ua = [p_ij]_{n×m} (p_ij ∈ {0,1}); U_a is assigned with the access authority on its own data fields D_Ua within P_Ua's permission. S further defines a polynomial F_Sa(x, P_Ua) according to P_Ua and T_Ua:

$$F_{Sa}(x, P_{U_a}) = \prod_{i=1,\,j=1}^{n,\,m} \big(x + ij\,H(T_{U_a})\big)^{p_{ij}} \pmod q.$$

S computes a set of values {M_Sa0, M_Sa1, {M_Sa2i}, M_Sa3, M_Sa4} to establish the ciphertext C_Sa = {M_Sa1, {M_Sa2i}, M_Sa3, M_Sa4}, and transmits sid_Sa‖C_Sa to U_a:

$$M_{Sa0} = H(P_{U_a} \| D_{U_a} \| T_{U_a} \| s), \quad M_{Sa1} = h^{F_{Sa}(a, P_{U_a}) M_{Sa0}}, \quad M_{Sa2i} = (g_i)^{M_{Sa0}} \ (i = 1, \dots, n),$$
$$M_{Sa3} = H\big(e(g,h)^{M_{Sa0}}\big) \oplus s, \quad M_{Sa4} = H(sid_{U_a} \| s) \oplus D_{U_a}.$$

Similarly, S performs the corresponding operations for U_b, including that S randomly chooses $a' \in \mathbb{Z}_q$ and $s' \in \{0,1\}^*$, establishes {g'_i, h'_i}, extracts {P_Ub, D_Ub}, defines F_Sb(x, P_Ub), and computes {M_Sb0, M_Sb1, {M_Sb2i}, M_Sb3, M_Sb4} to establish the ciphertext C_Sb for transmission.
4.2.2 {U_a, U_b}'s Data Access Control
U_a first extracts its data attribute access list A_Ua = [a_ij] (a_ij ∈ {0,1}, a_ij ≤ p_ij) to re-structure an access list L_Ua = [l_ij]_{n×m} with l_ij = p_ij − a_ij. U_a also defines a polynomial F_Ua(x, L_Ua) according to L_Ua and T_Ua:

$$F_{Ua}(x, L_{U_a}) = \prod_{i=1,\,j=1}^{n,\,m} \big(x + ij\,H(T_{U_a})\big)^{l_{ij}} \pmod q.$$

It turns out that F_Ua(x, L_Ua) satisfies the equation

$$F_{Ua}(x, L_{U_a}) = \prod_{i=1,\,j=1}^{n,\,m} \big(x + ij\,H(T_{U_a})\big)^{p_{ij}-a_{ij}} = F_{Sa}(x, P_{U_a}) / F_{Sa}(x, A_{U_a}).$$

Afterwards, U_a randomly chooses $b \in \mathbb{Z}_q$, and the decryption key k_AUa for A_Ua can be obtained as follows:

$$k_{A_{U_a}} = \big(g^{(b+1)/F_{Sa}(a, A_{U_a})},\ h^{b-1}\big).$$

U_a further computes a set of values {N_Ua1, N_Ua2, N_Ua3}. Here, f_Sai represents the coefficient of $x^i$ in F_Sa(x, P_Ua), and f_Uai represents the coefficient of $x^i$ in F_Ua(x, L_Ua):

$$N_{Ua1} = e\Big(M_{Sa21},\ \prod_{i=1}^{n}(h_i)^{f_{Uai}}\, h^{f_{Ua0}}\Big), \quad N_{Ua2} = e\Big(\prod_{i=1}^{n}(M_{Sa2i})^{f_{Uai}},\ h^{b-1}\Big), \quad N_{Ua3} = e\big(g^{(b+1)/F_{Sa}(a, A_{U_a})},\ M_{Sa1}\big).$$

It turns out that $e(g,h)^{M_{Sa0}}$ satisfies the equation

$$e(g,h)^{M_{Sa0}} = \Big(\frac{N_{Ua3}}{N_{Ua1} N_{Ua2}}\Big)^{1/f_{Ua0}}. \quad (1)$$

For the right side of (1), expanding with the bilinearity of $e$ yields

$$N_{Ua1} = e(g,h)^{M_{Sa0} F_{Ua}(a, L_{U_a})}, \quad N_{Ua2} = e(g,h)^{M_{Sa0} b F_{Ua}(a, L_{U_a}) - M_{Sa0} f_{Ua0}}, \quad N_{Ua3} = e(g,h)^{M_{Sa0}(b+1) F_{Ua}(a, L_{U_a})},$$

so that $N_{Ua3}/(N_{Ua1}N_{Ua2}) = e(g,h)^{M_{Sa0} f_{Ua0}}$, which confirms (1).
U_a locally re-computes {s', M'_Sa0}, derives its own authorized data fields D_Ua, and checks whether the ciphertext C_Sa is encrypted by M'_Sa0. If it holds, U_a is a legal user that can properly decrypt the ciphertext C_Sa; otherwise, the protocol terminates:

$$s' = M_{Sa3} \oplus H\big(e(g,h)^{M_{Sa0}}\big), \quad M'_{Sa0} = H(P_{U_a} \| D_{U_a} \| T_{U_a} \| s'), \quad D_{U_a} = M_{Sa4} \oplus H(sid_{U_a} \| s').$$

U_a further extracts its pseudonym PID_Ua, a session-sensitive access request $R_{U_a}^{U_b}$, and the public key pk_Ua. Here, $R_{U_a}^{U_b}$ is introduced to let S know U_a's data access desire. It turns out that $R_{U_a}^{U_b}$ makes S know two facts: 1) U_a wants to access U_b's temp authorized data fields D̃_Ub; 2) U_a will also agree to share its temp authorized data fields D̃_Ua with U_b in the case that U_b grants its request. Afterwards, U_a randomly chooses $r_{U_a} \in \mathbb{Z}_q^*$, computes a set of values {M_Ua0, M_Ua1, M_Ua2, M_Ua3} to establish a ciphertext C_Ua, and transmits C_Ua to S for further access request matching:

$$M_{Ua0} = H(sid_{Sa} \| PID_{U_a}) \oplus R_{U_a}^{U_b}, \quad M_{Ua1} = g^{pk_{U_a} r_{U_a}}, \quad M_{Ua2} = e(g,h)^{r_{U_a}}, \quad M_{Ua3} = h^{r_{U_a}}.$$

Similarly, U_b performs the corresponding operations, including that U_b extracts A_Ub and determines {L_Ub, F_Ub(x, L_Ub), f_Ubi}. U_b further randomly chooses $b' \in \mathbb{Z}_q$, and computes the values {N_Ub1, N_Ub2, N_Ub3, s'', M'_Sb0} to derive its own data fields D_Ub. U_b also extracts its pseudonym PID_Ub and an access request $R_{U_b}^{U_a}$ to establish a ciphertext C_Ub with the elements {M_Ub0, M_Ub1, M_Ub2, M_Ub3}.
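The wrapping of the request pointer, M_Ua0 = H(sid_Sa || PID_Ua) XOR R_Ua^Ub, can be mimicked with an ordinary hash; the identifiers below are toy byte strings, and SHA-256 merely stands in for the protocol's hash function H:

import hashlib

def mask(sid: bytes, pid: bytes, length: int) -> bytes:
    """H(sid || pid), repeated/truncated to the payload length."""
    d = hashlib.sha256(sid + pid).digest()
    return (d * (length // len(d) + 1))[:length]

def xor_wrap(payload: bytes, sid: bytes, pid: bytes) -> bytes:
    """M0 = H(sid || pid) XOR payload; applying it twice unwraps."""
    return bytes(a ^ b for a, b in zip(payload, mask(sid, pid, len(payload))))

sid, pid = b"sid-Sa-001", b"PID-Ua"
r_ab = b"\x01\x01\x00\x00\x00"         # toy Boolean request vector
m0 = xor_wrap(r_ab, sid, pid)          # what Ua transmits inside C_Ua
print(xor_wrap(m0, sid, pid) == r_ab)  # True: S recovers the request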
4.2.3 {U_a, U_b}'s Access Request Matching and Data Access Authority Sharing
Upon receiving the ciphertexts {C_Ua, C_Ub} within an allowable time interval, S extracts {PID_Ua, PID_Ub} to derive the access requests $\{R_{U_a}^{U_b}, R_{U_b}^{U_a}\}$:

$$R_{U_a}^{U_b} = H(sid_{Sa} \| PID_{U_a}) \oplus M_{Ua0}, \qquad R_{U_b}^{U_a} = H(sid_{Sb} \| PID_{U_b}) \oplus M_{Ub0}.$$

S checks whether $\{R_{U_a}^{U_b}, R_{U_b}^{U_a}\}$ satisfy $F(R_{U_a}^{U_b}(R_{U_b}^{U_a})^T) = F(2) = Cont$. If it holds, S learns that both U_a and U_b have the access desires to access each other's authorized data, and to share their authorized data fields with each other. S extracts the keys {sk_S, pk_Ua, pk_Ub} to establish the aggregated keys {k_S, k_Su} by the Diffie-Hellman key agreement, and computes the available re-encryption key k_Uu for U_u (u ∈ {a, b}):

$$k_S = (pk_{U_a} pk_{U_b})^{sk_S} = g^{(sk_{U_a}+sk_{U_b})sk_S}, \qquad k_{S_u} = (pk_{U_u})^{sk_S} = g^{sk_{U_u} sk_S}, \qquad k_{U_u} = k_{S_u}/pk_{U_u}.$$

S performs re-encryption to obtain M'_Uu1. Towards U_a/U_b, S extracts U_b/U_a's temp authorized data fields D̃_Ub/D̃_Ua to compute M'_Ub2/M'_Ua2:

$$M'_{Uu1} = (M_{Uu1})^{k_{U_u}} = g^{k_{S_u} r_{U_u}}, \qquad M'_{Ua2} = M_{Ua2} \cdot E_{k_{S_b}}(\tilde{D}_{U_a}), \qquad M'_{Ub2} = M_{Ub2} \cdot E_{k_{S_a}}(\tilde{D}_{U_b}).$$

Thereafter, S establishes the re-structured ciphertext C'_Uu = (M'_Uu1, M'_Uu2, M_Uu3), and respectively transmits {C'_Ub‖k_S, C'_Ua‖k_S} to {U_a, U_b} for access authority sharing. Upon receiving the messages, U_a computes $k_{S_a} = (pk_S)^{sk_{U_a}}$, and performs verification by comparing the two sides of the equation

$$e(M'_{Ub1}, h) \stackrel{?}{=} e\big(g^{k_S/k_{S_a}}, M_{Ub3}\big). \quad (2)$$

For the left side of (2), $e(M'_{Ub1}, h) = e(g^{k_{S_b} r_{U_b}}, h) = e(g,h)^{k_{S_b} r_{U_b}}$; for the right side, $e(g^{k_S/k_{S_a}}, M_{Ub3}) = e(g^{k_{S_b}}, h^{r_{U_b}}) = e(g,h)^{k_{S_b} r_{U_b}}$, since $k_S/k_{S_a} = k_{S_b} = (pk_S)^{sk_{U_b}}$. U_a then derives U_b's temp authorized data fields D̃_Ub:

$$\tilde{D}_{U_b} = E^{-1}_{k_{S_a}}\Big(M'_{Ub2} \big/ e(M'_{Ub1}, h)^{k_{S_a}/k_S}\Big).$$

Similarly, U_b performs the corresponding operations, including that U_b obtains the keys {k_S, k_Sb}, checks the validity of the re-structured ciphertext, and derives the temp authorized data fields D̃_Ua.
In the SAPA, S acts as a semi-trusted proxy to realize {U_a, U_b}'s access authority sharing. During the proxy re-encryption, {U_a, U_b} respectively establish the ciphertexts {M_Ua1, M_Ub1} by their public keys {pk_Ua, pk_Ub}, and S generates the corresponding re-encryption keys {k_Ua, k_Ub} for {U_a, U_b}. Based on the re-encryption keys, the ciphertexts {M_Ua1, M_Ub1} are re-encrypted into {M'_Ua1, M'_Ub1}, and {U_a, U_b} can decrypt the re-structured ciphertexts {M'_Ub1, M'_Ua1} by their own private keys {sk_Ua, sk_Ub} without revealing any sensitive information.
Till now, {U_a, U_b} have realized the access authority sharing in the case that both U_a and U_b have the access desires on each other's data fields. Meanwhile, there may be other typical cases when U_a has an interest in U_b's data fields with a challenged access request $R_{U_a}^{U_b}$.
1. In the case that U_b has no interest in U_a's data fields, the challenged access requests only satisfy $F(\cdot) = F(1)$. For U_a, S will extract dummy data fields D_null as a response. U_b will be informed that a certain user is interested in its data fields, but cannot determine U_a's detailed identity, for privacy considerations.
2. In the case that U_b has an interest in U_c's data fields rather than U_a's data fields, but U_c has no interest in U_b's data fields, it turns out that the challenged access requests $R_{U_a}^{U_b}$, $R_{U_b}^{U_c}$, and $R_{U_c}^{U_{\tilde{b}}}$ satisfy $F(R_{U_a}^{U_b}(R_{U_b}^{U_c})^T) = F(R_{U_b}^{U_c}(R_{U_c}^{U_{\tilde{b}}})^T) = F(1)$, in which $U_{\tilde{b}}$ indicates that the user is not U_b. D_null will be transmitted to {U_a, U_b, U_c} without data sharing.
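The aggregated keys are plain Diffie-Hellman style exponentiations, so their algebra can be checked with modular arithmetic; the prime, generator, and secret keys below are toy values far below cryptographic size, and the sketch covers only the key-agreement identities, not the encryption E or the pairing e:

p = 2**127 - 1                         # toy prime modulus (not secure)
g = 3                                  # assumed generator for illustration

sk_S, sk_Ua, sk_Ub = 654321, 123457, 987653
pk_S, pk_Ua, pk_Ub = pow(g, sk_S, p), pow(g, sk_Ua, p), pow(g, sk_Ub, p)

k_S  = pow(pk_Ua * pk_Ub % p, sk_S, p)   # k_S = g^((sk_Ua+sk_Ub)*sk_S)
k_Sa = pow(pk_Ua, sk_S, p)               # k_Sa = g^(sk_Ua*sk_S)
k_Sb = pow(pk_Ub, sk_S, p)               # k_Sb = g^(sk_Ub*sk_S)

print(k_Sa == pow(pk_S, sk_Ua, p))       # True: Ua recomputes k_Sa itself
print(k_S == k_Sa * k_Sb % p)            # True: k_S factors as k_Sa * k_Sb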
In summary, the SAPA adopts integrative approaches to address secure authority sharing in cloud applications.
- Authentication. The ciphertext-policy attribute based access control and bilinear pairings are introduced for identification between U_u and S, and only the legal user can derive the ciphertexts. Additionally, U_u checks the re-computed ciphertexts according to the proxy re-encryption, which realizes flexible data sharing instead of publishing the interactive users' secret keys.
- Data anonymity. The pseudonym PID_Uu is hidden by the hash function so that other entities cannot derive the real value by inverse operations. Meanwhile, the temp authorized fields D̃_Uũ are encrypted by k_Su for anonymous data transmission. Hence, an adversary cannot recognize the data; even if the adversary intercepts the transmitted data, it cannot decode the full-fledged cryptographic algorithms.
- User privacy. The access request pointer (e.g., $R_{U_u}^{U_x}$) is wrapped along with H(sid_Su ‖ PID_Uu) for privately informing S about U_u's access desires. Only if both users are interested in each other's data fields will S establish the re-encryption key k_Uu to realize authority sharing between U_a and U_b. Otherwise, S will temporarily reserve the challenged access requests for a certain period of time, and cannot accurately determine which user is actively interested in the other user's data fields.
- Forward security. The dual session identifiers {sid_Su, sid_Uu} and pseudorandom numbers are introduced as session variational operators to keep the communications dynamic. An adversary regards the prior sessions as random even if {S, U_u} get corrupted, or the adversary obtains the PRNG algorithm. The current security compromises cannot be correlated with the prior interrogations.
5 FORMAL SECURITY ANALYSIS WITH THE UNIVERSAL COMPOSABILITY MODEL
5.1 Preliminaries
The universal composability model specifies an approach for security proofs [28], and guarantees that the proofs will remain valid if the protocol is modularly composed with other protocols, and/or under arbitrary concurrent protocol executions. There is a real-world simulation, an ideal-world simulation, and a simulator Sim translating the protocol execution from the real world to the ideal world. Additionally, the Byzantine attack model is adopted for security analysis: all the parties are modeled as probabilistic polynomial-time Turing machines (PPTs), and a PPT captures whatever is external to the protocol executions. The adversary controls message deliveries in all communication channels, may perform malicious attacks (e.g., eavesdropping, forgery, and replay), and may also initiate new communications to interact with the legal parties.
In the real world, let π be a real protocol, P_i (i ∈ {1, ..., I}, I ∈ N*) be real parties, and A be a real-world adversary. In the ideal world, let F be an ideal functionality, P̃_i be dummy parties, and Ã be an ideal-world adversary. Z is an interactive environment, and communicates with all entities except the ideal functionality F. The ideal functionality acts as an uncorruptable trusted party to realize specific protocol functions.
Theorem 1 (UC Security). The probability that Z distinguishes between an interaction of A with P_i and an interaction of Ã with P̃_i is at most a negligible probability.
We say that a real protocol π UC-realizes an ideal functionality F if $\mathrm{Ideal}_{F,\tilde{A},Z} \approx \mathrm{Real}_{\pi,A,Z}$.
The UC formalization of the SAPA includes the ideal-world model Ideal and the real-world model Real.
- Ideal: Define two uncorrupted ideal functionalities {F_access, F_share}, dummy parties P̃ (e.g., Ũ_u, S̃, u ∈ {a, b}), and an ideal adversary Ã. {P̃, Ã} cannot establish direct communications. Ã can arbitrarily interact with Z, and can corrupt any dummy party P̃, but cannot modify the exchanged messages.
- Real: Define a real protocol π_share (run by a party P including U_u and S) with a real adversary A and an environment Z. The real parties can communicate with each other, and A can fully control the interconnections of P to obtain/modify the exchanged messages. During the protocol execution, Z is activated first, and the dual session identifiers shared by all the involved parties reflect the protocol state.
5.2 Ideal Functionality
Definition 1 (Functionality F_access). F_access is an incorruptible ideal data accessing functionality via available channels, as shown in Table 2. (Table 2. Ideal Data Accessing Functionality: F_access.)
In F_access, a party P (e.g., U_u, S) is initialized (via input Initialize), and thereby initiates a new session along with generating the dual session identifiers {sid_Uu, sid_Su}. P follows the assigned protocol procedure to send (via input Send) and receive (via input Receive) messages. A random number r_Pu is generated by P for further computation (via input Generate). Data access control is realized by checking {send(.), rec(.), local(.)} (via input Access). If P is controlled by an ideal adversary Ã, four types of behaviors may be performed: Ã may record the exchanged messages on listened channels, and may forward the intercepted messages to P (via request Forward); Ã may record the state of authentication between U_u and S to interfere in the normal verification (via request Accept); Ã may impersonate a legal party to obtain the full state (via request Forge), and may replay the formerly intercepted messages to involve the ongoing communications (via request Replay).
Definition 2 (Functionality F_share). F_share is an incorruptible ideal authority sharing functionality, as shown in Table 3. (Table 3. Ideal Authority Sharing Functionality: F_share.)
F_share is activated by P (via input Activate), and the initialization is performed via Initialize of F_access. The access request pointers $\{R_{U_a}^{U_b}, R_{U_b}^{U_a}\}$ are respectively published and challenged by {U_a, U_b} to indicate their desires (via input Challenge). The authority sharing between {U_a, U_b} is realized, and the desired data fields {D̃_Ub, D̃_Ua} are accordingly obtained by {U_a, U_b} (via input Share). If P is controlled by an ideal adversary Ã, Ã may detect the exchanged challenged access request pointer $R_{U_u}^{U_x}$ (via request Listen); Ã may record the request pointer to interfere in the normal authority sharing between U_a and U_b (via request Forge/Replay).
In the UC model, F_access and F_share formally define the basic components of the ideal-world simulation.
- Party. Party P refers to multiple users U_u (e.g., U_a, U_b) and a cloud server S involved in a session. Through a successful session execution, {U_u, S} establish authentication and access control, and {U_a, U_b} obtain each other's temp authorized data fields for data authority sharing.
- Session identifier.
The session identifiers sid_Uu and sid_Su are generated for initialization by the environment Z. The ideal adversary Ã may control and corrupt the interactions between U_u and S.
- Access request pointer. The access request pointer $R_{U_u}^{U_x}$ is applied to indicate U_u's access request on U_x's temp authorized data fields D̃_Ux.
5.3 Real Protocol π_share
A real protocol π_share is performed based on the ideal functionalities to realize F_share in the F_access-hybrid model.
Upon input Activate(P) at P (e.g., U_u and S), P is activated via F_share to trigger a new session, in which Initialize of F_access is applied for initialization and assignment. {init(sid_Uu, U_u), init(sid_Su, S)} are respectively obtained by {U_u, S}. Message deliveries are accordingly performed by inputting Send and Receive. Upon input Send from U_u, U_u records and outputs send(sid_Uu, U_u) via F_access. Upon input Receive from S, S obtains rec(sid_Uu, S) via F_access. Upon input Generate(S) from S, S randomly chooses a random number r_Su to output gen(r_Su) and to establish a ciphertext for access control. Upon input Generate(U_u) from U_u, U_u generates a random number r_Uu for further checking the validity of {A_Uu, P_Uu}. Upon input Access from U_u, U_u checks whether {send(.), rec(.), local(.)} are matched via F_access. If it holds, the output valid(A_Uu, P_Uu) is produced; else, output invalid(A_Uu, P_Uu) and terminate the protocol. Upon input Challenge(U_x) from U_u, U_u generates an access request pointer $R_{U_u}^{U_x}$, and outputs chall($R_{U_u}^{U_x}$) to U_x. Upon input Send from U_u, U_u computes a message m_Uu, records and outputs send(m_Uu, U_u) via F_access, in which $R_{U_u}^{U_x}$ is wrapped in m_Uu. Upon input Receive from S, S obtains rec(m_Uu, S) for access request matching. Upon input Share(D̃_Ub, U_a) and Share(D̃_Ua, U_b) from {U_a, U_b}, S checks whether {chall($R_{U_a}^{U_b}$, U_a), chall($R_{U_b}^{U_a}$, U_b)} are matched. If it holds, output share(D̃_Ub, U_a) to U_a and share(D̃_Ua, U_b) to U_b to achieve data sharing; else, output share(D_null, U_a) to U_a and share(D_null, U_b) to U_b for regular data accessing.
5.4 Security Proof of π_share
Theorem 3. The protocol π_share UC-realizes the ideal functionality F_share in the F_access-hybrid model.
Proof: Let A be a real adversary that interacts with the parties running π_share in the F_access-hybrid model. Let Ã be an ideal adversary such that any environment Z cannot distinguish with a non-negligible probability whether it is interacting with A and π_share in Real, or with Ã and F_share in Ideal. It means that there is a simulator Sim that translates the π_share procedures into Real such that these cannot be distinguished by Z.
Construction of the ideal adversary Ã: The ideal adversary Ã acts as Sim to run the simulated copies of Z, A, and P. Ã correlates runs of π_share from Real into Ideal: the interactions of A and P correspond to the interactions of Ã and P̃. The input of Z is forwarded to A as A's input, and the output of A (after running π_share) is copied to Ã as Ã's output.
Simulating the party P.
U_u and S are activated and initialized by Activate and Initialize, and Ã simulates as A during the interactions.
- Whenever Ã obtains {init(sid_Pu, P), gen(r_Pu, P)} from F_access, Ã transmits the messages to A.
- Whenever Ã obtains {rec(.), send(.)} from F_access, Ã transmits the messages to A, and forwards A's response forward(sid_Pu, m_Pu, P) to F_access.
- Whenever Ã obtains {init(.), forward(.)} from F_access, Ã transmits the messages to A, and forwards A's response accept(P) to F_access.
- Whenever Ã obtains chall($R_{U_u}^{U_x}$, U_u) from F_share, Ã transmits the message to A, and forwards A's response listen($R_{U_u}^{U_x}$, U_u) to F_share.
Simulating the party corruption. Whenever P is corrupted by A, Ã corrupts the corresponding P̃. Ã provides A with the corrupted parties' internal states.
- Whenever Ã obtains access(D_Uu) from F_access, Ã transmits the message access(D_Uu) to A, and forwards A's response accept(P) to F_access.
- Whenever Ã obtains chall($R_{U_u}^{U_x}$, U_u) from F_share, Ã transmits the message to A, and forwards A's response share(D_null, U_u) to F_share.
Ideal and Real are indistinguishable: Assume that {C_S, C_Uu} respectively indicate the events of corruption of {S, U_u}. Z invokes Activate and Initialize to launch an interaction. The commands Generate and Access are invoked to transmit access(D_Uu) to Ã, and A responds accept(P) to Ã. Thereafter, Challenge and Share are invoked to transmit share($R_{U_u}^{U_x}$, U_u), and A responds share(D_null, U_u) to Ã. Note that init(.) independently generates the dual session identifiers {sid_Uu, sid_Su}, and the simulations of Real and Ideal are consistent even though Ã may intervene to prevent the data access control and authority sharing in Ideal. The pseudorandom number generator (introduced in {init(.), gen(.)}) and the collision-resistant hash function (introduced in {access(.), share(.)}) guarantee that the probability that the environment Z can distinguish the adversary's behaviors in Ideal and Real is at most negligible, and the simulation holds no matter whether the event C_S or C_Uu occurs or not. Therefore, π_share UC-realizes the ideal functionality F_share in the F_access-hybrid model. □
6 CONCLUSION
In this work, we have identified a new privacy challenge during data accessing in the cloud computing to achieve privacy-preserving access authority sharing. Authentication is established to guarantee data confidentiality and data integrity. Data anonymity is achieved since the wrapped values are exchanged during transmission. User privacy is enhanced by anonymous access requests to privately inform the cloud server about the users' access desires. Forward security is realized by the session identifiers to prevent the session correlation. It indicates that the proposed scheme is possibly applied for privacy preservation in cloud applications.
ACKNOWLEDGMENTS
This work was funded by DNSLAB, China Internet Network Information Center, Beijing 100190, China.
REFERENCES
[28] R. Canetti, "Universally Composable Security: A New Paradigm for Cryptographic Protocols," Proc. 42nd IEEE Symp. Foundations of Computer Science (FOCS '01), pp. 136-145, Oct. 2001.
Her research interestsinclude authentication protocol design, andsecurity formal modeling and analysis. She is astudent member of the IEEE.Huansheng Ning received the BS degree fromAnhui University in 1996 and the PhD degreefrom Beihang University in 2001. He is a professorin the School of Computer and CommunicationEngineering, University of Science andTechnology Beijing, China. His current researchinterests include Internet of Things, aviationsecurity, electromagnetic sensing and computing.He has published more than 50 papers injournals, international conferences/workshops.He is a senior member of the IEEE.250 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 26, NO. 1, JANUARY 2015Qingxu Xiong received the PhD degree in electricalengineering from Peking University, Beijing,China, in 1994. From 1994 to 1997, he worked inthe Information Engineering Department at theBeijing University of Posts and Telecommunicationsas a postdoctoral researcher. He is currentlya professor in the School of Electrical andInformation Engineering at the Beijing Universityof Aeronautics and Astronautics. His researchinterests include scheduling in optical and wirelessnetworks, performance modeling of wirelessnetworks, and satellite communication. He is a member of the IEEE.Laurence T. Yang received the BE degree incomputer science from Tsinghua University,China, and the PhD degree in computer sciencefrom the University of Victoria, Canada. He is aprofessor in the School of Computer Scienceand Technology at the Huazhong University ofScience and Technology, China, and in theDepartment of Computer Science, St. FrancisXavier University, Canada. His research interestsinclude parallel and distributed computing,and embedded and ubiquitous/pervasive computing.His research is supported by the National Sciences and EngineeringResearch Council and the Canada Foundation for Innovation.He is amember of the IEEE.” For more information on this or any other computing topic,please visit our Digital Library at www.computer.org/publications/dlib.LIU ET AL.: SHARED AUTHORITY BASED PRIVACY-PRESERVING AUTHENTICATION PROTOCOL IN CLOUD COMPUTING 251
Security Optimization of Dynamic Networks with Probabilistic Graph Modeling and Linear Programming
Large organizations need rigorous security tools for analyzing potential vulnerabilities in their networks. However, managing large-scale networks with complex configurations is technically challenging. For example, organizational networks are usually dynamic, with frequent configuration changes. These changes may include changes in the availability and connectivity of hosts and other devices, and services added to or removed from the network. Network administrators also need to respond to newly discovered vulnerabilities by applying patches and modifications to the network configuration and security policies, or by utilizing defensive security resources to minimize the risk from external attacks. For instance, to prevent a remote attack targeting a host, it is useful to analyze the candidate defensive strategies in choosing installation and runtime parameters for one or several intrusion prevention systems. To facilitate a scalable security analysis of organizational networks, attack graphs were proposed. Attack graphs show possible attack paths with respect to a particular network setting, and they provide the necessary elements for modeling and improving the security of the network.
Existing work utilizes attack graphs for analyzing security risks by quantifying attack graphs using a variety of techniques, such as Bayesian belief propagation, basic laws of probability, and vertex ranking algorithms. These models lack a systematic and scalable computation of optimized network configurations. Current attack graph quantification models assume a network with known and fixed configurations in terms of the connectivity, availability, and policies of the network services and components, disregarding the dynamic nature of modern networks. Moreover, except for a few attempts, previous work has solely focused on computing a numerical representation of the risk without addressing the more challenging problem of risk management and reduction.
In this paper, we present a rigorous probabilistic model that measures the security risk as the probability of success in an attack. Our probabilistic model, referred to as the success measurement model, has three main features: (i) a rigorous and scalable model with a clear probabilistic semantic, (ii) computation of risk probabilities with the goal of finding the maximum attack capabilities, and (iii) consideration of dynamic network features and the availability of mobile devices in the network. As an application of our success measurement model, we formalize the problem of utilizing network security resources as an optimization problem with the goal of computing an optimal placement of security products across a network. Our new contribution is to define this optimization problem and provide an efficient algorithm based on a standard technique called sequential linear programming. Our algorithm is proven to converge, and it scales to large networks with thousands of components and attack paths.
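As a flavor of the optimization step, the sketch below solves a heavily simplified fractional version with scipy.optimize.linprog: choose placements that maximize total risk reduction under a budget. The coefficients, costs, and budget are invented, and the paper's actual method (sequential linear programming over the quantified attack graph) is considerably richer:

from scipy.optimize import linprog

# One variable per candidate (host, security product) placement.
reduction = [0.30, 0.25, 0.15, 0.40, 0.10]  # risk removed if deployed
cost      = [4.0,  3.0,  1.0,  6.0,  0.5]   # deployment cost
budget    = 8.0

# linprog minimizes, so negate the reductions to maximize them.
res = linprog(c=[-r for r in reduction],
              A_ub=[cost], b_ub=[budget],
              bounds=[(0, 1)] * len(cost))
print("placement:", [round(x, 2) for x in res.x])
print("total risk reduction:", round(-res.fun, 3))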
Our contributions in this paper include:
• A scalable probabilistic model that uses a Bernoulli model to measure the risk in terms of the probability of success to achieve an attack goal (a minimal reading of this model is sketched after this list).
• An efficient security optimization model, generated based on a quantified attack graph, to compute an optimal placement of security products according to organizational and technical constraints.
• Modeling dynamic network features for a realistic and accurate analysis of the risk associated with modern networks.
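One simple reading of the Bernoulli success model in the first bullet, offered as our own illustration rather than the paper's exact equations: each exploit succeeds independently with a fixed probability, and a node is reached when its local exploit succeeds and at least one predecessor has been reached:

# Toy attack graph: preds lists the attack steps leading into each node;
# p_exploit is the Bernoulli success probability of the local exploit.
preds = {"web": [], "app": ["web"], "db": ["app"], "host": ["web", "app"]}
p_exploit = {"web": 0.7, "app": 0.5, "db": 0.9, "host": 0.4}

def p_success(node, memo=None):
    """P(node) = p_exploit(node) * (1 - prod(1 - P(pred))) with OR
    semantics over predecessors; entry nodes need only their exploit.
    (Assumes independence, which real attack paths may violate.)"""
    if memo is None:
        memo = {}
    if node not in memo:
        if not preds[node]:
            memo[node] = p_exploit[node]
        else:
            miss = 1.0
            for u in preds[node]:
                miss *= 1.0 - p_success(u, memo)
            memo[node] = p_exploit[node] * (1.0 - miss)
    return memo[node]

for v in preds:
    print(v, round(p_success(v), 3))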
The results of our experiments confirm
three key properties of our model. First, the vulnerability values computed
from our model are accurate. Our manual inspection of the results confirms that
the probability values obtained in the experiments correlate to the
vulnerabilities of components in the network. Second, our security improvement
method efficiently finds the optimal placement of security products subject to
constraints. Third, we quantify the additional vulnerabilities introduced by
mobile devices of a dynamic network. Our results indicate that an infected
mobile device within the trusted region creates a preferred attack direction
towards the attack target, which increases the chance of success at the target
host. Our implementation efficiently computes the probabilities throughout
large attack graphs with a quadratic execution performance.
1.3 LITERATURE SURVEY
DYNAMIC SECURITY RISK MANAGEMENT USING BAYESIAN ATTACK GRAPHS
AUTHOR: N. Poolsappasit, R. Dewri, and I. Ray
PUBLISH: IEEE Transactions on Dependable and Secure Computing, vol. 9, no. 1, pp. 61–74, Jan 2012.
EXPLANATION:
Security risk assessment and mitigation
are two vital processes that need to be executed to maintain a productive IT
infrastructure. On one hand, models such as attack graphs and attack trees have
been proposed to assess the cause-consequence relationships between various
network states, while on the other hand, different decision problems have been
explored to identify the minimum-cost hardening measures. However, these risk
models do not help reason about the causal dependencies between network states.
Further, the optimization formulations ignore the issue of resource
availability while analyzing a risk model. In this paper, we propose a risk
management framework using Bayesian networks that enable a system administrator
to quantify the chances of network compromise at various levels. We show how to
use this information to develop a security mitigation and management plan. In
contrast to other similar models, this risk model lends itself to dynamic
analysis during the deployed phase of the network. A multi-objective
optimization platform provides the administrator with all trade-off information
required to make decisions in a resource constrained environment.
TIME-EFFICIENT AND COST EFFECTIVE NETWORK HARDENING USING ATTACK GRAPHS
AUTHOR: M. Albanese, S. Jajodia, and S. Noel
PUBLISH: Dependable Systems and Networks (DSN), 2012 42nd Annual IEEE/IFIP International Conference on, June 2012
EXPLANATION:
Attack graph analysis has been established as a powerful tool for analyzing network vulnerability. However, previous approaches to network hardening look for exact solutions and thus do not scale. Further, hardening elements have been treated independently, which is inappropriate for real environments. For example, the cost for patching many systems may be nearly the same as for patching a single one. Or patching a vulnerability may have the same effect as blocking traffic with a firewall, while blocking a port may deny legitimate service. By failing to account for such hardening interdependencies, the resulting recommendations can be unrealistic and far from optimal. Instead, we formalize the notion of hardening strategy in terms of allowable actions, and define a cost model that takes into account the impact of interdependent hardening actions. We also introduce a near-optimal approximation algorithm that scales linearly with the size of the graphs, which we validate experimentally.
MINIMUM-COST NETWORK HARDENING USING ATTACK GRAPHS
AUTHOR: L. Wang, S. Noel, and S. Jajodia
PUBLISH: Computer Communications, vol. 29, no. 18, pp. 3812–3824, Nov. 2006. [Online]. Available: http://dx.doi.org/10.1016/j.comcom.2006.06.018
EXPLANATION:
In defending one’s network against cyber attack, certain vulnerabilities may seem acceptable risks when considered in isolation. But an intruder can often infiltrate a seemingly well-guarded network through a multi-step intrusion, in which each step prepares for the next. Attack graphs can reveal the threat by enumerating possible sequences of exploits that can be followed to compromise given critical resources. However, attack graphs do not directly provide a solution to remove the threat. Finding a solution by hand is error-prone and tedious, particularly for larger and less secure networks whose attack graphs are overly complicated. In this paper, we propose a solution to automate the task of hardening a network against multi-step intrusions. Unlike existing approaches whose solutions require removing exploits, our solution is comprised of initially satisfied conditions only. Our solution is thus more enforceable, because the initial conditions can be independently disabled, whereas exploits are usually consequences of other exploits and hence cannot be disabled without removing the causes. More specifically, we first represent given critical resources as a logic proposition of initial conditions. We then simplify the proposition to make hardening options explicit. Among the options we finally choose solutions with the minimum cost. The key improvements over the preliminary version of this paper include a formal framework of the minimum network hardening problem, and an improved one-pass algorithm in deriving the logic proposition while avoiding logic loops.
CHAPTER 2
2.0 SYSTEM ANALYSIS
2.1 EXISTING SYSTEM:
Existing work utilizes attack graphs for analyzing security risks by quantifying attack graphs using a variety of techniques, such as Bayesian belief propagation, basic laws of probability, and vertex ranking algorithms. These models lack a systematic and scalable computation of optimized network configurations. Current attack graph quantification models assume a network with known and fixed configurations in terms of the connectivity, availability, and policies of the network services and components, disregarding the dynamic nature of modern networks. Moreover, except for a few attempts, previous work has solely focused on computing a numerical representation of the risk without addressing the more challenging problem of risk management and reduction.
Security risk assessment and mitigation are two vital processes that need to be executed to maintain a productive IT infrastructure. On one hand, models such as attack graphs and attack trees have been proposed to assess the cause-consequence relationships between various network states, while on the other hand, different decision problems have been explored to identify the minimum-cost hardening measures. However, these risk models do not help reason about the causal dependencies between network states.
Further, the optimization formulations ignore the issue of resource availability while analyzing a risk model. A risk management framework using Bayesian networks enables a system administrator to quantify the chances of network compromise at various levels and to use this information to develop a security mitigation and management plan. In contrast to other similar models, this risk model lends itself to dynamic analysis during the deployed phase of the network. A multi-objective optimization platform provides the administrator with all trade-off information required to make decisions in a resource-constrained environment.
2.1.1 DISADVANTAGES:
- Except for a few attempts, previous work has solely focused on computing a numerical representation of the risk without addressing the more challenging problem of risk management and reduction.
- Previous models assume a network with known and fixed configurations in terms of the connectivity, availability, and policies of the network services and components, disregarding the dynamic nature of modern networks.
- None of the previous work considers the effect of device availability in open networks. Furthermore, the computation of optimized network configurations and improvements, as studied in our work, has not been previously addressed.
- Bayesian methods are powerful in computing unobserved facts, such as predicting possible threats, but it remains unclear how Bayesian methods can be used to support variability in an attacker’s decisions, device availability, and the effect of mobile devices.
2.2 PROPOSED SYSTEM:
We present a rigorous probabilistic model that measures the security risk as the probability of success in an attack. Our new contribution is to define this optimization problem and provide an efficient algorithm based on a standard technique called sequential linear programming. Our algorithm is proved to converge and it is scalable to large networks with thousands of components and attack paths.
Our experiments confirm three key properties of our model. First, the vulnerability values computed from our model are accurate. Our manual inspection of the results confirms that the probability values obtained in the experiments correlate to the vulnerabilities of components in the network. Second, our security improvement method efficiently finds the optimal placement of security products subject to constraints. Third, we quantify the additional vulnerabilities introduced by mobile devices of a dynamic network. Our results indicate that an infected mobile device within the trusted region creates a preferred attack direction towards the attack target, which increases the chance of success at the target host. Our implementation efficiently computes the probabilities throughout large attack graphs with quadratic execution performance.
2.2.1 ADVANTAGES:
Our probabilistic model, referred to as the success measurement model, has the following main features:
- A rigorous and scalable model with a clear probabilistic semantics, and computation of risk probabilities with the goal of finding the maximum attack capabilities.
- Efficient security optimization model, generated based on a quantified attack graph, to compute an optimal placement of security products according to organizational and technical constraints.
- Consideration of dynamic network features and the availability of mobile devices in the network. As an application of our success measurement model, we formalize the problem of utilizing network security resources as an optimization problem with the goal of computing an optimal placement of security products across a network.
- Modeling of dynamic network features for a realistic and accurate analysis of the risk associated with modern networks.
2.3 HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1 HARDWARE REQUIREMENTS:
- Processor – Pentium IV
- Speed – 1.1 GHz
- RAM – 256 MB (min)
- Hard Disk – 20 GB
- Floppy Drive – 1.44 MB
- Key Board – Standard Windows Keyboard
- Mouse – Two or Three Button Mouse
- Monitor – SVGA
2.3.2 SOFTWARE REQUIREMENTS:
- Operating System : Windows XP or Win7
- Front End : JAVA JDK 1.7
- Back End : MS-Access 2007
- Document : MS-Office 2007
CHAPTER 3
3.0 SYSTEM DESIGN:
Data Flow Diagram / Use Case Diagram / Flow Diagram:
- The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
- The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
- DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
- A DFD may be used to represent a system at any level of abstraction. A DFD may be partitioned into levels that represent increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION OF DATA:
External sources or destinations, which may be people or organizations or other entities
DATA STORE:
Here the data referenced by a process is stored and retrieved.
PROCESS:
People, procedures, or devices that produce data. The physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.
There are several common modeling rules when creating DFDs:
- All processes must have at least one data flow in and one data flow out.
- All processes should modify the incoming data, producing new forms of outgoing data.
- Each data store must be involved with at least one data flow.
- Each external entity must be involved with at least one data flow.
- A data flow must be attached to at least one process.
3.1 ARCHITECTURE DIAGRAM
3.2 DATAFLOW DIAGRAM
UML DIAGRAMS:
3.3 USE CASE DIAGRAM:
3.4 CLASS DIAGRAM:
3.5 SEQUENCE DIAGRAM:
3.6 ACTIVITY DIAGRAM:
CHAPTER 4
4.0 IMPLEMENTATION:
ECSA ATTACK MODEL
Our probabilistic quantification model, referred to as the success measurement model, quantifies the vulnerabilities of networked components and resources by computing the expected chance of successful attack (ECSA) at every attack step, where each step is represented by an attack graph node. Our security improvement model uses the computed probabilities from the success measurement model to find optimal security defense strategies given a set of available options. The success measurement model requires three sets of inputs: a set of attack steps, a set of network configurations and potential vulnerabilities, and a set of ground facts. The first set includes the steps necessary to execute a targeted attack in a network.
These steps represent intermediate attack goals, such as compromising a machine that has internal connectivity with a targeted server. In addition, the attack steps also describe the various parallel choices available to an attacker when achieving a specific target. The second set includes the network configuration and vulnerability data that collectively provide host software installations, inter-host connectivity, running services and connections, and known or potential software vulnerabilities. The third set contains the ground fact values that describe the vulnerability, availability, and connectivity of various network configurations.
In our implementation, the first two sets of inputs (i.e., the attack steps and the network configuration data) are taken from dependency attack graphs. System administrators use vulnerability assessment tools to explore the configurations and vulnerability data in their networks. The output of such an assessment is provided as input to attack graph generation tools. Attack graph generation tools (such as MulVAL) often include customized predefined attack step rules that are applied to the configuration and vulnerability data of a network and produce a plain (that is, not quantified) attack graph.
Our model then develops a set of ground fact values that bootstrap the computation of success probabilities throughout an attack graph. The output of the computation based on our success measurement model is the input to the security optimization model (Figure 1). Using the security improvement model, we transform the quantified attack graph from the success measurement model into a mathematical program.
The resulting mathematical program includes an additional set of data that represent various network security defense strategies. In the tool that we developed, the security administrators simply feed this information as logical predicates such as ips_installed(T, E), which describes a potential installation of an intrusion prevention system of type T and security effectiveness E. The effectiveness value E is a score estimated by the system administrator based on prior experiences and available effectiveness data.
We present our success measurement model to compute the expected chance of a successful attack on a network with respect to the attack’s ultimate goal. We first present the definitions of the expected chance of a successful attack (ECSA), followed by the description of an efficient method to compute ECSA values. Our success measurement model computes probabilities as a function of initial belief probabilities without the need for specifying the conditional probabilities required by Bayes’ theorem. Our model measures the success of an attacker based on the attack dependencies determined by a logical attack graph.
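To make this concrete, the following is a minimal sketch (not the authors’ exact procedure) of how ECSA values could be propagated bottom-up through a logical attack graph whose leaves are ground fact nodes carrying initial belief probabilities. The class names, the noisy-OR treatment of disjunctive nodes, and the assumption of an acyclic graph are all illustrative choices, not details taken from the paper.

import java.util.List;

// Sketch: bottom-up ECSA propagation through a logical attack graph.
abstract class AttackNode {
    abstract double ecsa();          // expected chance of successful attack
}

class FactNode extends AttackNode { // ground fact with an initial belief value
    final double belief;
    FactNode(double belief) { this.belief = belief; }
    double ecsa() { return belief; }
}

class AndNode extends AttackNode {  // all preconditions must hold
    final List<AttackNode> children;
    AndNode(List<AttackNode> children) { this.children = children; }
    double ecsa() {
        double p = 1.0;
        for (AttackNode c : children) p *= c.ecsa();
        return p;
    }
}

class OrNode extends AttackNode {   // any one attack choice suffices
    final List<AttackNode> children;
    OrNode(List<AttackNode> children) { this.children = children; }
    double ecsa() {                 // noisy-OR under an independence assumption
        double pAllFail = 1.0;
        for (AttackNode c : children) pAllFail *= 1.0 - c.ecsa();
        return 1.0 - pAllFail;
    }
}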
4.1 ALGORITHM
GNU LINEAR PROGRAMMING KIT
We implemented a tool for our computational procedures (Section 4.3) in Java (with approximately 3,500 lines of code). We use GLPK (the GNU Linear Programming Kit), a well-known open-source linear programming API, for our SLP-based procedure. Our tool parses an attack graph input file (obtained from MulVAL), computes the ECSA values according to various parameters, and performs security improvement analysis based on a set of improvement options and constraints.
We demonstrate the performance of our implementation. For each graph, we repeat the corresponding experiment to measure the time to compute the final expected chance of a successful attack at the graph’s root vertex. We compute ECSA values for the target graphs using our tool, run as a single-threaded program on a machine with a 2.4 GHz Intel Core i7 processor and 8 GB of DDR3 memory. All our experiments converged within at most 20 iterations towards the solution. On average, 87.99% of the execution time for Procedure 2 is spent on the Taylor expansion, of which on average 78.27% is spent on symbolic differentiation performed using the DJep Java library for symbolic operations. The Taylor expansion is parallelizable and scales with the number of vertices, and hence can be done efficiently offline.
SLP LINEAR ALGORITHM
For a network configuration w, let Gw be the corresponding attack graph. The complete procedure to compute the ECSA values of nodes (Definition 2) for an attack graph (Definition 1) is given next. To prepare the attack graph for computation, we execute the following procedure. Our procedure is based on a technique called sequential linear programming (SLP). SLP is a standard technique for solving nonlinear optimization problems that is computationally efficient and converges to an optimal solution.
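The sketch below illustrates the general shape of an SLP loop: at each iteration the nonlinear objective is linearized around the current point via a first-order Taylor expansion, the resulting linear program is solved, and the loop repeats until the iterates stop moving. The solveLinearizedProgram helper is a hypothetical stand-in for a call to an LP solver such as GLPK, and the Objective interface is an illustrative assumption; this is not the authors’ code.

// Sketch of a sequential linear programming (SLP) iteration.
public final class SlpSolver {

    interface Objective {
        double value(double[] x);       // nonlinear objective f(x)
        double[] gradient(double[] x);  // gradient of f at x
    }

    static double[] solve(Objective f, double[] x0, double tol, int maxIter) {
        double[] x = x0.clone();
        for (int k = 0; k < maxIter; k++) {
            // Linearize f around x (first-order Taylor expansion) and solve
            // the resulting LP within a trust region (hypothetical call).
            double[] xNext = solveLinearizedProgram(f.value(x), f.gradient(x), x);
            double step = 0.0;
            for (int i = 0; i < x.length; i++) step += Math.abs(xNext[i] - x[i]);
            x = xNext;
            if (step < tol) break;      // iterates converged
        }
        return x;
    }

    private static double[] solveLinearizedProgram(double fx, double[] grad, double[] x) {
        // Placeholder: delegate to an LP solver (e.g., GLPK) in a real tool.
        throw new UnsupportedOperationException("LP solver call goes here");
    }
}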
4.2 MODULES:
NETWORK SECURITY:
PROBABILISTIC MODEL:
GENERATING ATTACK GRAPH:
SECURITY OPTIMIZATION:
4.3 MODULE DESCRIPTION:
NETWORK SECURITY:
Network-accessible decoy resources may be deployed in a network as surveillance and early-warning tools; since these resources are not normally accessed for legitimate purposes, any interaction with them signals a potential attacker. Techniques used by attackers attempting to compromise these decoy resources are studied during and after an attack to keep an eye on new exploitation techniques, and such analysis may be used to further tighten the security of the actual network being protected. Decoy resources can also direct an attacker’s attention away from legitimate servers: the defender encourages attackers to spend their time and energy on the decoy server while distracting their attention from the data on the real server. Similar to a decoy server, a decoy user account is set up with intentional vulnerabilities; its purpose is also to invite attacks so that the attacker’s methods can be studied and that information can be used to increase network security.
PROBABILISTIC MODEL:
Our probabilistic model, referred to as the success measurement model, has three main features: (i) a rigorous and scalable model with a clear probabilistic semantics, (ii) computation of risk probabilities with the goal of finding the maximum attack capabilities, and (iii) consideration of dynamic network features and the availability of mobile devices in the network.
Our probabilistic quantification model, referred to as the success measurement model, quantifies the vulnerabilities of networked components and resources by computing the expected chance of successful attack (ECSA) at every attack step, where each step is represented by an attack graph node. Our security improvement model uses the computed probabilities from the success measurement model to find optimal security defense strategies given a set of available options.
A key requirement of probabilistic risk assessment is to accurately capture attack step dependencies and correlations. Attack dependencies in the form of attack preconditions are intrinsically captured by our model, because we base our analysis on attack graphs that are formed based on the dependency relations among the nodes. Therefore, the probabilities of success are computed by considering the dependency relations determined in an attack graph.
- The focus of our experiments is to demonstrate the practicality, feasibility, and accuracy of the model.
- Our experiments include novel features such as analyzing networks with less studied but potentially vulnerable devices such as mobile devices and networked printers. To the best of our knowledge, the experiments in the network analysis literature lack this level of detail.
- Our model will give system administrators a solid analysis of the security of their networks that will assist in the actual implementation of security features to reduce the possibility of a successful attack.
GENERATING ATTACK GRAPH:
An attack graph may have goal nodes whose dependency is a logical disjunction. In reality, this disjunction indicates that there are multiple attack choices for an attacker towards a specific attack goal. For instance, consider a server that has a local privilege escalation vulnerability (which is exploitable remotely in a multi-step attack) and runs a network service with multiple remote vulnerabilities. An attacker must exploit one (or more) of these vulnerabilities to gain privileges on the target server. In the absence of observable evidence, one needs to compute the ECSA of a goal node with a function that correctly captures the probabilities of such attack choices. Our approach is to computationally determine attack choice probabilities according to various attack patterns.
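As an illustration, one natural way to aggregate such attack choices is a noisy-OR over the enabling steps; this closed form assumes the choices are independent and is given here only as an example, not necessarily the exact function derived in this work:

\[
P(g) \;=\; 1 - \prod_{i=1}^{n}\bigl(1 - P(e_i)\bigr)
\]

where each P(e_i) is the ECSA value of the i-th attack step enabling goal g. The product term is the probability that every available choice fails, so its complement is the chance that at least one choice succeeds.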
SECURITY OPTIMIZATION:
To achieve our main research goal of reducing the probability of success in an attack, and thus optimizing the overall security of the network, we point out the necessity of modeling this problem as an optimization problem. Further, we model an important feature: the availability of machines in the network. In this section we describe these two contributions of our work as summarized below.
To optimize the security of a network given a set of security hardening products (e.g., a host-based firewall), we compute an optimal distribution of these resources subject to given placement constraints. Using the rigorous probabilistic model introduced in Section 4.1, this is the first work in which a logical attack graph (Definition 1) is transformed into a system of linear and nonlinear equations with the global objective of reducing the probability of success of the graph’s ultimate attack goal. This transformation is performed efficiently, and it naturally and directly captures our research goal.
Machine availability and the effect of mobile devices:
Our work is the first to show how to represent and assess devices with variable availability (frequently joining and leaving the network), which is one of the characteristics of mobile devices with variable connectivity. When allocating resources for hardening an organizational network, it is important to install a single security hardening product, or a combination of them, so that the expected chance of a successful attack on the network is minimized. To find the best placement of a set of security products in a network, we extend the attack graph to define a security product as a special fact node, referred to as an improvement node, which is a fact node that represents a security hardening product, service, practice, or policy. The objective of solving the problem of optimal placement of security products is to compute the effects of various placements of one or more improvement nodes subject to certain constraints, and to choose the placement that minimizes the attack goal’s ECSA value.
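As a simple illustration of this objective (the actual system solves a mathematical program rather than enumerating placements), the sketch below exhaustively tries each candidate host for a single improvement node and keeps the placement with the lowest goal ECSA. The Graph interface and its methods are illustrative assumptions, not the authors’ API.

import java.util.List;

// Sketch: brute-force placement of one improvement node, minimizing goal ECSA.
class PlacementSearch {

    interface Graph {
        double goalEcsa();                                   // ECSA at the ultimate attack goal
        void setImprovement(int host, double effectiveness); // install a product at a host
        void clearImprovement(int host);                     // undo the installation
    }

    // Returns the candidate host whose hardening yields the lowest goal ECSA.
    static int bestPlacement(Graph g, List<Integer> candidateHosts, double effectiveness) {
        int best = -1;
        double bestEcsa = Double.MAX_VALUE;
        for (int host : candidateHosts) {
            g.setImprovement(host, effectiveness);
            double e = g.goalEcsa();
            if (e < bestEcsa) { bestEcsa = e; best = host; }
            g.clearImprovement(host);
        }
        return best;
    }
}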
CHAPTER 5
5.0 SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
The three key considerations involved in the feasibility analysis are:
- ECONOMICAL FEASIBILITY
- TECHNICAL FEASIBILITY
- SOCIAL FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, and the expenditures must be justified. The developed system is well within budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make the user familiar with it. The user’s level of confidence must be raised so that he or she is also able to make constructive criticism, which is welcomed, as he or she is the final user of the system.
5.2 SYSTEM TESTING:
Testing is the process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes the logical assumption that if all the parts of the system are correct, the overall goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.
This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best program is worthless if it does not produce the correct outputs.
5.2.1 UNIT TESTING:
Description | Expected result |
Test for application window properties. | All the properties of the windows are to be properly aligned and displayed. |
Test for mouse operations. | All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions. |
A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.
5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.
Description | Expected result |
Test for all modules. | All peers should communicate in the group. |
Test for various peers in a distributed network framework as it displays all users available in the group. | The result after execution should give the accurate result. |
5.2.3 NON-FUNCTIONAL TESTING:
Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:
- Load testing
- Performance testing
- Usability testing
- Reliability testing
- Security testing
5.2.4 LOAD TESTING:
An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under real usage by having actual users connected to it, who generate the test input data for the system test.
Description | Expected result |
It is necessary to ascertain that the application behaves correctly under load when a ‘Server busy’ response is received. | Should designate another active node as a server. |
5.2.5 PERFORMANCE TESTING:
Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.
Description | Expected result |
This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; this is an aspect of operational management. | Should handle large input values and produce accurate results in the expected time. |
5.2.6 RELIABILITY TESTING:
Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and this is what is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms a part of the software quality control effort.
Description | Expected result |
This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application. | In case of failure of the server, an alternate server should take over the job. |
5.2.7 SECURITY TESTING:
Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.
Description | Expected result |
Checking that the user identification is authenticated. | In case of failure, the user should not be connected to the framework. |
Check whether group keys in a tree are shared by all peers. | The peers in the same group should know the group key. |
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.
Description | Expected result |
Exercise all logical decisions on their true and false sides. | All the logical decisions must be valid. |
Execute all loops at their boundaries and within their operational bounds. | All the loops must be finite. |
Exercise internal data structures to ensure their validity. | All the data structures must be valid. |
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques; rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.
Description | Expected result |
To check for incorrect or missing functions. | All the functions must be valid. |
To check for interface errors. | The entire interface must function normally. |
To check for errors in data structures or external database access. | Database updates and retrieval must be performed correctly. |
To check for initialization and termination errors. | All the functions and data structures must be initialized properly and terminated normally. |
All of the above system testing strategies are carried out during development, as the documentation and institutionalization of the proposed goals and related policies are essential.
CHAPTER 6
6.0 SOFTWARE DESCRIPTION:
6.1 JAVA TECHNOLOGY:
Java technology is both a programming language and a platform.
The Java Programming Language
The Java programming language is a high-level language that can be characterized by all of the following buzzwords:
- Simple
- Architecture neutral
- Object oriented
- Portable
- Distributed
- High performance
- Interpreted
- Multithreaded
- Robust
- Dynamic
- Secure
With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes — the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.
You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.
6.2 THE JAVA PLATFORM:
A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.
The Java platform has two components:
- The Java Virtual Machine (Java VM)
- The Java Application Programming Interface (Java API)
You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.
The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.
Native code is code that, after compilation, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.
6.3 WHAT CAN JAVA TECHNOLOGY DO?
The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.
However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.
An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.
A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.
How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:
- The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
- Applets: The set of conventions used by applets.
- Networking: URLs, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
- Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
- Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
- Software components: Known as JavaBeans™, these can plug into existing component architectures.
- Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
- Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.
6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?
We can’t promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and requires less effort than other languages. We believe that Java technology will help you do the following:
- Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
- Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
- Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
- Develop programs more quickly: Your development time may be as much as twice as fast versus writing the same program in C++. Why? You write fewer lines of code and it is a simpler programming language than C++.
- Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure JavaTM Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
- Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
- Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.
6.5 ODBC:
Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.
Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.
The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program, and each maintains a separate list of ODBC data sources.
From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.
The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.
6.6 JDBC:
In an effort to set an independent database standard API for Java; Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.
The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.
6.7 JDBC Goals:
Few software packages are designed without goals in mind. JDBC is one that, because of its many goals, drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The eight design goals for JDBC are as follows:
SQL Level API
The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.
SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.
JDBC must be implementable on top of common database interfaces
The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.
- Provide a Java interface that is consistent with the rest of the Java system
Because of Java’s acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.
- Keep it simple
This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.
- Use strong, static typing wherever possible
Strong typing allows for more error checking to be done at compile time; also, fewer errors appear at runtime.
- Keep the common cases simple
Because more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
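As a brief illustration of these common cases, the following is a minimal JDBC sketch using parameterized statements. The jdbc:odbc DSN name and the users table are placeholders; with the MS-Access back end and JDK 1.7 listed in the requirements, such a DSN would be configured through the ODBC Administrator and reached via the JDK’s JDBC-ODBC bridge driver.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        // JDK 1.7-era JDBC-ODBC bridge driver (removed in later JDKs).
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        try (Connection con = DriverManager.getConnection("jdbc:odbc:SampleDSN")) {
            // A simple parameterized INSERT.
            try (PreparedStatement ins =
                     con.prepareStatement("INSERT INTO users(name) VALUES (?)")) {
                ins.setString(1, "alice");
                ins.executeUpdate();
            }
            // A simple SELECT, iterating over the result set.
            try (PreparedStatement sel = con.prepareStatement("SELECT name FROM users");
                 ResultSet rs = sel.executeQuery()) {
                while (rs.next()) System.out.println(rs.getString("name"));
            }
        }
    }
}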
Finally, we decided to proceed with the implementation using Java Networking.
For dynamically updating the cache table, we use an MS Access database.
Java has two things: a programming language and a platform.
Java is a high-level programming language that is all of the following:
- Simple
- Architecture-neutral
- Object-oriented
- Portable
- Distributed
- High-performance
- Interpreted
- Multithreaded
- Robust
- Dynamic
- Secure
Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes — platform-independent code instructions that are passed to and run on the computer.
Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.
6.8 NETWORKING TCP/IP STACK:
The TCP/IP stack is shorter than the OSI one:
TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.
IP datagrams:
The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.
UDP:
UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to give a client/server model – see later.
TCP:
TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.
Internet addresses
In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32-bit integer known as the IP address.
Network address:
Class A uses 8 bits for the network address with 24 bits left over for other addressing. Class B uses 16 bit network addressing. Class C uses 24 bit network addressing and class D uses all 32.
Subnet address:
Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.
Host address:
8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.
Total address:
The 32 bit address is usually written as 4 integers separated by dots.
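For example, a small illustrative helper in Java that renders a 32-bit address in this dotted notation:

public class DottedQuad {
    // Extract each of the four bytes and join them with dots.
    static String toDotted(int addr) {
        return ((addr >>> 24) & 0xFF) + "." + ((addr >>> 16) & 0xFF) + "."
             + ((addr >>> 8) & 0xFF) + "." + (addr & 0xFF);
    }

    public static void main(String[] args) {
        System.out.println(toDotted(0xC0A80001)); // prints 192.168.0.1
    }
}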
Port addresses
A service exists on a host, and is identified by its port. This is a 16 bit number. To send a message to a server, you send it to the port for that service of the host that it is running on. This is not location transparency! Certain of these ports are “well known”.
Sockets:
A socket is a data structure maintained by the system
to handle network connections. A socket is created using the call socket
. It returns an integer that is like a file descriptor.
In fact, under Windows, this handle can be used with Read File
and Write File
functions.
#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);
Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
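Since the implementation itself uses Java Networking (as noted earlier in this chapter), here is a minimal client-side TCP sketch using java.net.Socket rather than the C API above; the host name, port, and message are placeholders and assume a line-oriented server is listening.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class TcpClient {
    public static void main(String[] args) throws Exception {
        // Connect to a (placeholder) server; the Socket is one end of the "pipe".
        try (Socket socket = new Socket("localhost", 9090)) {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println("hello");              // send one line to the server
            System.out.println(in.readLine()); // read the server's reply
        }
    }
}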
6.9 JFREECHART:
JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:
- A consistent and well-documented API, supporting a wide range of chart types;
- A flexible design that is easy to extend, and targets both server-side and client-side applications;
- Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG).
JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.
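A minimal usage sketch, assuming the JFreeChart 1.0 API of this era; the chart title, axis labels, and data values are placeholders:

import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.data.category.DefaultCategoryDataset;

public class ChartDemo {
    public static void main(String[] args) throws Exception {
        // Build a small dataset: one series ("ECSA") over two categories.
        DefaultCategoryDataset dataset = new DefaultCategoryDataset();
        dataset.addValue(42.0, "ECSA", "Host A");
        dataset.addValue(17.0, "ECSA", "Host B");

        // Create a bar chart and write it out as a PNG file.
        JFreeChart chart = ChartFactory.createBarChart(
                "Sample Chart", "Host", "Value",
                dataset, PlotOrientation.VERTICAL, true, true, false);
        ChartUtilities.saveChartAsPNG(new File("chart.png"), chart, 640, 480);
    }
}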
6.9.1 Map Visualizations:
Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: Sourcing freely redistributable vector outlines for the countries of the world, states/provinces in particular countries (USA in particular, but also other areas);
Creating an appropriate dataset interface (plus a default implementation) and a renderer, and integrating these with the existing XYPlot class in JFreeChart; testing, documenting, testing some more, documenting some more.
6.9.2 Time Series Chart Interactivity
Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.
6.9.3 Dashboards
There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.
6.9.4 Property Editors
The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.
CHAPTER 8
8.1 CONCLUSION & FUTURE WORK:
In this work we formalized, implemented, and evaluated a new probabilistic model for measuring the security threats in large enterprise networks. The novelty of our work is the ability to quantitatively analyze the chance of successful attack in the presence of uncertainties about the configuration of a dynamic network and routes of potential attacks.
The results of our experiments confirm three key properties of our model. First, the vulnerability values computed from our model are accurate. Our manual inspection of the results confirms that the probability values obtained in the experiments correlate to the vulnerabilities of components in the network. Second, our security improvement method efficiently finds the optimal placement of security products subject to constraints. Third, we quantify the additional vulnerabilities introduced by mobile devices of a dynamic network.
Our results indicate that an infected mobile device within the trusted region creates a preferred attack direction towards the attack target, which increases the chance of success at the target host. Our implementation efficiently computes the probabilities throughout large attack graphs with a quadratic execution performance.
For future work, we plan to utilize and extend our success measurement model and optimal security placement algorithm to solve more complex network security optimization problems. For instance, an important issue is noise elimination in the initial set of belief values; solving this problem will lead to the production of more accurate results.