
Enabling Fine-grained Multi-keyword Search Supporting Classified Sub-dictionaries over Encrypted Cloud Data

H. Li and Y. Yang are with the School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China (e-mail: hongweili@uestc.edu.cn; yangyi.buku@gmail.com). H. Li is also with the State Key Laboratory of Information Security (Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093). T. Luan is with the School of Information Technology, Deakin University, Melbourne, Australia (e-mail: tom.luan@deakin.edu.au). X. Liang is with the Department of Computer Science, Dartmouth College, Hanover, USA (e-mail: Xiaohui.Liang@dartmouth.edu). L. Zhou is with the National Key Laboratory of Science and Technology on Communication, University of Electronic Science and Technology of China (e-mail: lzhou@uestc.edu.cn). X. Shen is with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada (e-mail: sshen@uwaterloo.ca).

Abstract—Using cloud computing, individuals can store their data on remote servers and allow data access to public users through the cloud servers. As the outsourced data are likely to contain sensitive privacy information, they are typically encrypted before being uploaded to the cloud. This, however, significantly limits the usability of outsourced data due to the difficulty of searching over the encrypted data. In this paper, we address this issue by developing fine-grained multi-keyword search schemes over encrypted cloud data. Our original contributions are three-fold. First, we introduce relevance scores and preference factors of keywords, which enable precise keyword search and a personalized user experience. Second, we develop a practical and very efficient multi-keyword search scheme. The proposed scheme can support complicated logic search, i.e., mixed "AND", "OR" and "NO" operations on keywords. Third, we further employ the classified sub-dictionaries technique to achieve better efficiency in index building, trapdoor generating and query. Lastly, we analyze the security of the proposed schemes in terms of confidentiality of documents, privacy protection of index and trapdoor, and unlinkability of trapdoor. Through extensive experiments using a real-world dataset, we validate the performance of the proposed schemes. Both the security analysis and the experimental results demonstrate that the proposed schemes achieve the same security level as the existing ones and better performance in terms of functionality, query complexity and efficiency.

Index Terms—Searchable encryption, Multi-keyword, Fine-grained, Cloud computing.

1 INTRODUCTION

Cloud computing treats computing as a utility and leases out computing and storage capacities to the public [1], [2], [3]. In such a framework, an individual can remotely store her data on the cloud server, namely data outsourcing, and then make the cloud data open for public access through the cloud server. This represents a more scalable, low-cost and stable way for public data access because of the scalability and high efficiency of cloud servers, and is therefore favorable to small enterprises.

Note that the outsourced data may contain sensitive privacy information.
It is often necessary to encrypt the private data before transmitting them to the cloud servers [4], [5]. The data encryption, however, would significantly lower the usability of the data due to the difficulty of searching over the encrypted data [6]. Simply encrypting the data may still cause other security concerns. For instance, Google Search uses SSL (Secure Sockets Layer) to encrypt the connection between the search user and the Google server when private data, such as documents and emails, appear in the search results [7]. However, if the search user clicks through to another website from the search results page, that website may be able to identify the search terms that the user has used.

To address the above issues, searchable encryption (e.g., [8], [9], [10]) has recently been developed as a fundamental approach to enable searching over encrypted cloud data, which proceeds with the following operations. Firstly, the data owner generates several keywords according to the outsourced data. These keywords are then encrypted and stored at the cloud server. When a search user needs to access the outsourced data, she selects some relevant keywords and sends the ciphertext of the selected keywords to the cloud server. The cloud server then uses the ciphertext to match the outsourced encrypted keywords, and lastly returns the matching results to the search user. To achieve search efficiency and precision over encrypted data similar to that of plaintext keyword search, an extensive body of research has been developed in the literature. Wang et al. [11] propose a ranked keyword search scheme which considers the relevance scores of keywords. Unfortunately, because it uses order-preserving encryption (OPE) [12] to achieve the ranking property, the proposed scheme cannot achieve unlinkability of trapdoor. Later, Sun et al. [13] propose a multi-keyword text search scheme which considers the relevance scores of keywords and utilizes a multidimensional tree technique to achieve efficient search query. Yu et al. [14] propose a multi-keyword top-k retrieval scheme which uses fully homomorphic encryption to encrypt the index/trapdoor and guarantees high security. Cao et al. [6] propose a multi-keyword ranked search (MRSE) scheme, which applies coordinate matching as the keyword matching rule, i.e., it returns the data with the most matching keywords.

Although many search functionalities have been developed in previous literature towards precise and efficient searchable encryption, it remains difficult for searchable encryption to achieve the same user experience as plaintext search, like Google Search. This is mainly attributed to the following two issues. Firstly, query with user preferences is very popular in plaintext search [15], [16]. It enables personalized search and can more accurately represent a user's requirements, but it has not been thoroughly studied and supported in the encrypted data domain.
Secondly, to further improve the user's search experience, an important and fundamental function is to enable multi-keyword search with comprehensive logic operations, i.e., the "AND", "OR" and "NO" operations on keywords. This is fundamental for search users to prune the search space and quickly identify the desired data. Cao et al. [6] propose the coordinate matching search scheme (MRSE), which can be regarded as a searchable encryption scheme with the "OR" operation. Zhang et al. [17] propose a conjunctive keyword search scheme, which can be regarded as a searchable encryption scheme with the "AND" operation, with the returned documents matching all keywords. However, most existing proposals can only enable search with a single logic operation, rather than a mixture of multiple logic operations on keywords, which motivates our work.

In this work, we address the above two issues by developing two Fine-grained Multi-keyword Search (FMS) schemes over encrypted cloud data. Our original contributions can be summarized in three aspects as follows:

- We introduce the relevance scores and the preference factors of keywords for searchable encryption. The relevance scores of keywords enable more precise returned results, and the preference factors of keywords represent the importance of keywords in the search keyword set specified by the search user and correspondingly enable personalized search catering to specific user preferences. This further improves the search functionalities and the user experience.
- We realize the "AND", "OR" and "NO" operations in multi-keyword search for searchable encryption. Compared with the schemes in [6], [13] and [14], the proposed scheme achieves more comprehensive functionality and lower query complexity.
- We employ the classified sub-dictionaries technique to enhance the efficiency of the above two schemes. Extensive experiments demonstrate that the enhanced schemes achieve better efficiency in terms of index building, trapdoor generating and query in comparison with the schemes in [6], [13] and [14].

The remainder of this paper is organized as follows. In Section 2, we outline the system model, threat model, security requirements and design goals. In Section 3, we describe the preliminaries of the proposed schemes. We present the developed schemes and the enhanced schemes in detail in Section 4 and Section 5, respectively. Then we carry out the security analysis and performance evaluation in Section 6 and Section 7, respectively. Section 8 provides a review of the related works and Section 9 concludes the paper.

2 SYSTEM MODEL, THREAT MODEL AND SECURITY REQUIREMENTS

2.1 System Model

As shown in Fig. 1, we consider a system consisting of three entities.

- Data owner: The data owner outsources her data to the cloud for convenient and reliable data access by the corresponding search users. To protect the data privacy, the data owner encrypts the original data through symmetric encryption. To improve the search efficiency, the data owner generates some keywords for each outsourced document. The corresponding index is then created according to the keywords and a secret key.
  After that, the data owner sends the encrypted documents and the corresponding indexes to the cloud, and sends the symmetric key and the secret key to the search users.
- Cloud server: The cloud server is an intermediate entity which stores the encrypted documents and the corresponding indexes received from the data owner, and provides data access and search services to search users. When a search user sends a keyword trapdoor to the cloud server, it returns a collection of matching documents based on certain operations.
- Search user: A search user queries the outsourced documents from the cloud server with the following three steps. First, the search user receives both the secret key and the symmetric key from the data owner. Second, according to the search keywords, the search user uses the secret key to generate a trapdoor and sends it to the cloud server. Last, she receives the matching document collection from the cloud server and decrypts the documents with the symmetric key.

Fig. 1. System model

2.2 Threat Model and Security Requirements

In our threat model, the cloud server is assumed to be "honest-but-curious", which is the same as in most related works on secure cloud data search [13], [14], [6]. Specifically, the cloud server honestly follows the designated protocol specification. However, the cloud server could be "curious" to infer and analyze the data (including the index) in its storage and the message flows received during the protocol so as to learn additional information. We consider two threat models depending on the information available to the cloud server, which are also used in [13], [6].

- Known Ciphertext Model: The cloud server can only know the encrypted document collection C and the index collection I, which are both outsourced from the data owner.
- Known Background Model: The cloud server can possess more knowledge than what can be accessed in the known ciphertext model, such as the correlation relationship of trapdoors and related statistical information, i.e., the cloud server can possess statistical information from a known comparable dataset which bears a similar nature to the targeted dataset.

Similar to [13], [6], we assume search users are trusted entities, and they share the same symmetric key and secret key. Search users have pre-existing mutual trust with the data owner. For ease of illustration, we do not consider the secure distribution of the symmetric key and the secret key between the data owner and the search users; it can be achieved through regular authentication and secure channel establishment protocols based on the prior security context shared between the search users and the data owner [18]. In addition, to keep our presentation focused, we do not consider the following issues: the access control problem of managing the decryption capabilities given to users, and the data collection's updating problem of inserting new documents, updating existing documents, and deleting existing documents. These are separate issues.
Interested readers may refer to [6], [5], [10], [19] for the above issues.

Based on the above threat model, we define the security requirements as follows:

- Confidentiality of documents: The outsourced documents provided by the data owner are stored in the cloud server. If they match the search keywords, they are sent to the search user. Due to the privacy of documents, they should not be identifiable except by the data owner and the authorized search users.
- Privacy protection of index and trapdoor: As discussed in Section 2.1, the index and the trapdoor are created based on the documents' keywords and the search keywords, respectively. If the cloud server identifies the content of the index or the trapdoor, and further deduces any association between keywords and encrypted documents, it may learn the major subject of a document, even the content of a short document [20]. Therefore, the content of the index and the trapdoor must not be identifiable by the cloud server.
- Unlinkability of trapdoor: The documents stored in the cloud server may be searched many times. The cloud server should not be able to learn any keyword information from the trapdoors, e.g., to determine that two trapdoors originate from the same keywords. Otherwise, the cloud server could deduce relationships between trapdoors and threaten the privacy of keywords. Hence the trapdoor generation function should be randomized rather than deterministic: even if two search keyword sets are the same, the trapdoors should be different.

3 PRELIMINARIES

In this section, we define the notation and review the secure kNN computation and the relevance score, which will serve as the basis of the proposed schemes.

3.1 Notation

- F — the document collection to be outsourced, denoted as a set of N documents F = (F_1, F_2, ..., F_N).
- C — the encrypted document collection of F, denoted as a set of N documents C = (C_1, C_2, ..., C_N).
- FID — the identity collection of the encrypted documents C, denoted as FID = (FID_1, FID_2, ..., FID_N).
- W — the keyword dictionary, including m keywords, denoted as W = (w_1, w_2, ..., w_m).
- I — the index stored in the cloud server, built from the keywords of each document, denoted as I = (I_1, I_2, ..., I_N).
- W̃ — the query keyword set generated by a search user, which is a subset of W.
- T_W̃ — the trapdoor for the keyword set W̃.
- F̃ID — the identity collection of the documents returned to the search user.
- FMS(CS) — the abbreviation covering both FMS and FMSCS.

3.2 Secure kNN Computation

We adopt the work of Wong et al. [21] as our foundation. Wong et al. propose a secure k-nearest neighbor (kNN) scheme which can confidentially encrypt two vectors and compute the Euclidean distance between them. Firstly, the secret key (S, M_1, M_2) is generated. The binary vector S is a splitting indicator that splits a plaintext vector into two random vectors, which confuses the value of the plaintext vector; M_1 and M_2 are used to encrypt the split vectors. The correctness and security of the secure kNN computation scheme can be found in [21].
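To make the splitting-and-encryption step concrete, the following Python sketch is a minimal illustration of the inner-product-preserving core used later in this paper (the Euclidean-distance part of [21] is omitted). All function names and parameter choices here are our own assumptions, not code from [21]: a data vector is encrypted with (M_1, M_2) and a query vector with their inverses, and the dot product is preserved.

```python
import numpy as np

def keygen(dim, seed=0):
    """Secret key (S, M1, M2): a binary splitting indicator and two random invertible matrices."""
    rng = np.random.default_rng(seed)
    S = rng.integers(0, 2, size=dim)
    M1 = rng.random((dim, dim)) + np.eye(dim)   # random matrices like this are invertible w.h.p.
    M2 = rng.random((dim, dim)) + np.eye(dim)
    return S, M1, M2

def split(vec, S, rng, data_side):
    """Split vec into (va, vb) with va + vb = vec on the randomized positions.
    A data vector is split where S[i] = 1; a query vector is split where S[i] = 0."""
    va, vb = vec.astype(float).copy(), vec.astype(float).copy()
    for i, s in enumerate(S):
        if (s == 1) == data_side:               # choose which side gets the random split
            t = rng.random()
            va[i], vb[i] = t, vec[i] - t
    return va, vb

def enc_data(p, S, M1, M2, rng):
    pa, pb = split(p, S, rng, data_side=True)
    return pa @ M1, pb @ M2                     # index form (pa*M1, pb*M2)

def enc_query(q, S, M1, M2, rng):
    qa, qb = split(q, S, rng, data_side=False)
    return np.linalg.inv(M1) @ qa, np.linalg.inv(M2) @ qb   # trapdoor form (M1^-1*qa, M2^-1*qb)

rng = np.random.default_rng(1)
S, M1, M2 = keygen(5)
p = np.array([1, 0, 1, 1, 0])
q = np.array([2, 3, 0, 1, 5])
ia, ib = enc_data(p, S, M1, M2, rng)
ta, tb = enc_query(q, S, M1, M2, rng)
print(ia @ ta + ib @ tb, p @ q)                 # the two inner products agree (up to float error)
```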
3.3 Relevance Score

The relevance score between a keyword and a document reflects the frequency with which the keyword appears in the document. It can be used in searchable encryption for returning ranked results. A prevalent metric for evaluating the relevance score is TF × IDF, where TF (term frequency) represents the frequency of a given keyword in a document and IDF (inverse document frequency) represents the importance of the keyword within the whole document collection. Without loss of generality, we select a widely used expression from [22] to evaluate the relevance score:

\[
\mathrm{Score}(\widetilde{W}, F_j) = \sum_{w \in \widetilde{W}} \frac{1}{|F_j|}\,(1 + \ln f_{j,w}) \cdot \ln\!\Big(1 + \frac{N}{f_w}\Big) \tag{1}
\]

where f_{j,w} denotes the TF of keyword w in document F_j; f_w denotes the number of documents containing keyword w; N denotes the number of documents in the collection; and |F_j| denotes the length of F_j, obtained by counting the number of indexed keywords.

4 PROPOSED SCHEMES

In this section, we first propose a variant of the secure kNN computation scheme, which serves as the basic framework of our schemes. We then describe two variants of this basic framework and their corresponding functionalities in detail.

4.1 Basic Framework

The secure kNN computation scheme uses the Euclidean distance to select the k nearest database records. In this section, we present a variant of the secure kNN computation scheme to achieve the searchable encryption property.

4.1.1 Initialization

The data owner randomly generates the secret key K = (S, M_1, M_2), where S is an (m+1)-dimensional binary vector, M_1 and M_2 are two (m+1) × (m+1) invertible matrices, and m is the number of keywords in W. Then the data owner sends (K, sk) to the search users through a secure channel, where sk is the symmetric key used to encrypt the documents outsourced to the cloud server.

4.1.2 Index building

The data owner first utilizes a symmetric encryption algorithm (e.g., AES) to encrypt the document collection (F_1, F_2, ..., F_N) with the symmetric key sk [23]; the encrypted documents are denoted as C_j (j = 1, 2, ..., N). Then, for each document, the data owner generates an m-dimensional binary vector P, where each bit P[i] indicates whether the document contains the keyword w_i, i.e., P[i] = 1 indicates yes and P[i] = 0 indicates no. She then extends P to an (m+1)-dimensional vector P', where P'[m+1] = 1. The data owner uses the vector S to split P' into two (m+1)-dimensional vectors (p_a, p_b), where S functions as a splitting indicator. Namely, if S[i] = 0 (i = 1, 2, ..., m+1), p_a[i] and p_b[i] are both set to P'[i]; if S[i] = 1 (i = 1, 2, ..., m+1), the value of P'[i] is randomly split into p_a[i] and p_b[i] such that P'[i] = p_a[i] + p_b[i]. The index of the encrypted document C_j is then calculated as I_j = (p_a M_1, p_b M_2). Finally, the data owner sends C_j || FID_j || I_j (j = 1, 2, ..., N) to the cloud server. A sketch of this step is given below.
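The following Python sketch is our own illustration of the index building step of Section 4.1.2 (names such as `build_index` are assumptions, not from the paper). It builds the per-document index I_j = (p_a M_1, p_b M_2) from a keyword set, using the binary-vector Basic Framework; FMS I would simply replace the 0/1 entries with rounded relevance scores.

```python
import numpy as np

def keygen(m, seed=0):
    """Secret key K = (S, M1, M2) over dimension m + 1."""
    rng = np.random.default_rng(seed)
    S = rng.integers(0, 2, size=m + 1)
    M1 = rng.random((m + 1, m + 1)) + np.eye(m + 1)
    M2 = rng.random((m + 1, m + 1)) + np.eye(m + 1)
    return S, M1, M2

def build_index(doc_keywords, dictionary, S, M1, M2, rng):
    """Index I_j = (pa*M1, pb*M2) for one document."""
    m = len(dictionary)
    P = np.array([1.0 if w in doc_keywords else 0.0 for w in dictionary])
    P_ext = np.append(P, 1.0)                    # extended vector P' with P'[m+1] = 1
    pa, pb = P_ext.copy(), P_ext.copy()
    for i in range(m + 1):
        if S[i] == 1:                            # split randomly so that pa[i] + pb[i] = P'[i]
            t = rng.random()
            pa[i], pb[i] = t, P_ext[i] - t
    return pa @ M1, pb @ M2

dictionary = ["cloud", "encryption", "search", "privacy"]
S, M1, M2 = keygen(len(dictionary))
rng = np.random.default_rng(1)
I1 = build_index({"cloud", "privacy"}, dictionary, S, M1, M2, rng)
I2 = build_index({"search"}, dictionary, S, M1, M2, rng)
print(I1[0].shape, I2[0].shape)                  # each half is an (m+1)-dimensional vector
```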
4.1.3 Trapdoor generating

The search user first generates the keyword set W̃ for searching. She then creates an m-dimensional binary vector Q according to W̃, where Q[i] indicates whether the i-th keyword w_i of the dictionary is in W̃, i.e., Q[i] = 1 indicates yes and Q[i] = 0 indicates no. Further, the search user extends Q to an (m+1)-dimensional vector Q', where Q'[m+1] = −s (the value of s will be defined for each scheme below). Next, the search user chooses a random number r > 0 and generates Q'' = r · Q'. She then splits Q'' into two (m+1)-dimensional vectors (q_a, q_b): if S[i] = 0 (i = 1, 2, ..., m+1), the value of Q''[i] is randomly split into q_a[i] and q_b[i]; if S[i] = 1 (i = 1, 2, ..., m+1), q_a[i] and q_b[i] are both set to Q''[i]. The search trapdoor T_W̃ is then generated as (M_1^{-1} q_a, M_2^{-1} q_b), and the search user sends T_W̃ to the cloud server.

4.1.4 Query

With the index I_j (j = 1, 2, ..., N) and the trapdoor T_W̃, the cloud server calculates the query result as

\[
R_j = I_j \cdot T_{\widetilde{W}} = (p_a M_1,\, p_b M_2) \cdot (M_1^{-1} q_a,\, M_2^{-1} q_b)
    = p_a \cdot q_a + p_b \cdot q_b = P' \cdot Q'' = r\, P' \cdot Q' = r \cdot (P \cdot Q - s) \tag{2}
\]

If R_j > 0, the corresponding document identity FID_j will be returned.

Discussions: The Basic Framework defines the fundamental system structure of the developed schemes. Based on the secure kNN computation scheme [21], the complementary random parameter r further enhances the security. Different values of the parameter s and of the vectors P and Q lead to new variants of the Basic Framework, which are elaborated in the following sections; a sketch of the query decision rule is given below.
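Since Eq. (2) shows that the encrypted computation collapses to R_j = r(P · Q − s), the decision rule itself can be sketched on the plaintext vectors. The following Python snippet is our own toy example (the threshold s, the vectors and the document identities are illustrative only); it returns the identities of documents with R_j > 0.

```python
import random

def query(indexes, Q, s):
    """Return the FIDs of documents whose score exceeds the threshold:
    R_j = r * (P_j . Q - s) > 0, with a fresh random r > 0 per query."""
    r = random.uniform(1, 10)                   # hides the magnitude, keeps the sign
    results = []
    for fid, P in indexes.items():
        R = r * (sum(p * q for p, q in zip(P, Q)) - s)
        if R > 0:
            results.append(fid)
    return results

# Toy dictionary of 4 keywords; P_j are the per-document keyword vectors.
indexes = {"FID1": [1, 0, 1, 1], "FID2": [0, 1, 0, 0], "FID3": [1, 1, 1, 0]}
Q = [1, 0, 1, 0]              # query on keywords 1 and 3
s = 1                         # require more than one matching keyword
print(query(indexes, Q, s))   # -> ['FID1', 'FID3']
```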
4.2 FMS I

In the Basic Framework, P is an m-dimensional binary vector, and each bit P[i] indicates whether the document contains the keyword w_i. In FMS I, the data owner first calculates the relevance score between the keyword w_i and the document F_j as

\[
\mathrm{Score}(w_i, F_j) = \frac{1}{|F_j|}\,(1 + \ln f_{j,w_i}) \cdot \ln\!\Big(1 + \frac{N}{f_{w_i}}\Big) \tag{3}
\]

where f_{j,w_i} denotes the TF of keyword w_i in document F_j; f_{w_i} denotes the number of documents containing keyword w_i; N denotes the number of documents in the collection; and |F_j| denotes the length of F_j, obtained by counting the number of indexed keywords.

The data owner then replaces the value of P[i] with the corresponding relevance score. On the other hand, we also consider the preference factors of keywords. The preference factors indicate the importance of the keywords in the search keyword set as personally defined by the search user. A search user may pay more attention to the preference factors of keywords defined by herself than to the relevance scores of the keywords. Thus, our goal is the following: if a document contains a keyword with a larger preference factor than any keyword of another document, it should have a higher priority in the returned F̃ID; and for two documents whose largest-preference-factor keywords are the same, the document with the higher relevance score for that keyword is the better matching result.

As shown in Fig. 2, we replace the values of P[i] and Q[i] by the relevance score and the preference factor of a keyword, respectively (thus P and Q are no longer binary). The search user can dynamically adjust the preference factors to achieve a more flexible search. For convenience, the score is rounded up, i.e., Score(w_i, F_j) = ⌈10 · Score(w_i, F_j)⌉, and we assume the rounded relevance score is less than D, i.e., Score(w_i, F_j) < D. For the search keyword set W̃ = (w_{n_1}, w_{n_2}, ..., w_{n_l}) (1 ≤ n_1 < n_2 < ... < n_l ≤ m), which is ordered by ascending importance, the search user randomly chooses a super-increasing sequence (d_1 > 0, d_2, ..., d_l) (i.e., Σ_{i=1}^{j−1} d_i · D < d_j for j = 2, 3, ..., l), where d_i is the preference factor of keyword w_{n_i}. Then the search result is

\[
R_j = r \cdot (P \cdot Q - s) = r \cdot \Big( \sum_{i=1}^{l} \mathrm{Score}(w_{n_i}, F_j)\, d_i - s \Big) \tag{4}
\]

Fig. 2. Structure of the FMS I

Theorem 1: (Correctness) For the search keyword set W̃ = (w_{n_1}, w_{n_2}, ..., w_{n_l}) (1 ≤ n_1 < n_2 < ... < n_l ≤ m), ordered by ascending preference factors, if F_1 contains a keyword with a larger preference factor than any keyword of F_2 in W̃, then F_1 has higher priority in the returned F̃ID.

Proof: For the search keyword set W̃ = (w_{n_1}, w_{n_2}, ..., w_{n_l}), assume the keyword sets that F_1 and F_2 contain in W̃ are W̃_1 = (w_{n_i}, ..., w_{n_x}) (n_1 ≤ n_i < ... < n_x ≤ n_l) and W̃_2 = (w_{n_j}, ..., w_{n_y}) (n_1 ≤ n_j < ... < n_y ≤ n_l), respectively, where W̃_1 and W̃_2 are both ordered by ascending preference factors and n_x > n_y. As stated above, Score(w_{n_x}, F_1) ≥ 1 since the score is rounded up, and Σ_{i=1}^{j−1} d_i · D < d_j (j = 2, 3, ..., l). Therefore,

\[
\begin{aligned}
R_2 &= r \cdot \Big( \textstyle\sum_{w_{n_j} \in \widetilde{W}_2} \mathrm{Score}(w_{n_j}, F_2)\, d_j - s \Big)
     \le r \cdot \Big( \textstyle\sum_{j=1}^{y} \mathrm{Score}(w_{n_j}, F_2)\, d_j - s \Big) \\
    &< r \cdot \Big( \textstyle\sum_{j=1}^{y} D\, d_j - s \Big)
     < r \cdot (d_x - s)
     \le r \cdot \big( \mathrm{Score}(w_{n_x}, F_1)\, d_x - s \big) \\
    &\le r \cdot \Big( \textstyle\sum_{w_{n_i} \in \widetilde{W}_1} \mathrm{Score}(w_{n_i}, F_1)\, d_i - s \Big) = R_1
\end{aligned} \tag{5}
\]

Therefore, F_1 has higher priority in the returned F̃ID.

Theorem 2: (Correctness) For the search keyword set W̃ = (w_{n_1}, w_{n_2}, ..., w_{n_l}) (1 ≤ n_1 < n_2 < ... < n_l ≤ m), ordered by ascending preference factors, if the largest-preference-factor keyword contained in F_1 is the same as that contained in F_2, and F_1 has the higher relevance score for that keyword, then F_1 has higher priority in the returned F̃ID.

Proof: For the search keyword set W̃ = (w_{n_1}, w_{n_2}, ..., w_{n_l}), assume the keyword sets that F_1 and F_2 contain are W̃_1 = (w_{n_i}, ..., w_{n_x}) (n_1 ≤ n_i < ... < n_x ≤ n_l) and W̃_2 = (w_{n_j}, ..., w_{n_x}) (n_1 ≤ n_j < ... < n_x ≤ n_l), respectively, where W̃_1 and W̃_2 are both ordered by ascending preference factors and Score(w_{n_x}, F_1) − Score(w_{n_x}, F_2) ≥ 1. Thus,

\[
R_1 = r \cdot \Big( \textstyle\sum_{w_{n_i} \in \widetilde{W}_1} \mathrm{Score}(w_{n_i}, F_1)\, d_i - s \Big)
    \ge r \cdot \big( \mathrm{Score}(w_{n_x}, F_1)\, d_x - s \big) \tag{7}
\]

\[
\begin{aligned}
R_2 &= r \cdot \Big( \textstyle\sum_{w_{n_j} \in \widetilde{W}_2} \mathrm{Score}(w_{n_j}, F_2)\, d_j - s \Big) \\
    &= r \cdot \Big( \mathrm{Score}(w_{n_x}, F_2)\, d_x + \textstyle\sum_{w_{n_j} \in \widetilde{W}_2 \setminus \{w_{n_x}\}} \mathrm{Score}(w_{n_j}, F_2)\, d_j - s \Big) \\
    &\le r \cdot \Big( \mathrm{Score}(w_{n_x}, F_2)\, d_x + \textstyle\sum_{w_{n_j} \in \widetilde{W}_2 \setminus \{w_{n_x}\}} D\, d_j - s \Big) \\
    &< r \cdot \big( \mathrm{Score}(w_{n_x}, F_2)\, d_x + d_x - s \big)
\end{aligned} \tag{8}
\]

\[
R_1 - R_2 > r \cdot \big( (\mathrm{Score}(w_{n_x}, F_1) - \mathrm{Score}(w_{n_x}, F_2))\, d_x - d_x \big) \ge r \cdot (d_x - d_x) = 0 \tag{9}
\]

Therefore, F_1 has higher priority in the returned F̃ID than F_2.

Example. We present a concrete example to help understand Theorem 2; it also illustrates the working process of FMS I. Specifically, we assume that the search keyword set is W̃ = (w_{n_1}, w_{n_2}, ..., w_{n_5}), and that the largest-preference-factor keyword of both F_1 and F_2 is w_{n_4}. In addition, we assume the keyword sets of F_1 and F_2 are W̃_1 = (w_{n_2}, w_{n_3}, w_{n_4}) and W̃_2 = (w_{n_1}, w_{n_3}, w_{n_4}), respectively. Furthermore, we assume that the relevance score is not more than D = 5 and, in particular, let Score(w_{n_4}, F_1) = 4 and Score(w_{n_4}, F_2) = 2, which satisfy Score(w_{n_4}, F_1) − Score(w_{n_4}, F_2) = 2 ≥ 1. We randomly choose the super-increasing sequence d_i = {1, 10, 60, 500, 3000} (i = 1, ..., 5). For arbitrary r > 0,

\[
\begin{aligned}
R_1 &= r \cdot \Big( \textstyle\sum_{w_{n_i} \in \widetilde{W}_1} \mathrm{Score}(w_{n_i}, F_1)\, d_i - s \Big)
     \ge r \cdot \big( \mathrm{Score}(w_{n_4}, F_1)\, d_4 - s \big) \\
    &= r \cdot (4 \cdot 500 - s) = r \cdot (2000 - s)
\end{aligned} \tag{11}
\]

\[
\begin{aligned}
R_2 &= r \cdot \Big( \textstyle\sum_{w_{n_j} \in \widetilde{W}_2} \mathrm{Score}(w_{n_j}, F_2)\, d_j - s \Big) \\
    &= r \cdot \Big( \mathrm{Score}(w_{n_4}, F_2)\, d_4 + \textstyle\sum_{w_{n_j} \in \widetilde{W}_2 \setminus \{w_{n_4}\}} \mathrm{Score}(w_{n_j}, F_2)\, d_j - s \Big) \\
    &< r \cdot \big( \mathrm{Score}(w_{n_4}, F_2)\, d_4 + d_4 - s \big)
     = r \cdot (2 \cdot 500 + 500 - s) = r \cdot (1500 - s)
\end{aligned} \tag{12}
\]

\[
R_1 - R_2 > r \cdot (2000 - s) - r \cdot (1500 - s) = 500\, r > 0 \tag{13}
\]
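To complement the example above, the following Python sketch is our own illustration (the TF-IDF helper corresponds to Eq. (3); all names and the sample scores are assumptions). It rounds relevance scores as ⌈10·Score⌉, weights them with a super-increasing preference sequence, and ranks documents by R_j as in Eq. (4).

```python
import math

def rounded_score(tf, df, N, doc_len):
    """Rounded TF-IDF relevance score: Eq. (3) scaled by 10 and rounded up."""
    if tf == 0:
        return 0
    score = (1 / doc_len) * (1 + math.log(tf)) * math.log(1 + N / df)
    return math.ceil(10 * score)

def super_increasing(l, D):
    """Preference factors where each d_j exceeds D times the sum of all earlier terms."""
    d = [1]
    for _ in range(l - 1):
        d.append(sum(d) * D + 1)
    return d

def rank(scores_per_doc, d, s, r=1.0):
    """scores_per_doc: {FID: [rounded score of each query keyword, ascending importance]}."""
    R = {fid: r * (sum(sc * di for sc, di in zip(scores, d)) - s)
         for fid, scores in scores_per_doc.items()}
    return sorted((fid for fid, v in R.items() if v > 0), key=lambda f: -R[f])

D = 5                                   # upper bound on the rounded scores
d = super_increasing(3, D)              # e.g. [1, 6, 36]; any super-increasing choice works
docs = {"F1": [0, 3, 4],                # F1 contains the most-preferred keyword with score 4
        "F2": [2, 1, 2],                # F2 contains it with score 2
        "F3": [4, 0, 0]}                # F3 only contains the least-preferred keyword
print(rank(docs, d, s=1))               # -> ['F1', 'F2', 'F3']
```

The output order illustrates both theorems: F1 and F2 share the most-preferred keyword, so the higher score wins (Theorem 2), while F3 lacks the high-preference keywords and is ranked last (Theorem 1).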
4.3 FMS II

In FMS II, we do not change the vector P of the Basic Framework, but replace the value of Q[i] by the weight of the search keywords, as shown in Fig. 3. With keyword weights, we can implement operations like the "OR", "AND" and "NO" of Google Search in searchable encryption.

Fig. 3. Structure of the FMS II

Assume that the keyword sets corresponding to the "OR", "AND" and "NO" operations are (w'_1, w'_2, ..., w'_{l_1}), (w''_1, w''_2, ..., w''_{l_2}) and (w'''_1, w'''_2, ..., w'''_{l_3}), respectively. Denote the "OR", "AND" and "NO" operations by ∨, ∧ and ¬, respectively. The matching rule can then be represented as (w'_1 ∨ w'_2 ∨ ... ∨ w'_{l_1}) ∧ (w''_1 ∧ w''_2 ∧ ... ∧ w''_{l_2}) ∧ (¬w'''_1 ∧ ¬w'''_2 ∧ ... ∧ ¬w'''_{l_3}). For the "OR" operation, the search user chooses a super-increasing sequence (a_1 > 0, a_2, ..., a_{l_1}) (i.e., Σ_{k=1}^{j−1} a_k < a_j for j = 2, ..., l_1) to achieve searching with keyword weights. To enable searchable encryption with the "AND" and "NO" operations, the search user chooses a sequence (b_1, b_2, ..., b_{l_2}, c_1, c_2, ..., c_{l_3}), where Σ_{k=1}^{l_1} a_k < b_h (h = 1, 2, ..., l_2) and Σ_{k=1}^{l_1} a_k + Σ_{h=1}^{l_2} b_h < c_i (i = 1, 2, ..., l_3).

Assume (w'_1, w'_2, ..., w'_{l_1}) are ordered by ascending importance. Then, according to the search keyword set (w'_1, ..., w'_{l_1}, w''_1, ..., w''_{l_2}, w'''_1, ..., w'''_{l_3}), the corresponding values in Q are set to (a_1, a_2, ..., a_{l_1}, b_1, b_2, ..., b_{l_2}, −c_1, −c_2, ..., −c_{l_3}); all other values in Q are set to 0. Finally, the search user sets s = Σ_{h=1}^{l_2} b_h. In the Query phase, for a document F_j, if the corresponding R_j > 0, we claim that F_j satisfies the above matching rule.

Theorem 3: (Correctness) F_j satisfies the above matching rule with "OR", "AND" and "NO" if and only if the corresponding R_j > 0.

Proof: Firstly, we prove the completeness. Since the weight of w'''_i (i = 1, 2, ..., l_3) in the vector Q is −c_i and c_i > Σ_{k=1}^{l_1} a_k + Σ_{h=1}^{l_2} b_h, if any corresponding value of w'''_i in the vector P of F_j is 1, we can infer P · Q < 0 and hence R_j = r · (P · Q − s) < 0. Therefore, if R_j > 0, none of the w'''_i is in the keyword set of F_j, i.e., F_j satisfies the "NO" operation. Moreover, if R_j > 0, then r · (P · Q − s) = r · (P · Q − Σ_{h=1}^{l_2} b_h) > 0. Since b_h > Σ_{k=1}^{l_1} a_k (h = 1, 2, ..., l_2), all corresponding values of the w''_h in P must be 1 and at least one corresponding value of the w'_k (k = 1, 2, ..., l_1) in P must be 1. Thus, F_j satisfies the "AND" and "OR" operations.
Therefore, if R_j > 0, the vector P satisfies the "OR", "AND" and "NO" operations.

Next, we show the soundness. If the vector P satisfies the "OR", "AND" and "NO" operations, i.e., at least one corresponding value of a keyword w'_k in P is 1 (assume this keyword is w'_α, 1 ≤ α ≤ l_1), all corresponding values of the keywords w''_h in P are 1, and no corresponding value of a keyword w'''_i in P is 1, then R_j = r · (P · Q − s) ≥ r · (a_α + b_1 + b_2 + ... + b_{l_2} − s) = r · a_α > 0.

Example. We present a concrete example to help understand Theorem 3; it also illustrates the working process of FMS II. Specifically, we assume that the keyword sets corresponding to the "OR", "AND" and "NO" operations are (w'_1, w'_2, w'_3), (w''_1, w''_2, w''_3) and (w'''_1, w'''_2), respectively. Thus, the matching rule can be represented as (w'_1 ∨ w'_2 ∨ w'_3) ∧ (w''_1 ∧ w''_2 ∧ w''_3) ∧ (¬w'''_1 ∧ ¬w'''_2). We assume that the search weights (a_1, a_2, a_3), (b_1, b_2, b_3) and (−c_1, −c_2) for "OR", "AND" and "NO" are (1, 5, 8), (20, 24, 96) and (−500, −600), respectively.

We first show that R_j > 0 when F_j satisfies the matching rule. Specifically, assume that F_j satisfies the matching rule as w'_2 ∧ (w''_1 ∧ w''_2 ∧ w''_3) ∧ (¬w'''_1 ∧ ¬w'''_2). The corresponding values of the vector P are then (0, 1, 0), (1, 1, 1) and (0, 0), respectively, and s = Σ_{h=1}^{3} b_h = 20 + 24 + 96 = 140. For arbitrary r > 0,

\[
R_j = r \cdot (P \cdot Q - s) = r \cdot (a_2 + b_1 + b_2 + b_3 - s) = r \cdot (5 + 20 + 24 + 96 - 140) = 5r > 0 \tag{14}
\]

From the above, we can easily see that R_j > 0 when F_j satisfies the matching rule. Next, we show that R_j < 0 when F_j does not satisfy the matching rule. In particular, we assume that the "AND" operation is not satisfied: the first "AND" keyword does not match, so the corresponding values of P for the "AND" keywords are (0, 1, 1) instead of (1, 1, 1). Then

\[
R_j = r \cdot (P \cdot Q - s) = r \cdot (a_2 + b_2 + b_3 - s) = r \cdot (5 + 24 + 96 - 140) = -15r < 0 \tag{15}
\]

Obviously, R_j < 0 when F_j does not satisfy the matching rule. A short code sketch of this weight assignment follows.
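The following Python sketch is our own illustration (names like `build_weights` and the toy keyword vectors are assumptions). It assigns the OR/AND/NO weights exactly as above and evaluates the sign of R_j for a few keyword vectors, reproducing the behavior of Eqs. (14) and (15).

```python
def build_weights(l1, l2, l3):
    """OR weights a (super-increasing), AND weights b > sum(a), NO weights c > sum(a) + sum(b)."""
    a = [1]
    for _ in range(l1 - 1):
        a.append(sum(a) + 1)
    b = [sum(a) + 1 + h for h in range(l2)]
    c = [sum(a) + sum(b) + 1 + i for i in range(l3)]
    return a, b, c

def matches(p_or, p_and, p_no, a, b, c, r=1.0):
    """Decision rule of FMS II: R_j = r * (P.Q - s) with s = sum(b); True iff R_j > 0."""
    s = sum(b)
    PQ = (sum(x * w for x, w in zip(p_or, a))
          + sum(x * w for x, w in zip(p_and, b))
          - sum(x * w for x, w in zip(p_no, c)))
    return r * (PQ - s) > 0

a, b, c = build_weights(3, 3, 2)
# Document containing the 2nd OR keyword, all AND keywords and no NO keyword: satisfied.
print(matches([0, 1, 0], [1, 1, 1], [0, 0], a, b, c))   # True
# Same document but missing the 1st AND keyword: rejected.
print(matches([0, 1, 0], [0, 1, 1], [0, 0], a, b, c))   # False
# Same document but containing a NO keyword: rejected.
print(matches([0, 1, 0], [1, 1, 1], [1, 0], a, b, c))   # False
```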
5 ENHANCED SCHEME

In practice, apart from some common keywords, the other keywords in the dictionary are generally professional terms, and this part of the dictionary grows rapidly as the dictionary becomes larger and more comprehensive. At the same time, the data owner's index becomes longer, even though many keyword dimensions never appear in her documents. This causes redundant computation and communication overhead.

In this section, we further propose a Fine-grained Multi-keyword Search scheme supporting Classified Sub-dictionaries (FMSCS), which classifies the total dictionary into a common sub-dictionary and many professional sub-dictionaries. Our goal is to significantly reduce the computation and communication overhead. We have examined a file set randomly chosen from the National Science Foundation (NSF) Research Awards Abstracts 1990-2003 [24]. As shown in Fig. 4, we classify the total dictionary into many sub-dictionaries, such as a common sub-dictionary, a computer science sub-dictionary, a mathematics sub-dictionary, a physics sub-dictionary, etc. The search process only requires some minor changes in the Initialization phase.

Fig. 4. Classified sub-dictionaries

Change of Initialization: Compared with the Basic Framework, in the enhanced scheme the data owner first chooses the corresponding sub-dictionaries. Her own dictionary can then be combined as {f_1 || Subdic_1 || f_2 || Subdic_2 || ...}, where Subdic_i represents all keywords contained in the corresponding sub-dictionary and f_i is a filling factor of random length, which is a 0-string in the index. The filling factor is used to confuse the length of the data owner's own dictionary and the relative positions of the sub-dictionaries. The data owner and the search user then use this dictionary to generate the index and the trapdoor, respectively. Note that in such a dictionary two professional sub-dictionaries may even contain the same keyword; only its first occurrence is used to generate the index and the trapdoor, and the other occurrence is set to 0 in the vector. The secret key K is formed as (S, M_1, M_2, |f_1|, D_ID_1, |f_2|, D_ID_2, ...), where D_ID_i represents the identity of a sub-dictionary and |f_i| is the length of f_i. Apart from these changes, the remaining phases (i.e., Index building, Trapdoor generating and Query) are the same as in the Basic Framework (a small sketch of this dictionary assembly is given at the end of this section).

Dictionary Updating: In searchable encryption schemes with a dictionary, dictionary update is a challenging problem because it may require updating the massive indexes outsourced to the cloud server. In general dictionary-based search schemes, e.g., [13] and [14], an update of the dictionary leads to the re-generation of all indexes. In our FMSCS schemes, when the sub-dictionaries need to be changed or new sub-dictionaries added, only the data owners who use the corresponding sub-dictionaries need to update their indexes; most other data owners do not need to perform any update operations. Such dictionary update operations are particularly lightweight. In addition, Li et al. [9] utilize the dimension expansion technique to implement efficient dictionary expansion. Such a method can also be incorporated into our dictionary updating process. Our scheme can even be more efficient than [9]: although [9] does not need to re-generate all indexes, the corresponding extension operations on all indexes are still necessary, whereas our schemes only need to extend the indexes of some of the data owners.
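As a rough illustration of the modified Initialization (our own sketch: the sub-dictionary contents, the helper names and the way the filling factors are drawn are all assumptions), the snippet below assembles a data owner's personal dictionary as {f_1 || Subdic_1 || f_2 || Subdic_2 || ...}, records the (|f_i|, D_ID_i) part of the secret key, and suppresses duplicate keywords that occur in two sub-dictionaries.

```python
import random

SUB_DICTIONARIES = {                      # identity -> keyword list (toy contents)
    "common": ["data", "system", "model"],
    "cs":     ["encryption", "index", "cloud"],
    "math":   ["matrix", "vector", "index"],     # "index" also appears in "cs"
}

def assemble_dictionary(chosen_ids, max_fill=4, seed=0):
    """Return the owner's dictionary slots and the (|f_i|, D_IDi) part of the secret key.
    Filling factors f_i are random-length runs of padding slots (0-strings in the index)."""
    rng = random.Random(seed)
    slots, key_part, seen = [], [], set()
    for sub_id in chosen_ids:
        fill_len = rng.randint(1, max_fill)
        slots += ["<pad>"] * fill_len                       # f_i: always 0 in the index vector
        key_part.append((fill_len, sub_id))
        for w in SUB_DICTIONARIES[sub_id]:
            # a keyword repeated in a later sub-dictionary keeps only its first occurrence
            slots.append(w if w not in seen else "<dup>")
            seen.add(w)
    return slots, key_part

def index_vector(doc_keywords, slots):
    """Binary index vector over the owner's (shorter) dictionary; pads and duplicates stay 0."""
    return [1 if w in doc_keywords and w not in ("<pad>", "<dup>") else 0 for w in slots]

slots, key_part = assemble_dictionary(["common", "cs", "math"])
print(key_part)                                  # e.g. [(3, 'common'), (2, 'cs'), (1, 'math')]
print(index_vector({"cloud", "matrix", "index"}, slots))
```

The point of the sketch is that the index vector only spans the sub-dictionaries the owner actually uses (plus short padding), which is why the FMSCS index is much shorter than one built over the full dictionary.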
6 SECURITY ANALYSIS

In this section, we analyze the main security properties of the proposed schemes. In particular, our analysis focuses on how the proposed schemes achieve confidentiality of documents, privacy protection of index and trapdoor, and unlinkability of trapdoor. Other security features are not the focus of our concern.

6.1 Confidentiality of Documents

In our schemes, the outsourced documents are encrypted by a traditional symmetric encryption algorithm (e.g., AES). In addition, the symmetric key sk is generated by the data owner and sent to the search user through a secure channel. Since the AES encryption algorithm is secure [23], no entity can recover the encrypted documents without the key sk. Therefore, the confidentiality of the encrypted documents is achieved.

6.2 Privacy Protection of Index and Trapdoor

As shown in Section 4.1, both the index I_j = (p_a M_1, p_b M_2) and the trapdoor T_W̃ = (M_1^{-1} q_a, M_2^{-1} q_b) are ciphertexts of the vectors (P, Q). The secret key is K = (S, M_1, M_2) in the FMS, or (S, M_1, M_2, |f_1|, D_ID_1, |f_2|, D_ID_2, ...) in the FMSCS, where S functions as a splitting indicator which splits P and Q into (p_a, p_b) and (q_a, q_b), respectively, and the two invertible matrices M_1 and M_2 are used to encrypt (p_a, p_b) and (q_a, q_b). The security of this encryption algorithm has been proved in the known ciphertext model [21]. Thus, the content of the index and the trapdoor cannot be identified, and privacy protection of the index and the trapdoor is achieved.

6.3 Unlinkability of Trapdoor

To protect the security of the search, unlinkability of trapdoor should be achieved. Although the cloud server cannot directly recover the keywords, linkability of trapdoors may cause privacy leakage: the same keyword set may be searched many times, and if the trapdoor generation function were deterministic, the cloud server could deduce the relationship of keywords even though it cannot decrypt the trapdoors. We therefore consider whether the trapdoor T_W̃ = (M_1^{-1} q_a, M_2^{-1} q_b) can be linked to the keywords, and prove that our schemes achieve unlinkability of trapdoor in a strong threat model, i.e., the known background model [6].

Known Background Model: In this model, the cloud server can possess statistical information from a known comparable dataset which bears a similar nature to the targeted dataset.

TABLE 1
Structure of Q

              Q[1] ........................... Q[m]     Q[m+1]
FMS(CS) I     ... 0 ... d_i ... 0 ... d_j ...           −s
FMS(CS) II    ... a_k ... b_h ... 0 ... −c_i ...        −s

As shown in Table 1, in our FMS(CS) I the trapdoor is constituted by two parts. The values of the dimensions d_i (i = 1, 2, ..., l) form the super-increasing sequence randomly chosen by the search user (assume there are ω possible sequences). The (m+1)-th dimension is −s, where s is a positive random number defined by the search user. Assume the size of s is λ_s bits; then there are 2^{λ_s} possible values for s.
Further, to generate Q'' = r · Q', Q' is multiplied by a positive random number r; there are 2^{λ_r} possible values for r (if the search user chooses a λ_r-bit r). Finally, Q'' is split into (q_a, q_b) according to the splitting indicator S. Specifically, if S[i] = 0 (i = 1, 2, ..., m+1), the value of Q''[i] is randomly split into q_a[i] and q_b[i]; assume the number of '0' entries in S is μ and that each dimension of q_a and q_b is λ_q bits. Note that λ_s, λ_r, μ and λ_q are independent of each other. Then, in our FMS(CS) I, the probability that two trapdoors are the same is

\[
P_1 = \frac{1}{\omega \cdot 2^{\lambda_s} \cdot 2^{\lambda_r} \cdot (2^{\lambda_q})^{\mu}} = \frac{1}{\omega \cdot 2^{\lambda_s + \lambda_r + \mu \lambda_q}} \tag{16}
\]

Therefore, larger ω, λ_s, λ_r, μ and λ_q achieve stronger security; e.g., if we choose a 1024-bit r, then the probability P_1 < 1/2^{1024}. As a result, the probability that two trapdoors are the same is negligible.

In the FMS(CS) II, because s = Σ_{h=1}^{l_2} b_h, its value depends on the weight sequence (a_1, a_2, ..., a_{l_1}, b_1, b_2, ..., b_{l_2}, c_1, c_2, ..., c_{l_3}). Denoting the number of different weight sequences by ω, we can compute

\[
P_2 = \frac{1}{\omega \cdot 2^{\lambda_r} \cdot (2^{\lambda_q})^{\mu}} = \frac{1}{\omega \cdot 2^{\lambda_r + \mu \lambda_q}} \tag{17}
\]

Similarly, in the FMS(CS) II the probability that two trapdoors are the same is negligible. Therefore, in our schemes, unlinkability of trapdoor is achieved.

In summary, we present the comparison of security levels in Table 2, where I and II represent FMS(CS) I and FMS(CS) II, respectively. It can be seen that all schemes achieve confidentiality of documents and privacy protection of index and trapdoor, but the OPE-based schemes [11], [25] cannot achieve unlinkability of trapdoor very well because of the similarity relevance mentioned in [14].

TABLE 2
Comparison of Security Level

                        [11], [25]   [6], [13], [14]   I    II
Confidentiality         ✓            ✓                 ✓    ✓
Privacy protection      ✓            ✓                 ✓    ✓
Unlinkability           –            ✓                 ✓    ✓

Discussions: In MRSE [6], the value of P · Q equals the number of matching keywords, which suffers from the scale analysis attack when the cloud server is powerful and has knowledge of some background information. To solve this problem, MRSE extends the index and inserts a random number that follows a normal distribution and confuses the values of P · Q; the enhanced MRSE can thus resist the scale analysis attack. However, the introduction of this random number decreases the precision of the returned results, so there is a trade-off between precision and security in MRSE. In comparison, our schemes do not suffer from the scale analysis attack, because the values of P · Q in our schemes do not disclose any information thanks to the randomly selected sequences mentioned in Section 4.2 and Section 4.3. Therefore, our proposal achieves this security without sacrificing precision.
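To make the trapdoor randomization of Section 6.3 concrete, the following Python sketch is a toy demonstration under our own parameter choices, not a security proof. It generates two trapdoors for the same keyword set with independent random r and random splits, and shows that the trapdoors differ while both yield the same matching decision.

```python
import numpy as np

def keygen(m, seed=0):
    rng = np.random.default_rng(seed)
    S = rng.integers(0, 2, size=m + 1)
    M1 = rng.random((m + 1, m + 1)) + np.eye(m + 1)
    M2 = rng.random((m + 1, m + 1)) + np.eye(m + 1)
    return S, M1, M2

def trapdoor(Q, s, S, M1, M2, rng):
    """Randomized trapdoor: extend Q with -s, scale by a fresh random r > 0, split, encrypt."""
    Qp = np.append(np.asarray(Q, float), -float(s))
    Qpp = rng.uniform(1, 1000) * Qp
    qa, qb = Qpp.copy(), Qpp.copy()
    for i, bit in enumerate(S):
        if bit == 0:                                 # random split on the '0' positions of S
            t = rng.uniform(-1000, 1000)
            qa[i], qb[i] = t, Qpp[i] - t
    return np.linalg.inv(M1) @ qa, np.linalg.inv(M2) @ qb

def index(P, S, M1, M2, rng):
    Pp = np.append(np.asarray(P, float), 1.0)
    pa, pb = Pp.copy(), Pp.copy()
    for i, bit in enumerate(S):
        if bit == 1:                                 # random split on the '1' positions of S
            t = rng.uniform(-1000, 1000)
            pa[i], pb[i] = t, Pp[i] - t
    return pa @ M1, pb @ M2

rng = np.random.default_rng(7)
S, M1, M2 = keygen(4)
I1 = index([1, 0, 1, 1], S, M1, M2, rng)
Q, s = [1, 0, 1, 0], 1
T1 = trapdoor(Q, s, S, M1, M2, rng)                  # two trapdoors for the SAME query
T2 = trapdoor(Q, s, S, M1, M2, rng)
print(np.allclose(T1[0], T2[0]))                     # False: the trapdoors look unrelated
print(I1[0] @ T1[0] + I1[1] @ T1[1] > 0,             # both give the same matching decision
      I1[0] @ T2[0] + I1[1] @ T2[1] > 0)
```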
7 PERFORMANCE EVALUATION

In this section, we evaluate the performance of the proposed schemes using simulations and compare it with that of the existing proposals in [6], [13], [14]. We apply a real-world dataset from the National Science Foundation Research Awards Abstracts 1990-2003 [24], from which we randomly select multiple documents, and conduct the experiments on an Intel Core i5 3.2 GHz system.

7.1 Functionality

We compare the functionalities of [6], [13], [14] and our schemes in Table 3, where I and II represent FMS(CS) I and FMS(CS) II, respectively.

TABLE 3
Comparison of Functionalities

                            [6]   [13]   [14]   I     II
Multi-keyword search        ✓     ✓      ✓      ✓     ✓
Coordinate matching         ✓     ✓      ✓      ✓     ✓
Relevance score             –     ✓      ✓      ✓     –
Preference factor           –     –      –      ✓     ✓
AND, OR, NO operations      –     –      –      –     ✓

MRSE [6] achieves multi-keyword search and coordinate matching using the secure kNN computation scheme, and [13] and [14] consider the relevance scores of keywords. Compared with the other schemes, our FMS(CS) I considers both the relevance scores and the preference factors of keywords. Note that if the search user sets all relevance scores and preference factors of keywords to the same value, FMS(CS) I degrades to MRSE and coordinate matching is achieved. In FMS(CS) II, if the search user sets all preference factors of the "OR" operation keywords to the same value, FMS(CS) II can also achieve coordinate matching of the "OR" operation keywords. In particular, FMS(CS) II achieves fine-grained keyword search operations, i.e., the "AND", "OR" and "NO" operations of Google Search, which are definitely practical and significantly enhance the functionalities of encrypted keyword search.

7.2 Query Complexity

In FMS(CS) II, we can implement the "OR", "AND" and "NO" operations by defining appropriate keyword weights; this provides a more fine-grained search than [6], [13] and [14]. Suppose the keywords of the "OR", "AND" and "NO" operations are (w'_1, w'_2, ..., w'_{l_1}), (w''_1, w''_2, ..., w''_{l_2}) and (w'''_1, w'''_2, ..., w'''_{l_3}), respectively. Our FMS(CS) II completes the search with only one query, whereas [6], [13] and [14] would complete the search through the following steps (a sketch of this client-side composition is given below):

- For the "OR" operation on the l_1 keywords, they need only one query Query(w'_1, w'_2, ..., w'_{l_1}) to return a collection of documents with the most matching keywords (i.e., coordinate matching), which can be denoted as X = Query(w'_1, w'_2, ..., w'_{l_1}).
- For the "AND" operation on the l_2 keywords, [6], [13] and [14] cannot generate a single query over multiple keywords that achieves the "AND" operation. Therefore, after issuing l_2 queries Query(w''_i) (i = 1, 2, ..., l_2), they can perform the "AND" operation, and the corresponding document set can be denoted as Y = Query(w''_1) ∩ Query(w''_2) ∩ ... ∩ Query(w''_{l_2}).
- For the "NO" operation on the l_3 keywords, they first need l_3 queries Query(w'''_i) (i = 1, 2, ..., l_3). The document set matching any excluded keyword can then be denoted as Z = Query(w'''_1) ∪ Query(w'''_2) ∪ ... ∪ Query(w'''_{l_3}).

Finally, the document collection achieving the "OR", "AND" and "NO" operations can be represented as X ∩ Y − Z. As shown in Figs. 5a, 5b and 5c, to achieve these operations the FMS(CS) II outperforms the existing proposals by generating fewer queries.
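The sketch below is our own illustration with toy per-keyword results (`Query` here just looks up a prebuilt map, it is not any scheme's real API). It composes X ∩ Y − Z from 1 + l_2 + l_3 single-purpose queries, which is what the schemes of [6], [13], [14] would have to do on the client side, whereas FMS II obtains the same set with one weighted query.

```python
# Toy inverted results: keyword -> set of matching document identities.
RESULTS = {
    "cloud":   {"F1", "F2", "F3"},
    "search":  {"F1", "F3"},
    "privacy": {"F1", "F3", "F4"},
    "secure":  {"F1", "F2", "F3"},
    "draft":   {"F3"},
}

def Query(*keywords):
    """One coordinate-matching style query: documents containing any of the keywords."""
    out = set()
    for w in keywords:
        out |= RESULTS.get(w, set())
    return out

OR_kw, AND_kw, NO_kw = ["cloud", "search"], ["privacy", "secure"], ["draft"]

X = Query(*OR_kw)                                   # 1 query for the OR block
Y = set.intersection(*(Query(w) for w in AND_kw))   # l2 queries for the AND block
Z = set.union(*(Query(w) for w in NO_kw))           # l3 queries for the NO block
print((X & Y) - Z)                                  # composed result, here {'F1'}
print("queries used:", 1 + len(AND_kw) + len(NO_kw), "vs. 1 for FMS II")
```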
Fig. 5. Number of queries. (a) Number of queries for different numbers of "AND" and "NO" keywords with the same number of "OR" keywords, l_1 = 5. (b) Number of queries for different numbers of "OR" and "NO" keywords with the same number of "AND" keywords, l_2 = 5. (c) Number of queries for different numbers of "AND" and "OR" keywords with the same number of "NO" keywords, l_3 = 5.

7.3 Efficiency

7.3.1 Computation overhead

To demonstrate the computation overhead of our schemes clearly, we analyze each phase in turn.

Index building. Note that the Index building phase of [6] is the same as that of our FMS II scheme, without calculating the relevance scores, and the Index building phase of FMS I is the same as that of [13], which includes the relevance score computation. Compared with FMS I, FMS II does not need to calculate the relevance scores; however, compared with the computation cost of building the index, the cost of calculating the relevance scores is negligible, so we do not distinguish between them. Moreover, in our enhanced schemes (FMSCS), we divide the total dictionary into 1 common sub-dictionary and 20 professional sub-dictionaries (assuming each data owner on average chooses 1 common sub-dictionary and 3 professional sub-dictionaries to generate the index). As shown in Fig. 6, the time for building the index is dominated by both the size of the dictionary and the number of documents, and compared with [6], [13], [14] and our FMS schemes, the FMSCS schemes largely reduce the computation overhead.

Fig. 6. Time for building index. (a) For different sizes of the dictionary with the same number of documents, N = 6000. (b) For different numbers of documents with the same size of the dictionary, |W| = 4000.

Trapdoor generating. In the Trapdoor generating phase, [6] and [13] first create a vector according to the search keyword set W̃ and then encrypt the vector by the secure kNN computation scheme, while [14] also generates a vector and uses homomorphic encryption to encrypt each dimension. In comparison, our FMS I and FMS II schemes must first generate a super-increasing sequence and a weight sequence, respectively. In practice, however, a corresponding sequence can be pre-selected for each scheme; this still achieves the search functionality and privacy, because even if the vectors are the same for multiple queries, the trapdoors will not be the same due to the security of the kNN computation scheme. Therefore, the computation costs of [6], [13] and all FMS schemes in the Trapdoor generating phase are the same.

Fig. 7. Time for generating trapdoor. (a) For different sizes of the dictionary with the same number of query keywords, |W̃| = 20. (b) For different numbers of query keywords with the same size of the dictionary, |W| = 4000.
As shown in Fig. 7, the time for generating the trapdoor is dominated by the size of the dictionary rather than the number of query keywords. Hence, our FMSCS schemes are also very efficient in the Trapdoor generating phase.

Query. As [6], [13] and the FMS all adopt the secure kNN computation scheme, their query times are the same. The computation overhead in the Query phase, as shown in Fig. 8, is greatly affected by the size of the dictionary and the number of documents, and has almost no relation to the number of query keywords. Furthermore, our FMSCS schemes significantly reduce the computation cost in the Query phase. Since [14] needs to encrypt each dimension of the index/trapdoor using fully homomorphic encryption, its index/trapdoor size is enormous. Note that, in the Trapdoor generating and Query phases, the computation overheads are not affected by the number of query keywords. Thus, our FMS and FMSCS schemes are more efficient than some multiple-keyword search schemes [26], [27], whose cost is linear in the number of query keywords.

Fig. 8. Time for query. (a) For different sizes of the dictionary with the same number of documents and number of search keywords, N = 6000, |W̃| = 20. (b) For different numbers of documents with the same size of the dictionary and number of search keywords, |W| = 4000, |W̃| = 20. (c) For different numbers of search keywords with the same size of the dictionary and number of documents, N = 6000, |W| = 4000.

7.3.2 Storage overhead

As shown in Table 4, we provide a comparison of the storage overhead among several schemes. Specifically, we evaluate the storage overhead of three parties: the data owner, the search user and the cloud server.

TABLE 4
Comparison of Storage Overhead (Bytes). (m is the size of the dictionary; N is the number of documents; D_s is the average size of each encrypted document; λ is the security parameter; ε is the decrease rate of the dictionary obtained by using our classified sub-dictionaries technique; S_sk is the size of the symmetric key.)

                [14]                   [6], [13] and FMS              FMSCS
Data Owner      λ^5/8                  4(m+1) + 8(m+1)^2 + S_sk       4(m+1) + 8(m+1)^2 + S_sk
Search User     λ^5/8 + λ^2/8          4(m+1) + 8(m+1)^2 + S_sk       4(m+1) + 8(m+1)^2 + S_sk
Cloud Server    N·D_s + m·N·λ^5/8      N·D_s + 8(m+1)·N               N·D_s + 8·ε·(m+1)·N

According to Table 4, the storage overhead of the data owner is the same in the FMS, the FMSCS and the schemes of [6] and [13]. In these schemes, the data owner keeps her secret key K = (S, M_1, M_2) and the symmetric key sk locally, where S is an (m+1)-dimensional vector and M_1 and M_2 are (m+1) × (m+1) invertible matrices. All elements in S, M_1 and M_2 are floating-point numbers. Since the size of a float is 4 bytes, the size of K is 4·(m+1) + 8·(m+1)^2 bytes. We assume that the size of sk is a constant S_sk. Thus, the total storage overhead is 4·(m+1) + 8·(m+1)^2 + S_sk bytes. In [14], however, the storage overhead of the data owner is λ^5/8 bytes, where λ is the security parameter. This is 4GB when we choose λ = 128, which is a popular choice in fully homomorphic encryption schemes, whereas the storage overhead of the FMS and the FMSCS is about 763MB when we choose m = 10000, which is large enough for a search scheme. Therefore, the FMS and the FMSCS are more efficient than the scheme in [14] in terms of the storage overhead of the data owner. (A small sanity check of these figures is sketched below.)
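As a quick sanity check of the byte counts above (our own arithmetic sketch; the 4-byte floats and the formulas are taken from the text, while the symmetric-key size S_sk = 32 bytes is an assumption), the snippet below evaluates 4(m+1) + 8(m+1)^2 + S_sk for m = 10000 and λ^5/8 for λ = 128.

```python
def fms_key_bytes(m, s_sk=32):
    """Secret key K = (S, M1, M2) plus symmetric key: 4-byte floats, two (m+1)x(m+1) matrices."""
    return 4 * (m + 1) + 8 * (m + 1) ** 2 + s_sk

def fhe_key_bytes(lam):
    """Storage of the data owner in [14]: lambda^5 / 8 bytes."""
    return lam ** 5 // 8

m, lam = 10000, 128
print(f"FMS/FMSCS key: {fms_key_bytes(m) / 2**20:.0f} MB")    # ~763 MB
print(f"[14] key:      {fhe_key_bytes(lam) / 2**30:.0f} GB")  # 4 GB
```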
As also shown in Table 4, a search user in the FMS, the FMSCS and the schemes of [6] and [13] keeps the secret key K = (S, M_1, M_2) and the symmetric key sk locally, so the total storage overhead is 4·(m+1) + 8·(m+1)^2 + S_sk bytes. In [14], the storage overhead is λ^5/8 + λ^2/8 bytes, which is 4GB for λ = 128, a popular choice in fully homomorphic encryption schemes, whereas the storage overhead of the FMS and the FMSCS is about 763MB for m = 10000, which is large enough for a search scheme. Therefore, the FMS and the FMSCS are more efficient than the scheme in [14] in terms of the storage overhead of the search user.

The cloud server keeps the encrypted documents and the indexes. The size of the encrypted documents is the same in all schemes, i.e., N·D_s. For the indexes, the storage overhead in the FMS and the schemes of [6] and [13] is 8·(m+1)·N bytes, while in the FMSCS it is 8·ε·(m+1)·N bytes, where 0 < ε < 1. When m = 10000 and N = 10000, which are large enough for a search scheme, the storage overhead of the indexes is about 132MB in the FMSCS, whereas in the schemes of [6] and [13] and the FMS the indexes occupy about 760MB under the same conditions. In [14], the storage overhead of the indexes is N·D_s + m·N·λ^5/8 bytes, which exceeds 4GB for λ = 128. Therefore, the FMS and the FMSCS are more efficient than the scheme in [14] in terms of the storage overhead of the cloud server.

7.3.3 Communication overhead

As shown in Table 5, we provide a comparison of the communication overhead among several schemes. Specifically, we consider the communication overhead of three links: between the data owner and the cloud server (abbreviated as D-C), between the search user and the cloud server (abbreviated as C-S), and between the data owner and the search user (abbreviated as D-S).

TABLE 5
Comparison of Communication Overhead (Bytes). (m is the size of the dictionary; N is the number of documents; D_s is the average size of each encrypted document; T is the number of returned documents; λ is the security parameter; ε is the decrease rate of the dictionary obtained by using our classified sub-dictionaries technique; S_sk is the size of the symmetric key.)

         [14]                   [6], [13] and FMS              FMSCS
D-C      N·D_s + m·N·λ^5/8      8(m+1)·N + 10·N + N·D_s        8·ε·(m+1)·N + 10·N + N·D_s
C-S      m·λ^5/8 + T·D_s        8(m+1) + T·D_s                 8·ε·(m+1) + T·D_s
D-S      λ^5/8 + λ^2/8          4(m+1) + 8(m+1)^2 + S_sk       4(m+1) + 8(m+1)^2 + S_sk

D-C. In the FMS and the schemes of [6] and [13], the data owner sends information to the cloud server in the form C_j || FID_j || I_j (j = 1, 2, ..., N), where C_j is the encrypted document, FID_j is the identity of the document and I_j is the index. We assume that the average size of a document is D_s, so the size of the documents is N·D_s. We assume the encrypted document identity FID is a 10-byte string, so the total size of the identities is 10·N bytes. The index I_j = (p_a M_1, p_b M_2) contains two (m+1)-dimensional vectors, and each dimension is a 4-byte float, so the total size of the indexes is 8·(m+1)·N bytes. Therefore, the total communication overhead is 8·(m+1)·N + 10·N + N·D_s bytes. In the FMSCS, the total communication overhead is 8·ε·(m+1)·N + 10·N + N·D_s bytes; if we choose ε = 0.2, the size of the indexes is 1.6·(m+1)·N bytes and the total communication overhead of the FMSCS is 1.6·(m+1)·N + 10·N + N·D_s bytes. In [14], however, the communication overhead is N·D_s + m·N·λ^5/8 bytes, where λ is the security parameter. If we choose λ = 128, which is popular in fully homomorphic encryption schemes, and m = 1000 and N = 10000, which are large enough for a search scheme, the FMS and the FMSCS are more efficient than the scheme in [14] in terms of the communication overhead of D-C.
TABLE 4
Comparison of Storage Overhead (Bytes). (m: the size of the dictionary; N: the number of documents; Ds: the average size of each encrypted document; λ: the security parameter; ε: the decrease rate of the dictionary obtained with our classified sub-dictionaries technique; Ssk: the size of the symmetric key.)

               | [14]               | [6], [13] and FMS            | FMSCS
Data Owner     | λ^5/8              | 4·(m+1) + 8·(m+1)^2 + Ssk    | 4·(m+1) + 8·(m+1)^2 + Ssk
Search User    | λ^5/8 + λ^2/8      | 4·(m+1) + 8·(m+1)^2 + Ssk    | 4·(m+1) + 8·(m+1)^2 + Ssk
Cloud Server   | N·Ds + m·N·λ^5/8   | N·Ds + 8·(m+1)·N             | N·Ds + 8·ε·(m+1)·N

TABLE 5
Comparison of Communication Overhead (Bytes). (m: the size of the dictionary; N: the number of documents; Ds: the average size of each encrypted document; T: the number of returned documents; λ: the security parameter; ε: the decrease rate of the dictionary obtained with our classified sub-dictionaries technique; Ssk: the size of the symmetric key.)

     | [14]               | [6], [13] and FMS            | FMSCS
D-C  | N·Ds + m·N·λ^5/8   | 8·(m+1)·N + 10·N + N·Ds      | 8·ε·(m+1)·N + 10·N + N·Ds
C-S  | m·λ^5/8 + T·Ds     | 8·(m+1) + T·Ds               | 8·ε·(m+1) + T·Ds
D-S  | λ^5/8 + λ^2/8      | 4·(m+1) + 8·(m+1)^2 + Ssk    | 4·(m+1) + 8·(m+1)^2 + Ssk

C-S. The C-S communication consists of two phases: Query and Results returning. In the Query phase, a search user in the FMS as well as the schemes in [6] and [13] sends the trapdoor to the cloud server in the form of T_W̃ = (M1^{-1}·qa, M2^{-1}·qb), which contains two (m+1)-dimensional vectors. Thus, the communication overhead is 8·(m+1) bytes. In the FMSCS, the communication overhead is 8·ε·(m+1) bytes (0 < ε < 1). In the Results returning phase, the cloud server sends the corresponding results to the search user. The communication overhead of C-S increases with the number of returned documents at this point. We assume that the number of returned documents is T; thus, the communication overhead from the cloud server to the search user is T·Ds bytes. Therefore, in the FMS as well as the schemes in [6] and [13], the total communication overhead of C-S is 8·(m+1) + T·Ds bytes, and in the FMSCS it is 8·ε·(m+1) + T·Ds bytes. In [14], the total communication overhead of C-S is m·λ^5/8 + T·Ds bytes. If we choose λ = 128, which is popular for a fully homomorphic encryption scheme, and m = 1000 and N = 10000, which are large enough for a search scheme, the FMS and the FMSCS are more efficient than the scheme in [14] in terms of the communication overhead of C-S.

D-S. From Table 5, we can see that the communication overhead of the FMS, the FMSCS as well as the schemes in [6] and [13] is the same. In the Initialization phase, the data owner sends the secret key K = (S, M1, M2) and the symmetric key sk to the search user, where S is an (m+1)-dimensional vector, and M1 and M2 are (m+1)×(m+1) invertible matrices. Thus, the size of the secret key K is 4·(m+1) + 8·(m+1)^2 bytes. Therefore, the total communication overhead is 4·(m+1) + 8·(m+1)^2 + Ssk bytes, where Ssk represents the size of the symmetric key. However, the communication overhead of the scheme in [14] is λ^5/8 + λ^2/8 bytes.
The communication overhead is 4GB when we choose λ = 128, which is popular for a fully homomorphic encryption scheme, whereas the communication overhead of the FMS and the FMSCS is almost 763MB when we choose m = 10000, which is large enough for a search scheme. Therefore, the FMS and the FMSCS are more efficient than the scheme in [14] in terms of the communication overhead of D-S.

8 RELATED WORK

There are mainly two types of searchable encryption in the literature: Searchable Public-key Encryption (SPE) and Searchable Symmetric Encryption (SSE).

8.1 SPE

SPE was first proposed by Boneh et al. [28]; it supports single-keyword search on encrypted data, but the computation overhead is heavy. In the framework of SPE, Boneh et al. [27] propose conjunctive, subset, and range queries on encrypted data. Hwang et al. [29] propose a conjunctive keyword scheme which supports multi-keyword search. Zhang et al. [17] propose an efficient public key encryption with conjunctive-subset keywords search. However, these conjunctive keyword schemes can only return results which match all the keywords simultaneously, and cannot rank the returned results. Liu et al. [30] propose a ranked query scheme which uses a mask matrix to achieve cost-effectiveness. Yu et al. [14] propose a multi-keyword top-k retrieval scheme with fully homomorphic encryption, which can return ranked results and achieve high security. In general, although SPE allows more expressive queries than SSE [13], it is less efficient, and therefore we adopt SSE in this work.

8.2 SSE

The concept of SSE was first developed by Song et al. [8]. Wang et al. [25] develop a ranked keyword search scheme which considers the relevance score of a keyword. However, the above schemes cannot efficiently support multi-keyword search, which is widely used to provide a better experience to the search user. Later, Sun et al. [13] propose a multi-keyword search scheme which considers the relevance scores of keywords and achieves efficient query by utilizing a multidimensional tree technique. A widely adopted multi-keyword search approach is multi-keyword ranked search (MRSE) [6]. This approach can return ranked search results according to the number of matching keywords. Li et al. [10] utilize the relevance score and k-nearest neighbor techniques to develop an efficient multi-keyword search scheme that can return ranked search results based on accuracy. Within this framework, they leverage an efficient index to further improve the search efficiency, and adopt the blind storage system to conceal the access pattern of the search user. Li et al. [19] also propose an authorized and ranked multi-keyword search scheme (ARMS) over encrypted cloud data by leveraging the ciphertext policy attribute-based encryption (CP-ABE) and SSE techniques.
Security analysis demonstrates that the proposed ARMS scheme can achieve collusion resistance. In this paper, we propose the FMS(CS) schemes, which not only support multi-keyword search over encrypted data, but also achieve fine-grained keyword search with the ability to incorporate the relevance scores and the preference factors of keywords and, more importantly, the logical rules of keywords. In addition, with the classified sub-dictionaries, our proposal is efficient in terms of index building, trapdoor generating and query.

9 CONCLUSION

In this paper, we have investigated the fine-grained multi-keyword search (FMS) issue over encrypted cloud data, and proposed two FMS schemes. The FMS I includes both the relevance scores and the preference factors of keywords to enable more precise search and a better user experience, respectively. The FMS II achieves secure and efficient search with practical functionality, i.e., "AND", "OR" and "NO" operations of keywords. Furthermore, we have proposed the enhanced schemes supporting classified sub-dictionaries (FMSCS) to improve efficiency.

For future work, we intend to further extend the proposal to consider the extensibility of the file set and multi-user cloud environments. Towards this direction, we have obtained some preliminary results on the extensibility [5] and the multi-user cloud environments [19]. Another interesting topic is to develop highly scalable searchable encryption to enable efficient search on large practical databases.

ACKNOWLEDGMENT

This work is supported by the National Natural Science Foundation of China under Grants 61472065, 61350110238, 61103207, U1233108, U1333127, and 61272525, the International Science and Technology Cooperation and Exchange Program of Sichuan Province, China under Grant 2014HH0029, the China Postdoctoral Science Foundation funded project under Grant 2014M552336, and the State Key Laboratory of Information Security Open Foundation under Grant 2015-MS-02.

REFERENCES

[1] H. Liang, L. X. Cai, D. Huang, X. Shen, and D. Peng, "An SMDP-based service model for interdomain resource allocation in mobile cloud networks," IEEE Transactions on Vehicular Technology, vol. 61, no. 5, pp. 2222–2232, 2012.
[2] M. M. Mahmoud and X. Shen, "A cloud-based scheme for protecting source-location privacy against hotspot-locating attack in wireless sensor networks," IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 10, pp. 1805–1818, 2012.
[3] Q. Shen, X. Liang, X. Shen, X. Lin, and H. Luo, "Exploiting geo-distributed clouds for e-health monitoring system with minimum service delay and privacy preservation," IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 2, pp. 430–439, 2014.
[4] T. Jung, X. Mao, X. Li, S.-J. Tang, W. Gong, and L. Zhang, "Privacy-preserving data aggregation without secure channel: multivariate polynomial evaluation," in Proceedings of INFOCOM. IEEE, 2013, pp. 2634–2642.
[5] Y. Yang, H. Li, W. Liu, H. Yang, and M. Wen, "Secure dynamic searchable symmetric encryption with constant document update cost," in Proceedings of GLOBECOM. IEEE, 2014, to appear.
[6] N. Cao, C. Wang, M. Li, K. Ren, and W. Lou, "Privacy-preserving multi-keyword ranked search over encrypted cloud data," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 1, pp. 222–233, 2014.
[7] https://support.google.com/websearch/answer/173733?hl=en.
[8] D. X. Song, D. Wagner, and A. Perrig, "Practical techniques for searches on encrypted data," in Proceedings of S&P. IEEE, 2000, pp. 44–55.
[9] R. Li, Z. Xu, W. Kang, K. C. Yow, and C.-Z. Xu, "Efficient multi-keyword ranked query over encrypted data in cloud computing," Future Generation Computer Systems, vol. 30, pp. 179–190, 2014.
[10] H. Li, D. Liu, Y. Dai, T. H. Luan, and X. Shen, "Enabling efficient multi-keyword ranked search over encrypted cloud data through blind storage," IEEE Transactions on Emerging Topics in Computing, 2014, DOI: 10.1109/TETC.2014.2371239.
[11] C. Wang, N. Cao, J. Li, K. Ren, and W. Lou, "Secure ranked keyword search over encrypted cloud data," in Proceedings of ICDCS. IEEE, 2010, pp. 253–262.
[12] A. Boldyreva, N. Chenette, Y. Lee, and A. O'Neill, "Order-preserving symmetric encryption," in Advances in Cryptology–EUROCRYPT. Springer, 2009, pp. 224–241.
[13] W. Sun, B. Wang, N. Cao, M. Li, W. Lou, Y. T. Hou, and H. Li, "Verifiable privacy-preserving multi-keyword text search in the cloud supporting similarity-based ranking," IEEE Transactions on Parallel and Distributed Systems, DOI: 10.1109/TPDS.2013.282, 2013.
[14] J. Yu, P. Lu, Y. Zhu, G. Xue, and M. Li, "Towards secure multi-keyword top-k retrieval over encrypted cloud data," IEEE Transactions on Dependable and Secure Computing, vol. 10, no. 4, pp. 239–250, 2013.
[15] A. Arvanitis and G. Koutrika, "Towards preference-aware relational databases," in International Conference on Data Engineering (ICDE). IEEE, 2012, pp. 426–437.
[16] G. Koutrika, E. Pitoura, and K. Stefanidis, "Preference-based query personalization," in Advanced Query Processing. Springer, 2013, pp. 57–81.
[17] B. Zhang and F. Zhang, "An efficient public key encryption with conjunctive-subset keywords search," Journal of Network and Computer Applications, vol. 34, no. 1, pp. 262–267, 2011.
[18] D. Stinson, Cryptography: Theory and Practice. CRC Press, 2006.
[19] H. Li, D. Liu, K. Jia, and X. Lin, "Achieving authorized and ranked multi-keyword search over encrypted cloud data," in Proceedings of ICC. IEEE, 2015, to appear.
[20] S. Zerr, E. Demidova, D. Olmedilla, W. Nejdl, M. Winslett, and S. Mitra, "Zerber: r-confidential indexing for distributed documents," in Proceedings of the 11th International Conference on Extending Database Technology: Advances in Database Technology. ACM, 2008, pp. 287–298.
[21] W. K. Wong, D. W.-l. Cheung, B. Kao, and N. Mamoulis, "Secure kNN computation on encrypted databases," in Proceedings of the SIGMOD International Conference on Management of Data. ACM, 2009, pp. 139–152.
[22] J. Zobel and A. Moffat, "Exploring the similarity space," in ACM SIGIR Forum, vol. 32, no. 1. ACM, 1998, pp. 18–34.
[23] N. Ferguson, R. Schroeppel, and D. Whiting, "A simple algebraic representation of Rijndael," in Selected Areas in Cryptography. Springer, 2001, pp. 103–111.
[24] http://kdd.ics.uci.edu/databases/nsfabs/nsfawards.html.
[25] C. Wang, N. Cao, K. Ren, and W. Lou, "Enabling secure and efficient ranked keyword search over outsourced cloud data," IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 8, pp. 1467–1479, 2012.
[26] P. Golle, J. Staddon, and B. Waters, "Secure conjunctive keyword search over encrypted data," in Applied Cryptography and Network Security. Springer, 2004, pp. 31–45.
[27] D. Boneh and B. Waters, "Conjunctive, subset, and range queries on encrypted data," in Theory of Cryptography. Springer, 2007, pp. 535–554.
[28] D. Boneh, G. Di Crescenzo, R. Ostrovsky, and G. Persiano, "Public key encryption with keyword search," in Advances in Cryptology–EUROCRYPT. Springer, 2004, pp. 506–522.
[29] Y. Hwang and P. Lee, "Public key encryption with conjunctive keyword search and its extension to a multi-user system," in Proceedings of Pairing. Springer, 2007, pp. 2–22.
[30] Q. Liu, C. C. Tan, J. Wu, and G. Wang, "Efficient information retrieval for ranked queries in cost-effective cloud environments," in Proceedings of INFOCOM. IEEE, 2012, pp. 2581–2585.

Hongwei Li is an Associate Professor with the School of Computer Science and Engineering, University of Electronic Science and Technology of China, China. He received the PhD degree in computer software and theory from the University of Electronic Science and Technology of China, China, in 2008. He worked as a Post-Doctoral Fellow in the Department of Electrical and Computer Engineering at the University of Waterloo for one year until Oct. 2012. His research interests include network security, applied cryptography, and trusted computing. Dr. Li serves as an Associate Editor of Peer-to-Peer Networking and Applications and as a Guest Editor for the Peer-to-Peer Networking and Applications Special Issue on Security and Privacy of P2P Networks in Emerging Smart City. He also serves on the technical program committees of many international conferences such as IEEE INFOCOM, IEEE ICC, IEEE GLOBECOM, IEEE WCNC, IEEE SmartGridComm, BODYNETS and IEEE DASC. He is a member of IEEE, a member of the China Computer Federation and a member of the China Association for Cryptologic Research.

Yi Yang received his B.S. degree in Network Engineering from Tianjin University of Science and Technology (TUST) in 2012. Currently, he is a master student at the School of Computer Science and Engineering, University of Electronic Science and Technology of China (UESTC), China. He serves as a reviewer for Peer-to-Peer Networking and Applications, IEEE INFOCOM, IEEE ICC, IEEE GLOBECOM, IEEE ICCC, etc. His research interests include cryptography and the secure smart grid.

Tom H. Luan received the B.Sc. degree from Xi'an Jiaotong University, China, in 2004, the M.Phil. degree from the Hong Kong University of Science and Technology, Hong Kong, China, in 2007, and the Ph.D. degree from the University of Waterloo, Canada, in 2012. Since December 2013, he has been the Lecturer in Mobile and Apps at the School of Information Technology, Deakin University, Melbourne Burwood, Australia. His research mainly focuses on vehicular networking, wireless content distribution, peer-to-peer networking and mobile cloud computing.

Xiaohui Liang received the B.Sc. degree in Computer Science and Engineering and the M.Sc. degree in Computer Software and Theory from Shanghai Jiao Tong University (SJTU), China, in 2006 and 2009, respectively. He is currently working toward a Ph.D. degree in the Department of Electrical and Computer Engineering, University of Waterloo, Canada. His research interests include applied cryptography, and security and privacy issues for e-healthcare systems, cloud computing, mobile social networks, and smart grid.

Liang Zhou is a professor with the National Key Laboratory of Science and Technology on Communication at the University of Electronic Science and Technology of China, China.
His current research interests include error control coding, secure communication and cryptography.

Xuemin (Sherman) Shen is a Professor and University Research Chair, Department of Electrical and Computer Engineering, University of Waterloo, Canada. He was the Associate Chair for Graduate Studies from 2004 to 2008. Dr. Shen's research focuses on resource management in interconnected wireless/wired networks, wireless network security, and vehicular ad hoc and sensor networks. Dr. Shen served as the Technical Program Committee Chair for IEEE VTC'10 Fall and IEEE Globecom'07. He also serves/served as the Editor-in-Chief for IEEE Network, Peer-to-Peer Networking and Applications, and IET Communications; a Founding Area Editor for IEEE Transactions on Wireless Communications; and an Associate Editor for IEEE Transactions on Vehicular Technology and Computer Networks. Dr. Shen is a registered Professional Engineer of Ontario, Canada, an IEEE Fellow, an Engineering Institute of Canada Fellow, a Canadian Academy of Engineering Fellow, and a Distinguished Lecturer of the IEEE Vehicular Technology Society and Communications Society.

Enabling Efficient Multi-Keyword Ranked Search Over Encrypted Mobile Cloud Data Through Blind Storage

In mobile cloud computing, a fundamental application is to outsource mobile data to external cloud servers for scalable data storage. The outsourced data, however, need to be encrypted due to the privacy and confidentiality concerns of their owner. This, in turn, makes accurate search over the encrypted mobile cloud data significantly more difficult.

In this paper, we develop searchable encryption for multi-keyword ranked search over encrypted mobile cloud data. Specifically, by considering the large number of outsourced documents (data) in the cloud, we utilize the relevance score and k-nearest neighbor techniques to develop an efficient multi-keyword search scheme that can return ranked search results based on the accuracy.

Within this framework, we leverage an efficient index to further improve the search efficiency, and adopt the blind storage system to conceal the access pattern of the search user. Security analysis demonstrates that our scheme can achieve confidentiality of documents and index, trapdoor privacy, trapdoor unlinkability, and concealment of the access pattern of the search user. Finally, using extensive simulations, we show that our proposal can achieve much improved efficiency in terms of search functionality and search time compared with existing proposals.

1.1 GOAL OF THE PROJECT:

To achieve efficient and privacy-preserving multi-keyword ranked search over encrypted mobile cloud data via a blind storage system, the EMRS has the following design goals:

• Multi-Keyword Ranked Search: To meet the requirements for practical uses and provide better user experience, the EMRS should not only support multi-keyword search over encrypted mobile cloud data, but also achieve relevance-based result ranking.

• Search Efficiency: Since the number of the total documents may be very large in a practical situation, the EMRS should achieve sublinear search with better search efficiency.

• Confidentiality and Privacy Preservation: To prevent the cloud server from learning any additional information about the documents and the index, and to keep search users’ trapdoors secret, the EMRS should cover all the security requirements that we introduced above.

1.2 INTRODUCTION

Mobile cloud computing removes the hardware limitations of mobile devices by exploiting scalable and virtualized cloud storage and computing resources, and accordingly is able to provide much more powerful and scalable mobile services to users. In mobile cloud computing, mobile users typically outsource their data to external cloud servers, e.g., iCloud, to enjoy a stable, low-cost and scalable way of storing and accessing data. However, outsourced data typically contain sensitive private information, such as personal photos and emails, which could lead to severe confidentiality and privacy violations without effective protection. It is therefore necessary to encrypt the sensitive data before outsourcing them to the cloud. The data encryption, however, results in significant difficulties when other users need to find data of interest, due to the difficulty of searching over encrypted data.

This fundamental issue in mobile cloud computing has accordingly motivated an extensive body of research in recent years on searchable encryption techniques that achieve efficient search over outsourced encrypted data. A collection of research works has recently been developed on the topic of multi-keyword search over encrypted data: a symmetric searchable encryption scheme that achieves high efficiency for large databases with a modest sacrifice in security guarantees; a multi-keyword search scheme supporting result ranking by adopting the k-nearest neighbors (kNN) technique; and a dynamic searchable encryption scheme that uses blind storage to conceal the access pattern of the search user.

In order to meet the practical search requirements, search over encrypted data should support the following three functions.

First, the searchable encryption schemes should support multi-keyword search, and provide the same user experience as searching in Google with different keywords; single-keyword search is far from satisfactory, as it returns only very limited and inaccurate search results. Second, to quickly identify the most relevant results, the search user would typically prefer cloud servers to sort the returned search results in a relevance-based order, ranked by the relevance of the search request to the documents. In addition, returning ranked results to users can also eliminate unnecessary network traffic by sending back only the most relevant results from the cloud to search users.

Third, as for the search efficiency, since the number of the documents contained in a database could be extraordinarily large, searchable encryption schemes should be efficient to quickly respond to the search requests with minimum delays.

In contrast to these theoretical benefits, most existing proposals fail to offer sufficient insights towards the construction of fully functional searchable encryption as described above. As an effort towards this issue, in this paper, we propose an efficient multi-keyword ranked search (EMRS) scheme over encrypted mobile cloud data through blind storage.

Our main contributions can be summarized as follows:

• We introduce a relevance score in searchable encryption to achieve multi-keyword ranked search over the encrypted mobile cloud data. In addition to that, we construct an efficient index to improve the search efficiency.

• By modifying the blind storage system in the EMRS, we solve the trapdoor unlinkability problem and conceal access pattern of the search user from the cloud server.

• We give a thorough security analysis to demonstrate that the EMRS can reach a high security level, including confidentiality of documents and index, trapdoor privacy, trapdoor unlinkability, and concealing the access pattern of the search user. Moreover, we implement extensive experiments, which show that the EMRS can achieve enhanced efficiency in terms of functionality and search efficiency compared with existing proposals.

1.3 LITERATURE SURVEY

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

Existing works built various types of secure indexes and corresponding index-based keyword matching algorithms to improve search efficiency. All of these works support only single-keyword search. Subsequent works extended the search capability to multiple, conjunctive or disjunctive, keyword search. However, they support only exact keyword matching, so misspelled keywords in the query will result in wrong or no matches. Very recently, a few works extended the search capability to approximate keyword matching (also known as fuzzy search). These are all for single-keyword search, with a common approach of expanding the index file to cover possible combinations of keyword misspelling, so that a certain degree of spelling error, measured by edit distance, can be tolerated. Although a wildcard approach is adopted to minimize the expansion of the resulting index file, for an l-letter keyword to tolerate errors up to an edit distance of d, the index still has to be expanded by a factor that grows quickly with l and d, as illustrated by the sketch below.
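For illustration only, the following Java sketch builds one common form of the wildcard-based fuzzy keyword set for edit distance 1 (the construction is taken from the fuzzy-search literature, not from this project's code); even a single short keyword already needs 2l + 2 index entries.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Wildcard-based fuzzy keyword set for edit distance 1 (one common construction from the literature).
public class WildcardFuzzySet {
    static Set<String> editDistanceOne(String w) {
        Set<String> out = new LinkedHashSet<String>();
        out.add(w);                                                 // the exact keyword
        for (int i = 0; i <= w.length(); i++) {
            out.add(w.substring(0, i) + "*" + w.substring(i));      // '*' inserted at every gap
        }
        for (int i = 0; i < w.length(); i++) {
            out.add(w.substring(0, i) + "*" + w.substring(i + 1));  // '*' replacing every character
        }
        return out;
    }

    public static void main(String[] args) {
        Set<String> s = editDistanceOne("castle");
        System.out.println(s.size() + " index entries for one keyword: " + s);
        // A 6-letter keyword already needs 2*6 + 2 = 14 entries for edit distance 1;
        // tolerating a larger edit distance d multiplies the index size further.
    }
}
```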

Thus, it is not scalable, as the storage complexity increases exponentially with the error tolerance. Moreover, to support multi-keyword search, the search algorithm has to run multiple rounds. To date, efficient multi-keyword fuzzy search over encrypted data remains a challenging problem. We want to point out that the efforts on search over encrypted data involve not only information retrieval techniques, such as advanced data structures used to represent the searchable index and efficient search algorithms that run over the corresponding data structures, but also the proper design of cryptographic protocols to ensure the security and privacy of the overall system. Although fuzzy search and multi-keyword search have been implemented separately, simply combining the two does not lead to a secure and efficient multi-keyword fuzzy search scheme.

2.1.1 DISADVANTAGES:

Given the large number of data users and documents in the cloud, it is crucial for the search service to allow multi-keyword queries and to provide result similarity ranking in order to meet the effective data retrieval need. Existing searchable encryption schemes focus on single-keyword search or Boolean keyword search, and rarely differentiate the search results.

  • Single-keyword search without ranking
  • Boolean-keyword search without ranking
  • Single-keyword similarity search with ranking


2.2 PROPOSED SYSTEM:

The proposed system builds on three techniques from prior work: a symmetric searchable encryption scheme that achieves high efficiency for large databases with a modest sacrifice in security guarantees; a multi-keyword search scheme supporting result ranking by adopting the k-nearest neighbors (kNN) technique; and a dynamic searchable encryption scheme that uses blind storage to conceal the access pattern of the search user.

We propose the detailed EMRS. Since the encrypted documents and the index z are both stored in the blind storage system, we provide the general construction of the blind storage system. Moreover, the EMRS aims to eliminate the risk of sharing the key used to encrypt the documents with all search users and to solve the trapdoor unlinkability problem in Naveed's scheme.

We therefore modify the construction of blind storage and leverage the ciphertext policy attribute-based encryption (CP-ABE) technique in the EMRS. However, the specific construction of CP-ABE is out of the scope of this paper and we only give a simple indication here. The notations of this paper are shown in Table 1. The EMRS consists of the following phases: System Setup, Construction of Blind Storage, Encrypted Database Setup, Trapdoor Generation, Efficient and Secure Search, and Retrieve Documents from Blind Storage.
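For orientation only, the phases listed above can be pictured as the operations of a small Java interface; the method names and signatures below are our own invention and are not taken from the paper.

```java
// Hypothetical outline of the EMRS phases; types and names are illustrative, not the paper's API.
public interface EmrsScheme {
    void systemSetup(int securityParameter);                        // System Setup
    void buildBlindStorage(byte[][] blocks);                        // Construction of Blind Storage
    void setupEncryptedDatabase(java.util.List<byte[]> documents);  // Encrypted Database Setup
    byte[] generateTrapdoor(java.util.List<String> keywords);       // Trapdoor Generation
    java.util.List<Integer> search(byte[] trapdoor, int topK);      // Efficient and Secure Search
    byte[] retrieveDocument(int documentId);                        // Retrieve Documents from Blind Storage
}
```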

2.2.1 ADVANTAGES:

In this paper, we propose an efficient multi-keyword ranked search (EMRS) scheme over encrypted mobile cloud data through blind storage.

Our main contributions can be summarized as follows:

• We introduce a relevance score in searchable encryption to achieve multi-keyword ranked search over the encrypted mobile cloud data. In addition to that, we construct an efficient index to improve the search efficiency.

• By modifying the blind storage system in the EMRS, we solve the trapdoor unlinkability problem and conceal access pattern of the search user from the cloud server.

• We give a thorough security analysis to demonstrate that the EMRS can reach a high security level, including confidentiality of documents and index, trapdoor privacy, trapdoor unlinkability, and concealing the access pattern of the search user. Moreover, we implement extensive experiments, which show that the EMRS can achieve enhanced efficiency in terms of functionality and search efficiency compared with existing proposals.

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor                              –    Pentium IV
  • Speed                                  –    1.1 GHz
  • RAM                                    –    256 MB (min)
  • Hard Disk                              –    20 GB
  • Floppy Drive                           –    1.44 MB
  • Key Board                              –    Standard Windows Keyboard
  • Mouse                                  –    Two or Three Button Mouse
  • Monitor                                –    SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Win7
  • Front End                                :           JAVA JDK 1.7
  • Back End                                :           MYSQL Server
  • Server                                      :           Apache Tomcat Server
  • Script                                       :           JSP Script
  • Document                               :           MS-Office 2007

CHAPTER 3

3.0 SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

  • The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
  • DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • DFD is also known as bubble chart. A DFD may be used to represent a system at any level of abstraction. DFD may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

DATA SOURCE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures, or devices that produce data; the physical component is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.


3.1 ARCHITECTURE DIAGRAM

3.2 DATAFLOW DIAGRAM

UML DIAGRAMS:

3.2 USE CASE DIAGRAM:

3.3 CLASS DIAGRAM:

3.4 SEQUENCE DIAGRAM:

3.5 ACTIVITY DIAGRAM:

CHAPTER 4

4.0 IMPLEMENTATION:

EMRS SCHEME (EFFICIENT MULTI-KEYWORD RANKED SEARCH):

4.1 ALGORITHM

CP-ABE ENCRYPTION ALGORITHM:

The data owner builds the encrypted database as follows:

Step 1: The data owner computes the d-dimensional relevance vector p = (p1, p2, · · · , pd) for each document using the TF-IDF weighting technique, where pj for j ∈ (1, 2, · · · , d) represents the weighting of keyword ωj in document di. Then, the data owner extends p to a (d+2)-dimensional vector p*. The (d+1)-th entry of p* is set to a random number ε and the (d+2)-th entry is set to 1. We let ε follow a normal distribution N(µ, σ²) [11]. For each document di, to compute the encrypted relevance vector, the data owner encrypts the associated extended relevance vector p* using the secret keys M1, M2 and S. First, the data owner chooses a random number r and splits the extended relevance vector p* into two (d+2)-dimensional vectors p′ and p′′ using the vector S; for the j-th item in p*, the entries of p′ and p′′ are set according to the splitting rule of the secure kNN technique.
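The splitting rule itself is elided above. The following minimal Java sketch shows the standard secure kNN split from the literature (variable names are ours, and conventions for which bit value triggers a random split vary between papers): positions of the data vector where S[j] = 1 are split randomly and the others are copied, while the query vector is split in the complementary way, which is why the inner product is preserved before the matrices M1 and M2 are applied.

```java
import java.util.Random;

// Minimal sketch of the secure kNN vector split (our variable names, not the paper's exact rule).
public class KnnSplit {
    static final Random RNG = new Random();

    // Split the data vector p: positions with S[j] == 1 are split randomly, others are copied.
    static double[][] splitData(double[] p, int[] s) {
        double[] pa = new double[p.length], pb = new double[p.length];
        for (int j = 0; j < p.length; j++) {
            if (s[j] == 1) { pa[j] = RNG.nextDouble(); pb[j] = p[j] - pa[j]; }
            else           { pa[j] = p[j];             pb[j] = p[j]; }
        }
        return new double[][] { pa, pb };
    }

    // Split the query vector q in the complementary way, so the inner product is preserved.
    static double[][] splitQuery(double[] q, int[] s) {
        double[] qa = new double[q.length], qb = new double[q.length];
        for (int j = 0; j < q.length; j++) {
            if (s[j] == 0) { qa[j] = RNG.nextDouble(); qb[j] = q[j] - qa[j]; }
            else           { qa[j] = q[j];             qb[j] = q[j]; }
        }
        return new double[][] { qa, qb };
    }

    static double dot(double[] x, double[] y) {
        double sum = 0;
        for (int j = 0; j < x.length; j++) sum += x[j] * y[j];
        return sum;
    }

    public static void main(String[] args) {
        double[] p = { 0.4, 0.0, 0.7, 1.0 };   // toy extended relevance vector
        double[] q = { 1.0, 0.0, 1.0, 0.5 };   // toy extended query vector
        int[]    s = { 1, 0, 1, 0 };           // split indicator S

        double[][] ps = splitData(p, s);
        double[][] qs = splitQuery(q, s);
        // p . q == p' . q' + p'' . q'' -- the property that M1 and M2 then hide.
        System.out.println(dot(p, q));
        System.out.println(dot(ps[0], qs[0]) + dot(ps[1], qs[1]));
    }
}
```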

Step 2: For each document di in D, split the document into blocks of mb bits each. Each block carries a header H(idi) indicating that the block belongs to document di, and the size sizei of the document is contained in the header of the first block of di. Then, for each document di, the data owner chooses a 192-bit key Ki for the algorithm Enc(). More precisely, for each block B[j] of the document di, where j represents the index number of the block, the data owner derives the encryption key for that block by XORing Ki with an encoding of the block index j. Since each block has a unique index number, the blocks of the same document are encrypted with different keys. The document di therefore consists of sizei encrypted blocks.
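As a concrete illustration, the sketch below (our own construction, not the paper's exact one) derives a per-block key by XORing the document key Ki with a big-endian encoding of the block index and then encrypts the block with AES; note that 192-bit AES keys may require the JCE unlimited-strength policy files on older JDKs such as 1.7.

```java
import java.nio.ByteBuffer;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch of per-block encryption with index-dependent keys (details assumed, for illustration only).
public class BlockEncryptor {

    // Derive the key for block j: Ki XOR (encoding of j), mirroring the per-block key rule above.
    static byte[] blockKey(byte[] ki, int j) {
        byte[] k = ki.clone();
        byte[] idx = ByteBuffer.allocate(4).putInt(j).array();
        for (int b = 0; b < idx.length; b++) {
            k[k.length - 1 - b] ^= idx[idx.length - 1 - b];   // XOR the index into the low-order bytes
        }
        return k;
    }

    // Encrypt one block under its derived key; the random IV is prepended to the ciphertext.
    static byte[] encryptBlock(byte[] ki, int j, byte[] block) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        cipher.init(Cipher.ENCRYPT_MODE,
                    new SecretKeySpec(blockKey(ki, j), "AES"),
                    new IvParameterSpec(iv));
        byte[] ct = cipher.doFinal(block);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }
}
```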

Finally, the data owner encrypts all the documents and writes them to the blind storage system using the B.Build function.

Step 3: To enable efficient search over the encrypted documents, the data owner builds the index z. First, the data owner defines the access policy υi for each document di. We denote the result of attribute-based encryption under access policy υi as ABE_υi(·). The data owner initializes z as an empty array indexed by all keywords. Then, the index z can be constructed as shown in Algorithm 1.
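Algorithm 1 is not reproduced here; the following sketch only illustrates the shape of the index z implied by the description above, as a map from each keyword to the entries of the documents containing it. The ABE call is a stand-in for a real CP-ABE library, and all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative shape of the index z: keyword -> entries of the documents containing it.
public class IndexBuilder {

    static class Entry {
        double[][] encryptedRelevanceVector;  // encrypted relevance vector from the secure kNN step
        byte[] encryptedDescriptor;           // ABE_vi(idi || Ki || x), produced by a CP-ABE library
    }

    interface AbeEncryptor {                  // placeholder for a CP-ABE implementation
        byte[] encrypt(String accessPolicy, byte[] plaintext);
    }

    interface Document {                      // minimal document abstraction assumed for this sketch
        double[][] encryptedRelevanceVector();
        String accessPolicy();
        byte[] descriptor();                  // idi || Ki || x before ABE encryption
        List<String> keywords();
    }

    static Map<String, List<Entry>> buildIndex(List<Document> docs, AbeEncryptor abe) {
        Map<String, List<Entry>> z = new HashMap<String, List<Entry>>();
        for (Document d : docs) {
            Entry e = new Entry();
            e.encryptedRelevanceVector = d.encryptedRelevanceVector();
            e.encryptedDescriptor = abe.encrypt(d.accessPolicy(), d.descriptor());
            for (String keyword : d.keywords()) {
                List<Entry> bucket = z.get(keyword);
                if (bucket == null) {
                    bucket = new ArrayList<Entry>();
                    z.put(keyword, bucket);
                }
                bucket.add(e);
            }
        }
        return z;
    }
}
```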

4.2 MODULES:

SEARCHABLE ENCRYPTION CP-ABE:

MULTI-KEYWORD RANKED SEARCH:

BLIND STORAGE SYSTEM:

EMRS SECURITY REQUIREMENTS:

4.3 MODULE DESCRIPTION:

SEARCHABLE ENCRYPTION CP-ABE:

In ciphertext policy attribute-based encryption (CP-ABE), ciphertexts are created with an access structure (usually an access tree) which defines the access policy. A user can decrypt the data only if the attributes embedded in his attribute keys satisfy the access policy in the ciphertext. In CP-ABE, the encrypter holds the ultimate authority over the access policy. The documents are encrypted with a traditional symmetric cryptography technique before being outsourced to the cloud server. Without the correct key, neither the search user nor the cloud server can decrypt the documents. As for index confidentiality, the relevance vector for each document is encrypted using the secret keys M1, M2, and S, and the descriptors of the documents are encrypted using the CP-ABE technique. Thus, the cloud server can only use the index z to retrieve the encrypted relevance vectors without learning any additional information, such as the associations between the documents and the keywords. Only a search user with the correct attribute keys can decrypt the descriptor ABE_υi(idi || Ki || x) to get the document id and the associated symmetric key. Thus, the confidentiality of documents and index can be well protected.

MULTI-KEYWORD RANKED SEARCH:

Multi-keyword ranked search over encrypted data should support the following three functions. First, the searchable encryption schemes should support multi-keyword search, and provide the same user experience as searching in Google with different keywords; single-keyword search is far from satisfactory, as it returns only very limited and inaccurate search results. Second, to quickly identify the most relevant results, the search user would typically prefer cloud servers to sort the returned search results in a relevance-based order, ranked by the relevance of the search request to the documents. In addition, returning ranked results to users can also eliminate unnecessary network traffic by sending back only the most relevant results from the cloud to search users. Third, as for the search efficiency, since the number of documents contained in a database could be extraordinarily large, searchable encryption schemes should be efficient and respond to search requests with minimum delay.
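To make the ranking function concrete, the sketch below (our own illustration; field and method names are not taken from the paper) computes TF-IDF style relevance weights and ranks documents by the inner product of their relevance vectors with the query vector, which is exactly the quantity the secure kNN encryption preserves.

```java
import java.util.*;

// Plaintext illustration of relevance scoring and ranking; in the EMRS these vectors are encrypted first.
public class Ranker {

    // TF-IDF weight of one keyword in one document.
    static double tfIdf(int termFreq, int docLength, int totalDocs, int docsWithTerm) {
        if (termFreq == 0 || docsWithTerm == 0) return 0.0;
        double tf = (double) termFreq / docLength;
        double idf = Math.log((double) totalDocs / docsWithTerm);
        return tf * idf;
    }

    // Rank document ids by the inner product of their relevance vectors with the query vector.
    static List<Integer> topK(Map<Integer, double[]> relevanceVectors, double[] query, int k) {
        List<Integer> ids = new ArrayList<Integer>(relevanceVectors.keySet());
        final Map<Integer, Double> score = new HashMap<Integer, Double>();
        for (Integer id : ids) {
            double s = 0;
            double[] p = relevanceVectors.get(id);
            for (int j = 0; j < query.length; j++) s += p[j] * query[j];
            score.put(id, s);
        }
        Collections.sort(ids, new Comparator<Integer>() {
            public int compare(Integer a, Integer b) { return Double.compare(score.get(b), score.get(a)); }
        });
        return ids.subList(0, Math.min(k, ids.size()));
    }
}
```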

BLIND STORAGE SYSTEM:

A blind storage system is built on the cloud server to support adding, updating and deleting documents and to conceal the access pattern of the search user from the cloud server. In the blind storage system, all documents are divided into fixed-size blocks, and these blocks are indexed by a sequence of random integers generated from a document-related seed. From the cloud server's point of view, it only sees blocks of encrypted documents being uploaded and downloaded. Thus, the blind storage system leaks little information to the cloud server. Specifically, the cloud server does not know which blocks belong to the same document, or even the total number of documents and the size of each document. Moreover, all the documents and the index can be stored in the blind storage system to achieve a searchable encryption scheme.
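The following sketch (our own, with hypothetical names) shows the core idea of a document-related seed driving a pseudorandom sequence of block indices: the seed is derived with HMAC-SHA256 from a secret key and the document identifier, and the resulting indices tell the client which positions of the big block array to read or write.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashSet;
import java.util.Set;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of deriving the pseudorandom block indices used by a blind-storage-style layout.
public class BlindStorageIndices {

    // Derive a document-related seed from a secret key and the document id.
    static long seedFor(byte[] secretKey, String docId) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        byte[] tag = mac.doFinal(docId.getBytes(StandardCharsets.UTF_8));
        return ByteBuffer.wrap(tag).getLong();   // first 8 bytes of the MAC as the seed
    }

    // Expand the seed into 'count' distinct block positions in an array of 'totalBlocks' blocks.
    // A real implementation would use a cryptographic PRF or stream cipher instead of java.util.Random.
    static Set<Integer> blockIndices(long seed, int count, int totalBlocks) {
        java.util.Random prng = new java.util.Random(seed);
        Set<Integer> indices = new LinkedHashSet<Integer>();
        while (indices.size() < count) {
            indices.add(prng.nextInt(totalBlocks));
        }
        return indices;
    }
}
```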

EMRS SECURITY REQUIREMENTS:

In the EMRS, we consider the cloud server to be honest but curious, which means it correctly executes the tasks assigned by the data owner and the search user, but is curious about the data in its storage and the received trapdoors and tries to obtain additional information from them. Moreover, we consider the known background model in the EMRS, which allows the cloud server to know more background information about the documents, such as statistical information about the keywords.

Specifically, the EMRS aims to provide the following four security requirements:

• Confidentiality of Documents and Index: Documents and index should be encrypted before being outsourced to a cloud server. The cloud server should be prevented from prying into the outsourced documents and cannot deduce any associations between the documents and keywords using the index.

• Trapdoor Privacy: Since the search user would like to keep her searches from being exposed to the cloud server, the cloud server should be prevented from knowing the exact keywords contained in the trapdoor of the search user.

• Trapdoor Unlinkability: The trapdoors should not be linkable, which means that trapdoors should be totally different even if they contain the same keywords. In other words, trapdoor generation should be randomized rather than deterministic, so that the cloud server cannot deduce any association between two trapdoors.

• Concealing Access Pattern of the Search User: Access pattern is the sequence of the searched results. In the EMRS, the access pattern should be totally concealed from the cloud server. Specifically, the cloud server cannot learn the total number of the documents stored on it nor the size of the searched document even when the search user retrieves this document from the cloud server.

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase and business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company.  For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are 

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:     

This study is carried out to check the economic impact that the system will have on the organization. The amount of funding that the company can pour into the research and development of the system is limited, and the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

 

5.1.2 TECHNICAL FEASIBILITY   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, with only minimal or no changes required for implementing this system.

5.1.3 SOCIAL FEASIBILITY:  

The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system is working according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.

This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of the system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best programs are worthless if they do not produce correct outputs.

5.2.1 UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and test data correctly and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

5.2.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for various peers in a distributed network framework as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.


5.2.3 NON-FUNCTIONAL TESTING:

 The Non Functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.2.4 LOAD TESTING:

An important tool for implementing system tests is a Load generator. A Load generator is essential for testing quality requirements such as performance and stress. A load can be a real load, that is, the system can be put under test to real usage by having actual telephone users connected to it. They will generate test input data for system test.

Description: It is necessary to ascertain that the application behaves correctly under load when a 'Server busy' response is received.
Expected result: Should designate another active node as the Server.


5.2.5 PERFORMANCE TESTING:

Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values, and produce accurate results in the expected time.


5.2.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and this is what is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms a part of the software quality control effort.

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.


5.2.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

Description: Checking that the user identification is authenticated.
Expected result: In case of failure, it should not be connected in the framework.

Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.


5.2.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.


5.2.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors with a focus on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden and the stimulated software should produce the desired results.

Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.

Description: To check for interface errors.
Expected result: The entire interface must function normally.

Description: To check for errors in data structures or external database access.
Expected result: Database update and retrieval must be performed correctly.

Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.

All of the above system testing strategies are carried out, as the development, documentation, and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE DESCRIPTION:

 

6.1 JAVA TECHNOLOGY:

Java technology is both a programming language and a platform.

 

The Java Programming Language

 

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

  • Simple
    • Architecture neutral
    • Object oriented
    • Portable
    • Distributed     
    • High performance
    • Interpreted     
    • Multithreaded
    • Robust
    • Dynamic
    • Secure     

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes —the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.
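As a small illustration of this compile-then-interpret flow (a generic example, not tied to this project), the class below is compiled once to bytecode with javac, and the resulting .class file can then be executed by the Java VM on any platform:

```java
// HelloPlatform.java -- compile once with: javac HelloPlatform.java
// Run the same HelloPlatform.class on any JVM with: java HelloPlatform
public class HelloPlatform {
    public static void main(String[] args) {
        // The os.name property shows that the identical bytecode runs on different platforms.
        System.out.println("Hello from " + System.getProperty("os.name"));
    }
}
```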

6.2 THE JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

  • The Java Virtual Machine (Java VM)
  • The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that after you compile it, the compiled code runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

6.3 WHAT CAN JAVA TECHNOLOGY DO?

The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.

A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.

How does the API support all these kinds of programs? It does so with packages of software components that provides a wide range of functionality. Every full implementation of the Java platform gives you the following features:

  • The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
  • Applets: The set of conventions used by applets.
  • Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
  • Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
  • Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
  • Software components: Known as JavaBeansTM, can plug into existing component architectures.
  • Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
  • Java Database Connectivity (JDBCTM): Provides uniform access to a wide range of relational databases, as sketched just after this list.
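Since the project's back end is a MySQL server accessed from Java, a minimal JDBC sketch is given below; the database name, table, columns and credentials are placeholders for whatever the actual deployment uses, and the MySQL Connector/J driver must be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal JDBC access to a MySQL back end; URL, credentials and schema are illustrative only.
public class DocumentDao {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/emrs";       // hypothetical database name
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT doc_id, file_name FROM documents WHERE owner = ?")) {
            ps.setString(1, "alice");                          // parameterized query avoids SQL injection
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("doc_id") + " " + rs.getString("file_name"));
                }
            }
        }
    }
}
```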

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.

 

6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?

We can’t promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and requires less effort than other languages. We believe that Java technology will help you do the following:

  • Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
  • Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
  • Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
  • Develop programs more quickly: Your development time may be cut roughly in half compared with writing the same program in C++. Why? You write fewer lines of code, and Java is a simpler programming language than C++.
  • Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure Java™ Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
  • Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
  • Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.

 

6.5 ODBC:

 

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program, and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. Many detractors have charged that ODBC is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

6.6 JDBC:

In an effort to set an independent database standard API for Java; Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.
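
As a hedged sketch of this bridging idea: the historical JDBC-ODBC bridge driver (sun.jdbc.odbc.JdbcOdbcDriver, no longer shipped with recent JDKs) let a Java program open an ODBC data source through ordinary JDBC calls. The data source name, user name and password below are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;

// Opens a connection to a hypothetical ODBC data source named "SalesFigures"
// through the JDBC-ODBC bridge, then closes it again.
public class BridgeExample {
    public static void main(String[] args) throws Exception {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");   // load the bridge driver
        Connection con = DriverManager.getConnection("jdbc:odbc:SalesFigures", "user", "password");
        System.out.println("Connected: " + !con.isClosed());
        con.close();
    }
}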

JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.

 

6.6.1. JDBC Goals:

Few software packages are designed without goals in mind. JDBC is no exception: its design was driven by a set of explicit goals. These goals, in conjunction with early reviewer feedback, shaped the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows more error checking to be done at compile time; consequently, fewer errors appear at runtime.

Keep the common cases simple

Because more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
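
To illustrate such a common case, the following sketch runs a plain SELECT through JDBC; the JDBC URL, table name and column names are assumptions made for this example only.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// A common-case JDBC query: execute a simple SELECT and walk the result set.
public class SimpleQuery {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection("jdbc:odbc:SalesFigures");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT name, amount FROM orders");
        while (rs.next()) {
            System.out.println(rs.getString("name") + " : " + rs.getInt("amount"));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}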

Finally, we decided to proceed with the implementation using Java networking.

For dynamically updating the cache table, we use an MS Access database.

Java has two things: a programming language and a platform.

Java is a high-level programming language that is all of the following:

  • Simple
  • Object-oriented
  • Distributed
  • Interpreted
  • Robust
  • Secure
  • Architecture-neutral
  • Portable
  • High-performance
  • Multithreaded
  • Dynamic

Java is also unusual in that each Java program is both compiled and interpreted. The compiler translates a Java program into an intermediate language called Java byte codes: platform-independent code that is then passed to and run by the Java interpreter on the computer.

Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.

(Figure: the Java program is translated by the compiler into byte codes, which the interpreter then runs as the executing program.)
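
As a minimal illustration of this compile-once, interpret-everywhere cycle (the class name is arbitrary):

// HelloWorld.java: compiled once into byte codes with "javac HelloWorld.java",
// then run by the interpreter on any Java platform with "java HelloWorld".
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world");
    }
}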

6.7 NETWORKING TCP/IP STACK:

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams:

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP:

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to give a client/server model – see later.
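
A small sketch in Java of this connectionless style, using java.net.DatagramSocket; the destination host and port number are placeholders:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Sends a single UDP datagram: no connection is set up and delivery is not guaranteed.
public class UdpSend {
    public static void main(String[] args) throws Exception {
        byte[] data = "hello".getBytes();
        InetAddress address = InetAddress.getByName("localhost");
        DatagramSocket socket = new DatagramSocket();
        DatagramPacket packet = new DatagramPacket(data, data.length, address, 7777);
        socket.send(packet);
        socket.close();
    }
}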

TCP:

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32 bit integer which gives the IP address.

Network address:

Class A uses 8 bits for the network address with 24 bits left over for other addressing. Class B uses 16 bit network addressing. Class C uses 24 bit network addressing and class D uses all 32.

Subnet address:

Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.

Host address:

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total address:

The 32 bit address is usually written as 4 integers separated by dots.
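
In Java, such an address can be obtained and printed in its dotted form with java.net.InetAddress; the host name below is only an example.

import java.net.InetAddress;

// Resolves a host name and prints its IP address as four integers separated by dots.
public class AddressLookup {
    public static void main(String[] args) throws Exception {
        InetAddress address = InetAddress.getByName("www.example.com");
        System.out.println(address.getHostAddress());
    }
}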

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a message to a server, you send it to the port for that service of the host that it is running on. This is not location transparency! Certain of these ports are “well known”.

Sockets:

A socket is a data structure maintained by the system to handle network connections. A socket is created using the socket call. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>

/* family: the address family (AF_INET for IP); type: SOCK_STREAM for TCP or
   SOCK_DGRAM for UDP; protocol: normally 0. Returns a socket descriptor. */
int socket(int family, int type, int protocol);

Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
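
Since the implementation proceeds with Java networking, the corresponding idea in Java is the java.net.Socket class, which hides the raw socket call. The sketch below connects to a hypothetical echo-style service; the host name and port number are placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// A TCP client: connects to a host and port, writes one line and reads one line back.
public class TcpClient {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 7);   // stream (TCP) socket to a well-known port
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        out.println("hello");
        System.out.println(in.readLine());
        socket.close();
    }
}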

6.8 JFREE CHART:

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:

  • A consistent and well-documented API, supporting a wide range of chart types;
  • A flexible design that is easy to extend, and targets both server-side and client-side applications;
  • Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG);

JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.
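
As a hedged example of typical JFreeChart usage (assuming a 1.0.x release, where ChartUtilities and DefaultPieDataset are available; the category names, values and output file are placeholders):

import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

// Builds a small pie chart and writes it to a PNG file.
public class PieExample {
    public static void main(String[] args) throws Exception {
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Java", 60);
        dataset.setValue("Other", 40);
        JFreeChart chart = ChartFactory.createPieChart(
                "Language share", dataset, true, true, false);  // title, data, legend, tooltips, URLs
        ChartUtilities.saveChartAsPNG(new File("pie.png"), chart, 500, 300);
    }
}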

 

6.8.1. Map Visualizations:

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: Sourcing freely redistributable vector outlines for the countries of the world, states/provinces in particular countries (USA in particular, but also other areas);

Creating an appropriate dataset interface (plus default implementation), a renderer, and integrating this with the existing XYPlot class in JFreeChart; testing, documenting, testing some more, documenting some more.

6.8.2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.

6.8.3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.

 

6.8.4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

CHAPTER 7

7.0 APPENDIX

7.1 SAMPLE SCREEN SHOTS:

7.2 SAMPLE SOURCE CODE:

CHAPTER 8

8.1 CONCLUSION

In this paper, we have proposed a multi-keyword ranked search scheme to enable accurate, efficient and secure search over encrypted mobile cloud data. The security analysis has demonstrated that the proposed scheme can effectively achieve confidentiality of documents and index, trapdoor privacy, trapdoor unlinkability, and concealment of the search user's access pattern. Extensive performance evaluations have shown that the proposed scheme achieves better efficiency in terms of functionality and computation overhead compared with existing ones. In future work, we will investigate authentication and access control issues in searchable encryption.

EMR: A Scalable Graph-based Ranking Model for Content-based Image Retrieval

Abstract—Graph-based ranking models have been widely applied in information retrieval area. In this paper, we focus on a wellknown graph-based model – the Ranking on Data Manifold model, or Manifold Ranking (MR). Particularly, it has been successfullyapplied to content-based image retrieval, because of its outstanding ability to discover underlying geometrical structure of the givenimage database. However, manifold ranking is computationally very expensive, which significantly limits its applicability to largedatabases especially for the cases that the queries are out of the database (new samples). We propose a novel scalable graph-basedranking model called Efficient Manifold Ranking (EMR), trying to address the shortcomings of MR from two main perspectives:scalable graph construction and efficient ranking computation. Specifically, we build an anchor graph on the database instead of atraditional k-nearest neighbor graph, and design a new form of adjacency matrix utilized to speed up the ranking. An approximatemethod is adopted for efficient out-of-sample retrieval. Experimental results on some large scale image databases demonstrate thatEMR is a promising method for real world retrieval applications.Index Terms—Graph-based algorithm, ranking model, image retrieval, out-of-sample1 INTRODUCTIONGRAPH-BASED ranking models have been deeplystudied and widely applied in information retrievalarea. In this paper, we focus on the problem of applyinga novel and efficient graph-based model for contentbasedimage retrieval (CBIR), especially for out-of-sampleretrieval on large scale databases.Traditional image retrieval systems are based on keywordsearch, such as Google and Yahoo image search. Inthese systems, a user keyword (query) is matched withthe context around an image including the title, manualannotation, web document, etc. These systems don’tutilize information from images. However these systemssuffer many problems, such as shortage of the text informationand inconsistency of the meaning of the text andimage. Content-based image retrieval is a considerablechoice to overcome these difficulties. CBIR has drawn agreat attention in the past two decades [1]–[3]. Differentfrom traditional keyword search systems, CBIR systems utilizethe low-level features, including global features (e.g.,color moment, edge histogram, LBP [4]) and local features(e.g., SIFT [5]), automatically extracted from images. A great• B. Xu, J. Bu, C. Chen, and C. Wang are with the Zhejiang ProvincialKey Laboratory of Service Robot, College of Computer Science, ZhejiangUniversity, Hangzhou 310027, China.E-mail: {xbzju, bjj, chenc, wcan}@zju.edu.cn.D. Cai and X. He are with the State Key Lab of CAD&CG, Collegeof Computer Science, Zhejiang University, Hangzhou 310027, China.E-mail: {dengcai, xiaofeihe}@cad.zju.edu.cn.Manuscript received 9 Oct. 2012; revised 7 Apr. 2013; accepted 22 Apr. 2013.Date of publication 1 May 2013; date of current version 1 Dec. 2014.Recommended for acceptance by H. Zha.For information on obtaining reprints of this article, please send e-mail to:reprints@ieee.org, and reference the Digital Object Identifier below.Digital Object Identifier 10.1109/TKDE.2013.70amount of researches have been performed for designingmore informative low-level features to represent images,or better metrics (e.g., DPF [6]) to measure the perceptualsimilarity, but their performance is restricted by many conditionsand is sensitive to the data. Relevance feedback [7]is a useful tool for interactive CBIR. 
User’s high level perceptionis captured by dynamically updated weights basedon the user’s feedback.Most traditional methods focus on the data features toomuch but they ignore the underlying structure information,which is of great importance for semantic discovery,especially when the label information is unknown. Manydatabases have underlying cluster or manifold structure.Under such circumstances, the assumption of label consistencyis reasonable [8], [9]. It means that those nearby datapoints, or points belong to the same cluster or manifold,are very likely to share the same semantic label. This phenomenonis extremely important to explore the semanticrelevance when the label information is unknown. In ouropinion, a good CBIR system should consider images’ lowlevelfeatures as well as the intrinsic structure of the imagedatabase.Manifold Ranking (MR) [9], [10], a famous graph-basedranking model, ranks data samples with respect to theintrinsic geometrical structure collectively revealed by alarge number of data. It is exactly in line with our consideration.MR has been widely applied in many applications,and shown to have excellent performance and feasibilityon a variety of data types, such as the text [11], image[12], [13], and video[14]. By taking the underlying structureinto account, manifold ranking assigns each data sample arelative ranking score, instead of an absolute pairwise similarityas traditional ways. The score is treated as a similarity1041-4347 c_ 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.XU ET AL.: EMR: A SCALABLE GRAPH-BASED RANKING MODEL FOR CONTENT-BASED IMAGE RETRIEVAL 103metric defined on the manifold, which is more meaningfulto capturing the semantic relevance degree. He et al. [12]firstly applied MR to CBIR, and significantly improvedimage retrieval performance compared with state-of-the-artalgorithms.However, manifold ranking has its own drawbacks tohandle large scale databases – it has expensive computationalcost, both in graph construction and ranking computationstages. Particularly, it is unknown how to handlean out-of-sample query (a new sample) efficiently underthe existing framework. It is unacceptable to recompute themodel for a new query. That means, original manifold rankingis inadequate for a real world CBIR system, in whichthe user provided query is always an out-of-sample.In this paper, we extend the original manifold rankingand propose a novel framework named Efficient ManifoldRanking (EMR). We try to address the shortcomings ofmanifold ranking from two perspectives: the first is scalablegraph construction; and the second is efficient computation,especially for out-of-sample retrieval. Specifically, webuild an anchor graph on the database instead of the traditionalk-nearest neighbor graph, and design a new form ofadjacency matrix utilized to speed up the ranking computation.The model has two separate stages: an offline stagefor building (or learning) the ranking model and an onlinestage for handling a new query. With EMR, we can handle adatabase with 1 million images and do the online retrievalin a short time. To the best of our knowledge, no previousmanifold ranking based algorithm has run out-of-sampleretrieval on a database in this scale.A preliminary version of this work previously appearedas [13]. 
In this paper, the new contributions are as follows:• We pay more attention to the out-of-sample retrieval(online stage) and propose an efficient approximatemethod to compute ranking scores for a new queryin Section 4.5. As a result, we can run out-ofsampleretrieval on a large scale database in a shorttime.• We have optimized the EMR code1 and re-run all theexperiments (Section 5). Three new databases includingtwo large scale databases with about 1 millionssamples are added for testing the efficiency of theproposed model. We offer more detailed analysis forexperimental result.• We formally define the formulation of local weightestimation problem (Section 4.1.1) for buildingthe anchor graph and two different methods arecompared to determine which method is better(Section 5.2.2).The rest of this paper is organized as follows. InSection 2, we briefly discuss some related work and inSection 3, we review the algorithm of MR and makean analysis. The proposed approach EMR is described inSection 4. In Section 5, we present the experiment resultson many real world image databases. Finally we provide aconclusions in Section 6.1. http://eagle.zju.edu.cn/∼binxu/2 RELATED WORKThe problem of ranking has recently gained great attentionsin both information retrieval and machine learning areas.Conventional ranking models can be content based models,like the Vector Space Model, BM25, and the language modeling[15]; or link structure based models, like the famousPageRank [16] and HITS [17]; or cross media models [18].Another important category is the learning to rank model,which aims to optimize a ranking function that incorporatesrelevance features and avoids tuning a large numberof parameters empirically [19], [20]. However, many conventionalmodels ignore the important issue of efficiency,which is crucial for a real-time systems, such as a web application.In [21], the authors present a unified framework forjointly optimizing effectiveness and efficiency.In this paper, we focus on a particular kind of rankingmodel – graph-based ranking. It has been successfullyapplied in link-structure analysis of the web [16], [17], [22]–[24], social networks research [25]–[27] and multimedia dataanalysis [28]. Generally, a graph [29] can be denoted asG = (V, E,W), where V is a set of vertices in which eachvertex represents a data point, E V × V is a set of edgesconnecting related vertices, and W is a adjacency matrixrecording the pairwise weights between vertices. The objectof a graph-based ranking model is to decide the importanceof a vertex, based on local or global information draw fromthe graph.Agarwal [30] proposed to model the data by a weightedgraph, and incorporated this graph structure into the rankingfunction as a regularizer. Guan et al. [26] proposed agraph-based ranking algorithm for interrelated multi-typeresources to generate personalized tag recommendation.Liu et al. [25] proposed an automatically tag ranking schemeby performing a random walk over a tag similarity graph.In [27], the authors made the music recommendation byranking on a unified hypergraph, combining with richsocial information and music content. Hypergraph is a newgraph-based model and has been studied in many works[31]. Recently, there have been some papers on speeding upmanifold ranking. 
In [32], the authors partitioned the datainto several parts and computed the ranking function by ablock-wise way.3 MANIFOLD RANKING REVIEWIn this section, we briefly review the manifold ranking algorithmand make a detailed analysis about its drawbacks.Westart form the description of notations.3.1 Notations and FormulationsGiven a set of data χ = {x1, x2, . . . , xn} ⊂ Rm and builda graph on the data (e.g., kNN graph). W Rn×n denotesthe adjacency matrix with element wij saving the weight ofthe edge between point i and j. Normally the weight canbe defined by the heat kernel wij = exp [ − d2(xi, xj)/2σ2)]if there is an edge linking xi and xj, otherwise wij = 0.Function d(xi, xj) is a distance metric of xi and xj definedon χ, such as the Euclidean distance. Let r:χ R be aranking function which assigns to each point xi a rankingscore ri. Finally, we define an initial vector y = [y1, . . . , yn]T,in which yi = 1 if xi is a query and yi = 0 otherwise.104 IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 27, NO. 1, JANUARY 2015The cost function associated with r is defined to beO(r) = 12⎛⎝_ni,j=1wij_ 1 √Diiri − 1 _Djjrj_2 + μ_ni=1_ri yi_2⎞⎠,(1)where μ > 0 is the regularization parameter and D is adiagonal matrix with Dii =_nj=1 wij.The first term in the cost function is a smoothness constraint,which makes the nearby points in the space havingclose ranking scores. The second term is a fitting constraint,which means the ranking result should fit to theinitial label assignment. With more prior knowledge aboutthe relevance or confidence of each query, we can assigndifferent initial scores to the queries. Minimizing the costfunction respect to r results into the following closed formsolutionr∗ = (In αS)−1y, (2)where α = 11+μ, In is an identity matrix with n×n, and S isthe symmetrical normalization of W, S = D−1/2WD−1/2. Inlarge scale problems, we prefer to use the iteration scheme:r(t + 1) = αSr(t) + (1 − α)y. (3)During each iteration, each point receives informationfrom its neighbors (first term), and retains its initial information(second term). The iteration process is repeateduntil convergence. When manifold ranking is applied toretrieval (such as image retrieval), after specifying a queryby the user, we can use the closed form or iteration schemeto compute the ranking score of each point. The rankingscore can be viewed as a metric of the manifold distancewhich is more meaningful to measure the semanticrelevance.3.2 AnalysisAlthough manifold ranking has been widely used in manyapplications, it has its own drawbacks to handle large scaledatabased, which significantly limits its applicability.The first is its graph construction method. The kNNgraph is quite appropriate for manifold ranking becauseof its good ability to capture local structure of the data. Butthe construction cost for kNN graph is O(n2 log k), whichis expensive in large scale situations. Moreover, manifoldranking, as well as many other graph-based algorithmsdirectly use the adjacency matrix W in their computation.The storage cost of a sparse W is O(kn). Thus, we need tofind a way to build a graph in both low construction costand small storage space, as well as good ability to captureunderlying structure of the given database.The second, manifold ranking has very expensive computationalcost because of the matrix inversion operationin equation (2). This has been the main bottleneck to applymanifold ranking in large scale applications. 
Although wecan use the iteration algorithm in equation (3), it is stillinefficient in large scale cases and may arrive at a local convergence.Thus, original manifold ranking is inadequate fora real-time retrieval system.4 EFFICIENT MANIFOLD RANKINGWe address the shortcomings of original MR from twoperspectives: scalable graph construction and efficient rankingcomputation. Particularly, our method can handle theout-of-sample retrieval, which is important for a real-timeretrieval system.4.1 Scalable Graph ConstructionTo handle large databases, we want the graph constructioncost to be sub-linear with the graph size. That means, foreach data point, we can’t search the whole database, as kNNstrategy does. To achieve this requirement, we constructan anchor graph [33], [34] and propose a new design ofadjacency matrix W.The definitions of anchor points and anchor graph haveappeared in some other works. For instance, in [35], theauthors proposed that each data point on the manifoldcan be locally approximated by a linear combination of itsnearby anchor points, and the linear weights become itslocal coordinate coding. Liu et al. [33] designed the adjacencymatrix in a probabilistic measure and used it forscalable semi-supervised learning. This work inspires usmuch.4.1.1 Anchor Graph ConstructionNow we introduce how to use anchor graph to modelthe data [33], [34]. Suppose we have a data set χ ={x1, . . . , xn} ⊂ Rm with n samples in m dimensions, andU = {u1, . . . , ud} ⊂ Rm denotes a set of anchors sharingthe same space with the data set. Let f :χ R be a realvalue function which assigns each data point in χ a semanticlabel. We aim to find a weight matrix Z Rd×n thatmeasures the potential relationships between data pointsin χ and anchors in U. Then we estimate f (x) for each datapoint as a weighted average of the labels on anchors       f(xi) =_dk=1zkif (uk), i = 1, . . . , n, (4)with constraints_dk=1 zki = 1 and zki ≥ 0. Element zki representsthe weight between data point xi and anchor uk. Thekey point of the anchor graph construction is how to computethe weight vector zi for each data point xi. Two issuesneed to be considered: (1) the quality of the weight vectorand (2) the cost of the computation.Similar to the idea of LLE [8], a straightforward wayto measure the local weight is to optimize the followingconvex problem:minziε(zi) = 12_xi −_|N(xi)|s=1 usN(xi)zis_2s.t._s zis = 1, zi ≥ 0,(5)where N(xi) is the index set of xi’s nearest anchors. Wecall the above problem as the local weight estimation problem.A standard quadratic programming (QP) can solve thisproblem, but QP is very computational expensive. A projectedgradient based algorithm was proposed in [33] tocompute weight matrix and in our previous work [13], akernel regression method was adopted. In this paper, wecompare these two different methods to find the weightvector zi. Both of them are much faster than QP.XU ET AL.: EMR: A SCALABLE GRAPH-BASED RANKING MODEL FOR CONTENT-BASED IMAGE RETRIEVAL 105(1) Solving by Projected GradientThe first method is the projected gradient method, whichhas been used in the work of [33]. The updating rule in thismethod is expressed as the following iterative formula [33]:z(t+1)i= _s(z(t)iηtε(zti)), (6)where ηt denotes the step size of time t, ∇ε(z) denotes thegradient of ε at z, and _s(z) denotes the simplex projectionoperator on any z ∈ Rs. 
Detailed algorithm can be foundin Algorithm 1 of [33].(2) Solving by Kernel RegressionWe adopt the Nadaraya-Watson kernel regression toassign weights smoothly [13]zki =K|xiuk|λ__dl=1 K|xiul|λ_, (7)with the Epanechnikov quadratic kernelKλ(t) =_34(1 − t2) if |t| ≤ 1;0 otherwise.(8)The smoothing parameter λ determines the size of thelocal region in which anchors can affect the target point. Itis reasonable to consider that one data point has the samesemantic label with its nearby anchors in a high probability.There are many ways to determine the parameter λ. Forexample, it can be a constant selected by cross-validationfrom a set of training data. In this paper we use a morerobust way to get λ, which uses the nearest neighborhoodsize s to replace λ, that isλ(xi) = |xi u[s]|, (9)where u[s] is the sth closest anchor of xi. Later in the experimentpart, we’ll discuss the effectiveness and efficiency ofthe above two methods.Specifically, to build the anchor graph, we connect eachsample to its s nearest anchors and then assign the weights.So the construction has a total complexity O(nd log s), whered is the number of anchors and s is very small. Thus, thenumber of anchors determines the efficiency of the anchorgraph construction. If d  n, the construction is linear tothe database.How can we get the anchors? Active learning [36], [37] orclustering methods are considerable choices. In this paper,we use k-means algorithm and select the centers as anchors.Some fast k-means algorithms [38] can speed up the computation.Random selection is a competitive method which hasextremely low selection cost and acceptable performance.The main feature, also the main advantage of buildingan anchor graph is separating the graph construction intotwo parts – anchor selection and graph construction. Eachdata sample is independent to the other samples but relatedto the anchors only. The construction is always efficientsince it has linear complexity to the date size. Note that wedon’t have to update the anchors frequently, as informativeanchors for a large database are relatively stable (e.g., thecluster centers), even if a few new samples are added.4.1.2 Design of Adjacency MatrixWe present a new approach to design the adjacency matrixW and make an intuitive explanation for it. The weightmatrix Z Rd×n can be seen as a d dimensional representationof the data X Rm×n, d is the number of anchorpoints. That is to say, data points can be represented inthe new space, no matter what the original features are.This is a big advantage to handle some high dimensionaldata. Then, with the inner product as the metric to measurethe adjacent weight between data points, we designthe adjacency matrix to be a low-rank form [33], [39]W = ZTZ, (10)which means that if two data points are correlative (Wij >0), they share at least one common anchor point, otherwiseWij = 0. By sharing the same anchors, data pointshave similar semantic concepts in a high probability as ourconsideration. Thus, our design is helpful to explore thesemantic relationships in the data.This formula naturally preserves some good propertiesof W: sparseness and nonnegativeness. The highly sparsematrix Z makes W sparse, which is consistent with theobservation that most of the points in a graph have onlya small amount of edges with other points. The nonnegativeproperty makes the adjacent weight more meaningful:in real world data, the relationship between two items isalways positive or zero, but not negative. 
Moreover, nonnegativeW guarantees the positive semidefinite property ofthe graph Laplacian in many graph-based algorithms [33].4.2 Efficient Ranking ComputationAfter graph construction, the main computational cost formanifold ranking is the matrix inversion in equation (2),whose complexity is O(n3). So the data size n can not betoo large. Although we can use the iteration algorithm, itis still inefficient for large scale cases.One may argue that the matrix inversion can be done offline,then it is not a problem for on-line search. However,off-line calculation can only handle the case when the queryis already in the graph (an in-sample). If the query is notin the graph (an out-of-sample), for exact graph structure,we have to update the whole graph to add the new queryand compute the matrix inversion in equation (2) again.Thus, the off-line computation doesn’t work for an out-ofsamplequery. Actually, for a real CBIR system, user’s queryis always an out-of-sample.With the form of W = ZTZ , we can rewrite the equation(2), the main step of manifold ranking, by Woodburyformula as follows. Let H = ZD−12 , and S = HTH, then thefinal ranking function r can be directly computed byr∗ = (In αHTH)−1y =In HT_HHT − 1αId_−1H_y.(11)By equation (11), the inversion part (taking the mostcomputational cost) changes from a n×n matrix to a d×dmatrix. If d  n, this change can significantly speed upthe calculation of manifold ranking. Thus, applying ourproposed method to a real-time retrieval system is viable,which is a big shortage for original manifold ranking.During the computation process, we never use the adjacencymatrix W. So we don’t save the matrix W in memory,106 IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 27, NO. 1, JANUARY 2015but save matrix Z instead. In equation (11), D is a diagonalmatrix with Dii =_nj=1 wij. When W = ZTZ,Dii =_nj=1zTizj = zTiv, (12)where zi is the ith column of Z and v =_nj=1 zj. Thus weget the matrix D without using W.A useful trick for computing r∗ in equation (11) is runningit from right to left. So every time we multiply a matrixby a vector, avoiding the matrix – matrix multiplication.As a result, to compute the ranking function, EMR has acomplexity O(dn + d3).4.3 Complexity AnalysisIn this subsection, we make a comprehensive complexityanalysis of MR and EMR, including the computation costand storage cost. As we have mentioned, both MR andEMR have two stages: the graph construction stage andthe ranking computation stage.For the model of MR:• MR builds a kNN graph, i.e., for each data sample,we need to calculate the relationships to its k-nearestneighbors. So the computation cost is O(n2 log k). Atthe same time, we save the adjacency matrix W Rn×n with a storage cost O(kn) since W is sparse.• In the ranking computation stage, the main stepis to compute the matrix inversion in 2, which isapproximately O(n3).For the model of EMR:• EMR builds an anchor graph, i.e., for each data sample,we calculate the relationships to its s-nearestanchors. The computation cost is O(nd log s). We usek-means to select the anchors, we need a cost ofO(Tdn), where T is the iteration number. But thisselection step can be done off-line and unnecessarilyupdated frequently. 
At the same time, wesave the sparse matrix Z Rd×n with a storagecost O(sn).• In the ranking computation stage, the main step isEq.(11), which has a computational complexity ofO(dn + d3).As a result, EMR has a computational cost of O(dn) +O(d3) (ignoring s, T) and a storage cost O(sn), while MR hasa computational cost of O(n2) + O(n3) and a storage costO(kn). Obviously, when d  n, EMR has a much lower costthan MR in computation.4.4 EMR for Content-Based Image RetrievalIn this part, we make a brief summary of EMR applied topure content-based image retrieval. To add more information,we just extend the data features.First of all, we extract the low-level features of imagesin the database, and use them as coordinates of data pointsin the graph. We will further discuss the low-level featuresin Section 5. Secondly, we select representative points asanchors and construct the weight matrix Z with a smallneighborhood size s. Anchors are selected off-line and doesFig. 1. Extend matrix W (MR) and Z (EMR) in the gray regions for anout-of-sample.not affect the on-line process. For a stable data set, we don’tfrequently update the anchors. At last, after the user specifyingor uploading an image as a query, we get or extract itslow-level features, update the weight matrix Z, and directlycompute the ranking scores by equation (11). Images withhighest ranking scores are considered as the most relevantand return to the user.4.5 Out-of-Sample RetrievalFor in-sample data retrieval, we can construct the graphand compute the matrix inversion part of equation (2) offline.But for out-of-sample data, the situation is totallydifferent. A big limitation of MR is that, it is hard to handlethe new sample query. A fast strategy for MR is leavingthe original graph unchanged and adding a new row anda new column to W (left picture of Fig. 1). Although thenew W is efficiently to compute, it is not helpful for theranking process (Eq.(2)). Computing Eq.(2) for each newquery in the online stage is unacceptable due to its highcomputational cost.In [40], the authors solve the out-of-sample problemby finding the nearest neighbors of the query and usingthe neighbors as query points. They don’t add the queryinto the graph, therefore their database is static. However,their method may change the query’s initial semantic meaning,and for a large database, the linear search for nearestneighbors is also costly.In contrast, our model EMR can efficiently handle thenew sample as a query for retrieval. In this subsection,we describe the light-weight computation of EMR for anew sample query. We want to emphasize that this is abig improvement over our previous conference version ofthis work, which makes EMR scalable for large-scale imagedatabases (e.g., 1 million samples). We show the algorithmas follows.For one instant retrieval, it is unwise to update the wholegraph or rebuild the anchors, especially on a large database.We believe one point has little effect to the stable anchorsin a large data set (e.g., cluster centers). For EMR, each datapoint (zi) is independently computed, so we assign weightsbetween the new query and its nearby anchors, forming anew column of Z (right picture of Fig. 1).We use zt to denote the new column. Then, Dt = zTtvand ht = ztD−12t , where ht is the new column of H. As wehave described, the main step of EMR is Eq.(11). Our goalis to further speedup the computation of Eq.(11) for a newquery. 
LetC =_HHT − 1αId_−1=_ni=1hihTi− 1αId_−1, (13)XU ET AL.: EMR: A SCALABLE GRAPH-BASED RANKING MODEL FOR CONTENT-BASED IMAGE RETRIEVAL 107Fig. 2. COREL image samples randomly selected from semantic conceptballoon, beach, and butterfly.and the new C_ with adding the column ht isC_ =_ni=1hihTi+ hthTt− 1αId_−1≈ C (14)when n is large and ht is highly sparse. We can see thematrix C as the inverse of a covariance matrix. The aboveequation says that one single point would not affect thecovariance matrix of a large database. That is to say, thecomputation of C can be done in the off-line stage.The initial query vector yt isyt =_0n1_, (15)where 0n is a n-length zero vector. We can rewrite Eq.(11)with the new query asr(n+1)×1 =_In+1 −_HTChTtC_[H ht]__0n1_. (16)Our focus is the top n elements of r, which is equal torn×1 = −HTCht = Eht. (17)The matrix En×d = −HTC can be computed offline, i.e., inthe online stage, we need to compute a multiplication of an × d matrix and a d × 1 vector only. As ht is sparse (e.g., snon-zero elements), the essential computation is to select scolumns of E according to ht and do a weighted summation.As a result, we need to do sn scalar multiplications and(s − 1)n scalar additions to get the ranking score (rn×1) foreach database sample; while for linear scan using Euclideandistance, we need to do mn scalar subtractions, mn scalarmultiplications and (m−1)n scalar additions. As s  m, ourmodel EMR is much faster than linear scan using Euclideandistance in the online stage.5 EXPERIMENTAL STUDYIn this section, we show several experimental results andcomparisons to evaluate the effectiveness and efficiency ofour proposed method EMR on four real world databases:two middle size databases COREL (5,000 images) andMNIST (70,000 images), and two large size databasesSIFT1M (1 million sift descriptors) and ImageNet (1.2 millionimages). We use COREL and MNIST to compare theranking performance and use SIFT1M and ImageNet toshow the efficiency of EMR for out-of-sample retrieval. OurTABLE 1Statistics of the Four Databasesexperiments are implemented in MATLAB and run on acomputer with 2.0 GHz(×2) CPU, 64GB RAM.5.1 Experiments SetupThe COREL image data set is a subset of COREL imagedatabase consisting of 5,000 images. COREL is widely usedin many CBIR works [2], [41], [42]. All of the images arefrom 50 different categories, with 100 images per category.Images in the same category belong to the same semanticconcept, such as beach, bird, elephant and so on. That isto say, images from the same category are judged relevantand otherwise irrelevant. We use each image as a queryfor testing the in-sample retrieval performance. In Fig. 2,we randomly select and show nine image samples fromthree different categories. In our experiments, we extractfour kinds of effective features for COREL database, includingGrid Color Moment, edge histogram, Gabor WaveletsTexture, Local Binary Pattern and GIST feature. As a result,a 809-dimensional vector is used for each image [43].The MNIST database2 of handwritten digits has a set of70,000 examples. The images were centered in a 28 × 28image by computing the center of mass of the pixels, andtranslating the image so as to position this point at the centerof the 28 × 28 field. We use the first 60,000 images asdatabase images and the rest 10,000 images as queries fortesting the out-of-sample retrieval performance. 
The normalizedgray-scale values for each pixel are used as imagefeatures.The SIFT1M database contains one million SIFT featuresand each feature is represented by a 128-dimensional vector.The ImageNet is an image database organized accordingto the WordNet nouns hierarchy, in which each node ofthe hierarchy is depicted by hundreds and thousands ofimages3. We downloaded about 1.2 million images’ BoWrepresentations. A visual vocabulary of 1,000 visual wordsis adopted, i.e., each image is represented by a 1,000-lengthvector. Due to the complex structure of the database andhigh diversity of images in each node, as well as the lowquality of simple BoW representation, the retrieval task isvery hard.We use SIFT1M and ImageNet databases to evaluatethe efficiency of EMR on large and high dimensional data.We randomly select 1,000 images as out-of-sample testqueries for each. Some basic statistics of the four databasesare listed in Table 1. For COREL, MNIST and SIFT1Mdatabases, the data samples have dense features, while forImageNet database, the data samples have sparse features.5.1.1 Evaluation Metric DiscussionThere are many measures to evaluate the retrieval resultssuch as precision, recall, F measure, MAP and NDCG [44].2. http://yann.lecun.com/exdb/mnist/3. http://www.image-net.org/index108 IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 27, NO. 1, JANUARY 2015They are very useful for a real CBIR application, especiallyfor a web application in which only the top returned imagescan attract user interests. Generally, the image retrievalresults are displayed screen by screen. Too many imagesin a screen will confuse the user and drop the experienceevidently. Images in the top pages attract the most interestsand attentions from the user. So the precision at K metricis significant to evaluate the image retrieval performance.MAP (Mean Average Precision) provides a single-figuremeasure of quality across recall levels. MAP has beenshown to have especially good discriminative power andstability. For a single query, Average Precision is the averageof the precision value obtained for the set of top k itemsexisting after each relevant item is retrieved, and this valueis then averaged over all queries [44]. That is, if the set ofrelevant items for a query qj Q is {d1, . . . , dmj} and Rjk isthe set of ranked retrieval results from the top result untilyou get to item dk, thenMAP(Q) = 1|Q||Q|_j=11mj_mjk=1Precision(Rjk). (18)NDCG is a wildly used metric to evaluate a ranked list[44]. NDCG@K is defined as:NDCG@K = 1IDCG×_Ki=12ri−1log2(i + 1), (19)where ri is 1 if the item at position i is a relevant item and0 otherwise. IDCG is chosen so that the perfect ranking hasa NDCG value 1.5.2 Experiments on COREL DatabaseThe goal of EMR is to improve the speed of manifold rankingwith acceptable ranking accuracy loss. We first compareour model EMR with the original manifold ranking (MR)and fast manifold ranking (FMR [32]) algorithm on CORELdatabase. As both MR and FMR are designed for in-sampleimage retrieval, we use each image as a query and evaluatein-sample retrieval performance. More comparison toranking with SVM can be found in our previous conferenceversion [13]. In this paper, we pay more attention onthe trade-off of accuracy and speed for EMR respect to MR,so we ignore the other methods.We first compare the methods without relevance feedback.Relevance feedback asks users to label some retrievedsamples, making the retrieval procedure inconvenient. 
Soif possible, we prefer an algorithm having good performancewithout relevance feedback. In Section 5.2.4, weevaluate the performance of the methods after one round ofrelevance feedback. MR-like algorithms can handle the relevancefeedback very efficiently – revising the initial scorevector y.5.2.1 Baseline AlgorithmEud: the baseline method using Euclidean distance forranking.MR: the original manifold ranking algorithm, the mostimportant comparison method. Our goal is to improvethe speed of manifold ranking with acceptable rankingaccuracy loss.TABLE 2Precision and Time Comparisons of TwoWeight Estimation MethodsFMR: fast manifold ranking [32] firstly partitions the datainto several parts (clustering) and computes the matrixinversion by a block-wise way. It uses the SVD techniquewhich is time consuming. So its computational bottleneckis transformed to SVD. When SVD is accurately solved,FMR equals MR. But FMR uses the approximate solution tospeed up the computation. We use 10 clusters and calculatethe approximation of SVD with 10 singular values. Higheraccuracy requires much more computational time.5.2.2 Comparisons of Two Weight Estimation Methodsfor EMRBefore the main experiment of comparing our algorithmEMR to some other models, we use a single experimentto decide which weight estimation method described inSection 4.1.1 should be adopted. We records the averageretrieval precision (each image is used as a query) and thecomputational time (seconds) of EMR with the two weightestimation methods in Table 2.From the table, we see that the two methods havevery close retrieval results. However, the projected gradientis much slower than kernel regression. In the rest ofour experiments, we use the kernel regression method toestimate the local weight (computing Z).5.2.3 PerformanceAn important issue needs to be emphasized: although wehave the image labels (categories), we don’t use them inour algorithm, since in real world applications, labeling isvery expensive. The label information can only be used toevaluation and relevance feedback.Each image is used as a query and the retrieval performanceis averaged. Fig. 3 prints the average precision (at 20to 80) of each method and Table 3 records the average valuesof recall, F1 score, NDCG and MAP (MAP is evaluatedonly for the top-100 returns). For our method EMR, 1000anchors are used. Later in the model selection part, we findthat using 500 anchors achieves a close performance. It iseasy to find that the performance of MR and EMR are veryclose, while FMR lose a little precision due to its approximationby SVD. As EMR’s goal is to improve the speedof manifold ranking with acceptable ranking accuracy loss,the performance results are not to show which method isbetter but to show the ranking performance of EMR is closeto MR on COREL.We also record the offline building time for MR, FMRand EMR in Table 3. For in-sample retrieval, all the threeXU ET AL.: EMR: A SCALABLE GRAPH-BASED RANKING MODEL FOR CONTENT-BASED IMAGE RETRIEVAL 109Fig. 4. Precision at the top 10 returns of the three algorithms on each category of COREL database.methods have the same steps and cost, so we ignore it onCOREL. We find that for a database with 5,000 images, allthe three methods have acceptable building time, and EMRis the most efficient. However, according to the the analysisin Section 4.3, MR’s computational cost is cubic to thedatabase size while EMR is linear to the database size. 
Theresult can be found in our experiments on MNIST database.The anchor points are computed off-line and do notaffect the current on-line retrieval system. In the workof [13], we have tested different strategies for anchorpoints selection, including normal k-means, fast k-meansand random anchors. The conclusion is that the cost andperformance are trade-offs in many situations.To see the performance distribution in the whole dataset more concretely, we plot the retrieval precision at top10 returns for all 50 categories in Fig. 4. As can be seen, theperformance of each algorithm varies with different categories.We find that EMR is fairly close to MR in almostevery categories, but for FMR, the distribution is totallydifferent.5.2.4 Performance with Relevance FeedbackRelevance Feedback [7] is a powerful interactive techniqueused to improve the performance of image retrieval systems.With user provided relevant/irrelevant informationon the retrieved images, The system can capture the semanticconcept of the query more correctly and graduallyimprove the retrieval precision.Fig. 3. Retrieval precision at top 20 to 80 returns of Eud (left), MR, FMRand EMR (right).Applying relevance feedback to EMR (as well as MR andFMR)is extremely simple.We update the initial vector y andrecompute the ranking scores.We use an automatic labelingstrategy to simulate relevance feedback: for each query, thetop 20 returns’ ground truth labels (relevant or irrelevant tothe query) are used as relevance feedbacks. It is performedfor one round, since the users have no patience to do more.The retrieval performance are plotted in Fig. 5. By relevancefeedback, MR, FMR and EMR get higher retrieval precisionbut still remain close to each other.5.2.5 Model SelectionModel selection plays a key role to many machine learningmethods. In some cases, the performance of an algorithmmay drastically vary by different choices of the parameters,thus we have to estimate the quality of the parameters. Inthis subsection, we evaluate the performance of our methodEMR with different values of the parameters.There are three parameters in our method EMR: s, α,and d. Parameter s is the neighborhood size in the anchorgraph. Small value of s makes the weight matrix Z verysparse. Parameter α is the tradeoff parameter in EMR andMR. Parameter d is the number of anchor points. ForTABLE 3Recall, F1, NCDG and MAP Values, as well as the OfflineBuilding Time (Seconds) of MR, FMR and EMR110 IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 27, NO. 1, JANUARY 2015Fig. 5. Retrieval precision at top 20 to 80 returns of Eud (left), MR, FMRand EMR (right) after one round of relevance feedback.convenience, the parameter α is fixed at 0.99, consistentwith the experiments performed in [9], [10], [12].Fig. 6 shows the performance of EMR (Precision at 60)by k-means anchors at different values of s. We find thatthe performance of EMR is not sensitive to the selection ofs when s > 3. With small s, we can guarantee the matrix Zhighly sparse, which is helpful to efficient computation. Inour experiments, we just select s = 5.Fig. 7 shows the performance of EMR versus differentnumber of anchors in the whole data set. We findthat the performance increases very slowly when thenumber of anchors is larger than 500 (approximately).In previous experiments, we fix the number of anchorsto 1000. Actually, a smaller number of anchors, like 800or 600 anchors, can achieve a close performance. Withfewer anchors, the graph construction cost will be furtherreduced. 
But as the size of COREL is not large, the savingis not important.5.3 Experiments on MNIST DatabaseWe also investigate the performance of our method EMR onthe MNIST database. The samples are all gray digit imagesin the size of 28 × 28. We just use the gray values on eachFig. 6. Retrieval precision versus different values of parameter s. Thedotted line represents MR performance.Fig. 7. Retrieval precision versus different number of anchorss. Thedotted line represents MR performance.pixel to represent the images, i.e., for each sample, we usea 784-dimensional vector to represent it. The database wasseparated into 60,000 training data and 10,000 testing data,and the goal is to evaluate the performance on the testingdata. Note that although it is called ’training data’, aretrieval system never uses the given labels. All the rankingmodels use the training data itself to build their modelsand rank the samples according to the queries. Similaridea can be found in many unsupervised hashing algorithms[45], [46] for approximate and fast nearest neighborsearch.With MNIST database, we want to evaluate the efficiencyand effectiveness of the model EMR. As we havementioned, MR’s cost is cubic to the database size, whileEMR is much faster. We record the training time (buildingthe model offline) of MR, FMR and EMR (1k anchors) inTable 4 with the database size increasing step by step. Therequired time for MR and FMR increases very fast and forthe last two sizes, their procedures are out of memory dueto inverse operation. The algorithm MR with the solutionof Eq.(2) is hard to handle the size of MNIST. FMR performseven worse than MR as it clusters the samples andcomputes a large SVD – it seems that FMR is only usefulfor small-size database. However, EMR is much fasterin this test. The time cost scales linearly – 6 seconds for10,000 samples and 35 seconds for 60,000 samples. We usek-means algorithm with maximum 5 iterations to generatethe anchor points. We find that running k-means with 5iterations is good enough for anchor point selection.TABLE 4Computational Time (s) for Offline Training of MR, FMR, andEMR (1k Anchors) on MNIST DatabaseXU ET AL.: EMR: A SCALABLE GRAPH-BASED RANKING MODEL FOR CONTENT-BASED IMAGE RETRIEVAL 111(a) (b) (c)Fig. 8. (a) MAP values with different number of anchors for EMR. (b) Offline training time of EMR with different number of anchors. (c) Online newquery retrieval time of EMR with different number of anchors on MNIST.5.3.1 Out-of-Sample Retrieval TestIn this section, we evaluate the response time of EMRwhen handling an out-of-sample (a new sample). As MR(as well as FMR)’s framework is hard to handle the outof-sample query and is too costly for training the modelon the size of MNIST (Table 4), from now on, we don’tuse MR and FMR as comparisons, but some other rankingscore (similarity or distance) generating methods should becompared. We use the following two methods as baselinemethods:Eud: linear scan by Euclidean distance. This maybe themost simple but meaningful baseline to compare the out-ofsampleretrieval performance. Many previous fast nearestneighbor search algorithms or hashing-based algorithmswere proposed to accelerate the linear scan speed withsome accuracy loss than Euclidean distance. Their goal isdifferent with ranking – the ranking model assigns eachsample a score but not only the neighbors.LSH: locality sensitive hashing [45], a famous hashing codegenerating method. 
We use LSH to generate binary codesfor the images for both training and testing samples andthen calculate the hamming distance of a query to alldatabase samples as ranking metric. We use 128 bits and256 bits as the code length of LSH.In Fig. 8(a), we draw the MAP (top 200) values for allthe testing data of our model EMR with different numberof anchor points. The performance of Eud and LSHare showed by three horizontal lines. We can see that,when more than 400 anchors are used, EMR outperformsEuclidean distance metric significantly. LSH is worse thanEud due to its binary representation. We also record EMR’soffline training time and online retrieval time in Fig. 8(b)and Fig. 8(c). The computational time for both offline andonline increases linearly to the number of anchors.Then, in Table 5, we record the computational time (inseconds) and out-of-sample retrieval performance of EMR(1000 anchors), Eud and LSH with 128 and 256 code length.The best performance of each line is in bold font. EMR andLSH-128 have close online retrieval time, which is greatlyfaster than linear scan Eud – about 30 times faster. LSHhas very small training cost as its hashing functions arerandomly selected, while EMR needs more time to buildthe model. With more offline building cost, EMR receiveshigher retrieval performance in metric of precision, NDCGat 100 and MAP. The offline cost is valuable. The numberwith ∗ means it is significant higher than Eud at the 0.001significance level.5.3.2 Case StudyFig. 9 is an out-of-sample retrieval case with Fig. 9(a) usingEuclidean distance to measure the similarity and Fig. 9(b)using EMR with 400 anchors and Fig. 9(c) with 600 anchors.Since the database structure is simple, we just need to usea small number of anchors to build our anchor graph.When we use 400 anchors, we have received a good result(Fig. 9(b)). Then, when we use more anchors, we can get abetter result. It is not hard to see that, the results of Fig. 9(b)and (c) are all correct, but the quality of Fig. 9(c) is a littlebetter – the digits are more similar with the query.5.4 Experiments on Large Scale DatabasesIn our consideration, the issue of performance shouldinclude both efficiency and effectiveness. Since our methodis designed to speedup the model ’manifold ranking’, theefficiency is the main point of this paper. The first severalexperiments are used to show that our model is muchfaster than MR in both offline training and online retrievalprocesses, with only a small accuracy loss. The originalMR model can not be directly applied to a large data set,e.g., a data set with 1 million samples. Thus, to show theperformance of our method for large data sets, we comparemany state-of-the-art hash-based fast nearest neighborsearch algorithms (our ranking model can naturally do theTABLE 5Out-of-Sample Retrieval Time (s) and Retrieval PerformanceComparisons of EMR (1k Anchors), Eud and LSH with128 and 256 Code Length on MNIST DatabaseThe best performance is in bold font. The number with means it is significanthigher than Eud at the 0.001 significance level.112 IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 27, NO. 1, JANUARY 2015Fig. 9. Top retrieved MNIST digits via (a) Euclidean distance, (b) EMR with 400 anchor points, and (c) EMR with 600 anchor points. 
The digit in thefirst line is a new query and the rest digits are the top returns.work of nearest neighbor search) on SIFT1M and ImageNetdatabases.For these two sets, there is no exact labels, so we followthe criterion used in many previous fast nearest neighborsearch work [46]: the groundtruth neighbors are obtainedby brute force search. We use the top-1 percent nearestneighbors as groundtruth. We record the computationaltime (offline training and online retrieval) and rankingperformance in Tables 6 and 7. The offline time is for trainingand the online time is for a query retrieval (averaged).We randomly select 1,000 images from the database asout-of-sample queries and evaluate the performance.For comparison, some state-of-the-art hashing methodsincluding LSH, Spectral Hashing [46] and SphericalHashing (a very recent proposed method [47]) are used.For EMR, we select 10% of the database samples to run kmeansalgorithm with maximum 5 iterations,which is veryfast. In the online stage, the hamming distances betweenthe query sample and the database samples are calculatedfor LSH, Spectral hashing and Spherical Hashing and thenthe distances are sorted. While for our method, we directlycompute the scores via Eq.(17) and sort them. If we adoptany filtering strategy to reduce the number of candidatesamples, the computational cost for each method would bereduced equally. So we only compare the largest computationalcost (brute force search). We adopt 64-bit binarycodes for SIFT1M and 128-bit for ImageNet for all the hashmethods.From Tables 6 and 7, we find that EMR has a comparableonline query cost, and a high nearest neighborsearch accuracy, especially on the high dimensional dataset ImageNet, showing its good performance.TABLE 6Computational Time (s) and Retrieval PerformanceComparison of EMR (1k Anchors), and LSH andSpherical Hash on SIFT1M Database(1 Million-Sample, 128-Dim)5.5 Algorithm AnalysisFrom the comprehensive experimental results above, weget a conclusion that our algorithm EMR is effective andefficient. It is appropriate for CBIR since it is friendly tonew queries. A core point of the algorithm is the anchorpoints selection. Two issues should be further discussed: thequality and the number of anchors. Obviously, our goal isto select less anchors with higher quality. We discuss themas follows:• How to select good anchor points? This is an openquestion. In our method, we use k-means clusteringcenters as anchors. So any faster or better clusteringmethods do help to the selection. There is a tradeoffbetween the selection speed and precision. However,the k-means centers are not perfect – some clustersare very close while some clusters are very small.There is still much space for improvement.• How many anchor points we need? There isno standard answer but our experiments providesome clues: SIFT1M and ImageNet databasesare larger than COREL, but they need similarnumber of anchors to receive acceptable results,i.e., the required number of anchors is not proportionalto the database size. This is important,otherwise EMR is less useful. 
The number of anchors is determined by the intrinsic cluster structure.

6 CONCLUSION
In this paper, we propose the Efficient Manifold Ranking algorithm, which extends the original manifold ranking to handle large-scale databases. EMR addresses the shortcomings of the original manifold ranking from two perspectives: the first is scalable graph construction, and the second is efficient computation, especially for out-of-sample retrieval. Experimental results demonstrate that EMR is feasible for large-scale image retrieval systems – it significantly reduces the computational time.

TABLE 7
Computational Time (s) and Retrieval Performance Comparison of EMR (1k Anchors), LSH, and Spherical Hashing on the ImageNet Database (1.2 Million Samples, 1k-Dim)

ACKNOWLEDGMENTS
This work was supported in part by the National Natural Science Foundation of China under Grants 61125203, 91120302, 61173186, 61222207, and 61173185; in part by the National Basic Research Program of China (973 Program) under Grant 2012CB316400; the Fundamental Research Funds for the Central Universities; the Program for New Century Excellent Talents in University (NCET-09-0685); the Zhejiang Provincial Natural Science Foundation under Grant Y1101043; and the Foundation of the Zhejiang Provincial Educational Department under Grant Y201018240.

Effective Key Management in Dynamic Wireless Sensor Networks

Effective Key Management in DynamicWireless Sensor NetworksAbstract—Recently, wireless sensor networks (WSNs) havebeen deployed for a wide variety of applications, includingmilitary sensing and tracking, patient status monitoring, trafficflow monitoring, where sensory devices often move betweendifferent locations. Securing data and communications requiressuitable encryption key protocols. In this paper, we propose acertificateless-effective key management (CL-EKM) protocol forsecure communication in dynamic WSNs characterized by nodemobility. The CL-EKM supports efficient key updates when anode leaves or joins a cluster and ensures forward and backwardkey secrecy. The protocol also supports efficient key revocationfor compromised nodes and minimizes the impact of a nodecompromise on the security of other communication links.A security analysis of our scheme shows that our protocol is effectivein defending against various attacks.We implement CL-EKMin Contiki OS and simulate it using Cooja simulator to assess itstime, energy, communication, and memory performance.Index Terms—Wireless sensor networks, certificateless publickey cryptography, key management scheme.I. INTRODUCTIONDYNAMIC wireless sensor networks (WSNs), whichenable mobility of sensor nodes, facilitate wider networkcoverage and more accurate service than static WSNs. Therefore,dynamic WSNs are being rapidly adopted in monitoringapplications, such as target tracking in battlefield surveillance,healthcare systems, traffic flow and vehicle status monitoring,dairy cattle health monitoring [9]. However, sensor devicesare vulnerable to malicious attacks such as impersonation,interception, capture or physical destruction, due to theirunattended operative environments and lapses of connectivityin wireless communication [20]. Thus, security is one ofthe most important issues in many critical dynamic WSNapplications. DynamicWSNs thus need to address key securityrequirements, such as node authentication, data confidentialityand integrity, whenever and wherever the nodes move.To address security, encryption key management protocolsfor dynamic WSNs have been proposed in the past basedManuscript received August 6, 2014; revised October 17, 2014; acceptedNovember 18, 2014. Date of publication December 4, 2014; date of currentversion January 13, 2015. This work was supported in part by the BrainKorea 21 Plus Project. The associate editor coordinating the review of thismanuscript and approving it for publication was Prof. Kui Q. Ren.S.-H. Seo is with the Center for Information Security Technologies, KoreaUniversity, Seoul 136-701, Korea (e-mail: seosh77@gmail.com).J. Won, S. Sultana, and E. Bertino are with the Department ofComputer Science, Purdue University, West Lafayette, IN 47907 USA(e-mail: won12@purdue.edu; ssultana@purdue.edu; bertino@purdue.edu).Color versions of one or more of the figures in this paper are availableonline at http://ieeexplore.ieee.org.Digital Object Identifier 10.1109/TIFS.2014.2375555on symmetric key encryption [1]–[3]. Such type of encryptionis well-suited for sensor nodes because of their limitedenergy and processing capability. However, it suffers from highcommunication overhead and requires large memory space tostore shared pairwise keys. It is also not scalable and notresilient against compromises, and unable to support nodemobility. Therefore symmetric key encryption is not suitablefor dynamic WSNs. 
More recently, asymmetric key basedapproaches have been proposed for dynamic WSNs [4]–[7],[10], [15], [18], [25], [27]. These approaches take advantageof public key cryptography (PKC) such as elliptic curvecryptography (ECC) or identity-based public key cryptography(ID-PKC) in order to simplify key establishment anddata authentication between nodes. PKC is relatively moreexpensive than symmetric key encryption with respect tocomputational costs. However, recent improvements in theimplementation of ECC [11] have demonstrated the feasibilityof applying PKC to WSNs. For instance, the implementationof 160-bit ECC on an Atmel AT-mega 128, which has an8-bit 8 MHz CPU, shows that an ECC point multiplicationtakes less than one second [11]. Moreover, PKC is moreresilient to node compromise attacks and is more scalableand flexible. However, we found the security weaknessesof existing ECC-based schemes [5], [10], [25] that theseapproaches are vulnerable to message forgery, key compromiseand known-key attacks. Also, we analyzed the critical securityflaws of [15] that the static private key is exposed to the otherwhen both nodes establish the session key. Moreover, theseECC-based schemes with certificates when directly appliedto dynamic WSNs, suffer from the certificate managementoverhead of all the sensor nodes and so are not a practicalapplication for large scale WSNs. The pairing operationbasedID-PKC [4], [18] schemes are inefficient due to thecomputational overhead for pairing operations. To the best ofour knowledge, efficient and secure key management schemesfor dynamic WSNs have not yet been proposed.In this paper, we present a certificateless effective keymanagement (CL-EKM) scheme for dynamic WSNs. In certificatelesspublic key cryptography (CL-PKC) [12], the user’sfull private key is a combination of a partial private keygenerated by a key generation center (KGC) and the user’s ownsecret value. The special organization of the full private/publickey pair removes the need for certificates and also resolves thekey escrow problem by removing the responsibility for theuser’s full private key. We also take the benefit of ECC keysdefined on an additive group with a 160-bit length as secureas the RSA keys with 1024-bit length.1556-6013 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.372 IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 2, FEBRUARY 2015In order to dynamically provide both node authenticationand establish a pairwise key between nodes, we buildCL-EKM by utilizing a pairing-free certificateless hybridsigncryption scheme (CL-HSC) proposed by us in an earlierwork [13], [14]. Due to the properties of CL-HSC, thepairwise key of CL-EKM can be efficiently shared betweentwo nodes without requiring taxing pairing operations andthe exchange of certificates. To support node mobility, ourCL-EKM also supports lightweight processes for cluster keyupdates executed when a node moves, and key revocation isexecuted when a node is detected as malicious or leaves thecluster permanently. CL-EKM is scalable in case of additionsof new nodes after network deployment. CL-EKM is secureagainst node compromise, cloning and impersonation, andensures forward and backward secrecy. The security analysisof our scheme shows its effectiveness. 
Below we summarizethe contributions of this paper:• We show the security weaknesses of existingECC based key management schemes for dynamicWSNs [10], [15], [25].• We propose the first certificateless effective keymanagement scheme (CL-EKM) for dynamic WSNs.CL-EKM supports four types of keys, each of whichis used for a different purpose, including securepair-wise node communication and group-oriented keycommunication within clusters. Efficient key managementprocedures are defined as supporting node movementsacross different clusters and key revocation process forcompromised nodes.• CL-EKM is implemented using Contiki OS and use aTI exp5438 emulator to measure the computation andcommunication overhead of CL-EKM. Also we developa simulator to measure the energy consumption ofCL-EKM. Then, we conduct the simulation of nodemovement by adopting the RandomWalk Mobility Modeland the Manhattan Mobility Model within the grid. Theexperimental results show that our CL-EKM scheme islightweight and hence suitable for dynamic WSNs.The remainder of this paper is organized as follows:In Section 2, we briefly discuss related work and show thesecurity weaknesses of the existing schemes. In Section 3, weprovide our network model and adversary model. In Section 4,we provide an overview of our CL-EKM. In Section 5, weintroduce the details of CL-EKM. In Section 6, we analyzethe security of CL-EKM. In Section 7, we evaluate theperformance of CL-EKM, conduct the simulation of nodemovement in Section 8, and conclude in Section 9.II. RELATED WORKSymmetric key schemes are not viable for mobile sensornodes and thus past approaches have focused only on staticWSNs. A few approaches have been proposed based on PKCto support dynamic WSNs. Thus, in this section, we reviewprevious PKC-based key management schemes for dynamicWSNs and analyze their security weaknesses or disadvantages.Chuang et al. [7] and Agrawal et al. [8] proposed atwo-layered key management scheme and a dynamickey update protocol in dynamic WSNs based on theDiffie-Hellman (DH), respectively. However, bothschemes [7], [8] are not suited for sensors with limitedresources and are unable to perform expensive computationswith large key sizes (e.g. at least 1024 bit). Since ECC iscomputationally more efficient and has a short key length(e.g. 160 bit), several approaches with certificate [5], [10],[15], [25] have been proposed based on ECC. However,since each node must exchange the certificate to establishthe pairwise key and verify each other’s certificate beforeuse, the communication and computation overhead increasedramatically. Also, the BS suffers from the overhead ofcertificate management. Moreover, existing schemes [5], [10],[15], [25] are not secure. Alagheband et al. [5] proposed a keymanagement scheme by using ECC-based signcryption, butthis scheme is insecure against message forgery attacks [16].Huang et al. [15] proposed a ECC-based key establishmentscheme for self-organizing WSNs. However, we found thesecurity weaknesses of their scheme. In step 2 of their scheme,a sensor node U sends z = qU · H(MacKey) + dU (modn)to the other node V for authentication, where qU is astatic private key of U. But, once V receives the z, itcan disclose qU, because V already got MacKey anddU in step 1. So, V can easily obtain qU by computingqU = (z dU) · H(MacKey)−1. Thus, the sensor node’sprivate key is exposed to the other node during the keyestablishment between two nodes. Zhang et al. 
[10] proposeda distributed deterministic key management scheme based onECC for dynamic WSNs. It uses the symmetric key approachfor sharing the pairwise key for existing nodes and uses anasymmetric key approach to share the pairwise keys for anew node after deployment. However, since the initial key KIis used to compute the individual keys and the pairwise keysafter deployment for all nodes, if an adversary obtains KI, theadversary has the ability to compute all individual keys andthe pairwise keys for all nodes. Thus, such scheme suffersfrom weak resilience to node compromises. Also, sincesuch scheme uses a simple ECC-based DH key agreementby using each node’s long-term public key and privatekey, the shared pairwise key is static and as a result, isnot secure against known-key attacks and cannot providere-key operation. Du et al. [25] use a ECDSA scheme toverify the identity of a cluster head and a static EC-Diffie-Hellman key agreement scheme to share the pairwise keybetween the cluster heads. Therefore, the scheme by Duet al. is not secure against known-key attacks, because thepairwise key between the cluster heads is static. On the otherhand, Du et al. use a modular arithmetic-based symmetrickey approach to share the pairwise key between a sensornode and a cluster head. Thus, a sensor node cannot directlyestablish a pairwise key with other sensor nodes and, instead,it requires the support of the cluster head. In their scheme, inorder to establish a pairwise key between two nodes in thesame cluster, the cluster head randomly generates a pairwisekey and encrypts it using the shared keys with these twonodes. Then the cluster head transmits the encrypted pairwisekey to each node. Thus, if the cluster head is compromised,the pairwise keys between non-compromised sensor nodesin the same cluster will also be compromised. Therefore,SEO et al.: EFFECTIVE KEY MANAGEMENT IN DYNAMIC WSNs 373Fig. 1. Heterogeneous dynamic wireless sensor network.their scheme is not compromise-resilient against clusterhead capture, because the cluster head randomly generates apairwise key between sensor nodes whenever it is requestedby the nodes. Moreover, in their scheme, in order to share apairwise key between two nodes in different clusters, thesetwo nodes must communicate via their respective clusterheads. So, after one cluster head generates the pairwisekey for two nodes, the cluster head must securely transmitthis key to both its node and the other cluster head. Thus,this pairwise key should be encrypted by using the sharedpairwise key with the other cluster head and the shared keywith its node, respectively. Therefore, if the pairwise keybetween the cluster heads is exposed, all pairwise keys of thetwo nodes in different clusters are disclosed. The scheme byDu et al. supports forward and backward secrecy by using akey update process whenever a new node joins the clusteror if a node is compromised. However, the scheme does notprovide a process to protect against clone and impersonationattack.Most recently, Rahman et al. [4] and Chatterjee et al. [18]have proposed ID-PKC based key management schemessupporting the mobility of nodes in dynamic WSNswhich removes the certificate management overhead.However, their schemes require expensive pairing operations.Although many approaches that enable pairing operations forsensor nodes have been proposed, the computational costrequired for pairing is still considerably higher than standardoperations such as ECC point multiplication. 
For example,NanoECC, which uses the MIRACL library, takes around17.93s to compute one pairing operation and around 1.27s tocompute one ECC point multiplication on the MICA2(8MHz)mote [17].III. NETWORK AND ADVERSARY MODELSA. Network ModelWe consider a heterogeneous dynamic wireless sensornetwork (See Fig. 1). The network consists of a number ofstationary or mobile sensor nodes and a BS that manages thenetwork and collects data from the sensors. Sensor nodes canbe of two types: (i) nodes with high processing capabilities,referred to as H-sensors, and (ii) nodes with low processingcapabilities, referred to as L-sensors. We assume to haveN nodes in the network with a number N1 of H-sensorsand a number N2 of L-sensors, where N = N1 + N2, andN1 _ N2. Nodes may join and leave the network, and thusthe network size may dynamically change. The H-sensors actas cluster heads while L-sensors act as cluster members. Theyare connected to the BS directly or by a multi-hop path throughother H-sensors. H-sensors and L-sensors can be stationary ormobile. After the network deployment, each H-sensor formsa cluster by discovering the neighboring L-sensors throughbeacon message exchanges. The L-sensors can join a cluster,move to other clusters and also re-join the previous clusters.To maintain the updated list of neighbors and connectivity,the nodes in a cluster periodically exchange very lightweightbeacon messages. The H-sensors report any changes in theirclusters to the BS, for example, when a L-sensor leaves orjoins the cluster. The BS creates a list of legitimate nodes,M, and updates the status of the nodes when an anomalynode or node failure is detected. The BS assigns each nodea unique identifier. A L-sensor nLi is uniquely identified bynode ID Li whereas a H-sensor nHj is assigned a node ID Hj .A Key Generation Center (KGC), hosted at the BS, generatespublic system parameters used for key management by theBS and issues certificateless public/private key pairs for eachnode in the network. In our key management system, a uniqueindividual key, shared only between the node and the BS isassigned to each node. The certificateless public/private keyof a node is used to establish pairwise keys between any twonodes. A cluster key is shared among the nodes in a cluster.B. Adversary Model and Security RequirementsWe assume that the adversary can mount a physical attackon a sensor node after the node is deployed and retrieve secretinformation and data stored in the node. The adversary can alsopopulate the network with the clones of the captured node.Even without capturing a node, an adversary can conduct animpersonation attack by injecting an illegitimate node, whichattempts to impersonate a legitimate node. Adversaries canconduct passive attacks, such as, eavesdropping, replay attack,etc to compromise data confidentiality and integrity. Specificto our proposed key management scheme, the adversary canperform a known-key attack to learn pairwise master keys if itsomehow learns the short-term keys, e.g., pairwise encryptionkeys. As described in [26] and [8], in order to provide a securekey management scheme for WSNs supporting mobile nodes,the following security properties are critical:• Compromise-Resilience: A compromised node must notaffect the security of the keys of other legitimate nodes.In other words, the compromised node must not be ableto reveal pairwise keys of non-compromised nodes. 
Thecompromise-resilience definition does not mean that anode is resilient against capture attacks or that a capturednode is prevented from sending false data to other nodes,BS, or cluster heads.• Resistance Against Cloning and Impersonation: Thescheme must support node authentication to protectagainst node replication and impersonation attacks.• Forward and Backward Secrecy: The scheme must assureforward secrecy to prevent a node from using an oldkey to continue decrypting new messages. It must alsoassure backward secrecy to prevent a node with the newkey from going backwards in time to decrypt previouslyexchanged messages encrypted with prior keys. forwardand backward secrecy are used to protect against nodecapture attacks.374 IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 2, FEBRUARY 2015• Resilience Against Known-Key Attack: The scheme mustbe secure against the known-key attack.IV. OVERVIEW OF THE CERTIFICATELESS EFFECTIVEKEY MANAGEMENT SCHEMEIn this paper, we propose a Certificateless Key Managementscheme (CL-EKM) that supports the establishment of fourtypes of keys, namely: a certificateless public/private key pair,an individual key, a pairwise key, and a cluster key. Thisscheme also utilizes the main algorithms of the CL-HSCscheme [13] in deriving certificateless public/private keys andpairwise keys. We briefly describe the major notations usedin the paper (See Table I), the purpose of these keys and howthey are setup.A. Types of KeysCertificateless Public/Private Key: Before a node isdeployed, the KGC at the BS generates a uniquecertificateless private/public key pair and installs the keysin the node. This key pair is used to generate a mutuallyauthenticated pairwise key.• Individual Node Key: Each node shares a uniqueindividual key with BS. For example, a L-sensor can usethe individual key to encrypt an alert message sent tothe BS, or if it fails to communicate with the H-sensor.An H-sensor can use its individual key to encrypt themessage corresponding to changes in the cluster. TheBS can also use this key to encrypt any sensitive data,such as compromised node information or commands.Before a node is deployed, the BS assigns the node theindividual key.• Pairwise Key: Each node shares a different pairwise keywith each of its neighboring nodes for secure communicationsand authentication of these nodes. For example, inorder to join a cluster, a L-sensor should share a pairwisekey with the H-sensor. Then, the H-sensor can securelyencrypt and distribute its cluster key to the L-sensorby using the pairwise key. In an aggregation supportiveWSN, the L-sensor can use its pairwise key to securelytransmit the sensed data to the H-sensor. Each nodecan dynamically establish the pairwise key between itselfand another node using their respective certificatelesspublic/private key pairs.• Cluster Key: All nodes in a cluster share a key, named ascluster key. The cluster key is mainly used for securingbroadcast messages in a cluster, e.g., sensitive commandsor the change of member status in a cluster. Only thecluster head can update the cluster key when a L-sensorleaves or joins the cluster.V. THE DETAILS OF CL-EKMThe CL-EKM is comprised of 7 phases: system setup,pairwise key generation, cluster formation, key update, nodemovement, key revocation, and addition of a new node.TABLE ILIST OF NOTATIONSA. 
System Setup
Before the network deployment, the BS generates system parameters and registers each node by including it in a member list M.
1) Generation of System Parameters: The KGC at the BS runs the following steps, taking a security parameter k ∈ Z+ as input, and returns a list of system parameters Ω = {Fq, E/Fq, Gq, P, Ppub = xP, h0, h1, h2, h3} and x.
• Choose a k-bit prime q.
• Determine the tuple {Fq, E/Fq, Gq, P}.
• Choose the master private key x ∈R Z*q and compute the system public key Ppub = xP.
• Choose cryptographic hash functions {h0, h1, h2, h3} such that h0 : {0, 1}* × Gq² → {0, 1}*, h1 : Gq³ × {0, 1}* × Gq → {0, 1}^n, h2 : Gq × {0, 1}* × Gq × {0, 1}* × Gq × {0, 1}* × Gq → Z*q, and h3 : Gq × {0, 1}* × Gq × {0, 1}* × Gq × {0, 1}* × Gq → Z*q. Here, n is the length of a symmetric key.
The BS publishes Ω and keeps x secret.
2) Node Registration: The BS assigns a unique identifier, denoted by Li, to each L-sensor nLi and a unique identifier, denoted by Hj, to each H-sensor nHj, where 1 ≤ i ≤ N1, 1 ≤ j ≤ N2, and N = N1 + N2. Here we describe the certificateless public/private key and individual node key operations for Li; the same mechanisms apply to H-sensors. During initialization, each node nLi chooses a secret value xLi ∈R Z*q and computes PLi = xLi·P. Then, the BS requests the KGC for the partial private/public keys of nLi with the input parameters Li and PLi. The KGC chooses rLi ∈R Z*q and then computes the pair of partial public/private keys (RLi, dLi) as follows:
RLi = rLi·P
dLi = rLi + x · h0(Li, RLi, PLi) mod q
Node Li can validate its partial private key by checking whether the condition dLi·P = RLi + h0(Li, RLi, PLi)·Ppub holds. Li then sets skLi = (dLi, xLi) as its full private key and pkLi = (PLi, RLi) as its full public key. The BS also chooses a uniform random number x0 ∈ Z*q to generate the node's individual key K0Li (K0Hj for nHj). The individual key is computed as an HMAC of x0 and Li as follows:
K0Li = HMAC(x0, Li)
After the key generation for all the nodes, the BS generates a member list M consisting of the identifiers and public keys of all these nodes. It also initializes a revocation list R that lists the revoked nodes. The public/private keys, Ω, and the individual key are installed in the memory of each node.
B. Pairwise Key Generation
After the network deployment, a node may broadcast an advertisement message to its neighborhood to trigger the pairwise key setup with its neighbors. The advertisement message contains its identifier and public key. First, two nodes set up a long-term pairwise master key between them, which is then used to derive the pairwise encryption key. The pairwise encryption key is short-term and can be used as a session key to encrypt sensed data.
1) Pairwise Master Key Establishment: In this paragraph, we describe the protocol for establishing a pairwise master key between any two nodes nA and nB with unique IDs A and B, respectively. We utilize the CL-HSC scheme [13] as a building block. When nA receives an advertisement message from nB, it executes the following encapsulation process to generate a long-term pairwise master key KAB and the encapsulated key information ϕA = (UA, WA).
• Choose lA ∈R Z*q and compute UA = lA·P.
• Compute
TA = lA · h0(B, RB, PB)·Ppub + lA · RB mod q
KAB = h1(UA, TA, lA · PB, B, PB)
• Compute
h = h2(UA, τA, TA, A, PA, B, PB)
h' = h3(UA, τA, TA, A, PA, B, PB)
WA = dA + lA · h + xA · h'
where τA is a random string that provides freshness.
• Output KAB and ϕA = (UA, WA).
Then, nA sends A, pkA, τA, and ϕA to nB. A minimal sketch of the node-registration and encapsulation algebra is given below.
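The sketch below exercises the algebra just described: the partial-key issuance d = r + x·h0(ID, R, P) mod q, the node-side check d·P = R + h0(ID, R, P)·Ppub, and the consistency of TA between sender and receiver. NIST P-192 from the Python ecdsa package stands in for secp160r1, h0 is a toy SHA-256 reduction, and all function names are illustrative assumptions rather than the paper's implementation.

import hashlib
import secrets
from ecdsa import NIST192p

G = NIST192p.generator           # base point P
q = NIST192p.order               # group order

def h0(ident: bytes, R, Pu) -> int:
    """Toy stand-in for h0: hash the identity and both public points into Z*q."""
    data = ident + str(R.x()).encode() + str(Pu.x()).encode()
    return (int.from_bytes(hashlib.sha256(data).digest(), "big") % (q - 1)) + 1

def register(ident: bytes, x_master: int, P_pub):
    """KGC issues the partial key (R, d); the node contributes its secret value."""
    x_u = secrets.randbelow(q - 1) + 1        # node's secret value
    P_u = G * x_u
    r_u = secrets.randbelow(q - 1) + 1        # KGC's randomness
    R_u = G * r_u
    d_u = (r_u + x_master * h0(ident, R_u, P_u)) % q
    # node-side validation of the partial private key
    assert G * d_u == R_u + P_pub * h0(ident, R_u, P_u)
    return (d_u, x_u), (P_u, R_u)             # full private key, full public key

x_master = secrets.randbelow(q - 1) + 1       # KGC master private key
P_pub = G * x_master

sk_A, pk_A = register(b"A", x_master, P_pub)
sk_B, pk_B = register(b"B", x_master, P_pub)

# sender's view of T_A (encapsulation) vs. receiver's view (decapsulation)
l_A = secrets.randbelow(q - 1) + 1
U_A = G * l_A
P_B, R_B = pk_B
T_sender = P_pub * ((l_A * h0(b"B", R_B, P_B)) % q) + R_B * l_A
T_receiver = U_A * sk_B[0]                    # d_B * U_A
assert T_sender == T_receiver
print("partial-key check and T_A consistency hold")

The same identities underlie the correctness note attached to the decapsulation step that follows.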
nB then performs decapsulation to obtain KAB.
• Compute TA = dB · UA. Note: because dB = rB + x · h0(B, RB, PB) and UA = lA·P, TA is computed as TA = (rB + x · h0(B, RB, PB)) · lA·P = lA · h0(B, RB, PB)·Ppub + lA · RB mod q.
• Compute h = h2(UA, τA, TA, A, PA, B, PB) and h' = h3(UA, τA, TA, A, PA, B, PB).
• If WA · P = RA + h0(A, RA, PA) · Ppub + h · UA + h' · PA, output KAB = h1(UA, TA, xB · UA, B, PB). Otherwise, output invalid.
2) Pairwise Encryption Key Establishment: Once nA and nB have set the pairwise master key KAB, they generate an HMAC of KAB and a nonce r ∈R Z*q. The HMAC is then validated by both nA and nB. If the validation is successful, the HMAC value is established as the short-term pairwise encryption key kAB. The process is summarized below:
• nB chooses a random nonce r ∈R Z*q, computes kAB = HMAC(KAB, r) and C1 = EkAB(r, A, B). Then, nB sends r and C1 to nA.
• When nA receives r and C1, it computes kAB = HMAC(KAB, r) and decrypts C1. It then validates r, A and B; if they are valid, it confirms that nB knows KAB and can compute kAB.
C. Cluster Formation
Once the nodes are deployed, each H-sensor discovers neighboring L-sensors through beacon message exchanges and then proceeds to authenticate them. If the authentication is successful, the H-sensor forms a cluster with the authenticated L-sensors and they share a common cluster key. The H-sensor also establishes a pairwise key with each member of the cluster. To simplify the discussion, we focus on the operations within one cluster and consider the j-th cluster. We also assume that the cluster head H-sensor is nHj, with nLi (1 ≤ i ≤ n) as cluster members. nHj establishes a cluster key GKj for secure communication in the cluster. Table II shows the cluster formation process.

TABLE II
CLUSTER FORMATION PROCESS

1) Node Discovery and Authentication: For node discovery, nHj broadcasts an advertisement message containing Hj and pkHj. Once an nLi within Hj's radio range receives the advertisement, it checks Hj and pkHj, and initiates the Pairwise Key Generation procedure. Note that nLi may receive multiple advertisement messages if it is within the range of more than one H-sensor. However, nLi must choose one H-sensor, for example by prioritizing proximity and signal strength. Additionally, nLi can record other H-sensor advertisements as backup cluster heads in the event that the primary cluster head is disabled. If nLi selects multiple cluster heads and sends a response to all of them, it is considered a compromised node. nLi and nHj perform the Pairwise Key Generation procedure to obtain a pairwise master key KLiHj and a pairwise encryption key kLiHj.
2) Cluster Key Generation: nHj chooses xj ∈R Z*q to generate a cluster key GKj as follows:
GKj = HMAC(xj, Hj)
Then, nHj computes C2 = EkLiHj(GKj, Hj, Li) to distribute GKj and sends Hj and C2 to nLi. nLi decrypts C2 to recover Hj, Li and GKj using kLiHj. If nLi fails to verify Hj and Li, it discards the message and reports nHj to the BS as an illegitimate cluster head. Otherwise, nLi confirms that nHj is valid and can compute GKj, and stores GKj as its cluster key. Next, nLi computes HMAC(kLiHj, GKj) and C3 = EkLiHj(Li, HMAC(kLiHj, GKj)), and transmits C3 and Li to nHj. After nHj receives the message from nLi, it decrypts C3 using kLiHj. It then checks Li and the validity of HMAC(kLiHj, GKj). If the validity check fails, nHj discards the message. Otherwise, nHj can confirm that nLi shares the valid GKj and kLiHj. A minimal sketch of this HMAC-based key derivation and confirmation appears below.
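A minimal sketch of the short-term key step, under assumptions the paper does not fix: SHA-256 as the HMAC hash, AES-GCM as the concrete instantiation of Ek(·), and illustrative identifiers and nonce sizes.

import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def pairwise_encryption_key(K_AB: bytes, r: bytes) -> bytes:
    """k_AB = HMAC(K_AB, r), truncated to 128 bits for AES-128."""
    return hmac.new(K_AB, r, hashlib.sha256).digest()[:16]

K_AB = os.urandom(32)                 # long-term pairwise master key (from CL-HSC)

# n_B side: pick nonce r, derive k_AB, send r and C1 = E_kAB(r, A, B)
r = os.urandom(16)
k_AB = pairwise_encryption_key(K_AB, r)
iv = os.urandom(12)                   # GCM nonce handling is an assumption
C1 = AESGCM(k_AB).encrypt(iv, r + b"A" + b"B", None)

# n_A side: recompute k_AB from the received r, decrypt C1 and validate r, A, B
k_AB_check = pairwise_encryption_key(K_AB, r)
plain = AESGCM(k_AB_check).decrypt(iv, C1, None)
assert plain == r + b"A" + b"B", "confirmation failed"

Because the nonce r enters the key derivation, replaying an old C1 with a fresh r fails this confirmation check.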
nHj adds Li and pkLi to the member list of the j-th cluster, Mj.
3) Membership Validation: After discovering all the neighboring nodes nLi (1 ≤ i ≤ n) in the j-th cluster, nHj computes C4 = EK0Hj(Hj, Mj) and transmits C4 and Hj to the BS. After receiving the message from nHj, the BS checks the validity of the nodes listed in Mj. If all nodes are legitimate, the BS sends an acknowledgement to nHj. Otherwise, the BS rejects Mj and investigates the identities of the invalid nodes (false or duplicate IDs). The BS then adds the identities of the invalid nodes to the revocation list and reports it to nHj. Upon receiving the acknowledgement message, nHj computes C5 = EGKj(Hj, Mj) and broadcasts C5 to all the nodes in the j-th cluster.
D. Key Update
In order to protect against cryptanalysis and mitigate damage from compromised keys, frequent encryption key updates are commonly required. In this section we provide the pairwise key update and cluster key update operations.
1) Pairwise Key Update: To update a pairwise encryption key, the two nodes that share the pairwise key perform the Pairwise Encryption Key Establishment process. The pairwise master key, on the other hand, does not require periodic updates, because it is not directly used to encrypt session messages. As long as the nodes are not compromised, the pairwise master keys cannot be exposed. However, if a pairwise master key is modified or needs to be updated according to the policy of the BS, the Pairwise Master Key Establishment process must be executed.
2) Cluster Key Update: Only cluster head H-sensors can update their cluster key. If an L-sensor attempts to change the cluster key, the node is considered malicious. The operation for any j-th cluster is as follows:
1) nHj chooses x'j ∈R Z*q and computes a new cluster key GK'j = HMAC(x'j, Hj). nHj also generates an Update message including HMAC(GK'j, Update) and computes C6 = EGKj(GK'j, HMAC(GK'j, Update)). Then, nHj transmits Update and C6 to its cluster members.
2) Each member nLi decrypts C6 using GKj, verifies HMAC(GK'j, Update) and updates its cluster key to GK'j. Each nLi then sends an acknowledgement message to nHj.
E. Node Movement
When a node moves between clusters, the H-sensors must properly manage the cluster keys to ensure forward/backward secrecy. Thus, the H-sensor updates the cluster key and notifies the BS of the changed node status. Through this report, the BS can immediately update the node status in M. We denote a moving node by nLm.
1) Node Leave: A node may leave a cluster due to node failure, location change or intermittent communication failure. There are both proactive and reactive ways for the cluster head to detect when a node leaves the cluster. The proactive case occurs when the node nLm actively decides to leave the cluster and notifies the cluster head nHj, or when the cluster head decides to revoke the node. Since in this case nHj can confirm that the node has left, it transmits a report EK0Hj(NodeLeave, Lm) to inform the BS that nLm has left the cluster. After receiving the report, the BS updates the status of nLm in M and sends an acknowledgement to nHj. The reactive case occurs when the cluster head nHj fails to communicate with nLm. This may happen when a node runs out of battery power, fails to connect to nHj due to interference or obstacles, is captured by an attacker, or is moved unintentionally. Since the nodes in a cluster periodically exchange lightweight beacon messages, nHj can detect a disappeared node nLm when it does not receive a beacon message from nLm for a predetermined time period. A minimal sketch of the cluster-key generation and update operations above is given below.
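The sketch below walks through the cluster-key operations just described: deriving GKj = HMAC(xj, Hj), wrapping it for each member under the pairwise encryption key, and pushing an authenticated update GK'j under the old cluster key. The AES-GCM wrapping and all identifiers are assumptions for illustration only.

import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_cluster_key(x_j: bytes, head_id: bytes) -> bytes:
    """GK_j = HMAC(x_j, H_j), truncated to a 128-bit AES key."""
    return hmac.new(x_j, head_id, hashlib.sha256).digest()[:16]

def wrap(key: bytes, payload: bytes) -> bytes:
    iv = os.urandom(12)
    return iv + AESGCM(key).encrypt(iv, payload, None)

def unwrap(key: bytes, blob: bytes) -> bytes:
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

head_id = b"H1"
pairwise_keys = {b"L1": os.urandom(16), b"L2": os.urandom(16)}   # k_{Li,Hj}

# initial cluster key, sent to each member as C2 = E_{k_LiHj}(GK_j, H_j, L_i)
GK = derive_cluster_key(os.urandom(16), head_id)
c2_msgs = {m: wrap(k, GK + head_id + m) for m, k in pairwise_keys.items()}

# key update: new GK', authenticated with HMAC(GK', Update), sent under the old GK
GK_new = derive_cluster_key(os.urandom(16), head_id)
tag = hmac.new(GK_new, b"Update", hashlib.sha256).digest()
c6 = wrap(GK, GK_new + tag)

# a member decrypts C6 with the old GK and verifies the tag before switching keys
plain = unwrap(GK, c6)
recovered, recv_tag = plain[:16], plain[16:]
assert hmac.compare_digest(recv_tag, hmac.new(recovered, b"Update", hashlib.sha256).digest())

Only the cluster head ever generates GK'j; members merely verify HMAC(GK'j, Update) before switching, which is the check the sketch ends with.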
So, nHj reports the status of the node nLmto the BS by sending EK0Hj(NodeDisappear, Lm). Whenthe BS receives the report, it updates the status of nLm inthe M and acknowledges to nHj. Once nHj receives theacknowledgement from the BS, it changes its cluster keywith the following operations: 1) nHj chooses a new clusterkey GK_j and computes EkLi Hj(GK_j , NodeLeave, Lm) usingpairwise session keys with each node in its cluster, except nLm .2) Then, nHj sends EkLi Hj(GK_j , NodeLeave, Lm) to eachmember node except nLm . 3) Each nLi decrypts it using kLi Hjand updates the cluster key as GK_j .2) Node Join: Once the moving node nLm leaves a cluster,it may join other clusters or return to the previous cluster aftersome period. For the sake of simplicity, we assume that nLmwants to join the lth cluster or return to the j th cluster.(i) Join a New Cluster: nLm sends a join request whichcontains Ln+1 and pkLn+1 to join a lth cluster. After nHlreceives the join request, nLm and nHl perform PairwiseKey Generation procedure to generate KLm Hl and kLm Hl ,respectively. Next, nHl transmits EK0Hl(NodeJoin, Lm)to the BS. The BS decrypts the message and validateswhether nLm is a legitimate node or not and sends anacknowledgement to nHl if successful. The BS alsoupdates the node member list, M. In case of nodevalidation failure at the BS, nHl stops this processand revokes the pairwise key with nLm. Once nHlSEO et al.: EFFECTIVE KEY MANAGEMENT IN DYNAMIC WSNs 377receives the acknowledgement, it performs the ClusterKey Update process with all other nodes in the cluster.nHl also computes EkLm Hl(GK_l , Hl , Lm), and sends itto the newly joined node nLm .(ii) Return to the Previous Cluster: nLm sends a join requestwhich contains Ln+1 and pkLn+1 to join a j th cluster.Once nHj receives the join request, it checks a timerfor nLm which is initially set to the Thold . Thold indicatesthe waiting time before discarding the pairwise masterkey when a L-sensor leaves. If nLm returns to the j thcluster before the timer expires, nLm and nHj performonly the Pairwise Encryption Key Establishment procedureto create a new pairwise encryption key, k_LmHj.Otherwise, they perform the Pairwise Key Generationprocedure to generate a new K_LmHland k_LmHl, respectively.Then, the cluster head nHj also updates the clusterkey to protect backward key secrecy. Before updatingthe cluster key, nHj transmits EK0Hj(NodeReJoin, Lm)to the BS. Once the BS decrypts the message anddetermines that nLm is a valid node, the BS sends theacknowledgement to nHl . The BS then updates the memberlist M. Once nHl receives the acknowledgement,it performs the Cluster Key Update process with allother nodes in the cluster. Afterwards, nHj computesEk_Lm Hj(GK_j , Hj , Lm) and sends it to nLm .F. Key RevocationWe assume that the BS can detect compromisedL-sensors and H-sensors. The BS may have an intrusiondetection system or mechanism to detect malicious nodes oradversaries [19], [20]. Although we do not cover how the BScan discover a compromised node or cluster head in this paper,the BS can utilize the updated node status information of eachcluster to investigate an abnormal node. In our protocol, acluster head reports the change of its node status to the BS,such as whenever a node joins or leaves a cluster. Thus, the BScan promptly manage the node status in the member list, M.For instance, the BS can consider a node as compromisedif the node disappears for a certain period of time. 
In thatcase, the BS must investigate the suspicious node and itcan utilize the node fault detection mechanism introducedin [21] and [22]. In this procedure, we provide a key revocationprocess to be used when the BS discovers a compromised nodeor a compromised cluster head. We denote a compromisednode by nLc in the j th cluster for a compromise node caseand a compromised head by nHj for a compromise clusterhead case.1) Compromised Node: The BS generates a CompNodemessage and a EK0Hj(CompNode, Lc). Then it sendsEK0Hj(CompNode, Lc) to all nHj , (1 ≤ j N2). After allH-sensors decrypt the message, they update the revocationlist of their clusters. Then, if related keys with nLc exist, therelated keys are discarded. Other than nLc , nHj performs theNode leave operations to change the current cluster key withthe remaining member nodes.2) Compromised Cluster Head: After the BS generates aCompHeader message and a EK0Li(CompHeader, Hj), itsends the message to all nLi (1 ≤ i n) in the j th cluster. TheBS also computes EK0Hi(CompHeader, Hj), (1 ≤ i N2,i _= j ) and transmits it to all H-sensors except nHj. Onceall nodes decrypt the message, they discard the related keyswith nHj . Then, each nLi attempts to find other neighboringcluster heads and performs the Join other cluster steps of theNode join process with the neighboring cluster head. If somenode nLi is unable to find another cluster head node, it mustnotify the BS by sending EK0Li(FindNewClusteLi ). The BSproceeds to find the nearest cluster head nHn for nLi andconnect nHn with nLi . Then, they can perform the Join othercluster steps.G. Addition of a New NodeBefore adding a new node into an existing networks, theBS must ensure that the node is not compromised. Thenew node nLn+1 establishes a full private/public key throughthe node registration phase. Then, the public systemparameters, a full private/public key and individualkey K0Ln+1are stored into nLn+1 . The BS generatesEK0Hj(NewNode, Ln+1, pkLn+1) and sends it to all nHj ,(1 ≤ j N2). After nLn+1 is deployed in the network,it broadcasts an advertisement message which containsLn+1 and pkLn+1 to join a neighboring cluster. If multipleH-sensors receive nLn+1’s message, they will transmit aResponse message to nLn+1 . nLn+1 must choose one H-sensorfor a valid registration. If nLn+1 selects nHj according to thedistance and the strength of signal, it initiates the PairwiseKey Generation procedure. In order to provide backwardsecrecy, nHj performs Cluster Key Update procedure, wherethe Update message contains Ln+1 and pkLn+1. Then, nHjcomputes C7 = EkLn+1 Hj(GK_j , Hj , Ln+1), and sends C7and Hj to nLn+1. After nLn+1’s registration, nHj transmitsEK0Hj(NodeJoin, Ln+1) to the BS. Once the BS decrypts themessage, it updates the status of the node nLn+1 in memberlist, M.VI. SECURITY ANALYSISFirst, we briefly discuss the security of CL-HSC [13]which is utilized as a building block of CL-EKM. Later,we discuss how CL-EKM achieves our security goals. TheCL-HSC [13] provides both confidentiality and unforgeabilityfor signcrypted messages based on the intractability of theEC-CDH1 Moreover, it is not possible to forge or expose thefull private key of an entity based on the difficulty of EC-CDH,without the knowledge of both KGC’s master private key andan entity’s secret value. 
Here, the confidentiality is definedas indistinguishability against adaptive chosen ciphertext andidentity attacks (IND-CCA2) while unforgeability is defined1The Elliptic Curve Computational Diffie-Hellman problem (EC-CDH) isdefined as follows: Given a random instance (P,aP, bP) Gq for a,b R Z∗q, compute abP.378 IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 2, FEBRUARY 2015as existential unforgeability against adaptive chosen messagesand identity attacks (EUF-CMA). Further details on theCL-HSC scheme and its security proof are provided in [13].A. Compromise-Resilience of CL-EKMWe assume that an adversary captures a node nLi in thej th cluster. This adversary can then extract the keys of nLi ,such as the pairwise key shared with the cluster head nHj ,the public/private key pair, the cluster key GKj, and theindividual key. However, the pairwise master/encryption keygeneration between any two nodes are independent of others,and hence each pair of nodes has different pairwise keys.Therefore, even if the adversary manages to obtain nLi’s keys,it is unable to extract any information useful to compromisethe pairwise keys of other uncompromised nodes. Moreover,due to the intractability of EC-CDH problem, the adversarycannot obtain the KGC’s master private key x from nLi’spublic/private keys pkLi /skLi . As a result, the compromiseof a sensor does not affect the communication security amongother L-sensors or H-sensors. Even though the attacker canread the group communications within the cluster with thecluster key extracted from the compromised node, it cannotget any information about the cluster key of other clusters.B. Resistance Against Cloning and Impersonation AttackAn adversary can conduct the cloning attack if a node iscaptured; the key is then extracted and the node is replicatedin another neighborhood. However, since the cluster headvalidates each node with the BS in the node join process ofour CL-EKM, the BS is able to detect a cloned node when itis placed in an unintended cluster. After the BS investigatesthe cloned node, it revokes the node and notifies the noderevocation to all cluster heads. Thus, although the cloned nodemay try to join other clusters, the cluster head will abort eachattempt. Therefore, our scheme is resistant against the cloningattack.The adversary may also attempt an impersonation attackby inserting an illegitimate node nC. Assume that a node nCposes as nLi . The node ID Li and public key, pkLi=(PLi , RLi ) are publicly known within the network. Hence,nC can broadcast Li and pkLi. When nL j receives themessage, it will compute the pairwise master key KLi L j ,and the encapsulated key information ϕL j= (UL j ,WL j )towards establishing the pairwise Master key. As the next step,nL j sends     ϕL j , L j , pkL j to nC for decapsulation, whichrequires nC to compute TL j as (dLi· UL j ). However, nCfails to compute TL j since nC has no knowledge of nLi’spartial private key dLi . Moreover due to the intractability ofEC-CDH1, the adversary cannot forge dLi without the knowledgeof the KGC’s master private key. Thus, nC is unableto generate a legitimate pairwise master key, KLi L j. However,nC may try to establish the pairwise encryption with a randomkey K_, rather than generating a legitimate master key. 
To thisend, nC chooses a random nonce r , computes an encryptionkey k_ as HMAC(r, K_) and sends          r, E_k (r, Li , L j ) to nL j .However, nC cannot successfully pass the validation at nL j ,since nL j first computes the pairwise encryption key withnL j as kLi L j= HMAC(r, KLi L j ) and then tries to decryptE_k (r, Li , L j ) using kLi L j . Thus, nL j fails to decrypt andhence, it does not confirm the pairwise encryption key to nC,which is then reported to the BS. Thus, CL-EKM is resistantagainst impersonation attacks.C. Forward and Backward SecrecyIn CL-EKM, messages exchanged between nodes or withina cluster are encrypted with the pairwise encryption key orcluster key. CL-EKM provides the key update and revocationprocesses to ensure forward secrecy when a node leaves orcompromised node is detected. Using key update process,CL-EKM ensures backward secrecy when a new node joins.Once a node is revoked from the network, all its keys areinvalidated and the associated cluster key is updated. Thecluster head sends the new cluster key to each cluster node,except the revoked node, by encrypting the key with thepairwise encryption key between the cluster and each intendednode. Thus, the revoked node fails to decrypt any subsequentmessages using the old pairwise encryption key or cluster key.When a node joins a cluster, the cluster head generates a newcluster key by choosing a new random value. Since the joinednode receives the new cluster key, it cannot decrypt earliermessages encrypted using the older cluster keys.D. Resistance Against Known-Key AttackWe assume that an adversary obtains the current pairwiseencryption key kLi Hj= HMAC(KLi Hj , r ) betweennLi and nHj and conducts the known-key attack. The adversarymay attempt to extract the long term pairwise master keyKLi Hj using kLi Hj . However, due to the one-way featureof HMAC(.), the adversary fails to learn KLi Hj. Also,when nLi and nHj update the pairwise encryption key ask_Li Hj= HMAC(KLi Hj , r _), the adversary cannot computethe updated pairwise encryption key k_Li Hj, without the knowledgeof KLi Hj . Thus, CL-EKM is resistant against known-keyattack when the pairwise encryption key is compromised.VII. PERFORMANCE EVALUATIONWe implemented CL-EKM in Contiki OS [29] andused Contiki port [28] of TinyECC [24] for elliptic curvecryptography library. In order to evaluate our scheme, weuse the Contiki simulator COOJA. We run emulations on thestate-of-the-art sensor platform TI EXP5438 which has 16-bitCPU MSP430F5438A with 256KB flash and 16KB RAM.MSP430F5438A has 25MHz clock frequency and can belowered for power saving.A. Performance Analysis of CL-EKMWe measure the individual performance of the three stepsin the pairwise master/encryption key establishment process,namely, (i) encapsulation, (ii) decapsulation, and (iii) pairwiseencryption key generation. We evaluate each step in termsof (i) computation time, and (ii) energy consumption.In this experiment, we vary the processing power i.e. CPUclock rate of the sensors since we consider heterogeneousSEO et al.: EFFECTIVE KEY MANAGEMENT IN DYNAMIC WSNs 379Fig. 2. Computation overhead for pairwise master/encryption key establishment. (a) Encapsulating key information. (b) Decapsulating key information.(c) Pairwise encryption key establishment.Fig. 3. Energy consumption for pairwise master/encryption key establishment. (a) Encapsulating key information. (b) Decapsulating key information.(c) Pairwise encryption key establishment.WSNs with H-sensors being more powerful. 
Three differentelliptic curves recommended by SECG (Standards for EfficientCryptography Group) [30], i.e., (i) secp128r2 (128-bit ECC),(ii) secp160r1 (160-bit ECC), and (iii) secp192r1 (192-bitECC), are used for the experiment.Fig. 2 shows the time for the pairwise key generationprocess. As expected, the pairwise master key generationtakes most of the time due to the ECC operations(See Fig. 2(a), 2(b)). However, it is important to mention thatthe pairwise master key is used only to derive the short-termpairwise encryption key. Once two nodes establish the pairwisekeys, they do not require further ECC operations. Fig. 2(a)shows the computation times of the encapsulation process forvarious CPU clock rates of the sensor device. The computationtime increases with the ECC key bit length. secp192r1 needsalmost 1.5 times more time than secp160r1. secp128r2 takesapproximately 4% less time than secp160r1. If CPU clockrate is set to 25MHz and secp160r1 is adopted, 5.7 secondsare needed for encapsulation of key. Fig. 2(b) shows theprocessing time for the decapsulation. Decapsulation requiresabout 1.57 times more CPU computation time than encapsulation.This is because decapsulation has six ECC pointmultiplications, whereas encapsulation includes only four ECCpoint multiplications. Finally, the computation time for pairwiseencryption key establishment is shown in Fig. 2(c).At 25MHz CPU clock rate, it requires 5 ms, which is negligiblecompared to the first two steps. This is due to the factthat this step just needs one HMAC and one 128-bit AESoperation. Next, we measure the energy consumption. As wecan see from Fig. 3, the faster the processing power (i.e. CPUclock rate) is, the more energy is consumed. However, asshown in Fig. 3(a) and Fig. 3(b), there is no differencebetween 16MHz and 25MHz while 25MHz results in fastercomputation than 16MHz. In addition, secp160r1 might bea good choice for elliptic curve selection, since it is moresecure than secp128r2 and consumes reasonable CPU timeand energy for WSNs. In our subsequent experiments, weutilize secp160r1.B. Performance ComparisonsIn this section, we benchmark our scheme with three previousECC-based key management schemes for dynamic WSNs:HKEP [15], MAKM [25] and EDDK [10]. Due to the variabilityof every schemes, we chose to compare a performance ofthe pairwise master key generation step because it is the mosttime consuming portion in each of the schemes. We measuredthe total energy consumption of computation and communicationto establish a pairwise key between two L-sensors. Forthe experiment, we implemented four schemes on TI EXP5438at 25MHz using ECC with secp160r1 parameters andAES-128 symmetric key encryption. EC point is compressedto reduce the packet size and LPL (Low Power Listening)is utilized for power conservation. Thus, sensors wake upfor short durations to check for transmissions every second.If no transmission is detected, they revert to a sleep mode.To compute the energy consumption for communication, weutilize the energy consumption data of CC2420 from [27]and IEEE 802.15.4 protocol overhead data from [31]. We considertwo scenarios as shown in Fig. 4. In the first scenario,two L-sensors lie within a 1-hop range, but the distancebetween the H-sensor and the L-sensor varies from 1 to 8(see Fig. 4 (a)). In the second scenario, two L-sensors andthe H-sensor lie in a 1-hop range, but the wireless channelconditions are changed (see Fig. 4 (b)). 
When a wirelesschannel condition is poor, a sender may attempt to resend apacket to a destination multiple times. Expected TransmissionCount (ETX) is the expected number of packet transmissionto be received at the destination without error. Fig. 5shows the energy consumption of the four schemes for apairwise key establishment when the number of hops betweenL-sensors and H-sensor, n, increases. When n is one, HKEP380 IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 2, FEBRUARY 2015Fig. 4. Network topology.Fig. 5. Energy consumption comparison for pairwise key establishment inscenario (a).requires more energy than our scheme because it performs sixmessage exchanges to establish a pairwise key between twoL-sensors, while our scheme needs just two messageexchanges. When n is one, MAKM consumes the leastenergy because the L-sensor performs a single AESsymmetric encryption, but other schemes run expensive ECCoperations. However, as n increases, the energy of MAKMincreases because the H-sensor is always involved in thegeneration of a pairwise key between two L-sensors. As aresult, MAKM consumes more energy than our scheme whenn is larger than one and the gap also widens when n increases.A packet delivery in a wireless sensor network is unreliabledue to unexpected obstacles, time-varying wireless channelconditions, and a low-power transceiver. Fig. 6 shows theenergy consumption of the four schemes for a pairwisekey establishment when ETX varies from 1 to 4. As ETXincreases, the energy consumption of HKEP increases morerapidly, because it requires six message exchanges. Also,HKEP is insecure, because the static private key of a nodeis exposed to the other node while the two nodes establishthe session key. Although EDDK and MAKM may showbetter performance due to low computational overhead, thedifference between MAKM and our scheme is only 0.121 Jand the difference between EDDK and our scheme is 0.045 J.Both EDDK and MAKM are insecure against the knownkeyattack and do not provide a re-keying operation for thecompromised pairwise key. EDDK also suffers from weakresilience to node compromises. Therefore, this performanceevaluation demonstrates that overall, our scheme outperformsthe existing schemes in terms of a better trade-off between thedesired security properties and energy consumption includingcomputational and communication overhead.VIII. SIMULATION OF NODE MOVEMENTSA. SettingWe developed a simulator which counts the keymanagement-related events and yields total energy consumptionfor key-management-related computations using the datain Sec. 7.1. We focus on the effects of node movement andFig. 6. Energy consumption comparison for pairwise key establishment inscenario (b).Fig. 7. Network topology for simulation.do not consider the impact of lower network layer protocols.We consider a 400×400 m2 space with 25 H-sensors placedon the grid corners (see Fig. 7). In CL-EKM, an H-sensormaintains two timers: Tbackof f and Thold to efficiently managethe cluster when a node moves. Tbackof f denotes the clusterkey update frequency. If Tbackof f = 0, the cluster key isupdated whenever a node joins or leaves. Otherwise, theH-sensor waits a time equal to Tbackof f after a node joinsor leaves to update the cluster key. Thold denotes the waitingtime before discarding the pairwise master key when aL-sensor leaves. 
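Before the two mobility models are described in detail below, the following sketch shows the kind of event-counting simulation used in this section: L-sensors follow a simplified random walk over the 400 m × 400 m area with H-sensors on a 100 m grid, and every cluster change is counted as one pairwise-key establishment. The cell-based cluster-assignment rule, the parameter values, and the omission of the Tbackoff/Thold timers are simplifications, not the authors' simulator.

import math
import random

AREA, CELL = 400.0, 100.0          # 25 H-sensors on the corners of a 100 m grid
random.seed(1)

def nearest_head(x, y):
    """Index of the closest grid-corner H-sensor (a simplified cluster rule)."""
    return (round(x / CELL), round(y / CELL))

def simulate(n_nodes=1000, mean_speed=4.0, seconds=3600):
    nodes = [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(n_nodes)]
    heads = [nearest_head(x, y) for x, y in nodes]
    key_events = 0
    for _ in range(seconds):
        for i, (x, y) in enumerate(nodes):
            speed = random.uniform(0, 2 * mean_speed)     # mean speed = mean_speed
            theta = random.uniform(0, 2 * math.pi)        # new direction every second
            x = min(max(x + speed * math.cos(theta), 0), AREA)
            y = min(max(y + speed * math.sin(theta), 0), AREA)
            nodes[i] = (x, y)
            h = nearest_head(x, y)
            if h != heads[i]:                             # node left its cluster
                heads[i] = h
                key_events += 1                           # new pairwise key needed
    return key_events

print(simulate(n_nodes=100, seconds=600), "cluster changes")

Swapping the movement rule for the Manhattan-style grid movement described next only changes how positions are updated; the key-event counting stays the same.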
If Thold = 0, the pairwise master key with an L-sensor is revoked right after the node leaves the cluster. Otherwise, the H-sensor stores the pairwise master key of the departed L-sensor for a time equal to Thold. For the movement of the L-sensors, we adopt two well-known mobility models used for the simulation of mobile ad-hoc networks: the Random Walk Mobility Model and the Manhattan Mobility Model [32]. H-sensors are set to be stationary, since they are usually part of the static infrastructure in real-world applications.

1) Random Walk Mobility Model: The Random Walk Mobility Model mimics the unpredictable movements of many objects in nature. In our simulation, 1,000 L-sensors are randomly distributed. Each L-sensor randomly selects an H-sensor among the four H-sensors in its vicinity and establishes the pairwise key and cluster key. After the simulation starts, the L-sensors randomly select a direction and move at a random speed uniformly selected from [0, 2VL] (i.e., the mean speed is VL). A new direction and speed are randomly selected every second. If an L-sensor crosses a grid line, it first checks whether it is still connected with its current H-sensor. If not, the node attempts to find an H-sensor to which it was previously connected and with which it still maintains a pairwise master key. In case of failure, the node randomly selects an H-sensor among the surrounding H-sensors.

2) Manhattan Mobility Model: The Manhattan Mobility Model mimics the movement patterns in an urban area organized according to streets and roads. In our simulation, 1,000 L-sensors are randomly distributed and move in a grid. They can communicate with two adjacent H-sensors. Each L-sensor randomly selects its direction and chooses an H-sensor within its path as its cluster head. After the simulation starts, the L-sensors move at a random speed uniformly selected from [0, 2VL]. At each intersection, an L-sensor has a 0.5 probability of moving straight and a 0.25 probability of turning left or right. If an L-sensor arrives at a new intersection, it first chooses a new direction and checks whether it is still connected with its current H-sensor. If not, it chooses an H-sensor along its new direction as its new cluster head.

Fig. 8. Node movement simulation results in the Random Walk Mobility Model. (a) Energy consumption of one H-sensor for cluster key updates for one day (Thold = 100 sec). (b) Energy consumption of one H-sensor for pairwise key establishment for one day (Tbackoff = 6 sec). (c) Energy consumption of one L-sensor for pairwise key establishment for one day (Tbackoff = 6 sec).
Fig. 9. Node movement simulation results in the Manhattan Mobility Model. (a) Energy consumption of one H-sensor for cluster key updates for one day (Thold = 100 sec). (b) Energy consumption of one H-sensor for pairwise key establishment for one day (Tbackoff = 6 sec). (c) Energy consumption of one L-sensor for pairwise key establishment for one day (Tbackoff = 6 sec).

B. The Effect of Tbackoff

Fig. 8(a) shows the energy consumption of an H-sensor for cluster key updates over the course of a day in the Random Walk Mobility Model. As Tbackoff increases, the energy consumption decreases, since the number of cluster key updates is reduced. The faster VL is, the more rapidly the energy consumption decreases as Tbackoff increases, since the L-sensors frequently cross the border lines. The same tendency appears in the Manhattan Mobility Model (see Fig. 9(a)).
However, the H-sensors consume more energy at low speeds than in the Random Walk Mobility Model, since the L-sensors do not change direction until they reach an intersection. A larger Tbackoff means a lower security level; thus, there is a trade-off between the security level and the energy consumption of the H-sensor. At high speeds, e.g., 16 m/s, Tbackoff should be less than 1 second, since the number of cluster key updates is already minimal when Tbackoff is greater than 1 second. At low speeds, however, 1, 2 or 3 seconds are a reasonable choice for the H-sensors.

C. The Effect of Thold

Fig. 8(b) and Fig. 8(c) show the energy consumption of one H-sensor and one L-sensor, respectively, for pairwise key establishment in the Random Walk Mobility Model over the course of a day. The effect of Thold increases as the node velocity increases. As Thold increases, the energy consumption decreases because, when L-sensors return to their old clusters before the timers expire, no new pairwise master key establishment is necessary. As shown in Table III, the energy differences caused by node velocity and Thold are due to the differences in the frequency of pairwise key establishment; this frequency is linearly proportional to the velocity. When Thold ranges from 0 to 500 seconds, the energy consumption decreases rapidly because many moving nodes return to their previous clusters within 500 seconds. However, when Thold ranges from 500 to 1,500 seconds, the energy consumption decreases more slowly, since the probability of nodes returning to their previous clusters is dramatically reduced. In the Manhattan Mobility Model, when Thold is small, more energy is consumed for pairwise key establishment than in the Random Walk Mobility Model, since the L-sensors return to their previous clusters less frequently (see Fig. 9(b) and Fig. 9(c)). However, when Thold is large, the energy consumed for pairwise key establishment decreases dramatically. For instance, as shown in Table IV, when the node speed is 16 m/s and Thold is 1,000 seconds, the number of pairwise key establishments is only 24,418, which is 5.4 times smaller than in the Random Walk Mobility Model with the same settings.

TABLE III. The frequency of pairwise key establishments for one day in the Random Walk Mobility Model.
TABLE IV. The frequency of pairwise key establishments for one day in the Manhattan Mobility Model.

Similarly to Tbackoff, a larger Thold means a lower security level. Thus, Thold should be selected according to VL, the acceptable energy consumption, and the desired security level. The results in Fig. 8(c) and Fig. 9(c) show that our scheme is practical for real-world monitoring applications such as animal tracking or traffic monitoring. For example, L-sensors moving at 1 m/s in the Random Walk Mobility Model use at most 0.67 J per day if Thold is greater than 100 seconds, and L-sensors moving at 16 m/s in the Manhattan Mobility Model use at most 1.16 J per day if Thold is greater than 1,000 seconds. Considering that the average energy of one C-size alkaline battery is 34,398 J [23], the energy consumption of an L-sensor for pairwise key establishment is relatively small.

IX. CONCLUSIONS AND FUTURE WORK

In this paper, we propose the first certificateless effective key management protocol (CL-EKM) for secure communication in dynamic WSNs.
CL-EKM supports efficient communication for key updates and management when a node leaves or joins a cluster, and hence ensures forward and backward key secrecy. Our scheme is resilient against node compromise, cloning and impersonation attacks, and protects data confidentiality and integrity. The experimental results demonstrate the efficiency of CL-EKM in resource-constrained WSNs. As future work, we plan to formulate a mathematical model of energy consumption based on CL-EKM, with various parameters related to node movements. This mathematical model will be utilized to estimate proper values for the Thold and Tbackoff parameters based on the node velocity and the desired trade-off between energy consumption and the security level.

Distortion-Aware Concurrent Multipath Transfer for Mobile Video Streaming in Heterogeneous Wireless Networks

The massive proliferation of wireless infrastructures with complementary characteristics prompts the bandwidth aggregation for Concurrent Multipath Transfer (CMT) over heterogeneous access networks. Stream Control Transmission Protocol (SCTP) is the standard transport-layer solution to enable CMT in multihomed communication environments. However, delivering high-quality streaming video with the existing CMT solutions still remains problematic due to the stringent quality of service (QoS) requirements and path asymmetry in heterogeneous wireless networks.

In this paper, we advance the state of the art by introducing video distortion into the decision process of multipath data transfer. The proposed distortion-aware concurrent multipath transfer (CMT-DA) solution includes three phases: 1) per-path status estimation and congestion control; 2) quality-optimal video flow rate allocation; 3) delay and loss controlled data retransmission. The term ‘flow rate allocation’ indicates dynamically picking appropriate access networks and assigning the transmission rates.

We analytically formulate the data distribution over multiple communication paths to minimize the end-to-end video distortion and derive the solution based on the utility maximization theory. The performance of the proposed CMT-DA is evaluated through extensive semi-physical emulations in Exata involving H.264 video streaming. Experimental results show that CMT-DA outperforms the reference schemes in terms of video peak signal-to-noise ratio (PSNR), goodput, and inter-packet delay.

1.2 INTRODUCTION:

During the past few years, mobile video streaming, online gaming and similar services have become “killer applications”, and the video traffic headed for hand-held devices has experienced explosive growth. The latest market research conducted by Cisco indicates that video streaming accounts for 53 percent of mobile Internet traffic; in parallel, global mobile data traffic is expected to increase 11-fold in the next five years. Another ongoing trend feeding this tremendous growth is the popularity of powerful mobile terminals (e.g., smart phones and the iPad), which makes it easy for individual users to access the Internet and watch videos from everywhere [4].

Despite the rapid advancements in network infrastructures, it is still challenging to deliver high-quality streaming video over wireless platforms. On one hand, Wi-Fi networks are limited in radio coverage and mobility support for individual users; on the other hand, cellular networks can sustain user mobility well, but their bandwidth is often inadequate to support throughput-demanding video applications. Although 4G LTE and WiMAX can provide a higher peak data rate and extended coverage, the available capacity will still be insufficient compared to the ever-growing video data traffic.

The complementary characteristics of heterogeneous access networks prompt the bandwidth aggregation for concurrent multipath transfer (CMT) to enhance transmission throughput and reliability (see Fig. 1). With the emergence of multihomed/multinetwork terminals, CMT is considered to be a promising solution for supporting video streaming in future wireless networking. The key research issue in multihomed video delivery over heterogeneous wireless networks is the effective integration of the limited channel resources available to provide adequate quality of service (QoS). Stream control transmission protocol (SCTP) is the standard transport-layer solution that exploits the multihoming feature to concurrently distribute data across multiple independent end-to-end paths.

Therefore, many CMT solutions have been proposed to optimize the delay, throughput, or reliability performance for efficient data delivery. However, due to the special characteristics of streaming video, these network-level criteria cannot always improve the perceived media quality. For instance, a real-time video application encoded in constant bit rate (CBR) may not effectively leverage the throughput gains since its streaming rate is typically fixed or bounded by the encoding schemes. In addition, involving a communication path with available bandwidth but long delay in the multipath video delivery may degrade the streaming video quality as the end-to-end distortion increases. Consequently, leveraging the CMT for high-quality streaming video over heterogeneous wireless networks is largely unexplored.

In this paper, we investigate the problem by introducing video distortion into the decision process of multipath data transfer over heterogeneous wireless networks. The proposed Distortion-Aware Concurrent Multipath Transfer (CMT-DA) solution is a transport-layer protocol and includes three phases: 1) per-path status estimation and congestion control to exploit the available channel resources; 2) data flow rate allocation to minimize the end-to-end video distortion; 3) delay and loss constrained data retransmission for bandwidth conservation. The detailed descriptions of the proposed solution will be presented in Section 4. Specifically, the contributions of this paper can be summarized in the following.

_ An effective CMT solution that uses path status estimation (see the sketch after this list), flow rate allocation, and retransmission control to optimize the real-time video quality in integrated heterogeneous wireless networks.

_ A mathematical formulation of video data distribution over parallel communication paths to minimize the end-to-end distortion. The utility maximization theory is employed to derive the solution for optimal transmission rate assignment.

_ A performance evaluation through extensive semi-physical emulations in Exata involving real-time H.264 video streaming.
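To make the path status estimation mentioned in the first contribution more concrete, the fragment below sketches how per-path status (RTT and loss rate) could be maintained from ACK feedback using exponentially weighted moving averages. It is only an illustration of the idea under assumed smoothing factors and field names; it is not the estimator actually specified in CMT-DA.

// Minimal sketch of per-path status estimation from ACK feedback.
// The smoothing constants and the update rule are illustrative assumptions.
public class PathStatus {
    private double srtt = 0.0;       // smoothed round-trip time (seconds)
    private double lossRate = 0.0;   // smoothed packet loss rate in [0, 1]
    private static final double ALPHA = 0.125; // RTT smoothing factor (assumed)
    private static final double BETA  = 0.25;  // loss smoothing factor (assumed)

    // Called for every ACK (or loss indication) observed on this path.
    public void onAck(double sampleRtt, boolean packetLost) {
        srtt = (srtt == 0.0) ? sampleRtt : (1 - ALPHA) * srtt + ALPHA * sampleRtt;
        lossRate = (1 - BETA) * lossRate + BETA * (packetLost ? 1.0 : 0.0);
    }

    public double getSrtt()     { return srtt; }
    public double getLossRate() { return lossRate; }
}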

1.3 LITERATURE SURVEY:

CMT-QA: QUALITY-AWARE ADAPTIVE CONCURRENT MULTIPATH DATA TRANSFER IN HETEROGENEOUS WIRELESS NETWORKS

AUTHOR: C. Xu, T. Liu, J. Guan, H. Zhang, and G. M. Muntean,

PUBLICATION: IEEE Trans. Mobile Comput., vol. 12, no. 11, pp. 2193–2205, Nov. 2013.

EXPLANATION:

Mobile devices equipped with multiple network interfaces can increase their throughput by making use of parallel transmissions over multiple paths and bandwidth aggregation, enabled by the stream control transport protocol (SCTP). However, the different bandwidth and delay of the multiple paths will determine data to be received out of order and in the absence of related mechanisms to correct this, serious application-level performance degradations will occur. This paper proposes a novel quality-aware adaptive concurrent multipath transfer solution (CMT-QA) that utilizes SCTP for FTP-like data transmission and real-time video delivery in wireless heterogeneous networks. CMT-QA monitors and analyses regularly each path’s data handling capability and makes data delivery adaptation decisions to select the qualified paths for concurrent data transfer. CMT-QA includes a series of mechanisms to distribute data chunks over multiple paths intelligently and control the data traffic rate of each path independently. CMT-QA’s goal is to mitigate the out-of-order data reception by reducing the reordering delay and unnecessary fast retransmissions. CMT-QA can effectively differentiate between different types of packet loss to avoid unreasonable congestion window adjustments for retransmissions. Simulations show how CMT-QA outperforms existing solutions in terms of performance and quality of service.

PERFORMANCE ANALYSIS OF PROBABILISTIC MULTIPATH TRANSMISSION OF VIDEO STREAMING TRAFFIC OVER MULTI-RADIO WIRELESS DEVICES

AUTHOR: W. Song and W. Zhuang

PUBLICATION: IEEE Trans. Wireless Commun., vol. 11, no. 4, pp. 1554–1564, 2012.

EXPLANATION:

Popular smart wireless devices have become equipped with multiple radio interfaces. Multihoming support can be enabled to allow multiple simultaneous associations with heterogeneous networks. In this study, we focus on video streaming traffic and propose analytical approaches to evaluate the packet-level and call-level performance of a multipath transmission scheme, which sends video traffic bursts over multiple available channels in a probabilistic manner. A probability generation function (PGF) and z-transform method is applied to derive the PGF of packet delay and any arbitrary moment in general. In particular, we can obtain the average delay, delay jitter, and delay outage probability. The essential characteristics of video traffic are taken into account, such as deterministic burst intervals, highly dynamic burst lengths, and batch arrivals of transmission packets. The video substream traffic resulting from the probabilistic flow splitting is characterized by means of zero-inflated models. Further, the call-level performance, in terms of flow blocking probability and system throughput, is evaluated with a three-dimensional Markov process and compared with that of an always-best access selection. The numerical and simulation results demonstrate the effectiveness of our analysis framework and the performance gain of multipath transmission.

AN END-TO-END VIRTUAL PATH CONSTRUCTION SYSTEM FOR STABLE LIVE VIDEO STREAMING OVER HETEROGENEOUS WIRELESS NETWORKS

AUTHOR: S. Han, H. Joo, D. Lee, and H. Song

PUBLICATION: IEEE J. Sel. Areas Commun., vol. 29, no. 5, pp. 1032–1041, May 2011.

EXPLANATION:

In this paper, we propose an effective end-to-end virtual path construction system, which exploits path diversity over heterogeneous wireless networks. The goal of the proposed system is to provide a high quality live video streaming service over heterogeneous wireless networks. First, we propose a packetization-aware fountain code to integrate multiple physical paths efficiently and increase the fountain decoding probability over wireless packet switching networks. Second, we present a simple but effective physical path selection algorithm to maximize the effective video encoding rate while satisfying delay and fountain decoding failure rate constraints. The proposed system is fully implemented in software and examined over real WLAN and HSDPA networks.

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

In the existing approach, joint source-channel coding (JSCC) has become an effective method for designing error-resilient wireless video broadcasting systems in recent years. It attracts increasing interest in both the research community and industry because it shows better results for robust layered video transmission over error-prone channels; surveys of the various techniques developed over these years may be found in the literature. However, there are still many open problems in terms of how to serve heterogeneous users with diverse screen features and variable reception performance in a wireless video broadcast system. One particularly challenging problem of this heterogeneous quality-of-service (QoS) video provision is that users prefer flexible video whose quality matches their screens while, at the same time, the video stream must be reliably received.

The main technical difficulties are as follows:

  • A distinctive characteristic of current wireless broadcast systems is that the receivers are highly heterogeneous in terms of their terminal processing capabilities and available bandwidths. On the source side, scalable video coding (SVC) has been proposed as an attractive solution to this problem.
  • However, in order to support flexible video broadcasting, the scalable video sources need to provide adaptation ability through a variety of schemes, such as scalable video stream extraction, layer generation with different priorities, and summarization, before they can be transmitted over the error-prone networks.


2.1.1 DISADVANTAGES:

  • Layered video data is very sensitive to transmission failures, so the transmission must be reliable, have low overhead and support large numbers of devices with heterogeneous characteristics. In broadcast and multicast networks, conventional schemes such as adaptive retransmission have their limitations; for example, retransmission may lead to the implosion problem.
  • Forward error correction (FEC) and unequal error protection (UEP) are employed to provide QoS support for video transmission. However, in order to keep the investment in broadcasting system deployment as small as possible, the server side must be designed to be more scalable, reliable and independent, and to support a vast number of autonomous receivers. Suitable FEC approaches are expected that can eliminate retransmissions and lower the unnecessary reception overhead at each receiver side.
  • Conventionally, joint source and channel coding is designed with little consideration of heterogeneous receiver characteristics, and most of the above challenges are ignored in practical video broadcasting systems. This leads to the need for heterogeneous QoS video provision in broadcast networks. This paper studies hybrid-scalable video from the point of view of a new quality metric so as to support users' heterogeneous requirements.


2.2 PROPOSED SYSTEM:

The proposed Distortion-Aware Concurrent Multipath Transfer (CMT-DA) solution is a transport-layer protocol that includes three phases: 1) per-path status estimation and congestion control to exploit the available channel resources; 2) data flow rate allocation to minimize the end-to-end video distortion; and 3) delay- and loss-constrained data retransmission for bandwidth conservation. Together, these form an effective CMT solution that uses path status estimation, flow rate allocation, and retransmission control to optimize real-time video quality in integrated heterogeneous wireless networks.

A quality-aware adaptive concurrent multipath transfer (CMT-QA) scheme has been proposed that distributes the data based on the estimated path quality. Although the path status is an important factor that affects the scheduling policy, the application requirements should also be considered to guarantee the QoS. Basically, the proposed CMT-DA is different from CMT-QA in that we take the video distortion as the benchmark. Still, the proposed solutions (path status estimation, flow rate allocation, and retransmission control) in CMT-DA are significantly different from those in CMT-QA. In other research, a realistic evaluation tool-set has been proposed to analyze and optimize the performance of multimedia distribution when taking advantage of CMT-based multihoming SCTP solutions.
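Phase 3 (delay- and loss-constrained retransmission) can be pictured with a small decision rule: a lost chunk is worth retransmitting only if it can still reach the receiver before its playback deadline on some path. The sketch below uses hypothetical fields (playback deadline, estimated one-way path delay, a safety margin) and is not the actual CMT-DA retransmission policy.

// Sketch of a delay- and loss-constrained retransmission decision.
// Field names and the margin value are assumptions for illustration only.
public class RetransmissionController {
    private static final double SAFETY_MARGIN = 0.010; // 10 ms of slack (assumed)

    // Returns true if retransmitting the chunk on the candidate path can still
    // meet its playback deadline; otherwise the chunk is skipped to save bandwidth.
    public boolean shouldRetransmit(double nowSeconds, double playbackDeadline,
                                    double pathOneWayDelay) {
        double expectedArrival = nowSeconds + pathOneWayDelay + SAFETY_MARGIN;
        return expectedArrival <= playbackDeadline;
    }
}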

2.2.1 ADVANTAGES:

  • We propose a novel out-of-order scheduling approach for in-order arrival of the data chunks in CMT-DA, based on the progressive water-filling algorithm (see the sketch after this list).
  • We propose an end-to-end virtual path construction system that exploits the path diversity in heterogeneous wireless networks based on fountain codes. The encoded multipath streaming model proposed by Chow et al. is a joint multipath and FEC approach for real-time live streaming applications; the authors provide an asymptotic analysis and derive a closed-form solution for the FEC packet allocation.
  • The major components at the sender side are the parameter control unit, the flow rate allocator, and the retransmission controller. The parameter control unit is responsible for processing the acknowledgement (ACK) feedback from the receiver, estimating the path status and adapting the congestion window size. The delay and loss requirements are imposed by the video applications to achieve the target video quality.
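As referenced in the first advantage above, a progressive water-filling style allocation can be sketched as follows: paths are filled in order of increasing cost (here a simple delay/loss score) until the video source rate is covered, and no path is given more than its estimated available bandwidth. This is a simplified illustration under assumed inputs and an assumed cost model, not the distortion-minimizing optimization actually derived in CMT-DA.

import java.util.Arrays;
import java.util.Comparator;

// Simplified water-filling style rate allocation across candidate paths.
// The cost function, inputs, and example values are illustrative assumptions.
public class FlowRateAllocator {

    public static class Path {
        final String name;
        final double bandwidthKbps; // estimated available bandwidth
        final double rttSeconds;    // smoothed round-trip time
        final double lossRate;      // smoothed loss rate
        double allocatedKbps = 0.0;

        Path(String name, double bandwidthKbps, double rttSeconds, double lossRate) {
            this.name = name;
            this.bandwidthKbps = bandwidthKbps;
            this.rttSeconds = rttSeconds;
            this.lossRate = lossRate;
        }

        // Lower is better: penalize long delay and high loss (assumed cost model).
        double cost() { return rttSeconds * (1.0 + 10.0 * lossRate); }
    }

    // Fill the lowest-cost paths first until the source rate is covered.
    public static void allocate(Path[] paths, double sourceRateKbps) {
        Arrays.sort(paths, new Comparator<Path>() {
            @Override
            public int compare(Path a, Path b) {
                return Double.compare(a.cost(), b.cost());
            }
        });
        double remaining = sourceRateKbps;
        for (Path p : paths) {
            p.allocatedKbps = Math.min(p.bandwidthKbps, remaining);
            remaining -= p.allocatedKbps;
            if (remaining <= 0) {
                break;
            }
        }
    }

    public static void main(String[] args) {
        Path[] paths = {
            new Path("WLAN", 3000, 0.040, 0.01),
            new Path("Cellular", 1500, 0.120, 0.02)
        };
        allocate(paths, 3500); // e.g., a 3.5 Mbps video stream
        for (Path p : paths) {
            System.out.println(p.name + " -> " + p.allocatedKbps + " kbps");
        }
    }
}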

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor                               –    Pentium IV
  • Speed                                   –    1.1 GHz
  • RAM                                     –    256 MB (min)
  • Hard Disk                               –    20 GB
  • Floppy Drive                            –    1.44 MB
  • Key Board                               –    Standard Windows Keyboard
  • Mouse                                   –    Two or Three Button Mouse
  • Monitor                                 –    SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Win7
  • Front End                                :           JAVA JDK 1.7
  • Tools                                       :           Netbeans or Eclipse
  • Script                                       :           Java Script
  • Document                               :           MS-Office 2007

CHAPTER 3

3.0 SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

  • The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
  • DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • DFD is also known as bubble chart. A DFD may be used to represent a system at any level of abstraction. DFD may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

DATA SOURCE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures or devices that produce data. The physical component is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.


3.1 ARCHITECTURE DIAGRAM

3.2 DATAFLOW DIAGRAM

UML DIAGRAMS:

3.2 USE CASE DIAGRAM:

3.3 CLASS DIAGRAM:

3.4 SEQUENCE DIAGRAM:

3.5 ACTIVITY DIAGRAM:

CHAPTER 4

4.0 IMPLEMENTATION:

4.1 ALGORITHM

4.2 MODULES:

4.3 MODULE DESCRIPTION:

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase and business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company.  For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are 

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:     

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

 

5.1.2 TECHNICAL FEASIBILITY   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or null changes are required for implementing this system.

5.1.3 SOCIAL FEASIBILITY:  

The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system is working according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.

This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of the system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best programs are worthless if they do not produce the correct outputs.

5.2.1 UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and test data correctly and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

5.1.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for various peers in a distributed network framework, displaying all users available in the group.
Expected result: The result after execution should give the accurate result.


5.1. 3 NON-FUNCTIONAL TESTING:

Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.1.4 LOAD TESTING:

An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under real usage by having actual users connected to it, who generate test input data for the system test.

Description: It is necessary to ascertain that the application behaves correctly under load when a ‘Server busy’ response is received.
Expected result: Should designate another active node as a Server.


5.1.5 PERFORMANCE TESTING:

Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values and produce accurate results in the expected time.


5.1.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval; it focuses on the behavior of the software element and forms a part of software quality control.

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.


5.1.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

Description: Check that the user identification is authenticated.
Expected result: In case of failure, it should not be connected in the framework.

Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key of the same group.


5.1.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.


5.1.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors with a focus on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.

Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.

Description: To check for interface errors.
Expected result: The entire interface must function normally.

Description: To check for errors in data structures or external database access.
Expected result: The database update and retrieval must be performed correctly.

Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.

All the above system testing strategies are carried out during development, as the documentation and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE DESCRIPTION:

 

6.1 JAVA TECHNOLOGY:

Java technology is both a programming language and a platform.

 

The Java Programming Language

 

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

  • Simple
  • Architecture neutral
  • Object oriented
  • Portable
  • Distributed
  • High performance
  • Interpreted
  • Multithreaded
  • Robust
  • Dynamic
  • Secure

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes —the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.
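As a minimal illustration of the compile-then-interpret workflow just described, the classic example below is compiled once into byte codes and can then be run on any Java VM.

// HelloWorld.java — compiled once to byte codes, then run on any Java VM.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}

Compiling with javac HelloWorld.java produces HelloWorld.class (the byte codes); running java HelloWorld lets the local Java VM interpret those byte codes, whether on Windows, Solaris, or a Mac.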

6.2 THE JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

  • The Java Virtual Machine (Java VM)
  • The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that, after compilation, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

6.3 WHAT CAN JAVA TECHNOLOGY DO?

The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.

A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.
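For illustration, a minimal servlet along the lines described above might look like the following. It assumes that the standard javax.servlet API is available and that the hosting Java Web server maps the servlet to a URL; the class name and the response text are arbitrary.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A minimal servlet: the web server calls doGet() for each matching request.
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body><h1>Hello from a servlet</h1></body></html>");
    }
}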

How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:

  • The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
  • Applets: The set of conventions used by applets.
  • Networking: URLs, TCP (Transmission Control Protocol), UDP (User Data gram Protocol) sockets, and IP (Internet Protocol) addresses.
  • Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
  • Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
  • Software components: Known as JavaBeansTM, can plug into existing component architectures.
  • Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
  • Java Database Connectivity (JDBCTM): Provides uniform access to a wide range of relational databases.

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.

 

6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?

We can’t promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and requires less effort than other languages. We believe that Java technology will help you do the following:

  • Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
  • Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
  • Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
  • Develop programs more quickly: Your development time may be as much as twice as fast versus writing the same program in C++. Why? You write fewer lines of code and it is a simpler programming language than C++.
  • Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure JavaTM Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
  • Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
  • Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.

 

6.5 ODBC:

 

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

6.6 JDBC:

In an effort to set an independent database standard API for Java; Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.

 

6.7 JDBC Goals:

Few software packages are designed without goals in mind. JDBC is one package whose many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The eight design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

  1. Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

  • Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

  • Use strong, static typing wherever possible

Strong typing allows for more error checking to be done at compile time; also, fewer errors appear at runtime.

  • Keep the common cases simple

Because more often than not, the usual SQL calls used by the programmer are simple SELECT’s, INSERT’s, DELETE’s and UPDATE’s, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
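The fragment below illustrates how such a simple SELECT looks through the JDBC API. The connection URL, credentials, and the table and column names are placeholders for whatever database is actually configured (for example, an ODBC data source reached through the JDBC-ODBC bridge, as used later for the MS Access table).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal JDBC usage: connect, run a parameterized SELECT, read the results.
// The URL, credentials, table and column names are placeholders.
public class JdbcSelectExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:odbc:SalesFigures"; // e.g., an ODBC data source name
        Connection conn = DriverManager.getConnection(url, "user", "password");
        try {
            PreparedStatement stmt = conn.prepareStatement(
                    "SELECT name, amount FROM orders WHERE amount > ?");
            stmt.setInt(1, 100);
            ResultSet rs = stmt.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("name") + " : " + rs.getInt("amount"));
            }
            rs.close();
            stmt.close();
        } finally {
            conn.close();
        }
    }
}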

Finally, we decided to proceed with the implementation using Java Networking.

For dynamically updating the cache table, we use an MS Access database.

Java has two things: a programming language and a platform.

Java is a high-level programming language that is all of the following

  • Simple
  • Architecture-neutral
  • Object-oriented
  • Portable
  • Distributed
  • High-performance
  • Interpreted
  • Multithreaded
  • Robust
  • Dynamic
  • Secure

Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes — the platform-independent code instructions that are passed to and run on the computer.

Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.

6.7 NETWORKING TCP/IP STACK:

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams:

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP:

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to give a client/server model – see later.

TCP:

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32 bit integer which gives the IP address.

Network address:

Class A uses 8 bits for the network address with 24 bits left over for other addressing. Class B uses 16 bit network addressing. Class C uses 24 bit network addressing and class D uses all 32.

Subnet address:

Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.

Host address:

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total address:

The 32 bit address is usually written as 4 integers separated by dots.
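The classful split described above can be demonstrated with a short fragment that inspects the first octet of a dotted-quad address and reports how many bits belong to the network part; the sample addresses in main are arbitrary.

// Determine the classful network prefix length from the first octet.
// Classes A, B and C use 8-, 16- and 24-bit network addresses respectively.
public class ClassfulAddress {

    public static int networkBits(String dottedQuad) {
        int firstOctet = Integer.parseInt(dottedQuad.split("\\.")[0]);
        if (firstOctet < 128) {
            return 8;   // Class A
        } else if (firstOctet < 192) {
            return 16;  // Class B
        } else if (firstOctet < 224) {
            return 24;  // Class C
        }
        return 32;      // Class D (treated as using all 32 bits, as in the text)
    }

    public static void main(String[] args) {
        System.out.println("10.1.2.3    -> " + networkBits("10.1.2.3") + " network bits");
        System.out.println("172.16.0.5  -> " + networkBits("172.16.0.5") + " network bits");
        System.out.println("192.168.1.9 -> " + networkBits("192.168.1.9") + " network bits");
    }
}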

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a message to a server, you send it to the port for that service of the host that it is running on. This is not location transparency! Certain of these ports are “well known”.

Sockets:

A socket is a data structure maintained by the system to handle network connections. A socket is created using the call socket. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>

/* Creates an endpoint for communication and returns a descriptor
   that is used much like a file descriptor. */
int socket(int family, int type, int protocol);

Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
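Since the project itself is written in Java, the same idea is normally expressed with java.net sockets rather than the C call above. The fragment below is a minimal TCP echo pair for illustration only; the port number and messages are arbitrary choices.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal TCP client/server pair using java.net sockets.
public class EchoExample {

    // Server: accept one connection, read one line, echo it back.
    public static void runServer(int port) throws Exception {
        ServerSocket server = new ServerSocket(port);
        Socket client = server.accept();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()));
        PrintWriter out = new PrintWriter(client.getOutputStream(), true);
        out.println("echo: " + in.readLine());
        client.close();
        server.close();
    }

    // Client: connect, send one line, print the reply.
    public static void runClient(String host, int port) throws Exception {
        Socket socket = new Socket(host, port);
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        out.println("hello");
        System.out.println(in.readLine());
        socket.close();
    }
}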

6.8 JFREE CHART:

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:

A consistent and well-documented API, supporting a wide range of chart types;

A flexible design that is easy to extend, and targets both server-side and client-side applications;

Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG);

JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.
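As a small illustration of the API (assuming the JFreeChart 1.0.x class names), the fragment below builds a pie chart from a dataset and writes it to a PNG file; the dataset values and file name are arbitrary.

import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

// Build a simple pie chart and save it as a PNG image.
public class PieChartExample {
    public static void main(String[] args) throws Exception {
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Video", 53);
        dataset.setValue("Other traffic", 47);

        JFreeChart chart = ChartFactory.createPieChart(
                "Mobile Internet traffic", dataset, true, true, false);

        ChartUtilities.saveChartAsPNG(new File("traffic.png"), chart, 500, 350);
    }
}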

 

6.8.1. Map Visualizations:

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: Sourcing freely redistributable vector outlines for the countries of the world, states/provinces in particular countries (USA in particular, but also other areas);

Creating an appropriate dataset interface (plus default implementation), a renderer, and integrating this with the existing XYPlot class in JFreeChart; testing, documenting, testing some more, documenting some more.

6.8.2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.

6.8.3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.

 

6.8.4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

CHAPTER 7

APPENDIX

7.1 SAMPLE SOURCE CODE

7.2 SAMPLE OUTPUT

CHAPTER 8

8.1 CONCLUSION:

The future wireless environment is expected to be a converged system that incorporates different access networks with diverse transmission features and capabilities. The increasing power and popularity of multihomed mobile terminals facilitate bandwidth aggregation for enhanced transmission reliability and data throughput. Optimizing SCTP is a critical step towards integrating heterogeneous wireless networks for efficient video delivery.

This paper proposes a novel distortion-aware concurrent multipath transfer scheme to support high-quality video streaming over heterogeneous wireless networks. Through modeling and analysis, we have developed solutions for per-path status estimation, congestion window adaption, flow rate allocation, and data retransmission. As future work, we will study the cost minimization problem of utilizing CMT for mobile video delivery in heterogeneous wireless networks.

Defeating Jamming With the Power of Silence: A Game-Theoretic Analysis

Abstract:

The timing channel is a logical communication channel in which information is encoded in the timing between events. Recently, the use of the timing channel has been proposed as a countermeasure to reactive jamming attacks performed by an energy-constrained malicious node. In fact, while a jammer is able to disrupt the information contained in the attacked packets, timing information cannot be jammed, and therefore, timing channels can be exploited to deliver information to the receiver even on a jammed channel. Since the nodes under attack and the jammer have conflicting interests, their interactions can be modeled by means of game theory. Accordingly, in this paper, a game-theoretic model of the interactions between nodes exploiting the timing channel to achieve resilience to jamming attacks and a jammer is derived and analyzed. More specifically, the Nash equilibrium is studied in terms of existence, uniqueness, and convergence under best response dynamics. Furthermore, the case in which the communication nodes set their strategy and the jammer reacts accordingly is modeled and analyzed as a Stackelberg game, by considering both perfect and imperfect knowledge of the jammer’s utility function. Extensive numerical results are presented, showing the impact of network parameters on the system performance.

Introduction:

A timing channel is a communication channel which exploits silence intervals between consecutive transmissions to encode information. Recently, the use of timing channels has been proposed in the wireless domain to support low-rate, energy-efficient communications as well as covert and resilient communications. Timing channels are more (although not totally) immune to reactive jamming attacks. In fact, the interfering signal begins its disturbing action against the communication only after identifying an ongoing transmission, and thus after the timing information has been decoded by the receiver.

A timing channel-based communication scheme has been proposed to counteract jamming by establishing a low-rate physical layer on top of the traditional physical/link layers, using detection and timing of failed packet receptions at the receiver.
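To make the idea concrete, the sketch below (purely illustrative, with made-up symbol durations, not the scheme from the paper) encodes bits in the length of the silence interval between two transmission events and decodes them from the measured gaps:

import java.util.ArrayList;
import java.util.List;

public class TimingChannelSketch {
    // Hypothetical mapping: a short gap encodes 0, a long gap encodes 1.
    static final long SHORT_GAP_MS = 100;
    static final long LONG_GAP_MS = 300;

    // Sender side: turn a bit string into the gaps to wait between transmissions.
    static List<Long> encode(String bits) {
        List<Long> gaps = new ArrayList<Long>();
        for (char b : bits.toCharArray()) {
            gaps.add(b == '1' ? LONG_GAP_MS : SHORT_GAP_MS);
        }
        return gaps;
    }

    // Receiver side: even if packet contents are jammed, the time between
    // detected transmission starts still carries the information.
    static String decode(List<Long> observedGaps) {
        StringBuilder bits = new StringBuilder();
        for (long gap : observedGaps) {
            bits.append(gap >= (SHORT_GAP_MS + LONG_GAP_MS) / 2 ? '1' : '0');
        }
        return bits.toString();
    }

    public static void main(String[] args) {
        List<Long> gaps = encode("1011");
        System.out.println(decode(gaps)); // prints 1011
    }
}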

The energy cost of jamming the timing channel and the resulting trade-offs have been analyzed. We study the interactions between the jammer and the node whose transmissions are under attack, which we call the target node.

Specifically, we assume that the target node wants to maximize the amount of information that can be transmitted per unit of time by means of the timing channel, whereas the jammer wants to minimize such amount of information while reducing the energy expenditure.

As the target node and the jammer have conflicting interests, we develop a game-theoretic framework that models their interactions. We investigate both the case in which these two adversaries play their strategies simultaneously and the situation when the target node (the leader) anticipates the actions of the jammer (the follower). To this purpose, we study both the Nash Equilibria (NEs) and Stackelberg Equilibria (SEs) of our proposed games.

Existing system:

Recently, the use of timing channels has been proposed in the wireless domain to support low-rate, energy-efficient communications as well as covert and resilient communications. In the existing system, methodologies to detect jamming attacks are illustrated; it is also shown that it is possible to identify which kind of jamming attack is ongoing by looking at the signal strength and other relevant network parameters, such as bit and packet errors. Several solutions against reactive jamming have been proposed that exploit different techniques, such as frequency hopping, power control and unjammed bits.

Disadvantages:

  • Continuous jamming is very costly in terms of energy consumption for the jammer
  • Existing solutions usually rely on users’ cooperation and coordination, which might not be guaranteed in a jammed environment. In fact, the reactive jammer can totally disrupt each transmitted packet and, consequently, no information can be decoded and then used to this purpose.

Proposed system:

Our proposed system implementation focuses on the resilience of timing channels to jamming attacks. In general, these attacks can completely disrupt communications when the jammer continuously emits a high-power disturbing signal, i.e., when continuous jamming is performed.

We analyze the interactions between the jammer and the node whose transmissions are under attack, which we call the target node. Specifically, we assume that the target node wants to maximize the amount of information that can be transmitted per unit of time by means of the timing channel, whereas the jammer wants to minimize such amount of information while reducing the energy expenditure.

As the target node and the jammer have conflicting interests, we develop a game theoretical framework that models their interactions. We investigate both the case in which these two adversaries play their strategies simultaneously and the situation when the target node (the leader) anticipates the actions of the jammer (the follower). To this purpose, we study both the Nash Equilibria (NEs) and Stackelberg Equilibria (SEs) of our proposed games.

Advantages:

  • We model the interactions between a jammer and a target node as a jamming game;
  • We prove the existence, uniqueness and convergence to the Nash Equilibrium (NE) under best response dynamics;
  • We prove the existence and uniqueness of the equilibrium of the Stackelberg game where the target node plays as a leader and the jammer reacts consequently;
  • We investigate, in this latter Stackelberg scenario, the impact of imperfect knowledge of the jammer’s utility function on the achievable performance;
  • We conduct an extensive numerical analysis which shows that our proposed models well capture the main factors behind the utilization of timing channels, thus representing a promising framework for the design and understanding of such systems.

Modules:

NASH Equilibrium Analysis:

We study the Nash Equilibrium points (NEs), in which each player achieves its highest utility given the strategy profile of the opponent. In the following, we also provide proofs of the existence, uniqueness and convergence to the Nash Equilibrium under best response dynamics.

Existence of the Nash Equilibrium:

It is well known that the intersection points between bT(y) and bJ(x) are the NEs of the game. Therefore, to demonstrate the existence of at least one NE, it suffices to prove that bT(y) and bJ(x) have one or more intersection points. In other words, it is sufficient to find one or more such pairs.

Uniqueness of the Nash Equilibrium:

After proving the NE existence in the theorem, let us prove the uniqueness of the NE, that is, that there is only one strategy profile such that no player has an incentive to deviate unilaterally.

Convergence to the Nash Equilibrium:

We analyze the convergence of the game to the NE when players follow Best Response Dynamics (BRD). In BRD, the game starts from any initial point (x(0), y(0)) ∈ S and, at each successive step, each player plays its strategy by following its best response function.
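The loop below is a generic sketch of best response dynamics with hypothetical linear best-response functions bT(y) and bJ(x) (not the ones from the actual game model); it only illustrates how the iteration proceeds and when it stops:

public class BestResponseDynamics {
    // Hypothetical best-response function of the target node: x = bT(y).
    static double bT(double y) { return 0.5 * y + 0.2; }

    // Hypothetical best-response function of the jammer: y = bJ(x).
    static double bJ(double x) { return 0.3 * x + 0.1; }

    public static void main(String[] args) {
        double x = 0.0, y = 0.0;   // any initial point (x(0), y(0)) in S
        double eps = 1e-9;         // convergence tolerance

        for (int step = 1; step <= 1000; step++) {
            double xNext = bT(y);      // target node plays its best response
            double yNext = bJ(xNext);  // jammer replies with its best response
            if (Math.abs(xNext - x) < eps && Math.abs(yNext - y) < eps) {
                System.out.println("Converged to NE (" + xNext + ", " + yNext
                        + ") after " + step + " steps");
                break;
            }
            x = xNext;
            y = yNext;
        }
    }
}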

Performance Analysis

The Stackelberg game allows the leader to achieve a utility which is at least equal to the utility achieved in the ordinary game at the NE if we assume perfect knowledge, that is, the target node is completely aware of the utility function of the jammer and its parameters, and thus it is able to evaluate bJ(x). If some parameters in the jammer’s utility function are unknown to the target node, the achievable performance must instead be evaluated under imperfect knowledge.

Conclusion:

Our system implementation proposed a game-theoretic model of the interactions between a jammer and a communication node that exploits a timing channel to improve resilience to jamming attacks. Structural properties of the utility functions of the two players have been analyzed and exploited to prove the existence and uniqueness of the Nash Equilibrium. The convergence of the game to the Nash Equilibrium has been studied and proved by analyzing the best response dynamics. Furthermore, as the reactive jammer is assumed to start transmitting its interference signal only after detecting activity of the node under attack, a Stackelberg game has been properly investigated, and proofs of the existence and uniqueness of the Stackelberg Equilibrium have been provided.

Data-Stream-Based Intrusion Detection System for Advanced Metering Infrastructure in Smart Grid: A Feasibility Study

In this paper, we will focus on the security of advanced metering infrastructure (AMI), which is one of the most crucial components of SG. AMI serves as a bridge for providing bidirectional information flow between user domain and utility domain. AMI’s main functionalities encompass power measurement facilities, assisting adaptive power pricing and demand side management, providing self-healing ability, and interfaces for other systems.

AMI is usually composed of three major types of components, namely, smart meter, data concentrator, and central system (a.k.a. AMI headend) and bidirectional communication networks among those components. AMI is exposed to various security threats such as privacy breach, energy theft, illegal monetary gain, and other malicious activities. As AMI is directly related to revenue earning, customer power consumption, and privacy, of utmost importance is securing its infrastructure. In order to protect AMI from malicious attacks, we look into the intrusion detection system (IDS) aspect of security solution.

We can define IDS as a monitoring system for detecting any unwanted entity intruding into a targeted system (such as AMI in our context). We treat IDS as a second-line security measure after the first line of primary AMI security techniques such as encryption, authorization, and authentication. Since the specification for AMI networks evolves gradually and fresh specifications continually need to be included, changing specifications in all key IDS sensors would be expensive and cumbersome. In this paper, we therefore choose to employ anomaly-based IDS using data mining approaches.

1.2 INTRODUCTION

Smart grid (SG) is a set of technologies that integrate modern information technologies with the present power grid system. Along with many other benefits, two-way communication, updating users about their consuming behavior, controlling home appliances and other smart components remotely, and monitoring the power grid’s stability are unique features of SG. To facilitate such kinds of novel features, SG needs to incorporate many new devices and services. For communicating, monitoring, and controlling of these devices/services, there may also be a need for many new protocols and standards. However, the combination of all these new devices, services, protocols, and standards makes SG a very complex system that is vulnerable to increased security threats—like any other complex system. In particular, because of its bidirectional, interoperable, and software-oriented nature, SG is very prone to cyber attacks. If proper security measures are not taken, a cyber attack on SG can potentially bring about a huge catastrophic impact on the whole grid and, thus, on society. Thus, cyber security in SG is treated as one of the vital issues by the National Institute of Standards and Technology and the Federal Energy Regulatory Commission.

In this paper, we will focus on the security of advanced metering infrastructure (AMI), which is one of the most crucial components of SG. AMI serves as a bridge for providing bidirectional information flow between user domain and utility domain [2]. AMI’s main functionalities encompass power measurement facilities, assisting adaptive power pricing and demand side management, providing self-healing ability, and interfaces for other systems. AMI is usually composed of three major types of components, namely, smart meter, data concentrator, and central system (a.k.a. AMI headend) and bidirectional communication networks among those components. Being a complex system in itself, AMI is exposed to various security threats such as privacy breach, energy theft, illegal monetary gain, and other malicious activities. As AMI is directly related to revenue earning, customer power consumption, and privacy, of utmost importance is securing its infrastructure.

1.3 LITERATURE SURVEY

EFFICIENT AUTHENTICATION SCHEME FOR DATA AGGREGATION IN SMART GRID WITH FAULT TOLERANCE AND FAULT DIAGNOSIS

PUBLISH: IEEE Power Energy Soc. Conf. ISGT, 2012, pp. 1–8.

AUTHOR: D. Li, Z. Aung, J. R. Williams, and A. Sanchez

EXPLANATION:

Authentication schemes relying on per-packet signature and per-signature verification introduce heavy costs for computation and communication. Due to its constrained resources, the smart grid’s authentication requirement cannot be satisfied by this scheme. Most importantly, it is a must to underscore the smart grid’s demand for high availability. In this paper, we present an efficient and robust approach to authenticate data aggregation in smart grid via deploying signature aggregation, batch verification and signature amortization schemes to lessen communication overhead, reduce the number of signing and verification operations, and provide fault tolerance. Corresponding fault diagnosis algorithms are contributed to pinpoint forged or error signatures. Both experimental results and performance evaluation demonstrate our computational and communication gains.

CYBER SECURITY ISSUES FOR ADVANCED METERING INFRASTRUCTURE (AMI)

PUBLISH: IEEE Power Energy Soc. Gen. Meet. – Convers. Del. Electr. Energy 21st Century, 2008, pp. 1–5.

AUTHOR: F. M. Cleveland

EXPLANATION:

Advanced Metering Infrastructure (AMI) is becoming of increasing interest to many stakeholders, including utilities, regulators, energy markets, and a society concerned about conserving energy and responding to global warming. AMI technologies, rapidly overtaking the earlier Automated Meter Reading (AMR) technologies, are being developed by many vendors, with portions being developed by metering manufacturers, communications providers, and back-office Meter Data Management (MDM) IT vendors. In this flurry of excitement, very little effort has yet been focused on the cyber security of AMI systems. The comment usually is “Oh yes, we will encrypt everything – that will make everything secure.” That comment indicates unawareness of possible security threats of AMI – a technology that will reach into a large majority of residences and virtually all commercial and industrial customers. What if, for instance, remote connect/disconnect were included as one AMI capability – a function of great interest to many utilities as it avoids truck rolls? What if a smart kid hacker in his basement cracked the security of his AMI system and sent out 5 million disconnect commands to all customer meters on the AMI system?

INTRUSION DETECTION FOR ADVANCED METERING INFRASTRUCTURES: REQUIREMENTS AND ARCHITECTURAL DIRECTIONS

PUBLISH: IEEE Int. Conf. SmartGridComm, 2010, pp. 350–355.

AUTHOR: R. Berthier, W. H. Sanders, and H. Khurana

EXPLANATION:

The security of Advanced Metering Infrastructures (AMIs) is of critical importance. The use of secure protocols and the enforcement of strong security properties have the potential to prevent vulnerabilities from being exploited and from having costly consequences. However, as learned from experiences in IT security, prevention is one aspect of a comprehensive approach that must also include the development of a complete monitoring solution. In this paper, we explore the practical needs for monitoring and intrusion detection through a thorough analysis of the different threats targeting an AMI. In order to protect AMI from malicious attacks, we look into the intrusion detection system (IDS) aspect of the security solution. We can define IDS as a monitoring system for detecting any unwanted entity intruding into a targeted system (such as AMI in our context). We treat IDS as a second-line security measure after the first line of primary AMI security techniques such as encryption, authorization, and authentication [3]. However, Cleveland [4] stressed that these first-line security solutions alone are not sufficient for securing AMI.

MOA: MASSIVE ONLINE ANALYSIS, A FRAMEWORK FOR STREAM CLASSIFICATION AND CLUSTERING

PUBLISH: JMLR Workshop Conf. Proc., Workshop Appl. Pattern Anal., 2010, vol. 11, pp. 44–50.

AUTHOR: A. Bifet, G. Holmes, B. Pfahringer, P. Kranen, H. Kremer, T. Jansen, and T. Seidl

EXPLANATION:

In today’s applications, massive, evolving data streams are ubiquitous. Massive Online Analysis (MOA) is a software environment for implementing algorithms and running experiments for online learning from evolving data streams. MOA is designed to deal with the challenging problems of scaling up the implementation of state of the art algorithms to real world dataset sizes and of making algorithms comparable in benchmark streaming settings. It contains a collection of offline and online algorithms for both classification and clustering as well as tools for evaluation. Researchers benefit from MOA by getting insights into workings and problems of different approaches, practitioners can easily compare several algorithms and apply them to real world data sets and settings. MOA supports bi-directional interaction with WEKA, the Waikato Environment for Knowledge Analysis, and is released under the GNU GPL license. Besides providing algorithms and measures for evaluation and comparison, MOA is easily extensible with new contributions and allows the creation of benchmark scenarios through storing and sharing setting files.

SECURING ADVANCED METERING INFRASTRUCTURE USING INTRUSION DETECTION SYSTEM WITH DATA STREAM MINING

PUBLISH: Proc. PAISI, 2012, vol. 7299, pp. 96–111.

AUTHOR: M. A. Faisal, Z. Aung, J. Williams, and A. Sanchez

EXPLANATION:

Advanced metering infrastructure (AMI) is an imperative component of the smart grid, as it is responsible for collecting, measuring, analyzing energy usage data, and transmitting these data to the data concentrator and then to a central system in the utility side. Therefore, the security of AMI is one of the most demanding issues in the smart grid implementation. In this paper, we propose an intrusion detection system (IDS) architecture for AMI which will act as a complimentary with other security measures. This IDS architecture consists of three local IDSs placed in smart meters, data concentrators, and central system (AMI headend). For detecting anomaly, we use data stream mining approach on the public KDD CUP 1999 data set for analysis the requirement of the three components in AMI. From our result and analysis, it shows stream data mining technique shows promising potential for solving security issues in AMI.

DATA STREAM MINING ARCHITECTURE FOR NETWORK INTRUSION DETECTION

PUBLISH: IEEE Int. Conf. IRI, 2004, pp. 363–368

AUTHOR: N. C. N. Chu, A. Williams, R. Alhajj, and K. Barker

EXPLANATION:

In this paper, we propose a stream mining architecture which is based on a single-pass approach. Our approach can be used to develop efficient, effective, and active intrusion detection mechanisms which satisfy the near real-time requirements of processing data streams on a network with minimal overhead. The key idea is that new patterns can now be detected on-the-fly. They are flagged as network attacks or labeled as normal traffic, based on the current network trend, thus reducing the false alarm rates prevalent in active network intrusion systems and increasing the low detection rate which characterizes passive approaches.

RESEARCH ON DATA MINING TECHNOLOGIES APPLYING INTRUSION DETECTION

PUBLISH: Proc. IEEE ICEMMS, 2010, pp. 230–233

AUTHOR: Z. Qun and H. Wen-Jie

EXPLANATION:

Intrusion detection is one of the main research directions in the network security area. Data mining technology can be applied to network intrusion detection systems (NIDS) to automatically discover new patterns from massive network data, reducing the workload of manually compiling intrusion behavior patterns and normal behavior patterns. This article briefly reviews current intrusion detection technology and data mining technology, focusing on the application of data mining algorithms to anomaly detection and misuse detection. For misuse detection, classification algorithms are mainly studied; for anomaly detection, pattern comparison and clustering algorithms are mainly studied, with pattern comparison analyzed in depth through association rules and sequence rules. Finally, the difficulties that current data mining algorithms face in intrusion detection applications are analyzed, and the next research directions are indicated.

AN EMBEDDED INTRUSION DETECTION SYSTEM MODEL FOR APPLICATION PROGRAM

PUBLISH: IEEE PACIIA, 2008, vol. 2, pp. 910–912.

AUTHOR: S. Wu and Y. Chen

EXPLANATION:

Intrusion detection is an effective security mechanism developed in the recent decade. Because of its wide applicability, intrusion detection becomes the key part of the security mechanism. The modern technologies and models in intrusion detection field are categorized and studied. The characters of current practical IDS are introduced. The theories and realization of IDS based on applications are presented. The basic ideas concerned with how to design and realize the embedded IDS for application are proposed.

ACCURACY UPDATED ENSEMBLE FOR DATA STREAMS WITH CONCEPT DRIFT

PUBLISH: Proc. 6th Int. Conf. HAIS Part II, 2011, pp. 155–163.

AUTHOR: D. Brzeziński and J. Stefanowski

EXPLANATION:

In this paper we study the problem of constructing accurate block-based ensemble classifiers from time-evolving data streams. AWE is the best-known representative of these ensembles. We propose a new algorithm called Accuracy Updated Ensemble (AUE), which extends AWE by using online component classifiers and updating them according to the current distribution. Additional modifications of the weighting functions solve problems with the undesired classifier exclusion seen in AWE. Experiments with several evolving data sets show that, while still requiring constant processing time and memory, AUE is more accurate than AWE.

ACTIVE LEARNING WITH EVOLVING STREAMING DATA

PUBLISH: Proc. ECML-PKDD Part III, 2011, pp. 597–612.

AUTHOR: I. Žliobaitė, A. Bifet, B. Pfahringer, and G. Holmes

EXPLANATION:

In learning to classify streaming data, obtaining the true labels may require major effort and may incur excessive cost. Active learning focuses on learning an accurate model with as few labels as possible. Streaming data poses additional challenges for active learning, since the data distribution may change over time (concept drift) and classifiers need to adapt. Conventional active learning strategies concentrate on querying the most uncertain instances, which are typically concentrated around the decision boundary. If changes do not occur close to the boundary, they will be missed and classifiers will fail to adapt. In this paper we develop two active learning strategies for streaming data that explicitly handle concept drift. They are based on uncertainty, dynamic allocation of labeling efforts over time and randomization of the search space. We empirically demonstrate that these strategies react well to changes that can occur anywhere in the instance space and unexpectedly.

LEARNING FROM TIME-CHANGING DATA WITH ADAPTIVE WINDOWING

PUBLISH: Proc. SIAM Int. Conf. SDM, 2007, pp. 443–448.

AUTHOR: A. Bifet and R. Gavaldà

EXPLANATION:

We present a new approach for dealing with distribution change and concept drift when learning from data sequences that may vary with time. We use sliding windows whose size, instead of being fixed a priori, is recomputed online according to the rate of change observed from the data in the window itself. This delivers the user or programmer from having to guess a time-scale for change. Contrary to many related works, we provide rigorous guarantees of performance, as bounds on the rates of false positives and false negatives. Using ideas from data stream algorithmics, we develop a time- and memory-efficient version of this algorithm, called ADWIN2. We show how to combine ADWIN2 with the Naïve Bayes (NB) predictor, in two ways: one, using it to monitor the error rate of the current model and declare when revision is necessary and, two, putting it inside the NB predictor to maintain up-to-date estimations of conditional probabilities in the data. We test our approach using synthetic and real data streams and compare them to both fixed-size and variable-size window strategies with good results.

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

The existing approach to protecting an AMI against malicious activities is to create a monitoring solution that covers the heterogeneity of communication technologies through their requirements (e.g., encryption and real time) and constraints (e.g., topology and bandwidth). It is critical to identify these elements, for two reasons: 1) they can help to define the potential impact of malicious activities targeting the AMI; and 2) they can impose limits on the functionalities and security of a monitoring solution. For instance, the fact that large portions of an AMI network are wireless and use a mesh network topology facilitates network-related attacks such as traffic interception, and the design of the monitoring architecture is more challenging than in a traditional wired network. Moreover, a large number of nodes are deployed in the field or in consumer facilities, which means that attacks requiring physical access are easier to conduct.

These detection techniques are different for two fundamental reasons.

First, signature-based IDS uses a blacklist approach, while anomaly- and specification-based IDS use a whitelist approach. A blacklist approach requires creation of a knowledge base of malicious activity, while a whitelist approach requires training of the system and identification of its normal or correct behavior. A limitation of such list-based approaches is that they provide little information about the root causes of attacks.

A second fundamental difference lies in the level of understanding required by each approach. Signature- and anomaly-based IDSes belong to the same group by monitoring activity at a low level, while specification-based IDS requires a high-level and stateful understanding of the activity monitored.

2.1.1 DISADVANTAGES:

  • Curious eavesdroppers, who are motivated to learn about the activity of their neighbors by listening in on the traffic of the surrounding meters
  • Motivated eavesdroppers, who desire to gather information about potential victims as part of an organized theft.
  • Unethical customers, who are motivated to steal electricity by tampering with the metering equipment installed inside their homes.
  • Overly intrusive meter data management agencies, which are motivated to gain high-resolution energy and behavior profiles about their users, which can damage customer privacy. This type of attacker also includes employees who could attempt to spy illegitimately on customers.
  • Active attackers, who are motivated by financial gain or terrorist goals. The objective of a terrorist would be to create large-scale disruption of the grid, either by remotely cutting off many customers or by creating instability in the distribution or transmission networks. Active attackers attracted by financial gain could also use disruptive actions, such as Denial of Service (DoS) attacks, or they could develop self-propagating malware in order to create revenue-making opportunities.
  • Publicity seekers, who use techniques similar to those of other types of attackers, but in a potentially less harmful way, because they are more interested in fame and usually have limited financial resources. Attackers may use a variety of attack techniques to reach their objectives. Based on a survey of the related literature, this information about attack consequences will be used in the next section to identify the monitoring mechanisms required for an intrusion detection system.


2.2 PROPOSED SYSTEM:

We propose a new AMI IDS architecture based on the AMI architecture presented by OPENMeter, which is a project deployed by several European countries to reduce the gap between state-of-the-art technologies and AMI’s requirements. We use the data stream mining algorithms available in Massive Online Analysis (MOA) in order to simulate the IDSs of the proposed architecture.

Our proposed IDS architecture follows a sequential process. Communication data from various sources are fed into the Acceptor Module. The Preprocessing Unit is responsible for producing data according to predetermined attributes by monitoring the communication data. The generated data are then treated as input for the Stream Mining Module, which runs a data stream mining algorithm over them. Finally, the Decision Maker Unit decides whether it should trigger an alarm or not, as sketched below.
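A schematic of this sequential flow is given below with hypothetical interface and class names (Acceptor, Preprocessor, StreamClassifier, DecisionMaker and FeatureVector are illustrative stand-ins, not actual project classes):

import java.util.List;

public class IdsPipelineSketch {
    // Hypothetical container for one preprocessed observation.
    static class FeatureVector {
        final double[] features;
        FeatureVector(double[] features) { this.features = features; }
    }

    // Hypothetical module interfaces mirroring the described units.
    interface Acceptor         { List<String> collectRawData(); }
    interface Preprocessor     { FeatureVector extract(String raw); }
    interface StreamClassifier {
        boolean isAttack(FeatureVector v);                 // classify an instance
        void update(FeatureVector v, boolean wasAttack);   // keep learning from the stream
    }
    interface DecisionMaker    { void handle(FeatureVector v, boolean attack); }

    static void runOnce(Acceptor acceptor, Preprocessor pre,
                        StreamClassifier miner, DecisionMaker decider) {
        for (String raw : acceptor.collectRawData()) {   // Acceptor Module
            FeatureVector v = pre.extract(raw);          // Preprocessing Unit
            boolean attack = miner.isAttack(v);          // Stream Mining Module
            decider.handle(v, attack);                   // Decision Maker Unit (alarm + attack records)
            miner.update(v, attack);                     // model adapts to evolving data
        }
    }
}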

This module also keeps records of the information associated with attacks. These records will be used for further analysis and for improving the attack database. The proposed IDS architecture for the other two types of AMI components, namely, data concentrator and AMI headend, is more or less similar to that of the smart meter IDS. Again, the security boxes for those components can be either inside (in the form of software or an add-on hardware card) or outside (in the form of a dedicated box or server). In order to simultaneously monitor a large number of data flows received from a large number of smart meters and detect security threats, such security boxes must be rich in computing resources.


2.2.1 ADVANTAGES:

  • We have proposed a reliable and pragmatic IDS architecture for AMI.
  • We have conducted a set of experiments on a public IDS data set using state-of-the-art data stream mining techniques and observed their performances.
  • We have performed a feasibility study of applying these data stream mining algorithms for different components of the proposed IDS architecture. Note that proposing a new data stream mining algorithm is out of the scope of this paper and is planned as future work.

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor                                –    Pentium IV
  • Speed                                      –    1.1 GHz
  • RAM                                       –    256 MB (min)
  • Hard Disk                               –    20 GB
  • Floppy Drive                           –    1.44 MB
  • Keyboard                                –    Standard Windows Keyboard
  • Mouse                                     –    Two or Three Button Mouse
  • Monitor                                   –    SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Win7
  • Front End                                :           JAVA JDK 1.7
  • Document                               :           MS-Office 2007


CHAPTER 3

3.0 SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

  • The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
  • DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • DFD is also known as bubble chart. A DFD may be used to represent a system at any level of abstraction. DFD may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

DATA SOURCE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures, or devices that produce data. The physical component itself is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.


3.1 ARCHITECTURE DIAGRAM

3.2 DATAFLOW DIAGRAM

LEVEL 0

(Level 0 data flow: the base station uses an IP address to generate an authentication key and send packet data.)

LEVEL 1

(Level 1 data flow: the base station uses an IP address to send data; file transfer proceeds over a socket connection once connecting succeeds.)

LEVEL 2

(Level 2 data flow: the router uses an IP address and socket connection for routing, verifies the file transaction, checks for IDS attacks, and performs security analysis and encryption via hash implementation, an authentication key infrastructure, and a certificate revocation list.)

LEVEL 3

(Level 3 data flow: the node, identified by its IP address, receives data over the received path after the IDS attack check.)

UML DIAGRAMS:

3.3 USE CASE DIAGRAM:

(Use case diagram: the base station, router, and receiving node interact through IP addressing, socket connection, data transfer, authentication via hash implementation, public key infrastructure and certificate revocation list, IDS checking, and data reception.)

3.4 CLASS DIAGRAM:

(Class diagram: Node with attributes IP address and received path and operation connecting(); Base station with attributes IP address and browse file and operations connecting(), socket connection() and file transfer(); Router with attributes IP address and select connection and operations routing(), file received(), start receiving, security analysis() and encrypt().)


3.5 SEQUENCE DIAGRAM:

(Sequence diagram: the source base station establishes communication with the destination; once the connection is established, data are sent, an authentication key is generated, routing is formed and completed under IDS attack checking, data are received and decoded for viewing, and the connection terminates.)

3.6 ACTIVITY DIAGRAM:

(Activity diagram: the base station supplies an IP address, browses a file and opens a socket connection; the router selects a connection, performs IDS security analysis and routing, and checks whether the file transfer may proceed; the node supplies its IP address and received path and either receives the file or the transfer is rejected.)


CHAPTER 4

4.0 IMPLEMENTATION:

AMI:

AMI is an updated version of automatic or automated meter reading (AMR) [2]. Present traditional AMR helps a utility company in reading meters through one-way communication. However, as AMR cannot meet the current requirements for two-way communication and others, AMI is introduced. AMI is composed of smart meters, data concentrators, and central system (AMI headend) and the communication networks among them. These AMI components are usually located in various networks and different realms such as public and private ones. Fig. 1 gives a pictorial view of AMI integration in a broader context of power generation, distribution, etc. From this figure, we can see that the smart meter, responsible for monitoring and recording power usage of home appliances, etc., is the key equipment for consumers.

Home appliances and other integrated devices/systems such as water and gas meters, in-home display, plug-in electric vehicle/plug-in hybrid electric vehicle, smart thermostat, rooftop photovoltaic system, etc., constitute a home area network (HAN), which is connected to the smart meter. For communicating among these constituents, ZigBee or power line communication can be used. A number of individual smart meters communicate to a data concentrator through a neighborhood area network (NAN). WiMAX, cellular technologies, etc., are possible means for this network. A number of data concentrators are connected to an AMI headend in the utility side using a wide area network (WAN). Various long-distance communication technologies such as fiber optic, digital subscriber line, etc., are used in the WAN. The AMI headend located in the utility side consists of the meter data management system, geographic information system (GIS), configuration system, etc. These subsystems may build a local area network (LAN) for intercommunication.

Let us look at the first component of AMI, namely, the smart meter. Along with the houses of ordinary people, smart meters are also installed in crucial places such as companies, banks, hospitals, educational institutes, government agencies, parliaments, and presidential residences. Thus, the security of smart meters is a vital issue. To the best of our knowledge, current smart meters do not possess an IDS facility yet. If we are to furnish smart meters with IDS, one possible approach is to develop embedded software for IDS, such as the one proposed in [20], and update the firmware of the smart meter to include this embedded IDS. Although this can be done with relative ease, the main problem is the limitation of computing resources in the current smart meters. They are mostly equipped with low-end processors and limited amounts of main memory (in the kilobytes to a few megabytes range). Although this may change in the near future, since a good number of smart meters have already been deployed in many developed countries, it is not very easy to replace them or upgrade the existing ones with more powerful resources.

Since a smart meter is supposed to consume most of its processor and main memory resources for its core businesses (such as recording electricity usage, interaction with other smart home appliances, and two-way communication with its associated data concentrator and, ultimately, the headend), only a small fraction of its already limited resources is available for IDS data processing. We try to solve this problem of resource scarceness by proposing to use a separate IDS entity, either installed outside the smart meter (for existing ones) or integrated within the smart meter (for new ones). We name such an entity a “security box.” A possible design of a smart meter with this security box is provided in Fig. 2(a), based on the one presented in [22]. Here, we show the security box as a simple meter IDS. However, it is open for this component to cover other security functions such as firewall, encryption, authorization, authentication, etc. Care should be taken that the security box for the smart meter should not be too expensive and hence should be equipped with resources just enough to perform computations for IDS (and other security-related calculations, if applicable) at the meter level.

4.1 ALGORITHM

DATA STREAM MINING ALGORITHMS

We use MOA in our experiment. It is an open source data stream mining framework. Although there are various static and evolving stream mining classification algorithms available in this software environment, we are only interested in the evolving ones. Evolving classification algorithms care about the concept change or the distribution change in the data stream. There are 16 evolving data stream classifiers in MOA. After an initial trial on those 16 classifiers, the 7 ensemble classifiers listed in Table II are selected. (From now on, we will write classifier names in MOA in italic. In many cases, MOA’s classifier names are self-explanatory.) These seven classifiers are chosen because they offered the highest accuracies (evaluated with the EvaluatePrequential method in MOA; the test-then-train idea is sketched below) on the training set. Different variants of a Hoeffding tree are used as the base learner in MOA.
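The prequential (test-then-train) idea behind EvaluatePrequential can be sketched as follows; Classifier and LabeledInstance are hypothetical stand-ins for the corresponding MOA abstractions, not actual MOA classes:

import java.util.Iterator;

public class PrequentialSketch {
    // Hypothetical stand-ins for a MOA-style stream classifier and instance.
    interface LabeledInstance { int label(); }
    interface Classifier {
        int predict(LabeledInstance x);   // test first ...
        void train(LabeledInstance x);    // ... then train on the same instance
    }

    // Test-then-train evaluation: every instance is first used for testing and
    // then immediately used to update the model, so the reported accuracy
    // reflects how well the classifier tracks an evolving stream.
    static double prequentialAccuracy(Classifier c, Iterator<LabeledInstance> stream) {
        long seen = 0, correct = 0;
        while (stream.hasNext()) {
            LabeledInstance x = stream.next();
            if (c.predict(x) == x.label()) {
                correct++;
            }
            c.train(x);
            seen++;
        }
        return seen == 0 ? 0.0 : (double) correct / seen;
    }
}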

The algorithm establishes the Hoeffding bound, which quantifies the number of observations required to estimate the necessary statistics within a prescribed precision. Mathematically, the Hoeffding bound states that, with probability 1 − δ, the true mean of a random variable with range R will not differ from the estimated mean after n independent examples or observations by more than ε = √(R² ln(1/δ) / (2n)). The descriptions of these seven selected ensemble learners are as follows. (For more detailed descriptions, refer to the original papers [23]–[30].) AccuracyUpdatedEnsemble (AUE): a block-based ensemble classifier for evolving data streams and an improved version of the Accuracy Weighted Ensemble (AWE). AUE makes this enhancement by using online component classifiers, updating the base classifiers rather than only adjusting their weights, according to the present distribution. In addition, this method also addresses a drawback of AWE by redefining the weighting function; the general weighted-voting idea is sketched below.
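The core ensemble idea (weighted voting over component classifiers whose weights reflect their accuracy on recent data) can be sketched as follows; the weighting rule here is a simplified placeholder, not the exact AUE or AWE formula, and Member is a hypothetical interface:

import java.util.List;

public class WeightedEnsembleSketch {
    interface Member {
        int predict(double[] x);     // a component classifier (e.g., a Hoeffding tree)
        double recentAccuracy();     // accuracy measured on the latest block/window
    }

    // Predict by weighted majority vote over two classes {0, 1};
    // each member's weight is derived from its recent accuracy.
    static int predict(List<Member> ensemble, double[] x) {
        double[] votes = new double[2];
        for (Member m : ensemble) {
            double weight = Math.max(0.0, m.recentAccuracy() - 0.5); // simplified weighting
            votes[m.predict(x)] += weight;
        }
        return votes[1] > votes[0] ? 1 : 0;
    }
}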

4.2 MODULES:

SERVER CLIENT MODULE:

SMART GRID AMI:

DATA STREAM MINING METHODS:

INTRUSION DETECTION SYSTEM (IDS):

4.3 MODULE DESCRIPTION:

SERVER CLIENT MODULE:

Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Often clients and servers operate over a computer network on separate hardware. A server machine is a high-performance host that runs one or more server programs which share its resources with clients. A client, by contrast, does not share its own resources; clients therefore initiate communication sessions with servers, which await (listen for) incoming requests. Network-accessible decoy resources may also be deployed in a network as surveillance and early-warning tools, since such resources are not normally accessed for legitimate purposes and any access to them is a strong sign of an attacker.

Techniques used by attackers who attempt to compromise these decoy resources are studied during and after an attack to keep an eye on new exploitation techniques. Such analysis may be used to further tighten the security of the actual network being protected by the decoy. The decoy can also direct an attacker’s attention away from legitimate servers, encouraging attackers to spend their time and energy on the decoy server while distracting their attention from the data on the real server. Similar to a decoy server, a decoy network can be set up with intentional vulnerabilities; its purpose is also to invite attacks so that the attacker’s methods can be studied and that information can be used to increase network security.
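The listen/accept behaviour described above for servers can be illustrated with a minimal Java echo server (a sketch, not part of the project code; port 9090 is an arbitrary placeholder):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoServer {
    public static void main(String[] args) throws IOException {
        // The server binds to a port and waits (listens) for incoming requests.
        try (ServerSocket server = new ServerSocket(9090)) {
            while (true) {
                // Each accepted connection represents one client-initiated session.
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String line = in.readLine();
                    out.println("echo: " + line);   // reply, then close the session
                }
            }
        }
    }
}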

SMART GRID AMI:

While developing a specification for AMI networks is effective, the specification evolves gradually and fresh specifications continually need to be included. Hence, changing specifications in all key IDS sensors would be expensive and cumbersome. In this paper, we choose to employ anomaly-based IDS using data mining approaches. However, instead of considering conventional static mining techniques, we select stream mining, precisely “evolving data stream mining,” as this is a more realistic approach for real-world monitoring and intrusion detection in AMI, where various novel attacks can be introduced. Rodrigues and Gama mention that SG networks have various distributed sources of high-speed data streams. These stream data can be characterized as open-ended and high-speed and are produced in nonstationary distributions; thus, the dynamics of the data are unpredictable. As the number of smart meters is supposed to grow eventually and their roles in AMI evolve over time, the topology of SG networks may change, and so may the data distribution in AMI networks. Hence, the model should be able to cope with evolving data.

AMI is composed of smart meters, data concentrators, and central system (AMI headend) and the communication networks among them. These AMI components are usually located in various networks and different realms, such as public and private ones. Fig. 1 gives a pictorial view of AMI integration in a broader context of power generation, distribution, etc. From this figure, we can see that the smart meter, responsible for monitoring and recording power usage of home appliances, etc., is the key equipment for consumers. Home appliances and other integrated devices/systems such as water and gas meters, in-home display, plug-in electric vehicle/plug-in hybrid electric vehicle, smart thermostat, rooftop photovoltaic system, etc., constitute a home area network (HAN), which is connected to the smart meter.

DATA STREAM MINING METHODS:

We will focus on the security of advanced metering infrastructure (AMI), which is one of the most crucial components of SG. AMI serves as a bridge for providing bidirectional information flow between user domain and utility domain. AMI’s main functionalities encompass power measurement facilities, assisting adaptive power pricing and demand side management, providing self-healing ability, and interfaces for other systems.

AMI is usually composed of three major types of components, namely, smart meter, data concentrator, and central system (a.k.a. AMI headend) and bidirectional communication networks among those components. Being a complex system in itself, AMI is exposed to various security threats such as privacy breach, energy theft, illegal monetary gain, and other malicious activities. As AMI is directly related to revenue earning, customer power consumption, and privacy, of utmost importance is securing its infrastructure.

Using data stream mining to protect AMI from malicious attacks, we look into the intrusion detection system (IDS) aspect of the security solution. We can define IDS as a monitoring system for detecting any unwanted entity intruding into a targeted system (such as AMI in our context). We treat IDS as a second-line security measure after the first line of primary AMI security techniques such as encryption, authorization, and authentication; these first-line solutions alone are not sufficient for securing AMI.

INTRUSION DETECTION SYSTEM (IDS):

We can develop embedded software for IDS, such as the one proposed in [20], and update the firmware of the smart meter to include this embedded IDS. Although this can be done with relative ease, the main problem is the limitation of computing resources in the current smart meters. They are mostly equipped with low-end processors and limited amounts of main memory (in the kilobytes to a few megabytes range). Although this may change in the near future, since a good number of smart meters have already been deployed in many developed countries, it is not very easy to replace them or upgrade the existing ones with more powerful resources.

Since a smart meter is supposed to consume most of its processor and main memory resources for its core businesses (such as recording electricity usage, interaction with other smart home appliances, and two-way communication with its associated data concentrator and, ultimately, the headend), only a small fraction of its already limited resources is available for IDS data processing. We try to solve this problem of resource scarceness by proposing to use a separate IDS entity, either installed outside the smart meter (for existing ones) or integrated within the smart meter (for new ones). We name such an entity a “security box.” A possible design of a smart meter with this security box is provided in Fig. 2(a), based on the one presented in [22]; there, the security box is shown as a simple meter IDS.

However, it is open for this component to cover other security functions such as firewall, encryption, authorization, authentication, etc. The IDS architecture for the other two types of AMI components, namely, data concentrator and AMI headend, is more or less similar to that of the smart meter IDS. Again, the security boxes for those components can be either inside (in the form of software or an add-on hardware card) or outside (in the form of a dedicated box or server). In order to simultaneously monitor a large number of data flows received from a large number of smart meters and detect security threats, such security box hardware (or the host equipment, in the case of software) must be rich in computing resources.

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out, to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are 

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:     

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

 

5.1.2 TECHNICAL FEASIBILITY   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

5.1.3 SOCIAL FEASIBILITY:  

This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.

This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best program is worthless if it does not produce the correct outputs.

5.2.1 UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and process test data correctly and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

5.2.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for various peers in a distributed network framework as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.


5.2.3 NON-FUNCTIONAL TESTING:

Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.2.4 LOAD TESTING:

An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under test with real usage by having actual users connected to it. They will generate test input data for the system test.

Description: It is necessary to ascertain that the application behaves correctly under loads when a ‘Server busy’ response is received.
Expected result: Should designate another active node as a Server.


5.2.5 PERFORMANCE TESTING:

Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values, and produce accurate results in the expected time.


5.1.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and this is what is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms a part of software quality control.

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.


5.1.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

Description: Checking that the user identification is authenticated.
Expected result: In case of failure, it should not be connected in the framework.

Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.


5.1.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.


5.1.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the software, when stimulated, should produce the desired results.

Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.

Description: To check for interface errors.
Expected result: The entire interface must function normally.

Description: To check for errors in data structures or external database access.
Expected result: The database update and retrieval must be done correctly.

Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.

All of the above system testing strategies are carried out during development, as the documentation and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE DESCRIPTION:

 

6.1 JAVA TECHNOLOGY:

Java technology is both a programming language and a platform.

 

The Java Programming Language

 

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

  • Simple
  • Architecture neutral
  • Object oriented
  • Portable
  • Distributed
  • High performance
  • Interpreted
  • Multithreaded
  • Robust
  • Dynamic
  • Secure

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes —the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.
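
For example, a minimal program (the file and class name HelloWorld below are illustrative) is compiled once into byte codes with javac and then run by the interpreter with java:

// HelloWorld.java
// Compile once:   javac HelloWorld.java   (produces HelloWorld.class byte codes)
// Interpret/run:  java HelloWorld         (the Java VM executes the byte codes)
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world");
    }
}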

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.

6.2 THE JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

  • The Java Virtual Machine (Java VM)
  • The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that, after compilation, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

6.3 WHAT CAN JAVA TECHNOLOGY DO?

The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.

A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.
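
As a rough sketch of the idea (the class name and response text are illustrative; the calls are from the standard javax.servlet API), a minimal servlet overrides doGet() and writes a response back to the client:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal servlet sketch: the web server calls doGet() for each matching request.
public class HelloServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>Hello from a servlet</body></html>");
    }
}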

How does the API support all these kinds of programs? It does so with packages of software components that provides a wide range of functionality. Every full implementation of the Java platform gives you the following features:

  • The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
  • Applets: The set of conventions used by applets.
  • Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
  • Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
  • Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
  • Software components: Known as JavaBeans™, these can plug into existing component architectures.
  • Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
  • Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational databases.

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.

 

6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?

We can’t promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and require less effort than other languages. We believe that Java technology will help you do the following:

  • Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
  • Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
  • Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
  • Develop programs more quickly: Your development may be as much as twice as fast as writing the same program in C++. Why? You write fewer lines of code, and Java is a simpler programming language than C++.
  • Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure JavaTM Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
  • Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
  • Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.

 

6.5 ODBC:

 

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program, and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

6.6 JDBC:

In an effort to set an independent database standard API for Java; Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.

 

6.7 JDBC Goals:

Few software packages are designed without goals in mind. JDBC is one such package: its many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows for more error checking to be done at compile time; also, fewer errors appear at runtime.

Keep the common cases simple

Because more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
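
A minimal sketch of such a common case is shown below; the ODBC data source, table and column names are assumptions made for illustration, using the JDBC-ODBC bridge driver that ships with older JDKs:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SimpleQuery {
    public static void main(String[] args) throws Exception {
        // Assumed ODBC data source name "CacheDSN"; table and column names are illustrative.
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");          // JDBC-ODBC bridge driver
        Connection con = DriverManager.getConnection("jdbc:odbc:CacheDSN");
        PreparedStatement ps = con.prepareStatement(
                "SELECT name FROM cache_table WHERE id = ?");   // a simple SELECT
        ps.setInt(1, 1);
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString("name"));
        }
        rs.close();
        ps.close();
        con.close();
    }
}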

Finally, we decided to proceed with the implementation using Java Networking.

For dynamically updating the cache table, we use an MS Access database.


6.8 NETWORKING TCP/IP STACK:

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams:

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP:

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to give a client/server model – see later.

TCP:

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32 bit integer which gives the IP address.

Network address:

Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class B uses 16-bit network addressing. Class C uses 24-bit network addressing, and class D addresses are reserved for multicast.

Subnet address:

Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.

Host address:

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total address:

The 32 bit address is usually written as 4 integers separated by dots.
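
As a small worked example (the address value is arbitrary), the following Java fragment prints a 32 bit address in dotted notation:

// Writes a 32 bit IP address as four integers separated by dots, e.g. 192.168.1.10
public class DottedQuad {
    public static void main(String[] args) {
        long address = 0xC0A8010AL;   // arbitrary sample value (192.168.1.10)
        System.out.println(((address >> 24) & 0xFF) + "." +
                           ((address >> 16) & 0xFF) + "." +
                           ((address >> 8)  & 0xFF) + "." +
                           (address & 0xFF));
    }
}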

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a message to a server, you send it to the port for that service of the host that it is running on. This is not location transparency! Certain of these ports are “well known”.

Sockets:

A socket is a data structure maintained by the system to handle network connections. A socket is created using the socket call. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);

Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
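
Since our implementation uses Java Networking rather than the C interface shown above, the same idea in Java is a java.net.Socket connecting to a server; the host name and port number below are assumptions for illustration:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Minimal TCP client sketch: connects to an assumed host/port, sends one line, reads one line.
public class SimpleClient {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 5000);   // assumed server address and port
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        out.println("hello");
        System.out.println("Server replied: " + in.readLine());
        socket.close();
    }
}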

6.9 JFREE CHART:

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:

  • A consistent and well-documented API, supporting a wide range of chart types;
  • A flexible design that is easy to extend, and targets both server-side and client-side applications;
  • Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG);

JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.
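
A minimal sketch is shown below (the chart title, sample values and output file name are illustrative; the calls are from the JFreeChart 1.0.x API):

import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

// Builds a simple pie chart and saves it as a PNG image file.
public class PieChartDemo {
    public static void main(String[] args) throws Exception {
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Peers online", 60);     // sample values for illustration
        dataset.setValue("Peers offline", 40);
        JFreeChart chart = ChartFactory.createPieChart(
                "Peer Status", dataset, true, true, false);
        ChartUtilities.saveChartAsPNG(new File("peer_status.png"), chart, 500, 300);
    }
}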

 

6.9.1. Map Visualizations:

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: sourcing freely redistributable vector outlines for the countries of the world and the states/provinces of particular countries (the USA in particular, but also other areas);

creating an appropriate dataset interface (plus default implementation) and a renderer, and integrating this with the existing XYPlot class in JFreeChart; testing, documenting, testing some more, documenting some more.

6.9.2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.

6.9.3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.

 

6.9.4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

CHAPTER 7

7.0 APPENDIX

7.1 SAMPLE SCREEN SHOTS:

7.2 SAMPLE SOURCE CODE:

CHAPTER 8

8.1 CONCLUSION

In this paper, we have proposed an architecture for a comprehensive IDS in AMI, which is designed to be reliable and dynamic, and to consider the real-time nature of traffic for each component in AMI. Then, we conduct a performance analysis experiment of seven existing state-of-the-art data stream mining algorithms on a public IDS data set. Finally, we elucidate the strengths and weaknesses of those algorithms and assess the suitability of each of them to serve as the IDS for the three different components of AMI. We have observed that some algorithms that use a very minimal amount of computing resources and offer a moderate level of accuracy can potentially be used for the smart meter IDS. On the other hand, the algorithms that require more computing resources and offer higher accuracy levels can be useful for the IDSs in data concentrators and AMI headends.

8.2 FUTURE ENHANCEMENT:

As future work, we plan to develop our own lightweight yet accurate data stream mining algorithms to be used for the smart meter IDS and to set up a small-scale hardware platform for AMI to test our algorithms. In conclusion, we hope our research can make contributions toward more secure AMI deployments through the use of stream data mining-based IDSs.

Data-Driven Composition for Service-Oriented Situational Web Applications

This paper presents a systematic data-driven approach to assisting situational application development. We first propose a technique to extract useful information from multiple sources to abstract service capabilities with a set of tags. This supports intuitive expression of the user’s desired composition goals through simple queries, without having to know the underlying technical details. A planning technique then explores composition solutions which can constitute the desired goals, possibly together with some potentially interesting new composition opportunities. A browser-based tool facilitates visual and iterative refinement of composition solutions, to finally come up with satisfying outputs. A series of experiments demonstrate the efficiency and effectiveness of our approach. We present the data-driven composition technique for situational web applications by using tag-based semantics, illustrate the overall life-cycle of our “compose-as-you-search” composition approach, propose the clustering technique for deriving tag-based composition semantics, and evaluate the composition planning effectiveness, respectively.

Compared with previous work, this paper is significantly updated by introducing a semi-supervised technique for clustering hierarchical tag-based semantics from service documentation and human-generated annotations. The derived semantics link service capabilities and developers’ processing goals, so that composition is processed by planning the “Tag HyperLinks” from the initial query to the goals. The planning algorithm is also further evaluated in terms of recommendation quality, performance, and scalability over data sets from real-world service repositories. Results show that our approach achieves satisfying precision and high-quality composition recommendations. We also demonstrate that our approach can accommodate an even larger number of services than real-world repositories, so as to promise performance. Besides, more details of our interactive development prototype are presented. We particularly demonstrate how the composition UI can help developers intuitively compose situational applications, and iteratively refine their goals until requirements are finally satisfied.

1.2 INTRODUCTION:

We develop and deliver software systems more and more quickly, and these systems must provide increasingly ambitious functionality to adapt to ever-changing requirements and environments. Particularly, in recent years, the emergence and wide adoption of Web 2.0 have enlarged the body of service computing research. Web 2.0 not only focuses on resource sharing and utilization from the user and social perspective, but also exhibits the notion of the “Web as a Platform” paradigm. A very important trend is that more and more service consumers (including programmers, business analysts or even end-users) are capable of participating and collaborating for their own requirements and interests by means of developing situational software applications (also noted as “situated software”).

From a software engineering perspective, situational software applications usually follow the opportunistic development fashion, where small subsets of users create applications to fulfill a specific purpose. Currently, composing available web-delivered services (including SOAP-based web services, REST (REpresentational State Transfer) web services and RSS/Atom feeds) into a single web application, also called a “service mashup” (or “mashup” for short), has become popular. Mashups are supposed to provide a flexible response to new needs or demands and a quick roll-out of some potentially unanticipated functionality. To support situational application development, a number of tools from both academia and industry have emerged.

However, we argue that the large number of available services and the complexity of composition constraints make manual composition difficult. For situational application developers, who might be non-professional programmers, the key remaining challenge is that they intend to represent their desired goals simply and intuitively, and to be quickly navigated to the proper services that can respond to their requests. They usually do not care about (or understand) the underlying technical details (e.g., syntactics, semantics, message mediation, etc.). They just want to figure out all the intermediate steps needed to generate the desired outputs.

Moreover, many end-users may have a general wish to know what they are trying to achieve, but not know the specifics of what they want or what is possible. This means that the process of designing and developing a situational application requires not only the abstraction of individual services, but also a much broader perspective on the evolving collections of services that can potentially incorporate with the current ones. We first presented a data-driven composition technique for situational web applications by using tag-based semantics in our ICWS 2011 work.

The main contributions in this paper are to illustrate the overall life-cycle of our “compose-as-you-search” composition approach, to propose the clustering technique for deriving tag-based composition semantics, and to evaluate the composition planning effectiveness, respectively. Compared with previous work, this paper is significantly updated by introducing a semi-supervised technique for clustering hierarchical tag-based semantics from service documentation and human-generated annotations. The derived semantics link service capabilities and developers’ processing goals, so that composition is processed by planning the “Tag HyperLinks” from the initial query to the goals.

The planning algorithm is also further evaluated in terms of recommendation quality, performance, and scalability over data sets from real-world service repositories. Results show that our approach achieves satisfying precision and high-quality composition recommendations. We also demonstrate that our approach can accommodate an even larger number of services than real-world repositories, so as to promise performance. Besides, more details of our interactive development prototype are presented. We particularly demonstrate how the composition UI can help developers intuitively compose situational applications, and iteratively refine their goals until requirements are finally satisfied.

1.3 SCOPE OF THE PROJECT

User-oriented abstraction: The tourist uses tags to represent their desired goals and find relevant services. Tags provide a uniform abstraction of user requirements and service capabilities, and lower the entry barrier to perform development. 

Data-driven development: In the whole development process, the tourist selects or inputs some tags, while some relevant services are recommended. This reflects a “Compose-as-you-Search” development process. Recommended services either process these tags as inputs, or produce these tags as outputs. As shown in Fig. 1, each service has some inputs and outputs, which are associated with tagged data. In this way, services can be connected to build data flows. Developers can search their goals by means of tags, and compose recommended services in a data driven fashion. 

Potential composition navigation: The developer is always assisted with possible composition suggestions, based on the tags in the current goals. The composition engine interprets the user queries and automatically generates appropriate composition alternatives by a planning algorithm (Section 4). The recommendations not only contain the desired outputs from the developers’ goals, but also offer interesting or relevant suggestions leading to potential new composition possibilities.

For example, the tag “Italian” introduced the Google Translation service, a composition possibility the tourist was not aware of. In this way, the composition process is not like traditional semantic web services techniques, which might need specific goals, but leads to emergent opportunities according to the current application situation.

1.4 LITERATURE SURVEY:

COMPOSING DATA-DRIVEN SERVICE MASHUPS WITH TAG-BASED SEMANTIC ANNOTATIONS

AUTHOR: X. Liu, Q. Zhao, G. Huang, H. Mei, and T. Teng

PUBLISH: Proc. IEEE Int’l Conf. Web Services (ICWS ’11), pp. 243-250, 2011.

EXPLANATION:

Spurred by the Web 2.0 paradigm, large numbers of service mashups have emerged by composing readily accessible data and services. Mashups usually address situational problems and require a quick and iterative development lifecycle. In this paper, we propose an approach to composing data-driven mashups, based on tag-based semantics. The core principle is deriving semantic annotations from popular tags, and associating them with programmatic input and output data. Tag-based semantics promise a quick and simple comprehension of data capabilities. Mashup developers, including end-users, can intuitively search desired services with tags, and combine several services by means of data flows. Our approach takes a planning technique to retrieve the potentially relevant composition opportunities. With our graphical composition user interfaces, developers can iteratively modify, adjust and refine their mashups to be more satisfying.

TOWARDS AUTOMATIC TAGGING FOR WEB SERVICES

AUTHOR: L. Fang, L. Wang, M. Li, J. Zhao, Y. Zou, and L. Shao

PUBLISH: Proc. IEEE 19th Int’l Conf. Web Services, pp. 528-535, 2012.

EXPLANATION:

Tagging techniques are widely used to annotate objects in Web 2.0 applications. Tags can support web service understanding, categorizing and discovering, which are important tasks in a service-oriented software system. However, most existing web services’ tags are annotated manually. Manual tagging is time-consuming. In this paper, we propose a novel approach to tag web services automatically. Our approach consists of two tagging strategies, tag enriching and tag extraction. In the first strategy, we cluster web services using WSDL documents, and then we enrich the tags for a service with the tags of other services in the same cluster. Considering that our approach may not generate enough tags by tag enriching, we also extract tags from WSDL documents and related descriptions in the second step. To validate the effectiveness of our approach, a series of experiments are carried out based on web-scale web services. The experimental results show that our tagging method is effective, ensuring the number and quality of the generated tags. We also show how to use the tagging results to improve the performance of a web service search engine, which proves that our work in this paper is useful and meaningful.

A TAG-BASED APPROACH FOR THE DESIGN AND COMPOSITION OF INFORMATION PROCESSING APPLICATIONS

AUTHOR: E. Bouillet, M. Feblowitz, Z. Liu, A. Ranganathan, and A. Riabov

PUBLISH: ACM SIGPLAN Notices, vol. 43, no. 10, pp. 585-602, Sept. 2008.

EXPLANATION:

In the realm of component-based software systems, pursuers of the holy grail of automated application composition face many significant challenges. In this paper we argue that, while the general problem of automated composition in response to high-level goal statements is indeed very difficult to solve, we can realize composition in a restricted context, supporting varying degrees of manual to automated assembly for specific types of applications. We propose a novel paradigm for composition in flow-based information processing systems, where application design and component development are facilitated by the pervasive use of faceted, tag-based descriptions of processing goals, of component capabilities, and of structural patterns of families of applications. The facets and tags represent different dimensions of both data and processing, where each facet is modeled as a finite set of tags that are defined in a controlled folksonomy. All data flowing through the system, as well as the functional capabilities of components, are described using tags. A customized AI planner is used to automatically build an application, in the form of a flow of components, given a high-level goal specification in the form of a set of tags. End-users use an automatically populated faceted search and navigation mechanism to construct these high-level goals. We also propose a novel software engineering methodology to design and develop a set of reusable, well-described components that can be assembled into a variety of applications. With examples from a case study in the Financial Services domain, we demonstrate that composition using a faceted, tag-based application design is not only possible, but also extremely useful in helping end-users create situational applications from a wide variety of available components.

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

In our previous work, we have designed a technique to extract tags by mining service specifications (including WSDL, Web API documents and web pages that contain references to web services) and collecting human-generated contents (including comments and queries). Several web services tagging approaches have been proposed, for example the FCA tagging system; however, most of them annotate web services manually. Manual tagging is time-consuming work. Moreover, several existing systems can recommend tags for web services based on existing handmade tags, but these systems consider nothing about the similarities between tags and web services. Another problem in these systems is that if there is no handmade tag, they cannot work at all. Another system can generate tags for web services automatically, but it does not use the existing handmade tags of web services.

These different ways are combined in the tagging tools that the tag-based platform facilitates. Moreover, inside the platform and due to the preferences of the users, different tagging behaviours exist that actually obstruct automated interoperability among tag sets, despite the fact that the systems offer solutions to aid the understanding of the folksonomy that the users collectively build (tag clouds, tools based on related-tag ideas, collective intelligence methods, data mining, etc.). Although tagging shows potential benefits, personal organization of information leads to implicit logical conditions that often differ from the global one. Tagging provides a sort of weak organisation of the information, very useful, but mediated by the user’s behaviour. Therefore, it is also possible that a user’s tags associated with an object do not agree with other users’ tags.

2.1.1 DISADVANTAGES:

  • There exist several limitations to collaborative tagging in sites such as Delicious. The first one is that a tag can be used to refer to different concepts; that is, there is a context-dependent feature of the tag associated with the user.
  • This dependence limits both the effectiveness and adequacy of collaborative tagging. The limitation is called “Context Dependent Knowledge Heterogeneity” (CDKH). A second is the Classical Ambiguity (CA) of terms, inherited from natural language and/or the consideration of different “basic levels” among users.
  • CA would not be critical when users work with URLs (the content of a URL induces, in fact, a disambiguation of terms because of its specific topic). In this case, the contextualization of tags in a graph structure (by means of clustering analysis) distinguishes the different terms associated with the same tag. CDKH is associated with concept structures that users do not represent in the system, but that FCA can extract.


2.2 PROPOSED SYSTEM:

We propose a heuristic graph-based planning algorithm with polynomial-time complexity. When the developer selects a tag from the tag cloud or inputs a keyword as the initial query request qi, the planning algorithm first computes the cost of achieving each tag starting from qi by conducting a forward search. Such a depth-first search step constructs all possible Tag Links that can accomplish the final goal. Based on the results above, the planning algorithm then approximates the sequence of Tag Links that connects qi to the final goal by a regression search step. When the tourist takes only the geographical locations of hotels, restaurants, bars and museums, we cannot give a reasonable order for visiting these places. Preferences, quality, ordering and other constraints would be helpful to improve the plan quality and performance. Due to the popularity and simplicity of tags, our tag-based service model can be extended, where all these constraints can also be presented as tags.
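
A highly simplified sketch of the forward-search step is given below. The graph representation, class names and the breadth-first traversal are our own illustrative assumptions, not the exact algorithm of the system; the sketch only records, for every tag reachable from the initial query, the minimum number of Tag Links needed to achieve it.

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Illustrative forward-search step over an assumed Tag Link graph.
public class ForwardSearchSketch {
    // edges.get(t) = tags directly producible from t through some service (assumed input)
    static Map<String, Integer> achievableCosts(String queryTag,
                                                Map<String, List<String>> edges) {
        Map<String, Integer> cost = new HashMap<String, Integer>();
        Queue<String> queue = new ArrayDeque<String>();
        cost.put(queryTag, 0);
        queue.add(queryTag);
        while (!queue.isEmpty()) {
            String tag = queue.remove();
            List<String> next = edges.get(tag);
            if (next == null) continue;
            for (String n : next) {
                if (!cost.containsKey(n)) {        // first time reached: minimal link count
                    cost.put(n, cost.get(tag) + 1);
                    queue.add(n);
                }
            }
        }
        return cost;
    }
}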

Our approach relies on the popularity of tags on the web. The primitive of tag-based composition of flow applications was first proposed in the MARIO system. Tag-based search is a hot topic in the research body of information retrieval and data mining. Most existing research works focus on processing tags from popular social networking sites like Del.icio.us, Twitter and Flickr. To the best of our knowledge, few works have been made in the area of existing service-based applications. Some recent works try to leverage tag-based service discovery, but do not fully consider the hierarchy relationships of tags.

Our approach provides a systematic way for extracting useful tags from service documents and user-generated annotations, by fully considering the unique features of web services such as interface naming rules and developer preferences. Besides traditional similarity-based measurement, the clustering process is also controlled by the probability of tag occurrence and its own properties, without any need for training data. It should be noted that we currently make a simple mapping from our top-level tags to WordNet. However, the search results seem to be satisfying in regular cases.

2.2.1 ADVANTAGES:

  • Tag extraction and clustering: Tags are extracted from multiple sources, including service textual documentation, user-generated comments and queries, etc. Browsing such a large set of tags is really tedious, and tag ambiguity might cause mistakes. Therefore, a semi-supervised technique is proposed to cluster a tag-based taxonomy as the unified semantic foundation.
  • Composition semantics derivation: Service providers and application architects are responsible for annotating tag-based semantics to describe service capabilities, including functionalities, input and output data, and other useful information. Based on the generated tag hierarchy, some rules can help them accomplish the semantic annotation semi-automatically.
  • Composition goal search: In our browser-based development environment, developers can search their desired goals using tag queries. As tags are easier and more intuitive to understand, developers only focus on their desired goals without having to know the underlying technical information of services. The queries are immediately submitted to the composition engine.
  • Composition planning: A composition engine interprets tag queries, and generates appropriate solutions that can contain or accomplish the goal. Our composition engine employs a graph-based planning technique to generate possible composition recommendations. As discussed above, this process retrieves prefabricated composition logics from task templates, or generates potentially new alternatives. Recommendations might be either individual services, or a set of services connected by data flows.

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor                            –    Pentium IV
  • Speed                                –    1.1 GHz
  • RAM                                  –    256 MB (min)
  • Hard Disk                            –    20 GB
  • Floppy Drive                         –    1.44 MB
  • Key Board                            –    Standard Windows Keyboard
  • Mouse                                –    Two or Three Button Mouse
  • Monitor                              –    SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Win7
  • Front End                                :           Microsoft Visual Studio 2008
  • Back End                                :           MSSQL Server
  • Server                                      :           ASP Server Page
  • Script                                       :           C# Script
  • Document                               :           MS-Office 2007


CHAPTER 3

3.0 SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

  • The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
  • DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • DFD is also known as bubble chart. A DFD may be used to represent a system at any level of abstraction. DFD may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

DATA SOURCE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures or devices that produce data. The physical component is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.


3.1 ARCHITECTURE DIAGRAM

3.2 DATAFLOW DIAGRAM

ADMIN:


USER:


UML DIAGRAMS:

3.3 USE CASE DIAGRAM:

ADMIN:


USER:


3.4 CLASS DIAGRAM:

ADMIN:


USER:


3.5 SEQUENCE DIAGRAM:

ADMIN:


USER:


3.6 ACTIVITY DIAGRAM:

ADMIN:


USER:

CHAPTER 4

4.0 IMPLEMENTATION:

MARIO SYSTEM:

The MARIO system is the most prior work to leverage tag-based descriptions as component annotations, whereby users can find desired goals by regular search. Based on the SPPL planner, MARIO facilitates the combination of components to create applications that satisfy end-user goals. Our approach shares common insights and learns from the successful experiences of MARIO. However, MARIO holds two assumptions: (1) the tag-based semantics have to be predefined, and (2) the tag-based descriptions of all components might be (manually) pre-solved. These assumptions are quite reasonable for a relatively small component repository or for a specific application domain with a controlled vocabulary.

However, problems still remain in real-world scenarios: most of the currently available web services and mashups do not have enough meaningful tags. In well-known repositories like ProgrammableWeb, Seekda, and Service-Finder, the existing tags are too limited and trivial to determine composition. For example, these tags can mainly help service categorization (like travel, education, games), but do not provide sufficient information to reveal the relationships among services. In a sense, collecting enough tags and deriving semantics between them is an indispensable step toward automated composition. Moreover, the quality of the derived semantics should also be evaluated.

We have designed a technique to extract tags by mining service specifications (including WSDL, Web API documents and web pages that contain references to web services) and collecting human-generated contents (including comments and queries). Our work provides similarity-based measurements including a structure metric, a lexical metric and a frequency metric. We have obtained a repository with a size of 50,000 tags, which were extracted from over 20,000 real-world web services and 6,000 mashups. Initial experiments also showed that the tag-based search could improve the search performance and quality for a single web service. Based on the collected tags, this paper particularly addresses the following three issues: (1) how to abstract tags for simple and precise service discovery; (2) how to identify the potential composition of a set of services by their tag-based descriptions; (3) how to operate the composition efficiently, even with a large set of tags.

4.1 ALGORITHM:

TAG CLUSTERING WITH ANNEALING ALGORITHM:

We apply a semi-supervised model to derive a hierarchical structure from the tags T annotating the services. It begins with the root node containing all tags in T and recursively splits them into a series of semantically meaningful clusters. The process does not terminate until each cluster represents a specific concept. At the final step of this algorithm, a cluster usually corresponds to a high-level category of a set of tags. For example, the tag set containing {country, street, city, milan, Italy, zipcode} represents the concept “geography”, and the one containing {rain, sunny, windchill, 27C, 80F} is associated with the concept “weather”.

Our approach tries to generate a “Feature Tag” ô to summarize the semantics of the other tags in the cluster (like “weather” in the example above), in order to navigate to high-level compositional semantics close to the desired goals. We briefly illustrate the splitting process as follows. At the beginning, we maintain a queue Q to store the information of all the nodes that are waiting for splitting, and a vector n in the queue indicates the probability that each tag emerges in this node. Initially, all elements of n0 are assigned to 1, for all tags that are contained in the root node. In the clustering process, an annealing algorithm is employed to split the tags into several semantically meaningful clusters. Such an optimizing algorithm can be stated as a process that minimizes a predefined criterion.
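
A much-simplified sketch of such an annealing split is given below; the two-cluster restriction, the precomputed similarity matrix and the cooling schedule are our own illustrative assumptions rather than the paper's exact criterion.

import java.util.Random;

// Illustrative annealing split: partition n tags into two clusters so that the
// total within-cluster similarity is high. sim[i][j] is an assumed, precomputed
// tag-similarity matrix.
public class AnnealingSplitSketch {
    static int[] split(double[][] sim, long seed) {
        int n = sim.length;
        Random rnd = new Random(seed);
        int[] cluster = new int[n];
        for (int i = 0; i < n; i++) cluster[i] = rnd.nextInt(2);   // random initial partition
        double score = withinClusterSimilarity(sim, cluster);
        for (double t = 1.0; t > 0.001; t *= 0.95) {               // cooling schedule
            for (int step = 0; step < n; step++) {
                int i = rnd.nextInt(n);
                cluster[i] = 1 - cluster[i];                       // propose moving one tag
                double candidate = withinClusterSimilarity(sim, cluster);
                double delta = candidate - score;
                if (delta >= 0 || rnd.nextDouble() < Math.exp(delta / t)) {
                    score = candidate;                             // accept the move
                } else {
                    cluster[i] = 1 - cluster[i];                   // reject: undo the move
                }
            }
        }
        return cluster;
    }

    static double withinClusterSimilarity(double[][] sim, int[] cluster) {
        double total = 0.0;
        for (int i = 0; i < sim.length; i++)
            for (int j = i + 1; j < sim.length; j++)
                if (cluster[i] == cluster[j]) total += sim[i][j];
        return total;
    }
}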

4.2 MODULES:

DATA SET PREPARATION:

TAG-BASED SERVICE MODEL:

SEMANTICS DERIVATION:

TAG IDENTIFICATION AND EXTRACTION:

DATA-DRIVEN COMPOSITION:

4.3 MODULE DESCRIPTION:

DATA SET PREPARATION:

Tag semantics play a crucial role in our composition approach. We build up our service community in the Trustie Project, which is a testbed environment for software service production. The platform crawls web services and mashups from some well-known repositories like ProgrammableWeb and Seekda. As our approach takes input/output as the composition unit, each API and each operation in WSDL is stored as one item. The data set in this experiment includes 19,083 service items. These items were put into categories with statistics, like travel (728), news (484), weather (1,491), maps (792), geography (1,822), food (273), photo (489), messaging (816), blogging (332), and so on. Some sample services with their tags can be found via our website. We first applied the splitting technique to extract tags from textual descriptions. Then we manually filtered the redundant tags. For example, the three tags “Map”, “Maps” and “Mapping” are considered as one tag. Finally we chose a data set of 23,971 different tags. Applying the algorithm for tag clustering and the EM processing, we attained 594 clusters such as hotel, geography, weather, search, map, etc.

TAG-BASED SERVICE MODEL:

In our data-driven, goal-oriented composition technique, the key primitive is the tag-based data flow between services. There are two kinds of constraints in terms of service composition: syntactic and semantic. In our approach, semantic constraints can be inferred from the hierarchical tag semantics; syntactic constraints depend on our underlying composition middleware, which takes responsibility for dealing with the actual data types required and produced by the services. Based on the tag-based semantics, if a web service ws1 can produce t1 as its output, and the service ws2 can consume t1 or its father tag t2 as its input, we consider that ws1 and ws2 can be composed, since a data flow can be created between them. From this perspective, the tag-based service composition problem is defined as the result of creating a data flow over a sequence of tags. Just like hyperlinks form the navigation among web pages, we call the tag-based data flow a Tag Link (TL) in the following. The first precondition indicates the mapping and propagation between web services at the semantic level, which relies on the derived tag-based hierarchy. The second precondition ensures that no extra data is left at the syntactic level. Note that all selected services in the Tag Links will be encapsulated according to our iMashup component model, and the composition runtime takes charge of interpreting and coordinating underlying technical details such as data object types and structures.
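
A minimal sketch of this composability check is given below; the data structures and the parent-tag lookup are illustrative assumptions, not the actual Tag-Link implementation.

import java.util.Map;
import java.util.Set;

// Illustrative Tag Link check: ws1 and ws2 can be linked if some output tag of ws1,
// or that tag's father in the derived hierarchy, appears among the input tags of ws2.
public class TagLinkSketch {
    // parent.get(t) = the father tag of t in the tag hierarchy (null for a root tag)
    static boolean canCompose(Set<String> ws1Outputs, Set<String> ws2Inputs,
                              Map<String, String> parent) {
        for (String out : ws1Outputs) {
            if (ws2Inputs.contains(out)) return true;
            String father = parent.get(out);
            if (father != null && ws2Inputs.contains(father)) return true;
        }
        return false;
    }
}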

SEMANTICS DERIVATION:

Generally, efficient service composition relies on the precision of the candidate services that are discovered. The precision of the candidate service search also reflects the quality of the derived tag-based semantics. Hereby, we tried to evaluate the precision of our tag-based search. We compared the results with the traditional Term Frequency-Inverse Document Frequency (TF/IDF) retrieval technique for searching a single web service. We computed the similarities between the input query and web services using formula (14), referencing the classic formula defined by Manning.
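
For reference, a simplified version of such a TF/IDF baseline is sketched below; the weighting and the method names are our own simplifications of the classic cosine-similarity formulation, not the exact formula (14).

import java.util.Map;

// Illustrative TF/IDF baseline: score a query against a document (e.g., a service
// description) by cosine similarity of tf-idf weights.
public class TfIdfSketch {
    // idf.get(term) = log(N / documentFrequency(term)), assumed precomputed over the corpus
    static double cosine(Map<String, Double> queryTf, Map<String, Double> docTf,
                         Map<String, Double> idf) {
        double dot = 0, qNorm = 0, dNorm = 0;
        for (Map.Entry<String, Double> e : queryTf.entrySet()) {
            double w = e.getValue() * idf.getOrDefault(e.getKey(), 0.0);
            qNorm += w * w;
            Double dtf = docTf.get(e.getKey());
            if (dtf != null) dot += w * dtf * idf.getOrDefault(e.getKey(), 0.0);
        }
        for (Map.Entry<String, Double> e : docTf.entrySet()) {
            double w = e.getValue() * idf.getOrDefault(e.getKey(), 0.0);
            dNorm += w * w;
        }
        return (qNorm == 0 || dNorm == 0) ? 0 : dot / (Math.sqrt(qNorm) * Math.sqrt(dNorm));
    }
}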

Our planning approach aims at the discovery of potential composition opportunities. According to common experience, the number of candidate services often implies the number of concepts. As about 90 percent of the mashups on ProgrammableWeb contain less than five services, to make a comparison baseline we chose 20 sample applications, each of which contains at least eight services. For each sample application, we employed the same 20 junior students to manually extract the tags of the services, or to add new annotations based on the tag-based taxonomy. For each output and the user inputs, we ran the planning composition algorithm that might retrieve the same output given the user inputs and form an application incrementally. We compared the planned solutions with the original applications.

TAG IDENTIFICATION AND EXTRACTION:

Tags are actually a set of keywords describing some aspects of a service. We extract tags from two main sources: (1) service textual descriptions; (2) user-generated annotations. We briefly illustrate how to process them for extracting tags. For textual descriptions like WSDL documents, tags can be extracted from elements including SERVICES, INTERFACE, MESSAGE, TYPE and DOCUMENTATION. Usually, useful tags reside in: (1) the service name, containing the general information; (2) the service interfaces, describing service usage (including operations and input/output messages). From our investigation, we observe that over 90 percent of WSDL documents use capital letters, numbers, or separator characters to separate tokens in service names [16]. So we use the following rules to split tokens into tags:

  • Capital letters, numbers, "%" and other separator characters are treated as the starting position of a new word.
  • The first position is also treated as the starting position of a new word.
  • Contiguous single capital letters and numbers should be merged into one token.

For example, according to our rules, we split the service name "AmazonSimpleDB" into {Amazon, Simple, DB}. Service interfaces are usually defined in the form of "verb" plus "noun", e.g., "postZipcodeRequest", "getHotelInfoResponse", "getCompanyInfoResponse". Verbs can reflect the type of messages: "post" and "request" are usually used for input messages, while "get" and "return" are usually used for output messages. In contrast, nouns may reflect richer usage information of the services. So we extract the nouns and ignore the verbs. For example, for the output message "getHotelInfoResponse", useful tags are {hotelname, address, zipcode, telephone, tax}.
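
A rough sketch of these splitting rules, assuming a regex-based tokenizer rather than the project's actual implementation:

// Illustrative sketch of the token-splitting rules described above.
import java.util.*;
import java.util.regex.*;

public class NameSplitter {
    // Start a new token at a capital letter, a digit run, or a separator such
    // as "%"; contiguous single capitals (e.g. "DB") stay together.
    private static final Pattern TOKEN =
        Pattern.compile("[A-Z]+(?![a-z])|[A-Z][a-z]+|[a-z]+|[0-9]+");

    public static List<String> split(String serviceName) {
        List<String> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(serviceName.replaceAll("[_%]", " "));
        while (m.find()) {
            tokens.add(m.group());
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(split("AmazonSimpleDB"));        // [Amazon, Simple, DB]
        System.out.println(split("getHotelInfoResponse"));  // [get, Hotel, Info, Response]
    }
}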

DATA-DRIVEN COMPOSITION:

Our development process can be generally described as the following steps in Fig. 2.

Tag extraction and clustering. Tags are extracted from multiple sources, including service textual documentation, user-generated comments and queries, etc. (step ❶). Browsing such a large set of tags is tedious, and tag ambiguity might cause mistakes. Therefore, a semi-supervised technique is proposed to cluster tags into a tag-based taxonomy as the unified semantic foundation (step ❷).

Composition semantics derivation. Service providers and application architects are responsible for annotating tag-based semantics to describe service capabilities, including functionalities, input and output data, and other useful information. Based on the generated tag hierarchy, some rules can help them accomplish the semantic annotation semi-automatically. This step (❸) aims to make services compatibly composable under the Tag-Link model (Section 5.1).

Composition goal search. In our browser-based development environment, developers can search for their desired goals using tag queries (step ❹). As tags are easier and more intuitive to understand, developers only focus on their desired goals without having to know the underlying technical information of services. The queries are immediately submitted to the composition engine.

Composition planning. The composition engine interprets tag queries and generates appropriate solutions that can contain or accomplish the goal (step ❺). Our composition engine employs a graph-based planning technique to generate possible composition recommendations. As discussed above, this process retrieves prefabricated composition logics from task templates, or generates potentially new alternatives. Recommendations might be either individual services, or a set of services connected by data flows. A simplified sketch of this planning step is given below.

Composition visualization, refinement and refactoring. Developers are able to directly run the generated compositions within the browser-based environment. At each composition step, the developers can revise intermediate composition results, and iteratively refactor or re-design them (steps ❻ and ❼). Such a feature lets developers know exactly what the current composition results are. In this way, they can visually and iteratively refine composition requirements until the final outputs are satisfactory. In other words, steps ❹ to ❼ are performed iteratively.
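
A much simplified sketch of the planning step (❺), assuming a plain forward-chaining search over the hypothetical ServiceItem type from the earlier sketch rather than the actual graph-based planner:

// Simplified sketch of composition planning: forward chaining over Tag-Links
// until the goal tag can be produced from the available input tags.
import java.util.*;

public class Planner {
    // Returns an ordered list of services forming a data flow from the
    // available input tags to the goal tag, or an empty list if none is found.
    static List<ServiceItem> plan(Set<String> inputs, String goal,
                                  List<ServiceItem> repository) {
        Set<String> available = new HashSet<>(inputs);
        List<ServiceItem> solution = new ArrayList<>();
        boolean progress = true;
        while (!available.contains(goal) && progress) {
            progress = false;
            for (ServiceItem ws : repository) {
                // add a service only if its inputs are satisfied and it
                // contributes at least one new output tag
                if (!solution.contains(ws)
                        && available.containsAll(ws.inputTags)
                        && !available.containsAll(ws.outputTags)) {
                    solution.add(ws);
                    available.addAll(ws.outputTags);
                    progress = true;
                }
            }
        }
        return available.contains(goal) ? solution : Collections.emptyList();
    }
}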

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are 

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:     

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

 

5.1.2 TECHNICAL FEASIBILITY   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

5.1.3 SOCIAL FEASIBILITY:  

This aspect of the study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is the process of checking whether the developed system is working according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.

This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best program is worthless if it produces incorrect outputs.

5.2.1 UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or an omitted keyword are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

5.2.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for various peers in a distributed network framework, as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.


5.2.3 NON-FUNCTIONAL TESTING:

Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.2.4 LOAD TESTING:

An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under test with real usage, for example by having actual telephone users connected to it. These users generate test input data for the system test.

Description: It is necessary to ascertain that the application behaves correctly under loads when a 'Server busy' response is received.
Expected result: Should designate another active node as a Server.


5.2.5 PERFORMANCE TESTING:

Performance tests are used to determine the performance of the software system in a broad sense, such as the execution time associated with various parts of the code, response time, and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values and produce accurate results in the expected time.


5.2.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is what this testing ensures. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. This activity forms part of the work of the software quality control team.

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.


5.2.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

Description: Checking that the user identification is authenticated.
Expected result: In case of failure, it should not be connected to the framework.

Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers in the same group should know the group key.


5.2.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.


5.2.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or the code. The contents of the box are hidden, and the software, when stimulated, should produce the desired results.

Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.

Description: To check for interface errors.
Expected result: The entire interface must function normally.

Description: To check for errors in data structures or external database access.
Expected result: Database update and retrieval must be performed correctly.

Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.

All the above system testing strategies are carried out, as the development, documentation, and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE DESCRIPTION:

 

6.1 JAVA TECHNOLOGY:

Java technology is both a programming language and a platform.

 

The Java Programming Language

 

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

  • Simple
  • Architecture neutral
  • Object oriented
  • Portable
  • Distributed
  • High performance
  • Interpreted
  • Multithreaded
  • Robust
  • Dynamic
  • Secure

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes —the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.
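
As a concrete illustration of the compile-once, run-anywhere cycle just described, a trivial program might be compiled and run as follows; the file name and commands are shown only for illustration.

// HelloPortable.java -- compiled once to byte codes, then run on any Java VM
public class HelloPortable {
    public static void main(String[] args) {
        System.out.println("Hello from " + System.getProperty("os.name"));
    }
}

    javac HelloPortable.java      (produces HelloPortable.class byte codes)
    java HelloPortable            (the Java VM interprets the byte codes)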

6.2 THE JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

  • The Java Virtual Machine (Java VM)
  • The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, "What Can Java Technology Do?", highlights what functionality some of the packages in the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that after you compile it, the compiled code runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

6.3 WHAT CAN JAVA TECHNOLOGY DO?

The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.

A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.
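
A minimal servlet, shown only as an illustration of the idea; it assumes the javax.servlet API supplied by a Java Web server such as Tomcat.

// Minimal servlet sketch: responds to an HTTP GET with a small HTML page.
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>Hello from the server side</body></html>");
    }
}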

How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:

  • The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
  • Applets: The set of conventions used by applets.
  • Networking: URLs, TCP (Transmission Control Protocol), UDP (User Data gram Protocol) sockets, and IP (Internet Protocol) addresses.
  • Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
  • Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
  • Software components: Known as JavaBeans™, these can plug into existing component architectures.
  • Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
  • Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational databases.

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.

 

6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?

We can’t promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and requires less effort than other languages. We believe that Java technology will help you do the following:

  • Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
  • Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
  • Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
  • Develop programs more quickly: Your development time may be as much as twice as fast versus writing the same program in C++. Why? You write fewer lines of code and it is a simpler programming language than C++.
  • Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure JavaTM Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
  • Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
  • Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.

 

6.5 ODBC:

 

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you setup a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

6.6 JDBC:

In an effort to set an independent database standard API for Java; Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.

 

6.7 JDBC Goals:

Few software packages are designed without goals in mind. JDBC is no exception; its many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The eight design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows for more error checking to be done at compile time; also, fewer errors appear at runtime.

Keep the common cases simple

Because more often than not, the usual SQL calls used by the programmer are simple SELECT’s, INSERT’s, DELETE’s and UPDATE’s, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
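
For illustration, a simple SELECT issued through JDBC might look like the following sketch; the connection URL, table, and column names are hypothetical.

// Illustrative JDBC usage for a simple SELECT.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SimpleQuery {
    public static void main(String[] args) throws Exception {
        // hypothetical ODBC data source name, in the spirit of the "Sales Figures"
        // example from the ODBC section
        String url = "jdbc:odbc:SalesFigures";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name, total FROM sales")) {
            while (rs.next()) {
                System.out.println(rs.getString("name") + " : " + rs.getInt("total"));
            }
        }
    }
}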

Finally we decided to proceed with the implementation using Java Networking.

And for dynamically updating the cache table we use an MS Access database.

Java has two things: a programming language and a platform.

Java is a high-level programming language that is all of the following

  • Simple
  • Architecture-neutral
  • Object-oriented
  • Portable
  • Distributed
  • High-performance
  • Interpreted
  • Multithreaded
  • Robust
  • Dynamic
  • Secure

Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes — the platform-independent code that is passed to and run on the computer.

Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.

6.8 NETWORKING TCP/IP STACK:

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagram’s:

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP:

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to give a client/server model – see later.

TCP:

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32 bit integer which gives the IP address.

Network address:

Class A uses 8 bits for the network address with 24 bits left over for other addressing. Class B uses 16 bit network addressing. Class C uses 24 bit network addressing and class D uses all 32.

Subnet address:

Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.

Host address:

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total address:

The 32 bit address is usually written as 4 integers separated by dots.

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a message to a server, you send it to the port for that service of the host that it is running on. This is not location transparency! Certain of these ports are “well known”.

Sockets:

A socket is a data structure maintained by the system to handle network connections. A socket is created using the socket call. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);

Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
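
For comparison, the Java Networking counterpart of the socket() call is java.net.Socket; the host, port, and the protocol exchange below are purely illustrative.

// Sketch of a TCP client using Java Networking; java.net.Socket plays the
// role of socket(AF_INET, SOCK_STREAM, 0) in the C prototype above.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class SimpleClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 8080);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("hello");
            System.out.println("server replied: " + in.readLine());
        }
    }
}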

6.9 JFREE CHART:

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:

  • A consistent and well-documented API, supporting a wide range of chart types;
  • A flexible design that is easy to extend, and targets both server-side and client-side applications;
  • Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG).

JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.
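
A minimal usage sketch, assuming the JFreeChart 1.0.x API (ChartFactory, DefaultPieDataset, ChartUtilities); the data values and file name are illustrative.

// Build a small pie chart and save it as a PNG file.
import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

public class PieChartDemo {
    public static void main(String[] args) throws Exception {
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Passed", 75);
        dataset.setValue("Failed", 25);

        JFreeChart chart = ChartFactory.createPieChart(
                "Test results", dataset, true, true, false);
        ChartUtilities.saveChartAsPNG(new File("results.png"), chart, 500, 300);
    }
}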

 

6.9.1. Map Visualizations:

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: Sourcing freely redistributable vector outlines for the countries of the world, states/provinces in particular countries (USA in particular, but also other areas);

Creating an appropriate dataset interface (plus default implementation) and a renderer, and integrating these with the existing XYPlot class in JFreeChart; testing, documenting, testing some more, documenting some more.

6.9.2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.

6.9.3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.

 

6.9.4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

CHAPTER 7

7.0 APPENDIX

7.1 SAMPLE SCREEN SHOTS:

7.2 SAMPLE SOURCE CODE:

CHAPTER 8

8.1 CONCLUSION

This paper presents our experiences of flow-based mashup development and tooling. The key principle of our approach is "Compose-as-you-Search", which leverages tag-based service composition to lower the entry barrier for mashup development, and realizes the philosophy of "live development" by providing on-the-fly recommendations as well as the visual iterative refinement of applications. The key limitations of the current approach are also addressed: like most service-oriented situational software, our applications are not appropriate for classic enterprise settings that have strict quality requirements for security, availability, or performance. The approach proposed in this paper mainly targets personal and small-scale data processing problems.

8.2 FUTURE ENHANCEMENT: One of our future directions is to accumulate useful composition knowledge. Composition knowledge retrieval has recently become a hot topic in mashup development. We will attempt to combine our work with the proposed Knowledge Discovery from Service (KDS) approach to extend the tag-based model to be more expressive, beyond functionality specifications. For example, we could annotate a constraint tag on the driving guide to plan visiting orders. Certainly, the planning algorithm should take such constraints into account when scheduling actions.

Data Collection in Multi-Application Sharing Wireless Sensor Networks

Data Collection in Multi-Application SharingWireless Sensor NetworksHong Gao, Xiaolin Fang, Jianzhong Li, and Yingshu LiAbstract—Data sharing for data collection among multiple applications is an efficient way to reduce communication cost forWirelessSensor Networks (WSNs). This paper is the first work to introduce the interval data sharing problem which is to investigate howto transmit as less data as possible over the network, and meanwhile the transmitted data satisfies the requirements of all theapplications. Different from current studies where each application requires a single data sampling during each task, we studythe problem where each application requires a continuous interval of data sampling in each task. The proposed problem is anonlinear nonconvex optimization problem. In order to lower the high complexity for solving a nonlinear nonconvex optimizationproblem in resource restricted WSNs, a 2-factor approximation algorithm whose time complexity is Oðn2Þ and memory complexityis OðnÞ is provided. A special instance of this problem is also analyzed. This special instance can be solved with a dynamicprogramming algorithm in polynomial time, which gives an optimal result in Oðn2Þ time complexity and OðnÞ memory complexity.Three online algorithms are provided to process the continually coming tasks. Both the theoretical analysis and simulation resultsdemonstrate the effectiveness of the proposed algorithms.Index Terms—Data collection, data sharing, multi-application, wireless sensor networkÇ1 INTRODUCTIONWSN deployment is a difficult and time-consumingwork which requires much manpower or mechanicalpower. Once a network is deployed, it is expected to run fora long time without any human interruption. Therefore, itis inefficient to carry out only one application in a network.Sharing a network for multiple applications can significantlyimprove network utilization efficiency [1], [2], [3],[4], [20], [21]. Currently, it is popular for multiple applicationsto share a WSN. Each node in a network samples ata particular frequency and the sampled data is transmittedto the base station through multi-hops. All the applicationsprefer to receive all the sampled data. However, if all thesampled data is transmitted to the base station, thecommunication cost is high and network lifetime will bereduced. Fortunately, there may be some applicationsmonitoring the same physical attributes. In this case, acertain amount of data may not need to be repeatedlytransmitted back to the base station.Under the abovementioned scenario, carefully designeddata sharing algorithms are desired. Tavakoli et al. [5]proposed a data sampling algorithm for each node, suchthat the sampled data can be shared by as many applicationsas possible. Meanwhile, the amount of sampled data ateach node is reduced to a maximum level, reducing theoverall communication cost. In [5], each application consistsof a set of tasks. In each task, each node samples data once.As shown in Fig. 1, there are two applications running onthis node. Task T1 is for the first application, and Task T2 isfor the second one. T1 and T2 may overlap on the time axis,and both of themneed to sample data once.Anaivemethodis to sample data independently, e.g., s1 is sampled by T1and s2 is sampled by T2 as shown in Fig. 1a, resulting in twopieces of data s1 and s2. In [5], the authors designed a greedyalgorithm such that only one data sampling can serve bothapplications as shown in Fig. 
1b.In many applications, data needs to be sampled for acontinuous interval as shown in Fig. 2, instead of samplingat a particular time point. For example, railway monitoringsystems collecting acoustic information [6], [7] need tosample data for a continuous interval. Volcanic andearthquake monitoring systems [8], [9], [10] also havesuch a requirement to measure vibrations. Habitat monitoringsystems for microclimate, plant physiology andanimal behavior [11], [12], [13] need to record wind speedand take video of animal behaviors, which again require tosample data for a continuous interval.This paper studies the interval data sharing problem of howto reduce the overall length of data sampling intervalswhich could be shared by multiple applications.We assumethere are multiple applications running on a same node,and each application consists of tasks. Each task requires tosample data for a continuous interval. In Fig. 2, T1 is for thefirst application, and T2 is for the second one. Both tasksneed to continuously sample data for an interval s. If twotasks sample data independently, two intervals of datawithlength s need to be sampled as shown in Fig. 2a. However,one interval of data with length s is enough if the startingpoints of data sampling of these two applications can beintelligently arranged. The data sampling interval lengthsfor different applications may be different, and for the same. H. Gao, X. Fang, and J. Li are with the Department of Computer Scienceand Technology, Harbin Institute of Technology, Harbin 150001, China.E-mail: {honggao, xlforu, lijzh}@hit.edu.cn.. Y. Li is with the Department of Computer Science and Technology,Harbin Institute of Technology, Harbin 150001, China, and also with theDepartment of Computer Science, Georgia State University, Atlanta, GA30303 USA. E-mail: yili@gsu.edu.Manuscript received 12 Aug. 2013; revised 3 Nov. 2013; accepted 4 Nov.2013. Date of publication 19 Nov. 2013; date of current version 9 Jan. 2015.Recommended for acceptance by A. Nayak.For information on obtaining reprints of this article, please send e-mail to:reprints@ieee.org, and reference the Digital Object Identifier below.Digital Object Identifier no. 10.1109/TPDS.2013.2891045-9219 _ 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 26, NO. 2, FEBRUARY 2015 403application, tasks may have different data samplinginterval lengths. The investigated problem in this paper isto minimize the overall data sampling interval length ateach node while satisfying all the applications’ needs.We formulate the aforementioned problem as a nonlinearnonconvex optimization problem. Since sensor nodesare resource constrained, the cost to solve such a problemat each node is very high. Therefore, we propose a 2-factorgreedy algorithm with time complexity Oðn2Þ and memorycomplexity OðnÞ.We also consider a special instance wherethe data sampling interval lengths of all the tasks are thesame. The special instance could be solved with a dynamicprogramming algorithm in polynomial time, whose timecomplexity is Oðn2Þ and memory complexity is OðnÞ. Thecontributions of this paper are as follows.. This is the first work to study the interval datasharing problem, where each node samples data fora continuous interval instead of for a discrete datapoint. 
This problem is formulated as a nonlinearnonconvex optimization problem.. A greedy approximation algorithm is proposed tosolve the problem so as to reduce the cost of solvingthe nonlinear nonconvex optimization problem atresource restricted sensor nodes. The proposedalgorithm is proved to be a 2-factor approximationalgorithm. The time complexity of this algorithm isOðn2Þ, and the memory complexity is OðnÞ.. We analyze a special instance of the interval datasharing problem. We give a dynamic programmingalgorithm which gives an optimal result in polynomialtime. The time complexity is Oðn2Þ and thememory complexity is OðnÞ.. Three online algorithms are proposed to process thetasks one by one.. Extensive simulations were conducted to validatethe correctness and effectiveness of our algorithms.The rest of this paper is organized as follows. Section 2reviews the related works. Section 3 formally defines theinterval data sharing problem. Section 4 gives an algorithmto solve the problem and the approximation ratio isanalyzed. A special instance is investigated in Section 5. Adynamic programming algorithm is also presented in thissection to address the special instance. Section 6 proposesthree online algorithms. The performance evaluations areshown in Section 7 and Section 8 concludes this paper.2 RELATED WORKSOur problem is inspired by the work in [5], which studiesthe problem of data sharing among multiple applications.It assumes each application only needs discrete data pointsamplings. While in our problem, the applications mayrequire a continuous interval of data. The proposedsolution in [5] cannot be applied to our problem. However,our solution can solve their problem.Our problem is a novel one inWSNs. It tries to collect aslittle data as possible. Query optimization inWSNs [2], [14]tries to find in-network schemes or distributed algorithmsto reduce communication cost for aggregation queries. Ourwork focuses on reducing the amount of transmitted datafor each node.Multi-query optimization in database systems studieshow to efficiently process queries with common subexpressions[15], [16]. It aims at exploiting the commonsub-expression of SQLs to reduce query cost, while ourproblem aims at reducing data volume.Krishnamurthy et al. [17] considered the problem of datasharing in data streaming systems for aggregate queries.They studied the min, max, sum; and count-like aggregationqueries. A stream is scanned at least once and ischopped into slices. Only the slices that overlap amongmultiple queries could be shared. Their studied problemsare different fromours. We expect to reduce the number ofsensor samplings at each individual node resulting in lesscommunication cost. Our problem differs in that we wantto provide each application enough sampled data whileminimizing the total number of sampling times.3 PROBLEM DEFINITIONIn order to make our problem clear, we first introduce anexample as shown in Fig. 3. We have two applications, andeach application consists of many tasks. Application A1requires an interval of data of length l1 during each taskduration, and A2 requires an interval of data of length l2during each task duration. The task duration lengths of A1and A2 are different as shown in Fig. 3. Application A1consists of tasks T11; T12; . . . ; T1i, and so on. Application A2includes tasks T21; T22; . . . ; T2j, and so on. Take tasks T11,T12, T13, T21, and T22 as examples. The optimal solution isshown in the bottom part of Fig. 3. 
Tasks T11, T12 and T13pick the intervals I11, I12, and I13 respectively. The intervalsI11, I12, and I13 are all of length l1. Tasks T21 and T22 pick theintervals I21 and I22 respectively. The intervals I21 and I22are both of length l2. The optimal solution gives a result oflength s1 þ s2 in this example, as shown in the bottom partof Fig. 3, where the tasks are sorted according an ascendingorder of the ending time of the tasks.Fig. 2. Data sampling for a continuous interval. (a) Independentsampling. (b) Greedy sampling.Fig. 1. Data sampling at a time point. (a) Independent sampling.(b) Greedy sampling.404 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 26, NO. 2, FEBRUARY 2015Data collected during the overlapped sampling intervalsof multiple tasks could be shared by these tasks. We aim atminimizing the overall length of the data sampling intervals.We now give some preliminary definitions.Definition 1. Define I ] I0 as the union of two intervals orinterval sets I and I0. For example, ½1; 5_ ] ½3; 7_ ¼ ½1; 7_,and f½1; 3_; ½5; 7_g] ½2; 6_ ¼ ½1; 7_.Definition 2. Define I \þ I0 as the overlap of two intervals I andI0. For example, ½1; 5_ \þ ½3; 7_ ¼ ½3; 5_.Definition 3. Define jIj as the length of interval I or the lengthof the union of the intervals in set I. For example, j½1; 5_j ¼ 4,j½1; 5_ ] ½3; 7_j ¼ 6, and j½1; 3_ ] ½5; 7_j ¼ 2.Definition 4. I_þ I0 means interval I is a sub-interval of I0. Forexample, ½2; 3_ ] _þ ½1; 5_.Given a set of n tasks T ¼ fTig, i ¼ 1; 2; . . . ; n. Each taskTi is a three-tuple Ti ¼ hbi; ei; lii, where bi denotes thebeginning time, ei represents the end time, and li meansthat Ti needs an interval of data with length li. It is assumedthat li _ ei _ bi. The problem is to find a continuous subintervalIi in interval ½bi; ei_, i.e., Ii_þ ½bi; ei_, for every tasksatisfying jIij ¼ li, so that the length of the union of all thesub-intervals on the time axis is minimized, i.e., j ]ni¼1 Iij isminimum. Note that sub-interval Ii is continuous.The bottom part of Fig. 3 illustrates an example. Sincesensor nodes have limited communication and computationalcapabilities, we want to find a set of sub-intervals I11,I21, I12, I22, and I13 for tasks T11, T21, T12, T22, and T13respectively, such that jI11 ] I21 ] I12 ] I22 ] I13j is minimum.In the example shown in Fig. 3, the optimal solutionis s1 þ s2, and all the tasks can obtain the expected data inintervals s1 and s2.We now formally define the interval data sharingproblem.Definition 5. Given a set of n tasks T, each task Ti is a threetupleTi ¼ hbi; ei; lii, that is, each task Ti has a beginning timebi,an end time ei, and an data sampling interval length li. Theproblem is to find a continuous sub-interval Ii for each taskso as tomin]ni¼1Ii__________(1)s.t.Ii_þ ½bi; ei_; i ¼1; 2; . . . ; n (2)jIij ¼ li; i ¼1; 2; . . . ; n: (3)The objective function of this problem is nonlinear. So ifbi, ei, and li are real numbers, the problem is a nonlinearproblem which has no efficient universal solution [18]. It iseasy to find that the objective function is nonconvex.Several methods are available for solving nonconvexoptimization problems. 
For example, one approach is touse special formulations of linear programming problems.Another method employs the branch and bound techniques,where the problem is divided into subclasses to besolved with convex or linear approximations that form alower bound on the overall cost within the subdivision.However, all these methods require high computationcomplexity which are impractical to be implemented onsensor nodes. Since digital signals are discrete, dataintervals can be regarded as integer sequences. Therefore,bi, ei, and li can be regarded as integers. The integervariables make the problem a nonlinear integer programmingproblem [19] which is hard to be solved.4 A 2-FACTOR APPROXIMATION ALGORITHMA naive method is to initiate a continuous data samplinginterval at the beginning time of each task independently.However, this method results in a large amount of data. Inthis section, we present a greedy algorithm which is a2-factor approximation algorithm for our interval datasharing problem. Before we present the approximationalgorithm, we propose a solution for the special case whereevery task overlaps with each other.4.1 Tasks Overlapped with Each OtherFor ease of understanding, we first define satisfy asfollows.Definition 6. We say that an interval I satisfies a task Ti ifjI \þ ½bi; ei_j _ li. An interval set S satisfies a task Ti if thereexists an interval I in S such that jI \þ ½bi; ei_j _ li.If all the tasks overlap with each other, then the intervaldata sharing problemcan be solved in polynomial time.Analgorithm is presented as follows.Step 1) Sort the tasks in an ascending order by theirend times.Step 2) Pick the sub-interval of length l1 at the end ofthe first task T1, i.e., pick sub-interval½e1 _ l1; e1_.Fig. 3. Interval data sampling for multi-applications.GAO ET AL.: DATA COLLECTION IN MULTI-APPLICATION SHARING WIRELESS SENSOR NETWORKS 405Step 3) Pick a sub-interval for each task from thesecond to the last. Take Ti as an example, if theunion of the picked sub-intervals satisfy Ti, donothing and continue to pick a sub-interval forthe next task Tiþ1. If it does not satisfy Ti,extend forward from the tail of the picked subintervals.If it still does not satisfy Ti, extendbackward from the head of the picked subintervals.The pseudo code for tasks overlapped with each other isdescribed in Algorithm 1 in Appendix which is available inthe Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2013.289. TakeFig. 4 as an example. Task T1, T2, and T3 overlap witheach other. T1 needs a data interval of length l1 ¼ 4, T2needs an interval of length l2 ¼ 3, and T3 needs an intervalof length l3 ¼ 9. First, the tasks are sorted in an ascendingorder by their end times. Second, pick the sub-interval oflength 4 at the end of T1. The picked interval for T1 isI ¼ ½7; 11_. Third, I satisfies task T2, so nothing is done forT2. Forth, I does not satisfy T3, thus, I is extended forwarduntil the end time of T3, at this time I ¼ ½7; 14_. But I stilldoes not satisfy T3, I is then extended backward from thehead of the picked interval to get I ¼ ½5; 14_ which satisfiesall these three tasks. The time complexity is Oðn log nÞ dueto the sorting step. If the tasks are pre-sorted, the timecomplexity is OðnÞ.One can find that the optimal interval I ¼ ½s; e_ for tasksoverlappedwith each other can be also obtained by anothermethod. 
An optimal interval I ¼ ½s; e_ can be derived fromthe following equations:s ¼ minni¼1fei _ lig (4)e ¼ max maxni¼1fbi þ lig; maxni¼1fs þ lig; minni¼1feig_ _: (5)The second method is described in Algorithm 2 inAppendix, which obtains the same result as Algorithm 1.This algorithm consists of two phases. Take Fig. 4 as anexample again. In the first phase, it needs to find thebeginning time s. In this example, s is the minimum ei _ li,and it is easy to find that s ¼ 5. In the second phase, we findthat e ¼ 14 which is the maximum s þ li in this example.Thus, an optimal interval is obtained which is [5, 14]. As wecan see, the case where tasks overlap with each other can besolved in time complexity Oð2nÞ ¼ OðnÞ with Algorithm 2.This algorithm does not require a sorting step. However, ifthe tasks are pre-sorted, Algorithm 1 is no worse thanAlgorithm 2. As shown in the later section, our approximationalgorithm pre-sorts the tasks, so either algorithmcan be used as a sub-process in our following approximationalgorithm.Lemma 1. Let Tm be the task with the minimum end time, i.e.em ¼ minni¼1ei. Then picking sub-interval ½em _ lm; em_ doesnot result in a worse result.Based on Lemma 1, we can find that Algorithm 1 andAlgorithm 2 are optimal. This is because, in the case wheretasks overlap with each other, pick the end sub-interval ofthe task which has the minimum end time will not resultin a worse result. Therefore, the overall result can bederived by extending this picked sub-interval forward andbackward.4.2 2-Factor Approximation AlgorithmWe now present our greedy approximation algorithm.First, sort all the tasks by the end time in an ascendingorder. Second, identify a subset of tasks that overlap withT1. It is easy to find that these tasks overlap with each other.Find the minimum interval that satisfies the tasks in theidentified subset by using Algorithm 1. Third, remove thepreviously identified tasks. Repeat the second and the thirdsteps for the remaining tasks until all the tasks areremoved. One can refer to Algorithm 3 in Appendix forthe detailed process.Fig. 5 illustrates the process of the greedy approximationalgorithm. The five tasks are sorted in an ascending orderby end time. In the first step, tasks T1, T2, and T5 are identifiedas a subset of tasks that overlap with each other. Onecan find that, if the tasks are sorted by end time, all thetasks which overlap with T1 also overlap with each other.Now, Algorithm 1 can be used to compute the interval thatsatisfies these three tasks. After that, the three tasks T1, T2,and T5 are removed. In the second step, T3 and T4 areidentified as a subset of tasks that overlap with each other.Now, Algorithm 1 is employed again to compute the intervalthat satisfies these two tasks. The union of the two foundintervals is the final result of this example returned byAlgorithm 3 in Appendix available online.Theorem 1. Algorithm 3 is a 2-factor approximation algorithm.A tight example is shown in Fig. 6. Algorithm 3 derivesan interval of length l for tasks T1 and T3 which overlapwith each other in the first iteration. Then it derives aninterval of length l for task T2 in the second iteration.Algorithm 3 returns a final result of length 2l as shown inFig. 6a. However, there exists an optimal solution whichderives an interval of length ” for T1 and an interval oflength l for T2 and T3 as shown in Fig. 6b. This optimalsolution returns a result of length ” þ l. Therefore, it deriveslim”!02lð”þlÞ ¼ 2.Fig. 4. Tasks overlapped with each other.Fig. 5. 
5 MULTIPLE TASKS WITH SAME DATA SAMPLING INTERVAL LENGTH

In this section, we study a special instance of the interval data sharing problem where the length of the data sampling interval of all the tasks is the same. Different from the general problem, this special instance can be solved with a dynamic programming algorithm.

Given a set of tasks T = {T_1, T_2, ..., T_n} and a positive integer l, each task T_i is denoted as T_i = <b_i, e_i, l>, where b_i is the beginning time and e_i is the end time. The problem is to find a continuous sub-interval of length l for each task T_i in [b_i, e_i], so that the length of the union of all the picked sub-intervals on the time axis is minimized.

Definition 7. In the same data sampling interval length case, a task T_i covers T_j if [b_j, e_j] is a sub-interval of [b_i, e_i], that is, [b_j, e_j] ⊆ [b_i, e_i].

One can find that in the same data sampling interval length case, tasks which cover some other tasks can be removed. This is because any interval that satisfies the covered shorter task must satisfy the longer task. In Fig. 7a, task T_2 covers T_1. If they have the same data sampling interval length, then any interval I that satisfies T_1 satisfies T_2. Therefore, we do not have to consider T_2, and T_2 can be removed. As shown in Fig. 7b, we can get the same result after removing T_2.

Lemma 2. Let the data sampling interval length of all the tasks be the same. If T_i covers T_j, i.e., [b_j, e_j] ⊆ [b_i, e_i] for any i, j = 1, 2, ..., n, any interval that satisfies T_j satisfies T_i.

After removing the tasks which cover other tasks, the problem can be solved with a dynamic programming algorithm. Let T' = {T'_1, T'_2, ..., T'_m} be the set of tasks none of which covers another task. Assume that T'_1, T'_2, ..., T'_m are sorted in ascending order by end time. We have the following lemma.

Lemma 3. In T' = {T'_i, T'_{i+1}, ..., T'_j}, b'_p < b'_q and e'_p < e'_q for i ≤ p < q ≤ j.

Let I(i, j) be the interval that satisfies both T'_i and T'_j, i ≤ j. We define I(i, j) as follows:

I(i, j) = [e'_i − l, e'_i]        if b'_j ≤ e'_i − l
I(i, j) = [e'_i − l, b'_j + l]    if e'_i − l < b'_j < e'_i
I(i, j) = +∞                      if b'_j ≥ e'_i    (6)

There are only two cases when T'_i overlaps with T'_j, i.e., T'_i ∩ T'_j ≠ ∅, i ≤ j. In the first case, T'_j covers the interval [e'_i − l, e'_i], as shown in Fig. 8a; then we let I(i, j) = [e'_i − l, e'_i]. In the second case, T'_j overlaps with the interval [e'_i − l, e'_i], as shown in Fig. 8b; then we let I(i, j) = [e'_i − l, b'_j + l]. When T'_i overlaps with T'_j, we define I(i, j) as presented in the first two cases of Equation (6), based on Lemma 1. When T'_i does not overlap with T'_j, i.e., T'_i ∩ T'_j = ∅, i ≤ j, we define I(i, j) = +∞, as presented in the last case of Equation (6).

Lemma 4. I(i, j) in Equation (6) satisfies all the tasks T'_i, T'_{i+1}, ..., T'_j.

Let f(i) be the result with minimum length of the union of the results from tasks T'_i, T'_{i+1}, ..., T'_m, where [e'_i − l, e'_i] is picked. Let g(i) be the index x which results in the minimum length of the union of the results from T'_i, T'_{i+1}, ..., T'_m. Then f(i) and g(i) can be represented as follows:

f(i) = I(i, g(i)) ∪ f(g(i) + 1)    if 1 ≤ i < m
f(i) = [e'_m − l, e'_m]            if i = m
f(i) = ∅                           if i > m    (7)

g(i) = argmin_{i ≤ x ≤ m} |I(i, x) ∪ f(x + 1)|    if 1 ≤ i < m
g(i) = m                                          if i = m    (8)

An example is shown in Fig. 9, and the process of this example is presented in Table 1.
First, we compute I(i, j). By Equation (6), we derive I(i, j) in Table 1a. T'_1 overlaps with T'_2 and T'_3, so we get I(1, 1) = [3, 7], I(1, 2) = [3, 8], and I(1, 3) = [3, 10]. T'_2 overlaps with T'_3, so we derive I(2, 2) = [5, 9] and I(2, 3) = [5, 10]. T'_3 overlaps with T'_4, so we derive I(3, 3) = [12, 16] and I(3, 4) = [12, 16]. We have I(4, 4) = [14, 18].

Then, we compute f(i). By Equations (8) and (7), f(i) is obtained in Table 1b. As represented in Equation (7), we get f(5) = ∅ first. By recalling the definition of f(i), f(4) = I(4, 4) = [e'_4 − l, e'_4]. Then f(3) is the one with the smaller length of the union of the intervals between I(3, 3) ∪ f(4) and I(3, 4) ∪ f(5); thus we get f(3) = I(3, 4). After that, f(2) is the one with the smaller length of the union of the intervals between I(2, 2) ∪ f(3) and I(2, 3) ∪ f(4), and we obtain f(2) = I(2, 2) ∪ f(3). Finally, f(1) is the one with the smallest length of the union of the intervals among I(1, 1) ∪ f(2), I(1, 2) ∪ f(3), and I(1, 3) ∪ f(4), and we get f(1) = I(1, 2) ∪ f(3). The dynamic programming algorithm is described in Algorithm 4 in the Appendix, available online.

In Algorithm 4, the tasks are sorted in ascending order by end time in line 1. Lines 2-7 remove the tasks which cover other tasks. f(i) is computed in lines 10-20. Line 15 checks whether T'_x overlaps with T'_i. If T'_x does not overlap with T'_i, nothing is done, and the loop is broken because all the later tasks will not overlap with T'_i. If T'_x overlaps with T'_i, the algorithm needs to record the best index g(i) and the minimum result. The final result is f(1).

Lemma 5. The special instance where tasks have the same data sampling interval length can be solved in time complexity O(n²) and memory complexity O(n).

Fig. 7. Example of covering. (a) Before removing. (b) After removing.
Fig. 8. Illustration of computing I(i, j). (a) Case 1. (b) Case 2.
Fig. 9. Example for the dynamic programming algorithm.
Table 1. Computing I(i, j) and f(i) for the example in Fig. 9. (a) Computing I(i, j). (b) Computing f(i).
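A compact Java sketch of this dynamic program is shown below. It assumes the covered tasks have already been removed and that the remaining tasks are sorted by end time, and, for clarity, it stores whole interval unions instead of reaching the O(n) memory bound of the paper's Algorithm 4.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class EqualLengthDP {
    // Total length of the union of a set of intervals, each given as int[]{start, end}.
    static long unionLength(List<int[]> intervals) {
        List<int[]> sorted = new ArrayList<int[]>(intervals);
        Collections.sort(sorted, new Comparator<int[]>() {
            public int compare(int[] a, int[] b) { return Integer.compare(a[0], b[0]); }
        });
        long total = 0; int curS = 0, curE = 0; boolean open = false;
        for (int[] iv : sorted) {
            if (!open) { curS = iv[0]; curE = iv[1]; open = true; }
            else if (iv[0] <= curE) { curE = Math.max(curE, iv[1]); }
            else { total += curE - curS; curS = iv[0]; curE = iv[1]; }
        }
        if (open) total += curE - curS;
        return total;
    }

    // b[i], e[i]: task endpoints (0-based, sorted by end time, no task covers another); l: common length.
    static List<int[]> solve(int[] b, int[] e, int l) {
        int m = b.length;
        List<List<int[]>> f = new ArrayList<List<int[]>>();
        for (int i = 0; i <= m; i++) f.add(new ArrayList<int[]>());   // f[m] plays the role of the empty set
        for (int i = m - 1; i >= 0; i--) {
            long best = Long.MAX_VALUE; List<int[]> bestUnion = null;
            for (int x = i; x < m; x++) {
                if (b[x] >= e[i]) break;                              // I(i, x) = +infinity; later tasks cannot overlap either
                // Equation (6): the interval that satisfies tasks i..x
                int[] seg = (b[x] <= e[i] - l) ? new int[]{ e[i] - l, e[i] }
                                               : new int[]{ e[i] - l, b[x] + l };
                List<int[]> cand = new ArrayList<int[]>(f.get(x + 1));
                cand.add(seg);
                long len = unionLength(cand);
                if (len < best) { best = len; bestUnion = cand; }     // Equation (8): g(i) = argmin
            }
            f.set(i, bestUnion);                                      // Equation (7)
        }
        return f.get(0);                                              // corresponds to f(1) in the 1-based notation
    }
}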
6 ONLINE ALGORITHMS

Three online algorithms are presented in this section for the situation where tasks come one by one. Although the online algorithms may not obtain optimal solutions, they generate reasonable results in our experiments.

A task T_i is denoted as T_i = <b_i, e_i, l_i>, where b_i is the beginning time and e_i is the end time. In real applications, tasks arrive in sequence by beginning time. For an arriving task T_i, an online algorithm picks a sub-interval of length l_i in [b_i, e_i], so as to minimize the union length of all the picked sub-intervals.

The general online algorithm is described as follows. Let the set of picked sub-intervals from task T_1 to task T_{i−1} be f(i − 1). When task T_i arrives, the online algorithm picks a sub-interval for T_i based on f(i − 1). The sub-interval can be picked by different methods. We compare the minimum-increment, the latest-overlap, and the maximum-overlap methods in this paper. Before presenting the three methods, an extension process is introduced first.

6.1 Extension Process

Given an interval I, this section introduces how to extend I to satisfy the arriving task T_i. As shown in Fig. 10, the relationship between I and T_i can be one of five cases. Figs. 10a and 10e are the cases where I does not overlap with T_i. Figs. 10b and 10d are the cases where I partly overlaps with T_i. Fig. 10c is the case where I is within T_i.

If I = [b, e] is empty, or it does not overlap with T_i as in Figs. 10a and 10e, then pick the end sub-interval [e_i − l_i, e_i]. If I does not satisfy T_i, and it overlaps with T_i as in Fig. 10b, then pick the sub-interval [b_i, b_i + l_i]. In the cases shown in Figs. 10c and 10d, pick sub-interval [e_i − l_i, e_i] if [b, b + l_i] exceeds T_i; otherwise, pick [b, b + l_i] if [b, b + l_i] exceeds I. The main idea of the extension process is to extend forward first. The extension process is described in Algorithm 6 in the Appendix.

Fig. 10. Relationship between I and T_i.

6.2 Minimum-Increment Method

Based on the extension process, when task T_i arrives, the minimum-increment method selects any two intervals in f(i − 1) and finds the pair with the minimum incremental length. The minimum-increment method is described in Algorithm 7 in the Appendix, available online.

Take Fig. 11 as an example, where f(i − 1) = {s_1, s_2, s_3, s_4, s_5} and |s_1| = 2, |s_2| = 5, |s_3| = 2, |s_4| = 5, |s_5| = 7. Let the data sampling interval length for T_i be 7. It is easy to find that in the minimum-increment method, [3, 11] is the minimum incremental solution, which includes s_2, s_3, and an additional incremental interval [8, 9]. The incremental length is 1. The minimum-increment method finds a locally optimal solution for T_i.

6.3 Latest-Overlap Method

When task T_i arrives, the latest-overlap method finds the interval s in f(i − 1) that overlaps with [b_i, e_i] the latest. The latest-overlap method is described in Algorithm 8 in the Appendix.

In Fig. 11, s_5 is the latest interval that overlaps with T_i. The latest-overlap method tries to find a solution for the later tasks. The solution may satisfy the later tasks; thus, the overall result may be better.

6.4 Maximum-Overlap Method

When task T_i arrives, the maximum-overlap method finds the interval s in f(i − 1) that overlaps with [b_i, e_i] by the maximum amount, i.e., max_{s ∈ f(i−1)} |s ∩ [b_i, e_i]|. The maximum-overlap method is described in Algorithm 9 in the Appendix, available online.

Take Fig. 11 as an example again. In the maximum-overlap method, s_4 is selected in Algorithm 9, and the data sampling interval for T_i is [14, 21]. The maximum-overlap method considers both T_i and the later tasks. The solution may not be optimal for T_i, but the overall result for T_i and the later tasks may be better.

Fig. 11. Illustration of the online algorithm.
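For illustration, a simplified Java sketch of the maximum-overlap strategy is given below. The selection step follows the max_{s ∈ f(i−1)} |s ∩ [b_i, e_i]| rule of Section 6.4, while the extension step is a simplified stand-in for the extension process of Section 6.1 (it grows the chosen interval inside [b_i, e_i], forward first and then backward); it is not the paper's Algorithm 9.

import java.util.List;

public class MaxOverlapOnline {
    static int overlap(int s1, int e1, int s2, int e2) {
        return Math.max(0, Math.min(e1, e2) - Math.max(s1, s2));
    }

    // picked: the sub-intervals chosen for earlier tasks, each as int[]{start, end}.
    // (b, e, l) is the arriving task.
    static void handleArrival(List<int[]> picked, int b, int e, int l) {
        int[] best = null;
        int bestOv = 0;
        for (int[] iv : picked) {                             // select the interval with maximum overlap
            int ov = overlap(iv[0], iv[1], b, e);
            if (ov > bestOv) { bestOv = ov; best = iv; }
        }
        if (best == null) {                                   // nothing overlaps: start a new end interval
            picked.add(new int[] { e - l, e });
            return;
        }
        int need = l - bestOv;
        if (need <= 0) return;                                // the arriving task is already satisfied
        int forward = Math.max(0, Math.min(need, e - best[1]));   // extend forward first, not past e
        best[1] += forward;
        need -= forward;
        if (need > 0) best[0] -= need;                        // then extend backward for the remainder
    }
}

The minimum-increment and latest-overlap methods differ only in how the interval is selected from f(i − 1).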
6.5 Performance Analysis

An online algorithm may not derive an optimal solution for all the tasks. All the above methods pick a sub-interval for the arriving task based only on the previous information, so they cannot guarantee globally optimal solutions. In the worst situation, every task picks a sub-interval independently, and the approximation ratio is n. A worst instance is presented in Fig. 12. In this example, one interval is enough for all the tasks in an optimal solution, while all three online methods derive a result with n intervals.

Fig. 12. Worst instance.

However, the worst instance is almost impossible to occur in the multi-application data sharing problem. In the worst instance, the tasks overlap with each other as shown in Fig. 12. We now analyze the probability that the worst instance occurs. Assume the tasks arrive randomly. Let the overall time interval be [0, t], and let all the tasks be within this interval. Let task T_i in Fig. 12 be T_i = <b_i, e_i, l_i>, and let δ_i = e_i − b_i. If the worst instance occurs, the probability that the longest task is fixed is 1/(t − δ_1). Because t is usually a large number and δ_i ≪ t, this probability can be represented as 1/(t − δ_1) ≈ 1/t. The probability that the second longest task is covered by T_1 is (δ_1 − δ_2)/(t − δ_2) ≈ 1/t. Similarly, the probability for T_i is (δ_{i−1} − δ_i)/(t − δ_i) ≈ 1/t. Therefore, the probability of the worst instance is (t − δ_1) · (1/t)^n ≈ 1/t^{n−1}, which indicates that the worst instance is nearly impossible to occur. Our simulation results confirm this analysis in the next section, where the performance of the online algorithms is much better than the theoretical results.

7 PERFORMANCE EVALUATION

7.1 Simulations with TOSSIM

We evaluate the effectiveness of the proposed algorithms through simulations. The simulations are implemented with TOSSIM, which is a widely used simulation tool for WSNs. Four cases are tested. In each case, four applications, each with different task durations and different data sampling interval lengths, are tested. In the first case, the task durations of the four applications are 11, 13, 17, and 19 unit time, respectively. The task durations are 13, 17, 19, and 23 unit time, respectively, in the second case. The task durations of the third case are 17, 19, 23, and 29 unit time, and 19, 23, 29, and 31 for the fourth case. We assume that sensor nodes can sample once and obtain one unit of data in each unit time. The sensor nodes run Algorithm 4 every maxTime unit time, where maxTime is the window size according to the computation ability of the sensor nodes. Higher computation ability allows a larger maxTime. The greedy algorithm and the online algorithms are compared with the naive method introduced in Section 4. The naive method initiates a continuous data sampling at the beginning of each task independently.

7.1.1 Impact of Interval Length

In the first set of simulations, we evaluate the performance of the proposed algorithms in terms of the amount of sampled data. The data sampling interval lengths for every case are 2, 3, 5, and 7 unit time. It can be seen from Fig. 13 that the naive method samples much more data than the optimal solution, and it cannot be bounded. In the simulations, maxTime is set to 150 unit time. Our greedy algorithm samples more data than the optimal solution, but it is always no more than twice the optimal result. Compared with the naive method, our algorithm samples almost 200 percent less data when the data sampling interval length is short. One can also find that when the task duration increases, the amount of data sampled by both the naive method and the greedy algorithm decreases.

Fig. 13. Data amount for shorter interval lengths.

Although the online algorithms give bad results in the worst situation in the theoretical analysis, they have acceptable results in the simulations, as shown in Fig. 13, where minInc, maxOv, and latestOv represent the minimum-increment method, the maximum-overlap method, and the latest-overlap method, respectively. It can be found in Fig. 13 that the amount of data sampled by all three online algorithms is larger than that of the optimal solution and the greedy algorithm, but lower than that of the naive method. These three online methods have almost the same performance in all the simulations except case 1, where minInc incurs more data. This indicates that maxOv and latestOv may be better in an online process. Both maxOv and latestOv try to sample data for the arriving task as late as possible. Such a method could sample data that may satisfy the later coming tasks.
minInc, in contrast, is likely to pick data that is in the beginning part of the arriving task, which may not satisfy the later coming tasks.

7.1.2 Impact of Window Size maxTime

The next group of simulations evaluates how maxTime affects the amount of sampled data. In the simulations, the task durations of the four applications are 11, 13, 17, and 19 unit time, respectively, and the data sampling interval lengths for every case are 2, 3, 5, and 7 unit time, respectively. The result is shown in Fig. 14. The amount of sampled data changes slightly for different maxTime settings. As maxTime increases, the amount of sampled data increases. However, the average amount of data does not vary a lot. This observation means that it is not necessary for the sensor nodes to take care of a large maxTime. A small maxTime is already enough to derive a good result.

As shown in Fig. 14, the online methods sample more data than the optimal and the greedy algorithm, but less than the naive method, in different cases with different maxTime settings. The three online methods have similar performance. minInc derives slightly more data than maxOv and latestOv for some maxTime settings. The reason is that minInc has a higher probability of picking data intervals that will not satisfy the later coming tasks.

Fig. 14. Data amount for different maxTime settings.

7.1.3 Impact of Node Density

Next we evaluate the impact of node density on the amount of sampled data. In the simulations, the area width of the network is set to 100 m, the communication range is set to 40 m, and the number of nodes increases from 10 to 160. As the node density increases, the amount of sent data increases, but the amount of data received by the base station does not increase proportionally. Fig. 15 illustrates the data loss rate of the proposed algorithms for networks with different node densities. When the number of nodes is 160, the naive method loses more than 30 percent of the sent sampled data. The data loss rate increases sharply in dense networks when the traffic is heavy. This is because unreliable wireless links and retransmissions result in serious communication congestion. The greedy and the online methods sample less data; thus the traffic carried on the network is not as heavy, and the data loss rate is lower.

Fig. 15. Data loss rate.

7.1.4 Impact of Network Scale

Fig. 16 shows how the data loss rate is affected by network scale. In the simulations, the density is 10 nodes per 50 x 50 m², the communication range is 40 m, and the area width of the network increases from 50 m to 250 m. The naive method and the greedy algorithm have a similar loss rate in small-scale networks. When the network scale is very large, the data loss rate of the naive method is almost 70 percent. This is because the naive method samples a large amount of data, which results in numerous collisions in large-scale networks. The optimal solution and the greedy algorithm, which sample less data, show a better result. The online methods sample almost the same amount of data; thus their data loss rates are not very different.

Fig. 16. Data loss rate.

7.2 Simulations with More Tasks

In this section, we investigate the performance of the proposed algorithms with a large number of tasks. We implement the algorithms in C/C++. 10,000 tasks are randomly generated in the interval [1, 3000000]. The task duration lengths are at most 100. We first illustrate how short data sampling interval lengths affect the amount of sampled data.
The data sampling interval length for each task is at most 1/3 of the task duration length. Seven cases are tested, as shown in Fig. 17. As the number of tasks is large, it is very difficult to find the optimal solution. Therefore, the optimal result is not presented in this and the next group of simulations. In Fig. 17, the greedy algorithm derives the least data amount, and the naive method samples more data than the other algorithms. Similar to the results in the simulations with TOSSIM, minInc samples the most data among the three online algorithms, while the other two methods have similar performance.

In the next group of simulations, we show how longer data sampling interval lengths affect the amount of sampled data. The data sampling interval length for each task is at most the task duration length. Seven cases are tested, as shown in Fig. 18. The result is similar to the simulations with short data sampling interval lengths, except that all the proposed algorithms derive more data.

Fig. 17. Data amount for short interval lengths.
Fig. 18. Data amount for longer interval lengths.

8 CONCLUSION

Data sharing for multiple applications is an efficient way to reduce communication cost in WSNs. Many applications need a continuous interval of data sampling periodically. This paper is the first work to introduce the interval data sharing problem among multiple applications, which is a nonlinear nonconvex optimization problem. Since no efficient universal solution has been found for this problem, we provide a greedy approximation algorithm to lower the high computational complexity of the available solutions. We prove that the provided greedy algorithm is a 2-factor approximation algorithm. The time complexity of this algorithm is O(n²) and the memory complexity is O(n). In a special instance where all the tasks have the same data sampling interval length, the problem can be addressed in polynomial time, and a dynamic programming algorithm is provided for this special instance. The time complexity of the dynamic programming algorithm is O(n²) and the memory complexity is O(n). Because the tasks come one by one, three online algorithms are also provided. Although the online algorithms may sample a large amount of data in the theoretical analysis, they show acceptable performance in the simulations.

ACKNOWLEDGMENT

This work was supported in part by the Major Program of the National Natural Science Foundation of China under Grant No. 61190115, the National Basic Research Program of China (973 Program) under Grant 2012CB316200, and the National Natural Science Foundation of China (NSFC) under Grants 61033015, 60933001, and 61100030.

REFERENCES

[1] W.I. Grosky, A. Kansal, S. Nath, J. Liu, and F. Zhao, "SenseWeb: An Infrastructure for Shared Sensing," IEEE Multimedia, vol. 14, no. 4, pp. 8-13, Oct.-Dec. 2007.
[2] N. Trigoni, Y. Yao, A. Demers, and J. Gehrke, "Multi-Query Optimization for Sensor Networks," in Proc. DCOSS, 2005, pp. 307-321.
[3] M. Li, T. Yan, D. Ganesan, E. Lyons, P. Shenoy, A. Venkataramani, and M. Zink, "Multi-User Data Sharing in Radar Sensor Networks," in Proc. 5th Int'l Conf. Embedded Netw. Sensor Syst. (SenSys), 2007, pp. 247-260.
[4] Y. Xu, A. Saifullah, Y. Chen, C. Lu, and S. Bhattacharya, "Near Optimal Multi-Application Allocation in Shared Sensor Networks," in Proc. 11th ACM Int'l Symp. MobiHoc, 2010, pp. 181-190.
[5] A. Tavakoli, A. Kansal, and S. Nath, "On-Line Sensing Task Optimization for Shared Sensors," in Proc. 9th ACM/IEEE Int'l Conf. IPSN, 2010, pp. 47-57.
[6] S. Ganesan and R.D. Finch, "Monitoring of Rail Forces by Using Acoustic Signature Inspection," J. Sound Vibration, vol. 114, no. 2, pp. 165-171, Apr. 1987.
[7] M. Cerullo, G. Fazio, M. Fabbri, F. Muzi, and G. Sacerdoti, "Acoustic Signal Processing to Diagnose Transiting Electric Trains," IEEE Trans. Intell. Transp. Syst., vol. 6, no. 2, pp. 238-243, June 2005.
[8] L. Cheng and S.N. Pakzad, "Agility of Wireless Sensor Networks for Earthquake Monitoring of Bridges," in Proc. 6th INSS, June 2009, pp. 1-4.
[9] M. Suzuki, S. Saruwatari, N. Kurata, and H. Morikawa, "A High-Density Earthquake Monitoring System Using Wireless Sensor Networks," in Proc. SenSys, 2007, pp. 373-374.
[10] R. Tan, G. Xing, J. Chen, W. Song, and R. Huang, "Quality-Driven Volcanic Earthquake Detection Using Wireless Sensor Networks," in Proc. IEEE 31st RTSS, Dec. 2010, pp. 271-280.
[11] A. Mainwaring, D. Culler, J. Polastre, R. Szewczyk, and J. Anderson, "Wireless Sensor Networks for Habitat Monitoring," in Proc. 1st ACM Int'l Workshop WSNA, 2002, pp. 88-97.
[12] R. Szewczyk, A. Mainwaring, J. Polastre, J. Anderson, and D. Culler, "An Analysis of a Large Scale Habitat Monitoring Application," in Proc. 2nd Int'l Conf. Embedded Netw. Sensor Syst. (SenSys), 2004, pp. 214-226.
[13] R. Szewczyk, E. Osterweil, J. Polastre, M. Hamilton, A. Mainwaring, and D. Estrin, "Habitat Monitoring with Sensor Networks," Commun. ACM, vol. 47, no. 6, pp. 34-40, June 2004.
[14] S. Xiang, H.B. Lim, K.-L. Tan, and Y. Zhou, "Two-Tier Multiple Query Optimization for Sensor Networks," in Proc. 27th ICDCS, 2007, p. 39.
[15] T.K. Sellis, "Multiple-Query Optimization," ACM Trans. Database Syst., vol. 13, no. 1, pp. 23-52, Mar. 1988.
[16] P. Roy, S. Seshadri, S. Sudarshan, and S. Bhobe, "Efficient and Extensible Algorithms for Multi Query Optimization," in Proc. ACM SIGMOD Int'l Conf., 2000, pp. 249-260.
[17] S. Krishnamurthy, C. Wu, and M. Franklin, "On-the-Fly Sharing for Streamed Aggregation," in Proc. ACM SIGMOD Int'l Conf., 2006, pp. 623-634.
[18] D.P. Bertsekas, Nonlinear Programming. Belmont, MA, USA: Athena Scientific, 1999.
[19] D. Li and X. Sun, Nonlinear Integer Programming. New York, NY, USA: Springer-Verlag, 2006.
[20] S. Cheng, J. Li, and Z. Cai, "O(ε)-Approximation to Physical World by Sensor Networks," in Proc. IEEE INFOCOM, 2013, pp. 3084-3092.
[21] J. Li, S. Cheng, H. Gao, and Z. Cai, "Approximate Physical World Reconstruction Algorithms in Sensor Networks," IEEE Trans. Parallel Distrib. Syst., 2014.

Hong Gao received the BS degree in computer science from Heilongjiang University, China, the MS degree in computer science from Harbin Engineering University, China, and the PhD degree in computer science from Harbin Institute of Technology, China. She is currently a Professor in the School of Computer Science and Technology at Harbin Institute of Technology. Her research interests include graph data management, sensor networks, and massive data management.

Xiaolin Fang received the BS degree from the Department of Computer Science and Technology at Harbin Engineering University, China, and the MS degree from the Department of Computer Science and Technology at Harbin Institute of Technology, China. He is currently pursuing the PhD degree in the Department of Computer Science and Technology at Harbin Institute of Technology, China. His research interests include massive data processing and sensor networks.

Jianzhong Li is a Professor in the School of Computer Science and Technology at Harbin Institute of Technology, China.
In the past, he worked as a visiting scholar at the University of California at Berkeley, as a Staff Scientist in the Information Research Group at the Lawrence Berkeley National Laboratory, and as a Visiting Professor at the University of Minnesota. His research interests include data management systems, sensor networks, and data intensive computing.

Yingshu Li received the BS degree from the Department of Computer Science and Engineering at Beijing Institute of Technology, China, and the MS and PhD degrees from the Department of Computer Science and Engineering at the University of Minnesota-Twin Cities. She is currently an Associate Professor in the Department of Computer Science at Georgia State University. Her research interests include wireless networks, sensory data management, and optimization.

Cost-Aware SEcure Routing (CASER) Protocol Design for Wireless Sensor Networks

Lifetime optimization and security are two conflicting design issues for multi-hop wireless sensor networks (WSNs) with non-replenishable energy resources. In this paper, we first propose a novel secure and efficient Cost-Aware SEcure Routing (CASER) protocol to address these two conflicting issues through two adjustable parameters: energy balance control (EBC) and probabilistic based random walking. We then discover that the energy consumption is severely disproportional to the uniform energy deployment for the given network topology, which greatly reduces the lifetime of the sensor networks. We propose an efficient non-uniform energy deployment strategy to optimize the lifetime and message delivery ratio under the same energy resource and security requirement. We also provide a quantitative security analysis on the proposed routing protocol.

Our theoretical analysis and Java simulation results demonstrate that the proposed CASER protocol can provide an excellent tradeoff between routing efficiency and energy balance, and can significantly extend the lifetime of the sensor networks in all scenarios. For the non-uniform energy deployment, our analysis shows that we can increase the lifetime and the total number of messages that can be delivered by more than four times under the same assumption. We also demonstrate that the proposed CASER protocol can achieve a high message delivery ratio while preventing routing traceback attacks.

1.1 INTRODUCTION:

The recent technological advances make wireless sensor networks (WSNs) technically and economically feasible to be widely used in both military and civilian applications, such as monitoring of ambient conditions related to the environment, precious species and critical infrastructures. A key feature of such networks is that each network consists of a large number of untethered and unattended sensor nodes. These nodes often have very limited and non-replenishable energy resources, which makes energy an important design issue for these networks. Routing is another very challenging design issue for WSNs. A properly designed routing protocol should not only ensure high message delivery ratio and low energy consumption for message delivery, but also balance the entire sensor network energy consumption, and thereby extend the sensor network lifetime.

WSNs rely on wireless communications, which is by nature a broadcast medium. It is more vulnerable to security attacks than its wired counterpart due to lack of a physical boundary. In particular, in the wireless sensor domain, anybody with an appropriate wireless receiver can monitor and intercept the sensor network communications. The adversaries may use expensive radio transceivers, powerful workstations and interact with the network from a distance since they are not restricted to using sensor network hardware. It is possible for the adversaries to perform jamming and routing traceback attacks. Motivated by the fact that WSNs routing is often geography-based, we propose a geography-based secure and efficient Cost-Aware SEcure routing (CASER) protocol for WSNs without relying on flooding.

CASER allows messages to be transmitted using two routing strategies, random walking and deterministic routing, in the same framework. The distribution of these two strategies is determined by the specific security requirements. This scenario is analogous to delivering US Mail through USPS: express mail costs more than regular mail, but it can be delivered faster. The protocol also provides a secure message delivery option to maximize the message delivery ratio under adversarial attacks. In addition, we also give a quantitative security analysis of the proposed routing protocol based on previously proposed criteria. The CASER protocol has two major advantages: (i) it ensures balanced energy consumption of the entire sensor network so that the lifetime of the WSNs can be maximized; (ii) the CASER protocol supports multiple routing strategies based on the routing requirements, including fast/slow message delivery and secure message delivery to prevent routing traceback attacks and malicious traffic jamming attacks in WSNs.

Our contributions of this paper can be summarized as follows:

1) We propose a secure and efficient Cost-Aware SEcure Routing (CASER) protocol for WSNs. In this protocol, cost-aware based routing strategies can be applied to address the message delivery requirements.

2) We devise a quantitative scheme to balance the energy consumption so that both the sensor network lifetime and the total number of messages that can be delivered are maximized under the same energy deployment (ED).

3) We develop theoretical formulas to estimate the number of routing hops in CASER under varying routing energy balance control (EBC) and security requirements.

4) We quantitatively analyze security of the proposed routing algorithm.

5) We provide an optimal non-uniform energy deployment (noED) strategy for the given sensor networks based on the energy consumption ratio. Our theoretical and simulation results both show that under the same total energy deployment, we can increase the lifetime and the number of messages that can be delivered more than four times in the non-uniform energy deployment scenario.

1.2 LITERATURE SURVEY:

QUANTITATIVE MEASUREMENT AND DESIGN OF SOURCE-LOCATION PRIVACY SCHEMES FOR WIRELESS SENSOR NETWORKS

AUTHOR: Y. Li, J. Ren, and J. Wu

PUBLISH: IEEE Trans. Parallel Distrib. Syst., vol. 23, no. 7, pp. 1302–1311, Jul. 2012.

EXPLANATION:

Wireless sensor networks (WSNs) have been widely used in many areas for critical infrastructure monitoring and information collection. While confidentiality of the message can be ensured through content encryption, it is much more difficult to adequately address source-location privacy (SLP). For WSNs, SLP service is further complicated by the nature that the sensor nodes generally consist of low-cost and low-power radio devices. Computationally intensive cryptographic algorithms (such as public-key cryptosystems), and large scale broadcasting-based protocols may not be suitable. In this paper, we first propose criteria to quantitatively measure source-location information leakage in routing-based SLP protection schemes for WSNs. Through this model, we identify vulnerabilities of some well-known SLP protection schemes. We then propose a scheme to provide SLP through routing to a randomly selected intermediate node (RSIN) and a network mixing ring (NMR). Our security analysis, based on the proposed criteria, shows that the proposed scheme can provide excellent SLP. The comprehensive simulation results demonstrate that the proposed scheme is very efficient and can achieve a high message delivery ratio. We believe it can be used in many practical applications.

PROVIDING HOP-BY-HOP AUTHENTICATION AND SOURCE PRIVACY IN WIRELESS SENSOR NETWORKS

AUTHOR: Y. Li, J. Li, J. Ren, and J. Wu

PUBLISH: IEEE Conf. Comput. Commun. Mini-Conf., Orlando, FL, USA, Mar. 2012, pp. 3071–3075.

EXPLANATION:

Message authentication is one of the most effective ways to thwart unauthorized and corrupted traffic from being forwarded in wireless sensor networks (WSNs). To provide this service, a polynomial-based scheme was recently introduced. However, this scheme and its extensions all have the weakness of a built-in threshold determined by the degree of the polynomial: when the number of messages transmitted is larger than this threshold, the adversary can fully recover the polynomial. In this paper, we propose a scalable authentication scheme based on elliptic curve cryptography (ECC). While enabling intermediate node authentication, our proposed scheme allows any node to transmit an unlimited number of messages without suffering the threshold problem. In addition, our scheme can also provide message source privacy. Both theoretical analysis and simulation results demonstrate that our proposed scheme is more efficient than the polynomial-based approach in terms of communication and computational overhead under comparable security levels while providing message source privacy.

SOURCE-LOCATION PRIVACY THROUGH DYNAMIC ROUTING IN WIRELESS SENSOR NETWORKS

AUTHOR: Y. Li and J. Ren

PUBLISH: IEEE INFOCOM 2010, San Diego, CA, USA., Mar. 15–19, 2010. pp. 1–9.

EXPLANATION:

Wireless sensor networks (WSNs) have the potential to be widely used in many areas for unattended event monitoring. Mainly due to lack of a protected physical boundary, wireless communications are vulnerable to unauthorized interception and detection. Privacy is becoming one of the major issues that jeopardize the successful deployment of wireless sensor networks. While confidentiality of the message can be ensured through content encryption, it is much more difficult to adequately address the source-location privacy. For WSNs, source-location privacy service is further complicated by the fact that the sensor nodes consist of low-cost and low-power radio devices; computationally intensive cryptographic algorithms and large-scale broadcasting-based protocols are not suitable for WSNs. In this paper, we propose source-location privacy schemes through routing to randomly selected intermediate node(s) before the message is transmitted to the SINK node. We first describe routing through a single randomly selected intermediate node away from the source node. Our analysis shows that this scheme can provide great local source-location privacy. We also present routing through multiple randomly selected intermediate nodes based on angle and quadrant to further improve the global source-location privacy. While providing source-location privacy for WSNs, our simulation results also demonstrate that the proposed schemes are very efficient in energy consumption, and have very low transmission latency and high message delivery ratio. Our protocols can be used for many practical applications.

CHAPTER 2

2.0 SYSTEM ANALYSIS:

2.1 EXISTING SYSTEM:

In Geographic and energy aware routing (GEAR), the sink node disseminates requests with geographic attributes to the target region instead of using flooding. Each node forwards messages to its neighboring nodes based on estimated cost and learning cost. Source-location privacy is provided through broadcasting that mixes valid messages with dummy messages. The transmission of dummy messages not only consumes significant amount of sensor energy, but also increases the network collisions and decreases the packet delivery ratio. In phantom routing protocol, each message is routed from the actual source to a phantom source along a designed directed walk through either sector based approach or hop-based approach. The direction/sector information is stored in the header of the message. In this way, the phantom source can be away from the actual source. Unfortunately, once the message is captured on the random walk path, the adversaries are able to get the direction/sector information stored in the header of the message.

2.2 DISADVANTAGES:

  • More energy consumption
  • Increase the network collision
  • Reduce the packet delivery ratio
  • Cannot provide the full secure for packets


2.3 PROPOSED SYSTEM:

We propose a secure and efficient Cost Aware Secure Routing (CASER) protocol that can address energy balance and routing security concurrently in WSNs. In CASER routing protocol, each sensor node needs to maintain the energy levels of its immediate adjacent neighboring grids in addition to their relative locations. Using this information, each sensor node can create varying filters based on the expected design tradeoff between security and efficiency. The quantitative security analysis demonstrates the proposed algorithm can protect the source location information from the adversaries. In this project, we will focus on two routing strategies for message forwarding: shortest path message forwarding, and secure message forwarding through random walking to create routing path unpredictability for source privacy and jamming prevention.

  • We propose a secure and efficient Cost-Aware SEcure Routing (CASER) protocol for WSNs. In this protocol, cost-aware based routing strategies can be applied to address the message delivery requirements.
  • We devise a quantitative scheme to balance the energy consumption so that both the sensor network lifetime and the total number of messages that can be delivered are maximized under the same energy deployment (ED).
  • We develop theoretical formulas to estimate the number of routing hops in CASER under varying routing energy balance control (EBC) and security requirements.
  • We quantitatively analyze security of the proposed routing algorithm. We provide an optimal non-uniform energy deployment (noED) strategy for the given sensor networks based on the energy consumption ratio.
  • Our theoretical and simulation results both show that under the same total energy deployment, we can increase the lifetime and the number of messages that can be delivered more than four times in the non-uniform energy deployment scenario.

2.4 ADVANTAGES:

  • Reduce the energy consumption
  • Provide the more secure for packet and also routing
  • Increase the message delivery ratio
  • Reduce the time delay

2.5 HARDWARE & SOFTWARE REQUIREMENTS:

2.5.1 HARDWARE REQUIREMENT:

  • Processor          –    Pentium IV
  • Speed              –    1.1 GHz
  • RAM                –    256 MB (min)
  • Hard Disk          –    20 GB
  • Floppy Drive       –    1.44 MB
  • Keyboard           –    Standard Windows Keyboard
  • Mouse              –    Two or Three Button Mouse
  • Monitor            –    SVGA

 

2.5.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Win7
  • Front End                                :           JAVA JDK 1.7
  • Tools                                       :           Netbeans 7
  • Document                               :           MS-Office 2007


CHAPTER 3

3.0 SYSTEM DESIGN:

3.1 ARCHITECTURE DIAGRAM:


3.2 DATAFLOW DIAGRAM:

 

3.3 UML DIAGRAMS:

3.4.1 USECASE DIAGRAM:

 

[Use case diagram between the SOURCE and DESTINATION actors.]

3.4.2 CLASS DIAGRAM:

3.4.3 SEQUENCE DIAGRAM:

 

[Sequence diagram between SOURCE and DESTINATION with the following steps: Routing Table, Connect Routers, Bandwidth Estimation, Connected to Sub Routers, Packet Size, Joint Routing and Medium Access Control.]

3.4.4 ACTIVITY DIAGRAM:


CHAPTER 4

4.0 IMPLEMENTATION:

CASER PROTOCOL:

We now describe the proposed CASER protocol. Under the CASER protocol, routing decisions can vary to emphasize different routing strategies. In this paper, we will focus on two routing strategies for message forwarding: shortest path message forwarding, and secure message forwarding through random walking to create routing path unpredictability for source privacy and jamming prevention. As described before, we are interested in routing schemes that can balance energy consumption.

Assumptions and Energy Balance Routing: In the CASER protocol, we assume that each node maintains its relative location and the remaining energy levels of its immediate adjacent neighboring grids. For node A, denote the set of its immediate adjacent neighboring grids as N_A and the remaining energy of grid i as Er_i, i ∈ N_A. With this information, node A can compute the average remaining energy E_a(A) of the grids in N_A. In the multi-hop routing protocol, node A selects its next hop grid only from the set N_A according to the predetermined routing strategy. To achieve energy balance among all the grids in the sensor network, we carefully monitor and control the energy consumption of the nodes with relatively low energy levels by configuring A to select only the grids with relatively higher remaining energy levels for message forwarding.

For this purpose, we introduce a parameter α ∈ [0, 1] to enforce the degree of energy balance control. We define the candidate set for the next hop node as N_A^α = {i ∈ N_A | Er_i ≥ αE_a(A)} based on the EBC α. It can easily be seen that a larger α corresponds to better EBC. It is also clear that increasing α may also increase the routing length. However, it can effectively control energy consumption from the nodes with energy levels lower than αE_a(A). We summarize the CASER routing protocol in Algorithm 1. It should be pointed out that the EBC parameter α can be configured at the message level, or at the node level, based on the application scenario and the preference.
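For concreteness, the sketch below shows how a node could build the candidate set N_A^α and pick a next hop. The Grid class, its fields, and the parameter beta used here to split traffic between random walking and shortest-path forwarding are illustrative assumptions, not the paper's exact data structures or notation.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative neighbor-grid record: id, remaining energy Er_i, and a
// distance-to-sink field used by the shortest-path strategy.
class Grid {
    int id;
    double remainingEnergy;
    double distanceToSink;
    Grid(int id, double er, double d) { this.id = id; this.remainingEnergy = er; this.distanceToSink = d; }
}

public class CaserNextHop {
    private static final Random RNG = new Random();

    // Candidate set N_A^alpha = { i in N_A | Er_i >= alpha * Ea(A) }
    static List<Grid> candidateSet(List<Grid> neighbors, double alpha) {
        double avg = 0;
        for (Grid g : neighbors) avg += g.remainingEnergy;
        avg /= neighbors.size();                              // Ea(A): average remaining energy of N_A
        List<Grid> candidates = new ArrayList<Grid>();
        for (Grid g : neighbors) {
            if (g.remainingEnergy >= alpha * avg) candidates.add(g);
        }
        return candidates;
    }

    // With probability beta take a random-walk step (for security); otherwise
    // take the deterministic shortest-path step, always within the candidate set.
    static Grid nextHop(List<Grid> neighbors, double alpha, double beta) {
        List<Grid> candidates = candidateSet(neighbors, alpha);
        if (RNG.nextDouble() < beta) {
            return candidates.get(RNG.nextInt(candidates.size()));   // random walking
        }
        Grid best = candidates.get(0);
        for (Grid g : candidates) {
            if (g.distanceToSink < best.distanceToSink) best = g;     // shortest-path forwarding
        }
        return best;
    }
}

Since the grid with the highest remaining energy is always at or above the neighborhood average, the candidate set is never empty for α ≤ 1.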

4.2 ALGORITHM:

4.3 MODULES:

NETWORK SECURITY WSNs:

ROUTING AND JAMMING ATTACKS:

CASER ENERGY DEPLOYMENT:

ROUTING EFFICIENCY AND DELAY:

4.4  MODULES DESCRIPTION:

NETWORK SECURITY WSNs:

ROUTING AND JAMMING ATTACKS:

CASER ENERGY DEPLOYMENT:

ROUTING EFFICIENCY AND DELAY:

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are 

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:     

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

 

5.1.2 TECHNICAL FEASIBILITY   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

5.1.3 SOCIAL FEASIBILITY:  

The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system is working according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.

This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of the system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best programs are worthless if they do not produce the correct outputs.

5.2.1 UNIT TESTING:

  • Description: Test for application window properties.
    Expected result: All the properties of the windows are to be properly aligned and displayed.
  • Description: Test for mouse operations.
    Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and test data correctly and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

5.1.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. Functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

  • Description: Test for all modules.
    Expected result: All peers should communicate in the group.
  • Description: Test for various peers in a distributed network framework as it displays all users available in the group.
    Expected result: The result after execution should give the accurate result.


5.1. 3 NON-FUNCTIONAL TESTING:

 The Non Functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.1.4 LOAD TESTING:

An important tool for implementing system tests is a Load generator. A Load generator is essential for testing quality requirements such as performance and stress. A load can be a real load, that is, the system can be put under test to real usage by having actual telephone users connected to it. They will generate test input data for system test.

  • Description: It is necessary to ascertain that the application behaves correctly under loads when ‘Server busy’ response is received.
    Expected result: Should designate another active node as a Server.


5.1.5 PERFORMANCE TESTING:

Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

  • Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
    Expected result: Should handle large input values, and produce accurate results in the expected time.


5.1.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms a part of software quality control.

  • Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
    Expected result: In case of failure of the server, an alternate server should take over the job.


5.1.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

  • Description: Checking that the user identification is authenticated.
    Expected result: In case of failure, it should not be connected in the framework.
  • Description: Check whether group keys in a tree are shared by all peers.
    Expected result: The peers should know the group key in the same group.


5.1.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.

  • Description: Exercise all logical decisions on their true and false sides.
    Expected result: All the logical decisions must be valid.
  • Description: Execute all loops at their boundaries and within their operational bounds.
    Expected result: All the loops must be finite.
  • Description: Exercise internal data structures to ensure their validity.
    Expected result: All the data structures must be valid.


5.1.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.

  • Description: To check for incorrect or missing functions.
    Expected result: All the functions must be valid.
  • Description: To check for interface errors.
    Expected result: The entire interface must function normally.
  • Description: To check for errors in data structures or external database access.
    Expected result: The database update and retrieval must be done correctly.
  • Description: To check for initialization and termination errors.
    Expected result: All the functions and data structures must be initialized properly and terminated normally.

All the above system testing strategies are carried out, as the development, documentation and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE DESCRIPTION:

 

6.1 JAVA TECHNOLOGY:

Java technology is both a programming language and a platform.

 

The Java Programming Language

 

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

  • Simple
    • Architecture neutral
    • Object oriented
    • Portable
    • Distributed     
    • High performance
    • Interpreted     
    • Multithreaded
    • Robust
    • Dynamic
    • Secure     

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes —the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.
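As a concrete illustration of this compile-then-interpret workflow, consider the classic example below (the file and class names are arbitrary):

// HelloWorld.java -- compiled once to platform-independent bytecode,
// then executed by any Java VM.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}

Compiling it once with javac HelloWorld.java produces HelloWorld.class, the platform-independent bytecode, and java HelloWorld then runs that same class file on any platform that provides a Java VM.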

6.2 THE JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

  • The Java Virtual Machine (Java VM)
  • The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that after you compile it, the compiled code runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

6.3 WHAT CAN JAVA TECHNOLOGY DO?

The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.

A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.
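As a minimal sketch of the idea (the class name and output are illustrative, and a servlet container such as Tomcat plus the javax.servlet API are assumed to be available), a servlet that answers an HTTP GET request can be written as follows:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A minimal servlet: it runs inside a Java web server rather than a browser.
public class HelloServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body><h1>Hello from a servlet</h1></body></html>");
    }
}

In a real deployment the servlet would be mapped to a URL in the container's deployment descriptor (web.xml) or by an annotation, depending on the servlet version in use.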

How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:

  • The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
  • Applets: The set of conventions used by applets.
  • Networking: URLs, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
  • Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
  • Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
  • Software components: Known as JavaBeans™, these can plug into existing component architectures.
  • Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
  • Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational databases.

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.

 

6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?

We can’t promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and to require less effort than other languages. We believe that Java technology will help you do the following:

  • Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
  • Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
  • Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
  • Develop programs more quickly: Development may be as much as twice as fast as writing the same program in C++. Why? You write fewer lines of code, and Java is a simpler programming language than C++.
  • Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure Java™ Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
  • Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
  • Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.

 

6.5 ODBC:

 

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program, and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.
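For illustration only, the sketch below connects from Java to an ODBC data source through the legacy JDBC-ODBC bridge that shipped with JDK 1.7 and earlier; the data source name "SalesFigures" and the Orders table are hypothetical, and the same code works regardless of whether the DSN points to SQL Server, Access, or another driver:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OdbcDsnDemo {
    public static void main(String[] args) throws Exception {
        // Load the legacy JDBC-ODBC bridge driver (bundled with JDK 1.7 and earlier).
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");

        // "SalesFigures" is a hypothetical data source name created in the ODBC
        // Administrator; the underlying database is resolved by ODBC, not by this code.
        Connection con = DriverManager.getConnection("jdbc:odbc:SalesFigures");

        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM Orders"); // hypothetical table
        if (rs.next()) {
            System.out.println("Rows in Orders: " + rs.getInt(1));
        }
        rs.close();
        st.close();
        con.close();
    }
}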

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

6.6 JDBC:

In an effort to set an independent database standard API for Java; Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.

 

6.7 JDBC Goals:

Few software packages are designed without goals in mind. JDBC is no exception: its many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows for more error checking to be done at compile time; also, fewer errors appear at runtime.

Keep the common cases simple

Because more often than not, the usual SQL calls used by the programmer are simple SELECT’s, INSERT’s, DELETE’s and UPDATE’s, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
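To make the "common cases" point concrete, the following hedged sketch issues a simple parameterized INSERT and SELECT through JDBC; the connection URL and the Users table are illustrative assumptions, not part of the JDBC specification:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SimpleJdbcCases {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL; any JDBC driver/URL could be substituted here.
        Connection con = DriverManager.getConnection("jdbc:odbc:ProjectDB");

        // Common case 1: a simple parameterized INSERT.
        PreparedStatement insert =
                con.prepareStatement("INSERT INTO Users(name, email) VALUES (?, ?)");
        insert.setString(1, "Alice");
        insert.setString(2, "alice@example.com");
        insert.executeUpdate();
        insert.close();

        // Common case 2: a simple parameterized SELECT.
        PreparedStatement query =
                con.prepareStatement("SELECT name, email FROM Users WHERE name = ?");
        query.setString(1, "Alice");
        ResultSet rs = query.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString("name") + " <" + rs.getString("email") + ">");
        }
        rs.close();
        query.close();
        con.close();
    }
}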

Finally, we decided to proceed with the implementation using Java networking, and for dynamically updating the cache table we use an MS Access database.

6.8 NETWORKING TCP/IP STACK:

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams:

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP:

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to give a client/server model – see later.

TCP:

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32 bit integer which gives the IP address.

Network address:

Class A uses 8 bits for the network address with 24 bits left over for other addressing. Class B uses 16 bit network addressing. Class C uses 24 bit network addressing and class D uses all 32.

Subnet address:

Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.

Host address:

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total address:

The 32 bit address is usually written as 4 integers separated by dots.
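As a small illustration of this notation, the sketch below converts a 32-bit address held in a Java int into the familiar dotted form; it is plain bit arithmetic, not tied to any particular networking API:

public class DottedQuad {
    // Convert a 32-bit IPv4 address into the usual "a.b.c.d" notation.
    static String toDotted(int address) {
        return ((address >>> 24) & 0xFF) + "." +
               ((address >>> 16) & 0xFF) + "." +
               ((address >>> 8)  & 0xFF) + "." +
               ( address         & 0xFF);
    }

    public static void main(String[] args) {
        // 0xC0A80001 corresponds to 192.168.0.1
        System.out.println(toDotted(0xC0A80001));
    }
}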

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a message to a server, you send it to the port for that service of the host that it is running on. This is not location transparency! Certain of these ports are “well known”.

Sockets:

A socket is a data structure maintained by the system to handle network connections. A socket is created using the socket call. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);

Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
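Because the implementation of this project uses Java networking rather than the C interface shown above, a hedged Java counterpart of the two communicating endpoints can be sketched as follows (the port number and the localhost host are placeholder assumptions):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoPair {
    public static void main(String[] args) throws Exception {
        final int port = 5000; // placeholder port number

        // Server end: accept one connection and echo a single line back.
        Thread server = new Thread(new Runnable() {
            public void run() {
                try (ServerSocket listener = new ServerSocket(port)) {
                    Socket s = listener.accept();
                    BufferedReader in =
                            new BufferedReader(new InputStreamReader(s.getInputStream()));
                    PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                    out.println("echo: " + in.readLine());
                    s.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        server.start();
        Thread.sleep(200); // give the server a moment to start listening (illustrative only)

        // Client end: connect, send a line, print the reply.
        Socket client = new Socket("localhost", port);
        PrintWriter out = new PrintWriter(client.getOutputStream(), true);
        BufferedReader in =
                new BufferedReader(new InputStreamReader(client.getInputStream()));
        out.println("hello");
        System.out.println(in.readLine());
        client.close();
        server.join();
    }
}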

6.9 JFREECHART:

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:

  • A consistent and well-documented API, supporting a wide range of chart types;
  • A flexible design that is easy to extend, and targets both server-side and client-side applications;
  • Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG).

JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.
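A minimal usage sketch, assuming the JFreeChart and JCommon jars (1.0.x API) are on the classpath: it builds a small pie dataset, creates a chart, and writes it to a PNG file. The data values and output file name are illustrative only.

import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

public class PieChartDemo {
    public static void main(String[] args) throws Exception {
        // Illustrative data only.
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Java", 60.0);
        dataset.setValue("Other", 40.0);

        // Create the chart (title, dataset, legend, tooltips, URLs).
        JFreeChart chart = ChartFactory.createPieChart(
                "Sample Pie Chart", dataset, true, true, false);

        // Render the chart to a 500x300 PNG image file.
        ChartUtilities.saveChartAsPNG(new File("sample-pie.png"), chart, 500, 300);
    }
}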

 

6.9.1. Map Visualizations:

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: Sourcing freely redistributable vector outlines for the countries of the world, states/provinces in particular countries (USA in particular, but also other areas);

Creating an appropriate dataset interface (plus a default implementation) and a renderer, and integrating these with the existing XYPlot class in JFreeChart; testing, documenting, testing some more, documenting some more.

6.9.2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.

6.9.3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.

 

6.9.4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

CHAPTER 7

7.0 APPENDIX

7.1 SAMPLE SCREEN SHOTS:

7.2 SAMPLE SOURCE CODE:

CHAPTER 8

8.0 CONCLUSION AND FUTURE WORK:

In this paper, we presented a secure and efficient Cost-Aware SEcure Routing (CASER) protocol for WSNs to balance the energy consumption and increase network lifetime. CASER has the flexibility to support multiple routing strategies in message forwarding to extend the lifetime while increasing routing security. Both theoretical analysis and simulation results show that CASER has an excellent routing performance in terms of energy balance and routing path distribution for routing path security. We also proposed a non-uniform energy deployment scheme to maximize the sensor network lifetime. Our analysis and simulation results show that we can increase the lifetime and the number of messages that can be delivered under the non-uniform energy deployment by more than four times.

Content-Based Image Retrieval Using Error Diffusion Block Truncation Coding Features

This paper presents a new approach to index color images using the features extracted from error diffusion block truncation coding (EDBTC). The EDBTC produces two color quantizers and bitmap images, which are further processed using vector quantization (VQ) to generate the image feature descriptor. Herein two features are introduced, namely, the color histogram feature (CHF) and the bit pattern histogram feature (BHF), to measure the similarity between a query image and the target image in the database.

The CHF and BHF are computed from the VQ-indexed color quantizer and the VQ-indexed bitmap image, respectively. The distance computed from the CHF and BHF can be utilized to measure the similarity between two images. As documented in the experimental results, the proposed indexing method outperforms the former block-truncation-coding-based image indexing and other existing image retrieval schemes on natural and textural data sets. Thus, the proposed EDBTC not only provides good image compression capability but also offers an effective way to index images for a content-based image retrieval system.

1.2 INTRODUCTION

Many former schemes have been developed to improve the retrieval accuracy in content-based image retrieval (CBIR) systems. One type of scheme employs image features derived from the compressed data stream, as opposed to the classical approach that extracts an image descriptor from the original image; such a retrieval scheme directly generates image features from the compressed stream without first performing the decoding process. This type of retrieval aims to reduce the computation time for feature extraction/generation, since most multimedia images are already converted to the compressed domain before they are recorded in any storage device. In this approach, image features are directly constructed from the typical block truncation coding (BTC) or halftoning-based BTC compressed data stream without performing the decoding procedure.

These image retrieval schemes involve two phases, indexing and searching, to retrieve a set of similar images from the database.

The indexing phase extracts the image features from all of the images in the database which is later stored in database as feature vector. In the searching phase, the retrieval system derives the image features from an image submitted by a user (as query image), which are later utilized for performing similarity matching on the feature vectors stored in the database. The image retrieval system finally returns a set of images to the user with a specific similarity criterion, such as color similarity and texture similarity. The concept of the BTC is to look for a simple set of representative vectors to replace the original images. Specifically, the BTC compresses an image into a new domain by dividing the original image into multiple nonoverlapped image blocks, and each block is then represented with two extreme quantizers (i.e., high and low mean values) and bitmap image. Two subimages constructed by the two quantizers and the corresponding bitmap image are produced at the end of BTC encoding stage, which are later transmitted into the decoder module through the transmitter. To generate the bitmap image, the BTC scheme performs thresholding operation using the mean value of each image block such that a pixel value greater than the mean value is regarded as 1 (white pixel) and vice versa.
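The per-block BTC encoding step described above (mean thresholding plus two extreme quantizers) can be sketched as follows for a single grayscale block; this is an illustrative simplification, not the authors' implementation:

public class BtcBlock {
    // Encode one grayscale block: a bitmap (mean-thresholded) plus high/low means.
    static void encode(int[][] block) {
        int rows = block.length, cols = block[0].length;

        // 1. The block mean is used as the threshold.
        double sum = 0;
        for (int[] row : block) for (int p : row) sum += p;
        double mean = sum / (rows * cols);

        // 2. Bitmap: 1 (white) if the pixel exceeds the mean, 0 otherwise.
        int[][] bitmap = new int[rows][cols];
        double hiSum = 0, loSum = 0;
        int hiCount = 0, loCount = 0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if (block[r][c] > mean) { bitmap[r][c] = 1; hiSum += block[r][c]; hiCount++; }
                else                    { bitmap[r][c] = 0; loSum += block[r][c]; loCount++; }
            }
        }

        // 3. Two extreme quantizers (high and low means) represent the block.
        double high = hiCount > 0 ? hiSum / hiCount : mean;
        double low  = loCount > 0 ? loSum / loCount : mean;
        System.out.println("high=" + high + " low=" + low);
    }
}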

The traditional BTC method does not improve the image quality or compression ratio compared with JPEG or JPEG 2000. However, BTC schemes achieve much lower computational complexity than these techniques. Some attempts have been made to improve the BTC reconstructed image quality and compression ratio, and also to reduce the computation time. Even though the BTC scheme has low computational complexity, it often suffers from blocking effect and false contour problems, making it less satisfactory for human perception. The halftoning-based BTC, namely, error diffusion BTC (EDBTC), has been proposed to overcome these two disadvantages of the BTC. Similar to the BTC scheme, EDBTC looks for a new representation (i.e., two quantizers and a bitmap image) to reduce the storage requirement. The EDBTC bitmap image is constructed by considering the quantization error, which is diffused to the nearby pixels to compensate for the overall brightness; thus, this error diffusion strategy effectively removes the annoying blocking effect and false contour while maintaining low computational complexity.

The low-pass nature of the human visual system is exploited to assess the reconstructed image quality, in which a continuous image and its halftone version are perceived similarly by human vision when the two images are viewed from a distance. The EDBTC method divides a given image into multiple non-overlapped image blocks, and each block is processed independently to obtain two extreme quantizers. This unique feature of independent processing enables parallelism. In the bitmap image generation step, the pixel values in each block are thresholded by the fixed average value of the block, employing an error kernel to diffuse the quantization error to the neighboring pixels during the encoding stage. A new image retrieval system has been proposed for color images.
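As a rough sketch of bitmap generation with error diffusion, the code below thresholds a block at its mean and diffuses the quantization error with Floyd-Steinberg weights; the choice of kernel and the min/max quantizer values are assumptions for illustration, not necessarily the exact kernel used in EDBTC:

public class ErrorDiffusionBitmap {
    // Threshold a block at its fixed mean value, diffusing the quantization
    // error to not-yet-processed neighbours with Floyd-Steinberg weights.
    static int[][] bitmap(double[][] block, double mean, double minVal, double maxVal) {
        int rows = block.length, cols = block[0].length;
        double[][] work = new double[rows][cols];
        for (int r = 0; r < rows; r++) work[r] = block[r].clone();

        int[][] bm = new int[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                double old = work[r][c];
                // Quantize to the low or high extreme value of the block.
                double quantized = old > mean ? maxVal : minVal;
                bm[r][c] = old > mean ? 1 : 0;
                double err = old - quantized;

                // Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16.
                if (c + 1 < cols)                 work[r][c + 1]     += err * 7 / 16;
                if (r + 1 < rows && c - 1 >= 0)   work[r + 1][c - 1] += err * 3 / 16;
                if (r + 1 < rows)                 work[r + 1][c]     += err * 5 / 16;
                if (r + 1 < rows && c + 1 < cols) work[r + 1][c + 1] += err * 1 / 16;
            }
        }
        return bm;
    }
}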

Three feature descriptors, namely, structure element correlation (SEC), gradient value correlation (GVC), and gradient direction correlation (GDC), are utilized to measure the similarity between the query and the target images in the database. This indexing scheme provides promising results on large databases and outperforms the former existing approaches. Another reported method compresses a grayscale image by combining the effectiveness of fractal encoding, the discrete cosine transform (DCT), and the standard deviation of an image block; an auxiliary encoding algorithm has also been proposed to improve the image quality and to reduce the blocking effect, and this encoding system is reported to achieve a good coding gain as well as promising image quality with very efficient computation. A further method for tamper detection and recovery utilizes the DCT coefficients, a fractal coding scheme, and a matched-block technique; this scheme yields a higher tampering detection rate and achieves good restored image quality. Another approach combines fractal image compression and the wavelet transform to reduce the computation time in the image encoding stage.

This method produces good image quality with a fast encoding speed. Fast and efficient image coding with no-search fractal coding strategies has also been proposed; these methods employ a modified gray-level transform to improve the successful matching probability between the range and domain blocks in fractal coding. Two gray-level transforms on a quadtree partition are used to achieve fast image coding and to improve the decoded image quality. Another method exploits a fitting-plane method and a modified gray-level transform to speed up the encoding process. A fractal image coding scheme has been presented that accelerates the image encoding stage, reduces the compression ratio, and simultaneously improves the reconstructed image quality. A fast fractal coding which utilizes a matching error threshold has also been proposed; this method first reduces the codebook capacity and takes advantage of the matching error threshold to shorten the encoding runtime, and it can achieve a similar or better decoded image with a faster compression process compared with the conventional fractal encoding system with a full search strategy.

The contributions can be summarized as follows: 1) extending the EDBTC image compression technique to color images; 2) proposing two feature descriptors, namely, the color histogram feature (CHF) and the bit pattern histogram feature (BHF), which can be directly derived from the EDBTC compressed data stream without performing the decoding process; and 3) presenting a new low-complexity joint CBIR and color image compression system by exploiting the superiority of the EDBTC scheme. The rest of this paper is organized as follows. A brief introduction to EDBTC is provided in Section II. Section III presents the proposed EDBTC image retrieval, including the image feature generation and accuracy computation. Extensive experimental results are reported in Section IV. Finally, the conclusion is drawn at the end of this paper.

1.3 LITERATURE SURVEY

IMAGE RETRIEVAL BASED ON TEXTURE AND COLOR METHOD IN BTC-VQ COMPRESSED DOMAIN

AUTHOR: M. R. Gahroudi and M. R. Sarshar,

PUBLISH: Proc. Int. Symp. Signal Process. Appl., Feb. 2007, pp. 1–4.

EXPLANATION:

In this article a new method for the retrieval of images compressed by BTC is provided. In this approach, classified patterns derived from the BTC method are used as retrieval features. The method has been examined on a database consisting of 9983 images with different contents, and its results have been compared with similar methods. The goal is to maintain the visual and natural features of the image during compression; the efficiency of image compression depends on two parameters: (1) data rate and (2) distortion. If the retrieved image is completely identical to the original image, the technique is called lossless; otherwise it is called lossy. One of the most widely used methods is to divide the image into non-overlapping blocks; the deficiency of this method is that the block boundaries may be visible at retrieval time. The BTC-VQ method compresses quickly, and it has been shown in the literature to be well suited for image retrieval, because in addition to using inter-block information it also stores the important information of each block in compressed form. In this article, BTC-VQ together with a newly presented method is used for compression, based on the Color Histogram and the Block Pattern Histogram. Simultaneous utilization of the Color Histogram and the BPH provides suitable information based on color and edges, which increases system speed and efficiency. Using the color histogram narrows the set of candidate images and allows the block pattern histogram to find matching images more quickly. One of the defects of BTC-VQ is its low compression ratio in comparison with other compression methods such as JPEG and VQ.

COLOUR IMAGE RETRIEVAL USING PATTERN CO-OCCURRENCE MATRICES BASED ON BTC AND VQ

AUTHOR: F.-X. Yu, H. Luo, and Z.-M. Lu,

PUBLISH: Electron. Lett., vol. 47, no. 2, pp. 100–101, Jan. 2011.

EXPLANATION:

Proposed is an effective feature for colour image retrieval based on block truncation coding (BTC) and vector quantisation (VQ). Each input colour image is decomposed into Y, Cb and Cr components. BTC is performed on the 4×4 Y blocks, obtaining a mean pair sequence and a bitplane sequence, and then they are quantised with the contrast pattern codebook and visual pattern codebook to obtain the contrast and visual pattern co-occurrence matrix. VQ is performed on the 4×4 Cb blocks and Cr blocks with the Cb codebook and Cr codebook, respectively, to obtain the colour pattern co-occurrence matrix. Retrieval simulation results show that, compared with two existing BTC-based features, the proposed feature can greatly improve retrieval performance.

EFFICIENT CONTENT BASED IMAGE RETRIEVAL USING MULTIPLE SUPPORT VECTOR MACHINES ENSEMBLE

AUTHOR: E. Yildizer, A. M. Balci, M. Hassan, and R. Alhajj,

PUBLISH: Expert Syst. Appl., vol. 39, no. 3, pp. 2385–2396, 2012.

EXPLANATION:

With the evolution of digital technology, there has been a significant increase in the number of images stored in electronic format. These range from personal collections to medical and scientific images that are currently collected in large databases. Many users and organizations now can acquire large numbers of images and it has been very important to retrieve relevant multimedia resources and to effectively locate matching images in the large databases. In this context, content-based image retrieval systems (CBIR) have become very popular for browsing, searching and retrieving images from a large database of digital images with minimum human intervention. The research community is competing for more efficient and effective methods as CBIR systems may be heavily employed in serving time critical applications in scientific and medical domains. This paper proposes an extremely fast CBIR system which uses Multiple Support Vector Machines Ensemble. We have used Daubechies wavelet transformation for extracting the feature vectors of images. The reported test results are very promising. Using data mining techniques not only improved the efficiency of the CBIR systems, but they also improved the accuracy of the overall process.

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

The existing method for the retrieval of images compressed by BTC uses classified patterns derived from the BTC method as retrieval features. This method has been examined on a database consisting of 9983 images with different contents, and its results have been compared with similar methods. The goal is to maintain the visual and natural features of the image during compression; the efficiency of image compression depends on two parameters: (1) data rate and (2) distortion. If the retrieved image is completely identical to the original image, the technique is called lossless; otherwise it is called lossy. One of the most widely used methods is to divide the image into non-overlapping blocks. The deficiency of this method is that the block boundaries may be visible at retrieval time.

The BTC-VQ method compresses quickly, and it has been shown in the literature to be well suited for image retrieval, because in addition to using inter-block information it also stores the important information of each block in compressed form. In that work, BTC-VQ together with a newly presented method is used for compression, based on the Color Histogram and the Block Pattern Histogram.

Simultaneous utilization of the Color Histogram and the BPH provides suitable information based on color and edges, which increases system speed and efficiency. Using the color histogram narrows the set of candidate images and allows the block pattern histogram to find matching images more quickly. One of the defects of BTC-VQ is its low compression ratio in comparison with other compression methods such as JPEG and VQ.

2.1.1 DISADVANTAGES:

  • The BTC scheme performs a thresholding operation using the mean value of each image block, such that a pixel value greater than the mean value is regarded as 1 (a white pixel) and vice versa.
  • The traditional BTC method does not improve the image quality or compression ratio compared with JPEG or JPEG 2000, although it requires much lower computational complexity than these techniques.
  • Although the BTC scheme has low computational complexity, it often suffers from blocking effect and false contour problems, making it less satisfactory for human perception.


2.2 PROPOSED SYSTEM:

Many related applications have been proposed in the literature, triggered by the success of EDBTC, such as image watermarking, inverse halftoning, data hiding, image security, and halftone classification. The EDBTC scheme performs well in those areas with promising results, as reported in [3]–[10], since it provides better reconstructed image quality than that of the BTC scheme. In this paper, the concept of EDBTC compression is applied to the CBIR domain, in which the image feature descriptor is constructed from the EDBTC compressed data stream.

In this scheme, the compressed data stream that is already stored in the database does not need to be decoded to obtain the image feature descriptor. The descriptor is directly derived from the EDBTC color quantizers and bitmap image in the compressed domain by involving vector quantization (VQ) for the indexing. The similarity criterion between the query and target images is simply measured using the EDBTC feature descriptor. This new CBIR system with the EDBTC feature can also be extended for video indexing and searching, in which the video is viewed and processed as a sequence of images.
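A hedged sketch of the similarity measurement step: assuming the CHF and BHF have already been computed as normalized histograms, one reasonable choice (not necessarily the exact formula used in the paper) is a weighted L1 distance between the query and target descriptors:

public class HistogramDistance {
    // A smaller distance means more similar images. The weights balance the color
    // histogram feature (CHF) against the bit pattern histogram feature (BHF).
    static double distance(double[] chfQuery, double[] chfTarget,
                           double[] bhfQuery, double[] bhfTarget,
                           double wColor, double wBitPattern) {
        return wColor * l1(chfQuery, chfTarget) + wBitPattern * l1(bhfQuery, bhfTarget);
    }

    // L1 (city-block) distance between two histograms of equal length.
    static double l1(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            d += Math.abs(a[i] - b[i]);
        }
        return d;
    }
}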

The EDBTC feature descriptor can also be adopted as an additional feature for object tracking, background subtraction, image annotation, image classification, and segmentation. The EDBTC feature offers competitive performance compared with that of the local binary pattern (LBP)-based feature, and thus the EDBTC feature can substitute for the LBP-based feature in image processing and computer vision applications with even faster processing efficiency.

The contributions can be summarized as follows:

1) extending the EDBTC image compression technique to color images; 2) proposing two feature descriptors, namely, the color histogram feature (CHF) and the bit pattern histogram feature (BHF), which can be directly derived from the EDBTC compressed data stream without performing the decoding process; and 3) presenting a new low-complexity joint CBIR and color image compression system by exploiting the superiority of the EDBTC scheme.

2.2.1 ADVANTAGES:

  • This method produces good image quality with a fast encoding speed. Fast and efficient image coding with no-search fractal coding strategies has also been proposed; both methods employ a modified gray-level transform to improve the successful matching probability between the range and domain blocks in fractal coding. Two gray-level transforms on a quadtree partition are used to achieve fast image coding and to improve the decoded image quality.
  • Another method exploits a fitting-plane method and a modified gray-level transform to speed up the encoding process. The fractal image coding presented in [74] accelerates the image encoding stage, reduces the compression ratio, and simultaneously improves the reconstructed image quality.
  • This method first reduces the codebook capacity and takes advantage of a matching error threshold to shorten the encoding runtime. It can achieve a similar or better decoded image with a faster compression process compared with the conventional fractal encoding system with a full search strategy.

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor            –    Pentium IV
  • Speed                –    1.1 GHz
  • RAM                  –    256 MB (min)
  • Hard Disk            –    20 GB
  • Floppy Drive         –    1.44 MB
  • Keyboard             –    Standard Windows Keyboard
  • Mouse                –    Two or Three Button Mouse
  • Monitor              –    SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Win7
  • Front End                                :           JAVA JDK 1.7
  • Back End                                :           MS-ACCESS
  • Script                                       :           JSP Script
  • Document                               :           MS-Office 2007

CHAPTER 3

3.0 SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

  • The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components: the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
  • A DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • A DFD may be used to represent a system at any level of abstraction and may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

DATA SOURCE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures, or devices that produce data; the physical component is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.


3.1 ARCHITECTURE DIAGRAM

3.2 DATAFLOW DIAGRAM

UML DIAGRAMS:

3.3 USE CASE DIAGRAM:

3.4 CLASS DIAGRAM:

3.5 SEQUENCE DIAGRAM:

3.6 ACTIVITY DIAGRAM:

CHAPTER 4

4.0 IMPLEMENTATION:

EDBTC SCHEME:

4.1 ALGORITHM

CHF AND BHF (CBIR):

4.2 MODULES:

CBIR PREPROCESSING:

EDBTC FOR COLOR IMAGES:

EDBTC IMAGE INDEXING:

IMAGE RETRIEVAL WITH EDBTC:

4.3 MODULE DESCRIPTION:

CBIR PREPROCESSING:

EDBTC FOR COLOR IMAGES:

EDBTC IMAGE INDEXING:

IMAGE RETRIEVAL WITH EDBTC:

CHAPTER 8

8.1 CONCLUSION AND FUTURE WORK:

In this paper, a feature descriptor for color image indexing is constructed from the EDBTC encoded data (two representative quantizers and the corresponding bitmap image) by incorporating VQ, exploiting the simplicity of the EDBTC method. The CHF effectively represents the color distribution within an image, while the BHF characterizes the image edge and texture. The experimental results demonstrate that the proposed method is superior not only to the former BTC-based image indexing schemes but also to other existing CBIR methods in the literature. To achieve higher retrieval accuracy, additional features can be added to the EDBTC indexing scheme using other color spaces such as YCbCr, hue-saturation-intensity, and Lab. An extension of the EDBTC image retrieval system can index video by considering the video as a sequence of images. This strategy should consider the temporal information of the video sequence to meet user requirements in the CBIR context.

Authentication Handover and Privacy Protection in 5G HetNets Using Software-Defined Networking

INTRODUCTIONOver the past few years, anywhere, anytime wirelessconnectivity has gradually become a realityand has resulted in remarkably increased mobiletraffic. Mobile data traffic from prevailing smartterminals, multimedia-intensive social applications,video streaming, and cloud services is predictedto grow at a compound annual growthrate of 61 percent before 2018, and is expectedto outgrow the capabilities of the current fourthgeneration (4G) and Long Term Evolution(LTE) infrastructure by 2020 [1]. This explosivegrowth of data traffic and shortage of spectrumhave necessitated intensive research and developmentefforts on 5G mobile networks. However,the relatively narrow usable frequency bandsbetween several hundred megahertz and a fewgigahertz have been almost fully occupied by avariety of licensed or unlicensed networks,including 2G, 3G, LTE, LTE-Advanced (LTEA),and Wi-Fi. Although dynamic spectrum allocationcould provide some improvement, theonly way to find enough new bandwidth for 5Gis to explore idle spectrum in the millimeterwaverange of 30~300 GHz [2].NETWORK ARCHITECTURE OF 5GDue to the poor signal propagation characteristicsat extremely high frequencies, future 5G networkswill be heterogeneous with small celldeployment and overlay coverage, as shown inFig. 1. Cellular networks operating at low frequencies(e.g., 2G, 3G, LTE, LTE-A) could providewide area coverage, mobility support, andcontrol, while small cells operating at higher frequenciesguarantee high data rates in the area ofspectral and energy efficiency.This heterogeneous paradigm with multi-tiercoverage in 5G not only follows the natural evolutionfrom existing cellular technologies, butalso satisfies the requirements of increased datatraffic, with small cells providing very highthroughput and underlying macrocells providingextensive coverage. Therefore, network densificationusing low-power small cells is widely consideredto be a critical element toward low-costhigh-capacity 5G communications.SECURITY CHALLENGES IN 5GAlong with the advantages of 5G architecture inFig. 1, there also come several major technicalchallenges. The massive deployment of smallcells poses potential challenges in network management,including interference alignment,extensive backhauling, and inconsistent securitymechanisms over heterogeneous networks (Het-Nets). Network management and service provisioningare challenging in this multi-tier modeldue to the increased number of base stationsand complexity of network architecture. Therefore,new technologies are needed to provideintelligent control over HetNets for consistentand effective resource allocation as well as securitymanagement.Moreover, 5G users may leave one cell andjoin another more frequently with reduced cellsize, which could introduce excessive handoverinducedlatency in 5G. Future 5G applicationslike interactive gaming and tele-operationsrequire 5G latency to be an order of magnitudesmaller than 4G, with 1 ms target round-triptime [2]. However, due to smaller cell deployment,users and different access points (APs) in5G need to perform more frequent mutualauthentications than in 4G to prevent imperson-ABSTRACTRecently, densified small cell deploymentwith overlay coverage through coexisting heterogeneousnetworks has emerged as a viable solutionfor 5G mobile networks. However, thismulti-tier architecture along with stringent latencyrequirements in 5G brings new challenges insecurity provisioning due to the potential frequenthandovers and authentications in 5G smallcells and HetNets. 
In this article, we reviewrelated studies and introduce SDN into 5G as aplatform to enable efficient authentication hand -over and privacy protection. Our objective is tosimplify authentication handover by global managementof 5G HetNets through sharing of userdependentsecurity context information amongrelated access points. We demonstrate thatSDN-enabled security solutions are highly efficientthrough its centralized control capability,which is essential for delay-constrained 5G communications.SECURITY AND PRIVACY IN EMERGING NETWORKSXiaoyu Duan and Xianbin Communications Magazine • April 2015 29ation and man-in-the-middle (MitM) attacks. Onthe other hand, the power and resource constraintsof small cell APs require low complexityand highly efficient handover authentication procedures.Therefore, faster, efficient, and robusthandover authentication and privacy protectionschemes need to be developed for complex 5GHetNets.THE SCOPE OF THIS ARTICLEIn this article, we first introduce the 5G backgroundand identify the challenges in 5G Het-Nets, especially in security management. Existingrelated studies are overviewed, providing a summaryof the previous security solutions and stateof-the-art related technologies. Based on oursurvey and analysis, we believe that new solutionsmeeting the latency and complexity requirementsof 5G HetNet communications are yet tobe developed.Based on this observation, we introduce anew 5G network structure enabled by softwaredefinednetworking (SDN) to bring intelligenceand programmability into 5G networks for efficientsecurity management. With SDN, the controllogic is removed from the underlyinginfrastructures to a controller in the controllayer [3] so that software can be implemented onthe central SDN controller to provide consistentand efficient management over the whole 5GHetNet. With this paradigm, we propose anSDN-enabled user-specific secure context informationtransfer for efficient authenticationhand over and privacy protection in 5G to achieveseamless authentication during frequent hand -overs, while at the same time meeting the privacyand latency requirements effectively.STATE OF THE ART INHANDOVER AUTHENTICATION ANDCHALLENGES IN 5GRELATED WORK ON HANDOVERAUTHENTICATION AND 5G CHALLENGESTo support increased data traffic, 5G networksneed to have high capacity and efficient securityprovisioning mechanisms. Densification of heterogeneousnetworks and massive deployment ofsmall base stations become the natural choicefor 5G. On the other hand, many applicationssupported by 5G, such as mobile banking andcloud-based social applications, require higherdata confidentiality and reliable authenticationagainst malicious attacks.The common practice for secure communicationsin 3G and later wireless networks is basedon admission control and cryptographicexchange. Figure 2 gives an overview of thehand over authentication procedures between differentnetworks and within one network [9]. Theinvolved network components here are the userequipment (UE), access points (APs) or basestations (BSs), and an authentication server. Itcan be seen from Fig. 2 that mutual authenticationduring handover between the user and anew network (i.e., procedure 1) is realized by thepairing of specific hashing output. Each time theinvolved vector includes RAND, a random numberknown by the server, AUTH, an authenticationtoken sent by the server, a pairwise key, andso on. 
For mobility within the same network(i.e., procedure 2), the current serving AP willinform the target AP of the possible handover sothat the latter can retrieve the user authenticationand key context from the server. In the following,we analyze existing handoverauthentication procedures and identify the challengesin 5G HetNets based on Fig. 2.To enable handover between different wirelessnetworks (i.e., procedure 1 in Fig. 2), variousauthentication servers and protocols areinvolved due to the closed nature and structureof each network in a HetNet, rendering frequentestablishments of trust relationships and authenticationsduring mobility, especially in a 5Gsmall cell scenario [2]. The Third GenerationPartnership Project (3GPP) has provided specifickey hierarchy and handover message flows forvarious mobility scenarios [10]. However, thespecific key designed for handover and differenthandover procedures for various scenarios willincrease handover complexity when applied to5G HetNets. As the authentication server isoften located remotely, the delay due to frequentenquiries between small cell APs and theauthentication server for user verification maybe up to hundreds of milliseconds [5], which isunacceptable for 5G communications. Theauthors of [6, 7] have proposed simplified hand -over authentication schemes involving directauthentication between UE and APs based onpublic cryptography. These schemes realizemutual authentication and key agreements withnew networks through a three-way handshakewithout contacting any third party, like anauthentication, authorization, and accounting(AAA) server. Although the handover authenticationprocedure is simplified, computation costand delay are increased due to the overhead forexchanging more cryptographic messagesthrough a wireless interface [5]. For the samereason, carrying a digital signature is secure butnot efficient for dynamic 5G wireless communications.For handover within the same network (i.e.,procedure 2 in Fig. 2), existing security mechanismsutilize complex context transfer, and it hasFigure 1. 5G heterogeneous network structure with densified small cellsand overlay coverage.Cellular 2G, 3G, LTE, LTE-AHeterogeneousoverlay coverageSmall cells (high frequencies)Macrocell (low frequencies)FemtocellMicrocell(e.g. Wi-Fi)Picocell30 IEEE Communications Magazine • April 2015been found that most of the handover latency isdue to the scanning time for identifying the targetAP and round-trip time to the authenticationserver. Related work in [8] proposed a userassistedauthentication context transfer scheme,by which the current AP transfers a signedauthentication certificate as a security context tothe user, and then to the target AP through theuser. The UE is actively involved in handoverauthentication with its existing connections withthe current and next target APs to reduce latency.However, mutual trust between APs isassumed in these solutions, which could be infeasiblefor 5G HetNets due to the lack of directinterfaces between different networks. In addition,the transferred security context, which isjust a combination of identity and signature, maynot be secure enough to prevent 5G wirelesscommunication from potential attacks.In light of these challenges, robust and efficienthandover authentication and secure contextinformation transfer is crucial in securing5G networks. The unique link characteristicsexperienced by each UE can be explored as asecurity context to accelerate authenticationhandover. 
Such user-specific attributes includephysical layer attributes (clock skew, signalstrength, channel state information), location,and even moving speed and direction [11], someof which have already been reported to APs forthe purpose of resource allocation and seamlesshandover. It is believed that by taking advantageof these unique attribute combinations as noncryptographicsolutions, authentication can befaster, more robust, and less complex comparedto widely used cryptographic exchange mechanisms[12].SOFTWARE-DEFINED-NETWORKING-ENABLED5G NETWORKSSoftware-defined networking [3] is considered asa radical new network structure to centralizenetwork management, and enable innovationthrough network programmability in meeting theneeds of emerging applications. One main featureof SDN is decoupling the control plane anddata plane by taking control logic from theunderlying switches and routers to the centralizedSDN controller in the control plane.When introducing SDN into 5G networks,the SDN controller will have global control overthe network, while SDN switches will simply followdata forwarding instructions from the controller.Applications are implemented on top ofthe controller to define the behavior of theswitches and APs, thus creating a reconfigurable5G HetNet, as shown in Fig. 3. The separationof data forwarding switches and the controlplane enables easier implementation of new protocoland functions, consistent network policy, aswell as straightforward network management.In supporting SDN-enabled 5G, appropriateSDN protocols, such as Openflow and SimpleNetwork Management Protocol (SNMP), will beadded to base stations, access points, and wirelessswitches through an external standardizedapplication programming interface (API) [4].Figure 2. Authentication processes of handover procedure 1, between different networks, and handoverprocedure 2, within the same network.Target APHandover within same network Handover authentication between networksServing APProcedure 2 Procedure 1UE Authentication serverCheck:AUTHConfirmpairwise keyAssociatePairwise keyStart authentication, send identityTarget AP list for handoverRetrieve UE state: authentication context, keyDisconnectPossible handover, UE (identity, QoS)AssociateAccess permit: global identity,encryption/integrity keyAuthentication vectors(RAND, AUTH, pairwise key)RAND, AUTHWhen introducingSDN into 5G networks,the SDN controllerwill haveglobal control overthe network, whileSDN switches simplyfollow data forwardinginstructions fromthe controller. Applicationsare implementedon top ofthe controller todefine the behaviorof the switches andAPs, thus creating areconfigurable 5GHetNet.IEEE Communications Magazine • April 2015 31Importantly, OpenFlow is in charge of data pathcontrol, and SNMP can be used for device control.As the SDN controller is just a programrunning on a server, it can be placed anywherein the 5G network — even in a remote data center.An SDN-based 5G network structure enablesflexible ubiquitous connection, fast rerouting,and real-time network management with thesoftware controller. Users are able to access networkservices anywhere and anytime regardlessof the network type [4] (e.g., Wi-Fi, 3G, LTE,LTE-A) as long as these networks belong to thesame operator or there are agreements betweenoperators. Furthermore, consistent authenticationand privacy protection are also manageable.In this article, we explore SDN as a promisingplatform to introduce intelligence into 5Gand address the security challenges. 
Specifically,we discuss SDN-enabled authentication hand -over, which provides control over HetNet infrastructuresand helps the network to reduceredundant authentications across HetNets.Hand over authentication thus becomes a morecontrolled and prepared process instead of multipleindependent procedures. By sharing securecontext information along moving direction ofthe user and choosing multiple network paths totransmit data concurrently, the SDN structure iscapable of facilitating 5G security provisioningmore efficiently. In doing so, user-specificattributes are utilized as the shared security contextto reduce handover complexity. To furtherachieve privacy protection, SDN-enabled datatransmission over different network paths in 5GHetNets is also investigated in order to guaranteeprivacy.SDN-ENABLED5G AUTHENTICATION HANDOVERIn this section, we introduce SDN into 5G toenable the proposed authentication handoverscheme in coping with the frequent handoverauthentication in small cells and HetNets, asshown in Fig. 4. We implement an authenticationhandover module (AHM) in the SDN controllerto monitor and predict the location ofusers, and then prepare the relevant cells beforethe user arrives to guarantee seamless handoverauthentication. Using a traffic flow template(TFT) filter [13] (source/destination IP addressesand port numbers) and related quality of service(QoS) description, secure contextinformation (SCI) is collected by the AHM toshare along a projected user moving path (i.e.,from cell A to cell B, C in Fig. 4). The relevantcell APs thus prepare resource in advance andensure seamless user experience during mobility.Specifically, user specific attributes includingidentity, location, direction, round-trip time(RTT), and physical layer characteristics havebeen considered as reliable SCI to assist securehandover in 5G networks, instead of using complexcryptographic exchange mechanisms. As anon-cryptographic method, user-specificattributes are able to simplify the authenticationprocedure by providing the unique fingerprint ofthe specific device without additional hardwareand computation cost [12]. In this article, wefocus on using user-specific attributes as SCI(location, direction, etc.) to realize SDN-enabledauthentication handover. Based on the proposedauthentication context handover, security inSDN-enabled 5G networks becomes a monitoredseamless procedure instead of multiple independentverifications, which could significantlyreduce the possibility of impersonation andMitM attacks.More precisely, the way in which the SDNcontroller shares the user’s SCI to next cell APsalong the predicted path is just like a trustworthyintroduction from a previous AP before hand -over. The future cell APs thus finish authenticationwith the user quickly and begin to monitorthe user to prepare service according to the SCI.As the trace of the user is monitored, the risk ofimpersonation is significantly, if not entirely,reduced. More importantly, there would be riskof service disruption in previous networks if theconnection between APs and the authenticationserver is broken. Under similar network conditions,however, our mechanism will not loseglobal network connectivity because a new AP ismonitoring the user, which can help the controllerretrieve the necessary information accordingto the pre-shared SCI. Thus, theSDN-enabled security handover possesses highlevels of tolerance to network failures. 
In the following, the authentication handover mechanism is described in detail in terms of its assumptions and design.

ASSUMPTIONS AND DESIGN GOALS

We assume that the SDN controller is a program running in a mobile operator's data center with an AHM for user authorization. The AHM is in charge of both authentication and handover, and maintains user information specifying what each user can access. The AHM also possesses a master public-private key pair (K, K^-1), where the public key K is known to users and APs. Both APs and UEs need to be verified before gaining access to network services, to reduce security risks.

Figure 3. SDN-enabled 5G wireless HetNet structure with control plane design.

Our design goal for the authentication handover mechanism is to accelerate authentication in 5G HetNets by enabling SCI transfer using SDN. To further reduce the overall authentication delay, the AHM in the controller can periodically authenticate the APs at off-peak times using its master key, to avoid privacy leakage caused by compromised APs. If certified, a key pair (K_N, K_N^-1) with a signature [K_N, T]_{K^-1} is distributed to the AP, where T is the timeout of the signature; if an AP is detected as compromised, it is blacklisted from further operation. This way, some of the authentication procedures are moved to off-peak times, relieving the SDN controller's burden.
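As a hedged illustration of this periodic AP certification step, the sketch below signs an AP's short-term public key together with a timeout T under the AHM's master key and lets any party holding K check the certificate. Ed25519 and the exact message encoding are choices made here for concreteness, not mandated by the article:

import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

# Master key pair (K, K^-1) held by the AHM; K is known to users and APs.
master_priv = Ed25519PrivateKey.generate()
master_pub = master_priv.public_key()

def certify_ap(ap_pub_bytes, lifetime_s=3600):
    """Issue [K_N, T]_{K^-1}: a signature over the AP key K_N and a timeout T."""
    timeout = int(time.time()) + lifetime_s
    message = ap_pub_bytes + timeout.to_bytes(8, "big")
    return timeout, master_priv.sign(message)

def check_ap(ap_pub_bytes, timeout, signature):
    """UEs and peer APs verify the certificate with the public master key K."""
    if time.time() > timeout:
        return False                      # certificate expired
    try:
        master_pub.verify(signature, ap_pub_bytes + timeout.to_bytes(8, "big"))
        return True
    except InvalidSignature:
        return False                      # AP not certified (or a revoked/forged key)

# Example: the AP generates its own short-term key pair (K_N, K_N^-1).
ap_priv = Ed25519PrivateKey.generate()
ap_pub_bytes = ap_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
t, sig = certify_ap(ap_pub_bytes)
print(check_ap(ap_pub_bytes, t, sig))     # True while the certificate is fresh

Because the timeout T is inside the signed message, a compromised AP can at worst reuse its credential until T expires, after which the AHM simply declines to re-certify it.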
SDN-ENABLED AUTHENTICATION HANDOVER MECHANISM DESIGN

With the assumptions and design goals described above, we can design the SDN-enabled authentication handover mechanism. User-specific SCI, such as ID, physical layer attributes, location, speed, and direction, can be collected and shared easily with SDN flow-based forwarding [3]. According to the UE location information from the SCI, the SDN controller uses an ascending index to indicate the sequential order of the next cells in the moving direction. Once the user is authenticated by one cell AP, an appropriate combination of user attributes is then shared as SCI by the SDN controller along this user's future path. This way, the UE is able to enjoy seamless service without complex operations during authentication handover, thus saving time for data communications.

For example, we assume that user U is in cell A, and the future cells are B and C, as shown in Fig. 4. The authentication procedure between user U and cell A follows the commonly used authentication protocol [10], and the proposed SDN-enabled authentication handover procedure is described in Algorithm 1.

Figure 4. SDN-enabled secure context information transfer between 5G UE, APs, and the AHM in the SDN controller.

The SCI attributes in the proposed SDN-enabled authentication handover could include identity, physical layer attributes, location, moving speed, and direction. The number of attributes to be used is based on the security level of the information requested. For example, if the user is requesting banking or email services, a higher security level can be achieved by transferring more SCI attributes; if it is just Internet browsing or video gaming, the security level can be lower, and few SCI attributes are needed.

The aforementioned authentication handover method requires no changes to the existing UE and AP hardware, and it significantly simplifies the authentication procedure and reduces handover latency through a non-cryptographic technique. By predicting the user moving path and shifting the authentication of APs to off-peak times, the SDN-enabled 5G network can always be well prepared for other service requests. Moreover, operators can choose to switch lightly loaded cells off or on if, according to the SCI information, the number of users approaching these cells is not going to exceed a certain threshold, thereby saving more energy.

SDN-ENABLED 5G PRIVACY PROTECTION

Data privacy means the right of network users to seclude themselves from prying and eavesdropping. Due to the reduced cell size in 5G HetNets, users might move through multiple small cells before completing one communication session. Thus, privacy protection is more challenging in 5G due to the possible involvement of untrusted or compromised APs during handover. Existing privacy protection schemes use complex key agreements and interactions, or additional watermarking, to protect data privacy. Such cryptographic methods bring computation burden and complexity to both the AP and client sides [9], which is undesirable for 5G low-power small cell infrastructures. On the other hand, privacy protection requires that no link can be established between information and its owner, while authentication requires an identity to be provided for the purpose of authentication. Previously, these contradictory requirements were met through a trusted third party. However, multiple enquiries to a remote third party cause a network bottleneck, which is not suitable for 5G low-latency communications.

We introduce an SDN-enabled privacy protection scheme, which employs partial transmission over different SDN-controlled network paths to guarantee privacy and offload traffic in 5G cellular networks at the same time. With the proposed privacy protection scheme, the SDN controller is able to choose multiple network paths to transmit different parts of the data stream (i.e., partial transmission) according to the HetNet coverage. The number of network paths is decided by the sensitivity level of the data stream. As long as the UE has been authenticated and is covered by the HetNets (e.g., Wi-Fi, femtocell, or cellular), the induced data stream can be routed through these network backhauls under the control of an SDN controller. Only the receiver can decrypt the data using its private key and then re-organize the data stream coming from the multiple network paths, which avoids privacy leakage via compromised APs.
Moreover, the proposed scheme is able to realize traffic offloading through the other network paths, which is desirable given that a 5G cellular network will be flooded with a huge volume of mobile traffic [1]. Simply by choosing nearby Wi-Fi or femtocells as the different paths for data offloading, the traffic load of the 5G cellular network is relieved through either the unlicensed band of Wi-Fi or reuse of the femtocell's band. The proposed SDN-enabled privacy protection mechanism is described in Algorithm 2.

In Algorithm 2, n is the number of network paths that the SDN controller chooses for data transmission, and d_n is the part of the data that will be transmitted over the nth network concurrently. t_r is the data transfer time within the involved networks. T_s is the delay threshold of the 5G application: to achieve concurrent privacy protection, the service needs to be completed before T_s to guarantee the user experience. For example, email transfer can tolerate long latency, while real-time video and two-way gaming have very low delay thresholds. b_n is the bandwidth allocated by the SDN controller according to the traffic situation of the different networks, and V_s_n is the volume of data that can be transferred over the multiple paths (i.e., the offloading networks) within the application delay threshold.

More importantly, the number of paths n here is decided by a trade-off between privacy level, offloading revenue, and system complexity, which is reconfigurable and can easily be set up through an SDN controller application by 5G operators. User privacy protection thus becomes programmable and under the control of SDN, which is especially desirable for future highly diverse communication requirements and application needs.

Algorithm 1. User-SCI-based authentication handover.
State(A, U): Authenticated. State(B, U): Not authenticated. State(C, U): Not authenticated.
AHM -> B: (index = 1, ID, SCI)
AHM -> C: (index = 2, ID, SCI)
  The ascending index number shows the direction of user movement; ID is the identity of U and SCI is the secure context information of U.
B -> A: Handoff REQ(ID, SCI)
  When B discovers U in its coverage, B sends the handoff request to A until it receives a reply from A.
A -> B: Handoff ACK(ID, SCI')
  A replies with a handoff acknowledgement; SCI' is secure context information more recent than the previously shared SCI.
B -> U: Update REQ()
  After matching SCI' from A with U, B authenticates U and starts to associate with U.
U -> B: Update ACK(SCI'')
  Here U is connected with B; SCI'' is the latest secure context information.
State(B, U): Authenticated.
B -> AHM: Update(SCI'')
  B updates the UE secure context information to the AHM. The AHM then shares the secure information with the next cell APs according to the location and direction information in the new SCI''.
C -> B: C keeps monitoring U and follows a similar procedure.

Algorithm 2. Partial data offloading over different SDN-controlled network paths.
1: procedure PDO(n)
2:   T_s: delay threshold
3:   V_s_n = b_n * min(t_r, T_s): size in bytes that can be transferred over the nearby Wi-Fi, femtocell, or cellular path within T_s
4:   for d_1 < V_s_1, d_2 < V_s_2, ..., d_n < V_s_n and d = d_1 + d_2 + ... + d_n do
5:     Encrypt d_1, d_2, ..., d_n separately, send them over the n networks concurrently, and update d
6:   end for
7:   The receiver decrypts d_1 ... d_n using its private key and re-organizes the data
8: end procedure
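For readers who prefer running code, the sketch below mirrors Algorithm 2's splitting rule V_s_n = b_n * min(t_r, T_s) and the per-path encryption step in Python. The byte-oriented splitting and the placeholder encrypt_for_receiver/decrypt helpers are illustrative assumptions, since the article does not fix a cipher or packet format:

def pdo(data, bandwidths_bps, t_r, T_s, encrypt_for_receiver):
    """Partial data offloading (Algorithm 2, simplified).
    Splits `data` across n paths so that path i carries at most
    V_s_i = b_i * min(t_r, T_s) bytes, then encrypts each part separately."""
    window = min(t_r, T_s)
    caps = [int(b / 8 * window) for b in bandwidths_bps]   # per-path byte budgets V_s_i
    parts, offset = [], 0
    for cap in caps:
        chunk = data[offset:offset + cap]
        parts.append(encrypt_for_receiver(chunk))          # sent concurrently in practice
        offset += cap
    if offset < len(data):
        raise ValueError("delay threshold too tight: data does not fit on the chosen paths")
    return parts

def reassemble(parts, decrypt):
    """Receiver side: decrypt each part with its private key and re-organize the stream."""
    return b"".join(decrypt(p) for p in parts)

if __name__ == "__main__":
    # Toy placeholder "encryption" (byte-wise XOR) only to keep the sketch self-contained;
    # the scheme assumes real public-key encryption toward the receiver.
    xor = lambda blob: bytes(b ^ 0x5A for b in blob)
    stream = b"sensitive 5G payload" * 100
    pieces = pdo(stream, bandwidths_bps=[40_000, 80_000, 20_000], t_r=2.0, T_s=1.0,
                 encrypt_for_receiver=xor)
    assert reassemble(pieces, decrypt=xor) == stream
    print(len(pieces), [len(p) for p in pieces])

Because only the receiver can invert the per-part encryption, an AP on any single path observes just a fragment of ciphertext, which is the property the scheme relies on for privacy.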
PERFORMANCE ANALYSIS

MATLAB simulations of a 5G network with commonly used hexagonal cells are adopted to evaluate the performance of the aforementioned mechanisms in terms of security level and latency. A total of 19 small cells (Fig. 5) with an inter-site distance (i.e., the distance between two APs) of 300 m is considered in the simulation. Users are randomly distributed around the APs, while each UE takes a random walk and changes direction every 5 s. The wrap-around technique (i.e., users moving out of the predefined service area are assumed to re-enter the area from the other side of the network) is used to avoid boundary effects. The specific simulation parameters are listed in Table 1.

In simulating the proposed SDN-enabled authentication handover, we consider the separation distance between the UE and the APs, and the moving direction of the UE, as the transferred SCI, in order to verify the reliability of the proposed SCI-based authentication handover scheme. From the simulation results, we find that during the monitored user handover process, the probability that any two users have the same distance (to one decimal place) to their closest AP is 44 percent. For a given AP, the probability of two users having the same distance to that AP decreases to 11 percent. Combined with moving direction, signal strength, channel state information, and other user-specific attributes, the probability of two UEs presenting the same SCI can be reduced to virtually zero. Therefore, we believe that the SDN-enabled authentication handover mechanism using SCI transfer is robust enough to guarantee security given sufficient SCI attributes. Moreover, it is flexible in setting a security level through different combinations of user-specific attributes.
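A rough Python re-creation of this kind of experiment is sketched below. It keeps the 300 m inter-site distance and random user placement but simplifies the layout to a small square AP grid, omits mobility, and quantizes direction into 10-degree sectors, so the numbers it prints only illustrate the trend that combining attributes shrinks SCI ambiguity; they are not a reproduction of the 44/11 percent MATLAB figures:

import math, random

random.seed(1)
ISD = 300.0                                    # inter-site distance in meters
# Simplified square AP grid standing in for the paper's 19-cell hexagonal layout.
APS = [(x * ISD, y * ISD) for x in range(-2, 3) for y in range(-2, 3)]

def nearest_ap_distance(ue):
    return min(math.dist(ue, ap) for ap in APS)

def pair_match_probability(trials=2000, with_direction=False):
    """Probability that two random users present the same quantized SCI."""
    hits = 0
    for _ in range(trials):
        pair = []
        for _ in range(2):
            pos = (random.uniform(-2 * ISD, 2 * ISD), random.uniform(-2 * ISD, 2 * ISD))
            sci = (round(nearest_ap_distance(pos), 1),)        # distance to closest AP
            if with_direction:
                sci += (random.randrange(0, 360, 10),)         # quantized moving direction
            pair.append(sci)
        hits += (pair[0] == pair[1])
    return hits / trials

print("distance only       :", pair_match_probability())
print("distance + direction:", pair_match_probability(with_direction=True))

Adding further attributes (signal strength, CSI, speed) multiplies the number of distinguishable SCI values in the same way, which is why the ambiguity quickly approaches zero.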
Figure 5. Simulation layout of 5G small cells with proportional axis (1 = 300 m).

Table 1. Simulation parameters of 5G networks.
Cell layout: hexagonal grid, 19 cell sites, with wrap-around technique
Cell radius: 150 m
User mobility speed: 3 km/h
User mobility direction: random
Total number of users: 570

Authentication handover delays of the SDN-enabled handover and the traditional methods are simulated and compared to evaluate the latency performance of the proposed schemes. Without loss of generality, we assume that each user's data follows Poisson arrivals and that new users initiate the authentication process while the UE is on the move. In simulating the proposed authentication handover, user-specific SCI is collected and transferred to the relevant cells on the projected moving path of the UE under the coordination of the SDN controller. The traditional authentication handover protocol, on the other hand, requires separate authentication in each network involved in the handover. Here we use two publicly available OpenFlow controllers as representatives to show the performance [14]: NOX-MT and Beacon. NOX-MT is a multithreaded successor of NOX, while Beacon is a Java controller built by David Erickson at Stanford [3].

Figure 6 shows the comparison of authentication delay versus 5G network utilization rate. Network utilization is defined here as the ratio of the total data arrival rate to the controller processing rate; it is used because it reflects the different load situations of the network. We can see from Fig. 6 that when the network load is fairly low, authentication delay is not a problem for any of the methods. With more arrivals and increased network load, SDN-enabled authentication handover still keeps the latency under 1 ms most of the time, which meets the 5G latency requirement. The NOX-MT- and Beacon-enabled solutions perform 30 and 14.29 percent better, respectively, than the traditional handover authentication protocol in latency reduction with the commonly used deployment of an eight-core machine, 2 GHz CPUs, and 32 switches in [14]. It is evident that the SDN-enabled authentication handover and privacy protection schemes meet the critical latency requirement of 5G, while retaining the SDN flexibility, programmability, and data offloading capability that further improve the energy efficiency and network management of 5G networks.

CONCLUSION

With the upcoming multi-tier architecture and small cell deployment, challenges emerge in security provisioning and privacy protection in 5G heterogeneous networks. 5G network security handover needs to be fast and of low complexity due to the reduced cell size and the stringent latency constraint. In this article, we review the existing studies and identify the current challenges in authentication handover and privacy protection in 5G. In addressing these challenges, we propose SDN-enabled authentication handover and privacy protection through the sharing of user-specific security context information among related access points. The proposed SDN-enabled solution not only provides a reconfigurable network management platform, but also simplifies authentication handover and thereby reduces latency. The performance of the proposed schemes has been demonstrated through numerical simulations and examples. We expect that more progress can be made by using the emerging SDN-enabled 5G architecture and non-cryptographic techniques to address the 5G challenges of reduced cell size and the coexistence of heterogeneous networks. Many interesting related topics, including network complexity, security performance under different attacks, and effective use of security context information, could be explored for SDN-enabled 5G security mechanisms.

Authenticated Key Exchange Protocols for Parallel Network File Systems

Authenticated Key Exchange Protocols for ParallelNetwork File SystemsHoon Wei Lim Guomin YangAbstract—We study the problem of key establishment for securemany-to-many communications. The problem is inspired bythe proliferation of large-scale distributed file systems supportingparallel access to multiple storage devices. Our work focuses onthe current Internet standard for such file systems, i.e., parallelNetwork File System (pNFS), which makes use of Kerberos toestablish parallel session keys between clients and storage devices.Our review of the existing Kerberos-based protocol shows thatit has a number of limitations: (i) a metadata server facilitatingkey exchange between the clients and the storage devices hasheavy workload that restricts the scalability of the protocol; (ii)the protocol does not provide forward secrecy; (iii) the metadataserver generates itself all the session keys that are used betweenthe clients and storage devices, and this inherently leads to keyescrow. In this paper, we propose a variety of authenticatedkey exchange protocols that are designed to address the aboveissues. We show that our protocols are capable of reducing up toapproximately 54% of the workload of the metadata server andconcurrently supporting forward secrecy and escrow-freeness.All this requires only a small fraction of increased computationoverhead at the client.Keywords-Parallel sessions, authenticated key exchange, networkfile systems, forward secrecy, key escrow.I. INTRODUCTIONIn a parallel file system, file data is distributed acrossmultiple storage devices or nodes to allow concurrent accessby multiple tasks of a parallel application. This is typicallyused in large-scale cluster computing that focuses on highperformance and reliable access to large datasets. That is,higher I/O bandwidth is achieved through concurrent accessto multiple storage devices within large compute clusters;while data loss is protected through data mirroring usingfault-tolerant striping algorithms. Some examples of highperformanceparallel file systems that are in production useare the IBM General Parallel File System (GPFS) [48], GoogleFile System (GoogleFS) [21], Lustre [35], Parallel Virtual FileSystem (PVFS) [43], and Panasas File System [53]; whilethere also exist research projects on distributed object storagesystems such as Usra Minor [1], Ceph [52], XtreemFS [25],and Gfarm [50]. These are usually required for advancedscientific or data-intensive applications such as, seismic dataprocessing, digital animation studios, computational fluid dynamics,and semiconductor manufacturing. In these environments,hundreds or thousands of file system clients share dataand generate very high aggregate I/O load on the file systemsupporting petabyte- or terabyte-scale storage capacities.H.W. Lim is with National University of Singapore. Email:hoonwei@nus.edu.sg.G. Yang is with University of Wollongong, Australia. Email:gyang@uow.edu.au.Independent of the development of cluster and highperformancecomputing, the emergence of clouds [5], [37]and the MapReduce programming model [13] has resultedin file systems such as the Hadoop Distributed File System(HDFS) [26], Amazon S3 File System [6], and Cloud-Store [11]. This, in turn, has accelerated the wide-spreaduse of distributed and parallel computation on large datasetsin many organizations. Some notable users of the HDFSinclude AOL, Apple, eBay, Facebook, Hewlett-Packard, IBM,LinkedIn, Twitter, and Yahoo! 
[23].In this work, we investigate the problem of secure manyto-many communications in large-scale network file systemsthat support parallel access to multiple storage devices. Thatis, we consider a communication model where there are alarge number of clients (potentially hundreds or thousands)accessing multiple remote and distributed storage devices(which also may scale up to hundreds or thousands) in parallel.Particularly, we focus on how to exchange key materialsand establish parallel secure sessions between the clientsand the storage devices in the parallel Network File System(pNFS) [46]—the current Internet standard—in an efficientand scalable manner. The development of pNFS is driven byPanasas, Netapp, Sun, EMC, IBM, and UMich/CITI, and thusit shares many common features and is compatible with manyexisting commercial/proprietary network file systems.Our primary goal in this work is to design efficient andsecure authenticated key exchange protocols that meet specificrequirements of pNFS. Particularly, we attempt to meet thefollowing desirable properties, which either have not beensatisfactorily achieved or are not achievable by the currentKerberos-based solution (as described in Section II):Scalability – the metadata server facilitating access requestsfrom a client to multiple storage devices shouldbear as little workload as possible such that the serverwill not become a performance bottleneck, but is capableof supporting a very large number of clients;Forward secrecy – the protocol should guarantee thesecurity of past session keys when the long-term secretkey of a client or a storage device is compromised [39];andEscrow-free – the metadata server should not learn anyinformation about any session key used by the client andthe storage device, provided there is no collusion amongthem.The main results of this paper are three new provablysecure authenticated key exchange protocols. Our protocols,progressively designed to achieve each of the above properties,1045-9219 (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. Seehttp://www.ieee.org/publications_standards/publications/rights/index.html for more information.This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI10.1109/TPDS.2015.2388447, IEEE Transactions on Parallel and Distributed Systems2demonstrate the trade-offs between efficiency and security.We show that our protocols can reduce the workload ofthe metadata server by approximately half compared to thecurrent Kerberos-based protocol, while achieving the desiredsecurity properties and keeping the computational overhead atthe clients and the storage devices at a reasonably low level.We define an appropriate security model and prove that ourprotocols are secure in the model.In the next section, we provide some background on pNFSand describe its existing security mechanisms associated withsecure communications between clients and distributed storagedevices. Moreover, we identify the limitations of the currentKerberos-based protocol in pNFS for establishing securechannels in parallel. In Section III, we describe the threatmodel for pNFS and the existing Kerberos-based protocol. InSection IV, we present our protocols that aim to address thecurrent limitations. 
We then provide formal security analysesof our protocols under an appropriate security model, as wellas performance evaluation in Sections VI and VII, respectively.In Section VIII, we describe related work, and finally inSection IX, we conclude and discuss some future work.II. INTERNET STANDARD — NFSNetwork File System (NFS) [46] is currently the sole filesystem standard supported by the Internet Engineering TaskForce (IETF). The NFS protocol is a distributed file systemprotocol originally developed by Sun Microsystems that allowsa user on a client computer, which may be diskless, to accessfiles over networks in a manner similar to how local storageis accessed [47]. It is designed to be portable across differentmachines, operating systems, network architectures, and transportprotocols. Such portability is achieved through the use ofRemote Procedure Call (RPC) [51] primitives built on top ofan eXternal Data Representation (XDR) [15]; with the formerproviding a procedure-oriented interface to remote services,while the latter providing a common way of representing a setof data types over a network. The NFS protocol has since thenevolved into an open standard defined by the IETF NetworkWorking Group [49], [9], [45]. Among the current key featuresare filesystem migration and replication, file locking, datacaching, delegation (from server to client), and crash recovery.In recent years, NFS is typically used in environments whereperformance is a major factor, for example, high-performanceLinux clusters. The NFS version 4.1 (NFSv4.1) [46] protocol,the most recent version, provides a feature called parallel NFS(pNFS) that allows direct, concurrent client access to multiplestorage devices to improve performance and scalability. Asdescribed in the NFSv4.1 specification:When file data for a single NFS server is storedon multiple and/or higher-throughput storage devices(by comparison to the server’s throughput capability),the result can be significantly better file accessperformance.pNFS separates the file system protocol processing into twoparts: metadata processing and data processing. Metadata is informationabout a file system object, such as its name, locationwithin the namespace, owner, permissions and other attributes.The entity that manages metadata is called a metadata server.On the other hand, regular files’ data is striped and storedacross storage devices or servers. Data striping occurs in atleast two ways: on a file-by-file basis and, within sufficientlylarge files, on a block-by-block basis. Unlike NFS, a read orwrite of data managed with pNFS is a direct operation betweena client node and the storage system itself. Figure 1 illustratesthe conceptual model of pNFS.Storage access protocol(direct, parallel data exchange)pNFS protocol(metadata exchange)Control protocol(state synchronization)Storage devices or servers(file, block, object storage)Metadata serverClients(heterogeneous OSes)Fig. 1. The conceptual model of pNFS.More specifically, pNFS comprises a collection of threeprotocols: (i) the pNFS protocol that transfers file metadata,also known as a layout,1 between the metadata server anda client node; (ii) the storage access protocol that specifieshow a client accesses data from the associated storage devicesaccording to the corresponding metadata; and (iii) the controlprotocol that synchronizes state between the metadata serverand the storage devices.2A. 
Security ConsiderationEarlier versions of NFS focused on simplicity and efficiency,and were designed to work well on intranets and local networks.Subsequently, the later versions aim to improve accessand performance within the Internet environment. However,security has then become a greater concern. Among manyother security issues, user and server authentication withinan open, distributed, and cross-domain environment are acomplicated matter. Key management can be tedious andexpensive, but an important aspect in ensuring security ofthe system. Moreover, data privacy may be critical in highperformanceand parallel applications, for example, those associatedwith biomedical information sharing [28], [44], financialdata processing & analysis [20], [34], and drug simulation &discovery [42]. Hence, distributed storage devices pose greaterrisks to various security threats, such as illegal modificationor stealing of data residing on the storage devices, as well asinterception of data in transit between different nodes within1A layout can be seen as a map, describing how a file is distributed acrossthe data storage system. When a client holds a layout, it is granted the abilityto directly access the byte-range at the storage location specified in the layout.2Note that the control protocol is not specified in NFSv4.1. It can take manyforms, allowing vendors the flexibility to compete on performance, cost, andfeatures.1045-9219 (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. Seehttp://www.ieee.org/publications_standards/publications/rights/index.html for more information.This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI10.1109/TPDS.2015.2388447, IEEE Transactions on Parallel and Distributed Systems3the system. NFS (since version 4), therefore, has been mandatingthat implementations support end-to-end authentication,where a user (through a client) mutually authenticates to anNFS server. Moreover, consideration should be given to theintegrity and privacy (confidentiality) of NFS requests andresponses [45].The RPCSEC GSS framework [17], [16] is currently thecore security component of NFS that provides basic securityservices. RPCSEC GSS allows RPC protocols to access theGeneric Security Services Application Programming Interface(GSS-API) [33]. The latter is used to facilitate exchange of credentialsbetween a local and a remote communicating parties,for example between a client and a server, in order to establisha security context. The GSS-API achieves these through aninterface and a set of generic functions that are independentof the underlying security mechanisms and communicationprotocols employed by the communicating parties. Hence,with RPCSEC GSS, various security mechanisms or protocolscan be employed to provide services such as, encrypting NFStraffic and performing integrity check on the entire body of anNFSv4 call.Similarly, in pNFS, communication between the client andthe metadata server are authenticated and protected throughRPCSEC GSS. The metadata server grants access permissions(to storage devices) to the client according to pre-definedaccess control lists (ACLs).3 The client’s I/O request to astorage device must include the corresponding valid layout.Otherwise, the I/O request is rejected. 
In an environment whereeavesdropping on the communication between the client andthe storage device is of sufficient concern, RPCSEC GSS isused to provide privacy protection [46].B. Kerberos & LIPKEYIn NFSv4, the Kerberos version 5 [32], [18] and the LowInfrastructure Public Key (LIPKEY) [14] GSS-API mechanismsare recommended, although other mechanisms may alsobe specified and used. Kerberos is used particularly for userauthentication and single sign-on, while LIPKEY provides anTLS/SSL-like model through the GSS-API, particularly forserver authentication in the Internet environment.User and Server Authentication. Kerberos, a widely deployednetwork authentication protocol supported by all majoroperating systems, allows nodes communicating over a nonsecurenetwork to perform mutual authentication. It works ina client-server model, in which each domain (also known asrealm) is governed by a Key Distribution Center (KDC), actingas a server that authenticates and provides ticket-grantingservices to its users (through their respective clients) withinthe domain. Each user shares a password with its KDC anda user is authenticated through a password-derived symmetrickey known only between the user and the KDC. However,one security weakness of such an authentication method isthat it may be susceptible to an off-line password guessingattack, particularly when a weak password is used to derive3Typically, operating system principles are matched to a set of user andgroup access control lists.a key that encrypts a protocol message transmitted betweenthe client and the KDC. Furthermore, Kerberos has strict timerequirements, implying that the clocks of the involved hostsmust be synchronized with that of the KDC within configuredlimits.Hence, LIPKEY is used instead to authenticate the clientwith a password and the metadata server with a public keycertificate, and to establish a secure channel between the clientand the server. LIPKEY leverages the existing Simple Public-Key Mechanism (SPKM) [2] and is specified as an GSSAPImechanism layered above SPKM, which in turn, allowsboth unilateral and mutual authentication to be accomplishedwithout the use of secure time-stamps. Through LIPKEY,analogous to a typical TLS deployment scenario that consistsof a client with no public key certificate accessing a serverwith a public key certificate, the client in NFS [14]:obtains the metadata server’s certificate;verifies that it was signed by a trusted CertificationAuthority (CA);generates a random session symmetric key;encrypts the session key with the metadata server’s publickey; andsends the encrypted session key to the server.At this point, the client and the authenticated metadata serverhave set up a secure channel. The client can then provide auser name and a password to the server for user authentication.Single Sign-on. In NFS/pNFS that employs Kerberos, eachstorage device shares a (long-term) symmetric key with themetadata server (which acts as the KDC). Kerberos then allowsthe client to perform single sign-on, such that the client isauthenticated once to the KDC for a fixed period of time butmay be allowed access to multiple storage devices governed bythe KDC within that period. 
This can be summarized in threerounds of communication between the client, the metadataserver, and the storage devices as follows:1) the client and the metadata server perform mutual authenticationthrough LIPKEY (as described before), and theserver issues a ticket-granting ticket (TGT) to the clientupon successful authentication;2) the client forwards the TGT to a ticket-granting server(TGS), typically the same entity as the KDC, in orderto obtain one or more service tickets (each containinga session key for access to a storage device), and validlayouts (each presenting valid access permissions to astorage device according to the ACLs);3) the client finally presents the service tickets and layoutsto the corresponding storage devices to get access to thestored data objects or files.We describe the above Kerberos-based key establishmentprotocol in more detail in Section III-C.Secure storage access. The session key generated by theticket-granting server (metadata server) for a client and astorage device during single sign-on can then be used in thestorage access protocol. It protects the integrity and privacyof data transmitted between the client and the storage device.Clearly, the session key and the associated layout are validonly within the granted validity period.1045-9219 (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. Seehttp://www.ieee.org/publications_standards/publications/rights/index.html for more information.This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI10.1109/TPDS.2015.2388447, IEEE Transactions on Parallel and Distributed Systems4C. Current LimitationsThe current design of NFS/pNFS focuses on interoperability,instead of efficiency and scalability, of various mechanismsto provide basic security. Moreover, key establishment betweena client and multiple storage devices in pNFS are basedon those for NFS, that is, they are not designed specificallyfor parallel communications. Hence, the metadata server is notonly responsible for processing access requests to storage devices(by granting valid layouts to authenticated and authorizedclients), but also required to generate all the correspondingsession keys that the client needs to communicate securelywith the storage devices to which it has been granted access.Consequently, the metadata server may become a performancebottleneck for the file system. Moreover, such protocol designleads to key escrow. Hence, in principle, the server can learn allinformation transmitted between a client and a storage device.This, in turn, makes the server an attractive target for attackers.Another drawback of the current approach is that pastsession keys can be exposed if a storage device’s long-term keyshared with the metadata server is compromised. We believethat this is a realistic threat since a large-scale file system mayhave thousands of geographically distributed storage devices.It may not be feasible to provide strong physical security andnetwork protection for all the storage devices.III. PRELIMINARIESA. NotationWe let M denote a metadata server, C denote a client, andS denote a storage device. 
For entities X, Y ∈ {M, C, S}, we use ID_X to denote the unique identity of X and K_X to denote X's secret (symmetric) key, while K_XY denotes a secret key shared between X and Y, and sk denotes a session key. Moreover, we let E(K; m) be a standard (encryption-only) symmetric-key encryption function and Ê(K; m) an authenticated symmetric-key encryption function, where both functions take as input a key K and a message m. Finally, we use t to represent the current time and σ to denote a layout. We may introduce other notation as required.

B. Threat Assumptions

Existing proposals [19], [40], [29], [30], [31] on secure large-scale distributed file systems typically assume that both the metadata server and the storage devices are trusted entities. On the other hand, no implicit trust is placed on the clients. The metadata server is trusted to act as a reference monitor, issue valid layouts containing access permissions, and sometimes even generate session keys (for example, in the case of Kerberos-based pNFS) for secure communication between the client and the storage devices. The storage devices are trusted to store data and to perform I/O operations only upon authorized requests. However, we assume that the storage devices are at a much higher risk of being compromised than the metadata server, which is typically easier to monitor and protect in a centralized location. Furthermore, we assume that the storage devices may occasionally encounter hardware or software failures, causing the data stored on them to become inaccessible.

We note that this work focuses on communication security. Hence, we assume that data transmitted between the client and the metadata server, or between the client and a storage device, can easily be eavesdropped, modified, or deleted by an adversary. However, we do not address storage-related security issues in this work. Security protection mechanisms for data at rest are orthogonal to our protocols.

C. Kerberos-based pNFS Protocol

For the sake of completeness, we describe in Figure 2 the key establishment protocol recommended for pNFS in RFC 5661, between a client C and n storage devices S_i, for 1 ≤ i ≤ n, through a metadata server M (for ease of exposition, we do not provide complete details of the protocol parameters). We will compare the efficiency of this pNFS protocol against ours in Section VII. During the setup phase, we assume that M establishes a shared secret key K_MSi with each S_i. Here, K_C is a key derived from C's password that is also known to M, while T plays the role of a ticket-granting server (we simply assume that it is part of M). Also, prior to executing the protocol in Figure 2, we assume that C and M have already set up a secure channel through LIPKEY (as described in Section II-B).

(1) C -> M : ID_C
(2) M -> C : E(K_C; K_CT), E(K_T; ID_C, t, K_CT)
(3) C -> T : ID_S1, ..., ID_Sn, E(K_T; ID_C, t, K_CT), E(K_CT; ID_C, t)
(4) T -> C : σ_1, ..., σ_n, E(K_MS1; ID_C, t, sk_1), ..., E(K_MSn; ID_C, t, sk_n), E(K_CT; sk_1, ..., sk_n)
(5) C -> S_i : σ_i, E(K_MSi; ID_C, t, sk_i), E(sk_i; ID_C, t)
(6) S_i -> C : E(sk_i; t + 1)
Fig. 2. A simplified version of the Kerberos-based pNFS protocol.

Once C has been authenticated by M and granted access to S_1, ..., S_n, it receives from T a set of service tickets E(K_MSi; ID_C, t, sk_i), session keys sk_i, and layouts σ_i (for all i ∈ [1, n]), as illustrated in step (4) of the protocol. (We assume that a layout, which contains the client's identity, file object mapping information, and access permissions, is integrity protected, for example in the form of a signature or MAC.) Clearly, we assume that C is able to uniquely extract each session key sk_i from E(K_CT; sk_1, ..., sk_n). Since the session keys are generated by M and transported to S_i through C, no interaction is required between C and S_i (in terms of key exchange) in order to agree on a session key. This keeps the communication overhead between the client and each storage device to a minimum in comparison with the case where key exchange is required. Moreover, the computational overhead for the client and each storage device is very low, since the protocol is mainly based on symmetric-key encryption. The message in step (6) serves as key confirmation, that is, it convinces C that S_i is in possession of the same session key that C uses.
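A minimal executable sketch of steps (4)-(6) of this message flow is given below, assuming the client has already obtained K_CT via steps (1)-(3). Fernet stands in for the symmetric encryption E, JSON tuples stand in for the message framing, layouts are omitted, and all identifiers are hypothetical; none of this is the paper's concrete wire format:

import json, time
from cryptography.fernet import Fernet

# Long-term shared keys established at setup: M (and its TGS role T) with each S_i,
# plus the client-TGS key K_CT, which we take as already established here.
K_MS = {f"S{i}": Fernet.generate_key() for i in range(1, 4)}
K_CT = Fernet.generate_key()

def tgs_issue(client_id, device_ids, t):
    """Step (4): T generates sk_i, one ticket E(K_MSi; ID_C, t, sk_i) per device,
    and E(K_CT; sk_1..sk_n) so that C can recover the session keys."""
    sks = {d: Fernet.generate_key() for d in device_ids}
    tickets = {d: Fernet(K_MS[d]).encrypt(
                   json.dumps({"IDC": client_id, "t": t, "sk": sks[d].decode()}).encode())
               for d in device_ids}
    bundle = Fernet(K_CT).encrypt(json.dumps({d: k.decode() for d, k in sks.items()}).encode())
    return tickets, bundle

def client_contact_device(client_id, device_id, ticket, bundle, t):
    """Step (5): C extracts sk_i from the bundle and sends the ticket plus E(sk_i; ID_C, t)."""
    sk = json.loads(Fernet(K_CT).decrypt(bundle))[device_id].encode()
    authenticator = Fernet(sk).encrypt(json.dumps({"IDC": client_id, "t": t}).encode())
    return sk, ticket, authenticator

def device_accept(device_id, ticket, authenticator):
    """Step (6): S_i recovers sk_i from the ticket, checks the authenticator, confirms the key."""
    inner = json.loads(Fernet(K_MS[device_id]).decrypt(ticket))
    sk = inner["sk"].encode()
    auth = json.loads(Fernet(sk).decrypt(authenticator))
    assert auth["IDC"] == inner["IDC"] and auth["t"] == inner["t"]
    return Fernet(sk).encrypt(json.dumps({"t_plus_1": inner["t"] + 1}).encode())

t = int(time.time())
tickets, bundle = tgs_issue("C", ["S1", "S2", "S3"], t)
sk1, tkt, auth = client_contact_device("C", "S1", tickets["S1"], bundle, t)
confirm = device_accept("S1", tkt, auth)
print(json.loads(Fernet(sk1).decrypt(confirm)))   # key confirmation, t + 1

The sketch makes the escrow problem visible: every sk_i is created inside tgs_issue, i.e., by the metadata server, which is exactly what the protocols in Section IV set out to avoid.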
IV. OVERVIEW OF OUR PROTOCOLS

We describe our design goals and give some intuition for the variety of pNFS authenticated key exchange (pNFS-AKE) protocols that we consider in this work. (Without loss of generality, we use the term "key exchange" here, although key establishment between two parties can be based on either key transport or key agreement [39].) In these protocols, we focus on parallel session key establishment between a client and n different storage devices through a metadata server. Nevertheless, they can be extended straightforwardly to the multi-user setting, i.e., many-to-many communications between clients and storage devices.

A. Design Goals

In our solutions, we focus on efficiency and scalability with respect to the metadata server. That is, our goal is to reduce the workload of the metadata server, while the computational and communication overhead for both the client and the storage device should remain reasonably low. More importantly, we would like to meet all these goals while ensuring at least roughly similar security to that of the Kerberos-based protocol shown in Section III-C. In fact, we consider a stronger security model with forward secrecy for three of our protocols, such that compromise of a long-term secret key of a client C or a storage device S_i will not expose the associated past session keys shared between C and S_i. Further, we would like an escrow-free solution, that is, the metadata server does not learn the session key shared between a client and a storage device, unless the server colludes with either one of them.

B. Main Ideas

Recall that in Kerberos-based pNFS, the metadata server is required to generate all service tickets E(K_MSi; ID_C, t, sk_i) and session keys sk_i between C and S_i for all 1 ≤ i ≤ n, which places a heavy workload on the server. In our solutions, intuitively, C first pre-computes some key materials and forwards them to M, which in return issues the corresponding "authentication tokens" (or service tickets). C can then, when accessing S_i (for all i), derive session keys from the pre-computed key materials and present the corresponding authentication tokens.
Note here, C is not required to compute thekey materials before each access request to a storage device,but instead this is done at the beginning of a pre-definedvalidity period v, which may be, for example, a day or week ormonth. For each request to access one or more storage devicesat a specific time t, C then computes a session key from thepre-computed material. This way, the workload of generatingsession keys is amortized over v for all the clients within thefile system. Our three variants of pNFS-AKE protocols can besummarized as follows:pNFS-AKE-I: Our first protocol can be regarded as amodified version of Kerberos that allows the client togenerate its own session keys. That is, the key materialused to derive a session key is pre-computed by theclient for each v and forwarded to the correspondingstorage device in the form of an authentication tokenat time t (within v). As with Kerberos, symmetric keyencryption is used to protect the confidentiality of secretinformation used in the protocol. However, the protocoldoes not provide any forward secrecy. Further, the keyescrow issue persists here since the authentication tokenscontaining key materials for computing session keys aregenerated by the server.pNFS-AKE-II: To address key escrow while achievingforward secrecy simultaneously, we incorporate a Diffie-Hellman key agreement technique into Kerberos-likepNFS-AKE-I. Particularly, the client C and the storagedevice Si each now chooses a secret value (that is knownonly to itself) and pre-computes a Diffie-Hellman keycomponent. A session key is then generated from both theDiffie-Hellman components. Upon expiry of a time periodv, the secret values and Diffie-Hellman key componentsare permanently erased, such that in the event when eitherC or Si is compromised, the attacker will no longer haveaccess to the key values required to compute past sessionkeys. However, note that we achieve only partial forwardsecrecy (with respect to v), by trading efficiency oversecurity. This implies that compromise of a long-termkey can expose session keys generated within the currentv. However, past session keys in previous (expired) timeperiods v(for v< v) will not be affected.pNFS-AKE-III: Our third protocol aims to achieve fullforward secrecy, that is, exposure of a long-term keyaffects only a current session key (with respect to t), butnot all the other past session keys. We would also liketo prevent key escrow. In a nutshell, we enhance pNFSAKE-II with a key update technique based on any efficientone-way function, such as a keyed hash function. In PhaseI, we require C and each Si to share some initial keymaterial in the form of a Diffie-Hellman key. In Phase II,the initial shared key is then used to derive session keysin the form of a keyed hash chain. Since a hash value inthe chain does not reveal information about its pre-image,the associated session key is forward secure.V. DESCRIPTION OF OUR PROTOCOLSWe first introduce some notation required for our protocols.Let F(k;m) denote a secure key derivation function that takesas input a secret key k and some auxiliary information m,and outputs another key. Let sid denote a session identifierwhich can be used to uniquely name the ensuing session. Letalso N be the total number of storage devices to which aclient is allowed to access. We are now ready to describe theconstruction of our protocols.A. pNFS-AKE-IOur first pNFS-AKE protocol is illustrated in Figure 3. Foreach validity period v, C must first pre-compute a set of key1045-9219 (c) 2013 IEEE. 
Personal use is permitted, but republication/redistribution requires IEEE permission. Seehttp://www.ieee.org/publications_standards/publications/rights/index.html for more information.This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI10.1109/TPDS.2015.2388447, IEEE Transactions on Parallel and Distributed Systems6Phase I – For each validity period v:(1) C ! M : IDC, E(KCM;KCS1 ; : : : ;KCSN )(2) M ! C : E(KMS1 ; IDC; IDS1 ; v;KCS1 ); : : : ; E(KMSN ; IDC; IDSN ; v;KCSN )Phase II – For each access request at time t:(1) C ! M : IDC, IDS1 ; : : : ; IDSn(2) M ! C : _1; : : : ; _n(3) C ! Si : _i; E(KMSi ; IDC; IDSi ; v;KCSi ), E(sk0i ; IDC; t)(4) Si ! C : E(sk0i ; t + 1)Fig. 3. Specification of pNFS-AKE-I.materials KCS1 ; : : : ;KCSN before it can access any of theN storage device Si (for 1 _ i _ N). The key materials aretransmitted to M. We assume that the communication betweenC andM is authenticated and protected through a secure channelassociated with key KCM established using the existingmethods as described in Section II-B. M then issues an authenticationtoken of the form E(KMSi ; IDC; IDSi ; v;KCSi )for each key material if the associated storage device Si hasnot been revoked.7 This completes Phase I of the protocol.From this point onwards, any request from C to access Si isconsidered part of Phase II of the protocol until v expires.When C submits an access request to M, the request containsall the identities of storage devices Si for 1 _ i _ n _ Nthat C wishes to access. For each Si, M issues a layout_i. C then forwards the respective layouts, authenticationtokens (from Phase I), and encrypted messages of the formE(sk0i ; IDC; t) to all n storage devices.Upon receiving an I/O request for a file object from C, eachSi performs the following:1) check if the layout _i is valid;2) decrypt the authentication token and recover key KCSi ;3) compute keys skzi = F(KCSi ; IDC; IDSi ; v; sid; z) forz = 0; 1;4) decrypt the encrypted message, check if IDC matchesthe identity of C and if t is within the current validityperiod v;5) if all previous checks pass, Si replies C with a keyconfirmation message using key sk0i .At the end of the protocol, sk1i is set to be the session keyfor securing communication between C and Si. We note that,as suggested in [7], sid in our protocol is uniquely generatedfor each session at the application layer, for example throughthe GSS-API.B. pNFS-AKE-IIWe now employ a Diffie-Hellman key agreement techniqueto both provide forward secrecy and prevent key escrow. Inthis protocol, each Si is required to pre-distribute some keymaterial to M at Phase I of the protocol.Let gx 2 G denote a Diffie-Hellman component, where Gis an appropriate group generated by g, and x is a numberrandomly chosen by entity X 2 fC; Sg. Let _ (k;m) denote7Here KMSi is regarded as a long-term symmetric secret key shared betweenM and Si. Also, we use authenticated encryption instead of encryptiononly encryption for security reasons. This will be clear in our security analysis.a secure MAC scheme that takes as input a secret key k anda target message m, and output a MAC tag. Our partiallyforward secure protocol is specified in Figure 4.At the beginning of each v, each Si that is governed byM generates a Diffie-Hellman key component gsi . The keycomponent gsi is forwarded to and stored by M. 
Similarly, Cgenerates its Diffie-Hellman key component gc and sends it toM.8 At the end of Phase I, C receives all the key componentscorresponding to all N storage devices that it may accesswithin time period v, and a set of authentication tokens of theform _ (KMSi ; IDC; IDSi ; v; gc; gsi ). We note that for ease ofexposition, we use the same key KMSi for encryption in step(1) and MAC in step (2). In actual implementation, however,we assume that different keys are derived for encryption andMAC, respectively, with KMSi as the master key. For example,the encryption key can be set to be F(KMSi ; “enc”), whilethe MAC key can be set to be F(KMSi ; “mac”).Steps (1) & (2) of Phase II are identical to those in theprevious variants. In step (3), C submits its Diffie-Hellmancomponent gc in addition to other information required in step(3) of pNFS-AKE-I. Si must verify the authentication tokento ensure the integrity of gc. Here C and Si compute skzi forz = 0; 1 as follow:skzi = F(gcsi ; IDC; IDSi ; gc; gsi ; v; sid; z):At the end of the protocol, C and Si share a session keysk1i .Note that since C distributes its chosen Diffie-Hellmanvalue gc during each protocol run (in Phase II), each Si needsto store only its own secret value si and is not required tomaintain a list of gc values for different clients. Upon expiryof v, they erase their secret values c and si, respectively, fromtheir internal states (or memory).Clearly, M does not learn anything about skzi unless itcolludes with the associated C or Si, and thus achievingescrow-freeness.C. pNFS-AKE-IIIAs explained before, pNFS-AKE-II achieves only partialforward secrecy (with respect to v). In the third variant ofour pNFS-AKE, therefore, we attempt to design a protocol8For consistency with the existing design of the Kerberos protocol, weassume that the Diffie-Hellman components are “conveniently” transmittedthrough the already established secure channel between them, although theDiffie-Hellman components may not necessarily be encrypted from a securityview point.1045-9219 (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. Seehttp://www.ieee.org/publications_standards/publications/rights/index.html for more information.This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI10.1109/TPDS.2015.2388447, IEEE Transactions on Parallel and Distributed Systems7Phase I – For each validity period v:(1) Si ! M : IDSi , E(KMSi ; gsi )(2) C ! M : IDC, E(KCM; gc)(3) M ! C : E(KCM; gs1 ; : : : ; gsN ),_ (KMS1 ; IDC; IDS1 ; v; gc; gs1 ); : : : ; _ (KMSN ; IDC; IDSN ; v; gc; gsN )Phase II – For each access request at time t:(1) C ! M : IDC, IDS1 ; : : : ; IDSn(2) M ! C : _1; : : : ; _n(3) C ! Si : _i; gc; _ (KMSi ; IDC; IDSi ; v; gc; gsi ), E(sk0i ; IDC; t)(4) Si ! C : E(sk0i ; t + 1)Fig. 4. Specification of pNFS-AKE-II (with partial forward secrecy and escrow-free).that achieves full forward secrecy and escrow-freeness. Astraightforward and well-known technique to do this is throughrequiring both C and Si to generate and exchange freshDiffie-Hellman components for each access request at timet. However, this would drastically increase the computationaloverhead at the client and the storage devices. Hence, we adopta different approach here by combining the Diffie-Hellman keyexchange technique used in pNFS-AKE-II with a very efficientkey update mechanism. 
The latter allows session keys to bederived using only symmetric key operations based on a agreedDiffie-Hellman key. Our protocol is illustrated in Figure 5.Phase I – For each validity period v:(1) Si ! M : IDSi , E(KMSi ; gsi )(2) C ! M : IDC, E(KCM; gc)(3) M ! C : E(KCM; gs1 ; : : : ; gsN )(4) M ! Si : E(KMSi ; IDC; IDSi ; v; gc; gsi )Phase II – For each access request at time t:(1) C ! M : IDC, IDS1 ; : : : ; IDSn(2) M ! C : _1; : : : ; _n(3) C ! Si : _i, E(skj,0i ; IDC; t)(4) Si ! C : E(skj,0i ; t + 1)Fig. 5. Specification of pNFS-AKE-III (with full forward secrecy and escrowfree).Phase I of the protocol is similar to that of pNFS-AKEII.In addition, M also distributes C’s chosen Diffie-Hellmancomponent gc to each Si. Hence, at the end of Phase I, bothC and Si are able to agree on a Diffie-Hellman value gcsi .Moreover, C and Si set F1(gcsi ; IDC; IDSi ; v) to be theirinitial shared secret state K0CSi .9During each access request at time t in Phase II, steps (1)& (2) of the protocol are identical to those in pNFS-AKE-II.In step (3), however, C can directly establish a secure sessionwith Si by computing skj,zi as follows:skj,zi = F2(Kj1CSi; IDC; IDSi ; j; sid; z)where j _ 1 is an increasing counter denoting the j-th sessionbetween C and Si with session key skj,1i . Both C and Si then9Unlike in pNFS-AKE-II where gc is distributed in Phase II, we need topre-distribute C’s chosen Diffie-Hellman component in Phase I because thesecret state K0C Sithat C and Si store will be updated after each request.This is essential to ensure forward secrecy.setKjCSi= F1(Kj1CSi; j)and update their internal states. Note that here we use twodifferent key derivation functions F1 and F2 to compute KjCSiand skj,zi , respectively. Our design can enforce independenceamong different session keys. Even if the adversary hasobtained a session key skj,1i , the adversary cannot derive Kj1CSior KjCSi . Therefore, the adversary cannot obtain skj+1,zi orany of the following session keys. It is worth noting that theshared state KjCSi should never be used as the session key inreal communications, and just like the long-term secret key, itshould be kept at a safe place, since otherwise, the adversarycan use it to derive all the subsequent session keys within thevalidity period (i.e., KjCSi can be regarded as a medium-termsecret key material). This is similar to the situation that oncethe adversary compromises the long-term secret key, it canlearn all the subsequence sessions.However, we stress that knowing the state information KjCSiallows the adversary to compute only the subsequence sessionkeys (i.e., skj+1,zi ; skj+2,zi ; _ _ _ ) within a validity period, butnot the previous session keys (i.e., sk1,zi ; sk2,zi ; _ _ _ ; skj,zi )within the same period. Our construction achieves thisby making use of one-way hash chains constructed usingthe pseudo-random function F1. Since knowing KjCSidoes not help the adversary in obtaining the previous states(Kj1CSi;Kj2CSi; :::;K0C Si ), we can prevent the adversary fromobtaining the corresponding session keys. Also, since compromiseof KMSi or KCM does not reveal the initial state K0CSiduring the Diffie-Hellman key exchange, we can achieve fullforward secrecy.VI. SECURITY ANALYSISWe work in a security model that allows us to showthat an adversary attacking our protocols will not able tolearn any information about a session key. Our model alsoimplies implicit authentication, that is, only the right protocolparticipant is able to learn or derive a session key.A. 
Security ModelWe now define a security model for pNFS-AKE. Let Mdenote the metadata server, SS = fS1; S2; _ _ _ ; SNg the setof storage devices, and CS = fC1;C2; _ _ _ ;Cg the set of1045-9219 (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. Seehttp://www.ieee.org/publications_standards/publications/rights/index.html for more information.This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI10.1109/TPDS.2015.2388447, IEEE Transactions on Parallel and Distributed Systems8clients. A party P 2 fMg[SS[CS may run many instancesconcurrently, and we denote instance i of party P by _iP .Our adversarial model is defined via a game between anadversary A and a game simulator SIM. SIM tosses arandom coin b at the beginning of the game and b willbe used later in the game. SIM then generates for eachSi 2 SS (Cj 2 CS, respectively) a secret key KMSi(KMCj , respectively) shared with M. A is allowed to makethe following queries to the simulator:SEND(P; i;m): This query allows the adversary to senda message m to an instance _iP . If the message m issent by another instance _jPwith the intended receiverP, then this query models a passive attack. Otherwise, itmodels an active attack by the adversary. The simulatorthen simulates the reaction of _iP upon receiving themessage m, and returns to A the response (if there isany) that _iP would generate.CORRUPT(P): This query allows the adversary to corrupta party P 2 SS[CS. By making this query, the adversarylearns all the information held by P at the time of thecorruption, including all the long-term and ephemeralsecret keys. However, the adversary cannot corrupt M(but see Remark 1).REVEAL(P; i): This query allows the adversary to learnthe session key that has been generated by the instance_iP (P 2 SS [ CS). If the instance _iP does not holdany session key, then a special symbol ? is returned tothe adversary.TEST(P; i): This query can only be made to a freshinstance _iP (as defined below) where P 2 SS [ CS.If the instance _iP holds a session key SKiP , then SIMdoes the following– if the coin b = 1, SIM returns SKiP to theadversary;– otherwise, a random session key is drawn from thesession key space and returned to the adversary.Otherwise, a special symbol ? is returned to the adversary.We define the partner id pidiP of an instance _iP as theidentity of the peer party recognized by _iP , and sidiP as theunique session id belonging to _iP . We say a client instance_iC and a storage device instance _jS are partners if pidiC = Sand pidjS = C and sidiC = sidjS.We say an instance _iP is fresh ifA has never made a CORRUPT query to P or pidiP ; andA has never made a REVEAL query to _iP or its partner.At the end of the game, the adversary outputs a bit bas herguess for b. The adversary’s advantage in winning the gameis defined asAdvpNFSA (k) = j2Pr[b= b] 􀀀 1j:Definition 1: We say a pNFS-AKE protocol is secure if thefollowing conditions hold.1) If an honest client and an honest storage device completematching sessions, they compute the same session key.2) For any PPT adversary A, AdvpNFSA (k) is a negligiblefunction of k.Forward Secrecy. The above security model for pNFS-AKEdoes not consider forward secrecy (i.e., the corruption ofa party will not endanger his/her previous communicationsessions). Below we first define a weak form of forwardsecrecy we call partial forward secrecy (PFS). 
We follow the approach of Canetti and Krawczyk [10] by introducing a new type of query:

EXPIRE(P, v): After receiving this query, no instance of P for time period v can be activated. In addition, the simulator erases all the state information and session keys held by the instances of party P that were activated during time period v.

Then, we redefine the freshness of an instance $\Pi^i_P$ as follows:
– A makes a CORRUPT(P) query only after an EXPIRE(P, v) query, where the instance $\Pi^i_P$ is activated during time period v;
– A has never made a REVEAL(P, i) query; and
– if $\Pi^i_P$ has a partner instance $\Pi^j_Q$, then A also obeys the above two rules with respect to $\Pi^j_Q$; otherwise, A has never made a CORRUPT($pid^i_P$) query.

The rest of the security game is the same. We define the advantage of the adversary as

$\mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_{\mathcal{A}}(k) = |2\Pr[b' = b] - 1|.$

We can easily extend the above definition to define full forward secrecy (FFS) by modifying the EXPIRE query as follows:

EXPIRE(P, i): Upon receiving this query, the simulator erases all the state information and the session key held by the instance $\Pi^i_P$.

The rest of the security model is the same as in the PFS game.

Remark 1. In our security model, we do not allow the adversary to corrupt the metadata server M, which holds all the long-term secret keys. However, in our forward secrecy model we do not really enforce such a requirement: it is easy to see that if the adversary corrupts all the parties in $SS \cup CS$, then the adversary has implicitly corrupted M. But we should also notice that there is no way to prevent active attacks once M is corrupted. Therefore, the adversary may corrupt all the parties (or M) only after the Test session has expired.

Remark 2. Our forward secrecy model also captures escrow-freeness. One way to define escrow-freeness is to define a new model that allows the adversary to corrupt the metadata server and learn all the long-term secret keys. However, as outlined in Remark 1, our forward secrecy model allows the adversary to obtain all the long-term secret keys under some necessary conditions. Hence, our forward secrecy model has implicitly captured escrow-freeness.

B. Security Proofs

Theorem 1: The pNFS-AKE-I protocol is secure without PFS if the authenticated encryption scheme E is secure under chosen-ciphertext attacks and F is a family of pseudo-random functions.

Proof. We define a sequence of games $G_i$ ($i \ge 0$) where $G_0$ is the original game defined in our security model without PFS. We also define $\mathrm{Adv}^{\mathrm{pNFS}}_i$ as the advantage of the adversary in game $G_i$. Then we have $\mathrm{Adv}^{\mathrm{pNFS}}_0 = \mathrm{Adv}^{\mathrm{pNFS}}_{\mathcal{A}}(k)$.

In game $G_1$ we change the original game as follows: the simulator randomly chooses an instance $\Pi^i_P$ among all the instances created in the game; if the TEST query is not performed on $\Pi^i_P$, the simulator aborts and outputs a random bit.
Then we have

$\mathrm{Adv}^{\mathrm{pNFS}}_1 = \frac{1}{n_I}\,\mathrm{Adv}^{\mathrm{pNFS}}_0$

where $n_I$ denotes the number of instances created in the game. In the following games, we use C and S to denote the client and the storage device involved in the test session, respectively, and v to denote the time period in which the test session is activated.

In game $G_2$, we change game $G_1$ as follows: let FORGE denote the event that A successfully forges a valid ciphertext $E(K_{MS}; ID_C, ID_S, v, K_{CS})$. If the event FORGE happens, then the simulator aborts the game and outputs a random bit. Since E is a secure authenticated encryption scheme, we have

$\Pr[b' = b \text{ in } G_1 \mid \neg\mathrm{FORGE}] = \Pr[b' = b \text{ in } G_2 \mid \neg\mathrm{FORGE}]$

and

$|\Pr[b' = b \text{ in } G_1] - \Pr[b' = b \text{ in } G_2]| \le \Pr[\mathrm{FORGE}] \le \mathrm{Adv}^{\mathrm{UF\text{-}CMA}}_{E}(k).$

Therefore, we have

$\mathrm{Adv}^{\mathrm{pNFS}}_1 \le \mathrm{Adv}^{\mathrm{pNFS}}_2 + 2\,\mathrm{Adv}^{\mathrm{UF\text{-}CMA}}_{E}(k).$

In game $G_3$ we use a random key K' instead of the decryption of $E(K_{MS}; ID_C, ID_S, v, K_{CS})$ to simulate the game. In the following, we show that $|\mathrm{Adv}^{\mathrm{pNFS}}_2 - \mathrm{Adv}^{\mathrm{pNFS}}_3|$ is negligible if the authenticated encryption scheme E is secure under adaptive chosen-ciphertext attacks (CCA).

We construct an adversary B in the CCA game for the authenticated encryption scheme E. B simulates game $G_2$ for the adversary A as follows. B generates all the long-term keys in the system except $K_{MS}$. B then randomly selects two keys $K_0$ and $K_1$ and obtains a challenge ciphertext $CH = E(K_{MS}; ID_C, ID_S, v, K)$ from its challenger, where K is either $K_0$ or $K_1$. B then uses CH as the authentication token used between C and S during the time period v, and uses $K_1$ as the decryption of CH to perform any related computation. For other authentication tokens related to $K_{MS}$, B generates them by querying its encryption oracle. Also, for any authentication token intended for S but not equal to CH, B performs the decryption by querying its decryption oracle. Finally, if the adversary A wins the game (denote this event by WIN), B outputs 1 (i.e., B guesses $K = K_1$); otherwise, B outputs 0 (i.e., B guesses $K = K_0$).

We can see that if $K = K_1$, the game simulated by B is the same as game $G_2$; otherwise, if $K = K_0$, the game simulated by B is the same as game $G_3$. So we have

$\mathrm{Adv}^{\mathrm{CCA}}_{\mathcal{B}}(k) = |2(\Pr[\mathrm{WIN} \mid K = K_1]\Pr[K = K_1] + \Pr[\mathrm{WIN} \mid K = K_0]\Pr[K = K_0]) - 1|$
$= \Pr[\mathrm{WIN} \mid K = K_1] - \Pr[\mathrm{WIN} \mid K = K_0]$
$= \frac{1}{2}\,(\mathrm{Adv}^{\mathrm{pNFS}}_2 - \mathrm{Adv}^{\mathrm{pNFS}}_3)$

and

$\mathrm{Adv}^{\mathrm{pNFS}}_2 \le \mathrm{Adv}^{\mathrm{pNFS}}_3 + 2\,\mathrm{Adv}^{\mathrm{CCA}}_{E}(k).$

In game $G_4$ we then replace the function $F(K', \cdot)$ with a random function $RF(\cdot)$. Since F is a family of pseudo-random functions, if the adversary's advantage changes significantly in game $G_4$, we can construct a distinguisher D against F. D simulates game $G_3$ for A honestly, except that whenever D needs to compute $F(K', x)$, D queries its own oracle O, which is either $F(K', \cdot)$ or $RF(\cdot)$. At the end of the game, if A wins the game, D outputs 1; otherwise, D outputs 0.

We can see that if $O = F(K', \cdot)$, A is in game $G_3$; otherwise, if $O = RF(\cdot)$, A is in game $G_4$. Therefore, we have

$\mathrm{Adv}^{\mathrm{prf}}_{\mathcal{D}}(k) = \Pr[D \text{ outputs } 1 \mid O = F(K', \cdot)] - \Pr[D \text{ outputs } 1 \mid O = RF(\cdot)]$
$= \Pr[\mathrm{WIN} \mid O = F(K', \cdot)] - \Pr[\mathrm{WIN} \mid O = RF(\cdot)]$
$= \frac{1}{2}\,(\mathrm{Adv}^{\mathrm{pNFS}}_3 - \mathrm{Adv}^{\mathrm{pNFS}}_4)$

and

$\mathrm{Adv}^{\mathrm{pNFS}}_3 \le \mathrm{Adv}^{\mathrm{pNFS}}_4 + 2\,\mathrm{Adv}^{\mathrm{prf}}_{F}(k).$

In game $G_4$, we have

$sk^0_i = RF(ID_C, ID_S, v, sid, 0)$ and $sk^1_i = RF(ID_C, ID_S, v, sid, 1)$

where sid is the unique session id for the test session. Now, since RF is a random function, $sk^1_i$ is just a random key independent of the game.
Therefore, the adversary has no advantage in winning the game, i.e.,

$\mathrm{Adv}^{\mathrm{pNFS}}_4 = 0.$

Combining all of the above, we have

$\mathrm{Adv}^{\mathrm{pNFS}}_{\mathcal{A}}(k) \le 2 n_I\,(\mathrm{Adv}^{\mathrm{UF\text{-}CMA}}_{E}(k) + \mathrm{Adv}^{\mathrm{CCA}}_{E}(k) + \mathrm{Adv}^{\mathrm{prf}}_{F}(k)).$

Theorem 2: The pNFS-AKE-II protocol achieves partial forward secrecy if τ is a secure MAC scheme, the DDH assumption holds in the underlying group G, and F is a family of pseudo-random functions.

Proof. The proof is similar to that for Theorem 1. Below we only elaborate on the differences between the two proofs. We again define a sequence of games $G_i$ where $G_0$ is the original PFS security game.

In game $G_1$, we change game $G_0$ in the same way as in the previous proof; then we also have

$\mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_1 = \frac{1}{n_I}\,\mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_0$

where $n_I$ denotes the number of instances created in the game. Let C and S denote the client and the storage device involved in the test session, respectively, and v the time period in which the test session is activated.

In game $G_2$, we further change game $G_1$ as follows: let FORGE denote the event that A successfully forges a valid MAC tag $\tau(K_{MS_i}; ID_C, ID_{S_i}, v, g^c, g^{s_i})$ before corrupting $S_i$. If the event FORGE happens, then the simulator aborts the game and outputs a random bit. Then we have

$\Pr[b' = b \text{ in } G_1 \mid \neg\mathrm{FORGE}] = \Pr[b' = b \text{ in } G_2 \mid \neg\mathrm{FORGE}]$

and

$|\Pr[b' = b \text{ in } G_1] - \Pr[b' = b \text{ in } G_2]| \le \Pr[\mathrm{FORGE}] \le \mathrm{Adv}^{\mathrm{UF\text{-}CMA}}_{\tau}(k).$

Therefore, we have

$\mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_1 \le \mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_2 + 2\,\mathrm{Adv}^{\mathrm{UF\text{-}CMA}}_{\tau}(k).$

In game $G_3$, we change game $G_2$ by replacing the Diffie-Hellman key $g^{cs}$ in the test session with a random element $K \in G$. Below we show that if the adversary's advantage changes significantly in game $G_3$, we can construct a distinguisher B that breaks the Decisional Diffie-Hellman (DDH) assumption.

B is given a challenge $(g^a, g^b, Z)$ in which, with equal probability, Z is either $g^{ab}$ or a random element of G. B simulates game $G_2$ honestly by generating all the long-term secret keys for all the clients and storage devices. Then, for the time period v, B sets $g^c = g^a$ and $g^s = g^b$. When the value of $g^{cs}$ is needed, B uses the value of Z to perform the corresponding computation. Finally, if A wins the game, B outputs 1; otherwise, B outputs 0.

Since the adversary cannot corrupt C or S before the time period v has expired, if a FORGE event did not happen, then the values of the Diffie-Hellman components in the test session must be $g^a$ and $g^b$. If $Z = g^{ab}$, then A is in game $G_2$; otherwise, if Z is a random element of G, then A is in game $G_3$. Therefore we have

$\mathrm{Adv}^{\mathrm{DDH}}_{\mathcal{B}}(k) = \Pr[B \text{ outputs } 1 \mid Z = g^{ab}] - \Pr[B \text{ outputs } 1 \mid Z = g^r]$
$= \Pr[\mathrm{WIN} \mid Z = g^{ab}] - \Pr[\mathrm{WIN} \mid Z = g^r]$
$= \frac{1}{2}\,(\mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_2 - \mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_3)$

and

$\mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_2 \le \mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_3 + 2\,\mathrm{Adv}^{\mathrm{DDH}}(k).$

In game $G_4$, we replace the pseudo-random function $F(K, \cdot)$ with a random function $RF(\cdot)$.
By following the same analysis as in the previous proof, we have

$\mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_3 \le \mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_4 + 2\,\mathrm{Adv}^{\mathrm{prf}}_{F}(k)$

and

$\mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_4 = 0.$

Therefore, combining all of the above, we have

$\mathrm{Adv}^{\mathrm{pNFS\text{-}PFS}}_{\mathcal{A}}(k) \le 2 n_I\,(\mathrm{Adv}^{\mathrm{UF\text{-}CMA}}_{\tau}(k) + \mathrm{Adv}^{\mathrm{DDH}}(k) + \mathrm{Adv}^{\mathrm{prf}}_{F}(k)).$

Theorem 3: The pNFS-AKE-III protocol achieves full forward secrecy if E is a secure authenticated encryption scheme, the DDH assumption holds in the underlying group G, and F is a family of pseudo-random functions.

Proof (Sketch). The proof is very similar to that for Theorem 2; below we provide a sketch.

Let C and S denote the client and the storage device involved in the test session, respectively, and v the time period in which the test session is activated. Without loss of generality, suppose the test session is the j-th session between C and S within the period v. Since the adversary is not allowed to corrupt C or S before the test session has expired, due to the unforgeability of E and the DDH assumption, the simulator can replace $g^{cs}$ in the time period v with a random element $K \in G$. Then, in the next augmented game, the simulator replaces $K^0_{CS}$ by a random key. Since $F_1$ is a secure pseudo-random function, such a replacement is indistinguishable from the adversary's viewpoint. The simulator then replaces $sk^{i,z}$ (for z = 0, 1) and $K^i_{CS}$ with independent random keys for all $1 \le i \le j$. Once again, since $F_1$ and $F_2$ are secure pseudo-random functions, the augmented games are indistinguishable to the adversary. Finally, in the last augmented game, we can claim that the adversary has no advantage in winning the game, since a random key is returned to the adversary regardless of whether b = 0 or b = 1. This completes the sketch of the proof. □

VII. PERFORMANCE EVALUATION

A. Computational Overhead

We consider the computational overhead of w access requests over time period v for a metadata server M, a client C, and storage devices $S_i$ for $i \in [1, N]$. We assume that a layout is in the form of a MAC, and that the computational cost of the authenticated symmetric encryption scheme E is similar to that of its non-authenticated counterpart. (Footnote 10: For example, according to the Crypto++ 5.6.0 benchmarks, AES/GCM (128-bit, 64K tables) has similar speed to AES/CBC (128-bit key) [12].) Table I gives a comparison between Kerberos-based pNFS and our protocols in terms of the number of cryptographic operations required for executing the protocols over time period v.

To give a more concrete view, Table II provides an estimation of the total computation times in seconds (s) for each protocol, using the Crypto++ benchmarks obtained on an Intel Core 2 1.83 GHz processor under Windows Vista in 32-bit mode [12]. We choose AES/CBC (128-bit key) for encryption, AES/GCM (128-bit, 64K tables) for authenticated encryption, HMAC(SHA-1) for MAC, and SHA-1 for key derivation. Also, Diffie-Hellman exponentiations are based on DH 1024-bit key pair generation.
TABLE I. Comparison in terms of cryptographic operations for w access requests from C to $S_i$ via M over time period v, for all $1 \le i \le n$ and where $n \le N$.

Kerberos-pNFS
– Symmetric key encryption/decryption: M: w(n+5); C: w(2n+3); all $S_i$: 3wn; Total: w(6n+8)
– MAC generation/verification: M: wn; C: 0; all $S_i$: wn; Total: 2wn

pNFS-AKE-I
– Symmetric key encryption/decryption: M: N+1; C: 2wn+1; all $S_i$: 3wn; Total: 5wn+N+2
– MAC generation/verification: M: wn; C: 0; all $S_i$: wn; Total: 2wn
– Key derivation: M: 0; C: 2wn; all $S_i$: 2wn; Total: 4wn

pNFS-AKE-II
– Symmetric key encryption/decryption: M: N+2; C: 2wn+2; all $S_i$: 2wn+1; Total: 4wn+N+5
– MAC generation/verification: M: wn+N; C: 0; all $S_i$: 2wn; Total: 3wn+N
– Key derivation: M: 0; C: 2wn; all $S_i$: 2wn; Total: 4wn
– Diffie-Hellman exponentiation: M: 0; C: N+1; all $S_i$: N+wn; Total: 2N+wn+1

pNFS-AKE-III
– Symmetric key encryption/decryption: M: 2N+2; C: 2wn+2; all $S_i$: 2wn+1; Total: 4wn+2N+5
– MAC generation/verification: M: wn; C: 0; all $S_i$: wn; Total: 2wn
– Key derivation: M: 0; C: 3wn+N; all $S_i$: 3wn+N; Total: 6wn+2N
– Diffie-Hellman exponentiation: M: 0; C: N+1; all $S_i$: 2N; Total: 3N+1

Our estimation is based on a fixed message size of 1024 bytes for all cryptographic operations, and we consider the following case:
– N = 2n and w = 50 (total access requests by C within v);
– C interacts with 10^3 storage devices concurrently for each access request, i.e., n = 10^3;
– M has interacted with 10^5 clients over time period v; and
– each $S_i$ has interacted with 10^4 clients over time period v.

Table II shows that our protocols reduce the workload of M in the existing Kerberos-based protocol by up to approximately 54%. This improves the scalability of the metadata server considerably. The total estimated computational cost for M for serving 10^5 clients is 8.02 × 10^4 s (roughly 22.3 hours) in Kerberos-based pNFS, compared with 3.68 × 10^4 s (roughly 10.2 hours) in pNFS-AKE-I and 3.86 × 10^4 s (roughly 10.6 hours) in pNFS-AKE-III. In general, one can see from Table I that the workload of M is always reduced by roughly half for any values of (w, n, N). The scalability of our protocols from the server's perspective, in terms of supporting a large number of clients, is further illustrated in the left graph of Figure 6, where we consider each client requesting access to an average of n = 10^3 storage devices.

Moreover, the additional overhead for C (and all $S_i$) for achieving full forward secrecy and escrow-freeness using our techniques is minimal. The right graph of Figure 6 shows that our pNFS-AKE-III protocol has roughly similar computational overhead in comparison with Kerberos-pNFS when the number of accessed storage devices is small, and the increased computational overhead for accessing 10^3 storage devices in parallel is only roughly 1/500 of a second compared to that of Kerberos-pNFS, a very reasonable trade-off between efficiency and security. The small increase in overhead is partly due to the fact that some of our cryptographic cost is amortized over a time period v (recall that for each access request at time t, the client runs only Phase II of the protocol).

On the other hand, we note that the significantly higher computational overhead incurred by $S_i$ in pNFS-AKE-II is largely due to the cost of Diffie-Hellman exponentiations. This is a space-computation trade-off, as explained in Section V-B (see Section VII-C for further discussion on key storage). Nevertheless, 256 s is the computation time for 10^3 storage devices over time period v, so the average computation time for a single storage device is still reasonably small, i.e., less than 1/3 of a second over time period v. Moreover, we can reduce the computational cost for $S_i$ to roughly that of pNFS-AKE-III if C pre-distributes its $g^c$ value to all relevant $S_i$ so that they can pre-compute the $g^{c s_i}$ value for each time period v.
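As a quick cross-check of the roughly halved metadata server workload discussed above, the short program below (purely illustrative; it counts operations from Table I rather than measuring time) tallies M's per-client operation counts for Kerberos-pNFS and pNFS-AKE-I under the example parameters w = 50, n = 10^3, N = 2n.

```java
// Illustrative tally of per-client cryptographic operations at the metadata server M,
// taken from Table I with w = 50 access requests, n = 1000 storage devices per request
// and N = 2n. Operation *times* (Table II) are not modelled here.
public class MetadataWorkloadSketch {
    public static void main(String[] args) {
        int w = 50, n = 1000, N = 2 * n;

        long kerberos = (long) w * (n + 5)   // symmetric encryption/decryption at M
                      + (long) w * n;        // MAC generation/verification at M

        long akeI = (N + 1)                  // pNFS-AKE-I: symmetric ops once per period v
                  + (long) w * n;            // MAC generation/verification at M

        System.out.println("Kerberos-pNFS ops at M per client: " + kerberos);
        System.out.println("pNFS-AKE-I    ops at M per client: " + akeI);
        System.out.printf("pNFS-AKE-I needs about %.0f%% of the Kerberos operation count%n",
                100.0 * akeI / kerberos);
    }
}
```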
TABLE II. Comparison in terms of computation times in seconds (s) over time period v between Kerberos-pNFS and our protocols. Here FFS denotes full forward secrecy, while EF denotes escrow-freeness.

Protocol        FFS   EF    M             C      S_i
Kerberos-pNFS   –     –     8.02 × 10^4   0.90   17.00
pNFS-AKE-I      –     –     3.68 × 10^4   1.50   23.00
pNFS-AKE-II     ✓     –     3.82 × 10^4   2.40   256.00
pNFS-AKE-III    ✓     ✓     3.86 × 10^4   2.71   39.60

B. Communication Overhead

Assuming fresh session keys are used to secure communications between the client and multiple storage devices, all our protocols clearly have reduced bandwidth requirements. This is because, during each access request, the client does not need to fetch the required authentication token set from M. Hence, the reduction in bandwidth consumption is approximately the size of n authentication tokens.

Fig. 6. Comparison in terms of computation times for M (on the left, seconds versus the number of clients) and for C (on the right, milliseconds versus the number of storage devices) at a specific time t.

C. Key Storage

We note that the key storage requirements of Kerberos-pNFS and all our described protocols are roughly similar from the client's perspective. For each access request, the client needs to store N or N+1 key materials (either in the form of symmetric keys or Diffie-Hellman components) in its internal state.

However, the key storage requirement for each storage device is higher in pNFS-AKE-III, since the storage device has to store some key material for each client in its internal state. This is in contrast to Kerberos-pNFS, pNFS-AKE-I and pNFS-AKE-II, which are not required to maintain any client key information.

VIII. OTHER RELATED WORK

Some of the earliest work on securing large-scale distributed file systems, for example [24], [22], already employed Kerberos for performing authentication and enforcing access control. Kerberos, being based mostly on symmetric key techniques in its early deployment, was generally believed to be more suitable for rather closed, well-connected distributed environments.

On the other hand, data grids and file systems such as OceanStore [27], LegionFS [54] and FARSITE [3] make use of public key cryptographic techniques and a public key infrastructure (PKI) to perform cross-domain user authentication. Independently, SFS [36], also based on public key cryptographic techniques, was designed to enable inter-operability of different key management schemes. Each user of these systems is assumed to possess a certified public/private key pair. However, these systems were not designed specifically with scalability and parallel access in mind.

With the increasing deployment of highly distributed and network-attached storage systems, subsequent work, such as [4], [55], [19], focused on scalable security. Nevertheless, these proposals assumed that a metadata server shares a group secret key with each distributed storage device.
The group key is used to produce capabilities in the form of message authentication codes. However, compromise of the metadata server or any storage device allows the adversary to impersonate the server to any other entity in the file system. This issue can be alleviated by requiring each storage device to share a different secret key with the metadata server. Nevertheless, such an approach restricts a capability to authorising I/O on only a single device, rather than on larger groups of blocks or objects that may reside on multiple storage devices.

More recent proposals, which adopt a hybrid symmetric key and asymmetric key method, allow a capability to span any number of storage devices while maintaining a reasonable efficiency-security ratio [40], [29], [30], [31]. For example, Maat [30] encompasses a set of protocols that facilitate (i) authenticated key establishment between clients and storage devices, (ii) capability issuance and renewal, and (iii) delegation between two clients. The authenticated key establishment protocol allows a client to establish and re-use a shared (session) key with a storage device. However, Maat and other recent proposals do not come with a rigorous security analysis.

As with NFS, authentication in the Hadoop Distributed File System (HDFS) is also based on Kerberos via the GSS-API. Each HDFS client obtains a TGT that lasts for 10 hours and is renewable for 7 days by default, and access control is based on Unix-style ACLs. However, HDFS makes use of the Simple Authentication and Security Layer (SASL) [38], a framework for providing a structured interface between connection-oriented protocols and replaceable mechanisms. (Footnote 11: SASL's design is intended to allow new protocols to reuse existing mechanisms without requiring redesign of the mechanisms, and to allow existing protocols to make use of new mechanisms without redesign of the protocols [38].) In order to improve the performance of the KDC, the developers of HDFS chose to use a number of tokens for communication secured with an RPC digest scheme. The Hadoop security design makes use of Delegation Tokens, Job Tokens, and Block Access Tokens. Each of these tokens is similar in structure and based on HMAC-SHA1. Delegation Tokens are used by clients to communicate with the Name Node in order to gain access to HDFS data, while Block Access Tokens are used to secure communication between the Name Node and Data Nodes and to enforce HDFS filesystem permissions. The Job Token, on the other hand, is used to secure communication between the MapReduce engine Task Tracker and individual tasks. Note that the RPC digest scheme uses symmetric encryption and, depending upon the token type, the shared key may be distributed to hundreds or even thousands of hosts [41].

IX. CONCLUSIONS

We proposed three authenticated key exchange protocols for the parallel network file system (pNFS). Our protocols offer three appealing advantages over the existing Kerberos-based pNFS protocol. First, the metadata server executing our protocols has a much lower workload than that of the Kerberos-based approach.
Second, two of our protocols provide forward secrecy: one is partially forward secure (with respect to multiple sessions within a time period), while the other is fully forward secure (with respect to a single session). Third, we have designed a protocol which not only provides forward secrecy but is also escrow-free.

Aggregated-Proof Based Hierarchical Authentication Scheme for the Internet of Things


ABSTRACT:

The Internet of Things (IoT) is becoming an attractive system paradigm to realize interconnections across the physical, cyber, and social spaces. During the interactions among these ubiquitous things, security issues become noteworthy, and it is important to establish enhanced solutions for security protection. In this work, we focus on an existing U2IoT architecture (i.e., unit IoT and ubiquitous IoT) to design an aggregated-proof based hierarchical authentication scheme (APHA) for its layered networks. Concretely, 1) aggregated-proofs are established for multiple targets to achieve backward and forward anonymous data transmission; 2) directed path descriptors, homomorphism functions, and Chebyshev chaotic maps are jointly applied for mutual authentication; 3) different access authorities are assigned to achieve hierarchical access control. Meanwhile, a BAN logic formal analysis is performed to prove that the proposed APHA has no obvious security defects, and that it is potentially applicable to the U2IoT architecture and other IoT applications.

INTRODUCTION:

The Internet of Things (IoT) is emerging as an attractive system paradigm to integrate physical perceptions, cyber interactions, and social correlations, in which the physical objects, cyber entities, and social attributes are required to achieve interconnections with the embedded intelligence. During the interconnections, the IoT is suffering from severe security challenges, and there are potential vulnerabilities due to the complicated networks referring to heterogeneous targets, sensors, and backend management systems. It becomes noteworthy to address the security issues for the ubiquitous things in the IoT.

Recent studies have addressed the general IoT, including system models, service platforms, infrastructure architectures, and standardization. In particular, a human-society inspired U2IoT architecture (i.e., unit IoT and ubiquitous IoT) has been proposed to achieve physical-cyber-social convergence; in the U2IoT architecture, the mankind neural system and the social organization framework are introduced to establish the single-application and multi-application IoT frameworks.

Multiple unit IoTs compose a local IoT within a region, or an industrial IoT for an industry. The local IoTs and industrial IoTs are covered within a national IoT, and jointly form the ubiquitous IoT. Towards IoT security, related works mainly address security architectures and recommended countermeasures, secure communication and networking mechanisms, cryptography algorithms, and application security solutions.

Current research mainly covers three aspects: system security, network security, and application security.

_ System security mainly considers a whole IoT system to identify the unique security and privacy challenges, to design systemic security frameworks, and to provide security measures and guidelines.

_ Network security mainly focuses on wireless communication networks (e.g., wireless sensor networks (WSN), radio frequency identification (RFID), and the Internet) to design key distribution algorithms, authentication protocols, advanced signature algorithms, access control mechanisms, and secure routing protocols. Particularly, authentication protocols are popular to address security and privacy issues in the IoT, and should be designed considering the things’ heterogeneity and hierarchy.

_ Application security serves IoT applications (e.g., multimedia, smart home, and smart grid), and resolves practical problems under particular scenario requirements.

Towards the U2IoT architecture, a reasonable authentication scheme should satisfy the following requirements. 1) Data CIA (i.e., confidentiality, integrity, and availability): The exchanged messages between any two legal entities should be protected against illegal access and modification. The communication channels should be reliable for the legal entities. 2) Hierarchical access control: Diverse access authorities are assigned to different entities to provide hierarchical interactions.

An unauthorised entity cannot access data exceeding its permission. 3) Forward security: Attackers cannot correlate any two communication sessions, and also cannot derive the previous interrogations according to the ongoing session. 4) Mutual authentication: The untrusted entities should pass each other’s verification so that only the legal entity can access the networks for data acquisition. 5) Privacy preservation: The sensors cannot correlate or disclose an individual target’s private information (e.g., location). Considering above security requirements, we design an aggregated proof based hierarchical authentication scheme (APHA) for the unit IoT.

EXISTING SYSTEM:

If the existing WSN is to be completely integrated into the Internet as part of the Internet of Things (IoT), it is necessary to consider various security challenges, such as the creation of a secure channel between an Internet host and a sensor node. In order to create such a channel, it is necessary to provide key management mechanisms that allow two remote devices to negotiate the security credentials (e.g., secret keys) that will be used to protect the information flow.

Such work analyzes not only the applicability of existing mechanisms, such as public key cryptography and pre-shared keys, for sensor nodes in the IoT context, but also the applicability of link-layer oriented key management systems (KMS) whose original purpose is to provide shared keys to sensor nodes belonging to the same WSN. The aim is to provide key management mechanisms that allow two remote devices to negotiate security credentials (e.g., shared keys, Blom key pairs, and polynomial shares). The authors analyzed the applicability of existing mechanisms, including public key infrastructure (PKI) and pre-shared keys, for sensor nodes in IoT contexts.

DISADVANTAGES:

A smart community model for IoT applications, together with a cyber-physical system of networked smart homes, has been introduced with security considerations; filtering false network traffic and avoiding unreliable home gateways are suggested as safeguards, and security challenges such as cooperative authentication, unreliable node detection, target tracking, and intrusion detection are discussed. Yet even government systems, which are supposed to have the highest level of security, have been breached by groups that hacked into federal sites and released confidential information to the public. Therefore, if all of our information is stored on the Internet, attackers could hack into it and find out everything about individuals' lives. Also, companies could misuse the information that they are given access to, which is a common mishap within companies.

PROPOSED SYSTEM:

The proposed scheme realizes data confidentiality and data integrity through directed path descriptors and homomorphism-based Chebyshev chaotic maps, establishes trust relationships via lightweight mechanisms, and applies dynamically hashed values to achieve session freshness. This indicates that the APHA is suitable for the U2IoT architecture.

In this work, the main purpose is to provide bottom-up safeguard for the U2IoT architecture to realize secure interactions. Towards the U2IoT architecture, a reasonable authentication scheme should satisfy the following requirements.

1) Data CIA (i.e., confidentiality, integrity, and availability): The exchanged messages between any two legal entities should be protected against illegal access and modification. The communication channels should be reliable for the legal entities.

2) Hierarchical access control: Diverse access authorities are assigned to different entities to provide hierarchical interactions. An unauthorised entity cannot access data exceeding its permission.

3) Forward security: Attackers cannot correlate any two communication sessions, and also cannot derive the previous interrogations according to the ongoing session.

4) Mutual authentication: The untrusted entities should pass each other’s verification so that only the legal entity can access the networks for data acquisition.

5) Privacy preservation: The sensors cannot correlate or disclose an individual target’s private information (e.g., location). Considering above security requirements, we design an aggregated proof based hierarchical authentication scheme (APHA) for the ubiquitous IoT.

ADVANTAGES:

Aggregated-proofs are established by wrapping multiple targets' messages for anonymous data transmission, which ensures that individual information cannot be revealed over either the backward or the forward communication channel.
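The exact APHA aggregation algorithm is not reproduced in this document, but the general idea of wrapping several targets' messages into one proof can be illustrated as below, where the individual tags are folded into a single digest so that no target's message is exposed on its own; all names here are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

// Illustration only: several targets' per-session tags are wrapped into one digest, so the
// verifier handles the group as a whole rather than any individual target's message.
// This is NOT the APHA construction itself, just the aggregation idea.
public class AggregatedProofSketch {

    static byte[] aggregate(List<byte[]> targetTags) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        for (byte[] tag : targetTags) {
            sha.update(tag); // wrap every target's tag into the running digest
        }
        return sha.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] tag1 = "tag-of-target-1".getBytes(StandardCharsets.UTF_8);
        byte[] tag2 = "tag-of-target-2".getBytes(StandardCharsets.UTF_8);
        byte[] proof = aggregate(Arrays.asList(tag1, tag2));
        System.out.println("aggregated proof is " + proof.length + " bytes");
    }
}
```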

Directed path descriptors are defined based on homomorphism functions to establish correlation during the cross-layer interactions. Chebyshev chaotic maps are applied to describe the mapping relationships between the shared secrets and the path descriptors for mutual authentication.
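What makes Chebyshev chaotic maps usable for authentication and key agreement is the semigroup property T_r(T_s(x)) = T_{rs}(x) = T_s(T_r(x)): two parties holding secret degrees r and s reach the same value from a public seed x. The sketch below only demonstrates that property numerically over doubles; practical schemes work over large finite fields, and this is not the APHA protocol itself.

```java
// Numerical demonstration of the Chebyshev-map semigroup property,
// T_r(T_s(x)) == T_s(T_r(x)) == T_{rs}(x), which underlies Chebyshev-based key agreement.
// Doubles are used for illustration only; real schemes use modular arithmetic.
public class ChebyshevSketch {

    // T_n(x) = cos(n * arccos(x)) for x in [-1, 1]
    static double chebyshev(long n, double x) {
        return Math.cos(n * Math.acos(x));
    }

    public static void main(String[] args) {
        double x = 0.3141;   // public seed value
        long r = 17, s = 29; // each party's secret degree

        double a = chebyshev(r, chebyshev(s, x)); // party A applies r to B's value
        double b = chebyshev(s, chebyshev(r, x)); // party B applies s to A's value
        double c = chebyshev(r * s, x);           // direct evaluation of T_{rs}(x)

        System.out.printf("T_r(T_s(x)) = %.6f%n", a);
        System.out.printf("T_s(T_r(x)) = %.6f%n", b);
        System.out.printf("T_rs(x)     = %.6f%n", c);
    }
}
```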

Diverse access authorities on the group identifiers and pseudonyms are assigned to different entities for achieving the hierarchical access control through the layered networks.

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENT:

  • Processor                             –    Pentium IV
  • Speed                                 –    1.1 GHz
  • RAM                                   –    256 MB (min)
  • Hard Disk                             –    20 GB
  • Floppy Drive                          –    1.44 MB
  • Key Board                             –    Standard Windows Keyboard
  • Mouse                                 –    Two or Three Button Mouse
  • Monitor                               –    SVGA

 

SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Windows 7
  • Front End                          :           JAVA JDK 1.7
  • Back End                           :           MySQL Server
  • Server                             :           Apache Tomcat Server
  • Script                             :           JSP Script
  • Document                           :           MS-Office 2007


ARCHITECTURE DIAGRAM:


DATA FLOW DIAGRAM:

UML DIAGRAMS:

USE CASE DIAGRAM:

CLASS DIAGRAM:

SEQUENCE DIAGRAM:

ACTIVITY DIAGRAM:

MODULES:

NETWORK SECURITY MODULE:

U2IOT ARCHITECTURE SYSTEM:

PROOF BASED DATA INTEGRITY:

AUTHENTICATION SCHEME (APHA):

MODULES DESCRIPTION:

NETWORK SECURITY MODULE:

Network-accessible decoy resources may be deployed in a network as surveillance and early-warning tools, since such decoys are not normally accessed for legitimate purposes. Techniques used by attackers that attempt to compromise these decoy resources are studied during and after an attack in order to keep an eye on new exploitation techniques, and such analysis may be used to further tighten the security of the actual network being protected. Decoys can also direct an attacker's attention away from legitimate servers: they encourage attackers to spend their time and energy on the decoy server while distracting their attention from the data on the real server. Similar to a decoy server, a decoy network is set up with intentional vulnerabilities; its purpose is also to invite attacks so that the attacker's methods can be studied and that information can be used to increase network security. Related work has considered IP-based IoT, discussed the applicability and limitations of current Internet protocols, and presented a thing-lifecycle based security architecture for IP networks.

The security architecture, node security model, and security bootstrapping are considered in the security solution. Moreover, the authors pointed out that security protocols should fully consider resource-constrained, heterogeneous communication environments. A security architecture based on the host identity protocol (HIP) and multimedia Internet keying protocols has been applied to enhance secure network association and key management, a mobile RFID security protocol has been applied to protect mobile RFID networks, and a trusted third party (TTP) based key management protocol has been introduced to construct a secure session key. We focused on the integration of RFID tags into IP networks, and proposed a HIP address translation scheme. The scheme provides address translation services between the tag identifiers and IP addresses, and presents a prototype of cross-layer IoT networking. Beyond the trust-based mechanisms (e.g., cryptographic and authentication mechanisms) in WSNs, Lithe has been presented, an integration of datagram transport layer security (DTLS) and the constrained application protocol (CoAP) to protect the transmission of sensitive information in the IoT.

U2IOT ARCHITECTURE SYSTEM:

Regarding IoT architectures and models, Unit and Ubiquitous Internet of Things introduces essential IoT concepts from the perspectives of mapping and interaction between the physical world and the cyber world. It addresses key issues such as strategy and education, particularly around unit and ubiquitous IoT technologies. Supplying a new perspective on the IoT, the book covers emerging trends and presents the latest progress in the field. It also:

  • Outlines a fundamental architecture for future IoT together with the IoT layered model
  • Describes various topological structures, existence forms, and corresponding logical relationships
  • Establishes an IoT technology system based on the knowledge of IoT scientific problems
  • Provides an overview of the core technologies, including basic connotation, development status, and open challenges

U2IoT architecture, a reasonable authentication scheme should satisfy the following requirements. 1) Data CIA (i.e., confidentiality, integrity, and availability): The exchanged messages between any two legal entities should be protected against illegal access and modification. The communication channels should be reliable for the legal entities. 2) Hierarchical access control: Diverse access authorities are assigned to different entities to provide hierarchical interactions. An unauthorised entity cannot access data exceeding its permission. 3) Forward security: Attackers cannot correlate any two communication sessions, and also cannot derive the previous interrogations according to the ongoing session. 4) Mutual authentication: The untrusted entities should pass each other’s verification so that only the legal entity can access the networks for data acquisition. 5) Privacy preservation: The sensors cannot correlate or disclose an individual target’s private information (e.g., location).

PROOF BASED DATA INTEGRITY:

The pseudo-random numbers are generated as session-sensitive operators to provide session freshness and randomization. Additionally, the identity-related values (e.g., identity flags, group identifiers, and pseudonyms) are dynamically updated during each session. Such variables are applied to obtain the authentication operators in the aggregated-proofs and the other intermediate variables. The transmitted messages are mainly computed from these random numbers, so the exchanged messages can be regarded as dynamic variables with perfect forward unlinkability: an attacker observing the open channels cannot correlate the ongoing session with former sessions. BAN logic is used to analyze the design correctness for the security proof; it is a rigorous evaluation method for detecting subtle defects in an authentication scheme. The formal analysis focuses on belief and freshness and involves the following steps: message formalization, declaration of the initial assumptions, declaration of the anticipated goals, and logic verification. In addition, an attribute-based access control model based on a bilinear mapping scheme realizes anonymous access and minimizes the number of messages exchanged over the open channels.
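As a simple illustration of how such session-sensitive values can be refreshed, the sketch below mixes a fresh random nonce into the previous pseudonym with a hash, so that the identifiers seen in two sessions cannot be linked; the exact APHA update rule and operators are not reproduced here, and all names are illustrative.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

// Illustrative per-session pseudonym refresh: every session mixes a fresh random nonce
// into the previous pseudonym, so consecutive sessions expose unlinkable identifiers.
// This demonstrates the freshness idea only, not the exact APHA update rule.
public class PseudonymUpdateSketch {
    public static void main(String[] args) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        SecureRandom rng = new SecureRandom();

        byte[] pseudonym = sha.digest("initial-target-identifier".getBytes("UTF-8"));
        for (int session = 1; session <= 3; session++) {
            byte[] nonce = new byte[16];
            rng.nextBytes(nonce);            // session-sensitive random operator
            sha.update(pseudonym);
            sha.update(nonce);
            pseudonym = sha.digest();        // new, unlinkable pseudonym for this session
            System.out.printf("session %d pseudonym begins %02x%02x%n",
                    session, pseudonym[0] & 0xff, pseudonym[1] & 0xff);
        }
    }
}
```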

A fuzzy reputation based trust management model (TRM-IoT) has been proposed to enforce the entities' cooperation and interconnection. An anonymous authentication protocol has also been proposed, applying pseudonyms and a threshold secret sharing mechanism to achieve a tradeoff between anonymity and certification, as has a mutual authentication scheme designed on the basis of feature extraction, the secure hash algorithm (SHA), and elliptic curve cryptography (ECC); there, an asymmetric authentication scheme is established without compromising computation cost or communication overhead. We analyzed cyber infrastructure security in the smart grid: a layered security scheme was established to evaluate security risks for the power applications. The authors highlighted power generation, transmission, distribution control and security, and introduced encryption, authentication, and access control to achieve secure communications. Furthermore, digital forensics and security incident and event management are applied for management, and cyber-security evaluation and intrusion tolerance are also considered.

AUTHENTICATION SCHEME (APHA):

We design an aggregated-proof based hierarchical authentication scheme (APHA) for the unit IoT and the ubiquitous IoT, respectively, and the main contributions are as follows: 1) aggregated-proofs are established by wrapping multiple targets' messages for anonymous data transmission, which ensures that individual information cannot be revealed over either the backward or the forward communication channel; 2) directed path descriptors are defined based on homomorphism functions to establish correlation during the cross-layer interactions, and Chebyshev chaotic maps are applied to describe the mapping relationships between the shared secrets and the path descriptors for mutual authentication; 3) diverse access authorities on the group identifiers and pseudonyms are assigned to different entities to achieve hierarchical access control through the layered networks. In the APHA, an entity believes that: 1) the shared secrets and keys are obtained by the assigned entities, 2) the pseudo-random numbers, identity flags, pseudonyms, and directed path descriptors are fresh, and 3) the trusted entity has jurisdiction over the entitled values. The initial assumptions, including initial possessions and entity abilities, are obtained as follows:

 

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase and business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company.  For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are 

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:     

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

 

5.1.2 TECHNICAL FEASIBILITY   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or null changes are required for implementing this system.

5.1.3 SOCIAL FEASIBILITY:  

The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system is working according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the global goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.

This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best programs are worthless if they do not produce the correct outputs.

5.2.1 UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and test data correctly and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

5.1.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for various peers in a distributed network framework, displaying all users available in the group.
Expected result: The result after execution should give the accurate result.


5.1. 3 NON-FUNCTIONAL TESTING:

Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.1.4 LOAD TESTING:

An important tool for implementing system tests is a Load generator. A Load generator is essential for testing quality requirements such as performance and stress. A load can be a real load, that is, the system can be put under test to real usage by having actual telephone users connected to it. They will generate test input data for system test.

Description: It is necessary to ascertain that the application behaves correctly under load when a 'Server busy' response is received.
Expected result: Should designate another active node as a server.


5.1.5 PERFORMANCE TESTING:

Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time, and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values and produce accurate results in the expected time.


5.1.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms a part of software quality control.

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.


5.1.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

Description: Checking that the user identification is authenticated.
Expected result: In case of failure, it should not be connected in the framework.

Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.


5.1.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.


5.1.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors and focuses on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.

Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.

Description: To check for interface errors.
Expected result: The entire interface must function normally.

Description: To check for errors in data structures or external database access.
Expected result: The database update and retrieval must be done correctly.

Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.

All the above system testing strategies are carried out because the development, documentation, and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE DESCRIPTION:

 

6.1 JAVA TECHNOLOGY:

Java technology is both a programming language and a platform.

 

The Java Programming Language

 

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

  • Simple
    • Architecture neutral
    • Object oriented
    • Portable
    • Distributed     
    • High performance
    • Interpreted     
    • Multithreaded
    • Robust
    • Dynamic
    • Secure     

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes —the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.
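A minimal example of this write once, run anywhere workflow: the class below is compiled once into byte codes with javac, and the resulting class file then runs on any Java VM with java (the file and class names here are arbitrary).

```java
// HelloPlatform.java
// Compile once:  javac HelloPlatform.java
// Run anywhere:  java HelloPlatform
public class HelloPlatform {
    public static void main(String[] args) {
        System.out.println("Running on " + System.getProperty("os.name")
                + " with Java " + System.getProperty("java.version"));
    }
}
```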

6.2 THE JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

  • The Java Virtual Machine (Java VM)
  • The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights the functionality that some of the packages in the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that after you compile it, the compiled code runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

6.3 WHAT CAN JAVA TECHNOLOGY DO?

The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.

A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.
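A minimal servlet of the kind a Java web server such as the Apache Tomcat server used in this project would host is sketched below; it assumes the standard javax.servlet API provided by the container, and the class name and its URL mapping (declared in web.xml) are illustrative.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal servlet: the web server routes matching HTTP GET requests here,
// and the servlet writes the response on the server side.
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");
        response.getWriter().println("Hello from a servlet running inside the web server");
    }
}
```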

How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:

  • The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
  • Applets: The set of conventions used by applets.
  • Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
  • Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
  • Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
  • Software components: Known as JavaBeansTM, can plug into existing component architectures.
  • Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
  • Java Database Connectivity (JDBCTM): Provides uniform access to a wide range of relational databases.

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.

 

6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?

We can’t promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and to require less effort than other languages. We believe that Java technology will help you do the following:

  • Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
  • Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
  • Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
  • Develop programs more quickly: Development can be as much as twice as fast as writing the same program in C++. Why? You write fewer lines of code, and Java is a simpler programming language than C++.
  • Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure JavaTM Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
  • Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
  • Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.

 

6.5 ODBC:

 

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you setup a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

6.6 JDBC:

In an effort to set an independent database standard API for Java; Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.

 

6.7 JDBC Goals:

Few software packages are designed without goals in mind. JDBC is no exception; its many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The eight design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows for more error checking to be done at compile time; as a result, fewer errors appear at runtime.

Keep the common cases simple

Because, more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
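
As a rough sketch of such a common case (not taken from the JDBC specification itself), the fragment below queries an MS Access data source through the JDBC-ODBC bridge available in JDK 1.7; the data source name cacheDB, the table cache_table and its columns are hypothetical names used only for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SimpleSelect {
    public static void main(String[] args) throws Exception {
        // Load the JDBC-ODBC bridge driver (shipped with JDK 1.7 and earlier)
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        // "cacheDB" is a hypothetical ODBC data source configured in the ODBC Administrator
        Connection con = DriverManager.getConnection("jdbc:odbc:cacheDB");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT url, hits FROM cache_table");
        while (rs.next()) {
            System.out.println(rs.getString("url") + " : " + rs.getInt("hits"));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}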

Finally, we decided to proceed with the implementation using Java Networking.

For dynamically updating the cache table, we use an MS Access database.

Java has two things: a programming language and a platform.

Java is a high-level programming language that is all of the following:

  • Simple
  • Object-oriented
  • Distributed
  • Interpreted
  • Robust
  • Secure
  • Architecture-neutral
  • Portable
  • High-performance
  • Multithreaded
  • Dynamic

Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you first translate a Java program into an intermediate language called Java byte codes, the platform-independent code that is then interpreted and run on the computer.

Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.

6.8 NETWORKING TCP/IP STACK:

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagram’s:

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP:

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to give a client/server model – see later.

TCP:

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32 bit integer which gives the IP address.

Network address:

Class A uses 8 bits for the network address with 24 bits left over for other addressing. Class B uses 16 bit network addressing. Class C uses 24 bit network addressing and class D uses all 32.

Subnet address:

Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.

Host address:

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total address:

The 32 bit address is usually written as 4 integers separated by dots.

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a message to a server, you send it to the port for that service of the host that it is running on. This is not location transparency! Certain of these ports are “well known”.

Sockets:

A socket is a data structure maintained by the system to handle network connections. A socket is created using the call socket. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);

Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
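
Since the implementation here proceeds with Java Networking, the same idea is expressed with the java.net classes rather than the C socket call above; the sketch below (the port number 5000 is an arbitrary choice for illustration) creates the two ends of a TCP connection in one process:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoPair {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(5000);       // server end listens on port 5000
        Socket client = new Socket("localhost", 5000);      // client end connects over TCP
        Socket accepted = server.accept();                  // server accepts the connection

        PrintWriter out = new PrintWriter(client.getOutputStream(), true);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(accepted.getInputStream()));
        out.println("hello over TCP");                      // client writes to its socket
        System.out.println("server received: " + in.readLine());

        client.close();
        accepted.close();
        server.close();
    }
}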

6.9 JFREE CHART:

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:

A consistent and well-documented API, supporting a wide range of chart types;

A flexible design that is easy to extend, and targets both server-side and client-side applications;

Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG);

JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.
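
As an illustrative sketch against the JFreeChart 1.0.x API (the dataset values and output file name are invented for this example), a pie chart can be created and saved as a PNG image in a few lines:

import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

public class PieDemo {
    public static void main(String[] args) throws Exception {
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Detected errors", 80);      // illustrative values only
        dataset.setValue("Undetected errors", 20);
        JFreeChart chart = ChartFactory.createPieChart(
                "Error Detection Rate", dataset, true, true, false);
        // Save the chart as a PNG file; a Swing display via ChartPanel is equally possible
        ChartUtilities.saveChartAsPNG(new File("chart.png"), chart, 500, 400);
    }
}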

 

6.9.1. Map Visualizations:

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: Sourcing freely redistributable vector outlines for the countries of the world, states/provinces in particular countries (USA in particular, but also other areas);

Creating an appropriate dataset interface (plus default implementation), a renderer, and integrating this with the existing XYPlot class in JFreeChart; Testing, documenting, testing some more, documenting some more.

6.9.2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.

6.9.3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.

 

6.9.4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

CHAPTER 7

7.0 APPENDIX

7.1 SAMPLE SCREEN SHOTS:

7.2 SAMPLE SOURCE CODE:

CONCLUSION AND FUTURE WORK:

In this paper, we have proposed an aggregated-proof based hierarchical authentication scheme for the U2IoT architecture. In the APHA, two sub-protocols are respectively designed for the unit IoT and ubiquitous IoT to provide bottom-up security protection. The proposed scheme realizes data confidentiality and data integrity by the directed path descriptor and homomorphism-based Chebyshev chaotic maps, establishes trust relationships via the lightweight mechanisms, and applies dynamically hashed values to achieve session freshness. It indicates that the APHA is suitable for the U2IoT architecture.

A Time Efficient Approach for Detecting Errors in Big Sensor Data on Cloud

1.1 ABSTRACT:

Big sensor data is prevalent in both industry and scientific research applications, where the data is generated with such high volume and velocity that it is difficult to process using on-hand database management tools or traditional data processing applications. Cloud computing provides a promising platform to support the addressing of this challenge, as it provides a flexible stack of massive computing, storage, and software services in a scalable manner at low cost. Some techniques have been developed in recent years for processing sensor data on cloud, such as sensor-cloud. However, these techniques do not provide efficient support on fast detection and locating of errors in big sensor data sets.

We develop a novel data error detection approach which exploits the full computation potential of cloud platform and the network feature of WSN. Firstly, a set of sensor data error types are classified and defined. Based on that classification, the network feature of a clustered WSN is introduced and analyzed to support fast error detection and location. Specifically, in our proposed approach, the error detection is based on the scale-free network topology and most of detection operations can be conducted in limited temporal or spatial data blocks instead of a whole big data set. Hence the detection and location process can be dramatically accelerated.

Furthermore, the detection and location tasks can be distributed to cloud platform to fully exploit the computation power and massive storage. Through the experiment on our cloud computing platform of U-Cloud, it is demonstrated that our proposed approach can significantly reduce the time for error detection and location in big data sets generated by large scale sensor network systems with acceptable error detecting accuracy.

1.2 INTRODUCTION:

Recently, we have entered a new era of data explosion which brings about new challenges for big data processing. In general, big data is a collection of data sets so large and complex that it becomes difficult to process with on-hand database management systems or traditional data processing applications. It represents the progress of the human cognitive processes, and usually includes data sets with sizes beyond the ability of current technology, method and theory to capture, manage, and process the data within a tolerable elapsed time. Big data has the typical characteristics of five ‘V’s: volume, variety, velocity, veracity and value. Big data sets come from many areas, including meteorology, connectomics, complex physics simulations, genomics, biological study, gene analysis and environmental research. According to the literature, since the 1980s the amount of generated data worldwide has doubled roughly every 40 months. In 2012, 2.5 quintillion (2.5 × 10^18) bytes of data were generated every day.

Hence, how to process big data has become a fundamental and critical challenge for modern society. Cloud computing provides a promising platform for big data processing with powerful computation capability, storage, scalability, resource reuse and low cost, and has attracted significant attention in alignment with big data. One important source of scientific big data is the data sets collected by wireless sensor networks (WSN). Wireless sensor networks have the potential to significantly enhance people’s ability to monitor and interact with their physical environment. Big data sets from sensors are often subject to corruption and losses due to the wireless medium of communication and the presence of hardware inaccuracies in the nodes. For a WSN application to deduce an appropriate result, it is necessary that the data received is clean, accurate, and lossless. However, effective detection and cleaning of sensor big data errors is a challenging issue demanding innovative solutions. WSN with cloud can be categorized as a kind of complex network system. In these complex network systems, such as WSN and social networks, data abnormality and error become an annoying issue for real network applications.

Therefore, the question of how to find data errors in complex network systems for improving and debugging the network has attracted the interest of researchers. Some work has been done for big data analysis and error detection in complex networks, including intelligent sensor networks. There are also some works related to complex network system data error detection and debugging with online data processing techniques. Since these techniques were not designed and developed to deal with big data on cloud, they are unable to cope with the current dramatic increase in data size. For example, when big data sets are encountered, previous offline methods for error detection and debugging on a single computer may take a long time and lose real time feedback. Because those offline methods are normally based on learning or mining, they often introduce high time cost during the process of data set training and pattern matching. WSN big data error detection commonly requires powerful real-time processing and storing of the massive sensor data, as well as analysis in the context of using inherently complex error models to identify and locate events of abnormalities.

In this paper, we aim to develop a novel error detection approach by exploiting the massive storage, scalability and computation power of cloud to detect errors in big data sets from sensor networks. Some work has been done about processing sensor data on cloud. However, fast detection of data errors in big data with cloud remains challenging. Especially, how to use the computation power of cloud to quickly find and locate errors of nodes in WSN needs to be explored. Cloud computing, a disruptive trend at present, poses a significant impact on the current IT industry and research communities. Cloud computing infrastructure is becoming popular because it provides an open, flexible, scalable and reconfigurable platform. The proposed error detection approach in this paper will be based on the classification of error types. Specifically, nine types of numerical data abnormalities/errors are listed and introduced in our cloud error detection approach. The defined error model will trigger the error detection process. Compared to previous error detection of sensor network systems, our approach on cloud will be designed and developed by utilizing the massive data processing capability of cloud to enhance error detection speed and real time reaction. In addition, the architecture features of complex networks will also be analyzed to combine with cloud computing in a more efficient way. Based on the current research literature review, we divide complex network systems into scale-free type and non-scale-free type. A sensor network is a kind of scale-free complex network system, which matches the cloud scalability feature.

1.3 LITERATURE SURVEY

A SURVEY OF LARGE SCALE DATA MANAGEMENT APPROACHES IN CLOUD ENVIRONMENTS

PUBLISH: IEEE Comm. Surveys & Tutorials, vol. 13, no. 3, pp. 311-336, Third Quarter 2011.

AUTHOR: S. Sakr, A. Liu, D. Batista, and M. Alomari,

EXPLANATION:

In the last two decades, the continuous increase of computational power has produced an overwhelming flow of data. Moreover, the recent advances in Web technology has made it easy for any user to provide and consume content of any form. This has called for a paradigm shift in the computing architecture and large scale data processing mechanisms. Cloud computing is associated with a new paradigm for the provision of computing infrastructure. This paradigm shifts the location of this infrastructure to the network to reduce the costs associated with the management of hardware and software resources. This paper gives a comprehensive survey of numerous approaches and mechanisms of deploying data-intensive applications in the cloud which are gaining a lot of momentum in both research and industrial communities. We analyze the various design decisions of each approach and its suitability to support certain classes of applications and end-users. A discussion of some open issues and future challenges pertaining to scalability, consistency, economical processing of large scale data on the cloud is provided. We highlight the characteristics of the best candidate classes of applications that can be deployed in the cloud.

STREAM AS YOU GO: THE CASE FOR INCREMENTAL DATA ACCESS AND PROCESSING IN THE CLOUD

PUBLISH: Proc. IEEE ICDE Int’l Workshop Data Management in the Cloud (DMC’12), 2012.

AUTHOR: R. Kienzler, R. Bruggmann, A. Ranganathan, and N. Tatbul,

EXPLANATION:

Cloud infrastructures promise to provide high-performance and cost-effective solutions to large-scale data processing problems. In this paper, we identify a common class of data-intensive applications for which data transfer latency for uploading data into the cloud in advance of its processing may hinder the linear scalability advantage of the cloud. For such applications, we propose a “stream-as-you-go” approach for incrementally accessing and processing data based on a stream data management architecture. We describe our approach in the context of a DNA sequence analysis use case and compare it against the state of the art in MapReduce-based DNA sequence analysis and incremental MapReduce frameworks. We provide experimental results over an implementation of our approach based on the IBM InfoSphere Streams computing platform deployed on Amazon EC2, showing an order of magnitude improvement in total processing time over the state of the art.

A SCALABLE TWO-PHASE TOP-DOWN SPECIALIZATION APPROACH FOR DATA ANONYMIZATION USING MAPREDUCE ON CLOUD

PUBLISH: IEEE Trans. Parallel and Distributed Systems, vol. 25, no. 2, pp. 363-373, Feb. 2014.

AUTHOR: X. Zhang, T. Yang, C. Liu, and J. Chen

EXPLANATION:

A large number of cloud services require users to share private data like electronic health records for data analysis or mining, bringing privacy concerns. Anonymizing data sets via generalization to satisfy certain privacy requirements such as k-anonymity is a widely used category of privacy preserving techniques. At present, the scale of data in many cloud applications increases tremendously in accordance with the Big Data trend, thereby making it a challenge for commonly used software tools to capture, manage, and process such large-scale data within a tolerable elapsed time. As a result, it is a challenge for existing anonymization approaches to achieve privacy preservation on privacy-sensitive large-scale data sets due to their insufficiency of scalability. In this paper, we propose a scalable two-phase top-down specialization (TDS) approach to anonymize large-scale data sets using the MapReduce framework on cloud. In both phases of our approach, we deliberately design a group of innovative MapReduce jobs to concretely accomplish the specialization computation in a highly scalable way. Experimental evaluation results demonstrate that with our approach, the scalability and efficiency of TDS can be significantly improved over existing approaches.

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

Fast detection of data errors in big data with cloud remains challenging; in particular, how to use the computation power of cloud to quickly find and locate errors of nodes in a WSN needs to be explored. Cloud computing, a disruptive trend at present, poses a significant impact on the current IT industry and research communities. Cloud computing infrastructure is becoming popular because it provides an open, flexible, scalable and reconfigurable platform. Existing methods in wireless sensor networks aim to provide low-cost, low-energy reliable data collection. Reliability against transient errors in sensor data can be provided using model-based error correction, in which temporal correlation in the data is used to correct errors without any overhead at the sensor nodes. In the above work it is assumed that a perfect model of the data is available.

However, as variations in the physical process are context-dependent and time-varying in a real sensor network, it is infeasible to have an accurate model of the data properties a priori, thus leading to reduced correction efficiency. This issue is addressed by presenting a scalable methodology for improving the accuracy of data modeling through on-line estimation, together with a data correction algorithm that incorporates robustness against dynamic model changes and potential modeling errors. We evaluate our system through simulations using real sensor data collected from different sources. Experimental results demonstrate that the proposed enhancements lead to an improvement of up to a factor of 10 over the earlier approach.

2.1.1 DISADVANTAGES:

Ensuring the reliability of sensor data becomes harder, since the hardware becomes less robust to many types of errors due to the effects of aggressive technology scaling. Similarly, errors in the wireless communication channels are another source of unreliability, as limitations on transmission power due to tight energy constraints make them more susceptible to noise and interference. The problem is further aggravated by exposure to harsh physical environments, which is common for many typical sensing applications. Subsequently, ensuring the reliability of the data in a sensor network is going to be a growing problem and a challenging part of designing sensor networks.

2.2 PROPOSED SYSTEM:

The error detection approach proposed in this paper is based on the classification of error types. Specifically, nine types of numerical data abnormalities/errors are listed and introduced in our cloud error detection approach. The defined error model will trigger the error detection process. Compared to previous error detection in sensor network systems, our approach on cloud is designed and developed by utilizing the massive data processing capability of cloud to enhance error detection speed and real time reaction. In previous online approaches, however, the scalability and error detection accuracy are not dealt with; they are an initial and important step for online error detection of WSN.

Especially, under the cloud environment, the computational power and scalability should be fully exploited to support real time fast error detection for sensor data sets. Clustering can significantly reduce the time cost of error locating and final decision making by avoiding whole-network data processing. In addition, with this detection technique, cloud resources only need to be distributed according to each partitioned cluster in a scale-free complex network. Based on the current research literature review, we divide complex network systems into scale-free type and non-scale-free type. A sensor network is a kind of scale-free complex network system, which matches the cloud scalability feature.

Our proposed error detection approach on cloud is specifically tailored to finding errors in big data sets of sensor networks. The main contribution of our proposed detection is to achieve significant time performance improvement in error detection without compromising error detection accuracy. Our proposed scale-free error detection algorithm achieves significant error detection performance gains compared to non-scale-free error detection algorithms. Our proposed scale-free detection on cloud can quickly detect most of the error data (more than 80 percent) within a time duration of 740 seconds. However, the non-scale-free error detection algorithm can only achieve an error detection rate of at most 44 percent in the best case. So, it can be concluded from the experiment results in Fig. 5 that the scale-free detection algorithm on cloud for big data can significantly outperform non-scale-free error detection algorithms in terms of error finding time cost.

2.2.1 ADVANTAGES:

To verify the time efficiency and the effectiveness of our approach for detecting errors in big data with cloud, experiments are conducted.

  • Demonstrate that the significant time-saving is achieved in terms of detecting errors from complex network big data sets.
  • Demonstrate the effectiveness of our proposed error detection approach in terms of different error types.
  • Demonstrate that the false positive ratio of our proposed error detection algorithm is limited within a small value.
  • The scale-free error detecting approach can significantly reduce the time for fast error detection in numeric big data sets, while the proposed approach achieves an error selection ratio similar to that of non-scale-free error detection approaches.
  • In the future, in line with error detection for big data sets from sensor network systems on cloud, issues such as error correction, big data cleaning and recovery will be further explored.

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor                              –    Pentium IV
  • Speed                                   –    1.1 GHz
  • RAM                                     –    256 MB (min)
  • Hard Disk                              –    20 GB
  • Floppy Drive                           –    1.44 MB
  • Key Board                              –    Standard Windows Keyboard
  • Mouse                                   –    Two or Three Button Mouse
  • Monitor                                 –    SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Win7
  • Front End                                :           JAVA JDK 1.7
  • Back End                                :           MS ACCESS
  • Tools                                       :           Netbeans 7
  • Document                               :           MS-Office 2007


CHAPTER 3

3.0 SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

  • The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
  • DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • DFD is also known as bubble chart. A DFD may be used to represent a system at any level of abstraction. DFD may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

DATA SOURCE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures, or devices that produce data. The physical component is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.


3.1 ARCHITECTURE DIAGRAM

3.2 DATAFLOW DIAGRAM:

 

UML DIAGRAMS:

3.2 USE CASE DIAGRAM:

 

(Use case diagram: flow from START to RESULTS)

3.3 CLASS DIAGRAM:

3.4 SEQUENCE DIAGRAM:

 

(Sequence diagram: START → Data Structure → Cluster Analysis → Complexity Analysis → Using Error Detection Algorithm → Error Localization → Classification and Complexity Analysis → Results View Graph → RESULTS)

3.5 ACTIVITY DIAGRAM:


CHAPTER 4

4.0 IMPLEMENTATION:

MODEL BASED ERROR DETECTION ON CLOUD FOR SENSOR NETWORK BIG DATA

ERROR DETECTION:

We propose a two-phase approach to conduct the computation required in the whole process of error detection and localization. At the phase of error detection, there are three inputs for the error detection algorithm. The first is the graph of network. The second is the total collected data set D and the third is the defined error patterns p. The output of the error detection algorithm is the error set D’. The details of the error detection algorithm can be found in Appendix B.1, available in the online supplemental material.
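
The actual algorithm is given only in the online appendix; the fragment below is a minimal sketch of the detection phase under simplifying assumptions (error patterns reduced to per-reading predicates, and the data set D already partitioned into temporal or spatial blocks):

import java.util.ArrayList;
import java.util.List;

public class ErrorDetectionSketch {

    // Hypothetical error-pattern interface: true if a reading matches the pattern p
    public interface ErrorPattern {
        boolean matches(double reading);
    }

    // Detection phase: scan each data block of the collected set D and
    // collect readings matching any defined pattern into the error set D'
    public static List<Double> detect(List<List<Double>> blocks, List<ErrorPattern> patterns) {
        List<Double> errorSet = new ArrayList<Double>();
        for (List<Double> block : blocks) {           // limited blocks, not the whole data set
            for (double reading : block) {
                for (ErrorPattern p : patterns) {
                    if (p.matches(reading)) {         // pattern match => suspected error
                        errorSet.add(reading);
                        break;
                    }
                }
            }
        }
        return errorSet;
    }
}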

ERROR LOCALIZATION:

After the error pattern matching and error detection, it is important to locate the position and source of the detected error in the original WSN graph G(V, E). The input of Algorithm 2 is the original graph of a scale-free network G(V, E) and the error data set D' from Algorithm 1. The output of Algorithm 2 is G'(V', E'), which is the subset of G that indicates the error location and source. The details of the error localization algorithm can be found in Appendix B.2, available in the online supplemental material.

COMPLEXITY ANALYSIS:

Suppose that there is a sensor network system consisting of n nodes. For the error detection approach that does not consider the scale-free network feature, the error detection algorithm carries out the error pattern matching and localization on the whole network data by traversing the whole data set. Suppose that there are R nodes on the data routing path; in the worst case, the detection algorithm without the scale-free network feature will be executed R × n times for error detection and localization, denoted as O(R × n), 1 ≤ R ≤ n. However, with the hierarchical network topology, the network can be partitioned into m clusters.

Based on our scale-free network definition and our algorithm, in each cluster the nodes involved in error detection are reduced to n/m on average. In addition, in each cluster the data values are highly correlated, which determines the worst-case number of data traversals for error detection and localization. Because our scale-free error detection approach limits most of the computation within each cluster, the communication and data exchange between clusters can be ignored. Finally, the worst-case algorithm complexity of our scale-free error detection approach can outperform that of traditional error detection algorithms.
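
As a rough numerical illustration (the figures are invented for exposition only): with n = 1000 nodes, m = 10 clusters and R = 100 nodes on a routing path, the whole-network approach traverses up to R × n = 100,000 data items in the worst case, while the cluster-limited approach traverses about R × (n / m) = 10,000 items for the affected cluster, an m-fold (here tenfold) reduction before the work is further parallelized across cloud resources.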

4.1 ALGORITHM

This section introduces the big data error detection/location algorithm and its combination strategy with cloud. For our proposed algorithm on cloud, the data sets need to be partitioned before being fed to the algorithm. There are two points that should be mentioned when carrying out partitioning. Firstly, the partition process must not introduce new data errors into a data set, or change or influence the original errors in a data set. That is different from previous partition algorithms, which normally divide a data set according to certain application preferences or clustering principles. Secondly, because scale-free network systems have a special topology, the partition has to form the data clusters according to the real-world situation of the scale-free network or cluster-head based WSN.

MapReduce is a framework for processing parallelizable problems across huge data sets using a large number of computers (nodes), collectively referred to as a cluster (if all nodes are on the same local network and use similar hardware) or a grid (if the nodes are shared across geographically and administratively distributed systems, and use more heterogeneous hardware). Computational processing can occur on data stored either in a filesystem (unstructured) or in a database (structured). MapReduce can take advantage of locality of data, processing data on or near the storage assets to reduce data transmission.

“Map” function: The master node takes the input, divides it into smaller sub-problems, and distributes them to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes the smaller problem and passes the answer back to its master node.

“Reduce” function: The master node then collects the answers to all the sub-problems and combines them in some way to form the output, which is the answer to the problem it was originally trying to solve. MapReduce allows for distributed processing of the map and reduction operations.
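
MapReduce is described only abstractly above; as a rough sketch of how an out-of-bound check could be expressed in this map/reduce style, the fragment below is written against the Hadoop MapReduce API, assuming a hypothetical input record layout of clusterId,nodeId,value and made-up sensor bounds:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class OutOfBoundDetection {

    // Map: parse one record and emit (clusterId, "nodeId:value") for suspected errors only
    public static class DetectMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
        private static final double MIN = -40.0, MAX = 85.0;   // assumed valid sensor range
        @Override
        protected void map(LongWritable key, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] f = line.toString().split(",");            // clusterId,nodeId,value
            double value = Double.parseDouble(f[2]);
            if (value < MIN || value > MAX) {
                ctx.write(new IntWritable(Integer.parseInt(f[0])), new Text(f[1] + ":" + value));
            }
        }
    }

    // Reduce: collect the suspected errors of each cluster into one output line
    public static class CollectReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
        @Override
        protected void reduce(IntWritable cluster, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            StringBuilder sb = new StringBuilder();
            for (Text v : values) {
                sb.append(v.toString()).append(' ');
            }
            ctx.write(cluster, new Text(sb.toString().trim()));
        }
    }
}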


4.2 MODULES:

NETWORK TOPOLOGY DESIGNS:

ON-CLOUD PROCESSING FOR WSN:

TIME-EFFICIENT ERROR DETECTION:

ERROR AND ABNORMALITY CLASSIFICATION:

ERROR DEFINITION AND MODELING:

4.3 MODULE DESCRIPTION:

NETWORK TOPOLOGY DESIGNS:

Scale-free networks are inhomogeneous, and only a few nodes have a large number of links. In real applications, the cluster-head WSN is similar to scale-free networks; it can be described with the scale-free complex network model and has the features of scale-free networks. In Fig. 2, instances of scale-free networks and exponential networks are compared. It can be concluded that scale-free networks have a more clustered, hierarchical node topology: central nodes are highly connected, while each out-layer node has only 1 or 2 links. Traditional error detection for WSN data sets has not paid enough attention to making use of complex network features to improve the error detection efficiency on the cloud platform. Compared to previous sensor data error detection and localization approaches, complex network topology features will be explored together with the computation power of cloud for error detection efficiency, scalability and low cost.

Wireless sensor network systems have been used in different areas, such as environment monitoring, military, disaster warning and scientific data collection. In order to process the remote sensor data collected by WSN, the sensor-cloud platform has been developed, including its definition, architecture, and applications. Due to the features of high variety, volume, and velocity, big data is difficult to process using on-hand database management tools or the traditional sensor-cloud platform. Big data sets can come from complex network systems, such as social networks and large scale sensor networks. In addition, under the theme of complex network systems, it may be difficult to develop time-efficient detecting or trouble-shooting methods for errors in big data sets, and hence to debug the complex network systems in real time.

ON-CLOUD PROCESSING FOR WSN:

Sensor-Cloud is a unique sensor data storage, visualization and remote management platform that leverages powerful cloud computing technologies to provide excellent data scalability, fast visualization, and user programmable analysis. Initially, sensor-cloud was designed to support long-term deployments of MicroStrain wireless sensors. But nowadays, sensor-cloud has been developed to support any web-connected third party device, sensor, or sensor network through a simple OpenData API. Sensor-Cloud can be useful for a variety of applications, particularly where data from large sensor networks needs to be collected, viewed, and monitored remotely. For example, structural health monitoring and condition-based monitoring of high value assets are applications where commonly available data tools often come up short in terms of accessibility, data scalability, programmability, or performance.

Sensor-Cloud represents a direction for processing and analyzing big sensor data using the cloud platform. The online WSN data quality and data cleaning issues are discussed to deal with the problems of outliers, missing information, and noise. A novel online approach for modeling and online learning of temporal-spatial data correlations in sensor networks is developed. A Bayesian approach for reducing the effect of noise on sensor data online is also proposed [37]. The proposed approach is efficient in reducing the uncertainty associated with noisy sensors. However, the scalability and error detection accuracy are not dealt with. It is an initial and important step for online error detection of WSN, but lots of work still needs to be done. Especially, under the cloud environment, the computational power and scalability should be fully exploited to support real time fast error detection for sensor data sets.

TIME-EFFICIENT ERROR DETECTION:

In this section, a cluster-head WSN will be introduced and processed as a kind of complex network system. These complex networks may have non-trivial statistical properties which will influence the data processing strategy on them. In order to test the false positive ratio of our error detection approach and the time cost for error finding, we impose five types of data errors, following the definitions in Section 3, into the normalized testing data sets with a uniform random distribution. These five types of data errors are generated equally. Hence, the percentage of each type of error is 20 percent of the total imposed errors for testing. The first imposed error type is the flat line error. The second imposed error type is the out of bound error. The third imposed error type is the spike error. The fourth imposed error type is the data lost error. Finally, the aggregate & fusion error type is imposed. By imposing the above listed five types of data errors, the experiment is designed to measure the error selection efficiency and accuracy during the on-cloud processing of the data set.

Specifically, 10 different error rates are imposed on the experimental data set and tested independently. The testing error rate changes from 1 to 10 percent over 10 repeated experiments. After about 100 seconds, the proposed algorithm can detect more than 60 percent of errors regardless of the testing error rate within the range of 1 to 10 percent. During the time duration between 0 and 100 seconds, all error detection rates increase dramatically with a steep trend. After the time point of 300 seconds, the error detection rates increase slowly with a flat trend. At the time of 740 seconds, the proposed error detection algorithm on cloud can find and locate more than 95 percent of the imposed errors from the testing data sets. When the testing error rate is 1 percent, the best performance gain is achieved, with about 99.5 percent of total errors detected. With the increase of the testing error rate, the error detection rate decreases.

ERROR AND ABNORMALITY CLASSIFICATION:

In big data sets from real world complex networks, there are mainly two types of data generated and exchanged within the networks: (1) the numeric data sampled and exchanged between network nodes, such as sensor network sampled data sets; and (2) the text files and data logs generated by nodes, such as social network data sets. In this paper, our research focuses on error detection for numeric big data sets. Errors from complex networks can be classified into six main types for both numeric and text data, as listed in Appendix A.1, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2013.2295810. This error classification can effectively describe the common error types in complex network systems.

However, when it comes to the errors in wireless sensor network data sets, the above classification loses accuracy in separating node or edge data errors caused by different wireless data communication failures. In addition, it is not enough to describe the error data phenomena in sensor data sets. To better capture the error features of sensor data sets, the above general error classification should be extended. Considering the specific features of numeric data errors, several abnormal data scenarios are demonstrated in Fig. 1. The “flat line fault” indicates that a time series of a node in a network system keeps unchanged for an unacceptably long time duration. In real world applications, sampled data and transmitted data always have slight changes with the time flow. The “out of data bounds fault” indicates that impossible data values are observed based on some domain knowledge. In real world applications, if a temperature value of water is reported as 300 °C, it can be treated as a data fault directly. The “data lost fault” means there are missing data values in a time series during the data generation or communication.

ERROR DEFINITION AND MODELING:

With the above classification, the definition of each error type is presented to guide our error detection algorithm. Suppose that a data record from a network node is denoted as r(n, t, f(n, t), g(n, l)), where n is the ID of the node in a network system, t represents the window length of a time series, and f(n, t) is the numerical values collected within window t from the node n. g(n, l) is a location function which records the cluster, the data source node and the partition situation related to the node n. g(n, l) is used to calculate the distance between the data source node n and the node l, which is the initial data source node; it therefore indicates whether the currently detected error data node is the initial data source node. Furthermore, g(n, l) is also used to parse the data routing between data communication nodes.
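
A simple container class mirroring this record notation might be sketched as follows; the field names follow the notation above, and the distance metric is only a placeholder, not the paper's actual g(n, l) function:

public class SensorRecord {
    private final int nodeId;          // n: ID of the node in the network system
    private final int window;          // t: window length of the time series
    private final double[] values;     // f(n, t): values collected within window t
    private final int cluster;         // part of g(n, l): cluster of the data source node
    private final int initialSource;   // part of g(n, l): the initial data source node l

    public SensorRecord(int nodeId, int window, double[] values,
                        int cluster, int initialSource) {
        this.nodeId = nodeId;
        this.window = window;
        this.values = values;
        this.cluster = cluster;
        this.initialSource = initialSource;
    }

    // Placeholder distance between this node and the initial data source node;
    // zero means the current node is itself the initial source
    public int distanceToInitialSource() {
        return Math.abs(nodeId - initialSource);
    }
}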

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase and business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company.  For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are 

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:     

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

 

5.1.2 TECHNICAL FEASIBILITY   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not have a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or null changes are required for implementing this system.

5.1.3 SOCIAL FEASIBILITY:  

The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system is working according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.

This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of the system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best programs are worthless if they do not produce the correct outputs.

5.2.1 UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and test data correctly and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

5.2.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for various peers in a distributed network framework as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.


5.2.3 NON-FUNCTIONAL TESTING:

Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.2.4 LOAD TESTING:

An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load, that is, the system can be subjected to real usage by having actual users connected to it, who will generate test input data for the system test.

Description: It is necessary to ascertain that the application behaves correctly under load when a ‘Server busy’ response is received.
Expected result: Should designate another active node as the server.


5.1.5 PERFORMANCE TESTING:

Performance tests are used to determine the broadly defined performance of the software system, such as the execution time associated with various parts of the code, response time, and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values and produce accurate results in the expected time.
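As a simple illustration of how response time can be measured during performance testing, the sketch below times a call with System.nanoTime(); the doSearch() method is only a placeholder for whatever operation is actually under test.

public class ResponseTimeCheck {
    public static void main(String[] args) {
        long start = System.nanoTime();

        doSearch("keyword");   // placeholder for the operation under test

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Response time: " + elapsedMs + " ms");
    }

    private static void doSearch(String keyword) {
        // Placeholder: replace with the real call whose performance is measured.
    }
}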


5.1.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and this is what reliability testing ensures. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. This activity forms part of software quality control.

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.


5.1.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

Description: Check that the user identification is authenticated.
Expected result: In case of failure, the user should not be connected to the framework.

Description: Check whether the group keys in a tree are shared by all peers.
Expected result: The peers in the same group should know the group key.


5.1.8 WHITE BOX TESTING:

White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white-box testing method, the software engineer can derive test cases. White-box testing focuses on the inner structure of the software to be tested.

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.
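For illustration, the hypothetical method below contains one decision and one loop; the accompanying JUnit-style test exercises the decision on both its true and false sides and the loop at its boundaries (zero iterations and one iteration), in line with the white-box criteria above.

import org.junit.Assert;
import org.junit.Test;

public class WhiteBoxExample {
    // Hypothetical method under test: sums only the positive values.
    static int sumPositives(int[] values) {
        int sum = 0;
        for (int v : values) {      // loop exercised at its boundaries
            if (v > 0) {            // decision exercised on both sides
                sum += v;
            }
        }
        return sum;
    }

    @Test
    public void coversBothBranchesAndLoopBounds() {
        Assert.assertEquals(0, sumPositives(new int[] {}));       // loop runs zero times
        Assert.assertEquals(4, sumPositives(new int[] {4}));      // one iteration, true branch
        Assert.assertEquals(4, sumPositives(new int[] {4, -2}));  // false branch also taken
    }
}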


5.1.9 BLACK BOX TESTING:

Black-box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods. Black-box testing attempts to find errors and focuses on the inputs, outputs, and principal functions of a software module. The starting point of black-box testing is either a specification or the code. The contents of the box are hidden, and the stimulated software should produce the desired results.

Description: Check for incorrect or missing functions.
Expected result: All the functions must be valid.

Description: Check for interface errors.
Expected result: The entire interface must function normally.

Description: Check for errors in data structures or external database access.
Expected result: Database update and retrieval must be performed correctly.

Description: Check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.

All the above system testing strategies are carried out during development, since the documentation and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE DESCRIPTION:

 

6.1 JAVA TECHNOLOGY:

Java technology is both a programming language and a platform.

 

The Java Programming Language

 

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

  • Simple
  • Architecture neutral
  • Object oriented
  • Portable
  • Distributed
  • High performance
  • Interpreted
  • Multithreaded
  • Robust
  • Dynamic
  • Secure

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes, the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.
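For example, the following minimal program is compiled once into byte codes with javac HelloWorld.java and can then be run with java HelloWorld on any machine that has a Java VM:

// HelloWorld.java
public class HelloWorld {
    public static void main(String[] args) {
        // Compiled once into HelloWorld.class (byte codes); the Java VM
        // interprets those byte codes on whatever platform it runs on.
        System.out.println("Hello, Java platform!");
    }
}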

6.2 THE JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

  • The Java Virtual Machine (Java VM)
  • The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that, once compiled, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

6.3 WHAT CAN JAVA TECHNOLOGY DO?

The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.

A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.
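A minimal servlet sketch using the javax.servlet API is shown below; the class name and response text are only illustrative, and the web.xml (or annotation-based) mapping needed to deploy it is omitted.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative servlet: runs inside a Java Web server and answers GET requests.
public class HelloServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>Hello from a servlet</body></html>");
    }
}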

How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:

  • The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
  • Applets: The set of conventions used by applets.
  • Networking: URLs, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
  • Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
  • Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
  • Software components: Known as JavaBeans™, these can plug into existing component architectures.
  • Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
  • Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational databases.

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.

 

6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?

We can’t promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and require less effort than other languages. We believe that Java technology will help you do the following:

  • Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
  • Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
  • Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
  • Develop programs more quickly: Development can be as much as twice as fast as writing the same program in C++. Why? You write fewer lines of code, and Java is a simpler programming language than C++.
  • Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure Java™ Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
  • Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
  • Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.

 

6.5 ODBC:

 

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

6.6 JDBC:

In an effort to set an independent database standard API for Java; Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.
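The typical call sequence looks like the sketch below, written in the modern try-with-resources style; the connection URL, credentials and table name are placeholders. With the JDBC-ODBC bridge the URL takes the form jdbc:odbc:DataSourceName, while a vendor driver defines its own URL format.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcQueryExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; adjust for the driver actually used.
        String url = "jdbc:odbc:SampleDataSource";

        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM sample_table")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        }
    }
}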

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.

 

6.7 JDBC Goals:

Few software packages are designed without goals in mind. JDBC is no exception; its many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows more error checking to be done at compile time; it also means fewer errors appear at runtime.

Keep the common cases simple

Because more often than not, the usual SQL calls used by the programmer are simple SELECT’s, INSERT’s, DELETE’s and UPDATE’s, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
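For instance, the common INSERT and SELECT cases reduce to a few lines with PreparedStatement; the users table and its columns below are hypothetical, and con is assumed to be an open JDBC Connection.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CommonCases {
    // Table and column names are placeholders for illustration only.
    static void insertUser(Connection con, int id, String name) throws SQLException {
        try (PreparedStatement ps =
                 con.prepareStatement("INSERT INTO users (id, name) VALUES (?, ?)")) {
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
    }

    static String findUserName(Connection con, int id) throws SQLException {
        try (PreparedStatement ps =
                 con.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}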

Finally, we decided to proceed with the implementation using Java Networking.

For dynamically updating the cache table, we use an MS Access database.

Java has two things: a programming language and a platform.

Java is a high-level programming language that is all of the following:

  • Simple
  • Architecture-neutral
  • Object-oriented
  • Portable
  • Distributed
  • High-performance
  • Interpreted
  • Multithreaded
  • Robust
  • Dynamic
  • Secure

Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes, the platform-independent code instructions that are passed to and run on the computer.

Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.

6.8 NETWORKING TCP/IP STACK:

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams:

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP:

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to give a client/server model – see later.

TCP:

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32 bit integer which gives the IP address.
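In Java, such an address can be obtained from a host name as in the short sketch below; the host name is only an example.

import java.net.InetAddress;

public class AddressLookup {
    public static void main(String[] args) throws Exception {
        // Resolve an example host name to its IP address.
        InetAddress addr = InetAddress.getByName("www.example.com");
        System.out.println(addr.getHostAddress());  // printed in dotted decimal form
    }
}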

Network address:

Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class B uses 16-bit network addressing. Class C uses 24-bit network addressing, and Class D (used for multicast) spans the full 32 bits.

Subnet address:

Internally, a UNIX network can be divided into sub-networks. For example, a sub-network using 10-bit subnet addressing allows 1024 different hosts.

Host address:

Finally, 8 bits are used for host addresses within the subnet. This places a limit of 256 machines that can be on the subnet.

Total address:

The 32 bit address is usually written as 4 integers separated by dots.

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a message to a server, you send it to the port for that service of the host that it is running on. This is not location transparency! Certain of these ports are “well known”.

Sockets:

A socket is a data structure maintained by the system to handle network connections. A socket is created using the socket call. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>

/* Creates an endpoint for communication and returns a descriptor,
   or -1 on error. */
int socket(int family, int type, int protocol);

Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
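Since the implementation here uses Java Networking rather than the C API, the equivalent server end in Java is sketched below; the port number and echo behaviour are only illustrative. A client would connect with new Socket("localhost", 5000) and read the reply from the socket's input stream.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Server end of a TCP connection: accept one client and echo one line back.
public class EchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000);   // listen on port 5000
             Socket client = server.accept();                // block until a client connects
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println("echo: " + in.readLine());
        }
    }
}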

6.9 JFREE CHART:

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:

A consistent and well-documented API, supporting a wide range of chart types;

A flexible design that is easy to extend, and targets both server-side and client-side applications;

Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG);

JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.
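As a small example of the API (class names as in the JFreeChart 1.0.x releases; later versions rename ChartUtilities to ChartUtils), the following creates a pie chart and writes it to a PNG file. The dataset keys, values and file name are only illustrative.

import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

public class PieChartDemo {
    public static void main(String[] args) throws Exception {
        // Build a small dataset; keys and values are only illustrative.
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Java", 60);
        dataset.setValue("Other", 40);

        // Create the chart (title, dataset, legend, tooltips, URLs).
        JFreeChart chart = ChartFactory.createPieChart(
                "Sample Chart", dataset, true, true, false);

        // Save the chart as a 400 x 300 PNG image.
        ChartUtilities.saveChartAsPNG(new File("sample-chart.png"), chart, 400, 300);
    }
}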

 

6.9.1. Map Visualizations:

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: Sourcing freely redistributable vector outlines for the countries of the world, states/provinces in particular countries (USA in particular, but also other areas);

Creating an appropriate dataset interface (plus default implementation), a renderer, and integrating this with the existing XYPlot class in JFreeChart; testing, documenting, testing some more, documenting some more.

6.9.2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.

6.9.3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.

 

6.9.4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

CHAPTER 7

7.0 APPENDIX

7.1 SAMPLE SCREEN SHOTS:

7.2 SAMPLE SOURCE CODE:

CHAPTER 8

8.1 CONCLUSION AND FUTURE WORK:

In order to detect errors in big data sets from sensor network systems, a novel approach is developed with cloud computing. Firstly, an error classification for big data sets is presented. Secondly, the correlation between sensor network systems and scale-free complex networks is introduced. According to each error type and the features of scale-free networks, we have proposed a time-efficient strategy for detecting and locating errors in big data sets on the cloud.

Experimental results from our cloud computing environment, U-Cloud, demonstrate that 1) the proposed scale-free error detecting approach can significantly reduce the time for error detection in numeric big data sets, and 2) the proposed approach achieves an error selection ratio similar to that of non-scale-free error detection approaches. In future work on error detection for big data sets from sensor network systems on the cloud, issues such as error correction, big data cleaning and recovery will be further explored.

From our experimental results and analysis, it can be concluded that our proposed error detection approach for big data processing on the cloud can dramatically increase error detecting speed without losing error selecting accuracy. In particular, when the error rate for a target big data set is limited to a small value (1-10 percent), the algorithm can efficiently detect the errors with high fidelity.