Authenticated Key Exchange Protocols for Parallel Network File Systems

Hoon Wei Lim    Guomin Yang

Abstract—We study the problem of key establishment for secure many-to-many communications. The problem is inspired by the proliferation of large-scale distributed file systems supporting parallel access to multiple storage devices. Our work focuses on the current Internet standard for such file systems, i.e., parallel Network File System (pNFS), which makes use of Kerberos to establish parallel session keys between clients and storage devices. Our review of the existing Kerberos-based protocol shows that it has a number of limitations: (i) a metadata server facilitating key exchange between the clients and the storage devices bears a heavy workload that restricts the scalability of the protocol; (ii) the protocol does not provide forward secrecy; and (iii) the metadata server itself generates all the session keys that are used between the clients and the storage devices, which inherently leads to key escrow. In this paper, we propose a variety of authenticated key exchange protocols that are designed to address the above issues. We show that our protocols are capable of reducing up to approximately 54% of the workload of the metadata server while concurrently supporting forward secrecy and escrow-freeness, all at the cost of only a small increase in computation overhead at the client.

Keywords—Parallel sessions, authenticated key exchange, network file systems, forward secrecy, key escrow.

I. INTRODUCTION

In a parallel file system, file data is distributed across multiple storage devices or nodes to allow concurrent access by multiple tasks of a parallel application. This is typically used in large-scale cluster computing that focuses on high performance and reliable access to large datasets.
That is, higher I/O bandwidth is achieved through concurrent access to multiple storage devices within large compute clusters, while data loss is protected through data mirroring using fault-tolerant striping algorithms. Some examples of high-performance parallel file systems in production use are the IBM General Parallel File System (GPFS) [48], Google File System (GoogleFS) [21], Lustre [35], Parallel Virtual File System (PVFS) [43], and the Panasas File System [53]; there also exist research projects on distributed object storage systems such as Ursa Minor [1], Ceph [52], XtreemFS [25], and Gfarm [50]. These are usually required for advanced scientific or data-intensive applications such as seismic data processing, digital animation studios, computational fluid dynamics, and semiconductor manufacturing. In these environments, hundreds or thousands of file system clients share data and generate very high aggregate I/O load on the file system supporting petabyte- or terabyte-scale storage capacities.

H.W. Lim is with National University of Singapore. Email: hoonwei@nus.edu.sg.
G. Yang is with University of Wollongong, Australia. Email: gyang@uow.edu.au.

Independent of the development of cluster and high-performance computing, the emergence of clouds [5], [37] and the MapReduce programming model [13] has resulted in file systems such as the Hadoop Distributed File System (HDFS) [26], Amazon S3 File System [6], and CloudStore [11]. This, in turn, has accelerated the widespread use of distributed and parallel computation on large datasets in many organizations. Some notable users of HDFS include AOL, Apple, eBay, Facebook, Hewlett-Packard, IBM, LinkedIn, Twitter, and Yahoo! [23].

In this work, we investigate the problem of secure many-to-many communications in large-scale network file systems that support parallel access to multiple storage devices.
That is, we consider a communication model where there are a large number of clients (potentially hundreds or thousands) accessing multiple remote and distributed storage devices (which may also scale up to hundreds or thousands) in parallel. In particular, we focus on how to exchange key materials and establish parallel secure sessions between the clients and the storage devices in the parallel Network File System (pNFS) [46], the current Internet standard, in an efficient and scalable manner. The development of pNFS is driven by Panasas, Netapp, Sun, EMC, IBM, and UMich/CITI, and thus it shares many common features with, and is compatible with, many existing commercial/proprietary network file systems.

Our primary goal in this work is to design efficient and secure authenticated key exchange protocols that meet the specific requirements of pNFS. In particular, we attempt to meet the following desirable properties, which either have not been satisfactorily achieved or are not achievable by the current Kerberos-based solution (as described in Section II):

Scalability – the metadata server facilitating access requests from a client to multiple storage devices should bear as little workload as possible so that the server will not become a performance bottleneck, but is capable of supporting a very large number of clients;

Forward secrecy – the protocol should guarantee the security of past session keys when the long-term secret key of a client or a storage device is compromised [39]; and

Escrow-free – the metadata server should not learn any information about any session key used by the client and the storage device, provided there is no collusion among them.

The main results of this paper are three new provably secure authenticated key exchange protocols. Our protocols, progressively designed to achieve each of the above properties, demonstrate the trade-offs between efficiency and security. We show that our protocols can reduce the workload of the metadata server by approximately half compared to the current Kerberos-based protocol, while achieving the desired security properties and keeping the computational overhead at the clients and the storage devices at a reasonably low level. We define an appropriate security model and prove that our protocols are secure in the model.

1045-9219 (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information. This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TPDS.2015.2388447, IEEE Transactions on Parallel and Distributed Systems.

In the next section, we provide some background on pNFS and describe its existing security mechanisms associated with secure communications between clients and distributed storage devices. Moreover, we identify the limitations of the current Kerberos-based protocol in pNFS for establishing secure channels in parallel. In Section III, we describe the threat model for pNFS and the existing Kerberos-based protocol. In Section IV, we present our protocols, which aim to address the current limitations. We then provide formal security analyses of our protocols under an appropriate security model, as well as a performance evaluation, in Sections VI and VII, respectively. In Section VIII, we describe related work, and finally, in Section IX, we conclude and discuss some future work.

II. INTERNET STANDARD — NFS

Network File System (NFS) [46] is currently the sole file system standard supported by the Internet Engineering Task Force (IETF). The NFS protocol is a distributed file system protocol originally developed by Sun Microsystems that allows a user on a client computer, which may be diskless, to access files over networks in a manner similar to how local storage is accessed [47].
It is designed to be portable across different machines, operating systems, network architectures, and transport protocols. Such portability is achieved through the use of Remote Procedure Call (RPC) [51] primitives built on top of an eXternal Data Representation (XDR) [15], with the former providing a procedure-oriented interface to remote services, and the latter providing a common way of representing a set of data types over a network. The NFS protocol has since evolved into an open standard defined by the IETF Network Working Group [49], [9], [45]. Among its current key features are filesystem migration and replication, file locking, data caching, delegation (from server to client), and crash recovery. In recent years, NFS has typically been used in environments where performance is a major factor, for example, high-performance Linux clusters. The NFS version 4.1 (NFSv4.1) [46] protocol, the most recent version, provides a feature called parallel NFS (pNFS) that allows direct, concurrent client access to multiple storage devices to improve performance and scalability. As described in the NFSv4.1 specification:

    When file data for a single NFS server is stored on multiple and/or higher-throughput storage devices (by comparison to the server's throughput capability), the result can be significantly better file access performance.

pNFS separates the file system protocol processing into two parts: metadata processing and data processing. Metadata is information about a file system object, such as its name, location within the namespace, owner, permissions, and other attributes. The entity that manages metadata is called a metadata server. On the other hand, regular files' data is striped and stored across storage devices or servers. Data striping occurs in at least two ways: on a file-by-file basis and, within sufficiently large files, on a block-by-block basis. Unlike NFS, a read or write of data managed with pNFS is a direct operation between a client node and the storage system itself.
Figure 1 illustrates the conceptual model of pNFS.

Fig. 1. The conceptual model of pNFS: clients (running heterogeneous OSes) communicate with the metadata server over the pNFS protocol (metadata exchange) and with the storage devices or servers (file, block, or object storage) over the storage access protocol (direct, parallel data exchange), while the control protocol synchronizes state between the metadata server and the storage devices.

More specifically, pNFS comprises a collection of three protocols: (i) the pNFS protocol, which transfers file metadata, also known as a layout,1 between the metadata server and a client node; (ii) the storage access protocol, which specifies how a client accesses data from the associated storage devices according to the corresponding metadata; and (iii) the control protocol, which synchronizes state between the metadata server and the storage devices.2

A. Security Consideration

Earlier versions of NFS focused on simplicity and efficiency, and were designed to work well on intranets and local networks. The later versions aim to improve access and performance within the Internet environment; however, security has then become a greater concern. Among many other security issues, user and server authentication within an open, distributed, and cross-domain environment is a complicated matter. Key management can be tedious and expensive, but it is an important aspect of ensuring the security of the system. Moreover, data privacy may be critical in high-performance and parallel applications, for example, those associated with biomedical information sharing [28], [44], financial data processing & analysis [20], [34], and drug simulation & discovery [42]. Hence, distributed storage devices pose greater risks to various security threats, such as illegal modification or stealing of data residing on the storage devices, as well as interception of data in transit between different nodes within the system. NFS (since version 4), therefore, has mandated that implementations support end-to-end authentication, where a user (through a client) mutually authenticates to an NFS server. Moreover, consideration should be given to the integrity and privacy (confidentiality) of NFS requests and responses [45].

The RPCSEC_GSS framework [17], [16] is currently the core security component of NFS that provides basic security services. RPCSEC_GSS allows RPC protocols to access the Generic Security Services Application Programming Interface (GSS-API) [33]. The latter is used to facilitate the exchange of credentials between a local and a remote communicating party, for example between a client and a server, in order to establish a security context. The GSS-API achieves this through an interface and a set of generic functions that are independent of the underlying security mechanisms and communication protocols employed by the communicating parties.

1 A layout can be seen as a map, describing how a file is distributed across the data storage system. When a client holds a layout, it is granted the ability to directly access the byte-range at the storage location specified in the layout.
2 Note that the control protocol is not specified in NFSv4.1. It can take many forms, allowing vendors the flexibility to compete on performance, cost, and features.
Hence, with RPCSEC_GSS, various security mechanisms or protocols can be employed to provide services such as encrypting NFS traffic and performing integrity checks on the entire body of an NFSv4 call.

Similarly, in pNFS, communication between the client and the metadata server is authenticated and protected through RPCSEC_GSS. The metadata server grants access permissions (to storage devices) to the client according to pre-defined access control lists (ACLs).3 The client's I/O request to a storage device must include the corresponding valid layout. Otherwise, the I/O request is rejected. In an environment where eavesdropping on the communication between the client and the storage device is of sufficient concern, RPCSEC_GSS is used to provide privacy protection [46].

B. Kerberos & LIPKEY

In NFSv4, the Kerberos version 5 [32], [18] and the Low Infrastructure Public Key (LIPKEY) [14] GSS-API mechanisms are recommended, although other mechanisms may also be specified and used. Kerberos is used particularly for user authentication and single sign-on, while LIPKEY provides a TLS/SSL-like model through the GSS-API, particularly for server authentication in the Internet environment.

User and Server Authentication. Kerberos, a widely deployed network authentication protocol supported by all major operating systems, allows nodes communicating over a non-secure network to perform mutual authentication. It works in a client-server model, in which each domain (also known as a realm) is governed by a Key Distribution Center (KDC), acting as a server that authenticates and provides ticket-granting services to its users (through their respective clients) within the domain. Each user shares a password with its KDC, and a user is authenticated through a password-derived symmetric key known only between the user and the KDC.
However, one security weakness of such an authentication method is that it may be susceptible to an off-line password guessing attack, particularly when a weak password is used to derive a key that encrypts a protocol message transmitted between the client and the KDC. Furthermore, Kerberos has strict time requirements, implying that the clocks of the involved hosts must be synchronized with that of the KDC within configured limits.

Hence, LIPKEY is used instead to authenticate the client with a password and the metadata server with a public key certificate, and to establish a secure channel between the client and the server. LIPKEY leverages the existing Simple Public-Key Mechanism (SPKM) [2] and is specified as a GSS-API mechanism layered above SPKM, which, in turn, allows both unilateral and mutual authentication to be accomplished without the use of secure time-stamps. Through LIPKEY, analogous to a typical TLS deployment scenario consisting of a client with no public key certificate accessing a server with a public key certificate, the client in NFS [14]:

- obtains the metadata server's certificate;
- verifies that it was signed by a trusted Certification Authority (CA);
- generates a random session symmetric key;
- encrypts the session key with the metadata server's public key; and
- sends the encrypted session key to the server.

At this point, the client and the authenticated metadata server have set up a secure channel. The client can then provide a user name and a password to the server for user authentication.

Single Sign-on. In NFS/pNFS deployments that employ Kerberos, each storage device shares a (long-term) symmetric key with the metadata server (which acts as the KDC). Kerberos then allows the client to perform single sign-on, such that the client is authenticated once to the KDC for a fixed period of time but may be allowed access to multiple storage devices governed by the KDC within that period.

3 Typically, operating system principals are matched to a set of user and group access control lists.
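The LIPKEY-style key transport described above (a fresh session key encrypted under the server's public key) can be sketched as follows. This is a minimal illustration using textbook RSA with toy parameters of our own choosing; a real deployment validates the certificate chain, uses proper padding such as OAEP, and relies on a vetted cryptographic library rather than raw modular exponentiation.

```python
# Illustrative sketch of LIPKEY-style key transport: the client encrypts a
# fresh session key under the metadata server's public key. Textbook RSA
# with toy parameters -- NOT the actual LIPKEY/SPKM wire format.
import secrets

# Toy RSA key pair (server side). p and q are far too small for real use.
p, q = 61, 53
n = p * q                      # public modulus
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

# Client: generate a random session key and encrypt it to the server.
session_key = secrets.randbelow(n - 2) + 2
ciphertext = pow(session_key, e, n)

# Server: recover the session key with its private key.
recovered = pow(ciphertext, d, n)
assert recovered == session_key
```

With the transported key in hand, both sides share a secret over which the client can then send its user name and password, matching the final LIPKEY step above.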
This single sign-on process can be summarized in three rounds of communication between the client, the metadata server, and the storage devices as follows:

1) the client and the metadata server perform mutual authentication through LIPKEY (as described before), and the server issues a ticket-granting ticket (TGT) to the client upon successful authentication;
2) the client forwards the TGT to a ticket-granting server (TGS), typically the same entity as the KDC, in order to obtain one or more service tickets (each containing a session key for access to a storage device) and valid layouts (each presenting valid access permissions to a storage device according to the ACLs);
3) the client finally presents the service tickets and layouts to the corresponding storage devices to get access to the stored data objects or files.

We describe the above Kerberos-based key establishment protocol in more detail in Section III-C.

Secure storage access. The session key generated by the ticket-granting server (metadata server) for a client and a storage device during single sign-on can then be used in the storage access protocol. It protects the integrity and privacy of data transmitted between the client and the storage device. Clearly, the session key and the associated layout are valid only within the granted validity period.

C. Current Limitations

The current design of NFS/pNFS focuses on interoperability, instead of efficiency and scalability, of the various mechanisms that provide basic security.
Moreover, key establishment between a client and multiple storage devices in pNFS is based on the mechanisms used for NFS; that is, it is not designed specifically for parallel communications. Hence, the metadata server is not only responsible for processing access requests to storage devices (by granting valid layouts to authenticated and authorized clients), but is also required to generate all the corresponding session keys that the client needs to communicate securely with the storage devices to which it has been granted access. Consequently, the metadata server may become a performance bottleneck for the file system. Moreover, such a protocol design leads to key escrow. Hence, in principle, the server can learn all information transmitted between a client and a storage device. This, in turn, makes the server an attractive target for attackers.

Another drawback of the current approach is that past session keys can be exposed if a storage device's long-term key shared with the metadata server is compromised. We believe that this is a realistic threat since a large-scale file system may have thousands of geographically distributed storage devices. It may not be feasible to provide strong physical security and network protection for all the storage devices.

III. PRELIMINARIES

A. Notation

We let M denote a metadata server, C denote a client, and S denote a storage device. For entities X, Y ∈ {M, C, S}, we use ID_X to denote a unique identity of X and K_X to denote X's secret (symmetric) key, while K_XY denotes a secret key shared between X and Y, and sk denotes a session key.

Moreover, we let E(K, m) be a standard (encryption-only) symmetric-key encryption function and E*(K, m) be an authenticated symmetric-key encryption function, where both functions take as input a key K and a message m. Finally, we use t to represent a current time and λ to denote a layout. We may introduce other notation as required.

B. Threat Assumptions

Existing proposals [19], [40], [29], [30], [31] on secure large-scale distributed file systems typically assume that both the metadata server and the storage device are trusted entities, while no implicit trust is placed on the clients. The metadata server is trusted to act as a reference monitor, issue valid layouts containing access permissions, and sometimes even generate session keys (for example, in the case of Kerberos-based pNFS) for secure communication between the client and the storage devices. The storage devices are trusted to store data and to perform I/O operations only upon authorized requests. However, we assume that the storage devices are at a much higher risk of being compromised than the metadata server, which is typically easier to monitor and protect in a centralized location. Furthermore, we assume that the storage devices may occasionally encounter hardware or software failure, rendering the data stored on them inaccessible.

We note that this work focuses on communication security. Hence, we assume that data transmitted between the client and the metadata server, or between the client and the storage device, can be easily eavesdropped, modified, or deleted by an adversary. However, we do not address storage-related security issues in this work. Security protection mechanisms for data at rest are orthogonal to our protocols.

C. Kerberos-based pNFS Protocol

For the sake of completeness, we describe in Figure 2 the key establishment protocol4 recommended for pNFS in RFC 5661 between a client C and n storage devices S_i, for 1 ≤ i ≤ n, through a metadata server M. We will compare the efficiency of the pNFS protocol against ours in Section VII.

During the setup phase, we assume that M establishes a shared secret key K_MSi with each S_i. Here, K_C is a key derived from C's password that is also known to M, while T plays the role of a ticket-granting server (we simply assume that it is part of M).
Also, prior to executing the protocol in Figure 2, we assume that C and M have already set up a secure channel through LIPKEY (as described in Section II-B).

Once C has been authenticated by M and granted access to S_1, ..., S_n, it receives a set of service tickets E(K_MSi, ID_C, t, sk_i), session keys sk_i, and layouts5 λ_i (for all i ∈ [1, n]) from T, as illustrated in step (4) of the protocol. Clearly, we assume that C is able to uniquely extract each session key sk_i from E(K_CT, sk_1, ..., sk_n). Since the session keys are generated by M and transported to S_i through C, no interaction is required between C and S_i (in terms of key exchange) in order to agree on a session key. This keeps the communication overhead between the client and each storage device to a minimum in comparison with the case where key exchange is required. Moreover, the computational overhead for the client and each storage device is very low since the protocol is mainly based on symmetric-key encryption.

The message in step (6) serves as key confirmation, that is, it convinces C that S_i is in possession of the same session key that C uses.

IV. OVERVIEW OF OUR PROTOCOLS

We describe our design goals and give some intuition for the variety of pNFS authenticated key exchange6 (pNFS-AKE) protocols that we consider in this work. In these protocols, we focus on parallel session key establishment between a client and n different storage devices through a metadata server.
Nevertheless, they can be extended straightforwardly to the multi-user setting, i.e., many-to-many communications between clients and storage devices.

4 For ease of exposition, we do not provide complete details of the protocol parameters.
5 We assume that a layout (containing the client's identity, file object mapping information, and access permissions) is typically integrity protected, and that it can be in the form of a signature or MAC.
6 Without loss of generality, we use the term "key exchange" here, although key establishment between two parties can be based on either key transport or key agreement [39].

(1) C → M : ID_C
(2) M → C : E(K_C, K_CT), E(K_T, ID_C, t, K_CT)
(3) C → T : ID_S1, ..., ID_Sn, E(K_T, ID_C, t, K_CT), E(K_CT, ID_C, t)
(4) T → C : λ_1, ..., λ_n, E(K_MS1, ID_C, t, sk_1), ..., E(K_MSn, ID_C, t, sk_n), E(K_CT, sk_1, ..., sk_n)
(5) C → S_i : λ_i, E(K_MSi, ID_C, t, sk_i), E(sk_i, ID_C, t)
(6) S_i → C : E(sk_i, t + 1)

Fig. 2. A simplified version of the Kerberos-based pNFS protocol.

A. Design Goals

In our solutions, we focus on efficiency and scalability with respect to the metadata server. That is, our goal is to reduce the workload of the metadata server, while the computational and communication overhead for both the client and the storage device should remain reasonably low. More importantly, we would like to meet all these goals while ensuring at least roughly similar security to that of the Kerberos-based protocol shown in Section III-C.
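The Kerberos-based baseline of Section III-C can be sketched concretely. In the snippet below, the metadata server generates every session key sk_i itself and wraps each one for device S_i under their shared long-term key K_MSi, which illustrates both the per-request workload and the key escrow inherent in steps (4)-(5) of Figure 2. The XOR-with-HMAC-keystream "encryption" is a self-contained stand-in of our own, not the cipher mandated by RFC 5661.

```python
# Toy run of the Kerberos-style flow: M (as KDC/TGS) creates all session
# keys and wraps them as "tickets" for the storage devices.
import hashlib
import hmac
import secrets

def keystream(key, nonce, length):
    return hmac.new(key, nonce, hashlib.sha256).digest()[:length]

def wrap(key, payload):
    """Encrypt-then-MAC stand-in for authenticated encryption E(K, m)."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(payload, keystream(key, nonce, len(payload))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def unwrap(key, nonce, ct, tag):
    assert hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest())
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

n = 3
K_MS = [secrets.token_bytes(32) for _ in range(n)]   # long-term M<->S_i keys

# Step (4): M generates ALL n session keys -- the escrow/workload issue.
sk = [secrets.token_bytes(16) for _ in range(n)]
tickets = [wrap(K_MS[i], sk[i]) for i in range(n)]

# Step (5): C forwards ticket i to S_i, which recovers sk_i.
for i in range(n):
    assert unwrap(K_MS[i], *tickets[i]) == sk[i]
```

Since M both generates and can later recompute every sk_i, it can read all client-device traffic; this is exactly the property the protocols below are designed to remove.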
In fact, we consider a stronger security model with forward secrecy for three of our protocols, such that compromise of a long-term secret key of a client C or a storage device S_i will not expose the associated past session keys shared between C and S_i. Further, we would like an escrow-free solution; that is, the metadata server does not learn the session key shared between a client and a storage device, unless the server colludes with either one of them.

B. Main Ideas

Recall that in Kerberos-based pNFS, the metadata server is required to generate all service tickets E(K_MSi, ID_C, t, sk_i) and session keys sk_i between C and S_i for all 1 ≤ i ≤ n, thus placing a heavy workload on the server. In our solutions, intuitively, C first pre-computes some key materials and forwards them to M, which in return issues the corresponding "authentication tokens" (or service tickets). C can then, when accessing S_i (for all i), derive session keys from the pre-computed key materials and present the corresponding authentication tokens. Note that C is not required to compute the key materials before each access request to a storage device; instead, this is done at the beginning of a pre-defined validity period v, which may be, for example, a day, a week, or a month. For each request to access one or more storage devices at a specific time t, C then computes a session key from the pre-computed material. This way, the workload of generating session keys is amortized over v for all the clients within the file system. Our three variants of pNFS-AKE protocols can be summarized as follows:

pNFS-AKE-I: Our first protocol can be regarded as a modified version of Kerberos that allows the client to generate its own session keys. That is, the key material used to derive a session key is pre-computed by the client for each v and forwarded to the corresponding storage device in the form of an authentication token at time t (within v).
As with Kerberos, symmetric-key encryption is used to protect the confidentiality of secret information used in the protocol. However, the protocol does not provide any forward secrecy. Further, the key escrow issue persists here, since the authentication tokens containing key materials for computing session keys are generated by the server.

pNFS-AKE-II: To address key escrow while simultaneously achieving forward secrecy, we incorporate a Diffie-Hellman key agreement technique into the Kerberos-like pNFS-AKE-I. In particular, the client C and the storage device S_i each now chooses a secret value (known only to itself) and pre-computes a Diffie-Hellman key component. A session key is then generated from both Diffie-Hellman components. Upon expiry of a time period v, the secret values and Diffie-Hellman key components are permanently erased, such that in the event that either C or S_i is compromised, the attacker no longer has access to the key values required to compute past session keys. Note, however, that we achieve only partial forward secrecy (with respect to v), trading some security for efficiency. This implies that compromise of a long-term key can expose session keys generated within the current v; however, past session keys in previous (expired) time periods v′ (for v′ < v) will not be affected.

pNFS-AKE-III: Our third protocol aims to achieve full forward secrecy, that is, exposure of a long-term key affects only a current session key (with respect to t), but not all the other past session keys. We would also like to prevent key escrow. In a nutshell, we enhance pNFS-AKE-II with a key update technique based on any efficient one-way function, such as a keyed hash function. In Phase I, we require C and each S_i to share some initial key material in the form of a Diffie-Hellman key. In Phase II, the initial shared key is then used to derive session keys in the form of a keyed hash chain.
Since a hash value in the chain does not reveal information about its pre-image, the associated session key is forward secure.

V. DESCRIPTION OF OUR PROTOCOLS

We first introduce some notation required for our protocols. Let F(k, m) denote a secure key derivation function that takes as input a secret key k and some auxiliary information m, and outputs another key. Let sid denote a session identifier that can be used to uniquely name the ensuing session. Let also N be the total number of storage devices to which a client is allowed access. We are now ready to describe the construction of our protocols.

A. pNFS-AKE-I

Our first pNFS-AKE protocol is illustrated in Figure 3. For each validity period v, C must first pre-compute a set of key materials K_CS1, ..., K_CSN before it can access any of the N storage devices S_i (for 1 ≤ i ≤ N).

Phase I – For each validity period v:
(1) C → M : ID_C, E(K_CM, K_CS1, ..., K_CSN)
(2) M → C : E*(K_MS1, ID_C, ID_S1, v, K_CS1), ..., E*(K_MSN, ID_C, ID_SN, v, K_CSN)

Phase II – For each access request at time t:
(1) C → M : ID_C, ID_S1, ..., ID_Sn
(2) M → C : λ_1, ..., λ_n
(3) C → S_i : λ_i, E*(K_MSi, ID_C, ID_Si, v, K_CSi), E(sk_i^0, ID_C, t)
(4) S_i → C : E(sk_i^0, t + 1)

Fig. 3. Specification of pNFS-AKE-I.

The key materials are transmitted to M. We assume that the communication between C and M is authenticated and protected through a secure channel associated with key K_CM, established using the existing methods described in Section II-B.
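The client-side session-key derivation in pNFS-AKE-I can be sketched as follows. The paper leaves F abstract as a secure key derivation function; here we instantiate it with HMAC-SHA256, and the byte encoding of the auxiliary fields is our own illustrative choice.

```python
# Sketch of sk_i^z = F(K_CSi, ID_C, ID_Si, v, sid, z) with F = HMAC-SHA256.
# The field encoding (b"|" separator) is an assumption for illustration.
import hashlib
import hmac
import secrets

def F(k, *fields):
    """Key derivation: HMAC over the concatenated auxiliary fields."""
    return hmac.new(k, b"|".join(fields), hashlib.sha256).digest()

K_CSi = secrets.token_bytes(32)          # key material pre-computed by C for period v
ID_C, ID_Si = b"client-1", b"storage-7"  # hypothetical identities
v, sid = b"2015-week-02", b"sess-42"     # validity period and session identifier

sk0 = F(K_CSi, ID_C, ID_Si, v, sid, b"0")   # used for key confirmation
sk1 = F(K_CSi, ID_C, ID_Si, v, sid, b"1")   # the actual session key
assert sk0 != sk1                            # the index z separates the two keys
```

Because both C and S_i hold K_CSi (C pre-computed it; S_i recovers it from the authentication token), each side can derive sk_i^0 and sk_i^1 locally without M ever choosing the keys per request.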
M then issues an authentication token of the form E(K_MSi; ID_C, ID_Si, v, K_CSi) for each key material, provided the associated storage device S_i has not been revoked.^7 This completes Phase I of the protocol. From this point onwards, any request from C to access S_i is considered part of Phase II of the protocol until v expires.

^7 Here K_MSi is regarded as a long-term symmetric secret key shared between M and S_i. Also, we use authenticated encryption rather than encryption-only for security reasons; this will become clear in our security analysis.

When C submits an access request to M, the request contains the identities of all the storage devices S_i, for 1 ≤ i ≤ n ≤ N, that C wishes to access. For each S_i, M issues a layout λ_i. C then forwards the respective layouts, authentication tokens (from Phase I), and encrypted messages of the form E(sk_i^0; ID_C, t) to all n storage devices.

Upon receiving an I/O request for a file object from C, each S_i performs the following:
1) check whether the layout λ_i is valid;
2) decrypt the authentication token and recover the key K_CSi;
3) compute the keys sk_i^z = F(K_CSi; ID_C, ID_Si, v, sid, z) for z = 0, 1;
4) decrypt the encrypted message, and check whether ID_C matches the identity of C and whether t falls within the current validity period v;
5) if all the previous checks pass, reply to C with a key confirmation message under the key sk_i^0.

At the end of the protocol, sk_i^1 is set as the session key for securing communication between C and S_i. We note that, as suggested in [7], sid in our protocol is uniquely generated for each session at the application layer, for example through the GSS-API.

B. pNFS-AKE-II

We now employ a Diffie-Hellman key agreement technique to both provide forward secrecy and prevent key escrow. In this protocol, each S_i is required to pre-distribute some key material to M in Phase I of the protocol.

Let g^x ∈ G denote a Diffie-Hellman component, where G is an appropriate group generated by g, and x is a number chosen at random by entity X ∈ {C, S}. Let τ(k; m) denote a secure MAC scheme that takes as input a secret key k and a target message m, and outputs a MAC tag. Our partially forward secure protocol is specified in Figure 4.

Phase I – For each validity period v:
(1) S_i → M : ID_Si, E(K_MSi; g^si)
(2) C → M : ID_C, E(K_CM; g^c)
(3) M → C : E(K_CM; g^s1, ..., g^sN),
    τ(K_MS1; ID_C, ID_S1, v, g^c, g^s1), ..., τ(K_MSN; ID_C, ID_SN, v, g^c, g^sN)

Phase II – For each access request at time t:
(1) C → M : ID_C, ID_S1, ..., ID_Sn
(2) M → C : λ_1, ..., λ_n
(3) C → S_i : λ_i, g^c, τ(K_MSi; ID_C, ID_Si, v, g^c, g^si), E(sk_i^0; ID_C, t)
(4) S_i → C : E(sk_i^0; t + 1)

Fig. 4. Specification of pNFS-AKE-II (with partial forward secrecy and escrow-freeness).

At the beginning of each v, each S_i that is governed by M generates a Diffie-Hellman key component g^si. The key component g^si is forwarded to and stored by M. Similarly, C generates its Diffie-Hellman key component g^c and sends it to M.^8 At the end of Phase I, C receives all the key components corresponding to the N storage devices that it may access within time period v, together with a set of authentication tokens of the form τ(K_MSi; ID_C, ID_Si, v, g^c, g^si). We note that, for ease of exposition, we use the same key K_MSi for encryption in step (1) and for the MAC in step (2). In an actual implementation, however, we assume that different keys are derived for encryption and MAC, respectively, with K_MSi as the master key. For example, the encryption key can be set to F(K_MSi; "enc"), while the MAC key can be set to F(K_MSi; "mac").

^8 For consistency with the existing design of the Kerberos protocol, we assume that the Diffie-Hellman components are "conveniently" transmitted through the already established secure channel between the parties, although from a security viewpoint the Diffie-Hellman components need not necessarily be encrypted.

Steps (1) & (2) of Phase II are identical to those of the previous variant. In step (3), C submits its Diffie-Hellman component g^c in addition to the other information required in step (3) of pNFS-AKE-I. S_i must verify the authentication token to ensure the integrity of g^c. Here C and S_i compute sk_i^z for z = 0, 1 as follows:

sk_i^z = F(g^(c·si); ID_C, ID_Si, g^c, g^si, v, sid, z).

At the end of the protocol, C and S_i share a session key sk_i^1.

Note that since C distributes its chosen Diffie-Hellman value g^c during each protocol run (in Phase II), each S_i needs to store only its own secret value s_i and is not required to maintain a list of g^c values for different clients. Upon expiry of v, C and S_i erase their secret values c and s_i, respectively, from their internal states (or memory).

Clearly, M learns nothing about sk_i^z unless it colludes with the associated C or S_i; the protocol thus achieves escrow-freeness.

C. pNFS-AKE-III

As explained before, pNFS-AKE-II achieves only partial forward secrecy (with respect to v). In the third variant of our pNFS-AKE, therefore, we attempt to design a protocol that achieves full forward secrecy and escrow-freeness. A straightforward and well-known technique for doing this is to require both C and S_i to generate and exchange fresh Diffie-Hellman components for each access request at time t. However, this would drastically increase the computational overhead at the client and the storage devices. Hence, we adopt a different approach here, combining the Diffie-Hellman key exchange technique used in pNFS-AKE-II with a very efficient key update mechanism.
The latter allows session keys to be derived using only symmetric-key operations based on an agreed Diffie-Hellman key. Our protocol is illustrated in Figure 5.

Phase I – For each validity period v:
(1) S_i → M : ID_Si, E(K_MSi; g^si)
(2) C → M : ID_C, E(K_CM; g^c)
(3) M → C : E(K_CM; g^s1, ..., g^sN)
(4) M → S_i : E(K_MSi; ID_C, ID_Si, v, g^c, g^si)

Phase II – For each access request at time t:
(1) C → M : ID_C, ID_S1, ..., ID_Sn
(2) M → C : λ_1, ..., λ_n
(3) C → S_i : λ_i, E(sk_i^{j,0}; ID_C, t)
(4) S_i → C : E(sk_i^{j,0}; t + 1)

Fig. 5. Specification of pNFS-AKE-III (with full forward secrecy and escrow-freeness).

Phase I of the protocol is similar to that of pNFS-AKE-II. In addition, M also distributes C's chosen Diffie-Hellman component g^c to each S_i. Hence, at the end of Phase I, both C and S_i are able to agree on a Diffie-Hellman value g^(c·si). Moreover, C and S_i set F_1(g^(c·si); ID_C, ID_Si, v) to be their initial shared secret state K_CSi^0.^9

^9 Unlike in pNFS-AKE-II, where g^c is distributed in Phase II, we need to pre-distribute C's chosen Diffie-Hellman component in Phase I because the secret state K_CSi^0 that C and S_i store will be updated after each request. This is essential to ensure forward secrecy.

During each access request at time t in Phase II, steps (1) & (2) of the protocol are identical to those in pNFS-AKE-II. In step (3), however, C can directly establish a secure session with S_i by computing sk_i^{j,z} as follows:

sk_i^{j,z} = F_2(K_CSi^{j-1}; ID_C, ID_Si, j, sid, z)

where j ≥ 1 is an increasing counter denoting the j-th session between C and S_i, with session key sk_i^{j,1}. Both C and S_i then set

K_CSi^j = F_1(K_CSi^{j-1}; j)

and update their internal states. Note that here we use two different key derivation functions, F_1 and F_2, to compute K_CSi^j and sk_i^{j,z}, respectively. Our design enforces independence among different session keys: even if the adversary has obtained a session key sk_i^{j,1}, it cannot derive K_CSi^{j-1} or K_CSi^j. Therefore, the adversary cannot obtain sk_i^{j+1,z} or any of the following session keys.
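The core of this key update mechanism, a per-period Diffie-Hellman secret feeding a one-way chain of states, can be sketched in Python as follows. This is a toy illustration only: the group parameters are far too small to be secure, and the encodings, labels, and helper names are our own assumptions rather than part of the specification.

```python
import hmac
import hashlib
import secrets

def F1(key: bytes, *fields: bytes) -> bytes:
    """State-update KDF; the "F1" label domain-separates it from F2."""
    return hmac.new(key, b"F1|" + b"|".join(fields), hashlib.sha256).digest()

def F2(key: bytes, *fields: bytes) -> bytes:
    """Session-key KDF, domain-separated from F1."""
    return hmac.new(key, b"F2|" + b"|".join(fields), hashlib.sha256).digest()

# Phase I: per-period Diffie-Hellman exchange (components relayed via M).
p, g = 2**127 - 1, 3                     # toy group parameters, NOT secure
c  = secrets.randbelow(p - 2) + 2        # client's per-period secret
si = secrets.randbelow(p - 2) + 2        # storage device's per-period secret
g_c, g_si = pow(g, c, p), pow(g, si, p)  # exchanged components
shared = pow(g_si, c, p)                 # g^(c*si); equals pow(g_c, si, p)

ID_C, ID_Si, v = b"client-1", b"osd-7", b"period-2015-01"
K = F1(shared.to_bytes(16, "big"), ID_C, ID_Si, v)  # initial state K^0

# Phase II: the j-th request derives sk^{j,z} from K^{j-1}, then ratchets K.
def next_session(K_prev: bytes, j: int, sid: bytes):
    sk0 = F2(K_prev, ID_C, ID_Si, str(j).encode(), sid, b"0")
    sk1 = F2(K_prev, ID_C, ID_Si, str(j).encode(), sid, b"1")
    K_next = F1(K_prev, str(j).encode())  # one-way: K^j = F1(K^{j-1}; j)
    return sk0, sk1, K_next

sk0_1, sk1_1, K = next_session(K, 1, b"sid-1")
sk0_2, sk1_2, K = next_session(K, 2, b"sid-2")
# The current state K cannot be inverted to recover K^0 or K^1, so
# earlier session keys within the period remain hidden.
```

Note that only the Phase I agreement uses exponentiation; every subsequent session costs a handful of HMAC calls, which is the efficiency argument made above.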
It is worth noting that the shared state K_CSi^j should never be used as the session key in real communications; just like the long-term secret key, it should be kept in a safe place, since otherwise the adversary can use it to derive all the subsequent session keys within the validity period (i.e., K_CSi^j can be regarded as medium-term secret key material). This is similar to the situation in which, once the adversary compromises the long-term secret key, it can learn all the subsequent sessions.

However, we stress that knowing the state information K_CSi^j allows the adversary to compute only the subsequent session keys (i.e., sk_i^{j+1,z}, sk_i^{j+2,z}, ...) within a validity period, but not the previous session keys (i.e., sk_i^{1,z}, sk_i^{2,z}, ..., sk_i^{j,z}) within the same period. Our construction achieves this by making use of one-way hash chains constructed using the pseudo-random function F_1. Since knowing K_CSi^j does not help the adversary obtain the previous states (K_CSi^{j-1}, K_CSi^{j-2}, ..., K_CSi^0), we can prevent the adversary from obtaining the corresponding session keys. Also, since compromise of K_MSi or K_CM does not reveal the initial state K_CSi^0 established during the Diffie-Hellman key exchange, we achieve full forward secrecy.

VI. SECURITY ANALYSIS

We work in a security model that allows us to show that an adversary attacking our protocols is not able to learn any information about a session key. Our model also implies implicit authentication, that is, only the right protocol participant is able to learn or derive a session key.

A. Security Model

We now define a security model for pNFS-AKE. Let M denote the metadata server, SS = {S_1, S_2, ..., S_N} the set of storage devices, and CS = {C_1, C_2, ...} the set of clients. A party P ∈ {M} ∪ SS ∪ CS may run many instances concurrently, and we denote instance i of party P by Π_P^i.

Our adversarial model is defined via a game between an adversary A and a game simulator SIM. SIM tosses a random coin b at the beginning of the game, and b will be used later in the game. SIM then generates for each S_i ∈ SS (C_j ∈ CS, respectively) a secret key K_MSi (K_MCj, respectively) shared with M. A is allowed to make the following queries to the simulator:

SEND(P, i, m): This query allows the adversary to send a message m to an instance Π_P^i. If the message m is sent by another instance Π_P'^j with the intended receiver P, then this query models a passive attack. Otherwise, it models an active attack by the adversary. The simulator then simulates the reaction of Π_P^i upon receiving the message m, and returns to A the response (if any) that Π_P^i would generate.

CORRUPT(P): This query allows the adversary to corrupt a party P ∈ SS ∪ CS. By making this query, the adversary learns all the information held by P at the time of the corruption, including all the long-term and ephemeral secret keys. However, the adversary cannot corrupt M (but see Remark 1).

REVEAL(P, i): This query allows the adversary to learn the session key that has been generated by the instance Π_P^i (P ∈ SS ∪ CS). If the instance Π_P^i does not hold any session key, then a special symbol ⊥ is returned to the adversary.

TEST(P, i): This query can only be made to a fresh instance Π_P^i (as defined below), where P ∈ SS ∪ CS. If the instance Π_P^i holds a session key SK_P^i, then SIM does the following:
– if the coin b = 1, SIM returns SK_P^i to the adversary;
– otherwise, a random session key is drawn from the session-key space and returned to the adversary.
Otherwise, a special symbol ⊥ is returned to the adversary.

We define the partner id pid_P^i of an instance Π_P^i as the identity of the peer party recognized by Π_P^i, and sid_P^i as the unique session id belonging to Π_P^i. We say a client instance Π_C^i and a storage device instance Π_S^j are partners if pid_C^i = S, pid_S^j = C, and sid_C^i = sid_S^j.

We say an instance Π_P^i is fresh if:
– A has never made a CORRUPT query to P or pid_P^i; and
– A has never made a REVEAL query to Π_P^i or its partner.

At the end of the game, the adversary outputs a bit b' as her guess for b. The adversary's advantage in winning the game is defined as

Adv_A^pNFS(k) = |2 Pr[b' = b] − 1|.

Definition 1: We say a pNFS-AKE protocol is secure if the following conditions hold:
1) If an honest client and an honest storage device complete matching sessions, they compute the same session key.
2) For any PPT adversary A, Adv_A^pNFS(k) is a negligible function of k.

Forward Secrecy. The above security model for pNFS-AKE does not consider forward secrecy (i.e., that the corruption of a party does not endanger his/her previous communication sessions). Below we first define a weak form of forward secrecy, which we call partial forward secrecy (PFS). We follow the approach of Canetti and Krawczyk [10] by introducing a new type of query:

EXPIRE(P, v): After receiving this query, no instance of P for time period v can be activated.
In addition, the simulator erases all the state information and session keys held by the instances of party P that were activated during time period v.

Then, we redefine the freshness of an instance Π_P^i as follows:
– A makes a CORRUPT(P) query only after an EXPIRE(P, v) query, where the instance Π_P^i is activated during time period v;
– A has never made a REVEAL(P, i) query; and
– if Π_P^i has a partner instance Π_Q^j, then A also obeys the above two rules with respect to Π_Q^j; otherwise, A has never made a CORRUPT(pid_P^i) query.

The rest of the security game is the same. We define the advantage of the adversary as

Adv_A^pNFS-PFS(k) = |2 Pr[b' = b] − 1|.

We can easily extend the above definition to define full forward secrecy (FFS) by modifying the EXPIRE query as follows:

EXPIRE(P, i): Upon receiving this query, the simulator erases all the state information and the session key held by the instance Π_P^i.

The rest of the security model is the same as in the PFS game.

Remark 1. In our security model, we do not allow the adversary to corrupt the metadata server M, which holds all the long-term secret keys. However, in our forward secrecy model, we do not really enforce such a requirement. It is easy to see that if the adversary corrupts all the parties in SS ∪ CS, then the adversary has implicitly corrupted M. But we should also notice that there is no way to prevent active attacks once M is corrupted. Therefore, the adversary can corrupt all the parties (or M) only after the Test session has expired.

Remark 2. Our forward secrecy model also captures escrow-freeness. One way to define escrow-freeness is to define a new model that allows the adversary to corrupt the metadata server and learn all the long-term secret keys. However, as outlined in Remark 1, our forward secrecy model allows the adversary to obtain all the long-term secret keys under some necessary conditions. Hence, our forward secrecy model implicitly captures escrow-freeness.

B. Security Proofs

Theorem 1: The pNFS-AKE-I protocol is secure without PFS if the authenticated encryption scheme E is secure under chosen-ciphertext attacks and F is a family of pseudo-random functions.

Proof. We define a sequence of games G_i (i ≥ 0), where G_0 is the original game defined in our security model without PFS. We also define Adv_i^pNFS as the advantage of the adversary in game G_i. Then we have Adv_0^pNFS = Adv_A^pNFS(k).

In game G_1 we change the original game as follows: the simulator randomly chooses an instance Π_P^i among all the instances created in the game; if the TEST query is not performed on Π_P^i, the simulator aborts and outputs a random bit. Then we have

Adv_1^pNFS = (1/n_I) Adv_0^pNFS

where n_I denotes the number of instances created in the game. In the following games, we use C* and S* to denote the client and the storage device involved in the test session, respectively, and v* to denote the time period in which the test session is activated.

In game G_2, we change game G_1 as follows: let FORGE denote the event that A successfully forges a valid ciphertext E(K_MS*; ID_C*, ID_S*, v*, K_C*S*). If the event FORGE happens, then the simulator aborts the game and outputs a random bit. Since E is a secure authenticated encryption scheme, we have

Pr[b' = b in game G_1 | ¬FORGE] = Pr[b' = b in game G_2 | ¬FORGE]

and

|Pr[b' = b in game G_1] − Pr[b' = b in game G_2]| ≤ Pr[FORGE] ≤ Adv_E^UF-CMA(k).

Therefore, we have

Adv_1^pNFS ≤ Adv_2^pNFS + 2 Adv_E^UF-CMA(k).

In game G_3 we use a random key K' instead of the decryption of E(K_MS*; ID_C*, ID_S*, v*, K_C*S*) to simulate the game.
In the following, we show that |Adv_2^pNFS − Adv_3^pNFS| is negligible if the authenticated encryption scheme E is secure under adaptive chosen-ciphertext attacks (CCA).

We construct an adversary B in the CCA game for the authenticated encryption scheme E. B simulates the game G_2 for the adversary A as follows. B generates all the long-term keys in the system except K_MS*. B then randomly selects two keys K_0 and K_1 and obtains a challenge ciphertext CH = E(K_MS*; ID_C*, ID_S*, v*, K) from its challenger, where K is either K_0 or K_1. B then uses CH as the authentication token used between C* and S* during the time period v*, and uses K_1 as the decryption of CH to perform any related computation. For the other authentication tokens related to K_MS*, B generates them by querying its encryption oracle. Also, for any authentication token intended for S* but not equal to CH, B performs the decryption by querying its decryption oracle. Finally, if the adversary A wins the game (denote this event by WIN), B outputs 1 (i.e., B guesses K = K_1); otherwise, B outputs 0 (i.e., B guesses K = K_0).

We can see that if K = K_1, then the game simulated by B is the same as game G_2; otherwise, if K = K_0, then the game simulated by B is the same as game G_3. So we have

Adv_B^CCA(k) = |2(Pr[WIN | K = K_1] Pr[K = K_1] + Pr[¬WIN | K = K_0] Pr[K = K_0]) − 1|
             = |Pr[WIN | K = K_1] − Pr[WIN | K = K_0]|
             = (1/2)|Adv_2^pNFS − Adv_3^pNFS|

and

Adv_2^pNFS ≤ Adv_3^pNFS + 2 Adv_E^CCA(k).

In game G_4 we then replace the function F(K', ·) with a random function RF(·). Since F is a family of pseudo-random functions, if the adversary's advantage changes significantly in game G_4, we can construct a distinguisher D against F. D simulates game G_3 for A honestly, except that whenever D needs to compute F(K', x), D queries its own oracle O, which is either F(K', ·) or RF(·). At the end of the game, if A wins the game, D outputs 1; otherwise, D outputs 0.

We can see that if O = F(K', ·), A is in game G_3; otherwise, if O = RF(·), then A is in game G_4.
Therefore, we have

Adv_D^prf(k) = |Pr[D outputs 1 | O = F(K', ·)] − Pr[D outputs 1 | O = RF(·)]|
             = |Pr[WIN | O = F(K', ·)] − Pr[WIN | O = RF(·)]|
             = (1/2)|Adv_3^pNFS − Adv_4^pNFS|

and

Adv_3^pNFS ≤ Adv_4^pNFS + 2 Adv_F^prf(k).

In game G_4, we have

sk_i^0 = RF(ID_C*, ID_S*, v*, sid, 0)
sk_i^1 = RF(ID_C*, ID_S*, v*, sid, 1)

where sid is the unique session id of the test session. Now, since RF is a random function, sk_i^1 is just a random key independent of the game. Therefore, the adversary has no advantage in winning the game, i.e.,

Adv_4^pNFS = 0.

Combining everything, we have

Adv_A^pNFS(k) ≤ 2 n_I (Adv_E^UF-CMA(k) + Adv_E^CCA(k) + Adv_F^prf(k)).

Theorem 2: The pNFS-AKE-II protocol achieves partial forward secrecy if τ is a secure MAC scheme, the DDH assumption holds in the underlying group G, and F is a family of pseudo-random functions.

Proof. The proof is similar to that of Theorem 1. Below we elaborate only on the differences between the two proofs. We again define a sequence of games G_i, where G_0 is the original PFS security game.

In game G_1, we change game G_0 in the same way as in the previous proof; then we also have

Adv_1^pNFS-PFS = (1/n_I) Adv_0^pNFS-PFS

where n_I denotes the number of instances created in the game.
Let C* and S* denote the client and the storage device involved in the test session, respectively, and v* the time period in which the test session is activated.

In game G_2, we further change game G_1 as follows: let FORGE denote the event that A successfully forges a valid MAC tag τ(K_MS*; ID_C*, ID_S*, v*, g^c*, g^s*) before corrupting S*. If the event FORGE happens, then the simulator aborts the game and outputs a random bit. Then we have

Pr[b' = b in game G_1 | ¬FORGE] = Pr[b' = b in game G_2 | ¬FORGE]

and

|Pr[b' = b in game G_1] − Pr[b' = b in game G_2]| ≤ Pr[FORGE] ≤ Adv_τ^UF-CMA(k).

Therefore, we have

Adv_1^pNFS-PFS ≤ Adv_2^pNFS-PFS + 2 Adv_τ^UF-CMA(k).

In game G_3, we change game G_2 by replacing the Diffie-Hellman key g^(c*·s*) in the test session with a random element K* ∈ G. Below we show that if the adversary's advantage changes significantly in game G_3, we can construct a distinguisher B that breaks the Decisional Diffie-Hellman (DDH) assumption.

B is given a challenge (g^a, g^b, Z), in which, with equal probability, Z is either g^ab or a random element of G. B simulates game G_2 honestly by generating all the long-term secret keys for all the clients and storage devices. Then, for the time period v*, B sets g^c* = g^a and g^s* = g^b. Whenever the value of g^(c*·s*) is needed, B uses the value of Z to perform the corresponding computation. Finally, if A wins the game, B outputs 1; otherwise, B outputs 0.

Since the adversary cannot corrupt C* or S* before the time period v* has expired, if a FORGE event did not happen, then the values of the Diffie-Hellman components in the test session must be g^a and g^b. If Z = g^ab, then A is in game G_2; otherwise, if Z is a random element of G, then A is in game G_3. Therefore we have

Adv_B^DDH(k) = |Pr[B outputs 1 | Z = g^ab] − Pr[B outputs 1 | Z = g^r]|
             = |Pr[WIN | Z = g^ab] − Pr[WIN | Z = g^r]|
             = (1/2)|Adv_2^pNFS-PFS − Adv_3^pNFS-PFS|

and

Adv_2^pNFS-PFS ≤ Adv_3^pNFS-PFS + 2 Adv^DDH(k).

In game G_4, we replace the pseudo-random function F(K*, ·) with a random function RF(·).
By following the same analysis as in the previous proof, we have

Adv_3^pNFS-PFS ≤ Adv_4^pNFS-PFS + 2 Adv_F^prf(k)

and

Adv_4^pNFS-PFS = 0.

Therefore, combining everything, we have

Adv_A^pNFS-PFS(k) ≤ 2 n_I (Adv_τ^UF-CMA(k) + Adv^DDH(k) + Adv_F^prf(k)).

Theorem 3: The pNFS-AKE-III protocol achieves full forward secrecy if E is a secure authenticated encryption scheme, the DDH assumption holds in the underlying group G, and F is a family of pseudo-random functions.

Proof (Sketch). The proof is very similar to that of Theorem 2. Below we provide a sketch of the proof.

Let C* and S* denote the client and the storage device involved in the test session, respectively, and v* the time period in which the test session is activated. Without loss of generality, suppose the test session is the j-th session between C* and S* within the period v*. Since the adversary is not allowed to corrupt C* or S* before the test session has expired, by the unforgeability of E and the DDH assumption, the simulator can replace g^(c*·s*) in the time period v* with a random element K* ∈ G. Then, in the next augmented game, the simulator replaces K_C*S*^0 with a random key. Since F_1 is a secure pseudo-random function, such a replacement is indistinguishable from the adversary's viewpoint. The simulator then replaces sk^{i,z} (for z = 0, 1) and K_C*S*^i with independent random keys for all 1 ≤ i ≤ j. Once again, since F_1 and F_2 are secure pseudo-random functions, the augmented games are indistinguishable to the adversary. Finally, in the last augmented game, we can claim that the adversary has no advantage in winning the game, since a random key is returned to the adversary whether b = 0 or b = 1. This completes the sketch of the proof. □

VII. PERFORMANCE EVALUATION

A. Computational Overhead

We consider the computational overhead of w access requests over a time period v for a metadata server M, a client C, and storage devices S_i for i ∈ [1, N].
We assume that a layout λ is of the form of a MAC, and that the computational cost of authenticated symmetric encryption E is similar to that of the non-authenticated version.^10 Table I gives a comparison between Kerberos-based pNFS and our protocols in terms of the number of cryptographic operations required to execute the protocols over time period v.

^10 For example, according to the Crypto++ 5.6.0 benchmarks, AES/GCM (128-bit, 64K tables) has a speed similar to that of AES/CBC (128-bit key) [12].

TABLE I
COMPARISON IN TERMS OF CRYPTOGRAPHIC OPERATIONS FOR w ACCESS REQUESTS FROM C TO S_i VIA M OVER TIME PERIOD v, FOR ALL 1 ≤ i ≤ n AND WHERE n ≤ N.

Protocol / Operation                        M        C        all S_i   Total
Kerberos-pNFS
  Symmetric-key encryption/decryption       w(n+5)   w(2n+3)  3wn       w(6n+8)
  MAC generation/verification               wn       0        wn        2wn
pNFS-AKE-I
  Symmetric-key encryption/decryption       N+1      2wn+1    3wn       5wn+N+2
  MAC generation/verification               wn       0        wn        2wn
  Key derivation                            0        2wn      2wn       4wn
pNFS-AKE-II
  Symmetric-key encryption/decryption       N+2      2wn+2    2wn+1     4wn+N+5
  MAC generation/verification               wn+N     0        2wn       3wn+N
  Key derivation                            0        2wn      2wn       4wn
  Diffie-Hellman exponentiation             0        N+1      N+wn      2N+wn+1
pNFS-AKE-III
  Symmetric-key encryption/decryption       2N+2     2wn+2    2wn+1     4wn+2N+5
  MAC generation/verification               wn       0        wn        2wn
  Key derivation                            0        3wn+N    3wn+N     6wn+2N
  Diffie-Hellman exponentiation             0        N+1      2N        3N+1

To give a more concrete view, Table II provides estimates of the total computation times in seconds (s) for each protocol, using the Crypto++ benchmarks obtained on an Intel Core 2 1.83 GHz processor under Windows Vista in 32-bit mode [12]. We choose AES/CBC (128-bit key) for encryption, AES/GCM (128-bit, 64K tables) for authenticated encryption, HMAC(SHA-1) for MAC, and SHA-1 for key derivation. Also, Diffie-Hellman exponentiations are based on DH 1024-bit key-pair generation. Our estimation is based on a fixed message size of 1024 bytes for all cryptographic operations, and we consider the following case:
– N = 2n and w = 50 (total access requests by C within v);
– C interacts with 10^3 storage devices concurrently for each access request, i.e., n = 10^3;
– M has interacted with 10^5 clients over time period v; and
– each S_i has interacted with 10^4 clients over time period v.

Table II shows that our protocols reduce the workload of M in the existing Kerberos-based protocol by up to approximately 54%. This improves the scalability of the metadata server considerably. The total estimated computational cost of M for serving 10^5 clients is 8.02 × 10^4 s (≈ 22.3 hours) in Kerberos-based pNFS, compared with 3.68 × 10^4 s (≈ 10.2 hours) in pNFS-AKE-I and 3.86 × 10^4 s (≈ 10.7 hours) in pNFS-AKE-III. In general, one can see from Table I that the workload of M is always reduced by roughly half for any values of (w, n, N).
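The roughly-half reduction can be checked directly from the operation counts in Table I. The short sketch below, our own sanity check rather than part of the paper's evaluation, plugs the stated parameters (w = 50, n = 10^3, N = 2n) into the counts for M under Kerberos-pNFS and pNFS-AKE-I. Counting operations alone gives a reduction of about 48%; the 54% figure reported above is presumably larger because each operation type is weighted by its measured Crypto++ cost.

```python
# Per-client operation counts for M over period v, taken from Table I.
w, n = 50, 10**3          # access requests, and devices per request
N = 2 * n                 # total devices a client may access

kerberos_enc = w * (n + 5)   # symmetric enc/dec at M: w(n+5) = 50,250
kerberos_mac = w * n         # MAC operations at M: wn = 50,000
ake1_enc = N + 1             # pNFS-AKE-I: Phase I only, N+1 = 2,001
ake1_mac = w * n             # MAC cost at M is unchanged: wn = 50,000

kerberos_total = kerberos_enc + kerberos_mac   # 100,250 operations
ake1_total = ake1_enc + ake1_mac               # 52,001 operations
reduction = 1 - ake1_total / kerberos_total    # about 0.48
```

The count model also makes the asymptotics visible: M's encryption cost drops from O(wn) per client to O(N), i.e., it no longer grows with the number of access requests w.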
The scalability of our protocols from the server's perspective, in terms of supporting a large number of clients, is further illustrated in the left graph of Figure 6, where we consider each client requesting access to an average of n = 10^3 storage devices.

Moreover, the additional overhead for C (and all S_i) to achieve full forward secrecy and escrow-freeness using our techniques is minimal. The right graph of Figure 6 shows that our pNFS-AKE-III protocol has roughly similar computational overhead to Kerberos-pNFS when the number of accessed storage devices is small; the increased computational overhead for accessing 10^3 storage devices in parallel is only roughly 1/500 of a second compared to that of Kerberos-pNFS, a very reasonable trade-off between efficiency and security. The small increase in overhead is partly due to the fact that some of our cryptographic cost is amortized over a time period v (recall that for each access request at time t, the client runs only Phase II of the protocol).

On the other hand, we note that the significantly higher computational overhead incurred by S_i in pNFS-AKE-II is largely due to the cost of Diffie-Hellman exponentiations. This is a space-computation trade-off, as explained in Section V-B (see Section VII-C for further discussion of key storage). Nevertheless, 256 s is the aggregate computation time for 10^3 storage devices over time period v, and thus the average computation time for a single storage device is still reasonably small, i.e., less than 1/3 of a second over time period v. Moreover, we can reduce the computational cost of S_i to roughly that of pNFS-AKE-III if C pre-distributes its g^c value to all relevant S_i, so that they can pre-compute the g^(c·si) value for each time period v.

TABLE II
COMPARISON IN TERMS OF COMPUTATION TIMES IN SECONDS (S) OVER TIME PERIOD v BETWEEN KERBEROS-PNFS AND OUR PROTOCOLS. HERE FFS DENOTES FULL FORWARD SECRECY, WHILE EF DENOTES ESCROW-FREENESS.

Protocol        FFS   EF    M            C      S_i
Kerberos-pNFS               8.02 × 10^4  0.90   17.00
pNFS-AKE-I                  3.68 × 10^4  1.50   23.00
pNFS-AKE-II           ✓     3.82 × 10^4  2.40   256.00
pNFS-AKE-III    ✓     ✓     3.86 × 10^4  2.71   39.60

B. Communication Overhead

Assuming fresh session keys are used to secure communications between the client and multiple storage devices, all our protocols clearly have reduced bandwidth requirements. This is because, during each access request, the client does not need to fetch the required authentication token set from M. Hence, the reduction in bandwidth consumption is approximately the size of n authentication tokens.

Fig. 6. Comparison in terms of computation times for M (on the left, in seconds against the number of clients) and for C (on the right, in milliseconds against the number of storage devices) at a specific time t.

C. Key Storage

We note that the key storage requirements of Kerberos-pNFS and all our described protocols are roughly similar from the client's perspective. For each access request, the client needs to store N or N + 1 key materials (either in the form of symmetric keys or Diffie-Hellman components) in its internal state.

However, the key storage requirement of each storage device is higher in pNFS-AKE-III, since the storage device has to store some key material for each client in its internal state. This is in contrast to Kerberos-pNFS, pNFS-AKE-I and pNFS-AKE-II, in which storage devices are not required to maintain any client key information.

VIII. OTHER RELATED WORK

Some of the earliest work on securing large-scale distributed file systems, for example [24], [22], already employed Kerberos for performing authentication and enforcing access control. Kerberos, being based mostly on symmetric-key techniques in its early deployment, was generally believed to be more suitable for rather closed, well-connected distributed environments.

On the other hand, data grids and file systems such as OceanStore [27], LegionFS [54] and FARSITE [3] make use of public-key cryptographic techniques and a public key infrastructure (PKI) to perform cross-domain user authentication. Independently, SFS [36], also based on public-key cryptographic techniques, was designed to enable inter-operability of different key management schemes. Each user of these systems is assumed to possess a certified public/private key pair. However, these systems were not designed specifically with scalability and parallel access in mind.

With the increasing deployment of highly distributed and network-attached storage systems, subsequent work, such as [4], [55], [19], focused on scalable security. Nevertheless, these proposals assumed that a metadata server shares a group secret key with each distributed storage device. The group key is used to produce capabilities in the form of message authentication codes. However, compromise of the metadata server or any storage device allows the adversary to impersonate the server to any other entities in the file system. This issue can be alleviated by requiring that each storage device share a different secret key with the metadata server.
Nevertheless, such an approach restricts a capability to authorising I/O on only a single device, rather than on larger groups of blocks or objects which may reside on multiple storage devices. More recent proposals, which adopted a hybrid symmetric key and asymmetric key method, allow a capability to span any number of storage devices, while maintaining a reasonable efficiency-security ratio [40], [29], [30], [31]. For example, Maat [30] encompasses a set of protocols that facilitate (i) authenticated key establishment between clients and storage devices, (ii) capability issuance and renewal, and (iii) delegation between two clients. The authenticated key establishment protocol allows a client to establish and re-use a shared (session) key with a storage device. However, Maat and other recent proposals do not come with rigorous security analysis.

As with NFS, authentication in the Hadoop Distributed File System (HDFS) is also based on Kerberos via the GSS-API. Each HDFS client obtains a TGT that lasts for 10 hours and is renewable for 7 days by default, and access control is based on Unix-style ACLs. However, HDFS makes use of the Simple Authentication and Security Layer (SASL) [38], a framework for providing a structured interface between connection-oriented protocols and replaceable mechanisms. (SASL's design is intended to allow new protocols to reuse existing mechanisms without requiring redesign of the mechanisms, and to allow existing protocols to make use of new mechanisms without redesign of the protocols [38].) In order to improve the performance of the KDC, the developers of HDFS chose to use a number of tokens for communication secured with an RPC digest scheme. The Hadoop security design makes use of Delegation Tokens, Job Tokens, and Block Access Tokens. Each of these tokens is similar in structure and based on HMAC-SHA1. Delegation Tokens are used by clients to communicate with the Name Node in order to gain access to HDFS data, while Block Access Tokens are used to secure communication between the Name Node and Data Nodes and to enforce HDFS filesystem permissions. The Job Token, on the other hand, is used to secure communication between the MapReduce engine's Task Tracker and individual tasks. Note that the RPC digest scheme uses symmetric encryption and, depending upon the token type, the shared key may be distributed to hundreds or even thousands of hosts [41].

IX. CONCLUSIONS

We proposed three authenticated key exchange protocols for parallel Network File System (pNFS). Our protocols offer three appealing advantages over the existing Kerberos-based pNFS protocol. First, the metadata server executing our protocols has a much lower workload than that of the Kerberos-based approach. Second, two of our protocols provide forward secrecy: one is partially forward secure (with respect to multiple sessions within a time period), while the other is fully forward secure (with respect to a session). Third, we have designed a protocol which not only provides forward secrecy, but is also escrow-free.

1045-9219 (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information. This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TPDS.2015.2388447, IEEE Transactions on Parallel and Distributed Systems.
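As noted above, Hadoop's Delegation, Job, and Block Access Tokens all carry an HMAC-SHA1 authenticator. The sketch below shows only how such an authenticator can be computed with the standard javax.crypto API; the key and payload are illustrative, and a real token carries structured fields rather than a plain string.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TokenMac {
    // Compute a hex-encoded HMAC-SHA1 authenticator over a token's bytes,
    // in the spirit of Hadoop's Delegation/Job/Block Access Tokens.
    public static String hmacSha1Hex(byte[] key, byte[] tokenBytes) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        byte[] digest = mac.doFinal(tokenBytes);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Illustrative key and payload only.
        System.out.println(hmacSha1Hex("key".getBytes("UTF-8"),
                "The quick brown fox jumps over the lazy dog".getBytes("UTF-8")));
    }
}
```

Because HMAC uses a shared symmetric key, any host holding the key can both verify and forge authenticators, which is why the wide key distribution noted above matters.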

Aggregated-Proof Based Hierarchical Authentication Scheme for the Internet of Things


ABSTRACT:

The Internet of Things (IoT) is becoming an attractive system paradigm for realizing interconnections across the physical, cyber, and social spaces. During the interactions among ubiquitous things, security issues become noteworthy, and it is important to establish enhanced solutions for security protection. In this work, we focus on an existing U2IoT architecture (i.e., unit IoT and ubiquitous IoT) to design an aggregated-proof based hierarchical authentication scheme (APHA) for the layered networks. Concretely, 1) aggregated-proofs are established for multiple targets to achieve anonymous data transmission over both the backward and forward channels; 2) directed path descriptors, homomorphism functions, and Chebyshev chaotic maps are jointly applied for mutual authentication; 3) different access authorities are assigned to achieve hierarchical access control. Meanwhile, a BAN logic formal analysis is performed to prove that the proposed APHA has no obvious security defects, and that it is potentially applicable to the U2IoT architecture and other IoT applications.

INTRODUCTION:

The Internet of Things (IoT) is emerging as an attractive system paradigm that integrates physical perceptions, cyber interactions, and social correlations, in which physical objects, cyber entities, and social attributes are required to achieve interconnections with embedded intelligence. During these interconnections, the IoT suffers from severe security challenges, and there are potential vulnerabilities due to the complicated networks spanning heterogeneous targets, sensors, and back-end management systems. It therefore becomes noteworthy to address the security issues for the ubiquitous things in the IoT.

Recent studies have addressed the general IoT, including system models, service platforms, infrastructure architectures, and standardization. In particular, a human-society inspired U2IoT architecture (i.e., unit IoT and ubiquitous IoT) has been proposed to achieve physical-cyber-social convergence. In the U2IoT architecture, the mankind neural system and the social organization framework are introduced to establish the single-application and multi-application IoT frameworks.

Multiple unit IoTs compose a local IoT within a region, or an industrial IoT for an industry. The local IoTs and industrial IoTs are covered within a national IoT, and jointly form the ubiquitous IoT. Towards IoT security, related works mainly refer to security architectures and recommended countermeasures, secure communication and networking mechanisms, cryptography algorithms, and application security solutions.

Current research mainly refers to three aspects: system security, network security, and application security.

_ System security mainly considers a whole IoT system to identify the unique security and privacy challenges, to design systemic security frameworks, and to provide security measures and guidelines.

_ Network security mainly focuses on wireless communication networks (e.g., wireless sensor networks (WSN), radio frequency identification (RFID), and the Internet) to design key distribution algorithms, authentication protocols, advanced signature algorithms, access control mechanisms, and secure routing protocols. In particular, authentication protocols are widely used to address security and privacy issues in the IoT, and should be designed with the things' heterogeneity and hierarchy in mind.

_ Application security serves IoT applications (e.g., multimedia, smart home, and smart grid), and resolves practical problems with particular scenario requirements.

Towards the U2IoT architecture, a reasonable authentication scheme should satisfy the following requirements. 1) Data CIA (i.e., confidentiality, integrity, and availability): The exchanged messages between any two legal entities should be protected against illegal access and modification. The communication channels should be reliable for the legal entities. 2) Hierarchical access control: Diverse access authorities are assigned to different entities to provide hierarchical interactions.

An unauthorised entity cannot access data exceeding its permission. 3) Forward security: Attackers cannot correlate any two communication sessions, and also cannot derive the previous interrogations from the ongoing session. 4) Mutual authentication: The untrusted entities should pass each other's verification so that only a legal entity can access the networks for data acquisition. 5) Privacy preservation: The sensors cannot correlate or disclose an individual target's private information (e.g., location). Considering the above security requirements, we design an aggregated-proof based hierarchical authentication scheme (APHA) for the unit IoT.

EXISTING SYSTEM:

If the existing WSN is to be completely integrated into the Internet as part of the Internet of Things (IoT), it is necessary to consider various security challenges, such as the creation of a secure channel between an Internet host and a sensor node. In order to create such a channel, key management mechanisms are needed that allow two remote devices to negotiate certain security credentials (e.g., secret keys) that will be used to protect the information flow.

Existing mechanisms include public key cryptography and pre-shared keys for sensor nodes in the IoT context, as well as link-layer oriented key management systems (KMS) whose original purpose is to provide shared keys for sensor nodes belonging to the same WSN. These mechanisms allow two remote devices to negotiate certain security credentials (e.g., shared keys, Blom key pairs, and polynomial shares). The authors analyzed the applicability of existing mechanisms, including public key infrastructure (PKI) and pre-shared keys, for sensor nodes in IoT contexts.
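The pre-shared key mechanisms above ultimately let two remote devices turn a long-term secret into per-session credentials. A minimal, hypothetical sketch in Java follows; the method name and the HMAC-based derivation are illustrative assumptions, not the construction of any surveyed KMS.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class PskSession {
    // Derive a per-session key from a long-term pre-shared key and the two
    // parties' nonces. The KDF choice (HMAC-SHA256) is illustrative only.
    public static byte[] deriveSessionKey(byte[] psk, byte[] nonceA, byte[] nonceB) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(psk, "HmacSHA256"));
        mac.update(nonceA);
        mac.update(nonceB);
        return mac.doFinal(); // 32 bytes of session key material
    }
}
```

Fresh nonces from both sides make each derived key session-specific even though the underlying pre-shared key never changes.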

DISADVANTAGES:

A smart community model for IoT applications, i.e., a cyber-physical system with networked smart homes, was introduced with security considerations. Filtering false network traffic and avoiding unreliable home gateways are suggested as safeguards. Meanwhile, security challenges are discussed, including cooperative authentication, unreliable node detection, target tracking, and intrusion detection. A group of individuals hacked into federal sites and released confidential information to the public; the government is supposed to have the highest level of security, yet their systems were easily breached. Therefore, if all of our information is stored on the Internet, people could hack into it and find out everything about individuals' lives. Also, companies could misuse the information they are given access to; this is a common mishap that occurs within companies all the time.

PROPOSED SYSTEM:

The proposed scheme realizes data confidentiality and data integrity by means of the directed path descriptor and homomorphism-based Chebyshev chaotic maps, establishes trust relationships via lightweight mechanisms, and applies dynamically hashed values to achieve session freshness. This indicates that the APHA is suitable for the U2IoT architecture.

In this work, the main purpose is to provide a bottom-up safeguard for the U2IoT architecture to realize secure interactions. Towards the U2IoT architecture, a reasonable authentication scheme should satisfy the following requirements.

1) Data CIA (i.e., confidentiality, integrity, and availability): The exchanged messages between any two legal entities should be protected against illegal access and modification. The communication channels should be reliable for the legal entities.

2) Hierarchical access control: Diverse access authorities are assigned to different entities to provide hierarchical interactions. An unauthorised entity cannot access data exceeding its permission.

3) Forward security: Attackers cannot correlate any two communication sessions, and also cannot derive the previous interrogations according to the ongoing session.

4) Mutual authentication: The untrusted entities should pass each other’s verification so that only the legal entity can access the networks for data acquisition.

5) Privacy preservation: The sensors cannot correlate or disclose an individual target's private information (e.g., location). Considering the above security requirements, we design an aggregated-proof based hierarchical authentication scheme (APHA) for the ubiquitous IoT.

ADVANTAGES:

Aggregated-proofs are established by wrapping multiple targets' messages for anonymous data transmission, ensuring that individual information cannot be revealed over either the backward or forward communication channels.
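To make the "wrapping" idea concrete, here is a deliberately simplified aggregated-proof sketch that XORs per-target digests into one value, so no single target's message is exposed on its own. This XOR-of-hashes construction is an assumed stand-in for illustration; the APHA's actual aggregation uses homomorphism functions and is more involved.

```java
import java.security.MessageDigest;

public class AggregatedProof {
    // Fold each target's message digest into one order-independent aggregate,
    // so individual messages are not exposed in transit.
    public static byte[] aggregate(byte[][] messages) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] agg = new byte[32];
        for (byte[] m : messages) {
            byte[] h = md.digest(m); // digest() hashes the input and resets the instance
            for (int i = 0; i < agg.length; i++) agg[i] ^= h[i];
        }
        return agg;
    }
}
```

Because XOR is commutative, the aggregate does not leak the order in which targets contributed, which matches the anonymity goal stated above.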

Directed path descriptors are defined based on homomorphism functions to establish correlation during the cross-layer interactions. Chebyshev chaotic maps are applied to describe the mapping relationships between the shared secrets and the path descriptors for mutual authentication.
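The Chebyshev maps mentioned here support mutual authentication because of their semigroup property, T_r(T_s(x)) = T_s(T_r(x)) = T_{rs}(x): two parties applying their secret degrees in either order reach the same value. The sketch below demonstrates only this algebraic property over the reals with toy parameters; it is not the APHA's construction, and practical schemes use enhanced maps over finite fields to resist known attacks on the real-valued variant.

```java
public class Chebyshev {
    // Chebyshev polynomial via the recurrence
    // T_0(x) = 1, T_1(x) = x, T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x).
    public static double T(int n, double x) {
        if (n == 0) return 1.0;
        double prev = 1.0, cur = x;
        for (int i = 2; i <= n; i++) {
            double next = 2.0 * x * cur - prev;
            prev = cur;
            cur = next;
        }
        return cur;
    }

    public static void main(String[] args) {
        double x = 0.5;   // public seed
        int r = 3, s = 2; // each party's secret degree (toy values)
        double shared1 = T(r, T(s, x)); // one side's computation
        double shared2 = T(s, T(r, x)); // other side's computation
        // Semigroup property: both equal T(r*s, x), so the parties agree.
        System.out.println(shared1 == shared2);
    }
}
```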

Diverse access authorities on the group identifiers and pseudonyms are assigned to different entities to achieve hierarchical access control across the layered networks.
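One way to picture such hierarchical authorities is a simple ordered-level check; the level names below are hypothetical illustrations for the layered networks, not identifiers taken from the APHA scheme.

```java
public class HierarchicalAccess {
    // Hypothetical authority levels, lowest to highest.
    public enum Level { TARGET, SENSOR, UNIT_MANAGER, INDUSTRIAL_MANAGER, NATIONAL_MANAGER }

    // An entity may only access resources at or below its own authority level,
    // so an unauthorised entity cannot read data exceeding its permission.
    public static boolean canAccess(Level entity, Level resource) {
        return entity.ordinal() >= resource.ordinal();
    }
}
```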

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENT:

  • Processor                          –    Pentium IV
  • Speed                              –    1.1 GHz
  • RAM                                –    256 MB (min)
  • Hard Disk                          –    20 GB
  • Floppy Drive                       –    1.44 MB
  • Keyboard                           –    Standard Windows keyboard
  • Mouse                              –    Two- or three-button mouse
  • Monitor                            –    SVGA

 

SOFTWARE REQUIREMENTS:

  • Operating System                :           Windows XP or Windows 7
  • Front End                       :           Java JDK 1.7
  • Back End                        :           MySQL Server
  • Server                          :           Apache Tomcat Server
  • Script                          :           JSP
  • Documentation                   :           MS Office 2007


ARCHITECTURE DIAGRAM:


DATA FLOW DIAGRAM:

UML DIAGRAMS:

USECASE DIAGRAM:

CLASS DIAGRAM:

SEQUENCE DIAGRAM:

ACTIVITY DIAGRAM:

MODULES:

NETWORK SECURITY MODULE:

U2IOT ARCHITECTURE SYSTEM:

PROOF BASED DATA INTEGRITY:

AUTHENTICATION SCHEME (APHA):

MODULES DESCRIPTION:

NETWORK SECURITY MODULE:

Network-accessible resources may be deployed in a network as surveillance and early-warning tools, since such decoy resources are not normally accessed for legitimate purposes. Techniques used by attackers that attempt to compromise these decoy resources are studied during and after an attack in order to keep an eye on new exploitation techniques. Such analysis may be used to further tighten the security of the actual network being protected. Data forwarding can also direct an attacker's attention away from legitimate servers: the decoy encourages attackers to spend their time and energy on it while distracting their attention from the data on the real server. Similar to a decoy server, such a node is a network resource set up with intentional vulnerabilities; its purpose is also to invite attacks so that the attacker's methods can be studied and that information can be used to increase network security. Prior work considered the IP-based IoT, discussed the applicability and limitations of current Internet protocols, and presented a thing-lifecycle based security architecture for IP networks.

The security architecture, node security model, and security bootstrapping are considered in the security solution. Moreover, the authors pointed out that security protocols should fully account for resource-constrained, heterogeneous communication environments. A security architecture based on the host identity protocol (HIP) and multimedia Internet keying protocols has been applied to enhance secure network association and key management; a mobile RFID security protocol was applied to guard mobile RFID networks, and a trusted third party (TTP) based key management protocol was introduced to construct a secure session key. Other work focused on the integration of RFID tags into IP networks and proposed a HIP address translation scheme: it provides address translation services between tag identifiers and IP addresses, presenting a prototype of cross-layer IoT networks. Trust-based mechanisms (e.g., cryptographic and authentication mechanisms) have been studied in WSNs, and Lithe, an integration of datagram transport layer security (DTLS) and the constrained application protocol (CoAP), was presented to protect the transmission of sensitive information in the IoT.

U2IOT ARCHITECTURE SYSTEM:

Regarding IoT architectures and models, Unit and Ubiquitous Internet of Things introduces essential IoT concepts from the perspectives of mapping and interaction between the physical world and the cyber world. It addresses key issues such as strategy and education, particularly around unit and ubiquitous IoT technologies. Supplying a new perspective on IoT, the book covers emerging trends and presents the latest progress in the field. It also:

  • Outlines a fundamental architecture for future IoT together with the IoT layered model
  • Describes various topological structures, existence forms, and corresponding logical relationships
  • Establishes an IoT technology system based on the knowledge of IoT scientific problems
  • Provides an overview of the core technologies, including basic connotation, development status, and open challenges

Towards the U2IoT architecture, a reasonable authentication scheme should satisfy the following requirements. 1) Data CIA (i.e., confidentiality, integrity, and availability): The exchanged messages between any two legal entities should be protected against illegal access and modification. The communication channels should be reliable for the legal entities. 2) Hierarchical access control: Diverse access authorities are assigned to different entities to provide hierarchical interactions. An unauthorised entity cannot access data exceeding its permission. 3) Forward security: Attackers cannot correlate any two communication sessions, and also cannot derive the previous interrogations from the ongoing session. 4) Mutual authentication: The untrusted entities should pass each other's verification so that only a legal entity can access the networks for data acquisition. 5) Privacy preservation: The sensors cannot correlate or disclose an individual target's private information (e.g., location).

PROOF BASED DATA INTEGRITY:

The pseudo-random numbers are generated as session-sensitive operators to provide session freshness and randomization. Additionally, the identity-related values (e.g., identity flags, group identifiers, and pseudonyms) are dynamically updated during each session. Such variables are applied to obtain the authentication operators in the aggregated-proofs, and other intermediate variables. The transmitted messages are mainly computed from the random numbers, which means the exchanged messages can be regarded as dynamic variables with forward unlinkability: an attacker cannot correlate the ongoing session with former sessions in the open channels. The BAN logic is applied to analyze the design correctness for the security proof; it is a rigorous evaluation method for detecting subtle defects in an authentication scheme. The formal analysis focuses on belief and freshness, involving the following steps: message formalization, declaration of initial assumptions, declaration of anticipated goals, and logic verification in the BAN logic. Separately, an attribute-based access control model based on bilinear mappings realizes anonymous access and minimizes the number of messages exchanged in the open channels.
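The dynamically hashed update of identity-related values can be sketched as a one-way pseudonym chain. This is a simplified illustration under an assumed SHA-256 hash, not the scheme's exact update rule.

```java
import java.security.MessageDigest;

public class SessionFreshness {
    // One-way pseudonym update: p_{i+1} = H(p_i || nonce). Because H is
    // one-way, an attacker holding p_{i+1} cannot recover p_i, so past
    // sessions remain unlinkable to the ongoing one.
    public static byte[] nextPseudonym(byte[] pseudonym, byte[] nonce) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(pseudonym);
        md.update(nonce);
        return md.digest();
    }
}
```

A fresh nonce per session also ensures two updates from the same pseudonym diverge, giving the randomization described above.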

Related work proposed a fuzzy-reputation based trust management model (TRM-IoT) to enforce the entities' cooperation and interconnection, and an anonymous authentication protocol that applies pseudonyms and a threshold secret sharing mechanism to achieve a trade-off between anonymity and certification. A mutual authentication scheme has been designed based on feature extraction, the secure hash algorithm (SHA), and elliptic curve cryptography (ECC); therein, an asymmetric authentication scheme is established without compromising computation cost or communication overhead. Cyber-infrastructure security in the smart grid has also been analyzed: a layered security scheme was established to evaluate security risks for power applications. The authors highlighted power generation, transmission, and distribution control and security, and introduced encryption, authentication, and access control to achieve secure communications. Furthermore, digital forensics and security incident and event management are applied for management, and cyber-security evaluation and intrusion tolerance are also considered.

AUTHENTICATION SCHEME (APHA):

We design an aggregated-proof based hierarchical authentication scheme (APHA) for the unit IoT and the ubiquitous IoT, respectively. The main contributions are as follows: 1) aggregated-proofs are established by wrapping multiple targets' messages for anonymous data transmission, ensuring that individual information cannot be revealed over either the backward or forward communication channels; 2) directed path descriptors are defined based on homomorphism functions to establish correlations during the cross-layer interactions, and Chebyshev chaotic maps are applied to describe the mapping relationships between the shared secrets and the path descriptors for mutual authentication; 3) diverse access authorities on the group identifiers and pseudonyms are assigned to different entities to achieve hierarchical access control across the layered networks. In the APHA, an entity believes that: 1) the shared secrets and keys are obtained only by the assigned entities, 2) the pseudo-random numbers, identity flags, pseudonyms, and directed path descriptors are fresh, and 3) the trusted entity has jurisdiction over the entitled values. The initial assumptions, including initial possessions and entity abilities, are obtained as follows:

 

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are 

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:     

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, and the expenditures must be justified. The developed system was well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

 

5.1.2 TECHNICAL FEASIBILITY   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system has modest requirements, as only minimal or no changes are required to implement it.

5.1.3 SOCIAL FEASIBILITY:  

This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system is working according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes the logical assumption that if all the parts of the system are correct, the overall goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.

This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best programs are worthless if they do not produce correct outputs.

5.2.1 UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

5.2.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for various peers in a distributed network framework as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.


5.2.3 NON-FUNCTIONAL TESTING:

Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.2.4 LOAD TESTING:

An important tool for implementing system tests is a Load generator. A Load generator is essential for testing quality requirements such as performance and stress. A load can be a real load, that is, the system can be put under test to real usage by having actual telephone users connected to it. They will generate test input data for system test.

Description: It is necessary to ascertain that the application behaves correctly under load when a 'Server busy' response is received.
Expected result: Should designate another active node as a Server.


5.2.5 PERFORMANCE TESTING:

Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time, and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values and produce accurate results in the expected time.


5.2.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms a part of the software quality control effort.

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.


5.2.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

Description: Checking that the user identification is authenticated.
Expected result: In case of failure, it should not be connected to the framework.

Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.


5.2.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test-case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.
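White-box cases like "execute all loops at their boundaries" can be made concrete with a tiny example; the function and values below are illustrative only, not part of the project's modules.

```java
public class SumLoop {
    // Sum of 1..n; the loop body runs zero times when n <= 0 (lower boundary).
    public static int sum(int n) {
        int s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }
}
```

Boundary-driven cases would then cover n = 0 (loop not entered), n = 1 (exactly one iteration), and a value within operational bounds such as n = 5.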


5.2.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors with a focus on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or the code. The contents of the box are hidden, and the stimulated software should produce the desired results.

Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.

Description: To check for interface errors.
Expected result: The entire interface must function normally.

Description: To check for errors in data structures or external database access.
Expected result: Database update and retrieval must be performed correctly.

Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.

All the above system testing strategies are carried out, as the development, documentation, and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE DESCRIPTION:

 

6.1 JAVA TECHNOLOGY:

Java technology is both a programming language and a platform.

 

The Java Programming Language

 

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

  • Simple
    • Architecture neutral
    • Object oriented
    • Portable
    • Distributed     
    • High performance
    • Interpreted     
    • Multithreaded
    • Robust
    • Dynamic
    • Secure     

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, you first translate a program into an intermediate language called Java byte codes: the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.


You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.


6.2 THE JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

  • The Java Virtual Machine (Java VM)
  • The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.


Native code is code that, once compiled, runs only on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

6.3 WHAT CAN JAVA TECHNOLOGY DO?

The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.

A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.

How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:

  • The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
  • Applets: The set of conventions used by applets.
  • Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
  • Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
  • Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
  • Software components: Known as JavaBeans™, these can plug into existing component architectures.
  • Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
  • Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational databases.
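A small, hedged illustration of a few of the essentials named above (the class and method names below are our own, not part of the Java API): strings, data structures, numbers, and networking-related URL parsing from the standard packages.

```java
// ApiEssentialsDemo.java -- a brief tour of "the essentials" and Networking.
// Class and method names here are our own illustrative choices.
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

public class ApiEssentialsDemo {

    // Strings and data structures: collect words at least minLen long.
    public static List<String> longWords(String sentence, int minLen) {
        List<String> result = new ArrayList<>();
        for (String w : sentence.split("\\s+")) {
            if (w.length() >= minLen) {
                result.add(w);
            }
        }
        return result;
    }

    // Numbers: parse and sum a comma-separated list of integers.
    public static int sumCsv(String csv) {
        int sum = 0;
        for (String part : csv.split(",")) {
            sum += Integer.parseInt(part.trim());
        }
        return sum;
    }

    // Networking: extract the host component from a URL string.
    public static String hostOf(String url) {
        return URI.create(url).getHost();
    }

    public static void main(String[] args) {
        System.out.println(longWords("the quick brown fox", 4));
        System.out.println(sumCsv("1, 2, 3"));
        System.out.println(hostOf("http://example.com/index.html"));
    }
}
```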

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.

 

6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?

We can’t promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and to require less effort than other languages. We believe that Java technology will help you do the following:

  • Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
  • Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
  • Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
  • Develop programs more quickly: Your development time may be as little as half of what the same program would take in C++, because you write fewer lines of code and the language itself is simpler than C++.
  • Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure Java™ Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
  • Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
  • Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.

 

6.5 ODBC:

 

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program, and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

6.6 JDBC:

In an effort to set an independent database standard API for Java; Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.
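As a hedged sketch of the canonical JDBC call pattern (the JDBC URL `jdbc:example://localhost/testdb` and the `employee` table are hypothetical, and no vendor driver is assumed to be on the classpath), the code below shows the vendor-independent shape of a query; without a registered driver, `DriverManager` simply reports that no suitable driver was found:

```java
// JdbcSketch.java -- the canonical JDBC call pattern. The URL and table
// name are hypothetical; a real driver JAR must be on the classpath for
// the connection to succeed. The same code works for any vendor's driver.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcSketch {

    // Runs a query and returns the names found, or a status string.
    public static String queryNames(String url, String user, String password) {
        try (Connection con = DriverManager.getConnection(url, user, password);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM employee")) {
            StringBuilder names = new StringBuilder();
            while (rs.next()) {
                names.append(rs.getString("name")).append(' ');
            }
            return names.toString().trim();
        } catch (SQLException e) {
            // Without a vendor driver registered, DriverManager cannot connect.
            return "no driver: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(queryNames("jdbc:example://localhost/testdb",
                                      "user", "pw"));
    }
}
```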

 

6.7 JDBC Goals:

Few software packages are designed without goals in mind. JDBC is no exception; its many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The eight design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers felt that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows more error checking to be done at compile time, so fewer errors appear at runtime.

Keep the common cases simple

Because, more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.

Finally, we decided to proceed with the implementation using Java Networking.

For dynamically updating the cache table, we use an MS Access database.


6.8 NETWORKING TCP/IP STACK:

The TCP/IP stack is shorter than the OSI stack.

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams:

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP:

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to support a client/server model, as described later.
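A minimal sketch of UDP's connectionless, port-addressed delivery, using the standard `DatagramSocket` class on the loopback interface (the class and method names are our own illustrative choices):

```java
// UdpDemo.java -- UDP adds port numbers and a content checksum on top of IP.
// Here one datagram is sent to ourselves over the loopback interface.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpDemo {

    // Sends msg as one datagram to a local receiver and returns what arrived.
    public static String loopback(String msg) {
        try (DatagramSocket receiver = new DatagramSocket(0);  // any free port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] out = msg.getBytes(StandardCharsets.UTF_8);
            // Connectionless: the destination address and port travel with
            // each packet; no connection is ever established.
            sender.send(new DatagramPacket(out, out.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] buf = new byte[1024];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            receiver.receive(reply);       // blocks until the datagram arrives
            return new String(reply.getData(), 0, reply.getLength(),
                    StandardCharsets.UTF_8);
        } catch (Exception e) {
            return "error: " + e;
        }
    }

    public static void main(String[] args) {
        System.out.println(loopback("ping"));
    }
}
```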

TCP:

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme so that machines can be located. Each machine is assigned a 32-bit integer known as its IP address.

Network address:

Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class B uses 16-bit network addressing, Class C uses 24-bit network addressing, and Class D addresses are reserved for multicast.

Subnet address:

Internally, the UNIX network is divided into subnetworks. Building 11 is currently on one subnetwork and uses 10-bit addressing, allowing 1024 different hosts.

Host address:

Eight bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total address:

The 32-bit address is usually written as four integers separated by dots.
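The dotted-quad notation and the classful rule above can be sketched in a few lines of Java (class and method names are our own illustrative choices):

```java
// Ipv4Demo.java -- the 32-bit address as four dot-separated integers,
// plus the classful rule: the first octet's value determines the class.
public class Ipv4Demo {

    // Formats a 32-bit address as the usual dotted-quad string.
    public static String toDottedQuad(long addr) {
        return ((addr >> 24) & 0xFF) + "." + ((addr >> 16) & 0xFF) + "."
             + ((addr >> 8) & 0xFF) + "." + (addr & 0xFF);
    }

    // Classful addressing: the leading bits of the first octet decide.
    public static char classOf(long addr) {
        int first = (int) ((addr >> 24) & 0xFF);
        if (first < 128) return 'A';   // leading bit  0
        if (first < 192) return 'B';   // leading bits 10
        if (first < 224) return 'C';   // leading bits 110
        return 'D';                    // leading bits 111x (multicast and up)
    }

    public static void main(String[] args) {
        long addr = 0xC0A80001L;       // 192.168.0.1
        System.out.println(toDottedQuad(addr) + " is class " + classOf(addr));
    }
}
```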

Port addresses

A service exists on a host and is identified by its port, a 16-bit number. To send a message to a server, you send it to the port for that service on the host on which it is running. This is not location transparency! Certain of these ports are “well known”.

Sockets:

A socket is a data structure maintained by the system to handle network connections. A socket is created using the socket call. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);

Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
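Since the implementation proceeds using Java Networking, the same two-ends-of-a-pipe idea looks like this in Java, with `ServerSocket` and `Socket` taking the place of the C `socket` call (a minimal sketch; the class and method names are our own):

```java
// TcpDemo.java -- two sockets as the two ends of a pipe: a ServerSocket
// accepts, a client Socket connects, and one line travels through the pipe.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpDemo {

    // Starts a one-shot echo server, connects as a client, returns the echo.
    public static String echo(String msg) {
        try (ServerSocket server = new ServerSocket(0)) {      // any free port
            Thread serverThread = new Thread(() -> {
                try (Socket peer = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(peer.getInputStream()));
                     PrintWriter out =
                             new PrintWriter(peer.getOutputStream(), true)) {
                    out.println(in.readLine());                // echo one line
                } catch (Exception ignored) {
                }
            });
            serverThread.start();
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out =
                         new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(msg);             // write into one end of the pipe
                return in.readLine();         // read the reply from the other
            }
        } catch (Exception e) {
            return "error: " + e;
        }
    }

    public static void main(String[] args) {
        System.out.println(echo("hello over TCP"));
    }
}
```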

6.9 JFREE CHART:

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:

  • A consistent and well-documented API, supporting a wide range of chart types;
  • A flexible design that is easy to extend, and targets both server-side and client-side applications;
  • Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG).

JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.

 

6.9.1. Map Visualizations:

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: Sourcing freely redistributable vector outlines for the countries of the world, states/provinces in particular countries (USA in particular, but also other areas);

Creating an appropriate dataset interface (plus default implementation), a renderer, and integrating this with the existing XYPlot class in JFreeChart; testing, documenting, testing some more, documenting some more.

6.9.2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.

6.9.3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.

 

6.9.4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

CHAPTER 7

7.0 APPENDIX

7.1 SAMPLE SCREEN SHOTS:

7.2 SAMPLE SOURCE CODE:

CONCLUSION AND FUTURE WORK:

In this paper, we have proposed an aggregated-proof based hierarchical authentication scheme (APHA) for the U2IoT architecture. In the APHA, two sub-protocols are designed for the unit IoT and the ubiquitous IoT, respectively, to provide bottom-up security protection. The proposed scheme realizes data confidentiality and data integrity through the directed path descriptor and homomorphism-based Chebyshev chaotic maps, establishes trust relationships via lightweight mechanisms, and applies dynamically hashed values to achieve session freshness. This indicates that the APHA is suitable for the U2IoT architecture.

A Time Efficient Approach for Detecting Errors in Big Sensor Data on Cloud

1.1 ABSTRACT:

Big sensor data is prevalent in both industry and scientific research applications, where data is generated with such high volume and velocity that it is difficult to process using on-hand database management tools or traditional data processing applications. Cloud computing provides a promising platform for addressing this challenge, as it provides a flexible stack of massive computing, storage, and software services in a scalable manner at low cost. Some techniques have been developed in recent years for processing sensor data on the cloud, such as sensor-cloud. However, these techniques do not provide efficient support for fast detection and location of errors in big sensor data sets.

We develop a novel data error detection approach which exploits the full computation potential of the cloud platform and the network features of WSNs. First, a set of sensor data error types is classified and defined. Based on that classification, the network feature of a clustered WSN is introduced and analyzed to support fast error detection and location. Specifically, in our proposed approach, error detection is based on the scale-free network topology, and most detection operations can be conducted in limited temporal or spatial data blocks instead of on the whole big data set. Hence the detection and location process can be dramatically accelerated.
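To make the idea of checking limited temporal blocks concrete, here is a minimal sketch (our own illustration, not the paper's exact algorithm): a reading is flagged as a suspected error when it deviates from the mean of its own temporal block by more than k standard deviations, so each block can be checked independently of the rest of the data set, and hence in parallel on separate cloud nodes.

```java
// BlockErrorDetector.java -- illustrative sketch only: per-block outlier
// detection. Each temporal block is analyzed on its own, which is what
// allows blocks to be distributed across cloud nodes and checked in
// parallel instead of scanning the whole big data set.
import java.util.ArrayList;
import java.util.List;

public class BlockErrorDetector {

    // Returns indices (within the block) of suspected erroneous readings:
    // those farther than k standard deviations from the block mean.
    public static List<Integer> suspects(double[] block, double k) {
        double mean = 0;
        for (double v : block) mean += v;
        mean /= block.length;

        double var = 0;
        for (double v : block) var += (v - mean) * (v - mean);
        double std = Math.sqrt(var / block.length);

        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < block.length; i++) {
            if (Math.abs(block[i] - mean) > k * std) out.add(i);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] block = {20.1, 20.3, 19.9, 20.0, 55.0, 20.2};  // one spike
        System.out.println(suspects(block, 2.0));
    }
}
```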

Furthermore, the detection and location tasks can be distributed to the cloud platform to fully exploit its computation power and massive storage. Through experiments on our U-Cloud cloud computing platform, it is demonstrated that our proposed approach can significantly reduce the time for error detection and location in big data sets generated by large-scale sensor network systems, with acceptable error detecting accuracy.

1.2 INTRODUCTION:

Recently, we have entered a new era of data explosion, which brings new challenges for big data processing. In general, big data is a collection of data sets so large and complex that it becomes difficult to process with on-hand database management systems or traditional data processing applications. It represents the progress of human cognitive processes and usually includes data sets with sizes beyond the ability of current technology, methods and theory to capture, manage, and process within a tolerable elapsed time. Big data has the typical characteristics of five ‘V’s: volume, variety, velocity, veracity and value. Big data sets come from many areas, including meteorology, connectomics, complex physics simulations, genomics, biological study, gene analysis and environmental research. According to the literature, generated data has doubled in size every 40 months worldwide since the 1980s. In 2012, 2.5 quintillion (2.5 × 10^18) bytes of data were generated every day.

Hence, how to process big data has become a fundamental and critical challenge for modern society. Cloud computing provides a promising platform for big data processing with its powerful computation capability, storage, scalability, resource reuse and low cost, and has attracted significant attention in alignment with big data. One important source of scientific big data is the data sets collected by wireless sensor networks (WSNs). Wireless sensor networks have the potential to significantly enhance people’s ability to monitor and interact with their physical environment. Big data sets from sensors are often subject to corruption and losses due to the wireless medium of communication and the presence of hardware inaccuracies in the nodes. For a WSN application to deduce an appropriate result, it is necessary that the data received is clean, accurate, and lossless. However, effective detection and cleaning of sensor big data errors is a challenging issue demanding innovative solutions. A WSN with cloud can be categorized as a kind of complex network system. In complex network systems such as WSNs and social networks, data abnormality and errors have become an annoying issue for real network applications.

Therefore, the question of how to find data errors in complex network systems for improving and debugging the network has attracted the interest of researchers. Some work has been done on big data analysis and error detection in complex networks, including intelligent sensor networks. There are also some works related to data error detection and debugging in complex network systems with online data processing techniques. Since these techniques were not designed and developed to deal with big data on the cloud, they are unable to cope with the current dramatic increase in data size. For example, when big data sets are encountered, previous offline methods for error detection and debugging on a single computer may take a long time and cannot offer real-time feedback. Because those offline methods are normally based on learning or mining, they often introduce high time cost during the process of data set training and pattern matching. WSN big data error detection commonly requires powerful real-time processing and storing of the massive sensor data, as well as analysis in the context of using inherently complex error models to identify and locate events of abnormalities.

In this paper, we aim to develop a novel error detection approach by exploiting the massive storage, scalability and computation power of the cloud to detect errors in big data sets from sensor networks. Some work has been done on processing sensor data on the cloud. However, fast detection of data errors in big data with the cloud remains challenging. Especially, how to use the computation power of the cloud to quickly find and locate errors of nodes in a WSN needs to be explored. Cloud computing, a disruptive trend at present, poses a significant impact on the current IT industry and research communities. Cloud computing infrastructure is becoming popular because it provides an open, flexible, scalable and reconfigurable platform. The error detection approach proposed in this paper is based on the classification of error types. Specifically, nine types of numerical data abnormalities/errors are listed and introduced in our cloud error detection approach. The defined error model triggers the error detection process. Compared to previous error detection in sensor network systems, our approach on the cloud is designed and developed by utilizing the massive data processing capability of the cloud to enhance error detection speed and real-time reaction. In addition, the architecture features of complex networks are also analyzed in combination with cloud computing in a more efficient way. Based on the current research literature, we divide complex network systems into scale-free and non-scale-free types. A sensor network is a kind of scale-free complex network system, which matches the scalability feature of the cloud.

1.3 LITERATURE SURVEY

A SURVEY OF LARGE SCALE DATA MANAGEMENT APPROACHES IN CLOUD ENVIRONMENTS

PUBLISH: IEEE Comm. Surveys & Tutorials, vol. 13, no. 3, pp. 311-336, Third Quarter 2011.

AUTHOR: S. Sakr, A. Liu, D. Batista, and M. Alomari,

EXPLANATION:

In the last two decades, the continuous increase of computational power has produced an overwhelming flow of data. Moreover, the recent advances in Web technology have made it easy for any user to provide and consume content of any form. This has called for a paradigm shift in the computing architecture and large scale data processing mechanisms. Cloud computing is associated with a new paradigm for the provision of computing infrastructure. This paradigm shifts the location of this infrastructure to the network to reduce the costs associated with the management of hardware and software resources. This paper gives a comprehensive survey of numerous approaches and mechanisms of deploying data-intensive applications in the cloud, which are gaining a lot of momentum in both research and industrial communities. We analyze the various design decisions of each approach and its suitability to support certain classes of applications and end-users. A discussion of some open issues and future challenges pertaining to scalability, consistency, and economical processing of large scale data on the cloud is provided. We highlight the characteristics of the best candidate classes of applications that can be deployed in the cloud.

STREAM AS YOU GO: THE CASE FOR INCREMENTAL DATA ACCESS AND PROCESSING IN THE CLOUD

PUBLISH: Proc. IEEE ICDE Int’l Workshop Data Management in the Cloud (DMC’12), 2012.

AUTHOR: R. Kienzler, R. Bruggmann, A. Ranganathan, and N. Tatbul,

EXPLANATION:

Cloud infrastructures promise to provide high-performance and cost-effective solutions to large-scale data processing problems. In this paper, we identify a common class of data-intensive applications for which data transfer latency for uploading data into the cloud in advance of its processing may hinder the linear scalability advantage of the cloud. For such applications, we propose a “stream-as-you-go” approach for incrementally accessing and processing data based on a stream data management architecture. We describe our approach in the context of a DNA sequence analysis use case and compare it against the state of the art in MapReduce-based DNA sequence analysis and incremental MapReduce frameworks. We provide experimental results over an implementation of our approach based on the IBM InfoSphere Streams computing platform deployed on Amazon EC2, showing an order of magnitude improvement in total processing time over the state of the art.

A SCALABLE TWO-PHASE TOP-DOWN SPECIALIZATION APPROACH FOR DATA ANONYMIZATION USING MAPREDUCE ON CLOUD

PUBLISH: IEEE Trans. Parallel and Distributed, vol. 25, no. 2, pp. 363-373, Feb. 2014.

AUTHOR: X. Zhang, T. Yang, C. Liu, and J. Chen

EXPLANATION:

A large number of cloud services require users to share private data like electronic health records for data analysis or mining, bringing privacy concerns. Anonymizing data sets via generalization to satisfy certain privacy requirements such as k-anonymity is a widely used category of privacy preserving techniques. At present, the scale of data in many cloud applications increases tremendously in accordance with the Big Data trend, thereby making it a challenge for commonly used software tools to capture, manage, and process such large-scale data within a tolerable elapsed time. As a result, it is a challenge for existing anonymization approaches to achieve privacy preservation on privacy-sensitive large-scale data sets due to their insufficiency of scalability. In this paper, we propose a scalable two-phase top-down specialization (TDS) approach to anonymize large-scale data sets using the MapReduce framework on cloud. In both phases of our approach, we deliberately design a group of innovative MapReduce jobs to concretely accomplish the specialization computation in a highly scalable way. Experimental evaluation results demonstrate that with our approach, the scalability and efficiency of TDS can be significantly improved over existing approaches.

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

Fast detection of data errors in big data with the cloud remains challenging; in particular, how to use the computation power of the cloud to quickly find and locate errors of nodes in a WSN needs to be explored. Cloud computing, a disruptive trend at present, poses a significant impact on the current IT industry and research communities. Cloud computing infrastructure is becoming popular because it provides an open, flexible, scalable and reconfigurable platform. Existing methods in wireless sensor networks aim to provide low-cost, low-energy reliable data collection. Reliability against transient errors in sensor data can be provided using model-based error correction, in which temporal correlation in the data is used to correct errors without any overhead at the sensor nodes. In that work it is assumed that a perfect model of the data is available.

However, as variations in the physical process are context-dependent and time-varying in a real sensor network, it is infeasible to have an accurate model of the data properties a priori, which reduces correction efficiency. This issue is addressed by presenting a scalable methodology for improving the accuracy of data modeling through an on-line estimation and data correction algorithm that incorporates robustness against dynamic model changes and potential modeling errors. We evaluate the system through simulations using real sensor data collected from different sources. Experimental results demonstrate that the proposed enhancements lead to an improvement of up to a factor of 10 over the earlier approach.

2.1.1 DISADVANTAGES:

Ensuring the reliability of sensor data becomes harder as the hardware becomes less robust to many types of errors due to the effects of aggressive technology scaling. Similarly, errors in the wireless communication channels are another source of unreliability, as limitations on transmission power due to tight energy constraints make the channels more susceptible to noise and interference. The problem is further aggravated by exposure to harsh physical environments, which is common for many typical sensing applications. Consequently, ensuring the reliability of the data in a sensor network is going to be a growing problem and a challenging part of designing sensor networks.

2.2 PROPOSED SYSTEM:

The error detection approach we propose in this paper is based on the classification of error types. Specifically, nine types of numerical data abnormalities/errors are listed and introduced in our cloud error detection approach. The defined error model triggers the error detection process. Compared to previous error detection in sensor network systems, our approach is designed and developed on cloud, utilizing its massive data processing capability to enhance error detection speed and real-time reaction. Whereas previous work did not deal with scalability and error detection accuracy, our approach is an initial and important step towards online error detection for WSNs.

In particular, under the cloud environment, the computational power and scalability should be fully exploited to support real-time, fast error detection for sensor data sets. Clustering can significantly reduce the time cost of error locating and final decision making by avoiding whole-network data processing. In addition, with this detection technique, cloud resources only need to be distributed according to each partitioned cluster in a scale-free complex network. Based on a review of the current research literature, we divide complex network systems into scale-free and non-scale-free types. A sensor network is a kind of scale-free complex network system, which matches the scalability feature of cloud.

Our proposed error detection approach on cloud is specifically tailored for finding errors in big data sets of sensor networks. The main contribution of our proposed detection is to achieve significant time performance improvement in error detection without compromising error detection accuracy. Our proposed scale-free error detection algorithm achieves significant performance gains compared to non-scale-free error detection algorithms: our scale-free detection on cloud can quickly detect most error data (more than 80 percent) within a 740-second time duration, whereas the non-scale-free error detection algorithm can only achieve as much as a 44 percent error detection rate in the best case. So, it can be concluded from the experimental results in Fig. 5 that the scale-free detection algorithm on cloud for big data can significantly outperform non-scale-free error detection algorithms in terms of error finding time cost.

2.2.1 ADVANTAGES:

To verify the time efficiency and the effectiveness of our approach for detecting errors in big data with cloud, the following experiments are conducted.

  • Demonstrate that significant time savings are achieved in detecting errors from complex-network big data sets.
  • Demonstrate the effectiveness of our proposed error detection approach for different error types.
  • Demonstrate that the false positive ratio of our proposed error detection algorithm is limited to a small value.
  • The scale-free error detection approach can significantly reduce the time needed for fast error detection in numeric big data sets, while achieving an error selection ratio similar to non-scale-free error detection approaches.
  • In future, following error detection for big data sets from sensor network systems on cloud, issues such as error correction, big data cleaning and recovery will be further explored.

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor          –  Pentium IV
  • Speed              –  1.1 GHz
  • RAM                –  256 MB (min)
  • Hard Disk          –  20 GB
  • Floppy Drive       –  1.44 MB
  • Keyboard           –  Standard Windows Keyboard
  • Mouse              –  Two or Three Button Mouse
  • Monitor            –  SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Win7
  • Front End                                :           JAVA JDK 1.7
  • Back End                                :           MS ACCESS
  • Tools                                       :           Netbeans 7
  • Document                               :           MS-Office 2007


CHAPTER 3

3.0 SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

  • The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components: the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
  • A DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • A DFD may be used to represent a system at any level of abstraction, and may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

DATA STORE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures or devices that produce data. The physical component is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.


3.1 ARCHITECTURE DIAGRAM

3.2 DATAFLOW DIAGRAM:

 

UML DIAGRAMS:

3.3 USE CASE DIAGRAM:

(Use case diagram: actor flow from START, through the Graph node, to RESULTS.)

3.4 CLASS DIAGRAM:

3.5 SEQUENCE DIAGRAM:

(Sequence diagram: START, then Data Structure, Cluster Analysis, and Complexity Analysis using the Error Detection Algorithm, followed by Error Localization, Classification and Complexity Analysis, and Results View Graph, ending at RESULTS.)

3.6 ACTIVITY DIAGRAM:


CHAPTER 4

4.0 IMPLEMENTATION:

MODEL BASED ERROR DETECTION ON CLOUD FOR SENSOR NETWORK BIG DATA

ERROR DETECTION:

We propose a two-phase approach to conduct the computation required in the whole process of error detection and localization. In the error detection phase, there are three inputs to the error detection algorithm: the first is the network graph; the second is the total collected data set D; and the third is the defined error patterns p. The output of the error detection algorithm is the error set D'. The details of the error detection algorithm can be found in Appendix B.1, available in the online supplemental material.
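As an illustration of this phase, the sketch below matches each record in D against a list of error patterns p and returns the error set D'. The pattern interface and the flat-line example pattern are assumptions made for illustration; the actual Algorithm 1 is given only in Appendix B.1.

```java
import java.util.*;

// Hedged sketch of the phase-one error detection step: each record's time
// series is matched against the defined error patterns p, and matching
// records are collected into the error set D'.
public class ErrorDetection {

    // A minimal error-pattern interface: true if the series matches the pattern.
    interface ErrorPattern {
        boolean matches(double[] series);
    }

    // Example pattern: a "flat line" error, i.e. the series never changes.
    static final ErrorPattern FLAT_LINE = new ErrorPattern() {
        public boolean matches(double[] series) {
            for (int i = 1; i < series.length; i++) {
                if (series[i] != series[0]) return false;
            }
            return series.length > 1;
        }
    };

    // Scan the collected data set D against the patterns p and return the
    // error set D' (here represented by the indices of erroneous records).
    static List<Integer> detect(List<double[]> d, List<ErrorPattern> p) {
        List<Integer> errorSet = new ArrayList<Integer>();
        for (int i = 0; i < d.size(); i++) {
            for (ErrorPattern pattern : p) {
                if (pattern.matches(d.get(i))) {
                    errorSet.add(i);
                    break;
                }
            }
        }
        return errorSet;
    }

    public static void main(String[] args) {
        List<double[]> d = Arrays.asList(
                new double[]{21.0, 21.1, 20.9},  // normal readings
                new double[]{15.0, 15.0, 15.0}); // flat-line error
        System.out.println(detect(d, Collections.singletonList(FLAT_LINE))); // [1]
    }
}
```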

ERROR LOCALIZATION:

After error pattern matching and error detection, it is important to locate the position and source of each detected error in the original WSN graph G(V, E). The inputs of Algorithm 2 are the original graph of a scale-free network G(V, E) and the error data produced by Algorithm 1. The output of Algorithm 2 is G'(V', E'), a subgraph of G that indicates the error location and source. The details of the error localization algorithm can be found in Appendix B.2, available in the online supplemental material.
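A minimal sketch of this localization step, under the assumption that G' is taken to be the flagged node plus its direct neighbours, is shown below. The adjacency-map representation and the neighbourhood rule are illustrative assumptions; the real algorithm is given only in Appendix B.2.

```java
import java.util.*;

// Hedged sketch of the localization step: given G(V, E) as an adjacency map
// and a node flagged by the detection phase, return the subgraph G'(V', E')
// containing the error node and its direct neighbours, which indicates the
// error location and source.
public class ErrorLocalization {

    static Map<Integer, Set<Integer>> localize(Map<Integer, Set<Integer>> g, int errorNode) {
        // V': the error node plus its direct neighbours
        Set<Integer> vertices = new HashSet<Integer>();
        vertices.add(errorNode);
        Set<Integer> neighbours = g.get(errorNode);
        if (neighbours != null) vertices.addAll(neighbours);

        // E': only the edges whose endpoints both lie inside V'
        Map<Integer, Set<Integer>> sub = new HashMap<Integer, Set<Integer>>();
        for (int v : vertices) {
            Set<Integer> edges = new HashSet<Integer>();
            if (g.get(v) != null) edges.addAll(g.get(v));
            edges.retainAll(vertices);
            sub.put(v, edges);
        }
        return sub;
    }

    public static void main(String[] args) {
        Map<Integer, Set<Integer>> g = new HashMap<Integer, Set<Integer>>();
        g.put(1, new HashSet<Integer>(Arrays.asList(2, 3)));
        g.put(2, new HashSet<Integer>(Arrays.asList(1)));
        g.put(3, new HashSet<Integer>(Arrays.asList(1, 4)));
        g.put(4, new HashSet<Integer>(Arrays.asList(3)));
        // Node 1 was flagged as erroneous: G' keeps only nodes 1, 2 and 3.
        System.out.println(localize(g, 1).keySet());
    }
}
```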

COMPLEXITY ANALYSIS:

Suppose that there is a sensor network system consisting of n nodes. For an error detection approach that does not consider the scale-free network feature, the error detection algorithm will carry out error pattern matching and localization over the whole network data by traversing the whole data set. Suppose that there are R nodes on the data routing; in the worst case, the detection algorithm without the scale-free network feature will be executed R × n times for error detection and localization, denoted as O(R · n), 1 ≤ R ≤ n. However, with the hierarchical network topology, the network can be partitioned into m clusters.

Based on our scale-free network definition and our algorithm, in each cluster the number of nodes involved in error detection is reduced to n/m on average. In addition, within each cluster the data values are highly correlated, so the worst-case number of data traversals for error detection and localization is determined per cluster. Because our scale-free error detection approach limits most of the computation to within each cluster, the communication and data exchange between clusters can be ignored. Consequently, the worst-case complexity of our scale-free error detection approach, on the order of R · n/m, outperforms the traditional error detection algorithms.
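As a rough arithmetic illustration of this comparison (the values of n, R and m below are made up purely for illustration):

```java
public class ComplexityComparison {
    // Worst case without clustering: detection traverses all n nodes for each
    // of the R routing nodes, i.e. R * n traversals (O(R * n), 1 <= R <= n).
    static long withoutClusters(long r, long n) {
        return r * n;
    }

    // With the network partitioned into m clusters, each cluster holds about
    // n / m nodes on average and detection stays inside one cluster, giving
    // roughly R * n / m traversals.
    static long withClusters(long r, long n, long m) {
        return r * (n / m);
    }

    public static void main(String[] args) {
        long n = 10000, r = 100, m = 50;
        System.out.println(withoutClusters(r, n)); // 1000000
        System.out.println(withClusters(r, n, m)); // 20000
    }
}
```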

4.1 ALGORITHM

This section introduces the big data error detection/location algorithm and its combination strategy with cloud. For our proposed algorithm on cloud, the data sets need to be partitioned before being fed to the algorithm. Two points should be mentioned when carrying out partitioning. Firstly, the partition process must not introduce new data errors into a data set, nor change or influence the original errors in a data set. This is different from previous partition algorithms, which normally divide a data set according to certain application preferences or clustering principles. Secondly, because scale-free network systems have a special topology, the partition has to form the data clusters according to the real-world situation of a scale-free network or cluster-head based WSN.
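The partitioning constraints above (group records strictly by the existing cluster-head topology, without touching record contents) can be sketched as follows; the "clusterId,nodeId,value" record format is an assumption made for illustration:

```java
import java.util.*;

// Hedged sketch of topology-driven partitioning: records are grouped by the
// cluster-head they report to, so the partition follows the real cluster-head
// WSN topology and leaves every record (and hence every original error) intact.
public class ClusterPartition {

    static Map<Integer, List<String>> partition(List<String> records) {
        Map<Integer, List<String>> clusters = new HashMap<Integer, List<String>>();
        for (String rec : records) {
            // assumed record format: "clusterId,nodeId,value"
            int clusterId = Integer.parseInt(rec.split(",")[0]);
            List<String> bucket = clusters.get(clusterId);
            if (bucket == null) {
                bucket = new ArrayList<String>();
                clusters.put(clusterId, bucket);
            }
            bucket.add(rec); // records are copied verbatim, never modified
        }
        return clusters;
    }

    public static void main(String[] args) {
        List<String> records = Arrays.asList("1,7,20.5", "2,9,19.8", "1,8,21.0");
        System.out.println(partition(records).get(1).size()); // 2
    }
}
```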

MapReduce is a framework for processing parallelizable problems across huge data sets using a large number of computers (nodes), collectively referred to as a cluster (if all nodes are on the same local network and use similar hardware) or a grid (if the nodes are shared across geographically and administratively distributed systems, and use more heterogeneous hardware). Computational processing can occur on data stored either in a filesystem (unstructured) or in a database (structured). MapReduce can take advantage of locality of data, processing data on or near the storage assets to reduce data transmission.

"Map" function: the master node takes the input, divides it into smaller sub-problems, and distributes them to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes the smaller problem and passes the answer back to its master node. "Reduce" function: the master node then collects the answers to all the sub-problems and combines them in some way to form the output, the answer to the problem it was originally trying to solve. MapReduce allows for distributed processing of the map and reduction operations.
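The map and reduce steps just described can be sketched in-memory, without any cluster framework; the "clusterId:reading" record format and the counting task are illustrative assumptions, not the paper's actual job:

```java
import java.util.*;

// Hedged single-machine sketch of MapReduce: "map" emits a (key, value) pair
// per record, and "reduce" combines all values per key into the final answer.
// Here we count sensor readings per cluster.
public class MapReduceSketch {

    static Map<String, Integer> mapReduce(List<String> records) {
        // "Map": emit (clusterId, 1) for every record
        List<String[]> pairs = new ArrayList<String[]>();
        for (String r : records) {
            pairs.add(new String[]{r.split(":")[0], "1"});
        }
        // "Reduce": sum the emitted values for each key
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String[] kv : pairs) {
            Integer prev = counts.get(kv[0]);
            counts.put(kv[0], (prev == null ? 0 : prev) + Integer.parseInt(kv[1]));
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> records = Arrays.asList("c1:20.1", "c1:20.3", "c2:19.8");
        System.out.println(mapReduce(records).get("c1")); // 2
    }
}
```

In a real deployment the pairs list would be partitioned across worker nodes and the per-key reduction would run in parallel; the logic per record stays the same.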


4.2 MODULES:

NETWORK TOPOLOGY DESIGNS:

ON-CLOUD PROCESSING FOR WSN:

TIME-EFFICIENT ERROR DETECTION:

ERROR AND ABNORMALITY CLASSIFICATION:

ERROR DEFINITION AND MODELING:

4.3 MODULE DESCRIPTION:

NETWORK TOPOLOGY DESIGNS:

Scale-free networks are inhomogeneous: only a few nodes have a large number of links. In real applications, the cluster-head WSN is similar to scale-free networks; it can be described with scale-free complex networks and has their features. In Fig. 2, instances of scale-free networks and exponential networks are compared. It can be concluded that scale-free networks have a more clustered, hierarchical node topology: central nodes are highly connected, while each outer-layer node has only one or two links. Traditional error detection for WSN data sets has not paid enough attention to making use of complex network features to improve error detection efficiency on the cloud platform. Compared to previous sensor data error detection and localization approaches, complex network topology features will be explored with the computation power of cloud for error detection efficiency, scalability and low cost.

Wireless sensor network systems have been used in different areas, such as environment monitoring, military, disaster warning and scientific data collection. In order to process the remote sensor data collected by WSNs, the sensor-cloud platform has been developed, including its definition, architecture, and applications. Due to its high variety, volume, and velocity, big data is difficult to process using on-hand database management tools or the traditional sensor-cloud platform. Big data sets can come from complex network systems, such as social networks and large-scale sensor networks. In addition, under the theme of complex network systems, it may be difficult to develop time-efficient methods for detecting or trouble-shooting errors in big data sets, and hence to debug the complex network systems in real time.

ON-CLOUD PROCESSING FOR WSN:

Sensor-Cloud is a unique sensor data storage, visualization and remote management platform that leverages powerful cloud computing technologies to provide excellent data scalability, fast visualization, and user programmable analysis. Initially, sensor-cloud was designed to support long-term deployments of MicroStrain wireless sensors. But nowadays, sensor-cloud has been developed to support any web-connected third party device, sensor, or sensor network through a simple OpenData API. Sensor-Cloud can be useful for a variety of applications, particularly where data from large sensor networks needs to be collected, viewed, and monitored remotely. For example, structural health monitoring and condition-based monitoring of high value assets are applications where commonly available data tools often come up short in terms of accessibility, data scalability, programmability, or performance.

Sensor-Cloud represents a direction for processing and analyzing big sensor data using a cloud platform. Online WSN data quality and data cleaning issues have been discussed to deal with the problems of outliers, missing information, and noise. A novel online approach for modeling and online learning of temporal-spatial data correlations in sensor networks has been developed, and a Bayesian approach for reducing the effect of noise on sensor data online has also been proposed [37]. The proposed approach is efficient in reducing the uncertainty associated with noisy sensors. However, scalability and error detection accuracy are not dealt with. It is an initial and important step for online error detection of WSNs, but a lot of work still needs to be done. Especially, under the cloud environment, the computational power and scalability should be fully exploited to support real-time, fast error detection for sensor data sets.

TIME-EFFICIENT ERROR DETECTION:

In this section, a cluster-head WSN will be introduced and processed as a kind of complex network system. These complex networks may have non-trivial statistical properties which will influence the data processing strategy applied to them. In order to test the false positive ratio of our error detection approach and the time cost of finding errors, we impose five types of data errors, following the definitions in Section 3, into the normalized testing data sets with a uniform random distribution. These five types of data errors are generated equally; hence, the percentage of each type of error is 20 percent of the total imposed errors. The first imposed error type is the flat line error. The second is the out-of-bound error. The third is the spike error. The fourth is the data lost error. Finally, the aggregate & fusion error type is imposed. By imposing the above five types of data errors, the experiment is designed to measure the error selection efficiency and accuracy during the on-cloud processing of the data set.

Specifically, 10 different error rates are imposed into the experimental data set and tested independently. The testing error rate changes from 1 to 10 percent across 10 repeated experiments. After about 100 seconds, the proposed algorithm can detect more than 60 percent of the errors, whatever the testing error rate within the domain between 1 and 10 percent. During the time duration between 0 and 100 seconds, all error detection rates increase dramatically with a steep trend. After the time point of 300 seconds, the error detection rates increase slowly with a flat trend. At 740 seconds, the proposed error detection algorithm on cloud can find and locate more than 95 percent of the imposed errors in the testing data sets. When the testing error rate is 1 percent, the best performance gains are achieved, at about 99.5 percent of total errors detected. With the increase of the testing error rate, the error detection rate decreases.

ERROR AND ABNORMALITY CLASSIFICATION:

In big data sets from real-world complex networks, there are mainly two types of data generated and exchanged within networks: (1) the numeric data sampled and exchanged between network nodes, such as sensor network sampled data sets; and (2) the text files and data logs generated by nodes, such as social network data sets. In this paper, our research focuses on error detection for numeric big data sets from complex networks. Errors can be classified into six main types for both numeric and text data, as given in Appendix A.1, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2013.2295810. This error classification can effectively describe the common error types in complex network systems.

However, when it comes to errors in wireless sensor network data sets, the above classification loses accuracy in separating node or edge data errors caused by different wireless data communication failures. In addition, it is not enough for describing the error data phenomena in sensor data sets. To better capture the error features of sensor data sets, the above general error classification should be extended. Considering the specific features of numeric data errors, there are several abnormal data scenarios demonstrated in Fig. 1. The "flat line fault" indicates that a time series of a node in a network system stays unchanged for an unacceptably long time duration; in real-world applications, sampled data and transmitted data always have slight changes over time. The "out of data bounds fault" indicates that impossible data values are observed based on some domain knowledge; in real-world applications, if a temperature value of water is reported as 300 °C, it can be treated as a data fault directly. The "data lost fault" means there are missing data values in a time series during data generation or communication.
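The three fault types just described can be expressed as simple per-series checks; the bounds and the NaN-for-missing-sample convention below are illustrative assumptions, not the paper's definitions:

```java
// Hedged sketches of three fault checks over a sampled time series.
public class ErrorTypes {

    // "Flat line fault": the series stays unchanged for the whole window.
    static boolean flatLine(double[] s) {
        for (double v : s) {
            if (v != s[0]) return false;
        }
        return s.length > 1;
    }

    // "Out of data bounds fault": a value outside the physically possible
    // range given by domain knowledge (e.g. water reported at 300 °C).
    static boolean outOfBounds(double[] s, double lo, double hi) {
        for (double v : s) {
            if (v < lo || v > hi) return true;
        }
        return false;
    }

    // "Data lost fault": missing samples in the series, encoded here as NaN.
    static boolean dataLost(double[] s) {
        for (double v : s) {
            if (Double.isNaN(v)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(outOfBounds(new double[]{20.5, 300.0}, 0.0, 100.0)); // true
        System.out.println(dataLost(new double[]{20.5, Double.NaN, 21.0}));     // true
    }
}
```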

ERROR DEFINITION AND MODELING:

With the above classification, the definition of each error type is presented to guide our error detection algorithm. Suppose that a data record from a network node is denoted as r(n, t, f(n, t), g(n, l)), where n is the ID of the node in a network system, t represents the window length of a time series, f(n, t) is the numerical values collected within window t from node n, and g(n, l) is a location function which records the cluster, the data source node and the partition situation related to node n. g(n, l) is used to calculate the distance between the data source node n and the node l, which is the initial data source node; it also indicates whether the currently detected error node is itself the initial data source node. Furthermore, g(n, l) is used to parse the data routing between data communication nodes.
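The record r(n, t, f(n, t), g(n, l)) can be mirrored by a small data class; representing g(n, l) as a cluster ID plus the initial source-node ID is an assumption made here for illustration:

```java
// Hypothetical data class mirroring the record r(n, t, f(n, t), g(n, l))
// defined above.
public class SensorRecord {
    final int nodeId;         // n: ID of the node in the network system
    final int windowLength;   // t: window length of the time series
    final double[] values;    // f(n, t): values collected within window t
    final int sourceNodeId;   // l: the initial data source node
    final int clusterId;      // part of g(n, l): cluster the node belongs to

    SensorRecord(int nodeId, int windowLength, double[] values,
                 int sourceNodeId, int clusterId) {
        this.nodeId = nodeId;
        this.windowLength = windowLength;
        this.values = values;
        this.sourceNodeId = sourceNodeId;
        this.clusterId = clusterId;
    }

    // g(n, l) tells whether the node carrying a detected error is itself
    // the initial data source node.
    boolean isInitialSource() {
        return nodeId == sourceNodeId;
    }

    public static void main(String[] args) {
        SensorRecord r = new SensorRecord(7, 10, new double[]{20.1, 20.2}, 7, 2);
        System.out.println(r.isInitialSource()); // true
    }
}
```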

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are 

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:     

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

 

5.1.2 TECHNICAL FEASIBILITY   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

5.1.3 SOCIAL FEASIBILITY:  

The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.

This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best programs are worthless if they do not produce the correct outputs.

5.2.1 UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

5.2.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for various peers in a distributed network framework, displaying all users available in the group.
Expected result: The result after execution should give the accurate result.


5.2.3 NON-FUNCTIONAL TESTING:

Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.2.4 LOAD TESTING:

An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under test with real usage by having actual users connected to it. They will generate test input data for the system test.

Description: It is necessary to ascertain that the application behaves correctly under load when a 'Server busy' response is received.
Expected result: Should designate another active node as the server.


5.2.5 PERFORMANCE TESTING:

Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values and produce an accurate result in the expected time.


5.2.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms a part of the software quality control effort.

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.


5.2.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

Description: Checking that the user identification is authenticated.
Expected result: In case of failure, it should not be connected in the framework.

Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.


5.2.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.


5.2.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.

  • Description: To check for incorrect or missing functions.
    Expected result: All the functions must be valid.
  • Description: To check for interface errors.
    Expected result: The entire interface must function normally.
  • Description: To check for errors in data structures or external database access.
    Expected result: The database update and retrieval must be performed correctly.
  • Description: To check for initialization and termination errors.
    Expected result: All the functions and data structures must be initialized properly and terminated normally.
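A minimal black-box sketch (again with a hypothetical method, not code from this project): the cases are derived purely from the stated specification of max, without inspecting its internal structure.

```java
// Black-box test sketch: exercise a module only through its specified
// inputs and outputs, without looking at its internals.
public class BlackBoxDemo {
    // Specification: returns the larger of the two arguments.
    static int max(int a, int b) {
        return a >= b ? a : b;
    }

    public static void main(String[] args) {
        // Input classes chosen from the specification alone:
        // typical values, equal values, and negative values.
        boolean ok = max(2, 9) == 9
                  && max(7, 7) == 7
                  && max(-3, -8) == -3;
        System.out.println(ok ? "black-box cases passed"
                              : "black-box cases failed");
    }
}
```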

All of the above system testing strategies are carried out during development, since the documentation and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE DESCRIPTION:

 

6.1 JAVA TECHNOLOGY:

Java technology is both a programming language and a platform.

 

The Java Programming Language

 

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

  • Simple
  • Architecture neutral
  • Object oriented
  • Portable
  • Distributed
  • High performance
  • Interpreted
  • Multithreaded
  • Robust
  • Dynamic
  • Secure

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, you first translate a program into an intermediate language called Java byte codes, the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

[Figure: the compile-once, interpret-everywhere workflow]

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.
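This flow can be seen with the smallest possible program: compiling it once with `javac HelloWorld.java` produces the platform-independent byte codes in HelloWorld.class, and `java HelloWorld` then interprets those byte codes on any machine with a Java VM.

```java
// Compiled once into byte codes (HelloWorld.class); the same class file
// runs unchanged on any implementation of the Java VM.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from the Java VM");
    }
}
```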


6.2 THE JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

  • The Java Virtual Machine (Java VM)
  • The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights the functionality that some of the packages in the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.

[Figure: a program running on the Java platform, insulated from the hardware by the Java API and the virtual machine]

Native code is code that, after compilation, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

6.3 WHAT CAN JAVA TECHNOLOGY DO?

The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.

A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.

How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:

  • The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
  • Applets: The set of conventions used by applets.
  • Networking: URLs, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
  • Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
  • Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
  • Software components: Known as JavaBeans™, these can plug into existing component architectures.
  • Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
  • Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational databases.
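Object serialization, the basis for the lightweight persistence and RMI mentioned above, can be sketched in a few lines: an object graph is written to a byte stream and later reconstructed from it. The Point class here is a hypothetical value object introduced only for illustration.

```java
import java.io.*;

// Object serialization sketch: write an object to a byte stream,
// then read an equivalent copy back out of it.
public class SerializationDemo {
    // Hypothetical serializable value object.
    static class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Point(3, 4));          // serialize
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Point p = (Point) in.readObject();         // deserialize
            System.out.println("restored point: " + p.x + "," + p.y);
        }
    }
}
```

RMI uses the same mechanism to move method arguments and return values between virtual machines.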

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.

 

6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?

We can't promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and to require less effort than other languages. We believe that Java technology will help you do the following:

  • Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
  • Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
  • Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
  • Develop programs more quickly: You may develop programs up to twice as fast as when writing the same program in C++. Why? You write fewer lines of code, and Java is a simpler programming language than C++.
  • Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure Java™ Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
  • Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
  • Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.

 

6.5 ODBC:

 

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program, and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

6.6 JDBC:

In an effort to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of "plug-in" database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, it must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.
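The typical JDBC call sequence looks like the sketch below; the URL scheme and the table are hypothetical, introduced only to show the API shape. DriverManager selects a vendor driver that understands the JDBC URL, so with no driver on the classpath the call fails with a "No suitable driver" error, which is exactly what this standalone sketch prints.

```java
import java.sql.*;

// JDBC usage sketch (hypothetical URL and table). With no vendor
// driver installed, DriverManager.getConnection throws SQLException,
// which the catch block reports.
public class JdbcSketch {
    public static void main(String[] args) {
        String url = "jdbc:somevendor://localhost/sales"; // hypothetical URL
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement stmt =
                 conn.prepareStatement("SELECT name FROM customers WHERE id = ?")) {
            stmt.setInt(1, 42);                            // bind the parameter
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        } catch (SQLException e) {
            // Expected here, since no vendor driver is on the classpath.
            System.out.println("connection failed: " + e.getMessage());
        }
    }
}
```

With a real driver on the classpath and a matching URL, the same code path would run the query; nothing else in the program changes, which is the point of the driver model.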

JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.

 

6.7 JDBC Goals:

Few software packages are designed without goals in mind. JDBC is no exception; its many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows more error checking to be done at compile time; consequently, fewer errors appear at runtime.

Keep the common cases simple

Because, more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs, and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.

Finally, we decided to proceed with the implementation using Java Networking.

For dynamically updating the cache table, we use an MS Access database.


6.8 NETWORKING: THE TCP/IP STACK:

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams:

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP:

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to give a client/server model – see later.
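In Java, this connectionless model is exposed through DatagramSocket and DatagramPacket: each datagram carries its own destination address and port, with no connection set up beforehand. The sketch below (an illustration, not project code) sends one datagram over the loopback interface and receives it on an ephemeral port.

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

// UDP sketch: one datagram sent to loopback, received on an ephemeral port.
public class UdpDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);   // ephemeral port
             DatagramSocket sender = new DatagramSocket()) {
            receiver.setSoTimeout(2000);                        // fail fast if lost
            byte[] msg = "ping".getBytes(StandardCharsets.UTF_8);
            // Each packet is addressed individually: IP + port, no connection.
            sender.send(new DatagramPacket(msg, msg.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] buf = new byte[64];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receiver.receive(packet);                           // blocks until arrival
            System.out.println("received: " + new String(
                    packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8));
        }
    }
}
```

Note that UDP itself gives no delivery guarantee; over loopback the datagram arrives in practice, but the timeout acknowledges that it might not.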

TCP:

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32-bit integer, which gives the IP address.

Network address:

Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class B uses 16-bit network addressing. Class C uses 24-bit network addressing, and class D addresses are reserved for multicast.

Subnet address:

Internally, the UNIX network is divided into subnetworks. Building 11 is currently on one subnetwork and uses 10-bit addressing, allowing 1024 different hosts.

Host address:

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total address:

The 32-bit address is usually written as 4 integers separated by dots.
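The dotted notation is just a rendering of the underlying 32-bit integer: each byte becomes one decimal field, most significant first. A small sketch:

```java
// Convert a 32-bit IPv4 address into dotted decimal notation:
// each byte of the integer becomes one field, most significant first.
public class DottedQuad {
    static String toDotted(int address) {
        return ((address >>> 24) & 0xFF) + "."
             + ((address >>> 16) & 0xFF) + "."
             + ((address >>> 8) & 0xFF) + "."
             + (address & 0xFF);
    }

    public static void main(String[] args) {
        // 0x7F000001 is the conventional loopback address.
        System.out.println(toDotted(0x7F000001)); // prints 127.0.0.1
    }
}
```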

Port addresses

A service exists on a host and is identified by its port, a 16-bit number. To send a message to a server, you send it to the port for that service on the host it is running on. This is not location transparency! Certain of these ports are "well known".

Sockets:

A socket is a data structure maintained by the system to handle network connections. A socket is created using the socket call. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);

Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
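In Java, this same model is wrapped by ServerSocket and Socket for the TCP case. The sketch below (illustrative, not project code) creates both ends on the loopback interface and passes one line of text through the resulting "pipe".

```java
import java.io.*;
import java.net.*;

// TCP sketch: a client thread connects to a ServerSocket over loopback
// and one line of text flows through the connection.
public class TcpDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {   // ephemeral port
            Thread client = new Thread(() -> {
                try (Socket s = new Socket(InetAddress.getLoopbackAddress(),
                                           server.getLocalPort());
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("hello over TCP");          // one end of the pipe
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            client.start();
            try (Socket accepted = server.accept();         // other end of the pipe
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(accepted.getInputStream()))) {
                System.out.println("server read: " + in.readLine());
            }
            client.join();
        }
    }
}
```

Unlike the raw socket call in C, the address family and protocol are implied: Socket and ServerSocket always give you a TCP stream socket.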

6.9 JFREECHART:

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:

  • A consistent and well-documented API, supporting a wide range of chart types;
  • A flexible design that is easy to extend, and targets both server-side and client-side applications;
  • Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG).

JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.

 

6.9.1. Map Visualizations:

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: Sourcing freely redistributable vector outlines for the countries of the world, states/provinces in particular countries (USA in particular, but also other areas);

Creating an appropriate dataset interface (plus default implementation), a renderer, and integrating this with the existing XYPlot class in JFreeChart; testing, documenting, testing some more, documenting some more.

6.9.2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.

6.9.3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.

 

6.9.4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

CHAPTER 7

7.0 APPENDIX

7.1 SAMPLE SCREEN SHOTS:

7.2 SAMPLE SOURCE CODE:

CHAPTER 8

8.1 CONCLUSION AND FUTURE WORK:

In order to detect errors in big data sets from sensor network systems, a novel approach has been developed using cloud computing. First, an error classification for big data sets is presented. Second, the correlation between sensor network systems and scale-free complex networks is introduced. Based on each error type and the features of scale-free networks, we have proposed a time-efficient strategy for detecting and locating errors in big data sets on the cloud.

Experimental results from our cloud computing environment, U-Cloud, demonstrate that 1) the proposed scale-free error detecting approach can significantly reduce the time needed for fast error detection in numeric big data sets, and 2) the proposed approach achieves an error selection ratio similar to that of non-scale-free error detection approaches. In the future, building on error detection for big data sets from sensor network systems on the cloud, issues such as error correction, big data cleaning, and recovery will be further explored.

From our experimental results and analysis, it can be concluded that our proposed error detection approach for big data processing on the cloud can dramatically increase error detection speed without losing error selection accuracy. In particular, when the error rate for a target big data set is limited to a small value (1-10 percent), the algorithm can efficiently detect the errors with high fidelity.

A Scalable and Reliable Matching Service for Content-Based Publish/Subscribe Systems

1.1 ABSTRACT:

Characterized by the increasing arrival rate of live content, emergency applications pose a great challenge: how to disseminate large-scale live content to interested users in a scalable and reliable manner. The publish/subscribe (pub/sub) model is widely used for data dissemination because of its capacity for seamlessly expanding the system to massive size. However, most event matching services of existing pub/sub systems either deliver low matching throughput when matching a large number of skewed subscriptions, or interrupt dissemination when a large number of servers fail. Cloud computing provides great opportunities to meet the requirements of complex computing and reliable communication.

In this paper, we propose SREM, a scalable and reliable event matching service for content-based pub/sub systems in a cloud computing environment. To achieve low routing latency and reliable links among servers, we propose a distributed overlay, SkipCloud, to organize the servers of SREM. Through a hybrid space partitioning technique, HPartition, large-scale skewed subscriptions are mapped into multiple subspaces, which ensures high matching throughput and provides multiple candidate servers for each event.

Moreover, a series of dynamics maintenance mechanisms are extensively studied. To evaluate the performance of SREM, 64 servers are deployed and millions of live content items are tested in a CloudStack testbed. Under various parameter settings, the experimental results demonstrate that the traffic overhead of routing events in SkipCloud is at least 60 percent smaller than in a Chord overlay, and that the matching rate in SREM is at least 3.7 times and at most 40.4 times larger than that of the single-dimensional partitioning technique of BlueDove. Besides, SREM enables the event loss rate to drop back to 0 within tens of seconds even if a large number of servers fail simultaneously.

1.2 INTRODUCTION

Because of its importance in helping users make real-time decisions, data dissemination has become highly significant in many large-scale emergency applications, such as earthquake monitoring, disaster weather warning, and status updates in social networks. Recently, data dissemination in these emergency applications has presented a number of fresh trends. One is the rapid growth of live content. For instance, Facebook users publish over 600,000 pieces of content and Twitter users send over 100,000 tweets on average per minute. The other is the highly dynamic network environment. For instance, measurement studies indicate that most users' sessions in social networks last only several minutes. In emergency scenarios, sudden disasters like earthquakes or bad weather may lead to the failure of a large number of users instantaneously.

These characteristics require the data dissemination system to be scalable and reliable. Firstly, the system must be scalable enough to support the large amount of live content. The key is to offer a scalable event matching service to filter out irrelevant users. Otherwise, the content may have to traverse a large number of uninterested users before reaching the interested ones. Secondly, given the dynamic network environment, it is necessary to provide reliable schemes that maintain continuous data dissemination capacity. Otherwise, a system interruption may cause live content to become obsolete. Driven by these requirements, the publish/subscribe (pub/sub) pattern is widely used to disseminate data due to its flexibility, scalability, and efficient support of complex event processing. In pub/sub systems (pub/subs), a receiver (subscriber) registers its interest in the form of a subscription. Events are published by senders to the pub/sub system.

The system matches events against subscriptions and disseminates them to interested subscribers.

In traditional data dissemination applications, live content is generated by publishers at a low speed, which has led many pub/subs to adopt multi-hop routing techniques to disseminate events. A large body of broker-based pub/subs forward events and subscriptions by organizing nodes into diverse distributed overlays, such as tree-based, cluster-based, and DHT-based designs. However, the multi-hop routing techniques in these broker-based systems lead to low matching throughput, which is inadequate for the current high arrival rate of live content.

Recently, cloud computing has provided great opportunities for applications requiring complex computing and high-speed communication, where servers are connected by high-speed networks and have powerful computing and storage capacities. A number of pub/sub services based on the cloud computing environment have been proposed, such as Move, BlueDove, and SEMAS. However, most of them cannot completely meet the requirements of both scalability and reliability when matching large-scale live content under highly dynamic environments.

This mainly stems from the following facts:

1) Most of them are inappropriate for matching live content with high data dimensionality due to the limitations of their subscription space partitioning techniques, which bring either low matching throughput or high memory overhead.

2) These systems adopt a one-hop lookup technique among servers to reduce routing latency. In spite of its high efficiency, it requires each dispatching server to have the same view of the matching servers. Otherwise, subscriptions or events may be assigned to the wrong matching server, which creates an availability problem when matching servers join or crash. A number of schemes can be used to keep the view consistent, such as periodically sending heartbeat messages to dispatching servers or exchanging messages among matching servers. However, these extra schemes may bring large traffic overhead or interrupt the event matching service.

1.3 LITERATURE SURVEY

RELIABLE AND HIGHLY AVAILABLE DISTRIBUTED PUBLISH/SUBSCRIBE SERVICE

PUBLICATION: Proc. 28th IEEE Int. Symp. Reliable Distrib. Syst., 2009, pp. 41–50.

AUTHORS: R. S. Kazemzadeh and H.-A. Jacobsen

EXPLANATION:

This paper develops reliable distributed publish/subscribe algorithms with service availability in the face of concurrent crash failure of up to delta brokers. The reliability of service in our context refers to per-source in-order and exactly-once delivery of publications to matching subscribers. To handle failures, brokers maintain data structures that enable them to reconnect the topology and compute new forwarding paths on the fly. This enables fast reaction to failures and improves the system’s availability. Moreover, we present a recovery procedure that recovering brokers execute in order to re-enter the system, and synchronize their routing information.

BUILDING A RELIABLE AND HIGH-PERFORMANCE CONTENT-BASED PUBLISH/SUBSCRIBE SYSTEM

PUBLICATION: J. Parallel Distrib. Comput., vol. 73, no. 4, pp. 371–382, 2013.

AUTHORS: Y. Zhao and J. Wu

EXPLANATION:

Provisioning reliability in a high-performance content-based publish/subscribe system is a challenging problem. The inherent complexity of content-based routing makes message loss detection and recovery, and network state recovery extremely complicated. Existing proposals either try to reduce the complexity of handling failures in a traditional network architecture, which only partially address the problem, or rely on robust network architectures that can gracefully tolerate failures, but perform less efficiently than the traditional architectures. In this paper, we present a hybrid network architecture for reliable and high-performance content-based publish/subscribe. Two overlay networks, a high-performance one with moderate fault tolerance and a highly-robust one with sufficient performance, work together to guarantee the performance of normal operations and reliability in the presence of failures. Our design exploits the fact that, in a high-performance content-based publish/subscribe system, subscriptions are broadcast to all brokers, to facilitate efficient backup routing when failures occur, which incurs a minimal overhead. Per-hop reliability is used to gracefully detect and recover lost messages that are caused by transit errors. Two backup routing methods based on DHT routing are proposed. Extensive simulation experiments are conducted. The results demonstrate the superior performance of our system compared to other state-of-the-art proposals.

SCALABLE AND ELASTIC EVENT MATCHING FOR ATTRIBUTE-BASED PUBLISH/SUBSCRIBE SYSTEMS

PUBLICATION: Future Gener. Comput. Syst., vol. 36, pp. 102–119, 2013.

AUTHORS: X. Ma, Y. Wang, Q. Qiu, W. Sun, and X. Pei

EXPLANATION:

Due to sudden changes in the arrival rate of live content and the skewness of large-scale subscriptions, the rapid growth of emergency applications presents a new challenge to current publish/subscribe systems: providing a scalable and elastic event matching service. However, most existing event matching services cannot adapt to sudden changes in the arrival rate of live content, and they generate a non-uniform distribution of load on the servers because of the skewness of the large-scale subscriptions. To this end, we propose SEMAS, a scalable and elastic event matching service for attribute-based pub/sub systems in the cloud computing environment. SEMAS uses a one-hop lookup overlay to reduce routing latency. Through a hierarchical multi-attribute space partition technique, SEMAS adaptively partitions the skewed subscriptions and maps them into balanced clusters to achieve high matching throughput. The performance-aware detection scheme in SEMAS adaptively adjusts the scale of servers according to the churn of workloads, leading to a high performance-price ratio. A prototype system on an OpenStack-based platform demonstrates that SEMAS has a linearly increasing matching capacity as the number of servers and the partitioning granularity increase. It is able to elastically adjust the scale of servers and to tolerate a large number of server failures with low latency and traffic overhead. Compared with existing cloud-based pub/sub systems, SEMAS achieves higher throughput in various workloads.

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

Characterized by the increasing arrival rate of live content, emergency applications pose a great challenge: how to disseminate large-scale live content to interested users in a scalable and reliable manner. The publish/subscribe (pub/sub) model is widely used for data dissemination because of its capacity to seamlessly expand the system to a massive size. However, most event matching services of existing pub/sub systems either deliver low matching throughput when matching a large number of skewed subscriptions, or interrupt dissemination when a large number of servers fail.

However, most existing event matching services cannot adapt to sudden changes in the arrival rate of live content, and they generate a non-uniform distribution of load on the servers because of the skewness of large-scale subscriptions. To this end, SEMAS was proposed: a scalable and elastic event matching service for attribute-based pub/sub systems in the cloud computing environment. SEMAS uses a one-hop lookup overlay to reduce routing latency. Through a hierarchical multi-attribute space partition technique, SEMAS adaptively partitions the skewed subscriptions and maps them into balanced clusters to achieve high matching throughput.

The performance-aware detection scheme in SEMAS adaptively adjusts the scale of servers according to the churn of workloads, leading to a high performance–price ratio. A prototype system on an OpenStack-based platform demonstrates that SEMAS has a linearly increasing matching capacity as the number of servers and the partitioning granularity increase. It is able to elastically adjust the scale of servers and to tolerate a large number of server failures with low latency and traffic overhead.

2.1.1 DISADVANTAGES:

Publish/Subscribe (pub/sub) is a commonly used asynchronous communication pattern among application components. Senders and receivers of messages are decoupled from each other and interact with an intermediary: a pub/sub system.

A receiver registers its interest in certain kinds of messages with the pub/sub system in the form of a subscription. Messages are published by senders to the pub/sub system. The system matches messages (i.e., publications) to subscriptions and delivers messages to interested subscribers using a notification mechanism.

There are several ways for subscriptions to specify messages of interest. In its simplest form, messages are associated with topic strings and subscriptions are defined as patterns of the topic string. A more expressive form is attribute-based pub/sub, where messages are further annotated with various attributes.

Subscriptions are expressed as predicates on the message topic and attributes. An even more general form is content-based pub/sub, where subscriptions can be arbitrary Boolean functions on the entire content of messages (e.g., XML documents), rather than being limited to attributes.

Attribute-based pub/sub strikes a balance between the simplicity and performance of topic-based pub/sub and the expressiveness of content-based pub/sub. Many large-scale and loosely coupled applications, including stock quote distribution, network management, and environmental monitoring, can be structured around a pub/sub messaging paradigm.
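The attribute-based matching discussed above can be illustrated with a minimal sketch. This is our own illustrative example, not code from any of the cited systems: a subscription constrains some attributes to inclusive ranges, and an event matches when every constrained attribute is present and falls inside its range (assumes Java 9+ for `Map.of`).

```java
import java.util.Map;

// Illustrative attribute-based subscription: each constrained attribute
// maps to an inclusive [low, high] range.
public class AttrSubscription {
    private final Map<String, double[]> ranges;

    public AttrSubscription(Map<String, double[]> ranges) {
        this.ranges = ranges;
    }

    // An event matches when every constrained attribute is present
    // and its value lies inside the subscribed range.
    public boolean matches(Map<String, Double> event) {
        for (Map.Entry<String, double[]> c : ranges.entrySet()) {
            Double v = event.get(c.getKey());
            if (v == null || v < c.getValue()[0] || v > c.getValue()[1]) {
                return false; // attribute missing or out of range
            }
        }
        return true;
    }
}
```

A topic-based subscription would instead compare a single topic string; the per-attribute loop above is what gives attribute-based pub/sub its extra expressiveness.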

2.2 PROPOSED SYSTEM:

We propose a scalable and reliable matching service for content-based pub/sub in cloud computing environments, called SREM. Specifically, we focus on two problems: how to organize servers in the cloud computing environment to achieve scalable and reliable routing, and how to manage subscriptions and events to achieve parallel matching among these servers. Generally speaking, we provide the following contributions:

  • We propose a distributed overlay protocol, called SkipCloud, to organize servers in the cloud computing environment. SkipCloud enables subscriptions and events to be forwarded among brokers in a scalable and reliable manner. It is also easy to implement and maintain.

  • To achieve scalable and reliable event matching among multiple servers, we propose a hybrid multidimensional space partitioning technique, called HPartition. It allows similar subscriptions to be divided into the same server and provides multiple candidate matching servers for each event. Moreover, it adaptively alleviates hot spots and keeps workload balance among all servers.
  • We implement extensive experiments based on a CloudStack testbed to verify the performance of SREM under various parameter settings.
  • In order to take advantage of multiple distributed brokers, SREM divides the entire content space among the top clusters of SkipCloud, so that each top cluster only handles a subset of the entire space and searches a small number of candidate subscriptions. SREM employs a hybrid multidimensional space partitioning technique, called HPartition, to achieve scalable and reliable event matching.


2.2.1 ADVANTAGES:

To achieve reliable connectivity and low routing latency, these brokers are connected through a distributed overlay, called SkipCloud. The entire content space is partitioned into disjoint subspaces, each of which is managed by a number of brokers. Subscriptions and events are dispatched to the subspaces that are overlapping with them through SkipCloud.

Since the pub/sub system needs to find all the matched subscribers, it requires each event to be matched in all datacenters, which leads to large traffic overhead with the increasing number of datacenters and the increasing arrival rate of live content.

Besides, it is hard to achieve workload balance among the servers of all datacenters due to the various skewed distributions of users’ interests. Another question is why we need a distributed overlay like SkipCloud to ensure reliable logical connectivity in a datacenter environment, where servers are more stable than the peers in P2P networks.

This is because, as the number of servers in datacenters increases, node failure becomes the norm rather than a rare exception. Node failures may lead to unreliable and inefficient routing among servers. To this end, we organize servers into SkipCloud to reduce routing latency in a scalable and reliable manner.

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor                          –    Pentium IV
  • Speed                              –    1.1 GHz
  • RAM                                –    256 MB (min)
  • Hard Disk                          –    20 GB
  • Floppy Drive                       –    1.44 MB
  • Keyboard                           –    Standard Windows Keyboard
  • Mouse                              –    Two or Three Button Mouse
  • Monitor                            –    SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Windows 7
  • Front End                          :           Java JDK 1.7
  • Back End                           :           MySQL Server
  • Server                             :           Apache Tomcat Server
  • Script                             :           JSP Script
  • Document                           :           MS-Office 2007

CHAPTER 3

3.0 SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

  • The data flow diagram (DFD), also called a bubble chart, is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on that data, and the output data generated by the system.
  • The DFD is one of the most important modeling tools. It is used to model the system components: the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
  • The DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • A DFD may be used to represent a system at any level of abstraction, and may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations of data, which may be people, organizations, or other entities.

DATA STORE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures, or devices that produce data; the physical component is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.


3.1 ARCHITECTURE DIAGRAM

3.2 DATAFLOW DIAGRAM

PUBLISHER:

SUBSCRIBER:

UML DIAGRAMS:

3.3 USE CASE DIAGRAM:

PUBLISHER:

SUBSCRIBER:

3.4 CLASS DIAGRAM:

PUBLISHER:

SUBSCRIBER:

3.5 SEQUENCE DIAGRAM:

PUBLISHER:

SUBSCRIBER:

3.6 ACTIVITY DIAGRAM:

PUBLISHER:

SUBSCRIBER:

CHAPTER 4

4.0 IMPLEMENTATION:

HPARTITION & SREM

To evaluate the performance of SkipCloud, we implement both SkipCloud and Chord to forward subscriptions and messages. To evaluate the performance of HPartition, the prototype supports different space partitioning policies. Moreover, the prototype provides three different message forwarding strategies, i.e., least-subscription-amount forwarding, random forwarding, and probability-based forwarding. In order to take advantage of multiple distributed brokers, SREM divides the entire content space among the top clusters of SkipCloud, so that each top cluster only handles a subset of the entire space and searches a small number of candidate subscriptions.

SREM employs a hybrid multidimensional space partitioning technique, called HPartition, to achieve scalable and reliable event matching. Generally speaking, HPartition divides the entire content space into disjoint subspaces (Section 4.1). Subscriptions and events with overlapping subspaces are dispatched and matched on the same top cluster of SkipCloud (Sections 4.2 and 4.3). To keep workload balance among servers, HPartition divides the hot spots into multiple cold spots in an adaptive manner (Section 4.4). Table 2 shows key notations used in this section.

SREM

In SREM, there are mainly three roles: clients, brokers, and clusters, and the brokers are responsible for managing all of them. Since the joining or leaving of these roles may lead to inefficient and unreliable data dissemination, we discuss the dynamics maintenance mechanisms used by brokers in this section.

SUBSCRIBER DYNAMICS

To detect the status of subscribers, each subscriber establishes affinity with a broker (called its home broker) and periodically sends its subscription as a heartbeat message to the home broker. The home broker maintains a timer for each of its buffered subscriptions. If the broker has not received a heartbeat message from a subscriber for more than Tout time, the subscriber is assumed to be offline. The home broker then removes the subscription from its buffer and notifies the brokers holding the failed subscription to remove it as well.
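The home broker's liveness check can be sketched as follows. This is a minimal illustration of the mechanism described above, with class and method names of our own choosing: each heartbeat refreshes a timestamp, and a periodic sweep expires subscriptions that have been silent for more than Tout.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Sketch of a home broker tracking subscriber liveness via heartbeats.
public class HomeBroker {
    private final Map<String, Long> lastHeartbeat = new HashMap<>();
    private final long timeoutMillis; // T_out

    public HomeBroker(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Each heartbeat carries the subscription; here we only refresh its timer.
    public void onHeartbeat(String subscriberId, long nowMillis) {
        lastHeartbeat.put(subscriberId, nowMillis);
    }

    // Remove and return subscribers silent for more than T_out. In SREM the
    // home broker would additionally notify the brokers holding the failed
    // subscription so they can remove it too.
    public List<String> sweep(long nowMillis) {
        List<String> expired = new ArrayList<>();
        Iterator<Map.Entry<String, Long>> it = lastHeartbeat.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (nowMillis - e.getValue() > timeoutMillis) {
                expired.add(e.getKey());
                it.remove();
            }
        }
        return expired;
    }
}
```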

BROKER DYNAMICS

Broker dynamics may lead to new clusters joining or old clusters leaving. In this section, we mainly consider brokers joining or leaving existing clusters, rather than changes in cluster size. When a new broker is created by its datacenter management service, it first sends a “Broker Join” message to the leader broker in its top cluster. The leader broker returns its top cluster identifier, the neighbor lists of all levels of SkipCloud, and all subspaces including the corresponding subscriptions. The new broker generates its own identifier by appending a b-ary number to its top cluster identifier and takes the received items of each level as its initial neighbors.

No special mechanism is needed to handle broker departure from a cluster. In the top cluster, the leader broker can easily monitor the status of the other brokers. For the clusters at the remaining levels, the sampling service guarantees that the older items of each neighbor list are the first to be replaced by fresh ones during the view shuffling operation, so failed brokers are removed from the system quickly. From the perspective of event matching, all brokers in the same top cluster hold the same subspaces of subscriptions, which means that a broker failure does not interrupt the event matching operation as long as at least one broker in each cluster remains alive.

CLUSTER DYNAMICS

Broker dynamics may lead to new clusters joining or old clusters leaving. Since each subspace is managed by the top cluster whose identifier is closest to that of the subspace, it is necessary to adaptively migrate a number of subspaces from old clusters to newly joined clusters. Specifically, the leader broker of the new cluster delivers its top ClusterID, carried in a “Cluster Join” message, to the other clusters. The leader brokers of all other clusters find the subspaces whose identifiers are closer to the new ClusterID than to their own cluster identifiers, and migrate them to the new cluster.

Since each subspace is stored in one cluster, a cluster departure incurs subscription loss. The peer sampling service of SkipCloud can be used to detect failed clusters. To recover lost subscriptions, a simple method is to redirect the lost subscriptions via their owners’ heartbeat messages. Due to the unreliable links between subscribers and brokers, however, this approach may lead to long repair latency. To this end, we store all subscriptions on a number of well-known servers of the datacenters. When these servers detect the failed clusters, they dispatch the subscriptions in those failed clusters to the corresponding live clusters.

4.1 ALGORITHM

PREFIX ROUTING ALGORITHM

Prefix routing in SkipCloud is mainly used to efficiently route subscriptions and events to the top clusters. Note that the cluster identifiers at level i + 1 are generated by appending one b-ary digit to the corresponding cluster identifiers at level i. This relation between cluster identifiers is the foundation of routing to target clusters. Briefly, when receiving a routing request for a specific cluster, a broker examines its neighbor lists at all levels and chooses the neighbor sharing the longest common prefix with the target ClusterID as the next hop. The routing operation repeats until a broker cannot find a neighbor whose identifier is closer to the target than its own. Algorithm 2 describes the prefix routing algorithm in pseudo-code.
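The next-hop rule just described can be sketched directly: among the known neighbors, pick the one sharing the longest common identifier prefix with the target ClusterID, and stop when no neighbor is strictly better than the current broker. The class and method names below are ours, not from the SREM paper or its Algorithm 2.

```java
import java.util.List;

// Sketch of SkipCloud-style prefix routing: greedily hop to the neighbor
// whose identifier shares the longest common prefix with the target.
public class PrefixRouter {
    static int commonPrefixLen(String a, String b) {
        int n = Math.min(a.length(), b.length()), i = 0;
        while (i < n && a.charAt(i) == b.charAt(i)) i++;
        return i;
    }

    // Returns the next hop, or null when no neighbor improves on the
    // current broker (i.e., routing terminates here).
    static String nextHop(String selfId, List<String> neighbors, String targetId) {
        int best = commonPrefixLen(selfId, targetId);
        String next = null;
        for (String nb : neighbors) {
            int len = commonPrefixLen(nb, targetId);
            if (len > best) { // strictly closer to the target than we are
                best = len;
                next = nb;
            }
        }
        return next;
    }
}
```

Because each level of SkipCloud extends identifiers by one b-ary digit, every hop that lengthens the common prefix descends one level closer to the target top cluster.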

4.2 MODULES:

CONTENT-BASED (PUB/SUB):

KEY GENERATION (PUB/SUB):

CONTENT SPACE PARTITIONING:

SREM SCALABILITY/RELIABILITY:

4.3 MODULE DESCRIPTION:

CONTENT-BASED (PUB/SUB):

In content-based pub/sub systems in the cloud computing environment, SREM connects the brokers through a distributed overlay, SkipCloud, which ensures reliable connectivity among brokers through its multi-level clusters and achieves low routing latency through a prefix routing algorithm. Through a hybrid multi-dimensional space partitioning technique, SREM achieves scalable and balanced clustering of high-dimensional skewed subscriptions, and each event is allowed to be matched on any of its candidate servers. For routing events from publishers to the relevant subscribers, we use the content-based data model. We consider pub/sub in a setting where there exists no dedicated broker infrastructure: publishers and subscribers contribute as peers to the maintenance of a self-organizing overlay structure. To authenticate publishers, we use the concept of advertisements, in which a publisher announces beforehand the set of events it intends to publish.

KEY GENERATION (PUB/SUB):

Recently, a number of cloud providers have offered a series of pub/sub services, for instance, services that provide highly available key-value storage and event matching based on one-hop lookup, adopt a single-dimensional partitioning technique to divide the entire space, and use a performance-aware forwarding scheme to select a candidate matcher for each event. Publisher keys: before starting to publish events, a publisher contacts the key server with the credentials for each attribute in its advertisement. If the publisher is allowed to publish events according to its credentials, the key server generates a separate private key for each credential; the public key of a publisher p for a credential is generated likewise. Subscriber keys: similarly, to receive events matching its subscription, a subscriber contacts the key server and receives the private keys for the credentials associated with each attribute A.
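The passage above leaves the key server's derivation scheme unspecified, so the following is a purely illustrative sketch, not the scheme of any cited system: the class name, the master-secret model, and the use of HMAC-SHA256 are all our assumptions. It shows one common way a key server could derive a distinct, deterministic key per (principal, credential) pair.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// Hypothetical key server deriving one key per attribute credential from a
// single master secret via HMAC-SHA256 (a standard key-derivation pattern).
public class CredentialKeyServer {
    private final byte[] masterSecret;

    public CredentialKeyServer(byte[] masterSecret) {
        this.masterSecret = masterSecret;
    }

    // Deterministic: the same (principal, credential) pair always yields the
    // same key; distinct credentials yield independent-looking keys.
    public byte[] keyFor(String principal, String credential) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(masterSecret, "HmacSHA256"));
            return mac.doFinal(
                (principal + "|" + credential).getBytes(StandardCharsets.UTF_8));
        } catch (java.security.GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```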

CONTENT SPACE PARTITIONING:

To achieve scalable and reliable event matching among multiple servers, we propose a hybrid multidimensional space partitioning technique, called HPartition. It allows similar subscriptions to be placed on the same server and provides multiple candidate matching servers for each event. Moreover, it adaptively alleviates hot spots and keeps the workload balanced among all servers. Since SREM utilizes multiple distributed clusters, a better solution is to balance the workloads among clusters by partitioning and migrating hot spots. The gain of the partitioning technique is greatly affected by the distribution of the subscriptions of the hot spot. To this end, HPartition divides each hot spot into a number of cold spots through two partitioning techniques: hierarchical subspace partitioning and subscription set partitioning. The first aims to partition hot spots whose subscriptions are diffused over the whole space, and the second aims to partition hot spots whose subscriptions fall into a narrow space.
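The hot-spot splitting idea can be illustrated with a deliberately simplified one-dimensional sketch (HPartition operates on multi-dimensional subspaces; all names here are ours, and Java 16+ is assumed for `record`): a hot subspace is halved, and each subscription is re-dispatched to every half it overlaps, so a subscription straddling the split point appears in both resulting cold spots.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified 1-D sketch of splitting a hot subspace into two cold spots.
public class HotSpotSplitter {
    // A subscription constraining one attribute to [low, high].
    record Interval(double low, double high) {
        boolean overlaps(double lo, double hi) {
            return low <= hi && high >= lo;
        }
    }

    // Split subspace [lo, hi] at its midpoint; a subscription overlapping
    // both halves is dispatched to both (mirroring HPartition's re-dispatch).
    static List<List<Interval>> split(List<Interval> subs, double lo, double hi) {
        double mid = (lo + hi) / 2;
        List<Interval> left = new ArrayList<>(), right = new ArrayList<>();
        for (Interval s : subs) {
            if (s.overlaps(lo, mid)) left.add(s);   // first cold spot
            if (s.overlaps(mid, hi)) right.add(s);  // second cold spot
        }
        return List.of(left, right);
    }
}
```

Repeating this split on whichever half still exceeds the hot-spot threshold gives the hierarchical flavor of the first technique described above.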

SREM SCALABILITY/RELIABILITY:

The scalability and reliability of SREM when matching large-scale live content under highly dynamic environments mainly stem from addressing the following limitations of existing systems:

1) Most of them are inappropriate for matching live content with high data dimensionality, due to the limitations of their subscription space partitioning techniques, which bring either low matching throughput or high memory overhead.

2) These systems adopt the one-hop lookup technique among servers to reduce routing latency. In spite of its high efficiency, it requires each dispatching server to have the same view of the matching servers. Otherwise, subscriptions or events may be assigned to the wrong matching servers, which causes availability problems in the face of concurrent joining or crashing of matching servers. A number of schemes can be used to keep the view consistent, such as periodically sending heartbeat messages to the dispatching servers or exchanging messages among the matching servers. However, these extra schemes may bring large traffic overhead or interrupt the event matching service.

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out, to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are 

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:     

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

 

5.1.2 TECHNICAL FEASIBILITY   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

5.1.3 SOCIAL FEASIBILITY:  

This aspect of the study checks the level of acceptance of the system by the user. It includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends on the methods employed to educate the user about the system and to make the user familiar with it. The user's level of confidence must be raised so that he or she is also able to offer constructive criticism, which is welcomed, since he or she is the final user of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes the logical assumption that if all parts of the system are correct, the overall goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.

This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best programs are worthless if they do not produce correct outputs.

5.2.1 UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or an omitted keyword are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

5.2.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for various peers in a distributed network framework, as it displays all users available in the group.
Expected result: The result after execution should be accurate.


5.2.3 NON-FUNCTIONAL TESTING:

Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.2.4 LOAD TESTING:

An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under real usage by having actual users connected to it. They generate the test input data for the system test.

Description: It is necessary to ascertain that the application behaves correctly under load when a ‘Server busy’ response is received.
Expected result: Should designate another active node as the server.


5.2.5 PERFORMANCE TESTING:

Performance tests are used to determine the broadly defined performance of the software system, such as the execution time associated with various parts of the code, the response time, and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time, and using an acceptable level of resources; this is an aspect of operational management.
Expected result: Should handle large input values and produce accurate results in the expected time.


5.2.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and this is what is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms a part of software quality control.

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.


5.2.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

Description: Checking that the user identification is authenticated.
Expected result: In case of failure, it should not be connected in the framework.

Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key of their own group.


5.2.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases that focus on the inner structure of the software to be tested.

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.


5.2.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques; rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.

Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.

Description: To check for interface errors.
Expected result: The entire interface must function normally.

Description: To check for errors in data structures or external database access.
Expected result: Database update and retrieval must be performed correctly.

Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.

All of the above system testing strategies are carried out during development, since the documentation and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE DESCRIPTION:

 

6.1 JAVA TECHNOLOGY:

Java technology is both a programming language and a platform.

 

The Java Programming Language

 

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

  • Simple
  • Architecture neutral
  • Object oriented
  • Portable
  • Distributed
  • High performance
  • Interpreted
  • Multithreaded
  • Robust
  • Dynamic
  • Secure

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes —the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make “write once, run anywhere” possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.

6.2 THE JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. We’ve already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

  • The Java Virtual Machine (Java VM)
  • The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that, once compiled, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

6.3 WHAT CAN JAVA TECHNOLOGY DO?

The most common types of programs written in the Java programming language are applets and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using its extensive API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet.

A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.

How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:

  • The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
  • Applets: The set of conventions used by applets.
  • Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
  • Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
  • Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
  • Software components: Known as JavaBeans™, these can plug into existing component architectures.
  • Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
  • Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational databases.
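The object-serialization feature listed above can be shown in a few lines. This sketch (an illustration, not project code) writes an object to a byte stream with `ObjectOutputStream` and reads an equal copy back, the same mechanism RMI uses to move arguments across a network:

```java
import java.io.*;

public class SerializationDemo {
    // Serialize an object to an in-memory byte stream, then read it back.
    static Object roundTrip(Serializable obj) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(obj);          // lightweight persistence
        }
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            return in.readObject();        // reconstruct an equal object
        }
    }

    public static void main(String[] args) throws Exception {
        Object copy = roundTrip("write once, run anywhere");
        System.out.println(copy.equals("write once, run anywhere")); // true
    }
}
```

Replacing the byte-array streams with file or socket streams gives persistence or network communication with no change to the serialization calls.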

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.


6.4 HOW WILL JAVA TECHNOLOGY CHANGE MY LIFE?

We can’t promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better, and it requires less effort than other languages. We believe that Java technology will help you do the following:

  • Get started quickly: Although the Java programming language is a powerful object-oriented language, it’s easy to learn, especially for programmers already familiar with C or C++.
  • Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
  • Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code and introduce fewer bugs.
  • Develop programs more quickly: You may develop programs up to twice as fast as you would in C++. Why? You write fewer lines of code, and Java is a simpler programming language than C++.
  • Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure Java™ Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
  • Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
  • Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded “on the fly,” without recompiling the entire program.


6.5 ODBC:


Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program, and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

6.6 JDBC:

In an effort to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90-day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.
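The canonical JDBC call pattern is short enough to show here. In this illustrative sketch, the `SalesFigures` data source (borrowed from the ODBC discussion above), the `customers` table, and the `jdbc:odbc` subprotocol are all hypothetical; without a matching driver installed, `getConnection` throws `SQLException` and the catch branch runs instead:

```java
import java.sql.*;

public class JdbcSketch {
    // JDBC URLs have the form jdbc:<subprotocol>:<subname>.
    static String url(String subprotocol, String subname) {
        return "jdbc:" + subprotocol + ":" + subname;
    }

    public static void main(String[] args) {
        // Hypothetical data source; substitute a real driver and URL in practice.
        String dbUrl = url("odbc", "SalesFigures");
        try (Connection con = DriverManager.getConnection(dbUrl);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM customers")) {
            while (rs.next()) {
                System.out.println(rs.getString("name"));  // one row per iteration
            }
        } catch (SQLException e) {
            // Reached when no driver accepts the URL, or the query fails.
            System.out.println("No driver available for " + dbUrl);
        }
    }
}
```

Note that the `Connection`/`Statement`/`ResultSet` calls are identical whatever database sits behind the driver, which is exactly the vendor independence JDBC inherits from ODBC.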


6.7 JDBC Goals:

Few software packages are designed without goals in mind. JDBC is no exception: its many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows for more error checking to be done at compile time; consequently, fewer errors appear at runtime.

Keep the common cases simple

Because, more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.

Finally, we decided to proceed with the implementation using Java Networking.

For dynamically updating the cache table, we use a Microsoft Access database.

6.8 NETWORKING TCP/IP STACK:

The TCP/IP stack is shorter than the OSI stack.

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams:

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP:

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram, and port numbers. These are used to support the client/server model described later.
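The UDP behaviour just described (independent datagrams addressed by port number, with no delivery guarantee) can be exercised over the loopback interface in a few lines of Java. This is an illustrative sketch, not part of the project code; the timeout acknowledges that UDP gives no guarantee the datagram arrives:

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

public class UdpDemo {
    // Send one datagram to a local receiver and return the text it carried.
    static String loopback(String msg) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket();   // OS picks a free port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] buf = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receiver.setSoTimeout(2000);   // unreliable transport: don't block forever
            receiver.receive(packet);
            return new String(packet.getData(), 0, packet.getLength(),
                              StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loopback("ping"));
    }
}
```

Each datagram here stands alone; sending two messages would give no ordering guarantee, which is precisely what TCP adds in the next subsection.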

TCP:

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32-bit integer, which gives the IP address.

Network address:

Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class B uses 16-bit network addressing and Class C uses 24-bit network addressing. Class D addresses are reserved for multicast.

Subnet address:

Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.

Host address:

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total address:

The 32-bit address is usually written as four integers separated by dots.
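The dotted-quad notation above is just a rendering of the underlying 32-bit integer: each of the four numbers is one byte of the address. A small sketch (illustrative only) converts in both directions:

```java
public class Ipv4 {
    // Render a 32-bit IPv4 address as four dot-separated byte values.
    static String toDotted(int addr) {
        return ((addr >>> 24) & 0xFF) + "." + ((addr >>> 16) & 0xFF) + "."
             + ((addr >>> 8) & 0xFF) + "." + (addr & 0xFF);
    }

    // Parse a dotted quad back into a 32-bit integer.
    static int fromDotted(String s) {
        int addr = 0;
        for (String part : s.split("\\.")) {
            addr = (addr << 8) | Integer.parseInt(part);   // shift in one byte at a time
        }
        return addr;
    }

    public static void main(String[] args) {
        System.out.println(toDotted(0x7F000001));                  // 127.0.0.1
        System.out.println(fromDotted("192.168.0.1") == 0xC0A80001); // true
    }
}
```

The class of an address can be read off the same integer: the number of leading one bits before the first zero selects class A, B, C, or D.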

Port addresses

A service exists on a host and is identified by its port, a 16-bit number. To send a message to a server, you send it to the port for that service on the host that it is running on. This is not location transparency! Some of these ports are “well known”.

Sockets:

A socket is a data structure maintained by the system to handle network connections. A socket is created using the call socket. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);

Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe – but the actual pipe does not yet exist.
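In Java the same two-endpoint picture is expressed with `ServerSocket` and `Socket`, which hide the raw `socket()` call shown above. This illustrative sketch (not project code) creates both ends on the loopback interface and passes one line between them:

```java
import java.io.*;
import java.net.*;

public class TcpDemo {
    // Listen, connect to ourselves over loopback, and send one line across.
    static String echoOnce(String msg) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {     // port 0: OS assigns a port
            try (Socket client = new Socket(InetAddress.getLoopbackAddress(),
                                            server.getLocalPort());
                 Socket accepted = server.accept()) {         // server end of the circuit
                PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                out.println(msg);                             // client writes one line
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(accepted.getInputStream()));
                return in.readLine();                         // server reads it back
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(echoOnce("hello"));
    }
}
```

Once `accept` returns, the two `Socket` objects form the virtual circuit described in the TCP subsection: bytes written at one end arrive, in order, at the other.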

6.9 JFREE CHART:

JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart’s extensive feature set includes:

  • a consistent and well-documented API, supporting a wide range of chart types;
  • a flexible design that is easy to extend, and targets both server-side and client-side applications;
  • support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG).

JFreeChart is “open source” or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.


6.9.1. Map Visualizations:

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: Sourcing freely redistributable vector outlines for the countries of the world, states/provinces in particular countries (USA in particular, but also other areas);

Creating an appropriate dataset interface (plus default implementation), a renderer, and integrating this with the existing XYPlot class in JFreeChart; testing, documenting, testing some more, documenting some more.

6.9.2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts — to display a separate control that shows a small version of ALL the time series data, with a sliding “view” rectangle that allows you to select the subset of the time series data to display in the main chart.

6.9.3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.


6.9.4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.

CHAPTER 8

8.0 CONCLUSION & FUTURE WORK:

This paper introduces SREM, a scalable and reliable event matching service for content-based pub/sub systems in cloud computing environments. SREM connects the brokers through a distributed overlay, SkipCloud, which ensures reliable connectivity among brokers through its multi-level clusters and achieves low routing latency through a prefix routing algorithm. Through a hybrid multi-dimensional space partitioning technique, SREM achieves scalable and balanced clustering of high-dimensional skewed subscriptions, and each event is allowed to be matched on any of its candidate servers.

Extensive experiments with real deployment based on a CloudStack testbed are conducted, producing results which demonstrate that SREM is effective and practical, and that it presents good workload balance, scalability and reliability under various parameter settings. Although our proposed event matching service can efficiently filter out irrelevant users from big data volumes, there are still a number of problems we need to solve. Firstly, we do not provide elastic resource provisioning strategies in this paper to obtain a good performance-price ratio.

We plan to design and implement elastic strategies for adjusting the scale of servers based on churn workloads. Secondly, our design does not guarantee that the brokers disseminate large live content with various data sizes to the corresponding subscribers in a real-time manner. For the dissemination of bulk content, the upload capacity becomes the main bottleneck. Based on our proposed event matching service, we will consider utilizing a cloud-assisted technique to realize a general and scalable data dissemination service over live content with various data sizes.
CHAPTER 9

Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Computing

Abstract—As an effective and efficient way to provide computing resources and services to customers on demand, cloud computinghas become more and more popular. From cloud service providers’ perspective, profit is one of the most important considerations, andit is mainly determined by the configuration of a cloud service platform under given market demand. However, a single long-termrenting scheme is usually adopted to configure a cloud platform, which cannot guarantee the service quality but leads to seriousresource waste. In this paper, a double resource renting scheme is designed firstly in which short-term renting and long-term rentingare combined aiming at the existing issues. This double renting scheme can effectively guarantee the quality of service of all requestsand reduce the resource waste greatly. Secondly, a service system is considered as an M/M/m+D queuing model and the performanceindicators that affect the profit of our double renting scheme are analyzed, e.g., the average charge, the ratio of requests that needtemporary servers, and so forth. Thirdly, a profit maximization problem is formulated for the double renting scheme and the optimizedconfiguration of a cloud platform is obtained by solving the profit maximization problem. Finally, a series of calculations are conductedto compare the profit of our proposed scheme with that of the single renting scheme. The results show that our scheme can not onlyguarantee the service quality of all requests, but also obtain more profit than the latter.Index Terms—Cloud computing, guaranteed service quality, multiserver system, profit maximization, queuing model, service-levelagreement, waiting time.F1 INTRODUCTIONAS an effective and efficient way to consolidate computingresources and computing services, clouding computinghas become more and more popular [1]. Cloud computingcentralizes management of resources and services,and delivers hosted services over the Internet. 
The hardware,software, databases, information, and all resources areconcentrated and provided to consumers on-demand [2].Cloud computing turns information technology into ordinarycommodities and utilities by the the pay-per-use pricingmodel [3, 4, 5]. In a cloud computing environment, thereare always three tiers, i.e., infrastructure providers, servicesproviders, and customers (see Fig. 1 and its elaboration inSection 3.1). An infrastructure provider maintains the basichardware and software facilities. A service provider rentsresources from the infrastructure providers and providesservices to customers. A customer submits its request to aservice provider and pays for it based on the amount andthe quality of the provided service [6]. In this paper, weaim at researching the multiserver configuration of a serviceprovider such that its profit is maximized.Like all business, the profit of a service provider in cloudcomputing is related to two parts, which are the cost andthe revenue. For a service provider, the cost is the renting_ The authors are with the College of Information Science and Engineering,Hunan University, and National Supercomputing Center in Changsha,Hunan, China, 410082.E-mail: jingmei1988@163.com, lkl@hnu.edu.cn, oyaj@hnu.edu.cn,lik@newpaltz.edu._ Keqin Li is also with the Department of Computer Science, State Universityof New York, New Paltz, New York 12561, USA._ Kenli Li is the author for correspondence.Manuscript received ****, 2015; revised ****, 2015.cost paid to the infrastructure providers plus the electricitycost caused by energy consumption, and the revenue is theservice charge to customers. In general, a service providerrents a certain number of servers from the infrastructureproviders and builds different multiserver systems for differentapplication domains. Each multiserver system is toexecute a special type of service requests and applications.Hence, the renting cost is proportional to the number ofservers in a multiserver system [2]. 
The power consumptionof a multiserver system is linearly proportional to the numberof servers and the server utilization, and to the square ofexecution speed [7, 8]. The revenue of a service provider isrelated to the amount of service and the quality of service.To summarize, the profit of a service provider is mainlydetermined by the configuration of its service platform.To configure a cloud service platform, a service providerusually adopts a single renting scheme. That’s to say, theservers in the service system are all long-term rented. Becauseof the limited number of servers, some of the incomingservice requests cannot be processed immediately. Sothey are first inserted into a queue until they can handledby any available server. However, the waiting time of theservice requests cannot be too long. In order to satisfyquality-of-service requirements, the waiting time of eachincoming service request should be limited within a certainrange, which is determined by a service-level agreement(SLA). If the quality of service is guaranteed, the serviceis fully charged, otherwise, the service provider serves therequest for free as a penalty of low quality. To obtain higherrevenue, a service provider should rent more servers fromthe infrastructure providers or scale up the server executionspeed to ensure that more service requests are processedwith high service quality. However, doing this would lead to0018-9340 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. Seehttp://www.ieee.org/publications_standards/publications/rights/index.html for more information.This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI10.1109/TC.2015.2401021, IEEE Transactions on ComputersTRANSACTIONS ON COMPUTERS, VOL. *, NO. *, * 2015 2sharp increase of the renting cost or the electricity cost. 
Suchincreased cost may counterweight the gain from penaltyreduction. In conclusion, the single renting scheme is not agood scheme for service providers. In this paper, we proposea novel renting scheme for service providers, which notonly can satisfy quality-of-service requirements, but also canobtain more profit. Our contributions in this paper can besummarized as follows._ A novel double renting scheme is proposed forservice providers. It combines long-term rentingwith short-term renting, which can not only satisfyquality-of-service requirements under the varyingsystem workload, but also reduce the resource wastegreatly._ A multiserver system adopted in our paper is modeledas an M/M/m+D queuing model and the performanceindicators are analyzed such as the averageservice charge, the ratio of requests that need shorttermservers, and so forth._ The optimal configuration problem of serviceproviders for profit maximization is formulated andtwo kinds of optimal solutions, i.e., the ideal solutionsand the actual solutions, are obtained respectively._ A series of comparisons are given to verify the performanceof our scheme. The results show that theproposed Double-Quality-Guaranteed (DQG) rentingscheme can achieve more profit than the comparedSingle-Quality-Unguaranteed (SQU) rentingscheme in the premise of guaranteeing the servicequality completely.The rest of the paper is organized as follows. Section 2reviews the related work on profit aware problem in cloudcomputing. Section 3 presents the used models, includingthe three-tier cloud computing model, the multiserver systemmodel, the revenue and cost models. Section 4 proposesour DQG renting scheme and formulates the profitoptimization problem. Section 5 introduces the methods offinding the optimal solutions for the profit optimizationproblem in two scenarios. Section 6 demonstrates the performanceof the proposed scheme through comparison with thetraditional SQU renting scheme. 
Finally, Section 7 concludesthe work.2 RELATED WORKIn this section, we review recent works relevant to the profitof cloud service providers. Profit of service providers isrelated with many factors such as the price, the marketdemand, the system configuration, the customer satisfactionand so forth. Service providers naturally wish to set a higherprice to get a higher profit margin; but doing so woulddecrease the customer satisfaction, which leads to a risk ofdiscouraging demand in the future. Hence, selecting a reasonablepricing strategy is important for service providers.The pricing strategies are divided into two categories,i.e., static pricing and dynamic pricing. Static pricing meansthat the price of a service request is fixed and knownin advance, and it does not change with the conditions.With dynamic pricing a service provider delays the pricingdecision until after the customer demand is revealed, so thatthe service provider can adjust prices accordingly [9]. Staticpricing is the dominant strategy which is widely used inreal world and in research [2, 10, 11]. Ghamkhari et al. [11]adopted a flat-rate pricing strategy and set a fixed price forall requests, but Odlyzko in [12] argued that the predominantflat-rate pricing encourages waste and is incompatiblewith service differentiation. Another kind of static pricingstrategies are usage-based pricing. For example, the priceof a service request is proportional to the service time andtask execution requirement (measured by the number ofinstructions to be executed) in [10] and [2], respectively.Usage-based pricing reveals that one can use resources moreefficiently [13, 14].Dynamic pricing emerges as an attractive alternativeto better cope with unpredictable customer demand [15].Mac´ıas et al. [16] used a genetic algorithm to iterativelyoptimize the pricing policy. Amazon EC2 [17, 18] has introduceda ”spot pricing” feature, where the spot price fora virtual instance is dynamically updated to match supplyand demand. 
However, consumers dislike prices to change,especially if they perceive the changes to be ”unfair” [19, 20].After comparison, we select the usage-based pricing strategyin this paper since it agrees with the concept of cloudcomputing mostly.The second factor affecting the profit of service providersis customer satisfaction which is determined by the qualityof service and the charge. In order to improve the customersatisfaction level, there is a service-level agreement (SLA)between a service provider and the customers. The SLAadopts a price compensation mechanism for the customerswith low service quality. The mechanism is to guaranteethe service quality and the customer satisfaction so thatmore customers are attracted. In previous research, differentSLAs are adopted. Ghamkhari et al. [11] adopted a stepwisecharge function with two stages. If a service request ishandled before its deadline, it is normally charged; butif a service request is not handled before its deadline, itis dropped and the provider pays for it due to penalty.In [2, 10, 21], charge is decreased continuously with theincreasing waiting time until the charge is free. In thispaper, we use a two-step charge function, where the servicerequests served with high quality are normally charged,otherwise, are served for free.Since profit is an important concern to cloud serviceproviders, many works have been done on how to boosttheir profit. A large body of works have recently focusedon reducing the energy cost to increase profit of serviceproviders [22, 23, 24, 25], and the idle server turning offstrategy and dynamic CPU clock frequency scaling are adoptedto reduce energy cost. However, only reducing energycost cannot obtain profit maximization. Many researchersinvestigated the trade-off between minimizing cost andmaximizing revenue to optimize profit. 
Both [11] and [26] adjusted the number of switched-on servers periodically using different strategies, and built different profit maximization models to determine that number. However, these works did not consider the cost of resource configuration.

Chiang and Ouyang [27] modeled a cloud server system as an M/M/R/K queuing system in which all service requests that exceed its maximum capacity are rejected. A profit maximization function is defined to find an optimal combination of the server size R and the queue capacity K such that the profit is maximized. However, this strategy has implications beyond merely losing the revenue from some services: it also implies loss of reputation and therefore loss of future customers [3]. In [2], Cao et al. treated a cloud service platform as an M/M/m model, and formulated and solved the problem of optimal multiserver configuration for profit maximization. That work is the most relevant to ours, but it adopts a single renting scheme to configure a multiserver system, which cannot adapt to the varying market demand and leads to low service quality and great resource waste. To overcome this weakness, another resource management strategy, cloud federation, is used in [28, 29, 30, 31]. Using federation, different providers running services that have complementary resource requirements over time can mutually collaborate to share their respective resources in order to fulfill each other's demand [30].

0018-9340 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information. This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TC.2015.2401021, IEEE Transactions on Computers.
However, providers should make an intelligent decision about utilization of the federation (either as a contributor or as a consumer of resources) depending on the different conditions they might face, which is a complicated problem.

In this paper, to overcome the shortcomings mentioned above, a double renting scheme is designed to configure a cloud service platform, which can guarantee the service quality of all requests and greatly reduce resource waste. Moreover, a profit maximization problem is formulated and solved to obtain the optimal multiserver configuration, which can produce more profit than the optimal configuration in [2].

3 THE MODELS

In this section, we first describe the three-tier cloud computing structure. Then, we introduce the related models used in this paper, including a multiserver system model, a revenue model, and a cost model.

3.1 A Cloud System Model

The cloud structure (see Fig. 1) consists of three typical parties, i.e., infrastructure providers, service providers, and customers. This three-tier structure is commonly used in the existing literature [2, 6, 10].

Fig. 1: The three-tier cloud structure.

In the three-tier structure, an infrastructure provider maintains the basic hardware and software facilities. A service provider rents resources from infrastructure providers and prepares a set of services in the form of virtual machines (VMs). Infrastructure providers offer two kinds of resource renting schemes, i.e., long-term renting and short-term renting; in general, the rental price of long-term renting is much cheaper than that of short-term renting. A customer submits a service request to a service provider, which delivers services on demand. The customer receives the desired result from the service provider under a certain service-level agreement, and pays for the service based on the amount of the service and the service quality.
Service providers pay infrastructure providers for renting their physical resources, and charge customers for processing their service requests; the former generates cost and the latter generates revenue. The profit is the gap between the revenue and the cost.

3.2 A Multiserver Model

In this paper, we consider the cloud service platform as a multiserver system with a service request queue. Fig. 2 gives the schematic diagram of cloud computing [32].

Fig. 2: The schematic diagram of cloud computing.

In an actual cloud computing platform such as Amazon EC2, IBM Blue Cloud, or a private cloud, many work nodes are managed by cloud managers such as Eucalyptus, OpenNebula, and Nimbus. The cloud provides resources for jobs in the form of virtual machines (VMs). Users submit their jobs to the cloud, in which a job queuing system such as SGE, PBS, or Condor is used. All jobs are scheduled by the job scheduler and assigned to different VMs in a centralized way; hence, we can consider them as a service request queue. For example, Condor is a specialized workload management system for compute-intensive jobs: it provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. Users submit their jobs to Condor, Condor places them into a queue, and it chooses when and where to run them based upon a policy [33, 34]. Hence, it is reasonable to abstract a cloud service platform as a multiserver model with a service request queue, and this model is widely adopted in the existing literature [2, 11, 35, 36, 37].

In the three-tier structure, a cloud service provider serves customers' service requests by using a multiserver system
which is rented from an infrastructure provider.

Fig. 3: The multiserver system model, where service requests are first placed in a queue before they are processed by any servers.

Assume that the multiserver system consists of $m$ long-term rented identical servers, and that it can be scaled up by temporarily renting short-term servers from infrastructure providers. The servers in the system have identical execution speed $s$ (unit: billion instructions per second). In this paper, a multiserver system excluding the short-term servers is modeled as an M/M/m queuing system as follows (see Fig. 3). There is a Poisson stream of service requests with arrival rate $\lambda$, i.e., the interarrival times are independent and identically distributed (i.i.d.) exponential random variables with mean $1/\lambda$. The multiserver system maintains a queue with infinite capacity. When incoming service requests cannot be processed immediately after they arrive, they are first placed in the queue until they can be handled by an available server. The first-come-first-served (FCFS) queuing discipline is adopted. The task execution requirements (measured by the number of instructions) are i.i.d. exponential random variables $r$ with mean $\bar{r}$ (unit: billion instructions). Therefore, the execution times of tasks on the multiserver system are also i.i.d. exponential random variables $x = r/s$ with mean $\bar{x} = \bar{r}/s$ (unit: second). The average service rate of each server is $\mu = 1/\bar{x} = s/\bar{r}$, and the system utilization is defined as $\rho = \lambda/(m\mu) = \lambda\bar{r}/(ms)$.

Because the fixed computing capacity of the service system is limited, some requests may wait a long time before they are served. According to queuing theory, we have the following theorem about the waiting time in an M/M/m queuing system.

Theorem 3.1. The cumulative distribution function (cdf) of the waiting time $W$ of a service request is
$$F_W(t) = 1 - \frac{\pi_m}{1-\rho}\,e^{-m\mu(1-\rho)t}, \qquad (1)$$
where
$$\pi_m = \frac{(m\rho)^m}{m!}\left[\sum_{k=0}^{m-1}\frac{(m\rho)^k}{k!} + \frac{(m\rho)^m}{m!(1-\rho)}\right]^{-1}.$$

Proof 3.1. It is known that the probability density function (pdf) of the waiting time $W$ of a service request is
$$f_W(t) = (1 - P_q)u(t) + m\mu\pi_m e^{-(1-\rho)m\mu t},$$
where $P_q = \pi_m/(1-\rho)$ and $u(t)$ is a unit impulse function [2, 38]. Then $F_W(t)$ is obtained by straightforward calculation.

3.3 Revenue Modeling

The revenue model is determined by the pricing strategy and the service-level agreement (SLA). In this paper, the usage-based pricing strategy is adopted, since cloud computing provides services to customers and charges them on demand. The SLA is a negotiation between service providers and customers on the service quality and the price. Because the number of servers is limited, service requests that cannot be handled immediately after entering the system must wait in the queue until a server is available. However, to satisfy the quality-of-service requirements, the waiting time of each service request should be limited within a certain range, which is determined by the SLA. The SLA is widely used by many types of businesses, and it adopts a price compensation mechanism to guarantee service quality and customer satisfaction. For example, China Post gives a service time commitment for domestic express mail: it promises that if a domestic express mail does not arrive within the deadline, the mailing charge will be refunded. The SLA is also adopted by many real-world cloud service providers such as Rackspace [39], Joyent [40], and Microsoft Azure [41]. Taking Joyent as an example, customers order Smart Machines, Smart Appliances, and/or Virtual Machines from Joyent, and if the availability of a customer's services is less than 100%, Joyent credits the customer 5% of the monthly fee for each 30 minutes of downtime, up to 100% of the customer's monthly fee for the affected server.
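For concreteness, Theorem 3.1 translates directly into code. The following Python sketch (my own illustration, valid only for $\rho < 1$) evaluates $\pi_m$ from its definition and then $F_W(t)$:

```python
import math

def pi_m(m, rho):
    # pi_m from Theorem 3.1: the probability that an arriving request
    # finds all m servers busy with an empty queue (Erlang-C building block).
    a = m * rho  # offered load, m*rho = lambda/mu
    head = sum(a**k / math.factorial(k) for k in range(m))
    tail = a**m / (math.factorial(m) * (1.0 - rho))
    return (a**m / math.factorial(m)) / (head + tail)

def waiting_time_cdf(t, lam, mu, m):
    # Eq. (1): F_W(t) = 1 - pi_m/(1-rho) * exp(-m*mu*(1-rho)*t).
    rho = lam / (m * mu)
    return 1.0 - pi_m(m, rho) / (1.0 - rho) * math.exp(-m * mu * (1.0 - rho) * t)
```

Note that `1 - waiting_time_cdf(D, lam, mu, m)` is the probability that a request waits longer than a deadline `D`, which is the quantity the revenue model below builds on.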
The only difference is that Joyent's performance metric is availability, whereas ours is waiting time.

In this paper, the service level is reflected by the waiting time of requests. Hence, we define $D$ as the maximum waiting time that service requests can tolerate; in other words, $D$ is their deadline. The service charge for each task is related to the amount of the service and the service-level agreement. We define the service charge function for a service request with execution requirement $r$ and waiting time $W$ in Eq. (2):
$$R(r, W) = \begin{cases} ar, & 0 \le W \le D; \\ 0, & W > D, \end{cases} \qquad (2)$$
where $a$ is a constant indicating the price per one billion instructions (unit: cents per one billion instructions). When a service request starts its execution before waiting a fixed time $D$ (unit: second), the service provider considers the request to be processed with high quality of service and charges the customer $ar$. If the waiting time of a service request exceeds the deadline $D$, the service provider must serve it for free. Similar revenue models have been used in much existing research such as [2, 11, 42].

According to Theorem 3.1, it is easy to see that the probability that the waiting time of a service request exceeds its deadline $D$ is
$$P(W \ge D) = 1 - F_W(D) = \frac{\pi_m}{1-\rho}\,e^{-m\mu(1-\rho)D}. \qquad (3)$$

3.4 Cost Modeling

The cost of a service provider consists of two major parts, i.e., the rental cost of physical resources and the utility cost of energy consumption. Much existing research, such as [11, 43, 44], considers only the power consumption cost. As a major difference between those models and ours, the
resource rental cost is considered in this paper as well, since it is a major part of what affects the profit of service providers. A similar cost model is adopted in [2]. Resources can be rented in two ways, long-term renting and short-term renting, and the rental price of long-term renting is much cheaper than that of short-term renting; this is reasonable and common in real life. In this paper, we assume that the long-term rental price of one server per unit of time is $\beta$ (unit: cents per second) and the short-term rental price of one server per unit of time is $\gamma$ (unit: cents per second), where $\beta < \gamma$.

The cost of energy consumption is determined by the electricity price and the amount of energy consumed. In this paper, we adopt the following dynamic power model, which is used in the literature such as [2, 7, 45, 46]:
$$P_d = N_{sw} C_L V^2 f, \qquad (4)$$
where $N_{sw}$ is the average gate switching factor at each clock cycle, $C_L$ is the loading capacitance, $V$ is the supply voltage, and $f$ is the clock frequency [45]. In the ideal case, the relationship between the clock frequency $f$ and the supply voltage $V$ is $V \propto f^{\phi}$ for some constant $\phi > 0$ [46]. The server execution speed $s$ is linearly proportional to the clock frequency $f$, namely $s \propto f$. Hence, the power consumption is $P_d \propto N_{sw} C_L s^{2\phi+1}$. For ease of discussion, we assume that $P_d = b N_{sw} C_L s^{2\phi+1} = \xi s^{\alpha}$, where $\xi = b N_{sw} C_L$ and $\alpha = 2\phi + 1$. In this paper, we set $N_{sw} C_L = 7.0$, $b = 1.3456$, and $\phi = 0.5$; hence, $\alpha = 2.0$ and $\xi = 9.4192$. The power consumption calculated by $P_d = \xi s^{\alpha}$ is close to that of the Intel Pentium M processor [47]. It is reasonable to assume that a server still consumes some amount of static power [8], denoted $P^*$ (unit: Watt), when it is idle. For a busy server, the average amount of energy consumption per unit of time is $P = \xi s^{\alpha} + P^*$ (unit: Watt). Assume that the price of energy is $\delta$ (unit: cents per Watt).

4 A QUALITY-GUARANTEED SCHEME

The traditional single resource renting scheme cannot guarantee the quality of all requests and wastes a great amount of resources due to the uncertainty of the system workload. To overcome this weakness, we propose a double renting scheme that not only guarantees the quality of service completely but also greatly reduces resource waste.

4.1 The Proposed Scheme

In this section, we first propose the Double-Quality-Guaranteed (DQG) resource renting scheme, which combines long-term renting with short-term renting. The main computing capacity is provided by the long-term rented servers due to their low price; the short-term rented servers provide the extra capacity in peak periods. The details of the scheme are shown in Algorithm 1.

The proposed DQG scheme adopts the traditional FCFS queueing discipline. For each service request entering the system, the system records its waiting time. The requests are assigned to and executed on the long-term rented servers in the order of their arrival times.
Once the waiting time of a request reaches $D$, a temporary server is rented from infrastructure providers to process the request. We consider this service model as an M/M/m+D queuing model [48, 49, 50]. The M/M/m+D model is a special M/M/m queuing model with impatient customers: the requests have a maximal tolerable waiting time, and if the waiting time exceeds it, they lose patience and leave the system. In our scheme, the impatient requests do not leave the system but are assigned to temporarily rented servers.

Algorithm 1 Double-Quality-Guaranteed (DQG) Scheme
1: A multiserver system with m servers is running and waiting for the events as follows
2: A queue Q is initialized as empty
3: Event – A service request arrives
4: Search if any server is available
5: if true then
6: Assign the service request to one available server
7: else
8: Put it at the end of queue Q and record its waiting time
9: end if
10: End Event
11: Event – A server becomes idle
12: Search if the queue Q is empty
13: if true then
14: Wait for a new service request
15: else
16: Take the first service request from queue Q and assign it to the idle server
17: end if
18: End Event
19: Event – The deadline of a request is achieved
20: Rent a temporary server to execute the request and release the temporary server when the request is completed
21: End Event

Since the requests whose waiting times reach $D$ are all assigned to temporary servers, it is apparent that all service requests meet their deadlines and are charged based on their workloads according to the SLA. Hence, the revenue of the service provider increases. However, the cost increases as well due to the temporarily rented servers, and the amount of cost spent on renting temporary servers is determined by the computing capacity of the long-term rented multiserver system. Since the revenue has been maximized using our scheme, minimizing the cost is the key issue for profit maximization.
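Algorithm 1 can be prototyped with a small event-driven simulation. The Python sketch below is a simplification I wrote for illustration (exponential interarrival and execution times, as in the model of Section 3.2): any request whose queueing delay would reach the deadline is diverted to a freshly rented temporary server, and the function returns the diverted fraction of requests.

```python
import random

def simulate_dqg(lam, mu, m, deadline, horizon, seed=1):
    # m long-term servers serve an FCFS queue; a request that would wait
    # at least `deadline` is instead run on a temporary short-term server.
    # Returns the fraction of requests sent to temporary servers.
    rng = random.Random(seed)
    free_at = [0.0] * m          # time at which each long-term server idles
    t, total, diverted = 0.0, 0, 0
    while t < horizon:
        t += rng.expovariate(lam)        # Poisson arrivals with rate lam
        service = rng.expovariate(mu)    # exponential execution time
        total += 1
        earliest = min(free_at)          # FCFS: earliest-free server
        if max(t, earliest) - t >= deadline:
            diverted += 1                # deadline reached while queued
        else:
            free_at[free_at.index(earliest)] = max(t, earliest) + service
    return diverted / total if total else 0.0
```

Under FCFS the waiting time of an arrival is known the moment it arrives (it is the earliest server-free time minus the arrival time), so diverting at arrival selects exactly the same requests as waiting until the deadline elapses, as Algorithm 1 does.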
Next, the tradeoff between the long-term rental cost and the short-term rental cost is considered, and an optimization problem is formulated in the following to obtain the optimal long-term configuration such that the profit is maximized.

4.2 The Profit Optimization Problem

Assume that a cloud service platform consists of $m$ long-term rented servers. It is known that some requests need temporary servers to serve them, so that their quality can be guaranteed. Denote by $p_{ext}(D)$ the steady-state probability that a request is assigned to a temporary server or, put differently, the long-run fraction of requests whose waiting times reach the deadline $D$. Note that $p_{ext}(D)$ is different from the tail probability $1 - F_W(D)$: in calculating $F_W(D)$, all service requests, whether or not they exceed the deadline, keep waiting in the queue, whereas in calculating $p_{ext}(D)$, the requests whose waiting times reach the deadline are assigned to temporary servers, which reduces the waiting times of the following requests. In general, $p_{ext}(D)$ is much less than $1 - F_W(D)$. Referring to [50], we know that
$$p_{ext}(D) = \frac{(1-\rho)\left(1-F_W(D)\right)}{1 - \rho\left(1-F_W(D)\right)}. \qquad (5)$$

Fig. 4: The probability of the waiting time exceeding $D$, versus the deadline (for $\bar{r} = 1$).

That is to say, about $\lambda p_{ext}(D)$ service requests per unit of time need short-term rented servers. Fig. 4 plots this probability versus the deadline, where $\lambda = 5.99$, $\bar{r} = 1$, $m = 6$, and $s = 1$.
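Eq. (5) composes directly with Theorem 3.1. A Python sketch (my own illustration; the numbers in the note below are the paper's running example, and the code is valid only for $\rho < 1$):

```python
import math

def pi_m(m, rho):
    # pi_m as defined in Theorem 3.1.
    a = m * rho
    head = sum(a**k / math.factorial(k) for k in range(m))
    return (a**m / math.factorial(m)) / (head + a**m / (math.factorial(m) * (1.0 - rho)))

def p_ext(lam, mu, m, deadline):
    # Eq. (5): long-run fraction of requests handed to temporary servers.
    rho = lam / (m * mu)
    # tail = 1 - F_W(D), the deadline-miss probability of Eq. (3)
    tail = pi_m(m, rho) / (1.0 - rho) * math.exp(-m * mu * (1.0 - rho) * deadline)
    return (1.0 - rho) * tail / (1.0 - rho * tail)
```

With $\lambda = 5.99$, $\mu = s/\bar{r} = 1$, $m = 6$, and $D = 5$, `p_ext` returns roughly 0.03: only about 3% of requests go to temporary servers even though the raw tail probability $1 - F_W(D)$ is close to 0.95, illustrating the gap the text describes.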
Hence, the cost of short-term rented servers per unit of time is
$$C_{short} = \lambda p_{ext}(D)\,\frac{\bar{r}}{s}\,(\gamma + \delta P), \qquad (6)$$
where $\bar{r}/s$ is the average execution time of a request. Among the requests entering the service system, about a fraction $p_{ext}(D)$ are not executed by the $m$ long-term rented servers. Hence, the utilization of the $m$ servers is $\rho(1 - p_{ext}(D))$. Since the power needed for speed $s$ is $\xi s^{\alpha}$, the average amount of energy consumed by a long-term rented server per unit of time is $P_{long} = \rho(1 - p_{ext}(D))\xi s^{\alpha} + P^*$. Hence, the cost of the long-term rented servers per unit of time is
$$C_{long} = m(\beta + \delta P_{long}). \qquad (7)$$

The following theorem gives the expected charge for a service request.

Theorem 4.1. The expected charge for a service request is $a\bar{r}$.

Proof 4.1. Because the waiting time $W$ of each request is less than or equal to $D$, the charge for a service request with execution requirement $r$ is $ar$ according to the SLA. Since $r$ is a random variable, $ar$ is also a random variable. It is known that $r$ is an exponential random variable with mean $\bar{r}$, so its probability density function is $f_r(z) = \frac{1}{\bar{r}}e^{-z/\bar{r}}$. The expected charge for a service request is
$$\int_0^{\infty} f_r(z)\,az\,dz = \frac{a}{\bar{r}}\int_0^{\infty} z e^{-z/\bar{r}}\,dz = -a\left[z e^{-z/\bar{r}}\Big|_0^{\infty} - \int_0^{\infty} e^{-z/\bar{r}}\,dz\right] = -a\left[z e^{-z/\bar{r}}\Big|_0^{\infty} + \bar{r}e^{-z/\bar{r}}\Big|_0^{\infty}\right] = a\bar{r}. \qquad (8)$$
The theorem is proven.

The profit of a service provider per unit of time is
$$Profit = Revenue - C_{long} - C_{short}, \qquad (9)$$
where $Revenue = \lambda a\bar{r}$,
$$C_{long} = m\left(\beta + \delta\left(\rho(1 - p_{ext}(D))\xi s^{\alpha} + P^*\right)\right),$$
and
$$C_{short} = \lambda p_{ext}(D)\,\frac{\bar{r}}{s}\left(\gamma + \delta(\xi s^{\alpha} + P^*)\right).$$
We aim to choose the optimal number of fixed servers $m$ and the optimal execution speed $s$ to maximize the profit:
$$Profit(m, s) = \lambda a\bar{r} - \lambda p_{ext}(D)\,\frac{\bar{r}}{s}\left(\gamma + \delta(\xi s^{\alpha} + P^*)\right) - m\left(\beta + \delta\left(\rho(1 - p_{ext}(D))\xi s^{\alpha} + P^*\right)\right). \qquad (10)$$

Fig. 5 gives the graph of the function $Profit(m, s)$ where $\lambda = 5.99$, $\bar{r} = 1$, $D = 5$, $a = 15$, $P^* = 3$, $\alpha = 2.0$, $\xi = 9.4192$, $\beta = 1.5$, $\gamma = 3$, and $\delta = 0.3$.

Fig. 5: The function $Profit(m, s)$.

From the figure, we can see that the profit of a service provider varies with the server size and the execution speed. Therefore, we face the problem of selecting the optimal server size and/or server speed so that the profit is maximized. In the following section, solutions to this problem are proposed.

5 OPTIMAL SOLUTION

In this section, we first develop an analytical method to solve our optimization problem, from which the ideal optimal solutions are obtained. Because the server size and the server speed are limited and discrete in practice, we then give an algorithmic method to obtain the actual solutions based on the ideal ones.

5.1 An Analytical Method for Ideal Solutions

We first solve our optimization problem analytically, assuming that $m$ and $s$ are continuous variables. To this end, a closed-form expression of $p_{ext}(D)$ is needed. In this paper, we use the same closed-form approximation as [2], namely $\sum_{k=0}^{m-1}\frac{(m\rho)^k}{k!} \approx e^{m\rho}$, which is very accurate when $m$ is not too small and $\rho$ is not too large [2]. Since Stirling's approximation of $m!$ is $\sqrt{2\pi m}\,\left(\frac{m}{e}\right)^m$, one closed-form expression of $\pi_m$ is
$$\pi_m \approx \frac{1-\rho}{\sqrt{2\pi m}\,(1-\rho)\left(\frac{e^{\rho}}{e\rho}\right)^m + 1},$$
and
$$p_{ext}(D) \approx \frac{(1-\rho)\,e^{-m\mu(1-\rho)D}}{1 + \sqrt{2\pi m}\,(1-\rho)\left(\frac{e^{\rho}}{e\rho}\right)^m - \rho\, e^{-m\mu(1-\rho)D}}.$$
For convenience, we rewrite $p_{ext}(D) \approx \frac{(1-\rho)K_1}{K_2 - \rho K_1}$, where $K_1 = e^{-m\mu(1-\rho)D}$ and $K_2 = 1 + \sqrt{2\pi m}\,(1-\rho)\Phi$, with $\Phi = \left(\frac{e^{\rho}}{e\rho}\right)^m$.

In the following, we solve our optimization problems based on the above closed-form expression of $p_{ext}(D)$.

5.1.1 Optimal Size

Given $\lambda$, $\bar{r}$, $a$, $P^*$, $\alpha$, $\beta$, $\gamma$, $\delta$, $\xi$, $D$, and $s$, our objective is to find $m$ such that $Profit$ is maximized. To maximize $Profit$, $m$ must be found such that
$$\frac{\partial Profit}{\partial m} = -\frac{\partial C_{long}}{\partial m} - \frac{\partial C_{short}}{\partial m} = 0,$$
where
$$\frac{\partial C_{long}}{\partial m} = \beta + \delta P^* - \delta\lambda\bar{r}\xi s^{\alpha-1}\frac{\partial p_{ext}(D)}{\partial m},$$
and
$$\frac{\partial C_{short}}{\partial m} = \lambda(\gamma + \delta P^*)\frac{\bar{r}}{s}\frac{\partial p_{ext}(D)}{\partial m} + \lambda\bar{r}\delta\xi s^{\alpha-1}\frac{\partial p_{ext}(D)}{\partial m}.$$
Since
$$\ln\Phi = m\ln\frac{e^{\rho}}{e\rho} = m(\rho - \ln\rho - 1),$$
and
$$\frac{\partial\rho}{\partial m} = -\frac{\lambda\bar{r}}{m^2 s} = -\frac{\rho}{m},$$
we have
$$\frac{1}{\Phi}\frac{\partial\Phi}{\partial m} = (\rho - \ln\rho - 1) + m\left(1 - \frac{1}{\rho}\right)\frac{\partial\rho}{\partial m} = -\ln\rho,$$
and
$$\frac{\partial\Phi}{\partial m} = -\Phi\ln\rho.$$
Then we get
$$\frac{\partial K_1}{\partial m} = -\mu D K_1,$$
and
$$\frac{\partial K_2}{\partial m} = \sqrt{2\pi m}\,\Phi\left(\frac{1+\rho}{2m} - (1-\rho)\ln\rho\right).$$
Furthermore, we have
$$\frac{\partial p_{ext}(D)}{\partial m} = \frac{1}{(K_2 - \rho K_1)^2}\left[\frac{\rho}{m}K_1(K_2 - K_1) + (\rho - 1)\mu D K_1 K_2 - \frac{(1+\rho)K_1}{2m}(K_2 - 1) + (1-\rho)K_1(\ln\rho)(K_2 - 1)\right].$$

We cannot obtain a closed-form solution for $m$, but we can find the solution numerically. Since $\partial Profit/\partial m$ is not a monotonic function of $m$, we first locate the region where it is decreasing and then apply the standard bisection method; if there is more than one local maximum, the candidates are compared and the global maximum is selected. When using the bisection method to find the extreme point, the iteration accuracy is set to a unified value of $10^{-10}$.

In Fig. 6, we show the net profit per unit of time as a function of $m$ and $\lambda$, where $s = 1$, $\bar{r} = 1$, and the other parameters are the same as those in Fig. 5. We notice that there is an optimal choice of $m$ such that the net profit is maximized.

Fig. 6: Net profit versus $m$ and $\lambda$ (curves for $\lambda = 4.99, 5.99, 6.99, 7.99$).

Fig. 7: Optimal size and maximal profit versus $s$ and $\lambda$. (a) Optimal size versus $s$ and $\lambda$. (b) Maximal profit versus $s$ and $\lambda$.

Using the analytical method, the optimal values of $m$ such that $\partial Profit/\partial m = 0$ are 4.8582, 5.8587, 6.8590, and 7.8592 for $\lambda = 4.99, 5.99, 6.99, 7.99$, respectively. When the number of servers $m$ is less than the optimal value, the service provider needs to rent more temporary servers to execute the requests whose waiting times reach the deadline; hence, the extra cost increases, even surpassing the gained revenue. As $m$ increases beyond the optimal value, the waiting times are significantly reduced, but the cost of the fixed servers grows greatly and also surpasses the gained revenue. Hence, there is an optimal choice of $m$ that maximizes the profit.

In Fig. 7, we show the optimal size and the maximal profit per unit of time as functions of $s$ and $\lambda$; that is, for each combination of $s$ and $\lambda$, we find the optimal number of servers and the corresponding maximal profit. The parameters are the same as those in Fig. 6. From the figures we can see that a higher speed leads to a smaller number of servers needed for each $\lambda$, and that different $\lambda$ values have different optimal combinations of speed and size.
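The numerical search of Section 5.1.1 can be reproduced without the symbolic derivatives. The Python sketch below implements the closed-form approximation of $p_{ext}(D)$ (via $K_1$, $K_2$, and $\Phi$ as above), evaluates $Profit(m, s)$ of Eq. (10) with the paper's example parameters as defaults, and bisects on a central-difference estimate of $\partial Profit/\partial m$; it assumes, as Fig. 6 suggests, that the profit is unimodal in $m$ on the search interval, and is an illustrative stand-in for the paper's exact procedure rather than a reimplementation of it.

```python
import math

def p_ext_approx(m, s, lam, rbar, D):
    # Closed-form approximation of Section 5.1:
    # p_ext(D) ~ (1-rho)*K1 / (K2 - rho*K1).
    mu = s / rbar
    rho = lam / (m * mu)
    k1 = math.exp(-m * mu * (1.0 - rho) * D)
    phi = math.exp(m * (rho - math.log(rho) - 1.0))  # (e^rho / (e*rho))^m
    k2 = 1.0 + math.sqrt(2.0 * math.pi * m) * (1.0 - rho) * phi
    return (1.0 - rho) * k1 / (k2 - rho * k1)

def profit(m, s, lam=5.99, rbar=1.0, D=5.0, a=15.0, pstar=3.0,
           alpha=2.0, xi=9.4192, beta=1.5, gamma=3.0, delta=0.3):
    # Eq. (10), with the paper's running example as default parameters.
    mu = s / rbar
    rho = lam / (m * mu)
    p = p_ext_approx(m, s, lam, rbar, D)
    c_short = lam * p * (rbar / s) * (gamma + delta * (xi * s**alpha + pstar))
    c_long = m * (beta + delta * (rho * (1.0 - p) * xi * s**alpha + pstar))
    return lam * a * rbar - c_short - c_long

def optimal_m(s, lo=3.0, hi=20.0, tol=1e-8):
    # Bisection on a central-difference estimate of dProfit/dm,
    # assuming Profit(., s) is unimodal on [lo, hi].
    h = 1e-6
    dpdm = lambda m: (profit(m + h, s) - profit(m - h, s)) / (2.0 * h)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dpdm(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $s = 1$ and the default $\lambda = 5.99$, this search lands near the ideal size 5.8587 reported above.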
In addition, the greater $\lambda$ is, the greater the maximal profit that can be obtained.

5.1.2 Optimal Speed

Given $\lambda$, $\bar{r}$, $a$, $P^*$, $\alpha$, $\beta$, $\gamma$, $\delta$, $\xi$, $D$, and $m$, our objective is to find $s$ such that $Profit$ is maximized. To maximize $Profit$, $s$ must be found such that
$$\frac{\partial Profit}{\partial s} = -\frac{\partial C_{long}}{\partial s} - \frac{\partial C_{short}}{\partial s} = 0,$$
where
$$\frac{\partial C_{long}}{\partial s} = \delta\xi\lambda\bar{r}s^{\alpha-2}\left[(\alpha - 1)(1 - p_{ext}(D)) - s\frac{\partial p_{ext}(D)}{\partial s}\right],$$
and
$$\frac{\partial C_{short}}{\partial s} = \frac{\lambda\bar{r}(\gamma + \delta P^*)}{s^2}\left(s\frac{\partial p_{ext}(D)}{\partial s} - p_{ext}(D)\right) + \lambda\bar{r}\delta\xi s^{\alpha-2}\left[s\frac{\partial p_{ext}(D)}{\partial s} + (\alpha - 1)p_{ext}(D)\right].$$
Since
$$\frac{\partial\rho}{\partial s} = -\frac{\lambda\bar{r}}{ms^2} = -\frac{\rho}{s},$$
and
$$\frac{1}{\Phi}\frac{\partial\Phi}{\partial s} = m\left(1 - \frac{1}{\rho}\right)\frac{\partial\rho}{\partial s},$$
we have
$$\frac{\partial\Phi}{\partial s} = \frac{m}{s}(1 - \rho)\Phi.$$
Now we get
$$\frac{\partial K_1}{\partial s} = -\frac{D K_1 m}{\bar{r}},$$
and
$$\frac{\partial K_2}{\partial s} = \frac{\sqrt{2\pi m}\,\left(\rho + m(1-\rho)^2\right)\Phi}{s}.$$
Furthermore, we have
$$\frac{\partial p_{ext}(D)}{\partial s} = \frac{1}{(K_2 - \rho K_1)^2}\left[\frac{\rho}{s}K_1(K_2 - K_1) + (\rho - 1)K_1\frac{\sqrt{2\pi m}\,\left(\rho + m(1-\rho)^2\right)\Phi}{s} + (\rho - 1)\frac{D K_1 K_2 m}{\bar{r}}\right].$$

Similarly, we cannot obtain a closed-form expression for $s$, so we use the same numerical method to find the solution for $s$. In Fig. 8, we show the net profit per unit of time as a function of $s$ and $\lambda$, where $m = 6$ and the remaining parameters are the same as those in Figs. 6 and 7. We notice that there is an optimal choice of $s$ such that the net profit is maximized.

Fig. 8: Net profit versus $s$ and $\lambda$ (curves for $\lambda = 4.99, 5.99, 6.99, 7.99$).

Using the analytical method, the optimal values of $s$ such that $\partial Profit/\partial s = 0$ are obtained for $\lambda = 4.99, 5.99, 6.99, 7.99$, respectively. When the servers run at a speed slower than the optimal one, the waiting times of service requests are long and often exceed the deadline, so the revenue is small and the profit is not optimal. When $s$ increases beyond the optimal speed, the energy consumption and hence the electricity cost increase, and the increased revenue is much less than the increased cost; as a result, the profit is reduced. Therefore, there is an optimal choice of $s$ such that the net profit is maximized.

In Fig. 9, we show the optimal speed and the maximal profit per unit of time as functions of $m$ and $\lambda$. The parameters are the same as those in Figs. 6–8.

Fig. 9: Optimal speed and maximal profit versus $m$ and $\lambda$. (a) Optimal speed versus $m$ and $\lambda$. (b) Maximal profit versus $m$ and $\lambda$.

From the figures we can see that if the number of fixed servers is large, the servers must run at a lower speed to achieve the optimal profit. In addition, the optimal speed of the servers is never faster than 1.2; that is because beyond that point the increased electricity cost surpasses the cost of renting extra servers. The figures also show that different $\lambda$ values have different optimal combinations of speed and size.

5.1.3 Optimal Size and Speed

Given $\lambda$, $\bar{r}$, $a$, $P^*$, $\alpha$, $\beta$, $\gamma$, $\delta$, $\xi$, and $D$, our third problem is to find $m$ and $s$ such that $Profit$ is maximized. Hence, we need to find $m$ and $s$ such that $\partial Profit/\partial m = 0$ and $\partial Profit/\partial s = 0$, where $\partial Profit/\partial m$ and $\partial Profit/\partial s$ have been derived in the last two sections. The two equations are solved using the same method as [2]. In Fig. 10, we show the net profit per unit of time as a function of $m$ and $s$, where $\lambda = 5.99$ and $\bar{r} = 1$. The optimal values are $m = 6.2418$ and $s = 0.9386$, which yield the maximal profit 58.0150.

Fig. 10: Net profit versus $m$ and $s$ (curves for $m = 3, 4, 5, 6$).

In Fig. 11, we show the maximal profit per unit of time for different combinations of $\lambda$ and $\bar{r}$. The figure shows that service providers can obtain more profit when the service requests have greater $\lambda$ and $\bar{r}$.

Fig. 11: Maximal profit versus $\lambda$ and $\bar{r}$.

5.2 An Algorithmic Method for Actual Solutions

The optimal solutions found in the above subsection using the analytical method are ideal solutions. Since the number of rented servers must be an integer and the server speed levels are discrete and limited in a real system, we need to find the optimal solutions for the discrete scenarios. Assume that $S = \{s_i \mid 1 \le i \le n\}$ is a discrete set of $n$ speed levels in increasing order. Next, the different situations are discussed and the corresponding methods are given.

5.2.1 Optimal Size

Assume that all servers run at a given execution speed $s$. Given $\lambda$, $\bar{r}$, $a$, $P^*$, $\alpha$, $\beta$, $\gamma$, $\delta$, $\xi$, and $D$, the first problem is to find the number of long-term rented servers $m$ such that the profit is maximized. The method is shown in Algorithm 2.

Algorithm 2 Finding the optimal size
Input: s, λ, r̄, a, P*, α, β, γ, δ, ξ, and D
Output: the optimal number Opt_size of fixed servers
1: Profit_max ← 0
2: find the server size m using the analytical method in Section 5.1.1
3: m_l ← ⌊m⌋, m_u ← ⌈m⌉
4: Profit_l ← Profit(m_l, s), Profit_u ← Profit(m_u, s)
5: if Profit_l > Profit_u then
6: Profit_max ← Profit_l
7: Opt_size ← m_l
8: else
9: Profit_max ← Profit_u
10: Opt_size ← m_u
11: end if

5.2.2 Optimal Speed

Assume that the service provider rents $m$ servers. Given $\lambda$, $\bar{r}$, $a$, $P^*$, $\alpha$, $\beta$, $\gamma$, $\delta$, $\xi$, and $D$, the second problem is to find the optimal execution speed of all servers such that the profit is maximized.
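The floor/ceiling comparisons at the heart of Algorithm 2 above (and of Algorithms 3 and 4 below) can be written generically. In the Python sketch that follows, `profit_fn`, `m_star`, `s_star`, and `levels` are illustrative names of mine: any profit function of $(m, s)$, the ideal continuous optimum, and the discrete speed set $S$. Note that, unlike Algorithm 4, this simplification reuses the same ideal speed for both candidate sizes rather than re-running the analytical speed optimization per size.

```python
import math

def discretize_size(profit_fn, m_star, s):
    # Compare floor and ceiling of the ideal continuous size m_star
    # and keep the more profitable one (rounding step of Algorithm 2).
    m_lo, m_hi = math.floor(m_star), math.ceil(m_star)
    return m_lo if profit_fn(m_lo, s) > profit_fn(m_hi, s) else m_hi

def discretize_speed(profit_fn, m, s_star, levels):
    # Bracket the ideal speed s_star between adjacent entries of the
    # sorted speed set and keep the better one (rounding step of Algorithm 3).
    s_hi = min((v for v in levels if v >= s_star), default=levels[-1])
    s_lo = max((v for v in levels if v < s_star), default=levels[0])
    return s_lo if profit_fn(m, s_lo) > profit_fn(m, s_hi) else s_hi

def discretize_both(profit_fn, m_star, s_star, levels):
    # Simplified Algorithm 4: discretize the speed for each candidate
    # size and keep the better (size, speed) pair.
    m_lo, m_hi = math.floor(m_star), math.ceil(m_star)
    pick_lo = (m_lo, discretize_speed(profit_fn, m_lo, s_star, levels))
    pick_hi = (m_hi, discretize_speed(profit_fn, m_hi, s_star, levels))
    return max(pick_lo, pick_hi, key=lambda p: profit_fn(p[0], p[1]))
```

Any concave-looking profit surface works with these helpers; the real Eq. (10) can be plugged in as `profit_fn` directly.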
The method is shown in Algorithm 3.

Algorithm 3 Finding the optimal speed
Input: m, λ, r̄, a, P*, α, β, γ, δ, ξ, and D
Output: the optimal server speed Opt_speed
1: Profit_max ← 0
2: find the server speed s using the analytical method in Section 5.1.2
3: s_l ← s_i, s_u ← s_{i+1}, where s_i < s ≤ s_{i+1}
4: Profit_l ← Profit(m, s_l), Profit_u ← Profit(m, s_u)
5: if Profit_l > Profit_u then
6: Profit_max ← Profit_l
7: Opt_speed ← s_l
8: else
9: Profit_max ← Profit_u
10: Opt_speed ← s_u
11: end if

5.2.3 Optimal Size and Speed

In this subsection, we solve the third problem, which is to find the optimal combination of $m$ and $s$ such that the profit is maximized. Given $\lambda$, $\bar{r}$, $a$, $P^*$, $\alpha$, $\beta$, $\gamma$, $\delta$, $\xi$, and $D$, the method is shown in Algorithm 4.

Algorithm 4 Finding the optimal size and speed
Input: λ, r̄, a, P*, α, β, γ, δ, ξ, and D
Output: the optimal number Opt_size of fixed servers and the optimal execution speed Opt_speed of servers
1: Profit_max ← 0
2: find the server size m and speed s using the analytical method in Section 5.1.3
3: m_l ← ⌊m⌋, m_u ← ⌈m⌉
4: find the optimal speeds s_l and s_u using Algorithm 3 with server sizes m_l and m_u, respectively
5: Profit_l ← Profit(m_l, s_l), Profit_u ← Profit(m_u, s_u)
6: if Profit_l ≤ Profit_u then
7: Profit_max ← Profit_u
8: Opt_size ← m_u, Opt_speed ← s_u
9: else
10: Profit_max ← Profit_l
11: Opt_size ← m_l, Opt_speed ← s_l
12: end if

5.3 Comparison of the Two Kinds of Solutions

In Tables 1, 2, and 3, the ideal optimal solutions and the actual optimal solutions are compared for three different
Table 1 compares the ideal optimal size and the actual optimal size under a given server speed. Table 2 compares the ideal optimal speed and the actual optimal speed under a given server size. In Table 3, the two kinds of solutions are compared for different combinations of λ and r. Here, m can be any positive integer, and the available speed levels are S = {0.2, 0.4, ..., 2.0}. From the comparisons we can see that the ideal maximal profit is greater than the actual maximal profit. In the tables, we also list the relative difference (RD) between the ideal optimal profit and the actual optimal profit, which is calculated as

RD = (Idep − Actp) / Actp,

where Idep and Actp are the maximal profits in the ideal and actual scenarios, respectively. From the results we know that the relative difference is always small except in some cases in Table 2. That is because a small difference in speed can lead to a big difference in profit when the server size is large.

6 PERFORMANCE COMPARISON

Using our resource renting scheme, temporary servers are rented for all requests whose waiting times are equal to the deadline, which guarantees that all requests are served with high service quality. Hence, our scheme is superior to the traditional resource renting scheme in terms of service quality. Next, we conduct a series of calculations to compare the profit of our renting scheme with that of the renting scheme in [2]. In order to distinguish the proposed scheme from the compared scheme, the proposed scheme is renamed the Double-Quality-Guaranteed (DQG) renting scheme and the compared scheme is renamed the Single-Quality-Unguaranteed (SQU) renting scheme in this paper.

6.1 The Compared Scheme

Firstly, the average charge when using the SQU renting scheme is analyzed.

Theorem 6.1. The expected charge to a service request using the SQU renting scheme is ar(1 − P_q e^{−(1−ρ)mμD}).

Proof 6.1.
Recall that the probability density function of the waiting time W of a service request is

f_W(t) = (1 − P_q) u(t) + m μ π_m e^{−(1−ρ)mμt}.

Since W is a random variable, R(r, W) is also a random variable. The expected charge to a service request with execution requirement r is

R(r) = E[R(r, W)]
     = ∫_0^∞ f_W(t) R(r, t) dt
     = ∫_0^D [(1 − P_q) u(t) + m μ π_m e^{−(1−ρ)mμt}] ar dt
     = (1 − P_q) ar + m μ π_m ar · (1 − e^{−(1−ρ)mμD}) / ((1−ρ)mμ)
     = ar(1 − P_q e^{−(1−ρ)mμD}),

using P_q = π_m/(1−ρ). Therefore, the expected charge to a service request is the expected value of R(r) over the distribution of r:

E[R] = ∫_0^∞ f_r(z) R(z) dz
     = ∫_0^∞ (1/r) e^{−z/r} · az (1 − P_q e^{−(1−ρ)mμD}) dz
     = (a/r)(1 − P_q e^{−(1−ρ)mμD}) ∫_0^∞ z e^{−z/r} dz
     = ar(1 − P_q e^{−(1−ρ)mμD}).

The theorem is proven.

By the above theorem, the profit in one unit of time using the SQU renting scheme is calculated as:

λar(1 − P_q e^{−(1−ρ)mμD}) − m(β + δ(ρξs^α + P*)).   (11)

Using the SQU renting scheme, a service provider must rent more servers or scale up the server speed to maintain a high quality-guaranteed ratio. Assume that the required quality-guaranteed ratio of a service provider is ψ and the deadline of service requests is D. By solving

F_W(D) = 1 − (π_m/(1−ρ)) e^{−mμ(1−ρ)D} ≥ ψ

with a given m or s, we can get the corresponding s or m such that the required quality-guaranteed ratio is achieved.

6.2 Profit Comparison under Different Quality-Guaranteed Ratios

Let λ be 5.99 and the other parameters be the same as those in Section 5. In the first example, for a given number of servers, we compare the profit using the SQU renting scheme with quality-guaranteed ratios 100%, 99%, 92%, and 85% against the optimal profit using our DQG renting scheme. Because a quality-guaranteed ratio of 100% cannot be achieved using the SQU renting scheme, we set 99.999999% ≈ 100%. The results are shown in Fig. 12. From the figure, we can see that the profit obtained using the proposed scheme is always greater than that using the SQU renting scheme, and the five curves reach their peaks at different sizes.
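As a numerical sketch, the SQU per-unit-time profit of Eq. (11) can be evaluated by deriving the M/M/m quantities (μ, ρ, π_m, P_q) from λ, r, m, and s; the parameter names follow the paper, but the derivation of P_q via the standard Erlang C formula and any numeric values used below are assumptions rather than the paper's calibration:

```python
from math import exp, factorial

def squ_profit(lam, r, a, m, s, beta, delta, xi, alpha, P_star, D):
    """Eq. (11): lam*a*r*(1 - P_q*exp(-(1-rho)*m*mu*D))
                 - m*(beta + delta*(rho*xi*s**alpha + P_star))."""
    mu = s / r                    # per-server service rate
    rho = lam / (m * mu)          # utilization; the model requires rho < 1
    # M/M/m stationary probabilities: pi_0, then P_q = pi_m / (1 - rho)
    pi_0 = 1.0 / (sum((m * rho) ** k / factorial(k) for k in range(m))
                  + (m * rho) ** m / (factorial(m) * (1 - rho)))
    P_q = pi_0 * (m * rho) ** m / (factorial(m) * (1 - rho))  # Erlang C
    revenue = lam * a * r * (1 - P_q * exp(-(1 - rho) * m * mu * D))
    cost = m * (beta + delta * (rho * xi * s ** alpha + P_star))
    return revenue - cost
```

A longer deadline D shrinks the exponential term and thus raises the expected charge, so with all other parameters fixed the profit is non-decreasing in D.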
In addition, the profit obtained by a service provider increases when the quality-guaranteed ratio increases from 85% to 99%, but decreases when the ratio is greater than 99%. That is because more service requests are charged as the ratio increases from 85% to 99%; but once the ratio is greater than 99%, the cost of expanding the server size is greater than the revenue obtained from the extra quality-guaranteed requests, hence the total profit is reduced.

In the second example, we compare the profit of the above five scenarios under a given server speed. The results are given in Fig. 13. The figure shows the trend of the profit as the server speed increases from 0.1 to 2.9. From the figure, we can see that the curves increase at first, reach a peak at a certain speed, and then decrease with increasing speed on the whole. The figure verifies that our proposed scheme can obtain more profit than the SQU renting scheme. Notice that the changing trends of the curves of the SQU renting scheme with 100%, 99%, 92%, and 85% quality-guaranteed ratios are interesting: they show an increasing trend at the beginning and then decrease within a small range of speeds, repeatedly. The reason is
analyzed as follows. When the server speed changes within a small range, in order to satisfy the required deadline-guaranteed ratio, the number of servers rented by a service provider remains unchanged. At the beginning, the added revenue is more than the added cost, so the profit increases. However, when the speed becomes greater, the energy consumption increases, leading to the total increased cost surpassing the increased revenue; hence, the profit decreases.

TABLE 1: Comparison of the two methods for finding the optimal size

Given Speed            0.2      0.4      0.6      0.8      1.0      1.2      1.4      1.6      1.8      2.0
Ideal Optimal Size     29.1996  14.6300  9.7599   7.3222   5.8587   4.8827   4.1854   3.6624   3.2555   2.9300
Ideal Maximal Profit   11.5546  45.5262  54.6278  57.5070  57.8645  56.9842  55.3996  53.3498  51.0143  48.4578
Actual Optimal Size    29       15       10       7        6        5        4        4        3        3
Actual Maximal Profit  11.5268  45.4824  54.6014  57.3751  57.8503  56.9727  55.3259  53.0521  50.8526  48.4513
Relative Difference    0.2411%  0.0964%  0.0483%  0.2299%  0.0246%  0.0202%  0.1332%  0.5612%  0.3180%  0.01325%

TABLE 2: Comparison of the two methods for finding the optimal speed

Given Size             5        7        9        11       13       15       17       19       21       23
Ideal Optimal Speed    1.1051   0.8528   0.6840   0.5705   0.4895   0.4288   0.3817   0.3440   0.3132   0.2875
Ideal Maximal Profit   57.3742  57.7613  56.0783  53.3337  49.9896  46.2754  42.3167  38.1881  33.9366  29.5933
Actual Optimal Speed   1.0      0.8      0.8      0.6      0.6      0.4      0.4      0.4      0.4      0.4
Actual Maximal Profit  57.0479  57.3751  54.7031  53.1753  48.4939  45.4824  42.2165  37.4785  32.6795  27.8795
Relative Difference    0.5721%  0.6732%  2.5140%  0.2979%  3.0843%  1.7435%  0.2373%  1.8934%  3.8470%  6.1474%

TABLE 3: Comparison of the two methods for finding the optimal size and the optimal speed

r                      0.50     0.75     1.00     1.25     1.50     1.75     2.00

λ = 4.99:
Ideal Optimal Size     2.5763   3.8680   5.1608   6.4542   7.7480   9.0420   10.3362
Ideal Optimal Speed    0.9432   0.9422   0.9413   0.9406   0.9399   0.9394   0.9388
Ideal Maximal Profit   24.0605  36.0947  48.1539  60.1926  72.2317  84.3121  96.3528
Actual Optimal Size    3        4        5        6        7        9        10
Actual Optimal Speed   1.0      1.0      1.0      1.0      1.0      1.0      1.0
Actual Maximal Profit  23.8770  35.7921  48.0850  60.1452  72.0928  83.9968  96.2230
Relative Difference    0.7695%  0.8454%  0.14355% 0.0789%  0.1927%  0.3754%  0.1349%

λ = 5.99:
Ideal Optimal Size     3.1166   4.6787   6.2418   7.8056   9.3600   10.9346  12.4995
Ideal Optimal Speed    0.9401   0.9393   0.9386   0.9380   0.9375   0.9370   0.9366
Ideal Maximal Profit   28.9587  43.4364  57.9339  72.4121  86.9180  101.3958 115.9086
Actual Optimal Size    3        4        6        7        9        10       12
Actual Optimal Speed   1.0      1.0      1.0      1.0      1.0      1.0      1.0
Actual Maximal Profit  28.9158  43.1208  57.8503  72.2208  86.7961  101.2557 115.7505
Relative Difference    0.1484%  0.7317%  0.1445%  0.2649%  0.1405%  0.1384%  0.1365%

Fig. 12: Profit versus m and different quality-guaranteed ratios (DQG vs. SQU 100%, 99%, 92%, 85%).

Fig. 13: Profit versus s and different quality-guaranteed ratios (DQG vs. SQU 100%, 99%, 92%, 85%).

Fig. 14: Profit versus D and different quality-guaranteed ratios: (a) fixed server speed s = 0.7; (b) fixed server size m = 5.

In the third example, we explore the changing trend of the profit with different D, and the results are shown in Fig. 14. Fig. 14(a) gives the numerical results when the server speed is fixed at 0.7, and Fig. 14(b) shows the numerical results when the number of servers is fixed at 5. We analyze the results as follows.

From Fig. 14(a), we can see that the profit obtained using the SQU renting scheme increases slightly with the increment of D. That is because the service charge remains constant but the extra cost is reduced when D is greater. As a consequence, the profit increases. The second phenomenon in the figure is that the curves of SQU 92% and SQU 85% drop sharply at some points and then ascend gradually and smoothly. The reasons are explained as follows. When the server speed is fixed, enough servers are needed to satisfy the given quality-guaranteed ratio. By calculation, we know that the number of required servers is the same for all D values within a certain interval. For example, [5,7] and [8,25] are two intervals of D for the curve of SQU 92%, and the required servers are 10 and 9, respectively. For all D within the same interval, the costs are identical, whereas the actual quality-guaranteed ratios differ, growing with increasing D. Hence, within the same interval, the revenue, and with it the profit, increases. However, if the deadline increases and enters a different interval, the quality-guaranteed ratio drops sharply due to the reduced number of servers, and the lost revenue surpasses the reduced cost; hence, the profit drops sharply as well. Moreover, we can also see that the profit of SQU 100% is much less than in the other scenarios. That is because when the quality-guaranteed ratio is great enough, adding a small amount of revenue incurs a much higher cost.

From Fig. 14(b), we can see that the curves of SQU 92% and SQU 85% descend and ascend repeatedly. The reasons are the same as those for Fig. 14(a).
The deadlines within the same interval share the same minimal speed; hence, the cost remains constant. At the same time, the revenue increases due to the increasing quality-guaranteed ratio. As a consequence, the profit increases. At each break point, the minimal speed satisfying the required quality-guaranteed ratio becomes smaller, which leads to a sharp drop in the actual quality-guaranteed ratio. Hence, the revenue, and with it the profit, drops.

6.3 Comparison of Optimal Profit

In order to further verify the superiority of our proposed scheme in terms of profit, we conduct the following comparison between the optimal profit achieved by our DQG renting scheme and that of the SQU renting scheme in [2]. In this group of comparisons, λ is set as 6.99, D is 5, r varies from 0.75 to 2.00 in steps of 0.25, and the other parameters are the same as in Section 5. In Fig. 15, the optimal profit and the corresponding configuration of the two renting schemes are presented. From Fig. 15(a) we can see that the optimal profit obtained using our scheme is always greater than that using the SQU renting scheme. According to the calculation, our scheme obtains 4.17 percent more profit on average than the SQU renting scheme. This shows that our scheme outperforms the SQU renting scheme in terms of both quality of service and profit. Figs. 15(b) and 15(c) compare the server size and speed of the two schemes. The figures show that, using our renting scheme, the capacity provided by the long-term rented servers is much less than the capacity using the SQU renting scheme.
That is because many requests are assigned to the temporary servers under our scheme, so fewer long-term servers and slower server speeds are configured to reduce the waste of resources in idle periods. In conclusion, our scheme not only guarantees the service quality of all requests, but also achieves more profit than the compared one.

Fig. 15: Comparison between our scheme and that in [2]: (a) comparison of profit; (b) comparison of server size; (c) comparison of server speed.

7 CONCLUSIONS

In order to guarantee the quality of service requests and maximize the profit of service providers, this paper has proposed a novel Double-Quality-Guaranteed (DQG) renting scheme for service providers. This scheme combines short-term renting with long-term renting, which can greatly reduce resource waste and adapt to the dynamic demand for computing capacity. An M/M/m+D queueing model is built for our multiserver system with varying system size. Then, an optimal configuration problem of profit maximization is formulated, in which many factors are taken into consideration, such as the market demand, the workload of requests, the service-level agreement, the rental cost of servers, the cost of energy consumption, and so forth. The optimal solutions are found for two different situations: the ideal optimal solutions and the actual optimal solutions. In addition, a series of calculations is conducted to compare the profit obtained by the DQG renting scheme with that of the Single-Quality-Unguaranteed (SQU) renting scheme. The results show that our scheme outperforms the SQU scheme in terms of both service quality and profit.

In this paper, we only consider the profit maximization problem in a homogeneous cloud environment, because the analysis of a heterogeneous environment is much more complicated than that of a homogeneous one. We will extend our study to a heterogeneous environment in the future.


A Distributed Three-Hop Routing Protocol to Increase the Capacity of Hybrid Wireless Networks

Haiying Shen, Senior Member, IEEE, Ze Li, and Chenxi Qiu

Abstract—Hybrid wireless networks combining the advantages of both mobile ad-hoc networks and infrastructure wireless networks have been receiving increased attention due to their ultra-high performance. An efficient data routing protocol is important in such networks for high network capacity and scalability. However, most routing protocols for these networks simply combine the ad-hoc transmission mode with the cellular transmission mode, which inherits the drawbacks of ad-hoc transmission. This paper presents a Distributed Three-hop Routing protocol (DTR) for hybrid wireless networks. To take full advantage of the widespread base stations, DTR divides a message data stream into segments and transmits the segments in a distributed manner. It makes full spatial reuse of a system via its high-speed ad-hoc interface and alleviates mobile gateway congestion via its cellular interface. Furthermore, sending segments to a number of base stations simultaneously increases throughput and makes full use of widespread base stations. In addition, DTR significantly reduces overhead due to short path lengths and the elimination of route discovery and maintenance. DTR also has a congestion control algorithm to avoid overloading base stations. Theoretical analysis and simulation results show the superiority of DTR in comparison with other routing protocols in terms of throughput capacity, scalability, and mobility resilience. The results also show the effectiveness of the congestion control algorithm in balancing the load between base stations.

Index Terms—Hybrid wireless networks, routing algorithm, load balancing, congestion control

1 INTRODUCTION

Over the past few years, wireless networks including infrastructure wireless networks and mobile ad-hoc networks (MANETs) have attracted significant research interest.
The growing desire to increase wireless network capacity for high-performance applications has stimulated the development of hybrid wireless networks [1], [2], [3], [4], [5], [6]. A hybrid wireless network consists of both an infrastructure wireless network and a mobile ad-hoc network. Wireless devices such as smart-phones, tablets, and laptops have both an infrastructure interface and an ad-hoc interface. As the number of such devices has been increasing sharply in recent years, a hybrid transmission structure will be widely used in the near future. Such a structure synergistically combines the inherent advantages of, and overcomes the disadvantages of, infrastructure wireless networks and mobile ad-hoc networks.

In a mobile ad-hoc network, with the absence of a central control infrastructure, data is routed to its destination through intermediate nodes in a multi-hop manner. Multi-hop routing needs on-demand route discovery or route maintenance [7], [8], [9], [10]. Since messages are transmitted over wireless channels and through dynamic routing paths, mobile ad-hoc networks are not as reliable as infrastructure wireless networks. Furthermore, because of the multi-hop transmission feature, mobile ad-hoc networks are only suitable for local-area data transmission.

The infrastructure wireless network (e.g., the cellular network) is the major means of wireless communication in our daily lives. It excels at inter-cell communication (i.e., communication between nodes in different cells) and Internet access. It makes possible the support of universal network connectivity and ubiquitous computing by integrating all kinds of wireless devices into the network. In an infrastructure network, nodes communicate with each other through base stations (BSes).
Because of the long-distance one-hop transmission between BSes and mobile nodes, infrastructure wireless networks can provide higher message transmission reliability and channel access efficiency, but suffer from higher power consumption on mobile nodes and the single-point-of-failure problem [11].

A hybrid wireless network synergistically combines an infrastructure wireless network and a mobile ad-hoc network to leverage their advantages and overcome their shortcomings, and finally increases the throughput capacity of a wide-area wireless network. A routing protocol is a critical component that affects the throughput capacity of a wireless network in data transmission. Most current routing protocols in hybrid wireless networks [1], [5], [6], [12], [13], [14], [15], [16], [17], [18] simply combine the cellular transmission mode (i.e., BS transmission mode) of infrastructure wireless networks and the ad-hoc transmission mode of mobile ad-hoc networks [7], [8], [9]. That is, as shown in Fig. 1a, the protocols use multi-hop routing to forward a message to the mobile gateway nodes that are closest to the BSes or have the highest bandwidth to the BSes. The bandwidth of a channel is the maximum throughput (i.e., transmission rate in bits/s) that can be achieved. The mobile gateway nodes then forward the messages to the BSes, functioning as bridges to connect the ad-hoc network and the infrastructure network.

The authors are with the Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634. E-mail: {shenh, zel, chenxiq}@clemson.edu. Manuscript received 18 Mar. 2014; accepted 18 Dec. 2014. Date of publication 7 Jan. 2015; date of current version 31 Aug. 2015. Digital Object Identifier no. 10.1109/TMC.2015.2388476. IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 14, NO. 10, OCTOBER 2015. 1536-1233 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

However, direct combination of the two transmission modes inherits the following problems that are rooted in the ad-hoc transmission mode.

- High overhead. Route discovery and maintenance incur high overhead. The wireless random-access medium access control (MAC) required in mobile ad-hoc networks, which utilizes control handshaking and a back-off mechanism, further increases overhead.

- Hot spots. The mobile gateway nodes can easily become hot spots. The RTS-CTS random access, in which most traffic goes through the same gateway, and the flooding employed in mobile ad-hoc routing to discover routes may exacerbate the hot-spot problem. In addition, mobile nodes only use the channel resources in their route direction, which may generate hot spots while leaving resources in other directions under-utilized. Hot spots lead to low transmission rates, severe network congestion, and high data dropping rates.

- Low reliability. Dynamic and long routing paths lead to unreliable routing. Noise interference and neighbor interference during the multi-hop transmission process cause a high data drop rate. Long routing paths increase the probability of path breakdown due to the highly dynamic nature of wireless ad-hoc networks.

These problems become an obstacle to achieving high throughput capacity and scalability in hybrid wireless networks. Considering the widespread BSes, mobile nodes have a high probability of encountering a BS while moving. Taking advantage of this feature, we propose a Distributed Three-hop Data Routing protocol (DTR). In DTR, as shown in Fig.
1b, a source node divides a message stream into a number of segments. Each segment is sent to a neighbor mobile node. Based on the QoS requirement, these mobile relay nodes choose between direct transmission and relay transmission to the BS. In relay transmission, a segment is forwarded to another mobile node with higher capacity to a BS than the current node. In direct transmission, a segment is directly forwarded to a BS. In the infrastructure, the segments are rearranged in their original order and sent to the destination. The number of routing hops in DTR is confined to three, including at most two hops in the ad-hoc transmission mode and one hop in the cellular transmission mode. To overcome the aforementioned shortcomings, DTR tries to limit the number of hops. The first-hop forwarding distributes the segments of a message in different directions to fully utilize the resources, and the possible second-hop forwarding ensures the high capacity of the forwarder. DTR also has a congestion control algorithm to balance the traffic load between nearby BSes in order to avoid traffic congestion at BSes.

Using self-adaptive and distributed routing with high-speed and short-path ad-hoc transmission, DTR significantly increases the throughput capacity and scalability of hybrid wireless networks by overcoming the three shortcomings of the previous routing algorithms. It has the following features:

- Low overhead. It eliminates the overhead caused by route discovery and maintenance in the ad-hoc transmission mode, especially in a dynamic environment.

- Hot spot reduction. It alleviates traffic congestion at mobile gateway nodes while making full use of channel resources through a distributed multi-path relay.

- High reliability.
Because of its small hop path length with a short physical distance in each step, it alleviates noise and neighbor interference and avoids the adverse effect of route breakdown during data transmission. Thus, it reduces the packet drop rate and makes full use of spatial reuse, in which several source and destination nodes can communicate simultaneously without interference.

The rest of this paper is organized as follows. Section 2 presents a review of representative hybrid wireless networks and multi-hop routing protocols. Section 3 details the DTR protocol, with an emphasis on its routing methods, segment structure, and BS congestion control. Section 4 theoretically analyzes the performance of the DTR protocol. Section 5 shows the performance of the DTR protocol in comparison to other routing protocols. Finally, Section 6 concludes the paper.

2 RELATED WORK

In order to increase the capacity of hybrid wireless networks, various routing methods with different features have been proposed. One group of routing methods integrates the ad-hoc transmission mode and the cellular transmission mode [1], [5], [6], [14], [16], [17], [18]. Dousse et al. [6] built a Poisson Boolean model to study how a BS increases the capacity of a MANET. Lin and Hsu [5] proposed a Multihop Cellular Network (MCN) and derived its throughput. Hsieh and Sivakumar [14] investigated a hybrid IEEE 802.11 network architecture with both a distributed coordination function and a point coordination function. Luo et al. [1] proposed a unified cellular and ad-hoc network architecture for wireless communication. Cho and Haas [16] studied the impact of concurrent transmission in the downlink direction (i.e., from BSes to mobile nodes) on the system capacity of a hybrid wireless network. In [17], [18], a node initially communicates with other nodes using an ad-hoc transmission mode, and switches to a cellular transmission mode when its performance is better than that of the ad-hoc transmission.

Fig. 1: Traditional and proposed routing algorithms on the uplink direction.

The above methods are only used to assist intra-cell ad-hoc transmission rather than inter-cell transmission. In inter-cell transmission [1], [5], [6], a message is forwarded via the ad-hoc interface to the gateway mobile node that is closest to or has the highest uplink transmission bandwidth to a BS. The gateway mobile node then forwards the message to the BS using the cellular interface. However, most of these routing protocols simply combine routing schemes in ad-hoc networks and infrastructure networks, and hence inherit the drawbacks of the ad-hoc transmission mode as explained previously.

DTR is similar to the Two-hop transmission protocol [19] in terms of the elimination of route maintenance and the limited number of hops in routing. In Two-hop, when a node's bandwidth to a BS is larger than that of each neighbor, it directly sends a message to the BS. Otherwise, it chooses a neighbor with a higher channel bandwidth and sends a message to it, which further forwards the message to the BS. DTR is different from Two-hop in three aspects. First, Two-hop only considers node transmission within a single cell, while DTR can also deal with inter-cell transmission, which is more challenging and more common than intra-cell communication in the real world. Second, DTR uses distributed transmission involving multiple cells, which makes full use of system resources and dynamically balances the traffic load between neighboring cells. In contrast, Two-hop employs single-path transmission.

There are other methods proposed to improve routing performance in hybrid wireless networks. Wu et al. [3] proposed using ad-hoc relay stations to dynamically relay traffic from one cell to another in order to avoid traffic congestion at BSes. Li et al. [20] surveyed a number of multi-hop cellular network architectures in the literature, and compared and discussed methods to reduce the cost of deployment for MCNs.
The work in [21] investigates how to allocate bandwidth to users to improve the performance of hybrid wireless networks. Thulasiraman and Shen [22] further considered wireless interference in optimizing resource allocation in hybrid wireless networks. The work in [23] proposes a coalitional-game-theory-based cooperative packet delivery scheme in hybrid wireless networks. There are also some works [24], [25], [26] that study radio frequency allocation for direct transmission and relay transmission in hybrid wireless networks. These works are orthogonal to our study in this paper and can be incorporated into DTR to further enhance its performance.

The throughput capacity of hybrid wireless networks under different settings has also been an active research topic. The works in [17], [27] have studied the throughput of a hybrid network with n nodes and m stations. Liu et al. [28] theoretically studied the capacity of hybrid wireless networks under a one-dimensional network topology and a two-dimensional strip topology. Wang et al. [29] studied the multicast throughput of hybrid wireless networks and designed an optimal multicast strategy based on the deduced throughput.

3 DISTRIBUTED THREE-HOP ROUTING PROTOCOL

3.1 Assumption and Overview

Since BSes are connected with a wired backbone, we assume that there are no bandwidth and power constraints on transmissions between BSes.
We use intermediate nodes to denote relay nodes that function as gateways connecting an infrastructure wireless network and a mobile ad-hoc network. We assume every mobile node is dual-mode; that is, it has an ad-hoc network interface such as a WLAN radio interface and an infrastructure network interface such as a 3G cellular interface.

DTR aims to shift the routing burden from the ad-hoc network to the infrastructure network by taking advantage of widespread base stations in a hybrid wireless network. Rather than using one multi-hop path to forward a message to one BS, DTR uses at most two hops to relay the segments of a message to different BSes in a distributed manner, and relies on BSes to combine the segments. Fig. 2 demonstrates the process of DTR in a hybrid wireless network. We simplify the routing in the infrastructure network for clarity. As shown in the figure, when a source node wants to transmit a message stream to a destination node, it divides the message stream into a number of partial streams called segments and transmits each segment to a neighbor node. Upon receiving a segment from the source node, a neighbor node locally decides between direct transmission and relay transmission based on the QoS requirement of the application. The neighbor nodes forward these segments in a distributed manner to nearby BSes. Relying on infrastructure network routing, the BSes further transmit the segments to the BS where the destination node resides. The final BS rearranges the segments into the original order and forwards the segments to the destination. It uses the cellular IP transmission method [30] to send segments to the destination if the destination moves to another BS during segment transmission.

Our DTR algorithm avoids the shortcomings of ad-hoc transmission in the previous routing algorithms that directly combine an ad-hoc transmission mode and a cellular transmission mode.
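The split-and-reassemble idea can be sketched as follows; tagging each segment with its position is an assumed stand-in for DTR's actual segment structure (detailed in Section 3.3):

```python
from typing import List, Tuple

def split_stream(data: bytes, n_segments: int) -> List[Tuple[int, bytes]]:
    """Divide a message stream into roughly equal segments, each tagged
    with its position so the final BS can restore the original order."""
    size = -(-len(data) // n_segments)  # ceiling division
    return [(i, data[i * size:(i + 1) * size]) for i in range(n_segments)]

def reassemble(segments: List[Tuple[int, bytes]]) -> bytes:
    """Rearrange segments into their original order (done at the final BS)."""
    return b"".join(chunk for _, chunk in sorted(segments))

msg = b"hybrid wireless network message stream"
parts = split_stream(msg, 4)
parts.reverse()   # segments may reach the final BS out of order
restored = reassemble(parts)
```

Because each segment carries its position, the final BS can reorder segments that arrive through different BSes in any order.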
Rather than using multi-hop ad-hoc transmission, DTR uses two-hop forwarding by relying on node movement and widespread base stations. All other aspects remain the same as those in the previous routing algorithms (including the interaction with the TCP layer). DTR works on the Internet layer. It receives packets from the TCP layer and routes them to the destination node, where DTR forwards the packets to the TCP layer.

Fig. 2: Data transmission in the DTR protocol.

The data routing process in DTR can be divided into two steps: uplink from a source node to the first BS, and downlink from the final BS to the data's destination. Critical problems that need to be solved include how a source node or relay node chooses nodes for efficient segment forwarding, and how to ensure that the final BS sends segments in the right order so that a destination node receives the correct data. Also, since traffic is not evenly distributed in the network, how to avoid overloading BSes is another problem. Below, Section 3.2 presents the details of forwarding node selection in uplink transmission; Section 3.3 presents the segment structure that helps ensure the correct final order of segments in a message, together with DTR's strategy for downlink transmission; and Section 3.4 presents the congestion control algorithm for balancing the load between BSes.

3.2 Uplink Data Routing

A long routing path leads to high overhead, hot spots, and low reliability. Thus, DTR tries to limit the path length. It uses one hop to forward the segments of a message in a distributed manner and uses another hop to find a high-capacity forwarder for high-performance routing. As a result, DTR limits the path length of uplink routing to two hops in order to avoid the problems of long-path multi-hop routing in ad-hoc networks.
Specifically, in uplink routing, a source node initially divides its message stream into a number of segments, then transmits the segments to its neighbor nodes. The neighbor nodes forward the segments to BSes, which then forward the segments to the BS where the destination resides. Below, we first explain how to define capacity, then introduce the way a node collects the capacity information of its neighbors, and finally present the details of the DTR routing algorithm.

Different applications may have different QoS requirements, such as efficiency, throughput, and routing speed. For example, delay-tolerant applications (e.g., voice mail, e-mail, and text messaging) do not necessarily need fast real-time transmission and may make throughput the highest consideration to ensure successful data transmission. Some applications may take high mobility as their priority to avoid hot spots and blank spots. Hot spots are areas where BS channels are congested, while blank spots are areas without signals or with very weak signals. High-mobility nodes can quickly move out of a hot spot or blank spot and enter a cell with high bandwidth to a BS, thus providing efficient data transmission. Throughput can be measured by bandwidth, mobility can be measured by the speed of node movement, and routing speed can be measured by the speed of data forwarding. Bandwidth can be estimated using the non-intrusive technique proposed in [31]. In this work, we take throughput and routing speed as examples of the QoS requirement. We use a bandwidth/queue metric to reflect node capacity in throughput and fast data forwarding. The metric is the ratio of a node's channel bandwidth to its message queue size. A larger bandwidth/queue value means higher throughput and message-forwarding speed, and vice versa.

When choosing neighbors for data forwarding, a node needs the capacity information (i.e., queue size and bandwidth) of its neighbors. Also, a selected neighbor should have enough storage space for a segment.
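The bandwidth/queue metric can be sketched directly. This is an illustrative computation only; the neighbor table values are hypothetical, and the guard against an empty queue is our addition, not part of the paper.

```python
def capacity(bandwidth: float, queue_size: int) -> float:
    """Bandwidth/queue metric: channel bandwidth divided by message queue size.
    A larger value indicates higher throughput and faster message forwarding."""
    return bandwidth / max(queue_size, 1)  # guard against an empty queue (assumption)

# Hypothetical neighbor table: (name, bandwidth in Mbps, queued messages)
neighbors = [("a", 11.0, 10), ("b", 54.0, 30), ("c", 11.0, 2)]
ranked = sorted(neighbors, key=lambda n: capacity(n[1], n[2]), reverse=True)
```

Here node "c" ranks first despite its modest bandwidth, because its near-empty queue gives it the highest forwarding speed.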
To keep track of the capacity and storage space of its neighbors, each node periodically exchanges its current capacity and storage information with its neighbors. In the ad-hoc network component, every node needs to periodically send "hello" messages to identify its neighbors. Taking advantage of this policy, nodes piggyback the capacity and storage information onto the "hello" messages in order to reduce the overhead caused by the information exchanges. If a node's capacity or storage space has changed since its last "hello" message when it receives a segment, it sends its current capacity and storage information to the segment forwarder. The segment forwarder then chooses the highest-capacity nodes among its neighbors based on the most up-to-date information.

When a source node sends out message segments, it chooses the neighbors that have enough space for storing a segment, and then chooses the neighbors that have the highest capacity. In order to find higher-capacity forwarders in a larger neighborhood around the source, each segment receiver further forwards its received segment to its neighbor with the highest capacity. That is, after a neighbor node m_i receives a segment from the source, it uses either direct transmission or relay transmission. If the capacity of each of its neighbors is no greater than its own, relay node m_i uses direct transmission; otherwise, it uses relay transmission. In direct transmission, the relay node sends the segment to a BS if it is in a BS's region; otherwise, it stores the segment while moving until it enters a BS's region. In relay transmission, relay node m_i chooses its highest-capacity neighbor as the second relay node based on the QoS requirement. The second relay node then uses direct transmission to forward the segment directly to a BS. As a result, the number of transmission hops in the ad-hoc network component is confined to no more than two.
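The direct-vs-relay decision at a relay node can be sketched as follows; a simplified model in which capacities are plain numbers and BS reachability is handled elsewhere.

```python
def choose_mode(my_capacity: float, neighbor_capacities: dict):
    """Relay node decision: use direct transmission if no neighbor has strictly
    higher capacity than this node; otherwise relay once more to the best
    neighbor, which will then transmit directly to a BS (two-hop limit)."""
    if not neighbor_capacities or max(neighbor_capacities.values()) <= my_capacity:
        return ("direct", None)  # send to a BS, or carry the segment until one is reached
    best = max(neighbor_capacities, key=neighbor_capacities.get)
    return ("relay", best)
```

Because the second relay is forced to transmit directly, the ad-hoc path length never exceeds two hops regardless of how capacities are distributed.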
The small number of hops helps to increase the capacity of the network and reduce channel contention in ad-hoc transmission. Algorithm 1 shows the pseudo-code for neighbor node selection and message forwarding in DTR.

The purpose of the second-hop selection is to find a higher-capacity node as the message forwarder in order to improve performance with respect to the QoS requirement. As the neighborhood scope of a node for high-capacity node searching grows, the probability of finding higher-capacity nodes increases. Thus, a source node's neighbors are more likely to find neighbors with higher capacities than the source node. Therefore, transmitting data segments to neighbors and enabling them to choose the second relays helps to find higher-capacity nodes to forward data. If a source node has the highest capacity in its region, the segments will be forwarded back to the source node according to the DTR protocol. The source node then forwards the segments to the BSes directly due to the three-hop limit. Though sending data back and forth leads to latency and bandwidth wastage, this case occurs only when the source node is the highest-capacity node within its two-hop neighborhood. Also, this step is necessary for finding the highest-capacity nodes within the source's two-hop neighborhood, and it ensures that the highest-capacity nodes are always selected as the message forwarders. If the source node did not distribute segments to its neighbors, the higher-capacity node searching could not be conducted. Note that the data transmission rate of the ad-hoc interface (e.g., IEEE 802.11) is more than 10 times that of the cellular interface (e.g., GSM, 3G). Thus, the transmission delay for sending the data back and forth in ad-hoc transmission is negligible in the total routing latency.

IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 14, NO. 10, OCTOBER 2015

Algorithm 1.
Pseudo-Code for Neighbor Node Selection and Message Forwarding.
1: ChooseRelay() {
2:   // choose neighbors with sufficient caches and bandwidth/queue (b/q) rates
3:   Query storage size and QoS requirement info. from neighbors
4:   for each neighbor n do
5:     if n.cache.size > segment.length && n.b/q > this.b/q then
6:       Add n to R = {r_1, ..., r_m, ...} in descending order of b/q
7:     end if
8:   end for
9:   Return R
10: }
11: Transmission() {
12:   if it is a source node then
13:     // routing conducted by a source node
14:     // choose relay nodes based on QoS requirement
15:     R = ChooseRelay()
16:     Send segments to {r_1, ..., r_m} in R
17:   else
18:     // routing conducted by a neighbor node
19:     if this.b/q >= b/q of all neighbors then
20:       // direct transmission
21:       if within the range of a BS then
22:         Transmit the segment directly to the BS
23:       end if
24:     else
25:       // relay transmission
26:       node_i = getHighestCapability(ChooseRelay())
27:       Send a segment to node_i
28:     end if
29:   end if
30: }

By distributing a message's segments to different nodes to be forwarded in different directions, our algorithm reduces the congestion seen in the previous routing algorithms in hybrid wireless networks. When a node selects a relay to forward a segment, it checks the capacity of that node. Only when a node, say node m_i, has enough capacity will a segment be forwarded to node m_i. Therefore, even though the paths are not node-disjoint, there will be no congestion on the common sub-paths.

Fig. 3 shows examples of neighbor selection in DTR, in which the source node is in the transmission range of a BS. In the figures, the value in each node represents its capacity. In scenario (a), there exist nodes that have higher capacity than the source node within the source's two-hop neighborhood. If a routing algorithm directly let a source node transmit a message to its BS, high routing performance could not be guaranteed, since the source node may have very low capacity.
In DTR, the source node sends segments to its neighbors, which further forward the segments to nodes with higher capacities. In scenario (b), the source node has the highest capacity among the nodes in its two-hop neighborhood. After receiving segments from the source node, some neighbors forward the segments back to the source node, which sends the message to its BS. Thus, DTR always arranges for data to be forwarded to the BSes by nodes with high capacity. DTR achieves higher throughput and faster data forwarding speed by taking node capacity into account in data forwarding.

3.3 Downlink Data Routing and Data Reconstruction

As mentioned above, the message stream of a source node is divided into several segments. After a BS receives a segment, it needs to forward the segment to the BS where the destination node resides (i.e., the destination BS). We use the Mobile IP protocol [32] to enable BSes to know the destination BS. In this protocol, each mobile node is associated with a home BS, which is the BS in the node's home network, regardless of its current location in the network. The home network of a node contains its registration information identified by its home address, which is a static IP address assigned by an ISP. In a hybrid wireless network, each BS periodically emits beacon signals to locate the mobile nodes in its range. When a mobile node m_i moves away from its home BS, the BS where m_i currently resides detects m_i and sends its IP address to the home BS of m_i. When a BS wants to contact m_i, it contacts the home BS of m_i to find the destination BS where m_i currently resides.

However, the destination BS recorded at the home BS may not be the most up-to-date destination BS, since destination mobile nodes switch between the coverage regions of different BSes during data transmission to them. For instance, data is transmitted to BS B_i, which has the data's destination, but the destination has moved into the range of BS B_j before the data arrives at BS B_i.
To deal with this problem, we adopt the Cellular IP protocol [30] for tracking node locations. With this protocol, a BS has a home agent and a foreign agent. The foreign agent keeps track of mobile nodes moving into the ranges of other BSes. The home agent intercepts incoming segments, reconstructs the original data, and re-routes it to the foreign agent, which then forwards the data to the destination mobile node.

Fig. 3. Neighbor selection in DTR.

After the destination BS receives the segments of a message, it rearranges the segments into the original message and then sends it to the destination mobile node. A vital issue is guaranteeing that the segments are combined in the correct order. For this purpose, DTR specifies the segment structure format. Each segment contains eight fields: (1) source node IP address (denoted by S); (2) destination node IP address (denoted by D); (3) message sequence number (denoted by m); (4) segment sequence number (denoted by s); (5) QoS indication number (denoted by q); (6) data; (7) length of the data; and (8) checksum. Fields (1)-(5) form the segment head.

The role of the source IP address field is to inform the destination node where the message comes from. The destination IP address field indicates the destination node, and is used to locate the final BS. After sending out a message stream to a destination, a source node may send out another message stream to the same destination node. The message sequence number differentiates the different message streams initiated by the same source node. The segment sequence number is used to find the correct transmission sequence of the segments for transmission to a destination node. The data is the actual information that a source node wants to transmit to a destination node. The length field specifies the length of the DTR segment, including the header, in bytes.
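The eight-field segment format can be written down as a simple structure. Field names, the header length, and the byte-sum checksum are our illustrative assumptions; the paper specifies the fields but not a wire encoding or checksum algorithm.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """DTR segment with the eight fields described above.
    src..qos correspond to the segment head (S, D, m, s, q)."""
    src: str        # S: source node IP address
    dst: str        # D: destination node IP address
    msg_seq: int    # m: message sequence number (identifies the stream)
    seg_seq: int    # s: segment sequence number (position within the stream)
    qos: int        # q: QoS indication number
    data: bytes     # the actual payload
    length: int     # length of the whole segment, including the header, in bytes
    checksum: int   # used by the receiver to detect errors

def make_segment(src, dst, m, s, q, data, header_len=20):
    # header_len and the modular byte-sum checksum are assumptions for the sketch
    return Segment(src, dst, m, s, q, data, header_len + len(data), sum(data) % 65536)

seg = make_segment("10.0.0.1", "10.0.0.9", m=1, s=3, q=0, data=b"abc")
```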
The checksum is used by the receiver node to check whether the received data has errors. The QoS indication number indicates the QoS requirement of the application.

Thus, each segment's head carries the information (S, D, m, s, q) (m, s = 1, 2, 3, ...). When a segment with head (S, D, m, s, q) arrives at a BS, the BS contacts D's home BS via the Mobile IP protocol to find the destination BS where D stays. It then transmits the segment to the destination BS through the infrastructure network component. After arriving at that BS, the segment waits in the cache for its turn to be transmitted to its destination node based on its message and segment sequence numbers. At this time, if another segment arrives with a head labelled (S, D, m+1, s, q), meaning that it is from the same source node but belongs to another data stream, the BS puts it into another stream. If the segment is labeled (S, D, m, s+1, q), it belongs to the same data stream of the same source node as segment (S, D, m, s, q). The combination of the message sequence number and the segment sequence number locates the stream and the position of a segment within the stream. In order to put the segments into their correct order to retrieve the original data, the segments in the BS are transmitted to the destination node in the order of the segments' sequence in the original message. If a segment has not arrived at the final BS, its subsequent segments wait in the final BS until it arrives. Algorithm 2 shows the pseudo-code for a BS to reorder and forward segments to their destinations. Note that the cache timer can be set based on the packet rate and the storage limit. In other words, the timer should be set as large as possible to fully utilize the storage on BSes and ensure that a message has a high probability of being recovered.

Algorithm 2.
Pseudo-Code for a BS to Reorder and Forward Segments to Destination Nodes.
1: // a cache pool is built for each data stream
2: // there are n cache pools currently
3: if receives a segment (S, D, m, s, q) then
4:   if there is no cache pool with message sequence number equal to m then
5:     Create a cache pool n + 1 for stream m
6:   else
7:     // the last delivered segment of stream m has sequence number i - 1
8:     if s == i then
9:       Send out segment (S, D, m, s, q) to D
10:      i++
11:    else
12:      Add segment (S, D, m, s, q) into the cache pool of stream m
13:    end if
14:  end if
15: end if

3.4 Congestion Control in Base Stations

Compared to the previous routing algorithms in hybrid wireless networks, DTR can distribute traffic load among mobile nodes more evenly. Though the distributed routing in DTR can distribute traffic load among nearby BSes, if the traffic load is not distributed evenly in the network, some BSes may become overloaded while other BSes remain lightly loaded. We propose a congestion control algorithm to avoid overloading BSes in uplink transmission (e.g., B1, B2, and B3 in Fig. 1b) and downlink transmission (e.g., B4 in Fig. 1b), respectively.

In the hybrid wireless network, BSes send beacon messages to identify nearby mobile nodes. Taking advantage of this beacon strategy, once the workload of a BS, say B_i, exceeds a pre-defined threshold, B_i adds an extra bit to the beacon message it broadcasts to all nodes in its transmission range. Nodes near B_i then know that B_i is overloaded and will not forward segments to it. When a node near B_i, say m_i, needs to forward a segment to a BS, it would normally send the segment to B_i according to the DTR algorithm. In our congestion control algorithm, because B_i is overloaded, rather than targeting B_i, m_i forwards the segment to a lightly loaded neighboring BS of B_i. To this end, node m_i first queries for a multi-hop path to a lightly loaded neighboring BS of B_i by broadcasting a query message into the system. We set the TTL for the path-query forwarding step to a constant (e.g., 3).
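Algorithm 2's per-stream reordering can be sketched in runnable form. This is a simplified single-BS model with illustrative structure names; the paper's cache timer and timeout handling are omitted.

```python
class StreamReorderer:
    """Per-(source, destination, message) cache pool that releases segments in
    order, as in Algorithm 2. Segment sequence numbers start at 1."""
    def __init__(self):
        self.pools = {}  # (S, D, m) -> {"next": expected seg_seq, "cache": {s: data}}

    def receive(self, S, D, m, s, data):
        pool = self.pools.setdefault((S, D, m), {"next": 1, "cache": {}})
        pool["cache"][s] = data
        delivered = []
        # flush every consecutive segment starting from the expected one;
        # out-of-order segments wait in the cache until the gap is filled
        while pool["next"] in pool["cache"]:
            delivered.append(pool["cache"].pop(pool["next"]))
            pool["next"] += 1
        return delivered  # segments forwarded to D, in their original order

bs = StreamReorderer()
out = []
for s, d in [(2, b"B"), (1, b"A"), (3, b"C")]:  # segment 2 arrives first
    out += bs.receive("S1", "D1", 1, s, d)
```

Segment 2 is cached until segment 1 arrives, after which both are released together, matching lines 8-13 of Algorithm 2.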
The query message is forwarded along other nodes until a node (say m_j) near a lightly loaded BS (say B_j) is reached. Due to the broadcasting, a node may receive multiple copies of the same query. Each node only remembers m_i and the node that forwarded the first copy of the query (i.e., its preceding node), and ignores all other copies. In this way, a multi-hop path between the source node and the lightly loaded base station can be formed. Node m_j responds to the path query by adding a reply bit and the address of m_i into the beacon message it sends to its preceding node in the path. This beacon receiver likewise adds a reply bit and the address of m_i into its beacon message to its own preceding node in the path. This process repeats until m_i receives the beacon. Thus, each node on the path between m_i and m_j knows its preceding and succeeding nodes based on the address of m_i. Then, m_i's message can be forwarded along the discovered path, and the path can be reused by m_i for any subsequent messages to B_j as long as it is not broken. The neighboring BSes of an overloaded BS may also be overloaded. Since the mobile nodes near an overloaded BS know that the BS is overloaded, when they receive a query message to find a path to an underloaded BS, they do not forward the message towards their overloaded BSes.

Node m_i may receive responses from several nodes near BSes. It can choose b (b >= 1) neighboring BSes of the destination to forward the segment. The redundant transmissions enhance data transmission reliability but also increase the routing overhead. Thus, the value of b should be carefully determined based on the available resources for routing and the reliability demand. If b is set to a large value, the routing reliability is high at the cost of high overhead; if b is set to a small value, the routing reliability is lower while the overhead is reduced.
After the neighboring BSes receive the segments, they further forward the segments to the destination BS, which forwards the segments to the destination node. In this way, heavy traffic from mobile nodes to a BS can be distributed among neighboring BSes quickly.

Next, we discuss how to handle the case when the destination BS is congested. If a BS has not received confirmation from the destination BS within a certain time period after sending out a segment, it assumes that the destination BS is overloaded. It then sends the segment to b (b >= 1) lightly loaded neighboring BSes of the destination BS from its routing table. If an attempted neighboring BS does not respond within a certain time period, it is also considered overloaded, and the BS keeps trying other neighboring BSes until it finds lightly loaded ones. Redundant neighboring BSes are selected in order to increase routing reliability. The constant b should be set to an appropriate value, considering factors such as the network size and the amount of traffic, in order to achieve an optimal trade-off between overhead and reliability.

After receiving the message, each lightly loaded neighboring BS of the destination BS finds a multi-hop path to the destination mobile node. It broadcasts a path query message, which includes the IDs of the destination BS and the destination node, to the mobile nodes in its region. The path querying process is similar to the previous path querying for a lightly loaded BS. The nodes further forward the path query to their neighbors until the query reaches the destination node. Here, we do not piggyback the query onto beacon messages because this querying is for a specific mobile node rather than any mobile node near a lightly loaded BS; including the mobile node's ID in beacon messages would generate very high overhead.

In order to reduce the broadcasting overhead, a mobile node residing in the region of a BS that is not close to the destination BS drops the query.
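The overload fallback described above (detect an overloaded destination BS, then redirect to b lightly loaded neighboring BSes) can be sketched as follows. The BS names and the simple filter-then-truncate selection are illustrative assumptions.

```python
def pick_fallback_bses(neighbor_bses, overloaded, b=2):
    """Skip BSes known to be overloaded (overload bit set, or no confirmation
    within the timeout) and pick up to b lightly loaded neighboring BSes.
    A larger b raises reliability at the cost of redundant transmissions."""
    candidates = [bs for bs in neighbor_bses if bs not in overloaded]
    return candidates[:b]

# Hypothetical neighbor list of a congested destination BS
targets = pick_fallback_bses(["B3", "B4", "B5"], overloaded={"B4"}, b=2)
```

In the protocol, each selected BS then discovers its own multi-hop path to the destination node, so the redundancy factor b directly multiplies the delivery attempts.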
The nodes can determine their approximate relative positions to BSes by sensing the signal strengths from different BSes. Each node adds the strength of its received signal into its beacon message, which is periodically exchanged between neighbor nodes, so that the nodes can identify their positions relative to each other. Only those mobile nodes that are farther than the query forwarder from the forwarder's BS forward the queries in the direction of the destination BS. In this way, the query can be forwarded to the destination BS faster. After the multi-hop path is discovered, the neighboring BS sends the segment to the destination node along the path. Since the destination node is in the neighboring BS's region, the overhead of identifying a path to the destination node is small. Note that our methods for congestion control in base stations involve query broadcasting. However, broadcasting is used only when some base stations are overloaded, rather than in the normal DTR routing algorithm, in order to avoid load congestion in BSes.

Fig. 4 shows an example of congestion control on BSes when b = 2. As shown in the figure, BS B1 is congested. Then, the relay nodes of the source node's message broadcast locally by beacon piggybacking to find multi-hop paths that lead to B3 and B4, and send segments along those paths. In this way, the traffic originally targeting the overloaded B1 can be spread out to the neighboring BSes B3 and B4. B3 and B4 further forward the segments to the destination BS B6 if B6 is not congested. If B6 is also congested, B3 and B4 send the segments to the neighboring BSes of B6. Specifically, B4 sends the segment to B3; B3 does not forward the segment to another BS since it is already close to B6. B3 then finds a multi-hop path to the destination node and uses ad-hoc transmission to forward the segments to the destination node.
Similarly, when B2 wants to send a segment to the destination node, it also uses a multi-hop path for the segment transmission.

Fig. 4. Congestion control on BSes.

4 PERFORMANCE ANALYSIS OF THE DTR PROTOCOL

In this section, we analyze the effectiveness of the DTR protocol at enhancing the capacity and scalability of hybrid wireless networks. In our analysis, we use the same scenario as [17] for hybrid wireless networks, and the same scenario as [33] for the ad-hoc network component. We present the scenarios and some concepts below. We consider a large number of mobile nodes uniformly and randomly deployed over a 2D field. The moving directions of the nodes are independent and identically distributed (i.i.d.). The distribution of mobile nodes can be modeled as a homogeneous Poisson process with node density $\sigma$ [34]. That is, given an area of size $S$ in the field, the number of nodes in the area, denoted by $n(S)$, follows the Poisson distribution with parameter $\sigma S$:

$$\Pr(n(S) = k) = \frac{(\sigma S)^k e^{-\sigma S}}{k!}, \quad k = 0, 1, 2, \ldots \quad (1)$$

Besides mobile nodes, there are $M$ BSes regularly deployed in the field. The BSes divide the area into a hexagonal tessellation, in which each hexagon has side length $h$. The BSes are assumed to be connected by a wired network. We assume that the link bandwidths in the wired network are large enough that there are no bandwidth constraints between BSes. In single-path transmission, a message is sequentially transmitted along one routing path. In multi-path transmission, a message is divided into a number of segments that are forwarded along multiple paths in a distributed manner. We assume each segment has the same length $l$. Table 1 lists the notations used in our analysis.

We assume that the transmission range of all mobile nodes and all BSes is $R$ ($R > h$).
In this paper, we use the protocol model [17], [33] to describe the interference among nodes; that is, a transmission from a node (here a "node" can be either a mobile node or a BS) $v_i$ to another node $v_j$ is successful if the following two conditions are satisfied:

1) $v_j$ is within the transmission range of $v_i$, i.e.,

$$|v_i - v_j| \le R, \quad (2)$$

where $|v_i - v_j|$ represents the Euclidean distance between $v_i$ and $v_j$ in the plane.

2) For any other node $v_k$ that is simultaneously transmitting over the same channel,

$$|v_k - v_j| \ge (1 + \Delta)|v_i - v_j|. \quad (3)$$

Formula (3) guarantees a guard zone around the receiving node to prevent a neighboring node from transmitting on the same channel at the same time. The radius of the guard zone is $(1 + \Delta)$ times the distance between the sender and the receiver. The parameter $\Delta$ defines the size of the guard zone, and we require that $\Delta > 0$.

We first adopt a concept called aggregate throughput capacity, introduced in [17], [33], to measure the throughput of the network.

Definition (Aggregate Throughput Capacity of Hybrid Networks). The aggregate throughput capacity of a hybrid wireless network is of order $\Theta(f(\sigma, M))$ if there are deterministic constants $a > 0$ and $a' < +\infty$ such that

$$\lim_{M \to \infty} \Pr(P(\sigma, M) = a f(\sigma, M) \text{ is feasible}) = 1 \quad (4)$$

$$\liminf_{M \to \infty} \Pr(P(\sigma, M) = a' f(\sigma, M) \text{ is feasible}) < 1. \quad (5)$$

Since the working frequency of infrastructure networks is around 700 MHz while that of ad-hoc networks is 2.4 GHz, communications in infrastructure mode (between mobile nodes and BSes through the cellular interface) do not generate interference with ad-hoc mode. We divide the channel for infrastructure-mode transmissions into uplink and downlink parts, according to the transmission direction relative to the BSes. Accordingly, in the DTR protocol, the traffic of each S-D pair is composed of at most two intra-cell flows, one uplink flow, and one downlink flow. Since uplink traffic and downlink traffic use different sub-channels, there is also no interference between these two types of traffic.
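Conditions (2) and (3) can be checked with a few lines of geometry. This is an illustrative sketch; the range, guard-zone parameter, and coordinates are hypothetical values, not taken from the paper.

```python
import math

def successful(vi, vj, others, R=100.0, delta=0.5):
    """Protocol interference model: the transmission vi -> vj succeeds iff
    (2) |vi - vj| <= R, and
    (3) every concurrent same-channel sender vk satisfies
        |vk - vj| >= (1 + delta) * |vi - vj|  (the guard zone)."""
    d = math.dist(vi, vj)
    if d > R:
        return False
    return all(math.dist(vk, vj) >= (1 + delta) * d for vk in others)

ok = successful((0, 0), (60, 0), others=[(200, 0)])       # interferer outside the guard zone
blocked = successful((0, 0), (60, 0), others=[(120, 0)])  # interferer inside the guard zone
```

With delta = 0.5 and a 60 m link, the guard zone has radius 90 m around the receiver, so the sender at distance 140 m is harmless while the one at 60 m blocks reception.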
For each node $v_i$, we denote the bandwidth assigned to the intra-cell, uplink, and downlink sub-channels by $W_i^{int}$, $W_i^{up}$, and $W_i^{down}$, respectively. We let $W_i^{up} = W_i^{down}$ because there are equal amounts of uplink and downlink traffic. The transmission rates sum to $W_i$, i.e., $W_i^{int} + W_i^{up} + W_i^{down} = W_i$. Though no interference exists between intra-cell, uplink, and downlink traffic, interference exists between traffic of the same type within a cell and between different cells. Fortunately, there is an efficient spatial transmission schedule that can prevent such interference [17]. First, to avoid interference within a cell, no two nodes within the cell are allowed to transmit in the same traffic mode at the same time. Second, to avoid interference between different cells, the cells are spatially divided into a number of groups such that transmissions in the cells of the same group do not interfere with each other. If the groups are scheduled to transmit in a round-robin fashion, each cell will be able to transmit once every fixed amount of time without interference.

Below, we show how many groups the cells must be divided into to prevent interference. We adopt the notion of interfering neighbors introduced in [17], and give the number of cells that can be affected by a transmission in one cell. Two cells are defined to be interfering neighbors if there is a point in one cell that is within distance $(2 + \Delta)R$ of a point in the other cell. Accordingly, if two cells are not interfering neighbors, transmissions in one cell do not interfere with transmissions in the other.
[17] has proved that (1) each cell has no more than $c_1$ interfering neighbors (Lemma 1 in [17]), where $c_1$ is the constant

$$c_1 = \frac{4}{3}\left(\frac{3l + 2R + \Delta R}{3l}\right)^2, \quad (6)$$

and (2) all cells should be divided into $c_1 + 1$ groups and the whole channel should be divided into $c_1 + 1$ sub-channels, where each sub-channel is allocated to the cells in one group. Thus, the number of groups the cells must be divided into to prevent interference is $c_1 + 1$.

TABLE 1
Parameters
σ: node density
M: number of BSes
l: segment length
s_h: area size of a cell
n(S): number of nodes in area S
R: transmission range
W_i: bandwidth of a node v_i
m_i: mobile node i
P(σ, M): throughput
n(σ, M): number of nodes

Before calculating the aggregate throughput capacity of DTR, we first introduce Lemma 4.1.

Lemma 4.1. The number of cells that have mobile nodes is $\Theta(M)$.

Proof. Denote the number of cells having mobile nodes by $M_1$. To prove $M_1 = \Theta(M)$, we need to prove that there exist deterministic constants $a > 0$ and $a' < +\infty$ such that

$$\lim_{M \to \infty} \Pr(M_1 = aM) = 1, \quad (7)$$

$$\liminf_{M \to \infty} \Pr(M_1 = a'M) < 1. \quad (8)$$

For Formula (8), let $a' = 2$. Because the number of cells having mobile nodes is upper bounded by $M$,

$$\liminf_{M \to \infty} \Pr(M_1 = 2M \text{ is feasible}) = 0. \quad (9)$$

Now, we prove that Formula (7) can also be satisfied for some constant $a$. Because the number of nodes in a cell follows a Poisson distribution and the size of each cell (hexagon) is $s_h = \frac{3\sqrt{3}}{2}h^2$, the probability that no mobile node is in a cell equals

$$\Pr(n(s_h) = 0) = \frac{(\sigma s_h)^0 e^{-\sigma s_h}}{0!} = e^{-\sigma s_h}. \quad (10)$$

Consider an arbitrary cell $k$, and let $X_1, X_2, \ldots, X_k, \ldots, X_M$ be i.i.d. random variables, where $X_k$ represents whether cell $k$ has mobile nodes. Then, $X_k$ is defined as follows:

$$X_k = \begin{cases} 1 & \text{cell } k \text{ has mobile nodes} \\ 0 & \text{cell } k \text{ does not have mobile nodes} \end{cases} \quad (11)$$

and $E(X_k) = 1 - e^{-\sigma s_h}$. For simplicity, let $c_2 = 1 - e^{-\sigma s_h}$. Then, $M_1 = \sum_{k=1}^{M} X_k$.
By the Strong Law of Large Numbers (SLLN) [34],

$$\Pr\left(\lim_{M \to \infty} \frac{\sum_{k=1}^{M} X_k}{M} = c_2\right) = 1, \quad (12)$$

which implies that $\lim_{M \to \infty} \Pr(M_1 = c_2 M) = 1$, indicating that when $a = c_2$, Formula (7) can also be satisfied. □

Lemma 4.2. Let $n(\sigma, M)$ denote the number of mobile nodes in the whole network. Then,

$$\lim_{M \to \infty} \Pr(n(\sigma, M) = \sigma s_h M) = 1. \quad (13)$$

Proof. Let $Z_1, Z_2, \ldots, Z_M$ be i.i.d. random variables representing the numbers of nodes in cells $1, 2, \ldots, M$, respectively. Then, $n(\sigma, M) = \sum_{k=1}^{M} Z_k$. Because each $Z_k$ follows a Poisson distribution with parameter $\sigma s_h$, $E(Z_k) = \sigma s_h$ for all $1 \le k \le M$. According to the SLLN,

$$\Pr\left(\lim_{M \to \infty} \frac{\sum_{k=1}^{M} Z_k}{M} = \sigma s_h\right) = 1, \quad (14)$$

which implies that $\lim_{M \to \infty} \Pr(\sum_{k=1}^{M} Z_k = \sigma s_h M) = 1$, and hence $\lim_{M \to \infty} \Pr(n(\sigma, M) = \sigma s_h M) = 1$. □

Theorem 4.1. For a hybrid network with $M$ BSes and mobile node density $\sigma$, where each node's intra-cell, uplink, and downlink sub-channel bandwidths satisfy

$$W_i^{down} = W_i^{up} = W^{up} = W/4, \quad W_i^{int} = W^{int} = W/2, \quad (15)$$

the aggregate throughput capacity of DTR is

$$P(\sigma, M) = \Theta(MW). \quad (16)$$

Proof. To prove $P(\sigma, M) = \Theta(MW)$, we need to prove that there exist deterministic constants $a > 0$ and $a' < \infty$ such that

$$\lim_{M \to \infty} \Pr\{P(\sigma, M) = aMW \text{ is feasible}\} = 1, \quad (17)$$

$$\liminf_{M \to \infty} \Pr\{P(\sigma, M) = a'MW \text{ is feasible}\} < 1. \quad (18)$$

Recall that no two nodes within a cell can transmit simultaneously in the same traffic mode, so the throughput $P$ is upper bounded by $MW/4$, which can be achieved only if each cell has one node sending a message. Hence, Formula (18) can be satisfied by setting $a'$ to $1/2$.

Then, we show how Formula (17) can be satisfied. Since the same message has to go through an uplink and a downlink and is counted only once in the throughput capacity, calculating the throughput of the whole network is equivalent to calculating the throughput of the uplink traffic $P^{up}$ or the throughput of the downlink traffic $P^{down}$. Note that calculating the intra-cell traffic throughput would not be accurate because a message may be transmitted twice in intra-cell mode.
In this proof, we calculate $P^{up}$.

First, consider the throughput of the uplink traffic of an arbitrary cell $k$, denoted by $P_k^{up}$. Since the schedule allocates $1/(c_1 + 1)$ of the time slots to this cell,

$$P_k^{up} = \frac{W^{up}}{c_1 + 1}. \quad (19)$$

Then, we consider the throughput of the whole network. Let $P^{up} = \sum_{i=1}^{M} P_i^{up} X_i$ represent the throughput of the uplink traffic. Then we have

$$\lim_{M \to \infty} \Pr\left(P^{up} = \frac{c_2 M W}{4(c_1 + 1)}\right) = \lim_{M \to \infty} \Pr\left(\sum_{i=1}^{M} P_i^{up} X_i = \frac{c_2 M W^{up}}{c_1 + 1}\right) = \lim_{M \to \infty} \Pr\left(\sum_{i=1}^{M} X_i = c_2 M\right) = 1 \quad \text{(by Lemma 4.1)}.$$

Accordingly, Formula (17) can be satisfied when $a$ is set to $\frac{c_2}{4(c_1 + 1)}$. □

Corollary 4.1. With the restriction in Theorem 4.1, DTR can achieve $\Theta(W)$ throughput per S-D pair.

Proof. Denote the throughput per S-D pair by $P$, which equals

$$P = \frac{P(\sigma, M)}{n}. \quad (20)$$

Obviously, $P$ is upper bounded by $\frac{W}{4}$ because each node has at most $\frac{W}{4}$ for uplink traffic (or downlink traffic), which equals its S-D pair throughput. By Lemma 4.2 and Theorem 4.1, we can derive that

$$\lim_{M \to \infty} \Pr\left(P = \frac{c_2 W}{4(c_1 + 1)\sigma s_h}\right) = \lim_{M \to \infty} \Pr\left(\frac{P(\sigma, M)}{n(\sigma, M)} = \frac{c_2 W}{4(c_1 + 1)\sigma s_h}\right) \ge \lim_{M \to \infty} \Pr\left(P(\sigma, M) = \frac{c_2 W M}{4(c_1 + 1)}\right)\Pr(n(\sigma, M) = \sigma s_h M) = \lim_{M \to \infty} \Pr\left(P(\sigma, M) = \frac{c_2 W M}{4(c_1 + 1)}\right) = 1,$$

which implies that $\lim_{M \to \infty} \Pr\left(P = \frac{c_2 W}{4(c_1 + 1)\sigma s_h}\right) = 1$. □

Corollary 4.1 shows that DTR provides a constant throughput for each pair of nodes, regardless of the number of nodes in each cell, due to its spatial reuse of the system. Theorem 4.1 and Corollary 4.1 show that the aggregate throughput capacity and the throughput per S-D pair of DTR are $\Theta(MW)$ and $\Theta(W)$, respectively. The work in [17] proves that DHybrid achieves $\Theta(MW)$ infrastructure aggregate throughput, and the work in [33] proves that pure ad-hoc transmission achieves $\Theta\left(\frac{W}{\sqrt{n \log n}}\right)$ throughput per S-D pair. The results demonstrate that the throughput rates of DTR and DHybrid are higher than that of pure ad-hoc transmission. This is because pure ad-hoc transmission is not efficient in a large-scale network [35].
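The Poisson-occupancy step behind Lemma 4.1 (the fraction of non-empty cells converges to $c_2 = 1 - e^{-\sigma s_h}$) can be checked numerically. The simulation below is our illustration, not from the paper; it uses Knuth's inversion method for Poisson sampling, which is adequate for small means.

```python
import math, random

def frac_nonempty_cells(sigma_sh: float, M: int, rng: random.Random) -> float:
    """Draw the node count of each of M cells from a Poisson distribution with
    mean sigma*s_h and return the fraction of non-empty cells, which should
    approach c_2 = 1 - exp(-sigma*s_h) as M grows (Lemma 4.1 / SLLN)."""
    def poisson(lam):
        # Knuth's inversion method: multiply uniforms until falling below e^-lam
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1
    return sum(1 for _ in range(M) if poisson(sigma_sh) > 0) / M

rng = random.Random(1)
observed = frac_nonempty_cells(sigma_sh=2.0, M=20000, rng=rng)
expected = 1 - math.exp(-2.0)  # c_2 for sigma*s_h = 2
```

With 20,000 cells, the sampling error of the fraction is a few thousandths, so the observed value sits close to $c_2 \approx 0.865$.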
A large network size reduces the path utilization efficiency and increases node interference. Facilitated by the infrastructure network, DTR and DHybrid avoid long-distance transmissions, leading to a higher transmission throughput.

Proposition 4.1. Suppose a mobile node needs to allocate a total of U segments of the same length to L neighboring mobile nodes m_1, ..., m_L, which have uplink bandwidths W_1^{up}, ..., W_L^{up}, respectively. Let U_i denote the number of segments to be allocated to m_i (i = 1, 2, ..., L). To minimize the average latency of these segments, the optimal allocation should satisfy U_1/W_1^{up} = \cdots = U_L/W_L^{up}. The minimized average latency equals \frac{Ul}{2\sum_{i=1}^{L} W_i^{up}}.

Proof. Recall that each segment has length l. Then, mobile node m_i requires l/W_i^{up} time to transmit a segment. Therefore, the jth segment that m_i needs to transmit has to wait (j-1)l/W_i^{up} slots. Hence, the total latency of the segments that m_i needs to transmit to its BS equals

  \sum_{j=1}^{U_i} \frac{(j-1)l}{W_i^{up}} = \big(0 + 1 + \cdots + (U_i - 1)\big)\frac{l}{W_i^{up}} \approx \frac{U_i^2 l}{2 W_i^{up}}.  (21)

Hence, the average latency of transmitting all the messages is \frac{1}{U}\sum_{i=1}^{L} \frac{U_i^2 l}{2 W_i^{up}}. According to the Cauchy-Schwarz inequality [34], the average latency is lower bounded:

  \frac{1}{U}\sum_{i=1}^{L} \frac{U_i^2 l}{2 W_i^{up}} = \frac{l}{2U\sum_{i=1}^{L} W_i^{up}} \Big( \sum_{i=1}^{L} \frac{U_i^2}{W_i^{up}} \Big)\Big( \sum_{i=1}^{L} W_i^{up} \Big) \ge \frac{l}{2U\sum_{i=1}^{L} W_i^{up}} \Big( \sum_{i=1}^{L} \sqrt{\frac{U_i^2}{W_i^{up}}}\sqrt{W_i^{up}} \Big)^2 = \frac{Ul}{2\sum_{i=1}^{L} W_i^{up}}.  (22)

When \sqrt{U_1^2/W_1^{up}}\,/\sqrt{W_1^{up}} = \cdots = \sqrt{U_L^2/W_L^{up}}\,/\sqrt{W_L^{up}}, or equivalently, U_1/W_1^{up} = \cdots = U_L/W_L^{up}, the average segment latency \frac{1}{U}\sum_{i=1}^{L} \frac{U_i^2 l}{2 W_i^{up}} achieves the minimum value \frac{Ul}{2\sum_{i=1}^{L} W_i^{up}}. □

Proposition 4.1 indicates that forwarding segments to the nearby nodes with the highest capacity minimizes the average latency of messages in the cell. It also balances the transmission load of the mobile nodes within a cell.

Proposition 4.2. A source node in DTR can find relay nodes for message forwarding with probability \sum_{k=1}^{\infty} \frac{k-1}{k} \cdot \frac{c_r^k e^{-c_r}}{k!}, where c_r = s\pi R^2.

Proof.
Let m denote the number of nodes within m_i's transmission area, and define the indicator variable Q_i by

  Q_i = \begin{cases} 1 & \text{if } m_i \text{ is the highest-capacity node} \\ 0 & \text{otherwise.} \end{cases}  (23)

Then,

  \Pr\{m_i \text{ can find relays for message forwarding}\} = \sum_{k=0}^{\infty} \Pr(Q_i = 0 \mid m = k)\Pr(m = k) = \sum_{k=1}^{\infty} \frac{k-1}{k} \cdot \frac{c_r^k e^{-c_r}}{k!}. □

Proposition 4.2 indicates that in a high-density network, a source node in DTR can find relay nodes for message forwarding with a high probability. For example, assume that the average number of neighbor nodes of a source node is 10; with the daily increasing number of mobile devices, such an assumption is realistic. Then, the probability of not being able to find any relay in the range of a node is 1 - \sum_{k=1}^{\infty} \frac{k-1}{k} \cdot \frac{10^k e^{-10}}{k!} \approx 0.12, which is very small. Therefore, in a high-density network, a source node can find neighbors for message forwarding with a high probability.

We use DHybrid to denote the group of routing protocols in hybrid wireless networks that directly combine the ad-hoc transmission mode and the infrastructure transmission mode [1], [5], [6], [12], [13], [14], [15], [16], [17], [18].

1984 IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 14, NO. 10, OCTOBER 2015

Proposition 4.3. In a hybrid wireless network, the DHybrid routing protocol leads to load imbalance among the mobile nodes in a cell.

Proof. Fig. 5a shows a cell with a BS and a randomly picked mobile node m_i in the range of the BS. The shaded region represents all possible positions of the source nodes that choose m_i as the relay node in DHybrid. The total traffic passing through node m_i is the sum of the traffic generated by the nodes in the shaded region. The area of the shaded region is

  S = sh - \pi D^2 \quad (0 < D < h),  (24)

where D is the distance between the BS and relay node m_i, and sh is the area of the cell.
Therefore, the expected value of the traffic passing through node m_i is

  W \cdot s \cdot (sh - \pi D^2) \quad (0 < D < h),  (25)

where W is the data transmission rate of a source node, and s is the density of the nodes in the region. Equation (25) shows that the traffic passing through node m_i decreases as D increases. That is, the nodes closer to the BS have a higher load than the nodes staying at the brim of the cell. □

Proposition 4.4. In a hybrid wireless network, DTR achieves a more balanced load distribution among the mobile nodes in each cell.

Proof. The shaded region in Fig. 5b represents all possible positions of the source and relay nodes that choose node m_i as a relay node. Suppose m neighbor nodes are chosen as relay nodes; then the expected traffic passing through node m_i is \frac{W}{m} \cdot s \cdot \pi R^2, which shows that the traffic going through node m_i is independent of its location relative to its BS. Since every node in the cell has an equal probability of generating traffic, the traffic load is balanced among the nodes in the cell. □

5 PERFORMANCE EVALUATION

This section demonstrates the properties of DTR through simulations on NS-2 [36], in comparison to DHybrid [17], Two-hop [19], and AODV [8]. In DHybrid, a node first uses broadcasting to discover a multi-hop path to its own BS and then forwards a message in the ad-hoc transmission mode along the path. During the routing process, if the transmission rate (i.e., bandwidth) of the next hop toward the BS is lower than a threshold, rather than forwarding the message to the neighbor, the node forwards the message directly to its BS. The source node is notified if an established path breaks during data transmission. If a source later sends a message to the same destination, it reuses the previously established path if it is not broken. In the Two-hop protocol, a source node selects the better transmission mode between direct transmission and relay transmission.
If the source node can find a neighbor that has higher bandwidth to the BS than itself, it transmits the message to the neighbor. Otherwise, it transmits the message directly to the BS.

Unless otherwise specified, the simulated network consists of 50 mobile nodes and four BSes. In the ad-hoc component of the hybrid wireless network, mobile nodes are randomly deployed around the BSes in a field of 1,000 x 1,000 square meters. We used the Distributed Coordination Function (DCF) of IEEE 802.11 as the MAC-layer protocol. The transmission range of the cellular interface was set to 250 meters, and the raw physical link bandwidth was set to 2 Mbit/s. The transmission power of the ad-hoc interface was set to the minimum value required to keep the network connected most of the time, even when nodes are in motion. In this way, the influence of the transmission range on the performance of the different methods is controlled. Specifically, we set the transmission range of the ad-hoc interface to 1.5 times the average distance between neighboring nodes, which can be obtained by measuring the simulated network. We used the two-ray propagation model for the physical layer. Constant bit rate (CBR) traffic was selected as the traffic mode in the experiments, with a rate of 640 kbps. In the experiments, we randomly chose four source nodes to continuously send messages to randomly chosen destination nodes. The number of channels for each BS was set to 10. We set the number of redundant routing paths b in Section 3.4 to 1. We assumed that there was no capacity degradation during transmission between BSes; this assumption is realistic considering the advanced technologies and hardware presently used in wired infrastructure networks. There was no message retransmission for failed transmissions in the experiments.

We employed the random way-point mobility model [37] to generate the moving direction, speed, and pause duration of each node.
In this model, each node moves to a random position with a speed randomly chosen from 1-20 m/s. The pause time of each node was set to 0. We set the number of segments of a message to the connection degree of the source node. The simulation warm-up time was set to 100 s and the simulation time to 1,000 s. We conducted each experiment five times and used the average value as the final result. To make the methods comparable, we did not use the congestion control algorithm in DTR unless otherwise indicated.

5.1 Scalability

Fig. 6 shows the average throughput, measured in kbps per S-D pair, of the different routing protocols versus the number of mobile nodes in the system. The figure shows that the throughput of DTR remains almost the same across different network sizes. This result conforms to Corollary 4.1. DTR uses distributed multi-path routing to take full advantage of spatial reuse and to avoid transmission congestion on a single path. Unlike the multi-hop routing in mobile ad-hoc networks, DTR does not need path query and maintenance. Also, it limits the path length to three to avoid the problems of long-path transmission. The throughput of DHybrid and AODV decreases as the number of nodes in the network increases. This is mainly because when the network size increases, more beacon messages are generated in the network.

Fig. 5. The traffic load in DHybrid and DTR.

Also, the long transmission paths lead to high transmission interference. Nodes in these methods thus suffer from intense interference, leading to more transmission failures and degraded overall throughput. In addition, the increase in mobile nodes in the system leads to high network dynamism, resulting in frequent route re-establishments. The short routing paths in Two-hop reduce congestion and signal interference, thus enabling better spatial reuse, as in DTR.
Meanwhile, Two-hop enables nodes to adaptively switch between direct transmission and relay transmission. Hence, part of the transmission load is transferred to relay nodes, which carry the messages until meeting the BSes. As a result, the gateway nodes connecting mobile nodes and BSes are not easily overloaded. Therefore, the throughput of Two-hop is higher than that of DHybrid. However, since the number of message routing hops is confined to one, Two-hop may not find the node with the best transmission rate to the BSes, because of the short transmission range of the ad-hoc interface. Therefore, the throughput of Two-hop is lower than that of DTR, especially in a network with high node density. The reason that AODV has the lowest throughput per S-D pair is its long transmission paths.

Fig. 7 shows the throughput per S-D pair versus the number of BSes for the different routing protocols. The number of BSes was varied from 3 to 6, with the BSes uniformly distributed in the network. We can see from the figure that as the number of BSes increases, the throughputs of DTR, Two-hop, and DHybrid increase, while the throughput of AODV stays nearly constant. In DTR, Two-hop, and DHybrid, as the number of BSes increases, the total number of nodes close to the BSes increases. More nodes then have high transmission rates to the BSes, leading to a throughput increase. In AODV, since the traffic between S-D pairs does not travel through BSes, the throughput of an S-D pair is not affected by the increased number of BSes in the network. The figure also shows that the throughput of DTR is consistently larger than that of Two-hop, and the throughput of Two-hop is consistently larger than that of DHybrid. AODV consistently has the lowest throughput, for the same reasons as in Fig. 6.

5.2 Transmission Delay

Fig. 8 shows the average transmission delay of S-D pairs for successfully delivered messages in the different routing protocols versus network size. The network size was varied from 20 to 100 nodes in steps of 20.
Transmission delay is the amount of time it takes for a message to be transmitted from its source node to its destination node. From the figure, we see that DTR generates the smallest delay. In DTR, each source node first divides its messages into smaller segments and then forwards them to the nearby nodes with the highest capacity, which leads to a more balanced transmission load distribution among nodes than in the previous methods. According to Proposition 4.1, the average latency is minimized when the transmission loads of all the nodes are balanced. Hence, DTR has smaller latency than the previous methods. The delay of DHybrid is five to six times larger than that of DTR. DHybrid uses a single transmission path, while DTR uses multiple paths. Recall that we set the number of segments of a message to the connection degree of the source node in DTR. Thus, the ratio of the delay of DHybrid to that of DTR equals the average connection degree. As the number of nodes in the system increases, the connection degree of each node increases, and this ratio grows. This is caused by two factors. First, a higher node density leads to longer path lengths in DHybrid, resulting in a longer delay because of a higher likelihood of link breaks. Second, a higher node density enables a node to quickly find relay nodes to forward messages in DTR, as indicated in Proposition 4.2.

DTR also produces a shorter transmission delay than Two-hop, for two reasons. First, the multi-path parallel routing of DTR saves much transmission time, as shown in Proposition 4.1. Second, the distributed routing of DTR enables some messages to be forwarded to the destination BS's neighboring cells with high transmission rates, rather than waiting in the current hot cell for a transmission channel. We can also observe that Two-hop produces lower delay than DHybrid.
This is because the delay of DHybrid includes the time for establishing a path as well as the time for data transmission. Also, the multi-hop transmission component of DHybrid results in a higher delay due to the queuing delay at each hop. Because of its long-distance transmissions without support from an infrastructure network, AODV generates the longest delay.

Fig. 6. Throughput vs. network size (simulation).
Fig. 7. Throughput vs. number of BSes.
Fig. 8. Delay vs. network size.

Fig. 9 plots the average communication delay per S-D pair for successfully delivered messages versus the number of BSes for the different routing protocols. The figure shows that the increasing number of BSes in the system leads to a decrease in the communication delay between nodes in DTR, Two-hop, and DHybrid, but does not affect the communication delay in AODV. In DTR, Two-hop, and DHybrid, as the number of BSes increases, more nodes can stay close to the BSes, leading to fewer communication hops and better transmission links between nodes and BSes. Thus, the transmission delay between the nodes is reduced. Since the communication between S-D pairs in AODV does not rely on BSes, AODV maintains a constant communication delay. The figure also shows that the communication delay between S-D pairs follows DTR < Two-hop < DHybrid < AODV, for the same reasons as in Fig. 8.

5.3 Communication Overhead

We use the generation rate of control messages in the network and MAC layers, in kbps, to represent the communication overhead of the routing protocols. Fig. 10 illustrates the communication overhead of DTR, Two-hop, DHybrid, and AODV versus network size. We can see that the communication overheads of DTR and Two-hop are very close. This is because both DTR and Two-hop are short-distance, few-hop transmission protocols. DTR has slightly higher communication overhead than Two-hop because DTR utilizes three-hop transmission, which has one more hop than two-hop transmission.
However, this marginal overhead increase leads to a much higher transmission throughput, as shown in Fig. 6. DHybrid generates much higher overhead than DTR and Two-hop because of the high overhead of routing path querying. The pure AODV routing protocol results in much more overhead than the others. This is because, without an infrastructure network, the messages in AODV travel a long way from the source node to the destination node through much longer paths.

5.4 Effect of Mobility

In order to see how node mobility influences the performance of the routing protocols, we evaluated the throughput of the four transmission protocols with different node mobilities. Fig. 11 plots the throughput of DTR, DHybrid, Two-hop, and AODV versus node moving speed. From the figure, we can see that the increasing mobility of the nodes does not adversely affect the performance of DTR and Two-hop. It is intriguing to find that high mobility can even help DTR increase its throughput, and that Two-hop generates constant throughput regardless of mobility. This is because the DTR and Two-hop transmission modes do not need to query and rely on multi-hop paths; thus, they are not affected by network partitions and topology changes. Moreover, since DTR transmits the segments of a message in a distributed manner, as mobility increases, a mobile node can meet more nodes in a shorter time period. Therefore, DTR enables the segments to be quickly sent to high-capacity nodes. As node mobility increases, the throughput of DHybrid decreases. In DHybrid, the messages are routed in a multi-hop fashion. When the links between nodes are broken because of node mobility, the messages are dropped. Therefore, when nodes have lower mobility, the links between the mobile nodes last longer and more messages can be transmitted. Hence, the throughput of DHybrid is adversely affected by node mobility.
However, since DHybrid can adaptively switch between ad-hoc transmission and cellular transmission, the throughput of DHybrid is much higher than AODV's. With no infrastructure network, AODV produces much lower throughput than the others. Its throughput also drops as node mobility increases, for the same reasons as DHybrid.

5.5 Effect of Workload

We measured the total throughput of the BSes on the messages received by the BSes. Fig. 12 shows the total throughput of the BSes versus the number of source nodes. We can see that DTR and Two-hop have much higher throughput increase rates than DHybrid. This is because in DTR and Two-hop, the number of transmission hops from a source node to a BS is small. Meanwhile, each node can adaptively switch between relay transmission and direct transmission based on the transmission rates of its neighbors. Hence, part of a source node's transmission load is transferred to a few relay nodes, which carry the messages until meeting the BSes. Therefore, the gateway mobile nodes are less likely to be congested. However, nodes in DHybrid cannot adaptively adjust the next forwarding hop, because it is predetermined by the routing path. Messages are always forwarded to the mobile gateway nodes that are closer to the BSes or that have higher transmission rates. Therefore, these mobile gateway nodes can easily become congested as the workload of the system increases, leading to many message drops. Consequently, when the number of source nodes is larger than 4, the throughput of DHybrid remains nearly constant. This is also the reason that the throughput of DHybrid is consistently lower than those of DTR and Two-hop. Additionally, the figure shows that the overall throughput of Two-hop is lower than that of DTR.

Fig. 9. Delay vs. number of BSes.
Fig. 10. Overhead vs. network size.
Fig. 11. Throughput vs. mobility.
This is because most of the traffic in Two-hop is confined to a single cell. When a BS in a cell is congested, the traffic cannot be transferred to other cells. In contrast, DTR's three-hop distributed forwarding mechanism enables it to distribute the traffic among the BSes in a balanced manner. Therefore, the BSes in DTR do not become congested easily. In addition, as the forwarding mechanism gives nodes more flexibility in choosing relay nodes with higher transmission rates for message forwarding to the BSes, the overall BS throughput in DTR is larger than in Two-hop.

5.6 Effect of the Number of Routing Hops

We conducted experiments to determine the optimal number of routing hops for routing in hybrid wireless networks. We tested the throughput per S-D pair for x-hop DTR, where x was varied from 1 to 4. In the one-hop routing, a node directly transmits a message to the BS without message division. In the other routing protocols, the (x-1)th hop chooses the better transmission mode between direct transmission and relay transmission. Also, in the four-hop routing, the second relay node randomly chooses the third relay node.

Fig. 13 shows the average throughput per S-D pair versus network size in DTR. As the figure shows, as the network size increases, the node throughput remains constant regardless of the number of forwarding hops in a route. The reason is the same as in Fig. 6. We can also see from the figure that the throughput of the four protocols follows 3-hop > 4-hop > 2-hop > 1-hop. In the one-hop routing, each node only transmits segments directly to a BS, regardless of its current transmission rate. In the two-hop routing, if the transmission rate of a node's neighbor is higher than that of the node, it asks its neighbor to forward the segment to a BS. Therefore, the two-hop routing has higher throughput than the one-hop routing.
The three-hop routing greatly increases the number of node options for segment routing, since the number of nodes that the source node can encounter increases from d to d^2, where d is the average node degree. Thus, a node with a greater transmission rate can be chosen as the forwarding node. Meanwhile, the three-hop routing greatly facilitates inter-cell communication, because a node has a higher probability of reaching a neighboring BS within a three-hop path length than within a two-hop path length. Therefore, the throughput of the three-hop routing is much higher than that of the two-hop routing. The figure also shows that the four-hop routing produces lower throughput than the three-hop routing. The reason is that three hops are enough to find a hop with a high transmission rate and to achieve inter-cell communication, given the widespread BSes. The four-hop routing increases the forwarding delay due to the greater number of nodes in a route; thus, it cannot increase the uploading transmission rate of messages.

5.7 Load Distribution within a Cell

In this experiment, we tested the load distribution of the mobile nodes in a randomly chosen cell of the hybrid wireless network employing each of the DTR, DHybrid, and Two-hop protocols. We normalized the distance from a mobile node to its base station according to the function D/R_b, where D is the actual distance and R_b is the radius of the cell. We divided the space of the cell into several concentric circles and measured the loads of the nodes on each circle to show the load distribution.

Fig. 14 shows the average load of a node as a function of the normalized distance from the node to the BS in the chosen cell. The figure shows that most of the traffic load in DHybrid is located at the nodes near the BS. The nodes far from the BS have very low load. These results conform to Proposition 4.3. In DHybrid, if a source node wants to access the Internet backbone or engage in inter-cell communication, it transmits the messages to the BSes in a multi-hop fashion.
Therefore, the nodes near the BSes have the highest load. On the other hand, since there is little traffic going through the nodes at the brim of a cell, the load of these nodes is small. As a result, some nodes can easily become hot spots while the resources of other nodes are not fully utilized. This load imbalance prevents DHybrid from fully utilizing system resources. The traffic load of DTR is almost evenly distributed in the system, which is in line with Proposition 4.4. In DTR, the traffic from a source node is distributed among a number of relay neighbors for further data forwarding. The nodes at the brim of the cell also take responsibility for message forwarding, since the neighbors of the brim nodes may be located in other cells with good transmission channels. In Two-hop, the source node considers direct transmission or one-hop relay transmission based on the channel condition. Since the relay node is chosen within one hop, the messages will not gather close to the BS, due to the limited transmission range. However, because of its sequential transmission, Two-hop cannot achieve load balance among the nodes in a cell as well as DTR can.

Fig. 12. Throughput of BSes vs. number of source nodes.
Fig. 13. Throughput vs. number of hops.
Fig. 14. Load distribution in a cell.

5.8 Load Balance between Cells

In this experiment, we tested the effectiveness of the congestion control algorithm in DTR. We also added a congestion control algorithm to DHybrid: when a node receives beacon messages from its BS indicating that the BS is overloaded, the node broadcasts a query message to find a path to a nearby uncongested BS. We selected two BSes out of the total of four BSes. In the range of each of the two selected BSes, we randomly selected one mobile node as the source node to send messages to a randomly selected destination node in the network.
Once the source node moved out of the range of the selected BS, another mobile node in that range was selected as the source node. In order to show the load distribution of the BSes under the different protocols, we ranked the BSes by throughput; the BS with the highest throughput has rank 1.

Fig. 15 shows the throughput of each BS versus the BS rank. We can see from the figure that in Two-hop, the throughput of the first two BSes is extremely high while the throughput of the last two BSes is extremely small. This is because the two-hop routing path in Two-hop is not long enough to forward messages from a congested BS to a lightly loaded BS. Therefore, the traffic cannot be shifted to the neighboring lightly loaded BSes, leading to an unbalanced load distribution. We can also see from the figure that in DTR, the variance of the throughputs of the different BSes is small. The reason is that three forwarding hops are enough for a mobile node to reach a neighboring BS and hence to balance the load between the BSes. Meanwhile, the congestion control algorithm in DTR can effectively switch traffic from a highly loaded cell to a lightly loaded cell. Because the BSes of ranks 1 and 2 in DTR are not congested, their throughput is less than that of the corresponding BSes in Two-hop; also, the throughput of the BSes of ranks 3 and 4 in DTR is much higher than that of the corresponding BSes in Two-hop. DHybrid achieves a more balanced load distribution between BSes than Two-hop, since it employs a congestion control algorithm. In DHybrid, if a previously established path to a destination is not broken, a node still uses this path to transmit messages to the same destination; thus, the nodes cannot dynamically balance the load between BSes. Also, when a node finds that its current BS is congested, it takes a long time to find a path to a non-congested BS by re-issuing a query message toward a neighboring non-congested BS, which greatly reduces the throughput of the system.

Fig. 16 further shows the throughput of the BSes versus simulation time for the three routing protocols. At the beginning, the BSes with ranks 1 and 2 are congested and those with ranks 3 and 4 do not have much traffic. Thus, the three subfigures show that, at the beginning, the BSes with ranks 1 and 2 have high throughput in all three protocols, while those with ranks 3 and 4 have extremely low throughput. Fig. 16a shows the throughput of the BSes in DTR. As shown in the figure, since DTR can adaptively adjust the traffic among the BSes using its congestion control algorithm, the throughput of the two highly congested BSes is distributed to the neighboring BSes. As the traffic is forwarded from the BSes of ranks 1 and 2 to the BSes of ranks 3 and 4, the throughputs of these BSes become very similar later in the simulation. This result indicates the effectiveness of the congestion control algorithm in DTR for load balancing between cells.

Fig. 16b shows the throughput of the BSes in Two-hop. In Two-hop, since the source nodes cannot effectively move traffic between BSes, the BSes with ranks 1 and 2 constantly have the highest throughput, while the BSes with ranks 3 and 4 constantly have low throughput. The latter produce throughput only when the immediate neighbors of the source node are in the range of the BSes neighboring the source node's BS; however, the probability of such cases is very small. Fig. 16c shows the throughput of the BSes in DHybrid. As the nodes in DHybrid cannot effectively balance the load between the BSes, the throughput of the BSes of ranks 1 and 2 is much larger than that of the BSes of ranks 3 and 4. Comparing Figs. 16b and 16c, we find that the throughput in DHybrid is lower than that in Two-hop.

Fig. 15. Load distribution among BSes.
Fig. 16. Base station load vs. simulation time.
This is because the multi-hop transmission in the ad-hoc component of DHybrid greatly reduces the throughput. Meanwhile, the mobile gateway nodes in DHybrid easily become congested, leading to more message drops.

6 CONCLUSIONS

Hybrid wireless networks have been receiving increasing attention in recent years. A hybrid wireless network combines an infrastructure wireless network and a mobile ad-hoc network, leveraging their advantages to increase the throughput capacity of the system. However, current hybrid wireless networks simply combine the routing protocols of the two types of networks for data transmission, which prevents them from achieving higher system capacity. In this paper, we propose a Distributed Three-hop Routing (DTR) protocol that integrates the dual features of hybrid wireless networks in the data transmission process. In DTR, a source node divides a message stream into segments and transmits them to its mobile neighbors, which further forward the segments to their destination through an infrastructure network. DTR limits the routing path length to three, and always arranges for high-capacity nodes to forward data. Unlike most existing routing protocols, DTR produces significantly lower overhead by eliminating route discovery and maintenance. In addition, its distinguishing characteristics of short path length, short-distance transmission, and balanced load distribution provide high routing reliability and efficiency. DTR also has a congestion control algorithm to avoid load congestion at BSes in the case of unbalanced traffic distributions in networks. Theoretical analysis and simulation results show that DTR can dramatically improve the throughput capacity and scalability of hybrid wireless networks through its high efficiency, reliability, and low overhead.

ACKNOWLEDGMENTS

This research was supported in part by US NSF grants NSF-1404981, IIS-1354123, CNS-1254006, CNS-1249603, and CNS-1025652, and Microsoft Research Faculty Fellowship 8300751.
The authors would like to thank Mr. Kang Chen for his help in addressing the review comments. An early version of this work was presented in the Proceedings of ICPP 2009 [38]. Haiying Shen is the corresponding author.

A Distortion-Resistant Routing Framework for Video Traffic in Wireless Multihop Networks

412 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 23, NO. 2, APRIL 2015

George Papageorgiou, Member, IEEE, ACM, Shailendra Singh, Srikanth V. Krishnamurthy, Fellow, IEEE, Ramesh Govindan, Fellow, IEEE, and Tom La Porta, Fellow, IEEE

Abstract—Traditional routing metrics designed for wireless networks are application-agnostic. In this paper, we consider a wireless network where the application flows consist of video traffic. From a user perspective, reducing the level of video distortion is critical. We ask the question "Should the routing policies change if the end-to-end video distortion is to be minimized?" Popular link-quality-based routing metrics (such as ETX) do not account for dependence (in terms of congestion) across the links of a path; as a result, they can cause video flows to converge onto a few paths and, thus, cause high video distortion. To account for the evolution of the video frame loss process, we construct an analytical framework to, first, understand and, second, assess the impact of the wireless network on video distortion. The framework allows us to formulate a routing policy for minimizing distortion, based on which we design a protocol for routing video traffic. We find via simulations and testbed experiments that our protocol is efficient in reducing video distortion and minimizing the degradation of the user experience.

Index Terms—Protocol design, routing, video communications, video distortion minimization, wireless networks.

I. INTRODUCTION

With the advent of smartphones, video traffic has become very popular in wireless networks. In tactical networks or disaster recovery, one can envision the transfer of video clips to facilitate mission management. From a user perspective, maintaining a good quality of the transferred video is critical.
The video quality is affected by: 1) the distortion due to compression at the source, and 2) the distortion due to both wireless channel induced errors and interference.

Video encoding standards, like MPEG-4 [1] or H.264/AVC [2], define groups of I-, P-, and B-type frames that provide different levels of encoding and, thus, protection against transmission losses. In particular, the different levels of encoding refer to: 1) either information encoded independently, in the case of I-frames, or 2) encoding relative to the information encoded within other frames, as is the case for P- and B-frames. This Group of Pictures (GOP) allows for the mapping of frame losses into a distortion metric that can be used to assess the application-level performance of video transmissions.

One of the critical functionalities that is often neglected, but affects the end-to-end quality of a video flow, is routing.

Fig. 1. Multilayer approach.

Manuscript received May 24, 2013; revised November 15, 2013; accepted December 24, 2013; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor M. Reisslein. Date of publication February 11, 2014; date of current version April 14, 2015. This work was supported by the Army Research Laboratory and was accomplished under Cooperative Agreement No. W911NF-09-2-0053. G. Papageorgiou, S. Singh, and S. V. Krishnamurthy are with the Department of Computer Science and Engineering, University of California, Riverside, Riverside, CA 92521 USA (e-mail: gpapag@cs.ucr.edu; singhs@cs.ucr.edu; krish@cs.ucr.edu). R. Govindan is with the Department of Computer Science, University of Southern California, Los Angeles, CA 90089 USA (e-mail: ramesh@usc.edu). T. La Porta is with the Department of Computer Science and Engineering, The Pennsylvania State University, University Park, PA 16802 USA (e-mail: tlp@cse.psu.edu). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNET.2014.2302815
Typical routing protocols, designed for wireless multihop settings, are application-agnostic and do not account for correlation of losses on the links that compose a route from a source to a destination node. Furthermore, since flows are considered independently, they can converge onto certain links that then become heavily loaded (thereby increasing video distortion), while others are significantly underutilized. The decisions made by such routing protocols are based on only network (and not application) parameters.

In this paper, our thesis is that the user-perceived video quality can be significantly improved by accounting for application requirements, and specifically the video distortion experienced by a flow, end-to-end. Typically, the schemes used to encode a video clip can accommodate a certain number of packet losses per frame. However, if the number of lost packets in a frame exceeds a certain threshold, the frame cannot be decoded correctly. A frame loss will result in some amount of distortion. The value of distortion at a hop along the path from the source to the destination depends on the positions of the unrecoverable video frames (simply referred to as frames) in the GOP, at that hop. As one of our main contributions, we construct an analytical model to characterize the dynamic behavior of the process that describes the evolution of frame losses in the GOP (instead of just focusing on a network quality metric such as the packet-loss probability) as video is delivered on an end-to-end path. Specifically, with our model, we capture how the choice of path for an end-to-end flow affects the performance of a flow in terms of video distortion. Our model is built based on a multilayer approach as shown in Fig. 1. The packet-loss probability on a link is mapped to the probability of a frame loss in the GOP. The frame-loss probability is then directly associated with the video distortion metric.
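The first step of this multilayer mapping, from a link's packet-loss probability to a frame-loss probability, can be sketched as follows. The binomial loss model and the concrete numbers below are illustrative assumptions for this sketch, not values taken from the paper.

```python
from math import comb

def frame_loss_prob(p_pkt, n_pkts, sensitivity):
    """P(frame undecodable) = P(at least `sensitivity` of its n_pkts packets
    are lost), assuming independent packet losses with probability p_pkt."""
    return sum(comb(n_pkts, k) * p_pkt**k * (1 - p_pkt)**(n_pkts - k)
               for k in range(sensitivity, n_pkts + 1))

# Illustrative numbers (not from the paper): a 20-packet frame that becomes
# undecodable once 3 packets are lost, on a 1% vs a 10% loss link.
print(round(frame_loss_prob(0.01, 20, 3), 4))
print(round(frame_loss_prob(0.10, 20, 3), 4))
```

The same mapping is applied per frame type, since I- and P-frames carry different numbers of packets and have different sensitivities.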
By using the above mapping from the network-specific property (i.e., packet-loss probability) to the application-specific quality metric (i.e., video distortion), we pose the problem of routing as an optimization problem where the objective is to find the path from the source to the destination that minimizes the end-to-end distortion.

1063-6692 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

In our formulation, we explicitly take into account the history of losses in the GOP along the path. This is in stark contrast with traditional routing metrics (such as the total expected transmission count (ETX) [3]) wherein the links are treated independently. Our solution to the problem is based on a dynamic programming approach that effectively captures the evolution of the frame-loss process. We then design a practical routing protocol, based on the above solution, to minimize routing distortion. In a nutshell, since the loss of the longer I-frames that carry fine-grained information affects the distortion metric more, our approach ensures that these frames are carried on the paths that experience the least congestion; the latter frames in a GOP are sent out on relatively more congested paths. Our routing scheme is optimized for transferring video clips on wireless networks with minimum video distortion. Since optimizing for video streaming is not an objective of our scheme, constraints relating to time (such as jitter) are not directly taken into account in the design.

Specifically, our contributions in this paper are as follows.

• Developing an analytical framework to capture the impact of routing on video distortion: As our primary contribution, we develop an analytical framework that captures the impact of routing on the end-to-end video quality in terms of distortion.
Specifically, the framework facilitates the computation of routes that are optimal in terms of achieving the minimum distortion. The model takes into account the joint impact of the PHY and MAC layers and the application semantics on the video quality.

• Design of a practical routing protocol for distortion-resilient video delivery: Based on our analysis, we design a practical routing protocol for a network that primarily carries wireless video. The practical protocol allows a source to collect distortion information on the links in the network and distribute traffic across the different paths in accordance to: 1) the distortion, and 2) the position of a frame in the GOP.

• Evaluations via extensive experiments: We demonstrate via extensive simulations and real testbed experiments on a multihop 802.11a testbed that our protocol is extremely effective in reducing the end-to-end video distortion and keeping the user experience degradation to a minimum. In particular, the use of the protocol increases the peak signal-to-noise ratio (PSNR) of video flows by as much as 20%, producing flows with a mean opinion score (MOS) that is on the average 2–3 times higher compared to the case when traditional routing schemes are used. These PSNR and MOS gains project significant improvements in the perceived video quality at the destination of a flow [4]. We also evaluate our protocol with respect to various system parameters.

Organization: The paper is organized as follows. Related work is presented in Section II. Our analytical models are in Section III, followed by the problem formulation in Section IV. In Section V, we discuss how our framework can be used to route video flows in practice. Section VI contains results from our simulations and testbed experiments. We conclude in Section VII.

II. RELATED WORK

The plethora of recommendations from the standardization bodies regarding the encoding and transmission of video indicates the significance of video communications.
Different approaches exist in handling such an encoding and transmission. The Multiple Description Coding (MDC) technique fragments the initial video clip into a number of substreams called descriptions. The descriptions are transmitted on the network over disjoint paths. These descriptions are equivalent in the sense that any one of them is sufficient for the decoding process to be successful; however, the quality improves with the number of decoded substreams. Layered Coding (LC) produces a base layer and multiple enhancement layers. The enhancement layers serve only to refine the base-layer quality and are not useful on their own. Therefore, the base layer represents the most critical part of the encoded signal [5], [6]. In this paper, we focus on layered coding due to its popularity in applications and adoption in standards.

Standards like MPEG-4 [1] and H.264/AVC [2] provide guidelines on how a video clip should be encoded for transmission over a communication system based on layered coding. Typically, the initial video clip is separated into a sequence of frames of different importance with respect to quality and, hence, different levels of encoding. The frames are called I-, P-, and B-frames, and groups of such frames constitute a structure named the GOP. In each such GOP, the first frame is an I-frame that can be decoded independently of any other information carried within the same GOP. After the I-frame, a sequence of P- and possibly B-frames follows. The P- and B-frames use the I-frame as a reference to encode information.
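Under predictive coding, the GOP structure above implies a simple decodability rule: once one frame in the GOP is unrecoverable, every later frame that references it is too. A minimal sketch (function and variable names are ours, not the paper's notation):

```python
def first_unrecoverable(frame_lost):
    """frame_lost[i] is True if frame i of the GOP could not be decoded on
    its own. With predictive coding (each frame references its predecessors),
    the first lost frame and all its successors are unrecoverable. Returns
    the index of the first unrecoverable frame, or len(frame_lost) if the
    whole GOP decodes."""
    for j, lost in enumerate(frame_lost):
        if lost:
            return j
    return len(frame_lost)

# Frame 2 is lost, so frames 2 and 3 are unrecoverable even though frame 3
# arrived intact.
print(first_unrecoverable([False, False, True, False]))  # prints 2
```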
However, note that the P-frames can also be used as references for other frames.

There has been a body of work on packet-loss-resilient video coding in the signal processing research community [7]. In [4], the video stream is split into high- and low-priority partitions, and FEC is used to protect the high-priority data. To account for temporal and spatial error propagation due to quantization and packet losses, an algorithm is proposed in [8] to produce estimates of the overall video distortion that can be used for switching between inter- and intracoding modes per macroblock, achieving higher PSNR. In [9], an enhancement to the transmission robustness of the coded bitstream is achieved through the introduction of inter/intracoding with redundant macroblocks. The coding parameters are determined by a rate-distortion optimization scheme. These schemes are evaluated using simulation where the effect of the network transmission is represented by a constant packet-loss rate, and therefore they fail to capture the idiosyncrasies of real-world systems.

In [10], an analytical framework is developed to model the effects of wireless channel fading on video distortion. The model is, however, only valid for single-hop communication. In [11], the authors examine the effects of packet-loss patterns and specifically the length of error bursts on the distortion of compressed video. The work, although on a single link, showcases the importance of accounting for the correlation of errors across frames. The performance of video streaming over a multihop IEEE 802.11 wireless network is studied in [12], and a two-dimensional Markov chain model is proposed. The model is used not only for performance evaluation, but also as a guide for deploying video streaming services with end-to-end quality-of-service (QoS) provisioning. Finally, a recursion model is derived in [13] to relate the average transmission distortion across successive P-frames.
None of these efforts considers the impact of routing on video distortion.

There have also been studies on the performance of video transmissions over 4G wireless networks that have been designed to support high QoS for multimedia applications. In [14], an assessment of the recently defined video coding scheme (H.264/SVC) is performed over mobile WiMAX. Metrics such as the PSNR and the MOS are used to represent the quality of experience perceived by the end-user. The results show that the performance is sensitive to the different encoding options in the protocols and responds differently to the loss of data in the network. Again, these are single-link wireless networks, and routing is not a factor.

Cross-layer optimization and QoS routing are not new. An extensive body of research exists on routing algorithms for wireless ad hoc and mesh networks [15]. Furthermore, the survey in [16] provides various ways of classifying QoS routing schemes based on protocol evaluation metrics (transport/application-, network-, and MAC-layer metrics). However, none of the routing schemes presented in these surveys takes into account performance metrics defined for an application and specifically for video transfers. Even when a QoS routing scheme is defined as application-aware, the applications need to specify throughput and delay constraints. This is in contrast to our approach, where an application-related performance metric, namely the video distortion, is directly incorporated into the route selection mechanism.

Prior work on routing for video communications focuses on Multiple Description Coding (MDC). In [17] and [18], multipath routing schemes are considered to improve the quality of video transfer. In [17], an extension to Dynamic Source Routing is proposed to support multipath video communications. The basic idea is to use the information collected at the destination node to compute nearly disjoint paths.
In contrast with our approach, no analysis is provided in [17], and the evaluation of the scheme is based solely on simulations. In [18], the computation of disjoint paths is achieved by proper scheduling given a set of path lengths. As is the case in [17], the work in [18] does not take into account any performance metric directly associated with video quality; instead, the optimization is based on delay constraints. In [19] and [20], MDC is considered for video multicast in wireless ad hoc networks. A rate-distortion model is defined and used in an optimization problem where the objective is to minimize the overall video distortion by properly selecting routing paths. Due to the complexity of the optimization problem, a genetic-algorithm-based heuristic approach is used to compute the routes. Although the approach in [19] and [20] takes into account the distortion of the video, it does so using MDC. Our approach differs not only in the way we model video distortion, but also in the fact that we focus on LC, which is more popular in applications today. In [21], a multipath routing scheme for video delivery over IEEE 802.11-based wireless mesh networks is proposed. To achieve good traffic engineering, the scheme relies on maximally disjoint paths. However, this work does not consider distortion as a user-perceived metric. It simply aims to reduce the latency of video transmissions, and thus its objective is different from what we consider here.

The work in [22] proposes a scheme for energy-efficient video communications with minimum QoS degradation for LC. However, the routing scheme is based on a hierarchical model. To support such a hierarchy, the nodes need to be grouped in clusters, and a process of electing a cluster head has to be executed periodically, increasing the processing and data communication load of the network. In contrast, our proposed scheme assumes a flat model where all nodes in the network are equivalent and perform the same set of tasks.

III. MODEL FORMULATION

Our analytical model couples the functionality of the physical and MAC layers of the network with the application layer for a video clip that is sent from a source to a destination node. The model for the lower layers computes the packet-loss probability through a set of equations that characterize multiuser interference, physical path conditions, and traffic rates between source–destination pairs in the network. This packet-loss probability is then input to a second model to compute the frame-loss probability and, from that, the corresponding distortion. The value of the distortion at a hop along the path from the source to the destination node depends on the position of the first unrecoverable frame in the GOP.

A. PHY- and MAC-Layer Modeling

We consider an IEEE 802.11 network that consists of a set of nodes denoted by N. For each node, denote by P the set of paths that pass via that node. For simplicity, we assume a constant packet length in bits for all source–destination paths. There are various models [23]–[26] that attempt to capture the operations of the IEEE 802.11 protocol. These models are application-agnostic and provide an estimate of the packet-loss probability due to interference from background traffic in the network. We use the model in [26] to represent the operations of the PHY and MAC layers; specifics can be found in [26].

The approach followed in [26] is based on network-loss models. Three sets of equations are derived. The first corresponds to a scheduling model that computes the serving rate of a path at each node as a function of the scheduler coefficient and the service time (1). The second captures the IEEE 802.11 MAC and PHY models and associates the probability of a transmission failure with the channel access probability (2); the parameters of this model are the number of backoff stages and the minimum contention window size.
Finally, the third set of equations describes the routing model and computes the incoming traffic rate to the next-hop node based on scheduling and transmission failures, for all nodes and paths (3). A fixed-point method is used to couple the equations in an iteration until convergence to a consistent solution is achieved. The solution is an approximation to the per-link packet-loss probability and the throughput of the network. Note here that any other method can be used to find the packet-loss probability, which can then be used in our video distortion estimation framework described in Section III-B.

B. Video Distortion Model

Our analysis is based on the model for video transmission distortion in [10]. The distortion is broken down into source distortion and wireless transmission distortion over a single hop. Instead of focusing on a single hop, we significantly extend the analysis by developing a model that captures the evolution of the transmission distortion along the links of a route from the source node to the destination node.

We consider a GOP structure that consists of an I-frame followed by P-frames. We index each frame in the GOP structure starting from 0, i.e., the I-frame corresponds to index 0, and the P-frames correspond to indices from 1 up to the last frame in the GOP. We focus on predictive source coding where, if the jth frame is the first lost frame in a GOP, then the jth frame and all its successors in the GOP are replaced by the (j-1)st frame at the destination node. Assuming that the sequence of frames is stationary, the average distortion introduced by such a frame replacement depends on the temporal proximity of the replaced frame to the (j-1)st frame and not on the actual position of the frame (in the GOP) to be replaced. In [10], a linear model, which corresponds to empirical data, is used to provide the average mean squared error (MSE) as a function of the temporal distance between frames. Using this model, the average distortion is computed in [10] as (4).
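As a hedged stand-in for Eq. (4), whose exact form depends on the measured video sequence, the following linear model matches only the endpoint behavior stated in the text: distortion is maximal when the I-frame (index 0) is lost and minimal when the last frame is the first one lost. The function and its arguments are our notation, not the paper's.

```python
def avg_distortion(j, n_frames, d_min, d_max):
    """Illustrative linear stand-in for the distortion of Eq. (4) when frame
    j is the first unrecoverable frame in a GOP of n_frames frames.
    j == n_frames means no frame was lost (zero transmission distortion).
    d_min and d_max must be measured from the actual clip, as the text notes."""
    if j >= n_frames:
        return 0.0
    return d_max - (d_max - d_min) * j / (n_frames - 1)

# Losing the I-frame costs d_max; losing only the last frame costs d_min.
print(avg_distortion(0, 15, 1.0, 10.0))   # prints 10.0
print(avg_distortion(14, 15, 1.0, 10.0))  # prints 1.0
```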
The minimum distortion is achieved when the last frame in the GOP is lost, and the maximum is attained if the first frame is lost. The minimum and maximum distortion values depend on the actual video sequence and have to be determined by measurement. To automate the computation of the distortion, we compute the minimum and maximum values of the distortion over different GOPs for each clip and use their average values.

Given the source coding rate, the percentages of bits in the GOP that belong to an I-frame and to a P-frame, the constant packet length, and the duration of a GOP, the number of packets per I-frame and per P-frame follow directly.

We define the sensitivity of a frame to lost packets to be the minimum number of packets belonging to that frame that, if lost, prevent the correct decoding of the frame; the I-frame and the P-frames each have their own sensitivity, bounded by the number of packets in the corresponding frame. Note that any packet losses beyond the sensitivity of a frame do not cause any additional distortion for that particular GOP because, in that case, the corresponding frame is already considered lost and cannot be correctly decoded.

We extend the wireless transmission distortion introduced in [10] and defined in (4) to the multihop case. We define a sequence to represent the wireless transmission distortion along the path from the source to the destination, where the kth term is the wireless transmission video distortion at the kth hop. In general, at the kth hop, the distortion can take one of the discrete values given by (4), as listed in (5). The sequence of values the process takes depends on the number of lost packets per frame in the GOP at each link; clearly, the sequence is nondecreasing w.p.1 along the path.

We track the packet losses per frame by defining a multidimensional counting process (6), where the index is again the hop count along the path from the source to the destination.
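The frame sizes feeding the sensitivity thresholds above can be computed as sketched below. The symbol names and the concrete numbers are assumptions for illustration, not the paper's notation.

```python
from math import ceil

def packets_per_frame(rate_bps, gop_seconds, frac_bits, pkt_bits):
    """Packets carrying one frame: the fraction frac_bits of the GOP's
    rate_bps * gop_seconds bits that belongs to this frame type, split into
    constant-length packets of pkt_bits bits (rounded up)."""
    return ceil(frac_bits * rate_bps * gop_seconds / pkt_bits)

# Illustrative: a 1 Mb/s clip with a 0.5 s GOP, the I-frame carrying 30% of
# the GOP bits, 10 000-bit packets -> a 15-packet I-frame. Its sensitivity
# is then some value between 1 and 15, determined by the codec.
print(packets_per_frame(1_000_000, 0.5, 0.30, 10_000))  # prints 15
```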
The first component of the counting process tracks the number of lost packets from the I-frame at the kth hop along the path, and the remaining components count the lost packets that belong to the subsequent P-frames in the GOP at the kth hop. The state space for each of these components is given by (7) and (8).

Assuming that the packet losses in different frames in the GOP are independent events (likely if the fading patterns change in between), the transition probabilities for the counting process can be computed. Suppose that the packet-loss probability of a link is provided by the analytical model that describes the MAC layer (Section III-A). Since each component of the counting process is a counting process, the corresponding sample paths are nondecreasing w.p.1. Regarding the transitions that correspond to the I-frame (9), the transition probability is equal to the probability of losing the additional packets out of the packets of the I-frame that remain intact. Therefore, the transition probabilities for the first component are given by the binomial distribution in (10). Similarly, the transitions that correspond to the P-frames in the GOP are specified by the transition probabilities in (11).

From the transition probabilities (10) and (11), one can compute the distribution of lost packets in each frame at every hop, assuming that there are no lost packets at the source. In particular, for the I-frame this distribution is given by (12). Defining the row vector (13), we can write (12) in vector form as (14), where the transition matrix of the I-frame loss process is indexed by the packet-loss probability to make that dependence explicit. It follows then from (14) that the distribution after several hops is the initial distribution multiplied by the product of the per-hop transition matrices, for the corresponding sequence of per-hop packet-loss probabilities (15).
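The binomial transitions of (10)-(11) and the matrix-product propagation of (15) can be sketched in a few lines. In this sketch the state is capped at the frame's sensitivity, since, as noted above, losses beyond the sensitivity add no further distortion; variable names are ours, not the paper's.

```python
from math import comb

def transition_matrix(p, n_pkts, s):
    """One-hop transition matrix for a frame's lost-packet count, with states
    0..s, where state s ('frame unrecoverable') is absorbing. Losing j further
    packets out of the n_pkts - m still-intact ones is Binomial(n_pkts - m, p),
    mirroring the binomial transitions of Eqs. (10)-(11)."""
    T = [[0.0] * (s + 1) for _ in range(s + 1)]
    for m in range(s):
        rem = n_pkts - m
        for j in range(rem + 1):
            m2 = min(m + j, s)  # lump every count >= sensitivity together
            T[m][m2] += comb(rem, j) * p**j * (1 - p)**(rem - j)
    T[s][s] = 1.0  # once unrecoverable, the frame stays unrecoverable
    return T

def loss_distribution(per_hop_p, n_pkts, s):
    """Distribution of the lost-packet state after the given hops, starting
    from zero losses at the source (the matrix-product form of Eq. (15))."""
    pi = [1.0] + [0.0] * s
    for p in per_hop_p:
        T = transition_matrix(p, n_pkts, s)
        pi = [sum(pi[m] * T[m][m2] for m in range(s + 1)) for m2 in range(s + 1)]
    return pi

# Illustrative: a 20-packet I-frame with sensitivity 3 over a two-hop path.
pi = loss_distribution([0.02, 0.05], n_pkts=20, s=3)
print(round(pi[3], 4))  # probability the I-frame is unrecoverable
```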
Following the same process, we can compute the corresponding distribution for each P-frame in the GOP (16), where the transition matrix of the corresponding P-frame loss process is defined analogously. As one can immediately see, the packet-loss probabilities computed after accounting for the PHY and MAC in Section III-A can be used here to compute these distributions.

C. Video Distortion Dynamics

The value of the distortion at a hop along the path from the source to the destination node depends on the position of the first unrecoverable frame in the GOP. We define a process whose value at hop k is the index of the first unrecoverable frame in the GOP structure at hop k. At each hop, the process takes values in the set C given by (17). The value 0 indicates that the first frame (the I-frame) is lost, and therefore the whole GOP is unrecoverable. A value between 1 and the last P-frame index denotes that the corresponding P-frame is the first frame in the GOP that cannot be decoded correctly, and the largest value in C indicates that no frame has been lost thus far, yielding zero transmission distortion. The definition of the process implies that its sample paths are nonincreasing w.p.1.

The dynamics of this process, and therefore of the video distortion, depend on the packet-loss counting process. The value of the counting process at each hop indicates the number of lost packets up to that point along the path from the source to the destination node. These losses specify the first unrecoverable frame in the GOP and, hence, the value of the distortion at that point on the path. The transition probabilities at hop k of the process (18), specifying the likelihood of each value of the first unrecoverable frame at hop k+1 given its value at hop k, can be computed using the distributions given by (15) and (16). In particular, we consider the following cases.

1) The first unrecoverable frame at hop k is the first frame (I-frame) in the GOP.
This means that the GOP is unrecoverable, and the value of the process for the rest of the path cannot be anything other than 0. Therefore, the transition probabilities in this case are given by (19).

2) The first unrecoverable frame in the GOP at hop k is a P-frame. In this case, it is possible during the transition to the next hop to have packet losses that make an earlier frame in the GOP unrecoverable. This will happen if the number of lost packets in an earlier frame is such that the total number of lost packets for that frame reaches the sensitivity of that frame type. This is used to compute the transition probabilities in (20).

3) No frames have been lost in the GOP up to hop k. The transition to the next hop may cause packet losses such that either a frame in the GOP becomes unrecoverable, or none is lost and no transition of the process happens. The transition probabilities in this case are given by (21).

The value of the video transmission distortion depends on the value of the first-unrecoverable-frame process at hop k, as given by (22), where the distortion values come from (4). Therefore, the dynamics of the video transmission distortion are defined by the transition probabilities given by (19)–(21).

IV. OPTIMAL ROUTING POLICY

Next, our objective is to find the path that yields the minimum video transmission distortion between any source and destination. By using the analysis presented in Section III, we pose the problem as a stochastic optimal control problem where the control is the selection of the next node to be visited at each intermediate node from the source to the destination.

If N is the set of nodes in the network and C is the set of possible values for the process described in Section III-C, we define the state space X of our problem as their product (23). Each state in X is a tuple.
The first component, a node in N, represents the current node on the path from the source to the destination. The second component, a value in C, points to the first unrecoverable frame in the GOP and, therefore, specifies the video distortion at the current node.

Suppose that at the kth hop of the path between the source and the destination the system is at a given node and the first unrecoverable frame in the GOP structure is known. At this point, the system needs to select the next node to be visited. Clearly, the node to be selected next should belong to the set U of one-hop neighbors of the current node. This means that if at stage k a decision is made to move to a neighbor in U, the new state at the next stage consists of that neighbor and the updated frame index. The selection of the next node specifies the packet-loss probability from the analysis in Section III-B and accounts for both channel-induced and interference-related failures. Moreover, it specifies the transition probabilities for the second component of the state; to make the dependence of these transition probabilities on the selection explicit, we index them by the selected node.

We seek to find the optimal sequence of states that minimizes the total video transmission distortion from the source to the destination node. The first component of each state that belongs to such an optimal sequence of states indicates the node that has to be visited next in the optimal path.

For the initial state and the sequence of decisions, the cost to be minimized is defined in (24), where the horizon is the length of the path and the cost is the sum of a running cost and a final cost. We call this optimization problem the Minimum Distortion Routing (MDR) problem. The running cost is the video transmission distortion at the current state (25).

The final cost, defined in (26), distinguishes whether the terminal node is the destination and penalizes terminal states that do not correspond to the destination node. Given the source and the destination of the connection, the initial state for the optimization problem is the source node together with an intact GOP. Any state in the boundary set B defined in (27) is a terminating state for the optimization problem.

For an optimal decision sequence, we define the value function as the minimum achievable cost from an initial state in X (28). If at some stage the state is known, we define the minimum cost-to-go as in (29), and for the final stage as in (30).

The MDR problem has the following properties.

Lemma 1: MDR satisfies the overlapping property, i.e., the problem can be broken down into smaller problems that retain the same structure.

Proof: From (29), it is clear that computing the cost-to-go at a stage requires the calculation of the cost-to-go at the next stage. This means that the initial problem of finding the optimal route between a source and a destination node can be solved if the subproblem of finding an optimal path between an intermediate node and the destination can be solved.

Lemma 2: MDR satisfies the optimal substructure property, i.e., the subpath of an optimal path is optimal for the corresponding subproblem.

Proof: This is immediate from the definition of the cost-to-go function defined in (29).

Theorem 1: The MDR problem is solvable by dynamic programming.

Proof: An optimization problem can be solved by dynamic programming if the problem satisfies both the overlapping and the optimal substructure properties [27]. The proof is immediate from Lemmas 1 and 2.

Since the state space X is of finite dimension, the optimization problem can be solved via dynamic programming by back-propagating the computation of the value of the cost-to-go function [28], [29], starting from the terminating states of the boundary set B and moving backwards toward the initial state. If at some stage the state is known, we consider all possible neighbors of the current node that are one hop away.
For each link, a packet-loss probability characterizes the quality of that specific link. Using this probability, we can compute the transition probability from the current state to a new state through the transition probabilities defined in Section III-C, for all possible values of the second component of the state. Among the neighboring nodes of the current node, we choose as the next hop toward the destination node the node that corresponds to the minimum cost-to-go defined in (29).

Discussion: In essence, the MDR routing policy distributes the video frames (and the packets contained therein) across multiple paths and, in particular, minimizes the interference experienced by the frames that are at the beginning of a GOP (to minimize distortion). The I-frames are longer than other frames. Their loss impacts distortion more, and thus these are transmitted on relatively interference-free paths. The higher protection rendered to I-frames is the key contributing factor in decreasing the distortion with MDR (we observe this in both our simulations and testbed experiments).

V. PROTOCOL DESIGN

To compute the solution to the MDR problem described in Section IV, knowledge of the complete network (the nodes that are present in the network and the quality of the links between these nodes) is necessary. However, because of the dynamic nature and distributed operations of a network, such complete knowledge of the global state is not always available to the nodes. In practice, the solution to the MDR problem can be computed by the source node based on partial information regarding the global state that it gathers. The source node has to sample the network during a path discovery process in order to collect information regarding the state of the network.

The sampling process includes the estimation of the ETX metric [3] for each wireless link in the network. These estimates provide a measure of the quality of the links.
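For reference, the per-link ETX metric of [3] is computed from the forward and reverse probe delivery ratios measured over a probing window:

```python
def etx(d_f, d_r):
    """ETX link metric [3]: expected number of transmissions needed for a
    successful delivery and its acknowledgment, where d_f and d_r are the
    forward and reverse probe delivery ratios (each in (0, 1])."""
    return 1.0 / (d_f * d_r)

# A link delivering 90% of probes in each direction needs about 1.23
# expected transmissions per packet.
print(round(etx(0.9, 0.9), 2))  # prints 1.23
```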
The estimation process can be implemented by tracking the successful broadcasting of probe messages in periodic time intervals. The ETX estimates computed locally in the neighborhood of a node are then appended to the Route Request messages during the Route Discovery phase. Upon reception of this message by the destination, a Route Reply message is sent back to the source that contains the computed ETX estimates, which are used to compute the per-link loss probabilities.

The source node can then solve the optimization problem (Section IV) by using the information gathered via the sampling process described above. Specifically, upon receiving the Route Reply messages, the source node follows the steps presented in Algorithm 1. It defines the initial state of the optimization problem, whose second component is the GOP size. It defines the boundary set B that serves as the terminating set for the optimization process. Next, a call to Algorithm 2 produces the next node in the path. Because of the stochastic nature of the second component of the state, its next value has to be estimated. The estimation is based on the transition probabilities given by (19)–(21). In particular, the estimated value is the expected value of the second component given its current value (31).

PAPAGEORGIOU et al.: DISTORTION-RESISTANT ROUTING FRAMEWORK FOR VIDEO TRAFFIC 419

Algorithm 1: Path discovery (uses Algorithm 2)
Input: source node, destination node
Input: frame size
Output: route from source to destination
1: /* DSR Route Discovery Phase */
2: send Route Request
3: receive Route Reply messages
4: N ← node-ids from Route Reply messages
5:
6: /* Path Discovery Initialization Phase */
7:
8:
9: B
10:
11:
12: append to route
13:
14: /* Path Computation */
15: repeat
16: Next_node_in_optimal_path(B, N)
17:
18:
19:
20:
21: append to route
22: remove selected node from N
23: until B

To avoid loops in the produced route, each selected node is removed from the set N of available nodes. The process is repeated with a new initial state until the boundary set B is reached.
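To make the backward computation concrete, the sketch below implements finite-horizon value iteration for a deliberately simplified version of the problem: the state is just the current node and the per-stage cost of a link is its ETX value, so the stochastic second component of the state and the distortion-based costs of Section IV are omitted. All names are illustrative:

```python
import math

def next_hop(source, dest, links, horizon):
    """Backward value iteration for a finite-horizon minimum-cost
    routing problem.  `links` maps a node to {neighbor: link_cost};
    returns the first hop of a minimum-cost route of at most
    `horizon` hops from `source` to `dest`.  This mirrors the
    structure of Algorithm 2 (boundary condition at the destination,
    backward sweeps, policy extraction), not its full state space."""
    nodes = set(links) | {n for nbrs in links.values() for n in nbrs}
    # Boundary condition: zero cost-to-go at the destination.
    J = {x: (0.0 if x == dest else math.inf) for x in nodes}
    policy = {}
    for _ in range(horizon):          # backward sweeps over the stages
        newJ = {}
        for x in nodes:
            if x == dest:
                newJ[x] = 0.0
                continue
            best, arg = math.inf, None
            for nbr, cost in links.get(x, {}).items():
                if cost + J[nbr] < best:   # Bellman recursion, cf. (29)
                    best, arg = cost + J[nbr], nbr
            newJ[x] = best
            if arg is not None:
                policy[x] = arg       # optimal decision at this stage
        J = newJ
    return policy.get(source)
```

For example, with links A-B (cost 1), B-C (cost 1), and A-C (cost 3), the first hop from A toward C is B, since the two-hop route is cheaper.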
In each iteration, Algorithm 2 is called to determine the next node on the path from the source to the destination. Algorithm 2 takes as input an initial state, a boundary set B, the GOP size, and the set N. It solves the dynamic programming problem described in Section IV by first creating the state space of the system and then using the value iteration method, starting from the boundary set and moving backwards. At each stage of the process, it also computes the optimal policy. At the end of the computation, the ID of the best node to be selected is returned by using the optimal policy for the first stage.

In the source routing scheme, the routing decisions are made at the source node ahead of time, before the packet enters the network. Therefore, source routing is an open-loop control problem where all decisions have to be made in the beginning. The decisions are taken sequentially; a decision at a stage corresponds to the choice of the next-hop node at the node corresponding to that stage. The source node cannot know exactly the state at a given stage of the selection process because of the randomness of the second component of the state. It has to estimate at each stage the value of that component and use this estimate to make a decision for that stage.

The sequence of steps followed by each node in the network is shown in Fig. 2.

Algorithm 2: Next node in optimal path
Input: initial state, boundary set B
Input: set of available nodes N
Input: frame size
Output: next node in the optimal path
1: /* Initialization Phase */
2: C
3: X N C
4: X
5:
6: /* Optimal Control Computation */
7: for TO 1 do
8: if then
9: for all X do
10:
11: end for
12: else
13: for all X do
14:
15:
16:
17:
18: end for
19: end if
20: end for
21:
22: return

The flowchart that corresponds to the operation of the source node is depicted in Fig. 2(a), while the flowcharts for an intermediate node and the destination node are shown in Fig. 2(b).

VI.
RESULTS

We show the performance gains of the proposed routing scheme via extensive simulations and testbed experiments. For the simulation experiments, we use the network simulator ns-2 [30]. The simulator provides a full protocol stack for a wireless multihop network based on IEEE 802.11. We extend the functionality of ns-2 by implementing our proposed routing scheme on top of the current protocol stack. For the testbed experiments, we implement our scheme using the Click modular router [31], [32]. We implement two different mechanisms and experiment with each, one after another. The first mechanism estimates the ETX value for each link between a node and its neighbors, for all the nodes in the network. The mechanism broadcasts periodically (every 1 s) small probe messages of size 32 B and checks for acknowledgments from the neighbors of the node. The routing policy computes the minimum-ETX path from the source to a destination and uses that path to transfer the video packets. The second mechanism implements the protocol defined in Section V in order to compute the routes on the wireless network that achieve minimum video distortion.

Furthermore, we use EvalVid [33], which consists of a set of tools for the evaluation of the quality of video that is transmitted over a real or simulated network. The toolset supports different performance metrics such as the PSNR and the MOS [34].

420 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 23, NO. 2, APRIL 2015

Fig. 2. Flowchart for application-aware routing. (a) Source node. (b) Intermediate and destination node.

To adapt EvalVid to the ns-2 simulator, we follow the procedure described in [35]. Specifically, for each simulated video flow between two nodes in the network, we need to produce a sequence of files. We start with the initial uncompressed video file that consists of a sequence of YUV frames [36].
Using the EvalVid toolset, we transform the YUV format first to the MP4 and then to the MPEG4 format, which contains hints of how the video file should be transmitted over a network. When we do this, we do not constrain the GOP size to be the same from GOP to GOP; rather, we let the tool decide the appropriate size for each GOP based on the video clip content. We then need to capture a log from an attempted transmission over a real network. This log indicates which frame was transmitted over the network and at what time instance. The log is fed as an input to the ns-2 simulation, which plays back the video transmission, producing at the end two sets of statistics regarding the transmission, one for the sender and one for the receiver. By applying the EvalVid toolset on this sequence of files, we can reconstruct the video file as it is received by the destination and compare it to the initial video file. The comparison provides a measure of the video quality degradation due to the transmissions over the network.

A. Simulation Results

To evaluate the performance of the MDR protocol, we compare it against the minimum-ETX routing scheme. We consider a wireless multihop network that covers an area of 1000 × 1000 m². The nodes are distributed over this area according to a Poisson random field. The pair of nodes that constitute the source and destination in each case are selected at random. If they happen to be neighbors, we discard that pair and repeat the process until we select a source and destination that are more than one hop apart. Each node uses the IEEE 802.11b protocol, where the propagation model is the Two Ray Ground, yielding a communication range of about 250 m. Each set of experiments is repeated 10 times, and the average value is reported in each case.

Fig. 3. Average PSNR for 5 and 10 video connections (Set-I).

TABLE I. VIDEO ENCODING PARAMETERS

In Table I, three sets of values are defined for the video encoding parameters.
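The frame-by-frame comparison between the initial and the reconstructed video described above is typically scored with the PSNR, computed from the mean squared error between corresponding samples. A minimal sketch (8-bit pixel samples assumed):

```python
import math

def psnr(original, received, max_value=255):
    """Peak signal-to-noise ratio (dB) between two equally sized
    sequences of pixel samples: 10 * log10(max_value^2 / MSE).
    Higher is better; a perfect reconstruction gives infinity."""
    if len(original) != len(received):
        raise ValueError("frames must have the same number of samples")
    mse = sum((a - b) ** 2 for a, b in zip(original, received)) / len(original)
    if mse == 0:
        return math.inf
    return 10 * math.log10(max_value ** 2 / mse)
```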
We vary the GOP size and the frame rate, and thus, effectively, the video encoding rate. We keep the frame size constant as per the QCIF standard (176 × 144 pixels) and set the maximum packet size to 1024 B. Our simulation experiments focus on three metrics: 1) the PSNR, which is an objective quality measure; 2) the MOS, which is a subjective quality metric; and 3) the delay experienced by each video connection.

The effect of the node density on the PSNR is shown in Fig. 3. We plot the average PSNR for 5 and 10 concurrent video connections for different node densities and for Set-I of the video encoding parameters of Table I. We also plot the performance of our proposed scheme (MDR) when, instead of estimating the per-link packet-loss probabilities through the ETX metric, we use the model in Section III-A to do so. In this case, we assume full knowledge of the network topology, and so the state space where we solve the optimal control problem of Section IV is a superset of the state space when we collect the local estimates of ETX through the network.

We then fix the number of nodes to 20 (distributed as described earlier) and compute the PSNR of each video connection when: 1) the network serves four concurrent connections, and 2) the number of concurrent connections is 8. In each case, the source–destination pairs are chosen uniformly from among the nodes in the network. We define the tail distribution of PSNR as the probability that the PSNR exceeds a given value and plot it in Fig. 4 for the different traffic loads. The tail distribution of PSNR that corresponds to Set-II of the video encoding parameters is shown in Fig. 4(a).

Fig. 4. Tail distribution of PSNR. (a) Set-II. (b) Set-III.

For both the light and heavy traffic loads (four and eight concurrent connections, respectively), the MDR protocol performs better, providing a higher percentage of paths that have a given PSNR value.
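The tail distribution plotted in Fig. 4 is simply the empirical probability that a connection's PSNR exceeds a threshold; for a set of per-connection samples it can be computed as (a trivial sketch):

```python
def tail_distribution(samples, threshold):
    """Empirical tail distribution: the fraction of per-connection
    PSNR samples that exceed the given threshold."""
    samples = list(samples)
    if not samples:
        raise ValueError("need at least one sample")
    return sum(1 for s in samples if s > threshold) / len(samples)
```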
As expected, a performance degradation is observed for both schemes when the traffic load increases. This is due to the fact that under heavier traffic conditions in the network, the interference becomes more prevalent; furthermore, interference across adjacent links can be correlated in some cases. Under such network conditions, the benefits from the distortion-based optimization have a greater impact on the path selection process for the different types of frames in a video GOP, as discussed earlier. The I-frames are sent on relatively uncongested paths. With four concurrent connections, the median of PSNR is 17 for the minimum-ETX policy and 18 for the MDR protocol. The median decreases when the traffic load increases, and it is 9.5 and 10 for the minimum-ETX and the application-aware schemes, respectively. The tail distribution of PSNR that corresponds to the parameters of Set-III is shown in Fig. 4(b). As is the case for Set-II, a large GOP size results in a denser state space, and therefore a better performance for the MDR protocol. In the case of the light traffic load (four concurrent connections), the median for the PSNR is 15 for the minimum-ETX scheme and 17 for MDR. Under heavier traffic loads (eight concurrent connections), the median for the PSNR is 9 for the minimum-ETX scheme and 10.5 for the MDR protocol.

The effects of the sensitivities on the MDR protocol are shown in Fig. 5. As before, the number of randomly placed nodes is set to 20. We compare the performance of the MDR protocol under two settings of the sensitivity parameters. In the first case, the sensitivity to the packet losses per frame is set to the maximum; in this case, a single packet loss in a frame causes the frame to be unrecoverable. Fig. 5(a) and (b) presents the same comparison for Set-II and Set-III of the encoding parameters, respectively. In both cases, relaxing the sensitivity of an I- or P-frame to packet losses (i.e., increasing the values of the sensitivity parameters) deteriorates the performance of the scheme.
A lower sensitivity (larger values of the sensitivity parameters) diminishes the impact of packet losses on the video distortion, thus limiting the performance gains from using the scheme. For Set-II, the median of the PSNR is 17 for the minimum-ETX scheme, and 18 and 16 for MDR under the two sensitivity settings, respectively. When the video encoding parameters of Set-III are used, the median values of the PSNR are 15 for the minimum-ETX case, and 17 and 15 for the MDR protocol under the two sensitivity settings, respectively.

Although the PSNR is the most widespread objective metric to measure digital video quality, it does not always capture user experience. A subjective quality measure that tries to capture human impression regarding the video quality is the MOS.

Fig. 5. PSNR dependence on packet-loss sensitivity. (a) Set-II. (b) Set-III.

Fig. 6. Average mean opinion score.

The metric uses a scale from 1 (worst) to 5 (best) to represent user satisfaction when watching a video clip [34]. To evaluate the MOS with the MDR and ETX-based routing, we consider the wireless multihop network with the average number of nodes equal to 20 (distributed as discussed earlier). The initial raw video is processed using the H.264 encoder with a maximum GOP size of 30 frames and a sampling frequency of 30 frames per second. Fig. 6 shows the average MOS as the number of concurrent video flows in the network increases. When the number of connections is three, the traffic load is low, and so both the ETX-based routing and MDR provide similar user experience regarding video quality. As the traffic load increases, the distortion-based routing distributes the load across the network, causing the I-frames to avoid highly congested areas. When a moderate number of video flows are concurrently active in the network, there is a significant gap in video quality in favor of MDR. However, no significant gains are possible with MDR when congestion is high (more than nine concurrent video flows are active). In such cases, there are no congestion-free routes available to be used by MDR.
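When MOS is derived from an objective measurement rather than a user panel, a common approach in the EvalVid literature is to map PSNR bands onto the 1 to 5 scale. The band edges below are the ones usually quoted with EvalVid; treat them as illustrative rather than as this paper's exact mapping:

```python
def mos_from_psnr(psnr_db):
    """Map an objective PSNR value (dB) onto the subjective
    1 (worst) .. 5 (best) MOS scale, using PSNR bands commonly
    used with the EvalVid toolset."""
    if psnr_db > 37:
        return 5   # Excellent
    if psnr_db > 31:
        return 4   # Good
    if psnr_db > 25:
        return 3   # Fair
    if psnr_db > 20:
        return 2   # Poor
    return 1       # Bad
```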
In such moderate-load regimes, MDR thus yields higher MOS values, which translates to a better user experience.

The delay characteristics of the two routing schemes are shown in Fig. 7 for Set-II of the video encoding parameters. The nodes are again randomly distributed according to a Poisson random field, with density values of 14, 16, and 18. The traffic load corresponds to five concurrent video connections. We compute and plot the mean and variance of the end-to-end delay for the five connections, along with the 95% confidence intervals. As seen in Fig. 7, for all three node densities, the MDR protocol produces routes that exhibit less variability compared to the routes computed by the minimum-ETX scheme. Smaller variability implies less jitter, which in turn suggests a better video quality as perceived by the end-user. Moreover, because of the smaller variability, the required sizes of the buffers at the intermediate nodes are smaller.

Fig. 7. Delay characteristics for five concurrent connections (Set-II). (a) Mean delay. (b) Variance of delay.

Note that this benefit is in addition to the reduction in distortion discussed above. The primary reason for this reduction in the delay is that the distortion-aware approach tries to avoid paths that are congested; ETX, on the other hand, results in convergence of flows onto a few good paths. For both routing schemes, the mean and variance of the delay increase with the average number of nodes in the network. As the network becomes denser, the effect of interference becomes more profound, increasing the number of retransmissions and, thus, the delay. In contrast, a sparser network topology provides a smaller number of "good" routes, and thus it is more difficult to separate flows and cope with congestion.
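The reported mean, variance, and 95% confidence intervals of the end-to-end delay are standard sample statistics; a small sketch (the normal approximation for the interval is our assumption, not stated in the paper) is:

```python
import math

def delay_stats(delays):
    """Sample mean, unbiased sample variance, and a 95% confidence
    interval for the mean (normal approximation, z = 1.96) of a set
    of end-to-end delay measurements."""
    n = len(delays)
    if n < 2:
        raise ValueError("need at least two measurements")
    mean = sum(delays) / n
    var = sum((d - mean) ** 2 for d in delays) / (n - 1)
    half = 1.96 * math.sqrt(var / n)   # half-width of the 95% CI
    return mean, var, (mean - half, mean + half)
```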
It is in the moderate-density regions where the MDR protocol provides the most benefits in terms of delay and jitter.

In order to understand how the characteristics of the video traffic, and in particular the motion level, affect the distortion, we experiment with two classes of video clips: slow- and fast-motion video. The motion level of a video clip can be computed through appropriate detection algorithms; typically, these algorithms classify a video clip as a slow-motion or a fast-motion video. Tools such as PhysMo [37] and AForge [38] can be used to perform this classification.

We evaluate the MOS of slow- and fast-motion video flows when the MDR routing scheme is used. We consider a wireless multihop network with an average number of nodes equal to 20 (distributed as discussed above). Fig. 8(a) shows the average MOS for Set-II, and Fig. 8(b) shows the MOS in the case of Set-III. In both cases, the slow-motion flows experience slightly lower distortion compared to the fast-motion videos and, thus, higher MOS. This is the result of the fact that in the slow-motion video clips, the I-frames carry most of the information. Due to rapid changes in the content of a fast-motion clip, the P-frames are larger and contain more information than the P-frames of slow-motion video flows. The MDR routing scheme protects the I-frames by routing the corresponding packets through less congested paths. The P-frames are packed together on congested paths and could be lost. As evident from Fig. 8, such losses affect fast-motion video to a greater extent. However, as we increase the traffic to extremely high levels (11 flows), the performance of slow- and fast-motion videos is similar due to high frame losses.

Next, we compare the behavior of MDR against a routing protocol that chooses routes so as to minimize the overall expected transmission time (ETT) [39]. The ETT is a function of the loss rate and the bandwidth of the link.
Therefore, it can capture delays due to transmissions in multirate settings, unlike ETX, which only estimates the packet-loss ratio at the base rate.

Fig. 8. Average value of the MOS for slow- and fast-motion video flows. (a) Set-II. (b) Set-III.

Fig. 9. Comparison between the MDR and the ETT-based routing scheme. (a) Mean opinion score. (b) Mean delay.

In Fig. 9, the comparison between MDR and the ETT-based scheme is shown. The mean opinion score is shown in Fig. 9(a), where we observe a behavior similar to the one shown in Fig. 6. The average end-to-end delay is shown in Fig. 9(b). In contrast to what happens when ETX is used, the routing mechanism that minimizes the total ETT on the path from the source to the destination yields smaller delays. However, the delays with MDR are comparable to those with ETT-based routing; in other words, the video quality is improved with minimum impact on delay with MDR.

B. Testbed Experiments

Next, we evaluate the MDR protocol on a wireless indoor testbed composed of 41 nodes [40]. The nodes are based on the Soekris net5501 hardware configuration and run a Debian Linux distribution. Each node is equipped with a 500 MHz CPU, 512 MB of RAM, and a WN-CM9 wireless mini-PCI card, which carries the AR5213 Atheros main chip. Each node uses IEEE 802.11a to avoid interference from co-located campus networks. To further minimize interference from these other networks, all experiments were performed at night. The network topology of the testbed is shown in Fig. 10.

The experiment setup consists of an initial raw video processed using the H.264 encoder with a maximum GOP size of 30 frames. The traffic load ranges from 2 to 12 concurrent video flows, where the sender and receiver pairs are randomly selected. Each scenario is repeated five times.

To capture the effect of the ETX-based and MDR routing schemes on the user experience, we measure the average MOS as the number of concurrent video flows in the network increases. Fig.
11 shows that as the number of video connections in the network increases, the average MOS for both schemes decreases. However, when the traffic load increases, the MDR protocol computes multiple paths between the source and the destination nodes and is better at distributing the load across the network, such that the frames at the beginning of a GOP avoid congestion. On the other hand, the shorter paths computed through the ETX-based scheme become quickly congested, resulting in heavy packet losses. As discussed, we observe that this primarily has a negative impact on correctly decoding the relatively longer (but more important) I-frames, resulting in a worse user experience.

Fig. 10. Network topology of the wireless network testbed.

Fig. 11. Average value of MOS for a different number of concurrent video flows.

A visual comparison between Figs. 6 and 11 immediately shows the similarity in behaviors between our simulations and real experiments, thereby validating the realism of our simulations. Fig. 12 shows snapshots from video clips transmitted over the testbed under different traffic conditions for both the ETX-based and the MDR protocols. As shown in Fig. 11, when there are two connections in the network, the MOS for both routing schemes is the same. This is reflected in Fig. 12(a) and (b), where both snapshots are of very similar quality; in this case, the traffic load is fairly low, and congestion is not a big issue (the flows do not cause high levels of interference to each other). When there are eight concurrent video connections (and interference across connections is more prevalent), the MDR protocol achieves a higher MOS compared to the ETX-based scheme. This is visually depicted in Fig. 12(c) and (d), where the snapshot in the case of MDR is much clearer than the noisy snapshot from the ETX-based protocol.
Specifically, our protocol distributes the I-frames across diverse paths with low interference; P-frames that are toward the end of GOPs are relatively packed together onto more congested paths.

Fig. 12. User experience under different traffic loads. (a) Video snapshot, MDR (two connections). (b) Video snapshot, ETX (two connections). (c) Video snapshot, MDR (eight connections). (d) Video snapshot, ETX (eight connections).

Fig. 13. Routes for I- and P-frames.

The ETX metric, which is agnostic to video semantics, does not distinguish between frames and packs them together, causing high distortion. It is difficult to explicitly prove that I- and P-frames follow somewhat disjoint paths, due to the stochastic nature of the process. The intuition, however, is based on the fact that the sensitivities of the I- and P-frames are, in general, different. As a consequence, the frame-loss probability for an I-frame is different from that of a P-frame, resulting in their choosing different routes.

To illustrate this, we consider a simple network as shown in Fig. 13 and perform an experiment with eight concurrent flows. For each video flow, we show in Fig. 13 the corresponding sender and receiver. We focus on flow 3 and show the different routes for the I- and P-frames for both the MDR and the ETX-based routing protocols. We notice that when MDR is used, the routes for the I- and P-frames are different. Specifically, the packets that belong to I-frames are split across four routes, which serve 58.4%, 14.6%, 16%, and 11% of the I-frame traffic, respectively. Meanwhile, the P-frame packets are split across four routes, which carry 56.7%, 6.3%, 24%, and 13% of the P-frame traffic, respectively.
Notice that the majority of I-frames are routed via a path that is disjoint from the path followed by the majority of the P-frames. In contrast to the MDR case, when the ETX-based scheme is used, both the I- and P-frames are routed through the same path.

VII. CONCLUSION

In this paper, we argue that a routing policy that is application-aware is likely to provide benefits in terms of user-perceived performance. Specifically, we consider a network that primarily carries video flows. We seek to understand the impact of routing on the end-to-end distortion of video flows. Toward this, we construct an analytical model that ties video distortion to the underlying packet-loss probabilities. Using this model, we find the optimal route (in terms of distortion) between a source and a destination node using a dynamic programming approach. Unlike traditional metrics such as ETX, our approach takes into account correlation across packet losses, which influences video distortion. Based on our approach, we design a practical routing scheme that we then evaluate via extensive simulations and testbed experiments. Our simulation study shows that the distortion (in terms of PSNR) is decreased by 20% compared to ETX-based routing.

Design and Implementation of Mobile Lightweight TV Media System Based on Android

With the maturity and popularization of 4G mobile networks, and with support from mobile Internet devices and hardware, the mobile Internet has become a hot topic in the IT industry. Both the user base and the time spent online on mobile devices are increasing [1]. People are no longer limited to the computer or TV; they also access video information through mobile lightweight TV. In view of this situation, this paper focuses on the design and implementation of a mobile lightweight TV media system based on Android. Its design and implementation differ from those on the PC side: we must give full consideration to the characteristics of the mobile terminal and strive for simplicity. The system is divided into a client and a server. The interface design is concise, and both the client and the server are easy to operate, maintain, and update. HTML5, CSS3, JavaScript, and other technologies, such as PHP, Java, MySQL, and Apache, are used in the implementation of the mobile lightweight TV media system. Moreover, these technologies are open source and free. Ajax is used to transfer JSON data so as to realize asynchronous interaction between the client side and the server side. Asynchronous interaction through Ajax means users do not have to refresh the entire web page, only the relevant part of it, which reduces the pressure on both the client side and the server side and gives users a much better page-browsing experience.

THE DESIGN OF MOBILE LIGHTWEIGHT TV MEDIA SYSTEM BASED ON ANDROID

The design concept of the mobile lightweight TV media system is not only to meet users' needs to browse text, video, and other information, but also to make it convenient for administrators to update and maintain the background pages. For the user pages, we should strive for simplicity and ease of operation [2]. For the management background pages, we should make sure that administrators can manage content in batches.

A. The design of the client

A user's first impression of an App comes from the client pages, so they are an important factor in whether the user installs and uses the App. In order to provide a better user experience, the client interface should be attractive and easy to operate. When users open the App, what they see first is the main interface, and then a concise array of columns one to six. In particular, the sixth column is a live broadcast. When users click on a column box, the page jumps to the corresponding column. Within a column, they can see programs one to four, with their text and video information. This mobile lightweight TV media system also allows users to interact with the App: for example, if a user is interested in a program, he or she can collect it or click a "like" button via the corresponding operation. The client interface also offers on-demand features.

B. The design of the server

The server side is mainly for administrators. Although the server is not directly related to the user, it guarantees user access to information. While the client is based on the mobile terminal, the server management is based on the PC side in order to make it easier for managers to manage and release information. Administrators can add, update, and delete the client's text and video information in a timely manner through the background server management system. In addition, managers are allowed to log in to and exit the management system.

Administrators enter a user name and password in the input window and then submit the form via "POST" using an HTML5 intelligent form. The server then handles the request with a PHP script: the backend matches the submitted information against the background database to determine whether to allow the login operation. Once logged in successfully, administrators can click the "Add" button to add text, video, and other information. Administrators can also click the "Edit" button to edit and update the information in the database. The "Delete" button allows administrators to delete information from the database. Administrators can click the "Save" and "Exit" buttons to save their operations and to exit the management interface [3].
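The paper's backend is PHP with MySQL; purely to illustrate the credential check described above, here is a language-neutral sketch in Python, with an in-memory dictionary standing in for the administrator table (the names, the salt, and the sample password are made up for the example):

```python
import hashlib
import hmac

# Stand-in for the administrator table in the background database.
# Store a salted password hash, never the plaintext password.
_ADMINS = {
    "admin": ("s4lt", hashlib.sha256(b"s4lt" + b"secret").hexdigest()),
}

def login(username, password):
    """Match POSTed credentials against the stored record, as the
    backend does before allowing add/edit/delete operations."""
    record = _ADMINS.get(username)
    if record is None:
        return False
    salt, stored = record
    digest = hashlib.sha256(salt.encode() + password.encode()).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(digest, stored)
```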

Design and Implementation of an Automatic Management System for vehicles using the Android Platform

Since its creation over 100 years ago, the car has positioned itself as the predominant means of transport in the world, and it has become a key instrument for the functioning of society. However, over the last decades, the world has experienced an intense process of environmental awareness, in which the automobile plays a significant role as one of the main pollution sources on the planet.

The OBDI standard was created with the objective of monitoring the pollution levels of light vehicles. OBDI is in charge of monitoring the engine's main components, and its implementation in new vehicles became mandatory in the United States in 1991 [1]. The current OBDII standard was developed in 1996 as a result of stricter environmental measures in the United States, and its implementation in new vehicles has been mandatory since that year. Currently, the OBDII standard has been implemented in most new vehicles around the world, and it is the main tool for a complete automotive diagnosis.

OBDII is not only able to determine errors in the vehicle's operation; it is also capable of providing real-time information about different parameters of the system. Currently, many companies require their vehicles to be in constant use, which puts them at risk of mechanical failures capable of compromising not only their integrity but also the occupants' safety. These failures can take the vehicle out of service for repair, which represents a problem in terms of cost and logistics for the owners. Therefore, a system that integrates the information provided by the OBDII standard with a user-friendly interface allows companies and vehicle owners to stay aware of the mechanical condition of their units, identifying and solving car problems and preventing potentially catastrophic damage. The implementation of such a system helps ensure the reliability of the vehicles and the safety of the occupants as far as the mechanical part is concerned.
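The real-time parameters mentioned above are exposed through OBD-II Mode 01 PIDs, whose data-byte decoding formulas are defined by SAE J1979. Three well-known examples (engine RPM, vehicle speed, and coolant temperature) can be sketched as:

```python
def parse_rpm(a, b):
    """Engine RPM from the two data bytes of a Mode 01 PID 0x0C
    response: ((256 * A) + B) / 4, per SAE J1979."""
    return ((a << 8) + b) / 4.0

def parse_speed(a):
    """Vehicle speed in km/h from Mode 01 PID 0x0D: the single
    data byte is the speed directly."""
    return a

def parse_coolant_temp(a):
    """Engine coolant temperature in degrees Celsius from Mode 01
    PID 0x05: A - 40."""
    return a - 40
```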

STANDARD OBDII

A. Description

In 1988, the US Environmental Protection Agency (EPA) established that vehicle manufacturers must include a self-diagnosis program in their automotive computers. This first generation of on-board diagnostic systems was known as OBDI. [2]

OBDI is a set of programmed instructions in automotive computers, or "brains". The main objective of these programs is to detect any damage that might occur in the actuators, switches, and wiring of any system related to the vehicle's gas emissions. Therefore, if the computer detects a failure or malfunction in any of these systems, an indicator on the dashboard is lit. The indicator will thus only be lit when a problem is detected in the vehicle's emissions.

OBDII is the second version of OBDI: the diagnosis programs are improved and add a new automotive monitoring function, unlike OBDI, which was only able to identify damaged components. The monitoring function deals not only with issues related to gas emission systems but also with several other systems responsible for the proper operation and safety of the vehicle. OBDII also allows all this information to be available and accessible to owners and mechanics using the proper equipment. OBDII's main characteristic is standardization: previously, each manufacturer implemented diagnostics based on its own considerations. The connector, communication protocols, fault codes, and terminology varied depending on the car brand, so diagnostic systems had no interoperability between cars from different manufacturers. OBDII's three main objectives are therefore:
• Standardize communication procedures and protocols between the diagnostic equipment and automotive computers.
• Promote the use of a standard link connector on all vehicles.
• Standardize the code numbers, code definitions, and language used for the description and identification of car flaws.
Currently, most modern light vehicles incorporate the OBDII standard; however, for heavy vehicles its implementation is not yet mandatory.

B. OBDII Connector

The Data Link Connector, or DLC, is the physical interface between the vehicle's computer and the diagnosis system or equipment.
In OBDI systems the shape, size and location of the connector varied between manufacturers, whereas in OBDII the 16-pin connector is standardized; even though its location varies between vehicles, it can generally be found on the left-hand side of the instrument panel.

C. OBDII Fault Codes
Also known as Diagnostic Trouble Codes (DTCs), these codes identify failures or malfunctions in systems or specific components of the vehicle. [3] Each vehicle has a malfunction indicator on its instrument panel known as the "Engine Light". When the computer detects a problem in the operation of one or more systems, it assigns a fault code that identifies the source of the problem, stores this code in the computer's internal memory, and turns on the malfunction indicator to prompt the owner to have the vehicle checked.
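As context for how such codes are structured, the sketch below decodes the standard five-character DTC format (e.g. "P0301"), where the first letter identifies the affected system and the second character distinguishes generic from manufacturer-specific codes. The class and method names are illustrative, not part of any standard API.

```java
// Illustrative decoder for the five-character OBDII DTC format (e.g. "P0301").
public class DtcDecoder {
    // First character identifies the vehicle system the fault belongs to.
    static String system(char c) {
        switch (c) {
            case 'P': return "Powertrain";
            case 'B': return "Body";
            case 'C': return "Chassis";
            case 'U': return "Network";
            default:  return "Unknown";
        }
    }

    // Second character: '0' marks a standardized (generic) code,
    // '1' a manufacturer-specific one.
    static boolean isGeneric(char c) {
        return c == '0';
    }

    public static String describe(String code) {
        if (code == null || code.length() != 5) {
            throw new IllegalArgumentException("DTC must be 5 characters, e.g. P0301");
        }
        return system(code.charAt(0))
                + (isGeneric(code.charAt(1)) ? " (generic)" : " (manufacturer-specific)");
    }
}
```

For example, "P0301" (a common cylinder-misfire code) decodes to a generic powertrain fault.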

D. BAFX OBDII Bluetooth Device
The BAFX OBDII Bluetooth adapter is a device that allows synchronization and communication between the computer of any vehicle manufactured after 1996 and an Android device or a computer with the Windows operating system. Figure 3 shows the BAFX OBDII device. [4]

Mobile Learning Application Based On Hybrid Mobile Application Technology Running On Android Smartphone and Blackberry

Nowadays, many universities take advantage of e-learning in the form of a website for lecturing. Students and faculty who want to access the e-learning system must find a computer or laptop, yet the physical size of such devices makes them inconvenient to carry around. Mobile devices, in contrast, have become a way of life for many people: computers are now replaced by smartphones that fit in a pocket and can be taken anywhere. The problem that arises on a device with a small screen, however, is that users need to zoom and scroll to view content comfortably. With the rapid development of technology, operating systems for mobile devices such as iOS, Android, Blackberry, WebOS and Symbian have also grown popular. This variety of operating systems raises a new problem in developing mobile e-learning (called mobile learning), because of differences in programming languages and in how each mobile device operates. Hybrid application technology can now overcome the problem of the many different mobile operating systems, and this new technology can be used to develop the mobile e-learning application. Furthermore, the application can be uploaded to an application store so that it can be downloaded by other users. In this research, a mobile learning application is developed as a further development of the existing web-based application.

MOBILE LEARNING
The term mobile learning (m-learning) refers to the use of mobile and handheld IT devices, such as Personal Digital Assistants (PDAs), mobile telephones, smartphones and tablet PCs, in teaching and learning. [5] As computers and the internet become essential educational tools, these technologies become more portable, affordable, effective and easy to use. This provides many opportunities for widening participation and access to ICT, and in particular the internet. Mobile devices such as phones and PDAs are much more reasonably priced than desktop computers, and therefore represent a less expensive way of accessing the internet. The introduction of tablet PCs now allows mobile internet access with equal, if not more, functionality than desktop computers. Mobile learning is currently most useful as a supplement to ICT, web learning and more traditional learning methods, and can do much to enrich the learning experience. In the future, mobile learning could be a major factor in engaging learners whom more traditional methods have failed to reach. As mobile phones combine PDA functions with cameras, video and MP3 players, and as tablets combine the portability of PDAs with the functionality of desktops, the world of learning becomes more mobile, more flexible and more exciting.

HYBRID MOBILE APPLICATION TECHNOLOGY
Hybrid means derived from heterogeneous sources, or composed of elements of different kinds. A hybrid application is one that is written with the same technology used for websites and mobile web implementations, and that is hosted or runs inside a native container on a mobile device: it is the integration of web technology and native execution. PhoneGap is an example of the most popular containers for creating hybrid mobile applications [3] [4]. Hybrid applications use a web view control (UIWebView on iOS, WebView on Android, and others) to present the HTML and JavaScript files in a full-screen format, using the native browser rendering engine. WebKit is the browser rendering engine used on iOS, Android, Blackberry and others. That means the HTML and JavaScript used to construct a hybrid application are rendered and processed by the WebKit rendering engine (for Windows 8, this is what the IE10 engine does for Metro-style applications that use WinJS) and displayed to the user in a full-screen web view control, not in a browser. Developers are no longer constrained to using HTML and JavaScript only for in-browser implementations on mobile devices.

Machine Learning-Based Mobile Threat Monitoring and Detection

Mobile computing is now dispersed and ubiquitous throughout our society, providing new avenues for communication, productivity, and commerce. Mobile networks are available and free to access throughout public spaces, laptops have provided a platform for on-the-go business management, and smartphones and tablets extend our access to information to the moment we wake up in the morning. Yet, as we have seen with the adoption of each new piece of technology, end users are often at significant risk. Malicious intentions and knowledge of the underlying technology provide the means for cyber attacks that compromise personal and business data. The need for dynamic defense systems to analyze and prevent malicious intrusion is therefore self-apparent. To address the pertinent issue of security in mobile technology, in this paper we propose a security system to detect malicious activities on Android OS devices. Our proposed system is designed to operate in a cloud environment, incurs low overhead on the Android device, and serves multiple smartphones simultaneously. The system centers on four primary components: the Android App, the Security Server, the Google Cloud Messaging (GCM) service, and the Analysis Module. The GCM service facilitates message delivery, processing requests from the security server to the Android app. The mobile app collects data from multiple devices and transmits it to the security server for preprocessing. In the analysis module, static and dynamic analysis are performed simultaneously, allowing rapid inspection of common attributes of Android malware while complex algorithms are applied in extended examination. Once the analysis is completed, a report can be sent to the device, and a security administrator overseeing the system can view the status of the various devices in the web visualization to improve security awareness and act on security risks.
The remainder of the paper is organized as follows: In Section II, we give the background and provide a literature review on the topics of smart mobile security and cloud computing security. In Section III, we describe the designed system architecture and outline the basic workflow. In Section IV, we describe the data analysis module, the analysis process and the evaluation results. Finally, we conclude the paper in Section V.

SYSTEM ARCHITECTURE AND WORKFLOW Our developed security framework is designed to be generic, and can operate as a cloud-based service. The primary components are the Security Server, the Google Cloud Messaging (GCM) service, the Mobile Application, and the Analysis Testbed, as outlined below. In combination, they provide the scaffolding for the interconnection of the mobile device to a powerful analysis testbed. • Security Server: The security hub is a typical LAMP (Linux, Apache, MySQL, PHP) server. Specifically, the Linux operating system is Ubuntu 14.04 server, running Apache2, MySQL 5.5 and PHP-5. The server is managed by the web application programmed in PHP, implementing the Laravel 5 framework, and the requisite dependencies. The web application utilizes the MySQL relational database model to store and manage smartphone system information, and application and log data, received from connected Android devices. It also provides the interface for security visualization for the security operator. • GCM: Google Cloud Messaging is a cloud-based messaging service provided by Google for developing applications compatible with Android, iOS, and Chrome. The primary feature of the GCM is to provide an authenticated project message host that queues messages while the device is not connected, and supports upstream and downstream messaging. • Mobile App: The mobile application is developed for Android OS devices. While operating, the mobile application is designed to listen for GCM messages and send system, application, and log data to the security server upon request for security analysis. • Data Analysis: The Data Analysis module utilizes Weka software [17] to analyze the test dataset comprised of dynamically obtained Android system calls and static permission information of malicious and benign applications. From the training analysis, the module can make predictive assertions about new applications based on their attributes. 
The workflow, shown in Figure 1, illustrates the typical interaction between the system components. The two time-dependent system operations are on Startup of the application, and Daily updates to identify system changes. These daily updates can additionally be initiated from the visualization in the security hub, at the discretion of a security administrator.
Startup – (1) Upon initializing the Android application, the GCM server is contacted to retrieve the registration token. This enables the initialization of new devices, as well as situations where the registration ID is refreshed. After (2) retrieving the registration token, (3) the application contacts the web server and passes three key values: the GCM registration token, and the device Brand and Serial. The application server then queries the database for the target data. If the information matches, no further action is taken. However, if the GCM registration token has changed, it is updated in the database. Should the device identifying information not be found, it is immediately added to the database, and (4) the server messages the GCM server, requesting additional system information from the device. (5) The GCM server passes the message to the device, and (6) the device passes the requested data to the web server to be added to the newly created database entry.
Daily – (7) Independently, the web server will message the GCM server daily, requesting application data for analysis. (8) The GCM will pass along the request when the device is connected. (9) The device then transmits the requested data to the web server for analysis. The received device information is stored in the database, preprocessed, and (10) transmitted to the analysis module. The analysis module then operates on the data and determines the risks, if any. The module composes a report that is (11) returned to the web server.
This report is stored in the database for review, and copies are transmitted to the security official and the (12) GCM server. Finally, the GCM server (13) delivers the report to the device. Once a device has been registered, the security server, running in the cloud, sends daily messages to the GCM. The GCM queues the messages and transmits the requests to the mobile device. The mobile app, listening for GCM messages, processes the requests and responds to the server directly. Once the requested data is received by the server, it updates the database and triggers the analysis module. The module reduces the data and determines the status of the mobile device. If the device has been compromised, a notification is sent both to the security officer and to the mobile device.
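The server-side matching described in steps (3)-(4) can be sketched as follows. The in-memory map stands in for the MySQL database, and all names are illustrative, not the paper's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the registration logic from steps (3)-(4): match on
// Brand+Serial, update a changed GCM token, or register a new device.
public class DeviceRegistry {
    // Keyed by "brand/serial"; the value is the stored GCM registration token.
    private final Map<String, String> db = new HashMap<>();

    /** Returns what the server would do for an incoming (token, brand, serial) triple. */
    public String register(String token, String brand, String serial) {
        String key = brand + "/" + serial;
        String stored = db.get(key);
        if (stored == null) {
            db.put(key, token);
            return "NEW_DEVICE";       // step (4): request full system info via GCM
        } else if (!stored.equals(token)) {
            db.put(key, token);        // token was refreshed; update the record
            return "TOKEN_UPDATED";
        }
        return "NO_ACTION";            // everything matches; nothing to do
    }
}
```

The three return values correspond to the three branches the workflow describes: no further action, token update, or new-device registration followed by a GCM request for additional data.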

Comparing Performance and Energy Consumption of Android Applications: Native Versus Web Approaches

Tablets, ultramobiles and mobile phones are changing the routines of people and organizations around the world, representing a very important market for consumer electronics as well as applications. In [1], Android is pointed out as the dominant mobile Operating System (OS), although devices running iOS or other OSs are also found on the market. Each OS represents an ecosystem that includes specific APIs, frameworks, and development tools, and usually also defines a language to be employed in development [2] [3]. To handle the diversity of ecosystems, native applications have traditionally been developed using the language defined by the target platform. The native approach was the predominant way to develop mobile applications for a time, since it can present advantages such as better usage of platform resources like 3D graphics or sensors. On the other hand, modern web technologies directed towards mobile are rapidly gaining interest from large communities of developers [4] [5]. These web-based approaches employ languages that are not native to the device's OS [6], such as HTML5, JavaScript, and PHP. As an advantage, they enable one single implementation to be shared across the target platforms, without having to deal with deployment-specific issues of the various ecosystems [6] [7]; they are thus known as cross-platform approaches. As mobile devices usually have limited resources and depend on a battery, approaches that allow migrating processing and data storage to a remote server, saving device resources [8], such as web- or cloud-based ones, have received attention. Following this approach, mobile applications are very close to traditional web systems, also adopting a client/server architecture. These systems adopt different technologies/languages for the front-end (client side) and back-end (server side). Usually, the front-end is developed in HTML5 or JavaScript, while the server side usually adopts PHP.
However, Node.js [9] has recently been proposed as an open-source cross-platform JavaScript runtime environment, which enables developers to write server-side components also using the JavaScript programming language. Differently from native applications, web-based mobile applications are executed from an embedded web browser. This additional layer can generate some overhead and thus impact application performance negatively. Moreover, applications developed in PHP or using Node.js require communication with the web server. Depending on the volume of data transferred, this communication can also impact performance and energy consumption negatively. In contrast, locally executed applications (i.e., native and JavaScript ones) tend to spend more on processing compared to those implemented in PHP or using Node.js. Comparative studies of native and cross-platform applications have been published since this tendency emerged. However, most of these works consider only criteria such as development facility, usability or end-user experience, and only a small number also discuss performance or energy consumption [5]. Since these metrics are very important in the context of mobile devices, this work aims to compare native Android applications developed in Java with web-based applications developed in PHP, JavaScript and Node.js. Through experiments, we evaluate the impact of adopting these approaches with regard to performance and energy consumption. Besides, we discuss when the adoption of a client/server solution can provide efficiency benefits compared to the local approaches. This work also explores the Node.js technology, which has recently emerged as a performance improvement strategy for web-based applications, evaluating how its usage impacts application efficiency.
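A performance comparison like this hinges on a careful timing harness. The sketch below shows the generic pattern (best-of-k wall-clock timing with System.nanoTime() around a stand-in workload, repeated to smooth out JIT warm-up); it is an illustrative skeleton under stated assumptions, not the paper's actual measurement code.

```java
// Illustrative best-of-k wall-clock timing harness; the workload is a stand-in.
public class MicroBenchmark {
    // Stand-in compute task: sum of the first n squares.
    static long workload(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) sum += (long) i * i;
        return sum;
    }

    /** Returns the best-of-k elapsed time in nanoseconds for workload(n). */
    public static long timeBestOf(int k, int n) {
        long best = Long.MAX_VALUE;
        for (int run = 0; run < k; run++) {
            long t0 = System.nanoTime();
            workload(n);
            long elapsed = System.nanoTime() - t0;
            if (elapsed < best) best = elapsed;   // keep the fastest run
        }
        return best;
    }
}
```

Taking the best of several runs reduces noise from JIT compilation and scheduling, which matters on resource-constrained mobile devices.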
This paper is organized as follows: Section II discusses related work; Section III presents the comparative study proposed here and details the methodology and the different evaluated implementations; Section IV presents and discusses experimental results; and Section V presents the conclusion and points out future work.

Performance Evaluation and Optimization for Android-based Web Server

INTRODUCTION
For a long time, web servers have generally been built on computer operating systems such as Windows and Linux, both of which are mature operating systems, but few people build servers on the Android system [1]. Now, with rising mobile device hardware levels and the rapid development of the Android system, Android has become a worldwide, wide-ranging operating system. Android is not only a mobile phone operating system, but is also increasingly used in tablet PCs, set-top boxes, wearable equipment, televisions, digital cameras and other equipment [2]. The Android system greatly enhances the functions of these devices and greatly enriches people's lives. The research object of this paper is the Android set-top box, a micro host running the Android operating system. By building a standard HTTP server environment in the Android system, the Android set-top box gains the ability to act as a lightweight Internet server [3]. We can put some web pages on the STB so that nearby people can access them, turning the STB into a regional server. For example, shops can place pages on the STB for customers to access and interact with, so the shops can obtain operational data. But the performance of the existing Android system equipped with an HTTP server is poor because of its large consumption of system resources. In this paper, an exploratory method is used to test and verify the processing capacity and system resource occupancy of web servers under concurrent access on the Android system. Through HTTP request tests, PHP request tests and MySQL request tests, the corresponding configuration is optimized to improve concurrent processing power and reduce system resource consumption. The rest of the paper is structured as follows. Section II briefly reviews existing work related to ours. Section III describes the test method and the test environment. Section IV presents the experiment implementation and results analysis.
Section V proposes how to optimize the server's performance and provides experimental verification. Section VI summarizes the full paper.
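The concurrent-access tests described above follow a common pattern that can be sketched as below: a fixed-size thread pool issues requests and the completions are counted. The HTTP call itself is stubbed out so the structure of the measurement is the focus; all names and parameters are illustrative, not the paper's actual test code.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Skeleton of a concurrent load test: N worker threads each issue R requests.
public class LoadTest {
    // Stub for one HTTP request against the Android-hosted server.
    static void doRequest() {
        // e.g. open a connection to the STB's HTTP/PHP/MySQL endpoint here
    }

    /** Runs the test and returns the number of completed requests. */
    public static int run(int threads, int requestsPerThread) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger completed = new AtomicInteger();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerThread; r++) {
                    doRequest();
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);   // wait for all workers
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }
}
```

In a real run, timing the call to run() and recording server-side CPU and memory usage would give the processing capacity and resource occupancy figures the paper measures.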

RELATED WORK
There has been much research on the performance of Android and of web servers, with many works proposing solutions to improve performance. However, there is little research on Android-based web servers and related performance optimization schemes. Vimal [4] designs a new memory management scheme for enhancing the performance of applications on Android. This scheme takes into account the user's application usage patterns to decide which applications have to be killed from main memory, and dynamically sets the background process cache limit based on the hit rate and the number of applications of interest to the user. Yuan [5] designs experiments to compare Binder with traditional IPC communication modes, and the experiments prove that Binder performance is enhanced by more than 20% by using a message queue instead of a global lock. Singh [6] points out the drawbacks of the current LMK approach and improves the user experience by reducing or removing the delay in memory crunch situations through efficient use of LMK. Su [7] introduces FSMdroid, a novel, guided approach to GUI testing of Android apps. Compared with traditional model-based testing approaches, FSMdroid enhances the diversity of test sequences by 85% while reducing their number by 54%. Liu et al. [8] present an approach for improving CTS test efficiency to reduce the time to perform CTS tests and shorten the time-to-market of Android devices. Asselin proposes an anomaly detection model as a very helpful tool to start building an efficient intrusion detection system adapted to a specific web application or to assist a forensic analysis.

Design and Implementation of Oil Painting Online Appreciation System Based on Android

With the popularity of computers and networked applications, artwork exhibition has started to move from off-line to online, providing artwork lovers and investors with more ways to appreciate works. Many famous artworks, including famous painters' paintings, can be browsed and downloaded from the internet. Since oil painting is an art form with demanding requirements on artistic expressiveness, a key question for its digital exhibition online is whether the display can preserve the expressive force of the real paintings. The design and realization of an oil painting management system can not only promote the communication of oil painting art but also provide a platform and tie for realizing its artistic value. This paper studies the mobile platform-based realization of an oil painting resource system, which pushes excellent oil paintings and the latest exhibition information to users' cell phones. The system realizes quantified appreciation evaluation of oil paintings through five-star ratings and sets up a convenient resource navigation system with character-based navigation. It constructs an online test system to evaluate learners' oil painting appreciation ability, supports creating test questions online and realizes automatic checking. The system also adopts WebApp technology and can establish a responsive layout in a unified standard for different mobile terminals, with good compatibility, cross-platform support and flexibility.

SYSTEM REQUIREMENT ANALYSIS
A. Users' Types and Authority Analysis
To satisfy different users' specific requirements, the appreciation system defines two user types: managers and common users. Managers comprise the system manager and website editors, while common users include teachers, students and anonymous users:
• The system manager belongs to the super user group and has the highest control authority and global authority in the system. This mainly covers functional adjustment of the system and business logic control such as data content filtering and interface content display. System managers are responsible for the normal operation and maintenance of the system.
• Website editors, that is, the content distributors of the website, are mainly responsible for real-time updating of website content, the latest notices and announcements of oil painting art shows, checking and publishing anonymous users' evaluations, updating famous painters' excellent oil painting resources and guaranteeing synchronous real-time updating of website information.
• Teacher users mainly manage curriculum and teaching, including evaluating and giving feedback on students' submitted oil paintings, managing student users' registration, checking students' online evaluations, creating oil painting technique questions online, announcing exam notices and correcting exams online.
• Student users mainly learn online: they study teachers' oil painting appreciation and technique knowledge, participate in learning group discussions, complete teachers' case appreciation exercises online, take teachers' stage tests and finally reinforce their knowledge according to the test results.
• Anonymous users. The system design itself follows open-access rules, with the purpose of letting more people understand the beauty of oil painting and cherish the charm of traditional culture.
B. Function Design
The construction objectives of the oil painting online display mainly include the following: (1) it helps users manage and simply process their oil paintings, organizing the electronic pictures on the computer according to rules and ranking them in a certain sequence for users' management; (2) computer and network technology are used for information management of oil paintings in order to improve the efficiency and quality of their transmission and to support informatized, scientific oil painting management. The functional requirements of the oil painting online system then mainly include artwork exhibition, painter management, paintings management, data management and system management. These are the basis for the functional design of the system, dividing it into functional modules.

Low cost QoS and RF power measuring tool for 2G/3G mobile network using Android OS platform

INTRODUCTION
Smartphones and 3G tablets are becoming a major platform for the execution of Internet services as more powerful and less expensive devices become available. From this perspective, a large number of mobile applications are becoming available for end users and corporations [1]. In comparison with home users, corporations require a more stable and reliable network, especially for wireless communications. Wireless service providers need to monitor their networks not only for received signal level but also for quality of service (QoS) [2]. Frequently, quality of service is integrated with subjective information and reported as Quality of Experience (QoE) to end users [3, 4]. Both metrics, QoS and QoE, require automated and expensive monitoring tools. As smartphones' processing power becomes comparable to personal computers, the possibility of using their capabilities for live network monitoring opens up an interesting research field [5]. In this work, we present a mobile application exploiting the measuring capabilities of modern Android-based smartphones. The developed application makes use of the Android Programming Interfaces (APIs) to extract interesting parameters from the network the user is connected to, such as received power and network response time (latency). We use the application as a live monitoring tool in more critical situations such as the case of fast-moving users. Post-processing of the monitored data is performed via the PHP scripting language and is integrated with Google Maps, using their respective programming interfaces. This post-processing results in three new layers superimposed on the geographical map of the surveyed area: one of signal quality, one of network latency and one of user velocity. The provided information can be useful for service providers to improve their quality of service for end users; as the saying goes, "you cannot improve what you cannot measure".
The paper is organized as follows: in the first section, we provide some technical equations and concepts of received signal power; in the second section, the extracted parameters are presented together with the application interface; in the third section, measured data are provided and the results are commented on and analyzed. At the end, some conclusions are drawn.

RF RECEIVED POWER ON MOBILE USER EQUIPMENT
In wireless communication, a high data rate is proportionally related to the Signal to Noise Ratio of the received signal. The received power level in the far-field region [6, 7] is inversely proportional to the square of the distance between the transmitting and receiving antennas. In a GSM/UMTS network, the distance between the user mobile station (MS) and the base station (BS) can be considered far-field in practical cases. In logarithmic form, the free-space received power is

PRX = PTX + GTX + GRX + 20 log10(λ / (4πd))   (1)

where GRX and GTX are the gains of the receive and transmit antennas (in dBi), respectively, λ is the wavelength, d is the relative distance between the transmitting and receiving antennas and PTX is the transmit power in dBm. From (1) we see that the received power at a fixed distance d cannot be improved without changing the antenna gains or the transmitted power. In (1) only free-space loss is considered, neglecting other factors such as multipath propagation (fading) or propagation loss in the different media inserted in the path from BS to MS. In practice, the transmitted power and transmitting antenna gain are the same at any measuring time. The receiving antenna of the mobile station is assumed omnidirectional, so its orientation does not influence the measuring procedure. This assumption is coherent with the ETSI (European Telecommunications Standards Institute) GSM Technical Specification [8, 9], where equipment with an integral antenna may be taken into account assuming a 0 dBi antenna gain. Therefore, at a fixed distance from the transmitting antenna the received power should be constant. In practical cases this is not true, due to other loss factors such as multipath propagation and atmospheric loss. Monitoring the received power at a fixed distance from the transmitting station thus provides information on the other loss factors that need to be taken into account to offer a high quality of service to end users. In this section, the requirements are given in terms of power levels at the antenna connector of the receiver.
This means that the tests on equipment with an integral antenna will consider field strengths (E) related to the specified power levels (P).
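The free-space relation of Eq. (1) can be sketched as a small calculation; the class and method names are illustrative.

```java
// Sketch of the free-space (Friis) received-power computation in logarithmic
// form: PRX = PTX + GTX + GRX + 20*log10(lambda / (4*pi*d)).
// Powers in dBm, gains in dBi, lambda and d in meters.
public class FriisModel {
    public static double receivedPowerDbm(double ptxDbm, double gtxDbi,
                                          double grxDbi, double lambdaM, double dM) {
        double pathLossTerm = 20.0 * Math.log10(lambdaM / (4.0 * Math.PI * dM));
        return ptxDbm + gtxDbi + grxDbi + pathLossTerm;
    }
}
```

The inverse-square dependence on distance shows up directly: doubling d lowers the received power by 20·log10(2) ≈ 6.02 dB.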

Using smartphones as live probes is possible since the received power is normally evaluated automatically by the device for communication purposes. From a programming point of view, the network signal strength is provided on the Android platform as ASU (Arbitrary Strength Unit) levels. The ASU level is an integer in the [0, 31] range (5-bit discretization) directly related to the Received Signal Strength Indicator (RSSI) for GSM networks (2G). For UMTS (3G), the same Android API reports the level index of CPICH-RSCP (Common Pilot Channel – Received Signal Code Power) defined in TS 25.125. In the UMTS cellular communication system, the received signal code power (RSCP) denotes the power measured by a receiver on a particular physical communication channel and is used as an indication of signal strength. This information is then reported in the more common measuring unit (dBm), as indicated by ETSI.
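The ASU-to-dBm conversion described here can be sketched as follows. The GSM mapping (dBm = 2·ASU − 113 for ASU in [0, 31]) follows the 3GPP TS 27.007 <rssi> definition; the UMTS RSCP mapping shown is the commonly cited one and is an assumption that should be checked against the relevant 3GPP specification.

```java
// Sketch of converting Android ASU levels to dBm.
public class AsuConverter {
    // GSM (2G): 3GPP TS 27.007 <rssi> mapping, ASU in [0, 31].
    public static int gsmAsuToDbm(int asu) {
        if (asu < 0 || asu > 31) {
            throw new IllegalArgumentException("GSM ASU must be in [0, 31]");
        }
        return 2 * asu - 113;   // -113 dBm .. -51 dBm
    }

    // UMTS (3G) RSCP: commonly cited mapping, assumed here (verify vs. 3GPP spec).
    public static int umtsRscpAsuToDbm(int asu) {
        return asu - 116;
    }
}
```

For example, the minimum GSM ASU of 0 maps to −113 dBm and the maximum of 31 to −51 dBm, matching the 5-bit reporting range mentioned above.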