DATA-DRIVEN COMPOSITION FOR SERVICE-ORIENTED SITUATIONAL WEB APPLICATIONS

This paper presents a systematic data-driven approach to assisting situational application development. We first propose a technique to extract useful information from multiple sources and abstract service capabilities as sets of tags. This supports intuitive expression of users’ desired composition goals through simple queries, without requiring knowledge of the underlying technical details. A planning technique then explores composition solutions that can satisfy the desired goals, and may also surface potentially interesting new composition opportunities. A browser-based tool facilitates visual and iterative refinement of composition solutions until satisfactory outputs are produced. A series of experiments demonstrates the efficiency and effectiveness of our approach.

We previously presented a data-driven composition technique for situational web applications using tag-based semantics, illustrating the overall life-cycle of our “compose-as-you-search” composition approach, proposing a clustering technique for deriving tag-based composition semantics, and evaluating the effectiveness of composition planning, respectively. Compared with that previous work, this paper is significantly extended by introducing a semi-supervised technique for clustering hierarchical tag-based semantics from service documentation and human-provided annotations. The derived semantics link service capabilities to developers’ processing goals, so that composition is carried out by planning “Tag HyperLinks” from the initial query to the goals.

The planning algorithm is also further evaluated in terms of recommendation quality, performance, and scalability over data sets from real-world service repositories. Results show that our approach achieves satisfactory precision and high-quality composition recommendations. We also demonstrate that our approach scales to service sets larger than those in real-world repositories while maintaining good performance. In addition, more details of our interactive development prototype are presented. We demonstrate in particular how the composition UI helps developers intuitively compose situational applications and iteratively refine their goals until their requirements are satisfied.
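To make the tag-based planning idea concrete, the following minimal Python sketch, our own illustration under assumed data structures rather than the paper’s planner, searches for a chain of services whose tag inputs and outputs connect an initial query to the goal tags (the service names and tags are hypothetical):

    from collections import deque

    def plan_composition(initial_tags, goal_tags, services):
        """Breadth-first search over tag sets: a service becomes applicable once
        its input tags are covered; applying it adds its output tags. The search
        stops when the goal tags are reached."""
        start = frozenset(initial_tags)
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            tags, plan = queue.popleft()
            if set(goal_tags) <= tags:
                return plan                      # ordered list of service names
            for name, inputs, outputs in services:
                if set(inputs) <= tags:
                    new_tags = frozenset(tags | set(outputs))
                    if new_tags not in seen:
                        seen.add(new_tags)
                        queue.append((new_tags, plan + [name]))
        return None                              # no composition satisfies the goal

    # Hypothetical repository: start from an "address" query, end with a map image.
    services = [("geocoder", {"address"}, {"latitude", "longitude"}),
                ("static_map", {"latitude", "longitude"}, {"map_image"})]
    print(plan_composition({"address"}, {"map_image"}, services))
    # -> ['geocoder', 'static_map']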

CONTINUOUS AND TRANSPARENT USER IDENTITY VERIFICATION FOR SECURE INTERNET SERVICES

Session management in distributed Internet services is traditionally based on username and password, explicit logouts, and mechanisms of user session expiration using classic timeouts. Emerging biometric solutions allow substituting username and password with biometric data during session establishment, but in such approaches a single verification is still deemed sufficient, and the identity of a user is considered immutable during the entire session. Additionally, the length of the session timeout may impact the usability of the service and, consequently, client satisfaction.

This paper explores promising alternatives offered by applying biometrics to the management of sessions. A secure protocol is defined for perpetual authentication through continuous user verification. The protocol determines adaptive timeouts based on the quality, frequency, and type of biometric data transparently acquired from the user. The functional behavior of the protocol is illustrated through Matlab simulations, while model-based quantitative analysis is carried out to assess the ability of the protocol to counter security attacks exercised by different kinds of attackers. Finally, the current prototype for PCs and Android smartphones is discussed.
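To make the idea of adaptive timeouts concrete, here is a minimal Python sketch; it is our own illustration, not the paper’s protocol, and the exponential trust model and all parameter values are assumptions:

    import math

    def session_timeout(trust, trust_threshold=0.6, decay_rate=0.02):
        """Illustrative only: how long (in seconds) a session may stay open before
        the decaying trust level falls below the acceptance threshold.

        trust           -- trust in the user's identity right after the latest
                           biometric verification (0..1); fresher, higher-quality
                           samples would yield higher values
        trust_threshold -- minimum acceptable trust before re-verification
        decay_rate      -- assumed per-second exponential decay of trust
        """
        if trust <= trust_threshold:
            return 0.0                      # re-verify immediately
        # Solve trust * exp(-decay_rate * t) = trust_threshold for t.
        return math.log(trust / trust_threshold) / decay_rate

    # A fresh, high-quality sample yields a longer timeout; a weaker one forces
    # earlier re-verification (values follow from the illustrative parameters).
    print(session_timeout(0.95))   # ~23 s
    print(session_timeout(0.70))   # ~7.7 s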

Collision Tolerant and Collision Free Packet Scheduling for Underwater Acoustic Localization

We consider the joint problem of packet scheduling and self-localization in an underwater acoustic sensor network with randomly distributed nodes. In terms of packet scheduling, our goal is to minimize the localization time; to do so, we consider two packet transmission schemes, namely a collision-free scheme (CFS) and a collision-tolerant scheme (CTS). The required localization time is formulated for these schemes, and through analytical results and numerical examples their performance is shown to depend on the circumstances. When the packet duration is short (as is the case for a localization packet), the operating area is large (above 3 km in at least one dimension), and the average probability of packet loss is not close to zero, the collision-tolerant scheme is found to require a shorter localization time.

CLOUD-BASED MULTIMEDIA CONTENT PROTECTION SYSTEM

We propose a new design for large-scale multimedia content protection systems. Our design leverages cloud infrastructures to provide cost efficiency, rapid deployment, scalability, and elasticity to accommodate varying workloads. The proposed system can be used to protect different multimedia content types, including 2-D videos, 3-D videos, images, audio clips, songs, and music clips. The system can be deployed on private and/or public clouds. Our system has two novel components: (i) a method to create signatures of 3-D videos, and (ii) a distributed matching engine for multimedia objects. The signature method creates robust and representative signatures of 3-D videos that capture the depth signals in these videos; it is computationally efficient to compute and compare, and it requires little storage. The distributed matching engine achieves high scalability and is designed to support different multimedia objects.

We implemented the proposed system and deployed it on two clouds: Amazon cloud and our private cloud. Our experiments with more than 11,000 3-D videos and 1 million images show the high accuracy and scalability of the proposed system. In addition, we compared our system to the protection system used by YouTube and our results show that the YouTube protection system fails to detect most copies of 3-D videos, while our system detects more than 98% of them. This comparison shows the need for the proposed 3-D signature method, since the state-of-the-art commercial system was not able to handle 3-D videos.

BRACER: A DISTRIBUTED BROADCAST PROTOCOL IN MULTI-HOP COGNITIVE RADIO AD HOC NETWORKS

Broadcast is an important operation in wireless ad hoc networks where control information is usually propagated as broadcasts for the realization of most networking protocols. In traditional ad hoc networks, since the spectrum availability is uniform, broadcasts are delivered via a common channel which can be heard by all users in a network. However, in cognitive radio (CR) ad hoc networks, different unlicensed users may acquire different available channel sets. This non-uniform spectrum availability imposes special design challenges for broadcasting in CR ad hoc networks.

In this paper, a fully distributed broadcast protocol with collision avoidance for multi-hop cognitive radio ad hoc networks, BRACER, is proposed. In our design, we consider practical scenarios in which each unlicensed user is not assumed to be aware of the global network topology, the spectrum availability information of other users, or time synchronization information. By intelligently downsizing the original available channel set and designing the broadcasting sequences and scheduling schemes, our proposed broadcast protocol can provide a very high successful broadcast ratio while achieving very short average broadcast delay. It can also avoid broadcast collisions. To the best of our knowledge, this is the first work that addresses the unique broadcasting challenges of multi-hop CR ad hoc networks with collision avoidance.

A Methodology for Extracting Standing Human Bodies From Single Images

Segmentation of human bodies in images is a challenging task that can facilitate numerous applications, like scene understanding and activity recognition. In order to cope with the high-dimensional pose space, scene complexity, and varied human appearances, the majority of existing works require computationally complex training and template matching processes.
We propose a bottom-up methodology for automatic extraction of human bodies from single images, in the case of almost upright poses in cluttered environments. The position, dimensions, and color of the face are used for the localization of the human body, construction of the models for the upper and lower body according to anthropometric constraints, and estimation of the skin color.
Different levels of segmentation granularity are combined to extract the pose with the highest potential. The segments that belong to the human body arise through the joint estimation of the foreground and background during the body part search phases, which alleviates the need for exact shape matching. The performance of our algorithm is measured on 40 images (43 persons) from the INRIA person dataset and 163 images from the “lab1” dataset, where the measured accuracies are 89.53% and 97.68%, respectively. Qualitative and quantitative experimental results demonstrate that our methodology outperforms state-of-the-art interactive and hybrid top-down/bottom-up approaches.

Principles of navigation

Navigation between different screens and apps is a core part of the user experience. The following principles set a baseline for a consistent and intuitive user experience across apps. The Navigation component is designed to implement these principles by default, ensuring that users can apply the same heuristics and patterns in navigation as they move between apps.

Note: Even if you aren’t using the Navigation component in your project, your app should follow these design principles.

Fixed start destination

Every app you build has a fixed start destination. This is the first screen the user sees when they launch your app from the launcher. This destination is also the last screen the user sees when they return to the launcher after pressing the Back button. Let’s take a look at the Sunflower app as an example.

When launching the Sunflower app from the launcher, the first screen that a user sees is the List Screen, the list of plants in their garden. This is also the last screen they see before exiting the app. If they press the Back button from the list screen, they navigate back to the launcher.

Note: An app might have a one-time setup or series of login screens. These conditional screens should not be considered start destinations because users see these screens only in certain cases.

Navigation state is represented as a stack of destinations

When your app is first launched, a new task is created for the user, and the app displays its start destination. This becomes the base destination of what is known as the back stack and is the basis for your app’s navigation state. The top of the stack is the current screen, and the previous destinations in the stack represent the history of where you’ve been. The back stack always has the start destination of the app at the bottom of the stack.

Operations that change the back stack always operate on the top of the stack, either by pushing a new destination onto the top of the stack or popping the top-most destination off the stack. Navigating to a destination pushes that destination on top of the stack.

The Navigation component manages all of your back stack ordering for you, though you can also choose to manage the back stack yourself.
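The push/pop behavior described above can be pictured with a tiny conceptual sketch. This is plain Python and not the Navigation component API; the screen names are taken loosely from the Sunflower example:

    # Conceptual model of the back stack; not the actual Navigation component API.
    back_stack = ["my_garden"]            # the start destination is always at the bottom

    def navigate_to(destination):
        back_stack.append(destination)    # navigating pushes onto the top

    def press_back():
        if len(back_stack) > 1:
            back_stack.pop()              # Back pops the current destination
        else:
            print("at the start destination: Back exits the app")

    navigate_to("plant_detail/apple")
    print(back_stack)   # ['my_garden', 'plant_detail/apple']
    press_back()
    print(back_stack)   # ['my_garden']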

Up and Back are identical within your app’s task

The Back button appears in the system navigation bar at the bottom of the screen and is used to navigate in reverse-chronological order through the history of screens the user has recently worked with. When you press the Back button, the current destination is popped off the top of the back stack, and you then navigate to the previous destination.

The Up button appears in the app bar at the top of the screen. Within your app’s task, the Up and Back buttons behave identically.

The Up button never exits your app

If a user is at the app’s start destination, then the Up button does not appear, because the Up button never exits the app. The Back button, however, is shown and does exit the app.

When your app is launched using a deep link on another app’s task, Up transitions users back to your app’s task and through a simulated back stack and not to the app that triggered the deep link. The Back button, however, does take you back to the other app.

Deep linking simulates manual navigation

Whether deep linking or manually navigating to a specific destination, you can use the Up button to navigate through destinations back to the start destination.

When deep linking to a destination within your app’s task, any existing back stack for your app’s task is removed and replaced with the deep-linked back stack.

Using the Sunflower app again as an example, let’s assume that the user had previously launched the app from the launcher screen and navigated to the detail screen for an apple. Looking at the Recents screen would indicate that a task exists with the topmost screen being the detail screen for the apple.

At this point, the user can tap the Home button to put the app in the background. Next, let’s say this app has a deep link feature that allows users to launch directly into a specific plant detail screen by name. Opening the app via this deep link completely replaces the current Sunflower back stack shown in figure 3 with a new back stack, as shown in figure 4:

Figure 4: Following a deep link replaces the existing back stack for the Sunflower app.

Notice that the Sunflower back stack is replaced by a synthetic back stack with the avocado detail screen at the top. The My Garden screen, which is the start destination, was also added to the back stack. This is important because the synthetic back stack must be realistic. It should match a back stack that could have been achieved by organically navigating through the app. The original Sunflower back stack is gone, including the app’s knowledge that the user was on the Apple details screen before.

The Navigation component supports deep linking and recreates a realistic back stack for you when linking to any destination in your navigation graph.
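Deep linking can be pictured in the same conceptual model used earlier. Again, this is not the Navigation component API, and the synthetic stack here is simplified to the start destination plus the deep-linked destination:

    # Continuing the conceptual model above: deep linking replaces whatever back
    # stack existed with a synthetic one rooted at the start destination.
    back_stack = ["my_garden", "plant_detail/apple"]     # the user's organic history

    def deep_link_to(destination, start_destination="my_garden"):
        back_stack.clear()                               # existing history is discarded
        back_stack.extend([start_destination, destination])  # realistic synthetic stack

    deep_link_to("plant_detail/avocado")
    print(back_stack)   # ['my_garden', 'plant_detail/avocado']
    # Pressing Back or Up from here lands on My Garden, not on the old apple screen.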

New Functions of PHP 5

In PHP 5 there are some new functions. Here is the list of them:

Arrays:

array_combine() – Creates an array by using one array for keys and another for its values

array_diff_uassoc() – Computes the difference of arrays with additional index check which is performed by a user supplied callback function

array_udiff() – Computes the difference of arrays by using a callback function for data comparison

array_udiff_assoc() – Computes the difference of arrays with additional index check. The data is compared by using a callback function

array_udiff_uassoc() – Computes the difference of arrays with additional index check. The data is compared by using a callback function. The index check is done by a callback function also

array_walk_recursive() – Apply a user function recursively to every member of an array

array_uintersect_assoc() – Computes the intersection of arrays with additional index check. The data is compared by using a callback function

array_uintersect_uassoc() – Computes the intersection of arrays with additional index check. Both the data and the indexes are compared by using separate callback functions

array_uintersect() – Computes the intersection of arrays. The data is compared by using a callback function

InterBase:

ibase_affected_rows() – Return the number of rows that were affected by the previous query

ibase_backup() – Initiates a backup task in the service manager and returns immediately

ibase_commit_ret() – Commit a transaction without closing it

ibase_db_info() – Request statistics about a database

ibase_drop_db() – Drops a database

ibase_errcode() – Return an error code

ibase_free_event_handler() – Cancels a registered event handler

ibase_gen_id() – Increments the named generator and returns its new value

ibase_maintain_db() – Execute a maintenance command on the database server

ibase_name_result() – Assigns a name to a result set

ibase_num_params() – Return the number of parameters in a prepared query

ibase_param_info() – Return information about a parameter in a prepared query

ibase_restore() – Initiates a restore task in the service manager and returns immediately

ibase_rollback_ret() – Rollback transaction and retain the transaction context

ibase_server_info() – Request statistics about a database server

ibase_service_attach() – Connect to the service manager

ibase_service_detach() – Disconnect from the service manager

ibase_set_event_handler() – Register a callback function to be called when events are posted

ibase_wait_event() – Wait for an event to be posted by the database

iconv:

iconv_mime_decode() – Decodes a MIME header field

iconv_mime_decode_headers() – Decodes multiple MIME header fields at once

iconv_mime_encode() – Composes a MIME header field

iconv_strlen() – Returns the character count of string

iconv_strpos() – Finds position of first occurrence of a needle within a haystack

iconv_strrpos() – Finds the last occurrence of a needle within a haystack

iconv_substr() – Cut out part of a string

Streams:

stream_copy_to_stream() – Copies data from one stream to another

stream_get_line() – Gets line from stream resource up to a given delimiter

stream_socket_accept() – Accept a connection on a socket created by stream_socket_server()

stream_socket_client() – Open Internet or Unix domain socket connection

stream_socket_get_name() – Retrieve the name of the local or remote sockets

stream_socket_recvfrom() – Receives data from a socket, connected or not

stream_socket_sendto() – Sends a message to a socket, whether it is connected or not

stream_socket_server() – Create an Internet or Unix domain server socket

Date and time related:

idate() – Format a local time/date as integer

date_sunset() – Time of sunset for a given day and location

date_sunrise() – Time of sunrise for a given day and location

time_nanosleep() – Delay for a number of seconds and nanoseconds

Strings:

str_split() – Convert a string to an array

strpbrk() – Search a string for any of a set of characters

substr_compare() – Binary-safe, optionally case-insensitive comparison of two strings from an offset, up to length characters

Other:

convert_uudecode() – Decode a uuencoded string

convert_uuencode() – Uuencode a string

curl_copy_handle() – Copy a cURL handle along with all of its preferences

dba_key_split() – Splits a key in string representation into array representation

dbase_get_header_info() – Get the header info of a dBase database

dbx_fetch_row() – Fetches rows from a query-result that had the DBX_RESULT_UNBUFFERED flag set

fbsql_set_password() – Change the password for a given user

file_put_contents() – Write a string to a file

ftp_alloc() – Allocates space for a file to be uploaded

get_declared_interfaces() – Returns an array of all declared interfaces

get_headers() – Fetches all the headers sent by the server in response to an HTTP request

headers_list() – Returns a list of response headers sent (or ready to send)

http_build_query() – Generate URL-encoded query string

image_type_to_extension() – Get file extension for image-type returned by getimagesize(), exif_read_data(), exif_thumbnail(), exif_imagetype()

imagefilter() – Applies a filter to an image using custom arguments

imap_getacl() – Gets the ACL for a given mailbox

ldap_sasl_bind() – Bind to LDAP directory using SASL

mb_list_encodings() – Returns an array of all supported encodings

pcntl_getpriority() – Get the priority of any process

pcntl_wait() – Waits on or returns the status of a forked child as defined by the waitpid() system call

pg_version() – Returns an array with client, protocol and server version (when available)

php_check_syntax() – Check the syntax of the specified file

php_strip_whitespace() – Return source with stripped comments and whitespace

proc_nice() – Change the priority of the current process

pspell_config_data_dir() – Change location of language data files

pspell_config_dict_dir() – Change location of the main word list

setrawcookie() – Send a cookie without URL-encoding the value

scandir() – List files and directories inside the specified path

snmp_read_mib() – Reads and parses a MIB file into the active MIB tree

sqlite_fetch_column_types() – Return an array of column types from a particular table

International PHP Conference 2019 – Fall Edition

The International PHP Conference is the world’s first PHP conference and has stood for more than a decade for top-notch pragmatic expertise in PHP and web technologies. At the IPC, internationally renowned experts from the PHP industry meet up with PHP users and developers from large and small companies. This is the place where concepts emerge and ideas are born: the IPC signifies knowledge transfer at the highest level.

All delegates of the International PHP Conference have, in addition to the PHP program, free access to the entire range of the International JavaScript Conference taking place at the same time.

Basic facts:

Date: October 21 – 25, 2019

Location: Holiday Inn Munich City Centre, Munich

Highlights:

  • 60+ best practice sessions
  • 50+ international top speakers
  • PHPower: Hands-on Power Workshops
  • Expo with exciting exhibitors on October 22nd & 23rd
  • Conference Combo: Visit the International JavaScript Conference for free
  • All inclusive: Changing buffets, snacks & refreshing drinks
  • Official certificate for attendees
  • Free Swag: Developer bag, T-Shirt, magazines etc.
  • Exclusive networking events

Topics:

  • PHP Development
  • Web Development
  • JavaScript Development
  • Agile & Culture
  • DevOps
  • Architecture
  • Web Security
  • Testing & Quality

Android Studio 3.5 Beta 5 available


Android Studio 3.5 Beta 5 is now available in the Beta channel.

If you have Android Studio set up to receive updates on the Beta channel, you can get the update by choosing Help > Check for Updates (Android Studio > Check for Updates on macOS).

Fixed issues with predefined Android code styling

We fixed the underlying issues around applying the predefined Android code style for Java and XML, and it is now the default again for both IDE and project schemes. If you have local code style changes, those will be unaffected; you can always reapply the defaults by selecting Set from > Predefined Style > Android on the Code Style settings page. (Issue #131581006)

General fixes

This update also includes fixes for the following public issues:

Core IDE

  • Issue #133666019: New Image Asset wizard (launcher / legacy) does not trim image to selected shape
  • Issue #133771451: IDE ERROR DISPLAY
  • Issue #133066328: Error preview when creating image asset > icon launcher (Preview rendering error: rendering failed – null)

Data Binding

  • Issue #131889243: Studio 3.5 deadlock (Kotlin resolve + databinding)
  • Issue #132367955: AS 3.5 Beta 1 assumes Databinding bindings are Views

Design Tools

  • Issue #133184665: Resource picker doesn’t appear when adding an attribute using Declared Attribute + button

Dexer (D8)

  • Issue #118842646: Ability to selectively suppress warnings during D8 desugaring

Gradle

  • Issue #132840182: ClassNotFoundException on API 21 or 22 device.
  • Issue #133273847: Error: Duplicate resources in gradle plugin 3.5.0-beta01 and 02

Layout Editor

  • Issue #132578769: ConstraintLayout v2.0.0-beta1: Impossible to drop element on layout with data element defined
  • Issue #133789726: GoTo navigation goes to the wrong property or doesn’t work
  • Issue #133225561: Completions does not seem to work in a newly added attribute
  • Issue #134522901: Android Studio full crash every time you undo widget rename
  • Issue #132323234: Long names don’t fit in dropdown menus for attributes and can’t be distinguished
  • Issue #133526948: attributes starting with “__removed” are showing up in the properties panel

Lint

  • Issue #131844902: DefaultJavaEvaluator.getProject sometimes returning /media for /media2/player/…MediaPlayer.java
  • Issue #111487505: Unnecessary warning for Attribute ‘importantForAutofill’ is only used in API level 26 and higher

Navigation

  • Issue #133280833: element can only be included in application manifest

Run Debug

  • Issue #134515798: Improve error reporting when ADB cannot be executed
  • Issue #131786506: IndexNotReadyException in AndroidTestRunConfiguration.getRunnerFromManifest

Shrinker (R8)

  • Issue #132549918: Using -keepparameternames has no effect
  • Issue #134304597: VerifyError: kotlinx/coroutines/AbstractCoroutine at API 17, 18
  • Issue #135210786: NoClassDefFoundError in runtime on API 19 and below when using AGP 3.5.0-beta04
  • Issue #134093979: Unsupported source file type (META-INF/versions/9/module-info.class)
  • Issue #133686361: R8 1.5 issue with Google play core library
  • Issue #134462736: R8 1.5.43 introduce again VerifyError
  • Issue #133215941: VerifyError with Android Annotations
  • Issue #133457361: AbstractMethodError when calling interface provided as Java 8 lambda with R8 on Android Gradle Plugin 3.4.1
  • Issue #132953944: java.lang.VerifyError at api19 and below
  • Issue #134838460: Add support for keep option modifier `includecode`


For information on new features and changes in all preview builds of Android Studio 3.5, see the Android Studio Preview release notes. For details of bugs fixed in each preview release, see previous entries on this blog.

We greatly appreciate your bug reports, which help us to make Android Studio better. If you encounter a problem, let us know by reporting a bug. Note that you can also vote for an existing issue to indicate that you are also affected by it.

Joint Interference Coordination and Load Balancing for OFDMA Multihop Cellular Networks

Multihop cellular networks (MCNs) have drawn tremendous attention due to their high throughput and extensive coverage. However, there are still three issues that are not well addressed. With the existence of relay stations (RSs), how to efficiently allocate frequency resources to relay links becomes a challenging design issue. For mobile stations (MSs) near the cell edge, cochannel interference (CCI) becomes severe, which significantly affects the network performance.

Furthermore, the unbalanced user distribution will result in traffic congestion and inability to guarantee quality of service (QoS). To address these problems, we propose a quantitative study on adaptive resource allocation schemes by jointly considering interference coordination (IC) and load balancing (LB) in MCNs.

In this paper, we focus on the downlink of OFDMA-based MCNs with time division duplex (TDD) mode, and analyze the characteristics of resource allocation according to IEEE 802.16j/m specification. We also design a novel frequency reuse scheme to mitigate interference and maintain high spectral efficiency, and provide practical LB-based handover mechanisms which can evenly distribute the traffic and guarantee users’ QoS.

  1. INTRODUCTION:

The future wireless cellular networks, such as 3GPP Long Term Evolution-Advanced (LTE-Advanced) and IEEE 802.16m systems, will adopt orthogonal frequency division multiple access (OFDMA) technology for multihop cellular networks (MCNs). OFDMA is regarded as the most promising physical layer technology for fourth generation (4G) wireless networks. New relay strategies and technologies are proposed to provide services with extended coverage and higher data rates. Fixed relay stations (RSs) with fewer functionalities than base stations (BSs) can be deployed to overcome poor channel conditions while maintaining low infrastructure cost. Nevertheless, MCNs have inherent drawbacks; for example, extra radio resources are required on relay links (BS-RS links). Therefore, well-designed radio resource allocation schemes are crucial for MCNs to effectively exploit the benefits of RSs while overcoming the disadvantages.

Since RSs always utilize the same spectrum as MSs or BSs, cochannel interference (CCI) is closely related to the radio resource allocation scheme in MCNs due to intercell and intracell frequency reuse. OFDMA systems should employ frequency planning for better cell-edge performance and ease of interference management. Traditional single-hop cellular networks (SCNs) typically employ a frequency reuse pattern with factor 3 or 7 to reduce CCI, which results in low spectral efficiency. A high data rate is one of the desired features of future cellular networks, and it requires highly efficient utilization of the available spectrum. Frequency reuse with factor 1 is likely to be used in LTE-Advanced and IEEE 802.16m systems, aiming at improving the spectral efficiency. However, the CCI under this frequency planning causes severe performance degradation at cell boundaries. In the Worldwide Interoperability for Microwave Access (WiMAX) Forum, the frequency reuse pattern is denoted as N × S × K, which means that the network is divided into clusters of N cells (each cell in the cluster has a different frequency band), with S sectors and K different frequency bands per cell. According to these reuse patterns, all available spectrum is assigned to every sector-BS in the 1 × 3 × 1 pattern, whereas each sector-BS uses only one third of the total frequency bands in the 1 × 3 × 3 pattern. The CCI level is higher in the former, whereas the spectral efficiency is lower in the latter. If 1 × 3 × 3 is used in MCNs, the spectral efficiency will be much lower because extra frequency resources have to be allocated to relay links. If 1 × 3 × 1 is used in MCNs, the frequency reuse scheme becomes even more important in a multicell scenario. Compared with BSs deployed at the cell center, RSs deployed at the cell edge cause serious interference because they are closer to the mobile stations (MSs) in adjacent cells than those BSs are.
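Purely as illustrative arithmetic for the trade-off just described (the total bandwidth and relay share below are hypothetical values, not taken from this work), a short Python sketch:

    # Illustrative per-sector bandwidth under the two reuse patterns discussed above.
    total_bandwidth_mhz = 10.0

    def per_sector_bandwidth(total_mhz, k_bands_per_cell):
        """In an N x S x K pattern, each sector-BS uses total/K of the spectrum."""
        return total_mhz / k_bands_per_cell

    bw_1x3x1 = per_sector_bandwidth(total_bandwidth_mhz, 1)   # 10 MHz, higher CCI
    bw_1x3x3 = per_sector_bandwidth(total_bandwidth_mhz, 3)   # ~3.3 MHz, lower CCI

    # In an MCN, part of each sector's share must also feed the relay (BS-RS) links,
    # so the bandwidth left for access links shrinks further, e.g. with a 20% relay share:
    relay_share = 0.2
    print(bw_1x3x1 * (1 - relay_share))   # 8.0 MHz left for access links
    print(bw_1x3x3 * (1 - relay_share))   # ~2.7 MHz left for access links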

In the existing literature, there are several works on reducing CCI in MCNs. Several static resource allocation schemes with different partitions and reuse factors have been discussed, and the CCI of these schemes has been analyzed in a multicell scenario. A relay-based orthogonal frequency planning strategy has been proposed to improve cell-edge performance. Fractional frequency reuse (FFR) has been extended to MCNs as a compromise solution that reduces CCI while maintaining a sector frequency reuse factor of 1. The main idea of FFR is to adopt frequency reuse 1 × 3 × 1 at the cell center to maximize the network spectral efficiency while harnessing frequency reuse 1 × 3 × 3 at the cell edge to alleviate CCI; the minimum CCI is achieved by adjusting the transmission (Tx) power at BSs and RSs under orthogonal frequency resource allocation. The essence of these works is to use part of the frequency bands, kept orthogonal, at the cell edge and the remaining frequency bands at the cell center.

Moreover, the static frequency allocation schemes proposed in the aforementioned works fit uniform traffic distributions only. In reality, users are not evenly distributed among cells. Too many users accessing one station (BS or RS) leads to load imbalance in MCNs. Such an imbalance can severely affect the performance of hot-spot areas, which may then fail to meet the users’ quality of service (QoS) requirements. This is another major reason for system performance degradation. To guarantee users’ QoS, therefore, load balancing (LB) should be adopted along with IC for MCNs.

LB has been widely studied in SCNs and heterogeneous networks (HetNets). For SCNs, resource allocation schemes have to work in conjunction with connection admission control (CAC) mechanisms, which determine, based on the available resources and users’ QoS, whether to admit an incoming connection to a particular cell, or to reject it in the current cell and switch the user to an adjacent non-congested cell through a handover mechanism. Here, the corresponding handover is executed not because of a position change of the user, but because of the lack of resources in the original cell. Cell breathing and load-aware handover have been proposed as important LB methods; the idea is that if a cell is heavily congested, an adjacent non-congested cell may expand its coverage and accommodate more users by raising its transmission power. A scheme jointly considering IC and LB has also been designed to improve the weighted sum of data rates in multicell networks; the problem is NP-hard, and a local-improvement-based algorithm was developed to solve it. These works require not only higher transmission power at the adjacent cell stations, but also continual reporting of a large amount of information on signal quality and traffic load in the surrounding cells to the mobile switching center (MSC) to calculate the best connection to a BS. Apparently, this increases the system overhead and management complexity. For HetNets, an integrated cellular and ad hoc relay (iCAR) system has been proposed, in which some users can be switched to adjacent cells through ad hoc RSs and the spare resources are then acquired by incoming users. However, this type of LB only works with HetNets.

HetNets intend to change the traditional system architecture of cellular networks, while MCNs only attempt to improve the network performance of traditional cellular networks through the use of RSs. It is noticeable that MCNs differ from HetNets in the following characteristics: 1) RSs are important add-on communication facilities of cellular networks, which also share the same spectrum with BSs;

 2) BSs and RSs are connected through wireless radio interfaces;

3) the users associated with an RS need to access BS ultimately, which may ask for two-hop transmissions to deliver data.

With the deployment of RSs in MCNs, more handover opportunities arise, leading to better resource management and performance gains. This paper focuses on how to switch connections from congested stations to non-congested stations and increase the available frequency resources for congested stations to achieve LB. In a cell, the traffic load information of the RSs as well as the link qualities between RSs and MSs are reported to the BS by the RSs. The BS is directly responsible for performing the handover mechanisms in each sector. This method does not require collecting and processing all kinds of information for a group of cells, which reduces the complexity of the system implementation and guarantees QoS for users in hot spots.

The main contributions of this paper can be summarized as follows. We provide a quantitative study on an adaptive resource allocation scheme by jointly considering IC and LB in MCNs. We also present a novel frequency reuse scheme to mitigate interference and maintain high spectral efficiency, and propose practical LB-based handover mechanisms which can evenly distribute the traffic load and guarantee users’ QoS. Extensive simulations demonstrate that our proposed schemes provide higher throughput and accommodate more QoS-guaranteed users than conventional SCNs.

1.3 LITERATURE SURVEY

OPPORTUNITIES AND CHALLENGES IN OFDMA-BASED CELLULAR RELAY NETWORKS: A RADIO RESOURCE MANAGEMENT PERSPECTIVE

PUBLICATION: M. Salem, A. Adinoyi, H. Yanikomeroglu, and D. Falconer, IEEE Trans. Vehicular Technology, vol. 59, no. 5, pp. 2496-2510, Jan. 2010.

The opportunities and flexibility in relay networks and orthogonal frequency-division multiple access (OFDMA) make the combination a suitable candidate network and air-interface technology for providing reliable and ubiquitous high-data-rate coverage in next-generation cellular networks. Advanced and intelligent radio resource management (RRM) schemes are known to be crucial toward harnessing these opportunities in future OFDMA-based relay-enhanced cellular networks. However, it is not very clear how to address the new RRM challenges (such as enabling distributed algorithms, intra-cell/inter-cell routing, intense and dynamic co-channel interference (CCI), and feedback overhead) in such complex environments comprising a plethora of relay stations (RSs) of different functionalities and characteristics. Employment of conventional RRM schemes in such networks will highly be inefficient if not infeasible. The next-generation networks are required to meet the expectations of all wireless users, irrespective of their locations. High-data-rate connectivity, mobility, and reliability, among other features, are examples of these expectations. Therefore, fairness is a critical performance aspect that has to be taken into account in the design of prospective RRM schemes. This paper reviews some of the prominent challenges involved in migrating from the conventional cellular architecture to the relay-based type and discusses how intelligent RRM schemes can exploit the opportunities in relay-enhanced OFDMA-based cellular networks. We identify the role of multiantenna systems and explore the current approaches in literature to extend the conventional schedulers to next-generation relay networks. This paper also highlights the fairness aspect in such networks in the light of the recent literature, provides some example fairness metrics, and compares the performances of some representative algorithms.

INTERFERENCE COORDINATION IN COMPACT FREQUENCY REUSE FOR MULTIHOP CELLULAR NETWORKS

PUBLICATION: Y. Zhao, X. Fang, and Z. Zhao, IEICE Trans. Fundamentals of Electronics, Comm. and Computer Sciences, vol. E93-A, no. 11, pp. 2312-2319, Nov. 2010.

Continuously increasing the bandwidth to enhance the capacity is impractical because of the scarcity of spectrum. Fortunately, on the basis of the characteristics of multihop cellular networks (MCNs), a new compact frequency reuse scheme has been proposed to provide higher spectrum utilization efficiency and larger capacity without increasing the network cost. Base stations (BSs) and relay stations (RSs) can transmit simultaneously on the same frequency under the compact frequency reuse scheme. In this situation, however, mobile stations (MSs) near the coverage boundary suffer serious interference, and their traffic quality can hardly be guaranteed. In order to mitigate the interference while maintaining high spectrum utilization efficiency, this paper introduces a fractional frequency reuse (FFR) scheme into multihop cellular networks; the principle of the FFR scheme and the characteristics of the frequency resource configurations are described, and then the transmission (Tx) power consumption of the BS and RSs is analyzed. The proposed scheme can both meet the requirement of high traffic load in future cellular systems and maximize the benefit by reducing the Tx power consumption. Numerical results demonstrate that the proposed FFR in compact frequency reuse achieves higher cell coverage probability and larger capacity with respect to the conventional schemes.

TECHNICAL SPECIFICATION GROUP RADIO ACCESS NETWORK; PHYSICAL LAYER ASPECTS FOR EVOLVED UNIVERSAL TERRESTRIAL RADIO ACCESS (UTRA)

PUBLICATION: Third Generation Partnership Project,  3GPP Technical Report 25.814 v7.1.0, Sept. 2006.

The justification of the study item was that, with enhancements such as HSDPA and Enhanced Uplink, the 3GPP radio-access technology will be highly competitive for several years. However, to ensure competitiveness in an even longer time frame, i.e., for the next 10 years and beyond, a long-term evolution of the 3GPP radio-access technology needs to be considered. Important parts of such a long-term evolution include reduced latency, higher user data rates, improved system capacity and coverage, and reduced cost for the operator. In order to achieve this, an evolution of the radio interface as well as the radio network architecture should be considered. Considering the desire for even higher data rates, and also taking into account future additional 3G spectrum allocations, the long-term 3GPP evolution should include an evolution towards support for transmission bandwidths wider than 5 MHz. At the same time, support for transmission bandwidths of 5 MHz and less should be investigated in order to allow for more flexibility in whichever frequency bands the system may be deployed.


CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

In the existing literature, there are several works on reducing CCI in MCNs. Several static resource allocation schemes with different partitions and reuse factors have been discussed, and the CCI of these schemes has been analyzed in a multicell scenario. A relay-based orthogonal frequency planning strategy has been proposed to improve cell-edge performance. Fractional frequency reuse (FFR) has been extended to MCNs as a compromise solution that reduces CCI while maintaining a sector frequency reuse factor of 1. The minimum CCI is achieved by adjusting the transmission (Tx) power at BSs and RSs under orthogonal frequency resource allocation. The essence of these works is to use part of the frequency bands, kept orthogonal, at the cell edge and the remaining frequency bands at the cell center.

2.2 PROPOSED SYSTEM:

We propose a quantitative study on adaptive resource allocation schemes by jointly considering interference coordination (IC) and load balancing (LB) in MCNs. In this paper, we focus on the downlink of OFDMA-based MCNs with time division duplex (TDD) mode, and analyze the characteristics of resource allocation according to IEEE 802.16j/m specification. We also design a novel frequency reuse scheme to mitigate interference and maintain high spectral efficiency, and provide practical LB-based handover mechanisms which can evenly distribute the traffic and guarantee users’ QoS.

We provide a quantitative study on an adaptive resource allocation scheme by jointly considering IC and LB in MCNs. We also present a novel frequency reuse scheme to mitigate interference and maintain high spectral efficiency, and propose practical LB-based handover mechanisms which can evenly distribute the traffic load and guarantee users’ QoS. Extensive simulations demonstrate that our proposed schemes provide higher throughput and accommodate more QoS-guaranteed users than conventional SCNs.

In WMNs, the frequency spectrum is shared and randomly contended by all stations, so the access scheme with the lowest overhead is optimal. In contrast, in this paper our target is a centrally controlled optimal resource allocation for OFDMA-based MCNs.

To provide analytical performance evaluation, we make two assumptions for the remainder of this paper:

1. All users have a single type of data service and thus have the same QoS requirements.

2. All cells/sectors have the same channel conditions, traffic load, and distribution of users.

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor        –  Pentium IV
  • Speed            –  1.1 GHz
  • RAM              –  256 MB (min)
  • Hard Disk        –  20 GB
  • Floppy Drive     –  1.44 MB
  • Keyboard         –  Standard Windows Keyboard
  • Mouse            –  Two or Three Button Mouse
  • Monitor          –  SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System   :  Windows XP
  • Front End          :  Microsoft Visual Studio 2008
  • Coding Language    :  C# .NET
  • Documentation      :  MS Office 2007


CHAPTER 3

3.0 SYSTEM DESIGN

Data Flow Diagram / Use Case Diagram / Flow Diagram

  • The data flow diagram (DFD), also called a bubble chart, is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The DFD is one of the most important modeling tools. It is used to model the system components: the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
  • A DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations applied as data moves from input to output.
  • A DFD may be used to represent a system at any level of abstraction, and may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

 

DATA STORE:

Here the data referenced by a process is stored and retrieved.

 

PROCESS:

People, procedures or devices that produce data. The physical component is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.

3.1 NETWORK ARCHITECTURE DIAGRAM:

3.2 DATAFLOW DIAGRAM:

UML DIAGRAMS:

3.3 USE CASE DIAGRAM:

3.4 CLASS DIAGRAM:

3.5 SEQUENCE DIAGRAM:

3.6 ACTIVITY DIAGRAM:

CHAPTER 4

4.0 IMPLEMENTATION:

JOINT INTERFERENCE COORDINATION AND LOAD BALANCING:

Since the traffic load distribution of each cell/sector affects the system performance significantly, we propose joint IC and LB (ICLB) for MCNs. The objective is to improve system throughput under the constraint of a basic coverage requirement. The cell coverage probability is defined as the percentage of the area within the cell whose received SINR is above the threshold of the most robust MCS, i.e., QPSK (1/12) modulation; the coverage probability can be estimated accordingly. In MCNs, increasing throughput implies that more users’ QoS requirements are met; therefore, system throughput is improved and more reliable service is attained. For different station types, we present two LB mechanisms to improve the system throughput.
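As a side illustration of the coverage-probability definition above, the following minimal Monte Carlo sketch in Python estimates it for an arbitrary SINR model; the threshold value, cell radius, and toy SINR model are placeholders, not values from this work:

    import math
    import random

    def coverage_probability(sinr_at, n_samples=10000, sinr_threshold_db=-6.0,
                             cell_radius=1000.0):
        """Estimate the fraction of uniformly sampled locations in a circular cell
        whose SINR (in dB) exceeds the threshold of the most robust MCS."""
        covered = 0
        for _ in range(n_samples):
            # Rejection-sample a point uniformly inside the circular cell.
            while True:
                x = random.uniform(-cell_radius, cell_radius)
                y = random.uniform(-cell_radius, cell_radius)
                if x * x + y * y <= cell_radius * cell_radius:
                    break
            if sinr_at(x, y) >= sinr_threshold_db:
                covered += 1
        return covered / n_samples

    # Toy distance-based SINR model, purely for illustration:
    toy_sinr = lambda x, y: 20.0 - 0.04 * math.hypot(x, y)
    print(coverage_probability(toy_sinr))   # roughly 0.42 with these toy numbers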

4.1 ALGORITHM:

RESOURCE SCHEDULING ALGORITHM:

For relay links, based on the allocation result of the second-hop links, slots should be assigned to the first-hop link of each RS in proportion to the aggregate data rate of its second-hop links; the resource allocation to the first-hop link via each RS ends when the first-hop data rate is greater than or equal to the aggregate second-hop data rate. The remaining slots of the RZ are assigned to BS-MS links according to (8). Considering that the assignable slots of one frame are limited, the attainable balance of slot allocation determines the ratio of RZ to AZ in the time domain in each frame. The detailed algorithm is shown in Algorithm 1.
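Purely to illustrate this allocation rule, here is a simplified Python sketch; it is not the paper’s Algorithm 1, and the data rates and slot counts below are hypothetical:

    def allocate_relay_slots(rs_second_hop_rates, first_hop_rate_per_slot, total_slots):
        """Keep granting RZ slots to each RS's first-hop (BS-RS) link until the
        first-hop rate matches or exceeds that RS's aggregate second-hop rate.

        rs_second_hop_rates     -- {rs_id: aggregate second-hop data rate}
        first_hop_rate_per_slot -- {rs_id: achievable BS-RS rate per slot}
        total_slots             -- slots available in the relay zone (RZ)
        """
        slots = {rs: 0 for rs in rs_second_hop_rates}
        remaining = total_slots
        # Serve RSs with the largest second-hop demand first.
        for rs in sorted(rs_second_hop_rates, key=rs_second_hop_rates.get, reverse=True):
            while (remaining > 0 and
                   slots[rs] * first_hop_rate_per_slot[rs] < rs_second_hop_rates[rs]):
                slots[rs] += 1
                remaining -= 1
        return slots, remaining   # leftover RZ slots go to direct BS-MS links

    print(allocate_relay_slots({"RS1": 12.0, "RS2": 6.0},
                               {"RS1": 4.0, "RS2": 3.0}, total_slots=10))
    # -> ({'RS1': 3, 'RS2': 2}, 5): RS1 needs 3 slots (3*4 >= 12), RS2 needs 2 (2*3 >= 6)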

4.2 MODULES:

SERVER CLIENT MODULE:

MULTIHOP CELLULAR:

LOAD BALANCING:

RESOURCE SCHEDULING:

OFDMA/TDD:

4.3 MODULE DESCRIPTION:

SERVER CLIENT MODULE:

Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters (clients). Often clients and servers operate over a computer network on separate hardware. A server machine is a high-performance host that runs one or more server programs which share their resources with clients. A client does not share any of its resources; clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.
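The project itself is implemented in C# .NET; purely to illustrate the client/server roles described above, here is a minimal, self-contained Python sketch in which a server thread answers a single request from a client (the loopback host, OS-chosen port, and echo payload are arbitrary choices for the example):

    import socket
    import threading

    def handle_one_client(server_sock):
        """Server side: await (listen for) a request, then answer it."""
        conn, _ = server_sock.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)

    # The server binds and listens first, sharing its resource (the echo service).
    server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_sock.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    server_sock.listen(1)
    host, port = server_sock.getsockname()
    threading.Thread(target=handle_one_client, args=(server_sock,), daemon=True).start()

    # The client initiates the communication session with the server.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect((host, port))
        client.sendall(b"hello")
        print(client.recv(1024).decode())       # -> "echo: hello"

    server_sock.close()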

MULTIHOP CELLULAR NETWORKS:

A multihop cellular network (MCN) is an architecture proposed for wireless communication. MCNs combine the benefits of having a fixed infrastructure of base stations with the flexibility of ad hoc networks. They are capable of achieving much higher throughput than current cellular systems, which can be classified as single-hop cellular networks (SCNs). This work concentrates on MCNs and SCNs using the IEEE 802.11 standard for wireless LANs.

We provide a general overview of the architecture and the issues involved in the design of MCNs, in particular the challenges to be met in the design of a routing protocol. We extend the work of Lin and Hsu to enhance the throughput of such networks further.

We propose a routing protocol for use in such networks. We conduct extensive experimental studies on the performance of MCNs and SCNs under various load conditions (both TCP and UDP). These studies clearly indicate that MCNs with the proposed routing protocol are a viable alternative to SCNs; in fact, they provide much higher throughput.

LOAD BALANCING NETWORKS:

Wireless sensor networks have received increasing attention due to their many military and civil applications. Sensors are constrained in onboard energy supply and are often left unattended. The energy, size, and cost constraints of such sensors limit their communication range. Therefore, they require multi-hop wireless connectivity to forward data on their behalf to a remote command site.

We evaluate the performance of an algorithm that organizes these sensors into well-defined clusters, with less energy-constrained gateway nodes acting as cluster heads, and balances the load among these gateways. Load-balanced clustering increases system stability and improves communication between the different nodes in the system. To evaluate the efficiency of our approach, we have studied the performance of sensor networks under various routing protocols.

Simulation results show that, irrespective of the routing protocol used, our approach improves the lifetime of the system. Overloaded hot-spot areas, in contrast, may fail to meet the users’ quality of service (QoS) requirements, which is another major reason for system performance degradation. To guarantee users’ QoS, therefore, load balancing (LB) should be adopted along with IC for MCNs.

RESOURCE SCHEDULING:

Since resource scheduling can further improve system performance, we extend the proportional fair (PF) algorithm to MCNs in this section. Besides the PF algorithm, two other classical scheduling algorithms, round robin (RR) and maximum SINR (MaxSINR), are often applied to cellular networks. In the RR algorithm, slots are allocated to the users in the cell coverage area in turn, which appears absolutely fair. Nonetheless, it is not efficient, since differences in the slot efficiency of users are not taken into consideration.

In the MaxSINR algorithm, slots are allocated to the user with the highest SINR at each scheduling instant, which maximizes the system throughput but is not fair, since users with low slot efficiency are not guaranteed to obtain slots. The PF algorithm, which has been investigated in the literature on scheduling in SCNs, provides an efficient throughput-fairness tradeoff. In MCNs, the BS is responsible for gathering link information and allocating the available resources to the corresponding links according to the PF algorithm.
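As an illustration of the PF metric described above, here is a simplified Python sketch with an assumed averaging factor and hypothetical user rates; it is not the scheduler used in our simulations:

    def proportional_fair_schedule(instant_rates, avg_throughput, alpha=0.1):
        """Select one user for the current slot using the PF rule.

        instant_rates  -- {user: achievable rate in the current slot}
        avg_throughput -- {user: exponentially averaged past throughput}
        Returns the scheduled user and the updated averages.
        """
        # PF metric: instantaneous rate divided by long-term average throughput.
        chosen = max(instant_rates,
                     key=lambda u: instant_rates[u] / max(avg_throughput[u], 1e-9))
        for u in avg_throughput:
            served = instant_rates[u] if u == chosen else 0.0
            avg_throughput[u] = (1 - alpha) * avg_throughput[u] + alpha * served
        return chosen, avg_throughput

    rates = {"MS1": 2.0, "MS2": 1.0}
    avg = {"MS1": 1.0, "MS2": 0.2}
    print(proportional_fair_schedule(rates, avg)[0])   # -> "MS2" (2.0/1.0 < 1.0/0.2)
    # RR would cycle over users regardless of rate; MaxSINR would always pick the
    # user with the highest instantaneous rate; PF trades off between the two.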

OFDMA/TDD NETWORKS:

The future wireless cellular networks, such as 3GPP Long Term Evolution-Advanced (LTE-Advanced) and IEEE 802.16m systems, will adopt orthogonal frequency division multiple access (OFDMA) technology for multihop cellular networks (MCNs). OFDMA is regarded as the most promising physical layer technology for fourth generation (4G) wireless networks. New relay strategies and technologies are proposed to provide services with extended coverage and higher data rates.

OFDMA systems should employ frequency planning for better cell-edge performance and ease of interference management. Traditional single-hop cellular networks (SCNs) typically employ a frequency reuse pattern with factor 3 or 7 to reduce CCI, which results in low spectral efficiency. A high data rate is one of the desired features of future cellular networks, and it requires highly efficient utilization of the available spectrum. Frequency reuse with factor 1 is likely to be used in LTE-Advanced and IEEE 802.16m systems, aiming at improving the spectral efficiency.

A time division duplex (TDD) frame consists of downlink and uplink subframes. Each subframe is subsequently divided into two time zones, named the relay zone (RZ) and the access zone (AZ), respectively. The RZ is dedicated to BS transmissions toward both RSs and MSs, while the AZ is dedicated to reception by MSs from the BS or the RSs. Assuming each RS receives data for relaying in the RZ of the current frame, it is scheduled to transmit that data in the AZ and empty its buffer in the next frame. In each subframe, the frequency domain consists of subchannels and the time domain consists of slots. A slot in a subchannel is the minimum frequency-time resource unit in the TDD relay frame structure for MCNs.

Additionally, in WMNs the frequency spectrum is shared and randomly contended by all stations, so the access scheme with the lowest overhead is optimal. In contrast, in this paper our target is a centrally controlled optimal resource allocation for OFDMA-based MCNs.

To provide analytical performance evaluation, we make two assumptions for the remainder of this paper:

1. All users have a single type of data service and thus have the same QoS requirements.

2. All cells/sectors have the same channel conditions, traffic load, and distribution of users.

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements of the system is essential.

Three key considerations involved in the feasibility analysis are:

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY         

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, and the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

5.1.2 TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, since only minimal or no changes are required for implementing this system.

5.1.3 SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods that are employed to educate the users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.

5.2 SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.

TYPES OF TESTS

5.2.1 UNIT TESTING

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
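As a hedged illustration of the kind of unit test described above, the following C# sketch uses the NUnit framework (a framework choice assumed here, since the document does not name one) to validate one path of a toy component with clearly defined inputs and expected results.

    // Hypothetical unit test sketch using NUnit (framework choice assumed, not stated in this document).
    using NUnit.Framework;

    // A toy component standing in for one unit of the application.
    public class DiscountCalculator
    {
        public decimal Apply(decimal amount, int percent)
        {
            if (percent < 0 || percent > 100)
                throw new System.ArgumentOutOfRangeException("percent");
            return amount - (amount * percent / 100m);
        }
    }

    [TestFixture]
    public class DiscountCalculatorTests
    {
        [Test]
        public void Apply_TenPercent_ReducesAmount()
        {
            var calc = new DiscountCalculator();
            Assert.AreEqual(90m, calc.Apply(100m, 10)); // documented expected result
        }

        [Test]
        public void Apply_InvalidPercent_Throws()
        {
            var calc = new DiscountCalculator();
            Assert.Throws<System.ArgumentOutOfRangeException>(() => calc.Apply(100m, 150));
        }
    }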

5.2.2 INTEGRATION TESTING

Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

5.2.3 FUNCTIONAL TEST

Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.

Invalid Input: identified classes of invalid input must be rejected.

Functions: identified functions must be exercised.

Output: identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

5.2.4 SYSTEM TEST

System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

5.2.5 WHITE BOX TESTING

White box testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

5.2.6 BLACK BOX TESTING

Black box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.

5.3 UNIT TESTING:

Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.

Test strategy and approach

Field testing will be performed manually and functional tests will be written in detail.

Test objectives

  • All field entries must work properly.
  • Pages must be activated from the identified link.
  • The entry screen, messages and responses must not be delayed.

Features to be tested

  • Verify that the entries are of the correct format
  • No duplicate entries should be allowed
  • All links should take the user to the correct page.

 

5.4 INTEGRATION TESTING

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications (e.g., components in a software system or, one step up, software applications at the company level) interact without error.

Test Results:

All the test cases mentioned above passed successfully. No defects encountered.

5.5 ACCEPTANCE TESTING

User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Test Results:

All the test cases mentioned above passed successfully. No defects encountered.

CHAPTER 6

6.0 SOFTWARE ENVIRONMENT

6.1 FEATURES OF .NET

Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There is no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic, and JScript.

 The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.

“.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET Server, for instance) and services (like Passport, .NET My Services, and so on).

6.2 THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. The most important features are

  • Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
  • Memory management, notably including garbage collection.
  • Checking and enforcing security restrictions on the running code.
  • Loading and executing programs, with version control and other such features.
The following features of the .NET framework are also worth describing:

Managed Code

The code that targets .NET, and which contains certain extra information (“metadata”) to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.

Managed Data

With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by default, such as C#, Visual Basic.NET, and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you are using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications: data that does not get garbage collected but instead is looked after by unmanaged code.

Common Type System

The CLR uses something called the Common Type System (CTS) to strictly enforce type safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code does not attempt to access memory that has not been allocated to it.

Common Language Specification

The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.

6.3 THE CLASS LIBRARY

.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
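As a small illustration of the value types mentioned above, the following C# sketch (illustrative only) shows a value type being converted to an object type (boxing) and back again:

    // Illustrative sketch: value types and their conversion to object types (boxing/unboxing).
    using System;

    class ValueTypeDemo
    {
        static void Main()
        {
            int count = 42;            // a value type (System.Int32), typically stack-allocated here
            object boxed = count;      // boxing: the value is copied into an object on the managed heap
            int unboxed = (int)boxed;  // unboxing: the value is copied back into a value-type variable

            Console.WriteLine("{0} {1} {2}", count, boxed, unboxed);
            Console.WriteLine(boxed.GetType()); // prints System.Int32; everything derives from System.Object
        }
    }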

The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.

6.4 LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.

Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic .NET now also supports structured exception handling, custom attributes, and multithreading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.

Active State has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for Active State’s Perl Dev Kit.

Other languages for which .NET compilers are available include

  • FORTRAN
  • COBOL
  • Eiffel          
Fig. 1. The .NET Framework: ASP.NET, XML Web Services, and Windows Forms layered over the Base Class Libraries, the Common Language Runtime, and the Operating System.

C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.

CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET, a Finalize procedure (finalizer) is available. The Finalize procedure is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the Finalize procedure can be called only from the class it belongs to or from derived classes.
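A minimal C# sketch of these ideas is given below; the class and the file resource are hypothetical. The constructor initializes the object, and the finalizer (the C# counterpart of the Finalize procedure) releases the resource when the object is destroyed. In production code the IDisposable pattern would normally complement such a finalizer, but the sketch keeps only the elements discussed above.

    // Hypothetical sketch of a constructor and a finalizer (destructor) in C#.
    using System;
    using System.IO;

    class LogWriter
    {
        private StreamWriter writer;

        // Constructor: initializes the object and allocates its resources.
        public LogWriter(string path)
        {
            writer = new StreamWriter(path);
        }

        public void Write(string message)
        {
            writer.WriteLine(message);
        }

        // Finalizer (destructor): called automatically when the object is destroyed,
        // used here to release the file handle held by the object.
        ~LogWriter()
        {
            if (writer != null)
            {
                writer.Close();
            }
        }
    }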

GARBAGE COLLECTION

  Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use.

In C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.

OVERLOADING

Overloading is another feature of C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
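The following short C# sketch (class and method names invented for illustration) shows overloading applied to both a constructor and a method; the compiler selects the version whose parameter list matches the arguments.

    // Illustrative sketch of method and constructor overloading in C#.
    using System;

    class Notifier
    {
        // Overloaded constructors: same name, different argument lists.
        public Notifier() : this("default-channel") { }
        public Notifier(string channel) { Channel = channel; }

        public string Channel { get; private set; }

        // Overloaded methods: the compiler picks the version matching the arguments.
        public void Send(string message)            { Console.WriteLine("{0}: {1}", Channel, message); }
        public void Send(string message, int times) { for (int i = 0; i < times; i++) Send(message); }
    }

    class Program
    {
        static void Main()
        {
            var n = new Notifier("alerts");
            n.Send("disk almost full");
            n.Send("retrying", 3);
        }
    }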

MULTITHREADING:

C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
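A minimal C# sketch of this idea, with invented task names, is shown below: a background thread handles a slow task while the main thread remains free to respond to the user.

    // Minimal multithreading sketch in C#: two tasks handled simultaneously on separate threads.
    using System;
    using System.Threading;

    class ThreadingDemo
    {
        static void LoadData()
        {
            Thread.Sleep(500); // simulate a slow background task (e.g., fetching records)
            Console.WriteLine("Background data load finished.");
        }

        static void Main()
        {
            Thread worker = new Thread(LoadData);
            worker.Start();                       // runs LoadData without blocking the main thread

            Console.WriteLine("UI stays responsive while data loads...");
            worker.Join();                        // wait for the background work before exiting
        }
    }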

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use Try...Catch...Finally statements to create exception handlers. Using Try...Catch...Finally statements, we can create robust and effective exception handlers to improve the performance of our application.
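As a small illustrative sketch (the file name is invented), the Try...Catch...Finally structure described above looks like this in C#:

    // Sketch of structured exception handling with Try...Catch...Finally in C#.
    using System;
    using System.IO;

    class ExceptionDemo
    {
        static void Main()
        {
            StreamReader reader = null;
            try
            {
                reader = new StreamReader("settings.txt"); // hypothetical file name
                Console.WriteLine(reader.ReadLine());
            }
            catch (FileNotFoundException ex)
            {
                // Handle the specific error detected at runtime.
                Console.WriteLine("Configuration file missing: " + ex.FileName);
            }
            finally
            {
                // Always executed: release resources whether or not an exception occurred.
                if (reader != null) reader.Close();
            }
        }
    }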

6.5 THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.

OBJECTIVES OF .NET FRAMEWORK

1. To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment and guarantees the safe execution of code.

3. To eliminate performance problems.

There are different types of application, such as Windows-based applications and Web-based applications. 

6.6 FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services.

A SQL-SERVER database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

TABLE:

A table is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view. We can specify what kind of data the table will hold.

Datasheet View

To add, edit, or analyze the data itself, we work in the table's datasheet view mode.

QUERY:

A query is a question that is asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time we run the query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
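As a hedged illustration of running such a query from the C#/SQL Server environment described in this chapter, the following ADO.NET sketch executes a parameterized SELECT and reads the latest results; the connection string, table, and column names are assumptions for illustration only.

    // Hypothetical ADO.NET sketch: running a query against SQL Server and reading the results.
    using System;
    using System.Data.SqlClient;

    class QueryDemo
    {
        static void Main()
        {
            // Connection string and table/column names are assumptions for illustration only.
            string connectionString = "Data Source=.;Initial Catalog=ProjectDb;Integrated Security=True";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(
                "SELECT Id, Name FROM Records WHERE Status = @status", connection))
            {
                command.Parameters.AddWithValue("@status", "Active");
                connection.Open();

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0} - {1}", reader["Id"], reader["Name"]);
                    }
                }
            }
        }
    }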

CHAPTER 8

8.0 CONCLUSION:

In this paper, we have carried out a quantitative study on an adaptive resource allocation scheme based on interference coordination and load balancing for multihop cellular networks. We also propose a novel frequency reuse scheme to mitigate interference and maintain high spectral efficiency, and present practical LB-based handover mechanisms which can evenly distribute the traffic load and guarantee users’ quality of service.

Simulations demonstrate that our scheme not only meets the requirement on coverage probability, but also improves the sector throughput and accommodates more users. To the best of our knowledge, this is the first work to provide dynamic resource allocation by jointly considering interference coordination and load balancing for MCNs. We expect that our method will play a significant role in network planning and resource allocation in the future MCNs.

CHAPTER 9

9.0 REFERENCES:

[1] M. Salem, A. Adinoyi, H. Yanikomeroglu, and D. Falconer, “Opportunities and Challenges in OFDMA-Based Cellular Relay Networks: A Radio Resource Management Perspective,” IEEE Trans. Vehicular Technology, vol. 59, no. 5, pp. 2496-2510, Jan. 2010.

[2] Y. Zhao, X. Fang, and Z. Zhao, “Interference Coordination in Compact Frequency Reuse for Multihop Cellular Networks,” IEICE Trans. Fundamentals of Electronics, Comm. and Computer Sciences, vol. E93-A, no. 11, pp. 2312-2319, Nov. 2010.

[3] Third Generation Partnership Project, “Technical Specification Group Radio Access Network; Physical Layer Aspects for Evolved Universal Terrestrial Radio Access (UTRA) (Release 7),” 3GPP Technical Report 25.814 v7.1.0, Sept. 2006.


Designing an Architecture for Monitoring Patients at Home: Ontologies and Web Services for Clinical and Technical Management Integration

This paper presents the design and implementation of an architecture based on the combination of ontologies, rules, web services, and the autonomic computing paradigm to manage data in home-based telemonitoring scenarios.

The architecture includes two layers: 1) a conceptual layer and 2) a data and communication layer. On the one hand, the conceptual layer based on ontologies is proposed to unify the management procedure and integrate incoming data from all the sources involved in the telemonitoring process. On the other hand, the data and communication layer based on REST web service (WS) technologies is proposed to provide practical backup to the use of the ontology, to provide a real implementation of the tasks it describes and thus to provide a means of exchanging data (support communication tasks).

A study regarding chronic obstructive pulmonary disease (COPD) data management is presented in order to evaluate the efficiency of the architecture. The proposed ontology-based solution defines a flexible and scalable architecture in order to address the main challenges presented in home-based telemonitoring scenarios, and thus provides a means to integrate, unify, and transfer data supporting both clinical and technical management tasks.

1.2 INTRODUCTION

Patient empowerment is considered a philosophy of health care based on the perspective that better outcomes are achieved when patients become active participants in their own health management. This new paradigm is a central idea in the European Union (EU) health strategy, supported by international health organizations including the World Health Organization, and its effectiveness in yielding quality of care is an obvious and essential area of research. This new idea invites us to look for new ways of providing healthcare, e.g., by using information and communication technologies. In this context, home-based telemonitoring systems can be used as self-care management tools, while collaborative processes among healthcare personnel and patients are maintained, so that safe control of the patient is guaranteed. Telemonitoring systems face the problem of delivering medicine to the currently growing population with chronic conditions while at the same time covering the dimensions of quality of care and supporting new paradigms such as empowerment.

By periodically collecting patients' own clinical data (at their home sites) and transferring them to physicians located at remote sites, supervision of the patient's health status and feedback provision are possible. This type of telemedicine system guarantees patient control while reducing costs and avoiding hospital overflows. These two sites (the home site and the healthcare site) comprise a typical home-based telemonitoring system. At the home site, data acquired by using medical devices (MDs), together with the patient's feedback, are collected in a concentrator device (HG) used to evaluate and/or transfer the acquired data outside the patient's home if necessary. At the healthcare site, a server device is used to manage information from the home site as well as to manage and store the patient's monitoring guidelines defined by physicians (the telemonitoring server, TS). In fact, this telemonitoring process, and consequently the evolution of the patient's health status, is managed through the indications or monitoring guidelines provided by physicians.

Although significant contributions have been made in this field in recent decades, telemedicine, and e-health scenarios in general, still pose numerous challenges that need to be addressed by researchers in order to take maximum advantage of the benefits that these systems provide and to support their long-term implementation. Interoperability and integration are critical challenges that also need to be addressed when developing monitoring systems in order to provide effective healthcare and to make seamless communication possible among the different heterogeneous health entities that participate in the monitoring process. This integration should be addressed at both end sites of the scenario but also in the communication link, thus integrating the way of transferring and exchanging information efficiently between them.

Providing personalized care services and taking into account the patient's context have been identified as additional requirements. Furthermore, apart from clinical data aspects, technical issues should also be addressed in this scenario. Technical management of all the devices that comprise the telemonitoring scenario (e.g., the MDs and the HG) is an important task that may or may not be integrated under the same architecture as clinical management. Hence, at this technical level, research is still required to address these challenges. Consequently, there is a need for the development of new telemonitoring architectures.

Great efforts have been made in recent years in developing standards to deal with interoperability at different points of the e-health communication infrastructure, such as the ISO/IEEE 11073 (X73) standard for MD interoperability, the openEHR initiative for storage, management, and retrieval of electronic health record (EHR) information, or the standardized Health Level Seven (HL7) messages to solve clinical data transfers. Nevertheless, additional efforts are required to enable them to work together and ultimately provide a higher level of integration.

Specifically, in this telemonitoring scenario, there is no unique standard-based solution to address data and management integration. Since several standards can be used (some of them in combination with proprietary protocols or other standards) at different points of this scenario, the interoperability problem remains unsolved unless these standards merge into one, or alignments and combinations of them are made. According to Berges et al., interoperability does not mean having a unique representation but rather a semantically acknowledged equivalent one. That is the reason to propose in this study an ontology-based architecture in order to provide common knowledge about the exchanged data and the management of such data. This ontology constitutes that semantically equivalent knowledge model. Then, at both ends of the architecture, other standards could be used for other management purposes by relating this model to the specific desired approach. Using this alternative, a knowledge model is provided that avoids aligning models two by two, since all of them are related through the main ontology.

Ontology-based solutions have become popular over the past few years. Ontologies provide a higher level of abstraction and have been successfully used in telemonitoring scenarios and other areas to provide knowledge representation and semantic integration, and thus a common understanding of the data exchanged by all the entities. Furthermore, their combination with rules allows personalized management services, and thus personalized care, to be provided. Although there are works that describe the details of an ontology approach in this domain, they do not devote much attention to the architecture implementation and the communication used to exchange the information described. Consequently, few works have given details about the practical implementation of an ontology-based system, which may be of interest for the development of other ontology-based applications in and outside the e-health domain.

This paper presents an ontology-driven architecture to integrate data management and enable its communication in a telemonitoring scenario. The proposed architecture includes two layers: the conceptual layer (the ontology) and the communication and data layer. The conceptual layer uses the HOTMES ontology and its extensions introduced earlier; specifically, the OWL-DL language was selected to define this ontology model. The second layer is based on WS technologies. WSs have been successfully used in network management and also in other works to exchange data modeled by ontologies. However, our proposal, inspired by the representational state transfer (REST) style and based on a generic communication method, provides a different design approach that may be reusable for other systems based on ontologies. Furthermore, security issues have been considered. The aim is to define a flexible and scalable architecture in order to address the main challenges presented in home-based telemonitoring scenarios and thus provide a means to integrate and transfer data supporting both clinical and technical data management.

1.3 LITERATURE SURVEY

AUTHOR AND PUBLICATION: JD. Trigo, I. Mart´ınez, A. Alesanco, A. Kollmann, J. Escayola, D. Hayn, G. Schreier, and J. Garc´ıa, “AN INTEGRATED HEALTHCARE INFORMATION SYSTEM FOR END-TO-END STANDARDIZED EXCHANGE AND HOMOGENEOUS MANAGEMENT OF DIGITAL ECG FORMATS,” IEEE Trans. Inf. Technol. Biomed., vol. 16, no. 4, pp. 518–529, Jul. 2012.

EXPLANATION:

This paper investigates the application of the enterprise information system (EIS) paradigm to standardized cardiovascular condition monitoring. There are many specifications in cardiology, particularly in the ECG standardization arena. The existence of ECG formats, however, does not guarantee the implementation of homogeneous, standardized solutions for ECG management. In fact, hospital management services need to cope with various ECG formats and, moreover, several different visualization applications. This heterogeneity hampers the normalization of integrated, standardized healthcare information systems, hence the need for finding an appropriate combination of ECG formats and suitable EIS-based software architecture that enables standardized exchange and homogeneous management of ECG formats. Determining such a combination is one objective of this paper.

We develop the integrated healthcare information system that satisfies the requirements posed by the previous determination. The ECG formats selected include ISO/IEEE11073, Standard Communications Protocol for Computer-Assisted Electrocardiography, and an ECG ontology. The EIS-enabling techniques and technologies selected include web services, simple object access protocol, extensible markup language, or business process execution language. Such a selection ensures the standardized exchange of ECGs within, or across, healthcare information systems while providing modularity and accessibility.

AUTHOR AND PUBLICATION: D. Ria˜no, F. Real, J. A. L´opez-Vallverd´u, F. Campana, S. Ercolani, P. Mecocci, R. Annicchiarico, and C. Caltagirone, “AN ONTOLOGY-BASED PERSONALIZATION OF HEALTH-CARE KNOWLEDGE TO SUPPORT CLINICAL DECISIONS FOR CHRONICALLY ILL PATIENTS,” J. Biomed. Informat., vol. 45, no. 3, pp. 429–446, 2012.

EXPLANATION:

Chronically ill patients are complex health care cases that require the coordinated interaction of multiple professionals. A correct intervention of these sort of patients entails the accurate analysis of the conditions of each concrete patient and the adaptation of evidence-based standard intervention plans to these conditions. There are some other clinical circumstances such as wrong diagnoses, unobserved comorbidities, missing information, unobserved related diseases or prevention, whose detection depends on the capacities of deduction of the professionals involved. In this paper, we introduce ontology for the care of chronically ill patients and implement two personalization processes and a decision support tool. The first personalization process adapts the contents of the ontology to the particularities observed in the health-care record of a given concrete patient, automatically providing a personalized ontology containing only the clinical information that is relevant for health-care professionals to manage that patient. The second personalization process uses the personalized ontology of a patient to automatically transform intervention plans describing health-care general treatments into individual intervention plans. For comorbid patients, this process concludes with the semi-automatic integration of several individual plans into a single personalized plan. Finally, the ontology is also used as the knowledge base of a decision support tool that helps health-care professionals to detect anomalous circumstances such as wrong diagnoses, unobserved comorbidities, missing information, unobserved related diseases, or preventive actions. Seven health-care centers participating in the K4CARE project, together with the group SAGESA and the Local Health System in the town of Pollenza have served as the validation platform for these two processes and tool. Health-care professionals participating in the evaluation agree about the average quality 84% (5.9/7.0) and utility 90% (6.3/7.0) of the tools and also about the correct reasoning of the decision support tool, according to clinical standards.

AUTHOR AND PUBLICATION: I.Berges, J. Bermudez, and A. Illarramendi, “TOWARDS SEMANTIC INTEROPERABILITY OF ELECTRONIC HEALTH RECORDS,” IEEE Trans. Inf. Technol. Biomed., vol. 16, no. 3, pp. 424–431, May 2012.

EXPLANATION:

Although the goal of achieving semantic interoperability of electronic health records (EHRs) is pursued by many researchers, it has not been accomplished yet. In this paper, we present a proposal that smoothes out the way toward the achievement of that goal. In particular, our study focuses on medical diagnoses statements. In summary, the main contributions of our ontology-based proposal are the following: first, it includes a canonical ontology whose EHR-related terms focus on semantic aspects. As a result, their descriptions are independent of languages and technology aspects used in different organizations to represent EHRs. Moreover, those terms are related to their corresponding codes in well-known medical terminologies. Second, it deals with modules that allow obtaining rich ontological representations of EHR information managed by proprietary models of health information systems. The features of one specific module are shown as reference. Third, it considers the necessary mapping axioms between ontological terms enhanced with so-called path mappings. This feature smoothes out structural differences between heterogeneous EHR representations, allowing proper alignment of information.

AUTHOR AND PUBLICATION: N. Lasierra,A.Alesanco, J.Garc´ıa, andD.O’Sullivan, “DATA MANAGEMENT IN HOME SCENARIOS USING AN AUTONOMIC ONTOLOGY-BASED APPROACH,” in Proc. of the 9th IEEE Int. Conf. Pervasive Workshop on Manag. Ubiquitous Commun. Services part of PerCom, 2012, pp. 94–99.

EXPLANATION:

An ontology-based approach to deal with data and management procedure integration in home-based scenarios is presented in this paper. The proposed ontology not only provides a means to represent exchanged data but also to unify the way of accessing, controlling, evaluating and transferring information remotely. The structure of this ontology has been inspired by the autonomic computing paradigm, thus it describes the tasks that comprise the MAPE (Monitor, Analyze, Plan and Execute) process. Furthermore the use of SPARQL (Simple Protocol and RDF Query Language) is proposed in this paper to express conditions and rules that determine the performance of these tasks according to each situation. Finally two practical application cases of the proposed ontology-based approach are presented.

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

Telemonitoring systems face the problem of delivering medicine to the currently growing population with chronic conditions while at the same time covering the dimensions of quality of care and supporting new paradigms such as empowerment. By periodically collecting patients' own clinical data (at their home sites) and transferring them to physicians located at remote sites, supervision of the patient's health status and feedback provision are possible.

This type of telemedicine system guarantees patient control while reducing costs and avoiding hospital overflows. These two sites (the home site and the healthcare site) comprise a typical home-based telemonitoring system. At the home site, data acquired by using MDs, together with the patient's feedback, are collected in a concentrator device (HG) used to evaluate and/or transfer the acquired data outside the patient's home if necessary.

2.1.1 DISADVANTAGES:

  • Existing models for chronic diseases pose several technology-oriented challenges for home-based care, where assistance services rely on a close collaboration among different stakeholders, such as health operators, patient relatives, and social community members.
  • An ontology-based context model and a related context management system provide a configurable and extensible service-oriented framework to ease the development of applications for monitoring and handling patient chronic conditions.
  • The system has been developed in a prototypal version, and integrated with a service platform for supporting operators of home-based care networks in cooperating and sharing patient-related information and coordinating mutual interventions for handling critical and alarm situations.


2.2 PROPOSED SYSTEM:

We present an ontology-driven architecture to integrate data management and enable its communication in a telemonitoring scenario. It enables not only the integration of the patient's clinical data management but also the technical data management of all devices that are included in the scenario. The proposed architecture includes two layers: the conceptual layer (the ontology) and the communication and data layer.

The conceptual layer uses the HOTMES ontology and its extensions; specifically, the OWL-DL language was selected to define this ontology model. The second layer is based on WS technologies. WSs have been successfully used in network management and also in other works to exchange data modeled by ontologies. Our proposal, inspired by the representational state transfer (REST) style and based on a generic communication method, provides a different design approach that may be reusable for other systems based on ontologies.

Furthermore, security issues have been considered. The aim is to define a flexible and scalable architecture in order to address main challenges presented in home-based telemonitoring scenarios and thus provide a means to integrate and transfer data supporting both clinical and technical data management.

2.2.1 ADVANTAGES:

Ontologies provide a higher level of abstraction and have been successfully used in telemonitoring scenarios and other areas to provide knowledge representation and semantic integration, and thus a common understanding of the data exchanged by all the entities. Furthermore, their combination with rules allows personalized management services, and thus personalized care, to be provided.

While other works describe the details of an ontology approach in this domain, they do not devote much attention to the architecture implementation and the communication used to exchange the information described; our work addresses both aspects.

Our implementation of the ontology-based system may be of interest for the development of other ontology-based applications in and outside the e-health domain. The conceptual layer uses the ontology for interpreting the data transferred in the communication between the end sources of the architecture, while the data and communication layer deals with data management and transmission.

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor        –  Pentium IV
  • Speed            –  1.1 GHz
  • RAM              –  256 MB (min)
  • Hard Disk        –  20 GB
  • Floppy Drive     –  1.44 MB
  • Keyboard         –  Standard Windows Keyboard
  • Mouse            –  Two- or Three-Button Mouse
  • Monitor          –  SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Win7
  • Front End                                :           Microsoft Visual Studio .NET
  • Back End                                :           MSSQL Server
  • Server                                      :           ASP .NET Web Server
  • Script                                       :           C# Script
  • Document                               :           MS-Office 2007

CHAPTER 3

3.0 SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

  • The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
  • DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • DFD is also known as bubble chart. A DFD may be used to represent a system at any level of abstraction. DFD may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

DATA SOURCE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures, or devices that produce data. The physical component is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.


3.1 ARCHITECTURE DIAGRAM

3.2 DATAFLOW DIAGRAM

UML DIAGRAMS:

3.3 USE CASE DIAGRAM:


3.4 CLASS DIAGRAM:


3.5 SEQUENCE DIAGRAM:


3.6 ACTIVITY DIAGRAM:

CHAPTER 4

4.0 IMPLEMENTATION:

ONTOLOGIES:

According to one of the most widely accepted definitions of ontologies in computer science, ontology can be described as “an explicit and formal specification of a shared conceptualization”.  In simple words, ontologies represent concepts and basic relationships for the purpose of comprehension of a common knowledge area. To develop an ontology means to formalize a common view of a certain domain.

1) OWL Language: In computer science, there are plenty of formal languages that can be used to define and construct ontologies. These languages allow the knowledge contained in an ontology to be encoded in a simple and formal way. However, the standardized RDF and OWL have been gaining popularity in the semantic web world. An ontology can be formally described in OWL using the following basic elements: 1) classes; 2) individuals; and 3) properties. These elements are used to describe concepts, instances or members of a class, and relationships between individuals of two classes (object properties), or to link individuals with datatype values (datatype properties). Apart from these basic elements, OWL provides class descriptors used to precisely describe OWL classes, which include property restrictions (value and cardinality constraints), class axioms, property axioms, and properties over individuals.
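To make these elements concrete, the sketch below encodes one class, one individual, an object property, and a datatype property as a small ontology fragment in Turtle, embedded in a C# string and loaded with the open-source dotNetRDF library. The library choice, namespace, and entity names are assumptions for illustration; the paper itself does not prescribe them.

    // Hypothetical sketch: OWL classes, individuals and properties expressed in Turtle and
    // loaded with dotNetRDF (library choice is an assumption, not taken from the paper).
    using System;
    using VDS.RDF;
    using VDS.RDF.Parsing;

    class OwlElementsDemo
    {
        static void Main()
        {
            // One class (ex:Patient), one individual (ex:patient01), an object property
            // (ex:usesDevice) and a datatype property (ex:hasAge). All names are invented.
            string turtle = @"
                @prefix ex:  <http://example.org/telemonitoring#> .
                @prefix owl: <http://www.w3.org/2002/07/owl#> .
                @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

                ex:Patient    a owl:Class .
                ex:usesDevice a owl:ObjectProperty .
                ex:hasAge     a owl:DatatypeProperty .

                ex:patient01 a ex:Patient ;
                             ex:usesDevice ex:pulseOximeter01 ;
                             ex:hasAge ""67""^^xsd:int .";

            Graph g = new Graph();
            StringParser.Parse(g, turtle, new TurtleParser());

            foreach (Triple t in g.Triples)
            {
                Console.WriteLine(t.ToString()); // list every asserted statement
            }
        }
    }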

2) Rules: Generally, ontology-based solutions combine knowledge presented in ontologies with dynamic knowledge presented by the use of rules. A system based on the use of rules usually contains a set of if-then rules (which indicate what should be done according to a situation) and a rule engine used to apply them. By using rules, the behavior of individuals can be expressed inside a domain. Hence, they can be used to generate new knowledge and can also be used to provide personalized services. One of the most popular languages for rules definition is SWRL.

However, in our study, we used SPARQL to define some rules. Although SPARQL is a query language, it can be used as a rule language by combining the CONSTRUCT clause with FILTER restrictions. On the one hand, the CONSTRUCT query form returns a single RDF graph built from the results of matching the graph pattern of the query and applying the specified graph template. On the other hand, the FILTER clause can be used to restrict solutions to those for which the filter expression evaluates to TRUE. Only if the filter function evaluates to true is the solution included in the solution sequence. Note that although this language was good enough for our purpose, its limitations should be studied for other purposes (e.g., recursive tasks), and the adequacy of SWRL could be studied for complex applications.
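The following hedged sketch shows the general shape of such a rule: a SPARQL CONSTRUCT query with a FILTER restriction, held as a C# string constant. The property names and the threshold are invented for illustration, and the query would be executed by whatever SPARQL engine the rules module relies on.

    // Hypothetical SPARQL rule: CONSTRUCT builds new knowledge (an alarm flag on a measurement)
    // only for solutions that pass the FILTER restriction. Names and threshold are invented.
    static class SparqlRuleExample
    {
        public const string LowSpO2Rule = @"
            PREFIX ex: <http://example.org/telemonitoring#>
            CONSTRUCT {
                ?measurement ex:triggersAlarm true .
            }
            WHERE {
                ?measurement ex:hasSpO2Value ?value .
                FILTER (?value < 90)    # the rule fires only when the filter evaluates to true
            }";
    }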

WEB SERVICES

Web services are used in this study as software technology to access and exchange information modeled by the ontology. According to the W3C, a WS is a “software system designed to support interoperable machine-to-machine interaction over a communication network”. Systems may interact with the web services by exchanging SOAP messages serialized in XML for its message format and sent over other application layer protocols, usually HTTP. Although SOAP-based web services are the most popular types of WSs, there are other styles of programming a WS such as the REST style.

1) REST Style for Designing Web Services: REST is a style of software architecture for distributed hypermedia systems such as the World Wide Web, first defined in 2000 by Fielding. This style is based on the idea of transferring representations of resources, a resource being any item of interest. Key advantages of the REST architecture are the scalability of components and the generality of interfaces. Although REST was initially described in the context of HTTP, this paradigm can be applied to other protocols or implementations. Web services can also be described using this style. A WS implemented using HTTP and the principles of the REST architecture is designated a REST(ful) WS. Requests made from the client and responses from the WS are used to transfer resource information. Each resource is identified through a URI. Stateless behavior, data representation using XML and/or JSON, and the explicit use of HTTP methods (PUT, GET, POST, DELETE) to exchange resources are the key characteristics of a REST(ful) WS.

4.1 MODULES:

MANAGEMENT PROFILE:

DATA AND COMMUNICATION LAYER:

HG AND TS MANAGEMENT MODULES:

COMMUNICATION FLOW AND WORKFLOW:

4.3 MODULE DESCRIPTION:

CLINICAL MANAGEMENT PROFILE:

COPD patients were identified as candidates to be monitored at home sites. From a clinical point of view, it was an interesting case study (some estimates suggest that up to 10% of the European population suffers from COPD). From a technical point of view, the case of the COPD patient led to the definition of a complex technical management profile (because different MDs are required to be used by the patient) and provided an interesting option for testing the performance of the agent. Hence, one patient profile was designed according to the clinical HOTMES ontology and one technical management profile was designed according to the technical HOTMES ontology.

The patient profile includes the tasks required to monitor a COPD patient, such as controlling the FEV1 measurement in order to detect the presence and severity of airway obstruction. It was configured by a primary care physician by means of published clinical guidelines. The patient profile included 15 monitoring tasks, 11 analysis tasks, 9 planning tasks, and 3 execution tasks. This configuration led to the inclusion of 144 new instances and the configuration of 18 rules. The technical management profile was designed to monitor the state of the MDs used by the COPD patient (a weighing scale, a blood pressure monitor, a pulse-oximeter, and a glucometer) and the resource consumption of the corresponding HG. In addition, rules were configured and 83 new instances were required in the technical management profile, applying the HOTMES ontology to technical tasks.

DATA AND COMMUNICATION LAYER:

In the data layer, the communication between the end sites is established using WS technologies. Consequently, a WS has been designed to be placed in the TS, along with a web client to be installed in the HG (to establish communication with the TS). This communication allows the HG to request its associated management profile from the TS and to transmit acquired information from the HG to the TS.

A REST WS was developed in order to enhance the scalability and flexibility of the architecture and improve its performance (efficiency). This WS defines a set of operations over the following resources: an OWL ontology, the rules (transferred by means of an XML file), OWL individuals (sent using the IndividualWS structure), datatype property values corresponding to an individual (identified by the URI of the individual and the URI of the property, sent as a generic string type), and inform messages that provide some control functions for the communication between the web pair.

Each of these resources is identified by a URI, and a set of operations was defined for each particular resource using HTTP methods (e.g., GET or PUT). This WS interface allows information described in the ontology to be exchanged in a generic manner, which is a key point contributing to the reusability and easy extension of the architecture. The described communication methods do not depend on the knowledge itself described in the ontology (related to the service) but on the fact of using an ontology to represent such knowledge. A summary of the resources and defined operations is depicted in Table I. As mentioned in the description of the converter module, individuals are exchanged by using a developed structure designated as IndividualWS. Using the OWL language, an individual of the ontology can be described as a member of a class with individual axioms or facts such as individual property values (datatype and object properties).
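A hedged C# sketch of how the HG-side web client might GET one of these resources over HTTP is given below. The base URI, the resource paths, and the use of HttpWebRequest are assumptions for illustration, since the paper does not list concrete endpoint names; only the kinds of resources (ontology, rules, individuals, property values) come from the text.

    // Hypothetical sketch of the HG-side web client issuing REST GETs to the TS web service.
    using System;
    using System.IO;
    using System.Net;

    class RestClientSketch
    {
        const string BaseUri = "https://ts.example.org/telemonitoring/"; // assumed endpoint

        static string Get(string resourcePath)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(BaseUri + resourcePath);
            request.Method = "GET"; // PUT/POST/DELETE would be used for the other operations

            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd(); // OWL/XML, the rules XML, or a serialized IndividualWS
            }
        }

        static void Main()
        {
            string rulesXml = Get("rules/copd-profile");             // GET (rules)
            string profile  = Get("individuals/managementProfile");  // GET (individual)
            Console.WriteLine(rulesXml.Length + " / " + profile.Length);
        }
    }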

HG AND TS MANAGEMENT MODULES:

Two management modules and web technology modules inside the HG and the TS constitute the main parts of the telemedicine system (see Fig. 1). The modules that comprise the architecture have been developed using .NET technologies. Specifically, the .NET Framework (version 3.5) has been used to process the ontology, create new instances, and handle data acquisition and manipulation when the rules are applied. Regarding the web modules, the components of the remote management module installed in the TS are depicted in Fig. 1. This management module includes the following three components:

1) Ontology knowledge base module: This module contains the ontology knowledge models and the instances of the registered management profiles. The TDB triple-store has been used to store the ontology model and new instances in this knowledge base module.

2) Converter module: The communication module of this architecture is mainly based on OWL instances exchanged generically by means of a developed object structure named IndividualWS. The converter module is used to wrap and unwrap the individuals structure used to exchange information with web clients. Furthermore, this module incorporates some reasoning tasks. Ontology-based reasoning is used in order to check instances before including new information in the model and to ensure the consistency of the model.

3) Rules module: This module is used to store the rules associated with each management profile. These rules are subsequently transferred by means of an XML file. As shown in Fig. 1, an additional GUI is required in order to make it easier for the EM, whether technical or clinical (physician), to define the profiles and the rules. We are currently working on the development of this GUI, combining ontology visualization techniques and usability methods. The components of the management module installed in the HG are likewise depicted in Fig. 1. This last management module has been designated the "Semantic Autonomic Agent." This module plays a key role in the architecture: it is in charge of integrating incoming data and executing the management tasks described in the management profile.

The communication between this agent and the management module installed at the remote site is established through a web client connection to the WS installed in the remote TS. The architecture of the agent comprises the ontology knowledge base module, the rules module, the converter module, and the following modules.

1) MAPE module: This module constitutes the computing core of the agent. It is used to run the tasks specified in each management profile, hence executing the closed MAPE loop process.

2) Integrator module: Information transferred by MDs and also contextual data provided by patients will be acquired in this module, which integrates data coming from different data sources.

3) Reminders and alarms module: This module includes clock functionalities to ask patients about data (reminders) or to collect information from a specific software resource.

4) Actions module: This last module is used to execute actions described within the execution tasks of the management profile if an abnormal finding occurs.

FLOW AND WORKFLOW PERFORMANCE:

All the modules and sources involved in the management procedure interact as described below. The first step (see Fig. 3) consists of downloading the management profile (patient profile or technical profile). First of all, an instance of the management profile should be configured by an EM placed at a remote site. Furthermore, a set of individual rules should be configured for each particular management purpose. As shown in Fig. 3, the designed GUI helps the physician with the ontology instantiation process and the rules definition. The outputs of this interface (which uses selected classes of the ontology as a navigation tool) are a personalized management profile and a set of rules gathered in an XML file. Other functionalities, such as queries over acquired data or crossing data among patients to support decisions, could be of interest to include in this tool.

The communication is always initiated by the user (the web client at the HG). Through a connection to the web service, the user (the patient in the telemonitoring scenario) situated at the home site acquires the required management profile. As shown in Fig. 3, if the user requests an update of his/her management profile, the version of the profile available at the TS is requested for evaluation (GET property value). When the user requests a new management profile, it is first checked whether the ontology to be downloaded is available (GET ontology). After that, the rules and the management profile are downloaded as required.

The methods involved are 1) GET (rules) and 2) GET (individual). Note that the TLS authentication phase is not depicted in Fig. 3, but it is carried out initially in order to allow the web client to connect to the web service. As depicted in Fig. 3, the associated management profile is extracted from the ontology, and the instances of the ontology managed by Jena are wrapped into the IndividualWS structure through the converter module. Once the management profile is in the HG, it is processed in the converter module, unwrapped, and inserted as individuals managed by Jena in the ontology. Once the management profile has been included in the ontology knowledge base module of the HG, it is evaluated in the MAPE module and the management procedure is performed by running the tasks specified in the profile.

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are      

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:                  

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

5.1.2 TECHNICAL FEASIBILITY:

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must therefore have modest requirements, so that only minimal or no changes are required for implementing it.

5.1.3 SOCIAL FEASIBILITY:  

The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is the process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically, and it is vital to the success of the system. System testing makes the logical assumption that if all the parts of the system are correct, the overall goal will be achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later. This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably grow into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation; the best program is worthless if it does not produce the correct outputs.

5.2.1 UNIT TESTING:

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and test data correctly and tie in properly with other programs. Achieving an error free program is the responsibility of the programmer. Program  testing  checks  for  two  types  of  errors:  syntax  and  logical. Syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error message generated by the computer. For Logic errors the programmer must examine the output carefully.

UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.
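
To make the idea of a unit test concrete, the following sketch shows an NUnit-style test for a small, hypothetical helper class; the class and its tests are illustrative only and are not part of the project code.

using NUnit.Framework;

// Hypothetical unit under test: a tiny, self-contained helper.
public static class TemperatureConverter
{
    public static double ToFahrenheit(double celsius)
    {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

[TestFixture]
public class TemperatureConverterTests
{
    [Test]
    public void FreezingPointIsConverted()
    {
        // Compares the actual output with the expected output within a tolerance.
        Assert.AreEqual(32.0, TemperatureConverter.ToFahrenheit(0.0), 0.0001);
    }

    [Test]
    public void BodyTemperatureIsConverted()
    {
        Assert.AreEqual(98.6, TemperatureConverter.ToFahrenheit(37.0), 0.0001);
    }
}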

5.2.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing needs to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

FUNCTIONAL TESTING:

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for the various peers in the distributed network framework, which displays all users available in the group.
Expected result: The result after execution should be accurate.

5.2.3 NON-FUNCTIONAL TESTING:

Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case, and makes use of symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing


5.2.4 LOAD TESTING:

An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under real usage by having actual users connected to it, who generate the test input data for the system test.

LOAD TESTING:

Description: It is necessary to ascertain that the application behaves correctly under load, when a 'Server busy' response is received.
Expected result: Another active node should be designated as the server.

5.2.5 PERFORMANCE TESTING:

Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

PERFORMANCE TESTING:

Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: The application should handle large input values and produce accurate results in the expected time.

5.2.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and this is what is verified in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. This activity forms part of the work of the software quality control team.

RELIABILITY TESTING:

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.

5.2.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

SECURITY TESTING:

Description: Check that the user identification is authenticated.
Expected result: In case of failure, the user should not be connected to the framework.

Description: Check whether the group keys in a tree are shared by all peers.
Expected result: All peers in the same group should know the group key.

5.2.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test-case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases that focus on the inner structure of the software to be tested.

WHITE BOX TESTING:

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.

5.2.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques; rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or the code itself. The contents of the box are hidden, and the software, when stimulated with the chosen inputs, should produce the desired results.

BLACK BOX TESTING:

Description: Check for incorrect or missing functions.
Expected result: All the functions must be valid.

Description: Check for interface errors.
Expected result: The entire interface must function normally.

Description: Check for errors in data structures or external database access.
Expected result: Database update and retrieval must be carried out correctly.

Description: Check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.

All of the above system testing strategies are carried out during development, since the documentation and institutionalization of the proposed goals and related policies are essential.

CHAPTER 7

7.0 SOFTWARE SPECIFICATION:

7.1 FEATURES OF .NET:

Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There’s no language barrier with .NET: there are numerous languages available to the developer including Managed C++, C#, Visual Basic and Java Script.

The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.

“.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET Server, for instance) and services (like Passport, .NET My Services, and so on).

7.2 THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. The most important features are

  • Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
  • Memory management, notably including garbage collection.
  • Checking and enforcing security restrictions on the running code.
  • Loading and executing programs, with version control and other such features.
The following features of the .NET framework are also worth describing:

Managed Code

Managed code is code that targets .NET and contains certain extra information – “metadata” – to describe itself. While both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
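
A quick way to see this metadata is to read it back through reflection. The short sketch below, which is only an illustration, prints the name of a loaded assembly and the members of the first exported type it contains.

using System;
using System.Reflection;

class MetadataDemo
{
    static void Main()
    {
        // Every managed assembly describes itself; here we inspect the core library.
        Assembly asm = typeof(string).Assembly;
        Console.WriteLine("Assembly: " + asm.FullName);

        foreach (Type t in asm.GetExportedTypes())
        {
            Console.WriteLine("First exported type: " + t.FullName);
            foreach (MethodInfo m in t.GetMethods(BindingFlags.Public | BindingFlags.Instance))
                Console.WriteLine("  method: " + m.Name);
            break;   // one type is enough to keep the output short
        }
    }
}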

Managed Data

With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation facilities, as well as garbage collection. Some .NET languages use Managed Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you are using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications – data that does not get garbage collected but is instead looked after by unmanaged code.

Common Type System

The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other, by describing types in a common way. CTS define how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.

Common Language Specification

The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.

7.3 THE CLASS LIBRARY

.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace hierarchy is called System; it contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
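
The following fragment is a small illustration of the difference between value types and object types, including the boxing conversion mentioned above.

using System;

class BoxingDemo
{
    static void Main()
    {
        int n = 42;               // value type, typically allocated on the stack
        object boxed = n;         // boxing: the value is copied into an object on the heap
        int unboxed = (int)boxed; // unboxing: the value is copied back out

        Console.WriteLine("{0} {1} {2}", n, boxed, unboxed);
        Console.WriteLine(boxed.GetType());   // System.Int32 - everything derives from System.Object
    }
}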

The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.

7.4 LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.

Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic also now supports structured exception handling, custom attributes and also supports multi-threading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.

Active State has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for Active State’s Perl Dev Kit.

Other languages for which .NET compilers are available include

  • FORTRAN
  • COBOL
  • Eiffel          
Fig. 1. The .NET Framework: ASP.NET, XML Web services and Windows Forms sit on top of the Base Class Libraries, which in turn run on the Common Language Runtime over the Operating System.

C#.NET is also compliant with CLS (Common Language Specification) and supports structured exception handling. CLS is set of rules and constructs that are supported by the CLR (Common Language Runtime). CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.

CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to destroy them; in other words, destructors release the resources allocated to the object. In C#.NET this role is played by the Finalize method: a class provides a finalizer (written with destructor syntax), which is used to complete the tasks that must be performed when an object is destroyed. The finalizer is called automatically when the object is destroyed and, because Finalize is protected, it can be called only from the class it belongs to or from its derived classes.
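
A minimal illustration of both members is sketched below; the Resource class is hypothetical, and the explicit calls to the garbage collector are there only so that the finalizer output can be observed in a short console run.

using System;

class Resource
{
    // Constructor: initializes the object when it is created.
    public Resource()
    {
        Console.WriteLine("constructor: resource acquired");
    }

    // Finalizer (destructor syntax): compiled into an override of Object.Finalize
    // and invoked automatically by the garbage collector when the object is destroyed.
    ~Resource()
    {
        Console.WriteLine("finalizer: resource released");
    }
}

class Program
{
    static void Create()
    {
        var r = new Resource();   // becomes unreachable when the method returns
    }

    static void Main()
    {
        Create();
        GC.Collect();                     // request a collection (for demonstration only)
        GC.WaitForPendingFinalizers();    // wait until pending finalizers have run
    }
}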

GARBAGE COLLECTION

Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use.

In C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.

OVERLOADING

Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
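
The sketch below illustrates overloading with a hypothetical Logger class whose constructors and Log methods share a name but differ in their parameter lists; the compiler selects the appropriate overload from the arguments supplied.

using System;

class Logger
{
    private readonly string name;

    // Overloaded constructors.
    public Logger() : this("app") { }
    public Logger(string name) { this.name = name; }

    // Overloaded methods.
    public void Log(string message)            { Console.WriteLine(name + ": " + message); }
    public void Log(string message, int level) { Console.WriteLine(name + " [" + level + "]: " + message); }
    public void Log(Exception error)           { Console.WriteLine(name + " error: " + error.Message); }
}

class Program
{
    static void Main()
    {
        var log = new Logger("demo");
        log.Log("started");
        log.Log("verbose detail", 2);
        log.Log(new InvalidOperationException("something went wrong"));
    }
}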

MULTITHREADING:

C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
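
The fragment below is a small illustration: a worker thread runs a longer task in the background while the main thread remains free, which is the typical way multithreading keeps an application responsive.

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            Thread.Sleep(2000);                       // simulate a long-running task
            Console.WriteLine("background task finished");
        });

        worker.Start();
        Console.WriteLine("main thread stays free to handle user interaction");
        worker.Join();                                // wait for the worker before exiting
    }
}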

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use Try…Catch…Finally statements to create exception handlers. Using Try…Catch…Finally statements, we can create robust and effective exception handlers that improve the resilience of our application.
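
A short Try…Catch…Finally sketch is given below; the file name is only an example, and the Finally block shows the clean-up that runs whether or not an exception occurs.

using System;
using System.IO;

class Program
{
    static void Main()
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader("settings.txt");   // may not exist
            Console.WriteLine(reader.ReadLine());
        }
        catch (FileNotFoundException ex)
        {
            // Handle the specific error detected at runtime.
            Console.WriteLine("Configuration file missing: " + ex.FileName);
        }
        finally
        {
            // Always runs, whether or not an exception was thrown.
            if (reader != null)
                reader.Close();
        }
    }
}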

7.5 THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.

OBJECTIVES OF .NET FRAMEWORK

1. To provide a consistent object-oriented programming environment, whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.

3. To eliminate performance problems.

There are different types of application, such as Windows-based applications and Web-based applications. 

7.6 FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services

A SQL Server database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

7.7 TABLE:

A database is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view. Here we can specify what kind of data each field will hold.

Datasheet View

To add, edit, or analyze the data itself, we work in the table's datasheet view mode.

QUERY:

A query is a question that is asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
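
For the SQL Server back end used in this project, a query is normally issued from C# through ADO.NET. The sketch below is illustrative only: the connection string, table, and column names are assumptions, not the project's actual schema.

using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        string connectionString = "Server=.;Database=ProjectDb;Integrated Security=true";   // assumed

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM Users WHERE Status = @status", connection))
        {
            command.Parameters.AddWithValue("@status", "active");   // parameterized to avoid SQL injection
            connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                // Each run of the query returns the latest data, much like a dynaset.
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
}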

CHAPTER 7

APPENDIX

7.1 SAMPLE SOURCE CODE

7.2 SAMPLE OUTPUT

CHAPTER 8

8.1 CONCLUSION:

This study describes an architecture to enable data integration and its management in an ontology-driven telemonitoring solution implemented in home-based scenarios. This is an innovative architecture that facilitates the integration of several management services at home sites using the same software engine. The architecture has been specifically designed to support both technical and clinical services in the telemonitoring scenario, thus avoiding the installation of additional software for technical purposes.

The HOTMES ontology is used at the conceptual layer to describe a management profile. On the one hand, our ontology contributes to integrating data and its management, offering benefits in terms of knowledge representation, workflow organization, and self-management capabilities to the system. Its combination with rules allows personalized services to be provided.

This application ontology could be further improved in the future by introducing concepts from a domain ontology. On the other hand, the data and communication layer of the architecture, based on REST web services, was oriented toward minimizing the consumption of resources and providing reusable key ideas for future ontology-based architecture developments.

8.2 FUTURE ENHANCEMENT

This solution represents a further step toward the possibility of establishing more effective home-based telemonitoring systems and thus improving the remote care of patients with chronic diseases. As has been reported, good telemedicine implementations are developed after a process in which the dynamic interaction among a combination of socio-technical and clinical factors is optimized. This means that additional work should be done (e.g., measuring the patient–doctor interaction through the system and the trustworthiness of the system over a long period of time) before adopting this solution in a real scenario. After its complete development, a concordance study should first be conducted in order to determine its clinical efficiency. Then, a social impact study should be conducted in order to determine how far the system improves the patient's quality of life. Regarding these last studies, the results reported in the literature evidence the benefits of telemonitoring systems while linking their success to usability design issues and features.

Decentralized Access Control with Anonymous Authentication of Data Stored in Clouds

Cloud computing is an emerging computing paradigm in which the resources of the computing infrastructure are provided as a service over the Internet. As promising as it is, this paradigm also introduces many new challenges for data security and access control when clients outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as the data owners. In doing so, existing solutions inevitably introduce a substantial processing overhead on the data owner for key distribution and data management when fine-grained data access control is required, and therefore do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control still remains unresolved. This paper addresses this open issue by, on the one hand, defining and enforcing access policies based on data attributes and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and combining techniques of decentralized key-policy Attribute Based Encryption (KP-ABE). Extensive analysis shows that the proposed approach is highly efficient and secure.

1.2 INTRODUCTION

Research in cloud computing is receiving a lot of attention from both academic and industrial worlds. In cloud computing, users can outsource their computation and storage to servers (also called clouds) using Internet. This frees users from the hassles of maintaining resources on-site. Clouds can provide several types of services like applications (e.g., Google Apps, Microsoft online), infrastructures (e.g., Amazon’s EC2, Eucalyptus, Nimbus), and platforms to help developers write applications (e.g., Amazon’s S3, Windows Azure).

Much of the data stored in clouds is highly sensitive, for example, medical records and social networks. Security and privacy are thus very important issues in cloud computing. In one hand, the user should authenticate itself before initiating any transaction, and on the other hand, it must be ensured that the cloud does not tamper with the data that is outsourced. User privacy is also required so that the cloud or other users do not know the identity of the user. The cloud can hold the user accountable for the data it outsources, and likewise, the cloud is itself accountable for the services it provides. The validity of the user who stores the data is also verified. Apart from the technical solutions to ensure security and privacy, there is also a need for law enforcement.

Recently, Wang et al. addressed secure and dependable cloud storage. Cloud servers are prone to Byzantine failure, where a storage server can fail in arbitrary ways. The cloud is also prone to data modification and server colluding attacks. In a server colluding attack, the adversary can compromise storage servers so that it can modify data files as long as they are internally consistent. To provide secure data storage, the data needs to be encrypted. However, the data is often modified, and this dynamic property needs to be taken into account while designing efficient secure storage techniques.

Efficient search on encrypted data is also an important concern in clouds. The clouds should not know the query but should be able to return the records that satisfy the query. This is achieved by means of searchable encryption. The keywords are sent to the cloud encrypted, and the cloud returns the result without knowing the actual keyword for the search. The problem here is that the data records should have keywords associated with them to enable the search. The correct records are returned only when searched with the exact keywords.

Security and privacy protection in clouds is being explored by many researchers. Wang et al. addressed storage security using Reed-Solomon erasure-correcting codes. Authentication of users using public-key cryptographic techniques has also been studied. Many homomorphic encryption techniques have been suggested to ensure that the cloud is not able to read the data while performing computations on it. Using homomorphic encryption, the cloud receives the ciphertext of the data, performs computations on the ciphertext, and returns the encoded value of the result. The user is able to decode the result, but the cloud does not know what data it has operated on. In such circumstances, it must be possible for the user to verify that the cloud returns correct results. Accountability of clouds is a very challenging task and involves both technical issues and law enforcement. Neither clouds nor users should be able to deny any operations performed or requested. It is important to keep a log of the transactions performed; however, it is also an important concern to decide how much information to keep in the log.

Accountability has been addressed in TrustCloud. Secure provenance has also been studied. Consider the following situation: a law student, Alice, wants to send a series of reports about some malpractices by authorities of University X to all the professors of University X, research chairs of universities in the country, and students belonging to the Law department in all universities in the province. She wants to remain anonymous while publishing all evidence of malpractice, and she stores the information in the cloud.

Access control is important in such a case, so that only authorized users can access the data. It is also important to verify that the information comes from a reliable source. The problems of access control, authentication, and privacy protection should be solved simultaneously; we address this problem in its entirety in this paper. Access control in clouds is gaining attention because it is important that only authorized users have access to valid services. A huge amount of information is being stored in the cloud, and much of this is sensitive information. Care should be taken to ensure access control of this sensitive information, which can often be related to health, important documents (as in Google Docs or Dropbox), or even personal information (as in social networking).

There are broadly three types of access control: User Based Access Control (UBAC), Role Based Access Control (RBAC), and Attribute Based Access Control (ABAC). In UBAC, the access control list (ACL) contains the list of users who are authorized to access the data; this is not feasible in clouds where there are many users. In RBAC, users are classified based on their individual roles. Data can be accessed by users who have matching roles, and the roles are defined by the system. For example, only faculty members and senior secretaries might have access to certain data, but not the junior secretaries. ABAC is more extended in scope: users are given attributes, and the data has an attached access policy. Only users with a valid set of attributes, satisfying the access policy, can access the data. For instance, in the above example, certain records might be accessible by faculty members with more than 10 years of research experience or by senior secretaries with more than 8 years of experience. The pros and cons of RBAC and ABAC have been discussed in the literature. There has been some work on ABAC in clouds; all these works use a cryptographic primitive known as Attribute Based Encryption (ABE). The eXtensible Access Control Markup Language (XACML) has also been proposed for ABAC in clouds.

An area where access control is widely used is health care. Clouds are being used to store sensitive information about patients to enable access by medical professionals, hospital staff, researchers, and policy makers, and it is important to control the access to this data so that only authorized users can reach it. Using ABE, the records are encrypted under some access policy and stored in the cloud. Users are given sets of attributes and corresponding keys; only when a user has a matching set of attributes can he or she decrypt the information stored in the cloud. Access control in health care has been studied. Access control is also gaining importance in online social networking, where users (members) store their personal information, pictures, and videos and share them with selected groups of users or communities they belong to; access control in online social networking has also been studied. Such data are also being stored in clouds.

It is very important that only the authorized users are given access to those information. A similar situation arises when data is stored in clouds, for example in Dropbox, and shared with certain groups of people. It is just not enough to store the contents securely in the cloud but it might also be necessary to ensure anonymity of the user. For example, a user would like to store some sensitive information but does not want to be recognized. The user might want to post a comment on an article, but does not want his/her identity to be disclosed. However, the user should be able to prove to the other users that he/she is a valid user who stored the information without revealing the identity. There are cryptographic protocols like ring signatures, mesh signatures, group signatures, which can be used in these situations. Ring signature is not a feasible option for clouds where there are a large number of users. Group signatures assume the pre-existence of a group which might not be possible in clouds. Mesh signatures do not ensure if the message is from a single user or many users colluding together. For these reasons, a new protocol known as Attribute Based Signature (ABS) has been applied. ABS was proposed by Maji et al. In ABS, users have a claim predicate associated with a message. The claim predicate helps to identify the user as an authorized one, without revealing its identity. Other users or the cloud can verify the user and the validity of the message stored. ABS can be combined with ABE to achieve authenticated access control without disclosing the identity of the user to the cloud.

Existing work on access control in the cloud is mostly centralized in nature. With few exceptions, all other schemes use attribute based encryption (ABE). One scheme uses a symmetric key approach and does not support authentication; other schemes do not support authentication either. Earlier work by Zhao et al. provides privacy preserving authenticated access control in the cloud. However, the authors take a centralized approach where a single key distribution center (KDC) distributes secret keys and attributes to all users. Unfortunately, a single KDC is not only a single point of failure but is also difficult to maintain because of the large number of users supported in a cloud environment. We, therefore, emphasize that clouds should take a decentralized approach while distributing secret keys and attributes to users. It is also quite natural for clouds to have many KDCs in different locations in the world. Although Yang et al. proposed a decentralized approach, their technique does not authenticate users who want to remain anonymous while accessing the cloud. In an earlier work, Ruj et al. proposed a distributed access control mechanism in clouds. However, that scheme did not provide user authentication. The other drawback was that a user could create and store a file while other users could only read it; write access was not permitted to users other than the creator. In the preliminary version of this work, we extended our previous scheme with added features that enable authenticating the validity of the message without revealing the identity of the user who has stored the information in the cloud. In this version we also address user revocation, which was not addressed earlier. We use an attribute based signature scheme to achieve authenticity and privacy. Unlike earlier schemes, our scheme is resistant to replay attacks, in which a user can replace fresh data with stale data from a previous write, even if it no longer has a valid claim policy. This is an important property because a user revoked of its attributes might no longer be able to write to the cloud. We therefore add this extra feature to our scheme and modify it appropriately. Our scheme also allows writing multiple times, which was not permitted in our earlier work.

1.3 LITERATURE SURVEY

PRIVACY PRESERVING ACCESS CONTROL WITH AUTHENTICATION FOR SECURING DATA IN CLOUDS

PUBLICATION: S. Ruj, M. Stojmenovic and A. Nayak, IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, pp. 556–563, 2012.

TOWARD SECURE AND DEPENDABLE STORAGE SERVICES IN CLOUD COMPUTING

PUBLICATION: C. Wang, Q. Wang, K. Ren, N. Cao and W. Lou, IEEE T. Services Computing, vol. 5, no. 2, pp. 220–232, 2012.

FUZZY KEYWORD SEARCH OVER ENCRYPTED DATA IN CLOUD COMPUTING

PUBLICATION: J. Li, Q. Wang, C. Wang, N. Cao, K. Ren, and W. Lou, in IEEE INFOCOM. , pp. 441–445, 2010.

CRYPTOGRAPHIC CLOUD STORAGE

PUBLICATION: S. Kamara and K. Lauter, in Financial Cryptography Workshops, ser. Lecture Notes in Computer Science, vol. 6054. Springer, pp. 136–149, 2010.

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

To accomplish secure data transactions in the cloud, a suitable cryptographic method is used. The data owner must encrypt the file and then store it in the cloud. If a third party downloads the file, they can view the record only if they have the key used to decrypt the encrypted file. Occasionally this approach fails because of technology improvements and attackers. To overcome this issue, there are many procedures and techniques for making transactions and storage more secure.

2.2 DISADVANTAGES:

  • The access control and authentication are both collusion resistant, meaning that no two users can collude and access data or authenticate themselves, if they are individually not authorized.
  • Revoked users cannot access data after they have been revoked.

2.3 PROPOSED SYSTEM:

KP-ABE is a public-key cryptographic primitive for one-to-many communication. In KP-ABE, data is associated with attributes, and a public-key component is defined for each attribute. The encryptor associates a set of attributes with the message by encrypting it with the corresponding public-key components. Each user is assigned an access structure, normally defined as an access tree over the data attributes: interior nodes of the access tree are threshold gates and leaf nodes are associated with attributes. The user's secret key is defined to reflect the access structure, so the user is able to decrypt a ciphertext if and only if the data attributes satisfy his or her access structure.
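
The cryptographic details are beyond the scope of this section, but the access-tree idea can be illustrated with a simplified, non-cryptographic sketch: a recursive check of whether a set of data attributes satisfies a policy tree whose interior nodes are threshold gates. In the real KP-ABE scheme this check happens implicitly through secret sharing over the key components; the attribute names below are invented for the example.

using System;
using System.Collections.Generic;
using System.Linq;

class AccessTreeNode
{
    public string Attribute;                                     // set on leaf nodes only
    public int Threshold;                                        // k-of-n gate on interior nodes
    public List<AccessTreeNode> Children = new List<AccessTreeNode>();

    public bool IsSatisfiedBy(ISet<string> dataAttributes)
    {
        if (Children.Count == 0)                                 // leaf: attribute must be present
            return dataAttributes.Contains(Attribute);

        int satisfied = Children.Count(c => c.IsSatisfiedBy(dataAttributes));
        return satisfied >= Threshold;                           // threshold gate
    }
}

class Program
{
    static void Main()
    {
        // Key policy: "cardiology" AND ("year:2014" OR "year:2015").
        var policy = new AccessTreeNode
        {
            Threshold = 2,
            Children =
            {
                new AccessTreeNode { Attribute = "cardiology" },
                new AccessTreeNode
                {
                    Threshold = 1,
                    Children =
                    {
                        new AccessTreeNode { Attribute = "year:2014" },
                        new AccessTreeNode { Attribute = "year:2015" }
                    }
                }
            }
        };

        // Attributes attached to a ciphertext.
        var dataAttributes = new HashSet<string> { "cardiology", "year:2014" };
        Console.WriteLine(policy.IsSatisfiedBy(dataAttributes));   // True: decryption would succeed
    }
}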

2.4 ADVANTAGES:

  • Distributed access control of data stored in cloud so that only authorized users with valid attributes can access them.
  • Authentication of users who store and modify their data on the cloud.
  • The identity of the user is protected from the cloud during authentication.
  • The architecture is decentralized, meaning that there can be several KDCs for key management.


2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor                           –    Pentium IV
  • Speed                               –    1.1 GHz
  • RAM                                 –    256 MB (min)
  • Hard Disk                           –    20 GB
  • Floppy Drive                        –    1.44 MB
  • Keyboard                            –    Standard Windows Keyboard
  • Mouse                               –    Two or Three Button Mouse
  • Monitor                             –    SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP or Win 7
  • Front End                                :           Microsoft Visual Studio 2008
  • Back End                                :           MSSQL Server 2005
  • Server                                      :           ASP Web Server
  • Script                                       :           C# Script
  • Document                               :           MS-Office 2007

CHAPTER 3

3.0 SYSTEM DESIGN:

ARCHITECTURE DIAGRAM / UML DIAGRAMS / DATA FLOW DIAGRAM:

  • The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
  • DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • DFD is also known as bubble chart. A DFD may be used to represent a system at any level of abstraction. DFD may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

DATA STORE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures or devices that produce data. The physical component is not identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.

3.1 DATAFLOW DIAGRAM

UML DIAGRAMS:

3.2 USE CASE DIAGRAM:


3.3 CLASS DIAGRAM:


3.4 SEQUENCE DIAGRAM:

3.5 ACTIVITY DIAGRAM: 

CHAPTER 4

4.0 IMPLEMENTATION:

We propose our privacy preserving authenticated access control scheme. According to our scheme, a user can create a file and store it securely in the cloud. The scheme consists of the use of the two protocols, ABE and ABS, as discussed in Sections 3.4 and 3.5, respectively. We will first discuss our scheme in detail and then provide a concrete example to demonstrate how it works. We refer to Fig. 1. There are three users: a creator, a reader, and a writer. Creator Alice receives a token from the trustee, who is assumed to be honest. A trustee can be someone like the federal government who manages social insurance numbers, etc. On presenting her id (like a health/social insurance number), the trustee gives her a token. There are multiple KDCs (here, two), which can be scattered; for example, these can be servers in different parts of the world.

A creator, on presenting the token to one or more KDCs, receives keys for encryption/decryption and signing. In Fig. 1, SKs are the secret keys given for decryption and Kx are the keys for signing. The message MSG is encrypted under the access policy X. The access policy decides who can access the data stored in the cloud. The creator decides on a claim policy Y, to prove her authenticity, and signs the message under this claim. The ciphertext C with signature c is sent to the cloud. The cloud verifies the signature and stores the ciphertext C. When a reader wants to read, the cloud sends C. If the user has attributes matching the access policy, it can decrypt and get back the original message.

Write proceeds in the same way as file creation. By delegating the verification process to the cloud, individual users are relieved from time-consuming verifications. When a reader wants to read some data stored in the cloud, it tries to decrypt it using the secret keys it received from the KDCs. If it has enough attributes matching the access policy, then it can decrypt the information stored in the cloud.

4.1 ALGORITHM:

ATTRIBUTE-BASED ENCRYPTION:

ABE with multiple authorities is proposed as follows:




4.2 MODULES:

CLOUD USER MODULE:

ATTRIBUTE-BASED SIGNATURES:

ANONYMOUS AUTHENTICATION:

CLOUD USER OPERATIONS:

4.3 MODULE DESCRIPTION:

CLOUD USER MODULE:

User: users, who have data to be stored in the cloud and rely on the cloud for data computation, consist of both individual consumers and organizations.

Cloud Service Provider (CSP): a CSP, who has significant resources and expertise in building and managing distributed cloud storage servers, owns and operates live Cloud Computing systems.

Third Party Auditor (TPA): an optional TPA, who has expertise and capabilities that users may not have, is trusted to assess and expose risk of cloud storage services on behalf of the users upon request.

ATTRIBUTE-BASED SIGNATURES:

Cryptographic protocols like ring signatures mesh signatures group signatures which can be used in these situations. Ring signature is not a feasible option for clouds where there are a large number of users. Group signatures assume the preexistence of a group which might not be possible in clouds. Mesh signatures do not ensure if the message is from a single user or many users colluding together. For these reasons, a new protocol known as attribute-based signature (ABS) has been applied. ABS was proposed by Maji et al. In ABS, users have a claim predicate associated with a message. The claim predicate helps to identify the user as an authorized one, without revealing its identity. Other users or the cloud can verify the user and the validity of the message stored. ABS can be combined with ABE to achieve authenticated access control without disclosing the identity of the user to the cloud.

ANONYMOUS AUTHENTICATION:

In our scheme, a writer whose rights have been revoked cannot create a new signature with a new time stamp and, thus, cannot write back stale information. The writer signs the message together with the time stamp and calculates the corresponding message signature.
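
The scheme itself uses attribute-based signatures; purely as an illustration of the timestamp idea, the sketch below uses an ordinary RSA signature as a stand-in. The writer signs the payload together with a UTC timestamp, and the verifier rejects both invalid signatures and stale timestamps, so a revoked writer cannot replay an old, previously signed write.

using System;
using System.Security.Cryptography;
using System.Text;

class TimestampedWrite
{
    static void Main()
    {
        using (RSA rsa = RSA.Create())
        {
            // Writer side: attach a UTC timestamp to the payload and sign both together.
            string payload = "updated record contents";
            string stamped = DateTime.UtcNow.ToString("o") + "|" + payload;
            byte[] data = Encoding.UTF8.GetBytes(stamped);
            byte[] signature = rsa.SignData(data, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            // Verifier side: check the signature and reject stale timestamps.
            bool signatureOk = rsa.VerifyData(data, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            DateTime ts = DateTime.Parse(stamped.Split('|')[0], null,
                                         System.Globalization.DateTimeStyles.RoundtripKind);
            bool fresh = (DateTime.UtcNow - ts) < TimeSpan.FromMinutes(5);

            Console.WriteLine("write accepted: " + (signatureOk && fresh));
        }
    }
}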

CLOUD USER OPERATIONS:

Update Operation

In cloud data storage, sometimes the user may need to modify some data block(s) stored in the cloud, we refer this operation as data update. In other words, for all the unused tokens, the user needs to exclude every occurrence of the old data block and replace it with the new one.

Delete Operation

Sometimes, after being stored in the cloud, certain data blocks may need to be deleted. The delete operation we are considering is a general one, in which user replaces the data block with zero or some special reserved data symbol. From this point of view, the delete operation is actually a special case of the data update operation, where the original data blocks can be replaced with zeros or some predetermined special blocks.

Append Operation

In some cases, the user may want to increase the size of his stored data by adding blocks at the end of the data file, which we refer as data append. We anticipate that the most frequent append operation in cloud data storage is bulk append, in which the user needs to upload a large number of blocks (not a single block) at one time.
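
The three operations can be summarized with a simplified sketch that models the outsourced file as a list of blocks; a real system would also refresh the verification tokens for every block it touches, which is omitted here.

using System;
using System.Collections.Generic;

class BlockFile
{
    private readonly List<byte[]> blocks = new List<byte[]>();
    private static readonly byte[] DeletedMarker = new byte[0];   // reserved "zero" block

    // Update: replace an existing block with new content.
    public void Update(int index, byte[] newBlock) { blocks[index] = newBlock; }

    // Delete: a special case of update that writes the reserved marker.
    public void Delete(int index) { blocks[index] = DeletedMarker; }

    // Append: add one or more blocks at the end of the file (bulk append).
    public void Append(IEnumerable<byte[]> newBlocks) { blocks.AddRange(newBlocks); }

    public int Count { get { return blocks.Count; } }
}

class Program
{
    static void Main()
    {
        var file = new BlockFile();
        file.Append(new[] { new byte[] { 1 }, new byte[] { 2 }, new byte[] { 3 } });
        file.Update(1, new byte[] { 9 });
        file.Delete(0);
        Console.WriteLine("blocks stored: " + file.Count);   // 3
    }
}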

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase and business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company.  For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are      

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:                  

This study is carried out to check the economic impact that the system will have on the organization. The amount of fund that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system as well within the budget and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

5.1.2 TECHNICAL FEASIBILITY:

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not have a high demand on the available technical resources. This will lead to high demands on the available technical resources. This will lead to high demands being placed on the client. The developed system must have a modest requirement, as only minimal or null changes are required for implementing this system.  

5.1.3 SOCIAL FEASIBILITY:  

The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system is working according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the global will be successfully achieved. In adequate testing if not testing leads to errors that may not appear even many months. This creates two problems, the time lag between the cause and the appearance of the problem and the effect of the system errors on the files and records within the system. A small system error can conceivably explode into a much larger Problem. Effective testing early in the purpose translates directly into long term cost savings from a reduced number of errors. Another reason for system testing is its utility, as a user-oriented vehicle before implementation. The best programs are worthless if it produces the correct outputs.

5.2.1 UNIT TESTING:

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and test data correctly and tie in properly with other programs. Achieving an error free program is the responsibility of the programmer. Program  testing  checks  for  two  types  of  errors:  syntax  and  logical. Syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error message generated by the computer. For Logic errors the programmer must examine the output carefully.

UNIT TESTING:

Description Expected result
Test for application window properties. All the properties of the windows are to be properly aligned and displayed.
Test for mouse operations. All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

5.1.3 FUNCTIONAL TESTING:

Functional testing of an application is used to prove the application delivers correct results, using enough inputs to give an adequate level of confidence that will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that personalization function work correctly.When a program is tested, the actual output is compared with the expected output. When there is a discrepancy the sequence of instructions must be traced to determine the problem.  The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

FUNCTIONAL TESTING:

Description Expected result
Test for all modules. All peers should communicate in the group.
Test for various peer in a distributed network framework as it display all users available in the group. The result after execution should give the accurate result.

5.1. 4 NON-FUNCTIONAL TESTING:

 The Non Functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing


5.1.5 LOAD TESTING:

An important tool for implementing system tests is a Load generator. A Load generator is essential for testing quality requirements such as performance and stress. A load can be a real load, that is, the system can be put under test to real usage by having actual telephone users connected to it. They will generate test input data for system test.

Load Testing

Description Expected result
It is necessary to ascertain that the application behaves correctly under loads when ‘Server busy’ response is received. Should designate another active node as a Server.

5.1.5 PERFORMANCE TESTING:

Performance tests are utilized in order to determine the widely defined performance of the software system such as execution time associated with various parts of the code, response time and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

PERFORMANCE TESTING:

Description Expected result
This is required to assure that an application perforce adequately, having the capability to handle many peers, delivering its results in expected time and using an acceptable level of resource and it is an aspect of operational management.   Should handle large input values, and produce accurate result in a  expected time.  

5.1.6 RELIABILITY TESTING:

The software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time and it is being ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It the portability that a software system will operate without failure under given conditions for a given time interval and it focuses on the behavior of the software element. It forms a part of the software quality control team.

RELIABILITY TESTING:

Description Expected result
This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in provide the application. In case of failure of  the server an alternate server should take over the job.

5.1.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

SECURITY TESTING:

  Description Expected result
Checking that the user identification is authenticated. In case failure it should not be connected in the framework.
Check whether group keys in a tree are shared by all peers. The peers should know group key in the same group.

5.1.7 WHITE BOX TESTING:

White  box  testing,  sometimes called  glass-box  testing is  a test  case  design method  that  uses  the  control  structure  of the procedural  design  to  derive  test  cases. Using  white  box  testing  method,  the software  engineer  can  derive  test  cases. The White box testing focuses on the inner structure of the software structure to be tested.

5.1.8 WHITE BOX TESTING:

Description Expected result
Exercise all logical decisions on their true and false sides. All the logical decisions must be valid.
Execute all loops at their boundaries and within their operational bounds. All the loops must be finite.
Exercise internal data structures to ensure their validity. All the data structures must be valid.

5.1.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques; rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the software, when stimulated with test inputs, should produce the desired results.

BLACK BOX TESTING:

Description | Expected result
To check for incorrect or missing functions. | All the functions must be valid.
To check for interface errors. | The entire interface must function normally.
To check for errors in data structures or external database access. | Database update and retrieval must work correctly.
To check for initialization and termination errors. | All functions and data structures must be initialized properly and terminated normally.

All of the above system testing strategies are carried out during development, since documentation and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE SPECIFICATION:

6.1 FEATURES OF .NET:

Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There’s no language barrier with .NET: there are numerous languages available to the developer including Managed C++, C#, Visual Basic and Java Script.

The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.

“.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET Server, for instance) and services (like Passport, .NET My Services, and so on).

6.2 THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. The most important features are

  • Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
  • Memory management, notably including garbage collection.
  • Checking and enforcing security restrictions on the running code.
  • Loading and executing programs, with version control and other such features.

The following features of the .NET framework are also worth describing:

Managed Code

The code that targets .NET, and which contains certain extra information – “metadata” – to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.

Managed Data

With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you’re using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications – data that doesn’t get garbage collected but instead is looked after by unmanaged code.

Common Type System

The CLR uses the Common Type System (CTS) to strictly enforce type safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.

Common Language Specification

The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.

6.3 THE CLASS LIBRARY

.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
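
The conversion between value types and object types mentioned above is boxing and unboxing; the following short, generic C# sketch illustrates it and is not code from this project.

using System;

class BoxingDemo
{
    static void Main()
    {
        int counter = 42;          // value type, typically allocated on the stack
        object boxed = counter;    // boxing: the value is copied into an object on the heap
        int unboxed = (int)boxed;  // unboxing: the value is copied back into a value type

        Console.WriteLine("{0} {1} {2}", counter, boxed, unboxed);
    }
}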

The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.

6.4 LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.

Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic also now supports structured exception handling, custom attributes and also supports multi-threading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.

Active State has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for Active State’s Perl Dev Kit.

Other languages for which .NET compilers are available include

  • FORTRAN
  • COBOL
  • Eiffel          

Fig. 1: The .NET Framework stack – ASP.NET, XML Web Services, and Windows Forms sit on top of the Base Class Libraries, which run on the Common Language Runtime above the Operating System.

C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components that are created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.

CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET this role is played by the finalizer (destructor). The finalizer is used to complete the tasks that must be performed when an object is destroyed; it cannot be called directly, but is invoked automatically by the garbage collector when the object is destroyed.
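
The following is a minimal sketch of a class with a constructor and a finalizer; the class and the resource it pretends to acquire are hypothetical, and the forced collection at the end is for demonstration only.

using System;

class ResourceHolder
{
    private readonly string _name;

    // Constructor: initializes the object.
    public ResourceHolder(string name)
    {
        _name = name;
        Console.WriteLine("Acquired " + _name);
    }

    // Finalizer (destructor): releases resources when the object is destroyed.
    ~ResourceHolder()
    {
        Console.WriteLine("Released " + _name);
    }
}

class FinalizerDemo
{
    static void Main()
    {
        new ResourceHolder("example resource");
        GC.Collect();                    // demonstration only; normally the runtime decides when to collect
        GC.WaitForPendingFinalizers();
    }
}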

GARBAGE COLLECTION

Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use.

In C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.

OVERLOADING

Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
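
As a small illustration of overloading as described above, the sketch below overloads both a constructor and a method; the Shape class is a made-up example rather than project code.

using System;

class Shape
{
    private readonly double _width;
    private readonly double _height;

    // Overloaded constructors: same name, different argument lists.
    public Shape(double side) : this(side, side) { }
    public Shape(double width, double height)
    {
        _width = width;
        _height = height;
    }

    // Overloaded methods: compute the area as-is or after scaling.
    public double Area() { return _width * _height; }
    public double Area(double scale) { return (_width * scale) * (_height * scale); }
}

class OverloadingDemo
{
    static void Main()
    {
        Shape square = new Shape(2.0);
        Shape rectangle = new Shape(2.0, 3.0);
        Console.WriteLine(square.Area());
        Console.WriteLine(rectangle.Area(1.5));
    }
}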

MULTITHREADING:

C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
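
A minimal sketch of handling two tasks simultaneously with System.Threading is shown below; the worker method and its labels are placeholders.

using System;
using System.Threading;

class ThreadingDemo
{
    static void Worker(object label)
    {
        for (int i = 0; i < 3; i++)
        {
            Console.WriteLine("{0}: step {1}", label, i);
            Thread.Sleep(100);   // simulate some work
        }
    }

    static void Main()
    {
        Thread first = new Thread(Worker);
        Thread second = new Thread(Worker);
        first.Start("worker-1");
        second.Start("worker-2");
        first.Join();
        second.Join();
    }
}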

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect and handle errors at runtime. In C#.NET, we use try…catch…finally statements to create exception handlers. Using try…catch…finally statements, we can create robust and effective exception handlers that improve the robustness of our application.
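
The try…catch…finally pattern described above is illustrated by the short sketch below, using a deliberately invalid parse as the error source.

using System;

class ExceptionDemo
{
    static void Main()
    {
        try
        {
            int value = int.Parse("not a number");   // throws FormatException
            Console.WriteLine(value);
        }
        catch (FormatException ex)
        {
            Console.WriteLine("Handled: " + ex.Message);
        }
        finally
        {
            Console.WriteLine("Cleanup always runs here.");
        }
    }
}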

6.5 THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.

OBJECTIVES OF .NET FRAMEWORK

1. To provide a consistent object-oriented programming environment, whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.

3. To eliminate performance problems.

There are different types of application, such as Windows-based applications and Web-based applications. 

6.6 FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services

An SQL-SERVER database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

6.7 TABLE:

A database is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view. We can specify what kind of data the table will hold.

Datasheet View

To add, edit, or analyse the data itself, we work in the table's datasheet view mode.

QUERY:

A query is a question that is asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (which can be edited) or a snapshot (which cannot be edited). Each time we run the query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.

CHAPTER 7

APPENDIX

7.1 SAMPLE SOURCE CODE

7.2 SAMPLE OUTPUT

CHAPTER 8

8.0 CONCLUSION

We have presented a decentralized access control technique with anonymous authentication, which provides user revocation and prevents replay attacks. The cloud does not know the identity of the user who stores information, but only verifies the user’s credentials. Key distribution is done in a decentralized way. One limitation is that the cloud knows the access policy for each record stored in the cloud. In future, we would like to hide the attributes and access policy of a user.

Congestion Aware Routing in Nonlinear Elastic Optical Networks

Sensor networks are composed of small sensing devices that have the capability to take various measurements of their environment such as temperature, sound, light etc. These devices are equipped with a processor and wireless communication antenna and are powered with a battery. Upon deployment in a field, they form an ad hoc network and communicate with each other and with data processing centers. The routing protocol in such networks has an important effect on congestion, especially with increasing sizes of the deployments. Congestion becomes worse when a particular area is generating most of the data. This may occur in some deployments when sensors in one area of interest are requested to gather and transmit data at a higher rate than others.

We believe that all data generated in a sensor network may not be equally important; some may have a low priority while others have a higher priority, and hence differentiated service must be provided to these data. In such a scenario, routing dynamics can lead to congestion on specific paths. Since congestion is a self-compounding problem, these paths are usually close to each other, which leads to an entire zone in the network facing congestion. We refer to this zone as the congestion zone or conzone.

Congestion can adversely affect the network in two ways.

First, it can lead to indiscriminate dropping of data, i.e., some packets of high priority might be dropped while others of lower priority are delivered. This happens because sensor nodes are very simple devices and do not have the capability to differentiate packets (i.e., they do not have multiple queues for different priority levels). Second, congestion can cause an increase in energy consumption as links become saturated. This can lead to depletion of the limited energy available in the sensor nodes in the congested area.

In this paper, we examine data delivery issues in the presence of congestion in wireless sensor networks. We propose the use of data prioritization and a simple priority aware routing protocol, Congestion Aware Routing (CAR). CAR does not use multiple priority queues, a QoS aware MAC layer or specialized scheduling algorithms. The first step in this protocol is to dynamically discover the conzone. The second step is to enforce differentiated routing; high priority packets are routed in the conzone. Low priority packets generated outside the conzone stay outside while those generated within the conzone are routed out. In effect, conzone nodes are dedicated to serving high priority data which will enable them to provide better service and lengthen their lifetime.

Our extensive simulations show that CAR leads to a significant increase in the successful packet delivery ratio of high priority data to the sink and a clear decrease in the average delay. CAR also provides low jitter, which makes it able to support real-time multimedia applications. It also reduces the energy consumed in the nodes that lie on the conzone, which leads to an increase in connectivity lifetime. We now consider the network formation process. Once the sink node discovers its surrounding neighbors, it broadcasts a “Build Mesh” message asking all nodes in the network to organize as a mesh. In that message the sink provides its ID and zero as its depth. Once a neighboring node hears this message, it will check whether it has already joined the routing network (i.e., whether it knows its depth); if not, it sets its depth to one plus the depth in the message received and sets the source of the message as a parent.

Each node then rebroadcasts the Build Mesh message, with its own ID and depth, to its neighbors. If a node is already a member of the network, then it will check the depth in the message, and if that depth is less than its own, the source of the message is added as a parent. In that case, the message is not rebroadcast. In this fashion, the Build Mesh message is propagated down the network until all nodes become part of this routing structure. Similar to TAG, the Build Mesh message can be periodically broadcast to maintain the topology and adapt to changes caused by the failure, addition, or mobility of nodes.
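
To make the mesh-formation procedure concrete, the following is a minimal sketch of how a node might process a received Build Mesh message, using simple Node and BuildMeshMessage types defined here purely for illustration; message transport, duplicate suppression, and the periodic rebroadcast are omitted. The boolean return value indicates whether the node should rebroadcast the message with its own ID and depth, mirroring the flooding behaviour described above.

using System;
using System.Collections.Generic;

class BuildMeshMessage
{
    public int SenderId;
    public int Depth;   // sender's depth; the sink broadcasts a depth of zero
}

class Node
{
    public int Id;
    public int? Depth;                      // null until the node joins the mesh
    public List<int> Parents = new List<int>();

    // Returns true if the message should be rebroadcast with this node's own ID and depth.
    public bool OnBuildMesh(BuildMeshMessage msg)
    {
        if (Depth == null)
        {
            // Not yet part of the routing network: join one level below the sender.
            Depth = msg.Depth + 1;
            Parents.Add(msg.SenderId);
            return true;                    // rebroadcast so deeper nodes can join
        }
        if (msg.Depth < Depth.Value)
        {
            // Already joined, but the sender is closer to the sink: add it as a parent.
            Parents.Add(msg.SenderId);
        }
        return false;                       // already a member; do not rebroadcast
    }
}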

1.3 SCOPE OF THE PROJECT:

Design goals of the Congestion Aware Routing (CAR) protocol for sensor networks are to provide high priority data with better service quality compared to other routing schemes. These include higher delivery ratios, lower delays, and lower jitter to support real-time data. We also aim at decreasing energy consumption, which will lengthen the lifetime of the network. To achieve these goals, CAR divides the network into two regions: the congestion zone (conzone) and the remaining part of the network. While high priority data is routed through the conzone, low priority data is routed using the other nodes. Low priority data that originates outside the conzone is routed exclusively on off-conzone nodes using regular routing protocols, while low priority data that originates inside the conzone is efficiently routed out of the conzone.

  1. LITERATURE SURVEY

ELASTIC OPTICAL NETWORKING: A NEW DAWN FOR THE OPTICAL LAYER?

PUBLICATION: O. Gerstel, M. Jinno, A. Lord, and S. J. B. Yoo,  IEEE Commun. Mag., vol. 50, no. 2, pp. s12–s20, Feb. 2012.

Optical networks are undergoing significant changes, fueled by the exponential growth of traffic due to multimedia services and by the increased uncertainty in predicting the sources of this traffic due to the ever changing models of content providers over the Internet. The change has already begun: simple on-off modulation of signals, which was adequate for bit rates up to 10 Gb/s, has given way to much more sophisticated modulation schemes for 100 Gb/s and beyond. The next bottleneck is the 10-year-old division of the optical spectrum into a fixed “wavelength grid,” which will no longer work for 400 Gb/s and above, heralding the need for a more flexible grid. Once both transceivers and switches become flexible, a whole new elastic optical networking paradigm is born. In this article we describe the drivers, building blocks, architecture, and enabling technologies for this new paradigm, as well as early standardization efforts.

MODELING THE ROUTING AND SPECTRUM ALLOCATION PROBLEM FOR FLEXGRID OPTICAL NETWORKS

PUBLICATION: L. Velasco, M. Klinkowski, M. Ruiz, and J. Comellas, Photon. Netw. Commun., vol. 24, no. 3, pp. 177–186, 2012.

Flexgrid optical networks are attracting huge interest due to their higher spectrum efficiency and flexibility in comparison with traditional wavelength-switched optical networks based on wavelength division multiplexing technology. To properly analyze, design, plan, and operate flexible and elastic networks, efficient methods are required for the routing and spectrum allocation (RSA) problem. Specifically, the allocated spectral resources must be, in the absence of spectrum converters, the same along the links in the route (the continuity constraint) and contiguous in the spectrum (the contiguity constraint). In light of the fact that the contiguity constraint adds huge complexity to the RSA problem, we introduce the concept of channels for the representation of contiguous spectral resources. In this paper, we show that the use of a pre-computed set of channels allows considerably reducing the problem complexity. In our study, we address an off-line RSA problem in which enough spectrum needs to be allocated for each demand of a given traffic matrix. To this end, we present novel integer linear programming (ILP) formulations of RSA that are based on the assignment of channels. The evaluation results reveal that the proposed approach allows solving the RSA problem much more efficiently than previously proposed ILP-based methods, and it can be applied even to realistic problem instances, contrary to previous ILP formulations.

DISTANCE-ADAPTIVE SPECTRUM RESOURCE ALLOCATION IN SPECTRUM-SLICED ELASTIC OPTICAL PATH NETWORK

PUBLICATION: M. Jinno et al., IEEE Commun. Mag., vol. 48, no. 8, pp. 138–145, Aug. 2010.

The rigid nature of current wavelength-routed optical networks brings limitations on network utilization efficiency. One limitation originates from mismatch of granularities between the client layer and the wavelength layer. The recently proposed spectrum-sliced elastic optical path network (SLICE) is expected to mitigate this problem by adaptively allocating spectral resources according to client traffic demands. This article discusses another limitation of the current optical networks associated with worst case design in terms of transmission performance. In order to address this problem, we present a concept of a novel adaptation scheme in SLICE called distance-adaptive spectrum resource allocation. In the presented scheme the minimum necessary spectral resource is adaptively allocated according to the end-to-end physical condition of an optical path. Modulation format and optical filter width are used as parameters to determine the necessary spectral resources to be allocated for an optical path. Evaluation of network utilization efficiency shows that distance-adaptive SLICE can save more than 45 percent of required spectrum resources for a 12-node ring network. Finally, we introduce the concept of a frequency slot to extend the current frequency grid standard, and discuss possible spectral resource designation schemes.

QOT PREDICTION FOR CORE NETWORKS WITH UNCOMPENSATED COHERENT TRANSMISSION

PUBLICATION: M. Angelou, P. N. Ji, I. Tomkos, and T. Wang, in Proc. OECC/PS Jul. 2013, pp. 1–2, paper TuQ3-4.

We propose a comprehensive QoT prediction tool based on fast analytical modeling for on-the-fly signal assessments in networks with uncompensated coherent systems and confirm its superiority in reducing over-engineering compared to system-reach methods.

CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:

The problem with existing solutions: in this scenario, where nodes in the network send all high priority data to a single sink, tree-based routing is the most appropriate. In this routing scheme, a spanning tree is built with the high priority sink as its root. The setup of such a tree uses controlled flooding from the sink to all nodes in the network. Low priority data, on the other hand, does not need to follow the same routing scheme. This is true because there may be multiple low priority sinks and a node might send data to any of them. For example, temperature readings might be forwarded to one sink while motion detection measurements go to another sink. Tree-based routing schemes suffer from congestion, especially if the number of messages generated in the leaves is high.

This problem becomes worse when we have a mixture of high priority and low priority traffic traveling through the network. This is because low priority messages will cross the tree that is formed to route high priority data in order to reach their destinations. Therefore, even when the rate of high priority data is relatively low, the background noise created by low priority traffic will create a congestion zone that spans the deployment from the critical area to the high priority sink. Nodes in this zone become overwhelmed and indiscriminately drop high and low priority messages. These nodes also consume more energy compared to other nodes in the network and hence die sooner. This will lead to only sub-optimal paths being available to route high priority data, or a total loss of connectivity from the critical area to the sink, even though nodes outside the congested zone remain available, because a single routing scheme is used to route both types of traffic.

2.1.1 DISADVANTAGES:

In such a scenario, routing dynamics can lead to congestion on specific paths. Since congestion is a self-compounding problem, these paths are usually close to each other, which leads to an entire zone in the network facing congestion. Congestion can adversely affect the network in two ways. First, it can lead to indiscriminate dropping of data, i.e., some packets of high priority might be dropped while others of lower priority are delivered. This happens because sensor nodes are very simple devices and do not have the capability to differentiate packets (i.e., they do not have multiple queues for different priority levels). Second, congestion can cause an increase in energy consumption as links become saturated. This can lead to depletion of the limited energy available in the sensor nodes in the congested area.

2.2 PROPOSED SYSTEM:

We proposed Congestion Aware Routing (CAR) which is a simple routing protocol that uses data prioritization and treats packets according to their priorities. We defined a conzone as the set of sensors that will be required to route high priority packets from the data sources to the sink.

We presented algorithms to build a high priority routing mesh, dynamically discover and configure conzones, and perform differentiated routing. Our solutions do not require active queue management, maintenance of multiple queues or scheduling algorithms, or the use of specialized MAC protocols.
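
As a rough sketch of the differentiated routing rule described above, the fragment below chooses a next hop based on packet priority and on whether the current node lies on the conzone; the Neighbor type, the OnConzone flag, and the depth-based tie-breaking are hypothetical placeholders, not the actual CAR implementation.

using System;
using System.Collections.Generic;
using System.Linq;

enum Priority { Low, High }

class Neighbor
{
    public int Id;
    public bool OnConzone;   // set during conzone discovery
    public int Depth;        // hops to the high priority sink on the routing mesh
}

static class DifferentiatedRouting
{
    // Pick the next hop for a packet at a node with the given neighbors.
    public static Neighbor SelectNextHop(List<Neighbor> neighbors, Priority priority, bool nodeOnConzone)
    {
        if (priority == Priority.High)
        {
            // High priority traffic stays on the conzone and moves toward the sink.
            return neighbors.Where(n => n.OnConzone).OrderBy(n => n.Depth).FirstOrDefault();
        }
        if (nodeOnConzone)
        {
            // Low priority traffic generated inside the conzone is routed out of it.
            return neighbors.Where(n => !n.OnConzone).OrderBy(n => n.Depth).FirstOrDefault()
                   ?? neighbors.OrderBy(n => n.Depth).FirstOrDefault();
        }
        // Low priority traffic outside the conzone avoids conzone nodes altogether.
        return neighbors.Where(n => !n.OnConzone).OrderBy(n => n.Depth).FirstOrDefault();
    }
}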

The proposed algorithm for RMSA in a nonlinear elastic network utilizing Nyquist pulse shaping is as follows (a code sketch of these steps is given after the list):

  1. Determine the optimum signal power spectral density given the fiber and amplifier parameters.
  2. For a pair of nodes, select the shortest path that avoids the link with the highest spectral usage (determined by measuring the total optical power, which is proportional to spectral usage).
  3. For this path, determine the total number of amplifier spans (100 km herein) in order to determine the received signal-to-noise ratio (SNR).
  4. For this SNR, determine the maximum net spectral efficiency (NSE) based on the known relationship between SNR and NSE for a range of polarization division multiplexed formats with Nyquist spectra, where variable rate FEC is also included.
  5. Finally, determine the gross symbol rate and assign spectrum to serve the demand between the two nodes.

We showed that with the inclusion of small playout buffers at the sink, CAR-based routing is suitable for delivering real-time traffic, such as video, over a wide range of conditions.
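
The sketch below expresses the five RMSA steps above as code. All of the helper functions (the power spectral density, path selection, SNR and spectral-efficiency models, and the spectrum assignment itself) are stubs standing in for models that are not specified here, so this is an outline of the control flow rather than a working planner.

using System;

class RmsaSketch
{
    const double SpanLengthKm = 100.0;

    static void Main()
    {
        ServeDemand(0, 5, 100.0);   // e.g. a single 100 GbE demand between two node indices
    }

    // Steps 1-5 of the RMSA procedure for a single demand between two nodes.
    static void ServeDemand(int source, int destination, double demandGbps)
    {
        // Step 1: optimum signal power spectral density from the fiber and amplifier parameters (stub).
        double optimumPsd = ComputeOptimumPsd();

        // Step 2: shortest path that avoids the link with the highest spectral usage (stubbed path model).
        double pathLengthKm = SelectLeastCongestedShortestPath(source, destination);

        // Step 3: the number of amplifier spans on the path determines the received SNR.
        int spans = (int)Math.Ceiling(pathLengthKm / SpanLengthKm);
        double snr = EstimateSnr(optimumPsd, spans);

        // Step 4: maximum net spectral efficiency for this SNR (modulation format plus variable-rate FEC).
        double netSpectralEfficiency = MaxNetSpectralEfficiency(snr);

        // Step 5: gross symbol rate and spectrum assignment for the demand.
        double symbolRateGbaud = demandGbps / netSpectralEfficiency;
        AssignSpectrum(source, destination, symbolRateGbaud);
    }

    // The stubs below stand in for the physical-layer and network models assumed by the algorithm.
    static double ComputeOptimumPsd() { return 1.0; }
    static double SelectLeastCongestedShortestPath(int source, int destination) { return 800.0; }
    static double EstimateSnr(double psd, int spans) { return Math.Max(1.0, 20.0 - spans); }
    static double MaxNetSpectralEfficiency(double snr) { return Math.Max(1.0, snr / 5.0); }
    static void AssignSpectrum(int source, int destination, double symbolRateGbaud)
    {
        Console.WriteLine("Assign spectrum for a {0:F1} GBd channel between nodes {1} and {2}.", symbolRateGbaud, source, destination);
    }
}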

2.2.1 ADVANTAGES:

  • High priority data delivery is assured without loss.
  • Conzone (congestion zone) discovery is an overhead.
  • Low priority data is often dropped.
  • Low priority data delivery is also assured along with high priority data; the channel is virtually divided between the two priorities.
  • Still, low priority data is often dropped.
  • Low priority data delivery is assured to the maximum extent.
  • The burden on intermediate nodes for conzone discovery is decreased.
  • The request and acknowledgement traffic is reduced in this method.
  • Low priority data has to travel a longer path that has less congestion.
  • On the longer path, all the sensor nodes have to remain active, which increases battery consumption.

2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENT:

  • Processor                             –    Pentium IV
  • Speed                                 –    1.1 GHz
  • RAM                                   –    256 MB (min)
  • Hard Disk                             –    20 GB
  • Floppy Drive                          –    1.44 MB
  • Keyboard                              –    Standard Windows Keyboard
  • Mouse                                 –    Two or Three Button Mouse
  • Monitor                               –    SVGA

 

2.3.2 SOFTWARE REQUIREMENTS:

  • Operating System                   :           Windows XP
  • Front End                                :           Microsoft Visual Studio .NET 2008
  • Document                               :           MS-Office 2007

CHAPTER 3

3.0 SYSTEM DESIGN

ARCHITECTURE DIAGRAM / UML DIAGRAM / DATA FLOW DIAGRAM:

  • The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
  • The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
  • DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  • DFD is also known as bubble chart. A DFD may be used to represent a system at any level of abstraction. DFD may be partitioned into levels that represent increasing information flow and functional detail.

NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities

DATA SOURCE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures, or devices that produce data; the physical component itself is not identified.

 

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

  1. All processes must have at least one data flow in and one data flow out.
  2. All processes should modify the incoming data, producing new forms of outgoing data.
  3. Each data store must be involved with at least one data flow.
  4. Each external entity must be involved with at least one data flow.
  5. A data flow must be attached to at least one process.

3.1 ARCHITECTURE DIAGRAM:

CHAPTER 4

4.0 IMPLEMENTATION:

4.1 ALGORITHM

4.2 MODULES:

SERVER CLIENT MODULE:

FIBER NONLINEARITIES:

DISCOVERY FROM SINK:

NETWORK BLOCKING PROBABILITY (NBP):

ROUTING ALGORITHMS (CAR):

4.3 MODULE DESCRIPTION:

CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase and business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company.  For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are      

  • ECONOMICAL FEASIBILITY
  • TECHNICAL FEASIBILITY
  • SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:                  

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

5.1.2 TECHNICAL FEASIBILITY:   

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must therefore have modest requirements, so that only minimal or no changes are required to implement it.

5.1.3 SOCIAL FEASIBILITY:  

The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

5.2 SYSTEM TESTING:

Testing is a process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until months later. This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best program is worthless if it does not produce correct outputs.

5.2.1 UNIT TESTING:

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.

UNIT TESTING:

Description | Expected result
Test for application window properties. | All the properties of the windows are to be properly aligned and displayed.
Test for mouse operations. | All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.
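
As an illustration of how a single program unit can be checked in isolation, the following hedged sketch uses plain Debug.Assert checks rather than a particular test framework; the Calculator class is a stand-in for a real unit of this project.

using System;
using System.Diagnostics;

class Calculator
{
    public static int Add(int a, int b) { return a + b; }
}

class CalculatorUnitTest
{
    static void Main()
    {
        // Each assertion exercises one expectation about the unit under test.
        // Debug.Assert is active in Debug builds.
        Debug.Assert(Calculator.Add(2, 3) == 5, "2 + 3 should be 5");
        Debug.Assert(Calculator.Add(-1, 1) == 0, "-1 + 1 should be 0");
        Console.WriteLine("All unit checks passed.");
    }
}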

5.2.2 FUNCTIONAL TESTING:

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.

FUNCTIONAL TESTING:

Description | Expected result
Test for all modules. | All peers should communicate in the group.
Test for the various peers in the distributed network framework; it should display all users available in the group. | The result after execution should be accurate.

5.2.3 NON-FUNCTIONAL TESTING:

Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

  • Load testing
  • Performance testing
  • Usability testing
  • Reliability testing
  • Security testing

5.2.4 LOAD TESTING:

An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. The load can be real; that is, the system can be put under real usage by connecting actual telephone users to it, and these users then generate the test input data for the system test.

Load Testing

Description | Expected result
It is necessary to ascertain that the application behaves correctly under load when a 'Server busy' response is received. | The application should designate another active node as the server.

5.2.5 PERFORMANCE TESTING:

Performance tests are used to determine the broadly defined performance of the software system, such as the execution time of various parts of the code, response time, and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.

PERFORMANCE TESTING:

Description | Expected result
This test is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management. | The application should handle large input values and produce accurate results within the expected time.

5.2.6 RELIABILITY TESTING:

Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and this is what reliability testing verifies. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. Reliability testing forms part of the work of the software quality control team.

RELIABILITY TESTING:

Description | Expected result
This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application. | In case of server failure, an alternate server should take over the job.

5.2.7 SECURITY TESTING:

Security testing evaluates system characteristics that relate to the availability, integrity and confidentiality of the system data and services. Users/Clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.

SECURITY TESTING:

Description | Expected result
Check that the user identification is authenticated. | In case of failure, the user should not be connected to the framework.
Check whether group keys in a tree are shared by all peers. | All peers in the same group should know the group key.

5.2.8 WHITE BOX TESTING:

White box testing, sometimes called glass-box testing, is a test-case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software under test.

WHITE BOX TESTING:

Description | Expected result
Exercise all logical decisions on their true and false sides. | All the logical decisions must be valid.
Execute all loops at their boundaries and within their operational bounds. | All the loops must be finite.
Exercise internal data structures to ensure their validity. | All the data structures must be valid.

5.2.9 BLACK BOX TESTING:

Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques; rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the software, when stimulated with test inputs, should produce the desired results.

BLACK BOX TESTING:

Description | Expected result
To check for incorrect or missing functions. | All the functions must be valid.
To check for interface errors. | The entire interface must function normally.
To check for errors in data structures or external database access. | Database update and retrieval must work correctly.
To check for initialization and termination errors. | All functions and data structures must be initialized properly and terminated normally.

All of the above system testing strategies are carried out during development, since documentation and institutionalization of the proposed goals and related policies are essential.

CHAPTER 6

6.0 SOFTWARE SPECIFICATION:

6.1 FEATURES OF .NET:

Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There’s no language barrier with .NET: there are numerous languages available to the developer including Managed C++, C#, Visual Basic and Java Script.

The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.

“.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET Server, for instance) and services (like Passport, .NET My Services, and so on).

6.2 THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. The most important features are

  • Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
  • Memory management, notably including garbage collection.
  • Checking and enforcing security restrictions on the running code.
  • Loading and executing programs, with version control and other such features.

The following features of the .NET framework are also worth describing:

Managed Code

The code that targets .NET, and which contains certain extra information – “metadata” – to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.

Managed Data

With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you’re using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications – data that doesn’t get garbage collected but instead is looked after by unmanaged code.

Common Type System

The CLR uses the Common Type System (CTS) to strictly enforce type safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.

Common Language Specification

The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.

6.3 THE CLASS LIBRARY

.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.

The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.

6.4 LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.

Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic also now supports structured exception handling, custom attributes and also supports multi-threading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.

Active State has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for Active State’s Perl Dev Kit.

Other languages for which .NET compilers are available include

  • FORTRAN
  • COBOL
  • Eiffel          

Fig. 1: The .NET Framework stack – ASP.NET, XML Web Services, and Windows Forms sit on top of the Base Class Libraries, which run on the Common Language Runtime above the Operating System.

C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components that are created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.

CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET this role is played by the finalizer (destructor). The finalizer is used to complete the tasks that must be performed when an object is destroyed; it cannot be called directly, but is invoked automatically by the garbage collector when the object is destroyed.

GARBAGE COLLECTION

Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use.

In C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.
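
The behaviour described above can be seen in the short sketch below: an object becomes unreachable when the method that created it returns, and its memory is reclaimed when the garbage collector runs. Forcing a collection with GC.Collect is done here purely for demonstration; production code normally leaves collection to the runtime.

using System;

class Temporary
{
    ~Temporary()
    {
        Console.WriteLine("Temporary object finalized and its memory reclaimed.");
    }
}

class GarbageCollectionDemo
{
    static void Main()
    {
        CreateGarbage();                 // the object created inside is unreachable afterwards
        GC.Collect();                    // demonstration only
        GC.WaitForPendingFinalizers();
        Console.WriteLine("Collection finished.");
    }

    static void CreateGarbage()
    {
        Temporary t = new Temporary();   // no reference survives this method
        Console.WriteLine("Created " + t.GetType().Name);
    }
}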

OVERLOADING

Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.

MULTITHREADING:

C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect and handle errors at runtime. In C#.NET, we use try…catch…finally statements to create exception handlers. Using try…catch…finally statements, we can create robust and effective exception handlers that improve the robustness of our application.

6.5 THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.

OBJECTIVES OF .NET FRAMEWORK

1. To provide a consistent object-oriented programming environment, whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.

3. To eliminate performance problems.

There are different types of application, such as Windows-based applications and Web-based applications. 

6.6 FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services

An SQL-SERVER database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

6.7 TABLE:

A database is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view. We can specify what kind of data the table will hold.

Datasheet View

To add, edit, or analyse the data itself, we work in the table's datasheet view mode.

QUERY:

A query is a question that is asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (which can be edited) or a snapshot (which cannot be edited). Each time we run the query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
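
To relate the discussion of queries to the SQL Server back end named in this chapter, the following hedged sketch runs a simple parameterized SELECT through ADO.NET; the connection string, the Nodes table, and its columns are placeholders that would need to be adapted to the actual database schema.

using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        // Placeholder connection string and query; adjust to the real database.
        string connectionString = "Data Source=.;Initial Catalog=ProjectDb;Integrated Security=True";
        string sql = "SELECT NodeId, Depth FROM Nodes WHERE Depth <= @maxDepth";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@maxDepth", 3);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} {1}", reader["NodeId"], reader["Depth"]);
                }
            }
        }
    }
}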

CHAPTER 7

APPENDIX

7.1 SAMPLE SOURCE CODE

7.2 SAMPLE OUTPUT

CHAPTER 8

8.0 CONCLUSION:

Congestion aware routing has been investigated in nonlinear elastic optical networks and shown to be effective for the reference NSFNET topology. We observe that the network blocking probability (NBP) follows a generalized extreme value distribution, allowing robust estimates of the load for a given NBP to be obtained. When NSFNET is sequentially loaded with 100 GbE demands, the proposed algorithm with a flexgrid allows the network to support 1744 demands, compared to 328 demands using a fixed 50 GHz grid with shortest path routing, at an NBP of 1%. The congestion aware routing algorithms investigated resulted in longer average paths, with 5% of all routes exceeding the maximum shortest path length, in order to increase the overall network capacity.