Navigation between different
screens and apps is a core part of the user experience. The following
principles set a baseline for a consistent and intuitive user experience across
apps. The Navigation component is designed to implement these principles by
default, ensuring that users can apply the same heuristics and patterns in
navigation as they move between apps.
Note: Even if you aren’t using
the Navigation component in your project, your app should follow these design
principles.
Fixed start destination
Every app you build has a fixed
start destination. This is the first screen the user sees when they launch your
app from the launcher. This destination is also the last screen the user sees
when they return to the launcher after pressing the Back button. Let’s take a
look at the Sunflower app as an example.
When launching the Sunflower app
from the launcher, the first screen that a user sees is the List Screen, the
list of plants in their garden. This is also the last screen they see before
exiting the app. If they press the Back button from the list screen, they
navigate back to the launcher.
Note: An app might have a
one-time setup or series of login screens. These conditional screens should not
be considered start destinations because users see these screens only in
certain cases.
Navigation state is represented
as a stack of destinations
When your app is first launched,
a new task is created for the user, and the app displays its start destination.
This becomes the base destination of what is known as the back stack and is the
basis for your app’s navigation state. The top of the stack is the current
screen, and the previous destinations in the stack represent the history of
where you’ve been. The back stack always has the start destination of the app
at the bottom of the stack.
Operations that change the back
stack always operate on the top of the stack, either by pushing a new
destination onto the top of the stack or popping the top-most destination off
the stack. Navigating to a destination pushes that destination on top of the
stack.
The Navigation component manages
all of your back stack ordering for you, though you can also choose to manage
the back stack yourself.
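To make the stack semantics concrete, here is a minimal, framework-agnostic sketch. It is written in C# purely for consistency with the code samples later in this document and is not the Navigation component API; the destination names mirror the Sunflower example.

using System;
using System.Collections.Generic;

class BackStackDemo
{
    static void Main()
    {
        // The start destination is the base of the back stack.
        var backStack = new Stack<string>();
        backStack.Push("MyGarden");             // start destination

        // Navigating to a destination pushes it onto the top of the stack.
        backStack.Push("PlantDetail(apple)");

        // The top of the stack is the current screen.
        Console.WriteLine("Current screen: " + backStack.Peek());

        // Pressing Back pops the top-most destination off the stack.
        backStack.Pop();
        Console.WriteLine("After Back: " + backStack.Peek());

        // Popping the start destination itself would correspond to leaving the app.
    }
}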
Up and Back are identical within
your app’s task
The Back button appears in the
system navigation bar at the bottom of the screen and is used to navigate in
reverse-chronological order through the history of screens the user has
recently worked with. When you press the Back button, the current destination
is popped off the top of the back stack, and you then navigate to the previous
destination.
The Up button appears in the app
bar at the top of the screen. Within your app’s task, the Up and Back buttons
behave identically.
The Up button never exits your
app
If a user is at the app’s start
destination, then the Up button does not appear, because the Up button never
exits the app. The Back button, however, is shown and does exit the app.
When your app is launched using a
deep link on another app’s task, Up transitions users back to your app’s task
and through a simulated back stack and not to the app that triggered the deep
link. The Back button, however, does take you back to the other app.
Deep linking simulates manual
navigation
Whether deep linking or manually
navigating to a specific destination, you can use the Up button to navigate
through destinations back to the start destination.
When deep linking to a
destination within your app’s task, any existing back stack for your app’s task
is removed and replaced with the deep-linked back stack.
Using the Sunflower app again as
an example, let’s assume that the user had previously launched the app from the
launcher screen and navigated to the detail screen for an apple. Looking at the
Recents screen would indicate that a task exists with the topmost screen being the detail screen for the apple.
At this point, the user can tap
the Home button to put the app in the background. Next, let’s say this app has
a deep link feature that allows users to launch directly into a specific plant
detail screen by name. Opening the app via this deep link completely replaces
the current Sunflower back stack shown in figure 3 with a new back stack, as
shown in figure 4:
Figure 4: Following a deep link
replaces the existing back stack for the Sunflower app.
Notice that the Sunflower back
stack is replaced by a synthetic back stack with the avocado detail screen at
the top. The My Garden screen, which is the start destination, was also added
to the back stack. This is important because the synthetic back stack must be
realistic. It should match a back stack that could have been achieved by
organically navigating through the app. The original Sunflower back stack is
gone, including the app's knowledge that the user was on the apple detail screen before.
The Navigation component supports
deep linking and recreates a realistic back stack for you when linking to any
destination in your navigation graph.
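The same toy model can illustrate how deep linking replaces the back stack with a synthetic one. Again, this is only a conceptual sketch in C#, not the Navigation component's deep-link API; the screen names follow the Sunflower example above.

using System;
using System.Collections.Generic;

class DeepLinkDemo
{
    static void Main()
    {
        // Existing task: the user organically navigated to the apple detail screen.
        var backStack = new Stack<string>();
        backStack.Push("MyGarden");
        backStack.Push("PlantDetail(apple)");

        // Following a deep link to the avocado detail screen removes the existing
        // stack and replaces it with a synthetic stack that could have been reached
        // organically: start destination at the bottom, deep-linked screen on top.
        backStack.Clear();
        backStack.Push("MyGarden");
        backStack.Push("PlantDetail(avocado)");

        // Up and Back now walk the synthetic stack; the apple detail screen is gone.
        Console.WriteLine(string.Join(" <- ", backStack));   // PlantDetail(avocado) <- MyGarden
    }
}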
PHP 5 introduces a number of new functions. Here is a list of them, grouped by extension:
Arrays:
array_combine() – Creates an
array by using one array for keys and another for its values
array_diff_uassoc() – Computes
the difference of arrays with additional index check which is performed by a
user supplied callback function
array_udiff() – Computes the
difference of arrays by using a callback function for data comparison
array_udiff_assoc() – Computes
the difference of arrays with additional index check. The data is compared by
using a callback function
array_udiff_uassoc() – Computes
the difference of arrays with additional index check. The data is compared by
using a callback function. The index check is done by a callback function also
array_walk_recursive() – Apply a
user function recursively to every member of an array
array_uintersect_assoc() –
Computes the intersection of arrays with additional index check. The data is
compared by using a callback function
array_uintersect_uassoc() –
Computes the intersection of arrays with additional index check. Both the data
and the indexes are compared by using separate callback functions
array_uintersect() – Computes the
intersection of arrays. The data is compared by using a callback function
InterBase:
ibase_affected_rows() – Return
the number of rows that were affected by the previous query
ibase_backup() – Initiates a
backup task in the service manager and returns immediately
ibase_commit_ret() – Commit a
transaction without closing it
ibase_db_info() – Request statistics
about a database
ibase_drop_db() – Drops a
database
ibase_errcode() – Return an error
code
ibase_free_event_handler() –
Cancels a registered event handler
ibase_gen_id() – Increments the
named generator and returns its new value
ibase_maintain_db() – Execute a
maintenance command on the database server
ibase_name_result() – Assigns a
name to a result set
ibase_num_params() – Return the
number of parameters in a prepared query
ibase_param_info() – Return
information about a parameter in a prepared query
ibase_restore() – Initiates a
restore task in the service manager and returns immediately
ibase_rollback_ret() – Rollback
transaction and retain the transaction context
ibase_server_info() – Request
statistics about a database server
ibase_service_attach() – Connect
to the service manager
ibase_service_detach() –
Disconnect from the service manager
ibase_set_event_handler() –
Register a callback function to be called when events are posted
ibase_wait_event() – Wait for an
event to be posted by the database
iconv:
iconv_mime_decode() – Decodes a
MIME header field
iconv_mime_decode_headers() –
Decodes multiple MIME header fields at once
iconv_mime_encode() – Composes a
MIME header field
iconv_strlen() – Returns the
character count of string
iconv_strpos() – Finds position
of first occurrence of a needle within a haystack
iconv_strrpos() – Finds the last
occurrence of a needle within a haystack
iconv_substr() – Cut out part of
a string
Streams:
stream_copy_to_stream() – Copies
data from one stream to another
stream_get_line() – Gets line
from stream resource up to a given delimiter
stream_socket_accept() – Accept a
connection on a socket created by stream_socket_server()
stream_socket_client() – Open
Internet or Unix domain socket connection
stream_socket_get_name() –
Retrieve the name of the local or remote sockets
stream_socket_recvfrom() –
Receives data from a socket, connected or not
stream_socket_sendto() – Sends a
message to a socket, whether it is connected or not
stream_socket_server() – Create an
Internet or Unix domain server socket
Date and time related:
idate() – Format a local
time/date as integer
date_sunset() – Time of sunset
for a given day and location
date_sunrise() – Time of sunrise
for a given day and location
time_nanosleep() – Delay for a
number of seconds and nanoseconds
Strings:
str_split() – Convert a string to
an array
strpbrk() – Search a string for
any of a set of characters
substr_compare() – Binary safe
optionally case insensitive comparison of two strings from an offset, up to
length characters
Other:
convert_uudecode() – decode a
uuencoded string
convert_uuencode() – uuencode a
string
curl_copy_handle() – Copy a cURL
handle along with all of its preferences
dba_key_split() – Splits a key in
string representation into array representation
dbase_get_header_info() – Get the
header info of a dBase database
dbx_fetch_row() – Fetches rows
from a query-result that had the DBX_RESULT_UNBUFFERED flag set
fbsql_set_password() – Change the
password for a given user
file_put_contents() – Write a
string to a file
ftp_alloc() – Allocates space for
a file to be uploaded
get_declared_interfaces() –
Returns an array of all declared interfaces
get_headers() – Fetches all the headers sent by the server in response to an HTTP request
headers_list() – Returns a list
of response headers sent (or ready to send)
The International PHP Conference is the world's first PHP conference and has stood for more than a decade for top-notch, pragmatic expertise in PHP and web technologies. At the IPC, internationally renowned experts from the PHP industry meet up with PHP users and developers from large and small companies. This is the place where concepts emerge and ideas are born – the IPC signifies knowledge transfer at the highest level.
All delegates of the International PHP Conference have, in addition to the PHP program, free access to the entire range of the International JavaScript Conference taking place at the same time.
Basic facts:
Date: October 21 – 25, 2019
Location: Holiday Inn Munich City
Centre, Munich
Highlights:
60+ best practice sessions
50+ international top speakers
PHPower: Hands-on Power Workshops
Expo with exciting exhibitors on October 22nd &
23rd
Conference Combo: Visit the International
JavaScript Conference for free
All inclusive: Changing buffets, snacks & refreshing
drinks
Android Studio 3.5 Beta 5 is now available in the Beta channel.
If you have Android Studio set up to receive updates on the Beta channel,
you can get the update by choosing Help > Check for
Updates (Android Studio > Check for Updates on macOS).
Fixed issues with predefined Android code styling
We fixed the underlying issues around applying the predefined Android code style for Java and XML, and it is now the default again for both IDE and project schemes. If you have local code style changes, those will be unaffected; you can reapply the defaults at any time by selecting Set from > Predefined Style > Android on the Code Style settings page. (Issue #131581006)
General fixes
This update also includes fixes for the following public
issues:
CoreIDE
Issue #133666019: New Image Asset
wizard (launcher / legacy) does not trim image to selected shape
Issue #131889243: Studio 3.5
deadlock (Kotlin resolve + databinding)
Issue #132367955: AS 3.5 Beta 1
assumes Databinding bindings are Views
Design Tools
Issue #133184665: Resource picker
doesn’t appear when adding an attribute using Declared Attribute + button
Dexer (D8)
Issue #118842646: Ability to
selectively suppress warnings during D8 desugaring
Gradle
Issue #132840182:
ClassNotFoundException on API 21 or 22 device.
Issue #133273847: Error: Duplicate
resources in gradle plugin 3.5.0-beta01 and 02
Layout Editor
Issue #132578769: ConstraintLayout
v2.0.0-beta1: Impossible to drop element on layout with data element defined
Issue #133789726: GoTo navigation
goes to the wrong property or doesn’t work
Issue #133225561: Completions does
not seem to work in a newly added attribute
Issue #134522901: Android Studio
full crash every time you undo widget rename
Issue #132323234: Long names don’t
fit in dropdown menus for attributes and can’t be distinguished
Issue #133526948: attributes
starting with “__removed” are showing up in the properties panel
Lint
Issue #131844902:
DefaultJavaEvaluator.getProject sometimes returning /media for
/media2/player/…MediaPlayer.java
Issue #111487505: Unnecessary
warning for Attribute ‘importantForAutofill’ is only used in API level 26 and
higher
Navigation
Issue #133280833: element can
only be included in application manifest
Run Debug
Issue #134515798: Improve error
reporting when ADB cannot be executed
Issue #131786506:
IndexNotReadyException in AndroidTestRunConfiguration.getRunnerFromManifest
Shrinker (R8)
Issue #132549918: Using
-keepparameternames has no effect
Issue #134304597: VerifyError:
kotlinx/coroutines/AbstractCoroutine at API 17, 18
Issue #135210786:
NoClassDefFoundError in runtime on API 19 and below when using AGP 3.5.0-beta04
Issue #134093979: Unsupported source
file type (META-INF/versions/9/module-info.class)
Issue #133686361: R8 1.5 issue with
Google play core library
Issue #134462736: R8 1.5.43
introduce again VerifyError
Issue #133215941: VerifyError with
Android Annotations
Issue #133457361:
AbstractMethodError when calling interface provided as Java 8 lambda with R8 on
Android Gradle Plugin 3.4.1
Issue #132953944: java.lang.VerifyError
at api19 and below
Issue #134838460: Add support for
keep option modifier `includecode`
For information on new features and changes in all preview builds of Android
Studio 3.5, see the Android Studio Preview release notes. For details of
bugs fixed in each preview release, see previous entries on this blog.
We greatly appreciate your bug reports, which help us to make Android Studio
better. If you encounter a problem, let us know by reporting a bug. Note
that you can also vote for an existing issue to indicate that you are
also affected by it.
Multihop cellular networks (MCNs) have drawn tremendous attention due to their high throughput and extensive coverage. However, three issues are still not well addressed. With the existence of relay stations (RSs), how to efficiently allocate frequency resources to relay links becomes a challenging design issue. For mobile stations (MSs) near the cell edge, cochannel interference (CCI) becomes severe, which significantly affects the network performance.
Furthermore, the
unbalanced user distribution will result in traffic congestion and inability to
guarantee quality of service (QoS). To address these problems, we propose a
quantitative study on adaptive resource allocation schemes by jointly
considering interference coordination (IC) and load balancing (LB) in MCNs.
In this paper, we
focus on the downlink of OFDMA-based MCNs with time division duplex (TDD) mode,
and analyze the characteristics of resource allocation according to IEEE
802.16j/m specification. We also design a novel frequency reuse scheme to
mitigate interference and maintain high spectral efficiency, and provide
practical LB-based handover mechanisms which can evenly distribute the traffic
and guarantee users’ QoS.
INTRODUCTION:
The future wireless cellular networks,
such as 3GPP advanced long term evolution (LTE-Advanced) and IEEE 802.16m
systems, will adopt orthogonal frequency division multiple access (OFDMA)
technology for multihop cellular networks (MCNs). OFDMA is regarded as the most
promising physical layer technology for the fourth generation (4G) wireless
networks. New relay strategies and technologies are proposed to provide
services with extended coverage and higher data rate. Fixed relay stations (RSs)
with fewer functionalities than base stations (BSs) can be deployed to overcome
poor channel conditions while maintaining low infrastructure cost.
Nevertheless, MCNs have inherent drawbacks; for example, extra radio resources are required on the relay links (BS-RS links). Therefore, well-designed radio
resource allocation schemes are crucial for MCNs to effectively exploit the
benefit of RSs, while overcoming the disadvantages.
Since RSs always utilize the same
spectrum as MSs or BSs, cochannel interference (CCI) will be closely related to
the radio resource allocation schemes in MCNs due to the intercell and
intracell frequency reuse. OFDMA systems should employ frequency planning for
better cell edge performance and the ease of interference management. Traditional
single-hop cellular networks (SCNs) typically employ the frequency reuse pattern
with factor of 3 or 7 to reduce CCI, which results in low spectral efficiency.
As we all know, high data rate is one of the desired features of the future
cellular networks. It requires a highly efficient utilization of the available
spectrum. Frequency reuse with factor of 1 is likely to be used in LTE-Advanced
and IEEE 802.16m systems, aiming at improving the spectral efficiency. However,
the CCI using this frequency planning causes severe performance degradation at
cell boundaries. According to the WiMAX Forum, the frequency reuse pattern can be denoted as N × S × K, which means that the network is divided into clusters of N cells (each cell in the cluster has a different frequency band), with S sectors and K different frequency bands per cell. According to these reuse patterns, all available spectrum is assigned to all sector-BSs in the 1 × 3 × 1 pattern, whereas each sector-BS uses only one third of the total frequency bands in the 1 × 3 × 3 pattern. The CCI level is higher in the former, whereas the spectral efficiency is lower in the latter. If 1 × 3 × 3 is used in MCNs, the spectral efficiency will be much lower because extra frequency resources have to be allocated to relay links. If 1 × 3 × 1 is used in MCNs, the frequency reuse scheme becomes even more important in a multicell scenario. Compared with BSs deployed at the cell center, RSs deployed at the cell edge cause serious interference because RSs are closer to the mobile stations (MSs) in the adjacent cells than those BSs are.
In the existing literature, there are several works on reducing CCI in MCNs. Several static resource allocation schemes with different partitions and reuse factors have been discussed, and the CCI of these schemes has been analyzed in a multicell scenario. A relay-based orthogonal frequency planning strategy has been proposed to improve cell edge performance. Fractional frequency reuse (FFR) has been extended to MCNs as a compromise solution that reduces CCI while keeping the sector frequency reuse factor at 1. The main idea of FFR is to adopt frequency reuse 1 × 3 × 1 at the cell center to maximize the network spectral efficiency while applying frequency reuse 1 × 3 × 3 at the cell edge to alleviate CCI; the minimum CCI is achieved by adjusting the transmission (Tx) power at BSs and RSs under orthogonal frequency resource allocation. The essence of these works is to use part of the frequency bands, kept orthogonal, at the cell edge and the remaining frequency bands at the cell center.
Moreover, the static frequency allocation schemes proposed in the aforementioned works fit uniform traffic distributions only. In reality, users are not evenly distributed among cells. Too many users accessing one station (BS or RS) yields load imbalance in MCNs. Such an imbalance could severely affect the performance of hot-spot areas, which may then fail to meet the users' quality of service (QoS) requirements. This is another major reason for system performance degradation. To guarantee users' QoS, therefore, load balancing (LB) should be adopted along with IC for MCNs.
LB has been widely studied in SCNs and heterogeneous networks (HetNets). For SCNs, resource allocation schemes have to work in conjunction with connection admission control (CAC) mechanisms, which determine, based on the available resources and the users' QoS, whether to admit an incoming connection to a particular cell, or to reject it in the current cell and switch the user to an adjacent, non-congested cell through a handover mechanism. Here, the corresponding handover is not executed because of a position change of the user, but because of the lack of resources in the original cell. Cell breathing and load-aware handover have been proposed as important LB methods; the idea is that if a cell is heavily congested, an adjacent non-congested cell may expand its coverage and accommodate more users by raising its transmission power. A scheme jointly considering IC and LB has also been designed to improve the weighted sum of data rates in multicell networks; the problem is NP-hard, and a local-improvement-based algorithm is developed to solve it. These approaches not only use higher transmission power at the adjacent cell stations, but also continually report a large amount of information about signal quality and traffic load in the surrounding cells to the mobile switching center (MSC) in order to calculate the best connection to a BS. Apparently, this increases the system overhead and management complexity. For HetNets, an integrated cellular and ad hoc relay (iCAR) system has been proposed, in which some users can be switched to adjacent cells through ad hoc RSs and the spare resources are then acquired by incoming users. However, this type of LB only works with HetNets.
HetNets intend to change the traditional system architecture of cellular networks, while MCNs only attempt to improve the performance of traditional cellular networks through the use of RSs. It is noticeable that MCNs differ from HetNets in the following characteristics:
1) RSs are important add-on communication facilities of cellular networks, which also share the same spectrum with BSs;
2) BSs and RSs are connected through wireless radio interfaces;
3) the users associated with an RS still need to access the BS ultimately, which may require two-hop transmissions to deliver data.
With the deployment of RSs in MCNs, more handover opportunities arise, leading to better resource management and performance gains. This paper focuses on how to switch connections from congested stations to non-congested stations and increase the available frequency resources for congested stations to achieve LB. In a cell, the traffic load information of RSs as well as the link qualities between RSs and MSs are reported to the BS by the RSs. The BS is directly responsible for performing the handover mechanisms in each sector. This method does not require collecting and processing all kinds of information for a group of cells, which reduces the complexity of the system implementation and guarantees QoS for users in hot spots.
The main contributions of this paper can
be summarized as follows: We provide a quantitative study on an adaptive resource
allocation scheme by jointly considering IC and LB in MCNs. We also present a
novel frequency reuse scheme to mitigate interference and maintain high
spectral efficiency, and propose practical LB-based handover mechanisms which
can evenly distribute the traffic load and guarantee users' QoS. Extensive simulations demonstrate that our proposed schemes can provide higher throughput and accommodate more QoS-guaranteed users than conventional SCNs can.
1.3
LITERATURE SURVEY
OPPORTUNITIES AND CHALLENGES IN OFDMA-BASED CELLULAR
RELAY NETWORKS: A RADIO RESOURCE MANAGEMENT PERSPECTIVE
PUBLICATION: M.
Salem, A. Adinoyi, H. Yanikomeroglu, and D. Falconer, IEEE Trans. Vehicular
Technology, vol. 59, no. 5, pp. 2496-2510, Jan. 2010.
The opportunities and
flexibility in relay networks and orthogonal frequency-division multiple access
(OFDMA) make the combination a suitable candidate network and air-interface
technology for providing reliable and ubiquitous high-data-rate coverage in
next-generation cellular networks. Advanced and intelligent radio resource
management (RRM) schemes are known to be crucial toward harnessing these opportunities
in future OFDMA-based relay-enhanced cellular networks. However, it is not very
clear how to address the new RRM challenges (such as enabling distributed
algorithms, intra-cell/inter-cell routing, intense and dynamic co-channel
interference (CCI), and feedback overhead) in such complex environments
comprising a plethora of relay stations (RSs) of different functionalities and
characteristics. Employment of conventional RRM schemes in such networks will be highly inefficient, if not infeasible. The next-generation networks are
required to meet the expectations of all wireless users, irrespective of their
locations. High-data-rate connectivity, mobility, and reliability, among other
features, are examples of these expectations. Therefore, fairness is a critical
performance aspect that has to be taken into account in the design of
prospective RRM schemes. This paper reviews some of the prominent challenges
involved in migrating from the conventional cellular architecture to the
relay-based type and discusses how intelligent RRM schemes can exploit the
opportunities in relay-enhanced OFDMA-based cellular networks. We identify the
role of multiantenna systems and explore the current approaches in literature
to extend the conventional schedulers to next-generation relay networks. This
paper also highlights the fairness aspect in such networks in the light of the
recent literature, provides some example fairness metrics, and compares the
performances of some representative algorithms.
INTERFERENCE COORDINATION IN COMPACT FREQUENCY REUSE FOR
MULTIHOP CELLULAR NETWORKS
PUBLICATION: Y.
Zhao, X. Fang, and Z. Zhao, IEICE Trans. Fundamentals of Electronics, Comm. and
Computer Sciences, vol. E93-A, no. 11, pp. 2312-2319, Nov. 2010.
Continuously
increasing the bandwidth to enhance the capacity is impractical because of
the scarcity of spectrum availability. Fortunately, on the basis of the
characteristics of the multihop cellular networks (MCNs), a new compact
frequency reuse scheme has been proposed to provide higher spectrum
utilization efficiency and larger capacity without increasing the network cost. Base stations (BSs) and relay stations (RSs) could transmit
simultaneously on the same frequency according to the compact frequency reuse
scheme. In this situation, however, mobile stations (MSs) near the coverage
boundary will suffer serious interference and their traffic quality can
hardly be guaranteed. In order to mitigate the interference while maintaining
high spectrum utilization efficiency, this paper introduces a fractional
frequency reuse (FFR) scheme into multihop cellular networks, in which the
principle of FFR scheme and characteristics of frequency resources
configurations are described, then the transmission (Tx) power consumption of
BS and RSs is analyzed. The proposed scheme can both meet the requirement of
high traffic load in future cellular system and maximize the benefit by
reducing the Tx power consumption. Numerical results demonstrate that the
proposed FFR in compact frequency reuse achieves higher cell coverage
probability and larger capacity with respect to the conventional schemes.
TECHNICAL SPECIFICATION GROUP RADIO ACCESS NETWORK;
PHYSICAL LAYER ASPECTS FOR EVOLVED UNIVERSAL TERRESTRIAL RADIO ACCESS (UTRA)
The justification of the study item was that, with enhancements such as HSDPA and Enhanced Uplink, the 3GPP radio-access technology will be highly competitive for several years. However, to ensure competitiveness over an even longer time frame, i.e. for the next 10 years and beyond, a long-term evolution of the 3GPP radio-access technology needs to be considered. Important parts of such a long-term evolution include reduced latency, higher user data rates, improved system capacity and coverage, and reduced cost for the operator. In order to achieve this, an evolution of the radio interface as well as the radio network architecture should be considered. Considering the desire for even higher data rates, and also taking into account future additional 3G spectrum allocations, the long-term 3GPP evolution should include an evolution towards support for wider transmission bandwidths than 5 MHz. At the same time, support for transmission bandwidths of 5 MHz and less should be investigated in order to allow for more flexibility in the frequency bands in which the system may be deployed.
CHAPTER 2
2.0
SYSTEM ANALYSIS
2.1
EXISTING SYSTEM:
In the existing literature, there are several works on reducing CCI in MCNs. Several static resource allocation schemes with different partitions and reuse factors have been discussed, and the CCI of these schemes has been analyzed in a multicell scenario. A relay-based orthogonal frequency planning strategy has been proposed to improve cell edge performance. Fractional frequency reuse (FFR) has been extended to MCNs as a compromise solution that reduces CCI while keeping the sector frequency reuse factor at 1. The minimum CCI is achieved by adjusting the transmission (Tx) power at BSs and RSs under orthogonal frequency resource allocation. The essence of these works is to use part of the frequency bands, kept orthogonal, at the cell edge and the remaining frequency bands at the cell center.
2.2
PROPOSED SYSTEM:
We propose a
quantitative study on adaptive resource allocation schemes by jointly
considering interference coordination (IC) and load balancing (LB) in MCNs. In
this paper, we focus on the downlink of OFDMA-based MCNs with time division
duplex (TDD) mode, and analyze the characteristics of resource allocation
according to IEEE 802.16j/m specification. We also design a novel frequency
reuse scheme to mitigate interference and maintain high spectral efficiency,
and provide practical LB-based handover mechanisms which can evenly distribute
the traffic and guarantee users’ QoS.
We provide a
quantitative study on an adaptive resource allocation scheme by jointly
considering IC and LB in MCNs. We also present a novel frequency reuse scheme to
mitigate interference and maintain high spectral efficiency, and propose
practical LB-based handover mechanisms which can evenly distribute the traffic
load and guarantee users' QoS. Extensive simulations demonstrate that our proposed schemes can provide higher throughput and accommodate more QoS-guaranteed users than conventional SCNs can.
2.3
HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1
HARDWARE REQUIREMENT:
Processor – Pentium IV
Speed – 1.1 GHz
RAM – 256 MB (min)
Hard Disk – 20 GB
Floppy Drive – 1.44 MB
Keyboard – Standard Windows Keyboard
Mouse – Two or Three Button Mouse
Monitor – SVGA
2.3.2
SOFTWARE REQUIREMENTS:
Operating System : Windows XP
Front End : Microsoft Visual Studio 2008
Coding : C# .Net
Document : MS-Office 2007
CHAPTER 3
3.0
SYSTEM DESIGN
Data Flow Diagram / Use Case Diagram / Flow Diagram
The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
The DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
A DFD may be used to represent a system at any level of abstraction and may be partitioned into levels that represent increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION OF DATA:
External
sources or destinations, which may be people or organizations or other entities
DATA STORE:
Here the data referenced by a process is stored and
retrieved.
PROCESS:
People, procedures or devices that produce data. The
physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to
a destination. The data flow is a “packet” of data.
MODELING RULES:
There
are several common modeling rules when creating DFDs:
All processes must
have at least one data flow in and one data flow out.
All processes
should modify the incoming data, producing new forms of outgoing data.
Each data store
must be involved with at least one data flow.
Each external
entity must be involved with at least one data flow.
A data flow must
be attached to at least one process.
3.1 NETWORK
ARCHITECTURE DIAGRAM:
3.2 DATAFLOW DIAGRAM:
UML DIAGRAMS:
3.3 USE CASE DIAGRAM:
3.4 CLASS DIAGRAM:
3.5 SEQUENCE DIAGRAM:
3.6 ACTIVITY DIAGRAM:
CHAPTER
4
4.0
IMPLEMENTATION:
JOINT
INTERFERENCE COORDINATION AND LOAD BALANCING:
Since the traffic load distribution of each cell/sector affects the system performance significantly, we propose joint IC and LB (ICLB) for MCNs. The objective is to improve system throughput under the constraint of a basic coverage requirement. The cell coverage probability is defined as the percentage of the area within the cell whose received SINR is above the threshold of the most robust MCS, i.e., QPSK (1/12) modulation. In MCNs, increasing throughput implies that more users' QoS requirements are met; therefore, system throughput is improved and more reliable service is attained. For different station types, we present two LB mechanisms to improve the system throughput.
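As a hedged formalization of the definition above (the notation is ours, not taken verbatim from the paper), with A the cell area and gamma_min the SINR threshold of the most robust MCS, the cell coverage probability can be written in LaTeX as:

P_{\mathrm{cov}} \;=\; \frac{1}{|A|} \int_{A} \mathbf{1}\left\{ \mathrm{SINR}(x) \ge \gamma_{\min} \right\} \, \mathrm{d}x

Keeping P_cov above a target value while increasing throughput is the coverage constraint under which the ICLB scheme operates.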
4.1 ALGORITHM:
RESOURCE
SCHEDULING ALGORITHM:
For relay links, based on the allocation result of the second-hop links, slots should be assigned to the first-hop links in proportion to the aggregate data rate of the second-hop links of each RS; the resource allocation to the first-hop link via each RS ends when the first-hop data rate is greater than or equal to the aggregate second-hop data rate. The remaining slots of the RZ are assigned to BS-MS links according to (8). Because the assignable slots in one frame are limited, the attainable balance of slot allocation determines the ratio of RZ to AZ in the time domain in each frame. The detailed algorithm is shown in Algorithm 1.
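The slot-assignment loop described above can be sketched as follows. This is a minimal C# illustration, not the paper's Algorithm 1: the RS names, rates, and relay-zone size are hypothetical, and equation (8) for the remaining BS-MS slots is not reproduced.

using System;

class RelaySlotAllocationSketch
{
    static void Main()
    {
        // Hypothetical relay stations: aggregate second-hop (RS-MS) data rate already
        // scheduled for each RS, and the rate one first-hop (BS-RS) slot can carry.
        var relays = new[]
        {
            new { Name = "RS1", SecondHopRate = 12.0, FirstHopRatePerSlot = 2.5 },
            new { Name = "RS2", SecondHopRate = 6.0,  FirstHopRatePerSlot = 3.0 },
        };

        int relayZoneSlots = 20;   // slots available in the relay zone (RZ), illustrative

        foreach (var rs in relays)
        {
            // Assign first-hop slots until the first-hop rate reaches the aggregate
            // second-hop rate of this RS, or the RZ runs out of slots.
            int slots = 0;
            while (slots * rs.FirstHopRatePerSlot < rs.SecondHopRate && relayZoneSlots > 0)
            {
                slots++;
                relayZoneSlots--;
            }
            Console.WriteLine(rs.Name + ": " + slots + " first-hop slots");
        }

        // Any remaining RZ slots would be assigned to direct BS-MS links (eq. (8) in the paper).
        Console.WriteLine("RZ slots left for BS-MS links: " + relayZoneSlots);
    }
}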
4.2
MODULES:
SERVER
CLIENT MODULE:
MULTIHOP
CELLULAR:
LOAD
BALANCING:
RESOURCE
SCHEDULING:
OFDMA/TDD:
4.3
MODULE DESCRIPTION:
SERVER
CLIENT MODULE:
Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Often clients and servers operate over a computer network on separate hardware. A server machine is a high-performance host that runs one or more server programs which share their resources with clients. A client does not share its own resources; instead, clients initiate communication sessions with servers, which await (listen for) incoming requests.
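A minimal C# sketch of the kind of client-server session this module describes: a server listens for a connection, and a client initiates the session and receives an acknowledgement. The port number and message contents are illustrative; the real module would exchange the load and link-quality reports discussed elsewhere in this chapter.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class ClientServerSketch
{
    static void Main()
    {
        // Server side: listen on a local port, accept one client, send an acknowledgement.
        TcpListener listener = new TcpListener(IPAddress.Loopback, 5050);
        listener.Start();

        Thread serverThread = new Thread(delegate ()
        {
            using (TcpClient conn = listener.AcceptTcpClient())
            using (NetworkStream stream = conn.GetStream())
            {
                byte[] buffer = new byte[1024];
                int read = stream.Read(buffer, 0, buffer.Length);
                string request = Encoding.UTF8.GetString(buffer, 0, read);
                byte[] reply = Encoding.UTF8.GetBytes("ACK: " + request);
                stream.Write(reply, 0, reply.Length);
            }
        });
        serverThread.Start();

        // Client side: initiate the session, send a request, print the server's reply.
        using (TcpClient client = new TcpClient())
        {
            client.Connect(IPAddress.Loopback, 5050);
            using (NetworkStream stream = client.GetStream())
            {
                byte[] message = Encoding.UTF8.GetBytes("load report");
                stream.Write(message, 0, message.Length);
                byte[] buffer = new byte[1024];
                int read = stream.Read(buffer, 0, buffer.Length);
                Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read));
            }
        }

        serverThread.Join();
        listener.Stop();
    }
}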
MULTIHOP
CELLULAR NETWORKS:
A multi-hop cellular network (MCN) is an architecture proposed for wireless communication; MCNs combine the benefits of having a fixed infrastructure of base stations with the flexibility of ad-hoc networks. They are capable of achieving much higher throughput than current cellular systems, which can be classified as single-hop cellular networks (SCNs). This work concentrates on MCNs and SCNs using the IEEE 802.11 standard for wireless LANs.
We provide a general overview of the architecture and the
issues involved in the design of MCNs, in particular the challenges to be met
in the design of a routing protocol. We extend the work of Lin and Hsu to
enhance the throughput of such networks further.
We propose a routing protocol for use in such networks. We conduct extensive experimental studies on the performance of MCNs and SCNs under various load conditions (both TCP and UDP). The studies clearly indicate that MCNs with the proposed routing protocol are a viable alternative to SCNs; in fact, they provide much higher throughput.
LOAD
BALANCING NETWORKS:
Wireless sensor networks have received increasing attention due to their many military and civil applications. Sensors are constrained in onboard energy supply and are left unattended; the energy, size, and cost constraints of such sensors limit their communication range. Therefore, they require multi-hop wireless connectivity to forward data on their behalf to a remote command site.
We evaluate the performance of an algorithm that networks these sensors into well-defined clusters, with less energy-constrained gateway nodes acting as cluster heads, and that balances the load among these gateways. Load-balanced clustering increases the system stability and improves the communication between different nodes in the system. To evaluate the efficiency of our approach, we have studied the performance of sensor networks applying various routing protocols.
Simulation results show that, irrespective of the routing protocol used, our approach improves the lifetime of the system. As noted earlier, an unbalanced load could severely affect the performance of hot-spot areas, which may not meet the users' quality of service (QoS) requirements; this is another major reason for system performance degradation. To guarantee users' QoS, therefore, load balancing (LB) should be adopted along with IC for MCNs.
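As a sketch of the load-balanced handover idea in this module (not the clustering algorithm of the cited sensor-network work, and not the exact mechanism of the paper), the following C# fragment hands users over from a congested station to the least loaded neighbour that still offers an acceptable link. All station names, loads, capacities, and thresholds are made-up illustrative values.

using System;
using System.Collections.Generic;
using System.Linq;

class LoadBalancingSketch
{
    class Station
    {
        public string Name;
        public int Load;       // users currently admitted
        public int Capacity;   // users the station can serve with QoS guaranteed
    }

    static void Main()
    {
        var congested = new Station { Name = "RS-A", Load = 14, Capacity = 10 };
        var neighbours = new List<Station>
        {
            new Station { Name = "RS-B", Load = 4, Capacity = 10 },
            new Station { Name = "BS",   Load = 7, Capacity = 12 },
        };

        // Link quality (SINR in dB) of the next candidate user towards each neighbour.
        var candidateLink = new Dictionary<string, double> { { "RS-B", 11.5 }, { "BS", 6.0 } };
        const double minSinrDb = 8.0;   // threshold of the most robust MCS (illustrative)

        // Hand users over one at a time until the congested station is within capacity
        // or no neighbour can take another user with an acceptable link.
        while (congested.Load > congested.Capacity)
        {
            Station target = neighbours
                .Where(s => s.Load < s.Capacity && candidateLink[s.Name] >= minSinrDb)
                .OrderBy(s => (double)s.Load / s.Capacity)
                .FirstOrDefault();

            if (target == null) break;

            congested.Load--;
            target.Load++;
            Console.WriteLine("Handover: one user from " + congested.Name + " to " + target.Name);
        }
    }
}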
RESOURCE
SCHEDULING:
Resource scheduling can further improve system performance; we therefore extend the proportional fair (PF) algorithm for MCNs in this section. Besides the PF algorithm, two other classical scheduling algorithms, round robin (RR) and maximum SINR (MaxSINR), are often applied to cellular networks. In the RR algorithm, slots are allocated to the users in the cell coverage in due order, which seems absolutely fair. Nonetheless, it is not efficient, since the differences in slot efficiency between users are not taken into consideration.
In the MaxSINR algorithm, slots are allocated to the users with the highest SINR at each scheduling instant, which maximizes the system throughput; however, it is not fair, since users with low slot efficiency are not guaranteed to obtain slots. The PF algorithm, which has been investigated in the scheduling literature for SCNs, provides an efficient throughput-fairness tradeoff. In MCNs, the BS is responsible for gathering link information and allocating the available resources to the corresponding links according to the PF algorithm.
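A minimal C# sketch of the PF selection rule described above: at each scheduling instant the slot goes to the user maximizing instantaneous rate over average served throughput, and the averages are then updated. The number of users, the rate model, and the averaging window are illustrative assumptions, not values from the paper.

using System;
using System.Linq;

class ProportionalFairSketch
{
    static void Main()
    {
        var rng = new Random(1);
        int users = 3;
        double[] avgThroughput = Enumerable.Repeat(1e-3, users).ToArray(); // small value avoids divide-by-zero
        double window = 100.0;   // averaging window length (illustrative)

        for (int slot = 0; slot < 5; slot++)
        {
            // Hypothetical instantaneous achievable rates for this slot (would come from SINR/MCS).
            double[] rate = Enumerable.Range(0, users).Select(_ => 1.0 + 4.0 * rng.NextDouble()).ToArray();

            // PF metric: instantaneous rate divided by average served throughput.
            int selected = 0;
            for (int u = 1; u < users; u++)
                if (rate[u] / avgThroughput[u] > rate[selected] / avgThroughput[selected])
                    selected = u;

            // Exponentially smoothed throughput update; only the scheduled user gains rate.
            for (int u = 0; u < users; u++)
                avgThroughput[u] = (1 - 1 / window) * avgThroughput[u]
                                 + (u == selected ? rate[u] / window : 0.0);

            Console.WriteLine("Slot " + slot + ": scheduled user " + selected);
        }
    }
}

Using rate[u] alone as the metric would give MaxSINR behaviour, and ignoring the rates entirely would give round robin, which is exactly how the three schedulers discussed in this module differ.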
OFDMA/TDD
NETWORKS:
A time division duplex (TDD) frame consists of downlink and uplink subframes. Each subframe is further divided into two time zones, named the relay zone (RZ) and the access zone (AZ), respectively. The RZ is dedicated to BS transmission toward both RSs and MSs, while the AZ is dedicated to the reception of MSs from the BS or the RSs. Assuming each RS receives data for relaying in the RZ of the current frame, it is scheduled to transmit that data in the AZ and to empty its buffer in the next frame. In each subframe, the frequency domain consists of subchannels and the time domain consists of slots. A slot in a subchannel is the minimum frequency-time resource unit of the TDD relay frame structure for MCNs.
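The frame structure just described can be pictured with a small data model. This C# sketch is only illustrative; the zone lengths and subchannel count are placeholders, not values from the IEEE 802.16j/m specification.

using System;

class TddFrameSketch
{
    enum Zone { RelayZone, AccessZone }

    static void Main()
    {
        int subchannels = 8;    // frequency domain of one subframe (illustrative)
        int rzTimeSlots = 6;    // relay zone length in slots (illustrative)
        int azTimeSlots = 10;   // access zone length in slots (illustrative)

        // Downlink subframe: RZ (BS -> RSs and MSs) followed by AZ (BS/RSs -> MSs).
        // A slot in a subchannel is the minimum frequency-time resource unit.
        var downlink = new Zone[subchannels, rzTimeSlots + azTimeSlots];
        for (int c = 0; c < subchannels; c++)
            for (int t = 0; t < rzTimeSlots + azTimeSlots; t++)
                downlink[c, t] = t < rzTimeSlots ? Zone.RelayZone : Zone.AccessZone;

        Console.WriteLine("Total slots in the downlink subframe: " + downlink.Length);
        Console.WriteLine("RZ slots: " + subchannels * rzTimeSlots + ", AZ slots: " + subchannels * azTimeSlots);
    }
}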
Additionally, in WMNs the frequency spectrum is shared and randomly contended by all stations, and the access scheme with the lowest overhead is optimal. In this paper, by contrast, a centrally controlled optimal resource allocation for OFDMA-based MCNs is our target.
To provide analytical performance
evaluation, we make two assumptions for the remainder of this paper:
1. All users have a single type of data
service and thus have the same QoS requirements.
2. All cells/sectors
have the same channel conditions, traffic load, and distribution of users.
CHAPTER 5
5.0
SYSTEM STUDY:
5.1 FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility
analysis are
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
5.1.2
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would in turn place high demands on the client. The developed system must therefore have modest requirements, so that only minimal or no changes are required to implement it.
5.1.3 SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.
5.2 SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.
TYPES OF TESTS
5.2.1 UNIT
TESTING
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
5.2.2 INTEGRATION TESTING
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that, although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
5.2.3 FUNCTIONAL
TEST
Functional tests provide systematic demonstrations
that functions tested are available as
specified by the business and technical
requirements, system documentation, and user manuals.
Functional testing is centered on the following
items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
5.2.4 SYSTEM
TEST
System testing ensures that the entire integrated software
system meets requirements. It tests a configuration to ensure known and
predictable results. An example of system testing is the configuration oriented
system integration test. System testing is based on process descriptions and
flows, emphasizing pre-driven process links and integration points.
5.2.5 WHITE BOX
TESTING
White box testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.
5.2.6
BLACK BOX TESTING
Black box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
5.3 UNIT TESTING:
Unit testing is usually conducted as part of a
combined code and unit test phase of the software lifecycle, although it is not
uncommon for coding and unit testing to be conducted as two distinct phases.
Test strategy and approach
Field testing
will be performed manually and functional tests will be written in detail.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages, and responses must not be delayed.
Features to be tested
Verify that the entries are of the correct format.
No duplicate entries should be allowed.
All links should take the user to the correct page.
5.4 INTEGRATION TESTING
Software integration testing is the incremental
integration testing of two or more integrated software components on a single
platform to produce failures caused by interface defects.
The task of the integration test is to check that
components or software applications, e.g. components in a software system or –
one step up – software applications at the company level – interact without
error.
Test Results:
All the test cases mentioned above passed
successfully. No defects encountered.
5.5 ACCEPTANCE TESTING
User Acceptance Testing is a critical phase of any
project and requires significant participation by the end user. It also ensures
that the system meets the functional requirements.
Test Results:
All the test cases mentioned above passed
successfully. No defects encountered.
CHAPTER
6
6.0 SOFTWARE ENVIRONMENT
6.1 FEATURES OF .NET
Microsoft .NET is a set of Microsoft software
technologies for rapidly building and integrating XML Web services, Microsoft Windows-based
applications, and Web solutions. The .NET Framework is a language-neutral
platform for writing programs that can easily and securely interoperate.
There's no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic, and JScript.
The .NET
framework provides the foundation for components to interact seamlessly,
whether locally or remotely on different platforms. It standardizes common data
types and communications protocols so that components created in different
languages can easily interoperate.
“.NET” is also the collective name given to various
software components built upon the .NET platform. These will be both products
(Visual Studio.NET and Windows.NET Server, for instance) and services (like
Passport, .NET My Services, and so on).
6.2 THE .NET FRAMEWORK
The .NET Framework has two main parts:
1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.
The CLR is described as the “execution engine” of
.NET. It provides the environment within which programs run. The most important
features are
Conversion from a low-level assembler-style language, called
Intermediate Language (IL), into code native to the platform being executed on.
Memory management, notably including garbage collection.
Checking and enforcing security restrictions on the running
code.
Loading and executing programs, with version control and other
such features.
The following features of the .NET framework are also worth description:
Managed Code
Managed code is code that targets .NET and contains certain extra information – "metadata" – to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed
Data
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications – data that doesn't get garbage collected but instead is looked after by unmanaged code.
Common
Type System
The CLR uses something called the Common Type System (CTS) to strictly enforce type safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.
Common
Language Specification
The CLR provides built-in support for language
interoperability. To ensure that you can develop managed code that can be fully
used by developers using any programming language, a set of language features
and rules for using them called the Common Language Specification (CLS) has
been defined. Components that follow these rules and expose only CLS features
are considered CLS-compliant.
6.3 THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
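A small illustration of the point about value types and their conversion to object types (boxing); the struct and the values are our own example, not from the original text:

using System;

class ValueTypeDemo
{
    // A struct is a value type: locals of this type can live on the stack.
    struct Point
    {
        public int X;
        public int Y;
    }

    static void Main()
    {
        Point p = new Point { X = 3, Y = 4 };   // value type, no separate heap object required
        int n = 42;                             // System.Int32 is also a value type

        // Converting a value type to an object type ("boxing") copies it to the heap.
        object boxed = n;
        int unboxed = (int)boxed;               // unboxing copies the value back

        Console.WriteLine(p.X + "," + p.Y + " boxed=" + boxed + " unboxed=" + unboxed);
    }
}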
The set of classes is pretty comprehensive,
providing collections, file, screen, and network I/O, threading, and so on, as
well as XML and database connectivity.
The class library is subdivided into a number of
sets (or namespaces), each providing distinct areas of functionality, with
dependencies between the namespaces kept to a minimum.
6.4 LANGUAGES SUPPORTED BY .NET
The multi-language capability of the .NET Framework
and Visual Studio .NET enables developers to use their existing programming
skills to build all types of applications and XML Web services. The .NET
framework supports new versions of Microsoft’s old favorites Visual Basic and
C++ (as VB.NET and Managed C++), but there are also a number of new additions
to the family.
Visual Basic .NET has been updated to include many
new and improved language features that make it a powerful object-oriented
programming language. These features include inheritance, interfaces, and
overloading, among others. Visual Basic now also supports structured exception handling, custom attributes, and multithreading.
Visual Basic .NET is also CLS compliant, which means
that any CLS-compliant language can use the classes, objects, and components
you create in Visual Basic .NET.
Managed Extensions for C++ and attributed
programming are just some of the enhancements made to the C++ language. Managed
Extensions simplify the task of migrating existing C++ applications to the new
.NET Framework.
C# is Microsoft’s new language. It’s a C-style
language that is essentially “C++ for Rapid Application Development”. Unlike
other languages, its specification is just the grammar of the language. It has
no standard library of its own, and instead has been designed with the
intention of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest
transition for Java-language developers into the world of XML Web Services and
dramatically improves the interoperability of Java-language programs with
existing software written in a variety of other programming languages.
Active State has created Visual Perl and Visual
Python, which enable .NET-aware applications to be built in either Perl or
Python. Both products can be integrated into the Visual Studio .NET
environment. Visual Perl includes support for Active State’s Perl Dev Kit.
Other languages for which .NET compilers are available include
FORTRAN
COBOL
Eiffel
Fig 1. The .NET Framework: ASP.NET and XML Web services, Windows Forms, the Base Class Libraries, the Common Language Runtime, and the Operating System.
C#.NET is also compliant with CLS (Common Language
Specification) and supports structured exception handling. The CLS is a set of rules
and constructs that are supported by the CLR (Common Language Runtime). The CLR is
the runtime environment provided by the .NET Framework; it manages the
execution of the code and also makes the development process easier by
providing services.
C#.NET is a CLS-compliant language. Any objects,
classes, or components that are created in C#.NET can be used in any other
CLS-compliant language. In addition, we can use objects, classes, and
components created in other CLS-compliant languages in C#.NET. The use of the CLS
ensures complete interoperability among applications, regardless of the
languages used to create the application.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas
destructors are used to destroy them. In other words, destructors are used to
release the resources allocated to the object. In C#.NET the Finalize
procedure is available. The Finalize procedure is used to complete the
tasks that must be performed when an object is destroyed. The Finalize
procedure is called automatically when an object is destroyed. In addition, the
Finalize procedure can be called only from the class it belongs to or from
derived classes.
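As a hedged illustration of the above, the following C# sketch shows a constructor that initializes an object and a finalizer (written with the destructor syntax ~ClassName, which the runtime maps onto Finalize) that releases its resources; the class and the simulated resource are invented for the example.

using System;

class ResourceHolder
{
    private IntPtr handle;                       // stands in for an unmanaged resource

    // Constructor: initializes the object.
    public ResourceHolder()
    {
        handle = new IntPtr(1);
        Console.WriteLine("Resource acquired.");
    }

    // Finalizer: completes the tasks that must be performed when the object
    // is destroyed; it is called automatically by the garbage collector.
    ~ResourceHolder()
    {
        handle = IntPtr.Zero;
        Console.WriteLine("Resource released.");
    }
}

class Program
{
    static void Main()
    {
        new ResourceHolder();                    // the object becomes unreachable immediately
        GC.Collect();                            // force a collection so the finalizer runs (demonstration only)
        GC.WaitForPendingFinalizers();
    }
}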
GARBAGE COLLECTION
Garbage
Collection is another new feature in C#.NET. The .NET Framework monitors
allocated resources, such as objects and variables. In addition, the .NET
Framework automatically releases memory for reuse by destroying objects that
are no longer in use.
In C#.NET, the garbage collector checks for the
objects that are not currently in use by applications. When the garbage
collector comes across an object that is marked for garbage collection, it
releases the memory occupied by the object.
OVERLOADING
Overloading is another feature in C#. Overloading
enables us to define multiple procedures with the same name, where each
procedure has a different set of arguments. Besides using overloading for procedures,
we can use it for constructors and properties in a class.
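A minimal C# sketch of overloading is shown below; the Add methods share one name but differ in their argument lists, and the compiler selects the matching version at each call site. The class and method names are illustrative.

using System;

class Calculator
{
    // Three procedures with the same name and different sets of arguments.
    public int Add(int a, int b)          { return a + b; }
    public double Add(double a, double b) { return a + b; }
    public int Add(int a, int b, int c)   { return a + b + c; }
}

class Program
{
    static void Main()
    {
        Calculator c = new Calculator();
        Console.WriteLine(c.Add(1, 2));       // resolves to Add(int, int)
        Console.WriteLine(c.Add(1.5, 2.5));   // resolves to Add(double, double)
        Console.WriteLine(c.Add(1, 2, 3));    // resolves to Add(int, int, int)
    }
}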
MULTITHREADING:
C#.NET also supports multithreading. An application
that supports multithreading can handle multiple tasks simultaneously. We can
use multithreading to decrease the time taken by an application to respond to
user interaction.
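The following C# sketch, with invented method names, shows the idea: a worker thread runs a time-consuming task while the main thread remains free to respond to the user.

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        Thread worker = new Thread(DoWork);   // background task on a separate thread
        worker.Start();

        Console.WriteLine("Main thread stays responsive to user interaction...");
        worker.Join();                        // wait for the background task before exiting
    }

    static void DoWork()
    {
        Thread.Sleep(500);                    // simulate a time-consuming task
        Console.WriteLine("Background task completed.");
    }
}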
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables
us to detect and remove errors at runtime. In C#.NET, we need to use
Try…Catch…Finally statements to create exception handlers. Using
Try…Catch…Finally statements, we can create robust and effective exception
handlers to improve the performance of our application.
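A minimal Try…Catch…Finally sketch in C# is given below; the file name is a placeholder, the handler simply reports the error, and the Finally block guarantees that the clean-up runs.

using System;
using System.IO;

class Program
{
    static void Main()
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader("settings.txt");   // placeholder file; may not exist
            Console.WriteLine(reader.ReadLine());
        }
        catch (FileNotFoundException ex)
        {
            // The error is detected and handled at runtime instead of crashing the application.
            Console.WriteLine("Could not find the file: " + ex.FileName);
        }
        finally
        {
            // Clean-up runs whether or not an exception occurred.
            if (reader != null)
            {
                reader.Close();
            }
        }
    }
}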
6.5 THE .NET FRAMEWORK
The .NET Framework is a new computing platform that
simplifies application development
in the highly distributed environment of the Internet.
OBJECTIVES OF .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment, whether
object code is stored and executed locally, executed locally but
Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment
and versioning conflicts and guarantees the safe execution of code.
3. To eliminate the performance problems of scripted or interpreted
environments.
There are different types of applications, such as
Windows-based applications and Web-based applications.
6.6
FEATURES OF SQL-SERVER
The OLAP
Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The
term OLAP Services has been replaced with the term Analysis Services. Analysis
Services also includes a new data mining component. The Repository component
available in SQL Server version 7.0 is now called Microsoft SQL Server 2000
Meta Data Services. References to the component now use the term Meta Data
Services. The term repository is used only in reference to the repository
engine within Meta Data Services.
A SQL-SERVER database consists of the following types of objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE:
A table is a collection of data about a specific
topic.
VIEWS OF TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work
in the table’s Design view. We can specify what kind of data the table will hold.
Datasheet View
To add, edit, or analyse the data itself, we work in
the table’s Datasheet view.
QUERY:
A query is a question that is asked of the data.
Access gathers the data that answers the question from one or more tables. The data
that make up the answer is either a dynaset (if you can edit it) or a snapshot (which
cannot be edited). Each time we run the query, we get the latest information in the
dynaset. Access either displays the dynaset or snapshot for us to view, or
performs an action on it, such as deleting or updating.
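Since the back end used in this project is SQL Server accessed from C#, a small hedged sketch of running such a query from code is shown below; the connection string, table, and column names are placeholders only, and each execution returns the current contents of the table, much as re-running a query refreshes a dynaset.

using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        // Placeholder connection string, table, and columns for illustration only.
        string connectionString = "Data Source=localhost;Initial Catalog=DemoDB;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT PatientId, Measurement FROM Observations WHERE Measurement > @limit", connection))
        {
            command.Parameters.AddWithValue("@limit", 100);
            connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["PatientId"], reader["Measurement"]);
                }
            }
        }
    }
}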
CHAPTER 8
8.0
CONCLUSION:
In this paper, we
have carried out a quantitative study on an adaptive resource allocation scheme
based on interference coordination and load balancing for multihop cellular
networks. We also propose a novel frequency reuse scheme to mitigate
interference and maintain high spectral efficiency, and present practical
LB-based handover mechanisms which can evenly distribute the traffic load and
guarantee users’ quality of service.
Simulations
demonstrate that our scheme not only meets the requirement on coverage
probability, but also improves the sector throughput and accommodates more
users. To the best of our knowledge, this is the first work to provide dynamic
resource allocation by jointly considering interference coordination and load
balancing for MCNs. We expect that our method will play a significant role in
network planning and resource allocation in the future MCNs.
CHAPTER 9
9.0
REFERENCES:
[1] M. Salem, A.
Adinoyi, H. Yanikomeroglu, and D. Falconer, “Opportunities and Challenges in
OFDMA-Based Cellular Relay Networks: A Radio Resource Management Perspective,”
IEEE Trans. Vehicular Technology, vol. 59, no. 5, pp. 2496-2510, Jan. 2010.
[2] Y. Zhao, X.
Fang, and Z. Zhao, “Interference Coordination in Compact Frequency Reuse for
Multihop Cellular Networks,” IEICE Trans. Fundamentals of Electronics, Comm.
and Computer Sciences, vol. E93-A, no. 11, pp. 2312-2319, Nov. 2010.
[3] Third
Generation Partnership Project, “Technical Specification Group Radio Access
Network; Physical Layer Aspects for Evolved Universal Terrestrial Radio Access
(UTRA) (Release 7),” 3GPP Technical Report 25.814 v7.1.0, Sept. 2006.
This paper
presents the design and implementation of an architecture based on the
combination of ontologies, rules, web services, and the autonomic computing paradigm
to manage data in home-based telemonitoring scenarios.
The
architecture includes two layers: 1) a conceptual layer and 2) a data and
communication layer. On the one hand, the conceptual layer based on ontologies is
proposed to unify the management procedure and integrate incoming data from all
the sources involved in the telemonitoring process. On the other hand, the data
and communication layer based on REST web service (WS) technologies is proposed
to provide practical backup to the use of the ontology, to provide a real implementation
of the tasks it describes and thus to provide a means of exchanging data
(support communication tasks).
A study
regarding chronic obstructive pulmonary disease (COPD) data management is presented in
order to evaluate the efficiency of the architecture. This proposed
ontology-based solution defines a flexible and scalable architecture in order
to address main challenges presented in home-based telemonitoring scenarios and
thus provide a means to integrate, unify, and transfer data supporting both clinical
and technical management tasks.
1.2
INTRODUCTION
Patient
empowerment is considered as a philosophy of health care based on the
perspective that better outcomes are achieved when patients become active
participants in their own health management. This new paradigm is a central
idea in the European Union (EU) health strategy supported by international
health organizations including the World Health Organization among others, and
its effectiveness in yielding quality of care is an obvious and essential area
of research. This new idea invites us to look for new ways of providing
healthcare, e.g., by using information and communications technologies. In this
context, home-based telemonitoring systems can be used as self-care management
tools, while collaborative processes among healthcare personnel and patients are
maintained, thus guaranteeing the patient’s safe control. Telemonitoring
systems face the problem of delivering medicine to the currently growing
population with chronic conditions while, at the same time, covering the
dimensions of quality of care and supporting new paradigms such as
empowerment.
By periodically collecting patients’ clinical data at their home sites and
transferring them to physicians located at remote sites, supervision of the
patient’s health status and provision of feedback become possible. This type of
telemedicine system guarantees patient control while reducing costs and avoiding
hospital overflows. These two sites (home site and healthcare site) comprise a
typical home-based telemonitoring system. At the home site, data acquired by
using medical devices (MDs), together with the patient’s feedback, are collected
in a concentrator device, the home gateway (HG), used to evaluate and/or
transfer the acquired data outside the patient’s home if necessary. At the
healthcare site, a server device is used to manage information from the home
site as well as to manage and store the patient’s monitoring guidelines defined
by physicians (the telemonitoring server, TS). In fact, this telemonitoring
process, and consequently the evolution of the patient’s health status, is
managed through the indications or monitoring guidelines provided by
physicians.
Although significant contributions have
been made in this field in recent decades, telemedicine and e-health
scenarios in general still pose numerous challenges that need to be addressed
by researchers in order to take maximum advantage of the benefits that these
systems provide and to support their long-term implementation. Interoperability
and integration are critical challenges that also need to be addressed when
developing monitoring systems in order to provide effective healthcare and to
make possible seamless communication among the different heterogeneous health
entities that participate in the monitoring process. This integration should be
addressed at both end sites of the scenario but also in the communication link,
thus integrating the way of transferring and exchanging information efficiently
between them.
Providing personalized care services
and taking into account the patient’s context have been identified as additional
requirements. Furthermore, apart from clinical data aspects, technical issues
should be also addressed in this scenario. Technical management of all the
devices that comprise the telemonitoring scenario (e.g., the MDs and HG) is an
important task that may or may not be integrated under the same architecture as
clinical management. Hence, at this technical level, research is still required
to address these challenges. Consequently there is a need for the development
of new telemonitoring architectures.
Great efforts have been made in recent
years in developing standards to deal with interoperability at different points
of the e-health communication infrastructure such as the ISO/IEEE 11073 (X73)
for MDs interoperability, the OpenEHR initiative for storage, management and
retrieval of electronic health record (EHR) information or as the standardized
Health Level Seven (HL7) messages to solve clinical data transferences.
Nevertheless, additional efforts are required to enable them to work together and
ultimately provide a higher level of integration.
Specifically, in this telemonitoring
scenario, there is not a unique standard-based solution to address data and
management integration. Since several standards can be used (some of them in
combination with proprietary protocols or other standards) at different points
of this scenario, the interoperability problem remains unsolved unless these
standards merge into one, or alignments and combinations of them are made.
According to Berges et al., interoperability does not mean having a unique
representation but a semantically acknowledged equivalent one. That is the
reason to propose in this study an ontology-based architecture, in order to
provide a common knowledge model of the exchanged data and the management of
such data. This ontology constitutes that semantically equivalent knowledge.
Then, at both ends of the architecture, other standards could be used for other
management purposes, relating this model to the specific desired approach. Using
this alternative, a knowledge model is first provided that avoids aligning
models two by two, while all of them remain related through the main ontology.
Ontology-based solutions have become
popular over the past few years. Ontologies provide a higher level of abstraction
and have been successfully used in telemonitoring scenarios and other areas to
provide knowledge representation and semantic integration, and thus a common
understanding of the data exchanged by all the entities. Furthermore, their
combination with rules allows personalized management services, and thus
personalized care, to be provided. Although there are works that describe the details of
an ontology approach in this domain, they do not devote much attention to the
architecture implementation and the communication used to exchange the
information described. Consequently, few works have given details about the
practical implementation of an ontology-based system, which may be of interest
for the development of other ontology-based applications in and outside the e-health
domain.
This paper presents an ontology-driven
architecture to integrate data management and enable its communication in a
telemonitoring scenario. The proposed architecture includes two layers: the
conceptual layer (the ontology) and the communication and data layer. The
conceptual layer uses the HOTMES ontology and its extensions. Specifically,
the OWL-DL language was selected to define this ontology model. The second
layer is based on WS technologies. WSs have been successfully used in network
management and also in other works to exchange data modeled by ontology.
However, our proposal, inspired by the representational state transfer (REST)
style and based on a generic communication method, provides a different design
approach that may be reusable for other systems based on ontologies.
Furthermore, security issues have been considered. The aim is to define a
flexible and scalable architecture in order to address main challenges
presented in home-based telemonitoring scenarios and thus provide a means to
integrate and transfer data supporting both clinical and technical data
management.
1.3
LITERATURE SURVEY
AUTHOR
AND PUBLICATION: J. D. Trigo, I. Martínez, A. Alesanco,
A. Kollmann, J. Escayola, D. Hayn, G. Schreier, and J. García, “AN INTEGRATED
HEALTHCARE INFORMATION SYSTEM FOR END-TO-END STANDARDIZED EXCHANGE AND
HOMOGENEOUS MANAGEMENT OF DIGITAL ECG FORMATS,” IEEE Trans. Inf. Technol.
Biomed., vol. 16, no. 4, pp. 518–529, Jul. 2012.
EXPLANATION:
This paper investigates
the application of the enterprise information system (EIS) paradigm to
standardized cardiovascular condition monitoring. There are many specifications
in cardiology, particularly in the ECG standardization arena. The existence of ECG
formats, however, does not guarantee the implementation of homogeneous,
standardized solutions for ECG management. In fact, hospital management
services need to cope with various ECG formats and, moreover, several different
visualization applications. This heterogeneity hampers the normalization of
integrated, standardized healthcare information systems, hence the need for
finding an appropriate combination of ECG formats and suitable EIS-based
software architecture that enables standardized exchange and homogeneous
management of ECG formats. Determining such a combination is one objective of
this paper.
We develop the
integrated healthcare information system that satisfies the requirements posed
by the previous determination. The ECG formats selected include ISO/IEEE11073,
Standard Communications Protocol for Computer-Assisted Electrocardiography, and
an ECG ontology. The EIS-enabling techniques and technologies selected include
web services, simple object access protocol, extensible markup language, or business
process execution language. Such a selection ensures the standardized exchange
of ECGs within, or across, healthcare information systems while providing
modularity and accessibility.
AUTHOR
AND PUBLICATION: D. Riaño, F. Real, J. A. López-Vallverdú,
F. Campana, S. Ercolani, P. Mecocci, R. Annicchiarico, and C. Caltagirone, “AN
ONTOLOGY-BASED PERSONALIZATION OF HEALTH-CARE KNOWLEDGE TO SUPPORT CLINICAL
DECISIONS FOR CHRONICALLY ILL PATIENTS,” J. Biomed. Informat., vol. 45,
no. 3, pp. 429–446, 2012.
EXPLANATION:
Chronically ill
patients are complex health care cases that require the coordinated interaction
of multiple professionals. A correct intervention for these sorts of patients
entails the accurate analysis of the conditions of each concrete patient and
the adaptation of evidence-based standard intervention plans to these
conditions. There are some other clinical circumstances such as wrong
diagnoses, unobserved comorbidities, missing information, unobserved related
diseases or prevention, whose detection depends on the capacities of deduction
of the professionals involved. In this paper, we introduce an ontology for the
care of chronically ill patients and implement two personalization processes
and a decision support tool. The first personalization process adapts the
contents of the ontology to the particularities observed in the health-care
record of a given concrete patient, automatically providing a personalized
ontology containing only the clinical information that is relevant for health-care
professionals to manage that patient. The second personalization process uses
the personalized ontology of a patient to automatically transform intervention
plans describing health-care general treatments into individual intervention
plans. For comorbid patients, this process concludes with the semi-automatic
integration of several individual plans into a single personalized plan.
Finally, the ontology is also used as the knowledge base of a decision support
tool that helps health-care professionals to detect anomalous circumstances
such as wrong diagnoses, unobserved comorbidities, missing information,
unobserved related diseases, or preventive actions. Seven health-care centers
participating in the K4CARE project, together with the group SAGESA and the Local
Health System in the town of Pollenza have served as the validation platform
for these two processes and tool. Health-care professionals participating in
the evaluation agree about the average quality 84% (5.9/7.0) and utility 90%
(6.3/7.0) of the tools and also about the correct reasoning of the decision
support tool, according to clinical standards.
AUTHOR
AND PUBLICATION: I. Berges, J. Bermudez, and A.
Illarramendi, “TOWARDS SEMANTIC INTEROPERABILITY OF ELECTRONIC HEALTH RECORDS,”
IEEE Trans. Inf. Technol. Biomed., vol. 16, no. 3, pp. 424–431, May
2012.
EXPLANATION:
Although the goal of
achieving semantic interoperability of electronic health records (EHRs) is
pursued by many researchers, it has not been accomplished yet. In this paper,
we present a proposal that smoothes out the way toward the achievement of that
goal. In particular, our study focuses on medical diagnoses statements. In
summary, the main contributions of our ontology-based proposal are the
following: first, it includes a canonical ontology whose EHR-related terms
focus on semantic aspects. As a result, their descriptions are independent of
languages and technology aspects used in different organizations to represent
EHRs. Moreover, those terms are related to their corresponding codes in
well-known medical terminologies. Second, it deals with modules that allow
obtaining rich ontological representations of EHR information managed by
proprietary models of health information systems. The features of one specific
module are shown as reference. Third, it considers the necessary mapping axioms
between ontological terms enhanced with so-called path mappings. This feature
smoothes out structural differences between heterogeneous EHR representations,
allowing proper alignment of information.
AUTHOR
AND PUBLICATION: N. Lasierra, A. Alesanco, J. García,
and D. O’Sullivan, “DATA MANAGEMENT IN HOME SCENARIOS USING AN AUTONOMIC
ONTOLOGY-BASED APPROACH,” in Proc. of the 9th IEEE Int. Workshop on Managing
Ubiquitous Commun. Services, part of PerCom, 2012, pp.
94–99.
EXPLANATION:
An ontology-based approach to deal
with data and management procedure integration in home-based scenarios is
presented in this paper. The proposed ontology not only provides a means to
represent exchanged data but also to unify the way of accessing, controlling,
evaluating and transferring information remotely. The structure of this
ontology has been inspired by the autonomic computing paradigm, thus it
describes the tasks that comprise the MAPE (Monitor, Analyze, Plan and Execute)
process. Furthermore the use of SPARQL (Simple Protocol and RDF Query Language)
is proposed in this paper to express conditions and rules that determine the
performance of these tasks according to each situation. Finally two practical
application cases of the proposed ontology-based approach are presented.
CHAPTER 2
2.0
SYSTEM ANALYSIS
2.1
EXISTING SYSTEM:
Telemonitoring systems face the problem
of delivering medicine to the currently growing population with chronic
conditions while, at the same time, covering the dimensions of quality of care
and supporting new paradigms such as empowerment. By periodically
collecting patients’ clinical data at their home sites and transferring them to
physicians located at remote sites, supervision of the patient’s health
status and provision of feedback become possible.
This type of telemedicine system
guarantees patient control while reducing costs and avoiding hospital
overflows. These two sites (home site and healthcare site) comprise a typical
home-based telemonitoring system. At the home site, data acquired by using MDs,
together with the patient’s feedback, are collected in a concentrator device
(HG) used to evaluate and/or transfer the acquired data outside the patient’s
home if necessary.
2.1.1
DISADVANTAGES:
Existing models for chronic diseases pose several
technology-oriented challenges for home-based care, where assistance services
rely on a close collaboration among different stakeholders, such as health
operators, patient relatives, and social community members.
An ontology-based context model and a related context
management system providing a configurable and extensible service-oriented
framework to ease the development of applications for monitoring and handling
patient chronic conditions.
The system has been developed in a prototypal version, and
integrated with a service platform for supporting operators of home-based care
networks in cooperating and sharing patient-related information and
coordinating mutual interventions for handling critical and alarm situations.
2.2
PROPOSED SYSTEM:
We present an ontology-driven
architecture to integrate data management and enable its communication in a
telemonitoring scenario. It enables not only the integration of the patient’s clinical
data management but also the technical data management of all devices that are
included in the scenario. The proposed architecture includes two layers: the
conceptual layer (the ontology) and the communication and data layer.
The conceptual layer uses the HOTMES ontology and
its extensions; the OWL-DL language was selected to
define this ontology model. The second layer is based on WS technologies. WSs
have been successfully used in network management and also in other works to
exchange data modeled by ontologies. Our proposal, inspired by the
representational state transfer (REST) style and based on a generic
communication method, provides a different design approach that may be reusable
for other systems based on ontologies.
Furthermore, security issues have been
considered. The aim is to define a flexible and scalable architecture in order
to address main challenges presented in home-based telemonitoring scenarios and
thus provide a means to integrate and transfer data supporting both clinical
and technical data management.
2.2.1
ADVANTAGES:
Ontologies provide a higher level of
abstraction and have been successfully used in telemonitoring scenarios and
other areas to provide knowledge representation and semantic integration, thus
a common understanding of the data exchanged by all the entities. Furthermore,
their combination with rules allows personalized management services, and thus
personalized care, to be provided.
We describe the details of an ontology
approach in this domain, devoting particular attention to the architecture
implementation and the communication used to exchange the information described.
Our implementation of the ontology-based
system may be of interest for the development of other ontology-based
applications in and outside the e-health domain; the ontology is used for interpreting
the data transferred in the communication between the end sources of the architecture.
The data and communication layer deals with data management and transmission.
2.3
HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1
HARDWARE REQUIREMENT:
Processor      –  Pentium IV
Speed          –  1.1 GHz
RAM            –  256 MB (minimum)
Hard Disk      –  20 GB
Floppy Drive   –  1.44 MB
Keyboard       –  Standard Windows Keyboard
Mouse          –  Two or Three Button Mouse
Monitor        –  SVGA
2.3.2
SOFTWARE REQUIREMENTS:
Operating System : Windows XP or Windows 7
Front End        : Microsoft Visual Studio .NET
Back End         : MS SQL Server
Server           : ASP.NET Web Server
Script           : C#
Documentation    : MS Office 2007
CHAPTER
3
3.0 SYSTEM DESIGN:
Data Flow Diagram / Use
Case Diagram / Flow Diagram:
The
DFD is also called a bubble chart. It is a simple graphical formalism that can
be used to represent a system in terms of the input data to the system, the various
processing carried out on these data, and the output data generated by the
system.
The
data flow diagram (DFD) is one of the most important modeling tools. It is used
to model the system components. These components are the system process, the
data used by the process, an external entity that interacts with the system and
the information flows in the system.
DFD
shows how the information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information
flow and the transformations that are applied as data moves from input to
output.
A DFD may be used to represent a system at any
level of abstraction and may be partitioned into levels that represent
increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION
OF DATA:
External sources or
destinations, which may be people, organizations, or other entities.
DATA STORE:
Here the data referenced by a process is stored and
retrieved.
PROCESS:
People, procedures, or devices that produce data. The
physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to
a destination. The data flow is a “packet” of data.
There
are several common modeling rules when creating DFDs:
All processes must
have at least one data flow in and one data flow out.
All processes
should modify the incoming data, producing new forms of outgoing data.
Each data store
must be involved with at least one data flow.
Each external entity
must be involved with at least one data flow.
A data flow must
be attached to at least one process.
3.1 ARCHITECTURE DIAGRAM
3.2
DATAFLOW DIAGRAM
UML
DIAGRAMS:
3.3
USE CASE DIAGRAM:
3.4
CLASS DIAGRAM:
3.5
SEQUENCE DIAGRAM:
3.6
ACTIVITY DIAGRAM:
CHAPTER
4
4.0
IMPLEMENTATION:
ONTOLOGIES:
According to one of the most widely
accepted definitions of ontologies in computer science, an ontology can be
described as “an explicit and formal specification of a shared
conceptualization”. In simple words,
ontologies represent concepts and basic relationships for the purpose of
comprehension of a common knowledge area. To develop an ontology means to formalize
a common view of a certain domain.
1) OWL Language: In
computer science, there are plenty of formal languages that can be used to
define and construct ontologies. These languages allow the knowledge contained
in an ontology to be encoded in a simple and formal way. However, the standardized RDF
and OWL have been gaining popularity in the semantic web world. An ontology can be
formally described in OWL using the following basic elements: 1) classes; 2) individuals;
and 3) properties. These elements are used to describe concepts,
instances or members of a class, and relationships between individuals of two
classes (object properties) or to link individuals with datatype values
(datatype properties), respectively. Apart from these basic elements, OWL
provides class descriptors used to precisely describe OWL classes,
which include property restrictions (value and cardinality constraints),
class axioms, property axioms, and properties over individuals.
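To make the three basic elements concrete, the sketch below embeds a tiny, invented ontology fragment (written in Turtle syntax, one of the serializations OWL can use) in a C# string: two classes, an object property, a datatype property, and two individuals. All names and namespaces are made up for illustration.

class OwlElementsExample
{
    // Invented ontology fragment: classes, individuals, an object property
    // (linking two individuals) and a datatype property (linking an
    // individual to a literal value).
    public const string Fragment = @"
        @prefix ex:  <http://example.org/demo#> .
        @prefix owl: <http://www.w3.org/2002/07/owl#> .
        @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

        ex:Patient         a owl:Class .                      # class
        ex:MedicalDevice   a owl:Class .                      # class
        ex:usesDevice      a owl:ObjectProperty .             # relates two individuals
        ex:hasWeight       a owl:DatatypeProperty .           # relates an individual to a value

        ex:patient01       a ex:Patient ;                     # individual (member of a class)
                           ex:usesDevice ex:weighingScale01 ;
                           ex:hasWeight  ""72.5""^^xsd:double .
        ex:weighingScale01 a ex:MedicalDevice .
    ";

    static void Main()
    {
        System.Console.WriteLine(Fragment);
    }
}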
2) Rules: Generally,
ontology-based solutions combine knowledge presented in ontologies with dynamic
knowledge presented by the use of rules. A system based on the use of rules
usually contains a set of if-then rules (which indicate what should be done
according to a situation) and a rule engine used to apply them. By using rules,
the behavior of individuals can be expressed inside a domain. Hence, they can
be used to generate new knowledge and can also be used to provide personalized
services. One of the most popular languages for rules definition is SWRL.
However, in our study, we used SPARQL to
define some rules. Although SPARQL is a query language, it can be used as a rule language by
combining the CONSTRUCT clause and FILTER restrictions. On the one hand, the CONSTRUCT
query form returns a single RDF graph built based on the results of matching
with the graph pattern of the query and by taking the specified graph template.
On the other hand, the FILTER clause can be used to restrict solutions to those
which the filter expression considers as TRUE. Only if the filter function
evaluates to true is the solution to be included in the solution sequence. Note
that although this language was good enough for our purpose, its limitations
should be studied for other purposes (e.g., recursive tasks) and the adequacy
of SWRL could be studied for complex applications.
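The actual rules used in the study are not reproduced in this text, so the following is only an invented example of the mechanism described: a SPARQL CONSTRUCT query, held in a C# string, whose FILTER restriction keeps only the solutions where a threshold is exceeded and whose template then asserts a new triple.

class SparqlRuleExample
{
    // Hypothetical if-then rule: if a patient's recorded weight exceeds a
    // threshold, construct a triple flagging an abnormal finding. CONSTRUCT
    // builds the result graph from the template; FILTER restricts the
    // solutions to those for which the condition evaluates to true.
    public const string Rule = @"
        PREFIX ex: <http://example.org/demo#>

        CONSTRUCT { ?patient ex:hasAbnormalFinding ex:overweightAlert . }
        WHERE {
            ?patient a ex:Patient ;
                     ex:hasWeight ?weight .
            FILTER (?weight > 90.0)
        }
    ";

    static void Main()
    {
        System.Console.WriteLine(Rule);
    }
}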
WEB SERVICES
Web services are used in this study as
software technology to access and exchange information modeled by the ontology.
According to the W3C, a WS is a “software system designed to support
interoperable machine-to-machine interaction over a communication network”.
Systems may interact with the web services by exchanging SOAP messages
serialized in XML for its message format and sent over other application layer protocols,
usually HTTP. Although SOAP-based web services are the most popular types of
WSs, there are other styles of programming a WS such as the REST style.
1) REST Style for Designing Web Services:
REST
is a style of software architecture for distributed hypermedia systems such as
the World Wide Web first defined in 2000 by Fielding. This style is based on
the idea of transferring the representations of resources, a resource being any
item of interest. Key advantages of the REST architecture are the
scalability of components and the generality of interfaces. Although REST was initially
described in the context of HTTP, this paradigm can be applied to other
protocols or implementations. Web services can also be described using this
style. A WS implemented using HTTP and the principles of REST architecture is
designated as REST(ful) WS. Requests made from the client and responses from
the WS are used to transfer resource information. Each resource is identified
through a URI. Stateless behavior, representation of data using XML and/or JSON, and the explicit
use of HTTP methods (PUT, GET, POST, DELETE) to exchange resources are the key
characteristics of a REST(ful) WS.
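As a hedged illustration of these characteristics, the C# sketch below issues a plain HTTP GET against a resource URI using the HttpWebRequest class available in .NET 3.5; the URI and the expected XML representation are assumptions made only for the example.

using System;
using System.IO;
using System.Net;

class RestClientSketch
{
    static void Main()
    {
        // Hypothetical resource URI; in a REST(ful) WS each resource is
        // identified by a URI and manipulated with explicit HTTP methods.
        string uri = "https://ts.example.org/telemonitoring/ontology";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "GET";                 // retrieve a representation of the resource
        request.Accept = "application/xml";     // representations exchanged as XML (or JSON)

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine("Status: " + (int)response.StatusCode);
            Console.WriteLine(reader.ReadToEnd());   // the transferred representation
        }
    }
}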
4.1
MODULES:
MANAGEMENT
PROFILE:
DATA AND COMMUNICATION LAYER:
HG AND TS MANAGEMENT MODULES:
COMMUNICATION FLOW AND WORKFLOW:
4.3
MODULE DESCRIPTION:
CLINICAL
MANAGEMENT PROFILE:
COPD patients were identified as
candidates to be monitored at home sites. From a clinical point of view, it was
an interesting case study (some estimations suggest that up to 10% of the
European population suffers COPD). From a technical point of view, the case of
the COPD patient led to the definition of a complex technical management profile (because
different MDs are required to be used by the patient) and an interesting option to
test the performance of the agent. Hence, one patient profile was designed
according to the clinical HOTMES ontology and one technical management
profile was designed according to the technical HOTMES ontology.
The patient profile includes the required
tasks to monitor a COPD patient, such as controlling the FEV1 measurement in
order to detect the presence and severity of the airway obstruction. It was
configured by a primary care physician by means of published clinical
guidelines. The patient profile included 15 monitoring tasks, 11 analysis
tasks, 9 planning tasks, and 3 execution tasks. This configuration led to the
inclusion of 144 new instances and the configuration of 18 rules. The details of
this profile and its evaluation for configuring other types of profiles are
described elsewhere. The technical management profile was designed to monitor
the state of the MDs used by the COPD patient (a weighing scale, a blood
pressure monitor, a pulse-oximeter, and a glucometer) and the consumption of
resources of the corresponding HG. In addition, rules were configured and 83 new
instances were required in the technical management profile; additional
information on the application of the HOTMES ontology for technical tasks is
given elsewhere.
DATA AND COMMUNICATION LAYER:
In the data layer, the communication
between the end sites is established using WS technologies. Consequently, a WS
has been designed to be placed in the TS and also a web client to be installed
in the HG (to establish a communication with the TS). This communication allows
the HG to request its associated management profile from the TS and to
transmit acquired information from the HG to the TS.
A REST WS was developed in order to
enhance the scalability and flexibility of the architecture and improve the
performance (efficiency). This WS defines a set of operations
over the following resources: the OWL ontology, the rules (transferred by means
of an XML file), OWL individuals (sent using the IndividualWS structure),
datatype property values corresponding to an individual (identified by the
URI of the individual and the URI of the property, sent as a generic string type),
and inform messages that provide some control functions to the web pair
communication.
Each one of these resources is
identified by a URI, and a set of operations was defined for each particular
resource using HTTP methods (e.g., GET or PUT). This WS interface allows
information described in the ontology to be exchanged in a generic manner. This
is a key point that contributes to the reusability and easy extension of the architecture.
Described communication methods do not depend on the knowledge itself described
in the ontology (related to the service) but on the fact of using an ontology
to represent such knowledge. A summary of the resources and defined operations is
depicted in Table I. As mentioned in the description of the converter module, individuals
are exchanged by using a developed structure designated as IndividualWS.
Using OWL language, an individual of the ontology can be described as a member
of a class with individual axioms or facts as individual property values
(datatype and object properties).
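The text does not give the exact fields of the IndividualWS structure, so the C# sketch below is only a plausible guess at how such a generic wrapper for an OWL individual and its property values might look; every field name is an assumption.

using System;
using System.Collections.Generic;

// Hypothetical sketch of a structure for exchanging OWL individuals
// generically: the individual's URI, the class it belongs to, and its
// property values (datatype and object properties).
[Serializable]
public class IndividualWS
{
    public string IndividualUri;                                      // identifies the individual
    public string ClassUri;                                           // class the individual is a member of
    public List<PropertyValueWS> PropertyValues = new List<PropertyValueWS>();
}

[Serializable]
public class PropertyValueWS
{
    public string PropertyUri;       // datatype or object property
    public string Value;             // literal value, or URI of the related individual
    public bool IsObjectProperty;    // distinguishes the two kinds of property
}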
HG AND TS MANAGEMENT MODULES:
Two management modules and web
technology modules inside the HG and the TS constitute the main parts of the telemedicine
system (see Fig. 1). The modules that comprise the architecture have been
developed using .NET technologies. Specifically, the .NET framework (version 3.5)
has been used to process the ontology and create new instances, data
acquisition, and manipulation when the rules are applied. Regarding the web
modules, the components of the remote management module installed in the TS are
depicted in Fig. 1. This management module includes the following three
components:
1)
Ontology knowledge base module: This module contains the
ontology knowledge models and the instances of the registered management
profiles. The TDB triple-store has been used to store the ontology model and
new instances in this knowledge base module.
2)
Converter module: The communication module of this
architecture is mainly based on OWL instances exchanged generically by means of
a developed object structure named IndividualWS. The converter module is
used to wrap and unwrap the individuals structure used to exchange information with
web clients. Furthermore, this module incorporates some reasoning tasks.
Ontology-based reasoning is used in order to check instances before including
new information
in the model and to ensure the
consistency of the model.
3)
Rules module: This module is used to store rules
associated with each management profile. These rules are subsequently transferred
by means of an XML file. As shown in Fig. 1, an additional GUI is required in
order to make it easier for the EM, technical or clinical (physician), to define
the profiles and the rules. We are currently working on the development
of this GUI, combining ontology visualization techniques and usability methods.
The methodology used to design this interface is described elsewhere. The components of the management module
installed in the HG are likewise depicted in Fig. 1. This last management module
has been designated the “Semantic Autonomic Agent.” This module plays a key
role in the architecture. It is in charge of integrating incoming data and
executing the management tasks described in the management profile.
The communication between this agent and
the management module installed at the remote site is established through a web
client connection to the WS installed in the remote TS. The architecture of the
agent comprises the ontology knowledge base module, the rules module, the
converter module, and the following modules.
1) MAPE module: This module constitutes
the computing core of the agent. It will be used to run the tasks specified in
each management profile, and hence to execute the closed MAPE loop process.
2) Integrator module: Information
transferred by MDs and also contextual data provided by patients will be
acquired in this module, which integrates data coming from different data sources.
3) Reminders and alarms module: This
module includes clock functionalities to ask patients about data (reminders) or
to collect information from a specific software resource.
4) Actions module: This last module is
used to execute actions described within the execution tasks of the management
profile if an abnormal finding occurs.
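A hedged C# skeleton of how the MAPE module might drive the other modules is sketched below; the interfaces and method names are invented, since the text describes the module interactions only at a high level.

using System.Threading;

// Invented interfaces standing in for the agent's modules.
public interface IIntegratorModule { object CollectData(); }           // Monitor: MD and contextual data
public interface IAnalysisModule   { bool IsAbnormal(object data); }   // Analyze: apply the profile's analysis tasks
public interface IPlanningModule   { string PlanAction(object data); } // Plan: choose an intervention
public interface IActionsModule    { void Execute(string action); }    // Execute: run execution tasks or alarms

public class MapeModule
{
    private readonly IIntegratorModule monitor;
    private readonly IAnalysisModule analyzer;
    private readonly IPlanningModule planner;
    private readonly IActionsModule executor;

    public MapeModule(IIntegratorModule m, IAnalysisModule a, IPlanningModule p, IActionsModule e)
    {
        monitor = m; analyzer = a; planner = p; executor = e;
    }

    // One pass of the closed MAPE loop.
    public void RunOnce()
    {
        object data = monitor.CollectData();          // Monitor
        if (analyzer.IsAbnormal(data))                // Analyze
        {
            string action = planner.PlanAction(data); // Plan
            executor.Execute(action);                 // Execute
        }
    }

    // Periodic execution of the loop.
    public void RunLoop(int periodMilliseconds)
    {
        while (true)
        {
            RunOnce();
            Thread.Sleep(periodMilliseconds);
        }
    }
}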
FLOW AND WORKFLOW PERFORMANCE:
All the modules and sources involved in
the management procedure are shown in Fig. 3. The first step consists in the download
of the management profile (patient profile or technical profile). First
of all, an instance of the management profile should be configured by an
EM placed at a remote site. Furthermore, a set of individual rules should be configured
for each particular management purpose. As shown in Fig. 3, the designed GUI
helps the physician with the ontology instantiation process and the rules
definition. The outputs of this interface (which uses selected classes of the
ontology as a navigation tool) are a personalized management profile and
a set of rules gathered in an XML file. Other functionalities such as queries
over acquired data or crossing data among patients to take some decisions could
be of interest to be included in this tool.
The communication is always initiated by
the user (web client at HG). Through a connection to the web service, the user
(the patient in the telemonitoring scenario) situated at the home site will acquire
the required management profile. As shown in Fig. 3, if the user
requests an update of his/her management profile, then the version of the
available profile at the TS will be requested for its evaluation (GET property
value). When the user requests a new management profile, first, it is
checked whether the ontology to download it is available (GET ontology). After that,
the rules and the management profile will be downloaded when required.
The methods involved are 1) GET (rules)
and 2) GET (individual). Note that the TLS authentication phase is not depicted
in Fig. 3, but it is initially carried out in order to allow the web client
connection to the web service. As depicted in Fig. 3, the associated management
profile is extracted from the ontology and the instances of the ontology
managed by Jena are wrapped into the IndividualWS structure through the
converter module. Once the management profile is in the HG, it will be
processed into the converter module, unwrapped, and inserted as individuals
managed by Jena in the ontology. Once the management profile has been
included in the ontology knowledge base module of the HG, it will be evaluated in
the MAPE module and the management procedure will be performed by running the
tasks specified in the profile.
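A hedged sketch of the download sequence just described is given below in C#; the base URI, paths, and helper are invented, and the TLS client authentication that precedes these calls is assumed to be already in place.

using System;
using System.IO;
using System.Net;

class ProfileDownloadSketch
{
    const string BaseUri = "https://ts.example.org/telemonitoring/";   // invented base URI of the TS web service

    static string Get(string relativePath)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(BaseUri + relativePath);
        request.Method = "GET";
        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }

    static void Main()
    {
        // The order of operations follows the workflow in the text.
        string version  = Get("profile/version");      // GET property value: is an update needed?
        string ontology = Get("ontology");             // GET ontology: check the model is available
        string rules    = Get("rules");                // GET rules: XML file with the profile's rules
        string profile  = Get("individuals/profile");  // GET individual: the management profile itself

        Console.WriteLine("Version {0}: {1} ontology bytes, {2} rule bytes, {3} profile bytes.",
            version, ontology.Length, rules.Length, profile.Length);
    }
}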
CHAPTER 5
5.0
SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the
project is analyzed in this phase and a business proposal is put forth with a
very general plan for the project and some cost estimates. During system
analysis the feasibility study of the proposed system is to be carried out.
This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding
of the major requirements for the system is essential.
Three key considerations involved in the feasibility
analysis are
ECONOMICAL
FEASIBILITY
TECHNICAL
FEASIBILITY
SOCIAL
FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic
impact that the system will have on the organization. The amount of fund that
the company can pour into the research and development of the system is
limited. The expenditures must be justified. Thus the developed system is well
within the budget, and this was achieved because most of the technologies used
are freely available. Only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical
feasibility, that is, the technical requirements of the system. Any system
developed must not have a high demand on the available technical resources, as
this would lead to high demands being placed on the client. The developed system
has modest requirements, as only minimal or no changes are required for
implementing this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study is to check the level of
acceptance of the system by the user. This includes the process of training the
user to use the system efficiently. The user must not feel threatened by the
system, instead must accept it as a necessity. The level of acceptance by the
users solely depends on the methods that are employed to educate the user about
the system and to make him familiar with it. His level of confidence must be
raised so that he is also able to make some constructive criticism, which is
welcomed, as he is the final user of the system.
5.2 SYSTEM TESTING:
Testing is a
process of checking whether the developed system is working according to the
original objectives and requirements. It is a set of
activities that can be planned in advance and conducted systematically. Testing
is vital to the success of the system. System testing makes a logical
assumption that if all the parts of the system are correct, the goal will be
successfully achieved. Inadequate testing, or no testing at all, leads to errors that
may not appear until many months later. This creates two problems: the time lag
between the cause and the appearance of the problem, and the effect of the
system errors on the files and records within the system. A small system error
can conceivably explode into a much larger problem. Effective testing early in
the process translates directly into long-term cost savings from a reduced
number of errors. Another reason for system testing is its utility as a
user-oriented vehicle before implementation. The best program is worthless if
it does not produce correct outputs.
5.2.1 UNIT TESTING:
A program
represents the logical elements of a system. For a program to run
satisfactorily, it must compile and test data correctly and tie in properly
with other programs. Achieving an error free program is the responsibility of
the programmer. Program testing checks
for two types
of errors: syntax
and logical. Syntax error is a
program statement that violates one or more rules of the language in which it
is written. An improperly defined field dimension or omitted keywords are
common syntax errors. These errors are shown through error messages generated by
the computer. For Logic errors the programmer must examine the output
carefully.
UNIT TESTING:
Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.
Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.
5.1.3 FUNCTIONAL TESTING:
Functional
testing of an application is used to prove the application delivers correct
results, using enough inputs to give an adequate level of confidence that will
work correctly for all sets of inputs. The functional testing will need to
prove that the application works for each client type and that the personalization
functions work correctly. When a program is tested, the actual output is
compared with the expected output. When there is a discrepancy the sequence of
instructions must be traced to determine the problem. The process is facilitated by breaking the
program into self-contained portions, each of which can be checked at certain
key points. The idea is to compare program values against desk-calculated
values to isolate the problems.
FUNCTIONAL TESTING:
Description: Test for all modules.
Expected result: All peers should communicate in the group.
Description: Test for various peers in a distributed network framework, as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.
5.1. 4 NON-FUNCTIONAL TESTING:
The Non Functional software testing
encompasses a rich spectrum of testing strategies, describing the expected
results for every test case. It uses symbolic analysis techniques. This testing
used to check that an application will work in the operational environment.
Non-functional testing includes:
Load
testing
Performance
testing
Usability
testing
Reliability
testing
Security
testing
5.1.5 LOAD TESTING:
An important
tool for implementing system tests is a Load generator. A Load generator is
essential for testing quality requirements such as performance and stress. A
load can be a real load, that is, the system can be put under test to real
usage by having actual telephone users connected to it. They will generate test
input data for system test.
Load Testing
Description: It is necessary to ascertain that the application behaves correctly under load when a ‘Server busy’ response is received.
Expected result: Should designate another active node as a Server.
5.1.5 PERFORMANCE TESTING:
Performance
tests are utilized in order to determine the widely defined performance of the
software system such as execution time associated with various parts of the code,
response time and device utilization. The intent of this testing is to identify
weak points of the software system and quantify its shortcomings.
PERFORMANCE TESTING:
Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values and produce accurate results in the expected time.
5.1.6 RELIABILITY TESTING:
The software
reliability is the ability of a system or component to perform its required
functions under stated conditions for a specified period of time and it is
being ensured in this testing. Reliability can be expressed as the ability of
the software to reveal defects under testing conditions, according to the
specified requirements. It is the probability that a software system will operate
without failure under given conditions for a given time interval, and it focuses
on the behavior of the software element. It forms a part of the software
quality control team.
RELIABILITY TESTING:
Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.
5.1.7 SECURITY TESTING:
Security
testing evaluates system characteristics that relate to the availability,
integrity and confidentiality of the system data and services. Users/Clients
should be encouraged to make sure their security needs are very clearly known
at requirements time, so that the security issues can be addressed by the
designers and testers.
SECURITY TESTING:
Description: Checking that the user identification is authenticated.
Expected result: In case of failure, it should not be connected in the framework.
Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.
5.1.7 WHITE BOX TESTING:
White box
testing, sometimes called glass-box testing, is a test-case design method that
uses the control structure of the procedural design to derive test cases. Using
the white box testing method, the software engineer can derive test cases.
White box testing focuses on the inner structure of the software to be tested.
5.1.8 WHITE BOX TESTING:
Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.
Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.
Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.
5.1.9 BLACK BOX TESTING:
Black box
testing, also called behavioral testing, focuses on the functional requirements
of the software. That is, black box testing enables the software engineer to
derive sets of input conditions that will fully exercise all functional
requirements for a program. Black box testing is not an alternative to white box
techniques. Rather, it is a complementary approach that is likely to uncover a
different class of errors than white box methods. Black box testing attempts to
find errors with a focus on the inputs, outputs, and principal functions of a
software module. The starting point of black box testing is either a
specification or code. The contents of the box are hidden, and the stimulated
software should produce the desired results.
5.1.10 BLACK BOX TESTING:
Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.
Description: To check for interface errors.
Expected result: The entire interface must function normally.
Description: To check for errors in data structures or external database access.
Expected result: The database update and retrieval must be done correctly.
Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.
All
the above system testing strategies are carried out during development, as the
documentation and institutionalization of the proposed goals and related
policies are essential.
CHAPTER
7
7.0 SOFTWARE SPECIFICATION:
7.1 FEATURES OF .NET:
Microsoft
.NET is a set of Microsoft software technologies for rapidly building and
integrating XML Web services, Microsoft Windows-based applications, and Web
solutions. The .NET Framework is a language-neutral platform for writing
programs that can easily and securely interoperate. There’s no language barrier
with .NET: there are numerous languages available to the developer including
Managed C++, C#, Visual Basic and JScript.
The .NET
framework provides the foundation for components to interact seamlessly,
whether locally or remotely on different platforms. It standardizes common data
types and communications protocols so that components created in different
languages can easily interoperate.
“.NET” is
also the collective name given to various software components built upon the
.NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so
on).
7.2 THE .NET FRAMEWORK
The .NET Framework has
two main parts:
1. The Common Language
Runtime (CLR).
2. A hierarchical set of
class libraries.
The CLR is
described as the “execution engine” of .NET. It provides the environment within
which programs run. The most important features are
Conversion from a
low-level assembler-style language, called Intermediate Language (IL), into
code native to the platform being executed on.
Memory management,
notably including garbage collection.
Checking and enforcing
security restrictions on the running code.
Loading and executing
programs, with version control and other such features.
The following features
of the .NET framework are also worth description:
Managed
Code
The code
that targets .NET, and which contains certain extra information – “metadata” –
to describe itself. Whilst both managed and unmanaged code can run in the
runtime, only managed code contains the information that allows the CLR to
guarantee, for instance, safe execution and interoperability.
Managed Data
With
Managed Code comes Managed Data. The CLR provides memory allocation and
deallocation facilities, and garbage collection. Some .NET languages use Managed
Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others,
namely C++, do not. Targeting CLR can, depending on the language you’re using,
impose certain constraints on the features available. As with managed and
unmanaged code, one can have both managed and unmanaged data in .NET
applications – data that doesn’t get garbage collected but instead is looked
after by unmanaged code.
Common Type System
The CLR
uses something called the Common Type System (CTS) to strictly enforce
type-safety. This ensures that all classes are compatible with each other, by
describing types in a common way. The CTS defines how types work within the runtime,
which enables types in one language to interoperate with types in another
language, including cross-language exception handling. As well as ensuring that
types are only used in appropriate ways, the runtime also ensures that code
doesn’t attempt to access memory that hasn’t been allocated to it.
Common Language Specification
The CLR
provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming
language, a set of language features and rules for using them called the Common
Language Specification (CLS) has been defined. Components that follow these
rules and expose only CLS features are considered CLS-compliant.
7.3 THE CLASS LIBRARY
.NET
provides a single-rooted hierarchy of classes, containing over 7000 types. The
root of the namespace is called System; this contains basic types like Byte,
Double, Boolean, and String, as well as Object. All objects derive from System.Object.
As well as objects, there are value types. Value types can be allocated
on the stack, which can provide useful flexibility. There are also efficient
means of converting value types to object types if and when necessary.
The set of
classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.
The class
library is subdivided into a number of sets (or namespaces), each providing
distinct areas of functionality, with dependencies between the namespaces kept
to a minimum.
7.4 LANGUAGES SUPPORTED
BY .NET
The
multi-language capability of the .NET Framework and Visual Studio .NET enables
developers to use their existing programming skills to build all types of
applications and XML Web services. The .NET framework supports new versions of
Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but
there are also a number of new additions to the family.
Visual
Basic .NET has been updated to include many new and improved language features
that make it a powerful object-oriented programming language. These features
include inheritance, interfaces, and overloading, among others. Visual Basic
also now supports structured exception handling, custom attributes and also
supports multi-threading.
Visual
Basic .NET is also CLS compliant, which means that any CLS-compliant language
can use the classes, objects, and components you create in Visual Basic .NET.
Managed
Extensions for C++ and attributed programming are just some of the enhancements
made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.
C# is
Microsoft’s new language. It’s a C-style language that is essentially “C++ for
Rapid Application Development”. Unlike other languages, its specification is
just the grammar of the language. It has no standard library of its own, and
instead has been designed with the intention of using the .NET libraries as its
own.
Microsoft
Visual J# .NET provides the easiest transition for Java-language developers
into the world of XML Web Services and dramatically improves the
interoperability of Java-language programs with existing software written in a
variety of other programming languages.
Active
State has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be
integrated into the Visual Studio .NET environment. Visual Perl includes
support for Active State’s Perl Dev Kit.
Other languages for
which .NET compilers are available include
FORTRAN
COBOL
Eiffel
Fig. 1: The .NET Framework architecture, layered as ASP.NET and XML Web Services, Windows Forms, the Base Class Libraries, the Common Language Runtime, and the Operating System.
C#.NET is
also compliant with CLS (Common Language Specification) and supports structured
exception handling. CLS is a set of rules and constructs that are supported by
the CLR (Common Language Runtime). CLR is the runtime environment provided by
the .NET Framework; it manages the execution of the code and also makes the
development process easier by providing services.
C#.NET is
a CLS-compliant language. Any objects, classes, or components that are created in
C#.NET can be used in any other CLS-compliant language. In addition, we can use
objects, classes, and components created in other CLS-compliant languages in
C#.NET. The use of CLS ensures complete interoperability among applications,
regardless of the languages used to create the application.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas
destructors are used to destroy them. In other words, destructors are used to
release the resources allocated to the object. In C#.NET, this clean-up work is done
in the Finalize method (written using destructor syntax). The Finalize method is used
to complete the tasks that must be performed when an object is destroyed, and it is
called automatically when the object is destroyed. In addition, the Finalize method
can be called only from the class it belongs to or from derived classes.
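As an illustration, a constructor and a finalizer in C# might look like the following minimal sketch (the class and member names are made up for this example):

using System;

class Connection
{
    private string name;

    // Constructor: runs when the object is created and initializes its state.
    public Connection(string name)
    {
        this.name = name;
        Console.WriteLine("Connection " + name + " opened.");
    }

    // Finalizer (destructor): runs when the garbage collector destroys the object.
    ~Connection()
    {
        Console.WriteLine("Connection " + name + " finalized.");
    }
}

class Program
{
    static void Main()
    {
        new Connection("A");              // the object becomes unreachable immediately
        GC.Collect();                     // request a collection so the finalizer runs
        GC.WaitForPendingFinalizers();    // wait until the finalizer has executed
    }
}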
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The
.NET Framework monitors allocated resources, such as objects and variables. In
addition, the .NET Framework automatically releases memory for reuse by
destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that
are not currently in use by applications. When the garbage collector comes
across an object that is marked for garbage collection, it releases the memory
occupied by the object.
OVERLOADING
Overloading is another feature in C#. Overloading enables us
to define multiple procedures with the same name, where each procedure has a
different set of arguments. Besides using overloading for procedures, we can
use it for constructors and properties in a class.
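A minimal sketch of overloading in C# (illustrative names only): the three Print methods share a name but differ in their parameter lists, and the compiler selects the right one from the arguments.

using System;

class Printer
{
    // Three methods share the name Print but differ in their parameter lists.
    public void Print(int value) { Console.WriteLine("int: " + value); }
    public void Print(string value) { Console.WriteLine("string: " + value); }
    public void Print(double value, int decimals)
    {
        Console.WriteLine("double: " + Math.Round(value, decimals));
    }
}

class Program
{
    static void Main()
    {
        var printer = new Printer();
        printer.Print(42);            // calls the int overload
        printer.Print("hello");       // calls the string overload
        printer.Print(3.14159, 2);    // calls the (double, int) overload
    }
}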
MULTITHREADING:
C#.NET also supports multithreading. An application that
supports multithreading can handle multiple tasks simultaneously, and we can use
multithreading to decrease the time taken by an application to respond to user
interaction.
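The following minimal sketch starts a background thread with the System.Threading API so that the main thread remains free; the printed messages are illustrative only.

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Start a background worker so the main thread stays responsive.
        Thread worker = new Thread(() =>
        {
            for (int i = 0; i < 3; i++)
            {
                Console.WriteLine("Working... " + i);
                Thread.Sleep(500);
            }
        });
        worker.Start();

        Console.WriteLine("Main thread is free to respond to the user.");
        worker.Join();   // wait for the worker before exiting
    }
}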
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to
detect and handle errors at runtime. In C#.NET, we use
try…catch…finally statements to create exception handlers. Using
try…catch…finally statements, we can create robust and effective exception
handlers to improve the reliability of our application.
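A small try…catch…finally sketch: the catch block handles the expected exception type, and the finally block always runs, whether or not an exception was thrown.

using System;

class Program
{
    static void Main()
    {
        try
        {
            int[] numbers = { 1, 2, 3 };
            Console.WriteLine(numbers[5]);          // throws IndexOutOfRangeException
        }
        catch (IndexOutOfRangeException ex)
        {
            Console.WriteLine("Handled: " + ex.Message);
        }
        finally
        {
            Console.WriteLine("Clean-up code always runs.");
        }
    }
}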
7.5
THE .NET FRAMEWORK
The .NET Framework is a new computing platform that
simplifies application development in the highly distributed environment of the
Internet.
OBJECTIVES OF .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment and versioning conflicts and guarantees safe execution of code.
3. To eliminate the performance problems of scripted or interpreted environments.
There are different types of applications, such as Windows-based applications and Web-based applications.
7.6 FEATURES OF SQL-SERVER
The OLAP
Services feature available in SQL Server version 7.0 is now called SQL Server
2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component.
The Repository component available in SQL Server version 7.0 is now called
Microsoft SQL Server 2000 Meta Data Services. References to the component now
use the term Meta Data Services. The term repository is used only in reference
to the repository engine within Meta Data Services.
A SQL-SERVER database consists of the following types of objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
7.7 TABLE:
A database
is a collection of data about a specific topic.
VIEWS OF
TABLE:
We can work with a table in two views:
1.
Design View
2.
Datasheet View
Design
View
To build or modify the structure of a table, we work in the table design view.
We can specify what kind of data each field will hold.
Datasheet
View
To add, edit, or analyse the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that is asked of the data. Access gathers the data that answers the
question from one or more tables. The data that makes up the answer is either a
dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time we run a
query, we get the latest information in the dynaset. Access either displays the
dynaset or snapshot for us to view, or performs an action on it, such as deleting
or updating.
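For the front end and back end listed in this project (C# with SQL Server), a query of this kind is typically issued through ADO.NET. The connection string, table, and column names below are hypothetical placeholders; the sketch only shows the general pattern of a parameterized query.

using System;
using System.Data.SqlClient;

class QueryExample
{
    static void Main()
    {
        // Connection string and table/column names are placeholders.
        string connStr = "Server=.;Database=CloudDB;Integrated Security=true";

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT FileName, Owner FROM StoredFiles WHERE Owner = @owner", conn))
        {
            cmd.Parameters.AddWithValue("@owner", "alice");
            conn.Open();

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader["FileName"] + " owned by " + reader["Owner"]);
                }
            }
        }
    }
}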
CHAPTER
7
APPENDIX
7.1
SAMPLE SOURCE CODE
7.2
SAMPLE OUTPUT
CHAPTER 8
8.1
CONCLUSION:
This study describes an architecture to
enable data integration and its management in an ontology-driven telemonitoring
solution implemented in home-based scenarios. This is an innovative
architecture that facilitates the integration of several management services at
home sites using the same software engine. The architecture has been
specifically studied to support both technical and clinical services in the
telemonitoring scenario, thus avoiding installing additional software for
technical purposes.
The HOTMES ontology is used at the conceptual layer to describe a management profile.
On the one hand, our ontology contributes to integrating data and its management,
offering benefits in terms of knowledge representation, workflow organization, and
self-management capabilities to the system. Its combination with rules allows
personalized services to be provided.
This application ontology could be improved in the future by introducing concepts
from a domain ontology. On the other hand, the data and communication layer of the
architecture, based on REST web services, was oriented toward minimizing the
consumption of resources and providing reusable key ideas for future ontology-based
architecture developments.
8.2
FUTURE ENHANCEMENT
This solution represents a further step toward the possibility of establishing more
effective home-based telemonitoring systems and thus improving the remote care of
patients with chronic diseases. As has been reported, good telemedicine
implementations are developed after a process in which the dynamic interaction among
a combination of socio-technical and clinical factors is optimized. This means that
additional work should be done (e.g., to measure the patient–doctor interaction while
using the system and also the trustworthiness of the system over a long period of
time) before adopting this solution in a real scenario. For its complete development,
first, a concordance study should be conducted in order to determine its clinical
efficiency. Then, a social impact study should be conducted in order to determine how
the system improves the patient's quality of life. Regarding these last studies, the
results presented in the literature evidence the benefits of telemonitoring systems
while linking their success to usability design issues and features.
1.1 ABSTRACT:
Cloud computing is an emerging computing paradigm in which resources of the
computing infrastructure are provided as services over the Internet. As promising as
it is, this paradigm also brings many new challenges for data security and access
control when users outsource sensitive data for sharing on cloud servers, which are
not within the same trusted domain as the data owners. Existing solutions, however,
inevitably introduce a heavy computation overhead on the data owner for key
distribution and data management when fine-grained data access control is required,
and consequently do not scale well. The problem of simultaneously achieving
fine-grainedness, scalability, and data confidentiality of access control still
remains unresolved. This paper addresses this open issue by, on the one hand,
defining and enforcing access policies based on data attributes and, on the other
hand, allowing the data owner to delegate most of the computation tasks involved in
fine-grained data access control to untrusted cloud servers without disclosing the
underlying data contents. We achieve this goal by exploiting and combining techniques
of decentralized Key-Policy Attribute-Based Encryption (KP-ABE). Extensive analysis
shows that the proposed approach is highly efficient and secure.
1.2
INTRODUCTION
Research in cloud computing is receiving
a lot of attention from both academic and industrial worlds. In cloud
computing, users can outsource their computation and storage to servers (also
called clouds) using the Internet. This frees users from the hassles of maintaining
resources on-site. Clouds can provide several types of services like
applications (e.g., Google Apps, Microsoft online), infrastructures (e.g.,
Amazon’s EC2, Eucalyptus, Nimbus), and platforms to help developers write
applications (e.g., Amazon’s S3, Windows Azure).
Much of the data stored in clouds is
highly sensitive, for example, medical records and social networks. Security
and privacy are thus very important issues in cloud computing. On the one hand, the
user should authenticate itself before initiating any transaction, and on the
other hand, it must be ensured that the cloud does not tamper with the data
that is outsourced. User privacy is also required so that the cloud or other
users do not know the identity of the user. The cloud can hold the user
accountable for the data it outsources, and likewise, the cloud is itself
accountable for the services it provides. The validity of the user who stores
the data is also verified. Apart from the technical solutions to ensure
security and privacy, there is also a need for law enforcement.
Recently, Wang et al. addressed
secure and dependable cloud storage. Cloud servers are prone to Byzantine failure,
where a storage server can fail in arbitrary ways. The cloud is also prone to
data modification and server colluding attacks. In a server colluding attack, the
adversary can compromise storage servers, so that it can modify data files as
long as they are internally consistent. To provide secure data storage, the
data needs to be encrypted. However, the data is often modified and this
dynamic property needs to be taken into account while designing efficient
secure storage techniques.
Efficient search on encrypted data is
also an important concern in clouds. The clouds should not know the query but
should be able to return the records that satisfy the query. This is achieved
by means of searchable encryption. The keywords are sent to the cloud
encrypted, and the cloud returns the result without knowing the actual keyword
for the search. The problem here is that the data records should have keywords
associated with them to enable the search. The correct records are returned
only when searched with the exact keywords.
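The sketch below illustrates only the basic idea behind exact-keyword search on encrypted data: the client derives an opaque token from the keyword with a keyed hash, and the cloud matches tokens without ever seeing the keyword. Real searchable-encryption schemes are considerably more sophisticated; the keywords and record IDs here are made up.

using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

class KeywordSearchSketch
{
    // Client-side: derive an opaque search token from a keyword with a secret key.
    static string Token(byte[] key, string keyword)
    {
        using (var hmac = new HMACSHA256(key))
        {
            return Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(keyword)));
        }
    }

    static void Main()
    {
        byte[] userKey = Encoding.UTF8.GetBytes("secret key known only to the user");

        // The cloud stores record IDs indexed by keyword tokens, not keywords.
        var cloudIndex = new Dictionary<string, List<int>>
        {
            { Token(userKey, "diabetes"), new List<int> { 1, 7 } },
            { Token(userKey, "cardiology"), new List<int> { 3 } }
        };

        // Search: the cloud only ever sees the token for "diabetes".
        string query = Token(userKey, "diabetes");
        List<int> hits;
        if (cloudIndex.TryGetValue(query, out hits))
        {
            Console.WriteLine("Matching records: " + string.Join(", ", hits));
        }
    }
}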
Security and privacy protection in
clouds are being explored by many researchers. Wang et al. addressed
storage security using Reed-Solomon erasure-correcting codes. Authentication of
users using public key cryptographic techniques has also been studied. Many
homomorphic encryption techniques have been suggested to ensure that the cloud
is not able to read the data while performing computations on them. Using homomorphic
encryption, the cloud receives ciphertext of the data and performs computations
on the ciphertext and returns the encoded value of the result. The user is able
to decode the result, but the cloud does not know what data it has operated on.
In such circumstances, it must be possible for the user to verify that the
cloud returns correct results. Accountability of clouds is a very challenging
task and involves
technical issues and law enforcement.
Neither clouds nor users should deny any operations performed or requested. It
is important to have a log of the transactions performed; however, it is an
important concern to decide how much information to keep in the log.
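To make the homomorphic idea concrete, the toy example below uses the multiplicative homomorphism of textbook RSA: the cloud multiplies two ciphertexts, and the user decrypts the product without the cloud ever seeing the plaintexts. This is purely illustrative (tiny primes, no padding, not secure), and it assumes System.Numerics.BigInteger, which belongs to a newer .NET version than the one listed in the software requirements.

using System;
using System.Numerics;

class HomomorphicToy
{
    static void Main()
    {
        // Textbook RSA with tiny primes, for illustration only -- not secure.
        BigInteger p = 61, q = 53;
        BigInteger n = p * q;                // modulus (3233)
        BigInteger e = 17;                   // public exponent
        BigInteger phi = (p - 1) * (q - 1);
        BigInteger d = ModInverse(e, phi);   // private exponent (2753)

        BigInteger m1 = 7, m2 = 6;
        BigInteger c1 = BigInteger.ModPow(m1, e, n);
        BigInteger c2 = BigInteger.ModPow(m2, e, n);

        // The "cloud" multiplies ciphertexts without ever seeing m1 or m2.
        BigInteger cProduct = (c1 * c2) % n;

        // The user decrypts and recovers m1 * m2.
        BigInteger result = BigInteger.ModPow(cProduct, d, n);
        Console.WriteLine(result);           // prints 42
    }

    // Extended Euclidean algorithm for the modular inverse.
    static BigInteger ModInverse(BigInteger a, BigInteger m)
    {
        BigInteger g = m, x = 0, y = 1, b = a % m;
        while (b != 0)
        {
            BigInteger q = g / b;
            BigInteger t = g - q * b; g = b; b = t;
            t = x - q * y; x = y; y = t;
        }
        return ((x % m) + m) % m;
    }
}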
Accountability has been addressed in
TrustCloud. Secure provenance has also been studied. Consider the following
situation: a law student, Alice, wants to send a series of reports about some
malpractices by authorities of University X to all the professors of University
X, Research chairs of universities in the country, and students belonging to
Law department in all universities in the province. She wants to remain
anonymous while publishing all evidence of malpractice. She stores the
information in the cloud.
Access control is important in such
case, so that only authorized users can access the data. It is also important
to verify that the information comes from a reliable source. The problems of
access control, authentication, and privacy protection should be solved
simultaneously. We address this problem in its entirety in this paper. Access
control in clouds is gaining attention because it is important that only
authorized users have access to valid service. A huge amount of information is
being stored in the cloud, and much of this is sensitive information. Care
should be taken to ensure access control of this sensitive information which
can often be related to health, important documents (as in Google Docs or
Dropbox) or even personal information (as in social networking). There are
broadly three types of access control: User Based Access Control (UBAC),
Role Based Access Control (RBAC), and Attribute Based Access Control (ABAC).
In UBAC, the access control list (ACL) contains the list of users who are
authorized to access data. This is not feasible in clouds where there are many
users. In RBAC, users are classified based on their individual roles. Data can
be accessed by users who have matching roles. The roles are defined by the
system. For example, only faculty members and senior secretaries might have
access to data but not the junior secretaries. ABAC is more extended in scope,
in which users are given attributes, and the data has attached access policy.
Only users with valid set of attributes, satisfying the access policy, can
access the data. For instance, in the above example certain records might be
accessible by faculty members with more than 10 years of research experience or
by senior secretaries with more than 8 years of experience. The pros and cons of
RBAC and ABAC have been discussed in the literature. There has been some work on
ABAC in clouds, and all of these works use a cryptographic primitive known as
Attribute Based Encryption (ABE). The eXtensible Access Control Markup Language
(XACML) has also been proposed for ABAC in clouds. An area
where access control is widely being used is health care. Clouds are being used
to store sensitive information about patients to enable access to medical
professionals, hospital staff, researchers, and policy makers. It is important
to control the access of data so that only authorized users can access the
data. Using ABE, the records are encrypted under some access policy and stored
in the cloud. Users are given sets of attributes and corresponding keys. Only
when the users have matching set of attributes, can they decrypt the
information stored in the cloud. Access control in health care has been
studied. Access control is also gaining importance in online social networking
where users (members) store their personal information, pictures, videos and
share them with selected groups of users or communities they belong to. Access
control in online social networking has been studied. Such data are being
stored in clouds.
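A minimal sketch of the ABAC idea with hypothetical attribute names: a record carries an access policy over attributes, and access is granted only if the user's attribute set satisfies that policy. A real ABE-based system enforces this check cryptographically rather than with a plain-text test.

using System;
using System.Collections.Generic;

class AbacSketch
{
    static void Main()
    {
        // Attributes issued to a user.
        var attributes = new Dictionary<string, int>
        {
            { "role:faculty", 1 },
            { "researchYears", 12 }
        };

        // Access policy attached to a record:
        // faculty with more than 10 years of research experience,
        // OR senior secretary with more than 8 years of experience.
        Func<Dictionary<string, int>, bool> policy = attrs =>
            (attrs.ContainsKey("role:faculty") && Get(attrs, "researchYears") > 10) ||
            (attrs.ContainsKey("role:seniorSecretary") && Get(attrs, "serviceYears") > 8);

        Console.WriteLine(policy(attributes) ? "Access granted" : "Access denied");
    }

    static int Get(Dictionary<string, int> attrs, string name)
    {
        int value;
        return attrs.TryGetValue(name, out value) ? value : 0;
    }
}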
It is very important that only the
authorized users are given access to that information. A similar situation
arises when data is stored in clouds, for example in Dropbox, and shared with
certain groups of people. It is just not enough to store the contents securely
in the cloud but it might also be necessary to ensure anonymity of the user.
For example, a user would like to store some sensitive information but does not
want to be recognized. The user might want to post a comment on an article, but
does not want his/her identity to be disclosed. However, the user should be
able to prove to the other users that he/she is a valid user who stored the
information without revealing the identity. There are cryptographic protocols
like ring signatures, mesh signatures, group signatures, which can be used in
these situations. Ring signature is not a feasible option for clouds where
there are a large number of users. Group signatures assume the pre-existence of
a group, which might not be possible in clouds. Mesh signatures do not ensure whether
the message is from a single user or from many users colluding together. For these
reasons, a new protocol known as Attribute Based Signature (ABS) has been
applied. ABS was proposed by Maji et al. In ABS, users have a claim
predicate associated with a message. The claim predicate helps to identify the
user as an authorized one, without revealing its identity. Other users or the
cloud can verify the user and the validity of the message stored. ABS can be
combined with ABE to achieve authenticated access control without disclosing
the identity of the user to the cloud.
Existing work on access control in clouds
is centralized in nature. Except for a few, all other schemes use attribute based
encryption (ABE). One scheme uses a symmetric key approach and does not support
authentication, and the other schemes do not support authentication either. Earlier work
by Zhao et al. provides privacy preserving authenticated access control
in cloud. However, the authors take a centralized approach where a single key
distribution center (KDC) distributes secret keys and attributes to all users.
Unfortunately, a single KDC is not only a single point of failure but difficult
to maintain because of the large number of users that are supported in a cloud
environment. We, therefore, emphasize that clouds should take a decentralized
approach while distributing secret keys and attributes to users. It is also
quite natural for clouds to have many KDCs in different locations in the world.
Although Yang et al. proposed a decentralized approach, their technique
does not authenticate users, who want to remain anonymous while accessing the
cloud. In an earlier work, Ruj et al. proposed a distributed access
control mechanism in clouds. However, the scheme did not provide user
authentication. The other drawback was that a user can create and store a file
and other users can only read the file. Write access was not permitted to users
other than the creator. In the preliminary version, we extend our previous work
with added features that enable us to authenticate the validity of the message
without revealing the identity of the user who has stored the information in the
cloud. In this version, we also address user revocation, which was not addressed
earlier. We use an attribute based signature scheme to achieve authenticity and
privacy. Unlike earlier schemes, our scheme is resistant to replay attacks, in which
a user could otherwise replace fresh data with stale data from a previous write, even
if it no longer has a valid claim policy. This is an important property because a
user revoked of its attributes might no longer be able to write to the cloud. We
therefore add this extra feature to our scheme and modify it appropriately. Our
scheme also allows writing multiple times, which was not permitted in our earlier work.
1.3
LITERATURE SURVEY
PRIVACY
PRESERVING ACCESS CONTROL WITH AUTHENTICATION FOR SECURING DATA IN CLOUDS
PUBLICATION:
S. Ruj, M. Stojmenovic and A. Nayak, IEEE/ACM International Symposium on
Cluster, Cloud and Grid Computing, pp. 556–563, 2012.
TOWARD
SECURE AND DEPENDABLE STORAGE SERVICES IN CLOUD COMPUTING
PUBLICATION:
C. Wang, Q. Wang, K. Ren, N. Cao and W. Lou, IEEE T. Services Computing,
vol. 5, no. 2, pp. 220–232, 2012.
FUZZY
KEYWORD SEARCH OVER ENCRYPTED DATA IN CLOUD COMPUTING
PUBLICATION:
J. Li, Q. Wang, C. Wang, N. Cao, K. Ren, and W. Lou, in IEEE INFOCOM,
pp. 441–445, 2010.
CRYPTOGRAPHIC
CLOUD STORAGE
PUBLICATION:
S. Kamara and K. Lauter, in Financial Cryptography Workshops, ser.
Lecture Notes in Computer Science, vol. 6054. Springer, pp. 136–149, 2010.
CHAPTER 2
2.0
SYSTEM ANALYSIS
2.1
EXISTING SYSTEM:
To accomplish secure data transactions in the cloud, a suitable cryptography method
is utilized. The data owner must encrypt the record and then store it in the cloud.
If a third person downloads the record, they can see its contents only if they have
the key that is used to decrypt the encrypted record. Occasionally this may still
fail because of advances in technology and the efforts of attackers. To overcome
this issue, there are many procedures and techniques for making transactions and
storage secure.
2.2
DISADVANTAGES:
The
access control and authentication are both collusion resistant, meaning that no
two users can collude and access data or authenticate themselves, if they are
individually not authorized.
Revoked
users cannot access data after they have been revoked.
2.3
PROPOSED SYSTEM:
KP-ABE is a public key cryptography primitive for
one-to-many communication. In KP-ABE, data is associated with attributes, and for
each attribute a public key component is defined. The encryptor associates a set of
attributes with the message by encrypting it with the corresponding public key
components. Every user is assigned an access structure, which is normally defined as
an access tree over data attributes: interior nodes of the access tree are threshold
gates and leaf nodes are associated with attributes. The user's secret key is defined
to reflect the access structure, so the user is able to decrypt a ciphertext if and
only if the data attributes satisfy his access structure.
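The sketch below only illustrates the access-tree structure used in KP-ABE (interior threshold gates, leaf attributes): it checks whether an attribute set satisfies the tree, whereas a real KP-ABE implementation performs the equivalent check cryptographically during decryption. The policy and attribute names are made up.

using System;
using System.Collections.Generic;
using System.Linq;

// A node is either a leaf (one attribute) or a threshold gate over its children.
class AccessTreeNode
{
    public string Attribute;                 // set for leaf nodes
    public int Threshold;                    // k-of-n for interior nodes
    public List<AccessTreeNode> Children = new List<AccessTreeNode>();

    public bool IsSatisfiedBy(HashSet<string> attributes)
    {
        if (Children.Count == 0)
        {
            return attributes.Contains(Attribute);
        }
        int satisfied = Children.Count(c => c.IsSatisfiedBy(attributes));
        return satisfied >= Threshold;
    }
}

class Program
{
    static void Main()
    {
        // Policy: (doctor AND cardiology) OR admin, expressed with threshold gates.
        var policy = new AccessTreeNode
        {
            Threshold = 1, // OR gate: 1-of-2
            Children =
            {
                new AccessTreeNode
                {
                    Threshold = 2, // AND gate: 2-of-2
                    Children =
                    {
                        new AccessTreeNode { Attribute = "doctor" },
                        new AccessTreeNode { Attribute = "cardiology" }
                    }
                },
                new AccessTreeNode { Attribute = "admin" }
            }
        };

        var userAttributes = new HashSet<string> { "doctor", "cardiology" };
        Console.WriteLine(policy.IsSatisfiedBy(userAttributes)); // True
    }
}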
2.4
ADVANTAGES:
Distributed
access control of data stored in cloud so that only authorized users with valid
attributes can access them.
Authentication
of users who store and modify their data on the cloud.
The
identity of the user is protected from the cloud during authentication.
The
architecture is decentralized, meaning that there can be several KDCs for key
management.
2.5
HARDWARE & SOFTWARE REQUIREMENTS:
2.5.1
HARDWARE REQUIREMENT:
Processor     – Pentium IV
Speed         – 1.1 GHz
RAM           – 256 MB (minimum)
Hard Disk     – 20 GB
Floppy Drive  – 1.44 MB
Keyboard      – Standard Windows Keyboard
Mouse         – Two or Three Button Mouse
Monitor       – SVGA
2.5.2
SOFTWARE REQUIREMENTS:
Operating System : Windows XP or Windows 7
Front End        : Microsoft Visual Studio 2008
Back End         : MS SQL Server 2005
Server           : ASP Web Server
Script           : C#
Document         : MS Office 2007
CHAPTER 3
3.0
SYSTEM DESIGN:
ARCHITECTURE DIAGRAM / UML DIAGRAMS / DATA FLOW DIAGRAM:
The DFD is also called a bubble chart. It is a simple graphical formalism that can
be used to represent a system in terms of the input data to the system, the various
processing carried out on these data, and the output data generated by the system.
The
data flow diagram (DFD) is one of the most important modeling tools. It is used
to model the system components. These components are the system process, the
data used by the process, an external entity that interacts with the system and
the information flows in the system.
DFD
shows how the information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information
flow and the transformations that are applied as data moves from input to
output.
DFD
is also known as bubble chart. A DFD may be used to represent a system at any
level of abstraction. DFD may be partitioned into levels that represent
increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION OF DATA:
External sources or
destinations, which may be people or organizations or other entities
DATA STORE:
Here the data referenced by a process is stored and
retrieved.
PROCESS:
People, procedures or devices that produce data. The
physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to
a destination. The data flow is a “packet” of data.
MODELING RULES:
There
are several common modeling rules when creating DFDs:
All processes must
have at least one data flow in and one data flow out.
All processes
should modify the incoming data, producing new forms of outgoing data.
Each data store
must be involved with at least one data flow.
Each external
entity must be involved with at least one data flow.
A data flow must
be attached to at least one process.
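These rules can also be checked mechanically. The sketch below, with made-up process, store, and flow names, applies the rules above to a tiny in-memory DFD model purely as an illustration.

using System;
using System.Collections.Generic;
using System.Linq;

// A tiny model of a DFD, used only to check the rules listed above.
class Flow { public string From; public string To; }

class DfdCheck
{
    static void Main()
    {
        var processes = new List<string> { "ValidateUser", "EncryptFile" };
        var stores = new List<string> { "UserDB" };
        var externals = new List<string> { "User" };
        var flows = new List<Flow>
        {
            new Flow { From = "User", To = "ValidateUser" },
            new Flow { From = "ValidateUser", To = "UserDB" },
            new Flow { From = "UserDB", To = "EncryptFile" },
            new Flow { From = "EncryptFile", To = "User" }
        };

        // Every process needs at least one flow in and one flow out.
        foreach (string p in processes)
        {
            bool ok = flows.Any(f => f.To == p) && flows.Any(f => f.From == p);
            Console.WriteLine("Process " + p + (ok ? " has flows in and out." : " violates the rules."));
        }
        // Every data store and external entity must touch at least one flow.
        foreach (string s in stores.Concat(externals))
        {
            bool ok = flows.Any(f => f.From == s || f.To == s);
            Console.WriteLine(s + (ok ? " is involved in a flow." : " is not used."));
        }
        // Every flow must be attached to at least one process.
        foreach (Flow f in flows)
        {
            bool ok = processes.Contains(f.From) || processes.Contains(f.To);
            Console.WriteLine("Flow " + f.From + " -> " + f.To +
                (ok ? " is attached to a process." : " is not attached to any process."));
        }
    }
}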
3.1
DATAFLOW DIAGRAM
UML
DIAGRAMS:
3.2
USE CASE DIAGRAM:
3.3
CLASS DIAGRAM:
3.4
SEQUENCE DIAGRAM:
3.5
ACTIVITY DIAGRAM:
CHAPTER 4
4.0
IMPLEMENTATION:
We propose our privacy preserving authenticated access control scheme. According to
our scheme, a user can create a file and store it securely in the cloud. The scheme
uses the two protocols ABE and ABS, as discussed in Sections 3.4 and 3.5,
respectively. We will first discuss our scheme in detail and then provide a concrete
example to demonstrate how it works. We refer to Fig. 1. There are three users: a
creator, a reader, and a writer. Creator Alice receives a token from the trustee, who
is assumed to be honest. A trustee can be someone like the federal government who
manages social insurance numbers, etc. On presenting her id (like a health/social
insurance number), the trustee gives her a token. There are multiple KDCs (here two),
which can be scattered; for example, these can be servers in different parts of the
world.
A creator, on presenting the token to one or more KDCs, receives keys for
encryption/decryption and signing. In Fig. 1, SKs are the secret keys given for
decryption and Kx are the keys for signing. The message MSG is encrypted under the
access policy X. The access policy decides who can access the data stored in the
cloud. The creator decides on a claim policy Y to prove her authenticity and signs
the message under this claim. The ciphertext C, together with the signature, is sent
to the cloud. The cloud verifies the signature and stores the ciphertext C. When a
reader wants to read, the cloud sends C. If the user has attributes matching the
access policy, it can decrypt and get back the original message.
Write proceeds in the same way as file creation. By delegating the verification
process to the cloud, the scheme relieves individual users from time-consuming
verifications. When a reader wants to read some data stored in the cloud, it tries to
decrypt it using the secret keys it receives from the KDCs. If it has enough
attributes matching the access policy, then it decrypts the information stored in the
cloud.
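The outline below sketches this create/store/read flow in code. The AbeStub and AbsStub types are hypothetical placeholders standing in for a real KP-ABE/ABS library (no actual cryptography is performed); they only show where encryption under access policy X, signing under claim policy Y, verification by the cloud, and decryption by the reader fit into the flow.

using System;
using System.Text;

// All types below are hypothetical stand-ins for a real KP-ABE/ABS library.
class AbeStub
{
    public byte[] Encrypt(byte[] message, string accessPolicy)
    {
        Console.WriteLine("Encrypting under access policy X: " + accessPolicy);
        return message; // placeholder: no real cryptography here
    }
    public byte[] Decrypt(byte[] ciphertext, string userAttributes)
    {
        Console.WriteLine("Decrypting with attributes: " + userAttributes);
        return ciphertext;
    }
}

class AbsStub
{
    public string Sign(byte[] ciphertext, string claimPolicy)
    {
        Console.WriteLine("Signing under claim policy Y: " + claimPolicy);
        return "signature";
    }
    public bool Verify(byte[] ciphertext, string signature, string claimPolicy)
    {
        Console.WriteLine("Cloud verifies the ABS signature (writer stays anonymous).");
        return true;
    }
}

class Program
{
    static void Main()
    {
        var abe = new AbeStub();
        var abs = new AbsStub();
        byte[] msg = Encoding.UTF8.GetBytes("MSG");

        // Creator: encrypt under access policy X, sign under claim policy Y.
        byte[] c = abe.Encrypt(msg, "(doctor AND cardiology) OR admin");
        string sigma = abs.Sign(c, "registered creator");

        // Cloud: verify the signature, then store the ciphertext.
        if (abs.Verify(c, sigma, "registered creator"))
        {
            Console.WriteLine("Ciphertext stored in the cloud.");
        }

        // Reader: decrypt with keys obtained from the KDCs.
        byte[] recovered = abe.Decrypt(c, "doctor, cardiology");
        Console.WriteLine(Encoding.UTF8.GetString(recovered));
    }
}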
4.1 ALGORITHM:
ATTRIBUTE-BASED
ENCRYPTION:
ABE with multiple authorities is used, as proposed in the literature, as follows:
4.2
MODULES:
CLOUD
USER MODULE:
ATTRIBUTE-BASED
SIGNATURES:
ANONYMOUS
AUTHENTICATION:
CLOUD USER OPERATIONS:
4.3
MODULE DESCRIPTION:
CLOUD
USER MODULE:
User: users, who have data to be stored
in the cloud and rely on the cloud for data computation, consist of both
individual consumers and organizations.
Cloud Service Provider (CSP): a CSP, who has significant
resources and expertise in building and managing distributed cloud storage
servers, owns and operates live Cloud Computing systems.
Third Party Auditor (TPA): an optional TPA, who has expertise
and capabilities that users may not have, is trusted to assess and expose risk
of cloud storage services on behalf of the users upon request.
ATTRIBUTE-BASED
SIGNATURES:
Cryptographic protocols like ring signatures, mesh signatures, and group signatures
can be used in these situations. Ring signatures are not a feasible option for clouds
where there are a large number of users. Group signatures assume the preexistence of
a group, which might not be possible in clouds. Mesh signatures do not ensure whether
the message is from a single user or from many users colluding together. For these reasons,
a new protocol known as attribute-based signature (ABS) has been applied. ABS
was proposed by Maji et al. In ABS, users have a claim predicate associated
with a message. The claim predicate helps to identify the user as an authorized
one, without revealing its identity. Other users or the cloud can verify the
user and the validity of the message stored. ABS can be combined with ABE to
achieve authenticated access control without disclosing the identity of the
user to the cloud.
ANONYMOUS
AUTHENTICATION:
In our scheme, a writer whose rights have been revoked cannot create a new signature
with a new time stamp and thus cannot write back stale information. The writer signs
the message together with its time stamp and computes the corresponding message
signature.
CLOUD USER OPERATIONS:
Update Operation
In
cloud data storage, sometimes the user may need to modify some data block(s)
stored in the cloud; we refer to this operation as data update. In other words,
for all the unused tokens, the user needs to exclude every occurrence of the
old data block and replace it with the new one.
Delete Operation
Sometimes,
after being stored in the cloud, certain data blocks may need to be deleted.
The delete operation we are considering is a general one, in which user
replaces the data block with zero or some special reserved data symbol. From
this point of view, the delete operation is actually a special case of the data
update operation, where the original data blocks can be replaced with zeros or
some predetermined special blocks.
Append Operation
In
some cases, the user may want to increase the size of his stored data by adding
blocks at the end of the data file, which we refer to as data append. We
anticipate that the most frequent append operation in cloud data storage is
bulk append, in which the user needs to upload a large number of blocks (not a
single block) at one time.
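A minimal sketch of these block operations on an in-memory block store (a zero-filled block stands in for the reserved delete symbol; a real system would also update the verification tokens, which is omitted here).

using System;
using System.Collections.Generic;

class BlockStore
{
    const int BlockSize = 4;
    private List<byte[]> blocks = new List<byte[]>();

    // Update: replace the block at a given index with new content.
    public void Update(int index, byte[] newBlock)
    {
        blocks[index] = newBlock;
    }

    // Delete: a special case of update that writes the reserved zero block.
    public void Delete(int index)
    {
        Update(index, new byte[BlockSize]);
    }

    // Append: add blocks at the end of the file (bulk append).
    public void Append(IEnumerable<byte[]> newBlocks)
    {
        blocks.AddRange(newBlocks);
    }

    public void Dump()
    {
        for (int i = 0; i < blocks.Count; i++)
        {
            Console.WriteLine(i + ": " + BitConverter.ToString(blocks[i]));
        }
    }

    static void Main()
    {
        var store = new BlockStore();
        store.Append(new[] { new byte[] { 1, 2, 3, 4 }, new byte[] { 5, 6, 7, 8 } });
        store.Update(0, new byte[] { 9, 9, 9, 9 });
        store.Delete(1);
        store.Dump();
    }
}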
CHAPTER 5
5.0
SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the
project is analyzed in this phase, and a business proposal is put forth with a
very general plan for the project and some cost estimates. During system
analysis the feasibility study of the proposed system is to be carried out.
This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding
of the major requirements for the system is essential.
Three key considerations involved in the feasibility
analysis are
ECONOMICAL
FEASIBILITY
TECHNICAL
FEASIBILITY
SOCIAL
FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact
that the system will have on the organization. The amount of fund that the
company can pour into the research and development of the system is limited.
The expenditures must be justified. Thus the developed system is well within
the budget, and this was achieved because most of the technologies used are
freely available. Only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical
feasibility, that is, the technical requirements of the system. Any system
developed must not place a high demand on the available technical resources, as that
would in turn place high demands on the client. The developed system must have
modest requirements, as only minimal or no changes are required for
implementing this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study is to check the level of
acceptance of the system by the user. This includes the process of training the
user to use the system efficiently. The user must not feel threatened by the
system; instead, they must accept it as a necessity. The level of acceptance by the
users solely depends on the methods that are employed to educate the user about
the system and to make him familiar with it. His level of confidence must be
raised so that he is also able to make some constructive criticism, which is
welcomed, as he is the final user of the system.
5.2 SYSTEM TESTING:
Testing is a
process of checking whether the developed system is working according to the
original objectives and requirements. It is a set of
activities that can be planned in advance and conducted systematically. Testing
is vital to the success of the system. System testing makes a logical
assumption that if all the parts of the system are correct, the global will be
successfully achieved. In adequate testing if not testing leads to errors that
may not appear even many months. This creates two problems, the time lag
between the cause and the appearance of the problem and the effect of the
system errors on the files and records within the system. A small system error
can conceivably explode into a much larger Problem. Effective testing early in
the purpose translates directly into long term cost savings from a reduced
number of errors. Another reason for system testing is its utility, as a
user-oriented vehicle before implementation. The best programs are worthless if
it produces the correct outputs.
5.2.1 UNIT TESTING:
A program
represents the logical elements of a system. For a program to run
satisfactorily, it must compile and test data correctly and tie in properly
with other programs. Achieving an error free program is the responsibility of
the programmer. Program testing checks
for two types
of errors: syntax and logical. A syntax error is a program statement that violates
one or more rules of the language in which it is written. An improperly defined field
dimension or omitted keywords are common syntax errors. These errors are shown
through error messages generated by the computer. For logic errors, the programmer
must examine the output carefully.
UNIT TESTING:
Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.
Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.
5.2.2 FUNCTIONAL TESTING:
Functional testing
of an application is used to prove the application delivers correct results,
using enough inputs to give an adequate level of confidence that it will work
correctly for all sets of inputs. The functional testing will need to prove
that the application works for each client type and that personalization
functions work correctly. When a program is tested, the actual output is
compared with the expected output. When there is a discrepancy the sequence of
instructions must be traced to determine the problem. The process is facilitated by breaking the
program into self-contained portions, each of which can be checked at certain
key points. The idea is to compare program values against desk-calculated
values to isolate the problems.
FUNCTIONAL TESTING:
Description: Test for all modules.
Expected result: All peers should communicate in the group.
Description: Test for the various peers in a distributed network framework, as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.
5.2.3 NON-FUNCTIONAL TESTING:
The Non Functional software testing
encompasses a rich spectrum of testing strategies, describing the expected
results for every test case. It uses symbolic analysis techniques. This testing is
used to check that an application will work in the operational environment.
Non-functional testing includes:
Load testing
Performance testing
Usability testing
Reliability testing
Security testing
5.2.4 LOAD TESTING:
An important
tool for implementing system tests is a Load generator. A Load generator is
essential for testing quality requirements such as performance and stress. A
load can be a real load, that is, the system can be put to the test under real
usage by having actual users connected to it, and they will generate the test
input data for the system test.
LOAD TESTING:
Description: It is necessary to ascertain that the application behaves correctly under load when a 'Server busy' response is received.
Expected result: The application should designate another active node as the server.
5.2.5 PERFORMANCE TESTING:
Performance
tests are utilized in order to determine the widely defined performance of the
software system such as execution time associated with various parts of the code,
response time and device utilization. The intent of this testing is to identify
weak points of the software system and quantify its shortcomings.
PERFORMANCE TESTING:
Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; this is an aspect of operational management.
Expected result: The application should handle large input values and produce accurate results in the expected time.
5.2.6 RELIABILITY TESTING:
The software
reliability is the ability of a system or component to perform its required
functions under stated conditions for a specified period of time and it is
being ensured in this testing. Reliability can be expressed as the ability of
the software to reveal defects under testing conditions, according to the
specified requirements. It is the probability that a software system will operate
without failure under given conditions for a given time interval, and it focuses
on the behavior of the software element. This testing forms a part of the work of
the software quality control team.
RELIABILITY TESTING:
Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.
5.2.7 SECURITY TESTING:
Security
testing evaluates system characteristics that relate to the availability,
integrity and confidentiality of the system data and services. Users/Clients
should be encouraged to make sure their security needs are very clearly known
at requirements time, so that the security issues can be addressed by the
designers and testers.
SECURITY TESTING:
Description: Checking that the user identification is authenticated.
Expected result: In case of failure, the user should not be connected to the framework.
Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design method
that uses the control structure of the procedural design to derive test cases. Using
the white box testing method, the software engineer can derive test cases. White box
testing focuses on the inner structure of the software to be tested.
WHITE BOX TESTING:
Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.
Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.
Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional
requirements of the software. That is, black box testing enables the software
engineer to derive sets of input conditions that will fully exercise all functional
requirements for a program. Black box testing is not an alternative to white box
techniques. Rather, it is a complementary approach that is likely to uncover a
different class of errors than white box methods. Black box testing attempts to find
errors by focusing on the inputs, outputs, and principal functions of a software
module. The starting point of black box testing is either a specification or the
code. The contents of the box are hidden, and the stimulated software should produce
the desired results.
BLACK BOX TESTING:
Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.
Description: To check for interface errors.
Expected result: The entire interface must function normally.
Description: To check for errors in data structures or external database access.
Expected result: Database updates and retrievals must work correctly.
Description: To check for initialization and termination errors.
Expected result: All functions and data structures must be initialized properly and terminated normally.
All of the above system testing strategies are carried out during development, since
documentation and institutionalization of the proposed goals and related
policies are essential.
CHAPTER 7
APPENDIX
7.1
SAMPLE SOURCE CODE
7.2
SAMPLE OUTPUT
CHAPTER 8
8.0
CONCLUSION
We have presented a decentralized access
control technique with anonymous authentication, which provides user revocation
and prevents replay attacks. The cloud does not know the identity of the user
who stores information, but only verifies the user’s credentials. Key
distribution is done in a decentralized way. One limitation is that the cloud
knows the access policy for each record stored in the cloud. In the future, we
would like to hide the attributes and access policy of a user.
Sensor networks are composed of small
sensing devices that have the capability to take various measurements of their
environment such as temperature, sound, light etc. These devices are equipped
with a processor and wireless communication antenna and are powered with a
battery. Upon deployment in a field, they form an ad hoc network and
communicate with each other and with data processing centers. The routing
protocol in such networks has an important effect on congestion, especially
with increasing sizes of the deployments. Congestion becomes worse when a
particular area is generating most of the data. This may occur in some deployments
when sensors in one area of interest are requested to gather and transmit data
at a higher rate than others.
We
believe that all data generated in a sensor network may not be equally
important; some may have a low priority while others have a higher priority and
hence differentiated service must be provided to these data. In such a
scenario, routing dynamics can lead to congestion on specific paths. Since
congestion is a self-compounding problem, these paths are usually close to each
other, which leads to an entire zone in the network facing congestion. We refer
to this zone as the congestion zone or conzone.
Congestion can adversely affect the network in two ways. First, it can lead to indiscriminate dropping of data, i.e., some packets of high priority might be dropped while others of lower priority are delivered. This happens because sensor nodes are very simple devices and do not have the capability to differentiate packets (i.e., they do not have multiple queues for different priority levels). Second, congestion can cause an increase in energy consumption as links become saturated. This can lead to depletion of the limited energy available in the sensor nodes in the congested area.
In this paper, we examine data delivery issues in the presence of congestion in wireless sensor networks. We propose the use of data prioritization and a simple priority-aware routing protocol, Congestion Aware Routing (CAR). CAR does not use multiple priority queues, a QoS-aware MAC layer, or specialized scheduling algorithms. The first step in this protocol is to dynamically discover the conzone. The second step is to enforce differentiated routing: high priority packets are routed within the conzone, low priority packets generated outside the conzone stay outside, and those generated within the conzone are routed out of it. In effect, conzone nodes are dedicated to serving high priority data, which enables them to provide better service and lengthens their lifetime.
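The differentiated routing rule described above can be summarized in a short sketch. The following C# fragment is illustrative only: the type and member names (SensorNode, OnConzone, ChooseNextHop) are our own, and a real implementation would select a concrete neighbour node rather than return a symbolic decision.

using System;

enum Priority { Low, High }

enum RoutingDecision
{
    ForwardOnConzoneTowardSink,   // high priority: stay on the conzone
    RouteOutOfConzone,            // low priority generated inside the conzone
    ForwardOnOffConzonePath       // low priority kept off the conzone
}

class SensorNode
{
    public bool OnConzone;        // set during conzone discovery

    // Illustrative next-hop rule following the CAR description above.
    public RoutingDecision ChooseNextHop(Priority packetPriority)
    {
        if (packetPriority == Priority.High)
            return RoutingDecision.ForwardOnConzoneTowardSink;
        if (OnConzone)
            return RoutingDecision.RouteOutOfConzone;
        return RoutingDecision.ForwardOnOffConzonePath;
    }
}

class CarDemo
{
    static void Main()
    {
        SensorNode node = new SensorNode { OnConzone = true };
        Console.WriteLine(node.ChooseNextHop(Priority.High)); // ForwardOnConzoneTowardSink
        Console.WriteLine(node.ChooseNextHop(Priority.Low));  // RouteOutOfConzone
    }
}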
Our extensive simulations show that CAR leads to a significant increase in the successful packet delivery ratio of high priority data to the sink and a clear decrease in the average delay. CAR also provides low jitter, which makes it able to support real-time multimedia applications. It also reduces the energy consumed in the nodes that lie on the conzone, which leads to an increase in connectivity lifetime.
We now consider the network formation process. Once the sink node discovers its surrounding neighbors, it broadcasts a "Build Mesh" message asking all nodes in the network to organize as a mesh. In that message the sink provides its ID and zero as its depth. Once a neighboring node hears this message, it checks whether it has already joined the routing network (i.e., whether it knows its depth); if not, it sets its depth to one plus the depth in the message received and sets the source of the message as a parent.
Each node then rebroadcasts the Build Mesh message, with its own ID and depth, to its neighbors. If a node is already a member of the network, it checks the depth in the message, and if that depth is less than its own, the source of the message is added as a parent; in that case, the message is not rebroadcast. In this fashion, the Build Mesh message is sent down the network until all nodes become part of this routing structure. Similar to TAG, the Build Mesh message can be periodically broadcast to maintain the topology and adapt to changes caused by the failure, addition, or mobility of nodes.
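A minimal sketch of how a single node might process a Build Mesh message, assuming that the radio, timers, and the actual rebroadcast are handled elsewhere; all type and member names here are illustrative rather than taken from the report.

using System;
using System.Collections.Generic;

class BuildMeshMessage
{
    public int SenderId;
    public int SenderDepth;
}

class Node
{
    public int Id;
    public int Depth = int.MaxValue;           // "unknown" until the node joins the mesh
    public List<int> Parents = new List<int>();

    // Returns true when the message should be rebroadcast with this node's ID and depth.
    public bool HandleBuildMesh(BuildMeshMessage msg)
    {
        if (Depth == int.MaxValue)
        {
            // Not yet part of the routing mesh: join it.
            Depth = msg.SenderDepth + 1;
            Parents.Add(msg.SenderId);
            return true;                       // propagate the mesh down the network
        }
        if (msg.SenderDepth < Depth && !Parents.Contains(msg.SenderId))
        {
            // Already a member: a shallower sender becomes an additional parent,
            // and the message is not rebroadcast.
            Parents.Add(msg.SenderId);
        }
        return false;
    }
}

class MeshDemo
{
    static void Main()
    {
        Node node = new Node { Id = 7 };
        BuildMeshMessage fromSink = new BuildMeshMessage { SenderId = 0, SenderDepth = 0 };
        bool rebroadcast = node.HandleBuildMesh(fromSink);
        Console.WriteLine("Depth = " + node.Depth + ", rebroadcast = " + rebroadcast);
    }
}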
1.3 SCOPE OF THE PROJECT:
The design goals of the Congestion Aware Routing (CAR) protocol for sensor networks are to provide high priority data with better service quality compared to other routing schemes. These include higher delivery ratios, lower delays, and lower jitter to support real-time data. We also aim at decreasing energy consumption, which will lengthen the lifetime of the network. To achieve these goals, CAR divides the network into two regions: the congestion zone (conzone) and the remaining part of the network. While high priority data is routed through the conzone, low priority data is routed using the other nodes. Low priority data that originates outside the conzone is routed exclusively on off-conzone nodes using regular routing protocols, while low priority data that originates inside the conzone is efficiently routed out of the conzone.
LITERATURE SURVEY
ELASTIC
OPTICAL NETWORKING: A NEW DAWN FOR THE OPTICAL LAYER?
PUBLICATION:
O. Gerstel, M. Jinno, A. Lord, and S. J. B. Yoo, IEEE Commun. Mag., vol. 50, no. 2, pp.
s12–s20, Feb. 2012.
Optical networks are
undergoing significant changes, fueled by the exponential growth of traffic due
to multimedia services and by the increased uncertainty in predicting the
sources of this traffic due to the ever changing models of content providers over
the Internet. The change has already begun: simple on-off modulation of
signals, which was adequate for bit rates up to 10 Gb/s, has given way to much
more sophisticated modulation schemes for 100 Gb/s and beyond. The next
bottleneck is the 10-year-old division of the optical spectrum into a fixed
“wavelength grid,” which will no longer work for 400 Gb/s and above,
heralding the need for a more flexible grid. Once both transceivers and
switches become flexible, a whole new elastic optical networking paradigm is
born. In this article we describe the drivers, building blocks, architecture,
and enabling technologies for this new paradigm, as well as early
standardization efforts.
MODELING
THE ROUTING AND SPECTRUM ALLOCATION PROBLEM FOR FLEXGRID OPTICAL NETWORKS
PUBLICATION:
L. Velasco, M. Klinkowski, M. Ruiz, and J. Comellas, Photon. Netw. Commun.,
vol. 24, no. 3, pp. 177–186, 2012.
Flexgrid optical
networks are attracting huge interest due to their higher spectrum efficiency
and flexibility in comparison with traditional wavelength switched optical
networks based on the wavelength division multiplexing technology. To properly
analyze, design, plan, and operate flexible and elastic networks, efficient
methods are required for the routing and spectrum allocation (RSA) problem.
Specifically, the allocated spectral resources must be, in absence of spectrum
converters, the same along the links in the route (the continuity constraint)
and contiguous in the spectrum (the contiguity constraint). In light of the
fact that the contiguity constraint adds huge complexity to the RSA problem, we
introduce the concept of channels for the representation of contiguous spectral
resources. In this paper, we show that the use of a pre-computed set of
channels allows considerably reducing the problem complexity. In our study, we
address an off-line RSA problem in which enough spectrum needs to be allocated
for each demand of a given traffic matrix. To this end, we present novel
integer linear programming (ILP) formulations of RSA that are based on the
assignment of channels. The evaluation results reveal that the proposed
approach allows solving the RSA problem much more efficiently than previously
proposed ILP-based methods and it can be applied even for realistic problem
instances, contrary to previous ILP formulations.
DISTANCE-ADAPTIVE
SPECTRUM RESOURCE ALLOCATION IN SPECTRUM-SLICED ELASTIC OPTICAL PATH NETWORK
PUBLICATION:
M. Jinno et al., IEEE Commun. Mag., vol. 48, no. 8, pp. 138–145, Aug. 2010.
The rigid nature of
current wavelength-routed optical networks brings limitations on network
utilization efficiency. One limitation originates from mismatch of
granularities between the client layer and the wavelength layer. The recently
proposed spectrum-sliced elastic optical path network (SLICE) is expected to
mitigate this problem by adaptively allocating spectral resources according to
client traffic demands. This article discusses another limitation of the
current optical networks associated with worst case design in terms of
transmission performance. In order to address this problem, we present a
concept of a novel adaptation scheme in SLICE called distance-adaptive spectrum
resource allocation. In the presented scheme the minimum necessary spectral resource
is adaptively allocated according to the end-to-end physical condition of an
optical path. Modulation format and optical filter width are used as parameters
to determine the necessary spectral resources to be allocated for an optical
path. Evaluation of network utilization efficiency shows that distance-adaptive
SLICE can save more than 45 percent of required spectrum resources for a
12-node ring network. Finally, we introduce the concept of a frequency slot to
extend the current frequency grid standard, and discuss possible spectral
resource designation schemes.
QOT
PREDICTION FOR CORE NETWORKS WITH UNCOMPENSATED COHERENT TRANSMISSION
PUBLICATION:
M. Angelou, P. N. Ji, I. Tomkos, and T. Wang, in Proc. OECC/PS Jul.
2013, pp. 1–2, paper TuQ3-4.
We propose a
comprehensive QoT prediction tool based on fast analytical modeling for
on-the-fly signal assessments in networks with uncompensated coherent systems
and confirm its superiority in reducing over-engineering compared to
system-reach methods.
CHAPTER
2
2.0
SYSTEM ANALYSIS
2.1
EXISTING SYSTEM:
The problem with existing solutions: in this scenario, where nodes in the network send all high priority data to a single sink, tree-based routing is the most appropriate scheme. In this routing scheme, a spanning tree is built with the high priority sink as its root. The setup of such a tree uses controlled flooding from the sink to all nodes in the network. Low priority data, on the other hand, does not need to follow the same routing scheme. This is true because there may be multiple low priority sinks and a node might send data to any of them. For example, temperature readings might be forwarded to one sink while motion detection measurements go to another sink. Tree-based routing schemes suffer from congestion, especially when the number of messages generated at the leaves is high.
This problem becomes worse when we have a mixture of high priority and low priority traffic traveling through the network. This is because low priority messages will cross the tree that is formed to route high priority data in order to reach their destinations. Therefore, even when the rate of high priority data is relatively low, the background noise created by low priority traffic will create a congestion zone that spans the deployment from the critical area to the high priority sink. Nodes in this zone become overwhelmed and indiscriminately drop high and low priority messages. These nodes also consume more energy compared to other nodes in the network and hence die sooner. This leads to only sub-optimal paths being available to route high priority data, or a total loss of connectivity from the critical area to the sink even though other nodes outside the zone remain available, because a single routing scheme is used to route both types of traffic.
2.1.1 DISADVANTAGES:
In such a scenario, routing dynamics can lead to congestion on specific paths. Since congestion is a self-compounding problem, these paths are usually close to each other, which leads to an entire zone in the network facing congestion. Congestion can adversely affect the network in two ways. First, it can lead to indiscriminate dropping of data, i.e., some packets of high priority might be dropped while others of lower priority are delivered. This happens because sensor nodes are very simple devices and do not have the capability to differentiate packets (i.e., they do not have multiple queues for different priority levels). Second, congestion can cause an increase in energy consumption as links become saturated. This can lead to depletion of the limited energy available in the sensor nodes in the congested area.
2.2
PROPOSED SYSTEM:
We proposed Congestion Aware Routing (CAR), which is a simple routing protocol that uses data prioritization and treats packets according to their priorities. We defined a conzone as the set of sensors that are required to route high priority packets from the data sources to the sink.
We presented algorithms to build a high
priority routing mesh, dynamically discover and configure conzones, and perform
differentiated routing. Our solutions do not require active queue management,
maintenance of multiple queues or scheduling algorithms, or the use of
specialized MAC protocols.
The proposed algorithm for RMSA (routing, modulation and spectrum assignment) in a nonlinear elastic network utilizing Nyquist pulse shaping is as follows (a rough numerical sketch of these steps is given after the list):
1. Determine the optimum signal power spectral density given the fiber and amplifier parameters.
2. For a pair of nodes, select the shortest path that avoids the link with the highest spectral usage (determined by measuring the total optical power, which is proportional to spectral usage).
3. For this path, determine the total number of amplifier spans (100 km herein) in order to determine the received signal-to-noise ratio (SNR).
4. For this SNR, determine the maximum net spectral efficiency (NSE) based on the known relationship between SNR and NSE for a range of polarization division multiplexed formats with Nyquist spectra, where variable-rate FEC is also included.
5. Finally, determine the gross symbol rate and assign spectrum to serve the demand between the two nodes.
We showed that with the inclusion of small playout buffers at the sink, CAR-based routing is suitable for delivering real-time traffic, such as video, over a wide range of conditions.
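The following C# sketch walks through steps 3 to 5 above for a single demand. The span-count-to-SNR relation, the single-span SNR figure, and the SNR-to-NSE thresholds are placeholder values assumed for illustration, not parameters taken from the report.

using System;

class RsaSketch
{
    // Illustrative mapping from received SNR (dB) to net spectral efficiency
    // (bits/s/Hz) for polarization-multiplexed, Nyquist-shaped formats with
    // variable-rate FEC. The thresholds are placeholders, not report values.
    static double NetSpectralEfficiency(double snrDb)
    {
        if (snrDb >= 16.0) return 8.0;
        if (snrDb >= 10.0) return 6.0;
        if (snrDb >= 7.0) return 4.0;
        return 2.0;
    }

    static void Main()
    {
        double pathLengthKm = 1200.0;               // length of the selected least-congested path
        double spanLengthKm = 100.0;                // amplifier spacing used in the text
        int spans = (int)Math.Ceiling(pathLengthKm / spanLengthKm);

        // At the optimum launch power density the noise accumulates roughly
        // linearly with the number of spans, so SNR falls by about 10*log10(N).
        double singleSpanSnrDb = 26.0;              // assumed figure for illustration
        double snrDb = singleSpanSnrDb - 10.0 * Math.Log10(spans);

        double nse = NetSpectralEfficiency(snrDb);  // bits/s/Hz
        double demandGbps = 100.0;                  // one 100 GbE demand
        double spectrumGHz = demandGbps / nse;      // Nyquist: occupied bandwidth roughly equals the symbol rate

        Console.WriteLine("Spans: " + spans + ", SNR: " + snrDb.ToString("F1") + " dB");
        Console.WriteLine("NSE: " + nse + " b/s/Hz, assigned spectrum: " +
                          spectrumGHz.ToString("F1") + " GHz");
    }
}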
2.2.1 ADVANTAGES:
High priority data delivery is assured without loss.
Conzone (congestion zone) discovery is an overhead.
Low priority data is often dropped.
Low priority data delivery is also assured along with high priority data; the channel is virtually divided between the two priorities.
Still, low priority data is often dropped.
Low priority data delivery is assured to the maximum extent.
The burden on intermediate nodes for discovery is decreased.
The request and acknowledgement traffic is reduced in this method.
The low priority data has to travel a longer path which has less congestion.
On the long path, all the sensor nodes have to remain active, which increases battery consumption.
2.3
HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1
HARDWARE REQUIREMENT:
Processor – Pentium IV
Speed – 1.1 GHz
RAM – 256 MB (min)
Hard Disk – 20 GB
Floppy Drive – 1.44 MB
Keyboard – Standard Windows Keyboard
Mouse – Two or Three Button Mouse
Monitor – SVGA
2.3.2
SOFTWARE REQUIREMENTS:
Operating System : Windows XP
Front End : Microsoft Visual Studio .NET 2008
Document : MS-Office 2007
CHAPTER 3
3.0 SYSTEM DESIGN
ARCHITECTURE DIAGRAM / UML DIAGRAM / DATA FLOW DIAGRAM:
The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, the external entities that interact with the system, and the information flows in the system.
The DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
A DFD may be used to represent a system at any level of abstraction, and may be partitioned into levels that represent increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION OF DATA:
External sources or destinations, which may be people, organizations, or other entities.
DATA STORE:
Here the data referenced by a process is stored and retrieved.
PROCESS:
People, procedures, or devices that produce data; the physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to a destination. The data flow is a "packet" of data.
There
are several common modeling rules when creating DFDs:
All processes must
have at least one data flow in and one data flow out.
All processes
should modify the incoming data, producing new forms of outgoing data.
Each data store
must be involved with at least one data flow.
Each external
entity must be involved with at least one data flow.
A data flow must
be attached to at least one process.
3.1
ARCHITECTURE DIAGRAM:
CHAPTER
4
4.0
IMPLEMENTATION:
4.1 ALGORITHM
4.2 MODULES:
SERVER CLIENT MODULE:
FIBER NONLINEARITIES:
DISCOVERY FROM SINK:
NETWORK BLOCKING PROBABILITY (NBP):
ROUTING ALGORITHMS (CAR):
4.3 MODULE DESCRIPTION:
CHAPTER 5
5.0
SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are:
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, since only minimal or no changes are required for implementing this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to make some constructive criticism, which is welcomed, as they are the final users of the system.
5.2 SYSTEM TESTING:
Testing is a process of checking whether the developed system is working according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later. This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. Even the best program is worthless if it does not produce the correct outputs.
5.2.1 UNIT TESTING:
A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logic. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.
UNIT TESTING:
Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.
Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.
5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.
FUNCTIONAL TESTING:
Description: Test for all modules.
Expected result: All peers should communicate in the group.
Description: Test for various peers in a distributed network framework, as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.
5.2.3 NON-FUNCTIONAL TESTING:
Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:
Load testing
Performance testing
Usability testing
Reliability testing
Security testing
5.2.4 LOAD TESTING:
An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under test with real usage by having actual users connected to it. They will generate test input data for the system test.
LOAD TESTING:
Description: It is necessary to ascertain that the application behaves correctly under loads when a 'Server busy' response is received.
Expected result: Should designate another active node as a server.
5.2.5 PERFORMANCE TESTING:
Performance tests are utilized to determine the widely defined performance of the software system, such as the execution time associated with various parts of the code, the response time, and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.
PERFORMANCE TESTING:
Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values and produce an accurate result in the expected time.
5.2.6 RELIABILITY TESTING:
Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms a part of software quality control.
RELIABILITY TESTING:
Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.
5.2.7 SECURITY TESTING:
Security testing evaluates system characteristics that relate to the availability, integrity, and confidentiality of the system data and services. Users and clients should be encouraged to make sure their security needs are clearly known at requirements time, so that the security issues can be addressed by the designers and testers.
SECURITY TESTING:
Description: Check that the user identification is authenticated.
Expected result: In case of failure, it should not be connected in the framework.
Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.
WHITE BOX TESTING:
Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.
Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.
Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.
BLACK BOX TESTING:
Description: Check for incorrect or missing functions.
Expected result: All the functions must be valid.
Description: Check for interface errors.
Expected result: The entire interface must function normally.
Description: Check for errors in data structures or external database access.
Expected result: Database update and retrieval must be performed correctly.
Description: Check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.
All of the above system testing strategies are carried out during development, since the documentation and institutionalization of the proposed goals and related policies are essential.
CHAPTER
6
6.0 SOFTWARE SPECIFICATION:
6.1 FEATURES OF .NET:
Microsoft
.NET is a set of Microsoft software technologies for rapidly building and
integrating XML Web services, Microsoft Windows-based applications, and Web
solutions. The .NET Framework is a language-neutral platform for writing
programs that can easily and securely interoperate. There is no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic, and JScript.
The .NET
framework provides the foundation for components to interact seamlessly,
whether locally or remotely on different platforms. It standardizes common data
types and communications protocols so that components created in different
languages can easily interoperate.
“.NET” is
also the collective name given to various software components built upon the
.NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so
on).
6.2 THE .NET FRAMEWORK
The .NET Framework has
two main parts:
1. The Common Language
Runtime (CLR).
2. A hierarchical set of
class libraries.
The CLR is
described as the “execution engine” of .NET. It provides the environment within
which programs run. The most important features are
Conversion from a
low-level assembler-style language, called Intermediate Language (IL), into
code native to the platform being executed on.
Memory management,
notably including garbage collection.
Checking and enforcing
security restrictions on the running code.
Loading and executing
programs, with version control and other such features.
The following features
of the .NET framework are also worth description:
Managed
Code
The code that targets .NET, and which contains certain extra information – "metadata" – to describe itself. While both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic .NET and JScript .NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications – data that doesn't get garbage collected but instead is looked after by unmanaged code.
Common Type System
The CLR uses something called the Common Type System (CTS) to strictly enforce type safety. This ensures that all classes are compatible with each other by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.
Common Language Specification
The CLR
provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming
language, a set of language features and rules for using them called the Common
Language Specification (CLS) has been defined. Components that follow these
rules and expose only CLS features are considered CLS-compliant.
6.3 THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
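This conversion between value types and object types is known as boxing and unboxing; a small C# example:

using System;

class BoxingDemo
{
    static void Main()
    {
        int count = 42;           // value type, allocated on the stack
        object boxed = count;     // boxing: the value is copied into an object on the heap
        int unboxed = (int)boxed; // unboxing: the value is copied back out

        Console.WriteLine(boxed.GetType()); // System.Int32
        Console.WriteLine(unboxed + 1);     // 43
    }
}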
The set of
classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.
The class
library is subdivided into a number of sets (or namespaces), each providing
distinct areas of functionality, with dependencies between the namespaces kept
to a minimum.
6.4 LANGUAGES SUPPORTED
BY .NET
The
multi-language capability of the .NET Framework and Visual Studio .NET enables
developers to use their existing programming skills to build all types of
applications and XML Web services. The .NET framework supports new versions of
Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but
there are also a number of new additions to the family.
Visual
Basic .NET has been updated to include many new and improved language features
that make it a powerful object-oriented programming language. These features
include inheritance, interfaces, and overloading, among others. Visual Basic
also now supports structured exception handling, custom attributes and also
supports multi-threading.
Visual
Basic .NET is also CLS compliant, which means that any CLS-compliant language
can use the classes, objects, and components you create in Visual Basic .NET.
Managed
Extensions for C++ and attributed programming are just some of the enhancements
made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.
C# is
Microsoft’s new language. It’s a C-style language that is essentially “C++ for
Rapid Application Development”. Unlike other languages, its specification is
just the grammar of the language. It has no standard library of its own, and
instead has been designed with the intention of using the .NET libraries as its
own.
Microsoft
Visual J# .NET provides the easiest transition for Java-language developers
into the world of XML Web Services and dramatically improves the
interoperability of Java-language programs with existing software written in a
variety of other programming languages.
ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.
Other languages for which .NET compilers are available include FORTRAN, COBOL, and Eiffel.
Fig 1. The .NET Framework stack: ASP.NET and XML Web Services, Windows Forms, Base Class Libraries, Common Language Runtime, and the Operating System.
C#.NET is
also compliant with CLS (Common Language Specification) and supports structured
exception handling. CLS is a set of rules and constructs that are supported by
the CLR (Common Language Runtime). CLR is the runtime environment provided by
the .NET Framework; it manages the execution of the code and also makes the
development process easier by providing services.
C#.NET is a CLS-compliant language. Any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET, the destructor is compiled into a Finalize method. The Finalize method is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the Finalize method can be called only from the class it belongs to or from derived classes.
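A short C# illustration of a constructor and a destructor; the Connection class and its members are invented for the example:

using System;

class Connection
{
    private readonly string host;

    // Constructor: initializes the object when it is created.
    public Connection(string host)
    {
        this.host = host;
        Console.WriteLine("Connection to " + host + " opened.");
    }

    // Destructor: compiled into a protected Finalize method and invoked
    // automatically by the runtime when the object is destroyed; it cannot
    // be called directly from user code.
    ~Connection()
    {
        Console.WriteLine("Connection to " + host + " released.");
    }
}

class ConstructorDemo
{
    static void Main()
    {
        Connection c = new Connection("db-server"); // constructor runs here
        // When c becomes unreachable, the garbage collector eventually runs
        // the destructor before reclaiming the object's memory.
    }
}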
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The
.NET Framework monitors allocated resources, such as objects and variables. In
addition, the .NET Framework automatically releases memory for reuse by
destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that
are not currently in use by applications. When the garbage collector comes
across an object that is marked for garbage collection, it releases the memory
occupied by the object.
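A minimal sketch of the behavior described above: an object becomes unreachable and the garbage collector reclaims it, running its finalizer first. GC.Collect is used here only to force a collection for demonstration; production code normally leaves collection timing to the runtime.

using System;

class Temporary
{
    // Finalizer runs when the garbage collector destroys the object.
    ~Temporary()
    {
        Console.WriteLine("Finalizer ran: the object's memory is about to be reclaimed.");
    }
}

class GarbageCollectionDemo
{
    static void Main()
    {
        new Temporary();               // the object becomes unreachable immediately
        GC.Collect();                  // request a collection (for demonstration only)
        GC.WaitForPendingFinalizers(); // block until queued finalizers have executed
        Console.WriteLine("Collection complete.");
    }
}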
OVERLOADING
Overloading is another feature in C#. Overloading enables us
to define multiple procedures with the same name, where each procedure has a
different set of arguments. Besides using overloading for procedures, we can
use it for constructors and properties in a class.
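A brief C# illustration of overloading; the Calculator class is invented for the example:

using System;

class Calculator
{
    // Overloaded constructors: same name, different parameter lists.
    public Calculator() { }
    public Calculator(string name) { Console.WriteLine("Calculator: " + name); }

    // Overloaded methods: the compiler picks the version whose
    // parameter list matches the arguments supplied at the call site.
    public int Add(int a, int b) { return a + b; }
    public double Add(double a, double b) { return a + b; }
    public int Add(int a, int b, int c) { return a + b + c; }
}

class OverloadDemo
{
    static void Main()
    {
        Calculator calc = new Calculator("basic");
        Console.WriteLine(calc.Add(2, 3));     // calls Add(int, int)
        Console.WriteLine(calc.Add(2.5, 3.5)); // calls Add(double, double)
        Console.WriteLine(calc.Add(1, 2, 3));  // calls Add(int, int, int)
    }
}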
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously. We can use multithreading to decrease the time taken by an application to respond to user interaction.
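A small C# sketch using System.Threading to run a time-consuming task on a worker thread while the main thread continues; the method and class names are illustrative:

using System;
using System.Threading;

class ThreadDemo
{
    static void BackgroundWork()
    {
        Console.WriteLine("Worker thread " + Thread.CurrentThread.ManagedThreadId + " started.");
        Thread.Sleep(500); // simulate a slow task
        Console.WriteLine("Worker thread finished.");
    }

    static void Main()
    {
        // Start a worker thread so the main thread remains free to respond to the user.
        Thread worker = new Thread(BackgroundWork);
        worker.Start();

        Console.WriteLine("Main thread continues while the worker runs.");
        worker.Join(); // wait for the worker before exiting
    }
}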
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and handle errors at runtime. In C#.NET, we use try…catch…finally statements to create exception handlers. Using try…catch…finally statements, we can create robust and effective exception handlers to improve the reliability of our application.
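A short try…catch…finally example in C#; the file name is hypothetical:

using System;
using System.IO;

class ExceptionDemo
{
    static void Main()
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader("settings.txt"); // hypothetical input file
            Console.WriteLine(reader.ReadLine());
        }
        catch (FileNotFoundException ex)
        {
            // Handle the specific error detected at runtime.
            Console.WriteLine("Configuration file missing: " + ex.FileName);
        }
        catch (IOException ex)
        {
            Console.WriteLine("I/O error: " + ex.Message);
        }
        finally
        {
            // Always runs, whether or not an exception was thrown.
            if (reader != null) reader.Close();
        }
    }
}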
6.5 THE
.NET FRAMEWORK
The .NET Framework is a new computing platform that
simplifies application development in the highly distributed environment of the
Internet.
OBJECTIVES OF .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment, whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment and guarantees safe execution of code.
3. To eliminate performance problems.
There are different types of applications, such as Windows-based applications and Web-based applications.
6.6 FEATURES OF SQL-SERVER
The OLAP
Services feature available in SQL Server version 7.0 is now called SQL Server
2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component.
The Repository component available in SQL Server version 7.0 is now called
Microsoft SQL Server 2000 Meta Data Services. References to the component now
use the term Meta Data Services. The term repository is used only in reference
to the repository engine within Meta Data Services.
A SQL Server database consists of the following types of objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE:
A table is a collection of data about a specific topic.
VIEWS OF TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view. Here we can specify what kind of data the table will hold.
Datasheet View
To add, edit, or analyse the data itself, we work in the table's datasheet view.
QUERY:
A query is a question that is asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (which can be edited) or a snapshot (which cannot be edited). Each time we run the query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
CHAPTER
7
APPENDIX
7.1
SAMPLE SOURCE CODE
7.2
SAMPLE OUTPUT
CHAPTER
8
8.0
CONCLUSION:
Congestion aware routing has been investigated in nonlinear elastic optical networks and shown to be effective for the reference NSFNET topology. We observe that the network blocking probability (NBP) follows a generalized extreme value distribution, allowing robust estimates of the load for a given NBP to be obtained. When NSFNET is sequentially loaded with 100 GbE demands, the proposed algorithm with a flexgrid allows the network to support 1744 demands, compared to 328 demands using a fixed 50 GHz grid with shortest path routing, for NBP = 1%. The congestion aware routing algorithms investigated resulted in longer average paths, with 5% of all routes exceeding the maximum shortest path, in order to increase the overall network capacity.
We
propose and analyze a behavior-rule specification-based technique for intrusion
detection of medical devices embedded in a medical cyber physical system (MCPS)
in which the patient’s safety is of the utmost importance. We propose a
methodology to transform behavior rules to a state machine, so that a device
that is being monitored for its behavior can easily be checked against the
transformed state machine for deviation from its behavior specification. Using
vital sign monitor medical devices as an example, we demonstrate that our
intrusion detection technique can effectively trade false positives off for a
high detection probability to cope with more sophisticated and hidden attackers
to support ultra safe and secure MCPS applications. Moreover, through a
comparative analysis, we demonstrate that our behavior-rule specification based
IDS technique outperforms two existing anomaly-based techniques for detecting
abnormal patient behaviors in pervasive healthcare applications.
INTRODUCTION
The most prominent characteristic of a
medical cyber physical system (MCPS) is its feedback loop that acts on the
physical environment. In other words, the physical environment provides data to
the MCPS sensors whose data feed the MCPS control algorithms that drive the
actuators which change the physical environment. MCPSs are often characterized
by sophisticated patient treatment algorithms interacting with the physical
environment including the patient. In this paper, we are concerned with
intrusion detection mechanisms for detecting compromised sensors or actuators
embedded in an MCPS for supporting safe and secure MCPS applications upon which
patients and healthcare personnel can depend with high confidence.
Intrusion detection system (IDS) design
for cyber physical systems (CPSs) has attracted considerable attention because
of the dire consequences of CPS failure. However, IDS techniques for MCPSs are still in their infancy, with very little work reported. Intrusion detection
techniques in general can be classified into four types: signature, anomaly,
trust, and specification-based techniques. In this paper, we consider specification
rather than signature-based detection to deal with unknown attacker patterns.
We consider specification rather than anomaly based techniques to avoid using
resource constrained sensors or actuators in an MCPS for profiling anomaly
patterns (e.g., through learning) and to avoid high false positives. We
consider specification rather than trust based techniques to avoid delay due to
trust aggregation and propagation to promptly react to malicious behaviors in safety
critical MCPSs.
To accommodate resource-constrained
sensors and actuators in an MCPS, we propose behavior-rule specification-based intrusion
detection (BSID) which uses the notion of behavior rules for specifying
acceptable behaviors of medical devices in an MCPS. Rule-based intrusion
detection thus far has been applied only in the context of communication
networks which have no concern of physical environments and the closed-loop control
structure as in an MCPS. For example, Da Silva et al. propose an IDS that
applies seven types of traffic-based rules to detect intruders: interval,
retransmission, integrity, delay, repetition, radio transmission range and
jamming. Ioannis et al. propose a multi trust IDS with traffic-based collection
that audits the forwarding behavior of suspects to detect black hole and grey hole
attacks launched by captured devices based on the rate of specification violations.
Our contribution relative to prior work
cited above is that we specifically consider behavior rules for MCPS actuators
controlling patient treatment algorithms as well as for physiological sensors
providing information concerning the physical environment. Further, we propose
a methodology to transform behavior rules to a state machine, so that a device
that is being monitored for its behavior can easily be checked against the
transformed state machine for deviation from its behavior specification.
Existing work only considered specification-based state machines for intrusion detection
of communication protocol misbehaving patterns.
Untreated in the literature, in this paper we also investigate the impact
of attacker behaviors on the effectiveness of MCPS intrusion detection. We
demonstrate that our specification based IDS technique can effectively trade
higher false positives off for lower false negatives to cope with more sophisticated
and hidden attackers. We show results for a range of configurations to
illustrate this trade. Because the key motivation in MCPS is safety, our
solution is deployed in a configuration yielding a high detection rate without
compromising the false positive probability. Our approach is monitoring-based
relying on the use of peer devices to monitor and measure the compliance degree
of a trustee device connected to the monitoring node by the CPS network. The
rules comparing monitor and trustee physiology (blood pressure, oxygen saturation, pulse, respiration and temperature) exceed the protection possible by considering devices in isolation.
The fundamental difference in designing
IDSs for safety critical CPSs versus for other brands of systems is that the
intrusion detection is closely tied with the physical components of the CPS, so
the detection is less about communication protocol compliance but more about
behavior compliance specific to the physical components to be controlled in the
CPS. Thus, instead of monitoring packet routing or packet loss data for misbehavior
detection of communication protocol compliance during packet transmission, IDSs
for MCPSs may test medical sensor measurements and actuator settings for
misbehavior detection of physical properties manifested because of attacks. For
example, a patient requesting analgesic must have a pulse greater than some threshold; otherwise an overdose of analgesic may be delivered. Thus, if a patient requests analgesic while having a pulse below the threshold, an intruder may be involved. The behavior rules proposed in our work specifically address
the expected behavior of individual physical components in the MCPS. The
compliance threshold proposed in this paper specifically measures the goodness
of a physical component. A challenge is to provide a high detection rate
without introducing high false positives. We demonstrate that our IDS design
based on the compliance threshold can effectively distinguish benign
abnormalities from malicious attacks. To the best of our knowledge, there is no
prior work discussing the difference between CPS intrusion detection and communication
systems intrusion detection.
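As a concrete illustration of the kind of behavior-rule check described above, the sketch below encodes the analgesic/pulse rule together with a simple compliance degree. The pulse threshold, the compliance threshold, and all type and member names are illustrative assumptions, not values or identifiers taken from the paper.

using System;

class TrusteeObservation
{
    public double Pulse;            // beats per minute reported by the monitored device
    public bool AnalgesicRequested; // actuator command observed by the peer monitor
}

class BehaviorRuleMonitor
{
    const double PulseThreshold = 50.0;      // assumed minimum pulse for analgesic delivery
    const double ComplianceThreshold = 0.90; // assumed minimum fraction of compliant observations

    private int observations;
    private int compliant;

    public void Observe(TrusteeObservation o)
    {
        observations++;
        // Behavior rule: analgesic may only be requested when the pulse exceeds the threshold.
        bool violation = o.AnalgesicRequested && o.Pulse <= PulseThreshold;
        if (!violation) compliant++;
    }

    // The compliance degree of the trustee device; when it falls below the
    // compliance threshold, the device is flagged as a suspected intruder.
    public bool IsSuspectedIntruder()
    {
        double complianceDegree = observations == 0 ? 1.0 : (double)compliant / observations;
        return complianceDegree < ComplianceThreshold;
    }
}

class IdsDemo
{
    static void Main()
    {
        BehaviorRuleMonitor monitor = new BehaviorRuleMonitor();
        monitor.Observe(new TrusteeObservation { Pulse = 72, AnalgesicRequested = true });
        monitor.Observe(new TrusteeObservation { Pulse = 40, AnalgesicRequested = true }); // violates the rule
        Console.WriteLine("Suspected intruder: " + monitor.IsSuspectedIntruder());
    }
}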
It is necessary to build an IDS per CPS
domain/application since the behavior rules for specifying the behaviors of
physical components/devices in a CPS are inherently domain/application
specific. In the literature, ISML and T-Rex are also specification-based
approaches for intrusion detection in CPSs. However, none of them considered
MCPSs. In the field of intrusion detection for MCPSs or healthcare systems,
Asfaw et al. studied an anomaly-based IDS for MCPSs. The authors focus on
attacks that violate privacy of an MCPS; in contrast, our investigation focuses
on attacks that violate the integrity of an MCPS. They use an anomaly-based
approach while we use a specification-based approach. Asfaw et al. do not
provide numerical results in the form of false negatives or positives which are
the critical metrics for this research area; our investigation does provide
these results.
Venkatasubramanian and Gupta survey
security solutions for pervasive healthcare applications. Like the work above, the authors
focus on attacks on a passive pervasive healthcare system that violate patient
privacy while our investigation considers integrity attacks on an MCPS that
harm a patient. Their countermeasures focus on encryption and authentication/access
control.
Yang and Hwang investigated an approach
to fraud and abuse detection in healthcare applications. In contrast, our
investigation focuses on the treatment, rather than the administrative, domain
of healthcare. The authors use an anomaly-based approach while we use a
specification-based approach. They provide numerical results that measure
internal validity (the effectiveness of the data mining implementation) but do
not provide externally valid metrics like the Receiver Operating Characteristic (ROC), which can reveal the tradeoff between the detection rate and the false positive probability. Porras and Neumann study a hierarchical multi trust behavior-based IDS called
Event Monitoring Enabling Responses to Anomalous Live Disturbances (EMERALD) using
complementary signature based and anomaly-based analysis. The authors identify
a signature-based analysis trade between the state space created/runtime burden
imposed by rich rule sets and the increased false negatives that stem from a
less expressive rule set.
Porras and Neumann highlight two specific
anomaly-based techniques using statistical analysis: one studies user sessions
(to detect live intruders), and the other studies the runtime behavior of
programs (to detect malicious code). EMERALD provides a generic analysis framework
that is flexible enough to allow anomaly detectors to run with different scopes
of multi trust data (service, domain or enterprise). However, Porras and Neumann
did not report false positive or false negative probability data. While EMERALD
pursues a domain-independent CPS security solution combining anomaly and
signature-based analysis, our investigation focuses on one that is relevant for
MCPSs using specification-based analysis. Park et al. propose a semi-supervised
anomaly-based IDS targeted for assisted living environments. Their design is
behavior-based and audits series of events which they call episodes. The
authors’ events are 3-tuples comprising sensor ID, start time and duration.
Park et al. test data sets using four similarity functions based on: LCS, count
of common events not in LCS, event start times and event durations. They control
episode length and similarity function as independent variables. The authors
provide excellent ROC data which we use for a comparative analysis.
Tsang and Kwong propose a multi trust
IDS called Multi-agent System (MAS) that includes an analysis function called
Ant Colony Clustering Model (ACCM). The authors intend for ACCM to reduce the
characteristically high false positive rate of anomaly-based approaches while
minimizing the training period by using an unsupervised approach to machine
learning. MAS is hierarchical and contains a large number of roles: monitor
agents collect audit data, decision agents perform analysis, action agents
effect responses, coordination agents manage multi trust communication, user interface
agents interact with human operators and registration agents manage agent
appearance and disappearance. Their results indicate ACCM slightly outperforms
the detection rates and significantly outperforms the false positive rates of k
means and expectation-maximization approaches. Like the approaches above, MAS pursues a
domain-independent CPS security solution using anomaly-based analysis; our
investigation focuses on MCPS-specific IDS using specification-based analysis.
We will use Park et al. and Tsang and Kwong as base schemes against which BSID
will be compared because no others provide meaningful pfp/pfn data for a
comparative analysis.
Our study of IDS warrants distinct
treatment for medical versus generic CPSs because the behavior rule set we
propose is application specific. CPSs in other domains will not have
temperature sensors, medication dispensers or actuators supporting cardiac
function. Furthermore, each CPS domain will have a unique environment: For
example, while the population in an MCPS may be around 1000 based on the number
of beds in a hospital, the population for a smart grid CPS may be in the millions.
Also, while the geography of an MCPS may span a single square kilometer based on the size of a medical campus, the area of operation for an unmanned air vehicle (UAV) may be thousands of square kilometers.
1.3
LITERATURE SURVEY
REDUNDANCY
MANAGEMENT OF MULTIPATH ROUTING FOR INTRUSION TOLERANCE IN HETEROGENEOUS
WIRELESS SENSOR NETWORKS.
PUBLICATION:
H. Al-Hamadi and I. R. Chen. IEEE Transactions on Network and Service
Management, 10(2):189–203, 2013.
In this paper we propose redundancy
management of heterogeneous wireless sensor networks (HWSNs), utilizing
multipath routing to answer user queries in the presence of unreliable and
malicious nodes. The key concept of our redundancy management is to exploit the
tradeoff between energy consumption vs. the gain in reliability, timeliness,
and security to maximize the system useful lifetime. We formulate the tradeoff
as an optimization problem for dynamically determining the best redundancy
level to apply to multipath routing for intrusion tolerance so that the query
response success probability is maximized while prolonging the useful lifetime.
Furthermore, we consider this optimization problem for the case in which a
voting-based distributed intrusion detection algorithm is applied to detect and
evict malicious nodes in a HWSN. We develop a novel probability model to
analyze the best redundancy level in terms of path redundancy and source
redundancy, as well as the best intrusion detection settings in terms of the
number of voters and the intrusion invocation interval under which the lifetime
of a HWSN is maximized. We then apply the analysis results obtained to the
design of a dynamic redundancy management algorithm to identify and apply the
best design parameter settings at runtime in response to environment changes,
to maximize the HWSN lifetime.
TELECOMMUNICATIONS
DEMAND AND PRICING STRUCTURE: AN ECONOMETRIC ANALYSIS.
PUBLICATION:
M. Aldebert, M. Ivaldi, and C. Roucolle.
Telecommunication Systems, 25:89–115, 2004.
The main objective of this paper is to analyse
residential demand by traffic destination, using a translogarithmic indirect
utility function. We focus on five traffic directions, in order to construct a
model adapted to evaluate the characteristics of telecommunications demand in a
competitive market. The resulting price elasticities express high reactivity to
own price changes for the main traffic directions, as well as little
interactions between the different types of traffic. Moreover the high values
of income elasticities confirm the importance of income effects when analysing
residential telecommunications demand. This model proves useful for welfare analysis. The computation of customers' income equivalent variation shows, on
average, a higher willingness to pay for some traffic directions than the bill
actually paid. Finally we show that the optimal prices for the operator, in a
cost minimisation point of view, are higher than the observed prices for local
and national traffic directions. This emphasises the existence of important
cross-subsidies among the different segments of customers.
SECURITY
CHALLENGES IN NEXT GENERATION CYBER PHYSICAL SYSTEMS.
PUBLICATION:
M. Anand, E. Cronin, M. Sherr, M. Blaze, Z. Ives, and I. Lee. Beyond SCADA: Networked
Embedded Control for Cyber Physical Systems, 2006.
The advent of low-powered wireless
networks of embedded sensors has spurred the development of new applications at
the interface between the real world and its digital manifestation. Following
this trend, the next generation Supervisory Control And Data Acquisition
(SCADA) system is expected to replace traditional data gathering – a
distributed network of Remote Terminal Units (RTU) or Programmable Logic
Controllers (PLC), with devices such as the wireless sensing devices. Before
these intelligent systems can be deployed in critical infrastructure such as
emergency rooms and power plants, the security properties of sensors must be
fully understood. Existing wisdom has been to apply the traditional security
models and techniques to sensor networks: as in conventional computing
environments, the goal has been to protect physical entities: devices, packets,
links, and ultimately networks. Sensors have unique
characteristics that warrant novel security considerations: the geographic
distribution of the devices allows an attacker to physically capture nodes and
learn secret key material, or to intercept or inject messages; the hierarchical
nature of sensor networks and their route maintenance protocols permit the
attacker to determine where the root node is placed. Perhaps most importantly,
most sensor networks rely on redundancy (followed by aggregation) to accurately
capture environmental information even with poorly calibrated and unreliable
devices. This results in a fundamental distinction between a physical message in
a sensor network and a logical unit of sensed information: a message with a
single sensor reading may reveal very little information about the real
environment, whereas a message containing an aggregate or collection of readings
may reveal a great deal more.
HOST-BASED
ANOMALY DETECTION FOR PERVASIVE MEDICAL SYSTEMS.
PUBLICATION:
B. Asfaw, D. Bekele, B. Eshete, A. Villafiorita, and K. Weldemariam. In Fifth
International Conference on Risks and Security of Internet and Systems, pages
1–8, October 2010.
Intrusion detection systems are deployed on hosts in
a computing infrastructure to tackle undesired events in the course of usage of
the systems. One of the promising domains of applying intrusion detection is
the healthcare domain. A typical healthcare scenario is characterized by high
degree of mobility, frequent interruptions and above all demands access to
sensitive medical records by concerned stakeholders. Migrating this set of
concerns in pervasive healthcare environments where the traditional
characteristics are more intensified in terms of uncertainty, one ends up with
more challenges on security due to nature of pervasive devices and wireless
communication media along with classic security problems for desktop based systems.
Despite the evolution of automated healthcare services and the sophistication of
attacks against such services, there is a notable lack of techniques, tools
and experimental setups for protecting hosts against intrusive actions. This
paper presents a host-based anomaly modeling and detection approach based on
data mining techniques for pervasive healthcare systems. The technique maintains
a normal usage profile of pervasive healthcare applications and inspects the
current workflow against this profile in order to classify it as anomalous or
normal. The technique is implemented as a prototype with a sample data set, and
the results obtained show that it is able to classify anomalous activities.
CHAPTER 2
2.0
SYSTEM ANALYSIS
2.1
EXISTING SYSTEM:
Existing work only considered specification-based state machines
for intrusion detection of communication protocol misbehavior patterns. It did not
avoid the delay introduced by trust-based techniques, in which trust aggregation
and propagation hinder a prompt reaction to malicious behaviors in safety-critical MCPSs.
2.1.1
DISADVANTAGES:
2.2
PROPOSED SYSTEM:
We propose a methodology to transform behavior rules to a state
machine, so that a device that is being monitored for its behavior can easily
be checked against the transformed state machine for deviation from its
behavior specification. We also investigate the impact of attacker behaviors on
the effectiveness of MCPS intrusion detection. We demonstrate that our
specification-based IDS technique can effectively trade higher false positives
off for lower false negatives to cope with more sophisticated and hidden
attackers. We show results for a range of configurations to illustrate this
trade-off. Because the key motivation in MCPS is safety, our solution is deployed
in a configuration yielding a high detection rate without compromising the false
positive probability. Our approach is monitoring-based, relying on the use of
peer devices to monitor and measure the compliance degree of a trustee device
connected to the monitoring node by the CPS network. The rules comparing monitor
and trustee physiology (blood pressure, oxygen saturation, pulse, respiration and
temperature) provide protection beyond what is possible by considering devices in isolation.
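The following sketch illustrates, in C#, how a monitoring node might compare its own vital-sign readings with those reported by a trustee device and derive a compliance degree. The class layout and the tolerance values are assumptions introduced for illustration only; the actual rules and thresholds come from the MCPS behavior specification.

    using System;

    // Illustrative sketch only: the class and tolerance values below are assumptions
    // chosen for demonstration; the actual rule set and thresholds are defined by
    // the MCPS behavior specification.
    public class VitalSigns
    {
        public double BloodPressure;     // systolic, mmHg
        public double OxygenSaturation;  // percent
        public double Pulse;             // beats per minute
        public double Respiration;       // breaths per minute
        public double Temperature;       // degrees Celsius
    }

    public static class PeerMonitor
    {
        // Returns the fraction of physiology rules the trustee satisfies relative
        // to the monitor's own readings (1.0 = fully compliant, 0.0 = fully deviant).
        public static double ComplianceDegree(VitalSigns monitor, VitalSigns trustee)
        {
            int satisfied = 0, total = 5;
            if (Math.Abs(monitor.BloodPressure - trustee.BloodPressure) <= 15) satisfied++;
            if (Math.Abs(monitor.OxygenSaturation - trustee.OxygenSaturation) <= 3) satisfied++;
            if (Math.Abs(monitor.Pulse - trustee.Pulse) <= 10) satisfied++;
            if (Math.Abs(monitor.Respiration - trustee.Respiration) <= 4) satisfied++;
            if (Math.Abs(monitor.Temperature - trustee.Temperature) <= 0.5) satisfied++;
            return (double)satisfied / total;
        }
    }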
2.2.1
ADVANTAGES:
2.3
HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1
HARDWARE REQUIREMENT:
Processor – Pentium IV
Speed – 1.1 GHz
RAM – 256 MB (min)
Hard Disk – 20 GB
Floppy Drive – 1.44 MB
Keyboard – Standard Windows Keyboard
Mouse – Two or Three Button Mouse
Monitor – SVGA
2.3.2
SOFTWARE REQUIREMENTS:
Operating System : Windows XP
Front End : Microsoft Visual Studio .NET 2008
Back End : MS-SQL Server 2005
Document : MS-Office 2007
CHAPTER 3
3.0
SYSTEM DESIGN:
Data Flow Diagram / Use Case Diagram / Flow Diagram:
The
DFD is also called a bubble chart. It is a simple graphical formalism that can
be used to represent a system in terms of the input data to the system, the various
processing carried out on these data, and the output data generated by the
system.
The
data flow diagram (DFD) is one of the most important modeling tools. It is used
to model the system components. These components are the system process, the
data used by the process, an external entity that interacts with the system and
the information flows in the system.
DFD
shows how the information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information
flow and the transformations that are applied as data moves from input to
output.
DFD
is also known as bubble chart. A DFD may be used to represent a system at any
level of abstraction. DFD may be partitioned into levels that represent
increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION OF DATA:
External sources or
destinations, which may be people or organizations or other entities
DATA SOURCE:
Here the data referenced by a process is stored and
retrieved.
PROCESS:
People, procedures or devices that produce data. The
physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to
a destination. The data flow is a “packet” of data.
There
are several common modeling rules when creating DFDs:
All processes must
have at least one data flow in and one data flow out.
All processes
should modify the incoming data, producing new forms of outgoing data.
Each data store
must be involved with at least one data flow.
Each external
entity must be involved with at least one data flow.
A data flow must
be attached to at least one process.
3.1
BLOCK DIAGRAM
3.2
DATAFLOW DIAGRAM
UML DIAGRAMS:
3.3
USE CASE DIAGRAM:
3.4
CLASS DIAGRAM:
3.5
SEQUENCE DIAGRAM:
3.6
ACTIVITY DIAGRAM:
CHAPTER
4
4.0
IMPLEMENTATION:
4.1 ALGORITHM
4.2
MODULES:
The system is proposed to have the following modules along with
functional requirements.
THREAT MODEL
ATTACKER ARCHETYPES
BEHAVIOR RULES
INTRUSION DETECTION SYSTEM
4.3
MODULE DESCRIPTION:
1. THREAT
MODEL
We focus on defeating inside attackers that violate the integrity of the MCPS
with the objective of disabling the MCPS functionality.
Our design is also effective against attacks such as subtle manipulations that
change medical doses slightly to cause long term harm to patients or medical or
billing record exfiltrations which violate privacy. There are two distinct stages
in an attack: before a node is compromised and after a node is compromised.
Before a node is compromised, the adversary focuses on the tactical goal of
achieving a foothold on the target system.
2. ATTACKER
ARCHETYPES
We differentiate three attacker archetypes:
reckless, random and opportunistic. A reckless attacker performs attacks whenever
it has a chance, to impair the MCPS functionality as soon as possible. A random
attacker, on the other hand, performs attacks only randomly to avoid detection.
It is thus insidious and hidden, with the objective of crippling the MCPS functionality.
We model the attacker behavior by a random attack probability pa. When pa = 1
the attacker is a reckless adversary. Random attacks are typically implemented
as on-off attacks in real-world scenarios, so pa is not a random variable
drawn from the uniform distribution U(0, 1) but rather the probability that a
malicious node is performing attacks at any time with this on-off attack
behavior. An opportunistic attacker is the third archetype. An opportunistic
attacker exploits ambient noise, modeled by perr (the probability of mis-monitoring),
to perform attacks.
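A minimal C# sketch of the three archetypes is given below. The decision logic is a simplification assumed for illustration; in particular, the use of a pseudo-random generator only mimics the on-off behavior described above.

    using System;

    // Illustrative sketch only: archetype names follow the text, but the decision
    // logic and the use of System.Random are simplifications for demonstration.
    public enum AttackerArchetype { Reckless, Random, Opportunistic }

    public class AttackerModel
    {
        private readonly Random rng = new Random();
        public AttackerArchetype Archetype;
        public double Pa;    // random attack probability (pa = 1 for a reckless attacker)
        public double Perr;  // probability of mis-monitoring (ambient noise)

        // Decides whether the compromised node attacks in the current monitoring interval.
        public bool AttacksNow()
        {
            switch (Archetype)
            {
                case AttackerArchetype.Reckless:
                    return true;                       // attacks at every opportunity
                case AttackerArchetype.Random:
                    return rng.NextDouble() < Pa;      // on-off attack with probability pa
                case AttackerArchetype.Opportunistic:
                    return rng.NextDouble() < Perr;    // attacks only when noise can hide it
                default:
                    return false;
            }
        }
    }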
3. BEHAVIOR
RULES
Behavior rules for a device are specified
during the design and testing phase of an MCPS. Our intrusion detection protocol
takes a set of behavior rules for a device as input and detects if a device’s
behavior deviates from the expected behavior specified by the set of behavior
rules. Since the intrusion detection activity is performed in the background,
it allows behavior rules to be changed, without disrupting the MCPS operation,
if incomplete or imprecise specifications are discovered during the operational
phase. Our IDS design for the reference MCPS model relies on the use of
lightweight specification-based behavior rules for each sensor or actuator
medical device.
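The sketch below shows, under assumed rule bodies, how behavior rules can be written as predicates over the observed device state and checked in the background; in the actual design these rules are transformed into a state machine whose unsafe states encode violations.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Minimal sketch, assuming a vital sign monitor (VSM) as the device: each behavior
    // rule is written as a predicate over the observed device state, and the monitored
    // device is "safe" only while every rule holds. The rule bodies are illustrative.
    public class DeviceState
    {
        public bool AlarmOn;
        public double PulseReading;
        public double ConfiguredAlarmThreshold;
    }

    public static class BehaviorRuleChecker
    {
        // Behavior rules derived from the specification; in the real design these are
        // transformed into a state machine whose unsafe states encode rule violations.
        public static readonly List<Func<DeviceState, bool>> Rules = new List<Func<DeviceState, bool>>
        {
            s => !(s.PulseReading > s.ConfiguredAlarmThreshold) || s.AlarmOn, // alarm must fire above threshold
            s => s.PulseReading >= 0 && s.PulseReading <= 300                 // readings must be physically plausible
        };

        public static bool IsSafe(DeviceState observed)
        {
            return Rules.All(rule => rule(observed));
        }
    }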
4. INTRUSION
DETECTION SYSTEM
Intrusion detection system (IDS) design for
cyber physical systems (CPSs) has attracted considerable attention because of
the dire consequences of CPS failure. In this paper, we consider specification-based
rather than signature-based detection to deal with unknown attacker patterns. We
consider specification-based rather than anomaly-based techniques to avoid using
resource-constrained sensors or actuators in an MCPS for profiling anomaly
patterns (e.g., through learning) and to avoid high false positives. We consider
specification-based rather than trust-based techniques to avoid the delay due to
trust aggregation and propagation, so as to promptly react to malicious behaviors
in safety-critical MCPSs.
CHAPTER 5
5.0
SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the
project is analyzed in this phase and a business proposal is put forth with a
very general plan for the project and some cost estimates. During system
analysis the feasibility study of the proposed system is to be carried out.
This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding
of the major requirements for the system is essential.
Three key considerations involved in the feasibility
analysis are
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic
impact that the system will have on the organization. The amount of fund that
the company can pour into the research and development of the system is
limited. The expenditures must be justified. Thus the developed system is well
within the budget, and this was achieved because most of the technologies used
are freely available. Only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is
carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on
the available technical resources, as this would lead to high demands being placed
on the client. The developed system must have modest requirements, as only
minimal or no changes are required for implementing this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study is to check the level of
acceptance of the system by the user. This includes the process of training the
user to use the system efficiently. The user must not feel threatened by the
system, instead must accept it as a necessity. The level of acceptance by the
users solely depends on the methods that are employed to educate the user about
the system and to make him familiar with it. His level of confidence must be
raised so that he is also able to make some constructive criticism, which is
welcomed, as he is the final user of the system.
5.2 SYSTEM TESTING:
Testing is a
process of checking whether the developed system is working according to the
original objectives and requirements. It is a set of
activities that can be planned in advance and conducted systematically. Testing
is vital to the success of the system. System testing makes the logical
assumption that if all the parts of the system are correct, the global goal will
be successfully achieved. Inadequate testing, or no testing at all, leads to
errors that may not appear until many months later. This creates two problems:
the time lag between the cause and the appearance of the problem, and the effect
of the system errors on the files and records within the system. A small system
error can conceivably explode into a much larger problem. Effective testing early
in the process translates directly into long-term cost savings from a reduced
number of errors. Another reason for system testing is its utility as a
user-oriented vehicle before implementation. The best programs are worthless if
they do not produce the correct outputs.
5.2.1 UNIT TESTING:
A program
represents the logical elements of a system. For a program to run
satisfactorily, it must compile and test data correctly and tie in properly
with other programs. Achieving an error free program is the responsibility of
the programmer. Program testing checks for two types of errors: syntax and
logic. A syntax error is a program statement that violates one or more rules of
the language in which it is written. An improperly defined field dimension or
omitted keywords are common syntax errors. These errors are shown through error
messages generated by the computer. For logic errors, the programmer must
examine the output carefully.
UNIT TESTING:
Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.
Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.
5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove that the application
delivers correct results, using enough inputs to give an adequate level of
confidence that it will work correctly for all sets of inputs. The functional
testing will need to prove that the application works for each client type and
that the personalization functions work correctly. When a program is tested, the
actual output is compared with the expected output. When there is a discrepancy,
the sequence of instructions must be traced to determine the problem. The process
is facilitated by breaking the program into self-contained portions, each of
which can be checked at certain key points. The idea is to compare program values
against desk-calculated values to isolate the problems.
FUNCTIONAL TESTING:
Description: Test for all modules.
Expected result: All peers should communicate in the group.
Description: Test for various peers in a distributed network framework as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.
5.2.3 NON-FUNCTIONAL TESTING:
The non-functional software testing
encompasses a rich spectrum of testing strategies, describing the expected
results for every test case. It uses symbolic analysis techniques. This testing
is used to check that an application will work in the operational environment.
Non-functional testing includes:
Load testing
Performance testing
Usability testing
Reliability testing
Security testing
5.2.4 LOAD TESTING:
An important
tool for implementing system tests is a Load generator. A Load generator is
essential for testing quality requirements such as performance and stress. A
load can be a real load, that is, the system can be put under real usage by
having actual telephone users connected to it. They will generate test input
data for the system test.
LOAD TESTING:
Description: It is necessary to ascertain that the application behaves correctly under loads when ‘Server busy’ response is received.
Expected result: Should designate another active node as a Server.
5.2.5 PERFORMANCE TESTING:
Performance
tests are utilized in order to determine the widely defined performance of the
software system such as execution time associated with various parts of the code,
response time and device utilization. The intent of this testing is to identify
weak points of the software system and quantify its shortcomings.
PERFORMANCE TESTING:
Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values, and produce accurate results in the expected time.
5.2.6 RELIABILITY TESTING:
The software
reliability is the ability of a system or component to perform its required
functions under stated conditions for a specified period of time and it is
being ensured in this testing. Reliability can be expressed as the ability of
the software to reveal defects under testing conditions, according to the
specified requirements. It is the probability that a software system will operate
without failure under given conditions for a given time interval, and it focuses
on the behavior of the software element. It forms a part of software quality
control.
RELIABILITY TESTING:
Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.
5.2.7 SECURITY TESTING:
Security
testing evaluates system characteristics that relate to the availability,
integrity and confidentiality of the system data and services. Users/Clients
should be encouraged to make sure their security needs are very clearly known
at requirements time, so that the security issues can be addressed by the
designers and testers.
SECURITY TESTING:
Description: Checking that the user identification is authenticated.
Expected result: In case of failure, it should not be connected in the framework.
Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design
method that uses the control structure of the procedural design to derive test
cases. Using the white box testing method, the software engineer can derive test
cases. White box testing focuses on the inner structure of the software to be
tested.
WHITE BOX TESTING:
Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.
Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.
Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional
requirements of the software. That is, black box testing enables the software
engineer to derive sets of input conditions that will fully exercise all
functional requirements for a program. Black box testing is not an alternative
to white box techniques. Rather, it is a complementary approach that is likely
to uncover a different class of errors than white box methods. Black box testing
attempts to find errors by focusing on the inputs, outputs, and principal
functions of a software module. The starting point of the black box testing is
either a specification or the code. The contents of the box are hidden, and the
stimulated software should produce the desired results.
BLACK BOX TESTING:
Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.
Description: To check for interface errors.
Expected result: The entire interface must function normally.
Description: To check for errors in data structures or external database access.
Expected result: The database updating and retrieval must be done correctly.
Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.
All the above system testing strategies are carried out, as the development,
documentation and institutionalization of the proposed goals and related
policies are essential.
CHAPTER
6
6.0 SOFTWARE SPECIFICATION:
6.1 FEATURES OF .NET:
Microsoft
.NET is a set of Microsoft software technologies for rapidly building and
integrating XML Web services, Microsoft Windows-based applications, and Web
solutions. The .NET Framework is a language-neutral platform for writing
programs that can easily and securely interoperate. There’s no language barrier
with .NET: there are numerous languages available to the developer including
Managed C++, C#, Visual Basic and JScript.
The .NET
framework provides the foundation for components to interact seamlessly,
whether locally or remotely on different platforms. It standardizes common data
types and communications protocols so that components created in different
languages can easily interoperate.
“.NET” is
also the collective name given to various software components built upon the
.NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so
on).
6.2 THE .NET FRAMEWORK
The .NET Framework has
two main parts:
1. The Common Language
Runtime (CLR).
2. A hierarchical set of
class libraries.
The CLR is
described as the “execution engine” of .NET. It provides the environment within
which programs run. The most important features are:
Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
Memory management, notably including garbage collection.
Checking and enforcing security restrictions on the running code.
Loading and executing programs, with version control and other such features.
The following features
of the .NET framework are also worth description:
Managed
Code
The code
that targets .NET, and which contains certain extra information – “metadata” –
to describe itself. Whilst both managed and unmanaged code can run in the
runtime, only managed code contains the information that allows the CLR to
guarantee, for instance, safe execution and interoperability.
Managed Data
With
Managed Code comes Managed Data. The CLR provides memory allocation and
deallocation facilities, and garbage collection. Some .NET languages use Managed
Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others,
namely C++, do not. Targeting CLR can, depending on the language you’re using,
impose certain constraints on the features available. As with managed and
unmanaged code, one can have both managed and unmanaged data in .NET
applications – data that doesn’t get garbage collected but instead is looked
after by unmanaged code.
Common Type System
The CLR
uses something called the Common Type System (CTS) to strictly enforce
type-safety. This ensures that all classes are compatible with each other, by
describing types in a common way. The CTS defines how types work within the runtime,
which enables types in one language to interoperate with types in another
language, including cross-language exception handling. As well as ensuring that
types are only used in appropriate ways, the runtime also ensures that code
doesn’t attempt to access memory that hasn’t been allocated to it.
Common Language Specification
The CLR
provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming
language, a set of language features and rules for using them called the Common
Language Specification (CLS) has been defined. Components that follow these
rules and expose only CLS features are considered CLS-compliant.
6.3 THE CLASS LIBRARY
.NET
provides a single-rooted hierarchy of classes, containing over 7000 types. The
root of the namespace is called System; this contains basic types like Byte,
Double, Boolean, and String, as well as Object. All objects derive from
System.Object. As well as objects, there are value types. Value types can be allocated
on the stack, which can provide useful flexibility. There are also efficient
means of converting value types to object types if and when necessary.
The set of
classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.
The class
library is subdivided into a number of sets (or namespaces), each providing
distinct areas of functionality, with dependencies between the namespaces kept
to a minimum.
6.4 LANGUAGES SUPPORTED
BY .NET
The
multi-language capability of the .NET Framework and Visual Studio .NET enables
developers to use their existing programming skills to build all types of
applications and XML Web services. The .NET framework supports new versions of
Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but
there are also a number of new additions to the family.
Visual
Basic .NET has been updated to include many new and improved language features
that make it a powerful object-oriented programming language. These features
include inheritance, interfaces, and overloading, among others. Visual Basic
also now supports structured exception handling, custom attributes and also
supports multi-threading.
Visual
Basic .NET is also CLS compliant, which means that any CLS-compliant language
can use the classes, objects, and components you create in Visual Basic .NET.
Managed
Extensions for C++ and attributed programming are just some of the enhancements
made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.
C# is
Microsoft’s new language. It’s a C-style language that is essentially “C++ for
Rapid Application Development”. Unlike other languages, its specification is
just the grammar of the language. It has no standard library of its own, and
instead has been designed with the intention of using the .NET libraries as its
own.
Microsoft
Visual J# .NET provides the easiest transition for Java-language developers
into the world of XML Web Services and dramatically improves the
interoperability of Java-language programs with existing software written in a
variety of other programming languages.
Active
State has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be
integrated into the Visual Studio .NET environment. Visual Perl includes
support for Active State’s Perl Dev Kit.
Other languages for
which .NET compilers are available include
FORTRAN
COBOL
Eiffel
Fig 1. The .NET Framework (layers: ASP.NET and XML Web Services, Windows Forms, Base Class Libraries, Common Language Runtime, Operating System)
C#.NET is
also compliant with CLS (Common Language Specification) and supports structured
exception handling. CLS is set of rules and constructs that are supported by
the CLR (Common Language Runtime). CLR is the runtime environment provided by
the .NET Framework; it manages the execution of the code and also makes the
development process easier by providing services.
C#.NET is
a CLS-compliant language. Any objects, classes, or components that are created in
C#.NET can be used in any other CLS-compliant language. In addition, we can use
objects, classes, and components created in other CLS-compliant languages in
C#.NET. The use of CLS ensures complete interoperability among applications,
regardless of the languages used to create the application.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas
destructors are used to destroy them. In other words, destructors are used to
release the resources allocated to the object. In C#.NET, a finalizer (the
equivalent of the Sub Finalize procedure in Visual Basic .NET) is available. The
finalizer is used to complete the tasks that must be performed when an object is
destroyed, and it is called automatically by the runtime when the object is
destroyed; it cannot be invoked directly from user code.
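A minimal C# sketch of a constructor and a finalizer is shown below; the native handle is a hypothetical stand-in for an unmanaged resource.

    using System;

    // Minimal sketch of a constructor and a finalizer in C#; the resource handle
    // shown here is hypothetical and stands in for any unmanaged resource.
    public class ReportWriter
    {
        private IntPtr nativeHandle;   // assumed unmanaged resource

        // Constructor: initializes the object when it is created.
        public ReportWriter()
        {
            nativeHandle = new IntPtr(1);
            Console.WriteLine("ReportWriter created.");
        }

        // Finalizer (destructor syntax): called automatically by the runtime
        // when the object is destroyed; it cannot be called directly from code.
        ~ReportWriter()
        {
            nativeHandle = IntPtr.Zero;   // release the resource held by the object
        }
    }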
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The
.NET Framework monitors allocated resources, such as objects and variables. In
addition, the .NET Framework automatically releases memory for reuse by
destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that
are not currently in use by applications. When the garbage collector comes
across an object that is marked for garbage collection, it releases the memory
occupied by the object.
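The following sketch makes this behavior visible by forcing a collection; in normal applications the garbage collector is left to run on its own schedule.

    using System;

    // Minimal sketch: objects that are no longer referenced become eligible for
    // garbage collection; GC.Collect is forced here only to make the effect visible.
    class GarbageCollectionDemo
    {
        class TempBuffer
        {
            private byte[] data = new byte[1024];   // stands in for a managed resource

            ~TempBuffer()
            {
                Console.WriteLine("TempBuffer finalized by the garbage collector.");
            }
        }

        static void Main()
        {
            TempBuffer buffer = new TempBuffer();
            buffer = null;                     // no references remain, so the object is collectible

            GC.Collect();                      // request a collection (normally left to the runtime)
            GC.WaitForPendingFinalizers();     // wait until finalizers of collected objects have run

            Console.WriteLine("Unreferenced objects have been collected.");
        }
    }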
OVERLOADING
Overloading is another feature in C#. Overloading enables us
to define multiple procedures with the same name, where each procedure has a
different set of arguments. Besides using overloading for procedures, we can
use it for constructors and properties in a class.
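A short C# sketch of overloading is given below; the method name and the dose figures are purely illustrative.

    // Minimal sketch of method overloading: the same name, different argument lists.
    public static class Dosage
    {
        // Overload 1: dose computed from weight only (illustrative formula).
        public static double Compute(double weightKg)
        {
            return weightKg * 0.5;
        }

        // Overload 2: same name, different set of arguments.
        public static double Compute(double weightKg, int ageYears)
        {
            return ageYears < 12 ? weightKg * 0.25 : weightKg * 0.5;
        }
    }

    // Usage: the compiler picks the overload that matches the arguments.
    // double d1 = Dosage.Compute(70);
    // double d2 = Dosage.Compute(30, 8);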
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading
can handle multiple tasks simultaneously. We can use multithreading to decrease
the time taken by an application to respond to user interaction.
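The sketch below shows a background worker thread in C#; the sleep call stands in for any time-consuming task.

    using System;
    using System.Threading;

    // Minimal sketch: a background worker thread keeps a long-running task off the
    // main thread so the application stays responsive to user interaction.
    class MultithreadingDemo
    {
        static void Main()
        {
            Thread worker = new Thread(ProcessRecords);
            worker.IsBackground = true;   // does not keep the process alive on its own
            worker.Start();

            Console.WriteLine("Main thread remains free to handle user interaction.");
            worker.Join();                // wait for the worker before exiting
        }

        static void ProcessRecords()
        {
            Thread.Sleep(500);            // stands in for a time-consuming task
            Console.WriteLine("Background processing finished.");
        }
    }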
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and
remove errors at runtime. In C#.NET, we use try…catch…finally statements to
create exception handlers. Using try…catch…finally statements, we can create
robust and effective exception handlers to improve the reliability of our
application.
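A minimal try…catch…finally sketch is shown below; the file name is hypothetical.

    using System;
    using System.IO;

    // Minimal sketch of structured exception handling with try…catch…finally.
    class ExceptionHandlingDemo
    {
        static void Main()
        {
            StreamReader reader = null;
            try
            {
                reader = new StreamReader("records.txt");   // hypothetical file name
                Console.WriteLine(reader.ReadLine());
            }
            catch (FileNotFoundException ex)
            {
                // Handle the specific error detected at runtime.
                Console.WriteLine("File not found: " + ex.FileName);
            }
            finally
            {
                // Always runs, whether or not an exception was thrown.
                if (reader != null) reader.Close();
            }
        }
    }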
6.5 THE
.NET FRAMEWORK
The .NET Framework is a new computing platform that
simplifies application development in the highly distributed environment of the
Internet.
OBJECTIVES OF .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment whether object
code is stored and executed locally, executed locally but Internet-distributed,
or executed remotely.
2. To provide a code-execution environment that minimizes software deployment and
versioning conflicts and guarantees safe execution of code.
3. To eliminate performance problems.
There are
different types of application, such as Windows-based applications and
Web-based applications.
6.6 FEATURES OF SQL-SERVER
The OLAP
Services feature available in SQL Server version 7.0 is now called SQL Server
2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component.
The Repository component available in SQL Server version 7.0 is now called
Microsoft SQL Server 2000 Meta Data Services. References to the component now
use the term Meta Data Services. The term repository is used only in reference
to the repository engine within Meta Data Services
The SQL-SERVER database consists of the following types of objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE:
A database
is a collection of data about a specific topic.
VIEWS OF
TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view.
We can specify what kind of data the table will hold.
Datasheet
View
To add, edit or analyse the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that has to be asked of the data. Access gathers the data
that answers the question from one or more tables. The data that makes up the
answer is either a dynaset (if you edit it) or a snapshot (which cannot be
edited). Each time we run a query, we get the latest information in the dynaset.
Access either displays the dynaset or snapshot for us to view, or performs an
action on it, such as deleting or updating.
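For completeness, the sketch below shows how such a query can be issued from C# against MS-SQL Server using ADO.NET; the connection string, table and column names are assumptions for illustration.

    using System;
    using System.Data.SqlClient;

    // Minimal sketch of running a query against MS-SQL Server from C# using ADO.NET.
    // The connection string, table name and column names are assumptions for illustration.
    class QueryDemo
    {
        static void Main()
        {
            string connectionString = @"Data Source=.\SQLEXPRESS;Initial Catalog=ProjectDb;Integrated Security=True";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(
                "SELECT UserName FROM Users WHERE Age > @age", connection))
            {
                command.Parameters.AddWithValue("@age", 18);   // parameterized to avoid SQL injection
                connection.Open();

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }
    }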
CHAPTER
7
APPENDIX
7.1
SAMPLE SOURCE CODE
7.2
SAMPLE OUTPUT
CHAPTER 8
8.1
CONCLUSION
For
safety-critical MCPSs, being able to detect attackers while limiting the false
alarm probability to protect the welfare of patients is of utmost importance.
In this paper we proposed a behavior-rule specification-based IDS technique for
intrusion detection of medical devices embedded in a MCPS. We exemplified the
utility with VSMs and demonstrated that the detection probability of the
medical device approaches one (that is, we can always catch the attacker
without false negatives) while bounding the false alarm probability to below 5%
for reckless attackers and below 25% for random and opportunistic attackers
over a wide range of environment noise levels. Through a comparative analysis,
we demonstrated that our behavior-rule specification-based IDS technique
outperforms existing techniques based on anomaly intrusion detection. In future
work, we plan to analyze the overheads of our detection techniques such as the
various distance-based methods in comparison with contemporary approaches. We also
plan to deepen adversary modeling research based on stochastic Petri net
techniques such that the system can dynamically adjust CT to maximize intrusion
detection performance in response to changing attacker behaviors at runtime.
New communication technologies
integrated into modern vehicles offer an opportunity for better assistance to
people injured in traffic accidents. Recent studies show how communication
capabilities should be supported by artificial intelligence systems capable of
automating many of the decisions to be taken by emergency services, thereby
adapting the rescue resources to the severity of the accident and reducing
assistance time. To improve the overall rescue process, a fast and accurate
estimation of the severity of the accident represents a key point in helping
emergency services better estimate the required resources.
This paper proposes a novel intelligent
system which is able to automatically detect road accidents, notify them
through vehicular networks, and estimate their severity based on the concept of
data mining and knowledge inference. Our system considers the most relevant variables
that can characterize the severity of the accidents (variables such as the
vehicle speed, the type of vehicles involved, the impact speed, and the status
of the airbag).
Results show that a complete Knowledge
Discovery in Databases (KDD) process, with an adequate selection of relevant
features, allows generating estimation models that can predict the severity of
new accidents. We develop a prototype of our system based on off-the-shelf
devices and validate it at the Applus+ IDIADA Automotive Research Corporation
facilities, showing that our system can notably reduce the time needed to alert
and deploy emergency services after an accident takes place.
1.2
INTRODUCTION
1.3
LITERATURE SURVEY
CHAPTER 2
2.0
SYSTEM ANALYSIS
2.1
EXISTING SYSTEM:
Most ITS applications, such as road
safety, fleet management, and navigation, will rely on data exchanged between
the vehicle and the roadside infrastructure (V2I), or even directly between
vehicles (V2V). The integration of sensing capabilities on board vehicles, along
with peer-to-peer mobile communication among vehicles, forecasts significant
improvements in this field. In the existing V2V architecture, the transportation
network is broken into zones, in each of which a single vehicle is known as the
super vehicle. Only super vehicles are able to communicate with the central
infrastructure or with other super vehicles, and all other vehicles can only
communicate with the super vehicle responsible for the zone they are currently
traversing. The super vehicle detection (SVD) algorithm describes how a vehicle
can find or become the super vehicle of a zone, and how super vehicles can
aggregate the speed and location data from all of the vehicles within their zone
to still ensure an accurate representation of the network.
2.1.1
DISADVANTAGES:
Despite long-term zero-accident objectives, a fast and efficient rescue operation
during the hour following a traffic accident significantly increases the
probability of survival of the injured and reduces the injury severity.
Communication systems between vehicles and the infrastructure should be supported
by intelligent systems capable of estimating the severity of accidents and
automatically deploying the actions required, thereby reducing the time needed to
assist injured passengers.
Many of the manual decisions taken nowadays by emergency services are based on
incomplete or inaccurate data, which may be replaced by automatic systems that
adapt to the specific characteristics of each accident.
2.2
PROPOSED SYSTEM:
The proposed system consists of several
components with different functions. Firstly, vehicles should incorporate an
On-Board unit (OBU) responsible for: (i) detecting when there has been a
potentially dangerous impact for the occupants, (ii) collecting available
information coming from sensors in the vehicle, and (iii) communicating the
situation to a Control Unit (CU) that will accordingly address the handling of
the warning notification. Next, the notification of the detected accidents is
made through a combination of both V2V and V2I communications. Finally, the
destination of all the collected information is the Control Unit; it will
handle the warning notification, estimating the severity of the accident, and communicating
the incident to the appropriate emergency services.
Our proposed architecture provides: (i)
direct communication between the vehicles involved in the accident, (ii) automatic
sending of a data file containing important information about the accident to
the Control Unit, and (iii) a preliminary and automatic assessment of the
damage of the vehicle and its occupants, based on the information coming from
the involved vehicles, and a database of accident reports. According to the
reported information and the preliminary accident estimation, the system will alert
the required rescue resources to optimize the accident assistance.
2.2.1
ADVANTAGES:
In-vehicle sensors: They are required to detect accidents and provide information
about their causes. Accessing the data from in-vehicle sensors is possible
nowadays using the On-Board Diagnostics (OBD) standard interface, which serves as
the entry point to the vehicles.
Data
Acquisition Unit (DAU): This device is responsible for periodically collecting
data from the sensors available in the vehicle (airbag triggers, speed, fuel
levels, etc.), converting them to a common format, and providing the collected
data set to the OBU Processing Unit.
OBU
Processing Unit: It is in charge of processing the data coming from sensors,
determining whether an accident occurred, and notifying dangerous situations to
nearby vehicles, or directly to the Control Unit.
The information from the DAU is
gathered, interpreted and used to determine the vehicle’s current status. This
unit must also have access to a positioning device (such as a GPS receiver),
and to different wireless interfaces, thereby enabling communication between
the vehicle and the remote control center.
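As a minimal illustration of the OBU logic described above, the C# sketch below checks the collected sensor data for a potentially dangerous impact and builds the warning notification; the field names and the deceleration threshold are assumptions, not values taken from the prototype.

    using System;

    // Illustrative sketch only: field names and the deceleration threshold are
    // assumptions; the real OBU logic depends on the sensors exposed through OBD.
    public class SensorSnapshot
    {
        public bool AirbagTriggered;
        public double SpeedKmh;
        public double DecelerationG;   // longitudinal deceleration in g
        public double Latitude;
        public double Longitude;
    }

    public static class ObuProcessingUnit
    {
        // Decides whether the collected data indicate a potentially dangerous impact.
        public static bool IsAccident(SensorSnapshot s)
        {
            return s.AirbagTriggered || s.DecelerationG > 4.0;
        }

        // Builds the warning notification that would be forwarded, via V2V/V2I
        // communications, to the Control Unit.
        public static string BuildNotification(SensorSnapshot s)
        {
            return string.Format("ACCIDENT;lat={0};lon={1};speed={2};airbag={3}",
                s.Latitude, s.Longitude, s.SpeedKmh, s.AirbagTriggered);
        }
    }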
2.3
HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1
HARDWARE REQUIREMENT:
Processor – Pentium IV
Speed – 1.1 GHz
RAM – 256 MB (min)
Hard Disk – 20 GB
Floppy Drive – 1.44 MB
Keyboard – Standard Windows Keyboard
Mouse – Two or Three Button Mouse
Monitor – SVGA
2.3.2
SOFTWARE REQUIREMENTS:
Operating System : Windows XP or Win7
Front End : Microsoft Visual Studio .NET 2008
Script : C#
Back End : MS-SQL Server 2005
Document : MS-Office 2007
CHAPTER
3
3.0 SYSTEM DESIGN:
Data Flow Diagram / Use
Case Diagram / Flow Diagram:
The
DFD is also called a bubble chart. It is a simple graphical formalism that can
be used to represent a system in terms of the input data to the system, the various
processing carried out on these data, and the output data generated by the
system.
The
data flow diagram (DFD) is one of the most important modeling tools. It is used
to model the system components. These components are the system process, the data
used by the process, an external entity that interacts with the system and the
information flows in the system.
DFD
shows how the information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information
flow and the transformations that are applied as data moves from input to
output.
DFD
is also known as bubble chart. A DFD may be used to represent a system at any
level of abstraction. DFD may be partitioned into levels that represent
increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION
OF DATA:
External
sources or destinations, which may be people or organizations or other entities
DATA SOURCE:
Here the data referenced by a process is stored and
retrieved.
PROCESS:
People, procedures or devices that produce data’s in
the physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to
a destination. The data flow is a “packet” of data.
There
are several common modeling rules when creating DFDs:
All processes must
have at least one data flow in and one data flow out.
All processes
should modify the incoming data, producing new forms of outgoing data.
Each data store
must be involved with at least one data flow.
Each external
entity must be involved with at least one data flow.
A data flow must
be attached to at least one process.
3.1 ARCHITECTURE DIAGRAM
3.2
DATAFLOW DIAGRAM
UML DIAGRAMS:
3.3
USE CASE DIAGRAM:
3.4
CLASS DIAGRAM:
3.5
SEQUENCE DIAGRAM:
3.6
ACTIVITY DIAGRAM:
CHAPTER
4
4.0
IMPLEMENTATION:
The KDD approach can be defined as the nontrivial process of identifying valid,
novel, potentially useful, and understandable patterns in data. The KDD process
begins with the understanding of the application-specific domain and the
necessary prior knowledge. After the acquisition of the initial data, a series
of phases are performed (a minimal code sketch of this pipeline follows the list):
1) Selection: This phase determines the
information sources that may be useful, and then it transforms the data into a
common format.
2) Preprocessing: In this stage, the
selected data must be cleaned (noise reduction or modeling) and preprocessed (missing
data handling).
3) Transformation: This phase is in
charge of performing a reduction and projection of the data to find relevant
features that represent the data depending on the purpose of the task.
4) Data mining: This phase basically
selects mining algorithms and selection methods which will be used to find
patterns in data.
5) Interpretation/Evaluation: Finally,
the extracted patterns must be interpreted. This step may also include
displaying the patterns and models, or displaying the data taking into account
such models.
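The sketch referenced above outlines the five phases as a simple C# pipeline; the record type and the phase bodies are placeholders, since the actual analysis is carried out with Weka.

    using System.Collections.Generic;

    // Minimal sketch of the five KDD phases chained as a pipeline; the record type
    // and phase bodies are placeholders, since the real analysis is done with Weka.
    public class AccidentRecord
    {
        public double VehicleSpeed;
        public bool AirbagDeployed;
        public string Severity;        // label to be predicted
    }

    public static class KddPipeline
    {
        // 1) Selection: choose useful sources and convert them to a common format.
        public static List<AccidentRecord> Select(IEnumerable<string> rawSources)
        {
            return new List<AccidentRecord>();
        }

        // 2) Preprocessing: clean noise and handle missing data.
        public static List<AccidentRecord> Preprocess(List<AccidentRecord> data)
        {
            return data.FindAll(r => r.VehicleSpeed >= 0);
        }

        // 3) Transformation: reduce and project the data to the relevant features.
        public static List<AccidentRecord> Transform(List<AccidentRecord> data)
        {
            return data;
        }

        // 4) Data mining: apply the selected mining algorithm to find patterns.
        public static object Mine(List<AccidentRecord> data)
        {
            return new object();
        }

        // 5) Interpretation/Evaluation: interpret the extracted patterns and models.
        public static void Evaluate(object model)
        {
        }
    }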
4.1 ALGORITHM
We propose to develop a complete KDD
process, starting by selecting a useful data source containing instances of
previous accidents. The data collected will be structured and preprocessed to
ease the work to be done in the transformation and data mining phases. The
final step will consist of interpreting the results, and assessing their
utility for the specific task of estimating the severity of road accidents. The
phases from the KDD process will be performed using the open-source Weka
collection, which is a set of machine learning algorithms.
Weka is open source software issued
under the GNU General Public License which contains tools for data
pre-processing, classification, regression, clustering, association rules, and
visualization. We will deal with road accidents in two dimensions: (i) damage
on the vehicle (indicating the possibility of traffic problems or the need of
cranes in the area of the accident), and (ii) passenger injuries. These two
dimensions seem to be related, since heavily damaged vehicles are usually
associated with low survival possibilities of the occupants.
We will use the estimations obtained with our
system about the damage on the vehicle to help in the prediction of the occupants’
injuries. Finally, our system will benefit from additional knowledge to improve
its accuracy, grouping accidents according to their degree of similarity. We
can use the criteria used in numerous studies about accidents in which crashes
are divided and analyzed separately depending on the main direction of the impact
registered due to the collision. The following sections contain the results of
the different phases of our KDD proposal.
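As an illustration of the kind of estimation model produced by this process, the C# sketch below applies simple hand-written rules grouped by impact direction; the thresholds and severity classes are hypothetical and are not taken from the study.

    using System;

    // Illustrative sketch only: the rules below are hypothetical stand-ins for the
    // models produced by the KDD process; thresholds are not taken from the study.
    public enum ImpactDirection { Front, Side, RearEnd }

    public class AccidentData
    {
        public ImpactDirection Direction;
        public double VehicleSpeedKmh;
        public double StrikingVehicleSpeedKmh;
        public bool AirbagDeployed;
        public bool HeavyVehicleInvolved;
    }

    public static class SeverityEstimator
    {
        // Returns a coarse severity class, grouping accidents by impact direction
        // as suggested by the text.
        public static string Estimate(AccidentData a)
        {
            if (!a.AirbagDeployed)
                return "Low";   // airbag not needed rarely implies serious injuries

            switch (a.Direction)
            {
                case ImpactDirection.Front:
                    return a.VehicleSpeedKmh > 80 ? "High" : "Moderate";
                default:
                    // In side and rear-end collisions the striking vehicle matters more.
                    return (a.HeavyVehicleInvolved || a.StrikingVehicleSpeedKmh > 60)
                        ? "High" : "Moderate";
            }
        }
    }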
4.2
MODULES:
USER MODULES:
VEHICULAR NETWORKS (ITS):
OBU AND CU STRUCTURE:
DATA ACQUISITION:
KDD MACHINE LEARNING:
4.3
MODULE DESCRIPTION:
USER MODULES:
VEHICULAR NETWORKS (ITS):
OBU AND CU STRUCTURE:
DATA ACQUISITION:
KDD MACHINE LEARNING:
CHAPTER 8
8.1
CONCLUSION:
The new communication technologies
integrated into the automotive sector offer an opportunity for better
assistance to people injured in traffic accidents, reducing the response time
of emergency services, and increasing the information they have about the
incident just before starting the rescue process. To this end, we designed and
implemented a prototype for automatic accident notification and assistance based
on V2V and V2I communications.
However, the effectiveness of this
technology can be improved with the support of intelligent systems which can
automate the decision making process associated with an accident. A preliminary
assessment of the severity of an accident is needed to adapt resources
accordingly. This estimation can be done by using historical data from previous
accidents using a Knowledge Discovery in Databases process.
We showed that the vehicle speed is a
crucial factor in front crashes, but the type of vehicle involved and the speed
of the striking vehicle are more important than speed itself in side and
rear-end collisions. The status of the airbag is also very useful in the
estimation, since situations where it was not necessary to deploy the airbag
rarely produce serious injuries to the passengers.
We developed a prototype that shows how
inter-vehicle communications can make accessible the information about the
different vehicles involved in an accident. Moreover, the positive results
achieved in the real tests indicate that the accident detection and severity
estimation algorithms are robust enough to allow a mass deployment of the
proposed system.
Cloud data center management is a key
problem due to the numerous and heterogeneous strategies that can be applied,
ranging from the VM placement to the federation with other clouds. Performance
evaluation of Cloud Computing infrastructures is required to predict and
quantify the cost-benefit of a strategy portfolio and the corresponding Quality
of Service (QoS) experienced by users. Such analyses are not feasible by
simulation or on-the-field experimentation, due to the great number of
parameters that have to be investigated.
In this paper, we present an analytical
model, based on Stochastic Reward Nets (SRNs), that is both scalable to model
systems composed of thousands of resources and flexible to represent different
policies and cloud-specific strategies. Several performance metrics are defined
and evaluated to analyze the behavior of a Cloud data center: utilization,
availability, waiting time, and responsiveness. A resiliency analysis is also
provided to take into account load bursts. Finally, a general approach is
presented that, starting from the concept of system capacity, can help system
managers to opportunely set the data center parameters under different working
conditions.
EXISTING
SYSTEM:
In order to integrate business
requirements and application level needs, in terms of Quality of Service (QoS),
cloud service provisioning is regulated by Service Level Agreements (SLAs):
contracts between clients and providers that express the price for a service,
the QoS levels required during the service provisioning, and the penalties
associated with the SLA violations. In such a context, performance evaluation
plays a key role allowing system managers to evaluate the effects of different
resource management strategies on the data center functioning and to predict
the corresponding costs/benefits.
Cloud systems differ from traditional
distributed systems. First of all, they are characterized by a very large
number of resources that can span different administrative domains. Moreover,
the high level of resource abstraction allows implementing particular resource
management techniques such as VM multiplexing or VM live migrations that, even
if transparent to final users, have to be considered in the design of
performance models in order to accurately understand the system behavior.
Finally, different clouds, belonging to
the same or to different organizations, can dynamically join each other to
achieve a common goal, usually represented by the optimization of resources
utilization. This mechanism, referred to as cloud federation, allows providing
and releasing resources on demand thus providing elastic capabilities to the
whole infrastructure.
DISADVANTAGES:
On-the-field experiments are mainly focused on the offered QoS; they are based
on a black-box approach that makes it difficult to correlate the obtained data
with the internal resource management strategies implemented by the system
provider.
Simulation does not allow conducting
comprehensive analyses of the system performance due to the great number of
parameters that have to be investigated.
PROPOSED
SYSTEM:
In this paper, we present a stochastic
model, based on Stochastic Reward Nets (SRNs), that exhibits the above
mentioned features allowing capturing the key concepts of an IaaS cloud system.
The proposed model is scalable enough to represent systems composed of
thousands of resources, and it makes it possible to represent both physical and
virtual resources, exploiting cloud-specific concepts such as the infrastructure
elasticity.
The innovative aspect of the present work is that a generic and comprehensive
view of a cloud system is presented. Low-level details, such as VM multiplexing,
are easily integrated with cloud-based actions such as federation, allowing us to
investigate different mixed strategies. An exhaustive set of performance metrics
is defined regarding both the system provider (e.g., utilization) and the final
users (e.g., responsiveness).
ADVANTAGES:
To provide a fair comparison among
different resource management strategies, also taking into account the system
elasticity, a performance evaluation approach is described. Such an approach,
based on the concept of system capacity, presents a holistic view of a cloud
system and allows system managers to identify the best solution with respect
to an established goal and to opportunely set the system parameters.
Our analytical techniques represent a
good candidate, thanks to the limited solution cost of their associated models.
However, to accurately represent a cloud system, an analytical model has to be:
Scalable: to deal with very large systems composed of hundreds or thousands of resources.
Flexible: allowing us to easily implement different strategies and policies and to represent different working conditions.
HARDWARE
& SOFTWARE REQUIREMENTS:
HARDWARE
REQUIREMENT:
Processor – Pentium IV
Speed – 1.1 GHz
RAM – 256 MB (min)
Hard Disk – 20 GB
Floppy Drive – 1.44 MB
Keyboard – Standard Windows Keyboard
Mouse – Two or Three Button Mouse
Monitor – SVGA
SOFTWARE
REQUIREMENTS:
Operating System : Windows XP or Win 7
Front End : Microsoft Visual Studio .NET 2008
Back End : MS-SQL Server
Script Coding : C#
Server : ASP.NET Web Server
Document : MS-Office 2007
SYSTEM DESIGN:
ARCHITECTURE DIAGRAM / UML DIAGRAMS / DATA FLOW DIAGRAM:
The
DFD is also called a bubble chart. It is a simple graphical formalism that can
be used to represent a system in terms of the input data to the system, the various
processing carried out on these data, and the output data generated by the
system.
The
data flow diagram (DFD) is one of the most important modeling tools. It is used
to model the system components. These components are the system process, the
data used by the process, an external entity that interacts with the system and
the information flows in the system.
DFD
shows how the information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information
flow and the transformations that are applied as data moves from input to
output.
DFD
is also known as bubble chart. A DFD may be used to represent a system at any
level of abstraction. DFD may be partitioned into levels that represent
increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION
OF DATA:
External
sources or destinations, which may be people or organizations or other entities
DATA SOURCE:
Here the data referenced by a process is stored and
retrieved.
PROCESS:
People, procedures or devices that produce data. The
physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to
a destination. The data flow is a “packet” of data.
MODELING RULES:
There
are several common modeling rules when creating DFDs:
All processes must
have at least one data flow in and one data flow out.
All processes
should modify the incoming data, producing new forms of outgoing data.
Each data store
must be involved with at least one data flow.
Each external
entity must be involved with at least one data flow.
A data flow must
be attached to at least one process.
SYSTEM
ARCHITECTURE:
IMPLEMENTATION:
SRNs allow us to define reward functions
that can be associated to a particular state of the model to evaluate the performance
level reached by the system during the sojourn in that state.
In the following, we are interested in performance metrics able to characterize
the system behavior from both the provider and the user points of view. Such
metrics will help system designers to size and manage the cloud data center, and
they will also be determinant in the SLA definitions.
Responsiveness: it is the steady-state probability R that the system is able to
accept a request within a given time deadline. The computation of such a
parameter requires the knowledge of the waiting time cumulative distribution
function (CDF). To this end, it is possible to apply the tagged customer
technique by modifying the SRN model to isolate the behavior of a single user
request u and to observe its movements through the system. In the tagged
customer model shown in Fig. 3, the system queue is modeled through two places.
Place Pcustomer contains a single token that represents the arrival of request
u. The P tokens initially present in place Pqueue represent the number of
requests still waiting in the queue when u arrives, while the M1 and M2 tokens
initially present in places Pres and Prun represent the corresponding system
status.
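The sketch below is not the SRN tagged-customer solution described above; it is only a simple Monte Carlo illustration of the responsiveness metric, estimating the fraction of sampled waiting times that fall within a given deadline.

    using System;
    using System.Linq;

    // Simple Monte Carlo illustration of the responsiveness metric: the fraction of
    // requests whose waiting time falls within a given deadline. The sample data are
    // hypothetical; the analytical model obtains this from the waiting time CDF.
    public static class ResponsivenessDemo
    {
        public static double Estimate(double[] waitingTimes, double deadline)
        {
            // R is approximately P(waiting time <= deadline), estimated from samples.
            return waitingTimes.Count(w => w <= deadline) / (double)waitingTimes.Length;
        }

        public static void Main()
        {
            Random rng = new Random(42);
            // Hypothetical exponential waiting times with mean 2.0 time units.
            double[] samples = Enumerable.Range(0, 10000)
                .Select(i => -2.0 * Math.Log(1.0 - rng.NextDouble()))
                .ToArray();

            Console.WriteLine("Estimated responsiveness: " + Estimate(samples, 3.0));
        }
    }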
MODULES:
USER
MODULE:
ADMIN:
USER:
IAAS
CLOUD SYSTEM:
ANALYTICAL
MODEL:
CLOUD
FEDERATION:
MODELING
VM MULTIPLEXING:
RESILIENCY
ANALYSIS:
MODULES
DESCRIPTION:
USER
MODULE:
ADMIN:
This module is used to help the server view details and upload files securely.
The admin uploads the data to the database, views the subscriber details and
user details, finds the redistribution details, and can also see who sends and
who receives the data.
USER:
In this module, users have authentication and security to access the details
presented in the system. Before accessing or searching the details, a user
should have an account; otherwise they should register first. A user can
register details such as name, password, gender and age. We develop this module
so that the cloud storage can be made secure.
IAAS
CLOUD SYSTEM:
Cloud computing is a promising technology able to strongly modify the way
computing and storage resources will be accessed in the near future. In the
provision of on-demand access to virtual resources available on the Internet,
cloud systems offer services at three different levels: infrastructure as a
service (IaaS), platform as a service (PaaS), and software as a service (SaaS).
In particular, IaaS clouds provide
users with computational resources in the form of virtual machine (VM)
instances deployed in the provider data center, while PaaS and SaaS clouds
offer services in terms of specific solution stacks and application software
suites, respectively.
In order to integrate business requirements and application-level needs, in
terms of quality of service (QoS), cloud service provisioning is regulated by
service-level agreements (SLAs): contracts between clients and
providers that express the price for a service, the QoS levels required during
the service provisioning, and the penalties associated with the SLA violations.
In such a context, performance evaluation plays a key role allowing system
managers to evaluate the effects of different resource management strategies on
the data center.
ANALYTICAL
MODEL:
In an IaaS cloud system composed of N physical resources, job requests (in terms
of VM instantiation requests) are enqueued in the system queue. Such a queue has
a finite size Q; once its limit is reached, further
requests are rejected. The system queue is managed according to a FIFO
scheduling policy. When a resource is available, a job is accepted and the
corresponding VM is instantiated.
We assume that the instantiation time is negligible and that the service time
(i.e., the time needed to execute a job) is exponentially distributed with mean
1/μ. According to the VM multiplexing technique, the cloud system can provide a
number M of logical resources greater than N. In this case, multiple VMs can be
allocated in the same physical machine (PM), for example, a core in a multicore
architecture.
Multiple VMs sharing the same PM can incur a reduction of performance, mainly
due to I/O interference between VMs. We define the degradation factor d (≥ 0) as
the percentage increase in the expected service time experienced by a VM when
multiplexed with another VM. The performance degradation of multiplexed VMs
depends on the multiplexing technique.
CLOUD
FEDERATION:
Cloud
federation allows the system to use, in particular situations, the resources
offered by other public cloud systems through a sharing and paying model. In
this way, elastic capabilities can be exploited to respond to particular load
conditions. Job requests can be redirected to other clouds by transferring the
corresponding VM disk images through the network. With respect to the
federation technique, we make the following assumptions:
Finally, with respect to the arrival process, we investigate three different scenarios. In the first one (constant arrival process), we assume the arrival process to be a homogeneous Poisson process with rate λ. However, large-scale distributed systems with thousands of users, such as cloud systems, can exhibit self-similarity and long-range dependence in the arrival process. For this reason, to take into account the dependence of the job arrival rate on both the day of the week and the hour of the day, in the second scenario (periodic arrival process) we model the job arrival process as a Markov Modulated Poisson Process (MMPP). The third scenario (bursty arrival process) is described in the resiliency analysis below.
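As an illustration of the periodic scenario, the following C# sketch generates arrivals from a simple two-state MMPP. The rates and sojourn times are assumed values chosen for illustration and are not taken from the experiments of this work.

// Two-state MMPP arrival generator (illustrative sketch, assumed parameters).
using System;

class MmppSketch
{
    static readonly Random Rng = new Random(42);

    static double Exp(double rate) => -Math.Log(1.0 - Rng.NextDouble()) / rate;

    static void Main()
    {
        double lambdaHigh = 120.0, lambdaLow = 30.0;           // jobs per hour (assumed)
        double switchHigh = 1.0 / 8.0, switchLow = 1.0 / 16.0; // state-change rates (assumed)

        bool highState = false;
        double t = 0.0, nextSwitch = Exp(switchLow);
        int arrivals = 0;

        while (t < 24.0)                       // simulate one day
        {
            double lambda = highState ? lambdaHigh : lambdaLow;
            double nextArrival = t + Exp(lambda);

            if (nextArrival < nextSwitch)
            {
                t = nextArrival;
                arrivals++;                    // a job request arrives
            }
            else
            {
                t = nextSwitch;                // the modulating chain changes state
                highState = !highState;
                nextSwitch = t + Exp(highState ? switchHigh : switchLow);
            }
        }
        Console.WriteLine($"Arrivals generated in 24 h: {arrivals}");
    }
}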
MODELING VM MULTIPLEXING:
The proposed model is scalable enough to represent systems composed of thousands of resources, and it makes it possible to represent both physical and virtual resources, exploiting cloud-specific concepts such as infrastructure elasticity. With respect to the existing literature, the innovative aspect of the present work is that a generic and comprehensive view of a cloud system is provided. Low-level details, such as VM multiplexing, are easily integrated with cloud-based actions such as federation, allowing us to investigate different mixed strategies. An exhaustive set of performance metrics is defined, covering both the system provider (e.g., utilization) and the final users (e.g., responsiveness).
Moreover, different working conditions are investigated, and a resiliency analysis is provided to take into account the effects of load bursts. Finally, to provide a fair comparison among different resource management strategies, also taking into account the system elasticity, a performance evaluation approach is described. Such an approach, based on the concept of system capacity, presents a holistic view of the cloud system and allows system managers to identify the best solution with respect to an established goal and to appropriately set the system parameters.
As introduced above, the VM multiplexing technique allows the cloud system to provide a number M of logical resources greater than N by allocating multiple VMs on the same PM. The resulting performance degradation depends on the multiplexing technique and on the VM placement strategy. We assume that, to reduce the degradation and to obtain a fair distribution of VMs, the system is able to optimally balance the load among the PMs with respect to the resources required by the VMs (e.g., trying to multiplex CPU-bound VMs only with I/O-bound VMs), thus reaching a homogeneous degradation factor. Then, indicating with T = 1/μ the expected service time of a VM in isolation, we can derive the expected time needed to execute two multiplexed VMs as T_2 = T · (1 + d). In general, we can express the expected execution time of i multiplexed VMs as a function of T and d.
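The general expression is not reproduced here; one natural generalization of the two-VM case, stated as an assumption rather than as the formula adopted in this work, is:

T = \frac{1}{\mu}, \qquad T_2 = T\,(1+d), \qquad T_i = T\,(1+d)^{i-1} \quad (i \ge 1)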
RESILIENCY ANALYSIS:
Through a transient solution of the cloud performance model, it is possible to investigate the trend over time of selected performance metrics. Such an analysis makes it possible to assess the resiliency of the cloud infrastructure, in particular when the load is characterized by bursts. In fact, even if the infrastructure is optimally sized with respect to the expected load, during a load burst users can experience a degradation of the perceived QoS, with corresponding violations of SLAs. For this reason, it is necessary to predict the effects of a particular load condition in order to study the ability of the system to react to an overload situation. To study the system resiliency, we highlight the arrival of a single burst by taking into account a bursty arrival process, whose modeling is described below.
The bursty arrival process is modeled by appropriately changing the exponentially distributed firing time of the transition Tarr in the cloud performance model. Three temporal phases can be identified, corresponding to the regular load before the burst, the burst itself, and the regular load after the burst. In each phase, the model is solved in the transient regime by setting the firing rate of Tarr to the corresponding mean value: λ_n for the regular load and λ_b for the load burst. Moreover, at the beginning of each phase (i.e., before the change of the firing rate is applied), the initial state probabilities of the model are set to the state probabilities reached at the end of the previous phase.
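A minimal sketch of this phased analysis is given below, assuming a simple discrete-event approximation of the queue instead of the SRN transient solution; the arrival rate switches from λ_n to λ_b and back, and the system state is carried over across phase boundaries. All numeric values are illustrative assumptions.

// Three-phase burst scenario, approximated by a discrete-event simulation (sketch).
using System;

class BurstSketch
{
    static readonly Random Rng = new Random(7);
    static double Exp(double rate) => -Math.Log(1.0 - Rng.NextDouble()) / rate;

    static void Main()
    {
        int N = 100, Q = 50;                    // servers and queue size (assumed)
        double mu = 1.0;                        // per-VM service rate (assumed)
        double lambdaN = 80.0, lambdaB = 160.0; // regular and burst arrival rates (assumed)

        // Phase schedule: (duration, arrival rate) for regular / burst / regular load.
        var phases = new (double Duration, double Lambda)[]
        {
            (10.0, lambdaN), (2.0, lambdaB), (10.0, lambdaN)
        };

        int jobs = 0;      // current number of jobs in the system (carried over across phases)
        double t = 0.0;

        foreach (var phase in phases)
        {
            double end = t + phase.Duration;
            while (t < end)
            {
                double arrRate = phase.Lambda;
                double depRate = Math.Min(jobs, N) * mu;
                double total = arrRate + depRate;

                t += Exp(total);
                if (t >= end) { t = end; break; }

                // Choose the next event: arrival or departure.
                if (Rng.NextDouble() < arrRate / total)
                {
                    if (jobs < N + Q) jobs++;   // accept the request if the queue is not full
                }
                else if (jobs > 0)
                {
                    jobs--;                     // a VM completes its job
                }
            }
            Console.WriteLine($"t = {t,6:F1}  jobs in system = {jobs}");
        }
    }
}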
CHAPTER 5
5.0 SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are:
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, since only minimal or no changes are required for implementing this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system but must instead accept it as a necessity. The level of acceptance by the users depends on the methods that are employed to educate the user about the system and to make the user familiar with it. The user's level of confidence must be raised so that they can also offer constructive criticism, which is welcome, as they are the final user of the system.
5.2 SYSTEM TESTING:
Testing is a process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes the logical assumption that if all the parts of the system are correct, the overall goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until months later. This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably grow into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best program is worthless if it does not produce the correct outputs.
5.2.1 UNIT TESTING:
A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logic. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are reported through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.
UNIT TESTING:
Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.
Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.
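As a hedged illustration of how such unit tests can be automated in the .NET environment, the following NUnit-style sketch tests a hypothetical helper class; the class and method names are assumptions and are not taken from the project source code.

// NUnit-style unit test sketch (hypothetical class under test).
using NUnit.Framework;

public class DegradationCalculator
{
    // Expected service time of 'count' multiplexed VMs, given the time in
    // isolation and the degradation factor d (see the analytical model above).
    public double ExpectedTime(double isolationTime, double d, int count)
    {
        if (count < 1) throw new System.ArgumentOutOfRangeException(nameof(count));
        return isolationTime * System.Math.Pow(1.0 + d, count - 1);
    }
}

[TestFixture]
public class DegradationCalculatorTests
{
    [Test]
    public void TwoMultiplexedVmsIncreaseServiceTimeByFactorD()
    {
        var calc = new DegradationCalculator();
        // With T = 1 and d = 0.2, two multiplexed VMs should take 1.2 time units.
        Assert.That(calc.ExpectedTime(1.0, 0.2, 2), Is.EqualTo(1.2).Within(1e-9));
    }

    [Test]
    public void InvalidVmCountIsRejected()
    {
        var calc = new DegradationCalculator();
        Assert.Throws<System.ArgumentOutOfRangeException>(
            () => calc.ExpectedTime(1.0, 0.2, 0));
    }
}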
5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing needs to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.
FUNCTIONAL TESTING:
Description: Test for all modules.
Expected result: All peers should communicate in the group.
Description: Test for various peers in a distributed network framework as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.
5.2.3 NON-FUNCTIONAL TESTING:
Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:
Load testing
Performance testing
Usability testing
Reliability testing
Security testing
5.2.4 LOAD TESTING:
An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under test with real usage by having actual users connected to it. They generate the test input data for the system test.
LOAD TESTING:
Description: It is necessary to ascertain that the application behaves correctly under load when a 'Server busy' response is received.
Expected result: Should designate another active node as a server.
5.2.5 PERFORMANCE TESTING:
Performance tests are used to determine the broadly defined performance of the software system, such as the execution time associated with various parts of the code, response time, and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.
PERFORMANCE TESTING:
Description: This is required to ensure that the application performs adequately, having the capability to handle many peers and delivering its results in the expected time while using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values and produce accurate results within the expected time.
5.2.6 RELIABILITY TESTING:
Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms part of software quality control.
RELIABILITY TESTING:
Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.
5.2.7 SECURITY TESTING:
Security testing evaluates system characteristics that relate to the availability, integrity, and confidentiality of the system data and services. Users and clients should be encouraged to make their security needs clearly known at requirements time, so that the security issues can be addressed by the designers and testers.
SECURITY TESTING:
Description: Check that the user identification is authenticated.
Expected result: In case of failure, it should not be connected to the framework.
Description: Check whether the group keys in a tree are shared by all peers.
Expected result: The peers in the same group should know the group key.
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the internal structure of the software to be tested.
WHITE BOX TESTING:
Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.
Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.
Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors in the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or the code. The contents of the box are hidden, and the stimulated software should produce the desired results.
BLACK BOX TESTING:
Description: Check for incorrect or missing functions.
Expected result: All the functions must be valid.
Description: Check for interface errors.
Expected result: The entire interface must function normally.
Description: Check for errors in data structures or external database access.
Expected result: Database update and retrieval must be performed correctly.
Description: Check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.
All the above system testing strategies are carried out, as the development, documentation, and institutionalization of the proposed goals and related policies are essential.
CHAPTER 6
6.0 SOFTWARE SPECIFICATION:
6.1 FEATURES OF .NET:
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There is no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic, and JScript.
The .NET
framework provides the foundation for components to interact seamlessly,
whether locally or remotely on different platforms. It standardizes common data
types and communications protocols so that components created in different
languages can easily interoperate.
“.NET” is
also the collective name given to various software components built upon the
.NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so
on).
6.2 THE .NET FRAMEWORK
The .NET Framework has
two main parts:
1. The Common Language
Runtime (CLR).
2. A hierarchical set of
class libraries.
The CLR is
described as the “execution engine” of .NET. It provides the environment within
which programs run. The most important features are
Conversion from a
low-level assembler-style language, called Intermediate Language (IL), into
code native to the platform being executed on.
Memory management,
notably including garbage collection.
Checking and enforcing
security restrictions on the running code.
Loading and executing
programs, with version control and other such features.
The following features
of the .NET framework are also worth description:
Managed Code
The code that targets .NET, and which contains certain extra information ("metadata") to describe itself. While both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic.NET, and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications: data that doesn't get garbage collected but instead is looked after by unmanaged code.
Common Type System
The CLR uses something called the Common Type System (CTS) to strictly enforce type safety. This ensures that all classes are compatible with each other by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.
Common Language Specification
The CLR
provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming
language, a set of language features and rules for using them called the Common
Language Specification (CLS) has been defined. Components that follow these
rules and expose only CLS features are considered CLS-compliant.
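A minimal sketch of what CLS compliance looks like in practice is given below; the class is hypothetical and only illustrates how the compiler flags non-CLS members.

// Marking an assembly CLS-compliant asks the compiler to warn about public
// members that other .NET languages may not be able to consume.
using System;

[assembly: CLSCompliant(true)]

public class Counter
{
    // CLS-compliant: int (System.Int32) is usable from any .NET language.
    public int Value { get; private set; }

    // Not CLS-compliant: unsigned types are not part of the CLS, so this
    // member is explicitly marked as such to silence the compiler warning.
    [CLSCompliant(false)]
    public uint UnsignedValue => (uint)Value;

    public void Increment() => Value++;
}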
6.3 THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7,000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
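The conversion mentioned above is known as boxing and unboxing; a minimal illustrative sketch follows.

// Boxing and unboxing of a value type (illustrative only).
using System;

class BoxingSketch
{
    static void Main()
    {
        int count = 42;           // value type, typically allocated on the stack
        object boxed = count;     // boxing: the value is copied into a heap object
        int unboxed = (int)boxed; // unboxing: the value is copied back

        Console.WriteLine($"{count} -> {boxed} -> {unboxed}");
    }
}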
The set of
classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.
The class
library is subdivided into a number of sets (or namespaces), each providing
distinct areas of functionality, with dependencies between the namespaces kept
to a minimum.
6.4 LANGUAGES SUPPORTED BY .NET
The
multi-language capability of the .NET Framework and Visual Studio .NET enables
developers to use their existing programming skills to build all types of
applications and XML Web services. The .NET framework supports new versions of
Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but
there are also a number of new additions to the family.
Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic now also supports structured exception handling, custom attributes, and multithreading.
Visual
Basic .NET is also CLS compliant, which means that any CLS-compliant language
can use the classes, objects, and components you create in Visual Basic .NET.
Managed
Extensions for C++ and attributed programming are just some of the enhancements
made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.
C# is
Microsoft’s new language. It’s a C-style language that is essentially “C++ for
Rapid Application Development”. Unlike other languages, its specification is
just the grammar of the language. It has no standard library of its own, and
instead has been designed with the intention of using the .NET libraries as its
own.
Microsoft
Visual J# .NET provides the easiest transition for Java-language developers
into the world of XML Web Services and dramatically improves the
interoperability of Java-language programs with existing software written in a
variety of other programming languages.
ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.
Other languages for
which .NET compilers are available include
FORTRAN
COBOL
Eiffel
Fig. 1: The .NET Framework stack (ASP.NET and XML Web services, Windows Forms, Base Class Libraries, Common Language Runtime, Operating System).
C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.
C#.NET is a CLS-compliant language. Any objects, classes, or components that are created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET this role is played by the finalizer (the counterpart of the Sub Finalize procedure in Visual Basic .NET). The finalizer contains the tasks that must be performed when an object is destroyed and is called automatically when the object is destroyed. In addition, the underlying Finalize method is protected, so it is accessible only from the class it belongs to or from derived classes.
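A minimal sketch of a constructor and finalizer in C# is shown below; the class and the resource it manages are hypothetical, and the IDisposable pattern is included only to show the usual way deterministic cleanup is combined with a finalizer.

// Constructor, finalizer, and Dispose pattern (illustrative sketch).
using System;

public class LogFile : IDisposable
{
    private System.IO.StreamWriter writer;

    public LogFile(string path)      // constructor: initializes the object and acquires the resource
    {
        writer = new System.IO.StreamWriter(path);
    }

    ~LogFile()                       // finalizer: backstop cleanup run by the garbage collector
    {
        Dispose(false);
    }

    public void Dispose()            // deterministic cleanup requested by the caller
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing && writer != null)
        {
            writer.Dispose();        // release the managed resource
            writer = null;
        }
    }
}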
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The
.NET Framework monitors allocated resources, such as objects and variables. In
addition, the .NET Framework automatically releases memory for reuse by
destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that
are not currently in use by applications. When the garbage collector comes
across an object that is marked for garbage collection, it releases the memory
occupied by the object.
OVERLOADING
Overloading is another feature in C#. Overloading enables us
to define multiple procedures with the same name, where each procedure has a
different set of arguments. Besides using overloading for procedures, we can
use it for constructors and properties in a class.
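A minimal sketch of overloading follows; the class and methods are hypothetical.

// Method overloading: same name, different parameter lists (illustrative only).
using System;

class Printer
{
    public void Print(string text)  => Console.WriteLine(text);
    public void Print(int number)   => Console.WriteLine($"Number: {number}");
    public void Print(string text, int times)
    {
        for (int i = 0; i < times; i++) Console.WriteLine(text);
    }
}

class OverloadDemo
{
    static void Main()
    {
        var p = new Printer();
        p.Print("hello");   // resolves to Print(string)
        p.Print(5);         // resolves to Print(int)
        p.Print("hi", 2);   // resolves to Print(string, int)
    }
}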
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
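A minimal sketch using the System.Threading namespace is shown below; the workload is simulated and purely illustrative.

// A background thread performs work while the main thread remains free.
using System;
using System.Threading;

class ThreadDemo
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            Thread.Sleep(500);    // simulated long-running task
            Console.WriteLine("Background work finished.");
        });

        worker.Start();
        Console.WriteLine("Main thread stays responsive.");
        worker.Join();            // wait for the background thread before exiting
    }
}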
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and handle errors at runtime. In C#.NET, we use Try…Catch…Finally statements to create exception handlers. Using Try…Catch…Finally statements, we can create robust and effective exception handlers to improve the reliability of our application.
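A minimal sketch of a Try…Catch…Finally handler follows; the file name is hypothetical.

// Structured exception handling with try...catch...finally (illustrative only).
using System;
using System.IO;

class ExceptionDemo
{
    static void Main()
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader("settings.txt");   // hypothetical file
            Console.WriteLine(reader.ReadLine());
        }
        catch (FileNotFoundException ex)
        {
            // The error is detected and handled at runtime.
            Console.WriteLine($"Could not find the file: {ex.FileName}");
        }
        finally
        {
            // Runs whether or not an exception occurred.
            reader?.Dispose();
        }
    }
}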
6.5 THE .NET FRAMEWORK
The .NET Framework is a computing platform that simplifies application development in the highly distributed environment of the Internet.
OBJECTIVES OF THE .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment, whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment problems and guarantees the safe execution of code.
3. To eliminate performance problems.
There are different types of applications, such as Windows-based applications and Web-based applications.
6.6 FEATURES OF SQL-SERVER
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services.
A SQL Server database consists of the following types of objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
6.7 TABLE:
A database
is a collection of data about a specific topic.
VIEWS OF TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view. We can specify what kind of data will be held.
Datasheet View
To add, edit, or analyze the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that is asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
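As a hedged illustration of how such a query can be issued from the C#.NET side of this project, the following ADO.NET sketch runs a parameterized SELECT; the connection string, table, and column names are assumptions, not the project's actual schema.

// Parameterized query via ADO.NET (illustrative sketch, hypothetical schema).
using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        const string connectionString =
            "Server=localhost;Database=CloudDb;Integrated Security=true;";  // assumed

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
                   "SELECT UserName FROM Users WHERE GroupId = @groupId", connection))
        {
            command.Parameters.AddWithValue("@groupId", 1);
            connection.Open();

            // Each run of the query returns the latest data, like a dynaset.
            using (var reader = command.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
        }
    }
}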
CHAPTER 7
APPENDIX
7.1 SAMPLE SOURCE CODE
7.2 SAMPLE OUTPUT
CHAPTER 8
8.1 CONCLUSION:
In this project, we have presented a stochastic model to evaluate the performance of an IaaS cloud system. Several performance metrics have been defined, such as availability, utilization, and responsiveness, allowing us to investigate the impact of different strategies from both the provider's and the users' points of view. In a market-oriented area such as cloud computing, an accurate evaluation of these parameters is required to quantify the offered QoS and to appropriately manage SLAs.
We have presented an analytical model, based on Stochastic Reward Nets (SRNs), that is both scalable enough to model systems composed of thousands of resources and flexible enough to represent different policies and cloud-specific strategies. Several performance metrics are defined and evaluated to analyze the behavior of a cloud data center: utilization, availability, waiting time, and responsiveness. A resiliency analysis is also provided to take into account load bursts. Finally, a general approach is presented that, starting from the concept of system capacity, can help system managers to appropriately set the data center parameters under different working conditions.
8.2 FUTURE ENHANCEMENT:
Future work will include the analysis of autonomic techniques able to change the system configuration on the fly in order to react to changes in the working conditions. We will also extend the model to represent PaaS and SaaS cloud systems, and to integrate the mechanisms needed to capture VM migration and data center consolidation, aspects that play a crucial role in energy-saving policies.