This paper
presents the design and implementation of an architecture based on the
combination of ontologies, rules, web services, and the autonomic computing paradigm
to manage data in home-based telemonitoring scenarios.
The
architecture includes two layers: 1) a conceptual layer and 2) a data and
communication layer. On the one hand, the conceptual layer based on ontologies is
proposed to unify the management procedure and integrate incoming data from all
the sources involved in the telemonitoring process. On the other hand, the data
and communication layer based on REST web service (WS) technologies is proposed
to provide practical support for the use of the ontology, a real implementation
of the tasks it describes, and thus a means of exchanging data
(i.e., to support communication tasks).
A case study regarding chronic obstructive pulmonary disease data management is presented in
order to evaluate the efficiency of the architecture. This proposed
ontology-based solution defines a flexible and scalable architecture in order
to address the main challenges presented in home-based telemonitoring scenarios and
thus provide a means to integrate, unify, and transfer data supporting both clinical
and technical management tasks.
1.2
INTRODUCTION
Patient
empowerment is considered as a philosophy of health care based on the
perspective that better outcomes are achieved when patients become active
participants in their own health management. This new paradigm is a central
idea in the European Union (EU) health strategy supported by international
health organizations including the World Health Organization among others, and
its effectiveness in yielding quality of care is an obvious and essential area
of research. This new idea encourages the search for new ways of providing
healthcare, e.g., by using information and communications technologies. In this
context, home-based telemonitoring systems can be used as self-care management
tools, while collaborative processes among healthcare personnel and patients are
maintained, thus guaranteeing the patient's safe control. Telemonitoring
systems face the problem of delivering medicine to the current growing
population with chronic conditions while at the same time covering the
dimensions of quality of care and supporting new paradigms such as empowerment.
By periodically collecting patients' own
clinical data (located at their home sites) and transferring them to
physicians located at remote sites, supervision of the patient's health status and
provision of feedback become possible. This type of telemedicine system guarantees
patient control while reducing costs and avoiding hospital overflows. These two
sites (home site and healthcare site) comprise a typical home-based
telemonitoring system. At the home site, data acquired by using medical devices (MDs) together with
the patient's feedback are collected in a concentrator device, the home gateway (HG), used to
evaluate and/or transfer the acquired data outside the patient's home if
necessary. At the healthcare site, a server device is used to manage information
from the home site as well as to manage and store the patient's monitoring
guidelines defined by physicians (TS, telemonitoring server). In fact, this
telemonitoring process, and consequently the evolution of the patient's health
status, is managed through the indications or monitoring guidelines provided by
physicians.
Although significant contributions have
been made in this field in recent decades, telemedicine, and e-health
scenarios in general, still pose numerous challenges that need to be addressed
by researchers in order to take maximum advantage of the benefits that these
systems provide and to support their long-term implementation. Interoperability
and integration are critical challenges that also need to be addressed when
developing monitoring systems in order to provide effective healthcare and to
make possible seamless communication among the different heterogeneous health
entities that participate in the monitoring process. This integration should be
addressed not only at both end sites of the scenario but also in the communication link,
thus integrating the way of transferring and exchanging information efficiently
between them.
Providing personalized care services
and taking into account the patient's context have been identified as additional
requirements. Furthermore, apart from clinical data aspects, technical issues
should be also addressed in this scenario. Technical management of all the
devices that comprise the telemonitoring scenario (e.g., the MDs and HG) is an
important task that may or may not be integrated under the same architecture as
clinical management. Hence, at this technical level, research is still required
to address these challenges. Consequently there is a need for the development
of new telemonitoring architectures.
Great efforts have been made in recent
years in developing standards to deal with interoperability at different points
of the e-health communication infrastructure such as the ISO/IEEE 11073 (X73)
for MD interoperability, the OpenEHR initiative for storage, management, and
retrieval of electronic health record (EHR) information, or the standardized
Health Level Seven (HL7) messages to solve clinical data transfers.
Nevertheless, additional efforts are required to enable them to work together and
ultimately provide a higher level of integration.
Specifically, in this telemonitoring
scenario, there is no single standard-based solution to address data and
management integration. Since several standards can be used (some of them in
combination with proprietary protocols or other standards) at different points
of this scenario, the interoperability problem remains unsolved unless these
standards merge into one or are aligned and combined. According to Berges et al.,
interoperability does not mean having a unique representation but rather a
semantically acknowledged equivalent one. That is the reason for proposing, in
this study, an ontology-based architecture that provides common knowledge about
the exchanged data and the management of such data. This ontology constitutes
that semantically equivalent knowledge model. Then, at both ends of the architecture,
other standards could be used for other management purposes by relating this
model to the specific desired approach. Using this alternative, a knowledge
model is provided first, which avoids aligning models two by two, since all of
them are related through the main ontology.
Ontology-based solutions have become
popular over the past few years. Ontologies provide a higher level of abstraction
and have been successfully used in telemonitoring scenarios and other areas to
provide knowledge representation and semantic integration, and thus a common
understanding of the data exchanged by all the entities. Furthermore, their
combination with rules makes it possible to provide personalized management services and
thus personalized care. Although there are works that describe the details of
an ontology approach in this domain, they do not devote much attention to the
architecture implementation and the communication used to exchange the
information described. Consequently, few works have given details about the
practical implementation of ontology-based systems, which may be of interest
for the development of other ontology-based applications inside and outside the e-health
domain.
This paper presents an ontology-driven
architecture to integrate data management and enable its communication in a
telemonitoring scenario. The proposed architecture includes two layers: the
conceptual layer (the ontology) and the communication and data layer. The
conceptual layer uses the HOTMES ontology and its extensions. Specifically,
the OWL-DL language was selected to define this ontology model. The second
layer is based on WS technologies. WSs have been successfully used in network
management and also in other works to exchange data modeled by an ontology.
However, our proposal, inspired by the representational state transfer (REST)
style and based on a generic communication method, provides a different design
approach that may be reusable for other systems based on ontologies.
Furthermore, security issues have been considered. The aim is to define a
flexible and scalable architecture in order to address the main challenges
presented in home-based telemonitoring scenarios and thus provide a means to
integrate and transfer data supporting both clinical and technical data
management.
1.3
LITERATURE SURVEY
AUTHOR
AND PUBLICATION: J. D. Trigo, I. Martínez, A. Alesanco,
A. Kollmann, J. Escayola, D. Hayn, G. Schreier, and J. García, “AN INTEGRATED
HEALTHCARE INFORMATION SYSTEM FOR END-TO-END STANDARDIZED EXCHANGE AND
HOMOGENEOUS MANAGEMENT OF DIGITAL ECG FORMATS,” IEEE Trans. Inf. Technol.
Biomed., vol. 16, no. 4, pp. 518–529, Jul. 2012.
EXPLANATION:
This paper investigates
the application of the enterprise information system (EIS) paradigm to
standardized cardiovascular condition monitoring. There are many specifications
in cardiology, particularly in the ECG standardization arena. The existence of ECG
formats, however, does not guarantee the implementation of homogeneous,
standardized solutions for ECG management. In fact, hospital management
services need to cope with various ECG formats and, moreover, several different
visualization applications. This heterogeneity hampers the normalization of
integrated, standardized healthcare information systems, hence the need for
finding an appropriate combination of ECG formats and suitable EIS-based
software architecture that enables standardized exchange and homogeneous
management of ECG formats. Determining such a combination is one objective of
this paper.
The paper then develops an
integrated healthcare information system that satisfies the requirements posed
by the previous determination. The ECG formats selected include ISO/IEEE11073,
Standard Communications Protocol for Computer-Assisted Electrocardiography, and
an ECG ontology. The EIS-enabling techniques and technologies selected include
web services, simple object access protocol, extensible markup language, or business
process execution language. Such a selection ensures the standardized exchange
of ECGs within, or across, healthcare information systems while providing
modularity and accessibility.
AUTHOR
AND PUBLICATION: D. Riaño, F. Real, J. A. López-Vallverdú,
F. Campana, S. Ercolani, P. Mecocci, R. Annicchiarico, and C. Caltagirone, “AN
ONTOLOGY-BASED PERSONALIZATION OF HEALTH-CARE KNOWLEDGE TO SUPPORT CLINICAL
DECISIONS FOR CHRONICALLY ILL PATIENTS,” J. Biomed. Informat., vol. 45,
no. 3, pp. 429–446, 2012.
EXPLANATION:
Chronically ill
patients are complex health care cases that require the coordinated interaction
of multiple professionals. A correct intervention for this sort of patient
entails the accurate analysis of the conditions of each concrete patient and
the adaptation of evidence-based standard intervention plans to these
conditions. There are some other clinical circumstances such as wrong
diagnoses, unobserved comorbidities, missing information, unobserved related
diseases or prevention, whose detection depends on the capacities of deduction
of the professionals involved. In this paper, we introduce an ontology for the
care of chronically ill patients and implement two personalization processes
and a decision support tool. The first personalization process adapts the
contents of the ontology to the particularities observed in the health-care
record of a given concrete patient, automatically providing a personalized
ontology containing only the clinical information that is relevant for health-care
professionals to manage that patient. The second personalization process uses
the personalized ontology of a patient to automatically transform intervention
plans describing health-care general treatments into individual intervention
plans. For comorbid patients, this process concludes with the semi-automatic
integration of several individual plans into a single personalized plan.
Finally, the ontology is also used as the knowledge base of a decision support
tool that helps health-care professionals to detect anomalous circumstances
such as wrong diagnoses, unobserved comorbidities, missing information,
unobserved related diseases, or preventive actions. Seven health-care centers
participating in the K4CARE project, together with the group SAGESA and the Local
Health System in the town of Pollenza have served as the validation platform
for these two processes and tool. Health-care professionals participating in
the evaluation agree about the average quality 84% (5.9/7.0) and utility 90%
(6.3/7.0) of the tools and also about the correct reasoning of the decision
support tool, according to clinical standards.
AUTHOR
AND PUBLICATION: I. Berges, J. Bermudez, and A.
Illarramendi, “TOWARDS SEMANTIC INTEROPERABILITY OF ELECTRONIC HEALTH RECORDS,”
IEEE Trans. Inf. Technol. Biomed., vol. 16, no. 3, pp. 424–431, May
2012.
EXPLANATION:
Although the goal of
achieving semantic interoperability of electronic health records (EHRs) is
pursued by many researchers, it has not been accomplished yet. In this paper,
we present a proposal that smoothes out the way toward the achievement of that
goal. In particular, our study focuses on medical diagnoses statements. In
summary, the main contributions of our ontology-based proposal are the
following: first, it includes a canonical ontology whose EHR-related terms
focus on semantic aspects. As a result, their descriptions are independent of
languages and technology aspects used in different organizations to represent
EHRs. Moreover, those terms are related to their corresponding codes in
well-known medical terminologies. Second, it deals with modules that allow
obtaining rich ontological representations of EHR information managed by
proprietary models of health information systems. The features of one specific
module are shown as reference. Third, it considers the necessary mapping axioms
between ontological terms enhanced with so-called path mappings. This feature
smoothes out structural differences between heterogeneous EHR representations,
allowing proper alignment of information.
AUTHOR
AND PUBLICATION: N. Lasierra, A. Alesanco, J. García,
and D. O’Sullivan, “DATA MANAGEMENT IN HOME SCENARIOS USING AN AUTONOMIC
ONTOLOGY-BASED APPROACH,” in Proc. of the 9th IEEE Int. Conf. Pervasive
Workshop on Manag. Ubiquitous Commun. Services part of PerCom, 2012, pp.
94–99.
EXPLANATION:
An ontology-based approach to deal
with data and management procedure integration in home-based scenarios is
presented in this paper. The proposed ontology not only provides a means to
represent exchanged data but also to unify the way of accessing, controlling,
evaluating and transferring information remotely. The structure of this
ontology has been inspired by the autonomic computing paradigm, thus it
describes the tasks that comprise the MAPE (Monitor, Analyze, Plan and Execute)
process. Furthermore, the use of SPARQL (SPARQL Protocol and RDF Query Language)
is proposed in this paper to express conditions and rules that determine the
performance of these tasks according to each situation. Finally, two practical
application cases of the proposed ontology-based approach are presented.
CHAPTER 2
2.0
SYSTEM ANALYSIS
2.1
EXISTING SYSTEM:
Telemonitoring systems face the problem
of delivering medicine to the current growing population with chronic
conditions while at the same time covering the dimensions of quality of care
and supporting new paradigms such as empowerment. By periodically
collecting patients' own clinical data (located at their home sites) and
transferring them to physicians located at remote sites, supervision of the patient's health
status and provision of feedback become possible.
This type of telemedicine system
guarantees patient control while reducing costs and avoiding hospital
overflows. These two sites (home site and healthcare site) comprise a typical
home-based telemonitoring system. At the home site, data acquired by using MDs
together with the patient’s feedback are collected in a concentrator device
(HG) used to evaluate and/or transfer the acquired data outside the patient’s
home if necessary.
2.1.1
DISADVANTAGES:
- Existing models for chronic diseases pose several
technology-oriented challenges for home-based care, where assistance services
rely on a close collaboration among different stakeholders, such as health
operators, patient relatives, and social community members.
- An ontology-based context model and a related context
management system providing a configurable and extensible service-oriented
framework to ease the development of applications for monitoring and handling
patient chronic conditions.
- The system has been developed in a prototypal version, and
integrated with a service platform for supporting operators of home-based care
networks in cooperating and sharing patient-related information and
coordinating mutual interventions for handling critical and alarm situations.
2.2
PROPOSED SYSTEM:
We present an ontology-driven
architecture to integrate data management and enable its communication in a
telemonitoring scenario. It enables not only the integration of the patient's clinical
data management but also the technical data management of all devices that are
included in the scenario. The proposed architecture includes two layers: the
conceptual layer (the ontology) and the communication and data layer.
The conceptual layer uses the HOTMES ontology and
its extensions; specifically, the OWL-DL language was selected to
define this ontology model. The second layer is based on WS technologies. WSs
have been successfully used in network management and also in other works to
exchange data modeled by an ontology. However, our proposal, inspired by the
representational state transfer (REST) style and based on a generic
communication method, provides a different design approach that may be reusable
for other systems based on ontologies.
Furthermore, security issues have been
considered. The aim is to define a flexible and scalable architecture in order
to address the main challenges presented in home-based telemonitoring scenarios and
thus provide a means to integrate and transfer data supporting both clinical
and technical data management.
2.2.1
ADVANTAGES:
Ontologies provide a higher level of
abstraction and have been successfully used in telemonitoring scenarios and
other areas to provide knowledge representation and semantic integration, and thus
a common understanding of the data exchanged by all the entities. Furthermore,
their combination with rules makes it possible to provide personalized management services
and thus personalized care.
Works that describe the details of an ontology
approach in this domain do not devote much attention to the architecture
implementation and the communication used to exchange the information described.
In contrast, our implementation of the ontology-based
system may be of interest for the development of other ontology-based
applications inside and outside the e-health domain: the ontology is used for interpreting
the data transferred in the communication between the end sources of the architecture,
while the data and communication layer deals with data management and transmission.
2.3
HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1
HARDWARE REQUIREMENT:
- Processor – Pentium IV
- Speed – 1.1 GHz
- Key Board – Standard Windows Keyboard
- Mouse – Two or Three Button Mouse
2.3.2
SOFTWARE REQUIREMENTS:
- Operating System : Windows XP or Win7
- Front End : Microsoft Visual Studio .NET
- Back End : MSSQL Server
- Server : ASP .NET Web Server
- Script : C# Script
- Document : MS-Office 2007
CHAPTER
3
3.0 SYSTEM DESIGN:
Data Flow Diagram / Use
Case Diagram / Flow Diagram:
- The DFD is also called a bubble chart. It is a simple graphical formalism that can
be used to represent a system in terms of the input data to the system, the various
processing carried out on these data, and the output data generated by the
system.
- The
data flow diagram (DFD) is one of the most important modeling tools. It is used
to model the system components. These components are the system process, the
data used by the process, an external entity that interacts with the system and
the information flows in the system.
- DFD
shows how the information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information
flow and the transformations that are applied as data moves from input to
output.
- DFD
is also known as bubble chart. A DFD may be used to represent a system at any
level of abstraction. DFD may be partitioned into levels that represent
increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION
OF DATA:
External sources or
destinations, which may be people or organizations or other entities
DATA STORE:
Here the data referenced by a process is stored and retrieved.
PROCESS:
People, procedures, or devices that produce data. The physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to
a destination. The data flow is a “packet” of data.
MODELING RULES:
There
are several common modeling rules when creating DFDs:
- All processes must
have at least one data flow in and one data flow out.
- All processes
should modify the incoming data, producing new forms of outgoing data.
- Each data store
must be involved with at least one data flow.
- Each external entity
must be involved with at least one data flow.
- A data flow must
be attached to at least one process.
3.1 ARCHITECTURE DIAGRAM
3.2
DATAFLOW DIAGRAM
UML
DIAGRAMS:
3.3
USE CASE DIAGRAM:
3.4
CLASS DIAGRAM:
3.5
SEQUENCE DIAGRAM:
3.6
ACTIVITY DIAGRAM:
CHAPTER
4
4.0
IMPLEMENTATION:
ONTOLOGIES:
According to one of the most widely
accepted definitions of ontologies in computer science, ontology can be
described as “an explicit and formal specification of a shared
conceptualization”. In simple words,
ontologies represent concepts and basic relationships for the purpose of
comprehension of a common knowledge area. To develop an ontology means to formalize
a common view of a certain domain.
1) OWL Language: In
computer science, there are plenty of formal languages that can be used to
define and construct ontologies. These languages allow encoding the knowledge
contained in an ontology in a simple and formal way. However, the standardized RDF
and OWL have been gaining popularity in the semantic web world. An ontology can be
formally described in OWL using the following basic elements: 1) classes; 2) individuals;
and 3) properties. These elements are used to describe concepts,
instances or members of a class, and relationships between individuals of two
classes (object properties) or to link individuals with datatype values
(datatype properties), respectively. Apart from these basic elements, OWL
provides class descriptors used to precisely describe OWL classes,
which include property restrictions (value and cardinality constraints),
class axioms, property axioms, and properties over individuals.
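As a purely illustrative sketch of these three building blocks, the following fragment prints a Turtle-style description of a class, an individual, an object property, and a datatype property; the class, property, and individual names are hypothetical and are not taken from the HOTMES ontology.

using System;

// Minimal sketch: a hypothetical OWL fragment (Turtle syntax) held as a string.
// The names (Patient, MonitoringTask, hasTask, hasName) are illustrative only.
class OwlElementsExample
{
    static void Main()
    {
        string fragment = @"
:Patient        a owl:Class .                  # 1) class
:MonitoringTask a owl:Class .
:patient01      a :Patient .                   # 2) individual (member of a class)
:task01         a :MonitoringTask .
:hasTask        a owl:ObjectProperty .         # 3a) object property (individual -> individual)
:hasName        a owl:DatatypeProperty .       # 3b) datatype property (individual -> literal)
:patient01      :hasTask :task01 ;
                :hasName ""John Doe""^^xsd:string .";
        Console.WriteLine(fragment);
    }
}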
2) Rules: Generally,
ontology-based solutions combine knowledge presented in ontologies with dynamic
knowledge presented by the use of rules. A system based on the use of rules
usually contains a set of if-then rules (which indicate what should be done
according to a situation) and a rule engine used to apply them. By using rules,
the behavior of individuals can be expressed inside a domain. Hence, they can
be used to generate new knowledge and can also be used to provide personalized
services. One of the most popular languages for rules definition is SWRL.
However, in our study, we used SPARQL to
define some rules. Although SPARQL is a query language, it can be used as a rule language by
combining the CONSTRUCT clause and FILTER restrictions. On the one hand, the CONSTRUCT
query form returns a single RDF graph built from the results of matching
the graph pattern of the query against the data and applying the specified graph template.
On the other hand, the FILTER clause can be used to restrict solutions to those
for which the filter expression evaluates to TRUE. Only if the filter expression
evaluates to true is the solution included in the solution sequence. Note
that although this language was good enough for our purpose, its limitations
should be studied for other purposes (e.g., recursive tasks) and the adequacy
of SWRL could be studied for complex applications.
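To illustrate the idea of such a rule, the following minimal sketch holds a SPARQL CONSTRUCT + FILTER query as a string; the vocabulary (hasSpO2Value, hasFinding, AbnormalFinding) and the threshold are hypothetical and not taken from the HOTMES ontology.

using System;

// Minimal sketch of a SPARQL CONSTRUCT + FILTER "rule", held as a string.
// In the architecture this text would be handed to a SPARQL engine; here we
// only show the structure of such a rule.
class SparqlRuleExample
{
    static void Main()
    {
        string rule = @"
PREFIX ex: <http://example.org/telemonitoring#>
CONSTRUCT {
    ?measurement ex:hasFinding ex:AbnormalFinding .    # new triples built from the template
}
WHERE {
    ?measurement ex:hasSpO2Value ?value .
    FILTER (?value < 90)                                # rule condition: keep only low readings
}";
        Console.WriteLine(rule);
    }
}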
WEB SERVICES
Web services are used in this study as
software technology to access and exchange information modeled by the ontology.
According to the W3C, a WS is a “software system designed to support
interoperable machine-to-machine interaction over a communication network”.
Systems may interact with web services by exchanging SOAP messages
serialized in XML and sent over an application layer protocol,
usually HTTP. Although SOAP-based web services are the most popular types of
WSs, there are other styles of programming a WS such as the REST style.
1) REST Style for Designing Web Services:
REST
is a style of software architecture for distributed hypermedia systems such as
the World Wide Web first defined in 2000 by Fielding. This style is based on
the idea of transferring the representations of resources, a resource being any
item of interest. Key advantages of the REST architecture are the
scalability of components and the generality of interfaces. Although REST was initially
described in the context of HTTP, this paradigm can be applied to other
protocols or implementations. Web services can also be described using this
style. A WS implemented using HTTP and the principles of the REST architecture is
designated a REST(ful) WS. Requests made by the client and responses from
the WS are used to transfer resource information. Each resource is identified
through a URI. Stateless behavior, the representation of data using XML and/or JSON,
and the explicit use of HTTP methods (PUT, GET, POST, DELETE) to exchange resources are the key
characteristics of a REST(ful) WS.
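As a minimal illustration of these principles, the following sketch shows how a REST(ful) GET request could be issued from a .NET 3.5 client such as the web client in the HG; the URI and resource path are hypothetical and error handling is omitted.

using System;
using System.IO;
using System.Net;

// Minimal sketch of a REST(ful) GET request. The base URI and the "profile"
// resource are assumptions for illustration only.
class RestGetExample
{
    static void Main()
    {
        // Each resource is identified by a URI; the HTTP verb (GET) states the operation.
        var request = (HttpWebRequest)WebRequest.Create("https://example.org/ws/profile/patient01");
        request.Method = "GET";
        request.Accept = "application/xml";          // stateless exchange of an XML representation

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string representation = reader.ReadToEnd();   // the transferred resource representation
            Console.WriteLine(representation);
        }
    }
}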
4.1
MODULES:
MANAGEMENT
PROFILE:
DATA AND COMMUNICATION LAYER:
HG AND TS MANAGEMENT MODULES:
COMMUNICATION FLOW AND WORKFLOW:
4.2
MODULE DESCRIPTION:
CLINICAL
MANAGEMENT PROFILE:
COPD patients were identified as
candidates to be monitored at home sites. From a clinical point of view, it was
an interesting case study (some estimations suggest that up to 10% of the
European population suffers from COPD). From a technical point of view, the case of
the COPD patient led to the definition of a complex technical management profile (because
the patient is required to use different MDs) and provided an interesting option
for testing the performance of the agent. Hence, one patient profile was designed
according to the clinical HOTMES ontology and one technical management
profile was designed according to the technical HOTMES ontology.
The patient profile includes the required
tasks to monitor a COPD patient such as controlling the FEV1 measurement in
order to detect the presence and severity of the airway obstruction. It was
configured by a primary care physician by means of published clinical
guidelines. The patient profile included 15 monitoring tasks, 11 analysis
tasks, 9 planning tasks, and 3 execution tasks. This configuration led to the inclusion of
144 new instances and the configuration of 18 rules. The details of this profile and
its evaluation for configuring other types of profiles are reported elsewhere. The technical
management profile was designed to monitor the state of the MDs used by the
COPD patient (a weighing scale, a blood pressure monitor, a pulse-oximeter, and a
glucometer) and the consumption of resources of the corresponding HG. In
addition, rules were configured, and 83 new instances were required in the
technical management profile; additional information on the application of the
HOTMES ontology for technical tasks is reported elsewhere.
DATA AND COMMUNICATION LAYER:
In the data layer, the communication
between the end sites is established using WS technologies. Consequently, a WS
has been designed to be placed in the TS and also a web client to be installed
in the HG (to establish a communication with the TS). This communication allows
the HG to request its associated management profile from the TS and to
transmit acquired information from the HG to the TS.
A REST WS was developed in order to
enhance the scalability and flexibility of the architecture and improve the
performance (efficiency). This WS defines a set of operations
over the following resources: an OWL ontology, the rules (transferred by means
of an XML file), OWL individuals (sent using the IndividualWS structure),
datatype property values corresponding to an individual (identified by the
URI of the individual and the URI of the property, sent as a generic string type),
and inform messages that provide some control functions for the communication
between the web pair.
Each one of these resources was
identified by a URI, and a set of operations was defined for each particular
resource using HTTP methods (e.g., GET or PUT). This WS interface allows
information described in the ontology to be exchanged in a generic manner, which
is a key feature that contributes to the reusability and easy extension of the architecture.
The described communication methods do not depend on the knowledge itself described
in the ontology (related to the service) but only on the fact that an ontology is used
to represent such knowledge. A summary of the resources and defined operations is
are exchanged by using a developed structure designated as IndividualWS.
Using OWL language, an individual of the ontology can be described as a member
of a class with individual axioms or facts as individual property values
(datatype and object properties).
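Since the IndividualWS structure itself is not detailed here, the following hedged sketch only suggests what such a transfer structure could look like; the type and field names are assumptions chosen to mirror the OWL notions just described (individual URI, class membership, datatype and object property values), not the structure actually used in the architecture.

using System;
using System.Collections.Generic;

// Hedged sketch of an "IndividualWS"-like transfer structure.
[Serializable]
public class IndividualWS
{
    public string IndividualUri;                          // URI identifying the OWL individual
    public string ClassUri;                               // class the individual belongs to
    public Dictionary<string, string> DatatypeProperties  // property URI -> literal value
        = new Dictionary<string, string>();
    public Dictionary<string, string> ObjectProperties    // property URI -> URI of related individual
        = new Dictionary<string, string>();
}

class IndividualWSDemo
{
    static void Main()
    {
        // Illustrative wrapping of one individual before sending it to the web pair.
        var individual = new IndividualWS
        {
            IndividualUri = "http://example.org/telemonitoring#task01",
            ClassUri = "http://example.org/telemonitoring#MonitoringTask"
        };
        individual.DatatypeProperties["http://example.org/telemonitoring#hasFrequency"] = "daily";
        Console.WriteLine(individual.IndividualUri + " -> " + individual.ClassUri);
    }
}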
HG AND TS MANAGEMENT MODULES:
Two management modules and web
technology modules inside the HG and the TS constitute the main parts of the telemedicine
system (see Fig. 1). The modules that comprise the architecture have been
developed using .NET technologies. Specifically, the .NET framework (version 3.5)
has been used to process the ontology and create new instances, as well as for data
acquisition and manipulation when the rules are applied. Regarding the web
modules, the components of the remote management module installed in the TS are
depicted in Fig. 1. This management module includes the following three
components:
1)
Ontology knowledge base module: This module contains the
ontology knowledge models and the instances of the registered management
profiles. The TDB triple-store has been used to store the ontology model and
new instances in this knowledge base module.
2)
Converter module: The communication module of this
architecture is mainly based on OWL instances exchanged generically by means of
a developed object structure named IndividualWS. The converter module is
used to wrap and unwrap the individuals structure used to exchange information with
web clients. Furthermore, this module incorporates some reasoning tasks.
Ontology-based reasoning is used in order to check instances before including
new information in the model and to ensure the consistency of the model.
3)
Rules module: This module is used to store rules
associated with each management profile. These rules are subsequently transferred
by means of an XML file. As shown in Fig. 1, an additional GUI is required in
order to make the process of defining the profiles and the rules easier for the EM,
whether technical or clinical (physician). We are currently working on the development
of this GUI, combining ontology visualization techniques and usability methods;
the methodology used to design this interface is reported elsewhere. The components
of the management module installed in the HG are equally depicted in Fig. 1. This last management module
has been designated the “Semantic Autonomic Agent.” This module plays a key
role in the architecture. It is in charge of integrating incoming data and
executing the management tasks described in the management profile.
The communication between this agent and
the management module installed at the remote site is established through a web
client connection to the WS installed in the remote TS. The architecture of the
agent comprises the ontology knowledge base module, the rules module, the
converter module, and the following modules.
1) MAPE module: This module constitutes
the computing core of the agent. It is used to run the tasks specified in
each management profile, hence executing the closed loop of the MAPE
process (see the sketch after this list).
2) Integrator module: Information
transferred by MDs and also contextual data provided by patients will be
acquired in this module, which integrates data coming from different data sources.
3) Reminders and alarms module: This
module includes clock functionalities to ask patients about data (reminders) or
to collect information from a specific software resource.
4) Actions module: This last module is
used to execute actions described within the execution tasks of the management
profile if an abnormal finding occurs.
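The following minimal sketch, referenced from the MAPE module description above, illustrates the closed MAPE loop run by the agent. The method names, return types, and the fixed cycle period are assumptions for illustration; the real agent derives these steps from the tasks described in the downloaded management profile.

using System;
using System.Threading;

// Minimal sketch of the closed MAPE loop of the Semantic Autonomic Agent.
class MapeLoopSketch
{
    static void Main()
    {
        for (int cycle = 0; cycle < 3; cycle++)       // the real agent would loop continuously
        {
            string data = MonitorStep();              // M: acquire data from MDs and patient feedback
            string findings = AnalyzeStep(data);      // A: apply the configured rules to the data
            string actions = PlanStep(findings);      // P: decide which actions are required
            ExecuteStep(actions);                     // E: run the actions (alarms, transfer to the TS, ...)
            Thread.Sleep(TimeSpan.FromSeconds(1));    // illustrative cycle period
        }
    }

    static string MonitorStep() { return "raw measurements"; }
    static string AnalyzeStep(string data) { return "findings for: " + data; }
    static string PlanStep(string findings) { return "actions for: " + findings; }
    static void ExecuteStep(string actions) { Console.WriteLine("Executing " + actions); }
}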
COMMUNICATION FLOW AND WORKFLOW:
Fig. 3 depicts all the modules and sources involved in
the management procedure. The first step consists of the download
of the management profile (patient profile or technical profile). First
of all, an instance of the management profile should be configured by an
EM placed at a remote site. Furthermore, a set of individual rules should be configured
for each particular management purpose. As shown in Fig. 3, the designed GUI
helps the physician with the ontology instantiation process and the rules
definition. The outputs of this interface (which uses selected classes of the
ontology as a navigation tool) are a personalized management profile and
a set of rules gathered in an XML file. Other functionalities such as queries
over acquired data or crossing data among patients to take some decisions could
be of interest to be included in this tool.
The communication is always initiated by
the user (web client at HG). Through a connection to the web service, the user
(the patient in the telemonitoring scenario) situated at home site will acquire
the required management profile. As shown in Fig. 3, if the user
requests an update of his/her management profile, then the version of the
available profile at the TS will be requested for its evaluation (GET property
value). When the user requests a new management profile, first, it is
checked whether the ontology needed to download it is available (GET ontology). After that,
the rules and the management profile will be downloaded when required.
The methods involved are 1) GET (rules)
and 2) GET (individual). Note that the TLS authentication phase is not depicted
in Fig. 3, but it is initially carried out in order to allow the web client
connection to the web service. As depicted in Fig. 3, the associated management
profile is extracted from the ontology and the instances of the ontology
managed by Jena are wrapped into the IndividualWS structure through the
converter module. Once the management profile is in the HG, it will be
processed into the converter module, unwrapped, and inserted as individuals
managed by Jena in the ontology. Once the management profile has been
included in the ontology knowledge base module of the HG, it will be evaluated in
the MAPE module and the management procedure will be performed by running the
tasks specified in the profile.
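As a rough illustration of this sequence, the following sketch shows how a web client could issue the GET requests in the order described above. The base URI and resource paths are hypothetical, and TLS authentication, error handling, and the unwrapping of IndividualWS structures are omitted.

using System;
using System.Net;

// Hedged sketch of the client-side download sequence
// (GET property value -> GET ontology -> GET rules -> GET individual).
class ProfileDownloadSketch
{
    const string Ts = "https://example.org/ws";   // hypothetical TS base URI

    static string Get(string resource)
    {
        using (var client = new WebClient())
        {
            return client.DownloadString(Ts + resource);
        }
    }

    static void Main()
    {
        // 1) Check which profile version is available at the TS (GET property value).
        string remoteVersion = Get("/property?individual=profile01&property=hasVersion");

        // 2) Make sure the ontology needed to interpret the profile is available (GET ontology).
        string ontology = Get("/ontology");

        // 3) Download the rules (XML file) and the profile individuals (GET rules, GET individual).
        string rulesXml = Get("/rules?profile=profile01");
        string profile  = Get("/individual?uri=profile01");

        Console.WriteLine("Downloaded profile version " + remoteVersion);
    }
}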
CHAPTER 5
5.0
SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the
project is analyzed in this phase, and a business proposal is put forth with a
very general plan for the project and some cost estimates. During system
analysis, the feasibility study of the proposed system is carried out.
This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding
of the major requirements for the system is essential.
Three key considerations involved in the feasibility
analysis are
- ECONOMICAL
FEASIBILITY
- TECHNICAL
FEASIBILITY
- SOCIAL
FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic
impact that the system will have on the organization. The amount of fund that
the company can pour into the research and development of the system is
limited. The expenditures must be justified. Thus the developed system is well
within the budget, and this was achieved because most of the technologies used
are freely available. Only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical
feasibility, that is, the technical requirements of the system. Any system
developed must not place a high demand on the available technical resources,
as this would lead to high demands being placed on the client. The developed
system must have modest requirements, as only minimal or null changes are
required for implementing this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study checks the level of
acceptance of the system by the user. This includes the process of training the
user to use the system efficiently. The user must not feel threatened by the
system, instead must accept it as a necessity. The level of acceptance by the
users solely depends on the methods that are employed to educate the user about
the system and to make him familiar with it. His level of confidence must be
raised so that he is also able to make some constructive criticism, which is
welcomed, as he is the final user of the system.
5.2 SYSTEM TESTING:
Testing is a
process of checking whether the developed system is working according to the
original objectives and requirements. It is a set of
activities that can be planned in advance and conducted systematically. Testing
is vital to the success of the system. System testing makes a logical
assumption that if all the parts of the system are correct, the goal will be
successfully achieved. Inadequate or omitted testing leads to errors that
may not appear until many months later. This creates two problems: the time lag
between the cause and the appearance of the problem, and the effect of
system errors on the files and records within the system. A small system error
can conceivably explode into a much larger problem. Effective testing early in
the process translates directly into long-term cost savings from a reduced
number of errors. Another reason for system testing is its utility as a
user-oriented vehicle before implementation. The best program is worthless if
it does not produce the correct outputs.
5.2.1 UNIT TESTING:
A program
represents the logical elements of a system. For a program to run
satisfactorily, it must compile and test data correctly and tie in properly
with other programs. Achieving an error free program is the responsibility of
the programmer. Program testing checks for two types of errors: syntax and logical.
A syntax error is a program statement that violates one or more rules of the language in which it
is written. An improperly defined field dimension or omitted keywords are
common syntax errors. These errors are shown through error messages generated by
the computer. For logic errors, the programmer must examine the output
carefully.
UNIT TESTING:

Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.

Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.
5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove the application delivers correct
results, using enough inputs to give an adequate level of confidence that it will
work correctly for all sets of inputs. The functional testing will need to
prove that the application works for each client type and that the personalization
functions work correctly. When a program is tested, the actual output is
compared with the expected output. When there is a discrepancy, the sequence of
instructions must be traced to determine the problem. The process is facilitated by breaking the
program into self-contained portions, each of which can be checked at certain
key points. The idea is to compare program values against desk-calculated
values to isolate the problems.
FUNCTIONAL TESTING:

Description: Test for all modules.
Expected result: All peers should communicate in the group.

Description: Test for various peers in a distributed network framework as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.
5.2.3 NON-FUNCTIONAL TESTING:
Non-functional software testing encompasses a rich spectrum of testing strategies,
describing the expected results for every test case. It uses symbolic analysis
techniques. This testing is used to check that an application will work in the
operational environment.
Non-functional testing includes:
- Load
testing
- Performance
testing
- Usability
testing
- Reliability
testing
- Security
testing
5.2.4 LOAD TESTING:
An important
tool for implementing system tests is a Load generator. A Load generator is
essential for testing quality requirements such as performance and stress. A
load can be a real load, that is, the system can be put under test to real
usage by having actual telephone users connected to it. They will generate test
input data for system test.
LOAD TESTING:

Description: It is necessary to ascertain that the application behaves correctly under loads when the ‘Server busy’ response is received.
Expected result: Should designate another active node as a Server.
5.2.5 PERFORMANCE TESTING:
Performance
tests are utilized in order to determine the widely defined performance of the
software system such as execution time associated with various parts of the code,
response time and device utilization. The intent of this testing is to identify
weak points of the software system and quantify its shortcomings.
PERFORMANCE TESTING:

Description: This is required to assure that an application performs adequately, having the capability to handle many peers, delivering its results in expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values, and produce accurate results in an expected time.
5.2.6 RELIABILITY TESTING:
Software reliability is the ability of a system or component to perform its required
functions under stated conditions for a specified period of time, and it is
ensured in this testing. Reliability can be expressed as the ability of
the software to reveal defects under testing conditions, according to the
specified requirements. It is the probability that a software system will operate
without failure under given conditions for a given time interval, and it focuses
on the behavior of the software element. It forms a part of software
quality control.
RELIABILITY TESTING:

Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.
5.2.7 SECURITY TESTING:
Security
testing evaluates system characteristics that relate to the availability,
integrity and confidentiality of the system data and services. Users/Clients
should be encouraged to make sure their security needs are very clearly known
at requirements time, so that the security issues can be addressed by the
designers and testers.
SECURITY TESTING:

Description: Checking that the user identification is authenticated.
Expected result: In case of failure it should not be connected in the framework.

Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design method
that uses the control structure of the procedural design to derive test cases.
Using the white box testing method, the software engineer can derive test cases.
White box testing focuses on the inner structure of the software to be tested.
WHITE BOX TESTING:

Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.

Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.

Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional requirements
of the software. That is, black box testing enables the software engineer to derive
sets of input conditions that will fully exercise all functional requirements for a
program. Black box testing is not an alternative to white box techniques.
Rather, it is a complementary approach that is likely to uncover a different class
of errors than white box methods. Black box testing attempts to find errors by
focusing on the inputs, outputs, and principal functions of a software module.
The starting point of black box testing is either a specification or code.
The contents of the box are hidden, and the stimulated software should produce
the desired results.
BLACK BOX TESTING:

Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.

Description: To check for interface errors.
Expected result: The entire interface must function normally.

Description: To check for errors in data structures or external database access.
Expected result: The database updating and retrieval must be done correctly.

Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.
All the above system testing strategies are carried out, as the development,
documentation, and institutionalization of the proposed goals and related
policies are essential.
CHAPTER
7
7.0 SOFTWARE SPECIFICATION:
7.1 FEATURES OF .NET:
Microsoft
.NET is a set of Microsoft software technologies for rapidly building and
integrating XML Web services, Microsoft Windows-based applications, and Web
solutions. The .NET Framework is a language-neutral platform for writing
programs that can easily and securely interoperate. There’s no language barrier
with .NET: there are numerous languages available to the developer including
Managed C++, C#, Visual Basic and Java Script.
The .NET
framework provides the foundation for components to interact seamlessly,
whether locally or remotely on different platforms. It standardizes common data
types and communications protocols so that components created in different
languages can easily interoperate.
“.NET” is
also the collective name given to various software components built upon the
.NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so
on).
7.2 THE .NET FRAMEWORK
The .NET Framework has
two main parts:
1. The Common Language
Runtime (CLR).
2. A hierarchical set of
class libraries.
The CLR is
described as the “execution engine” of .NET. It provides the environment within
which programs run. The most important features are
- Conversion from a
low-level assembler-style language, called Intermediate Language (IL), into
code native to the platform being executed on.
- Memory management,
notably including garbage collection.
- Checking and enforcing
security restrictions on the running code.
- Loading and executing
programs, with version control and other such features.
The following features of the .NET framework are also worth describing:
Managed
Code
Managed code is the code that targets .NET and which contains certain extra
information – “metadata” – to describe itself. Whilst both managed and unmanaged code can run in the
to describe itself. Whilst both managed and unmanaged code can run in the
runtime, only managed code contains the information that allows the CLR to
guarantee, for instance, safe execution and interoperability.
Managed Data
With Managed Code comes Managed Data. The CLR provides memory allocation and
deallocation facilities, and garbage collection. Some .NET languages use Managed
Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others,
namely C++, do not. Targeting CLR can, depending on the language you’re using,
impose certain constraints on the features available. As with managed and
unmanaged code, one can have both managed and unmanaged data in .NET
applications – data that doesn’t get garbage collected but instead is looked
after by unmanaged code.
Common Type System
The CLR
uses something called the Common Type System (CTS) to strictly enforce
type-safety. This ensures that all classes are compatible with each other, by
describing types in a common way. The CTS defines how types work within the runtime,
which enables types in one language to interoperate with types in another
language, including cross-language exception handling. As well as ensuring that
types are only used in appropriate ways, the runtime also ensures that code
doesn’t attempt to access memory that hasn’t been allocated to it.
Common Language Specification
The CLR
provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming
language, a set of language features and rules for using them called the Common
Language Specification (CLS) has been defined. Components that follow these
rules and expose only CLS features are considered CLS-compliant.
7.3 THE CLASS LIBRARY
.NET
provides a single-rooted hierarchy of classes, containing over 7000 types. The
root of the namespace is called System; this contains basic types like Byte,
Double, Boolean, and String, as well as Object. All objects derive from
System.Object. As well as objects, there are value types. Value types can be allocated
on the stack, which can provide useful flexibility. There are also efficient
means of converting value types to object types if and when necessary.
The set of
classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.
The class
library is subdivided into a number of sets (or namespaces), each providing
distinct areas of functionality, with dependencies between the namespaces kept
to a minimum.
7.4 LANGUAGES SUPPORTED
BY .NET
The
multi-language capability of the .NET Framework and Visual Studio .NET enables
developers to use their existing programming skills to build all types of
applications and XML Web services. The .NET framework supports new versions of
Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but
there are also a number of new additions to the family.
Visual
Basic .NET has been updated to include many new and improved language features
that make it a powerful object-oriented programming language. These features
include inheritance, interfaces, and overloading, among others. Visual Basic
also now supports structured exception handling, custom attributes and also
supports multi-threading.
Visual
Basic .NET is also CLS compliant, which means that any CLS-compliant language
can use the classes, objects, and components you create in Visual Basic .NET.
Managed
Extensions for C++ and attributed programming are just some of the enhancements
made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.
C# is
Microsoft’s new language. It’s a C-style language that is essentially “C++ for
Rapid Application Development”. Unlike other languages, its specification is
just the grammar of the language. It has no standard library of its own, and
instead has been designed with the intention of using the .NET libraries as its
own.
Microsoft
Visual J# .NET provides the easiest transition for Java-language developers
into the world of XML Web Services and dramatically improves the
interoperability of Java-language programs with existing software written in a
variety of other programming languages.
Active
State has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be
integrated into the Visual Studio .NET environment. Visual Perl includes
support for Active State’s Perl Dev Kit.
Other languages for
which .NET compilers are available include
Fig. 1. The .NET Framework stack: ASP.NET (XML Web Services and Windows Forms), Base Class Libraries, Common Language Runtime, and the Operating System.
C#.NET is
also compliant with CLS (Common Language Specification) and supports structured
exception handling. CLS is set of rules and constructs that are supported by
the CLR (Common Language Runtime). CLR is the runtime environment provided by
the .NET Framework; it manages the execution of the code and also makes the
development process easier by providing services.
C#.NET is
a CLS-compliant language. Any objects, classes, or components created in
C#.NET can be used in any other CLS-compliant language. In addition, we can use
objects, classes, and components created in other CLS-compliant languages in
C#.NET. The use of CLS ensures complete interoperability among applications,
regardless of the languages used to create the application.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas
destructors are used to destroy them. In other words, destructors are used to
release the resources allocated to the object. In C#.NET, a Finalize
procedure (finalizer) is available. The Finalize procedure is used to complete the
tasks that must be performed when an object is destroyed. The Finalize
procedure is called automatically when an object is destroyed. In addition, the
Finalize procedure can be called only from the class it belongs to or from
derived classes.
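A small, illustrative example of a constructor and a finalizer in C# is shown below; the class name and the simulated resource are hypothetical, and real cleanup code would normally also use IDisposable.

using System;

// Illustrative constructor and finalizer.
class Connection
{
    private string target;

    public Connection(string target)      // constructor: initializes the object
    {
        this.target = target;
        Console.WriteLine("Opened connection to " + target);
    }

    ~Connection()                          // finalizer: run by the GC when the object is destroyed
    {
        Console.WriteLine("Releasing connection to " + target);
    }
}

class ConstructorDemo
{
    static void Main()
    {
        var c = new Connection("telemonitoring-server");
        // When 'c' is no longer referenced, the garbage collector will
        // eventually run the finalizer shown above.
    }
}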
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The
.NET Framework monitors allocated resources, such as objects and variables. In
addition, the .NET Framework automatically releases memory for reuse by
destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that
are not currently in use by applications. When the garbage collector comes
across an object that is marked for garbage collection, it releases the memory
occupied by the object.
OVERLOADING
Overloading is another feature in C#. Overloading enables us
to define multiple procedures with the same name, where each procedure has a
different set of arguments. Besides using overloading for procedures, we can
use it for constructors and properties in a class.
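For illustration, the following sketch overloads a method with two different argument lists; the class and method names are hypothetical.

using System;

// Illustrative method overloading: two procedures share a name but differ in signature.
class Alerts
{
    public static void Send(string message)
    {
        Console.WriteLine("Alert: " + message);
    }

    public static void Send(string message, int priority)   // overload with a different argument list
    {
        Console.WriteLine("Alert (priority " + priority + "): " + message);
    }

    static void Main()
    {
        Send("low battery on pulse-oximeter");
        Send("abnormal SpO2 reading", 1);
    }
}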
MULTITHREADING:
C#.NET also supports multithreading. An application that
supports multithreading can handle multiple tasks simultaneously; we can use
multithreading to decrease the time taken by an application to respond to user
interaction.
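A small, illustrative example follows; the worker method is hypothetical.

using System;
using System.Threading;

// Illustrative multithreading: a worker task runs concurrently with the main thread.
class ThreadingExample
{
    static void CollectMeasurements()
    {
        Console.WriteLine("Collecting measurements on thread " + Thread.CurrentThread.ManagedThreadId);
    }

    static void Main()
    {
        var worker = new Thread(CollectMeasurements);
        worker.Start();                               // runs in parallel with the main thread
        Console.WriteLine("Main thread keeps responding to the user");
        worker.Join();                                // wait for the worker to finish
    }
}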
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to
detect and remove errors at runtime. In C#.NET, we need to use
Try…Catch…Finally statements to create exception handlers. Using
Try…Catch…Finally statements, we can create robust and effective exception
handlers to improve the performance of our application.
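For illustration, the following sketch provokes and handles a runtime error with Try…Catch…Finally; the division by zero is deliberate.

using System;

// Illustrative structured exception handling.
class ExceptionExample
{
    static void Main()
    {
        try
        {
            int denominator = 0;
            int result = 10 / denominator;            // throws DivideByZeroException at runtime
            Console.WriteLine(result);
        }
        catch (DivideByZeroException ex)              // error detected and handled
        {
            Console.WriteLine("Handled: " + ex.Message);
        }
        finally                                       // always executed, error or not
        {
            Console.WriteLine("Cleanup runs in the finally block.");
        }
    }
}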
7.5
THE .NET FRAMEWORK
The .NET Framework is a new computing platform that
simplifies application development in the highly distributed environment of the
Internet.
OBJECTIVES OF .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment, whether object
code is stored and executed locally, executed locally but Internet-distributed, or executed
remotely.
2. To provide a code-execution environment that minimizes software deployment
conflicts and guarantees the safe execution of code.
3. To eliminate performance problems.
There are
different types of application, such as Windows-based applications and
Web-based applications.
7.6 FEATURES OF SQL-SERVER
The OLAP
Services feature available in SQL Server version 7.0 is now called SQL Server
2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component.
The Repository component available in SQL Server version 7.0 is now called
Microsoft SQL Server 2000 Meta Data Services. References to the component now
use the term Meta Data Services. The term repository is used only in reference
to the repository engine within Meta Data Services.
An SQL-SERVER database consists of six types of objects. They are:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
7.7 TABLE:
A database
is a collection of data about a specific topic.
VIEWS OF
TABLE:
We can work with a table in two ways:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view. We can
specify what kind of data will be held.
Datasheet View
To add, edit, or analyse the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that has to be asked of the data. Access gathers data that answers the
question from one or more tables. The data that make up the answer is either a
dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a
query, we get the latest information in the dynaset. Access either displays the
dynaset or snapshot for us to view, or performs an action on it, such as deleting
or updating.
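As an illustrative sketch of how the C# front end could query the MSSQL back end through ADO.NET, the following example uses a hypothetical connection string, table, and column names that are not taken from the project schema.

using System;
using System.Data.SqlClient;

// Hedged sketch of a parameterized query against the MSSQL back end.
class QueryExample
{
    static void Main()
    {
        string connectionString = "Data Source=.;Initial Catalog=Telemonitoring;Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT PatientId, Value FROM PatientMeasurements WHERE Value < @threshold", connection))
        {
            command.Parameters.AddWithValue("@threshold", 90);   // parameterized query
            connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader["PatientId"] + ": " + reader["Value"]);
                }
            }
        }
    }
}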
CHAPTER
7
APPENDIX
7.1
SAMPLE SOURCE CODE
7.2
SAMPLE OUTPUT
CHAPTER 8
8.1
CONCLUSION:
This study describes architecture to
enable data integration and its management in an ontology-driven telemonitoring
solution implemented in home-based scenarios. This is an innovative
architecture that facilitates the integration of several management services at
home sites using the same software engine. The architecture has been
specifically studied to support both technical and clinical services in the
telemonitoring scenario, thus avoiding installing additional software for
technical purposes.
The HOTMES ontology is used at the conceptual layer to describe a management profile.
On the one hand, our ontology contributes to integrating data and
its management, offering benefits in terms of knowledge representation, workflow
organization, and self-management capabilities to the system. Its combination with
rules allows personalized services to be provided.
This application ontology could be improved in the future by introducing concepts
from a domain ontology. On the other hand,
the data and communication layer of the architecture, based on the REST WS, was
oriented to minimizing the consumption of resources and providing reusable key
ideas for future ontology-based architecture developments.
8.2
FUTURE ENHANCEMENT
This solution represents a further step
toward the possibility of establishing more effective home-based telemonitoring
systems and thus improving the remote care of patients with chronic diseases. As
has been reported, good telemedicine implementations are developed after a
process where the dynamic interaction among a combination of socio-technical and
also clinical factors is optimized. This means that additional work should be
done (e.g., to measure the patient–doctor interaction while using the system and also
the trustworthiness of the system over a long period of time) before adopting this
solution in a real scenario. Regarding its complete development, first, a concordance study
should be conducted in order to determine its clinical efficiency. Then, a
social impact study should be conducted in order to determine how the system
allows improving the patient's quality of life. Regarding these last studies, previously
reported results evidence the benefits of telemonitoring systems while
linking their success to usability design issues and features.