In the recent era of social media, sentiment analysis has developed rapidly. However, only a few studies have focused on the field of transportation, and they fail to meet the stringent requirements of safety, efficiency, and information exchange in intelligent transportation systems (ITSs). We propose traffic sentiment analysis (TSA) as a new tool to tackle this problem, providing a new perspective for modern ITSs.
Our methods and models for TSA are proposed in this paper, and the advantages and disadvantages of rule- and learning-based approaches are analyzed based on web data. In practice, we applied the rule-based approach to real problems, presented an architectural design, constructed the related bases, demonstrated the process, and discussed online data collection.
1.2
INTRODUCTION
Transportation systems serve the people in essence, but modern intelligent transportation systems (ITSs) have failed to take public opinion into account. For the completeness of the ITS space, it is necessary to collect and analyze public wisdom and opinion. With the remarkable advancement of Web 2.0 in the last decade, communication platforms such as blogs, wikis, online forums, and social-networking groups have become a rich data-mining source for the detection of public opinion. Therefore, we propose traffic sentiment analysis (TSA) for processing traffic information from websites. By taking human affect into consideration, TSA will enrich the performance of the current ITS space.
TSA is a subfield of sentiment analysis that is concerned with traffic issues in particular. Due to the domain sensitivity of sentiment analysis, it is necessary to discuss TSA problems and construct TSA systems specifically. TSA treats traffic problems from a new angle and supplements the capabilities of current ITS systems. Fig. 1 illustrates the modules of ITS and shows that TSA plays the role of sensing, computing, and supporting decision making in ITSs.
The functions of the TSA system can be illustrated as follows.
1) Investigation: Collecting public opinion through the TSA system is more economical and efficient than a public poll.
2) Evaluation: The computational output of the TSA system can be used to evaluate the performance of traffic services and policies.
3) Prediction: The TSA system can be further developed to predict the trends of some social events. For example, to predict whether a cancelled flight will cause chaos, we can analyze the emotions of passengers from their posts on Twitter or Weibo through TSA systems.
In addition, specific parts of the TSA system can be viewed as another form of "social sensor" compared with traditional sensor systems; they can detect situations from a new, humanized perspective. The TSA system is independent of current systems, which is particularly useful in an emergency when other systems fail. For example, in 2009, volcanic ash from Iceland caused the malfunction of many cameras in several European countries. In this paper, by constructing a specific TSA system, we address the issues and methods in this field and illustrate two cases to demonstrate the value of this research.
Our contributions in this paper can be summarized as follows.
1) We proposed TSA to view traffic problems from a new perspective.
2) The main issues of TSA applications were discussed based on web data.
3) The key problems of TSA were addressed, including the design of the architecture, the improved rule-based approach, and the construction of the related bases.
1.3
LITERATURE SURVEY
CHINESE
WORD SEGMENTATION FOR TERRORISM-RELATED CONTENTS
PUBLICATION:
D.
Zeng, D. Wei, M. Chau, and F. Wang, Intelligence and Security Informatics. New York, NY, USA: Springer-Verlag, 2008, pp. 1–13.
EXPLANATION:
In order to analyze security- and terrorism-related content in Chinese, it is important to perform word segmentation on Chinese documents. There are many previous studies on Chinese word segmentation. The two major approaches are statistics-based and dictionary-based approaches. Pure statistical methods have lower precision, while pure dictionary-based methods cannot deal with new words and are restricted by the coverage of the dictionary. In this paper, we propose a hybrid method that avoids the limitations of both approaches. Through the use of a suffix tree and mutual information (MI) with the dictionary, our segmenter, called IASeg, achieves high accuracy in word segmentation when domain training is available. It can identify new words through MI-based token merging and dictionary updates. In addition, with the improved bigram method it can also process N-grams. To evaluate the performance of our segmenter, we compare it with the Hylanda and ICTCLAS segmenters using a terrorism-related corpus. The experimental results show that IASeg performs better than the two benchmarks in both precision and recall.
AGENT-BASED
CONTROL FOR NETWORKED TRAFFIC MANAGEMENT SYSTEMS
PUBLICATION:
F.-Y.
Wang, IEEE Intell. Syst., vol. 20, no. 5, pp. 92–96, Sep./Oct. 2005.
EXPLANATION:
Agent or multiagent
systems have evolved and diversified rapidly since their inception around the
mid 1980s as the key concept and method in distributed artificial intelligence.
They have become an established, promising research and application field
drawing on and bringing together results and concepts from many disciplines,
including AI, computer science, sociology, economics, organization and
management science, and philosophy. However, multiagent systems have yet to
achieve widespread use for controlling traffic management systems. Most
research focuses on developing hierarchical structures, analytical modeling,
and optimized algorithms that are effective for real-time traffic applications,
as you can see from well-known traffic control systems such as CRONOS, OPAC,
SCOOT, SCAT, PRODYN, and RHODES. Although those functional-decomposition-based
systems are useful and successful for many traffic management problems, costs
and difficulties associated with their development, operation, maintenance,
expansion, and upgrading are often prohibitive and sometimes unnecessary,
especially in the rapidly arriving age of connectivity. We need to rethink
control systems and reinvestigate the use of simple task-oriented agents for
traffic control and management of transportation systems.
OPINION
FEATURE EXTRACTION USING CLASS SEQUENTIAL RULES
PUBLICATION:
M.
Hu and B. Liu, presented at the AAAI Spring Symposium Computational
Approaches
Analyzing Weblogs, Palo Alto, CA, USA, 2006, Paper AAAI-CAAW-06.
EXPLANATION:
The paper studies the problem of
analyzing user comments and reviews of products sold online. Analyzing such
reviews and producing a summary of them is very useful to both potential
customers and product manufacturers. By analyzing reviews, we mean to extract
features of products (also called opinion features) that have been commented by
reviewers and determine whether the opinions are positive or negative. This
paper focuses on extracting opinion features from Pros and Cons, which
typically consist of short phrases or incomplete sentences. We propose a
language pattern based approach for this purpose. The language patterns are
generated from Class Sequential Rules (CSR). A CSR is different from a classic
sequential pattern because a CSR has a fixed class (or target). We propose an
algorithm to mine CSR from a set of labeled training sequences. To perform
extraction, the mined CSRs are transformed into language patterns, which are
used to match Pros and Cons to extract opinion features. Experimental results
show that the proposed approach is very effective.
CHAPTER 2
2.0
SYSTEM ANALYSIS
2.1
EXISTING SYSTEM:
Existing approaches to sentiment analysis can be categorized into rule-based and learning-based approaches. Rule-based approaches often require an expert-defined dictionary of subjective words; such an approach predicts the polarity of a sentence or document by analyzing the occurrence patterns of these words in the text. For example, Wiebe et al. provided a lexicon of subjectivity clues, such as verbs, adjectives, and nouns, annotated with their polarity (i.e., positive, negative, or neutral) and strength (i.e., strong or weak). However, this lexicon defines only the original polarity of a word, and the actual polarity of a word may be modified by its context in a sentence. Several approaches that consider the context of words have been proposed to determine the sentiment orientation of words.
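As a rough illustration of such a rule-based, lexicon-driven approach, the sketch below scores a sentence against a small hand-made lexicon with weak/strong weights and a simple negation rule. The lexicon entries, weights, and negation handling are illustrative assumptions, not the lexicon of Wiebe et al.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a rule-based polarity check with a hypothetical lexicon.
public class LexiconPolarity {
    // word -> score (positive > 0, negative < 0); "strong" words get larger magnitude.
    private static final Map<String, Integer> LEXICON = new HashMap<>();
    static {
        LEXICON.put("smooth", 1);      // weak positive (illustrative)
        LEXICON.put("excellent", 2);   // strong positive (illustrative)
        LEXICON.put("jam", -1);        // weak negative (illustrative)
        LEXICON.put("terrible", -2);   // strong negative (illustrative)
    }

    // Sum lexicon scores; a "not"/"no" flips the sign of the immediately following token.
    public static int score(String sentence) {
        String[] tokens = sentence.toLowerCase().split("\\W+");
        int total = 0;
        boolean negate = false;
        for (String t : tokens) {
            if (t.equals("not") || t.equals("no")) { negate = true; continue; }
            Integer s = LEXICON.get(t);
            if (s != null) { total += negate ? -s : s; }
            negate = false;
        }
        return total; // > 0 positive, < 0 negative, 0 neutral
    }

    public static void main(String[] args) {
        System.out.println(score("The traffic is not smooth, terrible jam today")); // prints -4 (negative)
    }
}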
As emphasized in previous studies, the data set contains several subjective texts that cannot be easily analyzed by rules. The most typical phenomenon is the ironic sentence. For instance, in posts regarding fuel prices, the thread title was "the fuel price will rise," to which one user replied, "go to sell the car." Such a reply apparently carries an ironic tone; thus, all annotators manually labeled the reply as "negative." However, given that the computer cannot detect any word expressing a negative sentiment in the given text, such methods cannot recognize the sentiment polarity. Therefore, numerous problems remain unsolved.
2.1.1
DISADVANTAGES:
- The disadvantage of the rule-based approach is that the sentiment polarity results cannot be as precise as expected if the context of the texts is not considered. Nevertheless, for handling web data, this type of approach has the following advantages.
- The precision of the rule-based approach is independent of the size of the clauses. In addition, the syntax rules of a language are basic and static despite differences in the stylistic features of various users; the thought process and word choice basically remain unchanged.
- Although new sentiment words emerge rapidly and the sentiment of several words may change over time, the relatively static rules of the rule-based approach can be easily extended by simply updating the sentiment lexicon.
2.2
PROPOSED SYSTEM:
We propose traffic sentiment analysis (TSA) for processing traffic information from websites. By taking human affect into consideration, TSA will enrich the performance of the current ITS space. TSA is a subfield of sentiment analysis that is concerned with traffic issues in particular. Due to the domain sensitivity of sentiment analysis, it is necessary to discuss TSA problems and construct TSA systems specifically.
TSA treats traffic problems from a new angle and supplements the capabilities of current ITS systems; within the modules of ITS, TSA plays the role of sensing, computing, and supporting decision making. The functions of the TSA system can be illustrated as follows. 1) Investigation: Collecting public opinion through the TSA system is more economical and efficient than a public poll. 2) Evaluation: The computational output of the TSA system can be used to evaluate the performance of traffic services and policies. 3) Prediction: The TSA system can be further developed to predict the trends of some social events.
For example, to predict whether a cancelled flight will cause chaos, we can analyze the emotions of passengers from their posts on Twitter or Weibo through TSA systems. In addition, specific parts of the TSA system can be viewed as another form of "social sensor" compared with traditional sensor systems; they can detect situations from a new, humanized perspective.
2.2.1
ADVANTAGES:
- A rule-based approach is adopted to address the distinct challenges posed by the web data set. The architecture of TSA is based on the tackling process, and its main components include 1) web data collection, 2) preprocessing, 3) extraction of subjects and objects, 4) extraction of sentiment properties, 5) sentiment calculation and classification, 6) evaluation or applications, and 7) feedback, which improves the construction of the sentiment, rule, and TSA object bases.
- Data collection: We gathered data from several websites, ensuring that the conclusions are genuinely based on public opinion or, at least, represent part of the public opinion.
- Preprocessing: As previously mentioned, web documents must be processed additionally because the Chinese language does not segment words by spaces in sentences. The preprocessing includes the following steps: 1) the segmentation of text, 2) the labeling of words, and 3) the replacement of synonymous expressions.
2.3
HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1
HARDWARE REQUIREMENT:
- Processor – Pentium IV
- Speed – 1.1 GHz
- Keyboard – Standard Windows Keyboard
- Mouse – Two or Three Button Mouse
2.3.2
SOFTWARE REQUIREMENTS:
- Operating System : Windows XP or Win7
- Front End : JAVA JDK 1.7
- Back End : MS
ACCESS 2007
- Tools : Netbeans 7
- Document : MS-Office
2007
CHAPTER
3
3.0 SYSTEM DESIGN:
Data Flow Diagram / Use
Case Diagram / Flow Diagram:
- The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
- The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components: the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
- The DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data move from input to output.
- A DFD may be used to represent a system at any level of abstraction and may be partitioned into levels that represent increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION
OF DATA:
External sources or
destinations, which may be people or organizations or other entities
DATA STORE:
Here the data referenced by a process is stored and
retrieved.
PROCESS:
People, procedures, or devices that produce data; the physical component is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to
a destination. The data flow is a “packet” of data.
MODELING RULES:
There
are several common modeling rules when creating DFDs:
- All processes must
have at least one data flow in and one data flow out.
- All processes
should modify the incoming data, producing new forms of outgoing data.
- Each data store
must be involved with at least one data flow.
- Each external
entity must be involved with at least one data flow.
- A data flow must
be attached to at least one process.
3.1 ARCHITECTURE DIAGRAM
3.2
DATAFLOW DIAGRAM
UML
DIAGRAMS:
3.3
USE CASE DIAGRAM:
3.4
CLASS DIAGRAM:
3.5
SEQUENCE DIAGRAM:
3.6
ACTIVITY DIAGRAM:
CHAPTER
4
4.0
IMPLEMENTATION:
TSA ARCHITECTURE
Previous studies on Chinese texts have
devoted considerable efforts on architectural design. Che et al. designed
the architecture of the language technology platform (LTP), an integrated
Chinese processing platform including a suite of high-performance natural
language processing (NLP) modules and relevant corpora. They achieved plausible
results in several relevant evaluations, particularly for syntactic and
semantic parsing modules. Li et al. designed the architecture of a sentiment analysis application in the financial domain on the basis of morphemes. A rule-based approach is adopted here to address the distinct challenges posed by the Chinese data set. Fig. 2 illustrates the architecture of TSA; the architecture is based on the tackling process, and its main components include 1) web data collection, 2) preprocessing, 3) extraction of subjects and objects, 4) extraction of sentiment properties, 5) sentiment calculation and classification, 6) evaluation or applications, and 7) feedback, which improves the construction of the sentiment, rule, and TSA object bases.
Data collection: To address the problem, we gathered data from several websites, such as Sina Weibo, Tencent Weibo, Tianya, and Autohome (the upper block in Fig. 2), ensuring that the conclusions are genuinely based on public opinion or, at least, represent part of the public opinion. The details of data collection are discussed in Section V.
Preprocessing: As previously mentioned, Chinese documents must be processed additionally because the Chinese language does not segment words by spaces in sentences. The preprocessing includes the following steps: 1) the segmentation of text, 2) the labeling of words, and 3) the replacement of synonymous expressions. The first two steps are done by a Chinese segmentation tool; we employ the Chinese Lexical Analysis System for this purpose. In social media, various expressions denote the same meaning; for example, several users commonly use "d," which stands for the Chinese character meaning "support," to express agreement with others. Therefore, the replacement of synonymous expressions (step 3) is necessary to reduce the complexity and increase the precision of the following processes.
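A minimal sketch of preprocessing step 3 (the replacement of synonymous expressions) is given below; the mapping entries are assumptions for illustration, and the input is assumed to be already segmented and labeled (steps 1 and 2).

import java.util.HashMap;
import java.util.Map;

// Sketch: replace synonymous web expressions with a canonical form before sentiment calculation.
public class SynonymReplacer {
    private static final Map<String, String> CANONICAL = new HashMap<>();
    static {
        CANONICAL.put("d", "support");   // web shorthand for agreement (illustrative entry)
        CANONICAL.put("+1", "support");
        CANONICAL.put("zzz", "boring");
    }

    static String normalize(String tokenizedText) {
        StringBuilder out = new StringBuilder();
        for (String token : tokenizedText.split("\\s+")) {
            String replacement = CANONICAL.get(token);
            out.append(replacement != null ? replacement : token).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(normalize("d the new toll policy")); // "support the new toll policy"
    }
}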
Word segmentation optimization: To avoid unnecessary disturbances and improve precision, preprocessing should be conducted according to the material and the demands of the algorithms. However, in practice, the result of word segmentation in Chinese is often far from what is expected; in some cases, this step may even reduce precision. For example, an abbreviation of a company name (one of the two Chinese oil giants) may be incorrectly split into separate tokens. Therefore, it is necessary to improve the performance of the Chinese segmentation. In this paper, we propose to construct a "sentiment base" for the application of TSA. In practice, the "sentiment base" consists of the TSA sentiment base and HowNet (subsection B).
Extraction of subjects and objects: Subjects and objects are mainly extracted by context mining and document analysis. In TSA, appropriate models should be designed for context mining according to the different data sets and resources. Context mining should obtain results as efficiently as possible to provide the necessary background knowledge for the subsequent steps. In practice, context mining includes conversation extraction and coreference analysis. Conversation extraction refers to handling text such as citations and "@" mentions. Coreference analysis refers to mining the object represented by other words; for example, an address in Sina Weibo is usually represented by a hyperlink.
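The following sketch illustrates one simple reading of conversation extraction: pulling out the users addressed with "@" and any quoted (cited) text so that later steps know the object of a reply. The regular expressions are assumptions about the post format, not the system's actual patterns.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of conversation extraction: find addressed users and cited text in a post.
public class ConversationExtractor {
    private static final Pattern MENTION = Pattern.compile("@(\\w+)");
    private static final Pattern CITATION = Pattern.compile("\"([^\"]+)\"");

    public static void main(String[] args) {
        String post = "@roaduser \"the fuel price will rise\" go to sell the car";
        Matcher m = MENTION.matcher(post);
        while (m.find()) System.out.println("addressed user: " + m.group(1));
        Matcher c = CITATION.matcher(post);
        while (c.find()) System.out.println("cited text: " + c.group(1));
    }
}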
4.1 ALGORITHM
In this paper, we propose to construct the sentiment, modifier, object, and rule bases. Assume that the sentiment polarity of a word is determined by its morphemes: if the morphemes of a word appear in the positive lexicon more frequently than they do in the negative lexicon, the word is positive; otherwise, the word is negative. To measure the positive and negative tendencies of a morpheme q, we assign positive and negative weights to the morphemes as follows:
In formula (3), the polarity S_ci depends on the morpheme c_i, and the absolute value of S_ci is the degree of tendency of the morpheme c_i. The steps for calculating the sentiment polarity of words are as follows. Scan the positive and negative word lexicons; if the word w appears in the positive word lexicon, S_w = 1; if the word appears in the negative word lexicon, S_w = -1. Otherwise, the sentiment polarity is computed from the morphemes by
where S_w represents the sentiment polarity of the word w, which consists of the morphemes c_1, c_2, ..., c_p. If S_w > 0, the sentiment polarity of the word is positive; otherwise, it is negative. If the value obtained is close to zero, the word can be considered neutral.
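Since the weighting formulas themselves are not reproduced above, the following is only a hedged sketch of the morpheme-based lookup they describe; the toy lexicons, the use of characters as stand-ins for Chinese morphemes, and the simple frequency-ratio weight are all assumptions for illustration.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of word polarity: lexicon lookup first, otherwise average the morpheme scores.
public class MorphemePolarity {
    static final Set<String> POSITIVE = new HashSet<>(Arrays.asList("good", "fine", "smooth"));
    static final Set<String> NEGATIVE = new HashSet<>(Arrays.asList("bad", "jam", "slow"));

    // Score one morpheme by how often it occurs in the positive vs. negative lexicon
    // (an assumed frequency ratio standing in for the paper's weighting formula).
    static double morphemeScore(char c) {
        int pos = 0, neg = 0;
        for (String w : POSITIVE) if (w.indexOf(c) >= 0) pos++;
        for (String w : NEGATIVE) if (w.indexOf(c) >= 0) neg++;
        if (pos + neg == 0) return 0.0;
        return (pos - neg) / (double) (pos + neg);
    }

    static double wordScore(String w) {
        if (POSITIVE.contains(w)) return 1.0;   // word found in the positive lexicon
        if (NEGATIVE.contains(w)) return -1.0;  // word found in the negative lexicon
        double sum = 0.0;
        for (char c : w.toCharArray()) sum += morphemeScore(c);
        return w.isEmpty() ? 0.0 : sum / w.length();
    }

    public static void main(String[] args) {
        System.out.println(wordScore("good")); // 1.0: listed in the positive lexicon
        System.out.println(wordScore("goo"));  // positive: its morphemes occur mostly in positive words
    }
}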
4.2
MODULES:
DATA
COLLECTION TSA:
ITS
TRANSPORT SYSTEMS:
RULE
BASED APPROACH:
TSA
ANALYTICAL TECHNIQUE:
4.3
MODULE DESCRIPTION:
DATA
COLLECTION TSA:
Information regarding traffic on the Web can be classified into three categories. The first category consists of news, expert commentaries, announcements, etc., from traffic websites. The second includes posts from the transport sections of forums; these forums provide a platform through which users can exchange information about social topics, such as traffic congestion and transportation policies. The third includes real-time information about traffic in microblogs, which can be found on social media such as weibo.com. The sentiment polarity of the first category is not easily distinguished, but its content is true and meaningful. The sentiment polarity of the second category is clear, and a discussion on certain events or topics is usually highly valuable for tracking public opinion. The third category, which includes real-time traffic information, may not have a fixed topic but is often tied to a certain place; such information is significant for obtaining real-time information about travelers and for creating a backup sensor network system.
Data from specific websites, such as the first and third categories of information, can be collected through an open application programming interface or a corresponding crawler. However, collecting a data set on a specific topic is more difficult. In most forums, the information-publishing platform is divided into a series of boards containing various categories or topics. In a predefined subject board, the topics are designed for specific events, providing a relatively better framework for readers and commenters. Nevertheless, the categorization is too simple and indistinct for analysis and research for the following reasons: 1) not all topics can be mapped to a single board; 2) the contents of a post are not strictly related to the board's topics; and 3) a forum board often contains more than one topic. Therefore, to precisely collect a topic line and gather the related posts, we first design a special crawler using depth retrieval. Traffic-related terms are adopted to build the key ontological vocabulary used for the built-in search engine of the website.
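A hedged sketch of such a topic-focused crawler with depth retrieval and a small traffic vocabulary is given below; the seed URL, keyword list, link pattern, and depth limit are illustrative assumptions.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of depth-limited, keyword-filtered crawling for traffic-related pages.
public class TrafficCrawler {
    static final String[] KEYWORDS = {"traffic", "congestion", "fuel price", "bus lane"};
    static final Pattern LINK = Pattern.compile("href=\"(http[^\"]+)\"");
    static final Set<String> visited = new HashSet<>();

    static String fetch(String url) throws Exception {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) sb.append(line).append('\n');
        }
        return sb.toString();
    }

    static boolean onTopic(String html) {
        String lower = html.toLowerCase();
        for (String k : KEYWORDS) if (lower.contains(k)) return true;
        return false;
    }

    static void crawl(String url, int depth) {
        if (depth < 0 || !visited.add(url)) return;
        try {
            String html = fetch(url);
            if (onTopic(html)) System.out.println("collected: " + url);
            Matcher m = LINK.matcher(html);
            while (m.find()) crawl(m.group(1), depth - 1); // follow links one level deeper
        } catch (Exception e) {
            // skip unreachable or malformed pages
        }
    }

    public static void main(String[] args) {
        crawl("http://example.com/", 1); // hypothetical seed page and depth limit
    }
}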
ITS
TRANSPORT SYSTEMS:
The advances in cloud computing and the Internet of Things (IoT) have provided a promising opportunity to further address increasing transportation issues, such as heavy traffic, congestion, and vehicle safety. In the past few years, researchers have proposed a few models that use cloud computing for implementing intelligent transportation systems (ITSs). For example, a new vehicular cloud architecture called ITS-Cloud was proposed to improve vehicle-to-vehicle communication and road safety. A cloud-based urban traffic control system was proposed to optimize traffic control; built on a service-oriented architecture (SOA), this system uses a number of software services (SaaS), such as intersection control services, area management services, cloud service discovery services, and sensor services, to perform different tasks.
These services also interact with each other to
exchange information and provide a solid basis for building a collaborative
traffic control and processing system in a distributed cloud environment. As an
emerging technology caused by rapid advances in modern wireless telecommunication,
IoT has received a lot of attention and is expected to bring benefits to
numerous application areas including health care, manufacturing, and
transportation. Currently, the use of IoT in transportation is still in its
early stage and most research on ITSs has not leveraged the IoT technology as a
solution or an enabling infrastructure.
We propose to use both cloud computing and IoT as an enabling infrastructure for developing a vehicular data cloud platform where transportation-related information, such as traffic control and management, car location tracking and monitoring, road conditions, car warranty, and maintenance information, can be intelligently connected and made available to drivers, automakers, parts manufacturers, vehicle quality controllers, safety authorities, and regional transportation divisions. An experiment using data mining models to analyze vehicular data clouds in the IoT environment was also conducted to demonstrate the feasibility of a vehicular data mining service.
RULE
BASED APPROACH:
The rule-based approach needs to check, e.g., whether a noun that could represent the sentiment of the text exists. As emphasized in previous studies, the data set contains several subjective texts that cannot be easily analyzed by rules. The most typical phenomenon is the ironic sentence. For instance, in posts regarding fuel prices, the thread title was "the fuel price will rise," to which one user replied, "go to sell the car." Such a reply apparently carries an ironic tone; thus, all annotators manually labeled the reply as "negative." However, given that the computer cannot detect any word expressing a negative sentiment in the given text, such methods cannot recognize the sentiment polarity. Therefore, numerous problems remain unsolved. Because of the limitations of existing lexicons, an improved lexicon should be developed, which requires long-term and arduous effort. We propose constructing ITSs under the architecture of artificial, computational, and parallel (ACP) methods, with the TSA system as one of the data sources.
TSA
ANALYTICAL TECHNIQUE:
Text sentiment calculation can be categorized into three levels, namely, the word, sentence, and document levels. The calculation of the sentiment polarity of words is a basic step in the construction of the sentiment word base. In practice, we consider words and phrases as another form of sentence. Therefore, text processing includes two main parts: the polarity calculation of sentence-level and document-level text. Fig. 3 shows the overall process involved in the proposed approach. The method includes two major steps, i.e., sentence sentiment analysis and document sentiment aggregation. Considering the subtlety of Chinese expression, we first decompose a document into its constituent sentences and determine the sentiment polarity of each sentence. In contrast to early document-level analytical approaches, we regard sentences as the atomic units for semantic analysis. The polarity scores of all the sentences are subsequently synthesized to compute the overall polarity of the entire document. The sentiment polarity of a sentence is defined as p_s; p_s is determined by extracting the SND patterns in the text and calculating the sentiment polarity score according to the identified patterns.
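The two-step calculation described above can be sketched as follows; the per-sentence scorer is a stub standing in for the SND-pattern rules, and simple summation is assumed as the aggregation for illustration.

// Sketch of sentence sentiment analysis followed by document sentiment aggregation.
public class DocumentPolarity {
    // Stub sentence scorer; in the real system this would apply the SND patterns and rules.
    static double sentenceScore(String sentence) {
        double s = 0.0;
        if (sentence.contains("smooth")) s += 1.0;
        if (sentence.contains("jam")) s -= 1.0;
        return s;
    }

    // Document polarity as the sum of sentence polarities (an assumed aggregation).
    static double documentScore(String document) {
        double total = 0.0;
        for (String sentence : document.split("[.!?]")) {
            total += sentenceScore(sentence.trim());
        }
        return total;
    }

    public static void main(String[] args) {
        String doc = "The ring road is smooth today. But there is a jam near the station.";
        System.out.println(documentScore(doc)); // 0.0: one positive and one negative sentence
    }
}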
CHAPTER 5
5.0
SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three
key considerations involved in the feasibility analysis are
- ECONOMICAL
FEASIBILITY
- TECHNICAL
FEASIBILITY
- SOCIAL
FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, and the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources; otherwise, high demands would be placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study checks the level of acceptance of the system by the user. It includes the process of training the user to use the system efficiently. The user must not feel threatened by the system but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods employed to educate the user about the system and to make the user familiar with it. The user's level of confidence must be raised so that the user is also able to offer constructive criticism, which is welcomed, as the user is the final user of the system.
5.2 SYSTEM TESTING:
Testing is the process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes the logical assumption that if all parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later.
This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably grow into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best program is worthless if it does not produce the correct outputs.
5.2.1 UNIT TESTING:
Description | Expected result
Test for application window properties. | All the properties of the windows are to be properly aligned and displayed.
Test for mouse operations. | All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.
A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and test data correctly and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logic. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.
5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.
Description | Expected result
Test for all modules. | All peers should communicate in the group.
Test for the various peers in a distributed network framework, as it displays all users available in the group. | The result after execution should give the accurate result.
5.2.3 NON-FUNCTIONAL TESTING:
Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:
- Load
testing
- Performance
testing
- Usability
testing
- Reliability
testing
- Security
testing
5.2.4 LOAD TESTING:
An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under test with real usage by having actual telephone users connected to it. They will generate test input data for the system test.
Description | Expected result
It is necessary to ascertain that the application behaves correctly under load when a 'Server busy' response is received. | Should designate another active node as a server.
5.2.5 PERFORMANCE TESTING:
Performance tests are utilized to determine the broadly defined performance of the software system, such as the execution time associated with various parts of the code, response time, and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.
Description | Expected result
This is required to assure that an application performs adequately, having the capability to handle many peers, delivering its results in the expected time, and using an acceptable level of resources; it is an aspect of operational management. | Should handle large input values and produce an accurate result in an expected time.
5.2.6 RELIABILITY TESTING:
Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. This forms a part of the work of the software quality control team.
Description | Expected result
This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application. | In case of failure of the server, an alternate server should take over the job.
5.2.7 SECURITY TESTING:
Security testing evaluates system characteristics that relate to the availability, integrity, and confidentiality of the system data and services. Users and clients should be encouraged to make sure their security needs are clearly known at requirements time, so that security issues can be addressed by the designers and testers.
Description | Expected result
Checking that the user identification is authenticated. | In case of failure, it should not be connected in the framework.
Check whether group keys in a tree are shared by all peers. | The peers should know the group key in the same group.
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.
Description | Expected result
Exercise all logical decisions on their true and false sides. | All the logical decisions must be valid.
Execute all loops at their boundaries and within their operational bounds. | All the loops must be finite.
Exercise internal data structures to ensure their validity. | All the data structures must be valid.
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques; rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.
Description | Expected result
To check for incorrect or missing functions. | All the functions must be valid.
To check for interface errors. | The entire interface must function normally.
To check for errors in data structures or external database access. | The database updation and retrieval must be done.
To check for initialization and termination errors. | All the functions and data structures must be initialized properly and terminated normally.
All of the above system testing strategies are carried out, as the development, documentation, and institutionalization of the proposed goals and related policies are essential.
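As a hedged illustration of the black-box idea (checking inputs against expected outputs without looking at the internal logic), a JUnit 4 style test might look like the following; the JUnit 4 jar is assumed on the classpath, and the tiny function under test is defined in the same file only so the example is self-contained.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Black-box style tests: only inputs and expected outputs are checked.
public class PolarityTest {
    // Function under test, defined here only to keep the example self-contained.
    static int polarity(String text) {
        if (text.contains("jam")) return -1;
        if (text.contains("smooth")) return 1;
        return 0;
    }

    @Test
    public void negativeInputGivesNegativeOutput() {
        assertEquals(-1, polarity("terrible traffic jam"));
    }

    @Test
    public void neutralInputGivesZero() {
        assertEquals(0, polarity("the road is open"));
    }
}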
CHAPTER
6
6.0 SOFTWARE DESCRIPTION:
6.1 JAVA
TECHNOLOGY:
Java technology is both a programming language and a
platform.
The Java Programming Language
The Java programming language is a high-level language that can be characterized by all of the following buzzwords: simple, architecture-neutral, object-oriented, portable, distributed, high-performance, interpreted, multithreaded, robust, dynamic, and secure.
With most
programming languages, you either compile or interpret a program so that you
can run it on your computer. The Java programming language is unusual in that a
program is both compiled and interpreted. With the compiler, first you
translate a program into an intermediate language called Java byte codes
—the platform-independent codes interpreted by the interpreter on the Java
platform. The interpreter parses and runs each Java byte code instruction on
the computer. Compilation happens just once; interpretation occurs each time
the program is executed. The following figure illustrates how this works.
You can think of Java byte codes as the machine code
instructions for the Java Virtual Machine (Java VM). Every Java
interpreter, whether it’s a development tool or a Web browser that can run
applets, is an implementation of the Java VM. Java byte codes help make “write
once, run anywhere” possible. You can compile your program into byte codes on any
platform that has a Java compiler. The byte codes can then be run on any
implementation of the Java VM. That means that as long as a computer has a Java
VM, the same program written in the Java programming language can run on
Windows 2000, a Solaris workstation, or on an iMac.
6.2 THE JAVA PLATFORM:
A platform is the hardware or software
environment in which a program runs. We’ve already mentioned some of the most
popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms
can be described as a combination of the operating system and hardware. The
Java platform differs from most other platforms in that it’s a software-only
platform that runs on top of other hardware-based platforms.
The Java
platform has two components:
- The Java Virtual Machine (Java
VM)
- The Java Application Programming
Interface (Java API)
You’ve already been introduced to the Java VM. It’s
the base for the Java platform and is ported onto various hardware-based
platforms.
The Java API is
a large collection of ready-made software components that provide many useful
capabilities, such as graphical user interface (GUI) widgets. The Java API is
grouped into libraries of related classes and interfaces; these libraries are
known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.
The following
figure depicts a program that’s running on the Java platform. As the figure
shows, the Java API and the virtual machine insulate the program from the
hardware.
Native code is code that, once compiled, runs on a specific hardware platform. As a platform-independent
environment, the Java platform can be a bit slower than native code. However,
smart compilers, well-tuned interpreters, and just-in-time byte code compilers
can bring performance close to that of native code without threatening
portability.
6.3 WHAT CAN JAVA TECHNOLOGY DO?
The most common types of programs written in the
Java programming language are applets and applications. If
you’ve surfed the Web, you’re probably already familiar with applets. An applet
is a program that adheres to certain conventions that allow it to run within a
Java-enabled browser.
However, the
Java programming language is not just for writing cute, entertaining applets
for the Web. The general-purpose, high-level Java programming language is also
a powerful software platform. Using the generous API, you can write many types
of programs.
An application
is a standalone program that runs directly on the Java platform. A special kind
of application known as a server serves and supports clients on a
network. Examples of servers are Web servers, proxy servers, mail servers, and
print servers. Another specialized program is a servlet.
A servlet can
almost be thought of as an applet that runs on the server side. Java Servlets
are a popular choice for building interactive web applications, replacing the
use of CGI scripts. Servlets are similar to applets in that they are runtime
extensions of applications. Instead of working in browsers, though, servlets
run within Java Web servers, configuring or tailoring the server.
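A minimal servlet sketch is shown below; the class name and the response text are illustrative, and a servlet container such as Tomcat is assumed to be configured separately.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal servlet: runs inside a Java web server and answers HTTP GET requests.
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        response.getWriter().println("<h1>Hello from a servlet</h1>");
    }
}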
How does the
API support all these kinds of programs? It does so with packages of software
components that provide a wide range of functionality. Every full
implementation of the Java platform gives you the following features:
- The essentials: Objects, strings,
threads, numbers, input and output, data structures, system properties, date
and time, and so on.
- Applets: The set of
conventions used by applets.
- Networking: URLs, TCP
(Transmission Control Protocol), UDP (User Data gram Protocol) sockets, and IP
(Internet Protocol) addresses.
- Internationalization: Help for
writing programs that can be localized for users worldwide. Programs can
automatically adapt to specific locales and be displayed in the appropriate
language.
- Security: Both low level and
high level, including electronic signatures, public and private key management,
access control, and certificates.
- Software components: Known as
JavaBeansTM, can plug into existing component architectures.
- Object serialization: Allows
lightweight persistence and communication via Remote Method Invocation (RMI).
- Java Database Connectivity (JDBCTM):
Provides uniform access to a wide range of relational databases.
The Java platform also has APIs for 2D and 3D
graphics, accessibility, servers, collaboration, telephony, speech, animation,
and more. The following figure depicts what is included in the Java 2 SDK.
6.4 HOW WILL JAVA TECHNOLOGY CHANGE
MY LIFE?
We can’t promise you fame, fortune, or even a job if you
learn the Java programming language. Still, it is likely to make your programs
better and require less effort than other languages. We believe that Java
technology will help you do the following:
- Get started quickly: Although the
Java programming language is a powerful object-oriented language, it’s easy to
learn, especially for programmers already familiar with C or C++.
- Write less code: Comparisons of
program metrics (class counts, method counts, and so on) suggest that a program
written in the Java programming language can be four times smaller than the
same program in C++.
- Write better code: The Java programming
language encourages good coding practices, and its garbage collection helps you
avoid memory leaks. Its object orientation, its JavaBeans component
architecture, and its wide-ranging, easily extendible API let you reuse other
people’s tested code and introduce fewer bugs.
- Develop programs more quickly:
Your development time may be up to twice as fast as when writing the same program in C++. Why? You write fewer lines of code, and Java is a simpler programming language than C++.
- Avoid platform dependencies with 100% Pure Java:
You can keep your program portable by avoiding the use of libraries written in
other languages. The 100% Pure JavaTM Product Certification Program
has a repository of historical process manuals, white papers, brochures, and
similar materials online.
- Write once, run anywhere:
Because 100% Pure Java programs are compiled into machine-independent byte
codes, they run consistently on any Java platform.
- Distribute software more easily:
You can upgrade applets easily from a central server. Applets take advantage of
the feature of allowing new classes to be loaded “on the fly,” without
recompiling the entire program.
6.5 ODBC:
Microsoft Open
Database Connectivity (ODBC) is a standard programming interface for
application developers and database systems providers. Before ODBC became a de
facto standard for Windows programs to interface with database systems,
programmers had to use proprietary languages for each database they wanted to
connect to. Now, ODBC has made the choice of the database system almost
irrelevant from a coding perspective, which is as it should be. Application
developers have much more important things to worry about than the syntax that
is needed to port their program from one database to another when business
needs suddenly change.
Through the
ODBC Administrator in Control Panel, you can specify the particular database
that is associated with a data source that an ODBC application program is
written to use. Think of an ODBC data source as a door with a name on it. Each
door will lead you to a particular database. For example, the data source named
Sales Figures might be a SQL Server database, whereas the Accounts Payable data
source could refer to an Access database. The physical database referred to by
a data source can reside anywhere on the LAN.
The ODBC system files are not installed on your
system by Windows 95. Rather, they are installed when you setup a separate
database application, such as SQL Server Client or Visual Basic 4.0. When the
ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It
is also possible to administer your ODBC data sources through a stand-alone
program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this
program and each maintains a separate list of ODBC data sources.
From a
programming perspective, the beauty of ODBC is that the application can be
written to use the same set of function calls to interface with any data
source, regardless of the database vendor. The source code of the application
doesn’t change whether it talks to Oracle or SQL Server. We only mention these
two as an example. There are ODBC drivers available for several dozen popular
database systems. Even Excel spreadsheets and plain text files can be turned
into data sources. The operating system uses the Registry information written
by ODBC Administrator to determine which low-level ODBC drivers are needed to
talk to the data source (such as the interface to Oracle or SQL Server). The
loading of the ODBC drivers is transparent to the ODBC application program. In
a client/server environment, the ODBC API even handles many of the network
issues for the application programmer.
The advantages
of this scheme are so numerous that you are probably thinking there must be
some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking
directly to the native database interface. ODBC has had many detractors make
the charge that it is too slow. Microsoft has always claimed that the critical
factor in performance is the quality of the driver software that is used. In
our humble opinion, this is true. The availability of good ODBC drivers has
improved a great deal recently. And anyway, the criticism about performance is
somewhat analogous to those who said that compilers would never match the speed
of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the
opportunity to write cleaner programs, which means you finish sooner.
Meanwhile, computers get faster every year.
6.6 JDBC:
In an effort
to set an independent database standard API for Java; Sun Microsystems
developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL
database access mechanism that provides a consistent interface to a variety of
RDBMSs. This consistent interface is achieved through the use of “plug-in”
database connectivity modules, or drivers. If a database vendor wishes
to have JDBC support, he or she must provide the driver for each platform that
the database and Java run on.
To gain a
wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you discovered
earlier in this chapter, ODBC has widespread support on a variety of platforms.
Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much
faster than developing a completely new connectivity solution.
JDBC was
announced in March of 1996. It was released for a 90 day public review that
ended June 8, 1996. Because of user input, the final JDBC v1.0 specification
was released soon after.
The remainder
of this section will cover enough information about JDBC for you to know what
it is about and how to use it effectively. This is by no means a complete
overview of JDBC. That would fill an entire book.
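As a brief, hedged example of the vendor-neutral pattern described above, the following sketch queries a hypothetical data source through the Connection/Statement/ResultSet interfaces; the DSN name "TsaData", the table, and its columns are assumptions, and the JDBC-ODBC bridge driver of JDK 1.7 is assumed to be available.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// JDBC usage sketch: the same calls work regardless of the underlying database vendor.
public class JdbcExample {
    public static void main(String[] args) throws Exception {
        // Load the JDK 1.7 JDBC-ODBC bridge driver (an assumption; removed in later JDKs).
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        try (Connection con = DriverManager.getConnection("jdbc:odbc:TsaData"); // hypothetical DSN
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT topic, polarity FROM posts")) {
            while (rs.next()) {
                System.out.println(rs.getString("topic") + " -> " + rs.getInt("polarity"));
            }
        }
    }
}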
6.7 JDBC Goals:
Few software packages are designed without goals in mind, and JDBC is no exception; its many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.
The goals that
were set for JDBC are important. They will give you some insight as to why
certain classes and functionalities behave the way they do. The eight design
goals for JDBC are as follows:
SQL
Level API
The designers felt that their main goal was to
define a SQL interface for Java. Although not the lowest database interface
level possible, it is at a low enough level for higher-level tools and APIs to
be created. Conversely, it is at a high enough level for application
programmers to use it confidently. Attaining this goal allows for future tool
vendors to “generate” JDBC code and to hide many of JDBC’s complexities from
the end user.
SQL Conformance
SQL syntax varies as you move from database vendor
to database vendor. In an effort to support a wide variety of vendors, JDBC
will allow any query statement to be passed through it to the underlying
database driver. This allows the connectivity module to handle non-standard
functionality in a manner that is suitable for its users.
JDBC must be implementable on top of common database interfaces
The JDBC SQL API must “sit” on top of other common
SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the
use of a software interface. This interface would translate JDBC calls to ODBC
and vice versa.
- Provide a Java interface that is
consistent with the rest of the Java system
Because of Java’s acceptance in the user community
thus far, the designers feel that they should not stray from the current design
of the core Java system.
- Keep it simple
This goal probably appears in all software design goal listings, and JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.
- Use strong, static typing wherever
possible
Strong typing allows for more error checking to be
done at compile time; also, fewer errors appear at runtime.
- Keep the common cases simple
Because more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs, and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
Finally, we decided to proceed with the implementation using Java networking, and for dynamically updating the cache table we use an MS Access database.
Java technology consists of two things: a programming language and a platform. Java is a high-level programming language that is all of the following: simple, architecture-neutral, object-oriented, portable, distributed, high-performance, interpreted, multithreaded, robust, dynamic, and secure.
Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes, the platform-independent code instructions that are passed to and run on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.
6.8 NETWORKING TCP/IP STACK:
The TCP/IP stack is shorter than the OSI one:
TCP is a connection-oriented protocol; UDP (User
Datagram Protocol) is a connectionless protocol.
IP datagrams:
The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.
UDP:
UDP is also connectionless and unreliable. What it
adds to IP is a checksum for the contents of the datagram and port numbers.
These are used to give a client/server model – see later.
TCP:
TCP supplies logic to give a reliable
connection-oriented protocol above IP. It provides a virtual circuit that two
processes can use to communicate.
Internet addresses
In order to use a service, you must be able to find
it. The Internet uses an address scheme for machines so that they can be
located. The address is a 32 bit integer which gives the IP address.
Network address:
Class A uses 8 bits for the network address with 24
bits left over for other addressing. Class B uses 16 bit network addressing.
Class C uses 24 bit network addressing and class D uses all 32.
Subnet address:
Internally, the UNIX network is divided into sub
networks. Building 11 is currently on one sub network and uses 10-bit
addressing, allowing 1024 different hosts.
Host address:
8 bits are finally used for host addresses within our
subnet. This places a limit of 256 machines that can be on the subnet.
Total address:
The 32 bit address is usually written as 4 integers
separated by dots.
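The note above that an address is simply a 32-bit integer written as four dotted integers can be checked with a small sketch; the example address is illustrative.

// Convert a dotted-quad address "a.b.c.d" to its 32-bit integer value and back.
public class IpAddress {
    static long toInt(String dotted) {
        String[] parts = dotted.split("\\.");
        long value = 0;
        for (String p : parts) {
            value = (value << 8) | Integer.parseInt(p); // shift in each 8-bit octet
        }
        return value;
    }

    static String toDotted(long value) {
        return ((value >> 24) & 0xFF) + "." + ((value >> 16) & 0xFF) + "."
                + ((value >> 8) & 0xFF) + "." + (value & 0xFF);
    }

    public static void main(String[] args) {
        long n = toInt("192.168.1.10");
        System.out.println(n + " -> " + toDotted(n)); // 3232235786 -> 192.168.1.10
    }
}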
Port addresses
A service exists on a host, and is identified by its
port. This is a 16 bit number. To send a message to a server, you send it to
the port for that service of the host that it is running on. This is not
location transparency! Certain of these ports are “well known”.
Sockets:
A socket is a data structure maintained by the system
to handle network connections. A socket is created using the call socket. It returns an integer that is like a file
descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.
#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);
Here “family” will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two
processes wishing to communicate over a network create a socket each. These are
similar to two ends of a pipe – but the actual pipe does not yet exist.
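Since the rest of this project is written in Java, the same idea can be sketched with Java sockets rather than the C call above; the host and port below are illustrative (port 7 assumes an echo service is running).

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Connect to a well-known port and exchange one line of text over TCP.
public class TcpClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 7); // hypothetical echo service
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("hello");
            System.out.println("server replied: " + in.readLine());
        }
    }
}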
6.9 JFREE CHART:
JFreeChart is a free 100% Java chart library that
makes it easy for developers to display professional quality charts in their
applications. JFreeChart’s extensive feature set includes:
A consistent and well-documented API, supporting a
wide range of chart types;
A flexible design that is easy to extend, and
targets both server-side and client-side applications;
Support for many output types, including Swing
components, image files (including PNG and JPEG), and vector graphics file
formats (including PDF, EPS and SVG);
JFreeChart is “open source” or, more
specifically, free software. It is distributed
under the terms of the GNU Lesser General Public Licence
(LGPL), which permits use in proprietary applications.
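As a brief, hedged usage example (assuming the JFreeChart 1.0.x jars are on the classpath), the sketch below builds a small dataset and saves a pie chart of sentiment polarity as a PNG file; the category names and counts are illustrative.

import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

// Build a dataset, create a pie chart, and write it to an image file.
public class SentimentChart {
    public static void main(String[] args) throws Exception {
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Positive", 45);   // illustrative counts
        dataset.setValue("Negative", 35);
        dataset.setValue("Neutral", 20);
        JFreeChart chart = ChartFactory.createPieChart(
                "Traffic sentiment", dataset, true, true, false);
        ChartUtilities.saveChartAsPNG(new File("sentiment.png"), chart, 500, 400);
    }
}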
6.9.1. Map Visualizations:
Charts showing values that relate to geographical
areas. Some examples include: (a) population density in each state of the
United States, (b) income per capita for each country in Europe, (c) life
expectancy in each country of the world. The tasks in this project include:
Sourcing freely redistributable vector outlines for the countries of the world,
states/provinces in particular countries (USA in particular, but also other
areas);
Creating an appropriate dataset interface (plus
default implementation), a renderer, and integrating this with the existing
XYPlot class in JFreeChart; Testing, documenting, testing some more,
documenting some more.
6.9.2. Time Series Chart Interactivity
Implement a new (to JFreeChart) feature for
interactive time series charts — to display a separate control that shows a
small version of ALL the time series data, with a sliding “view”
rectangle that allows you to select the subset of the time series data to
display in the main chart.
6.9.3. Dashboards
There is currently a lot of interest in dashboard
displays. Create a flexible dashboard mechanism that supports a subset of
JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series)
that can be delivered easily via both Java Web Start and an applet.
6.9.4. Property Editors
The property editor mechanism in JFreeChart only
handles a small subset of the properties that can be set for charts. Extend (or
reimplement) this mechanism to provide greater end-user control over the
appearance of the charts.
CHAPTER
7
APPENDIX
7.1
SAMPLE SOURCE CODE
7.2
SAMPLE OUTPUT
CHAPTER 8
8.1
CONCLUSION
We have proposed Web-based TSA to analyze traffic problems in a humanized way. To the best of our knowledge, this is the first attempt to apply sentiment analysis in the area of traffic. The study of TSA provides a new perspective for facing traffic problems.
Our work can be summarized in the following five aspects: 1) designing the application architecture of TSA; 2) constructing the related bases for the TSA system; 3) comparing the advantages and disadvantages of rule- and learning-based approaches based on the characteristics of web data; 4) proposing an algorithm for sentiment polarity calculation based on the rule-based approach; and 5) taking into consideration the modifying relationships of sentence patterns and locations in the sentiment polarity calculations.
The task of implementing the TSA system within existing ITSs is also critically important and needs further research. We suggest using the policy evaluation results to support the decision making of managers and viewing the evaluation results related to specific locations as sensor information. The keynote of implementation is jointly accommodating the traveler's best interests and a reasonable workload. Since TSA is still in its infancy, we anticipate that more techniques will be developed for the joint performance of ITSs with the TSA system in the future.