Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as the data owners. Existing solutions, however, inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is required, and consequently do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control therefore remains unresolved. This paper addresses this open issue by, on one hand, defining and enforcing access policies based on data attributes and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and combining techniques of decentralized key-policy Attribute Based Encryption (KP-ABE). Extensive analysis shows that the proposed approach is highly efficient and secure.
1.2 INTRODUCTION
Research in cloud computing is receiving a lot of attention from both the academic and industrial worlds. In cloud computing, users can outsource their computation and storage to servers (also called clouds) using the Internet. This frees users from the hassles of maintaining resources on-site. Clouds can provide several types of services, such as applications (e.g., Google Apps, Microsoft Online), infrastructures (e.g., Amazon's EC2, Eucalyptus, Nimbus), and platforms to help developers write applications (e.g., Amazon's S3, Windows Azure).
Much of the data stored in clouds is highly sensitive, for example, medical records and social network data. Security and privacy are thus very important issues in cloud computing. On the one hand, the user should authenticate itself before initiating any transaction, and on the other hand, it must be ensured that the cloud does not tamper with the data that is outsourced. User privacy is also required, so that the cloud or other users do not know the identity of the user. The cloud can hold the user accountable for the data it outsources, and likewise, the cloud is itself accountable for the services it provides. The validity of the user who stores the data must also be verified. Apart from the technical solutions to ensure security and privacy, there is also a need for law enforcement.
Recently, Wang et al. addressed secure and dependable cloud storage. Cloud servers are prone to Byzantine failure, where a storage server can fail in arbitrary ways. The cloud is also prone to data modification and server colluding attacks. In a server colluding attack, the adversary can compromise storage servers so that it can modify data files as long as they are internally consistent. To provide secure data storage, the data needs to be encrypted. However, the data is often modified, and this dynamic property needs to be taken into account while designing efficient secure storage techniques.
Efficient search on encrypted data is also an important concern in clouds. The clouds should not know the query but should be able to return the records that satisfy the query. This is achieved by means of searchable encryption. The keywords are sent to the cloud encrypted, and the cloud returns the result without knowing the actual keyword for the search. The problem here is that the data records should have keywords associated with them to enable the search. The correct records are returned only when searched with the exact keywords.
Security and privacy protection in clouds are being explored by many researchers. Wang et al. addressed storage security using Reed-Solomon erasure-correcting codes. Authentication of users using public key cryptographic techniques has also been studied. Many homomorphic encryption techniques have been suggested to ensure that the cloud is not able to read the data while performing computations on them. Using homomorphic encryption, the cloud receives ciphertext of the data, performs computations on the ciphertext, and returns the encoded value of the result. The user is able to decode the result, but the cloud does not know what data it has operated on. In such circumstances, it must be possible for the user to verify that the cloud returns correct results. Accountability of clouds is a very challenging task and involves both technical issues and law enforcement. Neither clouds nor users should be able to deny any operations performed or requested. It is important to keep a log of the transactions performed; however, deciding how much information to keep in the log is itself an important concern.
Accountability has been addressed in TrustCloud. Secure provenance has also been studied. Consider the following situation: a law student, Alice, wants to send a series of reports about some malpractices by authorities of University X to all the professors of University X, research chairs of universities in the country, and students belonging to the law departments of all universities in the province. She wants to remain anonymous while publishing all evidence of malpractice. She stores the information in the cloud.
Access control is important in such a case, so that only authorized users can access the data. It is also important to verify that the information comes from a reliable source. The problems of access control, authentication, and privacy protection should be solved simultaneously. We address this problem in its entirety in this paper. Access control in clouds is gaining attention because it is important that only authorized users have access to valid services. A huge amount of information is being stored in the cloud, and much of it is sensitive. Care should be taken to ensure access control of this sensitive information, which can often be related to health, important documents (as in Google Docs or Dropbox), or even personal information (as in social networking).

There are broadly three types of access control: User Based Access Control (UBAC), Role Based Access Control (RBAC), and Attribute Based Access Control (ABAC). In UBAC, the access control list (ACL) contains the list of users who are authorized to access the data. This is not feasible in clouds where there are many users. In RBAC, users are classified based on their individual roles, and data can be accessed by users who have matching roles. The roles are defined by the system. For example, only faculty members and senior secretaries might have access to the data, but not the junior secretaries. ABAC is more extended in scope: users are given attributes, and the data has an attached access policy. Only users with a valid set of attributes, satisfying the access policy, can access the data. For instance, in the above example, certain records might be accessible by faculty members with more than 10 years of research experience or by senior secretaries with more than 8 years of experience. The pros and cons of RBAC and ABAC have been discussed in the literature. There has been some work on ABAC in clouds, and all of it uses a cryptographic primitive known as Attribute Based Encryption (ABE). The eXtensible Access Control Markup Language (XACML) has also been proposed for ABAC in clouds.

An area where access control is widely used is health care. Clouds are being used to store sensitive information about patients to enable access by medical professionals, hospital staff, researchers, and policy makers. It is important to control access to this data so that only authorized users can access it. Using ABE, the records are encrypted under some access policy and stored in the cloud. Users are given sets of attributes and corresponding keys; only when a user has a matching set of attributes can it decrypt the information stored in the cloud. Access control in health care has been studied in the literature. Access control is also gaining importance in online social networking, where users (members) store their personal information, pictures, and videos and share them with selected groups of users or communities they belong to. Access control in online social networking has likewise been studied, and such data are being stored in clouds.
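To make the attribute based access control idea concrete, the following small C# sketch (our own illustration, not code from any cited scheme) checks whether a user's attribute set satisfies the example policy above; in an ABE based system this check is enforced cryptographically by the keys and the ciphertext rather than by explicit application code.

using System;
using System.Collections.Generic;

class AccessPolicyDemo
{
    // Policy from the example above:
    // (Faculty AND more than 10 years of research experience) OR (SeniorSecretary AND more than 8 years).
    static bool SatisfiesPolicy(ISet<string> roles, int yearsOfExperience)
    {
        bool facultyRule = roles.Contains("Faculty") && yearsOfExperience > 10;
        bool secretaryRule = roles.Contains("SeniorSecretary") && yearsOfExperience > 8;
        return facultyRule || secretaryRule;
    }

    static void Main()
    {
        Console.WriteLine(SatisfiesPolicy(new HashSet<string> { "Faculty" }, 12));          // True
        Console.WriteLine(SatisfiesPolicy(new HashSet<string> { "SeniorSecretary" }, 5));   // False
    }
}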
It is very important that only authorized users are given access to that information. A similar situation arises when data is stored in clouds, for example in Dropbox, and shared with certain groups of people. It is not enough just to store the contents securely in the cloud; it might also be necessary to ensure anonymity of the user. For example, a user may wish to store some sensitive information but does not want to be recognized. The user might want to post a comment on an article but does not want his or her identity to be disclosed. However, the user should be able to prove to the other users that he or she is a valid user who stored the information, without revealing the identity. There are cryptographic protocols like ring signatures, mesh signatures, and group signatures that can be used in these situations. Ring signatures are not a feasible option for clouds where there are a large number of users. Group signatures assume the pre-existence of a group, which might not be possible in clouds. Mesh signatures do not ensure whether the message is from a single user or from many users colluding together. For these reasons, a new protocol known as Attribute Based Signature (ABS) has been applied. ABS was proposed by Maji et al. In ABS, users have a claim predicate associated with a message. The claim predicate helps to identify the user as an authorized one without revealing its identity. Other users or the cloud can verify the user and the validity of the message stored. ABS can be combined with ABE to achieve authenticated access control without disclosing the identity of the user to the cloud.
Existing work on access control in clouds is centralized in nature. With a few exceptions, all other schemes use attribute based encryption (ABE). One scheme uses a symmetric key approach and does not support authentication; the other schemes do not support authentication either. Earlier work by Zhao et al. provides privacy preserving authenticated access control in the cloud. However, the authors take a centralized approach, where a single key distribution center (KDC) distributes secret keys and attributes to all users. Unfortunately, a single KDC is not only a single point of failure but is also difficult to maintain because of the large number of users supported in a cloud environment. We therefore emphasize that clouds should take a decentralized approach while distributing secret keys and attributes to users. It is also quite natural for clouds to have many KDCs in different locations in the world. Although Yang et al. proposed a decentralized approach, their technique does not authenticate users who want to remain anonymous while accessing the cloud. In an earlier work, Ruj et al. proposed a distributed access control mechanism in clouds; however, that scheme did not provide user authentication. Another drawback was that a user could create and store a file while other users could only read it; write access was not permitted to users other than the creator.

In the preliminary version of this paper, we extended our previous work with added features that enable authenticating the validity of a message without revealing the identity of the user who has stored the information in the cloud. In this version we also address user revocation, which was not addressed earlier. We use an attribute based signature scheme to achieve authenticity and privacy. Unlike prior schemes, our scheme is resistant to replay attacks, in which a user can replace fresh data with stale data from a previous write, even if it no longer has a valid claim policy. This is an important property because a user whose attributes have been revoked should no longer be able to write to the cloud. We therefore add this extra feature to our scheme and modify it appropriately. Our scheme also allows writing multiple times, which was not permitted in our earlier work.
1.3 LITERATURE SURVEY
PRIVACY PRESERVING ACCESS CONTROL WITH AUTHENTICATION FOR SECURING DATA IN CLOUDS
PUBLICATION: S. Ruj, M. Stojmenovic and A. Nayak, IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, pp. 556–563, 2012.
TOWARD SECURE AND DEPENDABLE STORAGE SERVICES IN CLOUD COMPUTING
PUBLICATION: C. Wang, Q. Wang, K. Ren, N. Cao and W. Lou, IEEE T. Services Computing, vol. 5, no. 2, pp. 220–232, 2012.
FUZZY KEYWORD SEARCH OVER ENCRYPTED DATA IN CLOUD COMPUTING
PUBLICATION: J. Li, Q. Wang, C. Wang, N. Cao, K. Ren, and W. Lou, in IEEE INFOCOM, pp. 441–445, 2010.
CRYPTOGRAPHIC CLOUD STORAGE
PUBLICATION: S. Kamara and K. Lauter, in Financial Cryptography Workshops, ser. Lecture Notes in Computer Science, vol. 6054. Springer, pp. 136–149, 2010.
CHAPTER 2
2.0 SYSTEM ANALYSIS
2.1 EXISTING SYSTEM:
To accomplish secure data transactions in the cloud, a suitable cryptographic method is used. The data owner must encrypt the file and then store it in the cloud. If a third party downloads the file, they can view the record only if they have the key used to decrypt the encrypted file. This can sometimes fail because of advances in technology and because of hackers. To overcome this issue, there are many procedures and techniques for making transactions and storage secure.
2.2 DISADVANTAGES:
2.3 PROPOSED SYSTEM:
KP-ABE is a public key cryptography primitive for one-to-many communication. In KP-ABE, data is associated with attributes, for each of which a public key component is defined. The encryptor associates a set of attributes with the message by encrypting it with the corresponding public key components. Each user is assigned an access structure, usually defined as an access tree over data attributes: internal nodes of the access tree are threshold gates and leaf nodes are associated with attributes. The user's secret key is defined to reflect the access structure, so that the user is able to decrypt a ciphertext if and only if the data attributes satisfy his access structure.
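The following is a minimal C# sketch of such an access tree, assuming a simple structure in which internal nodes are threshold gates (an AND gate when the threshold equals the number of children, an OR gate when it is one) and leaves are attributes. It only illustrates the satisfiability check; in KP-ABE this check is realized cryptographically through the user's secret key components rather than by plain code.

using System;
using System.Collections.Generic;
using System.Linq;

abstract class Node
{
    public abstract bool Satisfied(ISet<string> attributes);
}

class Leaf : Node
{
    public string Attribute;
    public override bool Satisfied(ISet<string> attributes) => attributes.Contains(Attribute);
}

class ThresholdGate : Node
{
    public int Threshold;                               // k-of-n gate: AND when k = n, OR when k = 1
    public List<Node> Children = new List<Node>();
    public override bool Satisfied(ISet<string> attributes)
        => Children.Count(c => c.Satisfied(attributes)) >= Threshold;
}

class KpAbeTreeDemo
{
    static void Main()
    {
        // Access structure: ("Professor" AND "UniversityX") OR "ResearchChair"
        var andGate = new ThresholdGate { Threshold = 2 };
        andGate.Children.Add(new Leaf { Attribute = "Professor" });
        andGate.Children.Add(new Leaf { Attribute = "UniversityX" });

        var policy = new ThresholdGate { Threshold = 1 };
        policy.Children.Add(andGate);
        policy.Children.Add(new Leaf { Attribute = "ResearchChair" });

        var ciphertextAttributes = new HashSet<string> { "Professor", "UniversityX" };
        Console.WriteLine(policy.Satisfied(ciphertextAttributes));   // True: decryption would succeed
    }
}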
2.4 ADVANTAGES:
2.5 HARDWARE REQUIREMENT:
CHAPTER 3
3.0 SYSTEM DESIGN:
ARCHITECTURE DIAGRAM / UML DIAGRAMS / DATA FLOW DIAGRAM:
External entity: external sources or destinations, which may be people, organizations, or other entities.
Data store: where the data referenced by a process is stored and retrieved.
Process: people, procedures, or devices that produce data; the physical component is not identified.
Data flow: data moves in a specific direction from an origin to a destination. The data flow is a “packet” of data.
There are several common modeling rules when creating DFDs:
3.1 DATAFLOW DIAGRAM
UML DIAGRAMS:
3.2 USE CASE DIAGRAM:
3.3 CLASS DIAGRAM:
3.4 SEQUENCE DIAGRAM:
3.5 ACTIVITY DIAGRAM:
CHAPTER 4
4.0 IMPLEMENTATION:
We propose our privacy preserving authenticated access control scheme. According to our scheme, a user can create a file and store it securely in the cloud. The scheme uses the two protocols ABE and ABS, as discussed in Sections 3.4 and 3.5, respectively. We first discuss our scheme in detail and then provide a concrete example to demonstrate how it works. We refer to Fig. 1. There are three users: a creator, a reader, and a writer. Creator Alice receives a token from the trustee, who is assumed to be honest. A trustee can be someone like the federal government who manages social insurance numbers, etc. On presenting her id (like a health/social insurance number), the trustee gives her a token. There are multiple KDCs (here 2), which can be scattered. For example, these can be servers in different parts of the world.
A creator, on presenting the token to one or more KDCs, receives keys for encryption/decryption and signing. In Fig. 1, SKs are secret keys given for decryption and Kx are keys for signing. The message MSG is encrypted under the access policy X. The access policy decides who can access the data stored in the cloud. The creator decides on a claim policy Y to prove her authenticity and signs the message under this claim. The ciphertext C together with the signature is sent to the cloud. The cloud verifies the signature and stores the ciphertext C. When a reader wants to read, the cloud sends C. If the user has attributes matching the access policy, it can decrypt and recover the original message.
Write proceeds in the same way as file creation. Delegating the verification process to the cloud relieves the individual users from time-consuming verification. When a reader wants to read some data stored in the cloud, it tries to decrypt it using the secret keys it receives from the KDCs. If it has enough attributes matching the access policy, it decrypts the information stored in the cloud.
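The following C# sketch illustrates this flow only; all type and method names are hypothetical, and the ABE encryption and ABS verification steps are replaced by string placeholders rather than real cryptographic operations.

using System;
using System.Collections.Generic;

// Hypothetical containers for the quantities named above (token, SK, Kx, C).
record Token(string Value);
record UserKeys(string DecryptionKeySK, string SigningKeyKx);
record SignedCiphertext(string Ciphertext, string Signature, string AccessPolicyX, string ClaimPolicyY);

class Cloud
{
    private readonly List<SignedCiphertext> store = new();

    // The cloud only verifies the ABS signature against the claim policy; it never learns the identity.
    public bool Store(SignedCiphertext c)
    {
        bool signatureValid = c.Signature.StartsWith("ABS(");   // placeholder for real ABS verification
        if (signatureValid) store.Add(c);
        return signatureValid;
    }

    public SignedCiphertext Read(int i) => store[i];            // the reader decrypts locally with its SKs
}

class FlowDemo
{
    static void Main()
    {
        var token = new Token("issued-by-trustee");                              // trustee issues a token
        var keys  = new UserKeys("SK-from-KDCs", "Kx-from-KDCs");                // KDCs issue keys for the token
        var c = new SignedCiphertext(
            Ciphertext:    "ABE(MSG under access policy X)",                     // placeholder encryption
            Signature:     "ABS(C under claim policy Y with " + keys.SigningKeyKx + ")",
            AccessPolicyX: "Professor AND UniversityX",
            ClaimPolicyY:  "Student OR Faculty");

        var cloud = new Cloud();
        Console.WriteLine(cloud.Store(c));           // True: signature verified, ciphertext stored
        Console.WriteLine(cloud.Read(0).Ciphertext); // a reader with matching attributes decrypts this
        Console.WriteLine(token.Value);              // the token itself is only presented to the KDCs
    }
}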
4.1 ALGORITHM:
ATTRIBUTE-BASED ENCRYPTION:
We use ABE with multiple authorities, as proposed in prior work.
4.2 MODULES:
CLOUD USER MODULE:
ATTRIBUTE-BASED SIGNATURES:
ANONYMOUS AUTHENTICATION:
CLOUD USER OPERATIONS:
4.3 MODULE DESCRIPTION:
CLOUD USER MODULE:
User: users, who have data to be stored in the cloud and rely on the cloud for data computation, consist of both individual consumers and organizations.
Cloud Service Provider (CSP): a CSP, who has significant resources and expertise in building and managing distributed cloud storage servers, owns and operates live Cloud Computing systems.
Third Party Auditor (TPA): an optional TPA, who has expertise and capabilities that users may not have, is trusted to assess and expose risk of cloud storage services on behalf of the users upon request.
ATTRIBUTE-BASED SIGNATURES:
Cryptographic protocols like ring signatures, mesh signatures, and group signatures can be used in these situations. Ring signatures are not a feasible option for clouds where there are a large number of users. Group signatures assume the pre-existence of a group, which might not be possible in clouds. Mesh signatures do not ensure whether the message is from a single user or from many users colluding together. For these reasons, a new protocol known as attribute-based signature (ABS) has been applied. ABS was proposed by Maji et al. In ABS, users have a claim predicate associated with a message. The claim predicate helps to identify the user as an authorized one without revealing its identity. Other users or the cloud can verify the user and the validity of the message stored. ABS can be combined with ABE to achieve authenticated access control without disclosing the identity of the user to the cloud.
ANONYMOUS AUTHENTICATION:
In our scheme, a writer whose rights have been revoked cannot create a new signature with a new time stamp and, thus, cannot write back stale information. The writer signs the message and calculates the message signature accordingly.
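A minimal sketch of this replay check is given below (our own illustration, assuming the cloud simply remembers the latest accepted time stamp and rejects anything older); in the actual scheme the time stamp is bound into the attribute based signature, which is only stood in for here by a boolean flag.

using System;

class ReplayGuard
{
    private DateTime lastAccepted = DateTime.MinValue;

    // signatureValid stands in for ABS verification over the message and its time stamp.
    public bool AcceptWrite(string message, DateTime timeStamp, bool signatureValid)
    {
        if (!signatureValid) return false;
        if (timeStamp <= lastAccepted) return false;   // stale time stamp: a replayed earlier write
        lastAccepted = timeStamp;
        return true;
    }

    static void Main()
    {
        var guard = new ReplayGuard();
        Console.WriteLine(guard.AcceptWrite("v1", new DateTime(2014, 1, 1, 10, 0, 0), true)); // True
        Console.WriteLine(guard.AcceptWrite("v2", new DateTime(2014, 1, 1, 11, 0, 0), true)); // True
        Console.WriteLine(guard.AcceptWrite("v1", new DateTime(2014, 1, 1, 10, 0, 0), true)); // False (replay)
    }
}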
CLOUD USER OPERATIONS:
Update Operation
In cloud data storage, the user may sometimes need to modify some data block(s) stored in the cloud; we refer to this operation as data update. In other words, for all the unused tokens, the user needs to exclude every occurrence of the old data block and replace it with the new one.
Delete Operation
Sometimes, after being stored in the cloud, certain data blocks may need to be deleted. The delete operation we are considering is a general one, in which the user replaces the data block with zero or some special reserved data symbol. From this point of view, the delete operation is actually a special case of the data update operation, where the original data blocks can be replaced with zeros or some predetermined special blocks.
Append Operation
In some cases, the user may want to increase the size of his stored data by adding blocks at the end of the data file, which we refer to as data append. We anticipate that the most frequent append operation in cloud data storage is bulk append, in which the user needs to upload a large number of blocks (not a single block) at one time.
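The following C# sketch (our own illustration, with an assumed block layout) summarizes the three block-level operations described above, modeling delete as the special case of update that writes a reserved zero block.

using System;
using System.Collections.Generic;

class BlockFile
{
    private readonly List<string> blocks = new List<string>();
    private const string ZeroBlock = "<zero>";              // reserved symbol for deleted blocks

    public void Append(IEnumerable<string> newBlocks)        // bulk append at the end of the file
    {
        blocks.AddRange(newBlocks);
    }

    public void Update(int index, string newBlock)           // replace an existing block
    {
        blocks[index] = newBlock;
    }

    public void Delete(int index)                            // special case of update
    {
        Update(index, ZeroBlock);
    }

    static void Main()
    {
        var f = new BlockFile();
        f.Append(new[] { "b0", "b1", "b2" });                // data append (bulk)
        f.Update(1, "b1-new");                               // data update
        f.Delete(2);                                         // data delete
        Console.WriteLine(string.Join(" ", f.blocks));       // b0 b1-new <zero>
    }
}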
CHAPTER 5
5.0 SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are economical feasibility, technical feasibility, and social feasibility.
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources; otherwise, high demands would be placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
5.1.3 SOCIAL FEASIBILITY:
The aspect of this study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods employed to educate the user about the system and to make the user familiar with it. The user's level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.
5.2 SYSTEM TESTING:
Testing is the process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes the logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear until many months later. This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best program is worthless if it does not produce the correct outputs.
5.2.1 UNIT TESTING:
A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logic. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.
UNIT TESTING:
Description | Expected result |
Test for application window properties. | All the properties of the windows are to be properly aligned and displayed. |
Test for mouse operations. | All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions. |
5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.
FUNCTIONAL TESTING:
Description | Expected result |
Test for all modules. | All peers should communicate in the group. |
Test for various peers in a distributed network framework as it displays all users available in the group. | The result after execution should give the accurate result. |
5.2.3 NON-FUNCTIONAL TESTING:
The non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:
5.2.4 LOAD TESTING:
An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under test with real usage by having actual telephone users connected to it. They will generate test input data for the system test.
LOAD TESTING:
Description | Expected result |
It is necessary to ascertain that the application behaves correctly under loads when ‘Server busy’ response is received. | Should designate another active node as a Server. |
5.2.5 PERFORMANCE TESTING:
Performance tests are used to determine the broadly defined performance of the software system, such as the execution time associated with various parts of the code, response time, and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.
PERFORMANCE TESTING:
Description | Expected result |
This is required to assure that an application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management. | Should handle large input values, and produce accurate results in the expected time. |
5.2.6 RELIABILITY TESTING:
Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and this is what is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. This forms a part of the work of the software quality control team.
RELIABILITY TESTING:
Description | Expected result |
This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application. | In case of failure of the server, an alternate server should take over the job. |
5.2.7 SECURITY TESTING:
Security testing evaluates system characteristics that relate to the availability, integrity, and confidentiality of the system data and services. Users/clients should be encouraged to make sure their security needs are very clearly known at requirements time, so that the security issues can be addressed by the designers and testers.
SECURITY TESTING:
Description | Expected result |
Checking that the user identification is authenticated. | In case of failure, it should not be connected to the framework. |
Check whether group keys in a tree are shared by all peers. | The peers should know the group key of the same group. |
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.
WHITE BOX TESTING:
Description | Expected result |
Exercise all logical decisions on their true and false sides. | All the logical decisions must be valid. |
Execute all loops at their boundaries and within their operational bounds. | All the loops must be finite. |
Exercise internal data structures to ensure their validity. | All the data structures must be valid. |
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors by focusing on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or the code. The contents of the box are hidden, and the stimulated software should produce the desired results.
BLACK BOX TESTING:
Description | Expected result |
To check for incorrect or missing functions. | All the functions must be valid. |
To check for interface errors. | The entire interface must function normally. |
To check for errors in data structures or external database access. | The database update and retrieval must be done correctly. |
To check for initialization and termination errors. | All the functions and data structures must be initialized properly and terminated normally. |
All the above system testing strategies are carried out, as the development, documentation, and institutionalization of the proposed goals and related policies are essential.
CHAPTER 7
7.0 SOFTWARE SPECIFICATION:
7.1 FEATURES OF .NET:
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There's no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic, and JScript.
The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.
“.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio .NET and Windows .NET Server, for instance) and services (like Passport, .NET My Services, and so on).
7.2 THE .NET FRAMEWORK
The .NET Framework has two main parts:
1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.
The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. The most important features are
Managed Code
The code that targets .NET and contains certain extra information ("metadata") to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data
With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by default, such as C#, Visual Basic.NET, and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications: data that doesn't get garbage collected but instead is looked after by unmanaged code.
Common Type System
The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.
Common Language Specification
The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.
7.3 THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7,000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
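A small example of this point: a value type can be converted to an object type (boxed) and back again (unboxed) when necessary.

using System;

class BoxingDemo
{
    static void Main()
    {
        int count = 42;            // value type (System.Int32), which still derives from System.Object
        object boxed = count;      // boxing: the value is wrapped in an object on the heap
        int unboxed = (int)boxed;  // unboxing: the value is copied back out
        Console.WriteLine("{0} {1} {2}", count, boxed, unboxed);
    }
}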
The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.
7.4 LANGUAGES SUPPORTED BY .NET
The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.
Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic also now supports structured exception handling, custom attributes and also supports multi-threading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.
C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.
Active State has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for Active State’s Perl Dev Kit.
Other languages for which .NET compilers are available include
ASP.NET (XML Web Services) / Windows Forms
Base Class Libraries
Common Language Runtime
Operating System

Fig. 1: The .NET Framework
C#.NET is also compliant with CLS (Common Language Specification) and supports structured exception handling. CLS is set of rules and constructs that are supported by the CLR (Common Language Runtime). CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.
C#.NET is a CLS-compliant language. Any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of CLS ensures complete interoperability among applications, regardless of the languages used to create them.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET, this is done in the Finalize method (the destructor). The Finalize method is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the Finalize method can be called only from the class it belongs to or from derived classes.
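A minimal C# illustration of a constructor together with a destructor (the ~ClassName syntax, invoked automatically by the runtime when the object is destroyed) is shown below; the class and file names are only examples.

using System;

class ReportFile
{
    private readonly string name;

    public ReportFile(string name)          // constructor: initialize the object
    {
        this.name = name;
        Console.WriteLine("Opened " + name);
    }

    ~ReportFile()                           // destructor (Finalize): release resources on destruction
    {
        Console.WriteLine("Released " + name);
    }

    static void Main()
    {
        new ReportFile("records.dat");
        // The destructor runs when the garbage collector destroys the unreachable object.
    }
}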
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.
OVERLOADING
Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
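A small illustrative example of overloading: the two Store procedures below (hypothetical names) share one name but take different argument lists, and the compiler selects the matching one.

using System;

class Storage
{
    public static void Store(string record)                      // store a single record
    {
        Console.WriteLine("Stored one record: " + record);
    }

    public static void Store(string[] records, bool encrypted)   // same name, different arguments
    {
        Console.WriteLine("Stored " + records.Length + " records, encrypted = " + encrypted);
    }

    static void Main()
    {
        Store("patient-42");
        Store(new[] { "r1", "r2" }, true);
    }
}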
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously. We can use multithreading to decrease the time taken by an application to respond to user interaction.
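A small illustration of this: the background thread below performs a simulated long-running task while the main thread remains free to respond.

using System;
using System.Threading;

class ThreadDemo
{
    static void Upload()
    {
        Thread.Sleep(500);                     // simulate a slow upload to the cloud
        Console.WriteLine("Upload finished");
    }

    static void Main()
    {
        var worker = new Thread(Upload);
        worker.Start();                        // runs in the background
        Console.WriteLine("Application stays responsive while uploading...");
        worker.Join();                         // wait for the background work to finish
    }
}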
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use Try...Catch...Finally statements to create exception handlers. Using Try...Catch...Finally statements, we can create robust and effective exception handlers to improve the performance of our application.
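A short illustrative example of a Try...Catch...Finally handler: the out-of-range access is detected at runtime, handled in the Catch block, and the Finally block always runs for cleanup.

using System;

class ExceptionDemo
{
    static void Main()
    {
        try
        {
            int[] blocks = new int[2];
            Console.WriteLine(blocks[5]);                  // out-of-range access throws at runtime
        }
        catch (IndexOutOfRangeException ex)
        {
            Console.WriteLine("Handled: " + ex.Message);   // recover instead of crashing
        }
        finally
        {
            Console.WriteLine("Cleanup always runs here"); // e.g., close files or connections
        }
    }
}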
7.5 THE .NET FRAMEWORK
The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.
OBJECTIVES OF .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.
3. To eliminate performance problems.
There are different types of applications, such as Windows-based applications and Web-based applications.
7.6 FEATURES OF SQL-SERVER
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services.
A SQL Server database consists of six types of objects. They are:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
7.7 TABLE:
A database is a collection of data about a specific topic.
VIEWS OF TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view. We can specify what kind of data the table will hold.
Datasheet View
To add, edit, or analyze the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that has to be asked of the data. Access gathers data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
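In the .NET and SQL Server setting described in this chapter, such a query would typically be issued from C# through ADO.NET, as in the illustrative example below; the connection string, table, and column names are hypothetical.

using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        string connectionString = "Server=localhost;Database=CloudStore;Integrated Security=true";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT FileName FROM StoredFiles WHERE Owner = @owner", connection))
        {
            command.Parameters.AddWithValue("@owner", "Alice");
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));   // each row that answers the query
            }
        }
    }
}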
CHAPTER 7
APPENDIX
7.1 SAMPLE SOURCE CODE
7.2 SAMPLE OUTPUT
CHAPTER 8
8.0 CONCLUSION
We have presented a decentralized access control technique with anonymous authentication, which provides user revocation and prevents replay attacks. The cloud does not know the identity of the user who stores information, but only verifies the user's credentials. Key distribution is done in a decentralized way. One limitation is that the cloud knows the access policy for each record stored in the cloud. In the future, we would like to hide the attributes and access policy of a user.