A Food Wastage Reduction Mobile Application

According to [5], food waste is a significant issue around the world. A survey predicts that more than 58 percent of the food produced for consumption is wasted every day, while more than 60 percent of people in third-world countries suffer from malnutrition and lack proper food to live on. Therefore, technologically developed countries are placing greater emphasis on this issue, so that less food is wasted and surplus food can be distributed to people in need.

According to [6], in the modern era, where development is driven by artificial intelligence, people are increasingly dependent on smartphones. Various applications have been developed to control the huge wastage of food, and they provide the opportunity to send extra food to the people who need it. The most useful food waste applications for Android and Apple devices are discussed below:

A. Food waste application from Singapore (11th Hour). Tan Jun Yuan, a food stall hawker from Singapore, felt very bad when he noticed how much food people waste every year. He saw many vendors left with surplus food at the end of each day; he himself served 10 to 15 bowls of pork ribs, along with other dishes, to customers each day, and more than 35 percent of the food he made every day was left over. Therefore, he created the application named 11th Hour. Through this application, leftover and unused food is offered at half its original price before the restaurants close. After its launch, the application was downloaded almost 20,000 times [7].

B. Food waste reduction application from the Netherlands (NoFoodWasted). August de Vocht, a citizen of the Netherlands, developed this application to reduce the amount of food waste. The application collaborates with supermarkets so that people can be made aware of foods that will expire very soon. According to [8], it lets users upload grocery items that will expire shortly, so that people who are in need of food can buy them at a reduced price and use them. This helps stop the wastage of excess food. More than 20,000 people have found this application useful, and it has reduced the amount of food wastage in the Netherlands.

C. An application to control food waste in the UK and Ireland (FoodCloud). This application has been recognized as one of the most useful food wastage applications in the United Kingdom and Ireland. It lets supermarkets give notice of their surplus food so that charitable societies can collect it, reducing the chances of food wastage. The application works as an intermediary: it lists the types of food available and arranges the pick-up for the charities. It also collects and stores food so that charitable societies can retrieve it according to their requirements. According to [9], more than 1,200 business hubs and 3,000 charitable societies work with this application to provide excess food to homeless people.

D. Food wastage reduction application from Africa (Cheetah). Researchers from the University of Twente developed this application to reduce the amount of food wasted in Africa. Various fruits and vegetables become unfit for consumption due to poor road conditions and limited refrigeration in Africa. The application was created to gather such food items before they rot and distribute them to malnourished people in need. The Dutch Ministry of Foreign Affairs supported the researchers in developing the application. It is used mostly by farmers and food transporters, and it has also helped them reduce bribery during food transport in Africa. The public version of the application is expected to be released by May of next year [10].

E. Indian food wastage reduction application (No Food Waste). No Food Waste is an application from India that allows restaurants, food stalls and party organizers to report their excess leftover food so that needy people can collect and use it. The application collects this food and distributes it among homeless people, slum dwellers, orphanages and nursing homes. According to [11], users can also notify the service of hunger points, and the food will be distributed there. The only requirement is that food is accepted only if it was prepared no more than two hours earlier. These applications have changed the use of artificial intelligence by bringing food to people in need, and they are considered one of the best uses of software development. However, food wastage remains a bad habit. According to [12], people need to be more careful while preparing or ordering food because many people around the world do not get enough to eat. Food wastage has decreased considerably due to the use of these applications, but people need to be more sensitive and careful so that a better world can be created in which no food is wasted.

REAL-TIME VEHICLE DETECTION AND TRACKING USING DEEP NEURAL NETWORKS

Dynamic vehicle detection and tracking can provide essential data for road planning and traffic management. This paper proposes a method for real-time vehicle detection and tracking using deep neural networks and presents a complete network architecture. The model produces vehicle candidates, vehicle probabilities and their coordinates in real time. It is trained on the PASCAL VOC 2007 and 2012 image sets and tested on the ImageNet dataset. By careful design, the detection speed of the model is fast enough to process streaming video. Experimental results show that the proposed model is a real-time, accurate vehicle detector, making it well suited for computer vision applications.

Introduction

In today's society, more and more vehicles take to the highways every year, which creates pressure to monitor and control traffic more efficiently. Real-time vehicle detection and tracking is essential for intelligent road routing, road traffic control, road planning and so on. It is therefore important to know the road traffic density in real time, especially in megacities, for signal control and effective traffic management. For a long time, several approaches [1, 2] have been proposed in the literature to detect various moving vehicles; nevertheless, the goal of real-time, fully automatic vehicle detection is far from being attained, as it requires improvements in detection and tracking to achieve accurate prediction at faster processing speeds.

Zheng et al. use brake-light detection through a color segmentation method to generate vehicle candidates and verify them through a rule-based clustering approach. A tracking-by-detection scheme based on Harris-SIFT feature matching is then used to learn the template of the detected vehicle online, and to localize and track the corresponding vehicle in live video [2]. This is a good way to extract vehicle areas; however, it needs a relatively ideal background. Wei Wang et al. presented a method of multi-vehicle tracking and counting using a fisheye camera, based on simple feature-point tracking, grouping and association. They integrate low-level feature-point-based tracking with higher-level "identity appearance" and motion-based real-time association [1]. However, its average processing time is around 750 ms, which is not fast enough for real-time processing.

Systems based on Convolutional Neural Networks (CNNs) can solve many contemporary problems in vehicle detection and tracking. CNNs currently outperform other techniques by a large margin in computer vision problems such as classification [3] and detection [4]. The CNN training procedure automatically learns the weights of the filters, so that they can extract visual concepts from raw image content. Using the knowledge obtained from a training set containing labelled vehicle and non-vehicle examples, vehicles can be identified in given images. In general, Convolutional Neural Networks show more promising results.

In this paper, we propose a method for real-time vehicle detection and tracking using Convolutional Neural Networks. We present a network architecture that creates multiple vehicle candidates and predicts vehicle probabilities in a single evaluation, using features from the entire image to create the candidates. First, convolutional layers extract features from the raw image. Second, four kinds of inception modules are used. Third, a Spatial Pyramid Pooling (SPP) layer is added between the convolutional layers and the fully connected layers, which maps images of any size into a fixed-length representation. Finally, the fully connected layers predict the probability and coordinates of vehicles.
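
As an illustration of this kind of pipeline, the following PyTorch sketch wires together a small convolutional feature extractor, an inception-style block, a spatial-pyramid-pooling stage and a fully connected head that outputs a vehicle probability and box coordinates. The layer sizes, block counts and names are illustrative placeholders, not the exact architecture proposed in the paper.

```python
# Illustrative sketch of a detector like the one described:
# conv feature extractor -> inception-style block -> spatial pyramid pooling
# -> fully connected layers predicting [p_vehicle, x, y, w, h].
# Layer sizes are placeholders, not the paper's exact configuration.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions, concatenated channel-wise."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 32, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, 32, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, 32, kernel_size=5, padding=2)
    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

class SpatialPyramidPooling(nn.Module):
    """Pools the feature map at several grid sizes so any input resolution
    yields a fixed-length vector for the fully connected layers."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList([nn.AdaptiveMaxPool2d(n) for n in levels])
    def forward(self, x):
        return torch.cat([p(x).flatten(1) for p in self.pools], dim=1)

class VehicleDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            InceptionBlock(64), nn.ReLU(),
        )
        self.spp = SpatialPyramidPooling()
        spp_dim = 96 * (1 + 4 + 16)          # 96 channels x pyramid cells
        self.head = nn.Sequential(
            nn.Linear(spp_dim, 256), nn.ReLU(),
            nn.Linear(256, 5),               # [p_vehicle, x, y, w, h]
        )
    def forward(self, x):
        out = self.head(self.spp(self.features(x)))
        prob = torch.sigmoid(out[:, :1])     # vehicle probability
        box = out[:, 1:]                     # candidate coordinates
        return prob, box

# Inputs of different sizes map to the same output shape thanks to SPP.
probs, boxes = VehicleDetector()(torch.randn(2, 3, 240, 320))
```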

Comparison of Periodic Behavior of Consumer Online Searches for Restaurants Based on Search Engine

Increased knowledge about the online search behavior of restaurant consumers is valuable to restaurant management and marketing professionals. However, people in different countries may demonstrate distinctive online search behaviors, and there has been a lack of cross-cultural research on the online search behavior of restaurant consumers. In this paper, the periodic nature of the online search behavior of restaurant consumers from the U.S. and China is analyzed and compared using the Fourier transform and Parseval's theorem. The search interest records from Google and Baidu, respectively, are used. The results reveal that the online search behavior of restaurant consumers in the U.S. is strongly governed by weekly cycles but less dependent on annual cycles, whereas the analogous consumer behavior in China exhibits less dependence on weekly cycles. The theoretical and practical implications of the research are discussed.

As culture so clearly shapes consumer behavior [36], hospitality and tourism researchers have sought to identify and understand cultural differences to provide useful information to industry practitioners [39]. For example, Money and Crotts [38] indicated that consumers from different cultures tended to seek out travel and planning information from different sources; Baek et al. [60] reported that consumers from different cultures used distinct restaurant selection criteria. Consumers have made extensive use of search engines to seek out commercial information. This makes internet marketing very important [2], [28], [42], [43], [47], [53], [54], [57] and leads sellers and marketers to compete for higher search engine rankings and to increase their bids for internet advertisement space. An optimal marketing strategy should consider consumer search behavior [17], [29], [61]. It is thus very meaningful to study and compare the patterns of online information search behavior among consumers from different countries. However, there has been a lack of research with such a focus.

In this study, the periodic nature of consumer online search behavior, as it applies to restaurant searches in the United States (U.S.) and China, is analyzed and compared. The search interest records from Google and Baidu, the most popular search engines in the U.S. and China, respectively, are used to generate the material for analysis. Parseval's theorem is used to quantify the weight of the periodic components in the whole search dynamic system obtained by the discrete Fourier transform (DFT). The results indicate that periodic patterns exist in the behavior of consumer online searches for restaurants in the U.S. and in China, but consumers from the U.S. and China exhibit distinctive periodic patterns of behavior for online restaurant searches. The cyclic patterns of consumer behavior for online restaurant searches identified in the two study countries are useful to international restaurant management personnel and online marketing professionals. Following this brief introduction, the rest of the study is presented in the following order: literature review, data description, method, results, discussion, and conclusion.
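
As a rough illustration of the analysis described above, the sketch below applies the discrete Fourier transform to a daily search-interest series and uses Parseval's theorem to express how much of the signal energy sits at a chosen cycle (weekly or annual). The synthetic series and the single-bin weighting are simplifying assumptions, not the paper's data or exact procedure.

```python
# Sketch: quantify how much of a daily search-interest series' energy sits
# at a given cycle (e.g. weekly), using the DFT and Parseval's theorem.
# The synthetic series and the single-bin weighting are simplifications.
import numpy as np

def cycle_weight(series, period_days):
    """Fraction of the mean-removed signal energy at the DFT bin closest
    to the frequency 1/period_days (Parseval: energy = sum of |X_k|^2)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                       # drop the DC component
    power = np.abs(np.fft.rfft(x)) ** 2
    k = int(round(len(x) / period_days))   # bin nearest to 1/period
    return power[k] / power[1:].sum()

# 104 weeks of synthetic daily data: a weekly rhythm plus noise.
rng = np.random.default_rng(0)
days = np.arange(728)
interest = 50 + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 3, len(days))

print(f"weekly cycle weight: {cycle_weight(interest, 7):.2f}")
print(f"annual cycle weight: {cycle_weight(interest, 365):.2f}")
```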

In this research, the cyclic consumer behavior of online restaurant searches in the U.S. and China was analyzed. Parseval's theorem was used to quantify the weight of cyclic patterns in the whole searching dynamics. The study showed that consumers in both the U.S. and China follow cyclic patterns for online restaurant searches with the same periods, but Americans are more likely to arrange dining activities on a weekly basis, while the Chinese do not arrange this activity as regularly as Americans. This finding agrees with Hofstede's original finding of an uncertainty avoidance difference between the two countries. This work is expected to be useful for international restaurant management personnel and online marketing professionals. Future work aimed at analyzing the cyclic patterns of online restaurant search behavior in other countries is needed.

IOIO hardware features to Android Apps

While each of the products mentioned in the previous section is interesting and simplifies either electronics or programming, all of them tend to be very specific: they focus either on HW only or on SW only. Even when a HW product comes with its own development environment, it still requires advanced programming knowledge. The objective of this paper is to link the features of existing HW and SW products and combine them into a better platform that offers the user the possibility to create hardware prototypes and program them quickly without needing technical knowledge, especially in programming. An electrical board named IOIO is used as the hardware part, while the App Inventor platform is used as the software platform.

IOIO is an electrical board that adds hardware I/O features to a PC or Android application [14]. The board can be controlled from a distance using Bluetooth, so it is possible to program and control it via a smartphone mobile app. The IOIO board has a Java library and is compatible with Android OS; therefore, it is possible to create Android apps that control the board's different pins. This leads to the next point: programming Android is complex, so it would be helpful if this task were easier. That is the role of the App Inventor platform, an online platform that allows the user to create Android apps simply by interconnecting graphical blocks. The concept is based on the Blockly interface [15]. There is no Java code to write, no need to consider Android constraints, and the design and script interfaces are simple and intuitive. With this platform, even kids are capable of creating interesting projects. The existing features make it easier to interface with complex components, such as reading the smartphone's sensors, camera, microphone, etc. The downside of App Inventor, however, is that it cannot control the IOIO board, because the dedicated SW component does not exist in the platform. Therefore, the contribution of this paper is to bring support for this electrical board to App Inventor. This way, it will be possible to easily create Android apps that control the IOIO board without technical knowledge, making it possible for kids and beginners to get started in the electronics and programming fields. The targeted audience is quite large, especially since Android's market share has reached 86.1% [16].

The final objective of IoT is to leverage abstraction as much as possible. With electronics and programming considered the main disciplines of IoT, several efforts have been made to make them accessible to everyone. This paper presents an initiative that facilitates the process of creating Android apps capable of controlling electrical systems. IOIO and App Inventor already exist but do not coexist; the idea is therefore to develop an Android service that links the two existing solutions and gets the best of both worlds. This adds an abstraction level for programming the IOIO board, allowing its control via App Inventor's graphical blocks and thus simplifying the programming side. On the HW side, dealing with electronics can be made simpler by using existing HW modules, so it is actually possible to start exploring the embedded systems field with the proposed tools. The developed Android IOIO service so far supports the DigitalWrite command. The service is still under development, and other commands such as digital and analog reads and PWM will be added, making it possible to interface additional electrical components and read data from sensors connected to the IOIO board.

INTERFACING THE ROBOT SYSTEM WITH COMMUNICATION MODULE BLUETOOTH

Earlier methods included: first, an onboard control system, which requires a human to control the robot/machine using onboard commands or buttons; second, a wired control system, which requires handling bulky wires and limits the control range; and third, a dedicated wireless remote system, which requires different remotes for different machines, is prone to disturbance, and can be hacked easily because the signal transmission is not protected, so anyone with the right frequency can send commands.

Combining an Android application with robotics is an exciting area of human-computer interfacing that is used to help humans control mechanical systems via software. Here, we designed an Android application to control robots/machines wirelessly using a secured connection over Bluetooth.

The basic segments of the proposed design are: interfacing the robot system with the Bluetooth communication module (embedded hardware model), and connecting the Bluetooth module with the Android application (software model). The input from the Android application is processed using command-understanding algorithms and parsed for the controller that drives the robot. A communication protocol effectively translates the selected command into maneuverable tasks for any control board.
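
A minimal sketch of the command-translation step is given below: a single-character command received over Bluetooth is mapped to input states for an L298N-style motor driver. The command letters and the pin mapping are hypothetical, and the real translation would run in the Arduino firmware rather than in Python.

```python
# Illustrative sketch of the command-translation step: one Bluetooth
# command character is mapped to motor states for an L298N-style dual
# motor driver. Command letters and pin assignments are hypothetical.
MOTOR_STATES = {
    # command: (left_forward, left_backward, right_forward, right_backward)
    "F": (1, 0, 1, 0),   # forward
    "B": (0, 1, 0, 1),   # backward
    "L": (0, 1, 1, 0),   # spin left
    "R": (1, 0, 0, 1),   # spin right
    "S": (0, 0, 0, 0),   # stop
}

def translate(command: str):
    """Parse one received command and return the driver input states."""
    try:
        return MOTOR_STATES[command.strip().upper()]
    except KeyError:
        return MOTOR_STATES["S"]        # unknown input: stop for safety

if __name__ == "__main__":
    for received in ["F", "l", "x"]:
        print(received, "->", translate(received))
```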

1. Driving motors: A combination of four 100 RPM motors (12 V, 1 A) with a spur gearbox system is used to drive the four wheels of the robot.
2. Arduino Mega: An Arduino Mega 2560, based on the ATmega2560, is used as the main brain of the hardware. It takes input from the sensors mounted on the chassis; suitable decisions are made by the Arduino, and the actuators are governed accordingly for motion and for the different selected operations.
3. Voltage divider: The voltage divider used here provides two voltage levels, 5 V for logic operations and 12 V for the motors.
4. Tyres: Tread-block tyres with a 4.5" diameter are used for better grip and push in on-land applications.
5. Bluetooth: Bluetooth provides connectivity between two devices using a particular MAC address.
6. Motor driver (L298N based): A motor driver board based on the L298N is used to drive the motors.

Minimizing Android GUI Test Suites

In recent years, there has been a significant surge in the usage and development of apps for smartphones and tablets. Developers are writing more apps for mobile platforms than for desktops. The complexity of mobile apps often lies in their graphical user interfaces (GUIs), and testing efforts for such apps mostly focus on the behavior of graphical user interfaces. Several automated GUI testing techniques have recently been proposed. The techniques include learning-based testing [8, 29, 31, 32], model-based testing [1, 23, 45], genetic programming [27, 28], fuzz testing [25, 26, 37], and static-analysis based approaches [4, 33, 34, 44, 49]. The goal of the majority of these techniques is to achieve good code and screen coverage (i.e., covering all distinct screens of an app) and to find common bugs such as crashes and unresponsiveness. Most of these techniques work by injecting sequences of automatically generated user inputs or actions into an app for several tens of hours. We consider each sequence of actions injected by these techniques to be a test case, and the set of all sequences of actions to be a test suite.

Although automated GUI testing techniques can find bugs, they tend to generate large test suites containing thousands of test cases, and each test case can contain tens to thousands of user actions. Such a large test suite can take several hours to execute, because the running time of a test suite is linear in its size. However, regression tests should be fast so that they can be used frequently during development; such test suites are therefore difficult to use in regression testing. In this paper, we address the problem of generating a small regression GUI test suite for an Android app. We assume that we are given a large test suite generated by an existing automated GUI testing tool. We also assume that the test suite is replayable, in the sense that if we rerun the test suite multiple times we get the same coverage and observe the same sequence of app screens. (The evaluation section has details on how to obtain a replayable test suite from an automated GUI testing tool.) We assume that the test suite takes several hours to run on the app.

Our goal is to spend a reasonable amount of time, say a day, to generate a small regression test suite for the app that runs for less than an hour and that achieves similar code and screen coverage to the original test suite provided as input. A couple of techniques have been proposed to minimize test suites for GUIs. For example, Clapp et al. [7] and Hammoudi et al. [17] proposed delta-debugging [48] based algorithms. These techniques work well if the input test suite is small, containing less than one thousand user inputs. However, they fail to scale to large test suites because they depend heavily on the rapid generation and feasibility checking of new test cases. Unfortunately, for most real-world GUI apps, it takes a few minutes to check the feasibility of a new input sequence. Therefore, for large test suites containing tens of thousands of user actions, a delta-debugging based approach could take more than a month to effectively minimize a test suite. McMaster and Memon [30] proposed a GUI test suite reduction technique for reducing the number of test cases in a test suite. However, this technique makes no effort to reduce the size of each test case. In our experimental evaluation, we observed that test cases generated by an automated tool can contain subsequences of redundant user actions, which can be removed to obtain smaller test suites. We propose an Android GUI test suite reduction algorithm that can scalably and effectively minimize large test suites. The key insight behind our technique is that if we can identify and remove some common forms of redundancies introduced by existing automated GUI testing tools, then we can drastically lower the time required to minimize a test suite.

We manually analyzed test suites generated by existing automated GUI testing tools and found three kinds of redundancies that are common in these test suites: 1) some test cases can be safely removed from a test suite without impacting code and screen coverage, 2) within a test case, certain loops can be eliminated without decreasing coverage, and 3) many test cases share common subsequences of actions whose repeated execution can be avoided by combining fragments from different action sequences. Based on these observations, we developed an algorithm that removes these redundancies one by one while preserving the overall code and screen coverage of the test suite. In order to identify redundant loops and common fragments of test cases, we define a notion of state abstraction which enables us to approximately determine whether we visit the same abstract state at least twice while executing a test case. If an abstract state is visited twice during the execution, we have identified a loop which can potentially be removed. Similarly, if the executions of two test cases visit an identical subsequence of abstract states, we know that fragments from the two test cases can be combined to obtain a longer test case which avoids re-executing the common fragment. Whenever we get a new test case by removing a loop or by combining two fragments, the resulting test case may not traverse the same abstract states as expected. In our algorithm, we check the feasibility of a newly created test case by executing it a few times and checking whether the execution visits the same sequence of abstract states every time; we call this replayability. We noticed that if our state abstraction is too coarse-grained, our feasibility checks often fail, leading to longer running time.

On the other hand, if we use a too fine-grained state abstraction, we fail to identify many redundancies. One contribution of this paper is to design a good enough abstraction that works well in practice. One advantage of our algorithm over delta-debugging or other black-box algorithms is that we do not blindly generate all possible new test cases that can be constructed by dropping some actions. Rather, we use a suitable state abstraction to only drop potentially redundant loops. Another advantage is that we create new test cases by combining fragments from input test cases. This enables us to come up with new, longer test cases which cannot be generated using delta-debugging or other test suite reduction techniques. Longer test cases are usually better than multiple shorter test cases because we do not have to perform a clean restart of an app. A clean restart of an app requires us to kill the app, erase app data, and erase SD card contents, which is very time consuming.

A longer test case in place of several shorter test cases avoids several such expensive restarts. We have implemented our algorithm in a prototype tool, called DetReduce, for Android apps. The tool is publicly available at https://github.com/wtchoi/swifthand2. We applied DetReduce to several apps and found that DetReduce could reduce a test suite by a factor of 16.2× in size and a factor of 14.7× in running time on average. We also found that for a test suite generated by running SwiftHand [5] and a random testing algorithm [5] for 8 hours, DetReduce can reduce the test suite in an average of 14.6 hours. We are not aware of any existing technique that can achieve such a large reduction in the size of a large GUI test suite in such a reasonable amount of time. Note that DetReduce often runs longer than generating all test cases; however, running DetReduce is a one-time cost. Once a regression suite has been generated, it will be run many times, and each run will take a fraction of the time required to generate all test cases.
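
A minimal sketch of the loop-removal idea follows: a test case is a sequence of actions, its execution visits a sequence of abstract states, and any segment that starts and ends in the same abstract state is a candidate loop that can be dropped. The abstraction and the bookkeeping are simplified assumptions, and DetReduce additionally re-executes the shortened test case to confirm replayability, which the sketch omits.

```python
# Sketch of one redundancy-removal step: drop a loop, i.e. a segment of a
# test case whose execution starts and ends in the same abstract state.
# The shortened test case would still need to pass the replayability check.
def remove_one_loop(actions, abstract_states):
    """actions[i] moves the app from abstract_states[i] to abstract_states[i+1].
    Returns a shorter action list with the first detected loop removed,
    or the original list if no abstract state repeats."""
    assert len(abstract_states) == len(actions) + 1
    first_seen = {}
    for i, state in enumerate(abstract_states):
        if state in first_seen:
            j = first_seen[state]
            return actions[:j] + actions[i:]     # skip the j..i loop
        first_seen[state] = i
    return actions

# Toy trace: the test revisits screen "list" after a detour through "settings".
actions = ["open_app", "tap_menu", "tap_settings", "press_back", "tap_item"]
states  = ["home", "list", "menu", "settings", "list", "detail"]
print(remove_one_loop(actions, states))
# -> ['open_app', 'tap_item']  (candidate; must still pass the replay check)
```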

Update in Android-based IoT Platform

The Android-based IoT (Internet of Things) platform, just like existing Android, provides an environment that makes it easy to utilize Google's infrastructure services, including development tools and APIs, through which it helps to control the sensors of IoT devices. Applications running on the Android-based IoT platform are often UI-free and are used without the user's consent to the registered permissions. It is difficult to respond to the misuse of permissions, or even to check them, when they are registered indiscriminately while updating applications. This paper analyzes the versions of an application running on the Android-based IoT platform before and after an update, together with the collected permission lists. It aims to identify the permissions that are the same before and after the update, as well as those deleted and newly added by the update, and thereby respond to security threats that can arise from permissions that are not needed for IoT devices to perform certain functions.

The Android-based IoT platform was first unveiled to the public as a developer preview version on December 13, 2016. The Android-based IoT platform provides the technology to develop applications that run on IoT devices based on the Android operating system. It makes it easy to develop applications while leveraging existing Android development tools, Android APIs and Google infrastructure services. Applications that run on the Android-based IoT platform have much in common with those that run on existing Android-based smartphones. Applications running on both IoT devices and smartphones register permissions to provide users with certain functions. If an application is used differently from its original purpose, or asks for additional permissions rather than using the given permissions to provide certain functions for the user, it can perform malicious activities such as collecting excessive information or leaking personal information [1]. For example, if an IoT device that provides temperature and humidity readings registered permissions such as location information, camera, or package installation and deletion, it could perform functions different from its original purpose through the newly registered permissions. This paper collects permission lists for the versions of an application running on the Android-based IoT platform before and after an update. It aims to respond to future security threats by identifying, based on the collected permission lists, which permissions remain the same and which are deleted or added by the update. The structure of this paper is as follows. Section 2 discusses the Android-based IoT platform, the AndroidManifest.xml file, and the Android permission protection level. Section 3 performs permission analysis on the application to identify permission differences before and after the update. Finally, Section 4 concludes this study.
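
As a rough sketch of the kind of analysis performed in Section 3, the snippet below parses the uses-permission entries of two versions of an application's AndroidManifest.xml and reports which permissions are kept, deleted or newly added by the update. The file names are placeholders, and the real analysis pipeline is not necessarily implemented this way.

```python
# Sketch: extract <uses-permission> entries from two versions of an app's
# AndroidManifest.xml and report which permissions are kept, deleted, or
# newly added by the update. File names are placeholders.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def manifest_permissions(path):
    """Return the set of permission names declared in a manifest file."""
    root = ET.parse(path).getroot()
    return {
        elem.get(f"{ANDROID_NS}name")
        for elem in root.iter("uses-permission")
    }

def diff_permissions(before_path, after_path):
    before = manifest_permissions(before_path)
    after = manifest_permissions(after_path)
    return {
        "same": sorted(before & after),
        "deleted": sorted(before - after),
        "added": sorted(after - before),   # candidates for closer review
    }

if __name__ == "__main__":
    report = diff_permissions("manifest_v1.xml", "manifest_v2.xml")
    for category, names in report.items():
        print(category, names)
```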

Android Communication Analysis with Intent Revision

Android applications (also called Android apps) have proved to be an effective target for attacks. The Google Play store has provided billions of Android apps, but unfortunately this advance has a dark side, because many Android apps cannot ensure security. Hence, more and more attention has been paid to Android malware. Taint flow analysis has proved to be an effective approach for exposing potentially malicious data flows. It aims at determining whether sensitive data flows from a source to a sink. The analysis can be executed either dynamically or statically. Dynamic taint analysis [5] relies on testing to reach an appropriate code coverage criterion. It is able to precisely pinpoint leaks, but may be incomplete in exploring all possible execution paths. In contrast, static analysis considers all possible paths, but most of the static analyses available for Android apps [1, 3] are intra-component analyses, which are unable to detect leaks across components. Even though most privacy leaks happen in a single component, many inter-component privacy leaks have been reported. Thus, intra-component taint analysis is not enough to detect leaks. Efforts have also been devoted to implementing static analysis for Android [2] to supply a relatively satisfactory outcome. Among them, Inter-Component Communication (ICC) [4] analysis plays an important role, since ICC values can facilitate precise subsequent analysis. However, current ICC analyses only consider ICC links between components; reuse and revision of an Intent across components are not considered. Thus, many potential leaks escape being tracked in the succeeding ICC leak detection. With this motivation, in this paper we focus on ICC analysis of reused and revised Intents. First, ICC values are analyzed by taking reused and revised Intents into account. On this basis, the target components of Intents are analyzed and ICC Graphs (ICCGs) are built. An ICCG contains all the ICC flows, which are useful for tracking leaks across components. This lays a critical foundation for the succeeding taint flow analysis. The proposed approach has been implemented in a tool called ICC-Analyzer (ICCA), in which IC3 is integrated to provide ICC values for the Intents that are not reused or revised.
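
As a toy illustration of ICC-graph construction, the sketch below matches explicit Intents by component name and implicit Intents by declared intent-filter actions, treats a revised Intent as a copy of a reused Intent with changed fields, and records each matched sender-target pair as an ICCG edge. The component names and matching rules are simplified assumptions, not the ICCA or IC3 implementation.

```python
# Toy sketch of ICC-graph construction: explicit Intents are matched by
# component name, implicit ones by declared intent-filter actions, and a
# "revised" Intent is a copy of a reused Intent with some fields changed.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Intent:
    sender: str
    action: str = ""
    component: str = ""          # non-empty -> explicit Intent

# Declared intent filters: component name -> set of accepted actions.
FILTERS = {
    "MainActivity": {"android.intent.action.MAIN"},
    "UploadService": {"com.example.action.UPLOAD"},
    "ShareActivity": {"android.intent.action.SEND"},
}

def targets(intent):
    if intent.component:                             # explicit match
        return [intent.component] if intent.component in FILTERS else []
    return [c for c, acts in FILTERS.items() if intent.action in acts]

def build_iccg(intents):
    """Return ICC edges (sender component -> target component)."""
    return [(i.sender, t) for i in intents for t in targets(i)]

base = Intent(sender="MainActivity", action="android.intent.action.SEND")
revised = replace(base, action="com.example.action.UPLOAD")  # reuse + revision
print(build_iccg([base, revised]))
# -> [('MainActivity', 'ShareActivity'), ('MainActivity', 'UploadService')]
```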

We have implemented our approach in a tool named ICCA to analyze ICC values with ICIR and to construct ICCGs of Android apps for the convenience of the succeeding ICC leak detection. The evaluation of our approach addresses the following two research questions: (1) How does ICCA perform when analyzing ICC values with ICIR? and (2) As an ICC analysis tool for Android apps, how precisely can ICCA match the targets of Intents?

ICC Analysis with ICIR

By experiments, we find that 37 and 36 revised Intents exist in GooglePlay and MalGenome, respectively. We apply ICCA to analyze the ICC values of the 73 revised Intents. Table 1 gives a bird's-eye view of the whole experiment. The left-hand side of Table 1 shows the seven attributes Action, Category, Type, Data, Flag, Extra, and Component of the 37 different revised Intents in GooglePlay; the right-hand side illustrates the attributes of the 36 revised Intents in MalGenome. Note that in the table, '–' means that the relative value is captured but not modified, and '√' indicates that the revised value is successfully acquired. As shown in Table 1, the ICC values of all 73 revised Intents are successfully captured, which none of the existing ICC analysis tools is able to obtain.

In this part, we illustrate the target components of different Intents in GooglePlay and MalGenome matched by ICCA. The results are compared with the target components matched by IC3. All the Intents are classified into three categories: explicit, implicit and reused ones. We record the numbers of Intents in the different categories and the numbers of matched target components. The results on GooglePlay and MalGenome are presented in Table 2. The first column gives the data sets; the second shows the categories of Intents; the third illustrates the number of Intents involved in each category. The right-most two columns present the numbers of target components identified by IC3 and ICCA, respectively. As shown in the experiment, both IC3 and ICCA can identify most of the explicitly defined target components of Intents (94.8% in GooglePlay and 98.7% in MalGenome). For implicit ones, a small part (1.3% in GooglePlay and 0.7% in MalGenome) of them are acquired by ICCA, whereas none of them can be obtained by IC3. The success rate of ICCA is low because implicit Intents are frequently used to launch target components in other apps, which cannot be acquired without the runtime environment; thus, our result is reasonable. For the reused Intents, ICCA can acquire almost all of the target components, while none of them are acquired by IC3. In this experiment, we compare the results of ICCA in ICC analysis only with the newest ICC analysis tool, IC3, as it is an improvement of Epicc. To the best of our knowledge, IC3 and Epicc are the only ICC analysis tools publicly available.

PRIVACY-PRESERVING RELATIVE LOCATION BASED SERVICES FOR MOBILE USERS

ABSTRACT:

Location-aware applications have been widely used with the assistance of the latest positioning features in smartphones, such as GPS, AGPS, etc. However, all the existing applications gather users' geographical data and transform it into pertinent information to give it meaning and value. For this kind of solution, user privacy and security issues may be raised because the geographical location has to be exposed to the service provider. A novel and practical solution is proposed in this article to provide the relative location of two mobile users based on their WiFi scan results, without any additional sensors. There is no privacy concern in this solution because end users do not collect or send any sensitive information to the server. The solution adopts a Client/Server (C/S) architecture, in which the mobile user, as a client, reports the ambient WiFi APs, and the server calculates the distances based on the WiFi APs' topological relationships. A series of techniques are explored to improve the accuracy of the estimated distance, and the corresponding algorithms are proposed. We also prove the feasibility with a prototype of the "Circle Your Friends" System (CYFS) on an Android phone, which lets mobile users know the distance between themselves and their social network friends.

INTRODUCTION:

LOCATION-AWARE APPLICATIONS:

Location awareness refers to devices that can passively or actively determine their location. Navigational instruments provide location coordinates for vessels and vehicles, and surveying equipment identifies location with respect to a well-known location. For a wireless communications device, network location awareness (NLA) describes the location of a node in a network. The term applies to navigating, real-time locating and positioning support with global, regional or local scope, and it has been applied to traffic, logistics, business administration and leisure applications. Location awareness is supported by navigation systems, positioning systems and/or locating services. Location awareness without the active participation of the device is known as non-cooperative locating or detection. Location-aware applications use the geographical position of a mobile worker or an asset to execute a task. Position is detected mainly through satellite technologies, such as GPS, or through mobile location technologies in cellular networks and mobile devices.

Examples include fleet management applications with mapping, navigation and routing functionalities, government inspections, and integration with geographic information system applications. Location-aware applications deliver specified messages to users based on their physical location. These services can be divided into two types: absolute-location services and relative-location services. Absolute location means locating a place using a coordinate system, while relative location means locating a place relative to other landmarks. Location services require users to report their absolute location data to the server, and the server then returns the query result. Usually, the technologies used to detect and retrieve the location data include GPS, the mobile cell ID (CID) and WiFi APs. For these methodologies, serious privacy concerns are raised because they enable continuous tracking of the involved users' locations. Two major types of privacy concern are triggered: the potential information leakage in communications and the inappropriate usage of this information by the service providers.

EXISTING SYSTEM:

The rapid proliferation of smartphone technology in urban communities has enabled mobile users to utilize context-aware services on their devices. Service providers take advantage of this dynamic and ever-growing technology landscape by proposing innovative context-dependent services for mobile subscribers. Location-based Services (LBS), for example, are used by millions of mobile subscribers every day to obtain location-specific information. Two popular features of location-based services are location check-ins and location sharing. By checking into a location, users can share their current location with family and friends or obtain location-specific services from third-party providers; in this case, the obtained service does not depend on the locations of other users.

The other types of location-based services, which rely on sharing of locations (or location preferences) by a group of users in order to obtain some service for the whole group, are also becoming popular. According to a recent study, location sharing services are used by almost 20% of all mobile phone users. One prominent example of such a service is the taxi-sharing application, offered by a global telecom operator, where smart phone users can share a taxi with other users at a suitable location by revealing their departure and destination locations. Similarly, another popular service enables a group of users to find the most geographically convenient place to meet.

DISADVANTAGES:

  • Privacy of a user’s location or location preferences, with respect to other users and the third-party service provider, is a critical concern in such location-sharing-based applications.
  • For instance, such information can be used to de-anonymize users and their availabilities, to track their preferences or to identify their social networks.
  • For example, in the taxi-sharing application, a curious third-party service provider could easily deduce home/work location pairs of users who regularly use their service.
  • Without effective protection, even sparse location information has been shown to provide reliable information about a user's private sphere, which could have severe consequences for the user's social, financial and private life.
  • Even service providers who legitimately track users’ location information in order to improve the offered service can inadvertently harm users’ privacy, if the collected data is leaked in an unauthorized fashion or improperly shared with corporate partners.

PROPOSED SYSTEM:

We propose a simple and novel solution to provide the relative distance between two mobile devices without collecting any personal sensitive data. It can guarantee 100% privacy for users when providing a location-based service. Since no absolute location information is detected, none of the above privacy-protection mechanisms needs to be adopted in our solution. At the same time, some methods are put forward to improve the accuracy of the relative distance. Our approach integrates more parameters to improve the accuracy of the WiFi positioning system, such as the IEEE protocol type, overlap ratio, etc. More importantly, all these mechanisms have been revisited and redesigned carefully to make them more applicable.
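
A toy sketch of the underlying idea is shown below: each client reports only the set of AP identifiers it can currently see, and the server scores proximity from the overlap ratio of two such sets, so no absolute coordinates are ever exchanged. The thresholds and labels are illustrative assumptions, not the CYFS algorithms.

```python
# Toy sketch: estimate relative proximity of two users from the overlap of
# their WiFi scan results (sets of AP identifiers), with no GPS or absolute
# coordinates involved. Thresholds are illustrative, not the CYFS values.
def overlap_ratio(scan_a, scan_b):
    """Jaccard-style overlap of two sets of observed AP identifiers."""
    a, b = set(scan_a), set(scan_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def proximity_label(ratio):
    if ratio >= 0.5:
        return "same room / very close"
    if ratio >= 0.2:
        return "same building or nearby"
    if ratio > 0.0:
        return "same neighbourhood"
    return "far apart (no shared APs)"

alice = ["ap:01", "ap:02", "ap:03", "ap:04"]
bob   = ["ap:02", "ap:03", "ap:05"]
r = overlap_ratio(alice, bob)
print(f"overlap={r:.2f} -> {proximity_label(r)}")
```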

We address the privacy issue in LSBSs by focusing on a specific problem called the CYFS. Given a set of user location preferences, the CYFS determines a location among the proposed ones such that the maximum distance between this location and all other users' locations is minimized, i.e. it is fair to all users. To prove its feasibility, a prototype based on Facebook was developed on Android-based mobile devices. Although the evaluated accuracy of the estimated distance is not as good as GPS, it proves that our privacy-preserving solution is suitable for social networking and location-based applications. Future work includes developing the application on Google Android as well as Apple iOS devices and, if possible, integrating the privacy-preserving relative location based service into other social networking applications such as WeChat and QQ.

ADVANTAGES:

  • The proposed system solves the problem in a privacy-preserving fashion, where each user participates by providing only a single location preference to the CYFS solver or the service provider.
  • In this significantly extended version of our earlier conference paper, we evaluate the security of our proposal under various passive and active adversarial scenarios, including collusion.
  • We also provide an accurate and detailed analysis of the privacy properties of our proposal and show that our algorithms do not provide any probabilistic advantage to a passive adversary in correctly guessing the preferred location of any participant.
  • In addition to the theoretical analysis, we also evaluate the practical efficiency and performance of the proposed algorithms by means of a prototype implementation on a test bed of Nokia mobile devices. We also address the multi-preference case, where each user may have multiple prioritized location preferences.
  • We highlight the main differences, in terms of performance, with the single preference case, and also present initial experimental results for the multi-preference implementation. Finally, by means of a targeted user study, we provide insight into the usability of our proposed solutions.

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENT:

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 256 MB (min)
  • Hard Disk – 20 GB
  • Floppy Drive – 1.44 MB
  • Keyboard – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

SOFTWARE REQUIREMENTS:

  • Operating System : Windows XP or Windows 7
  • Front End : Java JDK 1.7
  • Back End : MySQL Server
  • Script : JSP
  • Document : MS-Office 2007

PRIVACY-PRESERVING DETECTION OF SENSITIVE DATA EXPOSURE

ABSTRACT:

Statistics from security firms, research institutions and government organizations show that the number of data-leak instances has grown rapidly in recent years. Among various data-leak cases, human mistakes are one of the main causes of data loss. There exist solutions for detecting inadvertent sensitive data leaks caused by human mistakes and for providing alerts to organizations. A common approach is to screen content in storage and transmission for exposed sensitive information. Such an approach usually requires the detection operation to be conducted in secrecy. However, this secrecy requirement is challenging to satisfy in practice, as detection servers may be compromised or outsourced.

In this paper, we present a privacy-preserving data-leak detection (DLD) solution to address this issue, in which a special set of sensitive data digests is used in detection. The advantage of our method is that it enables the data owner to safely delegate the detection operation to a semi-honest provider without revealing the sensitive data to the provider. We describe how Internet service providers can offer their customers DLD as an add-on service with strong privacy guarantees. The evaluation results show that our method can support accurate detection with a very small number of false alarms under various data-leak scenarios.

INTRODUCTION:

According to a report from Risk Based Security (RBS), the number of leaked sensitive data records has increased dramatically during the last few years, i.e., from 412 million in 2012 to 822 million in 2013. Deliberately planned attacks, inadvertent leaks (e.g., forwarding confidential emails to unclassified email accounts), and human mistakes (e.g., assigning the wrong privilege) lead to most of the data-leak incidents. Detecting and preventing data leaks requires a set of complementary solutions, which may include data-leak detection, data confinement, stealthy malware detection and policy enforcement.

Network data-leak detection (DLD) typically performs deep packet inspection (DPI) and searches for any occurrences of sensitive data patterns. DPI is a technique to analyze payloads of IP/TCP packets for inspecting application layer data, e.g., HTTP header/content. Alerts are triggered when the amount of sensitive data found in traffic passes a threshold. The detection system can be deployed on a router or integrated into existing network intrusion detection systems (NIDS). Straightforward realizations of data-leak detection require the plaintext sensitive data.

However, this requirement is undesirable, as it may threaten the confidentiality of the sensitive information. If a detection system is compromised, then it may expose the plaintext sensitive data (in memory). In addition, the data owner may need to outsource the data-leak detection to providers, but may be unwilling to reveal the plaintext sensitive data to them. Therefore, one needs new data-leak detection solutions that allow the providers to scan content for leaks without learning the sensitive information.

In this paper, we propose a data-leak detection solution which can be outsourced and be deployed in a semihonest detection environment. We design, implement, and evaluate our fuzzy fingerprint technique that enhances data privacy during data-leak detection operations. Our approach is based on a fast and practical one-way computation on the sensitive data (SSN records, classified documents, sensitive emails, etc.). It enables the data owner to securely delegate the content-inspection task to DLD providers without exposing the sensitive data. Using our detection method, the DLD provider, who is modeled as an honest-but-curious (aka semi-honest) adversary, can only gain limited knowledge about the sensitive data from either the released digests, or the content being inspected. Using our techniques, an Internet service provider (ISP) can perform detection on its customers’ traffic securely and provide data-leak detection as an add-on service for its customers. In another scenario, individuals can mark their own sensitive data and ask the administrator of their local network to detect data leaks for them.

In our detection procedure, the data owner computes a special set of digests or fingerprints from the sensitive data and then discloses only a small number of them to the DLD provider. The DLD provider computes fingerprints from network traffic and identifies potential leaks in them. To prevent the DLD provider from gathering exact knowledge about the sensitive data, the collection of potential leaks is composed of real leaks and noise. It is the data owner who post-processes the potential leaks sent back by the DLD provider and determines whether there is any real data leak.
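
The sketch below illustrates the digest idea in a highly simplified form: the data owner derives n-gram (shingle) hashes from the sensitive data, "fuzzifies" them by masking their low-order bits before release, and the provider flags traffic whose shingle hashes fall into the released fuzzy set. The hash choice, shingle length and mask width are illustrative assumptions, not the paper's exact fuzzy-fingerprint construction.

```python
# Toy sketch of the fuzzy-fingerprint idea: the data owner releases masked
# n-gram digests of the sensitive data; the DLD provider flags traffic whose
# digests fall in that fuzzy set, and the owner post-processes the matches.
# Hash choice, shingle size and mask width are illustrative only.
import hashlib

SHINGLE = 8                            # bytes per n-gram
FUZZY_MASK = 0xFFFFFFFFFFFFFF00        # drop low 8 bits: digests share buckets

def digests(data: bytes):
    for i in range(len(data) - SHINGLE + 1):
        h = hashlib.sha256(data[i:i + SHINGLE]).digest()
        yield int.from_bytes(h[:8], "big")

def fuzzy_set(sensitive: bytes):
    """What the data owner releases to the DLD provider."""
    return {d & FUZZY_MASK for d in digests(sensitive)}

def provider_scan(traffic: bytes, released):
    """Provider side: count traffic shingles whose fuzzy digest matches."""
    return sum((d & FUZZY_MASK) in released for d in digests(traffic))

sensitive = b"SSN 123-45-6789 belongs to Alice"
traffic   = b"GET /form?ssn=123-45-6789 HTTP/1.1"

released = fuzzy_set(sensitive)
hits = provider_scan(traffic, released)
print(f"candidate leak shingles: {hits}")   # owner then verifies real leaks
```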

Our contributions are summarized as follows.

1) We describe a privacy-preserving data-leak detection model for preventing inadvertent data leaks in network traffic. Our model supports detection operation delegation, and ISPs can provide data-leak detection as an add-on service to their customers using our model. We design, implement, and evaluate an efficient technique, fuzzy fingerprint, for privacy-preserving data-leak detection. Fuzzy fingerprints are special sensitive data digests prepared by the data owner for release to the DLD provider.

2) We implement our detection system and perform an extensive experimental evaluation on a 2.6 GB Enron dataset, the Internet surfing traffic of 20 users, and 5 simulated real-world data-leak scenarios to measure its privacy guarantee, detection rate and efficiency. Our results indicate the high accuracy achieved by our underlying scheme with a very low false positive rate. Our results also show that the detection accuracy does not degrade much when only partial (sampled) sensitive-data digests are used. In addition, we give an empirical analysis of our fuzzification as well as of the fairness of fingerprint partial disclosure.

LITERATURE SURVEY

PRIVACY-AWARE COLLABORATIVE SPAM FILTERING

AUTHORS: K. Li, Z. Zhong, and L. Ramaswamy

PUBLISH: IEEE Trans. Parallel Distrib. Syst., vol. 20, no. 5, pp. 725–739, May 2009.

EXPLANATION:

While the concept of collaboration provides a natural defense against massive spam e-mails directed at large numbers of recipients, designing effective collaborative anti-spam systems raises several important research challenges. First and foremost, since e-mails may contain confidential information, any collaborative anti-spam approach has to guarantee strong privacy protection to the participating entities. Second, the continuously evolving nature of spam demands that the collaborative techniques be resilient to various kinds of camouflage attacks. Third, the collaboration has to be lightweight, efficient, and scalable. Toward addressing these challenges, this paper presents ALPACAS, a privacy-aware framework for collaborative spam filtering. In designing the ALPACAS framework, we make two unique contributions. The first is a feature-preserving message transformation technique that is highly resilient against the latest kinds of spam attacks. The second is a privacy-preserving protocol that provides enhanced privacy guarantees to the participating entities. Our experimental results, conducted on a real e-mail data set, show that the proposed framework provides a 10-fold improvement in the false negative rate over the Bayesian-based Bogofilter when faced with one of the recent kinds of spam attacks. Further, the privacy breaches are extremely rare. This demonstrates the strong privacy protection provided by the ALPACAS system.

DATA LEAK DETECTION AS A SERVICE: CHALLENGES AND SOLUTIONS

AUTHORS: X. Shu and D. Yao

PUBLISH: Proc. 8th Int. Conf. Secur. Privacy Commun. Netw., 2012, pp. 222–240

EXPLANATION:

We describe a network-based data-leak detection (DLD) technique, the main feature of which is that the detection does not require the data owner to reveal the content of the sensitive data. Instead, only a small amount of specialized digests is needed. Our technique, referred to as the fuzzy fingerprint, can be used to detect accidental data leaks due to human errors or application flaws. The privacy-preserving feature of our algorithms minimizes the exposure of sensitive data and enables the data owner to safely delegate the detection to others. We describe how cloud providers can offer their customers data-leak detection as an add-on service with strong privacy guarantees. We perform an extensive experimental evaluation of the privacy, efficiency, accuracy and noise tolerance of our techniques. Our evaluation results under various data-leak scenarios and setups show that our method can support accurate detection with a very small number of false alarms, even when the presentation of the data has been transformed. It also indicates that the detection accuracy does not degrade when partial digests are used. We further provide a quantifiable method to measure the privacy guarantee offered by our fuzzy fingerprint framework.

QUANTIFYING INFORMATION LEAKS IN OUTBOUND WEB TRAFFIC

AUTHORS: K. Borders and A. Prakash

PUBLISH: Proc. 30th IEEE Symp. Secur. Privacy, May 2009, pp. 129–140.

EXPLANATION:

As the Internet grows and network bandwidth continues to increase, administrators are faced with the task of keeping confidential information from leaving their networks. Today's network traffic is so voluminous that manual inspection would be unreasonably expensive. In response, researchers have created data loss prevention systems that check outgoing traffic for known confidential information. These systems stop naive adversaries from leaking data, but are fundamentally unable to identify encrypted or obfuscated information leaks. What remains is a high-capacity pipe for tunneling data to the Internet. We present an approach for quantifying information leak capacity in network traffic. Instead of trying to detect the presence of sensitive data (an impossible task in the general case), our goal is to measure and constrain its maximum volume. We take advantage of the insight that most network traffic is repeated or determined by external information, such as protocol specifications or messages sent by a server. By filtering this data, we can isolate and quantify true information flowing from a computer. In this paper, we present measurement algorithms for the Hypertext Transfer Protocol (HTTP), the main protocol for Web browsing. When applied to real Web browsing traffic, the algorithms were able to discount 98.5% of measured bytes and effectively isolate information leaks.

SYSTEM ANALYSIS

EXISTING SYSTEM:

  • Detecting and preventing data leaks requires a set of complementary solutions, which may include data-leak detection, data confinement, stealthy malware detection, and policy enforcement.
  • Network data-leak detection (DLD) typically performs deep packet inspection (DPI) and searches for any occurrences of sensitive data patterns. DPI is a technique to analyze payloads of IP/TCP packets for inspecting application layer data, e.g., HTTP header/content.
  • Alerts are triggered when the amount of sensitive data found in traffic passes a threshold. The detection system can be deployed on a router or integrated into existing network intrusion detection systems (NIDS). A minimal sketch of this threshold-based check appears after this list.
  • Straightforward realizations of data-leak detection require the plaintext sensitive data. However, this requirement is undesirable, as it may threaten the confidentiality of the sensitive information. If a detection system is compromised, then it may expose the plaintext sensitive data (in memory).
  • In addition, the data owner may need to outsource the data-leak detection to providers, but may be unwilling to reveal the plaintext sensitive data to them. Therefore, one needs new data-leak detection solutions that allow the providers to scan content for leaks without learning the sensitive information.
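The following is a minimal, illustrative sketch of the threshold-based inspection idea described in the list above, written to make its weakness explicit: it needs the sensitive patterns in plaintext. The class name, pattern list, and threshold are hypothetical and are not taken from any cited system.

    import java.util.Arrays;
    import java.util.List;

    // Illustrative sketch of threshold-based inspection for data-leak detection:
    // count occurrences of known sensitive patterns in a reconstructed payload and
    // raise an alert when the count passes a threshold. Note that the patterns are
    // held in plaintext, which is exactly the confidentiality problem noted above.
    public class NaiveDpiDetector {

        private final List<String> sensitivePatterns;
        private final int alertThreshold;

        public NaiveDpiDetector(List<String> sensitivePatterns, int alertThreshold) {
            this.sensitivePatterns = sensitivePatterns;
            this.alertThreshold = alertThreshold;
        }

        // Returns true when the payload should trigger a data-leak alert.
        public boolean inspectPayload(String payload) {
            int hits = 0;
            for (String pattern : sensitivePatterns) {
                int from = 0;
                while ((from = payload.indexOf(pattern, from)) != -1) {
                    hits++;
                    from += pattern.length();
                }
            }
            return hits >= alertThreshold;
        }

        public static void main(String[] args) {
            NaiveDpiDetector dpi =
                    new NaiveDpiDetector(Arrays.asList("123-45-6789", "CONFIDENTIAL"), 2);
            System.out.println(dpi.inspectPayload("CONFIDENTIAL report for SSN 123-45-6789"));
        }
    }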

DISADVANTAGES:

  • As the Internet grows and network bandwidth continues to increase, administrators are faced with the task of keeping confidential information from leaving their networks. In response, researchers have created data loss prevention systems that check outgoing traffic for known confidential information.
  • These systems stop naive adversaries from leaking data, but are fundamentally unable to identify encrypted or obfuscated information leaks. What remains is a high-capacity pipe for tunneling data to the Internet.
  • The existing approach for quantifying information leak capacity in network traffic does not try to detect the presence of sensitive data (an impossible task in the general case); it only measures and constrains the maximum volume of a leak.
  • That approach relies on the insight that most network traffic is repeated or determined by external information, such as protocol specifications or messages sent by a server; by filtering this data, it can isolate and quantify the true information flowing from a computer.

PROPOSED SYSTEM:

  • We propose a data-leak detection solution which can be outsourced and be deployed in a semihonest detection environment. We design, implement, and evaluate our fuzzy fingerprint technique that enhances data privacy during data-leak detection operations.
  • Our approach is based on a fast and practical one-way computation on the sensitive data (SSN records, classified documents, sensitive emails, etc.). It enables the data owner to securely delegate the content-inspection task to DLD providers without exposing the sensitive data.
  • In our detection method, the DLD provider, who is modeled as an honest-but-curious (aka semi-honest) adversary, can only gain limited knowledge about the sensitive data from either the released digests or the content being inspected. Using our techniques, an Internet service provider (ISP) can perform detection on its customers’ traffic securely and provide data-leak detection as an add-on service for its customers. In another scenario, individuals can mark their own sensitive data and ask the administrator of their local network to detect data leaks for them.
  • In our detection procedure, the data owner computes a special set of digests or fingerprints from the sensitive data and then discloses only a small amount of them to the DLD provider. The DLD provider computes fingerprints from network traffic and identifies potential leaks in them.
  • To prevent the DLD provider from gathering exact knowledge about the sensitive data, the collection of potential leaks is composed of real leaks and noise. It is the data owner who post-processes the potential leaks sent back by the DLD provider and determines whether there is any real data leak. A minimal sketch of this fingerprint-and-post-process flow follows this list.
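The sketch below illustrates the shingle-and-digest flow just described in a deliberately simplified form; the shingle length, the use of String.hashCode, and the bit-masking used to coarsen ("fuzz") the digests are assumptions for illustration, not the fuzzy fingerprint construction of the actual system.

    import java.util.HashSet;
    import java.util.Set;

    // Simplified sketch of digest-based leak checking: the data owner releases
    // coarsened digests of shingles of the sensitive data, the DLD provider counts
    // traffic shingles that fall into the released set, and the data owner
    // post-processes the returned candidates (real leaks plus noise).
    public class FingerprintSketch {

        private static final int SHINGLE_LEN = 8;
        private static final int FUZZ_MASK = ~0x3F; // drop 6 low-order bits to coarsen digests

        // Data owner side: compute coarsened digests of the sensitive data.
        static Set<Integer> ownerDigests(String sensitive) {
            Set<Integer> digests = new HashSet<Integer>();
            for (int i = 0; i + SHINGLE_LEN <= sensitive.length(); i++) {
                digests.add(sensitive.substring(i, i + SHINGLE_LEN).hashCode() & FUZZ_MASK);
            }
            return digests;
        }

        // DLD provider side: report how many traffic shingles match the released digests.
        static int candidateLeaks(String traffic, Set<Integer> releasedDigests) {
            int matches = 0;
            for (int i = 0; i + SHINGLE_LEN <= traffic.length(); i++) {
                if (releasedDigests.contains(traffic.substring(i, i + SHINGLE_LEN).hashCode() & FUZZ_MASK)) {
                    matches++;
                }
            }
            return matches;
        }

        public static void main(String[] args) {
            Set<Integer> digests = ownerDigests("SSN:123-45-6789 DO NOT SHARE");
            System.out.println(candidateLeaks("outbound: SSN:123-45-6789 attached", digests));
        }
    }

Because the digests are coarsened, unrelated shingles occasionally collide with the released set; that collision noise is precisely what the data owner filters out in the post-processing step.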

ADVANTAGES:

  • We describe a privacy-preserving data-leak detection model for preventing inadvertent data leaks in network traffic. Our model supports delegation of the detection operation, and ISPs can provide data-leak detection as an add-on service to their customers using our model.
  • We design, implement, and evaluate an efficient technique, fuzzy fingerprint, for privacy-preserving data-leak detection. Fuzzy fingerprints are special sensitive data digests prepared by the data owner for release to the DLD provider.
  • We implement our detection system and perform extensive experimental evaluation on the internet surfing traffic of 20 users, and also on 5 simulated real-world data-leak scenarios, to measure its privacy guarantee, detection rate and efficiency.
  • Our results indicate high accuracy achieved by our underlying scheme with a very low false positive rate. Our results also show that the detection accuracy does not degrade much when only partial (sampled) sensitive-data digests are used. We also provide an empirical analysis of our fuzzification as well as of the fairness of fingerprint partial disclosure.

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENT:

  • Processor      –    Pentium IV

  • Speed       –    1 GHz
  • RAM       –    256 MB (min)
  • Hard Disk      –   20 GB
  • Floppy Drive       –    1.44 MB
  • Key Board      –    Standard Windows Keyboard
  • Mouse       –    Two or Three Button Mouse
  • Monitor      –    SVGA

SOFTWARE REQUIREMENTS:

  • Operating System        :           Windows XP or Win7
  • Front End       :           JAVA JDK 1.7
  • Back End :           MYSQL Server
  • Server :           Apache Tomcat Server
  • Script :           JSP Script
  • Document :           MS-Office 2007

PRIVACY POLICY INFERENCE OF USER-UPLOADED IMAGES ON CONTENT SHARING SITES

 ABSTRACT:

With the increasing volume of images users share through social sites, maintaining privacy has become a major problem, as demonstrated by a recent wave of publicized incidents where users inadvertently shared personal information. In light of these incidents, the need for tools to help users control access to their shared content is apparent. Toward addressing this need, we propose an Adaptive Privacy Policy Prediction (A3P) system to help users compose privacy settings for their images. We examine the role of social context, image content, and metadata as possible indicators of users’ privacy preferences.

We propose a two-level framework which according to the user’s available history on the site, determines the best available privacy policy for the user’s images being uploaded. Our solution relies on an image classification framework for image categories which may be associated with similar policies, and on a policy prediction algorithm to automatically generate a policy for each newly uploaded image, also according to users’ social features. Over time, the generated policies will follow the evolution of users’ privacy attitude. We provide the results of our extensive evaluation over 5,000 policies, which demonstrate the effectiveness of our system, with prediction accuracies over 90 percent.

INTRODUCTION

Images are now one of the key enablers of users’ connectivity. Sharing takes place both among previously established groups of known people or social circles (e.g., Google+, Flickr or Picasa), and also increasingly with people outside the user’s social circles, for purposes of social discovery: to help them identify new peers and learn about peers’ interests and social surroundings. However, semantically rich images may reveal content-sensitive information. Consider a photo of a student’s 2012 graduation ceremony, for example.

It could be shared within a Google+ circle or Flickr group, but may unnecessarily expose the student’s family members and other friends. Sharing images within online content sharing sites, therefore, may quickly lead to unwanted disclosure and privacy violations. Further, the persistent nature of online media makes it possible for other users to collect rich aggregated information about the owner of the published content and the subjects in the published content. The aggregated information can result in unexpected exposure of one’s social environment and lead to abuse of one’s personal information.

Most content sharing websites allow users to enter their privacy preferences. Unfortunately, recent studies have shown that users struggle to set up and maintain such privacy settings. One of the main reasons provided is that, given the amount of shared information, this process can be tedious and error-prone. Therefore, many have acknowledged the need for policy recommendation systems which can assist users to easily and properly configure privacy settings. However, existing proposals for automating privacy settings appear to be inadequate to address the unique privacy needs of images, due to the amount of information implicitly carried within images and their relationship with the online environment wherein they are exposed.

LITERATURE SURVEY

TITLE NAME: SHEEPDOG: GROUP AND TAG RECOMMENDATION FOR FLICKR PHOTOS BY AUTOMATIC SEARCH-BASED LEARNING

AUTHOR: H.-M. Chen, M.-H. Chang, P.-C. Chang, M.-C. Tien, W. H. Hsu, and J.-L. Wu,

PUBLISH: Proc. 16th ACM Int. Conf. Multimedia, 2008, pp. 737–740.

EXPLANATION:

Online photo albums have been prevalent in recent years and have resulted in more and more applications developed to provide convenient functionalities for photo sharing. In this paper, we propose a system named SheepDog to automatically add photos into appropriate groups and recommend suitable tags for users on Flickr. We adopt concept detection to predict relevant concepts of a photo and probe into the issue about training data collection for concept classification. From the perspective of gathering training data by web searching, we introduce two mechanisms and investigate their performances of concept detection. Based on some existing information from Flickr, a ranking-based method is applied not only to obtain reliable training data, but also to provide reasonable group/tag recommendations for input photos. We evaluate this system with a rich set of photos and the results demonstrate the effectiveness of our work.

TITLE NAME: CONNECTING CONTENT TO COMMUNITY IN SOCIAL MEDIA VIA IMAGE CONTENT, USER TAGS AND USER COMMUNICATION

AUTHOR: M. D. Choudhury, H. Sundaram, Y.-R. Lin, A. John, and D. D. Seligmann

PUBLISH: Proc. IEEE Int. Conf. Multimedia Expo, 2009, pp.1238–1241.

EXPLANATION:

In this paper we develop a recommendation framework to connect image content with communities in online social media. The problem is important because users are looking for useful feedback on their uploaded content, but finding the right community for feedback is challenging for the end user. Social media are characterized by both content and community. Hence, in our approach, we characterize images through three types of features: visual features, user generated text tags, and social interaction (user communication history in the form of comments). A recommendation framework based on learning a latent space representation of the groups is developed to recommend the most likely groups for a given image. The model was tested on a large corpus of Flickr images comprising 15,689 images. Our method outperforms the baseline method, with a mean precision 0.62 and mean recall 0.69. Importantly, we show that fusing image content, text tags with social interaction features outperforms the case of only using image content or tags.

TITLE NAME: ANALYSING FACEBOOK FEATURES TO SUPPORT EVENT DETECTION FOR PHOTO-BASED FACEBOOK APPLICATIONS

AUTHOR: M. Rabbath, P. Sandhaus, and S. Boll,

PUBLISH: Proc. 2nd ACM Int. Conf. Multimedia Retrieval, 2012, pp. 11:1–11:8.

EXPLANATION:

Facebook witnesses an explosion of the number of shared photos: with 100 million photo uploads a day it creates as much as a whole Flickr every two months in terms of volume. Facebook also has one of the healthiest platforms to support third party applications, many of which deal with photos and related events. While it is essential for many Facebook applications, until now there has been no easy way to detect and link photos that are related to the same events, which are usually distributed between friends and albums. In this work, we introduce an approach that exploits Facebook features to link photos related to the same event. In the current situation where the EXIF header of photos is missing in Facebook, we extract visual-based, tagged-area-based, friendship-based and structure-based features. We evaluate each of these features and use the results in our approach. We introduce and evaluate a semi-supervised probabilistic approach that takes into account the evaluation of these features. In this approach we create a lookup table of the initialization values of our model variables and make it available for other Facebook applications or researchers to use. The evaluation of our approach showed promising results, and it outperformed the baseline method of using the unsupervised EM algorithm to estimate the parameters of a Gaussian mixture model. We also give two examples of the applicability of this approach to help Facebook applications better serve the user.

SYSTEM ANALYSIS

EXISTING SYSTEM:

Image content sharing environments such as Flickr or YouTube contain a large amount of private resources such as photos showing weddings, family holidays, and private parties. These resources can be of a highly sensitive nature, disclosing many details of the users’ private sphere. In order to support users in making privacy decisions in the context of image sharing, and to provide them with a better overview of privacy-related visual content available on the Web, techniques are needed to automatically detect private images and to enable privacy-oriented image search.

To this end, we learn privacy classifiers trained on a large set of manually assessed Flickr photos, combining textual metadata of images with a variety of visual features. We employ the resulting classification models for specifically searching for private photos, and for diversifying query results to provide users with a better coverage of private and public content. Most content sharing websites allow users to enter their privacy preferences. Unfortunately, recent studies have shown that users struggle to set up and maintain such privacy settings.

  • One of the main reasons provided is that, given the amount of shared information, this process can be tedious and error-prone; hence the need for policy recommendation systems which can assist users to easily and properly configure privacy settings.

DISADVANTAGES:

  • Sharing images within online content sharing sites, therefore, may quickly lead to unwanted disclosure and privacy violations.
  • Further, the persistent nature of online media makes it possible for other users to collect rich aggregated information about the owner of the published content and the subjects in the published content.
  • The aggregated information can result in unexpected exposure of one’s social environment and lead to abuse of one’s personal information.

PROPOSED SYSTEM:

We propose an Adaptive Privacy Policy Prediction (A3P) system which aims to provide users a hassle free privacy settings experience by automatically generating personalized policies. The A3P system handles user uploaded images, and factors in the following criteria that influence one’s privacy settings of images:

The impact of social environment and personal characteristics: Social context of users, such as their profile information and relationships with others may provide useful information regarding users’ privacy preferences. For example, users interested in photography may like to share their photos with other amateur photographers. Users who have several family members among their social contacts may share with them pictures related to family events. However, using common policies across all users or across users with similar traits may be too simplistic and not satisfy individual preferences.

Users may have drastically different opinions even on the same type of images. For example, a privacy adverse person may be willing to share all his personal images while a more conservative person may just want to share personal images with his family members. In light of these considerations, it is important to find the balancing point between the impact of social environment and users’ individual characteristics in order to predict the policies that match each individual’s needs.

The role of image content and metadata: In general, similar images often incur similar privacy preferences, especially when people appear in the images. For example, one may upload several photos of his kids and specify that only his family members are allowed to see these photos. He may upload some other photos of landscapes which he took as a hobby, and for these photos he may set a privacy preference allowing anyone to view and comment on the photos. Analyzing the visual content alone may not be sufficient to capture users’ privacy preferences. Tags and other metadata are indicative of the social context of the image, including where it was taken and why, and also provide a synthetic description of images, complementing the information obtained from visual content analysis.
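As a toy illustration of the history-based side of such prediction (much simpler than the A3P framework itself), the sketch below predicts a policy for a newly uploaded image by a majority vote over the policies the user previously assigned to images of the same content category; the category labels, policy names, and fallback default are hypothetical.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Toy policy predictor: majority vote over the policies a user previously
    // assigned to images of the same category. Categories and policy labels
    // are hypothetical placeholders.
    public class PolicyPredictorSketch {

        private final Map<String, List<String>> historyByCategory = new HashMap<String, List<String>>();

        // Record the policy the user actually chose for an uploaded image.
        public void recordUpload(String category, String chosenPolicy) {
            List<String> past = historyByCategory.get(category);
            if (past == null) {
                past = new ArrayList<String>();
                historyByCategory.put(category, past);
            }
            past.add(chosenPolicy);
        }

        // Predict a policy for a new image of the given category.
        public String predictPolicy(String category, String defaultPolicy) {
            List<String> past = historyByCategory.get(category);
            if (past == null || past.isEmpty()) {
                return defaultPolicy; // no history: fall back to a default (or to community advice)
            }
            Map<String, Integer> counts = new HashMap<String, Integer>();
            String best = defaultPolicy;
            int bestCount = 0;
            for (String policy : past) {
                Integer c = counts.get(policy);
                int updated = (c == null) ? 1 : c + 1;
                counts.put(policy, updated);
                if (updated > bestCount) {
                    bestCount = updated;
                    best = policy;
                }
            }
            return best;
        }

        public static void main(String[] args) {
            PolicyPredictorSketch predictor = new PolicyPredictorSketch();
            predictor.recordUpload("family", "FAMILY_ONLY");
            predictor.recordUpload("family", "FAMILY_ONLY");
            predictor.recordUpload("landscape", "PUBLIC");
            System.out.println(predictor.predictPolicy("family", "PRIVATE")); // prints FAMILY_ONLY
        }
    }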

ADVANTAGES:

  • The A3P-core focuses on analyzing each individual user’s own images and metadata, while the A3P-Social offers a community perspective of privacy setting recommendations for a user’s potential privacy improvement.
  • We refine the algorithm in the A3P-core (which is now parameterized based on user groups and also factors in possible outliers), and add a new A3P-Social module that develops the notion of social context to refine and extend the prediction power of our system.
  • We design the interaction flows between the two building blocks to balance the benefits from meeting personal characteristics and obtaining community advice.

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENT:

  • Processor      –    Pentium IV

  • Speed       –    1 GHz
  • RAM       –    256 MB (min)
  • Hard Disk      –   20 GB
  • Floppy Drive       –    1.44 MB
  • Key Board      –    Standard Windows Keyboard
  • Mouse       –    Two or Three Button Mouse
  • Monitor      –    SVGA

SOFTWARE REQUIREMENTS:

  • Operating System        :           Windows XP or Win7
  • Front End       :           JAVA JDK 1.7
  • Back End :           MYSQL Server
  • Server :           Apache Tomcat Server
  • Script :           JSP Script
  • Document :           MS-Office 2007

Android graphical interface

Android graphical interface: “layout” files User interfaces (views) are described in XML files that define controls’ positioning (buttons, images, text boxes, etc.) and how they are arranged with one another (below, on the right, on the left, etc.) in a linear container, in absolute position, in a grid, horizontal, vertical, etc. NOTE.– The screen orientation (according to the position detected by inertial sensors: portrait or landscape) can be managed at the activity declaration level in the manifest file.

Designing a layout Two design modes are available from Eclipse: the visual designer (“Graphical layout” tab), which enables you to drag and drop components and to graphically place them in the view, and the XML view.

NOTE.– A layout can refer to another layout defined in another XML file; this allows reuse of atomic pieces of views. In order to do so, we use the include tag referencing the layout file that is to be included. The coding wizard menu in Eclipse is activated by positioning the cursor at the right location and pressing the “Ctrl”+“space” key combination; then, you just need to choose the element you wish to insert from the list of available codes by scrolling (with the arrow keys) and press “enter” to confirm.

Associating a layout with an activity and handling controls A layout can be “set” (method “setContentView”) for one or several activities (usually in the “onCreate” method). From this layout, controls can be displayed differently in order to adapt to the targeted device, regardless of the type of display and regardless of the published version of your Android app; for example, it could be installed on a smartphone, on a tablet, on a smart watch, as well as on a smart oven or any other smart object running OS Android with a display. NOTE.– Fragments, introduced with the Honeycomb version 3.0 and Android API 11, ease the modular management of views and the display on screens of different sizes and shapes. A fragment is a piece of code in charge of controlling a view. It has its own lifecycle, and can thus be dynamically added to or removed from the activity based on the events or on the type of display detected during the creation of the activity (onCreate method).
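As a minimal illustration of the paragraph above, the following activity sets a layout in onCreate and retrieves one control from it. The layout resource activity_main and the id title_text are hypothetical names used only for this sketch, not files from this project.

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;

    // Minimal activity that "sets" a layout in onCreate and retrieves a control from it.
    // R.layout.activity_main and R.id.title_text are hypothetical resource names.
    public class MainActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);      // bind the XML layout to this activity

            TextView title = (TextView) findViewById(R.id.title_text);
            title.setText("Hello from the layout");      // controls can now be adapted at runtime
        }
    }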

Handling the user’s actions The user’s actions are handled at the activity and/or layout and/or control levels, either by overriding inherited callback methods or by implementing an interface or a callback object, for example onClickListener, onTouchListener, onScrollListener, etc. The source code gives an example of how touch events are triggered and processed after a finger slides on the screen, by managing the x and y coordinates corresponding to the top left and bottom right corners of the virtual rectangle shaped by the fingers’ positions relative to the screen coordinates: at the start of the movement (ACTION_DOWN), when a second finger touches the screen (ACTION_POINTER_DOWN), during the movement (ACTION_MOVE), and when the movement finishes as the user lifts his/her fingers from the screen (ACTION_UP). In the same way, when the user presses the system “return” and “home” keys, the event is sent to the onKeyDown callback that can be overridden in the activity, as illustrated in the code. Actions on menus can also be intercepted (or processed), for example, by overriding the onMenuItemSelected method.
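A short, illustrative pair of overrides showing how such touch and key events can be intercepted in an activity; the log tag and the coordinate bookkeeping are examples rather than the book’s exact code.

    import android.app.Activity;
    import android.util.Log;
    import android.view.KeyEvent;
    import android.view.MotionEvent;

    // Illustrative overrides showing how touch and key events reach an activity.
    public class TouchDemoActivity extends Activity {

        private static final String TAG = "TouchDemo";
        private float startX, startY;

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getActionMasked()) {
                case MotionEvent.ACTION_DOWN:          // first finger touches the screen
                    startX = event.getX();
                    startY = event.getY();
                    return true;
                case MotionEvent.ACTION_POINTER_DOWN:  // a second finger touches the screen
                    Log.d(TAG, "pointer count: " + event.getPointerCount());
                    return true;
                case MotionEvent.ACTION_MOVE:          // fingers slide across the screen
                    Log.d(TAG, "moved to (" + event.getX() + ", " + event.getY() + ")");
                    return true;
                case MotionEvent.ACTION_UP:            // last finger lifted: gesture finished
                    Log.d(TAG, "gesture started at (" + startX + ", " + startY + ")");
                    return true;
                default:
                    return super.onTouchEvent(event);
            }
        }

        @Override
        public boolean onKeyDown(int keyCode, KeyEvent event) {
            if (keyCode == KeyEvent.KEYCODE_BACK) {    // intercept the system "return" key
                Log.d(TAG, "back key pressed");
                return true;                           // consume the event
            }
            return super.onKeyDown(keyCode, event);
        }
    }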

Compiling and testing an Android application By default in the Eclipse IDE, Android project compilations are done automatically. This can sometimes be a source of trouble while coding is in progress (error messages, slowness due to compilation in the background). You can deactivate this behaviour through the Project menu by unchecking the “Build automatically” option. The “Build project” option is then available in the shortcut menu of the project (right click at the project root in the packages explorer). Sometimes, a project can contain errors you might not understand: this can be due to previously generated binary objects which are no longer compatible with the latest version of the source code (e.g., removal of layouts). In this case, you should “clean” the generated binary objects from the Project|Clean menu (remember to do this every time this kind of problem occurs!).

Launching the application Once the application is ready for testing (no compilation errors), it can be launched from the Run|Run As|Android Application menu, which is also available in the project’s shortcut menu. We prefer to launch our application project directly on an Android device, because the emulator requires resources to be allocated to it, and this can overload the system and result in a very, even extremely, slow display.

Using the Android device emulator The Window|Android Virtual Device (AVD) Manager menu, as well as the associated tool framed in Figure 2.21, enables management of the Android device emulator. NOTE.– We may note that NFC applications are not easy to test using a virtual device: indeed, you will also need to install an NFC reader emulator as well as an NFC tag or contactless card emulator, which is very cumbersome to configure, and even then this will not ensure the same behaviors as on an NFC-enabled Android device. We thus strongly recommend testing applications with ALL the types of devices you wish to deploy your application on.

Using an Android device connected to the USB port The ADT ADB component detects the Android device connected to the USB port and allows launching the application directly on the plugged device. NOTE.– We will need to make sure that the installed OS Android version and the features of the device on which we launch the application are compatible with what was declared in the AndroidManifest file. Moreover, for the Android device to be detected by ADB, we need the appropriate ADB driver according to the model of the Android device connected to the system. NOTE.– Google’s drivers comply with a broad range of devices and can be manually installed from the extras folder found in the Android SDK directory (for example, in usual Windows environments, it can be found at C:\Program Files (x86)\Android\android-sdk\extras\google\usb_driver).

PREDICTING ASTHMA-RELATED EMERGENCY DEPARTMENT VISITS USING BIG DATA

ABSTRACT:

Asthma is one of the most prevalent and costly chronic conditions in the United States, and it cannot be cured. However, accurate and timely surveillance data could allow for timely and targeted interventions at the community or individual level. Current national asthma disease surveillance systems can have data availability lags of up to two weeks. Rapid progress has been made in gathering non-traditional, digital information to perform disease surveillance.

We introduce a novel method of using multiple data sources for predicting the number of asthma-related emergency department (ED) visits in a specific area. Twitter data, Google search interests and environmental sensor data were collected for this purpose. Our preliminary findings show that our model can predict the number of asthma ED visits based on near-real-time environmental and social media data with approximately 70% precision. The results can be helpful for public health surveillance, emergency department preparedness, and targeted patient interventions.

INTRODUCTION:

Asthma is one of the most prevalent and costly chronic conditions in the United States, with 25 million people affected. Asthma accounts for about two million emergency department (ED) visits, half a million hospitalizations, and 3,500 deaths, and incurs more than 50 billion dollars in direct medical costs annually. Moreover, asthma is a leading cause of lost productivity, with nearly 11 million missed school days and more than 14 million missed work days every year due to asthma. Although asthma cannot be cured, many of its adverse events can be prevented by appropriate medication use and avoidance of environmental triggers. The prediction of population- and individual-level risk for asthma adverse events using accurate and timely surveillance data could guide timely and targeted interventions, to reduce the societal burden of asthma. At the population level, current national asthma disease surveillance programs rely on weekly reports to the Centers for Disease Control and Prevention (CDC) of data collected from various local resources by state health departments.

Notoriously, such data have a lag-time of weeks, therefore providing retrospective information that is not amenable to proactive and timely preventive interventions. At the individual level, known predictors of asthma ED visits and hospitalizations include past acute care utilization, medication use, and sociodemographic characteristics. Common data sources for these variables include electronic medical records (EMR), medical insurance claims data, and population surveys, all of which, also, are subject to significant time lag. In an ongoing quality improvement project for asthma care, Parkland Center for Clinical Innovation (PCCI) researchers have built an asthma predictive model relying on a combination of EMR and claim data to predict the risk for asthma-related ED visits within three months of data collection [Unpublished reports from PCCI]. Although the model performance (C-statistic 72%) and prediction timeframe (three months) are satisfying, a narrower prediction timeframe potentially could provide additional risk-stratification for more efficiency and timeliness in resource deployment. For instance, resources might be prioritized to first serve patients at high risk for an asthma ED visit within 2 weeks of data collection, while being safely deferred for patients with a later predicted high-risk period.

Novel sources of timely data on population- and individual-level asthma activities are needed to provide additional temporal and geographical granularity to asthma risk stratification. Short of collecting information directly from individual patients (a time- and resource-intensive endeavor), readily available public data will have to be repurposed intelligently to provide the required information. There has been increasing interest in gathering non-traditional, digital information to perform disease surveillance. These include diverse datasets such as those stemming from social media, internet search, and environmental data. Twitter is an online social media platform that enables users to post and read 140-character messages called “tweets”. It is a popular data source for disease surveillance using social media since it can provide nearly instant access to real-time social opinions. More importantly, tweets are often tagged by geographic location and time stamps potentially providing information for disease surveillance.

Another notable non-traditional disease surveillance system has been a data-aggregating tool called Google Flu Trends, which uses aggregated search data to estimate flu activity. Google Flu Trends was quite successful in its estimation of influenza-like illness. It is based on Google’s search engine, which tracks how often a particular search term is entered relative to the total search volume across a particular area. This enables access to the latest data from web search interest trends on a variety of topics, including diseases like asthma. Air pollutants are known triggers for asthma symptoms and exacerbations. The United States Environmental Protection Agency (EPA) provides access to monitored air quality data collected at outdoor sensors across the country, which could be used as a data source for asthma prediction. Meanwhile, as health reform progresses, the quantity and variety of health records being made available electronically are increasing dramatically. In contrast to traditional disease surveillance systems, these new data sources have the potential to enable health organizations to respond to chronic conditions, like asthma, in real time. This in turn implies that health organizations can appropriately plan for staffing and equipment availability in a flexible manner. They can also provide early warning signals to the people at risk for asthma adverse events, and enable timely, proactive, and targeted preventive and therapeutic interventions.

LITERATURE SURVEY:

USE OF HANGEUL TWITTER TO TRACK AND PREDICT HUMAN INFLUENZA INFECTION

AUTHOR: Kim, Eui-Ki, et al.

PUBLISH: PloS one vol. 8, no.7, e69305, 2013.

EXPLANATION:

Influenza epidemics arise through the accumulation of viral genetic changes. The emergence of new virus strains coincides with a higher level of influenza-like illness (ILI), which is seen as a peak of a normal season. Monitoring the spread of an epidemic influenza in populations is a difficult and important task. Twitter is a free social networking service whose messages can improve the accuracy of forecasting models by providing early warnings of influenza outbreaks. In this study, we have examined the use of information embedded in the Hangeul Twitter stream to detect rapidly evolving public awareness or concern with respect to influenza transmission and developed regression models that can track levels of actual disease activity and predict influenza epidemics in the real world. Our prediction model using a delay mode provides not only a real-time assessment of the current influenza epidemic activity but also a significant improvement in prediction performance at the initial phase of ILI peak when prediction is of most importance.

A NEW AGE OF PUBLIC HEALTH: IDENTIFYING DISEASE OUTBREAKS BY ANALYZING TWEETS

AUTHOR: Krieck, Manuela, Johannes Dreesman, Lubomir Otrusina, and Kerstin Denecke.

PUBLISH: In Proceedings of Health Web-Science Workshop, ACM Web Science Conference. 2011.

EXPLANATION:

Traditional disease surveillance is a very time-consuming reporting process. Cases of notifiable diseases are reported to the different levels in the national health care system before actions can be taken. But early detection of disease activity followed by a rapid response is crucial to reduce the impact of epidemics. To address this challenge, alternative sources of information are investigated for disease surveillance. In this paper, the relevance of Twitter messages for outbreak detection is investigated from two directions. First, Twitter messages potentially related to disease outbreaks are retrospectively searched and analyzed. Second, incoming Twitter messages are assessed with respect to their relevance for outbreak detection. The studies show that Twitter messages can be – to a certain extent – highly relevant for early detection of hints of public health threats. According to the German Protection against Infection Act (Infektionsschutzgesetz (IfSG), 2001), traditional disease surveillance relies on data from mandatory reporting of cases by physicians and laboratories. They inform local county health departments (Landkreis), which in turn report to state health departments (Land). At the end of the reporting pipeline, the national surveillance institute (Robert Koch Institute) is informed about the outbreak. It is clear that these different stages of reporting take time and delay a timely reaction.

TOWARDS DETECTING INFLUENZA EPIDEMICS BY ANALYZING TWITTER MESSAGES

AUTHOR: Culotta, Aron.

PUBLISH: In Proceedings of the first workshop on social media analytics, pp. 115-122. ACM, 2010.

EXPLANATION:

Rapid response to a health epidemic is critical to reduce loss of life. Existing methods mostly rely on expensive surveys of hospitals across the country, typically with lag times of one to two weeks for influenza reporting, and even longer for less common diseases. In response, there have been several recently proposed solutions to estimate a population’s health from Internet activity, most notably Google’s Flu Trends service, which correlates search term frequency with influenza statistics reported by the Centers for Disease Control and Prevention (CDC). In this paper, we analyze messages posted on the micro-blogging site Twitter.com to determine if a similar correlation can be uncovered. We propose several methods to identify influenza-related messages and compare a number of regression models to correlate these messages with CDC statistics. Using over 500,000 messages spanning 10 weeks, we find that our best model achieves a correlation of .78 with CDC statistics by leveraging a document classifier to identify relevant messages.

SYSTEM ANALYSIS

EXISTING SYSTEM:

With the increased availability of information on the Web, a new research area has developed in recent years, namely Infodemiology. It can be defined as the “science of distribution and determinants of information in an electronic medium, specifically the Internet, or in a population, with the ultimate aim to inform public health and public policy”. As part of this research area, several kinds of data have been studied for their applicability in the context of disease surveillance. Google Flu Trends exploits search behavior to monitor current flu-related disease activity. Carneiro and Mylonakis showed that Google Flu Trends can detect regional outbreaks of influenza 7–10 days before conventional Centers for Disease Control and Prevention surveillance systems.

The relevance of Twitter messages for disease outbreak detection has already been reported; in particular, tweets have been shown to be useful for predicting outbreaks such as a Norovirus outbreak at a university, and Twitter activity was analysed during the 2009 influenza epidemic. That study compared the use of the terms “H1N1” and “swine flu” over time. Furthermore, the authors analysed the content of the tweets (ten content concepts) and validated Twitter as a real-time content source. They analysed the data via Infovigil, an infosurveillance system, using automated coding. To find out whether there is a relationship between automated and manual coding, the tweets were evaluated with a Pearson’s correlation; Chew et al. found a significant correlation between both codings for seven content concepts. It still needs to be investigated whether this source might be relevant for detecting disease outbreaks in Germany. Therefore, only German keywords are exploited to identify Twitter messages. Further, we are not only interested in influenza-like illnesses, as in the studies available so far, but also in other infectious diseases (e.g. Norovirus and Salmonella).

DISADVANTAGES:

Existing methods rely on messages with a common format: [username] [text] [date time client]. The length is restricted to 140 characters. In terms of linguistics, each Twitter user can write as he or she likes, so the variety ranges from complete sentences to listings of keywords. Hashtags, i.e. terms combined with a hash sign (e.g. #flu), denote topics and are primarily utilized by experienced users. Categorized according to their contents, such messages can provide information, express opinions, or report personal issues. Whatever information is provided, its authority normally cannot be determined, so it might be unverified information. Opinions are often expressed with humor or sarcasm and may be highly contradictory in the emotions that are expressed.

PROPOSED SYSTEM:

We propose methods to leverage social media, internet search, and environmental air quality data to estimate ED visits for asthma in a relatively discrete geographic area (a metropolitan area) within a relatively short time period (days). To this end, we have gathered asthma-related ED visit data, social media data from Twitter, internet users’ search interests from Google, and pollution sensor data from the EPA, all from the same geographic area and time period, to create a model for predicting asthma-related ED visits. This work differs from extant studies that typically predict the spread of contagious diseases using social media such as Twitter. Unlike influenza or other viral diseases, asthma is a non-communicable health condition, and we demonstrate the utility and value of linking big data from diverse sources in developing predictive models for non-communicable diseases, with a specific focus on asthma.

Research studies have explored the use of novel data sources to propose rapid, cost-effective health status surveillance methodologies. Some of the early studies rely on document classification, suggesting that Twitter data can be highly relevant for early detection of public health threats. Others employ more complex linguistic analysis, such as the Ailment Topic Aspect Model, which is useful for syndromic surveillance. This type of analysis is useful for demonstrating the significance of social media as a promising new data source for health surveillance. Other recent studies have linked social media data with real-world disease incidence to generate actionable knowledge useful for making health care decisions. These include studies which analyzed Twitter messages related to influenza, correlated them with reported CDC statistics, and validated Twitter as a real-time content, sentiment, and public attention trend-tracking tool. Collier employed supervised classifiers (SVM and Naive Bayes) to classify tweets into four self-reported protective behavior categories. This study adds to the evidence supporting a high degree of correlation between pre-diagnostic social media signals and diagnostic influenza case data.

ADVANTAGES:

Our work uses a combination of data from multiple sources to predict the number of asthma-related ED visits in near real-time. In doing so, we exploit the geographic information associated with each dataset. We describe techniques to process multiple types of datasets, extract signals from each, integrate them, and feed them into a prediction model using machine learning algorithms, and we demonstrate the feasibility of such a prediction.
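As a toy illustration of integrating signals from several sources into one prediction, the sketch below combines three already-extracted daily features (asthma-related tweet count, relative search interest, and an air quality index) in a linear model. The feature names, weights, and intercept are placeholders invented for this example; in the actual work the model is fitted to historical data with machine learning algorithms.

    // Toy linear model combining three daily signals into a predicted count of
    // asthma-related ED visits. The weights and intercept are made-up placeholders;
    // in practice they would be learned from historical data.
    public class AsthmaVisitPredictorSketch {

        private final double intercept;
        private final double tweetWeight;
        private final double searchWeight;
        private final double airQualityWeight;

        public AsthmaVisitPredictorSketch(double intercept, double tweetWeight,
                                          double searchWeight, double airQualityWeight) {
            this.intercept = intercept;
            this.tweetWeight = tweetWeight;
            this.searchWeight = searchWeight;
            this.airQualityWeight = airQualityWeight;
        }

        // Predict ED visits for one day in one metropolitan area.
        public double predict(int asthmaTweets, double searchInterest, double airQualityIndex) {
            return intercept
                    + tweetWeight * asthmaTweets
                    + searchWeight * searchInterest
                    + airQualityWeight * airQualityIndex;
        }

        public static void main(String[] args) {
            AsthmaVisitPredictorSketch model = new AsthmaVisitPredictorSketch(5.0, 0.08, 0.12, 0.25);
            System.out.println(model.predict(120, 45.0, 60.0)); // hypothetical daily inputs
        }
    }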

The main contributions of this work are:

  • Analysis of tweets with respect to their relevance for disease surveillance,
  • Content analysis and content classification of tweets,
  • Linguistic analysis of disease-reporting twitter messages,
  • Recommendations on search patterns for tweet search in the context of disease surveillance.

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENT:

  • Processor      –    Pentium IV

  • Speed       –    1 GHz
  • RAM       –    256 MB (min)
  • Hard Disk      –   20 GB
  • Floppy Drive       –    1.44 MB
  • Key Board      –    Standard Windows Keyboard
  • Mouse       –    Two or Three Button Mouse
  • Monitor      –    SVGA

SOFTWARE REQUIREMENTS:

  • Operating System        :           Windows XP or Win7
  • Front End       :           JAVA JDK 1.7
  • Back End :           MYSQL Server
  • Server :           Apache Tomcat Server
  • Script :           JSP Script
  • Document :           MS-Office 2007

PERFORMING INITIATIVE DATA PREFETCHING IN DISTRIBUTED FILE SYSTEMS FOR CLOUD COMPUTING

ABSTRACT:

This paper presents an initiative data prefetching scheme on the storage servers in distributed file systems for cloud computing. In this prefetching technique, the client machines are not substantially involved in the process of data prefetching; instead, the storage servers can directly prefetch the data after analyzing the history of disk I/O access events, and then send the prefetched data to the relevant client machines proactively. To put this technique to work, the information about client nodes is piggybacked onto the real client I/O requests and then forwarded to the relevant storage server. Next, two prediction algorithms have been proposed to forecast future block access operations, to direct what data should be fetched on storage servers in advance.

Finally, the prefetched data can be pushed to the relevant client machine from the storage server. Through a series of evaluation experiments with a collection of application benchmarks, we have demonstrated that our presented initiative prefetching technique can benefit distributed file systems for cloud environments and achieve better I/O performance. In particular, configuration-limited client machines in the cloud are not responsible for predicting I/O access operations, which definitely contributes to preferable system performance on them.

INTRODUCTION

The assimilation of distributed computing for search engines, multimedia websites, and data-intensive applications has brought about the generation of data at unprecedented speed. For instance, the amount of data created, replicated, and consumed in the United States may double every three years through the end of this decade. In general, the file system deployed in a distributed computing environment is called a distributed file system, which is commonly used as a backend storage system to provide I/O services for various sorts of data-intensive applications in cloud computing environments. In fact, the distributed file system employs multiple distributed I/O devices by striping file data across the I/O nodes, and uses high aggregate bandwidth to meet the growing I/O requirements of distributed and parallel scientific applications.

However, because distributed file systems scale both numerically and geographically, the network delay is becoming the dominant factor in remote file system access [26], [34]. With regard to this issue, numerous data prefetching mechanisms have been proposed to hide the latency in distributed file systems caused by network communication and disk operations. In these conventional prefetching mechanisms, the client file system (which is a part of the file system and runs on the client machine) is supposed to predict future accesses by analyzing the history of I/O accesses that have occurred, without any application intervention. After that, the client file system may send relevant I/O requests to storage servers for reading the relevant data in. Consequently, applications that have intensive read workloads can automatically achieve not only better use of available bandwidth, but also fewer file operations via batched I/O requests through prefetching.

On the other hand, mobile devices generally have limited processing power, battery life and storage, while cloud computing offers an illusion of infinite computing resources. The mobile cloud computing research field emerged to combine mobile devices and cloud computing into a new infrastructure [45]. Namely, mobile cloud computing provides mobile applications with data storage and processing services in clouds, obviating the need to equip devices with a powerful hardware configuration, because all resource-intensive computing can be completed in the cloud. Thus, conventional prefetching schemes are not the best-suited optimization strategies for distributed file systems to boost I/O performance in mobile clouds, since these schemes require the client file systems running on client machines to proactively issue prefetching requests after analyzing the access events recorded by them, which inevitably places a burden on the client nodes.

Furthermore, since disk I/O events can reveal the disk tracks and thus offer critical information for I/O optimization tactics, certain prefetching techniques have been proposed in succession to read the data on the disk in advance after analyzing disk I/O traces. However, this kind of prefetching only works for local file systems, and the prefetched data is cached on the local machine to fulfill the application’s I/O requests passively. In brief, although block access history reveals the behavior of disk tracks, there are no prefetching schemes on storage servers in a distributed file system for yielding better system performance. The reason for this situation is the difficulty of modeling the block access history to generate block access patterns and of deciding the destination client machine to which the prefetched data should be pushed from the storage servers.

LITERATURE SURVEY

PARTIAL REPLICATION OF METADATA TO ACHIEVE HIGH METADATA AVAILABILITY IN PARALLEL FILE SYSTEMS

AUTHOR: J. Liao, Y. Ishikawa

PUBLISH: In the Proceedings of 41st International Conference on Parallel Processing (ICPP ’12), pp. 168–177, 2012.

EXPLANATION:

This paper presents PARTE, a prototype parallel file system with active/standby configured metadata servers (MDSs). PARTE replicates and distributes a part of files’ metadata to the corresponding metadata stripes on the storage servers (OSTs) with a per-file granularity, meanwhile the client file system (client) keeps certain sent metadata requests. If the active MDS has crashed for some reason, these client backup requests will be replayed by the standby MDS to restore the lost metadata. In case one or more backup requests are lost due to network problems or dead clients, the latest metadata saved in the associated metadata stripes will be used to construct consistent and up-to-date metadata on the standby MDS. Moreover, the clients and OSTs can work in both normal mode and recovery mode in the PARTE file system. This differs from conventional active/standby configured MDSs parallel file systems, which hang all I/O requests and metadata requests during restoration of the lost metadata. In the PARTE file system, previously connected clients can continue to perform I/O operations and relevant metadata operations, because OSTs work as temporary MDSs during that period by using the replicated metadata in the relevant metadata stripes. Through examination of experimental results, we show the feasibility of the main ideas presented in this paper for providing high availability metadata service with only a slight overhead effect on I/O performance. Furthermore, since previously connected clients are never hanged during metadata recovery, in contrast to conventional systems, a better overall I/O data throughput can be achieved with PARTE.

EVALUATING PERFORMANCE AND ENERGY IN FILE SYSTEM SERVER WORKLOADS

AUTHOR: P. Sehgal, V. Tarasov, E. Zadok

PUBLISH: the 8th USENIX Conference on File and Storage Technologies (FAST ’10), pp.253-266, 2010.

EXPLANATION:

Recently, power has emerged as a critical factor in designing components of storage systems, especially for power-hungry data centers. While there is some research into power-aware storage stack components, there are no systematic studies evaluating each component’s impact separately. This paper evaluates the file system’s impact on energy consumption and performance. We studied several popular Linux file systems, with various mount and format options, using the FileBench workload generator to emulate four server workloads: Web, database, mail, and file server. In case of a server node consisting of a single disk, CPU power generally exceeds disk-power consumption. However, file system design, implementation, and available features have a significant effect on CPU/disk utilization, and hence on performance and power. We discovered that default file system options are often suboptimal, and even poor. We show that a careful matching of expected workloads to file system types and options can improve power-performance efficiency by a factor ranging from 1.05 to 9.4 times.

FLEXIBLE, WIDEAREA STORAGE FOR DISTRIBUTED SYSTEMS WITH WHEELFS

AUTHOR: J. Stribling, Y. Sovran, I. Zhang and R. Morris et al

PUBLISH: In Proceedings of the 6th USENIX symposium on Networked systems design and implementation (NSDI’09), USENIX Association, pp. 43–58, 2009.

EXPLANATION:

WheelFS is a wide-area distributed storage system intended to help multi-site applications share data and gain fault tolerance. WheelFS takes the form of a distributed file system with a familiar POSIX interface. Its design allows applications to adjust the tradeoff between prompt visibility of updates from other sites and the ability for sites to operate independently despite failures and long delays. WheelFS allows these adjustments via semantic cues, which provide application control over consistency, failure handling, and file and replica placement. WheelFS is implemented as a user-level file system and is deployed on PlanetLab and Emulab. Three applications (a distributed Web cache, an email service and large file distribution) demonstrate that WheelFS’s file system interface simplifies construction of distributed applications by allowing reuse of existing software. These applications would perform poorly with the strict semantics implied by a traditional file system interface, but by providing cues to WheelFS they are able to achieve good performance. Measurements show that applications built on WheelFS deliver comparable performance to services such as CoralCDN and BitTorrent that use specialized wide-area storage systems.

SYSTEM ANALYSIS

EXISTING SYSTEM:

The file system deployed in a distributed computing environment is called a distributed file system, which is commonly used as a backend storage system to provide I/O services for various sorts of data intensive applications in cloud computing environments. In fact, the distributed file system employs multiple distributed I/O devices by striping file data across the I/O nodes, and uses high aggregate bandwidth to meet the growing I/O requirements of distributed and parallel scientific applications. Existing evaluations use the Sysbench benchmark to create OLTP workloads, since it is able to create OLTP workloads similar to those that exist in real systems. All the configured client file systems executed the same script, and each of them ran several threads that issue OLTP requests. Because Sysbench requires MySQL installed as a backend for OLTP workloads, we configured the mysqld process on 16 cores of the storage servers. As a consequence, it is possible to measure the response time to the client request while handling the generated workloads.

DISADVANTAGES:

  • Network delay becomes the dominant factor in remote file system access as the system scales numerically and geographically.
  • Mobile devices generally have limited processing power, battery life and storage

PROPOSED SYSTEM:

Prefetching techniques proposed so far read the data on the disk in advance after analyzing disk I/O traces, but this kind of prefetching only works for local file systems, and the prefetched data is cached on the local machine to fulfill the application’s I/O requests passively. In brief, although block access history reveals the behavior of disk tracks, there are no prefetching schemes on storage servers in a distributed file system for yielding better system performance, because of the difficulties in modeling the block access history to generate block access patterns and in deciding the destination client machine for the prefetched data. To yield attractive I/O performance in a distributed file system deployed in a mobile cloud environment, or in a cloud environment that has many resource-limited client machines, this paper presents an initiative data prefetching mechanism. The proposed mechanism first analyzes disk I/O tracks to predict future disk I/O accesses so that the storage servers can fetch data in advance, and then forwards the prefetched data to the relevant client file systems for potential future use.

This paper makes the following two contributions:

1) Chaotic time series prediction and linear regression prediction to forecast disk I/O access. We have modeled the disk I/O access operations and classified them into two kinds of access patterns, i.e. the random access pattern and the sequential access pattern. Therefore, in order to predict future I/O accesses belonging to the different access patterns as accurately as possible (note that the future I/O access indicates what data will be requested in the near future), two prediction algorithms, the chaotic time series prediction algorithm and the linear regression prediction algorithm, have been proposed respectively.

2) Initiative data prefetching on storage servers. Without any intervention from client file systems, except for piggybacking their information onto relevant I/O requests to the storage servers, the storage servers are supposed to log disk I/O accesses and classify access patterns after modeling disk I/O events. Next, by properly using the two proposed prediction algorithms, the storage servers can predict the future disk I/O access to guide data prefetching. Finally, the storage servers proactively forward the prefetched data to the relevant client file systems to satisfy future application requests.
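As an illustration of the linear-regression side of the first contribution (the chaotic time series predictor is omitted here), the sketch below fits a least-squares line over the offsets of recent block accesses and extrapolates one step ahead to decide what to prefetch; the history length and offset values are made up for the example.

    // Minimal sketch of the linear-regression idea for sequential access patterns:
    // fit offset = slope * step + intercept over the last few block accesses and
    // extrapolate the next offset to prefetch. History and values are illustrative.
    public class LinearPrefetchSketch {

        // Least-squares fit over recent offsets (at steps 0..n-1), then predict step n.
        static long predictNextOffset(long[] recentOffsets) {
            int n = recentOffsets.length;
            if (n < 2) {
                return (n == 1) ? recentOffsets[0] : 0L;   // not enough history to fit a line
            }
            double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
            for (int i = 0; i < n; i++) {
                sumX += i;
                sumY += recentOffsets[i];
                sumXY += i * (double) recentOffsets[i];
                sumXX += (double) i * i;
            }
            double slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
            double intercept = (sumY - slope * sumX) / n;
            return Math.round(slope * n + intercept);      // extrapolate one step ahead
        }

        public static void main(String[] args) {
            long[] history = {4096, 8192, 12288, 16384};     // a sequential pattern
            System.out.println(predictNextOffset(history));  // prints 20480
        }
    }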

ADVANTAGES:

  • Applications that have intensive read workloads can automatically achieve better use of available bandwidth.
  • Fewer file operations are needed, thanks to batched I/O requests through prefetching.
  • Cloud computing offers an illusion of infinite computing resources

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENT:

  • Processor      –    Pentium IV

  • Speed       –    1 GHz
  • RAM       –    256 MB (min)
  • Hard Disk      –   20 GB
  • Floppy Drive       –    1.44 MB
  • Key Board      –    Standard Windows Keyboard
  • Mouse       –    Two or Three Button Mouse
  • Monitor      –    SVGA

SOFTWARE REQUIREMENTS:

JAVA

  • Operating System        :           Windows XP or Win7
  • Front End       :           JAVA JDK 1.7
  • Script :           Java Script
  • Document :           MS-Office 2007

PASSIVE IP TRACEBACK: DISCLOSING THE LOCATIONS OF IP SPOOFERS FROM PATH BACKSCATTER

ABSTRACT:

It has long been known that attackers may use forged source IP addresses to conceal their real locations. To capture the spoofers, a number of IP traceback mechanisms have been proposed. However, due to the challenges of deployment, there has not been a widely adopted IP traceback solution, at least at the Internet level. As a result, the mist on the locations of spoofers has never been dissipated till now.

This paper proposes passive IP traceback (PIT), which bypasses the deployment difficulties of IP traceback techniques. PIT investigates Internet Control Message Protocol error messages (named path backscatter) triggered by spoofing traffic, and tracks the spoofers based on publicly available information (e.g., topology). In this way, PIT can find the spoofers without any deployment requirement.

This paper illustrates the causes, collection, and statistical results of path backscatter messages, demonstrates the process and effectiveness of PIT, and shows the captured locations of spoofers obtained by applying PIT to the path backscatter data set.

These results can help further reveal IP spoofing, which has been studied for a long time but never well understood. Though PIT cannot work in all spoofing attacks, it may be the most useful mechanism to trace spoofers before an Internet-level traceback system is deployed in practice.

INTRODUCTION

IP spoofing, in which attackers launch attacks with forged source IP addresses, has long been recognized as a serious security problem on the Internet. By using addresses that are assigned to others or not assigned at all, attackers can avoid exposing their real locations, amplify the effect of an attack, or launch reflection-based attacks. A number of notorious attacks rely on IP spoofing, including SYN flooding, SMURF, and DNS amplification. A DNS amplification attack that severely degraded the service of a Top Level Domain (TLD) name server has been reported. Though there is a popular conventional wisdom that DoS attacks are launched from botnets and spoofing is no longer critical, the report of ARBOR at the NANOG 50th meeting shows that spoofing is still significant in observed DoS attacks. Indeed, based on the backscatter messages captured by the UCSD Network Telescope, spoofing activities are still frequently observed.

Capturing the origins of IP spoofing traffic is of great importance. As long as the real locations of spoofers are not disclosed, they cannot be deterred from launching further attacks. Even just approaching the spoofers, for example by determining the ASes or networks they reside in, narrows the attackers down to a smaller area and allows filters to be placed closer to the attacker before the attacking traffic gets aggregated. Last but not least, identifying the origins of spoofing traffic can help build a reputation system for ASes, which would help push the corresponding ISPs to verify IP source addresses.

Instead of proposing another IP traceback mechanism with improved tracking capability, we propose a novel solution, named Passive IP Traceback (PIT), to bypass the challenges in deployment. Routers may fail to forward an IP spoofing packet for various reasons, e.g., an exceeded TTL. In such cases, the routers may generate an ICMP error message (named path backscatter) and send the message to the spoofed source address. Because the routers can be close to the spoofers, the path backscatter messages may potentially disclose the locations of the spoofers. PIT exploits these path backscatter messages to find the location of the spoofers. Once the locations of the spoofers are known, the victim can seek help from the corresponding ISP to filter out the attacking packets, or take other countermeasures. PIT is especially useful for the victims in reflection-based spoofing attacks, e.g., DNS amplification attacks, since the victims can find the locations of the spoofers directly from the attacking traffic.
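To make the idea concrete, here is a minimal Java sketch, under the assumption that the victim can capture raw IPv4 datagrams, of extracting the fields PIT cares about from a path backscatter message: the outer source address identifies the router that generated the ICMP error (likely close to the spoofer), and the embedded original header shows where the spoofed packet was heading. The class and method names are hypothetical and not part of the paper's tooling.

// Hypothetical helper: pull the PIT-relevant fields out of a raw IPv4 datagram
// carrying an ICMP error (path backscatter) message.
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;

public final class PathBackscatterParser {

    public static void parse(byte[] datagram) throws UnknownHostException {
        int outerIhl = (datagram[0] & 0x0F) * 4;                  // outer IPv4 header length
        InetAddress router = InetAddress.getByAddress(Arrays.copyOfRange(datagram, 12, 16));

        int icmpType = datagram[outerIhl] & 0xFF;                 // 11 = time exceeded, 3 = unreachable
        int inner = outerIhl + 8;                                 // ICMP header is 8 bytes
        // Embedded original header: source is the spoofed address (typically the victim itself),
        // destination is where the spoofed packet was actually heading.
        InetAddress spoofedSrc = InetAddress.getByAddress(Arrays.copyOfRange(datagram, inner + 12, inner + 16));
        InetAddress originalDst = InetAddress.getByAddress(Arrays.copyOfRange(datagram, inner + 16, inner + 20));

        System.out.println("ICMP type " + icmpType
                + ": router " + router.getHostAddress()           // reflecting device, near the spoofer
                + " saw a packet claiming src " + spoofedSrc.getHostAddress()
                + " towards " + originalDst.getHostAddress());
    }

    public static void main(String[] args) throws UnknownHostException {
        byte[] pkt = new byte[48];
        pkt[0] = 0x45;                                                          // outer IPv4, 20-byte header
        System.arraycopy(new byte[]{(byte) 198, 51, 100, 1}, 0, pkt, 12, 4);    // router (outer source)
        pkt[20] = 11;                                                           // ICMP time exceeded
        pkt[28] = 0x45;                                                         // embedded original IPv4 header
        System.arraycopy(new byte[]{(byte) 203, 0, 113, 7}, 0, pkt, 40, 4);     // spoofed source
        System.arraycopy(new byte[]{(byte) 192, 0, 2, 9}, 0, pkt, 44, 4);       // original destination
        parse(pkt);
    }
}

PIT then maps the router address onto the known topology (and routing, when available) to localize the spoofer, as discussed in the sections outlined next.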

In this article, we first illustrate the generation, types, collection, and security issues of path backscatter messages in Section III. Then, in Section IV, we present PIT, which tracks the location of the spoofers based on path backscatter messages together with the topology and routing information. We discuss how to apply PIT when both topology and routing are known, when only the topology is known, and when neither is known. We also present two effective algorithms to apply PIT in large-scale networks. In the following section, we first show the statistical results on path backscatter messages. Then we evaluate the two key mechanisms of PIT that work without routing information. Finally, we give the tracking result of applying PIT to the path backscatter message dataset: a number of ASes in which spoofers are found.

Our work has the following contributions:

1) This is the first article known to us that deeply investigates path backscatter messages. These messages are valuable for understanding spoofing activities. Though Moore et al. [8] exploited backscatter messages, which are generated by the targets of spoofing packets, to study Denial of Service (DoS), path backscatter messages, which are sent by intermediate devices rather than the targets, have not previously been used in traceback.

2) A practical and effective IP traceback solution based on path backscatter messages, i.e., PIT, is proposed. PIT bypasses the deployment difficulties of existing IP traceback mechanisms and is effective right now, as it requires no new deployment. Given the limitation that path backscatter messages are not generated with stable probability, PIT cannot work for all attacks, but it does work for a number of spoofing activities. At least, it may be the most useful traceback mechanism before an AS-level traceback system is deployed in practice.

3) By applying PIT to the path backscatter dataset, a number of locations of spoofers are captured and presented. Though this is not a complete list, it is the first known list disclosing the locations of spoofers.

LITERATURE SURVEY

DEFENSE AGAINST SPOOFED IP TRAFFIC USING HOP-COUNT FILTERING

PUBLICATION: IEEE/ACM Trans. Netw., vol. 15, no. 1, pp. 40–53, Feb. 2007.

AUTHORS: H. Wang, C. Jin, and K. G. Shin

EXPLANATION:

IP spoofing has often been exploited by Distributed Denial of Service (DDoS) attacks to: 1) conceal flooding sources and dilute localities in flooding traffic, and 2) coax legitimate hosts into becoming reflectors, redirecting and amplifying flooding traffic. Thus, the ability to filter spoofed IP packets near victim servers is essential to their own protection as well as to preventing them from becoming involuntary DoS reflectors. Although an attacker can forge any field in the IP header, he cannot falsify the number of hops an IP packet takes to reach its destination. More importantly, since hop-count values are diverse, an attacker cannot randomly spoof IP addresses while maintaining consistent hop-counts. On the other hand, an Internet server can easily infer the hop-count information from the Time-to-Live (TTL) field of the IP header. Using a mapping between IP addresses and their hop-counts, the server can distinguish spoofed IP packets from legitimate ones. Based on this observation, we present a novel filtering technique, called Hop-Count Filtering (HCF), which builds an accurate IP-to-hop-count (IP2HC) mapping table to detect and discard spoofed IP packets. HCF is easy to deploy, as it does not require any support from the underlying network. Through analysis using network measurement data, we show that HCF can identify close to 90% of spoofed IP packets and then discard them with little collateral damage. We implement and evaluate HCF in the Linux kernel, demonstrating its effectiveness with experimental measurements.
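A minimal sketch of the hop-count check described above is given below. It assumes the common default initial TTL values (32, 64, 128, 255) and a toy in-memory IP-to-hop-count table, whereas the actual HCF system builds and maintains the IP2HC mapping from measured traffic; all names are illustrative.

// Illustrative hop-count check: infer hop count as (nearest default initial TTL) - (observed TTL)
// and flag packets whose hop count contradicts what was previously learned for that source.
import java.util.HashMap;
import java.util.Map;

public final class HopCountFilter {

    private static final int[] INITIAL_TTLS = {32, 64, 128, 255};
    private final Map<String, Integer> ip2hc = new HashMap<>();   // learned IP -> hop count

    public void learn(String srcIp, int observedTtl) {
        ip2hc.put(srcIp, inferHopCount(observedTtl));
    }

    /** Returns true if the packet's hop count contradicts what was learned for this source. */
    public boolean looksSpoofed(String srcIp, int observedTtl) {
        Integer expected = ip2hc.get(srcIp);
        return expected != null && expected != inferHopCount(observedTtl);
    }

    private static int inferHopCount(int observedTtl) {
        for (int initial : INITIAL_TTLS) {
            if (observedTtl <= initial) return initial - observedTtl;
        }
        return 0;                                                  // TTL above 255 cannot occur
    }

    public static void main(String[] args) {
        HopCountFilter hcf = new HopCountFilter();
        hcf.learn("203.0.113.7", 49);                              // 64 - 49 = 15 hops
        System.out.println(hcf.looksSpoofed("203.0.113.7", 49));   // false: consistent hop count
        System.out.println(hcf.looksSpoofed("203.0.113.7", 120));  // true: 128 - 120 = 8 hops
    }
}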

DYNAMIC PROBABILISTIC PACKET MARKING FOR EFFICIENT IP TRACEBACK

PUBLICATION: Comput. Netw., vol. 51, no. 3, pp. 866–882, 2007.

AUTHORS: J. Liu, Z.-J. Lee, and Y.-C. Chung

EXPLANATION:

Recently, denial-of-service (DoS) attack has become a pressing problem due to the lack of an efficient method to locate the real attackers and ease of launching an attack with readily available source codes on the Internet. Traceback is a subtle scheme to tackle DoS attacks. Probabilistic packet marking (PPM) is a new way for practical IP traceback. Although PPM enables a victim to pinpoint the attacker’s origin to within 2–5 equally possible sites, it has been shown that PPM suffers from uncertainty under spoofed marking attack. Furthermore, the uncertainty factor can be amplified significantly under distributed DoS attack, which may diminish the effectiveness of PPM. In this work, we present a new approach, called dynamic probabilistic packet marking (DPPM), to further improve the effectiveness of PPM. Instead of using a fixed marking probability, we propose to deduce the traveling distance of a packet and then choose a proper marking probability. DPPM may completely remove uncertainty and enable victims to precisely pinpoint the attacking origin even under spoofed marking DoS attacks. DPPM supports incremental deployment. Formal analysis indicates that DPPM outperforms PPM in most aspects.
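The core idea of DPPM can be sketched as follows: a router deduces how far a packet has already traveled from its TTL and marks the packet with probability 1/distance, which makes each router on the path roughly equally likely to be the one whose mark reaches the victim. The class below is an illustrative assumption under that reading, not the authors' code.

// Simplified sketch of the dynamic marking decision in DPPM (names are illustrative).
import java.util.concurrent.ThreadLocalRandom;

public final class DynamicMarkingSketch {

    private static final int[] INITIAL_TTLS = {32, 64, 128, 255};

    /** Decide whether this router should overwrite the packet's marking field. */
    public static boolean shouldMark(int observedTtl) {
        int distance = Math.max(1, inferDistance(observedTtl));   // hops traveled so far
        return ThreadLocalRandom.current().nextDouble() < 1.0 / distance;
    }

    private static int inferDistance(int observedTtl) {
        for (int initial : INITIAL_TTLS) {
            if (observedTtl <= initial) return initial - observedTtl;
        }
        return 0;
    }

    public static void main(String[] args) {
        // A packet one hop from its source is always marked; a distant one only occasionally.
        System.out.println(shouldMark(63));    // distance 1 -> marked with probability 1
        System.out.println(shouldMark(44));    // distance 20 -> marked with probability 1/20
    }
}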

FLEXIBLE DETERMINISTIC PACKET MARKING: AN IP TRACEBACK SYSTEM TO FIND THE REAL SOURCE OF ATTACKS

PUBLICATION: IEEE Trans. Parallel Distrib. Syst., vol. 20, no. 4, pp. 567–580, Apr. 2009.

AUTHORS: Y. Xiang, W. Zhou, and M. Guo

EXPLANATION:

IP traceback is the enabling technology to control Internet crime. In this paper we present a novel and practical IP traceback system called Flexible Deterministic Packet Marking (FDPM), which provides a defense system with the ability to find out the real sources of attacking packets that traverse the network. While a number of other traceback schemes exist, FDPM provides innovative features to trace the source of IP packets and can obtain better tracing capability than others. In particular, FDPM adopts a flexible mark-length strategy to make it compatible with different network environments; it also adaptively changes its marking rate according to the load of the participating router through a flexible flow-based marking scheme. Evaluations on both simulation and a real system implementation demonstrate that FDPM requires a moderately small number of packets to complete the traceback process, adds little additional load to routers, and can trace a large number of sources in one traceback process with low false positive rates. The built-in overload prevention mechanism makes this system capable of achieving a satisfactory traceback result even when the router is heavily loaded. It has been used not only to trace DDoS attack packets but also to enhance the filtering of attack traffic.

SYSTEM ANALYSIS

EXISTING SYSTEM:

In existing IP marking approaches, routers probabilistically write some encoding of partial path information into packets during forwarding. A basic technique, the edge sampling algorithm, writes edge information into the packets. This scheme reserves two static fields of the size of an IP address, start and end, and a static distance field in each packet. Each router updates these fields as follows. Each router marks the packet with some probability. When the router decides to mark the packet, it writes its own IP address into the start field and writes zero into the distance field. Otherwise, if the distance field is already zero, which indicates that the previous router marked the packet, it writes its own IP address into the end field, thus representing the edge between itself and the previous router.

If the router does not mark the packet, it always increments the distance field. Thus the distance field in the packet indicates the number of routers the packet has traversed from the router that marked it to the victim. The distance field should be updated using a saturating addition, meaning that the distance field is not allowed to wrap. The mandatory increment of the distance field is used to prevent an attacker from spoofing markings: under this scheme, any mark written by the attacker will carry a distance field greater than or equal to the length of the real attack path. We call a router a false positive if it is in the reconstructed attack graph but not in the real attack graph. Similarly, we call a router a false negative if it is in the true attack graph but not in the reconstructed attack graph. We call a solution to the IP traceback problem robust if it has very low rates of false negatives and false positives.
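The edge sampling marking procedure summarized above can be sketched as follows, with the packet's marking fields abstracted into a small helper class; field widths, encoding, and the path reconstruction at the victim are omitted, and the marking probability shown is only a commonly cited value, not one prescribed here.

// Sketch of the edge sampling marking procedure (fields and addresses abstracted).
import java.util.concurrent.ThreadLocalRandom;

public final class EdgeSampling {

    static final double MARK_PROBABILITY = 0.04;   // a commonly cited value in the literature

    static class Marking {
        String start = null;   // router that began the edge
        String end = null;     // next router on the edge
        int distance = 0;      // hops from 'start' to the victim
    }

    /** Marking procedure executed by router 'routerAddr' for each forwarded packet. */
    static void mark(Marking m, String routerAddr) {
        if (ThreadLocalRandom.current().nextDouble() < MARK_PROBABILITY) {
            m.start = routerAddr;      // start a new edge sample
            m.distance = 0;
        } else {
            if (m.distance == 0) {
                m.end = routerAddr;    // close the edge started by the previous router
            }
            m.distance++;              // saturating addition in the real scheme
        }
    }

    public static void main(String[] args) {
        Marking m = new Marking();
        String[] path = {"R1", "R2", "R3", "R4", "R5"};   // attacker -> victim
        for (String r : path) mark(m, r);
        System.out.println("edge " + m.start + " -> " + m.end + ", distance " + m.distance);
    }
}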

DISADVANTAGES:

  • The existing approach has a very high computation overhead for the victim to reconstruct the attack paths, and it gives a large number of false positives when the denial-of-service attack originates from multiple attackers.
  • The existing approach can require days of computation to reconstruct the attack paths and give thousands of false positives even when there are only 25 distributed attackers. This approach is also vulnerable to compromised routers.
  • If a router is compromised, it can forge markings from other uncompromised routers and hence lead the reconstruction to wrong results. Even worse, the victim will not be able to tell whether a router is compromised just from the information in the packets it receives.

PROPOSED SYSTEM:

We propose a novel solution, named Passive IP Traceback (PIT), to bypass the challenges in deployment. Routers may fail to forward an IP spoofing packet for various reasons, e.g., an exceeded TTL. In such cases, the routers may generate an ICMP error message (named path backscatter) and send the message to the spoofed source address. Because the routers can be close to the spoofers, the path backscatter messages may potentially disclose the locations of the spoofers. PIT exploits these path backscatter messages to find the location of the spoofers. Once the locations of the spoofers are known, the victim can seek help from the corresponding ISP to filter out the attacking packets, or take other countermeasures. PIT is especially useful for the victims in reflection-based spoofing attacks, e.g., DNS amplification attacks, since the victims can find the locations of the spoofers directly from the attacking traffic.

We present PIT, which tracks the location of the spoofers based on path backscatter messages together with the topology and routing information. We discuss how to apply PIT when both topology and routing are known, when only the topology is known, and when neither is known. We also present two effective algorithms to apply PIT in large-scale networks. We then show the statistical results on path backscatter messages, evaluate the two key mechanisms of PIT that work without routing information, and finally give the tracking result of applying PIT to the path backscatter message dataset: a number of ASes in which spoofers are found.

ADVANTAGES:

1) This is the first article known to us that deeply investigates path backscatter messages. These messages are valuable for understanding spoofing activities. Though Moore et al. [8] exploited backscatter messages, which are generated by the targets of spoofing packets, to study Denial of Service (DoS), path backscatter messages, which are sent by intermediate devices rather than the targets, have not previously been used in traceback.

2) A practical and effective IP traceback solution based on path backscatter messages, i.e., PIT, is proposed. PIT bypasses the deployment difficulties of existing IP traceback mechanisms and is effective right now, as it requires no new deployment. Given the limitation that path backscatter messages are not generated with stable probability, PIT cannot work for all attacks, but it does work for a number of spoofing activities. At least, it may be the most useful traceback mechanism before an AS-level traceback system is deployed in practice.

3) Through applying PIT on the path backscatter dataset, a number of locations of spoofers are captured and presented. Though this is not a complete list, it is the first known list disclosing the locations of spoofers.

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENT:

  • Processor      –    Pentium IV
  • Speed          –    1 GHz
  • RAM            –    256 MB (min)
  • Hard Disk      –    20 GB
  • Floppy Drive   –    1.44 MB
  • Keyboard       –    Standard Windows Keyboard
  • Mouse          –    Two or Three Button Mouse
  • Monitor        –    SVGA

SOFTWARE REQUIREMENTS:

  • Operating System        :           Windows XP or Win7
  • Front End       :           JAVA JDK 1.7
  • Document :           MS-Office 2007