International Journal of Computer Science and Technology
IJCST 8.4, Ver. 1 (October-December 2017)

S.No. Research Topic Paper ID Download
1

Improved Security and Efficiency with a Time-Based Tokenized System & CoAP for the Internet of Things (IoT)

Shivani Bilthare, Ashok Verma

Abstract

The Internet of Things (IoT) is a system of interrelated computing devices, mechanical and digital machines, objects, animals, or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. IoT interconnects the physical, cyber, and social spaces, and most of the devices involved are resource constrained. Interaction between devices exposes IoT systems to severe security challenges, so securing resource-constrained networks is of prime importance. Many existing mechanisms secure networks and systems but cannot provide fine-grained access control. This work focuses on enhancing the performance of an IoT system with high security and minimal resource usage on the constrained devices, i.e., the load associated with security is placed on resource-rich servers. The performance of the proposed CoAP-based framework is enhanced and compared against existing secure CoAP implementations, with test results compared in terms of communication overhead and authentication delay.
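As a rough illustration of the time-based token idea (a sketch, not the authors' exact scheme; the HMAC construction, 30-second window, and token length are all assumptions), a server can derive and verify short-lived tokens from a shared secret without keeping per-request state on the constrained device:

```python
import hmac, hashlib, struct, time

def time_token(secret: bytes, interval: int = 30) -> str:
    """Derive a short-lived token from a shared secret and the current
    time window (TOTP-style). The server recomputes the same token to
    authenticate a constrained client without per-request state."""
    counter = int(time.time()) // interval          # current time window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    return digest[:8].hex()                         # truncated token

def verify(secret: bytes, token: str, interval: int = 30) -> bool:
    # Accept the current and previous window to tolerate clock skew.
    now = int(time.time()) // interval
    for counter in (now, now - 1):
        msg = struct.pack(">Q", counter)
        expected = hmac.new(secret, msg, hashlib.sha256).digest()[:8].hex()
        if hmac.compare_digest(expected, token):
            return True
    return False
```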
Full Paper

IJCST/84/1/A-0863
2

Forecast of Scan Report Waiting Time for Patients in Big Data

Adhikari Ramya, P.R.Sudha Rani

Abstract

In this work, a patient's scan report waiting time is predicted from hospitals' historical data. The delay in delivering a scan report is forecast with the k-nearest neighbours algorithm, which groups historical scan report times: the algorithm stores all available cases and classifies the expected report time for a new scan, so that the patient's report can be delivered with less delay.
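A minimal sketch of the k-nearest-neighbours prediction the abstract describes, assuming Euclidean distance over hand-picked case features (the feature choice, units, and k are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def knn_predict_wait(history_X, history_y, query, k=5):
    """Predict a scan-report waiting time as the mean waiting time of the
    k most similar historical cases (Euclidean distance in feature space).
    history_X: (n, d) case features; history_y: (n,) observed wait times."""
    dists = np.linalg.norm(history_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return history_y[nearest].mean()

# Hypothetical features: [scan_type_code, pending_scans, hour_of_day]
X = np.array([[1, 4, 9], [1, 6, 14], [2, 2, 11], [2, 8, 16], [1, 5, 10]], float)
y = np.array([35.0, 50.0, 20.0, 70.0, 40.0])   # waiting times in minutes
print(knn_predict_wait(X, y, np.array([1, 5, 11]), k=3))
```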
Full Paper

IJCST/84/1/A-0864
3

Efficient Technique for Classifying High-Dimensional Data

K. Sai Sravani, Dr. P. Kiran Sree

Abstract

Classification problems in high-dimensional data with few observations are becoming increasingly common, particularly in microarray data. We consider two distinct types of online feature selection (OFS) tasks: 1) OFS by learning with full inputs, and 2) OFS by learning with partial inputs. In the first task, we assume the learner can access all the features of the training instances, and the goal is to efficiently identify a fixed number of relevant features for accurate prediction. In the second task, we consider a more challenging scenario in which the learner may access only a fixed, small number of features of each training instance when identifying the subset of relevant features. This work proposes a new evaluation measure, the Q-statistic, that incorporates the stability of the selected feature subset in addition to prediction accuracy. We then propose Booster, which boosts the value of the Q-statistic of the feature selection algorithm it is applied to. Empirical studies on synthetic data and 14 microarray data sets show that Booster improves not only the value of the Q-statistic but also the prediction accuracy of the applied algorithm, unless the data set is intrinsically difficult to predict with the given algorithm.
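The sketch below conveys the spirit of the Booster idea under stated assumptions (the paper's exact Q-statistic and resampling scheme are not reproduced here): run a base feature-selection algorithm over bootstrap resamples and keep the features chosen most consistently, so that selection stability rises along with accuracy:

```python
import numpy as np

def boosted_feature_selection(X, y, select_fn, n_boosts=10, n_feats=20, seed=0):
    """Hypothetical 'Booster' sketch: apply select_fn(X, y, n_feats) to
    several bootstrap resamples and return the most frequently selected
    features. Agreement across runs is the stability being boosted."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(X.shape[1])
    for _ in range(n_boosts):
        idx = rng.choice(len(X), size=len(X), replace=True)  # bootstrap sample
        for f in select_fn(X[idx], y[idx], n_feats):
            votes[f] += 1
    return np.argsort(votes)[::-1][:n_feats]                 # most-voted features

def topk_correlation(X, y, k):
    # Simple base selector: the k features most correlated with the label.
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(corr)[::-1][:k]
```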
Full Paper

IJCST/84/1/A-0865
4

Cloud Storage System with Secure Data Forwarding

V.Aadarsh, V.Sudharshan Rao

Abstract

A cloud storage system is a collection of storage servers that provides long-term storage services over the Internet, which has become central both to data transmission and to user-facing services; cloud computing is a major advancement of such Internet services. In a cloud storage system, a user's data are stored with a third party rather than on the user's own server, an arrangement referred to here as decentralisation. This creates a serious challenge: many users are worried about the security and privacy of data stored on cloud servers. To overcome this problem, a new technique is proposed that applies threshold-based proxy re-encryption on top of a decentralised erasure code, so that data remain secure while being stored on, and forwarded between, storage servers. Simulations of the proposed method have been conducted, with extensive analysis carried out on a test bed over a large number of data sets.
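As a toy illustration of the erasure-coding half of the proposal only (the paper combines a general decentralised erasure code with threshold proxy re-encryption; this (k=2, n=3) XOR code is an assumption-laden sketch, not the paper's scheme):

```python
def encode_2_of_3(data: bytes):
    """Toy (k=2, n=3) erasure code: split data into two halves plus an XOR
    parity block. Any two of the three blocks rebuild the file, so it
    survives the loss of one storage server. The caller must remember the
    original length to strip the padding byte on recovery."""
    if len(data) % 2:
        data += b"\x00"                        # pad to an even length
    a, b = data[: len(data) // 2], data[len(data) // 2 :]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def recover_from(a=None, b=None, parity=None):
    # Reconstruct a missing half from the surviving half and the parity.
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))
    if b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    return a + b
```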
Full Paper

IJCST/84/1/A-0866
5

Study of Information Extraction and Optical Character Recognition

Swayanshu Shanti Pragnya

Abstract

Techniques, and their application to human convenience, advance in a ceaseless process, and with them arises the familiar question of how data are stored and retrieved. Text mining is in high demand and has become one of the most interesting of the various data processes. As the name suggests, extraction means retrieving from the lineage of data, and information is any knowledge obtained from data; taken together, tracing this lineage of data is what information extraction does. Here, information is extracted from images. Information extraction comprises several parts: its type, orientation, and process, and finally a technique for executing the whole. This paper therefore covers the basics of information extraction, including its types, conditions, and process, and concludes, through a study of different journals, with the Optical Character Recognition (OCR) tool, which can make the whole extraction process efficient. The objective is to give an overall idea of information extraction from images and of the applicable tool, making the whole retrieval process convenient.
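As one concrete, hypothetical tool choice (the paper surveys OCR in general rather than prescribing a library), text can be pulled from an image with the Tesseract engine via its pytesseract Python wrapper, assuming both are installed:

```python
from PIL import Image
import pytesseract

def extract_text(image_path: str, lang: str = "eng") -> str:
    """Run OCR on an image file and return the recognized text."""
    img = Image.open(image_path).convert("L")   # greyscale often helps OCR
    return pytesseract.image_to_string(img, lang=lang)

if __name__ == "__main__":
    print(extract_text("scanned_page.png"))     # hypothetical input file
```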
Full Paper

IJCST/84/1/A-0867
6

Approximate Nearest Neighbor Search towards Removing the Curse of Dimensionality: Query-Aware and Locality-Sensitive Hashing

S Ratna Kumari, D Durga Prasad

Abstract

We present a simple vector quantizer that combines low distortion with fast reconstruction and apply it to approximate nearest neighbor (ANN) search in high-dimensional spaces. Using the very same data structure that is used for non-exhaustive search, i.e., inverted lists or a multi-index, the idea is to locally optimize an individual product quantizer (PQ) per cell and use it to encode residuals. The local optimization is over rotation and space decomposition; strikingly, we apply a parametric solution that assumes a normal distribution and is extremely fast to train. With a reasonable space and time overhead that is constant in the data size, we set a new state of the art on several public datasets, including one at billion scale. ANN search performs fast and effective retrieval as data sizes grow rapidly, and our method explores the quantization centroids over multiple relative subspaces. We propose an iterative approach to minimizing the quantization error in order to arrive at a novel quantization scheme that outperforms state-of-the-art algorithms, while the computational cost of our method remains comparable to that of the competing methods.
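A minimal sketch of a plain product quantizer, the building block the abstract refers to (the locally optimized per-cell variant and the residual encoding are not shown; m, k, the k-means rounds, and the sample-based initialisation are illustrative assumptions, and k=256 requires at least 256 training vectors):

```python
import numpy as np

def train_pq(X, m=4, k=256, iters=10, seed=0):
    """Split each vector into m sub-vectors and learn a k-word codebook
    per sub-space with a few rounds of k-means."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    sub = d // m
    codebooks = []
    for j in range(m):
        S = X[:, j * sub:(j + 1) * sub]
        C = S[rng.choice(n, k, replace=False)]     # init from samples
        for _ in range(iters):
            assign = np.argmin(((S[:, None] - C) ** 2).sum(-1), axis=1)
            for c in range(k):
                if (assign == c).any():
                    C[c] = S[assign == c].mean(0)
        codebooks.append(C)
    return codebooks

def encode(x, codebooks):
    # Represent a vector by m small centroid indices (one per sub-space).
    sub = len(x) // len(codebooks)
    return [int(np.argmin(((C - x[j * sub:(j + 1) * sub]) ** 2).sum(-1)))
            for j, C in enumerate(codebooks)]
```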
Full Paper

IJCST/84/1/A-0868
7

H2Hadoop: Improving Hadoop Performance Using Metadata of Related Jobs

S.Kanaka Lakshmi, K.Ramachandra Rao

Abstract

Cloud computing leverages the Hadoop framework for processing Big Data in parallel. Hadoop has certain limitations that could be exploited to execute jobs more efficiently, arising mainly from data locality in the cluster, job scheduling, CPU execution time, and resource allocation in Hadoop. Efficient resource allocation remains a challenge in cloud computing MapReduce platforms. This paper is a study of how to overcome these limitations.
Keywords: Big Data, Cloud Computing, Hadoop, Hadoop Performance, MapReduce.
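A hypothetical sketch of the metadata idea behind H2Hadoop (the names and data structure are assumptions, not the paper's implementation): record which input blocks turned out to be relevant to a completed job, so that a later related job scans only those blocks instead of the whole data set:

```python
# (dataset, pattern) -> set of block ids that contained the pattern
job_metadata = {}

def record_job(dataset, pattern, blocks_with_hits):
    """After a job finishes, store which blocks actually produced output."""
    job_metadata[(dataset, pattern)] = set(blocks_with_hits)

def plan_job(dataset, pattern, all_blocks):
    """Return the blocks a new job must scan: only the recorded relevant
    blocks when a related earlier job exists, else the full data set."""
    return job_metadata.get((dataset, pattern), set(all_blocks))

record_job("genome.db", "ACGT", {3, 17, 42})
print(plan_job("genome.db", "ACGT", range(100)))   # -> {3, 17, 42}
```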
Full Paper

IJCST/84/1/A-0869
8

A Performance Model for Estimating Jobs and Providing Resources

G Praveena, P J R Shalem Raju

Abstract

MapReduce has become a major computing model for data-intensive applications. Hadoop, an open-source implementation of MapReduce, has been adopted by a steadily growing user community. Cloud computing service providers such as Amazon EC2 offer Hadoop users the opportunity to lease a certain amount of resources and pay for their use. A key challenge, however, is that cloud service providers do not have a resource provisioning mechanism to satisfy user jobs with deadline requirements; currently, it is solely the user's responsibility to estimate the amount of resources required to run a job in the cloud. This paper presents a Hadoop job performance model that accurately estimates job completion time and further provisions the amount of resources required for a job to be completed within a deadline. The proposed model builds on historical job execution records and uses the Locally Weighted Linear Regression (LWLR) technique to estimate the execution time of a job. Furthermore, it uses the Lagrange multiplier technique for resource provisioning to satisfy jobs with deadline requirements. The proposed model is initially evaluated on an in-house Hadoop cluster and subsequently evaluated in the Amazon EC2 cloud. Experimental results show that the accuracy of the proposed model in job execution estimation is in the range of 94.97 to 95.51 percent, and jobs are completed within the required deadlines following the resource provisioning scheme of the proposed model.
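A minimal sketch of locally weighted linear regression, the estimation technique named in the abstract (the feature choice, bandwidth tau, and example history are illustrative assumptions, not the paper's data): fit a weighted least-squares model around each query, weighting nearby historical jobs more heavily:

```python
import numpy as np

def lwlr_predict(X, y, x_query, tau=1.0):
    """Predict a job's execution time with locally weighted linear
    regression. X: (n, d) historical job features; y: (n,) runtimes."""
    Xb = np.hstack([np.ones((len(X), 1)), X])          # add intercept column
    qb = np.concatenate([[1.0], x_query])
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
    W = np.diag(w)                                     # Gaussian kernel weights
    theta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ y
    return float(qb @ theta)

# Hypothetical history: [input GB, reduce tasks] -> runtime in seconds
X = np.array([[10, 4], [20, 4], [40, 8], [80, 8]], float)
y = np.array([120.0, 230.0, 310.0, 600.0])
print(lwlr_predict(X, y, np.array([30.0, 8.0]), tau=20.0))
```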
Full Paper

IJCST/84/1/A-0870