EACO: An Enhanced Ant Colony Optimization Algorithm for Task Scheduling in Cloud Computing
and Richa Jain, Banasthali Vidyapith, Tonk, Rajasthan, India
Cloud computing is emerging as an influential architecture for performing complex and large-scale computations. It provides on-demand access to services on a pay-as-you-go basis. In a cloud computing environment, task scheduling is an essential technique for allocating tasks to appropriate resources, ensuring proper resource utilization and optimizing overall system performance. In this paper, an Enhanced Ant Colony Optimization (EACO) algorithm is proposed that delivers improved task scheduling with minimum makespan while keeping cost under control. The algorithm's main contribution is minimizing the total completion time of tasks scheduled on resources. This is attained by splitting the ordered list of submitted tasks into bunches, i.e. sub-lists of tasks. The main goal of EACO is to minimize total execution time. The proposed EACO algorithm is simulated using the CloudSim toolkit and compared with an existing nature-inspired algorithm. The experimental results show that the presented algorithm improves results in terms of makespan.
Cloud computing, task scheduling, ant colony optimization (ACO), makespan.
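The bunch-splitting idea in the abstract can be illustrated with a minimal sketch. Note that the greedy least-loaded assignment and the fixed bunch size below are assumptions made for illustration, not the paper's actual pheromone-based EACO procedure:

```python
def schedule_in_bunches(tasks, n_vms, bunch_size):
    """Assign ordered tasks to VMs one bunch (sub-list) at a time,
    placing each task on the VM with the smallest current load.
    `tasks` is a list of task lengths; returns (assignment, makespan)."""
    loads = [0.0] * n_vms
    assignment = {}
    for start in range(0, len(tasks), bunch_size):
        bunch = tasks[start:start + bunch_size]      # one sub-list of tasks
        for i, length in enumerate(bunch, start=start):
            vm = loads.index(min(loads))             # least-loaded VM
            loads[vm] += length
            assignment[i] = vm
    makespan = max(loads)                            # total completion time
    return assignment, makespan
```

The makespan returned here is the quantity the EACO algorithm seeks to minimize; in the actual algorithm the per-task VM choice would be guided by pheromone trails rather than a simple least-loaded rule.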
Addressing the Application Portability Challenge in Cloud Computing
Jayaprakash Arjarapu, Oracle India Private Limited, Hyderabad, India
Cloud computing is a buzzword among technologists and the most recent revolution in the IT industry. It is drastically changing the way IT service providers operate and has, to some extent, moved the ball into the customers' court. For the business community, where customers were spending more time on IT than on their actual business, cloud computing is a wonderful opportunity that allows them to spend more time on their business than on IT. The huge market demand for cloud application platforms has resulted in a large number of platform offerings being readily available. Apart from the benefits customers can enjoy, there are also several challenges that need to be addressed before customers fully adopt cloud computing for their business. One such challenge is application portability. In this research work, we bring out the application portability challenge and focus on addressing the problem of porting applications from one platform to another. Our research also opens the door to future research directions toward improving application portability across different cloud platforms.
TGRID Location Service in Ad-Hoc Networks
Baktash Motlagh Farrokhlegha, Department of Computer, Technical & Vocational University, Urmia, West Azerbaijan, Iran
Geographic addresses are essential in position-based routing algorithms for mobile ad hoc networks: a node that intends to send a packet to some target node has to know the target's current position. A distributed location service is therefore required to provide each node's position to the other network nodes. The Hierarchical Grid Location Service (HGRID) has been known as a promising location service approach. In this paper we present a new approach called TGRID and describe the performance of a novel multi-level tree-walk grid location management protocol for large-scale ad hoc networks. The tree-walk grid location service mechanism is evaluated in GloMoSim against the well-known HGRID location service protocol under increasing node density and node speed. It is observed that TGRID outperforms HGRID in terms of packet delivery fraction and storage cost, and also maintains low control overhead in a uniformly randomly distributed network.
Location-based routing, location service, location management, mobile ad hoc networks, HGRID, TGRID
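The core idea of a hierarchical grid location service — a node registering its position with one responsible cell per level of a grid hierarchy — can be sketched as follows. The quadtree-style cell addressing and the world size here are illustrative assumptions; the actual TGRID and HGRID cell-selection rules differ:

```python
def grid_cell(x, y, level, world=256.0):
    """Return the (col, row) id of the level-`level` grid cell containing
    (x, y), in a quadtree-style hierarchy over a world x world area."""
    cells = 2 ** level              # cells per axis at this level
    size = world / cells            # side length of one cell
    return (int(x // size), int(y // size))

def location_servers(x, y, max_level=3):
    """A node registers its position with one server cell per hierarchy
    level, from the coarsest level down to the finest; a lookup then
    walks the same hierarchy to find the target's current cell."""
    return [grid_cell(x, y, lvl) for lvl in range(1, max_level + 1)]
```

A sender that only knows a coarse cell for the target can descend level by level, which is what keeps storage and update cost low compared with flooding position updates network-wide.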
Automated Regression Tests and Automated Test Optimisation for GETRV
Neil Kevin Patalita Arcolas and Shahid Ali, Department of Information Technology, AGI Institute, Auckland, New Zealand
Regression testing is a type of testing performed to validate that new changes pushed to the system do not have any adverse effect on existing features. Automated regression testing greatly reduces the time testers spend on these repetitive and mundane tests and allows them to work on more critical tests. The first problem addressed in this project is adding two automated regression scripts to increase the test coverage of the existing test automation framework. The second problem is optimising the automated regression test run to reduce test run times. To this end, redundant expressions were removed from the inner loops and handled once in the outermost loop of the automated test run. The project resulted in the addition of two automated test scripts to the automated test run and a significant run-time reduction of at least 60%.
Automated regression, Agile scrum, Automated test run
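The optimisation described — removing redundant expressions and handling them in the outermost loop — is essentially loop-invariant hoisting applied to test setup. A minimal sketch (the `login` setup step is a hypothetical example, not taken from the GETRV framework):

```python
def run_suite_naive(tests, login):
    """Each test repeats the same expensive setup: a redundant expression
    evaluated once per test."""
    for test in tests:
        session = login()          # executed on every iteration
        test(session)

def run_suite_hoisted(tests, login):
    """The redundant setup is hoisted to the outermost loop level, so it
    runs once per suite instead of once per test."""
    session = login()              # executed exactly once
    for test in tests:
        test(session)
```

With N tests, the hoisted version pays the setup cost once rather than N times, which is where suite-level run-time reductions of the kind reported come from when setup dominates.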
Predicting Weather Events Using Soft Computing Techniques
Hatim Aljuaed and Mohamed Alghamdi, Department of Computer Science and Computer Engineering, Umm Al-Qura University, Makkah, Saudi Arabia
Weather prediction is widely needed for many applications with different purposes, such as rainfall forecasting, agriculture, and aviation. It has become very helpful in everyday life for making more informed daily decisions, especially for those who live in countries with extreme or diverse weather conditions. Short-term prediction is also a critical mission: timely warnings can help save people's lives and protect property. This work suggests a system based on the integration of Fuzzy Logic (FL) and Artificial Neural Networks (ANNs). Combining the two soft computing techniques couples the accuracy of neural networks with the ease of use of fuzzy logic. Building on this literature, the work shows how soft computing using fuzzy logic and neural networks addresses the problem of weather forecasting for Makkah, in the Kingdom of Saudi Arabia, and extracts knowledge from known facts across different types of data.
Neural Networks, Fuzzy logic, soft computing, weather prediction.
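The fuzzy-logic half of such a hybrid system typically fuzzifies raw sensor readings into linguistic categories whose membership degrees can then be fed to an ANN as inputs. A minimal sketch of that fuzzification step (the category ranges below are illustrative assumptions, not the paper's membership functions):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: rises from a, peaks at b,
    falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_temp(t):
    """Fuzzify a temperature reading (degrees C) into linguistic
    categories; the resulting membership degrees are the kind of
    smoothed, interpretable features an ANN could take as input."""
    return {
        "cold": tri(t, -10.0, 5.0, 20.0),
        "warm": tri(t, 10.0, 25.0, 40.0),
        "hot":  tri(t, 30.0, 45.0, 60.0),
    }
```

Because neighbouring categories overlap, a reading like 15 °C is partly "cold" and partly "warm", which is what gives the fuzzy front end its tolerance to noisy measurements.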
An Overview of Steganography - A Data Hiding Technique
Priya Pareek and N. Monica, Students, Department of Computer Science and Engineering, GMRIT, Rajam, India
Steganography is a technique used to hide secret data inside other, innocuous-looking messages. The word steganography is derived from the ancient Greek words "stegano" (meaning hiding or covering) and "graph" (meaning to write). Steganography is quite different from cryptography. Cryptography is a technique used to encode information so that it stays secure: an encrypted message cannot be read without knowledge of the proper key. Steganography, in contrast, conceals the very existence of the message, so the hidden data needs no key at all, only to be discovered and extracted.
Steganography, Data hiding, Cryptography, Types of Steganography
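One of the simplest and most commonly taught forms of the data hiding described above is least-significant-bit (LSB) embedding, sketched here over raw bytes. A real implementation would operate on image pixel data, where flipping the lowest bit of each channel is visually imperceptible:

```python
def embed_lsb(cover, bits):
    """Hide a bit string in the least significant bit of successive
    cover bytes; each byte changes by at most 1."""
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | int(bit)   # clear the LSB, then set it
    return bytes(out)

def extract_lsb(stego, n_bits):
    """Read the hidden bits back out of the first n_bits bytes."""
    return "".join(str(b & 1) for b in stego[:n_bits])
```

Note that nothing here is encrypted: anyone who suspects LSB embedding can extract the bits, which is exactly the cryptography/steganography distinction drawn in the abstract.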
Opportunistic Routing Based on a Store-Carry-and-Forward Algorithm to Avoid Discontinuity in VANETs
Mohamed Anis MASTOURI and Salem Hasnaoui, Communication Systems (Sys'Com) Laboratory, National School of Engineers of Tunis (ENIT), Tunis, Tunisia
The publish/subscribe model, as defined by DDS, is essentially characterized by the decoupling of participants and by many-to-many interaction. These features are desirable properties for constructing distributed applications in the context of a mobile environment such as a VANET. VANETs are currently receiving increased attention from manufacturers and researchers seeking to improve safety on the roads and assist drivers. The goal of this paper is therefore to design a routing solution, based on the publish/subscribe paradigm, that avoids discontinuity in such environments.
VANET, publish-subscribe, MANET, opportunistic, routing
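The store-carry-and-forward behaviour named in the title can be sketched as follows. The class and method names are illustrative, not from the paper, and the topic-matching logic of the publish/subscribe layer is omitted:

```python
class ScfNode:
    """A VANET node that stores and carries published messages while
    disconnected, and forwards them at the next contact opportunity."""

    def __init__(self, name):
        self.name = name
        self.buffer = []        # messages carried while no neighbour is in range
        self.delivered = []     # messages received from other nodes

    def publish(self, topic, payload):
        """With no neighbour in range, the message is buffered (stored)."""
        self.buffer.append((topic, payload))

    def on_contact(self, neighbour):
        """When a neighbour comes into radio range, forward everything
        carried so far; the neighbour may carry it further in turn."""
        while self.buffer:
            neighbour.delivered.append(self.buffer.pop(0))
```

Buffering instead of dropping is what bridges the connectivity gaps (discontinuity) that arise when vehicle density is too low for an end-to-end path to exist.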
An Approach to Sentiment Analysis Using Semantics & Context
Arushi Tetarbe and Dr. Rajni Sehgal, Amity School of Engineering & Technology, Amity University, Noida, India
In today’s world, the sentiment or reaction to a particular product, decision, or statement has proven to be highly influential in governing, in business strategy making, and in monitoring cybercrime. Sentiment is a very personal outlook on something by an individual. If we are able to judge it from an analysis of the text a person writes on various platforms, we can put that judgement to many practical uses. Product monitoring, for instance, is done by almost every business to help shape future strategies, and various companies offer these capabilities as a service. Our model is a deep learning model for sentiment classification that performs the task of sentiment analysis. We use the Stanford Sentiment Treebank, which contains labelled text data, to train the model. In order to include semantics in our model, we convert words into their equivalent vector representations using Stanford's unsupervised pre-trained word embedding model, GloVe. To train the model we use an LSTM (Long Short-Term Memory) network, a kind of RNN (Recurrent Neural Network). This neural network is built on Keras, which in turn is built on TensorFlow. Finally, our model reflects the sentiment with 91.33% accuracy when tested on example sentences.
Sentiment Analysis, Recurrent Neural Networks, LSTM, GloVe, Tensorflow
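Before an LSTM can consume a sentence, each word must be replaced by its embedding vector and the sequence padded to a fixed length. The sketch below shows just that preprocessing stage with tiny 2-dimensional stand-in vectors; the actual pipeline uses pre-trained GloVe embeddings (typically 50 to 300 dimensions) feeding a Keras LSTM layer:

```python
EMB = {                      # toy stand-ins for real GloVe vectors
    "good":  [0.9, 0.1],
    "bad":   [-0.8, 0.2],
    "<pad>": [0.0, 0.0],
}

def encode(sentence, max_len=4):
    """Map tokens to embedding vectors and pad to a fixed length,
    producing the (max_len, dim) shape an LSTM layer expects.
    Out-of-vocabulary words fall back to the padding vector here;
    a real pipeline would use a dedicated unknown-word vector."""
    tokens = sentence.lower().split()[:max_len]
    tokens += ["<pad>"] * (max_len - len(tokens))
    return [EMB.get(tok, EMB["<pad>"]) for tok in tokens]
```

Because the vectors come from a pre-trained embedding model, semantically similar words start out close together in vector space, which is how semantics enters the model before any LSTM training happens.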
Natural Language to SQL
Ankita Makker and Gaurav Nayak, Department of Computer Science and Engineering, PDPM Indian Institute Of Information Technology, Design and Manufacturing, Jabalpur, India
In this research, an intelligent system is designed that lets users access a database using natural language. It accepts natural language input and converts it into an SQL query. Using a query language to interact with databases has always been a specialist and complex task. The system currently handles single-sentence natural language inputs and concentrates on the MySQL database system. It accommodates aggregate functions, multiple conditions in the WHERE clause, join operations, and advanced clauses such as ORDER BY, GROUP BY, and HAVING. The natural language statement goes through various stages of natural language processing, namely morphological, lexical, syntactic, and semantic analysis, resulting in the formation of an SQL query. An intelligent interface is needed in database applications to enable efficient interaction between the user and the DBMS. The research focuses on making the system more dynamic. Improvements have been introduced by incorporating preprocessing of text, named entity recognition, building hierarchical relations, semantic similarity, and negation handling using dependency graphs.
Natural language processing (NLP), SQL, semantic similarity, context, named entity recognition, dependency graphs
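The overall shape of a natural-language-to-SQL translation can be illustrated with a deliberately tiny rule-based sketch: it recognises only a count intent and a single "where X is Y" condition. The real system, of course, runs full morphological, lexical, syntactic, and semantic analysis rather than pattern matching:

```python
import re

def nl_to_sql(question, table):
    """Toy NL-to-SQL sketch: detect a count intent, then look for one
    'where <column> is <value>' pattern and turn it into a WHERE clause.
    The table name is supplied by the caller rather than inferred."""
    q = question.lower()
    select = "COUNT(*)" if "how many" in q or "count" in q else "*"
    sql = "SELECT {} FROM {}".format(select, table)
    m = re.search(r"where (\w+) is (\w+)", question, re.IGNORECASE)
    if m:
        sql += " WHERE {} = '{}'".format(m.group(1), m.group(2))
    return sql
```

Even this toy version shows why the NLP stages matter: without lexical and semantic analysis, the mapping from question words to SELECT targets, table names, and condition columns has to be hard-coded, which is exactly what the described system avoids.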
Automatic Text Summarization of Legal Cases: A Hybrid Approach
Varun Pandya, School of Computer Engineering, Pandit Deendayal Petroleum University, Gandhinagar, India
Manual summarization of large bodies of text involves a lot of human effort and time, especially in the legal domain. Automatic text summarization is a constantly evolving field of Natural Language Processing (NLP), a discipline of the broader field of Artificial Intelligence. Lawyers spend a lot of time preparing legal briefs of their clients' case files. In this paper, a hybrid method for automatic text summarization of legal cases is proposed, using the k-means clustering technique and a tf-idf (term frequency-inverse document frequency) word vectorizer. The summary generated by the proposed method is compared, using the ROUGE evaluation metrics, with the case summary prepared by the lawyer for appeal in court. Further, suggestions for improving the proposed method are also presented.
Automatic Text Summarization, Legal domain, k-means clustering, tf-idf word vectors
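The tf-idf side of such a hybrid extractive summarizer can be sketched in a few lines: score each sentence by the mean tf-idf weight of its words and keep the top-scoring ones in document order. This simplifies the paper's pipeline, as the k-means clustering of sentence vectors is omitted here:

```python
import math

def tfidf_summary(sentences, k=1):
    """Extractive summary: treat each sentence as a document, score it by
    the mean tf-idf weight of its distinct words, and return the top-k
    sentences in their original order."""
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    df = {}                                   # document frequency per word
    for doc in docs:
        for w in set(doc):
            df[w] = df.get(w, 0) + 1

    def score(doc):
        # mean over distinct words of tf * idf; common words get idf ~ 0
        return sum((doc.count(w) / len(doc)) * math.log(n / df[w])
                   for w in set(doc)) / len(set(doc))

    ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]
```

In the full method, clustering the tf-idf sentence vectors with k-means before selection promotes topical diversity, so the summary is not dominated by several near-duplicate high-scoring sentences.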