Masters Theses

Roshan Kumar Shrestha                                                     Supervisor: Assoc. Prof. Saroj Shakya
M.Sc. Computer Science

Title: Analysis of classification algorithm for Eye State detection in Electroencephalography (EEG)

Abstract: Data arise in every field of daily life, and wherever there is data there are patterns, information and meaning to be found. The process of extracting, or mining, knowledge from large amounts of data is called data mining, also known as Knowledge Discovery from Data (KDD). Data mining applications have received rich focus due to the significance of classification algorithms. Comparing classification algorithms is a complex task and remains an open problem. First, the notion of performance can be defined in many ways: accuracy, speed, cost, reliability, etc. Second, an appropriate tool is necessary to quantify the performance. Third, a consistent method must be selected to compare the measured values. Selecting the best classification algorithm for a given dataset is a very widespread problem, and it requires several methodological choices. This research therefore focused on the analysis of four classification algorithms (OneR, DecisionStump, J48 and RandomForest) for eye state detection in EEG. RandomForest classified 91.1348% of the data correctly, the best result among the four on the evaluation metrics considered (Accuracy, Precision, Recall and F-Measure).
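The four evaluation metrics named above all follow from a binary confusion matrix. A minimal Python sketch, using purely illustrative counts rather than the thesis's actual results:

```python
# Evaluation metrics used to compare classifiers, computed from a binary
# confusion matrix. The counts below are illustrative, not thesis results.
def evaluate(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

acc, prec, rec, f1 = evaluate(tp=450, fp=30, fn=50, tn=470)
print(f"Accuracy={acc:.4f} Precision={prec:.4f} Recall={rec:.4f} F-Measure={f1:.4f}")
```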

Index Terms—Classification, Data Mining, DecisionStump, EEG, J48, Machine Learning, OneR, RandomForest.


Barun Kumar Dhakal                                                          Supervisor: Assoc. Prof. Saroj Shakya

ME Computer Engineering

Title: Singular Value Decomposition (SVD) Based Image Watermarking using Interpolation

Abstract: With the vast growth of Internet technology, digital content protection has become very important, and various algorithms have been, and are being, developed to preserve proprietary rights. Digital watermarking is a good measure for preserving such rights. In this thesis, SVD-based digital image watermarking using interpolation was performed, where the watermark was added to the singular values of a digital image using an interpolation method. The resulting watermarked image was subjected to different distortion operations. Finally, the watermark was extracted for each method, and each recovered watermark was compared with the original watermark using normalized correlation and accuracy rate. The better method was recommended based on the comparison results. The selected method can be easily implemented in real-world applications for copyright protection, authentication, integrity, etc.
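The core idea, embedding a watermark in the singular values, can be sketched with a plain additive scheme; this is a simplified illustration and does not reproduce the thesis's interpolation step:

```python
import numpy as np

def embed(image, watermark, alpha=0.1):
    """Additively embed a watermark into the singular values of an image.
    Simplified sketch; the thesis's interpolation step is not reproduced."""
    U, S, Vt = np.linalg.svd(image, full_matrices=False)
    marked = U @ np.diag(S + alpha * watermark) @ Vt
    return marked, S                      # keep S for (non-blind) extraction

def extract(marked, original_S, alpha=0.1):
    """Recover the watermark by comparing singular values with the originals."""
    S_marked = np.linalg.svd(marked, compute_uv=False)
    return (S_marked - original_S) / alpha
```

Extraction here is non-blind (it needs the original singular values), which matches the usual SVD watermarking formulation.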


Anil Kumar Yadav               Supervisor: Prof. Dr. Shashidhar Ram Joshi

ME Computer Engineering

Title: Impact of Bit Plane Slicing with JPEG Compression on BMP Image

Abstract: Storage space in the computing field is precious, so every single bit should be put to fruitful use, and saving space is one of the most important tasks in this arena. Images use much more space than text, and space usage grows day by day. This growth led to a need for compression, i.e. the ability to reduce the amount of storage, which in turn reduces the bandwidth required for transmission. In this research the main focus was on compression of images in the BMP format. The JPEG compression algorithm was used to compress .BMP images directly, and the same compression algorithm was applied again after bit plane slicing. JPEG compression after bit plane slicing was then compared with JPEG compression alone in terms of compression percentage, with compression ratio as the main decision-making parameter. 500 images of different categories were considered.
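Bit plane slicing itself is a simple bitwise decomposition of an 8-bit image. A sketch of the operation (illustrative, not the thesis's pipeline):

```python
import numpy as np

def bit_planes(gray):
    """Slice an 8-bit grayscale image into its 8 bit planes (plane 0 = LSB)."""
    return [((gray >> k) & 1).astype(np.uint8) for k in range(8)]

def reconstruct(planes, keep):
    """Rebuild the image from a subset of planes; discarding low-order planes
    keeps most visual content, which is why slicing can help compression."""
    img = np.zeros_like(planes[0], dtype=np.uint8)
    for k in keep:
        img |= planes[k] << k
    return img
```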


Sumitra Devi Pyakurel

ME Computer Engineering

Title: Study And Comparative Analysis Of Lossless Data Compression Algorithms

Abstract: Data compression is the science and art of representing information in a compact form. Whenever we deal with any kind of data (audio, video, text), we are bound to compress it to minimize space and time utilization; data compression is a common requirement for most computerized applications. Different compression algorithms are available for different formats, and they are generally either lossless or lossy. In this research, a study and comparison of different lossless data compression algorithms was done on text files, analyzing different factors, namely compression ratio, compression time, saving percentage, and decompression time, using the bzip2 and gzip algorithms.
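Both algorithms are available in the Python standard library, so the size-based factors can be sketched directly (timing the compression and decompression would be added the same way; the sample text is illustrative):

```python
import bz2
import gzip

def compare_compressors(data: bytes):
    """Compute compressed size, compression ratio and saving percentage
    for gzip and bzip2, three of the factors compared in the study."""
    results = {}
    for name, module in (("gzip", gzip), ("bzip2", bz2)):
        compressed = module.compress(data)
        results[name] = {
            "compressed_size": len(compressed),
            "compression_ratio": len(data) / len(compressed),
            "saving_percentage": 100.0 * (1 - len(compressed) / len(data)),
        }
    return results

sample = b"the quick brown fox jumps over the lazy dog " * 200
for name, stats in compare_compressors(sample).items():
    print(name, stats)
```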


Er. Krishna Pandey

Title: Performance Assessment of Different Algorithms for Effective IDS Implementation

Abstract: Network attacks have been increasing in number and severity over the past many years. Intrusion Detection (ID) is the process of monitoring and analyzing the events occurring in an information system in order to detect security violations. Due to the large volumes of security audit data as well as the complex and dynamic properties of intrusion behaviors, optimizing the performance of an IDS becomes a challenge that is receiving more and more attention from the security implementation perspective as well as from research communities. The uncertainty over whether certain algorithms perform better for certain attack classes constitutes the motivation for this thesis work. In this research, the performance of a comprehensive set of potential classifiers was evaluated using the KDD99 dataset. Based on the evaluation results, the most accurate classifier, offering a high attack detection rate and a low false alarm rate, was chosen and proposed. The comparison of simulation results indicates that noticeable performance improvement can be achieved with the proposed classifier in detecting different kinds of network attacks.
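The two selection criteria, detection rate and false alarm rate, are standard confusion-matrix quantities. A small sketch with hypothetical counts (not results from the thesis):

```python
def ids_rates(tp, fp, fn, tn):
    """Detection rate and false alarm rate, the criteria used to pick a
    classifier. Counts are per-record outcomes on labeled KDD99-style data."""
    detection_rate = tp / (tp + fn)      # fraction of attacks actually flagged
    false_alarm_rate = fp / (fp + tn)    # fraction of normal traffic wrongly flagged
    return detection_rate, false_alarm_rate

# Illustrative numbers only.
dr, far = ids_rates(tp=9200, fp=150, fn=800, tn=9850)
print(f"detection rate={dr:.3f}, false alarm rate={far:.3f}")
```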

Er. Niraj Rai                                                                Supervisor: Prof. Dr. Shashidhar Ram Joshi
Master of Computer Engineering

Title: An Adaptive Multicast Routing Protocol Based on Mobility Index to Optimize the Quality of Service of MANET

Abstract: Due to the dynamic nature of the network topology and restricted resources in a Mobile Ad-hoc Network (MANET), stable routing based on Quality of Service (QoS) is a challenging task. The unpredictable mobility of nodes, variation of received signal strength and limited battery power in MANET nodes create link and node vulnerability and instability. Most of the current ad-hoc multicast routing protocols for group communication are based on link availability, received signal strength and link cost. In this thesis, an adaptive multicast routing protocol based on a mobility index to optimize the QoS of a MANET (AMMOM) is proposed. First of all, stable multicast routes are identified on the basis of the signal strengths between the nodes and the battery life of each node. Then, if there are multiple paths from the source to every multicast destination, the best route is identified as the one with the maximum value of the Mobility Index (MI). MI depends upon three factors: link persistence, differential signal strength Ds, and the distance between the nodes Lm+1. The results obtained from the simulations demonstrate that the proposed AMMOM protocol outperforms the On-Demand Multicast Routing Protocol (ODMRP) with respect to packet delivery ratio and control overhead.


Sailesh Bajracharya                                                                             Supervisor: Suresh Pokharel


Title: Identifying Users' Behavior By Analyzing Web Proxy Logs

Abstract: Users spend most of their time on the internet in search of innovative ideas, information, and entertainment. Information searched on the web is logged in various forms, such as access logs, from which interesting patterns can be generated. Data mining techniques such as Frequent Pattern Mining are carried out to capture useful patterns from web proxy server logs. This helps the analyst evaluate the types of users and their interests, and improve the performance of internet usage.

This thesis focuses on Squid proxy logs, on which Frequent Pattern Mining algorithms are deployed and evaluated to find user behavior of interest.
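As a sketch of one Apriori-style pass over sessions grouped from the access log (toy data; real input would be parsed from Squid's access.log):

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(sessions, min_support=2):
    """Count URL pairs that co-occur in at least min_support user sessions,
    i.e. the 2-itemset step of a frequent pattern mining pass."""
    counts = Counter()
    for urls in sessions:
        for pair in combinations(sorted(set(urls)), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

# Toy sessions; a real pipeline would group requests per user from the log.
sessions = [
    ["news.com", "mail.com", "video.com"],
    ["news.com", "mail.com"],
    ["news.com", "video.com"],
]
print(frequent_pairs(sessions))
```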


Umesh Paneru

ME Computer Engineering

Title: Cost Effective Traversing in Delaunay Mesh with Applications to Heterogeneous Networks

Abstract: A graph drawn in a plane such that its edges intersect only at their endpoints is called a planar graph. Delaunay triangulation is an example of a planar graph and is used to generate meshes of triangles; the Delaunay algorithm can potentially be used to model any 2D or 3D object. Its applications are found in a broad range of fields including, but not limited to, GIS, image processing, computer networks, and sensor networks. The first objective of this thesis is to create a Delaunay triangulation from a given set of nodes in a doubly connected edge list data structure. The second objective is to generate a minimum spanning tree of the constructed Delaunay triangulation. The minimum spanning tree gives the cost-effective path for the planar Delaunay graph, which can be used in various applications such as the design of telecommunications networks, transportation networks, water supply networks, and electrical grids.
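The two objectives chain together naturally: because the Euclidean minimum spanning tree is always a subgraph of the Delaunay triangulation, the MST can be computed over the Delaunay edges alone. A sketch assuming SciPy for the triangulation (the thesis builds its own DCEL-based one):

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_mst(points):
    """Triangulate, then run Kruskal's algorithm on the Delaunay edges only,
    avoiding the full O(n^2) complete graph."""
    tri = Delaunay(points)
    edges = set()
    for a, b, c in tri.simplices:
        for u, v in ((a, b), (b, c), (a, c)):
            edges.add((min(u, v), max(u, v)))
    parent = list(range(len(points)))
    def find(x):                         # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for u, v in sorted(edges, key=lambda e: np.linalg.norm(points[e[0]] - points[e[1]])):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v))
    return mst

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
print(delaunay_mst(pts))
```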


Madhav Karki                                                                                  Supervisor: Kumar Pudashine


Title: Comparative Study on Payload based Network Anomaly Detection Algorithms Using Histogram Similarity

Abstract: The rapid growth of computer networks has changed the prospects of network security. Easy accessibility leaves computer networks vulnerable to numerous and potentially devastating threats from hackers. Researchers and developers have proposed two types of intrusion detection techniques: header based and payload based. Of the two, payload-based anomaly detection has attracted great interest from many researchers.

In order to better understand and use the strengths and weaknesses of different payload-based anomaly detection approaches, this preliminary comparative study explores the behavior of four algorithms used for payload-based anomaly detection with histogram similarity: Mahalanobis distance, Manhattan distance, Euclidean distance, and correlation. The goal is to observe and investigate the Precision and Recall of the different algorithms for histogram similarity methods, compute the F-Measure, and finally suggest the best algorithm among them.
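The four similarity measures compared are standard distances between payload byte-frequency histograms. A sketch on hypothetical histograms (not the study's traffic data):

```python
import numpy as np

def manhattan(p, q):
    return float(np.abs(p - q).sum())

def euclidean(p, q):
    return float(np.sqrt(((p - q) ** 2).sum()))

def mahalanobis(p, q, cov):
    """Mahalanobis distance given a covariance matrix of the histogram model."""
    d = p - q
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def correlation(p, q):
    return float(np.corrcoef(p, q)[0, 1])

# Two hypothetical payload byte-frequency profiles (normalized histograms).
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.1, 0.6, 0.3])
print(manhattan(p, q), euclidean(p, q), correlation(p, q))
```

With an identity covariance, Mahalanobis distance reduces to Euclidean distance, which is why the two often rank payloads similarly when byte frequencies are uncorrelated.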

Bigyan Sapkota

ME Computer Engineering

Title: Spectrum Re-farming for 3G and 4G Wireless Technology

Abstract: Spectrum is an essential element of communications. It is well known that the development of a country can be enhanced by proper management of spectrum distribution and its usage. The radio spectrum is a scarce resource for which there is an increasing range of valuable uses; it is therefore essential for Nepal to develop effective and efficient radio spectrum management for continuing economic and social development. [1] Owing to spectrum limitations, the system of allocating blocks of spectrum to services needs rebalancing for both demand and supply, and this requires the development of new spectrum management methods. This is done by spectrum refarming. [2] Spectrum refarming can be done in different bands, but refarming in the 900 MHz band is preferable, since the propagation characteristics of 900 MHz are more potent when it comes to indoor coverage; the 900 MHz band has been shown to have better coverage than other bands. The primary focus of this thesis is the 900 MHz band and its comparison with the 2100 MHz and 2300 MHz bands. Coverage, capacity and throughput scenarios are the essential dimensions studied. A live network together with theoretical analysis was used for measurement, and the primary measurement parameters such as C/I, Ec/I0, RxLevel, RxQual, RSCP and throughput were observed for different measurement cases. The measurement files were analyzed from different perspectives to draw conclusions on the coverage and quality aspects of the network, and it was found that the 900 MHz band offers better coverage, better indoor penetration and economic benefits.
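The coverage advantage of 900 MHz over higher bands can be illustrated to first order with the standard free-space path loss formula (a simplification; the thesis relies on live-network measurements, not this model alone):

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB:
    FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Extra loss of the 2100 MHz band over the same 1 km path.
advantage = fspl_db(1, 2100) - fspl_db(1, 900)
print(f"2100 MHz loses {advantage:.2f} dB more than 900 MHz over the same path")
```

The roughly 7.4 dB difference is frequency-dependent only, so it holds at any distance in this model and is a large part of why 900 MHz penetrates buildings better.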

Sharad Chandra Joshi            Supervisor: Asst. Prof. Kumar Pudashine
ME Computer Engineering

Title: Performance Assessment Of Various Backpropagation Algorithms For Network Intrusion Detection

Abstract: Information is one of the most valuable possessions today. As the Internet expands both in the number of hosts connected and the number of services provided, security has become a key issue for technology developers. This research is focused on the detection of attacks in a network using a Multilayer Perceptron (MLP) with Backpropagation. The KDDCup99 dataset, an intrusion detection attack database, is used as the input dataset for network intrusion detection. In this research, a Multilayer Perceptron is trained with various types of Backpropagation algorithms, and the performance of the algorithms is evaluated. Based on the evaluation results, the research proposes Resilient Backpropagation as the most efficient model for network intrusion detection.
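The training loop underlying all the compared variants is the same backpropagation of errors through the MLP. A minimal NumPy sketch of plain gradient-descent backpropagation on a toy XOR task (not Resilient Backpropagation or the KDDCup99 pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)          # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)     # 2-4-1 network
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses, lr = [], 1.0
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    d_out = (out - y) * out * (1 - out)            # backpropagate MSE error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(0)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Variants such as Resilient Backpropagation differ only in how the weight update is derived from these same gradients.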


Mr. Hem Raj Ojha                        Supervisor: Asst. Prof. Suresh Pokharel

M.Sc. Computer Science

Title: Comparative Analysis Of Data Mining Methods For The Prediction Of Crop Productivity

Abstract: Agricultural data is highly diversified in terms of nature, interdependencies and resources. From a research point of view, it is therefore difficult to identify the key attributes, such as geographical location, soil type and seasonal conditions, that help in predicting crop productivity. For the balanced and sustainable development of agriculture, these attributes and resources need to be evaluated and analyzed so that proper advancements and policies can be formulated to ensure better crop production.

Nowadays, different research works are going on in the field of agriculture for crop production. The retrieved information goes through different approaches for knowledge extraction. One of the innovative and relatively new approaches in this field is data mining, which is used for data characterization, discrimination, and forecasting or prediction in agricultural crop management. The obstacle to achieving this goal, however, is the lack of appropriate techniques based on relevant farming situations.

In this thesis, we review data mining techniques in the field of agriculture for the prediction of crop productivity, such as classification using the C4.5 algorithm, Naïve Bayes and CART, and compare them to select a better and more suitable application depending on different farming conditions and resources.
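Both C4.5 and CART choose split attributes by how much they reduce label impurity. A sketch of the entropy and information-gain computation behind C4.5, on hypothetical soil/yield data (attribute names are illustrative only):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, the impurity C4.5 reduces per split."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    """Gain of splitting on one attribute; values[i] pairs with labels[i]."""
    n = len(labels)
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(v, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Hypothetical records: soil type vs. productivity class.
soil = ["loam", "loam", "sand", "sand"]
yield_class = ["high", "high", "low", "low"]
print(information_gain(soil, yield_class))
```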


Him Koirala

MSc Computer Science

Title: A Comparative Analysis Of Classification Algorithms For The Prediction Of Students' Performance

Abstract: Educational data mining is concerned with discovering new knowledge from datasets that come from educational fields. Students' data was taken from the database of a higher secondary school affiliated to HSEB, Nepal. It included internal examination marks, attendance, assignment records, SLC marks, etc. This data was used to predict final examination performance using different classification algorithms: ID3, C4.5, CART, Random Forest and the Naïve Bayesian classifier. The results of the different classification algorithms for the prediction of students' performance were also compared using classifier evaluation metrics: accuracy, precision and recall.


Mr. Indra Chaudhary                  Supervisor: Asst. Prof. Suresh Pokharel

M.Sc. Computer Science

Title: P2P Queries Routing in Super-Super-Peer Based P2P System Using J48 Decision Tree Algorithm

Abstract: With the rapid increase in the number of computers connected to the Internet and the emergence of a range of mobile computational devices, which might soon be equipped with Mobile Internet Protocol (IP) technology, the Internet is converging into a more dynamic, huge, fully distributed peer-to-peer (P2P) overlay network containing millions of nodes, typically for the purpose of information distribution and file sharing. A challenging problem in an unstructured P2P system is therefore how to locate peers that are relevant to a given query with minimum query processing and minimum answering time, since peers can leave the network and new peers can join it at any time. This research used an unstructured P2P system that organizes peers around Super-Peers connected to a Super-Super-Peer according to their semantic domains. It implemented the well-known J48 decision tree algorithm to identify the Super-Peer that contains peers with data relevant to a given query, and showed that the system remains scalable as the number of data instances increases.


Mr. Ruman Maharjan                         Supervisor: Asst. Prof. Suresh Pokharel

M.Sc. Computer Science

Title: Topic Extraction On News Archive Using Different Clustering Algorithms On Extracted Sentences By Using Term Frequency-Inverse Document Frequency (TF×IDF)

Abstract: With the growth of Nepali online news websites, a huge amount of information is added to news archives over time. This creates an opportunity for finding relevant topics for information seekers. On the other hand, processing this big data and retrieving relevant information is not an easy task. In this research, we propose a method for extracting the important topics. First, keywords are extracted using the Term Frequency-Inverse Document Frequency (TF×IDF) model. Second, representative sentences are identified based on the keywords. Finally, the representative topics are identified by applying different clustering algorithms to the representative sentences.
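The first step, TF×IDF keyword weighting, can be sketched in a few lines; this is a simplified version of the model (raw term frequency over document length, natural-log IDF), and the tokenized documents below are illustrative:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF×IDF weights: tf = count / len(doc),
    idf = log(N / document frequency of the term)."""
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n_docs / df[t])
                        for t in tf})
    return weights

docs = [["election", "result", "kathmandu"],
        ["election", "campaign", "rally"],
        ["festival", "kathmandu", "crowd"]]
w = tf_idf(docs)
print(w[0])
```

Terms that occur in fewer documents receive higher weights, which is what lets the model surface topic-bearing keywords over common words.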


Menuka Maharjan

Title: Comparative Analysis of Classification Algorithms for the prediction of Loan Grants

Abstract: Data mining classification techniques and analysis can enable banks to classify consumers into various credit risk groups more precisely. Knowing which risk group a consumer falls into would allow a bank to fine-tune its lending policies by recognizing high-risk groups of consumers to whom loans should not be issued, and identifying safer loans that should be issued on terms commensurate with the risk of default. This research therefore focuses on the type II error, which carries high misclassification costs. In this research, C4.5, CART and Naïve Bayes are used for the classification and prediction of loan grants, and the attributes that have the greatest effect on loan grants are determined. For this purpose, C4.5, CART and Naïve Bayes are compared and analyzed.


Chandra Mohan Jayaswal            Supervisor: Assoc. Prof. Saroj Shakya

Title: Development and Comparative Study of Spread XML App with App Developed using PhoneGap for iOS

Abstract: Among the greatest challenges in software analysis and software reengineering is designing and implementing parsers to access the intermediate representation of source code. A lot of research on XML and source code representation has been published in several papers. This research provides some defined approaches to developing XML-based applications, though in some cases only in general terms. When it comes to mobile application development for iOS, the above research gives general guidelines rather than in-depth information.

This research gives clear steps, from theory to implementation, by which an iOS app can be developed using an XML data feed from a web application. The comparative study of the Spread XML app with an app developed using PhoneGap technologies shows that the Spread XML app is efficient in terms of time and memory. iOS application developers can use this research to develop a more robust XML-based mobile application development engine, which would be a better option than other technologies such as multiple-phone web-based application frameworks.


Er. Chhatra Thapa               Supervisor: Prof. Dr. Shashidhar Ram Joshi

Title: Agent-Based Compatibility Testing Framework for Web-Based Application

Abstract: IT is among the fastest growing of all technologies, and web-based applications are the most in demand in the business and education sectors. These kinds of applications are complex for developers and researchers due to their dynamic behaviour, heterogeneous representation and implementation. Testing a web application is complex, time-consuming and challenging work because of these inherent complexities; finding faults and testing the system quickly and efficiently is a difficult task. Nowadays we operate different types of devices to access web applications running on various OS platforms with various browsers, so compatibility testing is really important and in demand in this technical era. In this research, an agent-based testing framework has been built for compatibility testing of web-based applications, in which specific agents with specific features are introduced for compatibility testing.


Dhan B. Thapa Magar                                                                      Supervisor: Kumar Pudashine

Title: Automated Detection of Text-Based CAPTCHA Using ANN with Backpropagation

Abstract: A CAPTCHA is a program that protects websites against bots by generating and grading tests that humans can pass but current computer programs cannot. The main purpose of detecting CAPTCHAs is to expose their weaknesses, which in turn helps to develop more secure CAPTCHAs. In our scheme, text-based CAPTCHAs were detected by first preprocessing the given CAPTCHA, segmenting its characters, and then recognizing the characters using an Artificial Neural Network with the backpropagation learning algorithm. The detection rate was found to be significant for an automated attack.
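The segmentation step is commonly done by vertical projection, splitting the binarized image wherever a column contains no foreground pixels. A sketch of that standard technique (the thesis's exact segmentation method may differ):

```python
import numpy as np

def segment_characters(binary):
    """Split a binarized CAPTCHA into characters at empty-column gaps,
    returning (start, end) column ranges for each character blob."""
    columns = binary.sum(axis=0)          # vertical projection profile
    segments, start = [], None
    for x, filled in enumerate(columns):
        if filled and start is None:
            start = x
        elif not filled and start is not None:
            segments.append((start, x))
            start = None
    if start is not None:
        segments.append((start, len(columns)))
    return segments
```

Each returned slice would then be normalized and fed to the neural network for recognition.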


Pradeep KC

Title: Linking Nepalese Herbs Information Using Linked Open Data

Abstract: Herbs have been used for centuries to cure sickness and to promote good health, and even though we now have modern medication, traditional herbal remedies are still widely accepted. Many Nepalese people may not be well informed about herbs, so Linked Open Data is a good way for them to obtain this knowledge. Information about herbs can provide people with basic knowledge that will allow them to treat and protect themselves from illness; this information is important for ordinary people, so a Linked Open Data based herb system is valuable and will help people reduce the costs of healthcare. The information therefore focuses on the medicinal properties of each plant (i.e., what illnesses it can be applied to, which body parts it affects or cures, the preparation mode, and the plant part to be used). Because the herb information published in the majority of herbal portals nowadays is unstructured and scattered, the design of an ontology is one of the best techniques for organizing herbal research information, and the development of a Nepalese herb information system as a linked database for the herbal research domain will be crucial for a feeling of social participation. The proposed system provides linked information about herbs that herb researchers require and that researchers are willing to share. Furthermore, new technologies have been brought into this system: the Resource Description Framework (RDF) is used to define all the herb entities, which can be conveniently updated, and RDF is also useful for linking herbs to external datasets through Linked Open Data (LOD). Conversely, the system can retrieve information about a herb from LOD corresponding to the herb's binomial name in the system.
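RDF statements like those the system stores can be written in N-Triples syntax: IRIs in angle brackets, literals quoted. A sketch with hypothetical resource names (not the thesis's actual ontology), where `owl:sameAs` is the standard way to link a local resource to an external LOD dataset such as DBpedia:

```python
def ntriple(subject, predicate, obj):
    """Serialize one RDF statement in N-Triples syntax; objects beginning
    with 'http' are treated as IRIs, everything else as a plain literal."""
    o = f"<{obj}>" if obj.startswith("http") else f'"{obj}"'
    return f"<{subject}> <{predicate}> {o} ."

# Hypothetical herb entity linked to DBpedia via its binomial name.
triples = [
    ntriple("http://example.org/herb/neem",
            "http://example.org/schema/treats",
            "skin infection"),
    ntriple("http://example.org/herb/neem",
            "http://www.w3.org/2002/07/owl#sameAs",
            "http://dbpedia.org/resource/Azadirachta_indica"),
]
print("\n".join(triples))
```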


Mr. Sanjesh Rimal

Title: A Framework for Information and Communication Technologies Adoption at Small and Medium Enterprises in the Context of Nepal

Abstract: ICT (Information and Communication Technologies) adoption at SMEs (Small and Medium-sized Enterprises) in developing countries has always been a challenging field and an area of interest, with various factors determining the key issues affecting the adoption of ICT. This study makes a theoretical contribution by proposing a conceptual framework for SME ICT readiness and adoption, based on an investigation of the factors affecting ICT adoption and maintenance in SMEs and the identification of key components. It also provides a significant, multifaceted, in-depth study of selected ICT-adopting SMEs in developing countries, in the context of Nepal, which will assist further research and the formulation of strategies and regulation for ICT professionalism.