Browsing by Author "Rokne, Jon G."
Now showing 1 - 20 of 30
Item Open Access: A physical simulation of ball lightning for computer graphics (2004). Varsa, Petri Matthew; Rokne, Jon G.

Item Open Access: Algorithmic approaches to optimal mean-square quantization (1988). Wu, Xiaolin; Rokne, Jon G.

Item Open Access: Applying the Exponential Chebyshev Inequality to the Nondeterministic Computation of Form Factors (1999-03-01). Baranoski, Gladimir V.G.; Rokne, Jon G.; Xu, Guangwu.
The computation of the fraction of radiation power that leaves one surface and arrives at another, specified by the form factor linking the two surfaces, is central to radiative transfer simulations. Although several approaches can be used to compute form factors, nondeterministic methods are becoming increasingly important due to the simplicity of their procedures and their wide range of applications. These methods compute form factors implicitly through standard Monte Carlo techniques and ray casting algorithms. Their accuracy and computational costs are, however, highly dependent on the ray density used in the computations. In this paper a mathematical bound, based on probability theory, is proposed to determine the number of rays needed to obtain asymptotically convergent estimates of form factors in a computationally efficient stochastic process. Specifically, the exponential Chebyshev inequality is introduced to the radiative transfer field in order to determine the ray density required to compute form factors with a high reliability/cost ratio. Numerical experiments illustrate the validity and usefulness of the proposed bound.

Item Open Access: Biologically and physically-based rendering of natural scenes (1998). Baranoski, Gladimir V. Guimaraes; Rokne, Jon G.

Item Open Access: BRDF measurement using a digital camera (2007). Sekhon, Ravdeep; Rokne, Jon G.

Item Open Access: Centered forms and their algorithmic approaches (1989). Bao, Paul G.; Rokne, Jon G.

Item Open Access: Computational Drug Repositioning Based on Integrated Similarity Measures and Deep Learning (2020-09-11). Jarada, Tamer N. R.; Rokne, Jon G.; Alhajj, Reda S.; Özyer, Tansel; Helaoui, Mohamed; Sadaoui, Samira.
Drug repositioning is an emerging approach in pharmaceutical research for identifying novel therapeutic potential in approved drugs and discovering therapies for untreated diseases. Due to its time and cost efficiency, drug repositioning plays an instrumental role in optimizing the drug development process compared to the traditional de novo drug discovery process. Advances in genomics, together with the enormous growth of large-scale publicly available data and the availability of high-performance computing capabilities, have further motivated the development of computational drug repositioning approaches. Numerous attempts have been made, with varying degrees of efficiency and success, to computationally identify alternative drug indications that slow, stop, or reverse the course of incurable diseases. More recently, the rise of machine learning techniques, together with the availability of powerful computers, has made computational drug repositioning an area of intense activity. This thesis addresses the integration of biological and biomedical data from different sources to improve the quality of biomedical knowledge in the computational drug repositioning field. The main contribution of this thesis is four-fold. First, it provides a comprehensive review of drug repositioning strategies, resources, and computational approaches.
Second, it develops an approach for identifying disease-specific gene associations, which can be further used as a resource for computational drug repositioning methods. Third, it proposes a robust framework that utilizes known drug-disease interactions and drug-related similarity information to predict new drug-disease interactions. Fourth, it introduces a novel integrative framework for predicting drug-disease interactions using known drug-disease interactions, drug-related similarity information, and disease-related similarity information. The two proposed frameworks leverage advanced similarity calculation, selection, and integration to understand the functional and behavioural correlation between drugs and diseases. Furthermore, they employ advanced machine learning tools to predict hidden or indirect drug-disease interactions for potential drug repositioning applications.

Item Open Access: DB-FSFO – A Division-Based Feature Selection Flow Optimization Model for Better Summaries and Reading Recommendations (2018-11-02). Sharma, Sahil; Rokne, Jon G.; Alhajj, Reda; Kawash, Jalal.
With constant improvements in digital media technology, there has been substantial growth in the quantity of research material available online for researchers to draw on. A researcher typically spends hours studying a research paper, trying to understand all its details and complexities. This time is not always well spent, since the paper may not be highly relevant to the reader's research; moreover, a single paper may cover only a small subset of the information the researcher needs. Researchers have limited time and resources for accomplishing their reading goals. One way to alleviate this is to shorten a research document, thereby reducing the time spent reading it.
Therefore, this thesis provides a system that tackles the complex task of research text summarization. The model, DB-FSFO (Division-Based Feature Selection Flow Optimization), combines Natural Language Processing tools with extensive feature extraction and selection procedures to self-weigh the importance of various parameters of a research document against the corpus. The final summary produced by the model is the result of a flow optimization through a Reinforcement Learning approach with an extended post-processing accuracy improvement. The model is also tested for robustness and versatility by producing recommendations for the next papers a researcher should read, supplemented by a generated reading recommendation graph. The DB-FSFO model thus makes absorbing the essentials of a research paper easier and more efficient.

Item Open Access: Fuzzy Logic Classification in Review Spam Detection (2019-05-21). Rachdi, Btissam; Rokne, Jon G.; Alhajj, Reda S.; Moshirpour, Mohammad.
With the recent popularity of e-commerce, customers publish reviews about the products or services they have purchased or used, and these reviews in turn help potential customers make better choices based on the experiences of others. These opinions are important not only for individual users but also for businesses, which can monitor customers' opinions and adjust their business strategies accordingly. However, this reliance on reviews motivates some people to enter fake reviews that promote some products or defame others. Hence, in recent years, review analysis has gained importance, and opinion-mining detection techniques can be used to locate and eliminate potential spam reviews.
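To make the fuzzy-classification idea concrete, here is a minimal sketch of membership-based scoring. The features (review burst rate, rating deviation) and the triangular membership shapes are hypothetical choices of mine for illustration, not the model proposed in the thesis:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def spam_likelihood(reviews_per_day, rating_deviation):
    # Hypothetical features and membership shapes, for illustration only.
    bursty = tri(reviews_per_day, 2, 10, 30)        # many reviews in a short period
    extreme = tri(rating_deviation, 1.0, 2.5, 4.0)  # far from the product's mean rating
    # Fuzzy AND via min: a reviewer is suspicious if both memberships are high.
    return min(bursty, extreme)

print(spam_likelihood(12, 3.0))  # a moderately suspicious reviewer
```

A crisp classification would then follow by thresholding or defuzzifying this score; the thesis combines such a classifier with frequent-pattern periodicity and outlier detection, which are omitted here.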
In this thesis, I introduce fuzzy logic into review spam detection and combine it with two other data mining techniques, frequent pattern periodicity and outlier detection, to study reviewers' behaviour towards the reviewed products and classify users with a fuzzy logic classification model. The proposed analysis has been examined over a sample dataset.

Item Open Access: Intelligent Data Analysis for Early Warning: From Multiple Sources to Multiple Perspectives (2019-09-12). Afra, Salim; Alhajj, Reda S.; Moussavi, Mahmood; Alhajj, Reda; Rokne, Jon G.; Moshirpour, Mohammad; Tavli, Bülent.
Misusing and benefiting from developments in communication technology, criminal and terror groups have recently expanded into global organizations and activities. Fortunately, it is possible to use the same technology to fight these groups by tracing, identifying, and apprehending them, and by preventing them from executing their bloody plans. Indeed, it is now affordable to capture various kinds of data which can be analyzed to predict potential criminals and terrorists. The data comes in various formats, from text to images, and may become available incrementally from dynamic sources. This leads to what has recently been classified as big data, which has attracted considerable attention from industry and the research community. Researchers and developers in this domain are trying to adapt and integrate existing techniques into customized solutions that can effectively handle big data with all its distinguishing characteristics. In parallel, tremendous effort has been invested in developing new techniques for situations where existing techniques, neither individually nor as an integrated group, can address the shortcomings in this domain.
Realizing the need for effective solutions capable of dealing with criminal and terror groups is the main motivation for the study described in this thesis. The main contribution of this thesis is an early warning system that uses different sources of data to identify potential criminals and terrorists (hereafter, mentioning either group refers to both). The process works as follows. Criminal profiles are analyzed and their corresponding criminal networks are derived. This automates and facilitates the work of crime analysts in predicting events that may lead to disaster. We used face images as a data source and performed different studies to determine the accuracy and effectiveness of current face recognition and clustering algorithms in identifying people in uncontrolled environments, which are the environments actually encountered in real situations when dealing with criminals and terrorists. We trained our own face recognition algorithm using convolutional neural networks (CNNs), pre-processing the input images for better recognition rates, and showed that this is more effective than using frontalized profile face images. We designed a queuing system for surveillance camera monitoring that raises an alarm when unknown people passing through a monitored area become potential suspects. We also integrated different data sources, such as social media, news, and official criminal documents, to extract criminal names. We then generate a criminal profile which includes the activities that a given criminal is involved in. We also linked criminals together to build a criminal network by expanding the coverage and analyzing the collected data. We then proposed several unique criminal network analysis techniques to provide better understanding and knowledge for crime analysts.
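One standard building block in this kind of network analysis is link prediction via common neighbours: node pairs that share many neighbours are more likely to become connected. The sketch below is a toy illustration on made-up data, not the algorithm developed in the thesis:

```python
from itertools import combinations

# Toy "criminal network" as an undirected edge set (hypothetical data).
edges = {("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")}

# Build an adjacency map from the edge set.
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def common_neighbours(u, v):
    """Number of nodes adjacent to both u and v."""
    return len(adj[u] & adj[v])

# Score every currently-unconnected pair; higher score = more likely future link.
nodes = sorted(adj)
scores = {
    (u, v): common_neighbours(u, v)
    for u, v in combinations(nodes, 2)
    if (u, v) not in edges and (v, u) not in edges
}
print(max(scores, key=scores.get))  # → ('A', 'D')
```

Real systems refine this baseline with weighted or path-based scores, but the principle — ranking non-edges by structural similarity — is the same.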
To achieve this, we added functions for criminal network analysis to NetDriller, a powerful social network analysis tool developed by our research group. We also designed a link prediction algorithm which better detects whether a link between two nodes will exist in the future. All these functionalities have been integrated into the monitoring system, which has been developed and tested to demonstrate its applicability and effectiveness.

Item Open Access: Intelligent Medical Image Analysis for Quality Assurance, Teaching and Evaluation (2020-06-23). Aksac, Alper; Alhajj, Reda; Demetrick, Douglas James; Rokne, Jon G.; Moshirpour, Mohammad; Karray, Fakhreddine O.
Manually spotting and annotating the affected area(s) on histopathological images with high accuracy is regarded as the gold standard in cancer diagnosis and grading. However, this is a time-consuming and tedious task requiring considerable effort, expertise and experience from a pathologist, gained over time by analyzing many cases. Although this visual interpretation follows strict guidelines, it brings a certain subjectivity to the histological analysis and therefore leads to inter/intra-observer variability and reproducibility issues. These issues may directly affect patient prognosis and treatment plans. Such problems can be alleviated by developing automated image analysis tools for digitized histopathology. The rapid development of image capturing and analysis technology can be employed not only to give more insight to pathologists but also to guide them in detecting and grading diseases. These quantitative computational tools aim to improve pathology work in terms of speed and accuracy. Thus, it is important to develop automatic assessment tools for quantitative and qualitative analysis to help overcome these drawbacks.
The main contribution of this thesis is an intelligent system for quality assurance, teaching and evaluation applications in anatomical pathology. We present a graph-based spatial clustering algorithm named CutESC (Cut-Edge for Spatial Clustering). CutESC clusters data with complicated shapes and varying densities automatically, without requiring any prior information or parameters. We have developed an automatic cell nuclei detection method that uses a traditional CNN learning scheme to detect nuclei and then applies single-pass voting with spatial clustering to localize them. We also propose an automated method to identify and locate mitotic cells and tubules in histopathology images using deep neural network frameworks. We present a dataset of breast cancer histopathology images named BreCaHAD, which is publicly available to the biomedical imaging community. Moreover, we propose an efficient method for salient region detection. Finally, we introduce a new tool called CACTUS (Cancer Image Annotating, Calibrating, Testing, Understanding and Sharing) to help and guide pathologists in improving disease diagnosis, thereby reducing their workload and the bias among them. CACTUS can be useful both for disseminating anatomical pathology images for teaching and for evaluating agreement amongst pathologists, or against a gold standard, for evaluation or quality assurance.

Item Embargo: List-processing on symmetric lists of variable-size nodes (1975). Bu, How-Shone; Rokne, Jon G.

Item Open Access: Performance Evaluation of LoRa LPWAN for the Internet of Things (2019-01-23). Muhammad Yousuf, Asif; Ghaderi, Majid; Rokne, Jon G.; Krishnamurthy, Diwakar.
The goal of this thesis is to evaluate the performance of LoRa (Long Range), a leading Low-Power Wide-Area Network (LPWAN) technology for the Internet of Things (IoT).
This work considers two scenarios: performance with indoor gateways and performance with outdoor gateways. Initially, it studies the feasibility of building a low-cost IoT network where the end devices and gateways are made of do-it-yourself (DIY) off-the-shelf hardware components. It then analyzes the capabilities and limitations of the technology in terms of throughput, coverage, scalability and power consumption. Using real-world measurements with commercially-deployed devices from a city-wide LoRa deployment, this work characterizes the throughput and coverage of LoRa. Using a custom-built simulator, it presents extensive simulation results characterizing the scalability and power consumption of LoRa under a variety of traffic and network settings. Our measurement results for the DIY LoRa network setup (indoor gateways) show that i) indoor coverage is sufficient to cover an entire seven-storey office building with minimal packet drop; ii) outdoor coverage is very dependent on the environment; in our experiments, a communication range of 4.4 km was achieved with only 15% packet drop; iii) network parameters such as spreading factor and packet size greatly affect the coverage; for example, we observed that a payload size of 242 bytes leads to 90% packet drop versus less than 5% drop with a payload size of 1 byte. Our measurement results with commercially-deployed gateways show that as few as three gateways are sufficient to cover a dense urban area within an approximately 15 km radius. Additionally, a single gateway can support as many as 10^5 end devices, each sending 50 bytes of data every hour, with negligible packet drops.
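As a back-of-envelope sanity check of that scale (my own arithmetic, not a figure from the thesis), the aggregate offered load of that many devices at that message rate is modest:

```python
# Aggregate offered load if 10^5 devices each send one 50-byte message per hour.
devices = 10**5
payload_bytes = 50
interval_s = 3600  # seconds per message, per device

aggregate_bps = devices * payload_bytes * 8 / interval_s
print(f"aggregate offered load: {aggregate_bps:.0f} bps (~{aggregate_bps/1000:.1f} kbps)")
# → aggregate offered load: 11111 bps (~11.1 kbps)
```

Roughly 11 kbps in aggregate is plausible for one gateway because a LoRaWAN gateway demodulates several channels in parallel, so the load is spread over multiple channels rather than carried by a single one.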
On the negative side, while a throughput of up to 5.5 kbps can be achieved over a single 125 kHz channel at the physical layer, the throughput achieved at the application layer is substantially lower, less than 1 kbps, due to network protocol overhead.

Item Open Access: Physically-based modelling of flowers (2006). Poon, Kelly; Rokne, Jon G.

Item Open Access: Predictive Analysis and Recommendation for Managing Risk and Avoiding Hazard in Chemical and Oil & Gas Industrial Infrastructures (2018-12-07). Polat, Serhan; Rokne, Jon G.; Alhajj, Reda S.; Moshirpour, Mohammad.
Chemical processing industrial infrastructures such as oil & gas plants operate with the risk of hazardous events which may lead to casualties and economic and/or environmental consequences. Fortunately, a variety of devices and mechanisms are available or rapidly emerging to capture data that can be used to develop techniques for issuing timely hazard alerts. This would help avoid or prevent hazards and hence save lives, the environment and the economy. The aim of this thesis is thus to develop an approach for analyzing report data captured during infrastructure operations, which can guide domain experts in handling various causes and consequences of hazards. Such data may be publicly available or may exist in private repositories of processing companies; the latter may not be accessible outside the company premises. The data used in this thesis has therefore been crawled from publicly available reports in formats ranging from plain text and semi-structured to structured. The crawled reports have been preprocessed using natural language processing techniques. A domain ontology has been used to guide the clustering and classification processes, and a multiagent system has been integrated into the developed approach.
Utilizing a multiagent system allows multiple perspectives to be incorporated into the process. These perspectives are represented by independent agents that collaborate and negotiate to reach a consensus. The developed approach has been successfully applied to publicly available gas and oil infrastructure hazard-related data, and the reported results may be used to recommend safeguards that reduce the risk level in the processes.

Item Open Access: Problems on a set of convex objects (1991). Chen, Haihuai; Rokne, Jon G.

Item Open Access: Proximity and applications in general metrics (1998). Gavrilova, Marina; Rokne, Jon G.

Item Open Access: Question-And-Answer Community Mining in Software Project Management – A Deep Learning Approach (2020-12). Ahmadi, Alireza; Ruhe, Guenther; Moussavi, Mahmood; Rokne, Jon G.
Software project management (SPM) is one of the most prominent fields in Software Engineering (SE). In recent years, rapid growth in data science has created a new research opportunity for supporting project managers, referred to as SPM analytics. The majority of efforts in the field concern using project data for estimation problems and mining public data for Requirements Engineering applications. In more general SE applications, however, Question and Answer (QA) communities such as StackOverflow are known as rich data sources. While most studies in SPM analytics use traditional Machine Learning (ML) methods, this thesis proposes a method named DeepQA-Miner, based on Deep Neural Networks (DNNs), to mine SPM QA communities. Project Management StackExchange (PMSE), a well-known community for project managers, is targeted: it provides project managers with a venue to share their questions, making it a great candidate for characterizing practitioners' needs. DeepQA-Miner pre-processes the data and feeds it into a multi-input multi-head network.
The network receives different data parts separately, embeds the text internally, extracts the essential patterns, and classifies it for multiple purposes, leveraging a single shared knowledge base. More than 5000 questions from PMSE are accessed, classified from four different perspectives, and analyzed by tone to formulate SPM practitioners' needs. DeepQA-Miner's performance is compared with four baseline methods, and overall it outperforms the other classifiers. Although two of the traditional methods achieved slightly higher accuracy in one of the binary classification tasks, DeepQA-Miner shows a remarkable improvement in the multi-class tasks. Furthermore, the findings provide potential directions for further research and development. As an application, the findings are compared with the status quo of SPM education, as reflected in SPM-related courses at Canada's top 10 universities, and a set of considerations is proposed for reducing the gap between industry needs and course agendas. As a contribution to Open Science, all data parts are being made publicly available: https://github.com/alirzahmadi/DeepQA-Miner

Item Open Access: Ray Density Analysis for Virtual Spectrophotometers (1999-03-01). Baranoski, Gladimir V.G.; Rokne, Jon G.; Xu, Guangwu.
Virtual spectrophotometric measurements have important applications in physically-based rendering. These measurements can be used to evaluate reflectance and transmittance models through comparisons with actual spectrophotometric measurements. They can also be used to generate spectrophotometric data, dependent either on the wavelength or on the illuminating geometry of the incident radiation, from previously validated models. In this paper the ray-casting-based formulation for virtual spectrophotometers is discussed, and a mathematical bound, based on probability theory, is proposed to determine the number of rays needed to obtain asymptotically convergent readings.
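The exponential Chebyshev (Chernoff) inequality underlying such ray-count bounds can be stated in a standard textbook form; the derivation below is my summary of that standard form, not the paper's exact formulation:

```latex
% Exponential Chebyshev (Chernoff) bound: for a random variable X and any t > 0,
P(X \ge a) \le e^{-ta}\,\mathbb{E}\!\left[e^{tX}\right].
% For a Monte Carlo estimate \hat{F} = \frac{1}{n}\sum_{i=1}^{n} X_i built from
% i.i.d. indicator variables X_i \in \{0,1\} (ray hit / miss), optimizing over t
% gives the Hoeffding form of the tail bound:
P\left(\left|\hat{F} - \mathbb{E}[\hat{F}]\right| \ge \varepsilon\right) \le 2e^{-2n\varepsilon^2},
% which can be inverted to choose the ray count n for a target
% reliability 1-\delta and tolerance \varepsilon:
n \ge \frac{1}{2\varepsilon^2}\ln\frac{2}{\delta}.
```

The practical point is that the required ray density grows only logarithmically in the desired reliability, which is what makes the reliability/cost ratio favourable.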
Specifically, the exponential Chebyshev inequality is introduced to determine the ray density required to obtain reflectance and transmittance measurements with a high reliability/cost ratio. Practical experiments illustrate the validity and usefulness of the proposed approach.

Item Open Access: Recognizing human emotional states from body movement (2019-07-09). Ahmed, Ferdous; Gavrilova, Marina L.; Korobenko, Artem; Rokne, Jon G.
An emotion-aware computer system capable of responding to expressive human gestures and movements can significantly change the dynamics of human-computer interaction. This thesis addresses the creation of a computer model capable of automatically discerning emotion from various motion-related features of the human body. The proposed emotion recognition model automatically identifies relevant motion features using a combination of filter-based feature selection methods and genetic algorithms. Beyond recognizing emotions, the thesis also seeks a deeper understanding of the role that various motion features play in emotion recognition, the ability of various parts of the human body to express emotionally relevant information, and the effects of various action scenarios on emotion recognition. Rigorous analysis conducted on a proprietary dataset shows that the proposed computer model is very effective at identifying human emotion based predominantly on motion-related information, and that it outperforms existing state-of-the-art computer models for emotion recognition.