Omar Al-Azzam and Paul Court, Department of Computer Science and Information Technology, St. Cloud State University, St. Cloud, MN 56301, USA
Painstaking measures should be taken to determine how federal dollars are spent. Proper justification for the allocation of funds, rooted in logic and fairness, builds trust and transparency. The COVID-19 pandemic warranted a rapid response by government agencies to provide vital aid to those in need. Decisions made should be evaluated in hindsight to see whether they achieved their objectives. In this paper, the data collected in the final four months of 2020 to determine funding for nursing home facilities via the Quality Incentive Program are analysed using data mining techniques. The objective is to determine the relationships among the numeric variables and the formulae given. The dataset was assembled by the Health Resources and Services Administration. Results are given for the reader’s insight and interpretation. The data collection and analytical process bring new questions to light, which merit further analysis.
Predictive modelling, Cross validation, Linear Regression.
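A minimal sketch of the kind of analysis the keywords name (linear regression evaluated with k-fold cross-validation), in plain NumPy; the HRSA dataset's fields are not given here, so the three predictors and the target below are synthetic stand-ins:

```python
import numpy as np

# Illustrative only: the HRSA dataset fields are not specified in the
# abstract, so synthetic numeric features stand in for facility metrics.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                       # three numeric predictors
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=n)

def ols_fit(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def ols_predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def kfold_r2(X, y, k=5):
    """Mean out-of-fold R^2 across k folds."""
    idx = np.arange(len(X))
    scores = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = ols_fit(X[train], y[train])
        pred = ols_predict(coef, X[fold])
        ss_res = np.sum((y[fold] - pred) ** 2)
        ss_tot = np.sum((y[fold] - y[fold].mean()) ** 2)
        scores.append(1.0 - ss_res / ss_tot)
    return float(np.mean(scores))

score = kfold_r2(X, y)
```

Cross-validation scores the model only on held-out folds, which is what justifies any claim about how well the fitted relationships generalize.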
Kyle J. Cantrell and Carlos W. Morato, PhD, Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, USA
We present a framework for developing, simulating, assessing, and deploying a Deep Q-Learning agent capable of “dropping in” in place of existing classical HVAC controllers. The aspects necessary for integration into modern building automation networks are discussed, along with other Deep Reinforcement Learning based approaches for HVAC. Benchmarks between several Deep Q-Networks and traditional classical control algorithms are presented. Finally, a framework based on OpenAI Gym and ASHRAE’s BACnet is detailed to demonstrate how to deploy Deep Q-Learning methods in the field.
Deep learning, deep reinforcement learning, deep q-learning, building automation, BACnet.
Nour Zaarour, Nadir Hakem and Nahi Kandil, Engineering School, UQAT-LRTCS, Rouyn-Noranda, Canada
In wireless sensor networks (WSN), high-accuracy localization is crucial both for WSN management and for numerous location-based applications. Only a subset of the nodes in a WSN is deployed as anchor nodes whose locations are known a priori, and these are used to localize the unknown sensor nodes. The accuracy of the estimated positions depends on the number of anchor nodes: increasing the number or ratio of anchors undoubtedly increases localization accuracy, but it severely constrains the flexibility of WSN deployment and raises cost and energy consumption. This paper aims to drastically reduce the number or ratio of anchors in a WSN deployment while ensuring a good trade-off with localization accuracy. Hence, we present an approach that decreases the number of anchor nodes without compromising localization accuracy. Assuming a random string WSN topology, results in terms of anchor rates and localization accuracy are presented and show a significant reduction in anchor deployment rates, from 32% to 2%.
Wireless sensor network (WSN), anchors, received signal strength (RSS), localization, path-loss exponent (PLE), connectivity.
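For readers unfamiliar with RSS ranging, a common starting point (not necessarily the authors' exact model) is the log-distance path-loss model, which relates received signal strength to anchor-to-node distance via the path-loss exponent; the reference power P0, reference distance d0, and exponent n below are illustrative:

```python
import math

# Log-distance path-loss model often used in RSS-based localization:
#   RSS(d) = P0 - 10 * n * log10(d / d0)
# where P0 is the power at reference distance d0 and n is the PLE.
def rss_at(d, p0=-40.0, d0=1.0, n=2.7):
    """Received signal strength (dBm) at distance d (metres)."""
    return p0 - 10.0 * n * math.log10(d / d0)

def distance_from_rss(rss, p0=-40.0, d0=1.0, n=2.7):
    """Invert the model to estimate anchor-to-node distance."""
    return d0 * 10.0 ** ((p0 - rss) / (10.0 * n))

# Round-trip sanity check: 12 m -> RSS -> back to ~12 m
d_est = distance_from_rss(rss_at(12.0))
```

In practice the PLE is environment-dependent and must itself be estimated, which is one reason fewer anchors normally means worse accuracy; the paper's contribution is keeping accuracy while cutting the anchor rate.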
Usman Abdullahi Musa, Muhammad Sirajo Aliyu PhD, Abdurrazak Umar Abdullahi, Duda Sani Abdullahi, Saadatu Gimba, Federal University Dutse, Nigeria
Cardiovascular disease is a significant public health concern responsible for many deaths annually, as well as considerable morbidity and impairment. The growth of health care data through electronic health record (EHR) systems makes it possible to analyse the data and forecast diverse scenarios for numerous fields. Accurate disease prediction through machine learning algorithms is needed because many contributing factors are beyond what the human mind can process. Numerous machine learning algorithms such as Random Forest, Logistic Regression, ANN, K-Nearest Neighbor, and SVM have been applied to the Cleveland heart dataset; however, modelling with Bayesian Networks (BN) remains limited. In this study, the widely used 14 features of the Cleveland heart data, collected from the UCI repository, were modelled using BN. Experimental results show that feature reduction techniques effectively improve the prediction performance of the classifier. The aim of the study is to examine how feature reduction can increase performance and to extract the feature dependencies that affect the performance of the classifier.
Machine Learning, Bayesian Network (BN), Naïve Bayes, Logistic Regression, KNN, Heart Disease, Prediction.
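As background, a naive Bayes classifier is the simplest Bayesian network (all features conditionally independent given the class); the sketch below, on synthetic binary "symptom" features rather than the Cleveland data, shows the fit/predict mechanics and a toy version of dropping an uninformative feature:

```python
import numpy as np

# Synthetic stand-in for the Cleveland data: three binary features,
# the first two correlated with the class, the third pure noise.
rng = np.random.default_rng(2)
n = 400
y = rng.integers(0, 2, size=n)                    # 1 = disease present
X = (rng.random((n, 3)) < np.where(y[:, None] == 1,
                                   [0.8, 0.7, 0.5],
                                   [0.2, 0.3, 0.5])).astype(int)

def fit_nb(X, y, eps=1.0):
    """Class priors and Laplace-smoothed P(feature=1 | class)."""
    prior = np.array([np.mean(y == c) for c in (0, 1)])
    theta = np.array([(X[y == c].sum(0) + eps) / ((y == c).sum() + 2 * eps)
                      for c in (0, 1)])
    return prior, theta

def predict_nb(X, prior, theta):
    # log P(c) + sum over features of log P(x_i | c)
    logp = np.log(prior) + (X @ np.log(theta.T)
                            + (1 - X) @ np.log(1 - theta.T))
    return np.argmax(logp, axis=1)

prior, theta = fit_nb(X, y)
acc_all = np.mean(predict_nb(X, prior, theta) == y)
# Toy "feature reduction": drop the uninformative third feature
prior2, theta2 = fit_nb(X[:, :2], y)
acc_red = np.mean(predict_nb(X[:, :2], prior2, theta2) == y)
```

A general BN replaces the independence assumption with learned edges between features, which is exactly the "feature dependency" structure the study sets out to extract.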
Ramu Mullangi, Tejasri Javvadi, Bhavani Pappu, RGUKT Srikakulam, India
COVID-19 and Tuberculosis are infectious diseases that primarily affect the lungs and share some symptoms, such as cold, fever, and difficulty in breathing. These diseases can be fatal. Pneumonia, COVID-19, and TB are transmitted by airborne droplets. Confirming any of these diseases using traditional methods requires considerable time, but all of them can be identified from chest X-ray images. Moreover, the RT-PCR test for COVID-19 produces some false positives, which are very dangerous. Since a chest X-ray can be obtained quickly and effectively, this method can be used where low latency is required. In this work, we built two deep learning classification models using two different sets of convolutional layers. The first model classifies images as COVID-19, TB, or Other, where Other may be pneumonia or normal. The second model then classifies the pneumonia and normal images.
COVID-19, Tuberculosis, Pneumonia, RT-PCR, Chest X-Ray images, convolution, deep learning, classification.
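The building block of both models is the convolutional layer; the NumPy sketch below spells out the valid-mode 2D convolution it computes, using a hand-chosen Sobel edge kernel (in a trained network the kernel weights are learned, not fixed):

```python
import numpy as np

# Valid-mode 2D convolution (cross-correlation), the core operation a
# convolutional layer applies to an input image with each filter.
def conv2d(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds where intensity changes left to
# right, e.g. at a lung boundary in a chest X-ray.
img = np.zeros((6, 6))
img[:, 3:] = 1.0                          # dark left half, bright right half
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], float)
edges = conv2d(img, sobel_x)
```

The output is large only in the columns straddling the intensity step; stacking many learned filters of this kind is what lets the two classifiers separate COVID-19, TB, pneumonia, and normal images.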
Wang Chaofan1 and Chen Xinyue2, 1School of Economics and Management, North China Electric Power University, Beijing, China, 2School of Quality and Standardization, Qingdao University, Shandong, China
The fundamental motivation for enterprises to build a standard system is to meet the subjective needs arising from the unique uses of standards, such as benchmarking and criteria, from input to output across the baseline relationships of all their business processes. Assessing the credibility and value of the standard system requires collaborative and mature processes to mediate cognition. Firstly, this paper clarifies the philosophical relationship between standard system construction and process management using general system structure theory. Secondly, it systematically summarizes the interaction mechanism between the two from a methodological perspective. Finally, it designs a conceptual model with the "human regulation" composite system as the core, coupling standard system construction with the outer edge of process management, in order to provide a new integration idea for enterprise standardization management and process management to jointly realize optimal value utility.
Standardization Discipline, Construction of Standard System, Process Management, Coupling Model.
Kamala Sudharani1, Javvadi Tejasri2, Kommana Leelavathi3, 1Assistant Professor, Department of ECE, RGUKT Srikakulam, Andhra Pradesh, India, 2Department of ECE, RGUKT Srikakulam, Andhra Pradesh, India, 3Department of ECE, RGUKT Srikakulam, Andhra Pradesh, India
Automatic Modulation Classification (AMC) is a key technology of non-cooperative communication systems, which has played a prominent role in military, security, and civilian telecommunication applications for decades. Traditional approaches such as likelihood-based and feature-based algorithms have been widely studied for AMC, but they are limited to a set of modulations and SNR levels and require signal and channel parameters in advance. The purpose of this research is to use deep learning algorithms for AMC. Deep learning (DL) is an elegant classification technique with outstanding success in many application domains, and DL-based AMC methods have recently been proposed with outstanding performance. In this paper, we utilize a CNN for Automatic Digital Modulation Classification (ADMC) and compare our network with a DNN, ResNet, and Inception. The CNN model achieves comparable classification accuracy without the need for manual feature selection. Model accuracy is compared across SNR values ranging from -20 dB to 18 dB in steps of 2 dB. Finally, we address the issue of training the CNN to differentiate QAM16 from QAM64.
Automatic digital modulation classification, Deep learning, Convolutional neural network, ResNet, Inception, Deep neural network.
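To make the classification task concrete, the sketch below generates the kind of input an AMC network sees: unit-power QAM16 symbols with additive white Gaussian noise at a chosen SNR, arranged as a 2 x N I/Q array (the parameters are illustrative, not the paper's dataset):

```python
import numpy as np

# QAM16 and QAM64 share their corner points, which is one reason they
# are hard to tell apart at low SNR.
rng = np.random.default_rng(3)

def qam16_symbols(n):
    """Random QAM16 symbols normalized to unit average power."""
    levels = np.array([-3, -1, 1, 3], float)
    sym = rng.choice(levels, n) + 1j * rng.choice(levels, n)
    return sym / np.sqrt(10.0)            # E[|s|^2] of raw levels is 10

def add_awgn(sig, snr_db):
    """Add complex white Gaussian noise at the given SNR (signal power 1)."""
    noise_power = 10.0 ** (-snr_db / 10.0)
    noise = np.sqrt(noise_power / 2) * (rng.normal(size=sig.shape)
                                        + 1j * rng.normal(size=sig.shape))
    return sig + noise

x = qam16_symbols(4096)
rx = add_awgn(x, snr_db=10.0)
# Networks typically take the I/Q components as a 2 x N real array
iq = np.stack([rx.real, rx.imag])
```

Sweeping snr_db from -20 to 18 in steps of 2 reproduces the evaluation grid the abstract describes.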
Ias Sri Wahyuni1 and Rachid Sabre2, 1University of Burgundy, Gunadarma University, 2Laboratory Biogéosciences CNRS, University of Burgundy/Agrosup Dijon, France
The aim of multi-focus image fusion is to integrate images with different objects in focus so as to obtain a single image with all objects in focus. In this paper, we present a novel multi-focus image fusion method based on Dempster-Shafer Theory using local variability (DST-LV). The method takes into account the information in the region surrounding each pixel: at each pixel, it exploits the local variability calculated from the quadratic difference between the value of the pixel I(x,y) and the values of all pixels in its neighbourhood. Local variability is used to determine the mass function. In this work, two classes in Dempster-Shafer Theory are considered: the blurred part and the focused part. We show that our method gives significant results.
Multi-focus images, Dempster-Shafer Theory, local distance.
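The local variability the abstract describes can be sketched directly: for each pixel, sum the squared differences with its 3 x 3 neighbours, then (illustratively, since the paper's exact mass assignment is not given here) normalize the two sources' variabilities into Dempster-Shafer masses for the "focused" class:

```python
import numpy as np

# Local variability: sum of squared differences between I(x, y) and
# each of its 3x3 neighbours. In-focus regions vary more locally than
# blurred ones, so this quantity can drive the mass assignment.
def local_variability(img):
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")
    lv = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            shifted = pad[1 + di:1 + di + img.shape[0],
                          1 + dj:1 + dj + img.shape[1]]
            lv += (img - shifted) ** 2
    return lv

def masses(lv_a, lv_b):
    """Illustrative mass functions for two source images: the source
    with larger local variability at a pixel gets the larger 'focused'
    mass (the epsilon avoids division by zero on flat regions)."""
    total = lv_a + lv_b + 1e-12
    return lv_a / total, lv_b / total

# Sharp texture (checkerboard) vs. a flat patch standing in for blur
sharp = np.indices((8, 8)).sum(0) % 2 * 1.0
flat = np.full((8, 8), 0.5)
m_a, m_b = masses(local_variability(sharp), local_variability(flat))
```

At interior pixels the sharp source gets nearly all the "focused" mass, so a fusion rule that keeps the higher-mass source would correctly prefer the in-focus image there.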