Accepted Papers



EVOLUTIONARY ALGORITHMS TO SIMULATE REAL CONDITIONS IN ARTIFICIAL INTELLIGENCE AS BASIS FOR MATHEMATICAL FUZZY CLUSTERING
Ness, S. C. C.1, 1Evocell Institute, Austria

ABSTRACT

In present-day physics we may assume space to be a perfect continuum describable by discrete mathematics, or a set of discrete elements described by a programmed probabilistic process, or we may seek alternative models that grasp real conditions better because they more closely simulate real behaviour. Clustering logic based on evolutionary algorithms can give meaning to the unlimited amounts of data that enterprises generate and that contain valuable hidden knowledge. Evolutionary algorithms are well suited to making sense of this hidden knowledge, as they are very close to nature and the mind. However, most known applications of evolutionary algorithms cluster each data point into exactly one group, leaving out key aspects needed to understand the data and thus hindering simulations of biological processes. Fuzzy clustering methods divide data points into groups based on item similarity and detect patterns between items in a set, whereby data points can belong to more than one group. An evolutionary-algorithm-inspired fuzzy clustering multivariate mechanism allows for changes at each iteration of the algorithm and improves performance from one feature to another and from one cluster to another. It is applicable to real-life objects that are neither circular nor elliptical and thereby allows for clusters of any predefined shape. In this paper we explain the philosophical concept of evolutionary algorithms for the production of fuzzy clustering methods that yield good clustering quality in the fields of virtual reality, augmented reality and gaming applications, as well as in industrial manufacturing, robotic assistants, product development, law and forensics, and parameterless body model extraction from CCTV camera images.
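The abstract's evolutionary mechanism is not specified in detail here; as a minimal illustration of the underlying idea that a data point may belong to more than one group, the following sketch implements plain fuzzy c-means in Python (the data points and cluster count are invented for illustration):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: every point gets a membership degree in
    every cluster, and memberships per point sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None] # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))           # closer -> higher membership
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9], [2.5, 2.5]])
centers, U = fuzzy_c_means(X, c=2)
```

The last point sits between both groups and ends up with roughly equal membership in each, which is exactly what hard (one-group) clustering cannot express.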

KEYWORDS

Artificial Evolution, Artificial Intelligence, Biology, Big Data, Cellular Automata, Data Interpretation and Analytics, Deep Learning, Feature Selection, Genetic Algorithms, Generative Models, Machine Learning, Pattern Recognition, Robotic Process Automation, Simulation, Smart Systems, Virtual Machines, Visualization.


DESIGN AND IMPLEMENTATION OF LINE FOLLOWER AND OBSTACLE DETECTION ROBOT

Ahmed Bendimrad1, Ayoub El Amrani1, Karim El Khadiri2 and Bouchta El Amrani1
1Laboratory of thin films and surface treatment by Plasma, Higher Normal School, Sidi Mohamed Ben Abdellah University, Fez, Morocco
2Department of Physics, Faculty of Sciences, Sidi Mohamed Ben Abdellah University, Fez, Morocco

ABSTRACT

In this paper, we propose a method for a line follower robot based on the instantaneous computation of the radius of curvature of the line, using infrared line sensors. The number and layout of the sensors, as well as the chosen method, play an important role in the robot's ability to follow the line with the desired accuracy and speed. In addition, the robot must be equipped with an anti-collision system, using an ultrasonic distance sensor, to detect and avoid obstacles in several situations, especially at level crossings, when other robots share a common complex line.
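The paper's actual sensor layout and control law are not reproduced here; the sketch below illustrates, under assumed geometry, the two quantities the abstract relies on: a lateral line position from an evenly spaced IR sensor array, and a discrete estimate of the radius of curvature from three successive position samples (sensor count, spacing and sampling step are all hypothetical):

```python
def line_position(readings, spacing_cm=1.0):
    """Weighted centroid of IR sensor readings -> lateral line offset (cm).
    Sensors are assumed evenly spaced and centred on the robot axis."""
    n = len(readings)
    positions = [(i - (n - 1) / 2) * spacing_cm for i in range(n)]
    total = sum(readings)
    if total == 0:
        return None                                  # line lost
    return sum(p * r for p, r in zip(positions, readings)) / total

def radius_of_curvature(p_prev, p_now, p_next, step_cm=1.0):
    """Discrete estimate R = (1 + y'^2)^(3/2) / |y''| from three successive
    lateral offsets sampled every `step_cm` of forward travel."""
    y1 = (p_next - p_prev) / (2 * step_cm)           # first derivative
    y2 = (p_next - 2 * p_now + p_prev) / step_cm**2  # second derivative
    if y2 == 0:
        return float("inf")                          # straight segment
    return (1 + y1**2) ** 1.5 / abs(y2)
```

A straight segment yields an infinite radius, while a tightening offset sequence yields a small one, which is the signal a speed/steering controller would act on.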

KEYWORDS

Robot, Microcontroller, Sensor & Actuator.


ARTIFICIAL INTELLIGENCE AND IMPLICATIONS ON THE DEMOCRATIC STATE OF LAW

Lucas Cortizo, Law School of University of Minho, Braga, Portugal

ABSTRACT

This paper presents a legal debate about Artificial Intelligence using personal data after the coming into force of the General Data Protection Regulation, and its consequences for democracy and society. It is a fact that we are passing through a democratic crisis, and after big political campaigns that made massive use of Machine Learning to build highly personalized political advertising, common sense tends to blame AI for making democracy weaker. The study shows that AI is just a tool, and any misuse of it to manipulate emotions after a breach of personal data must incur liability. Machine Learning merely automates the process of acquiring knowledge; the paper highlights this and defends the neutrality of any technology. In addition, it lists relevant uses of AI for society's sake. The analysis covers AI solving problems, enabling a much more efficient e-government and, as a consequence, building a stronger democracy.

KEYWORDS

Artificial Intelligence, Machine Learning, Algorithm, Democracy, Personal Data Protection


PERFORMANCE EVALUATION OF DOPPLER METHOD FOR ANGLE OF ARRIVAL ESTIMATION

Abubakar Y. Nasir, U. I. Bature, K. I. Jahun and A. M. Hassan, Department of Computer and Communications Engineering, Faculty of Engineering and Engineering Technology, Abubakar Tafawa Balewa University (ATBU), Bauchi, Bauchi State, Nigeria

ABSTRACT

Radio direction finding, which utilizes angle-of-arrival (AOA) estimation, is a function of a radio monitoring system used to estimate the direction of a signal. In this paper, the single channel technique is implemented. Single channel direction finding (DF) systems offer several advantages over multiple channel systems, such as lower power consumption, portability and lower cost. This paper presents a performance evaluation of the Doppler DF technique for angle of arrival estimation. The radio direction finder implementing the Doppler method consists of a circular antenna array that rotates at a constant speed. The received signals are spatially located, and the rotation of the antenna introduces a Doppler shift into them. The Doppler method uses this Doppler shift and the spatial location of the receiving antenna to estimate the AOA of the received signals. The performance of the system was verified by Monte Carlo simulation to determine the variance of the AOA estimation at various signal-to-noise ratios (SNR).
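As a hedged illustration of the pseudo-Doppler principle described above, the following sketch models the rotation-induced phase modulation directly and recovers the AOA from the phase of the rotation-frequency component; the rotation rate, modulation index and noise model are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def estimate_aoa(theta_true, snr_db, fr=1_000.0, fs=100_000.0, T=0.01,
                 beta=2.0, rng=None):
    """Pseudo-Doppler sketch: rotating the antenna at fr Hz phase-modulates
    the carrier as beta*cos(2*pi*fr*t - theta); the phase of the fr component
    of the demodulated phase track is the angle of arrival."""
    rng = rng or np.random.default_rng()
    t = np.arange(0.0, T, 1.0 / fs)
    phase = beta * np.cos(2 * np.pi * fr * t - theta_true)
    phase += rng.normal(0.0, 10 ** (-snr_db / 20), t.size)  # AWGN on the track
    # Correlate against quadrature references at the rotation frequency.
    c = np.sum(phase * np.cos(2 * np.pi * fr * t))
    s = np.sum(phase * np.sin(2 * np.pi * fr * t))
    return np.arctan2(s, c)

theta_hat = estimate_aoa(np.pi / 3, snr_db=30, rng=np.random.default_rng(0))
```

Repeating the call over many independent noise draws gives the Monte Carlo variance-versus-SNR curves the abstract refers to.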

KEYWORDS

Angle-Of-Arrival (AOA), Signal-to-Noise Ratios (SNR), Doppler Method, Additive White Gaussian Noise (AWGN), Monte Carlo.


ANALYSIS AND CLASSIFICATION TECHNIQUES OF ECG SIGNALS: SURVEY

Taissir Fekih Romdhane1,2, Ridha Ouni3 and Mohamed Atri4, 1ENISo, Electrical Engineering Department, University of Sousse, Tunisia 2Laboratory of Electronics and Microelectronics, LR99ES30, FSM, 3College of Computer and Information Sciences, Department of Computer Engineering, KSU, KSA and 4Faculty of Science of Monastir, University of Monastir, Tunisia

ABSTRACT

Due to the gravity of some heart diseases, several research efforts try to develop robust ECG analysis and classification tools that help physiologists correctly detect cardiac arrhythmia. In this context, this paper surveys analysis and classification techniques, aiming to give physiology and data science researchers a better understanding of the different ECG signal processing and classification algorithms. The paper introduces the main ECG signal properties (such as the P wave, R wave, RR interval, PR interval and QRS complex) and the important noise sources, such as baseline drift, EMG, muscle contraction and electrode contact, that strongly affect this signal. The survey then presents various methods and algorithms used to preprocess signals collected from the MIT-BIH database, to extract features and to classify them into many arrhythmia classes.
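As a small illustration of two of the preprocessing steps such surveys cover, the sketch below removes baseline drift with a moving-average filter and detects R peaks with a simple threshold-plus-refractory rule on a synthetic signal; real pipelines on MIT-BIH data use far more robust detectors, so this is only a toy:

```python
import numpy as np

def remove_baseline(ecg, fs, win_s=0.6):
    """Estimate slow baseline drift with a moving average and subtract it."""
    w = int(win_s * fs)
    baseline = np.convolve(ecg, np.ones(w) / w, mode="same")
    return ecg - baseline

def detect_r_peaks(ecg, fs, thresh_ratio=0.6, refractory_s=0.25):
    """Flag local maxima above a fraction of the global max, enforcing a
    refractory period so each beat is counted once."""
    thresh = thresh_ratio * ecg.max()
    peaks, last = [], -int(refractory_s * fs)
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thresh and ecg[i] >= ecg[i - 1] and ecg[i] >= ecg[i + 1]:
            if i - last >= refractory_s * fs:
                peaks.append(i)
                last = i
    return peaks

# Synthetic ECG: one R spike per second riding on slow baseline wander.
fs = 250
t = np.arange(0, 5, 1 / fs)
ecg = 0.3 * np.sin(2 * np.pi * 0.2 * t)   # baseline wander
ecg[(t % 1.0) < 1 / fs] += 1.0            # R spikes at whole seconds
clean = remove_baseline(ecg, fs)
peaks = detect_r_peaks(clean, fs)
```

After drift removal the spikes stand well clear of the threshold, so the detector recovers one peak per simulated beat.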

KEYWORDS

Arrhythmia, Classification, ECG signal, Filter, Feature extraction, MIT-BIH database, Signal processing


USER POWER ALLOCATION ALGORITHM FOR DOWNLINK NOMA IN VISIBLE LIGHT COMMUNICATION

Xiaoyi Liu, Hongyi Yu and Erfeng Zhang National Digital Switching System Engineering and Technological Research Center, Zhengzhou, China

ABSTRACT

Visible light communication (VLC) is a promising technique for future networks due to its advantages of high data rate and licence-free spectrum. In addition, non-orthogonal multiple access (NOMA) is considered a candidate multiple access scheme for 5G networks and beyond. In this paper, we study the power allocation problem in NOMA-based visible light communication. In particular, we optimize the power allocation strategies under both sum-rate maximization and max-min fairness criteria, where practical optical power and Quality of Service (QoS) constraints are included. The nonconvex objective function is transformed into a convex one, and the optimal solution is obtained under the QoS constraints. As our main contribution, we derive optimal power allocation solutions in semi-closed form via mathematical analysis. Simulation results show that the performance gain of NOMA over OMA can be further enlarged by pairing users with distinctive channel conditions.
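The paper's semi-closed-form solutions are not reproduced here; as an illustration of the setting, the sketch below models two-user downlink NOMA rates with successive interference cancellation and finds the max-min fair power split by bisection, using invented channel gains and noise power:

```python
import math

def noma_rates(split, P=1.0, g_strong=4.0, g_weak=1.0, noise=0.1):
    """Two-user downlink NOMA with SIC. `split` is the power fraction for the
    strong user; the weak user treats the strong user's signal as interference,
    while the strong user cancels the weak user's signal before decoding."""
    p_s, p_w = split * P, (1 - split) * P
    r_weak = math.log2(1 + p_w * g_weak / (p_s * g_weak + noise))
    r_strong = math.log2(1 + p_s * g_strong / noise)
    return r_strong, r_weak

def max_min_split(lo=1e-6, hi=1.0 - 1e-6, iters=60):
    """Bisection on the split: r_strong increases with it, r_weak decreases,
    so the max-min optimum is where the two rates cross."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        r_s, r_w = noma_rates(mid)
        if r_s < r_w:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Bisection works here because the two rates are monotone in opposite directions in the split, so equalizing them maximizes the minimum rate.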

KEYWORDS

Visible Light Communication (VLC), Non-Orthogonal Multiple Access (NOMA), Power Allocation, Quality of Service, Sum Rate, Max-min Fairness

STABILIZATION FOR NONLINEAR SWITCHED SYSTEMS WITH SLOWLY VARYING PARAMETER

Wajdi Kallel
Mathematics Department, Faculty of Applied Sciences University Umm Al-Qura, KSA.

ABSTRACT

In this paper, we establish some conditions for the stabilization of switched systems with slowly varying parameters. Some necessary conditions are given for the stabilizability of switched homogeneous systems with varying parameter. Finally, the efficiency of the proposed approach is illustrated through some examples.

KEYWORDS

Stability, stabilization, switched systems, common Lyapunov function.


MINING INTERESTING RARE ASSOCIATION RULES USING OBJECTIVE AND SUBJECTIVE MEASURES

Ines Hilali Jaghdam1, Sadok Ben Yahia2, 1 Department of Computer Science - Community College, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia and 2 Department of Computer Science - Faculty of Sciences, University of Tunis El Manar, Tunis, Tunisia

ABSTRACT

Mining association rules is one of the most relevant techniques in the data mining area. The majority of existing techniques for mining association rules rely on frequent patterns to generate interesting rules. Nevertheless, infrequent patterns, such as rare association rules or rare itemsets, can also be of interest to the user and may provide relevant information. In this paper, we introduce a new approach for mining rare association rules using interestingness measures. The main originality of this contribution is that we combine both objective and subjective measures in order to extract rare, interesting and useful association rules. In fact, we show that finding interesting association rules from rare patterns is feasible whenever we process them using both data-driven and user-driven measures. The experiments carried out on benchmark data sets show encouraging results in terms of the interesting association rules found.
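As a toy illustration of the objective side of this idea (the subjective, user-driven measures are not modeled here), the sketch below keeps only rules whose support is *below* the usual frequency threshold yet whose confidence remains high; the transactions are invented:

```python
from itertools import combinations

transactions = [
    {"bread", "milk"}, {"bread", "milk"}, {"bread", "milk"},
    {"bread", "milk"}, {"caviar", "champagne"},   # rare but perfectly associated
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def rare_rules(max_support=0.3, min_confidence=0.8):
    """Rules on infrequent pairs (objective rarity) that are still reliable."""
    items = set().union(*transactions)
    rules = []
    for a, b in combinations(sorted(items), 2):
        s = support({a, b})
        if 0 < s <= max_support:
            conf = s / support({a})
            lift = conf / support({b})
            if conf >= min_confidence:
                rules.append((a, b, s, conf, lift))
    return rules
```

The frequent bread-milk rule is rejected as unremarkable, while the rare but perfectly reliable caviar-champagne rule is kept.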

KEYWORDS

Association Rules Mining, Interesting Rare Rules, Objective Measures, Subjective Measures


CONSENT BASED ACCESS POLICY FRAMEWORK

Geetha Madadevaiah1, RV Prasad1, Amogh Hiremath1, Michel Dumontier2, Andre Dekker3, 1Philips Research, Philips Innovation Campus, Philips India Ltd, Manyata Tech Park, Bangalore, 2Institute of Data Science, Maastricht University, Maastricht, The Netherlands and 3Department of Radiation Oncology (MAASTRO), Maastricht University Medical Centre+, The Netherlands

ABSTRACT

In this paper, we use Semantic Web technologies to store and share sensitive medical data in a secure manner. The framework builds on the advantages of Semantic Web technologies and makes them secure and robust for sharing sensitive information in a controlled environment. The framework uses a combination of role-based and rule-based access policies to secure a medical data repository. To support the framework, we built a lightweight ontology to collect consent from users indicating which part of their data they want to share with another user having a particular role. Here, we have considered the scenario of the owner of the data, say the patient, sharing medical data with relevant people such as physicians, researchers, pharmacists, etc. We developed a prototype, which is validated using the Sesame OpenRDF Workbench with 202,908 triples and a consent graph stating consents per patient.
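The framework's ontology and RDF machinery are not reproduced here; the toy sketch below shows only the access decision the abstract describes, a role-based check gated by per-patient consent (all names and categories are hypothetical):

```python
# Toy stand-in for the consent graph: patient -> role -> permitted categories.
consents = {
    "patient42": {
        "physician": {"diagnosis", "medication"},
        "researcher": {"diagnosis"},
    },
}

def allowed(patient, requester_role, category):
    """Role-based access gated by the patient's recorded consent:
    grant only if this role was consented for this data category."""
    return category in consents.get(patient, {}).get(requester_role, set())
```

In the full framework this lookup would be a SPARQL query over the consent ontology rather than a dictionary access, but the decision logic is the same.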

KEYWORDS

Access Policies, Semantic Web, RDF/SPARQL, Role Based, Rule Based


A PARALLEL BIT-MAP BASED FRAMEWORK FOR CLASSIFICATION ALGORITHMS

Amila De Silva, Department of Computer Science & Engineering, University of Moratuwa, Katubedda, Sri Lanka.

ABSTRACT

Bitmap representations have been abundantly used in data analytic queries for their ability to represent data concisely and to simplify processing. For the same reasons, bitmaps are gaining popularity in the data mining domain with the arrival of GPUs, since the memory organisation and design of a GPU demand regular, simple structures. However, due to the nature of their processing, the use of bitmaps has largely been restricted to frequent itemset mining (FIM) based algorithms. In this paper, we present a framework based on bitmap techniques that speeds up classification algorithms on GPUs. The proposed framework uses both the CPU and the GPU for algorithm execution, where the core computation is delegated to the GPU. We implement two classification algorithms, Naive Bayes and Decision Trees, using the framework, both of which outperform their CPU counterparts by several orders of magnitude.
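The GPU kernels themselves are not shown here; as a CPU-side illustration of the bitmap idea, the sketch below represents each (column, value) pair as a bit vector so that a Naive Bayes class-conditional count reduces to a popcount of two ANDed bitmaps (Python ints stand in for the packed words a GPU kernel would scan):

```python
def build_bitmaps(rows):
    """One bitmap per (column, value): bit i is set iff row i holds that value."""
    maps = {}
    for i, row in enumerate(rows):
        for col, val in enumerate(row):
            maps[(col, val)] = maps.get((col, val), 0) | (1 << i)
    return maps

def conditional_count(maps, feature_key, label_key):
    """Naive Bayes class-conditional count = popcount of the ANDed bitmaps."""
    return bin(maps.get(feature_key, 0) & maps.get(label_key, 0)).count("1")

# Toy table: column 0 = weather, column 1 = class label.
rows = [("sunny", "yes"), ("sunny", "no"), ("rain", "no"), ("rain", "yes")]
maps = build_bitmaps(rows)
n_sunny_no = conditional_count(maps, (0, "sunny"), (1, "no"))
```

Because counting is just AND-plus-popcount over fixed-width words, the same layout maps naturally onto the regular memory access patterns GPUs favour.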

KEYWORDS

Data Mining, Classification, Naive Bayes, Decision Tree, Bitmaps, Bit-Slices, GPU.


NOX SENSOR FAILURE ANALYSIS IN HEAVY TRUCKS USING ADAPTED RANDOM SURVIVAL FOREST FOR HISTOGRAMS

Ram Bahadur Gurung, Department of Computer and Systems Sciences, Stockholm University, Stockholm, Sweden

ABSTRACT

In heavy-duty truck operation, unexpected breakdowns can result in delayed services and huge business losses. Therefore, important components in trucks need to be regularly examined so that unexpected breakdowns can be prevented. Data-driven failure prediction models can be built using operational data from a large fleet of trucks. Machine learning methods such as Random Survival Forest (RSF) can be used to generate a survival model that predicts the survival probabilities of a particular component over time. However, RSF is not designed to handle data with histograms as feature variables, and the operational data from the trucks usually have many feature variables represented as histograms. Therefore, in this article, we propose an extension to the standard RSF algorithm and use it to build a survival model for the NOx sensor. The model thus obtained is compared with the model obtained using a standard RSF approach in which the bins of the histograms are treated individually as numeric features. The performance of the trained models is measured in terms of overall error rate. The experimental results show that the adapted approach outperforms the standard one, and the feature variables considered important are listed.

KEYWORDS

NOx sensor failure, histogram learning, histogram survival forest.


LIGHT-WEIGHT ALGORITHM FOR DEEP LEARNING ARCHITECTURE EVOLUTION APPLIED TO IMAGE-CLASSIFICATION

Patricio Astudillo1,2, Peter Mortier1, Matthieu De Beule1, and Joni Dambre2,
1FEops, Technologiepark 19, 9052 Zwijnaarde, Belgium 2UGent, Department of Electronics and information systems, Technologiepark 15, 9052 Zwijnaarde, Belgium

ABSTRACT

Recent studies have shown that algorithms for evolving deep learning architectures for image-classification can be used to generate high-performing deep learning models. These algorithms, however, require a lot of computation time and power. In this study, a light-weight algorithm for generating deep learning architectures for image-classification is proposed and validated. It is shown that this method can generate high-performing deep learning models with limited computation time and power.

KEYWORDS

Deep learning, Evolution, Classification


PREDICTING DAILY ACTIVITIES EFFECTIVENESS USING BASE-LEVEL AND META LEVEL CLASSIFIERS

Mohammed Akour1, Shadi Banitaan2 and Hiba Alsghaier3
1Yarmouk University, Jordan 2University of Detroit Mercy, USA 3Yarmouk University, Jordan

ABSTRACT

Collecting and analyzing Activities of Daily Living (ADL) could supplement elder care and long-term care services with very sensitive information about elderly people, what they do during the day and what challenges they face. Providing care for elderly people based on their ADL could let them live actively, independently and healthily. In this paper, we studied the effectiveness of base learners against ensemble methods for predicting ADL. The selected base learners are Naïve Bayes, Bayesian Network, Sequential Minimal Optimization, Decision Table and J48, while the selected ensemble learners are Boosting, Bagging, Decorate and Random Forest. The dataset was gathered from a wearable accelerometer attached to the chest. The data used in this study were collected from 15 participants conducting 7 activities, namely working at the computer; standing up, walking and going up/down stairs; standing; walking; going up/down stairs; walking and talking with someone; and talking while standing. For the base learners, J48 achieved the best results in terms of precision, recall and F-measure. Results also showed that Boosting with Decision Table as the base classifier achieved the best improvement over its base classifier. In addition, Bagging was the only ensemble approach that improved the results using all classifiers as base learners. Moreover, Bagging was able to predict five activities out of seven more efficiently than the other approaches, while the Rotation Forest approach was able to predict the remaining two activities more efficiently than the rest. The results also indicated that all approaches took a reasonable time to build the model, except Decorate.
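As a rough illustration of the base-versus-ensemble comparison (on synthetic stand-in data, since the accelerometer dataset is not included here), the sketch below cross-validates a single decision tree, bagged trees and Naive Bayes with scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Stand-in data with a few classes, in place of the 7 ADL activities.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

base = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(random_state=0)   # defaults to bagged decision trees
nb = GaussianNB()

for name, clf in [("tree", base), ("bagged trees", bagged), ("naive bayes", nb)]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```

The same cross-validated precision/recall comparison, run per activity, is what the abstract's base-versus-ensemble conclusions rest on.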

KEYWORDS

Machine Learning, Classification, Pattern Recognition, Activity Recognition, ADL


A LEARNING CONTROLLER DESIGN APPROACH FOR A 3-DOF HELICOPTER SYSTEM WITH ONLINE OPTIMAL CONTROL

Guilherme B. Sousa1*, Janes V. R. Lima1, Patrícia H. M. Rêgo2, Alain G. Souza3 and João V. Fonseca Neto4
1Postgraduate Program in Computer Engineering and Systems, State University of Maranhão UEMA, São Luís, MA, Brazil, 2Mathematics and Computing Department, State University of Maranhão - UEMA, São Luís, MA, 3Technological Institute of Aeronautics, São José dos Campos, SP, Brazil 4Department of Electrical Engineering, Federal University of Maranhão - UFMA, São Luís, MA, Brazil

ABSTRACT

This paper presents the design and performance investigation of a 3-DOF Quanser helicopter system using a learning optimal control approach grounded in approximate dynamic programming paradigms, specifically action-dependent heuristic dynamic programming (ADHDP). This approach results in an algorithm embedded in the actor-critic reinforcement learning architecture, which characterizes this design as a model-free structure. The developed methodology aims at implementing an optimal controller that acts in real time in the plant control, using only the input and output signals and states measured along the system trajectories. The feedback control design technique is capable of online tuning of the controller parameters according to the plant dynamics, which is subject to model uncertainties and external disturbances. The experimental results demonstrate the desired performance of the proposed controller implemented on the 3-DOF Quanser helicopter.
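The full actor-critic implementation is beyond an abstract; as a minimal sketch of the model-free flavour of ADHDP, the critic below approximates Q(s, a) as a linear function of a feature vector built from measured signals and is updated from the temporal-difference error alone (the feature map, discount factor and learning rate are invented for illustration):

```python
import numpy as np

def adhdp_critic_update(w, phi, phi_next, reward, gamma=0.95, lr=0.01):
    """Action-dependent HDP critic step: Q(s, a) ~= w . phi(s, a), updated
    from the TD error using only measured quantities (no plant model)."""
    td_error = reward + gamma * (w @ phi_next) - (w @ phi)
    return w + lr * td_error * phi, td_error

# One illustrative step with hand-picked feature vectors.
w0 = np.zeros(3)
w1, td = adhdp_critic_update(w0, np.array([1.0, 0.0, 0.0]),
                             np.array([0.0, 1.0, 0.0]), reward=1.0)
```

In the full architecture an actor network is improved against this critic, and both are tuned online along the helicopter's trajectories.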

KEYWORDS

Action-Dependent Heuristic Dynamic Programming, Actor-Critic Reinforcement Learning, Real-Time Control, 3-DOF Helicopter


CLOUD COMPUTING: ISSUES AND RISKS OF EMBRACING THE CLOUD IN A BUSINESS ENVIRONMENT

Shafat Khan
Himalayan University, Itanagar, India

ABSTRACT

Cloud computing is a swiftly advancing paradigm that is drastically changing the way people use their PCs. Over the last few years, cloud computing has developed from a promising business concept into one of the fastest growing segments of the IT industry. Despite the boom of the cloud and its numerous advantages, such as economic benefit, a rapidly elastic resource pool and on-demand service, enterprise customers are still hesitant to deploy their business in the cloud, and the paradigm also creates challenges for both users and providers. There are issues, such as unauthorized access, loss of privacy, data replication and regulatory violation, that require adequate attention. A lack of appropriate solutions to such challenges may cause risks that outweigh the expected benefits of using the paradigm. To address the challenges and associated risks, a systematic risk management practice is necessary that helps users analyse both the benefits and the risks of cloud-based systems. The aim of this paper is to provide a better understanding of the design challenges of cloud computing and to identify important research directions in this growing area.

KEYWORDS

Cloud computing, Data center, Risks, Challenges, Security, Business


IMPROVEMENT OF CHATBOT IN TRADING SYSTEM FOR SMES BY USING DEEP NEURAL NETWORK

Sathit Prasomphan
Department of Computer and Information Science, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok, THAILAND

ABSTRACT

This research presents a method for developing chatbots that serve their users. Such chatbots are used in many ways: answering business questions, providing customer information and train schedules, helping customers make reservations, acting as virtual assistants, and serving as call centers that can attend to ten million customers automatically. A deep learning based conversational artificial intelligence technique was used as the tool for learning conversations between the machine and the customer. The approach is used in conjunction with a convolutional neural network trained in TensorFlow to improve the accuracy of the chatbots. The experimental results show that using deep learning for chatbot training gives better accuracy than the traditional model.

KEYWORDS

NLU, NLG, Word Embedding, Tensorflow, RNN, LSTM, Sequence to Sequence Model, chatbots


TIME-INVARIANT CRYPTOGRAPHIC KEY GENERATION FROM CARDIAC SIGNALS

Sarah Alharbi, Md.Saiful Islam, and Saad Alahmadi
Department of Computer Science and Information, King Saud University, Riyadh, Kingdom of Saudi Arabia

ABSTRACT

The cardiac signal (also known as the ECG signal) has attracted researchers for generating cryptographic keys due to its availability and its intrinsic, individual-specific nature. However, it also exhibits intra-individual variance, which decreases the possibility of obtaining a time-invariant key for each participant and increases decryption errors when the key is used in symmetric cryptography. Furthermore, any procedure taken to reduce the intra-individual variance should be combined with an increase in the inter-individual variance, to ensure that an adversary cannot easily predict keys. In this paper, we propose a time-invariant cryptographic key generation approach (TICK) that improves these two types of variance in the real-valued ECG features of multiple sessions before converting them into binary sequences. Experiments show the viability of TICK in improving the reliability and randomness of keys generated from across-session data. By allowing more extended ECG features and lowering the number of bits assigned to each feature, key lengths can be further increased without affecting reliability and randomness.
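TICK's variance-shaping procedure is not reproduced here; the sketch below illustrates only the final quantization step the abstract alludes to, where fewer bits per feature make the binary key robust to small session-to-session drift (feature values, ranges and drift are synthetic):

```python
import numpy as np

def features_to_key(features, mins, maxs, bits_per_feature=2):
    """Quantise each real-valued feature into a fixed number of bits using
    per-feature ranges fixed at enrolment; coarser bins tolerate the
    session-to-session (intra-individual) variance."""
    levels = 2 ** bits_per_feature
    key = []
    for x, lo, hi in zip(features, mins, maxs):
        q = int(np.clip((x - lo) / (hi - lo) * levels, 0, levels - 1))
        key.extend(int(b) for b in format(q, f"0{bits_per_feature}b"))
    return key

# Two sessions of the same synthetic subject: small intra-individual drift.
session1 = np.array([0.42, 0.10, 0.77, 0.55])
session2 = session1 + np.array([0.02, -0.01, 0.01, -0.02])
mins, maxs = np.zeros(4), np.ones(4)
k1 = features_to_key(session1, mins, maxs)
k2 = features_to_key(session2, mins, maxs)
```

With two bits per feature the bins are wide enough that both sessions yield the same key, illustrating the abstract's trade-off between bits per feature and key reliability.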

KEYWORDS

Cryptography, Cryptographic key, ECG, Cardiac Signal, Enhancing the variance of features


MULTI-TARGET DETECTION METHOD OF LFMCW RADAR BASED ON SEGMENTED TIME-FREQUENCY IMAGE SYNTHESIS

Yu Qi1 and Rao Bin2, 1Wenchang Satellite Launch Center, Wenchang, Hainan, China and 2National University of Defense Technology, CEMEE, Changsha, Hunan, China

ABSTRACT

In this paper, a time-frequency analysis of multi-target detection against a background of strong clutter is carried out. First, the time-frequency characteristics of the linear frequency modulation continuous wave (LFMCW) radar signal are analyzed: the time-frequency image of the original signal, of the beat signal, and of multiple target signals (both stationary and moving), as well as the influence of multiple echoes on the time-frequency behaviour. Then, addressing the range-velocity ambiguity that easily occurs in multi-target detection, this paper proposes a new multi-target detection method based on segmented time-frequency image synthesis, derived from the spectrum characteristics of the LFMCW radar echo signal and the beat signal. This method includes the processes of spectrum splicing, spectrum superposition, fixed target cancellation and so on. The advantage of this method is that it can detect multiple targets at the same time, and it also performs clutter cancellation.
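As background to the range-velocity ambiguity the method addresses, the sketch below computes the LFMCW beat frequency as a range term plus a Doppler term under illustrative parameters; because the Doppler contribution changes sign between up- and down-chirp segments, a single segment cannot separate range from velocity, which is what motivates combining segmented time-frequency images:

```python
C = 3e8  # speed of light, m/s

def beat_frequency(range_m, velocity_mps, bandwidth_hz, chirp_s, fc_hz):
    """LFMCW beat frequency: a range term from the sweep slope plus a Doppler
    term from target motion. The Doppler sign flips between up- and
    down-chirps, creating the range-velocity ambiguity within one segment."""
    slope = bandwidth_hz / chirp_s          # Hz per second of sweep
    f_range = 2 * range_m * slope / C       # round-trip delay -> frequency
    f_doppler = 2 * velocity_mps * fc_hz / C
    return f_range + f_doppler              # up-chirp convention assumed here
```

For a stationary target the beat frequency is a pure range measurement; a moving target shifts it, and only combining segments with opposite Doppler signs resolves both unknowns.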

KEYWORDS

Linear frequency modulation continuous wave radar, time-frequency analysis, spectrum splicing, spectrum superposition, fixed target cancellation.


PRE-TREATMENT FOR THE AUTOMATIC ANNOTATION OF ARCHAEOLOGY IMAGES

Marwa Ben Salah, Ameni Yengui, Muhammad Muzzamil Luqman and Mahmoud Neji

ABSTRACT

Automatic image annotation is an effective approach to content-based archaeological image retrieval. This paper presents a proposal for pre-treatment prior to automatic annotation. The image preprocessing step is the set of operations performed on an image either to improve it or to restore it, that is, to recover the original signal as faithfully as possible.

KEYWORDS

RGB, HSV, Image Pretreatment, Automatic annotation


DYNAMIC HUMAN-CENTERED DESIGN: REINVENTING DESIGN PHILOSOPHIES FOR ADVANCED TECHNOLOGIES

Te-Wei Ho1, Timothy Wei2, Jing-Ming Wu1 and Feipei Lai3,4,5, 1National Taiwan University Hospital and National Taiwan University College of Medicine, Taiwan, 2University of California San Diego, USA and 3,4,5National Taiwan University, Taiwan

ABSTRACT

As technology becomes more advanced and saturates various industries, the role of design becomes equally significant. Traditionally, human-centered design (HCD) has been the main creative approach for design decisions in numerous applications. However, the role of HCD within advanced technology raises concerns. This paper examines the design philosophy of HCD in parallel with rising technologies, specifically artificial intelligence and machine learning systems, and explores the implications of utilizing a more dynamic approach. With HCD, many of the considerations are determined through user research; the dynamic HCD approach is introduced to accommodate the different units of analysis presented by advanced technologies and to create more streamlined designs that support and accelerate technological innovation.

KEYWORDS

Human-computer interaction, human-centered design, artificial intelligence