Samuel Benga and Julien-Hervé Mbappé, LingaTech Consulting, Douala, Cameroon
Terminology work has not been carried out with pencil and paper for several decades now, as it certainly was when the thesis of Eugen Wüster (1898-1977), founder of a theory of terminology and figurehead of the Vienna School, was published (Roche, Terminologie et Ontologie, 2005). Indeed, since the 1990s and the proliferation of the “digital humanities” (Abiteboul & Hachez-Leroy, 2015), an avalanche of precision tools has become available to terminology practitioners, tools whose aim is to produce knowledge representation products accessible to human users as well as to “artificial agents” (Carsenty, p. 193). It is against this backdrop that, in this article, we set out not only to revisit the major principles underpinning this practice, but also to present some of the essential tools in its panoply, all of which contribute to a final product in line with terminological principles and methods.
Terminology work, Ontology, Concept system, Semantic product, Knowledge representation.
Ralph Bou Nader, Marwa Elleuch, Meriana Kobeissi, Walid Gaaloul, and Boualem Benatallah, Télécom SudParis, Institut Polytechnique de Paris, Palaiseau, France
Process prediction involves analyzing traces to identify future activities at a specific step within a given process. These traces can be found in the logs of information systems used by business actors during process execution, including email systems. Email traces can therefore be used for process prediction. However, the unstructured textual nature of their content makes applying existing process prediction techniques challenging. Additionally, making predictions in the context of process-oriented emails is not limited to identifying future business process (BP) activities, which is the main target of existing works; it also requires recommending the emails that enable BP actors to perform these activities. Existing works on email prediction, for their part, mostly focus on enhancing email management, with limited consideration for the context of BPs. Those that do combine email management with the notion of BP are mostly limited to the stage of BP discovery from email logs, or at most to classifying incoming emails into BP activities. In this paper, we present a process-activity-aware email response recommendation system. We mainly focus on recommending useful BP knowledge and an email response template for its main body when responding to an incoming email. This BP knowledge covers the set of activities to be expressed in the email response, the intention behind expressing them (i.e., the speech act), and the manipulated business data. Our work differs from existing works in two fields: (1) in the field of process prediction, we consider unstructured textual data (i.e., emails) for predicting the next activities, which are used to generate email responses, and (2) in the field of email recommendation, we inject BP knowledge of incoming emails to provide BP-oriented suggestions.
Business process, Process prediction, Email response recommendation.
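To make the recommendation task concrete, here is a minimal Python sketch that classifies an incoming email into the next BP activity and looks up a response template for it. The toy emails, activity labels, and templates are invented for illustration; the authors' actual system is richer (it also predicts speech acts and manipulated business data).

```python
# Hypothetical sketch: predict the next BP activity from an incoming
# email, then fetch a response template for that activity. All data
# and names below are illustrative assumptions, not the paper's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: email bodies labelled with the BP activity that the
# eventual response should express.
emails = [
    "Please send me a quote for 100 units.",
    "Attached is the signed purchase order.",
    "The shipment arrived damaged, please advise.",
]
next_activity = ["send_quote", "confirm_order", "open_claim"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, next_activity)

templates = {
    "send_quote": "Dear customer, please find our quote attached ...",
    "confirm_order": "We confirm receipt of your order ...",
    "open_claim": "We are sorry to hear that; a claim has been opened ...",
}

incoming = "Could you quote 250 units of item X?"
activity = clf.predict([incoming])[0]
print(activity, "->", templates[activity])
```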
Yousef M. AlMutiri, Mohammed S. AlRashidi, AbdulRahman M. AlQahtani, Turki S. Alqahtani, and A.M. Sadek, General Administration of Laboratory, Saudi Standards, Metrology and Quality Organization (SASO), Riyadh, Kingdom of Saudi Arabia
In the present study, the artificial neural network (ANN) modeling technique was used to simulate the air conditioner (AC) performance test under various conditions. Backpropagation ANN models with multiple hidden layers were trained using 22 input variables and three targets. More than 800 test reports produced by the National Energy Efficiency Laboratory in KSA were used to train the ANN models. The input processing functions, neuron sizes, starting values of the weights and biases, layer transfer functions, training functions, and performance evaluation functions were discussed. The uncertainty components associated with the experimental measurements and with leakage in learning in the ANN model were evaluated. The minimum number of replicate runs with resetting of the weights and biases was estimated. The model was used to study the effect of airflow on the performance of the AC and to identify the conditions leading to high AC efficiency.
Artificial Neural Network, Electrical Efficiency, Air Conditioner Test.
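A hedged sketch of the kind of model this abstract describes: a multi-layer backpropagation network mapping 22 inputs to three targets, with replicate runs that reset the weights and biases. The synthetic data, layer sizes, and hyperparameters below are illustrative assumptions, not the paper's actual configuration.

```python
# Illustrative backpropagation ANN: 22 inputs, multiple hidden layers,
# 3 targets. Data is synthetic; real inputs would come from test reports.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 22))   # ~800 test reports, 22 input variables
Y = rng.normal(size=(800, 3))    # 3 targets (placeholder values)

X_std = StandardScaler().fit_transform(X)   # input processing step

# Replicate runs with re-initialized weights/biases, mirroring the
# study's estimation of a minimum replicate number.
scores = []
for seed in range(5):
    model = MLPRegressor(hidden_layer_sizes=(32, 16),
                         activation="tanh", solver="adam",
                         max_iter=2000, random_state=seed)
    model.fit(X_std, Y)
    scores.append(model.score(X_std, Y))
print("R^2 across replicates:", np.round(scores, 3))
```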
Amna Altaf1, Adnen El Amraoui1, Francois Delmotte1, Christophe Lecoutre2, 1Univ. Artois, UR 3926 Laboratoire de Génie Informatique et d’Automatique de l’Artois (LGI2A), Béthune, France, 2CRIL-CNRS, UMR 8188, University of Artois, Lens, France
This paper presents a predictive model designed to anticipate and mitigate late truck arrivals in cross-docking stations. In the absence of empirical data, a hypothetical dataset was generated. Employing sophisticated machine learning techniques, our model takes into account historical arrival patterns, traffic conditions, and unforeseen delays to forecast the likelihood of late truck arrivals. The methodology, encompassing data sources and machine learning algorithms, is delineated. Three experimental configurations were meticulously tested and analyzed, utilizing the Multilayer Perceptron (MLP), Recurrent Neural Network (RNN), and Convolutional Neural Network (CNN). The outcomes contribute to the enhancement of preventive maintenance in cross-dock facilities, providing valuable insights for optimizing operations amid dynamic logistical challenges.
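As an illustration of the prediction task, the sketch below trains a small MLP on a generated dataset, echoing the paper's use of hypothetical data. The features (scheduled hour, historical mean delay, traffic index), the labelling rule, and the network layout are all assumptions made for this example.

```python
# Toy late-arrival classifier over a generated (hypothetical) dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.integers(0, 24, n),     # scheduled arrival hour
    rng.normal(10, 5, n),       # historical mean delay (minutes)
    rng.uniform(0, 1, n),       # traffic congestion index
])
y = ((X[:, 1] + 30 * X[:, 2]) > 25).astype(int)   # 1 = late (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", mlp.score(X_te, y_te))
```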
S. Ensiye Kiyamousavi, Boris Kraychev, GATE Institute, Sofia University St. Kliment Ohridski, Bulgaria
This paper provides an overview of model pruning techniques within the context of Federated Learning (FL). As FL continues to gain traction across various applications, optimizing model performance while mitigating resource constraints on edge devices becomes paramount. We categorize model pruning techniques into unstructured, structured, and hybrid approaches, each with distinct mechanisms and implications for FL’s efficiency. Our survey spans recent advances, discussing the advantages and limitations of each pruning strategy. Through a detailed examination of algorithms, we highlight the role of model pruning in enhancing computational and communication efficiency, thereby facilitating more scalable and effective FL systems. The paper delves into the where and when of applying model pruning in FL and also introduces a taxonomy for understanding these methods’ nuanced impacts on FL’s performance and scalability.
Federated Learning, Model Pruning, Decentralized Machine Learning.
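To ground the unstructured category, the following minimal sketch shows magnitude-based pruning as it might be applied on an FL client before upload: the smallest-magnitude weights are zeroed so only a sparse update is communicated. This is purely illustrative; the survey's subject is precisely the many variants of when and where such pruning is applied.

```python
# Minimal unstructured magnitude pruning of a client-side update.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero roughly the fraction `sparsity` of smallest-|w| entries."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    # Ties at the threshold may zero a few extra entries; fine for a sketch.
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
local_update = rng.normal(size=(8, 8))
sparse_update = magnitude_prune(local_update, sparsity=0.8)
print("nonzero fraction:",
      np.count_nonzero(sparse_update) / sparse_update.size)
```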
Riccardo Crupi1, Daniele Regoli1, Alessandro Damiano Sabatino1, Immacolata Marano2, Massimiliano Brinis2, Luca Albertazzi2, Andrea Cirillo2, and Andrea Claudio Cosentini1, 1Data Science & Artificial Intelligence, Intesa Sanpaolo, Italy, 2Audit Data & Advanced Analytics, Intesa Sanpaolo, Italy
Explaining the occurrence of outliers and the mechanisms behind them can be extremely important in a variety of domains. Malfunctions, frauds, and threats, in addition to being correctly identified, oftentimes need a valid explanation in order to effectively perform actionable counteracts. The ever more widespread use of sophisticated Machine Learning approaches to identify anomalies makes such explanations more challenging. We present the Decision Tree Outlier Regressor (DTOR), a technique for producing rule-based explanations for individual data points by estimating the anomaly scores generated by an anomaly detection model. This is accomplished by first applying a Decision Tree Regressor, which computes the estimation score, and then extracting the decision path associated with the data point's score. Our results demonstrate the robustness of DTOR even in datasets with a large number of features. Additionally, in contrast to other rule-based approaches, the generated rules are consistently satisfied by the points to be explained. Furthermore, our evaluation metrics indicate performance comparable to Anchors in outlier explanation tasks, with reduced execution time.
Outlier detection, Explainability, Decision Tree.
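The core DTOR idea lends itself to a short sketch: fit a Decision Tree Regressor to the anomaly scores of a detector, then read off the root-to-leaf path for the point being explained as a rule. The detector, toy data, and tree depth below are illustrative choices, not the paper's exact setup.

```python
# Sketch of the DTOR idea: a surrogate regression tree over anomaly
# scores, with the decision path for one point extracted as a rule.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[0] += 6                                   # plant one outlier

detector = IsolationForest(random_state=0).fit(X)
scores = detector.score_samples(X)          # anomaly scores to mimic

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, scores)

# Extract the decision path (the rule) for the outlier x = X[0].
node_ids = tree.decision_path(X[:1]).indices
t = tree.tree_
rule = []
for node in node_ids:
    if t.children_left[node] == -1:         # leaf node, stop describing
        continue
    feat, thr = t.feature[node], t.threshold[node]
    op = "<=" if X[0, feat] <= thr else ">"
    rule.append(f"x[{feat}] {op} {thr:.2f}")
print(" AND ".join(rule))
```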
K. Sri Varun Venkatesh and Dr. S. Pratap, Chaitanya Deemed To Be University, India
This research explores how artificial intelligence may be incorporated into marketing tactics and how it can transform business operations. Businesses are rapidly utilizing Artificial Intelligence's capabilities to improve consumer interaction, expedite marketing campaigns, and streamline decision-making processes as a result of the technology's rapid growth. Predictive analytics, tailored suggestions, chatbots, and natural language processing are just a few of the marketing-related uses of Artificial Intelligence examined in this paper. The paper emphasizes the substantial influence of Artificial Intelligence on enhancing marketing efficacy, efficiency, and overall business performance through a thorough analysis of the current literature and case studies. The study also looks into the potential and difficulties of using Artificial Intelligence in marketing plans. By combining expert opinions with actual data, this research attempts to offer useful insights for companies looking to use Artificial Intelligence to improve their marketing strategies and streamline their operations. It emphasizes how crucial it is to use Artificial Intelligence as a tactical tool to gain a competitive edge and achieve long-term success in today's fast-paced business world. The potential of artificial intelligence to streamline corporate procedures is the main emphasis of this study, which examines the relationship between Artificial Intelligence and marketing through a succinct assessment of the research and real-world examples of how applications such as chatbots, tailored suggestions, and predictive analytics may transform marketing tactics. The paper also discusses issues including data protection and complex integration, offering suggestions for effective Artificial Intelligence deployment.
marketing, artificial intelligence (AI), satisfaction.
Juan José Flores Sedano, Hugo Estrada-Esquivel, and Alicia Martínez Rebollar, TecNM/Centro Nacional de Investigación y Desarrollo Tecnológico CENIDET, Cuernavaca, Morelos, México. C.P. 62470
At present, the Internet of Things and Artificial Intelligence are among the most relevant transformative technologies for making a smart world a reality. In this context, this paper explores the transformative synergy between the Internet of Things (IoT) and Artificial Intelligence (AI) by integrating AI algorithms into Arduino devices. The literature review demonstrates a current need for optimization in implementing AI algorithms on Arduino platforms. Through an empirical literature review and practical experimentation, this paper provides a comprehensive analysis of several Arduino boards, including the Portenta H7 Lite, Arduino Uno, Wemos D1 ESP8266, and Arduino Nano 33 BLE, comparing their performance for AI projects. The selection of an IoT board is emphasized based on project-specific needs and budget considerations. The research presented in this paper reveals the impact of combining IoT, AI, and Arduino on reshaping interactions with the connected world, paving the way for intelligent systems capable of making decisions and executing complex tasks.
Artificial Intelligence, Arduino platform, IoT.
Huang Jie, Fang Fang and Yee Choy Leong, School of Business and Economics, Universiti Putra Malaysia, 43400 Serdang, Selangor Darul Ehsan, MALAYSIA
The role of food traceability is crucial in food safety management and quality assurance systems. The ability to effectively trace food products has increasingly influenced the purchasing decisions of Chinese consumers. While traditional food traceability systems have provided viable solutions for monitoring and tracing the quality of the food supply chain, most of these solutions rely on centralised servers, and suppliers may tamper with and hide information in the pursuit of profit. This also makes consumers suspicious of the food traceability information they receive and reduces the level of trust between participants in all parts of the supply chain. The most important features of the emerging blockchain technology are decentralisation and tamper-resistance, which increase the possibility of secure and transparent food supply chain traceability management. However, the existing literature has not fully explored the development and management of food traceability systems using blockchain technology. Therefore, the main objective of this study is to design and develop a blockchain food traceability system through the study of blockchain technology for managing food traceability. This system will inject transparency and decentralisation into the supply chain to ensure food safety. The study also provides evidence for researchers and practitioners to apply blockchain for effective food traceability and has a positive effect on improving food sustainability.
Blockchain; food supply chain traceability system; food traceability management.
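The tamper-evidence property that motivates this design can be shown in a few lines: each traceability record embeds the hash of the previous one, so any retroactive edit breaks the chain. This toy sketch is an assumption-laden illustration only; real systems add consensus, smart contracts, and access control.

```python
# Toy hash-chained ledger of traceability events (illustrative only).
import hashlib, json

def make_block(prev_hash: str, event: dict) -> dict:
    body = {"prev_hash": prev_hash, "event": event}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        body = {"prev_hash": block["prev_hash"], "event": block["event"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

chain = [make_block("0" * 64, {"actor": "farm", "action": "harvest batch 42"})]
chain.append(make_block(chain[-1]["hash"],
                        {"actor": "processor", "action": "package batch 42"}))
print("chain consistent:", verify(chain))

chain[0]["event"]["action"] = "harvest batch 99"   # tamper with history
print("after tampering:", verify(chain))
```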
Faouzi Boufarès1, Aicha Ben Salem1,2, and Adam Boufarès3, 1Northern Paris Computer Science Lab (LIPN), LaMSN, Sorbonne Paris Nord University, France, 2RIADI Laboratory, La Manouba, Tunisia, 3Transactis Company, France
Nowadays, data is very important in organizations and companies. Data quality has a strong influence on decisions, and the consequences of poor quality can be very significant. It is very important to have an idea of how much trust to place in the data before any process starts. For this reason, it is essential to work through a checklist of data validity before using the data and to initiate, as much as possible, a process of data correction and enrichment. In this paper we present a method to visualize errors in a data source. A score is assigned to each error. The database checklist covers the homogenization of existing and missing values, functional dependencies, and duplicate and similar rows. Several measures are performed to assist in the correction of anomalies.
Data Quality, Data Trustworthiness, Data Anomalies, Constraints, Management Rules, Scoring Data, Data Cleaning, Machine Learning, Datasets, Structured or Unstructured Data, CSV or JSON Files.
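A hedged illustration of such a checklist with per-anomaly scores, in the spirit of the method above: count missing values, duplicate rows, and near-duplicate ("similar") rows, then aggregate into a simple trust score. The weights, the similarity criterion, and the toy table are assumptions for this sketch.

```python
# Toy data-validity checklist with a naive aggregate trust score.
import pandas as pd

df = pd.DataFrame({
    "name": ["Alice", "alice", "Bob", None, "Bob"],
    "city": ["Paris", "Paris", "Lyon", "Lyon", "Lyon"],
})

n_missing = int(df.isna().sum().sum())
n_duplicates = int(df.duplicated().sum())
# Crude "similar rows" check: extra duplicates found after case-folding.
n_similar = int(df.apply(lambda c: c.str.lower() if c.dtype == object else c)
                  .duplicated().sum()) - n_duplicates

total_cells = df.size
score = 1.0 - (n_missing + n_duplicates + n_similar) / total_cells
print(f"missing={n_missing} duplicates={n_duplicates} "
      f"similar={n_similar} trust_score={score:.2f}")
```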
Marco Wagner, Faculty of Engineering, Heilbronn University, Heilbronn, Germany
Machine Learning (ML), and especially its application to cyber-physical systems, is a rising field of research. Many approaches to leveraging the power of ML even in small devices have been published and applied in recent years, forming the field of TinyML. While TinyML promises several benefits such as cost reduction, privacy, and more, studies on which of these benefits are actually crucial when deciding to apply a TinyML approach to a real-world problem have been missing so far. The author of this paper argues that without understanding the “why”, the community of researchers and industrial parties may not understand the reasons for applying TinyML and hence may head into research and development directions that do not increase its success. This work analyses the application of TinyML approaches, and the reasons behind them, in recent years for three important fields of application: consumer electronics, manufacturing, and automotive. It determines the distribution of TinyML applications in the named fields and examines which of the aforementioned benefits actually drove the decision to use TinyML. Furthermore, this work investigates cross-connections between the benefits and thereby points out the main combinations of benefits to the adopter.
TinyML, embedded systems, embedded ML, machine learning, consumer electronics, automation, automotive.
Nadeem Qazi and Iftikhar Ahmed, University of East London, Tietoevry Finland
Deepfake technology, facilitated by deep learning algorithms, has emerged as a significant concern due to its potential to deceive humans with fabricated content indistinguishable from reality. The proliferation of deepfake videos presents a formidable challenge, propagating misinformation across various sectors such as social media, politics, and healthcare. Detecting and mitigating these threats is imperative for fortifying defenses and safeguarding information integrity. This paper tackles the complexities associated with deepfake detection, emphasizing the necessity for innovative approaches given the constraints of available data and the evolving nature of forgery techniques. Our proposed solution focuses on leveraging facial features and transfer learning to discern fake videos from genuine ones, aiming to identify subtle manipulations in visual content. We systematically break down videos into frames, employ the Haar cascade algorithm for facial recognition, and utilize transfer learning to extract discriminative features. We evaluate multiple pre-trained models, including VGG16, ConvNeXtTiny, EfficientNetB0, EfficientNetB7, DenseNet201, ResNet152V2, Xception, NASNetMobile, and MobileNetV2, for feature extraction. Subsequently, we feed these features into a Deep Artificial Neural Network (DANN) for deepfake detection and employ ensemble learning to combine the strengths of the best-performing models for enhanced accuracy. We found that the ensemble model comprising ConvNeXtTiny, EfficientNetB0, and EfficientNetB7 showed enhanced accuracy in detecting deepfakes compared to alternative models, achieving up to 98% accuracy through ensemble learning.
Deepfake, video classification, Transfer learning, EfficientNet, DenseNet, Ensemble learning, Haar Cascade.
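A condensed sketch of the pipeline described above: sample frames from a video, crop faces with a Haar cascade, and extract transfer-learning features with one pre-trained backbone. The file path is a placeholder and the classifier head is omitted; the paper additionally ensembles several backbones.

```python
# Frame sampling -> Haar-cascade face crops -> EfficientNetB0 features.
import cv2
import numpy as np
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.applications.efficientnet import preprocess_input

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
backbone = EfficientNetB0(include_top=False, pooling="avg",
                          weights="imagenet")

def video_face_features(path: str, max_frames: int = 16) -> np.ndarray:
    cap, feats = cv2.VideoCapture(path), []
    while cap.isOpened() and len(feats) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
            feats.append(backbone.predict(
                preprocess_input(face[None].astype("float32")),
                verbose=0)[0])
    cap.release()
    return np.array(feats)   # one feature vector per detected face

# features = video_face_features("example_video.mp4")  # hypothetical path
```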
Peishan Zhou, Stephen So, and Belinda Schwerin, School of Engineering, Griffith University, Gold Coast, Australia
Electrocardiography (ECG) is a promising approach for continuous fetal heart rate monitoring. Its morphology can provide information on fetal health to guide patient care by clinicians. However, fetal ECGs extracted from abdominal ECGs are often too weak for reliable fetal heart rate detection. This study evaluates the application of a U-Net architecture for accurate R-peak detection in low-SNR fetal ECG signals. The proposed method achieves high accuracy, with a positive predictive value of 99.81%, sensitivity of 100.00%, and an F1-score of 99.91% on direct fetal ECG from the Abdominal and Direct ECG Database, with significantly reduced false predictions, outperforming the two baseline methods it was compared with. Notably, our approach demonstrates robustness, accurately predicting peaks in regions of high distortion, a capability unmatched by the other methods evaluated. This finding indicates the suitability and benefits of the U-Net architecture for peak detection in fetal ECG signals.
U-Net, R-peak detection, QRS detection, fetal ECG.
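For readers unfamiliar with applying U-Nets to 1D signals, here is a minimal sketch that labels each sample of a fixed-length ECG window with a peak probability. Depth, filter counts, kernel sizes, and window length are illustrative; the paper's exact architecture may differ.

```python
# Minimal 1D U-Net (one encoder/decoder level with a skip connection)
# for per-sample R-peak probability on fixed-length ECG windows.
from tensorflow.keras import layers, Model

def build_unet_1d(length=1024):
    inp = layers.Input((length, 1))
    c1 = layers.Conv1D(16, 9, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling1D(2)(c1)                      # encoder
    c2 = layers.Conv1D(32, 9, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling1D(2)(c2)                      # decoder
    m1 = layers.Concatenate()([u1, c1])                  # skip connection
    c3 = layers.Conv1D(16, 9, padding="same", activation="relu")(m1)
    out = layers.Conv1D(1, 1, activation="sigmoid")(c3)  # peak probability
    return Model(inp, out)

model = build_unet_1d()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```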
Parmida Geranmayeh1 and Eckhard Grass2, 1Faculty of Mathematics and Natural Sciences, Department of Computer Science, Humboldt University of Berlin, Berlin, Germany, 2Faculty of Mathematics and Natural Sciences Department of Computer Science Humboldt-University of Berlin and IHP GmbH – Leibniz Institute for High Performance Microelectronics Frankfurt (Oder), Germany
Rectangular patch antennas are essential components of modern wireless communication systems and an integral part of 5G and emerging 6G technologies. They enable high-frequency, high-capacity, and energy-efficient wireless communication. The size of these antennas significantly affects their operating frequency, radiation patterns, and suitability for high-frequency applications. In addition, beamforming technology is crucial in 5G and 6G networks, improving coverage, capacity, and data rates. Array configurations of rectangular patch antennas are essential for effective beamforming, providing precise control over antenna patterns to enhance signal quality and reduce interference. This article examines the importance of the size of rectangular patch antennas in advanced wireless communications, specifically in 5G and the evolving field of 6G networks. The primary focus is on maximizing channel capacity between transmitters and receivers. For our investigation, after acquiring the room model through sensing, we conducted simulations using discretized antenna patterns for different patch array antenna sizes. The simulations are based on realistic room models and a ray-tracing approach. It is shown that the maximum network capacity increases with larger antenna arrays. However, identifying and selecting the optimal steering vectors for all antennas involved is not trivial, yet crucial for achieving the optimum configuration.
Antenna size, Beamforming, Rectangular patch antennas, Wireless network capacity.
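As background on why patch size fixes the operating frequency, the standard microstrip design equations are shown below; these are textbook relations, not formulas quoted from the article. For a substrate of relative permittivity εr and height h, a patch resonant at frequency fr has width W and length L given by:

```latex
\begin{aligned}
W &= \frac{c}{2 f_r}\sqrt{\frac{2}{\varepsilon_r + 1}}, &
\varepsilon_{\mathrm{eff}} &= \frac{\varepsilon_r + 1}{2}
  + \frac{\varepsilon_r - 1}{2}\left(1 + 12\,\frac{h}{W}\right)^{-1/2},\\[4pt]
\Delta L &= 0.412\,h\,
  \frac{\left(\varepsilon_{\mathrm{eff}} + 0.3\right)\left(\tfrac{W}{h} + 0.264\right)}
       {\left(\varepsilon_{\mathrm{eff}} - 0.258\right)\left(\tfrac{W}{h} + 0.8\right)}, &
L &= \frac{c}{2 f_r \sqrt{\varepsilon_{\mathrm{eff}}}} - 2\,\Delta L .
\end{aligned}
```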
Wei Chen, United States of America
This paper presents axiomatic proof rules for exception handling and argues that exceptions are no more than forward goto statements. The rules are defined within Hoare’s native proof system, with Dijkstra’s weakest precondition model as a mathematical foundation.
exception handling, exception proof rules, goto axioms, weakest precondition.
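As orientation for the "exceptions are forward gotos" reading, one common way to extend weakest preconditions with exceptions uses two postconditions, Q for normal exit and R for exceptional exit. The rules below are a standard formulation from the literature, not necessarily the exact rules developed in this paper:

```latex
\begin{aligned}
wp(\texttt{raise},\ Q,\ R) &= R\\
wp(S_1;\,S_2,\ Q,\ R) &= wp\bigl(S_1,\ wp(S_2,\ Q,\ R),\ R\bigr)\\
wp(\texttt{try}\ S_1\ \texttt{catch}\ S_2,\ Q,\ R) &= wp\bigl(S_1,\ Q,\ wp(S_2,\ Q,\ R)\bigr)
\end{aligned}
```

Intuitively, raise jumps forward to the nearest handler (so its obligation is the exceptional postcondition R), exactly as a forward goto's obligation is the assertion at its target label.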
Sarabjeet Singh1 and Mina Maleki2, 1University of Detroit Mercy, Detroit, MI, USA, 2Department of Computer Science, University of Detroit Mercy, Detroit, MI, USA
Requirements engineering (RE) is a crucial phase in software development, laying the foundation for subsequent stages. Integral to this process are functional and non-functional requirements. This paper uses semantic and syntactic analysis together with machine learning to identify and classify functional and non-functional requirements. Various natural language processing methods are employed, including tokenization, stopword analysis and removal, lemmatization, and semantic models such as Google's word2vec and the TF-IDF vectorizer. We utilized two widely adopted techniques for feature representation, TF-IDF and Word2Vec, alongside two commonly employed classifiers, Logistic Regression and Random Forest. Our findings indicate that TF-IDF outperformed Word2Vec across both classifiers: Logistic Regression achieved an accuracy of around 62.2%, and Random Forest approximately 63.8%, when using TF-IDF. Moreover, TF-IDF yielded higher precision, recall, and F1 scores than Word2Vec for both classifiers.
text classification, machine learning, requirements analysis, NLP, word vectorization.
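A compact sketch of the comparison this abstract describes: TF-IDF features fed to Logistic Regression and Random Forest for functional/non-functional classification. The toy requirements below are invented; the paper evaluates a real dataset and also Word2Vec features.

```python
# TF-IDF + two classifiers for FR/NFR requirement classification.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

reqs = [
    "The system shall allow users to reset their password.",
    "The system shall export reports as PDF.",
    "The application must respond within 2 seconds.",
    "All stored data must be encrypted at rest.",
] * 10                                   # repeated so 5-fold CV has data
labels = ["FR", "FR", "NFR", "NFR"] * 10

for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=0)):
    pipe = make_pipeline(TfidfVectorizer(stop_words="english"), clf)
    acc = cross_val_score(pipe, reqs, labels, cv=5).mean()
    print(type(clf).__name__, f"accuracy={acc:.2f}")
```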
Copyright © SEA 2024