Call for Participation - 11th International Conference on Computer Science, Engineering and Applications (CSEA 2025)
November 15 ~ 16, 2025, Zurich, Switzerland
We invite you to join the 11th International Conference on Computer Science, Engineering and Applications (CSEA 2025), which will provide an excellent international forum for sharing
knowledge and results in theory, methodology and applications of Computer Science, Engineering
and Applications. The conference seeks significant contributions to all major fields of
Computer Science, Engineering, and Information Technology, in both theoretical and practical aspects.
Non-Author / Co-Author / Attendee (no paper)
100 USD for Online (Without Proceedings)
390 USD for Face to Face (With Proceedings)
You can reach us by mail: csea@csea2025.org or cseaconf@yahoo.com
Accepted Papers
Hybrid Semantic Search for Legal Document Retrieval in the Swiss Parliament: The ParlementAIre Approach
Ornella Vaccarelli1, Emmanuel de Salis2, Eden Brenot1, Henrique Marques Reis2, Jacqueline Kucera3, Philippe Meyer3, Aphrodite Albanis3, Hatem Ghorbel2, and Jean Hennebert1, 1Institute of AI and Complex Systems (iCoSys), School of Engineering and Architecture of Fribourg (HEIA-FR), HES-SO University of Applied Sciences and Arts Western Switzerland, Fribourg, Switzerland, 2Haute Ecole Arc (HE Arc), HES-SO University of Applied Sciences and Arts Western Switzerland, Neuchâtel, Switzerland, 3Parliamentary Library, Research & Data, Parliamentary Services, Bern, Switzerland
We present a domain-adapted hybrid AI retrieval system for legal and parliamentary document
search, developed within the ParlementAIre project in collaboration with the Swiss
Parliamentary Library. The system integrates BM25-based sparse retrieval with dense neural
embeddings in a multilingual, open-source pipeline. It is designed to address the linguistic,
structural, and terminological challenges of large legal corpora such as Fedlex and Curia Vista,
which differ in language, format, and semantic density. Evaluation on French-language
parliamentary queries demonstrates that the hybrid model consistently outperforms both sparse
and dense baselines, achieving statistically significant improvements in top-8 retrieval accuracy
across heterogeneous document types. Deployed and tested within the Swiss Parliamentary
Library, the system also improves time-to-discovery and recall of semantically relevant materials
that are frequently missed by conventional keyword-based approaches. These results highlight
the potential of AI technologies to enhance parliamentary and institutional processes while
adhering to core requirements for sovereignty, transparency, and democratic control.
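As a rough illustration of the sparse-dense fusion this abstract describes, the sketch below combines min-max-normalized BM25 scores with normalized dense-embedding similarities into one ranking. The function names, the normalization choice, and the alpha weight are illustrative assumptions, not the authors' implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def min_max(scores):
    """Min-max normalize a list of scores into [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def hybrid_rank(bm25_scores, dense_scores, alpha=0.5):
    """Rank document indices by a convex combination of normalized
    sparse (BM25) and dense scores; alpha=0.5 is an assumed weight."""
    fused = [alpha * s + (1 - alpha) * d
             for s, d in zip(min_max(bm25_scores), min_max(dense_scores))]
    return sorted(range(len(fused)), key=fused.__getitem__, reverse=True)
```

In a hybrid setup like this, a document that is only moderately ranked by keyword matching can still surface near the top when its embedding is semantically close to the query, which is the behavior the abstract credits for recovering materials missed by keyword-only search.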
Hybrid Retrieval Models, Document Retrieval, Parliamentary AI, Dense Embeddings, Semantic Search, Lexical Semantics, Semantic Processing, Legal NLP, AI for Governance, Information Retrieval, Information Extraction, Open-source NLP, Institutional NLP.
ELSA: A Style-aligned Dataset for Emotionally Intelligent Language Generation
Vishal Gandhi and Sagar Gandhi, Joyspace AI, WA, USA
Advancements in emotion-aware language processing increasingly shape vital NLP applications ranging from conversational AI and affective computing to computational psychology and creative content generation. Existing emotion datasets either lack emotional granularity or fail to capture necessary stylistic diversity, limiting the advancement of effective emotion-conditioned text generation systems. Seeking to bridge this crucial gap between granularity and style diversity, this paper introduces a novel systematically constructed dataset named ELSA (Emotion and Language Style Alignment Dataset) leveraging fine-grained emotion taxonomies adapted from existing sources (dair-ai/emotion dataset and GoEmotions taxonomy). This dataset comprises multiple emotionally nuanced variations of original sentences regenerated across distinct contextual styles (conversational, formal, poetic, and narrative) using advanced Large Language Models (LLMs). Rigorous computational evaluation using metrics such as perplexity, embedding variance, readability, lexical diversity, and semantic coherence measures validates the dataset's emotional authenticity, linguistic fluency, and textual diversity. Comprehensive metric analyses affirm its potential to support deeper explorations into emotion-conditioned style-adaptive text generation. By enabling precision-tuned emotionally nuanced language modeling, our dataset creates fertile ground for research on fine-grained emotional control, prompt-driven explanation, interpretability, and style-adaptive expressive language generation with LLMs.
emotion-aware language modeling, fine-grained emotion recognition, stylistic variation, emotion-conditioned text generation, large language models (LLMs), text augmentation, emotion and style transfer, affective text generation, emotion-centric NLP, multistyle text synthesis, Natural Language Generation (NLG).
How Do Sentiment and Toxicity Vary Across YouTube, Reddit, and X Comments Regarding Kylian Mbappé and Fake News?
Olzhasbek Zhakenov, Northwestern University in Qatar, Kazakhstan
Social media has fundamentally altered the landscape of sports fandom, creating dynamic platforms for both fervent support and intense criticism. However, the amplification of negativity and misinformation on these platforms poses a significant threat to public perception and the well-being of athletes. This study analyzes social media commentary surrounding Kylian Mbappé and associated sports narratives, leveraging data from YouTube, Reddit, and X. Utilizing Communalytic and Voyant, this research processes over 30,000 comments to assess sentiment, toxicity, and thematic trends. A key component of this analysis is the differentiation between real and fake news contexts, which reveals notable variations in the tone and language of online discussions. The findings of this study underscore the dualistic nature of social media, highlighting its capacity to serve as a conduit for both admiration and harmful discourse. This research provides valuable insights into the complex digital ecosystem of modern sports.
Efficient Hybrid Prompt-pruning for Open-source LLM Based Machine Translation
Zaowad R. Abdullah, Manal Iftikhar, Md. Tariqul Islam, and Rifat Shahriyar, Bangladesh
We propose a hybrid retrieval strategy for open-source LLM-based machine translation that filters out irrelevant top-k candidates before constructing the final translation prompt, thereby reducing input token count while maintaining or improving translation quality. Throughout this work, we demonstrate that fixed top-k retrieval in translation-specific LLMs is suboptimal, often incorporating redundant or irrelevant examples into the translation prompt. Our method combines dense embedding model relevance scores and normalized sparse BM25 scores to yield a hybrid score, which is later used to filter out irrelevant examples that fall below an empirically derived threshold. Unlike prior domain adaptation methods such as kNN-MT [2], LLM-based translation avoids dense token-level lookups. Rather, it incorporates source-translation pairs semantically/lexically similar to the translation query into the prompt and achieves a significant level of domain adaptation. While being simpler and significantly faster than kNN-MT, the quality of LLM-based MT depends highly on the context provided. Fixed retrieval configurations (e.g., top-5 or top-10), commonly adopted from general NLP tasks, often include irrelevant or redundant examples. While reranker models are usually employed to reorder retrieved examples, they still rely on a fixed top-k setup, leading to the inclusion of superfluous examples. Our experiments demonstrate a simple yet effective method that dynamically filters out suboptimal examples, retaining only the most relevant context for each translation query. Experiments across seven domains and three language pairs (DE→EN, AR→EN, ZH→EN) show that our method preserves translation performance while significantly reducing prompt size. We also compare our setup with the popular reranker model Cohere Rerank 3.5 [3] to establish the credibility of our work.
Furthermore, evaluations on the PeerQA benchmark demonstrate substantial gains in zero-shot segment-level retrieval, validating the hybrid pruning method. Our findings highlight the impact of selective example retrieval for optimally domain-adapted multilingual machine translation.
Machine Translation, LLM, RAG (Retrieval-Augmented Generation), Information Retrieval, n-shot Prompting, Prompt-Pruning, Domain Adaptation.
Chorify: An Intelligent Desktop Application to Teach Dance and Correct Motion Using Pose Estimation and a Vibration Band
Chenxi Huang1, Rodrigo Onate2, 1Crean Lutheran High School, 12500 Sand Canyon Ave, Irvine, CA 92618, 2California State Polytechnic University, Pomona, CA 91768
This project aims to make dance learning more accessible for deaf individuals through a program called RhythmSense. Many people who are deaf struggle to follow rhythm or music during dance. My solution combines AI-based pose detection, sound wave visualization, and a vibration feedback band. The system uses MediaPipe to track movements, compares them to a reference video, and provides instant visual and physical feedback. During testing, I focused on improving accuracy, reducing vibration delay, and ensuring that the program worked in different lighting and motion conditions. The results showed that the app could identify mistakes and match rhythm effectively. Overall, RhythmSense helps dancers not only see their errors but also feel the beat through vibration. This technology creates a more inclusive way for everyone, including deaf users, to experience and enjoy dance.
Dance learning, Pose estimation, Real-time feedback, Vibration Band.
Development of YOLOFin: An Advanced YOLO–LSTM Based Architecture for Financial Trading
Markos Markides1 and Arodh Lal Karn2, 1University of Cyprus, Nicosia, Cyprus, 2Xi’an Jiaotong–Liverpool University, Suzhou, China
This paper presents YOLOFin, a deep learning framework for financial market regime prediction that integrates image-based feature extraction with temporal modeling. Traditional OHLCV (open, high, low, close, volume) data are converted into multiple visual representations, including candlestick charts, indicator heatmaps, Gramian Angular Field (GAF) images, and inter-asset divergence plots. Features are extracted with YOLOv8 and subsequently modeled over time using a Long Short-Term Memory (LSTM) network, with event-driven labels generated via the triple-barrier method. Experiments on four years of Bitcoin data show that YOLOFin achieves a precision of 35%, outperforming the 33% random baseline in a Buy/Do Nothing classification setting. These results demonstrate the effectiveness of combining computer vision with financial forecasting and highlight the value of visual time-series representations for capturing patterns in noisy and volatile markets.
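The triple-barrier labeling mentioned in the abstract can be sketched with the standard three-barrier rule below: a profit barrier, a stop barrier, and a time (vertical) barrier. The barrier widths and horizon are illustrative assumptions, and the paper collapses the outcome into a Buy/Do Nothing setting rather than the three classes shown here.

```python
def triple_barrier_label(prices, entry, upper=0.02, lower=0.02, horizon=10):
    """Standard triple-barrier label for the position opened at `entry`:
    +1 if the upper (profit) barrier is hit first,
    -1 if the lower (stop) barrier is hit first,
     0 if the vertical (time) barrier expires untouched.
    Barrier widths and horizon are illustrative, not the paper's values."""
    p0 = prices[entry]
    for t in range(entry + 1, min(entry + 1 + horizon, len(prices))):
        r = prices[t] / p0 - 1.0  # return since entry
        if r >= upper:
            return 1
        if r <= -lower:
            return -1
    return 0
```

Because the label depends on which barrier is touched first, it is event-driven: it reflects the realized path of the price after entry rather than a fixed-interval future return.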
Cryptocurrency, Deep Learning, YOLOv8, LSTM, Financial Markets.
Cooperative MARL with Structured Rewards and Explainable Agents for Hyperparameter Tuning
Vedat Dogan, Steven Prestwich, and Barry O’Sullivan, University College Cork, Ireland
Hyperparameter optimization (HPO) remains a central challenge in the literature, particularly when dealing with heterogeneous search spaces and competing optimization objectives. In this work, we present an extended cooperative multi-agent reinforcement learning framework, termed MARL-DA, which builds upon dynamic algorithm configuration (DAC) and multi-agent dynamic algorithm configuration (MA-DAC) by introducing a discrete-continuous agent decomposition with scalarized multi-objective reward components. MARL-DA enables specialized agents, continuous (DDPG) and discrete (DQN), to optimize distinct types of hyperparameters in parallel. The reward function is designed as a scalarized combination of multiple objectives: predictive performance, training efficiency, model complexity, and generalization robustness. We conduct extensive empirical evaluations across six datasets spanning both classification and regression tasks. Results show that MARL-DA consistently outperforms traditional HPO techniques and MARL baselines, while offering interpretable agent behaviors and stable convergence. Explainability tools are integrated to provide insight into agent decisions, coordination dynamics, and reward attribution. This work demonstrates that minor yet structured modifications to cooperative MARL can yield substantial gains in optimization performance and explainability for HPO tasks.
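A scalarized multi-objective reward of the kind this abstract describes is typically a weighted sum in which cost-type objectives enter with a negative sign. The sketch below assumes all four inputs are normalized to [0, 1]; the weight values and the normalization scheme are illustrative assumptions, not the paper's configuration.

```python
def scalarized_reward(perf, train_time, complexity, gen_gap,
                      weights=(0.5, 0.2, 0.15, 0.15)):
    """Scalarize four HPO objectives into a single reward signal.
    perf       : predictive performance (higher is better)
    train_time : normalized training cost (lower is better)
    complexity : normalized model size   (lower is better)
    gen_gap    : normalized train/validation gap (lower is better)
    Weight values are illustrative, not the paper's settings."""
    w_perf, w_time, w_size, w_gen = weights
    return (w_perf * perf
            - w_time * train_time
            - w_size * complexity
            - w_gen * gen_gap)
```

Sharing one scalarized signal is what lets the discrete and continuous agents cooperate: both are credited against the same trade-off between accuracy, cost, size, and robustness rather than optimizing accuracy alone.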
Hyperparameter tuning, multi-agent reinforcement learning, scalarized multi-objective reward, explainability.
How to Make Explainable AI (XAI) More Explainable
Jean-Marie Le Ray, Independent researcher, Rome, Italy
Large language models (LLMs) produce fluent answers, yet most “explanations” remain narrative, post-hoc, and hard to verify. We introduce Explainability-by-Design (EbD), a framework that requires systems to ship executable explanation artifacts alongside natural-language outputs: (1) an explicit plan with pre/post-conditions, (2) a replayable trail of retrieval and decision steps, and (3) fragment-level, versioned evidence signed by cryptographic hashes. EbD revives and operationalizes three historical architectures—SHRDLU-2025 (situated reasoning + state logs), Memex-2025 (replayable research trails), and Xanadu-2025 (fine-grained, versioned transclusion)—in the same spirit as the Pucci-by-AI reimplementation [arXiv:2509.02506], which re-animates a century-old rule-and-trace paradigm for machine translation. Across evidence-backed question answering—and informed by the Pucci-by-AI module for Romance languages—EbD targets three dimensions neglected by prose-only XAI: (i) source fidelity, (ii) replayability, (iii) plan accuracy. This paper is a conceptual and architectural proposal: we do not release a running system, but provide a design blueprint and verifiable-by-construction contracts for implementation and evaluation. Although model-agnostic, EbD is particularly well-suited to SLMs (small language models): their tighter version surfaces and toolchains make seed/plan locking feasible; lower compute footprints allow complete trail capture and deterministic replays; domain-specialized vocabularies improve fragment-level addressability and citation coverage; and on-prem/edge deployment simplifies compliance, provenance control, and evidentiary hashing. The result reframes explanations as first-class, testable products rather than persuasive text, enabling audit, compliance, and pedagogy in high-stakes domains.
Explainability‑by‑Design; XAI; reproducibility; provenance; RAG; fragment‑level citation; Memex; SHRDLU; Xanadu; machine translation; Pucci-by-AI; evaluation metrics.
Design and Development of Verlanguage: A Mobile Application for Real-time Body Language and Emotion Recognition using Google ML Kit and Firebase
Brandon Michael Lim1, Jonathan Thamrun2, 1USA, 2California State Polytechnic University, Pomona, CA 91768
Blind people have a difficult time understanding body language or non-verbal emotions [1]. Kids may struggle to learn body language, especially those with special needs. My app, verLanguage, identifies and solves these problems. Using code, percentages, and cameras, it calculates body language patterns to create an easy-to-use environment. Android Studio is the backbone of the app and houses all of the fine details and code. Google ML Kit is a mobile SDK that uses machine learning and provides the body language detection used to detect someone's emotion [2]. Firebase makes it easy to integrate the app onto different platforms and houses user sign-in information. Problems were fixed by trial and error to see which code/design worked best. To test the app, I wrote 20 inputs and outputs consisting of human faces, pictures, etc., to see if what I wrote initially matched what happened when the app actually ran. The app is quick, free, and efficient, utilizing built-in technology to understand emotions without needing extra equipment or hassle; it all works with just one press of a button.
Emotion Recognition, Machine Learning, Accessibility, Mobile Application.
Compact CNN: Multi-Objective Optimization for Architecture Search
Wassim Kharrat1, Khadija Bousselmi2, and Ichrack Amdouni1, 1University of Manouba, Tunisia, 2University of Savoie Mont Blanc, France
Convolutional Neural Network (CNN) architectures have achieved remarkable success in various image analysis tasks. However, designing these architectures manually remains both labor-intensive and computationally expensive. Neural Architecture Search (NAS) has emerged as a promising approach for automating and optimizing network design. Among NAS methods, gradient-based techniques stand out for their ability to reduce computational costs while maintaining competitive performance. Nevertheless, the architectures they produce can still be demanding in terms of model size and inference time. To address this challenge, we propose a comprehensive image-analysis pipeline that combines the PC-DARTS algorithm with post-training 16-bit quantization and structured pruning. Experimental results show that our pipeline achieves an accuracy of 99.10% and an Intersection over Union (IoU) of 72.03%, while reducing the model size by up to 54%, making it well-suited for deployment in resource-constrained environments.
Neural Architecture Search, Convolutional Neural Networks, Compression, Quantization, Pruning, Segmentation.
EcoBin: A Solar-powered Self-cleaning and Deodorizing Trash Bin with Rainwater Collection using AI and IoT System
Isaac Liu1,2, 1USA, 2California State Polytechnic University, Pomona, CA 91768
Design and Implementation of Heritage Link: An AI Integrated Mobile Application for Heritage Education, Community Engagement, and Cultural Preservation
Mai Tung1 and Jason Moya2, 1Switzerland, 2California State Polytechnic University, USA
Heritage Link is an app that supports users' desire for a platform for discussing, uploading, and learning more about ancient heritages [1]. Designed and built entirely using Flutter for UI development and Firebase for database management, Heritage Link offers a socially impactful solution for those interested in heritage [2][3]. As ancient heritages are not mainstream and are considered niche communities, this app allows those who were hesitant to learn more about their local heritage, and heritages throughout the entire world, to do so by utilizing the AI scan feature in Heritage Link. Using the OpenAI API allows Heritage Link to use artificial intelligence so that all users can upload images of any ancient heritage near them or found online and receive meaningful responses filled with relevant information [4]. As a way to give back to the ancient heritage community, there is a dedicated page for those interested in donating to heritage preservation organizations.
Cultural Heritage, Mobile Application, Artificial Intelligence, Community Engagement.
Software Engineering of AI-IoT Braking Systems: Safety and Performance in Fleets
Suryakant Kaushik, Texas A&M University, USA
This paper presents an IoT-enabled AI framework for intelligent braking systems in commercial fleets, focusing on software engineering challenges of integration, reliability, and predictive decision-making. Traditional braking systems are reactive and inefficient, leading to high downtime, energy loss, and safety risks. The proposed architecture leverages distributed IoT sensor networks, edge-cloud data fusion, and machine learning algorithms to enable predictive braking, adaptive driver support, and condition-based maintenance. I conduct a comparative analysis between traditional and AI-enhanced braking systems, showing reductions of up to 75% in collision incidents and 30% in unplanned downtime. Furthermore, I synthesize results from case studies across logistics, passenger transport, and manufacturing to validate real-world applicability. The contribution of this work lies in formalizing a software-centric IoT framework, supported by performance analysis, to advance safe and sustainable fleet operations.
AI, IoT, Commercial Fleet Management, Predictive Braking, Software Engineering
A Mobile App for Tracking Psychological Mood Changes and Providing E-Therapy using Natural Language Processing and GPT-3
Zachary Zhang1 and Austin Amakye Ansah2, 1USA, 2California State Polytechnic University, Pomona, CA 91768
Creative professionals face fragmentation across multiple platforms for portfolio sharing, social networking, and freelance project management, leading to inefficient workflows and missed opportunities. DesignHub addresses this by providing a comprehensive mobile social networking platform unifying portfolio sharing, community engagement, and task management through Flutter cross-platform development and Firebase Backend-as-a-Service architecture [1]. The system implements three core components: Firebase Authentication for secure user management, real-time design portfolio sharing with social engagement features through Firestore synchronization, and comprehensive task management supporting both paid and volunteer opportunities. Key challenges included maintaining real-time data consistency, optimizing cross-platform performance, and ensuring authentication security, addressed through Firestore real-time listeners, cached network images, and granular security rules [2]. Experimental validation demonstrated sub-300ms synchronization latency on modern networks (142ms WiFi, 298ms 4G), confirming effective real-time data propagation for social interactions. The platform validates Flutter-Firebase methodologies for social collaboration applications, achieving 60-70% development time reduction while maintaining native-quality user experiences, offering creative professionals an integrated ecosystem for portfolio showcase and freelance collaboration.
Cross-Platform Development, Firebase Integration, Social Networking, Portfolio Management.
AI Driven Solutions in Cybersecurity and the Rise of Biometric Authentication
Mustapha Zeroual, Abderrazek Karim, Youssef Baddi, Faysal Bensalah, Chouaib Doukkali University, Morocco
Today's increasingly dangerous threat landscape shows just how important cybersecurity is, especially given recent notorious attacks on smartphones that steal user data and compromise privacy in our digital age. In this article, we examine the complex cybersecurity scenario and the role artificial intelligence (AI) plays in providing creative solutions to fight these crimes. We begin with an introduction to cryptography and its vital role in securing communication and information over the network. We then cover basic ideas of identity and access management, such as authentication and authorization, along with the mechanisms by which such security can be deployed, emphasizing that biometric security systems are increasingly used for authentication. The study illustrates the benefits and limitations of biometric authentication through practical cases where biometrics can operate. It then discusses database security, identifying approaches to protect confidential information and the impact of data breaches on organizations, and argues for a holistic cybersecurity approach that combines new techniques for distributed systems and grid security. In addition, the article presents information hiding and watermarking as tools for securing intellectual property in the digital spectrum, and illustrates the importance of intrusion detection systems (IDS) in discovering and addressing new threats. With an emphasis on mobile computing, it covers additional network security issues surrounding secure digital communications, followed by trusted computing and its role in protecting enterprise-controlled environments. Finally, the transformative capacity of blockchain technology for securing and protecting data is scrutinized, revealing its uses in cybersecurity.
The article is intended to give a high-level outline of modern trends and security practices, and of how AI is helping change security practices across the world against evolving cyber threats that grow more digitally complex with each passing day.
Cybersecurity, Cryptography, Biometric Authentication, Artificial Intelligence (AI), Blockchain.
Big Data in Cybersecurity: Leveraging AI and SDN for Enhanced Threat Intelligence and Network Optimization
Abderrazek Karim, Mustapha Zeroual, Youssef Baddi, Faysal Bensalah, Chouaib Doukkali University, Morocco
As the landscape of modern cyber threats changes, Big Data analytics can be integrated into cybersecurity frameworks to help address them. This article is a preliminary study of the interaction of Big Data with artificial intelligence (AI) and Software Defined Networks (SDN) for improving network security and yielding better threat intelligence. We consider the tool components of the Big Data ecosystem, key theory and algorithms, and the need for data visualization and mining approaches to detect threats. We also address how sensor networks and social networks contribute data sources for cybersecurity applications. We demonstrate how the performance characterization, evaluation, and optimization practices underpinning our scientific results underscore the importance of real-time data stream management in addressing cyber risks, and suggest that evaluating systems without taking their timing profile into account may lead to suboptimal and biased conclusions. This points to a need for both heterogeneous data management and well-rounded analytics to inform which security measures should be taken. We also highlight Big Data analytics use cases in cybersecurity, focusing on applications with significant outcomes for business processes and threat detection, using case studies. In conclusion, this article strongly recommends that cybersecurity professionals work hand in glove with technologists and data scientists to leverage the power of Big Data and AI to fortify their digital ecosystems.
Big Data, Cybersecurity, Artificial Intelligence (AI), Software Defined Networks (SDN), Threat Intelligence.
Mycroft - Retrieval Augmented Generation for SDK Documentation
Diego Costa, Gabriel Matos, Gilson Russo, Leon Barroso, and Erick Bezerra, SIDIA Manaus - AM, Brazil
Information retrieval plays an important role in everyday tasks, especially when it comes to documentation. Retrieving information about private documentation used to build other software is very challenging due to its absence on the internet, meaning there is no information about it beyond its own documentation. Due to concerns about confidential data, using external proprietary systems is prohibited. Motivated by this, in this study, we present Mycroft, a retrieval system that leverages the Retrieval Augmented Generation technique to find a feasible approach that improves search and information retrieval requested by users about the documentation. To implement this system, a dataset of questions and answers about the documentation was generated for evaluation. The system was developed on-premise using open-source Large Language Models and evaluated using Natural Language Processing metrics and human evaluation to validate the generated answers. After evaluating the results, we concluded that the proposed retrieval system had reasonable performance in answering user queries and received good human evaluation, being considered useful.
Retrieval Augmented Generation, Large Language Models, Software Documentation.
Design and Development of Neofocus: A Gamified Productivity Application Leveraging Gacha-inspired Reward Systems to Enhance Focus and Motivation
Lawrence Wen Lam1, Success Godday2, 1USA, 2California State Polytechnic University, Pomona, CA 91768
NeoFocus is an application that aims to combat distractions on the internet. It was developed as an experimental implementation based on the developer's personal experiences. Although games, particularly gacha games, are often criticized for their addictive nature, NeoFocus uses the same mechanics with virtuous intentions. With the promise of undecided rewards, this application intends to keep users hooked, in a productive way. Users of this application will inevitably associate hard work with a dopamine reward, thereby fueling greater focus. Although the main appeal of the app is its gamification, it also encourages friendly competition and community interaction through its blogs and challenges features [1]. These social features aim to promote community interaction, which will increase the user's consistency.
Gamification, Productivity, Reward Systems, Digital Wellbeing.
Thermal Imaging-based Defects Prediction in High-pressure Die Casting Using Hybrid Neural Networks and Fuzzy Cognitive Maps
T. Michno, R. Holom, S. Schmalzer, P. Meyer-Heye, G. Scampone, E. Riegler, M. Hartmann, U. Repansek, N. Košir, P. Sifrer, and K. Poczeta, Austrian Institute of Technology GmbH, Austria
Producing defect-free, lightweight, high-performance, and complex-geometry metal components is a highly challenging task. In this paper, we focus on High Pressure Die Casting (HPDC), proposing a hybrid AI model for non-destructive, in-line, and non-process-interrupting defect prediction using thermal images. A deep neural network model is used to extract features, which are then classified by a Fuzzy Cognitive Map (FCM). Experimental results show that the method improves prediction performance. The main contributions of this research include: (i) a novel hybrid model architecture for processing thermal images, (ii) a feature extractor for an FCM-based classifier, (iii) extension of FCM via three clustering techniques to enhance classification accuracy, (iv) a modular design, allowing easy addition of other data sources and classes without retraining, (v) a thorough evaluation through model comparisons and an ablation study, and (vi) to the best of our knowledge, the first usage of FCM for this problem.
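For readers unfamiliar with Fuzzy Cognitive Maps, one synchronous update step of the textbook FCM formulation is sketched below: each concept's next activation is a sigmoid-squashed sum of its own activation and the weighted activations of its neighbors. This is the generic rule only; the paper's clustering-extended classifier may use a different variant.

```python
import math

def sigmoid(x):
    """Squashing function mapping activations into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def fcm_step(activations, weights):
    """One synchronous FCM update:
        A_i' = f(A_i + sum_{j != i} W[j][i] * A_j)
    where weights[j][i] is the causal influence of concept j on i.
    Textbook rule; the paper's extended variant may differ."""
    n = len(activations)
    return [sigmoid(activations[i] +
                    sum(weights[j][i] * activations[j]
                        for j in range(n) if j != i))
            for i in range(n)]
```

Iterating this step until activations stabilize gives the map's inference; in a classifier setup, the converged activations of designated output concepts serve as class scores.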
HPDC, defect detection, Fuzzy Cognitive Maps, Thermal imaging, Hybrid AI, Industry 4.0.
Overview and Prospects of Using Integer Surrogate Keys for Data Warehouse Performance Optimization
Sviatoslav Stumpf and Vladislav Povyshev, ITMO University, Saint Petersburg, Russia
The paper examines methods for optimizing data warehouse performance using integer-based datetime labels. It is shown that replacing standard DATE and TIMESTAMP types with 32- and 64-bit integer formats reduces storage requirements by 30–60% and speeds up query execution by 25–40%. The paper presents indexing, aggregation, compression, and batching algorithms demonstrating up to an eightfold increase in throughput. Practical examples from finance, telecommunications, IoT, and scientific research confirm the efficiency and versatility of the proposed approach.
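The integer datetime labels the abstract describes can take several common shapes; the sketch below shows three typical ones (epoch seconds for a 32-bit column, epoch microseconds for a 64-bit column, and a sortable YYYYMMDD key for DATE). These encodings are standard practice offered as illustration; the paper's exact formats may differ.

```python
from datetime import datetime, timezone

def to_epoch_seconds(ts):
    """Second-precision surrogate for TIMESTAMP; fits a signed
    32-bit integer until the year 2038."""
    return int(ts.timestamp())

def to_epoch_micros(ts):
    """Microsecond-precision surrogate for TIMESTAMP; needs 64 bits."""
    return int(ts.timestamp() * 1_000_000)

def date_key(ts):
    """YYYYMMDD integer surrogate for DATE; numeric order matches
    chronological order, so range scans become integer comparisons."""
    return ts.year * 10_000 + ts.month * 100 + ts.day
```

Because these keys preserve chronological ordering as plain integer ordering, indexes, range predicates, and aggregations operate on fixed-width integers instead of date types, which is the source of the storage and query-time savings the paper reports.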
Integer labels, time series, optimization, performance, data warehouse, indexing, aggregation
Beyond Traditional Retrieval Systems: Leveraging AI With Documents, Knowledge Graphs And Databases
Antony Seabra, Claudio Cavalcante, and Sergio Lifschitz, PUC-Rio - Pontifical Catholic University of Rio de Janeiro, Brazil
This study explores techniques for retrieving data from documents, knowledge graphs, and databases using Large Language Models (LLMs), specifically leveraging OpenAI's GPT models as foundational frameworks for embeddings and conversational models in question-answering (QA) systems. Our research focuses on the utilization of Prompt Engineering, Retrieval-Augmented Generation (RAG), and Text-to-SQL techniques to effectively extract information from these diverse data sources without the need for model retraining. A key aspect of our study is the emphasis on explainability, demonstrating how these techniques can reveal the rationale behind retrieved information and enhance the understanding of results. We highlight the challenges encountered in specific use cases during our tests and present effective strategies and solutions to overcome them. Our findings demonstrate the potential of LLMs to surpass traditional search and retrieval systems, paving the way for more efficient and comprehensible information systems.
Information Retrieval, AI, Explainability, Documents, Knowledge Graphs, Databases, Recommendation System