Zero-Trust Architecture (ZTA) for IoT-Based Environments

The Internet of Things (IoT) has become part of nearly every aspect of our lives (e.g., healthcare, smart cars, smart home appliances, smart cities), and the number of connected IoT devices is expected to reach around 75 billion by 2025. However, security is one of the major problems in IoT, and for a long time manufacturers did not consider security in their designs. Furthermore, IoT devices have limited computational power and are mostly battery operated, so heavy security controls cannot run on them. Hence, many IoT devices remain vulnerable to cyberattacks.

Building the Cybersecurity Pipeline: K12 Cybersecurity Credit Transfer Agreement Development

This poster will focus on the NCAE-C Cybersecurity Credit Transfer Agreement (CTA) Task, which is part of the NCAE-C Careers Preparation National Center. The task aims to address the challenge of meeting the growing demand for cybersecurity professionals by establishing a database of credit transfer agreements among NCAE-C designated CAE cybersecurity programs and K-12 schools.

Comprehensive Analysis of IoT Data Using Explainable AI for Intrusion Detection

Recently, machine learning (ML) has been used extensively for intrusion detection systems (IDS) and has proved very effective in environments such as the cloud and IoT. However, due to their complexity, the decisions made by ML-based IDS are hard to analyze, understand, and interpret: even though these systems are very effective, they are becoming less transparent. In this paper, we provide an explanation and analysis of ML-based IDS using the SHapley Additive exPlanations (SHAP) explainability technique.
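
As a rough illustration of the technique (not the authors' actual pipeline), the sketch below applies SHAP's TreeExplainer to a tree-based classifier trained on synthetic stand-in data; the feature names, model choice, and dataset are assumptions for demonstration only.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for network-flow features; a real IDS study would use
# traffic datasets such as NSL-KDD or CICIDS (assumption, not the paper's data).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = ["duration", "src_bytes", "dst_bytes",
                 "pkt_rate", "flag_count", "dst_port"]  # hypothetical names

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Rank features by mean absolute contribution to the "attack" decision.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name:12s} {score:.4f}")
```

The ranking gives a global view of which inputs drive the classifier's decisions; per-sample Shapley values can likewise explain why an individual flow was flagged as malicious.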

Impact Oriented Programming Prototype

Debugging helps identify and address vulnerabilities in code, yet programmers often debug their own code inefficiently, relying on print statements and debuggers. The time lost can be significantly reduced if programmers can see the impact of their code in real time. Our team worked with Staris Labs to deliver a proof of concept that shows this impact using techniques such as fuzzing and static analysis, and we were able to verify the presence of known vulnerabilities in code.
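
As a hedged illustration of the fuzzing side of this approach (not the actual Staris Labs prototype), the sketch below uses Google's Atheris fuzzer to surface a planted out-of-bounds read in a hypothetical Python parser.

```python
import sys
import atheris

def parse_record(data: bytes) -> int:
    """Hypothetical parser with a planted flaw: it trusts a length byte."""
    if len(data) < 2:
        return 0
    length = data[0]
    # Bug: assumes the payload really contains `length` bytes after the header.
    return data[1 + length]  # IndexError when the input is shorter than claimed

def test_one_input(data: bytes) -> None:
    # Uncaught exceptions raised here are reported by Atheris as findings.
    parse_record(data)

atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
```

Running the harness quickly produces a crashing input, which is the kind of concrete, real-time feedback on code impact the prototype aims to provide.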

Nleak: Automatic Memory Leak Debugging in Node.js

Memory leaks may cause a system to slow down or crash. If an attacker can intentionally trigger a memory leak, the attacker may be able to launch a denial-of-service attack or exploit other unexpected program behavior. JavaScript memory leaks are tricky and often time-consuming to identify and fix, because JavaScript is dynamically typed and its leaks are fundamentally different from those in traditional C, C++, and Java programs. Effectively identifying and fixing memory leaks is a daunting task even for experienced developers.

Impact of Adversarial Patches on Object Detection with YOLOv7

With the increased use of machine learning models, there is a need to understand how they can be maliciously targeted. Understanding how these attacks are carried out helps in hardening models so that it is harder for attackers to evade detection. We want to better understand object detection, its underlying algorithms, and the different perturbation approaches that can be used to fool these models. To this end, we document our findings as a review of existing literature and open-source repositories related to computer vision and object detection.
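
As a model-agnostic sketch of the patch-application step only (not drawn from any specific repository reviewed here), the following code pastes an adversarial patch onto an image array; optimizing the patch and evaluating it against YOLOv7 are outside the scope of this snippet.

```python
import numpy as np

def apply_patch(image: np.ndarray, patch: np.ndarray, x: int, y: int) -> np.ndarray:
    """Overwrite a region of `image` (H, W, 3) with `patch` (h, w, 3)."""
    patched = image.copy()
    h, w = patch.shape[:2]
    patched[y:y + h, x:x + w] = patch
    return patched

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(640, 640, 3), dtype=np.uint8)  # stand-in frame
patch = rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)  # untrained patch

adv_image = apply_patch(image, patch, x=270, y=270)
# In an actual experiment, `adv_image` would be fed to YOLOv7 and its detection
# confidences compared against those for the clean image.
```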

Mitigating the Impact of Object Overlapping on YOLOv4 Object Detection

Object detection algorithms like You Only Look Once (YOLOv4) can face challenges when multiple objects overlap within the same grid cell. In this scenario, accurately detecting and classifying each object becomes difficult. Data augmentation techniques can address this issue and improve the accuracy of YOLOv4. More diverse training data can be created by artificially generating images with non-overlapping objects through random shifting, rotating, resizing, color jittering, and flipping.
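
A minimal sketch of such an augmentation pipeline, assuming torchvision as the tooling (the poster's actual setup may differ), is shown below.

```python
import numpy as np
from PIL import Image
import torchvision.transforms as T

# Augmentations covering the operations listed above.
augment = T.Compose([
    T.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.8, 1.2)),  # rotate/shift/resize
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    T.RandomHorizontalFlip(p=0.5),
])

# Synthetic stand-in image; a real pipeline would load dataset frames.
image = Image.fromarray(np.random.randint(0, 256, size=(416, 416, 3), dtype=np.uint8))

variants = [augment(image) for _ in range(8)]  # eight augmented training copies
# Note: in a detection setting the YOLO bounding-box labels must be transformed
# consistently with the images (e.g., torchvision.transforms.v2 or albumentations
# can apply the same geometric transforms to boxes).
```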

Cybersecurity Playable Case Studies

Playable Case Studies (PCSs) are interactive simulations that allow students to "play" through an authentic "case study" (i.e., scenario) as a member of a professional team. They include (a) an immersive, simulated online environment, and (b) accompanying in-class activities and discussions facilitated by a teacher to provide educational scaffolding and metacognition. PCSs are designed to be authentic and feel "real" by incorporating the "This is Not a Game" (TINAG) ethos from Alternate Reality Games. This poster will graphically highlight the core elements that make up a PCS.

Making Smart Contracts Predict and Scale

Artificial intelligence algorithms predict the future based on trained models and datasets, but reliable prediction requires a tamper-resistant model with immutable data. Blockchain technology provides trusted output through consensus-based transactions and an immutable distributed ledger, so blockchain can help AI produce immutable models for trustworthy prediction. However, most smart-contract languages used to build blockchain applications do not support a floating-point data type, which limits the computations needed for classification and affects prediction accuracy.
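
One common workaround, sketched below in Python purely for illustration (smart-contract platforms and their conventions differ), is to emulate decimals with scaled integers, i.e., fixed-point arithmetic.

```python
# Emulate decimal values with integers scaled by 10**18, a convention often
# used on integer-only smart-contract platforms such as Solidity/EVM.
SCALE = 10**18  # 18 decimal places

def to_fixed(x: float) -> int:
    """Convert a decimal value to its fixed-point integer representation."""
    return int(round(x * SCALE))

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values and rescale to keep 18 decimal places."""
    return a * b // SCALE

# Hypothetical example: a model weight times an input feature, integers only.
weight = to_fixed(0.75)
feature = to_fixed(2.4)
product = fixed_mul(weight, feature)
print(product / SCALE)  # ~1.8, reconstructed off-chain for readability
```

The integer division discards low-order digits, which is exactly the kind of rounding that can accumulate and affect classification accuracy in on-chain AI computations.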
