About the program

SecureAI is a cybersecurity and privacy training program that equips AI professionals and researchers with the knowledge and skills to build AI systems that are technically sound and secure. The program is a series of 12 engaging, hands-on synchronous online sessions/workshops held every two weeks over a period of six months. Sessions are held in the evening (6:00 PM CT) to accommodate participants with busy workday schedules. SecureAI follows an experiential training approach delivered by experts in AI security, and each workshop includes practical lab activities.

The program is designed to provide participants with the fundamental concepts that define the intersection of AI, cybersecurity, and privacy. Participants will earn a certificate at the end of the program.

01

Fundamentals and Threats

This module includes two workshops and provides participants with fundamental concepts that define the intersection of AI, cybersecurity, and privacy.

02

Adversarial Attacks and Robustness

This module explores the wide range of adversarial attacks on AI models and the factors that influence the models' vulnerabilities and strengths.

03

Privacy, Ethics, and Trust

This module includes three workshops that explore the ethical considerations surrounding the use of AI, such as bias and fairness, the importance of protecting privacy, and the promotion of trust and transparency in AI systems.

04

Secure Development and Data Governance

This module, comprising three workshops, addresses the importance of secure development and data governance in AI projects and introduces practices for deployment, risk analysis, and compliance with regulatory and ethical requirements.

05

Case Studies

This module consists of two workshops (Workshops 11 and 12) dedicated to case studies: they analyze real-world examples of AI systems and outline practices and challenges with respect to cybersecurity and privacy.

Speakers

Jaron Mink

University of Illinois at Urbana-Champaign

Specializes in the interdisciplinary fields of usable security, machine learning, and system security.

Blaine Hoak

University of Wisconsin-Madison

Specializes in evaluating and advancing the security of machine learning models, with a primary focus on adversarial robustness.

Neophytos Christou

Brown University

Works on uncovering security vulnerabilities in deep-learning frameworks, mitigating deserialization vulnerabilities in PHP, and hardening software against novel attacks.

Ryan Sheatsley

University of Wisconsin-Madison

Works on investigating the risks of deploying machine learning systems in security-centric domains, assessing their robustness at scale, and applying them to novel security challenges.

Ahmed Abusnaina, PhD

Meta

Work revolves around user understanding through social graphs, pattern recognition, and robust signal processing.

Muhammad Saad, PhD

PayPal

Work and experience focus on distributed systems security, web and network security, privacy-enhancing technologies, and social engineering attacks.

Jon McLachlan

YSecurity

Over 20 years of experience in security, B2B, B2C, deep tech, consumer, enterprise hardware, and FinTech.

James Davis, PhD

Purdue University

Specializes in the engineering of software-intensive computing systems, focusing on failure modes and mitigation strategies.

Kai Yue

North Carolina State University

Experience in federated learning, artificial intelligence, and video coding.

Yasser Shoukry, PhD

University of California, Irvine

Specializes in resilience, safety, security and privacy of artificial intelligence (AI), controlled cyber-physical systems (CPS), internet-of-things (IoT), and robotic systems.

Mohammed Abuhamad, PhD

Loyola University Chicago

Specializes in AI/deep-learning-based information security, especially software and mobile/IoT security.

Eric Chan-Tin, PhD

Loyola University Chicago

Experience in network security, computer security, distributed systems, peer-to-peer networks, anonymity, and privacy.

Why participate in the program

The expected outcomes of the program include:

01
Increased knowledge and enhanced skills as participants gain a deeper understanding of the security, privacy, and ethical aspects associated with various lifecycle stages of building AI systems.
02
Improved cybersecurity measures and infrastructure as participants apply the skills learned to strengthen cybersecurity within their organizations by identifying vulnerabilities, implementing security measures, and developing policies and plans to mitigate AI-related risks.
03
A certificate that recognizes your cybersecurity and privacy skills in AI and demonstrates your commitment to professional development and ethical practices in AI development.

Workshops

Workshop 1: Introduction and Fundamentals

Wednesday, May 1: This workshop introduces participants to the key concepts of AI, cybersecurity, and privacy while highlighting their importance and creating awareness of potential risks and ethical considerations. It offers a high-level overview intended to spark curiosity for deeper exploration in subsequent workshops. The panel will discuss examples of how different industries use AI and the potential biases and risks associated with it.

Workshop 2: AI and Threat Models

Wednesday, May 15: This workshop highlights the threat landscape in AI systems and the concept of adversarial attacks. It builds on the previous workshop and presents real-world examples to emphasize the importance of addressing these threats. The key concepts covered include threat models, adversarial attacks, model robustness, and defenses.

Workshop 3: Adversarial Attacks

Wednesday, May 29: This workshop provides a broad understanding of adversarial attacks in AI, focusing on the different types of attacks, their methods, and their potential impact on AI systems. It delves into different types of adversarial attacks, such as evasion, poisoning, and model inversion, under various threat models (e.g., white-box and black-box settings). Moreover, it addresses aspects such as transferability, single-class attacks, and targeted attacks.
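
To make the evasion setting concrete, here is a minimal white-box sketch of the Fast Gradient Sign Method (FGSM) in PyTorch; the classifier `model` and the labeled batch `(x, y)` are assumed placeholders, and the workshop's actual lab materials may differ.

```python
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the Fast Gradient Sign Method (evasion attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded per pixel by epsilon;
    # clamp assumes image-like inputs in [0, 1].
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1)
    return x_adv.detach()
```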

Workshop 4: Robustness and Resilience

Wednesday, June 12: This workshop provides a comprehensive understanding of the key concepts related to robustness and resilience in AI models. These concepts include general and adversarial robustness (i.e., the ability of the model to perform consistently and accurately in the presence of uncertainties and perturbations), domain adaptation and transfer learning, model generalization, and fault tolerance. These concepts help participants understand the challenges facing AI systems and learn strategies to improve their stability, reliability, and performance.
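
As an illustration of a simple robustness probe (a sketch, not the workshop's lab code), the snippet below compares a model's clean accuracy with its accuracy under Gaussian input noise; `model` and `loader` are assumed PyTorch placeholders with inputs in [0, 1].

```python
import torch

@torch.no_grad()
def accuracy_under_noise(model, loader, sigma=0.1):
    """Compare clean accuracy with accuracy under Gaussian input perturbations."""
    clean = noisy = total = 0
    model.eval()
    for x, y in loader:
        clean += (model(x).argmax(dim=1) == y).sum().item()
        x_pert = (x + sigma * torch.randn_like(x)).clamp(0, 1)
        noisy += (model(x_pert).argmax(dim=1) == y).sum().item()
        total += y.numel()
    # A large gap between the two rates signals brittleness to perturbations.
    return clean / total, noisy / total
```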

Workshop 5: AI and Privacy: Differential Privacy and Federated Learning

Wednesday, June 26: This workshop introduces key concepts of privacy in the context of AI and explores privacy-preserving methods around differential privacy and federated learning. The workshop discusses challenges and methods to preserve privacy while leveraging sensitive data for AI model training. This includes understanding potential privacy risks caused by either unintended memorization of sensitive data or adversarial means (e.g., model inversion and inference attacks). The discussion also includes describing the privacy budget and trade-offs between privacy and utility. The panel will discuss the benefits, challenges, and potential risks in federated learning and decentralized data settings.
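
For a concrete feel for the privacy budget, here is a minimal sketch of the Laplace mechanism, a standard way to answer a counting query under epsilon-differential privacy; the function name and defaults are illustrative, not program material.

```python
import numpy as np

def laplace_count(true_count, epsilon=1.0, sensitivity=1.0):
    """Release a counting query under epsilon-differential privacy."""
    # Laplace noise with scale sensitivity/epsilon satisfies epsilon-DP for a
    # query whose output changes by at most `sensitivity` when one record
    # changes; a smaller epsilon (tighter budget) means a noisier answer.
    return true_count + np.random.laplace(scale=sensitivity / epsilon)
```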

Workshop 6: Ethics in AI: Bias and Fairness

Wednesday, July 10: The goal of this workshop is to explore ethical considerations in AI, specifically focusing on the concepts of bias and fairness. Participants will gain an understanding of the challenges associated with bias in AI systems and the importance of promoting fairness and mitigating discrimination. The panel discussion will focus on the responsibility of AI practitioners to address bias (i.e., different types of biases such as algorithmic bias, data bias, and societal bias) and to promote fairness in AI systems. Using real-world examples, e.g., a biased hiring system, participants will study the impact of bias and approaches to mitigating it, including data pre-processing techniques, algorithmic adjustments, and fairness-aware model training.
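
As a hedged illustration of how such a fairness audit can start, the sketch below computes the demographic-parity gap between two groups; the function name and toy data are hypothetical.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    # A gap near 0 suggests decisions are independent of group membership.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Example: a toy hiring model that accepts 60% of group 0 but only 20% of group 1.
print(demographic_parity_gap([1, 1, 1, 0, 0, 1, 0, 0, 0, 0],
                             [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]))  # ~0.4
```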

Workshop 7: Trust in AI: Transparency, Explainability, and Interpretability

Wednesday, July 24: This workshop explores the importance of trust in AI systems and introduces participants to the concepts of transparency, explainability, and interpretability. Participants will understand the role of these concepts in building trust, ensuring accountability, and addressing the black-box nature of AI models. The key concepts addressed in this workshop are: 1) trust in AI and its significance in fostering user and societal acceptance, 2) explainability and interpretability in AI and the ability to understand and trace the decision-making process of AI models, and 3) techniques for explainability and interpretability, including examples of post-hoc techniques and inherently interpretable models.
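
One widely used post-hoc technique is permutation importance: shuffle one feature and measure how much a performance metric drops. The sketch below assumes a scikit-learn-style `model.predict` and a `metric(y_true, y_pred)` callable; all names are illustrative.

```python
import numpy as np

def permutation_importance(model, X, y, metric):
    """Post-hoc explanation: metric drop when each feature is shuffled."""
    rng = np.random.default_rng(0)
    base = metric(y, model.predict(X))
    drops = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])  # break feature j's link to the target
        drops.append(base - metric(y, model.predict(X_perm)))
    # Larger drops mean the model leans more heavily on that feature.
    return np.array(drops)
```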

Workshop 8: AI Development and Security

Wednesday, August 7: This workshop introduces the concepts of DevOps (Development and Operations) and MLOps (Machine Learning Operations) in the context of AI security. Participants will understand the importance of integrating security practices into the AI development lifecycle and learn how DevOps and MLOps can enhance the security and reliability of AI systems. The key concepts covered in this workshop are: 1) DevOps practices in the context of AI development to improve collaboration, efficiency, and quality, 2) Integration of security into the DevOps workflow, 3) MLOps practices and tools to manage the lifecycle of machine learning models, including development, deployment, and monitoring, 4) Security considerations during the model development phase, 5) Regulatory requirements and industry standards, such as GDPR, HIPAA, ISO 27001, and NIST Cybersecurity Framework.
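
As one small example of a security gate in an MLOps pipeline (an illustrative sketch, not a prescribed tool), a deployment step can verify a model artifact's hash against the value recorded in the model registry before loading it:

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Refuse to deploy a model artifact whose hash doesn't match the registry."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # Deployment should abort on mismatch: the weights may have been tampered with.
    return digest == expected_sha256
```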

Workshop 9: AI and Data Governance: Regulations and Standards

Wednesday, August 21: This workshop introduces key regulations and standards related to AI and data governance. Participants will understand the legal and ethical considerations involved in AI development and deployment and the importance of compliance with regulations and adherence to industry standards. The concepts covered are: 1) Introduction to data governance in the context of AI, including data collection, storage, processing, and sharing, and the importance of ensuring the privacy, security, and ethical use of data in AI systems. 2) Introduction to legal (e.g., GDPR, CCPA, and the EU AI Act) and ethical frameworks around AI systems. 3) Data privacy and protection principles, e.g., consent, purpose limitation, and the right to erasure, and methods for data protection, e.g., anonymization, pseudonymization, and encryption. 4) Responsible AI and impact assessment.
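
To illustrate the distinction between pseudonymization and anonymization in code (a sketch under assumed names, not legal guidance), a direct identifier can be replaced with a keyed hash; whoever holds the key can still link records, so the result remains personal data under GDPR:

```python
import hashlib
import hmac

def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (pseudonymization)."""
    # Records can still be linked by re-deriving the pseudonym with the key,
    # so unlike anonymized data, the output remains personal data under GDPR.
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()
```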

Workshop 10: Secure Deployment and Operation of AI Systems

Wednesday, September 4: This workshop highlights the importance of secure deployment and operation of AI systems and introduces key concepts and best practices to ensure the security and resilience of AI systems throughout their lifecycle. The emphasis in this workshop is on the secure deployment of AI models and the importance of collaboration between development, operations, and security teams during the operational stage of the system lifecycle to maintain system integrity.

Workshop 11: Case Study (I)

Wednesday, September 18: This workshop analyzes real-world cases and outlines practices and challenges with respect to cybersecurity and privacy in AI systems.

Workshop 12: Case Study (II)

Wednesday, October 2: This workshop analyzes real-world cases and outlines practices and challenges with respect to cybersecurity and privacy in AI systems.

Frequently Asked Questions

Is there a fee to participate?
No, this is a free program.

Who is the program designed for?
SecureAI is designed specifically for professionals and researchers who are engaged in the field of Artificial Intelligence (AI).

Do I need prior knowledge of AI?
Yes; given our focus on enhancing cybersecurity and privacy in AI systems, the program is best suited for AI professionals and researchers who already have a foundational understanding of AI.

Will I receive a certificate?
Upon completion of this program, participants will receive a certificate that recognizes their understanding and application of the fundamental concepts at the intersection of AI, cybersecurity, and privacy. To be eligible for this certificate, attendees must attend and complete at least 10 of the 12 workshops.

How is each workshop structured?
Each workshop is divided into two main parts, totaling 2 hours. The first hour features a panel session in which speakers, experts in their respective fields, share their insights and knowledge. The second hour is dedicated to a lab session, emphasizing hands-on experience with AI models, allowing participants to apply the concepts discussed during the panel.

Join Us

Take your first step!

Transform Your AI Expertise: Gain Practical Skills in Secure and Ethical AI Development. Fill out the form to enroll.

Sponsors

Contact Us

Our Address

Loyola Center for Cybersecurity
306 Doyle Center, Lake Shore Campus
1052 W Loyola Ave., Chicago, IL 60626

Email Us

Dr. Mohammed Abuhamad: mabuhamad AT luc DOT edu
Dr. Eric Chan-Tin: dchantin AT luc DOT edu