About the program

SecureAI is a cybersecurity and privacy training program that equips AI professionals and researchers with the knowledge and skills to build AI systems that are technically sound and secure. The program is a series of six engaging, hands-on synchronous online workshops held weekly over six weeks. Sessions take place in the evening (6:00 PM CT) to accommodate participants with busy workday schedules. SecureAI follows an experiential training approach led by experts in AI security, and each workshop includes practical lab activities.

The program is designed to provide participants with the fundamental concepts that define the intersection of AI, cybersecurity, and privacy. Participants will earn a certificate at the end of the program.

01

Security Threats and Robustness in AI

This module provides participants with fundamental concepts that define the intersection of AI, cybersecurity, and privacy. It explores the wide range of attacks and the influential factors that affect the vulnerabilities and strengths of AI models.

02

Privacy and Security in AI

This module introduces participants to the key concepts of privacy and security in AI systems. It covers privacy-preserving methods around differential privacy and federated learning.

03

Ethics and Fairness in AI

This module explores ethical considerations in AI, specifically focusing on the concepts of bias and fairness. Participants will gain an understanding of the challenges associated with bias in AI systems and the importance of promoting fairness and mitigating discrimination.

04

Trust and Transparency in AI

This module explores the importance of trust in AI systems and introduces participants to the concepts of transparency, explainability, and interpretability. Participants will understand the role of these concepts in building trust, ensuring accountability, and addressing the black-box nature of AI models.

05

AI Development and Operations

This module introduces the concepts of DevSecOps (Development, Security, and Operations) and MLSecOps (Machine Learning Security Operations) in the context of AI security. Participants will understand the importance of integrating security practices into the AI development lifecycle and learn how DevSecOps and MLSecOps can enhance the security and reliability of AI systems.

06

Case Studies and Best Practices

This module analyzes real-world cases and outlines practices and challenges with respect to cybersecurity and privacy in AI systems. Participants will learn from case studies and best practices in AI security and privacy.

Why participate in the program

The expected outcomes of the program include:

01
Increased knowledge and enhanced skills as participants gain a deeper understanding of the security, privacy, and ethical aspects associated with various lifecycle stages of building AI systems.
02
Improved cybersecurity measures and infrastructure as participants apply the skills learned to strengthen cybersecurity within their organizations by identifying vulnerabilities, implementing security measures, and developing policies and plans to mitigate AI-related risks.
03
A certificate that recognizes your cybersecurity and privacy skills in AI and demonstrates your commitment to professional development and ethical practices in AI development.

Workshops

Workshop 1: Security Threats and Robustness in AI

Wednesday, January 15, 2025. This workshop introduces participants to the key concepts of adversarial attacks and robustness in AI systems. It covers the threat landscape in AI systems, adversarial attacks, and defenses. The workshop provides a broad understanding of adversarial attacks in AI by focusing on the different types of attacks, their methods, and their potential impact on AI systems. The workshop also covers the concepts of general and adversarial robustness, domain adaptation, and transfer learning. Participants will learn strategies to improve the stability, reliability, and performance of AI systems in the presence of uncertainties and perturbations.

Topics:

  • Introduction to Adversarial Attacks
  • Types of Adversarial Attacks
  • Implementations of Various Attacks
  • Threat Models
  • Model Robustness
  • Defenses and Their Implementations
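
To give a taste of the attacks covered in this workshop, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression model. The model, its weights, and the `fgsm_perturb` helper are illustrative assumptions for this sketch, not course materials:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    The gradient of the logistic loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM moves x by eps in the sign
    of that gradient to increase the loss as much as possible
    within an L-infinity budget of eps.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model's probability of class 1
    grad = (p - y) * w                              # dLoss/dx for logistic loss
    return x + eps * np.sign(grad)

# Toy model: classifies points by the sign of their first coordinate.
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.3, 1.0])                 # clean input, true label 1
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.5)
```

Here a perturbation of 0.5 on the first coordinate is enough to push the toy model's decision from class 1 to class 0, illustrating how small, targeted input changes can flip a prediction.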

Workshop 2: Privacy and Security in AI

Wednesday, January 22, 2025. This workshop introduces participants to the key concepts of privacy and security in AI systems. It covers privacy-preserving methods around differential privacy and federated learning. The workshop discusses challenges and methods to preserve privacy while leveraging sensitive data for AI model training. Participants will understand potential privacy risks caused by either unintended memorization of sensitive data or adversarial means (e.g., model inversion and inference attacks). The workshop also covers the concepts of data privacy and protection principles, e.g., consent, purpose limitation, and the right to erasure, and methods for data protection, e.g., anonymization, pseudonymization, and encryption.

Topics:

  • Introduction to Privacy in AI
  • Differential Privacy and Implementations
  • Federated Learning and Implementations
  • Privacy Risks and Challenges
  • Trade-offs between Privacy and Utility
  • Data Privacy and Protection Principles
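
As a small taste of the privacy-preserving methods above, the following sketch applies the classic Laplace mechanism to a counting query. The `laplace_count` helper and the chosen parameters are illustrative assumptions, not the workshop's own code:

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    individual changes the count by at most 1), so adding noise
    drawn from Laplace(scale = 1/epsilon) satisfies
    epsilon-differential privacy.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)
# A noisy answer to "how many records satisfy the query?" (true answer: 120).
noisy = laplace_count(true_count=120, epsilon=0.5, rng=rng)
```

Smaller values of `epsilon` add more noise (stronger privacy, lower utility), which is exactly the privacy-utility trade-off listed in the topics above.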

Workshop 3: Ethics and Fairness in AI

Wednesday, January 29, 2025. This workshop explores ethical considerations in AI, specifically focusing on the concepts of bias and fairness. Participants will gain an understanding of the challenges associated with bias in AI systems and the importance of promoting fairness and mitigating discrimination. The workshop will discuss the responsibility of AI practitioners to address bias (i.e., different types of biases such as algorithmic bias, data bias, and societal bias) and to promote fairness in AI systems. Participants will study the impact of bias and approaches to mitigate bias, including data pre-processing techniques, algorithmic adjustments, and fairness-aware model training.

Topics:

  • Introduction to Ethics in AI
  • Bias in AI Systems
  • Types of Bias
  • Bias Mitigation Techniques
  • Fairness in AI Systems
  • Fairness-aware Model Training
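
One common way to quantify fairness is demographic parity: comparing positive-prediction rates across groups. The sketch below is an illustrative example (the helper name and toy data are assumptions, not course materials):

```python
def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A gap of 0 means the model predicts the positive class at the
    same rate for both groups (demographic parity); larger gaps
    indicate disparate treatment under this criterion.
    """
    rate_a = sum(p for p, g in zip(y_pred, group) if g == 0) / group.count(0)
    rate_b = sum(p for p, g in zip(y_pred, group) if g == 1) / group.count(1)
    return abs(rate_a - rate_b)

# Toy predictions: group 0 gets positives at 3/4, group 1 at 1/4.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, groups)  # 0.5
```

Metrics like this are the starting point for the bias-mitigation techniques listed above: pre-processing, algorithmic adjustments, and fairness-aware training all aim to shrink such gaps without sacrificing too much accuracy.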

Workshop 4: Trust and Transparency in AI

Wednesday, February 5, 2025. This workshop explores the importance of trust in AI systems and introduces participants to the concepts of transparency, explainability, and interpretability. Participants will understand the role of these concepts in building trust, ensuring accountability, and addressing the black-box nature of AI models. The workshop will cover the significance of trust in AI and its role in fostering user and societal acceptance. Participants will learn about explainability and interpretability in AI and the ability to understand and trace the decision-making process of AI models. The workshop will also cover techniques for explainability and interpretability, including examples for post-hoc techniques and inherently interpretable models.

Topics:

  • Transparency in AI Systems
  • Explainability and Interpretability
  • Techniques for Explainability and Interpretability
  • Post-hoc Techniques and Implementations
  • Attacks against Interpretability
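
Permutation importance is one example of the post-hoc techniques mentioned above: it measures how much a model's accuracy drops when one feature is shuffled, breaking that feature's link to the label. This is a minimal sketch with a toy model; the helper and data are illustrative assumptions:

```python
import numpy as np

def permutation_importance(predict, X, y, col, rng, n_repeats=10):
    """Post-hoc feature importance: mean drop in accuracy when
    column `col` of X is randomly shuffled."""
    base = np.mean(predict(X) == y)
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        rng.shuffle(Xp[:, col])  # destroy the feature-label relationship
        drops.append(base - np.mean(predict(Xp) == y))
    return float(np.mean(drops))

# Toy black-box model that depends only on feature 0.
predict = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

imp0 = permutation_importance(predict, X, y, col=0, rng=rng)
imp1 = permutation_importance(predict, X, y, col=1, rng=rng)
```

Shuffling the feature the model actually uses destroys its accuracy, while shuffling the ignored feature changes nothing, revealing which inputs drive the black-box decision.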

Workshop 5: AI Development and Operations

Wednesday, February 12, 2025. This workshop introduces the concepts of DevSecOps (Development, Security, and Operations) and MLSecOps (Machine Learning Security Operations) in the context of AI security. Participants will understand the importance of integrating security practices into the AI development lifecycle and learn how DevSecOps and MLSecOps can enhance the security and reliability of AI systems. The workshop will cover DevOps practices in the context of AI development to improve collaboration, efficiency, and quality. Participants will learn about the integration of security into the DevOps workflow and MLSecOps practices and tools to manage the lifecycle of machine learning models, including development, deployment, and monitoring.

Topics:

  • DevSecOps Practices in AI Development
  • MLSecOps Practices and Tools
  • Security Considerations during Model Development
  • Regulatory Requirements and Industry Standards
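
One concrete example of the kind of pipeline safeguard MLSecOps promotes is verifying the integrity of a model artifact before deployment. This sketch uses a SHA-256 digest; the helper names are illustrative assumptions, not a specific tool covered in the workshop:

```python
import hashlib

def artifact_digest(path):
    """SHA-256 digest of a model artifact, read in chunks so large
    files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_digest):
    """A deployment step can refuse to load a model whose digest
    does not match the one recorded at training time."""
    return artifact_digest(path) == expected_digest
```

Recording the digest when a model is trained, and checking it again at deployment, is a simple way to detect tampering between the stages of the lifecycle.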

Workshop 6: Case Studies and Best Practices

Wednesday, February 19, 2025. This workshop analyzes real-world cases and outlines practices and challenges with respect to cybersecurity and privacy in AI systems. Participants will learn from case studies and best practices in AI security and privacy. The workshop will provide insights into the application of security and privacy principles in real-world scenarios and the challenges faced by AI practitioners in ensuring the security and privacy of AI systems. Participants will gain practical knowledge and skills to address security and privacy concerns in AI systems.

Topics:

  • Real-world Cases in AI Security and Privacy
  • Challenges and Best Practices
  • Security and Privacy Principles in AI Systems

Logistics and FAQs

Workshop Schedule

The program consists of 6 workshops, each held once a week on Wednesdays from 6:00 PM to 8:00 PM CT. The workshops are synchronous and will be conducted online.

Workshop Format

The program follows a blended learning approach that combines asynchronous learning with synchronous hands-on labs. Participants will have access to resources and materials for self-paced learning prior to the synchronous labs. For each workshop, participants will be provided with reading materials and videos to complete before the synchronous lab session. The synchronous labs will focus on practical hands-on activities, allowing participants to apply the concepts learned during the asynchronous learning phase.

Expert Instructors

The program is led by expert instructors who are experienced professionals in AI security, privacy, and ethics. The instructors have extensive knowledge and practical experience in the field of AI security and privacy and are committed to providing participants with high-quality training and guidance.

Certificate of Completion

Participants who successfully complete the program will receive a certificate of completion. To be eligible for the certificate, participants must attend and complete at least 5 of the 6 workshops. The certificate will recognize participants' understanding and application of the fundamental concepts at the intersection of AI, cybersecurity, and privacy.

Cost

The program is free of charge. There are no registration fees or costs associated with participating in the program. The program is funded by the National Science Foundation (NSF) and Loyola University Chicago.

Eligibility

The program is designed for AI professionals and researchers who are interested in enhancing their knowledge and skills in AI security, privacy, and ethics. Participants should have a foundational understanding of AI concepts and be familiar with machine learning algorithms and techniques. The program is open to participants from academia, industry, and government organizations.

FAQs

Is there a fee to participate in the program?
No, this is a free program.

Who is the program designed for?
SecureAI is designed specifically for professionals and researchers who are engaged in the field of Artificial Intelligence (AI).

Do I need prior experience with AI?
Yes, given our focus on enhancing cybersecurity and privacy in AI systems, the program is best suited for AI professionals and researchers who already have a foundational understanding of AI.

Will I receive a certificate?
Upon completion of this program, participants will receive a certificate that recognizes their understanding and application of the fundamental concepts at the intersection of AI, cybersecurity, and privacy. To be eligible for this certificate, attendees must attend and complete at least 5 of the 6 workshops.

How much time does each workshop require?
Each workshop is a 2-hour hands-on lab session. Workshops may include asynchronous components that participants are expected to complete beforehand, such as reading materials and videos that take 1-2 hours. The live session itself is dedicated to lab activities, emphasizing hands-on experience with AI models.

Join Us

Take your first step!

Transform Your AI Expertise: Gain Practical Skills in Secure and Ethical AI Development. Fill out the form to enroll.

Sponsors

Contact Us

Our Address

Loyola Center for Cybersecurity
306 Doyle Center, Lake Shore Campus
1052 W Loyola Ave. Chicago, IL 60626

Email Us

Dr. Mohammed Abuhamad: mabuhamad AT luc DOT edu
Dr. Eric Chan-Tin: dchantin AT luc DOT edu