Loading…
SOSS Fusion 2024 has ended
October 22-23, 2024 | Atlanta, Georgia USA
View More Details & Registration

The Sched app allows you to build your schedule but is separate from your event registration. You must be registered for SOSS Fusion 2024 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in Eastern Daylight Saving Time. To see the schedule in your preferred timezone, please select from the drop-down located at the bottom of the menu to the right.
strong>AI + Security [clear filter]
Tuesday, October 22
 

11:30am EDT

Artificial Intelligence Cyber Challenge (AIxCC): Overview and Releasing Research as Open Source Software - David A. Wheeler & Jeff Diecks, Linux Foundation
Tuesday October 22, 2024 11:30am - 12:10pm EDT
The Artificial Intelligence Cyber Challenge (AIxCC) is a two-year research competition sponsored by US DARPA in collaboration with ARPA-H. AIxCC  asks competitors to design novel AI systems to secure critical code, specifically finding and fixing its vulnerabilities, and provides significant prizes for the top winners. This presentation will provide an introduction to AIxCC, including its strategies and approaches. It will especially focus on the approach of releasing research as open source software (OSS) to support technology transfer, as well as discussing in-progress results of AIxCC.


Speakers
avatar for David A. Wheeler

David A. Wheeler

Director of Open Source Supply Chain Security, Linux Foundation
Dr. David A. Wheeler is an expert on open source software (OSS) and on developing secure software. His works on developing secure software include "Secure Programming HOWTO", the Open Source Security Foundation (OpenSSF) Secure Software Development Fundamentals Courses, and "Fully... Read More →
avatar for Jeff Diecks

Jeff Diecks

Technical Project Manager, OpenSSF, Linux Foundation
Jeff has more than two decades of experience in technology and communications with a diverse background in operations, project management and executive leadership. A participant in open source since 1999, he’s delivered digital products and applications for universities, sports... Read More →
Tuesday October 22, 2024 11:30am - 12:10pm EDT
Salon 1
  AI + Security
  • Session Slides Attached yes

12:15pm EDT

End-to-End Secure ML Development - Mihai Maruseac, Google
Tuesday October 22, 2024 12:15pm - 12:45pm EDT
We are seeing an increase in the number of AI powered applications. At the same time, we are seeing that AI software repeats the same security mistakes as traditional software, but at an accelerated time frame and with higher risks. In this talk -- planned as a tutorial --, we aim to show how AI applications can be developed in a safe way, starting with datasets and software dependencies, building a secure software supply chain, and only accepting models in production that have clear, untampered provenance (both SLSA but also analyzing the capabilities of the models to eliminate future risks). For example, we want to be able to trace back from a bad inference in production to the potential poisoned input in the training dataset. We will show how we can reduce cost of retraining models in the event of an ML framework compromise by analyzing the blast radius and only retraining impacted models. To keep the audience engaged, we will follow the development story of an ML model from data collection and training all the way to deploying the model in production. At each stage, we will go over the supply chain security risks and show how these can be mitigated.
Speakers
avatar for Mihai Maruseac

Mihai Maruseac

Staff SWE, Google
Mihai Maruseac is a member of Google Open Source Security team (GOSST), working on Supply Chain Security, mainly on GUAC. Before joining GOSST, Mihai created the TensorFlow Security team after joining Google, moving from a startup to incorporate Differential Privacy (DP) withing Machine... Read More →
Tuesday October 22, 2024 12:15pm - 12:45pm EDT
Salon 1
  AI + Security
  • Session Slides Attached yes

3:25pm EDT

Secure Numerical Computing is Hard: Lessons from 10 Years of Open Data Science & the Long Road Ahead - Peter Wang, Anaconda
Tuesday October 22, 2024 3:25pm - 3:55pm EDT
Over the last decade, enterprises have had to accelerate adoption of open source software for data science, machine learning, and AI. These numerically-intensive workloads posed unique new challenges for businesses and IT due to both technology and organizational dynamics.

As the creators of the PyData movement and as a foundational distributor of open source Python tools to millions of enterprise and individual users, Anaconda has had a front-row seat to these kinds of challenges. In this talk, we will draw upon data from our annual State of Data Science industry survey to understand the kinds of challenges that businesses face while trying to adopt even well-known OSS data & ML technology.

We’ll then look towards new deep-learning and AI workloads, and the new dimensions of the security challenges there. These include novel challenges posed by exotic hardware, just-in-time compilation, binary distribution, and data-oriented supply chain attacks.

The talk will conclude with some key principles to guide thinking about software and data supply chains in the new era of machine learning and AI software deployment.
Speakers
avatar for Peter Wang

Peter Wang

Chief AI & Innovation Officer, Co-Founder, Anaconda
Peter Wang is the Chief AI & Innovation Officer and Co-founder of Anaconda. Peter leads Anaconda’s AI Incubator, which focuses on advancing core Python technologies and developing new frontiers in open-source AI and machine learning, especially in the areas of edge computing, data... Read More →
Tuesday October 22, 2024 3:25pm - 3:55pm EDT
Salon 4-6

5:00pm EDT

Secure AI Orchestration: Mitigate Model-centric Attacks with Flyte - Niels Bantilan, Union.ai
Tuesday October 22, 2024 5:00pm - 5:30pm EDT
In recent years, major progress in machine learning (ML) has led to a corresponding boom in the broader artificial intelligence (AI) space, opening up commercial applications in text, image, audio, and video generation. However, data scientists and ML engineers still face many security issues that may lead to arbitrary code execution even in the space of "classical" ML, which often involves classification or regression on tabular data.

This talk will outline some of the model-centric attacks that you should be aware of and hone in on two types of attacks: malicious code injection on pickled model files, and malicious code written and executed by an LLM. We'll create a basic setup for these two attacks and see how Flyte, an open source ML orchestrator, can help mitigate some of the risks associated with these two attacks.

Finally, we'll analyze the limitations of the solutions provided by Flyte, abstract some of the ideas out in an orchestrator-agnostic way, and cover other open source tools, like `safetensors` and `onnx`, which we can leverage on top of Flyte to reduce these risks even further.
Speakers
avatar for Niels Bantilan

Niels Bantilan

Chief Machine Learning Engineer, union.ai
Niels is the Chief Machine Learning Engineer at Union.ai, and core maintainer of Flyte, an open source workflow orchestration tool, author of UnionML, an MLOps framework for machine learning microservices, and creator of Pandera, a statistical typing and data testing tool for scientific... Read More →
Tuesday October 22, 2024 5:00pm - 5:30pm EDT
Salon 2-3
  AI + Security
  • Session Slides Attached yes
 
Wednesday, October 23
 

2:05pm EDT

Trojan Model Hubs: Hacking the ML Supply Chain & Defending Yourself from Threats - Sam Washko & William Armiros, Protect AI
Wednesday October 23, 2024 2:05pm - 2:35pm EDT
In this age of open source in machine learning, ML practitioners increasingly rely on public model hubs for downloading foundation models to fine tune instead of creating models from scratch. However, compromised artifacts are very easy to share on these hubs. ML model files are vulnerable to Model Serialization Attacks (MSA), the injection of malicious code that will execute automatically when the file is deserialized. MSAs are the Trojan horses of ML, capable of turning a seemingly innocuous model into a backdoor to your system. So, what can you do about it? In this talk, we explore two strategies to use open-source tools to mitigate the risk of MSAs and other supply chain attacks on ML: model scanning with ModelScan by Protect AI and cryptographic signing with Sigstore by OpenSSF. Model scanning is our window into the black box model files. Cryptographic signatures link an artifact to a source’s identity, backed up by a trusted authority. Scanning and signing are both widely used defenses for traditional software artifacts, but they have not been widely adopted in AI yet. WWe will demonstrate how these tools can bridge the AI/ML security gap, and stop Trojan Horses at the gate.
Speakers
avatar for William Armiros

William Armiros

Senior Software Engineer, Protect AI
William is a Senior Software Engineer at Protect AI, where he is building systems to help ML engineers and data scientists introduce security into their MLOps workflows effortlessly. Previously, he led a team at AWS working on application observability and distributed tracing. During... Read More →
avatar for Sam Washko

Sam Washko

Senior Software Engineer, Protect AI
Sam Washko is a senior software engineer passionate about the intersection of security and software development. She works for Protect AI developing tools for making machine learning systems more secure. She holds a BS in Computer Science from Duke University, and prior to joining... Read More →
Wednesday October 23, 2024 2:05pm - 2:35pm EDT
Salon 4-6

3:40pm EDT

The Simple, Yet Lethal, Anatomy of an AI Attack - Erez Yalon, Chechmarx
Wednesday October 23, 2024 3:40pm - 4:10pm EDT
It surprises no one that attack surfaces expand as swiftly as AI and ML technologies advance, yet the security landscape lags behind. Join us for an eye-opening session where we dive deep into the dark world of AI security through the lens of attackers.

We'll tread carefully between different attacks, accompanied by demos, revealing the strategies and techniques used to compromise AI and LLMs. From reconnaissance and spoofing via supply chain attacks all the way to LLM poisoning, jailbreaking, and compromise—AI attacks are far from just prompt injection. Witness firsthand how attackers exploit vulnerabilities, manipulate AI systems, and leverage AI for malicious purposes.

You will gain a fundamental understanding of AI security and the nature of AI attacks, offering a rare glimpse into the adversarial mindset. By understanding the attacker’s perspective, you will be better prepared for a new era where threats are evolving and attackers are feeling increasingly comfortable in the AI domain.
Speakers
avatar for Erez Yalon

Erez Yalon

VP of Security Research, Checkmarx
Erez Yalon is the VP of Security Research at Checkmarx, the Founder of the DEF CON's AppSec Village, and the founder and co-leader of the OWASP API Security Project. Over the years, Erez has been invited to speak at prominent events, including RSAC, Black Hat, DEF CON, and OWASP’s... Read More →
Wednesday October 23, 2024 3:40pm - 4:10pm EDT
Salon 2-3
 
  • Filter By Date
  • Filter By Venue
  • Filter By Type
  • Session Slides Attached
  • Timezone

Share Modal

Share this link via

Or copy link

Filter sessions
Apply filters to sessions.