SOSS Fusion 2024 has ended
October 22-23, 2024 | Atlanta, Georgia USA
In this age of open source in machine learning, ML practitioners increasingly rely on public model hubs to download foundation models for fine-tuning instead of creating models from scratch. However, compromised artifacts are easy to share on these hubs. ML model files are vulnerable to Model Serialization Attacks (MSAs): the injection of malicious code that executes automatically when the file is deserialized. MSAs are the Trojan horses of ML, capable of turning a seemingly innocuous model into a backdoor into your system.

So, what can you do about it? In this talk, we explore two strategies that use open-source tools to mitigate the risk of MSAs and other supply chain attacks on ML: model scanning with ModelScan by Protect AI, and cryptographic signing with Sigstore by OpenSSF. Model scanning is our window into black-box model files. Cryptographic signatures link an artifact to a source's identity, backed by a trusted authority. Scanning and signing are both widely used defenses for traditional software artifacts, but they have not yet been widely adopted in AI. We will demonstrate how these tools can bridge the AI/ML security gap and stop Trojan horses at the gate.
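A minimal sketch, assuming a pickle-serialized model file and a hypothetical payload, of why deserialization is the dangerous step: Python's pickle calls an object's __reduce__ hook when the file is loaded, so whatever callable that hook returns executes automatically.

    # Minimal sketch of a Model Serialization Attack on a pickle-based model file.
    # The payload and filename are hypothetical, chosen only to show that the
    # malicious code runs at load time, not when the model is actually used.
    import os
    import pickle

    class MaliciousPayload:
        # pickle records the (callable, args) tuple returned by __reduce__;
        # that callable is invoked automatically when the file is deserialized.
        def __reduce__(self):
            return (os.system, ("echo 'payload executed at load time'",))

    # The attacker ships this file as if it were a legitimate model artifact.
    with open("model.pkl", "wb") as f:
        pickle.dump(MaliciousPayload(), f)

    # The victim only has to load the "model" for the payload to run.
    with open("model.pkl", "rb") as f:
        pickle.load(f)  # arbitrary code execution; no model code ever called

The defenses named in the abstract sit on either side of that load: scanning the artifact before deserializing it (for example, pointing the ModelScan CLI at the file, roughly modelscan -p model.pkl) and verifying a Sigstore signature so that only artifacts signed by a trusted identity are loaded at all. The exact commands and flags are assumptions about those tools, not the speakers' demo.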
Speakers
William Armiros

Senior Software Engineer, Protect AI
William is a Senior Software Engineer at Protect AI, where he is building systems to help ML engineers and data scientists introduce security into their MLOps workflows effortlessly. Previously, he led a team at AWS working on application observability and distributed tracing.
Sam Washko

Senior Software Engineer, Protect AI
Sam Washko is a senior software engineer passionate about the intersection of security and software development. She works for Protect AI, developing tools to make machine learning systems more secure. She holds a BS in Computer Science from Duke University.
Wednesday October 23, 2024 2:05pm - 2:35pm EDT
Salon 4-6