March 2025 Webinar
Understand LLM Supervised Fine Tuning and Related InfoSec Risks
Virtual Event
Time
2:00 PM - 3:00 PM ET
Location
Webinar
Generative AI large language model (LLM) usage has become a ubiquitous part of the technology landscape since the introduction of highly capable public models. While public models offer significant advantages, they raise numerous concerns about data security and leakage of organizational intellectual property. Organizations with suitably trained data science staff are turning to building domain-specific knowledge models through a process called Supervised Fine-Tuning (SFT). While SFT allows an organization to develop locally relevant generative AI models and deploy them in house, significant information security concerns still surround production deployment. In this session, we:
- Provide background knowledge for infosec practitioners on Supervised Fine-Tuning and the accompanying task of data preparation (see the illustrative sketch after this list)
- Identify the related infosec risks and guardrails to consider across the machine learning operations (MLOps) life cycle surrounding a production deployment
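For orientation ahead of the session, the sketch below shows what a minimal SFT workflow can look like in code, using the Hugging Face datasets and transformers libraries. The base model, dataset file, prompt template, and hyperparameters are illustrative placeholders, not material from the webinar; a real deployment would substitute an organization-approved model and a curated, reviewed dataset.

```python
# Minimal illustrative sketch of supervised fine-tuning (SFT) on a local,
# domain-specific instruction dataset. All names below are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # small placeholder; substitute your approved base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # reuse EOS as padding token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Data preparation: curated prompt/response pairs kept in-house.
# This is where infosec controls such as PII scrubbing and IP review apply.
raw = load_dataset("json", data_files="curated_domain_pairs.jsonl")["train"]

def to_text(example):
    # Fold each pair into a single training string (hypothetical template).
    return {"text": f"### Instruction:\n{example['prompt']}\n\n"
                    f"### Response:\n{example['response']}"}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

prepared = raw.map(to_text)
tokenized = prepared.map(tokenize, remove_columns=prepared.column_names)

# Causal language modeling objective: the collator builds labels from inputs.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sft-out",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

Even in this simplified form, the sketch highlights where the session's infosec themes arise: the data preparation step handles sensitive organizational content, and the resulting model artifacts and training outputs become assets that the production MLOps pipeline must protect.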