AI Series: Evaluating technical security risks with GenAI-enabled applications

BY JAKE WILLIAMS, IANS FACULTY

This upcoming series walks through three important steps for evaluating and mitigating traditional cybersecurity risks in applications that use GenAI.

This series is broken into three guides covering:

  1. Mapping External Exposure to GenAI Applications

  2. Ensuring Proper Use of Authentication for GenAI Applications

  3. Assessing the Sufficiency of Your RBAC Schema for GenAI Applications

Complete the form to receive this content as it’s released this fall. 

Want the full 10-step playbook now? Contact us here.

 


Report: Empower the Business to Use GenAI in Customer-Facing Applications

by Jake Williams, IANS Faculty

GenAI is already transforming business operations. Drawn by its perceived benefits, business leaders are driving adoption before technical controls are in place. Security leaders must be able to give stakeholders well-reasoned advice on the real risks of AI deployments without resorting to scare tactics.

Download this report for actionable guidance on communicating AI’s realistic capabilities, limitations and risk impact.

IANS Generative AI Report

 

On-Demand AI Content

Mitigate AI risk with actionable advice from IANS Faculty.

 

 

Tips to Deploy Generative AI Securely:

 

Understand the top risks of GenAI tools, along with eight mitigation steps to take prior to allowing GenAI usage within your organization.

 

To reduce the likelihood of an AI deployment adversely impacting your organization, follow these steps:

  1. Have your legal team review third-party agreements
  2. Classify data with GenAI in mind
  3. Check with legal counsel on output ownership
  4. Identify acceptable and unacceptable GenAI use cases
  5. Educate users on acceptable GenAI use cases
  6. Implement guardrails to protect against hallucination impacts (see the sketch after this list)
  7. Establish GenAI safe harbor policies for employees
  8. Revisit these guidelines as GenAI evolves
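
Step 6 is the most directly technical item in this checklist. The sketch below shows one way a basic output guardrail could look; it assumes a hypothetical call_llm() wrapper around whichever GenAI provider your organization has approved (none of these names come from the IANS guidance itself) and is a starting point, not a complete control.

```python
from typing import Optional

# Minimal guardrail sketch: constrain the model to answer only from reference
# material you supply, and detect when it cannot, so ungrounded answers never
# reach users. call_llm() is a hypothetical placeholder, not a real API.

REFUSAL_TOKEN = "NO_ANSWER"

PROMPT_TEMPLATE = (
    "Answer the question using ONLY the reference text below. "
    "If the reference text does not contain the answer, reply exactly "
    f"'{REFUSAL_TOKEN}'.\n\n"
    "Reference text:\n{context}\n\n"
    "Question: {question}"
)


def call_llm(prompt: str) -> str:
    """Placeholder: wire this to your organization's approved GenAI API."""
    raise NotImplementedError


def guarded_answer(question: str, context: str) -> Optional[str]:
    """Return an answer grounded in `context`, or None if the model declines."""
    response = call_llm(PROMPT_TEMPLATE.format(context=context, question=question))
    if REFUSAL_TOKEN in response:
        return None  # route to a human reviewer or a standard "not found" reply
    return response
```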

Guidance for Microsoft 365 Copilot implementation

This three-part series provides a roadmap toward AI adoption, a framework for OpenAI risks, and a security cost/benefit analysis to build your AI strategy.

IANS M365 Copilot Content

Complete the form and we’ll redirect you to our M365 Copilot premium content.

Implementing AI Securely in 2024 - Curated IANS Portal Content:

  • Evaluate and Build a Roadmap for Securely Deploying Microsoft 365 Copilot
  • Understand Security Implications of Microsoft’s OpenAI Partnership
  • Microsoft 365 Copilot: A Security Cost/Benefit Analysis
 
 

Additional AI Resources

Gain further insight by accessing the following IANS Faculty content:

Learn More: AI and Third Parties: How to Hold Vendors Accountable
Author: Joshua Marpet, IANS Faculty

Learn More: AI Governance: Tech Problems Without Tech Solutions
Author: Alex Sharpe, IANS Faculty

Learn More: Tips to Mitigate AI Business Risks
Author: Jake Williams, IANS Faculty


About IANS

For the security practitioner caught between rapidly evolving threats and demanding executives, IANS Research is a clear-headed resource for making decisions and articulating risk. We provide experience-based security insights for chief information security officers and their teams. The core of our value comes from the IANS Faculty, a network of seasoned practitioners. We support client decisions and executive communications with Ask-an-Expert inquiries, our peer community, deployment-focused reports, tools and templates, and consulting.