AI Learning Track

Governing AI in the enterprise requires far more than approving a use case or funding a pilot. True governance demands continuous, intentional oversight at every stage of the AI lifecycle. From the earliest decision about where data is sourced and how it is safeguarded, to the scrutiny applied when models are trained and deployed into business-critical environments, leadership must stay engaged. And governance doesn’t end once systems are live: directors must also ensure accountability for monitoring outcomes, managing risks, adapting to regulatory shifts, and ultimately making responsible choices about when and how to decommission technologies that no longer serve the enterprise or its stakeholders.

This module empowers boards and executives to move beyond surface-level understanding and develop a framework of oversight that is both strategic and practical. Participants will learn how to ask the right questions, set the right guardrails, and foster a culture of trust and accountability around AI adoption. By mastering governance across the entire AI lifecycle, leaders not only mitigate risk but also position their organizations to unlock long-term, sustainable value from AI investments.

Artificial Intelligence has entered a new era, defined not by incremental automation but by language and reasoning at scale. Large Language Models stand at the center of this shift. Unlike traditional technologies, their architecture is probabilistic, data-driven, and profoundly complex—making governance both more challenging and more critical.

This module gives boards and executives a clear, non-technical understanding of how LLMs work and why they require different oversight than past systems. Participants will explore how data quality and sourcing underpin every outcome, why evaluation metrics must balance correctness and trustworthiness, and how risks emerge when these models are integrated into real-world operations. Just as important, the session equips leaders with the right questions to ask and the frameworks to apply, ensuring oversight extends beyond hype and into responsible deployment.

By the end, directors and executives will leave with a practical framework for governing LLMs—understanding not just the technical underpinnings, but how these architectures intersect with fiduciary duty, compliance, and long-term enterprise value.

Artificial Intelligence promises extraordinary capability, but without clear performance standards, even the most advanced models can produce results that are misleading, biased, or unreliable. Governance requires more than technical dashboards—it requires boards and executives to understand the meaning behind performance metrics and how they translate into enterprise risk and value.

This module introduces leaders to the key measures of AI performance and explains them in accessible terms. Participants will explore how accuracy, bias, robustness, and trustworthiness are assessed, why trade-offs between speed and reliability matter, and how performance outcomes shift once models move from controlled development into real-world use. Just as important, the session provides a governance lens: how boards can use performance data to hold management accountable, challenge assumptions, and make informed decisions about scaling or reining in AI systems.

By the conclusion, directors and executives will leave with the confidence to interpret performance results, ask sharper oversight questions, and ensure AI is delivering outcomes that align with fiduciary duty, regulatory expectations, and long-term enterprise value.

Artificial Intelligence may drive growth and efficiency, but it also introduces a new category of financial risk: the possibility of runaway costs. Unlike traditional technology investments, AI expenses are not limited to development. They accumulate across the entire lifecycle—from data acquisition and storage, to training and retraining models, to scaling usage in production. Without disciplined oversight, organizations can quickly find costs spiraling beyond what was budgeted or strategically justified.

This module gives boards and executives the tools to govern AI economics with confidence. Participants will examine the major categories of AI spend, understand the trade-offs between upfront investment and ongoing operational costs, and learn how to evaluate proposals through a financial governance lens. Just as important, the session highlights how cost structures intersect with risk management, compliance obligations, and shareholder expectations.

By the end, leaders will leave with a practical framework for asking the right financial oversight questions, ensuring AI projects deliver not just innovation, but sustainable value aligned with fiduciary duty and long-term enterprise resilience.

Artificial Intelligence does not exist in a vacuum. Its development and deployment are increasingly shaped by laws, regulations, and policy frameworks that operate far beyond the enterprise’s direct control. From evolving data protection rules to emerging AI-specific legislation, boards and executives must recognize that compliance is no longer optional—it is a central pillar of governance.

This module provides leaders with a clear, non-technical understanding of the regulatory landscape surrounding AI. Participants will examine how global and regional rules are being developed, what obligations organizations must anticipate, and how regulatory shifts can alter both risk exposure and strategic opportunity. More importantly, the session frames these external forces through a governance lens, showing directors how to oversee management’s response, ensure compliance programs are robust, and align enterprise practices with stakeholder expectations.

By the end, participants will be equipped to navigate the complexity of AI regulation with confidence. They will leave with practical insights into how external legal frameworks intersect with fiduciary duty, corporate reputation, and long-term enterprise value.