
How to build function calling and JSON mode for LLMs

In this webinar, we'll dive deep into how to implement function calling and JSON mode for LLMs: defining schemas and tools, building a state machine, and more.


We introduced built-in function calling and structured output for LLM deployments with our TensorRT-LLM Engine Builder. In this webinar, we'll dive into how we built it!
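For context on what these two features do: structured output guarantees the model's text parses against a user-supplied schema, and function calling builds on that by having the model emit a schema-valid call to one of several declared tools. A typical tool declaration, in the JSON Schema style most tool-calling APIs share, might look like the sketch below (field names follow the common convention and are illustrative, not necessarily the Engine Builder's exact format):

```python
import json

# Illustrative tool (function) declaration in the common JSON Schema style.
# These field names are conventional, not a specific vendor API.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# A schema-compliant tool call the model could then emit as plain text:
example_call = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
parsed = json.loads(example_call)
```

Because the call is guaranteed to parse against the declared schema, the serving layer can dispatch it to real code without defensive string munging.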

What you'll learn:

  • Understanding structured output and function calling: Learn how these features ensure schema-compliant model outputs and enable LLMs to select and execute specific tools.

  • Building JSON mode and tool use: Dive into the implementation details, including defining schemas and tools, building a state machine, and using logit biasing to force valid output.

  • Hands-on demo: See these features in action with live demos and a comparison with alternative methods.

Watch now on-demand!
