Tool Use & Function Calling
5 exercises — master the vocabulary of giving LLMs the ability to act: tool schemas, tool calls, tool results, tool registries, and tool description engineering.
0 / 5 completed
Tool calling vocabulary quick reference
- Tool calling — LLM generates a structured request (name + params) that the host app executes
- Tool schema — the JSON definition (name, description, parameters) given to the LLM for each tool
- Tool call — the structured request produced by the LLM
- Tool result — the data returned by the executed tool (fed back as an Observation)
- Tool registry — centralised catalogue of all available tools with their schemas
- Tool description engineering — crafting tool descriptions to guide the LLM to choose the right tool
- Native function calling — built-in tool call support in OpenAI / Anthropic / Gemini APIs
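To make the vocabulary concrete, here is a minimal sketch of a tool schema in the JSON-Schema style used by native function-calling APIs (the tool name and fields are illustrative, not tied to any one provider):

```python
# A hypothetical tool schema: the name, description, and parameter
# definitions the LLM sees when deciding whether to call this tool.
get_stock_price_schema = {
    "name": "get_stock_price",
    "description": "Look up the latest trading price for a stock ticker.",
    "parameters": {
        "type": "object",
        "properties": {
            "ticker": {
                "type": "string",
                "description": "Stock ticker symbol, e.g. NVDA",
            }
        },
        "required": ["ticker"],
    },
}
```

The description fields are the main surface for tool description engineering: they are the only guidance the LLM has when choosing between tools.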
1 / 5
What is "tool calling" (also called function calling) in an LLM-based agent?
Tool calling is the mechanism that gives LLMs the ability to interact with the real world.
Without tools, an LLM can only generate text — it cannot look up live data, run code, or modify state.
How tool calling works:
① Developer registers tools — provides the LLM with a list of tool schemas (name, description, parameters)
② LLM decides to use a tool — generates a structured JSON response instead of plain text:
{"tool": "get_stock_price", "parameters": {"ticker": "NVDA"}}
③ Host application executes the tool — calls the real function/API with the given parameters
④ Result is returned as an Observation — fed back into the agent's context:
Observation: {"ticker": "NVDA", "price": 875.43, "timestamp": "2025-07-01T10:30Z"}
⑤ LLM continues reasoning — incorporates the result and decides the next action
Key vocabulary:
• Tool call — the LLM's structured request to execute a specific tool
• Tool result — the data returned by the executed tool
• Native function calling — OpenAI/Anthropic/Gemini built-in support for structured tool calls
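Steps ③ and ④ can be sketched as a small host-side dispatch loop. This is a minimal illustration, assuming a stubbed `get_stock_price` function in place of a real market-data API:

```python
import json

def get_stock_price(ticker: str) -> dict:
    # Stand-in for a real market-data API call.
    return {"ticker": ticker, "price": 875.43}

# A minimal tool registry mapping tool names to callables.
TOOLS = {"get_stock_price": get_stock_price}

def run_tool_call(llm_output: str) -> str:
    """Execute the tool named in the LLM's structured response (step ③)
    and format the tool result as an Observation string (step ④)."""
    call = json.loads(llm_output)                        # the tool call
    result = TOOLS[call["tool"]](**call["parameters"])   # the tool result
    return "Observation: " + json.dumps(result)

# Feeding in the structured response from step ②:
print(run_tool_call('{"tool": "get_stock_price", "parameters": {"ticker": "NVDA"}}'))
```

The Observation string is then appended to the agent's context so the LLM can continue reasoning (step ⑤).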
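A tool registry is often built with a small registration helper so that each tool's schema and implementation stay together. A hypothetical sketch (the decorator and registry names are illustrative):

```python
# Centralised catalogue of available tools: name -> implementation + description.
TOOL_REGISTRY = {}

def tool(name: str, description: str):
    """Register a function as a tool. The description is what the LLM
    reads when choosing a tool, so it is the main lever of tool
    description engineering."""
    def decorator(fn):
        TOOL_REGISTRY[name] = {"fn": fn, "description": description}
        return fn
    return decorator

@tool("get_stock_price", "Look up the latest trading price for a stock ticker.")
def get_stock_price(ticker: str) -> dict:
    return {"ticker": ticker, "price": 875.43}  # stand-in data

# On each request, the registry's names and descriptions are serialised
# into tool schemas and sent to the LLM alongside the conversation.
```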