Function Calling

Function calling (also called tool use) is the feature that lets an LLM analyze a user request and invoke external functions or APIs in a structured JSON format. OpenAI introduced official support in June 2023; since then, Claude, Gemini, and Llama all ship it as a standard feature, and it has become the atomic unit of AI agent implementations.

Why It Matters

LLMs are fundamentally text generators. Without function calling, asking "what's the weather in Seoul right now?" can only surface training-time knowledge. With function calling, the model emits get_weather(location: "Seoul") as JSON, the host app runs the actual API, and the result flows back into the model for the final answer. This simple mechanism is the foundation of AI agents, AI search, and the entire MCP ecosystem.

How It Works

  1. Tool definitions: The app passes the LLM a list of available functions — name, description, parameter schema.
  2. User request: The user enters a natural-language question.
  3. Call decision: The LLM decides which function to invoke and generates arguments as JSON.
  4. Actual execution: The host app parses the JSON and runs the real function. The model itself doesn't execute code.
  5. Inject results: Function output is fed back into the LLM context.
  6. Final response: The LLM produces a natural-language answer grounded in the results.
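
The six steps above can be sketched end-to-end. Below is a minimal, self-contained simulation in Python; the model and the weather API are stubbed out (no real LLM calls), so names like fake_llm and get_weather are illustrative only:

```python
# 1. Tool definitions the app would pass to the LLM (name, description, schema).
TOOLS = [{
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}]

def fake_llm(prompt, tool_result=None):
    """Stand-in for a real LLM. First pass: emit a tool call as JSON.
    Second pass (with tool output injected): emit the final answer."""
    if tool_result is None:
        # 3. Call decision: the model picks a tool and generates JSON arguments.
        return {"tool_call": "get_weather", "arguments": {"location": "Seoul"}}
    # 6. Final response grounded in the injected result.
    return f"It is currently {tool_result['temp_c']}°C in {tool_result['city']}."

def get_weather(location):
    # 4. Actual execution happens in the host app, never inside the model.
    return {"city": location, "temp_c": 21}

# 2. User request
prompt = "What's the weather in Seoul right now?"
call = fake_llm(prompt)                        # model emits structured JSON
result = get_weather(**call["arguments"])      # app runs the real function
answer = fake_llm(prompt, tool_result=result)  # 5. inject result, get answer
print(answer)
```

Note that the "model" never touches the network; it only produces and consumes structured data, which is the whole point of the pattern.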

Example

Tool definition:

{
  "name": "search_blog_posts",
  "description": "Search blog posts by keyword",
  "parameters": {
    "type": "object",
    "properties": {
      "keyword": { "type": "string", "description": "search keyword" },
      "limit": { "type": "integer", "default": 5 }
    },
    "required": ["keyword"]
  }
}

User: "Find me 3 posts about GEO"

Model response:

{
  "tool_call": "search_blog_posts",
  "arguments": { "keyword": "GEO", "limit": 3 }
}

After the app runs the tool, it returns results like: [{title: "...", url: "..."}, ...]

Final response: "I found 3 posts about GEO..."
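
To act on that model response, the host app needs a small dispatch layer that turns the JSON into a real call. A sketch, using the article's example tool (the registry pattern and the stubbed search backend are one common choice, not a fixed API):

```python
import json

def search_blog_posts(keyword, limit=5):
    # Stub standing in for a real search backend.
    posts = [{"title": f"{keyword} post {i}", "url": f"/posts/{i}"}
             for i in range(1, 10)]
    return posts[:limit]

# Registry mapping tool names the model may emit to real implementations.
REGISTRY = {"search_blog_posts": search_blog_posts}

model_output = '{"tool_call": "search_blog_posts", "arguments": {"keyword": "GEO", "limit": 3}}'

call = json.loads(model_output)
fn = REGISTRY.get(call["tool_call"])
if fn is None:
    # Guards against the model hallucinating a nonexistent tool.
    raise ValueError(f"Model requested unknown tool: {call['tool_call']}")
results = fn(**call["arguments"])
print(len(results))  # 3
```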

Function Calling vs MCP vs ReAct

Aspect | Function Calling | MCP | ReAct
Level | API call protocol | Standardized tool-connection protocol | Prompting pattern
Role | Model emits JSON to invoke a tool | Share tools across different hosts/servers | Step-by-step reasoning + action loop
Relationship | MCP's foundation | Standardizes function calling across apps | Uses function calling inside a loop

Function calling is a single tool invocation; MCP is how many apps share the same tool; ReAct is a pattern for looping calls with reasoning.
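
The distinction shows up clearly in code: a ReAct-style agent is just function calling wrapped in a reason-act-observe loop until the model decides it is done. A minimal sketch with a stubbed model (the stop condition and tool set are illustrative):

```python
def fake_agent_llm(history):
    """Stub model: calls a tool once, then answers from the observation."""
    if not any(h.startswith("Observation:") for h in history):
        return {"tool_call": "search_blog_posts", "arguments": {"keyword": "GEO"}}
    return {"final_answer": "Found posts about GEO."}

def search_blog_posts(keyword):
    return [f"{keyword} intro", f"{keyword} checklist"]

TOOLS = {"search_blog_posts": search_blog_posts}
history = ["Question: Find posts about GEO"]

# ReAct loop: reason -> act (a function call) -> observe -> repeat.
for _ in range(5):  # hard step limit guards against infinite loops
    step = fake_agent_llm(history)
    if "final_answer" in step:
        answer = step["final_answer"]
        break
    result = TOOLS[step["tool_call"]](**step["arguments"])
    history.append(f"Observation: {result}")
print(answer)
```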

Limits and Gotchas

Hallucination risk: The model may invent nonexistent functions or generate bad arguments. Strict schema validation is mandatory.
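
A minimal sketch of that validation step, written without external libraries (a production system would more likely use a full JSON Schema validator); the helper name validate_call is illustrative:

```python
def validate_call(call, tools):
    """Reject hallucinated tool names and malformed arguments before execution."""
    schemas = {t["name"]: t["parameters"] for t in tools}
    schema = schemas.get(call.get("tool_call"))
    if schema is None:
        return False, f"unknown tool: {call.get('tool_call')}"
    args = call.get("arguments", {})
    for req in schema.get("required", []):
        if req not in args:
            return False, f"missing required argument: {req}"
    types = {"string": str, "integer": int}
    for name, value in args.items():
        prop = schema["properties"].get(name)
        if prop is None:
            return False, f"unexpected argument: {name}"
        expected = types.get(prop["type"])
        if expected and not isinstance(value, expected):
            return False, f"bad type for {name}"
    return True, "ok"

TOOLS = [{
    "name": "search_blog_posts",
    "parameters": {
        "type": "object",
        "properties": {"keyword": {"type": "string"}, "limit": {"type": "integer"}},
        "required": ["keyword"],
    },
}]

ok, _ = validate_call({"tool_call": "search_blog_posts",
                       "arguments": {"keyword": "GEO"}}, TOOLS)
bad, _ = validate_call({"tool_call": "delete_everything", "arguments": {}}, TOOLS)
print(ok, bad)  # True False
```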

Parallel calls: Frontier models (e.g., GPT-4o and recent Claude models) can invoke multiple functions in a single turn. That means the app must manage ordering and dependencies between the calls.

Cost: Every call expands the context and raises token spend. Narrowing tool exposure improves both performance and cost.

Security: Combined with prompt injection, function calling can amplify damage. Require user confirmation for risky actions like payments or deletions.
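
One way to enforce that confirmation rule is a guard between dispatch and execution. A hedged sketch (the tool names, RISKY_TOOLS set, and callback shape are all illustrative assumptions):

```python
# Tools flagged as risky require explicit user confirmation before execution.
RISKY_TOOLS = {"process_payment", "delete_post"}  # hypothetical tool names

def execute_with_guard(tool_name, fn, args, confirm):
    """confirm is a callback that asks the user; safe tools run automatically."""
    if tool_name in RISKY_TOOLS and not confirm(tool_name, args):
        return {"status": "cancelled", "tool": tool_name}
    return {"status": "ok", "result": fn(**args)}

def delete_post(post_id):
    return f"post {post_id} deleted"

# Simulate a user who declines the risky action.
outcome = execute_with_guard("delete_post", delete_post, {"post_id": 7},
                             confirm=lambda name, args: False)
print(outcome["status"])  # cancelled
```

Because the confirmation happens in the host app rather than in the prompt, a prompt-injection attack cannot talk the model out of it.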

GEO Implications

When AI search extracts info from a blog, it internally invokes functions like fetch_web_content, search_knowledge, or cite_source. Blogs that expose clean HTML and structured data improve the quality of those calls and raise citation probability. In other words, "a blog easy for functions to read" is a GEO-friendly blog.
