Build 5 real tools. Learn to write JSON schemas that Claude understands. Give your agent the ability to solve complex multi-step problems.
An agent with 5 real tools: web search (via API), file reading, HTTP requests, SQLite database queries, and a note-saver. Then give it a multi-step research task and watch it work through it autonomously. The whole thing is about 150 lines of Python.
Claude reads your tool descriptions and decides which tool to use based on the task. This means the description is the most important part of a tool definition. Here's the difference between a bad description and a good one:
```python
# BAD: vague, Claude won't know when to use this
{"name": "search", "description": "Search for things"}

# GOOD: specific, tells Claude exactly when and how to use it
{
    "name": "web_search",
    "description": (
        "Search the web for current information about a topic. "
        "Use this when you need facts, recent events, or data you don't know. "
        "Returns a list of relevant snippets with URLs."
    ),
}
```
Three things a good tool description includes:
- What the tool does
- When Claude should use it
- What it returns
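Putting those three things together, a complete definition might look like this (a sketch; the per-parameter `description` fields are optional in the schema, but they help Claude fill arguments in correctly):

```python
# A complete tool definition. The top-level description covers what the
# tool does, when to use it, and what it returns; the property-level
# description guides how Claude writes the argument itself.
web_search_tool = {
    "name": "web_search",
    "description": (
        "Search the web for current information about a topic. "  # what it does
        "Use this when you need facts, recent events, or data "   # when to use it
        "you don't know. "
        "Returns a list of relevant snippets with URLs."          # what it returns
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "A concise search query, e.g. 'Python 3.13 release date'",
            }
        },
        "required": ["query"],
    },
}
```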
This builds directly on Day 1's agent loop. We're adding 4 new tools and a more complex task to test them.
```python
import json
import os
import sqlite3
import urllib.parse
import urllib.request
from pathlib import Path

import anthropic

client = anthropic.Anthropic()

# ── Setup: create a sample SQLite database ─────────
def setup_demo_db():
    conn = sqlite3.connect("demo.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS sales (
            id INTEGER PRIMARY KEY,
            product TEXT,
            revenue REAL,
            month TEXT,
            region TEXT
        )
    """)
    conn.execute("DELETE FROM sales")
    conn.executemany(
        "INSERT INTO sales VALUES (?,?,?,?,?)",
        [
            (1, "Widget A", 45000, "2024-01", "West"),
            (2, "Widget B", 32000, "2024-01", "East"),
            (3, "Widget A", 67000, "2024-02", "West"),
            (4, "Widget C", 89000, "2024-02", "North"),
            (5, "Widget B", 54000, "2024-03", "East"),
        ],
    )
    conn.commit()
    conn.close()

# ── Tool implementations ───────────────────────────
def web_search(query: str) -> str:
    # Using DuckDuckGo Instant Answer API (no key required)
    url = (
        "https://api.duckduckgo.com/?q="
        f"{urllib.parse.quote(query)}&format=json&no_html=1"
    )
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            data = json.loads(resp.read())
        abstract = data.get("AbstractText", "")
        if abstract:
            return abstract[:500]
        return f"No instant answer found for '{query}'. Try a more specific query."
    except Exception as e:
        return f"Search error: {e}"

def read_file(path: str) -> str:
    try:
        p = Path(path)
        if not p.exists():
            return f"File not found: {path}"
        content = p.read_text()
        if len(content) > 4000:
            content = content[:4000] + "\n[truncated]"
        return content
    except Exception as e:
        return f"Read error: {e}"

def http_get(url: str) -> str:
    try:
        with urllib.request.urlopen(url, timeout=8) as resp:
            body = resp.read().decode()
        return body[:2000]  # truncate large responses
    except Exception as e:
        return f"HTTP error: {e}"

def query_db(sql: str) -> str:
    try:
        # Only allow SELECT for safety
        if not sql.strip().upper().startswith("SELECT"):
            return "Only SELECT queries are allowed."
        conn = sqlite3.connect("demo.db")
        conn.row_factory = sqlite3.Row
        rows = conn.execute(sql).fetchall()
        conn.close()
        if not rows:
            return "Query returned no rows."
        result = [dict(r) for r in rows]
        return json.dumps(result, indent=2)
    except Exception as e:
        return f"DB error: {e}"

def save_note(title: str, content: str) -> str:
    fname = f"notes/{title.replace(' ', '-')}.txt"
    os.makedirs("notes", exist_ok=True)
    Path(fname).write_text(content)
    return f"Saved to {fname}"

# ── Tool definitions ───────────────────────────────
TOOLS = [
    {"name": "web_search",
     "description": "Search the web for current facts. Use when you need information you don't have. Returns a text snippet.",
     "input_schema": {"type": "object", "properties": {"query": {"type": "string"}}, "required": ["query"]}},
    {"name": "read_file",
     "description": "Read a local file. Use to access local documents, configs, or data files.",
     "input_schema": {"type": "object", "properties": {"path": {"type": "string"}}, "required": ["path"]}},
    {"name": "http_get",
     "description": "Make an HTTP GET request to a URL. Use for REST APIs or fetching web content.",
     "input_schema": {"type": "object", "properties": {"url": {"type": "string"}}, "required": ["url"]}},
    {"name": "query_db",
     "description": "Run a SQL SELECT query on the sales database. Tables: sales(id,product,revenue,month,region).",
     "input_schema": {"type": "object", "properties": {"sql": {"type": "string"}}, "required": ["sql"]}},
    {"name": "save_note",
     "description": "Save information to a file for later. Use to record findings or summaries.",
     "input_schema": {"type": "object", "properties": {"title": {"type": "string"}, "content": {"type": "string"}}, "required": ["title", "content"]}},
]

def execute_tool(name, inp):
    return {
        "web_search": lambda: web_search(inp["query"]),
        "read_file": lambda: read_file(inp["path"]),
        "http_get": lambda: http_get(inp["url"]),
        "query_db": lambda: query_db(inp["sql"]),
        "save_note": lambda: save_note(inp["title"], inp["content"]),
    }[name]()

# Same agent loop as Day 1 (reusable)
def run_agent(task, max_steps=15):
    messages = [{"role": "user", "content": task}]
    for step in range(max_steps):
        resp = client.messages.create(
            model="claude-sonnet-4-5",
            max_tokens=2048,
            tools=TOOLS,
            messages=messages,
        )
        if resp.stop_reason == "end_turn":
            return resp.content[0].text
        results = []
        for b in resp.content:
            if b.type == "tool_use":
                print(f"  [{step+1}] {b.name}({list(b.input.keys())})")
                result = execute_tool(b.name, b.input)
                results.append({"type": "tool_result", "tool_use_id": b.id, "content": str(result)})
        messages += [
            {"role": "assistant", "content": resp.content},
            {"role": "user", "content": results},
        ]
    return "Max steps reached."

if __name__ == "__main__":
    setup_demo_db()
    result = run_agent("""
        Analyze our sales data:
        1. Query total revenue by product
        2. Find the best-performing region
        3. Save a summary note titled 'Sales Analysis Q1 2024'
        Be specific with numbers.
    """)
    print("\n=== Final Answer ===\n", result)
```
What this agent does: It runs 3 SQL queries, calculates totals, forms a summary, and saves the result to a file — all autonomously, without you specifying which queries to run. You gave it a high-level task; it figured out the steps.
Claude doesn't have hardcoded logic for which tool to call. It reads descriptions and reasons about which tool fits the current need. If your agent calls the wrong tool, fix the description first.
Notice that query_db only allows SELECT, that read_file truncates at 4,000 characters, and that http_get has a timeout. Production tool implementations need these guardrails: runaway queries, huge files, and hanging requests can break your agent or cost you money.
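One guardrail the code above doesn't include: confining the file tools to a sandbox directory, so the agent can't read or write outside it. A minimal sketch, assuming a `workspace/` sandbox; the `safe_path` helper is my own addition, not part of the tutorial code:

```python
from pathlib import Path

SANDBOX = Path("workspace").resolve()

def safe_path(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything outside SANDBOX.

    Hypothetical helper: guards read_file/save_note against
    directory traversal like '../../etc/passwd'.
    """
    candidate = (SANDBOX / user_path).resolve()
    if not candidate.is_relative_to(SANDBOX):  # Python 3.9+
        raise ValueError(f"Path escapes sandbox: {user_path}")
    return candidate
```

A tool implementation would call `safe_path(inp["path"])` before touching disk, and return the error message as a string to Claude (as the tutorial's tools do) rather than letting the exception propagate.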
The lambda dispatch pattern keeps the agent loop clean and makes it easy to add new tools: add the definition to TOOLS, add the implementation function, add one line to the dispatcher. Day 3 builds memory on top of this same foundation.
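If editing three places per tool bothers you, a decorator-based registry can collapse it to one. This is a sketch of an alternative design, not the tutorial's code; the `tool` decorator and `REGISTRY` names are my own:

```python
from pathlib import Path
from typing import Callable

TOOLS: list[dict] = []               # definitions sent to the API
REGISTRY: dict[str, Callable] = {}   # tool name -> implementation

def tool(description: str, schema: dict):
    """Register a function as a tool in one step: the decorator fills in
    TOOLS and REGISTRY, so there is no separate dispatcher to edit."""
    def wrap(fn):
        TOOLS.append({
            "name": fn.__name__,
            "description": description,
            "input_schema": schema,
        })
        REGISTRY[fn.__name__] = fn
        return fn
    return wrap

@tool(
    "Read a local file. Use to access local documents, configs, or data files.",
    {"type": "object",
     "properties": {"path": {"type": "string"}},
     "required": ["path"]},
)
def read_file(path: str) -> str:
    p = Path(path)
    return p.read_text() if p.exists() else f"File not found: {path}"

def execute_tool(name: str, inp: dict) -> str:
    return REGISTRY[name](**inp)  # dispatch by keyword arguments
```

The trade-off: keyword-argument dispatch requires each tool's parameter names to match its schema exactly, whereas the lambda table makes that mapping explicit.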
Exercises:
- Add a write_file(path, content) tool that lets the agent create files
- Add a list_files(directory) tool and test it

Tomorrow: Memory. Your agent currently forgets everything between tasks. We fix that.
Before moving on, make sure you can answer these without looking: