Episode 4 — Generative AI Engineering / 4.7 — Function Calling & Tool Calling
4.7.d --- Hybrid Logic
In one sentence: Hybrid logic combines AI reasoning (the model decides WHAT to do) with deterministic code (your functions do the HOW) --- the model acts as an intelligent router that classifies user intent and dispatches to the right function, while your code handles execution with exact business rules, validation, and side effects.
Navigation: <- 4.7.c --- Deterministic Tool Invocation | 4.7.e --- Building an AI Tool Router ->
1. The Hybrid Principle
The most powerful pattern in AI engineering is not pure AI and not pure code --- it is a hybrid where each handles what it does best:
+------------------------------------------------------------------------+
| THE HYBRID PRINCIPLE |
| |
| AI (Probabilistic) Code (Deterministic) |
| ===================== ====================== |
| - Understand natural language - Execute exact computations |
| - Classify user intent - Enforce business rules |
| - Extract arguments from text - Query databases |
| - Handle ambiguity - Call external APIs |
| - Generate natural responses - Validate data |
| - Infer missing context - Handle errors predictably |
| - Log and audit |
| - Enforce security |
| |
| AI decides WHAT to do. Code decides HOW to do it. |
| |
| Example: |
| User: "Make my bio better" |
| |
| AI decides: Code executes: |
| Function: improveBio - Enforce 500-char limit |
| Args: { bio: "I like dogs", - Filter banned words |
| tone: "witty" } - Apply A/B tested templates |
| - Call a specialized LLM prompt |
| - Log for analytics |
| - Bill the premium feature |
+------------------------------------------------------------------------+
2. Why Not Let AI Do Everything?
A tempting approach is to let the AI handle both the decision and the execution through a single prompt. Here is why that fails:
Pure AI approach (fragile)
// BAD: Everything in one prompt
const response = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [
{
role: 'system',
content: `You are a dating app assistant.
When users want to improve their bio:
- Keep it under 500 characters
- Don't use any words from the banned list: [profanity, explicit, ...]
- Make sure it doesn't contain phone numbers or emails
- Use a witty tone unless they specify otherwise
- Track that this user used the bio improvement feature
When users want openers:
- Generate exactly 3 unless they specify a number
- Each opener should be under 200 characters
- Don't include personal questions about appearance
When users want moderation:
- Check for phone numbers, emails, payment references
- Return whether it's safe and why
Also remember to:
- Only premium users can improve bios more than 3 times per day
- Log all actions for compliance
- Respect user's content preferences from their settings`,
},
{ role: 'user', content: 'Improve my bio: "I like hiking"' },
],
});
// PROBLEMS:
// 1. Character count is unreliable (LLM can't count precisely)
// 2. Banned word filtering is probabilistic (LLM might miss words)
// 3. Phone number detection via prompting is unreliable
// 4. No way to actually check premium status or usage limits
// 5. No way to log to your analytics system
// 6. No way to read user settings from database
// 7. Business rules buried in a prompt change with every edit
Hybrid approach (reliable)
// GOOD: AI routes, code executes
// AI's job: Decide to call improveBio, extract the bio text and desired tone
// Code's job: Everything else
async function improveBio({ currentBio, tone = 'witty', userId }) {
// DETERMINISTIC: Exact character count check
if (currentBio.length > 2000) {
return { error: 'Bio too long. Please provide a bio under 2000 characters.' };
}
// DETERMINISTIC: Check premium status from database
const user = db.getUser(userId);
if (!user.isPremium && user.bioImprovementsToday >= 3) {
return { error: 'Free users can improve their bio 3 times per day. Upgrade for unlimited!' };
}
// DETERMINISTIC: Banned word filter (regex-based, not AI-guessing)
const bannedWords = loadBannedWordList();
const foundBanned = bannedWords.filter((word) =>
currentBio.toLowerCase().includes(word)
);
if (foundBanned.length > 0) {
return { error: `Please remove inappropriate language: ${foundBanned.join(', ')}` };
}
// AI-ASSISTED: Use a specialized prompt for the creative part
const improvedBio = await generateImprovedBio(currentBio, tone);
// DETERMINISTIC: Enforce character limit on the output
const trimmedBio = improvedBio.slice(0, 500);
// DETERMINISTIC: Log for analytics
analytics.track('bio_improved', { userId, tone, originalLength: currentBio.length });
// DETERMINISTIC: Update usage counter
db.incrementBioImprovements(userId);
return { improvedBio: trimmedBio, tone, characterCount: trimmedBio.length };
}
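The deterministic guardrails in this function are ordinary pure code, so they can be exercised in isolation without any LLM call. A minimal standalone sketch (the word list and the 500-character cap mirror the rules above; both values are placeholders):

```javascript
// Hypothetical standalone guardrail: banned-word filter plus hard length cap.
// The word list here is a stand-in for the real list loaded from the DB.
const BANNED_WORDS = ['badword1', 'badword2'];

function applyBioGuardrails(bio) {
  const lowered = bio.toLowerCase();
  const found = BANNED_WORDS.filter((w) => lowered.includes(w));
  if (found.length > 0) {
    return { ok: false, error: `Remove inappropriate language: ${found.join(', ')}` };
  }
  // Hard cap at 500 characters -- exact, unlike asking an LLM to count
  return { ok: true, bio: bio.slice(0, 500) };
}

// Usage: a long bio is capped exactly; a banned word is rejected outright
console.log(applyBioGuardrails('x'.repeat(600)).bio.length); // 500
console.log(applyBioGuardrails('badword1 here').ok);         // false
```

Because these checks never vary between runs, they can be covered by ordinary unit tests, which is precisely what the pure-prompt version cannot offer.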
3. The Router Pattern
The router pattern is the architectural backbone of hybrid AI systems. The AI acts as a router that:
- Receives unstructured user input.
- Classifies the intent.
- Extracts structured arguments.
- Dispatches to the right handler.
+------------------------------------------------------------------------+
| THE ROUTER PATTERN |
| |
| +------------------+ |
| | User's Message | |
| +--------+---------+ |
| | |
| v |
| +------------------+ |
| | LLM Router | |
| | (AI reasoning) | |
| +--+----+----+--+--+ |
| | | | | |
| +-----------+ | | +----------+ |
| | | | | |
| v v v v |
| +------------+ +----------+ +----------+ +---------+ |
| | improveBio | | generate | | moderate | | no tool | |
| | (handler) | | Openers | | Text | | (text | |
| | | | (handler) | | (handler)| | reply) | |
| +------+-----+ +----+-----+ +----+-----+ +----+----+ |
| | | | | |
| v v v v |
| +-----------+ +-----------+ +-----------+ +----------+ |
| | Business | | Business | | Business | | Direct | |
| | rules, | | rules, | | rules, | | LLM text | |
| | DB calls, | | DB calls, | | regex, | | response | |
| | analytics | | API calls | | logging | | | |
| +-----------+ +-----------+ +-----------+ +----------+ |
| | | | | |
| +------+------+------+-----+ | |
| | | |
| v v |
| +-------------+ +-------------+ |
| | LLM formats | | Direct text | |
| | result for | | to user | |
| | the user | | | |
| +-------------+ +-------------+ |
+------------------------------------------------------------------------+
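Stripped of the API plumbing, the router's dispatch step boils down to a name-to-handler map plus a safe fallback for unknown names. A minimal sketch with stub handlers (the real handlers follow below; these stubs exist only to show the control flow):

```javascript
// Minimal dispatch table: look the handler up by name, fail safely if missing.
// The stub handlers stand in for the real implementations shown later.
const handlers = {
  improveBio: ({ currentBio }) => ({ improvedBio: currentBio.toUpperCase() }),
  moderateText: ({ text }) => ({ safe: !/\d{3}[-.]?\d{4}/.test(text) }),
};

function dispatch(name, args) {
  const handler = handlers[name];
  if (!handler) return { error: `Unknown function: ${name}` };
  return handler(args);
}

console.log(dispatch('moderateText', { text: 'call 555-0123' })); // { safe: false }
console.log(dispatch('noSuchTool', {}));  // { error: 'Unknown function: noSuchTool' }
```

The unknown-name branch matters: the model occasionally hallucinates a tool name, and the lookup must degrade to an error result rather than a crash.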
4. Building the Dating App Router: Full Working Code
Let us build the complete hybrid system where the AI routes between improveBio(), generateOpeners(), and moderateText().
Part 1: Tool definitions
import OpenAI from 'openai';
const openai = new OpenAI();
const tools = [
{
type: 'function',
function: {
name: 'improveBio',
description:
'Improve a user\'s dating profile bio to be more engaging and authentic. ' +
'Call this when the user wants to rewrite, enhance, fix, or improve their bio.',
parameters: {
type: 'object',
properties: {
currentBio: {
type: 'string',
description: 'The user\'s current bio text',
},
tone: {
type: 'string',
enum: ['witty', 'sincere', 'adventurous', 'intellectual'],
description: 'Desired tone (default: witty)',
},
},
required: ['currentBio'],
additionalProperties: false,
},
},
},
{
type: 'function',
function: {
name: 'generateOpeners',
description:
'Generate conversation starter messages based on a dating match\'s profile. ' +
'Call this when the user wants help with opening lines, icebreakers, ' +
'or starting a conversation with a match.',
parameters: {
type: 'object',
properties: {
profileDescription: {
type: 'string',
description: 'Info about the match\'s profile, interests, or photos',
},
count: {
type: 'number',
description: 'Number of openers to generate (1-5, default: 3)',
},
style: {
type: 'string',
enum: ['funny', 'thoughtful', 'flirty', 'casual'],
description: 'Style of the opener messages (default: casual)',
},
},
required: ['profileDescription'],
additionalProperties: false,
},
},
},
{
type: 'function',
function: {
name: 'moderateText',
description:
'Check if a message is appropriate for a dating platform. ' +
'Call this when the user asks to review, check, or verify if a message ' +
'is okay to send.',
parameters: {
type: 'object',
properties: {
text: {
type: 'string',
description: 'The message text to moderate',
},
},
required: ['text'],
additionalProperties: false,
},
},
},
];
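The router ultimately trusts the JSON arguments the model emits, so a cheap deterministic pre-dispatch check against each schema's `required` list catches malformed calls before any handler runs. A minimal sketch (the schema fragment below is a trimmed stand-in for the full definitions above):

```javascript
// Hypothetical pre-dispatch check: confirm the model supplied every required
// argument declared in the tool's JSON Schema before running the handler.
const schema = {
  name: 'improveBio',
  parameters: { required: ['currentBio'] },
};

function missingArgs(schema, args) {
  return (schema.parameters.required || []).filter((key) => !(key in args));
}

console.log(missingArgs(schema, { currentBio: 'I like hiking' })); // []
console.log(missingArgs(schema, { tone: 'witty' }));               // ['currentBio']
```

When the check fails, return the missing-argument list as the tool result so the model can re-ask the user, rather than letting the handler throw.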
Part 2: Function implementations (deterministic handlers)
// ---- improveBio: AI-assisted but with deterministic guardrails ----
async function improveBio({ currentBio, tone = 'witty' }) {
// DETERMINISTIC: Input validation
if (!currentBio || currentBio.trim().length === 0) {
return { success: false, error: 'Bio text cannot be empty' };
}
if (currentBio.length > 2000) {
return { success: false, error: 'Bio is too long. Max 2000 characters.' };
}
// DETERMINISTIC: Banned word check
const bannedWords = ['explicit-word-1', 'explicit-word-2']; // Real list from DB
const foundBanned = bannedWords.filter((w) =>
currentBio.toLowerCase().includes(w)
);
if (foundBanned.length > 0) {
return {
success: false,
error: 'Please remove inappropriate language before improving.',
};
}
// AI-ASSISTED: Generate the improved bio using a specialized prompt
const bioResponse = await openai.chat.completions.create({
model: 'gpt-4o',
temperature: 0.8,
messages: [
{
role: 'system',
content:
`You are an expert dating profile writer. Rewrite the bio below ` +
`in a ${tone} tone. Rules:\n` +
`- Maximum 500 characters\n` +
`- No phone numbers, emails, or social media handles\n` +
`- Be authentic and specific, avoid cliches\n` +
`- Return ONLY the improved bio text, nothing else`,
},
{ role: 'user', content: currentBio },
],
});
const improvedBio = bioResponse.choices[0].message.content.trim();
// DETERMINISTIC: Enforce character limit (LLM might exceed it)
const finalBio = improvedBio.slice(0, 500);
// DETERMINISTIC: Post-processing safety check
const hasPhone = /\b\d{3}[-.]?\d{3}[-.]?\d{4}\b/.test(finalBio);
  const hasEmail = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/.test(finalBio);
if (hasPhone || hasEmail) {
// AI slipped in contact info --- strip it
return {
success: false,
error: 'Generated bio contained contact information. Please try again.',
};
}
return {
success: true,
originalBio: currentBio,
improvedBio: finalBio,
tone,
characterCount: finalBio.length,
};
}
// ---- generateOpeners: AI-assisted with deterministic filtering ----
async function generateOpeners({
profileDescription,
count = 3,
style = 'casual',
}) {
// DETERMINISTIC: Input validation
if (!profileDescription || profileDescription.trim().length === 0) {
return { success: false, error: 'Profile description cannot be empty' };
}
const clampedCount = Math.min(Math.max(count, 1), 5);
// AI-ASSISTED: Generate openers
const openerResponse = await openai.chat.completions.create({
model: 'gpt-4o',
temperature: 0.9,
messages: [
{
role: 'system',
content:
`Generate exactly ${clampedCount} ${style} conversation openers ` +
`for a dating app. Base them on the profile described below.\n\n` +
`Rules:\n` +
`- Each opener must be under 200 characters\n` +
`- No comments about physical appearance\n` +
`- Ask genuine questions that show interest\n` +
        `- Return a JSON object of the form {"openers": ["..."]}, nothing else`,
},
{ role: 'user', content: profileDescription },
],
response_format: { type: 'json_object' },
});
let openers;
try {
const parsed = JSON.parse(openerResponse.choices[0].message.content);
openers = Array.isArray(parsed.openers) ? parsed.openers : parsed;
if (!Array.isArray(openers)) openers = [String(openers)];
} catch {
return { success: false, error: 'Failed to generate openers. Please try again.' };
}
// DETERMINISTIC: Filter and enforce rules
const cleanOpeners = openers
.slice(0, clampedCount)
.map((o) => String(o).slice(0, 200)) // Enforce character limit
.filter((o) => o.length > 0); // Remove empty strings
return {
success: true,
openers: cleanOpeners,
count: cleanOpeners.length,
style,
};
}
// ---- moderateText: Purely deterministic ----
function moderateText({ text }) {
if (!text || text.trim().length === 0) {
return { success: false, error: 'Text cannot be empty' };
}
const issues = [];
// Phone number detection
if (/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/.test(text)) {
issues.push({
type: 'personal_info',
detail: 'Contains what appears to be a phone number',
severity: 'high',
});
}
// Email detection
  if (/\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/.test(text)) {
issues.push({
type: 'personal_info',
detail: 'Contains an email address',
severity: 'high',
});
}
// Social media handles
if (/@[A-Za-z0-9_]{2,}/.test(text) && !text.includes('@gmail') && !text.includes('@yahoo')) {
issues.push({
type: 'social_media',
detail: 'May contain a social media handle',
severity: 'medium',
});
}
// Payment platform references
if (/\b(venmo|cashapp|paypal|zelle)\b/i.test(text)) {
issues.push({
type: 'financial',
detail: 'References a payment platform',
severity: 'high',
});
}
// Excessive caps (shouting)
const capsRatio = (text.match(/[A-Z]/g) || []).length / text.length;
if (capsRatio > 0.7 && text.length > 10) {
issues.push({
type: 'tone',
detail: 'Excessive use of capital letters',
severity: 'low',
});
}
return {
success: true,
text,
safe: issues.filter((i) => i.severity === 'high').length === 0,
issues,
suggestion:
issues.length === 0
? 'Message looks good to send!'
: `Found ${issues.length} issue(s). Consider revising before sending.`,
};
}
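Because moderateText() is purely deterministic, it can be verified with plain assertions rather than routing-accuracy sampling. A condensed, self-contained version with two of the checks above makes the point runnable:

```javascript
// Condensed stand-in for the deterministic moderator above: two of its checks,
// same result shape, runnable with no LLM involved.
function moderateTextLite({ text }) {
  const issues = [];
  // Phone number detection (same pattern as the full version)
  if (/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/.test(text)) {
    issues.push({ type: 'personal_info', severity: 'high' });
  }
  // Payment platform references
  if (/\b(venmo|cashapp|paypal|zelle)\b/i.test(text)) {
    issues.push({ type: 'financial', severity: 'high' });
  }
  return { safe: issues.filter((i) => i.severity === 'high').length === 0, issues };
}

console.log(moderateTextLite({ text: 'Text me at 555-123-4567' }).safe); // false
console.log(moderateTextLite({ text: 'Coffee this weekend?' }).safe);    // true
```

This is the payoff of Pattern 1 below: the same input always yields the same verdict, so the moderation logic needs no LLM evaluation harness at all.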
Part 3: The router (bringing it together)
const functionMap = {
improveBio,
generateOpeners,
moderateText,
};
async function datingAppRouter(userMessage) {
console.log(`\n--- User: "${userMessage}" ---\n`);
const messages = [
{
role: 'system',
content:
'You are a friendly dating app assistant. Use the available tools ' +
'to help users improve their profiles, craft messages, and stay safe. ' +
'If the user\'s message doesn\'t require a tool, respond conversationally.',
},
{ role: 'user', content: userMessage },
];
// Step 1: Let AI decide what to do
const response = await openai.chat.completions.create({
model: 'gpt-4o',
messages,
tools,
tool_choice: 'auto',
temperature: 0,
});
const assistantMessage = response.choices[0].message;
// Step 2: If no tool call, return the text response
if (response.choices[0].finish_reason !== 'tool_calls') {
console.log(' Route: Direct text response (no tool)');
return assistantMessage.content;
}
// Step 3: Execute the tool(s) the AI chose
const toolResults = [];
for (const toolCall of assistantMessage.tool_calls) {
const fnName = toolCall.function.name;
const fnArgs = JSON.parse(toolCall.function.arguments);
console.log(` Route: ${fnName}(${JSON.stringify(fnArgs)})`);
if (!functionMap[fnName]) {
toolResults.push({
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify({ error: `Unknown function: ${fnName}` }),
});
continue;
}
try {
const result = await functionMap[fnName](fnArgs);
toolResults.push({
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify(result),
});
} catch (error) {
toolResults.push({
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify({ error: error.message }),
});
}
}
// Step 4: Let AI format the result for the user
const finalResponse = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [...messages, assistantMessage, ...toolResults],
tools,
});
return finalResponse.choices[0].message.content;
}
// ---- Test with various inputs ----
// Should route to improveBio
console.log(await datingAppRouter('My bio is "I like hiking and coffee." Make it better!'));
// Should route to generateOpeners
console.log(await datingAppRouter(
'I matched with someone who loves rock climbing and photography. Help me start a convo!'
));
// Should route to moderateText
console.log(await datingAppRouter(
'Is this message okay to send? "Hey, you\'re cute! Text me at 555-0123"'
));
// Should NOT call any tool
console.log(await datingAppRouter('Thanks for the help!'));
// Should NOT call any tool (general question)
console.log(await datingAppRouter('What makes a good dating profile?'));
5. The AI Decision Boundary
Understanding exactly where the AI's responsibility ends and your code's responsibility begins is critical:
+------------------------------------------------------------------------+
| THE AI DECISION BOUNDARY |
| |
| BEFORE THE BOUNDARY (AI's domain): |
| +------------------------------------------------------------------+ |
| | "My bio is boring, it just says I like coffee. Can you jazz it | |
| | up and make it sound more adventurous?" | |
| | | |
| | AI reasoning: | |
| | - User wants bio improvement -> improveBio() | |
| | - Current bio: "I like coffee" -> currentBio argument | |
| | - "adventurous" mentioned -> tone: "adventurous" | |
| | | |
| | Result: tool_call improveBio({ | |
| | currentBio: "I like coffee", | |
| | tone: "adventurous" | |
| | }) | |
| +------------------------------------------------------------------+ |
| | |
| ============================|==========================================|
| THE BOUNDARY | |
| ============================|==========================================|
| | |
| AFTER THE BOUNDARY (Code's domain): |
| +------------------------------------------------------------------+ |
| | improveBio() executes: | |
| | 1. Validate input length (deterministic) | |
| | 2. Check banned words (deterministic) | |
| | 3. Check user's premium status (database query) | |
| | 4. Check daily usage limit (database query) | |
| | 5. Generate improved bio (AI-assisted, specialized prompt) | |
| | 6. Enforce 500-char limit (deterministic) | |
| | 7. Post-filter for contact info (regex) | |
| | 8. Log analytics event (side effect) | |
| | 9. Increment usage counter (database write) | |
| | 10. Return result | |
| +------------------------------------------------------------------+ |
+------------------------------------------------------------------------+
6. Hybrid Patterns
Pattern 1: AI routes, code executes entirely
The AI picks the function; the function is entirely deterministic.
// AI routes to this function
// The function itself has NO AI --- pure business logic
function moderateText({ text }) {
// All regex and rule-based checks
const hasPhone = /\d{3}[-.]?\d{3}[-.]?\d{4}/.test(text);
return { safe: !hasPhone, reason: hasPhone ? 'Contains phone number' : 'Clean' };
}
Best for: Validation, calculations, data lookups, CRUD operations.
Pattern 2: AI routes, code orchestrates AI
The AI picks the function; the function uses another AI call with a specialized prompt.
// AI routes to this function
// The function uses ANOTHER AI call (different prompt, different purpose)
async function improveBio({ currentBio, tone }) {
// First: deterministic validation
if (currentBio.length > 2000) return { error: 'Too long' };
// Then: specialized AI call (not the router, a focused task)
const result = await openai.chat.completions.create({
model: 'gpt-4o',
temperature: 0.8,
messages: [
{ role: 'system', content: `Rewrite this dating bio in a ${tone} tone...` },
{ role: 'user', content: currentBio },
],
});
// Then: deterministic post-processing
return { improvedBio: result.choices[0].message.content.slice(0, 500) };
}
Best for: Content generation, translation, summarization --- tasks where AI adds value but with guardrails.
Pattern 3: AI routes, code chains multiple steps
The AI picks the function; the function executes a multi-step pipeline.
// AI routes to this function
// The function runs a multi-step pipeline
async function processProfileUpdate({ bio, photos, preferences }) {
// Step 1: Moderate the bio (deterministic)
const modResult = moderateText({ text: bio });
if (!modResult.safe) return { error: 'Bio contains inappropriate content' };
// Step 2: Improve the bio (AI-assisted)
  const { improvedBio } = await improveBio({ currentBio: bio, tone: 'witty' });
// Step 3: Analyze photos (AI-assisted, different model)
const photoAnalysis = await analyzePhotos(photos);
// Step 4: Update database (deterministic)
await db.updateProfile({ bio: improvedBio, photos, preferences });
// Step 5: Recalculate match score (deterministic algorithm)
const newMatchScore = calculateMatchScore({ bio: improvedBio, photos, preferences });
return { success: true, improvedBio, matchScore: newMatchScore };
}
Best for: Complex workflows that combine validation, AI processing, database operations, and business logic.
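The essential property of such a pipeline is fail-fast ordering: cheap deterministic checks run before expensive AI-assisted steps, and any step can short-circuit the rest. A sketch with stubbed synchronous steps (the real steps would be async and hit external services; these exist only to show the control flow):

```javascript
// Hypothetical fail-fast pipeline: run steps in order, stop on the first error.
// Steps may transform the input by returning a new `bio`.
function runPipeline(bio, steps) {
  for (const step of steps) {
    const result = step(bio);
    if (result.error) return result; // short-circuit before later (pricier) steps
    bio = result.bio ?? bio;
  }
  return { success: true, bio };
}

const steps = [
  (b) => (b.trim().length === 0 ? { error: 'empty bio' } : { bio: b }),          // validate
  (b) => (/\d{3}[-.]?\d{4}/.test(b) ? { error: 'contains phone' } : { bio: b }), // moderate
  (b) => ({ bio: b.slice(0, 500) }),                                             // enforce limit
];

console.log(runPipeline('Call 555-0123', steps)); // { error: 'contains phone' }
console.log(runPipeline('I like hiking', steps).success); // true
```

Ordering the steps this way means a banned input never reaches the AI-assisted stage, which saves both latency and tokens.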
7. Handling Ambiguous Intent
Sometimes the user's message could match multiple tools. The AI handles this naturally through its understanding of context, but you should design for edge cases:
// Ambiguous: "I wrote 'Hey gorgeous, let's grab coffee' -- is it good?"
// Could be: moderateText (checking if it's appropriate)
// Could be: improveBio (if they think it's their bio)
// Could be: generateOpeners (if they want alternatives)
// The AI typically resolves this correctly from context.
// But you can help by making tool descriptions very specific:
const tools = [
{
type: 'function',
function: {
name: 'moderateText',
description:
'Check if a message is APPROPRIATE and SAFE to send on a dating platform. ' +
'Call this when the user asks "is this okay", "can I send this", ' +
'"is this appropriate", "check this message".',
// ...
},
},
{
type: 'function',
function: {
name: 'improveBio',
description:
'Improve a dating profile BIO (the about-me text on their profile). ' +
'Call this when the user says "improve my bio", "make my bio better", ' +
'"rewrite my profile". NOT for messages they want to send.',
// ...
},
},
];
The key: clear, specific descriptions with examples of trigger phrases help the model distinguish between similar tools.
8. Testing the Router
Test that the AI routes correctly for various inputs:
// ---- Router test suite ----
const testCases = [
// Should call improveBio
{ input: 'Make my bio better: "I like coffee"', expectedTool: 'improveBio' },
{ input: 'Rewrite this bio to sound cooler: "Just a chill dude"', expectedTool: 'improveBio' },
{ input: 'My profile says "I enjoy reading." Jazz it up!', expectedTool: 'improveBio' },
// Should call generateOpeners
{ input: 'Help me message someone who likes yoga', expectedTool: 'generateOpeners' },
{ input: 'I matched with a photographer. What should I say?', expectedTool: 'generateOpeners' },
{ input: 'Give me icebreakers for a dog lover', expectedTool: 'generateOpeners' },
// Should call moderateText
{ input: 'Is "call me at 555-1234" okay to send?', expectedTool: 'moderateText' },
{ input: 'Check this message: "You are so beautiful"', expectedTool: 'moderateText' },
{ input: 'Can I send "Add me on Venmo @john"?', expectedTool: 'moderateText' },
// Should NOT call any tool
{ input: 'Thanks!', expectedTool: null },
{ input: 'What makes a good profile?', expectedTool: null },
{ input: 'How does online dating work?', expectedTool: null },
];
async function testRouter() {
for (const tc of testCases) {
const response = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [
{ role: 'system', content: 'You are a dating app assistant.' },
{ role: 'user', content: tc.input },
],
tools,
tool_choice: 'auto',
temperature: 0,
});
const actual = response.choices[0].finish_reason === 'tool_calls'
? response.choices[0].message.tool_calls[0].function.name
: null;
const pass = actual === tc.expectedTool;
console.log(
`${pass ? 'PASS' : 'FAIL'}: "${tc.input}" -> ` +
`expected: ${tc.expectedTool}, got: ${actual}`
);
}
}
await testRouter();
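The extraction step of this harness can also be unit-tested without hitting the API by feeding it a hand-built object shaped like a Chat Completions response. A sketch (both payloads below are fabricated for the test; only the fields the harness reads are included):

```javascript
// Extracts the routed tool name from a Chat Completions-style response,
// or null for a plain text reply -- the same logic the harness above uses.
function routedTool(response) {
  const choice = response.choices[0];
  return choice.finish_reason === 'tool_calls'
    ? choice.message.tool_calls[0].function.name
    : null;
}

// Hand-built payloads standing in for real API responses
const toolReply = {
  choices: [{
    finish_reason: 'tool_calls',
    message: { tool_calls: [{ function: { name: 'improveBio', arguments: '{}' } }] },
  }],
};
const textReply = {
  choices: [{ finish_reason: 'stop', message: { content: 'You are welcome!' } }],
};

console.log(routedTool(toolReply)); // 'improveBio'
console.log(routedTool(textReply)); // null
```

This splits the two concerns cleanly: the live test suite measures routing accuracy (probabilistic), while the stubbed test pins down the extraction logic (deterministic).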
9. Cost Analysis of Hybrid Logic
The hybrid approach uses two LLM calls per tool-call interaction (one for routing, one for final response), plus potentially more if the function itself uses AI. Here is how to think about the cost:
SCENARIO: User says "Improve my bio: I like hiking"
Call 1: Routing decision
Input: system prompt + user message + tool schemas = ~800 tokens
Output: tool_call JSON = ~50 tokens
Cost: 800 * $2.50/1M + 50 * $10/1M = $0.002 + $0.0005 = $0.0025
Call 2: Bio generation (inside improveBio function)
Input: specialized system prompt + bio = ~200 tokens
Output: improved bio = ~100 tokens
Cost: 200 * $2.50/1M + 100 * $10/1M = $0.0005 + $0.001 = $0.0015
Call 3: Final response formatting
Input: full conversation + tool result = ~1000 tokens
Output: formatted message = ~150 tokens
Cost: 1000 * $2.50/1M + 150 * $10/1M = $0.0025 + $0.0015 = $0.004
TOTAL: ~$0.008 per interaction (3 LLM calls)
At 100,000 interactions/day: ~$800/day
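The arithmetic above generalizes to a small helper. The per-million-token prices are the illustrative gpt-4o rates used in the worked example, not authoritative pricing:

```javascript
// Cost of one LLM call given token counts and per-million-token prices
// ($2.50 input / $10 output, matching the worked example above).
function callCost(inputTokens, outputTokens, inPrice = 2.5, outPrice = 10) {
  return (inputTokens * inPrice + outputTokens * outPrice) / 1_000_000;
}

const total =
  callCost(800, 50) +   // Call 1: routing decision
  callCost(200, 100) +  // Call 2: bio generation
  callCost(1000, 150);  // Call 3: final response formatting

console.log(total.toFixed(4));             // '0.0080' per interaction
console.log((total * 100_000).toFixed(0)); // '800' dollars/day at 100k interactions
```

Plugging in different token estimates or model prices makes it easy to re-run the same projection for, say, a gpt-4o-mini routing tier.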
Optimization strategies:
- Use gpt-4o-mini for routing (cheaper, still accurate for classification).
- Cache tool results for identical inputs.
- Use tool_choice: 'required' when you know a tool must be called (saves tokens on the "should I call a tool?" decision).
- Combine the final formatting into the function's prompt when possible.
10. Key Takeaways
- Hybrid logic = AI decides WHAT to do + code decides HOW to do it. This is the most reliable architecture for AI-powered applications.
- The router pattern has the AI classify intent and extract arguments, while your functions handle validation, business rules, database queries, and side effects.
- Never rely on AI alone for business rules (character limits, banned words, rate limiting) --- always enforce them in deterministic code.
- Functions can themselves use AI-assisted steps (like generating improved bios) wrapped in deterministic guardrails.
- Test routing accuracy separately from function correctness --- they are independent concerns.
- The hybrid approach costs 2-3 LLM calls per interaction --- factor this into your cost planning.
Explain-It Challenge
- A product manager says "just put all the rules in the system prompt --- it's simpler." Explain three specific things that will go wrong with that approach in production.
- Draw the AI decision boundary for this scenario: a user asks "Improve my bio and also check if 'Hey cutie, Venmo me' is safe." Which parts are AI's job and which parts are code's job?
- Your improveBio() function sometimes returns bios with 600 characters even though the limit is 500. The AI was instructed to stay under 500 in its prompt. What went wrong, and how does the hybrid approach fix it?
Navigation: <- 4.7.c --- Deterministic Tool Invocation | 4.7.e --- Building an AI Tool Router ->