Genesis Computing

October 20, 2025

Progressive Tool Use


TL;DR: Giving an agent access to every tool at once is like opening every valve in a pipeline at the same time -- pressure drops, accuracy slips, and debugging becomes a nightmare. Progressive tool use keeps that flow regulated: activate only the tools needed for the current stage, link outputs to the next input, close tools when the work is done, and log any gaps for later review. When tools, context, and blueprints align, agents stop improvising and start operating like disciplined data engineers.

In Part 1 -- Blueprints, I explained how agents follow structured workflows instead of improvising. In Part 2 -- Context Management, I talked about how they keep their heads clear across hundreds of steps. This last part is about the execution layer: how agents decide which tools to bring into scope at each stage of a data workflow, and how to keep reasoning stable as the environment grows.

The Problem with Too Many Tools

Anyone who's managed complex data pipelines knows what happens when everything connects to everything else. You end up with overlapping jobs, half-deprecated connectors, and a DAG that only works if nobody touches it. Large tool inventories cause the same chaos inside an agent's head. Every SQL client, catalog API, schema parser, and validation script expands the model's internal vocabulary. Even when half of them aren't needed, the model still carries the mental weight of choosing among them. The result is the same kind of backpressure you see when too many upstream jobs push into the same sink: performance drops, accuracy slips, and debugging turns into archaeology.

Tool Orchestration as Data Plumbing

Tool orchestration in agent systems feels a lot like data plumbing. Each tool is a valve or a section of pipe that handles a specific part of the flow. If you open every valve at once, pressure drops and data backs up. Progressive tool use is about keeping that flow regulated.


The agent activates only the tools needed for the current stage of the blueprint, closes them when finished, and moves downstream with a clean workspace. It's controlled throughput instead of all-or-nothing concurrency. This maps closely to what researchers studying utility-guided agent orchestration have identified as the core tension in multi-step agent execution: additional tool calls improve coverage but compound latency and context growth at every step.
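The stage-scoped activation described above can be sketched in a few lines. This is an illustrative model, not Genesis's implementation: the class and tool names are hypothetical, and the "tools" are stand-in callables.

```python
# Hypothetical sketch of stage-scoped tool activation: only the tools a
# blueprint stage declares are visible to the agent; everything else stays
# out of the model's decision space until its stage begins.

class ToolScope:
    """Tracks which tools are currently in scope for the agent."""

    def __init__(self, inventory):
        self.inventory = inventory   # full catalog: name -> callable
        self.active = {}             # tools open right now

    def enter_stage(self, stage_tools):
        """Open only the valves this stage needs."""
        self.active = {
            name: self.inventory[name]
            for name in stage_tools
            if name in self.inventory
        }

    def exit_stage(self):
        """Close everything before moving downstream."""
        self.active = {}


# Stand-in tools for a small metadata workflow.
inventory = {
    "extract_metadata": lambda src: {"source": src, "columns": ["id", "ts"]},
    "infer_schema": lambda meta: {c: "string" for c in meta["columns"]},
    "validate": lambda schema: all(schema.values()),
}

scope = ToolScope(inventory)
scope.enter_stage(["extract_metadata"])          # stage 1: metadata only
print(sorted(scope.active))                      # ['extract_metadata']
scope.exit_stage()
scope.enter_stage(["infer_schema", "validate"])  # stage 2: schema work only
print(sorted(scope.active))                      # ['infer_schema', 'validate']
```

The point of the sketch is the shape, not the code: the full inventory exists, but the model only ever sees the slice the current blueprint stage declares.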

Linking Tools Across Stages

When one stage produces an output that becomes the input for the next, the agent links those tools directly. A metadata extraction might feed a schema inference step, which then passes its output to a validation layer. That linkage happens dynamically, so the tools stay in scope only as long as they're relevant. If the agent encounters a missing capability -- say, a lineage tracer it can't find -- it flags the gap instead of stalling. We log those requests, review them later, and decide whether the new connector belongs in the standard environment. That way, the system grows based on real use, not theoretical coverage.
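Here's a minimal sketch of that linkage and gap-flagging behavior, under the same assumptions as before: the stage names, tool names, and `run_stages` helper are all hypothetical, chosen only to illustrate the pattern.

```python
# Illustrative sketch: each stage's output feeds the next stage's input,
# and a missing tool is logged as a gap rather than stalling the run.

def run_stages(stages, tools):
    """Run (stage_name, tool_name) pairs, threading output to input."""
    gaps, payload = [], None
    for stage_name, tool_name in stages:
        tool = tools.get(tool_name)
        if tool is None:
            gaps.append((stage_name, tool_name))  # flag the gap, keep going
            continue
        payload = tool(payload)                   # link output -> next input
    return payload, gaps


tools = {
    "extract_metadata": lambda _: {"columns": ["id", "ts"]},
    "infer_schema": lambda meta: {c: "string" for c in meta["columns"]},
}
stages = [
    ("metadata", "extract_metadata"),
    ("schema", "infer_schema"),
    ("lineage", "trace_lineage"),  # not installed: recorded, not fatal
]
result, gaps = run_stages(stages, tools)
print(result)  # {'id': 'string', 'ts': 'string'}
print(gaps)    # [('lineage', 'trace_lineage')]
```

The gap list is what gets reviewed later: if `trace_lineage` keeps showing up, that's evidence the connector has earned a place in the standard environment.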

Why Cleanup Is Not Optional

Every active tool also adds cost: state to maintain, query context to remember, and latency that compounds across long chains of calls. When the agent finishes a section of work, it drops that state and releases the connectors. That automatic cleanup keeps long-running blueprints deterministic. We've seen what happens when cleanup is skipped – metrics drift, transformations desynchronize, and the whole run becomes non-reproducible. So cleanup isn't optional; it's part of the discipline, just like versioning or lineage tracking.
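The cleanup discipline above maps naturally onto a context manager. A rough sketch, assuming tools expose a `close()` method (the `Connector` class and `stage_tools` helper are illustrative, not real Genesis APIs):

```python
# A minimal sketch of automatic cleanup: connectors opened for a section
# of work are always released when the section ends, even if a step fails.

from contextlib import contextmanager

class Connector:
    """Stand-in for a tool that holds state, e.g. a warehouse connection."""
    def __init__(self, name):
        self.name, self.open = name, True

    def close(self):
        self.open = False

@contextmanager
def stage_tools(*connectors):
    try:
        yield connectors
    finally:
        for c in connectors:   # release state on success *and* failure
            c.close()


db = Connector("warehouse")
with stage_tools(db) as (wh,):
    assert wh.open             # in scope for the duration of the stage
assert not db.open             # released automatically afterward
```

Making release automatic rather than a step the agent has to remember is what keeps long blueprints deterministic: a crashed stage can't leave a half-open connector polluting the next one.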

Starting Minimal and Expanding as Needed

We're still learning what "too many tools" means. It depends on the model family, the schema size, and how verbose the tools are. So far it seems safer to start minimal, expose only what's required, and let the system request expansions as it proves it needs them. That pattern mirrors good data engineering: maintain flow control, reduce width, and prevent context pollution at every stage.

As Anthropic's own engineering team puts it when writing about effective context engineering for agents, the goal is "finding the smallest possible set of high-signal tokens that maximize the likelihood of some desired outcome." The same logic applies to tools: fewer, better-scoped instruments produce more stable and predictable agent behavior than a wide-open inventory.

When tools, context, and blueprints align, the agent starts behaving like a disciplined data engineer. It knows which table it's touching, which lineage it belongs to, and which downstream step depends on its output. That's when automation stops feeling like orchestration and starts looking like real operational intelligence – a system that understands, executes, and maintains the integrity of data flow at scale.

To see this pattern in action, watch how Genesis agents execute a full dbt project end to end using exactly this kind of staged, controlled tool activation.

Frequently Asked Questions

What is progressive tool use in agent systems? Progressive tool use means an agent activates only the tools required for its current stage of work, closes them when that stage is complete, and moves to the next step with a clean slate. Instead of loading an entire tool inventory upfront, the agent expands its toolset incrementally based on what each phase of the workflow actually needs.

Why does giving an agent too many tools cause problems? Every tool an agent has access to adds to its internal decision space -- even tools that aren't being used. That overhead creates the same kind of backpressure you see in an overloaded data pipeline: performance drops, accuracy slips, and tracing a problem back to its source becomes much harder. Keeping the active toolset small and stage-specific avoids that pressure. Weaviate's breakdown of context engineering for tools frames it well: it's not enough to hand an agent a list of APIs; the real work is making sure it knows which one to use, when, and how to interpret the result.

How does this relate to Genesis Blueprints? Blueprints define the sequence of stages an agent executes. Progressive tool use is the discipline that governs what tooling is active at each stage. Together, they ensure that the agent has exactly what it needs for the current phase of work and nothing extra, keeping long-running missions predictable and auditable. You can read more about how Blueprints work in Part 1 of this series.

Is there a recommended starting point for tool scope? Start minimal. Expose only the tools required for the first stage and let the system surface what it needs from there. As LlamaIndex notes in their guide to context engineering techniques, breaking complex tasks into focused steps with their own optimized context windows prevents overload and makes agent behavior easier to reason about. That conservative starting point produces more stable, easier-to-debug agent behavior than trying to anticipate every possible tool requirement upfront.

This is Part 3 of a three-part series. Look back on Part 1 -- Blueprints and Part 2 -- Context Management.

Want to learn more? Get in touch!

