Genesis Computing

October 20, 2025

Progressive Tool Use

In Part 1 – Blueprints, I explained how agents follow structured workflows instead of improvising. In Part 2 – Context Management, I covered how they keep their heads clear across hundreds of steps. This final part is about the execution layer — how agents decide which tools to bring into scope at each stage of a data workflow, and how to keep reasoning stable as the environment grows.

Anyone who’s managed complex data pipelines knows what happens when everything connects to everything else. You end up with overlapping jobs, half-deprecated connectors, and a DAG that only works if nobody touches it. Large tool inventories cause the same chaos inside an agent’s head. Every SQL client, catalog API, schema parser, and validation script expands the model’s internal vocabulary. Even when half of them aren’t needed, the model still carries the mental weight of choosing among them. The result is the same kind of backpressure you see when too many upstream jobs push into the same sink — performance drops, accuracy slips, and debugging turns into archaeology.

Tool orchestration in agent systems feels a lot like data plumbing. Each tool is a valve or a section of pipe that handles a specific part of the flow. If you open every valve at once, pressure drops and data backs up. Progressive tool use is about keeping that flow regulated.


The agent activates only the tools needed for the current stage of the blueprint, closes them when finished, and moves downstream with a clean workspace. It’s controlled throughput instead of all-or-nothing concurrency.
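The staged activation described above can be sketched as a scoped tool context. This is a minimal illustration, not Genesis's implementation: the registry contents, tool names, and `tool_scope` helper are all assumptions made for the example.

```python
from contextlib import contextmanager

# Hypothetical tool registry; real tools would wrap SQL clients, catalog
# APIs, schema parsers, and so on.
TOOL_REGISTRY = {
    "metadata_extractor": lambda src: {"tables": ["orders"], "source": src},
    "schema_inferencer": lambda meta: {t: "inferred" for t in meta["tables"]},
}

@contextmanager
def tool_scope(stage_tools):
    """Expose only the tools a blueprint stage declares, then release them."""
    active = {name: TOOL_REGISTRY[name] for name in stage_tools}
    try:
        yield active
    finally:
        active.clear()  # drop per-stage state so the next stage starts clean

# One stage sees only its own tools; nothing else is in scope.
with tool_scope(["metadata_extractor"]) as tools:
    meta = tools["metadata_extractor"]("s3://raw/orders")
```

The key property is that the scope, not the agent's discretion, bounds what is callable at each step: a stage can only invoke tools it declared up front.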

When one stage produces an output that becomes the input for the next, the agent links those tools directly. A metadata extraction might feed a schema inference step, which then passes its output to a validation layer. That linkage happens dynamically, so the tools stay in scope only as long as they’re relevant. If the agent encounters a missing capability — say, a lineage tracer it can’t find — it flags the gap instead of stalling. We log those requests, review them later, and decide whether the new connector belongs in the standard environment. That way, the system grows based on real use, not theoretical coverage.
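The chaining-plus-gap-flagging behavior might look like the following sketch. All names here (`run_stage`, the tool lambdas, `missing_tool_log`) are illustrative assumptions, not a real API.

```python
# Requests for tools that don't exist are logged for later review
# instead of halting the run.
missing_tool_log = []

def run_stage(tools, tool_name, payload):
    """Run one stage's tool, or flag the capability gap and pass through."""
    tool = tools.get(tool_name)
    if tool is None:
        missing_tool_log.append(tool_name)  # reviewed later for promotion
        return payload  # hand the input downstream unchanged
    return tool(payload)

tools = {
    "metadata_extractor": lambda src: {"tables": ["orders"]},
    "schema_inferencer": lambda meta: {t: "string" for t in meta["tables"]},
}

# Each stage's output becomes the next stage's input.
out = run_stage(tools, "metadata_extractor", "s3://raw/orders")
out = run_stage(tools, "schema_inferencer", out)
out = run_stage(tools, "lineage_tracer", out)  # missing: logged, not fatal
```

The log of missed lookups is exactly the "real use" signal the paragraph describes: connectors get added because agents actually reached for them.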

Every active tool also adds cost: state to maintain, query context to remember, and latency that compounds across long chains of calls. When the agent finishes a section of work, it drops that state and releases the connectors. That automatic cleanup keeps long-running blueprints deterministic. We’ve seen what happens when cleanup is skipped — metrics drift, transformations desynchronize, and the whole run becomes non-reproducible. So cleanup isn’t optional; it’s part of the discipline, just like versioning or lineage tracking.
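That cleanup discipline is essentially a guaranteed-release pattern: connectors are closed whether the stage succeeds or throws. A minimal sketch, with a hypothetical `Connector` class standing in for real warehouse or catalog connections:

```python
class Connector:
    """Stand-in for a stateful connection (SQL client, catalog API, ...)."""
    def __init__(self, name):
        self.name = name
        self.open = True
    def close(self):
        self.open = False

def run_with_cleanup(stage_fn, connector_names):
    """Open connectors for one stage and release them even on failure."""
    conns = [Connector(n) for n in connector_names]
    try:
        return stage_fn(conns)
    finally:
        for c in conns:
            c.close()  # no lingering state to desynchronize later stages

conns_used = run_with_cleanup(lambda conns: conns, ["warehouse", "catalog"])
```

Because release lives in `finally`, a crashing stage cannot leave a half-open connector behind, which is what keeps long-running blueprints reproducible.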

We’re still learning what “too many tools” means. It depends on the model family, the schema size, and how verbose the tools are. So far it seems safer to start minimal — expose only what’s required — and let the system request expansions as it proves it needs them. That pattern mirrors good data engineering: maintain flow control, reduce width, and prevent context pollution at every stage.
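The start-minimal policy can be expressed as a small grant function: a core set is always available, and anything beyond it must be requested, with every request recorded. The tool sets and `available_tools` helper below are assumptions for illustration.

```python
# Core tools every agent gets; everything else is opt-in.
CORE_TOOLS = {"sql_client", "schema_parser"}
EXTENDED_TOOLS = {"lineage_tracer", "profiler", "pii_scanner"}

expansion_requests = []

def available_tools(requested=()):
    """Grant the minimal core set plus explicitly requested extensions."""
    granted = set(CORE_TOOLS)
    for name in requested:
        if name in EXTENDED_TOOLS:
            expansion_requests.append(name)  # audit trail of proven demand
            granted.add(name)
    return granted

tools = available_tools(requested=["lineage_tracer"])
```

The request log doubles as the evidence for widening the default set: if agents keep asking for the same extension, it has earned a place in the core.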

When tools, context, and blueprints align, the agent starts behaving like a disciplined data engineer. It knows which table it’s touching, which lineage it belongs to, and which downstream step depends on its output. That’s when automation stops feeling like orchestration and starts looking like real operational intelligence — a system that understands, executes, and maintains the integrity of data flow at scale.
