For most queries, follow this progression:
Discover — Query with entity: "project" to find the right project, then entity: "drawing" to see what’s uploaded
Explore — Query with entity: "sheet" to browse sheets and their block structure
Search — Search for natural language queries, Ask for direct questions
Drill down — Query with entity: "block" for OCR, metadata, overlays; entity: "feature" for rooms, doors, symbols
Visualize — ViewImage to convert storage URIs to viewable URLs
Compare — Compare to diff revisions, Parse to extract features, PollJob to check async job status
If the user already provides a project ID or sheet number, skip the discovery step.
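As an illustrative sketch of this progression (project ID and sheet values are placeholders), a question like "how many duplex receptacles are on E-201?" might reduce to two calls:

```json
// Explore — browse sheets to confirm E-201 exists
{ "entity": "sheet", "project_id": "proj_oak" }

// Drill down — pull one feature type on that sheet and count the results
{ "entity": "feature", "project_id": "proj_oak", "sheet_number": "E-201", "type": "duplex_receptacle" }
```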
Prompt Tips
Be specific about sheet numbers
The agent performs better with explicit sheet references. Discipline codes and sheet numbers eliminate ambiguity.
Prompt → Quality
“What’s on the electrical drawings?” → Vague — many sheets, unclear intent
“How many duplex receptacles are on E-201?” → Specific — one sheet, one feature type, clear metric
“What changed on A-101 between Rev A and Rev B?” → Specific — two revisions, one sheet
Use discipline codes
When referring to trades, use standard discipline codes. The agent maps these directly to the discipline filter.
Code → Discipline
A → Architectural
S → Structural
M → Mechanical
E → Electrical
P → Plumbing
FP → Fire Protection
Ask for one thing at a time
Multi-part questions (“count receptacles on E-201 AND check fire ratings on A-101”) often produce better results as separate prompts. Each focused question maps to a clean tool call sequence.
Context Management
Use progressive disclosure
Do not request heavy data unless needed. The include parameter controls token cost.
Lean (default)

{
  "entity": "block",
  "project_id": "proj_oak",
  "sheet_number": "A-101",
  "type": "Plan"
}

Returns block summaries with relations. Low token cost.

With metadata

{
  "entity": "block",
  "project_id": "proj_oak",
  "id": "blk_door_sched",
  "include": ["metadata"]
}

Adds structured schedule rows. Medium token cost.

With OCR

{
  "entity": "block",
  "project_id": "proj_oak",
  "id": "blk_notes",
  "include": ["ocr"]
}

Adds full OCR text. High token cost — can be thousands of tokens for general notes.
A good pattern: start with the default response, let the agent read relations and metadata to understand the data structure, then selectively request include fields only for the specific blocks that need them.
Let pass-through filters do the work
Instead of resolving entity IDs manually, use pass-through filters to skip round-trips:
// Instead of: Query entity:"sheet" to get sheet ID, then Query entity:"feature" with sheet_id
// Do this:
{
  "entity": "feature",
  "project_id": "proj_oak",
  "sheet_number": "E-201",
  "type": "duplex_receptacle"
}
Available pass-through filters:
entity: "block": sheet_number, drawing_id
entity: "feature": sheet_number, block_type, block_identifier, grid_intersection
Error Handling
All tools return errors in a consistent shape:
{
  "error": "not_found",
  "message": "Sheet E-999 not found in project proj_oak"
}
Error Code → Meaning → Recommended Action
not_found → Entity does not exist or is not accessible → Verify the ID or filter values; check project scope
invalid_input → Bad parameters (invalid filter combination, unknown type) → Review the parameter values and types
credits_exhausted → Insufficient credits for vision operations → Alert the user; vision operations (Compare) require credits
job_failed → Async job encountered an error → Check the error message; retry or report to the user
The credits_exhausted error only applies to the Compare and Parse tools. The Query, Search, Ask, ViewImage, and PollJob tools do not consume credits.
Batch feature queries by sheet
When analyzing multiple feature types on the same sheet, combine them into fewer calls:
// Query all features on E-201 at once
{
  "entity": "feature",
  "project_id": "proj_oak",
  "sheet_number": "E-201"
}
Then filter in the agent’s reasoning rather than making separate calls for each feature type.
Use grid_intersection for cross-discipline queries
Instead of querying each discipline separately and correlating results manually, use the Query tool’s grid_intersection filter:
{
  "entity": "feature",
  "project_id": "proj_oak",
  "grid_intersection": "B-3",
  "include": ["parent_chain"]
}
This returns features from all disciplines at that grid location in a single call. The backend handles the spatial math.
Sort comparison results by score
After a comparison completes, the PollJob results include a score for each overlay. Lower scores indicate more changes. Focus on low-score sheets first:
// From PollJob result:
{
  "overlays": [
    { "score": 0.82, "block_b": { "sheet_number": "A-101" } }, // Most changes
    { "score": 0.97, "block_b": { "sheet_number": "A-102" } }  // Minimal changes
  ]
}
Drill into the A-101 overlay first: Query with entity: "block" and include: ["overlays", "changes"].
Next Steps
Tools Reference Full parameter documentation for all seven tools.
Workflow Examples See complete tool call sequences for real construction queries.