Class OrchestratorAgent
- All Implemented Interfaces:
ProgressReporter
Analogy: like a project manager who receives a large contract, breaks it into work packages, assigns each to a specialist, monitors progress, collects reports, and escalates to the client only when a decision requires human judgment. The specialists (sub-agents) do the technical work; the orchestrator coordinates and decides what happens next.
This class is the core learning artifact for the "Agentic Architecture & Orchestration" exam domain (Domain 1, 27%) from the Claude Certified Architect – Foundations Certification Exam Guide. Every pattern described in the certification guide is implemented here.
Review lifecycle (the "agentic loop"):
- INIT: receive ReviewScope from StartReviewTool MCP call
- DECOMPOSE: TaskDecomposer splits project into agent-specific chunks
- DISPATCH: for each sub-agent: fire onBeforeAgent() hook, then execute()
- COLLECT: receive AgentResult; fire onAfterAgent() hook
- EVALUATE: SelfEvaluatorAgent checks output quality; retry if insufficient
- ESCALATE: escalate() checks for CRITICAL — pause if found
- AGGREGATE: AgentResultAggregator merges all findings
- COMPLETE: store ReviewReport, mark session as done
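The lifecycle above can be condensed into a single loop. The sketch below is a simplified, hypothetical illustration: the stub SubAgent interface and String-based findings stand in for the real TaskDecomposer, SelfEvaluatorAgent, and AgentResultAggregator types.

```java
import java.util.ArrayList;
import java.util.List;

// Condensed, hypothetical sketch of the agentic loop described above.
// All types here are simplified stand-ins for the real classes.
public class AgenticLoopSketch {

    interface SubAgent { List<String> execute(String chunk); }

    static List<String> runLoop(List<String> chunks, SubAgent agent) {
        List<String> findings = new ArrayList<>();
        for (String chunk : chunks) {                   // DISPATCH: one chunk per sub-agent run
            List<String> result = agent.execute(chunk); // COLLECT: receive the agent's result
            if (result.isEmpty()) {                     // EVALUATE: retry once if output is insufficient
                result = agent.execute(chunk);
            }
            findings.addAll(result);                    // AGGREGATE: merge all findings
        }
        return findings;                                // COMPLETE: the caller stores the report
    }

    public static void main(String[] args) {
        List<String> report = runLoop(List.of("Foo.java", "Bar.java"),
                chunk -> List.of("finding in " + chunk));
        System.out.println(report); // prints [finding in Foo.java, finding in Bar.java]
    }
}
```

The escalate/resume steps are left out here; they are covered in the escalate() and resumeAfterEscalation() details below.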
Created by: WorkshopServer.start() — a single instance is created at server
startup and shared across all MCP tool calls. It is injected into each tool:
OrchestratorAgent orchestrator = new OrchestratorAgent();
tools.add(new StartReviewTool(orchestrator).toolSpecification(jsonMapper));
tools.add(new GetReportTool(orchestrator).toolSpecification(jsonMapper));
tools.add(new RespondToEscalationTool(orchestrator).toolSpecification(jsonMapper));
This ensures all tools share the same activeSessions map — StartReviewTool
writes a session, GetReportTool reads it, and RespondToEscalationTool modifies it.
- See Also:
-
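The single-instance sharing described above can be sketched roughly as follows. ReviewSession here is a hypothetical stand-in for the real session type; only the map mechanics are shown.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch: one orchestrator instance, one shared session map.
// ReviewSession is a hypothetical stand-in for the real record/class.
public class SharedSessionsSketch {
    record ReviewSession(String reviewId, String status) {}

    // ConcurrentHashMap so tool calls running on different threads
    // see each other's writes without extra locking
    private final Map<String, ReviewSession> activeSessions = new ConcurrentHashMap<>();

    void put(String id, String status) {
        activeSessions.put(id, new ReviewSession(id, status));
    }

    String statusOf(String id) {
        ReviewSession s = activeSessions.get(id);
        return s == null ? "UNKNOWN" : s.status();
    }

    public static void main(String[] args) {
        SharedSessionsSketch orchestrator = new SharedSessionsSketch();
        orchestrator.put("abc-123", "RUNNING");               // StartReviewTool writes...
        System.out.println(orchestrator.statusOf("abc-123")); // ...GetReportTool reads
    }
}
```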
Constructor Summary
Constructors -
Method Summary
Modifier and Type / Method / Description
- void escalate(String reviewId, Finding finding)
If an agent finds a critical finding while processing a file, it notifies the orchestrator of the escalation and stops its work before continuing with the next file.
- getStatus(String reviewId)
Reads the current state from activeSessions.
- void resumeAfterEscalation(String reviewId, HumanDecision decision)
Resumes a paused review after the developer has responded to an escalation.
- startReview(ReviewScope scope)
Starts a new review session asynchronously and returns immediately with a reviewId.
- void updateProgress(String reviewId, String currentFile)
Called by SubAgent.execute() during file processing to push live progress data into the ReviewSession.
-
Constructor Details
-
OrchestratorAgent
public OrchestratorAgent()
-
-
Method Details
-
startReview
Starts a new review session asynchronously and returns immediately with a reviewId. Called explicitly by StartReviewTool, which creates the ReviewScope record instance from the JSON sent by Claude AI and passes it to this method, for example:
{ "project_path": "C:/project", "scope": "changed_files", "focus": "all" }
Before returning the result to Claude AI, this method starts the agents' execution in background threads, collects their results, and stores them in a review session. Here is the chain of calls describing how this method is invoked:
- Developer: "Review my changes before PR"
- Claude: understands the intent and calls StartReviewTool(projectPath, scope)
- StartReviewTool: creates a ReviewScope, calls OrchestratorAgent.startReview(scope), and returns { reviewId: "abc-123" } to Claude AI without waiting
Analogy: like submitting a job to Java's ExecutorService — you get a Future (reviewId) immediately; the work runs in the background. The MCP caller (Claude AI) does not block waiting for results.
GetReportTool will periodically check the status by calling getStatus() from this class - see the getStatus() Javadoc for an explanation of how that method is guaranteed to be called by GetReportTool. The state in the ReviewSession is changed in two ways: 1. by the agents that are working (by calling the updateProgress() method of this class), and 2. in this method itself.
- Parameters:
scope - ReviewScope defining project path and review depth
- Returns:
reviewId (UUID) used by all subsequent tool calls - StartReviewTool returns this value to Claude AI
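The ExecutorService analogy can be made concrete with a minimal sketch of the fire-and-return pattern. The Session record, the executor, and the shutdown() helper are illustrative assumptions, not the actual implementation.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: submit the real work to a background thread
// and hand the reviewId back to the MCP caller immediately.
public class StartReviewSketch {
    record Session(String reviewId, String status) {}

    private final Map<String, Session> activeSessions = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public String startReview(String projectPath) {
        String reviewId = UUID.randomUUID().toString();
        activeSessions.put(reviewId, new Session(reviewId, "RUNNING"));
        executor.submit(() -> {
            // ... dispatch sub-agents, collect findings, aggregate ...
            activeSessions.put(reviewId, new Session(reviewId, "COMPLETED"));
        });
        return reviewId; // the MCP caller gets this without blocking
    }

    public String getStatus(String reviewId) {
        Session s = activeSessions.get(reviewId);
        return s == null ? "UNKNOWN" : s.status();
    }

    public void shutdown() {
        executor.shutdown();
        try { executor.awaitTermination(5, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```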
-
getStatus
Reads the current state from activeSessions. The caller (Claude AI) polls GetReportTool for completion.
We do not write any polling loop in the code to make GetReportTool be called, and we do not send instructions to Claude at runtime. We simply describe GetReportTool well — and Claude itself deduces from the tool description when and how to use it. IMPORTANT: on the other side, we must call this method explicitly in GetReportTool; that behaviour is implemented in GetReportTool.toolSpecification(), more precisely in GetReportTool.execute(). GetReportTool.toolSpecification() is the method that returns the McpServerFeatures.AsyncToolSpecification required when registering an MCP tool with io.modelcontextprotocol.server.McpServer. It is the mechanism that ensures this very method is called when GetReportTool is triggered by Claude AI.
Here is the chain of calls for this method:
- Developer: "Review my changes before PR"
- Claude AI: calls StartReviewTool(projectPath, scope)
- StartReviewTool: creates a ReviewScope, calls OrchestratorAgent.startReview(scope), and returns { reviewId: "abc-123" } to Claude AI without waiting
- Claude AI: waits a moment, then calls GetReportTool("abc-123")
- GetReportTool: returns { status: "RUNNING", progress: "SecurityAuditor 60%" }
- Claude AI: calls GetReportTool again a little later
- GetReportTool: returns { status: "COMPLETED", report: {...} }
- Claude AI: presents the findings to the developer in natural language. This happens because of the part of the GetReportTool description that says: "If status is COMPLETED, present the findings to the developer."
- Parameters:
reviewId -
- Returns:
one of the following states:
- RUNNING: Review is currently running — agents are executing
- AWAITING_HUMAN: Review is paused — a CRITICAL finding requires human input
- COMPLETED: Review completed successfully — report is final
- CANCELLED: Review was cancelled by the developer
- FAILED: Review failed due to an unrecoverable error
GetReportTool returns different JSON to Claude AI based on which state the review is in — running progress vs. final report vs. escalation request
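A rough sketch of how a tool could map these states to different JSON responses. The Status enum values mirror the list above; the exact JSON shapes are illustrative assumptions, not the real GetReportTool output.

```java
// Hypothetical sketch: one response shape per review state,
// mirroring the "running progress vs. final report" distinction above.
public class StatusResponseSketch {
    enum Status { RUNNING, AWAITING_HUMAN, COMPLETED, CANCELLED, FAILED }

    static String toResponse(Status status, String detail) {
        return switch (status) {
            case RUNNING        -> "{\"status\":\"RUNNING\",\"progress\":\"" + detail + "\"}";
            case AWAITING_HUMAN -> "{\"status\":\"AWAITING_HUMAN\",\"escalation\":\"" + detail + "\"}";
            case COMPLETED      -> "{\"status\":\"COMPLETED\",\"report\":\"" + detail + "\"}";
            default             -> "{\"status\":\"" + status + "\"}"; // CANCELLED / FAILED
        };
    }

    public static void main(String[] args) {
        System.out.println(toResponse(Status.RUNNING, "SecurityAuditor 60%"));
    }
}
```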
-
updateProgress
Will be called by SubAgent.execute() during file processing, to push live progress data into the ReviewSession.
CERTIFICATION NOTE — Domain 1: Agentic Architecture & Orchestration (27%): Covers Task Statement 1.7: "Manage session state, resumption, and forking". This method is the session state update mechanism — the agent pushes progress into the session while it runs, so GetReportTool can always return current state without polling the agent. State is maintained in the orchestrator, not in the agent.
Say OrchestratorAgent (this class) starts SecurityAuditorAgent in the startReview() method.
The agent then processes file by file; for each file it starts processing:
for (Path file : context.getFileList()) {
    // HERE: let's inform the orchestrator which file we are processing now:
    context.getOrchestrator().updateProgress(context.getReviewId(), file.getFileName().toString());
    // we analyze the file — we call the Claude API:
    List<Finding> findings = analyzeFile(file);
    // we are collecting results:
    allFindings.addAll(findings);
}
So when Claude AI calls GetReportTool — the data is already in the ReviewSession. GetReportTool only reads the current state.
- Specified by:
updateProgress in interface ProgressReporter
- Parameters:
reviewId -
currentFile -
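The orchestrator side of this mechanism might look roughly like the sketch below: updateProgress only writes into the session; no agent is ever polled. The Session class and the register()/currentFile() helpers are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the orchestrator side of updateProgress():
// state lives in the orchestrator's session map, not in the agent.
public class ProgressSketch {
    static final class Session {
        volatile String currentFile = ""; // volatile: written by agent thread, read by tool thread
    }

    private final Map<String, Session> activeSessions = new ConcurrentHashMap<>();

    void register(String reviewId) {
        activeSessions.put(reviewId, new Session());
    }

    public void updateProgress(String reviewId, String currentFile) {
        Session s = activeSessions.get(reviewId);
        if (s != null) s.currentFile = currentFile; // push, so readers never poll the agent
    }

    String currentFile(String reviewId) {
        return activeSessions.get(reviewId).currentFile;
    }
}
```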
-
escalate
Description copied from interface: ProgressReporter
If an agent finds a critical finding while processing a file, it notifies the orchestrator of the escalation and stops its work before continuing with the next file. The agent notifies the orchestrator by calling this method (the orchestrator implements this interface), and, in fact, the orchestrator should stop the agent by using the CountDownLatch mechanism.
By calling this method, the agent sends the reviewId as well as information about its finding for the file it is processing (Severity - CRITICAL, INFO, WARNING - then lineNumber, ...). Knowing the reviewId and the Finding, this method then passes the Finding on to the mechanism that decides what to do next - let's call that mechanism the Escalation Handler.
This method will be called by SubAgent#execute(). Since the implementation of this method typically uses java.util.concurrent.CountDownLatch await(), SubAgent#execute() will wait (blocked) and will remain in this state until another thread - in our case, OrchestratorAgent#resumeAfterEscalation() - calls countDown().
- Specified by:
escalate in interface ProgressReporter
- Parameters:
reviewId -
finding -
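A minimal sketch of the block/resume pair built on CountDownLatch, as described above. The gates map, the isAwaiting() helper, and the String-typed finding and decision are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch: escalate() parks the agent thread on a latch;
// resumeAfterEscalation() releases it from another thread.
public class EscalationSketch {
    private final Map<String, CountDownLatch> gates = new ConcurrentHashMap<>();

    public void escalate(String reviewId, String finding) throws InterruptedException {
        CountDownLatch gate = new CountDownLatch(1);
        gates.put(reviewId, gate);
        // ... here the real orchestrator would mark the session AWAITING_HUMAN
        // and hand the finding to the Escalation Handler ...
        gate.await(); // the agent thread blocks here until countDown() is called
    }

    public void resumeAfterEscalation(String reviewId, String decision) {
        CountDownLatch gate = gates.remove(reviewId);
        if (gate != null) gate.countDown(); // unblocks the waiting agent thread
    }

    public boolean isAwaiting(String reviewId) {
        return gates.containsKey(reviewId); // lets callers see whether a review is paused
    }
}
```

In production code, await() would usually take a timeout so a forgotten escalation cannot block an agent thread forever.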
-
resumeAfterEscalation
Resumes a paused review after the developer has responded to an escalation. The blocked agent thread can then continue working (it was previously blocked by the escalate() call made from SubAgent#execute()).
Called by: RespondToEscalationTool when the developer makes a decision in Claude Desktop about a CRITICAL finding.
- Parameters:
reviewId - the paused session to resume
decision - ACCEPT_FIX / REJECT_FINDING / OVERRIDE_CONTINUE. Created and passed here by the RespondToEscalationTool
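The decision values listed above can be modeled as an enum. The parse() helper below is a hypothetical illustration of how RespondToEscalationTool might normalize Claude's input before calling this method; it is not the real tool code.

```java
import java.util.Locale;

// Hypothetical sketch: the three decision values as an enum,
// plus a tolerant parser for the raw string from the tool call.
public class HumanDecisionSketch {
    enum HumanDecision { ACCEPT_FIX, REJECT_FINDING, OVERRIDE_CONTINUE }

    static HumanDecision parse(String raw) {
        // Locale.ROOT avoids locale-dependent case mapping (e.g. Turkish dotless i)
        return HumanDecision.valueOf(raw.trim().toUpperCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println(parse("accept_fix")); // prints ACCEPT_FIX
    }
}
```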
-