AI Use Cases

The MCP server gives an AI agent four tools: pull the workspace, validate it against the schema, audit it against 70+ rules, and push a change as an Architecture Change Request. On their own each tool is simple; the value is in the combinations. This page collects the combinations that earn their keep.

All patterns below assume the MCP server is already connected (see MCP Server Setup).

Turn existing documentation — a wiki page, a Confluence space, a README, a spreadsheet — into a first draft of the landscape.

“Here is our architecture wiki (attached). Create a complete Albumi workspace — applications, integrations, data objects, and business capabilities. Infer owning organizations and business criticality from the text where possible. Run validate, then audit. Push as an ACR named ‘Initial landscape import — {date}’.”

What the agent does:

  1. Parses the source document.
  2. Generates entity records with valid identifiers, referential integrity, and required fields set.
  3. Runs validate — fixes any schema or reference errors without asking.
  4. Runs audit — produces a summary of 70+ checks.
  5. Opens an ACR. You review the diff, adjust, approve, implement.
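
The five steps above amount to a validate-fix-audit-push cycle. A minimal sketch of that loop, with the four MCP tools faked by local stubs — the tool names mirror the server's tools, but the Python client object and every method signature here are illustrative stand-ins, not the real SDK:

```python
class StubClient:
    """Local stand-in for the four MCP tools so the loop can be exercised."""
    def __init__(self):
        self.calls = []

    def validate(self, ws):
        self.calls.append("validate")
        return ws.get("errors", [])           # schema / reference errors

    def fix(self, ws, errors):
        self.calls.append("fix")
        return {**ws, "errors": []}           # pretend the agent repaired them

    def audit(self, ws):
        self.calls.append("audit")
        return [("warning", "application missing lifecycle dates")]

    def push_acr(self, ws, name):
        self.calls.append("push_acr")
        return f"ACR opened: {name}"


def run_import(workspace, client, acr_name):
    """Validate (fixing errors as they surface), audit, then push as an ACR."""
    for _ in range(3):                        # bounded retries, never loop forever
        errors = client.validate(workspace)
        if not errors:
            break
        workspace = client.fix(workspace, errors)
    findings = client.audit(workspace)        # advisory; does not block the push
    return client.push_acr(workspace, name=acr_name), findings
```

The bounded retry is the point: validation failures are fixed and re-checked, but audit findings are only reported — they land in the ACR for a human to judge.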

What makes a good source document:

  • Structured: clear headings per system, explicit mentions of data flows between them.
  • Named entities: real system names the business uses — “Salesforce”, “the billing engine”, “our PIM” — not generic roles.
  • Current-state: describes what is, not what is planned. Put planned systems in an Initiative after the import.

What makes a bad source document:

  • Marketing material with no system-level detail.
  • A mix of “what we have” and “what we’d like to have” with no separator — the agent will conflate them.
  • Screenshots of diagrams without accompanying text. Most agents will not OCR architecture diagrams reliably.

After an architecture meeting, turn the minutes into a proposed change.

“Read the attached meeting notes. Identify any landscape changes discussed — new applications introduced, old ones retired, new integrations, changed ownership. For each, create the corresponding edit. Push as an ACR named ‘Architecture review — {meeting date}’. Do not change anything I did not mention in the notes.”

The last sentence matters. Without it, agents sometimes add inferred changes that were not discussed. Be explicit about scope.

Before a governance meeting, run a structured audit so the board opens the session with a punch list.

“Pull the workspace. Run a full audit. Group findings by severity (critical, warning, info). For each critical finding, suggest a concrete fix. Format the result as a markdown report I can paste into the meeting notes.”

Useful follow-ups in the same conversation:

“Now filter to only findings that affect applications classified as Mission Critical or Business Critical.”

“For every finding about missing lifecycle dates, draft the fix as an ACR I can review before the meeting.”
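
The grouping the first prompt asks for is simple enough to show directly. A sketch that renders findings as the markdown punch list, critical first — the `(severity, message)` tuple shape and the three severity names are assumptions about the audit output, not the real payload:

```python
SEVERITY_ORDER = ["critical", "warning", "info"]

def punch_list(findings):
    """Render (severity, message) pairs as a markdown report, critical first."""
    lines = []
    for severity in SEVERITY_ORDER:
        group = [msg for sev, msg in findings if sev == severity]
        if not group:
            continue
        lines.append(f"## {severity.title()} ({len(group)})")
        lines.extend(f"- {msg}" for msg in group)
    return "\n".join(lines)
```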

During an incident, reconstruct what the affected application touches.

“Application X is down. Pull the workspace. List every integration that has X as source or target, every data object X operates on, every capability X realizes, and every other application that depends on X through shared IT Components. Format as an oncall briefing — two paragraphs, then a bulleted list.”

Because Albumi tracks the logical landscape and not runtime telemetry, the agent’s answer is what should be connected according to the model — not what is actually up right now. Treat it as a map; pair it with your observability stack for live state.
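
The query behind the briefing is a set of relationship lookups. A sketch over a toy in-memory workspace — the field names (`source`, `target`, `operates_on`, `realizes`, `it_components`) are illustrative, not the actual Albumi schema:

```python
def blast_radius(workspace, app):
    """Everything the model says touches `app`: integrations, data objects,
    capabilities, and applications sharing an IT Component with it."""
    integrations = [i["id"] for i in workspace["integrations"]
                    if app in (i["source"], i["target"])]
    data_objects = workspace["operates_on"].get(app, [])
    capabilities = workspace["realizes"].get(app, [])
    shared = {other
              for comp, users in workspace["it_components"].items()
              if app in users
              for other in users if other != app}
    return {"integrations": integrations, "data_objects": data_objects,
            "capabilities": capabilities, "shared_component_apps": sorted(shared)}
```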

Scope an initiative before committing to it.

“We are considering retiring Application X in the next fiscal year. Pull the workspace. List: (1) every integration where X is source or target, (2) every data object X is the sole Create-operation source for, (3) every application that depends on an IT Component only X uses, (4) every capability that X is the only realizer of. For each, flag the severity of the dependency. Draft an Initiative named ‘Retire X’ with the affected applications in scope and their Impact Types (Remove for X, Modify for dependents that need rewiring). Push as an ACR.”

This is the closest thing the product has to a dedicated impact-analysis tool: it is a prompt pattern, not a feature button. The agent reads the relationships you already modeled and assembles the briefing. If the relationships are incomplete, the briefing is incomplete — fix the model first.
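
Two of the four checks in the retirement prompt reduce to "is X the only entry in this relationship?". A sketch of those sole-dependency checks against the same kind of toy model — the relationship names are assumptions:

```python
def sole_dependencies(workspace, app):
    """Data objects only `app` Creates, and capabilities only `app` realizes."""
    creates = workspace["creates"]        # data object -> apps with a Create operation
    realizers = workspace["realizers"]    # capability  -> apps realizing it
    sole_create = [d for d, apps in creates.items() if apps == [app]]
    sole_realize = [c for c, apps in realizers.items() if apps == [app]]
    return sole_create, sole_realize
```

Anything these lists return is a hard blocker for retirement: some other application has to take over the Create operation or the capability before X can go.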

Once a landscape has been maintained for a while, it drifts from the source documents that seeded it. Periodic reconciliation keeps both honest.

“Compare the current workspace to the attached architecture document. Report: (1) systems in the document that do not exist in the workspace, (2) systems in the workspace not mentioned in the document, (3) integrations in one but not the other. Do not make any changes — this is a review report, not a sync operation.”

Run this before a major review cycle. The document is often the source of truth for business stakeholders; the workspace is the source of truth for architects. Drift between them is information.
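
The three-part report is plain set arithmetic. The agent does this by reading; the sets in this sketch just make the comparison explicit (the tuple encoding of integrations is an assumption for illustration):

```python
def drift_report(doc_systems, ws_systems, doc_integrations, ws_integrations):
    """Names in the document but not the workspace, and vice versa."""
    return {
        "missing_from_workspace": sorted(doc_systems - ws_systems),
        "missing_from_document": sorted(ws_systems - doc_systems),
        "integration_drift": sorted(doc_integrations ^ ws_integrations),  # in one, not both
    }
```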

Short prompts that keep the workspace clean:

“Audit the workspace. Fix every data-quality warning automatically and push as an ACR. Leave critical and info findings for me to review.”

“Pull the workspace. Find every application where TIME classification disagrees with the implied TIME from its functional and technical fit. List them.”

“For every application marked End-of-Life, list any active integrations still pointing at it. These are the dependencies the retirement process missed.”
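
The third hygiene prompt as a query sketch, mapping each End-of-Life application to the active integrations still touching it — field names are illustrative, not the actual schema:

```python
def missed_retirements(apps, integrations):
    """Map each EOL application to the active integrations still touching it."""
    eol = {a["name"] for a in apps if a["lifecycle"] == "End-of-Life"}
    missed = {}
    for i in integrations:
        if not i.get("active", True):
            continue                          # retired integrations are fine
        for endpoint in (i["source"], i["target"]):
            if endpoint in eol:
                missed.setdefault(endpoint, []).append(i["id"])
    return missed
```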

  • Always ask for an ACR, not a direct change. Even if the agent could technically push a direct edit, it should not. Review is the mechanism that catches hallucinations. The MCP server enforces the ACR path, but a clear “push as an ACR” in the prompt removes ambiguity about what you expect.
  • Attach the source document verbatim. Summarizing a wiki page in your own words before handing it to the agent is a lossy step — the agent will infer less accurately from your summary than from the original. Attach the raw text and let the agent read it.
  • Ask the agent to run validate before push. A broken workspace export rejected at push is a wasted round-trip. Validate first; push second.
  • Review the ACR diff carefully. Agents hallucinate identifiers, miscount operations, or merge two distinct integrations into one. The ACR diff view exists to catch this; use it.
  • Bound the scope in the prompt. “Only entities mentioned in the attached document”, “only applications in the Finance organization”, “do not modify anything I did not ask for.” Agents default to helpfulness, which at the landscape level means additions you did not request.

A short list of what the agent does not know and cannot do, to save confusion later.

  • Runtime data. No uptime, no error rates, no telemetry. Albumi models the logical landscape; the agent only sees that.
  • Vendor knowledge beyond what is in the prompt. The agent knows “Salesforce is a CRM” from its training, but it does not know your specific Salesforce configuration, your custom objects, or your internal Salesforce naming conventions. Provide those in the prompt.
  • Implementation. An Admin user implements an approved ACR. The agent can propose, validate, audit, and push; it cannot merge. That is the separation of duties — see Change Requests (ACR).