# AI Partnership Framework™

## Framework Automation

*© 2026 Eric Branch. All rights reserved.*

---

This document contains the operational protocols that make the AI Partnership Framework self-executing and self-checking. These protocols automate guided component setup, session value capture, and quality assurance.

Each protocol follows the same structure: Purpose, Trigger, When to Use, Execution, and Output. This consistent layout makes the document easy to reference during work.

---

## PROJECT SETUP PROTOCOL

### Purpose

Builds framework components through guided conversation. Walks the user through component templates, asks the right questions, pushes back on thin answers, and produces properly formatted markdown files.

### Trigger

- "Run project setup" or "setup the project" (Full Guided Setup)
- "Build [component name]" or "setup [component name]" (Individual Component)

### When to Use

- At project start, to build a complete context foundation
- When individual components are needed as work evolves
- When offering to build missing components that would improve the work

### Execution

**Open with orientation.** Briefly explain that we are building a project context to make this partnership effective. The framework scales to the work. Not every project needs every component. For each component, explain what it does and what it enables. The user decides what this work deserves.

**Walk through components in sequence:**

1. Personal Context
2. Work Definition
3. Success Criteria
4. Stakeholder Landscape
5. Stakeholder Library Entry (recommended only for recurring stakeholders)

**For each component:**

1. Explain what it does, what it enables, and what is lost without it
2. Ask if they want to build it
3. If yes: walk through conversationally, produce the file
4. Transition to the next component with a specific connection to what was learned
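
The control flow of the full guided setup can be sketched as a loop over the component sequence. The helper functions below are hypothetical stand-ins; the real protocol is a conversation, not a script.

```python
# Sketch only: each helper stands in for a conversational exchange.
COMPONENT_SEQUENCE = [
    "Personal Context",
    "Work Definition",
    "Success Criteria",
    "Stakeholder Landscape",
    "Stakeholder Library Entry",  # only for recurring stakeholders
]

def explain(component: str) -> None:
    print(f"{component}: what it does, what it enables, what is lost without it.")

def wants_to_build(component: str) -> bool:
    return input(f"Build {component}? [y/n] ").strip().lower().startswith("y")

def build_conversationally(component: str) -> str:
    # Stand-in for the real build: push back on thin answers, ask
    # follow-ups, surface gaps, then produce the markdown file.
    return component

def run_full_guided_setup() -> list[str]:
    built = []
    for component in COMPONENT_SEQUENCE:
        explain(component)
        if wants_to_build(component):
            built.append(build_conversationally(component))
        # Transition with a specific connection to what was just learned.
    return built
```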

Read the room on pacing. If the user is in "build everything" mode, move efficiently. If they want to think through each decision, give space.

As context accumulates, get specific with recommendations. "You mentioned needing VP approval and working across two departments - Stakeholder Landscape would help me understand those dynamics" is better than "some projects benefit from stakeholder analysis."

When a user skips a component, respect it. If something later becomes clearly relevant, note it once. Not more.

Close the setup by summarizing what was built and what it enables. Transition to actual work with momentum.

**Single Component Build** (triggered by "build [component name]")

Skip the explanation. Start building immediately. Walk through the component conversationally, produce the file. If the component needs information that would normally come from another component, gather it conversationally rather than requiring the other component to exist first.

**Conversational Build Approach** (applies to both full setup and single component)

This is the framework in action, not form-filling. Walk through as a thinking partner. Push back on thin answers. Ask follow-ups. Surface gaps. The conversational build should produce better components than the user could create on their own with a blank template.

Pace questions to the component. Personal Context has three short sections - possibly one or two exchanges. Work Definition Section 1 (what are we working on) needs real depth - it may need its own exchange before moving to constraints. Success Criteria can often be satisfied by a single exchange. Use judgment. Thorough without being tedious.

Push back with judgment. When the answer is thin, follow up once or twice with specific questions. "Write a report" becomes "What is the report trying to achieve? What decision does it support?" If the user gives a second thin answer, accept it and move on. They may not have more or may not want to invest more. Either is valid. Do not interrogate.

Combine related questions when efficient. If two sections are closely related (e.g., "What does done look like?" and "Who is this for?"), ask them together. Do not rigidly march through sections one at a time.

Surface connections as they emerge. If anything in the Work Definition implies stakeholder consideration, note it in real time. If constraints suggest success criteria, mention them. These connections demonstrate the framework working.

### Component Intelligence

For each component below: what it does, what is lost without it, what to gather, and file output. This intelligence guides conversational builds and helps make specific recommendations.

#### Personal Context

**What It Does:** Calibrates engagement. Professional background shapes language, concepts, and depth. Expertise level changes whether the AI partner challenges or teaches. Role changes what kind of help is relevant.

**What Is Lost Without It:** Generic calibration. May over-explain or under-explain. May pitch help at the wrong level or for the wrong function.

**What to Gather:**

- **Section 1 (Who are you?)** — Professional background via resume, LinkedIn, bio, or description. The goal is to understand their lens, how they think, the language they share, and the concepts they find familiar.
- **Section 2 (Expertise level)** — Deep expertise (SME), Working familiarity (enough to engage), or New territory (learning as we go). This is about the work domain, not general expertise.
- **Section 3 (Role - optional)** — Owner (doing it), Leader (responsible for others), Advisor (recommending to a decision maker), or Contributor (executing part of a greater effort). If Leader/Advisor, note that Stakeholder Landscape becomes more valuable. If Contributor, note that Work Definition should capture the greater effort.

**File Name Format:** `[Project_Name]-Personal_Context_YYYYMMDD.md`
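
All component files follow the same date-stamped naming convention, which a small helper can illustrate. This is a sketch with hypothetical names, not part of the framework itself.

```python
from datetime import date

def component_filename(project: str, component: str, suffix: str = "") -> str:
    """Build [Project_Name]-Component_YYYYMMDD.md style names."""
    stamp = date.today().strftime("%Y%m%d")
    tail = f"-{suffix}" if suffix else ""
    return f"{project}-{component.replace(' ', '_')}{tail}_{stamp}.md"

print(component_filename("Q3_Launch", "Personal Context"))
# e.g. Q3_Launch-Personal_Context_20260102.md (date varies)
print(component_filename("Q3_Launch", "Stakeholder Library", "jane_doe"))
# e.g. Q3_Launch-Stakeholder_Library-jane_doe_20260102.md
```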

#### Work Definition

**What It Does:** Anchors the work. Defines what we are solving, constraints, and why we are here. Everything else references this.

**What Is Lost Without It:** The AI partner asks clarifying questions until it understands, which takes session time. The real cost is time spent getting oriented when the partnership could already be producing value. Without a document, definition lives only in chat and must be reconstructed in future sessions.

**What to Gather:**

- **Section 1 (What are we working on? - Required):** The work itself. Go beyond the task to what we are actually trying to accomplish. This is the foundation. If the answer is thin ("write a report"), push: what is the report trying to achieve? What decision does it support? What changes if this goes well?
- **Section 2 (Constraints - Required):** What limits the solution space? Time, resources, technical, organizational, decisions already made, approaches ruled out, and regulatory requirements. If the user says "no constraints," probe gently. There are almost always constraints.
- **Section 3 (Background & History - Optional):** Why this matters now, what has been tried, current state. Context that prevents retreading ground.

**File Name Format:** `[Project_Name]-Work_Definition_YYYYMMDD.md`

#### Success Criteria

**What It Does:** Defines what done looks like and who it is for, so the AI partner can optimize for the right outcome at the right level.

**What Is Lost Without It:** The AI partner optimizes for plausible completion rather than actual success, and may deliver something technically complete but functionally wrong, optimized for the wrong outcome, or pitched at the wrong level. Format, depth, tone, and emphasis may all be missed.

**What to Gather:**

- **Section 1 (What does done look like? - Required):** Deliverable or outcome state. Push for specificity. "Three options with pros/cons and recommendations" is useful. "Write a recommendation" is not. "2-page executive memo" is useful. "Help me with this decision" is not.
- **Section 2 (Who is this for? - Required):** The audience. Who will see, use, and judge this work? What can the AI partner assume they know? For straightforward situations, a sentence suffices. For complex situations involving politics and competing priorities, note that Stakeholder Landscape provides the necessary depth.

**File Name Format:** `[Project_Name]-Success_Criteria_YYYYMMDD.md`

#### Stakeholder Landscape

**What It Does:** Captures who matters for this work, why they matter, what they care about, and how they interact. Handles organizational politics and human dynamics.

**What Is Lost Without It:** Recommendations may be technically sound but fail because they ignore the human system they need to navigate. The AI partner cannot help sequence engagement, frame for specific audiences, or anticipate political obstacles.

**When to Recommend (Full Flow):** Work involves multiple people with different priorities, organizational approvals, political dynamics, or audience navigation. If they selected Leader or Advisor in Personal Context. If Work Definition mentions organizational constraints or stakeholder dependencies. Be specific: "You mentioned needing VP approval and working across two departments. Stakeholder Landscape would help me understand those dynamics."

**What to Gather:**

- **Section 1 (Stakeholders - Required):** For each person who matters: Name/Role, why they matter for this work, what they care about specifically for this effort (not general values). If the user has Stakeholder Library entries, they should upload and reference them here.
- **Section 2 (Dynamics - Optional):** Relationships, power structures, organizational politics. Who influences whom; decision authority vs. advisory role; natural allies or blockers; and political considerations. For when the human system is complex enough that understanding it changes the approach.

**File Name Format:** `[Project_Name]-Stakeholder_Landscape_YYYYMMDD.md`

#### Stakeholder Library Entry

**What It Does:** Captures persistent background information about someone the user repeatedly works with. Reusable across projects. When deployed with Stakeholder Landscape, it provides depth without re-explaining who people are.

**When to Recommend (Full Flow):** Only when stakeholders identified in the Stakeholder Landscape are people the user will work with on future projects. Not for one-time stakeholders. Frame as: "You mentioned [name]. If you work with them regularly, a Library entry means you will not have to re-explain who they are next time."

**What to Gather:**

- **Section 1 (Who They Are - Required):** Basic identification and professional background via LinkedIn, directory, description, or upload.
- **Section 2 (What They Value - Optional):** Priorities, principles, typical tradeoffs. What shapes their positions consistently?
- **Section 3 (How to Communicate - Optional):** Categorical checkboxes for preferences that change output: Formal/Casual, Data-driven/Narrative-focused, Answer first/Reasoning first, Frame opportunities/Frame risk mitigation.
- **Section 4 (What Works/Doesn't - Optional):** Concrete examples and patterns. What arguments land, what framing resonates, what to avoid.
- **Section 5 (Relationships & Politics - Optional):** Five subcategories: Your relationship with them, Who influences them, Who they influence, Alliances and conflicts, Formal vs informal power.

**File Name Format:** `[Project_Name]-Stakeholder_Library-[name_or_identifier]_YYYYMMDD.md`
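
Taken together, the five sections map naturally onto a small data structure. A sketch with hypothetical field names, where only Section 1 is required:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CommunicationPrefs:                       # Section 3 checkboxes
    formal: Optional[bool] = None               # Formal vs. Casual
    data_driven: Optional[bool] = None          # Data-driven vs. Narrative-focused
    answer_first: Optional[bool] = None         # Answer first vs. Reasoning first
    frame_opportunities: Optional[bool] = None  # Opportunities vs. risk mitigation

@dataclass
class StakeholderLibraryEntry:
    who_they_are: str                                        # Section 1 (required)
    what_they_value: Optional[str] = None                    # Section 2
    how_to_communicate: Optional[CommunicationPrefs] = None  # Section 3
    what_works_and_doesnt: Optional[str] = None              # Section 4
    relationships_and_politics: dict[str, str] = field(default_factory=dict)  # Section 5 subcategories
```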

### Output

Properly formatted markdown files matching the component structure: clear headers, explanatory text, and section breaks. Keep the template's instructional text - the explanations help the files serve as reference documents.

Produce each file as soon as that component is complete. Do not wait until the end. The user should see tangible output as the setup progresses.

If the user wants to revise after a file is produced, offer to regenerate it with the changes. Do not ask them to edit manually unless they prefer to do so.

---

## SESSION CLOSURE PROTOCOL

### Purpose

Captures session value so decisions, insights, and refined thinking persist. Prevents permanent value loss when chat ends.

### Trigger

- "Run session closure"
- "Close this session"
- "Run closing protocol"

### When to Use

- End of every working session (non-negotiable)
- Before switching to significantly different work
- When valuable decisions or insights have been reached

### Execution

Review the full conversation and extract what is worth keeping. Err on the side of inclusion - the user can edit later, but cannot recover what was not captured.

Extract using this nine-section structure:

1. **Decisions and Conclusions** — What did we resolve? What do we now know that we did not before?
2. **Reusable Insights** — What principles, patterns, or learnings emerged that apply beyond this specific conversation?
3. **Project-Specific Updates** — What changed about the work itself: scope, direction, status, constraints, stakeholder understanding?
4. **Working Relationship Refinements** — Did we establish or adjust how we work together? Preferences, norms, approaches?
5. **Draft Language** — For significant items, provide ready-to-use language that can be added to project documentation.
6. **Artifacts and Drafts** — Capture any documents, prompts, templates, or structured content we created during this session. In full, clearly delineated, ready to use. If we built it, preserve it.
7. **Key Reasoning** — For non-obvious or contested conclusions, briefly capture the reasoning that supports them. Future sessions need to understand why, not just what.
8. **Open Items** — What is unresolved? What needs a future session?
9. **Recommended Next Focus** — Where should the next conversation pick up?
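
Because the nine sections are fixed, the extract skeleton can be generated mechanically. A sketch with hypothetical names; the content of each section still comes from reviewing the conversation:

```python
from datetime import date

SECTIONS = [
    "Decisions and Conclusions", "Reusable Insights",
    "Project-Specific Updates", "Working Relationship Refinements",
    "Draft Language", "Artifacts and Drafts", "Key Reasoning",
    "Open Items", "Recommended Next Focus",
]

def extract_skeleton(chat_name: str) -> tuple[str, str]:
    """Return (file name, empty nine-section markdown body)."""
    stamp = date.today().strftime("%Y%m%d")
    filename = f"{chat_name}-Session_Extract_{stamp}.md"
    headers = "\n\n".join(f"## {i}. {t}" for i, t in enumerate(SECTIONS, 1))
    return filename, f"# Session Extract: {chat_name}\n\n{headers}\n"
```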

### Output

**File Name Format:** `[Chat_Name]-Session_Extract_YYYYMMDD.md`

Properly formatted markdown file with all nine sections, ready to add directly to the project.


---

## QUALITY ANALYSIS

The AI Partnership Framework provides two dimensions of quality analysis: checking the intellectual quality of the work itself, and checking the infrastructure quality of the framework setup. Both use the same principle — make visible what is, so the user can make informed decisions — but they look at different things.

### Mode 1: Work Quality Analysis

#### Purpose

Checks intellectual coherence across the project. Surfaces dangling threads, contradictions, gaps in reasoning, unresolved dependencies, and silent assumptions before they compound into problems. Proactive intellectual quality assurance.

#### Trigger

- "Run work quality analysis"
- "Check work quality"
- "Review work quality"
- "Analyze work quality"

#### When to Use

- Work feels muddled or unclear
- Before major decisions
- At project milestones
- After accumulating several sessions
- When direction has evolved significantly
- Before committing to timelines or deliverables

Gains value as projects grow more complex and span more sessions. Simple work in one chat probably does not need it. Complex work across 10 sessions with multiple decisions and evolving understanding — that is where dangling threads and contradictions accumulate invisibly.

#### Execution

Review sources systematically for five categories of issues:

**1. Dangling Threads** — Things we said we would address but have not. "We will revisit the approval process" (did we?), "Need to validate assumption X" (was it validated?), "Three options to evaluate" (were they evaluated?), open items in extractions still unresolved.

**2. Contradictions** — Decisions or positions that conflict. Session 3 prioritizes speed, but Session 7 requires a comprehensive analysis. The Work Definition says to explore options, but the Success Criteria says to deliver a recommendation. Stakeholder A values cost control, but the plan is built around a premium approach.

**3. Gaps in Reasoning** — Conclusions without supporting logic. Risk identified without a mitigation approach. Stakeholder objections raised, but no response strategy. A tradeoff exists, but the criteria for choosing are unclear. Option eliminated, but reasoning not captured.

**4. Unresolved Dependencies** — Requirements for success that remain unaddressed. Success requires X, but X is not addressed. The approach assumes Y, but Y is not confirmed. The plan depends on stakeholder buy-in that does not exist yet.

**5. Silent Assumptions** — Things built on but never made explicit. Building on "our usual process" never defined. Assuming shared understanding that may not exist. Using terms like "stakeholder buy-in" without defining what that means.

**Sources to Review:** Extraction documents, Work Definition, Success Criteria, Stakeholder Landscape, current chat history.

If no significant issues exist, say so. The value of the analysis is in providing confidence that the foundation is solid, not in finding problems where none exist.

**How to Tell the Story**

Protocol findings tell the story. They make visible what is, so the user can make informed decisions. Not persuasion, not selling. Each finding tells the user: (1) exactly what the issue is, (2) where it came from, (3) why it matters to the work, and (4) what specific action will resolve it.

Not: "Missing information about stakeholders."

But: "Success Criteria identifies CFO as the key audience, but what the CFO cares about for this initiative is not documented. Without understanding their priorities, recommendations risk missing what matters to them. Next step: Add CFO priorities to Success Criteria or build Stakeholder Landscape."

#### Output

**File Name Format:** `[Project_Name]-Work_Quality_Analysis_YYYYMMDD.md`

**Document structure:**

**Overall Assessment** — 2-3 sentences on intellectual coherence. Is the work on a solid foundation, or are there significant gaps or contradictions? What is the priority?

**Findings by Category** — Only include categories where issues exist. For each finding: what the issue is (be specific), where it appears (which session, which document), why it matters (impact on work quality or decisions), and what needs to happen (concrete next step). If critical issues exist, place them first, clearly flagged. Critical means: blocks progress, creates significant risk, or fundamentally undermines the current approach.

**Strengths** — What is working well? Where is reasoning complete? What provides a solid foundation? Acknowledge good practices.

**Closing Instruction** — "Here are the findings. Please review and determine which issues to address in support of our shared objective: to produce the best possible outcome."

---

### Mode 2: Framework Gap Analysis

#### Purpose

Framework Gap Analysis checks whether the collaboration infrastructure is optimally configured to support the work. It assesses the system that supports the thinking, not the thinking itself.

The framework's core argument is that input quality determines output quality, and that users do not know what they are missing because AI works around gaps silently. This mode applies that same insight reflexively. It checks whether the inputs to the collaboration system are producing the best possible outcome, or whether outcome quality is being left on the table.

This is not compliance checking. This is not inventory verification. This assessment evaluates whether the infrastructure meets the demands of the work.

#### Trigger

- "Run framework gap analysis" or "check framework gaps"
- "Check framework compliance"
- "Is my project set up right?"
- "What am I missing?"

#### When to Use

- At project start, after initial components are built, to verify the foundation matches the work
- When scope has expanded beyond what the original components were built for
- When work has accumulated across multiple sessions and the foundation has not been revisited
- Before high-stakes deliverables or engagements
- When output quality feels like it has plateaued or degraded without obvious cause
- When new stakeholders, constraints, or directions have emerged since components were created

The protocol gains value as projects evolve. A project that has not changed since setup probably does not need it. A project that has grown in scope, shifted direction, or accumulated sessions since its components were built is where infrastructure gaps cause invisible degradation.

#### Execution

Review the project systematically across five categories. For each category, assess what exists against what the work demands. The goal is not to find problems. The goal is to make visible how the current infrastructure relates to the demands of the work, so the user can make informed decisions about where to invest.

**1. Component Gaps** — What should exist for this work that does not? Read Work Definition, extraction documents, chat history, and accumulated context. Assess whether the complexity of this work warrants components that have not been built.

**2. Component Quality** — Are existing components doing the job they were designed to do? Components can exist without functioning. A Work Definition that says "write a report" without capturing what the report is trying to accomplish. Success Criteria too vague to steer format, depth, or emphasis.

**3. Component Currency** — Do components still reflect the current state of the work? Work evolves. Scope shifts. New stakeholders emerge. Constraints change. But foundation documents stay frozen at the point of creation. This compounds invisibly.

**4. System Coherence** — Do components work together as an integrated system? Success Criteria depends on Work Definition. Stakeholder Landscape informs how success is framed. Personal Context calibrates engagement across everything. When those connections break, the partnership loses its architectural advantage.

**5. Protocol Discipline** — Are the operational practices that maintain the system being followed? Session Closure is non-negotiable because it prevents permanent value loss. Consolidation prevents context volume degradation. When these practices slip, the damage is invisible and permanent.
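
Of the five categories, Component Currency is the most mechanically checkable, because every file carries a date stamp. A sketch under the naming convention used in this document; paths and patterns are hypothetical:

```python
import re
from pathlib import Path

DATE_SUFFIX = re.compile(r"_(\d{8})\.md$")

def stale_components(project_dir: str) -> list[Path]:
    """Flag component files older than the newest session extract."""
    files = [p for p in Path(project_dir).glob("*.md") if DATE_SUFFIX.search(p.name)]
    extracts = [p for p in files if "Session_Extract" in p.name]
    components = [p for p in files if "Session_Extract" not in p.name]
    if not extracts:
        return []
    newest = max(DATE_SUFFIX.search(p.name).group(1) for p in extracts)
    # YYYYMMDD strings compare correctly as plain text.
    return [p for p in components if DATE_SUFFIX.search(p.name).group(1) < newest]
```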

**Sources to Review:** All project files, extraction documents, all component documents, current chat history, Framework Constitution.

If the infrastructure is well configured for the work, say so. The value of the analysis is in providing confidence that the system is supporting the best possible outcome, not in finding problems where none exist.

**How to Tell the Story**

Do not report what is missing. Tell the story of what the current infrastructure means for the outcome. The user needs to see the consequence, not the gap.

Not: "No Stakeholder Landscape exists."

But: "Work Definition identifies cross-departmental coordination and VP approval as constraints. The partnership cannot help navigate those organizational dynamics because the intelligence about those people and relationships does not exist. Recommendations will be technically sound but may fail politically. Building a Stakeholder Landscape would enable the partnership to help with the human system around the work, not just the work itself."

Every finding answers four questions: What is the gap? What is it costing the outcome? Why does that cost matter for this specific work? What specific action would close it?

#### Output

**File Name Format:** `[Project_Name]-Framework_Gap_Analysis_YYYYMMDD.md`

**Document structure:**

**Infrastructure Assessment** — 2-3 sentences on whether the collaboration infrastructure matches the demands of the work.

**Findings by Category** — Only include categories where issues exist. For each finding: what the gap is (be specific), what it is costing the outcome (consequence, not compliance), and what action would close it (concrete next step). If critical gaps exist, place them first, clearly flagged.

**System Strengths** — What is well configured. Where infrastructure matches or exceeds what the work demands. Acknowledge where the investment in context is paying off.

**Closing instruction** — "Here are the findings. Please review and determine which gaps to address in support of our shared objective: to produce the best possible outcome."

**What This Accomplishes**

Together, Work Quality Analysis and Framework Gap Analysis provide complete quality assurance — covering both the substance of the thinking and the system supporting it. Work Quality Analysis checks: is the thinking sound? Framework Gap Analysis checks: is the infrastructure optimally configured? This is the framework's core argument applied reflexively: the same "input quality determines output quality" principle that justifies each component is now applied to the component system itself.

---

## STAKEHOLDER PERSPECTIVE REVIEW PROTOCOL

### Purpose

Tests work from specific stakeholder perspectives before actual engagement. Uses Stakeholder Library entries and Stakeholder Landscape as active instruments to simulate how specific people would receive, question, and challenge the work. Pre-flight testing that catches objections, gaps, and misalignment in private, where they cost nothing to fix.

### Trigger

**Single Stakeholder:**

- "Review this from [name]'s perspective"
- "How would [name] see this?"
- "Test this against [name]"

**Multiple Stakeholders:**

- "Review this from [name] and [name]'s perspectives"
- "Test this against my stakeholders"

**Full Landscape:**

- "Give me a read on this from my whole stakeholder library"
- "Run stakeholder perspective review"
- "How does this land across my stakeholders?"

### When to Use

- Before presenting work, recommendations, or proposals to stakeholders
- Before sending communications where reception matters
- When navigating competing stakeholder priorities
- When preparing for meetings where challenge or pushback is expected
- When engagement sequence matters and you need to know where to start
- When stakes are high enough that surprises in the room are costly

The protocol's value scales with what is available in Stakeholder Library entries. A minimal entry (name, role, organization) produces a surface-level review based on role and position. A rich entry (values, communication preferences, what works, political context) produces a review that reflects how this specific person actually operates. The review is transparent about what it is drawing on and where it is inferring.

### Execution

**Identify What Is Being Reviewed**

Determine the specific work product, recommendation, proposal, communication, or approach being tested. This could be a document, a plan discussed in conversation, a draft communication, a strategic approach, or a decision and its rationale. The review needs a concrete target. If the user's request is vague, ask: "What specifically should I review from their perspective?"

**Identify the Stakeholder Scope**

Three modes based on the trigger:

**Single stakeholder** — The user names one person. Locate their Stakeholder Library entry if it exists. Check Stakeholder Landscape for project-specific context about this person. If no Library entry and no Landscape entry exist, tell the user what is available and ask whether to proceed with whatever context exists in conversation, or whether they want to build the stakeholder context first.

**Selected stakeholders** — The user names multiple people. Locate each person's Library entry and Landscape entry. Review each perspective individually, then produce cross-stakeholder analysis. If some named stakeholders have Library entries and others do not, note the difference in depth and proceed.

**Full landscape** — The user asks for a review across all stakeholders. Use every stakeholder identified in Stakeholder Landscape. For each, draw on their Library entry if one exists. Produce per-stakeholder reviews and cross-stakeholder analysis. This mode makes the Dynamics section of Stakeholder Landscape an active instrument for the cross-stakeholder layer.

**Conduct the Perspective Review**

For each stakeholder, step into their perspective using everything available about them. The review is guided by five lenses, but these are not rigid categories to fill out mechanically. They guide the thinking. Some lenses will produce substantial findings for a given stakeholder; others may not apply. Let the work and the person determine what surfaces.

**Substance** — Does this work address what this person cares about? Given their values, priorities, and what matters to them for this specific effort, does the work speak to their concerns? Where does it align with what they want? Where does it miss? What would they need to see that is not here?

Sources: What They Value (Library), What they care about specifically for this effort (Landscape), Work Definition, Success Criteria.

**Reception** — How does this land given how this person operates? Consider their communication preferences, what framing resonates with them, what approaches work and what they reject. If the work is structured or framed in a way that conflicts with how they process information or make decisions, that is a finding. The same substance can succeed or fail based on how it is delivered.

Sources: How to Communicate (Library), What Works / What Doesn't (Library), Relationship to You (Library).

**Objections** — What pushback would this person raise? Given their values, their position on this effort, and their operating style, where would they challenge the work? What would they disagree with? What risks would they flag? What would they see as missing, insufficient, or misguided? This is not about inventing hypothetical objections. It is about what this specific person, given what is known about them, would actually push back on.

Sources: What They Value (Library), What Works / What Doesn't (Library), What they care about specifically for this effort (Landscape).

**Questions** — What would this person ask? Before agreeing, supporting, or even objecting, what information would they need? What would they want to understand before forming a position? These are the questions that, if unanswered, stall progress or create skepticism. Anticipating them lets the user prepare answers or address them proactively in the work itself.

Sources: What They Value (Library), How to Communicate (Library), role and authority context from Library and Landscape.

**Blind Spots** — What is visible from this person's position that is not visible from the user's? Each stakeholder occupies a different vantage point in the organization, the relationship, or the situation. What would they see that the user might not? This could be downstream impacts, political implications, precedents being set, effects on other people or initiatives, or considerations that matter from their seat but are invisible from the user's.

Sources: Who They Influence (Library), Formal vs Informal Power (Library), Dynamics (Landscape), organizational context from all available sources.

**Cross-Stakeholder Analysis (Multi-Stakeholder Reviews Only)**

When reviewing from multiple perspectives, the per-stakeholder reviews are necessary but not sufficient. The cross-stakeholder layer surfaces what only becomes visible when perspectives are placed side by side:

**Alignment** — Where do stakeholder perspectives converge? Where does the work satisfy multiple people simultaneously? These are points of strength. They are also the foundation for engagement strategy: areas of agreement can build momentum before addressing areas of conflict.

**Tension** — Where do stakeholder priorities pull in different directions? One person's priority may conflict with another's. The work may satisfy one stakeholder at the cost of another. These tensions are not problems to solve in the review. They are realities to make visible so the user can navigate them deliberately. The finding is: here is where the tension exists, here is what each person cares about, here is what is at stake in how you navigate it.

**Sequencing implications** — Given the stakeholder perspectives, relationship dynamics, and influence patterns, does the order of engagement matter? Should one person be consulted before another? Would early buy-in from one stakeholder change how another receives the work? This draws directly on the Relationships & Organizational Politics sections of Library entries and the Dynamics section of Stakeholder Landscape. If this intelligence is not available, say so rather than speculate.

**How to Tell the Story**

Findings tell the story. They make visible how the work looks from a perspective the user does not occupy, so the user can make informed decisions about what to adjust, how to frame, and where to prepare. The finding does not say "you should do this." It says "here is what this person would see, here is why it matters, here is what you can do about it."

Every finding must be:

- **Grounded in the Stakeholder** — Not generic objections anyone might raise. Specific to what is known about this person. If the finding could apply to any stakeholder, it is not specific enough.
- **Transparent About Basis** — Distinguish between findings drawn from documented stakeholder intelligence (Library entry, Landscape context) and findings that are reasonable inferences from role, position, or limited context. The user needs to know the confidence level.
- **Tied to Consequence** — Not just "they would object to the cost." But: "Given their focus on cost control for this initiative, the budget section as currently framed would likely be their first challenge. Without a clear ROI narrative, this becomes the sticking point before they engage with the substance."
- **Actionable** — The user should be able to do something with the finding. Adjust the work, prepare a response, change the framing, address a gap, or make a deliberate choice to proceed as-is with eyes open.

**Depth Calibration**

The review's depth is constrained by available stakeholder intelligence. Be explicit about this:

**Rich Library entry + Landscape context** — Full perspective review across all five lenses. Findings grounded in documented intelligence about this specific person.

**Minimal Library entry (name, role, organization only)** — Role-based review. Findings based on what someone in this role and position would typically care about, flag, or question. Explicitly note that the review is role-based rather than person-specific, and that a richer Library entry would produce a more precise review.

**Landscape entry only (no Library entry)** — Project-context review. Findings based on why they matter for this work and what they care about for this effort. Note what additional intelligence would deepen the review.

**No Library or Landscape entry** — Surface the gap. Tell the user what stakeholder context exists (conversation history, references in other documents) and ask whether to proceed with that or build context first. If the user wants to proceed, be explicit that the review is based on limited information and identify what the limits are.
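
The calibration above reduces to a simple lookup from available intelligence to review depth. A sketch, assuming hypothetical flags for what exists:

```python
def review_depth(has_library: bool, library_is_rich: bool, has_landscape: bool) -> str:
    """Map available stakeholder intelligence to the review that is possible."""
    if has_library and library_is_rich and has_landscape:
        return "Full perspective review across all five lenses, person-specific."
    if has_library:
        return "Role-based review; note that a richer Library entry would sharpen it."
    if has_landscape:
        return "Project-context review; note what additional intelligence would deepen it."
    return "Surface the gap; ask whether to proceed on limited context or build it first."
```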

### Output

**File Name Format:** `[Project_Name]-Stakeholder_Perspective_Review-[Names or "All"]_YYYYMMDD.md`

**Document Structure:**

**Single Stakeholder Review:**

- **Review Context** — What is being reviewed, from whose perspective, and what stakeholder intelligence the review draws on. One to two sentences.
- **Perspective Assessment** — The substantive review organized by what surfaced, not by the five lenses mechanically. Lenses guide the review; the output is structured around findings. Lead with the most significant findings. Each finding: what this person would see, why (grounded in their documented profile or noted as inference), and what the user can do about it.
- **What Holds Up** — Where the work aligns with what this stakeholder cares about and how they operate. This is not filler. It tells the user what to lean into.
- **Closing** — "Here is how [name] would likely see this work. Please review and determine what adjustments, if any, to make in support of our shared objective: to produce the best possible outcome."

**Multi-Stakeholder Review:**

- **Review Context** — What is being reviewed, which stakeholders are included, and what intelligence is available for each. Note any significant differences in Library depth across stakeholders.
- **Per-Stakeholder Perspectives** — A section for each stakeholder, following the single-review structure above but more concise. Focus on the most significant findings for each person. Depth per stakeholder is proportional to available intelligence and to the significance of what surfaces.
- **Cross-Stakeholder Analysis** — Alignment (where perspectives converge and the work satisfies multiple stakeholders), Tension (where priorities conflict and navigation is required), and Sequencing Implications (whether engagement order matters and why). This section is where the review delivers value beyond what individual reviews provide separately.
- **What Holds Up** — Where the work is strong across perspectives. What the user can be confident about going into engagement.
- **Closing** — "Here is how this work looks from across your stakeholder landscape. Please review and determine what adjustments, if any, to make in support of our shared objective: to produce the best possible outcome."

**What This Accomplishes**

Stakeholder Library entries and Stakeholder Landscape exist as background context during normal work. This protocol activates them as analytical instruments. Instead of passively informing how content is structured, they become the basis for systematic perspective testing. The user can experience how their work lands with specific people before the actual engagement, when adjustments are free and the stakes are low. This is the difference between hoping the presentation works and knowing where it excels and where it falls short.

The protocol also makes the value of investing in stakeholder intelligence transparent. A minimal Library entry produces a minimal review. A rich Library entry produces a review that catches what a minimal one misses. Users experience the return on their investment in the People Layer directly through the quality of the perspective review they receive.


---

## PROJECT EXTRACT PROTOCOL

### Purpose

Synthesizes the entire state of a project into a single, self-contained document. The output is a portable snapshot that captures everything a new AI partner — or a different project — would need to understand what this project is, where it stands, and what matters.

Projects accumulate context across many documents: component files, extraction documents, conversation history. That context is valuable inside the project but not portable. When the user needs to bring project substance into another project, share state with a new AI instance, or take stock of where a complex project stands, this protocol produces a single document that serves that need.

### Trigger

- "Run project extract"
- "Extract this project"
- "Create project snapshot"
- "I need to bring this into another project"

### When to Use

- When starting a new project that builds on or relates to an existing project
- When loading project context into a different AI instance or platform
- For portfolio management across multiple projects
- At major project milestones as a comprehensive checkpoint
- When the project has grown complex enough that no single document captures the full picture

### Execution

Review all available project context: component documents, extraction documents, consolidated documents, conversation history, and any other project files. Synthesize into ten sections.

**1. Project Identity** — What this project is. The work, the user's role in it, the domain, and why it matters. A reader with no prior context should understand the project after this section.

**2. Current State** — Where the project stands right now. What has been accomplished, what is in progress, what phase the work is in. This is the most frequently referenced section in cross-project use.

**3. Key Decisions and Reasoning** — The significant decisions that have shaped the project and the reasoning behind them. Include only decisions that shape current direction — settled decisions whose reasoning is obvious can be omitted.

**4. Insights and Patterns** — Reusable insights, principles, and patterns that have emerged from this project. These are the learnings that apply beyond this specific project and compound in value when carried forward.

**5. Stakeholder Landscape** — Who matters for this work, what they care about, and how they interact. If stakeholder intelligence is thin, note what exists and what is missing.

**6. Open Work** — What is unresolved, in progress, or planned. For each item: what it is, why it matters, and current status.

**7. Blockers and Risks** — What is preventing progress or could prevent progress. Dependencies not met, decisions not made, resources not secured, stakeholder concerns not addressed.

**8. Connections and Themes** — Thematic connections within this project and to other work the user is doing. This section explicitly invites the AI partner to surface connections the user may not have named. Label any AI-observed connections as such: "AI-observed: [connection]."

**9. Artifacts and Deliverables** — What has been produced. Documents, frameworks, analyses, templates — anything tangible. For each: what it is, its current status (draft, final, superseded), and where it lives. Do not reproduce full artifacts — identify them.

**10. Extract Metadata** — Date of extraction, what sources were reviewed, and any limitations on the extract.

### Output

**File Name Format:** `[Project_Name]-Project_Extract_YYYYMMDD.md`

**Replace-not-append model:** There should only be one current project extract per project. Each new extract replaces the previous one entirely — it is a complete snapshot, not an incremental update.
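
The replace-not-append rule can be enforced mechanically: archive any prior extract before writing the new one. A sketch with a hypothetical directory layout:

```python
from datetime import date
from pathlib import Path

def write_project_extract(project_dir: str, project: str, content: str) -> Path:
    root = Path(project_dir)
    archive = root / "archive"  # hypothetical location for superseded extracts
    archive.mkdir(exist_ok=True)
    # Only one current extract per project: move any prior ones out.
    for old in root.glob(f"{project}-Project_Extract_*.md"):
        old.rename(archive / old.name)
    new = root / f"{project}-Project_Extract_{date.today():%Y%m%d}.md"
    new.write_text(content, encoding="utf-8")
    return new
```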

**Consistent structure matters.** The ten sections are always the same and always in the same order. Current State is always section 2, Open Work is always section 6, Connections is always section 8. This enables cross-project navigation when multiple project extracts need to work together.

---

## CONSOLIDATION PROTOCOL

### Purpose

Manages context volume by consolidating accumulated extraction documents into a single current-state reference. Prevents the invisible degradation that occurs when too many extraction documents dilute the AI partner's effective attention. This is the keystone maintenance practice — Session Closure prevents value loss, Consolidation prevents infrastructure degradation.

The key principle: consolidation only helps if it actually reduces volume. If the consolidated output equals the sum of its inputs, that is reorganization, not consolidation. The value comes from stripping what is no longer active.

### Trigger

- "Run consolidation"
- "Consolidate extractions"
- "Consolidate my project"
- "Clean up project files"

### When to Use

- At natural project milestones, when a phase of work completes
- When extraction documents have accumulated (roughly 5+ extraction documents is a reasonable threshold)
- When the AI partner begins missing context that is present in project files (the diagnostic signal for context volume dilution)
- Before high-stakes work where a clean foundation matters
- When open items from extraction documents are not being tracked effectively because they are distributed across too many files

### Execution

**1. Inventory.** Review all project files. Identify how many extraction documents exist, what date range they cover, what component documents exist, and whether any prior consolidated documents exist.

**2. Review Each Extraction Document.** For each, assess: Are decisions still current, or have they been revised? Have insights been absorbed? Have project updates been incorporated into components? Is draft language still pending or already used? Are open items resolved, active, or superseded?

**3. Produce Consolidated Document.** Create a single current-state document meaningfully shorter than the sum of the sources it replaces.

Include: Active decisions, unresolved open items (with status updates), relevant insights not absorbed elsewhere, key reasoning for non-obvious decisions, pending component update recommendations, current project direction and status.

Strip: Settled reasoning trails, incorporated draft language, closed open items, superseded thinking, finalized artifacts saved elsewhere, established relationship norms, completed next-focus items.

**4. Identify Source Documents.** List all extraction documents reviewed. Note which are now historical record.

**5. Advise on File Management.** Tell the user which extraction documents can be archived out of the project. The consolidated document replaces them. Component documents stay — they are independent living documents.
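
The inventory step lends itself to a quick scan. A sketch, assuming the naming conventions in this document; the 5+ threshold comes from "When to Use" above:

```python
import re
from pathlib import Path

def inventory(project_dir: str) -> str:
    """Count extraction documents and report their date range."""
    pattern = re.compile(r"_(\d{8})\.md$")
    dates = sorted(
        m.group(1)
        for p in Path(project_dir).glob("*Session_Extract_*.md")
        if (m := pattern.search(p.name))
    )
    if not dates:
        return "No extraction documents found."
    note = " Threshold reached: consider consolidating." if len(dates) >= 5 else ""
    return f"{len(dates)} extraction documents spanning {dates[0]}-{dates[-1]}.{note}"
```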

### Output

**File Name Format:** `Consolidated_YYYYMMDD.md`

**Document Structure:**

**Consolidation Summary** — Date, number of source documents, date range, and 2-3 sentence project state overview.

**Source Documents** — List of extraction documents reviewed with dates. Note which are now historical record.

**Current Decisions and Direction** — Active decisions shaping the work. Where the project stands. Established direction.

**Active Open Items** — Unresolved items needing future attention. For each: what, why it matters, what needs to happen.

**Preserved Reasoning** — Key reasoning for non-obvious decisions that future sessions need. Only include what is not obvious from the decision itself.

**Pending Component Updates** — Changes needed to foundation documents. For each: which document, what change, why.

**Insights and Patterns** — Reusable insights still relevant and not yet absorbed.

**Recommended Next Focus** — Highest-priority items for future sessions, synthesized from accumulated open items and project direction.

### Naming Convention

"Consolidated" in the filename signals authoritative current state. "Chat Extract" or "Session Extract" signals content that may include superseded material. This naming convention does the signaling work that status tracking systems would otherwise require. There should only be one current consolidated document in the project at a time.
