# Incumbent / Prior Performer Finder
## User Input
- **Target record:** [Solicitation number, notice ID, contract number, IDV or vehicle identifier, GovTribe link, or title plus agency]
## Goal
Use GovTribe MCP tools to determine whether a target federal contract opportunity, award, IDV, or vehicle has a confirmed incumbent and, if no direct incumbent can be confirmed, identify the most defensible prior performers on closely related work.
Use a two-tier evidence model:
1. Direct linkage first
2. Tightly constrained prior-performer fallback only if direct linkage is absent or incomplete
Focus on grounded incumbency and prior-performance evidence, not broad competitor intelligence, capture speculation, or loose market similarity.
## Required Documentation
Before doing any work, call the **GovTribe Documentation** tool and read the documentation required for this workflow.
Required documentation to retrieve and read (a sample call is sketched after this list):
- `article_name="Search_Query_Guide"`
- `article_name="Search_Mode_Guide"`
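A minimal sketch of one required `Documentation` call, using the `article_name` convention above. The `tool`/`arguments` envelope is illustrative only; issue the call in whatever MCP request format the host application actually uses:

```json
{
  "tool": "Documentation",
  "arguments": { "article_name": "Search_Query_Guide" }
}
```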
Retrieve these additional documentation articles only when the workflow needs them:
- `article_name="Search_Federal_Contract_Opportunities_Tool"` for opportunity-first resolution or meta-opportunity trace
- `article_name="Search_Federal_Contract_Awards_Tool"` for award-first resolution, direct-link evidence, or direct-link aggregation passes
- `article_name="Search_Federal_Contract_IDVs_Tool"` for IDV-first resolution or lineage checks
- `article_name="Search_Federal_Contract_Vehicles_Tool"` for vehicle-first resolution or lineage checks
- `article_name="Search_Vendors_Tool"` for performer identity normalization
- `article_name="Aggregation_and_Leaderboard_Guide"` only when a direct-link aggregation pass materially improves interpretation of multiple linked performers
- `article_name="Date_Filtering_Guide"` only if date filters are used
- `article_name="Vector_Store_Content_Retrieval_Guide"` only if file snippets are insufficient and the user explicitly needs deeper document content
Documentation rules:
- Call the **GovTribe Documentation** tool before the first research or search step.
- Read every required documentation article before using other GovTribe tools.
- Follow the documented tool contracts exactly.
- Treat the documentation as binding for tool names, `search_mode`, `query`, `fields_to_return`, relationship fields, `similar_filter` behavior, aggregation options, sort keys, and valid output assumptions.
## Required Input
The user must provide a specific target record before analysis begins.
Accept any of the following:
- Solicitation number or notice ID
- GovTribe link
- Opportunity title plus agency
- Contract number or PIID
- IDV or vehicle identifier
- Plain-language description only if it is specific enough to resolve a single target record
Optional constraints the user may provide:
- Whether to return only confirmed incumbents
- Whether to include prior performers when no incumbent is directly linked
- Whether to include vehicle or IDV lineage
- Whether to return a short **Incumbency Card** or a fuller analysis
Input rules:
- If the input resolves cleanly to one target, proceed immediately.
- If the input is too vague to resolve a single record, ask for the minimum missing detail needed to proceed.
- Do not guess the target.
- Do not start substantive analysis until the target is resolved.
## Workflow
### Rules
- Call `Documentation` before using any other GovTribe MCP tool.
- Federal contracts only; do not mix in grants or state/local workflows.
- Use GovTribe MCP tools as the primary source of evidence in this workflow.
- Always set both `search_mode` and `query` on every `Search_*` call.
- Prefer exact resolution over semantic retrieval.
- Use a semantic fallback only after direct linkage fails or is incomplete.
- Use `query: ""` for filter-defined or ID-defined cohorts.
- Use `per_page: 0`, `query: ""`, and `search_mode: "keyword"` only for direct-link aggregation cohorts when multiple linked performers need concentration context.
- Use `fields_to_return` whenever you need more than `govtribe_id`.
- Do not stop early when another tool call is required by the workflow.
- Keep calling tools until the task is complete or the tool budget is reached.
- If a tool returns empty or partial results and the workflow defines another defensible strategy, continue with that next strategy.
- Use only the evidence labels defined in this prompt.
- Do not present fallback similarity as confirmed incumbency.
- Do not infer customer preference, shaping behavior, teaming posture, or competitor favorability in this workflow.
- Keep the workflow compact; add an extra tool call only when it materially improves correctness or grounding.
### Steps
1. Before doing any research, call `Documentation` and read every required article listed above.
- Add optional documentation articles only when their branch becomes necessary.
- Use the documentation results to confirm valid tool names, `search_mode`, `query`, `fields_to_return`, relationship fields, `similar_filter` behavior, aggregation options, and sort keys before searching.
2. Resolve the target record exactly and branch early based on what the user supplied.
- Opportunity-first path: `Search_Federal_Contract_Opportunities`
- Award-first path: `Search_Federal_Contract_Awards`
- IDV-first path: `Search_Federal_Contract_IDVs`
- Vehicle-first path: `Search_Federal_Contract_Vehicles`
- Use `search_mode: "keyword"` plus quoted exact lookup for identifiers, titles, and links.
- If the user provides a GovTribe link, use the record identity embedded in the link when possible; otherwise resolve it through exact quoted lookup.
- Use `fields_to_return` explicitly (a resolution sketch follows the field lists below).
- Opportunity path fields:
- `govtribe_id`, `govtribe_url`, `govtribe_type`, `solicitation_number`, `name`, `opportunity_type`, `posted_date`, `due_date`, `set_aside_type`, `descriptions`, `govtribe_ai_summary`, `federal_meta_opportunity_id`, `federal_contract_vehicle`, `federal_agency`, `place_of_performance`, `naics_category`, `psc_category`, `federal_contract_awards`, `federal_contract_idvs`, `points_of_contact`
- Award path fields:
- `govtribe_id`, `govtribe_url`, `govtribe_type`, `name`, `contract_number`, `award_date`, `completion_date`, `ultimate_completion_date`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `dollars_obligated`, `ceiling_value`, `set_aside_type`, `awardee`, `parent_of_awardee`, `federal_contract_idv`, `federal_contract_vehicle`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `place_of_performance`, `originating_federal_contract_opportunity`
- IDV path fields:
- `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `last_date_to_order`, `contract_type`, `description`, `govtribe_ai_summary`, `ceiling_value`, `set_aside`, `multiple_or_single_award`, `awardee`, `parent_of_awardee`, `federal_contract_vehicle`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `place_of_performance`, `task_orders`, `blanket_purchase_agreements`, `originating_federal_contract_opportunity`
- Vehicle path fields:
- `govtribe_id`, `govtribe_url`, `name`, `award_date`, `last_date_to_order`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `set_aside_type`, `shared_ceiling`, `originating_federal_contract_opportunity`, `federal_agency`, `federal_contract_awards`
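As referenced in the `fields_to_return` rule above, a sketch of an opportunity-first resolution call. The solicitation number is a placeholder, and the field list is a representative subset of the opportunity-path fields listed above; confirm exact parameter shapes against the retrieved documentation:

```json
{
  "tool": "Search_Federal_Contract_Opportunities",
  "arguments": {
    "search_mode": "keyword",
    "query": "\"EXAMPLE-25-R-0001\"",
    "fields_to_return": [
      "govtribe_id", "govtribe_url", "solicitation_number", "name",
      "federal_meta_opportunity_id", "federal_agency",
      "federal_contract_awards", "federal_contract_idvs"
    ]
  }
}
```

The quoted query string forces exact identifier matching rather than loose keyword scoring, which is why this step prefers exact resolution over semantic retrieval.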
3. If multiple records match, disambiguate with the minimum additional evidence needed.
- Compare agency, record type, date, contract type, set-aside, NAICS, PSC, and vehicle context.
- Do not merge multiple possible matches into one narrative.
- If the record still cannot be resolved to a single target, stop and ask for one clarifying detail.
4. Run Tier 1 direct incumbent or direct prior-performer discovery before any similarity search.
- Use direct dataset linkage first; sketches of the direct-link query and the aggregation verification pass follow this step.
- If an opportunity has `federal_meta_opportunity_id`, query:
- `Search_Federal_Contract_Awards` with `query: ""`, `search_mode: "keyword"`, and `federal_meta_opportunity_ids`
- `Search_Federal_Contract_IDVs` with the same meta-opportunity linkage when relevant
- `Search_Federal_Contract_Vehicles` with the same meta-opportunity linkage when relevant
- If the target is an award, use its `originating_federal_contract_opportunity` and parent IDV or vehicle fields to trace the thread before any fallback search.
- If the target is an IDV or vehicle, use its linked opportunity or award relationships before any fallback search.
- For direct-link searches, request explicit row fields needed to interpret performer ownership and thread context.
- Label direct evidence using only `Confirmed Incumbent` or `Confirmed Prior Performer`.
- Use `Confirmed Incumbent` only when the evidence supports current or thread-direct incumbent ownership.
- Use `Confirmed Prior Performer` when the linkage is direct but clearly historical rather than current.
- If multiple directly linked award rows are returned and performer concentration matters, read `article_name="Aggregation_and_Leaderboard_Guide"` and add a narrow aggregation-only verification pass:
- Use `Search_Federal_Contract_Awards`
- Use `query: ""`
- Use `search_mode: "keyword"`
- Use `per_page: 0`
- Reuse the same `federal_meta_opportunity_ids` or other direct-link award filters
- Use `aggregations` such as `dollars_obligated_stats`, `top_awardees_by_dollars_obligated`, and `top_contracting_federal_agencies_by_dollars_obligated`
- Use that aggregation branch only to distinguish a single dominant direct performer from a more distributed direct-history thread. Do not use it to justify fallback similarity claims.
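As noted above, sketches of the Tier 1 direct-link query and the optional aggregation verification pass. The meta-opportunity ID is a placeholder, and the exact `aggregations` syntax is an assumption to confirm in the Aggregation_and_Leaderboard_Guide:

```json
{
  "tool": "Search_Federal_Contract_Awards",
  "arguments": {
    "search_mode": "keyword",
    "query": "",
    "federal_meta_opportunity_ids": ["<meta-opportunity-id>"],
    "fields_to_return": [
      "govtribe_id", "contract_number", "award_date", "completion_date",
      "dollars_obligated", "awardee", "parent_of_awardee",
      "federal_contract_idv", "federal_contract_vehicle"
    ]
  }
}
```

If several linked rows return and performer concentration matters, the aggregation-only pass reuses the same filter with `per_page: 0` so it returns summary statistics rather than rows:

```json
{
  "tool": "Search_Federal_Contract_Awards",
  "arguments": {
    "search_mode": "keyword",
    "query": "",
    "per_page": 0,
    "federal_meta_opportunity_ids": ["<meta-opportunity-id>"],
    "aggregations": [
      "dollars_obligated_stats",
      "top_awardees_by_dollars_obligated",
      "top_contracting_federal_agencies_by_dollars_obligated"
    ]
  }
}
```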
5. Normalize performer identity only when it materially affects interpretation.
- If performer identity, parent-child structure, or legal-entity naming matters, read `article_name="Search_Vendors_Tool"` if it has not already been read, then use `Search_Vendors` (a sample call follows this step).
- Prefer `vendor_ids` from relationships when available.
- Otherwise use an exact quoted company name.
- Request:
- `govtribe_id`, `govtribe_url`, `name`, `uei`, `dba`, `parent_or_child`, `parent`, `business_types`, `sba_certifications`, `govtribe_ai_summary`
- Use vendor normalization to clarify entity ownership, not to replace linkage evidence.
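A sample `Search_Vendors` normalization call, as referenced above. Prefer `vendor_ids` taken from relationship fields when available; the exact quoted company name below is a hypothetical fallback:

```json
{
  "tool": "Search_Vendors",
  "arguments": {
    "search_mode": "keyword",
    "query": "\"Example Vendor LLC\"",
    "fields_to_return": [
      "govtribe_id", "govtribe_url", "name", "uei", "dba",
      "parent_or_child", "parent", "business_types",
      "sba_certifications", "govtribe_ai_summary"
    ]
  }
}
```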
6. Run Tier 2 constrained prior-performer fallback only if Tier 1 does not confirm an incumbent.
- Use the resolved target as the seed.
- Prefer `similar_filter` on the resolved opportunity or award only when direct linkage is missing (see the fallback sketch after this step).
- Keep strict structural filters in place when available:
- Same agency
- Same NAICS or PSC
- Same or adjacent vehicle or IDV
- Same set-aside context
- Realistic recent award window
- Sort by `_score` in this fallback tier unless a date sort materially sharpens the answer; do not use `_score` sorting outside the fallback.
- If date filters are needed, read `article_name="Date_Filtering_Guide"` first.
- Do not widen into open-ended likely-bidders logic.
- Label fallback evidence using only `Likely Prior Performer` or `Adjacent Only`.
- Use `Likely Prior Performer` only for strong constrained matches.
- Use `Adjacent Only` for weak or partially relevant matches that should not drive capture decisions.
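A hedged sketch of a Tier 2 fallback call, as referenced above. Everything below the tool name is an assumption to verify against the retrieved guides: the `similar_filter` shape, the structural filter names (`contracting_federal_agency_ids`, `naics_category_ids`), the semantic `search_mode` value, and the `sort` key:

```json
{
  "tool": "Search_Federal_Contract_Awards",
  "arguments": {
    "search_mode": "semantic",
    "query": "",
    "similar_filter": "<resolved-target-govtribe-id>",
    "contracting_federal_agency_ids": ["<same-agency-id>"],
    "naics_category_ids": ["<same-naics-id>"],
    "sort": "_score",
    "fields_to_return": [
      "govtribe_id", "name", "contract_number", "award_date",
      "completion_date", "awardee", "contracting_federal_agency",
      "naics_category", "set_aside_type"
    ]
  }
}
```

Results from this call can only ever earn the `Likely Prior Performer` or `Adjacent Only` labels, regardless of score.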
7. Exclude weak or misleading evidence explicitly.
- Exclude awards that are only keyword-adjacent.
- Exclude same-agency work with the wrong scope.
- Exclude same-vehicle work with the wrong capability lane.
- Exclude entity-mismatched vendor results.
- Exclude semantic matches that are too thin to support incumbency or prior-performer claims.
8. Perform a verification pass before final ranking.
- Remove weak fallback matches.
- Re-check whether the top conclusion still holds.
- If a direct-link aggregation pass was used, confirm the final evidence labels still match the direct cohort shape after weak rows are removed.
- If the answer depends only on fallback evidence, lower confidence explicitly.
- If no defensible direct or fallback performer remains, return `No Evidence`.
9. Escalate file-content retrieval only when it is clearly necessary.
- If file snippets are insufficient and the user explicitly needs deeper document grounding, read `article_name="Vector_Store_Content_Retrieval_Guide"` first, then use `Add_To_Vector_Store`, then `Search_Vector_Store`.
- The typical direct-link path should stay small and evidence-heavy.
- If the question becomes primarily about the underlying notice or the awarded contract rather than incumbency, recommend `Opportunity Deep Dive` or `Award Deep Dive` instead of overloading this workflow.
## Tool Budget
Design the workflow to stay compact.
Typical path:
- 4 to 5 documentation calls
- 1 target-resolution call
- 1 to 3 linked-record calls
- 0 to 1 vendor-normalization call
Expected total:
- Typical: 7 to 10 calls
- High end with fallback: 11 to 14 calls
Avoid exceeding 15 calls unless an extra call materially changes correctness.
## Output Format
Return in this order:
1. **Resolved Target Summary**
- Briefly explain how the target was resolved
2. **Direct Incumbent / Direct Prior Performer Evidence**
- Present this section as a compact markdown table first
- Recommended columns: `Performer`, `Evidence Label`, `Linked Record`, `Why` (an illustrative table appears after this list)
3. **Prior Performer Fallback**
- Include fallback results only if Tier 1 did not confirm an incumbent
- Present fallback results as a compact markdown table first
- Recommended columns: `Performer`, `Evidence Label`, `Closest Record`, `Why`
4. **Vehicle / IDV / Opportunity Lineage**
- Explain the record thread or lineage only when it was actually retrieved
5. **Why Others Were Excluded**
- Briefly note weak, adjacent, or entity-mismatched candidates that were rejected
6. **Risks, Gaps, or Unknowns**
- Briefly note missing linkage, ambiguity, sparse history, or other data limits
7. **Overall Confidence**
- State overall confidence and why
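An illustrative shape for the direct-evidence table in section 2; every value below is a placeholder, not a real finding:

| Performer | Evidence Label | Linked Record | Why |
| --- | --- | --- | --- |
| Example Vendor LLC | Confirmed Incumbent | Award `<placeholder contract number>` | Directly linked through the opportunity's `federal_meta_opportunity_id` thread |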
### Optional chart
Use Mermaid only for direct-link concentration context when it materially improves interpretation.
Do not use charts in fallback-only answers.
## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.
## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
