# Past Performance Match
## User Input
- **Target company:** [Company name, UEI, CAGE, GovTribe link, or company description]
- **Target requirement:** [Solicitation number, notice ID, GovTribe link, opportunity title plus agency, uploaded SOW/PWS/RFI, or work description]
## Goal
Use GovTribe MCP tools to determine **how well a target company’s past performance aligns to a specific federal contract or grant opportunity, solicitation, requirement, or statement of work**.
Focus on actual awards and documented requirement evidence, not broad capability claims or keyword overlap. A complete answer should identify the strongest defensible past performance references, map them to the requirement, and explain the most important gaps or risks.
## Required Documentation
Before doing any work, call the **GovTribe Documentation** tool and read the documentation required for this workflow.
Required documentation to retrieve and read:
- `article_name="Search_Query_Guide"`
- `article_name="Search_Mode_Guide"`
- `article_name="Search_Vendors_Tool"`
Retrieve these additional documentation articles only when the workflow needs them:
- `article_name="Date_Filtering_Guide"` for award-date, completion-date, due-date, or recency filters
- `article_name="Location_Filtering_Guide"` for place-of-performance or geography filters
- `article_name="Aggregation_and_Leaderboard_Guide"` for lane-shape analysis, recency comparisons, or verification passes that use aggregations
- `article_name="Search_Federal_Contract_Opportunities_Tool"` for contract-opportunity resolution
- `article_name="Search_Federal_Contract_Awards_Tool"` for contract-award matching or aggregation passes
- `article_name="Search_Federal_Grant_Opportunities_Tool"` for grant-opportunity resolution
- `article_name="Search_Federal_Grant_Awards_Tool"` for grant-award matching or aggregation passes
- `article_name="Search_Federal_Grant_Programs_Tool"` for CFDA, ALN, or program-centric grant targets
- `article_name="Search_Federal_Agencies_Tool"` for agency ID resolution
- `article_name="Search_Naics_Categories_Tool"` for NAICS ID resolution
- `article_name="Search_Psc_Categories_Tool"` for PSC ID resolution
- `article_name="Search_User_Files_Tool"` for uploaded requirement documents
- `article_name="Search_Government_Files_Tool"` for opportunity attachments or metadata snippets
- `article_name="Vector_Store_Content_Retrieval_Guide"` only if snippets are insufficient and deeper file content is necessary
- `article_name="Search_Federal_Contract_Sub_Awards_Tool"` only if the user explicitly wants adjacent contract evidence beyond prime awards
- `article_name="Search_Federal_Grant_Sub_Awards_Tool"` only if the user explicitly wants adjacent grant evidence beyond prime awards
Documentation rules:
- Call the **GovTribe Documentation** tool before the first research or search step.
- Read every required documentation article before using other GovTribe tools.
- Add optional documentation articles only when their branch becomes necessary.
- Treat the documentation as binding for tool names, `search_mode`, `query`, `fields_to_return`, `vendor_ids`, structural bridge filters, aggregation options, `similar_filter` behavior, and valid output assumptions.
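
For orientation, here is a minimal sketch of how the required documentation reads could be expressed as tool arguments. The `article_name` values come from the list above; the surrounding call mechanics depend entirely on the host MCP client and are not defined by this guide.

```python
# Illustrative only: argument payloads for the GovTribe Documentation tool.
# The invocation mechanism (client API, call wrapper) is host-specific.
required_articles = [
    "Search_Query_Guide",
    "Search_Mode_Guide",
    "Search_Vendors_Tool",
]
documentation_calls = [{"article_name": name} for name in required_articles]
print(documentation_calls[0])  # {'article_name': 'Search_Query_Guide'}
```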
## Required Input
The user must provide both of the following before analysis begins:
1. A **target company**
2. A **target opportunity or requirement**
Accept any of the following for the target company:
- Company name
- UEI
- CAGE
- GovTribe link
- Plain-language description of the company
Accept any of the following for the target opportunity or requirement:
- Solicitation number or notice ID
- GovTribe link
- Opportunity title plus agency
- Uploaded solicitation, SOW, PWS, RFI, or requirement text
- Plain-language description of the work
Optional constraints the user may provide:
- Time window for relevant awards
- Agency or customer focus
- NAICS, PSC, or other classifications
- Contract type or vehicle focus
- Small business or set-aside context
- Whether to include only prime awards or also adjacent evidence
- Whether to identify gaps and teaming needs
- Whether to create a concise bid/no-bid style assessment
Input rules:
- If either the company or the target requirement is too vague, ask for the minimum missing detail needed to proceed.
- Do not guess the company, the requirement, or the applicable procurement lane.
- Do not start substantive analysis until both sides are resolved well enough to search.
## Workflow
### Rules
- Call `Documentation` before using any other GovTribe MCP tool.
- Use GovTribe MCP tools as the primary source of evidence in this workflow.
- Always set both `search_mode` and `query` on every `Search_*` call.
- Use `query: ""` for filter-defined or aggregation-only cohorts.
- Use `per_page: 0`, `query: ""`, and `search_mode: "keyword"` only for aggregation-only comparable cohorts.
- Use `fields_to_return` whenever you need more than `govtribe_id`.
- Do not stop early when another tool call is required by the workflow.
- Keep calling tools until the task is complete or the tool budget is reached.
- If a tool returns empty or partial results and the workflow defines another defensible strategy, continue with that next strategy.
- Resolve the target company with `Search_Vendors` and reuse `vendor_ids` in downstream award searches when possible.
- Do not widen from the exact company to parent or subsidiary scope unless user intent or retrieved evidence makes that relationship material and you label it clearly.
- Resolve the target requirement with `Search_Federal_Contract_Opportunities` or `Search_Federal_Grant_Opportunities` when possible.
- Use `Search_Federal_Grant_Programs` for CFDA, ALN, or program-centric grant targets.
- Use `Search_User_Files` for uploaded requirement documents and `Search_Government_Files` for opportunity attachments or metadata snippets.
- Use `Add_To_Vector_Store` and `Search_Vector_Store` only after the documented escalation path when snippets are insufficient.
- Use `Search_Federal_Contract_Sub_Awards` or `Search_Federal_Grant_Sub_Awards` only when the user explicitly wants adjacent evidence beyond prime awards.
- Do not stop at the first plausible answer. Check edge cases, entity-resolution issues, and false positives before final ranking.
### Steps
1. Before doing any research, call `Documentation` and read every required article listed above.
- Add optional documentation articles only when their branch becomes necessary.
- Use the documentation results to confirm valid tool names, `search_mode`, `query`, `fields_to_return`, structural bridge filters, aggregation options, `similar_filter` behavior, and valid output assumptions before searching.
2. Resolve the target company and normalize identity where possible.
- Use `Search_Vendors` when the input is a company name, GovTribe link, UEI, or other known vendor identity.
- For exact names or identifiers, use `search_mode: "keyword"` and a quoted `query`.
- Use `vendor_ids` when a GovTribe vendor ID or UEI is already known.
- Request at least `govtribe_id`, `govtribe_url`, `name`, `uei`, `dba`, `business_types`, `sba_certifications`, `parent_or_child`, `parent`, `naics_category`, and `govtribe_ai_summary`.
- Capture the resolved vendor identity for reuse later through `vendor_ids`.
- Normalize the legal name, common name or DBA, UEI when available, CAGE only if it appears in retrieved evidence, and any parent or subsidiary relationships that materially affect interpretation.
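
A minimal sketch of the company-resolution arguments described in this step. The company name is a hypothetical placeholder, and the exact argument schema is governed by the `Search_Vendors_Tool` and `Search_Query_Guide` articles.

```python
# Hypothetical Search_Vendors arguments for exact-name resolution.
# "Acme Federal Solutions LLC" is a placeholder, not a real target.
search_vendors_args = {
    "search_mode": "keyword",
    "query": '"Acme Federal Solutions LLC"',  # quoted for exact matching
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "name", "uei", "dba",
        "business_types", "sba_certifications", "parent_or_child",
        "parent", "naics_category", "govtribe_ai_summary",
    ],
}
```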
3. Resolve the target opportunity or requirement and branch early to contract, grant, or file-first handling.
- First determine whether the target is a contract opportunity, a grant opportunity, or a file-first requirement that does not resolve cleanly to a GovTribe opportunity record.
- Contract path:
- Use `Search_Federal_Contract_Opportunities`; a minimal argument sketch appears after this step.
- For exact solicitation numbers, notice IDs, or quoted titles, use `search_mode: "keyword"` and a quoted `query`.
- Request at least `govtribe_id`, `govtribe_type`, `govtribe_url`, `solicitation_number`, `name`, `descriptions`, `govtribe_ai_summary`, `federal_meta_opportunity_id`, `federal_agency`, `naics_category`, `psc_category`, `set_aside_type`, `federal_contract_vehicle`, `place_of_performance`, `posted_date`, `due_date`, `government_files`, and `points_of_contact`.
- Resolve reusable IDs before deeper search when needed:
- `Search_Federal_Agencies` -> `federal_agency_ids`
- `Search_Naics_Categories` -> `naics_category_ids`
- `Search_Psc_Categories` -> `psc_category_ids`
- Grant path:
- Use `Search_Federal_Grant_Opportunities`.
- For exact notice IDs or quoted titles, use `search_mode: "keyword"` and a quoted `query`.
- Request at least `govtribe_id`, `govtribe_type`, `govtribe_url`, `solicitation_number`, `name`, `description`, `govtribe_ai_summary`, `federal_agency`, `federal_grant_programs`, `funding_instruments`, `applicant_types`, `funding_activity_categories`, `place_of_performance`, `posted_date`, `due_date`, `government_files`, and `points_of_contact`.
- If the user gives a CFDA or ALN-style identifier or a program-centric target, resolve it with `Search_Federal_Grant_Programs` first, then reuse `federal_grant_program_ids`.
- Resolve reusable IDs before deeper search when needed:
- `Search_Federal_Agencies` -> `federal_agency_ids`
- `Search_Federal_Grant_Programs` -> `federal_grant_program_ids`
- File-first requirement path:
- If the target is primarily an uploaded requirement artifact and does not resolve cleanly to a GovTribe opportunity record, extract the requirement from Step 4 before constructing award searches.
- Extract as many of these attributes as possible:
- Opportunity title
- Agency, subagency, or buying office
- Scope of work
- Key tasks or required outcomes
- NAICS, PSC, CFDA, ALN, or other relevant classifications
- Contract type, vehicle, or instrument type
- Set-aside or eligibility constraints
- Estimated value or value band
- Place of performance
- Required clearances, certifications, facilities, or operational environment
- Timing or period of performance, if available
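
As referenced in the contract path above, here is a minimal sketch of an opportunity-resolution call. The solicitation number is a placeholder, and the field list mirrors this step's guidance rather than a confirmed schema.

```python
# Hypothetical contract-path resolution with a placeholder solicitation number.
# Substitute the user's actual identifier; confirm the schema in the tool article.
contract_opportunity_args = {
    "search_mode": "keyword",
    "query": '"W912DY-25-R-0001"',  # placeholder solicitation number, quoted
    "fields_to_return": [
        "govtribe_id", "govtribe_type", "govtribe_url", "solicitation_number",
        "name", "descriptions", "govtribe_ai_summary",
        "federal_meta_opportunity_id", "federal_agency", "naics_category",
        "psc_category", "set_aside_type", "federal_contract_vehicle",
        "place_of_performance", "posted_date", "due_date",
        "government_files", "points_of_contact",
    ],
}
```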
4. Use requirement text and file evidence explicitly, not generically.
- For uploaded solicitation, SOW, PWS, RFI, or requirement documents, use `Search_User_Files`.
- For `Search_User_Files`, request `govtribe_id`, `govtribe_ai_summary`, `govtribe_url`, `name`, `description`, `content_snippet`, and `download_url`.
- If the resolved opportunity returns one or more `government_files`, this branch is required. Call `Search_Government_Files` with `federal_contract_opportunity_ids` or `federal_grant_opportunity_ids` before final scoring, even when the opportunity `govtribe_ai_summary` already looks detailed.
- For `Search_Government_Files`, request `govtribe_id`, `govtribe_ai_summary`, `govtribe_url`, `name`, `content_snippet`, `download_url`, `posted_date`, and `parent_record`.
- If `content_snippet` is not enough, read `article_name="Vector_Store_Content_Retrieval_Guide"` first, then use `Add_To_Vector_Store`, then `Search_Vector_Store`.
- Treat file chunks as supporting requirement evidence, not as a substitute for award matching.
- Keep the target’s `descriptions` or `description`, `govtribe_ai_summary`, and file-derived requirement language for later `query` construction and alignment checks.
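
A minimal sketch of the required `Search_Government_Files` branch, assuming the opportunity resolved in Step 3 returned attachments. The bracketed ID is a placeholder for the resolved opportunity's `govtribe_id`.

```python
# Hypothetical Search_Government_Files call bridged from the resolved opportunity.
# Empty query: the bridge filter, not keywords, defines the scope.
government_files_args = {
    "search_mode": "keyword",
    "query": "",
    "federal_contract_opportunity_ids": ["<resolved-opportunity-govtribe-id>"],
    "fields_to_return": [
        "govtribe_id", "govtribe_ai_summary", "govtribe_url", "name",
        "content_snippet", "download_url", "posted_date", "parent_record",
    ],
}
```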
5. Use keyword and filter-first award searches before semantic expansion.
- There is no separate structured-search tool. Use the relevant award tool with `search_mode: "keyword"` plus structured filters first.
- If structured filters define the cohort, set `query: ""`.
- Always anchor the cohort to the resolved target company using `vendor_ids` unless the user explicitly asks for a broader comparable-market scan.
- Use `fields_to_return` explicitly because the default row payload is only `govtribe_id`.
- Contract award path with `Search_Federal_Contract_Awards`:
- If the resolved opportunity has `federal_meta_opportunity_id`, call `Search_Federal_Contract_Awards` first with `query: ""`, `search_mode: "keyword"`, `vendor_ids`, and `federal_meta_opportunity_ids`.
- Then add the strongest available `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_award_types`, `set_aside_types`, `award_date_range`, `ultimate_completion_date_range`, `dollars_obligated_range`, `ceiling_value_range`, and `place_of_performance_ids`.
- Request at least `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `completion_date`, `ultimate_completion_date`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `dollars_obligated`, `ceiling_value`, `set_aside_type`, `awardee`, `parent_of_awardee`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `federal_contract_vehicle`, and `originating_federal_contract_opportunity`.
- Grant award path with `Search_Federal_Grant_Awards`:
- Start with `query: ""`, `search_mode: "keyword"`, `vendor_ids`, and the strongest available `federal_grant_program_ids`, `funding_federal_agency_ids`, `contracting_federal_agency_ids`, `assistance_types`, `award_date_range`, `ultimate_completion_date_range`, `dollars_obligated_range`, and `place_of_performance_ids`.
- Request at least `govtribe_id`, `govtribe_url`, `name`, `award_date`, `ultimate_completion_date`, `dollars_obligated`, `assistance_type`, `description`, `govtribe_ai_summary`, `awardee`, `parent_of_awardee`, `funding_federal_agency`, `contracting_federal_agency`, `federal_grant_program`, and `place_of_performance`.
- Do not rely on keywords alone when resolved company IDs and opportunity-derived filters are available.
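
A minimal sketch of the contract-award filter-first pass, assuming the vendor and opportunity were resolved earlier. All bracketed IDs are placeholders, and the filter names follow this step's guidance rather than a confirmed schema.

```python
# Hypothetical filter-first Search_Federal_Contract_Awards pass.
# Empty query: the cohort is defined by vendor and opportunity-derived filters.
contract_awards_args = {
    "search_mode": "keyword",
    "query": "",
    "vendor_ids": ["<resolved-vendor-govtribe-id>"],
    "federal_meta_opportunity_ids": ["<resolved-meta-opportunity-id>"],
    "contracting_federal_agency_ids": ["<resolved-agency-id>"],
    "naics_category_ids": ["<resolved-naics-id>"],
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "name", "contract_number", "award_date",
        "ultimate_completion_date", "contract_type", "descriptions",
        "govtribe_ai_summary", "dollars_obligated", "ceiling_value",
        "set_aside_type", "awardee", "contracting_federal_agency",
        "naics_category", "psc_category", "federal_contract_vehicle",
    ],
}
```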
6. Broaden the search only after the keyword and filter-first pass.
- Run this branch only when the keyword and filter-first pass leaves plausible coverage gaps, vague award text, or uncertainty that semantic broadening could materially improve the answer.
- There is no separate semantic-search tool. Use the same award search tools again with `search_mode: "semantic"`.
- Build a concise plain-language `query` from the resolved requirement, mission language, key tasks, and a few domain-aware synonyms or paraphrases.
- Keep the strongest company and requirement filters in place while broadening.
- Use `_score`-based `sort` for semantic passes unless the user specifically needs date or dollar ordering instead.
- Use `similar_filter` only if the current tool supports it and you have a strong seed record with the correct `govtribe_type` and `govtribe_id`.
- Do not let semantic broadening override the core company constraint unless the user explicitly asked for adjacent market evidence.
- If the keyword and filter-first pass already yields a small, well-supported candidate set and no material uncertainty remains, you may skip this branch and say why.
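
A minimal sketch of the optional semantic pass. The query text is invented requirement language, the `sort` payload shape is an assumption to confirm in the search documentation, and the company anchor from Step 5 stays in place.

```python
# Hypothetical semantic broadening pass; keeps the vendor anchor from Step 5.
# The sort payload shape is an assumption, not a confirmed API contract.
semantic_awards_args = {
    "search_mode": "semantic",
    "query": (
        "enterprise service desk operations, incident and problem management, "
        "ITIL-aligned IT support for a defense customer"
    ),
    "vendor_ids": ["<resolved-vendor-govtribe-id>"],
    "naics_category_ids": ["<resolved-naics-id>"],
    "sort": [{"_score": "desc"}],
    "fields_to_return": [
        "govtribe_id", "name", "descriptions", "govtribe_ai_summary",
        "award_date", "dollars_obligated", "contracting_federal_agency",
    ],
}
```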
7. Use dataset-specific text evidence during comparison.
- For contract opportunities and contract awards, inspect both `descriptions` and `govtribe_ai_summary`.
- For grant opportunities and grant awards, inspect both `description` and `govtribe_ai_summary`.
- For user-uploaded and government files, use `content_snippet` first and escalate to vector-store retrieval only when snippets are not enough.
- Do not assume every dataset exposes the same description field name.
8. Compare each candidate award to the target requirement and keep only awards that are meaningfully relevant.
- Evaluate alignment using:
- Scope and task overlap
- Agency or customer overlap
- NAICS, PSC, CFDA, ALN, or classification overlap
- Contract type, vehicle, or instrument overlap
- Set-aside or eligibility overlap
- Value band similarity
- Period of performance or recency
- Clearance, facility, certification, or environment fit
- Alignment between textual extracts, especially `descriptions` or `description` plus `govtribe_ai_summary`
- File-derived requirement evidence when it materially sharpens the comparison
9. Exclude awards that are only keyword-adjacent, too generic, too small, too different in delivery model, or otherwise not truly comparable.
10. If many candidate awards remain after the comparable cohort is built, run an aggregation pass before final scoring.
- Use this step only when the cohort is broad enough that lane-shape evidence materially improves the answer.
- Do not treat this step as mandatory when the candidate set is already small, well-understood, and unlikely to benefit from lane-shape quantification before scoring.
- For aggregation-only calls:
- Use `query: ""`
- Use `search_mode: "keyword"`
- Use `per_page: 0`
- Keep the same `vendor_ids` and the same structural filters that define the comparable cohort
- Omit `fields_to_return` unless rows are also needed
- Contract path with `Search_Federal_Contract_Awards`:
- Use `aggregations` such as `dollars_obligated_stats`, `top_contracting_federal_agencies_by_dollars_obligated`, `top_naics_codes_by_dollars_obligated`, and `top_psc_codes_by_dollars_obligated`.
- Grant path with `Search_Federal_Grant_Awards`:
- Use `aggregations` such as `dollars_obligated_stats`, `top_funding_federal_agencies_by_dollars_obligated`, `top_federal_grant_programs_by_dollars_obligated`, and `top_locations_by_dollars_obligated`.
- Use this pass to quantify whether the company’s relevant history is concentrated in the same agency, category, program, location, or value lane as the target, whether the candidate set is too broad or off-pattern, and whether the fit is backed by repeated activity or just a few isolated rows.
- If recency materially affects match strength, compare a recent window and a prior window using the same filters so the trend claim is grounded.
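
A minimal sketch of the aggregation-only pass and the optional recent-versus-prior comparison. The aggregation names come from this step; the date-range payload shape is an assumption to confirm in the `Date_Filtering_Guide` article.

```python
# Hypothetical aggregation-only pass over the comparable contract-award cohort.
# per_page 0 suppresses rows; only aggregation output is needed here.
base_aggregation_args = {
    "search_mode": "keyword",
    "query": "",
    "per_page": 0,
    "vendor_ids": ["<resolved-vendor-govtribe-id>"],
    "naics_category_ids": ["<resolved-naics-id>"],
    "aggregations": [
        "dollars_obligated_stats",
        "top_contracting_federal_agencies_by_dollars_obligated",
        "top_naics_codes_by_dollars_obligated",
    ],
}

# Recent-versus-prior comparison: same filters, different award_date_range.
# The range shape below is an assumption to verify against the date guide.
recent_window = {**base_aggregation_args,
                 "award_date_range": {"from": "2022-01-01", "to": "2024-12-31"}}
prior_window = {**base_aggregation_args,
                "award_date_range": {"from": "2019-01-01", "to": "2021-12-31"}}
```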
11. If the user allows adjacent evidence beyond prime awards, add it deliberately rather than implicitly.
- Prime-only path:
- Stay on `Search_Federal_Contract_Awards` or `Search_Federal_Grant_Awards`.
- Adjacent-evidence path:
- Add `Search_Federal_Contract_Sub_Awards` or `Search_Federal_Grant_Sub_Awards` with the same resolved `vendor_ids` and relevant date or agency filters.
- Treat sub-awards as weaker supporting evidence than direct prime awards.
- Do not let adjacent evidence outrank direct prime past performance when both are available.
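
A minimal sketch of the adjacent-evidence branch, run only when the user has asked for it. Field names for the sub-award tools are assumptions to confirm in their documentation articles.

```python
# Hypothetical Search_Federal_Contract_Sub_Awards call reusing the resolved vendor.
# Treat any results as weaker support than direct prime awards.
sub_awards_args = {
    "search_mode": "keyword",
    "query": "",
    "vendor_ids": ["<resolved-vendor-govtribe-id>"],
    "contracting_federal_agency_ids": ["<resolved-agency-id>"],
    "fields_to_return": [
        "govtribe_id", "name", "award_date",
        "dollars_obligated", "govtribe_ai_summary",
    ],
}
```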
12. If no sufficiently relevant evidence remains, say so clearly and stop.
13. For the remaining awards, identify the strongest **past performance references** and map them to the target requirement.
- Distinguish which requirement areas are directly supported, partially supported, or unsupported by the available evidence.
14. Identify material gaps, such as:
- Missing capability evidence
- Missing customer or agency relevance
- Missing contract type, vehicle, or instrument relevance
- Missing clearance, facility, certification, or location fit
- Weak scale or complexity comparability
- Areas where teaming or subcontract support may be needed
15. Rank the overall match strength using only these labels:
- **Very Strong**
- **Strong**
- **Moderate**
- **Weak**
- **No Credible Match**
- Score the match using:
- Direct overlap between the requirement and prior award scope
- Similarity of customer, office, or mission context
- Same or adjacent NAICS, PSC, CFDA, ALN, or category
- Same contract type, vehicle, instrument, or acquisition pattern
- Similar scale, value, and complexity
- Recent relevant performance
- Clearance, facility, certification, or environment fit
- Consistency between requirement and award `descriptions` or `description`
- Consistency between requirement and award `govtribe_ai_summary`
- Strength of file-derived requirement evidence when that evidence was needed to interpret the target
- Guidance:
- **Very Strong**: Multiple awards strongly support the target requirement with clear scope overlap and no major fit issues.
- **Strong**: Good supporting evidence exists, but one or two meaningful gaps remain.
- **Moderate**: Some relevant evidence exists, but the fit is mixed or incomplete.
- **Weak**: Only partial or adjacent support exists; the match is not well grounded.
- **No Credible Match**: The available evidence does not support a meaningful past performance claim.
- Do not rate the match above **Moderate** unless there is at least one award with strong direct scope overlap and at least one additional supporting signal such as same agency, same classification, same contract type, similar scale, or strong alignment in `descriptions`, `description`, or `govtribe_ai_summary`.
- Do not let sub-awards or other adjacent evidence justify a high score on their own when direct prime evidence is weak.
- Do not force-fit a company to a requirement based on thin evidence or broad capability language.
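
The rating guardrails above can be read as a simple gate. The sketch below is only an illustration of that logic; the input flags are analyst judgments, not fields returned by any GovTribe tool.

```python
# Illustrative gate for the "not above Moderate" rule; not a scoring algorithm.
def cap_match_label(proposed: str,
                    has_strong_scope_overlap_award: bool,
                    supporting_signal_count: int) -> str:
    order = ["No Credible Match", "Weak", "Moderate", "Strong", "Very Strong"]
    if not (has_strong_scope_overlap_award and supporting_signal_count >= 1):
        # Without one award showing strong direct scope overlap plus at least
        # one additional supporting signal, cap the label at Moderate.
        return order[min(order.index(proposed), order.index("Moderate"))]
    return proposed

print(cap_match_label("Strong", False, 2))  # -> "Moderate"
```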
16. Perform a verification pass for the most important conclusions.
- Remove weak, edge-case, or low-similarity awards and check whether the conclusion still holds.
- Rerun the cleaned cohort rather than trusting the first page.
- Use `per_page` and additional pages as needed to confirm coverage.
- Run at least one aggregation or metadata pass with `query: ""`, `search_mode: "keyword"`, `per_page: 0`, the same `vendor_ids`, and the same structural filters:
- Contract path with `Search_Federal_Contract_Awards`:
- `aggregations` such as `dollars_obligated_stats`, `top_contracting_federal_agencies_by_dollars_obligated`, `top_naics_codes_by_dollars_obligated`, or `top_psc_codes_by_dollars_obligated`
- Grant path with `Search_Federal_Grant_Awards`:
- `aggregations` such as `dollars_obligated_stats`, `top_funding_federal_agencies_by_dollars_obligated`, or `top_federal_grant_programs_by_dollars_obligated`
- If the conclusion shifts materially after cleanup, lower confidence and explain why.
## Tool Budget
Design the workflow to stay compact.
Typical path:
- 4 to 6 documentation calls
- 1 company-resolution call
- 1 opportunity or requirement-resolution call
- 0 to 3 resolver or file-evidence calls
- 1 keyword/filter-first award-matching pass
- 0 to 1 semantic pass
- 0 to 1 aggregation or adjacent-evidence branch
Expected total:
- Typical: 8 to 11 calls
- High end: 12 to 14 calls
Avoid exceeding 15 calls unless an extra call materially changes correctness.
## Output Format
Use compact markdown tables by default for relevant-history summaries, best-fit references, and requirement-to-evidence mapping.
Use Mermaid only when a real recent-versus-prior comparison materially improves readability; otherwise stay table-first.
Return the answer in this order:
1. **Match Summary**
- Briefly summarize how the target company and target requirement were interpreted.
- State the overall match strength using one of the labels from Step 15.
2. **Search Approach**
- Briefly explain which `Documentation.article_name` calls were used.
- Briefly explain which `Search_*` tools, filters, and bridge parameters were most important.
- Briefly explain how the keyword or filter-first pass, any aggregation pass, and the semantic pass were used.
- Briefly note any file-retrieval or vector-store steps used.
3. **Relevant History Summary**
- If aggregation analysis was used, start with a compact markdown table summarizing the main lane-shape or recency signals.
- Briefly summarize how concentrated the company’s relevant history looked in the target lane.
- Mention one or two aggregation-derived signals when aggregation analysis was used.
- If the workflow actually used recent-versus-prior window comparison and that trend materially affects the conclusion, you may add one small Mermaid `xychart-beta`.
- Only add the chart when it materially improves interpretation. Include a one to two sentence explanation and fall back to the compact table if the comparison is sparse or Mermaid is unavailable.
4. **Best-Fit Past Performance References**
- Present this section as a compact markdown table first.
- Recommended columns: `Rank`, `Award`, `Agency`, `Evidence Type`, `Match Strength`, `Why It Fits`.
- Add short notes below the table only when specific evidence or caveats do not fit cleanly in the table.
5. **Requirement-to-Evidence Mapping**
- Use a mandatory markdown table.
- Recommended columns: `Requirement Area`, `Support Level`, `Best Evidence`, `Gap / Caveat`.
- Use only evidence grounded in the retrieved awards and provided requirement materials.
6. **Gaps, Risks, and Teaming Needs**
- Briefly identify the most important capability, customer, scale, clearance, vehicle, instrument, or geographic gaps.
- Note where teaming support may be needed, if relevant.
7. **Overall Confidence**
- State overall confidence in the assessment and why.
## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.
## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
