# Likely Bidders
## User Input
- **Target opportunity:** [Solicitation number, notice ID, GovTribe link, title plus agency, or opportunity description]
## Goal
Use GovTribe MCP tools to identify the vendors most likely to bid on a target federal or state and local contract opportunity, or the organizations most likely to receive a target federal grant opportunity.
Use prior awards as the primary pivot for identifying likely bidders or likely recipients, then rank candidates with grounded evidence, explicit caveats, and clear confidence.
## Required Documentation
Before doing any work, call the **GovTribe Documentation** tool and read the documentation required for this workflow.
Required documentation to retrieve and read:
- `article_name="Search_Query_Guide"`
- `article_name="Search_Mode_Guide"`
- `article_name="Search_Federal_Contract_Opportunities_Tool"`
- `article_name="Search_Federal_Grant_Opportunities_Tool"`
- `article_name="Search_State_And_Local_Contract_Opportunities_Tool"`
- `article_name="Search_Federal_Contract_Awards_Tool"`
- `article_name="Search_Federal_Grant_Awards_Tool"`
- `article_name="Search_State_And_Local_Contract_Awards_Tool"`
- `article_name="Search_Vendors_Tool"`
Documentation rules (a documentation-call sketch follows these rules):
- Call the **GovTribe Documentation** tool before the first research or search step.
- Read every required documentation article before using other GovTribe tools.
- Add `article_name="Aggregation_and_Leaderboard_Guide"` when concentration checks or leaderboard-style cohort validation will affect the answer.
- Add `article_name="Date_Filtering_Guide"` or `article_name="Location_Filtering_Guide"` when you use date or location filters.
- Add `article_name="Search_Federal_Agencies_Tool"`, `article_name="Search_Naics_Categories_Tool"`, `article_name="Search_Psc_Categories_Tool"`, `article_name="Search_Federal_Grant_Programs_Tool"`, `article_name="Search_States_Tool"`, `article_name="Search_Jurisdictions_Tool"`, `article_name="Search_Nigp_Categories_Tool"`, `article_name="Search_Unspsc_Categories_Tool"`, or `article_name="Search_Government_Files_Tool"` when those tools are needed for ID resolution or supporting evidence.
- Add `article_name="Vector_Store_Content_Retrieval_Guide"` before using `Add_To_Vector_Store` or `Search_Vector_Store`.
- Treat the documentation as binding for tool names, parameters, field definitions, valid filters, and output assumptions.
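A minimal sketch of one documentation call is shown below. It assumes the **GovTribe Documentation** tool takes `article_name` exactly as written in the list above; any other argument shape should be taken from the tool's own description.

```jsonc
// GovTribe Documentation tool — one call per required article (sketch)
{
  "article_name": "Search_Query_Guide"
}
```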
## Required Input
The user must provide a target opportunity before analysis begins.
Accept any of the following:
- Solicitation number or notice ID
- GovTribe link
- Title plus agency
- Plain-language description only if it resolves to a single target without ambiguity
Optional constraints the user may provide:
- Additional context about scope, competitors, incumbency, or known customer signals
- Whether to narrow the analysis by timeframe, agency, contract vehicle, grant program, NAICS, PSC, or place of performance
Input rules:
- If the input resolves cleanly to one opportunity, proceed immediately.
- If the input is too vague to resolve to one opportunity, ask for the minimum missing detail required to proceed.
- Do not guess the target.
- Do not start substantive analysis until the target is resolved.
## Workflow
### Rules
- Call `Documentation` before doing any work and before using any other GovTribe tool.
- Decide early whether the target is a federal contract opportunity, a federal grant opportunity, or a state and local contract opportunity. Do not mix federal contract, federal grant, and state and local evidence in one ranking unless the target clearly requires it.
- Always set both `search_mode` and `query` on every `Search_*` call.
- Use `fields_to_return` whenever you need more than `govtribe_id`.
- Do not stop early when another tool call is required by the workflow.
- Keep calling tools until the task is complete or the tool budget is reached.
- If a tool returns empty or partial results and the workflow defines another defensible strategy, continue with that next strategy.
- Start with keyword and structured-filter matching before semantic expansion.
- For state and local opportunities, use `Search_State_And_Local_Contract_Opportunities` to resolve the target and refine peer-opportunity language, then use `Search_State_And_Local_Contract_Awards` to build the historical comparable-award cohort.
- Use awards as the primary evidence pivot. Use file retrieval or vector-store retrieval only when metadata, snippets, descriptions, or summaries are insufficient.
- Use `Search_Vendors` to normalize awardee identity before final ranking when vendor normalization materially improves accuracy.
- Do not over-rank candidates based on weak similarity, shared keywords, or thin evidence.
- If no sufficiently comparable awards remain after review, say so clearly and stop.
### Steps
1. Call `Documentation` before any other GovTribe tool, read the required articles, and add the optional articles needed for the exact path you will run.
2. Resolve the target opportunity to a single GovTribe record (a parameter sketch follows this step's sub-list).
- Use `Search_Federal_Contract_Opportunities` for federal contracts, `Search_Federal_Grant_Opportunities` for grants, and `Search_State_And_Local_Contract_Opportunities` for state and local contracts.
- For exact identifiers, titles, notice IDs, or quoted phrases, use `search_mode: "keyword"` with a quoted `query`.
- If structured filters define the cohort, set `query: ""`.
- Request the fields needed to interpret the target, including identifiers, summary text, relevant classifications or taxonomies, agency or jurisdiction context, timing, and, when available, place of performance and supporting files.
- Resolve reusable IDs before deeper search when needed:
- `Search_Federal_Agencies` for `federal_agency_ids`
- `Search_Naics_Categories` for `naics_category_ids`
- `Search_Psc_Categories` for `psc_category_ids`
- `Search_Federal_Grant_Programs` for `federal_grant_program_ids`
- `Search_States` for `state_ids`
- `Search_Jurisdictions` for `jurisdiction_ids`
- `Search_Nigp_Categories` for `nigp_category_ids`
- `Search_Unspsc_Categories` for `unspsc_category_ids`
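A minimal parameter sketch for the opportunity-resolution call in this step, assuming the tools accept JSON-style arguments. The notice ID is a hypothetical placeholder, and every `fields_to_return` entry other than `govtribe_id` is illustrative; take the real field names from the corresponding tool documentation article.

```jsonc
// Search_Federal_Contract_Opportunities — resolve the target by exact identifier (sketch)
{
  "search_mode": "keyword",           // exact identifiers, titles, and notice IDs use keyword mode
  "query": "\"FA8773-25-R-0001\"",    // hypothetical notice ID, quoted for exact matching
  "fields_to_return": [
    "govtribe_id",
    "title",                          // illustrative field names; confirm against the tool docs
    "description",
    "naics_category",
    "psc_category",
    "federal_agency"
  ]
}
```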
3. Extract the strongest structured signals from the resolved target, such as title, agency, office, description, NAICS, PSC, grant program, state, jurisdiction, NIGP, UNSPSC, set-aside or eligibility constraints, vehicle or instrument type, value band, place of performance, and due-date context.
4. Run a filter-first award matching pass before semantic broadening (a contract-path parameter sketch follows this step).
- Contract path:
- If the resolved opportunity has `federal_meta_opportunity_id`, start with `Search_Federal_Contract_Awards` using `query: ""`, `search_mode: "keyword"`, and `federal_meta_opportunity_ids`.
- Then add the strongest available agency, NAICS, PSC, set-aside, vehicle, date, and value filters.
- Grant path:
- Start with `Search_Federal_Grant_Awards` using `query: ""`, `search_mode: "keyword"`, and the strongest available grant-program, agency, assistance-type, date, value, and place-of-performance filters.
- State and local path:
- Start with `Search_State_And_Local_Contract_Awards` using `search_mode: "keyword"` and the most precise first-pass `query` you can build from the resolved opportunity title, exact identifiers, and scoped description phrases.
- Add `state_ids` and `contact_ids` when the resolved opportunity provides strong state or contact signals.
- Prefer quoted phrases from the opportunity title or scope language over broad paraphrases on the first pass.
- If the initial award cohort is thin or noisy, run a peer-opportunity pass with `Search_State_And_Local_Contract_Opportunities` using the same state, jurisdiction, NIGP, UNSPSC, due-date, and topic signals to find near-neighbor solicitations, then use those neighbor descriptions to tighten the award query.
- Do not rely on keywords alone when structured bridges are available.
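A contract-path sketch for the filter-first award pass, assuming JSON-style arguments. The ID values are placeholders resolved in earlier steps; the parameter names shown all appear in this guide, but confirm their exact shapes in `article_name="Search_Federal_Contract_Awards_Tool"`.

```jsonc
// Search_Federal_Contract_Awards — filter-first pass keyed to the resolved opportunity (sketch)
{
  "search_mode": "keyword",
  "query": "",                                          // structured filters define the cohort
  "federal_meta_opportunity_ids": ["<resolved-meta-opportunity-id>"],
  "federal_agency_ids": ["<id-from-Search_Federal_Agencies>"],
  "naics_category_ids": ["<id-from-Search_Naics_Categories>"],
  "psc_category_ids": ["<id-from-Search_Psc_Categories>"]
}
```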
5. Run an aggregation-only pass on the comparable-award cohort before full ranking (an aggregation-only parameter sketch follows this step).
- Use the same structural filters that define the comparable cohort.
- Use `query: ""`, `search_mode: "keyword"`, and `per_page: 0`.
- Use awardee, agency, program, vehicle, NAICS, PSC, location, and dollars-obligated aggregations as appropriate to measure concentration and determine whether the market is incumbent-dominated, moderately concentrated, or fragmented.
- State and local award aggregation path:
- Use `Search_State_And_Local_Contract_Awards`
- Use `aggregations` such as `dollars_obligated_stats`, `top_contract_entities_by_dollars_obligated`, `top_nigp_codes_by_dollars_obligated`, `top_unspsc_codes_by_dollars_obligated`, and `top_states_by_dollars_obligated`
- When historical state and local award coverage is thin, you may add an opportunity aggregation pass with `Search_State_And_Local_Contract_Opportunities` using `top_states_by_doc_count`, `top_jurisdictions_by_doc_count`, `top_unspsc_codes_by_doc_count`, and `top_nigp_codes_by_doc_count` to understand the active-solicitation market shape before final ranking.
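A state and local aggregation-only sketch, assuming `aggregations` takes a list of the aggregation names used in this guide and that `per_page: 0` suppresses individual hits; confirm both against `article_name="Aggregation_and_Leaderboard_Guide"`.

```jsonc
// Search_State_And_Local_Contract_Awards — aggregation-only concentration check (sketch)
{
  "search_mode": "keyword",
  "query": "",
  "per_page": 0,                                        // aggregation-only: no individual records needed
  "state_ids": ["<id-from-Search_States>"],
  "aggregations": [
    "dollars_obligated_stats",
    "top_contract_entities_by_dollars_obligated",
    "top_nigp_codes_by_dollars_obligated"
  ]
}
```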
6. Broaden only after the keyword and aggregation passes (a semantic-pass sketch follows this step).
- Use the same award search tool with `search_mode: "semantic"` and a concise plain-language `query`.
- Keep the strongest structured filters in place while broadening.
- Use semantic broadening to capture near-neighbor work, not to replace the comparable-award cohort.
- For state and local work, use `Search_State_And_Local_Contract_Opportunities` for semantic peer-opportunity discovery only when active-solicitation language is needed to improve the award search, not as a substitute for historical bidder evidence.
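A semantic-broadening sketch for the contract path, keeping the strongest structured filters from the keyword pass in place. The plain-language `query` text is illustrative only.

```jsonc
// Search_Federal_Contract_Awards — semantic near-neighbor pass with filters held constant (sketch)
{
  "search_mode": "semantic",
  "query": "enterprise IT modernization and managed network services for a federal civilian agency",  // illustrative phrasing
  "federal_agency_ids": ["<same-agency-id-as-keyword-pass>"],
  "naics_category_ids": ["<same-naics-id-as-keyword-pass>"]
}
```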
7. Use text evidence as part of similarity scoring.
- For contracts, inspect both `descriptions` and `govtribe_ai_summary`.
- For grants, inspect both `description` and `govtribe_ai_summary`.
- For state and local work, inspect the opportunity `description`, award `description`, `govtribe_ai_summary`, and `line_items` when available.
- If supporting file content materially improves the analysis, use `Search_Government_Files` (a file-search sketch follows this step).
- If snippets are insufficient, read `article_name="Vector_Store_Content_Retrieval_Guide"` first, then use `Add_To_Vector_Store`, then `Search_Vector_Store`.
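A sketch of a supporting-file search, assuming `Search_Government_Files` takes the same `search_mode` and `query` parameters required of every `Search_*` call in this workflow; the quoted phrases are illustrative.

```jsonc
// Search_Government_Files — pull supporting file evidence when snippets are insufficient (sketch)
{
  "search_mode": "keyword",
  "query": "\"performance work statement\" \"incumbent contract\""   // illustrative phrases drawn from the target's scope language
}
```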
8. Compare each candidate award to the target and keep only meaningfully similar awards.
- Evaluate scope overlap, agency or customer overlap, state or jurisdiction overlap, classification or taxonomy overlap, contract type or assistance-type overlap, vehicle or instrument overlap, eligibility overlap, value-band similarity, recency, and alignment between the target text and award text.
- Exclude awards that are only keyword-adjacent or otherwise poor fits.
9. Normalize the surviving awardees with `Search_Vendors` when vendor identity, parent-child relationships, or entity consolidation matters to the ranking and the awardee appears to map cleanly to a GovTribe vendor record.
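A vendor-normalization sketch for this step. The vendor name is a hypothetical placeholder, and the `fields_to_return` entries beyond `govtribe_id` are illustrative; confirm field names in `article_name="Search_Vendors_Tool"`.

```jsonc
// Search_Vendors — normalize an awardee to a GovTribe vendor record before ranking (sketch)
{
  "search_mode": "keyword",
  "query": "\"Example Federal Solutions LLC\"",    // hypothetical awardee name from the cohort
  "fields_to_return": ["govtribe_id", "name"]      // illustrative field names
}
```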
10. Rank the likely bidders or recipients using the labels below. Base the ranking on retrieved evidence, not intuition.
11. Perform a verification pass on the top candidates.
- Remove weak, edge-case, or low-similarity awards and check whether the ordering still holds.
- Re-run at least one aggregation check on the cleaned cohort using the same structural filters.
- If the leaderboard shifts materially after weak matches are removed, lower confidence and explain why.
### Likelihood Labels
- **Very High**: Incumbent or repeated winner on highly similar work for the same or very similar customer, with no obvious eligibility issue.
- **High**: Multiple strong comparable awards with clear scope and customer overlap.
- **Medium**: Some relevant comparable work, but one or more meaningful gaps remain.
- **Low**: Only adjacent or partial overlap. Plausible, but weakly supported.
- **Exclude**: Clear mismatch in scope, customer, eligibility, award type, or textual evidence.
Scoring factors:
- Direct scope overlap with the target
- Repeated wins on highly similar awards
- Same agency, office, or buying organization
- Same NAICS, PSC, grant-program, assistance-type, or instrument pattern
- Same contract vehicle or acquisition pattern
- Similar dollar range
- Recent relevant activity
- Eligibility fit, including set-aside or recipient type
- Consistency between the target and award descriptions or summaries
Ranking rules:
- Do not rank a candidate above **Medium** unless there is at least one award with strong scope overlap and at least one additional supporting signal such as the same agency, same classification, same vehicle or assistance type, repeat performance, or strong text alignment.
- Do not infer that a vendor is likely to bid or that an organization is likely to receive the award if the evidence is thin or mostly keyword-based.
## Tool Budget
Design the workflow to stay compact.
Typical path:
- 3 to 5 documentation calls
- 1 opportunity-resolution call
- 0 to 3 ID-resolution calls
- 1 keyword and filter-first award pass
- 1 aggregation pass
- 0 to 1 peer-opportunity refinement pass for state and local work
- 0 to 1 semantic pass
- 0 to 1 vendor-normalization or file-evidence call
Expected total:
- Typical: 7 to 10 calls
- High end with fallback: 11 to 13 calls
Budget rule:
- Avoid exceeding 15 calls unless an additional call materially changes correctness, completeness, or grounding.
## Output Format
Return the answer in this order:
1. **Target Opportunity Summary**
- Briefly summarize how the target opportunity was interpreted.
2. **Search Approach**
- Briefly explain which documentation articles were used.
- Briefly explain which `Search_*` tools, filters, and parameters were most important.
- Briefly explain how the keyword pass, aggregation pass, and semantic pass were used.
3. **Comparable Market Summary**
- Start with a compact markdown table summarizing the dominant awardees, customer concentration, vehicle, program, or taxonomy signals, and overall market shape.
- Briefly summarize whether the cohort looked highly concentrated, moderately concentrated, or fragmented.
- If awardee concentration is central to the answer, you may add one small Mermaid `pie` chart.
- Add the chart only when it materially improves interpretation, include a short explanation, and fall back to the compact table if the cohort is small or Mermaid is unavailable.
4. **Likely Bidders or Recipients**
- Present this section as a compact markdown table first.
- Recommended columns: `Rank`, `Vendor`, `Likelihood`, `Why`, `Key Evidence`, `Caveats`.
- Keep the table compact and move overflow detail into short notes immediately below the table when needed.
5. **Why Others Were Excluded**
- Briefly note close-but-rejected candidates or weak matches.
6. **Overall Confidence**
- State overall confidence in the ranking and explain why.
## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.
## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
