Find Federal Recompete Opportunities

Use this prompt when you want to find federal contracts, task orders, BPAs, IDIQs, or vehicles that are nearing key end dates and may turn into follow-on work. It helps you focus on expiring work that is actually relevant to your market rather than returning an undifferentiated list of contracts. It is useful for recompete hunting, account planning, and building pipeline around known spending that is likely to come back to market.

# Find Federal Recompete Opportunities

## User Input
- **Target scope:** [Company, incumbent, agency, NAICS, PSC, contract or vehicle identifier, or work description]

## Goal
Use GovTribe MCP tools to identify federal awards, task orders, BPAs, IDIQs, and contract vehicles nearing meaningful lifecycle dates that may represent follow-on or recompete opportunities.

## Required Input
The user must provide a **target scope** before analysis begins.

Accept any of the following:
- Company or incumbent vendor name
- Agency, subagency, or buying office
- Product, service, or capability area
- NAICS, PSC, or other classification
- Contract, BPA, task order, IDIQ, or vehicle identifier
- Plain-language description of the work to search for

Optional constraints the user may provide:
- Agency or customer focus
- Contract type or vehicle type
- Incumbent vendor filters
- NAICS, PSC, or category filters

Input rules:
- If the target scope resolves cleanly enough to search, proceed immediately.
- If the target scope is too vague, ask for the minimum missing detail needed to proceed.

## Workflow

### Steps
1. Before any other GovTribe tool call, use `Documentation` once with `article_names=["Search_Query_Guide", "Search_Mode_Guide", "Date_Filtering_Guide", "Aggregation_and_Leaderboard_Guide"]`.
    - Use the documentation results to confirm valid tool names, filters, lifecycle date fields, `search_mode`, `query`, `fields_to_return`, `per_page`, `sort`, and aggregation options.
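As a minimal sketch, the Step 1 call parameters can be expressed as follows (shown as a Python dict; how the parameters are passed to the tool depends on the MCP host application):

```python
# Illustrative parameters for the one-time Documentation call in Step 1.
# The article names come directly from the workflow text above.
docs_call = {
    "article_names": [
        "Search_Query_Guide",
        "Search_Mode_Guide",
        "Date_Filtering_Guide",
        "Aggregation_and_Leaderboard_Guide",
    ],
}
```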

2. If identifiers or scoped filters materially improve filtering, resolve the target scope and reusable IDs before deeper search.
    - Use `Search_Vendors` when the user provides a company name, incumbent vendor, or GovTribe vendor link.
    - Use `Search_Federal_Agencies`, `Search_Naics_Categories`, and `Search_Psc_Categories` to resolve agency and classification filters.
    - For exact vendor names, identifiers, or known contract or vehicle numbers, wrap the `query` value in double quotes to force exact matching.
    - If the user provides a known contract, order, BPA, IDIQ, or vehicle identifier, resolve it with the most likely dataset first:
        - `Search_Federal_Contract_Awards` for definitive contracts, purchase orders, delivery orders, and BPA calls
        - `Search_Federal_Contract_IDVs` for vendor-specific BPAs, IDIQs, GWAC awards, schedules, BOAs, and similar parent instruments
        - `Search_Federal_Contract_Vehicles` for master vehicle names or acronyms
    - Use `fields_to_return` explicitly. At minimum request:
        - `Search_Vendors`: `govtribe_id`, `govtribe_url`, `name`, `uei`, `dba`, `business_types`, `sba_certifications`, `parent_or_child`, `parent`, `naics_category`, `federal_contract_awards`, `federal_contract_idvs`, `awarded_federal_contract_vehicle`, `govtribe_ai_summary`
    - Extract the strongest available scope signals:
        - Target company, agency, or market segment
        - Core product or service area
        - Relevant NAICS, PSC, or other classifications
        - Contract or vehicle type
        - Relevant keywords and synonyms
        - Known incumbent vendors, if any
        - Whether the resolved scope is broad enough to justify aggregation-only lifecycle sizing or narrow enough for direct row review
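To make the exact-match rule and the minimum field list concrete, a hedged sketch of a `Search_Vendors` payload for an incumbent lookup might look like this (the vendor name is a placeholder, and the payload shape should be confirmed against the Step 1 documentation):

```python
# Illustrative Search_Vendors parameters. The doubled quotes inside the
# query string request an exact-name match, per the workflow's quoting rule.
vendor_lookup = {
    "query": '"Acme Federal Solutions"',  # placeholder vendor name
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "name", "uei", "dba",
        "business_types", "sba_certifications", "parent_or_child", "parent",
        "naics_category", "federal_contract_awards", "federal_contract_idvs",
        "awarded_federal_contract_vehicle", "govtribe_ai_summary",
    ],
}
```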

2A. Classify the resolved target scope as either a **seed record** or a **market scope**.
    - A **seed record** is a specific award, IDV, or vehicle that the user provided by identifier.
      It is the basis for the search, not a discovered recompete.
    - A **market scope** is a broader filter set such as a vendor, agency, NAICS, PSC, or work area
      where the expiring cohort itself is the discovery.
    - When the target scope resolves to a seed record:
        - Capture its key facts for use in Output Section 1, including incumbent, agency, lifecycle dates, scope summary, vehicle context, and material transaction history.
        - Use the seed to anchor downstream filters, similarity passes, and follow-on evidence gathering.
        - Do not include the seed record itself in Output Section 4 unless the user explicitly asked to evaluate that exact record as the output item.
        - Focus downstream search effort on follow-on evidence such as new solicitations, sources sought notices, forecasts, similar expiring awards from the same agency or program, or bridge and extension signals in transaction history.
    - When the target scope resolves to a market scope, proceed normally; expiring records discovered by the search are themselves the output.

3. Choose the lifecycle retrieval path after scope resolution.
    - **3A. Aggregation-only lifecycle sizing branch**
        - Run this branch only when one or more are true:
            - The user asks for market sizing, concentration, dominant awardees, vehicles, categories, or set-aside posture
            - The resolved scope is broad, such as an agency plus capability lane, agency plus NAICS or PSC slice, or an incumbent plus a broad work category
            - The initial scoped expiring cohort is likely too large or uncertain for efficient direct row review
        - Skip this branch when one or more are true:
            - The user provides an exact contract, order, BPA, IDIQ, or vehicle identifier and did not ask for market sizing around that seed
            - The search is clearly narrow, vendor-specific, or account-specific
            - The resolved cohort is obviously small enough for direct row review
        - Use `per_page: 0`.
        - Omit `fields_to_return` unless you also need rows.
        - Apply the same resolved filters that will define the later row cohort.
        - Keep aggregation calls focused on one market-sizing question when possible.
        - Use the top-level `total` as the full scoped expiring-population size for this branch.
        - Award aggregation path with `Search_Federal_Contract_Awards`:
            - Use `ultimate_completion_date_range` for the next 24 months and the strongest available `vendor_ids`, `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_award_types`, and `set_aside_types`.
            - Use `aggregations` such as `dollars_obligated_stats`, `top_awardees_by_dollars_obligated`, `top_federal_contract_vehicles_by_dollars_obligated`, `top_set_aside_types_by_dollars_obligated`, `top_naics_codes_by_dollars_obligated`, and `top_psc_codes_by_dollars_obligated`.
        - IDV aggregation path with `Search_Federal_Contract_IDVs`:
            - Use `last_date_to_order_range` for the next 24 months and the strongest available `vendor_ids`, `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_idv_types`, `multiple_or_single_award`, and `set_aside_types`.
            - Use `aggregations` such as `top_awardees_by_doc_count`, `top_vehicles_by_doc_count`, `top_set_aside_types_by_doc_count`, `top_naics_codes_by_doc_count`, and `top_psc_codes_by_doc_count`.
        - Use the aggregation results only to estimate the size of the scoped expiring segment, identify which awardees, vehicles, and categories dominate it, and decide whether the row-level pass should emphasize awards, IDVs, or both.
        - Do not treat aggregation results alone as evidence that a record is a likely recompete.
        - When using denominator-versus-numerator framing:
            - Treat the full scoped expiring cohort from this branch as the denominator
            - Treat the tighter likely-recompete cohort that survives later row-level screening as the numerator
            - Do not claim the ratio if cleanup or later filtering materially changes the scope
    - **3B. Row-level lifecycle retrieval branch**
        - Always run a row-level lifecycle retrieval pass.
        - If 3A was skipped because the scope is narrow, move directly to this branch.
        - Use keyword and structured filters before any semantic broadening.
        - Use `per_page` intentionally and paginate as needed when the cohort remains material.
        - If 3A ran, keep the same base filters so the row cohort matches the sized cohort.
        - Award path with `Search_Federal_Contract_Awards`:
            - Use `ultimate_completion_date_range` for the next 24 months of expiration or completion-horizon filtering.
            - Add the strongest available `vendor_ids`, `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_award_types`, and `set_aside_types`.
            - Use `sort` with `completionDate` when completion timing should drive review order.
            - At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `completion_date`, `ultimate_completion_date`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `dollars_obligated`, `ceiling_value`, `set_aside_type`, `awardee`, `parent_of_awardee`, `federal_contract_idv`, `federal_contract_vehicle`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, and `originating_federal_contract_opportunity`.
        - IDV path with `Search_Federal_Contract_IDVs`:
            - Use `last_date_to_order_range` for the next 24 months of ordering-period expiration so expired or inactive parent instruments are excluded.
            - Add the strongest available `vendor_ids`, `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_idv_types`, `multiple_or_single_award`, and `set_aside_types`.
            - Use `sort` with `lastDateToOrder` when ordering-window timing should drive review order.
            - At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `last_date_to_order`, `contract_type`, `description`, `govtribe_ai_summary`, `ceiling_value`, `set_aside`, `multiple_or_single_award`, `awardee`, `parent_of_awardee`, `federal_contract_vehicle`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `blanket_purchase_agreements`, `task_orders`, and `originating_federal_contract_opportunity`.
        - Vehicle path with `Search_Federal_Contract_Vehicles`:
            - Use `last_date_to_order_range` for the next 24 months of master-vehicle ordering-period expiration so expired or inactive vehicles are excluded.
            - Add the strongest available `vendor_ids`, `funding_federal_agency_ids`, and `federal_contract_vehicle_types`.
            - Use `sort` with `lastDateToOrder` when ordering-window timing should drive review order.
            - At minimum request `govtribe_id`, `govtribe_url`, `name`, `award_date`, `last_date_to_order`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `set_aside_type`, `shared_ceiling`, `federal_agency`, `federal_contract_awards`, and `originating_federal_contract_opportunity`.
        - Do not invent unsupported lifecycle filters such as a generic `option_end_date`. If option timing matters, treat it as inference only when the returned evidence supports that reading.
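The 3A and 3B award paths above can be sketched as a pair of payloads that share the same base filters, so the row cohort matches the sized cohort. The IDs are placeholders, and the date-range and sort shapes are assumptions to verify against the Step 1 documentation:

```python
from datetime import date, timedelta

today = date.today()
horizon = (today + timedelta(days=730)).isoformat()  # roughly a 24-month window

# Shared base filters so the 3B row cohort matches the 3A sized cohort.
# IDs are placeholders; the date-range shape is an assumption to confirm
# against the Date_Filtering_Guide retrieved in Step 1.
base_filters = {
    "ultimate_completion_date_range": {"start": today.isoformat(), "end": horizon},
    "contracting_federal_agency_ids": ["<agency-id>"],  # placeholder
    "naics_category_ids": ["<naics-id>"],               # placeholder
}

# 3A: aggregation-only sizing. per_page: 0 suppresses rows so only the
# top-level total and the requested aggregations come back.
sizing_call = {
    **base_filters,
    "per_page": 0,
    "aggregations": [
        "dollars_obligated_stats",
        "top_awardees_by_dollars_obligated",
        "top_federal_contract_vehicles_by_dollars_obligated",
        "top_set_aside_types_by_dollars_obligated",
    ],
}

# 3B: row-level retrieval over the same cohort.
row_call = {
    **base_filters,
    "per_page": 50,
    "sort": "completionDate",  # assumed sort syntax; confirm via Step 1 docs
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "name", "contract_number",
        "award_date", "completion_date", "ultimate_completion_date",
        "contract_type", "descriptions", "govtribe_ai_summary",
        "dollars_obligated", "ceiling_value", "set_aside_type", "awardee",
        "parent_of_awardee", "federal_contract_idv", "federal_contract_vehicle",
        "contracting_federal_agency", "funding_federal_agency",
        "naics_category", "psc_category",
        "originating_federal_contract_opportunity",
    ],
}
```

The IDV and vehicle paths follow the same pattern with `last_date_to_order_range` and `lastDateToOrder` in place of the completion-date fields.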

4. Broaden the search only after the keyword and filter-first row pass.
    - Semantic search is a broadening pass only. It must not replace the Step 3B row-level lifecycle retrieval branch.
    - If the search began from an exact award, IDV, or vehicle seed, or if the row-level lifecycle pass produced one strong candidate but the cohort is still too narrow, prefer one same-family `similar_filter` widening pass before a generic semantic pass.
    - Use seeded widening to find related follow-on evidence and same-family records around the seed, not to resurface the seed itself as a discovery.
    - Use `Search_Federal_Contract_Awards` for award seeds, `Search_Federal_Contract_IDVs` for IDV seeds, and `Search_Federal_Contract_Vehicles` for vehicle seeds.
    - Keep the same lifecycle window and the strongest agency, vendor, NAICS, PSC, type, and set-aside filters in place.
    - Do not use cross-dataset similarity jumps in this workflow.
    - Only if the seeded widening pass is still too thin should you use generic semantic broadening.
    - Use `Search_Federal_Contract_Awards`, `Search_Federal_Contract_IDVs`, or `Search_Federal_Contract_Vehicles` again with `search_mode: "semantic"`.
    - Build a concise plain-language `query` from the resolved scope, mission language, and domain synonyms.
    - Keep the strongest structured filters in place while broadening.
    - Use `_score`-based `sort` for semantic passes unless the user specifically needs date ordering instead.
    - Use `similar_filter` only if the current tool supports it and you have a strong seed record with the correct `govtribe_type` and `govtribe_id`.
    - Do not treat supporting transaction evidence as the primary semantic-search surface.
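As a sketch of the generic semantic fallback in Step 4, the structured filters stay in place while only the retrieval mode and query language widen (the `query` text is illustrative, the IDs are placeholders, and the `sort` value assumes a `_score` sort key):

```python
# Illustrative semantic broadening payload. Only the mode and query change;
# the strongest structured filters from the row pass remain in force.
semantic_call = {
    "search_mode": "semantic",
    "query": "enterprise network operations and service desk support",  # illustrative
    "sort": "_score",  # assumed syntax for score-ordered results
    "contracting_federal_agency_ids": ["<agency-id>"],  # placeholder
    "naics_category_ids": ["<naics-id>"],               # placeholder
}
```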

5. If awards, IDVs, or vehicles are identified and supporting modification evidence is needed, use `Search_Federal_Transactions`.
    - Use this only after awards, IDVs, or vehicles are identified.
    - Use the strongest available `federal_contract_award_ids`, `federal_contract_idv_ids`, `federal_contract_vehicle_ids`, `vendor_ids`, `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, and `psc_category_ids`.
    - Use `sort` with `transactionDate` descending when reviewing recent modification activity.
    - At minimum request `govtribe_id`, `date`, `last_mod_number`, `reason_for_modification`, `total_value`, `federal_value`, `awardee`, `funding_federal_agency`, and `contracting_federal_agency`.

6. If candidate lifecycle records remain after retrieval, judge relevance with dataset-specific evidence and record structure.
    - For `Search_Federal_Contract_Awards`, inspect both `descriptions` and `govtribe_ai_summary`.
    - For `Search_Federal_Contract_IDVs`, inspect both `description` and `govtribe_ai_summary`.
    - For `Search_Federal_Contract_Vehicles`, inspect both `descriptions` and `govtribe_ai_summary`.
    - If supporting transactions were retrieved, use those rows only for lifecycle evidence such as recent modifications, obligation activity, and modification reasons.
    - Use relationship fields such as `originating_federal_contract_opportunity`, `federal_contract_vehicle`, `federal_contract_idv`, `task_orders`, and `blanket_purchase_agreements` when they materially clarify likely follow-on structure.
    - Distinguish base awards, task orders, BPA calls, IDIQs, and master vehicles using documented dataset fields, not intuition.

7. Remove results that fail the likely-recompete screen, including results that:
    - Are not meaningfully relevant to the target scope
    - Are already closed out without meaningful follow-on potential
    - Are not credibly likely recompetes
    - Are supported only by weak keyword overlap
    - If the workflow began from a seed record, remove the seed itself from the discovery set before final ranking unless the user explicitly asked for a one-record evaluation.

8. If no relevant expiring work remains after review, say so clearly and stop.

9. If relevant expiring work remains after review, normalize the surviving results and extract key facts.
    - Use the explicit fields returned from awards, IDVs, vehicles, and supporting transactions.
    - When incumbent identity or parent-child context matters, use `Search_Vendors` with `vendor_ids` from award relationships and request `govtribe_id`, `govtribe_url`, `name`, `uei`, `dba`, `parent_or_child`, `parent`, `business_types`, `sba_certifications`, and `govtribe_ai_summary`.
    - Record and normalize:
        - Contract or vehicle name
        - Agency or buying office
        - Incumbent vendor
        - Vehicle or agreement type
        - Relevant lifecycle date such as `ultimate_completion_date`, `completion_date`, or `last_date_to_order`
        - Contract number or identifier
        - Short scope summary
        - Key follow-on or recompete signals

10. Rank the remaining results using the priority labels below, then perform a verification pass.
    - If the workflow began from a seed record, do not place the seed record in the final ranked recompete table unless the user explicitly asked to evaluate that exact record as the output item.
    - Remove weak, edge-case, or low-similarity matches and check whether the ranking still holds.
    - Rerun the cleaned cohort rather than trusting the first page.
    - Use `per_page` and additional pages as needed to confirm coverage.
    - If Step 3A was used, rerun at least one cleaned aggregation cohort on the same supporting tool you used for market sizing.
    - If the ranking shifts materially after cleanup, lower confidence and explain why.

### Priority Labels
Use these labels:
- **Very High**
- **High**
- **Medium**
- **Low**
- **Exclude**

Score results using these factors:
- Direct relevance to the target scope
- Credibility of the expiration or lifecycle date
- Evidence that the work is ongoing, recurring, or likely to be recompeted
- Agency or customer importance
- Similarity in scope, category, and vehicle type
- Presence of a clear incumbent
- Alignment between the result and the target in `descriptions` or `description`
- Alignment between the result and the target in `govtribe_ai_summary`
- Supporting transaction evidence, when recent modification activity materially sharpens the recompete signal

Guidance:
- **Very High**: Strong target fit, clear relevant lifecycle date, and strong signs the work is likely to continue or recompete
- **High**: Clear fit with good lifecycle evidence and a plausible follow-on path
- **Medium**: Potentially relevant, but one or more meaningful gaps remain
- **Low**: Only partial or adjacent fit; weakly supported
- **Exclude**: Clear mismatch in scope, agency, vehicle structure, timing, or textual evidence

Ranking rules:
- If the workflow began from a seed record, do not include that seed in the final ranked table unless the user explicitly asked for a one-record evaluation of it.
- Do not rank a result above **Medium** unless there is both strong scope relevance and a credible lifecycle signal such as `ultimate_completion_date`, `last_date_to_order`, or recent modification evidence that supports likely follow-on action.
- Do not treat every expiring contract as a likely recompete.
- Do not include a record in the final ranked table unless the evidence supports it as a likely recompete.
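One hypothetical way to operationalize the above-Medium floor, purely as a sketch (the two boolean inputs would come from the evidence review in Steps 6 and 7; the function and its name are not part of the workflow itself):

```python
def cap_priority(proposed: str, strong_scope_fit: bool,
                 credible_lifecycle_signal: bool) -> str:
    """Enforce the rule that nothing ranks above Medium without both a
    strong scope fit and a credible lifecycle signal."""
    order = ["Exclude", "Low", "Medium", "High", "Very High"]
    if proposed not in order:
        raise ValueError(f"unknown priority label: {proposed}")
    above_medium = order.index(proposed) > order.index("Medium")
    if above_medium and not (strong_scope_fit and credible_lifecycle_signal):
        return "Medium"  # demote: evidence does not support a higher rank
    return proposed
```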

## Output Format
Use compact markdown tables by default for structured market-size summaries and ranked expiring-work results.

Use Mermaid only when denominator-versus-numerator comparisons or concentration patterns materially improve interpretation.
- Use `xychart-beta` for denominator-versus-numerator or simple window comparisons.
- Use `pie` only when one or a few players, vehicles, or set-aside buckets dominate the cohort.
- Only add a chart when it materially improves interpretation, include a short explanation, and fall back to the compact table if the data is sparse or Mermaid is unavailable.

Return the answer in this order:

1. **Search Scope Summary**
    - Briefly summarize how the target scope was interpreted.
    - If the workflow began from a seed record, identify it explicitly as the seed and summarize the key facts carried forward into the search.

2. **Search Approach**
    - Briefly explain which `Search_*` tools and lifecycle date fields were used.
    - Briefly state whether the Step 3A aggregation-only sizing branch was used or skipped and why.
    - Briefly explain how the aggregation-only sizing branch, keyword and filter-first row pass, and semantic pass were used.
    - Briefly note any date windows, filters, or aggregations applied.

3. **Market Size / Concentration Summary**
    - If Step 3A was used, start with a compact markdown table summarizing the major awardees, vehicles, set-aside patterns, classifications, or other dominant cohort signals.
    - If Step 3A was skipped because the scope was narrow, say so briefly and explain that direct row-level lifecycle review was more appropriate.
    - Briefly summarize the size of the scoped expiring market when Step 3A was used.
    - Briefly note whether the market looks concentrated or diffuse when aggregation evidence exists.
    - Only quantify denominator-versus-numerator framing when the same base filters remained intact through the likely-recompete screen.

4. **Top Likely Expiring Recompetes**
    - Present this section as a compact markdown table first.
    - Recommended columns: `Rank`, `Record`, `Agency`, `Incumbent`, `Lifecycle Date`, `Priority`.
    - Include only records that survived the likely-recompete screen.
    - When the workflow began from a seed record, the seed record itself is not eligible for this table unless the user explicitly asked for a one-record evaluation.
    - If rationale, caveats, or specific evidence do not fit cleanly in the table, add them immediately below the table as short notes.

5. **Why Others Were Excluded**
    - Briefly note weak matches, ambiguous records, or results with poor recompete signals.

6. **Overall Confidence**
    - State overall confidence in the output and why.
    - Call out ambiguities, unsupported date interpretations, or missing lifecycle details when they materially affect the answer.

## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.

## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
