Expiring Contracts / Recompetes

Use this prompt when you want to find contracts, task orders, BPAs, IDIQs, or vehicles that are nearing key end dates and may turn into follow-on work. It keeps the focus on expiring work that is actually relevant to your market instead of an undifferentiated dump of contracts, and it is useful for recompete hunting, account planning, and building pipeline around known spending that is likely to come back to market.

# Expiring Contracts / Recompetes

## User Input
- **Target scope:** [Company, incumbent, agency, NAICS, PSC, contract or vehicle identifier, or work description]

## Goal
Use GovTribe MCP tools to identify federal awards, task orders, BPAs, IDIQs, and contract vehicles nearing meaningful lifecycle dates that may represent follow-on or recompete opportunities.

Focus on scoped, relevant expiring work rather than a loose list of records. Use aggregation-first market sizing when the scope is broad, then narrow to the records most likely to matter.

## Required Documentation
Before doing any work, call the **GovTribe Documentation** tool and read the documentation required for this workflow.

Required documentation to retrieve and read:
- `article_name="Search_Query_Guide"`
- `article_name="Search_Mode_Guide"`
- `article_name="Date_Filtering_Guide"`
- `article_name="Search_Federal_Contract_Awards_Tool"`
- `article_name="Search_Federal_Contract_IDVs_Tool"`
- `article_name="Search_Federal_Contract_Vehicles_Tool"`

Retrieve these additional documentation articles only when the workflow needs them:
- `article_name="Aggregation_and_Leaderboard_Guide"` for market sizing, concentration checks, or denominator-versus-numerator comparisons
- `article_name="Search_Federal_Transactions_Tool"` for supporting modification or lifecycle evidence
- `article_name="Search_Vendors_Tool"` for vendor normalization or incumbent resolution
- `article_name="Search_Federal_Agencies_Tool"` for agency resolution
- `article_name="Search_Naics_Categories_Tool"` for NAICS resolution
- `article_name="Search_Psc_Categories_Tool"` for PSC resolution
- `article_name="Location_Filtering_Guide"` before using place-of-performance filters
- `article_name="Create_Saved_Search_Tool"` before creating a saved search or alert

Documentation rules:
- Call the **GovTribe Documentation** tool before the first research or search step.
- Read every required documentation article before using other GovTribe tools.
- Retrieve the optional documentation articles before you use the branch they govern.
- Follow the documented tool contracts exactly.
- Treat the documentation as binding for tool names, parameters, field definitions, lifecycle date interpretation, sort keys, aggregation options, and saved-search handoff.

## Required Input
The user must provide a **target scope** before analysis begins.

Accept any of the following:
- Company or incumbent vendor name
- Agency, subagency, or buying office
- Product, service, or capability area
- NAICS, PSC, or other classification
- Contract, BPA, task order, IDIQ, or vehicle identifier
- Plain-language description of the work to search for

Optional constraints the user may provide:
- Expiration window, such as next 12 or 24 months
- Agency or customer focus
- Contract type or vehicle type
- Incumbent vendor filters
- NAICS, PSC, or category filters
- Geography or place of performance
- Dollar range
- Whether to include only active vehicles
- Whether to include only likely recompetes
- Whether to create a saved search or alert

Input rules:
- If the target scope resolves cleanly enough to search, proceed immediately.
- If the target scope is too vague, ask for the minimum missing detail needed to proceed.
- Do not guess missing identifiers, lifecycle windows, or relevance constraints.

## Workflow
### Rules
- Call `Documentation` before using any other GovTribe MCP tool.
- Use GovTribe MCP tools as the primary evidence in this workflow.
- Always set both `search_mode` and `query` on every `Search_*` call.
- Use `query: ""` for filter-defined or aggregation-only cohorts.
- Use `fields_to_return` whenever you need more than `govtribe_id`.
- Do not stop early when another tool call is required by the workflow.
- Keep calling tools until the task is complete or the tool budget is reached.
- If a tool returns empty or partial results and the workflow defines another defensible strategy, continue with that next strategy.
- Load `Aggregation_and_Leaderboard_Guide` before aggregation-driven market sizing or denominator-versus-numerator analysis.
- Use `Search_Federal_Contract_Awards`, `Search_Federal_Contract_IDVs`, and `Search_Federal_Contract_Vehicles` as the primary candidate surfaces.
- Use `Search_Federal_Transactions` only as supporting modification or lifecycle evidence.
- Use documented lifecycle fields such as `ultimate_completion_date_range` and `last_date_to_order_range`; do not invent unsupported date filters.
- Use `Create_Saved_Search` only from a prior `search_*` persistence ID and only when the user asked for a saved search or alert.
- Do not stop at the first plausible answer. Verify ambiguous lifecycle dates, weak matches, and false positives before ranking.
- If the evidence is insufficient to identify relevant expiring work, say so clearly and stop.

### Steps
1. Before any other GovTribe tool call, use `Documentation`, read the required articles above, and add the optional articles needed for the exact path you will run.
    - Use the documentation results to confirm valid tool names, filters, lifecycle date fields, `search_mode`, `query`, `fields_to_return`, `per_page`, `sort`, aggregation options, and saved-search handoff.

2. Resolve the target scope and reusable IDs before deeper search when identifiers materially improve filtering.
    - Use `Search_Vendors` when the user provides a company name, incumbent vendor, or GovTribe vendor link.
    - Use `Search_Federal_Agencies`, `Search_Naics_Categories`, and `Search_Psc_Categories` to resolve agency and classification filters.
    - For exact vendor names, identifiers, or known contract or vehicle numbers, use `search_mode: "keyword"` with a quoted `query`.
    - If the user provides a known contract, order, BPA, IDIQ, or vehicle identifier, resolve it with the most likely dataset first:
        - `Search_Federal_Contract_Awards` for definitive contracts, purchase orders, delivery orders, and BPA calls
        - `Search_Federal_Contract_IDVs` for vendor-specific BPAs, IDIQs, GWAC awards, schedules, BOAs, and similar parent instruments
        - `Search_Federal_Contract_Vehicles` for master vehicle names or acronyms
    - Use `fields_to_return` explicitly. At minimum request:
        - `Search_Vendors`: `govtribe_id`, `govtribe_url`, `name`, `uei`, `dba`, `business_types`, `sba_certifications`, `parent_or_child`, `parent`, `naics_category`, `federal_contract_awards`, `federal_contract_idvs`, `awarded_federal_contract_vehicle`, `govtribe_ai_summary`
    - Extract the strongest available scope signals:
        - Target company, agency, or market segment
        - Core product or service area
        - Relevant NAICS, PSC, or other classifications
        - Contract or vehicle type
        - Time horizon for expiration
        - Relevant keywords and synonyms
        - Known incumbent vendors, if any
        - Geography or place-of-performance constraints
        - Dollar range or size band
        - Whether the focus is broad market mapping or a narrow account-specific search
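    As an illustrative sketch only, a scope-resolution call for a named incumbent might carry parameters shaped like this. The vendor name is hypothetical, and the exact tool contract must come from the `Search_Vendors_Tool` documentation article, so treat the structure as a sketch rather than a definitive payload:

```python
# Illustrative Search_Vendors parameters for resolving an incumbent by name.
# "Acme Federal Services" is a made-up vendor; field names mirror this
# workflow's own lists. Confirm the real contract against the documentation.
vendor_resolution_params = {
    "search_mode": "keyword",            # exact-name resolution, not semantic
    "query": '"Acme Federal Services"',  # quoted for exact vendor-name matching
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "name", "uei", "dba",
        "business_types", "sba_certifications", "parent_or_child",
        "parent", "naics_category", "govtribe_ai_summary",
    ],
}
```

    The returned `govtribe_id` values then feed `vendor_ids` filters in the lifecycle searches below.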

3. Run keyword and filter-first lifecycle searches before semantic broadening.
    - There is no separate structured-search tool. Use the lifecycle datasets with `search_mode: "keyword"` plus structured filters first.
    - If structured filters fully define the cohort, set `query: ""`.
    - Use `per_page` intentionally. For broad market mapping, you may start with `per_page: 0` plus `aggregations`, then rerun with row retrieval.
    - Award path with `Search_Federal_Contract_Awards`:
        - Use `ultimate_completion_date_range` for expiration or completion-horizon filtering.
        - Add the strongest available `vendor_ids`, `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_award_types`, `set_aside_types`, `dollars_obligated_range`, `ceiling_value_range`, and `place_of_performance_ids`.
        - Use `sort` with `completionDate` when completion timing should drive review order.
        - At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `completion_date`, `ultimate_completion_date`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `dollars_obligated`, `ceiling_value`, `set_aside_type`, `awardee`, `parent_of_awardee`, `federal_contract_idv`, `federal_contract_vehicle`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `place_of_performance`, and `originating_federal_contract_opportunity`.
    - IDV path with `Search_Federal_Contract_IDVs`:
        - Use `last_date_to_order_range` for ordering-period expiration.
        - Add the strongest available `vendor_ids`, `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_idv_types`, `multiple_or_single_award`, `set_aside_types`, and `ceiling_value_range`.
        - Use `sort` with `lastDateToOrder` when ordering-window timing should drive review order.
        - At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `last_date_to_order`, `contract_type`, `description`, `govtribe_ai_summary`, `ceiling_value`, `set_aside`, `multiple_or_single_award`, `awardee`, `parent_of_awardee`, `federal_contract_vehicle`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `place_of_performance`, `blanket_purchase_agreements`, `task_orders`, and `originating_federal_contract_opportunity`.
    - Vehicle path with `Search_Federal_Contract_Vehicles`:
        - Use `last_date_to_order_range` for master-vehicle ordering-period expiration.
        - Add the strongest available `vendor_ids`, `funding_federal_agency_ids`, `federal_contract_vehicle_types`, `federal_meta_opportunity_ids`, and `shared_ceiling_value_range`.
        - Use `sort` with `lastDateToOrder` when ordering-window timing should drive review order.
        - At minimum request `govtribe_id`, `govtribe_url`, `name`, `award_date`, `last_date_to_order`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `set_aside_type`, `shared_ceiling`, `federal_agency`, `federal_contract_awards`, and `originating_federal_contract_opportunity`.
    - Supporting transactions path with `Search_Federal_Transactions`:
        - Use this only after awards, IDVs, or vehicles are identified.
        - Use `query: ""`, `search_mode: "keyword"`, and the strongest available `federal_contract_award_ids`, `federal_contract_idv_ids`, `federal_contract_vehicle_ids`, `vendor_ids`, `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, and `psc_category_ids`.
        - Use `sort` with `transactionDate` descending when reviewing recent modification activity.
        - At minimum request `govtribe_id`, `date`, `last_mod_number`, `reason_for_modification`, `total_value`, `federal_value`, `awardee`, `funding_federal_agency`, and `contracting_federal_agency`.
    - Do not invent unsupported lifecycle filters such as a generic `option_end_date`. If option timing matters, treat it as inference only when the returned evidence supports that reading.
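    A minimal sketch of the award row pass, assuming an ISO date shape for the range filter and a placeholder NAICS ID resolved in step 2. Both assumptions must be checked against the `Date_Filtering_Guide` and the awards tool documentation before calling:

```python
# Illustrative Search_Federal_Contract_Awards parameters: awards completing
# inside a 24-month window, cohort fully filter-defined, soonest-first order.
# The date format and sort shape are assumptions, not the documented contract.
award_lifecycle_params = {
    "search_mode": "keyword",
    "query": "",                                    # filter-defined cohort
    "ultimate_completion_date_range": {
        "start": "2025-01-01",                      # assumed ISO date shape
        "end": "2026-12-31",
    },
    "naics_category_ids": ["<resolved-naics-id>"],  # placeholder from step 2
    "sort": {"completionDate": "asc"},              # review soonest expirations first
    "fields_to_return": [
        "govtribe_id", "name", "contract_number", "ultimate_completion_date",
        "awardee", "contracting_federal_agency", "govtribe_ai_summary",
    ],
}
```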

4. Use aggregation-first lifecycle passes when the user wants market sizing, concentration checks, or a broad scan.
    - Use `query: ""`, `search_mode: "keyword"`, and `per_page: 0`.
    - Omit `fields_to_return` unless you also need rows.
    - Apply the same resolved filters that will define the later row cohort.
    - Keep aggregation calls focused on one market-sizing question when possible.
    - Award aggregation path with `Search_Federal_Contract_Awards`:
        - Use `ultimate_completion_date_range` and the strongest available `vendor_ids`, `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_award_types`, `set_aside_types`, `dollars_obligated_range`, `ceiling_value_range`, and `place_of_performance_ids`.
        - Use `aggregations` such as `dollars_obligated_stats`, `top_awardees_by_dollars_obligated`, `top_federal_contract_vehicles_by_dollars_obligated`, `top_set_aside_types_by_dollars_obligated`, `top_naics_codes_by_dollars_obligated`, and `top_psc_codes_by_dollars_obligated`.
    - IDV aggregation path with `Search_Federal_Contract_IDVs`:
        - Use `last_date_to_order_range` and the strongest available `vendor_ids`, `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_idv_types`, `multiple_or_single_award`, `set_aside_types`, and `ceiling_value_range`.
        - Use `aggregations` such as `top_awardees_by_doc_count`, `top_vehicles_by_doc_count`, `top_set_aside_types_by_doc_count`, `top_naics_codes_by_doc_count`, and `top_psc_codes_by_doc_count`.
    - Use the aggregation results to estimate the size of the expiring segment, identify which awardees, vehicles, and categories dominate the scoped market, and decide whether the row-level pass should emphasize awards, IDVs, or both.
    - If the user asked for likely recompetes rather than a broad expiring-work list, treat the full expiring scoped population as the denominator cohort and a tighter follow-on-signal cohort as the numerator. Use the same base filters so you can explain how much of the expiring universe looks plausibly actionable.
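    The aggregation-first pass differs from the row pass only in suppressing rows and requesting aggregations. A sketch, reusing the same assumed filter shapes as above and aggregation names taken from this workflow (verify both against the `Aggregation_and_Leaderboard_Guide`):

```python
# Illustrative aggregation-only sizing call: per_page 0 returns no rows,
# only aggregations over the same filter-defined expiring cohort.
sizing_params = {
    "search_mode": "keyword",
    "query": "",
    "per_page": 0,  # aggregation-only pass; omit fields_to_return
    "ultimate_completion_date_range": {
        "start": "2025-01-01",  # assumed ISO date shape
        "end": "2026-12-31",
    },
    "aggregations": [
        "dollars_obligated_stats",
        "top_awardees_by_dollars_obligated",
        "top_federal_contract_vehicles_by_dollars_obligated",
    ],
}
```

    Rerunning the same parameters with a nonzero `per_page` and `fields_to_return` then yields the row cohort the aggregations described.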

5. Broaden the search only after the keyword and aggregation passes.
    - Use `Search_Federal_Contract_Awards`, `Search_Federal_Contract_IDVs`, or `Search_Federal_Contract_Vehicles` again with `search_mode: "semantic"`.
    - Build a concise plain-language `query` from the resolved scope, mission language, and domain synonyms.
    - Keep the strongest structured filters in place while broadening.
    - Use `_score`-based `sort` for semantic passes unless the user specifically needs date ordering instead.
    - Use `similar_filter` only if the current tool supports it and you have a strong seed record with the correct `govtribe_type` and `govtribe_id`.
    - Do not treat `Search_Federal_Transactions` as the primary semantic-search surface.
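    The semantic pass keeps the structured filters and swaps the empty query for plain-language scope text. A sketch, with a hypothetical query string and the same assumed date shape as earlier passes:

```python
# Illustrative semantic-broadening parameters: plain-language query over the
# same lifecycle filters, ranked by relevance score rather than by date.
# The query text is invented scope language, not from any real engagement.
semantic_params = {
    "search_mode": "semantic",
    "query": "enterprise IT service desk and end-user support",
    "ultimate_completion_date_range": {
        "start": "2025-01-01",
        "end": "2026-12-31",
    },
    "sort": {"_score": "desc"},  # relevance-first for semantic passes
    "fields_to_return": [
        "govtribe_id", "name", "descriptions", "govtribe_ai_summary",
    ],
}
```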

6. Judge relevance with dataset-specific evidence and record structure.
    - For `Search_Federal_Contract_Awards`, inspect both `descriptions` and `govtribe_ai_summary`.
    - For `Search_Federal_Contract_IDVs`, inspect both `description` and `govtribe_ai_summary`.
    - For `Search_Federal_Contract_Vehicles`, inspect both `descriptions` and `govtribe_ai_summary`.
    - For `Search_Federal_Transactions`, use transaction rows only for supporting lifecycle evidence such as recent modifications, obligation activity, and modification reasons.
    - Use relationship fields such as `originating_federal_contract_opportunity`, `federal_contract_vehicle`, `federal_contract_idv`, `task_orders`, and `blanket_purchase_agreements` when they materially clarify likely follow-on structure.
    - Distinguish base awards, task orders, BPA calls, IDIQs, and master vehicles using documented dataset fields, not intuition.

7. Remove results that are not meaningfully relevant, are already closed out without meaningful follow-on potential, or are supported only by weak keyword overlap.

8. If no relevant expiring work remains after review, say so clearly and stop.

9. Normalize the surviving results and extract key facts.
    - Use the explicit fields returned from awards, IDVs, vehicles, and supporting transactions.
    - When incumbent identity or parent-child context matters, use `Search_Vendors` with `vendor_ids` from award relationships and request `govtribe_id`, `govtribe_url`, `name`, `uei`, `dba`, `parent_or_child`, `parent`, `business_types`, `sba_certifications`, and `govtribe_ai_summary`.
    - Record and normalize:
        - Contract or vehicle name
        - Agency or buying office
        - Incumbent vendor
        - Vehicle or agreement type
        - Relevant lifecycle date such as `ultimate_completion_date`, `completion_date`, or `last_date_to_order`
        - Contract number or identifier
        - Short scope summary
        - Key follow-on or recompete signals
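    One way to project a surviving hit onto the normalized record above, including the lifecycle-date fallback order. The record shape is this workflow's own checklist, not a GovTribe schema, and the field names assume the `fields_to_return` lists requested earlier:

```python
def normalize_expiring_record(raw: dict) -> dict:
    """Project a raw award/IDV hit onto the normalized recompete record.

    Falls back through the documented lifecycle fields in order:
    ultimate_completion_date, then completion_date, then last_date_to_order.
    """
    return {
        "record_name": raw.get("name"),
        "agency": raw.get("contracting_federal_agency"),
        "incumbent": raw.get("awardee"),
        "instrument_type": raw.get("contract_type"),
        "lifecycle_date": raw.get("ultimate_completion_date")
            or raw.get("completion_date")
            or raw.get("last_date_to_order"),
        "contract_number": raw.get("contract_number"),
        "scope_summary": raw.get("govtribe_ai_summary"),
    }
```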

10. If the user requested a reusable search, create or recommend a saved search or alert structure that matches the final refined criteria.
    - Preserve the final `search_persistence_id` from the chosen award, IDV, or vehicle search.
    - Only call `Create_Saved_Search` after the final query is defined.
    - If creating a saved search, provide `search_persistence_id`, `name`, and `frequency`.
    - Restrict `frequency` guidance to `Daily`, `Weekly`, `Instant`, or `Never`.
    - If the user did not request creation, recommend the saved-search logic, filters, and cadence in plain language instead of inventing a tool call.
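    A sketch of the handoff payload. The persistence ID is deliberately left as a placeholder: it must be the `search_persistence_id` returned by the final `search_*` call and can never be invented, and the saved-search name here is purely hypothetical:

```python
# Illustrative Create_Saved_Search payload built from the final refined search.
# The persistence ID placeholder must be replaced with the real value from
# the preceding search_* call; frequency is constrained by this workflow.
saved_search_params = {
    "search_persistence_id": "<persistence-id-from-final-search>",
    "name": "Expiring IT services awards, next 24 months",  # hypothetical
    "frequency": "Weekly",  # allowed: Daily, Weekly, Instant, Never
}
```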

11. Rank the remaining results using the priority labels below, then perform a verification pass.
    - Remove weak, edge-case, or low-similarity matches and check whether the ranking still holds.
    - Rerun the cleaned cohort rather than trusting the first page.
    - Use `per_page` and additional pages as needed to confirm coverage.
    - For broad market scans, rerun at least one cleaned aggregation cohort on the same supporting tool you used for market sizing.
    - If the ranking shifts materially after cleanup, lower confidence and explain why.

### Priority Labels
Use these labels:
- **Very High**
- **High**
- **Medium**
- **Low**
- **Exclude**

Score results using these factors:
- Direct relevance to the target scope
- Credibility of the expiration or lifecycle date
- Evidence that the work is ongoing, recurring, or likely to be recompeted
- Agency or customer importance
- Similarity in scope, category, and vehicle type
- Similarity in dollar range
- Presence of a clear incumbent
- Alignment between the result and the target in `descriptions` or `description`
- Alignment between the result and the target in `govtribe_ai_summary`
- Supporting transaction evidence, when recent modification activity materially sharpens the recompete signal

Guidance:
- **Very High**: Strong target fit, clear relevant lifecycle date, and strong signs the work is likely to continue or recompete
- **High**: Clear fit with good lifecycle evidence and a plausible follow-on path
- **Medium**: Potentially relevant, but one or more meaningful gaps remain
- **Low**: Only partial or adjacent fit; weakly supported
- **Exclude**: Clear mismatch in scope, agency, vehicle structure, timing, or textual evidence

Ranking rules:
- Do not rank a result above **Medium** unless there is both strong scope relevance and a credible lifecycle signal such as `ultimate_completion_date`, `last_date_to_order`, or recent modification evidence that supports likely follow-on action.
- Do not treat every expiring contract as a likely recompete.

## Tool Budget
Design the workflow to stay compact.

Typical path:
- 4 to 6 documentation calls
- 0 to 4 scope-resolution calls
- 1 aggregation-first lifecycle sizing pass
- 1 keyword/filter-first row-retrieval pass
- 0 to 1 semantic pass
- 0 to 1 transactions or vendor-normalization call
- 0 to 1 saved-search handoff

Expected total:
- Typical: 8 to 11 calls
- High end: 12 to 14 calls

Budget rule:
- Avoid exceeding 15 calls unless an additional call materially changes correctness, completeness, or grounding.

## Output Format
Do not return a loose list of expiring records.

Use compact markdown tables by default for structured market-size summaries and ranked expiring-work results.

Use Mermaid only when denominator-versus-numerator comparisons or concentration patterns materially improve interpretation.
- Use `xychart-beta` for denominator-versus-numerator or simple window comparisons.
- Use `pie` only when one or a few players, vehicles, or set-aside buckets dominate the cohort.
- Only add a chart when it materially improves interpretation, include a short explanation, and fall back to the compact table if the data is sparse or Mermaid is unavailable.

Return the answer in this order:

1. **Search Scope Summary**
    - Briefly summarize how the target scope was interpreted.

2. **Search Approach**
    - Briefly explain which `Documentation.article_name` calls were used.
    - Briefly explain which `Search_*` tools and lifecycle date fields were used.
    - Briefly explain how the aggregation-first sizing pass, keyword and filter-first row pass, and semantic pass were used.
    - Briefly note any date windows, filters, or aggregations applied.

3. **Market Size / Concentration Summary**
    - Start with a compact markdown table summarizing the major awardees, vehicles, set-aside patterns, classifications, or other dominant cohort signals.
    - Briefly summarize the size of the scoped expiring market.
    - Briefly note whether the market looks concentrated or diffuse and, when relevant, how much of it looks like plausible recompete territory.

4. **Top Expiring Contracts / Recompetes**
    - Present this section as a compact markdown table first.
    - Recommended columns: `Rank`, `Record`, `Agency`, `Incumbent`, `Lifecycle Date`, `Priority`.
    - If rationale, caveats, or specific evidence do not fit cleanly in the table, add them immediately below the table as short notes.

5. **Why Others Were Excluded**
    - Briefly note weak matches, ambiguous records, or results with poor recompete signals.

6. **Saved Search / Alert Recommendation**
    - If relevant, provide the recommended or created saved-search logic, filters, `frequency`, and `name`.
    - Keep this section prose or a short field list. Do not add charts here.

7. **Overall Confidence**
    - State overall confidence in the output and why.
    - Call out ambiguities, unsupported date interpretations, or missing lifecycle details when they materially affect the answer.

## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.

## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
