# Federal Buying Pattern Analysis
## User Input
- **Target market slice:** [Buyer/customer dimension plus work dimension when available, or one resolved buying lane such as agency + NAICS, agency + work category, or a standalone PSC, vehicle, IDV, or work category]
## Goal
Use GovTribe MCP tools to determine how a federal buyer, customer, or resolved federal buying lane actually buys a given type of contract work.
## Required Input
The user must provide a **target market slice** before analysis begins.
For this prompt, a **target market slice** means a federal buying lane defined by:
- a buyer or customer dimension when available, such as an agency, bureau, office, or buying organization
- a work dimension when available, such as a capability lane, work category, NAICS, PSC, vehicle, IDV, or contract seed used only to infer buying structure
- a fixed 24-month historical window
If both a buyer/customer dimension and a work dimension are provided, treat them together as one combined slice.
If only one dimension is provided, resolve that dimension and explicitly state what the resulting slice became.
Accept any of the following:
- Buyer or customer dimension, such as an agency, bureau, office, or buying organization
- Work dimension, such as a capability area, market lane, NAICS, PSC, vehicle, or IDV
- Contract number only if it is being used as a seed to infer buying structure
- Plain-language description of the work category or buying lane
Optional constraints the user may provide:
- Known incumbent or vendor context, but only if it sharpens the slice rather than turning the workflow into a Vendor Deep Dive
Input rules:
- If the input resolves cleanly to one target market slice, proceed immediately.
- If the input is too vague to resolve to one market slice, ask for the minimum missing detail required to proceed.
- If a buyer/customer dimension and work dimension are both present, keep both in the slice definition.
- If only one dimension is present, explicitly state the resolved single-dimension slice in the output.
- Do not guess the target market slice.
- Do not start substantive analysis until the slice is resolved.
## Workflow
### Steps
1. Call `Documentation` once with `article_names=["Search_Query_Guide", "Search_Mode_Guide", "Aggregation_and_Leaderboard_Guide", "Date_Filtering_Guide"]` before any other GovTribe tool.
- Use the documentation results to confirm valid tool names, filter names, `search_mode`, `query`, `fields_to_return`, `per_page`, `sort`, and aggregation options before searching.
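For example, a minimal sketch of this one-time documentation call's arguments (the article names are exactly those listed above; the keyword shape mirrors the inline example):

```python
# Illustrative arguments for the one-time Documentation call in step 1.
documentation_args = {
    "article_names": [
        "Search_Query_Guide",
        "Search_Mode_Guide",
        "Aggregation_and_Leaderboard_Guide",
        "Date_Filtering_Guide",
    ]
}
```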
2. Resolve and normalize the target market slice, resolving buyer, work, vendor-context, or record identifiers when they materially improve filtering.
- This workflow does not require a target company.
- Resolve buyer or customer names into `federal_agency_ids` when names are provided.
- Resolve work dimensions into `naics_category_ids` or `psc_category_ids` when codes or labels are provided.
- Resolve a vehicle or IDV identifier into the correct GovTribe record when the user provides one directly.
- If the user provides a contract number, vehicle, or IDV only as a seed, use it to infer the broader buying lane from the returned buyer, work-category, vehicle, IDV, and classification signals. Do not let the workflow collapse into a single-record deep dive unless the user explicitly scopes the question that narrowly.
- If the user provides a known incumbent or vendor only as context for the market slice, use `Search_Vendors` to normalize identity, but do not let the workflow drift into a Vendor Deep Dive.
- Resolve agencies, NAICS, PSCs, vehicles, and IDVs only when that resolution materially sharpens the slice.
- Express the final slice explicitly as buyer/customer dimension plus work dimension when both exist, or as the resolved single-dimension slice when only one exists.
- Use a fixed `award_date_range` covering the last 24 months.
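A minimal sketch of what a fully resolved slice could look like, assuming a `from`/`to` shape for `award_date_range` (confirm the real shape in the `Date_Filtering_Guide`); both IDs are hypothetical placeholders:

```python
from datetime import date, timedelta

# Hypothetical resolved slice: buyer dimension + work dimension + fixed 24-month window.
today = date.today()
resolved_slice = {
    "contracting_federal_agency_ids": ["<resolved-agency-id>"],  # hypothetical ID
    "naics_category_ids": ["<resolved-naics-id>"],               # hypothetical ID
    "award_date_range": {                                        # assumed from/to shape
        "from": (today - timedelta(days=730)).isoformat(),       # ~24 months back
        "to": today.isoformat(),
    },
}
```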
3. Run a historical award aggregation pass first.
- Use `Search_Federal_Contract_Awards` as the primary evidence surface because market structure is best grounded in actual buying behavior.
- Use `per_page: 0`.
- Keep the scope filters stable across later comparison passes.
- Preferred filters when available:
- `contracting_federal_agency_ids`
- `funding_federal_agency_ids`
- `naics_category_ids`
- `psc_category_ids`
- `federal_contract_vehicle_ids`
- `federal_contract_idv_ids`
- `federal_contract_award_types`
- `award_date_range`
- Preferred aggregations:
- `dollars_obligated_stats`
- `top_awardees_by_dollars_obligated`
- `top_contracting_federal_agencies_by_dollars_obligated`
- `top_funding_federal_agencies_by_dollars_obligated`
- `top_idvs_by_dollars_obligated`
- `top_federal_contract_vehicles_by_dollars_obligated`
- `top_set_aside_types_by_dollars_obligated`
- `top_naics_codes_by_dollars_obligated`
- `top_psc_codes_by_dollars_obligated`
- Use this pass to estimate market size, measure concentration, identify set-aside posture, identify dominant agencies or offices, derive value-band behavior from returned dollars statistics, and determine whether vehicle concentration is strong enough to justify a dedicated vehicle or IDV branch.
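Continuing from the step-2 sketch's `resolved_slice`, the aggregation pass might be parameterized as below. The `aggregations` parameter name is itself an assumption; confirm the real aggregation syntax against the `Aggregation_and_Leaderboard_Guide` output before calling `Search_Federal_Contract_Awards`.

```python
# Sketch of the step-3 aggregation-only pass (rows suppressed with per_page: 0).
aggregation_pass = {
    **resolved_slice,      # stable scope filters from the step-2 sketch
    "per_page": 0,         # aggregation-only: return no rows
    "aggregations": [      # assumed parameter name
        "dollars_obligated_stats",
        "top_awardees_by_dollars_obligated",
        "top_set_aside_types_by_dollars_obligated",
        "top_federal_contract_vehicles_by_dollars_obligated",
        "top_naics_codes_by_dollars_obligated",
        "top_psc_codes_by_dollars_obligated",
    ],
}
```

Keeping `resolved_slice` identical across this pass, the row pass, and any trend windows is what makes the later comparisons valid.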
4. Run a historical award row-retrieval pass after the aggregation pass.
- Use `Search_Federal_Contract_Awards` again with the same stable scope filters.
- Do not assume the first page is representative.
- If the first page is dominated by very large outliers, weak lane matches, or records that are not representative of the resolved slice, rerun with a more representative sort, tighter resolved filters, or additional pagination before choosing example rows.
- Request:
- `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `completion_date`, `ultimate_completion_date`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `dollars_obligated`, `ceiling_value`, `set_aside_type`, `awardee`, `parent_of_awardee`, `federal_contract_idv`, `federal_contract_vehicle`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `place_of_performance`, `originating_federal_contract_opportunity`
- Use these rows to pull representative examples, confirm whether the market is stand-alone or vehicle-linked, and support statements about structure, size, and repeat players.
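Continuing the same sketch, the row-retrieval pass reuses `resolved_slice` unchanged; the page size and sort value below are illustrative assumptions, and the field list is a subset of the full request above:

```python
# Sketch of the step-4 row-retrieval pass over the same scope filters.
row_pass = {
    **resolved_slice,
    "per_page": 25,                   # illustrative page size
    "sort": "<representative-sort>",  # assumption: choose per the Search_Query_Guide
    "fields_to_return": [             # subset of the full request list above
        "govtribe_id", "govtribe_url", "name", "contract_number", "award_date",
        "dollars_obligated", "set_aside_type", "awardee", "federal_contract_idv",
        "federal_contract_vehicle", "contracting_federal_agency",
        "naics_category", "psc_category",
    ],
}
```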
5. Run the vehicle or IDV structure branch only when it materially improves correctness.
- Only use this branch when one of these is true:
- Award results show strong concentration in one or a few vehicles.
- The single-award vs. multiple-award question cannot be answered credibly from award rows alone.
- The vehicle, BPA, GWAC, IDIQ, or stand-alone buying structure is central to the market characterization.
- IDV path with `Search_Federal_Contract_IDVs` (sketched after this step):
- Use directly linked IDV IDs or an exact quoted contract number if needed.
- Request:
- `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `last_date_to_order`, `contract_type`, `description`, `govtribe_ai_summary`, `ceiling_value`, `set_aside`, `multiple_or_single_award`, `awardee`, `parent_of_awardee`, `federal_contract_vehicle`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `place_of_performance`, `task_orders`, `blanket_purchase_agreements`, `originating_federal_contract_opportunity`
- Use this branch to support single-award vs. multiple-award interpretation, parent-vehicle structure, ordering behavior, and whether the lane primarily flows through IDVs.
- Vehicle path with `Search_Federal_Contract_Vehicles`:
- Use directly linked vehicle IDs or an exact quoted vehicle name if needed.
- Request:
- `govtribe_id`, `govtribe_url`, `name`, `award_date`, `last_date_to_order`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `set_aside_type`, `shared_ceiling`, `originating_federal_contract_opportunity`, `federal_agency`, `federal_contract_awards`
- Use this branch to support master-vehicle dependence, shared-ceiling structure, agency vehicle sponsorship, and whether the work is largely vehicle-routed or more stand-alone.
- Do not run both branches unless the evidence requires both and the extra call materially improves correctness.
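If the IDV path above is justified, its call might look like the sketch below; the quoted contract number is a hypothetical seed, and the vehicle path would mirror this shape with the `Search_Federal_Contract_Vehicles` field list instead:

```python
# Sketch of an IDV-path lookup seeded by an exact quoted contract number.
idv_pass = {
    "query": '"<contract-number>"',  # hypothetical seed; exact quoting per the guidance above
    "per_page": 5,                   # illustrative page size
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "name", "contract_number",
        "multiple_or_single_award", "ceiling_value", "set_aside",
        "federal_contract_vehicle", "task_orders", "blanket_purchase_agreements",
    ],
}
```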
6. Use a current opportunity overlay only when present-tense buying posture or projection validation materially improves the answer.
- Use `Search_Federal_Contract_Opportunities` only when one of these is true:
- Historical awards alone do not explain whether the current market posture is persisting.
- Current pipeline evidence materially sharpens present-tense buying posture.
- Current pipeline evidence materially sharpens vehicle or set-aside interpretation.
- Comparing historical buying behavior with live pipeline behavior materially improves the characterization.
- If this branch runs, use `opportunity_types`-based passes rather than a loose mixed notice set (sketched after this step):
- Run a `Solicitation` pass when active demand matters.
- Run a `Pre-Solicitation` pass when near-term demand matters.
- Add `Special Notice` only when it materially sharpens structure or set-aside validation.
- Use a concise keyword query only when it materially sharpens the resolved slice inside those typed passes.
- Request:
- `govtribe_id`, `govtribe_url`, `name`, `solicitation_number`, `opportunity_type`, `set_aside_type`, `posted_date`, `due_date`, `descriptions`, `govtribe_ai_summary`, `federal_meta_opportunity_id`, `federal_contract_vehicle`, `federal_agency`, `place_of_performance`, `naics_category`, `psc_category`, `points_of_contact`
- Preferred aggregations when needed:
- `top_federal_agencies_by_doc_count`
- `top_set_aside_types_by_doc_count`
- `top_naics_codes_by_doc_count`
- `top_psc_codes_by_doc_count`
- Use this branch only to validate whether the current market resembles the historical buying pattern, whether set-aside posture is tightening or loosening, and whether vehicle usage is persisting into the current pipeline.
- Keep this live-demand overlay subordinate to the historical buying profile. Do not let a sparse, off-pattern, or empty live cohort override the core pattern derived from awards.
- If this branch is thin or returns no usable demand, say so clearly rather than implying current demand exists.
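When the overlay is justified, a typed `Solicitation` pass might look like the sketch below. Whether every award-side filter carries over to opportunities is an assumption, as is the `aggregations` parameter name; the IDs are hypothetical placeholders:

```python
# Sketch of a typed current-opportunity pass (active demand only).
solicitation_pass = {
    "opportunity_types": ["Solicitation"],           # active demand
    "federal_agency_ids": ["<resolved-agency-id>"],  # hypothetical ID, per step 2
    "naics_category_ids": ["<resolved-naics-id>"],   # hypothetical ID
    "per_page": 0,                                   # aggregation-only first look
    "aggregations": [                                # assumed parameter name, as in step 3
        "top_federal_agencies_by_doc_count",
        "top_set_aside_types_by_doc_count",
        "top_naics_codes_by_doc_count",
        "top_psc_codes_by_doc_count",
    ],
}
```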
7. Use a recent-vs.-prior trend comparison only when change over time materially improves the market characterization.
- Only run this branch if trend, change over time, or projection validation is necessary to support the answer.
- Default comparison window: the most recent 12 months versus the prior 12 months within the fixed 24-month window.
- Rerun the same aggregation set for those two comparable windows unless the user provides a stronger reason to compare a different pair of windows.
- Keep the same scope filters in both windows.
- Compare set-aside mix, value band, buyer concentration, vehicle concentration, and dominant NAICS or PSC posture.
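A minimal sketch of the paired windows, reusing `resolved_slice` from the step-2 sketch; everything except `award_date_range` stays identical between the two passes, and the date placeholders are hypothetical:

```python
# Two comparable 12-month windows inside the fixed 24-month span.
recent_window = {**resolved_slice, "award_date_range": {"from": "<t-12mo>", "to": "<today>"}}
prior_window  = {**resolved_slice, "award_date_range": {"from": "<t-24mo>", "to": "<t-12mo>"}}
# Rerun the step-3 aggregation set once per window, changing nothing else.
```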
8. Classify the market explicitly using evidence-backed labels.
- Buying model:
- `Vehicle-Driven`
- `Mixed`
- `Standalone-Leaning`
- Award structure:
- `Single-Award-Leaning`
- `Multiple-Award-Leaning`
- `Unclear`
- Set-aside posture:
- `Mostly Unrestricted`
- `Mixed Set-Aside`
- `Small-Business Heavy`
- Concentration:
- `Highly Concentrated`
- `Moderately Concentrated`
- `Fragmented`
- Do not assign these labels from a few isolated rows. The classification should follow the cleaned aggregation and representative-row evidence.
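A worked example of how the finished labels might be recorded; every value below is hypothetical, and the vocabulary is exactly the label set defined above:

```python
# Hypothetical classification output; each label must trace to cleaned aggregation
# and representative-row evidence, never a few isolated rows.
classification = {
    "buying_model": "Vehicle-Driven",
    "award_structure": "Multiple-Award-Leaning",
    "set_aside_posture": "Mixed Set-Aside",
    "concentration": "Moderately Concentrated",
}
```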
9. Exclude weak or misleading evidence explicitly.
- Exclude keyword-adjacent awards outside the intended capability lane.
- Exclude cross-lane contamination caused by loose NAICS or PSC overlap.
- Exclude one-off outlier contracts treated as the whole market.
- Exclude sparse current-opportunity evidence presented as a strong future projection.
- Exclude vehicle assumptions not supported by returned fields.
- Exclude grant and state-and-local interpretations; this workflow covers federal contracts only.
10. Perform a verification pass before finalizing the answer.
- Remove obvious outliers or weak matches.
- Confirm the main classification still holds after cleanup.
- If trend claims are central, confirm they are based on comparable windows.
- Lower confidence explicitly when the slice is sparse or the trend claim relies on thin evidence.
- If evidence is too sparse to characterize the market credibly, say so clearly and stop.
## Output Format
Return the answer in this order:
1. **Federal Buying Pattern Summary**
- Briefly explain how the market slice was interpreted.
- Explicitly state the buyer/customer dimension, the work dimension, and the fixed last-24-month window used.
2. **Search Approach**
- Briefly explain how the historical-award aggregation pass, row-retrieval pass, optional vehicle or IDV branch, optional current-opportunity overlay, and any trend comparison were used.
- Briefly note the fixed 24-month window, the resolved buyer/work slice, and any narrowing decisions applied.
3. **Federal Buying Pattern Snapshot**
- Use a required markdown table.
- Recommended columns: `Dimension`, `Finding`, `Evidence`.
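An illustrative skeleton with placeholder values:

| Dimension | Finding | Evidence |
| --- | --- | --- |
| Buying model | [e.g., Vehicle-Driven] | [aggregation or rows cited] |
| Set-aside posture | [e.g., Mixed Set-Aside] | [set-aside aggregation cited] |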
4. **Buying Pattern Findings**
- Summarize the main buying model, award structure, set-aside posture, value-band pattern, buyer concentration, repeat-awardee pattern, and any contracting-versus-funding differences that matter.
5. **Vehicle / IDV Findings**
- Include only when that branch was actually used.
6. **Current Opportunity Overlay**
- Include only when that branch was actually used.
7. **Representative Records**
- Use a compact markdown table.
- Recommended columns: `Record`, `Agency`, `Vehicle / Structure`, `Set-Aside`, `Why It Matters`.
8. **Risks, Gaps, or Unknowns**
- Briefly note sparse data, ambiguous structure, outlier sensitivity, or thin projection evidence.
9. **Overall Confidence**
- State overall confidence and why.
### Optional charts
Use Mermaid only when aggregation evidence materially improves interpretation:
- `pie` for concentration or set-aside mix
- `xychart-beta` for recent-vs.-prior trend comparison
Fall back to compact markdown tables when the data is sparse.
## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.
## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
