Market Structure / Buying Pattern Analysis

Use this prompt when you want to understand how an agency or market actually buys a given type of work. It analyzes prior awards, vehicles, set-aside mix, value bands, and repeat winners to show whether the market is vehicle-driven, stand-alone, concentrated, or fragmented. It is useful for account planning, qualification, set-aside validation, and deciding how to position before you chase work.

# Market Structure / Buying Pattern Analysis

## User Input
- **Target market slice:** [Agency, customer, market lane, NAICS, PSC, vehicle, IDV, or work category]

## Goal
Use GovTribe MCP tools to determine how a federal agency, customer, or market slice actually buys a given type of contract work.

Use historical awards as the default evidence surface, then add vehicle, IDV, or current-opportunity context only when it materially improves correctness or the user explicitly asks for it.

## Required Documentation
Before doing any work, call the **GovTribe Documentation** tool and read the documentation required for this workflow.

Required documentation to retrieve and read:
- `article_name="Search_Query_Guide"`
- `article_name="Search_Mode_Guide"`
- `article_name="Aggregation_and_Leaderboard_Guide"`
- `article_name="Date_Filtering_Guide"`
- `article_name="Search_Federal_Contract_Awards_Tool"`

Documentation rules:
- Call the **GovTribe Documentation** tool before the first research or search step.
- Read every required documentation article before using other GovTribe tools.
- Add `article_name="Location_Filtering_Guide"` when location filters matter.
- Add `article_name="Search_Federal_Agencies_Tool"`, `article_name="Search_Naics_Categories_Tool"`, `article_name="Search_Psc_Categories_Tool"`, or `article_name="Search_Vendors_Tool"` when those tools are needed to resolve the market slice.
- Add `article_name="Search_Federal_Contract_IDVs_Tool"` or `article_name="Search_Federal_Contract_Vehicles_Tool"` before the optional vehicle or IDV branch.
- Add `article_name="Search_Federal_Contract_Opportunities_Tool"` before the optional current-opportunity overlay.
- Treat the documentation as binding for tool names, parameters, filter names, `search_mode`, `query`, `fields_to_return`, `per_page`, `sort`, and aggregation behavior.
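
A minimal sketch of the documentation preflight, assuming a generic `call_tool(name, arguments)` helper as a stand-in for however the host application dispatches MCP tool calls; the helper and its signature are assumptions, while the article names come from the required list above.

```python
from typing import Any

def call_tool(name: str, arguments: dict[str, Any]) -> dict:
    """Hypothetical shim for the host's MCP client; replace with the real dispatcher."""
    raise NotImplementedError

REQUIRED_ARTICLES = [
    "Search_Query_Guide",
    "Search_Mode_Guide",
    "Aggregation_and_Leaderboard_Guide",
    "Date_Filtering_Guide",
    "Search_Federal_Contract_Awards_Tool",
]

# Read every required article before any other GovTribe tool call.
for article in REQUIRED_ARTICLES:
    call_tool("Documentation", {"article_name": article})
```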

## Required Input
The user must provide a **target market slice** before analysis begins.

Accept any of the following:
- Agency or customer
- Capability area or market lane
- NAICS or PSC
- Vehicle or IDV name or identifier
- A contract number, used only as a seed to infer buying structure
- Plain-language description of the work category
- Optional geography
- Optional time window

Optional constraints the user may provide:
- Focus on historical awards only
- Include current opportunity overlay
- Focus on set-aside behavior
- Focus on vehicle or IDV usage
- Focus on single-award vs. multiple-award structure
- Focus on buyer offices or awarding offices
- Focus on value bands
- Focus on one agency vs. cross-agency market slice
- Known incumbent or vendor context, only when it sharpens the slice rather than turning the workflow into vendor analysis
- Whether to return `full_analysis` or `buying_pattern_card`

Input rules:
- If the input resolves cleanly to one target market slice, proceed immediately.
- If the input is too vague to resolve to one market slice, ask for the minimum missing detail required to proceed.
- Do not guess the target market slice.
- Do not start substantive analysis until the slice is resolved.

## Workflow

### Rules
- Call `Documentation` before using any other GovTribe tool.
- Federal contracts only. Do not mix in grants or state and local workflows.
- Always set both `search_mode` and `query` on every `Search_*` call.
- Use GovTribe MCP tools as the primary evidence source for market-structure claims.
- Use historical awards as the default primary evidence surface.
- Use `query: ""`, `search_mode: "keyword"`, and `per_page: 0` for aggregation-only cohorts when structured filters define the market slice.
- Use explicit `fields_to_return` whenever fields beyond `govtribe_id` are needed.
- Do not stop early when another tool call is required by the workflow.
- Keep calling tools until the task is complete or the tool budget is reached.
- If a tool returns empty or partial results and the workflow defines another defensible strategy, continue with that next strategy.
- Prefer aggregation-first interpretation over a loose scan of individual rows.
- Use vehicle or IDV analysis only when it materially improves correctness.
- Use current opportunities only as an overlay when the user asks for present-tense buying posture or projection validation.
- Do not treat one-off outlier awards as the whole market.
- Do not infer vehicle usage, single-award posture, or set-aside posture from weak evidence.
- Do not turn this workflow into Relevant Opportunities, Likely Bidders, Vendor Analysis, Incumbent / Prior Performer Finder, Opportunity Deep Dive, or Award Deep Dive.
- If the user wants company-fit next, switch to Relevant Opportunities.
- If the user wants likely competitors next, switch to Likely Bidders.
- If the user wants incumbent identity next, switch to Incumbent / Prior Performer Finder.
- If the user wants one notice or one award next, switch to Opportunity Deep Dive or Award Deep Dive.

### Steps
1. Call `Documentation` before any other GovTribe tool, read the required articles, and add the optional articles needed for the exact path you will run.
   - Use the documentation results to confirm valid tool names, filter names, `search_mode`, `query`, `fields_to_return`, `per_page`, `sort`, and aggregation options before searching.
2. Resolve and normalize the target market slice.
   - This workflow does not require a target company.
   - Resolve agency names into `federal_agency_ids` when names are provided.
   - Resolve NAICS into `naics_category_ids` when codes or labels are provided.
   - Resolve PSC into `psc_category_ids` when codes or labels are provided.
   - Resolve geography into `place_of_performance_ids` when a location filter is needed.
   - Resolve a vehicle or IDV identifier into the correct GovTribe record when the user provides one directly.
   - If the user provides a known incumbent or vendor only as context for the market slice, use `Search_Vendors` to normalize identity, but do not let the workflow drift into vendor analysis.
   - Resolve agencies, NAICS, PSCs, locations, vehicles, and IDVs only when that resolution materially sharpens the slice.
   - If the user does not specify a time window, default to a recent multi-year historical view such as the last 5 years and say so explicitly.
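
   A hedged sketch of the resolver calls, reusing the hypothetical `call_tool` shim from the documentation sketch above; the query strings, `per_page` values, and response handling are illustrative assumptions, not documented behavior.

   ```python
   # Resolve human-readable slice inputs into GovTribe IDs before filtering.
   # call_tool is the hypothetical MCP shim from the documentation sketch.
   agency_hits = call_tool("Search_Federal_Agencies", {
       "search_mode": "keyword",       # always set both search_mode and query
       "query": "Defense Information Systems Agency",  # illustrative agency name
       "per_page": 1,
   })
   naics_hits = call_tool("Search_Naics_Categories", {
       "search_mode": "keyword",
       "query": "541512",              # illustrative NAICS code
       "per_page": 1,
   })
   # Later passes use the returned govtribe_id values as federal_agency_ids
   # and naics_category_ids; exact response shapes come from the docs.
   ```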
3. Run a historical award aggregation pass first.
   - Use `Search_Federal_Contract_Awards` as the primary evidence surface because market structure is best grounded in actual buying behavior.
   - Use `query: ""` when structured filters define the cohort.
   - Use `search_mode: "keyword"`.
   - Use `per_page: 0`.
   - Keep the scope filters stable across later comparison passes.
   - Preferred filters when available:
     - `contracting_federal_agency_ids`
     - `funding_federal_agency_ids`
     - `naics_category_ids`
     - `psc_category_ids`
     - `federal_contract_vehicle_ids`
     - `federal_contract_idv_ids`
     - `federal_contract_award_types`
     - `set_aside_types`
     - `award_date_range`
     - `dollars_obligated_range`
     - `place_of_performance_ids`
   - Preferred aggregations:
     - `dollars_obligated_stats`
     - `top_awardees_by_dollars_obligated`
     - `top_contracting_federal_agencies_by_dollars_obligated`
     - `top_funding_federal_agencies_by_dollars_obligated`
     - `top_federal_contract_vehicles_by_dollars_obligated`
     - `top_set_aside_types_by_dollars_obligated`
     - `top_naics_codes_by_dollars_obligated`
     - `top_psc_codes_by_dollars_obligated`
     - `top_locations_by_dollars_obligated`
   - Use this pass to estimate market size, measure concentration, identify set-aside posture, identify dominant agencies or offices, and determine whether vehicle concentration is strong enough to justify a dedicated vehicle or IDV branch.
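
   A minimal sketch of the aggregation-only pass under the same hypothetical `call_tool` shim; the filter and aggregation names mirror the lists above, while the `aggregations` parameter name, placeholder IDs, and date shape are assumptions to confirm against the documentation.

   ```python
   # Aggregation-only cohort: empty query, keyword mode, zero rows returned.
   scope_filters = {
       "contracting_federal_agency_ids": ["<agency govtribe_id from step 2>"],
       "naics_category_ids": ["<naics govtribe_id from step 2>"],
       "award_date_range": "<last 5 years; exact shape per Date_Filtering_Guide>",
   }
   aggregation_pass = call_tool("Search_Federal_Contract_Awards", {
       "search_mode": "keyword",
       "query": "",
       "per_page": 0,                  # aggregations only, no rows
       **scope_filters,                # keep stable across later passes
       "aggregations": [               # parameter name assumed; confirm in docs
           "dollars_obligated_stats",
           "top_awardees_by_dollars_obligated",
           "top_federal_contract_vehicles_by_dollars_obligated",
           "top_set_aside_types_by_dollars_obligated",
       ],
   })
   ```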
4. Run a historical award row-retrieval pass after the aggregation pass.
   - Use `Search_Federal_Contract_Awards` again with the same stable scope filters.
   - Use `query: ""` when the cohort is fully defined by filters.
   - Request:
     - `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `completion_date`, `ultimate_completion_date`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `dollars_obligated`, `ceiling_value`, `set_aside_type`, `awardee`, `parent_of_awardee`, `federal_contract_idv`, `federal_contract_vehicle`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `place_of_performance`, `originating_federal_contract_opportunity`
   - Use these rows to pull representative examples, confirm whether the market is stand-alone or vehicle-linked, and support statements about structure, size, and repeat players.
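
   A sketch of the row pass, assuming the same shim and the `scope_filters` dict from the aggregation sketch; the `per_page` value and `sort` syntax are placeholders, while the field names come from the request list above.

   ```python
   # Same cohort, now with rows; fields_to_return is explicit because
   # fields beyond govtribe_id are needed.
   row_pass = call_tool("Search_Federal_Contract_Awards", {
       "search_mode": "keyword",
       "query": "",
       "per_page": 25,                 # illustrative sample size
       "sort": "<e.g. dollars_obligated desc; exact syntax per docs>",
       **scope_filters,                # identical scope to the aggregation pass
       "fields_to_return": [
           "govtribe_id", "govtribe_url", "name", "contract_number",
           "award_date", "dollars_obligated", "set_aside_type", "awardee",
           "federal_contract_idv", "federal_contract_vehicle",
           "contracting_federal_agency", "naics_category", "psc_category",
       ],
   })
   ```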
5. Run the vehicle or IDV structure branch only when it materially improves correctness.
   - Only use this branch when one of these is true:
     - The user explicitly asks about contract vehicles, BPAs, GWACs, IDIQs, or stand-alone buying.
     - Award results show strong concentration in one or a few vehicles.
     - The single-award vs. multiple-award question cannot be answered credibly from award rows alone.
   - IDV path with `Search_Federal_Contract_IDVs`:
     - Use directly linked IDV IDs or an exact quoted contract number if needed.
     - Request:
       - `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `last_date_to_order`, `contract_type`, `description`, `govtribe_ai_summary`, `ceiling_value`, `set_aside`, `multiple_or_single_award`, `awardee`, `parent_of_awardee`, `federal_contract_vehicle`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `place_of_performance`, `task_orders`, `blanket_purchase_agreements`, `originating_federal_contract_opportunity`
     - Use this branch to support single-award vs. multiple-award interpretation, parent-vehicle structure, ordering behavior, and whether the lane primarily flows through IDVs.
   - Vehicle path with `Search_Federal_Contract_Vehicles`:
     - Use directly linked vehicle IDs or an exact quoted vehicle name if needed.
     - Request:
       - `govtribe_id`, `govtribe_url`, `name`, `award_date`, `last_date_to_order`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `set_aside_type`, `shared_ceiling`, `originating_federal_contract_opportunity`, `federal_agency`, `federal_contract_awards`
     - Use this branch to support master-vehicle dependence, shared-ceiling structure, agency vehicle sponsorship, and whether the work is largely vehicle-routed or more stand-alone.
   - Do not run both branches unless the evidence requires both and the extra call materially improves correctness.
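
   A hedged sketch of the IDV path only, using the same hypothetical shim; the quoted-contract-number query and `per_page` value are illustrative, and the field names come from the request list above.

   ```python
   # IDV path, run only when the aggregation evidence justifies the branch.
   idv_detail = call_tool("Search_Federal_Contract_IDVs", {
       "search_mode": "keyword",
       "query": '"<exact contract number from a linked award row>"',  # quoted for exact match
       "per_page": 1,
       "fields_to_return": [
           "govtribe_id", "name", "contract_number", "multiple_or_single_award",
           "ceiling_value", "set_aside", "federal_contract_vehicle",
           "task_orders", "last_date_to_order",
       ],
   })
   ```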
6. Use a current opportunity overlay only when the user asks about current buying posture or projection validation.
   - Use `Search_Federal_Contract_Opportunities` only when one of these is true:
     - The user asks how the market is being bought now.
     - The user asks to validate set-aside projections.
     - The user asks about current or near-term buying posture.
     - The user wants to compare historical buying behavior with live pipeline behavior.
   - Default opportunity scope:
     - `Pre-Solicitation`
     - `Solicitation`
     - `Special Notice` only when it materially sharpens structure or set-aside validation
   - Use `query: ""` when structured filters define the cohort, otherwise use a concise keyword query.
   - Request:
     - `govtribe_id`, `govtribe_url`, `name`, `solicitation_number`, `opportunity_type`, `set_aside_type`, `posted_date`, `due_date`, `descriptions`, `govtribe_ai_summary`, `federal_meta_opportunity_id`, `federal_contract_vehicle`, `federal_agency`, `place_of_performance`, `naics_category`, `psc_category`, `points_of_contact`
   - Preferred aggregations when needed:
     - `top_federal_agencies_by_doc_count`
     - `top_set_aside_types_by_doc_count`
     - `top_naics_codes_by_doc_count`
     - `top_psc_codes_by_doc_count`
     - `top_locations_by_doc_count`
   - Use this branch only to validate whether the current market resembles the historical buying pattern, whether set-aside posture is tightening or loosening, and whether vehicle usage is persisting into the current pipeline.
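
   A sketch of the overlay as an aggregation-only call; the opportunity-side filter names (`opportunity_types`, `federal_agency_ids`) and the `aggregations` parameter name are assumptions to verify against `Search_Federal_Contract_Opportunities_Tool`.

   ```python
   # Current-posture overlay: doc-count leaderboards over live notices only.
   overlay = call_tool("Search_Federal_Contract_Opportunities", {
       "search_mode": "keyword",
       "query": "",
       "per_page": 0,
       "opportunity_types": ["Pre-Solicitation", "Solicitation"],   # filter name assumed
       "federal_agency_ids": ["<agency govtribe_id from step 2>"],  # filter name assumed
       "naics_category_ids": ["<naics govtribe_id from step 2>"],
       "aggregations": [               # parameter name assumed; confirm in docs
           "top_set_aside_types_by_doc_count",
           "top_federal_agencies_by_doc_count",
           "top_naics_codes_by_doc_count",
       ],
   })
   ```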
7. Use a recent-vs.-prior trend comparison only when trend is a real part of the question.
   - Only run this branch if the user asks about trend, change over time, or projection validation.
   - Rerun the same aggregation set for two comparable windows.
   - Keep the same scope filters in both windows.
   - Compare set-aside mix, value band, buyer concentration, vehicle concentration, and dominant NAICS or PSC posture.
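
   A sketch of the two-window comparison, reusing the shim and `scope_filters`; the window boundaries are placeholders, and the date syntax must follow `Date_Filtering_Guide`.

   ```python
   # Two comparable windows over an identical scope; the explicit
   # award_date_range key overrides the one carried in scope_filters.
   windows = {
       "prior":  "<e.g. 2018-01-01 through 2020-12-31>",
       "recent": "<e.g. 2021-01-01 through 2023-12-31>",
   }
   trend = {
       label: call_tool("Search_Federal_Contract_Awards", {
           "search_mode": "keyword",
           "query": "",
           "per_page": 0,
           **scope_filters,
           "award_date_range": window,  # only the window changes
           "aggregations": [            # same aggregation set in both windows
               "dollars_obligated_stats",
               "top_set_aside_types_by_dollars_obligated",
               "top_federal_contract_vehicles_by_dollars_obligated",
           ],
       })
       for label, window in windows.items()
   }
   ```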
8. Classify the market explicitly using evidence-backed labels.
   - Buying model:
     - `Vehicle-Driven`
     - `Mixed`
     - `Standalone-Leaning`
   - Award structure:
     - `Single-Award-Leaning`
     - `Multiple-Award-Leaning`
     - `Unclear`
   - Set-aside posture:
     - `Mostly Unrestricted`
     - `Mixed Set-Aside`
     - `Small-Business Heavy`
   - Concentration:
     - `Highly Concentrated`
     - `Moderately Concentrated`
     - `Fragmented`
   - Do not assign these labels from a few isolated rows. The classification should follow the cleaned aggregation and representative-row evidence.
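
   A toy rubric for one label, with purely illustrative thresholds that are not part of this workflow; the real classification should follow the cleaned aggregation and representative-row evidence rather than fixed cutoffs.

   ```python
   def concentration_label(top5_share: float) -> str:
       """Illustrative thresholds only: share of cohort dollars held by the top five awardees."""
       if top5_share >= 0.75:
           return "Highly Concentrated"
       if top5_share >= 0.40:
           return "Moderately Concentrated"
       return "Fragmented"

   # e.g. top_awardees_by_dollars_obligated shows the top five holding 82%:
   assert concentration_label(0.82) == "Highly Concentrated"
   ```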
9. Exclude weak or misleading evidence explicitly.
   - Exclude keyword-adjacent awards outside the intended capability lane.
   - Exclude cross-lane contamination caused by loose NAICS or PSC overlap.
   - Exclude one-off outlier contracts treated as the whole market.
   - Exclude sparse current-opportunity evidence presented as a strong future projection.
   - Exclude vehicle assumptions not supported by returned fields.
   - Exclude grant or state and local interpretation in this contract-only workflow.
10. Perform a verification pass before finalizing the answer.
    - Remove obvious outliers or weak matches.
    - Confirm the main classification still holds after cleanup.
    - If trend claims are central, confirm they are based on comparable windows.
    - Lower confidence explicitly when the slice is sparse or the trend claim relies on thin evidence.
    - If evidence is too sparse to characterize the market credibly, say so clearly and stop.

## Tool Budget
Design the workflow to stay compact.

Typical path:
- 5 required documentation calls
- 0 to 4 additional documentation calls for optional branches or resolver tools
- 0 to 3 resolver calls
- 1 historical-award aggregation pass
- 1 historical-award row pass
- 0 to 1 IDV or vehicle branch
- 0 to 1 current-opportunity overlay
- 0 to 1 trend-comparison or verification pass

Expected total:
- Typical: 8 to 11 calls
- High end with current overlay and trend comparison: 12 to 14 calls

Avoid exceeding 15 calls unless an extra call materially changes correctness.

## Output Format
Return the answer in this order:

1. **Target Market Summary**
   - Briefly explain how the market slice was interpreted.
2. **Search Approach**
   - Briefly explain which `Documentation.article_name` calls were used.
   - Briefly explain how the historical-award aggregation pass, row-retrieval pass, optional vehicle or IDV branch, optional current-opportunity overlay, and any trend comparison were used.
   - Briefly note any filters, time windows, or narrowing decisions applied.
3. **Market Structure Snapshot**
   - Use a required markdown table.
   - Recommended columns: `Dimension`, `Finding`, `Evidence`.
4. **Buying Pattern Findings**
   - Summarize the main buying model, award structure, set-aside posture, value-band pattern, buyer concentration, and repeat-awardee pattern.
5. **Vehicle / IDV Findings**
   - Include only when that branch was actually used.
6. **Current Opportunity Overlay**
   - Include only when that branch was actually used.
7. **Representative Records**
   - Use a compact markdown table.
   - Recommended columns: `Record`, `Agency`, `Vehicle / Structure`, `Set-Aside`, `Why It Matters`.
8. **Risks, Gaps, or Unknowns**
   - Briefly note sparse data, ambiguous structure, outlier sensitivity, or thin projection evidence.
9. **Overall Confidence**
   - State overall confidence and why.

### Optional charts
Use Mermaid only when aggregation evidence materially improves interpretation:
- `pie` for concentration or set-aside mix
- `xychart-beta` for recent-vs.-prior trend comparison

Fall back to compact markdown tables when the data is sparse.
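
A minimal Mermaid `pie` sketch for the set-aside mix; the labels and shares are purely illustrative placeholders, not data from any search, and real values should come from the set-aside aggregation.

```mermaid
pie title Set-aside mix (illustrative placeholder values)
    "Unrestricted" : 55
    "8(a)" : 20
    "SDVOSB" : 15
    "Other set-asides" : 10
```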

## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.

## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
