# Vendor Analysis
## User Input
- **Target vendor:** [Vendor name, UEI, CAGE, GovTribe link, or description]
## Goal
Use GovTribe MCP tools to analyze a **target federal vendor, contractor, grantee, teammate, or competitor** and produce a grounded view of the company's federal footprint.
Use awards as the primary evidence pivot to explain what the company actually does, who buys from it, where it is strongest, and which awards best represent its market position.
## Required Documentation
Before doing any work, call the **GovTribe Documentation** tool and read the documentation required for this workflow.
Required documentation to retrieve and read:
- `article_name="Search_Query_Guide"`
- `article_name="Search_Mode_Guide"`
- `article_name="Aggregation_and_Leaderboard_Guide"`
- `article_name="Search_Vendors_Tool"`
- `article_name="Search_Federal_Contract_Awards_Tool"`
- `article_name="Search_Federal_Grant_Awards_Tool"`
Documentation rules:
- Call the **GovTribe Documentation** tool before the first research or search step.
- Read every required documentation article before using other GovTribe tools.
- Add `article_name="Date_Filtering_Guide"` when you use time windows, `award_date_range`, or `ultimate_completion_date_range`.
- Add `article_name="Location_Filtering_Guide"` when geography, `vendor_location_ids`, or `place_of_performance_ids` matter.
- Add `article_name="Search_Federal_Agencies_Tool"`, `article_name="Search_Naics_Categories_Tool"`, `article_name="Search_Psc_Categories_Tool"`, and `article_name="Search_Federal_Grant_Programs_Tool"` before using those resolver tools.
- Add `article_name="Search_Federal_Contract_Sub_Awards_Tool"` or `article_name="Search_Federal_Grant_Sub_Awards_Tool"` before any subaward branch.
- Add `article_name="Search_Federal_Contract_IDVs_Tool"` or `article_name="Search_Federal_Contract_Vehicles_Tool"` before the optional follow-on vehicle or IDV branch.
- Add `article_name="Search_Government_Files_Tool"` before any file-evidence branch.
- Add `article_name="Vector_Store_Content_Retrieval_Guide"` before using `Add_To_Vector_Store` or `Search_Vector_Store`.
- Treat the documentation as binding for tool names, parameters, filter names, `fields_to_return`, `search_mode`, `query`, `similar_filter`, `per_page`, `sort`, and aggregation behavior.
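As a non-authoritative illustration of the rules above, the documentation pass might be organized like the following sketch, assuming each article is retrieved with one GovTribe Documentation call whose only argument is `article_name`; the grouping of optional articles is for illustration only.

```python
# Sketch only: article names are the ones listed above; the one-argument call
# shape is an assumption about how the MCP host invokes the Documentation tool.
REQUIRED_ARTICLES = [
    "Search_Query_Guide",
    "Search_Mode_Guide",
    "Aggregation_and_Leaderboard_Guide",
    "Search_Vendors_Tool",
    "Search_Federal_Contract_Awards_Tool",
    "Search_Federal_Grant_Awards_Tool",
]

# Optional articles keyed by the branch that triggers them (partial list).
OPTIONAL_ARTICLES = {
    "time_windows": ["Date_Filtering_Guide"],
    "geography": ["Location_Filtering_Guide"],
    "subawards": ["Search_Federal_Contract_Sub_Awards_Tool"],
    "file_evidence": ["Search_Government_Files_Tool"],
}

# One Documentation call per required article, before any other GovTribe tool.
documentation_calls = [{"article_name": name} for name in REQUIRED_ARTICLES]
```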
## Required Input
The user must provide a **target vendor** before analysis begins.
Accept any of the following:
- Company name
- UEI
- CAGE
- GovTribe link
- Plain-language description only if it resolves to a single vendor without ambiguity
Optional constraints the user may provide:
- Time window, such as last 24 months or last 5 years
- Agency or customer focus
- Product, service, or capability area
- NAICS, PSC, or other classifications
- Contract type or vehicle focus
- Geography or place of performance
- Whether to include grants, subawards, or only prime awards
- Whether the goal is partner vetting, competitor research, account planning, or past performance support
Input rules:
- If the input resolves cleanly to one vendor, proceed immediately.
- If the input is too vague or ambiguous, ask for the minimum missing detail needed to proceed.
- Do not guess the target vendor.
- Do not start substantive analysis until the vendor scope is resolved.
## Workflow
### Rules
- Call `Documentation` before using any other GovTribe tool.
- Always set both `search_mode` and `query` on every `Search_*` call.
- Use `query: ""` when structured filters define the cohort.
- Use `fields_to_return` whenever you need more than `govtribe_id`.
- Do not stop early when another tool call is required by the workflow.
- Keep calling tools until the task is complete or the tool budget is reached.
- If a tool returns empty or partial results and the workflow defines another defensible strategy, continue with that next strategy.
- Use `Search_Federal_Contract_Awards` and `Search_Federal_Grant_Awards` as the default evidence surfaces.
- Use `Search_Federal_Contract_Sub_Awards` and `Search_Federal_Grant_Sub_Awards` only when the user explicitly asks for subaward evidence or when weaker supporting evidence is still materially useful after prime-award review.
- Reuse the resolved company through `vendor_ids`.
- Use resolver tools to turn agencies, classifications, and grant programs into canonical IDs before deeper search.
- Use `query: ""`, `search_mode: "keyword"`, and `per_page: 0` for aggregation-only footprint cohorts.
- Use `Search_Government_Files`, `Add_To_Vector_Store`, and `Search_Vector_Store` only through the documented file-content escalation path.
- Do not silently widen the scope to parent or subsidiary entities.
- Do not assume unsupported field names or undocumented bridges.
- Do not treat subaward evidence as stronger than direct prime-award evidence.
- Do not stop at the first plausible answer; check for entity-resolution issues, duplicate records, and false positives.
### Steps
1. Call `Documentation` before any other GovTribe tool, read the required articles, and add the optional articles needed for the exact path you will run.
- Use the documentation results to confirm valid tool names, filter names, `fields_to_return`, `search_mode`, `query`, `similar_filter`, and aggregation options before searching.
2. Resolve the target vendor identity explicitly with `Search_Vendors`.
- Use `Search_Vendors` for company name, GovTribe link, UEI, CAGE, or other known vendor identity.
- For exact names or identifiers, use `search_mode: "keyword"` and a quoted `query`.
- If vendor geography matters, use `vendor_location_ids` only after reviewing `Location_Filtering_Guide`.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `uei`, `dba`, `business_types`, `sba_certifications`, `parent_or_child`, `parent`, `naics_category`, and `govtribe_ai_summary`.
- Also request relationship fields useful for follow-on analysis: `federal_contract_awards`, `federal_grant_awards`, `federal_contract_sub_awards`, `federal_grant_sub_awards`, and `federal_contract_idvs`.
- Normalize the legal name; the common name or DBA; the UEI when available; the CAGE code only when it appears in retrieved evidence; and any parent or subsidiary relationships returned by the tool.
- Do not invent a direct `cage` filter because `Search_Vendors` does not document one.
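As a non-authoritative sketch of the vendor-resolution call above, the tool arguments might look like the following Python dict; the quoted company name is a hypothetical placeholder, and the dict simply stands in for however the MCP host passes arguments.

```python
# Sketch of a Search_Vendors argument object. Every key is named in the step
# above; the quoted company name is a hypothetical placeholder.
search_vendors_args = {
    "search_mode": "keyword",
    "query": '"Example Federal Systems LLC"',  # exact name or identifier, quoted
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "name", "uei", "dba",
        "business_types", "sba_certifications", "parent_or_child", "parent",
        "naics_category", "govtribe_ai_summary",
        # relationship fields useful for follow-on analysis
        "federal_contract_awards", "federal_grant_awards",
        "federal_contract_sub_awards", "federal_grant_sub_awards",
        "federal_contract_idvs",
    ],
}
```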
3. Resolve optional user constraints into canonical IDs before deeper award search.
- Use `Search_Federal_Agencies` to resolve agency names into `federal_agency_ids`.
- Use `Search_Naics_Categories` to resolve NAICS text or codes into `naics_category_ids`.
- Use `Search_Psc_Categories` to resolve PSC text or codes into `psc_category_ids`.
- Use `Search_Federal_Grant_Programs` to resolve program names or CFDA/ALN-style identifiers into `federal_grant_program_ids`.
- Reuse resolved IDs instead of carrying raw text forward into downstream award filters when a resolver tool exists.
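A minimal sketch of this resolver pass, assuming each resolver accepts the same `search_mode`, `query`, and `fields_to_return` parameters described in its documentation; the search terms and field choices are illustrative assumptions.

```python
# Sketch only: one resolver call per constraint type; terms and field choices
# are illustrative. The resolved govtribe_id values feed the *_ids filters in
# the award searches that follow.
resolver_calls = {
    "Search_Federal_Agencies": {
        "search_mode": "keyword",
        "query": '"Department of Veterans Affairs"',  # hypothetical agency
        "fields_to_return": ["govtribe_id", "name"],
    },
    "Search_Naics_Categories": {
        "search_mode": "keyword",
        "query": '"541512"',  # hypothetical NAICS code
        "fields_to_return": ["govtribe_id", "name"],
    },
    "Search_Psc_Categories": {
        "search_mode": "keyword",
        "query": '"D302"',  # hypothetical PSC code
        "fields_to_return": ["govtribe_id", "name"],
    },
}
```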
4. Run the first prime-award pass. Awards are the default evidence path.
- Decide whether the analysis is contract-only, grant-only, or mixed. Keep contract and grant evidence distinct when they need separate interpretation.
- Contract branch with `Search_Federal_Contract_Awards`:
- Start with `query: ""`, `search_mode: "keyword"`, and `vendor_ids`.
- Add only relevant filters from the resolved constraints: `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_idv_ids`, `federal_contract_award_types`, `set_aside_types`, `award_date_range`, `ultimate_completion_date_range`, `dollars_obligated_range`, `ceiling_value_range`, and `place_of_performance_ids`.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `completion_date`, `ultimate_completion_date`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `dollars_obligated`, `ceiling_value`, `set_aside_type`, `awardee`, `parent_of_awardee`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `federal_contract_vehicle`, `federal_contract_idv`, `place_of_performance`, and `originating_federal_contract_opportunity`.
- Grant branch with `Search_Federal_Grant_Awards`:
- Start with `query: ""`, `search_mode: "keyword"`, and `vendor_ids`.
- Add only relevant filters from the resolved constraints: `federal_grant_program_ids`, `funding_federal_agency_ids`, `contracting_federal_agency_ids`, `assistance_types`, `award_date_range`, `ultimate_completion_date_range`, `dollars_obligated_range`, and `place_of_performance_ids`.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `award_date`, `ultimate_completion_date`, `dollars_obligated`, `assistance_type`, `description`, `govtribe_ai_summary`, `awardee`, `parent_of_awardee`, `funding_federal_agency`, `contracting_federal_agency`, `federal_grant_program`, and `place_of_performance`.
- Do not rely on keywords alone when `vendor_ids` and resolved filters are available.
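A minimal sketch of the filter-first contract branch, assuming the placeholder IDs were returned by the earlier resolution steps and that `award_date_range` takes a from/to object; the exact range format is defined in `Date_Filtering_Guide`.

```python
# Sketch only: placeholder IDs stand in for values returned by Search_Vendors
# and the resolver tools; every parameter and field name comes from the
# contract branch above. The from/to range shape is an assumption.
VENDOR_ID = "hypothetical-vendor-govtribe-id"
AGENCY_IDS = ["hypothetical-agency-govtribe-id"]

contract_award_pass = {
    "search_mode": "keyword",
    "query": "",                                   # structured filters define the cohort
    "vendor_ids": [VENDOR_ID],
    "contracting_federal_agency_ids": AGENCY_IDS,  # only if the user scoped by agency
    "award_date_range": {"from": "2023-01-01", "to": "2025-01-01"},  # see Date_Filtering_Guide
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "name", "contract_number",
        "award_date", "completion_date", "ultimate_completion_date",
        "contract_type", "descriptions", "govtribe_ai_summary",
        "dollars_obligated", "ceiling_value", "set_aside_type",
        "awardee", "parent_of_awardee",
        "contracting_federal_agency", "funding_federal_agency",
        "naics_category", "psc_category",
        "federal_contract_vehicle", "federal_contract_idv",
        "place_of_performance", "originating_federal_contract_opportunity",
    ],
}
```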
5. Broaden only after the filter-first pass.
- Reuse the same award tools with `search_mode: "semantic"` when the first pass is too narrow.
- Keep `vendor_ids` and the strongest resolved filters in place while broadening.
- Build a concise plain-language `query` from the vendor's evidenced capabilities, customer mix, and adjacent phrasing.
- Use `_score`-based `sort` when semantic relevance should dominate.
- Use `similar_filter` only when the current tool supports it and you have a valid `{ govtribe_type, govtribe_id }` seed record.
- Do not let semantic broadening override the core vendor constraint unless the user explicitly asked for adjacent market evidence.
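The broadening pass might then look like the sketch below; the query wording is purely illustrative, and the `sort` shape is an assumption to confirm against the documentation.

```python
# Sketch only: same vendor constraint and strongest filters, semantic mode added.
VENDOR_ID = "hypothetical-vendor-govtribe-id"

semantic_pass = {
    "search_mode": "semantic",
    # Plain-language query built from evidenced capabilities; wording is illustrative.
    "query": "enterprise IT modernization and help desk support for civilian agencies",
    "vendor_ids": [VENDOR_ID],
    "sort": "_score",  # assumed shorthand; the documented sort syntax governs
    "fields_to_return": [
        "govtribe_id", "name", "descriptions", "govtribe_ai_summary",
        "dollars_obligated", "award_date", "contracting_federal_agency",
    ],
}
```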
6. Use dataset-specific text evidence.
- For vendor records, use `govtribe_ai_summary`, `business_types`, `sba_certifications`, and `naics_category` to interpret identity and company profile, but do not let vendor-profile text outrank award evidence.
- For contract awards, inspect both `descriptions` and `govtribe_ai_summary`.
- For grant awards, inspect both `description` and `govtribe_ai_summary`.
- For contract subawards, inspect `description`.
- For grant subawards, inspect `description`.
- Do not assume every dataset exposes the same description field name.
7. Add subawards only when that evidence is explicitly needed.
- Prime-only default: stay on `Search_Federal_Contract_Awards` and/or `Search_Federal_Grant_Awards`.
- If contract subawards are in scope, use `Search_Federal_Contract_Sub_Awards` with `query: ""`, `search_mode: "keyword"`, `vendor_ids`, optional `award_date_range`, `contracting_federal_agency_ids`, and `funding_federal_agency_ids`, and request `govtribe_id`, `name`, `award_date`, `description`, `sub_contractor`, `prime_contractor`, `contracting_federal_agency`, and `funding_federal_agency`.
- If grant subawards are in scope, use `Search_Federal_Grant_Sub_Awards` with `query: ""`, `search_mode: "keyword"`, `vendor_ids`, optional `award_date_range`, `funding_federal_agency_ids`, and `contracting_federal_agency_ids`, and request `govtribe_id`, `name`, `award_date`, `description`, `sub_grantee`, and `prime_grantee`.
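If either subaward branch is in scope, the calls might be shaped like this sketch, with a placeholder vendor ID.

```python
# Sketch only: both subaward branches reuse the resolved vendor ID and stay in
# keyword mode; the vendor ID is a placeholder.
VENDOR_ID = "hypothetical-vendor-govtribe-id"

contract_subaward_pass = {
    "search_mode": "keyword",
    "query": "",
    "vendor_ids": [VENDOR_ID],
    "fields_to_return": [
        "govtribe_id", "name", "award_date", "description",
        "sub_contractor", "prime_contractor",
        "contracting_federal_agency", "funding_federal_agency",
    ],
}

grant_subaward_pass = {
    "search_mode": "keyword",
    "query": "",
    "vendor_ids": [VENDOR_ID],
    "fields_to_return": [
        "govtribe_id", "name", "award_date", "description",
        "sub_grantee", "prime_grantee",
    ],
}
```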
8. Analyze vehicle and IDV patterns only after awards establish the footprint.
- Request `federal_contract_vehicle` and `federal_contract_idv` on contract-award rows.
- Only if vehicle or IDV patterns materially affect the analysis, follow those relationship IDs into `Search_Federal_Contract_Vehicles` or `Search_Federal_Contract_IDVs`.
- Use those follow-on calls to explain contract structure, access patterns, and vehicle concentration.
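Before spending a follow-on call, a quick client-side tally of the contract-award rows already retrieved can show whether vehicle concentration is material; the rows below are hypothetical.

```python
# Sketch only: client-side tally of vehicle concentration from contract-award
# rows already retrieved in step 4, used to decide whether a follow-on
# Search_Federal_Contract_Vehicles or Search_Federal_Contract_IDVs call is worth it.
from collections import defaultdict

award_rows = [  # hypothetical rows; real rows come from Search_Federal_Contract_Awards
    {"federal_contract_vehicle": "GSA MAS", "dollars_obligated": 2_400_000},
    {"federal_contract_vehicle": "GSA MAS", "dollars_obligated": 1_100_000},
    {"federal_contract_vehicle": None, "dollars_obligated": 600_000},
]

dollars_by_vehicle = defaultdict(float)
for row in award_rows:
    vehicle = row.get("federal_contract_vehicle") or "No vehicle / standalone"
    dollars_by_vehicle[vehicle] += row.get("dollars_obligated") or 0.0

for vehicle, dollars in sorted(dollars_by_vehicle.items(), key=lambda kv: -kv[1]):
    print(f"{vehicle}: ${dollars:,.0f}")
```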
9. Escalate to file content only through the documented workflow.
- Use `Search_Government_Files` only when award-side scope is still unclear and the returned contract evidence exposes a usable `originating_federal_contract_opportunity`.
- Reuse the related opportunity GovTribe IDs through `federal_contract_opportunity_ids`. Do not pass award IDs directly into `Search_Government_Files`.
- For `Search_Government_Files`, request `fields_to_return` including `govtribe_id`, `govtribe_ai_summary`, `govtribe_url`, `name`, `content_snippet`, `download_url`, `posted_date`, and `parent_record`.
- Only if `content_snippet` is insufficient, call `Documentation` with `article_name="Vector_Store_Content_Retrieval_Guide"`, then use `Add_To_Vector_Store`, then `Search_Vector_Store`.
- Treat file chunks as supporting evidence, not as a replacement for award evidence.
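The escalation path might be sketched as follows, reusing an opportunity ID read from an award's `originating_federal_contract_opportunity` field; the vector-store steps are summarized in comments because their argument shapes depend on `Vector_Store_Content_Retrieval_Guide`.

```python
# Sketch only: file-evidence escalation keyed off an opportunity ID read from an
# award row; the opportunity ID is a placeholder.
OPPORTUNITY_ID = "hypothetical-opportunity-govtribe-id"

government_files_pass = {
    "search_mode": "keyword",
    "query": "",
    "federal_contract_opportunity_ids": [OPPORTUNITY_ID],  # never raw award IDs
    "fields_to_return": [
        "govtribe_id", "govtribe_ai_summary", "govtribe_url", "name",
        "content_snippet", "download_url", "posted_date", "parent_record",
    ],
}

# Only if content_snippet is insufficient (argument shapes not shown here):
# 1. Documentation(article_name="Vector_Store_Content_Retrieval_Guide")
# 2. Add_To_Vector_Store for the selected file record
# 3. Search_Vector_Store with a focused question about award scope
```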
10. Run a footprint-profiling aggregation pass before selecting representative awards.
- For aggregation-only calls, use `query: ""`, `search_mode: "keyword"`, and `per_page: 0`.
- Keep the same `vendor_ids` and the same structural filters that define the scoped cohort.
- Contract footprint path with `Search_Federal_Contract_Awards`:
- Use `aggregations` such as `dollars_obligated_stats`, `top_funding_federal_agencies_by_dollars_obligated`, `top_contracting_federal_agencies_by_dollars_obligated`, `top_naics_codes_by_dollars_obligated`, `top_psc_codes_by_dollars_obligated`, `top_federal_contract_vehicles_by_dollars_obligated`, and `top_locations_by_dollars_obligated`.
- Grant footprint path with `Search_Federal_Grant_Awards`:
- Use `aggregations` such as `dollars_obligated_stats`, `top_funding_federal_agencies_by_dollars_obligated`, `top_federal_grant_programs_by_dollars_obligated`, and `top_locations_by_dollars_obligated`.
- Use the aggregation results to support claims about main customers, dominant classifications, typical value bands, and whether the footprint is diversified across many lanes or concentrated in a few.
- If the user asks about changes over time, rerun the same aggregation set for a recent window and a prior comparison window with the same filters so trend claims stay legitimate.
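A minimal sketch of the aggregation-only footprint pass, including the optional recent-versus-prior rerun for trend claims; the vendor ID and date windows are placeholders, and the from/to range shape is an assumption governed by `Date_Filtering_Guide`.

```python
# Sketch only: aggregation-only footprint pass plus an optional prior-window
# rerun for trend claims. Placeholder vendor ID; from/to shape is an assumption.
VENDOR_ID = "hypothetical-vendor-govtribe-id"

CONTRACT_FOOTPRINT_AGGS = [
    "dollars_obligated_stats",
    "top_funding_federal_agencies_by_dollars_obligated",
    "top_contracting_federal_agencies_by_dollars_obligated",
    "top_naics_codes_by_dollars_obligated",
    "top_psc_codes_by_dollars_obligated",
    "top_federal_contract_vehicles_by_dollars_obligated",
    "top_locations_by_dollars_obligated",
]

def footprint_pass(award_date_range=None):
    """Build an aggregation-only Search_Federal_Contract_Awards argument object."""
    args = {
        "search_mode": "keyword",
        "query": "",
        "per_page": 0,  # aggregation-only; no row payload
        "vendor_ids": [VENDOR_ID],
        "aggregations": CONTRACT_FOOTPRINT_AGGS,
    }
    if award_date_range:
        args["award_date_range"] = award_date_range
    return args

recent_window = footprint_pass({"from": "2023-01-01", "to": "2025-01-01"})  # hypothetical windows
prior_window = footprint_pass({"from": "2021-01-01", "to": "2022-12-31"})   # same filters, earlier range
```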
11. Synthesize the vendor's footprint and positioning from the retrieved evidence.
- Identify core capabilities, main customers, dominant classifications, typical contract or assistance patterns, typical value bands, recurring customers, and whether the footprint is driven mainly by prime awards, grants, or weaker subaward support.
- For each major capability area, customer lane, or market claim, use one of these labels: `Strong`, `Moderate`, `Weak`, `Unclear`, or `Exclude`.
- Score each label using direct award support, repeat wins, same agency or office pattern, same or adjacent classification, similar contract type or assistance pattern, similar dollar range, recent relevant activity, and consistency with `descriptions`, `description`, or `govtribe_ai_summary`.
- Do not label a claim as `Strong` unless it is supported by direct award evidence and at least one additional reinforcing signal.
- Do not infer capabilities from thin keyword overlap alone.
- Do not let subaward support alone justify a `Strong` label when direct prime-award evidence is weak.
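One way to keep the labeling consistent is to encode the `Strong` gate as a small check, as in this sketch; the mapping of non-`Strong` cases to `Moderate`, `Weak`, and `Unclear` is an illustrative assumption, and `Exclude` is handled separately in the next step.

```python
# Sketch only: a minimal gate for the Strong label. Direct award support plus at
# least one reinforcing signal is required; how the remaining cases map to
# Moderate, Weak, and Unclear is an illustrative choice, and Exclude is decided
# separately during record cleanup.
REINFORCING_SIGNALS = {
    "repeat_wins", "same_agency_or_office", "same_or_adjacent_classification",
    "similar_contract_type_or_assistance", "similar_dollar_range",
    "recent_relevant_activity", "consistent_description_text",
}

def strength_label(direct_award_support: bool, signals: set) -> str:
    """Return Strong / Moderate / Weak / Unclear for one capability claim."""
    reinforcing = signals & REINFORCING_SIGNALS
    if direct_award_support and reinforcing:
        return "Strong"
    if direct_award_support:
        return "Moderate"
    if reinforcing:
        return "Weak"  # e.g. subaward-only or keyword-adjacent support
    return "Unclear"

# Example: direct prime awards plus repeat wins at the same agency -> Strong.
print(strength_label(True, {"repeat_wins", "same_agency_or_office"}))
```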
12. Exclude weak, misattributed, duplicate, or entity-mismatched records, even if they share a similar name.
- Exclude parent, subsidiary, or adjacent entities that fall outside the chosen company scope.
- Exclude records that are only keyword-adjacent or otherwise inconsistent with the resolved vendor identity.
13. If the available evidence is too thin to support a meaningful vendor analysis, say so clearly and stop.
14. Extract a short list of representative awards that best illustrate the vendor's capabilities and customer relevance.
- Prefer the records that best explain the vendor's real footprint rather than the records with the most generic keywords.
15. Perform a verification pass for the most important conclusions.
- Remove weak, edge-case, or low-confidence records and check whether the main conclusions still hold.
- Rerun the cleaned cohort with `query: ""`, `search_mode: "keyword"`, and `per_page: 0`.
- If contract evidence matters, use cleaned aggregations such as `dollars_obligated_stats`, `top_funding_federal_agencies_by_dollars_obligated`, `top_contracting_federal_agencies_by_dollars_obligated`, `top_naics_codes_by_dollars_obligated`, `top_psc_codes_by_dollars_obligated`, `top_federal_contract_vehicles_by_dollars_obligated`, and `top_locations_by_dollars_obligated`.
- If grant evidence matters, use cleaned aggregations such as `dollars_obligated_stats`, `top_funding_federal_agencies_by_dollars_obligated`, `top_federal_grant_programs_by_dollars_obligated`, and `top_locations_by_dollars_obligated`.
- If the conclusions shift materially after cleanup, lower confidence and explain why.
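The final comparison might be sketched as a simple before-and-after check on the headline aggregation shares; the lane names, share values, and 0.10 threshold below are all hypothetical.

```python
# Sketch only: before-and-after comparison of headline aggregation shares.
# Lane names, shares, and the 0.10 threshold are all hypothetical.
full_cohort_top_agencies = {"VA": 0.55, "DHS": 0.25, "HHS": 0.20}     # share of obligated dollars
cleaned_cohort_top_agencies = {"VA": 0.60, "DHS": 0.28, "HHS": 0.12}  # after exclusions

def materially_shifted(before: dict, after: dict, threshold: float = 0.10) -> bool:
    """Flag a material shift if any lane's share moves more than the threshold."""
    lanes = set(before) | set(after)
    return any(abs(before.get(k, 0.0) - after.get(k, 0.0)) > threshold for k in lanes)

if materially_shifted(full_cohort_top_agencies, cleaned_cohort_top_agencies):
    print("Conclusions shifted after cleanup: lower confidence and explain why.")
else:
    print("Conclusions hold on the cleaned cohort.")
```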
## Tool Budget
Design the workflow to stay compact.
Typical path:
- 6 required documentation calls
- 0 to 6 additional documentation calls for optional branches or resolver tools
- 1 vendor-resolution call
- 0 to 3 resolver calls
- 1 prime-award or grant-award pass
- 0 to 1 semantic pass
- 0 to 2 optional branches for subawards, vehicle or IDV context, or file evidence
- 1 aggregation or verification pass
Expected total:
- Typical: 9 to 11 calls
- High end: 12 to 14 calls
Avoid exceeding 15 calls unless an extra call materially changes correctness.
## Output Format
Return the answer in this order:
1. **Vendor Summary**
- Briefly summarize the company and how the identity was resolved.
- State whether the scope stayed exact-company-only or intentionally included parent or subsidiary context.
2. **Search Approach**
- Briefly explain which `Documentation.article_name` calls were used.
- Briefly explain which `Search_*` tools were used.
- Briefly explain which filters, bridge parameters, and aggregation passes mattered most.
- Briefly explain how the keyword or filter-first pass differed from the semantic pass.
- Briefly note whether the analysis stayed prime-only or included subaward support.
3. **Federal Footprint Overview**
- Start with a compact markdown table.
- Recommended columns: `Lane`, `Main Signals`, `Value Band`, `Pattern / Caveat`.
- Cover the main agencies or customers; dominant classifications; contract, vehicle, IDV, or assistance patterns; and typical value bands.
- Use aggregation-backed statements whenever aggregation analysis was run.
- If customer concentration or work-type concentration is central to the conclusion, you may add one small Mermaid `pie` chart.
- If the prompt used recent-versus-prior window comparison and the shift materially affects the conclusion, you may add one small Mermaid `xychart-beta`.
- Only add charts when they materially improve interpretation; include a short explanation with each chart, and fall back to the compact table if the data is sparse or Mermaid is unavailable.
4. **Representative Awards**
- Present this section as a compact markdown table first.
- Recommended columns: `Evidence Type`, `Award`, `Agency / Customer`, `Why It Matters`, `Key Evidence`.
5. **Capability and Market Positioning**
- Use a required markdown table.
- Recommended columns: `Capability / Market Lane`, `Strength`, `Evidence`.
- Use only `Strong`, `Moderate`, `Weak`, `Unclear`, or `Exclude` in the `Strength` column.
- Call out what looks strong, what looks adjacent, and what is unsupported.
6. **Key Risks, Gaps, or Unknowns**
- Briefly note data limitations, identity ambiguity, missing records, or overclaim risks.
7. **Overall Confidence**
- State overall confidence in the analysis and why.
## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.
## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.