# Vendor Deep Dive
## User Input
- **Target vendor:** [Vendor name, UEI, CAGE, GovTribe link, or description]
## Goal
Use GovTribe MCP tools to resolve a target vendor and produce a grounded government activity briefing across the vendor's recent federal contract awards, federal contract IDVs, federal grant awards, state and local contract awards, and government-related news signals.
## Required Input
The user must provide a **target vendor** before analysis begins.
Accept any of the following:
- Company name
- UEI
- CAGE
- GovTribe link
- Plain-language description only if it resolves to a single vendor without ambiguity
Input rules:
- The workflow always analyzes the resolved vendor's last 5 years of federal contract awards, federal contract IDVs, federal grant awards, and state and local contract awards.
- The workflow always analyzes a bounded recent window of government-related news, typically the last 24 months.
- Derive agencies, customers, capabilities, classifications, assistance types, vehicles, IDVs, and value bands from the returned evidence rather than treating them as user-selectable modes.
- Treat any additional context as interpretation context only. Do not let it turn this workflow into family roll-up analysis, subaward analysis, competitor ranking, buyer expansion planning, or past-performance matching.
- If the input resolves cleanly to one vendor, proceed immediately.
- If the input is too vague or ambiguous, ask for the minimum missing detail needed to proceed.
- Do not guess the target vendor.
- Do not start substantive analysis until the vendor scope is resolved.
## Workflow
### Steps
1. Call `Documentation` once with `article_names=["Search_Query_Guide", "Search_Mode_Guide", "Aggregation_and_Leaderboard_Guide", "Date_Filtering_Guide"]` before any other GovTribe tool.
- Use the documentation results to confirm valid tool names, filter names, `fields_to_return`, `search_mode`, `query`, `similar_filter`, and aggregation options before searching.
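For reference, a minimal sketch of that first call, assuming the tool accepts a JSON-style parameter payload (the article names are verbatim from the step above):

```jsonc
// Illustrative Documentation payload; the four article names are the
// required guides listed in step 1. The payload shape is an assumption.
{
  "article_names": [
    "Search_Query_Guide",
    "Search_Mode_Guide",
    "Aggregation_and_Leaderboard_Guide",
    "Date_Filtering_Guide"
  ]
}
```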
2. Resolve the target vendor identity explicitly with `Search_Vendors`.
- Use `Search_Vendors` for company name, GovTribe link, UEI, CAGE, or other known vendor identity.
- For exact names or identifiers, wrap the `query` value in double quotes.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `uei`, `dba`, `business_types`, `sba_certifications`, `parent_or_child`, `parent`, `naics_category`, and `govtribe_ai_summary`.
- Normalize the identity details returned by the tool: the legal name, any common name or DBA, the UEI when available, the CAGE code only when it appears in retrieved evidence, and any parent or subsidiary relationships.
- Do not invent a direct `cage` filter because `Search_Vendors` does not document one.
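A minimal sketch of the resolution call, under the same JSON-payload assumption; the vendor name is hypothetical, and the double-quoted `query` follows the exact-match rule above:

```jsonc
// Illustrative Search_Vendors payload. "Acme Federal Systems" is a
// hypothetical vendor name; the embedded double quotes request an exact
// name match, and fields_to_return is the minimum list from step 2.
{
  "query": "\"Acme Federal Systems\"",
  "fields_to_return": [
    "govtribe_id", "govtribe_url", "name", "uei", "dba",
    "business_types", "sba_certifications", "parent_or_child",
    "parent", "naics_category", "govtribe_ai_summary"
  ]
}
```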
3. Lock the exact vendor scope before cross-dataset search.
- State the resolved legal name and any confirmed DBA names or naming variants that are safe to reuse in later search steps.
- Keep parent or child entities out of scope unless the returned evidence shows the user clearly meant one of them instead of the resolved vendor.
- If the vendor resolution is still ambiguous after one careful retry, ask for clarification and stop.
4. Run the federal contract-award path over the last 5 years.
- Use `Search_Federal_Contract_Awards`.
- Start with one aggregation-first pass using `vendor_ids` and an `award_date_range` covering the last 5 years.
- Use `per_page: 0` and `aggregations` such as `dollars_obligated_stats`, `top_funding_federal_agencies_by_dollars_obligated`, `top_contracting_federal_agencies_by_dollars_obligated`, `top_naics_codes_by_dollars_obligated`, `top_psc_codes_by_dollars_obligated`, `top_federal_contract_vehicles_by_dollars_obligated`, `top_idvs_by_dollars_obligated`, `top_set_aside_types_by_dollars_obligated`, and `top_locations_by_dollars_obligated`.
- Then retrieve representative rows with the same `vendor_ids` and the same 5-year `award_date_range`.
- Add only supported filters when returned evidence makes them necessary for cleanup or follow-on interpretation, such as `contracting_federal_agency_ids`, `funding_federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `federal_contract_vehicle_ids`, `federal_contract_idv_ids`, `federal_contract_award_types`, and `set_aside_types`.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `completion_date`, `ultimate_completion_date`, `contract_type`, `descriptions`, `govtribe_ai_summary`, `dollars_obligated`, `ceiling_value`, `set_aside_type`, `awardee`, `parent_of_awardee`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `federal_contract_vehicle`, `federal_contract_idv`, `place_of_performance`, and `originating_federal_contract_opportunity`.
- Use contract-award text fields such as `descriptions` and `govtribe_ai_summary` to interpret scope, but do not let text outrank the structured vendor match.
- Use `search_mode: "semantic"` only when the first pass is too noisy or too thin to explain the vendor's main federal contract lanes.
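A minimal sketch of the aggregation-first pass, assuming a JSON-style payload; `VENDOR_GOVTRIBE_ID` is a placeholder for the ID resolved in step 2, the dates are an illustrative 5-year window, and the exact `award_date_range` shape should be confirmed against the `Date_Filtering_Guide`:

```jsonc
// Illustrative aggregation-first pass for Search_Federal_Contract_Awards.
// VENDOR_GOVTRIBE_ID is a placeholder; the date-range key names are an
// assumption to verify against the Date_Filtering_Guide.
{
  "vendor_ids": ["VENDOR_GOVTRIBE_ID"],
  "award_date_range": { "start": "2020-07-01", "end": "2025-07-01" },
  "per_page": 0,
  "aggregations": [
    "dollars_obligated_stats",
    "top_funding_federal_agencies_by_dollars_obligated",
    "top_contracting_federal_agencies_by_dollars_obligated",
    "top_naics_codes_by_dollars_obligated",
    "top_psc_codes_by_dollars_obligated",
    "top_federal_contract_vehicles_by_dollars_obligated",
    "top_idvs_by_dollars_obligated",
    "top_set_aside_types_by_dollars_obligated",
    "top_locations_by_dollars_obligated"
  ]
}
```

The representative-rows pass would then reuse the same `vendor_ids` and `award_date_range`, drop `per_page: 0`, and add the `fields_to_return` list given above.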
5. Run the federal contract IDV path over the last 5 years.
- Use `Search_Federal_Contract_IDVs`.
- Start with one aggregation-first pass using `vendor_ids` and an `award_date_range` covering the last 5 years.
- Use `per_page: 0` and `aggregations` such as `top_awardees_by_doc_count`, `top_funding_federal_agencies_by_doc_count`, `top_contracting_federal_agencies_by_doc_count`, `top_vehicles_by_doc_count`, `top_naics_codes_by_doc_count`, `top_psc_codes_by_doc_count`, `top_set_aside_types_by_doc_count`, and `top_transaction_contacts_by_doc_count`.
- Then retrieve representative rows with the same `vendor_ids` and the same 5-year `award_date_range`.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `award_date`, `last_date_to_order`, `ceiling_value`, `contract_type`, `multiple_or_single_award`, `awardee`, `parent_of_awardee`, `contracting_federal_agency`, `funding_federal_agency`, `naics_category`, `psc_category`, `federal_contract_vehicle`, `originating_federal_contract_opportunity`, `place_of_performance`, `set_aside`, and `transaction_contacts`.
- Only if IDV rows show vehicle concentration that materially sharpens the analysis, follow the strongest returned `federal_contract_vehicle` relationship IDs into `Search_Federal_Contract_Vehicles`.
- Use `search_mode: "semantic"` only when the first pass is too noisy or too thin to explain the vendor's main IDV footprint.
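A minimal sketch of the corresponding IDV pass, under the same payload, placeholder, and date-format assumptions:

```jsonc
// Illustrative aggregation-first pass for Search_Federal_Contract_IDVs;
// same placeholder vendor ID and assumed date-range shape as in step 4.
{
  "vendor_ids": ["VENDOR_GOVTRIBE_ID"],
  "award_date_range": { "start": "2020-07-01", "end": "2025-07-01" },
  "per_page": 0,
  "aggregations": [
    "top_awardees_by_doc_count",
    "top_funding_federal_agencies_by_doc_count",
    "top_contracting_federal_agencies_by_doc_count",
    "top_vehicles_by_doc_count",
    "top_naics_codes_by_doc_count",
    "top_psc_codes_by_doc_count",
    "top_set_aside_types_by_doc_count",
    "top_transaction_contacts_by_doc_count"
  ]
}
```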
6. Run the federal grant-award path over the last 5 years.
- Use `Search_Federal_Grant_Awards`.
- Start with one aggregation-first pass using `vendor_ids` and an `award_date_range` covering the last 5 years.
- Use `per_page: 0` and `aggregations` such as `dollars_obligated_stats`, `top_funding_federal_agencies_by_dollars_obligated`, `top_awardees_by_dollars_obligated`, `top_locations_by_dollars_obligated`, and `top_federal_grant_programs_by_dollars_obligated`.
- Then retrieve representative rows with the same `vendor_ids` and the same 5-year `award_date_range`.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `description`, `award_date`, `ultimate_completion_date`, `assistance_type`, `dollars_obligated`, `awardee`, `parent_of_awardee`, `funding_federal_agency`, `contracting_federal_agency`, `federal_grant_program`, `place_of_performance`, and `govtribe_ai_summary`.
- Use `search_mode: "semantic"` only when the first pass is too noisy or too thin to explain the vendor's main grant footprint.
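A minimal sketch of the grant-award pass, under the same assumptions:

```jsonc
// Illustrative aggregation-first pass for Search_Federal_Grant_Awards;
// placeholder vendor ID and assumed date-range shape as above.
{
  "vendor_ids": ["VENDOR_GOVTRIBE_ID"],
  "award_date_range": { "start": "2020-07-01", "end": "2025-07-01" },
  "per_page": 0,
  "aggregations": [
    "dollars_obligated_stats",
    "top_funding_federal_agencies_by_dollars_obligated",
    "top_awardees_by_dollars_obligated",
    "top_locations_by_dollars_obligated",
    "top_federal_grant_programs_by_dollars_obligated"
  ]
}
```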
7. Run the state and local contract-award path over the last 5 years.
- Use `Search_State_And_Local_Contract_Awards`.
- Because this dataset does not support `vendor_ids`, build the initial `query` from the resolved legal name and only the safest confirmed DBA variants.
- Start with one aggregation-first pass using the strongest exact vendor-name query you can support and an `award_date_range` covering the last 5 years.
- Use `per_page: 0` and `aggregations` such as `dollars_obligated_stats`, `top_contract_entities_by_dollars_obligated`, `top_states_by_dollars_obligated`, `top_nigp_codes_by_dollars_obligated`, and `top_unspsc_codes_by_dollars_obligated`.
- Then retrieve representative rows with the same 5-year `award_date_range` and the same exact-name query.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `description`, `awardee_name`, `contracting_organization`, `award_date`, `completion_date`, `estimated_annual_value`, `contract_amount`, `cumulative_value`, `contract_number`, `contract_type`, `state`, `nigp_categories`, `unspsc_categories`, `government_files`, `points_of_contact`, and `govtribe_ai_summary`.
- Exclude ambiguous name matches aggressively. If the tool returns noisy results and you cannot defend attribution to the resolved vendor, say the state and local surface is inconclusive instead of overclaiming.
- Use `search_mode: "semantic"` only for one careful fallback pass when exact-name retrieval is too thin and you still have a defensible vendor-specific query.
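A minimal sketch of the state and local pass; because `vendor_ids` is unsupported here, the double-quoted exact-name `query` carries the attribution, and the vendor name remains hypothetical:

```jsonc
// Illustrative aggregation-first pass for
// Search_State_And_Local_Contract_Awards. This dataset has no vendor_ids
// filter, so the quoted exact-name query does the attribution work;
// "Acme Federal Systems" is a hypothetical name.
{
  "query": "\"Acme Federal Systems\"",
  "award_date_range": { "start": "2020-07-01", "end": "2025-07-01" },
  "per_page": 0,
  "aggregations": [
    "dollars_obligated_stats",
    "top_contract_entities_by_dollars_obligated",
    "top_states_by_dollars_obligated",
    "top_nigp_codes_by_dollars_obligated",
    "top_unspsc_codes_by_dollars_obligated"
  ]
}
```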
8. Run the government-related news path over a recent window.
- Use `Search_Government_Related_News_Articles`.
- Start with a bounded recent window such as the last 24 months in `date_published`.
- Build the initial `query` from the resolved legal name and only the safest confirmed DBA variants.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `title`, `subheader`, `published_date`, `site_name`, and `body`.
- Sort by `datePublished` descending for a chronology-first briefing, or by `_score` only when one careful semantic follow-on search materially improves relevance.
- Exclude articles that mention a different entity with a similar name or that do not materially relate to the resolved vendor's government business.
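A minimal sketch of the news pass; the `date_published` range shape and the `sort` parameter shape are assumptions to confirm against the documentation, and the vendor name is again hypothetical:

```jsonc
// Illustrative Search_Government_Related_News_Articles call over a bounded
// 24-month window. The date-range key names and the sort syntax are
// assumptions; only the field names come from step 8.
{
  "query": "\"Acme Federal Systems\"",
  "date_published": { "start": "2023-07-01", "end": "2025-07-01" },
  "fields_to_return": [
    "govtribe_id", "govtribe_url", "title", "subheader",
    "published_date", "site_name", "body"
  ],
  "sort": { "datePublished": "desc" }
}
```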
9. Synthesize the cross-dataset findings into one vendor briefing.
- Use the structured award, IDV, grant, state/local, and news evidence to explain what the vendor appears to do, where it is active, which agencies or customers recur, which contract or assistance structures dominate, and what current signals matter most.
- Keep the datasets distinct where their evidence quality differs. Do not let thin state/local or news evidence outweigh stronger federal contract or grant evidence.
- Surface parent or child relationships only as context and clearly label them as out of scope unless the user asked for a family-level view.
10. Exclude weak, misattributed, duplicate, or entity-mismatched records, even if they share a similar name.
- Exclude parent, subsidiary, or adjacent entities that fall outside the chosen company scope.
- Exclude rows or articles that are only loosely keyword-adjacent or otherwise inconsistent with the resolved vendor identity.
11. If the available evidence is too thin to support a meaningful vendor briefing, say so clearly and stop.
12. Extract a short list of representative records that best illustrate the vendor's government footprint across the datasets that returned defensible evidence.
- Prefer the records that best explain real government activity rather than the records with the noisiest text match.
13. Perform a verification pass for the most important conclusions.
- Re-check the strongest claims against the representative records and the aggregation results for each dataset that materially contributes to the briefing.
- If a claim depends on thin, noisy, or single-record evidence, downgrade confidence and explain why.
## Output Format
Return the answer in this order:
1. **Vendor Resolution Summary**
- Briefly summarize the company and how the identity was resolved.
- Briefly note any parent or subsidiary context that was returned and whether it was excluded from scope.
2. **Cross-Dataset Search Approach**
- Briefly explain which `Search_*` tools were used.
- Briefly explain which dataset-specific filters, aggregation passes, and follow-on lookups mattered most.
- Briefly explain where exact vendor matching was strong and where attribution was noisier, especially for state and local awards and news.
3. **Federal Contract Awards Overview**
- Start with a compact markdown table.
- Recommended columns: `Theme`, `Main Signals`, `Value Band`, `Pattern / Caveat`.
4. **Federal IDV Overview**
- Start with a compact markdown table.
- Recommended columns: `Theme`, `Main Signals`, `Structure`, `Pattern / Caveat`.
5. **Federal Grant Awards Overview**
- Start with a compact markdown table.
- Recommended columns: `Theme`, `Main Signals`, `Funding Pattern`, `Pattern / Caveat`.
6. **State and Local Contract Awards Overview**
- Start with a compact markdown table when this dataset returns defensible evidence.
- Recommended columns: `Theme`, `Main Signals`, `Geography / Entity`, `Pattern / Caveat`.
- If state and local attribution is noisy or inconclusive, say so explicitly instead of forcing a summary table.
7. **Government News and Current Signals**
- Start with a compact markdown table when recent relevant articles were found.
- Recommended columns: `Article`, `Date`, `Why It Matters`, `Key Signal`.
- If recent government-related news is thin or inconclusive, say so explicitly.
8. **Representative Records**
- Present this section as a compact markdown table first.
- Recommended columns: `Dataset`, `Record`, `Why It Matters`, `Key Evidence`.
9. **Overall Positioning Across Government Markets**
- Synthesize the cross-dataset evidence into a concise briefing on where the vendor appears strongest, where activity is adjacent or limited, and what current signals matter.
10. **Risks, Gaps, or Unknowns**
- Briefly note data limitations, identity ambiguity, missing records, or overclaim risks.
11. **Overall Confidence**
- State overall confidence in the briefing and why.
## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.
## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
