Relevant Opportunities

Use this prompt when you want to find open opportunities that match a specific company, product, or capability. It pulls together the best-fit contract and grant opportunities based on the work you do, the agencies you care about, and the kinds of requirements you are positioned to pursue. It is useful for pipeline building, business development, and turning a general capability profile into a focused list of real pursuits.

# Relevant Opportunities

## User Input
- **Target profile:** [Company, product, capability, website, or offering description]

## Goal
Use GovTribe MCP tools to identify the open federal contract or grant opportunities most relevant to a target company, solution, or capability profile.
Prioritize opportunities the target is plausibly positioned to pursue, then rank them with grounded evidence, explicit caveats, and clear confidence.

## Required Input
The user must provide a target company, solution, or capability profile before analysis begins.

Accept any of the following:
- Company name
- GovTribe link
- Company website
- Capability statement, one-pager, or other uploaded material
- Plain-language description of the company’s offerings

Optional constraints the user may provide:
- Agencies or customers of interest
- Geographic scope
- NAICS, PSC, or other classifications
- Contract type, funding instrument, or vehicle preferences
- Set-aside or eligibility preferences
- Due date window
- Incumbent or recompete preference
- Clearances, certifications, or deployment requirements

Input rules:
- If the target profile resolves cleanly, proceed immediately.
- If the target is too vague to search well, ask for the minimum missing detail needed to proceed.
- Do not guess the target or silently broaden it into a generic market scan.
- Do not start substantive opportunity analysis until the target profile is resolved.

## Workflow

### Steps
1. Call `Documentation` once with `article_names=["Search_Query_Guide", "Search_Mode_Guide", "Aggregation_and_Leaderboard_Guide", "Date_Filtering_Guide", "Location_Filtering_Guide", "Vector_Store_Content_Retrieval_Guide"]` before any other GovTribe tool.
2. Resolve the target company, solution, or capability profile and extract the strongest reusable signals.
   - Use `Search_Vendors` when the input is a company name, GovTribe link, or otherwise appears to map to a vendor record.
   - For exact company names or identifiers, enclose the `query` value in double quotes to match the exact phrase.
   - Request the fields needed to interpret the target, including identifiers, business types, certifications, classifications, parent-child context, descriptions or summaries, and linked award context when needed.
   - Capture as many of these signals as possible: company name, core offerings, product versus services versus software versus R&D profile, customer or mission alignment, NAICS, PSC, certifications, set-aside status, geography, likely value band, technical keywords, and recent relevant awards when they materially sharpen fit.
   - If the user supplied only a website URL, treat website content as usable only when that content is already present in user-provided context or available through an allowed file workflow.
3. Review text evidence in addition to structured fields.
   - Use `Search_User_Files` for uploaded capability statements, one-pagers, and other user-provided files.
   - Request `fields_to_return` such as `name`, `description`, `content_snippet`, `download_url`, and `govtribe_ai_summary` when file evidence is needed.
   - If `content_snippet` is insufficient, use `Add_To_Vector_Store`, then `Search_Vector_Store`.
   - Keep the strongest technical phrases, mission language, and delivery-model clues for later `query` construction.
4. Resolve filter IDs before broad opportunity retrieval when the user supplied structured constraints.
   - Use `Search_Federal_Agencies` for `federal_agency_ids`.
   - Use `Search_Naics_Categories` for `naics_category_ids`.
   - Use `Search_Psc_Categories` for `psc_category_ids`.
   - Use `Search_Federal_Grant_Programs` for `federal_grant_program_ids`.
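   The ID lookups above can be sketched as a tool call. The request wrapper below follows a generic MCP `tools/call` shape and the `query` value is a placeholder, so treat the sketch as illustrative rather than the exact schema:

   ```json
   {
     "tool": "Search_Naics_Categories",
     "arguments": {
       "query": "environmental remediation services"
     }
   }
   ```

   The returned IDs feed `naics_category_ids` (and, via the sibling tools, `federal_agency_ids`, `psc_category_ids`, and `federal_grant_program_ids`) in the opportunity searches that follow.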
5. Run an aggregation-first market-sizing and narrowing pass when the initial scoped market is broad or when concentration checks will improve the answer.
   - Use `per_page: 0` when filters define the cohort.
   - Contract path:
     - Use `Search_Federal_Contract_Opportunities`.
     - Apply concrete filters such as `federal_agency_ids`, `naics_category_ids`, `psc_category_ids`, `set_aside_types`, `place_of_performance_ids`, `due_date_range`, and `opportunity_types`.
     - Use `aggregations` such as `top_federal_agencies_by_doc_count`, `top_set_aside_types_by_doc_count`, `top_locations_by_doc_count`, `top_naics_codes_by_doc_count`, and `top_psc_codes_by_doc_count`.
   - Grant path:
     - Use `Search_Federal_Grant_Opportunities`.
     - Apply concrete filters such as `federal_agency_ids`, `federal_grant_program_ids`, `funding_instrument_types`, `funding_activity_categories`, `due_date_range`, and `opportunity_types`.
     - Use `aggregations` such as `top_federal_agencies_by_doc_count`, `top_federal_grant_programs_by_doc_count`, and `top_points_of_contact_by_doc_count` when outreach density or filing complexity materially affects the answer.
   - Use the aggregation results to estimate market size, describe whether the scoped market is concentrated or fragmented, and tighten filters before row-by-row review when the cohort is still too broad.
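   A contract-path sizing call might look like the sketch below. The filter and aggregation names come from the lists above; the ID values are placeholders resolved in Step 4, and the exact request shape depends on the tool schema:

   ```json
   {
     "tool": "Search_Federal_Contract_Opportunities",
     "arguments": {
       "per_page": 0,
       "federal_agency_ids": ["<agency-id-from-step-4>"],
       "naics_category_ids": ["<naics-id-from-step-4>"],
       "aggregations": [
         "top_federal_agencies_by_doc_count",
         "top_set_aside_types_by_doc_count",
         "top_naics_codes_by_doc_count"
       ]
     }
   }
   ```

   With `per_page: 0`, only the aggregation buckets and total count come back, which is enough to judge market size and concentration before any row-by-row review.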
6. Run a keyword and structured-filter opportunity retrieval pass before semantic expansion.
   - Do not proceed to Step 7 until you have retrieved and reviewed row-level results from the Step 5 cohort.
   - If the Step 5 cohort is 200 results or fewer, paginate through the full cohort before any semantic pass.
   - Contract path:
     - Use `Search_Federal_Contract_Opportunities`.
     - Request `fields_to_return` explicitly, including `govtribe_id`, `govtribe_url`, `name`, `solicitation_number`, `descriptions`, `govtribe_ai_summary`, `federal_agency`, `naics_category`, `psc_category`, `set_aside_type`, `federal_contract_vehicle`, `place_of_performance`, `posted_date`, `due_date`, `government_files`, and `points_of_contact`.
   - Grant path:
     - Use `Search_Federal_Grant_Opportunities`.
     - Request `fields_to_return` explicitly, including `govtribe_id`, `govtribe_url`, `name`, `solicitation_number`, `description`, `govtribe_ai_summary`, `federal_agency`, `federal_grant_programs`, `funding_instruments`, `applicant_types`, `funding_activity_categories`, `posted_date`, `due_date`, `government_files`, and `points_of_contact`.
   - If the user requested only open or active results, make that an explicit filter decision rather than a loose interpretation.
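   A row-level contract retrieval pass using the explicit `fields_to_return` above might be sketched as follows. The keyword `query`, ID values, and page size are illustrative placeholders:

   ```json
   {
     "tool": "Search_Federal_Contract_Opportunities",
     "arguments": {
       "query": "<strongest technical keywords from Steps 2-3>",
       "federal_agency_ids": ["<agency-id>"],
       "per_page": 50,
       "fields_to_return": [
         "govtribe_id", "govtribe_url", "name", "solicitation_number",
         "descriptions", "govtribe_ai_summary", "federal_agency",
         "naics_category", "psc_category", "set_aside_type",
         "posted_date", "due_date", "points_of_contact"
       ]
     }
   }
   ```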
7. Broaden only after the keyword and filter-first pass.
   - Semantic search is a broadening pass only. It must not replace the Step 6 keyword and structured-filter row retrieval pass.
   - If the initial opportunity cohort is thin or repetitive and at least one strong kept opportunity already exists from Step 6, you may run one seeded similarity pass before broader semantic expansion.
   - Contract path: use `Search_Federal_Contract_Opportunities` with `similar_filter` from the strongest kept contract opportunity and keep the strongest agency, NAICS, PSC, set-aside, due-date, and eligibility filters in place.
   - Grant path: use `Search_Federal_Grant_Opportunities` with `similar_filter` from the strongest kept grant opportunity and keep the strongest agency, program, instrument, due-date, and eligibility filters in place.
   - Do not pair this branch with a broad semantic `query` in the same call unless one narrow clarifying phrase is needed.
   - Use the same opportunity search tool with `search_mode: "semantic"` and a concise plain-language `query` built from the resolved capability profile, delivery model, mission language, and domain synonyms.
   - Keep the strongest structured filters in place while broadening.
   - Use `_score`-based `sort` for semantic passes.
   - Use `per_page` and follow-up calls as needed. Do not stop because the first page contains plausible matches.
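   The semantic broadening pass keeps the strongest structured filters in place and swaps the keyword `query` for plain mission language. The sketch below assumes `sort` accepts a `_score` field name; confirm the exact sort syntax against the tool schema:

   ```json
   {
     "tool": "Search_Federal_Contract_Opportunities",
     "arguments": {
       "search_mode": "semantic",
       "query": "<plain-language capability, delivery model, and mission description>",
       "federal_agency_ids": ["<agency-id>"],
       "sort": "_score"
     }
   }
   ```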
8. Compare each candidate opportunity to the target and keep only meaningfully relevant opportunities.
   - Evaluate direct scope fit, mission or customer fit, delivery-model fit, classification overlap, contract type or funding instrument fit, eligibility fit, geography, due-date practicality, and alignment between the target profile and opportunity `descriptions` or `description` plus `govtribe_ai_summary`.
   - Exclude opportunities that are too broad, too generic, too far outside the target’s delivery model, or supported only by weak keyword overlap.
9. Add validation branches only when they materially improve the answer.
   - Use `Search_Federal_Contract_Awards` when incumbent, recompete, or prior contract-delivery evidence matters.
   - Use `Search_Federal_Grant_Awards` when grant-history validation materially improves a grant ranking.
   - For cohort-level validation, you may run an aggregation-only pass with `per_page: 0` and the same structural filters to confirm the refined market shape.
10. Rank the remaining opportunities using the fit labels below. Base the ranking on evidence, not intuition.
11. Perform a verification pass on the top-ranked opportunities.
   - Remove weak, edge-case, or low-similarity matches and check whether the ordering still holds.
   - Re-run the final filtered cohort rather than trusting the first page.
   - Use additional pagination or a repeat aggregation check when needed to confirm coverage.
   - If the ranking changes materially after cleanup, lower confidence and explain why.

### Fit Labels
- **Very High**: Strong direct fit to the target’s offering and customer mission, with no obvious eligibility or delivery-model mismatch.
- **High**: Clear relevance with multiple supporting signals, but not as direct as the strongest matches.
- **Medium**: Plausible and potentially worth watching, but one or more meaningful gaps remain.
- **Low**: Only partial or adjacent fit. Plausible, but weakly supported.
- **Exclude**: Clear mismatch in scope, customer, eligibility, delivery model, geography, timing, or textual evidence.

Scoring factors:
- Direct fit to the target’s core offering
- Alignment with the target’s delivery model
- Same or very similar customer mission
- Same or adjacent NAICS, PSC, or grant-program pattern
- Similar technical requirements
- Similar contract type, funding instrument, or vehicle pattern
- Eligibility fit, including set-aside or applicant-type alignment
- Geographic and operational fit
- Realistic pursuit timing based on due date and stage
- Consistency between the target profile and opportunity text or summary evidence

Ranking rules:
- Do not rank an opportunity above **Medium** unless there is strong direct scope fit and at least one additional supporting signal such as the same agency, same classification, same contract type, same operational environment, or strong text alignment.
- Do not include opportunities based mostly on keyword overlap.

## Output Format
Return the answer in this order:

1. **Target Profile Summary**
   - Briefly summarize how the target company, solution, or capability profile was interpreted.
2. **Search Approach**
   - Briefly explain which `Search_*` tools, filters, and parameters were most important.
   - Briefly explain how the aggregation-first pass, keyword and filter-first row pass, and semantic pass were used.
3. **Market Slice Summary**
   - Start with a compact markdown table summarizing the dominant agencies, programs, classifications, set-aside patterns, or other market-shape signals in the scoped cohort.
   - Briefly summarize whether the scoped market looked concentrated or fragmented.
   - If concentration or distribution is central to the answer, you may add one small Mermaid `pie` or `xychart-beta` chart.
   - Add the chart only when it materially improves interpretation, include a short explanation, and fall back to the compact table if the data is sparse or Mermaid is unavailable.
4. **Top Relevant Open Opportunities**
   - Present this section as a compact markdown table first.
   - Recommended columns: `Rank`, `Opportunity`, `Agency`, `Due Date`, `Fit`, `Why It Matters`.
   - Keep the table compact and move overflow detail into short notes immediately below the table when needed.
5. **Why Others Were Excluded**
   - Briefly note close-but-rejected or weak-fit opportunities.
6. **Overall Confidence**
   - State overall confidence in the ranking and explain why.

## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.

## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
