Use this prompt when you want to see what may be coming before a requirement turns into a fully active solicitation. It surfaces formal forecasts, early notices, and other credible demand signals, organized by stage, so you can spot demand earlier. It is useful for market monitoring, early pipeline development, and getting ahead of opportunities before they become widely visible.

# Forecast / Early Signal Radar

## User Input
- **Target scope:** [Topic, capability, agency, GovTribe link, NAICS, PSC, or requirement area]

## Goal
Use GovTribe MCP tools to surface emerging federal demand signals across a market slice before they mature into fully active solicitations.

This workflow is demand-signal-first, not target-company-first. It should surface:
- Formal federal forecasts
- Early federal notices that function as demand signals
- Optional government-related article corroboration when recent public reporting materially strengthens or explains the signal

Use article and news search only as optional corroboration. It must not drive the radar by itself.

## Required Documentation
Before doing any work, call the **GovTribe Documentation** tool and read the documentation required for this workflow.

Required documentation to retrieve and read:
- `article_name="Search_Query_Guide"`
- `article_name="Search_Mode_Guide"`
- `article_name="Date_Filtering_Guide"`
- `article_name="Search_Federal_Forecasts_Tool"`
- `article_name="Search_Federal_Contract_Opportunities_Tool"`

Retrieve these additional documentation articles only when the workflow needs them:
- `article_name="Location_Filtering_Guide"` for geography or `place_of_performance_ids`
- `article_name="Search_Federal_Agencies_Tool"` for agency ID resolution
- `article_name="Search_Naics_Categories_Tool"` for NAICS ID resolution
- `article_name="Search_Psc_Categories_Tool"` for PSC ID resolution
- `article_name="Search_Government_Related_News_Articles_Tool"` for optional article corroboration
- `article_name="Search_Federal_Contract_Awards_Tool"` for optional historical validation
- `article_name="Create_Saved_Search_Tool"` for saved-search or alert handoff

Documentation rules:
- Call the **GovTribe Documentation** tool before the first research or search step.
- Read every required documentation article before using other GovTribe tools.
- Follow the documented tool contracts exactly.
- Treat the documentation as binding for tool names, parameters, field definitions, valid filters, `fields_to_return`, sort keys, and saved-search handoff.

## Required Input
The user must provide a target scope before analysis begins.

Accept any of the following:
- Topic, capability, or market lane
- Agency or customer
- NAICS or PSC
- GovTribe link to a related forecast or opportunity
- Plain-language description of the requirement area
- Optional geography or time horizon

Optional constraints the user may provide:
- Include only formal forecasts
- Include forecasts plus early notices
- Include active solicitations too
- Include government-related articles or not
- Agency focus
- NAICS or PSC focus
- Set-aside focus
- Geography
- Release window
- Forecast type such as `New Requirement`, `Recompete`, or `Exercise of Option`
- Whether to return a saved-search or alert recommendation

Input rules:
- If the target scope is too vague to search well, ask for the minimum missing detail needed to proceed.
- Do not guess the target scope.
- Do not start substantive analysis until the target scope is resolved well enough to search.

## Workflow

### Rules
- Call `Documentation` before using any other GovTribe MCP tool.
- Federal contracts only; do not mix in grants or state/local workflows.
- This workflow is demand-signal-first, not company-fit-first.
- Use GovTribe MCP tools as the primary source of evidence.
- Always set both `search_mode` and `query` on every `Search_*` call.
- Use `fields_to_return` whenever you need more than `govtribe_id`.
- Do not stop early when another tool call is required by the workflow.
- Keep calling tools until the task is complete or the tool budget is reached.
- If a tool returns empty or partial results and the workflow defines another defensible strategy, continue with that next strategy.
- Default stage scope is `Forecast`, `Pre-Solicitation`, and `Special Notice`.
- Include `Solicitation` rows only when the user explicitly asks for active solicitations or says `forecast or active`.
- Use `Search_Government_Related_News_Articles` only as optional corroboration.
- Do not treat article hits as equivalent to forecasts, notices, or solicitations.
- Use semantic expansion only after keyword/filter-first passes.
- If the user later wants company-specific fit ranking, switch to `Relevant Opportunities`.
- If the user later wants one-record notice analysis, switch to `Opportunity Deep Dive`.
- If the user later wants expiring incumbent or follow-on analysis, switch to `Expiring Contracts / Recompetes`.

### Steps
1. Before doing any research, call `Documentation`, read every required article listed above, and add only the optional articles needed for the exact workflow branch.
    - Use the documentation results to confirm valid tool names, `search_mode`, `query`, filter names, `fields_to_return`, and sort keys before searching.

2. Resolve and normalize the target scope.
    - This workflow does not require a target company.
    - Normalize topic or capability terms into a concise query set.
    - Resolve agencies into `federal_agency_ids` when names are provided.
    - Resolve NAICS into `naics_category_ids` when codes or labels are provided.
    - Resolve PSC into `psc_category_ids` when codes or labels are provided.
    - Resolve geography into `place_of_performance_ids` only after reviewing `Location_Filtering_Guide`.
    - Translate the time horizon into:
        - `estimated_solicitation_release_date_range` and `anticipated_award_start_date_range` for forecasts
        - `posted_date` and `due_date_range` for opportunity signals
        - `date_published` for article corroboration
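The time-horizon translation above can be sketched as a plain mapping. This is illustrative only: the filter names are taken from this guide, and the exact value formats and range shapes are defined by the `Date_Filtering_Guide`, which remains binding.

```python
# Illustrative sketch: translate one user time horizon ("next 6 months")
# into the date filters each GovTribe surface expects. Filter names come
# from this guide; value formats are an assumption pending Date_Filtering_Guide.
from datetime import date, timedelta

today = date.today()
window = {
    "from": today.isoformat(),
    "to": (today + timedelta(days=180)).isoformat(),  # 6-month horizon
}

date_filters = {
    "forecasts": {
        "estimated_solicitation_release_date_range": window,
        "anticipated_award_start_date_range": window,
    },
    "opportunities": {
        "posted_date": window,
        "due_date_range": window,
    },
    "articles": {
        "date_published": window,
    },
}
```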

3. Run the forecast pass first.
    - Use `Search_Federal_Forecasts` as the first-class formal planning surface.
    - Use `query: ""` when structured filters fully define the cohort.
    - Use `search_mode: "keyword"` for aggregation-first and filter-first passes.
    - Aggregation-first pass:
        - Use `per_page: 0`
        - Use `aggregations` such as `top_federal_agencies_by_doc_count`, `top_set_aside_types_by_doc_count`, `top_naics_codes_by_doc_count`, and `top_contacts_by_doc_count`
        - Use this pass to size the forecast slice, identify dominant agencies and set-aside posture, and narrow the row-retrieval pass when the cohort is broad
    - Row-retrieval pass:
        - Request `govtribe_id`, `govtribe_url`, `name`, `forecast_type`, `set_aside`, `estimated_solicitation_release_date`, `estimated_award_start_date`, `estimated_award_value`, `descriptions`, `updated_at`, `federal_agency`, `place_of_performance`, and `points_of_contact`
    - Use the tool’s documented field names exactly.
    - Important detail:
        - The input filter is `anticipated_award_start_date_range`
        - The returned row field is `estimated_award_start_date`
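The two forecast passes above can be sketched as request payloads. Parameter names are taken from this guide; the binding contract is the `Search_Federal_Forecasts_Tool` documentation, so treat this as a shape sketch rather than the actual API.

```python
# Illustrative sketch of the aggregation-first and row-retrieval passes
# against Search_Federal_Forecasts. Names follow this guide, not a verified API.
aggregation_pass = {
    "search_mode": "keyword",
    "query": "",   # empty when structured filters fully define the cohort
    "per_page": 0, # aggregation-only: size the slice without pulling rows
    "aggregations": [
        "top_federal_agencies_by_doc_count",
        "top_set_aside_types_by_doc_count",
        "top_naics_codes_by_doc_count",
        "top_contacts_by_doc_count",
    ],
}

row_pass = {
    "search_mode": "keyword",
    "query": "",
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "name", "forecast_type", "set_aside",
        "estimated_solicitation_release_date", "estimated_award_start_date",
        "estimated_award_value", "descriptions", "updated_at",
        "federal_agency", "place_of_performance", "points_of_contact",
    ],
}
# Note the asymmetry called out above: the input filter is
# `anticipated_award_start_date_range`, but rows return `estimated_award_start_date`.
```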

4. Run the early-notice pass second.
    - Use `Search_Federal_Contract_Opportunities` as the early-notice surface.
    - Default early-signal opportunity types:
        - `Pre-Solicitation`
        - `Special Notice`
    - Use `search_mode: "keyword"` first.
    - Use exact quoted identifiers where possible.
    - For RFI, sources-sought, and market-research style asks, carry those terms in `query`.
    - Do not assume there is a dedicated `RFI` opportunity-type enum. Use `Special Notice` plus query terms when needed.
    - Request:
        - `govtribe_id`, `govtribe_url`, `name`, `solicitation_number`, `opportunity_type`, `set_aside_type`, `posted_date`, `due_date`, `descriptions`, `govtribe_ai_summary`, `federal_meta_opportunity_id`, `federal_contract_vehicle`, `federal_agency`, `place_of_performance`, `naics_category`, `psc_category`, and `points_of_contact`
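The early-notice pass can be sketched the same way. The `opportunity_types` filter name is an assumption here (the documented filter name may differ); the quoted query phrasing is a placeholder showing how RFI and sources-sought intent is carried in `query` rather than in an enum.

```python
# Illustrative sketch of the early-notice pass against
# Search_Federal_Contract_Opportunities. There is no dedicated "RFI" enum,
# so RFI / sources-sought intent rides in the query string.
early_notice_pass = {
    "search_mode": "keyword",
    "query": '"sources sought" OR "request for information"',  # placeholder phrasing
    "opportunity_types": ["Pre-Solicitation", "Special Notice"],  # assumed filter name
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "name", "solicitation_number",
        "opportunity_type", "set_aside_type", "posted_date", "due_date",
        "descriptions", "govtribe_ai_summary", "federal_meta_opportunity_id",
        "federal_contract_vehicle", "federal_agency", "place_of_performance",
        "naics_category", "psc_category", "points_of_contact",
    ],
}
```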

5. Use optional government article corroboration only when it materially improves the answer.
    - Use `Search_Government_Related_News_Articles` only when one of these is true:
        - The user explicitly asks for article or news corroboration
        - Forecast and notice signals are sparse, and public reporting may explain recent agency priorities
        - The user wants supporting public evidence for why a demand signal matters now
        - There is a known topic, agency, or initiative where public announcements plausibly precede or explain procurement movement
    - Keep this branch recent by default. Use a short `date_published` window such as the last 30 to 90 days unless the user asks for more.
    - Use keyword queries first for named agencies, initiatives, programs, or phrases.
    - Use semantic search only when the topic is conceptual and recent keyword coverage is thin.
    - Request:
        - `govtribe_id`, `govtribe_url`, `title`, `subheader`, `published_date`, `site_name`, and `body`
    - Article evidence may:
        - Corroborate timing, agency attention, policy momentum, program naming, or public announcements
        - Explain why a signal may matter
        - Add context for prioritization
    - Article evidence must not:
        - Create a procurement signal by itself
        - Substitute for structured GovTribe forecast or opportunity evidence
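The bounded article branch can be sketched as one recent-window call. The 60-day window below is one point inside the 30-to-90-day default; the query phrase is a placeholder, and field names follow this guide rather than a verified contract.

```python
# Illustrative sketch of the optional article-corroboration call against
# Search_Government_Related_News_Articles, kept recent by default.
from datetime import date, timedelta

article_pass = {
    "search_mode": "keyword",
    "query": '"named agency initiative or program"',  # placeholder phrase
    "date_published": {
        "from": (date.today() - timedelta(days=60)).isoformat(),
        "to": date.today().isoformat(),
    },
    "fields_to_return": [
        "govtribe_id", "govtribe_url", "title", "subheader",
        "published_date", "site_name", "body",
    ],
}
```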

6. Broaden with semantic expansion only after the keyword/filter-first passes.
    - Use `search_mode: "semantic"` on forecasts or opportunities only when the topic is conceptual or synonym-heavy.
    - Keep the strongest structural filters in place.
    - Use `_score` sorting for semantic passes.
    - Do not let semantic broadening replace stage-based filtering.
    - If article semantic search is used at all, keep it secondary to forecast and opportunity retrieval.
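A semantic broadening pass changes only the mode and sort while keeping structural filters in place. In this sketch the `sort` key syntax and the placeholder query are assumptions; confirm the exact forms in `Search_Mode_Guide`.

```python
# Illustrative: semantic expansion keeps the strongest structural filters
# and swaps only search_mode and sort. Key names follow this guide where
# documented; `sort` syntax is an assumption.
semantic_pass = {
    "search_mode": "semantic",
    "query": "secure cloud modernization for mission systems",  # conceptual phrasing
    "sort": "_score",                                # relevance sort for semantic passes
    "federal_agency_ids": ["<resolved-agency-id>"],  # structural filter kept in place
}
```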

7. Merge, dedupe, and stage the radar.
    - Forecast records stay in the `Forecast` bucket.
    - Opportunity rows are mapped into `Pre-Solicitation` or `Special Notice / RFI / Sources Sought`.
    - If explicitly requested, `Solicitation` rows go into `Active Solicitation`.
    - Article rows go into `Article / Public Signal` only when they materially support or explain a structured signal.
    - Collapse obvious duplicates by agency, title, timing, and scope similarity.
    - If a formal forecast and an early notice appear to describe the same requirement, keep both and explain the progression.
    - If an article appears to describe the same requirement or initiative, link it narratively to the structured signal rather than treating it as a separate procurement record.
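The staging and duplicate-collapse logic above can be sketched as two small helpers. The stage names come from this guide; the dedupe key is a deliberate simplification, since real collapsing should also weigh timing and scope similarity.

```python
# Illustrative sketch of radar staging and duplicate collapsing.
def stage_for(record: dict) -> str:
    """Map a merged record onto a radar bucket (names from this guide)."""
    if record["source"] == "forecast":
        return "Forecast"
    if record["opportunity_type"] == "Pre-Solicitation":
        return "Pre-Solicitation"
    if record["opportunity_type"] == "Special Notice":
        return "Special Notice / RFI / Sources Sought"
    return "Active Solicitation"  # only kept when explicitly requested

def dedupe_key(record: dict) -> tuple:
    # Crude agency + normalized-title key; timing and scope similarity
    # should also inform collapsing, as described above.
    return (record["agency"], record["title"].lower().strip())
```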

8. Use optional historical validation only when it sharpens the radar.
    - If the user asks whether a signal looks like a recompete or wants historical confirmation, use `Search_Federal_Contract_Awards`.
    - Validate through agency, NAICS, PSC, vehicle, and date context.
    - Use this only to sharpen the radar, not to turn the workflow into `Expiring Contracts / Recompetes`.

9. Hand off to saved-search logic only after the radar is finalized.
    - If the user requests reusable monitoring, preserve the final `search_results_id`.
    - Recommend or create a saved search only after the radar scope is narrowed.
    - Restrict cadence guidance to `Daily`, `Weekly`, `Instant`, or `Never`.

10. Rank and verify the remaining signals before finalizing the answer.
    - Use the signal labels and scoring factors in `## Output Format`.
    - Prefer records with concrete stage clarity, timing, scope specificity, agency clarity, and evidence of progression from forecast to notice.
    - Remove weak or article-only signals before finalizing the radar.
    - If the evidence is sparse, conflicting, or mostly weak, say so clearly instead of forcing a confident radar.
    - Include timing outlook and next-step logic when the evidence supports it.

## Tool Budget
Design the workflow to stay compact.

Typical path without articles:
- 5 documentation calls
- 0 to 3 resolver calls
- 1 forecast aggregation or row pass
- 1 early-notice pass
- 0 to 1 semantic expansion pass
- 0 to 1 saved-search or award-validation pass

Article branch:
- Add 1 article-search call in most cases
- At most 2 article-search calls if one metadata-only check and one row-retrieval pass materially improve the answer

Expected total:
- Typical path without articles: 7 to 10 calls
- High end with article corroboration, semantic expansion, or saved-search handoff: 11 to 13 calls

Avoid exceeding 15 calls unless an extra call materially changes correctness.

## Output Format
Use these signal labels:
- `High Signal`
- `Medium Signal`
- `Watch`
- `Weak`
- `Exclude`

Score each surfaced item using:
- Stage maturity
- Release timing
- Specificity of scope
- Agency clarity
- Set-aside clarity
- Classification clarity
- Presence of points of contact
- Evidence of progression from forecast to notice
- Whether the signal looks like a new requirement, recompete, or option exercise
- Whether article or public evidence reinforces, but does not replace, the structured signal
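One way to make the factors above comparable is an equal-weight rubric mapped onto the signal labels. The weights and thresholds below are assumptions for illustration, not part of the workflow spec.

```python
# Illustrative scoring rubric: equal-weight the factors above (0.0-1.0 each)
# and map the average onto the radar labels. Thresholds are assumptions.
FACTORS = [
    "stage_maturity", "release_timing", "scope_specificity", "agency_clarity",
    "set_aside_clarity", "classification_clarity", "has_points_of_contact",
    "forecast_to_notice_progression", "requirement_type_clarity",
    "article_reinforcement",
]

def signal_label(scores: dict) -> str:
    """Map per-factor scores onto High Signal / Medium Signal / Watch / Weak / Exclude."""
    total = sum(scores.get(f, 0.0) for f in FACTORS) / len(FACTORS)
    if total >= 0.75:
        return "High Signal"
    if total >= 0.5:
        return "Medium Signal"
    if total >= 0.3:
        return "Watch"
    if total > 0.1:
        return "Weak"
    return "Exclude"
```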

Return the answer in this order:

1. **Target Scope Summary**
    - Briefly summarize how the topic, agency, codes, geography, and timing were interpreted

2. **Search Approach**
    - Briefly explain which `Documentation.article_name` calls were used
    - Briefly explain how the forecast pass, early-notice pass, and optional article branch were used
    - Briefly note any filters, time windows, or stage-scope decisions applied

3. **Forecast Signals**
    - List the strongest formal forecast signals

4. **Early Notice Signals**
    - List the strongest early-notice signals

5. **Article / Public Signal Corroboration**
    - Include only when article evidence was actually used and materially improved the answer

6. **Radar Summary and Timing Outlook**
    - Summarize which signals look strongest and what stage progression appears most likely

7. **Saved Search / Monitoring Recommendation**
    - Include only when requested

8. **Risks, Gaps, or Unknowns**
    - Briefly note sparse signal coverage, ambiguity, thin evidence, or timing uncertainty

9. **Overall Confidence**
    - State overall confidence and why

## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.

## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
