# SBA Certification Graduation Dashboard
## User Input
- **Target scope:** [Agency / NAICS / PSC / vehicle / description of work]
## Goal
Use GovTribe MCP tools to build a **gap-ranked 8(a) exposure analysis** across **federal contract awards** and **federal contract IDVs** where both the vendor's **SBA 8(a) certification expiration date** and the record's lifecycle date fall within the active analysis window. Default to the next 12 months unless the user clearly requests a shorter defensible window. Compute the date gap between each record's lifecycle date and its vendor's certification expiration date, and surface the strongest vendor-level and record-level exposures in a standard text answer.
## Required Input
The user must provide a **target scope** before analysis begins.
The target scope may include one or more of the following:
- Agencies or customers of interest
- NAICS, PSC, or other classifications
- Federal contract vehicles
- Set-aside type or eligibility posture
- A shorter lifecycle and certification date window
- A specific product, service, or work description if it is precise enough to narrow the results
Input rules:
- This workflow always searches both federal contract awards and federal contract IDVs.
- This workflow always uses `vendor_sba_8a_expiration_date_range` as the certification filter and `vendor_sba_cert_8a_expiration_date` as the returned certification date field.
- Default the certification and lifecycle windows to the next 12 months. Only narrow those windows when the user clearly requests a shorter defensible range that can be applied symmetrically to both datasets.
- Convert the target scope into the narrowest defensible shared scope bundle you can apply symmetrically to both datasets.
- If the user provides a product, service, or work description, prefer converting it into structured filters such as agency, NAICS, PSC, vehicle, set-aside, or a shorter date window whenever possible.
- Any scope derived from the target scope and applied to awards must also be applied to IDVs unless the user explicitly says otherwise.
## Workflow
### Steps
1. Call `Documentation` once with `article_names=["Search_Query_Guide", "Search_Mode_Guide", "Date_Filtering_Guide", "Aggregation_and_Leaderboard_Guide"]` before any other GovTribe tool.
- Use the documentation results to confirm valid tool names, date range shapes, filter names, `fields_to_return`, aggregation options, and sort keys before searching.
### Important Dataset Field Differences
- Awards use `ultimate_completion_date_range` as the lifecycle filter and `ultimate_completion_date` as the returned lifecycle field.
- IDVs use `last_date_to_order_range` as the lifecycle filter and `last_date_to_order` as the returned lifecycle field.
- Awards use `set_aside_type` for set-aside output.
- IDVs use `set_aside` for set-aside output.
- Awards sort lifecycle retrieval with `completionDate`.
- IDVs sort lifecycle retrieval with `lastDateToOrder`.
- These names are not interchangeable across datasets. Do not assume awards and IDVs share the same field pattern.
2. Set the fixed SBA 8(a) certification configuration and shared scope bundle before searching.
- Always use `vendor_sba_8a_expiration_date_range` on both `Search_Federal_Contract_Awards` and `Search_Federal_Contract_IDVs`.
- Always request `vendor_sba_cert_8a_expiration_date` as the returned certification date field on both datasets.
- Convert `target_scope` into a shared scope bundle. Apply the same scope, or the closest valid equivalent filters, to both `Search_Federal_Contract_Awards` and `Search_Federal_Contract_IDVs`.
- If a requested scope cannot be applied defensibly to both datasets, say so explicitly and ask the user to clarify before searching.
- Default the certification and lifecycle windows to `now/d` through `now+12M/d`.
- If the user clearly requests a shorter defensible date window and it can be applied symmetrically to both datasets, use that narrower window instead.
- Use the returned certification and lifecycle dates for exact row-level gap calculation, display, and interpretation.
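The Step 2 configuration can be sketched as a single filter-bundle builder. The filter names come from this workflow; the dict shape and function are illustrative only and must be validated against the `Documentation` tool output, not treated as the real MCP parameter schema.

```python
# Default symmetric window for both the lifecycle and certification filters.
DEFAULT_WINDOW = {"from": "now/d", "to": "now+12M/d"}

def build_filters(dataset, shared_scope):
    """Assemble dataset-specific filters from one shared scope bundle.

    `dataset` is "awards" or "idvs"; `shared_scope` is the scope bundle
    derived from the user's target scope (agency, NAICS, PSC, vehicle, ...).
    """
    # Lifecycle filter names differ by dataset and are not interchangeable.
    lifecycle_filter = {
        "awards": "ultimate_completion_date_range",
        "idvs": "last_date_to_order_range",
    }[dataset]
    filters = {
        lifecycle_filter: DEFAULT_WINDOW,
        # Fixed certification filter, identical on both datasets.
        "vendor_sba_8a_expiration_date_range": DEFAULT_WINDOW,
    }
    filters.update(shared_scope)
    return filters
```

For example, `build_filters("awards", {"naics_category": "541512"})` yields the award-side filters, and calling it again with `"idvs"` and the same scope bundle keeps the two searches symmetric.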
3. Run a federal contract-award count and aggregation-overview pass with `Search_Federal_Contract_Awards`.
- Use `ultimate_completion_date_range` for the lifecycle window established in Step 2.
- Apply `vendor_sba_8a_expiration_date_range` for the certification window established in Step 2.
- Apply the shared scope bundle from Step 2.
- Use `per_page: 0` to measure the filtered award cohort and request aggregation highlights such as `top_awardees_by_dollars_obligated`, `top_contracting_federal_agencies_by_dollars_obligated`, `top_set_aside_types_by_dollars_obligated`, and, when useful, `top_federal_contract_vehicles_by_dollars_obligated`.
- If the filtered award cohort is more than `250` rows, stop and ask the user to de-scope before continuing.
- Suggested narrowing options: one or more specific agencies or customers, NAICS or PSC codes, a set-aside type, a shorter date window, or a precise product/service/work description.
4. Run the federal contract-award row retrieval workflow with `Search_Federal_Contract_Awards`.
- Reuse the Step 3 lifecycle, certification, and shared scope filters.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `ultimate_completion_date`, `dollars_obligated`, `ceiling_value`, `set_aside_type`, `awardee`, `parent_of_awardee`, `contracting_federal_agency`, `naics_category`, `psc_category`, `federal_contract_vehicle`, and `vendor_sba_cert_8a_expiration_date` only.
- Sort retrieval by `ultimate_completion_date` ascending.
- Retrieve `per_page: 50` rows at a time.
- Review up to the first `100` scoped award rows sorted by lifecycle date. Do not attempt exhaustive pagination.
- Only retrieve a second page when the first 50 rows are not enough to support a defensible ranking or pattern summary.
- Exclude rows missing either `ultimate_completion_date` or `vendor_sba_cert_8a_expiration_date`.
- Keep all remaining rows in the scoped cohort and compute `gap_days = ultimate_completion_date - certification_date`.
- Interpret `gap_days` as: positive means the work extends past certification expiration; zero or negative means the work ends on or before certification expiration.
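The gap computation above can be sketched as follows, assuming the returned dates are ISO `YYYY-MM-DD` strings (the actual returned format should be confirmed against real tool output):

```python
from datetime import date

def gap_days(lifecycle_date, certification_date):
    """Days between the lifecycle end date and the certification expiration.

    Positive: the work extends past certification expiration.
    Zero or negative: the work ends on or before certification expiration.
    """
    return (date.fromisoformat(lifecycle_date)
            - date.fromisoformat(certification_date)).days

# Example: completion 2026-09-30, certification expires 2026-03-31,
# so the award runs roughly six months past expiration.
gap_days("2026-09-30", "2026-03-31")  # 183
```

The same function applies to IDVs in Step 6 by passing `last_date_to_order` as the lifecycle date.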
5. Run a federal contract IDV count and aggregation-overview pass with `Search_Federal_Contract_IDVs`.
- Use `last_date_to_order_range` for the lifecycle window established in Step 2.
- Apply `vendor_sba_8a_expiration_date_range` for the certification window established in Step 2.
- Apply the shared scope bundle from Step 2.
- Use `per_page: 0` to measure the filtered IDV cohort and request aggregation highlights such as `top_awardees_by_doc_count`, `top_contracting_federal_agencies_by_doc_count`, `top_set_aside_types_by_doc_count`, and `top_vehicles_by_doc_count`.
- If the filtered IDV cohort is more than `100` rows, stop and ask the user to de-scope before continuing.
- Suggested narrowing options: one or more specific agencies or customers, NAICS or PSC codes, a set-aside type, a shorter date window, or a precise product/service/work description.
- If the top vehicle buckets are dominated by names containing `Multiple Award Schedule`, `MAS`, `FSS`, `GWAC`, or `OASIS`, and those buckets account for more than 60% of the scoped IDV cohort, tell the user the pool is MAS/GWAC-heavy, suggest narrowing before row retrieval, and offer to exclude `Federal Supply Schedule` from `federal_contract_idv_types` if the user wants to narrow.
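The MAS/GWAC dominance check above can be sketched like this; the bucket tuples are an illustrative shape for the aggregation rows, not the real response format:

```python
# Vehicle-name markers that indicate a shared-ceiling vehicle.
SHARED_VEHICLE_MARKERS = ("Multiple Award Schedule", "MAS", "FSS", "GWAC", "OASIS")

def is_mas_heavy(vehicle_buckets, scoped_total, threshold=0.60):
    """True if shared-vehicle buckets exceed `threshold` of the scoped cohort.

    `vehicle_buckets` is an iterable of (vehicle_name, doc_count) pairs from
    the aggregation pass; `scoped_total` is the scoped IDV cohort count.
    """
    shared = sum(count for name, count in vehicle_buckets
                 if any(marker in name for marker in SHARED_VEHICLE_MARKERS))
    return scoped_total > 0 and shared / scoped_total > threshold
```

When this returns true, flag the pool as MAS/GWAC-heavy and offer the `federal_contract_idv_types` exclusion before row retrieval.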
6. Run the federal contract IDV row retrieval workflow with `Search_Federal_Contract_IDVs`.
- Reuse the Step 5 lifecycle, certification, and shared scope filters.
- Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `last_date_to_order`, `ceiling_value`, `set_aside`, `awardee`, `parent_of_awardee`, `contracting_federal_agency`, `naics_category`, `psc_category`, `federal_contract_vehicle`, and `vendor_sba_cert_8a_expiration_date` only.
- Sort retrieval by `last_date_to_order` ascending.
- Retrieve `per_page: 50` rows at a time.
- Review up to the first `100` scoped IDV rows sorted by lifecycle date. Do not attempt exhaustive pagination.
- Only retrieve a second page when the first 50 rows are not enough to support a defensible ranking or pattern summary.
- Exclude rows missing either `last_date_to_order` or `vendor_sba_cert_8a_expiration_date`.
- Keep all remaining rows in the scoped cohort and compute `gap_days = last_date_to_order - certification_date`.
- Interpret `gap_days` as: positive means the work extends past certification expiration; zero or negative means the work ends on or before certification expiration.
7. Synthesize the final analysis from the retained award and IDV rows only.
- Keep awards and IDVs separate throughout the answer.
- Base all counts, pattern claims, summaries, and top-exposure tables only on the scoped rows that survived the active date-window filters and have both required dates.
- Do not merge awards and IDVs into a single ranked table.
- Rank vendors and representative rows by `gap_days` descending, then value descending.
- Add a row-level `Set-Aside Alignment` label using returned set-aside text only:
- `matching expected set-aside`
- `other set-aside`
- `no set-aside used`
- Treat `8(a) Sole Source` and `Competitive 8(a)` as `matching expected set-aside` for this fixed 8(a) workflow.
- For IDVs, if `federal_contract_vehicle` contains `Multiple Award Schedule`, `MAS`, `FSS`, `GWAC`, or `OASIS`, flag the record as a shared vehicle ceiling and exclude that ceiling from aggregate exposure totals.
- Keep pattern commentary qualitative and limited to what the scoped rows and aggregation-overview passes clearly support.
- Focus on the strongest vendor and record-level exposures. Do not attempt an exhaustive full-cohort dashboard.
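The ranking rule above (gap descending, then value descending) can be sketched as one sort; the row dicts are an illustrative shape, not a real tool response:

```python
def rank_rows(rows, value_field):
    """Rank scoped rows by gap_days descending, then value descending.

    Rows are assumed to be dicts carrying 'gap_days' plus a numeric value
    field ('dollars_obligated' for awards, 'ceiling_value' for IDVs).
    Missing values sort as zero.
    """
    return sorted(
        rows,
        key=lambda r: (r["gap_days"], r.get(value_field) or 0),
        reverse=True,
    )
```

Run it once for the award cohort and once for the IDV cohort so the two ranked tables stay separate.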
8. Perform a verification pass before finalizing the dashboard.
- Recheck any row with missing vendor identity context, conflicting dates, or a filter/display mismatch.
- Explicitly compare `awardee` and `parent_of_awardee`.
- If both are present and differ, note the parent relationship in the answer and do not silently merge vendors solely because they share a parent.
- Flag rows where vendor identity appears ambiguous or fragmented across LLC/subsidiary variants.
- If one dataset returns no defensible rows, say so clearly instead of padding the dashboard.
- Lower confidence if the remaining results are sparse, MAS/GWAC-heavy, concentrated in one narrow pocket, or dependent on ambiguous parent/child consolidation.
## Output Format
Return the answer in this order:
1. **8(a) Certification Filter and Shared Scope**
   - Briefly summarize the fixed 8(a) certification focus, the lifecycle and certification windows, and the shared scope derived from the user's target and applied across both datasets.
2. **Search Approach**
- Briefly explain which searches were run, how the shared scope was applied symmetrically, how the count and aggregation passes were used, and how `gap_days` was computed from returned rows.
3. **Cohort Overview**
- Start with one compact markdown table.
- Recommended columns: `Dataset`, `Scoped Count`, `Rows Reviewed`, `Earliest Lifecycle Date`, `Latest Lifecycle Date`, `Top Signals`.
4. **Top Award Exposures**
- Present this section as a compact markdown table first.
- Recommended columns: `Vendor`, `Award`, `Agency`, `Lifecycle Date`, `Certification Date`, `Gap Days`, `Set-Aside Alignment`, `Exposure`.
- Focus on the strongest and most decision-useful rows rather than exhausting the full cohort.
5. **Top IDV Exposures**
- Present this section as a compact markdown table first.
- Recommended columns: `Vendor`, `IDV`, `Agency`, `Last Date to Order`, `Certification Date`, `Gap Days`, `Set-Aside Alignment`, `Ceiling / Exposure`.
- Flag shared vehicle ceilings clearly in the table or a short note below it.
6. **Key Patterns**
- Briefly summarize agency concentration, set-aside alignment, shared vehicle ceiling posture, and the vendors with the largest positive `gap_days`.
7. **Risks, Gaps, or Unknowns**
- Briefly note missing dates, parent-child ambiguity, shared ceiling limitations, MAS/GWAC-heavy IDV pools, or any de-scope limits that affected the answer.
8. **Overall Confidence**
- State overall confidence and explain why.
Do not create a React dashboard, code artifact, serialized JSON payload, or second output format. Return a normal text answer with compact markdown tables only.
## Citation Rules
- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.
## Grounding Rules
- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.