SBA Certification Graduation Dashboard

Use this prompt when you want a scoped, gap-ranked dashboard of federal contract awards and IDVs tied to an SBA certification type. It applies the same scope to awards and IDVs, sizes each cohort before row retrieval, then compares lifecycle and certification dates to surface the vendors and records with the most meaningful exposure. As written it is fixed to 8(a) graduation monitoring tied to live federal contract activity; the same pattern adapts to HUBZone, joint venture, and SDB certifications.

# SBA Certification Graduation Dashboard

## User Input
- **Target scope:** [Agency / NAICS / PSC / vehicle / description of work]

## Goal
Use GovTribe MCP tools to build a **gap-ranked vendor dashboard** for **federal contract awards** and **federal contract IDVs** where both the vendor's **SBA 8(a) certification expiration date** and the record lifecycle date fall within the active analysis window. Default to the next 12 months unless the user clearly requests a shorter defensible window. Show the full scoped cohort that meets those date windows, compute the date gap between lifecycle and certification, and rank the results by the severity of that gap.

## Required Input
The user must provide a **target scope** before analysis begins.

The target scope may include one or more of the following:
- Agencies or customers of interest
- NAICS, PSC, or other classifications
- Federal contract vehicles
- Set-aside type or eligibility posture
- A shorter lifecycle and certification date window
- A specific product, service, or work description if it is precise enough to narrow the results

Input rules:
- This workflow always searches both federal contract awards and federal contract IDVs.
- This workflow always uses `vendor_sba_8a_expiration_date_range` as the certification filter and `vendor_sba_cert_8a_expiration_date` as the returned certification date field.
- Default the certification and lifecycle windows to the next 12 months. Only narrow those windows when the user clearly requests a shorter defensible range that can be applied symmetrically to both datasets.
- Convert the target scope into the narrowest defensible shared scope bundle you can apply symmetrically to both datasets.
- If the user provides a product or service description, prefer converting it into structured filters such as agency, NAICS, PSC, vehicle, set-aside, or a shorter date window when possible.
- Any scope derived from the target scope and applied to awards must also be applied to IDVs unless the user explicitly says otherwise.

## Workflow

### Steps
1. Call `Documentation` once with `article_names=["Search_Query_Guide", "Search_Mode_Guide", "Date_Filtering_Guide", "Aggregation_and_Leaderboard_Guide"]` before any other GovTribe tool.
   - Use the documentation results to confirm valid tool names, date range shapes, filter names, `fields_to_return`, aggregation options, and sort keys before searching.

### Important Dataset Field Differences
- Awards use `ultimate_completion_date_range` as the lifecycle filter and `ultimate_completion_date` as the returned lifecycle field.
- IDVs use `last_date_to_order_range` as the lifecycle filter and `last_date_to_order` as the returned lifecycle field.
- Awards use `set_aside_type` for set-aside output.
- IDVs use `set_aside` for set-aside output.
- Awards sort lifecycle retrieval with `completionDate`.
- IDVs sort lifecycle retrieval with `lastDateToOrder`.
- These names are not interchangeable across datasets. Do not assume awards and IDVs share the same field pattern.
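These differences can be captured as a single lookup consulted before building each query, so no call site hard-codes the wrong dataset's names. A sketch; the field names are the ones listed above, while the `DATASET_FIELDS` constant itself is illustrative:

```javascript
// Per-dataset field names from the list above. Awards and IDVs do not
// share a field pattern, so resolve names through one map instead of
// repeating them at each call site.
const DATASET_FIELDS = {
  awards: {
    lifecycleFilter: "ultimate_completion_date_range",
    lifecycleField: "ultimate_completion_date",
    setAsideField: "set_aside_type",
    sortKey: "completionDate",
  },
  idvs: {
    lifecycleFilter: "last_date_to_order_range",
    lifecycleField: "last_date_to_order",
    setAsideField: "set_aside",
    sortKey: "lastDateToOrder",
  },
};
```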

2. Set the fixed SBA 8(a) certification configuration and shared scope bundle before searching.
   - Always use `vendor_sba_8a_expiration_date_range` on both `Search_Federal_Contract_Awards` and `Search_Federal_Contract_IDVs`.
   - Always request `vendor_sba_cert_8a_expiration_date` as the returned certification date field on both datasets.
   - Convert `target_scope` into a shared scope bundle. Apply the same scope, or the closest valid equivalent filters, to both `Search_Federal_Contract_Awards` and `Search_Federal_Contract_IDVs`.
   - If a requested scope cannot be applied defensibly to both datasets, say so explicitly and ask the user to clarify before searching.
   - Default the certification and lifecycle windows to `now/d` through `now+12M/d`.
   - If the user clearly requests a shorter defensible date window and it can be applied symmetrically to both datasets, use that narrower window instead.
   - Use the returned certification and lifecycle dates for exact row-level gap calculation, display, and interpretation.
3. Run a federal contract-award count and aggregation-overview pass with `Search_Federal_Contract_Awards`.
   - Use `ultimate_completion_date_range` for the lifecycle window established in Step 2.
   - Apply `vendor_sba_8a_expiration_date_range` for the certification window established in Step 2.
   - Apply the shared scope bundle from Step 2.
   - Use `per_page: 0` to measure the filtered award cohort and request aggregation highlights such as `top_awardees_by_dollars_obligated`, `top_contracting_federal_agencies_by_dollars_obligated`, `top_set_aside_types_by_dollars_obligated`, and, when useful, `top_federal_contract_vehicles_by_dollars_obligated`.
   - If the filtered award cohort is more than `250` rows, stop and ask the user to de-scope before continuing.
   - Suggested narrowing options: one or more specific agencies or customers, NAICS or PSC codes, a set-aside type, a shorter date window, or a precise product/service/work description.
4. Run the federal contract-award row retrieval workflow with `Search_Federal_Contract_Awards`.
   - Reuse the Step 3 lifecycle, certification, and shared scope filters.
   - Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `ultimate_completion_date`, `dollars_obligated`, `ceiling_value`, `set_aside_type`, `awardee`, `parent_of_awardee`, `contracting_federal_agency`, `naics_category`, `psc_category`, `federal_contract_vehicle`, and `vendor_sba_cert_8a_expiration_date` only.
   - Sort retrieval by `ultimate_completion_date` ascending.
   - Paginate through the filtered award cohort only when the Step 3 count is `250` rows or fewer.
   - Exclude rows missing either `ultimate_completion_date` or `vendor_sba_cert_8a_expiration_date`.
   - Keep all remaining rows in the scoped cohort and compute `gap_days = ultimate_completion_date - certification_date`.
   - Interpret `gap_days` as: positive means the work extends past certification expiration; zero or negative means the work ends on or before certification expiration.
5. Run a federal contract IDV count and aggregation-overview pass with `Search_Federal_Contract_IDVs`.
   - Use `last_date_to_order_range` for the lifecycle window established in Step 2.
   - Apply `vendor_sba_8a_expiration_date_range` for the certification window established in Step 2.
   - Apply the shared scope bundle from Step 2.
   - Use `per_page: 0` to measure the filtered IDV cohort and request aggregation highlights such as `top_awardees_by_doc_count`, `top_contracting_federal_agencies_by_doc_count`, `top_set_aside_types_by_doc_count`, and `top_vehicles_by_doc_count`.
   - If the filtered IDV cohort is more than `100` rows, stop and ask the user to de-scope before continuing.
   - Suggested narrowing options: one or more specific agencies or customers, NAICS or PSC codes, a set-aside type, a shorter date window, or a precise product/service/work description.
   - If the top vehicle buckets are dominated by names containing `Multiple Award Schedule`, `MAS`, `FSS`, `GWAC`, or `OASIS`, and those buckets account for more than 60% of the scoped IDV cohort, tell the user the pool is MAS/GWAC-heavy and suggest narrowing before row retrieval. If the user wants to narrow, offer to exclude `Federal Supply Schedule` from `federal_contract_idv_types`.
6. Run the federal contract IDV row retrieval workflow with `Search_Federal_Contract_IDVs`.
   - Reuse the Step 5 lifecycle, certification, and shared scope filters.
   - Request `fields_to_return` explicitly. At minimum request `govtribe_id`, `govtribe_url`, `name`, `contract_number`, `last_date_to_order`, `ceiling_value`, `set_aside`, `awardee`, `parent_of_awardee`, `contracting_federal_agency`, `naics_category`, `psc_category`, `federal_contract_vehicle`, and `vendor_sba_cert_8a_expiration_date` only.
   - Sort retrieval by `last_date_to_order` ascending.
   - Paginate through the filtered IDV cohort only when the Step 5 count is `100` rows or fewer.
   - Exclude rows missing either `last_date_to_order` or `vendor_sba_cert_8a_expiration_date`.
   - Keep all remaining rows in the scoped cohort and compute `gap_days = last_date_to_order - certification_date`.
   - Interpret `gap_days` as: positive means the work extends past certification expiration; zero or negative means the work ends on or before certification expiration.
7. Synthesize the dashboard from the retained award and IDV rows only.
   - Keep awards and IDVs separate throughout the answer.
   - Base all counts, pattern claims, summaries, and rollups only on the scoped rows that survived the active date-window filters and have both required dates.
   - Do not merge awards and IDVs into a single ranked table.
   - Build one rollup row per vendor within each dataset using the returned `awardee`.
   - For each vendor rollup, report `Records`, `Worst Gap Days`, `Earliest Lifecycle Date`, `Latest Lifecycle Date`, `Aggregate Exposure` or `Aggregate Ceiling`, and `Set-Aside Alignment Summary`.
   - Sort rollups and representative rows by `gap_days` descending, then value descending.
   - Add a row-level `Set-Aside Alignment` label using returned set-aside text only:
     - `matching expected set-aside`
     - `other set-aside`
     - `no set-aside used`
   - Treat `8(a) Sole Source` and `Competitive 8(a)` as `matching expected set-aside` for this fixed 8(a) workflow.
   - For IDVs, if `federal_contract_vehicle` contains `Multiple Award Schedule`, `MAS`, `FSS`, `GWAC`, or `OASIS`, flag the record as a shared vehicle ceiling and exclude that ceiling from aggregate exposure totals.
   - Keep pattern commentary qualitative and limited to what the scoped rows and aggregation-overview passes clearly support.
8. Perform a verification pass before finalizing the dashboard.
   - Recheck any row with missing vendor identity context, conflicting dates, or a filter/display mismatch.
   - Explicitly compare `awardee` and `parent_of_awardee`.
   - If both are present and differ, note the parent relationship in the answer and do not silently merge vendors solely because they share a parent.
   - Flag rows where vendor identity appears ambiguous or fragmented across LLC/subsidiary variants.
   - If one dataset returns no defensible rows, say so clearly instead of padding the dashboard.
   - Lower confidence if the remaining results are sparse, MAS/GWAC-heavy, concentrated in one narrow pocket, or dependent on ambiguous parent/child consolidation.
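The row-level gap computation in Steps 4 and 6 reduces to a whole-day difference between two ISO dates. A minimal sketch; `gapDays` is an illustrative helper name, not a tool field:

```javascript
// gap_days = lifecycle date minus certification expiration date, in
// whole days. Positive means the work extends past certification
// expiration; zero or negative means it ends on or before expiration.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function gapDays(lifecycleDate, certExpirationDate) {
  // Pin both YYYY-MM-DD strings to UTC midnight so the difference is
  // an exact day count regardless of local timezone.
  const lifecycle = Date.parse(lifecycleDate + "T00:00:00Z");
  const cert = Date.parse(certExpirationDate + "T00:00:00Z");
  return Math.round((lifecycle - cert) / MS_PER_DAY);
}
```

Rows missing either date are excluded before this runs, per Steps 4 and 6.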

# Output Format

This workflow produces **two outputs**: a concise text summary and an interactive React dashboard. Both must be grounded in the same scoped rows and aggregation data. Do not pad either output with data that was not retrieved in the current workflow.

This spec is model-agnostic. It works on any LLM platform with MCP tool access and the ability to execute code and render a React component (e.g., Claude artifacts, ChatGPT canvas, or equivalent).

---

## Data Serialization Step (mandatory before writing the dashboard)

After completing the verification pass (Step 8) and before writing any dashboard code, use a code execution step (Python preferred) to serialize the final vendor rollup arrays and cohort metadata as structured JSON.

The serialization script must:

1. Ingest all parsed row data from earlier workflow steps (already held in memory or saved to intermediate files during pagination).
2. Build vendor rollups with all required fields (see Data Shape below).
3. Sort rollups by `maxGap` descending, then value descending.
4. Cap award rollups at 25 vendors and IDV rollups at 25 vendors. If the full cohort exceeds 25, retain only the top 25 by `maxGap` and record the true cohort count in the metadata object.
5. Output three JSON structures: awards rollups, IDV rollups, and cohort metadata.

When writing the React dashboard, embed these JSON structures verbatim as the `AWARDS_DATA`, `IDVS_DATA`, and `COHORT_META` constants at the top of the component file. Do not hand-transcribe rollup data from tool output text, prior conversation turns, or memory. The code execution step is the single source of truth for dashboard data.

**Why this matters:** The lossiest step in this workflow is transcribing parsed data into dashboard constants by hand. The serialization step eliminates truncated values, dropped vendors, and rounded numbers.
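The sort-and-cap rule above (top 25 by `maxGap`, true cohort count preserved in metadata) is the part most worth getting exactly right. The execution step prefers Python, but the logic ports directly; it is shown here in JavaScript to match this spec's data-shape blocks, and `capRollups` and its `valueKey` parameter are illustrative names:

```javascript
// Sort rollups by worst gap (descending), break ties by dollar value
// (descending), then cap the display list while recording the true
// cohort size for COHORT_META.
function capRollups(rollups, valueKey, maxRows = 25) {
  const sorted = [...rollups].sort(
    (a, b) => b.maxGap - a.maxGap || (b[valueKey] ?? 0) - (a[valueKey] ?? 0)
  );
  return {
    displayed: sorted.slice(0, maxRows),
    cohortVendors: rollups.length, // true count, not the capped length
  };
}
```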

---

## Text Output

Return the text answer in this order. Keep each section brief — the dashboard carries the detail.

1. **8(a) Certification Filter and Shared Scope** — Fixed 8(a) certification focus, `vendor_sba_8a_expiration_date_range`, shared scope constraints, and the active lifecycle and certification windows. 2–4 sentences.
2. **Search Approach** — Tools used, active date windows, symmetric scope, count+agg passes, gap_days computation. 3–5 sentences.
3. **Cohort Overview** — One compact markdown table. Columns: `Dataset`, `Scoped Count`, `Rows Reviewed`, `Earliest Lifecycle Date`, `Latest Lifecycle Date`, `Top Signals`.
4. **Key Patterns** — 1–2 paragraphs. Dominant agencies, categories, ANC/holding-company clusters, set-aside alignment signals, MAS/GWAC posture.
5. **Risks, Gaps, or Unknowns** — 1 paragraph. Missing dates, identity ambiguity, ANC timing nuances, pagination gaps, shared ceiling exclusions.
6. **Overall Confidence** — 1–2 sentences with a confidence label (High / Moderate-High / Moderate / Low).

Do not include vendor-level or row-level tables in the text output. Those live in the dashboard.

---

## Interactive Dashboard

After the text output, create a **single-file React component** that renders an interactive dashboard. Use whichever artifact or canvas mechanism the host platform provides for rendering React (e.g., `.jsx` artifact on Claude, canvas on ChatGPT).

The component must be **self-contained** — all data is embedded as JSON constants at the top of the file, populated from the Data Serialization Step. The component must not fetch from any external API at runtime.

### Data Shape

Define three top-level constants: `COHORT_META`, `AWARDS_DATA`, and `IDVS_DATA`.

#### Cohort metadata

```js
const COHORT_META = {
  awardsCohortVendors: number,       // total unique vendors in full scoped award cohort
  idvsCohortVendors: number,         // total unique vendors in full scoped IDV cohort
  awardsRowsReviewed: number,        // total award rows retrieved
  idvsRowsReviewed: number,          // total IDV rows retrieved
  awardsDisplayed: number,           // vendors shown in dashboard (<=25)
  idvsDisplayed: number,             // vendors shown in dashboard (<=25)
  searchResultsUrlAwards: string,    // GovTribe view_search_results_url from award retrieval pass
  searchResultsUrlIdvs: string,      // GovTribe view_search_results_url from IDV retrieval pass
  scopeLabel: string,                // human-readable scope description
  certWindowFrom: string,            // YYYY-MM-DD
  certWindowTo: string,              // YYYY-MM-DD
  lifecycleWindowFrom: string,       // YYYY-MM-DD
  lifecycleWindowTo: string,         // YYYY-MM-DD
};
```

#### Award rollup row

```js
{
  vendor: string,              // awardee name
  vendorUrl: string | null,    // GovTribe vendor profile URL from awardee.govtribe_url
  parent: string | null,       // parent_of_awardee name if different, else null
  agency: string,              // primary contracting agency (most frequent across this vendor's rows)
  records: number,             // count of retained award rows for this vendor
  maxGap: number,              // worst gap_days (positive = past cert; negative = safe)
  totalCeiling: number,        // sum of ceiling_value across this vendor's rows
  totalObligated: number,      // sum of dollars_obligated across this vendor's rows
  certExp: string,             // YYYY-MM-DD — vendor_sba_cert_8a_expiration_date
  latestEnd: string,           // latest ultimate_completion_date YYYY-MM-DD
  alignment: string,           // "matching" | "mixed" | "other" | "no set-aside used"
  primaryNaics: string | null, // most frequent NAICS name across this vendor's rows, or null
}
```

#### IDV rollup row

```js
{
  vendor: string,
  vendorUrl: string | null,    // GovTribe vendor profile URL
  parent: string | null,
  agency: string,
  records: number,
  maxGap: number,
  ceiling: number | null,      // sum of non-shared ceilings; null if all shared
  certExp: string,             // YYYY-MM-DD
  latestLDTO: string,          // latest last_date_to_order YYYY-MM-DD
  sharedCeiling: boolean,      // true if ALL this vendor's IDVs are shared-ceiling
  alignment: string,
  primaryNaics: string | null, // most frequent NAICS name across this vendor's rows, or null
}
```

Populate these arrays from the serialized JSON. Do not use placeholder or sample data.
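The vendor-level `alignment` value must be reduced from the row-level `Set-Aside Alignment` labels defined in the workflow. The spec does not state the reduction rule explicitly, so the sketch below assumes a uniform-else-mixed rule, and `rollupAlignment` is a hypothetical helper:

```javascript
// Reduce row-level labels ("matching expected set-aside",
// "other set-aside", "no set-aside used") to one vendor-level value.
// Assumption: rows that all carry the same label keep that label in
// shortened form; any mixture of labels becomes "mixed".
function rollupAlignment(rowLabels) {
  const short = {
    "matching expected set-aside": "matching",
    "other set-aside": "other",
    "no set-aside used": "no set-aside used",
  };
  const unique = [...new Set(rowLabels.map((l) => short[l] ?? l))];
  return unique.length === 1 ? unique[0] : "mixed";
}
```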

---

### Design Requirements

The dashboard must look like a polished BI tool — not a table dump. Follow these rules:

#### Theme and typography

- Dark theme. Deep navy/charcoal background (e.g., `#0b0f19`), light text.
- Use a distinctive Google Font pairing — a clean sans-serif for body text and a monospace for numbers/dates. Load via `@import` inside a `<style>` block in the component. Do not use Inter, Roboto, Arial, or system-default fonts.

#### KPI cards row (top of dashboard)

5 summary stat cards. **Critical: derive counts from `COHORT_META`, not from the display-capped array lengths.**

| Card | Source | Notes |
|------|--------|-------|
| Total award vendors | `COHORT_META.awardsCohortVendors` | Full cohort, **not** `AWARDS_DATA.length` |
| Total IDV vendors | `COHORT_META.idvsCohortVendors` | Full cohort, **not** `IDVS_DATA.length` |
| Aggregate award exposure | Sum of `totalCeiling` from `AWARDS_DATA` | Formatted as $XM / $XB |
| Aggregate IDV exposure | Sum of non-shared `ceiling` from `IDVS_DATA` | With "excl. shared ceilings" subtitle |
| Critical + High count | Vendors with `maxGap >= 90` across both datasets | Label: "≥90d gap vendors" |
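The sourcing rules above can be sketched as plain derivations over the embedded constants; `computeKpis` and `formatDollars` are illustrative names:

```javascript
// KPI values. Vendor totals come from COHORT_META (full cohort), never
// from the display-capped array lengths.
function computeKpis(meta, awards, idvs) {
  return {
    awardVendors: meta.awardsCohortVendors,
    idvVendors: meta.idvsCohortVendors,
    awardExposure: awards.reduce((s, r) => s + (r.totalCeiling ?? 0), 0),
    // Shared-ceiling IDV vendors carry ceiling: null, so they drop out.
    idvExposure: idvs.reduce((s, r) => s + (r.ceiling ?? 0), 0),
    highGapVendors: [...awards, ...idvs].filter((r) => r.maxGap >= 90).length,
  };
}

// Compact $XM / $XB formatting for the exposure cards.
function formatDollars(n) {
  if (n >= 1e9) return `$${(n / 1e9).toFixed(1)}B`;
  if (n >= 1e6) return `$${(n / 1e6).toFixed(1)}M`;
  return `$${Math.round(n)}`;
}
```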

#### Scope subtitle

Below the dashboard title, render `scopeLabel`, cert window, and lifecycle window from `COHORT_META`.

#### Gap severity color scale

Use consistently across all gap indicators:

| Level | Condition | Color |
|-------|-----------|-------|
| Critical | ≥ 150d | `#ef4444` |
| High | 90–149d | `#f97316` |
| Moderate | 30–89d | `#eab308` |
| Low | 1–29d | `#22c55e` |
| Safe | ≤ 0d | `#6b7280` (neutral gray) |
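Applied consistently, the scale above is a single threshold function; `gapSeverity` is an illustrative name:

```javascript
// Map a gap in days to the severity level and color from the table.
function gapSeverity(gapDays) {
  if (gapDays >= 150) return { level: "Critical", color: "#ef4444" };
  if (gapDays >= 90) return { level: "High", color: "#f97316" };
  if (gapDays >= 30) return { level: "Moderate", color: "#eab308" };
  if (gapDays >= 1) return { level: "Low", color: "#22c55e" };
  return { level: "Safe", color: "#6b7280" }; // zero or negative gap
}
```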

#### Gap bar rendering

- Only render a gap bar for rows with `maxGap > 0`. Bar fill width is proportional to the max positive gap in the current dataset tab; fill color uses the severity scale.
- For rows with `maxGap <= 0`, render no bar. Display a small muted "✓" or "safe" indicator instead.

#### Table

- **Tab switcher** to toggle Awards / IDVs. Tab labels include the display count in parentheses (e.g., "Awards (25)"). Sorted by `maxGap` descending by default. All columns sortable.
- **GovTribe drill-down link**: Each tab header includes a small "View in GovTribe →" link using the corresponding `searchResultsUrlAwards` or `searchResultsUrlIdvs` from `COHORT_META`. Opens in a new tab.
- **Vendor name column**:
    - If `vendorUrl` is present, render the vendor name as a clickable link (opens in new tab).
    - If `parent` is non-null, render as a small secondary line under the vendor name (e.g., `→ Parent Name`).
    - If `primaryNaics` is non-null, render as a small tertiary line in muted text.
- **Monospace font** for all dates and numbers.
- **Set-aside alignment** shown as color-coded pill badges: green for "matching", amber for "mixed", blue for "other", gray for "no set-aside used".
- **Awards table** includes both a `Ceiling` and `Obligated` column. Display obligated in a slightly muted color to visually subordinate it to ceiling.
- **IDV table** includes a `Shared?` column with a purple badge for shared-ceiling vendors; their ceiling column shows a muted placeholder instead of a dollar value.
- **Subtle row highlight** on hover.
- **Truncation footer**: If the display array has fewer vendors than the full cohort (per `COHORT_META`), render a footer row below the table:
  > Showing top {displayed} of {cohortTotal} vendors by gap severity. [View full results in GovTribe →]

  Link to the corresponding search results URL.

#### Footer panel

Two-column card layout: "Key Patterns" on the left, "Risks & Caveats" on the right. Populate from the same content as text output sections 4–5.

#### Bottom line

Small muted text:

> Powered by GovTribe · Data as of {current month/year}

---

### Constraints

These constraints ensure the dashboard renders reliably across hosting platforms.

| Constraint | Reason |
|-----------|--------|
| No runtime API calls | All data must be embedded; the component is static after render |
| No `localStorage` or `sessionStorage` | Not reliably available in sandboxed artifact/canvas renderers |
| No `<form>` tags | Some renderers block form submission; use `onClick` handlers instead |
| No `<link>` elements for fonts | Use `@import` inside `<style>` for reliable cross-platform font loading |
| All state in React hooks | Use `useState` / `useMemo` for interactivity; no external state managers |
| Single-file component | No separate CSS/JS files; everything in one `.jsx` file with inline styles or embedded `<style>` blocks |
| Vendor rollup rows only | No raw record-level detail rows; the GovTribe search results URLs provide drill-down |
| Max 25 vendor rows per tab | If the full cohort exceeds 25, show top 25 by `maxGap` with a truncation footer |
| Target 900–1200px width | Horizontal scroll on table for narrow viewports; flex-wrap on KPI cards |

#### Styling approach

Use **inline styles** (React `style` objects) as the primary styling method. This is the most portable approach across artifact and canvas renderers. Avoid Tailwind utility classes unless you are certain the host platform includes the Tailwind base stylesheet. Avoid CSS-in-JS libraries that require a build step.

If the host platform reliably supports Tailwind (e.g., Claude artifacts), Tailwind utility classes are acceptable as a secondary styling method alongside inline styles.

---

## Citation Rules

- Only cite sources retrieved in the current workflow.
- Never fabricate citations, URLs, IDs, or quote spans.
- Use exactly the citation format required by the host application.
- Attach citations to the specific claims they support, not only at the end.

## Grounding Rules

- Base claims only on provided context or GovTribe MCP tool outputs.
- If sources conflict, state the conflict explicitly and attribute each side.
- If the context is insufficient or irrelevant, narrow the answer or state that the goal cannot be fully completed from the available evidence.
- If a statement is an inference rather than a directly supported fact, label it as an inference.
