pl8ypus

Build 02 / Controlled Intelligence

Governed marketing intelligence for decision-ready teams.

A controlled intelligence system for campaign performance, competitor monitoring, web visibility, ingestion governance, and executive-ready AI summaries.

Dashboard Evidence

The intelligence layer needs to be readable before it becomes automated.

Marketing Intelligence AI is being shaped as a decision-support surface first: visible data quality, four intelligence views, executive summaries, and controlled evidence before automation scales.


Illustrative dashboard panel for the governed Marketing Intelligence AI build.

Intelligence Model

Four intelligence views. One governed data model.

Part 01

Ad Library Audit

Tracks competitor advertising themes, formats, messaging volume, estimated spend intensity, timing, and positioning gaps across a defined monthly reporting cycle.

Part 02

Organic Social Intelligence

Benchmarks public organic activity, content themes, format choices, engagement signals, and competitor posting patterns across the tracked market set.

Part 03

Website and Search Intelligence

Connects website traffic, channel visibility, search behaviour, referral sources, and content themes into a single competitive visibility layer.

Part 04

Campaign Performance Intelligence

Brings paid campaign metrics, spend signals, impressions, engagement, clicks, and channel-level performance into one decision-support environment.

Decision System Controls

The dashboard is the surface. The control layer is the product.

Marketing intelligence only becomes useful when the source, freshness, quality, review state, and publication status are visible. The system is designed to protect decisions from untrusted inputs before AI summaries or executive readouts are generated.

Source control

Known inputs

Every source is registered, labelled, and reviewed before it becomes part of the decision layer.

Quality gates

Validated data

Schema checks, row counts, required fields, and exception states are visible before publishing.

Review state

Human approval

Automation can collect and prepare data, but trusted publishing remains review-gated.

Fallback path

Last-known-good

If ingestion fails, the dashboard can fall back to the latest approved dataset.
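The fallback behaviour above can be sketched in a few lines. This is a minimal TypeScript illustration, not the production implementation; the run shape, status names, and function are assumptions for the sketch.

```typescript
// Hypothetical sketch of the last-known-good fallback: if the latest
// ingestion run failed or is still in review, the dashboard serves the
// most recent approved dataset instead.

type RunStatus = "failed" | "pending_review" | "approved";

interface IngestionRun {
  id: string;
  status: RunStatus;
  completedAt: string; // ISO timestamp
}

// Returns the run the dashboard should display: the newest approved run,
// never a failed or unreviewed one.
function selectDashboardRun(runs: IngestionRun[]): IngestionRun | null {
  const approved = runs
    .filter((r) => r.status === "approved")
    .sort((a, b) => b.completedAt.localeCompare(a.completedAt));
  return approved[0] ?? null; // null => nothing trusted has been published yet
}

const runs: IngestionRun[] = [
  { id: "run-3", status: "failed", completedAt: "2024-03-01T00:00:00Z" },
  { id: "run-2", status: "approved", completedAt: "2024-02-01T00:00:00Z" },
  { id: "run-1", status: "approved", completedAt: "2024-01-01T00:00:00Z" },
];

const current = selectDashboardRun(runs);
```

The key design point: the failed run is never a candidate, so a broken ingestion can degrade freshness but not trust.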

Architecture Options

Three viable routes. One selected delivery path.

This page shows the architecture decision behind the build: three viable delivery routes, the trade-offs between speed and enterprise control, and the selected path for a fast but governed portfolio implementation.

Option A

Selected

Vercel-led build route

Best fit for developer velocity, preview deployments, serverless API routes, strong local development, and staged review before production release.

Vercel hosting + API routes

Cloudflare DNS / CDN

Supabase data layer

Apify ingestion pilot

Option B

Cloudflare-native route

Best fit when procurement simplicity, existing Cloudflare infrastructure, low V1 cost, and IT-owned access control are the highest priorities.

Cloudflare Pages

Cloudflare Workers

Supabase storage

Access policy gate

Option C

Enterprise cloud route

Best fit when centralised IAM, managed secrets, enterprise logging, monitoring, BigQuery analytics, and long-term data-platform scale are the priority.

Cloud Run

BigQuery

Secret Manager

Cloud Logging

Selected Build Route

Option A: developer velocity with a governed data spine.

Developer velocity

Preview before production

Branch and preview deployments support stakeholder review, ingestion testing, and safer iteration before anything touches the live dashboard.

Serverless control

API routes own the logic

CSV upload, validation, Apify webhooks, data cleaning, dashboard queries, and review gate transitions all run server-side.

Portable data model

No lock-in at the schema level

Raw tables, cleaned tables, source registry, ingestion logs, metrics config, and dimension tables remain portable across the options.

Ingestion Governance

Manual first. Automated second. AI after trust.

V1

Manual CSV upload

Controlled monthly upload through an admin panel. Validates column names, data types, required fields, row counts, and source records before dashboard refresh.
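The V1 checks described above can be sketched as a single validation pass. Column names, thresholds, and the row shape here are assumptions for illustration, not the production schema.

```typescript
// Illustrative validation pass for a manual CSV upload: column names,
// row counts, required fields, and a basic type check, run before any
// dashboard refresh.

interface ValidationResult {
  ok: boolean;
  errors: string[];
}

const REQUIRED_COLUMNS = ["company", "platform", "date", "spend"]; // assumed
const MIN_ROWS = 1;

function validateUpload(
  columns: string[],
  rows: Record<string, string>[]
): ValidationResult {
  const errors: string[] = [];

  // Column-name check: every required column must be present.
  for (const col of REQUIRED_COLUMNS) {
    if (!columns.includes(col)) errors.push(`missing column: ${col}`);
  }

  // Row-count check.
  if (rows.length < MIN_ROWS) errors.push("upload contains no data rows");

  // Required-field and type checks per row.
  rows.forEach((row, i) => {
    for (const col of REQUIRED_COLUMNS) {
      if (!row[col]) errors.push(`row ${i + 1}: empty ${col}`);
    }
    if (row["spend"] && Number.isNaN(Number(row["spend"]))) {
      errors.push(`row ${i + 1}: spend is not numeric`);
    }
  });

  return { ok: errors.length === 0, errors };
}

const result = validateUpload(
  ["company", "platform", "date", "spend"],
  [{ company: "Acme", platform: "meta", date: "2024-03-01", spend: "1200" }]
);
```

Collecting every error rather than failing fast gives the operator one complete exception report per upload.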

V1.5

Apify pilot with human review gate

One actor, one source, one dashboard. Every run enters review before anything is published. CSV fallback remains available if the actor fails or data is rejected.

V2

Scale across approved sources

Additional actors and data sources activate only after source registry approval, rate-limit configuration, schema validation, and review gate confidence.

V3

AI insight summaries

AI-generated summaries are layered on top of approved data only. The AI explains patterns, drafts briefings, and surfaces gaps, but humans own decisions.

V4

Execution layer

Future CRM, campaign, or marketing automation integrations are deferred until the intelligence layer is stable, reviewed, and trusted.

Data Model

Raw data preserved. Cleaned data published. Governance logged.

Raw layer

Immutable source archive

CSV files and Apify JSON are stored as received. Raw records are not overwritten, which protects auditability and enables reprocessing.

Clean layer

Dashboard-ready tables

Cleaned tables standardise companies, themes, formats, platforms, dates, derived metrics, and display-ready fields.

Control layer

Source registry and logs

Approved sources, review status, ingestion method, operator action, run status, row count, and publication state are logged.
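The three layers can be expressed as record shapes. Field names below are illustrative; the real tables live in the Supabase data layer.

```typescript
// Sketch of the three-layer data model. Raw records are append-only,
// clean records are display-ready, and the control layer logs every run.

// Raw layer: stored exactly as received, never overwritten.
interface RawRecord {
  id: string;
  sourceId: string;
  receivedAt: string;
  payload: unknown; // original CSV row or Apify JSON, untouched
}

// Clean layer: standardised, dashboard-ready fields.
interface CleanRecord {
  company: string;
  theme: string;
  format: string;
  platform: string;
  date: string; // normalised ISO date
  metricValue: number;
}

// Control layer: one log entry per ingestion run.
interface IngestionLogEntry {
  sourceId: string;
  method: "csv_upload" | "apify_actor";
  operator: string;
  runStatus: "success" | "failed";
  rowCount: number;
  published: boolean;
  reviewStatus: "raw" | "validated" | "approved";
}

const logEntry: IngestionLogEntry = {
  sourceId: "src-001",
  method: "csv_upload",
  operator: "admin",
  runStatus: "success",
  rowCount: 42,
  published: false,
  reviewStatus: "validated",
};
```

Because the raw payload is typed `unknown`, nothing downstream can read it without an explicit transformation step, which keeps the clean layer the only path to the dashboard.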

Data Quality Framework

Confidence is visible instead of assumed.

Raw

Collected or uploaded data before review. Hidden from the main dashboard.

Validated

Schema passed, but data still needs review or comparison before being trusted.

Approved

Human-reviewed and published to the dashboard as the current trusted dataset.

Trusted

Repeatedly successful sources can move toward lighter review with exception alerts.
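The four states above form a one-way promotion ladder with a demotion path on rejection. The transition rules below are an assumption about how promotion might be enforced, shown as an explicit map.

```typescript
// Sketch of the review-state ladder as an explicit transition map.

type QualityState = "raw" | "validated" | "approved" | "trusted";

// Each state may only be promoted to the next rung.
const PROMOTIONS: Record<QualityState, QualityState | null> = {
  raw: "validated",      // schema checks passed
  validated: "approved", // human review passed
  approved: "trusted",   // repeated successful runs
  trusted: null,         // terminal: lighter review with exception alerts
};

function promote(state: QualityState): QualityState {
  const next = PROMOTIONS[state];
  return next === null ? state : next; // already at the top rung
}

function reject(_state: QualityState): QualityState {
  return "raw"; // a failed review sends data back for reprocessing
}
```

Making the ladder data rather than scattered if-statements means a skipped rung (raw straight to trusted) is impossible by construction.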

AI Layer

AI summaries sit on approved data, not raw noise.

The AI layer is deliberately deferred until the ingestion, review, and cleaned-table model is stable. Once the data spine is trustworthy, the system can produce summaries, gap analysis, briefing drafts, and campaign recommendations.
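The "approved data only" rule can be enforced as a guard in front of the summariser. This is a sketch under assumed names; the summariser here is a placeholder for whatever model call the real build uses.

```typescript
// Sketch of the governance gate in front of the AI layer: summaries are
// generated only from datasets whose review status is approved.

interface Dataset {
  id: string;
  reviewStatus: "raw" | "validated" | "approved";
  rows: number;
}

// Placeholder summariser: the real build would call an LLM here.
function draftSummary(dataset: Dataset): string {
  return `Summary of ${dataset.id} (${dataset.rows} rows)`;
}

function summariseApproved(datasets: Dataset[]): string[] {
  return datasets
    .filter((d) => d.reviewStatus === "approved") // governance gate
    .map(draftSummary);
}

const summaries = summariseApproved([
  { id: "ads-march", reviewStatus: "approved", rows: 120 },
  { id: "social-march", reviewStatus: "validated", rows: 80 },
]);
```

The validated-but-unreviewed dataset never reaches the model, so an AI briefing can only ever restate data a human has already trusted.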

Future AI outputs

> Executive competitive summary

> Monthly gap analysis

> Campaign response brief

> Data quality and stale-data warnings

Scope Control

Scope control protects the architecture.

Deferred deliberately

> No multi-actor automation before the pilot proves quality

> No AI agent before clean and approved data exists

> No CRM or marketing automation integration in early phases

> No real-time refresh without a separate approval decision

Always available

> CSV fallback at every stage

> Raw archive before transformation

> Human review gate for pilot automation

> Last-known-good dashboard data if ingestion fails

Signal

Discuss Marketing Intelligence AI

Use the contact form for demos, architecture conversations, speaking opportunities, or collaboration around governed marketing intelligence workflows.