🎮 The Next Input — Issue #155

The Day Claude Code Leaked


⚡ The Briefing — 60 sec

🛠️ The Playbook — The AI Exposure Engine

Mission
Identify where your organisation is exposed to AI risk — across vendors, data, and internal usage — before it becomes a headline.

Difficulty
Intermediate

Build time
3–5 hours

ROI
Fewer blind spots, tighter control over sensitive systems, and a much lower chance of getting caught off guard by leaks, vendor shifts, or policy changes.

0) Why This Matters

Three different signals, one underlying theme: exposure.

Governments are partnering directly with AI companies, which means the stakes are rising from “tooling” to “infrastructure.”

Capital is pouring into the space at massive scale, which usually accelerates both innovation and risk-taking.

And then you get something like a full repo leak of a major AI product, which is a blunt reminder that anything connected to code, agents, or systems can surface in places you did not expect.

So the move is not just “use AI well.”

It is:

  • know what you are connected to

  • know what could leak

  • know what breaks if a vendor changes

  • know where your real risk sits

1) Architecture

| Component | Tool | Purpose | Owner | Failure mode |
| --- | --- | --- | --- | --- |
| Vendor map | Airtable / spreadsheet | Track AI tools, providers, and dependencies | Operations | Shadow tools go unnoticed |
| Access layer | SSO / API keys / permissions | Control what AI systems can reach | IT | Overexposed systems |
| Data classification | Docs / policies / tagging | Identify sensitive vs safe data | Security / Ops | Sensitive data leaks into AI |
| Usage tracking | Logs / dashboards | Monitor how AI is actually used | Ops | Blind usage patterns |
| Risk layer | Internal checklist / review | Identify exposure points | Security | Risk discovered too late |
| Audit log | Database / logs | Record actions, prompts, outputs | Security / Ops | No traceability |
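The vendor map is just structured data, so even a few lines of code can catch its failure mode. Here is a minimal sketch: the `AITool` record and its field names are invented for illustration, not a prescribed schema, and the inventory entries are placeholders.

```python
from dataclasses import dataclass

# Hypothetical record for one row of the vendor map.
# Field names are illustrative, not a required schema.
@dataclass
class AITool:
    name: str
    vendor: str
    data_access: list          # e.g. ["codebase", "CRM"]
    owner: str = "unassigned"  # no owner = nobody is watching it
    sensitive: bool = False

def shadow_tools(inventory):
    # The "shadow tools go unnoticed" failure mode: tools nobody owns.
    return [t.name for t in inventory if t.owner == "unassigned"]

inventory = [
    AITool("Claude Code", "Anthropic", ["codebase"], owner="Engineering", sensitive=True),
    AITool("Meeting summarizer", "VendorX", ["calendar", "transcripts"]),
]
print(shadow_tools(inventory))  # → ['Meeting summarizer']
```

Even a spreadsheet export run through a check like this once a week beats discovering an unowned tool during an incident.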

2) Workflow

  1. List every AI tool, model, and integration currently used across the business.

  2. Map what data each tool can access and what actions it can take.

  3. Identify where sensitive data could be exposed through prompts, logs, or integrations.

  4. Check vendor dependency and what happens if the product changes, leaks, or disappears.

  5. Add controls for high-risk workflows, including approval gates and restricted access.

  6. Review exposure regularly as new tools and updates are introduced.
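Steps 2 and 3 above can be sketched as a simple set intersection: map what each tool touches, then flag overlaps with your sensitive data classes. Tool names and data classes below are placeholders, not real systems.

```python
# Hypothetical sensitive data classes for your organisation.
SENSITIVE = {"customer_pii", "source_code", "financials"}

# Step 2: what each tool can access (illustrative).
tool_access = {
    "chat_assistant": {"public_docs", "customer_pii"},
    "code_agent": {"source_code"},
    "analytics_bot": {"public_docs"},
}

# Step 3: where sensitive data could be exposed.
exposure = {
    tool: data & SENSITIVE
    for tool, data in tool_access.items()
    if data & SENSITIVE
}
print(exposure)  # → {'chat_assistant': {'customer_pii'}, 'code_agent': {'source_code'}}
```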

3) Example Prompts

Exposure Mapping Prompt

You are identifying AI exposure risk across a system.

For the workflow below:
- identify all connected tools and models
- identify what data is being accessed
- identify where sensitive data could leak
- identify the top 5 exposure risks

Workflow:
[insert workflow here]

Leak Impact Prompt

You are assessing the impact of a potential leak.

If the following system or repo became public:
- what would be exposed
- what business risk would it create
- what safeguards should already be in place

System:
[insert system]

Vendor Dependency Prompt

You are reviewing vendor dependence risk.

For the product below:
- what workflows rely on it
- what breaks if it disappears
- what fallback exists
- whether the dependency is too high

Product:
[insert product]

Access Control Prompt

You are reviewing permissions for an AI system.

Identify:
- what access is necessary
- what access is excessive
- what should be restricted
- where human approval is required

4) Guardrails

  • Never assume internal tools stay internal.

  • Limit AI access to only what is required.

  • Separate sensitive and non-sensitive workflows.

  • Track vendor dependence explicitly.

  • Log usage and access patterns.

  • Review exposure as part of regular operations, not just incidents.
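The "log usage and access patterns" guardrail does not need heavy tooling to start. One possible minimal version, assuming an append-only JSONL file and invented field names:

```python
import json
import time

def log_event(path, actor, tool, action, data_touched):
    # Append-only audit record: who used which AI tool, doing what,
    # touching which data. Fields are illustrative, not a standard.
    record = {
        "ts": time.time(),
        "actor": actor,
        "tool": tool,
        "action": action,
        "data": data_touched,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_event("audit.jsonl", "aaron", "code_agent", "prompt", ["source_code"])
```

A flat file like this is enough for a pilot; move to a database once more than one team is writing to it.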

5) Pilot Rollout — 3 hours

  1. Pick one AI-heavy workflow currently in use.

  2. Map all tools, data sources, and integrations involved.

  3. Identify what data is sensitive and where it flows.

  4. Add one control (permission limit, approval step, or logging layer).

  5. Run the workflow and observe where exposure still exists.

  6. Refine before expanding to other workflows.
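If the one control you add in step 4 is an approval step, it can be as small as this sketch. The action names and risk list are placeholders for whatever your pilot workflow actually does.

```python
# Hypothetical list of actions that should never run unattended.
HIGH_RISK = {"write_prod_db", "send_external_email"}

def requires_approval(action):
    return action in HIGH_RISK

def run_action(action, approved=False):
    # The approval gate: high-risk actions block until a human signs off.
    if requires_approval(action) and not approved:
        return "blocked: awaiting human approval"
    return f"executed: {action}"

print(run_action("send_external_email"))                 # blocked
print(run_action("send_external_email", approved=True))  # executed
```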

6) Metrics

  • Number of AI tools mapped

  • Percentage of workflows with defined access controls

  • Sensitive data exposure incidents

  • Vendor dependency score

  • Number of workflows with fallback options

  • Audit log coverage

  • Time to detect and respond to issues
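Two of these metrics, access-control coverage and audit-log coverage, fall out of the same workflow inventory. A sketch, with invented workflow records:

```python
# Illustrative workflow inventory; flags would come from your real map.
workflows = [
    {"name": "support_bot", "access_controls": True,  "audited": True},
    {"name": "code_agent",  "access_controls": True,  "audited": False},
    {"name": "ad_copy",     "access_controls": False, "audited": False},
]

def coverage_pct(flag):
    # Percentage of workflows where the given control flag is set.
    return 100 * sum(w[flag] for w in workflows) / len(workflows)

print(f"access control coverage: {coverage_pct('access_controls'):.0f}%")  # 67%
print(f"audit log coverage: {coverage_pct('audited'):.0f}%")               # 33%
```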

Pro Tip: The most dangerous AI risk is not the one you can see. It is the one quietly sitting in a workflow nobody has mapped yet.

🎯 The Arsenal — Tools & Platforms

Copy-paste prompt block:

You are helping me build an AI Exposure Engine.

For the workflow below:
1. identify all AI tools and vendors involved
2. identify what data is accessed
3. identify where sensitive data could leak
4. identify vendor dependency risks
5. identify access control gaps
6. list the top 5 exposure risks
7. propose a 2-week pilot

Workflow:
[insert workflow here]

Return the answer in markdown with sections for:
- Workflow summary
- Tool and vendor map
- Data exposure points
- Access control gaps
- Risks
- Pilot rollout
- Metrics

💡 Free Office Hours

If you are trying to understand where your AI systems are exposed before something breaks or leaks, I run free office hours to help map your stack and tighten control.

The Biggest Knock on Private Credit? Percent Changed That.

The number one knock on private credit has always been the same: you can't get out. Lock in for 12 or 24 months, hope things go as planned, wait. Percent changed that in December 2025 with a secondary marketplace. Browse live bid and ask data on seasoned deals, submit an indication of interest to buy or sell, and Percent coordinates the match. For accredited investors who want private credit yields without locking up capital indefinitely, you can do that now. The numbers as of Q4 2025:

  • $1.82B funded across 981 deals

  • 16.72% current weighted average coupon

  • 0.58% lifetime charge-off rate

Very few individual investor platforms offer this. New investors can receive up to $500 on their first investment.

Alternative investments are speculative. Secondary liquidity not guaranteed. Past performance not indicative. Terms apply.

🕹️ Game Over

AI is scaling fast. Exposure scales faster if you are not watching it.

— Aaron
Automating the boring. Amplifying the brilliant.

Subscribe: link