🎮 The Next Input — Issue #155
The Day Claude Code Leaked

⚡ The Briefing — 60 sec
Australia's federal government strikes major research-investment deal with US-based AI giant Anthropic
Big day for Anthropic and Australia. When governments start cutting deals like this, you are not just looking at tech adoption anymore — you are looking at positioning.
OpenAI, not yet public, raises $3B from retail investors in a monster fundraise at a $122B valuation
Cha-ching. Retail money flowing in at a $122B valuation tells you AI is no longer just institutional — the public wants in on the upside too.
instructkr-claude-code repo leak
Told you Anthropic was having a day. Craziest leak I’ve ever seen. All of Claude Code… in a repo, no less.
🛠️ The Playbook — The AI Exposure Engine
Mission: Identify where your organisation is exposed to AI risk — across vendors, data, and internal usage — before it becomes a headline.
Difficulty: Intermediate
Build time: 3–5 hours
ROI: Fewer blind spots, tighter control over sensitive systems, and a much lower chance of getting caught off guard by leaks, vendor shifts, or policy changes.
0) Why This Matters
Three different signals, one underlying theme: exposure.
Governments are partnering directly with AI companies, which means the stakes are rising from “tooling” to “infrastructure.”
Capital is pouring into the space at massive scale, which usually accelerates both innovation and risk-taking.
And then you get something like a full repo leak of a major AI product, which is a blunt reminder that anything connected to code, agents, or systems can surface in places you did not expect.
So the move is not just “use AI well.” It is:
- know what you are connected to
- know what could leak
- know what breaks if a vendor changes
- know where your real risk sits
1) Architecture
| Component | Tool | Purpose | Owner | Failure mode |
|---|---|---|---|---|
| Vendor map | Airtable / spreadsheet | Track AI tools, providers, and dependencies | Operations | Shadow tools go unnoticed |
| Access layer | SSO / API keys / permissions | Control what AI systems can reach | IT | Overexposed systems |
| Data classification | Docs / policies / tagging | Identify sensitive vs safe data | Security / Ops | Sensitive data leaks into AI |
| Usage tracking | Logs / dashboards | Monitor how AI is actually used | Ops | Blind usage patterns |
| Risk layer | Internal checklist / review | Identify exposure points | Security | Risk discovered too late |
| Audit log | Database / logs | Record actions, prompts, outputs | Security / Ops | No traceability |
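To make the vendor map concrete, here is a minimal sketch of one entry as code rather than an Airtable row. The field names mirror the table above; the class name and example values are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One vendor-map row: a single AI tool or integration in use."""
    name: str                                             # e.g. "Claude Code"
    vendor: str                                           # e.g. "Anthropic"
    data_access: list[str] = field(default_factory=list)  # systems and data it can reach
    sensitivity: str = "unknown"                          # "public" | "internal" | "sensitive"
    owner: str = "unassigned"                             # who is accountable for this tool
    has_fallback: bool = False                            # is there a replacement if the vendor changes?

# Illustrative entry (hypothetical values, not real audit data).
claude_code = AIToolRecord(
    name="Claude Code",
    vendor="Anthropic",
    data_access=["source repos", "local shell"],
    sensitivity="sensitive",
    owner="Engineering",
)
```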
2) Workflow
1. List every AI tool, model, and integration currently used across the business.
2. Map what data each tool can access and what actions it can take.
3. Identify where sensitive data could be exposed through prompts, logs, or integrations (a rough code sketch of this check follows the list).
4. Check vendor dependency and what happens if the product changes, leaks, or disappears.
5. Add controls for high-risk workflows, including approval gates and restricted access.
6. Review exposure regularly as new tools and updates are introduced.
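Continuing the hypothetical AIToolRecord sketch from the Architecture section, here is roughly what a first automated pass over steps 2 and 3 could look like. The rules are deliberately crude placeholders, not a vetted risk model; a real review still needs humans.

```python
def flag_exposure(tools: list[AIToolRecord]) -> list[str]:
    """First-pass exposure checks over the vendor map (assumed fields)."""
    warnings = []
    for tool in tools:
        if tool.sensitivity == "sensitive" and not tool.has_fallback:
            warnings.append(f"{tool.name}: touches sensitive data with no vendor fallback")
        if tool.sensitivity == "unknown":
            warnings.append(f"{tool.name}: data classification never done")
        if tool.owner == "unassigned":
            warnings.append(f"{tool.name}: no accountable owner")
    return warnings

for warning in flag_exposure([claude_code]):
    print(warning)
```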
3) Example Prompts
Exposure Mapping Prompt
You are identifying AI exposure risk across a system.
For the workflow below:
- identify all connected tools and models
- identify what data is being accessed
- identify where sensitive data could leak
- identify the top 5 exposure risks
Workflow:
[insert workflow here]
Leak Impact Prompt
You are assessing the impact of a potential leak.
If the following system or repo became public:
- what would be exposed
- what business risk would it create
- what safeguards should already be in place
System:
[insert system]
Vendor Dependency Prompt
You are reviewing vendor dependence risk.
For the product below:
- what workflows rely on it
- what breaks if it disappears
- what fallback exists
- whether the dependency is too high
Product:
[insert product]
Access Control Prompt
You are reviewing permissions for an AI system.
Identify:
- what access is necessary
- what access is excessive
- what should be restricted
- where human approval is required
4) Guardrails
Never assume internal tools stay internal.
Limit AI access to only what is required.
Separate sensitive and non-sensitive workflows.
Track vendor dependence explicitly.
Log usage and access patterns.
Review exposure as part of regular operations, not just incidents.
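For the logging guardrail, something this small is enough to start. A minimal sketch: the decorator name, log path, and JSON shape are all assumptions, and a production audit log would also need tamper resistance and retention rules.

```python
import json
import time
from functools import wraps

def audited(tool_name: str):
    """Append one JSON line per AI call: which tool, which function, when."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            entry = {"tool": tool_name, "fn": fn.__name__, "ts": time.time()}
            result = fn(*args, **kwargs)
            with open("ai_audit.log", "a") as log:  # hypothetical log location
                log.write(json.dumps(entry) + "\n")
            return result
        return wrapper
    return decorator

@audited("claude")
def summarise(text: str) -> str:
    return text[:100]  # placeholder for a real model call

print(summarise("Quarterly numbers look strong..."))  # writes one audit line
```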
5) Pilot Rollout — 3 hours
Pick one AI-heavy workflow currently in use.
Map all tools, data sources, and integrations involved.
Identify what data is sensitive and where it flows.
Add one control (permission limit, approval step, or logging layer).
Run the workflow and observe where exposure still exists.
Refine before expanding to other workflows.
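If the one control you add is an approval step, it does not need to be fancy. A sketch assuming a command-line workflow; the function name and wording are illustrative:

```python
def require_approval(action: str, detail: str) -> bool:
    """Block a high-risk step until a human explicitly approves it."""
    answer = input(f"APPROVAL NEEDED: {action}\n  {detail}\nApprove? [yes/no] ")
    return answer.strip().lower() == "yes"

# Example: gate an outbound prompt that may contain sensitive data.
prompt = "Summarise our Q3 customer churn numbers for the board..."
if require_approval("send prompt to external model", prompt[:80]):
    print("sending...")  # the real model call would go here
else:
    print("blocked by approval gate")
```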
6) Metrics
Number of AI tools mapped
Percentage of workflows with defined access controls
Sensitive data exposure incidents
Vendor dependency score
Number of workflows with fallback options
Audit log coverage
Time to detect and respond to issues
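A few of these metrics fall straight out of the vendor map. Reusing the hypothetical AIToolRecord from the Architecture sketch, a rough approximation:

```python
def coverage_metrics(tools: list[AIToolRecord]) -> dict[str, float]:
    """Approximate three of the metrics above from vendor-map fields."""
    total = len(tools) or 1  # avoid dividing by zero on an empty map
    return {
        "tools_mapped": len(tools),
        "pct_with_owner": 100 * sum(t.owner != "unassigned" for t in tools) / total,
        "pct_with_fallback": 100 * sum(t.has_fallback for t in tools) / total,
    }

print(coverage_metrics([claude_code]))
```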
Pro Tip: The most dangerous AI risk is not the one you can see. It is the one quietly sitting in a workflow nobody has mapped yet.
🎯 The Arsenal — Tools & Platforms
Airtable · map AI tools, vendors, and exposure points · https://www.airtable.com
Google Sheets · quick tracking of dependencies and risk scoring · https://workspace.google.com/products/sheets/
Claude / GPT / Gemini · powerful, but require strict access control and monitoring · https://www.anthropic.com · https://openai.com · https://gemini.google.com
SSO / IAM tools · enforce access boundaries across systems · https://azure.microsoft.com/products/active-directory
Logging systems · track prompts, outputs, and actions for traceability · https://grafana.com
Copy-paste prompt block:
You are helping me build an AI Exposure Engine.
For the workflow below:
1. identify all AI tools and vendors involved
2. identify what data is accessed
3. identify where sensitive data could leak
4. identify vendor dependency risks
5. identify access control gaps
6. list the top 5 exposure risks
7. propose a 2-week pilot
Workflow:
[insert workflow here]
Return the answer in markdown with sections for:
- Workflow summary
- Tool and vendor map
- Data exposure points
- Access control gaps
- Risks
- Pilot rollout
- Metrics
💡 Free Office Hours
If you are trying to understand where your AI systems are exposed before something breaks or leaks, I run free office hours to help map your stack and tighten control.
Book here: https://calendly.com
The Biggest Knock on Private Credit? Percent Changed That.
The number one knock on private credit has always been the same: you can't get out. Lock in for 12 or 24 months, hope things go as planned, wait. Percent changed that in December 2025 with a secondary marketplace. Browse live bid and ask data on seasoned deals, submit an indication of interest to buy or sell, and Percent coordinates the match. For accredited investors who want private credit yields without locking up capital indefinitely, you can do that now.
The numbers as of Q4 2025:
· $1.82B funded across 981 deals
· 16.72% current weighted average coupon
· 0.58% lifetime charge-off rate
Very few individual investor platforms offer this. New investors can receive up to $500 on their first investment.
Alternative investments are speculative. Secondary liquidity not guaranteed. Past performance not indicative. Terms apply.
🕹️ Game Over
AI is scaling fast. Exposure scales faster if you are not watching it.
— Aaron
Automating the boring. Amplifying the brilliant.
Subscribe: link

