Daily AI News
for Executives

Short, practical updates on AI, business strategy, and emerging technology — curated for founders, operators, and executives.

Episode summary. On February 17, 2026, federal Judge Jed Rakoff issued the first nationwide ruling holding that conversations with consumer AI chatbots are not protected by attorney-client privilege and are fully discoverable in litigation. Six weeks later, the Delaware Court of Chancery quoted a CEO's AI chat logs as trial evidence in a $250 million earnout dispute, and noted that his admitted deletion of some logs may factor into damages. This episode walks CEOs, GCs, and CISOs through what the courts actually held, what it means for your company in practice, and the five specific moves to make this week.

Why this matters. Every prompt your employees type into ChatGPT, Claude, Gemini, or Copilot is now a timestamped, logged document living on a third party's servers under terms that explicitly permit disclosure to regulators and courts. The candor of AI conversations — precisely because employees feel they are thinking in private — makes them disproportionately damaging in discovery. This is the AI wake-up call, and it lands harder than email did in the 2000s or Slack did in the 2010s.

The Five Rulings You Need to Know

1. United States v. Heppner — No. 25 Cr. 503 (JSR), 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026). Judge Jed S. Rakoff, Southern District of New York. The anchor case. Bradley Heppner, former Chair of GWG Holdings, was indicted for securities fraud allegedly costing investors more than $150 million. Facing a grand jury subpoena, he used the free version of Anthropic's Claude to generate 31 documents analyzing his defense strategy and shared them with Quinn Emanuel. FBI agents seized the documents during a Dallas search warrant. The government moved to compel. Rakoff — calling it "a question of first impression nationwide" — ruled the documents were not privileged on three independent grounds, and found that Heppner may even have waived privilege over the original attorney-client communications he had pasted into Claude.

2. Fortis Advisors LLC v. Krafton, Inc. — C.A. No. 2025-0805-LWW (Del. Ch. Mar. 16, 2026). Delaware Court of Chancery, Vice Chancellor Will. Krafton acquired Unknown Worlds Entertainment (maker of Subnautica) for $500M up front plus a $250M earnout. When the deal soured, Krafton's CEO used an AI chatbot to draft a "Response Strategy to a No-Deal Scenario" including a "pressure and leverage package" and a "two-handed strategy" combining legal pressure with softer retention offers. The court quoted the AI logs extensively to establish pretextual intent — and noted the CEO's admitted deletion of some logs may "factor prominently" in the damages phase. Civil discovery, not criminal. The reasoning travels.

3. Warner v. Gilbarco, Inc. — No. 2:24-CV-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026). Magistrate Judge Anthony P. Patti. A pro se plaintiff in an employment discrimination case used ChatGPT to prepare filings. The court upheld work product protection on narrow facts — a pro se litigant is the party, FRCP Rule 26(b)(3)(A) protects party-prepared materials, and uploading to an AI tool is not disclosure to an adversary. This is not a circuit split with Heppner (different context, criminal vs. civil, represented vs. pro se), but it is the only counterweight on the books.

4. Morgan v. V2X, Inc. — No. 1:25-cv-01991 (D. Colo. Mar. 30, 2026). Magistrate Judge Maritza Dominguez Braswell. A modified protective order establishing the precise contractual checklist any AI tool must meet before confidential discovery materials can be loaded into it: (1) no training on inputs, (2) strict confidentiality, (3) contractual right to delete. The court acknowledged this effectively bars most consumer AI tools from discovery-sensitive workflows.

5. In re OpenAI Copyright Litigation — S.D.N.Y. Jan. 5, 2026. The court upheld a discovery order requiring OpenAI to produce a sample of 20 million de-identified ChatGPT conversation logs. Confirms that AI providers retain enormous logs and that courts can compel their production.

The Three Heppner Holdings in Plain English

Holding 1 — Claude is not an attorney. Privilege presupposes a trusting human relationship with a licensed professional who owes fiduciary duties and is subject to discipline. A chatbot owes you nothing. The government literally asked Claude whether it was a lawyer and submitted Claude's own denial as evidence.

Holding 2 — No reasonable expectation of confidentiality. Anthropic's terms of service state that Anthropic collects user inputs, may use them to train the model, and "reserves the right to disclose data to third parties, including governmental regulatory authorities, and in connection with claims, disputes or litigation." Every major consumer AI platform has some version of this language. Clicking "I agree" waives the confidentiality prong.

Holding 3 — Not for the purpose of obtaining legal advice. Counsel did not direct Heppner to use Claude. The AI's own disclaimer that it cannot provide formal legal advice sealed it. The court also shut down retroactive privilege: "non-privileged communications are not somehow alchemically changed into privileged ones upon being shared with counsel."

The backward blast radius. Because Heppner had previously pasted privileged attorney-client communications into Claude, the court found doing so may have waived privilege over those original communications themselves. Summarizing a privileged memo in ChatGPT may hand that memo to every future adversary you will ever face.

What "Discoverable" Actually Means

Discoverable means your adversary — a plaintiff's lawyer, a regulator, a government agency — has a legal right to make you produce the document. The preservation obligation attaches the moment a reasonable person in your company's position would anticipate litigation: a demand letter, a regulatory inquiry, a contract dispute escalating, a whistleblower report, a discrimination complaint. From that moment, deleting relevant records — including AI chat logs — is spoliation under FRCP Rule 37(e).

Key distinctions.

  • Discoverable — must be produced if requested.
  • Admissible — can be used as evidence at trial (higher bar).
  • Privileged — protected from disclosure under attorney-client or work product doctrine.
  • Protected — shielded under a protective order (trade secrets, confidential discovery).

Discoverable does not mean admissible. But it means opposing counsel gets to read it, use it in depositions, and decide whether to move to admit it. For an executive, that alone is the problem.

Conversations Now at Risk

Every one of these, post-Heppner, is a discoverable document:

  • An HR director asking: "Would this employee have a valid discrimination claim if we terminate them?"
  • An engineer describing a proprietary algorithm in detail to troubleshoot code.
  • A CFO running M&A scenario analysis with internal numbers.
  • A manager asking how to document a performance issue before a termination.
  • A compliance officer asking whether a specific pattern of conduct crosses a regulatory line.
  • Executives asking AI to stress-test financial projections using internal data.
  • Senior leadership workshopping regulatory responses with an AI assistant.
  • Customer success managers drafting responses to complaints that raise legal exposure.

The Retention Dilemma

There is no clean answer. Both paths carry legal exposure.

Keep them all — and you expand the blast radius of every future lawsuit. Candid AI conversations are disproportionately damaging in discovery. Your people will not self-edit their prompts. The whole point of a chatbot is that they do not have to.

Delete them — and under FRCP Rule 37(e), if litigation was reasonably anticipated, you have committed spoliation. Sanctions can include adverse inference instructions (the jury is told to assume the destroyed evidence was unfavorable), monetary penalties, or case-dispositive rulings.

The only path through is a tiered retention framework — written in advance, enforced consistently, paused instantly on legal hold. Treat AI retention the way leading companies treated email retention starting in 2005. The ones that got it right paid far less than the ones that did not.
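The tiered framework described above can be sketched as a small policy table with a legal-hold override. The tier names and retention windows below are illustrative assumptions for the sketch, not a recommended schedule; the written policy comes first, and code merely enforces it.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative tiers and windows -- assumptions, not legal guidance.
RETENTION_DAYS = {
    "legal_or_compliance": 2555,   # roughly seven years
    "business_sensitive": 365,
    "general_drafting": 30,
}

@dataclass
class ChatLog:
    log_id: str
    tier: str
    created: date
    on_legal_hold: bool = False

def is_deletable(log: ChatLog, today: date) -> bool:
    """A log is deletable only when its tier window has lapsed
    AND no legal hold is in place. The hold always wins."""
    if log.on_legal_hold:
        return False
    return (today - log.created).days > RETENTION_DAYS[log.tier]

log = ChatLog("c-001", "general_drafting", date(2026, 1, 1))
print(is_deletable(log, date(2026, 3, 1)))   # past the 30-day window: True
log.on_legal_hold = True
print(is_deletable(log, date(2026, 3, 1)))   # hold pauses deletion: False
```

The point of the structure is the order of checks: the legal-hold flag is evaluated before any retention math, which is what "paused instantly on legal hold" means in practice.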

Regulatory Requirements by Industry

Financial services. SEC Rule 17a-4(b)(4) requires broker-dealers to retain all business-related communications for at least three years, the first two in an easily accessible place, in WORM-compliant or audit-trail-compliant storage. FINRA Rule 4511 is the general recordkeeping rule. FINRA's 2026 Annual Regulatory Oversight Report explicitly includes chat-style AI communications. FINRA Rule 3110.09 covers internal communications review. Investment Advisers Act Rule 204-2 requires registered advisers to retain written communications tied to recommendations and advice for at least five years. Since 2022 the SEC has collected more than $3 billion in fines for off-channel communications on WhatsApp, iMessage, and Signal. AI is the next enforcement sweep. Any AI-assisted discussion touching material nonpublic information also raises insider trading and Reg FD exposure.

Healthcare and life sciences. HIPAA (45 CFR Parts 160/164) treats any platform that receives, maintains, or transmits Protected Health Information as a "business associate" requiring a signed Business Associate Agreement. Consumer AI tools do not sign BAAs with individual users. Encryption alone does not satisfy the rule — a BAA is required regardless. FDA 21 CFR Part 11 governs electronic records and signatures in clinical and drug-approval contexts. 42 CFR Part 2 imposes heightened confidentiality on substance use disorder records.

Public companies. Sarbanes-Oxley Section 802 prohibits destruction, alteration, or falsification of records in federal investigations and requires seven-year retention of audit-relevant records, including electronic communications. Willful destruction can carry up to 20 years of imprisonment. Dodd-Frank whistleblower provisions make AI conversations discussing potential violations — and AI conversations about whistleblowers — directly relevant in retaliation proceedings. Reg FD creates risk when selectively disclosing material information to AI platforms that may in turn disclose to third parties.

Law firms and accounting firms. ABA Model Rule 1.6 requires "reasonable efforts" to prevent inadvertent disclosure of client information. ABA Formal Opinion 512 (2024) addresses generative AI use specifically. Rule 1.1 requires technological competence. Rule 5.3 requires lawyers to supervise nonlawyer assistance, including AI tools, so their use is compatible with professional obligations.

The Five Moves To Make This Week

Move 1 — Issue a company-wide AI use policy. Your GC and CISO issue a one-page written policy: do not enter into ChatGPT, Claude, Gemini, or any consumer AI product any of the following — privileged matters, HR decisions about specific individuals, material nonpublic information, patient data, trade secrets, litigation strategy, internal investigation content, M&A information, or regulatory strategy. Require signed acknowledgment. Cite Heppner as the reason. Distribute within the week.

Move 2 — Run an AI tool audit within two weeks. Your CIO inventories every AI tool in use across the organization — especially the ones employees expensed on corporate cards without IT's knowledge. For each tool, answer three questions. (a) Do we have an enterprise contract with a no-training clause and defined retention? (b) What sensitive data categories have been entered? (c) Can we export that user's full conversation history today in an e-discovery-compatible format? If the answer to (c) is no, you cannot use that tool for business purposes — period.
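The three audit questions in Move 2 reduce to a simple gating function over the tool inventory. The tool names and records below are hypothetical, purely to show the shape of the exercise.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str
    enterprise_contract: bool          # (a) no-training clause + defined retention
    sensitive_categories: list = field(default_factory=list)  # (b) what went in
    ediscovery_export: bool = False    # (c) full history exportable today

def approved_for_business_use(tool: AIToolRecord) -> bool:
    # Per Move 2: if question (c) is a no, the tool cannot be used
    # for business purposes, regardless of anything else.
    return tool.ediscovery_export and tool.enterprise_contract

inventory = [  # hypothetical entries
    AIToolRecord("enterprise-llm", True, ["M&A models"], True),
    AIToolRecord("consumer-chatbot", False, ["HR notes"], False),
]
print([t.name for t in inventory if not approved_for_business_use(t)])
# -> ['consumer-chatbot']
```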

Move 3 — Update every active litigation hold. Counsel issues or updates every open hold to expressly cover AI chat logs on all platforms the custodians use or have used. Consumer platforms delete history on default settings after 30 to 90 days. If you are in active litigation and have not yet issued the hold, you may be 30 days from a spoliation problem you do not know you have. Confirm with each AI vendor whether preservation of a specific user's history is even possible under their default retention settings. With several vendors, it is not.

Move 4 — Brief your board at the next meeting. This is governance, not IT. Directors have fiduciary duty on information systems oversight. The Krafton case put the directors' oversight of executive AI use directly in the trial record. "We didn't know our executives were doing that" has never been a winning board answer.

Move 5 — Migrate sensitive workflows to enterprise AI within 30 days. Every business-sensitive workflow must move to enterprise-tier contracts with: a Data Processing Agreement, a no-training covenant, a zero-data-retention option or a defined short retention window with right-to-delete, audit log export, and legal hold capability. For healthcare, require a signed BAA. For financial services, confirm WORM-compliant or audit-trail-compliant archiving. For highest-sensitivity use cases, consider on-premises or VPC-hosted models (Azure OpenAI dedicated, AWS Bedrock with private endpoints, on-prem Llama, private cloud Mistral) where prompts never leave your perimeter.

Enterprise AI Governance Checklist

Procurement and contracting.

  • Enterprise-tier contracts only — never consumer or starter tiers for business-sensitive workflows.
  • DPA requirements: no-training covenant, strict confidentiality, defined short retention (30 days or less for most sensitive), right to delete on demand, legal-process notice obligation, zero-data-retention option for the highest-risk flows.
  • Healthcare: signed BAA before any PHI is processed.
  • Financial services: WORM-compliant or audit-trail-compliant prompt and output archiving.

Access and identity.

  • SSO with directory-bound accounts only. No personal accounts. No shared credentials. Every interaction attributable to a specific authenticated employee.
  • Role-based access controls — not everyone needs every AI tool.
  • Defined contractor and vendor access limits.

Data loss prevention.

  • DLP scanning AI prompts for PII, PHI, trade secrets, privileged matter, MNPI, and regulated data categories.
  • Block or flag sensitive patterns before transmission to the AI vendor.
  • Endpoint plus network DLP for regulated industries, not just application-level.
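A minimal sketch of the "block or flag before transmission" step, assuming regex-based detectors. Real deployments use dedicated DLP engines with far richer pattern libraries; the three patterns below are illustrative assumptions only.

```python
import re

# Illustrative detectors -- assumptions for the sketch, not a real DLP ruleset.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "privileged_marker": re.compile(r"attorney[- ]client|privileged", re.I),
    "mnpi_marker": re.compile(r"\bmaterial nonpublic\b", re.I),
}

def screen_prompt(prompt: str):
    """Return ('block' | 'allow', hits) BEFORE the prompt leaves the perimeter."""
    hits = [name for name, rx in PATTERNS.items() if rx.search(prompt)]
    return ("block" if hits else "allow"), hits

decision, hits = screen_prompt("Summarize this attorney-client memo")
print(decision, hits)   # -> block ['privileged_marker']
```

The design point is placement, not the patterns: screening must sit between the employee and the AI vendor, so a sensitive prompt is stopped before it becomes a record on someone else's servers.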

Logging, audit, and e-discovery.

  • Log every AI interaction with timestamp, user identity, session ID, full prompt, full response.
  • Logs exportable in formats compatible with Relativity, DISCO, Everlaw, and Logikcull.
  • Retention governed by explicit policy — not vendor defaults — aligned to your legal hold process.
  • Legal hold capability verified before deployment: you must be able to preserve a specific user's full AI history immutably and export it for review. If you cannot do all three, you cannot use that tool for business purposes.
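The logging requirements above amount to one audit record per interaction plus an export path. A minimal sketch, assuming JSONL as the interchange format (actual load-file requirements vary by review platform) and a content hash as a simple integrity check:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_interaction(user_id: str, session_id: str, prompt: str, response: str) -> dict:
    """One audit record per AI interaction: timestamp, authenticated identity,
    session ID, full prompt, full response, plus a hash for integrity checks."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "session_id": session_id,
        "prompt": prompt,
        "response": response,
    }
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def export_jsonl(records: list) -> str:
    # One JSON object per line -- a format review tooling can generally ingest.
    return "\n".join(json.dumps(r, sort_keys=True) for r in records)

rec = log_interaction("jdoe@example.com", "s-42", "draft a memo", "Here is a draft")
print(export_jsonl([rec]).count("\n"))   # -> 0
```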

Privilege protocols.

  • Any AI work touching legal strategy, privilege analysis, regulatory risk, or compliance must be counsel-initiated and counsel-directed, documented as such to establish the Kovel agent framework.
  • Deploy Upjohn-style notices at AI session start for legal or compliance use: "This tool is a company resource. The company holds any applicable privilege. Do not use for personal legal questions."
  • Route sensitive legal workflows through counsel-provisioned tools (Harvey, CoCounsel, Lexis+ AI) with documented no-training and confidentiality terms.

Employee training.

  • Annual training with signed acknowledgment; refresh after major rulings.
  • Clear policy distinguishing personal AI use from business AI use.
  • Explicit list of categories that never go into a consumer chatbot.

Handling Historical AI Logs

If your organization has been using consumer AI tools without governance, the remediation sequence is: (1) audit every tool currently in use, including shadow IT on corporate cards; (2) categorize historical use cases that touched active litigation, known regulatory inquiries, HR matters involving claims, M&A transactions, or financial reporting, and place those logs under preservation immediately; (3) inventory the gap between what logs existed and what is still retrievable — this is your spoliation risk map; (4) migrate all business-sensitive workflows to enterprise-tier tools and formally discontinue consumer tool use for business purposes.
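Step (2) of that sequence is a triage rule: anything touching a trigger category goes under preservation immediately. The category names below are hypothetical labels for the sketch.

```python
# Hypothetical trigger categories from step (2) of the remediation sequence.
PRESERVE_TRIGGERS = {
    "active_litigation", "regulatory_inquiry", "hr_claim",
    "ma_transaction", "financial_reporting",
}

def triage(log_categories: set) -> str:
    """Any overlap with a trigger category means preserve now;
    everything else follows the normal retention policy."""
    return "preserve_now" if log_categories & PRESERVE_TRIGGERS else "normal_retention"

print(triage({"marketing_copy"}))               # -> normal_retention
print(triage({"hr_claim", "marketing_copy"}))   # -> preserve_now
```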

The CEO Question

The question to take into your week is not whether to let people use AI. That ship sailed. The question is: if opposing counsel subpoenaed every AI conversation your executive team had in the last six months, would you be comfortable having those conversations — unedited, tangential, hypothetical, candid — read aloud in a deposition? Would you be comfortable having your CEO's brainstorm on a bad acquisition quoted at your board meeting?

If the answer is no, you know what to do this week.

Further Reading

Legal analysis from Perkins Coie, Goodwin, Greenberg Traurig, Kirkland & Ellis, Paul Weiss, and Bloomberg Law, plus Massachusetts Lawyers Weekly, JD Supra, Everlaw, and the FINRA 2026 Annual Regulatory Oversight Report. Links and full citations in the episode research brief.

This episode is editorial commentary for an executive audience. It is not legal advice. Consult qualified counsel licensed in your jurisdiction for advice on specific matters.

Hosted by Stephen Forte. A BuildClub production.
