
◼ Public record

Sam Altman

CEO and co-founder of OpenAI. Chairman of Tools for Humanity (Worldcoin). The public face of "safe AI."

Net worth: ~$2.8B (personal VC portfolio; OpenAI equity undisclosed) · OpenAI valuation: $852B · 5 documented charges

Sam Altman built an $852 billion enterprise on stolen books, destroyed the safety board that tried to stop him, converted a charitable institution into that for-profit empire while the nonprofit that once owned all of it now holds 26%, and is scanning the irises of the Global South for cryptocurrency tokens — suspended in a dozen countries, still operating everywhere else. The safety mission is still on the website. The safety mission is gone.

$852B

OpenAI valuation — was a nonprofit; now a for-profit

12+

countries suspended or investigated Worldcoin iris collection

5 days

to reverse safety board firing via 738-employee commercial threat

Alleged

Intellectual property — theft at civilizational scale · 2019–2025

OpenAI trained on 100,000+ copyrighted books and 1M+ hours of YouTube video — without consent — then destroyed the evidence after lawsuits were filed

OpenAI trained its commercial models on copyrighted material without consent or compensation — building an $852 billion enterprise on content it did not pay for. Books1 and Books2, the datasets used for GPT-3 training, contained over 100,000 copyrighted books. When this became public in May 2024, OpenAI destroyed both datasets — after litigation had already been filed. Courts treat post-litigation evidence destruction as spoliation. For GPT-4, OpenAI transcribed more than one million hours of YouTube videos for training data — acknowledged in court filings. More than a dozen major lawsuits are pending, including suits from the New York Times, the Authors Guild (representing George R.R. Martin, John Grisham, Jodi Picoult, and others), eight major newspapers, and Canadian national outlets. No settlements have been disclosed. The copyright violation is embedded in every response ChatGPT has ever produced.

  • Books1 and Books2 datasets: 100,000+ copyrighted books used for GPT-3 training. Disclosed publicly May 2024. Datasets destroyed after litigation commenced.
  • GPT-4 training: 1M+ hours of YouTube video transcribed — acknowledged in OpenAI court filings.
  • Active lawsuits (partial): Sarah Silverman et al. (July 2023); Authors Guild / Martin, Grisham, Picoult, Franzen (September 2023); The New York Times (December 2023); The Intercept, Raw Story (February 2024); 8 newspapers including Chicago Tribune, Denver Post (April 2024); Toronto Star, Globe and Mail, CBC, Postmedia (November 2024); Ziff Davis (April 2025).
  • Post-litigation destruction of Books1 and Books2 is the conduct courts classify as spoliation — raising an inference that the destroyed evidence would have been damaging.
  • OpenAI publicly lobbied for "fair use" exemptions to copyright law while simultaneously destroying the datasets that would have documented the scope of the violation.
Documented

Corporate governance — destruction of oversight · 2023–2024

Safety board fired Altman for "failures of candor." He reversed the firing in five days via commercial leverage. The safety team was then dissolved.

On November 17, 2023, OpenAI's safety-focused board — Helen Toner (Georgetown AI researcher), Ilya Sutskever (co-founder), Adam D'Angelo, Tasha McCauley — fired Sam Altman as CEO, stating he "was not consistently candid in his communications with the board." Within hours, Microsoft (27% stakeholder) objected. Investors protested. 738 of OpenAI's 770 employees signed an open letter threatening to resign and join Microsoft if Altman wasn't reinstated and the board didn't resign. Five days later, Altman was reinstated. All safety-focused board members departed. The new board: Bret Taylor (chairman, former Salesforce co-CEO) and Lawrence Summers (former Treasury Secretary) — commercial and financial figures, not safety researchers. The governance structure created to check Altman was replaced by governance that would not. The aftermath: Ilya Sutskever resigned in May 2024, citing safety concerns. The superalignment project — designed to develop techniques for governing superintelligent AI before it arrived — was dissolved 18 months after launch. Approximately half of OpenAI's AI safety researchers left the company throughout 2024, publicly citing the organization's "deprioritization of safety goals."

  • Board firing (November 17, 2023): stated reason — Altman "was not consistently candid in his communications with the board."
  • 738 of 770 OpenAI employees signed open letter threatening mass resignation if Altman wasn't reinstated.
  • Reinstatement (November 21, 2023): 5 days after firing. All safety-focused board members replaced.
  • New board: Bret Taylor (former Salesforce co-CEO), Lawrence Summers (former Treasury Secretary). Safety expertise: zero.
  • May 2024: Ilya Sutskever resigned. Superalignment project — created to solve AI oversight before superintelligence — simultaneously dissolved.
  • Jan Leike (superalignment co-lead) departed simultaneously, posting publicly about safety deprioritization.
  • ~50% of OpenAI's AI safety researchers left throughout 2024, citing "deprioritization of safety goals."
  • OpenAI now races toward AGI without the safety oversight structure it publicly committed to building.
Legal. Moral crime.

Nonprofit law — charitable asset extraction · 2015–2025

OpenAI was founded as a nonprofit. Altman converted it into a for-profit worth $852 billion. The nonprofit's stake went from 100% to 26%.

OpenAI was incorporated in 2015 as a nonprofit corporation with assets legally dedicated to its public benefit mission. In October 2025, Altman completed the conversion of OpenAI into a for-profit Public Benefit Corporation. The nonprofit foundation — which used to own the entire enterprise — now holds a 26% stake. At the $852 billion valuation (April 2026), that 26% is worth approximately $221 billion. The foundation used to own 100% of an enterprise now worth $852 billion. The founders, investors, and employees capture the difference. Multiple critics — including legal scholars and Elon Musk's litigation team — characterized the transaction as an illegal diversion of charitable assets. The California and Delaware Attorneys General reviewed the transaction. The FTC opened an investigation in July 2023 examining "circular" spending arrangements between OpenAI and Microsoft. No criminal charges have been filed. The conversion is complete. Sam Altman's personal equity stake — which was zero under the nonprofit structure — has not been publicly disclosed. The company is valued at $852 billion.

  • 2015: OpenAI incorporated as nonprofit. Assets legally dedicated to mission of benefiting humanity. No private inurement allowed.
  • December 2024: Restructuring announced — conversion to Public Benefit Corporation.
  • October 28, 2025: Conversion completed. OpenAI Group PBC formally created.
  • Post-conversion ownership: OpenAI Foundation (nonprofit): 26%. Microsoft: 27%. Investors + employees: 47%.
  • Valuations at time of conversion: ~$500B (employee share sale); grew to $852B by April 2026.
  • Legal letter "Not For Private Gain": argued the nonprofit was legally obligated to retain control of assets dedicated to its public mission.
  • Elon Musk litigation (revised April 2026): sought reversal of the corporate restructuring on grounds of illegal diversion of charitable assets.
  • Sam Altman's personal equity: was zero under nonprofit structure. Post-conversion equity stake has not been publicly disclosed. The company is worth $852B.
Ordered to comply

Biometric surveillance — colonial extraction · 2021–2025

Worldcoin scanned the irises of millions of people in the Global South for crypto tokens. 12+ countries suspended, investigated, or ordered data deletion.

Sam Altman co-founded and chairs Tools for Humanity, which operates the "World" (formerly Worldcoin) project. The model: send physical "Orb" devices to low-income countries, offer cryptocurrency tokens in exchange for iris scans, and build a global biometric identity database. The iris is the most distinctive and unchangeable piece of biometric data a human possesses. The target populations — in Africa, Southeast Asia, and Latin America — have limited income and limited legal recourse when data is misused. The iris data cannot be revoked if it is breached or sold. By September 2025, the project had 15 million iris-verified users. The regulatory verdict from a dozen countries was consistent: illegal. Kenya (August 2023): suspended enrollment; May 2025 ordered full deletion of all biometric data collected. Spain (March 2024): ordered halt to biometric data collection. Portugal (March 2024): temporary suspension. Hong Kong (January 2024): found violations of the Personal Data Ordinance. Philippines (October 2025): cease and desist for multiple data privacy violations. Thailand (November 2025): ordered halt and deletion of data from 1.2 million users. France, UK, Bavaria, South Korea, Brazil, India, Indonesia all opened investigations or suspended operations.

  • Business model: iris scan → cryptocurrency token. Permanent, irrevocable biometric data exchanged for speculative digital currency.
  • Scale: 33 million World App users (September 2025); 15 million iris-verified.
  • Geographic concentration: Africa, Southeast Asia, Latin America — low-income populations with limited legal recourse.
  • Kenya (August 2023): suspended enrollment over "security, privacy and financial concerns." May 2025: ordered full deletion of all biometric data.
  • Spain (March 2024): ordered halt to biometric data collection.
  • Philippines (October 2025): cease and desist for multiple data privacy violations.
  • Thailand (November 2025): ordered halt; deletion ordered for 1.2 million users' data.
  • Additional enforcement/investigations: Hong Kong, Portugal, France, UK, Bavaria (Germany), South Korea, Brazil, India, Indonesia.
  • Altman continues operating Worldcoin/World in jurisdictions where regulators have not yet acted.
Documented

Political capture — regulatory evasion · 2023–2025

$1M to Trump's inaugural fund. Opposed California's AI safety bill. Called for federal regulation that doesn't exist — creating a vacuum during maximum commercial expansion.

Sam Altman has pursued a consistent strategy to prevent regulation of AI during the period of maximum commercial expansion. In December 2024, just as OpenAI announced its nonprofit-to-PBC restructuring — a conversion that depended on federal regulatory non-interference — Altman donated $1 million to Donald Trump's inaugural fund. OpenAI actively opposed California's AI safety legislation (SB 1047), arguing it "encroaches on a more competent federal government" — a federal government with no AI safety regulation in place. The effect of this position: oppose regulation where it exists (states); advocate for regulation where it doesn't (federal). The gap between those two positions is the window during which OpenAI deploys its products at scale. In July 2023, the FTC opened a civil investigation into OpenAI's data security and privacy practices, flagging "circular" spending arrangements between OpenAI and Microsoft as potentially anticompetitive. And in May 2023, Altman testified before Congress calling for AI regulation — while opposing every state-level bill that would have applied immediately.

  • $1M to Trump Inaugural Fund (December 2024): timed to the announcement of OpenAI's nonprofit-to-PBC conversion; also concurrent with ongoing federal contract negotiations.
  • California SB 1047 opposition: OpenAI argued the bill "encroaches on a more competent federal government" — a federal government with zero AI safety statutes in force.
  • Pattern: oppose state regulation (exists now, would apply now); advocate for federal regulation (does not exist; creates regulatory vacuum).
  • FTC civil investigative demand (July 2023): examined data security/privacy practices; flagged "circular" OpenAI-Microsoft spending arrangements.
  • Congressional testimony (May 2023): publicly called for AI regulation while simultaneously opposing every actionable bill at the state level.
  • The regulatory vacuum during 2023–2025 is the period in which ChatGPT went from zero to 400M+ weekly users and OpenAI's valuation went from ~$30B to $852B.

◼ List of charges

01

Financial Misconduct

5–15 years

Statute: Documented financial impropriety — including misuse of fiduciary relationships, commingling of funds, unauthorized transfers, or exploitation of financial access — causing documented harm to investors, beneficiaries, or the public.

Basis: OpenAI trained on 100,000+ copyrighted books and 1M+ hours of YouTube video without consent; destroyed the training datasets after lawsuits were filed. The copyright violation is embedded in every commercial product OpenAI has ever shipped.

No jury has rendered a guilty verdict yet

02

Financial Fraud

10–25 years

Statute: Sustained falsification of financial statements, business records, or asset valuations to defraud lenders, insurers, taxing authorities, or the public — established by jury verdict, civil judgment, or regulatory finding.

Basis: Safety board fired Altman for "failures of candor"; reversed via commercial threat in 5 days. Safety-focused governance replaced by financial industry figures. Superalignment project dissolved. ~50% of AI safety researchers left.

No jury has rendered a guilty verdict yet

03

Philanthropic Capture of Public Institutions

10–25 years

Statute: Using charitable foundation infrastructure — earmarked donations, grant dependency, media funding — to acquire agenda-setting power over international public health bodies, journalism, and policy organizations, while generating personal tax benefits and avoiding democratic accountability for the resulting policy outcomes.

Basis: OpenAI incorporated as nonprofit with assets legally dedicated to public benefit. Converted to for-profit PBC (October 2025). Foundation's stake: 100% → 26% of an $852B enterprise. Critics: legally indefensible diversion of charitable assets.

No jury has rendered a guilty verdict yet

04

Mass Surveillance for Profit

10–25 years

Statute: Non-consensual, persistent collection and commercial exploitation of detailed behavioral, biometric, or personal data at population scale.

Basis: Worldcoin/World: iris scans of 15M+ people in low-income countries in exchange for crypto tokens. Suspended, investigated, or ordered to delete data in 12+ countries. Permanent biometric data, irrevocable, from populations with limited legal recourse.

No jury has rendered a guilty verdict yet

05

Regulatory Capture

10–20 years

Statute: Systematic use of financial, political, or revolving-door leverage to reduce the enforcement effectiveness of regulatory bodies — including engineering settlements and fines that represent a negligible fraction of revenue from the penalized conduct, thereby institutionalizing impunity.

Basis: $1M to Trump inaugural fund (December 2024) during OpenAI's conversion requiring federal non-interference. Opposed California AI safety law while calling for federal regulation that doesn't exist — creating the regulatory vacuum that enabled $0 → $852B commercial expansion.

No jury has rendered a guilty verdict yet

Total sentence

45–110 years

That is

0.6–1.4 life sentences

(using 78 years as one life)

At $1 million per day

The Sam Altman / OpenAI fortune would last 8 years

0.1 lifetimes of luxury — before running out.

These are moral charges, not legal ones. The actual legal system has not — and will not — bring them.
