Law & Regulation

Algorithm Design Is Now Product Liability: What Every Tech Leader Needs to Know

By Bobby Alexis · 10 min read

A few weeks ago, I posted a short video making a point that generated more reaction than almost anything I've shared this year: courts are now treating algorithm design as product design. The logical extension — that defective algorithm design creates product liability — isn't a prediction anymore. It's case law.

This post is the long-form version of that argument. Because the implications are broader than the Meta trial, broader than COPPA 2.0, and broader than any single company's legal exposure. We are watching the legal infrastructure around children's technology being rebuilt in real time, and most of the people building in this space don't fully understand what that means for them.

Let me walk through it.

How We Got Here: The Death of Platform Immunity

For the first two decades of the social media era, the tech industry operated under a relatively stable legal assumption: Section 230 of the Communications Decency Act gave platforms near-total immunity from liability for user-generated content. Platforms weren't publishers. They were neutral conduits. What users did with them wasn't the platform's legal problem.

That model held because the central claim was true: a platform displaying user content wasn't meaningfully different from a telephone company carrying user calls. The legal logic was sound for the technology it described.

But recommendation algorithms broke the analogy.

A telephone company routes your call to whoever you're calling. It doesn't decide, based on 5,000 data points about your psychological profile, which calls to put through, which to amplify, and which to bury — optimized to keep you on the phone as long as possible. An algorithmic recommendation engine does exactly that. It is not a neutral conduit. It is an active editorial system making billions of decisions per day about what content reaches which users.

And when the user is a child, and the content being amplified is content the company's own researchers documented as harmful — the legal question becomes: is this a publishing decision, or a product design decision?

Courts have increasingly answered: product design.

What the Meta Trial Established

The ongoing litigation against Meta is the clearest window into where this is headed. In discovery, plaintiffs obtained internal Meta research that has become the evidentiary foundation for a new understanding of algorithm liability.

The documents are damning in specific ways:

Meta knew Instagram harmed teen girls. Internal research from 2019 — the "Teen Mental Health Deep Dive" study — found that Instagram was associated with body image issues and depression in adolescent girls. The research team's findings were clear. The internal response was to limit external visibility of the study while continuing to develop engagement-maximizing features.

The harm was measurable and predicted. This is the critical point for liability purposes. It's one thing to argue that a company should have known its product caused harm. It's substantially more serious when internal documents show the company did know, quantified the harm, and made a product decision to continue anyway. That's the transition from negligence to recklessness in product liability law.

The algorithm was the product. Meta's defense, like that of other social media defendants, has relied heavily on Section 230 — arguing that its recommendation engine simply surfaces user content and therefore cannot be held liable for the content it surfaces. Courts have been consistently skeptical of this framing. The algorithm isn't neutral curation. It is a product that makes active choices about what to amplify, to whom, at what frequency, and with what psychological effect.

The distinction between "hosting content" and "algorithmically targeting content at vulnerable users" is where Section 230 immunity ends and product liability begins. Several federal courts have now drawn that line explicitly.

COPPA 2.0: April 22 Is Not a Suggestion

On April 22, 2026, the Federal Trade Commission's updated Children's Online Privacy Protection Act rules take effect. COPPA was originally passed in 1998, when the primary concern was websites collecting email addresses from children. The digital landscape since then has changed in ways that make the original rules look like they were written for a different civilization.

COPPA 2.0 addresses the current reality:

Expanded age coverage. The original rules protected children under 13. The updated rules extend protections to adolescents up to 16 in significant respects, recognizing that 13-year-olds are not meaningfully different from 12-year-olds in terms of developmental vulnerability to algorithmic manipulation.

Behavioral advertising prohibition. COPPA 2.0 prohibits targeted advertising to children based on behavioral data. This isn't a minor compliance adjustment. For products that monetize through behavioral advertising, it requires a fundamental rethink of business models.

Algorithmic transparency requirements. The updated rules include provisions requiring platforms to be able to demonstrate how their recommendation systems work when those systems touch children's content consumption. The black-box defense — "the algorithm is too complex to explain" — is explicitly foreclosed.

Data minimization mandates. Companies can no longer collect children's data on the theory that it might be useful someday. Data collection must be tied to specific, disclosed purposes. The surveillance-first, compliance-later model is prohibited.

The enforcement date is real. The FTC under its current mandate has made children's data protection an explicit priority. Companies that reach April 22 without meaningful compliance infrastructure are not behind on a checklist. They are in active legal exposure.

I wrote about the full scope of COPPA 2.0's implications for EdTech specifically here — but the core issue applies to anyone building technology that touches children.

The State AG Coalition: 40 Fronts at Once

Federal enforcement is one thing. The coordinated action of state attorneys general is something different in scale and character.

More than 40 state AGs have aligned around the Kids Online Safety Act and coordinated enforcement frameworks. This isn't a handful of activist states running individual investigations. It's a multi-decade enforcement coordination mechanism that has learned from the tobacco litigation playbook.

The tobacco playbook works like this: you don't need to win every case. You need to win discovery. Once discovery is compelled, internal documents surface. Internal documents establish what the company knew. What the company knew establishes liability. The cost of fighting 40 coordinated investigations simultaneously dwarfs the cost of settlement.

This is exactly the mechanism now in motion against social media companies. And it's expanding beyond the obvious defendants. The state AG coalitions are looking at the full ecosystem of products that touch children's digital experience — platforms, ad networks, data brokers, embedded analytics tools, and any SDK that collects behavioral data from devices used by minors.

If your product is somewhere in that stack and you haven't done a serious compliance audit, the question isn't whether you're exposed. It's whether you know the scope of your exposure.

Algorithm Design as Design Constraint

Here's where I want to shift from legal analysis to building philosophy, because the legal landscape isn't just about risk management. It's about what gets built next.

The legal framework that's emerging treats algorithm design as a design discipline with accountability attached. That's actually the right frame. Algorithm design has always been a design discipline — we just haven't treated it that way. We've treated it as a technical optimization problem: maximize the metric, iterate on the result. The optimization target was engagement. The externalities were someone else's problem.

Courts are assigning the externalities back to the designers. And I think that's correct.

What does it mean to design an algorithm responsibly for children? A few principles that the emerging legal standards are pointing toward:

Wellbeing as a design constraint, not a brand claim. Companies that claim to care about child wellbeing while building engagement-maximizing recommendation systems are describing a contradiction that courts are now empowered to examine. The design constraint has to be real — measured, documented, and demonstrable. Not a mission statement.

Transparency about amplification logic. What signals cause your system to show a child more of a particular content type? If the answer includes any proxy for emotional arousal — and for most engagement-optimized systems, it does — that needs to be examined against what we know about the psychological effects of sustained high-arousal negative content on developing brains.

Audit infrastructure. One of the most significant practical implications of COPPA 2.0 and the litigation landscape is that companies need to be able to answer questions about their algorithmic behavior with documentation, not speculation. Building audit capability into algorithm infrastructure isn't optional anymore. It's how you demonstrate compliance when the FTC, a state AG, or a plaintiff's discovery request asks.

This last point is worth sitting with. The companies that will be most exposed in the next wave of enforcement aren't necessarily the ones with the most harmful algorithms. They're the ones that built without audit trails — the ones who genuinely can't demonstrate what their systems were doing, to whom, and why.

The existence of observable, documented, correctable algorithmic behavior is now a compliance asset. The absence of it is a liability.
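To make that concrete, here is a minimal sketch of what a per-decision audit record might look like. I'm writing it in Python, and every name in it (the RecommendationDecision class, the signal fields, the JSONL log format) is my own illustration rather than any platform's actual schema. The point is the shape: each ranking decision leaves a queryable record of what was shown, to whom, based on which signals and weights, optimizing for what.

    # Illustrative sketch only: a per-decision audit record for a ranking system.
    # All names here are assumptions for the example, not any platform's schema.
    import json
    import time
    from dataclasses import dataclass, field, asdict

    @dataclass
    class RecommendationDecision:
        user_id: str                # pseudonymous ID; never raw PII in the log
        user_is_minor: bool         # age-band flag that triggers stricter handling
        item_id: str
        model_version: str          # which ranking model produced this score
        optimization_target: str    # e.g. "session_time" vs. a wellbeing-aware objective
        score: float
        signals: dict = field(default_factory=dict)   # signal name -> value used
        weights: dict = field(default_factory=dict)   # signal name -> weight applied
        timestamp: float = field(default_factory=time.time)

    def log_decision(decision: RecommendationDecision, path: str = "audit_log.jsonl") -> None:
        """Append one decision as a JSON line so it can be queried later."""
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(decision)) + "\n")

    # Example: record why a specific item was surfaced to a minor's feed.
    log_decision(RecommendationDecision(
        user_id="u_9f3a", user_is_minor=True, item_id="post_182",
        model_version="ranker-2026-03", optimization_target="session_time",
        score=0.71,
        signals={"predicted_dwell_seconds": 41.0, "topic_affinity": 0.83},
        weights={"predicted_dwell_seconds": 0.6, "topic_affinity": 0.4},
    ))

A record like this is cheap to write and expensive not to have when someone with subpoena power asks what your system was doing.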

What This Means If You're Building

Let me be direct about the practical implications.

If your product touches children's data in any way, April 22 is a real date. Not a suggestion. Not a guideline. An enforcement date for a federal regulatory framework with civil penalty authority and a coordinated state AG enforcement apparatus behind it.

If your recommendation system touches children's content consumption, you need to be able to explain it. Not in marketing language. In technical specificity that can survive discovery. What signals does it use? How does it weight them? What outcomes does it optimize for? Has it been evaluated against child development research? Can you show that evaluation?
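What does an answer with that level of specificity look like? One hedged illustration, continuing the Python sketches in this post: a written disclosure, maintained and versioned alongside the ranking code, that states the objective, the signals, the weights, and the evaluations performed. The structure and field names below are my assumptions, not a regulatory template.

    # Illustrative "algorithm disclosure" for a hypothetical feed ranker.
    # Field names and values are assumptions about what a defensible written
    # answer to "what signals, what weights, what objective?" could contain.
    RANKER_DISCLOSURE = {
        "system": "home_feed_ranker",
        "model_version": "ranker-2026-03",
        "optimization_target": "predicted session satisfaction, not raw watch time",
        "signals": [
            {"name": "topic_affinity", "weight": 0.4,
             "source": "declared interests only, for accounts flagged as minors"},
            {"name": "predicted_dwell_seconds", "weight": 0.3,
             "source": "on-platform behavior; excluded for under-13 accounts"},
            {"name": "creator_quality_score", "weight": 0.3,
             "source": "human review pipeline"},
        ],
        "child_safety_evaluations": [
            "quarterly review against internal wellbeing metrics",
            "review of amplification of high-arousal negative content to minors",
        ],
        "last_reviewed": "2026-03-15",
    }

The format matters far less than the discipline: the document exists, it is versioned with the code, and engineering can attest that it matches what actually ships.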

If you haven't done a systematic audit of your algorithmic behavior relative to COPPA 2.0 and KOSA standards, you don't know your exposure. You may not be non-compliant. But you don't know. And in the current legal environment, not knowing is itself a risk posture.

The companies that will navigate this landscape well aren't the ones scrambling to make cosmetic compliance changes before April 22. They're the ones who built compliance infrastructure into their development process — the ones for whom a regulatory audit is a demonstration of what they already do, not a crisis to manage.

That infrastructure needs to be built. Some of it will be internal. But companies in this space should also be thinking about what systematic, automated scanning of their algorithmic behavior would look like — the kind of continuous compliance monitoring that makes "we didn't know" impossible and makes "we can demonstrate compliance" straightforward.
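For illustration only, here is the shape such a scan could take, building on the hypothetical audit log sketched earlier. The rules it encodes (no behavioral signals for accounts flagged as minors, no pure session-time objective on a minor's feed) are my assumption of the kind of policy a team might choose to enforce, not a statement of what COPPA 2.0 literally requires.

    # Illustrative continuous compliance scan over the hypothetical audit log above.
    # The rule set is an assumption for the example, not a legal standard.
    import json

    BEHAVIORAL_SIGNALS = {"predicted_dwell_seconds", "ad_click_propensity"}

    def scan_audit_log(path: str = "audit_log.jsonl") -> list[dict]:
        """Flag decisions for minor accounts that violate the team's stated policy."""
        findings = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                decision = json.loads(line)
                if not decision.get("user_is_minor"):
                    continue
                behavioral = BEHAVIORAL_SIGNALS & set(decision.get("signals", {}))
                if behavioral:
                    findings.append({
                        "item_id": decision["item_id"],
                        "issue": f"behavioral signals used for a minor: {sorted(behavioral)}",
                    })
                if decision.get("optimization_target") == "session_time":
                    findings.append({
                        "item_id": decision["item_id"],
                        "issue": "engagement-time objective applied to a minor's feed",
                    })
        return findings

    # Run on a schedule (nightly job, CI gate) so violations surface internally
    # before a regulator, auditor, or plaintiff finds them first.
    for finding in scan_audit_log():
        print(finding)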

That tooling either doesn't exist yet in mature form, or isn't widely deployed. That gap is itself a building opportunity.

The Accountability Era

I want to close with the framing that I keep coming back to: this is the accountability era for children's technology.

The previous era was defined by a legal and cultural consensus that platforms were neutral, data was inert, and the harms of algorithmic systems were either speculative or someone else's problem. That consensus is gone. It was replaced, piece by piece, by whistleblower documents, by Meta trial testimony, by 2,200+ lawsuits filed by families, by 42 coordinated state AGs, by congressional hearings where tech CEOs could not explain what their own algorithms did.

The accountability era doesn't mean you can't build. It means you have to be able to account for what you build. That's a higher bar than the previous era. It's also, I'd argue, the right bar.

The companies and builders who thrive in the accountability era will be the ones who internalized this before they had to. Who built with the assumption that their algorithmic decisions were product design decisions — and designed accordingly.

If your product touches children and you're unsure what systematic algorithmic compliance looks like in practice, that question deserves an answer before April 22. Not after.

Navigating the Accountability Era?

I provide expert analysis and advisory for companies navigating COPPA 2.0 compliance, algorithmic accountability, and children's digital safety regulation.

Work With Mindful Media →
