From “inspired by” to impersonation: why Expert Review triggered a lawsuit
Grammarly’s now-paused Expert Review feature crossed a line from “inspired by” to something closer to AI impersonation, according to critics and newly filed lawsuits. The tool generated editorial feedback that was credited to real journalists and authors, presenting guidance as if it came from named experts. After public scrutiny, Grammarly disabled Expert Review and said it would no longer use AI to “clone” experts for this purpose (The Verge; Engadget).
The backlash escalated into litigation. Investigative journalist Julia Angwin, one of the listed “experts,” sued Grammarly, alleging misuse of her identity by having AI produce feedback attributed to her without consent (The Verge). Separate filings have sought class-action status, arguing that attributing AI-generated comments to real people risks misleading users and violating rights of publicity (WIRED).
- What triggered the claims: Expert Review attached real names to AI output, creating the appearance of personalized feedback from specific individuals (The Verge; Engadget).
- Why it matters: Plaintiffs say the practice blurred the line between stylistic inspiration and impersonation, potentially deceiving users about authorship and endorsement (The Verge; WIRED).
The company’s shutdown of Expert Review underscores the legal and reputational risks when AI systems appear to speak with a real person’s voice—and name—without clear, verified consent (The Verge).
The SDNY test: right of publicity vs. generative AI business models
The right of publicity claims now aimed at Grammarly will likely be stress‑tested in the Southern District of New York, where plaintiffs have filed a proposed class action lawsuit challenging the company’s AI attribution practices (WIRED; TechBuzz.ai). In a separate but related case, investigative journalist Julia Angwin has sued over the use of her name, alleging that AI‑generated feedback was presented as if she had authored it, without her consent (The Verge).
Framed this way, SDNY becomes the test bed for a core clash: the right of publicity versus generative AI business models that attach recognizable names to synthesized output. Plaintiffs say attributing AI text to real people misleads users and violates publicity rights; they’ve asked the court to certify a class to address systemic harm (WIRED; TechBuzz.ai). Angwin’s filing underscores the individual stakes when identity and authorship are blurred by product design (The Verge).
- What SDNY will be asked to weigh: Whether presenting AI output under a real person’s name constitutes unauthorized commercial use and risks consumer deception, as alleged by the class action and Angwin’s suit (WIRED; The Verge).
- Why the ruling matters for AI businesses: A decision could signal how courts treat identity‑based branding of AI features and the limits of attributing machine‑generated content to named experts (WIRED).
Disclaimers aren’t consent: how UX invited FTC and civil risk
Grammarly’s Expert Review didn’t just borrow tone; its interface paired AI‑generated feedback with real journalists’ names, creating the impression of expert‑authored guidance. Coverage from The Verge and Engadget centered on that attribution choice—AI feedback credited to real writers—before the company disabled the feature and said it would stop using AI to “clone” experts. That sequence highlights a basic product truth: a product disclaimer can’t override what the UX teaches users to believe.
When the most salient signal on screen is a recognizable byline, a footnote or toggle is unlikely to cure the impression that a named person actually reviewed the text. According to The Verge, Grammarly halted the feature after public criticism; Engadget similarly reported the tool had credited AI feedback to real writers before being paused. Those reports underscore how attribution‑forward UX can invite confusion about authorship and endorsement even if fine print exists.
- Why disclaimers weren’t enough: The core representation lived in the UI—named “experts” attached to AI output—so any product disclaimer sat downstream of the claim users internalized (The Verge; Engadget).
- The risk signal: Disabling the feature after criticism, and pledging to stop “cloning” experts, reflects recognition that attribution UX can create civil exposure when it blurs who actually authored or endorsed the feedback (The Verge; Engadget).
Style emulation at scale: editorial voice cloning crosses a human line
Style emulation isn’t new in publishing, but editorial voice cloning is different when software makes it look like a named person is speaking. Grammarly’s Expert Review did exactly that: AI feedback appeared under real journalists’ names, according to reporting, before the company disabled the feature and said it would stop using AI to “clone” experts for this purpose (The Verge; Engadget). At scale, the difference between mimicking tone and attributing words hardens into a trust problem: users reasonably infer endorsement when a byline is attached.
That’s the human line. Once a system suggests a particular person authored or approved guidance, it stops being neutral “style help” and starts looking like identity use. Grammarly’s decision to pause the tool, and its commitment to stop cloning experts, reflect acknowledgment that attribution can’t be faked in editorial contexts without confusing readers (The Verge; Engadget).
- Why scale matters: A single mislabeled suggestion is a glitch; thousands are a narrative that erodes author trust and reader confidence (Engadget).
- The editorial boundary: Voice cloning tied to real names functions as implied endorsement, a leap beyond generic style emulation, and it triggered the backlash covered by the tech press and WIRED’s broader reporting on identity and AI (The Verge; Engadget).
Build it right: a consent-first expert architecture
Build it so attribution is earned, not assumed. The lawsuits over AI “expert” impersonation allege that users were shown guidance credited to real journalists without permission, a pattern that invites a consent‑first design response (TechBuzz.ai; WIRED).
- Verified consent registry: Maintain a signed, revocable record for each named person. No name, likeness, or “voice” appears unless the consent flag is true and current, addressing the core allegation of attribution without permission (WIRED). The registry and its routing gate are sketched in code after this list.
- Expert licensing framework: Use clear licenses that set scope, compensation, revocation terms, and editorial boundaries. This aligns incentives and avoids implied endorsement risks raised by the complaints (TechBuzz.ai).
- Attribution‑aware model routing: If consent is absent, route to generic guidance with neutral labeling. If present, enable named attribution with signed proofs and metadata (WIRED).
- UI that can’t mislead: Prominent “AI‑generated, licensed expert attribution” badges for approved uses; “AI‑generated, not expert‑reviewed” for all others. Disclaimers aren’t enough when names sit on screen (WIRED).
- Provenance and audit: Immutable logs for each suggestion, plus user‑visible provenance panels to reinforce consent in AI systems (TechBuzz.ai); a minimal log sketch closes this section.
- Policy guardrails: Ban training or prompts that imitate named individuals without licenses; institute pre‑launch reviews focused on AI ethics in productivity software, emphasizing identity and endorsement risks (WIRED).
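To make the first three components concrete, here is a minimal sketch, assuming a hypothetical in‑memory registry; the names ConsentRecord, ConsentRegistry, route_feedback, and the expert “Jane Doe” are illustrative, not any vendor’s actual API. The badge strings mirror the UI labels proposed above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Signed, revocable consent to use a named expert's identity."""
    expert_id: str
    display_name: str
    license_scope: str                      # e.g. "editorial-feedback"
    signed_at: datetime
    revoked_at: Optional[datetime] = None

    def is_active(self, scope: str) -> bool:
        # Named attribution requires a live, unrevoked license covering this scope.
        return self.revoked_at is None and self.license_scope == scope

class ConsentRegistry:
    """Hypothetical in-memory store; a real system would persist signed records."""
    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, record: ConsentRecord) -> None:
        self._records[record.expert_id] = record

    def revoke(self, expert_id: str) -> None:
        record = self._records.get(expert_id)
        if record is not None:
            record.revoked_at = datetime.now(timezone.utc)

    def lookup(self, expert_id: str) -> Optional[ConsentRecord]:
        return self._records.get(expert_id)

def route_feedback(registry: ConsentRegistry, expert_id: str, scope: str) -> dict:
    """Attribution-aware routing: a name appears only while consent is live."""
    record = registry.lookup(expert_id)
    if record is not None and record.is_active(scope):
        return {"attribution": record.display_name,
                "badge": "AI-generated, licensed expert attribution"}
    # Absent or revoked consent: neutral guidance, honestly labeled.
    return {"attribution": None,
            "badge": "AI-generated, not expert-reviewed"}

# Usage: revoking consent immediately flips the label the UI can show.
registry = ConsentRegistry()
registry.grant(ConsentRecord("exp-1", "Jane Doe", "editorial-feedback",
                             datetime.now(timezone.utc)))
assert route_feedback(registry, "exp-1", "editorial-feedback")["attribution"] == "Jane Doe"
registry.revoke("exp-1")
assert route_feedback(registry, "exp-1", "editorial-feedback")["attribution"] is None
```

The design choice worth noting: the badge is derived from the same consent check that gates attribution, so the on‑screen claim and the licensing state cannot drift apart.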
The throughline is simple: consent, provenance, and honest UI prevent the attribution confusion at the heart of the complaints (TechBuzz.ai; WIRED).
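The provenance piece can be sketched the same way, under the assumption that a simple hash chain is enough to make tampering detectable; ProvenanceLog and its fields are hypothetical, and a production system would likely anchor periodic digests with an external timestamping service.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

class ProvenanceLog:
    """Append-only log: each entry's hash covers the previous entry's hash,
    so any retroactive edit breaks the chain on verification."""

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def append(self, suggestion_id: str, attribution: Optional[str], model: str) -> dict:
        prev_hash = self._entries[-1]["hash"] if self._entries else "genesis"
        entry = {
            "suggestion_id": suggestion_id,
            "attribution": attribution,   # None records an unattributed suggestion
            "model": model,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; returns False if any entry was altered."""
        prev_hash = "genesis"
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if (body["prev_hash"] != prev_hash
                    or hashlib.sha256(payload).hexdigest() != entry["hash"]):
                return False
            prev_hash = entry["hash"]
        return True

# Usage: tampering with a logged attribution is caught by verify().
log = ProvenanceLog()
log.append("s-001", None, "assistant-v1")
log.append("s-002", "Jane Doe", "assistant-v1")
assert log.verify()
log._entries[1]["attribution"] = "Someone Else"   # simulated tampering
assert not log.verify()
```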
Investor memo: rights as a moat in AI productivity
For investors, the litigation around Grammarly’s Expert Review isn’t just downside risk—it’s a map for defensibility. The suits filed by investigative journalist Julia Angwin and proposed class plaintiffs allege that AI feedback was presented under real names without consent, raising right‑of‑publicity and deception concerns (The Verge; WIRED). That signal is clear: attribution without verified permission invites legal and reputational exposure.
In AI productivity—whether email power tools like Superhuman or writing assistants—products that can prove licensed identity use, or avoid it entirely, will meet procurement screens faster in regulated and media‑sensitive sectors. Angwin’s complaint, alongside the class filing, underscores how attaching recognizable names to synthesized output can become a liability wall that only consent‑first players can scale (The Verge; WIRED).
- Moat thesis: Rights management is product strategy. Build verifiable consent, licensing, and attribution controls to reduce the litigation vector highlighted in the Angwin and class actions (The Verge; WIRED).
- Sales unlock: Clear provenance and “no‑impersonation” guarantees shorten compliance reviews where identity misuse is a known concern (WIRED).
- Downside cap: Avoiding attribution that mimics endorsement limits the risk vector that triggered these cases (The Verge; WIRED).
Call it a simple bar: prove who’s speaking, or don’t name them at all. In a market under scrutiny from reporters like Julia Angwin and audiences shaped by watchdog journalism, that bar is a moat (The Verge; WIRED).
📰 Sources
- Grammarly Faces Class Action Over AI ‘Expert’ Impersonation
- Grammarly says it will stop using AI to clone experts … – The Verge
- One of Grammarly’s ‘experts’ is suing the company over its identity …
- Grammarly Is Facing a Class Action Lawsuit Over Its AI … – WIRED
- Grammarly has disabled its tool offering generative-AI … – Engadget