Rosa Del Mar

Daily Brief

Issue 60 2026-03-01

Perceived Authorship And Trust Management

Sources: 1 • Confidence: Medium • Updated: 2026-04-12 10:14

Key takeaways

  • The author reports that many readers assume his blog writing is partially or fully created by LLMs because he often writes about LLMs.
  • The author uses an LLM to update code documentation or draft a project README, then edits the output to remove opinions and invented rationale.
  • The author uses LLMs to proofread blog text before publishing and says he has shared his current proofreading prompt.
  • The author does not allow LLMs to speak on his behalf in a first-person or opinionated voice.

Sections

Perceived Authorship And Trust Management

  • The author reports that many readers assume his blog writing is partially or fully created by LLMs because he often writes about LLMs.
  • The author does not allow LLMs to speak on his behalf in a first-person or opinionated voice.

Bounded LLM Use With Human Redaction

  • The author uses an LLM to update code documentation or draft a project README, then edits the output to remove opinions and invented rationale.
  • The author uses LLMs to proofread blog text before publishing and says he has shared his current proofreading prompt.

Unknowns

  • What exact disclosure practices does the author use (e.g., per-post notes, a standing policy page), and how consistently are they applied?
  • What is the author’s proofreading prompt, and what changes (if any) has it undergone over time?
  • How often are LLMs used for documentation/README drafting, and what portion of final published documentation typically originates from an LLM draft?
  • Does the author have a defined accuracy verification process for AI-assisted documentation beyond removing opinions/invented rationale (e.g., technical fact-checking checklist)?
  • Did the author observe any measurable change in reader confusion or trust after communicating this AI-writing policy?

Investor overlay

Read-throughs

  • Rising demand for tooling and workflows that help creators disclose AI assistance, preserve authentic voice, and manage reader trust.
  • Growing need for AI-assisted documentation and README-drafting solutions paired with human review and redaction controls that prevent invented rationale or opinions.
  • Increased emphasis on governance features in LLM products that restrict first-person or opinionated generation and support auditable human editing steps.

What would confirm

  • The author publishes a consistent disclosure policy across posts and reports reduced reader confusion about AI authorship.
  • The author shares a stable proofreading prompt and describes a repeatable process for removing opinions and invented rationale from LLM drafts.
  • The author adds or references a formal accuracy-verification step for AI-assisted documentation beyond stylistic redaction and proofreading.

What would kill

  • No clear or consistent disclosure practice is adopted, and perceived AI authorship confusion persists or worsens.
  • LLM use expands into first-person or opinionated writing, contradicting the stated boundary and undermining the trust framing.
  • Repeated errors appear in AI-assisted documentation, suggesting the redaction approach does not prevent misinformation.

Sources

  1. 2026-03-01 simonwillison.net