Perceived Authorship And Trust Management
Sources: 1 • Confidence: Medium • Updated: 2026-04-12 10:14
Key takeaways
- The author reports that many readers assume his blog writing is partially or fully created by LLMs because he often writes about LLMs.
- The author uses an LLM to update code documentation or draft a project README, then edits the output to strip opinions and invented rationale.
- The author uses LLMs to proofread blog posts before publishing, and has shared his current proofreading prompt.
- The author does not allow LLMs to speak on his behalf in a first-person or opinionated voice.
Sections
Perceived Authorship And Trust Management
- The author reports that many readers assume his blog writing is partially or fully created by LLMs because he often writes about LLMs.
- The author does not allow LLMs to speak on his behalf in a first-person or opinionated voice.
Bounded LLM Use With Human Redaction
- The author uses an LLM to update code documentation or draft a project README, then edits the output to strip opinions and invented rationale.
- The author uses LLMs to proofread blog posts before publishing, and has shared his current proofreading prompt.
Unknowns
- What exact disclosure practices does the author use (e.g., per-post notes, a standing policy page), and how consistently are they applied?
- What does the author’s proofreading prompt contain, and how (if at all) has it changed over time?
- How often are LLMs used for documentation/README drafting, and what portion of final published documentation typically originates from an LLM draft?
- Does the author have a defined accuracy-verification process for AI-assisted documentation beyond removing opinions and invented rationale (e.g., a technical fact-checking checklist)?
- Did the author observe any measurable change in reader confusion or trust after communicating this AI-writing policy?