Rosa Del Mar

Daily Brief

Issue 79 • 2026-03-20

Prompting And Context Management Should Be Treated As Reusable Engineering Artifacts

9 min read
General
Sources: 1 • Confidence: Medium • Updated: 2026-04-11 19:24

Key takeaways

  • Eric Chou states that effective prompt engineering is iterative and that once a good prompt is found it should be persisted (saved as reusable boilerplate or checked into Git), because additional verbosity eventually yields diminishing returns.
  • Eric Chou states that many loud AI critics dismiss LLMs as unusable due to hallucinations after trying them only once and not iterating.
  • The AI Networking Cookbook is organized to start with foundational AI setup (accounts, parameters, local models) and progress to advanced topics such as MCP, building copilots, and evaluating model cost/performance tradeoffs.
  • Eric Chou states that teams can codify AI guardrails via persistent files (including tool-specific files such as hidden rule files or CLAUDE.md, or a cross-provider agents.md), but cross-provider compliance with agents.md is not guaranteed.
  • OpenAI's API is usage-based (billed per token) and is distinct from the ChatGPT product, which is billed as a flat monthly subscription.

Sections

Prompting And Context Management Should Be Treated As Reusable Engineering Artifacts

  • Eric Chou states that effective prompt engineering is iterative and that once a good prompt is found it should be persisted (saved as reusable boilerplate or checked into Git), because additional verbosity eventually yields diminishing returns.
  • A speaker states that teams can codify learned prompting context into a reusable file checked into Git so everyone gets the same boilerplate without retyping it each session.
  • Eric Chou states that internal standards such as naming conventions can be injected into LLM context via repository files checked into Git or via boilerplate instructions pasted at chat start.
  • Eric Chou states that prompt engineering matters because LLMs do not reliably infer unstated preferences, and explicit technical context, examples, and desired output constraints increase the likelihood of a useful response.
  • Eric Chou states that system messages provide persistent context across chats so users do not need to re-paste repeated instructions such as preferred technologies, roles, and response format.
  • Ethan Banks states that specifying an output format such as JSON or YAML in the prompt often causes an LLM to return responses in that format even without fully defining keys upfront.
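The pattern running through the bullets above — persist a tuned prompt as a versioned artifact, then reuse it as the system message in every session — can be sketched as below. The file name `system_prompt.md`, its contents, and the helper name are illustrative assumptions; the message shape follows the common chat-completions convention of `role`/`content` dicts, and no API call is made.

```python
import tempfile
from pathlib import Path

def build_messages(prompt_file: Path, user_request: str) -> list[dict]:
    """Assemble a chat payload from a version-controlled system prompt.

    The prompt file lives in the repo (checked into Git) and carries the
    team's standing context -- preferred technologies, naming conventions,
    desired output format -- so nobody retypes it each session.
    """
    system_prompt = prompt_file.read_text(encoding="utf-8").strip()
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_request},
    ]

# Demo with a throwaway file standing in for a repo-local system_prompt.md.
with tempfile.TemporaryDirectory() as tmp:
    prompt = Path(tmp) / "system_prompt.md"
    prompt.write_text(
        "You are a network automation assistant.\n"
        "Prefer Python and Netmiko. Respond in JSON.",
        encoding="utf-8",
    )
    messages = build_messages(prompt, "List interfaces on core-sw-01.")
```

Because the output format ("Respond in JSON") lives in the versioned file, Ethan Banks's observation applies automatically: every request inherits the format constraint without redefining keys up front.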

Evaluation And Trust: Risk-Based Human Oversight And Prompt-Sensitive Benchmarking

  • Eric Chou states that many loud AI critics dismiss LLMs as unusable due to hallucinations after trying them only once and not iterating.
  • Eric Chou states that AI evaluation can be done by saving model traces/outputs and using another model as a judge, and that results can invert expectations if prompts are better tuned for a weaker model.
  • Eric Chou states that getting competent LLM results typically depends on choosing an appropriate model (not the cheapest default) and iterating prompts until the request is correctly specified.
  • Eric Chou states that trust in AI output should be based on risk and reversibility, that humans should remain in the loop, and that accountability cannot be offloaded to the model.
  • Eric Chou states that comparing outputs across multiple models is most worthwhile for high-stakes decisions or systematic evaluation, and is usually unnecessary for routine tasks once a model is good enough.
  • Eric Chou recommends that for security-sensitive use, users should prefer paid LLM plans when possible, opt out of training or data retention where settings allow, and minimize sharing of sensitive information.
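One way to read the evaluation bullet concretely: save each model interaction as a trace record, then hand pairs of saved outputs to a second model acting as judge. The sketch below only builds the artifacts (JSONL traces and a judge prompt); the judge-model call itself is omitted, and all field names and wording are illustrative assumptions rather than Eric Chou's exact method.

```python
import json
import tempfile
from pathlib import Path

def save_trace(path: Path, model: str, prompt: str, output: str) -> None:
    """Append one model interaction to a JSONL file for later evaluation."""
    record = {"model": model, "prompt": prompt, "output": output}
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def judge_prompt(task: str, answer_a: str, answer_b: str) -> str:
    """Build the prompt handed to a second model acting as judge.

    Caveat from the discussion: if prompts were tuned for one model,
    the nominally weaker model can still win the comparison.
    """
    return (
        f"Task: {task}\n\n"
        f"Answer A:\n{answer_a}\n\n"
        f"Answer B:\n{answer_b}\n\n"
        "Which answer better completes the task? Reply 'A' or 'B' and explain."
    )

# Demo: record two traces, then build the judge prompt over them.
with tempfile.TemporaryDirectory() as tmp:
    traces = Path(tmp) / "traces.jsonl"
    save_trace(traces, "model-a", "Summarize VLANs.", "VLANs segment L2 domains.")
    save_trace(traces, "model-b", "Summarize VLANs.", "A VLAN is a virtual LAN.")
    saved = [json.loads(line) for line in traces.read_text().splitlines()]
    verdict_request = judge_prompt("Summarize VLANs.",
                                   saved[0]["output"], saved[1]["output"])
```

Keeping the traces on disk is what makes the comparison repeatable: the same saved outputs can be re-judged after a prompt revision to see whether the ranking inverts.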

Practical Adoption Path And Pedagogy For NetOps AI

  • The AI Networking Cookbook is organized to start with foundational AI setup (accounts, parameters, local models) and progress to advanced topics such as MCP, building copilots, and evaluating model cost/performance tradeoffs.
  • For initial learning, the book uses OpenAI as the primary provider to avoid overwhelming beginners and to teach core concepts such as temperature and system messages before expanding to other options.
  • All code examples from the book are available in a public GitHub repository and include explanatory comments so the code can be followed without the book.
  • Eric Chou migrated community content to networkautomation.com, which offers free signup with update notifications and a Discord community for real-time Q&A.
  • The intended audience is a network engineer with basic networking knowledge (around Network+ or CCNA level) and basic comfort reading Python code.
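The "core concepts" the early chapters teach — temperature and system messages — appear together in a single request body. This sketch only assembles the payload dict in the common chat-completions shape; the model name is a placeholder, the [0, 2] temperature range is the typical convention rather than a universal rule, and no API call is made.

```python
def chat_request(model: str, system: str, user: str,
                 temperature: float = 0.2) -> dict:
    """Build a chat-completions-style request body.

    Lower temperature -> more deterministic output (useful for config
    generation); higher temperature -> more varied output.
    """
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature is typically constrained to [0, 2]")
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

req = chat_request("gpt-placeholder", "You are a NetOps assistant.",
                   "Explain OSPF areas briefly.", temperature=0.0)
```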

Guardrails And Governance Via Repo Files, With Interoperability Risk

  • Eric Chou states that teams can codify AI guardrails via persistent files (including tool-specific files such as hidden rule files or CLAUDE.md, or a cross-provider agents.md), but cross-provider compliance with agents.md is not guaranteed.
  • Eric Chou describes a two-level guardrail approach where global and project-local guardrails can be combined and more-specific local instructions override broader defaults.
  • Eric Chou states that cross-provider guardrail files such as agents.md are informal and not mandated by any standard.
  • A speaker states that the guardrail files discussed are typically plain-English Markdown rather than requiring a strict structured format.
  • Eric Chou describes a workflow pattern of starting in planning/ask mode with a coding assistant, then having it summarize agreements into a file checked into a centralized repository for team reuse and change history.
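The two-level guardrail pattern described above — global defaults combined with project-local files, with the more specific local instructions winning — can be modeled as a section-wise merge over plain-Markdown files. The `## `-heading convention and the merge rule are assumptions for illustration; as noted in the bullets, these files are typically free-form Markdown with no mandated structure.

```python
def parse_sections(markdown: str) -> dict[str, str]:
    """Split a guardrail file into {heading: body} by '## ' headings."""
    sections: dict[str, str] = {}
    current = None
    for line in markdown.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return sections

def merge_guardrails(global_md: str, local_md: str) -> dict[str, str]:
    """Combine guardrails: project-local sections override global ones."""
    merged = parse_sections(global_md)
    merged.update(parse_sections(local_md))  # local wins on conflicts
    return merged

global_rules = "## Style\nUse type hints.\n## Testing\nWrite pytest tests.\n"
local_rules = "## Style\nFollow this repo's legacy naming.\n"
rules = merge_guardrails(global_rules, local_rules)
# 'Style' now comes from the local file; 'Testing' survives from the global one.
```

Checking both files into Git gives the change history the workflow bullet calls for: a plan-mode summary committed to the repo becomes the next revision of the local file.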

Cost And Procurement Friction Is Now A First-Order Constraint

  • OpenAI's API is usage-based (billed per token) and is distinct from the ChatGPT product, which is billed as a flat monthly subscription.
  • Eric Chou estimates that about $20 of OpenAI API credit was sufficient for his book examples and was not exhausted during the process.
  • Eric Chou asserts that many AI providers now require credit card verification or paid trials rather than broad free tiers, increasing the upfront requirements to get started.
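The subscription-versus-token-billing distinction is easy to make concrete with a back-of-envelope estimate. The per-million-token prices below are placeholders, not current OpenAI rates; substitute the provider's published pricing. The arithmetic shows why a modest prepaid credit (the ~$20 figure above) can cover a book's worth of examples.

```python
def api_cost_usd(input_tokens: int, output_tokens: int,
                 usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Estimate usage-based API cost from token counts.

    Unlike a flat monthly subscription, API billing scales with tokens,
    so light experimental use can stay well under a small credit.
    """
    return (input_tokens / 1_000_000) * usd_per_m_input \
         + (output_tokens / 1_000_000) * usd_per_m_output

# Hypothetical rates: $2.50 per million input tokens, $10 per million output.
cost = api_cost_usd(400_000, 150_000, 2.50, 10.00)
# 400k input + 150k output comes to $2.50 at these assumed rates.
```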

Watchlist

  • Eric Chou expects to supplement the book with posts covering alternative local LLM options beyond Ollama, including LM Studio.
  • Eric Chou reports hearing that Mac minis are in high demand for local LLM use and that this popularity has contributed to shortages.
  • Whether AI assistants converge on reliable cross-provider compliance with guardrail files such as agents.md (alongside tool-specific files like CLAUDE.md), which Eric Chou notes is not guaranteed today.

Unknowns

  • How consistently do different AI assistants actually read and comply with repository guardrail files (including agents.md) in real workflows?
  • What is the measurable impact (time-to-completion and defect/rollback rate) of plan-first workflows and versioned prompt/guardrail artifacts versus ad hoc prompting?
  • How large is the procurement/onboarding barrier from reduced free tiers across providers, and how does it vary by organization type (e.g., regulated vs non-regulated)?
  • What are the task-specific cost and quality tradeoffs across model tiers and providers under a controlled prompt-iteration protocol?
  • For local inference, what model sizes and performance are achievable on common enterprise-approved hardware configurations, and what operational constraints (latency, throughput) result?

Investor overlay

Read-throughs

  • Rising enterprise focus on versioned prompts, guardrails, and plan-first workflows could increase demand for developer tooling that manages prompt artifacts, evaluations, and governance alongside code.
  • Interoperability risk from inconsistent assistant compliance with repo guardrail files could create demand for standardization layers or verification tooling that tests and enforces instruction adherence across assistants.
  • Cost and procurement becoming first-order constraints, plus separation of subscription chat from token-billed APIs, could increase demand for usage governance, budgeting, and cost performance evaluation tooling for LLM usage.

What would confirm

  • Teams increasingly check prompts, context boilerplate, and guardrail markdown files into Git and formalize plan-first workflows as standard engineering practice.
  • Organizations run controlled, prompt-sensitive evaluation suites and track time-to-completion, defects, and rollback rates to justify process changes and model choices.
  • More tooling and workflows emerge to manage token-billed API usage, including budget controls, cost-performance comparisons across model tiers, and procurement onboarding playbooks.

What would kill

  • Repository guardrail files such as agents.md become consistently honored across major assistants without additional tooling, reducing the need for verification or standardization layers.
  • Empirical tests show plan-first workflows and versioned prompt artifacts do not materially improve delivery speed or quality versus ad hoc prompting.
  • Procurement friction declines and free tiers expand broadly, making payment onboarding and cost governance less central to early adoption and experimentation.

Sources