Don't panic: a tech writer's guide to the agentic-verse

Introduction

Douglas Adams’s seminal work The Hitchhiker’s Guide to the Galaxy is, among many other things, a meditation on how to navigate complexity with limited information and imperfect tools. That situation will sound familiar to technical writers: explaining complex systems to audiences with varying levels of expertise, using tooling that is not always designed for managing documentation.

And if you’re a technical writer coming across agentic AI for the first time, you might be feeling a bit like the novel’s protagonist Arthur Dent, suddenly thrust into a universe of possibilities that can be both exciting and overwhelming. I hadn’t read HHGTTG since I was a youngster, but it’s well worth revisiting - not just for its absurd, acerbic humour and fantastic imagination, but also for the way it captures the spirit of a guide: a resource that is useful and approachable. It’s a love letter to the very idea of a guide.

In the novel, the Guide is a fictional handheld electronic encyclopedia - a small, thin device resembling a tablet or e-reader. It’s described as “looking insanely complicated” to operate. And it carries on its cover, in large, friendly letters, the words: Don’t Panic.

That’s also my advice to any technical writer encountering agentic AI for the first time. And I’m not just referring to those of us who are concerned about the impact of AI on our profession.

I know that I’ve certainly been overwhelmed recently by talk of agentic AI and LLMs in general - not only on the Write the Docs Slack but also in my day-to-day working with development teams and the wider world. But little by little I have found ways to use them in my work as a technical writer, not to replace me, but to augment my work. In this post I want to share some of what I’ve learned so far.

It’s not a comprehensive guide to agents. Think of it more like a field research note from someone who is still very much in the middle of all this. Like Arthur Dent’s alien friend Ford Prefect scribbling observations about Earth before the Vogons showed up to spoil everything.

The Guide versus the Encyclopedia Galactica

In the novel, the Guide outsold the rival Encyclopedia Galactica for two reasons: it was slightly cheaper, and it had the words “Don’t Panic” on the cover. It was also riddled with errors - some entries were simply made up, and its inaccuracies could be “glaring and occasionally fatal.”

Sound familiar? AI agents and LLMs in general remind me of the Guide in several ways. They are extraordinarily capable and will confidently navigate complexity that would take a human hours of effort. But they also hallucinate. They often produce output that is inaccurate. Like the Guide, they do not always know what they don’t know (hence the need for context).

The Encyclopedia Galactica, by contrast, was authoritative, comprehensive, and almost entirely useless in a crisis. It’s not that accuracy doesn’t matter - it obviously does - but maybe an approachable tool that gets you most of the way there can often be more valuable than a comprehensive one where you don’t even know where to start. Some food for thought there for fellow documentarians. Adobe Photoshop also comes to mind here for some reason.

So, are agents that approachable tool? Perhaps the trick is learning how to work with their particular brand of confident imprecision.

What even is an agent?

Before we get into the weeds, here is the shortest useful definition I have found, courtesy of Manny Silva: an agent is an AI that can take actions, not just produce text. A chat interface like the one you might have used in 2023 takes your message and returns a response. An agent can go and do things - browse the web, read files, run commands, call APIs, and then come back with results. It can carry out a multi-step task with minimal hand-holding.

The first time I used one, I gave it a documentation task that would normally take me the better part of a morning. It completed a rough version in about five minutes. I was simultaneously very impressed and very suspicious about what it had actually done.

That instinct - to verify the output, to remain the human in the loop - is our job as technical writers. More on that later.

Don’t Panic: the progressive disclosure approach to learning agents

Progressive disclosure is a concept from UX design, popularised by Jakob Nielsen in 1995. The idea is simple: show people only what they need now, and reveal more detail as they need it. It prevents cognitive overload by putting information on a need-to-know basis.

I’ve found the same framework enormously useful for learning things in general, including getting to grips with agents. When I first started on this agentic journey, I was overwhelmed by all the information out there - which models were available, how context windows and tokens worked, what the difference was between a chat, an agent and a copilot.

The approach that actually worked was deliberately shallow: start with one specific task, in one tool, and ignore everything else. For me that first task was asking an agent to help me review and update a draft for consistency. Once I had done that a few times and felt comfortable, I moved on to asking it to do more. And then more again. (Note: make sure you’re using version control when doing this, so you can easily roll back if things go wrong.)

The Guide itself is a good model here. It doesn’t open with a complete theory of the galaxy. It opens with: “Space is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is.” And then it eases you in.

Think of your first few weeks with agents as the equivalent of that opening paragraph. You are getting oriented. You don’t need the full entry yet.

Agents and your documentation: a field report

On top of using agents for everyday tasks, I also started paying attention to how agents interact with documentation. Dachary Carey is definitely someone to follow here. She wrote a fantastic post about agent-friendly docs after spending about ten hours with Claude validating hundreds of coding patterns across documentation sites. Her findings are worth your attention as a technical writer.

One of the biggest takeaways from her findings was this: agents don’t use docs like humans do.

Humans arrive at a docs homepage, look for navigation, maybe use a search bar, and find their way to the right page through a series of wayfinding steps. Agents almost never do this. They retrieve a specific URL directly from their training data and fetch it. No search, no navigation, no browsing. As Dachary put it, the agent “just attempted to fetch a URL without any information or prompting from me.”

This has some significant implications for how we write and structure documentation.

Wayfinding is becoming less important for machines

All that effort we put into carefully designed tables of contents, breadcrumbs, and in-site search functionality? Agents largely bypass it. They arrive at specific pages directly, or they fail to arrive at all. Whether this means those investments are wasted depends on how much of your documentation traffic is human versus machine - and that balance is shifting.

I am not saying wayfinding no longer matters. Human readers still exist and still need navigation. But if you are about to spend the next quarter overhauling your information architecture primarily to improve discoverability, it is worth asking: discoverability for whom?

Moved content disappears from agents’ memories

Here’s another interesting finding from Dachary’s work. When agents try a URL that no longer works, they almost never attempt to find the moved content by going up to a higher-level page. They try a few alternative URLs from memory, do a web search, and if that doesn’t resolve the problem, they move on. Your careful redirect strategy? It helps, but it’s perhaps not the safety net we have always assumed it was.

Same-host redirects (same domain, different path) generally work fine - the HTTP client follows them transparently. But cross-host redirects, JavaScript-based redirects, and soft 404s (a 200 response with a friendly “page not found” message in HTML) can all fail silently or confusingly.
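The soft-404 case is the sneakiest of these, and simple to screen for. Here is a minimal sketch in Python - the phrase list is illustrative, not exhaustive:

```python
# Flag "soft 404s": responses that return HTTP 200 but whose body is
# really a "page not found" message. The phrases are illustrative only.
SOFT_404_PHRASES = ("page not found", "page doesn't exist", "404")

def looks_like_soft_404(status: int, body: str) -> bool:
    """True if a response claims success but reads like an error page."""
    if status != 200:
        return False  # real 404s and redirects follow normal HTTP semantics
    text = body.lower()
    return any(phrase in text for phrase in SOFT_404_PHRASES)
```

A quick crawl of your old URLs, calling this on each response, will tell you whether any of your “redirects” are quietly serving error pages with a success status.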

The practical upshot: if you are considering moving content between domains, know that any agent trained before the move may never find the new location. The outdated URL can linger in model training data for a long time.

llms.txt is worth knowing about

A relatively new convention for documentation sites is providing an llms.txt file - essentially a structured list of links with descriptive titles to help agents discover relevant content. It is still a proposal rather than a standard, and many sites do not have one yet.

What is interesting is that agents do not look for llms.txt by default. But when Dachary manually pointed her agent to one, it immediately incorporated it into its discovery strategy. The agent described it as “gold” and added it to the first step of its own self-generated source discovery workflow.
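For reference, the proposed format is plain Markdown: an H1 title, a blockquote summary, then sections of annotated links. A minimal example - the product name and URLs below are invented for illustration:

```markdown
# Example Product Docs

> Documentation for Example Product, a hypothetical service used here for illustration.

## Guides

- [Quickstart](https://docs.example.com/quickstart.md): Install and make a first request
- [Release notes](https://docs.example.com/release-notes/): Changes listed by version

## Optional

- [Archive](https://docs.example.com/archive/): Release notes for end-of-life versions
```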

The “mostly harmless” problem

When Ford Prefect spent fifteen years as a field researcher on Earth, he wrote a lengthy and detailed entry about it. The editors at Megadodo Publications cut it down to two words: mostly harmless.

There is a lesson in this for anyone using agents in documentation work. Agents are excellent at covering a lot of ground quickly. They are less reliable at the careful, accurate, nuanced parts - the fifteen years of field research. They will give you “mostly harmless” when you needed accurate and detailed information.

This is not a reason to avoid agents. It is a reason to use them with clear eyes about where they add value and where they do not. For first drafts, for summarising, for exploring, for finding patterns across large amounts of text - they are brilliant. For the precise final 10% of a technical document, the accuracy check, the verification that a procedure actually works as described - that is still human work, or at least human-supervised work.

Manny Silva, head of documentation at Skyflow, talked about this alongside Fabrizio Ferri Benedetti on Tom Johnson’s I’d Rather Be Writing podcast. His approach is to treat documentation as something that needs to be tested, not just written. As the author of Docs as Tests, he runs automated test suites against his docs on every pull request, verifying that procedures actually work the way they claim to - that the UI element you told readers to click is actually there, that the API call returns what you said it would. He distinguishes between deterministic testing (you get the same answer every time, and a failure means something real has changed) and probabilistic testing with AI tools (useful for bootstrapping, but too unreliable to trust on its own).

His framing stuck with me: an agent will confidently tell you the procedure works. A test will tell you whether it actually does.
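The deterministic flavour is easy to sketch. In this hypothetical example, the docs promise that a status endpoint returns JSON with a `status` of "ok" and a `version` field, and the test asserts exactly those claims. The field names are invented, and the payload is canned to keep the sketch offline; real Docs as Tests tooling does far more than this:

```python
# A hypothetical deterministic docs test: the docs make concrete claims
# about a response, and the test asserts them. A failure means the product
# changed and the page is now wrong. Field names are invented.
def check_documented_response(payload: dict) -> None:
    assert payload.get("status") == "ok", "docs claim status is 'ok'"
    assert "version" in payload, "docs claim a version field is returned"

# In a real suite this payload would come from an HTTP call made on every
# pull request; here it is canned so the sketch runs offline.
check_documented_response({"status": "ok", "version": "2.1.0"})
```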

I ran into a version of this myself recently. I needed to restructure the release notes section of a Hugo documentation site - moving a flat directory of files into year-based subfolders, updating all the internal references, and generating redirects from the old URLs to the new ones. It was the kind of task I had been putting off precisely because it was large, fiddly, and mostly mechanical. The sort of thing that takes an afternoon and feels like a waste of one.

I gave it to GitHub Copilot. It handled the restructuring confidently and well - moving files, updating references, generating redirect entries in the right format. It covered a lot of ground quickly, and correctly. Then it started to drift. The images that lived alongside the release notes ended up in the wrong folders. And the redirects file, which had grown large over the years, started to go wrong in ways that were hard to spot at first glance. It would generate a few entries correctly, then lose the thread. I suspect it was running up against context limits - when a file is large enough that the agent can no longer hold the whole thing in its working memory, the quality drops, sometimes silently.

The final result was still a significant time saving. But the last part of the job - checking every image path, auditing the redirects - was mine. The agent had done the field research. It just couldn’t quite manage the editorial pass.
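For anyone facing a similar restructure: in Hugo, same-host redirects can come from the `aliases` field in a page’s front matter, which generates a redirect stub at each old URL. The title and paths below are invented for illustration:

```yaml
# Front matter for a moved release-notes page. Hugo's `aliases` field
# generates a redirect stub at each listed old URL. Paths are invented.
title: "Release notes: 2024"
aliases:
  - /release-notes/old-flat-page/
```

One caveat worth knowing: Hugo implements aliases as small HTML stub pages with a meta refresh rather than HTTP 301s, so a plain HTTP fetch by an agent may not follow them the way a browser does - which connects back to Dachary’s redirect findings.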

And I’m seeing a similar pattern with newly-written content too. Agents can generate a first draft that is mostly harmless, but the final 10% of accuracy and nuance is still on us.

Knowing where your towel is

In the Hitchhiker’s Guide, a seasoned interstellar traveller is identified by whether they know where their towel is. The towel is, of course, not really about the towel. It is about adaptability. A person who can keep track of their towel in the chaos of the galaxy is clearly someone who can handle whatever else comes along.

For technical writers adapting to agents, the equivalent is knowing your fundamentals: clear writing, structured information, single-source-of-truth practices, version control. These do not become less important when agents are involved. If anything, they become more important, because agents work better with well-structured input.

Agents that read your documentation will do better with clear headings, consistent terminology, and logical information hierarchy. Agents you use to help write documentation will give you better outputs when you have clear style guides and content models to give them as context. The craft of technical writing is not made redundant by agents - it is the foundation that makes agents useful.

Writing field guide entries (skills) for your agent

Think back to how the Hitchhiker’s Guide works in practice. Ford Prefect writes entries that instruct readers on how to navigate specific situations - what to do when you encounter a Vogon, how to hitch a lift across the galaxy, what Ravenous Bugblatter Beasts make of visiting tourists. The entries are useful precisely because they tell you how to do something in a specific context that a generalist reference work might never anticipate.

This is essentially what a skills file does for an agent.

Manny Silva describes it cleanly: “A skill is a Markdown file that instructs an LLM on how to perform a task. That’s it. That’s the short version. It can contain more material - you can provide references, you can provide scripts that an agent can run when using that skill - but it’s really there to instruct an agent on either your preference for how to go about doing a specific thing, or to instruct it on something it was never trained to do, something novel.”

A skills file might tell your agent: use this voice and tone guide when reviewing content; follow this template structure when drafting a new API reference page; check for these specific terms in our glossary before suggesting alternatives. Without a skills file, an agent will make reasonable guesses about all of these things. With one, it follows your conventions.
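To make that concrete, here is the shape a skills file might take. Everything in it - the task, the rules, the file paths - is invented for illustration; your own conventions go here:

```markdown
# Skill: review a draft for house style

Use this skill when asked to review documentation for style and consistency.

## Rules

- Use sentence case for headings.
- Write "sign in" (verb), never "log in" or "login".
- Address the reader as "you"; avoid "the user".
- Keep procedures to one action per numbered step.

## References

- Voice and tone guide: `style/voice.md` in this repository.
```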

This is where technical writers have a genuine advantage. Writing clear, precise instructions for an audience who will follow them literally is exactly what we do. A skills file is a type of documentation. The audience happens to be an AI rather than a human, but the craft is the same: define the task, specify the constraints, anticipate the edge cases.

There is an important caveat here, and it comes from research. A 2026 paper from ETH Zurich - Evaluating AGENTS.md: Are Repository-Level Context Files Helpful for Coding Agents? - tested the effect of context files (the equivalent of skills files for coding agents) across multiple agents and models. The finding was counterintuitive: LLM-generated context files actually hurt agent performance, and even carefully written human context files only marginally helped. The culprit was verbosity. Comprehensive context files caused agents to explore more thoroughly, reason more extensively, and test more broadly - all of which sounds positive, but added friction, increased costs by over 20%, and didn’t reliably translate into better outcomes. The paper’s conclusion: context files should describe only minimal requirements.

This maps neatly back to the Guide. The most useful entries in the Hitchhiker’s Guide are not the exhaustive ones - they are the sharp, specific ones. “Here’s what to do if you want to get a lift from a Vogon: forget it.” A skills file that tells an agent everything you know about technical writing will give it more to work with and less to act on. A skills file that tells it exactly what matters for this specific task is the one that actually helps.

Once I understood this, I stopped thinking of agents as black boxes I was hoping would produce something usable, and started thinking of them as junior writers I was onboarding. The skills file is the onboarding document - and like any good onboarding document, it should be short enough that someone actually reads it.

Some practical starting points

If you want to start exploring agents in your own technical writing work, here are the places I would suggest starting - in roughly the order I would try them:

  1. Use an agent for first-draft research. Give it a topic you need to document and ask it to summarise what it finds. Treat the output as a starting point, not a final product. This is low-risk and immediately useful.

  2. Ask an agent to review your drafts. Give it your style guide (or a simplified version of it) as context, then ask it to check a draft for consistency, tone, and structure. Again, treat suggestions as input, not instruction.

  3. Use an agent to generate test scenarios. If you document an API or a procedure, ask an agent to generate edge cases and error scenarios you may not have thought of. Even if the agent is wrong about some things, the process will often surface gaps.

  4. Write a skills file. Take one documentation task you do regularly - reviewing for tone, structuring a how-to guide, writing an API endpoint description - and write a Markdown file that explains how to do it your way. Give it to your agent as context when working on that task. Refine it over time. This is probably the highest-leverage thing a technical writer can do with agents, and it is pure writing work. See the Elastic docs skills repository for an example (thanks Fabrizio for open sourcing this!).

  5. Try writing an llms.txt for your docs. It will make you think about your content architecture in new ways.

  6. Pay attention to how agents interact with your documentation. If your organisation uses agents internally, talk to the teams using them. Find out what they are fetching from your docs, where they are failing, what content they are missing. This is field research, and it is valuable.

Don’t be the Heart of Gold crew

Marvin the Paranoid Android is the ship’s robot aboard the starship Heart of Gold. He has a brain the size of a planet. When kidnapped by the Krikkit robots and plugged into their war computer, Marvin simultaneously plans an entire military strategy, solves “all of the major mathematical, physical, chemical, biological, sociological, philosophical, etymological, meteorological and psychological problems of the Universe… three times over”, and composes several lullabies. All at the same time.

The Heart of Gold crew used him mostly to open doors.

There is a temptation, when you first start using agents, to deploy them only on the most trivial tasks - reformat this file, summarise that meeting, check these headings for capitalisation. These are useful things. But they might just be the equivalent of asking Marvin to open the door.

The more interesting question is the one the crew never really asked: what could this thing do if we gave it something worth doing?

For technical writers, that might be drafting an entire first pass at a new section of documentation. Generating a full set of API reference stubs from a spec file. Reviewing an entire docs site for terminology consistency. Building a skills file that encodes years of accumulated house style. Tasks that used to take days, handed off to an agent with good instructions, so that your time is freed up for the work that actually requires human judgment - the editorial pass, the stakeholder conversation, the decision about what the documentation should be and do.

I’m not saying that agents are underappreciated geniuses. But the quality of what you get from them is largely determined by the quality of what you ask of them. Give them doors to open, and they will open doors. Give them something harder, and you might be surprised.

The answer is not 42

Deep Thought is the computer that spent seven and a half million years computing the Answer to the Ultimate Question of Life, the Universe, and Everything. (And before you ask, Google’s DeepMind is seemingly not named after it). The answer was 42. The problem, as the story goes, was that nobody actually knew what the question was.

I think a version of this is happening right now with agents and documentation. There is a lot of discussion about the answers - the tools to use, the workflows to adopt, the skills to develop - without always enough clarity about the questions. What are we actually trying to achieve? What does good documentation look like in a world where a significant portion of its readers are not human? What does the role of a technical writer look like in five years?

I do not have complete answers to any of these. But I have found that asking the right questions, staying curious, and not panicking has served me better than waiting for certainty that isn’t coming.

The Guide was imperfect, frequently wrong, and occasionally dangerous. It also helped its readers navigate an extraordinarily complicated and changing universe. That feels about right.

DON’T PANIC.