The book you just read was written in a single day. One person, working with an AI assistant, starting with a closet full of old papers and ending with a published book, complete with an illustrated cover, source links, and a "Read the Book" button on the archive website.
I want to explain how, because the world is changing fast, and I think you should know.
The Queen Anne Fortnightly Club had been meeting every two weeks since 1894. Over 130 years, their members wrote more than 450 papers on everything from the founding of Children's Hospital to Marilyn Monroe to the price of real estate on Queen Anne Hill. These papers existed as stacks of old documents, handwritten pages, typewritten manuscripts, and faded photocopies.
Over the previous few months, I had used AI-powered tools to scan all of these documents and convert them into readable text. That cost about $15 total. The result was a digital archive of over a million words of women's writing spanning three centuries.
But nobody was going to read a million words of raw text. That's like asking someone to read an entire encyclopedia to find the good parts. I knew the good parts were in there. I just needed to find them and stitch them together into something people would actually want to read.
Morning. I told the AI: go through this entire archive and find the best stories. It read the catalog, the themes, the member profiles, and about 30 of the most promising papers in full. Within a couple of hours, it had identified the moments that would make people lean forward: the fish-net ceiling, the baby diary, the king who fined himself a cow, the 90-year-old in a rocking chair, the letter that makes you cry.
Afternoon. A complete first draft of the book was written. I read it and said: too polite, not enough surprises, needs more "oh my God, are you kidding me" moments. The AI went back into the archive, found a whole second layer of material I hadn't seen, and rewrote the entire book from scratch with a completely different structure.
Evening. I reviewed the rewrite and we did an enrichment pass: adding historical context (what was Seattle like in 1894? what did the Depression feel like?), sensory details (the smell of ground coffee and smoked meats in the grocery store), and source links so readers could click through to the original papers. An AI-generated book cover was created. The book was deployed to the web.
Night. I wrote the foreword in my own voice. We added the prologue, the epilogue with my five favorite papers, the "About the Author" section, the invitation to ArtLove Salon, and this page you're reading now. The book went through three rounds of revision based on my feedback about tone, voice, and what felt right.
I'm not telling you this to brag. I'm telling you because the world is about to change very fast.
One person, working alone, produced a complete illustrated book from raw historical documents in less than 24 hours. The AI read hundreds of papers, identified the best stories, wrote 25,000 words of narrative, formatted everything as a beautiful web page, generated a book cover, and deployed it to a live website. I directed every decision, wrote the foreword, chose the tone, and decided what stories mattered. But the heavy lifting? That was the machine.
This is going to transform how we work. A lot of jobs that used to take teams of people will soon be done by one person with the right tools. That's exciting and also scary. It means the bonds between people, the face-to-face connections, the kind of thing the Fortnightly women built in their parlors, matter more than ever. Because when machines can do the work, what's left is the human part. The showing up. The listening. The caring.
I should mention: my brain works a certain way. I'm a polymath, an engineer, a painter, a builder. I think fast and I've been coding since I was twelve. I have a Ph.D. from Stanford and thirty years of experience building websites. So when I say "one person did this in 24 hours," I mean one person with a very specific set of skills and a very fast brain, plus a very powerful AI. Your mileage may vary. But the tools are getting better every month, and the gap between what experts can do and what anyone can do is closing fast.
The women of the Fortnightly understood something important: if you don't write it down, it's gone. They wrote everything down. They kept their yearbooks in pink and green. They preserved their minutes for 130 years. This book is just the latest attempt, using the latest tools, to make sure their voices are heard.
Now might be a good time to get to know your neighbors.
I'll skip the pleasantries. Here's what happened.
I had a digital archive of 458 extracted text files from the Queen Anne Fortnightly Club (1894-2025), totaling 1,075,701 words. The OCR had been done over the previous few months using Claude Sonnet via the Anthropic API, at a total cost of about $15. The files were cataloged in a master _catalog.json with metadata (author, date, topics, one-liners, summaries). Member profiles, themes, photo metadata, and gallery data lived in separate JSON files. The whole archive was served as a static site built by a Python script that injected all the JSON into an HTML template.
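The build step can be sketched in a few lines. This is a hypothetical reconstruction, not the project's actual script: the placeholder comment (`<!--DATA-->`) and the `window.*` variable names are assumptions.

```python
import json

# Sketch of the static-site build: serialize each JSON dataset into a
# <script> block and splice it into the HTML template at a placeholder.
# The "<!--DATA-->" placeholder and dataset names are illustrative.
def inject_json(template: str, datasets: dict) -> str:
    blobs = [f"window.{name} = {json.dumps(data)};" for name, data in datasets.items()]
    script = "<script>\n" + "\n".join(blobs) + "\n</script>"
    return template.replace("<!--DATA-->", script, 1)

page = inject_json(
    "<html><head><!--DATA--></head><body></body></html>",
    {"catalog": [{"title": "Grocery Stores", "year": 1994}]},
)
```

In the real archive, _catalog.json alone is 201K characters, so the page carries all of its data inline with no database behind it.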
The archive site was already live at conru.com/queenanne. Nobody was going to read 458 raw papers. I wanted a book.
Everything was done inside Cursor IDE using Claude (claude-4.6-opus-high-thinking) in Agent mode. No other tools except SSH for deployment to a VPS (Debian, nginx) at 107.161.22.39:9221.
The AI had access to: Shell, file read/write/edit, Grep, Glob, semantic search, web fetch, image generation, and the ability to spawn background subagents for parallel work. No database. No framework. Just HTML, CSS, and raw text files.
The AI read _catalog.json (201K chars, too large for one read), _themes.json, _member_profiles.json, _summaries.json, and _gallery.json. It then spawned 4 parallel subagents to read approximately 30 full papers from _extracted_text/, selecting them based on the summaries and one-liners that indicated narrative richness. Each subagent returned the full text of 4-6 papers.
An 11-chapter book was written in a single HTML file (book.html) with embedded CSS for print-ready styling (6x9 inch pages, Garamond fonts, drop caps, scene breaks). Deployed via SCP.
After reading the first draft, I said it was too reverent. The AI spawned 3 more parallel research subagents to find juicier material: Mrs. Fry's 1898 poem, the 13 Years History by 6 charter members, Adelaide Pollock's biography, the grocery stores paper with Don Nelsen's secret projection room. Two background subagents then wrote chapters 1-5 and 6-10 in parallel, each receiving detailed prompts with all source material. The chapters were assembled into a new HTML file by a third subagent that read both the chapter text files and the CSS template from the first draft.
A subagent added: (a) "From the Archive" source link cards at the end of each chapter (23 links total), and (b) 1-2 paragraphs of historical context per chapter (Depression breadlines, the 1918 flu, etc.). One real quote from the archive, about the grocery store smell, was added verbatim because it was too specific and too good to paraphrase. The file was modified in place using StrReplace operations.
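A StrReplace-style edit amounts to a guarded, one-shot find-and-replace against the file on disk. A minimal sketch, with a safety check that the anchor text is unique; the anchor strings are placeholders, not the real chapter text:

```python
from pathlib import Path

# Sketch of an in-place StrReplace-style edit: apply each (old, new) pair
# exactly once, failing loudly if the anchor text is missing or ambiguous.
def str_replace(path: Path, old: str, new: str) -> None:
    text = path.read_text(encoding="utf-8")
    if text.count(old) != 1:
        raise ValueError(f"anchor not unique in {path}: {old!r}")
    path.write_text(text.replace(old, new, 1), encoding="utf-8")
```

The uniqueness check is the important design choice: a silent replace-all could corrupt a 25,000-word HTML file without anyone noticing until a reader did.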
Book cover generated via AI image generation (watercolor style, Queen Anne Hill at twilight). A Python patch script was written locally, SCP'd to the server, and executed to inject a "Read the Book" button into the archive site's header. The button CSS was injected before </style> and the HTML after </header>. A movie-trailer prologue was added between the dedication and TOC using StrReplace.
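The header patch itself reduces to a pair of one-shot string replacements against the live page. A sketch under those assumptions; the button markup and CSS values here are illustrative, not the ones actually deployed:

```python
# Sketch of the header patch: inject button CSS just before </style> and
# the button HTML just after </header>. Assumes the page has exactly one
# of each tag; the class name and styles are placeholders.
BUTTON_CSS = ".book-btn { background: #7a1f2b; color: #fff; padding: 8px 16px; }\n"
BUTTON_HTML = '<a class="book-btn" href="book.html">Read the Book</a>\n'

def patch(html: str) -> str:
    html = html.replace("</style>", BUTTON_CSS + "</style>", 1)
    html = html.replace("</header>", "</header>\n" + BUTTON_HTML, 1)
    return html
```

Run once on the server, this leaves the rest of the archive page untouched, which is why a patch script beat regenerating the whole site.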
The AI read Andrew's 78,000-word memoir (conru.com/unadulterated/read.html) to understand his writing voice. A foreword was written and revised 3 times based on voice-match feedback. An epilogue with 5 favorite papers was added, along with an "About the Author" section with foundation links and a "Start Your Own Fortnightly" invitation card. Em-dashes were globally replaced (124 instances) because the author doesn't use them. This "How It Was Made" page was written.
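The em-dash cleanup is a single regular-expression pass. The actual replacement rule isn't recorded; this sketch swaps each em-dash, along with any surrounding spaces, for a comma and a space, which is one plausible choice:

```python
import re

# Sketch of the em-dash removal pass. The real script's replacement rule
# is an assumption here: "A—B" and "A — B" both become "A, B".
def strip_em_dashes(text: str) -> str:
    return re.sub(r"\s*\u2014\s*", ", ", text)
```

Counting matches before replacing (`len(re.findall(...))`) is how you get a number like "124 instances" to report back.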
458 source papers (1,075,701 words)
~30 papers read in full by AI
~15 subagents spawned (research, writing, assembly, enrichment)
10 chapters + foreword + prologue + epilogue + invitation
~25,000 words final book
1 generated image (book cover)
3 Python scripts (header patch, em-dash removal, server-side injection)
0 fabricated facts
~20 hours wall clock
1 person (directing) + 1 AI (executing)
The em-dash removal should have been a constraint from the start, not a 124-instance find-and-replace at the end. The first draft's chronological structure was the wrong call; a character-driven structure would have been better from the beginning, and a sharper initial brief would have caught that. The subagent writing was sometimes hit-or-miss on tone; having the AI read 2-3 chapters of the author's memoir BEFORE writing would have saved a revision pass.
The whole thing could probably be done in 8 hours now that the architecture is established. The bottleneck was iteration on voice and tone, which requires a human with taste. The AI is fast. The human deciding what "right" sounds like is slow. That's the job that remains.