Published on 23 November 2016 by Nigel Kersten

William Hertling, Portland-based author of the sci-fi novel Kill Process.

I recently read Kill Process, the latest book by local Portland author William Hertling, and enjoyed it immensely, both due to the thorough grounding in real-world tech and the many references to the Portland tech scene. Will has also written several other books, including Avogadro Corp.: The Singularity Is Closer Than It Appears, A.I. Apocalypse, The Last Firewall, and The Turing Exception.

I love the Singularity series, as I’ve always had a soft spot for fiction about the technological singularity. There’s a great podcast William did (as part of a series with Ramez Naam and Greg Bear) that covers a lot of what I find so fascinating about this topic. Many of us in the tech field love sci-fi, and with the Portland/Puppet connections, I thought I’d invite Will to be interviewed for the Puppet blog.

Nigel Kersten: Hey Will, good to chat with you again! We first met at Luke Kanies's house, I think, not long after I discovered you co-founded Tripwire with my good buddy Gene Kim of The Phoenix Project fame. Tell me how that all happened, and how you ended up working in technology.

William Hertling: Gene Kim and I studied computer science together in grad school, then independently ended up in Portland. Two years later, Gene bootstrapped a company around a contract to develop an online multiplayer game. He wanted me to work with him, but I wasn’t convinced game development was a viable long-term endeavor. A few months later, he acquired from Purdue the rights to create a commercial version of Tripwire, which he had developed there as an undergrad, and suddenly there was a long-term vision for the company. I joined shortly after that, and ironically ended up spending most of my effort on game development, which turned out to be a much bigger project than anticipated. But during that time Gene and I worked out the design for what would become the commercial version of Tripwire.

Nigel: You were still working in tech when you wrote your first book, Avogadro Corp: The Singularity Is Closer Than It Appears, weren’t you? I know lots of people think “I’ve got a book in me,” but what led you to actually decide to do it? And when did you decide you could do this full time?

William: A few things tipped me over the edge. One of them was reading Cory Doctorow novels. I absolutely love his books, which are full of big ideas about how tech influences people and culture. His writing is more workmanlike than literary, and it demonstrated to me that a writer could have effective rather than beautiful prose and still pull off a fantastic story. That convinced me writing was worth a try.

The other thing that made me write the novel was thinking that I’d identified a gap in the market. At the time, there didn’t appear to be any very realistic novels that chronicled the moment of emergence of artificial intelligence, and that seemed worth exploring in more detail.

I’m not actually a full-time writer yet, although I’d love to be. I make about a third of my income from writing, and work part time on 3D printing software at HP.

Nigel: The “gap in the market” comment is very Lean Startup :) One of the things I loved about Kill Process was how grounded it was in the Portland tech scene, featuring local startup people who I think are amazing. Not to give away any major details, but I got a huge kick out of the central characters visiting the Puppet office here. Had you been thinking of setting a book in Portland for a while? How did you develop your characters?

William: Ironically, Kill Process was originally set in New York City, and Avogadro Corp. started out set in the Bay Area. In both cases, I realized I could write with more detail and more credibility when I set the stories in Portland. As a result, in the alternate universe of my books (shared by Kill Process and the Singularity series), Portland plays a much bigger role in the tech world, roughly equal to the Bay Area.

Characters are inspired by their closest real-world analogues. When Angie needs a Portland-based female cofounder who is an expert in the IndieWeb, there’s one logical answer. At the same time, those characters can draw only a few key attributes from their influences, because I need the creative freedom to have characters act and be motivated in ways consistent with the story. More than once, I’ve had a character based on a close friend, only to have that character turn out to be evil, or to be killed off. Only afterwards do I remember: “Oh, wait, that was inspired by so-and-so. Oops.”

Nigel: Hacking is an important part of the plot in Kill Process. I saw you seeking feedback on a bunch of points on social media as you were writing it, which probably helps explain how realistic the tech felt and how plausible the hacking incidents seemed. Were they all based on real exploits? How did you go about researching those?

William: There’s a pivotal scene that takes place in the mid-1980s. That was all stuff I directly experienced. For the modern hacks, I tried to base them on known exploits where possible.

However, I also have Angie accomplish much of what she does through software engineering, rather than using other people’s exploits to break into systems. She works at a very large social media company in a senior, trusted position. What can an engineer like that do with the rights and access they have? Quite a bit. It’s also far easier to write a backdoor into software that you control and deploy than to break into that software without those rights. It’s the only plausible way for her to accomplish so much on her own.

I also took advantage of the recent trend towards basing embedded systems on common, full-stack platforms. Once upon a time, a smart smoke alarm, for example, ran custom firmware or an ASIC with little or no connectivity. That’s tough to hack into. In the future, smoke alarms will run a variant of Linux with all the protocols, connectivity, and services that come with it, which means there are likely to be many exploits that can get you access to the device.

Nigel: You’re completely self-published now, right? What’s that like? Do you see self-publishing continuing to grow, and what do you think the medium-term impact on literature is going to be if that becomes a dominant model?

William: I love the independence of being self-published. I have working SQL and valid JSON in Kill Process. No traditional publisher would ever let that into a published novel; they’d be too focused on expanding audience reach and too nervous about alienating non-techie readers. I’d like a big audience too, but I believe delighting my core audience is more important, and I also believe it’s possible to write a book that’s technically accurate and geeky, and yet still accessible to others.

Self-publishing will continue to grow because the economic model is so much better for authors. When a self-published author sells an ebook for $4.99, they make $3.25. When a traditionally published author sells an ebook for the same price, they make $1.25. Now that the technology exists to give self-published authors the same access to the market as traditionally published authors, there’s little incentive to go with traditional publishing, except for the few markets that are still channel-driven, such as children’s books and middle-grade novels.
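To put those per-copy figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The per-copy earnings are the ones quoted above; the $30,000 target income and the copies-needed calculation are purely hypothetical, for illustration.

```python
import math

# Per-copy figures quoted above (USD).
LIST_PRICE = 4.99
SELF_PUB_EARNINGS = 3.25   # per copy, self-published
TRAD_PUB_EARNINGS = 1.25   # per copy, traditionally published

def effective_rate(per_copy: float, price: float) -> float:
    """Author's earnings as a fraction of the list price."""
    return per_copy / price

def copies_needed(target_income: float, per_copy: float) -> int:
    """Copies an author must sell to reach a target income."""
    return math.ceil(target_income / per_copy)

print(f"Self-published rate:       {effective_rate(SELF_PUB_EARNINGS, LIST_PRICE):.0%}")   # ~65%
print(f"Traditionally published:   {effective_rate(TRAD_PUB_EARNINGS, LIST_PRICE):.0%}")   # ~25%

# Hypothetical $30,000/year writing income, just for comparison:
print(f"Copies for $30k, self-published:          {copies_needed(30_000, SELF_PUB_EARNINGS):,}")
print(f"Copies for $30k, traditionally published: {copies_needed(30_000, TRAD_PUB_EARNINGS):,}")
```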

As with everything else on the web, what’ll happen with literature is that we’ll get better tools to find and consume the books we want to read. There’s no need for traditional publishers to be the filter for books, because technology can do a better job of that. Since self-publishing makes books cheaper and returns more money to the author, the publishing industry will support more authors earning more money than it ever has before.

Nigel: I know you’re also kind of obsessed with the idea of the singularity. There have been a bunch of prominent folks like Elon Musk and Stephen Hawking expressing a lot of fear about this. What do you think? How careful should we really be when it comes to AI?

William: For the purposes of a common definition, let’s agree that we’re talking about the point where artificial intelligence roughly equals human intelligence and then continues to grow faster and more intelligent at an exponential rate, due either to improvements in algorithms or simply to improvements in the underlying hardware.

  • Nobody knows for sure if strong AI is possible, but we know that we keep building better and better AI, and we keep growing available computing power, so the precursors to strong AI are present.

  • Nobody knows for sure if strong AI will do bad things, but we know that it’s often hard to tell what AI algorithms are doing and why, and we know that the more control you give to software over the physical world, the more opportunity you have for software to shape the world. So the potential for AI to do unexpected things that affect the real world is there.

  • No one knows for sure if AI will continue on an exponential growth curve, becoming vastly more intelligent than humans, but most things in computing do exhibit exponential growth curves.

  • If AI does become vastly more intelligent than us, we don’t know what the impact on us will be, but we do know that we humans are vastly more intelligent than other species, and oftentimes the impact we have on less-intelligent species is negative.

There are so many unknowns, and I can’t help but believe there is some potential risk. Therefore, we should be thinking about ways to minimize that risk, even as we also look to reap the benefits of AI.

Many people fall into one of two camps: AI is so risky we should ban it completely, or AI has no risks and therefore there’s no need to consider risk reduction. Neither seems like a realistic perspective.

But this is a topic that could consume an entire book. If you want to read a little more about it, Ramez Naam and I debated this point in a couple of blog posts.

Nigel: So what’s next? Working on anything you can tell us about?

William: I’m in the very early stages of a sequel to Kill Process that looks at how social media and modern communication tools shape who we are, how we think, and how we act, and at the illusion of control, or lack thereof, that shapes many people’s interactions with the world. Echoing the pattern I set in the Singularity series, the protagonist switches to a different character, though one we’re already familiar with from Kill Process.

Nigel: Thanks Will, very much looking forward to the sequel, and it was great chatting again! For the readers, if you’re interested in picking up Kill Process or the Singularity series, you can find them all here.

Nigel Kersten is CIO and vice president of operations at Puppet.
