Spring 2024 Workshops!

In Alabama this week, the trees are budding. So a student tells me.

Here in New York, and online, the spring workshop courses are just showing their heads in the flowerbeds of the course catalogs.

Explore my offerings at GWW and here at AWS.


It’s here! Find out what AI understands and doesn’t understand about the world’s preeminent literary ailment and the man tasked with getting it recognized by the medical establishment—me, Benjamin Obler.


Surprisingly, I’ve been dabbling with some desktop AI tools, including Copilot on Windows and the Designer app in Office 365. Now that the technologies are available to the average home user such as myself, I feel less reactionary. I think my initial reaction to the past year’s news stories about AI’s emergence was one of resistance, because of course a chatbot can “write,” and so it immediately poses questions about the usefulness of the writer. In the last week, though, I’ve been keen to find out for myself what AI can and cannot do, and perhaps to embrace some of its capabilities.

There are limitations, of course, and I’m far from feeling that AI is superior to natural intelligence. That such superiority is not the point of AI, or even the end goal of its developers, is not well understood or easily apparent. But for the writer, the concerns are perhaps even more acute, because what is a fiction writer’s work but to synthesize the nature of reality and produce a synthetic simulacrum that functions as well as the real thing, bears all its hallmarks, and delivers something akin to its full pleasures?

On the limitations side: Automattic’s Jetpack Search tool cannot discern that my post about the artistic merits of blogs quotes Megan Marz’s article published on LongReads.com a few weeks ago. Asked “Does this site offer any views about blogs?” Jetpack Search replied with statements that confused Marz’s views with mine, even though I used the blockquote tag for one excerpt and restated Marz’s views with clear attribution before contesting them and elaborating my own.

Yesterday I used the ChatGPT add-in for Microsoft Word to summarize the contents of an entire book manuscript. I did it chapter by chapter, and in several chunks within each chapter. The process was incredibly easy: select a stretch of text, click “summarize,” then click “copy” when the results are done and paste them into a document. As of today, the add-in says it’s free while in beta, and it shows the number of “tokens” each submission consumes, a count that grows with the size of the selection. I got about halfway through the book before I was instructed to get an API key, which I did, cost-free, in a few clicks.
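
For the technically curious, here is roughly what that chunk-by-chunk summarizing looks like if you go straight at the OpenAI API instead of through the Word add-in. This is a minimal sketch; the model name, chunk size, and prompt wording are my own guesses, not anything the add-in reveals.

```python
# A rough sketch of chapter-by-chapter summarization against the OpenAI API.
# Assumptions (not taken from the Word add-in): model name, chunk size, prompts.
# Requires `pip install openai` and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_chunk(text: str) -> str:
    """Ask the model for a concise, factual summary of one chunk of manuscript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system", "content": "Summarize the passage concisely and factually."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


def summarize_chapter(chapter_text: str, chunk_chars: int = 8000) -> str:
    """Split a chapter into chunks that fit comfortably in context, then join the summaries."""
    chunks = [chapter_text[i:i + chunk_chars]
              for i in range(0, len(chapter_text), chunk_chars)]
    return "\n\n".join(summarize_chunk(c) for c in chunks)
```

Each chapter gets split into pieces small enough for the model to handle, and the piece-by-piece summaries get stitched together, which is more or less my select-and-click routine in Word, minus the clicking.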

The summaries are painfully arid. It’s almost as if they were written by a computer. Ha ha. It was exciting, though, to see how concise and summative the copy could be. At the moment the results seem most useful for, perhaps, agent submissions. In particular, the summaries might serve a writer well in creating a two-page synopsis of the entire manuscript, something agents often request for a nonfiction work, which this particular work is.

The onerous, sometimes impossible part of summarizing one’s own work for an agent is achieving objectivity: delivering something that speaks honestly to the content, factual and concise, without lapsing into embellishments about its value or strengths. So on this point I find the AI tools really promising, a boon even. I like the idea of turning over an AI-produced summary document to an agent, because we can both be assured that the MS contains what the app says it contains, and that the summary is unbiased.

Would an agent find this unbearable to read, though?

The text describes the author’s journey as a writer, starting from childhood influences and his aspirations to become a published writer. It covers his experiences in the literary field, including working for publishing companies, teaching workshops, and getting his novel, “Javascotia,” published. The author reflects on the impact of external factors such as the political climate, the pandemic, and personal challenges on his writing career. The text also hints at the author’s artistic crisis and explores themes of identity, narrative control, and the changing landscape of the publishing industry.

—Excerpt from ChatGPT summary of “You Can If You Want, But You Don’t Have To,” from the section titled “Itinerary”

That brings me to the other facet of my experiment: a new and highly unpleasant sensation in my cranium, which I can hardly compare to anything familiar. After working with the technology for an hour or so, and reading the resulting summaries, I felt a mix of ennui, fatigue, headache, depletion, hunger, lightheadedness, aggravation, uncertainty, and distaste for my experience. I think it came from reading the prose, which is precise, as I said, but also: utterly without pleasing inflection; deaf to the rhythms of quality prose; void of colorful expression and figurative language; and, probably worst of all (for me as the author of the project I subjected to summarization), reductive, boiling the content down to a banal, sheer facticity that robbed it of its moods, flattened its most heartfelt and expressive bits, and flame-throwered all its crises and joys, leaving them crusty, dehydrated, freeze-dried language turds.

The summaries are useful, though, and I’m completely flabbergasted by the technology. How does it begin “typing” within a second of my clicking submit? Doesn’t it have to think? Doesn’t it have to synthesize? Doesn’t it have to cogitate? Apparently not.
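
Part of the answer, as far as I can tell, is that it isn’t waiting to finish thinking before it shows you anything: the API can stream tokens back as they’re generated, which would explain why the “typing” starts almost instantly. A minimal sketch, assuming the OpenAI Python client; the model name and prompt are placeholders of my own.

```python
# Streaming sketch: print each piece of the response as it arrives, rather than
# waiting for the whole answer. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model that supports streaming
    messages=[{"role": "user", "content": "Summarize this chapter in three sentences."}],
    stream=True,
)

# Each chunk carries a small delta of text; printing the deltas as they come
# reproduces the live "typing" effect.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```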

I also asked Microsoft’s Designer to produce a logo for a company called Aspiring Writer Syndrome, which treats writers who suffer from a prevailing compulsion to produce literary works. I asked it to make the logo in the style of a woodblock. Here’s a look.

“Why is the company name spelled wrong?” I asked the bot. It said it was because of the woodcuts. Could it fix the spelling? No, it could not. It asked for my forgiveness on that point. I did not grant it. But I don’t feel bad, because it didn’t either. ∯