For any writer struggling with their craft but looking for a direct payoff from it, or for any publisher looking to slice through the slushpile, an interesting piece of scientific research has emerged via Science Daily. The paper "Success with Style: Using Writing Style to Predict the Success of Novels," presented at a recent conference on Empirical Methods in Natural Language Processing, claims to show statistically that certain computer-parseable characteristics of writing style distinguish successful works of fiction.
"Predicting the success of literary works poses a massive dilemma for publishers and aspiring writers alike," said Yejin Choi, Assistant Professor in the Stony Brook Department of Computer Science and co-author of the paper. "We examined the quantitative connection between writing style and successful literature." The paper, based on statistical analysis of sentences from 800 books across eight genres, using Project Gutenberg download counts as a gauge of popularity, claims up to 84 percent accuracy in correlating certain stylistic markers with popularity.

And these secret formulas for success? As reported by Science Daily, the more successful books included a greater number of conjunctions ("and," "but," etc.), adjectives, prepositions, and other noun modifiers. Linking clauses together and describing your things appears to be a big help in hooking readers. What didn't appear to work so well was a reliance on verbs, especially simplistic and overused action or feeling verbs, clichés like "love," adverbs, and foreign loan words. (Given Dan Brown's success, I'd be surprised by the last one, but those are the numbers.)
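For the curious, here is a toy sketch of the kind of surface feature the paper counts. This is not the study's actual method (which relies on full part-of-speech tagging and a much richer feature set); it simply tallies a small, hand-picked subset of closed-class conjunctions and prepositions per 100 tokens, using invented sample sentences:

```python
import re

# Tiny, illustrative closed-class word lists -- not the paper's feature set.
CONJUNCTIONS = {"and", "but", "or", "nor", "yet", "so"}
PREPOSITIONS = {"in", "on", "at", "of", "to", "with", "from",
                "by", "over", "under", "through"}

def style_features(text):
    """Count conjunctions and prepositions per 100 tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {"conj_per_100": 0.0, "prep_per_100": 0.0}
    conj = sum(t in CONJUNCTIONS for t in tokens)
    prep = sum(t in PREPOSITIONS for t in tokens)
    scale = 100.0 / len(tokens)
    return {"conj_per_100": conj * scale, "prep_per_100": prep * scale}

# Two contrived sentences: one heavy on connectives and modifiers,
# one heavy on plain action and feeling verbs.
ornate = ("The old house stood at the edge of the moor, and the wind, "
          "cold and relentless, moved over the heather and through the trees.")
plain = "He ran. He jumped. He loved her. She smiled."

print(style_features(ornate))
print(style_features(plain))
```

On these two samples the "ornate" sentence scores far higher on both counts, which is the sort of signal the researchers fed into their classifiers.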

Naturally, some publishers or their service providers might already be moving to incorporate this research into an algorithm that weeds out no-hopers before they even take up valuable slushpile space. After all, 84 percent is a sexy number, and if it's scientifically proven, it must be true, no? And as a method for picking out the books more likely to succeed, it may be no worse than the non-numerical, or even just plain casual, methods many publishers use to select new works for print.

Of course, the real test would be to throw a slew of utterly failed, totally unsuccessful, proverbially bad books into the mix and see the results. Given its reliance on Project Gutenberg, I suspect, though I can’t be sure, that the authors of the paper unwittingly inserted a bias towards lasting literary value rather than success in the current publishing market. After all, when you consider the number of bad books that go global … And I wouldn’t put it past some enterprising publisher to incorporate this research into an automatic ghostwriting program that cuts out those pesky authors entirely. Far cheaper than paying royalties …