
They say death and taxes are the only certainties in life. They're wrong.
If you create content for a living, add one to the list: dealing with feedback.
Feedback is good, in principle. It's always wise to ask other people to look at your work before it goes to a wider audience, especially reviewers with deep experience and sound judgment. Getting great feedback will even make you a better writer. But when you struggle to figure out what to do with the feedback you receive, applying three simple terms can work wonders.
A Study in Contradictions
Input often comes from many different people, who have different perspectives and priorities, and different ways of communicating their thoughts. If everyone offers the same suggestion or has a similar reaction to your work, it's easy to decide how to handle it. But how do you deal with inconsistent or even contradictory recommendations and suggestions?
Early in my career I wrote a brochure for a new local business service, one that few people in the region were familiar with. I described the service using an analogy that most prospects would understand, and then went into the nuts, bolts, and technical details.
The sales manager sent me his feedback first. He loved the analogy, because he believed it would make it easier to communicate the service's value. "This explains what we're doing so everyone will get it," he wrote in the margin. "You should put this in bold!"
Then the product manager sent her review. She'd crossed out that paragraph in the manuscript and wrote in the margin, "Don't waste time with this flowery stuff, just get to the meat about what we offer."
Finessing a solution that balances two opposing viewpoints like those can be tricky and time-consuming. It also increases the risk that somebody's nose will remain out of joint at the end of the process.
When Kindness Obscures Clarity
A different problem with feedback involves a desire to be nice. People who are good at giving feedback know they need to be direct (which is not the same as being mean), but reviewers who are less experienced—or more sensitive—may offer their suggestions using language that is meant to be tactful or kind, but isn't very specific. Often, such well-intentioned feedback can leave you wondering just what the reviewer is really asking you to do.
As a content creator, it's your job to deal with all this feedback and incorporate it in a way that pleases everyone, or at least ruffles as few feathers as possible. That can be a real challenge.
Whether you're dealing with great reviewers or awful reviewers, adopting a clear feedback system will make your life easier, especially when it comes to dealing with many different points of view, or people whose input tends toward the vague.
A Three-Degree Feedback System
If you're struggling to deal with vague or contradictory input from reviewers, setting some simple ground rules at the start of the review will help. I've had experience with several feedback ranking systems over the years, and this is the one that has worked best for me. It puts the onus on your reviewers to be clear about how strongly they feel about each piece of feedback they offer.
Some pieces of feedback—simple typo fixes, for example—are easy to interpret and don't need to be classified. But when a reviewer rewrites a sentence, adds new ideas, or makes other major changes in a manuscript, classifying the change into one of the following categories gives the content creator and the reviewer clarity about the feedback's importance.
1. Observation
Sometimes reviewers notice something they simply find interesting, or that seems odd, or that bothers them for a reason they can't quite put their finger on. They may doubt that a sentence is grammatically sound, or they may have an interesting fact on hand without being sure it's worth changing the manuscript to include.
Such comments are best classified as Observations: the reviewer is pointing something out, but isn't necessarily calling for the manuscript to be changed. They might be asking a question ("Is there a simpler word than 'solipsistic'?") or tossing out an additional fact or item for consideration, without feeling strongly that it needs to be added.
2. Suggestion
These comments involve changes the reviewer would like to see, or personally preferred wording or style choices, that are not necessarily wrong or off-message. For example, if you're writing copy about how to use tools, and your intended readers live in the United Kingdom as well as the United States, a reviewer might suggest you use the word "spanner" instead of "wrench." In this case, depending on how the content is delivered, you may be able to accommodate both options. Most current web content management systems let you display different content based on a visitor's geographic location, so you can show "spanner" to visitors from the U.K. while visitors from other locales see "wrench" instead.
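The locale-based substitution described above can be sketched in a few lines. This is a hypothetical illustration only: the term table, locale codes, and function name here are my own assumptions, not the API of any particular CMS.

```python
# Hypothetical sketch of locale-aware vocabulary substitution.
# The override table and the localize() helper are illustrative
# assumptions, not part of any real CMS.

REGIONAL_TERMS = {
    # default term -> {locale: regional variant}
    "wrench": {"en-GB": "spanner"},
}

def localize(term: str, locale: str) -> str:
    """Return the locale-specific variant of a term, falling back
    to the default term when no override exists."""
    return REGIONAL_TERMS.get(term, {}).get(locale, term)

print(localize("wrench", "en-GB"))  # spanner
print(localize("wrench", "en-US"))  # wrench
```

A real CMS would typically hide this lookup behind a template tag or content-variant feature rather than application code, but the underlying idea is the same.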
3. Must Change
The final, and most important, class of comments covers changes that absolutely, positively must be made. Typically, these comments come from executives and higher-level managers, as well as specialized subject-matter experts. They might involve factual inaccuracies, adjustments to quotes, language that will resonate more effectively with an intended audience, or other critical matters.
Making Feedback Easy for Everyone
Adopting conventions for feedback such as those above creates clarity and makes it easier for creatives and content developers to know how to handle each comment, but it shouldn't be a burden on the reviewers.
Some teams use color codes for comments—green for "observation," yellow for "suggestion," and red for "must change" works well. In other cases, reviewers may prefer to just mark each comment or change with "O," "S," or "M," or adopt some similar convention.
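If your review comments end up in a spreadsheet or issue tracker, the O/S/M markers also make triage easy to automate. The comment structure and priority ordering below are illustrative assumptions, not a feature of any real review tool.

```python
# Hypothetical sketch: sorting review comments by their O/S/M marker
# so must-change items surface first. The data shapes here are
# illustrative assumptions.

PRIORITY = {"M": 0, "S": 1, "O": 2}  # must-change, suggestion, observation

def triage(comments):
    """Sort (marker, text) pairs so "M" items come first; unmarked
    comments sink to the bottom. Python's sort is stable, so ties
    keep their original order."""
    return sorted(comments, key=lambda c: PRIORITY.get(c[0], 3))

comments = [
    ("O", "Is there a simpler word than 'solipsistic'?"),
    ("M", "The launch date in paragraph two is wrong."),
    ("S", "Consider 'spanner' for U.K. readers."),
]
for marker, text in triage(comments):
    print(marker, text)
```

Whether you automate it or not, the point is the same: the marker tells you at a glance which comments are negotiable and which are not.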
Classifying comments and feedback might seem like an unnecessary step in the review process, but in my experience it makes things much easier, especially when you have many different reviewers with competing interests and priorities. In one particularly complicated workplace, for a project that involved no fewer than 15 different technical reviewers, one article manuscript received feedback that took more than a month to implement because we needed to resolve so many contradictory and vaguely phrased directives. And the reviewers weren't happy with the end result: "Why weren't all of my changes made?" was the most common complaint we heard...and we heard it a lot, because we had to make some hard choices about whose feedback to include.
The next time we worked on an article with that group, we set the expectation that each reviewer would use the Observation / Suggestion / Must-Change classification for any comment more substantive than a typo fix. The volume of input we received from reviewers was the same, but implementing the changes took only a few days. What's more, we had very few complaints about suggestions not being taken: all of the "must-change" items had been incorporated, and many of the suggestions were as well.
The three-degree system worked perfectly for this group, but there are many other options, more and less complicated. How do you prioritize and handle the feedback you receive on manuscripts and other content projects?