How to moderate social commentary in KnowledgeNets

I joined a discussion on Andrew McAfee's blog recently regarding the iPad and Apple's closed ecosystem. The level of vitriol regarding Apple's App Store was astounding. Many commenters claimed that the App Store environment is the end of freedom as we know it, and that Steve Jobs is virtually the devil himself.

As I wrote in my comment, I believe this is all very much overblown. The Apple ecosystem is just an alternative model where moderation (Apple deciding what is and is not in the App Store) is used to help the user and the community. If this form of moderation really bothers you, you are free to opt out.

But the discussion regarding moderation in that environment got me thinking again about moderation in a KnowledgeNet environment. I get this question all the time when we talk to prospects and customers: Won't the social media aspects of the system go wild? I generally answer with the following story.

How social media is controlled and your ability to moderate results are very much a question of context. Take for example an Internet-based hotel review site, like TripAdvisor, where users are encouraged to share reviews. What's the most crucial part of this scenario? I believe it is that the reviews are not moderated in any way (except possibly for spam and offensive language).

The issue is that if the user believes that bad reviews or good reviews are being deleted, then what is the point of using the site? The user needs to know that all the feedback provided by the community is available for review and that common sense tells the user to take the over-the-top good and bad reviews with a grain of salt.

This is why rumors that sites such as Yelp manipulate the reviews as a sales tactic are so damaging to these organizations. Why use the site if the wisdom of the community is being compromised?

Now, if you take this same logic and apply it to the internal use of socially enabled tools, where does it get you? It would seem to lead to the conclusion that even when erroneous information is posted on the internal sites, you just need to accept it.

I do not agree. A wiki is a good example of group moderation. And it works well in both the context of the public Internet and a private intranet. Erroneous information can be quickly "corrected" by other users.

But when subjects become highly controversial, as with the George W. Bush article on Wikipedia, you move to moderation by an administrator. Users can no longer edit the article directly; instead, they are asked to submit change requests to the administrators.

Wikipedia calls this feature "page protection," and a semi-protected article that is no longer open for general editing displays an icon that looks like a small silver padlock.

But what about social media items (social intelligence), like comments? Here intranet practices should diverge from Internet practices. On the intranet, our clients are often looking to the KnowledgeNet to make high-quality information more readily available and re-useable.

They like the idea that a KnowledgeNet can allow vetted content, such as a document or presentation, to be enhanced and informed by comments from community experts. This makes the knowledge repository a dynamic and ever-improving organizational asset.

However, customers also worry about low-quality comments and how they might degrade the overall goal and quality of the system. The solution is simple: Allow low-quality comments to be deleted, or moderated, by an administrator.

This concept of "weeding and feeding" a knowledge repository is an old one. Throw out poor-quality and outdated content, and add high-quality and relevant content. The same rule applies to socially contributed content on an intranet. The goal of high quality outweighs the need for every comment to remain intact.

We call this type of control the social volume knob. It gives you control over the social components of your system. Unlike the Internet, where the social media dial is turned up to 11, à la the amps in "This Is Spinal Tap," many organizations want to control what content can be socialized and how it is socialized. For example, they might enable only ratings for news feeds, but ratings, comments, and tagging for documents.

Additionally, the social volume knob lets you control who has these capabilities. For example, everyone can tag, but only managers can comment. The social volume knob gives the organization the control it needs over its content, and thus provides "social security."
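To make the idea concrete, here is a minimal sketch of a social volume knob as a configuration table: which social features are enabled per content type, and which roles may use each one. All of the names here (`SOCIAL_CONFIG`, `is_allowed`, the role and feature strings) are illustrative, not part of any real KnowledgeNet API.

```python
# A "social volume knob" sketched as per-content-type settings that map
# each enabled social feature to the set of roles allowed to use it.
SOCIAL_CONFIG = {
    "news_feed": {"rate": {"everyone"}},
    "document": {
        "rate": {"everyone"},
        "comment": {"manager"},   # only managers can comment
        "tag": {"everyone"},      # but everyone can tag
    },
}

def is_allowed(content_type, feature, role):
    """Return True if `role` may use `feature` on `content_type`."""
    roles = SOCIAL_CONFIG.get(content_type, {}).get(feature, set())
    return "everyone" in roles or role in roles

# Everyone can tag documents, but only managers can comment on them;
# a feature absent from the config is simply turned off.
assert is_allowed("document", "tag", "employee")
assert not is_allowed("document", "comment", "employee")
assert is_allowed("document", "comment", "manager")
assert not is_allowed("news_feed", "comment", "manager")
```

The design choice worth noting is that "off" is the default: any feature not listed for a content type is disabled, which is the opposite of the Internet's everything-on posture.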

Usually after I go over this with customers, a couple of follow-up questions are raised. First, how do I identify content that should be deleted? There are several ways to accomplish this. One is to delegate responsibility: use domain experts to review social commentary and give them the power to moderate it.

Another is to use community ratings to tell you that a comment is poor quality; administrators should be able to run reports and easily review the quality of comments across the system. A third is to use alerts to identify vulgar or off-color comments.
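The last two approaches can be sketched together: surface for moderator review any comment whose community rating is low, or that trips a word alert. The thresholds, field names, and deny-list here are all assumptions for illustration, not a real product's schema.

```python
# Hypothetical moderation report: flag comments that the community has
# rated poorly (enough votes, low average) or that contain alert words.
FLAG_WORDS = {"vulgarword"}  # placeholder deny-list; real lists are curated

def average(ratings):
    """Mean of a rating list, or None if no one has rated yet."""
    return sum(ratings) / len(ratings) if ratings else None

def comments_to_review(comments, min_avg=2.0, min_votes=3):
    """Yield comments a moderator should look at."""
    for c in comments:
        avg = average(c["ratings"])
        low_rated = (avg is not None
                     and len(c["ratings"]) >= min_votes
                     and avg < min_avg)
        word_alert = any(w in c["text"].lower() for w in FLAG_WORDS)
        if low_rated or word_alert:
            yield c

comments = [
    {"id": 1, "text": "Great summary.", "ratings": [5, 4, 5]},
    {"id": 2, "text": "This is wrong, see page 3.", "ratings": [1, 2, 1]},
]
review = [c["id"] for c in comments_to_review(comments)]
# only comment 2 (average rating about 1.33 on 3 votes) is surfaced
```

Note that the ratings path only flags, it never deletes: the community's signal queues items for a human moderator, which matches the "delegate responsibility" approach above.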

This leads to the second question: how do I avoid being heavy-handed? Well, remember moderation in all things. Delete items only after careful analysis, and err on the side of not deleting until you gain more experience and confidence.

Also, refrain from deleting items you merely disagree with; delete only erroneous information. For instance, in Presto you can turn on features such as view history, which lets the community see the revision history. Transparency is good.
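The transparency point can be sketched as a moderation log: rather than silently removing a comment, record every action in a history the community can see. `ModeratedComment` and its fields are hypothetical; this is not how Presto's view history is actually implemented.

```python
# Sketch of transparent moderation: deletions replace the text but leave
# a visible audit trail of who acted, when, and why.
import datetime

class ModeratedComment:
    def __init__(self, text):
        self.text = text
        self.history = []  # visible record of moderation actions

    def moderate(self, action, moderator, reason):
        """Apply a moderation action and log it to the visible history."""
        self.history.append({
            "action": action,
            "moderator": moderator,
            "reason": reason,
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        if action == "delete":
            self.text = "[removed by moderator]"

c = ModeratedComment("The widget ships in 2009.")  # erroneous claim
c.moderate("delete", "domain_expert", "incorrect release date")
# the comment is gone, but the community can see that and why it was removed
```

Because the reason is logged alongside the action, a moderator who deletes out of mere disagreement leaves evidence of it, which is itself a check on heavy-handedness.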

With these best practices in mind, we find that moderation of social media within the KnowledgeNet helps the overall quality of the system and does not deter the community from using it. In fact, some users are relieved to learn that a bone-headed comment they made can be easily removed from the intranet, whereas an off-color post on a public social media site will haunt them forever.

I believe moderation capabilities provide a number of key benefits within the context of enterprise socially enabled applications, such as KnowledgeNets. What do you think? Do you have any experience with either moderated or unmoderated social media products?
