After its initial public offering (IPO) last year, Facebook has become sensitive to even the smallest threats to its business. It is little wonder, then, that the recent threat of a boycott by advertisers such as Marks and Spencer and BSkyB, after their advertisements were placed on offensive Facebook Pages, has rattled the billion-member social network.
Facebook uses a sophisticated algorithm to determine the Pages and Groups where advertisers can get the greatest bang for their buck. By mining Page users' data along multiple dimensions, the algorithm estimates how likely Facebook fans are to click on an advertisement. However, it seems the algorithm currently doesn't consider how offensive some Facebook Pages and Groups can be to an advertiser's regular customers. Even a remote association with Pages that glorify violence, misogyny, or racial hatred can do serious damage to brand equity, and Facebook is replete with such material. Understandably, many advertisers are cautious.
Facebook responded quickly to the suspension of spending by a few advertisers by introducing manual validation of Pages eligible to display advertisements and by restricting "ads from appearing next to Pages and Groups that contain any violent, graphic or sexual content" (even content that does not violate Facebook's community standards).

Gradually, it will introduce an algorithmic review process to replace the manual one. However, it's not clear how Facebook will achieve this complicated task. Many Pages and Groups may be offensive even when they don't contain any violent, graphic, or sexual content. On the other hand, Pages and Groups with such content can't all be tarred with the same brush. Facebook has responded efficiently to the emerging crisis, but can it take the bull by the horns as the complexities of monitoring millions of Pages and Groups become more apparent?