Using Bias Detector in NewsFrames

Under Development, Last Updated 15 November 2017

This functionality, a collaboration with Georgia Tech's Behavioral Modeling and Computational Social Systems Group, is experimental and undergoing active research. It is currently available only in English.

Publications using this research should include citation information as directed on GitHub.

Results are meant to be suggestive of potential bias at the sentence level, and are still under review.

For this reason, anyone working with Bias Detector results should view them as flags for possible ways that texts – whether our own or someone else's – might be limited or slanted. Human interpretation is required to affirm or reject the suggestions made by these algorithms.

To learn more about where we are in experimentation and implementation around this tool, please contact newsframes [at] globalvoices [dot] org.

Bias Detector Sub-Score Definitions

Overall scoring

  1. Sentiment – sentiment scores provide an overall assessment of the text's negative or positive affect, taken as a whole.
     For example, “The restaurant wasn’t that bad.” would be assigned a positive sentiment score, because “not bad” is understood to carry a positive valence for the meaning expressed about the restaurant.
     Likewise with more complex phenomena: “The restaurant was super bad” would receive a much more negative sentiment score, because the adverb “super” amplifies the negative valence of the word “bad”.
  2. Modality – modality scores indicate the overall amount of certainty expressed, on average, by the sentences within the text.
     For example, “It will rain” receives a much higher score than “It might rain.” Scores above 0.5 at the sentence level are interpreted as expressions of fact (see the illustrative sketch following this list).
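
How these two scores are actually computed is part of the underlying research and is not specified here. The Python sketch below is only a rough illustration, not the NewsFrames/Georgia Tech implementation: it assumes tiny hand-picked lexicons for valence, negation, intensifiers, and certainty cues, and shows how such cues could produce sentence-level sentiment and modality scores that behave like the examples above. All word lists and weights in it are assumptions made for illustration; only the 0.5 fact threshold comes from the text above.

```python
# Illustrative sketch only -- NOT the NewsFrames / Georgia Tech implementation.
# Lexicons and weights below are assumptions chosen to mirror the examples above.

NEGATORS = {"not", "never", "wasn't", "isn't"}                     # assumed negation cues
INTENSIFIERS = {"super": 1.5, "very": 1.3, "really": 1.3}          # assumed amplification factors
SENTIMENT_LEXICON = {"bad": -1.0, "good": 1.0, "great": 1.5}       # assumed word valences
CERTAINTY_LEXICON = {"will": 0.9, "is": 0.8, "might": 0.3, "may": 0.4}  # assumed modality cues


def tokenize(sentence):
    return sentence.lower().replace(".", "").split()


def sentence_sentiment(tokens):
    """Score one sentence: negation flips valence, intensifiers amplify it."""
    score, negated, boost = 0.0, False, 1.0
    for tok in tokens:
        if tok in NEGATORS:
            negated = True
        elif tok in INTENSIFIERS:
            boost = INTENSIFIERS[tok]
        elif tok in SENTIMENT_LEXICON:
            valence = SENTIMENT_LEXICON[tok] * boost
            score += -valence if negated else valence
            negated, boost = False, 1.0  # reset after each sentiment-bearing word
    return score


def sentence_modality(tokens):
    """Average certainty of the modal cues in one sentence (0 = uncertain, 1 = certain)."""
    cues = [CERTAINTY_LEXICON[t] for t in tokens if t in CERTAINTY_LEXICON]
    return sum(cues) / len(cues) if cues else 0.5  # 0.5 default is an assumption


if __name__ == "__main__":
    print(sentence_sentiment(tokenize("The restaurant wasn't that bad")))  # positive: negation flips "bad"
    print(sentence_sentiment(tokenize("The restaurant was super bad")))    # strongly negative: "super" amplifies "bad"
    print(sentence_modality(tokenize("It will rain")))    # above 0.5 -> read as a statement of fact
    print(sentence_modality(tokenize("It might rain")))   # below 0.5 -> read as uncertain
```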

Proportional scoring

The following sub-scores report the proportion of words within the text that fall within each category of interest (a minimal sketch of this kind of proportional scoring follows the list).

  1. Opinion words – words that signal the expression of positive or negative attitudes or opinions, which may signal bias
     e.g. best, benign, contradict
  2. Tentative words – words that express uncertainty
     e.g. bets, dubious, hazy, guess
  3. Achievement words – words that indicate praise
     e.g. accomplished, master, prized
  4. 3rd person pronouns – words that are thought to be useful in detecting attribution bias
     e.g. he, him, she, hers, they
  5. Discrepancy words – words that indicate something is not as expected
     e.g. inadequate, mistake, liability
  6. Work words – words that indicate a strong work orientation
     e.g. ambitious, resourceful, hard-work
  7. One-sided bias terms – words that express only one side of contentious issues
     e.g. anti-abortion vs. pro-life
  8. Factive verbs – verbs that presuppose the truth of their complement
     e.g. realize, as in: “The speaker of the house didn’t realize what a mistake passing the law was.”
  9. Hedge words – words used to reduce commitment to the truth of a proposition, avoiding bold predictions
  10. Assertive verbs – verbs that assert something but do not imply that the assertion is definitely true (contrast with factive verbs above)
      e.g. “The inventor claims the product produces less radiation than earlier versions.”
  11. Strong subjective words – adjectives or adverbs that add strong force to the meaning of a phrase or proposition
      e.g. fantastic work or accurate diagnosis
  12. Weak subjective words – adjectives or adverbs that add some/weak force to the meaning of a phrase or proposition
      e.g. noisy
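
The exact lexicons behind these sub-scores are not listed here. As a rough sketch of proportional scoring in general (not the NewsFrames implementation), the Python snippet below computes, for a handful of assumed category word lists drawn from the examples above, the share of a text's tokens that fall into each category. The tiny word lists are placeholders, not the tool's actual lexicons.

```python
# Illustrative sketch only -- NOT the NewsFrames implementation.
# Category word lists are tiny stand-ins taken from the examples above;
# the real tool would rely on much larger lexicons.

CATEGORY_LEXICONS = {
    "opinion": {"best", "benign", "contradict"},
    "tentative": {"bets", "dubious", "hazy", "guess"},
    "achievement": {"accomplished", "master", "prized"},
    "third_person_pronouns": {"he", "him", "she", "hers", "they"},
    "discrepancy": {"inadequate", "mistake", "liability"},
}


def proportional_scores(text):
    """Return, per category, the share of tokens that match that category's word list."""
    tokens = [t.strip(".,!?\"'").lower() for t in text.split()]
    total = len(tokens) or 1
    return {
        category: sum(1 for t in tokens if t in lexicon) / total
        for category, lexicon in CATEGORY_LEXICONS.items()
    }


if __name__ == "__main__":
    sample = "They guess the accomplished master made an inadequate, dubious mistake."
    for category, score in proportional_scores(sample).items():
        print(f"{category}: {score:.2f}")
```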
These explanations were developed in partnership with members of the Behavioral Modeling and Computational Social Systems Group.