Using Bias Prism in NewsFrames

Under Development, Last Updated 30 March 2018

This functionality, a collaboration with Georgia Tech's Behavioral Modeling and Computational Social Systems Group, is experimental and undergoing active research. It is only available in English at this time.

Familiarity with Natural Language Processing (NLP) and multiple regression analysis is necessary to fully understand the results. Up-to-date explanations can be found on GitHub.

What is Bias Prism?

Bias Prism offers a spectrum of approaches to analyzing bias at the sentence level. It is not meant to judge statements as “biased” or “not biased.”

The main goal of Bias Prism is to help writers and researchers think about bias in more specific and complex ways, whether in their own writing or in others’. Results are meant to suggest potential factors of bias at the sentence level, and are still under review.

Another goal of Bias Prism is to increase familiarity with the algorithms being developed for language assessment. As NLP research develops, we seek to help writers and researchers understand the assumptions and processes of this field.

Human interpretation is required to definitively affirm or reject the suggestions made by these algorithms. Results are provided in three formats:

  • an “Extended” CSV with all the raw calculations from the algorithms for each sentence;
  • a “Normalized” CSV of key results (those with significant p-values) from the above, normalized with a sigmoid (logistic) function (sketched below);
  • an immediate HTML display of the normalized results.
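As an illustration of the normalization step, here is a minimal sketch in Python. Centering on the mean and scaling by the standard deviation are assumptions made for illustration only; the actual Bias Prism pipeline may differ.

    import math

    def sigmoid(x):
        # Logistic function: squashes any real value into (0, 1).
        return 1.0 / (1.0 + math.exp(-x))

    def normalize(raw_scores):
        # Standardize raw per-sentence scores, then squash onto (0, 1).
        # The standardization step is an illustrative assumption, not
        # the documented Bias Prism pipeline.
        mean = sum(raw_scores) / len(raw_scores)
        var = sum((s - mean) ** 2 for s in raw_scores) / len(raw_scores)
        std = var ** 0.5 or 1.0  # guard against zero variance
        return [sigmoid((s - mean) / std) for s in raw_scores]

    print(normalize([0.2, 1.5, -0.7, 3.1]))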

A Bias Composite offers an analysis of sentence language according to the multiple regression model. Currently, results above 1.00 are being investigated further for interpretation, in the absence of means or medians from larger-dataset analysis.
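As a rough sketch of how a regression-based composite can be assembled from per-sentence sub-scores, consider the Python below. The sub-score names, coefficients, and intercept are hypothetical placeholders; the fitted model’s actual weights are not published here.

    # Hypothetical coefficients; the fitted model's weights are not public.
    COEFFICIENTS = {
        "value_laden": 0.9,
        "doubt": 0.4,
        "partisan": 1.1,
        "presupposition": 0.6,
        "figurative": 0.5,
        "self_reference": 0.3,
        "attribution": 0.4,
    }
    INTERCEPT = 0.1

    def bias_composite(sub_scores):
        # Linear combination of sub-scores, as in a fitted regression model.
        return INTERCEPT + sum(
            COEFFICIENTS[name] * value for name, value in sub_scores.items()
        )

    scores = {"value_laden": 0.8, "doubt": 0.1, "partisan": 0.7,
              "presupposition": 0.0, "figurative": 0.4,
              "self_reference": 0.0, "attribution": 0.2}
    composite = bias_composite(scores)
    print(composite, "flagged for review" if composite > 1.00 else "below 1.00")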

See our List of Known Issues.

Analyzing Bias Prism Results

Any reading of the matrix of Bias Prism results should treat them primarily as flags or suggestions. The flags are meant to signal possible ways that texts, whether our own or someone else's, might be perspectival.

Selection of Bias Detector Sub-Score Definitions

  1. Value-laden language – injects a writer's own subjective values into the presentation of issues/facts; subjective opinion or positively/negatively loaded language; partial tone; sensationalism.

     For example, related to sentiment, “The restaurant wasn’t that bad.” would be assigned a positive sentiment score, since “not bad” is understood to express a positive valence toward the restaurant.

     Likewise with more complex phenomena: “The restaurant was super bad” would receive a much more negative sentiment score, since the adverb “super” amplifies the negative valence of “bad” (see the sentiment sketch after this list).

  2. Expressions of doubt, uncertainty, unsupported or vague attributions – may reflect or imply inaccuracies, calling a statement's credibility into question.

     For example, “It will rain” receives a much higher score than “It might rain.” Scores above 0.5 at the sentence level are interpreted as expressions of fact. Words that express uncertainty, e.g. “bets,” “dubious,” “hazy,” “guess,” are also flagged (see the lexicon sketch after this list).

  3. Partisan language, contentious labels, one-sided terms – reflect ideological bias and/or framing bias, and a non-neutral point of view.

     e.g. “anti-abortion” vs. “pro-life” vs. “pro-choice”; “terrorists” vs. “freedom fighters” vs. “rebels” vs. “insurgents”.

  4. Presupposition markers – reflect epistemological bias and presupposed truths; “leading” or suggesting a conclusion; endowment/shepherding bias; editorializing; indicators of framing bias.

     Includes factive verbs, which presuppose the truth of their complement, e.g. “realize,” as in: “The speaker of the house didn’t realize what a mistake passing the law was.”

  5. Figurative language – reflects non-neutral perspective bias, such as what Wikipedia's Neutral Point of View (NPOV) policy calls “puffery” or “peacock” language; editorializing; sensationalism. Includes English idioms, metaphors, metonymy, hyperbole, and simile.

  6. Self-reference – may indicate personal thoughts rather than an objective/unbiased point of view. Includes self-referential pronouns, e.g. “… this is my story.”

  7. Attribution – reflects possible (fundamental) attribution bias, actor-observer bias, or ultimate attribution bias.

     Such as achievement words that indicate praise, e.g. “accomplished,” “master,” “prized”; or third-person pronouns, which are thought to be useful in detecting attribution bias, e.g. “he,” “she,” “they.”
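To see concretely how negation and intensifiers shift sentence-level sentiment (item 1 above), here is a minimal sketch using the off-the-shelf VADER analyzer. VADER is not necessarily the model Bias Prism uses; it is chosen here only because it handles negation (“wasn’t”) and degree adverbs (“super”).

    # pip install vaderSentiment
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    analyzer = SentimentIntensityAnalyzer()
    for sentence in ("The restaurant wasn't that bad.",
                     "The restaurant was super bad."):
        # 'compound' is a normalized valence score in [-1, 1]; negation
        # and degree adverbs pull it in opposite directions.
        print(sentence, analyzer.polarity_scores(sentence)["compound"])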
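Several of the other sub-scores (doubt words, factive verbs, self-reference, third-person pronouns) can be approximated with simple lexicon lookups, as in the sketch below. These word lists are tiny samples drawn from the examples in this section, not the project's actual lexicons.

    import re

    # Tiny illustrative lexicons built from the examples above; the
    # project's actual word lists are larger and are not reproduced here.
    LEXICONS = {
        "doubt": {"bets", "dubious", "hazy", "guess", "might"},
        "factive": {"realize", "realized", "know", "knew", "regret"},
        "self_reference": {"i", "me", "my", "mine", "we", "our"},
        "third_person": {"he", "she", "they", "him", "her", "them"},
    }

    def flag_counts(sentence):
        # Count lexicon hits per category for one sentence.
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return {name: sum(token in words for token in tokens)
                for name, words in LEXICONS.items()}

    print(flag_counts("I guess it might rain, but my forecast is hazy."))

Run on the example sentence, this flags three doubt words (“guess,” “might,” “hazy”) and two self-references (“I,” “my”), illustrating how such counts could feed the sub-scores described above.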

To learn more about the current state of this tool's implementation, please contact newsframes [at] globalvoices [dot] org.

List of Known Issues and Areas of Research
– Ongoing research into the use of all caps and quotes.
– Handling the difference between a title and article text.

This functionality was previously known as “Bias Detector.”
These explanations were developed in partnership with members of the Behavioral Modeling and Computational Social Systems Group.