4) INABILITY OF CURRENT SURVEY METHODS TO ACCURATELY MEASURE THESE EFFECTS

Recent research is hampered by measurement problems: media effects are highly complex and difficult to measure; researchers lack tools to measure objective human thinking (NLP potentials?); the cognitions, feelings, and actions that respondents bring to the table are difficult to separate from actual effects that may have changed; and impacts also vary by subject matter. --GRABER 2002

Incorrect and "don't know" responses to political knowledge survey questions may not mean the same thing; results indicate that the two perform differently when disaggregated. Individuals who "guess" on political knowledge questions or answer "don't know" do so for reasons that are systematic only when accounting for individual personality differences. This direct effect indicates that collapsing these categories may contaminate knowledge measures with personality effects. Improvements require accounting for these effects during analysis or through improved survey question design. --MONDAK 2000

"The common practice of grouping incorrect answers and DKs must be discontinued" (74). The author suggests, as one potential correction for some of these personality effects, that just as dummy variables are used to distinguish three categories of partisanship/ideology, we could similarly use dummy variables to account for the three levels of response (i.e., correct, incorrect, and don't know). If I understand the author's intentions correctly, it would be interesting to see how an analysis would respond to pulling out the effects of these three categories, similar to using dummy variables to account for personality differences.
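A quick sketch of what Mondak's dummy-variable suggestion could look like in practice. The data and category names are hypothetical, and Python is used only for illustration; his actual correction would sit inside a full regression model of political knowledge.

```python
# Mondak's suggestion: rather than collapsing "incorrect" and "don't know"
# into one "not correct" category, code each knowledge item as three
# mutually exclusive dummy variables (analogous to dummy-coding the three
# categories of partisanship). Responses below are hypothetical; a real
# analysis would feed these dummies into a regression model.

RESPONSES = ["correct", "incorrect", "dont_know"]

def dummy_code(response):
    """Return a dict of 0/1 indicators, one per response category."""
    if response not in RESPONSES:
        raise ValueError(f"unexpected response: {response!r}")
    return {cat: int(response == cat) for cat in RESPONSES}

def knowledge_scores(responses):
    """Contrast the collapsed score (correct vs. everything else) with
    separate tallies that keep incorrect and DK answers distinct."""
    collapsed = sum(r == "correct" for r in responses)
    separate = {cat: sum(r == cat for r in responses) for cat in RESPONSES}
    return collapsed, separate

answers = ["correct", "dont_know", "incorrect", "dont_know", "correct"]
collapsed, separate = knowledge_scores(answers)
# The collapsed score treats the two DKs and the one incorrect answer
# identically; `separate` preserves the distinction Mondak argues is
# systematically related to personality.
```

The point of the three dummies is that "incorrect" and "don't know" can then carry different coefficients instead of being forced into a single "not correct" effect.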
Of course I still believe in reviewing how we design survey questions in an effort to put respondents in the same 'frame' before they respond, so it would also be worthwhile to analyze how such a question-wording change compares with results obtained after accounting for personality differences. --MONDAK 2000

Aggregated public opinion results may be a poor reflection of true public sentiment. Specifically, individuals who harbor "unacceptable opinions" may conceal them behind a 'don't know' response. Feldman and Zaller's view that individuals do not possess 'true attitudes' on issues is limited; "it does not account for response effects arising from the social nature of the interaction in the survey interview" (1210). The typical assumption is that 'don't know' responses indicate uncertainty about the political issue; it could just as easily be said that individuals respond DK when they feel uncomfortable answering a politically charged question. Analysis results indicate that correcting for selection bias changes the substantive significance of many of the explanatory variables. --BERINSKY 1999

At the individual level, respondents' 'real' opinions are not necessarily the same as those expressed in surveys; at the aggregate level, these responses in concert will give a biased indication of public sentiment on important policy issues, with implications for the role public opinion plays in democratic processes. --BERINSKY 1999

For many respondents, DK hides the significance of their unwillingness to share their 'true' opinion. The result is a reliance on aggregated survey responses that cannot reflect the public's true opinions on sensitive policy issues. If this is in fact true, it has huge implications for much of what we believe about the utility of public opinion data, at least as it is currently measured and analyzed.
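Berinsky's point can be illustrated with a toy calculation (this is not his estimator, which is a formal selection model; the rates below are invented). If holders of a sensitive opinion disproportionately answer DK, the aggregate among those who do answer is biased:

```python
# Toy illustration of selection bias from socially driven DK responses.
# All numbers are hypothetical; Berinsky's actual analysis corrects for
# this with a statistical selection model rather than known DK rates.

def observed_support(true_support, dk_rate_supporters, dk_rate_opponents):
    """Share supporting a sensitive policy among respondents who give an
    answer, when DK rates differ by true opinion."""
    answering_supporters = true_support * (1 - dk_rate_supporters)
    answering_opponents = (1 - true_support) * (1 - dk_rate_opponents)
    return answering_supporters / (answering_supporters + answering_opponents)

# Suppose 40% truly support the policy, but supporters of the socially
# "unacceptable" position hide behind DK half the time while opponents
# rarely do:
est = observed_support(0.40, 0.50, 0.10)
# est comes out near 0.27, understating true support (0.40) by roughly
# 13 points; with equal DK rates there is no bias.
```

The aggregate looks precise but is systematically wrong, which is exactly why Berinsky argues that collapsing or ignoring DKs matters for what we infer about public sentiment.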
Though the correction methods outlined by the author help, they seem a mere bandage; at the least we should consider reviewing the way in which we collect these data. However, this sentiment does not account for readings not yet covered. --BERINSKY 1999

"Inability to prove the scope of mass media impact beyond a doubt has made social scientists shy away from assessing media influence on many important political events" (16). --GRABER 2000

Zaller notes: "The slightly positive intercept captures the idea that a few people who have zero media exposure may nonetheless respond to campaign events, presumably because they discuss politics with people who have higher levels of media exposure. If exposure to a debate or news story makes citizens more likely to vote for a candidate, it might positively affect trait judgments, emotional reactions, and thermometer scores, and these variables might then absorb the media effect. This danger would be especially great if media exposure were measured with more error than the other variables, as it may well be. The problem can occur in any sort of model, but is especially likely to occur when summary variables, like trait evaluations or emotional reactions, are included in a vote model." --ZALLER 2002

"Researchers have generally been content to use survey responses as direct indicators of unobserved psychological constructs, recognizing that survey questions may be affected by random and systematic measurement error, but only occasionally attempting to explicitly deal with the consequences of these errors for statistical estimation" (249).
--FELDMAN 1995

Information needed to answer questions of opinion or from long-term memory is most prone to variation; responses come not from a single indication of preference, but as the culmination of a range of preferences, which are processed to find the 'correct' answer on a survey questionnaire. Not all questions are answered in this way, however, so analysis of the results should take this into consideration.

The authors "measured framing's influence on belief content, belief importance, and issue opinion. In both experiments, framing significantly affected issue opinion. Causal analysis shows that framing independently affected belief content and belief importance, and that each contributed to issue opinion" (1040). --NELSON & OXLEY 2000

The study focuses on the ability, or lack thereof, of framing techniques to change either belief patterns/ideas or at least a respondent's position on an immediate issue topic: specifically, whether framing changes the actual beliefs of the respondent, or just changes their perspective on this one issue as a result. "We report on a pair of experimental studies that seek to demonstrate that alternative issue frames can significantly influence issue opinion, and that this influence will not be primarily explained by belief content, but rather by the importance respondents assign to different beliefs" (1043). --NELSON & OXLEY 2000

"In both experiments, framing influenced belief importance, which in turn influenced opinion. We also expected that framing would minimally impact belief content, but in both cases there was some discernible effect" (1055). --NELSON & OXLEY 2000

What is typically considered 'measurement error', static, or inconsistency in methodological analyses of survey responses may actually be an indication of this variation, or range of preferences, inherent in the decision-making process.
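Nelson & Oxley's causal chain (frame to belief importance to opinion) can be sketched as a simple product-of-coefficients mediation. The data are fabricated and the setup is deliberately stylized; their actual studies used real survey experiments and proper multivariate estimation:

```python
# Stylized mediation sketch in the spirit of Nelson & Oxley's claim:
# frame -> belief importance -> opinion. All numbers are invented.

def slope(xs, ys):
    """OLS slope from a simple regression of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

frame      = [0, 0, 1, 1]                      # control vs. treatment frame
importance = [1, 3, 4, 6]                      # belief importance (hypothetical)
opinion    = [1 + 0.5 * m for m in importance] # opinion driven by importance

path_a = slope(frame, importance)    # frame -> importance
path_b = slope(importance, opinion)  # importance -> opinion
indirect = path_a * path_b           # framing's effect routed through importance
total = slope(frame, opinion)        # total framing effect on opinion
# In this toy data, opinion depends on the frame only through importance,
# so the indirect effect equals the total effect (both are 1.5): framing
# moves opinion by changing how much weight a belief receives.
```

The contrast Nelson & Oxley draw is that path_a runs through belief *importance* rather than belief *content*; in real data the indirect and total effects would differ and both paths would need controls.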
If this range-of-preferences view is true, then how we view these results can be either greatly enhanced or largely diminished in relevance, depending on whether we account for this variance or treat it as a random error term. --FELDMAN 1995

"The greater the demands a question places on memory, the less accurate the respondents' answers and, all else being equal, the less accurate the survey estimates derived from them" (62). --TOURANGEAU, RIPS & RASINSKI 2000

"The most straightforward way to establish the effect of various beliefs on attitudes is to develop separate measures of these beliefs and then to estimate the contribution each belief makes to the overall attitude expressed on the issue" (211). --CHONG 1996

"There are several important conclusions that emerge if the cognitive response framework is taken seriously. The most obvious implication is that survey questions are inherently noisy measures...'Better questions' will never eliminate this variability" (266). --FELDMAN 1995

Implications: This effort recognizes that there is partisan bias in news reporting, but contends that previous efforts have focused incorrectly on content analysis. The opinion survey, however, and specifically where the author notes a perceptual gap between journalists' self-image of their actions and their actual behaviors, indicates that a more objective measure of these biases may be warranted. To be fair, this influence would likely be subtle and not even noticed by the journalists themselves, as the author pointed out.
However, it still leaves room to speculate whether these self-administered surveys are an appropriately objective measure of true journalist bias. A better option may be to administer the surveys in person (though highly costly in the case of five countries), where an interviewer could note body language and nuance in the answers, or to find a way to ask about partisan bias without asking directly. Asking a journalist directly, "where do you think you fall on a partisan scale," given the well-known contention regarding journalist bias, would immediately put the respondent in a more defensive state of mind, making it less likely that their responses will be fully accurate, even if unintentionally. --PATTERSON & DONSBACH 1996

Highlights: voter fatigue as an IV, previously used for its influence on turnout or voter drop-off (Bowler, Donovan, and Happ 1992; Magleby 1984); here it measures how the number of propositions on a ballot affects voter awareness. The "Morality [measure] indicates that these issues exhibit a little over 13 percentage points greater awareness," and Civil Liberties/Rights nearly 18 (408); negative spending drives the results for overall campaign spending, increasing awareness by just over 16 percentage points (408). --JERIT & BARABAS 2003

Early research on media effects failed in several ways: it focused on influence through the ability to affect vote outcomes; it focused on individual-level effects instead of tracing effects on social groups; and it failed to account for the bias of theories that cast the media as impersonal, easy to ignore, etc. --GRABER 2002

This analysis focuses on the power of current models of media exposure to predict statistically significant effects. "The results of the study indicate that the vast majority of election studies lack the statistical power to detect exposure effects for the three, five, or perhaps 10 point shifts in vote choice that are the biggest that may be reasonably expected to occur. Even a study with as many as 100,000 interviews could, depending on how it deploys resources around particular campaign events, have difficulty pinning down important communication effects of the size and nature that are likely to occur." An increase in sample size does not always bring a matching increase in predictive power, however, and detection of exposure effects is likely to be unreliable unless the effects are both large and captured in a large survey. The point is that not ALL samples necessarily need to be large, just those that try to detect certain types of effects. --ZALLER 2002

The ability of researchers to draw general conclusions from this literature has been frustrated by inconsistent methods for analyzing news content, conflicting ideas of what "independent" news coverage might look like, and the tendency to study press-state relationships using stand-alone case studies with unique policy contexts and dynamics that obscure common patterns. --ALTHAUS 2003

The literature on media independence shows that government officials can simultaneously stimulate news coverage and regulate the discursive parameters of that coverage. --ALTHAUS 2003
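Zaller's power argument can be made concrete with a back-of-the-envelope calculation. The design below (two waves of interviews, a 3-point shift in vote choice from 0.50 to 0.53, 800 respondents per wave) is hypothetical, and the normal-approximation formula is a standard textbook one rather than Zaller's own:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power_two_proportions(p1, p2, n_per_group, z_crit=1.959964):
    """Approximate power of a two-sided, two-sample z-test to detect a
    shift from p1 to p2 with n_per_group respondents in each wave."""
    se = math.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return normal_cdf(abs(p2 - p1) / se - z_crit)

# A 3-point shift with a typical pre/post design of 800 interviews per
# wave yields power well below the conventional 0.80 target; detecting
# the same shift reliably takes samples an order of magnitude larger.
low = power_two_proportions(0.50, 0.53, 800)
big = power_two_proportions(0.50, 0.53, 10000)
```

This is the arithmetic behind Zaller's claim: the shifts campaigns plausibly produce are small enough that most election studies, at their usual sizes, simply cannot distinguish them from noise.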
This study investigates two sources of uncertainty in that literature which have limited the ability of researchers to draw firm conclusions about the nature of media independence: how critical the news actually is, and how journalists put the indexing norm into practice. I examine policy discourse appearing in evening news broadcasts during the 1990-1991 Persian Gulf crisis, and find that sources outside the institutions of American government produced far more discourse critical of American involvement in the Gulf crisis than was produced by the "official" debate among domestic political leaders. Moreover, changes in the amount of governmental criticism coming from official circles did not tend to produce parallel changes in the amount of critical news coverage. This suggests that criticism of government in evening news discourse was not triggered by or closely tied to patterns of gatekeeping among elected officials. Television news coverage did not merely toe the "line in the sand" drawn by the Bush Administration. Instead, the evidence from this case suggests that journalists exercised considerable discretion in locating and airing oppositional voices. --ALTHAUS 2003

Implications: The idea that "knowledge per se" is not responsible for opinion formation/change is the start of a process by which we can also conclude that exposure to media, and the questions used to "measure" this exposure, are not the best ways to measure knowledge levels. A few notes of interest, however. First, it seems that various "types" of conversational setting would be significant. How a person might discuss a political issue with a colleague at work (or, in this kind of setting, with someone they didn't know), compared to how they might discuss the same issue with a close friend, with someone of the same or a different sex, or with a significant other, will all have an effect on the content, emphasis, and nature of the discussion.
It seems this would/should be addressed further, as most political conversation occurs within the family (Campbell et al. 1960). Lastly, a pre- and post-treatment model of this experiment would have been additionally instructive, showing how an individual's "previous beliefs" might have affected the overall results. --DRUCKMAN & NELSON 2003

Arguments: Previous stand-alone case studies of press-state relations have obscured patterns in reporting methods and incorrectly assumed an indexing heuristic that is not evident when news is analyzed more closely. Analyzing the Persian Gulf crisis on a day-to-day basis, patterns of variability indicate that criticism of government during the crisis was not tied to patterns of gatekeeping among elected officials; instead, reporters exercised considerable discretion in locating and airing oppositional voices. --ALTHAUS 2003B

Implications: The idea is that if people achieved de facto selective exposure to political diversity by actively dodging political conflict, then cross-cutting interpersonal interactions would be all but impossible. Although the authors mention this, they don't stress enough how changes in technology will increase (and in many ways already have increased) the likelihood that individuals will selectively choose even their media consumption. Numerous cable channels and hundreds more websites mean that individuals no longer have to watch the nightly news from beginning to end, or even flip to another page of the newspaper to finish the one article of interest. How this change affects democracy overall is an interesting question that should be addressed further. --MUTZ & MARTIN 2001

Argument: Research on political decision-making has not accounted for the collective decision-making uncertainty prevalent in political decisions. The authors analyze whether the dimensional structure of issues is reduced during the legislative process, thus reducing subjective uncertainty in later decision-making (i.e., floor voting). They present a "dynamic theory of uncertainty that shows how attribute, alternative and collective decision uncertainty vary across decision contexts" (119). Results indicate that the decision context facing legislators varies dramatically at different stages of the legislative process (134). FN1: "We show that at the policy debate stage the policy proposals are judged along multiple evaluative dimensions. At the choice stage on the legislative floor, the issue agenda reveals a low dimensional structure" (122). --BRYAN, TALBERT & POTOSKI 2003

Uncertainty is more fundamental to choice than probability models allow (March 1994). --BRYAN, TALBERT & POTOSKI 2003
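A crude way to see what a "low dimensional structure" claim amounts to empirically: compare how much of the variance in a correlation matrix of item positions is captured by its leading eigenvalue. The matrices below are invented, and power iteration here is a stand-in for the authors' actual dimensional analysis:

```python
# Probing dimensional collapse across the legislative process: the share
# of total variance captured by the leading eigenvalue of a correlation
# matrix of positions on agenda items. High share = one dominant
# dimension. Matrices are hypothetical.

def leading_eigen_share(corr):
    """Fraction of total variance (the trace) captured by the largest
    eigenvalue, computed by power iteration on a symmetric matrix."""
    n = len(corr)
    v = [1.0] * n
    for _ in range(200):
        w = [sum(corr[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(corr[i][j] * v[j] for j in range(n)) for i in range(n))
    return eigval / n  # the trace of an n x n correlation matrix is n

# Floor stage: items nearly interchangeable (one ideological dimension).
floor = [[1.0, 0.9, 0.9],
         [0.9, 1.0, 0.9],
         [0.9, 0.9, 1.0]]
# Debate stage: items only weakly related (multiple evaluative dimensions).
debate = [[1.0, 0.2, 0.1],
          [0.2, 1.0, 0.2],
          [0.1, 0.2, 1.0]]
# The floor matrix concentrates most variance in one dimension; the
# debate matrix does not, mirroring the dimensional collapse the
# authors describe.
```

In real work this comparison would be done with factor analysis or multidimensional scaling on actual roll-call and debate data rather than toy correlation matrices.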