BEEF magazine recently published the results of its October issue survey on producer attitudes toward the GIPSA rule, entitled "'Decided' Beef Producers Strongly Oppose GIPSA Rule." The story elicited some negative responses, with readers calling the story "trash" and the survey itself a "joke."
This led me to recall a quote that Mark Twain popularized: "There are three kinds of lies: lies, damn lies and statistics." Actually, I would change it to read: "There are three kinds of lies: lies, damn lies and lying with statistics."
Too often, a credibly conducted study will be slammed because the results run counter to a person's own opinion and anecdotal evidence. For example, a person might talk with producers at the feed store, sale barn or corner café, and find all agree that the GIPSA rule is a good thing. Thus, the logic follows that everyone favors the rule.
I am a realist and know that the people who took exception to the research results likely won't be swayed by my assurance that the survey is valid and provides useful information.
One of the most common initial arguments that people make against a survey is that the survey generated results from only 730 people, while the USDA Ag Census says there are 798,290 farms that sold cattle in 2007. Thus, they argue, the results can't be valid as they represent only one out of every 1,000 producers.
BEEF editors actually conducted this study on the GIPSA rule in two consecutive years in order to compare the movement in reader sentiment toward the rule. In the 2010 study, we found that 42.7% of the respondents were against the GIPSA rule. This study generated 730 respondents, so the margin of error was +/-3.6% at a 95% confidence level. This means we can be 95% confident that the true percentage of readers against the GIPSA rule falls between 39.1% and 46.3%.
Meanwhile, the 2011 study generated 951 respondents and we found that 42.1% were against the GIPSA rule. This gives us a margin of error at the 95% confidence level of +/-3.1%, resulting in a range of 39.0% to 45.2%.
By receiving 221 more responses, the 2011 study produced a "better" estimate of the true proportion of readers against GIPSA, but the margin of error narrowed by only half a percentage point. While it would be ideal to ask every last person, one is limited by time, money and the willingness of producers to respond to a survey.
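The margin-of-error figures above follow from the standard normal-approximation formula for a sample proportion, z * sqrt(p * (1 - p) / n), with z = 1.96 at the 95% confidence level. As a minimal sketch (the function name is mine, not from the survey report), the two studies' numbers can be checked like this:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion at ~95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# 2010 study: 42.7% against the rule, 730 respondents
moe_2010 = margin_of_error(0.427, 730)
print(f"2010: +/-{moe_2010:.1%}")  # +/-3.6%

# 2011 study: 42.1% against the rule, 951 respondents
moe_2011 = margin_of_error(0.421, 951)
print(f"2011: +/-{moe_2011:.1%}")  # +/-3.1%
```

Note how the margin shrinks with the square root of the sample size: cutting the margin of error in half requires roughly four times as many respondents, which is why the extra 221 responses bought only half a percentage point.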
There are a number of other factors one must consider when reading a report on survey results. First, who did they ask? This GIPSA study was conducted among BEEF readers for whom we have e-mail addresses. Thus, we cannot say that these results are representative of the entire beef industry, because BEEF magazine does not serve the entire industry; BEEF has limited circulation that is focused on larger beef operations. So we limited our claims to the population we actually sampled.
Second, did the person conducting the survey ask a leading question? In my opinion, "From your understanding of the rule, do you favor it?" is not a leading question.
A third factor one needs to consider is whether the individuals who did not fill out the survey introduce a bias. This nonresponse bias is the most difficult to quantify: how do you learn whether the opinions of people who didn't fill out a survey differ from those who did? For other studies, we have done follow-up phone interviews to fill in this gap and found that this bias is minimal.
A reader should always think critically about any survey results or statistics they read. The more transparent the author is about where the information originated, the easier it is to determine whether someone is trying to pull a fast one. As for the BEEF magazine GIPSA survey, the results were never claimed to represent more than the BEEF readers with an e-mail address on file.