How to Survey, Part 2 (Best Practices)

In my 16 April blog entry How to Survey, I presented three sections: Key Questions, Tools and Services, and Reading. In this entry, I present some Best Practices based on my experience and the advice of two wise and capable women with whom I had the honor to work: Dr. Robin Jeffries and Dr. Kornelija Zgonc. All errors may be attributed to my misunderstanding, not their teaching!

The most recent survey completed by my department here in the Chief Technologist’s Organization at Sun was the SEED* mentoring program quarterly report for April 2008. See Mentoring Success Metrics (April 30, 2008) for details. SEED has been collecting quarterly feedback from a web-based survey since 2002, so this is a mature example of a cyclic survey. The SEED survey is not anonymous. Most of the practices below are also appropriate for one-time surveys and for anonymous surveys.

Characteristics of a Good Web-based Survey (with examples from SEED):

  • It is Short. The SEED survey consists of 14 questions. One way to shorten surveys: don’t ask for information that can easily be mined from another source.
  • It is Easy to Use and Understand. Use pull-down menus wherever possible to provide clear options. When a range of answers is possible, offer the same one-to-seven range, with “1” being low, “4” neutral, and “7” high. State questions as simply as possible and test for clarity (if it is possible to misunderstand, someone will). Avoid jargon, abbreviations, and local slang.
  • It is Easy to Analyze the Responses. Use very few open text fields. Use a seven point range so that there is a clear low, neutral, and high (more on this below). “Does Not Apply” and “No Response” are always options. “No Response” is the default option (that is, the respondent must make an active change to answer).
  • For Cyclic Surveys – Prior and Future Versions are Comparable. Questions do not change much over time.
  • It is Trustworthy. Send a survey copy immediately in email to the respondent. Make survey analysis results available to respondents promptly. Actively protect private and anonymous information. Say in the survey introduction what will happen with the results (then, do what you say). Remember Robin Jeffries’ First Law of Surveys: “Don’t ask questions unless you are prepared to act on the results!”
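The “easy to analyze” point above, including excluding the “No Response” default and “Does Not Apply” answers before computing anything, can be sketched in a few lines of Python. The response values and the “NR”/“DNA” codes below are hypothetical illustrations, not actual SEED data:

```python
from statistics import mean

# Hypothetical raw answers to one seven-point question.
# "NR" (No Response, the default) and "DNA" (Does Not Apply)
# must be excluded before computing any statistics.
responses = [7, 5, "NR", 6, 4, "DNA", 7, 3, "NR", 5]

# Keep only actual numeric scores.
scored = [r for r in responses if isinstance(r, int)]

print(f"answered: {len(scored)} of {len(responses)}")
print(f"mean score: {mean(scored):.2f}")
```

Because non-answers carry their own codes instead of a fake numeric value (like 0), they can never silently drag down the average.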

The following Attributes of Poor Surveys list is material developed by Kornelija Zgonc, former Sun Chief Master Black Belt, and my Six Sigma mentor:

    What’s Wrong? → Why It’s a Problem

    • Survey goals unclear → Take-aways unclear
    • No forethought about your processes → Don’t know how to implement changes
    • Lots of yes/no questions → Limited analytics; need big sample sizes
    • Lots of written questions → Unclear or unfocused questions
    • Focus on symptoms → Get more questions, not answers!

The following Attributes of Great Surveys list is also material developed by Kornelija Zgonc:

  • Goals, processes, and possible cause/effect relationships are analyzed up front
  • Widely-scaled numerical questions allow lots of analytics and keep sample sizes low
  • Only need a few written questions to address unforeseen situations or problems
  • Survey has action-oriented focus to generate solutions, not more questions
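The claim that widely-scaled numerical questions keep sample sizes low can be made concrete with a rough back-of-the-envelope power calculation. The effect sizes, standard deviation, and confidence/power levels below are illustrative assumptions, not figures from the SEED program:

```python
import math

# Rough per-comparison sample size for a two-group test:
#   n (total) ~= 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
z_alpha, z_beta = 1.96, 0.84   # ~95% confidence, ~80% power

# Continuous 1-7 scale: detect a 0.5-point shift, assumed sd of 1.5.
n_scale = 2 * ((z_alpha + z_beta) * 1.5 / 0.5) ** 2

# Yes/no question: detect a shift from 50% to 60% "yes".
p1, p2 = 0.5, 0.6
sd = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / 2)
n_yesno = 2 * ((z_alpha + z_beta) * sd / (p2 - p1)) ** 2

print(f"seven-point scale: ~{round(n_scale)} respondents")
print(f"yes/no question:   ~{round(n_yesno)} respondents")
```

With these illustrative numbers, the yes/no question needs well over twice as many respondents as the seven-point question to detect a comparable change.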

Why a 1 to 7 Range?
Multiple-choice options make it easier to analyze survey results statistically. One of the common and energetic “discussions” among those who design surveys is what range to allow for numerical questions. Simply put: how many number choices should the respondent be offered? Too short a range (like: 1=bad, 2=neutral, 3=good) may not capture subtle differences of opinion. Too many options, however, can give false confidence in the value and gradation of the answer. Don’t ask for more precision than your users are likely to know!

A range of seven is the best choice. When seven or more numbers are offered in a scale (like: 1=strongly disagree, 2=disagree, 3=somewhat disagree, 4=neutral, 5=somewhat agree, 6=agree, 7=strongly agree), the data collected behave and can be analyzed like continuous variables. (Data are discrete if only a limited number of values is possible. Examples: the number of legs on a cat, the number of letter grades possible on a test. Data are continuous when measurements can take any value. Examples: time, weight.) This allows tremendous analysis flexibility, because there are many more statistical tools for continuous data analysis than for discrete data analysis.
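As a small illustration of treating seven-point answers as continuous data, here is a Python sketch that computes a mean and a normal-approximation 95% confidence interval. The scores are made up for the example:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical seven-point scores, treated as a continuous variable.
scores = [5, 6, 7, 4, 5, 6, 5, 7, 4, 6, 5, 6]

m, s, n = mean(scores), stdev(scores), len(scores)
half = 1.96 * s / sqrt(n)   # normal approximation; fine for a sketch

print(f"mean {m:.2f}, 95% CI ({m - half:.2f}, {m + half:.2f})")
```

A yes/no version of the same question would support none of this; you could report only a single percentage per period.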

Why Statistics Don’t Matter (sometimes)
With all deference to my colleagues who are statisticians and Six Sigma Master Black Belts, sometimes statistics don’t matter.

  • The survey itself is a form of communication, regardless of whether it is answered, analyzed, or acted on. The survey may change the nature of the audience’s awareness.
  • If you don’t ask the right audience or collect enough responses, the answer does not matter.
  • Some people will never give a top or bottom score under any circumstances.
  • Refine, reduce, remove:
    • Too many surveys make people hate or ignore you.
    • Too many questions will cause your audience to abandon the survey part way through.
  • If your questions are too personal or respondents are embarrassed to tell the truth (for example: admitting they don’t know the answer), answers will be worthless.

* More information on the SEED worldwide Engineering mentoring program is available on the SEED program web page.
