Evidence-Based Innovation Blog

Percept Research Updates MBA Student Exit Survey Questionnaire

Posted on Aug 13, 2013 8:50:00 AM

Percept Research, the leading provider of MBA student and alumni satisfaction assessments for business schools worldwide, is pleased to announce a significant update to the firm’s popular MBA Student Exit Survey.

The questionnaire for the 2013-14 fielding year has been revised to reduce the overall interview length for graduating students while adjusting the managerially actionable metrics that help guide business schools in their continuous improvement efforts. This update is reflected in each of the instruments specifically designed for full-time, part-time, and executive MBA programs as well as specialized masters programs.

The MBA Student Exit questionnaire revisions include:

  • 2 measures updated for contemporaneous relevance
  • 4 measures removed due to redundancy
  • 1 measure added to enhance the Student Loyalty Index
  • The 11-point continuous response scale converted to a 5-point discrete scale for improved clarity and easier survey completion

These enhancements reflect our continued commitment to improve and respond to contemporary assessment needs.

Questions Updated

The following questions were updated:

  • Based on your entire MBA educational experience, please rate how this program performed on the following attributes:
    • Learning Management System (Blackboard, Canvas, etc.)
  • What percentage of the program cost was reimbursed by your employer? If you received no reimbursement, please enter 0. Include any funds received from the Post 9/11 GI Bill.

These measures were updated to better reflect current terminology and the increased impact of educational funding for veterans.

Questions Removed and Added

The following questions were removed:

  • Overall quality of facility
  • Overall quality of teaching methods
  • Overall quality of curriculum
  • How likely are you to offer financial support to this program as an alumnus?

The ‘overall quality’ measures above were added in 2011-12 to provide an ‘overall’ measure for each survey dimension (each dimension represents a subset of the student experience). This design was intended to support regression analysis of the items within each dimension against its ‘overall’ measure in order to isolate the key drivers of that dimension. However, we recognized that Key Driver Analysis should instead be based on regression against the Student Loyalty Index, which measures overall stakeholder temperament, rather than against particular survey dimensions.
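
For readers curious how such an analysis works in practice, here is a minimal sketch of a Key Driver Analysis via regression against a loyalty index. The file name, column names, and driver list are hypothetical illustrations, not Percept Research’s actual variables:

```python
# Minimal Key Driver Analysis sketch: regress standardized attribute
# ratings against an overall loyalty index and rank the coefficients.
# All file and column names below are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("exit_survey_responses.csv")  # one row per respondent

drivers = ["faculty_quality", "career_services", "curriculum", "facilities"]
X = df[drivers].apply(lambda col: (col - col.mean()) / col.std())  # z-scores
y = df["student_loyalty_index"]

model = sm.OLS(y, sm.add_constant(X)).fit()

# Larger standardized coefficients suggest stronger drivers of loyalty.
print(model.params.drop("const").sort_values(ascending=False))
```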

The ‘likelihood of financial support’ measure was removed to reduce bias against non-North American programs, where alumni financial support is less common. This measure has been replaced with the following question:

  • If you were seeking an MBA again, how likely would you be to choose to attend this program again?

This new measure will be included in the Student Loyalty Index.

Scale Adjustment

The most significant adjustment this year is converting the previous 0–10 numeric rating scale (NRS) to a 5-point categorical scale for the majority of the quantitative rating questions.

We have been planning this conversion for some time, conducting an extensive literature review and testing the categorical scale with our MBA Student Midterm Survey. Based on these efforts, we determined this conversion would improve response yields, make the survey easier for respondents to complete, and make the results easier for survey administrators to interpret.

The best response scale is one that is easy to understand by respondents, discriminates well between respondents' perceptions, is easy to interpret, and has minimal response bias. There is not one perfect scale for all uses. Instead, we must consider the goals of the survey and how the results will be utilized.

The overarching concern when choosing the number of response options is the effect on the scale's reliability and its ability to discriminate between degrees of respondents' perceptions of an item (discriminability). One might think, given these requirements, that more scale points would be better than fewer, but this is not necessarily the case.

Ideally, a rating scale should consist of enough points to extract the necessary information. There is evidence that the more scale points used, the more reliable the scale (Churchill and Peter 1984), and using too few points will result in a scale that is less reliable. However, using more points than respondents can handle will likely increase variability without a concomitant increase in precision.
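
When we speak of "reliability" here, a common internal-consistency estimate for multi-item rating scales is Cronbach's alpha. The sketch below shows the standard calculation on a small, purely illustrative response matrix:

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
# The ratings matrix below is illustrative, not real survey data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five respondents rating three items on a 5-point scale
ratings = np.array([[5, 4, 5],
                    [3, 3, 4],
                    [4, 4, 4],
                    [2, 1, 2],
                    [5, 5, 4]])
print(round(cronbach_alpha(ratings), 2))  # 0.94 for this toy data
```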

Eleven-point scales can look a bit overwhelming for customer satisfaction survey participants, resulting in a limited range of responses. Because of this, they can be misleading. For example, 7.5 out of 10 might sound like a good result, but some research indicates people will only score between 6 and 9 unless they feel very strongly about the issue. Taking this into account, a rating of 6 actually indicates that a respondent is somewhat dissatisfied, 9 is satisfied, and the midpoint of 7.5 is only neutral.

Another issue worth considering is that it may not be the number of scale points that drives discriminability and reliability, but rather the anchors used to label those points. Krosnick and Fabrigar (1997) found that labeling increased the reliability of respondent answers, and some studies indicate that presenting a scale as a series of verbal descriptions leads to greater dispersion and less clustering of responses.

The new 5-point Likert scale is favored because its discrete integer values are more convenient for, and better understood by, respondents, and because each response option is defined with a verbal anchor (i.e., the scale is fully categorical).

Previous Continuous Scale:

[Image: 2013 questionnaire scale, Faculty dimension (0–10 continuous)]

New Discrete Scale:

[Image: 2014 questionnaire scale, Faculty dimension (5-point discrete)]

With the 5-point scale, responses are easier to interpret discretely, whereas the gradations of the continuous 11-point scale become too fine to express easily in words (i.e., as anchors). While simulation and empirical studies have generally concurred that reliability and validity are improved by using 5-point scales rather than coarser ones (those with fewer scale points), more finely graded scales do not improve reliability and validity further. Despite the strong opinions of some scale enthusiasts, there simply is not powerful empirical evidence that a single number of scale points is always best.

The 5-point scale has high test-retest reliability and provides better dispersion of responses (and is, therefore, more discriminating) than other scales. This scale also permits respondents to complete MBA Lifecycle surveys in a shorter amount of time with a high level of accuracy.
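
As a rough illustration of what "better dispersion" means, the sketch below compares the relative spread of two hypothetical response sets using the coefficient of variation; the numbers are invented to mimic the mid-range clustering described above, not drawn from our data:

```python
# Compare response dispersion across two scale formats using the
# coefficient of variation (SD / mean). Data is purely illustrative.
import numpy as np

def dispersion(responses):
    r = np.asarray(responses, dtype=float)
    return r.std(ddof=1) / r.mean()

eleven_point = [6, 7, 7, 8, 8, 8, 9, 7]  # clustered in the 6-9 band
five_point   = [2, 4, 3, 5, 4, 3, 5, 2]  # spread across the categories

print(round(dispersion(eleven_point), 2))  # 0.12 -- little relative spread
print(round(dispersion(five_point), 2))    # 0.34 -- more discriminating
```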

Trending Impact

Previous studies indicate that the two scales maintain a high degree of consistency over fielding periods with identical populations: peaks match with peaks, troughs match with troughs, and there is a remarkable degree of parallelism. Unlike the numeric NRS, the 5-point scale is specifically tied to a conceptual framework of categorical performance and improvement.

To maintain trending to past data with the new scale, we will recode past results in the following manner (using the performance scale as an example):

New Scale            Old Scale
Extremely Well (5)   10, 9
Very Well (4)        8, 7
Moderately Well (3)  6, 5
Slightly Well (2)    4, 3
Not at All Well (1)  2, 1, 0

The rationale behind this coding structure is based on industry research on similar transformations and an analysis of our historical data. The final recoding structure will be validated against the actual 2013-14 results to ensure the logic holds.
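
As a sketch, the recoding in the table above reduces to a simple lookup; the function name is ours for illustration, and the final mapping may differ once the 2013-14 results are analyzed:

```python
# Recode historical 0-10 ratings onto the new 5-point performance scale,
# following the mapping table above.
OLD_TO_NEW = {
    10: 5, 9: 5,       # Extremely Well
    8: 4, 7: 4,        # Very Well
    6: 3, 5: 3,        # Moderately Well
    4: 2, 3: 2,        # Slightly Well
    2: 1, 1: 1, 0: 1,  # Not at All Well
}

def recode(score: int) -> int:
    """Map a historical 0-10 rating to the new 1-5 scale."""
    return OLD_TO_NEW[score]

historical = [10, 7, 5, 0, 8]
print([recode(s) for s in historical])  # -> [5, 4, 3, 1, 4]
```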

Why Update?

The graduate management education industry is constantly changing and it is important for our MBA Lifecycle Survey Suite to stay relevant to the most salient issues and priorities of business school assessment. 

Long-time Student Exit Survey participants may recognize that the questionnaire is updated regularly with minor adjustments. Otherwise, we keep the questionnaire largely unchanged to facilitate year-to-year comparisons for business schools.

We release major updates to our student and alumni assessments at longer intervals, such as this 2013-14 update. This approach balances institutions’ need for year-to-year comparisons with the industry’s need to respond periodically to changes in the higher education landscape, informed by methodical research and collaborative discussion.

The graduate management education industry is dynamic, with increasing demands for data-driven decision-making (DDDM) and rapid adoption of new technologies for learning, curriculum innovation, high-impact practices, etc.  This survey update will respond to these developments and ensure that the MBA Lifecycle Survey Suite continues to provide business schools with the best available information regarding student engagement.

Instrument Innovation Process

Percept Research employs a rigorous and collaborative approach to stay relevant to current issues and concerns.  This innovation process involves:

  • Gathering ideas and feedback from MBA Lifecycle Survey users and from industry consultants
  • Consulting with leading industry associations such as the Executive MBA Council and MBA Roundtable
  • Examining the instrument’s psychometric properties
  • Conducting reliability and validity analyses with over 12 years of experiential data

We want to hear from you…  

Share Your Ideas and Feedback

We are continuously seeking your ideas, feedback, and questions to help innovate all of our MBA Lifecycle surveys. We have established Get Satisfaction communities for our lifecycle studies: http://getsatisfaction.com/perceptresearch

These communities are dedicated areas where our clients can exchange ideas, provide detailed input on how to improve our services, and get answers to support questions.  This insight allows us to deliver the best user experience possible and plan for future enhancements to our research and consulting services.

We look forward to hearing your ideas and suggestions!

By-line:

Brian Mahoney, author for this article, is a marketing research consultant and Managing Partner of Percept Research.  Brian welcomes your questions and comments.

Topics: Brian Mahoney, MBA Student Exit Survey, Questionnaire