Background and Context

The first CK Knight Schools report was published in 2003, with the objective “…to shine a light on individuals in business schools across Canada who are working to push the social responsibility, environmentally sustainable and community engagement agendas forward.” Nine reports have been completed to date, each producing a ranking of university programs according to how well they integrate sustainability into their curriculum and educational atmosphere. Analysis of MBA degree programs remains central to the ranking, but over the years many degree programs (and disciplines) have been evaluated, including law, engineering, urban planning, architecture, journalism, public policy and public administration, industrial design, teacher education, and actuarial science. This year, in the 9th Knight Schools Ranking, we focus on MBA business programs, along with undergraduate engineering programs.

Research Methodology

The Knight Schools methodology is based on the Beyond Grey Pinstripes Report, which is published every two years by the Aspen Institute. The institute evaluates MBA programs across the globe for social, ethical and environmental stewardship using two metrics: required and elective courses, and faculty research. CK has taken this methodology and expanded it in a number of ways. First, we evaluate MBA business programs because we believe it is critical to root ideas of sustainability in students, the managers of the future, at the start of their training. We also evaluate student-led initiatives on top of the two metrics above, as initial research told us that this is a hotbed of innovation and sustainability activity. This is where we started in 2003, evaluating programs under four broad categories: Institutional (25%), Faculty Research (20%), Student-Led Initiatives (15%) and Course Work (40%).
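As a concrete illustration, the short Python sketch below assembles a final score under the 2003 weighting scheme. Only the four weights come from the text; the category scores and the function name are hypothetical.

    # Minimal sketch of the 2003 weighting scheme described above.
    # Only the four weights are from the text; the example scores are hypothetical.
    WEIGHTS_2003 = {
        "institutional": 0.25,
        "faculty_research": 0.20,
        "student_led_initiatives": 0.15,
        "course_work": 0.40,
    }

    def final_score(category_scores):
        """Weighted sum of the four category scores (each on a 0-100 scale)."""
        return sum(WEIGHTS_2003[cat] * score for cat, score in category_scores.items())

    # Hypothetical program: 0.25*70 + 0.20*55 + 0.15*80 + 0.40*60 = 64.5
    print(final_score({
        "institutional": 70,
        "faculty_research": 55,
        "student_led_initiatives": 80,
        "course_work": 60,
    }))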

Fast-forward to the 2012 Knight Schools ranking. Slight modifications have been made each year to improve and streamline our analysis. In a broad sense, the main difference from 2003 is that programs are now evaluated in three categories, as the ‘faculty research’ evaluation was integrated into ‘institutional’. Within each category, changes to the weighting and normalization methodology were made, as detailed below.

Data Collection

To collect information, a survey designed by CK was distributed to the programs selected for evaluation. Faculty members were given a month to complete the Knight Schools survey. If a school did not complete the survey, CK used public information, such as the university’s website, to collect data, unless exclusion from the ranking was specifically requested. The survey collected data for the period of September 2011 – August 2012.

The survey was designed to evaluate three areas within an academic program environment:

1. “Institutional support” considers whether faculties are doing their part to encourage sustainability through external guest speakers and events, orientation activities, internships and consulting programs, loan forgiveness and scholarships, student competitions, faculty-led community involvement, endowed faculty chairs, institutes and centers, faculty research, and ‘other’ programs or activities.

2. “Student-led initiatives” evaluates how sustainability is fostered outside the classroom by the student body in the form of clubs, groups and events.

3. “Coursework” assesses whether and how sustainability is integrated into the program’s curriculum by looking at required and elective courses (fully and/or partially dedicated to sustainability concepts), joint degrees, and available degree specializations.

Data Analysis

Once survey data was collected for each program, it was analyzed by a team of CK researchers and evaluated for “sustainability” content. If a school or program completed the survey and returned it to CK, researchers fact-checked the submitted data against publicly available information (i.e. university and program websites), sending clarification emails to the program contact where necessary. Using Excel, scores for each question were entered into a spreadsheet and final scores for each program were calculated based on the 2011 points and weighting scheme.

Standard operating procedures for the numerical evaluation of survey information, developed in 2009, were strictly adhered to. An example of this evaluation procedure is as follows. For question 1, external speakers and events, the data submitted by schools or collated by CK researchers typically consisted of a list of speakers with the title, date, location and a brief description of each presentation. CK staff assessed each speaking engagement against the following criteria: the speaker had to be external to the university; the presentation had to take place during the ranking time frame (September 2011 – August 2012); it had to be sponsored in part or in whole by the faculty; the speaker had to lecture on a subject directly related to social, environmental or sustainability issues; and the session or event had to be accessible to the majority of students in the program. If all criteria were met, one point was allocated for that presentation. Speaking engagements listed under question 1 were counted until the maximum number of points (the cap) was reached, yielding a perfect score for that question.
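The Python sketch below mirrors this procedure for question 1. The field names and the cap value are illustrative assumptions, not CK’s actual data schema.

    # Illustrative sketch of the question-1 evaluation: one point per qualifying
    # external-speaker engagement, capped at the question maximum.
    from datetime import date

    CAP = 10  # assumed cap; the text does not state the actual value
    RANKING_START, RANKING_END = date(2011, 9, 1), date(2012, 8, 31)

    def qualifies(event):
        """Check one speaking engagement against the five criteria above."""
        return (
            event["external_to_university"]
            and RANKING_START <= event["date"] <= RANKING_END
            and event["faculty_sponsored"]
            and event["on_sustainability_topic"]
            and event["open_to_majority_of_students"]
        )

    def question1_points(events):
        """One point per qualifying engagement, up to the cap."""
        return min(sum(1 for e in events if qualifies(e)), CAP)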

Methodology and Analysis Challenges

Quantifying “sustainability integration” in education is very challenging: it requires paring an abstract term down to definable categories (i.e. institutional support, student-led initiatives, coursework), which in turn must be defined by measurable quantities (survey questions, e.g. the number of external speakers invited to lecture on sustainability to students as a metric of institutional support). The design of the survey and the questions asked therefore have a significant impact on the ranking results. Over time, CK has adjusted and modified the Knight Schools survey in an effort to create as accurate an assessment of sustainability integration in education as possible, basing the survey design on an established method (the Beyond Grey Pinstripes Report).

The distribution of the survey to selected schools, and the response rate, is a second point of influence in the research methodology. As described above, schools were contacted with details of the ranking and given a month to complete the survey; where a school did not respond, CK collected data from public information, such as the university’s website, unless exclusion from the ranking was specifically requested. A lack of response may influence the ranking outcome, but given that program websites are the primary source of information for incoming and prospective students, all survey information should be available online.

Finally, the most significant influence on the ranking result is the design of the normalization scheme, whereby the information in a survey question is translated into a number, which is then weighted to produce a final ranking score. The cap, or maximum number of points allocated per question or metric to achieve 100%, certainly influences the final outcome. However, this affected all evaluated programs in the same way and is intentional.
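To see how the choice of cap shapes a score, consider the hypothetical calculation below; the counts and cap values are invented, but the formula (the capped count scaled to the maximum) follows the procedure described above.

    # Hypothetical illustration of cap sensitivity in the normalization scheme.
    def question_score(raw_count, cap):
        """Normalized score for one question: the capped count as a fraction of the cap."""
        return min(raw_count, cap) / cap

    # The same program (8 qualifying speakers) earns full marks under a cap of 5,
    # but only 80% under a cap of 10 -- same data, different cap, different outcome.
    print(question_score(8, cap=5))   # 1.0
    print(question_score(8, cap=10))  # 0.8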

Despite the potential limitations above, there are many benefits to producing a ranking in this manner. Quantifying sustainability puts an abstract concept in more concrete terms, which allows progress to be tracked numerically over time. A numerical ranking also allows for direct comparison between programs, which is useful for benchmarking and evaluating sustainability success at home and abroad.
