Peer-Reviewed Article

Developing an MBA Assessment Program: Guidance from the Literature and One Program’s Experience  

by Jacqueline K. Eastman, Ralph C. Allen, and Claude R. Superville

 


Ralph C. Allen is a Professor and Chairman, Department of Marketing and Economics, Valdosta State University; Jacqueline K. Eastman (jeastman@valdosta.edu) is an Associate Professor, Department of Marketing and Economics, Valdosta State University; and Claude R. Superville is an Associate Professor, Department of Management, Valdosta State University.


In this paper, the authors discuss the importance of assessment, guidance from the assessment literature, and their university's experience in developing an assessment plan for its MBA program. They discuss in detail the creation of measurable learning outcomes, a knowledge measure, a skills measure (e.g., a capstone simulation project), and a curriculum/instructional review process. Based on the results of the assessment process, the authors then describe closing the assessment loop by selecting areas for improvement and refining the assessment measures. Suggestions are provided for other programs pursuing their own assessment efforts.


INTRODUCTION

Business school administrators and faculty are responsible for planning, designing, and administering outcomes assessment and for integrating the assessment results into their continuous improvement processes. [Miller, Chamberlain, and Seay 1991] According to the AACSB, The International Association for Management Education, successful quality and continuous improvement strategies require processes to measure and document performance and may involve redesigning the business curriculum. Therefore, all AACSB-accredited business schools must embark on this assessment journey. [Turnquist, Bialaszewski, and Franklin 1991]

However, the AACSB is not the only source of assessment pressure. Some states now require higher education assessment. [Durant 1997; Herring and Izard 1992; Jumper 1992] Additionally, the business community is pressing for educational improvements. [Chonko and Caballero 1991] If programs do not take a leadership role in assessment, outside constituents may impose an assessment process on them. [Miller et al. 1991; Hartmann 1992]

The underlying tenet of assessment is outcome-based education (OBE). In OBE, educational success is based on what the student learns (outcomes) rather than what the student is taught (input). [Swanger 1996] Some of the issues that arise with OBE are how performance is to be assessed, how teachers and schools are to be held responsible for results, and what system of restructuring must occur. The benefits of OBE, though, are many: a shift in focus from faculty teaching to student learning, a shift in thinking from course requirements to course results, clearer definitions of learning outcomes, new insights for teaching and learning, renewed enthusiasm for teaching, and more precise insights about institutional achievement. [Torgerson 1991]

How assessment should be conducted for an MBA program may, however, be unclear. The purpose of this paper is to use the assessment experiences of one MBA program and its faculty to aid others in assessment. This paper is divided into three main sections. First, the professional literature relating to the development of an assessment program is discussed. The literature, however, provides only a limited guide. [Miller et al. 1991] Second, the reader is directed through the successes (and problems) of one program that committed itself to the assessment process. Suggestions for others starting an assessment program are provided. Finally, the educational implications of a successful assessment process are discussed.

A REVIEW OF THE LITERATURE ON ASSESSMENT

Assessment is "any regular and systematic process by which the faculty designs, implements, and uses valid data gathering techniques for determining program effectiveness and making decisions about program conduct and improvement." [Metzler and Tjeerdsma 1998, p. 470)] Assessment is used to make judgments about student learning and goal achievement for a program. [Torgerson 1991] Assessment shifts the focus from what faculty are teaching to what students are learning, how this learning contributes to professional success, and what value the program adds. [Boyatzis et al. 1992] Assessment, through the use of outcomes [Herring and Izard 1992], stresses measurement and accountability [Boyatzis et al. 1992].

Stakeholders  

In determining outcomes and revising curriculum to meet them, the importance and difficulty of including and addressing external stakeholders are stressed. [Boyatzis et al. 1992; Ehrhardt 1992] Although their inclusion is difficult, these external stakeholders may provide a perspective different from that of faculty and administrators. [Jenkins, Reizenstein, and Rodgers 1984] Specifically, while professors often emphasize knowledge outcomes, the business community stresses that skills are of equal importance. [Lamb, Shipp, and Moncrief 1995]

Curriculum Assessment Issues in the Business Literature

The majority of the literature specific to graduate business education focuses on the need to revise the business curriculum in order to develop better outcomes. [Kobayashi 1991; Patterson and Helms 1993; Windsor and Tuggle 1982] MBA programs are criticized for concentrating too much on the financial and marketing aspects of business [Patterson and Helms 1993] while ignoring areas such as technical skills, creative skills, communication skills, cross-functional skills, ethical and people skills, and competitive and international issues in which students are often deficient [Ehrhardt 1992; Friedman 1996; Ghorpade 1991; Jacob 1993; Jenkins et al. 1984; Krugel 1997; Ondrack 1992/1993; Patterson and Helms 1993]. For example, Reinsch and Shelby (1996) specifically describe the need to teach MBA students the communication skills that extend beyond just making formal presentations. The literature stresses that more managerial relevance in MBA programs is needed. [Jeuck 1998]

Additionally, studies such as Boyatzis, Cowen, and Kolb (1992), Ehrhardt (1992), Kleindorfer (1994), and Kobayashi (1991) discuss major curriculum revisions of specific MBA programs. For example, Boyatzis et al. (1992) describe the seven design principles utilized in developing a new MBA curriculum for the Weatherhead MBA program. Finally, the need for alternative MBA delivery systems (e.g., distance and on-line learning, and flexible scheduling), the use of team teaching, and techniques for grading student teams are discussed. [Ehrhardt 1992; McMillen, White and McKee 1994; Phillips 1998; Sampson, Freeland, and Weiss 1995] For example, Heimovics, Taylor and Stilwell (1996) discuss starting a new Executive MBA program at the University of Missouri-Kansas City that involves a team teaching approach. Therefore, while the literature does address a variety of MBA curriculum and delivery issues, it has not significantly addressed assessment process design.

Unfortunately, only a few resources directly discuss MBA assessment. Schlesinger (1996) describes the use of an integrative, cross-functional approach to MBA education at Babson utilizing an integrative case exam involving Ford Motor Company that has assessment implications. Dubas, Ghani, Davis, and Strong (1998) discuss St. Joseph University's use of surveys of MBA alumni and soon-to-be graduates to measure their general satisfaction and their perception of the program's success in addressing twelve knowledge and skill areas. Finally, several assessment resources (not peer-reviewed) are available on the web that discuss higher education outcomes assessment (e.g., http://www2.acs.ncsu.edu/UPA/survey/resource.htm). Other fields, however, such as accounting [Herring and Izard 1992], education [Metzler and Tjeerdsma 1998; Webb 1996], public administration [Durant 1997], and geography [Jumper 1992], address assessment in more detail. Therefore, more research is needed in the specific area of how to conduct MBA assessment.

Assessment Measurement Issues

In measuring knowledge outcomes, Jumper (1992) discusses in detail the creation of a locally developed geography knowledge test. In contrast, Hartmann (1992) and Herring and Izard (1992) describe the benefits of using national, standardized tests to compare programs. However, Hartmann (1992) notes that national tests may be inferior to locally designed exams in measuring the factual knowledge gained in a program designed for a particular environment.

Other fields provide guidance in measuring skill outcomes. For example, Durant (1997, p. 398) describes for an MPA program, "how creating an outcomes-based capstone course became a catalyst for fundamentally reevaluating what the student ‘product’ of the program was to be, what was taught in core courses, when and how it was taught, and who would teach it." Hartmann (1992) discusses the use of a capstone class in developing skills for sociology.

After measuring a program's outcomes, one must make comparisons. Two distinct types of comparisons are possible. First, assessment may compare the before and after performance for the same group of students or compare the performance of one group who had the program with a control group who did not. Second, assessment may compare the program group with a national norm. [Webb 1996] The first type measures value added, but it does not determine whether the program's product meets a competitive quality level. The second type allows the program to compare its product with others but does not actually determine how effective the program is in changing the product from its initial state. Ideally the two measures are combined. While AACSB/EBI (http://www.aacsb.edu/Publications/EBI/ebindx.htm) does offer a comparative benchmarking survey in the area of satisfaction with an MBA program, no nationally-normed instruments are available that specifically measure MBA knowledge or skills gained. Therefore, most MBA program measures will be of the value-added type.
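
To make the value-added comparison concrete, the following sketch computes a mean pre-to-posttest gain and a paired t statistic from matched student scores. This is a minimal illustration only, assuming paired scores are available per student; all numbers are hypothetical placeholders, not data from the program described here.

```python
# Illustrative value-added comparison from paired pre/posttest scores.
# All scores below are hypothetical, not the program's actual data.
from math import sqrt
from statistics import mean, stdev

pretest  = [52, 61, 48, 55, 60, 45, 58, 50]   # percent scores at entry
posttest = [68, 75, 63, 70, 72, 59, 74, 66]   # same students at graduation

gains = [post - pre for pre, post in zip(pretest, posttest)]

# The mean gain is the value-added measure for this group.
mean_gain = mean(gains)

# A paired t statistic tests whether the mean gain differs from zero.
t_stat = mean_gain / (stdev(gains) / sqrt(len(gains)))

print(f"Mean pretest:  {mean(pretest):.1f}%")
print(f"Mean posttest: {mean(posttest):.1f}%")
print(f"Mean gain:     {mean_gain:.1f} points (paired t = {t_stat:.2f})")
```

A norm comparison would instead compare the posttest mean against a national benchmark, which, as noted above, does not currently exist for MBA knowledge or skills.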

Assessment Design Issues

The development of an assessment program involves four steps. First, the desired outcomes of the program, consistent with the school's mission, must be determined. [Herring and Izard 1992] These outcomes may reflect desired student knowledge, skills, attitudes, or behaviors. [Torgerson 1991] Curriculum development is then based on these outcomes. Second, measurement tools that can determine whether these outcomes are met by the program must be designed and applied. Third, a set of qualitative and quantitative criteria must be developed to evaluate the effectiveness of the assessment efforts. [Eastern New Mexico University 1998-1999] Fourth, a procedure must be established to address weaknesses exposed by the measurement process. This fourth step, called "closing the assessment loop," is the component of the assessment program that is often least developed but most critical for the success of the assessment program.

DEVELOPING DESIRED LEARNING OUTCOMES

When the two-year, lockstep MBA program was initiated in 1993, the faculty on Valdosta State University's MBA Committee (MBAC) created the desired learning outcomes with the input of local business leaders who compose the College's Business Advisory Council. The MBAC designed the curriculum to meet these outcomes. After initiating the program, a focus group was conducted with the Business Advisory Council to assess the MBA program. In response, the MBAC revised the outcomes to make them measurable. The revised outcomes include knowledge-based Outcome One and skill-based Outcomes Two to Five. As faculty, we believe that demonstrating knowledge and demonstrating skills are both important outcomes. Knowledge is a necessary, but not sufficient, condition for achieving success in our program because skills are also needed. We have not set weights (percentages) on the importance of knowledge versus skills because the need for knowledge and skill development varies by student based on what each brings to the program (i.e., some students enter the program with stronger communication skills, others with better quantitative skills, and still others with a better knowledge base). All students, upon graduating from the program, must demonstrate that they have the knowledge and skills outlined in these outcomes.

MEASURING LEARNING OUTCOMES

Once the desired skills and knowledge outcomes were established, measures of these outcomes had to be found or developed. Some measures that departments may need to address include the following: (1) test result measures (such as knowledge gained from the program), (2) skills measures, (3) placement related measures (such as placement rates and average starting salary for graduates), and (4) long-term career satisfaction and achievement measures (such as employer and alumni satisfaction). [Miller et al. 1991]

Because Valdosta State University provides career placement measures, the MBA faculty decided to focus on knowledge and skill assessment, curriculum and instructional review, and graduate satisfaction. The next four subsections describe how the faculty established the knowledge and skill outcome measures and how these measures evolved, followed by the development of a curriculum and instructional review process and of graduate satisfaction measures.

Knowledge Assessment Measure - A knowledge assessment tool had to be chosen for Outcome One. Attempting to avoid a homegrown knowledge test, the MBAC initially adopted the Core Curriculum Assessment Program (CCAP) examination developed by AACSB. The first cohort of 30 entering MBA students took the CCAP (in 1993) with a pretest score of 56%. The posttest score of the 23 graduating students (in 1995) was 65%. For the second cohort of MBA students (in 1995), the pretest score was 55% (25 students) and the posttest score (in 1997) was 64% (13 students). These scores exhibited reliability and some pre-to-posttest improvement. However, because the CCAP was designed to measure core knowledge of undergraduate business students, the MBAC thought that the test failed to reflect the objectives or content of the MBA program and discontinued its use.

The MBAC then initiated the development of a homegrown instrument in 1995. The members of the MBAC had to determine the level of knowledge to be tested, the particular means to test this knowledge, and the responsibility for testing. For comparison, the faculty knew it needed to measure the level of student knowledge at both the start (a pretest score) and the end (a posttest score) of the program. The MBAC also needed to establish benchmarks to assess improvement. Unfortunately, the literature provides no suggestion of a "good" benchmark score for either a pretest or a posttest.

Initially, each professor was asked to create a ten-item multiple-choice test that could be given at the beginning and end of the program. Because MBA courses were only taught once every two years and were extensively revised each time taught, some professors resisted having to prepare pretest questions so far in advance. Additionally, some professors did not like the multiple-choice format. Finally, the MBA faculty was concerned that the posttest would reflect a "recency" effect (i.e., subjects taken at the end of the program would be better remembered).

To address these concerns, the MBAC revised the format to allow each professor to develop a pretest and posttest knowledge assessment measure that best fit his or her style and subject matter and to allow for pre- and post-testing in each course. The pretest is a short test given the first day of class to assess the class's initial knowledge level. The posttest is given on the last day of class or incorporated into a larger final exam. Care is taken to ensure that the tests meet Outcome One.

Each professor submits to the MBA Director the course syllabus, a copy of the test used with the answers, and the pretest and posttest averages along with the percentage change. The results for each course, as well as the overall average, are compiled and distributed to the entire MBA faculty. For the 1995/1997 cohort, the overall pretest average was 28% and the posttest average was 68%, a 40-percentage-point improvement. In 1997, the MBA program dropped the lockstep requirement and allowed students to start the program at any time. Since then, the knowledge assessment score has been reported each Fall Semester rather than for each cohort. Later classes continued to improve, with average pretest/posttest scores of 26%/79% for the 1997/1998 school year, 24%/69% for the 1998/1999 school year, and 35%/82% for the 1999/2000 school year.
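
As an illustration of this compilation step, the sketch below rolls per-course pretest/posttest averages and percentage-point changes up into an overall average. The course names and scores are invented placeholders, not the program's actual results.

```python
# Hypothetical per-course pre/post scores; names and values are invented.
courses = {
    "Managerial Economics": {"pre": [30, 25, 20], "post": [75, 70, 65]},
    "Marketing Management": {"pre": [35, 28, 33], "post": [80, 72, 78]},
}

def average(scores):
    return sum(scores) / len(scores)

overall_pre, overall_post = [], []
for name, scores in courses.items():
    pre_avg, post_avg = average(scores["pre"]), average(scores["post"])
    overall_pre.append(pre_avg)
    overall_post.append(post_avg)
    # Report each course's averages and its percentage-point change.
    print(f"{name}: {pre_avg:.0f}% -> {post_avg:.0f}% "
          f"(+{post_avg - pre_avg:.0f} points)")

# The overall average is what gets distributed to the entire MBA faculty.
print(f"Overall: {average(overall_pre):.0f}% -> {average(overall_post):.0f}%")
```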

The results show that the program consistently meets or comes close to an annual passing rate of 70% for knowledge assessment. Additionally, a wide variety of testing styles is utilized, including multiple choice, matching, fill in the blank, problems, essays, and cases. Therefore, overall, the MBAC feels that knowledge Outcome One is being met. The information by course is shared with the administration and the faculty in the hope that individual faculty (especially those with a below-average passing rate) will work to improve any knowledge deficiency in their particular courses.

Skill Assessment Measures - Along with developing a knowledge assessment measure, the MBA faculty had to establish a skill assessment measure for Outcomes Two to Five and a means to administer it. Initially, each MBA professor created a skill assessment survey that described certain skills for his or her particular MBA course and asked each student to rate on a one-to-seven Likert scale the extent to which each skill had been gained. This survey was unproductive for two reasons. First, the skill lists created by faculty did not always match the program outcomes, and the skills listed were often very specific and varied each time the course was taught. Second, the survey measured only the students' perceptions of their skills; the students were not required to demonstrate them. Therefore, the MBAC decided on a new approach to skill assessment.

In 1998, the MBAC adopted Strategic Management as a capstone course and its simulation project as the basis for skills assessment. This project involves teams of students acting as managers of a company competing in the international athletic shoe industry. Each team's company is in its eleventh year of operation, and each team must make eight decisions addressing the areas of finance, manufacturing, labor, shipping, plant capacity and automation, and branded/private label marketing. Teams also are required to prepare a five-year strategic plan and a code of ethics. Focusing on the last three years of operation and simulating a presentation to their stockholders regarding the company, the students make a presentation to the MBAC. Therefore, this skill assessment is a complex and realistic simulation.

An Observer Report form based on the skill outcomes was created for the faculty to complete. This form spells out the needed skills addressed in MBA Outcomes Two to Five and requires the faculty to determine whether the skills are satisfactorily demonstrated by the students' projects. The form also generates discussion of the strengths and weaknesses of each project and elicits suggested changes for both the overall curriculum and the assessment process. By having multiple evaluators complete this form, the reliability of the process is enhanced. [Hartmann 1992] All Observer Reports are submitted to the MBA Director, who then submits a report to the faculty as part of an annual report addressing program improvement efforts.
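
As a rough sketch of how multiple evaluators' ratings might be summarized, the code below computes, for each outcome, the share of observers who judged the demonstration satisfactory. The labels and ratings are hypothetical; the actual Observer Report is a paper form, and this aggregation is only one plausible way to tabulate it.

```python
# One dict per faculty observer; True means "satisfactorily demonstrated".
# All labels and ratings here are hypothetical placeholders.
reports = [
    {"Outcome 2": True,  "Outcome 3": True,  "Outcome 4": True,  "Outcome 5": False},
    {"Outcome 2": True,  "Outcome 3": False, "Outcome 4": True,  "Outcome 5": False},
    {"Outcome 2": True,  "Outcome 3": True,  "Outcome 4": False, "Outcome 5": True},
]

# Agreement among evaluators gives a rough gauge of reliability:
# outcomes rated satisfactory by most observers are likely being met.
for outcome in reports[0]:
    ratings = [report[outcome] for report in reports]
    share = sum(ratings) / len(ratings)
    print(f"{outcome}: {share:.0%} of observers rated satisfactory")
```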

Some skill measures were easier to develop than others. For example, the MBAC was able to develop satisfactory measures for Outcomes Two (leadership and team building) and Four (communication skills), but the measures for Outcomes Three (ethics and diversity) and Five (decision making) are limited. The measure for Outcome Three determines whether the students can identify ethical and diversity issues, but is not able to measure how effectively the issues are managed. The members of the MBAC believe that by breaking Outcome Five (decision making) into identifying problems/opportunities, demonstrating quantitative techniques, and supporting decisions, an adequate evaluation could be made. Future adjustments will focus on improving the measures of Outcomes Three and Five.

In reviewing the forms and the comments from the faculty, several areas for improvement have been suggested. First, in oral communication, more practice with public speaking and better use of visual aids would improve the students’ presentation abilities. Second, the students, while able to conduct a financial analysis, have difficulty explaining or justifying their financial decisions or their financial and managerial impact.

As noted above, a real flaw in the assessment measure is that the simulation did not measure the students' ability to manage ethical and diversity issues. One suggestion is to build ethical issues into the simulation to see how the students would respond. Additionally, the MBAC requested that the form be revised next time to rate the items on a one-to-seven satisfaction scale (rather than the current satisfactory/unsatisfactory measure) in order to capture a greater range of satisfaction with the skill demonstration.

The only administrative problem with skills assessment is convincing the faculty to give up an evening and to submit their comments in writing. With the assessment administered each year, getting input from all faculty is difficult. Overall, however, the skill assessment process has been successful, and based on its results, the MBA faculty is making needed changes in the MBA program and the assessment process.

Continuous Instruction and Curriculum Assessment

The MBAC wanted instructional and curriculum review and revision to be undertaken on a continuous basis. The MBAC also wanted the faculty to learn from each other's experiences. Therefore, the MBAC developed a form on which each faculty member states at the end of his or her course: (1) what went well in the course, (2) what did not go as well, and (3) suggested changes for the next time the course is taught. In the annual report, the professors' comments are listed anonymously. Each professor, therefore, can see the successes and problems that other professors encountered.

For example, successful techniques included class discussion, real-world projects, covering leading-edge topics, outside guest speakers, an all-case format, simulations, and guided projects. Problems included overuse of lecture format, inadequate student prep time between Tuesday and Thursday classes, two classes in one night, coordinating group work, non-performers in groups, inadequate mathematical skills, busy work, and inadequate course structure. Finally, suggestions for improvement included increased use of a seminar format, more emphasis on practical applications, and smaller segmentation of projects/assignments (i.e., breaking up one large project into several smaller segments). This process allowed faculty to discern potential areas for improvement and ways to enhance their MBA course and the overall program for the next cycle.

Satisfaction Assessment Measures

To determine graduates' satisfaction with the MBA program and the impact of the degree on the graduates' careers, the MBAC initially conducted its own post-graduation surveys. These surveys asked graduates about their overall satisfaction with the timing of classes, physical facilities, classmates, camaraderie of the class, and the program in general. For each course, graduates were asked their satisfaction with the content of the course, the amount of work required, and the instructional method. Additionally, graduates had the opportunity to write comments about the individual courses and the overall program. The results were shared with the faculty.

The survey results showed a consistently positive response across classes, suggesting that the students were satisfied with the program. Many of the comments were similar to those provided by the professors in their assessment. The biggest issues raised were the need for more electives, the difficulty with twice-a-week classes, and the pressure of juggling work and class assignments. Regarding individual classes, the students noted a need for a quantitative course (rather than a research class) and a desire for more emphasis on information technology and the managerial and financial implications of international business.

When the University decided in 1998 to conduct all surveys from the Institutional Research office rather than from individual Colleges, the MBA satisfaction survey was revised, and items of interest to the University were added (many of them, such as student housing, not relevant to part-time working MBA students). The response rate was very unsatisfactory (only five students). The low response rate is in part due to the length of the survey and its University, rather than program, orientation. We recommend that any satisfaction survey be tailored to the particular target group.

CLOSING THE ASSESSMENT LOOP

An assessment plan needs to address how to strengthen any exposed weaknesses. [Hartmann 1992] Since starting the MBA program in 1993, several overall changes to the program have been implemented in response to assessment. In 1997, the program went from a lockstep format to a continuous admission, flexible progress program. Additionally, the international class was divided into two separate international courses covering managerial issues and financial issues. The program stopped offering "Business Research," "Distribution," and "Total Quality Management" and instead offered "Management Information Systems" and "Quantitative Methods." In 1998, with the conversion to the semester system, class-meeting times changed from two nights a week to one night a week. Finally, in 2000, the program was revised from a 36-hour program with twelve required courses to a 30-hour program with eight required courses, one international elective, and one additional MBA elective. The members of the MBAC believe that this change will balance the need for rigor in the graduate courses with the pressure students face from work obligations. While the College does not have the resources to develop and offer new electives, the members of the MBAC hope that this change will provide students more choice and flexibility.

Boyatzis et al. (1992) suggest that the impact of program changes can be determined by the effect on application and enrollment numbers. As a result of the Valdosta faculty's efforts, the MBA program has experienced enrollment growth of 141%. The members of the MBAC believe that the continuous improvements in the program are responsible for this significant growth.

Although overall changes have been successfully implemented to improve the program, we are now at the stage where individual courses in the curriculum need to show improvement based on assessment data. Content and teaching style may need to be revised in some courses in order to enhance knowledge retention, skill development, and student satisfaction. For example, a greater emphasis on skill development may be required. In the initial development of the MBA outcomes, the outcomes were not assigned relative weights of importance by the faculty. The faculty believes that both knowledge and skills are critical for our students' success in business. However, the College's Business Advisory Council continues to emphasize the importance of skills, and the skills assessment does indicate weaknesses in certain skill areas (e.g., oral communications). The members of the MBAC will need to address this issue, realizing that focusing on one particular MBA outcome may come at the expense of other outcomes.

Because most faculty do not want to give up their classroom autonomy, and curriculum changes are very time consuming, the faculty may not easily accept that improvements are needed. Therefore, while the MBAC has been successful over time in making positive changes to the overall MBA program, getting faculty to make needed improvements in their individual courses may be difficult.

Additionally, the MBAC must also consider the role played by admissions requirements in establishing and assessing the level of pre-program knowledge and skills. The MBAC uses a combination of prerequisite course requirements, undergraduate GPA, GMAT score, employment experience, and writing samples to determine admission. These criteria have been effective at predicting success in the MBA program. Those who were marginal in either GMAT score or undergraduate GPA (i.e., students accepted on a probationary basis) had a lower likelihood of successfully completing the program than the stronger candidates. The probationary acceptances were more likely to drop out of the program, to be dismissed for academic reasons, and to take longer to complete the program. The quantitative admission criteria (i.e., GMAT and GPA) used to determine probationary acceptances may assess knowledge better than skills. If, to ensure that post-program skill levels meet employers' expectations, the program shifts to a more skills-oriented curriculum, greater emphasis may need to be placed on admission requirements that better assess skills (such as employment experience and writing samples).
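
As a simple illustration of how the completion claim might be tabulated, the sketch below compares completion rates for regular and probationary admits. The counts are hypothetical placeholders, not the program's records.

```python
# Hypothetical admission/completion counts, invented for illustration.
admits = {
    "regular":      {"completed": 40, "entered": 45},
    "probationary": {"completed": 6,  "entered": 12},
}

for group, counts in admits.items():
    rate = counts["completed"] / counts["entered"]
    print(f"{group}: {rate:.0%} completed "
          f"({counts['completed']} of {counts['entered']})")
```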

Assessment is a long, continuous process that MBA programs need to address. AACSB standards [Curriculum Planning and Evaluation standard C.2.2., p. 20] require that "Each degree program should be systematically monitored to assess their effectiveness and should be revised to reflect new objectives and to incorporate improvements based on contemporary theory and practice." To meet this standard, MBA programs need to demonstrate a process for planning, monitoring, and revising their curriculum, and this process needs to have resulted in new or revised curricula. Thus, MBA programs will need to develop an assessment process to determine how their curriculum can be improved and then make those improvements (i.e., close the assessment loop) based on the assessment results.

If assessment is successful, this process can result in an improvement in both the students' knowledge and the professors' teaching. However, student and faculty support may not come easily. [Brown and Koenig 1993; Herring and Izard 1992; Torgerson 1991] A balance between the demands put on faculty and students (such as with filling out long surveys) and the need to acquire assessment information is required. Benefits to both the students and the faculty for the costs they incur in the assessment process must be demonstrated. Additionally, in order to encourage participation in future assessment efforts, stakeholders need to be shown the positive results of assessment, especially in terms of improving a program by closing the assessment loop.

EDUCATIONAL IMPLICATIONS AND CONCLUSIONS

Assessment is imperative, and departments avoiding assessment activities will succeed only in encouraging outside constituents to force their views on the department. [Herring and Izard 1992; Jumper 1992] Assessment is not a process that can be done quickly to meet outside requirements, and an attempted quick fix may result in more work later. A number of issues remain underdeveloped in the assessment of MBA programs. An evaluation of different assessment methods, how the results of assessment are used to facilitate change, and the measurement of the impact of assessment on all stakeholders are all areas in which further research is needed.

Assessment needs to be viewed, not as a threat to a program and its autonomy, but rather as an opportunity to demonstrate continued improvement. In the current educational environment, programs are often asked to do more with less. Assessment may reveal the impact of these pressures on outcomes. For assessment to be useful, though, it needs to involve external constituents, to be detailed, and to demonstrate plans for improvement. In this paper, the authors describe their journey in assessment and encourage other MBA programs to develop and report on their assessment efforts.


SOURCES

AACSB (The International Association for Management Education). Standards for Business Accreditation, 2000.

Banta, Trudy W. and Janet A. Schneider. "Using Faculty-Developed Exit Examinations to Evaluate Academic Programs," Journal of Higher Education, 59 (January/February 1988), 69-79.

Boyatzis, Richard E., Scott S. Cowen, and David A. Kolb. "Implementing Curricular Innovation in Higher Education: Year One of the New Weatherhead MBA Program," Selections, 9 (Autumn 1992), 1-9.

Brown, Daniel J. and Harold F. Koenig. "Applying Total Quality Management to Business Education," Journal of Education for Business, 68 (July/August 1993), 325-329.

Chonko, Laurence B. and Marjorie J. Caballero. "Marketing Madness, or How Marketing Departments Think They're in Two Places at Once When They're Not Anywhere at All (According to Some)," Journal of Marketing Education, 13 (Summer 1991), 14-25.

Dubas, Khalid M., Waqar I. Ghani, Stanley Davis, and James T. Strong. "Evaluating Market Orientation of an Executive MBA Program," Journal of Marketing for Higher Education, 8 (4, 1998), 49-59.

Durant, Robert F. "Seizing the Moment: Outcomes Assessment, Curriculum Reform, and MPA Education," International Journal of Public Administration, 20 (February 1997), 397-429.

Eastern New Mexico University. "Academic Outcomes Assessment Plan 1998-1999, College of Business Administration," http://www.enmu.edu/users/smith/Assess/aoap/1998_1999/plans/cobug.htm, 1998-1999, 1-3.

Ehrhardt, Michael C. "Managerial Education: A Guide to Curriculum and Content," Survey of Business, 28 (Summer 1992), 13-17.

Friedman, Stewart D. "Community Involvement Projects in Wharton’s MBA Curriculum," Journal of Business Ethics, 15 (January, 1996), 95-101.

Gwinner, Kevin P. and Richard F. Beltramini. "Alumni Satisfaction and Behavioral Intentions: University Versus Departmental Measures," Journal of Marketing Education, 17 (Spring 1995), 34-40.

Ghorpade, Jai. "Ethics in MBA Programs: The Rhetoric, the Reality, and a Plan of Action," Journal of Business Ethics, 10 (December 1991), 891-905.

Hartmann, David J. "Program Assessment in Sociology: The Case for the Bachelor's Paper," Teaching Sociology, 20 (April 1992), 125-128.

Heimovics, Dick, Marilyn Taylor, and Richard Stilwell. "Assessing and Developing a New Strategic Direction for the Executive MBA," Journal of Management Education, 20 (November 1996), 462-478.

Herring, Hartwell C. III and C. Douglass Izard. "Outcomes Assessment of Accounting Majors," Issues in Accounting Education, 7 (Spring 1992), 1-17.

Jacob, Nancy L. "The Internationalization of Management Education," Selections, 10 (Autumn 1993), 18-22.

Jenkins, Roger L., Richard C. Reizenstein, and F. G. Rodgers. "Report Card on the MBA," Harvard Business Review, 62 (September/October 1984), 20-30.

Jeuck, John E. "Pride and Prejudice," Selections, 14 (Spring 1998), 29-36.

Jumper, Sidney. "Program Assessment in Geography: Boondoggle or Opportunity," Journal of Geography, 91 (May-June 1992), 94-96.

Kleindorfer, Paul R. "TQM at the University of Pennsylvania," Managing Service Quality, 4 (1994), 20-23.

Kobayashi, Noritake. "Renaissance of U.S. Business Education," Tokyo Business Today, 59 (July 1991), 33.

Krishnan, H. Shanker and Thomas W. Porter. "A Process Approach for Developing Skills in a Consumer Behavior Course," Journal of Marketing Education, 20 (May 1998), 24-34.

Krugel, Marcy L. "Integrating Communications in the MBA Curriculum," Journal of Language for International Business, 8 (2, 1997), 36-52.

Lamb, Charles W. Jr., Shannon H. Shipp, and William C. Moncrief III. "Integrating Skills and Content Knowledge in the Marketing Curriculum," Journal of Marketing Education, 17 (Summer 1995), 10-19.

Lamont, Lawrence and Ken Friedman. "Meeting the Challenges to Undergraduate Marketing Education," Journal of Marketing Education, 19 (Fall 1997), 17-30.

McMillen, M. Cecilia, Judith White, and Anne McKee. "Assessing Managerial Skills in a Social Context," Journal of Management Education, 18 (May 1994), 162-181.

Metzler, Michael W. and Bonnie L. Tjeerdsma. "PETE Program Assessment Within a Development, Research, and Improvement Framework," Journal of Teaching in Physical Education, 17 (July 1998), 468-492.

Miller, Fred, Don Chamberlain, and Robert Seay. "The Current Status of Outcomes Assessment in Marketing Education," Journal of the Academy of Marketing Science, 19 (Fall 1991), 353-362.

Ondrack, Daniel. "Internationalizing Management Education: Human Resource Management," Journal of Business Administration, 21 (1/2, 1992/1993), 237-249.

Patterson, Shirley and Marilyn M. Helms. "Improve MBAs to Meet the Needs of Production and Operations Management," Executive Development, 6 (1993), 18-20.

Phillips, Vicky. "Online Universities Teach Knowledge Beyond the Books," HR Magazine, 43 (July 1998), 120-128.

Reinsch, Lamar Jr. and Annette N. Shelby. "Communication Challenges and Needs: Perceptions of MBA Students," Business Communication Quarterly, 59 (March 1996), 36-53.

Sampson, Scott E., James R. Freeland, and Elliot N. Weiss. "Class Scheduling to Maximize Participant Satisfaction," Interfaces, 25 (May/June 1995), 30-41.

Schlesinger, Phyllis Fineman. "Teaching and Evaluation in an Integrated Curriculum," Journal of Management Education, 20 (November 1996), 479-499.

Torgerson, Richard L. "Assessing Student Academic Achievement: Bethany College," North Central Association Quarterly, 66 (Fall 1991), 477-481.

Turnquist, Philip H., Dennis W. Bialaszewski, and LeRoy Franklin. "The Undergraduate Marketing Curriculum: A Descriptive Overview," Journal of Marketing Education, 13 (Spring 1991), 40-46.

Webb, Florence. "The Necessary Art of Program Assessment," Thrust for Educational Leadership, 25 (February/March 1996), 30-32.

Windsor, Duane and Francis D. Tuggle. "Redesigning the MBA Curriculum," Interfaces, 12 (August 1982), 72-77.