Creating a Mission-Based Reporting System at an Academic Health Center


Lydia Pleotis Howell, MD, Michael Hogarth, MD, and Thomas F. Anders, MD

Abstract

The authors developed a Web-based mission-based reporting (MBR) system for their university’s (UC Davis’s) health system to report faculty members’ activities in research and creative work, clinical service, education, and community/university service. They developed the system over several years (1998–2001) in response to a perceived need to better define faculty members’ productivity for faculty development, financial management, and program assessment. The goal was to create a measurement tool that department chairs could use to counsel faculty on their performance. The MBR system provides measures of effort for each of the university’s four missions. Departments or the school can use the output to better define expenditures and allocations of resources. The system provides both a quantitative metric of time spent on various activities within each mission and a qualitative metric for the effort expended.

The authors report the process of developing the MBR system and making it applicable for both clinical and basic science departments, and the mixed success experienced in its implementation. The system appears to depict the activities of most faculty fairly accurately, and chairs of test departments have been generally enthusiastic. However, resistance to general implementation remains, chiefly due to concerns about reliability, validity, and the time required to complete the report. The authors conclude that MBR can be useful but will require some streamlining and the elimination of other, redundant reporting instruments. A well-defined purpose is required to motivate its use.

The development of mission-based management programs has been the focus of many academic medical centers. The Association of American Medical Colleges (AAMC) has encouraged its use. The AAMC defines mission-based management as “a process for organizational decision making that is mission-driven, ensures internal accountability, distributes resources in alignment with organization-wide goals, and is based on timely, open and accurate information.”1 An essential aspect of mission-based management is the ability to measure faculty and department activities that contribute to the missions of the school. This is, however, a highly controversial area, since faculty fear that poorly designed measurement systems will adversely affect their salaries, promotions, workloads, and allocation of support. Relative-value units (RVUs), commonly used for billing, are a generally accepted method of gauging clinical productivity; however, there are only a few published methods describing productivity measures for non-clinical missions, such as education.2–6 Likewise, only a few of the published mission-based management systems have attempted to integrate the information from all missions for an individual faculty member.7,8

In this article we describe our development of a mission-based reporting (MBR) system that measures faculty members’ quantitative and qualitative efforts in the four missions of clinical work, research, education, and administration/community-service activities. We designed MBR as a reporting system for chairs to provide them with quantitative and qualitative information about their departments related to each of the four missions. We avoided the term mission-based management because we wanted to deemphasize control and the negative connotations of the term management. We intended, rather, that the term reporting should connote recognition of faculty members’ efforts and support for the growth of their careers. The purpose of MBR is to provide a reporting tool for use in evaluating faculty resources and department performance, both retrospectively and prospectively. The tool helps chairs to better fulfill the missions of their departments and the school, plan for the future, and mentor and reward individual faculty members.

System Design

Technical characteristics: We initially designed the MBR system in 1998 as an Excel spreadsheet, but changed it to a Web-based program early in the course of development so that participating faculty could better access their individual records and enter and view their own results. The current version of MBR employs a three-tier architecture with a Web browser as the client software, an application server for middle-tier “business logic,” and a relational database for data storage. Since the MBR system is Java Servlet 2.1-compatible, it can be deployed in a wide variety of server environments. User summary reports are provided as portable document format (PDF) files, constructed “on the fly” from data in the database and submitted to the Web browser when a user requests the report. We chose the PDF format because it prints with high fidelity, giving summary reports a professional appearance. A printed record is available for each individual faculty member. Printable summary reports compile data by department, for the school as a whole, and by faculty rank and/or series across departments (Charts 1–3). Security levels exist so that an individual faculty member can view his or her own personal record only. A department chair can view the records of all faculty members within his or her own department, and the deans can view the records of all faculty and departments.
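The three security levels described above amount to a simple role check at report time. The following is an illustrative Python sketch, not the authors’ Java servlet code; the `User` and `Record` types and the role names are hypothetical:

```python
# Hypothetical sketch of the MBR viewing rules: faculty see only their own
# record, chairs see their department's records, deans see everything.

class User:
    def __init__(self, name, role, department):
        self.name, self.role, self.department = name, role, department

class Record:
    def __init__(self, owner, department):
        self.owner, self.department = owner, department

def can_view(user, record):
    """Return True if this user may view this faculty record."""
    if user.role == "dean":                        # deans: all records
        return True
    if user.role == "chair":                       # chairs: own department only
        return record.department == user.department
    return record.owner == user.name               # faculty: own record only
```

A chair in medicine, for example, could view a medicine faculty member’s record but not a surgery colleague’s.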

Designing the database structure: We designed the basic data-entry module in three sections: an Activity Section for faculty to enter their year’s activities, an Evaluation Section for qualitative assessment of performance, and an automated Summary “Report Card.” Each of the three sections is further subdivided according to the university’s four missions: clinical service, investigation and creative work (i.e., research/scholarship), teaching, and administration/university/community service. Before a faculty member begins to enter data, that individual’s “budgeted” or “targeted” percent effort for each mission is entered by the department manager. Budget projections (targets) of faculty effort by mission for each faculty member are required as part of each department’s annual budget submission. These budgeted projections are entered into the MBR system.

The MBR system is a self-report system whereby individual faculty members enter their data (quantitative and qualitative) by mission and immediately see the relative values of their efforts. Faculty entries are later reviewed and validated by the department chair during an annual career-planning session required for all faculty. Based on the faculty member’s entries in the Activity Section, the MBR program computes an estimate of the time spent in each activity, using the RVU codes embedded in the program. Activity scores for each mission are summed. Each mission summary score is then transferred to the “% Actual” field in the summary report card. A grand total for percent effort is also computed. The summary report card thus compares previously entered “projected” or “targeted” effort with actual activities entered by the faculty member for each mission (Chart 1).
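The report-card arithmetic described above can be sketched briefly. This is an illustrative Python sketch, not the production system; the activity names and RVU weights shown are hypothetical (the real weights live in the database and vary by rank and series):

```python
# Hypothetical RVU weights: percent of a year credited per unit of activity.
RVU_WEIGHTS = {
    "peer_reviewed_article": 10.0,  # per article published
    "half_day_clinic": 1.0,         # per weekly half-day, over a year
    "lecture_hour": 0.1,            # per hour of lecture
}

def percent_actual(entries):
    """Sum RVU-weighted activity counts into a percent-effort score."""
    return sum(RVU_WEIGHTS[name] * count for name, count in entries.items())

def summary_row(mission, target_pct, entries):
    """One line of the summary report card: targeted vs. actual effort."""
    return {"mission": mission,
            "% targeted": target_pct,
            "% actual": percent_actual(entries)}

row = summary_row("investigation", 40.0,
                  {"peer_reviewed_article": 2, "lecture_hour": 30})
print(row["% actual"])  # 23.0
```

The chair would then see a targeted effort of 40% against an actual of 23% for this mission, the kind of discrepancy the career-planning session is meant to surface.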

Defining activities and computing RVUs: Faculty from diverse departments within the University of California Davis School of Medicine served on committees dedicated to defining parameters for each of the university’s four missions (listed earlier). Faculty volunteered, were appointed, or were selected to serve on committees because of their special interests or expertise. In general, committees were open to anyone who wished to serve, but committee size did not exceed 15 for any one committee. Two of us (LH and TA) served as chair or co-chair for each of the committees. We charged each committee to select and define the most relevant and representative activities for its assigned mission. The charge urged comprehensiveness but, at the same time, demanded simplicity.

The Activity Section translates activities into quantitative time/effort-based metrics. Thus, another of the committees’ charges requested estimates of the quantity of time expected to complete each activity over the course of a year (Chart 4). The quantity of time was defined as a percentage of a year spent performing that activity, using a 50-hour work week as the standard. The committees achieved consensus on estimated average times to accomplish each activity based on personal experience and creative deduction. For example, there is no easily established standard for the length of time it takes to complete a manuscript. However, promotion committees generally expect faculty to publish the equivalent of at least two journal articles per year. Our clinical faculty strive to have a minimum of 20% of their time protected for scholarly activities. Thus, the RVU time allotment for a journal article for a clinical series faculty member was calculated accordingly.
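The journal-article derivation above can be made explicit. This sketch assumes the stated norms (a 50-hour work week, 20% protected scholarly time, two expected articles per year); the 48-week working year is our added assumption, not stated in the article:

```python
# Worked example of a time-based RVU derivation for one journal article.
HOURS_PER_WEEK = 50          # the standard work week used by the committees
WEEKS_PER_YEAR = 48          # assumption; the article does not state this
protected_fraction = 0.20    # minimum protected scholarly time
articles_per_year = 2        # promotion-committee expectation

annual_hours = HOURS_PER_WEEK * WEEKS_PER_YEAR              # 2400 hours
scholarly_hours = annual_hours * protected_fraction          # 480 hours
hours_per_article = scholarly_hours / articles_per_year      # 240 hours
rvu_percent_per_article = protected_fraction / articles_per_year * 100

print(hours_per_article)         # 240.0
print(rvu_percent_per_article)   # 10.0
```

Under these assumptions, one peer-reviewed article for a clinical-series faculty member is worth roughly 10% of a year’s effort.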

In a later refinement, a higher RVU score was assigned to articles published in peer-reviewed journals than to limited-distribution articles because promotion committees value the former more highly. Similarly, book chapters were given more relative value for clinician–educator faculty than for research faculty. In the same spirit, abstracts and “submitted” grants were weighted more for junior than for senior faculty. Such differential weightings of time-based RVU codes motivate and reward faculty for activity that is aligned toward academic success in their respective series (i.e., track) and rank. The MBR program knows which RVU codes to select for a given faculty member because the department manager enters the rank and series of each faculty member at the same time that the percent “targeted” effort from the budget is entered. The faculty member entering data is “blind” to the RVU weight assigned to each activity.
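Differential weighting of this kind is naturally modeled as a lookup keyed by activity, series, and rank. The keys and numeric weights below are hypothetical; the real system stores its codes in the relational database:

```python
# Hypothetical RVU table keyed by (activity, series, rank).
# "any" acts as a wildcard for series or rank.
RVU_TABLE = {
    ("peer_reviewed_article", "clinician_educator", "any"): 10.0,
    ("book_chapter", "clinician_educator", "any"): 8.0,   # valued more here...
    ("book_chapter", "researcher", "any"): 4.0,           # ...than for researchers
    ("submitted_grant", "any", "assistant"): 5.0,         # junior: work in progress counts more
    ("submitted_grant", "any", "professor"): 2.0,
}

def rvu_for(activity, series, rank):
    """Resolve a weight, falling back to 'any' for series and/or rank."""
    for key in ((activity, series, rank), (activity, series, "any"),
                (activity, "any", rank), (activity, "any", "any")):
        if key in RVU_TABLE:
            return RVU_TABLE[key]
    raise KeyError(activity)

print(rvu_for("book_chapter", "clinician_educator", "assistant"))  # 8.0
print(rvu_for("submitted_grant", "researcher", "assistant"))       # 5.0
```

Because the department manager enters series and rank up front, the faculty member never sees (and cannot game) which weight applies.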

Both the teaching and the clinical services committees were required to distinguish patient care with students from clinical service without associated teaching. Since published reports indicate that faculty spend approximately 43–53% of time teaching residents in ambulatory care settings,9,10 we designed the MBR system to allocate 50% of clinical time spent with trainees to the clinical mission and 50% to the teaching mission. The clinical services module was designed as a logic tree requiring faculty to enter the weekly half-days in the clinic with and without students, and the number of months per year as ward attending with and without students. The MBR program then allocates effort to the two missions automatically. In the first version of the MBR system, these calculations had been left to the individual faculty member. Significant confusion and misinterpretation of instructions led us to automate the input via the structured decision tree.
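The automated 50/50 allocation can be sketched as follows. The per-half-day and per-month percent-effort values are hypothetical placeholders for the embedded RVU codes; only the halving rule comes from the article:

```python
# Hypothetical percent-of-year values for one unit of clinical duty.
PCT_PER_HALF_DAY = 2.0     # one weekly half-day of clinic, over a year
PCT_PER_WARD_MONTH = 4.0   # one month as ward attending

def allocate(half_days_with, half_days_without, months_with, months_without):
    """Split clinical effort between the clinical and teaching missions.

    Time with trainees is credited half to clinical service and half to
    teaching; time without trainees is all clinical.
    """
    with_trainees = (half_days_with * PCT_PER_HALF_DAY
                     + months_with * PCT_PER_WARD_MONTH)
    alone = (half_days_without * PCT_PER_HALF_DAY
             + months_without * PCT_PER_WARD_MONTH)
    return {"clinical": alone + 0.5 * with_trainees,
            "teaching": 0.5 * with_trainees}

# Four weekly half-days with students, two without, two attending months
# with students:
print(allocate(4, 2, 2, 0))  # {'clinical': 12.0, 'teaching': 8.0}
```

Automating this split through the structured entry form removed the arithmetic from the faculty member’s hands, which is exactly the confusion the first version suffered from.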

Similarly, for the administration/university/community service mission, we did not want to credit all committee and administrative activities equally. The university endorses community service, and the promotion committees expect some service activities of faculty. However, academic advancement is not enhanced by excessive community service at the expense of scholarship. Therefore, less RVU credit and fewer opportunities were provided in the Activity Section for these activities. Only major school and university committees, such as the institutional review board, promotion committee, and admission committee, were included. These committees require large time commitments of faculty and are considered important for the school’s function. We did not include minor committees and service work outside the university but credited them qualitatively in the Evaluation Section. We coded administrative activities that are considered part of the job description of a chair, dean, division chief, or other leader on the basis of the size of the department/division or scope of the responsibility.

For the qualitative metrics designed for the Evaluation Section, the committees were charged with developing a list of standards reflecting the quality of the work performed. The standards were ranked from 0 to 5. Thus, the Evaluation Section (Chart 2) summarizes the qualitative aspects of the activities scored previously. The teaching mission is evaluated from the perspectives of both students and peers, and the scores are averaged to achieve a final evaluation score for teaching. Individual evaluation standards are not additive: an individual faculty member records only one standard for each mission. This evaluation score is then automatically imported into the Summary Report Card and can be viewed separately for each mission.

As part of the Summary Report Card, the computer also multiplies the evaluation score by the activity score to achieve a single quantity/quality product for each mission. The mission products are then summed to obtain a single summary score for each faculty member. The following theoretical model drives the interpretation of this summary score. If a faculty member’s actual activities total 100% and her or his evaluation codes for each mission are 3, the resultant final summary score of 300 (100 × 3) reflects expected and appropriate performance. In other words, faculty members whose summary scores are at least 300 are on target for academic advancement. A score below 300 suggests substandard performance for the year and requires attention from the chair. A score above 400 indicates outstanding performance worthy of an incentive reward.
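The summary-score arithmetic above is small enough to sketch directly. The mission percentages below are illustrative; the 300/400 thresholds are the ones stated in the text:

```python
def summary_score(missions):
    """Sum quantity x quality products.

    missions: list of (percent_actual, evaluation_code) pairs, one per mission.
    """
    return sum(pct * evaluation for pct, evaluation in missions)

def interpret(score):
    """Apply the interpretive thresholds described in the text."""
    if score < 300:
        return "substandard: needs chair's attention"
    if score > 400:
        return "outstanding: incentive-worthy"
    return "on target for advancement"

# A faculty member at 100% total effort with an evaluation of 3 in each
# of four missions (e.g., 40/30/20/10 percent actual):
on_target = summary_score([(40, 3), (30, 3), (20, 3), (10, 3)])
print(on_target)             # 300
print(interpret(on_target))  # on target for advancement
```

Note that the product form means a faculty member can reach 300 either by high quantity at average quality or by average quantity at high quality, which is part of why the two dimensions are also reported separately.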

Implementation: Testing and Modifications

Phase 1: Selected feasibility testing: We chose to test and modify the MBR system in three phases. In phase 1 in 1998, we tested the initial RVU and performance codes created by each committee for inconsistencies, omissions, and other user-entry problems on 21 randomly selected volunteer faculty members. Of the 21, two had quantitative scores less than 100% (56.0 and 55.9%), six had scores between 100% and 150%, and 13 had scores higher than 150%. The faculty with the high scores were hardworking, but not working at the level their scores would indicate, nor were the two faculty members whose scores were less than 100% considered to be “slackers.” The mission in which the largest number of faculty showed discrepancies between targeted effort and actual effort was the teaching mission. Sixteen of 21 faculty exceeded their targeted expectations by more than 10%. The next most discrepant mission was the investigation and creative work mission, with nine of 21 faculty demonstrating similar over-reporting. For the clinical mission, all of the faculty had discrepancies of less than 10% between targeted effort and actual effort. For the administration/university/community service mission, department chairs and deans had actual percentages below the targeted percentages because some of their activities had not been included. In response to this initial pilot trial, adjustments were made to the RVU codes. For the teaching mission, time values believed to be excessive were decreased for some activities. In the quantitative portion of the administration/university/community service section, a line was added for “administrative stipend” (% salary support) to account for time spent on administrative activities relevant to the job descriptions of department chairs or other leaders. The results from the phase 1 trial enabled us to better define activities and adjust the RVU weighted scores.

Phase 2: Pilot testing with selected departments: In phase 2 in 1999, we tested the revised system on 131 faculty members from eight departments. These departments ranged in size from five to 28 faculty members and included two basic science departments, three surgical departments, two medical departments, and one hospital-based specialty department, with an almost even division between clinical and basic science activities. Faculty members in each of the test departments completed MBR data entry online prior to their annual career planning sessions with their chairs. The printed results for each faculty member were validated by the chair and discussed with the faculty member.

For the investigation and creative work mission, only one department did not have faculty members who were under target. Half of the departments had more than 48% of their faculty under target, suggesting under-performance. The under-target faculty in this mission tended to be basic scientists or faculty with large percentages of time designated for research. They were often junior faculty who were still in the start-up phases of their research careers. Based on these findings, several new activities were added to the investigation and creative work section to reflect work in progress. Credit for published abstracts, grants submitted but not yet funded, cost recovery on grants, and time spent in study sections was added. These activities were also given greater RVU weight for junior faculty than for senior faculty. Only one department produced results that showed that the majority of its faculty were over target for the investigation and creative work mission. This was a surgery department whose faculty had been budgeted with minimal time for research. As a consequence, even modest scholarly output made it fairly easy for these faculty to exceed their targeted time.

For the teaching mission, all of the phase 2 trial departments produced results that indicated that the majority of faculty were on or over target. The improvements to the RVU weightings after phase 1 had been successful: only one fourth to one third of the faculty were under target, and roughly equal numbers were over and under target. For the administration/university/community service mission, five of the eight departments also showed the majority of their faculty to be on or over target for that mission. Likewise, for the clinical mission, six of the eight departments with clinicians showed that more than 50% of their members were on target. In two departments large percentages of faculty were under target. One of these was a hospital-based specialty whose clinical activities were not easily measured by the system. The results of phase 2 pointed to yet other areas in need of revision.

Phase 3: School-wide implementation: Based on the experience from phase 2, we made additional refinements, focusing primarily on further fine-tuning the RVU scores. Because some faculty were concerned about the invisibility of RVU equivalents of the activity scores, we revised the program so that a mouse click provides the actual RVU weight used in the computation. In addition, we added “help” buttons for specific items whose definitions had been ambiguous. A mouse click on the help button now provides a specific definition of the activity.

During the post-phase 2 refinements, we reconvened the committees. Their further guidance and advice were reflected in the revision. Many committee members had experienced first-hand the phase 2 implementation. Throughout all phases of MBR development, we actively pursued dialogue with our faculty. We discussed difficulties and changes in a variety of forums such as the faculty senate, the Council of Department Chairs, and the curriculum committee, and at department faculty meetings. Individual faculty provided input directly or via e-mail. Phase 3 tested MBR in a school-wide trial of all faculty and departments.

We modified the RVU coding system to stratify faculty by rank and faculty series. Since junior faculty are often in more of a “building” phase of their careers, with less published investigative/creative work or funded grants, instructors and assistant professors were given more credit for work in progress than were senior faculty. Stratification based on rank and series also expanded the system’s summary reporting and data-analytic capacities.

In 2000 for phase 3, the dean’s office required use of the new version of the MBR system for annual faculty career planning by all departments in the school. The dean’s office did not articulate a clear purpose for MBR but did clearly state that the results of MBR would not be used for any salary or promotion planning. The dean’s office implied that the results would be used only to further refine categories of academic activity and the RVU and Evaluation Section codes.

Discussion

Developing an MBR system is a complex task requiring careful group planning, considerable administrative support, and significant time for design, testing, and modification. Even then, there are obstacles to general faculty acceptance and uniform use. It is not clear from the extant literature that any mission-based management system has gained general acceptance and is regularly being employed successfully.

The system we describe differs from other published mission-based systems in several ways. One important difference concerns the definition of the research/scholarly mission and what types of work should be included as evidence of productivity. In our system, we specifically selected the term “investigation and creative work” to encompass the scholarship of education, application, and integration as well as the scholarship of discovery. The former are evidenced by publication of books, book chapters, educational manuals, review articles, and peer-reviewed articles describing clinical experience. In other mission-based systems, many of these activities would be included under the educational mission.6,7 However, our university defines all of these types of activities as creative scholarship and views them as research specific to one or more of the academic series. The university criteria are reinforced in the MBR system by giving due credit for integrative and educational publications for faculty in the education and clinical series. RVU credits were weighted according to the publication (chapter versus peer review) and the faculty member’s rank and series.

Another difference unique to MBR is the separation of quantitative and qualitative measurements of productivity. The system described by Nutter et al. integrates a qualitative multiplier directly into the quantitative RVU score assigned to each activity.6 By separating the two in MBR, department chairs or administrators can consider each dimension separately for different purposes. Examining the quantitative component alone can be useful in determining staffing or assignment of duties to an individual. The qualitative component can be examined separately to advise faculty about areas needed for improvement. The quantity/quality product provides an indication of the cost–benefit value of the activity. The summary score might be useful in the promotion process or in comparing faculty for other forms of rewards. School administrators might also consider rewards on a broader department level. For example, the mission-based management system at the University of Florida bases 20% of the department’s budget allocation on the qualitative component of its effort in the educational mission.11

It is important to note that the phase 2 trial with eight departments demonstrated that many of the faculty in the clinical departments had quantitative scores significantly exceeding 100%. This indicates that most faculty are working more than the 50-hour week, which had been considered the standard in creating this MBR system. We were not surprised by this result. We operate a rapidly growing primary care network in a highly competitive managed care market. The faculty’s clinical workload has significantly increased.

If the quantitative RVU scores assigned to clinical activities are deemed accurate and fair, faculty members should be able to advance academically while working only slightly more than 100% time. If faculty members can succeed academically only by working clinically at effort levels that greatly exceed 100%, then the expectations surrounding academic advancement and the assignment of clinical workload are in direct conflict. Demanding sustained performance much greater than 100% will lead to faculty burnout and problems with retention. Exit interviews by the dean with a number of faculty have suggested that “private” group practice is a more personally rewarding and manageable alternative to the 150% effort required by academic medicine. We believe that it is important to document faculty efforts beyond normal working hours in order to support academic advancement and better align faculty compensation with faculty effort.

During the phase 2 trial, we also found it interesting that the mission with the most discrepancy between target effort and actual effort was the investigation and creative work mission. Basic scientists were understandably suspicious of a system that made them look underproductive. Gauging research productivity had been problematic during the design stage. The research subcommittee had specifically concluded that quantitative effort in this mission should be based only on final products (published papers, funded grants). The other missions were largely time-based. Since the MBR system is designed to be implemented annually, research productivity may be specifically compromised because of publication lag times and grant-submission review cycles. Most research projects take several years before coming to fruition. Since work in progress was not originally credited and only published work was considered, a faculty member could appear to be under-productive one year and over-productive the next year when the work that was in progress the first year was finally published in the second year.

We used the results of phase 2 to revise the MBR system. In phase 3, we included additional credit for salary support from grants, for abstracts, and for new grant submissions. The AAMC’s mission-based management program noted that there are some advantages in including these activities, and that they are included in mission-based systems at other schools.12 Despite these additions, some element of under-reporting of faculty efforts in the investigation and creative work mission may continue to exist. Discovery-type research is by nature an inefficient process in which many time-consuming efforts do not result in funded grants or publishable work. If the MBR system described here is to be used as part of annual faculty career counseling, chairs will need to be cognizant of this issue and not unfairly evaluate a faculty member unless a trend is observed for more than one year. This mission will merit continued scrutiny as the system is further refined.

The difficulties we encountered in phase 3 testing of the MBR system include a persistent general resistance by faculty and chairs. Faculty concerns centered on resistance to quantification of their activities, a belief that the information collected would be more harmful than helpful, and a conviction by each specialty that its activities are unique and therefore cannot be fitted into a general template. Similar difficulties have been encountered by others and remain a challenge for general implementation.

One significant remaining challenge that requires further refinement is the area of on-call time. The issues of in-house versus at-home call, 24-hour versus night and weekend call, procedural versus consultative call, and resident-supported versus non-resident-supported call are difficult to equate across specialties.

Despite these ongoing challenges, we believe that the overall experience with the MBR system at UC Davis has been positive. Significant faculty-wide attention has been focused on the benefits of MBR, and there has been general recognition of its necessity. Skeptical department chairs became more enthusiastic when shown the summary results for their faculty. In general, chairs of the eight test departments in phase 2 felt that the MBR system did give higher scores to the faculty they had previously perceived as high achievers, and lower scores to those faculty who they felt were relatively weaker. They also found MBR to be a good springboard for discussions with faculty members during their annual career-counseling sessions.

We are making an effort to overcome continued resistance by some faculty and to address the barriers to implementation. Existing data collected by other administrative units, such as a faculty member’s clinical RVU-generation report and research grant and contract dollars, should be downloaded directly into that individual’s MBR record. Such automation reduces redundancy, minimizes individual input, and increases data integrity and report accuracy. However, MBR may never gain acceptance until input efforts result in responsive decision making for allocation of resources to departments and/or more streamlined procedures for academic advancement.

MBR can be used by department chairs as a management tool for individuals, to discuss faculty performance and goals and determine salary, or to automate some of the tedious hard-copy paperwork required for promotion actions. For departments, examination of the total projected effort and actual effort expended in each mission can aid in determining faculty staffing and work assignments, identifying recruitment needs, and developing department budgets. For the school, MBR data can be used to aid in equitable allocation of funds and space to missions and departments. The allocation of positions and money to departments on the basis of MBM has been described elsewhere.9 Use in decision making, however, requires trust in the accuracy of the system. Future efforts to ensure accuracy and build trust will require refinement of quantitative and qualitative scores for each mission. Comparison of MBR results with successful promotion actions is one way to establish validity.

Acknowledgement

Special thanks to Benny Poon, Medical Informatics Group, for his programming expertise in the development of the Web-based MBR system.


This article was originally published in the
February 2002 issue of Academic Medicine.

References

1 Association of American Medical Colleges.
Mission-Based Management Program:
Introducing the MBM Resource Materials.
Washington, DC: AAMC, 2000.


2 Bardes CL, Hayes JG. Are the teachers
teaching? Measuring the educational activities
of clinical faculty. Acad Med. 1995;70:111–4.


3 Bardes CL, Hayes JG, Falcone DJ, Hajjar DP,
Alonso DR. Measuring teaching: a relative
value scale in teaching. Teach Learn Med.
1998;10:40–3.


4 Bardes CL. Teaching counts: the relative-value scale in teaching. Acad Med. 1999;74:1261–3.


5 Sachdeva AK, Cohen R, Dayton MT, et al. A
new model for recognizing and rewarding the
educational accomplishments of surgery faculty. Acad Med. 1999;74:1278–87.


6 Nutter DO, Bond JS, Coller GS, et al. Measuring
faculty effort and contributions in medical
education. Acad Med. 2000;75:199–207.


7 Garson A, Strifert KE, Beck R, et al. The
metrics process: Baylor’s development of a
“report card” for faculty and departments.
Acad Med. 1999;74:861–70.


8 Hilton C, Fisher W, Lopez A, Sanders C. A
relative-value–based system for calculating
faculty productivity in teaching, research,
administration, and patient care. Acad Med.
1997;72:787–93.


9 Zweig SC, Glenn JK, Reid JC, Williamson
HA, Garrett E. Activities of the attending
physician in the ambulatory setting: what part
is teaching? Fam Med. 1989;21:263–7.


10 Melgar T, Schubiner H, Burack R, Aranha A, Musial J. A time–motion study of the
activities of attending physicians in an
internal medicine and internal medicine–
pediatrics residents’ continuity clinic. Acad
Med. 2000;75:1138–43.


11 Watson RT, Romrell LJ. Mission-based
budgeting: removing a graveyard. Acad Med.
1999;74:627–40.


12 Holmes EW, Burks TF, Dzau V, et al.
Measuring contributions to the research
mission of medical schools. Acad Med. 2000;
75:304–13.

Looking at the Forest Instead of Counting the Trees: An Alternative Method for Measuring Faculty’s Clinical Education Efforts

MBM – Looking at the Forest (Jarrell)

Bruce E. Jarrell, MD, David B. Mallot, MD, Louisa A. Peartree, MBA, and Frank M. Calia, MD

Abstract

Purpose: To present an alternative approach to mission-based management (MBM) for assessing the clinical teaching efforts of the faculty in the third and fourth years of medical students’ education. 

Method: In fiscal years 2000 and 2001, interviews were conducted with department chairs and faculty members with major responsibilities in education at the University of Maryland School of Medicine. Using a standard worksheet, each rotation was categorized according to the amounts of time students spent in five teaching modes. After each department described its rotation and maximum teaching time, the department team and the MBM team negotiated the final credit received for its course. This final determination of departmental clinical teaching was used in subsequent calculations. Adjustments were made to the department clinical education time based on the teaching mode. Groups of medical students were surveyed to determine the relative value of each teaching mode. These relative values were then used to modify the clinical education times credited to the department. The last step was to distribute the effort of the faculty between clinical and educational missions.

Results: The data analysis showed approximately 57,000 credited faculty hours in one year for the direct education of medical students across all curriculum years. These hours equal the annual workload of 28 full-time faculty members. 

Conclusions: A powerful use of MBM data is to move from thinking about resource allocation to thinking about the effective management of a complex organization with interlaced missions. Reliable data on faculty’s contributions to medical students’ education across departments enhances other MBM information and contributes to a picture of the dynamic interconnectedness of missions and departments.

One outcome of the profound economic changes in medical reimbursement over the last 15 years is a need for more attention to resource allocation within academic medical centers (AMCs).1 One tool currently enjoying interest is mission-based management (MBM), whereby money or effort is matched, albeit with great difficulty, to the AMC’s three traditional missions of education, research, and clinical care. Decisions regarding departmental support by the dean can then be made on a mission-directed rather than on a historical basis.2 Progress has been made in reliably consolidating clinical and research budgets from various accounting systems, allowing a global view of those missions. This consolidation has permitted resource allocation to be data based. Educational activity has been more difficult to measure.3 Many efforts at educational assessment have relied on faculty-effort surveys and thus largely on self-reporting, with its inherent problems of inaccuracy, nonresponse, and difficulty in categorizing various teaching activities.4,5 Self-reporting is particularly problematic for faculty describing their clinically based teaching activities: difficulties in accurately and consistently separating clinical care from teaching time, and in dealing with trainees at multiple levels of sophistication, add to the complexity. A further problem with self-reporting is the tendency to define educational time as whatever time is left over after the time devoted to other, more definable missions (i.e., research, clinical care, and administration) has been determined.

From an institutional perspective, teaching responsibilities are assigned to departments and oversight is provided by a curriculum committee. This assignment creates a departmental teaching responsibility that is, in turn, determined by the sizes of medical students’ classes and lengths of rotation. The department must allocate resources, including faculty, to undertake that educational load. Meeting that educational requirement is determined by the teaching philosophy of the department in conjunction with economic factors, residency workforce, school policies, and external review boards. Thus, although the total departmental teaching load can be estimated accurately and is relatively predictable from year to year, the contribution of any single faculty member might vary as frequently as daily and is much less predictable. In this study, we present an alternative approach to assessing the clinical education efforts of the faculty in the third and fourth years of medical students’ education.

Method: At the University of Maryland School of Medicine, year one and year two have interdisciplinary curricular blocks that use both basic science and clinical faculty. Faculty are assigned hour-for-hour credit for lectures, small-group sessions, and teaching laboratories. In addition, each of those teaching modes is credited with additional time that reflects class preparation and test development. The additional time assigned to each mode was debated and determined by the Fiscal Affairs Advisory Committee (FAAC), the committee charged with overseeing MBM. Course administration credit is based on the length of the course (see Figure 1).

Figure 1

Departmental teaching responsibility in the clinical years is determined by aggregating faculty effort in a variety of teaching modes. Third- and fourth-year rotations have two main components. The first, smaller component is formal classroom sessions with no patient interaction, and includes lectures and small-group discussions. Departmental credit for these sessions is determined by the average number of didactic sessions for each rotation. For each hour of formal didactic teaching, an additional half hour is added to reflect the faculty time necessary to prepare for the session. This “prep” time is less than the credit assigned for preclinical didactic activities. These hours can be attributed to individual faculty members in the department, but in our analysis the total time for these activities is accounted for only at the departmental level. The second and major component of clinical education is face-to-face teaching with an attending physician in the presence of a patient.

In the clinical setting, the occasion for faculty to educate medical students depends on the number of trainees rotating through the department and the amount of time students spend “face-to-face” with faculty. Assuming every interaction consists of one student with one faculty member for a given number of hours per day, it is possible to define in hours the maximum time for “face-to-face” teaching while delivering care. This number of hours is calculated by multiplying the number of trainees in the rotation by the number of days in the rotation by the number of hours per day of faculty interaction in a clinical setting:

Maximum teaching time = number of
medical students × length of rotation in
days × agreed-upon number of hours
with faculty in a clinical setting

This maximum time is defined as the upper limit of the department’s teaching load, which would be the total faculty teaching effort if every trainee were taught in a one-to-one ratio with a faculty member and full credit was given for teaching even though clinical care was also being delivered.
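The upper bound just defined is a simple product; as an illustrative sketch (our own, not the authors’ software), it can be computed directly:

```python
def max_teaching_time(num_students: int, rotation_days: int,
                      hours_per_day: float) -> float:
    """Upper limit of a department's teaching load: the total faculty
    effort if every trainee were taught one-to-one with full credit."""
    return num_students * rotation_days * hours_per_day

# Hypothetical rotation (figures borrowed from the Appendix B example):
# 10 students, a 4-week rotation with 20 clinical days, 5 hours/day.
print(max_teaching_time(10, 20, 5))  # 1000 hours
```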

After determining the maximum teaching time, in fiscal years 2000 and 2001, we conducted interviews with each department chair and one or two faculty members with major responsibilities in education. Using a standard worksheet, completed at the time of the interview, each rotation was categorized according to the amount of time students spent in large versus small groups and by the trainee mix. We categorized activities into five teaching modes: one student and one faculty attending physician; a small group of medical students (two to four) and a faculty attending physician; one student with one faculty attending physician and one resident; small groups (three to five) of medical students and residents and a faculty attending physician; and a large group of both types of trainees (more than five) and a faculty attending physician. During the departmental interview, the clerkship or rotation was discussed and categorized based on teaching groups. Special teaching situations (i.e., “teaching attending physician”) and unique teaching features of the clerkship were discussed so that department-specific credit could be applied. After the department described its rotation, the department team and the MBM team negotiated the final credit that the department received for its course. We used this final determination of departmental clinical teaching in subsequent calculations.

Using the data gathered from each department, adjustments were made to the department clinical education time based on the teaching mode. We surveyed groups of medical students to determine the relative value of each type of educational interaction (see Appendix A). The resulting relative values were reviewed by key “education” faculty members from a variety of clinical departments. We found the values generated by the medical students to be consistent with the perceptions of faculty reviewers. The students’ values for clinical education time were: an attending physician with a medical student (one hour), an attending physician with two to four medical students (0.77 hours), an attending physician with a medical student and a resident (0.56 hours), an attending physician with five medical students and residents (0.36 hours), and an attending physician with more than five medical students and residents (0.25 hours). These relative values were then used to determine the clinical education times credited to the department. (For a more detailed discussion of our method for measuring clinical education time, see Appendix B).

The last major step in our clinical education method was to apportion the efforts of the faculty between their clinical and education missions. Up to this point, the method produced a total time that medical students and faculty were together in a clinical setting, where the faculty member performs both clinical and educational activities. This total time then had to be split between these missions to give appropriate credit and avoid double counting. Through a series of discussions with the members of the school of medicine’s FAAC and with faculty and leadership in the office of medical education, we decided that for every eight hours of patient care delivered in the presence of a medical student, 1.5 hours (18%) of clinical education time would be credited to the department. This ratio of clinical education to patient care is based on input and discussion from students, faculty, and the administration, and does not include time directed to teaching residents in the clinical setting. This method of allotting credit also did not include faculty’s scholarly educational activities unrelated to the medical students’ curriculum, e.g., developing innovative teaching materials for future use, writing textbooks, and advising students.
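The crediting rule above combines the 18% education fraction with the survey-derived relative values. A minimal sketch (ours; the mode labels and function name are hypothetical):

```python
# Relative values from the student survey reported in the Method section.
RELATIVE_VALUE = {
    "1 student": 1.00,
    "2-4 students": 0.77,
    "1 student + 1 resident": 0.56,
    "3-5 students + residents": 0.36,
    ">5 students + residents": 0.25,
}

# 1.5 of every 8 patient-care hours counts as education; the worked
# examples in Appendix B round this ratio to 18%.
EDUCATION_FRACTION = 0.18

def credited_hours(care_hours: float, mode: str, num_students: int = 1) -> float:
    """Clinical-education hours credited to the department."""
    return num_students * care_hours * EDUCATION_FRACTION * RELATIVE_VALUE[mode]

# Example: one attending, one student, four clinic hours (Appendix B, Group A).
print(round(credited_hours(4, "1 student"), 2))  # 0.72
```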

Finally, we identified unique education endeavors within each department. An example would be teaching attending physicians with no clinical responsibilities, who receive full hour-for-hour teaching credit without any reduction in their credit for patient care. Another example would be credit given to the department of radiology for teaching medical students during basic third-year clerkships that include significant radiology components. Credit hours for education administration were also given to clerkship directors. We derived data from discussions with the individual departments to determine allocations for these special situations. After these data were summarized, they were given to the departments for review and further input. The data were then submitted to the FAAC and became a key component in institutional decision making.

Results: The total faculty times allotted to teaching medical students are summarized in Figure 2. In addition, this figure shows the breakdown between basic science faculty’s and clinical faculty’s contributions. The data exclude the participation of residents, fellows, and staff in the curriculum. The total hours for teaching students in their first two years are nearly equal in faculty time, approximately 10,000 hours, reflecting the similarity in curriculum structure. The distribution of hours in the second year between preclinical and clinical departments reflects clinicians’ participation in the current curriculum. The large number of clinical faculty’s hours in the third year of medical school reflects the individual and small-group teaching modes in the clinical setting as well as the increased time medical students spend with faculty. The time in year four is significantly less than that in year three because the year itself is shorter, and many students spend considerable time in community sites or other medical institutions.


Figure 3 shows the distribution of faculty education hours summarized for all four years of medical school and shows the relative ranking of education hours among the departments of the school of medicine. By using our method, clinical departments that have required clinical rotations are credited with large numbers of hours. For example, faculty in the department of medicine received the greatest amount of credit due to the length of the third-year clerkship, the amount of teaching required in fourth-year sub-internships, and a significant teaching contribution in year two. On the other hand, many medical students are assigned to community sites for the obstetrics and gynecology clerkship, resulting in fewer hours credited to the department.

Our method produced additional data not directly tied to the MBM process. For instance, the distribution of clinical teaching in third-year clerkships was an attending physician with a medical student, 19%; an attending physician with two to four medical students, 7.1%; an attending physician with a medical student and a resident, 21.5%; an attending physician with three to five medical students and residents, 47.5%; and an attending physician with more than five medical students and residents, 4.9%. This distribution also shows the efficiency of teaching within a department. These data, originally collected for MBM purposes, can then be reviewed and analyzed by the curriculum committee and individual departments.

Our data analysis showed approximately 57,000 credited faculty hours in one year for the direct education of medical students across all curriculum years. Using a standard work-year definition of 2,080 hours (52 weeks × 40 hours), these 57,000 hours of credited education time equal the workload of 28 full-time faculty members. However, this credited time is not the total cost of medical students’ education, because it does not include residents’ teaching of students or the indirect expenses the department incurs coordinating the educational effort or mentoring the students. In addition, the data do not take into account the number of faculty members necessary to generate the patient volumes needed to sustain a teaching program. While our method produces a relative ranking among departments, it also provides an overall faculty-effort number for the medical students’ education mission, which can then be compared with faculty’s efforts in the clinical and research missions.
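The full-time-equivalent conversion above is straightforward arithmetic; a quick check (our own sketch, not part of the study):

```python
# Standard work year used by the authors: 52 weeks x 40 hours = 2,080 hours.
WORK_YEAR_HOURS = 52 * 40
credited = 57_000  # total credited faculty education hours from the analysis
fte = credited / WORK_YEAR_HOURS
print(round(fte, 1))  # 27.4, which the authors round up to 28 full-time faculty
```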

Discussion

The Association of American Medical Colleges (AAMC) has identified six core principles as central to MBM: integrating a school’s financial statements, measuring faculty and departmental activities and contributions to mission, building organizational support for reporting tools and metrics, guiding the dynamics of leadership, holding faculty and department and institutional leaders accountable, and building trust and institutional perspective.2 With regard to the education mission of an AMC, the AAMC’s second core principle, measuring faculty and departmental activities and contributions to mission, has historically been addressed through an aggregation of individual faculty members’ teaching activities. The crux of the problem, however, is that individual faculty members’ activities, even if accurately reported, do not necessarily reflect the educational mission of the school. That mission is defined by external accrediting bodies, the dean’s office, departmental chairs, and faculty education committees. Individual faculty members assume that a variety of teaching activities are central to the education mission, especially in the clinical years. These activities may enrich the students’ experiences and augment the curriculum. However, our method of measuring the faculty’s contribution seeks to segregate those activities tied to the core education mission for purposes of MBM.

Our method yields data that describe the educational effort in a reproducible manner. The data verify our impressions about the effort and time expended by each department and should encourage discussion about the relative educational effort. Our method does not focus on individual faculty members, but it does spotlight different departmental teaching activities and raises questions about the merit and/or cost of those activities, furthering the MBM core principle, “building trust and institutional perspective.” At our medical school, the FAAC uses MBM to make fiscal recommendations to the dean. In fiscal year 1999, the FAAC recommended redistributing $3.0 million over two years among departments that were critical to the educational mission, but that were in financial difficulty. These decisions were based on perceptions of the faculty’s educational activities but had little supporting data. In fiscal year 2002, the school of medicine redistributed $1.0 million of the dean’s funds among departments important to the education of medical students. In this redistribution, the method we describe in this paper provided key education data for the FAAC’s discussions.

Previously, annual departmental support from the dean’s office has been a continuation of historic allocations. The data in Figure 4 represent a scattergram of the historical allocation of dean’s funds in fiscal year 2001. The data show the lack of correlation between the dean’s historical financial support to departments and the medical students’ education data derived from our analysis. Thus, a redistribution based on our method could align resource allocation with educational effort. The FAAC operates under the assumption that medical students’ education should be a factor in departmental support.

Our method has other advantages. Gathering and compiling the data are less time-consuming than surveying individual faculty members. Our method permits separation of teaching medical students from teaching residents. It can also be used for other education evaluations. The next applications of our method will be to analyze faculty’s time spent teaching residents and graduate students and to refine the measures of education merit described above. By meeting with the chair and lead educators in each department, our method adhered to the MBM core principle of “building organizational support for reporting tools and metrics.” Each department is able to demonstrate the uniqueness of its own educational approach.

Conclusion

One of the major stated objectives of MBM is to provide a medical school’s decision makers with accurate information with which they can allocate resources.6 A more powerful use of MBM data is to move from thinking about resource allocation (“accounting”) to thinking about the effective management of a complex organization with interlaced missions. The addition of reliable data on medical students’ education across departments enhances other MBM information and contributes to a picture of the dynamic interconnectedness of missions and departments. In contrast, summation of individual faculty members’ efforts (i.e., survey methods) does not necessarily reflect the overall mission of the school and is unlikely to produce an accurate picture of a complex organization. These summations may even obscure direct educational activity and hinder an open discussion of the place of education in a school’s mission. The University of Maryland School of Medicine’s experience with MBM and the use of our method produce an informative image of a complex environment. In describing this image, it is more informative to describe the forest than to count the trees.

This article was originally published in the December 2002 issue of Academic Medicine.

References

1 Cohen JJ. Financing academic medicine:
strengthening the tangled strands before they
snap. Acad Med. 1997;72:520.


2 Cohen JJ. Introducing the Mission-Based
Management Resource Materials. Washington, DC:
Association of American Medical Colleges, 2000.


3 Nutter DO, Bond JS, Coller BS, et al.
Measuring faculty effort and contributions in
medical education. Acad Med. 2000;75:199–207.


4 Hilton C, Fisher W. A relative-value-based
system for calculating faculty productivity in
teaching, research, administration, and patient
care. Acad Med. 1997;72:787–91.


5 Mallon WT, Jones RF. How do medical
schools use measurement systems to track
faculty activity and productivity in teaching?
Acad Med. 2002;77:115–23.


6 Watson RT, Romrell LJ. Mission-based budgeting:
removing a graveyard. Acad Med. 1999;74:627–40.

Appendix A

Estimation of the educational value of learning in a group setting:

 The medical school is developing an internal methodology to measure education in the clinical setting. As part of this methodology we are trying to understand how trainees perceive the amount of teaching they receive in certain situations while clinical care is being delivered. We will evaluate this information along with responses from department leadership and selected faculty to the same questions.

Instructions: 

Please think about the amount of teaching that you as an individual receive in one-on-one interaction with an attending. With that in mind, compare it to the teaching that you as an individual receive in the situations delineated below. Please try to generalize your answers across all of your third-year required rotations, and do not bias your answers based upon your current position or upon some good or bad anecdotal experience.

Please respond to each question in comparison to one-to-one teaching, that is, you and one attending in the clinical setting. This will be considered a 1:1 teaching value. Please do not consider lectures, small groups, or seminars in your answers. For the purposes of this survey, we are interested in education during rounds, ambulatory clinic, the operating room, reviewing films/tests, etc.

Example:

Consider a rotation where one aspect of the rotation has you and several other students with an attending discussing a clinical problem. If you consider that the amount of teaching that you receive from the attending is similar to what it would have been if it had been just you and the attending, you would answer 1:1. If you thought that this experience is less than that, for example one half the benefit, then you would answer 1:2.

  1. Individual + an attending 1:1
  2. Individual + an attending + one resident __
  3. Individual + an attending + group of students (3–5) __
  4. Individual + an attending + group of students and residents (3–5) __
  5. Individual + an attending + group of students and residents (>5) __
  6. Does teaching directed to a higher-level trainee have a similar benefit as that directed to a lower-level trainee?

Appendix B

An In-depth Look at the University of Maryland School of Medicine’s Method for Measuring Faculty’s Clinical Education Time

Example:

Suppose that for Department 1, the analysis shows that ten third-year students rotate through this department every four weeks. The students spend five hours a day with faculty in the clinical setting. On average, the following teaching modes for the students are described by the department:
▪ Group A: Attending + one medical student (10% of the rotation)
▪ Group B: Attending + two to four medical students (15% of the rotation)
▪ Group C: Attending + one medical student + one resident (11% of the rotation)
▪ Group D: Attending + three to five medical students and residents (32% of the rotation)
▪ Group E: Attending + more than five medical students and residents (32% of the rotation)

Implications:

Calculations for Group A or B. Suppose for the 10% of time spent in Group A that the typical experience is an attending with one third-year student (and no residents) for four hours in a medical clinic. Of that clinical care time, 18%, or 0.72 hours, is credited as educational time. Since this is the Group A teaching mode, the department is credited with one student × 0.72 hours × 1.0 = 0.72 hours of clinical education time per day for third-year students. Thus, the department received 0.18 hours (or 10.8 minutes) of credit for teaching this student for each hour. Since there are five clinical days per week and four weeks per rotation, the total time for Group A is 0.72 × 5 × 4 = 14.4 hours per rotation. For students in Group B, the department would receive 0.138 hours (or 8.25 minutes) of credit for teaching each student for each hour.

Clinical implication. One student and no residents (Group A) would slow down the clinical activity rate, but still allow the attending to perform other independent activities. In one hour, the student might be able to see one patient, develop a diagnosis and treatment plan, discuss the case with the attending and revisit the patient with the attending at which time the attending could evaluate and treat the patient. There is significant time for one-on-one teaching at the student level. The attending has a relatively low “overhead” of getting to know the student, teaching the student, and evaluating the student’s ability and performance. Several students with no resident participation (Group B) would likely slow down the clinical activity rate more significantly and not allow the attending to perform independent activities. There would still be significant time for one-on-one teaching at the student level. The attending’s overhead is higher than for one student.

Calculations for Group C. Suppose for the 11% of time spent in Group C that the typical experience is an attending with one student and one resident for three hours while delivering clinical care in the operating room. Of that clinical care, 18%, or 0.54 hours, is credited as educational time. Since this is the Group C teaching mode, the department is credited with one student × 0.54 hours × 0.56 = 0.30 hours of clinical education time for the third-year student. Thus, the department received 0.10 hours (or 6.0 minutes) of credit for teaching this student for each hour. Since there are five clinical days per week and four weeks per rotation, the total time for Group C is 0.56 × 5 × 4 = 11.2 hours per rotation.

Clinical implications. One student with one resident could have a positive or negative effect on patient flow depending on the training level of the resident and whether the student works with the resident or independently. Since some of the attending’s time is needed to supervise and teach the resident, less time would be available for one-on-one teaching with the student. In addition, the teaching that is not one-on-one with the student would have to be directed at two levels of trainees, reducing its effectiveness at the student level. The attending’s overhead is low for this combination of trainees.

Calculations for Group D or E. Suppose for the 32% of time spent in Group D that the typical experience is an attending making rounds with two third-year students and two residents on the inpatient floor each day for six hours. Of that clinical care, 18%, or 1.08 hours, is credited as clinical education time. Since this is the Group D teaching mode, the department is credited with two students × 1.08 hours × 0.36 = 0.78 hours of clinical education time for the third-year students. Thus, the department received 0.065 hours (or 3.9 minutes) of credit for teaching each student for each hour. Since there are five clinical days per week and four weeks per rotation, the total time for Group D is 1.08 × 5 × 4 = 21.6 hours per rotation. For students in Group E, the department would receive 0.045 hours (or 2.7 minutes) of credit for teaching each student for each hour.

Clinical implications. A small group of students and residents (Group D) would most likely have a positive effect on the clinical activity rate but would leave less time for teaching one-on-one with students. Also, group teaching would have to be directed to the multiple levels of the trainees. The attending’s overhead would be higher in this setting. With a larger group of students and residents, the attending would be required to supervise the residents and, therefore, would have less time for one-on-one teaching with the students. Note that although the attending’s teaching time in Groups C, D, and E might be less than in Groups A and B, students in Groups C, D, and E might receive additional teaching from residents, a factor that is not assessed in this analysis.
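The three calculation paragraphs above follow a single pattern: students × (care hours × 18%) × relative value. A small sketch (ours, using only the constants stated in the text; the function name is hypothetical) reproduces the daily credits:

```python
ED_FRACTION = 0.18  # fraction of patient-care time credited as education

def daily_credit(num_students: int, care_hours: float,
                 relative_value: float) -> float:
    """Clinical-education hours credited to the department per day."""
    return num_students * (care_hours * ED_FRACTION) * relative_value

group_a = daily_credit(1, 4, 1.00)  # one student, 4 clinic hours
group_c = daily_credit(1, 3, 0.56)  # one student + one resident, 3 OR hours
group_d = daily_credit(2, 6, 0.36)  # two students on 6-hour rounds
print(round(group_a, 2), round(group_c, 2), round(group_d, 2))  # 0.72 0.3 0.78
```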

Mission Aligned Management and Allocation: A Successfully Implemented Model of Mission-Based Budgeting

MBM – Mission Aligned Management (Wisconsin)

Gordon T. Ridley, MHA, Susan E. Skochelak, MD, MPH, and Philip M. Farrell, MD, PhD

Abstract

In response to declining funding support and increasing competition, medical schools have developed financial management models to assure that resource allocation supports core mission-related activities. The authors describe the development and implementation of such a model at the University of Wisconsin Medical School. The development occurred in three phases and included consensus building on the need for mission-based budgeting, extensive faculty involvement to create a credible model, and decisions about basic principles for the model. While each school may encounter different constraints and opportunities, the authors outline a series of generic issues that any medical school is likely to face when implementing a mission-based budgeting model. These issues include decisions about the amounts and sources of funds to be used in the budgeting process, whether funds should be allocated at the department or individual faculty level, the specific metrics for measuring academic activities, the relative amounts for research and teaching activities, and how to use the budget process to support new initiatives and strategic priorities. The University of Wisconsin Medical School’s Mission Aligned Management and Allocation (MAMA) model was implemented in 1999. The authors discuss implementation issues, including timetable, formulas used to cap budget changes among departments during phase-in, outcome measures used to monitor the effect of the new budget model, and a process for school-wide budget oversight. Finally, they discuss outcomes tracked during two years of full implementation to assess the success of the new MAMA budget process.

In the early 1990s, medical schools began examining their budgeting processes to align their resource allocations with the fulfillment of their multiple missions. The Association of American Medical Colleges (AAMC) encouraged and supported schools’ widespread interest in mission-based management (MBM) by creating forums for institutional leaders to share their variety of approaches, and several years later developed an operational framework.1–3 Early efforts at conceptualizing and developing models of ways to link academic resources with faculty effort have been described to help other institutions develop their own resource-allocation plans.4–10 The University Hospital Consortium initiated a related effort for identifying revenue streams (“funds flow”) among schools and their related hospitals and practice organizations.11

Currently, approximately 25% of U.S. medical schools are working on the development of metrics to measure the teaching and academic activities of faculty.12 However, relatively few have implemented systems that link the budgeting process with those metrics.

At the University of Wisconsin Medical School, a method for alignment of resource allocation and academic mission has been developed and is in its second year of implementation. In this article, we report on the process of development, the model’s successes thus far, and the lessons we learned in development and implementation.

The Mission Aligned Management and Allocation Model

The University of Wisconsin Medical School’s annual operating budget is approximately $300 million (2000-2001), including university and state funding, extramural grant support, hospital support, and practice plan contributions. Of these funds, the school has direct control over around $70 million received from the university/state and from the practice plan. Approximately two thirds of these controlled funds, or $47 million, are allocated to the school’s 25 departments, and one third supports such school-wide needs as facilities, libraries, animal care, and administration. The mission-aligned model is applied to the entirety of the $47 million departmental allocation and not to any other school funds. Allocations to departments are based on quantification of each department’s contributions to education, research, service, and school strategic priorities.

Development

Between 1994 and 1996, a variety of forces prompted the University of Wisconsin Medical School to explore resource alignment and accountability models, eventually naming its plan Mission Aligned Management and Allocation (MAMA). Those forces included the appointment of a new dean, a lean prognosis for the adequacy of existing resources, growing skepticism about the longstanding budget model, and the appointment of many new chairs and other leaders. The school’s 14 clinical practice partnerships unified to become the University of Wisconsin Medical Foundation in 1996, founded on principles of accountability, productivity, and academic mission.13 This not-for-profit organization contributes a portion of its revenue to the school and sought a rationalized method for its distribution. For all of these reasons, the context for change was favorable.

The dean of the medical school initiated planning for mission-aligned budgeting in 1995, emphasizing that “process is as important as product.”14 Despite initial support for the concept, three phases of planning for implementation were needed until a final product garnered sufficient acceptance from the various constituencies within the medical school.

Phase 1. A task force representing the school’s many constituencies was assembled in 1995. After a year of work, the task force achieved consensus around acceptable measures of academic activity (research awards, lectures, mentoring, etc.) but fell short of an operational model suitable for implementation. Most significantly, this group established principles that eventually served as a guiding force for future model development. In addition, the first phase involved approximately 100 faculty (including department chairs) who, through informal and formal communication, began the cultural transformation that proved essential to achieve faculty “buy in” and successful implementation.

Phase 2. During Phase 1, it became clear that although the school had a strategic plan in place, more effort was needed to delineate priority programs eligible for preferential allocation of discretionary funds. A second task force, consisting of a subset of chairs and associate deans, worked in 1998 to refine the initial ideas, identify the need for predetermined strategic priorities, and build a climate for greater acceptability for mission-based resource allocation. This group established that departments rather than individual faculty should receive budget allocations, and began to quantify curriculum components for a complex educational program. The task force set the stage for definitive model development, which occurred in the subsequent academic year.

Phase 3. A steering committee of 16 leaders, chaired by the dean and including equal numbers of basic science chairs, clinical science chairs, associate deans, and faculty at large, was convened for the final phase of the process. The faculty included members of the medical school’s governing body, the Academic Planning Council, the university-designated “official” governance body. Subcommittees increased the total number of faculty involved to over 60. The group worked for approximately six months, and in July 1999 completed an operational model that was implemented in July 2000. The model has been the sole basis of departmental allocations of medical school funds for the 2000-2001 and 2001-2002 budgets.

Description of MAMA

Each year the school calculates the portion of its total budget, derived from university/state support and the faculty practice plan contribution (sometimes referred to at other institutions as the “dean’s tax”), to be allocated to departments. This amount is then divided into five categories (education, research, academic service, leadership development, and dean’s discretionary funding) and allocated as follows:

▪ Sixty percent to education, based on department contributions to medical student, graduate study, allied health, and undergraduate teaching

▪ Twenty percent to research, based on extramural funding and salaries received from grants

▪ Ten percent to academic service, based on a per-capita distribution 

▪ Ten percent at the dean’s discretion, based on alignment with the school’s strategic priorities 

▪ Two percent to leadership activities such as sponsorship of training programs and participation on key school committees (these funds are a subset of the funds allocated to education)

There are several other features of the plan:

▪ Only academic funds originating in the medical school are allocated, thus excluding extramural or hospital support. 

▪ Each department develops its own allocation methods for distributing these funds to its infrastructure, programs, and facilities, and for faculty compensation. 

▪ Implementation is phased over three years. 

▪ Credit is awarded for faculty effort that crosses departmental lines, such as interdisciplinary courses and research grants. 

▪ Strategic priorities influence allocations of the dean’s discretionary category. 

▪ An oversight committee adjudicates disagreements over application of the model or major policy issues. 

▪ The dean and associate deans evaluate the entire model at two-year intervals. 

▪ Allocations to departments are unrestricted, and chairs have flexibility (within the guidelines of their compensation plans) for allocation of funds to individual faculty. 

▪ There is no faculty self-reporting of academic activity. 

▪ There is a commitment to avoid unintended consequences by monitoring and adjustment. 

▪ The model’s content is transparent to faculty members. 

▪ There are no large shifts of resources between basic science and clinical science departments.
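The percentage split described above is simple arithmetic, and a short sketch may make the category structure concrete. This is an illustration only, not part of the MAMA model itself: the function name is hypothetical, while the $47 million pool and the percentages come from the text, with the 2% leadership allocation treated as a carve-out from the education pool as the list specifies.

```python
# Illustrative sketch of the MAMA category split (function name is
# hypothetical; figures and percentages are taken from the article).

def mama_pools(departmental_pool: float) -> dict:
    """Split the school's departmental allocation into MAMA funding pools.

    Education receives 60%, research 20%, academic service 10%, and the
    dean's discretionary fund 10%. The 2% leadership allocation is a
    subset of the education pool, not an addition on top of it.
    """
    pools = {
        "education": 0.60 * departmental_pool,
        "research": 0.20 * departmental_pool,
        "academic_service": 0.10 * departmental_pool,
        "deans_discretion": 0.10 * departmental_pool,
    }
    # Carve the 2% leadership allocation out of the education pool.
    pools["leadership"] = 0.02 * departmental_pool
    pools["education"] -= pools["leadership"]
    return pools

pools = mama_pools(47_000_000)  # the $47 million departmental allocation
```

Because leadership is carved out rather than added, the five pools still sum to the full departmental allocation.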

Decision making to create the model. Mission-aligned budgeting processes require critical decision making around a limited number of issues. The resolution of each issue is described below.

Scope of funds for allocation. The spectrum of potential resources to include in a mission-aligned model is broad, ranging from special discretionary funds for particular purposes to all funds over which the school has some influence. The Phase 1 plan restricted mission alignment to funding for faculty salaries. However, a consensus developed that all medical school funds provided to departments should be subject to the model, both to strengthen its impact on academic productivity and to avoid creating a perception of “protected budgets.” Of the $70 million in funds directly controlled by the dean’s office, approximately one third is used for school-wide purposes such as libraries, animal care, facilities, and information technology. It was considered practical not to submit these expenditures to the mission-aligned model but rather to evaluate them annually to assure that they support the mission. All other funds, such as grants and hospital support, were already restricted in use. At the University of Wisconsin–Madison, student tuition is paid directly to the state of Wisconsin, returned to the university, and becomes one source of the university’s allocations to its schools and colleges.

Relationship to strategic planning. A major obstacle to faculty acceptance of mission-aligned budgeting was the perception that it would either encourage academic activity in a random, indiscriminate manner or carry a strong bias toward status-quo activities. The school had previously undertaken strategic planning, but not comprehensively or linked to budgeting. In 1997, the dean established a faculty task force to develop strategic priorities. It identified six major strategic priorities, since expanded to ten, that form the centerpiece of the school’s strategic plan and serve as a guide for mission-aligned budgeting.14

Allocation of funds to departments or to individual faculty. Early on, some questioned how a model that allocated resources only from the school to departments could affect the alignment of academic work with mission, since teaching and research are primarily individual behaviors. A very credible, precise system might be in place for delivering resources to departments, but if departments continued their current methods of compensating individual faculty, often based more on historic factors than on alignment with the academic mission, the entire purpose of the exercise would be frustrated.

The steering committee decided that the model should measure each department’s academic activity in aggregate and allocate school funds to departments on that basis. Departments of the University of Wisconsin–Madison enjoy a strong tradition as the academic and financial home for faculty, and it was concluded that department chairs were best able to judge the individual academic efforts of their diverse faculties. In fact, MAMA takes advantage of chair leadership and has increased the chairs’ authority. There was also trepidation about using the MAMA model to develop a school-wide individual compensation model (a “one size fits all” approach for 1,200 faculty), which would require an exponential increase in the model’s complexity. Instead, the chosen model measures departmental academic activity in the aggregate, and funds are transferred from the school to the department on an unrestricted basis, allowing the department chair and executive committee discretion in how academic work and compensation are distributed among department faculty.

The question remained, however, of how to obtain alignment at the individual faculty level. The steering committee concluded that departments should replicate the model’s principles in their compensation plans and other allocation mechanisms. Department compensation plans must adhere to guiding principles established by the school and the practice organization and are subject to approval by the school’s compensation committee. This combination allows flexibility across departments while still providing assurance that school-wide strategic priorities are met.

Proportions of funds allocated to education and research. The model called for annual creation of separate funding pools for education and research, which are then each distributed to departments. One initial and very important question was how to determine the relative sizes of the two pools.

The steering committee decided on a three-to-one ratio for allocating funds between education and research. This ratio was chosen for several reasons, including the acknowledgment that education is supported primarily through tuition and state revenues and has no other significant source of funding, while research at the University of Wisconsin Medical School is expected to be supported predominantly from extramural sources. The ratio was tested through a number of empiric measures that attempted to determine the magnitude of revenues needed to support the school’s teaching mission. This exercise was complicated by the difficulty of separating faculty activity into discrete categories of teaching, research, service, and clinical practice, and by the fact that the indirect costs of teaching have not been well defined. Three separate empiric approaches were used to estimate the funds required for the educational mission: (1) calculation of the faculty full-time-equivalent (FTE) requirements for the number of credit hours offered by the school, based on university FTE teaching standards; (2) calculation of the faculty’s teaching contact hours using a relative-value-unit factor; and (3) analysis of tuition recovery. These three calculations approximated an absolute cost for education that allowed a consensus to form around the three-to-one ratio of education-to-research funds. The ratio was established firmly before any departmental modeling exercise was performed, a sequence that proved essential when some departments found to have budget “gaps” asked to change from the 3:1 ratio to one more favorable to them.

Measurement of academic activity. Methods to measure academic activity were studied and debated intensively during all three phases of model development. These methods included faculty contact hours, revenues generated by research and clinical practice, a relative-value system for weighting academic activity, and individual reporting of comprehensive activities such as publications, presentations, and committee work. The medical school already had access to data that measured academic activity at the department level: courses and clerkships offered by each department, numbers of graduate students, extramural research awards, and numbers of faculty, including those participating in mission-related leadership activities. The steering committee selected global measurement criteria of departmental academic activity, as shown in List 1.

List 1
Measurement Criteria for a Faculty Member’s Academic Activity, University of Wisconsin Medical School, 2001
Courses and clerkships, based on credit hours and enrollment
Mentorship of doctoral students
Extramurally funded research
Faculty salary support obtained from extramural sources
Participation on major academic committees
A global “service” allocation based on the number of faculty in each department
Leadership roles for training programs

These measures were described as proxies for academic activity; exclusion from this list did not represent devaluation of a particular faculty member’s work. For example, productive research can be done without extramural funding, but it is difficult to measure and therefore was not chosen as an allocation criterion. Rather, the assumption was made that departments with funded research programs could choose to use portions of their MAMA support for unfunded research. Publications and similar activities were an expected outcome of the measured academic activity, and thus were not an allocation criterion. These benchmarks were expected to be determined at the department level, based on individual faculty roles, and incentives could be created through individual department compensation plans. 

The steering committee and the dean were firmly committed to this level of specificity and have resisted attempts to include efforts at a highly detailed level, many of them requiring faculty self-reporting. These measures will be refined and improved with actual experience.

Resource allocation versus resource identification. The option of developing a medical center funds-flow model (encompassing university, hospital, and practice organization funds) was thoroughly considered,8 either in addition to or in lieu of mission-aligned allocation. In order to create a well defined, achievable end product, the school elected to defer consideration of a medical center funds-flow model and instead focus all energies on tight linkage between academic mission and resource allocation, with a specific target date for implementation. The three medical center entities recognized the need for improving the factual basis for the considerable amount of funds they exchange, and progress will continue on the most important of these.

Implementation timetable. The dean directed that a mission-aligned budget model be initiated in the first fiscal year after the steering committee completed its work and fully implemented after a three-year phase-in period. This became a useful parameter for compressing the group’s work and encouraged the use of readily available and verifiable data sources in the MAMA model.

Formula budgeting versus leadership flexibility. The need for a more transparent, quantitative resource-allocation model was obvious, and the measures chosen as the basis for allocation (course direction, lectures, mentoring of graduate students, etc.) were indisputable means of achieving it. Some faculty leaders suggested perfecting this method and using it to allocate the entire budget of the school.

However, during Phase 2, department chairs emphasized that the leadership expected of the dean’s office would be undermined by a formula that modeled 100% of all funds. The chairs clearly stated that the dean needed a source of strategic funds to enhance mission outcomes and stimulate change. Without some strategic funds under the dean’s discretion, the school’s ability to support emerging areas of research and learning might be forfeited, and with it the dean’s negotiating influence with chairs. There was also a perception that however refined the model became, it could never respond to all valid needs of the learning community. Some level of dean’s discretion could make the model responsive to unmeasurable needs of an extremely complex organization.

The solution to this dilemma was an additional category, the Academic Discretionary Fund, equal to 10% of the total allocation to departments. It is completely at the dean’s discretion to allocate to departments and is heavily weighted toward strategic priorities.

Measuring the product. Outcome measures have been defined to allow evaluation of the model’s impact at the completion of the third year of implementation, including teaching quality before and after MAMA, extramural research support, salaries supported by extramural sources, the tendency for faculty to seek teaching roles, and others.

Including graduate medical education (GME). Because GME is a major teaching activity for most clinical departments, there was initial interest in using it as a basis of allocating school funds. However, because funding support for GME rests solely with the school’s affiliated hospitals, it was decided to continue treating support for GME as a funds flow between departments and hospitals, and to assure that this funds flow would also be reassessed to assure fairness and mission alignment.

Implementation without destabilizing departments. The model in its pure form required redistribution of funds among departments, and while no department’s critical mass of funds was threatened, more movement of funds was prescribed than could be immediately accomplished. Implementation will occur over three years, and as departments reorient their academic activity, substantial compliance should be achieved. A formula limits a department’s maximum annual loss to the lesser of two amounts: one third of the formula-derived reduction, or 3% of the department’s revenue from all sources.
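The phase-in cap described above is a lesser-of-two-amounts rule, which the following sketch applies directly. It is illustrative only (the function name and the dollar figures in the example are hypothetical); the rule itself is as stated in the text: a department’s annual loss is limited to the lesser of one third of the formula-derived reduction or 3% of the department’s revenue from all sources.

```python
# Illustrative sketch of the MAMA phase-in cap (function name and example
# figures are hypothetical; the lesser-of rule is from the article).

def capped_annual_loss(formula_reduction: float, total_revenue: float) -> float:
    """Return the maximum budget loss a department can absorb in one year.

    The cap is the lesser of: one third of the formula-derived reduction,
    or 3% of the department's revenue from all sources.
    """
    return min(formula_reduction / 3.0, 0.03 * total_revenue)

# A department facing a $900,000 formula-derived cut with $20 million in
# total revenue would lose at most $300,000 in the first year.
loss = capped_annual_loss(900_000, 20_000_000)
```

For departments with small revenue bases, the 3% limb binds instead, which is what keeps the redistribution from destabilizing any single department.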

Avoiding manipulation of the model and adjudicating disputes. A faculty committee, advisory to the dean, was established during the first year of model implementation. It was acknowledged that no budget model would ever perfectly reflect all academic work. Therefore, the MAMA Oversight Committee was established to review and revise the model as issues arose during implementation. The committee was appointed by the medical school’s governing body, the Academic Planning Council, and chaired by the senior associate dean for academic affairs. During the first year of implementation, a number of questions were reviewed that resulted in revisions or clarifications to the model.

Discussion—Measures of Success

The University of Wisconsin Medical School’s experience with mission-aligned budgeting has been positive to date; the model is now in its second year of implementation. There has been evidence of increased academic productivity at both the department level and the individual faculty level. Even in the early phases, it became clear that department chairs and faculty were motivated to obtain more resources, or prevent loss of resources, by engaging in activities that earned more support under the MAMA plan. For example, discussion at department meetings began to focus on how to place more salaries on grants and involve more faculty members in teaching. One advantage of the department-linked MAMA method is that the role of the department chair as a manager and motivator has been strengthened. In addition, the chairs can now reinforce faculty accountability to the medical school. They have emphasized, with dean’s office guidance, encouraging their faculty to “close MAMA gaps” by the methods shown in List 2.

List 2
Methods for Departments to Close Gaps in the Mission Aligned Management and Allocation System, University of Wisconsin Medical School, 2001

Provide more instruction in more courses
Participate in more education leadership roles
Allow attrition (don’t fill faculty vacancies)
Attract graduate students
Use gift funds
Obtain more salary support from grants
Prepare more grant proposals and receive more awards
Provide service on targeted committees
Contribute activities that match strategic priorities
Create named professorships

Other changes have been correlated with the MAMA plan’s implementation. Course directors have reported that it has been easier to recruit faculty for medical student teaching. There is new interest and energy from faculty for educational programs, as evidenced by a number of new courses and clerkships that have been proposed. For example, a new clerkship in radiology and an integrated neurosciences clinical clerkship have been approved. Increased research productivity has been correlated with the implementation of MAMA. When compared with the pre-MAMA plan base year (1997), total research awards to medical school departments and research awards to clinical departments have both increased by 25% in three years.

Concomitantly, some “gaming” of the system has already been evident, including strenuous vying for curriculum time. A checks-and-balances system has been developed to try to shield the educational mission from mercenary goals. The Educational Policy Council (curriculum committee) operates separately from the MAMA Oversight Committee and is charged with the responsibility of maintaining curriculum standards and quality. The Educational Policy Council has defined a limit on the number of credit hours and contact hours that will be available for medical student teaching each semester, to limit the tendency for additional courses to enhance department revenues rather than to support the academic goals of medical education. The council has developed competency standards for each year of medical education and is planning for ongoing curricular revision without consideration of budget implications. The MAMA Oversight Committee is charged with phasing in implementation of budget changes that result from curricular revision, ideally separating the educational standards from direct influence of departmental budget considerations.

Despite the three-year implementation plan, there is still work to be done; the MAMA model is considered a work in progress. Further refinement of longitudinal outcome measures is occurring to help assess whether the mission-aligned budgeting process has indeed helped achieve the school’s mission-related goals and strategic priorities. More specific measures of quality, especially in teaching, are being developed for courses and clerkships, including position descriptions for course and clerkship directors. The credit-hour and enrollment measures for education are good approximations of contact hours but are unduly influenced by the number of small groups offered within courses, and refinement of these measures is under discussion. Finally, the clinical practice plan and the university hospital are working on plans to align their resources more closely with activities that support their core missions. This last step has always been anticipated so that departments can respond to mutually compatible reward systems from the school, hospital, and practice organization.

Conclusion

The MAMA budget process at the University of Wisconsin Medical School has helped focus attention on the school’s prime mission and strategic goals and helped define the roles of departments and individual faculty in achieving those goals. It has given the school, especially the dean’s office and department chairs, a tool for motivating behavior in support of the academic mission and allowed all constituencies to see how the school’s resources are allocated. While the initial outcomes have been positive at this stage of the second year of implementation, careful monitoring and refinement are necessary to ensure that the alignment of the budgeting process with academic mission is truly helping the University of Wisconsin Medical School achieve its mission of meeting the health needs of Wisconsin and beyond through excellence in education, research, patient care, and service.

This article was originally published in the February 2002 issue of Academic Medicine.

References

1 Nutter DO, Bond JS, Collier BS, et al. Measuring faculty effort and contributions in medical education. Acad Med. 2000;75:200–7.

2 Holmes EW, Burks TF, Dzau V, et al. Measuring contributions to the research mission of medical schools. Acad Med. 2000;75:304–15.

3 Association of American Medical Colleges. Mission-Based Management Program: Introducing the MBM Resource Materials. Washington, DC: Association of American Medical Colleges, 2000.

4 Cramer SJ, Ramalingam S, Rosenthal TC, Fox CH. Implementing a comprehensive relative-value-based incentive plan in an academic family medicine department. Acad Med. 2000;75:1159–66.

5 Johnston MC, Gifford RH. A model for distributing teaching funds to faculty. Acad Med. 1996;71:138–40.

6 Garson A, Strifert KE, Beck JR. The metrics process: Baylor’s development of a “report card” for faculty and departments. Acad Med. 1999;74:861–70.

7 Scheid DC, Hamm RM, Crawford SA. Measuring academic production: caveat inventor. Acad Med. 2000;75:993–5.

8 Watson RT, Romrell LJ. Mission-based budgeting: removing a graveyard. Acad Med. 1999;74:627–40.

9 Watson RT, Suter E, Romrell LJ, Harman EM, Rooks LG, Neims AH. Moving a graveyard: how one school prepared the way for continuous curriculum renewal. Acad Med. 1998;73:948–55.

10 Watson RT. Managed education: an approach to funding medical education. Acad Med. 1997;72:92–3.

11 Burnett DA. Funds flow initiative: collaborative efforts supporting mission-based business transformation. Paper presented at the University Health System Consortium, Oakbrook, IL, October 15, 1998.

12 Mallon WT, Jones RF. How do medical schools use measurement systems to track faculty activity and productivity in teaching? Acad Med. 2002;77:115–23.

13 Rao V, Farrell PM. Transformation of faculty practice organizations: the University of Wisconsin experience. Paper presented at the Council of Deans Spring Meeting, Santa Barbara, CA, April 18, 1999.

14 Bonazza J, Farrell PM, Albanese M, Kindig D. Collaboration and peer review in medical schools’ strategic planning. Acad Med. 2000;75:409–18.

Foundations of Leadership


The Foundations of Leadership by David S. Hefner and Katharine R. Becker
Excerpted from pp. 219–231, Section 2, “Managerial Leadership,” of Clinical Laboratory Management, 2nd Edition, edited by L. S. Garcia. ©2013 ASM Press, Washington, DC.

OBJECTIVES

▪ To help the reader understand the difference between the realms of management and leadership

▪ To develop an understanding of the importance of integrity and its relationship with performance

▪ To clarify the misconceptions about operating with integrity by honoring your word, and why people may disregard it, thereby diminishing their power as a leader

▪ To help aspiring leaders appreciate the inward journey of leadership and discover for themselves what it means to be authentic

▪ To provide clarity about the relationship between being committed to something bigger than oneself and becoming a leader

▪ To offer resources for the reader’s continued education

Whatever you can do or dream you can, begin it. Boldness has great genius, power and magic in it!

— Johann Wolfgang von Goethe

Being a leader has everything to do with Goethe’s quote. Every leader has faced the challenge to “just begin it!” This chapter offers access to actionable pathways for developing the foundational elements that are keys to leadership and exercising leadership effectively. You do not need to master the elements first, but they will be important for ensuring effectiveness in your leadership journey. Being a leader is a lifetime endeavor; once you step out, everything is different and there is no turning back. Healthcare today needs leaders at every level and in each discipline to succeed in solving complex problems with transdisciplinary solutions that can deliver cost-effective, high-quality care that ensures the health of all.

Distinguishing Management from Leadership

What Is “Leadership” and How Do You Become a Leader?

A 2012 Google search for “leadership” returned more than 113 million entries, which is not much help when you want only the best 10 or 20. If you seek a useful definition from experts as a starting point, you will discover that faculty who teach leadership courses know there are as many definitions of leadership as there are books written about it; when checked in 2012, Amazon listed 96,883 titles. Add to this the overwhelming amount of potentially contradictory conventional wisdom: that a really good manager will be a great leader, that leadership is the same as good management, that leadership is available only to those near the top, or that people must be born with the inherent ability to lead.

The critical first step for answering the question posed above is to understand what management is and how it differs from leadership. Management and leadership operate in different realms (Table 8.1). Managers are accountable for a known scope and use a set of well-developed skills to manage in their area. Conversely, leaders are responsible for and function in an unknown scope, using quite different skills to exercise leadership effectively.

Management

The Oxford English Dictionary notes that the term manager derives from the Italian verb maneggiare, meaning “to handle, train, be in charge of, control horses.” Today, managers are assigned accountability for overseeing processes that deal with or control things or people. It should therefore not be surprising that managers’ roles and responsibilities include coordinating and interacting with employees; handling, sharing, and analyzing information; problem solving and decision making; producing the best results possible; operating within a budget; and accounting for the status of everyone and everything within their scope of accountability. Larger organizations often have up to three levels of manager (top, middle, and supervisory), and at each level the scope of accountability varies, and in some cases overlaps.

High-performing staff members are often promoted to become managers. Some succeed and others learn that being a manager is not to their liking. The shift to a manager’s role challenges strong performers to delegate and to develop others. The role requires a shift from self-concern to one of mobilizing employees to tackle tough problems, as a manager’s success is frequently measured by the results of those they manage. Managers need to be taught skills to succeed in their new role, which cover a range from delegating, to building credible and reliable budgets, to delivering performance evaluations, to building a diverse, team-based workforce. Their roles cover a large span of activities and meetings, from providing incentives and recognition for work well done to implementing layoffs when circumstances require it. Managers’ authority is limited and tends to remain within their assigned area or scope, and their actions must align with senior management and the organization’s articulated strategies. Tasks are frequently identified and assigned by middle- or top-level managers who have the authority to make such delegations. After 30 years of teaching and working with managers, the authors have found that managers are often interested in being taught how to “manage up,” or to get their manager(s) to do what they think should be done. However, we find that after being given the tools, rarely do these same managers succeed or even try. If the concept of managing up is of interest, one needs to understand that it has more to do with being a leader than with managing.

Leadership

In contrast with the work of managing a known scope, a leader’s work is to make something happen in the future that was otherwise not (predictably) going to happen. Leaders have the courage to take on being responsible for what is unknown, the sense-making ability to navigate in uncertainty and ambiguity, the comfort to experiment using trial-and-error methods to discover their way, and the enthusiasm to inspire the engagement of others to garner progress when the pathways are uncertain or current knowledge argues that the vision being articulated cannot or will not happen (1).

How does one become a leader? As your parents probably said, anyone (you) can be a leader in any arena of concern or from any position in an organization. However, to succeed in being a leader and to exercise leadership effectively requires the development of a foundation upon which your confidence to lead grows and is recognized by others.

Can leadership be taught? Does understanding what other leaders have done, or knowing the styles they used, help someone to lead? The editors of a recent book, The Handbook for Teaching Leadership, reported an interesting conclusion after working with its 30 contributing authors on their varied teaching methods (18). Across more than 25 years of using many methods of teaching about leadership and leadership styles, and of studying cases to learn what leaders do, they found “scant empirical evidence that any of these approaches work” (17). In addition, there is insufficient research and an inconsistent body of knowledge to validate whether the methods being used succeed in developing the kind of leaders needed for an uncertain future (17). While teaching skills and imparting knowledge are what educators know best how to do, “the current state of leadership education lacks the intellectual rigor and institutional structure to advance beyond its present (and precariously) nascent stage” (12). In fact, only one of the 30 teaching methods in The Handbook for Teaching Leadership has the objective of leaving students actually being leaders and exercising leadership effectively (7, 9). Many of the perspectives in this chapter are derived from, or are a synopsis of, the ground-breaking material developed to support that precept of actually being an effective leader (5–8, 14).

So, you may rightly ask, can this chapter possibly be different? While we cannot reach through these pages and shape you into a leader, the material in this chapter offers access to foundational elements that provide potency and bring power to leading. While they can also improve your management, mastering them is critical for anyone desiring to be a leader and to exercise leadership effectively. The rest will be up to you.

Few will have the greatness to bend history itself. But each of us can work to change a small portion of events, and in the total of all these acts will be written the history of this generation.

Robert F. Kennedy

The Foundational Factors for Being a Leader: The First Foundational Factor Is Integrity

The softest pillow is a clear conscience.

Narayana Murthy, founder and former CEO, Infosys

When you ask people what integrity is, their answers are often expressed as values and norms. For instance, someone with integrity does not lie or steal. While there are moral, legal, and ethical underpinnings in every situation, organization, or professional group, integrity (as described here as a foundation for leadership) is not a normative or relative phenomenon. Integrity is independent and yet it underlies everything, and without it nothing works. Without being a person of integrity, you can set aside the notion of ever being a leader, and to be a person of integrity is a never-ending undertaking.

The Definition of Integrity

To understand this area, we start with the Merriam-Webster Dictionary definition of “integrity”:

  1. firm adherence to a code of especially moral or artistic values: incorruptibility
  2. an unimpaired condition: soundness
  3. the quality or state of being complete or undivided: completeness

Rarely do people notice the second and third components of the definition, though they are critical to having an actionable access to operating with integrity. If you focus on the second and third definitions, the notion that integrity establishes the underpinning for workability and performance becomes clear. We ask you to consider the following heuristic: as integrity (unimpaired, complete) increases, the conditions that allow for maximum performance also increase; therefore integrity is a critical condition of performance.

But what does it mean for a person or a leader to be unimpaired and complete? What it means for a person to operate with integrity is to honor one’s word. To clarify what that means, we will examine what honoring your word is, and more specifically, what is meant in detail by your word.

Honoring Your Word

Honoring your word means doing what you said you would do, or if you cannot or will not be doing what you said, letting others know as soon as you know that you will not be doing what you said, and dealing with the resultant consequences. Though this concept sounds relatively simple, many people do not understand (or they disregard) how important and fundamental it is to optimal performance. We are sure you can think of instances in which people say they will do something, do not do it, and never talk about it again. Common misconceptions become pitfalls that often prevent people from honoring their word.

The Pitfalls in Honoring Your Word

Many people fail to let others know what they did or did not do, sometimes even after the fact. Have you ever awakened in the middle of the night in a panic about a passed deadline, because you didn’t know if your team member(s) delivered or not?

The first pitfall in the arena of honoring your word occurs when people think that integrity is only a virtue. If integrity is understood to be a virtue, rather than a necessity, it conceals the fact that honoring your word is a necessary condition for performance. As a virtue, integrity is more easily sacrificed, especially when a person believes he or she must do so to succeed, or that it really does not matter. For instance, reporting only the good news, or the news you think others want to hear, can seem acceptable or even smart, as can saying you finished something that you in fact did not, simply because it sounds better and you know you can complete it after the fact, on an evening or weekend. What goes unseen or unrecognized in these situations is the resulting damage to individual and/or organizational performance.

When integrity is understood as honoring your word, then saying both what is and is not happening—the good, the bad, and the ugly—becomes dependable behavior. In environments where reliable information is readily available, managers can make more appropriate decisions about what is (or is not) getting done. For instance, if a manager knows what was not done, and the stakeholders who are expecting the deliverable have been advised of the potential delay, they can decide together whether the deliverable can wait until a later date or must be completed immediately. As more people learn to speak openly about what is and is not done, individual and organizational confidence grows and overall performance increases. Honoring your word helps to establish workable relationships that give others a sufficient sense of security to provide complete information.

A second pitfall occurs when managers are unaware that they have not honored their word or have missed a deadline. For years, we authors have discussed with people the importance of honoring their word. Many have admitted that they take better care of their automobiles than they do their word, because they can “see” their car but they cannot “see” their word. Managers have much to do; it seems almost impossible to know what outstanding commitments are yet to be completed (especially when many commitments are delegated via e-mail, often without discussion). Even when managers do know what has not been done, their energy may be focused on explaining why they did not complete the commitment (and constructing a report justifying their nondelivery) rather than communicating what was not done and focusing instead on the impact and possible solutions. When managers and staff operate without full awareness of their commitments, they frequently are unaware of the increased potential for a decline in performance in their area of accountability, their organizations, and/or themselves. It does not take staff long to determine if deadlines are reliable or not, or if their managers know the outcomes they are working on (or not), and reliable performance declines in such settings. If you see performance decreasing and it seems like everyone is honoring their word, you should ask, “When was performance last progressing at necessary or acceptable levels and/or when did it go off track, and at that point, what happened, what commitments were not being honored?”

The third pitfall that managers confront with honoring their word occurs when people think integrity means keeping your word or that you must always do what you said. Keeping one’s word and honoring one’s word are not synonymous as presented in this chapter. However, most people think the two are one and the same.

What happens when it is not possible (or when it is inappropriate, perhaps for legal reasons, strategic reasons, force majeure, etc.) to fulfill what was previously committed? In such cases, honoring your word, by letting others know what you will not or did not do, serves performance better than struggling to keep it. When transparency is not embraced, counterproductive behaviors often follow, such as not responding to e-mail in a timely manner (or ever) or avoiding people and meetings, and the overall coordination of performance is impaired. When a manager cannot keep his or her word and opts for the apparent short-term gain of concealing it rather than courageously acknowledging it, he or she may forfeit the power and respect that accrue from honoring one’s word. And without the respect of others, you can forget about being a leader.

Once you are aware of what it means to honor your word and how to honor your word, it becomes important to understand what constitutes your word.

C-Suite Recruiting Practices in Academic Medical Centers

William T. Mallon, Ed.D., David S. Hefner, M.P.A., and April Corrice
Association of American Medical Colleges

Summary of Findings

1. Teaching hospitals are constantly searching for new leaders.

2. The average leadership search in major teaching hospitals takes seven months and most frequently results in an external candidate being selected for the position.

3. Teaching hospitals rely on professional guidance and regularly use search committees in the search process for C-suite executives.

4. CEOs appear satisfied with many aspects of the leadership search process, but less so with outcomes in achieving a more diverse leadership team. Yet teaching hospitals may not be tapping all the resources at their disposal to reach out to diverse applicants.

5. Identifying candidates with the best “fit” is the most vexing challenge in the leadership search process for major teaching hospitals and health systems; building systems of talent management and leadership development is a potential solution.

6. Almost 4 in 10 medical school deans have no active role in the search and recruitment process for C-level executives at integrated teaching hospitals.

For each of the findings, we provide an analysis of the data and offer a promising practice, adapted from the AAMC’s Finding Top Talent handbook, on how to search for leaders in academic medicine.