Creating a Mission-Based Reporting System at an Academic Health Center

Lydia Pleotis Howell, MD, Michael Hogarth, MD, and Thomas F. Anders, MD

Abstract

The authors developed a Web-based mission-based reporting (MBR) system for their university’s (UC Davis’s) health system to report faculty members’ activities in research and creative work, clinical service, education, and community/university service. They developed the system over several years (1998–2001) in response to a perceived need to better define faculty members’ productivity for faculty development, financial management, and program assessment. The goal was to create a measurement tool that could be used by department chairs to counsel faculty on their performances. The MBR system provides measures of effort for each of the university’s four missions. Departments or the school can use the output to better define expenditures and allocations of resources. The system provides both a quantitative metric of time spent on various activities within each mission and a qualitative metric for the effort expended.

The authors report the process of developing the MBR system and making it applicable for both clinical and basic science departments, and the mixed success experienced in its implementation. The system appears to depict the activities of most faculty fairly accurately, and chairs of test departments have been generally enthusiastic. However, resistance to general implementation remains, chiefly due to concerns about reliability, validity, and the time required for completing the report. The authors conclude that MBR can be useful but will require some streamlining and the elimination of other redundant reporting instruments. A well-defined purpose is required to motivate its use.

The development of mission-based management programs has been the focus of many academic medical centers, and the Association of American Medical Colleges (AAMC) has encouraged the adoption of such programs. The AAMC defines mission-based management as “a process for organizational decision making that is mission-driven, ensures internal accountability, distributes resources in alignment with organization-wide goals, and is based on timely, open and accurate information.”1 An essential aspect of mission-based management is the ability to measure faculty and department activities that contribute to the missions of the school. This is, however, a highly controversial area, since faculty fear that poorly designed measurement systems will adversely affect their salaries, promotions, workloads, and allocation of support. Relative-value units (RVUs), commonly used for billing, are a generally accepted method of gauging clinical productivity; however, only a few published methods describe productivity measures for non-clinical missions, such as education.2–6 Likewise, only a few of the published mission-based management systems have attempted to integrate the information from all missions for an individual faculty member.7,8

In this article we describe our development of a mission-based reporting (MBR) system that measures faculty members’ quantitative and qualitative efforts in the four missions of clinical work, research, education, and administration/community-service activities. We designed MBR as a reporting system that provides chairs with quantitative and qualitative information about their departments related to each of the four missions. We avoided the term mission-based management because we wanted to deemphasize control and the negative connotations of the term management. We intended, rather, that the term reporting convey recognition of faculty members’ efforts and growth in their careers. The purpose of MBR is to provide a reporting tool for use in evaluating faculty resources and department performance, both retrospectively and prospectively. The tool helps chairs to better fulfill the missions of their departments and the school, plan for the future, and mentor and reward individual faculty members.

System Design

Technical characteristics: We initially designed the MBR system in 1998 as an Excel spreadsheet, but changed it to a Web-based program early in the course of development so that participating faculty could better access their individual records and enter and view their own results. The current version of MBR employs a three-tier architecture with a Web browser as the client software, an application server for middle-tier “business logic,” and a relational database for data storage. Since the MBR system is Java Servlet 2.1-compatible, it can be deployed in a wide variety of server environments. User summary reports are provided as portable document format (PDF) files, constructed “on the fly” from data in the database and sent to the Web browser when a user requests the report. We chose the PDF format because it prints with high fidelity, giving the summary reports a professional appearance. A printed record is available for each individual faculty member. Printable summary reports compile data by department, for the school as a whole, and by faculty rank and/or series across departments (Charts 1–3). Security levels exist so that an individual faculty member can view his or her own personal record only. A department chair can view the records of all faculty members within his or her own department, and the deans can view the records of all faculty and departments.
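
To make the architecture concrete, the following is a minimal sketch (not the published UC Davis code) of the request-handling pattern described above: a servlet receives a report request, a middle-tier helper assembles the PDF from the relational database, and the bytes are streamed back to the browser. The class and method names (SummaryReportServlet, ReportStore.buildSummaryPdf) are illustrative assumptions.

```java
import java.io.IOException;
import java.io.OutputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Sketch of the reporting tier: streams an on-the-fly PDF summary report. */
public class SummaryReportServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String facultyId = request.getParameter("facultyId");
        // A production system would also enforce the security levels described above
        // (faculty member vs. chair vs. dean) before releasing the record.

        // Middle-tier "business logic": fetch the stored activity and evaluation data
        // and render them as a PDF byte stream.
        byte[] pdf = ReportStore.buildSummaryPdf(facultyId);

        response.setContentType("application/pdf");
        response.setContentLength(pdf.length);
        try (OutputStream out = response.getOutputStream()) {
            out.write(pdf);
        }
    }

    /** Placeholder for the report builder backed by the relational database. */
    static class ReportStore {
        static byte[] buildSummaryPdf(String facultyId) {
            // In the real system this would query the database and compose the PDF "on the fly".
            return new byte[0];
        }
    }
}
```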

Designing the database structure: We designed the basic data-entry module in three sections: an Activity Section for faculty to enter their year’s activities, an Evaluation Section for qualitative assessment of performance, and an automated Summary “Report Card.” Each of the three sections is further subdivided according to the university’s four missions: clinical service, investigation and creative work (i.e., research/scholarship), teaching, and administration/university/community service. Before a faculty member begins to enter data, that individual’s “budgeted” or “targeted” percent effort for each mission is entered by the department manager. Budget projections (targets) of faculty effort by mission for each faculty member are required as part of each department’s annual budget submission. These budgeted projections are entered into the MBR system.

The MBR system is a self-report system whereby individual faculty members enter their data (quantitative and qualitative) by mission and immediately see the relative values of their efforts. Faculty entries are later reviewed and validated by the department chair during an annual career-planning session required for all faculty. Based on the faculty member’s entries in the Activity Section, the MBR program computes an estimate of the time spent in each activity, using the RVU codes embedded in the program. Activity scores for each mission are summed. Each mission summary score is then transferred to the “% Actual” field in the summary report card. A grand total for percent effort is also computed. The summary report card thus compares previously entered “projected” or “targeted” effort with actual activities entered by the faculty member for each mission (Chart 1).
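
A minimal sketch of this scoring step, under assumed RVU weights and target values (the actual codes are embedded in the MBR database and are not published here), is shown below. It sums the RVU time weight of each reported activity by mission to produce the “% Actual” figures and compares them with the budgeted targets.

```java
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

/** Illustrative activity summation: RVU time weights (percent of a year) summed per mission. */
public class ActivitySummary {

    enum Mission { CLINICAL, RESEARCH, TEACHING, SERVICE }

    record Activity(Mission mission, double rvuPercentOfYear) {}

    /** Sum RVU time weights by mission to produce the "% Actual" column. */
    static Map<Mission, Double> percentActual(List<Activity> entries) {
        Map<Mission, Double> actual = new EnumMap<>(Mission.class);
        for (Activity a : entries) {
            actual.merge(a.mission(), a.rvuPercentOfYear(), Double::sum);
        }
        return actual;
    }

    public static void main(String[] args) {
        // Invented entries and weights, for illustration only.
        List<Activity> entries = List.of(
                new Activity(Mission.RESEARCH, 10.0),   // e.g., one journal article
                new Activity(Mission.TEACHING, 4.0),    // e.g., a lecture series
                new Activity(Mission.CLINICAL, 40.0));  // e.g., weekly clinic half-days

        Map<Mission, Double> actual = percentActual(entries);
        Map<Mission, Double> target = Map.of(           // budgeted ("targeted") effort
                Mission.RESEARCH, 20.0, Mission.TEACHING, 10.0,
                Mission.CLINICAL, 60.0, Mission.SERVICE, 10.0);

        double grandTotal = actual.values().stream().mapToDouble(Double::doubleValue).sum();
        for (Mission m : Mission.values()) {
            System.out.printf("%-8s target %5.1f%%  actual %5.1f%%%n",
                    m, target.getOrDefault(m, 0.0), actual.getOrDefault(m, 0.0));
        }
        System.out.printf("Grand total actual effort: %.1f%%%n", grandTotal);
    }
}
```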

Defining activities and computing RVUs: Faculty from diverse departments within the University of California Davis School of Medicine served on committees dedicated to defining parameters for each of the university’s four missions (listed earlier). Faculty volunteered, were appointed, or were selected to serve on committees because of their special interests or expertise. In general, committees were open to anyone who wished to serve, but committee size did not exceed 15 for any one committee. Two of us (LH and TA) served as chair or co-chair for each of the committees. We charged each committee to select and define the most relevant and representative activities for its assigned mission. The charge urged comprehensiveness but, at the same time, demanded simplicity.

The Activity Section translates activities into quantitative time/effort-based metrics. Thus, another charge to the committees requested estimates of the quantity of time expected to complete each activity over the course of a year (Chart 4). The quantity of time was defined as a percentage of a year spent performing that activity, using a 50-hour work week as the standard. The committees achieved consensus on estimated average times to accomplish each activity based on personal experience and creative deduction. For example, there is no easily established standard for the length of time it takes to complete a manuscript. However, promotion committees generally expect faculty to publish the equivalent of at least two journal articles per year, and our clinical faculty strive to have a minimum of 20% of their time protected for scholarly activities. Thus, the RVU time allotment for a journal article for a clinical series faculty member was calculated accordingly.
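
The arithmetic behind these time-based weights is straightforward, as the short example below illustrates: an activity’s estimated annual hours are converted into a percent-of-year weight using the 50-hour-week standard, and the journal-article allotment follows from the stated expectations (roughly 20% protected time divided by two articles per year). The number of working weeks per year and the committee-hours figure are assumptions for illustration, not values taken from the committees’ tables.

```java
/** Illustrative arithmetic for the time-based RVU convention described above. */
public class RvuTimeExample {
    public static void main(String[] args) {
        double hoursPerWeek = 50.0;   // standard work week used by the committees
        double weeksPerYear = 50.0;   // assumed working weeks per year (illustrative)
        double hoursPerYear = hoursPerWeek * weeksPerYear;

        // Convert an activity's estimated annual hours into a percent-of-year RVU weight.
        double committeeHours = 75.0; // hypothetical committee assignment
        System.out.printf("Committee work: %.1f%% of a year%n",
                100 * committeeHours / hoursPerYear);

        // Clinical-series example from the text: ~20% protected scholarly time and an
        // expectation of roughly two articles per year implies ~10% of a year per article.
        double protectedResearchPercent = 20.0, articlesPerYear = 2.0;
        System.out.printf("One journal article: ~%.0f%% of a year%n",
                protectedResearchPercent / articlesPerYear);
    }
}
```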

In a later refinement, a higher RVU score was assigned to articles published in peer-reviewed journals than to limited-distribution articles because promotion committees value the former more highly. Similarly, book chapters were given more relative value for clinician–educator faculty than for research faculty. In the same spirit, abstracts and “submitted” grants were weighted more for junior than for senior faculty. Such differential weightings of time-based RVU codes motivate and reward faculty for activity that is aligned toward academic success in their respective series (i.e., track) and rank. The MBR program knows which RVU codes to select for a given faculty member because the department manager enters the rank and series of each faculty member at the same time that the percent “targeted” effort from the budget is entered. The faculty member entering data is “blind” to the RVU weight assigned to each activity.
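
A sketch of how such series- and rank-dependent weighting might be represented is shown below. The weight table and its values are invented for illustration; in the MBR system the codes live in the database and are applied invisibly once the department manager has entered the faculty member’s rank and series.

```java
import java.util.Map;

/** Sketch of rank- and series-aware RVU selection; weights shown are invented. */
public class RvuWeightTable {

    enum Series { RESEARCH, CLINICIAN_EDUCATOR }
    enum Rank { ASSISTANT, ASSOCIATE, FULL }
    record FacultyProfile(Series series, Rank rank) {}

    // Keyed by activity + series + rank; a real table would live in the relational database.
    private static final Map<String, Double> WEIGHTS = Map.of(
            "BOOK_CHAPTER|CLINICIAN_EDUCATOR|ASSOCIATE", 6.0,
            "BOOK_CHAPTER|RESEARCH|ASSOCIATE", 3.0,
            "SUBMITTED_GRANT|RESEARCH|ASSISTANT", 8.0,
            "SUBMITTED_GRANT|RESEARCH|FULL", 4.0);

    static double weightFor(String activity, FacultyProfile who) {
        String key = activity + "|" + who.series() + "|" + who.rank();
        return WEIGHTS.getOrDefault(key, 0.0);
    }

    public static void main(String[] args) {
        FacultyProfile juniorResearcher = new FacultyProfile(Series.RESEARCH, Rank.ASSISTANT);
        // The faculty member only reports "submitted a grant"; the weight is applied invisibly.
        System.out.println(weightFor("SUBMITTED_GRANT", juniorResearcher));  // prints 8.0
    }
}
```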

Both the teaching and the clinical services committees were required to distinguish patient care with students from clinical service without associated teaching. Since published reports indicate that faculty spend approximately 43–53% of time teaching residents in ambulatory care settings,9,10 we designed the MBR system to allocate 50% of clinical time spent with trainees to the clinical mission and 50% to the teaching mission. The clinical services module was designed as a logic tree requiring faculty to enter the weekly half-days in the clinic with and without students, and the number of months per year as ward attending with and without students. The MBR program then allocates effort to the two missions automatically. In the first version of the MBR system, these calculations had been left to the individual faculty member. Significant confusion and misinterpretation of instructions led us to automate the input via the structured decision tree.
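
The following sketch illustrates that allocation logic in code form: weekly half-days and attending months with and without trainees are converted to percent effort, and trainee-associated time is split evenly between the clinical and teaching missions. The conversion factors are placeholders, not the system’s actual RVU codes.

```java
/** Illustrative 50/50 clinical/teaching allocation for clinical time with trainees. */
public class ClinicalTeachingSplit {

    // Hypothetical conversion factors (percent of a year per unit of activity).
    static final double PERCENT_PER_WEEKLY_HALF_DAY = 5.0;
    static final double PERCENT_PER_ATTENDING_MONTH = 8.0;

    public static void main(String[] args) {
        double halfDaysWithTrainees = 4, halfDaysAlone = 2;
        double attendingMonthsWithTrainees = 2, attendingMonthsAlone = 0;

        double withTrainees = halfDaysWithTrainees * PERCENT_PER_WEEKLY_HALF_DAY
                + attendingMonthsWithTrainees * PERCENT_PER_ATTENDING_MONTH;
        double alone = halfDaysAlone * PERCENT_PER_WEEKLY_HALF_DAY
                + attendingMonthsAlone * PERCENT_PER_ATTENDING_MONTH;

        double clinical = alone + 0.5 * withTrainees;  // half of trainee time stays clinical
        double teaching = 0.5 * withTrainees;          // half is credited to teaching

        System.out.printf("Clinical mission: %.1f%%  Teaching mission: %.1f%%%n",
                clinical, teaching);
    }
}
```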

Similarly, for the administration/university/community service mission, we did not want to credit all committee and administrative activities equally. The university endorses community service, and the promotion committees expect some service activities of faculty. However, academic advancement is not enhanced by excessive community service at the expense of scholarship. Therefore, less RVU credit and fewer opportunities were provided in the Activity Section for these activities. Only major school and university committees, such as the institutional review board, promotion committee, and admission committee, were included. These committees require large time commitments of faculty and are considered important for the school’s function. We did not include minor committees and service work outside the university but credited them qualitatively in the Evaluation Section. We coded administrative activities that are considered part of the job description of a chair, dean, division chief, or other leader on the basis of the size of the department/division or scope of the responsibility.

For the qualitative metrics designed for the Evaluation Section, the committees were charged with developing a list of standards reflecting the quality of the work performed. The standards were ranked from 0 to 5. Thus, the Evaluation Section (Chart 2) summarizes the qualitative aspects of the activities scored previously. The teaching mission is evaluated from the perspectives of both students and peers, and the ratings are averaged to achieve a final evaluation score for teaching. Individual evaluation standards are not additive. An individual faculty member records only one standard for each mission. This evaluation score is then automatically imported to the Summary Report Card and can be viewed separately for each mission.

As part of the Summary Report Card, the computer also multiplies the evaluation score by the activity score to achieve a single quantity/quality product for each mission. The mission products are then summed to obtain a single summary score for each faculty member. The following theoretical model drives the interpretation of this summary score. If a faculty member’s actual activities total 100% and her or his evaluation codes for each mission are 3, the resultant final summary score of 300 (100 × 3) reflects expected and appropriate performance. In other words, faculty members whose summary scores are at least 300 are on target for academic advancement. A score below 300 suggests substandard performance for the year and requires attention from the chair. A score above 400 indicates outstanding performance worthy of an incentive reward.
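
In code, the Summary Report Card arithmetic amounts to a weighted sum, as in the brief sketch below; the mission names, percentages, and evaluation codes are illustrative.

```java
import java.util.Map;

/**
 * Illustrative summary-score computation: each mission's evaluation score (0-5)
 * multiplies its actual percent effort, and the products are summed. Roughly 300
 * corresponds to on-target performance, below 300 needs attention, above 400 is outstanding.
 */
public class SummaryScore {

    static double summaryScore(Map<String, Double> actualPercent, Map<String, Integer> evaluation) {
        return actualPercent.entrySet().stream()
                .mapToDouble(e -> e.getValue() * evaluation.getOrDefault(e.getKey(), 0))
                .sum();
    }

    public static void main(String[] args) {
        Map<String, Double> actual = Map.of(        // percent effort per mission
                "clinical", 60.0, "research", 20.0, "teaching", 15.0, "service", 5.0);
        Map<String, Integer> eval = Map.of(         // evaluation standard per mission
                "clinical", 3, "research", 3, "teaching", 4, "service", 3);

        double score = summaryScore(actual, eval);  // 60*3 + 20*3 + 15*4 + 5*3 = 315
        System.out.println("Summary score: " + score);
    }
}
```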

Implementation: Testing and Modifications

Phase 1: Selected feasibility testing: We chose to test and modify the MBR system in three phases. In phase 1 in 1998, we tested the initial RVU and performance codes created by each committee for inconsistencies, omissions, and other user-entry problems on 21 randomly selected volunteer faculty members. Of the 21, two had quantitative scores less than 100% (56.0% and 55.9%), six had scores between 100% and 150%, and 13 had scores higher than 150%. The faculty with the high scores were hardworking, but not working at the level their scores would indicate, nor were the two faculty members whose scores were less than 100% considered to be “slackers.” The mission in which the largest number of faculty showed discrepancies between targeted effort and actual effort was the teaching mission: 16 of 21 faculty exceeded their targeted expectations by more than 10%. The next most discrepant mission was the investigation and creative work mission, with nine of 21 faculty demonstrating similar over-reporting. For the clinical mission, all of the faculty had discrepancies of less than 10% between targeted effort and actual effort. For the administration/university/community service mission, department chairs and deans had actual percentages below the targeted percentages because some of their activities had not been included.

In response to this initial pilot trial, adjustments were made to the RVU codes. For the teaching mission, time values believed to be excessive were decreased for some activities. In the quantitative portion of the administration/university/community service section, a line was added for “administrative stipend” (% salary support) to account for time spent on administrative activities relevant to the job descriptions of department chairs or other leaders. The results from the phase 1 trial enabled us to better define activities and adjust the RVU-weighted scores.

Phase 2: Pilot testing with selected departments: In phase 2 in 1999, we tested the revised system on 131 faculty members from eight departments. These departments ranged in size from five to 28 faculty members and included two basic science departments, three surgical departments, two medical departments, and one hospital-based specialty department, with an almost even division between clinical and basic science activities. Faculty members in each of the test departments completed MBR data entry online prior to their annual career planning sessions with their chairs. The printed results for each faculty member were validated by the chair and discussed with the faculty member.

For the investigation and creative work mission, only one department did not have faculty members who were under target. Half of the departments had more than 48% of their faculty under target, suggesting under-performance. The under-target faculty in this mission tended to be basic scientists or faculty with large percentages of time designated for research. They were often junior faculty who were still in the start-up phases of their research careers. Based on these findings, several new activities were added to the investigation and creative work section to reflect work in progress. Credit for published abstracts, grants submitted but not yet funded, cost recovery on grants, and time spent in study sections was added. These activities were also given greater RVU weight for junior faculty than for senior faculty. Only one department produced results that showed that the majority of its faculty were over target for the investigation and creative work mission. This was a surgery department whose faculty had been budgeted with minimal time for research. As a consequence, even modest scholarly output made it fairly easy for these faculty to exceed their targeted time.

For the teaching mission, all of the phase 2 trial departments produced results that indicated that the majority of faculty were on or over target; the improvements to the RVU weightings after phase 1 had been successful. Only one fourth to one third of the faculty were under target, and almost equal numbers of faculty were over and under target. For the administration/university/community service mission, five of the eight departments also showed the majority of their faculty to be on or over target for that mission. Likewise, for the clinical mission, six of the eight departments with clinicians showed that more than 50% of their members were on target. In two departments large percentages of faculty were under target. One of these was a hospital-based specialty whose clinical activities were not easily measured by the system. The results of phase 2 pointed to yet other areas in need of revision.

Phase 3: School-wide implementation: Based on the experience from phase 2, we made additional refinements, focusing primarily on further fine-tuning the RVU scores. Because some faculty were concerned that the RVU weights underlying the activity scores were not visible to them, we revised the program so that a mouse click displays the actual RVU weight used in the computation. In addition, we added “help” buttons for specific items whose definitions had been ambiguous. A mouse click on the help button now provides a specific definition of the activity.

During the post-phase 2 refinements, we reconvened the committees. Their further guidance and advice were reflected in the revision. Many committee members had experienced first-hand the phase 2 implementation. Throughout all phases of MBR development, we actively pursued dialogue with our faculty. We discussed difficulties and changes in a variety of forums such as the faculty senate, the Council of Department Chairs, and the curriculum committee, and at department faculty meetings. Individual faculty provided input directly or via e-mail. Phase 3 tested MBR in a school-wide trial of all faculty and departments.

We modified the RVU coding system to stratify faculty by rank and faculty series. Since junior faculty are often in more of a “building” phase of their careers, with less published investigative/creative work or funded grants, instructors and assistant professors were given more credit for work in progress than were senior faculty. Stratification based on rank and series also expanded the system’s summary reporting and data-analytic capacities.

In 2000 for phase 3, the dean’s office required use of the new version of the MBR system for annual faculty career planning by all departments in the school. The dean’s office did not articulate a clear purpose for MBR but did clearly state that the results of MBR would not be used for any salary or promotion planning. The dean’s office implied that the results would be used only to further refine categories of academic activity and the RVU and Evaluation Section codes.

Discussion

Developing an MBR system is a complex task requiring careful group planning, considerable administrative support, and significant time for design, testing, and modification. Even then, there are obstacles to general faculty acceptance and uniform use. It is not clear from the extant literature that any mission-based management system has gained general acceptance and is regularly being employed successfully.

The system we describe differs from other published mission-based systems in several ways. One important difference concerns the definition of the research/scholarly mission and what types of work should be included as evidence of productivity. In our system, we specifically selected the term “investigation and creative work” to encompass the scholarship of education, application, and integration as well as the scholarship of discovery. The former are evidenced by publication of books, book chapters, educational manuals, review articles, and peer-reviewed articles describing clinical experience. In other mission-based systems, many of these activities would be included under the educational mission.6,7 However, our university defines all of these types of activities as creative scholarship and views them as research specific to one or more of the academic series. The university criteria are reinforced in the MBR system by giving due credit for integrative and educational publications for faculty in the education and clinical series. RVU credits were weighted according to the type of publication (book chapter versus peer-reviewed article) and the faculty member’s rank and series.

Another difference unique to MBR is the separation of quantitative and qualitative measurements of productivity. The system described by Nutter et al. integrates a qualitative multiplier directly into the quantitative RVU score assigned to each activity.6 By separating the two in MBR, department chairs or administrators can consider each dimension separately for different purposes. Examining the quantitative component alone can be useful in determining staffing or assignment of duties to an individual. The qualitative component can be examined separately to advise faculty about areas needing improvement. The quantity/quality product provides an indication of the cost–benefit value of the activity. The summary score might be useful in the promotion process or in comparing faculty for other forms of rewards. School administrators might also consider rewards on a broader department level. For example, the mission-based management system at the University of Florida bases 20% of the department’s budget allocation on the qualitative component of its effort in the educational mission.11

It is important to note that the phase 2 trial with eight departments demonstrated that many of the faculty in the clinical departments had quantitative scores significantly exceeding 100%. This indicates that most faculty are working more than the 50-hour week, which had been considered the standard in creating this MBR system. We were not surprised by this result. We operate a rapidly growing primary care network in a highly competitive managed care market. The faculty’s clinical workload has significantly increased.

If the quantitative RVU scores assigned to clinical activities are deemed to be accurate and fair, faculty members should be able to advance academically by working only slightly above 100% time. If faculty members are academically successful only by working clinically at effort levels that greatly exceed 100%, then the expectations that surround academic advancement and the assignment of clinical workload are in direct conflict. Demanding continued performance much greater than 100% will lead to faculty burnout and problems with retention. Exit interviews by the dean with a number of faculty have suggested that “private” group practice is a more personally rewarding and manageable alternative than the 150% effort required of academic medicine. We believe that it is important to document faculty efforts beyond normal working hours in order to support academic advancement and better align faculty compensation with faculty effort.

During the phase 2 trial, we also found it interesting that the mission with the most discrepancy between target effort and actual effort was the investigation and creative work mission. Basic scientists were understandably suspicious of a system that made them look underproductive. Gauging research productivity had been problematic during the design stage. The research subcommittee had specifically concluded that quantitative effort in this mission should be based only on final products (published papers, funded grants). The other missions were largely time-based. Since the MBR system is designed to be implemented annually, research productivity may be specifically compromised because of publication lag times and grant-submission review cycles. Most research projects take several years before coming to fruition. Since work in progress was not originally credited and only published work was considered, a faculty member could appear to be under-productive one year and over-productive the next year when the work that was in progress the first year was finally published in the second year.

We used the results of phase 2 to revise the MBR system. In phase 3, we included additional credit for salary support from grants, for abstracts, and for new grant submissions. The AAMC’s mission-based management program noted that there are some advantages in including these activities, and that they are included in mission-based systems at other schools.12 Despite these additions, some element of under-reporting of faculty efforts in the investigation and creative work mission may continue to exist. Discovery-type research is by nature an inefficient process in which many time-consuming efforts do not result in funded grants or publishable work. If the MBR system described here is to be used as part of annual faculty career counseling, chairs will need to be cognizant of this issue and avoid judging a faculty member unfavorably unless a trend is observed for more than one year. This mission will merit continued scrutiny as the system is further refined.

The difficulties we encountered in phase 3 testing of the MBR system include a persistent general resistance by faculty and chairs. Faculty concerns focused on resistance to quantification of their activities, a belief that the information collected would be more harmful than helpful, and a conviction by each specialty that its activities are unique and, therefore, cannot be fitted into a general template. Similar difficulties have been encountered by others and remain a challenge for general implementation.

One significant remaining challenge that requires further refinement is the area of on-call time. The issues of in-house versus at-home call, 24-hour versus night and weekend call, procedural versus consultative call, and resident versus non-resident supported call are difficult to equate across specialties.

Despite these ongoing challenges, we believe that the overall experience with the MBR system at UC Davis has been positive. Significant faculty-wide attention has been focused on the benefits of MBR, and there has been general recognition of its necessity. Skeptical department chairs became more enthusiastic when shown the summary results for their faculty. In general, chairs of the eight test departments in phase 2 felt that the MBR system did give higher scores to the faculty whom they had previously perceived as high achievers, and lower scores to those faculty who they felt were relatively weaker. They also found MBR to be a good springboard for discussions with faculty members during their annual career-counseling sessions.

We are making an effort to overcome continued resistance by some faculty and address the barriers to implementation. Existing data collected by other administrative units, such as a faculty member’s clinical RVU-generation report and research grant and contract dollars, should be downloaded directly to that individual’s MBR record. Such automation reduces redundancy, minimizes individual input, and increases data integrity and report accuracy. However, MBR may never gain acceptance until input efforts result in responsive decision making for allocation of resources to departments and/or for more streamlined procedures for academic advancement.

MBR can be used by department chairs as a management tool for individuals, to discuss faculty performances and goals and determine salary, or to automate some of the tedious hard-copy paperwork required for promotion actions. For departments, examination of the total projected effort and actual effort expended in each mission can aid in determining faculty staffing and work assignments, identifying recruitment needs, and developing department budgets. For the school, MBR data can be used to aid in equitable allocation of funds and space to missions and departments. The allocation of positions and money to departments on the basis of mission-based management has been described elsewhere.9 Use in decision making, however, requires trust in the accuracy of the system. Future efforts to ensure accuracy and build trust will require refinement of quantitative and qualitative scores for each mission. Comparison of MBR results with successful promotion actions is one way to establish validity.

Acknowledgement

Special thanks to Benny Poon, Medical Informatics Group, for his programming expertise in the development of the Web-based MBR system.


This article was originally published in the February 2002 issue of Academic Medicine.

References

1 Association of American Medical Colleges. Mission-Based Management Program: Introducing the MBM Resource Materials. Washington, DC: AAMC, 2000.

2 Bardes CL, Hayes JG. Are the teachers teaching? Measuring the educational activities of clinical faculty. Acad Med. 1995;70:111–4.

3 Bardes CL, Hayes JG, Falcone DJ, Hajjar DP, Alonso DR. Measuring teaching: a relative value scale in teaching. Teach Learn Med. 1998;10:40–3.

4 Bardes CL. Teaching counts: the relative-value scale in teaching. Acad Med. 1999;74:1261–3.

5 Sachdeva AK, Cohen R, Dayton MT, et al. A new model for recognizing and rewarding the educational accomplishments of surgery faculty. Acad Med. 1999;74:1278–87.

6 Nutter DO, Bond JS, Coller GS, et al. Measuring faculty effort and contributions in medical education. Acad Med. 2000;75:199–207.

7 Garson A, Strifert KE, Beck R, et al. The metrics process: Baylor’s development of a “report card” for faculty and departments. Acad Med. 1999;74:861–70.

8 Hilton C, Fisher W, Lopez A, Sanders C. A relative-value–based system for calculating faculty productivity in teaching, research, administration, and patient care. Acad Med. 1997;72:787–93.

9 Zweig SC, Glenn JK, Reid JC, Williamson HA, Garrett E. Activities of the attending physician in the ambulatory setting: what part is teaching? Fam Med. 1989;21:263–7.

10 Melgar T, Schubiner H, Burack R, Aranha A, Musial J. A time–motion study of the activities of attending physicians in an internal medicine and internal medicine–pediatrics residents continuity clinic. Acad Med. 2000;75:1138–43.

11 Watson RT, Romrell LJ. Mission-based budgeting: removing a graveyard. Acad Med. 1999;74:627–40.

12 Holmes EW, Burks TF, Dzau V, et al. Measuring contributions to the research mission of medical schools. Acad Med. 2000;75:304–13.