Using a recognition and reward initiative to improve service quality: a quasi-experimental field study in a public higher education institution.

July 23, 2011

Public Personnel Management

June 22, 2011

BYLINE: Kopelman, Richard E.; Gardberg, Naomi A.; Brandwein, Ann Cohen

SECTION: Pg. 133(17) Vol. 40 No. 2 ISSN: 0091-0260

LENGTH: 7503 words

A growing body of evidence suggests that excellence in service quality is an important
driver of customer satisfaction, which in turn yields additional benefits to
organizations, such as customer loyalty. (1) One approach to
enhancing service quality is via employee recognition and reward programs. Such
programs have also been found to improve employee attitudes (2) and to facilitate
organizational change. (3) As DeMers noted: “City, state, and county governments
would do quite well to institute a blend of best practices from around the country
that embrace new private-sector strategies while at the same time accentuate existing
government benefits.” (4) Similarly, Lachance commented that “[t]here is growing
realization that the tools and techniques we have available to engage and sustain
the commitment we need from public servants go beyond the annual salaries or even
basic forms of pay-for-performance.” (5) Yet, surprisingly little research has examined
such tools in nonprofit or public sector contexts. In the present investigation,
we examine the effects of a service excellence recognition and reward program on
perceptions of service quality in a public higher education institution.

Our research addresses four gaps in the current empirical literature. First, Dean describes a service profit chain in which a positive environment for employees leads to value for customers, which in turn leads to improved organizational results. (6) Yet, her literature review examined only one article from the nonprofit or public sector (a police department in the UK). We believe it is important for both academicians and managers to understand whether these findings are generalizable to other contexts. For example, do relationships between organizational features, such as management culture, and employee attitudes generalize to the nonprofit and public sectors? More specifically, although an extensive body of research has examined associations between service climate initiatives and the behaviors of employees and customers in the private sector, to our knowledge no research has examined the effects of a service excellence initiative in a public higher education institution. Faculty and administrative assistants share some characteristics with private sector employees but vary in other important ways. As Ruben noted in connection with the longstanding (12-year) service quality enhancement initiative at Rutgers University, public sector higher education institutions are characterized by: (1) “ultrastability” of the workforce; (2) limited availability of incentives and disincentives; and (3) complex bureaucratic structures. (7)

Second, in addition to examining relationships in a new context, we think that these relationships require study using a longitudinal approach. The body of research on the relationship between service excellence initiatives and organizational outcomes has primarily been cross-sectional. Dean calls for longitudinal research in order to have more confidence in the internal validity of cause and effect inferences. (8)

Third, little empirical research has examined the effects of a recognition program (coupled with rewards) to facilitate service quality improvement. The present research employs a quasi-experimental multiple measure design in this quest.

Finally, the present investigation is contributory in focusing on an often overlooked job category, administrative assistants. Although usually located at lower levels of organization charts, administrative assistants can be integral to an organization’s performance. They frequently serve in a boundary-spanning capacity–e.g., as the receptionist or clerical person who is first encountered by a client/customer. (9) In this role, they help create the initial impression and set the tone for the service/product encounters that ensue. Administrative assistants also function as conduits for information and resources between departments, and in this manner assist the process of strategic linking. (10) Of course, administrative assistants also handle the myriad details that are essential for office efficiency (e.g., ordering supplies, scheduling meetings, completing various forms, etc.). At one extreme, an administrative assistant can (passive-aggressively) simply tell visitors “Not in; stop back later;” or, at the other extreme, the administrative assistant can actively listen, apply initiative, and assist the caller in solving his/her query or problem.

In the next section we review literature related to service excellence initiatives and perceptions of service quality. In the third section we describe our intervention and hypotheses. In the fourth section we discuss our results. We conclude with an assessment of the contribution of the present intervention and suggestions for practice and implementation in public sector organizations.

Service Excellence and Organizational Outcomes

Service excellence occurs when perceptions and evaluations of service received exceed service expectations. (11) Widely accepted criteria of service quality include the following five dimensions (and operational definitions): reliability (providing the level of service promised), responsiveness (demonstrating a willingness to help), assurance (possessing knowledge and providing courtesy), empathy (degree of caring), and tangibles (including physical facilities). (12) See Horwitz and Neville for a review of the literature on organizational design for service excellence. (13)

The relationship between service excellence and positive organizational outcomes is well-established. After reviewing more than twenty years of research in marketing, operations management and human resource management, Dean developed a model linking organizational characteristics to service delivery and consequent outcomes. Specifically, her model shows how organizational features influence employee attitudes which in turn influence service quality, customer reactions, and financial outcomes. (14)

Blackburn and Rosen found that human resource management was a critical factor in total quality provision among Baldrige Award-winning companies. (15) They identified fourteen “ideal” human resource practices as well as challenges for total quality management. Among their 14 checkpoints are: top management being responsible for initiating and supporting a vision of a total quality structure, as well as the provision of non-financial recognition systems at both individual and work-group levels to reinforce small and big victories.

Recognition of the positive effects of service excellence on organizational outcomes has led organizations to encourage a culture of service. Rewards and recognition can complement structural changes in the implementation of organizational change. (16)

Blackburn and Rosen describe more than 30 types of rewards and reinforcers, including employee empowerment, team goals, wellness programs, and so forth. (17) Evidence also exists that recognition may be as efficacious as financial rewards in service settings. (18, 19) DeMers provides anecdotal evidence of the effectiveness of employee recognition programs for IT professionals employed in the public sector. (20)

Shostack (21) describes industries in terms of a tangible–intangible continuum, in which soft drinks are tangible dominant and teaching is intangible dominant. Consistent provision of service excellence becomes more challenging as an industry becomes more intangible dominant. (22) Thus, a nonprofit or public sector context, such as a public university, provides a challenging environment in which to provide quality service. Recognition and rewards may be particularly valuable when institutional factors such as civil servant status and unionization hinder other potential organizational changes. Moreover, as noted above, Dean did not find any studies of service excellence in a university context. (23)

Intervention and Hypotheses

In this section we describe the Service Excellence Initiative (SEI) that was undertaken and advance several hypotheses pertinent to assessing the effectiveness of one component of the SEI. In the summer of 2003, the Dean of a very large business school (hereafter VLBS) articulated the organization’s mission and issued a call for creating a culture of service excellence comprised of both technical and administrative support. The SEI focused on enhancing the administrative support provided by frontline personnel in assisting faculty, department chairs, students, and prospective students. Examples of support include scheduling meetings, ordering supplies, answering student questions, and disseminating information. A primary goal of the SEI was to recognize and reward outstanding work performance by administrative assistants and related staff titles (hereafter described for brevity as administrative assistants), and thereby enhance the overall quality of service provided. To implement this goal, an SEI Task Force was created that was comprised of three faculty members (including a Department Chair), three students (including a PhD student who served as administrator), two administrative assistants (members of the focal group to be recognized and rewarded), an administrator (the Director of Graduate Student Services), and a representative from the College’s Human Resource Department. In subsequent years, the composition of the Task Force was expanded slightly, including an administrator who reported directly to the Dean. At the initial “kick-off” meeting the Dean presented his vision for the project and communicated his enthusiastic support, which included the provision of financial resources to recognize and reward outstanding administrative staff members–a population that previously had never received any accolades. The Service Excellence Initiative Recognition and Reward (SEIRR) intervention and sources of effectiveness evidence are described below.

Recognition and Reward (SEIRR) Intervention

A Web-based form was developed soliciting nominations that could be accompanied by narratives describing specific examples of excellent service (i.e., critical incidents) provided by the nominated administrative assistant(s). Requests for nominations (and narratives) were sent via e-mail to all faculty, students, and staff of VLBS. It should be noted that all administrative assistants were first contacted to assure their willingness to be included in the pool of potential awardees before nomination notices were sent. Nominations were accepted for a period of two weeks. Awards were determined by the Task Force based on the number of nominations received per individual, in conjunction with the poignancy of the narratives provided. Four awards were given out in year 1.

During years 2 and 3, the SEIRR intervention was expanded to include additional administrative jobs related to business school service, and five individuals were recognized for outstanding service the latter two years. Upon receiving the SEIRR Award, an individual was excluded from eligibility during the subsequent two academic years.

Table 1 provides descriptive statistics for each year of the program, including the number of eligible and participating employees, approximate number of emails sent to the VLBS community, number of responses received, numbers of nominations and narratives, and narratives per eligible employee.

The number of eligible employees fluctuated over the three-year period from 36 to 49 to 39. Between the first and second years, the total number of nominations soared by more than 250 percent. Although the total number of nominations dipped in year 3, nominations per employee continued to increase, from 4.3 to 10.5 to 12.7. This pattern suggests that members of the VLBS community were not only becoming more aware of the SEIRR program but also coming to perceive it as a way to recognize excellent service.

The first year, the recognition and reward process was begun with no prior notice; consequently, there should have been no incentive effect, just a possible reward effect. In fact, it was first mentioned at the time of the Honors and Recognition Ceremony–when the first recognition and reward cycle concluded–that awardees would also be receiving a payment of $1,000 and a plaque. The Honors and Recognition Ceremony was well attended because more than 100 students were inducted into Beta Gamma Sigma and Sigma Iota Epsilon honor societies, and faculty members were recognized for their outstanding achievements.

In addition to having their names appear in the Award booklet, administrative assistant awardees subsequently received additional forms of recognition: a group photo and a news story appeared prominently on the College’s web site; articles appeared in the student newspapers; and a personal letter of appreciation was sent by the Dean. Importantly, the Award ceremony entailed the Dean reading some of the most poignant comments provided by (the anonymous) nominators on behalf of each Award recipient. This form of public acknowledgement made the ceremony particularly meaningful.

A few excerpts from the Dean’s recent comments follow, and provide an overall sense (or Gestalt) of the ceremony:

As you can see on the program, the first order of business is our
Service Excellence Awards. I want to tell you why we have decided
to give service awards to administrative staff members. These are
the folks you see when you first enter offices … [or] who work
behind the scenes in order to get students served, systems
working … It is with these Awards that [VLBS] is able to
acknowledge those members of our community who provide outstanding
service…. I would like to quote some of what has been written…
[Name] is unfailingly cheerful and always ready to help you with a
problem. Who else could find a box of old fashioned transparency
blanks?–Or would take the time to find them? …

After the first year, it was hoped that this combination of recognition and rewards would have incentive value insofar as future potential awardees might be more fully aware of what to expect. However, as Luthans and Stajkovic have noted, what is deemed rewarding in the mind of a program administrator may not function as a reinforcer if lacking in motivational appeal. (24)

Figure 1 (below) presents a schematic of the timing of the SEIRR process, including the indicators of effectiveness, over the three iterations of the Award. Because the SEIRR intervention was intended primarily to influence the work behavior and job attitudes of administrative assistants at VLBS, we advanced the following hypotheses:

H1: EBI faculty ratings of “secretarial assistance” will increase from pre-intervention to post-intervention.

H2: Faculty ratings of “secretarial assistance” will increase more than faculty ratings on the other 83 EBI items.

H3: Faculty ratings of “secretarial assistance” will increase more at VLBS than will be the case among peer/aspirant schools.

Administrative assistants at VLBS performed jobs that were often stressful–dealing on the “front lines” with students, faculty, and staff in a public higher education setting without abundant staffing–and, moreover, they performed jobs that were not highly paid.

In public organizations, which often have rigid classification systems, internal equity often drives compensation. (25) Consequently, we anticipated that the recognition and reward initiative would be positively received by the intended recipients. Thus, we posited that:

H4: Administrative assistants at VLBS will have positive attitudes regarding the SEIRR intervention.


Samples and Procedures

We utilized two data sets to test our hypotheses. Hypotheses 1–3 were tested using data from Educational Benchmarking, Inc. (EBI), an independent survey entity. There are two key advantages to using EBI data. First, scores were collected at two points in time, allowing for a longitudinal design. Second, the fact that EBI is an independent source mitigates the potential confound of common method variance.

We also conducted our own survey of the target population, administrative staff. While cross-sectional, the survey allowed us to ask questions specific to our intervention.


EBI. Evaluative data provided by Educational Benchmarking, Inc. (EBI) were obtained from faculty during the spring of 2004, when the first year’s initiative had just begun, as well as during the spring of 2006, when two years of the SEIRR intervention had been completed and the third year’s effort was underway.

The EBI survey is comprised of 84 items grouped into ten major categories: faculty support, faculty development, business school administration/leadership, faculty teaching, program evaluation, doctoral programs, culture, learning environment, assessment, and “the bottom line” (i.e., global satisfaction items). Nearly all of the 84 EBI survey items are conceptually unrelated to the behavior and performance of the focal population, administrative assistants in the VLBS. However, one item should have been directly affected by the SEIRR intervention, namely Question #10, faculty satisfaction with “secretarial assistance.” (The term “secretarial assistance” is how Question #10 is worded on the EBI survey.) The other 83 items logically should have been unaffected by the SEIRR intervention. For example, the 17 items on satisfaction with faculty development pertained to such matters as classroom technology and salary, and the 17 items categorized as satisfaction with business school administration/leadership were related to such issues as the quality of new faculty appointments and raising money from external sources.

That most EBI items were unrelated to the SEIRR intervention enabled us to employ a program evaluation procedure that parallels Chen’s theory-driven evaluation methodology, i.e., incorporating the basic concepts of convergent and discriminant validity as explicated by Campbell and Fiske nearly 50 years ago. (26, 27)

In 2006 we supplemented the EBI survey with four items that asked about faculty satisfaction with practices that should be affected directly by the SEIRR intervention, viz., the friendliness/courtesy and knowledge/professionalism of administrative assistants. Business schools are permitted to add up to 10 institution-specific questions to the EBI survey.

The EBI survey feedback process enabled VLBS to compare itself with    six peer (and/or aspirant) business schools–entities that essentially served as a comparison condition.

Thus, not only can ratings be examined over time across the 84 different items, but also compared with ratings (and changes in ratings) in peer/aspirant schools. The existence of such comparative data further enhances confidence regarding inferences about the internal validity of the SEIRR intervention.

It is important to note that EBI restricts the use and reporting of its data to protect the confidentiality of participating institutions. Schools may report aggregate results for comparison institutions, but may not, of course, reveal the names of comparison institutions. For reasons of confidentiality, we report comparisons in the form of index numbers, with ratings in 2003-2004 set at 100.
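The index-number convention is simple arithmetic: each follow-up rating is expressed relative to the baseline-period rating, which is set at 100. The sketch below uses hypothetical rating values, since the actual confidential scale scores cannot be disclosed.

```python
def to_index(rating, baseline_rating):
    """Convert a raw rating to an index number, with the
    baseline period (here, 2003-2004) set at 100."""
    return 100.0 * rating / baseline_rating

# Hypothetical ratings on a satisfaction scale:
baseline = 4.5   # 2003-2004 rating (indexes to 100 by construction)
followup = 5.0   # 2005-2006 rating

print(to_index(baseline, baseline))            # baseline indexes to 100.0
print(round(to_index(followup, baseline), 1))  # follow-up index, one decimal
```

An index above 100 indicates improvement relative to baseline; below 100, decline.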

SEIRR Questionnaire. During the summer and fall of 2006 the SEIRR coordinators prepared a 28-item questionnaire that was distributed to and completed by currently employed administrative assistants who previously were eligible to win the Award and individuals who would be eligible in spring 2007. The questionnaire, comprised of items pertinent to job attitudes, attitudes regarding the SEIRR intervention, and biographic information, was completed voluntarily and anonymously. To encourage participation, individuals who completed the survey were given a $10 gift card. The identities of respondents were known solely by our research assistant, so that gift cards could be distributed. A copy of the questionnaire is available on request to the first author.

Key attitudinal and factual questions in the survey concerned whether the respondent: (1) thought prior recipients were deserving; (2) had previously been an Award recipient; (3) had nominated someone for the Award; (4) thought the nomination process was fair; (5) thought the Award process might encourage their colleagues to improve the service they provide; (6) thought the Award process might encourage improvement in their own service; and (7) thought that the Award program should be continued. Comments were elicited regarding attitudinal questions.


EBI Data for Hypotheses 1-3

We tested hypotheses 1–3 using the EBI benchmarking data described above. EBI data were provided by 57 and 95 faculty members at VLBS during the spring of 2004 and 2006, respectively. Hypothesis 1 predicts that faculty evaluations of secretarial assistance will increase following the intervention. Scores on Question 10–satisfaction with “secretarial assistance”–increased by 11.3 percent; however, the improvement did not achieve statistical significance (t = .98, p = .16, one-tailed). The difference did correspond to a noticeable, yet small, effect size (d = .17) based on Cohen’s criteria. (28)
Consequently, we performed a power analysis using Power & Precision software to determine the probability of finding statistical significance given the effect size and sample size. (29) Statistical power is defined as the probability of rejecting the null hypothesis when it is false and should be rejected. (30) In other words, it represents the probability of detecting a difference between groups when one exists. Large sample sizes are required to detect small effects. Given that our sample size was only n = 57 at T1 and n = 95 at T2, the probability of achieving statistical significance was only 17 percent (assuming alpha = .05).
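A calculation of this kind can be approximated with a short routine. The sketch below uses a normal approximation to the independent-samples t test with a two-sided alpha = .05 criterion; the exact settings used in the Power & Precision software are not stated in the text, so these choices are assumptions.

```python
from math import sqrt, erf

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def two_sample_power(d, n1, n2, alpha_two_tailed=True):
    """Approximate power of an independent-samples t test for
    standardized effect size d, via a normal approximation.
    The lower rejection region is ignored (negligible here)."""
    ncp = d * sqrt(n1 * n2 / (n1 + n2))          # noncentrality parameter
    z_crit = 1.959964 if alpha_two_tailed else 1.644854  # alpha = .05
    return 1 - normal_cdf(z_crit - ncp)

# d = .17 with n = 57 and n = 95 yields roughly the 17 percent figure:
print(round(two_sample_power(0.17, 57, 95), 2))
```

With the study's d = .17 and group sizes of 57 and 95, this approximation returns about .17, consistent with the reported 17 percent.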

Importantly, it has been noted previously that a phenomenon may have a small effect size, statistically, yet be highly meaningful in a practical sense. (31) For example, Meyer et al. reported that the effect size associated with ever smoking and the onset of lung cancer within 25 years is .08, an effect size one-half as large as that observed in the present intervention.

Thus, the evidence with respect to H1 is somewhat equivocal; there was a small and potentially meaningful effect, but the difference in raw data scores did not reach the conventional level of statistical significance, due to the small sample size (i.e., lack of statistical power). Indeed, the power analysis indicated that statistical significance was to be expected only one-sixth of the time.

Table 2 presents results of analyses using EBI data from VLBS and six peer/aspirant (i.e., comparison) schools. Findings pertinent to H2 appear in the second and third columns; findings pertinent to H3 appear in the fourth and fifth columns.

Hypothesis 2 posits that faculty ratings of “secretarial assistance” will increase more than faculty ratings on the other 83 EBI items. To test H2, we examined EBI data on an ipsative basis–viz., in terms of the magnitude of relative change across all 84 EBI items. On this basis, the index of the change for Question 10 was the 5th largest among the 84 items. Using the test-retest correlation between items as an indicator of reliability, the change in Question 10 corresponded to a change of 5.49 Standard Errors of Measurement (SEM), p < .001.
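The SEM-based change metric can be illustrated with the conventional formula SEM = SD * sqrt(1 - reliability), where a pre-post difference is then divided by the SEM. The article does not report the standard deviation or the reliability estimate, so the values in this sketch are hypothetical.

```python
from math import sqrt

def sem_change(score_t1, score_t2, sd, reliability):
    """Express a pre-post change in Standard Errors of Measurement,
    using the conventional SEM = SD * sqrt(1 - reliability).
    All numeric inputs below are hypothetical illustrations."""
    sem = sd * sqrt(1 - reliability)
    return (score_t2 - score_t1) / sem

# Hypothetical: a half-point gain, SD = 1.0, reliability = .90
print(round(sem_change(4.5, 5.0, 1.0, 0.9), 2))
```

Higher reliability shrinks the SEM, so the same raw change registers as a larger number of SEM units.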

Additionally, as noted above, VLBS added four new items in the 2006 EBI survey. The items related to the friendliness/courtesy and knowledge/professionalism of administrative assistants. Scores on the four new items were compared to the mean score in 2006 on the 83 items, excluding Question 10, “secretarial assistance.” The mean was, as anticipated, significantly greater on the four new items compared to the other 83: Z(SEM) = 2.99, p < .001, one-tailed. These findings provide strong support for H2.

Hypothesis 3 posited that the change in Question 10 would be greater at VLBS than at peer/aspirant schools. This conjecture was supported using raw data: the difference between sample means in terms of magnitude of change was t = 3.84, p < .001. It was also supported using index data. The index of change was significantly larger at VLBS than at peer/aspirant schools: Z(SEM) = 5.49 at VLBS as compared to Z(SEM) = -1.24 at peer/aspirant schools.

Survey data for Hypothesis 4

Hypothesis 4 suggested that administrative assistants at VLBS will have positive attitudes regarding the SEIRR intervention. The 28-item questionnaire regarding the SEIRR intervention was distributed during the fall of 2006. The response rate was relatively high: 37 of 49 potential respondents (75.5 percent) completed the survey. Examination of biographic data from respondents and for the population of eligible administrative assistants indicated a high degree of similarity: 86 percent of respondents and 86 percent of the focal population were female; likewise, with regard to having previously been an Award recipient the proportions were 38 and 34 percent, respectively. We next present results pertinent to the five attitudinal questions asked, for which summary data are presented in Table 3.

We measured perceptions of how deserving recipients were using the item “On the whole, how deserving were the recipients of the Service Excellence Award?” The four response alternatives ranged from “Most recipients are very much not deserving” to “Most recipients are very much deserving.” The names of the 14 Award recipients to date appeared above the item. All respondents found prior recipients to be deserving of the Award: 75 percent “very deserving” and 25 percent “somewhat deserving.” We tested the significance of all attitudinal questions against the null hypothesis of chance responding, i.e., of 50 percent affirmative. Of course, the 100 percent affirmative view of deservingness was statistically significant. Interestingly, perceptions of the degree of deservingness were not statistically different between those who had nominated a colleague and those who had not. In addition, there was no statistically significant difference between those who had received the Award and those who had not. The Fisher Exact Test yielded p levels of close to 1 and .41, respectively.
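A test against the null hypothesis of chance (50 percent affirmative) responding can be carried out as an exact two-sided binomial test. The sketch below uses hypothetical counts; 27 of 39 is consistent with the 69.2 percent figure reported for the fairness item, but the exact denominators are not given in the text.

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test of k successes in n trials
    against success probability p: sums the probability of all
    outcomes no more likely than the observed one."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(pr for pr in pmf if pr <= pmf[k] + 1e-12)

# Hypothetical: 27 "fair" responses out of 39 (about 69.2 percent)
print(round(binom_two_sided_p(27, 39), 3))
```

With p = 0.5 the binomial distribution is symmetric, so this "small p-values" rule reduces to summing both tails at and beyond the observed count.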

We then measured perceived fairness of the nomination process using the item “Do you think the nomination process is fair?” Respondents chose between “yes” and “no,” and had the opportunity to explain why. Of those who responded, 69.2 percent felt the process was fair (p < .01). There was no relationship between fairness responses and whether the respondent was an Award recipient (p = .41); or between fairness responses and whether the respondent had nominated someone (p = .36).

To gauge whether the award influenced service quality, we asked two questions: (1) whether the Award initiative had improved the work of their colleagues; and (2) whether the Award initiative had improved their own work effort. The first question was worded as follows: “Do you think that the existence of the Service Excellence Award might have spurred or encouraged some of your colleagues to improve the service they provide to faculty, peers, or students?” Respondents chose between “yes” and “no,” and had the opportunity to explain why. Of those who responded, 59.4 percent felt the Award had encouraged colleagues to improve service (p < .05).

The question about self-improvement was worded: “Has the Service Excellence Award spurred or encouraged you to improve the service you provide to faculty, peers, or students?” Respondents chose between “yes” and “no,” and had the opportunity to explain why. Only 28 percent of respondents thought that the SEIRR intervention had improved their own work performance, a result significantly lower than chance would dictate (p < .001). Evidently, respondents attributed their own work behavior to internal causes (whereas their colleagues were seen as more susceptible to influence by external factors such as recognition and rewards). Sample explanations by respondents included: “I don’t work to get rewards;” “I work hard because I like what I do;” and “I always provide the best service that I can.”

Finally, we sought to find out whether administrative assistants thought that the program should be continued. With response choices of “yes” and “no,” the question was worded: “Would you recommend that the Service Excellence Award be continued?” 88.6 percent of respondents provided an affirmative response (p < .001). Using the Fisher Exact Test, affirmative responses were unrelated to having been an Award recipient (p close to 1.0) or having nominated someone (p = .62).


Summarizing results, in connection with the anticipated improvement in faculty perceptions of “secretarial assistance” (H1), we found that the SEIRR intervention yielded a non-significant positive change using raw EBI data. Sample size parameters, though, made it very unlikely that a statistically significant change would be observed due to the lack of statistical power. Nonetheless, the change corresponded to a noticeable, small effect size (d = .17) that could be potentially meaningful. Further, it should be noted that the T2 data were collected after only the first two years of the SEIRR intervention–perhaps an insufficient time for changing work behavior on the part of administrative assistants, and perceptions of same by faculty.

Examining data on an ipsative (within school) basis, the percentage change in perceptions of “secretarial assistance” at VLBS was the 5th largest of 84 changes.

Compared to the mean change in 83 faculty perceptions at VLBS, the change in this item was statistically significant, a finding supportive of H2. Likewise, the four new questions added to the EBI survey that specifically addressed administrative assistants’ work behavior and job performance were rated more highly than the average rating at VLBS during T2, adding further support to the predicted relative improvement in “secretarial assistance.”

On the assumption that the peer/aspirant (comparison) schools of VLBS did not similarly conduct an intervention to enhance faculty perceptions of the work behavior and job performance of administrative assistants, we advanced H3. Specifically, we predicted that faculty ratings of “secretarial assistance” would show a more positive change at VLBS than at comparison schools. H3 was supported based on raw EBI data. In terms of relative changes, VLBS showed a significant increase in the focal index (+11.3 percent) whereas the comparison schools showed a non-significant decrease (-6.4 percent).

We prepared and distributed a 28-item questionnaire in order to ascertain whether administrative assistants at VLBS indeed had a positive attitude toward the SEIRR intervention. Three-quarters of respondents thought that prior Awardees were very deserving and one-quarter rated prior Awardees as somewhat deserving. The majority of respondents (69 percent) thought the process was fair, and this finding was independent of whether the respondent was a prior Award recipient or nominator. As per Lachance’s comments, perceptions of the program as legitimate and fair are necessary for staff support. (32)

Somewhat surprisingly, the majority of respondents (59 percent) thought that the SEIRR program would spur their colleagues on to improved work behavior and job performance, but only a minority of respondents (28 percent) thought the initiative would have a positive effect on the service they themselves provided. A post hoc analysis of the difference between sample proportions indicated that the difference was statistically significant, Z = 2.52, p < .05, two-tailed test.
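A comparison of two sample proportions like this one can be sketched with a pooled two-proportion z test. The counts below are hypothetical reconstructions consistent with 59.4 and 28 percent (the article reports percentages, not raw counts), so the resulting statistic will differ slightly from the reported Z = 2.52.

```python
from math import sqrt, erf

def two_prop_z(k1, n1, k2, n2):
    """Pooled two-proportion z statistic and two-tailed p value
    (normal approximation). Counts here are hypothetical."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two

# Hypothetical counts: 19/32 is about 59.4 percent, 10/36 about 28 percent
z, p = two_prop_z(19, 32, 10, 36)
print(round(z, 2), round(p, 4))
```

Strictly speaking, the "colleagues" and "own work" items were answered by the same respondents, so the two proportions are not fully independent; the pooled z test treats them as if they were, matching the analysis described above.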

Consistent with DeMers, who stated “All of the innovative recruitment and retention tactics in the world do not do a city any good if they are not properly publicized,” (33) we found that awareness of the recognition and reward process critically influenced perceptions of the award. Following Lachance, who stated “… it only works if the observers are given good information about who merited the award,” (34) we suggest that these initiatives need year-round visibility, such as the creation of a website.

From a research perspective, several limitations of the SEIRR intervention deserve mention. First, as noted above, the small number of faculty responding to the EBI survey made it unlikely (p = .16) that a statistically significant change would be found.

Second, post-intervention data were obtained only two years after the SEIRR intervention commenced. We believe that a two-year pre-post measurement interval may have been insufficient time for administrative assistants to change the levels of service they provide, and for faculty to perceive an improvement.

Third, we acknowledge that the SEIRR program might be characterized as a relatively modest intervention. During any year only about 12 percent of eligible administrative assistants were granted an Award. With such a low proportion of Awardees, it is likely that many, if not most, of the eligible employees reasoned that their chances of “winning” were not all that great, even if they provided much improved service. Yet, it might be argued that the decision to provide superior service is not made solely on the basis of rational calculations. Rather, we believe that top-level leadership is an important component of a successful recognition and reward initiative. (35) Indeed, senior leadership involvement has been widely cited as a prerequisite for the success of a recognition program in all sectors of the economy. For example, Nelson attributes the success of the recognition program at Marriott International to the fact that the CEO, Bill Marriott, personally leads the company’s recognition and appreciation program. (36) A key feature of the SEIRR intervention at VLBS was the high level of involvement and support provided by the Dean.

Contribution and Conclusion

Little prior research has examined the effects of recognition programs in a nonprofit or public sector context. Whether this has been due to an assumption that findings from for-profit settings generalize to these contexts, or simply to a lack of interest in them, is unclear. However, in light of the evidentiary support for all four Research Questions, we conclude that an employee reward and recognition initiative can be effective in improving service quality. Thus, we extend theory from the for-profit to the nonprofit context and complement prior research. (37)

Second, we used a longitudinal, quasi-experimental research design to capture cause and effect relationships more adequately than prior cross-sectional research has accomplished. (38) Additionally, we extend research by providing empirical evidence of service quality in the form of colleagues’ evaluations.

Lachance commented “A strategic human resources management program or practice is defined and refined by constantly testing whether its design and outcomes are moving the organization toward its strategic goals (italics in original).” (39) Our field experiment is one such example of strategic human resource management implementation.

Given that the present intervention was neither costly nor particularly time-consuming, we believe it could be readily transported to other public sector organizations. The key ingredients, in our opinion, are two-fold: 1) a widely accessible and publicized nomination process; and 2) a well-attended Award ceremony at which the organization’s Director personally reads some of the comments and presents an attractive plaque to each Award recipient. The first ingredient provides a measure of transparency and increases the likelihood of selecting and recognizing the most deserving administrative assistants. The second adds to the poignancy of the recognition process. There is abundant evidence that visible top-management support is essential for a recognition program to succeed. (40) Further, the provision of a tangible reward likely adds to the incentive value of the intervention, although we did not attempt to isolate the effects of recognition from those of the accompanying reward.

In conclusion, our research suggests that a recognition and reward intervention can improve service excellence in a public sector higher education institution, and probably in most large public sector organizations as well.

Acknowledgements: We gratefully acknowledge the research assistance of Katrina Motch and Elif Selcuk.

Richard E. Kopelman, DBA, SPHR
Department of Management
Zicklin School of Business
Baruch College
One Bernard Baruch Way, Box B9-240
(646) 312-3629
Naomi A. Gardberg, PhD
Department of Management
Zicklin School of Business
Baruch College
One Bernard Baruch Way, Box B9-240

Ann Cohen Brandwein, PhD
Department of Statistics and Computer Information Systems
Zicklin School of Business
Baruch College
One Bernard Baruch Way, Box B11-220


(1) Schneider, B. & White, S. W. (2004). Service Quality: Research Perspectives. Thousand Oaks, CA: Sage Publications.
(2) Saunderson, R. (2004). “Survey findings of the effectiveness of    employee recognition in the public sector.” Public Personnel Management, 33(3): 255-275.
(3) Cassidy, E. & Ackah, C. (1997). “A role for reward in organizational change?” Irish Business and Administrative Research, 18: 52-62.
(4) DeMers, A. (2002). “Solutions and strategies for IT recruitment    and retention: A manager’s guide.” Public Personnel Management 31 (1): 27-40.
(5) Lachance, J.R. (2000). “International Symposium of the International Personnel Management Association.” Public Personnel Management, 29: 305-313.
(6) Dean, A.M. (2004). “Links between organisational and customer variables in service delivery.” International Journal of Service Industry Management, 15(3/4): 332-350.
(7) Ruben, B.D. (2005). “The Center for Organizational Development and Leadership: A case study.” Advances in Developing Human Resources, 7: 368-395.
(8) Dean (2004).
(9) Bowditch, J.L. & Buono, A.F. (2005). A Primer on Organizational Behavior (6th Edition). New York: Wiley.
(10) Nadler, D.A. & Tushman, M.L. (1997). Competing by Design: The Power of Organizational Architecture. New York: Oxford University Press.
(11) Horwitz, F.M. & Neville, M.A. (1996). “Organization design for service excellence: A review of the literature.” Human Resource Management, 35(4): 471-492.
(12) Parasuraman, A., Zeithaml, V.A. & Berry, L.L. (1985). “A conceptual model of service quality and its implications for future research.” Journal of Marketing, 49(4): 41-50.
(13) Horwitz & Neville (1996).
(14) Dean (2004).
(15) Blackburn, R. & Rosen, B. (1993). “Total quality and human resources management: Lessons learned from Baldridge Award-winning companies.” Academy of Management Executive, 7: 49-66.
(16) Cassidy & Ackah (1997).
(17) Blackburn & Rosen (1993).
(18) Luthans, F. & Stajkovic, A.D. (1999). “Reinforce for performance: The need to go beyond pay and rewards.” Academy of Management Executive, 13: 49-57.
(19) Ford, E.L. & Fina, M.C. (2006). “Leveraging recognition: Noncash incentives to improve performance.” Workspan, 49(11): 18-22.
(20) DeMers (2002).
(21) Shostack, G.L. (1977). “Breaking free from product marketing.”    Journal of Marketing, April: 73-80.
(22) Horwitz & Neville (1996).
(23) Dean (2004).
(24) Luthans and Stajkovic (1999).
(25) Lachance (2000).
(26) Chen, H. (1990). Theory-Driven Evaluations. Newbury Park, CA: Sage.
(27) Campbell, D.T. & Fiske, D.W. (1959). “Convergent and discriminant validation by the multitrait-multimethod matrix.” Psychological Bulletin, 56: 81-105.
(28) Cohen, J. (1992). “A power primer.” Psychological Bulletin, 112(1): 155-159.
(29) Borenstein, M., Rothstein, H, & Cohen, J. (1997). Power and Precision. Teaneck, NJ: Biostat.
(30) Cohen, J. (1977). Statistical Power Analysis for the Behavioral Sciences (Rev. ed.). New York: Academic Press.
(31) Meyer, G.J., Finn, S.E., Eyde, L.D., Kay, G.G., Moreland, K.L., Dies, R.R., Eisman, E.J., Kubiszyn, T.W. & Reed, G.M. (2001). “Psychological testing and psychological assessment: A review of evidence and issues.” American Psychologist, 56: 128-165.
(32) Lachance (2000).
(33) DeMers (2002).
(34) Lachance (2000).
(35) Saunderson (2004).
(36) Nelson, B. (2006). “Recognition programs need support foundation.” Denver Business Journal.

(37) Dean (2004).
(38) Ibid.
(39) Lachance (2000).
(40) Nelson (2006); Saunderson (2004).

Richard E. Kopelman, DBA, SPHR, is professor of management at Baruch College and academic director of the Baruch Executive MSILR Program. He is the author of Managing Productivity in Organizations (McGraw-Hill), numerous articles on work motivation and performance improvement, and a coauthor of the widely cited Public Personnel Management article on executive coaching.

Naomi A. Gardberg, PhD, is an associate professor of management at Baruch College. Her research interests include the accumulation, cross-national transfer and dissipation of intangibles, such as corporate reputation and trust. She has published in Academy of Management Review and Journal of International Business Studies, among other journals.

Ann Cohen Brandwein, PhD, is a professor of statistics at Baruch College. Her research work is in the area of multivariate point estimation for spherically symmetric distributions. From 1993-2006 she was an associate editor for the Theory and Methods Section of the Journal of the American Statistical Association.

Table 1: Descriptive Statistics: Reward and Recognition Initiative

Academic   Eligible    Participating   Emails    Responses
Year     Employees     Employees      Sent     Received

2003-04       36            33          8,000       92
2004-05       49            48          8,000       305
2005-06       39            36          8,000       273

Academic        Total          Total       Narratives/
Year     Nominations (a)   Narratives   Part. Emp. (b)

2003-04          142            124            4.30
2004-05          502            342           10.46
2005-06          457            228           12.69

(a) Each respondent could submit nominations for up to three people
(two in AY 2003-04).

(b) Narratives per participating employee.

Table 2: EBI Data: Index Numbers

                                  H2: VLBS             H3: Comparison Schools
Questions                       Index       SD           Index        SD

2003-2004
Mean all items except          100.00                   100.00
  Question 10 (a,b)
Question 10 (b)                100.00                   100.00

2005-2006
Mean all items except           99.65      6.97          97.12       6.91
  Question 10 (a,b)
Question 10 (b)                111.32 (c)                93.58 (d)
All Questions                   99.79      7.04          97.07       6.88
Mean all items except          100.00
  Question 10 (a)
Mean 4 New Questions           106.27 (e)  8.88

(a) The sample size for all items except question 10 was 83.

(b) Question 10: faculty satisfaction with “secretarial assistance”.

(c) Difference between Question 10 and Overall Index = +5.49 SEM
(Standard Errors of Measurement).

(d) Difference between Question 10 and Overall Index = -1.24 SEM
(Standard Errors of Measurement).

(e) Difference between 4 New Questions and Overall Index = +2.99
SEM (Standard Errors of Measurement).

Table 3: Survey of Participating Administrative Staff Members

Variable                                   Proportion      p-value
                                           Affirmative (%)
Thought prior Service Excellence Award
recipients were deserving (a)                  100         < .001

Had previously been a Service                  38            ns
Excellence Award recipient

Had nominated someone for the Award            61            ns

Thought the nomination process was             69          < .01
fair (a)

Thought the Award process might
encourage their colleagues to improve          59          < .05
the service they provide (a)

Thought the Award process might
encourage improvement in their own             28          < .001
service (a)

Recommend that the Service Excellence          89          < .001
Award be continued (a)

(a) Significance levels are based on the a priori assumption of
p = .5; ns = not significant.