The Learning Effects of Monitoring

Dennis Campbell

Marc Epstein

Asis Martinez-Jerez

ABSTRACT:

This paper investigates the relationship between monitoring, decision-making,

and learning among lower level employees. We exploit a field-research setting in which

business units vary in the "tightness" with which they monitor employee decisions. We find that

tighter monitoring gives rise to implicit incentives in the form of sharp increases in employee

termination linked to "excessive" use of decision-rights. Consistent with these implicit

incentives, we find that employees in tightly monitored business units are less likely than their

loosely monitored counterparts to: (1) use decision-rights; and (2) adjust for local information,

including historical performance data, in their decisions. These decision-making patterns are

associated with large and systematic differences in learning rates across business units. Learning

is concentrated in business units with "loose monitoring" and entirely absent in those with "tight

monitoring." The results are consistent with an experimentation hypothesis in which tight

monitoring of decisions leads to more control but less learning.

I. Introduction

It is well understood that management control choices in organizations need to balance

the encouragement of effective decision-making with the mitigation of risky outcomes due to

either poor decisions or employee opportunism (Baiman 1990; Merchant and Van der Stede

2003; Simons 2000). However, an additional consideration that is often overlooked in the

academic literature is that management control choices that alter employee decision-making can

also have a powerful influence on learning (Sprinkle 2000; Lee et al. 2004; Campbell 2008). In

this paper, we add to the limited literature on this topic by investigating the relationship between

monitoring and the decision-making patterns and learning rates of lower level employees.

Determining appropriate management controls for lower level employees is an important

issue for most organizations. These employees often have unique knowledge about an

organization's individual customers, its local markets, and its production processes. The

delegation of decision-rights to such employees can allow organizations to gain the benefits from

effective and timely use of local information without incurring the costs of collecting and

transmitting this information to top management (Jensen and Meckling 1992; Nagar 2002). This

delegation, however, comes with the obvious management control problem of ensuring that

employees use their decision-rights in the best interest of the organization.

One mechanism that is widely used across organizations to achieve management control

in this context is direct monitoring of employee decision-making via management reports or

other review processes. For example, bank officers may have discretion in underwriting

consumer loans, but most banks have in place "exception reports" which flag loans underwritten

outside of formal guidelines for further review. Similarly, a local sales representative may have

wide latitude in granting price discounts to local clients, but headquarters may have guidelines in

place for flagging and reviewing "excessive" price discounting. In this paper, we focus on the

decision-making and learning implications of this form of "exception-report" monitoring.

These types of monitoring mechanisms are expected to have a direct influence on

employee decision-making. Employees facing more "evaluative pressure" are less prone to

opportunism and less likely to experiment or take risky decisions, preferring instead the certainty

of managing towards explicit guidelines (Nagin et al. 2002; Lee et al. 2004; Hunton 2008).

The learning implications of monitoring are less clear. As employees use their decision

rights, they may learn over time the conditions under which their decisions are effective. If

monitoring leads to less use of discretion by employees, then they are in effect performing fewer

"experiments" and have fewer opportunities to learn (Lee et al. 2004). This "experimentation

hypothesis" would predict negative learning implications of increased monitoring. Alternatively,

if the threat of detection inherent in monitoring leads employees to be more selective in utilizing

decision rights or to expend more effort in learning how to use them effectively, then monitoring

may lead to enhanced learning. This "selective utilization" hypothesis would predict positive

learning implications of increased monitoring. Which of these alternatives is likely to prevail is

an open empirical question which we address in this paper.

Our findings, based on field and quantitative data from the MGM-Mirage Group,

generally point to a tradeoff between employee learning and the intensity by which their

decisions are monitored by superiors. Consistent with the "experimentation hypothesis",

employees in "tightly monitored" business units face strong implicit incentives to experiment

less by deviating less often from explicit decision guidelines and have fewer opportunities to

learn. These decision-making patterns are associated with large and systematic differences in

learning rates across business units with learning concentrated in those with "loose monitoring"

and entirely absent in those with "tight monitoring".

This study contributes to managerial accounting research by documenting how

monitoring can influence the implicit incentives faced by employees in ways that alter both their

decision-making patterns and rates of learning. With the exception of a few studies, the

relationship between management control choices and learning is a topic that has been largely

unexplored in the accounting literature (Campbell 2008; Sprinkle 2000). Similarly, much of the

literature on learning in organizations has documented variation in rates of learning both within

and across organizations but, with the exception of a handful of studies, has been relatively silent

about the sources of this variation (Pisano et al. 2001; Lapre and Tsikriktsis 2006; Wiersma

2007). Our results contribute to this literature by documenting systematic variation in rates of

learning attributable to differences in management control practices across business units.

II. Literature Review

Our paper draws on, and extends, two broad and related research streams: (1) studies on

learning by experience and the "learning curve" and (2) theoretical and empirical work on

monitoring and its influence on behaviors related to experimentation, risky decision-making, and

opportunism all of which we view as potentially important antecedents to learning.

Learning by Experience and Variation in the “Learning Curve”

The literature on organizational learning is large and diverse. Studies in this literature

tend to distinguish learning on at least two dimensions: (1) the level at which learning takes place

and (2) the source of knowledge acquisition. Learning has been documented at the individual,

team, business unit, and organizational levels and has been attributed to a variety of different

underlying sources of knowledge acquisition ranging from deliberate organizational search

processes to learning by experience (Huber 1991; Dodgson 1993).

Our study focuses on differences in individual rates of learning across business units with

different control structures and is most conceptually related to the literature on learning by

experience. Collectively, this literature has shown a fairly robust relationship between

cumulative experience and performance improvement. This "learning curve" phenomenon has

been documented in a variety of contexts and with respect to a variety of different measures of

experience (e.g. production volume, time) and performance (e.g. cost reduction, customer

satisfaction) (Argote and Epple 1990; Pisano et al. 2001; Lapre and Tsikriktsis 2006).

Beyond demonstrating the existence of learning, this literature has also documented

significant variation in rates of learning both across and within organizations (Jarmin 1994;

Pisano et al. 2001; Lapre and Tsikriktsis 2006; Wiersma 2007). Learning curves have been found

to vary with labor versus capital intensity (Adler and Clark 1991); the degree of vertical

integration (Sorenson 2003); the degree of task heterogeneity (Wiersma 2007); and the explicit

(e.g. bonus) and implicit (e.g. promotion) incentives faced by individuals (Campbell 2008;

Sprinkle 2000). This paper extends this line of inquiry to examine monitoring as a factor that

can have an important influence on individual rates of learning within organizations.

Monitoring and Antecedents to Learning

Studies on actual rates of learning within organizations have been relatively silent on the

role of monitoring and other management control choices. There has been more progress in this

area within a diverse literature that recognizes experimentation – defined generally as a trial-and-error

process where each trial generates new insights – as an important antecedent to learning

(Sitkin 1992; Thomke 1998; Thomke et al. 1998). Researchers have shown that perceptions of

evaluative pressure, arising from the degree to which an organization's formal and informal

reward systems are viewed as punishing failure, negatively influence experimentation

(Edmondson 1999, Lee et al. 2004). Our study is conceptually related to these in that we focus

on monitoring as a mechanism that can give rise to evaluative pressure. We also characterize the

use of decision-rights as a form of experimentation in which employees can deviate from formal

decision-guidelines, observe outcomes, and learn about the quality of their decisions. However,

unlike our study which focuses on monitoring as an explicit management control choice, these

studies rely largely on individual perceptions of evaluative pressure. The actual underlying

reward systems and management control structures that affect experimentation are typically not

well specified.

The accounting literature focuses on the related issue of managerial willingness to

undertake risky investment projects.1 Similar to the notion of experimentation, risky investment

decisions involve ex ante uncertainty about the outcomes of the decision and potential ex post

opportunities to learn from "mistakes". This literature speaks more directly to the impacts of

monitoring, which has been shown in experimental settings to reduce participants' willingness to

undertake risky investment projects (Hunton 2008). A related literature in economics has shown

reductions in employee opportunism due to increased monitoring (Wiseman and Gomez-Mejia

1988; Nagin et al. 2002).

In general, these literatures document the potential influence of monitoring on

experimentation, risky decision making, and employee effort – all of which can be important

antecedents to learning – but do not make the link to actual learning within organizations. Our

paper fills this gap by focusing on the existence and magnitude of the relationship between

monitoring and actual individual rates of learning by experience.

Empirical Predictions on the Relationship between Monitoring and Learning

The predictions that emerge from these literatures about the relationship between

monitoring and learning are not straightforward. The arguments from the literatures on

experimentation and risky decision-making in organizations would predict that more intensive

monitoring would lead employees to use decision-rights less frequently and, in effect, perform

fewer "experiments" leading to reduced opportunities for learning. This "experimentation

hypothesis" would predict negative learning implications of increased monitoring.

1 See Baiman (1990) or Lambert (2001) for a review of some of this extensive literature.

On the other hand, theoretical models of information acquisition within organizations

have distinguished between employee effort aimed at collecting decision-relevant information

and effort aimed at using such information effectively on the organization's behalf (Lambert

1986; Demski and Sappington 1987). If the threat of detection inherent in monitoring leads

employees not only to supply more productive effort as in Nagin et al. (2002) but also to expend

more effort in pre-decision information collection activities, then monitoring may lead to

enhanced learning via experience in this task. Employees could use decision-rights less

frequently but do so more selectively based on better pre-decision information. This "selective

utilization" hypothesis would predict positive learning implications of increased monitoring.

III. Research Setting

The data for this study come from six of the major hotel properties of MGM-Mirage

Holdings that share a common information system for collecting customer performance data. As

of the time of this study, the group operates numerous properties within Las Vegas and across

the United States and Asia. Although these properties have been united under the same

corporate group through a series of acquisitions, each property retains its own personality and is

managed independently of other MGM-Mirage holdings. The typical property in the group

includes a casino, which offers activities such as table games, slot machines, and video poker; a

hotel, which can range in size up to more than 5,000 rooms; and entertainment offerings including live

shows, restaurants, and bars and nightclubs.

Customer Profitability

Properties in the MGM-Mirage group place great emphasis on managing the profitability

of their gaming (e.g. casino) customers. Each property tracks customer gaming behavior and

performance in detail through the “Players Club” loyalty card and associated database. The

Players Club database also tracks all ―comps‖, which may include reduced hotel and

entertainment costs, free restaurant meals, or tickets for a show. Customers receive comps based

on both the money they wager—a figure that determines their current profitability—and

expected future gaming behavior. MGM executives commonly refer to customers by their

expected gambling expenditures per trip. For example, a luxury suite might be offered to a

"$30,000 per-trip customer." The basic components of gaming customer profitability are the

gaming standard margin and the comps.

The basis of gaming customer profitability is the casino's so-called "theoretical win"

from the customer—the margin that the property could theoretically make from the amount the

customer bet based on the mix of games played and their respective house advantage—not the

"actual win." One employee explained that the group did not want to penalize lucky customers:

"…we don't care if you win or lose as long as you give us a shot at your money."
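To make the mechanics concrete, the following minimal sketch computes a theoretical win from a hypothetical mix of play; the game list, house edges, and figures are our own illustrative assumptions, not MGM-Mirage parameters.

```python
# Illustrative only: house edges and play patterns are assumed, not the
# property's actual parameters.
HOUSE_EDGE = {"blackjack": 0.011, "roulette": 0.053, "slots": 0.080}

def theoretical_win(play):
    """play: iterable of (game, avg_bet, decisions_per_hour, hours)."""
    return sum(bet * rate * hours * HOUSE_EDGE[game]
               for game, bet, rate, hours in play)

# A customer betting $100/hand at blackjack, 60 hands/hour for 4 hours:
# 100 * 60 * 4 * 0.011 = $264 of "theoretical win", whatever they actually won.
print(theoretical_win([("blackjack", 100, 60, 4)]))
```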

The amounts bet are recorded differently at slot machines and table games. Slot machine

play is recorded in great detail by the Players Club card, which the customer inserts in the

machine. In contrast, in table games—e.g. blackjack, roulette, baccarat or craps—floormen "rate

the play" of the customers on handwritten slips, which reflect the floorman's assessment of the

average bet, time played, and speed of play. Properties are able to trace around 65% of all slot

revenues and 85% of all table revenues to individual Players Club customers.

Data captured in the database via the Players Club card covers more than ten years of

operations and more than 8 million customers. In the calculation of customer profitability, costs

such as operations and marketing personnel, equipment maintenance, and real estate are not

assigned to the customer.

Comps

Comps are considered a customer-specific expense because they are used to reward

gaming behavior. Gaming customer profitability is calculated by subtracting the comps from the

theoretical win. If the comps are soft costs in the form of complimentary services provided by

the property, they are recorded at the value at which they are purchased by full-paying

customers. Any other complimentary service that implies a payment to an outside provider (for

instance, reimbursements for plane tickets) is recorded at its full amount.

Casino Hosts

Individual properties rely heavily on employees known as "casino hosts" in order to

initiate, manage, and ensure the profitability of customer relationships. Hosts interact with all

segments of customers whose gambling levels justify comps. Prior to a trip, many customers call

their host to arrange for room and show reservations. One host explained that she is the "go-to

person in Vegas" for her clients, getting them priority tickets and access to "the most coveted

clubs." She keeps track of her clients' travel information, greeting them personally when they

arrive or arranging for a colleague to meet them if she is away from the property. Moreover, if a

client has not visited the property in a while, she calls them "to understand why and bring them

back." Hosts use a combination of subjective observation and historical customer data to inform

their decisions. New clients flagged by the system based on their level of play while at the property, or

captured by the MGM-Mirage network of branches, are randomly assigned to a host. Hosts may

also add referrals from existing customers to their portfolio.

Decision-Rights, Monitoring, and Incentives for Casino Hosts

Casino hosts ultimately have decision rights on the comps awarded to their customers.

They are free to choose the comps under the general guideline that the dollar value awarded as a

percentage of the customer's theoretical win on the current trip—known as "comp percentage"—

does not exceed 40%. This explicit definition of decision-rights for casino hosts appears to be

common practice in the industry. According to one highly experienced senior executive we

interviewed, "…the general rule of thumb in the industry is 40% for the maximum comp

percentage. This has been true for at least the past 20 years." However, if the host believes that a

customer is likely to be highly profitable in the future they can, and often do, use their decision-rights

to award comps in excess of the 40% limit. In those cases, an "exception report" is

triggered for review by the CFO of the property. As discussed in the next section, an important

feature of this site for our research purposes is that properties vary substantially in the intensity

with which host decisions are monitored through this exception reporting process.

Two features of this site point to the relative exogeneity of these differences in control

practices for purposes of this study. First, we examine decision-making and performance at the

employee rather than property level. Casino hosts operate within a given control system that is

exogenous from their perspective. Second, each of the properties was independently founded

and brought under the same corporate umbrella through a process of mergers and acquisitions.

Our interviews with both corporate and property level management suggested that these

independent histories coupled with path-dependence in management practices, rather than

corporate optimization, have led to these cross-property differences in control practices.

The level and type of comps awarded to a customer on a given trip are typically agreed

upon between the host and the customer at the start of the trip based on the expected gaming

behavior. If ex post customer play does not meet the expected level, the host can adjust the comp

award downward to meet the 40% threshold. For example, as one host noted: "Customers can

sweet-talk you into getting a suite, but if their play doesn't justify it in the end, you can make

adjustments to the expenses we will and will not cover."

Interestingly, all incentives for hosts to directly manage the key decision variable of

comp percentage appear to be implicit rather than explicit. Although there is no formulaic

relationship between the hosts' bonuses and performance indicators, the incentive plan for casino

hosts at MGM-Mirage's properties explicitly indicates that the hosts' annual bonus will be

determined by the following criteria: 60% is based on total gaming revenues for the property,

consistent with an objective to reward teamwork; 15% on individual goals for acquiring new,

and reactivating "inactive", customers; and 25% on subjective evaluation of the performance of

the host by the property's senior management. Incentives to limit comp awards appear to be

primarily determined by the exception report review process but may also depend on managers'

view of comp levels in the subjective evaluation component of the bonus plan.

The Casino Host Decision-Process

As part of this study, we interviewed several casino hosts across properties and

"shadowed" some of them as they performed their work. Our observations revealed that hosts

consider their interactions with customers in the context of a relationship. They process both

hard and soft information to support each comp decision, although the heuristics applied to hard

information and the confidence with which they incorporate soft information differ from host to

host. Additionally, they explicitly engage in conversations about comps in order to manage

clients' expectations.

In general, hosts take a dynamic rather than static view of the financial performance of

customer relationships. One host explained: "I usually give it a couple of trips to see what

happens. I want to see how that customer is likely to perform over the next year or so. When I

overcomp a customer, I am looking at what that will mean for [his or her] profitability for the

entire year." Many hosts also spoke of customer loyalty, with one noting that he can usually

evaluate loyalty "within three to six trips." In short, it is clear that hosts' expectations of future

customer performance shape their individual comp decisions and that they think of customers in

terms of "relationships." One host described using comp discretion "based on longevity and

what I know about the customer"; another noted that it can be difficult to separate "the social

relationship vs. the business relationship."

Our interviews and observations suggest that hosts vary greatly in the relative extent to

which they base their discretionary comp decisions on “hard” information contained in the

firm's database, such as average theoretical win in the last "N" trips, versus "soft" information

based on local knowledge of customers. When asked about how they make comp decisions,

hosts replied with expressions such as "nothing is black and white in what we do, it is all

grey," or "sometimes I have to take a shot at a customer." Often, this soft information is in the

form of behavioral cues which hosts believe can indicate a potentially valuable customer. For

instance, one host looks at body language: "It probably makes sense to focus on the person

sitting at the slot machine with his feet up and 500 credits in the machine." While all hosts seem

to rely on soft information to some extent, the more experienced hosts that we interviewed

appeared more confident in their ability to incorporate such information into their decisions.

According to one of them, "good customers play differently. I've been here a long time, and I

just know when it makes sense to overcomp."

We also observed considerable heterogeneity in the heuristics hosts have developed for

incorporating hard information from the database into their comp decisions, as illustrated by

the following comments: "I tend to do more of a trip-by-trip evaluation"; "I look at the past four

trips, throw out the lowest one, and take the average of the remaining three. I'll then update

based on current play"; and "I look at both year-to-date theoretical win and lifetime to date."

Customers are often very conscious of the comps they receive for their play. As one host

noted, "people read books [on Vegas] and ask what they need to do to get this or that."

Consistent with their dynamic view of customer interaction, casino hosts tend to think carefully

about the future implications of current discretionary comp decisions: "Once you have given too

much to a customer, you are stuck. When you get them back to their natural level they think you

have taken away from them."

Data

The source for this study, the Players Club database, contains data on over 9 million

customer trips between 1993 and 2004. To focus on host level decision-making and learning, we

restrict our sample to the 349,887 customer-trip observations with an assigned casino host. We

exclude from our study customers who choose not to interact with hosts and those whose level

of play does not warrant host interaction.

In several cases in the data, one-time customers with zero or very low levels of

theoretical win (e.g. due to limited gaming, small bets, etc.) received comps valued in the tens

of thousands of dollars. We eliminate these observations—which are likely attributable to

family or friends of highly valuable customers—to arrive at our final sample.2

For each trip-level observation we observe the unique identity of the host interacting with

the customer, the theoretical win for the customer on the trip, and the dollar value of comps

awarded to the customer for that trip. Because we observe data from 1993 (the first year of the

firm‘s Players Club program and database) onwards, we can reconstruct each host‘s history of

interactions with individual customers.
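A minimal pandas sketch of the sample restrictions described in this subsection (and in footnote 2); the file name and column names are hypothetical, since the Players Club schema is proprietary.

```python
import pandas as pd

trips = pd.read_csv("players_club_trips.csv")  # hypothetical extract

# keep only customer-trips with an assigned casino host
sample = trips[trips["host_id"].notna()]

# drop likely comps-to-companions (see footnote 2): comps exceeding 20x the
# current-trip theoretical win, or comps awarded when theoretical win is zero
twin, comps = sample["theoretical_win"], sample["comps"]
sample = sample[~((comps > 20 * twin) | ((twin == 0) & (comps > 0)))]
```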

IV. Empirical Tests and Results

Definition of Tight vs. Loose Monitoring and the Link to Implicit Incentives

Hosts have the discretion to award comps in excess of the 40% limit to manage the

lifetime value of a customer. However, the "exception report" triggered by each such event

creates an implicit incentive to limit the use of this decision right. The exception report sent to

the property CFO provides information on the type and dollar value of comps awarded and the

customer's current and historical theoretical win.

2 We eliminate data from trips in which a player was awarded comps that were greater than 20 times the current trip theoretical win and in which a player was awarded comps when theoretical win was zero. This eliminates 0.38% of our trip-level observations.

Because the report is used differently across properties, there are also significant

differences in the implicit incentives to limit the use of discretion. We characterize some

properties as having "loose monitoring". In these properties, the exception report is used in a

relatively lax way. One representative property CFO said the report was used mainly to detect

egregious cases of comps misuse, and another reported that review of the report was delegated to

the director of player development. In these properties, management actions with respect to hosts

usually took a broad perspective: "if we observe hosts systematically exceeding the 40% limit,

we call them in to discuss their performance and develop an action plan when necessary." By

contrast, properties that we characterize as having "tight monitoring" use exception reports to

monitor employee decision-making more intensively. A CFO of one such property explained: "I

review the report every day and email the hosts, who have a week to respond. If I see something

strange I call the host right away." Another noted: "I ask hosts for a written explanation of all

comps that are $200 or 5% above the limit. I read every single explanation and I further question

about 10% of them."

We base our definition of tight versus loose monitoring in part on qualitative interview-based

data as noted above but also in part on the observed frequency of monitoring inherent in

each property's exception reporting process.3 Table 1 provides descriptive information on both

types of properties. In tight monitoring properties, host comp decisions are monitored daily by

the CFO of the property and a broader review of each host's overall customer portfolio is

conducted at the end of each month. In loose monitoring properties, host comp decisions are

monitored only once per week and the broader review of a host's portfolio is conducted once per

quarter. We shared this classification with several MGM-Mirage Group managers, all of whom

found it valid based on their own observations and experience.

3 Frequency of monitoring is noted in accounting texts as a general feature of "tight" control systems (Merchant and Van der Stede 2003). Frequency of feedback can also influence perceptions of a loss of personal control by individuals (Ilgen et al. 1979).

We further validated our classification of tight versus loose monitoring properties by

examining the sensitivity of host exit from a property (EXIT) to a variety of measures of host

performance. If a property discourages overcomping, hosts not conforming with the comps

guidelines will likely be asked to leave or will voluntarily leave when they realize their behavior

is not condoned. To the extent that "excessive" use of decision-rights is linked to departure, hosts

will face incentives to limit their use of decision-rights.

To examine the potential strength of these implicit incentives, and how they vary for

properties with tight or loose monitoring, we estimate the following exit-performance

relationship:

$$P(Exit_{jpt}) = f\Big(\beta_0 + \beta_1 CustomerGrowth_{jpt-1} + \beta_2 TripsPerCustomerGrowth_{jpt-1} + \beta_3 TheoreticalWinPerTripGrowth_{jpt-1} + \beta_4 Discretion\%_{jpt-1} + \beta_5 Overcomped_{jpt-1} + \beta_6 ExcessComps_{jpt-1} + \beta_7 ExcessComps_{jpt-1} \times TightMonitoring_p + \beta_8 Experience_{jpt} + \sum_{j=2}^{6}\gamma_j Property_j + \sum_{k=1994}^{2003}\gamma_k Year_k + \varepsilon_{jpt}\Big) \quad (1)$$

Our dependent variable, $Exit_{jpt}$, is set equal to 1 if host j departs from property p during

year t. We model the probability of departure as a function of a number of host-level

performance metrics including growth in the number of customers in the host's portfolio

(CustomerGrowth), growth in the number of trips per customer in the host's portfolio

(TripsPerCustomerGrowth), and growth in theoretical win per trip for these customers

(TheoreticalWinPerTripGrowth). Because these metrics are primary objectives of MGM-Mirage's

properties, we expect each of them to be negatively associated with the probability of

departure. All measures are aggregated at the host-year level.

We also include in the specification a number of measures of the extent to which hosts

use decision-rights: the annual proportion of individual customer-trips managed by the host in

which comps exceeded 40% of the theoretical win (Discretion%); an indicator for whether the

portfolio comp percentage—i.e. the total annual comps awarded by a host divided by the total

theoretical win across all customers in the host's portfolio—exceeded 40% (Overcomped); and a

measure of the extent to which a host overcomped customers for the year, taking the value of the

portfolio comp percentage minus 40% for hosts who overcomped and zero otherwise

(ExcessComps). This last measure is of particular interest as it captures not only the extent to

which a host awarded comps to individual customers outside of the 40% guideline but also the

host's failure to absorb any overcomping to individual customers in the portfolio overall. Thus,

this measure partially captures the effectiveness of a host's use of her decision-rights. As a result,

we expect that, all else being equal, high levels of this measure would be associated with a

higher probability of departure.
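The decision-rights measures can be built from trip-level data roughly as follows; a hedged sketch assuming the hypothetical `sample` frame from the earlier sketch, with columns host_id, year, theoretical_win, and comps.

```python
import pandas as pd

def host_year_measures(d):
    comp_pct = d["comps"].sum() / d["theoretical_win"].sum()
    return pd.Series({
        # share of the host's customer-trips with comps above 40% of
        # current-trip theoretical win
        "discretion_pct": (d["comps"] > 0.40 * d["theoretical_win"]).mean(),
        "portfolio_comp_pct": comp_pct,
        "overcomped": int(comp_pct > 0.40),
        # excess over the 40% guideline; zero for hosts within it
        "excess_comps": max(comp_pct - 0.40, 0.0),
    })

measures = sample.groupby(["host_id", "year"]).apply(host_year_measures)
```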

We interact ExcessComps with TightMonitoring, an indicator for whether the host is

employed at any of the three properties which we classify as "tight monitoring." If our

classification of properties is valid, then we expect that an increase in ExcessComps will lead to a

higher increase in the probability of departure for properties classified as having "tight

monitoring" vs. those classified as having "loose monitoring." As additional control variables,

we include property indicators, year indicators, and Experience measured as the number of years

a host has been employed at a property at the beginning of each year.
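A sketch of how equation (1) could be estimated; we assume a hypothetical host-year frame `hy` in which the performance and decision-rights measures are already lagged one year, and all column names are ours rather than the paper's.

```python
import statsmodels.formula.api as smf

model = smf.logit(
    "exit ~ customer_growth + trips_per_cust_growth"
    " + theo_win_per_trip_growth + discretion_pct + overcomped"
    " + excess_comps + excess_comps:tight_monitoring + experience"
    " + C(property) + C(year)",
    data=hy,
)
# cluster standard errors on host before inference, as described above
res = model.fit(cov_type="cluster", cov_kwds={"groups": hy["host_id"]})
print(res.summary())
```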

Table 2 contains results from logit estimation of equation (1). All standard errors are

adjusted for clustering of observations within hosts over time prior to inference. Column 1

demonstrates that hosts who are able to grow their customer base or to improve the theoretical

win generated per customer trip face a lower probability of departure. Managers with higher

levels of experience face lower departure probabilities. Holding other performance metrics

constant, hosts who are overcomped across all customers in their portfolios (Overcomped) are

more likely to leave the organization. This increase in the probability of departure grows

further with the degree to which the host is overcomped (ExcessComps). Interestingly,

Discretion% is unrelated to the probability of departure. Overall, these results provide evidence

of incentives in this organization whereby the use of decision rights (Discretion%) is not

discouraged per se, but where hosts face strong implicit incentives for managing the

effectiveness with which these decision rights are used.

The results in column 2 are largely consistent with those in column 1 but also point to the

validity of our classification of properties in terms of tight versus loose monitoring. The

coefficient on ExcessComps×TightMonitoring is positive and significant, suggesting that

"overcomping" is more strongly discouraged in the properties that we classify as having tight

monitoring compared to those we classify as having loose monitoring. These estimates show that

a host with the mean levels of all performance measures and operating below the threshold comp

percentage of 40% has a 0.3% probability of departing from the organization. An overcomped

host in a "loose monitoring" property with the mean levels of all performance measures but

operating with a portfolio level comp percentage in the 90th percentile has a probability of

departure that is approximately four times higher at 1.3%. By contrast, a similar overcomped

host in a "tight monitoring" property has a probability of exiting the organization of 6.3%, an

increase of almost fivefold compared to the host in a "loose monitoring" property.4
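The probabilities quoted above follow from inverting the logit at different values of the linear index. The index values below are reverse-engineered purely for illustration; they are not the paper's coefficient estimates.

```python
import numpy as np

inv_logit = lambda xb: 1.0 / (1.0 + np.exp(-xb))

xb_base = -5.81              # mean covariates, within the 40% guideline
xb_loose = xb_base + 1.48    # excess comps at the 90th pctile, loose monitoring
xb_tight = xb_loose + 1.63   # plus the tight-monitoring interaction

print(inv_logit(xb_base))    # ~0.003 -> 0.3% departure probability
print(inv_logit(xb_loose))   # ~0.013 -> 1.3%
print(inv_logit(xb_tight))   # ~0.063 -> 6.3%
```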

In summary, the results in this section provide further support for the validity of our

"tight" versus "loose" monitoring classification, which appears to capture real differences in the

implicit incentives faced by hosts when using their decision-rights. In the next two subsections,

we explore whether host decision-making is consistent with these differences in implicit

incentives and the implications, if any, for learning in this environment.

4 We also estimated a version of eq. (1) which allowed separate coefficients on ExcessComps for each property. Coefficients for each tight monitoring property are larger than those for each loose monitoring property. The implied marginal effects for hosts with excess comps in the 90th percentile relative to those within the comp guideline of 40%, holding all other variables at their mean, are 0.30%, 1.04%, and 1.58% for loose monitoring and 3.65%, 6.51%, and 8.08% for tight monitoring properties respectively. All effects other than the 0.30% estimate are significant at least at the 10% level.

Does Decision-Making Vary across Properties with Tight versus Loose Monitoring?

There are at least two potential effects of tighter monitoring on the exercise of decision-rights:

(1) employees may be less likely to use discretion to deviate from guidelines in general;

(2) employees may be more or less likely to incorporate local information, including historical

customer data, when making decisions about individual customers. The first effect is

straightforward: employees who are more likely to be penalized for mistakes are less likely to

"experiment" (Lee et al. 2004), preferring instead to manage towards explicit guidelines.

The second effect is not as straightforward. In many settings, including ours, operational

employees have access to local information that can signal when decision-rights should be

exercised to exceed guidelines. Such local information can be "hard" (e.g. historical information

on customers in the firm's database) or "soft" (e.g. from direct interaction with a customer). The

former is observable to non-local decision-makers, while the latter is not. We might expect that

employees in tight-monitoring environments, where they face more pressure to make the "right"

decision, would be less likely to incorporate local information into their discretionary decisions.

This is particularly true for "soft" local information. In this case, if employees' discretionary

decisions do not pay off in current or future customer performance (e.g. increased sales or

retention), then these decisions will be more difficult to justify to superiors in the organization.

However, to the extent that "hard" local information can be used as a basis for exercising

discretion, employees may be more likely to incorporate it in their decision making even in tight

monitoring environments. In particular, if employees facing outcome-based incentives (e.g.

customer growth or retention) feel restrained from making decisions based on "soft" information,

then they may be more willing to exercise decision-rights when justified by observable (to local

and non-local decision-makers) "hard" information.

Turning first to the question of whether employees in tight monitoring environments are

less likely to exercise decision-rights in general, the results presented in Table 3 demonstrate that

this is indeed the case. As measured by Discretion%, hosts in tight-monitoring properties are

less likely to exceed the 40% comp guidelines than those in loose-monitoring properties (mean

for tight-monitoring properties=19.6%; mean for loose monitoring properties=29.2%; difference

significant at p<.01). Hosts in tight-monitoring properties are also only about one-third as likely

as those in loose monitoring properties to be overcomped at a portfolio level (Overcomped) in

any given year (mean for tight-monitoring properties=0.139; mean for loose monitoring

properties=0.368; difference significant at p<.01). Finally, hosts in tight-monitoring properties

award substantially lower comps relative to the theoretical win of their customer portfolios

(Comp%) in a given year (mean for tight-monitoring properties=35.6%; mean for loose

monitoring properties=59.8%; difference significant at p<.01). Overall, the results in Table 3 are

consistent with implicit incentives arising from tight versus loose monitoring across properties.

By all measures of the extent to which hosts are using decision rights, discretionary decisions

which deviate from comp guidelines are significantly less prevalent in properties we classify as

having tight monitoring.
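The Table 3 comparisons amount to difference-in-means tests on the host-year measures. A sketch, assuming the hypothetical `measures` frame from the earlier sketch augmented with a tight_monitoring indicator:

```python
from scipy import stats

tight = measures[measures["tight_monitoring"] == 1]
loose = measures[measures["tight_monitoring"] == 0]
for col in ("discretion_pct", "overcomped", "portfolio_comp_pct"):
    t, p = stats.ttest_ind(tight[col], loose[col], equal_var=False)
    print(f"{col}: tight={tight[col].mean():.3f}, "
          f"loose={loose[col].mean():.3f}, p={p:.4f}")
```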

Turning next to the question of whether employees in tight monitoring environments are

more or less likely to use decision-rights based on local information, we develop and test an

empirical model of host decisions at the individual customer-trip level. As is clear from the

qualitative evidence presented in Section III, hosts vary significantly in both how, and the

horizons over which, they combine “hard” (e.g. historical information on customers in the firm’s

database) and “soft” (e.g. direct observation of customer behavior) information in their decisions.

Capturing the complexities of these decisions in observable data is not trivial. As an

approximation, we characterize the casino host decision process as:

$$COMP_{ijpt} = b_1 TheoreticalWin_{ijpt} + b_2 E_j\Big(\sum_{k=1}^{T_i} TheoreticalWin_{ipt+k}\Big)$$

where $COMP_{ijpt}$ is the dollar value of comps awarded to customer 'i' by host 'j' at property 'p' during trip 't'. In this characterization of their decision process, hosts determine the dollar value of comps to award a customer based on two pieces of information: the observed theoretical win of the customer on the current trip ($TheoreticalWin_{ijpt}$) and the host's expectation of the customer's future theoretical win at the property ($E_j\big(\sum_{k=1}^{T_i} TheoreticalWin_{ipt+k}\big)$).5 The customer's future theoretical

win at the property is determined by both the level of theoretical win per trip and the number of

future trips to the property by the customer ($T_i$). If hosts simply award comps to customers

based on their observed level of play during the current trip without using their decision-rights to

exceed the 40% limit, then $b_2 = 0$ and $b_1 \le 0.40$. Alternatively, if hosts use discretion to

deviate from the prescribed 40% limit based on their expectations about the customer’s level of

future theoretical win at the property, then $b_2 > 0$ and $b_1 \le 0.40$.

Consistent with our interviews and observations, we assume that hosts form expectations

about future customer performance at the property based in part on historical data on the

customer's theoretical win ($\sum_{s=1}^{L_i} TheoreticalWin_{ipt-s}$)6 and in part based on idiosyncratic soft information

about the customer observed by the host at the property during the customer's current trip ($\alpha_{ijpt}$).

The past number of trips, $L_i$, considered for each customer depends on the horizon considered

relevant by the host. The host may consider the full tenure of the customer relationship with the

property in which case $L_i$ would equal the total number of past trips by the customer to the

property. Alternatively, hosts may discount information on older trips, in which case $L_i$ would

be determined by a shorter time period. The soft information represented by $\alpha_{ijpt}$ can be of

several types including direct interactions between the host and customer in which the host

inquires about the intent of the customer on current and future trips, inferences the host makes

about the customer’s appearance and behavior, or any other local information the host gains

outside of the systematic customer data captured in the firm’s information system. With this

characterization, the host comp decision can be modeled as:

$$COMP_{ijpt} = b_1 TheoreticalWin_{ijpt} + b_2\Big(\lambda \sum_{s=1}^{L_i} TheoreticalWin_{ipt-s} + \alpha_{ijpt}\Big) \quad (2)$$

5 The absence of the 'j' subscript in $TheoreticalWin_{ipt+k}$ is intentional and captures the notion that hosts have incentives to bring customers back to the property even if that customer switches hosts in the future.

6 The absence of the 'j' subscript in $TheoreticalWin_{ipt-s}$ is intentional and captures the notion that hosts are simply using the customer's past level of performance at the property to make inferences about performance in the future. The past performance of the customer need not be the result of trips in which the customer interacted with the host making the decision on the current trip.

This characterization of the host decision process has intuitive appeal. Conditional on the

customer's current level of play, hosts increase (decrease) the comp awarded when the

customer’s past level of play is high (low) which may signal that the current trip is a deviation

from a pattern of performance established by the customer in past trips. Similarly, hosts increase

the comp percentage when they observe local information on the customer that suggests higher

future levels of play at the property ($\alpha_{ijpt}$).

As evidenced by our interviews, the horizons considered relevant for decision making

vary considerably across hosts. In our empirical specifications, we choose the relevant past

horizon as the 18 months prior to the current trip start date. That is, in equation (2), we set $L_i$

equal to the number of past trips the customer has taken to the property within the 18-month

period prior to the current trip. We make this choice for two primary reasons. First, a customer

formally becomes classified as “inactive” if they have not returned for 18 months from their last

trip to the property. Whether the customer remains active, and how active they remain, over the

18 months subsequent to the current trip appear to be salient criteria for decision-making in our

setting. Second, while many hosts suggested that they consider the entire history of customer

data in their decision-making, they almost universally noted that they discount information that is

greater than 18 to 24 months old. In the remainder of the paper, for each customer-host-property-trip observation, we refer to

$\sum_{s=1}^{L_i} TheoreticalWin_{ipt-s}$

as LagTheoreticalWin. For empirical

purposes, we estimate the following version of equation (2):

$$COMP_{ijpt} = \hat{b}_1 TheoreticalWin_{ijpt} + \hat{b}_2 LagTheoreticalWin_{ipt} + \sum_{j=2}^{6}\gamma_j Property_j + \sum_{k=1994}^{2003}\lambda_k Year_k + \mu_j + \varepsilon_{ijpt} \quad (2')$$

where $\mu_j$ denotes a host fixed effect controlled for through the use of a series of host indicators,

and Property and Year represent property and year fixed effects respectively.7 To examine

whether employees in tight monitoring environments are more or less likely to exercise decision-rights

based on local customer information, we also estimate a version of equation (2′) where we

allow the empirical weights, $\hat{b}_1$ and $\hat{b}_2$, on current and historical theoretical win to vary for tight

versus loose monitoring properties.
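A hedged sketch of how LagTheoreticalWin (the trailing 18-month sum of a customer's theoretical win, excluding the current trip) and the interacted specification of equation (2′) could be implemented; the frame and column names are hypothetical, and trip_date is assumed to be a datetime column.

```python
import statsmodels.formula.api as smf

trips = trips.sort_values(["customer_id", "trip_date"]).set_index("trip_date")
rolled = (trips.groupby("customer_id")["theoretical_win"]
               .rolling("548D")        # ~18 months, current trip included
               .sum())
# subtract the current trip to get the trailing sum of *prior* trips;
# positional alignment holds because both objects share the same sort order
trips["lag_theo_win"] = rolled.to_numpy() - trips["theoretical_win"].to_numpy()
trips = trips.reset_index()

# interactions with the tight-monitoring indicator; its main effect is
# omitted because the host fixed effects absorb it
res = smf.ols(
    "comps ~ theoretical_win + theoretical_win:tight_monitoring"
    " + lag_theo_win + lag_theo_win:tight_monitoring"
    " + C(host_id) + C(year)",
    data=trips,
).fit(cov_type="cluster", cov_kwds={"groups": trips["customer_id"]})
```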

Results from OLS estimation of equation (2′) are presented in Table 4. All standard

errors are adjusted for clustering of observations within customers over time prior to inference.

The results in column 1 demonstrate that, on average, hosts weight both current and past

customer information in their comp decisions. The coefficient estimates show that, conditional

on past customer performance, hosts on average award $0.218 in comps per dollar of current trip

theoretical win – well within the comp guidelines of all properties. On average, the comp award

is adjusted upward by $0.011 per dollar of historical theoretical win (over the previous 18

months). Based on the coefficient estimate on LagTheoreticalWin, a customer's past

performance would have to be significantly higher than that on the current trip for a host to

substantively shift the average comp percentage to be in excess of the 40% guideline. For the

median customer-trip in our sample, LagTheoreticalWin is approximately two times the current

trip theoretical win while it is approximately 16 times current trip theoretical win for customer-trips

in the 90th percentile. For the median customer-trip, hosts would on average increase the

comp percentage by 2.2% (2*1.1%) to 24% whereas for the customer-trip in the 90th percentile,

the comp percentage would on average increase by 17.6% (16*1.1%) to 39.4%. Thus, on

average, past customer performance would have to deviate significantly from current trip

performance for hosts to use decision-rights so as to exceed formal comp guidelines.

7 Property fixed effects are largely subsumed by host fixed effects. However, there are 18 hosts who switch properties over our sample period. In practice, after controlling for host fixed effects, the property fixed effects are estimated based on data from this small sample of "switchers". Not surprisingly given the small number of switchers, our results are not sensitive to either the omission of property fixed effects or to the omission of this small sample of switchers from the analysis.

Column 2 contains the results from estimating a version of equation (2′) which allows the

empirical weights on current and historical customer information to vary for properties with tight

versus loose monitoring. The coefficient on TheoreticalWin×TightMonitoring is negative and

significant (coefficient=-0.061; p<0.01). This result is consistent with those in Table 3 and

documents that hosts in properties with tight monitoring tend to have lower comp percentages

compared with their loosely monitored counterparts. The coefficient estimates show that,

conditional on past customer performance, hosts in properties with loose monitoring on average

award $0.259 in comps per dollar of current trip theoretical win while those in properties with

tight monitoring on average award $0.198 (0.259-0.061) per dollar of current trip theoretical win.

The coefficient on LagTheoreticalWin×TightMonitoring is negative and significant

(coefficient=-0.007; p<0.01). On average, the comp award is adjusted upward by $0.015 per

dollar of historical theoretical win (over the previous 18 months) for loose monitoring properties

compared with $0.008 (0.015-0.007) for tight monitoring properties. For the median customer-trip

in our sample, hosts in loose monitoring properties would on average increase the comp

percentage by 3% (2*1.5%) to 29% whereas for the customer-trip in the 90th percentile, the comp

percentage for hosts in these properties would on average increase by 24% (16*1.5%) to

approximately 50% – well over the formal 40% guideline. The comparable numbers for hosts in

tight monitoring properties are a 1.6% (2*0.8%) increase in the comp percentage for the median

customer-trip and a 12.8% (16*0.8%) increase for a customer-trip in the 90th percentile of

historical theoretical win – neither of these increases would lead hosts to exceed formal comp

guidelines on average. Thus, hosts in tight monitoring properties tend to adjust comp awards less

in response to historical customer information than their counterparts in loose monitoring

properties.

Column 3 contains results from estimating a version of equation (2′) which allows the

host trip-level comp decision for an individual customer to vary with the performance of the

host's entire customer portfolio. Specifically, we add the host-year level variables Overcomped,

ExcessComps, and their interactions with TightMonitoring to the specification. Overcomped and

ExcessComps are measured for the year prior to the year of the current customer trip. The

qualitative data from our interviews with hosts (discussed in Section III) along with the empirical

results linking host-performance to departure in Table 2 demonstrate that hosts face incentives to

manage their entire customer portfolios in addition to individual customer relationships. These

incentives may lead hosts to vary their comp decisions for individual customer trips based on the

extent to which they are overcomped across all customers in their portfolio. That is, hosts may

face implicit pressure to reduce their comp awards to an individual customer in response to being

over the comp limit of 40% at a portfolio level.

The results in column 3 show that this is the case for properties with tight monitoring but

not for those with loose monitoring. The coefficient on Overcomped is positive and significant

(coefficient=49.95; p<.01) while that on Overcomped×TightMonitoring is negative and

significant (coefficient=-36.7; p<.01). This suggests that, in properties with loose monitoring,

hosts with overcomped portfolios in the prior year continue to award higher levels of comps

conditional on current and past theoretical win. In properties with tight monitoring, there is no

relationship between the customer-trip level comp decision and being overcomped at a portfolio

level per se (coefficient for tight monitoring properties=49.95-36.7=13.25; F=1.01, p=0.32).

However, the extent to which a host's portfolio is overcomped in the prior year (ExcessComps)

appears to influence the customer-trip level comp decision for hosts in properties with tight

monitoring but not in those with loose monitoring (coefficient on ExcessComps=1.745, p>.10;

coefficient on ExcessComps×TightMonitoring=-2.635, p<.05; sum of two coefficients=-0.89;

F=3.70, p=0.054). The coefficient estimates demonstrate that, on average, each 1% increase in

the extent to which a host in a tight monitoring property is overcomped at a portfolio level is

associated with a $0.89 decrease in the comps awarded to a particular customer on a given trip.

These results provide evidence that hosts in properties with tight monitoring weight aggregate

portfolio level information on their customers when making individual customer-trip comp

decisions. However, this effect is relatively small. For the median theoretical win in the

customer-trip level sample of approximately $700, a host in a tight monitoring property with

ExcessComps=40 (e.g. overcomped at twice the existing guidelines) would reduce the comp

percentage on an individual customer trip by only 5.1% (40*0.89/700).

In summary, the results in this section document three specific decision-making patterns

which are consistent with implicit incentives from "tight" versus "loose" monitoring. First,

deviation from decision-guidelines, or "experimentation", is significantly less prevalent in

properties we classify as having tight rather than loose monitoring. Second, the decisions of

hosts in tight monitoring properties are less responsive to "hard information" (past customer

performance) than are those of hosts in loose monitoring properties.8 Finally, while responding

less to hard information, hosts in tight monitoring properties respond more strongly to aggregate

information on their own overall performance compared to their loosely monitored counterparts.

In the next subsection, we explore the implications of these decision-making patterns for

employee learning.

8 We found similar results with an alternative, but complementary, approach of measuring – in separate regressions for tight and loose monitoring properties – the incremental variation in comps due to variation in current trip theoretical win after controlling for host and year fixed effects. Consistent with hosts in tight monitoring properties deviating less from current trip performance in their decisions, current trip theoretical win explains 39% versus 31% of the within-host and year variation in comp awards in tight versus loose monitoring properties respectively.

Learning and the Tight vs. Loose Monitoring Effect

Documenting learning in the decentralized information processing activities of our

sample of casino hosts requires that we develop an empirical model to identify how the link

between these decisions and performance outcomes varies as hosts gain experience. To the

extent that hosts develop skill in incorporating unobservable (to the researcher) local information

in their comp percentage decisions, these decisions should be correlated with actual realizations

of future customer performance after controlling for observable historical customer performance

leading to the following empirical specification:

$$TheoreticalWin_{ijpt} = b_1 TheoreticalWin_{ijpt-1} + b_2 Comps_{ijpt-1} + b_3 Comps_{ijpt-1} \times Experience_{jt} + b_4 Experience_{jt} + \sum_{j=2}^{6}\gamma_j Property_j + \sum_{k=1994}^{2003}\lambda_k Year_k + \mu_j + \varepsilon_{ijpt} \quad (3)$$

where 'i', 'j', 'p', and 't' subscript customer, host, property, and time respectively and $\mu_j$ denotes

a host fixed effect.

Equation (3) is our basis for identifying learning in the customer management decisions

of casino hosts in our research setting. If hosts are, on average, skilled at incorporating local

information that is informative of future customer performance into their comp decisions, then

we expect $b_2 > 0$ – hosts deviate from basing comp decisions purely on historical customer data

only when future customer performance is high relative to current customer performance. If

ability in acquiring and incorporating local information into comp decisions increases as hosts

gain experience interacting with customers, then the relationship between these decisions and

future performance outcomes should increase with experience implying $b_3 > 0$. In some of our

estimations, we will also allow the learning effect, $b_3$, to vary for properties with tight vs. loose

monitoring.

We estimate equation (3) in two ways. First, we aggregate all data up to the annual host

portfolio level and analyze whether hosts' investments into their customer portfolios, in the form

of comp awards, lead to increased future theoretical win at the portfolio level. This approach

will allow us to capture learning effects related to managing a portfolio of customer relationships

as opposed to individual customers. For this specification, we measure Experience as the

number of years a host has been employed at a property at the beginning of each year. Comps

and Experience are mean centered prior to interaction to maintain interpretability of coefficients.

To avoid bias due to the inclusion of the lagged dependent variable, we estimate the model using

the generalized method-of-moments dynamic panel data model of Arellano and Bond (1991).
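For readers who want the mechanics, the following is a minimal sketch of this estimation step, not the paper's exact procedure: it mean-centers Comps and Experience, builds the interaction, and then uses the Anderson-Hsiao first-difference instrumental-variables approach as a simplified stand-in for Arellano-Bond GMM (which expands the instrument set to all available lags). The file and column names are hypothetical, the linearmodels package is assumed, and year effects are omitted for brevity.

```python
# Sketch of the host-year estimation of equation (3) under assumed inputs:
# a hypothetical host_years.csv with columns host, year, theo_win, comps,
# and experience.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("host_years.csv").sort_values(["host", "year"])

# Mean-center Comps and Experience before interacting, as in the paper.
df["comps_c"] = df["comps"] - df["comps"].mean()
df["exper_c"] = df["experience"] - df["experience"].mean()
df["cxe"] = df["comps_c"] * df["exper_c"]

# Regressors dated t-1 within host, plus a t-2 level to serve as instrument.
for col, lag in [("theo_win", 1), ("theo_win", 2), ("comps_c", 1), ("cxe", 1)]:
    df[f"{col}_l{lag}"] = df.groupby("host")[col].shift(lag)

# First-difference within host to sweep out the host fixed effect mu_j.
for col in ["theo_win", "theo_win_l1", "comps_c_l1", "cxe_l1", "exper_c"]:
    df[f"d_{col}"] = df.groupby("host")[col].diff()

sample = df.dropna()

# Anderson-Hsiao: instrument the differenced lagged dependent variable
# with the level TheoreticalWin_{t-2}.
res = IV2SLS(
    dependent=sample["d_theo_win"],
    exog=sample[["d_comps_c_l1", "d_cxe_l1", "d_exper_c"]],
    endog=sample["d_theo_win_l1"],
    instruments=sample["theo_win_l2"],
).fit()
print(res.summary)
```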

The results are presented in Table 5. Consistent with the notion that hosts are, on average,

skilled at incorporating local information, the estimate of b_2

shown in column 1 is positive and

significant (coefficient=1.38; p<.01). The coefficient estimate demonstrates that each $1 in

comps invested in a host's portfolio of customers for the year yields $1.38 of theoretical win in

the next year.

The results in column 2 point to evidence of learning. The coefficient on

CompsxExperience is positive and significant (coefficient=0.043; p<.05), consistent with the

notion that hosts gain ability in acquiring and incorporating local information into their decisions

as they gain experience. The coefficient estimate on CompsxExperience documents that the

"return" on each $1 in comps invested in a host's portfolio of customers in terms of future

theoretical win increases by $0.043 per year of host experience. Column (3) contains results

from estimation of a version of equation (3) which allows the learning effect to vary for

properties with tight vs. loose monitoring. The results show that all learning effects are


concentrated in properties with loose monitoring. The coefficient estimate on CompsxExperience

in the specification in this column (coefficient=0.089; p<.05) captures the learning effect for

loose monitoring properties. This estimate shows that the "return" on each $1 in comps invested

in a host's portfolio of customers in terms of future theoretical win increases by $0.089 per year

of host experience in loose monitoring properties. The coefficient estimate on

CompsxExperiencexTightMonitoring (coefficient=-0.083; p<.05) captures the differential

learning effect for tight monitoring properties. This coefficient estimate suggests that any

learning effects are essentially negated for properties with tight monitoring: the implied learning

effect for tightly monitored hosts is only 0.089 - 0.083 = $0.006 per year.

Our second approach to estimating equation (3) is to aggregate data at the customer-host-property-

year level. Exploiting customer-level data allows us to use alternate measures of

experience to capture different types of learning. Specifically, we estimate a version of equation

(3) in which experience is decomposed into general experience (ExpGeneral) measured as the

cumulative number of all customer-trips handled by the host up to the start of the current year

and customer-specific experience (ExpSpecific) measured as the cumulative number of trips

handled by the host for a specific customer up to the start of the current year. Both types of

experience may be important. Employees have multiple opportunities to learn about the

performance consequences of their discretionary decisions from general experience interacting

across customers, but customer heterogeneity may limit the extent to which such learning is

transferable across customer relationships. Conversely, employee experience interacting with

specific customers should lead directly to better discretionary decisions as employees learn about

the performance consequences of their decisions for those customers.
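The two measures are straightforward to construct from trip-level records. The sketch below is illustrative only, assuming a hypothetical trips.csv with columns host, customer, and trip_date; note that the paper accumulates trips up to the start of the current year, while this sketch counts prior trips at the trip grain.

```python
# Minimal sketch of the two experience measures from hypothetical trip records.
import pandas as pd

trips = pd.read_csv("trips.csv", parse_dates=["trip_date"])
trips = trips.sort_values(["host", "trip_date"])

# ExpGeneral: cumulative number of customer-trips the host has handled
# before the current trip, across all customers.
trips["exp_general"] = trips.groupby("host").cumcount()

# ExpSpecific: cumulative number of trips the host has handled for this
# particular customer before the current trip.
trips["exp_specific"] = trips.groupby(["host", "customer"]).cumcount()
```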

Before turning to estimation of equation (3) using the customer level observations, we

first document a simple pattern in the data that is suggestive of learning in the decentralized

information processing activities of hosts. Figure 1 illustrates how a measure of the relationship


between future customer performance and current comp decisions varies with the general

experience level of casino hosts. We measure the "return on comps" (ROC) for each customer-host-

property-year observation as the total theoretical win for the customer at the property over

the subsequent year divided by the dollar value of comps awarded to the customer by a host in

the current year. To control for heterogeneity across properties and years, we then adjust this

measure by subtracting its property-year level mean from each observation. We form experience

portfolios by splitting the sample into 100 quantiles based on ExpGeneral and then taking the

mean level of the adjusted return-on-comps measure for each portfolio. Figure 1 provides

evidence consistent with learning – the ratio of future performance to the current dollar value of

comps increases as hosts gain general experience interacting with customers. Hosts in the lowest

experience quantiles perform significantly worse than the average for each property-year and

their performance does not tend to meet or exceed property-year average performance until their

experience levels are in the 10th quantile and beyond.
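The portfolio construction behind Figure 1 can be sketched in a few lines. The snippet below is a minimal illustration under assumed inputs, a hypothetical customer_host_years.csv with columns property, year, comps, exp_general, and next_year_theo_win; it is not the authors' code.

```python
# Sketch of the Figure 1 construction: return on comps (next-year theoretical
# win over current-year comps), demeaned by property-year, then averaged
# within 100 general-experience quantile portfolios.
import pandas as pd

d = pd.read_csv("customer_host_years.csv")

# Return on comps: next-year theoretical win per current-year comp dollar.
d["roc"] = d["next_year_theo_win"] / d["comps"]

# Adjust for property-year heterogeneity by removing the property-year mean.
d["roc_adj"] = d["roc"] - d.groupby(["property", "year"])["roc"].transform("mean")

# Form 100 experience quantile portfolios on ExpGeneral and average within each.
d["exp_q"] = pd.qcut(d["exp_general"], 100, labels=False, duplicates="drop")
print(d.groupby("exp_q")["roc_adj"].mean())
```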

Table 6 contains results from estimating equation (3) using the customer-host-property-year

level data. The results in column 1 provide evidence that skill in acquiring and incorporating

local information into comp decisions at least partially arises via learning through general

experience interacting with customers. The interaction between Comps and ExpGeneral is

positive and significant at the 1% level. Consistent with the results of the host-portfolio level

analyses reported in Table 5, the results in column 2 of Table 6 show that learning effects from

general experience are weaker for properties with tight monitoring. The coefficient on

CompsxExpGeneralxTightMonitoring is negative and significant at the 1% level.

The results in Table 6 are not consistent with learning occurring via experience

interacting with specific customers (ExpSpecific). Surprisingly, the coefficient on the interaction

between ExpSpecific and Comps is negative and significant at the 1% level. On average, it


appears that the quality of discretionary decisions declines as hosts gain experience with

individual customers. There are at least two potential explanations for this result. First,

customers may themselves be learning from repeated interaction about the organization's comp

policies. If this were the case, then customers may become more demanding of comp awards as

they gain experience with a property or a specific host. In this scenario, we would expect the

problem to be attenuated in properties with tight monitoring where employees are less likely to

deviate from decision guidelines and exacerbated in properties with loose monitoring where

employees are more likely to do so. The results in Table 6 provide mixed evidence that this is

the case. The coefficient on CompsxExpSpecificxTightMonitoring is positive in all

specifications, but is only significant in column (3), which excludes lagged theoretical win.

The second potential explanation for the negative coefficient estimate on the interaction

between ExpSpecific and Comps is that this result reflects the attempts of hosts to dynamically

manage the cumulative comp percentage awarded to a customer over time rather than the comp

percentage awarded to a customer during an individual time period (e.g. individual trip or year).

If hosts overcomp a customer on one trip, they may try to recoup the “investment” by limiting

comp percentages on future trips. Similarly, in managing the expectations of repeat customers,

hosts may attempt to generally limit comp percentages over time. Figure 2 provides evidence

that this is the case. This figure plots the cumulative comp percentage awarded by a host to a

customer against the relationship-specific experience decile of the host. The cumulative comp

percentage is defined for each customer-host-property-trip as the dollar value of all comps

awarded to a customer by a host during all past trips divided by the total theoretical win for that

customer over all past trips with the host. Relationship-specific experience deciles are formed by

splitting the sample into deciles based on ExpSpecific and then taking the mean level of the

cumulative comp percentage for each decile. Figure 2 demonstrates that comp percentages tend


to be significantly higher during the customer's first trip with the host, but cumulatively, the

comp percentage awarded gradually converges to the 40% guideline specified in hosts' formal

decision-rights. The pattern that emerges in Figure 2 is one in which hosts dynamically manage

the total comps awarded to individual customers towards the decision-guidelines prescribed by

the firm.
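For concreteness, the cumulative comp percentage can be sketched from the same hypothetical trip-level records as before (columns host, customer, trip_date, comps, theo_win). Whether the running totals include the current trip is our assumption; the text defines the measure over past trips.

```python
# Sketch of the Figure 2 measure: comps over theoretical win, accumulated
# within each host-customer relationship, tracked across relationship-specific
# experience deciles. Column names are hypothetical.
import pandas as pd

trips = pd.read_csv("trips.csv", parse_dates=["trip_date"])
trips = trips.sort_values(["host", "customer", "trip_date"])

pair = trips.groupby(["host", "customer"])
# Running totals per relationship (inclusive of the current trip here).
trips["cum_comp_pct"] = pair["comps"].cumsum() / pair["theo_win"].cumsum()

# Relationship-specific experience deciles, then the mean cumulative comp %.
trips["exp_specific"] = pair.cumcount() + 1   # trip number within relationship
trips["exp_decile"] = pd.qcut(trips["exp_specific"], 10, labels=False, duplicates="drop")
print(trips.groupby("exp_decile")["cum_comp_pct"].mean())
```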

V. Conclusion

We view our paper as among the first attempts to document the relationship between

learning and management control through monitoring. We find strong learning effects in our

setting which are concentrated among employees in business units that are "loosely monitored"

and almost entirely absent in those that are "tightly monitored". We also show a mechanism by

which these learning effects occur. Employees in "tightly monitored" business units face implicit

incentives to experiment less in their decisions, leaving them fewer opportunities to learn.

In addition to the obvious caveats related to the generalizability of a field study, we

acknowledge that the proxy used in this paper to classify business units in terms of "tight" versus

"loose" monitoring is based on a limited amount of data. We have attempted to combine both

qualitative and quantitative data to validate our classification of business units. However, it

remains for us and for future researchers to develop stronger proxies to capture variation in both

the intensity and form of monitoring in organizations. Our results also speak to a tradeoff

between control and learning inherent in tight monitoring but not to how this tradeoff is related

to overall performance. Future research can make a contribution by identifying the long-term risk

and performance implications of different monitoring choices.


References

1. Adler, P. S. and K. B. Clark. 1991. "Behind the learning curve: A sketch of the learning process," Management Science 37(3): 267-281.

2. Argote, L. and D. Epple. 1990. "Learning curves in manufacturing," Science 247: 920-924.

3. Arellano, M. and S. Bond. 1991. "Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations," Review of Economic Studies 58: 277-297.

4. Baiman, S. 1990. "Agency research in managerial accounting: A second look," Accounting, Organizations and Society 15(4): 341-371.

5. Campbell, D. 2008. "Nonfinancial Performance Measures and Promotion-Based Incentives," Journal of Accounting Research 46(2).

6. Demski, J. and D. Sappington. 1987. "Delegated Expertise," Journal of Accounting Research 25(1): 68-90.

7. Dodgson, M. 1993. "Organizational Learning: A Review of Some Literatures," Organization Studies 14(3): 375-394.

8. Edmondson, A. C. 1999. "Psychological safety and learning behavior in work teams," Administrative Science Quarterly 44: 350-383.

9. Hwang, Y., D. H. Erkens, and J. H. Evans. 2009. "Knowledge Sharing and Incentive Design in Manufacturing: Theory and Evidence," The Accounting Review 84: 1145-1170.

10. Huber, G. P. 1991. "Organizational Learning: The Contributing Processes and the Literatures," Organization Science 2(1): 88-115.

11. Hunton, J., E. Mauldin, and P. Wheeler. 2008. "Potential Functional and Dysfunctional Effects of Continuous Monitoring," The Accounting Review 83(6): 1551-1569.

12. Ilgen, D., C. Fisher, and M. Taylor. 1979. "Consequences of Individual Feedback on Behavior in Organizations," Journal of Applied Psychology 64(4): 349-371.

13. Jarmin, R. S. 1994. "Learning by doing and competition in the early rayon industry," RAND Journal of Economics 25: 441-454.

14. Jensen, M. C. and W. H. Meckling. 1992. "Specific and general knowledge, and organizational structure," in Contract Economics, edited by L. Werin and H. Wijkander. Oxford: Blackwell.

15. Lambert, R. 1986. "Executive effort and the selection of risky projects," RAND Journal of Economics 17(1): 77-88.

16. Lambert, R. 2001. "Contracting Theory and Accounting," Journal of Accounting and Economics 32(1-3): 3-87.

17. Lapre, M. and N. Tsikriktsis. 2006. "Organizational Learning Curves for Customer Dissatisfaction: Heterogeneity Across Airlines," Management Science 52(3): 352-366.

18. Lee, F., A. Edmondson, S. Thomke, and M. Worline. 2004. "The Mixed Effects of Inconsistency on Experimentation in Organizations," Organization Science 15(3): 310-326.

19. Merchant, K. 1985. "Organizational Controls and Discretionary Program Decision-Making: A Field-Study," Accounting, Organizations and Society 10(1): 67-85.

20. Merchant, K. A. and W. Van der Stede. 2007. Management Control Systems: Performance Measurement, Evaluation, and Incentives. 2nd ed. London: Prentice Hall.

21. Nagar, V. 2002. "Delegation and Incentive Compensation," The Accounting Review 77(2): 379-395.

22. Nagin, D. S., J. B. Rebitzer, S. Sanders, and L. J. Taylor. 2002. "Monitoring, Motivation, and Management: The Determinants of Opportunistic Behavior in a Field Experiment," American Economic Review 92(4): 850-873.

23. Pisano, G., R. Bohmer, and A. Edmondson. 2001. "Organizational Differences in Rates of Learning: Evidence from the Adoption of Minimally Invasive Cardiac Surgery," Management Science 47(6): 752-768.

24. Scott, J. 2005. The Frugal Gambler. Las Vegas, NV: Huntington Press.

25. Simons, R. 2000. Performance Measurement and Control Systems for Implementing Strategy. Prentice Hall.

26. Sitkin, S. B. 1992. "Learning through failure: The strategy of small losses," Research in Organizational Behavior 14: 231-266.

27. Sprinkle, G. B. 2000. "The Effect of Incentive Contracts on Learning and Performance," The Accounting Review 75 (July): 299-326.

28. Thomke, S. 1998. "Managing experimentation in the design of new products," Management Science 44(6): 743-762.

29. Thomke, S., E. von Hippel, and R. Franke. 1998. "Modes of experimentation: An innovation process—and competitive—variable," Research Policy 27: 315-332.

30. Wiersma, E. 2007. "Conditions That Shape the Learning Curve: Factors That Increase the Ability and Opportunity to Learn," Management Science 53(12): 1903-1915.

31. Wiseman, R. M. and L. R. Gomez-Mejia. 1998. "A behavioral agency model of managerial risk taking," Academy of Management Review 23(1): 133-153.

32. Wooldridge, J. 2002. Econometric Analysis of Cross-Section and Panel Data. Cambridge, MA: The MIT Press.

33

Figure 1

Adjusted "Return-on-Comps" Across Experience Quantiles*

*Experience quantiles based on experience measured as cumulative number of trips assigned to a host; adjusted return-on-comps measured as $ROC_{itj} - \overline{ROC}_{yj}$, where $ROC_{itj}$ denotes the return on comps for customer 'i' on trip 't' at property 'j' and $\overline{ROC}_{yj}$ denotes the mean ROC across all customer-host trips in year 'y' at property 'j'.

[Figure: adjusted return on comps (y-axis, -4 to 2) plotted against experience quantile (x-axis, 0 to 100).]

Figure 2

Cumulative Comp Percentage across Relationship Specific Experience Deciles*

*Relationship-specific experience deciles based on experience measured as cumulative number of trips with a

specific customer assigned to a host; Cumulative comp % is defined as the total dollar value of comps awarded to a

customer by a host over all past trips with the customer divided by the theoretical win of that customer over all

past interactions with the host.

[Figure: cumulative comp % (y-axis, 0.30 to 0.60) plotted against relationship-specific experience decile (x-axis, 1 to 10).]

Table 1

Property and Host Characteristics

Properties with tight monitoring:

Property 1: Trips per host 456.7 (median 131, s.d. 807.5); theoretical win per trip 1,521.4 (median 1,058.9, s.d. 1,491.4); data range 1993-2004; 62 unique hosts; 594 host-years; 16 host exits; comp exception reviews daily and monthly.

Property 2: Trips per host 509.3 (median 140, s.d. 1,043.3); theoretical win per trip 1,007.8 (median 627.1, s.d. 1,155.6); data range 1997-2004; 20 unique hosts; 131 host-years; 7 host exits; comp exception reviews daily and monthly.

Property 3: Trips per host 217.9 (median 87, s.d. 469.9); theoretical win per trip 423.3 (median 121.5, s.d. 819.1); data range 1999-2004; 95 unique hosts; 445 host-years; 13 host exits; comp exception reviews daily and monthly.

Properties with loose monitoring:

Property 4: Trips per host 424.7 (median 133, s.d. 941.9); theoretical win per trip 868.7 (median 484.6, s.d. 1,108.3); data range 1993-2004; 62 unique hosts; 390 host-years; 22 host exits; comp exception reviews weekly and quarterly.

Property 5: Trips per host 37.2 (median 31, s.d. 82.1); theoretical win per trip 1,247.8 (median 853.3, s.d. 1,241.7); data range 1993-2004; 39 unique hosts; 269 host-years; 8 host exits; comp exception reviews monthly.

Property 6: Trips per host 283.6 (median 99, s.d. 1,209.9); theoretical win per trip 1,276.1 (median 783.2, s.d. 1,451.3); data range 1998-2004; 83 unique hosts; 422 host-years; 20 host exits; comp exception reviews weekly and quarterly.

Table 2

The Exit-Performance Relation for Properties with Tight vs. Loose Monitoring

Dependent Variable: EXIT

(1) (2)

Constant -2.726** -2.612**

(1.15) (1.15)

CustomerGrowth -1.483*** -1.607***

(0.44) (0.53)

TripsPerCustomerGrowth 0.39 0.377

(0.44) (0.42)

TheoreticalWinPerTripGrowth -0.092** -0.105**

(0.04) (0.05)

Discretion % -2.186 -2.065

(1.53) (1.47)

Overcomped 1.424*** 1.275***

(0.44) (0.46)

ExcessComps 0.001** 0.001**

(0.00052) (0.00056)

ExcessComps x TightMonitoring 0.004***

(0.00145)

Experience -1.037*** -1.010***

(0.17) (0.18)

Property Fixed Effects +++ +++

Year Fixed Effects +++ +++

Pseudo R-Squared 0.41 0.42

Implied Probabilities

Overcomped=0 and all other variables at mean: 0.003

Overcomped=1; ExcessComps in 90th percentile; loose monitoring property: 0.013

Overcomped=1; ExcessComps in 90th percentile; tight monitoring property: 0.063

Standard errors in parentheses are adjusted for clustering of observations within hosts over time; * significant at

10%; ** significant at 5%; *** significant at 1% ; +++ denotes jointly significant at the 1% level using χ2 test;

Table reports logit estimates of equation (1) using data on 1,189 host-year observations; Exit=1 if host departs from

a property in the subsequent year, 0 otherwise; CustomerGrowth = annual growth in the number of customers

managed by the host over the prior year; TripsPerCustomerGrowth = annual growth over the prior year in the

average number of trips taken to a property by customers managed by the host; TheoreticalWinPerTripGrowth =

annual growth over the prior year in the average theoretical win per trip for all customers managed by the host;

Discretion%=percentage of all customer-trips managed by the host during the year in which comps were awarded in

excess of 40% of the trip-level theoretical win; Overcomped=1 if total comps awarded by the host to all customers

in a given year is greater than 40% of the aggregate theoretical win across all customers managed by the host for that


year, 0 otherwise; ExcessComps= [100*(total comps awarded by the host to all customers in a given year divided by

the aggregate theoretical win across all customers managed by the host for that year)-40] when Overcomped=1 and

0 otherwise. TightMonitoring=1 if host is employed at properties 1, 2, or 3 and equals 0 otherwise; Experience =

number of years of host experience at property as of the start of the year; Main effects of TightMonitoring controlled

for via property fixed effects; Model used for estimation is:

P(Exit_{jpt}) = f( b_0 + b_1 CustomerGrowth_{jpt-1} + b_2 TripsPerCustomerGrowth_{jpt-1} + b_3 TheoreticalWinPerTripGrowth_{jpt-1} + b_4 Discretion\%_{jpt-1} + b_5 Overcomped_{jpt-1} + b_6 ExcessComps_{jpt-1} + b_7 ExcessComps_{jpt-1} \times TightMonitoring_j + b_8 Experience_{jpt-1} + \sum_{p=2}^{6} \gamma_p Property_p + \sum_{k=1994}^{2003} \gamma_k Year_k + \varepsilon_{jpt} )    (1)


Table 3

Use of Decision-Rights for Properties with Tight vs. Loose Monitoring

                 All Properties   Tight Monitoring   Loose Monitoring   t-test for Difference

Discretion %     23.7 (21.8)      19.6 (19.3)        29.2 (23.8)        4.43***

Overcomped       0.235 (0.424)    0.139 (0.346)      0.368 (0.482)      7.35***

Comp %           45.8 (1.2)       35.6 (1.0)         59.8 (1.4)         3.21***

Table reports mean for each host-year level variable across 2,251 host-year observations; Standard deviations in

parentheses; *** significant at the 1% level; Discretion%=percentage of all customer-trips managed by the host

during the year in which comps were awarded in excess of 40% of the trip-level theoretical win; Overcomped=1 if

total comps awarded by the host to all customers in a given year is greater than 40% of the aggregate theoretical win

across all customers managed by the host for that year, 0 otherwise; Comp%=total comps awarded by the host to all

customers in a given year divided by the aggregate theoretical win across all customers managed by the host for

that year.

39

Table 4

Determinants of the Trip-Level Comp Decision for Properties with Tight vs. Loose Monitoring

Dependent Variable:

Comps

1 2 3

TheoreticalWin 0.218*** 0.259*** 0.259***

(0.002) (0.004) (0.004)

TheoreticalWin x TightMonitoring -0.061*** -0.061***

(0.005) (0.005)

LagTheoreticalWin 0.011*** 0.015*** 0.015***

(0.001) (0.002) (0.002)

LagTheoreticalWin x TightMonitoring -0.007*** -0.007***

(0.002) (0.002)

Overcomped 49.951**

(20.008)

Overcomped x TightMonitoring -36.667*

(21.100)

ExcessComps 1.745

(1.129)

ExcessComps x TightMonitoring -2.635**

(1.209)

Host Fixed Effects +++ +++ +++

Site Fixed Effects +++ +++ +++

Year Fixed Effects +++ +++ +++

Number of Host-Customer-Trips 220,223 220,223 220,223

R-Squared 0.41 0.42 0.42

Standard errors in parentheses are adjusted for clustering of observations within customers over time; * significant at

10%; ** significant at 5%; *** significant at 1% ; +++ denotes jointly significant at the 1% level using χ2 test;

Table reports OLS estimates of equation (2') using host-customer-trip level data; TheoreticalWin=theoretical win

generated by the customer on the current trip; LagTheoreticalWin = Cumulative theoretical win generated by the

customer over the 18 months prior to the current trip start-date; Overcomped and ExcessComps are measured at the

host-year level and are defined in the notes to Table 2. TightMonitoring=1 if host is employed at properties 1, 2, or 3

and equals 0 otherwise; Main effects of TightMonitoring controlled for via property fixed effects; The baseline

model used for estimation is:

Comps_{ijpt} = b_1 TheoreticalWin_{ijpt} + b_2 LagTheoreticalWin_{ipt} + \sum_{p=2}^{6} \gamma_p Property_p + \sum_{k=1994}^{2003} \lambda_k Year_k + \mu_j + \varepsilon_{ijpt}    (2')


Table 5

Learning and the Return on Comps for Properties with Tight vs. Loose Monitoring

Dependent Variable: TheoreticalWin

                                   (1)          (2)          (3)

TheoreticalWin_{t-1}               0.062        0.059        0.060
                                   (0.045)      (0.045)      (0.045)

Comps_{t-1}                        1.383***     1.241***     1.370***
                                   (0.152)      (0.164)      (0.167)

Comps_{t-1} x Experience_{t-1}                  0.043**      0.089***
                                                (0.017)      (0.023)

Comps_{t-1} x Experience_{t-1} x TightMonitoring             -0.083***
                                                             (0.025)

Experience_{t-1}                   110.353***   112.739***   91.853***
                                   (18.603)     (18.536)     (19.060)

Year Indicators +++ +++ +++

Number of Host-Years 1,720 1,720 1,720

Number of Unique Hosts 324 324 324

Standard errors in parentheses are adjusted for clustering of observations within hosts over time; * significant at

10%; ** significant at 5%; *** significant at 1% ; +++ denotes jointly significant at the 1% level using χ2 test;

Table reports Arellano-Bond dynamic panel data estimates of equation (3) using host-year level data;

TheoreticalWin=aggregate theoretical win across all customers managed by a host during the year;

Comps=aggregate comps awarded by host to all customers managed by that host during the year;

TightMonitoring=1 if host is employed at properties 1, 2, or 3 and equals 0 otherwise; Experience = number of

years of host experience at property as of the start of the year. The baseline model used for estimation is:

TheoreticalWin_{ijpt} = b_1 TheoreticalWin_{ijpt-1} + b_2 Comps_{ijpt-1} + b_3 Comps_{ijpt-1} \times Experience_{jt-1} + b_4 Experience_{jt} + \sum_{p=2}^{6} \gamma_p Property_p + \sum_{k=1994}^{2003} \lambda_k Year_k + \mu_j + \varepsilon_{ijpt}


Table 6

General and Specific Learning for Properties with Tight vs. Loose Monitoring

Dependent Variable: TheoreticalWin_{t+1}

                                           (1)          (2)          (3)

TheoreticalWin_t                           0.460***     0.458***
                                           (0.0060)     (0.0060)

Comps_t                                    0.254***     0.240***     1.223***
                                           (0.0200)     (0.0210)     (0.0200)

Comps_t x ExpGeneral                       0.0004***    0.0006***    0.002***
                                           (0.0001)     (0.0002)     (0.0002)

Comps_t x ExpSpecific                      -0.003***    -0.003***    -0.007***
                                           (0.0010)     (0.0010)     (0.0010)

Comps_t x ExpGeneral x TightMonitoring                  -0.001***    -0.003***
                                                        (0.0003)     (0.0003)

Comps_t x ExpSpecific x TightMonitoring                 0.0001       0.003**
                                                        (0.0010)     (0.0010)

ExpGeneral                                 4.757***     4.857***     7.132***
                                           (0.2810)     (0.2840)     (0.3390)

ExpSpecific                                27.910***    27.698***    51.718***
                                           (1.5980)     (1.6000)     (2.2840)

Host Fixed Effects +++ +++ +++

Site Fixed Effects +++ +++ +++

Year Fixed Effects +++ +++ +++

Observations 229,861 229,861 229,861

R-squared 0.34 0.34 0.25

Standard errors in parentheses are adjusted for clustering of observations within customers over time; * significant at 10%; **

significant at 5%; *** significant at 1%; +++ denotes jointly significant at the 1% level using χ2 test; Table reports OLS

estimates of equation (4) using customer-host-year level data; TheoreticalWin=customer's total theoretical win during the year;

Comps=total comps awarded by the host to the customer during the year; ExpGeneral=cumulative number of trips managed by

the host up to the start of the year; ExpSpecific=cumulative number of trips for a specific customer managed by the host up to the

start of the year; TightMonitoring=1 if host is employed at properties 1, 2, or 3 and equals 0 otherwise. Note that the estimates

provided in column 3 of Table 6 exclude lagged theoretical win. The reason we include this column is to check whether the

relatively low baseline "return on comps" (i.e., the coefficient on Comps) reported in columns 1 and 2 is an artifact of

aggregating the data at the customer-year level. If comps awarded during the current year are associated with a persistent

increase in theoretical win that arises later in the same year for a given customer (e.g. from repeat trips within the year), then this

portion of the "return on comps" would be obscured by the inclusion of lagged theoretical win (particularly when current and

lagged theoretical win are correlated, as is clearly the case given the coefficients on lagged theoretical win in columns 1 and 2).

The results in column 3 suggest that this is indeed the case, as the baseline "return on comps" rises to approximately $1.22 in the

absence of lagged theoretical win as an additional explanatory variable. The baseline model tested is:

TheoreticalWin_{ijpt} = b_1 TheoreticalWin_{ijpt-1} + b_2 Comps_{ijpt-1} + b_3 Comps_{ijpt-1} \times ExpGeneral_{jt-1} + b_4 Comps_{ijpt-1} \times ExpSpecific_{ijt-1} + b_5 ExpGeneral_{jt} + b_6 ExpSpecific_{ijt} + \sum_{p=2}^{6} \gamma_p Property_p + \sum_{k=1994}^{2003} \lambda_k Year_k + \mu_j + \varepsilon_{ijpt}    (4)

 

 
