Tuesday, September 24, 2013

Aren’t Bad Apples the Cause of Ethical Problems in Organizations?



According to the bad apple theory, people are good or bad and organizations are
powerless to change these folks. This bad apple idea is appealing in part because unethical behavior can then be blamed on a few individuals with poor character. Although it’s unpleasant to fire people, it’s easier for organizations to search for and discard a few bad apples than to search for some organizational problem that caused the apple to rot.

Despite the appeal of the bad apple idea, ‘‘character’’ is a poorly defined concept,
and when people talk about it, they rarely define what they mean. They’re probably
referring to a complex combination of traits that are thought to guide individual behavior in ethical dilemma situations. If character guides ethical conduct, training
shouldn’t make much difference because character is thought to be relatively stable:
it’s difficult to change, persists over time, and guides behavior across different contexts.
Character develops slowly as a result of upbringing and the accumulation of
values that are transmitted by schools, families, friends, and religious organizations.
Therefore, people come to educational institutions or work organizations with an
already defined good or poor character. Good apples will be good and bad apples
will be bad.

In fact, people do have predispositions to behave ethically or unethically. And sociopaths can certainly slip into organizations with the sole intent of helping themselves to the organization’s resources, cheating customers, and feathering their own nests at the expense of others. Famous scoundrels like Bernie Madoff definitely come to mind. Such individuals have little interest in ‘‘doing the right thing,’’ and when this type of individual shows up in your organization, the best thing to do is discard the bad apple and make an example of the incident for those who remain.

But discarding bad apples generally won’t solve an organization’s problem with
unethical behavior. The organization must scrutinize itself to determine if something
rotten inside the organization is spoiling the apples. For example, Enron encouraged
a kind of devil-may-care, unethical culture that is captured in the film, Enron: The
Smartest Guys in the Room. Arthur Andersen’s culture morphed from a focus on the
integrity of audits to a consulting culture that focused almost exclusively on feeding
the bottom line. You’ll learn that most people are not guided by a strict internal moral compass. Rather, they look outside themselves—to their environment—for cues about how to think and behave. This was certainly true in the financial crisis when the mantra became ‘‘everyone is doing it’’ (and making a lot of money besides). At work, managers and the organizational culture transmit many cues about how employees should think and act. For example, reward systems play a huge role by rewarding short-term thinking and profits, as they did in the recent financial crisis. You’ll learn about the importance of these organizational influences and how to harness them to support ethical behavior and avoid unethical behavior.

So, apples often turn bad because they’re spoiled by ‘‘bad barrels’’—bad work
environments that not only condone, but may even expect unethical behavior. Most
employees are not bad folks to begin with. But their behavior can easily turn bad if
they believe that their boss or their organization expects them to behave unethically
or if everyone else appears to be engaging in a particular practice. In this view, an
organization that’s serious about supporting ethical behavior and preventing misconduct
must delve deeply into its own management systems and cultural norms and
practices to search for systemic causes of unethical behavior. Management must take
responsibility for the messages it sends or fails to send about what’s expected. If
ethics problems are rooted in the organization’s culture, discarding a few bad apples
without changing that culture isn’t going to solve the problem. An effective and lasting
solution will rely on management’s systematic attention to all aspects of the organization’s culture and what it is explicitly or implicitly ‘‘teaching’’ organizational
members.

This question about the source of ethical and unethical behavior reflects the
broader ‘‘nature/nurture’’ debate in psychology. Are we more the result of our genes
(nature) or our environments (nurture)? Most studies find that behavior results from
both nature and nurture. So, when it comes to ethical conduct, the answer is not
either/or, but and. Individuals do come to work with predispositions that influence
their behavior, and they should take responsibility for their own actions. But the work environment can also have a large impact.

Those Who Were Supposed to Protect Us Didn’t



One protection against financial calamity was thought to be the rating agencies such
as Standard and Poor’s and Moody’s. They rate the safety or soundness of securities,
including those securitized mortgage products. A credit opinion is an assessment of the likelihood of timely payment of interest and ultimate repayment of principal. But, like everyone else, the rating agencies say they didn’t foresee a decline in housing prices, and consequently they rated the mortgage securities AAA—the highest rating possible, which meant that the rating agencies considered these securities to be highly safe. The agencies are the subject of much criticism for their role in the crisis.
If they had done a better job analyzing the risk (their responsibility), much of the
crisis might have been avoided. But note that these rating agencies are hired and paid
by the companies whose products they rate, thus causing a conflict of interest that
many believe biased their ratings in a positive direction. So, people who thought they
were making responsible investments because they checked the ratings were misled.

Another protection that failed was the network of risk managers and boards of
directors of the financial community. How is it that one 400-person business that was
part of the formerly successful insurance behemoth, AIG, could invest in such a way
that it brought the world’s largest insurance company to its knees? The risk was
underestimated all around by those professionals charged with anticipating such
problems and by the board of directors that didn’t see the problem coming. The U.S.
government (actually taxpayers) ended up bailing out AIG to the tune of $170 billion.
The risk managers and boards of other financial firms such as Citigroup, Merrill
Lynch, Lehman Brothers, Bear Stearns, and Wachovia were similarly blind.

On Wall Street, there were other contributing factors. First, bank CEOs and other
executives were paid huge salaries to keep the price of their firms’ stocks at high
levels. If their institutions lost money, their personal payouts would shrink. So, bank
executives were paid handsomely to bolster short-term profits. The Wall Street traders were similarly compensated—they were paid multimillion-dollar bonuses for
taking outsized risks in the market. What seemed to matter most were the short-term
profits of the firm and the short-term compensation of those making risky decisions.
The traders took risks, the bets were at least temporarily successful, and the bankers
walked off with multimillion-dollar bonuses. It didn’t matter that the risk taking was
foolish and completely irresponsible in the long run. The bonus had already been
paid. Consequently, a short-term mentality took firm root among the nation’s bankers,
CEOs, and boards of directors.

Finally, we can’t examine the financial crisis without questioning the role of
regulatory agencies and legislators. For example, for a decade, investor Harry
Markopolos tried on numerous occasions to spur the Securities and Exchange
Commission to investigate Bernard L. Madoff. The SEC never did uncover the
largest Ponzi scheme in the history of finance. The $65 billion swindle
unraveled only when Madoff admitted the fraud to his sons, who alerted the SEC
and the U.S. attorney’s office in New York in December 2008. Others who are
culpable in the financial crisis are members of the U.S. Congress, who deregulated
the financial industry, the source of some of their largest campaign contributions.
Among other things, they repealed the Glass-Steagall Act, which had been
passed after the U.S. stock market crash in 1929 to protect commercial banking
customers from the aggression and extreme risk taking of investment bank
cultures. The act created separate institutions for commercial and investment
banks, and they stayed separate until the merger of Citicorp and Travelers to
form Citigroup in 1998. The two companies petitioned Congress to eliminate
Glass-Steagall, claiming that it was an old, restrictive law and that today’s markets
were too modern and sophisticated to need such protection. And Congress
listened. Those 1930s congressmen knew that if two banking cultures tried to
exist in the same company—the staid, conservative culture of commercial banking
(our savings and checking accounts) and the razzle-dazzle, high-risk culture
of investment banking—the ‘‘eat what you kill’’ investment bank culture would
win out. Some said that staid old commercial banks turned into ‘‘casinos.’’ But,
interestingly, casinos are highly regulated and are required to keep funds on hand
to pay winners. In the coming months, we expect to learn more about the behavior
that led to this crisis. As we noted earlier, much if not most of it was probably
legal because of the lack of regulation in the mortgage and investment banking
industries. But look at the outcome! If only ethical antennae had been more sensitive,
more people might have questioned products they didn’t understand, spoken out, or refused to participate in practices that were clearly questionable. As
just one tiny example, could anyone have thought it was ethical to sell a product
they called a liar loan, knowing that the customer surely would be unable to repay (even if it was legal to do so)?

Mortgage Originators Peddled ‘‘Liar Loans’’



In the early 2000s, as housing investments increased in popularity, more and more
people got involved. Congress urged the mortgage giants Freddie Mac and Fannie Mae to expand
home ownership to lower-income Americans. Mortgage lenders began to rethink the old rules of financing home ownership. As recently as the late 1990s, potential home
owners not only had to provide solid proof of employment and income to qualify
for a mortgage, but they also had to make a cash down payment of between 5 and
20 percent of the estimated value of the home. But real estate was so hot and returns
on investment were growing so quickly that mortgage lenders decided to loosen those ‘‘old-fashioned’’ credit restrictions. In the early 2000s, the rules for obtaining a mortgage became far less restrictive. Suddenly, because real estate values were rising so
quickly, borrowers didn’t have to put any money down on a house. They could borrow
the entire estimated worth of the house; this is known as 100-percent financing.
Also, borrowers no longer needed to provide proof of employment or income. These
were popularly called ‘‘no doc’’ (no documentation) or ‘‘liar loans’’ because banks
weren’t bothering to verify the ‘‘truth’’ of what borrowers were claiming on their
mortgage applications.
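
To see how dramatically the lending math changed, here is a minimal sketch in Python. The only facts it takes from the passage above are the 5 to 20 percent down-payment requirement and the later 100-percent financing; the $300,000 purchase price and everything else are hypothetical.

# Hypothetical illustration of how the up-front cash requirement changed.
# Only the 5-20 percent down-payment range and the idea of 100-percent
# financing come from the text; the dollar figures are assumptions.

def required_down_payment(home_value, down_payment_rate):
    # Cash the borrower must bring to the closing table.
    return home_value * down_payment_rate

home_value = 300_000   # assumed purchase price

# Late-1990s rules: 5 to 20 percent down, plus documented income and employment.
old_low = required_down_payment(home_value, 0.05)     # $15,000
old_high = required_down_payment(home_value, 0.20)    # $60,000

# Early-2000s "no doc" loans: nothing down, income claims left unverified.
no_doc = required_down_payment(home_value, 0.00)      # $0

print(f"Old rules: ${old_low:,.0f} to ${old_high:,.0f} down, income documented")
print(f"Liar-loan era: ${no_doc:,.0f} down, income taken on faith")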

Banks Securitized the Poison and Spread It Around

At about the same time liar loans were becoming popular, another new practice was
introduced to mortgage markets. Investors in developing countries were looking to
the United States and its seemingly ‘‘safe’’ markets for investment opportunities.
Cash poured into the country from abroad—especially from countries like China
and Russia, which were awash in cash from manufacturing and oil respectively.
Wall Street bankers developed new products to provide investment vehicles for
this new cash. One new product involved the securitization of mortgages. (Note:
structured finance began in 1984, when a large number of GMAC auto receivables
were bundled into a single security by First Boston Corporation, now part of Credit
Suisse.) Here’s how it worked: Instead of your bank keeping your mortgage until it
matured, as had traditionally been the case, your bank would sell your mortgage—
usually to a larger bank that would then combine your mortgage with many others
(reducing the bank’s incentive to be sure you would pay it back). Then the bankers
sold these mortgage-backed securities to investors, which seemed like a great idea
at the time. Real estate was traditionally safe, and ‘‘slicing and dicing’’ mortgages
divided the risk into small pieces with different credit ratings and spread the risk
around. Of course, the reverse was also true, as the bankers learned to their horror.
This method of dividing mortgages into little pieces and spreading them around
could also spread the contagion of poor risk. However, starting in 2002 and for
several years thereafter, people couldn’t imagine housing values falling. So much
money poured into the system, and the demand for these mortgage-backed security
products was so great, that bankers demanded more and more mortgages from
mortgage originators. That situation encouraged the traditional barriers to getting a home mortgage to fall even further. These investment vehicles were also based upon extremely complex mathematical formulas (and outdated historical data) that everyone took on faith and few attempted to understand. It looks like more people should
have followed Warren Buffett’s sage advice not to invest in anything you don’t
comprehend! Add to that toxic mix the relatively new idea of credit-default swaps (CDS). These complex financial instruments were created to mitigate the risk financial
firms took when peddling products like securitized mortgages. CDS are insurance
contracts that protect the holder against an event of default on the part of a
debtor. One need not own the loan or debt instrument to own the protection, and
the amount of capital tied up in trading CDS is very small compared to trading
other debt instruments. That is a big part of why CDS became so popular at both sell-side and buy-side trading desks. The big insurance company, AIG, was a
huge player in this market, and so were the large banks. The firms that were counterparties
to CDS never stepped back from the trading frenzy to imagine what
would happen if both the structured finance market and the real estate bubble burst
(as all bubbles eventually do) at the same time. Both underwriters and investors
would be left holding the bag when the music stopped playing—and the U.S. taxpayer
has had to bail out most of the financially stressed firms to save the entire financial system from collapse. Please note that all of this happened in a part of the market that was virtually unregulated.
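
For readers who want to see these mechanics in miniature, here is a highly simplified sketch in Python of the two ideas just described: pooling mortgages into a security whose investors share the losses, and a credit-default swap that pays its holder when the reference debt defaults. Every name and number is hypothetical, the slicing into separately rated tranches is omitted, and CDS premiums are ignored; this illustrates the concept, not any real instrument.

# A toy sketch of mortgage securitization and a credit-default swap.
# All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Mortgage:
    balance: float          # amount still owed by the borrower
    defaulted: bool = False

def pool_payout(mortgages):
    # Value recovered by investors holding the pooled security; defaulted
    # loans are assumed (unrealistically) to recover nothing.
    return sum(m.balance for m in mortgages if not m.defaulted)

def cds_payout(notional, reference_defaulted):
    # A CDS pays the protection buyer the notional amount if the reference
    # debt defaults; otherwise it pays nothing.
    return notional if reference_defaulted else 0.0

# An originator sells 1,000 small mortgages to a larger bank, which pools them
# and sells slices of the pool to investors.
pool = [Mortgage(balance=200_000) for _ in range(1_000)]
print(f"Face value of the pool: ${pool_payout(pool):,.0f}")      # $200,000,000

# While house prices rise, few loans default and the security looks safe.
for m in pool[:20]:
    m.defaulted = True                                           # 2% defaults
print(f"Recovered at 2% defaults: ${pool_payout(pool):,.0f}")

# When the bubble bursts, defaults cluster and every investor holding a slice
# of the pool shares the loss: the poison has been spread around.
for m in pool[:300]:
    m.defaulted = True                                           # 30% defaults
print(f"Recovered at 30% defaults: ${pool_payout(pool):,.0f}")

# Whoever sold protection on that debt (as AIG did, at enormous scale) now
# owes the buyer the full notional amount.
print(f"CDS payout owed by one protection seller: ${cds_payout(10_000_000, True):,.0f}")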

Real Estate Became the Investment of Choice



Of course, people also want to invest in something safe, and what could be safer than
real estate? There had been relatively few instances of real estate values declining,
and when they did the declines were generally shallow and short-lived. A point of
pride in the United States was the high percentage of Americans who owned their
own homes. A home traditionally had been a very safe investment, and one that was slow to appreciate in value. But suddenly, in the early 2000s, real estate
investing became a real moneymaker. With a backdrop of historically low interest
rates, real estate became such a popular way to invest that demand soon outstripped
supply and prices soared. The value of homes skyrocketed—homes that were selling
for $300,000 in one year sold for $450,000 the next. Prices rose so fast that speculation
grew tremendously. People bought houses with almost no down payment,
remodeled them or waited a few months, and then resold the houses for a quick profit.
A number of popular television programs showed viewers how to ‘‘flip’’ real estate
properties for profit.

Since the cost of borrowing was so low and home equity had grown so quickly,
many consumers borrowed on the equity in their homes and purchased additional real estate, bought a new car, or financed a luxury vacation. For example, suppose someone
purchased a house for $500,000 in 2003. By 2005, the home might have been worth
$800,000. The home owner refinanced the mortgage—borrowing as much as the
entire current worth of the house (because its value could only go up, right?), which
resulted in a $300,000 cash infusion for the home owner. This practice was very
popular, and it laid the groundwork for a huge disaster when the housing values fell
off a cliff in 2008 and 2009. Imagine the home owner who refinanced the home
just described. Imagine that he took the $300,000 and purchased a summer home and
a sports car and paid for his children’s college educations. Suddenly, home values
plummeted and his house lost 30 percent of its value, which was common in markets
such as California, Florida, Nevada, or Arizona, where the real estate bubble
was particularly inflated. After the real estate bubble burst, his house was worth
$560,000. Now suppose he loses his job and needs to sell his house because he can’t
afford the mortgage payments. He can’t get $800,000 for his home, which is what he
owes on his mortgage. His only choices are to work with the mortgage holder (probably a bank) to refinance (unlikely) or to declare bankruptcy and walk away from the house.
This is what a lot of home owners have done, and it is one of the factors at the heart
of the current financial crisis. Lots of folks were in on this bubble mentality, getting
what they could in the short term and not thinking very much about the likelihood (or
inevitability) that the bubble would burst.
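
The arithmetic behind this hypothetical home owner is worth laying out explicitly. The short Python sketch below uses only the figures given above (the $500,000 purchase, the $800,000 peak value, a refinance up to that full value, and the 30 percent price drop) and adds one simplifying assumption: that the original loan balance was roughly the purchase price, as the example implies.

# Worked arithmetic for the hypothetical home owner above. The $500,000
# purchase, $800,000 peak value, full-value refinance, and 30 percent drop
# come from the example; treating the original loan balance as roughly equal
# to the purchase price is an added simplification.

purchase_price = 500_000            # 2003 purchase
peak_value = 800_000                # estimated 2005 value
refinanced_mortgage = peak_value    # owner borrows the home's entire worth

# Cash pulled out by refinancing, if the old balance was about the purchase price.
cash_out = refinanced_mortgage - purchase_price
print(f"Cash infusion from the refinance: ${cash_out:,.0f}")     # $300,000

# The bubble bursts and the house loses 30 percent of its peak value.
post_crash_value = peak_value * (1 - 0.30)
print(f"Post-crash value: ${post_crash_value:,.0f}")             # $560,000

# He still owes the full refinanced amount, so a sale cannot cover the mortgage.
shortfall = refinanced_mortgage - post_crash_value
print(f"Shortfall if he has to sell: ${shortfall:,.0f}")         # $240,000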