Risk Theory and Credibility

Risk theory and credibility can most easily be understood by breaking the topic into its two components: risk theory and credibility theory. Both are studied by actuaries and insurers. Risk theory deals with the financial impact of risk on an insurer's overall insurance portfolio (its product offerings), while credibility theory was originally developed as a means of determining insurance rates based on the risks associated with the groups being insured. In today's complex insurance industry, risk theory and credibility encompass similar concepts, but much of the terminology associated with risk assignment and premium calculation has changed; many of the newer terms are introduced and defined in this essay. Actuaries now have databases of information about policy holders and groups that were only dreamed about when risk theory was in its infancy, and they employ sophisticated computer formulas and models to calculate risk with ever-increasing accuracy. Rate making is the common term for calculating the price of an insurance premium for a particular group and its associated risk. Calculating insurance premiums that are adequate, reasonable, and fair involves many factors, including assessing appropriate risk for the insured, keeping rates competitive, and ensuring the availability of insurance coverage (non-discrimination against groups). Insurers also rely on sophisticated risk models that attempt to predict catastrophic risk without the years of historical data typically used in traditional predictive analysis. In addition, insurance companies are required to hold adequate financial reserves to cover catastrophic losses; they can bolster capital reserves by buying reinsurance or by financing risk through investment vehicles such as catastrophe bonds. Insurers that cannot secure adequate capital to cover large risks may be forced to drop out of some insurance markets, leaving policy holders with fewer or no options for insurance coverage.

Keywords Actuary; Credibility Theory; Experience Rating; Long Tail Insurance Event; Rate Making; Rating Agencies; Reinsurance; Risk Models; Risk Theory; Risk-Reward Relationship; Short Tail Insurance Event; Third Party Model or Product (Risk Models)

Overview

To many individuals and organizations, insurance feels like an unwelcome but necessary expense. In theory, we all understand that insurance is a way to protect ourselves, our families, and our assets from losses caused by unforeseen future events. Because insurance is acquired to mitigate the risk of illness, theft, fire, or other "disasters," the entire idea of insurance carries a rather unpleasant connotation for many.

In fact, insurance is a tool that individuals and businesses use to manage financial risk. In other words, an insurance policy transfers risk from an individual or business to an insurance company in return for a premium payment. Assuming risk for policy holders may at first seem an altruistic act; however, insurance companies are businesses focused on making money and remaining solvent.

Risk Theory

Insurance risk theory has historically dealt with a number of factors. From the insurer's standpoint, risk theory encompasses the following:

  • Analysis of risk for given populations or classes (policy holders).
  • Determination of insurance premium rates.
  • Reinsurance required to mitigate risk for primary insurers.
  • Defining how much capital to reserve to cover potential claims.

Much of the risk that insurance companies assume for policy holders is somewhat predictable, because insurers use predictive models built on data and trends from many previous years. "For short tail insurance risks such as property, motor damage and theft, and for common forms of loss such as those arising from theft, building fire, accident, and small to moderate storm damage, the risk is generally estimated from loss records" (Walker, Gardner, & Johnstone, 2004).

If all risk could be modeled in this manner, insurance companies would have a much easier time predicting how many claims would likely be paid and how much money would need to be kept in reserve to pay claims, meet operating expenses, and ensure profitability. Risk models for catastrophic risks related to natural disasters or acts of terrorism, however, have proven deficient or just plain wrong, and the insurance industry has responded by developing new risk models for these types of events.

"The risk of large losses from catastrophic events such as earthquakes and tropical cyclones is based on complex computer models utilizing geographic information systems (GIS) technology that simulate a large number of events representative of what would be expected over thousands of years. Actuarial techniques involving triangulation have been developed to estimate the long tail insurance risks that are characteristic of casualty insurance" (Walker, Garnder, & Johnstone, 2004).

Actuaries have historically focused on the analysis of risk to determine premiums; technological advances in data storage, data mining, and computing are now transforming risk management in the insurance industry. With sophisticated data mining tools, it is possible to track an enormous number of variables related to policy holders. Insurers can now segment populations with greater and greater precision, thereby assigning risk more accurately.

Insurance companies also manage risk by transferring policy holder risk to other insurance companies and reinsurance companies. Transferring and dispersing risk is grounded in the central limit theorem and the law of large numbers; this mathematical basis facilitates the interaction between risk, capital, and solvency, provided that there is sufficient knowledge about the risk (Walker, Gardner, & Johnstone, 2004).
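A minimal simulation, using an invented claim probability and claim size, illustrates why pooling works: as the number of independent, similar policies grows, the average claim cost per policy settles near its expected value, which is what allows an insurer to price and reserve with confidence.

    # Sketch of risk pooling under the law of large numbers; all figures are
    # hypothetical and chosen only for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    claim_prob, claim_size = 0.05, 20_000    # 5% chance of a $20,000 claim per policy
    expected_cost = claim_prob * claim_size  # $1,000 expected claim cost per policy

    for pool_size in (100, 10_000, 1_000_000):
        # Simulate one year of claims for a pool of independent, similar policies.
        claims = rng.binomial(1, claim_prob, pool_size) * claim_size
        print(f"{pool_size:>9,} policies: average cost per policy = ${claims.mean():,.0f} "
              f"(expected ${expected_cost:,.0f})")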

As Walker, Gardner, and Johnstone explain, “A primary requirement of an insurance company is that it should be sustainable into the future in terms of both returns to the shareholder and solvency. If a company has a target average annual rate of return on equity and a maximum risk of insolvency which it wishes to achieve over, for example, the next 10 years, then these systems can be used to determine the optimum premium rates, investment strategy and reinsurance programs to do this” (2004).

Insurers need to maintain adequate financial capital to meet their obligations, both to pay claims and to remain solvent. In 1992, Hurricane Andrew pushed a record number of insurers that lacked adequate capital reserves into insolvency. Rating agencies, which assess insurers' solvency risk, are changing their capital models and requiring higher capital reserves. Contemporary risks are creating more volatility in the insurance marketplace, and higher volatility requires higher levels of capital in reserve to fund the inevitable occasional large loss. A new generation of risk models is being developed to help insurers better calculate risk, especially catastrophic risk. The impact of new risk models on rating agency recommendations, insurers, and policy holders is discussed later in this essay.

Additionally, the 2011 Risk Premium Project update found that behavioral insurance and new instruments of alternative risk transfer are popular fields of research in nonlife insurance, and that capital allocation and enterprise risk management are also very important research topics. Furthermore, the financial crisis has stimulated new work on corporate governance and insurance (Eling, 2013).

Applications

Determining Premiums

Much of an insurance company's success in remaining solvent and turning a profit can be attributed to calculating the correct premium for insurance coverage. As one might imagine, this is one of the most difficult undertakings for insurance carriers. Insurance companies need to charge enough in premiums to cover not only their claims losses but also operating expenses and overhead. At the same time, insurers face stiff competition in many insurance lines and must keep premiums reasonable to keep customers from fleeing to lower-priced competitors. Ratemaking is defined as the process of calculating a premium that is (All business, 2007):

  • Adequate: Enough to cover losses according to anticipated frequency and severity, thereby safeguarding against the possibility of the insurance company becoming insolvent;
  • Reasonable: The insurance company should not be earning excessive profit; and

  • Not unfairly discriminatory or inequitable.

Rate Setting

Rate setting has historically required basic information about a customer plus some previous history (experience). For example, an experienced driver has a driving record over a number of years that tracks accidents, but a new driver has no driving history to factor into rate setting. In such cases, computer databases make it easy to exclude certain criteria from particular rate-prediction models and to assign risk more accurately to given populations. Rate monitoring is the process of managing risk and rate structures for existing customers. Rate setting has two basic components: the pure premium and the loss ratio ("Using data mining for rate making in the insurance industry," 2003).

  • The pure premium is defined as the portion of the total premium that is just sufficient to offset expected claim costs; it is the rock bottom of any premium structure. At the pure premium, an insurance company could cover all claims from premiums but would neither make a profit nor show a loss: every premium dollar would be spent to cover claims, with nothing left over for operating expenses.
  • The loss ratio is defined as the ratio of claim costs to premium. If the ratio for auto policies is 70%, this simply means that 70% of the premium collected is needed to cover claim costs; the other 30% goes toward operating expenses or is recorded as profit. Lowering the loss ratio is a major goal for insurance companies, and it can be done in one of two ways: premiums can be increased to cover claims, or the mix of policy holders (customers) can be adjusted to shed higher-risk customers, thereby lowering the number of potential claimants. A small worked example follows this list.
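A small worked example, using invented frequency, severity, and premium figures, shows how the two components relate:

    # Worked example of pure premium and loss ratio; all numbers are invented.
    expected_claim_frequency = 0.08    # hypothetical: 8 claims per 100 policies per year
    expected_claim_severity = 5_000.0  # hypothetical: average cost per claim

    # Pure premium: expected claim cost per policy, with nothing for expenses or profit.
    pure_premium = expected_claim_frequency * expected_claim_severity
    print(f"pure premium: ${pure_premium:,.0f} per policy")   # $400

    # Loss ratio: claim costs as a fraction of the premium actually charged.
    premium_charged = 570.0            # hypothetical premium per policy
    loss_ratio = pure_premium / premium_charged
    print(f"loss ratio: {loss_ratio:.0%}")                    # roughly 70%
    # The remaining ~30% of each premium dollar covers expenses or becomes profit.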

Credibility

Another insurance term related to rate making is "credibility." Credibility theory was developed as a means of combining the experience of a group of policy holders with the experience of an individual policy holder in order to better calculate premiums. There are two requirements for each group:

  • The members of the group must be sufficiently similar to one another.
  • The group must be large enough that an adequate statistical analysis of the claims can be completed in order to calculate the premium.

When an insurance company calculates a premium, it first divides policy holders into groups based on demographics and other characteristics. For example, a young man driving a fast car might be considered a high risk, while an older woman driving a small car would be considered a low risk. No group contains completely identical risks, so the insurance company has to combine the experience of the group with the experience of the individual to better calculate the premium, as sketched below.
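A minimal sketch of this blending, using invented loss figures and the classic credibility factor Z = n / (n + k) (the specific form and constants are illustrative assumptions, not taken from the source):

    # Credibility-weighted premium: blend individual experience with group experience.
    # The Z = n / (n + k) form is the classic (Buhlmann-style) credibility factor;
    # all numbers here are invented for illustration.
    group_mean_loss = 900.0         # expected annual loss for the rating group
    individual_mean_loss = 1_400.0  # this policy holder's own average annual loss
    n = 3                           # years of the individual's own experience
    k = 7                           # credibility constant chosen by the actuary

    Z = n / (n + k)                 # weight given to the individual's own experience
    premium = Z * individual_mean_loss + (1 - Z) * group_mean_loss
    print(f"credibility factor Z = {Z:.2f}")          # 0.30
    print(f"credibility premium  = ${premium:,.0f}")  # $1,050
    # With little individual history, Z is small and the group experience dominates;
    # as n grows, the premium moves toward the individual's own experience.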

Rate Making & Credibility Theory

Rate making and credibility theory require that an insurance company divide policy holders into groups by criteria such as age, sex, and marital status, as previously noted. These criteria directly affect what the insurance company charges for a premium; this is no surprise to those of us who buy insurance and are sometimes rewarded for good health or a clean driving record.

Premiums would more accurately represent potential risk if each insurance applicant paid a premium uniquely created to match his or her risk assessment. That, however, is not practical for large insurance companies that must run their business at scale. The compromise is that insurance companies group applicants according to similar expectations of loss and amount of potential risk (All business, 2008).

Traditional risk theory involved complicated mathematical formulas that constrained actuaries to a relatively small number of attributes with which to calculate risk. The law of large numbers allowed insurers to spread risk across large numbers of policy holders; because rates had to remain affordable, poor-risk customers often benefited from the larger risk pool, while lower-risk customers absorbed costs for higher-risk clients ("Using data mining for rate making in the insurance industry," 2003).

Technological Improvements

While it is quite impractical for an insurance provider to calculate a unique rate for every customer, the advent of computer technology and ever-increasing competition are revolutionizing how risk is assigned to groups for rate making; there is a clear movement away from "product oriented" insurance products toward "customer oriented" insurance products. Computers allow insurers to develop more "product variants and customer segments than was possible before" ("Using data mining for rate making in the insurance industry," 2003).

Innovative products are able to target selected customers, and e-business allows for a leaner insurance business with fewer agents required, which lowers overall costs. Data mining of data warehouses offers unprecedented access to customer information that is limited only by ingenuity, availability, and legislation. There is "a movement away from standard policies with broadly defined risk classes, moving toward individualized risk assessment, and almost individualized pricing" ("Using data mining for rate making in the insurance industry," 2003). This individualization of services and policies is supported by sophisticated business applications. Data warehouses provide a repository for efficient data mining. Electronic trails of transactions within the confines of an organization can be mined, including investment, demographic, personal and professional, and credit and travel information. Such vast data collection allows for an analysis of customers and prospects to aid in rate making ("Using data mining for rate making in the insurance industry," 2003).

Additionally, in a paper published in 2013, Kim and Jeon proposed a credibility theory via truncation of loss data, or the trimmed mean. The proposal contained the classic credibility theory as a special case and was based on the idea of varying the trimming threshold level to "investigate the sensitivity of the credibility premium." After showing that the trimmed mean was not a coherent risk measure, the authors investigated some related "asymptotic properties" of the structural parameters in credibility. They argued that the proposed credibility models could "successfully capture the tail risk of the underlying loss model," thereby providing "a better landscape of the overall risk that insurers assume."
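As a purely illustrative sketch (the loss figures and the one-sided trimming of the largest losses used here are assumptions, not details taken from Kim and Jeon), the following shows how a trimmed mean of loss data behaves as the trimming level is varied:

    # Trimmed mean of a small, invented set of losses; varying the trimming level
    # shows how sensitive the resulting average is to tail (very large) losses.
    import numpy as np

    losses = np.array([200, 350, 400, 500, 650, 700, 900, 1_200, 5_000, 40_000.0])

    def trimmed_mean(x, trim_fraction):
        """Average the losses after discarding the largest trim_fraction of them."""
        x = np.sort(x)
        keep = int(np.ceil(len(x) * (1 - trim_fraction)))
        return x[:keep].mean()

    print("ordinary mean:        ", losses.mean())   # pulled up by the 40,000 loss
    for t in (0.1, 0.2):
        print(f"trimmed mean (t={t:.1f}):", trimmed_mean(losses, t))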

Determining how much to charge policy holders is the ultimate task of insurers and requires great agility in today's marketplace. Insurance company customers are more sophisticated and demanding than ever before and may have many choices about where to buy their insurance coverage. Insurers can leverage technology in powerful ways to group policy holders by risk and numerous other criteria; thereby lowering rates for "good customers."

Solvency

Insurance companies are in the business of taking on the risk of others for a price, but they must also ensure that they remain solvent so that they can pay future claims to their current customers. Insurance is a highly regulated industry and must maintain high levels of transparency and accountability to customers. The prospect of insolvency (insurers going out of business) due to the high number of catastrophic claims from 2005's Hurricane Katrina shook the insurance industry to its core; there is simply no assurance that a premium paid in full by a policy holder will result in a claim paid by an insurer. Some insurers simply did not have the financial capital reserves in place to cover such huge losses. In many cases, risk models that treated a hurricane such as Katrina as a 1-in-100-year storm are blamed for not adequately advising insurance carriers of how much capital would be required to cover such losses. The changing nature of risk models is discussed in the Issues section of this essay, along with the new capital requirements being instituted by rating agencies to protect consumers and the solvency of insurers.

Issues

Risk Modeling

The risk modeling industry got its start after Hurricane Andrew in 1992, when insurers realized that they did not have the in-house expertise to create the sophisticated models needed to more accurately model catastrophic events. Companies purchase the software models and apply them to customer databases to see where they are over-exposed. Scenarios are run to determine whether current rates (premiums) are adequate to cover potential risks. One such scenario uses claim data from 2004-2005 to model "post-event loss amplification"; the models take into account "vulnerabilities" that earlier models did not consider ("Insurance risk models rise with elevated storm frequency, severity," 2006).
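At a very high level, such a scenario run pairs a simulated event set with the insurer's exposure data and compares the resulting losses with the premium collected. The sketch below uses an entirely invented event frequency, damage distribution, and exposure; commercial vendor models are vastly more detailed, but the structure of the calculation is similar.

    # Toy catastrophe scenario run: simulate many years of events against a single
    # block of exposure and compare losses with premium. All inputs are invented.
    import numpy as np

    rng = np.random.default_rng(42)
    total_insured_value = 2_000_000_000.0  # hypothetical exposure in one region
    annual_premium = 30_000_000.0          # hypothetical premium for that exposure

    n_years = 10_000
    losses = np.zeros(n_years)
    for year in range(n_years):
        n_events = rng.poisson(0.2)            # about one major event every five years
        for _ in range(n_events):
            damage_ratio = rng.beta(2, 50)     # fraction of insured value destroyed
            losses[year] += damage_ratio * total_insured_value

    print(f"mean annual loss:            ${losses.mean():,.0f}")
    print(f"99th percentile annual loss: ${np.quantile(losses, 0.99):,.0f}")
    print(f"years where losses exceed premium: {(losses > annual_premium).mean():.1%}")
    # Output like this helps an insurer judge whether current rates and capital
    # reserves are adequate for the exposure it has written.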

Other variables that have been incorporated into risk models since the late 1990s include (Babcock, 2006):

  • Tracking the consistency of building code enforcement.
  • Increases in recovery costs due to increased demand for labor and materials.
  • Lost profits due to possible extended periods of business interruption or ruin.
  • Tracking problems with building materials that failed in certain weather situations to avoid their future use.

According to a 2006 interview with a number of insurance industry executives, one of the most pressing issues for insurance companies at the time was the review of risk models. The insurance and reinsurance industries rely on third-party developers for many of their current risk models. Insurers use software models to predict damage from catastrophic events and the potential liability for insurance payouts (Babcock, 2005). Since 2005, risk models have undergone significant modification.

"Risk models were slammed for not preparing insurers and reinsurers for the severity of Hurricane Katrina in 2005. This prompted an upgrade of the main third-party models in May of 2006, which the industry has had to adapt to and learn from extremely quickly to ensure the changes could be reflected in the various renewals throughout the year" ("High-level thinking," 2006).

One company that develops risk models is Risk Management Solutions (RMS). RMS provides products and services to more than 400 insurers, reinsurers, and financial institutions. The trend toward using third-party risk models will continue as models become increasingly complex. For example, RMS teamed with experts in the field of hurricane climatology to develop methodologies for modeling near-term hurricane activity. Previous models had been based on long-term historical averages that were proving unreliable in predicting the probability of land-falling hurricanes. The 2004 and 2005 U.S. hurricane seasons prompted a great deal of movement in risk modeling ("Insurance risk models rise with elevated storm frequency, severity," 2006).

There is no doubt that third-party developers of risk modeling software are here to stay and that models will only continue to improve. Insurance executives in a 2006 survey were quick to point out, however, that third-party risk models are only part of the solution: risk models are no substitute for common sense, and insurers need to focus on gathering quality data (exposure information) to feed into the modeling scenarios. In the words of one executive, "put garbage in and you'll get garbage out" ("High-level thinking," 2006).

Rating Agencies

Rating agencies assess insurance companies' solvency, financial strength, and ability to pay claims in the future. Since many policy holders do not expect to file claims for many years, it is important to assess the long-term viability of an insurance company; policy holders paying premiums today want assurance that the insurance company will still be solvent when it comes time to collect on a policy. Five major rating agencies are watched closely in the U.S. insurance markets: A.M. Best, Standard & Poor's, Fitch Ratings, Moody's Investors Service, and Weiss Ratings. A fall 2006 Guy Carpenter report, Rating Agencies Update, summarized developments in the major rating agencies' view of catastrophic risk and economic capital models, including how the agencies changed their approach to evaluating insurance companies after Hurricane Katrina. According to the report, A.M. Best has stated that natural and human-made catastrophes are the number-one threat to a company's financial strength. The major changes focus on rating agencies' evaluation of a company's capital adequacy and risk management processes and controls ("Rating agencies update," 2006).

Mark Hofmann (2012) reported that an insurer's enterprise risk management (ERM) process is considered when rating agencies determine what rating to assign the company. Rating-agency analysts consider "how insurers measure risks, how they approach emerging risks, and how they model risk as part of their ERM assessment." But an insurer's ERM process is just one of a number of factors rating agencies take into account when examining property/casualty insurers, Hofmann wrote. For example, at Standard & Poor's, ERM is one of eight rating components. The others, as of September 2012, were financial stability, capital, liquidity, investments, operating performance, competitive position, and management and corporate strategy.

Impact on Insurance Companies

Rating agency methods and new risk models have had a significant impact on the amount of capital or reinsurance protection insurers need. Insurance companies have responded to the new requirements in one of the following three ways ("Rating agencies update," 2006):

  • Insurance companies have reduced exposure (dropped coverage and clients).
  • Insurance companies have purchased more reinsurance coverage (spread their risk to other insurers).
  • Insurance companies are accessing more non-traditional capital (such as catastrophe bonds or sidecars).

A survey of insurance executives reveals that rating agency changes will pose major challenges to many insurers. Some of the comments are as follows ("High-level thinking," 2006):

  • Rating agencies are over-reacting and talking catastrophic reinsurance products into a death spiral.
  • Rating agencies are becoming risk averse and overly sensitive about volatility.
  • Rating agencies may be putting too much emphasis on the current catastrophic risk models which are considered flawed.
  • Pressure from rating agencies is exacerbating a stressed market.
  • Insurers are adding increased capital charges to rates and passing along costs to customers.
  • Management of catastrophic risk is of fundamental importance to a company's well being.

Criticisms of Risk Models

New risk models are improving all the time but are still suspect, as Toby Esser, CEO of Cooper Gay, points out: "It would appear that the rating agencies are making it up as they go along. Hurricanes Rita and Katrina were both deemed by the rating models to be a one-in-100-year event. They happened within three weeks of each other. The size of these losses also exceeded the model's anticipated realistic disaster scenario for Gulf of Mexico windstorms. Can the market really become 100% dependent on their output if the losses keep beating the predictions?" He continues, "The over-reliance on rating models by the rating agencies to inform their credit rating allocations is having a knock-on effect throughout the market. Investors are relying on the output to ensure that they are supporting entities that will give them the maximum return. Similarly, reinsurers are reallocating capital internally to focus on risks that offer greatest margins over modeled return periods" ("High-level thinking," 2006).

Rating agencies are relying heavily on new risk models to help determine capital reserves for insurance companies. It is clear that many in the insurance industry believe that rating agencies may be relying too heavily on risk models, but because of the lack of long term historical data about catastrophic risk and its costs, the risk models may be one of the best predictive tools that rating agencies have at their disposal.

Rating agencies are required to assess the viability of insurance companies in the event of large economic payouts due to catastrophic events, and the updated requirements are proving to be a challenge. Many insurance companies have simply dropped coverage in high-risk areas such as the Gulf Coast of the U.S. Highly publicized stories sometimes paint insurers as villains for not insuring homeowners and businesses, but many insurers cannot garner the required capital to cover potential losses of great magnitude. Still other insurers are able to secure capital through selling risk to other insurance companies or convincing investors to finance their risk with the promise of high returns if no large losses are sustained.

Risk Management Solutions updated its models in July 2013 to reflect fresh intelligence on hurricane behavior. The RMS Version 13 hurricane model updates include research and data from two main areas: medium-term rates forecasts and storm-surge-coverage leakage assumptions. The medium-term rates forecast is a "probabilistic forecast of the average number of hurricane landfalls per year through 2018, taking into account changing cycles of hurricane activity." RMS's medium-term rates forecast for 2013 through 2017 is lower than the forecast for 2012 through 2016 but still higher than historical averages for hurricane activity ("RMS launches hurricane model updates," 2013).

Terms & Concepts

Actuary: A business professional who deals with the financial impact of risk and uncertainty.

Credibility Theory: A branch of actuarial science that combines individual risk experience with group risk experience to calculate expected loss and an appropriate insurance premium.

Experience Rating: A measurement used by insurance companies, based on an employer's claims history, to determine the likelihood of future claims and the appropriate premium.

Long Tail Insurance Event: Insurance claims that may be incurred or sustained over a long period of time. These types of claims are likely associated with large catastrophic events.

Third Party Model or Product (Risk Models): Risk-modeling software or services developed by an outside vendor (such as a catastrophe-modeling firm) rather than built in-house by the insurer or reinsurer that uses it.

Rating Agencies: Organizations that provide credit and financial-strength ratings that aid insurance regulators and others in determining the strength of the reserves held by insurance companies.

Reinsurance: The practice of insurance companies selling their risk to other insurance companies to spread out their risk in an attempt to limit their vulnerability to loss.

Risk Theory: A field of study used by actuaries and insurers to assess the financial impact of risk on a carrier's portfolio of insurance policies.

Risk-Reward Relationship: The relationship between the risk in an insurer's portfolio of products and the potential profitability that can be gained by selling those products.

Short Tail Insurance Event: An insurance claim that is likely to be settled in a single payment closely tied to the timing of a specific event, such as an auto accident or reimbursement for a theft.

Bibliography

Babcock, C. (2005). A new model for disasters. Information Week, (1059), 47-48.

Cummins, I. (1991). Statistical and financial models of insurance pricing and the insurance firm. Journal of Risk & Insurance, 58, 261-302. Retrieved November 15, 2007, from EBSCO Online Database Business Source Complete. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=9607251392&site=bsi-live

Eling, M. (2013). Recent research developments affecting nonlife insurance - the CAS risk premium project 2011 update. Risk Management & Insurance Review, 16, 35-46. Retrieved November 15, 2013, from EBSCO Online Database Business Source Complete. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=86693633&site=ehost-live

Glick, R. (2007). Congress targets McCarran-Ferguson. Insurance Advocate, 118, 11. Retrieved November 19, 2007, from EBSCO Online Database Business Source Premier. http://search.ebscohost.com/login.aspx?direct=true&db=buh&AN=24447599&site=ehost-live

Hofmann, M.A. (2012). Rating agencies assess ERM processes when scrutinizing insurers. Business Insurance, 46, 14. Retrieved November 15, 2013, from EBSCO Online Database Business Source Complete. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=79827270&site=ehost-live

Kim, J.T., & Jeon, Y. (2013). Credibility theory based on trimming. Insurance: Mathematics & Economics, 53, 36-47. Retrieved November 15, 2013, from EBSCO Online Database Business Source Complete. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=89139270&site=ehost-live

Lee, E. (2006). Insurance industry basics: Combined ratio. The Motley Fool. Retrieved November 16, 2007, from http://www.fool.com/personal-finance/insurance/2006/12/12/insurance-industry-basics-combined-ratio.aspx

High-level thinking. (2006). Reactions, 26, 34-44. Retrieved November 19, 2007, from EBSCO Online Database Business Source Premier. http://search.ebscohost.com/login.aspx?direct=true&db=buh&AN=22485278&site=ehost-live

Insurance risk models rise with elevated storm frequency, severity. (2006, April 13). Environment News Service. Retrieved November 30, 2007, from: http://www.ens-newswire.com/ens/apr2006/2006-04-13-05.asp

O'Donnell, A. (2007, August 1). Actuaries adopt new risk-modeling technologies. Business Innovation. Retrieved November 27, 2007, from http://www.businessinnovation.cmp.com/bizagility/re%5fbizagility%5f08202007.jhtml

RMS launches hurricane model updates. (2013). Reactions, 56. Retrieved November 15, 2013, from EBSCO Online Database Business Source Complete. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=87355654&site=ehost-live

Rating agency update. (2006, November). Guy Carpenter. Retrieved November 30, 2007, from http://www.guycarp.com/portal/extranet/pdf/GCPub/Rating%5fAgency%5fUpdate%5f2006.pdf

Using data mining for rate making in the insurance industry. (2003). SAS Institute. Retrieved November 16, 2007, from http://jobfunctions.bnet.com/whitepaper.aspx?docid=104140

Walker, G., Gardner, W., & Johnstone, D. (2004). The future of insurance risk. Lawyers Weekly Online. Retrieved November 16, 2007, from http://www.lawyersweekly.com.au/articles/The-future-of-insurance-risk-modelling%5fz65119.htm

Suggested Reading

Fernández-Durán, J., & Gregorio-Dominguez, M. (2004). Relative entropy credibility theory. AIP Conference Proceedings, 735, 60-67. Retrieved November 19, 2007, from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=15143010&site=ehost-live

Langdon, D. & Bell, K. (2007, February). Integrating catastrophe modeling into underwriting. Emphasis Magazine. Retrieved November 30, 2007, from http://www.towersperrin.com/tp/jsp/tillinghast%5fwebcache%5fhtml.jsp?webc=Tillinghast/United%5fStates/News/Emphasis/2007/02/emp%5fq2%5fart4.htm

Luo, Y., Young, V., & Frees, E. (2004). Credibility ratemaking using collateral information. Scandinavian Actuarial Journal, 2004, 448-461. Retrieved November 19, 2007, from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=15314052&site=ehost-live

McGiffin, G. (2007). Are insurers flying blind? National Underwriter / Property & Casualty Risk & Benefits Management, 111, 12-15. Retrieved November 19, 2007, from EBSCO Online Database Business Source Premier. http://search.ebscohost.com/login.aspx?direct=true&db=buh&AN=23738531&site=ehost-live

Essay by Carolyn Sprague, MLS

Carolyn Sprague holds a BA degree from the University of New Hampshire and a Masters Degree in Library Science from Simmons College. Carolyn gained valuable business experience as owner of her own restaurant, which she operated for 10 years. Since earning her graduate degree, Carolyn has worked in numerous library/information settings within the academic, corporate and consulting worlds. Her operational experience as a manager at a global high tech firm and more recent work as a web content researcher have afforded Carolyn insights into many aspects of today's challenging and fast-changing business climate.