More than a decade after the failures of risk management at Barings PLC, Metallgesellschaft and Orange County, risk management has evolved considerably, but there is still a long way to go. From Bernoulli’s St. Petersburg experiment to Markowitz’s portfolio theory and Fama and French’s three-factor model, risk measurement has developed steadily; the latest trend in risk management is Value-at-Risk.
Most of the existing research focuses on a single area of risk management, so there is a need to establish the missing links between these risk management techniques in application. This thesis attempts to do just that. It evaluates the performance of selected portfolios using measures such as the Treynor ratio, the Sharpe ratio, Jensen’s alpha, the Fama-French three-factor model and Value-at-Risk. The study examines the period from 2003 to 2010 using the companies listed on the Karachi Stock Exchange (KSE); the benchmark used in the analysis is the KSE-100 index.
The purpose of the study is to determine which portfolios are better investments in terms of risk and return. In addition, applying a variety of methods side by side makes the drawbacks and pitfalls of each method more apparent.
The main objectives are, first, to identify risk assessment techniques and, second, to classify portfolios according to risk and return. Finally, a comparison of results will test the consistency and validity of the research.
The Oxford dictionary defines the word risk as “hazard; chance of bad consequences, loss, etc.; exposure to mischance; to expose oneself, or be exposed to loss.” Traditionally, risk is viewed negatively. The Chinese word for crisis, 危机, gives a more complete picture of what risk represents:
Of its two symbols, the former (危) represents danger, while the latter (机) signifies opportunity. This shows that it is important to manage risk in good times in order to plan for possible crises, and in bad times so that one can look for opportunities. Above all, risk must be dealt with calmly. “Risk management is not just about minimizing exposure to the wrong risks but should also incorporate increasing exposure to good risks.” (Damodaran, Risk Management: A Corporate Governance Manual n.d.)
Since risk implies uncertainty, risk assessment is largely concerned with uncertainty in connection with probability. In essence, risk assessment is a method of examining risks so that they can be controlled, reduced or evaded. In order to lend meaning to any form of risk assessment, the results must be compared against a benchmark or similar assessment (Wilson and Crouch 1987).
Risk vs. Probability: Probability involves only the likelihood of an event occurring, whereas risk encompasses both the likelihood and the consequences of the event. Making probability-centric decisions about risk leads to ignoring new or unusual risks that may not be numerically quantifiable.
Risk vs. Threat: A threat may be defined as “an indication of coming evil” or a low probability event with very large negative consequences whose probability is difficult to determine. A risk is a higher probability event whose probability and consequences can be determined.
All outcomes vs. Negative outcomes: A focus on negative outcomes relates to downside risk, but variability in risk includes both the good and the bad; that is, all outcomes should be taken into account when determining risk. Focusing risk assessment solely on negative outcomes tends to narrow risk management to simple hedging. (Damodaran, Risk Management: A Corporate Governance Manual n.d.)
Financial risk management has been defined by the Basel Committee (2001) as a sequence of four processes:
The identification of events into one or more broad categories of market, credit, operational, and “other” risks and into specific subcategories;
The assessment of risks using data and a risk model;
The monitoring and reporting of the risk assessments on a timely basis;
The control of these risks by senior management. (Alexander 2005)
Necessity is the mother of invention. Nowadays, a variety of methods are used to measure risk, but the development of these measures was fueled by the risk aversion of investors.
The principal development in the area of modern risk management techniques is the idea of expected loss and unexpected loss. Expected Loss (EL) may be defined as the recognition of certain risks that can be statistically gauged, while Unexpected Loss (UL) is an estimate of the potential variability of EL from one year to the next, including the potential for stress loss events. The quantified approximation of EL leads to the estimation of UL (International Financial Risk Institute n.d.).
St. Petersburg Paradox – In the 1700s, Nicholas Bernoulli proposed a simple game of chance that measured an individual’s willingness to take risk. Through this game he identified two important aspects of human behavior regarding risk:
Certain individuals were willing to pay more than others, indicating differing levels of risk aversion across individuals.
The utility from gaining an additional dollar decreases with wealth – the marginal utility of an additional dollar is greater for the poor than for the wealthy: “One thousand ducats is more significant to a pauper than to a rich man though both gain the same amount.”
Daniel Bernoulli (Nicholas Bernoulli’s cousin) resolved the paradox of risk aversion and marginal utility with the following distinction linking price and utility:
“…the value of an item must not be based upon its price, but rather on the utility it yields. The price of the item is dependent only on the thing itself and is equal for everyone; the utility, however, is dependent on the particular circumstances of the person making the estimate.” (Damodaran, Strategic Risk Taking: A Framework for Risk Management 2008)
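To make the paradox concrete, the following minimal sketch (in Python, with an illustrative number of trials) simulates the game Bernoulli described: the sample mean payoff drifts upward without bound because the expected payoff diverges, while the expected logarithmic utility settles at a certainty equivalent of only four ducats.

```python
import math
import random

def play_game(rng: random.Random) -> float:
    """One round of the St. Petersburg game: the pot starts at 2 ducats
    and doubles on every consecutive head; the payoff is the pot when
    the first tail appears."""
    pot = 2.0
    while rng.random() < 0.5:  # heads with probability 1/2: keep flipping
        pot *= 2.0
    return pot

rng = random.Random(42)
trials = 100_000
payoffs = [play_game(rng) for _ in range(trials)]

# The expected payoff diverges (each round contributes 2^-k * 2^k = 1),
# so the sample mean keeps growing as the number of trials increases...
print(f"sample mean payoff: {sum(payoffs) / trials:.2f}")

# ...yet under Bernoulli's logarithmic utility the expected utility is
# finite: the sum over k of 2^-k * ln(2^k) equals 2 ln 2, a certainty
# equivalent of only 4 ducats, which is why willingness to pay is modest.
expected_log_utility = sum((0.5 ** k) * math.log(2.0 ** k) for k in range(1, 60))
print(f"certainty equivalent under log utility: {math.exp(expected_log_utility):.2f}")
```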
As organizations find themselves exposed to more complex risks, it is worth confronting concerns about uncertainty more directly in analysis. That is what probabilistic approaches do. While they will not make the uncertainty go away or even reduce it, they let firms see the effect of uncertainty on their value and decisions and modify their behavior accordingly. (Damodaran, Risk Management: A Corporate Governance Manual n.d.)
Treynor Ratio – The Treynor ratio was developed by Jack Treynor and measures the return a portfolio earns in excess of a riskless investment per unit of market risk. It is a risk-adjusted measure of return based on systematic risk, using beta as the measure of volatility. Also known as the “reward-to-volatility ratio”, it is calculated as:
TP = (rp – rf) / βp
Sharpe Ratio – The Sharpe ratio was derived in 1966 by William F. Sharpe and measures the risk-adjusted performance of a portfolio. The ratio makes the performance of one portfolio comparable to that of another by adjusting for risk. It is calculated by subtracting the risk-free rate from the return of a portfolio and dividing the result by the standard deviation of the portfolio returns. The ratio shows whether a portfolio’s returns are due to smart investment decisions or to exposure to excessive risk, and thus identifies which portfolios are better investments because they deliver higher returns without taking on too much risk. The general rule of thumb is that the higher the Sharpe ratio, the better the investment from a risk/return perspective. A negative Sharpe ratio indicates that a risk-free asset would perform better than the portfolio being analyzed. The Sharpe ratio formula is:
SP = (rp – rf) / σp
Jensen’s Alpha – Jensen’s alpha is a risk-adjusted performance measure that represents the average return on a portfolio over and above that predicted by the capital asset pricing model (CAPM), given the portfolio’s beta and the average market return.
The basic idea is that to analyze the performance of an investment manager you must look not only at the overall return of a portfolio, but also at the risk of that portfolio. For example, if there are two mutual funds that both have a 12% return, a rational investor will want the fund that is less risky. Jensen’s measure is one of the ways to help determine if a portfolio is earning the proper return for its level of risk. If the value is positive, then the portfolio is earning excess returns. In other words, a positive value for Jensen’s alpha means a fund manager has “beat the market” with his or her stock picking skills. (Investopedia n.d.)
αp = rp – [rf + βp (rm – rf)]
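Since all three measures above are computed from the same inputs, a single sketch suffices. The following minimal Python example (the return series and risk-free rate are hypothetical placeholders, not data from this study) computes the Treynor ratio, Sharpe ratio and Jensen’s alpha exactly as defined by the three formulas:

```python
import numpy as np

def performance_measures(rp: np.ndarray, rm: np.ndarray, rf: float):
    """Treynor ratio, Sharpe ratio and Jensen's alpha for one portfolio.

    rp, rm -- arrays of periodic portfolio and market (benchmark) returns
    rf     -- periodic risk-free rate, e.g. a T-bill yield per period
    """
    excess_p = rp - rf
    excess_m = rm - rf
    # Beta: covariance of portfolio with market divided by market variance.
    beta = np.cov(rp, rm, ddof=1)[0, 1] / np.var(rm, ddof=1)
    treynor = excess_p.mean() / beta                   # (rp - rf) / beta
    sharpe = excess_p.mean() / np.std(rp, ddof=1)      # (rp - rf) / sigma
    jensen = excess_p.mean() - beta * excess_m.mean()  # rp - [rf + beta(rm - rf)]
    return treynor, sharpe, jensen

# Hypothetical return series, for illustration only.
rng = np.random.default_rng(0)
rm = rng.normal(0.0005, 0.012, 2000)                   # "market" returns
rp = 0.0002 + 1.1 * rm + rng.normal(0.0, 0.004, 2000)  # a beta ~1.1 portfolio
print(performance_measures(rp, rm, rf=0.0003))
```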
Fama and French – Eugene F. Fama and Kenneth R. French attempted to improve measures of market returns. Through their research, they found that value stocks outperform growth stocks and that small cap stocks outperform large cap stocks. Using these two findings, they developed a three-factor model that expands on the Capital Asset Pricing Model (CAPM) to adjust for this outperformance tendency (Investopedia n.d.). The Fama–French model is only one possible multifactor model that could be applied to explain excess or absolute stock returns (Reilly and Brown n.d.; Wolfram n.d.). When using the three-factor model, it is important to understand that portfolios with a large number of value stocks or small cap stocks will show lower alphas than under CAPM, because the F&F model adjusts downward for small cap and value outperformance.
The biggest debate regarding the outperformance of certain stocks concerns market efficiency versus market inefficiency. Proponents of the model argue that the outperformance is explained by the excess risk that value and small cap stocks face as a result of their higher cost of capital and greater business risk. Opponents argue that the outperformance is explained by market participants mispricing these companies, which provides the excess return in the long run as the value adjusts.
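A minimal sketch of how the three-factor regression can be estimated by ordinary least squares follows; the factor series (market excess return, SMB, HML) and portfolio returns are hypothetical placeholders, not data from this study:

```python
import numpy as np

def fama_french_3f(excess_rp, mkt_rf, smb, hml):
    """OLS estimate of the three-factor regression
    r_p - r_f = alpha + b*(r_m - r_f) + s*SMB + h*HML + e
    Returns the coefficients [alpha, b, s, h]."""
    X = np.column_stack([np.ones_like(mkt_rf), mkt_rf, smb, hml])
    coef, *_ = np.linalg.lstsq(X, excess_rp, rcond=None)
    return coef

# Hypothetical factor and return series, for illustration only.
rng = np.random.default_rng(1)
n = 1000
mkt_rf = rng.normal(0.0004, 0.010, n)   # market excess return
smb = rng.normal(0.0001, 0.005, n)      # size factor (small minus big)
hml = rng.normal(0.0002, 0.005, n)      # value factor (high minus low)
excess_rp = 0.0001 + 1.0*mkt_rf + 0.5*smb + 0.3*hml + rng.normal(0, 0.003, n)

alpha, b, s, h = fama_french_3f(excess_rp, mkt_rf, smb, hml)
print(f"alpha={alpha:.5f}  beta={b:.2f}  SMB loading={s:.2f}  HML loading={h:.2f}")
```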
Value at Risk (VaR) is currently one of the most popular measures of volatility, where volatility is defined as movement in stock returns without distinguishing between positive and negative movements. VaR measures the potential loss in value of a risky portfolio for a given confidence interval over a defined period of time. It attempts to answer questions such as, “What is the most I can expect to lose in rupees over the next year with a 95% level of confidence?” Rather than stating a single statistic with absolute certainty, it makes a probabilistic estimate of the worst-case scenario under the defined parameters.
In VaR, it is important to keep the following points in mind:
VaR requires defining the probability distributions of individual risks, the correlations across these risks and the effect of such risks on value.
VaR focuses on potential losses and downside risk.
There are three key elements of VaR – a specified level of loss in value, a fixed time period over which risk is assessed and a confidence interval. The VaR can be specified for an individual asset, a portfolio of assets or for an entire firm.
It is possible to specify the risks being measured in VaR according to the portfolio or project being assessed. For example, the VaR for a large investment project can be defined in terms of competitive and firm-specific risks.
In the last decade, a variety of methods have been developed for calculating Value-at-Risk. There are three basic approaches: two involve running hypothetical portfolios through Historical Simulation or Monte Carlo Simulation, while the third, the Variance-Covariance approach, relies on analytical assumptions.
The variance-covariance method derives a probability distribution of potential values. It requires two inputs, the average portfolio return and the standard deviation, and it assumes that stock returns are normally distributed. When actual daily returns are plotted against a normal curve, the worst-case scenario (at either the 5% or the 1% level) can be identified to determine the maximum possible loss. This approach has the benefit of simplicity but carries the difficulties associated with deriving probability distributions. In addition, the focus on standardized returns implies that we should focus on the size of a return relative to the standard deviation.
Historical simulation is a popular and relatively easy method of implementing VaR. In this approach, the VaR for a portfolio is estimated by creating a hypothetical time series of returns on that portfolio, obtained by running the portfolio through actual historical data and computing the changes that would have occurred in each period. There are no underlying assumptions of normally distributed returns, and the estimate is based on actual historical returns. While this approach saves a lot of trouble, it assumes that past returns are representative of future returns, which is a difficult assumption to sustain.
Monte Carlo simulation refers to any method that randomly generates trials. This method involves developing a model for future stock prices and running multiple hypothetical trials through the model. On its own, it tells us nothing about the underlying methodology, which is why it is essential to use it alongside a second VaR estimate. In addition, subjective judgments can be brought in to modify the assumed distributions.
While Value at Risk has acquired a strong following in the risk management community, there is reason to be skeptical of both its accuracy as a risk management tool and its use in decision making. Researchers have taken issue with VaR on many dimensions, and the criticism can be categorized along those dimensions:
VaR can be wrong – There is no precise measure of VaR, and each estimation method comes with its own drawbacks.
Narrow focus – The simplicity of VaR stems from its narrow definition of risk, which can lull managers into a false sense of contentment and into decisions that may not serve their best interests.
Suboptimal decisions – It remains unclear whether VaR leads to more rational decisions, as it can lead to overexposure to risk.
Hallmarks of Scientific Research
Purposiveness
Rigor
Testability
Replicability
Precision and Confidence
Objectivity
Generalizability
Parsimony
These hallmarks of scientific research contribute to the soundness of the information derived from the research conducted. This research has a very specific aim, namely to establish the link between risk and return from various aspects, so that one may reinforce the other, maintaining objectivity in the results. The data collected for this research has been carefully and scrupulously filtered, keeping the highest degree of exactitude possible to maintain rigor. By utilizing more than one method of evaluating risk and return, the information obtained maintains testability. The decision to collect daily data for eight years contributes to precision, while the use of a 95% confidence interval adds to confidence in the research. The use of all stocks in building portfolios, as opposed to selecting a sample, contributes to generalizability.
The valuation of a portfolio, regardless of the method chosen, always serves the same purpose: to determine which portfolio minimizes risk or maximizes returns. The main purpose of this paper is to determine which portfolios, constructed from KSE stocks, would be most risky and which would be least risky.
From an investor’s perspective, risk may be defined as danger of loss or, according to finance theory, as the “dispersion of unexpected outcomes due to movements in financial variables” (P. Jorion, The Need for Risk Management 2000). Recent unprecedented changes in financial markets have made it essential for firms to develop better risk-assessment techniques; that is, the increase in the volatility of financial markets is the driving force behind the increased interest in risk management. Modern portfolio theory states that portfolio returns are positively correlated with risk as measured by beta and sigma. However, portfolio performance ranked by the Treynor and Sharpe ratios frequently lacks consistency over time. Bauman and Miller therefore stress the importance of evaluating portfolio performance over a complete stock market cycle to test the consistency of performance rankings over time; performance should be evaluated over complete cycles because the volatility of annualized stock market returns diminishes as the investment horizon increases (Bauman and Miller 1994).
In a study of performance persistence among mutual fund investors, it was found that while the common trend in research has been toward shorter selection and holding periods, longer-term designs prove more effective in identifying winning portfolios ex ante. In addition, past performance of portfolios is not a concrete indicator of future earnings due to constantly changing circumstances. At best, the chances of selecting a profitable portfolio that performs better than average can be increased to some extent when selection is based on past performance, though this depends on the time frame of the past study. It is also important to avoid selecting underperforming funds that have high expense ratios (Patari 2009). The more constrained a portfolio is in terms of its mix of risky assets, the more diversified it will be. More diversified portfolios lead to less realized risk as well as less realized return, but the reduction in return is not worth the reduction in risk (Grauer and Shen 2000).
In terms of portfolio evaluation, there are numerous methods which can be adopted to determine which portfolios are better or worse. Probabilistic approaches include scenario analysis, decision trees and simulations; risk adjustment approaches include sector comparisons, market capitalization or size, ratio-based comparisons and statistical controls (Damodaran, Risk Management: A Corporate Governance Manual n.d.). When analyzing portfolios, it is important to keep in mind that, under quadratic utility, increases in wealth always increase risk aversion, which can induce negative correlation between beta and return (Grinblatt and Titman 1990). Covariances are usually the direct parameter inputs for optimal portfolio choice, while betas are primarily useful in understanding assets’ systematic risks with respect to the market or factors in general (Hong, Tu and Zhou 2007). Hence, identifying the level of both these risks in a portfolio is essential for a thorough analysis of risk exposure.
Benchmarking is one of the most common methods for portfolio performance evaluation. It involves comparing a portfolio against a broader market index. If the returns in the portfolio exceed those of the benchmark index, measured during identical time periods, then the portfolio is said to have beaten the benchmark index. The only disadvantage of this assessment is the possibility of mismatch between the levels of risk of the investment portfolio and those of the benchmark index portfolio which could lead to invalid results (Samarakoon and Hasan 2005).
The basic risk-adjusted performance measures fall into four categories:
Ratio dividing the excess return of the fund by its risk: Sharpe Ratio (SR), Treynor Ratio (TR)
Differential return between fund and risk-adjusted market index: Total Risk Alpha (TRA), Jensen Alpha (JA)
Return of the risk-adjusted fund: Risk-adjusted Performance (RAP), Market Risk-adjusted Performance (MRAP)
Differential return between risk-adjusted fund and market index: Differential Return based on RAP (DRRAP), Differential Return based on MRAP (DRMRAP)
(From A Jigsaw Puzzle of Basic Risk-Adjusted Performance Measures by Hendrik Scholz and Marco Wilkens) (Scholz and Wilkens 2005)
The Sharpe ratio has been widely used in the portfolio management and fund industries. It is also crucial to understand that if a portfolio represents the majority of the investment, it should be evaluated on the basis of the Sharpe ratio; if the investment is part of a more diversified holding of funds, then the Treynor ratio should be used (Scholz and Wilkens 2006). One of the main assumptions underlying the applicability of ratios such as Sharpe and Treynor is asset return normality, and the general trend of the financial market drives the assessment quality of these measures (Gatfaoui 2009).
The Fama and French (F&F) three-factor model has emerged as one possible explanation for the irregularity of CAPM results. Previous work shows that size, earnings/price, cash flow/price, book-to-market equity, past sales growth, long-term past return, and short-term past return all contribute to average returns on common stock; these contributions are known as anomalies. The F&F three-factor model accounts for these anomalies in long-term returns (Fama and French, Multifactor Explanations of Asset Pricing Anomalies 1996). To show that higher returns on small cap and value stocks accounted for the excess returns, they used SMB (small minus big) to address size risk and HML (high minus low) to address value risk. A positive SMB loading represents higher returns for small cap stocks relative to big stocks, while a positive HML loading represents higher returns for value stocks than for growth stocks (Nawazish 2008). At the international level, the study International Value and Growth Stock Returns found that value stocks outperformed growth stocks in each country examined, both absolutely and after adjusting for risk; in addition, the small cross-country correlations of value-growth spreads indicate that a more effective strategy would be to diversify globally in value stocks (Capaul, Rowley and Sharpe 1993; Griffin 2002). The results of the 1995 research paper by Kothari, Shanken and Sloan conflict with these findings: using betas calculated from time-series regressions of annual portfolio returns on the annual return on the equally weighted market index, the relation they established between book-to-market equity and returns was much weaker and less consistent than the relationship proposed by F&F. They did, however, attribute this to selection bias (Kothari, Shanken and Sloan 1995). Another important detail about F&F is that it does not perform significantly better than CAPM when applied to individual stocks; however, it remains the model of choice when evaluating portfolios (Bartholdy and Peare 2002).
Another method of risk assessment which has rapidly gained popularity is Value-at-Risk. VaR is the preferred measure when a firm is exposed to multiple sources of risk, because it captures the combined effect of underlying volatility and exposure to financial risk. It borrows liberally from both the risk-adjusted and probabilistic approaches and serves as an alternative risk measure. Several models may be used in VaR assessment. According to research, the two best performing models are (1) a hybrid method combining a heavy-tailed generalized autoregressive conditionally heteroskedastic (GARCH) filter with an extreme value theory-based approach, and (2) a variant of filtered historical simulation. Conditional VaR models lead to much more volatile VaR predictions than unconditional models, which may arguably cause problems in allocating capital for trading purposes (Kuester, Mittnik and Paolella 2006). In relation to risk assessment, the establishment of capital regulations became essential when the allocation of resources in a free market became inefficient; to analyze the need for regulations, the conditions implemented by the Basel Committee will be discussed (P. Jorion 2000).
Considering the plethora of risk measures available to managers, it is difficult to decide which would be best (Christoffersen, Hahn and Inoue 1999). For real-life lessons on risk management involving VaR, it is helpful to look at the firm Long-Term Capital Management (LTCM). Its 1998 failure, which was said to have nearly blown up the world’s financial system, occurred because LTCM had severely underestimated its risk due to its reliance on short-term history and its risk concentration. LTCM also provides a good example of risk management taken to the extreme: using the same covariance matrix to measure risk and to optimize positions inevitably leads to biases in the measurement of risk. This approach also induces the strategy to take positions that appear to generate “arbitrage” profits based on recent history but in fact represent bets on extreme events, like selling options. Overall, LTCM’s strategy exploited the intrinsic weaknesses of its own risk management system (P. Jorion, Risk Management Lessons from Long-Term Capital Management 1999).
While covering all the aspects of risk pertinent to VaR, it is also important to identify the shortcomings of each method of valuation. In An Empirical Evaluation of VaR by Scenario Simulation, Abken conducts scenario simulation on a series of test portfolios, shows the relationship between scenario analysis, standard Monte Carlo simulation and principal component simulation, and then applies the three methods to several test portfolios in order to assess their relative performance (Abken 2000). This paper is essential because it identifies the inadequacy of scenario simulation and recommends cross-checking its results against other, more computationally intensive VaR methods. In addition, earlier research indicates a lack of agreement among the various VaR approaches, which is why Value at Risk standards have not been implemented across the board (Marshall and Siegel 1996).
In every study there are always some assumptions and preconceived notions. The biases to which most studies are exposed, including survivorship bias, look-ahead bias, self-selection bias, backfilling bias and data-snooping bias, more often induce spurious persistence than spurious non-persistence (Patari 2009). In addition, apparent anomalies can be the result of methodology, as most long-term anomalies disappear with reasonable changes in technique; this is consistent with the market efficiency prediction (Fama, Market efficiency, long-term returns, and behavioural finance 1998). Lastly, how important is risk assessment? When classifying managers as “good” or “bad”, the level of risk they take should be determined by the risk appetite and desired return of the client; therefore, assessment on both aspects is crucial (Palomino and Uhlig 2007).
TP = (rp – rf) / βp
Dependent variable = systematic risk
Independent variables = excess return of portfolio, beta of portfolio
SP = (rp – rf) / σp
Dependent variable = total portfolio risk
Independent variables = excess return of portfolio, standard deviation of portfolio returns
αp = rp – [rf + βp (rm – rf)]
Dependent variable = differential return
Independent variables = market risk premium, risk-free rate, portfolio return
rp – rf = αp + βp (rm – rf) + sp·SMB + hp·HML + εp
Dependent variable = excess portfolio return
Independent variables = market risk premium, size factor (SMB), and value factor (HML)
A VaR statistic has three components:
A time period
A confidence level
A loss amount or loss percentage
Variance-Covariance Method: One way to estimate VaR is the analytical method, also called the variance-covariance method. This method assumes a normal distribution of portfolio returns, which requires estimating the expected return and standard deviation of returns for each asset. As the number of securities in a portfolio increases, these calculations can become unwieldy. As a result, a simplifying assumption of zero expected return is sometimes made. This assumption has little effect on the outcome for short-term (daily) VaR calculations but is inappropriate for longer-term measures of VaR.
There are generally four steps involved in this process:
First, take each asset in the portfolio and map it onto simpler, standardized instruments that represent the underlying market risks. (At a 95% confidence level, the confidence interval translates into 1.96 standard deviations on either side of the mean.) This mapping makes it possible to measure the Value at Risk of any asset that is exposed to a combination of these market risks.
Second, state each financial asset as a set of positions in the standardized market instruments.
Third, once the standardized instruments affecting the assets in the portfolio have been identified, estimate the variances of and covariances across these instruments. In practice, these estimates are obtained from historical data and are key to estimating the VaR.
Lastly, compute the Value at Risk of the portfolio using the weights on the standardized instruments from step 2 and the variances and covariances from step 3. The VaR is estimated based upon the covariances between the underlying instruments (Damodaran, Strategic Risk Taking: A Framework for Risk Management 2008).
The advantage of this method is its simplicity. The disadvantage is that the assumption of a normal return distribution can be unrealistic (Financial Education n.d.).
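The following minimal sketch illustrates steps 2 through 4 for a small portfolio, assuming normally distributed returns with zero mean and a one-sided 95% cutoff of 1.65 standard deviations; the positions, weights and covariances are hypothetical, not taken from the KSE data:

```python
import numpy as np

def variance_covariance_var(returns, weights, value=1_000_000, z=1.65):
    """Parametric (variance-covariance) VaR for a portfolio.

    returns -- (T, N) array of historical returns for the N standardized
               positions (step 3 input); assumes normality and, as a
               short-horizon simplification, zero expected return
    weights -- (N,) portfolio weights on those positions (step 2)
    value   -- current portfolio value in rupees
    z       -- one-sided cutoff: 1.65 for 95%, 2.33 for 99% confidence
    """
    cov = np.cov(returns, rowvar=False)          # step 3: covariance matrix
    port_std = np.sqrt(weights @ cov @ weights)  # step 4: portfolio volatility
    return z * port_std * value

# Hypothetical daily returns for three standardized positions.
rng = np.random.default_rng(2)
true_cov = [[1.0e-4, 4.0e-5, 2.0e-5],
            [4.0e-5, 2.0e-4, 5.0e-5],
            [2.0e-5, 5.0e-5, 1.5e-4]]
returns = rng.multivariate_normal(mean=[0, 0, 0], cov=true_cov, size=2000)
weights = np.array([0.5, 0.3, 0.2])
print(f"one-day 95% VaR: Rs {variance_covariance_var(returns, weights):,.0f}")
```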
Historical Simulation: An advantage of the historical method is that it is non-parametric; it does not require assumptions about the probability distribution. The disadvantage is that the past may have very different risk characteristics from the future. To run a historical simulation, I will begin with time series data on each market risk factor; the changes in the portfolio over time yield all the information needed to compute the Value at Risk. I will then separate the daily price changes into positive and negative values and analyze each portfolio, selecting the 95th percentile of the negative price changes to determine the worst-case scenario according to historical simulation.
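A minimal sketch of those steps follows, using a hypothetical series of daily portfolio returns rather than the KSE data analyzed in this study:

```python
import numpy as np

def historical_var(daily_changes, percentile=95):
    """Historical-simulation VaR following the steps described above:
    keep only the negative daily changes, express them as losses, and
    take their 95th percentile as the worst-case loss estimate."""
    losses = -daily_changes[daily_changes < 0]  # negative changes as positive losses
    return np.percentile(losses, percentile)

# Hypothetical daily portfolio returns, for illustration only.
rng = np.random.default_rng(3)
daily_changes = rng.normal(0.0004, 0.011, 2000)
print(f"one-day 95% historical VaR: {historical_var(daily_changes):.2%} of portfolio value")
```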
Monte Carlo Simulation: Using the Monte Carlo method to estimate Value at Risk (VaR) produces a set of random outcomes reflecting the effects of particular sets of risks. Each set of outcomes is based on a probability distribution for each variable of interest. The distributions for each variable can be normal or non-normal. Monte Carlo simulations are frequently the only method that provides a practical means to generate necessary risk management information (Financial Education n.d.).
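As a sketch, the following example draws random one-day returns from a normal distribution (one possible, not obligatory, distributional choice) and reads the VaR off the simulated outcomes; the drift, volatility and portfolio value are illustrative assumptions:

```python
import numpy as np

def monte_carlo_var(mu, sigma, value=1_000_000, trials=100_000,
                    confidence=0.95, seed=4):
    """Monte Carlo VaR: generate many hypothetical one-day returns from an
    assumed return model (normal here, though any model can be plugged in)
    and read off the loss at the chosen confidence level."""
    rng = np.random.default_rng(seed)
    simulated = rng.normal(mu, sigma, trials)                  # random trials
    cutoff = np.percentile(simulated, (1 - confidence) * 100)  # 5th percentile
    return -cutoff * value                                     # loss in rupees

# Illustrative daily drift and volatility; in practice these come from a
# model of future stock prices fitted to data.
print(f"one-day 95% Monte Carlo VaR: Rs {monte_carlo_var(0.0004, 0.012):,.0f}")
```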
This study will test the performance of these methods on six portfolios:
Index-based portfolio, based on the KSE-100
Growth stock portfolio
Value stock portfolio
Large cap portfolio
Medium cap portfolio