Evaluation of Portfolios Linking Risk and Return

More than a decade after the risk management failures at Barings PLC, Metallgesellschaft and Orange County, risk management has evolved considerably, but there is still a long way to go. From Bernoulli's St. Petersburg experiment to Markowitz's portfolio theory and Fama and French's three-factor model, risk measurement has developed steadily, and the latest trend in risk management is Value-at-Risk.

Most of the existing research focuses on a single area of risk management, and there is a need to establish the missing links between these risk management techniques in application. This thesis attempts to do just that. It evaluates the performance of selected portfolios using the Treynor ratio, the Sharpe ratio, Jensen's alpha, the Fama and French three-factor model and Value-at-Risk. The study examines the period 2003-2010 using companies listed on the Karachi Stock Exchange, with the KSE-100 index as the benchmark.

The purpose of the study is to determine which portfolios would be better investments in terms of risk and return. In addition, applying a variety of methods and comparing their results makes the drawbacks and pitfalls of each method more apparent.

The main objectives are, first, to identify risk assessment techniques and, second, to classify portfolios according to risk and return. Finally, a comparison of results across methods is used to assess the consistency and validity of the research.

INTRODUCTION

Risk

The Oxford dictionary defines the word risk as “hazard; chance of bad consequences, loss, etc.; exposure to mischance; to expose oneself, or be exposed to loss.” Traditionally, risk is viewed negatively. The Chinese word for crisis, written with two characters, gives a more complete picture of what risk represents: the first character represents danger, while the second signifies opportunity. This shows that it is important to manage risk in good times in order to plan for possible crises, and in bad times in order to look for opportunities. Above all, risk must be dealt with calmly. “Risk management is not just about minimizing exposure to the wrong risks but should also incorporate increasing exposure to good risks.” (DAMODARAN, Aswath)

Since risk implies uncertainty, risk assessment is largely concerned with quantifying uncertainty in terms of probability. In essence, risk assessment is a method of examining risks so that they can be controlled, reduced or avoided. In order to lend meaning to any form of risk assessment, the results must be compared against a benchmark or a similar assessment (WILSON, Richard and Crouch, E. A. C., 1987).

Interpretations of Risk

Risk vs. Probability: Probability involves only the likelihood of an event occurring, whereas risk encompasses both the likelihood and the consequences of the event. Making probability-centric decisions about risk leads to ignoring new or unusual risks that may not be numerically quantifiable.

Risk vs. Threat: A threat may be defined as “an indication of coming evil”, a low-probability event with very large negative consequences whose probability is difficult to determine. A risk, in contrast, is a higher-probability event whose probability and consequences can both be estimated.

All outcomes vs. Negative outcomes: A focus on negative outcomes relates to downside risk, but variability in risk includes both the good and the bad, i.e. all outcomes should be taken into account when determining risk. Making negative outcomes the highlight of risk assessment tends to narrow risk management to simply hedging. (DAMODARAN, Aswath)

Evolution of risk assessment

The idea of measuring performance has appealed to investors and financial analysts alike, and the process of doing so has evolved over time. Initially it involved evaluation based on total returns; the concepts of efficiency and benchmarks further refined the process. With every passing day, new methods and hybrid methods are tested in an effort to develop an accurate method of assessment (MODIGLIANI, Franco and Modigliani, Leah, 1997). The following table depicts the evolution of risk assessment:

Period | Risk Measure | Key Event
Pre-1494 | None or gut feeling | Fate or divine providence
1494 | Computed probabilities | Luca Pacioli’s coin-tossing game
1654 | | Pascal and Fermat’s probability estimation theory
1662 | | Graunt’s life table
1711 | Sample-based probabilities | Bernoulli’s Law of Large Numbers
1738 | | The birth of the normal distribution
1763 | | Bayes’ contributions
1800s | Expected loss | The development of the insurance business
1900 | Price variance | Bachelier’s random walk hypothesis
1909-1915 | Stock and bond ratings | Moody’s, Fitch and the Standard Statistics Bureau
1952 | Variance added to portfolio | Markowitz’s efficient portfolio theory
1964 | Market beta | The birth of the CAPM
1960s | Power-law, asymmetric and jump-process distributions | 
1976 | Factor betas | Ross’s arbitrage pricing model; introduction of multiple market risk factors
1986 | Macroeconomic betas | Macroeconomic multifactor model
1992 | Proxies | Fama and French extend the CAPM with their multifactor model

Financial risk management has been defined by the Basel Committee (2001) as a sequence of four processes:

The identification of events into one or more broad categories of market, credit, operational and “other” risks, and into specific subcategories;

The assessment of risks using data and a risk model;

The monitoring and reporting of the risk assessments on a timely basis;

The control of these risks by senior management. (ALEXANDER, Carol, 2005)

Necessity is the mother of all inventions. Nowadays, there are a variety of methods used to measure risk, but the development of these measures was fueled by the risk aversion of investors.

The principal development in the area of modern risk management techniques is the idea of expected loss and unexpected loss. Expected Loss (EL) may be defined as the recognition of certain risks that can be statistically gauged, while Unexpected Loss (UL) is an estimate of the potential variability of EL from one year to the next, including the potential for stress loss events. The quantified approximation of EL leads to the estimation of UL (International Financial Risk Institute).

St. Petersburg Paradox – In the 1700s Nicholas Bernoulli devised a simple game of chance that gauged an individual’s willingness to take risk. Through this game he identified two important aspects of human behavior regarding risk:

Certain individuals were willing to pay more than others, indicating differing levels of risk aversion.

The utility from gaining an additional dollar decreases with wealth: the marginal utility of an additional dollar is greater for the poor than for the wealthy. “One thousand ducats are more significant to a pauper than to a rich man though both gain the same amount”.

Daniel Bernoulli (Nicholas Bernoulli’s cousin) resolved the paradox of risk aversion and marginal utility with the following distinction linking price and utility:

“…the value of an item must not be based upon its price, but rather on the utility it yields. The price of the item is dependent only on the thing itself and is equal for everyone; the utility, however, is dependent on the particular circumstances of the person making the estimate.”(DAMODARAN, Aswath, 2008)

Implications for Risk Management

As organizations find themselves exposed to more complex risks, it is worth confronting concerns about uncertainty more directly in analysis. That is what probabilistic approaches do. While they will not make the uncertainty go away or even reduce it, they can let firms see the effect of uncertainty on their value and decisions and modify their behavior accordingly (DAMODARAN, Aswath).

Methods chosen

Treynor Ratio – The Treynor ratio was developed by Jack Treynor and measures the return of a portfolio earned in excess of a riskless investment per unit of market risk. It is a risk-adjusted measure of return based on systematic risk and uses beta as the measure of volatility. It is otherwise known as the “reward-to-volatility ratio” and is calculated as:

TP = (rp – rf) / βp
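For illustration, the calculation can be sketched in Python as follows; the return series and the daily risk-free rate below are hypothetical placeholders, not the thesis data:

import numpy as np

def treynor_ratio(portfolio_returns, market_returns, risk_free_rate):
    # Beta of the portfolio: covariance with the market divided by market variance
    beta = np.cov(portfolio_returns, market_returns)[0, 1] / np.var(market_returns, ddof=1)
    # Excess return per unit of systematic (beta) risk
    return (np.mean(portfolio_returns) - risk_free_rate) / beta

# Hypothetical daily return series and a daily risk-free rate
rp = np.array([0.012, -0.004, 0.009, 0.003, -0.001])
rm = np.array([0.010, -0.006, 0.007, 0.004, -0.002])
print(treynor_ratio(rp, rm, risk_free_rate=0.0002))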

Sharpe Ratio – The Sharpe ratio was developed in 1966 by William F. Sharpe and measures the risk-adjusted performance of a portfolio. The ratio makes the performance of one portfolio comparable to that of another by adjusting for risk. It is calculated by subtracting the risk-free rate from the return of a portfolio and dividing the result by the standard deviation of the portfolio returns. The ratio indicates whether a portfolio’s returns are due to smart investment decisions or to exposure to excessive risk, and it highlights which portfolios deliver higher returns without taking on too much risk. The general rule of thumb is that the higher the Sharpe ratio, the better the investment from a risk/return perspective. A negative Sharpe ratio shows that a risk-free asset would have performed better than the portfolio being analyzed. The Sharpe ratio formula is:

SP = (rp – rf) / σp
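A corresponding sketch of the Sharpe ratio, again on placeholder inputs, differs from the Treynor sketch only in using the standard deviation of returns instead of beta:

import numpy as np

def sharpe_ratio(portfolio_returns, risk_free_rate):
    # Excess return per unit of total risk (standard deviation of returns)
    excess = np.asarray(portfolio_returns) - risk_free_rate
    return np.mean(excess) / np.std(portfolio_returns, ddof=1)

print(sharpe_ratio([0.012, -0.004, 0.009, 0.003, -0.001], risk_free_rate=0.0002))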

Jensen’s Alpha

Jensen’s Alpha is a risk-adjusted performance measure that represents the average return on a portfolio over and above the return predicted by the capital asset pricing model (CAPM). The independent variables of this computation are the portfolio beta, the average market return and the risk-free rate.

The concept revolves around the fact that an investment manager’s performance is determined by the level of risk in the portfolio as well as by the return. For example, if two portfolios both earn a 15% return, a rational investor will prefer the less risky one. Jensen’s alpha helps to determine whether the return on a portfolio is proportionate to its risk. If alpha is positive, the portfolio is earning surplus returns, which can be attributed to the investment manager’s skill (Investopedia).

αp = rp – [rf + βp (rm – rf)]
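A minimal sketch of this calculation, with the CAPM-expected return computed from an estimated beta (the inputs are placeholders):

import numpy as np

def jensens_alpha(portfolio_returns, market_returns, risk_free_rate):
    beta = np.cov(portfolio_returns, market_returns)[0, 1] / np.var(market_returns, ddof=1)
    # CAPM-predicted return for this level of systematic risk
    expected = risk_free_rate + beta * (np.mean(market_returns) - risk_free_rate)
    # Alpha: average realised return above the CAPM prediction
    return np.mean(portfolio_returns) - expected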

Fama and French – Eugene Fama and Kenneth R. French attempted to improve measures of market returns. Through their research they found that value stocks outperform growth stocks and that small-cap stocks outperform large-cap stocks. Using these two findings, they developed a three-factor model that expands on the Capital Asset Pricing Model (CAPM) to adjust for this outperformance tendency (Investopedia). The Fama–French model is only one possible multifactor model that could be applied to explain excess or absolute stock returns (Multifactor Models in Practice; Wolfram). When using the three-factor model, it is important to understand that portfolios with a large number of value stocks or small-cap stocks will be evaluated lower than the CAPM result, because the F&F model adjusts downward for the small-cap and value outperformance.
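As a sketch of how the factor loadings can be estimated, the excess portfolio return can be regressed on the three factors by ordinary least squares; the array names below (excess_portfolio, market_premium, smb, hml) are assumed inputs constructed as described in the methodology section:

import numpy as np

def fama_french_loadings(excess_portfolio, market_premium, smb, hml):
    # Design matrix: intercept (alpha), market premium, SMB and HML factors
    X = np.column_stack([np.ones(len(market_premium)), market_premium, smb, hml])
    coeffs, *_ = np.linalg.lstsq(X, excess_portfolio, rcond=None)
    # coeffs = [alpha, market beta, size loading, value loading]
    return coeffs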

The biggest debate regarding outperformance of certain stocks is about market efficiency and market inefficiency. On the efficiency side, it is said that the outperformance of value stocks and small cap stocks is due to the exposure to excessive risk. On the inefficiency side, it is said that the outperformance is due to market participants mispricing the value of these companies, which provides the excess return in the long run as the value adjusts.

Value at Risk

Value at Risk (VaR) is currently one of the most popular measures of volatility. Volatility can be defined as the relative rate at which the price of a security moves up or down; importantly, it does not distinguish between positive and negative movement. VaR is sometimes referred to as the “new science of risk management” and measures the potential loss in value of a risky portfolio for a given confidence level over a defined period of time. It attempts to answer questions such as “What is the most I can expect to lose in rupees over the next year with a 95% level of confidence?” VaR marks the boundary between normal days and extreme events. The measure makes a probabilistic estimate of the worst-case scenario using the defined parameters, as opposed to stating a single statistic with absolute certainty. The greatest shortcoming of VaR is that it says nothing about the magnitude of extreme losses beyond the VaR; it is therefore possible, though not very probable, to lose more than the calculated VaR.

In VaR, it is important to keep the following points in mind [1]:

VaR requires defining the probability distributions of individual risk factors, the correlations across these risks, and the effect of these risks on value.

VaR focuses on potential losses and downside risk.

It assumes no trading over the course of the period and a normal distribution of returns, so that the greatest likely loss over the period can actually be estimated.

There are three key elements of VaR – a specified level of loss in value, a fixed time period over which risk is assessed and a confidence interval. The VaR can be specified for an individual asset, a portfolio of assets or for an entire firm.

It is possible to specify the risks being measured in VaR according to the portfolio or project being assessed. For example, the VaR for a large investment project can be defined in terms of competitive and firm-specific risks.

Measuring Value at Risk

In the last decade, a variety of methods have been developed for calculating Value-at-Risk. There are three basic approaches: two involve running hypothetical portfolios through historical simulation or Monte Carlo simulation, while the variance-covariance approach relies on analytical assumptions.


Variance-Covariance Method

The variance-covariance method is also known as parametric VaR. It is essentially the standard deviation of an asset or a portfolio scaled to a selected time period and confidence level. The purpose of the variance-covariance method is to develop a probability distribution of potential values. It requires only two inputs: the average portfolio return and the standard deviation. The method assumes that stock returns are normally distributed; as such, when actual daily returns are plotted against a normal curve, it is possible to identify the worst-case scenario (either 5% or 1%) to determine the maximum possible loss. This approach has the benefit of simplicity, but it suffers from the difficulties associated with deriving probability distributions. In addition, the focus on standardized returns implies that we should focus on the size of the return relative to the standard deviation.

VaR(α) = z(α) σ

For example, VaR(95%) = (-1.645) σ or VaR(99%) = (-2.33) σ
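A minimal sketch of the parametric calculation, using the normal quantile for the chosen confidence level (the return series passed in is a placeholder):

import numpy as np
from scipy.stats import norm

def parametric_var(returns, confidence=0.95):
    # Standard deviation of returns scaled by the normal quantile
    sigma = np.std(returns, ddof=1)
    z = norm.ppf(1 - confidence)   # negative tail quantile, e.g. -1.645 at 95%
    return -z * sigma              # reported as a positive loss fraction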

Historical Simulation

Historical simulation is a popular and relatively easy-to-implement approach to VaR. In this approach, the VaR for a portfolio is estimated by creating a hypothetical time series of returns on that portfolio, obtained by running the portfolio through actual historical data and computing the changes that would have occurred in each period. There is no underlying assumption of normally distributed returns; the estimate is based on actual historical returns. The upside of this method is that it is easy to implement. The downside is that it presumes that past returns are indicative of future returns, which is a problematic assumption. In addition, its results are indifferent to the timing of the returns: for example, a high positive return in the last week is given the same importance as a low negative return six months ago.
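The historical-simulation estimate reduces to taking a percentile of the realised return series; a sketch on an assumed series of daily portfolio returns:

import numpy as np

def historical_var(portfolio_returns, confidence=0.95):
    # Loss at the (1 - confidence) percentile of the observed return history
    return -np.percentile(portfolio_returns, 100 * (1 - confidence))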

Monte Carlo Simulation

Monte Carlo simulation refers to any method that randomly generates trials. It involves specifying a model for future stock prices and running multiple hypothetical trials through that model. On its own, the Monte Carlo label says nothing about the underlying distributional assumptions, which is why it is often useful to compare its output with a second VaR estimate. Additionally, subjective judgments can be made to modify these distributions.
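As an illustration only, the sketch below assumes normally distributed daily returns; in practice any return model, including non-normal ones, can be substituted:

import numpy as np

def monte_carlo_var(mu, sigma, horizon_days, confidence=0.95, n_trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # Simulate n_trials paths of daily returns and aggregate over the horizon
    daily = rng.normal(mu, sigma, size=(n_trials, horizon_days))
    horizon_returns = daily.sum(axis=1)   # sum of log returns over the horizon
    return -np.percentile(horizon_returns, 100 * (1 - confidence))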

Limitations of VaR

While Value at Risk has gained considerable popularity in the risk management community, there remains substantial doubt regarding its accuracy as a risk management tool and the degree of reliance managers should place on its results when making decisions. There are three main problems with VaR:

VaR can be wrong – There is no precise measure of VaR and each measure comes with its own drawbacks.

Narrow Focus – The simplicity of VaR stems from its narrow definition of risk, which could lull managers into a false sense of security and lead to decisions that may not serve the firm’s best interests.

Suboptimal decisions – It remains unclear if VaR leads to more rational decisions because it could lead to overexposure to risk.

Why apply all these methods?

Hallmarks of Scientific Research

Purposiveness

Rigor

Testability

Replicability

Precision and Confidence

Objectivity

Generalizability

Parsimony

These hallmarks of scientific research contribute to the soundness of the information derived from the research conducted. This research has a very specific aim, namely to establish the link between risk and return from various angles, so that one may reinforce the other while maintaining objectivity in the results. The data collected for this research have been carefully and scrupulously filtered, keeping the highest degree of exactitude possible in order to maintain rigor. By utilizing more than one method of evaluation for risk and return, the information obtained maintains testability. The decision to collect daily data for eight years contributes to precision, while the use of a 95% confidence interval adds to the confidence in the research. The use of all stocks in building portfolios, as opposed to selecting a sample, contributes to generalizability.

LITERATURE REVIEW

The valuation of a portfolio, regardless of the method chosen, always serves the same purpose: to determine which portfolio minimizes risk or maximizes returns. The main purpose of this paper is to determine which portfolios, constructed from KSE stocks, would be the most risky and which would be the least risky.

From an investor’s perspective, risk may be defined as the danger of loss or, according to finance theory, the “dispersion of unexpected outcomes due to movements in financial variables” (The Need for Risk Management, 2000). The recent unprecedented changes in financial markets have made it essential for firms to develop better risk-assessment techniques; the increase in the volatility of financial markets is the driving force behind increased interest in risk management. Modern portfolio theory states that portfolio returns are positively correlated with risk as measured by beta and sigma. However, portfolio performance ranked by the Treynor and Sharpe ratios frequently lacks consistency over time. This may be attributed to the overconfidence of traders; it has been shown that high trading levels correspond to lower portfolio returns. It is actually more profitable to invest in a set of assets that may consist of high-beta companies, small-cap stocks, or value stocks and hold that portfolio over a complete stock market cycle to realize a higher return (BARBER, Brad A. and Odean, Terrance, 2000). In addition, Bauman and Miller stress the importance of evaluating portfolio performance over a complete stock market cycle to test the consistency of performance rankings over time. Performance should be evaluated over complete stock market cycles because the volatility of annualized stock market returns diminishes as the investment horizon increases (BAUMAN, W. Scott and Miller, Robert E., 1994).

In a study of performance persistence among mutual fund investors, it was found that while the common trend in research has been toward shorter selection and holding periods, employing longer-term designs proves effective in identifying winning portfolios ex ante. In addition, the past performance of portfolios is not a concrete indicator of future earnings because of constantly changing circumstances. At best, the chances of selecting a profitable portfolio that performs better than average can be increased to some extent when selection is based on past performance, although this depends on the time frame of the past study. It is also important to avoid selecting underperforming funds that have high expense ratios (PATARI, Eero J., 2009). The more constrained a portfolio is in terms of its mix of risky assets, the more diversified it will be. More diversified portfolios lead to less realized risk as well as less realized return, but the reduction in return is not worth the reduction in risk (GRAUER, Robert R. and Shen, Frederick C., 2000).

In terms of portfolio evaluation, there are numerous methods that can be adopted to determine which portfolios are better or worse. Probabilistic approaches include scenario analysis, decision trees and simulations, while risk-adjustment approaches include sector comparisons, market capitalization or size, ratio-based comparisons and statistical controls (DAMODARAN, Aswath). When analyzing portfolios, it is important to keep in mind that under quadratic utility, increases in wealth always increase risk aversion, which can induce a negative correlation between beta and return (GRINBLATT, Mark and Titman, Sheridan, 1990). Covariances are usually the direct parameter inputs for optimal portfolio choice, while betas are helpful in understanding assets’ systematic risks linked to the general market (HONG, Yongmiao et al., 2007). Hence, identifying the level of both these risks in a portfolio is essential for a thorough analysis of risk exposure.

Benchmarking is one of the most common methods of portfolio performance evaluation. It involves comparing a portfolio against a broader market index. If the returns of the portfolio exceed those of the benchmark index, measured over identical time periods, the portfolio is said to have beaten the benchmark. The only disadvantage of this assessment is the possibility of a mismatch between the risk level of the investment portfolio and that of the benchmark index portfolio, which could lead to invalid results (SAMARAKOON, Lalith P. and Hasan, Tanweer, 2005).

The Sharpe ratio has been widely used in both the portfolio management and fund industries. It is also important to understand that if a portfolio represents the majority of the investment, it should be evaluated on the basis of the Sharpe ratio; if the investment of funds is more diversified, then the Treynor ratio should be used (SCHOLZ, Hendrik and Wilkens, Marco, 2006). One of the main assumptions underlying the applicability of ratios such as Sharpe and Treynor is asset-return normality, and the general trend of the financial market drives the assessment quality of these measures (GATFAOUI, Hayette, 2009).

Interpretation / Risk | Total Risk | Market Risk
Ratio dividing the excess return of the fund by its risk | Sharpe Ratio (SR) | Treynor Ratio (TR)
Differential return between fund and risk-adjusted market index | Total Risk Alpha (TRA) | Jensen Alpha (JA)
Return of the risk-adjusted fund | Risk-adjusted Performance (RAP) | Market Risk-adjusted Performance (MRAP)
Differential return between risk-adjusted fund and market index | Differential Return based on RAP (DRRAP) | Differential Return based on MRAP (DRMRAP)

(From “A Jigsaw Puzzle of Basic Risk-Adjusted Performance Measures” by Hendrik Scholz and Marco Wilkens) (SCHOLZ, Hendrik and Wilkens, Marco, 2005)

The Fama and French (F&F) three-factor model has emerged as one possible explanation for the irregularity of CAPM results. Previous work shows that size, long-term past return, short-term past return, book-to-market equity, earnings/price, cash flow/price and past sales growth all contribute to average returns on common stock. These patterns are known as anomalies, and the F&F three-factor model accounts for these anomalies in long-term returns (FAMA, Eugene F. and French, Kenneth R., 1996). To capture the higher returns earned by small-cap and value stocks, Fama and French added two factors: SMB (small minus big) denotes size risk, while HML (high minus low) denotes value risk. A positive SMB indicates that small-cap stocks earn higher returns than large-cap stocks; a positive HML indicates that value stocks earn higher returns than growth stocks (NAWAZISH, Elahi Mirza, 2008). At both the national and international level, the study International Value and Growth Stock Returns found that value stocks outperformed growth stocks in each country, both absolutely and after adjustment for risk. In addition, the small cross-country correlations of value-growth spreads indicate that a more effective strategy would be to diversify globally in terms of value stocks (CAPAUL, Carlo et al., 1993) (GRIFFIN, John M., 2002). The results of the 1995 research paper by Kothari, Shanken and Sloan conflict with these findings: using betas calculated from time-series regressions of annual portfolio returns on the annual return of the equally weighted market index, the relation they established between book-to-market equity and returns was much weaker and less consistent than the relationship proposed by F&F. They did, however, attribute this to selection bias (KOTHARI, S.P. et al., 1995). Another important detail about F&F is that it does not perform significantly better than the CAPM when applied to individual stocks; however, it remains the model of choice when it comes to evaluating portfolios (BARTHOLDY, Jan and Peare, Paula, 2002).

Another method of risk assessment that has rapidly gained popularity is Value-at-Risk. VaR is the preferred measure when a firm is exposed to multiple sources of risk, because it captures the combined effect of underlying volatility and exposure to financial risk. It borrows liberally from both the risk-adjusted and probabilistic approaches and serves as an alternative risk measure. Several models may be used in VaR assessment. According to research, the two best-performing models are (1) a hybrid method combining a heavy-tailed generalized autoregressive conditionally heteroskedastic (GARCH) filter with an extreme value theory-based approach, and (2) a variant of filtered historical simulation. Conditional VaR models lead to much more volatile VaR predictions than unconditional models and may arguably cause problems in allocating capital for trading purposes (KUESTER, Keith et al., 2006). In relation to risk assessment, the establishment of capital regulations became essential when the allocation of resources in a free market became inefficient. To analyze the need for regulation, the conditions implemented by the Basel Committee will be discussed (Regulatory Capital Standards with VaR, 2000).

Considering the plethora of risk measures available to managers, it is difficult to decide which would be best (CHRISTOFFERSEN, Peter et al., 1999). For real-life lessons on risk management with VaR, it is helpful to look at the firm Long-Term Capital Management (LTCM). Its 1998 failure occurred because the firm had severely underestimated its risk, owing to its reliance on short-term history and to risk concentration, and it also illustrates the consequences of taking an extreme stance on risk management. LTCM’s risk managers used the same covariance matrix to measure risk that they used to optimize positions, which led to biases in the measurement of risk and gave them an apparent opportunity to generate “arbitrage” profits based on recent history and extreme events. By and large, LTCM’s strategy took advantage of the fundamental limitations of its risk management system (JORION, Philippe, 1999).

While covering all the aspects of risk pertinent to VaR, it is also important to identify the shortcomings of each method of valuation. In An Empirical Evaluation of Value at Risk by Scenario Simulation, Abken conducts scenario simulation on a series of test portfolios, shows the relationship between scenario analysis, standard Monte Carlo and principal component simulation, and then applies the three methods to several test portfolios in order to assess their relative performance (ABKEN, Peter A., 2000). The paper is essential because it identifies the inadequacy of scenario simulation and recommends cross-checking results from scenario analysis against other, more computationally intensive VaR methods. In addition, earlier research indicates a lack of agreement among the various VaR approaches, which is why Value-at-Risk standards have not been implemented across the board (MARSHALL, Christopher and Siegel, Michael, 1996).

In every study there are some assumptions and preconceived notions. The biases to which most studies are exposed, including survivorship bias, look-ahead bias, self-selection bias, backfilling bias and data-snooping bias, more often induce spurious persistence than spurious non-persistence (PATARI, Eero J., 2009). In addition, apparent anomalies can be the result of methodology, because most long-term anomalies disappear with reasonable changes in technique; this is consistent with the market efficiency prediction (FAMA, Eugene F., 1998). Lastly, how important is risk assessment? When classifying managers as “good” or “bad”, the level of risk that they take should be judged against the risk appetite and return objectives of the client; assessment on both dimensions is therefore crucial (PALOMINO, Frederic and Uhlig, Harald, 2007).

METHODOLOGY

Treynor

TP = (rp – rf) / βp

Dependent variable = systematic risk

Independent variables = excess return of portfolio, beta of portfolio

Sharpe

SP = (rp – rf) / σp

Dependent variable = total portfolio risk

Independent variables = excess return of portfolio, standard deviation of portfolio returns

Jensen’s Alpha

αp = rp – [rf + βp (rm – rf)]

Dependent variable = differential return

Independent variables = market risk premium, risk-free rate, portfolio return

Fama & French 3 Factor Model

Dependent variable = excess portfolio return

Independent variables = market risk premium, size factor, and value factor

Daily portfolio return: RPt = Σ(i=1 to N) wi Rit, where Rit = LN[Pit / Pi,t-1]

Market Return: Rmt = LN [KSE100t/KSE100t-1]

Excess Portfolio Return: Rp – Rf

Market Risk Premium: Rm – Rf
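These return definitions can be implemented directly; the sketch below assumes a pandas DataFrame of daily closing prices, a Series of KSE-100 index levels and a Series of portfolio weights (all placeholder names, not the thesis data set):

import numpy as np
import pandas as pd

def daily_returns(prices: pd.DataFrame) -> pd.DataFrame:
    # Continuously compounded daily returns, Rit = LN[Pit / Pi,t-1]
    return np.log(prices / prices.shift(1)).dropna()

def portfolio_return(stock_returns: pd.DataFrame, weights: pd.Series) -> pd.Series:
    # Weighted daily portfolio return, RPt = sum over i of wi * Rit
    return stock_returns.mul(weights, axis=1).sum(axis=1)

def market_return(kse100: pd.Series) -> pd.Series:
    # Daily market return from the index, Rmt = LN[KSE100_t / KSE100_t-1]
    return np.log(kse100 / kse100.shift(1)).dropna()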

Size calculation: Market capitalization = Price * Number of shares. The median of the sample will be used to split stocks into two categories, Big (B) and Small (S).

Book-to-Market (BM) calculation: BM ratio = Book value of equity / Market value of equity. Ranking and categorization: bottom 30% -> Low (L), middle 40% -> Medium (M), top 30% -> High (H).
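A sketch of the size and book-to-market classification described above; the column names (price, shares, book_equity) are assumed for illustration:

import numpy as np
import pandas as pd

def classify_firms(firms: pd.DataFrame) -> pd.DataFrame:
    out = firms.copy()
    out["market_cap"] = out["price"] * out["shares"]
    out["bm"] = out["book_equity"] / out["market_cap"]
    # Size: split at the median market capitalization into Small (S) and Big (B)
    out["size"] = np.where(out["market_cap"] <= out["market_cap"].median(), "S", "B")
    # Value: bottom 30% -> Low (L), middle 40% -> Medium (M), top 30% -> High (H)
    out["value"] = pd.qcut(out["bm"], [0, 0.3, 0.7, 1.0], labels=["L", "M", "H"])
    return out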

SMB: risk premium in return related to firm size

SMB = ([S/L + S/M + S/H]/3) – ([B/L + B/M + B/H]/3)

HML: risk premium related to firm value

HML = ([S/H + B/H]/2) – ([S/L + B/L]/2)
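Given return series for the six size/value intersection portfolios (here stored in a dictionary with keys such as "S/L" and "B/H", a placeholder layout), the two factors follow directly from the formulas above:

def smb(port):
    # Size premium: average of the three small portfolios minus average of the three big portfolios
    return ((port["S/L"] + port["S/M"] + port["S/H"]) / 3
            - (port["B/L"] + port["B/M"] + port["B/H"]) / 3)

def hml(port):
    # Value premium: average high book-to-market return minus average low book-to-market return
    return ((port["S/H"] + port["B/H"]) / 2
            - (port["S/L"] + port["B/L"]) / 2)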

Value-at-Risk

