Predicting volatility with neural networks

Predicting realized volatility is critical for trading signals and position calibration. Econometric models, such as GARCH and HAR, forecast future volatility based on past returns in a fairly intuitive and transparent way. However, recurrent neural networks have become a serious competitor. Neural networks are adaptive machine learning methods that pass data through interconnected layers of neurons, where the activations in one layer determine the activations in the next. They learn by fitting the weights and biases of their activation functions to training data. Recurrent neural networks are a class of neural networks designed for modeling sequences of data, such as time series, and specialized variants have been developed to retain longer memory, notably the LSTM (Long Short-Term Memory) and the GRU (Gated Recurrent Unit). The advantage of neural networks is their flexibility to include complex interactions of features, non-linear effects, and various types of non-price information.
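Gating is what lets GRUs and LSTMs retain longer memory than plain recurrent networks. As a minimal illustrative sketch (not the architecture of any specific study), a single GRU cell step can be written in NumPy; the weight names, dimensions, and the toy input of daily squared returns are all assumptions, and gating conventions vary slightly across references:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step: x is the input vector, h the previous hidden state;
    W, U, b hold the (input, recurrent, bias) parameters of each gate."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])   # update gate: how much to refresh
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])   # reset gate: how much past to use
    h_cand = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])  # candidate state
    return (1.0 - z) * h + z * h_cand               # blend old state and candidate

# Toy usage: run a short sequence of daily squared returns through the cell.
rng = np.random.default_rng(0)
n_in, n_hid = 1, 4
W = {k: rng.normal(scale=0.3, size=(n_hid, n_in)) for k in ("z", "r", "h")}
U = {k: rng.normal(scale=0.3, size=(n_hid, n_hid)) for k in ("z", "r", "h")}
b = {k: np.zeros(n_hid) for k in ("z", "r", "h")}

h = np.zeros(n_hid)
for sq_ret in [0.0001, 0.0004, 0.0009, 0.0001]:
    h = gru_step(np.array([sq_ret]), h, W, U, b)
```

In a trained network, the hidden state `h` would feed a final layer that outputs the volatility forecast; here the weights are random, so the point is only the mechanics of the gates.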

(more…)

How to estimate credit spread curves

Credit spread curves are essential for analyzing lower-grade bond markets and for the construction of trading strategies that are based on carry and relative value. However, simple spread proxies can be misleading because they assume that default may occur more than once in the given time interval and that losses are in proportion to market value just before default, rather than par value. A more accurate method is to estimate the present value of survival-contingent payments – coupons and principal – as the product of a risk-free discount factor and survival probability. To this, one must add a discounted expected recovery of par value in case of default. This model allows one to define parametrically a grid of curves that depends on rating and maturity. The estimated ‘fair’ spread for a particular rating and tenor is then effectively a weighted average over bonds of nearby ratings and tenors.
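The survival-contingent pricing logic can be sketched numerically. This is a simplified illustration, assuming a flat risk-free rate, a constant hazard rate, and recovery as a fixed fraction of par; all parameter values are made up:

```python
import math

def risky_bond_pv(coupon, par, maturity, rf, hazard, recovery, freq=1):
    """PV = survival-contingent coupons and principal, discounted risk-free,
    plus discounted expected recovery of par in case of default."""
    dt = 1.0 / freq
    pv = 0.0
    q_prev = 1.0                      # survival probability at period start
    t = dt
    while t <= maturity + 1e-9:
        d = math.exp(-rf * t)         # risk-free discount factor to time t
        q = math.exp(-hazard * t)     # survival probability to time t
        pv += coupon * dt * d * q     # coupon paid only if the issuer survives
        pv += recovery * par * d * (q_prev - q)  # recovery on default in (t-dt, t]
        q_prev = q
        t += dt
    pv += par * math.exp(-rf * maturity) * math.exp(-hazard * maturity)  # principal
    return pv

# Toy usage: 5-year annual-pay bond, 5% coupon, 3% risk-free, 2% hazard, 40% recovery.
pv = risky_bond_pv(coupon=5.0, par=100.0, maturity=5, rf=0.03, hazard=0.02, recovery=0.4)
```

The fair spread for a rating/tenor bucket would then be the spread over the risk-free curve that equates this model price to the market price.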

(more…)

Statistical learning and macro trading: the basics

The rise of data science and statistical programming has made statistical learning a key force in macro trading. Beyond standard price-based trading algorithms, statistical learning also supports the construction of quantamental systems, which make the vast array of fundamental and economic time series “tradable” through cleaning, reformatting, and logical adjustments. Fundamental economic developments are poised to play a growing role in the statistical trading and support models of market participants. Machine learning methods automate this process and provide a basis for reliable backtesting and efficient implementation.

(more…)

How to estimate factor exposure, risk premia, and discount factors

The basic idea behind factor models is that a large range of assets’ returns can be explained by exposure to a small range of factors. Returns reflect factor risk premia and price responses to unexpected changes in the factors. The theoretical basis is arbitrage pricing theory, which suggests that securities are susceptible to multiple systematic risks. The statistical toolkit to estimate factor models has grown in recent years. Factors and exposures can be estimated through various types of regressions, principal components analysis, and deep learning, particularly in the form of autoencoders. Factor risk premia can be estimated through two-pass regressions and factor mimicking portfolios. Stochastic discount factors and loadings can be estimated with the generalized method of moments, principal components analysis, double machine learning, and deep learning. Discount factor loadings are particularly useful for checking whether a newly proposed factor adds any investment value.
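The two-pass approach can be illustrated on simulated data: a first pass of time-series regressions estimates each asset's factor exposures (betas), and a second cross-sectional pass regresses average returns on those betas to recover the factor risk premia. This is a stylized sketch; the sample sizes, noise level, and true premia below are all invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
T, N, K = 5000, 10, 2                     # periods, assets, factors
lam_true = np.array([0.5, -0.3])          # true per-period factor risk premia
betas_true = rng.normal(size=(N, K))
factors = lam_true + rng.normal(size=(T, K))          # tradable factors (excess returns)
returns = factors @ betas_true.T + 0.1 * rng.normal(size=(T, N))

# Pass 1: time-series regression of each asset on the factors (with intercept).
X = np.column_stack([np.ones(T), factors])
coefs, *_ = np.linalg.lstsq(X, returns, rcond=None)
betas_hat = coefs[1:].T                   # (N, K) estimated exposures

# Pass 2: cross-sectional regression of mean returns on estimated betas.
Xcs = np.column_stack([np.ones(N), betas_hat])
lam_hat, *_ = np.linalg.lstsq(Xcs, returns.mean(axis=0), rcond=None)
lam_hat = lam_hat[1:]                     # drop the cross-sectional intercept
```

With enough observations, the estimated premia approximate the true ones; in practice the second pass requires errors-in-variables corrections because the betas are themselves estimated.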

(more…)

Variance risk premia for patient investors

The variance risk premium manifests as a long-term difference between option-implied and expected realized asset price volatility. It compensates investors for taking short volatility risk, which typically comes with a positive correlation with the equity market and occasional outsized drawdowns.
A recent paper investigates a range of options-related strategies for earning the variance risk premium in the long run, including at-the-money straddle shorts, strangle shorts, butterfly spread shorts, delta-hedged shorts in call or put options, and variance swaps. Evidence since the mid-1990s suggests that variance is an attractive factor for the long run, particularly when positions take steady equal convexity exposure. Unlike other factor strategies, variance exposure has earned premia fairly consistently and typically recovered well from its intermittent large drawdowns.
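The premium itself is easy to state numerically: a variance swap pays the difference between realized variance over the life of the contract and a fixed strike set near implied variance, so a short position earns the strike minus realized variance. A minimal sketch, with entirely made-up numbers and the common zero-mean, 252-day annualization convention:

```python
def realized_variance(returns, periods_per_year=252):
    """Annualized realized variance from a series of (log) returns, zero-mean convention."""
    return periods_per_year * sum(r * r for r in returns) / len(returns)

def short_var_swap_pnl(returns, strike_vol, notional=1.0):
    """P&L of a short variance swap: notional * (strike variance - realized variance)."""
    return notional * (strike_vol ** 2 - realized_variance(returns))

# Toy usage: strike set at 20% implied vol, while realized vol comes in near 15% --
# the gap between the two is the variance risk premium earned by the short.
rets = [0.0094] * 126 + [-0.0094] * 126   # ~15% annualized realized vol over one year
pnl = short_var_swap_pnl(rets, strike_vol=0.20)
```

Maintaining "steady equal convexity exposure", as the paper suggests, would amount to holding a constant variance-swap notional rather than a constant vega or option count.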

(more…)

Classifying market regimes

Market regimes are clusters of persistent market conditions. They affect the relevance of investment factors and the success of trading strategies. The practical challenge is to detect market regime changes quickly and to backtest methods that may do the job. Machine learning offers a range of approaches to that end. Recent proposals include [1] supervised ensemble learning with random forests, which relate the market state to values of regime-relevant time series, [2] unsupervised learning with Gaussian mixture models, which fit various distinct Gaussian distributions to capture states of the data, [3] unsupervised learning with hidden Markov models, which relate observable market data, such as volatility, to latent state vectors, and [4] unsupervised learning with Wasserstein k-means clustering, which classifies market regimes based on the distance of observed points in a metric space.
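To make approach [2] concrete, a two-state Gaussian mixture can be fitted to a volatility series with a few lines of expectation-maximization. This is a from-scratch sketch rather than a production estimator, and the data are simulated with invented calm and turbulent regimes:

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=200):
    """EM for a 1-D Gaussian mixture: returns means, variances, weights,
    and the responsibility (regime probability) of each observation."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread out the initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior probability of each regime for each observation.
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate regime parameters from the responsibilities.
        n_k = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
        w = n_k / len(x)
    return mu, var, w, resp

# Toy usage: a calm regime (~1% daily vol) and a turbulent one (~5%).
rng = np.random.default_rng(7)
vols = np.concatenate([rng.normal(0.01, 0.002, 400), rng.normal(0.05, 0.008, 200)])
mu, var, w, resp = fit_gmm_1d(vols)
```

Classifying each day by its highest-responsibility component gives the regime label; hidden Markov models extend this by also modeling the persistence of transitions between the latent states.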

(more…)

The risk-reversal premium

The risk reversal premium manifests as an overpricing of out-of-the-money put options relative to out-of-the-money call options with equal expiration dates. The premium apparently arises from equity investors’ demand for downside protection, while many market participants are restricted from selling put options. A typical risk reversal strategy is a delta-hedged long position in out-of-the-money calls and an equivalent short position in out-of-the-money puts. Historically, the returns on such a strategy have been positive and displayed little correlation with the returns of the underlying stocks. The strategy does incur gap risk with a large downside, however. The long-term profit of risk-reversal strategies reflects implicit market subsidies related to “loss aversion”.
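The overpricing shows up directly in Black-Scholes terms: with a volatility skew, the out-of-the-money put trades at a higher implied volatility than the out-of-the-money call, so selling the put and buying the call collects a net premium. A minimal sketch with illustrative strikes, expiry, and skewed implied vols (before any delta-hedging):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(S, K, T, r, sigma, is_call):
    """Black-Scholes price of a European option (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if is_call:
        return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# Toy usage: buy a 10% OTM call, sell a 10% OTM put, 3-month expiry.
S, T, r = 100.0, 0.25, 0.02
call = bs_price(S, 110.0, T, r, sigma=0.18, is_call=True)   # call carries lower implied vol
put = bs_price(S, 90.0, T, r, sigma=0.24, is_call=False)    # put carries the skew premium
net_premium = put - call          # premium collected by the risk reversal position
```

With a flat implied-vol surface, the two wings would price more symmetrically; the positive net premium here comes entirely from the assumed skew.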

(more…)

Fundamental value strategies

Value opportunities arise when market prices deviate from contracts’ present values of all associated entitlements or obligations. However, this theoretical concept is difficult and expensive to apply. Instead, simple valuation ratios, such as real interest rates or equity earnings yields with varying enhancements, have remained popular. Moreover, value strategies can take a long time to pay off and positive returns may be concentrated in episodes of “critical transitions”.
Historically, it has been easier to predict relative value between similar contracts than absolute value. Also, simple valuation ratios become more meaningful when combined with related economic indicators. Thus, long-term bond yields are plausibly related to inflation expectations and the correlation of bond prices with economic cycles and market trends. Equity earnings yields can be enhanced by economic trends and market information. And effective exchange rates become a more meaningful metric when combined with inflation differentials and measures of competitiveness of a currency area.

(more…)

Equity factor timing with macro trends

Plausibility and empirical evidence suggest that the prices of equity factor portfolios are anchored by the macroeconomy in the long run. A new paper finds long-term equilibrium relations of factor prices and macro trends, such as activity, inflation, and market liquidity. This implies the predictability of factor performance going forward. When the price of a factor is greater than the long-term value implied by the macro trends, expected returns should be lower over the next period. The predictability seems to have been economically large in the past.
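The predictive mechanism can be illustrated on simulated data: if a factor price and a macro trend share a long-term equilibrium relation, the deviation of the price from its macro-implied value should predict the next period's price change with a negative sign. Everything below is synthetic and illustrative, not the paper's estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
macro = np.cumsum(rng.normal(size=T))             # macro trend: a random walk
gap = np.zeros(T)                                 # deviation: mean-reverting AR(1)
for t in range(1, T):
    gap[t] = 0.7 * gap[t - 1] + rng.normal()
price = macro + gap                               # factor price anchored to the macro trend

# Long-run relation: regress the price on the macro trend, keep the residual deviation.
X = np.column_stack([np.ones(T), macro])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
dev = price - X @ coef

# Error correction: next-period price change regressed on today's deviation.
dp = np.diff(price)
Z = np.column_stack([np.ones(T - 1), dev[:-1]])
ec, *_ = np.linalg.lstsq(Z, dp, rcond=None)
ec_coef = ec[1]            # expected to be negative: above-anchor prices revert
```

The negative error-correction coefficient is the formal counterpart of the statement that a factor price above its macro-implied long-term value implies lower expected returns over the next period.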

(more…)

Measuring the value-added of algorithmic trading strategies

Standard performance statistics are insufficient and potentially misleading for evaluating algorithmic trading strategies. Metrics based on prediction errors mistakenly assume that all errors matter equally. Metrics based on classification accuracy disregard the magnitudes of errors. And traditional performance ratios, such as the Sharpe, Sortino, and Calmar ratios, are affected by factors outside the algorithm, such as asset class performance, and implicitly rely on normally distributed returns. Therefore, a new paper proposes a discriminant ratio (‘D-ratio’) that measures an algorithm’s success in improving risk-adjusted returns versus a related buy-and-hold portfolio. Roughly speaking, the metric divides annualized return by a value-at-risk measure that does not assume normality, and then divides the result by the same ratio computed for the buy-and-hold portfolio. The metric can be decomposed into the contributions of return enhancement and risk reduction.
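The paper's exact formula is not reproduced here, but the general construction described above – an annualized return over an empirical (non-parametric) value-at-risk, benchmarked against the same ratio for buy-and-hold – can be sketched as follows; the quantile level, annualization, and toy return series are illustrative choices:

```python
import numpy as np

def return_over_var(returns, q=0.05, periods_per_year=252):
    """Annualized mean return divided by the empirical q-quantile loss (no normality assumed)."""
    ann_ret = periods_per_year * np.mean(returns)
    var_q = abs(np.quantile(returns, q))      # empirical value-at-risk
    return ann_ret / var_q

def d_ratio(strategy_returns, buy_hold_returns):
    """Ratio of risk-adjusted performance: above 1 means the algorithm added value."""
    return return_over_var(strategy_returns) / return_over_var(buy_hold_returns)

# Toy usage: buy-and-hold with occasional large losses, versus a hypothetical
# loss-capping overlay standing in for the algorithm being evaluated.
bh = np.concatenate([np.full(900, 0.003), np.full(100, -0.02)])
strat = np.maximum(bh, -0.005)
```

The decomposition mentioned above falls out naturally: the strategy's edge here comes partly from a higher mean return and partly from a smaller tail loss, and the two contributions can be reported separately.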

(more…)