A big data Bayesian approach to earnings profitability in the S&P 500

Teik-Kheong Tan (Department of Graduate Studies, Asia e University (AeU), Kuala Lumpur, Malaysia)
Merouane Lakehal-Ayat (Department of Accounting and Finance, St. John Fisher College, Rochester, New York, USA)

PSU Research Review

ISSN: 2399-1747

Article publication date: 13 March 2018

Issue publication date: 12 April 2018


Abstract

Purpose

The impact of volatility crush can be devastating to an option buyer and results in a substantial capital loss, even with a directionally correct strategy. As a result, most volatility plays are for option sellers, but the profit they can achieve is limited and the sellers carry unlimited risk. This paper aims to demonstrate the dynamics of implied volatility (IV) as being influenced by the effects of persistence, leverage, market sentiment and liquidity. From the exploratory factor analysis (EFA), the authors extract four constructs, and the results from the confirmatory factor analysis (CFA) indicate a good model fit for the constructs.

Design/methodology/approach

The paper describes the methodology used for conducting the study, including the study area, study approach, sources of data, sampling technique and the method of data analysis.

Findings

Although there is extensive literature on methods for estimating IV dynamics during earnings announcements, few researchers have looked at the impact of the expected market maker move, IV differential and IV Rank on the IV path after the earnings announcement. One reason for this research gap is the relatively recent introduction of weekly options for equities by the Chicago Board Options Exchange (CBOE) in late 2010. Even then, the CBOE only released weekly options for four individual equities – Bank of America (BAC.N), Apple (AAPL.O), Citigroup (C.N) and US-listed shares of BP (BP.L) (BP.N). The introduction of weekly options provided more trading flexibility and precision timing from shorter durations. This automatically expanded expiration choices, which in turn offered greater access and flexibility from the perspective of trading volatility during earnings announcements. This study has demonstrated the impact of including market sentiment and liquidity in the forecasting model for IV during earnings. This understanding in turn helps traders to formulate strategies that can circumvent the undefined risk associated with trading options strategies such as writing strangles.

Research limitations/implications

The first limitation of the study is that the firms included in the study are relatively large, and the results of the study can therefore not be generalized to medium-sized and small firms. The second limitation lies in the current sample size, which in many cases was not large enough to draw reliable conclusions from. Scaling the sample size up is only a function of time and effort. This is easily overcome and should not be a limitation in the future. The third limitation concerns the measurement of the variables. Under the assumption of a normal distribution of returns (i.e. stock prices follow a random walk process), which means that the distribution of returns is symmetrical, one can estimate the probabilities of potential gains or losses associated with each amount. This means the standard deviation of securities returns, which is called historical volatility and is usually calculated as a moving average, can be used as a risk indicator. The prices used for the calculations are usually the closing prices, but Parkinson (1980) suggests that the day’s high and low prices would provide a better estimate of real volatility. One can also refine the analysis with high-frequency data. Such data enable the avoidance of the bias stemming from the use of closing (or opening) prices, but they have only been available for a relatively short time. The length of the observation period is another topic that is still under debate. There are no criteria that enable one to conclude that volatility calculated in relation to mean returns over 20 trading days (or one month) and then annualized is any more or less representative than volatility calculated over 130 trading days (or six months) and then annualized, or even than volatility measured directly over 260 trading days (one year). Nonetheless, the guidelines adopted in this study represent the best practices of researchers thus far.

Practical implications

This study has indicated that an earnings announcement can provide a volatility mispricing opportunity to allow an investor to profit from a sudden, sharp drop in IV. More specifically, the methodology developed by Tan and Bing is now well supported both empirically and theoretically in terms of qualifying opportunities that can be profitable because of the volatility crush. Conventionally, the option strategy of shorting strangles carries unlimited theoretical risk; however, the methodology has demonstrated that this risk can be substantially reduced if followed judiciously. This profitable strategy relies on a set of qualifying parameters including liquidity, premium collection, volatility differential, expected market move and market sentiment. Building upon this framework, the understanding of the effects of persistence and leverage resulted in further reducing the risk associated with trading options during earnings announcements. As a guideline, the sentiment and liquidity variables help to qualify a trade and the effects of persistence and leverage help to close the qualified trade.

Social implications

The authors find a positive association between the effects of market sentiment, liquidity, persistence and leverage in the dynamics of IV during earnings announcement. These findings substantiate further the four factors that influence IV dynamics during earnings announcement and conclude that just looking at persistence and leverage alone will not generate profitable trading opportunities.

Originality/value

The impact of volatility crush can be devastating to the option buyer with substantial capital loss, even for a directionally correct strategy. As a result, most volatility plays are for option sellers; however, the profit is limited and the sellers carry unlimited risk. The authors demonstrate the dynamics of IV as being influenced by the effects of persistence, leverage, market sentiment and liquidity. From the EFA, they extracted four constructs, and the results from the CFA indicated a good model fit for the constructs. Using EFA, CFA and Bayesian analysis, the authors demonstrate how this model can help investors formulate the right strategy to achieve the best risk/reward mix. Using Bayesian estimation and IV differential to proxy for differences of opinion about term structures in option pricing, the authors find a positive association among the effects of market sentiment, liquidity, persistence and leverage in the dynamics of IV during earnings announcements.

Citation

Tan, T.-K. and Lakehal-Ayat, M. (2018), "A big data Bayesian approach to earnings profitability in the S&P 500", PSU Research Review, Vol. 2 No. 1, pp. 35-58. https://doi.org/10.1108/PRR-04-2017-0023

Publisher: Emerald Publishing Limited

Copyright © 2018, Teik-Kheong Tan and Merouane Lakehal-Ayat.

License

Published in the PSU Research Review: An International Journal. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

Can an earnings announcement provide a volatility arbitrage opportunity that allows an investor to profit from a sudden, sharp drop in implied volatility (IV) that triggers a similarly steep decline in an option’s value? Tan and Bing (2014) developed a methodology that allows an investor to profit from this volatility crush phenomenon. In particular, the strategy allows for shorting strangles while containing the risk associated with this option strategy. This profitable strategy relies on a set of qualifying parameters including liquidity, premium collection, volatility differential, expected market move and market sentiment. Building upon this framework, we investigate the effects of persistence and leverage on reducing the risk associated with trading options during earnings announcements, in the post-earnings event scenario. The objective of the research is to determine the association between the effects of market sentiment, liquidity, persistence and leverage in the dynamics of IV during earnings announcements.

The causal relationships between persistence and leverage, as well as sentiment and liquidity, are modeled using SPSS Analysis of Moment Structures (AMOS). Data collected were analyzed using factor analysis by principal components. From the exploratory factor analysis (EFA), we proceed to the confirmatory factor analysis (CFA) to focus on the link between the factors and their measured variables. Within AMOS, the provision of structural equation modeling (SEM) is used to compare, confirm and refine the model. Also integrated within AMOS is the ability to conduct Bayesian analysis, which further improves the estimates of the model parameters and provides the opportunity to compare estimated values derived from both the maximum likelihood (ML) and Bayesian approaches to analyses of the same CFA model.

Successful option traders use volatility to their advantage on most trades. Even a basic option strategy, like buying a call, can be statistically helped when the dynamics of volatility are well understood. This study focuses on the problem of volatility forecasting in the financial markets. It begins with a general description of volatility and its properties, and discusses its usage in deterministic events such as corporate earnings. These events usually witness a rise in IV, which allows the investor to profit from the subsequent drop (volatility crush) after the event by using strategies such as strangles. The study of information disclosures (earnings) and their associated risks has been a topic of extensive research since the pioneering work of Ball and Brown (1968). Some of these risks, however, can be mitigated by qualifying the trade appropriately. As the volatility crush is the key determinant of profitability, we modeled the crush between the IV of the front and next earliest expiration using Bayesian statistics. The accuracy of the Bayesian model is quantified using examples from the tech sector, such as Google and eBay (Tan and Bing, 2014). This study investigates theoretically and empirically the dynamics of the IV around earnings announcement dates. To do this, we present a theoretical framework for the change dynamics in IV that takes into account two well-known features: volatility clustering and the leverage effect.
In this context, the IV should decrease after an earnings announcement, but the post-announcement IV path depends on the content of the earnings announcement: good news or bad news. The empirical investigation was conducted on the selected S&P 500 stocks over the period 2010-2014.

Motivation for the study

By trading on corporate earnings, traders and investors can reliably profit from the markets in all directions while avoiding market risk for an entire quarter. As earnings announcements are highly volatile events, the selling of strangles can be profitable because of the volatility crush, but it also comes with additional risk. This risk can be mitigated by qualifying the trade appropriately; without proper qualification, however, the risk is virtually unlimited. Therefore, it is imperative for successful options traders to know the forces that influence IV during earnings to mitigate the risk associated with shorting strangles.

Research methodology

This section describes the methodology used for conducting the study. This includes the study area, study approach, sources of data, sampling technique and the method of data analysis.

Background of study area

The stock price behavior of companies in the Standard and Poor’s 500 Stock Index (S&P 500) has long been an area of interest to financial economists (Beneish and Whaley, 1996; Lynch and Mendenhall, 1997). We select companies that qualify in terms of liquidity, volume and open interest. Liquidity refers to the tightness of the option bid/ask spread and is characterized by a high level of trading activity. The minimum acceptable volume for the front month strike is at least 500 contracts and the open interest must be at least 1,000 contracts. For the bid-ask spread, we would prefer a difference of 1 cent, although this is normally hard to achieve; we will accept a spread of no more than 10 cents.
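The screen just described can be expressed as a simple filter. The sketch below is ours, not part of the study's toolchain, and the argument names are hypothetical:

def qualifies(front_month_volume, open_interest, bid, ask):
    """Return True if an option chain meets the liquidity screen described in the text."""
    spread = ask - bid
    return (
        front_month_volume >= 500     # minimum front-month volume (contracts)
        and open_interest >= 1_000    # minimum open interest (contracts)
        and spread <= 0.10            # accept at most a 10-cent bid-ask spread
    )

# Example: 2,300 contracts traded, 5,400 open interest, 5-cent spread -> passes the screen.
print(qualifies(2_300, 5_400, bid=12.40, ask=12.45))  # True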

Study approach

The methodology used examines a first-order EFA and CFA model designed to test the multidimensionality of the theoretical construct. Specifically, this methodology tests the hypothesis that IV crush for earnings announcement is a multidimensional construct composed of four factors – market sentiment, liquidity, persistence and leverage. The theoretical underpinning of this hypothesis derives partly from the paper by Black (1976). In this paper, some well-known characteristics common to many financial time series are explained. Volatility clustering is often observed (i.e. large changes tend to be followed by large changes and small changes tend to be followed by small changes (Mandelbrot, 1963). Second, the “leverage effect,” refers to the fact that changes in stock prices tend to be negatively correlated with changes in volatility (i.e. volatility is higher after negative shocks than after positive shocks of same magnitude).

The goal of this study is to investigate the dynamics of IV around earnings dates, and more specifically, the behavior of the volatility implied in options prices around these events. In particular, this research investigates the relationship between the pre-event and post-event effects on volatility crush after the earnings announcement date. The post-event effects that cause this effect on volatility crush are persistence and leverage. While much has been reported on these two effects, very little has been reported on the pre-event effects such as liquidity and market sentiment. A natural outcome of this understanding allows the investor to profit from the rise and fall of IV via writing strangles and other option strategies. The order of study proceeds as follows: First, we establish measurements that help qualify good earnings candidates for premium collection in anticipation for volatility crush. Second, we ensure that liquidity parameters are met for ease of execution. This includes the bid-ask spread, open interest and volume. Next, we perform a series of analysis to study the behavior of the volatility implied in options prices on different types of stocks during their earnings announcement season. This includes measuring the increase in pre-event IV, event date and subsequent IV crush post event. The post-event effects of persistence and leverage will be also analyzed. In particular, the crush impact because of the nature of the earnings announcement will be studied. The direct relation between the four constructs will be analyzed. Based on these relations the indirect relation between the persistence effect, leverage effect and market sentiment and liquidity through volatility will be derived.

Sample size and data selection

This study focuses on stock exchange quoted firms primarily from the S&P 500. The S&P 500 stock market index, maintained by S&P Dow Jones Indices, comprises 500 common stocks issued by 500 large-cap companies and traded on American stock exchanges, and covers about 75 per cent of the American equity market by capitalization. The index is weighted by free-float market capitalization, so more valuable companies account for relatively more of the index. The index constituents and the constituent weights are updated regularly using rules published by S&P Dow Jones Indices.

Statistically, the sample should be large enough to obtain a reliable regression model. The rule of thumb provided in the Statistics Handbook is that there should be 15 cases of data per predictor (independent variable). To increase the data, a period of five years is selected. This period runs from 2010 up to and including 2014. The sample included 53 stock exchange-listed companies.
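As a quick arithmetic check (our reading, treating the 16 study variables described below as predictors for the purposes of this rule and assuming one observation per firm per quarterly earnings announcement), the rule of thumb implies a minimum of

$15 \times 16 = 240$ observations,

while the chosen sample gives

$53 \ \text{firms} \times 5 \ \text{years} \times 4 \ \text{quarters per year} = 1{,}060$ observations,

which is the sample size of 1,060 reported in the empirical results below.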

Method of data analysis

Factor analysis by principal components was adopted in the data analysis for the purpose of partitioning of the variables into factors that influence the dynamic of IV. The purpose of factor analysis is to summarize the interrelationship and establish levels of variances in decision variables as they influence the given phenomenon. We present the results from the template observed in Adebayo (2008).

Variables used in the analysis

To demonstrate how certain variables affect the dynamics of IV, we select the following 16 variables to study.

EXPMOVE.

This represents the expected market maker move (MMM) for the underlying stock. It is a measure of the expected magnitude of price movement based on market volatility. The MMM is derived by using the stock price, volatility differential and time to expiration. It helps to identify the implied move because of an event between now and the front month expiration (if an event exists). In this study, the MMM is a measure of the implied move based on the volatility differential between the front and back month. This is useful in cases where an event (i.e. earnings) takes place in the front month and one would like to estimate the implied move because of that event.
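The exact MMM calculation used by trading platforms is not reproduced in this paper; purely as a hedged illustration, a common back-of-the-envelope approximation scales the stock price by the annualized IV and the square root of the time to expiration:

import math

def approx_expected_move(stock_price, implied_vol, days_to_expiry):
    """One-standard-deviation expected move implied by an annualized IV (rough approximation, not the platform MMM)."""
    return stock_price * implied_vol * math.sqrt(days_to_expiry / 365.0)

# Example: a $150 stock with 60% annualized IV and 7 days to the front-month expiration.
print(round(approx_expected_move(150.0, 0.60, 7), 2))  # ~12.46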

VOLDIFF.

The volatility differential typically represents the difference in volatility between the front period and the back period. The period could represent weekly or monthly depending on the option term structure.

VOLRANK.

For every option chain of an underlying (asset, index, future, exchange traded funds, etc.), there is a calculated IV (most option platforms provide this information). By comparing the current IV for the option chain with the IV range over the last 52 weeks (its highest and lowest values), we can determine where within that range it falls as a percentage. When using option strategies that generate premium (i.e. credit spreads, naked shorts, iron condors, strangles), the greater the IV, the higher the premium and the further out of the money the short strikes. The equation for IV Rank as a percentage is:

$\text{IV Rank} = 100 \times \dfrac{\text{Current IV} - \text{52-Week Low IV}}{\text{52-Week High IV} - \text{52-Week Low IV}}$
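As a minimal sketch of this calculation (function and variable names are ours):

def iv_rank(current_iv, low_52wk, high_52wk):
    """Where the current IV sits within its 52-week range, expressed as a percentage."""
    return 100.0 * (current_iv - low_52wk) / (high_52wk - low_52wk)

# Example: current IV of 45% against a 52-week range of 20% to 80%.
print(round(iv_rank(0.45, 0.20, 0.80), 1))  # 41.7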

EVENTM1.

This represents the IV of the underlying one day before the event (earnings announcement). The IV is expected to be at its peak.

PREMIUM.

This is the amount that could be collected by shorting a strangle.

BIDASK.

The difference between the bid and asked prices, or the spread, is a key indicator of the liquidity of the asset. Generally speaking, the tighter the spread, the more favorable it is for the investor.

OPENINT.

The total number of options contracts that are not closed or delivered on a particular day.

VOLUME.

Trading volume gives you important insight into the strength of the current market direction for the option’s underlying stock. The volume, or market breadth, is measured in shares and tells you how meaningful the price movement in the market is.

EVENT0.

This is the measure of the IV on the day when earnings are announced.

EVENTP1.

This is the measure of the IV one day after earnings are announced.

EVENTP2.

This is the measure of the IV two days after earnings are announced.

EVENTP3.

This is the measure of the IV three days after earnings are announced.

TERM0.

This is the measure of the IV for the current term.

TERM1.

This is the measure of the IV for the next term.

TERM2.

This is the measure of the IV for the term after the next.

EPS.

This is the earnings per share from the most current earnings cycle.

Theoretical framework

Having established the variables and their interaction and the underlying theory governing them, we now consider the theoretical aspect of this study. From the Black–Scholes option pricing model (Hull, 2002), we know the price of a call option on a non-dividend stock can be written as:

(1.1) $C_t = S_t N(d_1) - X e^{-r\tau} N(d_2)$

and the price of a put option on a non-dividend stock can be written as:

(1.2) $P_t = X e^{-r\tau} N(-d_2) - S_t N(-d_1)$

where

P = option price;

S = stock price;

X = exercise price of the option;

T = time to expiration of the option;

r = continuous risk-free rate of interest; and

σ = standard deviation of continuous returns on the stock per unit time.

(1.3) $d_1 = \dfrac{\ln(S_t/X) + \left(r + \dfrac{\sigma_s^2}{2}\right)\tau}{\sigma_s \sqrt{\tau}}$

(1.4) $d_2 = \dfrac{\ln(S_t/X) + \left(r - \dfrac{\sigma_s^2}{2}\right)\tau}{\sigma_s \sqrt{\tau}} = d_1 - \sigma_s \sqrt{\tau}$

$\tau = T - t$

N(•) is the cumulative density function of normal distribution and

(1.5) $N(d_1) = \displaystyle\int_{-\infty}^{d_1} f(u)\,du = \int_{-\infty}^{d_1} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\,du$

Specifically, the Black–Scholes model may be written as follows:

(1.6) $P = P(S, X, T, r, \sigma)$

A key assumption of the Black–Scholes model is that volatility is constant. While volatility can be relatively constant over a very short time, it is never constant all the time, especially during binary events like earnings announcements. Over the years, various extensions have been made to overcome most of the model’s restrictions; the assumption of constant volatility remains the most important one. Bear in mind that the Black–Scholes formulas are primarily used for European options. This closed-form solution has become a standard in the financial community. One parameter in the Black–Scholes model that cannot be directly observed is the volatility – hence IV is derived as a proxy. Essentially, implied volatilities are the volatilities implied by the market prices of the options. Using the market prices of call and put options for different maturities and with all other parameters known (except volatility), one can work backwards through the Black–Scholes formula and estimate the volatility. This is not an exact science, and the IV is usually obtained via an iterative trial-and-error procedure. Given that the volatility is a constant in the Black–Scholes formula, several researchers have made realistic improvements to the formula. Of particular interest is the work by Merton, who showed that the Black–Scholes valuation formula is virtually unchanged if volatility is a deterministic function of time (Merton, 1973). The only difference is in the definition of the variance σ². Per Black–Scholes, σ² is the constant instantaneous variance. However, Merton showed that σ² can be somewhat more generally defined as the average variance from the valuation date to the option expiration date:

(2.1) $\sigma^2(T) = \dfrac{1}{T}\int_0^T \sigma^2(t)\,dt$
where σ2 (t) is the instantaneous variance at time t. From the original Black–Scholes, which assumes a constant volatility, we now have a model to define average volatility from the valuation date to expiration date of each option at the appropriate strikes. However, if volatility evolves independently of the underlying asset price and no priced risk is associated with the option, the correct price of an option should equal the expected value of the Black–Scholes formula, evaluating the variance argument at average variance until expiry.
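Returning to the backing-out of IV described above, the following is a minimal sketch (ours, not the authors’ code) that prices a call with equation (1.1) and bisects on the volatility until the model price matches an observed market price:

import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, X, tau, r, sigma):
    """Black-Scholes price of a European call on a non-dividend stock, equation (1.1)."""
    d1 = (math.log(S / X) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - X * math.exp(-r * tau) * norm_cdf(d2)

def implied_vol(market_price, S, X, tau, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection search for the volatility that reproduces the observed call price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, X, tau, r, mid) > market_price:
            hi = mid                  # model price too high -> lower the volatility
        else:
            lo = mid                  # model price too low  -> raise the volatility
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Example: recover a 60 per cent volatility from the price it generates.
price = bs_call(S=100.0, X=100.0, tau=7 / 365, r=0.01, sigma=0.60)
print(round(implied_vol(price, S=100.0, X=100.0, tau=7 / 365, r=0.01), 4))  # ~0.6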

Leveraging the work of Merton (1973), research on the time-series behavior of IV around earnings announcements was extended by Donders and Vorst (1996), and Patell and Wolfson (1979). They postulated that for earnings announcements, traders know in advance when corporate information will be released. Obviously, this must occur before the option expiration for any profit to be realized. This creates a buzz especially with high-profile stocks such as AAPL or GOOG because of the uncertainty of the announcement as documented by Tan and Bing (2014). A higher instantaneous volatility on the earnings release is expected, as there is risk of the unknown associated with the news break. Prior to this date, they assumed that the instantaneous volatility is constant. Given this assumption, the expected average volatility (i.e. IV) to expiration rises to a maximum immediately before the earnings disclosure because of the steady decrease in the time to expiration. Once the earnings disclosure is made, the expectation is for IV to drop to its normal level. However, there have been numerous cases whereby the earnings conference further exacerbated the IV fluctuations. Mathematically, Donders and Vorst rewrote equation (2.1) in a way that reflected the way IV rises to maximum before earnings disclosure and how it dropped back to its normal level after the uncertainty has been removed (Donders and Vorst, 1996). They define IV as the average volatility until maturity of the option:

(2.2) $IV = \sqrt{\dfrac{x-1}{x}\,\sigma^2_{\mathrm{normal}} + \dfrac{1}{x}\,\sigma^2_{\mathrm{high}}}$

where x is the number of days until the expiration date of the option, $\sigma^2_{\mathrm{normal}}$ is the variance on a day when there is no news announcement and $\sigma^2_{\mathrm{high}}$ is the variance on a day when scheduled earnings are released. Graphically this is depicted in Figure 1, which shows the IV as a function of time until and after the earnings announcement.
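As a minimal numerical sketch of equation (2.2) (our own illustration, using the 20 per cent/40 per cent instantaneous volatilities assumed in Figure 1 and described next), the IV to expiry rises as the announcement day comes to dominate the remaining window and collapses once the announcement has passed:

import math

def iv_until_expiry(days_to_expiry, sigma_normal, sigma_high, announcement_in_window):
    """Average volatility to expiry per equation (2.2): one announcement day at sigma_high,
    the remaining days at sigma_normal."""
    x = days_to_expiry
    if not announcement_in_window or x <= 0:
        return sigma_normal
    avg_var = ((x - 1) / x) * sigma_normal ** 2 + (1 / x) * sigma_high ** 2
    return math.sqrt(avg_var)

# IV rises as expiration (and the embedded announcement day) approaches ...
for days in (20, 10, 5, 1):
    print(days, round(iv_until_expiry(days, 0.20, 0.40, True), 4))
# ... and reverts to sigma_normal once the announcement is out of the window.
print(round(iv_until_expiry(19, 0.20, 0.40, False), 4))  # 0.2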

The bars [graphic] represent the instantaneous volatility, which is constant except on the announcement date, and the [graphic] line depicts the evolution of the IV (Donders and Vorst, 1996). This figure assumes the following: maturity of the option is 20 days after the event date, and the instantaneous volatility = 20 per cent except on the announcement date, where it is equal to 40 per cent. The visual of the model indicates that IV tends to increase progressively before hitting the top. This typically happens on the day of the announcement (just before the announcement, which could be 30 min before the market opens on that day or 30 min after the close of the market). What happens thereafter depends on a few factors. The conventional thinking is that IV should drop to its normal level. However, that is not always the case. Drawing upon this model, we extended the assumptions regarding the post-event announcement, taking into account the Bayesian parameters used in this study. Equation (2.2) is re-written to reflect the weekly options term structure and the varying degrees of volatility crush based on the time to expiration. The limitation of equation (2.2) lies in the assumption established by Donders et al. (2000). That research, however, was conducted for the Dutch stock market, where they observed a two-day decrease in IV following earnings. An earlier paper by Donders and Vorst (1996) concluded that IV decreases sharply on the earnings announcement day. Clearly, the results today indicate that the effects of persistence and leverage tend to cause the IV to decrease according to a variety of factors, chief among which are the effects of volatility rank and historic volatility. What is unclear, however, is the rate at which IV decreases. The Donders paper points to a sharp decline on the day of the announcement; however, empirical data from the US market research we conducted show a difference, which is influenced by the nature of the market sentiment on the days following the announcement (Donders and Vorst, 1996). Borrowing the concepts established by David, we modeled the inclusion of previous earnings and current volatility rank by way of several variables for the earnings together with some factors to account for the after-earnings days (David and Veronesi, 2008). Equation (2.3) would now allow for the measure of the daily variation of IV based on market sentiment as well.

(2.3) $IV = \sqrt{\dfrac{x-1}{x}\,\sigma^2_{\mathrm{normal}} + \dfrac{1}{x}\,\sigma^2_{\mathrm{high}}}$

where

(2.4) $dIV_{it} = \rho + \omega D_{\mathrm{announce}} + \displaystyle\sum_{n=1}^{10} z_n D_n$

ρ = average variation of IV right before earning announcement;

ω = average deviation from ρ on the announcement date;

Dannounce = 1 on earnings announcement date; 0 on non-announcement days;

zn = average variation of IV after announcement date on n-th day; and

Dn = 1 on n-th day after announcement date, 0 otherwise

for i = 1, …250 and t = −10, …, +10.
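A minimal sketch of estimating equation (2.4) by ordinary least squares on dummy regressors is given below. The daily IV changes are synthetic stand-ins; in the study, the dependent variable would be built from the event windows (t = −10, …, +10) around each announcement:

import numpy as np

rng = np.random.default_rng(0)
n_events, window = 250, np.arange(-10, 11)           # i = 1..250, t = -10..+10

rows, y = [], []
for _ in range(n_events):
    for t in window:
        d_announce = 1.0 if t == 0 else 0.0
        d_post = [1.0 if t == n else 0.0 for n in range(1, 11)]   # D_1 .. D_10
        rows.append([1.0, d_announce] + d_post)
        # Synthetic dIV: small drift before, a spike on the announcement, decay after.
        true = 0.01 + (0.30 if t == 0 else 0.0) + (-0.05 / t if t > 0 else 0.0)
        y.append(true + rng.normal(scale=0.02))

X, y = np.asarray(rows), np.asarray(y)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rho, omega, z = coef[0], coef[1], coef[2:]
print(round(rho, 3), round(omega, 3), np.round(z, 3))  # recovers the simulated pattern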

Results and discussion

Test of sampling adequacy

Aside from the raw data matrix, the first matrix encountered in the factor analysis is the correlation matrix. From Table I, there are many medium to large correlations in the matrix, and every variable has some large correlations. This is a reasonable result to expect (there are no sizeable negative correlations).

We can test the appropriateness of factor analysis via the use of Bartlett test of sphericity – a statistical test for the presence of correlations among the variables. This test (Table II) can be used to test the null hypothesis that our sample was randomly drawn from a population in which the correlation matrix was an identity matrix. Bartlett’s test was used in the test for the appropriateness of the sample from the population and the suitability of factor analysis. It tests for the adequacy of the sample as a true representation of the population under study. A significance level <0.05 indicates presence of correlations among the variables. Kaiser’s (Kaiser–Meyer–Olkin [KMO]) Measure of Sampling Adequacy (MSA) is another measure of sample adequacy. It is an index for comparing magnitudes of the observed correlation coefficients between all pairs of variables. Kaiser has described MSAs above 0.9 as marvelous, above 0.8 as meritorious, above 0.7 as middling, above 0.6 as mediocre, above 0.5 as miserable and below 0.5 as unacceptable. In our case, the KMO value of 0.843 indicates a strong measure of sample adequacy.
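For illustration, Bartlett’s test of sphericity can be computed directly from a sample correlation matrix using the standard statistic; the sketch below is ours, and the identity matrix is only a placeholder for the study’s 16 × 16 correlation matrix:

import numpy as np
from scipy import stats

def bartlett_sphericity(R, n):
    """Chi-square statistic, degrees of freedom and p-value for H0: R is an identity matrix."""
    p = R.shape[0]
    chi_sq = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi_sq, df, stats.chi2.sf(chi_sq, df)

# With p = 16 variables the degrees of freedom are 120, as in Table II; plugging in the
# observed correlation matrix (determinant of roughly 0.001) and n = 1,060 gives a
# chi-square of the same order as the 6932.675 reported there.
R = np.eye(16)   # placeholder: substitute the study's observed correlation matrix
print(bartlett_sphericity(R, n=1060))   # identity matrix -> chi-square of 0 (no correlation structure)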

Factor extraction

Under the factor extraction method, the principal component analysis method is deployed. Data reduction is our primary concern, in addition to assessing overall model fit. To decide the number of factors to extract, we consider the latent root criterion (the most commonly used). Under this method, only factors having latent roots or eigenvalues greater than 1 are considered significant; all factors with eigenvalues less than 1 are considered insignificant and disregarded.

Total variance explained

The 16 variables used in the study were subjected to factor extraction by principal component. The output of the analysis contained the initial component matrix, which was subjected to rotation to fine-tune the loadings on each factor. The initial Eigenvalues, the percentage variance explained and the rotation sum of square loadings are presented in Table III. If we take 65 per cent of the total variance as satisfactory, we would potentially have four components as illustrated.
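The eigenvalues and percentages of variance explained in Table III follow directly from the correlation matrix of the 16 standardized variables; a minimal sketch (with synthetic data standing in for the study’s correlation matrix) is shown below:

import numpy as np

def variance_explained(R):
    """Eigenvalues of a correlation matrix and the cumulative % of variance explained."""
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending order
    pct = 100.0 * eigvals / eigvals.sum()            # eigenvalues of a correlation matrix sum to p
    return eigvals, np.cumsum(pct)

# Synthetic stand-in: 1,060 observations on 16 variables with some built-in correlation.
rng = np.random.default_rng(1)
X = rng.normal(size=(1060, 16))
X[:, 1:4] += X[:, [0]]
R = np.corrcoef(X, rowvar=False)

eigvals, cum_pct = variance_explained(R)
retained = int((eigvals > 1).sum())                  # latent root criterion
print(retained, np.round(cum_pct[:4], 1))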

Scree test criterion

Another method for deciding on the number of components to retain is the scree test. This is a plot with eigenvalues on the ordinate and component number on the abscissa. The plot (Figure 2) provides a visual aid for deciding at what point including additional components no longer increases the amount of variance accounted for by a nontrivial amount.

Number of components in rotated solution

There are two forms of rotation, namely, orthogonal and oblique. We used the Varimax rotation method, which is an orthogonal rotation method. It was chosen because it produces more meaningful loadings and because the rotation converged after five iterations, which is acceptable. The result of the Varimax rotation was used for interpretation, and the component matrix is presented in Figure 3 together with the output of the structural equation modeling (SEM) path diagram.

This provides the critical linkage between the EFA and CFA. Figure 3 shows how we derived the four components (factors) that form the basis of our hypothesis about what influences the dynamics of IV during earnings announcements. Even though the rotated component matrix shows three factors, from experience, it is deemed more appropriate to distribute the loadings of the dependent variables (BIDASK, OPENINT, VOLUME and PREMIUM) onto a new factor (denoted as Liquidity below). This decision is also supported by Table III above.
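For readers who want to reproduce a Varimax rotation outside SPSS, the following self-contained sketch (ours, on synthetic data) rotates unrotated principal-component loadings using the standard varimax criterion:

import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Rotate a p x k loading matrix to maximize the varimax criterion."""
    p, k = loadings.shape
    R, d = np.eye(k), 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        )
        R, d_old = u @ vt, d
        d = s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R

# Unrotated loadings from a principal-component extraction of a synthetic correlation matrix.
rng = np.random.default_rng(2)
X = rng.normal(size=(1060, 16))
X[:, 1:4] += X[:, [0]]                      # build in some correlated structure
R_corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R_corr)
order = np.argsort(eigvals)[::-1][:4]       # keep four components, as in the study
unrotated = eigvecs[:, order] * np.sqrt(eigvals[order])
print(np.round(varimax(unrotated), 2))      # rotated component matrix (cf. Figure 3)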

From exploratory factor analysis to confirmatory factor analysis

From Figure 3, we now have the extracted components (factors) to progress to the next stage of CFA. The results of the study will be discussed in the following order: First, the results from SEM will be addressed. This includes assessing the model fit and estimates. This section is broken into Model summary, Model variables and parameters and Model Evaluation. Second, the results regarding Bayesian estimation will be presented. This allows the opportunity to compare estimated values derived from both the ML and Bayesian approaches to analyses of the same CFA. This section ends with the results of the two validity analyses.

Variable dependencies

The model to be tested postulates a priori that IV dynamic is a four-factor structure composed of Sentiment, Liquidity, Persistence and Leverage; it is presented schematically as a path diagram in Figure 4. This represents our NULL hypothesis. Before any discussion on the testing of this model, we will dissect the model and list its component parts as follows:

  • there are four IV factors, as indicated by the four ellipses labeled Sentiment, Liquidity, Persistence and Leverage;

  • the four factors are intercorrelated, as indicated by the two-headed arrows;

  • there are 16 observed variables, as indicated by the 16 rectangles;

  • the observed variables load on the factors in the following pattern: EXPMOVE, VOLDIFF, VOLRANK and EVENTM1 load on Factor 3 (Sentiment), PREMIUM, BIDASK, OPENINT and VOLUME load on Factor 4 (Liquidity), EVENT0, EVENTP1, EVENTP2 and EVENTP3 load on Factor 2 (Persistence) and TERM0, TERM1, TERM2 and EPS load on Factor 1 (Leverage);

  • each observed variable loads on one and only one factor; and

  • errors of measurement associated with each observed variable are uncorrelated.

Summarizing these observations, a more formal description of the hypothesized model can be made. As such, we state that the CFA model presented in Figure 4 hypothesizes a priori that:

  • IV dynamics can be explained by four factors: Sentiment, Liquidity, Persistence and Leverage;

  • each item-pair measure has a nonzero loading on the IV dynamic factor that it was designed to measure (termed a target loading), and a zero loading on all other factors; and

  • the four IV factors, consistent with the theory, are correlated.

Unobserved, exogenous variables

Sentiment.

This variable captures the build-up of volatility as reflected in the option chain for each term structure. As we are dealing primarily with options, the market sentiment is reflected primarily through the option chain and the term structure of the options. This is where the volatility measures are determined.

Liquidity.

This variable describes the viability, or ease, with which an option strategy can be executed. Without sufficient liquidity, it becomes very difficult to enter or exit a trade.

Leverage.

The leverage effect relates to the way the instantaneous volatility reacts to past news: volatility has been shown to increase more after a negative shock (bad news) than after a positive shock (good news) of the same magnitude.

Persistence.

The persistence effect relates to the tendency of volatility shocks to persist: when volatility rises abruptly, it takes some time to return to normal. In our context, it is reasonable to assume that higher volatility persists after a disclosure of information.

Structural equation modeling methodology

To overcome the gaps in the study of IV, we developed a theoretical framework for earnings announcement analysis, including the extended, unique, domain-relevant components. This framework will provide support for future researchers who wish to develop a theoretical framework of IV dynamics (a framework for domain-specific earnings plays). To simultaneously investigate the interactions, SEM is used. SEM is a powerful method of quantitative analysis used to investigate the complex relationships between independent variables and dependent variables. The biggest strengths of SEM are the minimization of measurement error and simultaneous estimation (Hair et al., 2006). Tabachnick and Fidell (2001) stated that measurement errors are reduced because the error has been estimated and removed, leaving common variance. Complex relationships can be examined by the SEM technique because this technique is the only way to allow simultaneous tests of all the relationships (Tabachnick and Fidell, 2001).

Results from confirmatory factor analysis

From the standardized estimates output of Figure 4, a quick eyeball test indicates that the factor loadings on each variable, together with their R² values, are reasonably good (>0.7). The correlations are also well within the limits.

Empirical results

In general, SEM requires a large sample size, as some of the statistical computations used by SEM are unreliable with small samples. Sample size provides the basis for the estimation of sampling errors. In our case, the sample size is 1,060, which comfortably meets this requirement.

Number of indicator variables per construct

Generally, researchers prefer as many indicator variables as possible to represent all constructs and maximize reliability. However, the concept of parsimony encourages researchers to use the smallest number of indicator variables; more indicator variables are not necessarily better. According to Hair et al. (2006), good practice dictates a minimum of three, preferably four, indicator variables per construct/factor.

Having at least four indicator variables per construct/factor leads to an over-identified model. An over-identified model has more unique covariance and variance terms than parameters to be estimated, which results in positive degrees of freedom and therefore allows for rejection of the model, thereby rendering it of scientific use (Byrne, 2010). In our case, the over-identified model has the summary shown in Table IV.

Assessing measurement model validity

Research findings show that when the measurement model is valid, the model fits the theory; in other words, we have a theory-fitting model.

To determine the model’s fit with the theory, two sets of criteria need to be met:

  1. establish acceptable levels of goodness-of-fit for the measurement model; and

  2. find specific evidence of construct validity.

Establishing acceptable levels of goodness-of-fit for the measurement model

Once a model is estimated, assessing model fit compares the theory to reality by assessing the similarity of the estimated covariance matrix (theory) to the observed covariance matrix (reality).

If the researcher’s theory were perfect, then the observed and estimated covariance matrices would be the same.

Goodness-of-fit statistics

The Chi-square (χ2) is the starting point of judging this model fit. The implied null hypothesis is that the observed sample and SEM estimated covariance matrices are equal, meaning the model fits perfectly. The χ2 test determines the statistical probability (denoted by the p-value) that the observed sample and SEM estimated covariance matrices are actually equal in a given population.

For each set of fit statistics, the default model represents the hypothesized model, while the saturated and independence models serve as comparative models. The value of 526.266, under Chi square in AMOS (CMIN) (Table V), represents the χ2 statistic. This statistic is equal to $(N-1)F_{\mathrm{min}}$ and, in large samples, is distributed as a central χ2 with degrees of freedom equal to $\frac{1}{2}p(p+1) - t$, where p is the number of observed variables, and t is the number of parameters to be estimated (Bollen, 1989). Because the χ2 statistic equals $(N-1)F_{\mathrm{min}}$, this value tends to be substantial when the model does not hold and when sample size is large (Joreskog and Sorbom, 1986). Yet, the analysis of covariance structures is grounded in large sample theory. As such, large samples are critical to the obtaining of precise parameter estimates, as well as to the tenability of asymptotic distributional approximations (MacCallum et al., 1996). Thus, findings of well-fitting hypothesized models, where the χ2 value approximates the degrees of freedom, have proven to be unrealistic in most SEM empirical studies. More common are findings of a large χ2 relative to degrees of freedom, thereby indicating a need to modify the model to better fit the data (Joreskog and Sorbom, 1986). Hence, this result is not surprising and we need to focus on other goodness-of-fit measures as described in Table V.
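For this model, with p = 16 observed variables and t = 38 estimated parameters, the degrees of freedom work out as follows (matching the model summary in Table IV):

$\frac{1}{2}p(p+1) - t = \frac{1}{2}(16)(17) - 38 = 136 - 38 = 98$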

For other techniques, we typically look for a small p-value (less than 0.05) to indicate that a significant relationship exists. But with the χ2 goodness-of-fit test used in SEM, we make inferences in a way that is exactly opposite.

When we find a p-value for the χ2 test to be small (statistically significant), it indicates that the two covariance matrices are statistically different and indicates problems with the fit or there is a poor fit. In SEM, we look for a relatively small χ2 value (and correspondingly large p-value) indicating no statistically significant difference between the two matrices, to support the idea that a proposed theory fits reality.

Looking at the next set of indicators, we now consider the root mean square residual (RMR), Goodness-of-Fit Index (GFI), Adjusted Goodness-of-Fit Index (AGFI) and Parsimony Goodness-of-Fit Index (PGFI). The RMR represents the average residual value derived from the fitting of the variance–covariance matrix for the hypothesized model Σ(θ) to the variance–covariance matrix of the sample data (S). However, because these residuals are relative to the sizes of the observed variances and covariances, they are difficult to interpret. Thus, they are best interpreted in the metric of the correlation matrix (Hu and Bentler, 1999). The standardized RMR, then, represents the average value across all standardized residuals, and ranges from zero to 1.00; in a well-fitting model, this value will be small (0.05 or less). The value of 0.074 shown in Table VI represents the unstandardized residual value. The GFI is useful, as it is less sensitive to sample size because N is not included in the formula. It is an absolute fit index. The range of GFI values is 0-1, with higher values indicating better fit. Values greater than 0.90 are considered good and, because the value of the default model in our example is 0.942, we have a good fit. The AGFI differs from the GFI only in the fact that it adjusts for the number of degrees of freedom in the specified model. As such, it also addresses the issue of parsimony by incorporating a penalty for the inclusion of additional parameters. The GFI and AGFI can be classified as absolute indices of fit because they basically compare the hypothesized model with no model at all (Hu and Bentler, 1999). Both indices range from 0 to 1.00, with values close to 1.00 being indicative of good fit. Based on the GFI and AGFI values reported in Table VI (0.942 and 0.919, respectively), we can once again conclude that our hypothesized model fits the sample data fairly well. The last index of fit in this group, the PGFI, was introduced by James et al. (1982) to address the issue of parsimony in SEM. As the first of a series of “parsimony-based indices of fit” (Williams and Holahan, 1994), the PGFI takes into account the complexity (i.e. number of estimated parameters) of the hypothesized model in the assessment of overall model fit. As such, “two logically interdependent pieces of information”, the goodness-of-fit of the model (as measured by the GFI) and the parsimony of the model, are represented by the single index PGFI, thereby providing a more realistic evaluation of the hypothesized model (Mulaik et al., 1989). Typically, parsimony-based indices have lower values than the threshold level generally perceived as “acceptable” for other normed indices of fit. Mulaik et al. (1989) suggested that nonsignificant χ2 statistics and goodness-of-fit indices in the 0.90s, accompanied by parsimonious-fit indices in the 0.50s, are not unexpected. Thus, the finding of a PGFI value of 0.679 would seem to be consistent with our previous goodness-of-fit statistics.

Another important goodness-of-fit statistic is the root mean square error of approximation (RMSEA), which tells us how well the model, with unknown but optimally chosen parameter estimates, would fit the population covariance matrix. In recent years, it has been regarded as “one of the most informative fit indices” because of its sensitivity to the number of estimated parameters in the model. In other words, the RMSEA favors parsimony in that it will choose the model with the lesser number of parameters. A confidence interval is generally reported in conjunction with the RMSEA; in a well-fitting model, the lower limit is close to 0 while the upper limit should be less than 0.08. Therefore, our value of 0.064 in Table VII is reasonably acceptable.

Bayesian analysis

In lieu of ML estimation, the AMOS analyses here are based on Bayesian estimation. One of the motivations for using Bayesian estimation is that it allows the opportunity to compare estimated values derived from both the ML and Bayesian approaches to analyses of the same CFA model. Two characteristics of the derived joint distribution are important to CFA analyses. First, the mean of this posterior distribution can be reported as the parameter estimate. Second, the standard deviation of the posterior distribution serves as an analog to the standard error in ML estimation. The numbers in each of the columns are constantly changing. The reason for these ongoing number changes is that as soon as you request Bayesian estimation, the program immediately initiates the steady drawing of random samples based on the joint posterior distribution. This random sampling process is accomplished in AMOS via an algorithm termed the Markov chain Monte Carlo (MCMC) algorithm. The basic idea underlying this ever-changing number process is to identify, as closely as possible, the true value of each parameter in the model. Table VIII below displays the Bayesian SEM window, which shows the posterior distribution sampling and convergence status together with the related statistics and estimates.

A few parameters from Table VIII help to explain the results. Each row in the table describes the posterior distribution value of a single parameter, while each column lists the related statistic. The first column (labeled Mean) represents the average value of the posterior distribution and can be regarded as the final parameter estimate. These values represent the Bayesian point estimates of the parameters based on the data and prior distribution. Given our sample size of 1,060, the mean values are close to the ML estimates. The second column (S.E.) reports an estimated standard error that implies how far the estimated posterior mean may lie from the true posterior mean. It represents the precision of the MCMC estimate determined by how long we let the process run (it is not the standard error as commonly mistaken). As more samples are generated, the estimate of the posterior mean becomes more accurate and the S.E. will gradually drop. The next parameter (labeled S.D.) is interpreted as the likely distance between the posterior mean and the unknown true parameter; this number is analogous to the standard error in ML estimation. The analog of a confidence interval may be computed from the percentiles of the marginal posterior distribution; the interval that runs from the 2.5th percentile to the 97.5th percentile forms a Bayesian 95 per cent credible interval. If the marginal posterior distribution is approximately normal, the 95 per cent credible interval will be approximately equal to the posterior mean ± 1.96 posterior standard deviations. In that case, the credible interval becomes essentially identical to an ordinary confidence interval that assumes a normal sampling distribution for the parameter estimate. If the posterior distribution is not normal, the interval will not be symmetric about the posterior mean. In that case, the Bayesian version often has better properties than the conventional one. Unlike a conventional confidence interval, the Bayesian credible interval is interpreted as a probability statement about the parameter itself; Prob (a ≤ θ ≤ b) = 0.95 literally means that we are 95 per cent sure that the true value of θ lies between a and b. Tail areas from a marginal posterior distribution can even be used as a kind of Bayesian p-value for hypothesis testing. If 96.5 per cent of the area under the marginal posterior density for θ lies to the right of some value a, then the Bayesian p-value for testing the null hypothesis θ ≤ a against the alternative hypothesis θ > a is 0.035. In that case, one could say that there is 96.5 per cent assurance that the alternative hypothesis is true. In our case, the 95 per cent lower and upper bounds form the Bayesian credible interval. As none of the 95 per cent credible intervals includes the value of 0, we are 95 per cent sure that the true values of the parameters fall within these intervals and are nonzero. The remaining columns report the posterior distribution values for the convergence statistic (C.S.), skewness, kurtosis, and minimum and maximum values, respectively. AMOS also provides several diagnostic plots to check the convergence of the MCMC sampling method. The first such plot is the frequency polygon plot, which enables the determination of the likelihood that the MCMC samples have converged to the posterior distribution via a simultaneous display of distributions based on the first and last thirds of the accumulated samples.
As seen in Figure 5 below, the frequency polygon displays the sampling distribution of VOLDIFF across 51,656 samples (the number sampled after the 500 burn-in samples were deleted). From the display in Figure 5, we observe that the two distributions are almost identical, thereby suggesting that AMOS has successfully identified important features of the posterior distribution of VOLDIFF. This posterior distribution appears to be centered at some value near 4.407, which is consistent with the mean value of 4.408 noted in Table VIII. Another set of diagnostic plots comprises the histogram and trace plots shown in Figure 6 and Figure 7, respectively. The trace plot (also known as a time-series plot) is a diagnostic plot that helps evaluate how quickly the MCMC sampling procedure converged on the posterior distribution. The plot is considered good as it exhibits rapid up-and-down variation with no long-term trends. Alternatively, one can imagine viewing the plot as having distributions that are broken up into parts. The results indicate that none of the sections deviates much from the rest. This confirms that convergence in distribution occurred rapidly, a clear indicator that the SEM model was correctly specified.
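The posterior summaries described above (mean, S.D., 95 per cent credible interval and tail-area probabilities) are straightforward to compute from a vector of MCMC draws; the sketch below is ours, with synthetic normal draws standing in for the values AMOS samples for a parameter such as the VOLDIFF loading:

import numpy as np

rng = np.random.default_rng(3)
draws = rng.normal(loc=4.41, scale=0.25, size=51_656)   # synthetic stand-in for MCMC draws

posterior_mean = draws.mean()                           # "Mean" column: the point estimate
posterior_sd = draws.std(ddof=1)                        # "S.D.": analog of the ML standard error
lower, upper = np.percentile(draws, [2.5, 97.5])        # bounds of the 95% credible interval
prob_positive = (draws > 0).mean()                      # tail area used as a Bayesian "p-value"

print(round(posterior_mean, 3), round(posterior_sd, 3),
      round(lower, 3), round(upper, 3), round(prob_positive, 3))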

As a final analysis, we compare the unstandardized factor-loading estimates from the ML estimation with the Bayesian posterior distribution estimates. A listing of both sets of estimates is presented in Table IX. As expected, based on our review of the diagnostic plots, the two sets of estimates are very close for both the first and second factor loadings, which is a further testament to the validity of our hypothesized structure of the IV dynamic model.

Conclusion

Although there is extensive literature on methods for estimating IV dynamics during earnings announcements, few researchers have examined the impact of the expected MMM, IV differential and IV Rank on the IV path after the earnings announcement. One reason for this research gap is the relatively recent introduction of weekly options for equities by the Chicago Board Options Exchange (CBOE) in late 2010. Even then, the CBOE only released weekly options for four individual equities – Bank of America (BAC.N), Apple (AAPL.O), Citigroup (C.N) and US-listed shares of BP (BP.L) (BP.N). The introduction of weekly options provided more trading flexibility and precision timing from shorter durations. This automatically expanded expiration choices, which in turn offered greater access and flexibility from the perspective of trading volatility during earnings announcements. This study has highlighted the impacts of market sentiment and liquidity as part of the forecasting model for IV during earnings. This understanding in turn helps traders to formulate strategies that can circumvent the undefined risk associated with trading options strategies such as writing strangles.

Limitations

The first limitation of the study is that the firms included in the study are relatively large, and the results of the study can therefore not be generalized to medium-sized and small firms.

The second limitation lies in the current sample size, which in many cases was not large enough to draw reliable conclusions from. Scaling the sample size up is only a function of time and effort. This can be easily overcome and should not be a limitation in the future.

The third limitation concerns the measurement of the variables. Under the assumption of a normal distribution of returns (i.e. stock prices follow a random walk process), which means that the distribution of returns is symmetrical, one can estimate the probabilities of potential gains or losses associated with each amount. This means the standard deviation of securities returns, which is called historical volatility and is usually calculated as a moving average, can be used as a risk indicator. The prices used for the calculations are usually the closing prices, but Parkinson suggests that the day’s high and low prices would provide a better estimate of real volatility (Parkinson, 1980). One can also refine the analysis with high-frequency data. Such data enable the avoidance of the bias stemming from the use of closing (or opening) prices, but they have only been available for a relatively short time. The length of the observation period is another contentious topic in itself. There are no criteria that enable one to conclude that volatility calculated in relation to mean returns over 20 trading days (or one month) and then annualized is any more or less representative than volatility calculated over 130 trading days (or six months) and then annualized, or even than volatility measured directly over 260 trading days (one year). Nonetheless, the guidelines adopted in this study represent the best practices of researchers thus far.

Recommendations

The model used in this study to measure the dynamics of IV is designed for stocks in the US S&P 500 index. To obtain more reliable results concerning other indices, further studies should be conducted on other market indices in other geographies.

This study has indicated that an earnings announcement can provide a volatility mispricing opportunity to allow an investor to profit from a sudden, sharp drop in IV. More specifically, the methodology developed by Tan and Bing is now well supported both empirically and theoretically in terms of qualifying opportunities that can be profitable because of the volatility crush. Conventionally, the option strategy of shorting strangles carries unlimited theoretical risk; however, the methodology has demonstrated that this risk can be substantially reduced if followed judiciously. This profitable strategy relies on a set of qualifying parameters including liquidity, premium collection, volatility differential, expected market move and market sentiment. Building upon this framework, the understanding of the effects of persistence and leverage resulted in further reducing the risk associated with trading options during earnings announcements. As a guideline, the sentiment and liquidity variables help to qualify a trade and the effects of persistence and leverage help to close the qualified trade. The causal relationship between persistence and leverage as well as sentiment and liquidity are modeled using SPSS Amos. Within AMOS, the provision of SEM is used to compare, confirm and refine the model. Additionally, Bayesian estimation was used to compare estimated values derived from both the ML and Bayesian approaches to analyses of the same CFA model.

In conclusion, we find a positive association between the effects of market sentiment, liquidity, persistence and leverage in the dynamics of IV during earnings announcement. These findings substantiate further the four factors that influence IV dynamics during earnings announcement and conclude that just looking at persistence and leverage alone will not generate profitable trading opportunities.

Figures

Figure 1. IV before and after earnings announcement

Figure 2. Scree plot

Figure 3. Rotated component matrix and SEM path diagram

Figure 4. AMOS output path diagram for hypothesized model

Figure 5. Bayesian SEM diagnostic first and last combined polygon plot

Figure 6. Bayesian SEM diagnostic histogram plot for VOLDIFF

Figure 7. Bayesian SEM diagnostic trace plot for VOLDIFF

Correlation matrix^a

EPS VOLDIFF VOLRANK EXPMOVE PREMIUM BIDASK OPENINT VOLUME TERM1 TERM2 EVENTP1 EVENT0 EVENTP2 TERM0 EVENTP3 EVENTM1
EPS 1.000 0.145 0.249 0.193 0.251 0.429 0.400 0.439 0.575 0.609 0.031 0.029 −0.008 0.644 −0.010 0.054
VOLDIFF 0.145 1.000 0.374 0.454 0.222 0.278 0.210 0.247 0.234 0.218 0.105 0.055 0.115 0.263 0.056 0.125
VOLRANK 0.249 0.374 1.000 0.302 0.349 0.329 0.264 0.293 0.212 0.246 0.117 0.054 0.055 0.306 0.056 0.130
EXPMOVE 0.193 0.454 0.302 1.000 0.175 0.259 0.213 0.260 0.249 0.167 0.112 0.086 0.108 0.296 0.045 0.112
PREMIUM 0.251 0.222 0.349 0.175 1.000 0.432 0.371 0.271 0.355 0.306 0.181 0.094 0.061 0.397 0.077 0.137
BIDASK 0.429 0.278 0.329 0.259 0.432 1.000 0.573 0.521 0.462 0.477 0.167 0.128 0.096 0.551 0.105 0.146
OPENINT 0.400 0.210 0.264 0.213 0.371 0.573 1.000 0.513 0.412 0.471 0.161 0.145 0.112 0.531 0.063 0.117
VOLUME 0.439 0.247 0.293 0.260 0.271 0.521 0.513 1.000 0.367 0.451 0.144 0.109 0.097 0.465 0.095 0.167
TERM1 0.575 0.234 0.212 0.249 0.355 0.462 0.412 0.367 1.000 0.664 0.034 0.043 0.020 0.791 0.032 0.125
TERM2 0.609 0.218 0.246 0.167 0.306 0.477 0.471 0.451 0.664 1.000 0.058 0.045 0.016 0.655 −0.005 0.097
EVENTP1 0.031 0.105 0.117 0.112 0.181 0.167 0.161 0.144 0.034 0.058 1.000 0.720 0.524 0.066 0.449 0.059
EVENT0 0.029 0.055 0.054 0.086 0.094 0.128 0.145 0.109 0.043 0.045 0.720 1.000 0.722 0.050 0.311 0.040
EVENTP2 −0.008 0.115 0.055 0.108 0.061 0.096 0.112 0.097 0.020 0.016 0.524 0.722 1.000 0.037 0.251 0.119
TERM0 0.644 0.263 0.306 0.296 0.397 0.551 0.531 0.465 0.791 0.655 0.066 0.050 0.037 1.000 0.022 0.163
EVENTP3 −0.010 0.056 0.056 0.045 0.077 0.105 0.063 0.095 0.032 −0.005 0.449 0.311 0.251 0.022 1.000 0.058
EVENTM1 0.054 0.125 0.130 0.112 0.137 0.146 0.117 0.167 0.125 0.097 0.059 0.040 0.119 0.163 0.058 1.000

Note: ^a Determinant = 0.001

KMO and Bartlett’s test

Kaiser-Meyer-Olkin measure of sampling adequacy 0.843
Bartlett's test of sphericity: approx. chi-square 6932.675; df 120; significance 0.000
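The Bartlett statistic reported above can be reproduced directly from the sample correlation matrix using the standard formula chi-square = -(n - 1 - (2p + 5)/6) * ln|R| with df = p(p - 1)/2, which gives df = 120 for the p = 16 variables here. A minimal numpy/scipy sketch follows; the sample size n and the correlation matrix R are placeholders to be supplied from the data set.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(R: np.ndarray, n: int) -> tuple[float, int, float]:
    """Bartlett's test of sphericity from a correlation matrix R and sample size n.

    chi2 = -(n - 1 - (2p + 5) / 6) * ln|R|,  df = p(p - 1) / 2
    """
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    p_value = stats.chi2.sf(chi2, df)
    return chi2, df, p_value

# Illustrative call: R is the 16 x 16 correlation matrix above, n the sample size
# chi2, df, p_value = bartlett_sphericity(R, n)   # df = 120 for p = 16
```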

Total variance explained

Component Initial eigenvalues (Total, % of variance, Cumulative %) Extraction sums of squared loadings (Total, % of variance, Cumulative %) Rotation sums of squared loadings (Total, % of variance, Cumulative %)
1 5.005 31.281 31.281 5.005 31.281 31.281 4.218 26.365 26.365
2 2.513 15.705 46.987 2.513 15.705 46.987 2.572 16.077 42.442
3 1.368 8.548 55.535 1.368 8.548 55.535 2.095 13.093 55.535
4 0.966 6.039 61.573
5 0.922 5.761 67.335
6 0.820 5.124 72.458
7 0.786 4.915 77.373
8 0.657 4.106 81.479
9 0.555 3.467 84.946
10 0.471 2.946 87.893
11 0.420 2.625 90.518
12 0.409 2.559 93.077
13 0.386 2.412 95.488
14 0.343 2.142 97.630
15 0.202 1.261 98.891
16 0.177 1.109 100.000
Note: Extraction method: principal component analysis
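The eigenvalue and variance-explained figures in the table can likewise be recovered from the correlation matrix: each component's share of variance is its eigenvalue divided by the number of variables (e.g. 5.005/16 = 31.281 per cent), and the Kaiser rule (eigenvalue > 1) yields the three retained components shown above. A minimal numpy sketch, assuming the 16 x 16 correlation matrix R reported earlier is available as an array (rotation is not reproduced here):

```python
import numpy as np

def variance_explained(R: np.ndarray):
    """Eigenvalues of a correlation matrix with % of variance and cumulative %."""
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending order
    pct = 100.0 * eigvals / R.shape[0]               # each variable contributes unit variance
    return eigvals, pct, np.cumsum(pct)

# eigvals, pct, cum = variance_explained(R)
# n_retained = int((eigvals > 1.0).sum())   # Kaiser criterion: 3 components for these data
```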

Model summary

Number of distinct sample moments 136
Number of distinct parameters to be estimated 38
Degrees of freedom (136-38) 98
Result (Default model)
Minimum was achieved
Chi-square 526.266
Degrees of freedom 98
Probability level 0.000

Goodness of fit – CMIN

Model NPAR CMIN df p CMIN/df
Default model 38 526.266 98 0.000 5.370
Saturated model 136 0.000 0
Independence model 16 6973.281 120 0.000 58.111
Note: NPAR: number of parameters

Goodness of fit – RMR, GFI

Model RMR GFI AGFI PGFI
Default model 0.074 0.942 0.919 0.679
Saturated model 0.000 1.000
Independence model 0.588 0.422 0.344 0.372

Goodness of fit – RMSEA

Model RMSEA LO 90 HI 90 PCLOSE
Default model 0.064 0.059 0.070 0.000
Independence model 0.232 0.228 0.237 0.000

References

Adebayo, O.S. (2008), “Performance evaluation and indices of cyber café business: a factor analytic approach”, Journal of Information and Communication Technology, Vol. 7, pp. 89-102.

Ball, R. and Brown, P. (1968), “An empirical evaluation of accounting income numbers”, Journal of Accounting Research, Vol. 6 No. 2, pp. 159-178.

Beneish, M. and Whaley, R. (1996), “An anatomy of the S&P game: the effect of changing the rules”, Journal of Finance, Vol. 51 No. 5, pp. 1909-1930.

Black, F. (1976), “Studies of stock price volatility changes”, Proceedings of the 1976 Meetings of the American Statistical Association, Business and Economical Statistics Section, American Statistical Association, pp. 177-181.

Bollen, K. (1989), Structural Equations with Latent Variables, John Wiley & Sons, New York, NY.

Byrne, B.M. (2010), Structural Equation Modeling with Amos: Basic Concepts, Applications, and Programming, 2nd ed., Taylor and Francis Group, New York, NY.

David, A. and Veronesi, P. (2008), “Inflation and earnings uncertainty and volatility forecasts: a structural form approach”, Chicago GSB Research Paper.

Donders, M., Kouwenberg, R. and Vorst, T. (2000), “Options and earnings announcements: an empirical study of volatility, trading volume, open interest and liquidity”, European Financial Management, Vol. 6 No. 2, pp. 149-171.

Donders, M.W. and Vorst, T.C. (1996), “The impact of firm specific news on implied volatilities”, Journal of Banking & Finance, Vol. 20 No. 9, pp. 1447-1461.

Hair, J., Anderson, R., Tatham, R., Black, B. and Babin, B. (2006), Multivariate Data Analysis, 6th ed., Prentice-Hall, Upper Saddle River, NJ.

Hu, L.-T. and Bentler, P. (1999), “Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives”, Structural Equation Modeling, Vol. 6, pp. 1-55.

Hull, J. (2002), Fundamentals of Futures and Options Markets, Prentice Hall, Upper Saddle River, NJ.

James, L., Mulaik, S. and Brett, J. (1982), Causal Analysis: Assumptions, Models and Data, Sage, Beverly Hills, CA.

Joreskog, K.G. and Sorbom, D. (1986), LISREL VI: Analysis of Linear Structural Relationships by Maximum Likelihood and Least Square, Scientific Software.

Lynch, A. and Mendenhall, R. (1997), “New evidence on stock price effects associated with changes in the S&P 500”, Journal of Business, Vol. 70, pp. 351-384.

MacCallum, R.C., Browne, M.W. and Sugawara, H.M. (1996), “Power analysis and determination of sample size for covariance structure modeling”, Psychological Methods, Vol. 1 No. 2, pp. 130-149.

Mandelbrot, B. (1963), “The variation of certain speculative prices”, Journal of Business, Vol. 36 No. 4, pp. 392-417.

Merton, R.C. (1973), “Theory of rational option pricing”, The Bell Journal of Economics and Management Science, Vol. 4 No. 1, pp. 141-183.

Mulaik, S., James, L., Van Alstine, J., Bennett, N., Lind, S. and Stilwell, C. (1989), “Evaluation of goodness-of-fit indices for structural equation models”, Psychological Bulletin, Vol. 105, pp. 430-445.

Parkinson, M. (1980), “The extreme value method for estimating the variance of the rate of return”, Journal of Business, Vol. 53, pp. 61-65.

Patell, J. and Wolfson, M. (1979), “Anticipated information releases reflected in call option prices”, Journal of Accounting and Economics, Vol. 1 No. 2, pp. 117-140.

Tabachnick, B.G. and Fidell, L.S. (2001), Using Multivariate Statistics, Allyn and Bacon, Boston, MA.

Tan, T.K. and Bing, B. (2014), “Options strategy for technology companies”, International Conference on Computer and Information Sciences, Kuala Lumpur, IEEE Xplore.

Williams, L. and Holahan, P. (1994), “Parsimony-based fit indices for multiple indicator models: do they work?”, Structural Equation Modeling, Vol. 1, pp. 161-189.

Further reading

Bentler, P.M. and Bonett, D.G. (1980), “Significance tests and goodness of fit in the analysis of covariance structures”, Psychological Bulletin, Vol. 88, pp. 588-606.

Corresponding author

Teik-Kheong Tan can be contacted at: tktan@ieee.org
