More than half of the dissertations and theses in India are on financial markets. Various aspects are analysed: the pricing of options, the efficiency of markets, market volatility and its impact on the real sector, futures markets, the effect of foreign trade, and so on. Financial markets include the stock market, the derivatives market, the commodity markets, etc. For our purposes, we will consider only the stock/share market, as it is the best understood of these. This blog post echoes a lot of my concerns with the way financial markets are analysed, and also indicates some broader concerns about econometric work in general. I have been greatly motivated and moved by Benoit Mandelbrot’s and Richard Hudson’s book The (Mis)Behaviour of Markets in writing this post. All quotations in this post are from their book.
On attending several pre-submission, post-submission, work-in-progress and viva-voce seminars, I have often wondered about economists’ fascination with the ‘normality assumption’. We assume that price changes follow a normal distribution; that is, that outliers (both small and large) do not significantly affect the average/expected value. Standard theories of finance thus “assume the easier, mild form of randomness. Overwhelming evidence shows markets are far wilder, and scarier, than that.” Now, in the natural sciences this is a common enough assumption. But is there any empirical evidence supporting the use of such a distribution in economics, particularly in the analysis of changes in prices and quantities? One wonders. In fact, it is this distribution which underlies the most commonly used tool in regression – the method of least squares. Most studies (academic and corporate) measure volatility using the variance or standard deviation of supposedly normally distributed variables. As Mandelbrot asks, “is this the only way to look at the world?”
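To make the “mild versus wild” contrast concrete, here is a minimal sketch, on simulated data only, of why variance-based volatility measures can mislead: in a heavy-tailed sample, a handful of extreme moves carry a disproportionate share of the measured variance. The Student-t distribution with 3 degrees of freedom stands in here for “wild” randomness; it is an illustrative assumption, not a claim about any actual market.

```python
# Illustrative sketch (simulated data, not real prices): how much of the
# measured "volatility" (variance) comes from a handful of extreme moves?
import math
import random

random.seed(42)
N = 100_000

# Mild randomness: normally distributed "returns" with mean 0.
normal = [random.gauss(0, 1) for _ in range(N)]

def student_t(df):
    # Student-t draw: a normal divided by the root of a scaled chi-square.
    z = random.gauss(0, 1)
    chi2 = random.gammavariate(df / 2, 2)  # chi-square with df degrees of freedom
    return z / math.sqrt(chi2 / df)

# Wild randomness: heavy-tailed t(3) "returns" (an assumption for illustration).
heavy = [student_t(3) for _ in range(N)]

def top_share(xs, frac=0.01):
    """Share of total squared deviation (about zero) carried by the largest frac of moves."""
    sq = sorted((x * x for x in xs), reverse=True)
    k = int(len(sq) * frac)
    return sum(sq[:k]) / sum(sq)

print(f"normal sample: top 1% of moves carry {top_share(normal):.0%} of the variance")
print(f"heavy  sample: top 1% of moves carry {top_share(heavy):.0%} of the variance")
```

Under normality the biggest 1% of moves contribute only a modest slice of the variance; under heavy tails a few outliers dominate it, so a single standard-deviation number hides most of the story.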
Apart from the normality assumption, orthodox financial theory makes the following assumptions. This list is directly based on Mandelbrot’s book. (1) People are rational and aim only to get rich. (2) All investors are alike and they are price-takers, not makers. (3) Price change is practically continuous. (4) Price changes follow a Brownian motion; that is, each price change is independent of the last, price changes are statistically stationary, and price changes are normally distributed.
Assumptions (1) and (2) need no discussion, owing to their obvious falsity. It is assumption (3) that allows the use of continuous and differentiable functions; whereas, in reality, “prices do jump, both trivially and significantly”, and discontinuity is an “essential ingredient of the market.” Independence of price changes means that the change at t+1 does not depend on the change at t. In other words, prices have no memory. Tossing a fair coin illustrates this: suppose the first toss comes up heads; the outcome of the next toss is in no way affected by that result. How true this is of stock markets, or of prices in general, is questionable. How can such an assumption cope with the ‘expectations’ of investors? The statistical stationarity of price changes implies that the process generating the price changes stays the same over time.
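The coin-toss picture of independence can be sketched in a few lines: in a simulated random walk with independent ±1 steps, the chance of an up-move is the same whether the previous move was up or down. This is, of course, a property of the model, not a claim about real prices.

```python
# Minimal sketch of "prices have no memory": in a coin-toss (random-walk)
# model, an up-move is equally likely after an up-move or a down-move.
import random

random.seed(7)
N = 100_000

# Independent +1/-1 "price changes", like repeated fair-coin tosses.
steps = [random.choice([1, -1]) for _ in range(N)]

# Split each move according to what the previous move was.
after_up = [b for a, b in zip(steps, steps[1:]) if a == 1]
after_down = [b for a, b in zip(steps, steps[1:]) if a == -1]

p_up_after_up = sum(1 for s in after_up if s == 1) / len(after_up)
p_up_after_down = sum(1 for s in after_down if s == 1) / len(after_down)

print(f"P(up | last move up)   = {p_up_after_up:.3f}")
print(f"P(up | last move down) = {p_up_after_down:.3f}")
# Both hover near 0.5: the simulated walk does not remember its past.
```

Whether investors’ expectations leave real price series with this kind of amnesia is exactly the question the independence assumption sidesteps.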
Very often, in research, we do not have the time to question these assumptions; worse, they function as received wisdom. However, as Mandelbrot comments, “They work around, rather than build from and explain, the contradictory evidence”, because “It gives a comforting impression of precision and competence.” Yet the contradictory evidence is there: high kurtosis (a measure of how heavy a distribution’s tails are relative to the bell curve) has been found in the prices of commodities, stocks and currencies.
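For readers who want to see what the kurtosis comparison looks like in practice, here is an illustration on simulated data: excess kurtosis is roughly zero for a normal sample and far larger for a heavy-tailed one. The 1%-contaminated mixture below is an assumed stand-in for heavy tails, not a model of any actual price series.

```python
# Hedged illustration: excess kurtosis is ~0 for normal data and far
# larger for heavy-tailed data.  Simulated samples, not real prices.
import random

random.seed(1)
N = 100_000

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (zero for a normal distribution)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3

normal = [random.gauss(0, 1) for _ in range(N)]

# Heavy-tailed sample: a simple mixture in which 1% of observations come
# from a much more volatile regime (an assumption for illustration).
mixture = [random.gauss(0, 10 if random.random() < 0.01 else 1) for _ in range(N)]

print(f"excess kurtosis, normal sample : {excess_kurtosis(normal):+.2f}")
print(f"excess kurtosis, mixture sample: {excess_kurtosis(mixture):+.2f}")
```

Even this crude mixture pushes the excess kurtosis far above zero, which is the signature that empirical studies keep finding in commodity, stock and currency prices.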
To conclude, how does one, as a researcher, overcome such problematic/unrealistic/easy assumptions? Is this what academic “discipline” means? Or are we to learn enough mathematics and statistics to find a way around them? Or do we cooperate with and seek help from mathematicians and statisticians? Mandelbrot has developed tools and concepts such as ‘fractal analysis’ and ‘long memory’ which can aid economics, a field whose variables are, at bottom, not normally distributed.