The magic of an Allaisian appraisal: implied and historical volatility revisited. The VIX: once bitten, twice shy?


Many contributions have dealt with the relation between implied and historical volatility, mostly in reference to the S&P100 index and on limited samples of data. A large part of this literature finds that implied volatility defined directly from option prices is a biased estimator of future realized volatility, although some dissent has been expressed (Christensen and Prabhala, 1998). We investigate the issue on the larger market of the S&P500, using the VIX index as the measure of implied volatility and a much larger sample (314 months), extending from January 1990 to December 2016. Our results are in line with most of the literature inasmuch as they invalidate the efficient market hypothesis. More originally, however, we use a time-series analysis derived from Maurice Allais's "lost" work on monetary theory and show that the VIX incorporates a subtle version of perceived and memorized past data (the "missing link" relating implied to realized volatility) rather than reflecting any kind of "rational expectation" of future realized volatility. Incidentally, we show that the VIX seems to have been over-valued until the middle of the first decade of this century and, since then, to have been under-valued on average. Strikingly, this state of affairs seems to be steadily confirmed by the financial market, which calls for additional research, even if we offer two possible explanations.


Volatility has been a long-debated issue in finance. Whether it really is a sufficient and coherent measure of market risk has been discussed at length, and we shall not dwell on this topic here. This paper rather aims at raising the issue of the relation between the 'historical' and the 'implied' concepts of volatility. Are historical volatility and implied volatility completely separate concepts, or can some relation between them be inferred from observation? It is usually admitted that anticipating the future (the task usually assigned to implied volatility) rests on psychological attitudes that shape the assessment of the market, whereas examining the past (the very idea of historical volatility) deals only with factual observations. It therefore seems difficult, at first sight, to establish any clear relation between them. On the other hand, 'implied volatility' is often represented by the VIX index, which is then taken as a form of expectation of the market's volatility in the near future, essentially the next month(s). Is such a view empirically validated? If not, can we suggest an intermediate variable which could bridge the gap between historical volatility and the VIX?

Such a debate encompasses wider issues than purely financial ones. Indeed, economists, who used to rely on adaptive expectations, have come to develop several rational-expectation hypotheses, which they long regarded as "the revolution in macroeconomic theory of the frame of forward looking anticipations rather than expectations based on past data only" (Tirole, 2012), and presumably impossible to link to the former. More recently, however, behavioral economics as well as behavioral finance have challenged this belief and brought some authors to change their minds on the issue. This paper argues that Allais's insightful developments on "psychological time" allow bridging that gap. More specifically, we claim that a "variability" expectation constitutes the missing link in the analytical framework.

In a first section, we recall the Allaisian world of relativity, i.e. of "psychological time" and "physical time", either of these frames being possibly taken as a reference, but never independently of the other. More specifically, we show that, although developed for specific purposes, Allais's framework opens onto an explanation of expectation formation, if not of anticipation, which can be regarded either as a way to formalize some sophisticated aspects of behavioral financial expectations or, alternatively, as a tool to analyze financial time series. In the second part, we use these results to provide an account of the historical volatility of the S&P500 index and to try relating it to implied volatility as measured by the VIX index. We tentatively use the tool to determine successive periods of "reasoned" over- or under-valuation of the VIX within our sample.

The next natural question bears on the issue of forecasting volatility using a similar framework. We briefly discuss this last issue and conclude that it calls for some hard questions to be solved.

I. Psychological time and market expectation

It is little known that Maurice Allais came out in the Sixties and Seventies with an impressive theory of the demand for money. From a monetary point of view, his contribution amounts to a subtle and sophisticated reformulation of the quantitative theory of money. This development is outside the focus of the present paper and will not be further mentioned here. However, part of Allais’s contribution offers the basis for a model of market expectation formation. When properly used, this model can be regarded as the core of an original time series analysis, although the foundations of it owe more to a psychology of finance and economics than to statistics. Several financial phenomena can therefore be scrutinized, using Allais’s contribution as a tool, when looking for an explanatory reconstruction within a given sample of observations.

Time is a relative notion. Two different types of time coexist: "psychological" time and the physical time we use every day. Within the psychological time reference, by definition, the future is discounted in a way exactly symmetrical to the one in which the past is forgotten, i.e. exponentially. The rate of "memory decay" per unit of psychological time, χ0, is a constant equal to the rate i0 at which the future is discounted per unit of psychological time (Allais, 1966, p. 1129). (The French "taux d'oubli" was translated into English in Allais's writings as "rate of forgetfulness". We prefer here the more appropriate translation "rate of memory decay", and leave the original translation only in Allais's quotations and in those of his critics of the time.) Within this time reference, we thus have i0 = χ0, a constant parameter to be estimated. Allais expresses a postulate of 'temporal psychological symmetry': "At any moment, an instantaneous rate of forgetfulness χ(t) can be defined for the society considered, which plays the same part in the process of memory as that played by the psychological rate of interest i(t) in evaluating the present impact of the future" (Allais, 1972, p. 48). The further away an event lies in the past, the more it will be forgotten; the further away it lies in the future, the more strongly it will be discounted.

However, the physical time reference does not exhibit the same property of a constant rate of memory decay as psychological time does. Experience and historical accounts suggest that, in our usual time reference, events of the same length in physical time are felt as having various subjective time lengths according to the circumstances. For example, common sense suggests that time elapses differently for someone sentenced to death and for someone in charge of repetitive tasks in a boring activity with no end in sight. Similarly, from an economic point of view, the memory decay of changes in prices is much quicker when prices are strongly increasing (for example during the German hyperinflation) than when prices remain stable (Allais, 1965, pp. 23-25). Clearly, the instantaneous rate of memory decay is not constant in the physical time reference. To express that this rate is higher in physical time when experienced changes happen at high speed, one can write, following Allais:

χ(t) = χ0 Ψ[Z(t)]        (2)
where Z(t) stands for some variable epitomizing the experienced, or psychologically felt, past rates of change at a given time t of a generic factor influencing the phenomenon under scrutiny, while Ψ(·) is a logistic function mapping that psychological feeling into a rate at which the past is forgotten. Differentiating (2), we can write:

dχ(t)/dt = χ0 Ψ′[Z(t)] · dZ(t)/dt        (3)
In his monetary theory, for example, Allais takes Z(t) to be a psychological appraisal of the rate of growth of nominal expenditures, for which nominal GDP serves as a proxy. But when referring to other contexts, we are entitled to choose a different Z(t) in each case. As a matter of fact, we are often led to consider that Z(t) is a dimensionless discretionary variable, from which we simply require that it illustrate some psychological rate of change related in one way or another to the time series studied.

Consistently with the idea that any change in affairs in the past is exponentially forgotten at some variable rate of memory decay, the present impact of past changes (past growth rates, for example) is captured by Z(t), the present psychologically felt value of the past rates xτ:

Z(t) = ∫_{-∞}^{t} χ_τ x_τ exp(−∫_τ^t χ_u du) dτ        (4)
An equation in which xτ is the rate of change in affairs at time τ and where the instantaneous rate of memory decay at time u, χu, varies through time over the interval ]-∞, t]. As Philip Cagan, commenting on Allais's thought, put it: "When we then calculate what is remembered of the past at any particular point in time, a past event has undergone 'memory decay' to the extent of the rates of forgetfulness attached to the periods through which that memory has been carried. In the weighted average of past events, therefore, the weight attached to each past period is an exponential raised to the power of the sum of the rates of forgetfulness prevailing during each intervening period. If we think of the weighting pattern as a declining exponential curve into the past, that curve in calendar time does not have a constant rate of change, but flattens or steepens as it passes along periods with rates of forgetfulness which are lower or higher, respectively" (Cagan, 1969, p. 428). Consistently with such a view, when Z(t) = 0 (the case of a stationary regime or of a stationary time series), we should have a constant rate of memory decay (as in a standard EWMA model of time-series analysis), i.e. we should be able to write χt = χ0 = constant. This in turn implies calibrating Ψ(·) in equation (2) such that Ψ(0) = 1.
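In this stationary special case, the weighting scheme reduces to a standard EWMA with constant decay rate χ0. The sketch below is illustrative (the series and the value of χ0 are assumptions, not the paper's calibration):

```python
import numpy as np

def ewma(x, chi0):
    """Standard EWMA: constant rate of memory decay chi0 per period.
    Each past observation is down-weighted exponentially with age."""
    z = x[0]            # initialize on the first observation
    out = []
    for obs in x:
        z = z + chi0 * (obs - z)   # geometric forgetting of the past
        out.append(z)
    return np.array(out)

# On a stationary (constant) series the smoother sticks to the constant,
# which is the Z(t) = 0 benchmark discussed in the text.
flat = np.full(24, 0.15)
print(ewma(flat, chi0=0.1)[-1])
```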

This relativity of time perception, in the precise sense defined by equations (1) to (3) above, is what allowed Allais to apply one and the same unified theory of money demand to various historical episodes: hyperinflation, depression and 'normal times', an achievement which should be hailed here en passant. But, as already stated, this view can serve as a more general tool for the analysis of economic and financial time series.

At time t, our expectation based on the memorized changes in affairs Zt as defined in equations (1) to (3) can be derived as:


which forms the best possible opinion we may have on the phenomenon studied, i.e., further down, the market volatility: what we could best expect regarding that volatility. For this reason, we shall use below the term "expected variability" for z*.

Let us now assume that, at time (t-1), the market has formed some expectation z*(t-1) of the rate of change in our time series, based on the memory of the past as described above. At time t, once one has been made aware of what actually happened, it is only natural to think that z*(t-1) will be confronted with the actual realization x(t), the latter being the benchmark against which we shall measure our surprise rate k′:


Ideally, our surprises over time would all be equal to zero. In the standard case, alas, they are not. Could we then interpret our surprise rate as a gap, in relative terms, between rational and adaptive expectations at time (t-1)? Indeed, if our psychological reaction were instantaneous and fully correcting, the expectation formation process described above would lead to a result virtually equivalent to a rational-expectation scheme. Experience shows, unfortunately, that these prerequisites are not met in practice. Let us more modestly admit, following Allais's view, that expectation formation is a process of psychological trial and error, and that it is only natural to think of a psychological correction. This correction happens once the event at time t becomes known. This amounts to a self-correcting adaptive expectation formation process, a new sophistication with respect to a standard EWMA model!
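The mechanism just described, an EWMA whose decay rate is itself revised as a function of perceived past change, can be sketched as follows. This is not the authors' exact system of equations (their Ψ and update rules involve further refinements); the logistic Ψ below, calibrated so that Ψ(0) = 1, the relative-surprise update of Z, and all parameter values are illustrative assumptions:

```python
import numpy as np

def psi(Z, b=3.0):
    # Logistic mapping of felt change Z into a decay multiplier,
    # calibrated so that psi(0) = 1 (stationary case -> plain EWMA).
    return (1.0 + b) / (1.0 + b * np.exp(-Z))

def self_correcting_ewma(x, chi0=0.1, Z0=0.0):
    """Adaptive-gain EWMA in the spirit of Allais's psychological time:
    large recent surprises raise the rate of memory decay, so the past
    is forgotten faster; calm periods restore the baseline decay chi0."""
    Z, z_star = Z0, x[0]
    expectations = []
    for obs in x:
        chi = chi0 * psi(Z)                  # current rate of memory decay
        surprise = obs - z_star              # confront expectation with fact
        z_star = z_star + chi * surprise     # self-corrected expectation
        # update the felt-change state from the relative surprise magnitude
        Z = Z + chi * (abs(surprise) / max(abs(obs), 1e-9) - Z)
        expectations.append(z_star)
    return np.array(expectations)
```

On a stationary series the surprise is zero, Z decays to zero, and the scheme collapses to the constant-decay EWMA, as the text requires.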

How do we correct our expectation, once we have taken notice of the surprise rate k′? Allais suggests that our reaction is to adjust our rate of memory decay for period t as a function of our surprise k′t.

More specifically, starting from equation (3), keeping (1) in mind and developing (2) as a Taylor expansion, we can write, using the notation Zt in place of Z(t) and Ψt in place of Ψ(Zt) for simplicity's sake:


Setting then yt = Zt − Zt−1, we can write:




which can be rewritten as:


and finally, keeping equation (6) in mind:


if we set:


The discussion of this last equation could lead to quite interesting issues, but it is beyond the scope of the present paper. The interested reader is referred to Barthalon (2014) for a more complete exposition, as well as to Allais's papers listed in the references at the end of this paper.

Finally, past absolute psychological changes reach, at time t, the value Zt. How can we derive this value of the psychological coefficient Z at time t? The clue is equation (9) above. Integrating it yields:


II. Implied volatility indirectly related to historical volatility

We have hastily sketched, in section I above, the Allaisian dual framework of time reference. One could wonder where these equations will lead us. We claim that z*(t) is the best expectation one can form when analyzing a time series of data, conditional on the window of observation, i.e. the sample we can secure of that series. Note that "expectation", what we can best expect from the phenomenon as of today, does not necessarily mean anticipation, which necessarily refers to some yet unknown future (the Cambridge Dictionary defines 'anticipation' as "a feeling of excitement about something that is going to happen in the near future", while expectation refers generically to something one regards as best epitomizing the phenomenon under examination, in other words to what one could expect, without necessary reference to time). We come back to that issue in our conclusion.

For our present purpose, we examine the time series of historical volatility v(t). As is well known, volatility can be defined as:

v(t) = √[ (1/(N−1)) Σ_{n=1}^{N} (r_n − r̄)² ],   with r_n = ln(P_n / P_{n−1})
This is indeed the form in which it is most of the time computed in trading rooms, where the time series of the Pn is the series of S&P500 workday data. From that series we compute the monthly series Pt for each month t in our sample. As v(t) is homogeneous to a rate of change, it can be directly substituted for x(t) in (4), (6), (7) and (8) above. As already mentioned, we refer below to z*(t) as the expected variability. We claim that this is the crucial concept linking implied volatility, which we take as being represented by the VIX index, to historical volatility. At the same time, we can show that several beliefs regarding the VIX are far from confirmed by statistical observation.
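Under one common trading-room convention (daily log returns, sample standard deviation, annualization by √252; the paper does not spell out its exact convention), the monthly historical volatility series can be computed as:

```python
import numpy as np

def monthly_volatility(daily_close, month_ids):
    """Annualized realized volatility per calendar month.
    daily_close: array of workday closing prices (e.g. S&P500);
    month_ids:   same-length array tagging each day's month."""
    r = np.diff(np.log(daily_close))     # daily log returns
    ids = np.asarray(month_ids)[1:]      # month tag of each return
    vols = {}
    for m in np.unique(ids):
        vols[m] = np.sqrt(252) * np.std(r[ids == m], ddof=1)
    return vols

# Tiny illustration on synthetic prices spanning two months
prices = np.array([100, 101, 100.5, 102, 101, 103, 102.5, 104])
months = np.array([1, 1, 1, 1, 2, 2, 2, 2])
print(monthly_volatility(prices, months))
```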

To make our argument explicit, we now consider a sample of monthly data stretching over twenty-seven years, namely from January 1990 to December 2016. We assume knowledge of all values taken by the S&P500 index over that entire period and compute, month by month, the historical volatility series. On that basis, we use the equations of the preceding section to compute the expected variability z*(t) for every month t of the sample, where t denotes physical time. From the surprise resulting from the comparison of z*(t) with the actual figure of observed volatility at time t, denoted v(t) below, we revise our expectation formation process through equations (7), (10) and (11) in order to produce an adjusted expected variability for the next period, z*(t+1).

To carry out these computations, we have to choose a parameter Z0 to initialize the process. As already mentioned, we choose Z0 as dimensionless, the rule adopted here being to minimize the sum of absolute surprises (7) over the whole time span under scrutiny. Note that this rule for setting Z0 is not logically compelling: we could pick another rule, leading to slightly different results in the very first periods under scrutiny, but converging quickly to practically equivalent results. In this framework, we claim that several propositions can be established, out of which we select five. We now take them up and illustrate our results successively.
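The initialization rule can be sketched as a one-dimensional grid search, scoring each candidate starting value by the sum of absolute one-step-ahead surprises it produces. The forecaster below is a plain EWMA stand-in for the full Allaisian recursion, and the grid bounds, gain and synthetic data are arbitrary assumptions:

```python
import numpy as np

def sum_abs_surprises(x, z0, chi0=0.2):
    """Sum of absolute one-step-ahead surprises |x_t - z*(t-1)|
    for an EWMA forecaster started at z0 (stand-in for Z0)."""
    z, total = z0, 0.0
    for obs in x:
        total += abs(obs - z)        # surprise vs. yesterday's expectation
        z = z + chi0 * (obs - z)     # EWMA update
    return total

def best_initialization(x, grid):
    # pick the grid point minimizing the cumulated absolute surprise
    scores = [sum_abs_surprises(x, z0) for z0 in grid]
    return grid[int(np.argmin(scores))]

rng = np.random.default_rng(0)
series = 0.15 + 0.02 * rng.standard_normal(120)   # synthetic volatility-like data
grid = np.linspace(0.0, 0.5, 101)
print(best_initialization(series, grid))
```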

Proposition Nr. 1: Expected variability tracks quite effectively the historical trend of volatility

Figure 1 shows the results of our computation. We confront z*(t-1), the expected variability, with the actual volatility at time t. The graph clearly shows that the Allaisian algorithm of relativity allows finding a very meaningful trend in the time series. The coefficient of determination (R² below) between the observed volatility v(t) and the computed expected variability z*(t-1) stands at 78.16%, even though our sample is not the easiest one for an algorithm to fit: to the "irrational exuberance" of the 1990s, one has to add the bursting of the Internet bubble, 9/11 and its sequel, the major recession of 2008-2009, the peak of 2011, etc. More significant than the R² in this context of high autocorrelation, the root mean squared error (RMSE below) is 4.5%, a low figure for financial volatility on such a sample.

Why do we lag z* with respect to v? We must do so because, without such a time lag, the R² deteriorates and the RMSE increases to 5.03%. Intuitively, equation (5) above shows that the weighting of past observations is of an exponential type, and it is quite clear that the last observation, say v(t-1), has a heavy weight in contributing to z*(t), especially when Zt turns out to be large. Hence the stronger link of z*(t) to v(t-1) than to v(t). In other words, Figure 1 shows that the algorithm has the power to track the core tendency of the time series, albeit more closely with a limited, one-period time lag. We also see that the time lag necessary to obtain a good fit between z* and v will be all the shorter as the speed of change in financial affairs is high (as in hyperinflation times, e.g.).
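The lag comparisons reported in this section rest on two standard diagnostics, which can be computed as below (the series here are synthetic placeholders, and R² is taken as the squared correlation coefficient, one common convention):

```python
import numpy as np

def rmse(a, b):
    # root mean squared error between two aligned series
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def r2(a, b):
    # squared Pearson correlation, used here as the coefficient of determination
    return np.corrcoef(a, b)[0, 1] ** 2

def lag_diagnostics(forecast, realized, max_lag=2):
    """RMSE and R2 of forecast(t) against realized(t + lag), lag = 0..max_lag."""
    out = {}
    for lag in range(max_lag + 1):
        f = forecast[: len(forecast) - lag]   # drop the tail of the forecast
        r = realized[lag:]                    # shift the realized series forward
        out[lag] = (rmse(f, r), r2(f, r))
    return out
```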

Figure 1. S&P500 1-month-lagged expected variability and monthly volatility, 1990-2016

The next question is then to know how that new variable z*(t) is linked to the VIX index.

Proposition Nr. 2: The VIX index at time t is more closely linked to z*(t) than to z*(t+1) and, for a given month t, it is a still closer replica of z*(t) than of actual volatility v(t). If we focus on volatility v alone, then the VIX is closer to v(t) than to v(t+1), contrary to what common 'truth' suggests.

These results may sound surprising, as the VIX has won fame as an index of the risks ahead (journalists call it "the index of fear") and hence of the volatility to come. Let us just mention that the RMSE between VIX(t) and v(t) amounts to 5.89%, while it rises to 7.94% if we relate VIX(t-1) to v(t). The same is true of the relation of VIX(t) to z*(t) (RMSE = 5.08%) when compared to the relation of VIX(t-1) to z*(t) (RMSE = 5.25%).

To check for possible common trend effects, we analyze the changes in the VIX and relate them to the changes in volatility and/or in z*. The results only reinforce the statements above. Figure 2 illustrates these facts.

On this graph, the two ascending curves show the RMSE values computed for the relations examined. The higher of these curves is the RMSE from equating the VIX to the same month volatility (time lag =0), to the next month volatility (lag=1) and finally to the volatility 2 months later (lag=2). The lower of these two curves is the counterpart when equating the VIX to z*. One can see that the link between the VIX and z*(t) is much closer – it is RMSE-dominated – than the one to v(t), and this is true for any time lag considered.

The two other downward sloping curves display the R2 values (adjusted for a factor 1/10) for the relation VIX-z* (higher curve) and VIX-historical volatility (lower curve), under the same three different time lags. Again, there is a domination in terms of R2 of the relation between VIX and z*(t) with respect to the one between VIX and volatility v(t). The order is unchanged whatever the time lag considered.

Figure 2. Relations of changes in VIX(t) related to expected variability z*(t) and to historical volatility v(t) under 3 different time lags. 1990-2016

These results validate, on the given sample, proposition 2 above.

Proposition Nr. 3: The preceding Proposition Nr. 2 is only strengthened quantitatively when introducing the risk premium

The risk premium on a financial market can be computed in various ways. In their yearly survey of market risk premia in use, Fernandez et al. (2017) report for 2016 values ranging, in the US, from 1.5% to 12%, with a median (and, for that matter, an average as well) of 5.7%. As a risk premium relevant to financial-market investors, we compute the spread between annualized returns on the S&P500 and expected annualized returns on 10-year T-bonds, adjusted for the rate of expected inflation as computed by the University of Michigan. The resulting monthly series for the ERP is in line with the usual estimates for the US over the period under examination. We then deduct the risk premium from the VIX, to see whether this VIX net of risk premium (VIXnRP below) better corresponds to the usual representation market participants have of it. Under this adjustment, however, similar results hold and confirm the conclusions reached above (Figure 3).
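Structurally, the adjustment just described amounts to a few lines of arithmetic. The inputs below are synthetic monthly series standing in for the actual data (S&P500 annualized returns, 10-year T-bond yields, Michigan expected inflation), and the authors' exact averaging windows are not specified, so this is only a sketch:

```python
import numpy as np

def vix_net_of_risk_premium(vix, sp500_annual_ret, tbond10y_yield, exp_inflation):
    """ERP = equity return minus inflation-adjusted bond return;
    VIXnRP = VIX minus that equity risk premium (all in % p.a.)."""
    real_bond = np.asarray(tbond10y_yield) - np.asarray(exp_inflation)
    erp = np.asarray(sp500_annual_ret) - real_bond    # equity risk premium
    return np.asarray(vix) - erp, erp

# Synthetic monthly inputs (percent per annum)
vix = np.array([18.0, 22.0, 15.0])
sp = np.array([8.0, 6.5, 9.0])
bond = np.array([4.0, 4.2, 3.8])
infl = np.array([2.0, 2.1, 1.9])
vixnrp, erp = vix_net_of_risk_premium(vix, sp, bond, infl)
```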

On Figure 3, the two dotted curves refer to the respective RMSE between the VIXnRP and expected variability (lower curve) and between the VIXnRP and the volatility (upper curve). By contrast, the two negatively sloped solid curves on Figure 3 refer to the R2 between respectively the VIXnRP and z*(t) (higher curve) and the VIXnRP and the historical volatility (lower curve). For whatever lag (the abscissa in months in Figure 3) that we may consider, both for z* and for v, the VIXnRP is closer to the expected variability z* than to volatility v.

Figure 3. Relation of changes in VIXnRP to changes in expected variability z*(t) and in historical volatility v(t), under three different time lags (in months). 1990-2016

Finally, for either z* or v, the VIXnRP is always closer to the value taken by the variable at the same period than to its value at the next period, and even more so than at the second-next period. Proposition 3 is thus validated by our data.

Proposition Nr. 4: The link of the VIX to the risk premium appears to be a myth.

The VIX as a relevant "index of fear" on the market is another widely held representation that is knocked down by inspection of the data. To test this new proposition, several ideas can be used. The first idea that comes to mind consists simply in confronting the VIX with the risk premium. The computation of a simple R² yields an appalling 1.47%. To minimize autocorrelation effects, we look at the correlation between changes in the VIX and changes in the risk premium. The R² then falls to 0.10%, which shows that there is no point in looking further in that direction.

To give the hypothesis of a link between the VIX and the risk premium another chance, we can use the result of Proposition 1: we can adjust the VIX for the trend in volatility represented by z*(t) at each date t and investigate the link with the risk premium. There should then be a "pure" direct link to the risk premium. Figure 4 shows the results of such a computation.

The correlation coefficient between the two curves of Figure 4 is -0.304 and the determination coefficient is 9.23%, a quite low level. Again, we look at the monthly changes in the two series. The R² obtained is then 6.09% and the RMSE 15.87%, both very unsatisfactory results.

Figure 4: The VIX corrected for the trend of volatility (blue solid line) confronted with the risk premium (thick red line), with their respective linear tendencies

The VIX evolves inversely to the risk premium, contrary to current beliefs.

Figure 4 reveals, however, a rather stunning result. Not only does the VIX exhibit little link with the risk premium, but it has evolved in the opposite direction to it! It is difficult at this point to say after which date the VIX has become (or will become) undervalued, but this is a more than likely event, should the present 25-year trend continue. The world has become more turbulent, some would say more violent, than it was for half a century, and the VIX has fallen almost continuously! At first sight, there are only two explanations for such a development. The first is that the successive "quantitative easing" policies pursued in the US, and later in Europe, have made investors hunt for borrowers and accept any level of risk premia; the alternative explanation would be that markets have become myopic to a deeper extent than before. This would require further research, beyond the scope of the present paper.

The question of determining when the VIX has become, or will become, undervalued remains to be solved, however. Our next proposition is a contribution in that direction.

Proposition Nr. 5: From a risk premium perspective, the VIX index has been overvalued until the mid-2000s and has since then been undervalued.

Although the VIX is widely regarded as an implied forecast of volatility once adjusted for a risk premium, we have just seen that the available data challenge that vision. In particular, Propositions 1 to 3 above suggest that the VIX reflects the present trend of volatility, as spotted by z*(t), more than the next period's volatility, and even more so than the volatility of the periods thereafter. But, to detect any "overvaluation" or "undervaluation" of the VIX, we must refer to what the index has been designed for, i.e. to provide an expectation of stock market volatility in the near future, once the risk premium has been taken into account.

Figure 5. Immediate (thin blue line) and reasoned (thick red line) over- (+) or under- (-) valuation of the VIX. The VIXnRP is compared respectively to the next month's and to the average of the next three months' volatility.

Figure 5 represents two ways of measuring that overvaluation (or undervaluation). One is the "immediate" version, in which the VIXnRP over-forecasts (or under-forecasts) the volatility of the next month only. The other, which we call here "reasoned" over- or under-valuation, consists in comparing the VIXnRP to the average volatility of the next three months. The declining trend reveals over-valuation until 2007, and continued, increasing under-valuation since then.
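The two measures can be sketched as follows (synthetic inputs; positive gaps mean over-valuation, negative gaps under-valuation, as in Figure 5):

```python
import numpy as np

def valuation_gaps(vixnrp, vol):
    """Immediate gap: VIXnRP(t) minus next month's volatility.
    Reasoned gap:  VIXnRP(t) minus the average of the next three months' volatility."""
    vixnrp, vol = np.asarray(vixnrp, float), np.asarray(vol, float)
    n = len(vol)
    immediate = vixnrp[: n - 1] - vol[1:]
    reasoned = np.array([vixnrp[t] - vol[t + 1 : t + 4].mean()
                         for t in range(n - 3)])
    return immediate, reasoned
```

By construction the two series can only be evaluated ex post, which is why, as the text notes, over- or under-valuation can only be back-tested.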

The results appear again paradoxical: the VIX was, on average, rather over-valued before 2008 and, on average again, rather under-valued since then. One can only offer as comments the two possible explanations already given in the preceding section.

Concluding comments: Issues on volatility and volatility forecasting.

A frequently encountered rule of thumb among traders has it that a "normal" implied volatility is equal to the average of recent months' historical volatility. There is, of course, no scientific ground whatsoever for such a rule, which is frequently invalidated by experience itself. On the other hand, in line with most of the literature, we find that implied volatility, far from being a reliable forecast of future volatility, appears to rise once an accident has happened, in a way somewhat similar to many insurance premia. This is but a new illustration of the saying used as a subtitle to this paper. And still, this saying is a generous judgment, for, as seen from Proposition 4 above, the index has followed a trend inversely related to the risk premium for some twenty years now.

What this paper has shown is that the “missing link” between historical and implied volatility lies in the Allaisian analysis of time series derived from that great author’s monetary theory. This missing link yields what we have termed above the “expected variability”, computed from Allais’s subtle and sophisticated algorithm using historical prices as input and based on the theory of psychological time.

More generally, our results support the idea that expectations are not really grounded in prospective insights, but rather in past representations of data. Between today and tomorrow, the link is a subtle representation of the past. This was Maurice Allais's firm conviction. Our data support it, even on a market driven by sophisticated investors (not everybody trades on derivatives) and even over a relatively short horizon, as is the case with the VIX. The magic of Allais's psychological time theory bridges the gap on this market, as it does with even greater force in other domains.

Further research is called for regarding our results on the over- and under-valuation of the VIX, which, contrary to what the rule of thumb evoked at the beginning of this section suggests, can only be judged with respect to the near future's volatility figures. The situation can therefore only be back-tested, as has been done in this paper. But the factors of over- or under-valuation are still to be determined. We could only offer hypotheses in this paper.

An entirely different question is that of volatility forecasting. It is quite clear that the Allaisian algorithm briefly developed in section I cannot serve as a forecasting tool as it stands: Allais's algorithm implicitly assumes that all data of the time span under consideration are known with certainty, which obviously cannot be the case if one is to forecast future, unknown data. Clearly, analytical complements of various types have to be added to the initial algorithm if one wants to base a prediction tool on similar ideas. Riskinnov has fine-tuned such a computational procedure, which performs better than the best-known sophisticated models of market volatility, including GJR-GARCH. But that is in itself the topic of another paper.


  • Allais M. 1965. Reformulation de la Théorie Quantitative de la Monnaie : La Formulation Héréditaire, Relativiste et Logistique de la Demande de Monnaie. Bulletin SEDEIS, n°928 September
  • Allais M. 1966. A Restatement of the Quantity Theory of Money. The American Economic Review 56(5): 1123-1157
  • Allais M. 1972. Forgetfulness and Interest. Journal of Money, Credit and Banking 4(1): 40-73
  • Barthalon E. 2014. Uncertainty, Expectations and Financial Instability – Reviving Allais’s Lost Theory of Psychological Time, Columbia University Press
  • Cagan P. 1969. Allais’ Monetary Theory: Interpretation and Comment. Journal of Money, Credit and Banking 1(3): 427-432
  • Christensen BJ, Prabhala NR. 1998. The Relation between Implied and Realized Volatility. Journal of Financial Economics 50: 125-150
  • Fernandez P, Pershin V, Acin IF. 2017. Discount Rate, Risk-Free Rate and Market Risk Premium Used for 42 Countries in 2017: A Survey. IESE Business School WP
  • Tirole J. 2012. Notice sur la vie et les travaux de Maurice Allais. Discours de réception à l’Académie des Sciences Morales et Politiques, November, 26. Reproduced in : Rayonnement du CNRS 61: 22-31


Bertrand MUNIER

Affiliation : Emeritus Professor, Sorbonne’s Business School, Head of Research, Riskinnov Ltd., London


Affiliation : Global Head of Capital Markets Research, Allianz Investment Management SE, Munich

Séverine MENGUY

Affiliation : Associate Professor of Economics, The Sorbonne-Paris V, Social Science Department, Paris

