The visible hand in economics

Archive for October 2007

According to a number of religious people, there is a Jesus economics. Now I don’t agree with their idea of the sort of economics Jesus presented at all. That may be because I’m not a religious academic, or it could be that these people just don’t understand what economics actually is (for example, they complain that we study scarcity, when Jesus economics is about abundance. That is silly: economics is the study of scarcity, and calling it Jesus economics doesn’t mean it can break the literal definition).


Time is tight so I can’t do a real blog post just now. However, I thought I could mention oil shocks and the difference between their impact in the 1970’s and now.

Oil prices have gone through the roof in recent times. In 2006, New Zealand was also suffering from its own oil shock, with world prices high and our dollar falling. However, we haven’t really seen much of an impact on the domestic economy, other than a slight fall-off in domestic consumption. Compare this to the 1970s, when oil shocks caused periods of massive inflation, forcing economists to see that the Phillips curve idea didn’t hold up in a dynamic setting.

Greg Mankiw goes on to discuss three reasons why oil shocks haven’t led to massive inflation (note that the current oil shock will be severe in the USA, as their dollar has fallen strongly and world prices have risen):

  1. The economy is more energy-efficient
  2. Labour markets are more flexible, monetary policy has been designed better
  3. The inflationary impact of an oil price shock is different when it is the result of greater demand for oil (as it is currently) compared to the 1970s, when there was a cut in the supply of oil.

If I find time, I might try to talk about these issues and how I think oil price shocks influence the NZ economy, but don’t count on it 🙂 . So, does anyone have anything to say about oil price shocks (preferably not about peak oil, but if you really have to 🙂 )?

Update: There was a fourth reason in the Mankiw article. 4) The increase in oil prices was not as sudden, giving economic variables the chance to adjust (this concept comes from his New Keynesian belief in sticky wages and prices).

Update 2: Even NZPA has something to say about oil prices and inflation – it might be a hot topic.

Update 3: Looks like the topic of oil prices is making its rounds on the blogosphere. With US economic growth at 3.9% (an annualised rate, equivalent to just under 1.0% quarterly growth in terms of how we measure GDP in NZ) alongside these high oil prices, it looks like this will be an important and interesting issue.
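The conversion between the two conventions is simple compounding. As a quick sketch (assuming the 3.9% figure is a compounded annual rate):

```python
# Convert a US-style annualised growth rate to the quarterly rate
# that NZ GDP figures are usually quoted in.
# Assumes the 3.9% is a compounded annual rate.
annualised = 0.039
quarterly = (1 + annualised) ** (1 / 4) - 1
print(f"{quarterly:.2%}")  # just under 1.0%
```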

There seems to be a significant debate between the left and right wing blogs about whether New Zealand is over-taxed or not. However, there is one thing that both sides agree on: if taxes are cut by $1,000, this gives people $1,000 more to spend. This is the point I’m going to discuss.

Households receive a net wage, which is their gross wage minus income tax. The household requires a certain net wage before it will enter the labour market (say the benefit plus the opportunity cost of leisure time), and may also require a premium to choose one firm over another (when labour supply is restricted).

Now the gross wage plus non-labour costs (which we will assume are exogenous, even though they aren’t really 🙂 ) is the cost to the firm of hiring that employee. If taxes fall, the net wage the household receives would be higher. However, the relationship between employers and employees determines the gross wage. If the employer knows that taxes will fall, they can reduce their employees’ gross wage and leave their net wage the same (I know that firms often can’t do this because of labour laws and wage stickiness; however, in a dynamic sense they could just reduce the rate at which they increase an employee’s wage). Ultimately, the division of the tax depends on the relative bargaining power of the different agents.

If there were ‘many’ firms and ‘many’ employees, the incidence of tax would depend on the relative elasticities of demand and supply for labour. Often labour demand is assumed to be relatively elastic while labour supply is highly inelastic. In this case most of the tax is borne by the employee and so a cut in taxes will mainly benefit them.

However, if we have a high rate of unemployment, labour supply will become relatively more elastic, which implies that some of the burden shifts onto employers.

If we have a monopoly firm and many (homogeneous) low-skilled employees (flat labour supply curve) the tax burden will be fully taken up by the firm. This is because the monopoly will only want to pay enough to get the employees to work, and so the net wage will be set at the reservation level. If you cut taxes you cut the gross wage required to get this net wage. Note: This result would not hold with asymmetric information (worker effort) or heterogeneous agents (as a higher net wage would then be required to entice more workers – labour supply would be upward sloping).
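For the competitive ‘many firms, many employees’ case, the textbook incidence rule can be sketched in a couple of lines. The elasticity numbers below are assumptions for illustration, not estimates for the NZ labour market:

```python
def worker_share_of_tax(demand_elasticity, supply_elasticity):
    """Share of an income tax borne by employees in a competitive
    labour market: the more inelastic side bears more of the burden."""
    return demand_elasticity / (demand_elasticity + supply_elasticity)

# Illustrative assumptions: relatively elastic labour demand (2.0)
# and highly inelastic labour supply (0.1).
share = worker_share_of_tax(2.0, 0.1)
print(f"employees bear {share:.0%} of the tax")  # ~95%
```

With supply that inelastic, almost all of a tax cut flows through to employees, which is the standard assumption the post describes.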

Ultimately, where the burden of income tax falls is a difficult issue, and depends on the specifics of the labour and goods markets. However, it is not clear cut that if my taxes are cut I would end up with that much extra money. As a result, we have to realise that a cut in income taxes will result in a reduction in firm costs as well as an increase in consumers’ spending power.

The OCR was left unchanged at 8.25%.

It seems that the RBNZ took a relatively neutral tone, stating that it was happy that without any external shocks the current rate should be sufficient to keep inflation in the target band of 1-3%.

However, looking at the set of external shocks the Bank provided seems to indicate that there is still some prospect of rate increases in the near future. Specifically, the Bank indicated that any increase in government spending would be taken as an upward shock. As we are entering an election year where the parties will base policy on the fundamentals of lollynomics rather than fiscal restraint, the balance of probabilities seems to suggest further hikes may be on their way.

We mentioned a while back that the way politicians’ reputations are besmirched by allegations of misconduct could be due to the mistaken recollections of their constituents. Of course, there are rational reasons to mistrust politicians who are investigated for misconduct too, as Stuart Armstrong points out:

more guilty people get tried and acquitted than the average of the population. So … the trial is evidence of guilt – noisy evidence, but evidence none the less.

So, barring a complete exoneration (rather than a mere acquittal) perhaps I am silly to have faith in people who’ve been tried for crimes or misconduct. Our justice system is designed to prevent the conviction of the innocent, rather than preventing the acquittal of the guilty. As such we should expect that far fewer people are wrongly convicted than are wrongly acquitted. Given that there are a reasonable number of convictions overturned in light of later evidence, it must be that plenty of those acquitted are guilty of what they are accused of. They can’t be punished but that doesn’t make them innocent, and our beliefs should rationally reflect that.
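Armstrong’s point is really just Bayes’ rule: being tried is correlated with guilt, and an acquittal doesn’t fully undo that evidence. A quick sketch with entirely made-up numbers (none of these are real statistics):

```python
# Bayes' rule on "tried and acquitted" as noisy evidence of guilt.
# Every number here is an illustrative assumption.
p_guilty = 0.01               # prior: share of people guilty of a given crime
p_tried_given_guilty = 0.30   # guilty people are far more likely to be tried
p_tried_given_innocent = 0.001
p_acquit_given_guilty = 0.20  # some guilty defendants are acquitted
p_acquit_given_innocent = 0.90

num = p_guilty * p_tried_given_guilty * p_acquit_given_guilty
den = num + (1 - p_guilty) * p_tried_given_innocent * p_acquit_given_innocent
posterior = num / den
print(posterior)  # well above the 1% prior, despite the acquittal
```

Under these assumptions the posterior probability of guilt after a trial-and-acquittal is around 40% – the acquittal is reassuring relative to a conviction, but the trial itself remains evidence of guilt.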

You all probably know by now that Eric Maskin was among the recipients of this year’s ‘Nobel prize’ in economics for his work on mechanism design. Browsing a few of his papers I came across one that reminded me of Will’s post on free software. Contrary to the obvious intuition, Bessen and Maskin propose that the software industry is an example of just the type of industry which could benefit from the removal of IP protections.

Software exhibits two characteristics that make this possible: sequential innovation and complementarities in technology. Since new software often builds on old software it is socially efficient to allow others to build on current platforms rather than have to reinvent the wheel. Of course, it means that rents that could be extracted through licensing are foregone. However, because software is complementary it is also in firms’ best interests to allow their ideas to be freely used. The complementarity means that others’ innovations increase the value of your future innovations. Thus, if you can speed the development of others’ software through letting them use your ideas then you can massively increase the expected value of your future developments. The conclusion they reach is that removal of patent protections would leave only the most innovative firms in the market and increase both their profitability and total surplus.

The paper discusses patents, not copyright, so it doesn’t directly pertain to open source software: Bessen and Maskin are talking about protection of ideas, not protection of actual written code. They don’t discuss allowing people to directly build on an existing code base but rather allowing ideas to be copied by others using their own code. Thus their proposal still provides significant barriers to entry in the form of an initial investment, but does not bar entry through the creation of monopolies over concepts. I wonder whether the removal of copyright protections would further enhance the incentive to innovate in the industry? Presumably the removal of barriers to entry would reduce profits and increase consumer surplus, but would it reduce or further enhance technological progress?

Some Canadian developers are making a computer game which simulates driving home drunk. They and police think it will be a great educational tool, showing teenagers how dangerous it is to drive while intoxicated. I’m not sure if I completely agree.

Assume that this game realistically represents driving while drunk. As some people do get home when driving drunk, there must be some probability of getting home without crashing. Since teenagers can play this game over and over again, they can get better at not crashing in the game, which may make them think that they are less likely to crash when they actually drink and drive. Now the game might genuinely improve their driving, or it might just give teenagers a false belief that they are good at drunk driving (given that the road and obstacles will differ from those on their drive home). However, if our teenagers are rational this shouldn’t be the problem (as they will update their beliefs appropriately); the problem is that drunk driving is an experience good with negative externalities.

People who haven’t been drunk driving don’t know how likely it is that they would crash; they are uncertain (they don’t know the probability density function and so base probabilities on arbitrary beliefs). Once someone has consumed drunk driving, they gain information, and they know what the risks are when they drive. Now if current social advertising has misled teenagers into believing that the risks are greater than they truly are, a situation with no computer game may be preferable to a situation where kids have played the game.

Although full information is usually preferable, teenagers’ decisions to drink drive have a negative externality, which is the damage they cause when they crash. By showing kids the true probability of crashing, we increase their consumption of drink driving (assuming that their prior belief was that a crash was more likely) to the point where the social cost outweighs the social benefit of their driving activity.

However, there might still be scope for the game and full information. If we can ‘tax’ the negative externality, we can bring the quantity of drink driving down to the socially optimal level. This would require having police fine people when they catch them drink driving. The fine would have to equal [‘cost of outcomes’ x ‘probability of outcomes’]/[probability of being caught and fined]. In this case the driver takes on the full social cost of their drunken activity, and so will only consume the socially optimal amount. The problem with applying this rule comes from quantifying the costs. If a drunk driver kills someone, what is the cost of that in monetary terms?

Ultimately, given the difficulty of quantifying outcomes, I think this may be the case where mis-information (at least a focus on the negatives) may be the best way to improve social outcomes. Discuss 🙂

Matt posted earlier today about someone who gains utility from thinking about buying something even if they never actually buy it. There’s actually quite a lot of work that’s been done on utility gained from anticipation. George Loewenstein’s 1987 paper reports a study in which people were willing to pay more for a kiss they received in three days than they would be willing to pay for the same kiss today. They gained pleasure simply from thinking about the kiss and anticipating how good it would be. I don’t think the students in the study ever actually got the kiss so he doesn’t mention whether it lived up to expectations. Interestingly, this idea of utility from anticipation only seems to hold over non-monetary items. Nobody values receiving $1000 in three days more than $1000 today: monetary gains are discounted as one would normally expect.

Of course, if you get benefits from anticipation then that might affect your consumption of ‘real’ items. Botond Koszegi has a working paper in which he models a ‘personal equilibrium’ which describes the interaction between anticipated and actual consumption. Two phenomena which he identifies as arising from his model are self-fulfilling expectations and behaviours which depend upon unchosen alternatives. Both of these are commonly observed: the second in particular is often referred to by psychologists as a criticism of the standard economic model of consumer choice. We value consumption differently depending upon proximate reference points: for instance, if you go on a fantastic holiday then even if the next holiday is good it is a disappointment. If incorporation of anticipation into our models of behaviour can make them that much more representative of reality then it’s certainly worth thinking about.

Economists will often look at outcomes and associate values to things based on the outcome. Often other disciplines (namely Sociology and Anthropology) criticise our focus on outcomes, stating that we do not pay enough attention to the means of deriving those outcomes.

A post over at Stackelberg follower (love that blog name) discusses the utility he derives from shopping for a product, even a product that he may not actually buy. This made me think again about the issue of means and outcomes.

I don’t believe that economists forget about the means when describing the payoff from outcomes. The utility gained from receiving a pie (I love pies) by legitimate means will likely exceed the utility from a pie gained by killing someone. I tend to think that the means of getting to a certain point, or consuming a certain product, influences the payoff associated with the consumption of that product.

In the case that the Stackelberg follower guy described, the consumption choice was to NOT buy the game. It might sound weird, but in that case he chose not to purchase anything and that was still some form of consumption. Now the utility associated with that consumption comes from the direct value of that consumption (having nothing) and the means to that consumption (shopping). Another way of looking at it is to change the idea of what was being consumed. In the Stackelberg follower case he chose to consume shopping rather than consuming a sit in the park.

As a result, I think the means to an outcome are important. However, economists do not ignore them (as some disciplines think we do), since the means to consumption influences the value of that consumption activity. Ultimately, the pie I bought and the pie I could have potentially killed for are different products, with different values, even if there is no physical difference in the pies.

What do you guys think? (assuming more than one person reads this 😉 )

The October official cash rate review will be on the 25th October.

The main points that will be driving RBNZ policy are:

  1. Strong economic growth (including revisions) in the June year.
  2. The surprising strength in the export industry, even before the TOT shock has fully eventuated
  3. A weak CPI reading; however, many of the shocks driving down the number were one-off and so will be over-looked
  4. Credit market uncertainty and the volatility of the dollar
  5. Weakness in the housing market – Potential weakness in domestic demand

It seems more than likely that the RBNZ will keep rates steady. Any cuts would be irresponsible in the current climate, while a rate rise would seem heavy-handed given weakness in some domestic indicators. The probability of a hike is greater than the probability of a cut, given underlying inflationary pressure.

The government has decided to ban the construction of new fossil fuel plants for the next 10 years, as it believes they are unnecessary. However, I feel that it is the ban itself that is unnecessary.

I do believe that if we left power generation to the free market, too much CO2 would be produced, and our liability under the Kyoto protocol would be ‘too high’.  But wasn’t that why the government introduced a carbon trading scheme?

In economic terms, the externality from fossil fuel plants is greater than the externality from hydro or wind power generation. The government can try to fix this externality by putting an externality charge on unit production (which is what a tax or a carbon trading scheme does) or by directly regulating the industry. As the carbon trading scheme is coming into place, the firm producing the power will have to pay the full social cost of producing the power. In this case, if the firm still decides to build a coal power plant instead of a wind farm, it must be because the full social cost of the coal plant is lower than the full social cost of the wind farm. As a result, banning the construction of fossil fuel plants seems unnecessary, as in this example, society is better off with this coal plant than with the wind farm.
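The logic of a carbon charge can be put in a few lines: once emissions are priced, the firm’s private cost of each plant equals its full social cost, so whichever plant it picks is the efficient one. All figures below are invented for illustration:

```python
# With a carbon charge, the firm's private cost equals the social cost,
# so its profit-maximising choice is also the efficient one.
# All numbers are illustrative assumptions.
carbon_price = 25.0  # assumed $/tonne CO2

def private_cost(build_cost, tonnes_co2):
    """Cost the firm actually faces once emissions are priced."""
    return build_cost + carbon_price * tonnes_co2

coal = private_cost(build_cost=800.0, tonnes_co2=10.0)
wind = private_cost(build_cost=1100.0, tonnes_co2=0.0)
print("coal" if coal < wind else "wind")  # coal wins here, and efficiently so
```

If the carbon price were higher (or coal’s emissions larger), the same calculation would flip the firm to wind – with no ban required either way.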

My attention has been drawn (thanks Paul) to an article which describes how one might find the optimal number of members of parliament in a representative democracy.

In a nutshell, a parliament with too few representatives is not “democratic” enough, possibly leading to an unstable political system, in which various undesirable forms of political expression, including of course violent ones, will develop. In contrast, too many representatives entail substantial direct and indirect social costs, they tend to vote too many acts, interfere too much with the operation of markets, increase red tape and create many opportunities for influence, rent-seeking activities and corruption.

In their paper the authors derive a formula in which the optimal number of representatives is approximately proportional to the square root of the population of the country.
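The square-root rule is simple to state in code. The proportionality constant below is an assumption for illustration only – the paper estimates its own constant from the cross-country data:

```python
import math

def optimal_representatives(population, k=0.1):
    """Square-root rule: optimal number of MPs proportional to the
    square root of the population. k is an assumed constant for
    illustration; the authors estimate the actual value."""
    return round(k * math.sqrt(population))

# New Zealand's population in 2007 was roughly 4.2 million.
print(optimal_representatives(4_200_000))
```

Under this assumed k a country of 4.2 million would have around 205 representatives, which gives a feel for why the authors class NZ’s 120-odd MPs as ‘too few’.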

They test their formula against the data and find that Israel, New Zealand, the Netherlands and the USA have far too few representatives. I’m not sure how this correlates with their predictions about the instability of under-represented democracies: all four countries are very stable, non-violent democracies as far as I’m aware. I don’t have access to the CEPR paper but I’d be curious as to whether they address this issue. Their model makes some predictions which are easily falsifiable and I’d like to know how it stacks up against the evidence.

Economists like to claim that their discipline is about providing tools for analysis, not answers to ready-made problems. OK, so they’re hardly alone in that but Dani Rodrik links an interesting paper by Michael Greenstone that walks the walk. Greenstone

…shows how data from world financial markets can be used to shed light on the central question of whether the Surge has increased or diminished the prospect of today’s Iraq surviving into the future. In particular, I examine the price of Iraqi state bonds, which the Iraqi government is currently servicing, on world financial markets. After the Surge, there was a sharp decline in the price of those bonds, relative to alternative bonds. This decline signals a 40% increase in the market’s expectation that Iraq will default. This finding suggests that, to date, the Surge is failing to pave the way toward a stable Iraq and may in fact be undermining it.

I really like how he uses modern economic and econometric techniques to tackle important public policy issues. However, we shouldn’t rely too heavily on the wisdom of the market in evaluating the effectiveness of the surge. While the aggregation of information that occurs in market pricing might be a better indicator than anecdotal evidence, ‘The Market’ doesn’t have superpowers that grant it access to information nobody else has. Many of the decision makers in that market likely have little more access to information about the surge than you or I. Aggregation tends to dampen the influence of extreme views but it doesn’t guarantee accuracy.
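For a feel of how a bond price maps to a default probability, here is the crudest possible risk-neutral sketch, ignoring coupons and discounting. This is not Greenstone’s actual methodology, and the prices and recovery rate are assumptions:

```python
def implied_default_prob(price, face=100.0, recovery=30.0):
    """Back a risk-neutral default probability out of a bond price,
    ignoring coupons and discounting: price = (1-p)*face + p*recovery.
    A crude sketch, not Greenstone's actual method; recovery assumed."""
    return (face - price) / (face - recovery)

# Illustrative (assumed) prices: a fall from 70 to 58 per 100 of face value.
before = implied_default_prob(70.0)
after = implied_default_prob(58.0)
print(f"{before:.0%} -> {after:.0%}")  # after/before = 1.4, i.e. a 40% rise
```

Even in this toy version, a modest price decline translates into the kind of 40% jump in implied default probability that Greenstone reports.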

An article on Stuff reports that the government is choosing to do nothing about a potentially dangerous protein found in most milk produced by NZ cows. Apparently it would be very damaging to the NZ dairy industry to act on the rather uncertain scientific evidence, so the Food Safety Authority is downplaying the situation. The nutrition expert who wrote a report for the NZFSA says, “it does raise the whole question about how well… the question of uncertainty is dealt with by the authority” and partially attributes the problem to “government risk aversion com[ing] into play.” Risk aversion is well known to economists and is exhibited by most people; however, the government here is not exhibiting risk aversion at all.

What the government is doing is avoiding certain costs in favour of maintaining the dairy industry’s profits with a huge cloud of uncertainty sitting over them. Doing something about the protein now would be costly but would avoid any future risk of adverse scientific results hurting our exports. Doing nothing exposes the industry to the entire risk of such results. Why would the government act in such an apparently risk-seeking fashion? The answer comes from the field of behavioural economics and, in particular, prospect theory.

This theory was proposed by Daniel Kahneman and Amos Tversky to explain two observations pertinent to the NZFSA’s actions: loss aversion and risk seeking behaviour for losses. Loss aversion describes how people feel the pain of a loss far more keenly than the happiness of a gain of equal size. Tversky and Kahneman also found that, while people are risk averse for gains, they are risk seeking for losses. They will be willing to take huge risks in an attempt to avoid making any sort of loss. Taking these factors into account perfectly explains the NZFSA’s behaviour: they are more than willing to take on the risk of future damage to NZ’s dairy industry in order to avoid the guaranteed losses of taking action now. Whether this is in the best interests of the country or the dairy industry is quite another matter, as the comments in the article make clear.
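Risk seeking for losses falls straight out of the Kahneman-Tversky value function, which is convex over losses. A sketch using the commonly cited parameter estimates from their 1992 paper:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains, convex and
    steeper (loss aversion, lam > 1) for losses. Parameters are the
    commonly cited 1992 estimates."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# A sure loss of 50 versus a 50/50 gamble between 0 and -100:
# a prospect-theory agent prefers the gamble (risk seeking for losses).
sure_loss = prospect_value(-50)
gamble = 0.5 * prospect_value(0) + 0.5 * prospect_value(-100)
print(gamble > sure_loss)  # True
```

The same comparison flipped into gains (a sure 50 versus a 50/50 shot at 100) goes the other way, which is the risk aversion for gains the post describes.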

So according to a recent study, Chimpanzees play the ultimatum game more ‘rationally’ than humans (hat tip Marginal Revolution).

For those who don’t know, the ultimatum game goes like this. There are two players, and a sum of money that can be split between them (say $1). The first player decides how this dollar should be divided between the two of them. Given this division, the second player decides whether to accept it or reject it. If the second player accepts, they divide the dollar; if the second player rejects the offer, they both get nothing.

If both players only value the amount of money they get, then the first player will set up the division so that player two gets only an infinitesimally small amount. However, when humans play the game we find that they divide the dollar up quite evenly. Furthermore, we find that when the dollar is divided very unevenly, the offer is rejected, even though that means player two misses out on some money.

When they say that chimps play a more rational game than humans, they mean that chimps behave in a way closer to what we would expect if the agents involved only valued money. All this really tells us is that humans value concepts like fairness more highly than chimps do. Hardly a surprising result.
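The money-only benchmark is just the subgame-perfect equilibrium, which can be sketched in a few lines (stakes of 100 cents assumed for illustration):

```python
# Subgame-perfect play of the ultimatum game when both players care
# only about money: the responder accepts any positive amount, so the
# proposer offers the smallest possible unit.
def responder_accepts(offer_cents):
    return offer_cents > 0  # any positive amount beats the zero from rejecting

def best_proposal(total_cents=100):
    """Smallest offer the responder still accepts, which maximises
    the proposer's share (total minus the offer)."""
    return min(o for o in range(total_cents + 1) if responder_accepts(o))

print(best_proposal())  # 1
```

One cent to the responder, 99 to the proposer: roughly the ‘chimp-style’ split, and nothing like the near-even splits humans actually make.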

However, this does make a good point for economists to take on board. Humans obviously do value fairness. Part of this is instinct, and part of this is institutional. By institutional, I mean that it is a preference we have developed as a result of the society we live in. Although economists are happy to abstract from ideas like this, when we apply economic theory it is important not to forget about the social norms that people also value.

But more importantly, the social norms that are created through the application of economic ideas may eventually change the preferences of the individuals in society. Fairness is useful as it helps reinforce co-operation in situations where a prisoner’s dilemma occurs. If analysts introduce policy that undermines fairness, or in some way degrades the social norm of fairness, then the socially optimal co-operative result becomes more difficult to achieve. Even more fundamentally, how does the change in preferences influence the way individuals value things? Could the loss of fairness as a social norm leave people feeling more upset, ceteris paribus? I think this is what sociologists have been telling economists for a while.

Ultimately, I accept the idea that social norms are important in determining preferences, but academic economists have good reason for not looking at them. Academic economists want to focus on things they feel they can objectively measure, so that their work does not become value laden. Defining preferences is not value neutral, and so is steered away from in academic work (except maybe in evolutionary game theory? Not that I would know 😉 ). However, economists who want to apply their ideas to reality must realise that society’s effect on preferences is non-trivial. This makes the question of how a given policy will impact on preferences more difficult.
