Sunday, January 30, 2011

Risk and uncertainty (revisited)

I have previously posted about economist Frank Knight's famous differentiation between risk and uncertainty:
Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. ... The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.... It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all.
As I noted, perhaps not the clearest of distinctions.

In the passage above, Knight expresses the difference as risk being “a quantity susceptible of measurement” and uncertainty being where such measurement is not possible. I would put it a little differently: ordinary risk is where the expected dangers are sufficiently structured that a pattern of expected risks can be derived from them (so that, even if specific values cannot be calculated, general rankings and ranges can reasonably be derived, even if only of the “x is greater than y” form), while uncertainty is where there is insufficient confidence in knowledge of how the dangers [likely outcomes] are structured, so that calculation is frustrated even in general terms. The dangers [likely outcomes] cannot be expressed mathematically in any useful sense (taking mathematics to be the science of pattern and structure), because there is insufficient pattern or structure within which likely outcomes can be assessed.

Expressed that way, we can think of uncertainty as a realm of possible outcomes across which people have little or no confidence in forming expectations whereas, with risk, people do have confidence in forming expectations. Hence, risk is where the factors driving outcomes are felt to be sufficiently patterned that expectations sufficiently specific to act on can reasonably be formed, and uncertainty is where that is not so.
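To make the distinction concrete, here is a minimal sketch (my illustration, not Knight's, with made-up payoffs): under risk the probability distribution over outcomes is known, so an expectation can be computed and acted on; under uncertainty even the probabilities are unknown, so any “expected value” is an artefact of whatever distribution one arbitrarily assumes.

```python
import random

# Risk: the distribution over outcomes is known, so an expectation can be computed.
known_outcomes = {100: 0.6, -50: 0.4}  # payoff -> probability (hypothetical values)
expected = sum(payoff * p for payoff, p in known_outcomes.items())
print(f"Under risk, expected payoff = {expected}")  # 40.0: a basis for action

# Uncertainty: the possible payoffs may be known, but their probabilities are not.
# Each arbitrarily assumed distribution yields a different "expectation",
# so the calculation provides no stable basis for forming expectations.
payoffs = [100, -50]
for _ in range(3):
    weights = [random.random() for _ in payoffs]
    total = sum(weights)
    guess = sum(x * w / total for x, w in zip(payoffs, weights))
    print(f"Under an assumed distribution, 'expected' payoff = {guess:.1f}")
```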

If we think of uncertainty as the range of possible events over which people do not believe they can form sufficiently confident expectations to act on, we can see the inhibiting effect that [negative] uncertainty is likely to have on economic activity: it will encourage more holding of money and other liquid assets as “buffers” against adverse outcomes and/or as resources to take advantage of opportunities that may present themselves.

Even if the uncertainty arises out of some change felt to be positive, it would have to be confidently bounded as a positive in all aspects for there to be no reason to retain a “buffer” against adverse change and, even then, there would be reason to hold resources to use when such positive opportunities present themselves. Either way, increasing one’s holding of money as a store of value rather than using it as a medium of exchange would be sensible, with contractionary effects due to people engaging in fewer transactions.

Uncertainty due to negative factors will naturally tend to have a greater contractionary effect than that due to positive factors: not merely because the need for a safety “buffer” is a more direct response to fear of loss, but also because fear of loss is generally greater than hope of gain. That is a well-known, and rational, tendency: for, while both fear of loss and hope of gain are directed at what might happen, what you already have has far more existential power than what you might have. The gain does not yet exist, and has not been experienced; that which one might lose already does, and has been. So fear of loss naturally tends to be cognitively stronger than hope of gain. Uncertainty has more fearful power in periods of contracting economic activity than of expanding economic activity, since the possibility of loss looms larger than any hope of gain.

Reducing uncertainty – i.e. increasing the ambit of matters over which expectations can reasonably be formed – will tend to promote economic activity, particularly economic actions with delayed pay-offs (such as creating and benefiting from capital). [Even uncertainty that is read positively is likely to be very unstable, easily subject to reversal – reading what Keynes called 'animal spirits' as how uncertainty is currently being framed – since it exists in the absence of a basis on which to frame expectations and so is easily reversed by new information.] So business will often prefer policy clarity – even if the policy is hostile or otherwise problematic – to policy uncertainty, since the former gives some structure within which to calculate likely results from actions over time (particularly investment). Creating and sustaining a “bazaar” economy of transactions that are immediate swaps is easy and historically common. Creating and sustaining an economy where transactions across time, notably the production of capital, are encouraged, and so become extensive, is harder, and historically rarer.

Hence the long-term economic benefits of the rule of law. It encourages the creating and utilising of capital because it lessens uncertainty about whether one will continue to benefit from the capital one creates, moving beyond what people call ‘sovereign risk’: not the possibility of public debt default, but more general cases of official actions seriously undermining the value of assets (such as confiscation).

But the value of the rule of law extends beyond restraining officials. Contracts, for example, can be seen, not merely as ways of reducing risk, but of lessening uncertainty: but only if they can be enforced. Just as having well-defined and enforceable property rights does much to move economic (and other social actions) out of the realm of uncertainty and into that of mere risk. So people can form expectations with a reasonable degree of confidence, and act upon them.

It is impossible to completely abolish uncertainty, just as it is impossible to completely abolish risk. But public policy that seeks to sustain a stable and prosperous society should aim to decrease uncertainty, and avoid actions that increase it.

Tuesday, January 25, 2011

Avoidance

There are few better markers of the pervasive intellectual dishonesty of much of the progressivist intelligentsia than their willingness to state or imply that other folk are fascists, except for brown-skinned folk who actually are.

This thought was prompted by an engaging essay on cultural tensions in Iran. The essay takes a hopeful view that the mullahocracy is too antipathetic to powerful currents in modern Iranian culture to survive: a comforting thought.

It was the following passage which prompted the above thought:
Recent remarks by Sheik Hassan Nasrallah, the head of Lebanon’s Hezbollah, that Iran’s leaders in the last thirty years are all, in fact, Arabs and that their claims of being descendants of the prophet (symbolized by the black turbans they wear) reassert their Arab blood show clearly the continuing tensions between Persian identity and the Islamism of the rest of the Shia Middle East. Nasrallah needs to convince his followers thus that these Arab brothers have left nothing of a “Persian culture” to survive. These controversial comments indicate both the prevalence among ordinary Arabs of this view that Shiism might be an “un-Islamic invention”—and Iranian in origin. To justify his fealty to the country’s current supreme leader, Ayatollah Ali Khamenei, Nasrallah had to first make him an Arab.
That one of the sicknesses infecting Arab thought is a self-conception as a master-race has long been clear: indeed, it dates as far back as the Umayyad Caliphate. Yet the passage is a pertinent reminder, if one is aware of the wider context, of just how fascist Hezbollah (and, for that matter, Hamas) really is.

Of course, if one admits Hezbollah and Hamas are armed fascist movements, that makes Israel look better and one of the rules of much of the modern progressivist intelligentsia is that The Jewish State Is Always Wrong. Which is much easier to hold to if you romanticize its enemies rather than if you look at them in any sort of clear-eyed fashion (and not calling Hamas or Hezbollah fascist is romanticizing them).

The broader habit of romanticizing Israel’s enemies definitely extends to the Palestinians. So statements such as:
We are now in the 92nd year of a peace process in which the Palestinians are the first people in history to be offered a state seven times, reject it seven times, and set preconditions for discussing an eighth offer
are considered to be in incredibly bad form.

But the Western progressivist intelligentsia has long displayed, in Clive James’s lapidary phrase, “enormous resources of inattention”. Something that works overtime when it comes to received wisdom about the Middle East.

Still, such a failing is hardly theirs alone. Abbas Milani’s own essay has a form of it. He makes much of Khomeini’s duplicity: promising sweetness and light when aspiring to power and cracking down savagely when he achieved it.

But Khomeini was, above all else, a Quranic literalist. He had the perfect example for such behaviour: the Prophet himself. Before power, Khomeini acted as the Prophet of the Meccan suras spoke and acted. After achieving power, Khomeini acted as the Prophet of the Medinan suras spoke and acted, if in a modern context.

I have no doubt that Khomeini was a sincere follower of the Prophet. That included in his revolutionary strategy. But to admit that Khomeini was just following the Prophet’s example is to suggest a deep canker at the heart of Islam. And perhaps, even now, Milani just does not want to go there; at least not publicly.

Sunday, January 23, 2011

Bubble trouble?

This extends a comment I made here.


Something that Australia generally does well is migration, following the principle that, when going for migration, go multi-source. Australia has a considerably bigger percentage of its population overseas-born than the US, and manages that pretty well in large part because our migrants come from lots of different places. (The only real misfire in migration was accepting a lot of Muslim Lebanese in a big lump in the 1970s, and even that is a problem only in Sydney, which is – by Australian standards – a dysfunctional city; Christian Lebanese have proved to be no problem because they plug into the Catholic networks.) We are also very good at cherry-picking our migrants – being an English-speaking island-continent helps.

That being said, Australia has two economic "bubble" issues. The first is that we are currently exporting a lot to China: since China's economy now looks a lot like the Asian economies prior to the 1997 Asian Crisis and the Japanese economy prior to the 1991 Bubble Economy collapse, that is a bit of a worry. Admittedly, we rode out the 1997 Asian Crisis well and, more recently, a massive dip in commodity prices.

The issue is, how resilient is the Australian economy if the Chinese bubble economy bursts, or if some other economic shock comes along? Which leads into the second problem, that our housing now looks like California/UK/Ireland before their housing collapses, only more so. Since 1990, owner-occupied and investment property credit has expanded its share of total credit from 23% to 58%. (Business credit has dropped from 63% to 34%.) One-in-ten Australians owns an investment property (pdf). While I agree with Scott Sumner that housing is an asset, that credit pattern still looks very unhealthy to me. As do the house price trends it is all based on. (Australia has the British/Californian system of complete official control over land use and building, not the Texan/German system of open land use.)

Australia had its own "bubble economy" burst in 1974, when an extended mining boom within an economic policy regime based on suppression of risk (the Federation/Deakinite system of white Australia, industry protection, a civilised wage arbitration system, state paternalism and imperial benevolence, though white Australia was in the process of being abandoned) met global stagflation: our economic performance was flat for the next decade.

I agree entirely with Scott Sumner that you can reliably predict neither the time nor the level of tipping points: if there were such a reliable predictor, you would not have asset price bubbles, because people would not buy at prices which would get "caught", so the tipping point would happen earlier; but then people would not buy at prices that would get "caught" by that, in a regress which would preclude any bubble happening in the first place. The inability to predict tipping points is necessary for bubbles to happen. (Besides, how can you predict future information?)
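The regress can be made vivid with a toy backward-induction loop (my illustration, not Sumner's): if a burst in some period were reliably predictable, no one would buy in the period before, which moves the tipping point one period earlier, and so on back to the start, so the bubble never inflates at all.

```python
# Toy backward-induction sketch of why predictable tipping points unravel.
# Assume (hypothetically) everyone "knows" the bubble bursts in period 10.
predicted_burst = 10

t = predicted_burst
while t > 0:
    # No one will buy at a price that gets "caught" in period t,
    # so the price cannot be bid up in period t-1 either:
    # the effective tipping point moves one period earlier.
    t -= 1

print(f"The 'predictable' bubble unravels back to period {t}: it never inflates.")
```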

But, after the event, you can look back and say "that was a bubble", because you now know to what degree the expectations of income growth/continuing capital gain were not realised. We do not have an excess of housing, but we do have a lot of highly leveraged housing investment based on an expectation of indefinitely continuing capital gain: that is why prices are so high, with systematic discounting of downside risk resulting from the dynamics of regulation-constrained supply generating surging house prices. (Just as price controls have quantity effects, so quantity controls have price effects.) All of which might still be vulnerable to a terms of trade shift. Say, from a collapse of the Chinese bubble economy?
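The aphorism can be illustrated with a minimal linear supply-and-demand sketch (illustrative numbers only, not Australian housing data): capping quantity below the market-clearing level pushes the price up to whatever buyers will pay for the restricted quantity.

```python
# Hypothetical linear demand and supply schedules:
#   demand price: P = 1000 - 2Q      supply price: P = 200 + 2Q
def demand_price(q):
    return 1000 - 2 * q

def supply_price(q):
    return 200 + 2 * q

# Unconstrained equilibrium solves 1000 - 2Q = 200 + 2Q  ->  Q = 200, P = 600.
q_eq = (1000 - 200) / (2 + 2)
print(f"Free equilibrium: Q = {q_eq:.0f}, P = {demand_price(q_eq):.0f}")

# A quantity control (e.g. land-use regulation) caps supply below equilibrium.
q_cap = 120
print(f"Capped quantity:  Q = {q_cap}, P = {demand_price(q_cap):.0f}")
# Price rises from 600 to 760: the quantity control has a price effect,
# just as a price control (a cap on P) would have a quantity effect.
```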

So, the Australian economy might be in for some bubble trouble.

ADDENDA The US and UK both had surges in household debt which have recently peaked and declined. Looking at the Anglosphere, only Australian house prices are still surging: Ireland, the US and the UK have seen house prices peak and fall, while in NZ and Canada they have recently plateaued. In the UK, households are finding credit harder to come by as they pay off debt, increasing the equity in their houses. (All via.)

Thursday, January 20, 2011

Why Kids Kill Parents: Child Abuse and Adolescent Homicide

Criminologist Kathleen M. Heide’s Why Kids Kill Parents: Child Abuse and Adolescent Homicide is a lucid examination of how adolescents come to kill their parents. (Appropriately, the book was lent to me by a legal aid solicitor who is also a crime writer.)

Studying such acts (how much they are crimes rather varies) is inhibited by their taboo nature and their comparative rarity. In the US, between 1977 and 1986, more than 300 parents were killed each year. Which sounds like a lot, but is not in a country the size of the US: particularly given that only a minority of the killers were adolescents. Few of the latter have been studied.

Fathers were killed at slightly younger ages than mothers, stepparents at slightly younger ages than biological parents. Adolescents were the killers for 15% of mothers killed, 25% of fathers, 30% of stepfathers and 34% of stepmothers. By comparison, only 10% of those arrested for homicide in the same period were under 18 (Pp3-4). (Children are wildly more at risk of abuse or murder by stepparents than by biological parents: the former likely affects the propensity for children to kill stepparents.)

The book is divided into three parts: facts and issues; case studies; implications, practical suggestions, and future directions.

The facts and issues section is fairly grim, but enlightening. Heide writes very clearly, which makes what she has to say more powerful. The first part includes things such as a nice differentiation of terror from horror:
Individuals experience horror when they are witnesses to events that are so shocking that their minds cannot fully comprehend them. Though there is no physical threat, the events are traumatizing to them and may stay lodged in their minds for years. Although fear may be an element of horror, the predominant feelings associated with horror are shock and dread, not fear. Individuals react with terror or experience intense fear when their physical survival is threatened. With terror, both body and mind are affected. Terror immobilizes the body just as horror stuns the mind. Events that are terrifying, unlike those that are horrifying, have a beginning and an end: one wins or loses, one escapes or is captured, one is assaulted or spared, one lives or dies (p.22).
Which is why people watch horror films for the intense experience but terrorism is a political strategy, and one that requires regular reinforcement to keep the fear alive.

So much of the story Heide tells is about power imbalances, abuse and despair. She is a notably clear and unsentimental thinker:
A child’s consent to be sexually expressive with a parent is meaningless because of differences in development and power between adult and child. A child who appears to consent to sexual behaviour with a parent may simply be acceding to the parent’s sexual demands in an unconscious effort to get his or her own needs for attention, affection, love, safety, and a sense of belonging met (p.23).
Her discussion of dysfunctional families is clear and comprehensive. But, as her analysis makes clear, understanding family dysfunction is necessary for understanding why children can be driven to kill their parents.

For that is what most such cases by children under 18 clearly involve. Heide covers physical, sexual, verbal, and psychological or emotional abuse, then physical and medical neglect, emotional neglect and emotional incest. Being a child of emotional abuse and neglect (a largely emotionally absent father, a very controlling mother apparently unable to provide praise or physical affection but very ready with the “why do you do this to me?” criticism), the analysis rang true for me. (The issue of abuse is largely independent of the question of whether parents love their children: indeed, if the child is convinced that the parent loves them, that can make the abuse more destructive, since the child is even more likely to blame themselves – after all, if a parent loves their child but shows no affection, then of course it must be because the child is flawed, and the parents are therefore heroic for loving such an unlovable person.)

Heide then examines the data concerning which youths kill. The serious study of such cases is relatively recent, the first study by a mental health professional being published in 1941 with fewer than a dozen case studies specifically addressing adolescent parricide appearing from then to the publication of Heide’s Why Kids Kill Parents. Her assessment interviews with about 75 adolescents charged with murder or attempted murder, which included seven cases of adolescents who killed parents, thus provide a significant expansion of available data (Pp35-6). She tabulates data from previous case studies to identify commonly identified factors (Pp40ff).

Heide notes that, given the small number of cases of adolescents killing parents, it is impossible to identify which children will kill, but it is possible to identify which youths are at risk. She identifies the key risk factors as:
(1) The youth is raised in a chemically dependent or otherwise dysfunctional family.
(2) An ongoing pattern of family violence exists in the family.
(3) Conditions in the home worsen, and violence escalates.
(4) The youth becomes increasingly vulnerable to stressors in the home environment.
(5) A firearm is readily available in the home (p.44).
A friend who was raised by a physically abusive malignant narcissist of a mother said the only useful thing his father did was bring home a copy of Lord of the Rings when he and his brother were 13 and 15, so that they were able to see that what was happening in their house was small stuff, and that there were much bigger issues in a much wider world.

Heide identifies the characteristics of functional families before moving on to dysfunctional families. The rules of dysfunctional families can be identified as: don’t talk, don’t trust, and don’t feel (p.48). Needless to say, children in dysfunctional families are far more at risk of abuse than children in functional families. The children of addicted parents, for example, are at very high risk of emotional neglect. Part of the problem being that the non-addicted parent is often so busy attending to the issues arising from their partner’s alcoholism, drug use, gambling or whatever that they have no emotional attention left for their children (Pp49-50).

This all leads to isolation as a serious problem for children of dysfunctional families. A likely pattern is one of increasing abuse, then either murder of the partner or flight by the non-abusive parent, followed, in the case of flight, by the child being expected to take over more and more household responsibilities (which both stresses the child and provides more points of potential conflict). If the downward spiral continues, the situation may become increasingly intolerable. If a firearm is present, the chance of the child killing the parent rises dramatically (Pp50ff).

The presence of a firearm is the most common risk factor: 82% of fathers, 75% of stepfathers, 65% of mothers and 56% of stepmothers murdered by juveniles were killed by a firearm of some sort (p.53). Clearly, the pattern partly reflects how a firearm can more easily overcome an imbalance in size and strength.

Heide then examines the legal and psychological issues, since the two interact: particularly, but not only, in the question of mental competence at the time of the act – there is also the issue of whether the child is in a situation of continual threat, even if the threat was not immediate at the time of the killing. Part of the story here is an increasing, if still erratic, understanding that there is often a much larger story behind the final, possibly desperate, act.

But part of the problem is the genuine complexity involved. If one puts homicides on two axes (intention to kill, desire to hurt), then we can identify situational (low desire to hurt or kill), intentional (low desire to hurt, high intent to kill), emotionally reactive (high desire to hurt, low intent to kill) and nihilistic (high desire to hurt and to kill) murders. Identifying which category a case falls into is important, both for justice and because nihilistic killers are likely to remain dangerous individuals (Pp60ff).

In Part II, Heide precedes the case studies with an examination of assessment and its implications. This really ticked some boxes for me. Heide starts with:
Understanding why a particular youth killed a parent requires knowledge of the adolescent, his or her family, and the home environment (p.67).
This is why I tend to be deeply sceptical of therapy: the counsellor or therapist is, far too often, completely dependent on what the client tells them for their knowledge of the particular case. Taking into account self-delusion, ignorance, failure of perception, etc, how likely is this to be reliable information? Let alone sufficient information. Narcissists, for example, often get worse under counselling or therapy, as it validates their emotions – the physically abusive malignant narcissist of a mother mentioned above once announced to her children that her therapist had agreed she had not been hard enough on them (apparently, she should have broken more of their bones: in reality, the therapist probably had no idea that broken bones were an issue).

Forensic psychologist Nigel Latta’s point that he never bothers reading case files, since they are only a record of the lies the offender has told previous clinicians, may apply with particular force to the sex offenders who are his clients, but the problem is extendable. Especially given that Latta can use the trial records to find out what happened, a source of information not normally available about clients of therapy.

But it is worse than that. According to “Strange Medicine”, an article in Quadrant by Dale Atrens, Reader Emeritus in Psychobiology at the University of Sydney:
Treatments ranging from old-fashioned talk therapy through to modern cognitive-behavioural therapy are typically little better than placebos. Almost any treatment produces some benefit in the short-term. Alas, this therapeutic benefit is often short-lived. Therapy typically becomes less effective over time. More is not better: it is worse.
For example, those subject to therapeutic debriefing after a traumatic event are 22% more likely to suffer post-traumatic stress disorder after 3 to 5 months: one study found that they were three times more likely to suffer post-traumatic stress disorder after 12 months than those who did not de-brief. Similarly, folk who “go it alone” in quitting smoking (or drug addiction, or gambling) are much more likely to stop, and keep to it, than those who use therapy or support groups.

Between the wish that there be an effective available treatment and contemporary belief in The Awesome Power of Words, a dangerous superstition is widely supported: even, at times, legally mandated.

None of which negates Heide’s point that an in-depth personality assessment, together with analysis of the family dynamics and the home environment of the adolescent killer, is necessary to understand what went on.

The three case studies Heide presents are fascinating, in a grim sort of way. They also include analysis, comments and asides on psychological processes and personality disorders. Such as:
The narcissist is absorbed with his image and invests himself in maintaining an image rather than being who he is (p.121).
Not quite how I would put it: I would say the narcissist invests in maintaining a fictional self rather than facing realities inconvenient to it, such that their convenience becomes their reality principle. They invest rather a lot in being narcissistic.

Depressingly, there is evidence that narcissism is increasing and empathy decreasing, at least in the US. But wasn’t it John Kenneth Galbraith, or possibly Gore Vidal, who said the advantage of sharing a planet with the US is that you get to see what was going to happen in your society before it did?

The most vivid story is, however, not from any of the three in-depth case studies, but one that Heide relays in her Preface: of a 16-year-old boy whose two older sisters had left home, leaving him the sole target of abuse, and who was trying to sneak out to leave himself. The house was arranged so that he had to go through his parents’ bedroom. His father woke up, knocked him down, pushed him over when he got up, then backed him into a closet – which held a loaded shotgun. The terrified juvenile shot his father. At that point, his mother awoke and sat up. The terrified juvenile then shot her too. In trying to recount the incident later, the teenager reported:
when she sat up in bed … the agony within the terror … The rest of it is more or less, sort of hazed out for me. I remember waking up completely. Standing there looking at the two bodies. Two people. What have I done now, you know. Like it was a dream (p.xiii).
Four petitions detailing neglect, abuse and physical abuse had been filed about that household. Three had been filed in the previous two years, all naming the son as being subject to abuse. He spent three months in foster care with an older sister and her husband, then nine months in supervised care back with his parents. As Heide reports:
Ten months after the state agency terminated supervision, Mr and Mrs Adams were dead. Terry Adams was charged with two counts of first-degree murder (p.xiv).
Terry pled guilty to two counts of second-degree murder, was sentenced to life imprisonment and released on parole after seven years. The sentencing judge had presided over the dependency proceedings.

So, what to do? Heide is a supporter of therapy beginning as soon as possible after the killing and based, as indicated, on thorough and careful assessment. The issues she discusses are basically about providing after-the-fact replacement-parenting. While the discussion of the pattern and consequences of abuse is perspicacious, there is, alas, good reason to be sceptical about the prospects of therapeutic success. (Though Heide is, at least, advocating a course of action that avoids the normal epistemic problems of therapy.)

One of the themes is how often teachers, neighbours, relatives did nothing when told of abuse. Terry Adams is eloquent about why one should respond to a child who tells of abuse, while another case study killer, Scott Anders, is bitter about those who would not believe, would not help, when he told them. Since a sense of isolation and despair – that it will never get better – seems to be a key factor in adolescents killing their parents, this is a powerful point. (When I intimated to my mother’s younger sister – my Godmother – that there may have been a few issues with my upbringing, her response was so defensive of my mother – and hence so dismissive of what I had to say about my experience – that we have not spoken since. And I was in my 40s: how much more easily dismissed is a child or teenager?)

Heide suggests a range of measures such as having easily available classes in child development and parenting skills and school courses in child abuse and neglect (though the latter immediately makes me wonder what use toxic kids might make of such information). Heide makes a point that is easily forgotten:
Abuse and neglect are not always obvious to their victims. When abuse and neglect are discussed in university classes, some students become aware for the first time that they were abused or neglected as children (p.54).
Or, in my case, they read an appropriate blog post by a psychblogger. If abuse and neglect are your “normality” and have been for years, are what you grew up with, it can be very hard to see it for what it is. Which helps create the pattern that Heide notes, that abusive parents were often abused children.

Heide suggests that Florida’s volunteer child guardian system is one worthy of wider adoption (Pp157ff). If isolation and despair are risk factors, providing an effective support and advocacy network is clearly likely to be beneficial. Heide suggests that the media can usefully popularise the existence of help services and centres, again breaking down the isolation and despair. Most of all, Heide hopes that criminal justice systems show intelligent mercy for an abused child driven to kill their parent or parents.

She concludes the main part of her text (there is a brief appendix, summarising the statistical evidence) with a story attributed to the Minnesota Literacy Council:
An old fisherman stood on the beach watching a young boy at the shoreline. As the fisherman approached, he saw that the boy was picking up starfish, which had been washed ashore by the waves, and was throwing them back into the sea. When the fisherman caught up with the boy, he asked the boy what he was doing. The boy did not stop his effort as he told the fisherman that he was throwing the starfish back into the sea so that they could live. If left until the morning sun, the starfish would die. The fisherman’s eye scanned the beach, revealing thousands of starfish ashore. He said, “But, son, there are thousands upon thousands of starfish on the beach. What difference can your actions possibly make?” As the boy hurled another starfish into the sea, he looked the old man in the eye and said, “It makes a difference to this one” (p.161).

Tuesday, January 18, 2011

1215: The Year of Magna Carta (2)

This is the second part of my review of Danny Danziger and medieval historian John Gillingham’s 1215: The Year of Magna Carta, a delightfully accessible “time travelogue” of England for the year of Magna Carta. The first part was in my previous post.


The medieval Church supported itself with tithes, which were unpopular and widely evaded. One priest made the point of the importance of following Church rules by confiscating one tenth of the grain in farmers’ barns and then burning it in public. The custom of giving gifts for services such as baptism, marriage and burial was also hard to avoid. England had about 9,000 parishes, with the salary of priests being set at 3 pounds a year (three pence a day, when an ox cost about 80 pence). No new bishoprics were added between Carlisle in 1133 and the Reformation, but monasteries proliferated – the seven Cistercian abbeys of 1118 had become well over 500 by 1200. About 5% of the population were clerks, but very few became priests. Most were in minor orders, so were permitted to marry, and worked in various administrative roles (Pp202ff).

The Cistercians insisted on a minimum age of 16 for entry, and a year’s novitiate. Their example was followed by other orders, eventually leading even the Benedictines to abandon their tradition of parents donating children to the Order. This was also the period of the founding of the Franciscan and Dominican Orders of travelling friars. Of all the new orders only one, the Gilbertines, was founded in England, which was very much part of a single Latin Christendom. (Monastic discipline could also be quite harsh.)

The medievals were well aware that the Earth was round – as the authors note:
We live in our own age of faith, the faith that ‘we’ are superior, more rational, than the superstitious people of the past (p.237).
The medievals also correctly called what we call ‘Arabic numerals’ ‘Indian numerals’. The tale of the gross errors of calculation later made by Columbus makes amusing reading (Pp236-7).

Adelard of Bath’s enthusiasm for knowledge, and the promotion thereof, led him to dedicate a treatise on astronomy to the young Henry II. Whether astronomical events had any predictive value was much debated at the time, with learned opinion on both sides: the authors quote the monk Richard of Devizes against astronomical events having predictive significance (Pp240-1).
This was the age when the Eastern Roman defeat at Myriokephalon and Saladin’s unification of Egypt and Syria made the states of Outremer more vulnerable, leading to the crushing loss at Hattin and, famously, to a King of England, Richard I, going crusading. This gave him enormous prestige: his brother John refrained from revolt until Richard was captured by Duke Leopold of Austria on the way back. It also led to considerable interest in Ṣalāḥ ad-Dīn Yūsuf ibn Ayyūbi (i.e. Saladin) himself (Pp243ff).

One of the signs of the greater integration of trade, travel and Church governance was the increasing use of common (typically saints’) names across Europe, and of churches being named after saints common across Latin Christendom, rather than the local saints which had been the custom (p.254).

The rationale of rebellion
All the English kings since William the Bastard had faced rebellions, but rebellions in the name of either alternative royal dynasties or other members of the royal family. Lacking such a royal focus, the rebels of 1215 against John developed a new one: a program of reform. Henry I’s Coronation Charter of 1100 had never had much force, but its example remained and provided a framework for the rebels to build on. They built something new while looking to a justifying past (not to be confused with what had actually happened). They took an oath to “stand fast together for the liberty of the church and realm”. John responded by ordering loyal castellans to prepare their castles for war, by appealing to the Pope and “taking the cross” (i.e. promising to go on crusade). Pope Innocent III responded by supporting the authority of this loyal son of the church.

Those closer to the action, with more direct experience of John, thought his taking the cross a cynical manoeuvre. John had so little credibility with his subjects that, days after he had granted the city of London a new charter which allowed them to elect their own Mayor, the city opened its gates to the rebels: his past taxation of the city weighed against him. The baronial opposition won landslide support: John was reduced to a few loyal magnates and the realisation that major concessions were required. After much negotiation, the Great Charter was signed at Runnymede in June 1215, with (most of) the rebels renewing their previously renounced fealty to John. Peace was declared (Pp255ff).

Despite the partial precedents of Henry I’s Coronation Charter and charters issued by Stephen in 1135 and 1136, Magna Carta was something new:
As the product of rebellion it was conceived and drawn up in an atmosphere of crisis. John and his enemies were bidding against each other for political and military support. In these circumstances the barons could not afford to be identified with a programme that suited only their sectional interests. They ended up demanding a charter of liberties that was long, detailed and contained something for everyone (p.260).
It was, in fact “a thoroughgoing commentary on a whole system of government”, creating “the first written constitution in European history” (Pp260-1).

In its immediate purpose of being a peace treaty, the Great Charter was a complete failure. Civil war broke out within three months because the Great Charter had overreached. Clauses 52 and 61 set up a committee of 25 barons to judge property grievances: a committee of John’s enemies given power over every act of government. This was something no king was going to accept:
The barons had created a political monstrosity, a constitution that could not possibly survive. The Magna Carta of 1215 was the cause of its own undoing (p.262).
John pretended to comply while working to improve his own position. In September 1215, Innocent III denounced the Charter: this had little or no effect on the barons but did undermine those Churchmen seeking to bridge the differences.

John, encouraged by the Pope’s words and his improved military forces, marched on London. The rebel barons offered the crown to Prince Louis of France. War broke out across the kingdom: the structure of government broke down and John found himself increasingly short of funds. John displayed his normal pattern of alternating between displaying resolution and foresight and displaying neither, and the war increasingly turned against him, with Louis controlling London and more and more of the countryside. John then did the only thing he could to save the Plantagenet dynasty: on the night of 18-19 October 1216, he died.

His heir, Henry III, was a nine-year-old boy. The magnates of England steadily decided that a regency for a child-king was much preferable to rule by a French prince, particularly after the young king’s ministers reissued the Great Charter. Louis was told his services were no longer required. He lingered until 1217, relying more and more on French officers and troops as English support ebbed away, increasing his unpopularity. He finally accepted the inevitable and sailed back to France, to the consolation prize of becoming King of France in 1223, a much larger and more prosperous kingdom than England (if also a much less centrally controlled one) (Pp264ff).

He did, however, negotiate decent treatment for his supporters: something he was able to achieve for lay folk but not for the clerics (since they were in defiance of King and Pope). A prominent clerical supporter of the Great Charter, Elias of Dereham, was forced into exile. He seems to have been greatly respected as a man of principle, as several clerics and others, including William the Marshal, who had been a loyal king’s man, appointed him executor of their wills; he later worked for Henry III (Pp273ff).

A living document
The Magna Carta had failed as a peace treaty, but it kept being re-issued, with modifications that marked it as a living document. The version issued in 1225 remained on the statute books until the Statute Law Revision Act of 1863. In 1265, Simon de Montfort decreed that the Charter should be proclaimed twice a year and nailed to church doors. Edward I re-issued it in 1297 and, from then on, it had pride of place in books of English statutes. It became a Good Thing, with mythic status, echoing down English history. So much was made of it in the C17th that it was still resonant when the American colonies were being settled, and when the Founding Fathers were creating the US Constitution (Pp278ff).

In the end, as the authors state at the end of the final chapter, before the appendix with the English text of the Charter:
Although there is not a word in it about the right to protest, there is a sense in which Magna Carta in its entirety represents protest. It was in origin the product of direct political action, of negotiation after rebellion. As a symbol of the struggle against tyranny it will always retain its value (p.284).
Quite so.

The last line of the Introduction of this splendid book is “for all mistakes the authors blame each other” (p.13) which expresses beautifully the good-natured fun of this book, an excellent “timelogue” of the world of medieval England in the early C13th and the role of Magna Carta in Anglosphere history.

Thursday, January 13, 2011

1215: The Year of Magna Carta

After the success of The Year 1000, Danny Danziger teamed up with medieval historian John Gillingham to produce 1215: The Year of Magna Carta, another delightfully accessible “time travelogue” of England, one for the year of Magna Carta.

An introduction sets the scene for England, the British Isles and northern France in the early C13th. Then we get 16 chapters, each prefaced by a quote from the Great Charter itself. An appendix provides the complete text of Magna Carta, rendered into modern English.

The early C13th was a period of expanding trade, technological and institutional innovation – it saw the founding of the Universities of Oxford and Cambridge, and the great age of cathedral building was well underway – and of intellectual ferment, as Latin Christendom wrestled with Aristotelianism and other intellectual and scientific imports from the Islamic world and Greek Christendom. We have much more surviving from this period than from that of the earlier book: the authors point out that there are far more survivals in England of the stone buildings of 1215 than of the wooden buildings of 1000 (p.10).

Medieval living
This was an era when being accompanied everywhere you went was a sign of status. The tasks of a chamberlain included supervising his lord’s bath, inspecting the privy before he used it and handing him well-pressed hay to wipe his bottom. Crude humour was enjoyed in the highest circles – the records show that one Roland le Pettour (‘the farter’) was rewarded with an estate in Suffolk for ‘leaping, whistling and farting’ before the court as Christmas entertainment. Meanwhile, a seriously ill Archbishop of York was prescribed sex as a necessary restorative. A young woman was supplied to his private room (his secretum). Inspection of his urine next morning revealed that he had not partaken of his prescribed remedy: he confessed he had not broken his vow of celibacy but had pretended to do so to spare his doctor’s feelings (Pp30-1).

This was a sun-up-to-sun-down society. The stables and the lord’s bedchamber would be lit at night, but nowhere else was. With the shutters down, rooms could get very dark, giving moralists a field day for warning moral parables. The porter had the heavy responsibility of ensuring no unauthorised persons entered. The provision of sexual services to the overwhelmingly male household staff complicated this somewhat. Since proximity to the king was so valuable, there was effectively a royal brothel incorporating a dozen demoiselles to service the king’s household, separated as it was from wives and family. (Nowadays, the royal family has generally gay staff, keeping things largely “in-house”.) At Christmas 1204, the wife of Hugh de Neville, one of King John’s household officials and gambling companions, offered the king 200 chickens to sleep one night with her husband: the king accepted (p.32).

Poor people had fewer children than did those better off (the reverse of modern patterns: nowadays, unwanted mouths no longer threaten starvation). People generally had good teeth and the average height was 5’7” (170cm), with the typical farming family having three children. Most houses had two rooms, with an attached croft of up to an acre for growing vegetables, fruit and herbs. Ten acres could comfortably feed a family, with enough surplus to pay rents, dues and tithes and purchase household items. It was those with five acres or less who were threatened with starvation when famine struck. Male farm animals were generally castrated to encourage fattening or, in the case of oxen, placidity. Still, farming was hard work: the twenty acres a modern tractor could plough in a day would have taken them 40 days (Pp34-35).

It was a period of urbanisation – from 1066 to 1230, more than 125 towns were founded, making it the period of the fastest rate of town-creation in English history. People were already contrasting English prosperity with Irish poverty. Towns were ‘boroughs’, places where people were granted by the local lord various freedoms and exemptions from dues, the aim being to attract settlers and encourage economic activity. (The modern term is ‘enterprise zone’.) This led to acceptance that a serf who could live in a borough for a year and a day became free – hence the saying that “town air makes you free” (Pp52ff).

Silver was the medium of exchange, and the discovery of silver lodes in Germany, the Alps and Tuscany expanded the money supply. It was not until the C19th that the amount of silver minted per year in the C13th was regularly exceeded. The new English “Short Cross” silver penny had marks which made it easier to cut into halves and quarters (‘farthings’, i.e. fourths) if you wanted smaller denominations. London became the largest city in North-Western Europe after Paris (Pp56-7).

London also became a place of convenience foods, disastrous fires (such as the London Bridge fire that may have killed 3,000 people, which led to an ordinance banning thatched roofs) and the sons of respectable families joining criminal gangs. King John levied customs dues on trade (both imports and exports), the surviving records for which show that some of the new towns were already very successful. Great fairs, some of which lasted weeks, were free of residency restrictions, making them attractive to foreign traders, while the fees that could be charged made them valuable to their local lords (Pp60ff).

A well-brought-up young aristocrat would play chess (a simpler version than the modern one) and music. This was a period when most towns could boast a school, which ambitious parents might be able to pay the fees for: monasteries no longer had a monopoly on formal learning. Most people, however, learnt from their parents. As they got older, brothers and sisters would stop sharing beds; the boys would help their father with ploughing, reaping, building and minding the sheep and cattle, while the girls would help their mothers with cooking, baking, cleaning, spinning and weaving (Pp76-7).

Even beyond children-as-pension-plan, marriage represented an exchange of services; one that went back to the males-hunt, females-gather of foraging times.

Student life
This was also the period of the growth of universities. For those with an interest in law, the Bologna law school was famous. Roman law had limited application in England (even after Henry II’s creation of the common law, via royal judges who offered judgements based on the common – i.e. shared – elements of the Norman, Anglo-Saxon and Danish law that England had previously operated under), so most English students who studied overseas went to the University of Paris – over a third of its masters whose origins are known in the period 1179-1215 were English. Some English towns – Exeter, Lincoln, London, Northampton and (pre-eminent) Oxford – had schools of advanced study, though not with the status of Paris or Bologna (Pp82-3).

The story of how the University of Oxford was created sounds very modern in some ways, very medieval in others. In 1209, some (male) students were sharing a group house. One of them accidentally killed a woman in the house, and fled. The Mayor of Oxford, unable to find the culprit, apprehended his three housemates, who were taken out and hanged on the king’s orders. The entire body of students and masters – about 3,000 people, according to a chronicler – left Oxford in protest. Some went to Cambridge, others to Reading. The dispute dragged on for years, the students and masters forming themselves into a corporation (from the Latin corpus, body: the English word university comes from the Latin universitas, meaning corporation). Finally, in 1214, the borough officials agreed to do public penance (admitting fault), regulate the price of food and rent and pay money to the university for poor students: the institutional division of Town and Gown was thereby established. Oxford re-opened for business as a university town, with charitable donations later in the C13th creating the first college (Merton). Meanwhile, those who had decamped to Cambridge found it so comfortable that they stayed (Pp83-4).
The classic subjects for advanced study were law (for those seeking well-paid careers), theology (for those interested in thinking through problems for their own sake) and medicine. For the last, Montpellier and Salerno were the medical schools with the greatest reputation, the latter owing much of its standing to a converted Tunisian Muslim, Constantine the African, whose medical text was famous (Pp84ff).

Regulation of the forests was a major source of revenue – sending around a forest commission to discover and fine unauthorised usage could be very profitable to the Exchequer: one of Henry II’s forest commissions raised 12,000 pounds when the annual royal revenue rarely went above 20,000 pounds. These exactions and controls were extremely unpopular. Richard I found that offering to declare an area not a royal forest was itself a source of revenue. Local areas organising together to petition and pay encouraged the development of county communities; this was part of the lead-in to Magna Carta and eventually led to Henry III’s ministers issuing a Forest Charter in 1217 (Pp126ff).

This was the period when the Church was increasingly taking control of marriage law (marriage only became a sacrament in the C11th), marriage law having previously been a purely secular affair. In 1200, a Church council at Winchester decreed that a proposed marriage must be proclaimed three times. But marriage was an unusual sacrament – it required no priest. Exchange of vows by the couple was sufficient: God was witness enough. The Church originally banned marriages between those related to the seventh degree (i.e. sharing great-great-great-great-great-grandparents). This made so many marriages notionally incestuous that the Lateran Council reduced it to four degrees (i.e. sharing great-great-grandparents): in 1537, the Church reduced it to two degrees of separation for Amerindians, in 1897 for blacks and in 1917 for the general Catholic population (Pp146-7).

We have reason to be grateful for the Church’s restrictions – by banning cousin marriage, the Church foreclosed the lineage structures of tribal societies and forced more investment in the wider provision of public goods.

Law develops
There were no police forces as such. If a crime was committed, the victim had to raise a hue-and-cry, to which able-bodied men were required to respond to the utmost in their power (in Latin, pro toto posse suo: hence the posse so beloved of American Westerns) to chase and apprehend the accused. Most crimes were dealt with by the local ‘hundred’ courts of the sheriff or his deputies; more serious crimes went to the county court:
Thus, if a woman was raped, she had to go at once to the nearest vill, show her injuries, blood and torn clothes to reliable men there, then go to the hundred bailiff and do the same, and lastly proclaim it publicly at the next meeting of the county court (p.186).
In a familiar pattern, crackdowns on crime led to political disputes and institutional change. Henry II’s cracking down on criminals who hid behind clerical status led to his infamous clash with Thomas à Becket. Another crackdown, beginning in 1166, transformed English law with a public prosecution service, the growth of a legal profession, the establishment of a central court of justice in Westminster and a system of travelling royal judges. This was the beginning of a common law, rather than law as local custom. The only provision in Magna Carta which asked for more government, rather than putting limits on what the king could do, was Clause 18, where the king promised:
to send to each county four times a year two judges whose job it would be, sitting together with four knights from that county, to hold assizes at the county court (p.187).
The re-issue of the Great Charter in 1217 reduced it to a more realistic once a year, though that was still beyond the organising capacity of the government. Still, it was something the government was doing that people wanted more of: Clause 18 also covered using royal courts to settle property disputes with writs (i.e. royal commands) of novel disseisin (a claim of wrongful dispossession), mort d’ancestor (a claim to be the rightful heir) and darrein presentment (dispute of patronage over churches) (Pp187-8).

The Magna Carta’s most famous requirement, conviction by a jury of one’s peers, might limit the power of royal judges, but the demand for ways to deal with crime and property disputes was clearly strong.

Trial by combat (a form of appeal to God) was still used, with paid fighters being an accepted part of the proceedings. Men and women being one flesh, the only appeal of homicide a wife could make was for the death of her husband (Pp190-1).

Henry II’s 1166 reform required that counties empanel “juries of presentment” (the forerunner of American grand juries) to bring prosecutions against those believed to be guilty. There was no compensation paid to victims in such prosecutions, but it did mean that crimes by those too poor to pay compensation were subject to the machinery of the law. The most common sentence was outlawry, passed when the accused had fled and was sentenced in absentia. The King’s Bench began to sit permanently in Westminster to deal with the growth of cases, and legal specialists grew up to help people plead (Pp192-3).

Juries could be asked whether an accused was to be put to trial by ordeal. This path was stymied by the Fourth Lateran Council, on solid theological grounds:
The problem was, however, that the basis of the ordeal was that God was required to work a miracle every time he was asked to do so, but since a miracle was surely a free act of God, this was theologically unacceptable unless the ordeal was, like the Mass, a sacrament (p.196).
But it had not been instituted by the Church and had no Biblical basis, so most educated churchmen had come to the conclusion that trial by ordeal was wrong: hence the Lateran Council’s decision prohibiting priests from taking part, thereby taking the point out of such ordeals. It also meant giving up a lucrative right (priests were paid for their blessing and preparation services) that had increased the role of the church in local life.

The Church’s disavowal of trial by ordeal created a problem for judicial systems – how to deal with the hard cases? On the continent, the increasing adoption of Roman law encouraged the search for confession – under Roman law, the best of proofs – and so led to torture. In England (and Denmark) increased use of juries was the response instead. Torture became the province of cases of alleged treason and of “pressing” (piling up heavier and heavier stones until the accused entered a plea for trial). If one believed one was going to be found guilty, and put to death anyway, death by pressing might be preferable if it meant that your family kept your property, as in the case of Giles Corey in the Salem witch trials of 1692 (Pp198ff).

The English disavowal of torture as a way of getting evidence went as far as the common law developing strictures against self-incrimination – the Crown had to prove its case from evidence. One wonders if British empiricism may have developed out of these legal resonances, just as the Continental fascination with more deductive and phenomenological systems of philosophy may have been connected to the Roman belief in confession-as-best-proof. I have long believed that the English-cum-British reputation for fair play (which dates back centuries) was connected to only the holders of titles being noble – their spouses and children were all commoners – giving the nobility an interest in how the law treated commoners that was absent from continental systems, which treated the entire family as noble, and so as a legal group apart. I have also argued that the British reputation for honest government (which was largely a C19th creation) flowed from decades of abolishing official privileges, monopolies and discretions. Law is about patterns of thinking and their application, so why should it not affect patterns of thinking (and consequent behaviour) beyond the law and matters legal?


This review will be concluded in my next post.

Sunday, January 9, 2011

The logic of belief and the logic of believers

The US and Pakistan each suffered a political assassination attempt in the same week. One was the act of a lone nut. One was not.

Both were examples of wider patterns, but very different patterns. The act of the lone nut killed more people, but not the Congresswoman who was apparently the principal target. The attack on Congresswoman Giffords and her staff was part of a pattern of massacres by unstable and alienated individuals that happened, this time, to have a political target, though that was not its central feature.

The assassin of Salman Taseer, the Governor of Punjab, was a member of his security detail who killed him for religious reasons. This is not a new thing in the region: then Indian Prime Minister Indira Gandhi was also assassinated by two of her bodyguards for religious reasons.

What is very different are the reactions: Sikhs are a minority in India, and the Sikh community suffered days of murderous violence, with thousands being killed, in reaction to that murder.

Governor Taseer had been speaking up against the death penalty being imposed on a Christian woman for blasphemy. That is, he was speaking up for the rights of a minority. His murderer is widely seen in Pakistan as a hero. Mainstream Muslim organisations have applauded his treacherous homicide, making a mockery of the “religion of peace” rhetoric. Governor Taseer’s estranged illegitimate son has expressed his anguish (via) at the religious and popular support for his father’s killer:
Already, even before his body is cold, those same men of faith in Pakistan have banned good Muslims from mourning my father; clerics refused to perform his last rites; and the armoured vehicle conveying his assassin to the courthouse was mobbed with cheering crowds and showered with rose petals.
I should say too that on Friday every mosque in the country condoned the killer's actions; 2,500 lawyers came forward to take on his defence for free; and the Chief Minister of Punjab, who did not attend the funeral, is yet to offer his condolences in person to my family who sit besieged in their house in Lahore.
Even as the murder was being first reported, it was clear that the murdered man was seen as the one really in error:
It seemed too early for analysis, but the [news] presenter's friend looked mildly smug, as if he had been mulling over arguments in his head long before the governor was shot. Although it wasn't required, the presenter egged him on. "But you see these are sensitive matters. He should have watched his words. He shouldn't have spoken so carelessly."
If anyone wanted to see Islam as a pernicious force in world affairs, here is a prime example. Indeed, Awais Aftab, a Pakistani medical student, blogging about the murder and reaction, reaches exactly that conclusion (via):
Fundamentalist Islam has demonstrated such wide-spread consensus and domination that they are now the current representatives of Islam. Liberal Muslims who are reading this will no doubt protest, but the facts are in front of all of us. Liberal Islam has failed. Liberal Islam has no consensus, has no scholars, has no properly worked out theology. It is all just a bunch of individual voices, shouting "No, this isn't Islam."
It is also time that Western thinkers realize that this consensus in the favor of Fundamentalists has taken place. Fundamentalists are no longer in minority; Islam is no longer benign. It has become the current successor in the dynasty of fascists, nazis and communists, and it must be dealt with accordingly. Rome has spoken, the matter is settled.
It turns out that his own mother is disappointed in his refusal to sanction religious murder.
There is a long tradition in Islam of struggles between reformers (who want to go back to the original texts, the original purity) and modernizers (who want to update Islam according to growth in human knowledge and experience). Generally, the reformers win, but never absolutely.

And yet, something else occurred this week: thousands of Egyptian Muslims offered themselves as human shields so that Egyptian Copts could celebrate Coptic Christmas, in the wake of a murderous New Year bombing of a Coptic church that killed 23 people. In my previous post on monotheism, I made the point that
In the end, monotheism only comes in two versions: that which uses the authority of God to protect and succour one’s fellow humans and that which uses the authority of God to strip people of their moral protections. Most believers play both games, they just vary in how intensively and to whom they do it.
The Egyptians offering themselves as human shields were doing the former; it is clear that Islam in Pakistan is largely lost to the latter.

It would be nice to see the actions of thousands of Egyptians as a marker of change. It does point out, yet again, how the logic of belief is not necessarily the logic of believers, given believers have a choice about which logics to follow.

And yet the long-term trends are not good. The New Year Coptic church bombing is just part of a pattern of salafist violence towards Christians in the Middle East. The Middle East is a region with a bad record in the treatment of minorities. Over a century, the Christian proportion of the population of the Middle East has fallen from about 20% to 5%. The first megacide of the C20th was of a Christian minority in a Muslim empire.

Modernisation made things worse, since the notion of equality before the law between believers and non-believers is widely regarded as offensive to Islam: it was part of the lead-up to the Armenian genocide. Nationalism was often a way for Christians to support a common political identity that did not make them second-class citizens. Unfortunately, they too often made a Faustian bargain, accepting demonisation of Jewish aspirations as part of Arab nationalism: as with such bargains, it turned out that this merely made them the next in the queue. (One can reasonably argue that liberal Muslims such as Governor Taseer were another version of the same error: his son tangentially suggests so.) The failure of Arab (or whatever) nationalism to defeat Zionism, establish decent politics or provide economic development just left the way open to the rise of Islamic politics.

The problem here is not Islam per se, it is monotheism conceived as the Absolute Moral Authority of God being used to strip people of moral (and legal) protections. Gays in Africa are under similar pressure from Christian activism. Islam just tends to be more serious and all-embracing about such monotheism, and to be taken more seriously in it by its adherents.

The problem is particularly intense in Pakistan because it has no unifying identity beyond Islam, so the more “Islamic” one is, the more “Pakistani” one is. Bangladesh, which has an overwhelmingly Bengali identity, is a much more successful polity. Kosovo, full of Muslims who are “cultural Christians”, is a country of genuinely moderate Muslims. Kurdistan in Iraq is a similar success story. But these positive examples are also examples of peoples who know what it is – recently – to be collectively oppressed. That makes a difference too.

It makes a difference because it changes the logics that resonate with people: the logic of believers is not necessarily the logic of belief indeed.

But the logic of Absolute Moral Authority, that is always problematic. For the version of monotheism which uses the authority of God to offer succour to one’s fellow human beings gives them moral authority. In Pakistan, it is losing out to the other version of monotheism, the one that cites the Absolute Moral Authority of God to strip people of moral and legal protections. That struggle is very much one within Islam: hence some of the harshest critiques of Wahhabi influence come from deeply religious Muslims. Alas, the repressive version of monotheism has oil money and a murderous simplicity behind it that offers both absolute conviction and absolute moral superiority. Right down to treacherous murder – engaging in the ultimate betrayal of someone you had sworn to protect – being a noble, religious act.

Nor is the response of Muslim religious organisations to be wondered at. The more God can be used to strip people of moral and legal protections, the more authority clerics and priests have as "gatekeepers of righteousness".

The pathology of Pakistan is not a lone killer, it is the heroic status given to him. It is moral nihilism parading itself as God’s work and being applauded as such. The essence of bigotry is that it clothes itself in the robes of morality, of defending moral decency, while subverting the morality at its core. It is the pathology that monotheism is naturally prone to, and the more so the more it is unrestrained by any sense of human frailty, the more one’s sense of self is inflated by God’s authority. If Pakistan has no identity beyond Islam, then it is naturally inclined to be what Hanif Kureishi’s Uncle Nasser called it a quarter of a century ago, a country sodomised by religion: or, as Australian colloquialism would have it, a country buggered by religion.

The question with monotheism is not whether people are Jews, Christians, Muslims or Zoroastrians. It is whether they seek to use their religion to strip other people of moral and legal protections. In the contemporary world, it is an inclination that Islam displays with particular flagrance and brutality, but on which it has nothing remotely like a monopoly.

That the logic of belief is not necessarily the logic of believers still leaves us with a basic epistemic question: how do you tell who is which? Governor Taseer died of that question. It is the difficulty of that question which encourages its own easy responses. Still, we need at least to know what the correct question to ask is. And that question is: is your God a weapon against your fellow humans or not?

And, if it is, against whom and how much? There are plenty of people who get exercised by the issue of radical Islam, because they see it as a threat to them. Monotheists attacking queers fails to bother them, however, because it is not a threat to them: they may even endorse it. But it is all the same game; they are then just playing different roles in different versions of it.

Friday, January 7, 2011

About monotheism

This is derived from a comment I made in an email discussion group.


That there is a considerable amount of tendentious Catholic scholarship around is not news to anyone moderately well-read in history. Italian philosopher Benedetto Croce held that religions were incomplete philosophies because they could not tell the truth about the past: leaving aside entirely the issue of revelations and miracles, the problem of maintaining some moral and epistemic authority down the twisted path of events seems to be enough to cause problems. My favourite spectacle of such is various Catholic writers who end up attempting to argue (or, at least, strongly imply) that Western civilisation has been in decline since the Reformation: even on the simple metric of the proportion of the human population raised within the Catholic faith that is a nonsense, let alone if one applies wider criteria. It is one of those positions that, if you are left arguing for it, the time has come to examine one’s premises and, if you cannot bring yourself to do that, well, then, clearly Croce had a point.

It does not work to tell the story of the Reformation as either the noble Catholic defence of Christian tradition or the noble triumph of Protestantism. It was a time of appalling hatred and brutality. Those ignorant and naïve souls who claim that “Islam needs a Reformation” understand neither Islam (which is belief in the authority of scripture On Steroids, With Boosters) nor the Reformation (historians still argue over what percentage of the population of central Europe was killed or starved during the wars of religion, with estimates ranging from about 15% to about a third, and some regions losing up to 75% of their population). Europeans decided generally to eschew killing each other over religion out of revulsion against having done so much of it and the realisation that neither side could overwhelm the other.

Wholesale slaughter of civilians only became the pattern of European history again when new sources of Absolute Moral Authority were imported into politics. But most of the techniques of totalitarianism were pioneered by the Catholic Church, because totalitarianism is all about Absolute Moral Authority: in totalitarian states, commissars or gauleiters replace Papal legates, agitprop functions as friars and other Christian preachers did, show trials replace autos-da-fé. We see propaganda, heresy hunts, inquisition, censorship, informers, even population culls: both the Albigensian crusade (1209-1229) and the St Bartholomew’s Day Massacre (1572) foreshadowed the repression of the Vendée and the September massacres of the French Revolution. The problem with modern totalitarianism is not that it is Godless, but that it has substitute Gods, substitute Absolute Moral Authorities.

In the end, monotheism only comes in two versions: that which uses the authority of God to protect and succour one’s fellow humans and that which uses the authority of God to strip people of their moral protections. Most believers play both games, they just vary in how intensively and to whom they do it.

Priests, alas, get authority from using God to strip people of moral protections. It is notable that Jesus spends very little time in the Gospels preaching about the actions of temporal government and a great deal preaching against priestly power and the use of priestly rules and interpretations to strip people of moral protections. It makes priests inherently somewhat dubious vehicles for preaching the Gospel message: indeed, that tension – the doctrine of love thy neighbour being propagated by priests who get power and authority from subverting love thy neighbour so as to become the “gatekeepers of righteousness” – is, in many ways, the central dynamic of Christian history.

Tuesday, January 4, 2011

Why do the poor remain with us?

Norman Geras raises a point that recurs in the commentary on his excellent blog:
but what a mark against the world's wealthiest countries that there remains in them such a category of people - the poor - who can be spoken about in this way. These are societies fat, bulging, overflowing, with stuff; oozing personal wealth, economic crisis notwithstanding; and they are yet to provide all their citizens with a standard of material well-being such that no one would any longer need to be referred to as the poor but might enjoy, even as unequals, the advantage both of a more comfortable state and a more dignified style of description.
An obvious response is that, by the standards of history and of much of the globe, the people referred to as ‘the poor’ in developed democracies are not poor. [This point is made very powerfully via a graph here.] They have life expectancies, security of food and shelter and rates of possession of consumer durables that mark them out as among the blessed of history. Indeed, as Michael Cox and Richard Alm point out in their Myths Of Rich And Poor: Why We're Better Off Than We Think, poor people in the US in the mid-90s had an average level of possession of consumer durables that would have marked them off as middle class in the early 1970s. [This point is expressed graphically here.]

But, by the standards of their own societies, they are poor, even if poor means “middle class two or so decades ago”. So, why do we have a persistent category of people who lag behind the general prosperity?

Well, for no single reason. As Norman Geras intimates, it is not a matter of how productive the society is, as used to be the case when poverty was the general human condition. There have been sharp drops in the general level of poverty in Western societies over time. Which is another way of saying that developed societies have been great engines of mass prosperity: that is what makes them “developed societies”. But these drops in poverty rates slowed and then stopped: for example, the proportion of people in poverty in the US dropped steadily, even dramatically, during the postwar boom until the mid 1960s and has been stubbornly persistent ever since. Rather discouragingly, the apparent ending of mass exit from poverty coincided with increased government effort against poverty: the US “war on poverty” has been about as successful as the “war on drugs”. But similar patterns can be discerned in other developed societies.

Indeed, one way to put the question is “why has poverty persisted despite massive expansions in the welfare state?” The question is not often put like this, but it is a very reasonable question to ask, on the evidence. After all, the welfare state is a century or more old: the failure to eliminate poverty is a reasonable criterion to evaluate it by, particularly given its massive expansion from the 1960s onwards. (It can hardly be the fault of “capitalism”, as its success in generating unprecedented and steadily increasing mass prosperity is what has made the elimination of poverty a remotely plausible goal in the first place. Indeed, the first post-classical public welfare measures – Venetian public health measures, English poor law provisions – grew up in the most commercial societies in part precisely because they were the richest societies.)

One answer to the persistence of poverty might be: because of the expansion of the welfare state. After all, the great mass exits from poverty clearly were not products of the welfare state: they were the result of massive expansion in productive capacities. The welfare state needs clients: if there are no poor people, then there are no poor people to be clients. Milton Friedman pointed out that, if one took the entire expenditure on anti-poverty programs and divided it among the number of poor Americans, the resulting per-person transfer would be enough to lift every one of them above the poverty line: there would be no poor Americans. Clearly, employing people in secure jobs with good pensions in welfare bureaucracies, and the transferring of funds to people who are not poor, take up a considerable amount of welfare resources and generate a considerable number of beneficiaries: beneficiaries who might be at some risk of losing said benefits if poverty was abolished.
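Friedman’s observation is simple arithmetic. A minimal sketch of it in Python, using purely hypothetical placeholder figures (the spending total, poverty headcount and poverty line below are illustrative assumptions, not actual budget or census data):

    # A minimal sketch of Friedman's arithmetic. All figures are hypothetical
    # placeholders for illustration, not actual US budget or census data.
    total_antipoverty_spending = 400e9  # hypothetical: annual anti-poverty outlays, in dollars
    number_of_poor = 35e6               # hypothetical: number of people counted as poor
    poverty_line_income = 11_000        # hypothetical: annual poverty-line income per person

    per_person_transfer = total_antipoverty_spending / number_of_poor
    print(f"Implied transfer per poor person: ${per_person_transfer:,.0f}")

    # The point: if the implied per-person transfer exceeds the poverty line,
    # paying the same budget out as direct cash would leave no one poor.
    if per_person_transfer > poverty_line_income:
        print("The same budget, paid as direct transfers, would clear the poverty line.")

On any such numbers where the division comes out above the poverty line, the persistence of poverty cannot be a matter of insufficient spending.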

So, waste and failure in welfare might be one reason for the persistence of poverty. Particularly if such waste retards economic growth – given why the mass exits from poverty have occurred – by, for example, reducing the level of productive investment.

Or it might be due to welfare subsidising unfortunate patterns of behaviour. The richer the society, the less absolute the penalties for destructive behaviour patterns tend to be, but they still exist. One of the effects of welfare can be to soften the effects of folly (or, to be less blunt, to lessen the penalty for patterns of behaviour not conducive to increased income). People can more easily get away with clinging to leisure preferences, instant-gratification preferences or familiar attitudes and patterns of behaviour which are not conducive to good incomes. (And the behaviour of parents may well have effects on the prospects of their children.) If there is a bell-curve of income-producing behaviour, then there will always be a tail end. The richer the society, the better off the tail end will tend to be. But it will still be the tail end.

In the US, if one completes high school, gets and stays married, gets and stays employed (even starting at a minimum wage job) and avoids becoming involved in crime, one’s chances of staying poor are small.

We also get into some stubborn persistences here. Consider that, in the US, students of Asian ethnic backgrounds do far more homework, on average, than do black students. If lifetime income prospects are connected to educational achievement (as they are) and educational achievement is connected to student effort (as it is) then we can reasonably predict that poverty will be more common among black Americans than Asian-Americans on that one indicator alone (as it is).
So, what can we do about this? If doing less homework leads to higher rates of poverty, does poverty lead to doing less homework? No, but the patterns of behaviour and outlooks which lead to poverty (for example, by discouraging scholastic effort) may do so. A society where human capital is important, and increasingly important, has limited ability to get specific groups to value the acquisition of human capital. But, if they fail to do so, they will have higher rates of poverty. So poverty will persist due to a failure to take advantage of the opportunities available (with some depressive effect on the general productivity of the society, since the level of human capital will be lower than it otherwise would be).

Moreover, how good is the welfare system likely to be at putting itself out of business by encouraging patterns of behaviour that lead to exiting from poverty? Noting that, to the extent that a social system can be said to have “an interest”, poverty is not in the interest of “capitalism” – there is far more profit to be made from selling to rich consumers than poor ones. More precisely, the logic of capitalism has clearly been to generate mass prosperity, since capitalism is the best system ever developed for creating and using capital (the produced means of production) and the more capital, the more production, the more prosperity, the less poverty.

The welfare system can also create barriers to exit from it. Public housing can “trap” people in high unemployment areas, as to move is to lose one’s eligibility. The very high effective marginal tax rates that beneficiaries face (from their benefits reducing, and taxes increasing, as they earn more money) also constitute a barrier to exiting from poverty. But such problems are expensive to fix and tend to keep the number of clients for the welfare system higher, so there is little incentive from within the system to push for reform.
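To see how benefit withdrawal compounds with tax, here is a minimal sketch of an effective marginal tax rate calculation; the withdrawal and tax rates are hypothetical illustrations, not the parameters of any actual welfare system:

    # Effective marginal tax rate (EMTR) on an extra dollar earned by a beneficiary.
    # Both rates below are hypothetical illustrations.
    benefit_withdrawal_rate = 0.60  # hypothetical: benefits cut by 60c per extra $1 earned
    income_tax_rate = 0.20          # hypothetical: 20c of tax per extra $1 earned

    extra_earnings = 100.0          # an extra $100 of gross wages
    benefits_lost = extra_earnings * benefit_withdrawal_rate
    tax_paid = extra_earnings * income_tax_rate
    net_gain = extra_earnings - benefits_lost - tax_paid

    emtr = (benefits_lost + tax_paid) / extra_earnings
    print(f"EMTR: {emtr:.0%}; net gain from an extra $100 earned: ${net_gain:.0f}")
    # With these rates the beneficiary keeps only $20 of each extra $100 earned
    # -- the barrier to exiting from poverty described above.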

There are also some forms of poverty that simply are not much of a concern. That university students have low incomes in their 20s is not a concern if they end up being high-earning professionals in their 40s. Indeed, as Cox and Alm point out, increased participation in higher education is a major reason for increased income inequality – we can tell this because the slope of “life cycle” income changes (i.e. average income by age group) has become much steeper than it used to be.

So, given the increased participation in higher education, something that also took off in the 1960s, some of the persistence in poverty is a life-cycle effect.

Some of the persistence of poverty is a “recent entry” effect. New migrants, lacking skills and entrée into various networks, will tend to start off with low incomes. Increased low-skill migration will also tend to lead to persistence in poverty rates, particularly if there is an increase in the importance of human capital in an economy. It is likely that the children and grandchildren of new migrants will not live in poverty, but if the inflow is constantly replenished, then the ranks of the poor are constantly replenished.

And, of course, if migration to a developed democracy becomes a guarantee that one will not be poor, the incentive to migrate will be greatly increased. Milton Friedman famously argued that the welfare state was incompatible with open borders: certainly the welfare state is likely to increase the resentment of migrants if people believe they are paying for people whose arrival they had no say in.

The low-skill migrant point interconnects with the educational point. It is clear that the Anglosphere is better at attracting productive migrants than much of Europe (and, apparently, my own country of Australia is the very best at cherry-picking its migrants).

But that second-generation male Muslim migrants in Europe are “going backwards” in their economic participation points to another difficulty – barriers to economic participation. Some of these can arise from the behaviour of those with lower levels of economic participation (e.g. the lower levels of homework among black American students). Others can flow from regulation or other institutional factors.

Regulation has a persistent tendency to protect the interests of incumbents: this is particularly true of land use regulation and labour regulation. Unfair dismissal laws, for example, protect incumbents against new entrants to labour markets (since they raise the risk of employing new people, particularly for small businesses). Faced with increased risks in employing new staff created by such regulations, risks not compensated for by increased productivity, businesses respond by cutting back on hiring, relying more on certification and on “vouching for” networks, becoming more reluctant to deal with differences that might get in the way of communication (i.e. the transaction costs of cultural differences) and so on. If migrant Muslim males put less effort into school and so are less certificated, are more likely to “have attitude” (or are believed to be so), are less plugged into networks and have fewer skills, then they will be disproportionately excluded by such regulation. Though young people generally suffer from such “protect incumbents” laws.

The problem of the persistence of attitudes not conducive to exit from poverty is not only a matter for the poor; such attitudes can be found among the better connected as well. Labour market regulation penalising the more marginal in the labour market, and land use regulation driving up rents and housing prices by restricting the supply of land for housing, are not created or justified by the poor, and certainly do not benefit them, but do disproportionately penalise them.

The capacity of “progressives” (or, as former Labor Senator John Black puts it [pdf], the inner city rich, the code word for which is, apparently, ‘progressive’) to romanticise green fields (which are every bit as much human creations as any suburb, and may well have less biodiversity), thereby driving up the value of their inner city properties by restricting the supply of land able to be used for housing, and to frame labour market regulation as “protecting workers” (as, indeed, it does: it protects incumbent workers against competition from marginal workers), does its bit to increase barriers to economic participation and so to the persistence of poverty.

Add all these factors together and the elimination of poverty – that is, of a category of “middle class minus two or so decades” – becomes difficult, to say the least.

So, is it a “mark against the world’s wealthiest countries”? Well yes, though not as much as it may seem at first blush, and those who are most likely to hold it so are often very much part of the problem.

Or, to put it another way, the sort of mushy, self-satisfied reasoning that Norman Geras likes to berate Guardianistas for in international affairs has its domestic equivalents. There is even some suggestive social science research (pdf) implying that conservatives signal competence while progressives signal trust: hence the importance to the latter of policy positions which allow one to signal one’s good intentions (and hence conservative contempt for any disastrous consequences of such positions, a contempt which liberals deride as unfeeling or otherwise lacking in virtue).

Indeed, we observe people who not that many years ago would have been nodding along to descriptions of science as a “patriarchal Western discourse”, not worthy of any privileging, now holding the results of climate science as absolutely authoritative: attitudes to science clearly being subordinated to the commitment to signalling virtuous intentions. But embracing such serial, or even concurrent, contradiction actually improves one’s capacity to signal that one’s priority is membership of the club of the ostentatiously virtuous.

If we allow actions to have income consequences (since that promotes productive behaviour) but not negative ones (since those can lead to poverty), stop low-skill migration, ensure that the welfare system promotes independence and not dependence (even at the risk of losing its client base) but otherwise pays those who cannot be independent enough not to be poor, only admit students to higher education who won’t be on low incomes while they are studying, and eliminate regulations and other institutional factors that are barriers to economic participation (which will require denying the “progressive” intelligentsia any effective capacity to frame public debate so as to block such changes), then we in developed countries can have “tail ends” which are not poor by the standards of our societies.

Good luck with that.

Still, the good news is that we could do better: the bad news is that we probably won’t (beyond general increases in productivity).