Category / Mathematics and Science

So dark it might not even be there November 2, 2013 at 5:24 pm

The latest result on the search for dark matter is so important it has even made it to the pages of The Economist. Basically, the LUX experiment didn’t find any. None, nada, nothing at all.

Why does this matter? Well, dark matter is a compelling idea that fixes a lot of problems: without it, everything from the rotation curves of galaxies to the profile of the microwave background is hard to explain. There are other theories which try to explain these issues, like MOND, but they aren’t as successful as dark matter. So dark matter has been a leading candidate for ‘something we’ll find if we look hard enough’ for some time.

However, we have now looked hard, and it isn’t there. There are other places to look, and it might show up there; indeed, it is close to orthodoxy that it will. But, purely on the (not very reliable) ground of mood music, it feels a little different to me.

After all, this is the second `nothing new’ in a big particle physics/cosmology experiment in two years. First we had the Higgs exactly where the standard model said it would be, with no new physics; now we have no dark matter at LUX. These are only related in the sense that there was hope that the new LHC physics, had it been seen, would point out how to integrate the standard model with some theory of gravity, while of course dark matter’s presence is (mostly) inferred from our current understanding of gravity. The lack of surprises is part of an Edwardian feel to modern cosmology and particle physics: it has become too arcane, too mannered, too etiolated. In that sense I’m in Peter Woit’s camp: much of theoretical particle physics has become too much like theology, and it would hardly be a surprise if something new and different came along to explain the things we so plainly don’t understand about gravity.

Healing circuits March 29, 2013 at 12:51 pm

I have endured a couple of talks recently on the use of network methods in financial stability analysis. While the general idea is interesting, the specific applications struck me as dubious in the extreme. So it was with some relief that I read something about robustness that was useful and impressive – albeit with no finance connection. Science Daily reports:

a team of engineers at the California Institute of Technology (Caltech)… wanted to give integrated-circuit chips a healing ability akin to that of our own immune system — something capable of detecting and quickly responding to any number of possible assaults in order to keep the larger system working optimally. The power amplifier they devised employs a multitude of robust, on-chip sensors that monitor temperature, current, voltage, and power. The information from those sensors feeds into a custom-made ASIC unit on the same chip… [This] brain analyzes the amplifier’s overall performance and determines if it needs to adjust any of the system’s actuators…

[The ASIC] does not operate based on algorithms that know how to respond to every possible scenario. Instead, it draws conclusions based on the aggregate response of the sensors. “You tell the chip the results you want and let it figure out how to produce those results,” says Steven Bowers… “The challenge is that there are more than 100,000 transistors on each chip. We don’t know all of the different things that might go wrong, and we don’t need to. We have designed the system in a general enough way that it finds the optimum state for all of the actuators in any situation without external intervention.”

Best logo for a quantum mechanics course ever November 11, 2012 at 4:37 pm

From Umesh Vazirani’s Quantum Mechanics and Quantum Computation on Coursera:

Superposition of cats

Local vs global optimization for corporations April 5, 2012 at 6:34 am

Doug had a very interesting comment to my post about evolutionary diversity and banking. I’ll set up the problem, then quote some of his comment and try to give my spin on his questions.

Essentially we are concerned with an unknown fitness landscape in which we are trying to find the peaks (best-adapted organisms or most profitable financial institutions) based on changes in makeup (genetics or business model). The landscape might have one peak, or two, or twenty-seven. The peaks might be of similar height, or they might be wildly different; the local maxima may or may not be close to the global maximum. Moreover, you only have knowledge of local conditions. The question is how you optimize your position in this landscape.

This is related to the topic of metaheuristics… A typical scenario would be doing a non-linear regression and finding the model parameters that maximize fit on a dataset. In this scenario there’s no analytical solution (unlike in linear regression), so the only thing you can do is successively try points [to see how high you are] until you exhaust your resource/computational/time limits. Then you hope that you’ve converged somewhere close to the global maximum, or at least a really good local maximum.

The central issue in metaheuristics is the “exploitation vs exploration” tradeoff. I.e. do you spend your resources looking at points in the neighborhood of your current maximum (climbing the hill you’re on)? Or do you allocate more resources to checking points far away from anything you’ve tested so far (looking for new hills)?

One of the most reliable approaches is simulated annealing. You start off tilting the algorithm very far towards the exploration side, casting a wide net. Then as time goes on you favor exploitation more and more, tightening on the best candidates.

Simulated annealing is good for many of these kinds of problems; there are also lots of other approaches/modifications. A couple of things to note, though: there is no ‘best’ algorithm (ones that are good on some landscapes tend to fail really badly on others, while those that do OK on everything are always highly suboptimal compared with the best approach for that terrain); moreover, this class of problems for arbitrary fitness landscapes is known to be really hard.
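To make the hill-climbing metaphor concrete, here is a minimal sketch of simulated annealing in Python. The landscape, cooling schedule, and step sizes are all illustrative assumptions of mine, not anything from the original comment:

```python
import math
import random

def simulated_annealing(f, x0, n_steps=20000, t0=2.0, t_min=1e-3, seed=0):
    """Maximise f by annealing: wide-ranging exploration while the
    temperature is high, local exploration as it cools."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for i in range(n_steps):
        # Geometric cooling: temperature decays from t0 towards t_min.
        t = t0 * (t_min / t0) ** (i / n_steps)
        # Proposal width shrinks with temperature: a wide net early on,
        # tight steps around the current hill later.
        x_new = x + rng.gauss(0.0, 1.0) * t
        f_new = f(x_new)
        # Always accept improvements; accept worse moves with probability
        # exp(delta/t), which vanishes as the system cools.
        delta = f_new - fx
        if delta > 0 or rng.random() < math.exp(delta / t):
            x, fx = x_new, f_new
            if fx > best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# A bumpy one-dimensional landscape: local hills roughly every two units,
# global peak (height 1) at x = 0.
landscape = lambda x: math.cos(3.0 * x) - 0.1 * x * x

x_star, f_star = simulated_annealing(landscape, x0=2.0)
```

The early high-temperature phase hops between hills; the late low-temperature phase is effectively hill-climbing on whichever hill it ended up on, which is exactly the exploration-to-exploitation handover described above.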

In what follows, I’ve taken the liberty of replacing the terms ‘exploration’ and ‘exploitation’ with ‘wide-ranging exploration’ and ‘local exploration’, as I don’t think ‘exploitation’ really captures the flavour of what we mean. Back to Doug:

I believe the boom/bust cycle of capitalism operates very much like simulated annealing. Boom periods, when capital and risk are loose, tend to heavily favor wide-ranging exploration (i.e. innovation). It’s easy to start radically new businesses. Bust periods tend to favor local exploration (i.e. incremental improvements to existing business models). Businesses are consolidated and shut down. Out of those new firms and business strategies from the boom periods, the ones that proved successful go on to survive and are integrated into the economic landscape (Google), whereas those that weren’t able to establish enough of a foothold during the boom period get swept away…

All of this is tangentially related, but it brings up an interesting question. Most of the rest of the economy (technology in particular) seems to be widely explorative during boom times. Banking, in contrast, seems to be locally explorative even during boom times, i.e. banking business models seem to converge on each other. Busts seem to fragment banking models and promote wider exploration.

So why is banking so different that the relationship seems to get turned on its head?

The cost of a local vs. global move is part of it. Local moves are expensive for non-financials, almost as expensive as (although not as risky as) global moves. That makes large moves attractive in times when the credit to finance them is cheap. When credit is expensive and/or rationed, incremental optimization is the only thing a non-financial can afford to do.

It’s different for many financials however. The cost of climbing the hill – hiring CDO of ABS traders – is relatively small compared to the rewards. Moreover there is more transparency about what the other guys are doing. Low barriers to entry and good information flow make local maximisation attractive. To use the simulated annealing analogy, banking is too cold; there isn’t enough energy around to create lots of diversity.

And is this a bad thing for the broader economy, and if so why?

I think that it is, partly because the fitness landscape can change fast and leave a poorly adapted population of banks. Also, there are economies of scale for financial institutions and high barriers to entry for deposit takers and insurers (if not hedge funds), so there are simply not enough financial institutions of material size. It is as if all your computation budget in simulated annealing is being spent exploring the neighbourhood of two or three spots in the fitness space.

A large part of the answer, it seems to me, is to make it easier to set up a small bank and much harder to become (or remain) a very large one.

Holding back evolution March 30, 2012 at 7:26 am

Hans asked a good question in comments to a prior post:

when you start off with 40 medium size banks, eventually a few will have a better business model than the others. And then the business model gets copied (due to shareholders seeing the return at the more successful banks and wanting the same), which leads to a convergence to 40 banks with (more or less) the same business model. Basically what we saw in the run-up to the financial crisis. After which the take-overs can begin due to economies of scale.

In other words: I agree that ‘evolution’ thrives on diversity, but how do you prevent convergence to one (or 2/3) business models?

I have to say that that one has me stumped for now. The fitness landscape changes fast for banks, so rapid change (what an evolutionary biologist would call saltation) is the norm. If we let evolutionary pressure bear on a diverse set of creatures in a fitness landscape with a single peak – a single business model – the ones that don’t climb the peak aren’t very successful. So do we have to imagine legislators coming in like comets every 50 years and imposing diversity again? That’s pretty depressing.

The problem is the premise: a single-peaked fitness landscape. Diversity is encouraged when there are lots of local maxima in the fitness landscape. We need, in other words, to make sure that lots of different banking models are acceptably profitable. There are two ways to do this of course: lifting up the little guys (aka the wildlife sanctuary approach) or crushing the big guys (aka a cull). To your elephant guns, gentlemen.

Too lazy to be knowns January 30, 2012 at 4:15 pm

You will recall the famous (and unfairly derided) Rumsfeld quote:

[T]here are known knowns; there are things we know we know.

We also know there are known unknowns; that is to say we know there are some things we do not know.

But there are also unknown unknowns – there are things we do not know we don’t know.

Until recently, I thought that this was sensible. But musing about what people knew about AAA RMBS, it strikes me that there is another situation: the things that we are too lazy to know.

Let me explain. In 2006, some people – including some people I knew – knew that AAA RMBS were sensitive to house prices. They knew that prices could fall, and that if they did, the securities would drop in value. However, they had not done the analysis to go beyond that, so they didn’t know how sensitive some of these securities were to a drop in house prices (especially one early in their life, before reserve accounts had been built up). In short, the risk of these assets was not a known unknown, nor an unknown unknown, but more of a too-lazy-to-be-known.

This form of ignorance is widespread and important in finance. People are peripherally aware that there is more to be known about a topic, and that that knowledge is somewhat relevant, but they don’t find out. They prioritize, they ignore stuff that’s difficult; whatever. The point is that it isn’t just Knightian uncertainty and risk that can get you: it is the stuff that some people know, but that you have never looked at properly.

Most things are wrong (and it doesn’t matter) January 13, 2012 at 5:40 pm

For my sins, perhaps in a past life, I used to manage a model verification group. We looked at derivatives pricing models and checked their accuracy. Many of the ones we looked at were somewhat wrong, and some of these we passed anyway. Why?

  • A model is only designed to be used within a domain of applicability. Provided that there are controls in place to make sure it is not used outside that domain, it doesn’t matter that it is wrong there.
  • Moreover, all models are simplifications. They will always break if you stress them enough.
  • Time to market sometimes beats correctness. Being first, even with a slightly wrong model, is sometimes better than being seventh with a more correct one.

In other words, modelling is like crossing a river on lily pads. It isn’t a question of whether things are secure – you know that they are not – it is a question of having sufficiently good judgement that you avoid taking a bath.

It does not surprise me, then, to learn that many research results may be false. People doing complicated things make mistakes, even without bias. Having open data (so that others can build their own model) and open models (so that they can see where yours breaks) helps, but mistakes are still going to slip through. Science, like finance, isn’t ‘correct’; the best it can aim for is ‘not obviously false’, and it might not hit that bar some of the time.

Indeed, ‘correctness’ is a really unhelpful idea in most modelling. Few models are absolutely correct, and certainly very few interesting ones. ‘Close enough, enough of the time’ is much more apposite, and ‘open enough that you can figure that out’ is a good way of helping to get there.

Geeky hashtag of the day December 13, 2011 at 12:27 pm

#Higgsmas. (The seminar starts in half an hour: see here for a liveblog, or here for even more tweets.)

Update. And the answer is… Santa may have brought us a Higgs at around 125 GeV, but we won’t know for sure for a year or more.

Why do psychology majors do so badly in the US? November 8, 2011 at 12:34 pm

I am puzzled. The WSJ has a useful, sortable list of earnings and employment percentages by undergraduate major based on 2010 US census data. Mostly, the data makes sense: reading science, engineering or a classic professional subject like medicine makes it less likely that you are unemployed and more likely to have a high salary. What’s bizarre, though, at least to my eyes, is the poor performance of those who read psychology. The three majors which generate the lowest median earnings are school student counseling, counseling psychology and educational psychology, all coming in below $35K. (OK, student counseling makes sense, but the other two?) Even clinical psychology and straight psychology only give median earnings of $40K or so.

The employment picture is even worse. Clinical psychology majors have the worst employment prospects of all, with nearly 20% unemployed at survey date, while social psychology, miscellaneous psychology, and educational psychology are all in the bottom 15 (out of 173).

Can anyone explain what is going on here?

The capitalist network October 25, 2011 at 11:26 am

Sorry, I am suffering from post backlog. Here’s something I have been meaning to get to for a little while.

An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.

Now I don’t entirely agree with the methodology but that does not matter too much: the important part is that the authors have tried to ‘empirically identify such a network of power’. Chapeau.

The long view of gold September 24, 2011 at 3:38 pm

The Baseline Scenario (HT Naked Capitalism) has a lovely post on the long term gold price. It asks and answers the question:

…the reason gold is so rare in the crust of the earth? Well, that’s because it’s relatively abundant in the earth’s core. When the earth was cooling, most iron-loving minerals sank toward a massive lump of molten iron down near the core. That includes gold and other heavy metallic elements. So where did the gold we currently have come from? Recent evidence suggests it came from smaller asteroid impacts that were not sufficiently large to break through the existing crust of the Earth…

So that means gold is very rare in the crust of the earth, but not so rare in space. In fact, in space, it might be quite common. A detailed study of the moderate sized asteroid Eros over a decade ago indicated it contained over $20 trillion of precious metals—at 1999 prices.

Back then, gold was around $300. Now it’s $1,800, and other commodity metals have increased in price too. The nominal value of Eros alone is probably now over $50 trillion.

Now anything to do with space is really, really expensive. But if, say, you could grab an asteroid, strap a cheap, slow, chemical rocket (or a solar sail) to it, and move it into Earth orbit, for tens of billions, then you might have something. At that point you can use 1960s technology to send up mining robots, grab decent sized chunks of high quality ore, spray ablative on them so that they do not burn up too much on re-entry, and then guide them down.

That is, of course, a pretty big negative for the long term gold price. But in the short term, with the world lacking a plainly safe reserve currency, I wouldn’t go that aggressively short despite recent events

Corporate psychiatry September 4, 2011 at 10:50 pm

There has been a meme floating around in the last few years of corporate psychopathy. Actually there are two ideas here: co-workers as (potential or actual) psychopaths; and corporations as psychopaths. It is the latter I find interesting.

Let me explain. In the US at least, corporations have had many of the rights of natural persons for many years. The most famous example of this is the recent Supreme Court decision relating to election funding (specifically striking down provisions in the McCain–Feingold Act that prohibited all corporations and unions from broadcasting “electioneering communications”).

So, if corporations benefit from many of the rights of people, shouldn’t they be similarly judged by their acts?

DSM-5, the latest edition of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, proposes a number of criteria for antisocial personality disorder:

  A. Significant impairments in personality functioning manifest by:
    1. Impairments in self functioning (a or b):
      a. Identity: Ego-centrism; self-esteem derived from personal gain, power, or pleasure.
      b. Self-direction: Goal-setting based on personal gratification; absence of prosocial internal standards associated with failure to conform to lawful or culturally normative ethical behavior.
    2. Impairments in interpersonal functioning (a or b):
      a. Empathy: Lack of concern for feelings, needs, or suffering of others; lack of remorse after hurting or mistreating another.
      b. Intimacy: Incapacity for mutually intimate relationships, as exploitation is a primary means of relating to others, including by deceit and coercion; use of dominance or intimidation to control others.
  B. Pathological personality traits in the following domains:
    1. Antagonism, characterized by:
      a. Manipulativeness: Frequent use of subterfuge to influence or control others; use of seduction, charm, glibness, or ingratiation to achieve one’s ends.
      b. Deceitfulness: Dishonesty and fraudulence; misrepresentation of self; embellishment or fabrication when relating events.
      c. Callousness: Lack of concern for feelings or problems of others; lack of guilt or remorse about the negative or harmful effects of one’s actions on others; aggression; sadism.
      d. Hostility: Persistent or frequent angry feelings; anger or irritability in response to minor slights and insults; mean, nasty, or vengeful behavior.
    2. Disinhibition, characterized by:
      a. Irresponsibility: Disregard for – and failure to honor – financial and other obligations or commitments; lack of respect for – and lack of follow-through on – agreements and promises.
      b. Impulsivity: Acting on the spur of the moment in response to immediate stimuli; acting on a momentary basis without a plan or consideration of outcomes; difficulty establishing and following plans.
      c. Risk taking: Engagement in dangerous, risky, and potentially self-damaging activities, unnecessarily and without regard for consequences; boredom proneness and thoughtless initiation of activities to counter boredom; lack of concern for one’s limitations and denial of the reality of personal danger.
  C. The impairments in personality functioning and the individual’s personality trait expression are relatively stable across time and consistent across situations.
  D. The impairments in personality functioning and the individual’s personality trait expression are not better understood as normative for the individual’s developmental stage or socio-cultural environment.
  E. The impairments in personality functioning and the individual’s personality trait expression are not solely due to the direct physiological effects of a substance (e.g., a drug of abuse, medication) or a general medical condition (e.g., severe head trauma).

OK, let’s do some diagnosing.

A1 is pretty straightforward. Most corporates pass a or b or both.

A2a is slightly more difficult, but again many corporates pass, while A2b is pretty much the definition of being an employee.

B1 is harder. A diagnosis for a corporate would have to be based on a or c, but certainly many PR/investor relations/government affairs groups have an element of B1a about what they do, while B1c can be met simply by the pursuit of profit at the expense of (most) other things.

I would not argue that B2a or b are common in corporates, so we need B2c to get a diagnosis. Sadly (or fortunately, depending on your point of view), that is not easy either. Yes, corporations take risk, but typically not ‘unnecessarily and without regard for consequences’. Moreover, while some display ‘a lack of concern for [their] limitations’, they at least try not to. That is what risk management is about. So it seems we stumble on the requirement for disinhibition in our diagnosis.

C and E are straightforward passes for many corporates, but D is difficult too as, frankly, the way corporates behave is normative, at least in North American culture. So, reluctantly, while I think that a lot of what corporates do is antisocial, it would be hard to make a case for involuntary commitment (or sectioning, as we call it in the UK) under the DSM-5 criteria for antisocial personality disorder. Badly behaved, yes; psychopaths, (mostly) no. Before you get too comfortable though, check out the criteria for Narcissistic Personality Disorder and Personality Disorder Trait Specified: these are a lot easier for corporates to pass…

Prices as modified Schelling Points September 3, 2011 at 7:14 pm

This idea comes from Doug’s comment to my last post. First, what is a Schelling Point?

From Wikipedia (mildly edited):

Tomorrow you have to meet a stranger in New York City. Absent any means of communication between you, where and when do you meet them? This type of problem is known as a coordination game; any place in the city at any time tomorrow is an equilibrium solution.

Schelling asked a group of students this question, and found the most common answer was “noon at (the information booth at) Grand Central Station.” There is nothing that makes “Grand Central Station” a location with a higher payoff (you could just as easily meet someone at a bar, or in the public library reading room), but its tradition as a meeting place raises its salience, and therefore makes it a natural “focal point.”

The crucial thing, then, is that Schelling points are arbitrary but (somewhat) effective equilibrium points. (For an interesting TED talk on Schelling points, try here.)

Schelling won the Nobel prize in Economics in part for the Points which bear his name. But what do they have to do with prices?

Well, in a sense a price is a Schelling point. Two people need to agree on it in order to trade. There is no particular reason that BofA stock at $7.25 is a better price than $5 or $10; sure, stock analysts may well disagree, but I am willing to bet that few of them could get to $7.25 for BofA based on publicly available information excluding prices.

As Doug says, this is even more the case for an illiquid financial asset. Here there are few prior prices to inform the decision as to what solution to propose to the Schelling coordination problem. A proper appreciation of the arbitrary nature of the problem is required here.

Note, by the way, that I called a price a modified Schelling point. This is because, unlike in a typical Schelling problem, you often know the solution that others have picked, because you can often see the prior prices at which assets have traded. After all, the Schelling problem for strangers meeting in New York is a lot easier if you know the most common answer is ‘Grand Central Station at noon’.

I like the way that the prices-as-Schelling-points metaphor emphasises the arbitrary nature of prices. Another good thing about it is that it highlights that buyer and seller together construct the equilibrium. If we say the answer is PDT at 9pm, then it is. Besides, PDT serves Benton’s old fashioneds, which are apparently things of genius, so this solution has particular merit. Note that I can make this solution more plausible by publishing maps of PDT, linking to positive reviews, etc. – think of this as the equivalent of equity research. I make my solution better known so that there is more chance that you will pick it too.

Not one Higgs February 28, 2011 at 7:47 pm

The LHC is warming up. Sightings of the Higgs boson are confidently predicted.

I don’t think so. Now, I studied physics in the 80s, so I have less understanding of modern particle physics than a cave man had of a bicycle – yes, physics does move that fast – but I can smell an epicycle pretty well, and the whole Higgs field/dark energy thing whiffs to high heaven. Hence my entirely unscientific and seat of the pants prediction: the LHC will find either zero or more than one Higgs boson, and egg will be better distributed around faces than if L’Oreal put yolks in their moisturiser.

Sweet pea, serious problem July 15, 2010 at 8:04 am

The good news is that June’s sunshine has left me with some gorgeous flowers.

Sweet Pea

The bad news is that last month was the hottest June ever recorded worldwide, and the fourth consecutive month that the combined global land and sea temperature records have been broken, according to the US government’s climate data centre. Our failure to act on Kyoto, and the subsequent disappointment of the Copenhagen climate summit, is going to have serious consequences. The flowers might be nice now, but life in an overheated world won’t be in ten or twenty years’ time.

How culture specific is the Kruger-Dunning effect (and what can you do about it)? July 1, 2010 at 9:32 pm

I do try, on occasion, to write boring titles, but with this one, honestly, I think I’ve set the bar fairly high. Bear with me.

The Kruger-Dunning effect is the phenomenon whereby the worse someone is at performing in a given domain, the worse they are at estimating how good they are. The great performers know they are great; the terrible ones think they are kinda OK verging on good. As Kruger and Dunning say:

In essence, we argue that the skills that engender competence in a particular domain are often the very same skills necessary to evaluate competence in that domain – one’s own or anyone else’s.

Hence the bad performers don’t have the skills to know they are bad.

The K-D effect is well known and explains a lot (especially to those of us who sometimes have to spend time talking to people in authority whose understanding is, shall we say, imperfect). But it occurred to me the other day that cultural factors can mitigate or accentuate it. For instance in cultures where praise of effort rather than results is common, one might expect the K-D effect to be stronger, whereas in less supportive, more results-driven cultures, there is more evidence that the bad are indeed bad, and hence the effect might be less. Now I am not nearly stupid enough to suggest paradigmatic countries in each group, but the evidence here (see for instance the section ‘Cross Cultural comparison’ in this link) is interesting.

For financial companies of course, the moral is clear. If you don’t impose objective performance standards, not only will the good guys get upset and leave, the bad ones will actually start to think that they are good. They might even (if they are talented self-publicists) convince you that they are. The only defence against K-D is objective assessment. Or, to put it in slightly more populist terms, show me the money.

Lo uncertainty April 2, 2010 at 6:06 am

Andrew Lo and Mark Mueller have a new paper which has a very nice explanation of an idea that I have long held, namely the importance of distinguishing between those situations where you know the form of the distribution and it suffices to estimate its parameters, and those situations where the parameters change over time.

I gave the example of a dice with sides between n and n+5. If you know n is fixed, then it does not take many observations to know what n is. Once you have seen a 10 and a 15, for instance, you know n = 10. But if n itself varies, then you are in a much more difficult situation. Seeing a 10 and a 15 does not prove that n = 10 since it might have been 8 for the first observation and 11 for the second. Even if n only varies slowly, you need a lot more data to make good statistical estimates.

Lo and Mueller propose a more detailed hierarchy of model uncertainty as follows. I will use my dice rather than their examples as illustrations.

  • Level 1: complete certainty. The dice has 5 on every one of its sides.
  • Level 2: risk without (Knightian) uncertainty. A standard six-sided dice. The probability of each outcome is fully known.
  • Level 3: fully reducible uncertainty. A six-sided dice with each of the numbers between n and n+5 on one of the sides. n is unknown, but it is fixed and therefore can be precisely deduced with enough observations.
  • Level 4: partially reducible uncertainty. A six-sided dice with each of the numbers between n and n+5 on one of the sides. n is unknown and, with some fairly low probability, may go up or down by one each throw. Here we know something about the path of n from observing the dice rolls, but we can never be certain what it is at any point in time. The distribution of outcomes can therefore never be completely known.
  • Level 5: irreducible uncertainty. Completely random numbers on the sides of the dice, which change on every throw. We know nothing.

Like Lo, I think that in finance we often pretend to be in level 2 when in fact we are in level 3 or 4.
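The gap between levels 3 and 4 is easy to see in a small simulation. This is my own sketch, not anything from the Lo and Mueller paper; the drift probability and sample sizes are arbitrary choices:

```python
import random

rng = random.Random(42)

def roll_fixed(n, k):
    """Level 3: a dice with faces n..n+5, n fixed. Once the lowest face
    has shown up, min(rolls) recovers n exactly."""
    return [rng.randint(n, n + 5) for _ in range(k)]

def roll_drifting(n, k, p=0.1):
    """Level 4: n takes a lazy random walk (+/-1 with probability p after
    each throw), so no finite sample pins it down exactly."""
    rolls = []
    for _ in range(k):
        rolls.append(rng.randint(n, n + 5))
        if rng.random() < p:
            n += rng.choice([-1, 1])
    return rolls

# Level 3: with enough throws, the estimator min(rolls) converges on n.
fixed = roll_fixed(10, 200)
n_hat = min(fixed)

# Level 4: the same estimator now reflects wherever n wandered lowest,
# not its current value, and the observed spread (exactly 5 for a fixed
# dice, given enough rolls) typically grows past 5 as n drifts.
drifting = roll_drifting(10, 200)
spread = max(drifting) - min(drifting)
```

The point is the estimator: min(rolls) nails n at level 3, but at level 4 it only tells you where n has been, not where it is now.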

Was there a huge magnet over Madoff’s place? April 1, 2010 at 6:06 am

From the BBC:

Scientists have shown that they can change people’s moral judgements by disrupting a specific area of the brain with magnetic pulses.

They identified a region of the brain just above and behind the right ear which appears to control morality. And, by using magnetic pulses to block cell activity, they impaired volunteers’ notion of right and wrong.

Experience has taught me that relying on summaries of scientific papers posted on news sites is fraught with danger. Nevertheless, if the paper does say what the BBC says that it says, this is fascinating. (Note that unlike the other post today, the BBC story was prior to 1st April.)

In praise of argument February 6, 2010 at 7:15 am

Jonathan Hopkin has an excellent piece on the current climate change research debacle:

My argument is basically that the incriminating emails probably don’t significantly undermine the findings of Phil Jones and his team, and the whole story has been overblown – scientific research is always imperfect, and there are always issues about the reliability of data. And, sometimes, academics behave badly over email. Amazing but true.

He’s right, but then he understands how science is done, as this passage demonstrates:

The truth is that the world of scholarly research is a world which revolves around argument and disagreement: present a paper at a conference and, if it is at all interesting, hands will go up as other researchers seek to challenge and scrutinize your findings. The main reason for this is probably vanity – asking a tricky question and putting another scholar on the spot wins you respect and standing. But the fortunate side effect is that poor research has a good chance of being revealed as such.

In other words, science as she is practiced does not involve mining little nuggets of truth. It is a fermenting brew, with lots of people stirring the mixture, adding ingredients, trying to throw a batch out and start again. Only years later – in some cases decades later – does it settle down enough that we have some chance of seeing what we have really got.

Note too the use of `interesting’ in the above. Being wrong isn’t the main scientific sin. Being boring is. Fortunately many scientists have low standards (although perhaps not quite as low as economists), so they take an interest in quite a lot of ideas. But it is certainly career enhancing to produce something that a lot of people find interesting, even if it turns out to be wrong. Those wrong ideas are useful in part because opposing them often helps to create a less wrong idea.

I think it is extremely probable – but not certain – that current climate science is correct. We have a big problem, and we need to do more to fix it, even while acknowledging that we do not know quite how big it is. It would be a tragedy if a failure to understand how science happens were to be the reason we don’t do enough.

Making safe vs. predicting danger January 20, 2010 at 11:22 pm

There is an interesting letter in the current LRB from Wilhelm Schneider:

… In many cases it is true that the ‘future’ (of chemical processes, say) can be precisely predicted on the basis of the laws of nature, and engineers take advantage of those fortunate circumstances. But there are also counter-examples: it is notoriously difficult, for example, to predict the exact time an earthquake will occur.

Should predicting the occurrence of financial crises in any case be the aim of an economic theory? Those of us who work in engineering have adopted a more modest, though still challenging, approach. In aerodynamics, for instance, we investigate how to get turbulent air flows under control, instead of trying to predict ‘catastrophic’ events.

That is a useful distinction I think. Predicting a crisis seems to be a difficult problem. Building the system in such a way that crises are less likely seems to be simpler. Now often it turns out that seemingly hard problems are simple and vice versa (AI cracked `diagnosing heart disease’ very quickly; building a robot that could walk turned out to be a much harder problem) — but still, tackling the simpler problem seems sensible.