Friday 24 August 2012

Some remarks about Neal Stephenson

I recently attended a very interesting event down at the Watershed cinema - American science fiction author Neal Stephenson came to give a talk, centred around his new book, "Some Remarks". This is a collection of non-fiction essays on science, technology, history, gaming, and other such fun things. I've not read the book yet - in fact I've only actually read one of Stephenson's books, namely "Diamond Age" - but from what I know he's part of the interesting new wave of hard sci-fi that emerged in the last few decades.



The talk was certainly interesting. He spoke on various topics connected to the book, including an interesting comparison between the rise of modern internet technology in places like Silicon Valley, and the innovation of the 1850s, when Victorian scientists and engineers were making a big push to make the world a more connected place. A big part of this was the project to lay cables across the globe, to broaden the range and usefulness of the telegraph network. It turns out the act of cable laying is a lot more difficult than one might initially assume - the ocean floor isn't flat, so the speed of the ship needed to be constantly varied so that the cable would mould to the shape of the sea floor as it touched the bottom several miles behind the ship. The whole thing proceeded in quite a hacky way, with engineers basically inventing the necessary technology as they went along - and at one point ruining a transatlantic cable by running 2000 volts through it - so it makes an interesting contrast to the carefully designed and planned nature of large projects today.

Stephenson's fascination with science, both old and new, was quite evident, especially when he talked about the potential for what can and should be achieved - he spoke passionately about the need to move on from oil as a fuel source and fix some of the mess we've made. This was balanced, and in part motivated, by an anger at avoidable catastrophes such as the Deepwater Horizon oil spill, and a frustration that we haven't managed to get beyond this kind of thing. The difficulties of actually stopping the leak and the vast resources needed to clean it up notwithstanding, I can certainly sympathise with this - after all, oil leaks are something we've been suffering for decades, and it's not exactly the best use of our time.
Massive cock-ups: haven't we got beyond this yet?

However, following on from this, there was one point with which I (and several others, it seems) profoundly disagreed. While discussing technology and innovation, he made the claim that technological progress has slowed down, and that we are currently living in an age of relative stasis. This immediately struck me as odd - we appear to live in a golden age of futuristic shiny technology, the like of which has never been seen before, with capabilities beyond the imagining of the science fiction of the recent past - so how could that be so?

Of course, being technologically sophisticated and having all manner of complicated machinery is not the same as progress, and just because technology of the past improved doesn't mean we are continuing to innovate and invent. Stephenson argued this point as follows: imagine taking someone from around the year 1900, sending them forward to the 1960s, and allowing them to experience the world as it is and see how it's changed; then send them back, and get them to describe what they saw to their peers. Chances are, they will be unable to make themselves understood, lacking even the basic vocabulary to convey the mysteries of the future. On the other hand, take a person from the 1960s and bring them forward to the present: they would see that we have cars with smoother curves, that we still fly in 747s, that telephones have got smaller, and that computers have got faster. Progress indeed, but nothing so utterly inconceivable that they could never tell anyone, or that would be forever beyond their comprehension.

This might seem a reasonable argument on the face of it; but the more I thought about it, the less convincing it became. Admittedly, a number of the concepts present in the 1960s are still alive and well today, just smaller/faster/better/stronger/chewier. But on the other hand, we now have the internet, an idea likely to be almost completely alien to your average 1960s inhabitant. It's not just that we can send messages between machines fast and reliably: it's the previously unheard-of idea that the wealth of human knowledge is instantly accessible by almost anyone; the way that we can communicate and interact with people on the other side of the world in complex ways in real time; the way it has spawned entire new communities and ways of doing business. Then there's the rise of the smartphone, meaning that many people carry around in their pockets a device allowing them to be constantly connected to a vast network of friends and collaborators. We have become accustomed to these things now, but it is worth remembering that they are not just better ways of doing old things: they are fundamentally new ways of using technology and interacting with other people, which have undeniably changed society in innumerable ways. And it's not just communications technology that would astound our 1960s traveller: the sheer computing power that not only exists, but resides in our pockets; the ability to build detailed 3D models of the inside of a living person, and peer inside their bodies in real time; the fact that for years now there has been a near-continuous human presence in space. The list goes on.

Then again, I will concede one thing: as amazing as these advances would seem to the people of fifty years ago, they would probably still get the hang of it all, more so than someone from over a century ago. But I don't know if this is so much down to the nature of technology as to the way people think: the capacity of people in the 60s to imagine the future would probably have been a lot better than that of their predecessors - impossible things were no longer quite so out of reach. I'd say that this increased ability to contemplate the improbable was helped in no small part by the science fiction of the time: people were willing to look to the future and see something new.

"The space programme is dead".  No it isn't.
But this argument of Stephenson's was about more than just the slowdown of technological growth: it was about ambition and inspiration. He lamented that there were no big projects any more, and that the space programme was dead. Seriously: he said this while there is currently a robot the size of a car exploring the surface of Mars. The landing of the Curiosity rover - an audacious feat in which the rover was lowered from a sky crane held aloft by retro rockets, controlled entirely by autonomous software - is one of the most impressive things I've seen this year, and looks like pure sci-fi. That we can do such things - and moreover, that we want to, and that it is capturing the imagination of so many people - shows that the space programme is alive and well. Not only that, but we can all view full-colour panoramas taken on the surface. The only reason NASA isn't doing more is a chronic lack of funding; but what it achieves with what it has is quite remarkable.
Go explore this 360° Mars panorama. Seriously, do it.

On the other hand, reports of the demise of human space flight are, at least at present, fairly accurate. With the end of the shuttle programme the only country now capable of putting humans in space is Russia, using their trusty yet antiquated rockets. Moreover, our hypothetical 1960s time traveller would be shocked to hear that, while we reached the moon within a decade of first touching space, we've not been back in forty years - which is indeed a cause for some sadness and concern. It's worth remembering though that that momentous achievement took a big chunk of the resources of the most powerful nation on the planet, and was motivated by a dangerous game of one-upmanship with a rival superpower. It's incredibly impressive that humans reached the moon with 60s technology, but to do it repeatedly was simply not sustainable - we still need to find ways of doing it cheaper, safer, and without depending on the full support of an American president.

That's why recent developments in the private space industry are so exciting: individual companies are starting to get into space, from the development of Scaled Composites' SpaceShipTwo to the recent docking of the SpaceX Dragon capsule with the ISS*. This is important, since once private industry gets in on it, and finds it to be possible and profitable, the human exploration of space can really start happening. The exploration of space and other planets by government organisations is a fine thing, and long may it continue - but there's no way space is going to become a new frontier for humanity if it is government controlled. It took a long while to get started, but as I'm sure you can imagine, the technological challenges of building low-cost, reliable, safe spacecraft are considerable, and it's taken a lot of advances in materials science, aeronautics, rocket science and computing to make it happen. Such things can't possibly happen overnight.

[*an aside: I love the way we can casually just refer to the ISS now, as if it's just some other piece of hardware... it's a freaking space station. In real life. In orbit. With people living on it. Yeah.]

Private space industry docking with the space station

This leads on to one of the reasons I think it's easy to get the perception that technological progress is slowing down, and what might be behind some of Stephenson's remarks. Science, technology and engineering are still progressing at a fast pace, but there is so much more of it; and as discoveries are made, they become harder, as only the difficult things remain to be done. Then, every discovery made opens up whole new areas of knowledge, and whole new branches of science come into being that also need studying. Progress is being made in all these fields all the time, but a breakthrough in one area can seem insignificant on the scale of the whole - and it takes a large number of small breakthroughs before something radically new can be done. These things add up - for example the advances in display technology, interfaces, computing power and battery capacity that allowed devices like the iPhone to be invented - but they all take time. There's probably some law that says that even if discoveries are made at an exponentially increasing rate, technology requires them in ever greater numbers, so the growth in technological sophistication won't just blow out of control (actually, I would just love to see what would happen if you put Neal Stephenson in a room with Ray Kurzweil).

To explain this, it's useful to think of scientific research as the expanding edge of a circle. In the beginning there was only a small circle, where there weren't all that many things to work on; virtually any upstanding gentleman with a workbench and some magnets could make a discovery (as one questioner put it, you could work for a while and discover ten physical laws), and make a noticeable dent in the circle. One person could even reach opposite edges of the circle. But as the circle gets bigger there is more of a front to work on, and any one person can only make a small contribution to its size, and only along a relatively small part of its length. More and more people are working on expanding this circle as time goes on, but the circumference keeps getting larger, so each fixed increase in radius adds a smaller and smaller fraction to the total area - even though vast swathes of understanding are covered by a small increase in radius, it doesn't look so big compared to the circle as a whole.
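
As a quick toy illustration of that (my own aside, not something from the talk), a few lines of javascript show how a fixed advance in the "radius of knowledge" becomes an ever smaller fraction of the area already covered:

    // Expanding-circle analogy: a fixed advance dr in radius adds roughly
    // 2*pi*r*dr of new area, which shrinks as a fraction of the total pi*r*r.
    for (var r = 1; r <= 1000; r *= 10) {
      var dr = 1;                                           // one unit of new discovery
      var area = Math.PI * r * r;                           // everything already known
      var gained = Math.PI * ((r + dr) * (r + dr) - r * r); // newly covered area
      console.log("r = " + r + ", relative gain = " + (gained / area).toFixed(4));
    }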

So I think it's an easy trap to fall into, to think that progress is slowing down - especially since we live in such a technologically advanced time, where we're used to things being amazing and expect new and faster gadgets on a regular basis. But difficult things take time, and the bigger projects depend on so many areas making crucial advances that their overall progression seems unbearably slow. Neal Stephenson lamented the fact that we still haven't managed to devise a fuel to replace gasoline, or a way to efficiently capture solar energy: I'd argue that this is not so much due to a lack of imagination as to the fact that these things are really hard, and we're really trying. Admittedly a big kick like oil running out, or some political enthusiasm and guidance, would spur things on; but it's not that people have stopped working on it.

I can see where he's coming from when he laments the apparent stasis of technology, as we muddle on with our fossil-fuelled existence in a world without flying cars and moonbases. The dreams of so many sci-fi authors never quite materialised, and it's tempting, especially for someone whose job it is to imagine fantastic visions of the future, to think we've lost our way: that humans stopped dreaming. But given the vast advances made in medicine, science, artificial intelligence, space flight, nanotechnology and so on in recent times, I don't think we have to worry about any sort of innovation drought; the big advances I've seen even in my lifetime remind me of what we can still do. Exploring the vast frontier of science isn't quick or easy - but it's certainly going to keep us busy.

Incidentally, I'm writing this on a Raspberry Pi - a computer costing £25 small enough to fit in a matchbox (if you have really big matches), with computing power several times that of some desktop machines I've owned. How's that for progress.

Tuesday 31 July 2012

Selfish genes and the meaning of life

Using selfish genes to ponder the meaning of life, and what the alternatives might be


Some time ago, I wrote about the meaning of life, and stated that while there doesn't seem to actually be one, that's the way I would prefer it: that any imposed meaning would detract from the wonder of the universe, and that the freedom to decide our own fate is the best of all possible alternatives.

However, that article missed out two fairly important points. First, what does science have to say about a possible meaning to life; and second, from whose point of view are we actually considering there to be a meaning? This post addresses both points.

All the major religions have some concept of a meaning to life, whether it be for our own personal enlightenment or the glorification of some ineffable higher being. These sufficed for a time, but as scientific understanding progressed humans began to ask the ultimate question in earnest: what are we here for? What is it all about? Attempting to answer such questions has driven many branches of science in the effort to understand the world and our place in it - from geological explanations of why the land on Earth is as it is, and the physics of how the Earth, moon and sun came into being, to attempts to understand the origins of the universe itself.

However, these merely explain the backdrop, and it is to biology that we must turn to understand the origin or purpose of humanity. While explaining meaning and purpose can be considered rather lofty goals, I think they are actually addressed quite satisfyingly by the explanation of how our species came to be, because in understanding how it happened, we get the answers to why it happened. The primary reason why we are here must be evolution: the process of adaptation, survival and selection that led to the emergence of our species over tens of millions of years. This provides a fairly succinct answer to why we are here: because our ancestors were here, and they were good at it.

But to find 'meaning' in this, we need to dig deeper. The mechanics of the process are well understood, but that is not quite the same as the reason it happens - that is, what ultimately drives evolution? Perhaps the best explanation of this is to consider it not from the point of view of species or organisms, but to take the gene's-eye-view of evolution: a perspective which was popularised in the 70s by Richard Dawkins's book The Selfish Gene, and draws on the work of many evolutionary biologists such as W.D. Hamilton and Robert Trivers. The basic idea is that genes are the fundamental unit of selection, and that good genes are ones which confer some advantage to their host, and so are more likely to be passed on to future generations.  This means there is no guiding hand on evolution, nor any sort of 'good-of-the-species' consideration - the only mechanism required is that genes are passed down through the generations, and the gradual development of their host organisms is a consequence. Complex life was never part of a 'plan'; but once the first primitive replicators, whatever form they may have taken, were forced to compete for resources against their neighbours, the race was on to survive.

This is quite important when considering a scientific answer to the meaning of life. The reason that humans, or any other animals, live is to propagate the genes they carry. If a species is not good at this, it will die out, while genes conferring improved survival are retained, leading to the vast array of incredibly complex survival machines we see today. To the extent that any organism can be seen to have a purpose, it is to reproduce and pass on its genetic material - anything that happens afterwards is largely irrelevant to its genetic legacy - and the diverse and bizarre behaviour of birds, bees, plants and mammals can be explained by taking this gene-centric, rather than individual-centric, view of life.

But can it really be that the purpose of life is merely to pass it on? Doesn't this seem both self-serving and pointless? In a way, yes, but it is the only answer that makes sense. Genes are passed on simply because they are good replicators, and the evolution of complex life is nothing more than the best way of replicating genetic material. As Dawkins makes plain in his books, this is not because genes are conscious entities with a will to survive, but simply because they are sets of instructions which will be copied, and the best at being copied will be copied more. There is no need for any thought or intelligence in this, merely the mechanism for replication and a criterion (survival) by which fitness is defined; our own success, intelligence, creativity and technological accomplishments are just examples of successful strategies that genes have found to survive, much as claws, echolocation and venom are successful gene transportation mechanisms in other organisms.

Thus the meaning of life can be seen not even as a desire for our own reproduction, but rather as the propagation of genetic material, which is as invisible to us as we, the transport machines, are to the genes themselves. The 'meaning' of all of this is that good genes will be passed on, and our 'purpose' is to facilitate this transfer. We no longer need to ascribe some divine or spiritual intent, or speculate endlessly about what it's all for, because science has provided the most succinct and plausible answer: we are here because it is convenient for genes.

At first glance this may appear somewhat bleak, but as I said before, the absence of any particular meaning is not a problem. If our lives are merely a by-product of low-level replicators, that should not matter: passing on genetic material may be what we are 'for', but it is not all that we can do. We are here, and we can do great things, and whether or not this benefits our genes does not have to matter to us, now that we have evolved the intelligence to comprehend it. Indeed, rather than being slaves to genetic determinism, we can actively work against our genes' best interests, via contraception, care for the weak and sick, life-preserving technologies and genetic engineering. The 'meaning' of our existence is the survival of genes, but we have already shown that we can move well beyond this, and decide our own future. Indeed, it is quite empowering to think of it this way: we have pulled ourselves up from amongst the animal kingdom and into the stars, even though it was never meant to be our purpose. That we have achieved so much without divine intervention or controlling factors is impressive, as I said before; that we did it even though our 'meaning' or 'reason for existing' has nothing to do with it is even better.


Even so, I can imagine some people being uncomfortable with this reductionist view of human existence. Assuming evolution is true (it is), does that mean we have to take the gene's-eye-view of life? Could we not have some other meaning assigned to our lives, with the replication of genes merely the mechanism by which it was implemented (implausible, but worth considering)? Does the selfish gene model have to be the only way to think of our purpose, or can there be others?

These questions are worth exploring, because they raise the question of whose point of view the 'meaning of life' is to be evaluated from. There has to be some perspective from which meaning is defined, otherwise it makes no sense as a concept. The universe does not care about us any more than it cares about the collision of rocks in space - and even then, that would be ascribing thought and intention to some abstract entity, even if it is 'the universe' as a whole. So the question of the meaning of life, if not yet answered satisfactorily, must address this: from whose point of view are we asking?

First, consider our own point of view. It is a valid question to ask, from our vantage point as intelligent beings, what our purpose is for being here (so not the scientific reason as above, but what it means to us). However, I would argue that trying to decide our own meaning of life for our own purposes, and having no a priori imposed meaning, are exactly the same thing - and that is basically the situation I described before. So, the question of a meaning of life at this level is resolved in that it doesn't make sense to think there can be one: we are free.

On the other hand, what other point of view could there be? As I mentioned at the beginning, one historically prevalent concept is that it is from the perspective of a supernatural being that this question must be considered. For example: the meaning of life is to be good, devout, pious, and subservient, from the point of view of God. That would make sense logically, since it is God, not humans, who decides the meaning, and so humans do have a meaning for their lives (even if it isn't a very nice one to think about). Or, to take another example: the meaning of life is to live humbly and compassionately, in order to be reincarnated in a better situation in a future life, from the point of view of whatever karma or divinity makes the decision. The problem with all of these ideas is that they invoke supernatural entities to be the source of meaning and the judges of our actions - for this to be the meaning of life, they would need to actually exist. Obviously, no evidence exists for God or any other super-human intelligence, but that is not the point here: my point is that if anyone wants to advance an alternative candidate for the meaning of life, other than the make-it-up-for-ourselves model I have outlined, based on the indifference of the underlying selfish genes, they must choose a point of view from which this meaning is defined, and provide evidence that it is real.

Image: Jeff Johnson, Hybrid Medical Animation, taken from linked site
So we are left with scientific explanations of the world, which as I said have made considerable progress in understanding why everything is here. To date, the most compelling and convincing explanation of why humans exist is the gene-centric model of evolution: root through all the scientific discoveries which tell us how the universe, this planet, or our species came into being, and this is the theory which goes furthest to tell us why we are here, and what the purpose of our continued existence really is. Given all the possible alternatives, all the variety of ways that meaning could have been assigned, I'm very glad we have one that gives us the option to choose for ourselves; and that we can choose to overrule our genetic programming and to do what we think is right. The selfish gene may be the meaning of life, but what we do with it is up to us.


Sunday 29 July 2012

Foundation and Patriarchy


This post is about Isaac Asimov and the Foundation series. Before I go further I'll point out that I like Asimov, and greatly value what he has done for science fiction, as well as his contributions to science, science popularisation, skepticism and humanism. If I appear to become quite critical, bear that in mind. Also, Spoiler Alert, for anyone who has not read the Foundation trilogy. So, on with the rant...

I recently finished reading Isaac Asimov's Foundation trilogy, his epic galaxy-spanning tale of conquest, survival and rebellion after the decline and fall of the first galactic empire (I'm focusing on the three books of the original trilogy, not the sequels and prequels he wrote much later). In general, I found the series to be excellent, with many interesting concepts, both in terms of science and how people deal with it. It was fascinating how technology, in the first book, is turned into a kind of religion, in order to spread it to people without them being able to understand the science behind it, and is then used to control those who revere its mystic qualities. There were interesting ideas in the third book concerning mind control - namely, if someone was influencing your desires and altering your loyalties, would you be able to tell; and if your mind was compromised in such a way, would you still be the same you? One of the bigger themes, pervading the three books, was the way people might behave if they thought the future was pre-ordained, and the faith and complacency they feel when they believe they are destined to win no matter what.

However, it became obvious mid-way through the first book ('Foundation') that every single one of the characters was male. Not only all of the lead characters, such as the rulers of the Foundation or their adversaries, and the roguish merchant traders, but pretty much anyone they deal with. In a whole galaxy, that seems a bit lop-sided. There are two exceptions, whose appearance was conspicuous given the obvious omission so far. One of these is the wife of a planetary ruler, acquired for diplomatic purposes, being the daughter of a neighbouring warlord; her entire dialogue (her existence spans only a few pages) seems to consist solely of nagging at, moaning about, or generally belittling her husband, in some kind of clichéd 50s stereotype. The other female character's presence is even more fleeting, her entire purpose being to go "ooh shiny!" when some heroic space-trader presents her with a high-tech necklace. In all, not a particularly flattering or considered portrayal of half the population of the galaxy; and the perception of space as a massive boys' playground was wearing rather thin by the end of the book. Not that any of the men had particularly deep or developed characters either, each being rather single-minded and flat: but that's partly down to the way the book is written, consisting of a series of short segments of the Foundation's history, spaced decades apart.

Things seem to get somewhat better in the second instalment ('Foundation and Empire'), with the introduction of Bayta Darell (see - I can actually remember her name). She is one of the main characters of the book, albeit primarily because she happens to be the wife of a figure involved in the democratic rebellion. She's a reasonably strong character, though much of her purpose revolves around being the understanding, compassionate, fragile, weak-but-caring trope. Her actions are crucial at the end of the novel (really, spoiler alert), where her quick thinking (after a few chapters of muddling along) in shooting the old psycho-historian, just on the brink of the big reveal, is pivotal in the development of the whole story arc. So while her actions consist mostly of following her husband from planet to planet and being all wifey, she is at least a female character that exists, with some importance to the plot - or so it at least seems, but we'll get to that...


The third book ('Second Foundation'), for the most part, continues in a similar vein, with good old space men zipping around the galaxy being all daring and wise, or plotting their cunning treachery as the plot gets ever more complex, never quite knowing who the good guys are (actually, for most of the book I was convinced there were none). Here is perhaps the best potential for a strong female character, where Arcadia (the 14-year-old daughter of a prominent scientist and granddaughter of the above-mentioned Bayta) shows some actual courage and determination, eager to be part of the action and basically have an adventure. She goes as far as managing to acquire high-tech listening equipment (in the time-honoured method of flirting with the nerdy kid) to find out what her father and his mysterious associates are up to, then stows away on a ship to see for herself, and to be part of the action. It seems that the female hero has finally arrived, albeit with rather a lot of being scared, followed by pretty much running away to a quiet corner of the galaxy until everything is alright (admittedly she has fairly sound reasons for doing so, but still). She still manages to be quite proactive in achieving what she needs - namely, getting a message across the galaxy to her father, without knowing whom she can trust or who might be under the influence of the mysterious Second Foundation (achieved by profiteering in a massive war, incidentally, but that's fairly standard by now) - and so basically saves the day, or so it would appear.

But all this cunning and bravery can't be natural, right? The girls and women can't really be that clever and daring, can they? Indeed, no: and this is the point at which I got properly annoyed about the whole lack-of-actual-women-in-the-whole-of-space thing. It turns out that Arcadia, and Bayta before her, were acting as they did because they too had been under the subtle influence of the Second Foundation. Yes: they only actually showed any initiative, bravery, aggression, or sense to run away because they were being mind-controlled by a conspiracy of psychologists on another planet in order to maintain the thousand-year plan for the next empire. OK, that's all valid plot-wise, and the long time-scale over which the grand plans unfold is a major (and fascinating) theme of the books - but this also gives the impression that Asimov considers women to be pretty much incapable of anything noteworthy, unless they happen to be made artificially heroic by not-entirely-explained psychological manipulation. This third book actually lessens the regard which I had for characters in the second book, when things started to be looking up.

I should mention for completeness that there are a few other women in the third book - the one that is not a fairly bland housewife / maid is the clingy, needy, pathetic mistress of the ruler of a strategically important planet. Her purpose seemed to be to annoy her man with inappropriate pet names and be chastised for it, and to desperately latch onto any form of contact with another female. Or so it seemed, for it turned out she was an agent of the Second Foundation, merely playing the part of a pathetic weak-willed hanger-on - her identity is revealed when her disguise slips momentarily, since she considers Arcadia too stupid to notice. So at least women are capable of being lying and manipulative members of this secret and highly advanced gang of psycho-historians.


So in all, the trilogy doesn't paint a very good picture of the author's consideration of women (I'm not saying he hated women, or was fundamentally misogynistic - just that he didn't deem it necessary to actually write any decent ones into these books). Now, obviously, this trilogy was written in the 1950s, so we perhaps shouldn't have too high expectations; it is pretty much contemporaneous with The Lord of the Rings, that other epic trilogy in which women, for the most part, are content with being wives, queens and signposts. But still, this is science fiction, set thousands of years into mankind's (oh, I mean humanity's, how did I forget...) future - so the thing I find most odd is that Asimov, while capable of thinking up the most fantastic vision of the future of the human race, as it spreads out amongst the stars with an array of fabulous technology, assumes that societal organisation will remain totally unchanged. Without exception, all of the societies in the Foundation trilogy, on innumerable planets through five hundred years of history, are modelled in the same patriarchal way as a 50s nuclear family, or, worse, as the dynastic monarchies typical of medieval history and fantasy epics. Again, this has to be seen in its historical context, but even so there had been vast progress in women's liberation and rights in the decades preceding these books - women's suffrage and the right to actually be considered fully human were reasonably recent achievements, second-wave feminism was just around the corner, and progress showed no signs of slowing down by the mid 50s - so I am confused as to why Asimov did not extrapolate this further, or even play with the ideas in the various future cultures he invokes. In fact, Asimov considered himself a feminist, and regarded the education of women as a key need for society, in part in order to reduce the rapidly growing population - so it's not as if such ideas would have been alien to him.

Of course, the above is based entirely upon the three books of the Foundation trilogy. I've not read much else of his work, so it might not be a particularly fair sampling. Indeed, the other three books in the series - added many years later - may redress the imbalance; and I can't vouch for the content of his four hundred or so other books. However, it's worth noting that the Foundation trilogy is his best known work, and the books he wrote during that period generally remain his most popular.

One of the only other books by Asimov I've read is The Stars Like Dust, which has a similarly disappointing portrayal of women. One of the main characters (I forget her name) is a princess-like daughter of one of the main planetary rulers. As important as her actions may be to the plot, her main purpose seems to be as a love interest for the protagonist, and to tag along on his adventure. From what I recall she isn't portrayed in a particularly enlightened way, with plenty of "oh you know what women are like" moments. The main thing I remember about her is that at one point it's necessary to attach a separate living pod onto their tiny spaceship to house her (while being on the run, or on a secretive mission, or generally in a bit of a rush), lest she have to be indecently close to the males present, and because she needs a lot of space because... well, woman stuff, or something. I read The Stars Like Dust a few years before Foundation (twice actually, without realising), and remember back then being struck by the less than egalitarian portrayal of the major characters; this further bolsters my impression of Asimov's writing having a disappointingly misogynist slant.

Again, these books were written about sixty years ago, and so are very much a product of their time - so I don't hold it against Asimov particularly. Then again, as I alluded to above, while his writing, being science fiction, is predominantly about futuristic technology and its consequences (and perhaps can't be expected to do the whole social justice thing as well), science fiction has always really been about people and society; so when quite a large aspect of society is assumed to be completely static, it is quite an obvious omission.

It's interesting to compare Asimov to other science fiction writers of the time. I recently read The Drowned World by J.G. Ballard, and the level of misogyny there was appalling, with the sole female character generally described as listless, unreasonable, hysterical and generally incapable (and often referred to simply as "the girl"). This would have been even more striking if it weren't for the bizarre neuroscience woo pervading the book, and the horrendous levels of blatant, unjustified racism - so things could definitely be worse. My limited experience of reading Brian Aldiss and Ursula K. Le Guin hinted at better things, with a sprinkling of key female characters and no particularly memorable instances of sexism to complain about. Arthur C. Clarke's books (the Odyssey series in particular) fared rather better, with plenty of characters involved in the plot who just happened to be female. This is especially pronounced in the later books (though these were written decades later), set in the near and distant future, where women are spaceship captains, engineers, scientists and so on, without that being a particularly remarkable thing - just as one would hope society would develop, given its trajectory so far.

Even though Asimov's Foundation books were written quite some time ago, and we can perhaps forgive him for merely reflecting the prevailing views of the times, I still think it's something worth mentioning. This is classic science fiction, and has had a big impact on a lot that followed, both in books and film, so it's important to realise what it got wrong as well as what it got right. Given the sexism, racism, and classist attitudes prevalent in many books of the time and earlier (both in science fiction and in literature in general - I'm looking at you, Conrad and Doyle), things could have been worse; but when trying to imagine the far distant future, things could have been better. It's always worth bearing such issues in mind when considering the genre often called 'hard' sci-fi: while it may well appear as scientifically rigorous as possible, it isn't always so thorough in other matters.

Friday 27 July 2012

Fractal distraction


It seems I've not got around to much writing recently. This is partly because I've been quite busy at work (conference paper - yey), but also because I had something else to get distracted with. Recently, there has been a lot of interesting distraction in my lab with people making fractals, and various other patterns, so I had a lot of fun getting involved with all that - here are some of the results.

It all started when my friend Jack, in a bout of epic procrastination, created a rotating fern-like fractal. We all spent ages gazing in wonder at it - it's mesmerising, especially once he altered it so that its acceleration changed randomly, meaning its motion was smooth but still totally unpredictable. There are videos of it here.



The problem was that this is written in C++, and uses the OpenCV software library for drawing, so isn't exactly portable to everybody's machines - and while you do get to watch the video, it's the same every time. This is where I came in, with the plan of making it available to anyone: so I re-implemented it in javascript (a browser-based scripting language). This was a bit tricky, since I'd not done any javascript in years, and that was just to flip some images around on links on websites. Fortunately, once I got the hang of drawing a line, it was reasonably straightforward to translate (except for some odd properties of javascript and the way error reporting consists of it simply not working).  My initial implementation can be seen here, from before I worked out how to have lines of different colours (I haven't bothered with the random acceleration, so motion is a bit more jerky as speed randomly changes). [Note - all these examples work in chrome, and maybe newer versions of firefox... anything else, I can't guarantee]


So, a bit of background to how this works: it's a recursive drawing algorithm, which basically means that at each stage the algorithm does something, then calls itself, and so on indefinitely (up to a certain limit). The basic step is to "draw two branches", where to draw a branch, it draws two more branches; a rough sketch of the idea is below. The interesting patterns result from the angle between subsequent branches being passed down the levels, with each level's angle being relative to the one before, which is what leads to the curling effect. It's quite interesting to play with the different parameters, such as the ratio by which the lines get shorter at each level (it turns out a scaling of 0.7 is almost always the best; yes, we tried the golden ratio).
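
To make that concrete, here's a minimal sketch of the recursive branching idea - not the actual code linked above; the canvas id, angle offsets and depth are made-up values:

    // Minimal recursive fern sketch: each branch draws a line, then spawns two
    // shorter child branches whose angles are relative to their parent.
    var canvas = document.getElementById("fern");    // assumes <canvas id="fern"> exists
    var ctx = canvas.getContext("2d");

    function drawBranch(x, y, angle, length, depth) {
      if (depth === 0) return;                       // stop at a fixed recursion depth
      var x2 = x + length * Math.cos(angle);
      var y2 = y + length * Math.sin(angle);
      ctx.beginPath();
      ctx.moveTo(x, y);
      ctx.lineTo(x2, y2);
      ctx.stroke();
      // two children, scaled by 0.7 and rotated relative to this branch
      drawBranch(x2, y2, angle - 0.5, length * 0.7, depth - 1);
      drawBranch(x2, y2, angle + 0.9, length * 0.7, depth - 1);
    }

    drawBranch(canvas.width / 2, canvas.height, -Math.PI / 2, 100, 12);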

Stacks and colours

It's a well-known fact that any recursive algorithm (one that achieves its aim by repeatedly calling itself) can be implemented instead as an iterative algorithm - that is, a more 'normal' one where the instructions are laid out entirely within one function. Obviously, it's not possible to write it all out explicitly (well, it would be very tedious), since it isn't generally known how deep to go - but this can be achieved using a structure like a stack or queue to store the future branches that need to be explored. So, basically as a programming exercise, this is what I did. Javascript isn't exactly bursting with sophisticated data structures, but I managed to implement it as a stack, where each item in the stack holds the starting position, angle, length, and depth of the current branch. It can be strange to imagine how an iterative algorithm can produce patterns like this - but basically, the first branch is put onto the stack; then at each iteration of a loop, the last thing on the stack is read off, drawn, and replaced with its two child branches, and so on until the deepest allowed level (see the sketch below). Sometimes iterative algorithms can be (much) more efficient than their recursive equivalents - that might be the case here, but I've not checked.
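
Roughly, the loop looks something like this - again an illustrative sketch with made-up angle and scale values, using a plain array as the stack:

    // Iterative version: an array of pending branches acts as the stack.
    function drawIterative(ctx, startX, startY, maxDepth) {
      var stack = [{ x: startX, y: startY, angle: -Math.PI / 2, length: 100, depth: 0 }];
      while (stack.length > 0) {
        var b = stack.pop();                          // take the most recently added branch
        var x2 = b.x + b.length * Math.cos(b.angle);
        var y2 = b.y + b.length * Math.sin(b.angle);
        ctx.beginPath();
        ctx.moveTo(b.x, b.y);
        ctx.lineTo(x2, y2);
        ctx.stroke();
        if (b.depth < maxDepth) {
          // replace this branch with its two children, to be drawn later
          stack.push({ x: x2, y: y2, angle: b.angle - 0.5, length: b.length * 0.7, depth: b.depth + 1 });
          stack.push({ x: x2, y: y2, angle: b.angle + 0.9, length: b.length * 0.7, depth: b.depth + 1 });
        }
      }
    }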

In re-writing it I played with a few other things too. Now the speeds of the two main branches vary according to sinusoidal functions, so they reliably speed up, slow down, and change direction. This means there's actually no randomness at all, and yet complex and seemingly unpredictable patterns emerge, and you'd have to watch for a long time until it repeats. Mmmm, complexity. The main thing that makes this version look nicer is the colours: the colour of each branch is now determined by its angle, by mapping it to a hue wheel - a sketch of the mapping is below. (Briefly: colour can be represented by hue, saturation, and lightness, where hue is a value that cycles from red, through yellow, green etc., finally via purple back to red. Saturation controls how intense the colour is, and lightness is where it sits between black and full colour.)
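
Something along these lines, say (my own sketch - the real code may well convert hue to RGB by hand rather than using CSS hsl()):

    // Map a branch angle onto the hue wheel: 0..2*pi radians -> 0..360 degrees of hue.
    function angleToColour(angle) {
      var twoPi = 2 * Math.PI;
      var wrapped = ((angle % twoPi) + twoPi) % twoPi;   // wrap into [0, 2*pi)
      var hue = wrapped * 360 / twoPi;
      return "hsl(" + hue.toFixed(0) + ", 100%, 50%)";   // full saturation, mid lightness
    }
    // usage: ctx.strokeStyle = angleToColour(branchAngle);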

The great thing about writing in javascript is that not only can anyone with a suitable browser see these, but you can also see the code (right click, view source). So feel free to copy it and have a play around; it may not be the most elegantly written piece of code but hopefully should be reasonably self-explanatory.

Clocks

At some point someone mentioned that the two branches rotating around each other at different speeds are a bit like the hands of a clock... and this gave me an idea. Could it be turned into a clock? Well, first it would need three hands, to do it properly and actually see the motion - fortunately this is pretty easy, as the number of branches at each level was just a parameter in the original code. Generally more than two branches just looked a mess though, so for the clock version I altered the length scaling to thin it out. Secondly, the hands on a clock are different lengths and different widths - this required a few more parameters to be passed down the levels, to change the length and width of the three branches independently (and to make the lines thinner as they get shorter, lest it become a big mess of overlapping lines). That, plus the hour-ticks, and the Fractal Clock is born:



It gets the current time via a javascript function, so it should always show the correct time wherever you are. The motion of each hand is smoothed by getting the exact, fractional number of hours, minutes and seconds, otherwise the hands tick (you can turn this on/off to see it, but I think the smooth version is nicer) - see the sketch below. You can read the clock pretty much as a normal clock, by basically just looking at the three main arms coming from the centre (the fractal ends don't point to anything in particular) - but if you look closely, at each junction there is another mini-clock, at some other orientation, and so on down the branches.
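
The smoothing amounts to something like this (my own sketch of the idea, not the code behind the linked page):

    // Fractional time -> smoothly varying hand angles (radians, 0 = straight up).
    function handAngles() {
      var now = new Date();
      var seconds = now.getSeconds() + now.getMilliseconds() / 1000;
      var minutes = now.getMinutes() + seconds / 60;
      var hours = (now.getHours() % 12) + minutes / 60;
      return {
        hour: (hours / 12) * 2 * Math.PI,     // one full turn every 12 hours
        minute: (minutes / 60) * 2 * Math.PI, // one full turn every hour
        second: (seconds / 60) * 2 * Math.PI  // one full turn every minute
      };
    }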

Again, in this clock, the colours are determined by the angles, where red is up (so the wheel shown above is rotated, but that's because having red=0 at the top makes sense on a clock face). In fact, this means you could tell the time simply from the colours of the three main hands, without bothering to look at their orientation... and this leads inevitably to this, the Hue Clock:


Yes, you can really tell the time from it. Imagine the hue wheel - red corresponds to straight up, so if the hour panel (the left-most) is red, it means it's midnight (or mid day, it's a 12 hour clock). If the second panel is greeny-yellow, it's quarter past the hour; and the third panel goes all the way around the hue wheel once per minute. Now we can finally tell the time with ease! (As I write this it's lawn green past cyan - that image was taken at orange past cyan, earlier in the hour)

Where next for the clock? Is this as minimalist as it can be? Not quite... see, the 'coordinates' of the current time (hour, minute, second) require three variables. A colour, in the red-green-blue colour space (which is what's used to specify all the colours), needs three variables... you can see where this is going. So here it is, the RGB clock.

Admittedly, it's slightly harder to read the time off it. Basically, the redder it is, the closer it is to midnight (or mid day); the green component gradually increases over the hour, flipping back to no green at :00 (unlike hue, channel brightness is not cyclic); and if you watch it over the course of a minute, the blue component will gradually increase, before going back to 0 at the start of the next minute.
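
For illustration, computing such a colour might look something like this (a guess at the approach, not the actual code behind the page):

    // RGB clock colour: each channel encodes how far through its cycle we are.
    function rgbClockColour() {
      var now = new Date();
      var r = Math.floor(((now.getHours() % 12) / 12) * 255); // position within 12 hours
      var g = Math.floor((now.getMinutes() / 60) * 255);      // position within the hour
      var b = Math.floor((now.getSeconds() / 60) * 255);      // position within the minute
      return "rgb(" + r + "," + g + "," + b + ")";
    }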

Now that I'd made pretty much the most minimalist clock possible, I moved back to other uses of the fractal. Well, it could have just been a single pixel, but that wouldn't look so good. I probably should have just made it the web page background and got rid of the canvas tag... I'm sure someone else can do that if they want.

The third dimension

The thing that seemed obvious to me with the original fractal is that it's only in two dimensions... and I do like to generalise things. So, my intention since the beginning was to adapt it to 3D. For this, I needed some sort of 3D display environment, but didn't want to give up javascript, which doesn't have any sort of built-in 3D environment. So I built one.

This can be done fairly easily because all you actually need to make a 3D environment is the ability to project a point from some 3D coordinate system to 2D coordinates on the screen. That essentially defines the virtual camera with which you view the world, so you can see things in proper perspective, and freely move around the environment, just by changing the camera parameters. Dealing with the perspective projection of a pin-hole camera is something I've dealt with extensively in my work (it's a rather fundamental concept in computer vision), so now I just needed to use it the other way around, to create a 3D environment as seen on a 2D screen - a minimal sketch of the projection is below. I'd actually done something quite similar before when I implemented a 3D environment with SDL (a basic 2D drawing library), using only a line drawing routine, in pure C (basically to see if I could) - so repeating this in javascript only took a couple of evenings. The biggest job when writing the SDL one was writing a simple matrix library from scratch, to do all the necessary matrix-vector multiplications... but I realised this was not strictly necessary, since it was quicker just to hard-code the relevant equations for projection and camera motion.
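
The core of that projection is just a perspective divide, something like this (a bare-bones sketch assuming a camera at the origin looking down the z axis; the real version also has to account for camera position and rotation):

    // Pin-hole projection: 3D point -> 2D screen coordinates.
    function project(p, focalLength, centreX, centreY) {
      if (p.z <= 0) return null;                 // behind the camera: don't draw it
      return {
        x: centreX + focalLength * p.x / p.z,
        y: centreY + focalLength * p.y / p.z
      };
    }
    // a 3D line is drawn by projecting both endpoints and joining them in 2D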

(Yes, I know I could have used VRML or something, or found some extension for javascript that renders in 3D, or at least matrix algebra; or used some 3D Java in an applet - but I like a challenge, and most of the fun was in proving I could make a 3D environment using only a line-drawing function and no matrix library in pure javascript... that's just the sort of thing I do).

Here's the 3D environment. Basically you can walk around the grid, see some simple, static fractal trees, and play with some particle effects (easy and fun!).


It's quite tempting to turn this into some sort of 3D shooter. Not now.

The simplest thing is to simply put the 2D fractal into some plane in this space - which is quite nice in itself, since then you can walk around and behind it and view it edge-on (this is still possible in the full implementation below).

The next step was to generalise the fern fractal to 3D. This actually required only a few changes: the main one is that the 2D version needs one joint angle per branch, while in 3D it needs two (angle in the plane, and angle out of the plane, is one way to think of it). These two angles define a direction in 3D, which, along with a length, specifies the start and end of each line in 3D space - a sketch of this is below. This means that the fern is now fully in 3D, branching out into the volume of a (hemi)sphere, not just a circle. It looks a lot more complicated (it's still completely deterministic), and it's harder to understand what's happening from one view, so it helps to walk around. Controls are explained on the web page, allowing you to change the colour and speed (since there are two angles, one maps to hue, the other to saturation, giving each 3D line a unique colour).
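
Converting the two joint angles and a length into a 3D offset is essentially a spherical-to-Cartesian conversion, roughly like this (again my own sketch, with hypothetical names):

    // In-plane angle theta, out-of-plane angle phi, and a length -> 3D offset vector.
    function branchVector(theta, phi, length) {
      return {
        x: length * Math.cos(phi) * Math.cos(theta),
        y: length * Math.cos(phi) * Math.sin(theta),
        z: length * Math.sin(phi)
      };
    }
    // the branch's end point is its start point plus this offset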


So this is the kind of fun stuff that's been going on in our lab for the last few weeks (I love my lab). What's interesting is that after I ported it to Javascript, we each picked a language to see if we could write it in that - a C# version was done (that looked very nice); then Visual Basic for Applications (embedded in Microsoft Excel!) was surprising but actually looks great; there was even a cut-down Matlab implementation. It would be fun to try it in something a bit more unusual, maybe Lisp, Haskell or assembler. It's a nice way to learn a language, a sort of visual Hello World - I've learned a ton of javascript by doing this, much of which I'll probably never use again.

That's basically it for the fractals and clocks - I think I've taken them as far as I want to for now (I do actually have work, see). Play with the code and see what you can do. People running Windows can set these as the desktop background (as the active desktop) - having the fractal or hue clocks as the background would be kind of awesome, if a bit CPU hungry. Android coders might want to make a Hue Clock widget for a phone. It should also be possible to turn these into screensavers, using OpenGL in Linux (my first thought on seeing Jack's fractal was "screensaver!", but I don't have the time to get into it right now). Then there's the potential for other types of fractal, or even more dimensions. I'd be interested to know if anyone gets some other adaptations running...

Thursday 14 June 2012

I Prefer Reality (Part 2)


Further joy at religious versions of reality being untrue


A month or two ago, I wrote about why I am so very glad that the versions of reality described by various religions are almost certainly untrue. I spoke about the tyrannical vision of life that Christianity, for example, teaches, and how I am glad to live in a world where this is almost definitely not the case.

But for all the suffering that humans might endure, on the larger scale of the universe this is not really of much significance. What are the lives and deaths of a few score billion people, in the vast cold depths of the universe? But most religions not only have a position on what humans can and should do, the meaning of their lives, and the path to their ultimate destiny: they also have quite specific ideas about how the universe as a whole works. In a way, I think it would be an even greater shame if these were actually true, since not only would humanity be living under the yoke of an all-seeing slave driver / condemned to karma-driven reincarnation / punished infinitely for finite crimes (and so on), but the universe itself would not be the wondrous thing we perceive it to be.


Take, for example, the Christian creation myth. That the planet was created in about a week, and populated with all flora, fauna, and mankind in pretty much their current forms almost immediately, has a superficial appeal. As creation myths go, it isn't so bad, and implies a touch of artistry on the part of the God who created it all (after all, it took a whole six days' work). But on further consideration, I find this to be so, so dull. The whole thing, as a working system, arrived in one go? What about the coalescing of a cosmic dust cloud, the ages of fiery volcanism, the majestic drift and shuddering collision of continents, the cycles of ice and fire, that made up the natural history of the Earth as science tells it? That is what we stand to lose if the Christian creation myth is true, and I would see that as a big let-down. Four billion years of geological chaos is so much more entertaining than a few days of divine construction.


And it's not just the Earth's marvellous history that we'd lose: remember, that particular creation myth - and all the others, for that matter - generally speaks about the creation of the whole universe. The big bang, cosmic inflation, reionisation, the collision of multidimensional membranes, the birth and death of stars... all these wonders of science would turn out to be an illusion, placed into the cosmos such that they just happen to look as though they tell a rich and awe-inspiring history, for some reason. That may be an over-literal interpretation of these myths, but even so - if the universe did evolve as we suppose it did, but either under the direction of, or set in motion by, some God, a pantheon of gods born of chaos, a supernatural force, divine congress, or whatever, that would still remove a lot of the fun from trying to figure it out, and the wonder at how it all came to be.

by Zen Pencils
The question of our own origins is another matter in which the scientific truth far outweighs any religious claim in terms of wonder and inspiration. The evolutionary origin of humanity is something which still drags on as a point of contention amongst the more literalistic religious believers, as countless debates and school-board hearings sadly demonstrate. One of the prime motivations, it seems, behind those who would deny our descent from ape ancestors is the unwillingness to associate humans with being mere animals, a disgust at what it says about our un-divine natures. I see it exactly the other way around: evolution by natural selection is a far more empowering way to have come into being than to be merely placed, fully formed, in a garden one sunny Saturday morning. If we are to believe we were created pure and perfect, then presumably we must be nothing other than a disappointment, ever declining and sinning and failing to achieve our god-given potential - certainly not a very inspiring view of life (but one that sadly seems prevalent in a lot of religious thinking). The alternative - that we started as apes eking out a living by whatever means from a merciless environment, and gradually developed into the thinking, feeling, rational beings we are, able to achieve great feats of engineering and exploration, to extend our own lives to many times their natural length and live in comfort undreamed of by any other creature - is surely the more pleasing interpretation of our existence, and one which I am very happy to find is much more likely to be true.

Lest it seem like I'm going after Christianity in particular (though as I said before, that's just because it's the one I know most about), I'll emphasise the point that it doesn't really matter which mythical view of reality one considers: none of them match the real physical universe for sheer awesomeness. No religious sect ever dreamed up anything as majestic as spirals of stars, hundreds of quadrillions of miles wide, spinning towards each other over uncountable billions of years, to eventually collide and pass through each other. No creation myth is as impressive as an exploding star, spewing heavy elements formed in the fires of nuclear fusion out into space, to gradually coalesce into balls of rock and gas capable of sustaining life. No fantastical account of human beginnings is as empowering and satisfying as a billion-year struggle from single-celled life, through countless different forms, pitted against all the relentless forces of nature and the jaws of predators, and against all the odds, to survive, to produce beings capable not just of raising themselves out of the dust but of actively transcending their own biology, and of comprehending the whole dizzying journey.


That's why I get somewhat annoyed at religious believers who ignore such majesty; who insist that, as wondrous as it is, it was produced by one God for one purpose; or who deny it even happened: because what they are missing is amazing. Similarly, for any semi-spiritual, New-Age philosophy that insists there's 'something' out there, that there are things 'beyond science' or facts we are 'not meant to know'... well yes, of course there's something out there: there's a whole universe of supermassive black holes and unimaginably small strings, of nebulae bigger than a thousand suns and wasps smaller than a pinhead. And isn't this enough - just this? Why invoke ghosts and spirits and gods and demons, when we have the mystery of consciousness, the ingenuity of evolution, and the search for life elsewhere to occupy us? Any alternative fantasies that people invoke serve only to detract from the true nature of reality, and are a poor substitute for the wonders that are really there to be found.

That's why I'm happy with the real, physical universe we have, and why I see it not as some barren, bleak, godless void, but as a playground of rich wonders the like of which we could never have imagined. And why, as we begin to comprehend how the whole thing works, I am immensely glad that it is not a six-thousand-year-old project of an omnipotent tyrant, a result of a snake eating itself, or produced by a melody over dark water: but thirteen billion years of chaos and confusion and matter and exploding stars and dinosaurs and pulsars and drifting continents and giant sloths and hot Jupiters and the Casimir effect and common descent and redshift and quantum entanglement and emergent consciousness. This is much more fun.

Wednesday 13 June 2012

Untangling the media



Reflections on the state of traditional, online, and social media, and on attempts to understand them


It is quite clear that the media has a large influence on how we perceive the world. Indeed, almost all our knowledge of reality comes not from first-hand experience, but from other sources such as TV, newspapers, film and the internet. The massive increase in the availability of such information over the past century has made us much better informed: we can now get a reasonable idea of events happening on the other side of the planet in almost real time - something that was completely inconceivable not long ago.

As we consume more and more media, it becomes not just a way of finding out about what's happening elsewhere: it is reality. And since this reality is based on second-hand information, fed to us by third parties with their own aims and agendas, we can easily get a false view of the world if the information is inaccurate or biased. This need not be deliberate deception, or even a conscious manipulation of facts, but simply a consequence of providing information in the way it is most easily absorbed. For example, the portrayal of violence on television (not just in news but also in films and other fiction) is known to be far in excess of its actual incidence. This has various causes - sensationalism, the fact that violence shocks and sells, and that it's what viewers have come to expect - and people then genuinely believe their environment to be a more dangerous, unpleasant place than it actually is (this is called mean world syndrome). Similarly, people's views on topics such as climate change and the state of the economy are often incorrect, grounded in misleading and biased reporting.

These issues were discussed at a lecture I attended a few weeks ago, hosted by the University of Bristol. Justin Lewis, a professor of communication at Cardiff University, spoke about his work studying people's consumption of various media, and how this affects their beliefs about the world. Among other things, he has studied people's reactions to climate change, and how, even though this is one of the most dangerous and urgent problems humanity faces, we are in general remarkably relaxed about the whole thing.

It is interesting to note that, while blatantly biased reporting obviously plays a role in all this, one of the ways in which people come to believe falsehoods is by erroneously imagining a connection between frequently co-occurring facts. For example, there is a reasonably widespread belief that the Earth is warming because of the hole in the ozone layer; this is untrue, but the two are so often mentioned together that people get the false impression they must be related, and even construct an illusory causal relationship between them. Another example is the disturbingly pervasive belief in the US (and apparently in the UK too) that Saddam Hussein had a role in the 9/11 terrorist attacks, in league with Bin Laden: this is nothing but paranoia, but it is very likely that the two figures were frequently mentioned in the same news items, leaving people to inadvertently come to their own specious conclusions. Such unintended interactions between unrelated pieces of information would be hard to predict and track, and harder still to counter; they are a property of an incredibly complex system of information, over which individual people or organisations have a diminishing amount of control.

An interesting point made by Lewis was that, in this age of unparalleled access to information, where almost the entire knowledge of the species is available at our fingertips, people still tend to seek out what is familiar to them. Rather than finding new perspectives, people go for that which confirms what they already know and fits with their current world view. This is quite understandable, since we generally prefer the familiar; but it encourages media providers to supply what they know their public wants: they make sure people hear what they expect to hear, packaging stories in the way that is most appealing. (This is clearly evident in newspapers such as the UK's Daily Mail, well known for giving a hysterical, fear-mongering, xenophobic slant to everything, as well as in traditionally more 'upmarket' publications such as the Guardian, whose left-leaning stance on issues such as immigration and the environment can usually be expected to chime with its readers' expectations.) The unfortunate consequence is that people's view of reality comes from the media they choose to consume, which is produced by organisations that sell them what they want to hear - a self-perpetuating cycle of misinformation and distortion. This is rather worrying, not least because politics and the media are so often intertwined, with policies aimed at appeasing papers and their readers - but also because such feedback loops and cycles of obfuscation become increasingly hard to comprehend.

The attempt to understand the way media is generated and consumed is the research interest of the second speaker at the lecture, Nello Cristianini, who followed Lewis's talk with an overview of the work carried out by his research group here at Bristol. The vast output of all the world's media is far beyond the capability of any human to comprehend, but using software which they have developed, it is becoming possible to automatically analyse news reports, from a variety of outlets, in an attempt to understand the constant flow of information. Obtaining, processing and storing the data is a feat of engineering in itself, requiring some rather substantial computing resources; but the real challenge is to autonomously understand the content of these articles.

One interesting area of this work is analysing which news articles become popular (defined as those that make an outlet's 'most popular' list), and attempting to predict this. It seems that it is not possible to simply classify, for a given article, whether it will be popular or not (understandably, since what is popular changes from day to day); but given a set of articles from the same day, it is possible to predict quite accurately which will be more popular than the others. One thing that becomes evident from such analysis is that stories about people and celebrities tend to be more popular; and even amongst the readers of outlets specialising in topics such as politics and business, the articles people prefer are of a more tabloid variety. This is perhaps not too surprising, and fits with the idea mentioned above that people will tend to seek out what interests them the most. I can imagine this kind of research being used by content providers to more accurately target their consumers with more of what sells well; but also as a useful tool for understanding how people consume information, and the most effective way of disseminating news stories so that people remain interested.
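To make the within-day ranking idea a little more concrete, here is a minimal sketch of how such a pairwise approach might look. Everything in it - the toy headlines, the click counts, the bag-of-words features and the logistic regression - is my own illustrative assumption, not the actual system described at the lecture:

```python
# A minimal sketch of within-day pairwise popularity ranking.
# Data, features and model are illustrative assumptions only.
from itertools import combinations

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus: (day, headline text, click count). Entirely made up.
articles = [
    (1, "celebrity couple announce surprise wedding", 900),
    (1, "central bank leaves interest rates unchanged", 150),
    (1, "football star scores hat-trick in derby win", 700),
    (2, "minister resigns over expenses scandal", 400),
    (2, "pop singer cancels tour after illness", 800),
    (2, "report warns of slow economic growth", 100),
]

vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform([text for _, text, _ in articles]).toarray()

# Build training pairs only from articles published on the same day:
# the feature is the difference of the two articles' vectors, and the
# label says whether the first article was the more popular one.
pair_features, pair_labels = [], []
for i, j in combinations(range(len(articles)), 2):
    if articles[i][0] != articles[j][0]:
        continue
    diff = X[i] - X[j]
    first_wins = articles[i][2] > articles[j][2]
    pair_features.append(diff)
    pair_labels.append(1 if first_wins else 0)
    # Add the mirrored pair so the classifier sees both orderings.
    pair_features.append(-diff)
    pair_labels.append(0 if first_wins else 1)

model = LogisticRegression().fit(np.array(pair_features), pair_labels)

# Rank two hypothetical same-day articles against each other.
new = vectoriser.transform([
    "royal baby photos released to the press",
    "committee publishes pension reform findings",
]).toarray()
p = model.predict_proba([new[0] - new[1]])[0, 1]
print(f"P(first article more popular than second) = {p:.2f}")
```

The appeal of the pairwise trick is that it sidesteps the day-to-day drift in what counts as 'popular': the model only ever has to say which of two same-day articles is likely to come out ahead.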

However, one thing which is important to realise about news stories is that they are not, in fact, "stories". Humans are a storytelling species, and our voracious consumption of novels, film, soap operas, musicals, opera, comic books, and theatre goes to show how engaging a well-told tale can be. Yet, as we discussed after the lecture (I was fortunate enough to have a drink and a chat with the two speakers and a few of their students), news media is not presented this way. There is a headline, which essentially states the main, salient point; there is a preamble outlining roughly what happened; and then there is the main body of the article, setting out the finer detail and background to the event. Maybe it's this that leaves people uninterested in weightier topics, and drives them to seek items which are easier to relate to personal narratives, or which have a more direct relevance to their own lives. It's interesting that news coverage is almost the only genre structured this way: in film, music and computer games, there is always the tension caused by not quite knowing what will happen next. Newspapers never built that kind of suspense; radio and television news followed the same model, with short snappy headlines followed by a progressive spiral of illumination; and online news outlets naturally fell into the same format.

I must admit though, I do have some reservations about this hypothesis, and can't quite imagine actual news presented in an exciting, narrative-driven fashion, with key facts being withheld until the gripping conclusion; news written in this way would, I suspect, be infuriating.

In this discussion of how news stories aren't stories at all, another obvious example came to mind of this hierarchical approach to disseminating information: academic papers. Here, even more than in news articles, there is a concise, informative title, followed by an abstract quickly summarising the purpose, methods and conclusions of a piece of work, followed by a lengthier account of the technical details. As my supervisor used to tell me in my earlier days: this is not a thriller, don't withhold information - tell them what they need to know and give the detail later. This may well be a more efficient means of describing research - though, as Justin Lewis lamented, few people actually read these papers. A dry, no-suspense delivery style might have something to do with it (then again, limited attention spans are exactly why we structure papers like this - we have abstracts because we know most readers won't get through the whole thing).

But this pessimism about the homogeneity of individuals' media consumption may not be so well deserved. After all, some stories, articles and videos become immensely popular and spread across the whole internet with virtually no top-down control, exposing people to viewpoints they might not have actively sought - that is, they go viral. Naturally enough, our conversation turned to memes - ideas considered as replicators, spreading through a population simply because they are good at spreading. Perhaps this could be a way of spreading information in an easy-to-consume format: produce many variants of a news story, release them, and the ones which are better at spreading will reach the widest audience. Exploiting the new-found knowledge of which kinds of stories people want to read could make such an approach more likely to succeed (indeed, the secret to creating viral, self-perpetuating content has long been a goal of advertising and marketing). Whether packaging news in viral memes and spreading them - essentially tricking people into reading what an editor thinks they should be reading - is a good idea, I'm not so sure; then again, packaging content in an easy-to-access, relatable format is exactly what newspapers, TV and the internet have always done. (I should emphasise that I'm not talking about memes in the lolcats / face-with-caption sense, but rather about constructing news stories in a way that makes people want to share them.)
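The underlying replicator logic is simple enough to caricature in a few lines of code. The sketch below is purely illustrative - the share probabilities, audience sizes and branching model are all made-up assumptions of mine, not anything from the research discussed above - but it shows why a small difference in 'shareability' can separate a variant that fizzles out from one that keeps spreading:

```python
# Toy simulation of "release several variants, let the best spreader win".
# All numbers are invented purely for illustration.
import random

random.seed(1)

def simulate_spread(share_probability, seed_readers=100,
                    reach_per_share=3, generations=8):
    """Crude branching model: each reader shares the story with the given
    probability, and each share reaches `reach_per_share` new readers."""
    total = current = seed_readers
    for _ in range(generations):
        shares = sum(1 for _ in range(current)
                     if random.random() < share_probability)
        current = shares * reach_per_share
        total += current
        if current == 0:
            break
    return total

variants = {
    "dry factual headline": 0.15,
    "personal-angle headline": 0.30,
    "celebrity-angle headline": 0.40,
}

for name, p in variants.items():
    print(f"{name}: reached roughly {simulate_spread(p)} readers")
```

In this crude model the only thing that matters is whether each reader passes the story on to more than one new reader on average: above that threshold a variant keeps going, below it it soon dies out.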

Of course, consumption of top-down media is only part of the story. User-generated content, such as blogs and social networking, is becoming a significant factor in how people react to and find out about the world; and as complicated as the older media are, this promises to be another level of complexity again. But here too, recent research is making some headway in untangling the web of communication, by looking at Twitter data from the UK over a period of a few years. This consisted of several million tweets, analysed by examining the words they contain to identify the sentiment each tweet expresses. While one individual tweet is not particularly meaningful, taken together clear patterns begin to emerge, and they correlate strongly with real-world events. For example, the 'joy signal' - a measure of how happy people are on average, according to their tweets - reliably peaks each Christmas, as does the 'fear signal' at Hallowe'en.

So far so predictable. But a really interesting thing happened after the spending review announcement in October 2010, when the first wave of cuts was set out by the new Conservative-led coalition: the national level of fear shot up, as people immediately began to worry about losing services, benefits or livelihoods. What's remarkable is that the fear level, after an initial spike, did not go back down - it remained elevated until the end of the sample period, deviating noisily from what looked like a new baseline. A similar thing happened in the summer of 2011, when riots broke out in many UK cities and, for a while, the anger signal dominated. This gradually diminished as the rioting died out (back to the baseline of fear over continued cuts). The researchers, of course, were quick to downplay any suggestion that this could be used to predict future riots.

Whether the sample period was actually long enough to show that this is a long-term change, or whether such drastic shifts in average mood are common, is not yet clear. Either way, such analysis promises to be a useful tool in understanding not only how people are using their own media-generating abilities, but how they react to wider events. I am reminded of a plan by the government a year or so ago to measure the nation's happiness and use it as an indicator of prosperity (which arguably makes as much sense as other somewhat arbitrary measures like GDP) - this would be a useful alternative, requiring no explicit surveying and giving a more immediate picture of what's going on. That this kind of information can be gleaned from tweets alone, using a fairly simple model of content understanding, is quite encouraging, given the wealth of data that more sophisticated tools could exploit.
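To give a flavour of what a 'fairly simple model of content understanding' might look like, here is a toy sketch of lexicon-based mood tracking. The word lists and tweets below are tiny, hand-made stand-ins of my own - the actual study used far larger emotion lexicons and millions of tweets - but the basic idea of counting emotion words per day is similar:

```python
# Minimal sketch of lexicon-based mood tracking over tweets.
# Word lists and tweets are tiny illustrative stand-ins.
import re
from collections import Counter, defaultdict

JOY_WORDS = {"happy", "joy", "love", "merry", "excited", "wonderful"}
FEAR_WORDS = {"afraid", "scared", "worried", "fear", "anxious", "cuts"}

tweets = [
    ("2010-10-20", "worried about what these cuts mean for my job"),
    ("2010-10-20", "genuinely scared about losing the local library"),
    ("2010-12-25", "merry christmas everyone, so happy today!"),
    ("2010-12-25", "love spending the day with family"),
]

def tokenise(text):
    return re.findall(r"[a-z']+", text.lower())

# For each day, the score is the fraction of words in each emotion list.
daily = defaultdict(lambda: Counter(words=0, joy=0, fear=0))
for day, text in tweets:
    for word in tokenise(text):
        daily[day]["words"] += 1
        daily[day]["joy"] += word in JOY_WORDS
        daily[day]["fear"] += word in FEAR_WORDS

for day, c in sorted(daily.items()):
    print(f"{day}: joy={c['joy'] / c['words']:.2f}  fear={c['fear'] / c['words']:.2f}")
```

Plotted over a few years of data, the day-by-day version of these scores is exactly the kind of signal in which a sustained shift in the baseline, like the one after the 2010 cuts, would show up.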

I think that, despite the concern over what people are reading and how they see the world, or over the effect that new media may be having on our perceptions, there are grounds for optimism. On the one hand, it seems inevitable that people will gravitate towards the media they prefer, in turn reinforcing their entrenched view of reality - and with a better model of this, ways to target them more precisely will emerge. But with this knowledge will also come the possibility of disentangling the complex web of media interaction: figuring out why people seek the information they do, and how to use this for the improved dissemination of information. User-generated content will only become more important in how people interact with the world - but rather than being a cause for concern, it can be a valuable source of information, a way to directly measure how people perceive reality and how they respond to events. The media as it stands is a vast and complex system which, much like the world economy, is beyond the comprehension of any individual human: but with more sophisticated tools in data mining and artificial intelligence, it may be possible to get a better understanding of how it all works, and how we should deal with it.

[Video: Justin Lewis speaking on climate change and the pervasive advertising industry]

[Video: interview with Nello Cristianini on BBC News about the Twitter sentiment analysis project]


Sunday 10 June 2012

Uncountably many

Reflections on the incomprehensibly large number of people that exist 


I travel around on trains quite a lot. And one thing that always strikes me is the sheer number of houses I see as I flit by - hundreds, thousands of them, in every town and city. Not that there is an inappropriate number - I'm sure there are approximately the right number of houses for the people who need to live in them. But it's unsettling.

Because in every one of those houses, there will be people. This is not surprising: there are tens of millions of people in the country, and they have to live somewhere. But the number of people is merely a statistic, as easily ignored as the national debt or a famine death count; we are accustomed to hearing large numbers and to ignoring them, to not seeing their true significance.

[Image: not actually taken from a train, obviously]


The thing is, every one of those houses contains unique, individual people, each with their own lives, a full back story, an identity, hopes for the future. To glimpse briefly one house, its back garden open to the train tracks and strewn with children's toys, is to realise that it has real, live inhabitants, as real as myself or anyone I know. Theirs will be a house full of possessions, items of varying utility and consequence, the detritus that gathers around human life. I think of my own house, filled with so many objects, each with its own meaning, a collection of memories accumulated over decades - this seems normal; it is right that my house should be so full of personal history... and yet so is the house of this unknown, unseen family, and every other: rich histories that make up real people's lives, yet totally unknown. This is what's difficult to grasp - the enormous quantity of stuff that exists out there, meaningful to some and unknown to the rest.

An instant later, the house is gone, disappearing into the distance as the train speeds on, to be replaced by more, similar yet still unique, innumerably many stretching beyond the horizon. It's strange to think I am not likely to look upon it again, to notice it amongst all the others; and that its inhabitants I will never meet, and know of them only by implication of the building they inhabit. A piece of momentary scenery, hiding whole existences utterly unknown, to me and almost everyone else, but lived out as fully and purposefully as any other.

That, I find, is what's difficult to fully comprehend. The physical quantity of brick and steel that makes up houses is itself vast, but quantifiable; the gargantuan amount of material in these millions of homes is all just so much inert matter. But for so, so many living, breathing, individual humans to inhabit them seems unbelievable, almost too far-fetched to be true. Given the immense (and largely unremembered) detail that makes up my own life, can it really be possible that such complexity is replicated millions, billions, of times?


This is why it's so hard to understand the meaning of large numbers of people. Thousands die every day from preventable causes, but a nation's heart is captured by stories of individuals, personal narratives to which we can easily relate; an incongruity that makes tackling humanity's larger problems so difficult. The world's population has reached seven billion - but what is that, other than just a billion more than it was last time we checked? To try to think of them all as people is impossible - not only because we can't really visualise what a one followed by nine noughts actually represents, but because it seems so far beyond imagining that this many individual lives could actually be happening, right here, right now, as real and detailed as our own. If truly contemplating the existence of all the people implied by my view from the train is daunting, remembering that they are a tiny, insignificant fraction of the humanity that's out there is dizzying indeed. I might just look away from the window for a while.