The genius of making a little go a long way

Visiting Illuminating India at London’s Science Museum for New Scientist, 10 October 2017

One can taste the boosterism in the air at London’s Science Museum as it introduces its two-gallery exhibition, Illuminating India.

There is a cafe serving excellent Indian street food. Someone next to me used the word “Commonwealth” without irony. Would there have been such a spirit without Brexit? Probably not: this is a show about the genius of another country that is also, quite plainly, a projection of Britain’s own global aspirations. Any historian of Anglo-Indian relations will give a sardonic smile at this.

When you visit (and you should), try to look around the smaller, artefacts-driven gallery first.

This room tells the stories of Indian science – stories plural because there can never be one, linear account of how such dissimilar and contesting cultures struggled and more or less succeeded in understanding and exploiting a space of such extraordinary complexity.

Naturally, since India has a past to boast of, pride of place goes to its indigenous cultures. It was the Indus valley civilisation, after all, whose peoples fashioned standardised weights around 4000 years ago: items that indicate high levels of arithmetical literacy, communication and trade.

And there are reconstructions of Ayurvedic surgical instruments described in records dating back to around 500 BC. Also on show is an 1,800-year-old document containing the first known use of the symbol zero. Wonderfully, radiocarbon dating pushed the document’s age back by 500 years just before the exhibition opened.

It is a measure of the wisdom of the curators that such an illustrious past isn’t allowed to overshadow India’s more recent achievements. For example, Jagadish Chandra Bose’s early-20th-century crescograph, designed to observe plant growth at a magnification of 10,000 times, reminds us why he is often called the father of modern Indian science.

Another winning object is Chandrasekhara Venkata Raman’s spectrometer. In 1930 Raman became the first Indian to win a Nobel prize in the sciences, taking the prize for physics.

And what of that other great empire far to the north? Well, there is a map of George Everest’s career-defining Great Trigonometrical Survey of India – the teamwork of 70 years distilled on a single, meticulously drawn map. And nearby are details of a recent collaboration between Surrey Satellite Technology in the UK and the Indian Space Research Organisation on the Earth-surveying NovaSAR satellite.

Some of the deeper, darker questions about Anglo-Indian relations are posed in the second, photographic half of the exhibition.

There, the anthropometric photographs of Maurice Portman make a depressingly silly impression next to the respectful, revealing and entirely unlicentious photographs Ram Singh took of the women of his own harem: powerful political players all, who shaped the country through marriage and allied treaties.

It is hard to say why the split nature of Illuminating India works as well as it does. It has something to do with the way the rooms handle political power.

India’s science, from its ancient stepwells that gathered monsoon waters to the bureaucratic and algorithmic marvel that is today’s tiffin tin-based lunch delivery system, has been driven by the complex needs of a massive population making a living.

Similarly, its doing-more-with-less style of innovation is reflected in everything from the world’s cheapest artificial leg (the Jaipur leg, made of rubber, plastic and wood) to the world’s cheapest Mars-orbiting camera.

Visitors to Illuminating India will leave thinking that technology may, after all, be making the world a better place, and that what people do is ultimately more influential than who they are.

Hot photography

Previewing an exhibition of photographs by Richard Mosse for New Scientist, 11 February 2017

Irish photographer Richard Mosse has come up with a novel way to inspire compassion for refugees. He presents them as drones might see them – as detailed heat maps, often shorn of expression, skin tone, and even clues to age and sex. Mosse’s subjects, captured in the Middle East, North Africa and Europe, don’t look back at us: the infrared camera renders their eyes as uniform black spaces.

Mosse has made a career out of repurposing photographic kit meant for military use. The images here show his subjects as seen, mostly at night, by a super-telephoto device designed for border and battlefield surveillance. Able to zoom in from 6 kilometres away, the camera anonymises them, making them strangely faceless even while their sweat, breath and sometimes blood circulation patterns are visible.

The results sit closer to the nightmarish paintings of Hieronymus Bosch than to the work of a documentary photographer. Making sense of them requires imagination and empathy: after all, this is how a smart weapon might see us.

Mosse came across his heat-mapping camera via a friend who worked on the BBC series Planet Earth. Legally classified as an advanced weapons system, the device is unwieldy and – with no user interface or handbook – difficult to use. But, working with cinematographer Trevor Tweeten, Mosse has managed to use it to make a 52-minute video. Incoming will wrap itself around visitors to the Curve Gallery at the Barbican arts centre in London from 15 February until 23 April.

Stanisław Lem: The man with the future inside him


From the 1950s onwards, science fiction writer Stanisław Lem fired out prescient explorations of our present and far beyond. His vision remains unparalleled.
For New Scientist, 16 November 2016

“POSTED everywhere on street corners, the idiot irresponsibles twitter supersonic approval, repeating slogans, giggling, dancing…” So it goes in William Burroughs’s novel The Soft Machine (1961). Did he predict social media? If so, he joins a large and mostly deplorable crowd of lucky guessers. Did you know that in Robert Heinlein’s 1948 story Space Cadet, he invented microwave food? Do you care?

There’s more to futurology than guesswork, of course, and not all predictions are facile. Writing in the 1950s, Ray Bradbury predicted earbud headphones and elevator muzak, and foresaw the creeping eeriness of today’s media-saturated shopping mall culture. But even Bradbury’s guesses – almost everyone’s guesses, in fact – tended to exaggerate the contemporary moment. More TV! More suburbia! Videophones and cars with no need of roads. The powerful, topical visions of writers like Frederik Pohl and Arthur C. Clarke are visions of what the world would be like if the 1950s (the 1960s, the 1970s…) went on forever.

And that is why Stanisław Lem, the Polish satirist, essayist, science fiction writer and futurologist, had no time for them. “Meaningful prediction,” he wrote, “does not lie in serving up the present larded with startling improvements or revelations in lieu of the future.” He wanted more: to grasp the human adventure in all its promise, tragedy and grandeur. He devised whole new chapters to the human story, not happy endings.

And, as far as I can tell, Lem got everything – everything – right. Less than a year before the Soviet Union and the US played their game of nuclear chicken over Cuba, he nailed the rational madness of cold-war policy in his book Memoirs Found in a Bathtub (1961). And while his contemporaries were churning out dystopias in the Orwellian mould, supposing that information would be tightly controlled in the future, Lem was conjuring with the internet (which did not then exist), and imagining futures in which important facts are carried away on a flood of falsehoods, and our civic freedoms along with them. Twenty years before the term “virtual reality” appeared, Lem was already writing about its likely educational and cultural effects. He also coined a better name for it: “phantomatics”.

The books on genetic engineering passing my desk for review this year have, at best, simply reframed ethical questions Lem set out in Summa Technologiae back in 1964 (though, shockingly, the book was not translated into English until 2013). He dreamed up all the usual nanotechnological fantasies, from spider silk space-elevator cables to catastrophic “grey goo”, decades before they entered the public consciousness. He wrote about the technological singularity – the idea that artificial superintelligence would spark runaway technological growth – before Gordon Moore had even had the chance to cook up his “law” about the exponential growth of computing power. Not every prediction was serious. Lem coined the phrase “Theory of Everything”, but only so he could point at it and laugh.

He was born on 12 September 1921 in Lwów, Poland (now Lviv in Ukraine). His abiding concern was the way people use reason as a white stick as they steer blindly through a world dominated by chance and accident. This perspective was acquired early, while he was being pressed up against a wall by the muzzle of a Nazi machine gun – just one of several narrow escapes. “The difference between life and death depended upon… whether one went to visit a friend at 1 o’clock or 20 minutes later,” he recalled.

Lem was a keen engineer and inventor – in school he dreamed up the differential gear and was disappointed to find it already existed – but his true gift lay in understanding systems. His finest childhood invention was a complete state bureaucracy, with internal passports and an impenetrable central office.

He found the world he had been born into absurd enough to power his first novel (Hospital of the Transfiguration, 1955), and might never have turned to science fiction had he not needed to leap heavily into metaphor to evade the attentions of Stalin’s literary censors. He did not become really productive until 1956, when Poland enjoyed a post-Stalinist thaw, and in the 12 years following he wrote 17 books, among them Solaris (1961), the work for which he is best known by English speakers.

Solaris is the story of a team of distraught experts in orbit around an inscrutable and apparently sentient planet, trying to come to terms with its cruel gift-giving (it insists on “resurrecting” their dead). Solaris reflects Lem’s pessimistic attitude to the search for extraterrestrial intelligence. It’s not that alien intelligences aren’t out there, Lem says, because they almost certainly are. But they won’t be our sort of intelligences. In the struggle for control over their environment they may as easily have chosen to ignore communication as respond to it; they might have decided to live in a fantastical simulation rather than take their chances any longer in the physical realm; they may have solved the problems of their existence to the point at which they can dispense with intelligence entirely; they may be stoned out of their heads. And so on ad infinitum. Because the universe is so much bigger than all of us, no matter how rigorously we test our vaunted gift of reason against it, that reason is still something we made – an artefact, a crutch. As Lem made explicit in one of his last novels, Fiasco (1986), extraterrestrial versions of reason and reasonableness may look very different to our own.

Lem understood the importance of history as no other futurologist ever has. What has been learned cannot be unlearned; certain paths, once taken, cannot be retraced. Working in the chill of the cold war, Lem feared that our violent and genocidal impulses were historically constant, while our technical capacity for destruction could only grow.

Should we find a way to survive our own urge to destruction, the challenge will be to handle our success. The more complex the social machine, the more prone it will be to malfunction. In his hard-boiled postmodern detective story The Chain of Chance (1975), Lem imagines a very near future that is crossing the brink of complexity, beyond which forms of government begin to look increasingly impotent (and yes, if we’re still counting, it’s here that he makes yet another on-the-money prediction by describing the marriage of instantly accessible media and global terrorism).

Say we make it. Say we become the masters of the universe, able to shape the material world at will: what then? Eventually, our technology will take over completely from slow-moving natural selection, allowing us to re-engineer our planet and our bodies. We will no longer need to borrow from nature, and will no longer feel any need to copy it.

At the extreme limit of his futurological vision, Lem imagines us abandoning the attempt to understand our current reality in favour of building an entirely new one. Yet even then we will live in thrall to the contingencies of history and accident. In Lem’s “review” of the fictitious Professor Dobb’s book Non Serviam, Dobb, the creator, may be forced to destroy the artificial universe he has created – one full of life, beauty and intelligence – because his university can no longer afford the electricity bills. Let’s hope we’re not living in such a simulation.

Most futurologists are secret utopians: they want history to end. They want time to come to a stop; to author a happy ending. Lem was better than that. He wanted to see what was next, and what would come after that, and after that, a thousand, ten thousand years into the future. Having felt its sharp end, he knew that history was real, that the cause of problems is solutions, and that there is no perfect world, either in our past or in our future – assuming that we have one.

By the time he died in 2006, this acerbic, difficult, impatient writer who gave no quarter to anyone – least of all his readers – had sold close to 40 million books in more than 40 languages, and earned praise from futurologists such as Alvin Toffler of Future Shock fame, scientists from Carl Sagan to Douglas Hofstadter, and philosophers from Daniel Dennett to Nicholas Rescher.

“Our situation, I would say,” Lem once wrote, “is analogous to that of a savage who, having discovered the catapult, thought that he was already close to space travel.” Be realistic, is what this most fantastical of writers advises us. Be patient. Be as smart as you can possibly be. It’s a big world out there, and you have barely begun.


When art and technology pull each other to bits


Visiting Ars Electronica for New Scientist, 21 September 2016

In a disused mail sorting office in Linz, Austria, an industrial robot twice my height has got hold of Serbian-born artist Dragan Ilic and is wiping him over a canvas-covered wall. Ilic is clutching pencils, and as the robot twirls and dabs him against the wall, the artist makes his own frantic marks – a sort of sentient brush head. These performances are billed as a collaboration between artist and machine. All I see is the user getting used.

Every September in Linz, the Ars Electronica festival tries to marry technology and art. Andy Warhol took up screen-printing in the 1960s, and a whole generation of gallery-goers have since grown up with the notion that this match is easy.

Indeed, the coupling of art and technology has become a solid pillar of art education, especially now that so much funding comes from IT big hitters such as Sony and private institutions like the Wellcome Trust.

But if the venerable Ars Electronica has demonstrated anything in its 37-year history – beyond the ability of the arts to remake and rejuvenate a city – it is that technology and art make astonishingly unhappy bedfellows.

This year, for example, Swiss artist Daniel Boschung used an industrial robot controlled by bespoke software to take 900-million-pixel portraits of people – forensic surveys so detailed that they drained all emotion from their subjects’ faces.

Not far away, another robot, Davide Quayola’s Sculpture Factory, chiselled through partially completed life-size stone renditions of Michelangelo’s David. The work drew attention to the pixelated nature of 3D scanning, which smeared, spread and tessellated the biblical giant-killer to the point of incoherence: here a limb, there an eye.

Walking the corridor towards the current sculpture-in-progress, one passed attempt after attempt: cloned Davids peering in more or less agonised fashion from their cuboid stone prisons, in a sort of mineral retread of that infamous scene from Alien: Resurrection.

The provoking thing about Ars Electronica is that it jams together boutique displays of the latest technology, trenchant criticisms of the post-industrial project, jokes and honest failures. It is a gargantuan vessel powered by enthusiasm, but steered by nothing remotely resembling taste.

Eventually, the visitor comes to rest against one of the more obvious wins. Boris Labbé’s film Rhizome, which netted the festival’s annual animation prize, unites watercolour illustration and digital effects to tell an epic tale of evolution, civilisation and cosmic transformation in one steady, heart-stopping reverse zoom.

And then there is Frank Kolkman’s OpenSurgery, which combines 3D printing and laser-cutting with hacked surgical pieces and components bought online. This is a DIY robot that can perform laparoscopic surgery – a terrifying comment on the way that hacking and “making” are increasingly expected to stand in for the real thing.

You’ll need a drink after all that, so head for Max Dovey’s A Hipster Bar. And good luck – this genuine pop-up drinking hole will, in true neo-liberal fashion, keep the gate shut unless its face-recognition system considers you “90 per cent hipster”.

As you sip, ponder this: it was the assertion of the Romantic movement that art makes us appreciate the beauty, richness and sheer size of the world. And technology, used appropriately, brings us closer to that sublime. As the Romantics’ acolyte, the writer and pioneering pilot Antoine de Saint-Exupéry, put it: “The machine does not isolate man from the great problems of nature but plunges him more deeply into them.”

Even if that was true in 1939, it’s not true now: not now our drones do our flying for us; not now our technology has got away from us to the point where large portions of nature are being erased; not now we live mired in media and, indeed, have to make special efforts to escape it.

Naturally, artists want to explore the new technology of their day, but these days the best results seem to come when we misappropriate the machines and kick them into new shapes, or simply point and laugh.

How two dead power stations fuel the art of catastrophe


Borrowed Time, Jerwood Space, London, for New Scientist, 16 March 2016.

THE current crop of young artists showing in London look pretty incorruptible. Handed £20,000 each to make films about economic unease and ecological anxiety, Alice May Williams (fresh-ish out of Goldsmiths, University of London) and Karen Kramer (who cut her artistic teeth at the Parsons School of Design, New York City) have made video installations that deliver on this minatory brief. And they have done so with the sort of bloodless precision that leaves a visitor to Borrowed Time unsure whether to admire their high seriousness or worry at their apparent lack of character.

Be patient: both pieces reward closer attention. There are, ultimately, two very strong, staggeringly incompatible visions at work here.

Through its fictional narrator-protagonist, Kramer’s The Eye That Articulates Belongs on Land gives viewers the opportunity to wander the deserted, out-of-bounds byways neighbouring Japan’s Fukushima nuclear power plant while growing increasingly upset.

Actor Togo Igawa’s choked voice-over suggests a wronged salaryman driving back and forth over his pet shih-tzu. Pictures of urban dereliction lovingly reference the 2011 release of radioactive material from the plant (worldwide casualties to date: nil) while providing not much more than a passing reference to the tsunami (Tohoku district casualties: just shy of 16,000) that triggered the plant’s meltdowns.

The power plant offered us “a false promise of dominion” apparently – a formulation I’m sure to recall next time I turn on a kettle for a cuppa – before Nature Wrought Her Terrible Judgement.

Actually, Kramer might not be going this far – it’s hard to tell. But she is dangerously close, achieving with the line “They let loose a reaction here that belongs on the surface of the sun!” an impressive hat-trick: at once morally irrelevant, intellectually vacuous and factually incorrect. The piece then degenerates into a paranoid animation involving shards of uranium glass and a mummified fox.

Meanwhile, in Dream City – More, Better, Sooner, Alice May Williams invites us to stare at her toes, and, beyond, at the towers of the long-since decommissioned Battersea power station, a crumbling Art Deco masterpiece. This gem is currently aswarm with builders, surveyors, architects and their ilk as that swampy, vital, smelly, industrious corner of London gets a landscaped corporate makeover after 30 years of dithering.

Williams is taking deep, centring breaths, following the advice of a meditation teacher. She is learning to let go of past errors and future plans, and to embrace the now. In other words, Williams’s well-being involves letting go of the very forces, prejudices and habits that make her city tick. Can you imagine the mess we would be in if our utilities “embraced the now”? The disjunct between personal time and civic time is built steadily, with humour and poetry and a tremendous sense of mounting threat. “SHOP STAY EAT LIVE WORK and PLAY”, a hoarding screams. A promise or a threat?

“Sometimes we are right inside the drawings,” Williams sighs, interleaving the view from her window with corporate videos, blueprints and historical footage to capture the inevitable bind of city living. That bind has us living inside other people’s visions, hardly able to distinguish between big-business blather and the untethered voices of our own suicidal ideations.

Both films play to our fears, but only Williams understands what’s worth fearing. Disasters are not and never were the point. They are like rain and eclipses: inevitable. The reason we have complex societies is to handle disasters. A famine here, a flood there, a cave-in at the mine. The rest is window dressing, and none of it comes out the way it is meant to.

All over London, the kettles are boiling merrily as the old power station is turned into a retail-residential park “with community built in”. We can embrace the now all we want, but the city has no such luxury. That is what makes it such a terrifying friend.

Putting the wheel in its place


The Wheel: Inventions and reinventions by Richard W. Bulliet (Columbia University Press), for New Scientist, 20 January 2016

IN 1870, a year after the first rickshaws appeared in Japan, three inventors separately applied for exclusive rights. Already, there were too many workshops serving the burgeoning market.

We will never know which of them, if any, invented this internationally popular, stackable, hand-drawn passenger cart. Just three years after its invention, the rickshaw had totally displaced the palanquin (a covered litter carried on the shoulders of two bearers) as the preferred mode of passenger transport in Japan.

What made the rickshaw so different from a wagon or an ox-cart and, in the eyes of many Westerners, so cruel, was the idea of it being pulled by a man instead of a farm animal. Pushing wheelchairs and baby carriages posed no problem, but pulling turned a man into a beast. “This quirk of perception,” Bulliet says, “reflects a history of human-animal relations that the Japanese – who ate little red meat, had few large herds of cattle and horses, and seldom used animals to pull vehicles – did not share with Westerners.”

To some questions that seem far more difficult, Bulliet provides extraordinarily precise answers. He proposes an exact birth for the wheel: the wheel-set design, whereby wheels are fixed to rotating axles, was invented for use on mine cars in copper mines in the Carpathian mountains, perhaps as early as 4000 BC.

Other questions remain intractable. Why did wheeled vehicles not catch on in pre-Columbian America? The peoples of North and South America did not use wheels for transportation before Christopher Columbus arrived. They made wheeled toys, though. Cattle-herding societies from Senegal to Kenya were not taken in by wheels either, though they were happy enough to feature the chariots of visitors in their rock paintings.

Bulliet has a lot of fun teasing generations of anthropologists, archaeologists and historians for whom the wheel has been a symbol of self-evident utility: how could those foreign types not get it? His answer is radical: the wheel is actually not that great an idea. It only really came into its own once John McAdam, a Scot born in 1756, introduced a superior way to build roads. It’s worth remembering that McAdam insisted the best way to manufacture the small, sharp-edged stones he needed was to have workers, including women and children, sit beside the road and break up larger rocks. So much for progress.

The wheel revolution is, to Bulliet’s mind, a recent and largely human-powered one. Bicycles, shopping carts, baby strollers, dollies, gurneys and roll-aboard luggage: none of these was conceived before 1800. At the dawn of Europe’s Renaissance, in the 14th century, four-wheeled vehicles were not in common use anywhere in the world.

Bulliet ends his history with the oddly conventional observation that “invention is seldom a simple matter of who thought of something first”. He could have challenged the modern shibboleth (born in Samuel Butler’s Erewhon and given mature expression in George Dyson’s Darwin Among the Machines) that technology evolves. Add energy to an unbounded system, and complexity is pretty much inevitable. There is nothing inevitable about technology, though; human agency cannot be ignored. Even a technology as ubiquitous as the wheel turns out to be a scrappy hostage to historical contingency.

I may be misrepresenting the author’s argument here. It is hard to tell, because Bulliet approaches the philosophy of technology quite gingerly. He can afford to release the soft pedal. This is a fascinating book, but we need more, Professor Bulliet!

VR: the state of the art

For New Scientist

THEY will tell you, the artists and engineers who work with this gear, that virtual realities are digital environmental simulations, accessed through wearable interfaces, and made realistic – or realistic enough – to steal us away from the real world.

I can attest to that. After several days sampling some of the latest virtual environments available in the UK, I found that reality righted itself rather slowly.

Along the way, however, I came across a question that seemed to get to the heart of things. It was posed by Peter Saville, prime mover of Manchester’s uber-famous Factory Records, and physicist Brian Cox. They explained to an audience at the Manchester International Festival how they planned to fit the story of the universe on to sound stages better known for once having housed legendary soap Coronation Street.

Would The Age of Starlight, their planned immersive visualisation of the cosmos, give audiences an enriched conception of reality, or would people walk home feeling like aliens, just arrived from another planet?

Cox enthused about the project’s educational potential. Instead of reading about woolly mammoths, he said, we will be able to “experience” them. Instead of reading about a mammoth, trying to imagine it, and testing that imagined thing against what you already know of the world, you will be expected to accept the sensory experience offered by whoever controls the kit. “We will be able to inject people with complex thoughts in a way that’s easier for them to understand!” Cox exclaimed. So, of course, will everyone else.

Institutions of learning, then, had best associate their virtual reality experiments with the most trustworthy figure they can find, such as David Attenborough. His First Life is the London Natural History Museum’s joyride through perilous Cambrian shallows, built on the most recent research.

“When the film starts, try to keep your arms to yourselves,” begged the young chap handing out headsets at the press launch, for all the world as though this were 1895 and we were all about to run screaming from Louis Lumière’s Arrival of a Train. The animator, given free rein, renders tiny trilobites on human scale. This is a good decision – we want to see these things, after all. But such messing around with scale inevitably means that when something truly monstrous appears, we are not as awed as we perhaps ought to be.

VR sets awkward challenges like this. From a narrative perspective, it is a big, exciting step away from film. Camera techniques like zooming and tracking ape the way the eye works; with VR, it is up to us what we focus on and follow. Manipulations have a dreamlike effect. We do not zoom in; we shrink. We do not pan; we fly.

Meanwhile, virtual reality is still struggling to do things everyone assumes it can do already. Accurately reading a user’s movements, in particular, is a serious headache. This may explain the excitement about the two-person game Taphobos, which solves the problem by severely limiting the player’s movements. Taphobos, a play on the Greek words for “tomb” and “fear”, traps you in a real coffin. With oxygen running out, the entombed player, equipped with an Oculus Rift headset, must guide their partner to the burial site over a radio link, using clues dotted around the coffin.

“This combination,” say the makers, master’s students in computing at the University of Lincoln, UK, “allows you to experience what it would be like if you were buried alive with just a phone call to the outside world.” Really? Then why bother? By the time you have addressed virtual reality’s many limitations, you can end up with something a lot like, well, reality.

London’s theatre-makers know this. At first, immersive entertainments such as Faust (2006) and The Masque of the Red Death (2007), pioneered by the theatre company Punchdrunk, looked like mere novelties. Now they are captivating bigger audiences than ever.

Traditional theatregoers may grow weary of running confused across gargantuan factories and warehouses, trying to find where the action is, but for gamers such bafflement is a way of life, and to play scenarios out in the real world is refreshing.

Until 27 September, London-based Secret Cinema offers a similar sort of immersion: inviting you to come and battle the evil Empire through several meticulously realised sets as a warm-up to a screening of The Empire Strikes Back. It’s all pitched at a gentle, playful pace: something between a theatrical experience and a club night.

Right or wrong, VR promises to outdo these entertainments. It’s supposed to be better, more engaging than even the most carefully tailored reality. That’s a very big claim indeed.

More likely, VR may be able to present the world in a way that would otherwise be inaccessible to our unaugmented senses. The first tentative steps in this direction were apparent at this year’s Develop games conference in Brighton, where the Wellcome Trust and Epic Games announced the winner of their first Big Data Challenge. Launched in March, the competition asked whether game designers could help scientists to better visualise incomprehensibly large data sets.

Among the front runners was Hammerhead, a team taking on the enormous task of designing a decent genomics browser. Such browsers have barely changed in a decade: where once they held barely a dozen data fields, they now need hundreds, since studying the behaviour of different genes under different conditions is a multidimensional nightmare. Martin Hemberg of the Sanger Institute, who set the challenge, explained: “Genomics is very data-intensive. Trying to integrate all this and make sense of it is a huge challenge. We need better visualisation tools.”

Hammerhead’s proposal promises something close to SF writer William Gibson’s original conception of cyberspace: a truly navigable and manipulable environment made of pure information. Not surprisingly, it will take more than the challenge’s modest $20,000 to realise such a vision.

Instead, the prize was handed to two London studios, Lumacode and Masters of Pie, who collaborated on a tool that is already proving itself as it takes the 14,500 family health records in the Avon Longitudinal Study of Parents and Children, and spits them out in real time so researchers can follow their hunches. It even boasts privacy tools to facilitate the work of hundreds of researchers worldwide.

On current evidence, today’s VR is going to change everything by just a little bit. It will disconcert us in small ways. It will not give us everything we want. But reality doesn’t either, come to that. We can afford to be patient.

Put out more flags


The meet-and-greet staff at NESTA, the UK’s National Endowment for Science, Technology and the Arts, have a lot invested in the idea that their bulgy Fetter Lane new-build is larger and more complex than it actually is.

There’s an open-plan space with a desk and two meeting pods made of safety glass and egg boxes. The cloakroom is to the left of the right-hand pod and the room where they’re launching the 2014 Longitude Prize is to the right. You go left (there isn’t a cloakroom as such, just a cupboard) and immediately you’re intercepted by a meet-and-greeter following a clockwise orbit around the left-hand pod. “You must be lost,” she says, pointing you in a direction you don’t want to go. All this in a space of about 400 square feet.

Inside the room, the brains behind the revivification of the British government’s Longitude Prize of 1714 are taking it in turns to downplay the significance of the enterprise. Iain Gray, chief executive of the Technology Strategy Board, worries at the value of the word “prize” before jettisoning it entirely in favour of “challenge-led agendas”, whatever the hell they are.

Honestly, it’s as if the X Prize had never happened. The razzmatazz, the music, the black T-shirts. The working laptop presentations. Here it’s all apologies and self-deprecation and a recalcitrant Windows 7 install making everyone look like a bit of a tit.

The canapés were excellent but there should have been bunting, damn it. There should have been flags. A good, worthwhile prize is always welcome. True, there’s a world of difficulty to be got through in making a prize good and worthwhile. But so far, NESTA seem to have paid their dues, and anyone who watched the BBC’s Horizon documentary last night may reasonably conclude that they’ve come up with a winner.

Until June 25, the public can vote for one of six challenges which, if met, would go some way to changing the world for the better. Do you want ecologically sustainable air travel, nutrition sufficient to keep the world’s population going, something to replace defunct antibiotics, machines to ameliorate paralysis, clean water, or independent lives for those with dementia?

As a piece of public engagement with science, it’s a triumph – and that’s before the competition proper gets started. The winning challenge stands for a decade or so, and whoever meets it wins ten million quid. The expectation, I presume, is that consortiums representing commercial and academic interests will spend much, much more than they could possibly recoup from the prize money. The victory’s the thing, after all. The kudos. The column inches, and venture capitalists waving their chequebooks outside the door.

“This prize, on its own, won’t change the world,” says the prize’s lead, health entrepreneur Tamar Ghosh, underselling all her hard work. She should read more aviation history. These sorts of prizes can do precisely that, and have.

The Rise of Augmented Reality


Thanks to the rise of smartphone technologies, the virtual territories of cyberspace are increasingly free to roam around in the real world.

LondonCalling.com is hosting a panel discussion on the current and future trends of augmented reality on Tuesday 27 March, 6.30pm – 9pm, at The Vibe Gallery, Bermondsey. (That’s five minutes from Bermondsey tube station on the Jubilee Line.)

Tamara Roukaerts, head of marketing at the AR company Aurasma, and Frank Da Silva, creative director for Earth 2 Hub (a sort of thinktanky thing, with video), are going to be singing the technology’s praises, I assume, while I crouch in the corner painting my face with ashes and portending doom. Because I am a writer, and that is my job. (Think Émile Zola; think railways.)

Tom Hunter’s in the chair. (Or is he…?) More details at http://bit.ly/x2xflN

Come and heckle if you’re in London. It’s free, and it’s about the closest you’ll ever get to being in an episode of Nathan Barley.

Video: http://www.youtube.com/watch?v=95azfZKJo4Q