Updating Magic Universe
So what’s a Majorana fermion then?
A news item in today’s Nature reminds me that last week it was all happening with quantum computing at a meeting of the American Physical Society. IBM announced a breakthrough in the technology, predicting practical computers of unimaginable power within 10 or 15 years. And in Nature Eugenie Samuel Reich discusses what seems to be a discovery of cosmic importance by a team in Delft, announced at the APS meeting. I’ll sum up two strands of progress in a brief update.
In Magic Universe the last section of the story called “BITS AND QUBITS: the digital world and its quantum shadow looming” reads so far:
Towards quantum computers
For a second revolution in information technology, the experts looked to the spooky behaviour of electrons and atoms known in quantum theory. By 2002 physicists in Australia had made the equivalent of Shannon’s relays of 65 years earlier, but now the switches offered not binary bits, but qubits, pronounced cue-bits. They raised hopes that the first quantum computers might be operating before the first decade of the new century was out.
Whereas electric relays, and their electronic successors in microchips, provide the simple on/off, true/false, 1/0 options expressed as bits of information, the qubits in the corresponding quantum devices will have many possible states. In theory it is possible to make an extremely fast computer by exploiting ambiguities that are ever-present in quantum theory.
If you’re not sure whether an electron in an atom is in one possible energy state, or in the next higher energy state permitted by the physical laws, then it can be considered to be in both states at once. In computing terms it represents both 1 and 0 at the same time. Two such ambiguities give you four numbers, 00, 01, 10 and 11, which are the binary-number equivalents of good old 0, 1, 2 and 3. Three ambiguities give eight numbers, and so on, until with fifty you have a million billion numbers represented simultaneously in the quantum computer. In theory the machine can compute with all of them at the same time. [A quick check of this arithmetic follows the excerpt.]
Such quantum spookiness spooks the spooks. The world’s secret services are still engaged in the centuries-old contest between code-makers and code-breakers. There are new concepts called quantum one-time pads for a supposedly unbreakable cipher, but some experts suspect that a powerful enough quantum computer could crack anything. Who knows what developments may be going on behind the scenes, like the secret work on digital computing by Alan Turing at Bletchley Park in England during the Second World War?
The Australians were up-front about their intentions. They simply wanted to beat the rest of the world in developing a practical machine, for the sake of the commercial payoff it would bring. The Centre for Quantum Computer Technology was founded in January 2000, with federal funding, and with participating teams in the Universities of New South Wales, Queensland and Melbourne.
The striking thing was the confidence of project members about what they were attempting. A widespread opinion at the start of the 21st century held that quantum computing was beyond practical reach for the time being. It was seen as requiring exquisite delicacy in construction and operation, with the ever-present danger that the slightest external interference or mismanagement could cause the whole multiply parallel computation to cave in, like a mistimed soufflé.
The qubit switches developed in Australia consist of phosphorus atoms implanted in silicon using a high-energy beam aimed with high precision. Phosphorus atoms can sustain a particular state of charge for longer than most atoms, thereby reducing the risk of the soufflé effect. A pair of phosphorus atoms, together with a transistor for reading out their state, constitutes one qubit. Unveiling the first example at a meeting in London, Robert Clark of New South Wales said, ‘This was thought to be impossible just a few years ago.’
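As a quick check on the counting argument in that excerpt, here is a minimal Python sketch – an editorial aside, not from the book, and the function name is ours:

    # Each extra qubit doubles the number of values represented at once:
    # n qubits span 2**n simultaneous binary numbers.
    def simultaneous_numbers(n_qubits: int) -> int:
        """Count the basis states an n-qubit register holds at once."""
        return 2 ** n_qubits

    for n in (1, 2, 3, 50):
        print(f"{n:>2} qubits -> {simultaneous_numbers(n):,} numbers at once")
    # 50 qubits -> 1,125,899,906,842,624 -- the 'million billion' in the text.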
Update March 2012 – subject to confirmation of the Majorana fermion
Ten years later, when many others had joined in a prolonged experimental quest for quantum computing, IBM researchers at Yorktown Heights claimed that a practical device was within sight, perhaps 10 or 15 years away. Dogging all the experimenters was a problem called decoherence – would the qubits survive long enough to be checked for possible errors?
In 2012 Matthias Steffen of IBM told a reporter, “In 1999, coherence times were about 1 nanosecond. Last year, coherence times were achieved for as long as 1 to 4 microseconds. With [our] new techniques, we’ve achieved coherence times of 10 to 100 microseconds. We need to improve that by a factor of 10 to 100 before we’re at the threshold [where] we want to be. But considering that in the past ten years we’ve increased coherence times by a factor of 10,000, I’m not scared.”
Then it would be a matter of scaling up from devices handling one or two qubits to an array with, say, 250 qubits. That would contain more ordinary bits of information than there are atoms in the entire universe, and it would be capable of performing millions of computations simultaneously. No existing code could withstand its probing, which probably explains why the US Army funded IBM’s work.
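For a sense of that scale, here is another minimal Python check – again an editorial aside; the figures it prints are plain arithmetic, not IBM’s own numbers:

    # State space of a 250-qubit register: 2**250 basis states.
    n_qubits = 250
    states = 2 ** n_qubits

    print(f"2**{n_qubits} has {len(str(states))} decimal digits")  # 76 digits
    print(f"2**{n_qubits} is roughly {float(states):.2e}")         # ~1.81e+75

Python’s arbitrary-precision integers make the exact count easy to handle, which is convenient for back-of-the-envelope checks like this.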
A by-product of quantum computing research was the discovery of a new particle in the cosmos. In 1937 the Italian physicist Ettore Majorana adapted a theory by the British physicist Paul Dirac to predict a particle that is its own antiparticle – a very strange item indeed! It would be electrically neutral and exhibit peculiar behaviour.
A team led by Leo Kouwenhoven at Delft University of Technology in the Netherlands tested experimentally a suggestion from 2010 about how to create a pair of these particles. At a very low temperature and in a magnetic field, you touch a superconductor with an extremely fine semiconducting wire. The signature of the presence of “Majorana fermions”, confirmed by the experimental team, is that the resistance in the wire becomes very low at zero voltage.
The Majorana particle opened a new route to quantum computing, because of its special ability to remember if it swaps places with a sibling. It was expected to be particularly resistant to the decoherence that plagued other techniques. So the Delft discovery promised a new research industry.
References
Steffen quoted by Alex Knapp in Forbes, 28 February 2012: http://www.forbes.com/sites/alexknapp/2012/02/28/ibm-paves-the-way-towards-scalable-quantum-computing/
IBM press release, 28 February 2012: http://www-03.ibm.com/press/us/en/pressrelease/36901.wss
Nature News, 8 March 2012: http://www.nature.com/news/a-solid-case-for-majorana-fermions-1.10174
Nature News, 28 February 2012: http://www.nature.com/news/quest-for-quirky-quantum-particles-may-have-struck-gold-1.10124
“A suggestion from 2010”: paper by Lutchyn et al. in Physical Review Letters, available at arXiv:1002.4033v2
27/10/2011 Climate Change – News and Comments
Nature muddies the water
You’d expect clear guidance from leading journals on when and how scientists may publicise their results. How bewildering, then, to read an editorial, “Scientific climate”, in today’s Nature (vol. 478, p. 428). It’s on the subject of the Berkeley Earth / Richard Muller furore noted in my recent posts. The editorial’s sub-heading is:
Results confirming climate change are welcome, even when released before peer review.
… Where “climate change” is to be understood, I suppose, as “catastrophic manmade global warming”. Other points from the editorial are, as I construe them:
What on earth does all that mean, to scientists and journalists who are just trying to tell their stories promptly? Here are three extracts from Nature’s instructions to authors concerning embargoes, which can be seen in full at http://www.nature.com/authors/policies/embargo.html
“Material submitted to Nature journals must not be discussed with the media, except in the case of accepted contributions, which can be discussed with the media no more than a week before the publication date under our embargo conditions. We reserve the right to halt the consideration or publication of a paper if this condition is broken.”
“The benefits of peer review as a means of giving journalists confidence in new work published in journals are self-evident. Premature release to the media denies journalists that confidence. It also removes journalists’ ability to obtain informed reactions about the work from independent researchers in the field.”
“… communicate with other researchers as much as you wish, whether on a recognised community preprint server, on Nature Precedings, by discussion at scientific meetings (publication of abstracts in conference proceedings is allowed), in an academic thesis, or by online collaborative sites such as wikis; but do not encourage premature publication by discussion with the press (beyond a formal presentation, if at a conference).”
What the new editorial means, in my opinion, is that the politicisation of science has now penetrated right through to the workaday rituals of publication. On no account must you publicise your new work prematurely, unless you do it to bash the climate sceptics or the Republican Party or supporters of Special Relativity or anyone else the editors happen to dislike today. In that case they’ll forgive you.