Category Archives: News

The publication with zero authors


I keep coming across commentary lamenting the increase in the number of authors on scientific papers (or patents), such as by Philip J. Wyatt in Physics Today and related posts in the blogosphere, e.g. a very recent one by UNC Chapel Hill marine ecologist John Bruno on Seamonster. This lament seems diametrically opposed to the calls for crowd-sourced science put forward by people like Michael Nielsen. The conservative single-author advocates think that more and more authors reflect an erosion of individual creativity, while the Science 2.0 crowd is convinced that more people working on a problem deliver faster, better, and more diverse science.

So which aspect is more important: the noble aspiration to individual scientific excellence or the more modern result-driven push for large-scale collaboration?

My opinion is that you can have both, and that this important question has very little to do with actual authorship. Let's talk about authorship first. What I certainly support is banning automatic authorship, such as that often demanded by organizational heads with 50+ published papers per year. What I don't support are the stringent guidelines suggested by some journals. Take, for example, the oft-cited rules of the International Committee of Medical Journal Editors:

Authorship credit should be based on 1) substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published. Authors should meet conditions 1, 2, and 3.

Let's apply those rules to a hypothetical scenario: Prof A invites a long-term collaborator, researcher B. This visitor has an original idea for an experiment which can be implemented with a new setup in A's lab. This setup has been assembled over the past 4 years by PhD student C, who completed his work but unfortunately never got to do any actual science before he had to write up his thesis. The experiment will thus be carried out by promising new PhD student D, who was introduced to the apparatus by C. Since taking the data is time-consuming, and D is new to research, the data will be analyzed by PhD student E, a real Matlab genius. For good measure, throw in postdoc F, who spends most of his time teaching, planning and supervising other projects, writing grants, etc. F will write the paper while the data is being collected and analyzed.

According to the ICMJE guidelines, the resulting paper will have zero authors: none of our protagonists, from the researcher who had the idea, to the grunt who built the experiment, all the way to our professor without whom none of the others would even have been there, meets the suggested criteria for scientific authorship.

I offer a much simpler criterion:

If the manuscript wouldn't exist at this time, in this form, without person X (even if X could have been replaced by any other similarly qualified person), then person X should be an author.

Just like the ICMJE criteria above, my suggestion leaves room for interpretation. Obviously, even I don't think that authorship should extend all the way to Adam and Eve. But it does extend to the guy who owns the lab, created that particular line of research, hired the people involved, and financed the experiments. And it very certainly includes the researcher(s) who provided the initial idea, because an idea of sufficient quality, i.e. somewhat more specific than 'you should look into curing cancer', is worth more than data that could probably have been taken in a dozen other labs. (So take that, John Bruno: unless your idea was entirely obvious, you should definitely have been an author of Brian Helmuth's Science paper.) It also obviously includes the guy who built the experiment. Unfortunately, it's not uncommon in experimental science that a whole PhD is spent setting up an experiment from scratch with no immediate outcome, while some luckster simply walks in a few years later and starts milking the setup for results. In this case our student would do well to arrange an agreement with the lab owner on how many papers they can get out of their work.

The best justification for my authorship criterion is provided by large scientific collaborations. Publications by CERN or LIGO routinely sport hundreds of authors, none of whom would qualify for individual authorship according to some official guidelines. No one in their right mind would, however, suggest that a PhD in experimental science is worth less when obtained while contributing to the most exciting scientific endeavors undertaken by humankind. In this context, suggestions (see the comments on Wyatt's Physics Today article) of a per-author normalization of scientific indicators like the h-index are laughable. While those huge projects are certainly extreme cases, the principle is scalable: if a project benefits from more participants, then by all means, they should all be there, and they should probably all be authors.

But back to the initial quandary of whether multi-author papers erode individual creativity. Will the fact that our paper has 6 authors instead of 1 have that effect? No, it won't. Most of the creative achievement was contained in the initial idea, some in the experimental design, and the question of authorship won't change that. Should we have left every aspect of the experiment to our new guy D in the spirit of a wholesome scientific training? Maybe, but that would have taken much longer to complete the research, which cannot be in the best interest of science or the taxpayer. With good supervision, student D can easily learn the components they missed out on in the time saved. It would furthermore be silly to underestimate the learning effect of sharing the process of scientific research with more experienced colleagues. Do we really want to return to yesteryear, when researchers were supposed to do everything on their own, isolated from their environment? I don't think so. And finally, if the opportunity really arises, any aspiring academic will cherish publishing a single-author paper anyway.

The only reasonable argument I see against increasingly multi-author papers is that hiring committees will have a harder job separating truly creative minds from mere data analyzers. This problem is already mitigated by author contribution statements, as nowadays requested by major journals such as Nature and Science. It would certainly be welcome if those declarations were standardized and taken up by more journals. Beyond that, if two or three job references and an extensive interview still aren't enough for our struggling committee, then maybe the data analyzer is actually more creative than we had thought.

In summary, anyone who contributed to scientific research should be considered an author, no matter whether their contribution was restricted to "just" taking data or any other singular aspect of the research. We will still have scientifically brilliant individuals, probably more so because of the far broader opportunities offered by larger collaborations, and if you find it harder to identify those individuals, maybe it's your fault.


Sampling bosons

If ever there was a paper the linear optics community got as excited about as the now famous KLM paper, it was Aaronson and Arkhipov's "The computational complexity of linear optics". Fast forward two years and we have just published, in Science, a first experimental implementation of the BosonSampling task introduced by the two 'As'.

This work, together with a similar one by our friends and competitors in Oxford, has attracted quite a lot of attention in the science media. Here's a (probably incomplete) list of articles:

Science, “New form of quantum computation promises showdown with ordinary computers”

Scientific American, “New machine bridges classical and quantum computing”

New Scientist, “Victorian counting device gets speedy quantum makeover”

Ars Technica, “Can quantum measurements beat classical computers?”

Physics World, “‘Boson sampling’ offers shortcut to quantum computing”, “Rise of the BosonSampling computer”

IEEE Spectrum, “New machine puts quantum computers’ utility to the test”

Phys.org, “At the solstice: Shining light on quantum computers”

ABC Science, “Proving the need for quantum computers”

But there’s more. Since Andrew presented our preliminary results at last year’s QCMC conference, two other groups in Vienna and Rome also raced to get their results out and all four manuscripts appeared within a day on the arXiv.

Since the titles of our papers don't say much about how the results differ, a brief explanation might be in order. Let's talk about the similarities first. All of our experiments sent down-converted photons through some sort of linear optical network, and we all observed three-photon interference, which is the minimal test of the BosonSampling idea. The team in Oxford also measured four-photon interference patterns, albeit in a limited sense: instead of four photons being sent into four distinct optical modes, they simply used the sporadic double-pair emissions from a single down-converter.
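For readers who want a feel for what a BosonSampling test actually asks you to compute, here is a minimal numerical sketch (my illustration, not the analysis code behind any of the four papers): for photons entering and leaving a lossless linear network in distinct modes, the detection probability is the modulus squared of the permanent of the corresponding submatrix of the network's transfer matrix. The 6x6 Haar-random unitary and the mode labels below are arbitrary placeholders.

```python
# Illustrative sketch of the BosonSampling rule for collision-free events:
# P(output modes) = |Per(U_sub)|^2, with U_sub the n x n submatrix of the
# network unitary picked out by the input and output modes.
from itertools import permutations
import numpy as np
from scipy.stats import unitary_group  # Haar-random unitary, stand-in for a real circuit

def permanent(M):
    """Permanent by direct summation over permutations (fine for n = 3)."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

m = 6                     # a hypothetical six-mode network
U = unitary_group.rvs(m)  # placeholder for the measured circuit unitary
inp = (0, 1, 2)           # three photons injected into modes 0, 1, 2

def prob(out):
    """Probability of detecting one photon in each of the distinct output modes `out`."""
    sub = U[np.ix_(inp, out)]
    return abs(permanent(sub)) ** 2

print(prob((1, 3, 5)))    # one particular collision-free three-photon event
```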

One difference is that the groups in Oxford, Italy and Vienna realized their optical circuits with integrated waveguides, while we used a three-port fiber beamsplitter (with the polarization degree of freedom giving us a six-by-six network). The waveguides provide a stable network, but they are quite lossy, which is probably why we have the best-quality three-photon data. Another difference is that while those circuits are in principle tunable via thermal heaters, they were kept fixed in the respective experiments; ours can easily be tuned over a large range of interesting unitaries.

An aspect which sets our work apart, and which is in my opinion important for testing the validity of BosonSampling, is that we used a different method of characterizing our photonic network. Instead of using two-photon interference for this characterization, which rests on the same assumptions as BosonSampling itself and thus does not allow independent verification of the predicted three-photon amplitudes, we used a simple classical method for characterizing unitary circuits which we recently developed.

Making light matter

Photonic split-step quantum walk implementation for the observation of topologically protected bound states.

We have a new paper in Nature Communications, "Observation of topologically protected bound states in photonic quantum walks". Here's our press release, which unfortunately didn't quite make it into the official press channels because of a fundamental disconnect between what we researchers wanted to write and the official PR guidelines at UQ:

At first glance, a donut and a coffee cup do not have much in common, except that they complement each other really well.

A second glance reveals that they share a geometrical property, their topology: the shape of one can be continuously deformed into the shape of the other.

Topology explains many phenomena in modern science: transitions between physical regions with different topology cause exotic effects such as insulators which act like conducting metals at the surface.

These effects are hard to control and study since they usually appear in complex materials where quantum particles are hard to observe. Researchers at the University of Queensland and Harvard University have simulated transitions between quantum topologies—predicted to exist, but never observed, in polymers and high-energy physics—in an experiment where light is made to act like matter.

“We observed for the first time bound states, where a quantum particle is trapped at a topological interface, which have long been predicted to play an important role,” says Matthew Broome, joint lead author of this work and PhD student at the University of Queensland. “It was easy to observe these trapped photons, which is usually a challenging task in material sciences.”

The team at Harvard recently predicted that quantum walks can simulate systems with different topological regions. The experimentalists at Queensland persuaded single particles of light—photons—to walk through an optical network.

“Quantum walks have previously been realized in a variety of settings with ions, atoms and photons, but nobody really knew that these exciting topological phenomena could be observed with quantum walks before our discovery,” says Takuya Kitagawa, joint lead author, who developed the theory with his colleagues at Harvard. “This discovery came as a complete surprise to everybody, including us.”

Furthermore, the versatile system invented by the UQ team allowed a surprising new discovery: the existence of a pair of bound states, a topological phenomenon which arises only in dynamic, time-dependent systems.

This discovery bears exciting prospects for the development of novel materials and even powerful—but so far elusive—quantum computers.

The study, “Observation of topologically protected bound states in photonic quantum walks”, by UQ’s Matthew Broome, Alessandro Fedrizzi, Ivan Kassal, and Andrew White, and Harvard’s Takuya Kitagawa, Erez Berg, Mark Rudner, Alán Aspuru-Guzik and Eugene Demler, was published in Nature Communications.

The experiment was conducted by researchers from the ARC Centre for Engineered Quantum Systems (EQuS) and the ARC Centre for Quantum Computation and Communication Technology (CQC2T) in Australia; and Harvard University, USA.
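For those who want to play with the underlying idea, here is a toy simulation of a split-step quantum walk with a boundary between two coin angles, in the spirit of the Kitagawa et al. proposal. The conventions, coin angles, lattice size and step count are illustrative assumptions, not the parameters of our photonic experiment; with the two regions chosen in different topological phases, part of the walker's probability should remain pinned at the interface.

```python
# Toy split-step quantum walk with a boundary between two coin angles theta2.
# Illustrative sketch only: angles and conventions are assumptions, not the
# settings of the photonic experiment. Periodic boundaries are used for simplicity.
import numpy as np

N = 101           # lattice sites, boundary at the centre
x0 = N // 2

def coin(theta):
    """Coin rotation acting on the two-dimensional coin (polarization) space."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def apply_coin(psi, thetas):
    # psi has shape (N, 2); thetas holds a coin angle per lattice site
    out = np.empty_like(psi)
    for x in range(N):
        out[x] = coin(thetas[x]) @ psi[x]
    return out

def shift_up(psi):
    out = psi.copy()
    out[:, 0] = np.roll(psi[:, 0], 1)    # coin-up component moves one site right
    return out

def shift_down(psi):
    out = psi.copy()
    out[:, 1] = np.roll(psi[:, 1], -1)   # coin-down component moves one site left
    return out

theta1 = np.full(N, -np.pi / 2)                               # uniform first coin angle (assumed)
theta2 = np.where(np.arange(N) < x0, -np.pi / 4, np.pi / 4)   # sign flip of theta2 at the boundary

psi = np.zeros((N, 2), dtype=complex)
psi[x0] = [1 / np.sqrt(2), 1j / np.sqrt(2)]                   # walker starts at the interface

for _ in range(30):    # 30 steps of the split-step protocol: R(theta1), S_up, R(theta2), S_down
    psi = shift_down(apply_coin(shift_up(apply_coin(psi, theta1)), theta2))

prob = np.sum(np.abs(psi) ** 2, axis=1)
print("probability within 2 sites of the boundary:", prob[x0 - 2:x0 + 3].sum())
```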

Photonic ups and downs

The photons I work with have been on a roller coaster ride lately, having been down-converted here and now up-converted in a new paper we just published in PRA. This time, the photons didn't have to go their own way, though. They were assisted in their journey from a longer wavelength (810 nm) to a shorter one (532 nm) by a strong telecom seed laser. On its own this wouldn't be big news, as many other groups have previously reported this so-called sum-frequency generation between single photons and strong seed lasers. The actual news is that we managed to up-convert polarization-entangled photons.

Experimental setup for frequency up-conversion of entangled photons
We generated polarization-entangled photon pairs at 810 nm in a ppKTP Sagnac source and superposed one photon of each pair with a strong 1550 nm laser in two additional ppKTP crystals. The resulting 532 nm photons were still highly entangled with their original partners.

We started by generating polarization-entangled photon pairs at 810 nm, mixed them with a strong 1550 nm laser and sent them through the reverse version of Paul Kwiat's sandwich source: two nonlinear crystals with orthogonal orientations, quasi-phase-matched for type-I up-conversion of 810 nm + 1550 nm -> 532 nm. The overall efficiency of the up-conversion scheme was atrocious, but it precisely matched the theoretical expectations for our scheme, given the available pump power and the limitations in geometry and optical loss. More importantly, the entanglement between the up-converted green photons and their original 810 nm partners was almost perfect.
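For reference, the target wavelength follows directly from energy conservation in sum-frequency generation:

```latex
\frac{1}{\lambda_{\mathrm{SFG}}}
  = \frac{1}{\lambda_{\mathrm{signal}}} + \frac{1}{\lambda_{\mathrm{seed}}}
  = \frac{1}{810\,\mathrm{nm}} + \frac{1}{1550\,\mathrm{nm}}
  \approx \frac{1}{532\,\mathrm{nm}} .
```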

Our results show for only the second time (the first being Nicolas Gisin’s fabulous time-bin entanglement up-conversion experiment) that entanglement is preserved in a sum-frequency experiment with a strong seed laser. As we argued in our paper, this type of coherent interconversion between wavelengths will be an important tool in the larger picture of practical quantum information processing. Nature Photonics included our paper in their March 2012 research highlights.

Quantum steering without the detection loophole

Our paper “Conclusive quantum steering with superconducting transition edge sensors” has now been published in Nature Communications.

Quantum steering was introduced by fellow Austrian Erwin Schrödinger in 1935, alongside the much better known quantum phenomenon of entanglement. It describes the ability of one party to steer the measurement outcomes of a second party by choosing what to measure on their own half of a shared quantum state. The interesting feature of quantum steering is that it demonstrates quantumness in a regime weaker than that needed to disprove local realistic theories.

Original demonstrations of quantum steering involved experimental tests of the Einstein Podolsky Rosen (EPR) paradox. Since then, Howard Wiseman (one of our co-authors) and others have given the concept of steering a facelift, reformulating it in the context of quantum information.

Our contribution was to experimentally demonstrate steering while closing the so-called detection loophole, which would otherwise allow one of the two parties to cheat the other into thinking their state was steered properly, even though it wasn't. The detection loophole is an issue which also pops up in Bell inequality tests performed with photons; it originates in our inability to detect entangled photon pairs with 100% efficiency. In practice, a conclusive demonstration of quantum steering is performed by violating a steering inequality, which in its simplest form requires a conditional detection efficiency of at least 50%.

This number was considered out of reach until only a few years ago, when superconducting transition edge sensors hit the scene. These detectors, pioneered by our collaborators at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, are in principle capable of detecting incoming single photons with near-unity efficiency. The experimental challenge for us was to build a source of entangled photons which, combined with these detectors, would allow us to surpass the 50% efficiency limit. We were quite successful at that and achieved an unprecedented conditional detection efficiency of 62%.
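To illustrate what "conditional detection efficiency" means in practice, here is a hypothetical back-of-the-envelope estimate of the kind routinely used for heralded photon sources. The count rates below are invented for illustration and are not the data behind our 62% figure; background and accidental coincidences are ignored.

```python
# Hypothetical illustration: conditional (heralding) efficiency estimated from
# raw count rates. Numbers are made up; backgrounds and accidentals are neglected.
singles_herald = 1.00e6   # detections per second in the heralding arm
coincidences   = 6.2e5    # coincident detections per second in the steered arm

eta_conditional = coincidences / singles_herald
print(f"conditional detection efficiency: {eta_conditional:.0%}")  # -> 62%
```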

In parallel with our efforts, two other groups reported steering experiments which successfully closed the detection loophole. The experiment performed in Vienna (Bernhard Wittmann et al.) simultaneously closed the locality loophole (similar to our previous experiment described here), which would otherwise allow cheating via information leakage between the measurement devices used by the two parties. The second experiment, performed at Griffith University, also here in Brisbane (A. Bennet et al.), demonstrated steering over a 1 km fiber channel. Both of these experiments were performed with conventional photon detectors, exploiting clever theory which allows one to lower the efficiency requirements by using more measurement settings.


Hardy’s paradox and Bell inequalities in time

Extending our previous work on Leggett-Garg inequalities, we recently demonstrated the venerable Hardy paradox and the violation of a state-independent Bell inequality in a temporal scenario. Our work has now appeared in PRL.

Tests of quantum mechanics, such as the Bell inequality, are usually carried out in a spatial scenario, where measurements performed on two or more remote, spatially separated (quantum) systems reveal correlations which are stronger than those allowed in classical models.

Experimental setup for testing temporal quantum phenomena. A single photon serves as the system on which we perform two temporally separated measurements. The first measurement is non-destructive, enacted with an auxiliary photon which reads out the state of the system photon via two-photon interference in a controlled-phase gate. The second measurement is then performed in a standard way.

They can, however, also be translated into a temporal scenario, where two or more measurements are performed on the same system at different times. This idea was cooked up by Tony Leggett and Anupam Garg, motivated by the quest to show that the quantum and classical world views are incompatible even for macroscopic objects. The usual assumptions of realism and locality in the spatial scenario are here replaced with the assumptions of (macro)realism and, because locality cannot be enforced for temporal measurements on a single object, non-invasiveness, which posits that a classical measurement on a macrorealistic system can determine the state of the system without affecting either the state or the subsequent system dynamics. Just as in the spatial scenario, one can derive inequalities which reveal the disagreement of quantum mechanics with these assumptions.

In our experiment, we implemented such a temporal scenario with single photons, similar to a previous demonstration of the original Leggett-Garg inequality in our group. The experimental scheme is explained in the figure. Following an idea by Tobias Fritz, we first show that Hardy's paradox is stronger in its temporal form. Furthermore, we measure the temporal equivalent of a Bell-CHSH inequality, as suggested by Časlav Brukner and colleagues. Surprisingly, the violation of this inequality does not depend on the quantum state, which means it can be maximally violated by any state, including the fully mixed state.
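For completeness, here is the temporal CHSH combination in compact form (my paraphrase of the published argument, with conventions simplified): macrorealism plus non-invasiveness bound it by 2, while for sequential projective measurements on a qubit the two-time correlation is state-independent, so the standard CHSH angles give the maximal violation for any input state.

```latex
K = E(\vec a_1,\vec b_1) + E(\vec a_1,\vec b_2) + E(\vec a_2,\vec b_1) - E(\vec a_2,\vec b_2) \le 2 ,
\qquad
E(\vec a_i,\vec b_j) \stackrel{\text{QM}}{=} \vec a_i \cdot \vec b_j
\;\Rightarrow\; K_{\max} = 2\sqrt{2} \;\; \text{for any input state, including the maximally mixed one.}
```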

Efficient quantum process tomography via compressive sensing

Illustration of a quantum process reconstructed via compressive sensing
A quantum process matrix reconstructed via compressive sensing. Instead of potentially 576 measurement configurations, a mere 18, selected at random, suffice for a high-fidelity estimate.

We have a new paper, titled "Efficient measurement of quantum dynamics via compressive sensing", in PRL. I already spent a significant amount of time with my co-authors writing the paper and the press release, so I'm not going to reinvent the wheel a third time and will just post some excerpts from the UQ press release as an introduction:


At present it is extremely difficult to characterise quantum systems — the number of measurements required increases exponentially with the number of quantum parts. For example, an 8-qubit quantum computer would require over a billion measurements.

“Imagine that you’re building a car but you can’t test-drive it. This is the situation that quantum engineers are facing at the moment,” said UQ’s Dr Alessandro Fedrizzi, co-author of the study that was recently published in Physical Review Letters.

“We have now found a way to test quantum devices efficiently, which will help transform them from small-scale laboratory experiments to real-world applications.”

The team also includes UQ collaborators Dr Marcelo de Almeida, Professor Andrew White and PhD student Matthew Broome, as well as researchers from Princeton University, the Massachusetts Institute of Technology (MIT), and SC Solutions, Inc. The researchers adapted techniques from "compressive sensing", a hugely successful mathematical data-compression method, and applied it to experimental quantum research for the first time.

“Audio signals have natural patterns which can be compressed to a vastly smaller size without a significant quality loss: this means we now store on a single CD what used to take hundreds. In the same way, compressive sensing now allows us to drastically simplify the measurement of quantum systems,” said Dr Alireza Shabani, the study’s main author from Princeton University.

“A common example for data compression is a Sudoku puzzle: only a few numbers will allow you to fill in the whole grid. Similarly, we can now estimate the behaviour of a quantum device from just a few key parameters,” said co-author Dr Robert Kosut from SC Solutions, Inc., who developed the algorithm with Dr Shabani, Dr Masoud Mohseni (MIT) and Professor Hershel Rabitz (Princeton University).

The researchers tested their compressive sensing algorithm on a photonic two-qubit quantum computer built at UQ, and demonstrated they could obtain high-fidelity estimates from as few as 18 measurements, compared to the 240 normally required.

The team expects its technique could be applied in a wide range of architectures including quantum-based computers, communication networks, metrology devices and even biotechnology.

To summarize, we have performed process tomography of a two-qubit quantum gate with just 18 measurement configurations out of a potential 576 (which we call an overcomplete set). The compressive sensing algorithm therefore offers a huge reduction in resources and time, even for the really small-scale lab demonstrations we're working on at the moment.

It should be noted that, at the time of writing, there is one other (theory) proposal on quantum tomography with compressive sensing. The paper by D. Gross et al. explicitly treats quantum state tomography, which is not quite the same as process tomography.  However, according to the authors, and in particular Steve Flammia, who happened to visit UQ recently, their algorithm can be extended to process tomography in a straightforward manner.

The main difference between the two methods is, in a nutshell, the following: our method scales as O(s log d), where s indicates the sparsity of the process matrix in a chosen basis and d the dimension of the quantum system. The method by Gross et al. scales as O(r d^2 log d), where r is the rank of the quantum state or, by extension, the quantum process.
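To get a rough feel for what these scalings mean, here is a back-of-the-envelope comparison. Constants are ignored, and the sparsity and rank values are illustrative assumptions for a near-ideal gate, not numbers from either paper.

```python
# Rough comparison of measurement-count scalings for n-qubit process tomography.
# Constants are dropped; s and r below are illustrative assumptions (near-ideal gate).
import math

for n_qubits in (2, 4, 8):
    d = 2 ** n_qubits            # Hilbert-space dimension
    full_qpt = d ** 4            # standard process tomography: ~d^4 configurations
    s = 32                       # assumed sparsity in a basis close to the target process
    r = 1                        # assumed rank of a near-unitary process
    cs_sparse = s * math.log(d)            # ~ s log d     (our method, known basis)
    cs_lowrank = r * d ** 2 * math.log(d)  # ~ r d^2 log d (Gross et al., basis-free)
    print(n_qubits, full_qpt, round(cs_sparse), round(cs_lowrank))
```

For 8 qubits the d^4 term is already over four billion, which is where the "over a billion measurements" figure in the press release comes from.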

At first glance, our method scales more favorably. This is indeed the case, but only when the process basis is known, because a process will only be maximally sparse in its eigenbasis. The algorithm is therefore best applied to the certification of a device with a defined target process. The method by Gross et al., in contrast, can be applied to black-box processes, because rank is basis-independent.

The two methods are therefore, as I like to think, very complementary, and should both be further investigated as there are still plenty of open questions. One promising, and maybe not immediately obvious feature we found during our tests was that the process estimates returned by the compressive sensing algorithm allowed us to improve our gates in practice. This is a key requirement for any efficient tomographic estimation, and can thus be seen as a successful litmus test for compressive sensing methods entering the field of quantum information processing.


Phone-a-referee

The referee reports are in and you're faced with a familiar situation: they are positive, hooray, but there is an ambiguous suggestion which you don't quite understand. What follows is a lot of second-guessing, a number of meetings with your co-authors, lengthy editing of the paper to implement whatever you think the referees wanted you to do, a letter to the editor in which you explain that you didn't quite understand, but that you assumed it was X and that you tried to address it as best as you could, and so on.

As often as not, you assumed wrongly, which leads to another round in which the referees tell you that what they in fact meant was not X but (often again in ambiguous terms) Y, and that you had better fix Y this time, or else…

Another familiar situation is that you're the referee. You're reviewing a paper and you think it is great, but the explanation offered by the authors is quite unclear. You're not quite sure whether that is due to your lack of expertise in that exact field or whether it's just not very well written. You try your best to express your concerns, but again, it's not exactly your field, so how can you be expected to give accurate advice? And besides, you have better things to do than these people's homework.

I don't know how much research time, and thus funding money, goes down the drain in the ensuing prolonged review process, but it must be significant. So what is the solution to all this? In my opinion it's quite simple: allow the referees and the authors to talk to each other. Just imagine how gloriously straightforward it would be in example 1 to contact a reviewer and ask what precisely they were suggesting. In example 2, a couple of questions would be enough to find out whether the paper needed improvement or your understanding of the physics did.

The refereeing process, of course, is single-blind, so the email contact would have to be handled by the journal, which nowadays has a powerful online portal for publication management anyway. How do you avoid abuse, e.g. authors bombarding their referees with messages? You install a unilateral opt-out system, or limit the number of emails that can be exchanged per refereeing round. Even people who get arrested are allowed a phone call, right? Or people on TV game shows. Why not scientists?

I think this is a brilliant idea, but is it ever going to be implemented? The handful of readers who randomly stumble upon my idle musings on this website will probably not take to the streets and bring about the required revolution. I’m going to use a trick. Last time I posted about the upcoming APS open-access journal Physical Review X (PRX), the APS contacted me within hours to point out a factual error in my post. This wasn’t because their editors are avid readers of this blog but because they had set up a Google alert for keywords involving their new journal. In the hope that this alert is still active, I now invoke the power of Google to get my idea across:

Dear anonymous APS editor or underling who happens to check the hundreds of alerts which are created for PRX every minute,

I hereby suggest implementing a limited messaging system between referees and authors. The best place to start would be your new journal, Physical Review X. I'm sure the suggested feature would create a lot of interest in the community.



New open-access journals

For those who haven't seen it yet, there are two new journals: the American Institute of Physics' (AIP) AIP Advances and the American Physical Society's (APS) Physical Review X (PRX). Both are representative of a recent trend for traditional publishers to move to open-access, online-only publishing models. Another example is Nature Communications, a journal recently launched by the Nature Publishing Group.

AIP Advances supposedly focuses on applied physics, with the promise of rapid publication. A quick look at the papers in their first issue confirms this claim to some extent. There are fourteen papers, and the average time between submission and acceptance was around 6 weeks; the longest was 9 weeks and the shortest just one week. Given that most papers were reviewed during the Christmas break, this is certainly an achievement. It will be interesting to see whether they can keep up this speed once they receive more submissions. The sample size is not yet large enough to give a clear picture of the eventual content of AIP Advances; the term "applied" certainly does not fit all of the papers in the first issue. The fee for publication in AIP Advances is 1350 USD.

While the first issue of AIP Advances has just appeared, PRX will only announce its first call for papers later this month. Issue one is expected to appear in (northern-hemisphere) fall 2011. The scope of PRX is as broad as that of Physical Review Letters itself, so all fields of physics are covered, including some which formerly might not have fit into the more traditional APS Physical Review publications, especially interdisciplinary research. This sounds a little bit like the scope of Nature Communications, and I can imagine that the APS is trying to position PRX to counter the success of both Nature Communications and the increasingly popular New Journal of Physics, which also has an open-access model. Publishing in PRX will hit your (or your funding agency's) wallet for 1500 USD.

In addition, there is now the option to choose open access for most APS journals. Such papers will be published under a Creative Commons license. The fees are 1700 USD for Physical Review papers and 2700 USD for Physical Review Letters.

EDIT: In my original post, I had foolishly assumed that the AIP was part of APS publishing. Gene Sprouse, the APS editor in chief, has kindly pointed out this mistake to me.