Phone-a-referee

The referee reports are in and you’re faced with a familiar situation: they are positive, hooray, but there is an ambiguous suggestion which you don’t quite understand. What follows is a lot of second-guessing, a number of meetings with your co-authors, lengthy editing of the paper to implement whatever you think the referees wanted, and a letter to the editor in which you explain that you didn’t quite understand the suggestion, but that you assumed it meant X and tried to address it as best you could, and so on.

As often as not, you assumed wrongly, which leads to another round in which the referees tell you that what they in fact meant was not X but Y (often, again, in ambiguous terms) and that you had better fix Y this time, or else…

Another familiar situation is that you’re the referee. You’re reviewing a paper and you think it is great, but the explanation offered by the authors is quite unclear. You’re not sure whether that is due to your lack of expertise in that exact field or whether it’s just not very well written. You try your best to express your concerns, but again, it’s not exactly your field, so how can you be expected to give accurate advice? Besides, you have better things to do than these people’s homework.

I don’t know how much research time, and thus funding money, goes down the drain in the ensuing prolonged review process, but it must be significant. So what is the solution? In my opinion it’s quite simple: allow the referees and the authors to talk to each other. Just imagine how gloriously straightforward it would be in example 1 to contact a reviewer and ask what precisely he was suggesting. In example 2, a couple of questions would be enough to find out whether it was the paper that needed improvement or your understanding of physics.

The refereeing process would, of course, remain single-blind. The email contact would have to be handled by the journal, which nowadays runs a powerful online portal for publication management anyway. How do you avoid abuse, e.g. authors bombarding their referees with messages? You install a unilateral opt-out system, or limit the number of emails that can be exchanged per refereeing round. Even people who get arrested are allowed a phone call, right? Or people on TV game shows. Why not scientists?

I think this is a brilliant idea, but is it ever going to be implemented? The handful of readers who randomly stumble upon my idle musings on this website will probably not take to the streets and bring about the required revolution. I’m going to use a trick. Last time I posted about the upcoming APS open-access journal Physical Review X (PRX), the APS contacted me within hours to point out a factual error in my post. This wasn’t because their editors are avid readers of this blog but because they had set up a Google alert for keywords involving their new journal. In the hope that this alert is still active, I now invoke the power of Google to get my idea across:

Dear anonymous APS editor or underling who happens to check the hundreds of alerts which are created for PRX every minute,

I hereby suggest implementing a limited messaging system between referees and authors. The best place to start would be your new journal, Physical Review X. I’m sure the suggested feature would generate a lot of interest in the community.



New open-access journals

For those who haven’t seen it yet, there are two new journals: the American Institute of Physics’ (AIP) AIP Advances and the American Physical Society’s (APS) Physical Review X (PRX). Both are representative of a recent trend of traditional publishers moving to open-access, online-only publishing models. Another example is Nature Communications, a journal recently launched by the Nature Publishing Group.

AIP Advances supposedly focuses on applied physics, with a promise of rapid publication. A quick look at the papers in their first issue confirms this claim to some extent. There are fourteen papers, and the average time between submission and acceptance was around six weeks. The longest was nine weeks and the shortest just one week. Given that most papers were reviewed during the Christmas break, this is certainly an achievement. It will be interesting to see whether they can keep up this speed once they receive more submissions. The sample is so far too small to give a clear picture of the eventual content of AIP Advances; the term “applied” certainly does not fit all of the papers in the first issue. The fee for publication in AIP Advances is 1350 USD.

While the first issue of AIP Advances has just appeared, PRX will only announce its first call for papers later this month. Issue one is expected to appear in (northern-hemisphere) fall 2011. The scope of PRX is as broad as that of Physical Review Letters itself: all fields of physics are covered, including some which formerly might not have fit into the more traditional APS Physical Review publications, especially interdisciplinary research. This sounds a little like the scope of Nature Communications, and I can imagine that the APS is positioning PRX to counter the success of both Nature Communications and the increasingly popular New Journal of Physics, which also has an open-access model. Publishing in PRX will hit your (or your funding agency’s) wallet for 1500 USD.

In addition, there is now the option to choose open access for most APS journals. Such papers are published under a Creative Commons license. The fees are 1700 USD for Physical Review papers and 2700 USD for Physical Review Letters.

EDIT: In my original post, I had foolishly assumed that the AIP was part of APS publishing. Gene Sprouse, the APS editor in chief, has kindly pointed out this mistake to me.

Spectral bi-photon wave-packet shaping

We have a new paper in Optics Express, “Engineered optical nonlinearity for quantum light sources“. We demonstrate a simple technique for longitudinally shaping bi-photon wavepackets created via spontaneous parametric downconversion (SPDC).

In a standard SPDC experiment, the wavepackets have a sinc-shaped frequency spectrum. This is because the crystal has a finite length and a rectangular nonlinearity profile—the nonlinear interaction between the pump beam and the crystal turns on abruptly, at full strength, when the pump enters the crystal, remains constant throughout, and drops back to zero when the pump exits. In the frequency domain, this step-function profile Fourier-transforms into a sinc shape.
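The Fourier argument is easy to check numerically: a rectangular profile yields a sinc-shaped spectrum with side lobes, while a Gaussian profile yields a lobe-free Gaussian spectrum. A minimal sketch in illustrative units (not our experimental parameters):

```python
import numpy as np

# Longitudinal coordinate along the crystal (arbitrary units)
z = np.linspace(-5, 5, 2048)
rect = (np.abs(z) <= 1).astype(float)   # abrupt on/off nonlinearity
gauss = np.exp(-z**2)                   # smooth Gaussian profile

# Magnitude spectra; fftshift puts zero frequency in the middle
spec_rect = np.abs(np.fft.fftshift(np.fft.fft(rect)))
spec_gauss = np.abs(np.fft.fftshift(np.fft.fft(gauss)))

c = len(z) // 2  # index of the zero-frequency component
# The rect spectrum rises again after its first null (sinc side lobes) ...
has_side_lobes = np.any(np.diff(spec_rect[c:c + 15]) > 0)
# ... while the Gaussian spectrum falls off monotonically.
falls_monotonically = np.all(np.diff(spec_gauss[c:c + 15]) < 0)
```

The side lobes of the sinc are exactly what degrades the spectral purity discussed below; the Gaussian profile removes them.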

This spectral shape has a detrimental effect on the purity of the downconversion photons, as shown here. The purity determines the quality of two-photon interference between photons generated in independent SPDC sources. This, in turn, directly affects the performance of, e.g., photonic quantum gates.

Microscope image of our custom-poled KTP crystal. The transition from first-order poling on the right to second-order poling on the left is clearly visible.

In our paper, we solve this problem by longitudinally engineering the effective nonlinearity in a periodically poled KTP crystal. We give the crystal a Gaussian nonlinearity profile by patterning it with discrete sections of increasingly higher-order poling. The pump beam entering the crystal first encounters a section with very high-order poling, which effectively means it experiences a very weak nonlinearity. The effective nonlinearity then increases in discrete steps, peaks at the crystal center, and drops off symmetrically. We confirmed that our method works by measuring two-photon interference patterns, which are indeed Gaussian rather than the triangular shape normally obtained for sinc-shaped biphotons.
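To see why higher-order poling gives a weaker effective nonlinearity: in standard quasi-phase-matching theory, an mth-order poled section (odd m, 50% duty cycle) contributes an effective nonlinearity proportional to 1/m. A toy sketch of how one could pick poling orders to approximate a Gaussian profile—this is an illustration of the idea, not the actual design procedure from the paper:

```python
import numpy as np

# Section centers along the crystal and the target Gaussian profile
z = np.linspace(-1, 1, 21)
target = np.exp(-(z / 0.5) ** 2)  # desired normalized effective nonlinearity

orders = []
for t in target:
    m = max(1, int(round(1 / t)))  # 1/m should approximate the target value
    if m % 2 == 0:
        m += 1  # even QPM orders vanish for a 50% duty cycle
    orders.append(m)

achieved = np.array([1 / m for m in orders])
# Weakest (highest-order) poling at the crystal ends, first order at the center
```

The resulting order sequence is symmetric, starts high at the crystal ends, and drops to first-order poling at the center, mirroring the stepwise profile described above.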

What surprised us was that, even though the crystal consists of dozens of sections with different polings, the agreement of the measurements with our domain-by-domain theory predictions was excellent. In conclusion, the technique clearly works and has the big advantage that the usually hard work of spectral shaping is outsourced to the crystal manufacturer, who did an excellent job in our case.

UQ back from flood break

After one week of closure due to flooding, UQ is now almost back to full speed. The lower-lying areas were hit pretty hard, but the essential buildings and services survived unscathed. Here’s a Courier Mail video from UQ and the surrounding suburbs. The UQ footage starts at 3:31, but the whole video is interesting, really.

Closing the freedom-of-choice loophole in a Bell test

Our paper “Violation of local realism with freedom of choice” has just been published in the Proceedings of the National Academy of Sciences (PNAS).

An explanatory post on a paper about Bell inequalities usually starts by recounting the history and controversy of entanglement, beginning in 1935. I’ll spare you that part; you’ve probably read it countless times before. So let’s cut to the chase: we experimentally closed the so-called freedom-of-choice loophole.

Freedom of choice is related to, but not quite the same as, its better-known cousin, the locality loophole. The locality loophole arises when the measurement result on one system can be influenced by the measurement, or by the setting-choice event (the choice of which measurement will be performed), on a second, spatially separated system, and vice versa. Causal influence is defined within the framework of special relativity: an event can influence another via signals travelling at or below the speed of light. Experimentally, independence can be guaranteed by locating these events outside each other’s future light cones, i.e. further apart than a light-speed signal can travel within the timing difference between the events.
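The space-like separation condition fits in one line: two events cannot influence each other if their spatial separation exceeds the distance light covers in the time between them. A minimal sketch (the 144 km figure is only an illustrative order of magnitude for an inter-island link like ours):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def spacelike_separated(distance_m: float, dt_s: float) -> bool:
    """True if two events are outside each other's light cones."""
    return distance_m > C * abs(dt_s)

# Illustrative numbers: light needs ~480 microseconds to cross ~144 km,
# so events closer in time than that cannot have signalled each other.
print(spacelike_separated(144e3, 100e-6))  # True
print(spacelike_separated(144e3, 1e-3))    # False
```

In the experiment, the analogous bookkeeping has to account for the actual locations and timings of the emission, choice, and measurement events.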

Freedom of choice means that the setting choices must themselves be free of any potential influence by the event which created the two systems in the first place. In other words, similar to above, the choice of measurement settings, in practice generated by a quantum random number generator (which is another crucial requirement), has to occur outside of the future light cone of the event that created the two (entangled) systems and hence imprinted the hidden variable on them.

Freedom of choice is a crucial assumption in the derivation of Bell’s inequality. Interestingly though, it had until now not been addressed experimentally, and had even been somewhat overlooked in the recent literature on this topic.

The experiment itself was carried out between the islands of La Palma and Tenerife, a great place for holidays, astronomy and free-space experiments (in that order). We created entangled photons in a source on La Palma. One photon of each pair was kept at La Palma; the other was sent to Tenerife, where it was received by the European Space Agency’s Optical Ground Station telescope (see photo below). To close the locality and freedom-of-choice loopholes, the source, the quantum random number generators, and the measurements were distributed over three carefully selected locations. The measurement settings were applied via fast electro-optical switches. Eventually, we measured an experimental Bell value of ~2.37, well above the local-realistic bound of 2.
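For reference, the CHSH form of Bell’s inequality bounds a combination S of four correlation values by 2 for any local hidden-variable model, while quantum mechanics reaches 2√2 ≈ 2.83 at the optimal settings. A quick sanity check, assuming the ideal correlation function E(a, b) = cos 2(a − b) of a maximally polarization-entangled pair:

```python
import numpy as np

def E(a, b):
    # Ideal polarization correlation of a maximally entangled (Phi+) pair
    return np.cos(2 * (a - b))

# Standard optimal CHSH analyzer settings (radians)
a, a2 = 0.0, np.pi / 4
b, b2 = np.pi / 8, 3 * np.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # 2*sqrt(2) ~ 2.83, above the local-realistic bound of 2
```

Real measured values, like our ~2.37, fall below the ideal 2√2 because of experimental imperfections, but remain well above the classical bound.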

Interesting side fact: the entangled photon source was operated at a maximum output of 2 million detectable pairs, which is AFAIK a record for a mobile, diode-pumped setup.

An equally interesting part of this paper is that we make an attempt to find a simple classification for the multitude of existing hidden variable models. For more details, read the paper.

Optical Ground Station (OGS), Tenerife, pointing in the direction of La Palma, where the quantum transmitter was located. The green laser beam was used as a beacon for the closed-loop optical tracking system which kept the transmitter and receiver telescopes aligned to each other. This photo was taken and digitally enhanced by Thomas Herbst, University of Vienna.

Quantum Matchgates

Today, New Journal of Physics has published our paper “Matchgate quantum computing and non-local process analysis“.

Matchgates are an intriguing class of two-qubit quantum logic gates. Circuits built solely from matchgates acting on neighbouring qubits can be simulated efficiently on a classical computer. If, however, the gates are allowed to act on any two qubits (which can be arranged with simple two-qubit SWAPs), they enable universal quantum computing.

In our paper we show a simple decomposition of the somewhat mystic matchgates into better-known gates: single-qubit unitaries, two-qubit CNOTs, and controlled-unitaries. We then implement the only non-trivial matchgate needed for universal matchgate computing—the so-called G(H,H) gate—with single photons and linear optics.
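To make the structure concrete: a matchgate G(A, B) places a 2×2 unitary A on the even-parity subspace span{|00⟩, |11⟩} and B on the odd-parity subspace span{|01⟩, |10⟩}, subject to det A = det B. A small sketch constructing G(H, H) numerically (an illustration of the standard definition, not code from the paper):

```python
import numpy as np

def matchgate(A, B):
    """Embed A on span{|00>,|11>} and B on span{|01>,|10>}."""
    assert np.isclose(np.linalg.det(A), np.linalg.det(B)), "need det A = det B"
    G = np.zeros((4, 4), dtype=complex)
    G[np.ix_([0, 3], [0, 3])] = A  # even-parity block
    G[np.ix_([1, 2], [1, 2])] = B  # odd-parity block
    return G

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard, det = -1
G = matchgate(H, H)

# G(H,H) is unitary and maps |00> to (|00> + |11>)/sqrt(2)
assert np.allclose(G.conj().T @ G, np.eye(4))
```

The determinant condition is what separates the classically simulatable matchgate set from arbitrary two-qubit block-diagonal unitaries.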

Non-local fidelity map for our experimental 2-qubit matchgate process.

In the second part of the paper, we analyze the resulting quantum process in a novel way. We calculate three-dimensional fidelity maps which show the overlap, maximized over local operations, between a unitary projection of our experimental process and all possible non-local two-qubit operators, parametrized in the so-called Weyl chamber. To understand what that means in detail, you’d have to be really interested in this sort of thing and read the paper. What it allows us to do is, first of all, identify error sources in our process. Second, we can create pretty pictures (see figure on the right) which give insight into the non-local properties of the process that would not be immediately obvious from standard process analysis tools.

This work was done in collaboration with Sven Ramelow and Aephraim Steinberg, who both happened to visit us some time ago.

Single-photon downconversion

Artist’s rendition of cascaded downconversion. A laser beam creates photon pairs in a nonlinear crystal via the process of downconversion. One photon of the created pair then pumps a second crystal and is again downconverted. The result is a photon triplet.

Photon-pair sources based on the nonlinear process of spontaneous parametric downconversion are still a relatively young development. They are, however, probably one of the most successful tools of modern experimental science in terms of the massive impact they have had on the field of quantum information processing. They provided the first bright source of entangled particles, which were then used in countless proof-of-principle demonstrations that have shaped the field we’re working in. Examples include violations of Bell inequalities, the first quantum state teleportation, entanglement purification, multi-partite entanglement, quantum computing in both the circuit and the cluster-state paradigm, the entire field of entangled-state quantum communication, and so on.

The hand-waving explanation of the downconversion process is that a photon from a strong laser beam, focused into the nonlinear crystal at the heart of the source, is “split” into a photon pair. However, the actual downconversion of a single photon had never been observed before. Until now. We have just demonstrated exactly this effect: the downconversion of a single photon, which was itself created as one half of a downconverted pair. The experiment was carried out at IQC, in the group of Thomas Jennewein, and was published in Nature today.

810 nm QKD in telecom fibers

We have a new paper, ‘Quantum entanglement distribution with 810 nm photons through telecom fibers‘, in Applied Physics Letters as of today (also available on the arXiv here). The experiment was done in Thomas Jennewein’s group at IQC in Canada and demonstrates that entangled photons at 810 nm can be transmitted through standard 1550 nm telecom fibers with reasonably high fidelity.

Most quantum information processing experiments are performed at wavelengths around 810 nm, where silicon avalanche photodetectors are most efficient. The standard wavelength in the telecom industry is, however, 1550 nm, where fiber loss is minimal. In our new experiment we show that, even though 1550 nm fibers are bi-modal at 810 nm, they can still be used for the transmission of entanglement at that shorter wavelength. As it turns out, the cross-talk between the two supported modes in the fiber is not critical, and simple mode filtering, either temporal or spatial, yields a high-fidelity signal. In other words, for medium-distance quantum information experiments, you might just as well use the much cheaper (and easily available) telecom fibers instead of expensive custom single-mode fibers for shorter wavelengths.

This idea goes back to efforts by Paul Townsend and then Gerald Buller and co-workers, who have pioneered high clock-rate quantum key distribution protocols in telecom fibers with weak coherent states at 850 nm (see this recent paper, for example).

The fact that telecom fibers are bi-modal around 810 nm and that the cross-talk between these modes is low raises the question of whether one could transmit spatially encoded qudits through standard telecom fibers, which would allow QKD with higher coding density. This is certainly a long stretch in practical terms. Interestingly, the group of Han Woerdman in Leiden has demonstrated just such an experiment in a hollow-core fiber, coincidentally pre-published on the arXiv the same day as our paper, so it’s definitely an idea worth thinking about.

The conference session chair

QCMC is over and the physics entourage has moved on to the ICAP conference in Cairns or back home. Watching all those talks has prompted me to share my thoughts on chairing a conference session. Here they are:

1. Your Speakers

The job of a session chair is really not that hard. It therefore always puzzles me that some chairs don’t know who the speakers in their session are, how to pronounce their names, or what they are talking about. So please: memorize the list of speakers and try to talk to them before the session starts. It’s a real bonus to know the title of a talk instead of awkwardly reading it off the first slide. A particularly nice touch is to actually introduce the speakers: where they are from, what they are working on, and so on.

2. Question Time

When a presentation is over, the chair usually prompts the audience for questions. If there aren’t any, try not to embarrass the speaker by repeatedly asking for them, e.g.: “No questions? Really, no questions? Maybe you there, at the back? Ah, you were just scratching your nose. Well obviously, the talk was clear enough, haha.”

Often, the chair will himself come up with a question to fill this dreaded silence from the audience. This is not bad as such; the chair is supposed to be an expert on the session topic, after all. However, it should still be a relevant question, one which may actually have come up during the talk. Alternatively, many chairs invent token questions from the title and abstract of the respective talks, but these can usually be spotted easily. They can also backfire quickly, such as when the speaker doesn’t understand your question and asks you to clarify it, thereby exposing that you have no idea (as witnessed at QCMC). So: if you have a good question, ask it. Maybe even ask first, before turning to the audience; that might break the ice. If you don’t, just let it go; the next speaker will thank you for the extra 30 seconds.

Another unnecessary habit is telling the audience that there is time for “one or two short questions”. A question may be short, but the answer might not be. And please, don’t interrupt a discussion just because you’re one minute late and the coffee is waiting outside. It’s not the speaker’s fault that he came last in the session. There will always be more coffee, and discussion is what conferences are for.

3. The Equipment

Know how to use the presentation hardware. Usually you will have tech support, but what if you don’t? It’s therefore a good idea to check whether the speakers have all tried out their talks, be it on the conference system or their own computers. Also, have a laser pointer ready if at all possible.

4. The Audience

Have you noticed that there is someone in the audience who will have a question, or rather a long and not very relevant comment, for every single speaker? It is within your rights to ignore him.

Finally, a request to the audience proper. Some speakers’ knowledge of the conference language often does not extend far beyond giving their talk, which is actually quite a brave thing to do. If it becomes clear during the first question that they struggle to understand, please don’t go on torturing them.

These are my thoughts on the topic. A Google search has just, unsurprisingly, revealed that there are excellent guides for session chairs on the web. This one here, for example.