Category Archives: News

Quantum steering without the detection loophole

Our paper “Conclusive quantum steering with superconducting transition edge sensors” has now been published in Nature Communications.

Quantum steering was introduced by fellow Austrian Erwin Schrödinger in 1935, alongside the much better known quantum phenomenon of entanglement. It describes the ability of one party to steer, through her choice of measurement, the state held by a second party with whom she shares an entangled quantum state. The interesting feature of quantum steering is that it can demonstrate quantumness in a regime weaker than that needed to disprove local realistic theories.

The original demonstrations of quantum steering involved experimental tests of the Einstein-Podolsky-Rosen (EPR) paradox. Since then, Howard Wiseman (one of our co-authors) and others have given the concept of steering a facelift, reformulating it in the context of quantum information.

Our contribution was to experimentally demonstrate steering while closing the so-called detection loophole, which would otherwise allow one of the two parties to cheat the other into thinking their state was steered properly, even though it wasn't. The detection loophole also pops up in Bell tests performed with photons; it originates in our inability to detect single photons with 100% efficiency. In practice, a conclusive demonstration of quantum steering is performed by violating a steering inequality, which in its simplest form requires a conditional detection efficiency of at least 50%.

This number was considered out of reach until only a few years ago, when superconducting transition edge sensors hit the scene. These detectors—pioneered by our collaborators at the National Institute of Standards and Technology (NIST) in Boulder, Colorado—are in principle capable of detecting incoming single photons with near-unity efficiency. The experimental challenge for us was to build a source of entangled photons which, combined with these detectors, would allow us to surpass the 50% efficiency limit. We succeeded, achieving an unprecedented conditional detection efficiency of 62%.
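For the curious, the conditional detection efficiency is essentially the heralding (Klyshko) efficiency familiar from photon-pair sources. Here is a minimal sketch of how one would estimate it from raw count rates (variable names and numbers are illustrative, not data from the paper):

```python
def heralding_efficiency(coincidences, heralds, accidentals=0.0):
    """Klyshko-style estimate of the conditional detection efficiency.

    coincidences : coincidence counts between the two parties' detectors
    heralds      : singles counts on the heralding detector
    accidentals  : accidental coincidences to subtract (optional)

    Returns the probability that the second party detects a photon given
    that the first party heralded one -- the quantity that must exceed 50%
    for the simplest two-setting steering test mentioned above.
    """
    return (coincidences - accidentals) / heralds

# Hypothetical counts, for illustration only (not data from the paper):
print(heralding_efficiency(coincidences=62_000, heralds=100_000))   # 0.62
```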

In parallel with our efforts, two other groups reported steering experiments which successfully closed the detection loophole. The experiment performed in Vienna (Bernhard Wittmann et al.) simultaneously closed the locality loophole (similar to our previous experiment described here), which would otherwise allow cheating via information leakage between the measurement devices of the two parties. The second experiment, performed at Griffith University, also here in Brisbane (A. Bennet et al.), demonstrated steering over a 1 km fiber channel. Both experiments were performed with conventional photon detectors, exploiting clever theory which lowers the efficiency requirements by using more measurement settings.


Hardy’s paradox and Bell inequalities in time

Extending our previous work on Leggett-Garg inequalities, we recently demonstrated the venerable Hardy paradox and the violation of a state-independent Bell inequality in a temporal scenario. Our work has now appeared in PRL.

Tests of quantum mechanics, such as the Bell inequality, are usually carried out in a spatial scenario, where measurements performed on two or more remote, spatially separated (quantum) systems reveal correlations which are stronger than those allowed in classical models.

Experimental setup for testing temporal quantum phenomena
Experimental setup for testing temporal quantum phenomena. A single photon serves as the system on which we perform two temporally separated measurements. The first measurement is non-destructive, enacted with an auxiliary photon which reads out the state of the system photon via two-photon interference in a controlled phase gate. The second measurement is then performed in a standard way.

They can, however, also be translated into a temporal scenario, where two or more measurements are performed on the same system at different times. This idea was cooked up by Tony Leggett and Anupam Garg, motivated by the quest to show that the quantum and classical world views are incompatible also for macroscopic objects. The usual assumptions of realism and locality in the spatial scenario are replaced here by (macro)realism and, because locality cannot be enforced for temporal measurements on a single object, non-invasiveness, which posits that a classical measurement on a macrorealistic system can determine the state of the system without affecting either the state or the subsequent dynamics. Just as in the spatial scenario, one can derive inequalities which reveal a disagreement between quantum mechanics and these assumptions.

In our experiment, we implemented such a temporal scenario with single photons, similar to a previous demonstration of the original Leggett-Garg inequality in our group. The experimental scheme is explained in the figure. Following an idea by Tobias Fritz, we first show that Hardy's paradox is stronger in its temporal form. Furthermore, we measure the temporal equivalent of a Bell CHSH inequality, as suggested by Caslav Brukner and colleagues. Surprisingly, the violation of this inequality does not depend on the quantum state, which means it can be maximally violated by any state, including the fully mixed state.
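To see where the state-independence comes from, here is a small numerical check (my own toy calculation, not analysis code from the paper, using the textbook CHSH setting choices): for two sequential projective spin measurements along directions a and b on a single qubit, the two-time correlator works out to a·b for any input state, so the temporal CHSH combination reaches 2√2 regardless of the state, mixed or pure.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = (X, Y, Z)

def projectors(n):
    """Projectors onto the +1/-1 eigenspaces of the spin observable n . sigma."""
    sigma_n = sum(c * P for c, P in zip(n, PAULIS))
    return [(np.eye(2) + s * sigma_n) / 2 for s in (+1, -1)]

def temporal_correlator(rho, a, b):
    """E(a, b) for a projective measurement along a followed by one along b."""
    E = 0.0
    for s, Pa in zip((+1, -1), projectors(a)):
        rho_after = Pa @ rho @ Pa                 # unnormalised post-measurement state
        for t, Pb in zip((+1, -1), projectors(b)):
            E += s * t * np.real(np.trace(Pb @ rho_after))
    return E

# A random (mixed) single-qubit state -- the result does not depend on it.
M = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
rho = M @ M.conj().T
rho /= np.trace(rho)

a1, a2 = np.array([1.0, 0, 0]), np.array([0, 0, 1.0])
b1 = np.array([1.0, 0, 1.0]) / np.sqrt(2)
b2 = np.array([1.0, 0, -1.0]) / np.sqrt(2)

S = (temporal_correlator(rho, a1, b1) + temporal_correlator(rho, a1, b2)
     + temporal_correlator(rho, a2, b1) - temporal_correlator(rho, a2, b2))
print(S)   # ~2.828, i.e. 2*sqrt(2), for any input state
```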

Efficient quantum process tomography via compressive sensing

Illustration of a quantum process reconstructed via compressive sensing
A quantum process matrix reconstructed via compressive sensing. Instead of potentially 576 measurement configurations, a mere 18, selected at random, suffice for a high-fidelity estimate.

We have a new paper, titled “Efficient measurement of quantum dynamics via compressive sensing”, in PRL. I have already spent a significant amount of time with my co-authors writing the paper and the press release, so I'm not going to reinvent the wheel a third time; instead, here are some excerpts from the UQ press release as an introduction:


At present it is extremely difficult to characterise quantum systems — the number of measurements required increases exponentially with the number of quantum parts. For example, an 8-qubit quantum computer would require over a billion measurements.

“Imagine that you’re building a car but you can’t test-drive it. This is the situation that quantum engineers are facing at the moment,” said UQ’s Dr Alessandro Fedrizzi, co-author of the study that was recently published in Physical Review Letters.

“We have now found a way to test quantum devices efficiently, which will help transform them from small-scale laboratory experiments to real-world applications.”

The team also includes UQ collaborators Dr Marcelo de Almeida, Professor Andrew White and PhD student Matthew Broome, as well as researchers from Princeton University, the Massachusetts Institute of Technology (MIT), and SC Solutions, Inc. The researchers adapted techniques from “compressive sensing”, a hugely successful mathematical data-compression method, and applied it to experimental quantum research for the first time.

“Audio signals have natural patterns which can be compressed to vastly smaller size without a significant quality loss: this means we now store in a single CD what used to take hundreds. In the same way, compressive sensing now allows us to drastically simplify the measurement of quantum systems,” said Dr Alireza Shabani, the study’s main author from Princeton University.

“A common example for data compression is a Sudoku puzzle: only a few numbers will allow you to fill in the whole grid. Similarly, we can now estimate the behaviour of a quantum device from just a few key parameters,” said co-author Dr Robert Kosut from SC Solutions, Inc., who developed the algorithm with Dr Shabani, Dr Masoud Mohseni (MIT) and Professor Hershel Rabitz (Princeton University).

The researchers tested their compressive sensing algorithm on a photonic two-qubit quantum computer built at UQ, and demonstrated they could obtain high-fidelity estimates from as few as 18 measurements, compared to the 240 normally required.

The team expects its technique could be applied in a wide range of architectures including quantum-based computers, communication networks, metrology devices and even biotechnology.

To summarize, we have performed process tomography of a two-qubit quantum gate with just 18 measurement configurations out of a potential 576 (which we call an overcomplete set). The compressive sensing algorithm therefore offers a huge reduction in resources and time, even for the small-scale lab demonstrations we're working on at the moment.

It should be noted that, at the time of writing, there is one other (theory) proposal on quantum tomography with compressive sensing. The paper by D. Gross et al. explicitly treats quantum state tomography, which is not quite the same as process tomography.  However, according to the authors, and in particular Steve Flammia, who happened to visit UQ recently, their algorithm can be extended to process tomography in a straightforward manner.

The main difference between the two methods is, in a nutshell, the following: our method scales as O(s log d), where s indicates the sparsity of the process matrix in a chosen basis and d the dimension of the quantum system. The method by Gross et al. scales as O(r d^2 log d), where r is the rank of the quantum state or, by extension, the quantum process.

At first glance, our method scales more favorably. This is indeed the case, but only when the process basis is known, because a process will only be maximally sparse in its eigenbasis. Our algorithm is therefore best applied to the certification of a device with a defined target process. The method by Gross et al., in contrast, can be applied to black-box processes, because rank is basis-independent.
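To get a feeling for why so few settings can suffice, here is a toy sparse-recovery sketch (my own illustration in the spirit of compressive sensing, not the reconstruction algorithm used in our paper): a vector with only a few nonzero entries in a known basis is recovered exactly from a modest number of random linear measurements, here via orthogonal matching pursuit.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit on the selected columns
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
d, s, m = 256, 4, 64            # ambient dimension, sparsity, number of measurements
x_true = np.zeros(d)
x_true[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, d)) / np.sqrt(m)   # random measurement settings
y = A @ x_true                                  # the few numbers actually measured

x_est = omp(A, y, s)
print(np.linalg.norm(x_est - x_true))           # typically ~1e-15: exact recovery
```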

The two methods are therefore, as I like to think, very complementary, and both should be investigated further, as there are still plenty of open questions. One promising, and maybe not immediately obvious, feature we found during our tests was that the process estimates returned by the compressive sensing algorithm allowed us to improve our gates in practice. This is a key requirement for any efficient tomographic estimation and can thus be seen as a successful litmus test for compressive sensing methods entering the field of quantum information processing.


Phone-a-referee

The referee reports are in and you're faced with a familiar situation: they are positive, hooray, but there is an ambiguous suggestion which you don't quite understand. What follows is a lot of second-guessing, a number of meetings with your co-authors, lengthy editing of the paper to implement whatever you think the referees wanted you to do, a letter to the editor in which you explain that you didn't quite understand but assumed the point was X and tried to address it as best you could, and so on.

As often as not, you guessed wrong, which leads to another round in which the referees tell you that what they in fact meant was not X but (often, again, in ambiguous terms) Y, and that you had better fix Y this time, or else…

Another familiar situation is that you're the referee. You're reviewing a paper and you think it is great, but the explanation offered by the authors is quite unclear. You're not sure whether that is due to your lack of expertise in that exact field or whether it's just not very well written. You try your best to express your concerns, but again, it's not exactly your field, so how can you be expected to give accurate advice? And besides, you have better things to do than these people's homework.

I don't know how much research time, and thus funding money, goes down the drain in the ensuing prolonged review process, but it must be significant. So what is the solution? In my opinion it's quite simple: allow the referees and the authors to talk to each other. Just imagine how gloriously straightforward it would be, in example 1, to contact a reviewer and ask what precisely they were suggesting. In example 2, a couple of questions would be enough to find out whether it was the paper that needed improvement or your understanding of the physics.

The refereeing process, of course, is single-blind. The email contact would have to be handled by the journal, which nowadays has a powerful online portal for publication management anyway. How do you avoid abuse, e.g. authors bombarding their referees with messages? You install a unilateral opt-out system, or limit the number of emails that can be exchanged per refereeing round. Even people who get arrested are allowed a phone call, right? Or people on TV game shows. Why not scientists?

I think this is a brilliant idea, but is it ever going to be implemented? The handful of readers who randomly stumble upon my idle musings on this website will probably not take to the streets and bring about the required revolution. I’m going to use a trick. Last time I posted about the upcoming APS open-access journal Physical Review X (PRX), the APS contacted me within hours to point out a factual error in my post. This wasn’t because their editors are avid readers of this blog but because they had set up a Google alert for keywords involving their new journal. In the hope that this alert is still active, I now invoke the power of Google to get my idea across:

Dear anonymous APS editor or underling who happens to check the hundreds of alerts which are created for PRX every minute,

I hereby suggest implementing a limited messaging system between referees and authors. The best place to start would be your new journal, Physical Review X. I'm sure the suggested feature would create a lot of interest in the community.



New open-access journals

For those who haven't seen it yet, there are two new journals: the American Institute of Physics' (AIP) AIP Advances and the American Physical Society's (APS) Physical Review X (PRX). Both are representative of a recent trend for traditional publishers to move to open-access, online-only publishing models. Another example is Nature Communications, a journal recently launched by the Nature Publishing Group.

AIP Advances supposedly focuses on applied physics, with a promise of rapid publication. A quick look at the papers in their first issue confirms this claim to some extent. There are fourteen papers, and the average time between submission and acceptance was around six weeks; the longest was nine weeks and the shortest just one. Given that most papers were reviewed during the Christmas break, this is certainly an achievement. It will be interesting to see whether they can keep up this speed once they receive more submissions. The sample size is not yet large enough to give a clear picture of the eventual content of AIP Advances; the term “applied” certainly does not fit all of the papers in the first issue. The fee for publication in AIP Advances is 1350 USD.

While the first issue of AIP Advances has just appeared, PRX will only announce its first call for papers later this month. Issue one is expected to appear in (northern-hemisphere) fall 2011. The scope of PRX is as broad as that of Physical Review Letters itself, so all fields of physics are covered, including some which formerly might not have fit into the more traditional APS Physical Review publications, especially interdisciplinary research. This sounds a little like the scope of Nature Communications, and I can imagine that the APS is trying to position PRX to counter the success of both Nature Communications and the increasingly popular New Journal of Physics, which also has an open-access model. Publishing in PRX will hit your (or your funding agency's) wallet for 1500 USD.

In addition, there is now the option to choose open access for most APS journals, with papers published under a Creative Commons license. The fees are 1700 USD for Physical Review papers and 2700 USD for Physical Review Letters.

EDIT: In my original post, I had foolishly assumed that the AIP was part of APS publishing. Gene Sprouse, the APS editor in chief, has kindly pointed out this mistake to me.

Spectral bi-photon wave-packet shaping

We have a new paper in Optics Express, “Engineered optical nonlinearity for quantum light sources”. We demonstrate a simple technique for longitudinally shaping biphoton wavepackets created via spontaneous parametric downconversion (SPDC).

In a standard SPDC experiment, the biphoton wavepackets have a sinc-shaped frequency spectrum. This is because the crystal has a finite length and a flat, rectangular nonlinearity profile: the nonlinear interaction between the pump beam and the crystal is turned on abruptly, to its full strength, when the pump enters the crystal, remains constant, and is turned off to zero when the pump exits the crystal. In the frequency domain, this step-like profile transforms into a sinc shape.
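As a quick numerical illustration of this Fourier relationship (my own sketch, not code from the paper): a flat, rectangular nonlinearity profile produces a sinc-shaped spectral amplitude with pronounced side lobes, whereas a Gaussian profile produces a smooth, lobe-free spectrum.

```python
import numpy as np

L = 1.0                                   # crystal length (arbitrary units)
z = np.linspace(-4 * L, 4 * L, 8192)      # longitudinal coordinate, zero-padded

rect = (np.abs(z) <= L / 2).astype(float)      # uniform (rectangular) profile
gauss = np.exp(-z**2 / (2 * (L / 4) ** 2))     # Gaussian nonlinearity profile

# Up to constants, the spectral (phase-matching) amplitude is the Fourier
# transform of the longitudinal nonlinearity profile.
S_rect = np.abs(np.fft.fftshift(np.fft.fft(rect)))
S_gauss = np.abs(np.fft.fftshift(np.fft.fft(gauss)))
S_rect /= S_rect.max()
S_gauss /= S_gauss.max()

def side_lobes(S, thresh=1e-3):
    """Count local maxima above `thresh`, excluding the central peak."""
    peaks = (S[1:-1] > S[:-2]) & (S[1:-1] > S[2:]) & (S[1:-1] > thresh)
    return int(peaks.sum()) - 1

# The rectangular profile gives a sinc with many side lobes above threshold;
# the Gaussian profile gives a smooth, lobe-free spectrum.
print(side_lobes(S_rect), side_lobes(S_gauss))   # several hundred vs. 0
```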

This spectral shape has a detrimental effect on the purity of the downconversion photons, as shown here. The purity determines the quality of two-photon interference between photons generated in independent SPDC sources. This, in turn, directly affects the performance of, e.g., photonic quantum gates.

Microscope image of our custom-poled KTP crystal.
Microscope image of our custom-poled KTP crystal. The transition from first-order poling on the right to second-order poling on the left is clearly visible.

In our paper, we solve this problem by longitudinally engineering the effective nonlinearity in a periodically poled KTP crystal. We give the crystal a Gaussian nonlinearity profile by patterning it with discrete sections of increasingly higher-order poling. The pump beam entering the crystal first encounters a section with very high-order poling, which effectively means it experiences a very weak nonlinearity. This effective nonlinearity then increases in discrete steps, peaks in the crystal center and drops off again symmetrically. We confirmed that our method works by measuring two-photon interference patterns, which are indeed Gaussian instead of the triangular shape one normally obtains for sinc-shaped biphotons.
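As an aside, the reason higher-order poling acts as a weaker nonlinearity is a standard quasi-phase-matching result (textbook material, not a number from our paper): for a 50% duty cycle, an m-th order poled section has an effective nonlinearity d_eff(m) = 2 d_eff / (m π) for odd m. Stepping down through the poling orders towards the crystal center therefore ramps the interaction strength up, and stepping back up ramps it down again.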

What surprised us was that, even though the crystal consists of dozens of sections with different poling orders, the agreement of the measurements with our domain-by-domain theory predictions was excellent. In conclusion, the technique clearly works and has the big advantage that the usually hard work of spectral shaping is outsourced to the crystal manufacturer, who did an excellent job in our case.

UQ back from flood break

After one week of closure due to flooding, UQ is now almost back to full speed. The lower-lying areas were hit pretty hard, but the essential buildings and services survived unscathed. Here's a Courier Mail video from UQ and the surrounding suburbs. UQ footage starts at 3:31, but the whole video is interesting, really.

Closing the freedom-of-choice loophole in a Bell test

Our paper “Violation of local realism with freedom of choice” has just been published in the Proceedings of the National Academy of Sciences (PNAS).

An explanatory post on a paper about Bell inequalities usually starts by recounting the history and controversy of entanglement, beginning in 1935. I'll spare you this part; you've probably read it countless times before. So let's cut to the chase: we experimentally closed the so-called (as of now) freedom-of-choice loophole.

Freedom of choice is related to, but not quite the same as, its better-known cousin, the locality loophole. The locality loophole arises when the measurement results on one system can be influenced by a measurement, or by the setting-choice event (the choice of which measurement will be performed), on a second, spatially separated system, and vice versa. This causal influence is defined within the framework of special relativity: an event can influence another only via signals travelling at or below the speed of light. Experimentally, independence can be guaranteed by locating these events outside each other's future light cones, i.e. further apart than a light-speed signal could travel within the time difference between the involved events.

Freedom of choice means that the setting choices must themselves be free of any potential influence from the event which created the two systems in the first place. In other words, similar to the above, the choice of measurement settings, in practice generated by a quantum random number generator (which is another crucial requirement), has to occur outside the future light cone of the event that created the two (entangled) systems and hence imprinted any hidden variables on them.
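To make the light-cone condition concrete, here is a trivial check (my own sketch with illustrative numbers, not the actual timing data from the paper): two events are space-like separated, and hence causally disconnected in special relativity, whenever their spatial separation exceeds the distance light can travel in the time between them.

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def spacelike_separated(distance_m, time_difference_s):
    """True if no light-speed signal can connect the two events."""
    return distance_m > C * abs(time_difference_s)

# Illustrative numbers only: at an inter-island distance of roughly 144 km,
# any light-speed influence needs ~0.48 ms to propagate, so two events that
# occur within, say, 0.1 ms of each other lie outside each other's light cones.
print(spacelike_separated(distance_m=144e3, time_difference_s=0.1e-3))   # True
```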

The freedom-of-choice assumption is crucial for the derivation of Bell's inequality. Interestingly though, it had until now not been addressed experimentally, and had even been somewhat overlooked in the recent literature on the topic.

The experiment itself was carried out between the islands of La Palma and Tenerife, a great place for holidays, astronomy and free-space experiments (in that order). We created entangled photons in a source at La Palma. One photon of each pair was kept at La Palma, the other sent to Tenerife, where it was received by the European Space Agency's Optical Ground Station telescope (see photo below). To close the locality and freedom-of-choice loopholes, the source, the quantum random number generators and the measurements were distributed over three carefully selected locations, and the measurement settings were applied via fast electro-optical switches. In the end, we measured an experimental Bell value of ~2.37, well above the local realistic bound of 2.
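For reference, the Bell value quoted here is of the standard CHSH type (textbook material, not specific to this paper): S = |E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2 for any local realistic model, where the E's are correlation coefficients for the four combinations of measurement settings. Quantum mechanics allows values up to 2*sqrt(2) ≈ 2.83, so a measured value of ~2.37 sits comfortably in the classically forbidden region.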

Interesting side fact: the entangled photon source was operated at a maximum output of 2 million detectable pairs, which is AFAIK a record for a mobile, diode-pumped setup.

An equally interesting part of this paper is that we make an attempt to find a simple classification for the multitude of existing hidden variable models. For more details, read the paper.

Optical Ground Station, Tenerife, pointing in the direction of La Palma, where the quantum transmitter was located.
Optical Ground Station (OGS), Tenerife, pointing in the direction of La Palma, where the quantum transmitter was located. The green laser beam was used as a beacon for the closed-loop optical tracking system which kept the transmitter and receiver telescopes aligned to each other. This photo was taken and digitally enhanced by Thomas Herbst, University of Vienna.

Quantum Matchgates

Today, New Journal of Physics has published our paper “Matchgate quantum computing and non-local process analysis“.

Matchgates are an intriguing class of two-qubit quantum logic gates. Circuits built solely from matchgates acting on neighbouring qubits can be simulated efficiently on a classical computer. If, however, the gates are allowed to act on any two qubits (which can be achieved with a simple two-qubit SWAP), they enable universal quantum computation.

In our paper we show a simple decomposition of the somewhat mystical matchgates into better-known gates, such as single-qubit unitaries, two-qubit CNOTs and controlled-unitaries. We then implement the only non-trivial matchgate needed for universal matchgate computing—the so-called G(H,H) gate—with single photons and linear optics.
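For readers who like to see the structure explicitly, here is a small numpy sketch (my own, using the usual matchgate convention, not code from the paper): G(A, B) acts as A on the even-parity subspace spanned by |00> and |11>, and as B on the odd-parity subspace spanned by |01> and |10>, subject to the constraint det A = det B.

```python
import numpy as np

def matchgate(A, B):
    """Two-qubit matchgate G(A, B): A on span{|00>,|11>}, B on span{|01>,|10>}."""
    assert np.isclose(np.linalg.det(A), np.linalg.det(B)), "need det A == det B"
    G = np.zeros((4, 4), dtype=complex)
    G[np.ix_([0, 3], [0, 3])] = A      # even-parity block
    G[np.ix_([1, 2], [1, 2])] = B      # odd-parity block
    return G

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard, det = -1
GHH = matchgate(H, H)                           # a G(H,H) gate in this convention

print(np.allclose(GHH.conj().T @ GHH, np.eye(4)))   # unitary: True
print(GHH.round(3))
```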

Non-local fidelity map for our experimental 2-qubit matchgate process
Non-local fidelity map for our experimental 2-qubit matchgate process.

In the second part of the paper, we analyze the resulting quantum process in a novel way. We calculate 3-dimensional fidelity maps which show the overlap, maximized over local operations, between a unitary projection of our experimental process and all possible non-local two-qubit operators, parametrized in the so-called Weyl chamber. To understand exactly what that means, you'd have to be really interested in this sort of thing and read the paper. What it allows us to do is, first of all, identify error sources in our process. Second, we can create pretty pictures (see figure on the right) which give insight into the non-local properties of our process that would not be immediately obvious from standard process-analysis tools.

This work was done in collaboration with Sven Ramelow and Aephraim Steinberg, who both happened to visit us some time ago.

Single-photon downconversion

Artist's rendition of cascaded downconversion
Cascaded downconversion. A laser beam creates photon pairs in a nonlinear crystal via the process of downconversion. One photon of the created pair then pumps a second crystal and is again downconverted. The result is a photon triplet.

Sources of single-photon pairs based on the nonlinear process of spontaneous parametric downconversion are still a relatively young development. They are, however, probably one of the most successful tools of modern experimental science in terms of the massive impact they have had on the field of quantum information processing. They provided the first bright source of entangled particles, which were then used in countless proof-of-principle demonstrations that have shaped the field we're working in. Examples include violations of Bell inequalities, the first quantum state teleportation, entanglement purification, multi-partite entanglement, quantum computing in both the circuit and the cluster-state paradigm, the entire field of entangled-state quantum communication, and so on.

The hand-waving explanation of the downconversion process is that a photon from a strong laser beam, focused into the nonlinear crystal at the heart of the source, is “split” into a photon pair. The actual downconversion of a single photon, however, had never been observed before. Until now. We have just demonstrated exactly this effect – the downconversion of a single photon which was itself created as one half of a downconverted pair. The experiment was carried out at IQC, in the group of Thomas Jennewein, and was published in Nature today.