Distributing entanglement without actually sending it

Distributing entanglement with separable carriers

The protocol: Alice wants to establish entanglement with Bob. Surprisingly, they can do so without actually communicating entanglement. Figure courtesy Margherita Zuppardo.

We have a new PRL paper online, where we demonstrate that two parties can establish entanglement between their labs without directly communicating any entanglement between them. Physical Review Letters was kind enough to honor our work with an Editor’s Suggestion and an accompanying Physics Viewpoint written by Christine Silberhorn—a big thanks to the Physics editors and to Christine!

The idea goes back to a paper by Toby ‘Qubit’ Cubitt and co-authors, but it took a decade for people to figure out that the resource that allows Alice and Bob to achieve this task is—at least to some degree—quantum discord. This was elaborated in a series of theory papers by Streltsov et al., Alastair Kay, and our collaborators in Singapore. The protocol works as follows. Alice and Bob have separate quantum systems that they want to entangle. Alice starts by performing some local (with respect to her lab) encoding operation between her system and a carrier, and then sends Bob this carrier. Bob does some local decoding, and Alice and Bob’s systems end up entangled. Importantly, this can be achieved without ever entangling the carrier with either Alice’s or Bob’s system.

This is not only really cute from a foundational point of view, it also has practical applications. In our paper, we describe scenarios in which, at certain noise levels—either in the systems themselves or in the channel—entanglement distribution with separable carriers works better than the alternative of direct entanglement sharing. So the protocol could indeed be useful in future quantum networks.

For more information, I recommend reading Christine’s viewpoint, or Margherita’s writeup for the popular science website 2physics.

 

Compressing single photons

Coherent conversion of single photons from one frequency to another is nowadays a mature process. It can be achieved in both directions, up or down, it preserves quantum properties such as time-bin or polarization entanglement, and internal conversion efficiencies approach 100%.

The main motivation for coherent frequency conversion is to interface photonic quantum bits in optical fibers to quantum repeater nodes: fiber loss is minimal at wavelengths around 1550 nm, but the currently most efficient optical quantum memories—which form the core of quantum repeaters—operate at around 800 nm.

A crucial problem that is often swept under the carpet in high-profile conversion papers is, however, that there isn’t just a drastic difference in center wavelengths, but also a drastic difference in spectral bandwidths. Single photons used in quantum communication, in particular when produced via the widely popular process of parametric down-conversion, usually have bandwidths on the order of 100 GHz. The acceptance bandwidth of quantum memories relying on atomic transitions can, in contrast, lie in the MHz range.

Frequency conversion is therefore necessary, but by no means sufficient for interfacing these technologies. What’s far more important is the ability to match the spectra of the incoming photons and the receiver.

We address this issue in a new paper published earlier this year in Nature Photonics. The concept is relatively straightforward. A single photon is, as usual, up-converted with a strong laser pulse via sum-frequency generation in a nonlinear crystal. However, the frequency components of the single photon and the pump are carefully synchronized such that every frequency within the photon bandwidth converts to one and the same output center frequency. To achieve this, we (that is, mostly first author and experimental wizard Jonathan Lavoie, together with John Donohue and Logan Wright in Kevin Resch’s lab at IQC in Waterloo) imparted a positive frequency chirp to the photon and a matching negative chirp to the pump pulse before conversion.
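To see why matched, opposite chirps compress the spectrum, it helps to look at instantaneous frequencies: the photon’s rises linearly in time, the pump’s falls at the same rate, and since sum-frequency generation adds the two, every time slice of the pulse converts to the same output frequency. Here is a toy numerical sketch with made-up numbers (not our experimental parameters):

```python
import numpy as np

# Toy model of chirped sum-frequency generation (made-up numbers, not the
# experimental parameters). The photon carries a positive linear chirp, the
# pump a matched negative one; SFG adds the two instantaneous frequencies,
# so the chirps cancel and the whole photon bandwidth funnels into a single
# output frequency.

w_photon_0 = 370.0   # hypothetical photon centre frequency (THz)
w_pump_0 = 193.0     # hypothetical pump centre frequency (THz)
chirp = 5.0          # chirp rate (THz per time unit)

t = np.linspace(-1.0, 1.0, 201)       # time across the pulse
w_photon = w_photon_0 + chirp * t     # positively chirped photon
w_pump = w_pump_0 - chirp * t         # negatively chirped pump
w_sum = w_photon + w_pump             # instantaneous sum frequency

print("photon bandwidth:", np.ptp(w_photon), "THz")
print("pump bandwidth  :", np.ptp(w_pump), "THz")
print("sum bandwidth   :", np.ptp(w_sum), "THz (ideally zero)")
```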

The results are quite impressive: we achieved spectral compression of single photons by a factor of 40. This is still a far cry from the factors of >1000 we would need to really match a GHz photon to a MHz memory, but there is, as always, room for improvement. The conversion efficiency in our experiment wasn’t stellar either, but it could easily be increased by using techniques which do achieve near-unity efficiency.

Importantly, this type of chirped single-photon up-conversion also serves as a time-to-frequency converter, and Kevin’s group has now in fact demonstrated this by detecting time-bin qubits with ultra-short time-bin delays in the frequency domain.

A special acknowledgment goes to Sven Ramelow, without whom this research wouldn’t have happened at this time and in this form.

The publication with zero authors


I keep coming across commentary lamenting the increase in the number of authors on scientific papers (or patents), such as a piece by Philip J. Wyatt in Physics Today and related posts in the blogosphere, e.g. a very recent one by UNC Chapel Hill marine ecologist John Bruno on Seamonster. This intriguing development seems diametrically opposed to the calls for crowd-sourced science put forward by people like Michael Nielsen. The conservative single-author advocates think that more and more authors reflect an erosion of individual creativity, while the Science 2.0 crowd is convinced that more people working on a problem deliver faster, better, and more diverse science.

So which aspect is more important: the noble aspiration to individual scientific excellence or the more modern result-driven push for large-scale collaboration?

My opinion is that you can have both, and that this important question has very little to do with actual authorship. Let’s talk about authorship first. What I certainly support is banning automatic authorship, such as that often demanded by organizational heads with 50+ published papers per year. What I don’t support are the stringent guidelines suggested by some journals. Take, for example, the oft-cited rules by the International Committee of Medical Journal Editors:

Authorship credit should be based on 1) substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published. Authors should meet conditions 1, 2, and 3.

Let’s apply those rules to a hypothetical scenario: Prof A invites a long-term collaborator, researcher B. This visitor has an original idea for an experiment which can be implemented with a new setup in A’s lab. That setup has been assembled over the past 4 years by PhD student C, who completed his work but unfortunately never got to do any actual science before he had to write up his thesis. The experiment will thus be carried out by promising new PhD student D, who was introduced to the apparatus by C. Since taking the data is time-consuming, and D is new to research, the data will be analyzed by PhD student E, a real Matlab genius. For good measure, throw in postdoc F, who spends most of his time teaching, planning and supervising other projects, writing grants, etc. F will write the paper while the data is being collected and analyzed.

According to the ICMJE guidelines, the resulting paper will have zero authors, because none of our protagonists, from the researcher who had the idea, to the grunt who built the experiment, all the way to our Professor without whom none of the others would even have been there, meets the suggested criteria for scientific authorship.

I offer a much simpler criterion:

If the manuscript wouldn’t exist at this time, in this form, without person X (even if X could have been replaced by any other similarly qualified person), then person X should be an author.

Just like the ICMJE criteria above, my suggestion leaves room for interpretation. Obviously, even I don’t think that authorship should extend all the way back to Adam and Eve. But it does extend to the guy who owns the lab, created that particular line of research, hired the people involved, and financed the experiments. And it very certainly includes the researcher(s) who provided the initial idea, because an idea of sufficient quality, i.e. somewhat more specific than ‘you should look into curing cancer’, is worth more than data that could probably have been taken in a dozen other labs. (So take that, John Bruno: unless your idea was entirely obvious, you should definitely have been an author of Brian Helmuth’s Science paper.) It also obviously includes the guy who built the experiment. Unfortunately, it’s not uncommon in experimental science that a whole PhD is spent setting up an experiment from scratch with no immediate outcome, while some luckster simply walks in a few years later and starts milking the setup for results. In that case, our student would do well to arrange an agreement with the lab owner on how many papers they can get out of their work.

The best justification for my authorship criterion is exemplified by large scientific collaborations. Publications from CERN or LIGO routinely sport hundreds of authors, none of whom would qualify for individual authorship according to some official guidelines. No one in his right mind would, however, suggest that a PhD in experimental science is worth less when obtained while contributing to the most exciting scientific endeavors undertaken by humankind. In this context, suggestions (see the comments on Wyatt’s Physics Today article) of a per-author normalization of scientific indicators like the h-index are laughable. While those huge projects are certainly extreme cases, the principle is scalable: if a project benefits from more participants, then by all means they should all be there, and they should probably all be authors.

But back to the initial quandary of whether multi-author papers erode individual creativity. Will the fact that our paper has 6 authors instead of 1 have that effect? No, it won’t. Most of the creative achievement was contained in the initial idea, some in the experimental design, and the question of authorship won’t change that. Should we have left every aspect of the experiment to our new guy D in the spirit of a wholesome scientific training? Maybe, but that would have meant taking much longer to complete the research, which cannot be in the best interest of science or the taxpayer. With good supervision, student D can easily learn the components they missed out on in the time saved. It would furthermore be silly to underestimate the learning effect of sharing the process of scientific research with more experienced colleagues. Do we really want to return to yesteryear, when researchers were supposed to do everything on their own, isolated from their environment? I don’t think so. And finally, if the opportunity really arises, any aspiring academic will cherish publishing a single-author paper anyway.

The only reasonable argument I see against increasingly multi-author papers is that hiring committees will have a harder time separating truly creative minds from mere data analyzers. This problem is already mitigated by author contribution statements, as nowadays requested by major journals such as Nature and Science. It would certainly be welcome if those declarations were standardized and taken up by more journals. Beyond that, if two or three job references and an extensive interview still aren’t enough for our struggling committee, then maybe the data analyzer is actually more creative than we had thought.

In summary, anyone who contributed to scientific research should be considered an author, no matter whether their contribution was restricted to “just” data taking or any other singular aspect of the research. We will still have scientifically brilliant individuals, probably more so because of the far broader opportunities offered by larger collaborations, and if you find it harder to identify those individuals, maybe that’s your fault.

 

Sampling bosons

If ever there was a paper the linear optics community got as excited about as the now-famous KLM paper, it was Aaronson and Arkhipov’s “The computational complexity of linear optics”. Fast forward two years, and we have just published, in Science, a first experimental implementation of the BosonSampling task introduced by the two ‘As’.

This work, and a similar one by our friends and competitors in Oxford, has attracted quite a lot of attention in the science media. Here’s a (probably incomplete) list of articles about it:

Science, “New form of quantum computation promises showdown with ordinary computers”

Scientific American, “New machine bridges classical and quantum computing”

New Scientist, “Victorian counting device gets speedy quantum makeover”

arstechnica, “Can quantum measurements beat classical computers?”

Physicsworld, “‘Boson sampling’ offers shortcut to quantum computing”

photonics.com, “Rise of the BosonSampling computer”

IEEE spectrum, “New machine puts quantum computers’ utility to the test”

Physorg, “At the solstice: Shining light on quantum computers”

ABC Science, “Proving the need for quantum computers”

But there’s more. Since Andrew presented our preliminary results at last year’s QCMC conference, two other groups in Vienna and Rome also raced to get their results out and all four manuscripts appeared within a day on the arXiv.

Since the titles of our papers don’t reveal much about the differences between the results, a brief comparison might be in order. Let’s talk about the similarities first. All of our experiments sent down-converted photons through some sort of linear optical network, and we all observed three-photon interference, which is the minimum test of the BosonSampling idea. The team in Oxford also measured four-photon interference patterns, albeit in a limited sense: instead of four photons being sent into four distinct optical modes, they simply used the sporadic double-pair emissions from a single downconverter.
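For readers who want to play with the numbers: the probability of a collision-free BosonSampling detection pattern is the squared modulus of the permanent of a submatrix of the network unitary. The sketch below uses a random 6×6 unitary as a stand-in for the actual circuits (which are described in the papers) and compares the quantum prediction with what distinguishable, classical photons would give.

```python
import numpy as np
from itertools import permutations

def permanent(A):
    """Brute-force permanent; fine for the 3x3 submatrices needed here."""
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

# Hypothetical 6x6 network unitary standing in for an actual circuit
# (a random unitary from the QR decomposition of a complex Gaussian matrix).
rng = np.random.default_rng(7)
G = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
U, _ = np.linalg.qr(G)

inputs = (0, 1, 2)    # one photon in each of these input modes
outputs = (1, 3, 5)   # a collision-free detection pattern

# Indistinguishable photons: amplitude given by the permanent of the submatrix.
sub = U[np.ix_(inputs, outputs)]
p_quantum = abs(permanent(sub)) ** 2

# Distinguishable photons: permanent of the classical transition probabilities.
p_classical = permanent(np.abs(sub) ** 2)

print(f"quantum   P{outputs} = {p_quantum:.4f}")
print(f"classical P{outputs} = {p_classical:.4f}")
```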

One difference is that the groups in Oxford, Rome and Vienna realized their optical circuits with integrated waveguides, while we used a three-port fiber beamsplitter (with the polarization degree of freedom giving us a six-by-six network). The waveguides provide a stable network, but they are quite lossy, which is why we probably have the best-quality three-photon data. Another difference is that while those circuits are in principle tunable via thermal heaters, they were kept fixed in the respective experiments, whereas our circuit can easily be tuned over a large range of interesting unitaries.

An aspect which sets our work apart, and which is in my opinion important for testing the validity of BosonSampling, is that we used a different method of characterizing our photonic network. Instead of using two-photon interference for this characterization, which rests on the same assumptions as BosonSampling itself and thus does not allow independent verification of the predicted three-photon amplitudes, we used a simple classical method for characterizing unitary circuits which we recently developed.

Making light matter


Photonic split-step quantum walk implementation for the observation of topologically protected bound states.

We have a new paper in Nature Communications, Observation of topologically protected bound states in photonic quantum walks. Here’s our press release which unfortunately didn’t quite make it into the official press channels because of a fundamental disconnect between what we researchers wanted to write and the official PR guidelines at UQ:

At first glance, a donut and a coffee cup do not have much in common, except that they complement each other really well.

A second glance reveals that they share a geometrical property, their topology: the shape of one can be continuously deformed into the shape of the other.

Topology explains many phenomena in modern science: transitions between physical regions with different topology cause exotic effects such as insulators which act like conducting metals at the surface.

These effects are hard to control and study since they usually appear in complex materials where quantum particles are hard to observe. Researchers at the University of Queensland and Harvard University have simulated transitions between quantum topologies—predicted to exist, but never observed, in polymers and high-energy physics—in an experiment where light is made to act like matter.

“We observed, for the first time, bound states—where a quantum particle is trapped at a topological interface—which have long been predicted to play an important role,” says Matthew Broome, joint lead author of this work and PhD student at the University of Queensland. “It was easy to observe these trapped photons, which is usually a challenging task in materials science.”

The team at Harvard recently predicted that quantum walks can simulate systems with different topological regions. The experimentalists at Queensland persuaded single particles of light—photons—to walk through an optical network.

“Quantum walks have previously been realized in a variety of settings with ions, atoms and photons, but before our discovery nobody knew that these exciting topological phenomena could be observed with quantum walks,” says Takuya Kitagawa, joint lead author, who developed the theory with his colleagues at Harvard. “This discovery came as a complete surprise to everybody, including us.”

Furthermore, the versatile system invented by the UQ team allowed a surprising new discovery: the existence of a pair of bound states—a topological phenomenon which arises only in dynamic, time-dependent systems.

This discovery bears exciting prospects for the development of novel materials and even powerful—but so far elusive—quantum computers.

The study, “Observation of topologically protected bound states in photonic quantum walks”, by UQ’s Matthew Broome, Alessandro Fedrizzi, Ivan Kassal, and Andrew White, and Harvard’s Takuya Kitagawa, Erez Berg, Mark Rudner, Alán Aspuru-Guzik and Eugene Demler, was published in Nature Communications.

The experiment was conducted by researchers from the ARC Centre for Engineered Quantum Systems (EQuS) and the ARC Centre for Quantum Computation and Communication Technology (CQC2T) in Australia; and Harvard University, USA.
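For anyone who would like to see such a bound state without building an interferometer, the split-step quantum walk is easy to simulate numerically. Below is a minimal sketch; the coin angles are illustrative choices of mine, picked (following the phase diagram of Kitagawa et al.) so that the two halves of the line sit on opposite sides of a gap closing, rather than the values used in the experiment. Part of the walker’s probability then stays pinned to the interface instead of spreading ballistically.

```python
import numpy as np

# Minimal split-step quantum walk on a line. The second coin angle theta2
# takes different values on the two halves of the lattice; the angles below
# are illustrative (not the experimental ones), chosen so that the two bulk
# regions are separated by a gap closing and hence host a bound state at
# the interface x = mid.

N = 101
mid = N // 2
x = np.arange(N)
theta1 = np.full(N, np.pi / 2)
theta2 = np.where(x < mid, -3 * np.pi / 4, np.pi / 4)

def coin(theta):
    """Coin rotation exp(-i*theta*sigma_y/2) acting on the internal state."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

R1 = np.stack([coin(t) for t in theta1])   # site-dependent first coin
R2 = np.stack([coin(t) for t in theta2])   # site-dependent second coin

def step(psi):
    """One split step: coin(theta1), shift |up> right, coin(theta2), shift |down> left."""
    psi = np.einsum('xij,xj->xi', R1, psi)
    psi = np.stack([np.roll(psi[:, 0], +1), psi[:, 1]], axis=1)
    psi = np.einsum('xij,xj->xi', R2, psi)
    psi = np.stack([psi[:, 0], np.roll(psi[:, 1], -1)], axis=1)
    return psi

# Start the walker on the interface with coin state |up>.
psi = np.zeros((N, 2), dtype=complex)
psi[mid, 0] = 1.0
for _ in range(30):
    psi = step(psi)

prob = np.sum(np.abs(psi) ** 2, axis=1)
print("total probability:", round(float(prob.sum()), 6))
print("probability within 2 sites of the interface:", round(float(prob[mid - 2:mid + 3].sum()), 3))
```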

Two-photon quantum walk paper highlighted by New Journal of Physics

New Journal of Physics Highlights of 2011, cover page

The New Journal of Physics has for the second year in a row included one of our papers in their annual highlights. Last year, our paper on Matchgate quantum computing made it into NJP’s Best of 2010 list. This year, our work on Two-photon quantum walks in an elliptical direct-write waveguide array was voted into the Best of 2011 collection. Congratulations to first author Jimmy Owens!

Photonic ups and downs

The photons I work with have been on a roller coaster lately, having been down-converted here and now up-converted in a new paper we just published in PRA. This time, the photons didn’t have to go their own way though. They were assisted in their journey from a higher wavelength (810 nm) to a lower one (532 nm) by a strong telecom seed laser. That alone wouldn’t be big news, as many other groups have previously reported this so-called sum-frequency generation between single photons and strong seed lasers. The actual news is that we have managed to up-convert polarization-entangled photons.

Experimental setup for frequency up-conversion of entangled photons

We generated polarization-entangled photon pairs at 810 nm in a ppKTP Sagnac source and superposed one photon of each pair with a strong 1550 nm laser in two additional ppKTP crystals. The resulting 532 nm photons were still highly entangled with their original partners.

We started by generating polarization-entangled photon pairs at 810 nm, mixed one photon of each pair with a strong 1550 nm laser, and sent it through the reverse version of Paul Kwiat’s sandwich source: two nonlinear crystals quasi-phase-matched for type-I up-conversion of 810 nm + 1550 nm -> 532 nm, with orthogonal orientations. The overall efficiency of the up-conversion scheme was atrocious, but it precisely matched the theoretical expectations for our scheme, given the available pump power and limitations in geometry and optical loss. More importantly, the entanglement between the up-converted green photons and their original 810 nm partners was almost perfect.
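As a quick sanity check, energy conservation fixes the sum-frequency wavelength, and the numbers do work out to green:

```python
# Energy conservation in sum-frequency generation: 1/l_sum = 1/l_photon + 1/l_seed.
l_photon, l_seed = 810e-9, 1550e-9             # wavelengths in metres
l_sum = 1.0 / (1.0 / l_photon + 1.0 / l_seed)
print(f"sum-frequency wavelength: {l_sum * 1e9:.0f} nm")   # -> 532 nm
```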

Our results show for only the second time (the first being Nicolas Gisin’s fabulous time-bin entanglement up-conversion experiment) that entanglement is preserved in a sum-frequency experiment with a strong seed laser. As we argued in our paper, this type of coherent interconversion between wavelengths will be an important tool in the larger picture of practical quantum information processing. Nature Photonics included our paper in their March 2012 research highlights.

Quantum steering without the detection loophole

Our paper “Conclusive quantum steering with superconducting transition edge sensors” has now been published in Nature Communications.

Quantum steering was introduced by fellow Austrian Erwin Schrödinger in 1935, alongside the much better known quantum phenomenon of entanglement. It describes the ability of one party to steer, through her choice of measurement, the state held by a second party when the two share an entangled quantum state. The interesting feature of quantum steering is that it can demonstrate quantumness in a regime which is weaker than that needed to disprove local realistic theories.

Original demonstrations of quantum steering involved experimental tests of the Einstein Podolsky Rosen (EPR) paradox. Since then, Howard Wiseman (one of our co-authors) and others have given the concept of steering a facelift, reformulating it in the context of quantum information.

Our contribution was to experimentally demonstrate steering while closing the so-called detection loophole, which would otherwise allow one of the two involved parties to cheat the other into thinking their state was steered properly, even though it wasn’t. The detection loophole is an issue which also pops up in Bell-inequality tests performed with photons; it originates in our inability to detect entangled photon pairs with 100% efficiency. In practice, a conclusive demonstration of quantum steering is performed by violating a steering inequality, which in its simplest form requires a conditional detection efficiency of at least 50%.

This number was considered out of reach until only a few years ago, when superconducting transition-edge sensors hit the scene. These detectors—pioneered by our collaborators at the National Institute of Standards and Technology (NIST) in Boulder, Colorado—are in principle capable of detecting incoming single photons with near-unity efficiency. The experimental challenge for us was to build a source of entangled photons which, combined with these detectors, would allow us to surpass the 50% efficiency limit. We were quite successful at that and achieved an unprecedented conditional detection efficiency of 62%.

In parallel with our efforts, two other groups reported steering experiments which successfully closed the detection loophole. The experiment performed in Vienna (Bernhard Wittmann et al.) simultaneously closed the locality loophole (similar to our previous experiment described here), which would otherwise allow cheating via information leakage between the measurement devices used by the two parties. The second experiment, performed at Griffith University, also here in Brisbane (A. Bennet et al.), demonstrated steering over a 1 km fiber channel. Both of these experiments were performed with conventional photon detectors, exploiting clever theory which allows one to lower the efficiency requirements by using more measurement settings.

 

Hardy’s paradox and Bell inequalities in time

Extending our previous work on Leggett-Garg inequalities, we recently demonstrated the venerable Hardy paradox and the violation of a state-independent Bell inequality in a temporal scenario. Our work has now appeared in PRL.

Tests of quantum mechanics, such as the Bell inequality, are usually carried out in a spatial scenario, where measurements performed on two or more remote, spatially separated (quantum) systems reveal correlations which are stronger than those allowed in classical models.


Experimental setup for testing temporal quantum phenomena. A single photon serves as the system on which we perform two temporally separated measurements. The first measurement is non-destructive, enacted with an auxiliary photon which reads out the state of the system photon via two-photon interference in a controlled-phase gate. The second measurement is then performed in a standard way.

They can, however, also be translated into a temporal scenario, where two or more measurements are performed on the same system at different times. This idea was cooked up by Tony Leggett and Anupam Garg, motivated by the quest to show that quantum and classical world views are incompatible for macroscopic objects as well. The usual assumptions of realism and locality in the spatial scenario are here replaced with the assumptions of (macro)realism and, because locality cannot be enforced for temporal measurements on a single object, non-invasiveness, which posits that a classical measurement on a macrorealistic system can determine the state of the system without affecting either the state or the subsequent system dynamics. Just as in the spatial scenario, one can derive inequalities which reveal a disagreement between quantum mechanics and these assumptions.

In our experiment, we implemented such a temporal scenario with single photons, similar to a previous demonstration of the original Leggett-Garg inequality in our group. The experimental scheme is explained in the figure. Following an idea by Tobias Fritz, we first show that Hardy’s paradox is stronger in its temporal form. Furthermore, we measure the temporal equivalent of a Bell-CHSH inequality, as suggested by Caslav Brukner and colleagues. Surprisingly, the violation of this inequality does not depend on the quantum state, which means it can be maximally violated by any state, including the fully mixed state.
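The state independence is easy to verify numerically. The following sketch is an idealized calculation with sequential projective measurements on a single qubit (not a model of our photonic gate): the temporal CHSH value comes out as 2√2 for the fully mixed state just as for a pure one.

```python
import numpy as np

# Temporal CHSH for two sequential projective measurements on one qubit.
# Joint outcome probabilities follow the Lueders rule: p(a,b) = Tr[P_b P_a rho P_a].

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def correlation(rho, A, B):
    """Temporal correlation <A(t1) B(t2)> for sequential projective measurements."""
    E = 0.0
    for a in (+1, -1):
        Pa = (I2 + a * A) / 2
        for b in (+1, -1):
            Pb = (I2 + b * B) / 2
            E += a * b * np.real(np.trace(Pb @ Pa @ rho @ Pa))
    return E

# Measurement directions that give the maximal quantum value 2*sqrt(2).
A1, A2 = sz, sx
B1 = (sx + sz) / np.sqrt(2)
B2 = (-sx + sz) / np.sqrt(2)

def temporal_chsh(rho):
    return (correlation(rho, A1, B1) + correlation(rho, A1, B2)
            + correlation(rho, A2, B1) - correlation(rho, A2, B2))

rho_mixed = I2 / 2                       # fully mixed state
psi = np.array([0.6, 0.8j])              # some pure state
rho_pure = np.outer(psi, psi.conj())

print("temporal CHSH, fully mixed state:", round(temporal_chsh(rho_mixed), 4))
print("temporal CHSH, pure state       :", round(temporal_chsh(rho_pure), 4))
```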

Efficient quantum process tomography via compressive sensing


A quantum process matrix reconstructed via compressive sensing. Instead of potentially 576 measurement configurations, a mere 18, selected at random, suffice for a high-fidelity estimate.

We have a new paper, titled “Efficient measurement of quantum dynamics via compressive sensing”, in PRL. I already spent a significant amount of time with my co-authors writing the paper and the press release, so I’m not gonna reinvent the wheel a third time; instead, here are some excerpts from the UQ press release as an introduction:

 

At present it is extremely difficult to characterise quantum systems — the number of measurements required increases exponentially with the number of quantum parts. For example, an 8-qubit quantum computer would require over a billion measurements.

“Imagine that you’re building a car but you can’t test-drive it. This is the situation that quantum engineers are facing at the moment,” said UQ’s Dr Alessandro Fedrizzi, co-author of the study that was recently published in Physical Review Letters.

“We have now found a way to test quantum devices efficiently, which will help transform them from small-scale laboratory experiments to real-world applications.”

The team also includes UQ collaborators Dr Marcelo de Almeida, Professor Andrew White and PhD student Matthew Broome, as well as researchers from Princeton University, the Massachusetts Institute of Technology (MIT), and SC Solutions, Inc. The researchers adapted techniques from “compressive sensing”, a hugely successful mathematical data-compression method, and have applied it to experimental quantum research for the first time.

“Audio signals have natural patterns which can be compressed to vastly smaller size without a significant quality loss: this means we now store in a single CD what used to take hundreds. In the same way, compressive sensing now allows us to drastically simplify the measurement of quantum systems,” said Dr Alireza Shabani, the study’s main author from Princeton University.

“A common example for data compression is a Sudoku puzzle: only a few numbers will allow you to fill in the whole grid. Similarly, we can now estimate the behaviour of a quantum device from just a few key parameters,” said co-author Dr Robert Kosut from SC Solutions, Inc., who developed the algorithm with Dr Shabani, Dr Masoud Mohseni (MIT) and Professor Hershel Rabitz (Princeton University).

The researchers tested their compressive sensing algorithm on a photonic two-qubit quantum computer built at UQ, and demonstrated they could obtain high-fidelity estimates from as few as 18 measurements, compared to the 240 normally required.

The team expects its technique could be applied in a wide range of architectures including quantum-based computers, communication networks, metrology devices and even biotechnology.

To summarize, we have performed process tomography of a two-qubit quantum gate with just 18 measurement configurations out of a potential 576 (which we call an overcomplete set). The compressive sensing algorithm therefore offers a huge reduction in resources and time, even for the really small-scale lab demonstrations we are working on at the moment.
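To give a feel for why so few settings can suffice, here is a generic compressed-sensing toy in the spirit of the Sudoku analogy above. It is not our process-tomography algorithm, just a standard sparse-recovery example: a length-256 vector with only 5 non-zero entries is recovered from 40 random linear measurements by iterative soft thresholding (ISTA).

```python
import numpy as np

# Generic compressed-sensing demo (not the process-tomography algorithm from
# the paper): recover a sparse vector x from m << n random linear measurements
# y = A x by solving an l1-regularized least-squares problem with ISTA.

rng = np.random.default_rng(0)
n, m, s = 256, 40, 5                      # signal length, measurements, non-zeros

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

lam = 0.01                                # l1 regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the largest singular value
x = np.zeros(n)
for _ in range(2000):
    z = x + step * (A.T @ (y - A @ x))                      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold

print("measurements used:", m, "of", n)
print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```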

It should be noted that, at the time of writing, there is one other (theory) proposal on quantum tomography with compressive sensing. The paper by D. Gross et al. explicitly treats quantum state tomography, which is not quite the same as process tomography.  However, according to the authors, and in particular Steve Flammia, who happened to visit UQ recently, their algorithm can be extended to process tomography in a straightforward manner.

The main difference between the two methods is, in a nutshell, the following: our method scales as O(s log d), where s indicates the sparsity of the process matrix in a chosen basis and d the dimension of the quantum system. The method by Gross et al. scales as O(r d^2 log d), where r is the rank of the quantum state or, by extension, of the quantum process.

At first glance, our method scales more favorably. This is indeed the case, but only when the process basis is known, because a process will only be maximally sparse in its eigenbasis. Our algorithm is therefore best applied to the certification of a device against a defined target process. The method by Gross et al., in contrast, can be applied to black-box processes, because the rank is basis-independent.

The two methods are therefore, as I like to think, very complementary, and both should be investigated further, as there are still plenty of open questions. One promising, and maybe not immediately obvious, feature we found during our tests was that the process estimates returned by the compressive sensing algorithm allowed us to improve our gates in practice. This is a key requirement for any efficient tomographic estimation scheme, and it can be seen as a successful litmus test for compressive sensing methods entering the field of quantum information processing.