The Edinburgh Mostly Quantum Lab

After seven years as a research fellow at the University of Queensland, it’s time for me to move on. On September 1, I will officially start a new quantum photonics group as Associate Professor at Heriot-Watt University in the wonderful city of Edinburgh, Scotland. This website will see a bit of a redesign accordingly. I’m offering fully funded postdoc and PhD positions in Quantum Photonics, so get in touch if you’re looking for a job and want to work with me in the near future.

 


Quantum reality check

Nothing ever becomes real till it is experienced — John Keats

This quote by John Keats has aged well; it wouldn’t be amiss in a debate about quantum physics centuries later. The debate over what is real has been going on since the early days of quantum theory, and even today we don’t really know whether there is an objective—i.e. observer-independent—reality at the quantum level or not.

Investigating the reality of the wavefunction.

However, for those of you who do think that there is an objective reality, we have a new result which sheds more light on its nature. There are two major interpretations of the quantum wavefunction, the central object at the heart of quantum mechanics, with respect to this underlying reality. Either the wavefunction represents our limited knowledge of that reality, which is called the ψ-epistemic interpretation, or it corresponds to it directly, which we call ψ-ontic.

Our result, published this week in Nature Physics, now rules out maximally ψ-epistemic models of the wavefunction. This work goes back to a paper by Matthew Pusey, Jonathan Barrett, and Terry Rudolph (PBR), who in 2012 came up with a no-go theorem for ψ-epistemic models. It quickly turned out, though, that such no-go theorems cannot be upheld without assumptions which are outside the minimal, or standard, set required for ontological models.

A new approach was then found in a series of theory papers, amongst them two by our co-authors Cyril Branciard and Eric Cavalcanti, which, instead of trying to rule out all ψ-epistemic models, tried to at least put restrictions on them. The key insight was that a central feature of quantum mechanics—the fact that non-orthogonal states cannot be perfectly distinguished—cannot be fully explained by the supposed overlap of epistemic states in ψ-epistemic models. The Oxford press release on this theory achievement explains this approach quite nicely.

Our theorists proposed that a number of non-orthogonal states in at least three dimensions should be prepared and subsequently measured. The measured statistics allow us to infer the epistemic overlap for these states, and by comparing it to the quantum predictions we see that the classical overlap doesn’t suffice to explain state indistinguishability.
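To make that comparison a little more concrete, here is a minimal numerical sketch of the quantum side of the argument: the pairwise quantum overlap of a handful of non-orthogonal qutrit states. Both the family of states and the overlap measure below are illustrative choices made for this post, not the ones used in the paper.

```python
import numpy as np

def quantum_overlap(psi1, psi2):
    """Pairwise quantum overlap 1 - sqrt(1 - |<psi1|psi2>|^2), i.e. one minus the
    trace distance of the two pure states. (One common choice of overlap measure;
    the definitions used in the paper may differ in detail.)"""
    fidelity = abs(np.vdot(psi1, psi2)) ** 2
    return 1.0 - np.sqrt(1.0 - fidelity)

def qutrit_state(theta, k):
    """A purely illustrative family of non-orthogonal qutrit states."""
    psi = np.zeros(3, dtype=complex)
    psi[0] = np.cos(theta)
    psi[1 + k % 2] = np.sin(theta) * np.exp(2j * np.pi * k / 3)
    return psi

theta = np.pi / 8
states = [qutrit_state(theta, k) for k in range(4)]
for i in range(len(states)):
    for j in range(i + 1, len(states)):
        print(i, j, quantum_overlap(states[i], states[j]))
```

A maximally ψ-epistemic model would have to account for all of this quantum overlap with overlapping epistemic states; the measured statistics show that it can account for at most a fraction of it.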

Experimentally, we achieved this by preparing qutrits and ququarts on photons encoded in their polarization and path degrees of freedom. Our results rule out maximally ψ-epistemic models by over 250 standard deviations, and restrict the remaining ψ-epistemic explanations to an overlap ratio below 0.69. We now aim to further reduce this number, ideally getting as close to zero as possible. Congratulations to everyone involved, in particular our prolific first author Martin Ringbauer and our Honours student Benjamin Duffus. What a start to a promising scientific career.

Our paper has already attracted some attention in the media. Our own UQ press release can be found here, and the early birds at NewScientist have featured the result as well. More to come in the following days.

UPDATE: and here’s a news roundup on this story.

NewScientist, “Wavefunction gets real in quantum experiment”

The Conversation, “Schrödinger’s cat gets a reality check”

Vice Motherboard, “New measurements show that the unrealest part of quantum physics is very real”

phys.org, “Researchers describe the wavefunction of Schrödinger’s cat” [This one wins for reprinting our press release with the most misleading title.]

And another one, in German:

Futurezone.at, “Schrödinger’s Katze ist tot und lebendig” (“Schrödinger’s cat is dead and alive”)

UPDATE 2:

The New York Times refers to our paper here, in a story by Ed Frenkel.

And, finally, Martin and I were interviewed by the fabulous Zeeya Merali for the FQXI podcast. You can listen to our episode here.

UPDATE 3:

Zeeya Merali, who interviewed us for the FQXI podcast, has written a Nature News feature, “What is really real?”, that prominently features our work.

The 2014 UQ Foundation Research Excellence Awards

Here’s some old news on a personal achievement. I’m posting it now because the paper associated with the project has just appeared, but more on that in a later post. In 2014, I won a UQ Foundation Research Excellence Award for experimental tests of the reality of the wavefunction. The report from the ceremony is here; see also the somewhat embarrassing PR video below.

 

Distributing entanglement without actually sending it

Distributing entanglement with separable carriers

The protocol: Alice wants to establish entanglement with Bob. Surprisingly, they can do so without actually communicating entanglement. Figure courtesy Margherita Zuppardo.

We have a new PRL paper online, where we demonstrate that two parties can establish entanglement between their labs without directly communicating any entanglement between them. Physical Review Letters was kind enough to honor our work with an Editor’s Suggestion and an accompanying Physics Viewpoint written by Christine Silberhorn—a big thanks to the Physics editors and to Christine!

The idea goes back to a paper by Toby ‘Qubit’ Cubitt and co-authors, but it took a decade for people to figure out that the resource that allows Alice and Bob to achieve this task is—at least to some degree—quantum discord. This was elaborated in a series of theory papers by Streltsov et al., Alastair Kay, and our collaborators in Singapore. The protocol works as follows. Alice and Bob have separate quantum systems that they want to entangle. Alice starts by doing some local (with respect to her lab) encoding between her system and a carrier, and then sends Bob this carrier. Bob does some local decoding, and Alice and Bob’s systems end up being entangled. Importantly, this can be achieved without ever entangling the carrier with either Alice’s or Bob’s system.
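For readers who like to check such claims numerically, here is a small NumPy helper, written for this post rather than taken from the paper, for verifying the two defining conditions of such a protocol: that Alice and Bob end up entangled while the carrier never shows entanglement with the rest. The three-qubit toy state at the bottom is only there to show the bookkeeping; the actual protocol uses carefully chosen mixed states carrying discord.

```python
import numpy as np

def partial_transpose(rho, dims, sys):
    """Partial transpose of density matrix `rho` over subsystem `sys` (0-indexed),
    for a composite system with subsystem dimensions `dims`."""
    n = len(dims)
    r = rho.reshape(dims + dims)       # axes: (i_1..i_n, j_1..j_n)
    r = np.swapaxes(r, sys, n + sys)   # transpose only subsystem `sys`
    return r.reshape(rho.shape)

def negativity(rho, dims, sys):
    """Entanglement negativity across the cut (sys | rest): sum of |negative
    eigenvalues| of the partial transpose. Zero means the cut is PPT, i.e. it
    carries no NPT entanglement (and is fully separable for 2x2 or 2x3 cuts)."""
    evals = np.linalg.eigvalsh(partial_transpose(rho, dims, sys))
    return float(-evals[evals < 0].sum())

# Toy check: a Bell pair shared by Alice and Bob, with an uncorrelated carrier qubit.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
carrier = np.array([1, 0])
psi = np.kron(bell, carrier)           # subsystem ordering: Alice, Bob, carrier
rho = np.outer(psi, psi.conj())
dims = [2, 2, 2]

print(negativity(rho, dims, sys=2))    # carrier vs rest: ~0 (carrier unentangled)
print(negativity(rho, dims, sys=0))    # Alice vs rest: ~0.5 (Alice and Bob entangled)
```

In an actual run of the protocol one would evaluate the first quantity after every step to confirm the carrier stays separable, and the second only at the end.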

This is not only really cute from a foundational point of view, it also has practical applications. In our paper, we describe scenarios in which, at certain noise levels—either in the systems themselves or in the channel—entanglement distribution with separable carriers works better than the alternative of direct entanglement sharing. So the protocol could indeed be useful in future quantum networks.

For more information, I recommend reading Christine’s viewpoint, or Margherita’s writeup for the popular science website 2physics.

 

Compressing single photons

Coherent conversion of single photons from one frequency to another is nowadays a mature process. It can be achieved in both directions, up or down, it preserves quantum properties such as time-bin or polarization entanglement, and internal conversion efficiencies approach 100%.

The main motivation for coherent frequency conversion is to interface photonic quantum bits in optical fibers to quantum repeater nodes: fiber loss is minimal at wavelengths around 1550 nm, but the currently most efficient optical quantum memories—which form the core of quantum repeaters—operate at around 800 nm.

A crucial problem that is often swept under the carpet in high-profile conversion papers is, however, that there isn’t just a drastic difference in center wavelengths, but also a drastic difference in spectral bandwidths. Single photons used in quantum communication, in particular when produced via the widely popular process of parametric down-conversion, usually have bandwidths of the order of 100 GHz. The acceptance bandwidth of quantum memories relying on atomic transitions can, in contrast, be as narrow as a few MHz.

Frequency conversion is therefore necessary, but by no means sufficient for interfacing these technologies. What’s far more important is the ability to match the spectra of the incoming photons and the receiver.

We address this issue in a new paper published earlier this year in Nature Photonics. The concept is relatively straightforward. A single photon is, as usual, up-converted with a strong laser pulse in the process of sum-frequency generation in a nonlinear crystal. However, the frequency components of the single photon and the pump are carefully synchronized such that every frequency component within the photon bandwidth converts to the same output center frequency. To achieve this, we (that is, mostly first author and experimental wizard Jonathan Lavoie, and John Donohue and Logan Wright in Kevin Resch’s lab at IQC in Waterloo) imparted a positive frequency chirp on the photon, and a corresponding negative chirp on the pump pulse before conversion.
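The arithmetic behind this is simple: if the photon’s instantaneous frequency rises linearly in time while the pump’s falls at the same rate, their sum, and therefore the sum-frequency output, is the same at every instant. A toy numerical illustration, with made-up numbers rather than our experimental parameters:

```python
import numpy as np

# Illustrative parameters only (not the values used in the experiment).
alpha = 2 * np.pi * 50e9 / 1e-12        # chirp rate: 50 GHz of detuning per picosecond
t = np.linspace(-2e-12, 2e-12, 5)       # a few instants across the pulse

omega_photon = 2 * np.pi * 370e12 + alpha * t   # positively chirped photon (nominal carrier)
omega_pump = 2 * np.pi * 193e12 - alpha * t     # negatively chirped pump (nominal carrier)

omega_sfg = omega_photon + omega_pump   # instantaneous sum frequency
print(np.ptp(omega_sfg))                # ~0 up to rounding, versus the ~1e12 rad/s
                                        # excursion of each input: the chirps cancel
```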

The results are quite impressive: we achieved spectral single-photon compression by a factor of 40. This is still a far cry from the factors of >1000 we would need to really match a GHz photon to a MHz memory, but there is, as always, room for improvement. The conversion efficiency in our experiment wasn’t stellar either, but it could easily be increased by using techniques which do achieve near-unity efficiency.

Importantly, this type of chirped single-photon up-conversion also serves as a time-to-frequency converter, and Kevin’s group has now in fact demonstrated this by detecting time-bin qubits with ultra-short time-bin delays in the frequency domain.

A special acknowledgment goes to Sven Ramelow, without whom this research wouldn’t have happened at this time and in this form.

The publication with zero authors


I keep coming across commentary lamenting the increase in the number of authors on scientific papers (or patents), such as by Philip J. Wyatt in Physics Today and related posts in the blogosphere, e.g. a very recent one by UNC Chapel Hill marine ecologist John Bruno on Seamonster. This intriguing development seems diametrically opposed to the calls for crowd-sourced science put forward by people like Michael Nielsen. The conservative single-author advocates think that more and more authors reflect an erosion of individual creativity, while the Science 2.0 crowd is convinced that more people working on a problem delivers faster, better, and more diverse science.

So which aspect is more important: the noble aspiration to individual scientific excellence or the more modern result-driven push for large-scale collaboration?

My opinion is that you can have both things and that this important question has very little to do with actual authorship. Let’s talk about authorship first. What I certainly support is that automatic authorship such as that often demanded by organizational heads with 50+ published papers per year should be banned. What I don’t support are the stringent guidelines suggested by some journals. Take for example the oft-cited rules by the International Committee of Medical Journal Editors:

Authorship credit should be based on 1) substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published. Authors should meet conditions 1, 2, and 3.

Let’s apply those rules to a hypothetical scenario: Prof A invites a long-term collaborator, researcher B. This visitor has an original idea for an experiment which can be implemented with a new setup in A’s lab. This setup has been assembled over the past 4 years by PhD student C, who completed his work but unfortunately never got to do any actual science before he had to write up his thesis. The experiment will thus be carried out by promising new PhD student D, who was introduced to the apparatus by C. Since taking the data is time-consuming, and D is new to research, the data will be analyzed by PhD student E, a real Matlab genius. For good measure, throw in postdoc F, who spends most of his time teaching, planning and supervising other projects, writing grants etc. F will write the paper while data is being collected and analyzed.

According to the ICMJE guidelines, the resulting paper will have zero authors, because none of our protagonists meets the suggested criteria for scientific authorship: not the researcher who had the idea, not the grunt who built the experiment, and not our Professor without whom none of the others would even have been there.

I offer a much simpler criterion:

If the manuscript would not exist at this time, in this form, without person X (even if X could have been replaced by any other similarly qualified person), then person X should be an author.

Just like the ICMJE criteria above, my suggestion offers room for interpretation. Obviously, even I don’t think that authorship should extend all the way to Adam and Eve. But it does extend to the guy who owns the lab, created that particular line of research, hired the involved people, and financed the experiments. And it very certainly includes the researcher(s) who provided the initial idea. Because an idea of sufficient quality, i.e. somewhat more specific than ‘you should look into curing cancer’, is worth more than data that could probably have been taken in a dozen other labs. (So take that, John Bruno: unless your idea was entirely obvious, you should definitely have been an author of Brian Helmuth’s Science paper.) It also obviously includes the guy who built the experiment. Unfortunately, it’s not uncommon in experimental science that a whole PhD is spent setting up an experiment from scratch with no immediate outcome, while some luckster simply walks in a few years later and starts milking the setup for results. In this case our student would do well to arrange an agreement with the lab owner on how many papers they can get out of their work.

The best justification for my authorship criterion is exemplified by large scientific collaborations. Publications by CERN or LIGO routinely sport hundreds of authors, none of whom would qualify for individual authorship according to some official guidelines. No one in his right mind would, however, suggest that a PhD degree in experimental science is worth less when obtained while contributing to the most exciting scientific endeavors undertaken by humankind. In this context, suggestions (see the comments on Wyatt’s Physics Today article) of a per-author normalization of scientific indicators like h-indices are laughable. While those huge projects are certainly extreme cases, the principle is scalable: if a project benefits from more participants, then by all means, they should all be there, and they should probably all be authors.

But back to the initial quandary of whether multi-author papers erode individual creativity. Will the fact that our paper has six authors instead of one have that effect? No, it won’t. Most of the creative achievement was contained in the initial idea, some in the experimental design, and the question of authorship won’t change that. Should we have left every aspect of the experiment to our new guy D in the spirit of a wholesome scientific training? Maybe, but that means it would have taken much longer to complete the research, which cannot be in the best interest of science or the taxpayer. With good supervision, student D can easily learn the components they missed out on in the time saved. It would furthermore be silly to underestimate the learning effect of sharing the process of scientific research with more experienced colleagues. Do we really want to return to yesteryear, when researchers were supposed to do everything on their own, isolated from their environment? I don’t think so. And finally, if the opportunity really arises, any aspiring academic will cherish publishing a single-author paper anyway.

The only reasonable argument I see against increasingly multi-author papers is that hiring committees will have a harder job separating truly creative minds from mere data analyzers. This problem is already mitigated by author contribution statements, as nowadays requested by major journals such as Nature and Science. It would certainly be welcome if those declarations were standardized and taken up by more journals. Beyond that, if two or three job references and an extensive interview still aren’t enough for our struggling committee, then maybe the data analyzer is actually more creative than we had thought.

In summary, anyone who contributed to scientific research should be considered as an author, no matter whether their contribution was restricted to “just” data taking or any other singular aspect of the research. We will still have scientifically brilliant individuals, probably more so because of the far broader opportunities offered through larger collaborations, and if you find it harder to identify those individuals, maybe it’s your fault.

 

Sampling bosons

If ever there was a paper the linear optics community got as excited about as the now famous KLM paper, it was Aaronson and Arkhipov’s “The computational complexity of linear optics”. Fast forward two years and we have just published a first experimental implementation of the BosonSampling task introduced by the two ‘As’ in Science.

This work, and a similar one by our friends and competitors in Oxford, has attracted quite a lot of attention in the science media. Here’s a (probably incomplete) list of articles about it:

Science, “New form of quantum computation promises showdown with ordinary computers”

Scientific American, “New machine bridges classical and quantum computing”

New Scientist, “Victorian counting device gets speedy quantum makeover”

arstechnica, “Can quantum measurements beat classical computers?”

Physicsworld, “‘Boson sampling’ offers shortcut to quantum computing”

photonics.com, “Rise of the BosonSampling computer”

IEEE spectrum, “New machine puts quantum computers’ utility to the test”

Physorg, “At the solstice: Shining light on quantum computers”

ABC Science, “Proving the need for quantum computers”

But there’s more. Since Andrew presented our preliminary results at last year’s QCMC conference, two other groups, in Vienna and Rome, also raced to get their results out, and all four manuscripts appeared on the arXiv within a day of each other.

Since the titles of our papers don’t say much about the differences between the results, a brief comparison might be in order. Let’s talk about the similarities first. All of our experiments utilized down-conversion photons sent through some sort of linear optical network. We all observed three-photon interference, which is the minimum test of the BosonSampling idea. The team in Oxford also measured four-photon interference patterns, albeit in a limited sense: instead of four photons being sent into four distinct optical modes, they simply used the sporadic double-pair emissions from a single downconverter.

One difference is that the groups in Oxford, Italy and Vienna realized their optical circuits via integrated waveguides, while we did it in a three-port fiber beamsplitter (with the polarization degree of freedom giving us a six-by-six network). The waveguides provide a stable network, but they are quite lossy, which is why we probably have the best-quality three-photon data. Another difference is that while those circuits are in principle tunable via thermal heaters, they were kept fixed in the respective experiments. Our circuit can easily be tuned over a large range of interesting unitaries.

An aspect which sets our work apart, and which is in my opinion important for testing the validity of BosonSampling, is that we used a different method of characterizing our photonic network. Instead of using two-photon interference for this characterization, which rests on the same assumptions as BosonSampling itself and thus does not allow independent verification of the predicted three-photon amplitudes, we used a simple classical method for characterizing unitary circuits that we recently developed.
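For readers new to the topic, the reason BosonSampling is believed to be classically hard is that the output probabilities are given by permanents of submatrices of the circuit unitary. Here is a minimal sketch of that standard probability rule; the random unitary below merely stands in for a characterized circuit, and the naive permanent is fine at this scale but scales factorially with photon number.

```python
import numpy as np
from itertools import permutations
from math import factorial

def permanent(M):
    """Naive permanent via the permutation sum; fine for the 3x3 submatrices
    relevant to three-photon interference, but factorially slow for large matrices."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def output_probability(U, in_modes, out_modes):
    """Standard BosonSampling rule: probability of detecting the photons in
    `out_modes`, given single photons injected into `in_modes` of the
    interferometer described by U, is |Perm(U_sub)|^2, normalized by factorials
    of the mode occupations."""
    sub = U[np.ix_(out_modes, in_modes)]
    norm = np.prod([factorial(out_modes.count(m)) for m in set(out_modes)]) * \
           np.prod([factorial(in_modes.count(m)) for m in set(in_modes)])
    return abs(permanent(sub)) ** 2 / norm

# Toy example: a Haar-ish random 6-mode unitary standing in for a characterized circuit.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
U, _ = np.linalg.qr(A)

print(output_probability(U, [0, 1, 2], [3, 4, 5]))   # three photons in, one output pattern
```

For three photons in six modes this is trivial to evaluate; the whole point of BosonSampling is that at a few tens of photons it no longer is.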

Making light matter

Photonic split-step quantum walk implementation for the observation of topologically protected bound states.

We have a new paper in Nature Communications, Observation of topologically protected bound states in photonic quantum walks. Here’s our press release which unfortunately didn’t quite make it into the official press channels because of a fundamental disconnect between what we researchers wanted to write and the official PR guidelines at UQ:

At first glance, a donut and a coffee cup do not have much in common, except that they complement each other really well.

A second glance reveals that they share a geometrical property, their topology: the shape of one can be continuously deformed into the shape of the other.

Topology explains many phenomena in modern science: transitions between physical regions with different topology cause exotic effects, such as insulators which conduct like metals at their surface.

These effects are hard to control and study since they usually appear in complex materials where quantum particles are hard to observe. Researchers at the University of Queensland and Harvard University have simulated transitions between quantum topologies—predicted to exist, but never observed, in polymers and high-energy physics—in an experiment where light is made to act like matter.

“We observed for the first time bound states—where a quantum particle is trapped at a topological interface—which have long been predicted to play an important role”, says Matthew Broome, joint lead author of this work and PhD student at the University of Queensland. “It was easy to observe these trapped photons, which is usually a challenging task in material sciences.”

The team at Harvard recently predicted that quantum walks can simulate systems with different topological regions. The experimentalists at Queensland persuaded single particles of light—photons—to walk through an optical network.

“Quantum walks have previously been realized in a variety of settings with ions, atoms and photons, but nobody really knew that they could observe these exciting topological phenomena with quantum walks before our discovery”, says Takuya Kitagawa, joint lead author who developed the theory with his colleagues at Harvard. “This discovery came as a complete surprise to everybody, including us.”

Furthermore, the versatile system invented by the UQ team allowed a surprising new discovery: the existence of a pair of bound states—a topological phenomenon which arises only in dynamic, time-dependent systems.

This discovery bears exciting prospects for the development of novel materials and even powerful—but so far elusive—quantum computers.

The study, “Observation of topologically protected bound states in photonic quantum walks”, by UQ’s Matthew Broome, Alessandro Fedrizzi, Ivan Kassal, and Andrew White, and Harvard’s Takuya Kitagawa, Erez Berg, Mark Rudner, Alán Aspuru-Guzik and Eugene Demler, was published in Nature Communications.

The experiment was conducted by researchers from the ARC Centre for Engineered Quantum Systems (EQuS) and the ARC Centre for Quantum Computation and Communication Technology (CQC2T) in Australia; and Harvard University, USA.
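For anyone who wants to see the effect on their own laptop, here is a bare-bones simulation of a split-step quantum walk with a coin angle that changes at the origin, written in plain NumPy for this post. The helper names and the specific angles are my own illustrative choices, intended to lie in two different topological phases rather than to reproduce the experimental settings; if the two regions are indeed topologically distinct, a sizeable fraction of the walker’s probability should stay pinned to the interface instead of spreading ballistically.

```python
import numpy as np

def apply_coin(psi, thetas):
    """Coin rotation R(theta) = [[cos(t/2), -sin(t/2)], [sin(t/2), cos(t/2)]] at every
    site; `thetas` may vary with position."""
    c, s = np.cos(thetas / 2), np.sin(thetas / 2)
    out = np.empty_like(psi)
    out[:, 0] = c * psi[:, 0] - s * psi[:, 1]
    out[:, 1] = s * psi[:, 0] + c * psi[:, 1]
    return out

def shift_up(psi):
    """Move the coin-up amplitude one site to the right, leave coin-down in place."""
    out = psi.copy()
    out[:, 0] = 0
    out[1:, 0] = psi[:-1, 0]
    return out

def shift_down(psi):
    """Move the coin-down amplitude one site to the left, leave coin-up in place."""
    out = psi.copy()
    out[:, 1] = 0
    out[:-1, 1] = psi[1:, 1]
    return out

n_sites, n_steps = 201, 30            # grid is large enough that nothing reaches the edges
x = np.arange(n_sites) - n_sites // 2

# theta1 jumps at the interface x = 0, theta2 is uniform (illustrative values only).
theta1 = np.where(x < 0, -3 * np.pi / 4, np.pi / 4)
theta2 = np.full(n_sites, np.pi / 2)

psi = np.zeros((n_sites, 2), dtype=complex)
psi[n_sites // 2, 0] = 1.0            # single walker launched at the interface

for _ in range(n_steps):              # one split step: coin, shift up, coin, shift down
    psi = shift_down(apply_coin(shift_up(apply_coin(psi, theta1)), theta2))

prob = np.sum(np.abs(psi) ** 2, axis=1)
centre = slice(n_sites // 2 - 2, n_sites // 2 + 3)
print("probability within two sites of the interface:", prob[centre].sum())
```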

Two-photon quantum walk paper highlighted by New Journal of Physics

The New Journal of Physics has for the second year in a row included one of our papers in their annual highlights. Last year, our paper on Matchgate quantum computing made it into NJP’s Best of 2010 list. This year, our work on Two-photon quantum walks in an elliptical direct-write waveguide array was voted into the Best of 2011 collection. Congratulations to first author Jimmy Owens!

Photonic ups and downs

The photons I work with have experienced a roller coaster lately, having been down-converted here and now up-converted in a new paper we just published in PRA. This time, the photons didn’t have to go their own way, though: they were assisted in their journey from longer (810 nm) to shorter wavelengths (532 nm) by a strong telecom seed laser. This alone wouldn’t be big news, as many other groups have previously reported this so-called sum-frequency generation between single photons and strong seed lasers. The actual news is that we have managed to up-convert polarization-entangled photons.

Experimental setup for frequency up-conversion of entangled photons

We generated polarization-entangled photon pairs at 810 nm in a ppKTP Sagnac source and superposed one photon of each pair with a strong 1550 nm laser in two additional ppKTP crystals. The resulting 532 nm photons were still highly entangled with their original partners.

We started by generating polarization-entangled photon pairs at 810 nm, mixed them with a strong 1550 nm laser and sent them through the reverse version of Paul Kwiat’s sandwich source: two nonlinear crystals quasi-phase-matched for type-I up-conversion of 810+1550->532 nm, with orthogonal orientations. The overall efficiency of the up-conversion scheme was atrocious, but it precisely matched theoretical expectations for our scheme, given the available pump power and limitations in geometry and optical loss. More importantly, the entanglement between the up-converted green photons and their original 810 nm partners was almost perfect.
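As a quick sanity check on those numbers, energy conservation for sum-frequency generation requires the output photon energy to be the sum of the input photon energies, i.e. 1/λ_out = 1/λ_signal + 1/λ_pump:

```python
# Energy conservation in sum-frequency generation: 1/lambda_out = 1/lambda_signal + 1/lambda_pump.
lam_signal, lam_pump = 810e-9, 1550e-9        # input wavelengths quoted above, in metres
lam_out = 1 / (1 / lam_signal + 1 / lam_pump)
print(lam_out)                                # ~5.32e-07 m, i.e. the 532 nm green photons
```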

Our results show for only the second time (the first being Nicolas Gisin’s fabulous time-bin entanglement up-conversion experiment) that entanglement is preserved in a sum-frequency experiment with a strong seed laser. As we argued in our paper, this type of coherent interconversion between wavelengths will be an important tool in the larger picture of practical quantum information processing. Nature Photonics included our paper in their March 2012 research highlights.