Paving the way for standards to measure quantum computing performance

Called cycle benchmarking, the method allows researchers to assess the scalability potential of a quantum computer and to compare one quantum platform against another.

"This finding could go a long way toward establishing standards for performance and strengthen the effort to build a large-scale, practical quantum computer," said Joel Wallman, an assistant professor at Waterloo's Faculty of Mathematics and Institute for Quantum Computing. "A consistent method for characterizing and correcting the errors in quantum systems provides standardisation for the way a quantum processor is assessed, allowing progress in different architectures to be fairly compared."

Cycle benchmarking helps quantum computing users both determine the comparative value of competing hardware platforms and increase the capability of each platform for their applications of interest.

This breakthrough comes as the quantum computing race is rapidly heating up, and the number of cloud quantum computing platforms and offerings is quickly expanding with recent announcements from Microsoft, IBM and Google.

This method helps to determine the total probability of error for any given quantum computing application when the application is implemented through randomized compiling, a technique that tailors the noise into a simpler, effectively random form. This means that cycle benchmarking provides the first cross-platform means of measuring and comparing the capabilities of quantum processors that is customized to users' applications of interest.
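To illustrate the idea, the sketch below (not the authors' implementation) shows how per-cycle error rates of the kind cycle benchmarking estimates could be combined into a total error probability for an application circuit, assuming randomized compiling has made cycle failures behave independently; the cycle names, rates and counts are hypothetical.

```python
# Hypothetical per-cycle error rates of the kind cycle benchmarking estimates,
# one per distinct cycle (parallel layer of gates) in the application.
cycle_error_rates = {
    "single_qubit_layer": 1.2e-3,
    "entangling_layer": 8.5e-3,
}

# Hypothetical counts of how often each cycle appears in the compiled circuit.
cycle_counts = {
    "single_qubit_layer": 40,
    "entangling_layer": 20,
}

# Under randomized compiling the noise behaves stochastically, so the chance
# that no cycle fails is (approximately) a simple product; its complement
# estimates the total probability of error for the application.
p_success = 1.0
for cycle, rate in cycle_error_rates.items():
    p_success *= (1.0 - rate) ** cycle_counts[cycle]

total_error_probability = 1.0 - p_success
print(f"Estimated total error probability: {total_error_probability:.3f}")
```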

"We are now at the dawn of what I call the `quantum discovery era', said Joseph Emerson, a faculty member at IQC. "This means that error-prone quantum computers will deliver solutions to interesting computational problems, but the quality of their solutions can no longer be verified by high-performance computers.

"Cycle benchmarking provides a much-needed solution for improving and validating quantum computing solutions in this new era of quantum discovery."

Emerson and Wallman founded the IQC spin-off Quantum Benchmark, which has already licensed technology to several world-leading quantum computing providers, including Google's Quantum AI effort.

Quantum computers offer a fundamentally more powerful way of computing because they exploit quantum-mechanical effects such as superposition and entanglement.

Characterizing a quantum system produces a profile of its noise and errors, indicating whether the processor is performing the tasks or calculations it is being asked to do. To understand the performance of any existing quantum computer on a complex problem, or to scale up a quantum computer by reducing errors, it is first necessary to characterize all significant errors affecting the system.
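As an illustration of what such a characterization can look like in practice, the short sketch below (a simplified example with made-up survival data, not the published protocol) fits an exponential decay to survival probabilities measured after repeating a cycle of interest m times; the fitted decay parameter gives a per-cycle error rate that is largely insensitive to state-preparation and measurement errors.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical survival probabilities after repeating a cycle m times.
m = np.array([2, 4, 8, 16, 32])
survival = np.array([0.985, 0.972, 0.944, 0.893, 0.801])

# Exponential decay model A * p**m, where p is the per-cycle process
# fidelity and A absorbs state-preparation and measurement error.
def decay(m, A, p):
    return A * p ** m

(A_fit, p_fit), _ = curve_fit(decay, m, survival, p0=(1.0, 0.99))
print(f"Per-cycle fidelity ~ {p_fit:.4f}, per-cycle error rate ~ {1 - p_fit:.4f}")
```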

Wallman, Emerson and a group of researchers at the University of Innsbruck identified a method to assess all error rates affecting a quantum computer. They implemented this new technique on the ion-trap quantum computer at the University of Innsbruck and found that error rates don't increase as the size of that quantum computer scales up.

"Cycle benchmarking is the first method for reliably checking if you are on the right track for scaling up the overall design of your quantum computer," said Wallman. "These results are significant because they provide a comprehensive way of characterizing errors across all quantum computing platforms."