Making Noisy Intermediate Scale Quantum (NISQ) Computing Useful
Speaker: Swaroop Ghosh – State College, PA, United States
Topic(s): Architecture, Embedded Systems and Electronics, Robotics
Abstract
The concerted effort by industry and academia has produced commercial quantum computers and algorithms that offer speed-ups over their classical counterparts (at least in principle). The basic building block of a quantum computer is the qubit, which must satisfy the DiVincenzo criteria. Several quantum systems satisfy them, e.g., superconducting, ion-trap, and solid-state qubits, each with its own merits and caveats. In spite of these promises and potentials, quantum computers are still at a nascent stage. On the device front, qubits are fragile and susceptible to noise and error due to decoherence. This is especially true for Noisy Intermediate Scale Quantum (NISQ) computers. Any sequence of gate operations (each taking ~tens of nanoseconds) has to complete before the relaxation time, which limits the maximum number of allowed gate operations to thousands at the very best. This is not sufficient for meaningful system implementation. New noise-tolerant qubits and their accurate modeling are needed to quantify the reliability (or fidelity) of the output. Quantum error correction (QEC), e.g., the surface code, increases the overhead substantially (e.g., 20X). Therefore, advances in low-cost resiliency techniques are indispensable.
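The gate-count budget above can be sketched with a back-of-the-envelope model. The T1 and gate-time values below are illustrative assumptions (typical orders of magnitude for superconducting qubits), not figures from the talk:

```python
import math

# Illustrative numbers (assumed, not from the talk): ~tens-of-ns gates
# and a ~100 us relaxation time leave room for thousands of gates at best.
T1 = 100e-6      # relaxation time in seconds (assumed)
t_gate = 30e-9   # duration of one gate in seconds (assumed)

def survival(n_gates):
    """Probability a qubit has not relaxed after n_gates sequential gates
    under a simple exponential T1 decay model."""
    return math.exp(-n_gates * t_gate / T1)

# How many gate durations fit inside one relaxation time:
max_ops = int(T1 / t_gate)
print(max_ops, f"{survival(max_ops):.3f}")
```

With these assumed numbers, only a few thousand gate durations fit inside one relaxation time, and even then the survival probability at that depth has already decayed substantially, which is the sense in which "thousands at the very best" is not enough for deep circuits.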
Two lines of research are currently being pursued to address these challenges. First, EDA techniques are being developed to model the noise and enhance the resilience of computing by exploiting trade-off opportunities present at various levels of the design hierarchy. Second, low-depth approximate algorithms, such as the quantum approximate optimization algorithm (QAOA) and the variational quantum eigensolver (VQE), are being developed to solve computationally relevant problems such as max-cut on noisy quantum hardware. These approximate algorithms run in a hybrid classical-quantum setup, and one of the key challenges in this domain is shortening the computation time needed to identify the optimal parameters of the quantum algorithm.
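As a minimal sketch of that hybrid loop, the toy example below runs depth-1 QAOA for max-cut on a hypothetical 3-node triangle graph, simulating the 3-qubit statevector in pure Python and using a plain grid search as a stand-in for the classical parameter optimizer. This is an illustration of the general technique, not the speaker's implementation:

```python
import cmath
import math
from itertools import product

# Hypothetical toy instance: max-cut on a triangle graph (3 qubits).
EDGES = [(0, 1), (1, 2), (0, 2)]
N = 3

def cut_value(bits):
    """Number of edges cut by a bitstring (the max-cut objective C(z))."""
    return sum(1 for i, j in EDGES if bits[i] != bits[j])

def qaoa_state(gamma, beta):
    """Statevector after one QAOA layer: cost phase, then the X mixer."""
    dim = 2 ** N
    # Uniform superposition |+>^N followed by the diagonal phase e^{-i*gamma*C(z)}.
    amps = [cmath.exp(-1j * gamma * cut_value(bits)) / math.sqrt(dim)
            for bits in product((0, 1), repeat=N)]
    # Apply the mixer e^{-i*beta*X} (i.e., RX(2*beta)) to each qubit.
    c, s = math.cos(beta), -1j * math.sin(beta)
    for q in range(N):
        stride = 2 ** (N - 1 - q)
        for base in range(dim):
            if base & stride == 0:
                a0, a1 = amps[base], amps[base | stride]
                amps[base] = c * a0 + s * a1
                amps[base | stride] = s * a0 + c * a1
    return amps

def expectation(gamma, beta):
    """Expected cut value <C> measured on the QAOA state."""
    return sum(abs(a) ** 2 * cut_value(bits)
               for a, bits in zip(qaoa_state(gamma, beta),
                                  product((0, 1), repeat=N)))

# Classical outer loop: a simple grid search standing in for an optimizer.
best = max((expectation(g * math.pi / 40, b * math.pi / 40), g, b)
           for g in range(40) for b in range(40))
print(f"best <C> = {best[0]:.3f}  (max cut of the triangle is 2)")
```

Each `expectation` call is one round of the hybrid loop: the "quantum" evaluation of the cost followed by a classical update of (gamma, beta). The number of such evaluations the optimizer needs is exactly the parameter-search cost highlighted above.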
This talk will present the above challenges in the context of practical applications such as classification and optimization, using data obtained on real quantum hardware from IBM and Rigetti. The first part will introduce various qubit technologies, their operating principles, and a comparative analysis. It will also describe various noise sources, static/dynamic variability, and a generic modeling framework that can be used to evaluate the resilience of quantum circuits. The second part will present quantum circuits and evaluate their resilience using metrics such as fidelity in the presence of noise and variations. It will describe an EDA flow to optimize resilience; newly proposed techniques such as re-allocation and multi-constrained optimization will also be presented. The third part will focus on applying quantum computing to solve real-world problems using approximate algorithms, namely QAOA and VQE. The steps to solve such problems, namely defining a cost function, designing the problem Hamiltonian, and optimizing parameters with classical optimizers, will be presented. Challenges such as the number of iterations needed for optimization will be quantified, and techniques such as machine-learning-based parameter modeling will be discussed. Finally, a design space exploration will be presented to identify the bounds on error rates and variability within which quantum hardware remains useful for solving optimization problems.
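One common way to quantify fidelity from hardware runs is the classical (Hellinger) fidelity between the ideal and measured output distributions. The sketch below uses made-up counts for illustration, not real IBM or Rigetti data:

```python
import math

def hellinger_fidelity(p, q):
    """Classical fidelity between two measurement distributions:
    the squared Bhattacharyya coefficient (sum of sqrt(p_i * q_i))^2."""
    keys = set(p) | set(q)
    bc = sum(math.sqrt(p.get(k, 0.0) * q.get(k, 0.0)) for k in keys)
    return bc ** 2

# Illustrative counts only (not real hardware data): an ideal two-outcome
# circuit vs a noisy run where errors leak probability into other bitstrings.
ideal = {"000": 0.5, "111": 0.5}
shots = {"000": 440, "111": 430, "001": 40, "010": 30, "101": 35, "110": 25}
total = sum(shots.values())
noisy = {k: v / total for k, v in shots.items()}
print(f"fidelity = {hellinger_fidelity(ideal, noisy):.3f}")
```

A fidelity of 1.0 means the noisy distribution matches the ideal one exactly; noise and variability push it below 1, which is the kind of degradation the EDA flow and design space exploration described above aim to bound.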
About this Lecture
Number of Slides: 50
Duration: 60 minutes
Languages Available: English
Last Updated:
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.