Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can sometimes allow us to perform operations many times in parallel for a cost comparable to that of a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass production techniques offer no benefit under a cost model that counts only the number of non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to reduce costs by an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data-loading step.
How many T gates are needed to approximate an arbitrary n-qubit quantum state to within a given precision ϵ? Improving prior work of Low, Kliuchnikov, and Schaeffer, we show that the optimal asymptotic scaling is Θ(√(2^n log(1/ϵ)) + log(1/ϵ)) if we allow an unlimited number of ancilla qubits. We also show that this is the optimal T-count for implementing an arbitrary diagonal n-qubit unitary to within error ϵ. We describe an application to batched synthesis of single-qubit unitaries: we can approximate a tensor product of m = O(log log(1/ϵ)) arbitrary single-qubit unitaries to within error ϵ with the same asymptotic T-count as is required to approximate just one single-qubit unitary.
AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative, but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals the critical strengths and limitations of current agentic approaches, offering actionable guidance on their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
CrossCheck: Input Validation for WAN Control Systems
Rishabh Iyer
Isaac Keslassy
Sylvia Ratnasamy
Networked Systems Design and Implementation (NSDI) (2026) (to appear)
We present CrossCheck, a system that validates inputs to the Software-Defined Networking (SDN) controller in a Wide Area Network (WAN). By detecting incorrect inputs—often stemming from bugs in the SDN control infrastructure—CrossCheck alerts operators before they trigger network outages.
Our analysis at a large-scale WAN operator identifies invalid inputs as a leading cause of major outages, and we show how CrossCheck would have prevented those incidents. We deployed CrossCheck as a shadow validation system for four weeks in a production WAN, during which it accurately detected the single incident of invalid inputs that occurred while sustaining a 0% false positive rate under normal operation, hence imposing little additional burden on operators. In addition, we show through simulation that CrossCheck reliably detects a wide range of invalid inputs (e.g., detecting demand perturbations as small as 5% with 100% accuracy) and maintains a near-zero false positive rate for realistic levels of noisy, missing, or buggy telemetry data (e.g., sustaining zero false positives with up to 30% of corrupted telemetry data).
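As a toy illustration of the kind of check described above (this is not CrossCheck's actual algorithm; the function name, data layout, and tolerance are hypothetical), one can flag entries of a reported demand matrix that deviate from telemetry-derived estimates by more than a relative tolerance:

```python
def flag_invalid_demands(reported, observed, rel_tol=0.05):
    """Return indices where reported demand deviates from the
    telemetry-derived observation by more than rel_tol (toy sketch).

    `reported` and `observed` are equal-length lists of non-negative
    demand values (e.g. Gbps per site pair).
    """
    flagged = []
    for i, (r, o) in enumerate(zip(reported, observed)):
        baseline = max(o, 1e-9)  # avoid division by zero for idle pairs
        if abs(r - o) / baseline > rel_tol:
            flagged.append(i)
    return flagged

# A 10% perturbation on the second entry is caught at the 5% tolerance:
print(flag_invalid_demands([100.0, 110.0, 50.0], [100.0, 100.0, 50.0]))  # [1]
```

A production validator would of course compare richer inputs (topology, link state, demand matrices) against noisy telemetry, which is where the paper's false-positive analysis comes in.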
Low temperature magnetic thermometry with LSCI 372 AC Resistance Bridge
Vladimir Shvarts
Ashley Huff
(2025)
Electronic paramagnets, such as salts and metallic alloys, were used in the past as a convenient way to measure ultra-low temperatures. Custom-built sensors demonstrated a simple 1/T temperature dependence of magnetisation and excellent thermal equilibration times in the millikelvin temperature range.
Modern temperature controllers are part of the automated systems of 3He-4He dilution refrigerators, and can measure both the resistance and the reactance of commercially available temperature sensors.
Here we demonstrate an example of such a system, based on the LSCI 372 AC resistance bridge. It enables in-situ calibration of Cerium Magnesium Nitrate as well as PdFe paramagnetic susceptibility sensors against calibrated resistive sensors.
Such calibrations are validated with noise thermometry and superconducting fixed-point devices down to 8 mK.
Adaptive methods with non-diagonal preconditioning have demonstrated state-of-the-art performance on various tasks. However, the high memory cost of existing non-diagonal preconditioning methods makes them unsuitable for training Low-Rank Adaptations (LoRA). Additionally, these methods do not meet the criteria of efficient feature learning, which is important for LoRA optimization. In this work, we propose a non-diagonal preconditioning method that improves LoRA optimization: it has a low memory cost and achieves efficient feature learning through transformation invariance among equivalent LoRA weights. We provide theoretical justifications for our method, and our experiments on LLM LoRA fine-tuning demonstrate its effectiveness.
The need for characterizing global variability of atmospheric carbon dioxide (CO2) is quickly increasing, with a growing urgency for tracking greenhouse gases with sufficient resolution, precision, and accuracy to support independent verification of CO2 fluxes at local to global scales. The current generation of space-based sensors, however, can by design provide only sparse observations in space and/or in time. While upcoming missions may address some of these challenges, most are still years away from launch. This challenge has fueled interest in using data from existing missions, originally developed for other applications, to infer global greenhouse gas variability.
The Advanced Baseline Imager (ABI) onboard the Geostationary Operational Environmental Satellite (GOES-East), operational since 2017, provides full coverage of much of the western hemisphere at 10-minute intervals from geostationary orbit at 16 wavelengths. We leverage this high temporal resolution by developing a single-pixel, fully-connected neural network to estimate dry-air column CO2 mole fractions (XCO2). The model employs a time series of GOES-East's 16 spectral bands, which aids in disentangling atmospheric CO2 from surface reflectance, alongside ECMWF ERA5 lower tropospheric meteorology, solar angles, and day of year. Training used collocated GOES-East and OCO-2/OCO-3 observations (2017-2020, within 5 km and 10 minutes), with validation and testing performed on 2021 data.
The model successfully captures monthly latitudinal XCO2 gradients and shows reasonable agreement with ground-based TCCON measurements. Furthermore, we demonstrate the model's ability to detect elevated XCO2 signals from high-emitting power plants, particularly over low-reflectance surfaces. We also confirm that removing bands 5 (1.6 µm) and 16 (13.3 µm) substantially decreases performance, indicating that the model is able to extract useful information from these bands.
Although GOES-East derived XCO2 precision may not rival dedicated instruments, its unprecedented combination of contiguous geographic coverage, 10-minute temporal frequency, and multi-year record offers the potential to observe aspects of atmospheric CO2 variability currently unseen from space, with further potential through spatio-temporal aggregation.
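The single-pixel network described above can be sketched as a plain feed-forward regression over a flattened band time series plus auxiliary features. A minimal sketch follows; the time-series length, auxiliary feature count, and layer sizes are illustrative assumptions rather than the authors' configuration, and the weights are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
T_STEPS = 6    # length of the GOES-East time series per pixel
N_BANDS = 16   # ABI spectral bands
N_AUX = 8      # ERA5 meteorology + solar angles + day of year
HIDDEN = 64

def init_mlp(n_in, hidden, n_out=1):
    """Random weights for a two-layer fully connected network."""
    return {
        "W1": rng.standard_normal((n_in, hidden)) * 0.1,
        "b1": np.zeros(hidden),
        "W2": rng.standard_normal((hidden, n_out)) * 0.1,
        "b2": np.zeros(n_out),
    }

def predict_xco2(params, bands_ts, aux):
    """Single-pixel forward pass: flatten the band time series,
    concatenate auxiliary features, and regress a scalar XCO2."""
    x = np.concatenate([bands_ts.ravel(), aux])
    h = np.tanh(x @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"])[0]

params = init_mlp(T_STEPS * N_BANDS + N_AUX, HIDDEN)
xco2 = predict_xco2(params,
                    rng.standard_normal((T_STEPS, N_BANDS)),
                    rng.standard_normal(N_AUX))
print(float(xco2))  # one XCO2 estimate per pixel
```

Because the model sees each pixel independently, it can be applied to every ABI pixel in parallel, which is what makes full-disk, 10-minute XCO2 maps feasible.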
Avoid global outages by partitioning cloud applications to reduce blast radius
Karan Anand
https://cloud.google.com/ (2025)
Cloud application development faces the inherent challenge of balancing rapid innovation with high availability. This blog post details how Google Workspace's Site Reliability Engineering team addresses this conflict by implementing vertical partitioning of serving stacks. By isolating application servers and storage into distinct partitions, the "blast radius" of code changes and updates is significantly reduced, minimizing the risk of global outages. This approach, which complements canary deployments, enhances service availability, provides flexibility for experimentation, and facilitates data localization. While challenges such as data model complexities and inter-service partition misalignment exist, the benefits of improved reliability and controlled deployments make partitioning a crucial strategy for maintaining robust cloud applications.
Measuring software development can help drive impactful change. However, it’s a complex task, and getting started can be daunting: it involves understanding both what you should measure and what you can measure. This article provides a guide to selecting a framework that aligns with your organization's measurement strategy.
Global earthquake detection and warning using Android phones
Marc Stogaitis
Youngmin Cho
Richard Allen
Boone Spooner
Patrick Robertson
Micah Berman
Greg Wimpey
Robert Bosch
Nivetha Thiruverahan
Steve Malkos
Alexei Barski
Science, 389 (2025), pp. 254-259
Earthquake early-warning systems are increasingly being deployed as a strategy to reduce losses in earthquakes, but the regional seismic networks they require do not exist in many earthquake-prone countries. We use the global Android smartphone network to develop an earthquake detection capability, an alert delivery system, and a user feedback framework. Over 3 years of operation, the system detected an average of 312 earthquakes per month, with magnitudes from M 1.9 up to a maximum of M 7.8 in Türkiye. Alerts were delivered in 98 countries for earthquakes with M ≥4.5, corresponding to ~60 events and 18 million alerts per month. User feedback shows that 85% of people receiving an alert felt shaking, and 36, 28, and 23% received the alert before, during, and after shaking, respectively. We show how smartphone-based earthquake detection algorithms can be implemented at scale and improved through postevent analysis.
Beyond Digital Literacy: Building Youth Digital Resilience Through Existing “Information Sensibility” Practices
Mia Hassoun
Ian Beacock
Todd Carmody
Patrick Gage Kelley
Beth Goldberg
Devika Kumar
Laura Murray
Rebekah Park
Behzad Sarmadi
Social Sciences Journal, 14(4) (2025)
Youth media consumption and disordered eating practices have historically been subjects of moral panics, often resulting in protective, deficit-based interventions like content removal. We argue for interventions which instead equip youth to evaluate and manage risks in their online environments, building upon their existing “information sensibility” practices. Drawing upon ethnographic research and intervention testing with 77 participants in the US and India, we analyze how youth (aged 13–26), including those with diverse political perspectives and those recovering from disordered eating (DE), engage with online news and health information. Participants generally algorithmically encountered (rather than searched for) information online, and their engagement was shaped more by social motivations—like belonging—than truth seeking. Participants interpreted online information collaboratively, relying on social cues and peer validation within their online communities. They demonstrated preference for personal testimonies and relatable sources, particularly those with similar social identities. We propose resilience-building interventions that build upon these youth online information practices by: (1) leveraging peer networks, promoting critical information engagement through collaborative learning and peer-to-peer support within online communities; (2) developing social media sensibility, equipping youth to critically evaluate information sources in situ; (3) providing pathways offline, connecting youth to desired in-person communities; and (4) encouraging probabilistic thinking.
Faster electronic structure quantum simulation by spectrum amplification
Guang Hao Low
Robbie King
Alec White
Rolando Somma
Dominic Berry
Qiushi Han
Albert Eugene DePrince III
arXiv (2025) (to appear)
We discover that many interesting electronic structure Hamiltonians have a compact and close-to-frustration-free sum-of-squares representation with a small energy gap. We show that this gap enables spectrum amplification in estimating ground state energies, which improves the cost scaling of previous approaches from the block-encoding normalization factor $\lambda$ to just $\sqrt{\lambda E_{\text{gap}}}$. For any constant-degree polynomial basis of fermionic operators, a sum-of-squares representation with optimal gap can be efficiently computed using semi-definite programming. Although the gap can be made arbitrarily small with an exponential-size basis, we find that the degree-$2$ spin-free basis, in combination with approximating two-body interactions by a new Double-Factorized (DF) generalization of Tensor-Hyper-Contraction (THC), gives an excellent balance of gap, $\lambda$, and block-encoding costs. For classically-hard FeMoco complexes -- candidate applications for first useful quantum advantage -- this combination improves the Toffoli gate cost of the first estimates with DF [Phys. Rev. Research 3, 033055] or THC [PRX Quantum 2, 030305] by over two orders of magnitude.
https://drive.google.com/file/d/1hw4zFv_X0GeMpE4et6SS9gAUM9My98iJ/view?usp=sharing
Quantum Simulation of Chemistry via Quantum Fast Multipole Transform
Dominic Berry
Kianna Wan
Andrew Baczewski
Elliot Eklund
Arkin Tikku
arXiv:2510.07380 (2025)
Here we describe an approach for simulating quantum chemistry on quantum computers with significantly lower asymptotic complexity than prior work. The approach uses a real-space, first-quantised representation of the molecular Hamiltonian, which we propagate using high-order product formulae. Essential to this low complexity is a technique similar to the fast multipole method for computing the Coulomb operator with $\widetilde{\cal O}(\eta)$ complexity for a simulation with $\eta$ particles. We show how to modify this algorithm so that it can be implemented on a quantum computer. We ultimately demonstrate an approach with $t(\eta^{4/3}N^{1/3} + \eta^{1/3} N^{2/3})(\eta Nt/\epsilon)^{o(1)}$ gate complexity, where $N$ is the number of grid points, $\epsilon$ is the target precision, and $t$ is the duration of time evolution. This is roughly a speedup by ${\cal O}(\eta)$ over most prior algorithms. We provide lower complexity than all prior work for $N<\eta^6$ (the only regime of practical interest), with only first-quantised interaction-picture simulations providing better performance for $N>\eta^6$. However, we expect the algorithm to have large constant factors that are likely to limit its practical applicability.
We study the multi-armed bandit problem with adversarially chosen delays in the Best-of-Both-Worlds (BoBW) framework, which aims to achieve near-optimal performance in both stochastic and adversarial environments. While prior work has made progress toward this goal, existing algorithms suffer from significant gaps to the known lower bounds, especially in the stochastic settings. Our main contribution is a new algorithm that, up to logarithmic factors, matches the known lower bounds in each setting individually.
In the adversarial case, our algorithm achieves regret of $\widetilde{O}(\sqrt{KT} + \sqrt{D})$, which is optimal up to logarithmic terms, where $T$ is the number of rounds, $K$ is the number of arms, and $D$ is the cumulative delay. In the stochastic case, we provide a regret bound that scales as $\sum_{i:\Delta_i>0}\left(\log T/\Delta_i\right) + \frac{1}{K}\sum_i \Delta_i \sigma_{\max}$, where $\Delta_i$ is the sub-optimality gap of arm $i$ and $\sigma_{\max}$ is the maximum number of missing observations.
To the best of our knowledge, this is the first BoBW algorithm to simultaneously match the lower bounds in both the stochastic and adversarial regimes in delayed environments. Moreover, even beyond the BoBW setting, our stochastic regret bound is the first to match the known lower bound under adversarial delays, improving the second term over the best known result by a factor of $K$.
Envisioning Aboriginal and Torres Strait Islander AI Futures
Journal of Global Indigeneity (2025)
In January 2025, over forty Aboriginal and Torres Strait Islander researchers, practitioners, community members, and allies gathered at the Centre for Global Indigenous Futures at the Wallumattagal Campus of Macquarie University in Sydney to envisage Aboriginal and Torres Strait Islander AI futures. This publication reports on attendees' vision for the future of AI for Aboriginal and Torres Strait Islander people.