Hey folks, while we’re all still chasing that elusive “quantum winter is over” meme, I stumbled on a couple of arXiv drops from the last month that might actually move the needle, or at least make the hype trains derail in interesting ways.
First up: “Scalable Logical Qubits via Hybrid Cat Qubits” (arXiv:2408.XXXX) from the Yale crew. They’re claiming logical error rates below 10^-6 with just 50 physical qubits per logical one, using those funky cat states that don’t decohere faster than your average coffee break. Teasing the error-correction purists here: remember when surface codes were the only game in town? This hybrid approach laughs in their face by borrowing from continuous-variable quantum optics. Anyone simulated this yet, or is it still vaporware until AWS Quantum launches it?
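For context on why 50 physical qubits per logical is a big deal: here’s a back-of-envelope sketch using the standard surface-code heuristic p_L ≈ A·(p/p_th)^((d+1)/2). All the numbers (threshold, prefactor, physical error rate) are illustrative guesses on my part, not from the paper:

```python
# Rough overhead estimate for a plain surface code, to contrast with the
# paper's claimed 50 qubits/logical. Numbers below are illustrative, NOT
# taken from the paper.

def logical_error_rate(p_phys: float, d: int, p_th: float = 1e-2, A: float = 0.1) -> float:
    # Standard heuristic scaling: p_L ~ A * (p/p_th)^((d+1)/2)
    return A * (p_phys / p_th) ** ((d + 1) / 2)

def min_distance(p_phys: float, target: float = 1e-6) -> int:
    # Smallest odd code distance d that pushes p_L below the target
    d = 3
    while logical_error_rate(p_phys, d) >= target:
        d += 2  # surface-code distances are odd
    return d

d = min_distance(1e-3)   # assume a 1e-3 physical error rate
n_phys = 2 * d ** 2      # rough physical-qubit count for a surface code
print(d, n_phys)
```

With these (made-up) parameters you land at d = 11 and a few hundred physical qubits per logical, so a verified 50-qubit overhead at 10^-6 would be a real step change, which is exactly the noise-bias argument cat qubits lean on.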
Then there’s “Quantum Kernel Alignment for Provable Generalization in QML” (arXiv:2409.YYYY), pushing quantum machine learning past the “it works on toy datasets” phase. They prove that kernel methods on near-term hardware beat classical SVMs on certain problems conjectured to be classically hard, with bounds tighter than your favorite skeptic’s grin. Sarcasm alert for the QML evangelists: yes, it’s not full-blown quantum advantage, but it’s the first paper I’ve seen with actual regret bounds that don’t require fault-tolerance fairy dust. Who’s benchmarking this on IonQ or Rigetti rigs?
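If “kernel alignment” is new to you, here’s the basic idea in ~20 lines: build a fidelity-style kernel K_ij = |⟨φ(x_i)|φ(x_j)⟩|² and score it against the ideal label kernel yyᵀ. The single-qubit RY feature map and the toy data below are my own stand-ins, not the paper’s construction:

```python
import numpy as np

def quantum_kernel(X: np.ndarray) -> np.ndarray:
    # Toy feature map: |phi(x)> = RY(x)|0> = (cos(x/2), sin(x/2)),
    # so K_ij = |<phi(x_i)|phi(x_j)>|^2 (a fidelity kernel)
    states = np.stack([np.cos(X / 2), np.sin(X / 2)], axis=1)
    return (states @ states.T) ** 2

def alignment(K: np.ndarray, y: np.ndarray) -> float:
    # Frobenius alignment between K and the ideal label kernel yy^T;
    # values near 1 mean the kernel "agrees" with the labels
    T = np.outer(y, y)
    return float(np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T)))

X = np.array([0.1, 0.2, 2.9, 3.0])   # two well-separated clusters
y = np.array([1, 1, -1, -1])
a = alignment(quantum_kernel(X), y)
print(a)
```

The alignment-maximization in the paper presumably trains the feature-map parameters to push this score up; the point of the sketch is just that the quantity itself is cheap to compute once you can estimate state overlaps.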
Links in comments if the mods don’t nuke ’em. Thoughts? Rebuttals? Or should we just keep pretending we’re five years from breaking RSA? 😏