
Quantum computing and AI: less compatible than expected?

Filippo Vicentini
Assistant Professor of AI and Quantum Physics at Ecole Polytechnique (IP Paris)
Key takeaways
  • There is a belief that quantum computing could revolutionise artificial intelligence and in particular deep learning.
  • However, quantum computing will not necessarily advance AI, because it struggles to handle the large volumes of data that neural networks require.
  • In particular, quantum computers are very slow at reading and writing data, and only very short calculations can be carried out before errors accumulate.
  • On the other hand, machine learning is already an essential tool for learning how to design and operate today’s quantum computers.

With a number of tech firms promising to be able to solve some small real-world problems within the next few years, it would seem that the world is on the cusp of a quantum computing breakthrough. As such, there has been much hope that access to such quantum computing would transform artificial intelligence too. But a growing consensus suggests this may not yet be within reach.

What can be said about the origins behind the belief that quantum computing could revolutionise AI?

Filippo Vicentini. AI is a very wide-ranging term. So, I’ll focus on “deep learning”, which is behind the new technologies, like text, audio and video generative models, that we are seeing explode today. The idea that quantum computing could boost AI development became more prominent around 2018–19. Companies were coming out with early quantum computers with 1, 2, 3, or 4 noisy qubits. Because of their limitations, these machines could not be used to do larger real-world calculations, which is where we expect quantum computing to really shine. Instead, they were tasked with doing many short “quantum” subroutines (commonly known as quantum circuits), feeding back into a classical optimisation algorithm. This approach is strikingly similar to how neural networks are trained in deep learning.
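The hybrid loop described above can be sketched in a few lines. This is a hypothetical minimal illustration, not any company’s actual algorithm: a short parameterised “quantum circuit” (here a single qubit, simulated classically with 2×2 matrices) is evaluated repeatedly, and a classical optimiser adjusts its parameter, the same evaluate-and-update pattern used to train neural networks.

```python
import numpy as np

def rotation(theta):
    """RY(theta) gate acting on one qubit (classically simulated)."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def expectation_z(theta):
    """Run the 'circuit' |psi> = RY(theta)|0> and measure <Z> = cos(theta)."""
    psi = rotation(theta) @ np.array([1.0, 0.0])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(psi @ z @ psi)

# Classical optimisation loop: plain gradient descent on the circuit's
# output (the "cost"), using a finite-difference gradient.
theta, lr, eps = 0.1, 0.4, 1e-5
for _ in range(200):
    grad = (expectation_z(theta + eps) - expectation_z(theta - eps)) / (2 * eps)
    theta -= lr * grad

# <Z> is minimised (value -1) as theta approaches pi.
print(round(expectation_z(theta), 3))
```

On real hardware, `expectation_z` would be estimated from repeated measurements of a physical qubit; everything else, the optimiser and the parameter updates, runs on a classical computer.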

The hope, back around that time, was that a reasonably sized “quantum circuit” would be more expressive — meaning it could represent more complex solutions to a problem with fewer resources — than a neural network, thanks to quantum phenomena like interference and superposition. In short, this could mean that quantum circuits could enable algorithms that learn to find correlations within data more effectively. Hence, the field of quantum machine learning was born, and several researchers started to try to bring ideas from one side to the other. There was a lot of excitement at the time.

Several companies have touted the advent of more powerful quantum computers in the next few years. Does that mean we should expect a leap forward in AI as well?

The brief answer would be no, I don’t think quantum computing will push AI forward. It’s becoming increasingly clear that quantum computers will be very useful for applications that require limited input and output but a huge amount of processing power: for instance, solving complex physics problems related to superconductivity, or simulating chemical molecules. However, for anything related to big data and neural networks, the consensus is growing that it may ultimately not be worth the effort.

This position was recently laid out in a paper1 by the Swiss National Supercomputing Centre’s Torsten Hoefler, Amazon’s Thomas Häner, and Microsoft’s Matthias Troyer. I just finished reviewing submissions for the QTML24 (quantum techniques in machine learning) conference, and the mood in the quantum machine learning community is clearly trending downward.

Why is that?

More and more experts are recognising that quantum computers will likely remain very slow when it comes to input and output of data. To give you an idea, we expect that a quantum computer that could exist maybe five years from now — if we are being optimistic — will have the same speed to read and write data as an average computer from 1999 or 2000.

If we try to run quantum computers faster to increase the amount of data we can inject, we will start to introduce more errors in the calculation, and the result will deteriorate. There seems to be a speed limit for the operation of these machines, above which the noise and errors are too strong to be corrected, even when we look about 20 years into the future.

Both classical and quantum computers are noisy. For example, a bit or a qubit can, at some point, randomly flip its value. While we can address this effectively in classical computers, we don’t have that technology for quantum computers. We estimate it will take at least another 15 years to develop fully fault-tolerant quantum computers. That means we can only do very ‘short’ calculations.
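A toy example, under the simplest possible assumptions, shows why classical bits are comparatively easy to protect: a repetition code copies each bit three times and takes a majority vote, correcting any single random flip. Copying is forbidden for qubits (the no-cloning theorem), which is one reason quantum error correction is so much harder.

```python
import random

def encode(bit):
    """Classical repetition code: send three copies of the bit."""
    return [bit, bit, bit]

def noisy_channel(bits, flip_prob, rng):
    """Each copy independently flips with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

def decode(bits):
    """Majority vote: any single flip is corrected."""
    return int(sum(bits) >= 2)

rng = random.Random(42)
errors = 0
trials = 10_000
for _ in range(trials):
    sent = rng.randint(0, 1)
    received = decode(noisy_channel(encode(sent), 0.05, rng))
    errors += (received != sent)

# A decoding error needs at least two simultaneous flips, so the
# residual rate is roughly 3 * 0.05**2 ~ 0.75%, far below the raw 5%.
print(errors / trials)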

Moreover, the output of a quantum computer is probabilistic, which creates additional challenges. Classical computers give you a deterministic result – run the same simulation twice and you’ll get the same answer. But every time you run a quantum algorithm the output will be different. The result must be extracted from the distribution of the outputs (how many times you see 0s and 1s). To reconstruct the distribution accurately you must repeat the calculation many, many times, which increases the overhead. This is another reason why some algorithms seemed very powerful a few years ago but were eventually shown not to yield a systematic advantage over the classical ones we can already run on normal computers.
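This sampling overhead is easy to quantify with a hypothetical example. Suppose an algorithm encodes its answer in the probability `p_true` of measuring a 1 (a value chosen here purely for illustration). Each run yields a single 0 or 1, so `p_true` must be estimated from many repetitions, and the statistical error shrinks only as 1/sqrt(shots):

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.3  # the value the algorithm encodes (assumed for illustration)

for shots in (100, 10_000, 1_000_000):
    # Each "shot" is one full run of the algorithm, returning 0 or 1.
    samples = rng.random(shots) < p_true
    estimate = samples.mean()
    print(shots, round(abs(estimate - p_true), 4))
```

Gaining one extra digit of precision costs a hundred times more runs, which is exactly the kind of overhead that erodes a theoretical quantum speed-up.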

Does that mean that AI and quantum computing will be distant cousins, with little overlap?

Not at all. In fact, my colleagues and I have recently launched a petition2 to ask for funding at the European Union level for machine learning and quantum sciences. Machine learning is quickly becoming an essential tool for learning how to design and operate quantum computers. For example, every device is slightly different. Reinforcement learning techniques can analyse your machine and its particular patterns to help fit algorithms specifically to that device. One company, Q-CTRL3, has been doing pioneering work in that field. Google’s Quantum AI4 and Amazon’s Braket5 are two other leaders leveraging these ideas.

AI could also be very complementary to quantum computing. Take Microsoft’s Azure Quantum Elements, which used a combination of Microsoft Azure HPC (high-performance computing) and AI property-prediction filters to whittle down 32 million candidates for a more efficient rechargeable battery material to just 18. Those 18 were then run through powerful, established, processing-intensive algorithms, which remain quite limited: they consume a lot of energy and can’t handle very complicated molecules. That is exactly where quantum computing could step in, in the near future.

I believe AI and quantum computing will be different components in a stack of tools — complementary but not compatible. We want to keep pushing in those directions, and many more, by creating a joint team called “PhiQus” between Ecole Polytechnique (IP Paris) and Inria, together with Marc-Olivier Renou and Titouan Carette.

Interview by Marianne Guenot
1. https://cacm.acm.org/research/disentangling-hype-from-practicality-on-realistically-achieving-quantum-advantage/
2. https://www.openpetition.eu/petition/online/support-the-machine-learning-in-quantum-science-manifesto-2
3. https://q-ctrl.com
4. https://quantumai.google
5. https://aws.amazon.com/fr/braket/
