Even though it has only been publicly available for a few months, ChatGPT is already making significant waves across industries. From writing news articles to generating computer code, the AI chatbot already appears to be taking over tasks once done by people. While this new technology is creating a paradigm shift for many industries, ChatGPT is still being improved and developed, as are counterparts like Google’s Bard. One way to improve these AI chatbots even further could be to apply quantum computing technology. With quantum computing’s promise of faster data analysis and more powerful optimization algorithms, combining these two nascent technologies could create something even more novel than anything we have seen so far.
Some Simple Benefits
What better way to understand the potential benefits of quantum computing than to ask ChatGPT itself? When I asked the program this question, it responded with the following: “Firstly, quantum computing can potentially speed up the training and inference processes of machine learning algorithms, which could make ChatGPT’s responses faster and more accurate. Secondly, quantum computing can help improve natural language processing tasks, which could enhance ChatGPT’s ability to understand and generate human-like language. Thirdly, quantum computing can provide new ways of solving complex optimization problems, which could help ChatGPT improve its decision-making and recommendation capabilities.”
QML Could Bridge the Gap
To connect ChatGPT to quantum computing, many experts argue that software, such as quantum machine learning (QML) algorithms, should be the first priority. “I would say though that there’s a lot of confidence among vendors that QML will show an advantage over classical, however, and generative AI (like the kind used for ChatGPT) will be a good area of opportunity for QML,” states Sam Lucero, the Chief Quantum Computing Analyst at Omdia. QML could potentially bridge the gap between the capabilities of AI and quantum computing and offer some significant benefits.

According to Lucero: “Theoretically, QML could provide an ability to (perhaps exponentially) reduce the volume of training data needed to achieve the same level of inferencing capability, relative to classical generative AI. This in itself is key because large foundational models like ChatGPT are enormous, expensive (millions of dollars) to train, take a long time to train (months), and are energy intensive, which isn’t great from a sustainability perspective (megawatts of electricity to power the data center). Also, there’s a data availability concern: ChatGPT is trained on basically ‘the Internet,’ but many enterprises would like to be able to use just their own internal data for training, but still get the same level of performance in inferencing. That could be a use case benefit, or a regulatory benefit (i.e., using HIPAA-protected data for training), in addition to a benefit in terms of cost, time, sustainability, etc.”

Not only could QML offer more cost-effective models for a future ChatGPT, but it could also run at faster speeds and handle larger volumes of data, providing more powerful solutions. QML could also widen the range of data the model can run through. “QML for generative AI is able to explore a wider search space than classical ML,” Lucero says.
“Practically speaking, this means that QML should be better at, for example, translating idiomatic language, or translating between two languages that are structured very differently at a core grammatical level.” This has huge implications for many different fields, from politics and international translation devices to the anthropology of reconstructing lost languages.
These benefits could also substantially reduce training time for AI chatbots. “One of the most important qualities of ChatGPT is that it is equipped with reinforcement learning from human feedback (RLHF), which is basically reward-based learning,” science and technology writer Ayush Jain adds. “A key challenge to using this is that the model has to go through a lot of trial and error before it knows what is the rewarded behavior – which means that only those organizations who can afford to train the model in public despite its initial flaws can win the game. But, this could change as one study from a few years ago showed that in hybrid systems, reinforcement learning AI was able to learn 60 percent faster than its non-quantum counterpart.” Quantum language models are actually already in the works. As Jain states: “We have seen companies like Cambridge Quantum have developed software toolkit and library for natural language processing.” Cambridge Quantum (now Quantinuum) may be starting a trend by developing language models for AI alongside its quantum computers.
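To make the “reward-based learning” idea Jain describes concrete, here is a minimal, purely illustrative sketch of trial-and-error learning from reward signals – an epsilon-greedy bandit, the simplest form of reinforcement learning. It is not how RLHF is actually implemented for ChatGPT (which fine-tunes a large model against a learned reward model); the reward probabilities below are hypothetical stand-ins for human feedback on candidate responses.

```python
import random

def run_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: a toy model of reward-based learning.

    reward_probs: hypothetical probabilities that each candidate
    "response" earns a human thumbs-up (reward = 1).
    Returns the learned average-reward estimate for each response.
    """
    rng = random.Random(seed)
    n = len(reward_probs)
    counts = [0] * n      # how many times each response was tried
    values = [0.0] * n    # running average reward per response
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                          # explore: try something random
        else:
            arm = max(range(n), key=lambda i: values[i])    # exploit: best response so far
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
    return values

# After many trials, the agent's estimates single out the most-rewarded response.
estimates = run_bandit([0.2, 0.5, 0.8])
best = max(range(3), key=lambda i: estimates[i])
print(best, [round(v, 2) for v in estimates])
```

The “lot of trial and error” Jain mentions is visible here: the agent must sample poorly rewarded responses many times before its estimates stabilize, which is exactly the cost a quantum-accelerated learner would aim to shrink.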
Current Limitations for the Integration of ChatGPT with Quantum
While these benefits could be game changing for the AI chatbot, the actual process of integrating the two technologies is much more complicated. Some experts, like Jain, believe that the most natural next step is to combine classical AI models with quantum computing in a hybrid platform. “The hybrid model focuses on techniques to implement learning on classical data using quantum computers, as opposed to developing fully quantum algorithms that work with quantum data,” Jain explains. “And platforms like NVIDIA QODA and others have already – to some extent – levelled the steep learning curve since the development of hybrid quantum-classical systems can leverage existing classical software stacks, programming models, and libraries, which can help accelerate the adoption of quantum computing.” As quantum computers are still being developed, and remain fragile and finicky, it is difficult to say when this next-generation technology will be ready to take on ChatGPT. Should the two eventually come together, we may see another paradigm shift like the one we are currently experiencing, once again forcing us to adapt to our ever-evolving technology.
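The hybrid quantum-classical loop Jain describes can be sketched in miniature: a quantum processor evaluates a small parameterized circuit, while a classical optimizer adjusts the circuit’s parameters. The sketch below simulates the quantum half classically (a single qubit rotated by RY) and uses the standard parameter-shift rule to get gradients, so it runs anywhere; in a real hybrid platform the `expval_z` call would be dispatched to quantum hardware. This is an illustrative toy, not code for any specific platform such as NVIDIA QODA.

```python
import math

def expval_z(theta):
    """The 'quantum' half of the loop: simulate RY(theta)|0> on one
    qubit and return the Pauli-Z expectation value.
    RY(theta)|0> = [cos(theta/2), sin(theta/2)], so <Z> = cos(theta)."""
    a, b = math.cos(theta / 2), math.sin(theta / 2)
    return a * a - b * b

def parameter_shift_grad(theta):
    """Parameter-shift rule: the exact gradient of <Z> with respect to
    theta, obtained from just two extra circuit evaluations."""
    s = math.pi / 2
    return 0.5 * (expval_z(theta + s) - expval_z(theta - s))

# The classical half: plain gradient descent steering the circuit parameter.
theta, lr = 0.3, 0.4
for _ in range(100):
    theta -= lr * parameter_shift_grad(theta)

# The optimizer drives theta toward pi, where <Z> hits its minimum of -1.
```

The division of labor is the point Jain makes: the classical side reuses ordinary software stacks (here, a plain Python loop), while only the circuit evaluation would need quantum hardware.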
Kenna Hughes-Castleberry is a staff writer at Inside Quantum Technology and the Science Communicator at JILA (a partnership between the University of Colorado Boulder and NIST). Her writing beats include deep tech, the metaverse, and quantum technology.