To prompt or not to prompt? That is the AI question.
AI in the workplace is gaining momentum, but what is the key difference between Generative and Unprompted AI?
AI has begun to take root in the enterprise workplace. Companies are reading the tea leaves as rapid breakthroughs change the technology landscape on a weekly basis. With all this change and innovation, there is renewed momentum toward adoption, and executives are climbing the learning curve. They find a barrage of new terms hurled at them while frantically trying to separate buzzwords from reality. As with any complex problem, it helps to break it down into pieces or categories. We believe one of the key lines of demarcation in AI’s enterprise adoption will be between Generative and Unprompted AI. Both have amazing potential to provide significant leverage and functionality to knowledge workers, but they go about it in very different ways. We’ll unpack this below.
Quick Primer – Generative AI
Generative AI (“GenAI”) is the nascent technology that has catalyzed today’s interest in AI. Like IBM’s historic gambit of Deep Blue vs. Kasparov, people can tangibly feel its impact through consumer-facing technologies like ChatGPT and Bard. They play around with it at home, and their imaginations run wild with awe at its clever output. GenAI centers on the use of large language models (“LLMs”). LLMs are a breakthrough that connects a novel machine learning architecture called the transformer with cutting-edge compute in the form of large GPU clusters, trained on a humongous source of data: the internet! Arguably the most interesting output a casual user sees from this technology is its ability to ‘remix’ information across genres or patterns that we would traditionally deem a creative endeavor. For example, ask an LLM to write a Shakespearean sonnet in the voice of Drake about your dog Fido and sit back in awe as it produces something pretty incredible in seconds.
LLMs have also found niches of enterprise functionality, most notably in the writing of code. Given that they effectively work by pattern matching through language and attempting to predict the next “correct” letter, word, or sentence, the fit with programming languages has been natural, driving quick adoption of tools like GitHub Copilot and Amazon CodeWhisperer. The potential benefits of increasing engineering output are promising.
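The “predict the next word” idea above can be made concrete with a deliberately tiny sketch. This is not how a real LLM is built (transformers learn statistical patterns over billions of parameters, not simple counts), but a toy bigram model shows the same core objective: given what came before, output the most likely next token. The corpus and function names here are illustrative assumptions, not anything from a real system.

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- a real model trains on internet-scale text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram frequency table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

An LLM does essentially this at vastly greater scale and context length, which is why code, with its rigid syntax and repetitive patterns, is such fertile ground for autocomplete.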
Beyond writing “generic” essays and emails and helping autocomplete code, corporate tech teams are actively finding ways to adapt LLMs to their existing business practices. There are a few hurdles to clear, most notably the tendency of an LLM to confidently hallucinate a response, especially when its training data is conflicting or thin. This becomes a more serious issue at the enterprise level, where data sets are far less vast than the internet and accuracy is mission-critical. Additionally, there are security concerns given the opaque process that produces the output.
Enterprise-specific use cases often need audit trails, and LLMs aren’t designed with that in mind. Think about the historical breakthroughs across verticals such as manufacturing processes and supply chains. There was an iterative process of adapting techniques through empirical trial and error. In the future, LLMs might be able to create incredible supply chain optimization, but we may not know how or why such approaches are so effective.
Additionally, a certain degree of “prompt engineering” is required to get the most out of these tools. The way a user prompts the LLM can significantly shape the output, which also calls the “objectivity” of the response into question. For example, making the same request but saying “please” to the LLM can change the result meaningfully. We’re not quite sure why!
Finally, most LLMs require significant training runs to stand up, which are expensive (running into the tens of millions of dollars). This has led most providers to deploy their solutions with a “knowledge date” cutoff, which can be somewhat stale; the most popular LLM (ChatGPT) was trained on data as of 2021. This can be problematic for fast movers, such as financial market participants. Moreover, the need to constantly feed data externally threatens companies whose proprietary data is their “crown jewel”, should they choose to plug into external LLMs through APIs.
With all of that being said, there is an entire world of exciting use cases waiting to be discovered, and some of the smartest companies in the world are hard at work developing and applying these technologies.
Quick Primer – Unprompted AI
If GenAI delivers utility by prompting the user for a question (like a Google search), Unprompted AI delivers value by pushing salient information to the user based upon their existing knowledge graph. This inversion of the use case is tailor-made for the enterprise because it addresses problems knowledge workers commonly face. Knowledge workers are increasingly asked to deliver more efficiency through a myriad of external tools such as data vendors, improved software systems, and databases. They often find themselves overwhelmed by disparate systems, login management, and the switching cost of losing task orientation (for more, read our piece on increasing tech spend ROI at your firm). A great Unprompted AI is like a wonderful assistant who provides the right information at the right time, surfaced directly in your workflow.
This type of assistance necessitates cutting-edge AI technology, but of a different type than GenAI. We believe that enterprise-specific AI applications will largely come from Unprompted AI. Because Unprompted AI isn’t an LLM-centric solution, it doesn’t require a black box vacuuming up all of an enterprise’s confidential information from behind its firewall. Productivity-enhancing assistance can be delivered within the confines of the business.
Besides security, one of the core tenets behind Unprompted AI is fostering adoption. To do so, frictions to adoption – such as logging in and learning new platforms, let alone prompting a query – have to be eliminated. By pushing content directly to enterprise employees at the heart of their communication hubs, latent knowledge and valuable information within a business are surfaced with zero search cost (to learn more about this, read about how banks can lower attrition by leveraging Unprompted AI). Unprompted AI helps corporations get more out of their existing knowledge spend. It allows them to stave off disruption (or be the catalysts of their own!). Most importantly, for mission-critical objectives, it is an accuracy-focused rather than a creativity-focused solution.
At ModuleQ, our goal is to deliver the best enterprise-ready Unprompted AI solution for knowledge workers, starting with Investment Bankers. Bankers face the unique challenge of being some of the busiest workers in the world while simultaneously hamstrung by strict regulations and hyper-competitive markets. Unprompted AI is the hand-in-glove solution for the modern Investment Bank that wants to empower its most valuable assets.