Artificial Intelligence

To prompt or not to prompt? That is the AI question.

AI in the workplace is gaining momentum, but what is the key difference between Generative and Unprompted AI?

Key Takeaways: 

  • Generative AI is largely a “pull paradigm”, requiring users to prompt the AI in order to retrieve insights and information

  • Unprompted AI is a “push paradigm”, delivering insights, relevant news, and timely alerts directly into the user’s workflow 

  • Unprompted AI aims to improve productivity by reducing distraction while tailoring its delivery to the individual user’s needs and role 

  • The enterprise use case for both technologies will vary depending on these key attributes, as well as a company’s concerns regarding data security, user specificity, and the criticality of accuracy 

 

Quick Primer – Generative AI

Generative AI (“GenAI”) is the nascent technology that has catalyzed today’s interest in AI. As with IBM’s historic gambit of Deep Blue vs. Kasparov, people can tangibly feel GenAI’s impact through consumer-facing technologies like ChatGPT, Gemini, and Claude. They play around with it at home, and their imaginations run wild with awe. GenAI centers on the use of large language models (“LLMs”). LLMs are a breakthrough that pairs a novel machine learning architecture, the transformer, with cutting-edge compute in the form of large GPU clusters. They are trained on vast quantities of data, chiefly the public internet. 

Arguably the most delightful output a casual GenAI user sees is its ability to ‘remix’ information across genres or patterns we would traditionally deem creative tasks. For example, ask an LLM to write a Shakespearean sonnet in the voice of Drake about your dog Fido, and sit back in wonder as it produces something pretty incredible in mere seconds. 

LLMs have also found enterprise use cases, most notably in writing code. Given that they effectively work by pattern-matching language and predicting the next “correct” letter, word, or sentence, they apply naturally to programming languages. We’ve already seen rapid adoption of tools like GitHub Copilot and Amazon CodeWhisperer. We are broadly learning that much of code is less user-specific and more general in its implementation. A useful analogy: buildings were once erected by stonemasons through artisanal handcrafting, whereas today a structure goes up more efficiently with prefabricated materials like drywall or a steel exoskeleton. Of course, each programming language and use case has its own style and vernacular, but the sheer volume of common, boilerplate code makes it a strong candidate for a GenAI’s prediction engine. The potential gains in engineering output are already tantalizing. 
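To make the “predict the next word” idea concrete, here is a deliberately toy sketch: a bigram frequency counter that guesses the most common follower of each word. Real LLMs use transformer networks trained on billions of tokens, not frequency tables; this is only an illustration of the prediction framing, and the tiny corpus is invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction. An LLM does something far
# richer, but the framing is the same: given context, predict what
# token most plausibly comes next.
corpus = "def add(a, b): return a + b def sub(a, b): return a - b".split()

# Count which word follows which.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("return"))  # "a" -- the usual follower of "return"
```

Because code is so repetitive, even this crude approach guesses sensibly on boilerplate, which hints at why LLM-based completion works so well for programming.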

Outside of code, the ability to draft “generic” essays and emails has also taken root. A worker can get 80% of the way through such tasks with a GenAI plugin. From there, corporate tech teams are experimenting with ways to adapt LLMs to their business practices. There are a few hurdles to cross, namely the tendency of an LLM to confidently hallucinate a response, especially when its training data is conflicting or thin. This becomes a more serious issue at the enterprise level, where datasets are far less vast than the internet and accuracy is mission critical. Finally, the opaque process that produces an LLM’s output sits uneasily alongside the need for auditability in regulated industries, an ever-present concern that must be addressed. 

Enterprise Needs: The Pull Nature of GenAI 

A fundamental design characteristic of GenAI is that it is a pull paradigm. The user must navigate to a prompt (e.g., ChatGPT’s website or app) and pull information from it by asking a question. The critical notion to unpack here is that the user needs to know what to ask. Often, that is far from clear when an enterprise worker is researching a greenfield area, or standing at a crossroads unsure of the likely next step. Workflows can fall into the “unknown unknowns,” where the information is buried somewhere but surfacing it is an exercise in and of itself. 

Beyond knowing what to ask, the user of GenAI must know how to ask it. A certain degree of “prompt engineering” is required to get the most out of these tools. The way a user prompts the LLM can greatly shape the output, which also calls into question the “objectivity” of the response. For example, making the same request but saying “please” to the LLM can change the result meaningfully. We’re not quite sure why. 

As mentioned previously, enterprise-specific use cases often need audit trails, and LLMs aren’t designed with that in mind. Think about the historical breakthroughs across verticals such as manufacturing processes and supply chains: there has always been an iterative process of adapting techniques through empirical trial and error. In the future, LLMs might produce incredible supply chain optimizations, yet we may not know how or why those approaches are so effective. 

Finally, most LLMs require significant training runs to stand up, which are expensive: tens of millions of dollars or more. This has led most providers to deploy their models with a “knowledge cutoff” date, which can leave them somewhat stale. For most of 2023, the most popular LLM (ChatGPT) was trained on data only through 2021. This is problematic for fast movers, such as financial market participants, whose use cases would require data to be fed into the LLM continuously so its analytical capacity remains current. And for companies whose proprietary data is their “crown jewel,” pushing that data to external LLMs through APIs poses a security concern. 

With all of that said, there is an entire world of exciting use cases waiting to be discovered, and some of the smartest companies in the world are hard at work developing and applying these technologies. 

Unprompted AI: The Push Solution

If GenAI delivers utility by prompting the user for a question (like a Google search), Unprompted AI delivers value by pushing salient information to the user based on their existing knowledge graph. This inversion of the user interface is tailor-made for the enterprise, because pushing information addresses common pain points for knowledge workers. Knowledge workers are increasingly asked to deliver more efficiency by being equipped with a myriad of external tools: data vendors, software systems, and internal databases. They often find themselves overwhelmed by disparate systems, login management, and the switching cost of losing task orientation. The goal of Unprompted AI is to provide the right information, to the right person, at the right time, surfaced directly in their workflow.
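The push idea above can be sketched in a few lines. Everything here is hypothetical – the profile fields, tags, and threshold are invented for illustration and are not ModuleQ’s actual implementation – but it shows the inversion: instead of waiting for a query, the system scores items against what it knows about the user and delivers only those that clear a relevance bar.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Hypothetical stand-in for a user's knowledge graph."""
    role: str
    interests: set = field(default_factory=set)

def relevance(item_tags: set, profile: UserProfile) -> float:
    """Score an item by the fraction of its tags the user cares about."""
    if not item_tags:
        return 0.0
    return len(item_tags & profile.interests) / len(item_tags)

def items_to_push(items, profile, threshold=0.5):
    """Select items relevant enough to push -- no prompt required."""
    return [title for title, tags in items
            if relevance(tags, profile) >= threshold]

banker = UserProfile(role="coverage banker",
                     interests={"M&A", "healthcare", "debt"})
news = [
    ("Healthcare M&A roundup", {"M&A", "healthcare"}),
    ("Retail earnings recap", {"retail", "earnings"}),
]
print(items_to_push(news, banker))  # -> ['Healthcare M&A roundup']
```

The hard part, of course, is not the filter but building a profile rich enough – down to role and seniority – that the pushed items are genuinely the right ones.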

One of the core tenets behind Unprompted AI is fostering adoption. Frictions to adoption – such as logging in, learning new platforms, and composing a query – must be eliminated. By pushing content directly to enterprise employees at the heart of their communications hubs, latent knowledge and valuable information within a business is surfaced at zero search cost. (To learn more, read about how banks can lower attrition by leveraging Unprompted AI.) Unprompted AI helps corporations get more out of their existing knowledge spend. It allows them to stave off disruption. Most importantly, for mission-critical objectives, it is an accuracy-focused solution. 

Of course, if an AI is pushing information to a user, that AI must understand the user. Unprompted solutions cannot lean on the relevance signal that an explicit user request provides. This means Unprompted AI must be tailored to industry verticals, differentiated between specific companies, and specific all the way down to the user’s role and seniority. This is the promise – and the difficulty – of Unprompted AI. 

ModuleQ’s Unprompted Solution 

At ModuleQ, our goal is to deliver the best enterprise-ready Unprompted AI solution for knowledge workers, starting with Financial Professionals. By focusing on a specific industry, we are able to deliver bespoke insights to a particular vertical of knowledge workers. Bankers, for example, face the unique challenge of being some of the busiest workers in the world who are further restricted by strict regulations and hyper-competitive markets. Unprompted AI is the solution for the modern financial institution that wants to empower its most valuable assets. 

FREE EBOOK

Microsoft Teams as an Insights Hub - Delivering Unprompted AI
