Debunking the Prompt Engineering Myth
by ModuleQ, November 2024
Myth: Mastering prompt engineering is essential to unlock AI's full potential.
No matter how powerful and large your language model is, it still can’t read your mind. You need to specify to the model—often in precise terms—what you want it to fetch, generate, or synthesize. That input is called a prompt and is essential to the quality of output you get from an LLM. The process of tuning your request, and providing enough context to the model for a quality reply, is called prompt engineering.
Prompting has been tested, tweaked, fine-tuned, and researched in the hopes of improving LLM responses and mitigating noted deficiencies such as hallucination and false confidence. Prompt engineering usually focuses on improving the context around the ask, for instance indicating the role that the AI should imitate or specifying the structure of the response it should generate: "You are a physics professor…", "The accuracy of this response is extremely important to me…", "Follow a step-by-step chain of thought…"
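To make this concrete, here is a minimal sketch of how those techniques are often combined into a single prompt. The wording, ordering, and output format below are illustrative assumptions, not a canonical recipe:

```python
# A minimal sketch of common prompt-engineering techniques: role assignment,
# stakes-setting, a chain-of-thought instruction, and an output structure.
# The exact wording and format are illustrative, not a canonical recipe.

def build_prompt(question: str) -> str:
    """Assemble a prompt from several common prompt-engineering techniques."""
    return "\n".join([
        # 1. Role: prime the model to answer from a particular persona.
        "You are a physics professor.",
        # 2. Stakes: emphasize that accuracy matters.
        "The accuracy of this response is extremely important to me.",
        # 3. Chain of thought: request step-by-step reasoning.
        "Follow a step-by-step chain of thought before stating your answer.",
        # 4. Structure: specify the shape of the reply.
        "Format your reply as 'Reasoning: <steps>' then 'Answer: <one sentence>'.",
        f"Question: {question}",
    ])

print(build_prompt("Why does ice float on water?"))
```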
The enterprise space is taking note. The market for prompt engineering is expected to be over $200MM USD by 2030, representing a 32% annualized growth rate. This has led to a cottage industry of prompting guides, prompting debate, and prompting accreditation.
All of these prompt engineering techniques have been used to deal with various LLM output issues. But what if all this prompt engineering rests on faulty base-level assumptions?
"Invert, always invert." – Charlie Munger
Truth: While prompt engineering can enhance AI interactions, it's not a prerequisite for deriving significant value from AI.
In a recent paper by researchers at Google (Liu et al., 2024), the entire premise of prompt engineering was inverted through the practice of constrained generation: using tools that prespecify the range of acceptable outcomes for an LLM's response, thereby restricting the output the model is permitted to produce and mitigating the risk of hallucination.
As the authors note, "…users not only need low-level constraints, which mandate the output to conform to a structured format and an appropriate length but also desire high-level constraints, which involve semantic and stylistic guidelines that users would like the model output to adhere to without hallucinating. Notably, developers often have to write complex code to handle ill-formed LLM outputs, a chore that could be simplified or eliminated if LLMs could strictly follow output constraints."
The key point is that merely prompting the LLM to constrain its output, say to 20 words or to a valid JSON object, will not guarantee compliance. Constrained generation, however, adds what is effectively a membrane layer of output review to ensure adherence to the prescribed constraints, providing a "heightened sense of assurance that the constraints will be strictly followed."
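As a minimal sketch of that membrane layer, consider a post-hoc validate-and-retry loop. Here call_llm() is a hypothetical placeholder for any model API, and the schema and word limit are invented for illustration; production constrained-decoding tools typically restrict tokens during generation rather than checking the finished output:

```python
import json

# A minimal sketch of a "membrane layer" enforcing output constraints.
# call_llm() is a hypothetical placeholder for any model API; real
# constrained-decoding tools restrict tokens during generation, whereas
# this simpler post-hoc loop validates the finished output and retries.

REQUIRED_KEYS = {"summary", "confidence"}  # illustrative schema
MAX_WORDS = 20                             # illustrative length constraint

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    raise NotImplementedError

def constrained_generate(prompt: str, max_retries: int = 3) -> dict:
    """Return a reply only if it satisfies every prescribed constraint."""
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            parsed = json.loads(raw)          # low-level: must be valid JSON
        except json.JSONDecodeError:
            continue
        if set(parsed) != REQUIRED_KEYS:      # structural: exact keys required
            continue
        if len(str(parsed["summary"]).split()) > MAX_WORDS:  # length limit
            continue
        return parsed                         # all constraints satisfied
    raise ValueError("model failed to satisfy output constraints")
```

The prompt may still ask for 20 words and a JSON object, but the guarantee now comes from the review layer, not from the model's willingness to comply.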
Our founders, David Brunner, PhD, and Anupriya Ankolekar, PhD, wrote in a previous piece about AI governance and the promise of constraint, “Between the extremes of unconstrained Generative AI models (highest risk) and models guaranteed to produce outputs in well-defined ranges (lowest risk), there are constrained Generative AI models that incorporate mechanisms to constrain the output of the Generative AI.”
As the researchers note, “applying output constraints could not only streamline the currently repetitive process of developing, testing, and integrating LLM prompts for developers but also enhance the user experience of LLM-powered features and applications.”
Their survey identified six primary categories of use cases for output constraints, ranging from low-level requirements, such as structured formats and appropriate length, to high-level semantic and stylistic guidelines.
This provides a good starting point or framework for applying constrained generation, through which enterprises that depend on mission-critical accuracy can gain more comfort with LLMs. When approaching the problems surrounding unconstrained generative AI, often framed through the lens of prompt engineering, take a step back and consider methods of applying constrained generation. A good constraint framework is a valuable tool for optimizing the risk-reward trade-off of LLM use.
Prompt Alternatives
Truth: Unprompted AI offers a new paradigm where AI can proactively provide relevant information and insights without explicit user input.
The way most practitioners and casual users view AI is through the “Pull” paradigm of a prompt: meaning you must prompt the AI model to give you something, pulling (if you will) the information out of it. But that paradigm may be shifting as a new type of AI emerges and transforms the way businesses interact with AI.
Unprompted is Push, not Pull
Unprompted AI is a push paradigm, pioneered by ModuleQ. It takes the mental load off the user and places it on the AI itself. It still sits on top of a massive corpus of data, able to synthesize information and deliver value. Instead of relying on prompts, it learns about the user and pushes the right data at the right time.
Unprompted AI uses a patented process called Personal Data Fusion, which builds a digital twin of each user, ensuring the right information reaches them through Unprompted AI. This is guided by our philosophy of personalizing the enterprise work experience. As we note in our company's mission statement:
Your digital twin is like a reflection of your professional self in the digital world. It is sufficiently accurate and detailed to be recognizable, but really just a very shallow representation compared to all the complexity of your mind in the real world. Although your digital twin is not very deep, it enables ModuleQ to understand your work context and priorities. Like an ever-diligent assistant, the AI knows about the business relationships and topics that you care about most. Our highly refined algorithms are so attentive, they can predict the information that will be useful with up to 90% accuracy.
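To give the push paradigm a concrete shape, here is a deliberately simplified conceptual sketch. It is not ModuleQ's proprietary Personal Data Fusion; the profile fields, tags, and "Acme Corp" example are all hypothetical, showing only the general idea of scoring incoming items against a digital-twin profile and surfacing matches without any prompt:

```python
from dataclasses import dataclass, field

# A deliberately simplified conceptual sketch of push-style delivery.
# This is NOT ModuleQ's proprietary Personal Data Fusion; it only shows
# the general shape: score incoming items against a user's digital-twin
# profile and surface anything relevant, with no prompt required.

@dataclass
class DigitalTwin:
    """A shallow profile of a user's work context and priorities."""
    topics: set = field(default_factory=set)
    relationships: set = field(default_factory=set)

def relevance(twin: DigitalTwin, item_tags: set) -> float:
    """Fraction of an item's tags matching the user's interests."""
    if not item_tags:
        return 0.0
    interests = twin.topics | twin.relationships
    return len(item_tags & interests) / len(item_tags)

def push_alerts(twin: DigitalTwin, incoming: list, threshold: float = 0.5):
    """Yield items relevant enough for this user, unprompted."""
    for item in incoming:
        if relevance(twin, set(item["tags"])) >= threshold:
            yield item["headline"]

banker = DigitalTwin(topics={"M&A", "semiconductors"}, relationships={"Acme Corp"})
news = [
    {"headline": "Acme Corp explores semiconductor acquisition",
     "tags": {"Acme Corp", "M&A"}},
    {"headline": "Local weather update", "tags": {"weather"}},
]
for alert in push_alerts(banker, news):
    print("ALERT:", alert)  # pushed to the user, no prompt needed
```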
Main differences between Prompted and Unprompted AI and the uses of each
- Prompted AI: Requires user input (prompts) to produce text, images, or other content. Excels at creative generation but requires precise direction.
- Unprompted AI: Learns from user behavior, context, and preferences to deliver insights automatically. Better aligned with real-time, context-sensitive applications in businesses.
- Prompted AI: Great for content creation and specific, detailed orchestration.
- Unprompted AI: Ideal for business workflows, where surfacing relevant information without asking improves productivity.
Real-World Applications of Unprompted AI
Today, enterprise clients are leveraging ModuleQ’s Unprompted AI through a series of alerts directed into the user's Microsoft Teams environment. These include breaking alerts such as news, cadenced alerts such as weekly summaries, and calendar-synced updates such as Pre-Meeting Alerts.
ModuleQ works with some of the most sophisticated financial institutions in the world to deliver greater productivity and top-line revenue generation capability to senior bankers by making them more efficient with their time and expanding their view of new business opportunities through Unprompted AI alerts.
Conclusion: Rethinking AI Value Beyond Prompt (Engineering)
While prompt-driven AI tools will continue to be a driving force for AI adoption across the enterprise workspace, placing constraints on LLM output is one way to address governance and accuracy concerns around generative AI use. Ultimately, the myth that mastering prompt engineering is essential to unlocking AI's full potential is rooted in truth, but the future of prompt engineering may be having no prompts at all.
Additionally, Unprompted AI will be a worthy complement to all things prompt, especially in dealing with the costs of context switching, the need for certain regulated industries to keep workflows within their perimeters, and the benefit of having AI understand more about the actual human leveraging it, delivering a more harmonious and productive workspace.