The Innovator’s Guide to AI Use Cases in Investment Banks
Foreword
- In 2024, Investment Banks must navigate AI adoption amid a range of competing concerns
- We believe decision making should be governed by security, accuracy, and employee adoption
- CIOs should look to sandbox prospective solutions for testing while partnering with established vendors that have compliant AI solutions
- They should focus on solutions that deliver role-specific value rather than generic features
- To aid adoption, AI Innovation Checklists can help balance competing value propositions
- We provide tangible examples to accelerate adoption of these checklists
- Finally, we suggest building AI solutions on modern architectures, “leapfrogging” the old, reducing barriers to usage, and getting quick wins on the board
The business world is rapidly adopting AI. We read it in the headlines every day: another panel at Davos extolling how AI is reshaping a multi-national’s help desk, another OpenAI developer conference touting successful corporate integrations, another splashy venture investment into a promising new enterprise-focused startup. It seems inevitable that most workflows will be enhanced in some form by AI. The real question is the rate of adoption.
As a firm that works closely with highly regulated financial institutions, we understand how challenging it can be to deploy software inside these walls. Banks build architectures whose resilience and compliance demands produce a vigorous immune response to newly proposed workloads. Deployments here entail vastly different onboarding processes than venture-backed startups are used to. And so, we have learned many lessons (with the battle scars to prove it!) about successfully bringing innovation to Investment Banks.
Investment bankers need to be on top of the AI trend. They are essentially in the intellectual capital business, and competitive pressure will eventually transform their workflows. However, we know that technology adoption is never easy at banks. Old solutions can be surprisingly sticky. Path dependencies are unique to each organization (often befuddling “plug and play” vendors), training requirements are challenging for busy employees, technical debt stymies new features, and regulations complicate adoption. The fundamental question is: how can banks stay innovative while remaining measured?
In this short ebook, we’ll walk you through our house view toward navigating this challenge. ModuleQ is solely focused on AI solutions for investment bankers, and our focus has allowed us to accumulate deep insights into adopting productive solutions for this niche industry. For investment banks, we keep coming back to three key principles: safety, precision, and adoption.
We’ll unpack how compliance is table stakes for any new solution, why accuracy matters in a business where mistakes aren’t acceptable, and why gaining traction with employees on the first attempt is crucial to winning mindshare. We hope this guide is a helpful resource for the CIO or COO trying to navigate this bold new frontier.
How does a CIO deploy a promising new technology?
The modern CIO often comes across exciting new technologies, but very rarely do the stars align in a way that makes them actionable. Any new solution must tick several boxes for it to be truly considered. First of all, it must add palpable value or fix an immediate pain point, ideally in the short term. Crucially, it must be bulletproof when it comes to security and compliance. It also helps to have several key stakeholders clamoring for the solution, which signals the potential for broad adoption. Obviously, there must be a budget set aside, and the costs of acquisition and integration have to make sense. Finally, the initiative must be aligned with leadership’s broader vision of digital transformation and how they want to enact change.
For most IT spend at a bank, there is often a top-down push to adopt a new solution or technology. Management wants its employees to migrate to a certain data platform for strategic reasons, is keen on adopting a partner solution’s communication interface, or is pushing toward a new, holistic CRM. The top-down prescription comes with pushback. Bankers bristle at having their legacy systems displaced and at the new learning curves required to replace existing workflows. Many simply don’t log in to the “superior” new solutions, rendering them inert. They tend to cling to old workflows, circumventing trainings and carving out exceptions. But every once in a while, a technology comes along that leads to “bottom-up” adoption, where bankers push their IT teams for the tools because they feel the immediate benefit in helping them manage workflows, drive business, and keep clients happy.
AI Adoption: Limit Scope Creep, Focus on Outcomes
This is the nascent promise of AI solutions. These tools seek to give bankers superpowers. Generative AI is at the spearpoint of this push, leaving many CIOs in a predicament. The most powerful, cutting-edge generative AI requires a degree of data access that makes banking compliance teams nervous. It introduces a degree of unpredictability that makes senior MDs nervous about the accuracy of their staff’s work. In short, it presents challenges to adoption.
One way to underwrite and expedite adoption is to limit scope creep and focus on specific outcomes. Sandbox the solution within the bank’s firewalls, find early adopters to A/B test use cases, and introduce tools based upon known workflows rather than unimagined possibilities. This approach focuses on onboarding while ensuring that data integrity isn’t compromised. As a bonus, uptake tends to be brisker. The specificity of outcomes ensures that all stakeholders can measure progress and that expectations can be met.
Working with trusted and vetted partners that have existing relationships with banks is key. It removes much of the headwind associated with bringing on an exciting technology. Every bank has a lengthy onboarding process for a partner vendor, and going back to a trusted and reputable source is a far easier path than starting from step one. For example, ModuleQ partners with Microsoft and LSEG to bring its Unprompted AI (more on what that is later) to investment banks. Our partnerships were a concerted effort to ensure that banks could harness our productivity-enhancing solutions within the umbrella of their trusted counterparties.
Sandbox
In technology adoption, a sandbox is an environment where a new tool or piece of software can be tested without being given access to the rest of the company’s backend architecture. It offers a safe and contained environment for testing.
Find Use Cases that Drive Value
While bankers are clamoring for AI tools, the business case has been less than straightforward. We liken Generative AI to an infinitely available intern. The intern has very little autonomy to drive productivity and will sit idle without instruction, but when directed with clear instruction, they can prove extremely valuable. One challenge with GenAI today is that its value-add doesn’t map squarely to business objectives such as winning new clients or improving relationships; it is more general, in the sense of extra horsepower. It is abundantly clear that however smart an intern is, they lack the reps and experience to be given autonomous responsibility. The same goes for GenAI. While prompt tools such as ChatGPT and Gemini provide exciting opportunities for value add, they can’t be left to their own devices.
To really accelerate adoption while ensuring quality, IT departments should focus on use cases and then outcomes. This all starts from the individual’s role: understanding clearly the needs of that role, and then finding solutions that tackle them discretely. Take a busy MD who has little time to log in to CRMs or to read the dozens of reports that come through their inbox. Their inbox is something of a battlefield, with competing interests attempting to win mindshare. The MD is most focused on execution and can’t spend time reading through every detail. What they need is curation, simplification, and direction. That is a different use case from that of the first-year analyst, who is trying to quickly get up to speed, to improve their accuracy on rote tasks, and to improve the timeliness of their work product. AI solutions must be tailored to role-specific use cases for them to drive outcomes!
Our approach at ModuleQ has always been to tailor our solutions to specific roles. Our Unprompted AI solution is designed to deliver the right information to the right person at the right time. It does this through our Personalized Data Fusion, which curates the information we serve to bankers based upon their personas. This is quite different from a GenAI prompt box.
Our Unprompted AI is powered by LSEG market data and Microsoft technology, delivering customized insights such as pre-meeting and new-mandate alerts. It is enriched with genuinely relevant breaking news drawn from LSEG’s trusted market data. All of this is delivered directly into the banker’s communication hub, Microsoft Teams. The goal is not to bring the whole internet to the fingertips of the banker but to help the banker organize and deliver on existing datasets and workflows. Put another way: the goal is to maximize the ROI on all your existing data storage and collaboration systems while reducing distraction.
This focus means that we address specific needs for specific bankers: salient insights across target areas of interest, tailored to their specific role or vertical, and synced to their calendar for timeliness. The goal is to move past information overload and reduce the loss of task orientation caused by diverting attention to a separate portal, while also overcoming the deficiencies we have all found with email. We think this approach makes sense for any technological adoption, whether it is a Generative AI solution such as Microsoft Copilot (which works synergistically with our Unprompted AI) or new datasets from trusted vendors such as LSEG.
AI Innovation Checklists
Ok, let’s say you have role-specific needs, assessed from the ground level. The question is: what criteria are necessary to make a quality spending decision? To answer this, we suggest developing an AI Innovation checklist or rubric. This helps evaluate the inevitable trade-offs associated with introducing new technologies and workflows to bankers.
In terms of checklist criteria, we would suggest a few items, highlighted in the graphic below. First: will this be adopted by my bankers? This should be a function of the required learning curve, the friction of activation or login, and the displacement of a ‘favored’ mode of work. To put more flesh on that: how hard will it be for a banker to learn how to use the tool? Does it require them to log in to a separate portal from the ones they are comfortable with? And in doing so, are you forcing them to abandon a favored workflow? Is there a trade-off between the quality of information that can be surfaced and the need to divert one’s task orientation through a separate channel?
Second: does this drive real value? How much actual productivity can this solution provide versus the existing set of solutions? Will it help win new business, or retain existing business? For example, an exciting dataset that provides deal-flow information to capital markets teams may seem valuable until it is compared to the deal-flow data already embedded in existing data portals or within the bank’s four walls. The true problem may not be data availability but data retrieval. Similarly, a large database of interesting contacts may require significant time and effort to sort and analyze before it translates into new business or enriches existing client relationships. Will that effort be part of your underwriting decision on the dataset, or will it fall by the wayside, thereby failing to move the needle?
Finally: how complementary is this to the existing strategy on spend? Are there positive synergies with the existing data and systems in place? How does it integrate? Is it siloed? Does it unlock value on existing spend, or does it aim to replace it? Let’s say your bank has a great relationship with a data vendor (such as LSEG). Does the solution play nicely with that vendor’s suite of tools? Can it sit on top of your CRM and ease friction? Is it general purpose or specific purpose?
↗️ Value Driver
- Productivity gains
- New business
- Client retention
✴️ Efficiency
- Steps added or removed
- Distraction from task orientation
- Email overload
🔁 Integrations
- Positive synergies with existing data
- Existing system integrations
- Embedded or siloed
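
To make the checklist concrete, here is a minimal sketch in Python of how a team might score competing solutions against the three dimensions above. The criteria, weights, candidate names, and 1-to-5 scale are illustrative assumptions, not a prescribed methodology; the point is to force an explicit, comparable trade-off discussion.

```python
from dataclasses import dataclass

# Illustrative weights for the three checklist dimensions (assumed, not prescriptive)
WEIGHTS = {"adoption": 0.4, "value": 0.4, "integration": 0.2}

@dataclass
class SolutionScore:
    """Scores a candidate AI solution on a 1-5 scale per checklist dimension."""
    name: str
    adoption: int     # learning curve, login friction, workflow displacement
    value: int        # productivity gains, new business, client retention
    integration: int  # synergies with existing data, systems, and vendors

    def weighted_total(self) -> float:
        return (WEIGHTS["adoption"] * self.adoption
                + WEIGHTS["value"] * self.value
                + WEIGHTS["integration"] * self.integration)

# Hypothetical candidates scored by the evaluation team
candidates = [
    SolutionScore("Standalone GenAI portal", adoption=2, value=4, integration=2),
    SolutionScore("AI embedded in Teams", adoption=5, value=4, integration=4),
]

for c in sorted(candidates, key=SolutionScore.weighted_total, reverse=True):
    print(f"{c.name}: {c.weighted_total():.1f}")
```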
How to Fast-Track Big Bets
At some point, the firm’s strategic vision and IT roadmap coalesce around a big bet. In the past, that bet might have been data modernization, vendor & third-party partnership, or embedding specific tools into workflows. If your bank has decided that AI adoption is its next big bet, the question is how to increase the probability of success.
In our experience, high-probability big bets are structured around establishing a value hypothesis and then proving that hypothesis using a POV. The POV can demonstrate alignment and increase the organization’s tacit understanding of the solution. AI sounds like a great thing to have, but do you really know what the vendor is selling you? A POV is often the best way to determine that.
A quantified value hypothesis is equally relevant for a successful roll-out. Without it, misalignment can occur, leading to frustration for the product champion, the vendor, and the user. A misaligned POV occurs for many reasons: budget authorization, changing needs, vendor “sleight of hand”, scope creep, and in-house competition, to name a few. So how do you gather the right supporting evidence to convince the executive suite that this big bet won’t flop? In our experience, there should be a pre-determined quantitative framework built around metrics such as increased client engagement, active usage, and new opportunities. There should also be qualitative data capture from trial users, to give the organization confidence that tangible value is being delivered. This type of feedback refines one’s tack for a solution’s broader rollout.
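
As a simple illustration of what a pre-determined quantitative framework might look like in practice, the Python sketch below compares POV results against targets agreed up front. The metric names, targets, and observed figures are entirely hypothetical.

```python
# Hypothetical value hypothesis agreed before the POV begins;
# metric names, targets, and observed figures are illustrative only.
value_hypothesis = {
    "weekly_active_usage_pct": 60,     # % of pilot users active each week
    "client_touchpoints_per_user": 5,  # engagements logged per user per month
    "new_opportunities": 3,            # net-new opportunities surfaced during the POV
}

observed = {
    "weekly_active_usage_pct": 72,
    "client_touchpoints_per_user": 6,
    "new_opportunities": 2,
}

# Compare each metric against its target so the go/no-go discussion is explicit
for metric, target in value_hypothesis.items():
    actual = observed[metric]
    status = "met" if actual >= target else "missed"
    print(f"{metric}: target {target}, observed {actual} -> {status}")
```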
Finally, think first about internal areas of pushback. We liken navigating internal stakeholders to managing an immune response designed to attack hasty mistakes that could pose a credible risk to the bank. Common areas of stakeholder pushback for AI implementation include cloud architecture review (given that AI workloads by and large run in the cloud), AI governance (given that these teams are tasked with oversight of a potentially risky technology), and legal/compliance (who are tasked with mitigating risk rather than driving innovation). Bringing each of these stakeholders to the table takes time, coordination, and handholding. Winning mindshare for a promising new tool requires activation energy, but it will help drive increased productivity and efficiency for all.
At ModuleQ, we roll up our sleeves to engage in POVs with banks and their innovation heads, even though we know these are time-intensive and less-than-certain endeavors. We work in concert with innovation heads and with users to solicit feedback, structure pilots, and broaden the aperture when the time is right. Ultimately, it’s about creating win/wins with new technologies across multiple levels of the organization. Experience is paramount in getting these audacious procurements over the finish line.
POV: Proof of Value
A “paid trial” for a product or service integration, focused on incremental testing. The first stage is smaller in scope, cost, and duration, allowing for a demonstration of actual value prior to broader implementation.
Leapfrog & Focus on Easy Adoption
Having established a plan of action, we suggest a few approaches to ensure things actually materialize with your bankers. First, leapfrog old rails. We recommend avoiding slowly rotting communication infrastructure such as email. Accenture’s CEO Julie Sweet recently provided a pointed anecdote on a panel at Davos: she recounted how she began her career (at a law firm) before email. Traditionally, end-of-day communications were faxed to clients, and unsurprisingly there was a long queue at the fax machine. When email was introduced, management was reluctant to allow sensitive attachments in emails, hampering its adoption and productivity boost. They insisted on continuing to build improvements on the existing rails of fax. The analogy here is not to build AI solutions on the rails of email but to leapfrog to more dynamic, forward-looking architectures such as Microsoft Teams. This is where AI integrations, communication, and workflow efficiency will be accelerated.
Second, avoid logins and passwords that bog down adoption. By this, we don’t mean throwing safety out the window! Rather, find ways to integrate the tool directly into existing, secure workflows. In the case of ModuleQ’s Unprompted AI solution, we push insights directly into Microsoft Teams. A user of our AI solution doesn’t need to navigate to a separate portal to reap value. Teams authentication sits natively in the Windows (or mobile application) architecture, reducing the friction of use. As a result, we see significant adoption, with DAU rates in excess of 70%, dwarfing just about any other technology platform or productivity solution.
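
ModuleQ’s integration runs natively inside Teams; purely as an illustration of the “no separate portal” idea, the Python sketch below pushes a notification into a Teams channel via an incoming webhook, one standard mechanism Teams offers. The webhook URL, function name, and alert text are placeholders for illustration, not a description of ModuleQ’s implementation.

```python
import json
import urllib.request

# Placeholder URL: a real incoming-webhook address is generated per channel inside Teams
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/..."

def push_alert(text: str) -> None:
    """Post a simple text notification into a Teams channel via an incoming webhook."""
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # Teams acknowledges successful delivery in the response body

# Hypothetical alert text; in practice the content would come from curated market data
push_alert("Pre-meeting brief: Acme Corp announced a strategic review ahead of your 3pm call.")
```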
Finally, drive easy wins instead of boiling the ocean. While it would be great to create a solution that could turn your investment bankers into rainmakers, the reality is that every additional degree of complexity is an impediment to short-term value creation. Slow adoption hinders both iterative feedback and the chances of a successful onboarding. Start with simple things that may seem like they only scratch the surface but drive tremendous value for the bankers.
As an example, we deliver updated alerts on deal-flow activity directly to our bankers. You may think this data is relatively well understood and available, but the lack of connectivity and specificity in surfacing it to the right banker was shocking to us. We often found that our banks ended up forwarding these notifications in lengthy email chains rather than as systematic, tailored notifications. That simple win unlocked significant value and got our early adopters using Unprompted AI.
DAU: Daily Active Users
A common metric for gauging the usage health of platforms, ranging from social media sites to enterprise infrastructure solutions.
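
For concreteness, a DAU rate like the 70%+ figure above is typically computed as the number of distinct users active on a given day divided by the eligible user base. A minimal Python sketch with hypothetical activity data:

```python
from datetime import date

# Hypothetical activity log: (user_id, date of activity)
events = [
    ("md_ortiz", date(2024, 3, 4)),
    ("vp_chen", date(2024, 3, 4)),
    ("md_ortiz", date(2024, 3, 4)),  # repeat activity counts once per day
    ("an_patel", date(2024, 3, 5)),
]
eligible_users = 4  # bankers licensed or provisioned for the tool

def dau_rate(events, day, eligible):
    """Share of the eligible user base that was active on the given day."""
    active = {user for user, d in events if d == day}
    return len(active) / eligible

print(f"DAU rate on 2024-03-04: {dau_rate(events, date(2024, 3, 4), eligible_users):.0%}")
```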
Circling back to the Foundations
If we navigate back to our foundational principles (safety, precision, and adoption), the adoption of AI will coalesce around safe solutions that drive meaningful productivity gains through simple but effective wins. We liken it to a step-function improvement on par with the introduction of modern spreadsheets and modern communication. The first electronic spreadsheets didn’t have VLOOKUPs and complex API integrations, but they laid the foundation for a generation of productivity enhancement. As with prior instances of disruptive technological innovation, taking the right approach to integration is crucial to improving the probability of success.
At ModuleQ, we focus specifically on providing enterprise AI solutions to investment banks. We can’t wait to connect with you as you begin navigating this journey.