Optimizing AI for Private Equity: A Guide to Prompt Engineering

September 27, 2024

By Lila Karl and Michael Van Demark, AI Researchers - DiligentIQ

Introduction to Prompt Engineering

In the expanding world of artificial intelligence, prompt engineering has become an important tool for obtaining high-quality output. Prompt engineering is the process of optimizing prompts to improve the quality, accuracy, and relevance of responses. Large language models (LLMs) are sensitive to minor changes in prompts, which means that careful prompt design can significantly enhance responses. Drawing on our own experience and research with prompt engineering, we aim to show how well-refined AI prompts can significantly improve data analysis. This has profound implications for the private equity landscape, where precise and insightful analysis is critical to the due diligence process.

Challenges in Traditional Private Equity Due Diligence

Private Equity (PE) due diligence describes the evaluation process that PE firms perform prior to investing in or acquiring a company. The process weighs a range of considerations aimed at gaining better insight into the company's financial health, operational efficiency, legal standing, and market position, ultimately confirming that the investment aligns with the PE firm's strategic goals and risk tolerance (Affinity). While this thorough analysis is essential, the traditional due diligence process often faces significant challenges, including data overload, inconsistent data quality, time constraints, human bias, and the need for substantial resources.

Data Overload 

Evaluating potential investments requires PE analysts to gather and analyze vast amounts of information from many different sources. While companies traditionally provide a confidential information memorandum (CIM), which gives a detailed overview of a company's financials, operations, market position, and strategy, quality due diligence requires going far beyond this document to ensure a fully comprehensive evaluation (Affinity). Juggling so many sources presents a real challenge, potentially diverting attention away from the main task at hand: conducting an accurate and thorough analysis to make informed investment decisions.

Inconsistent Data Quality

Another significant challenge is inconsistency in data quality. This challenge is particularly relevant to private equity, where the business largely revolves around analyzing unique companies to inform potential investment decisions. Because the companies under review are inherently different in nature, firms are often provided with data that vary in depth, format, and even accuracy. In addition, because many of the companies PE firms evaluate are private, acquiring all of the data needed to make informed investment decisions can pose an additional challenge (Affinity). Standardizing the relevant data is therefore a process in itself. Inconsistent data quality can significantly complicate the due diligence process, hindering the ability to value companies accurately and reliably.

Time Constraints 

PE firms are frequently under intense time pressure to carry out due diligence, as the competitive landscape they operate within requires closing deals quickly. However, obtaining a complete picture of a company's overall health requires meticulous analysis, which takes time. PE firms are therefore forced to strike a balance between performing due diligence efficiently and ensuring that the quality of the analysis is not compromised. The inability to do so can be detrimental, as failure to accurately highlight issues around asset quality was the number one reason for unsuccessful capability deals (Bain & Company, "Due Diligence").

Human Bias 

In a process that is traditionally very manual, PE due diligence is susceptible to human bias in the evaluation process. Whether they intend to or not, analysts innately bring their own preconceived notions and biases to decision making, potentially affecting investment decisions. Some analysts may also fall victim to confirmation bias, favoring information that supports their preconceived notions while ignoring evidence that contradicts them. Although PE professionals strive to mitigate these biases to the best of their ability, some degree of bias remains inevitable in the due diligence process.

Substantial Resources 

Traditional PE due diligence often requires a large, dedicated team of analysts to thoroughly and effectively evaluate a target company. In addition, due diligence often requires external consultants, underscoring the manpower needed to carry out standard business assessments. Having so many people involved requires extensive coordination across various teams. As such, the traditional due diligence process tends to be time-consuming and expensive.

Benefits of Prompt Engineering

AI has the potential to significantly improve the efficiency of the due diligence process by automating tasks and speeding up complex processes, with insights from industry analysis suggesting that companies can achieve margin improvements of 10% to 15% in the midterm by adopting AI tools (Bain & Company, "Harnessing Generative AI"). To fully capitalize on these benefits, it is essential to leverage prompt engineering to direct LLMs toward accurate and reliable data analysis. Formulating well-crafted prompts is a key differentiator in getting the most out of AI tools, and by generating optimal output, complex processes can become largely streamlined. Therefore, mastering the fundamentals of prompt engineering is crucial to unlocking the full potential of AI tools.

Prompt engineering greatly enhances decision-making and data analysis capabilities across five out of the six main types of due diligence: 

  1. Commercial - AI tools allow for a fast and comprehensive analysis of industry and market trends, the competitive landscape, the customer base, and more. Well-engineered prompts in this context help ensure a relevant and holistic assessment of information pertaining to these factors, providing better insight into the target company.
  2. Financial - AI-driven due diligence solutions are built to process large amounts of financial data, streamlining the examination of financial health, quality of earnings, evaluation of revenue streams, and much more. Prompts developed through prompt engineering enable a more efficient and tailored approach to conducting accurate financial analysis.
  3. Legal - AI tools can readily process legal documents, contracts, and compliance information, along with various other materials pertaining to legal matters. Precise prompts allow PE firms to conduct thorough legal assessments with greater efficiency.
  4. Sustainability - Evaluating sustainability practices and corporate responsibility can be greatly improved by streamlining due diligence processes. Precisely crafted prompts are essential for analyzing key metrics to confirm that a target company operates in a way that aligns with sustainability goals and requirements.
  5. Technology - Evaluating a target company's technology systems and infrastructure often involves analyzing visual data, an area in which AI is still evolving. Persisting challenges in handling such data emphasize the need for precise, targeted prompts, which ultimately produce crucial insights in the technology due diligence process.
  6. Management - AI tools do not yet effectively assess the human factors involved in evaluating company leadership and management, which traditionally relies on interviewing key personnel.

Prompt engineering has the potential to greatly enhance multiple facets of the PE due diligence process. By crafting precise prompts, firms can fully unlock AI's capabilities, driving more informed and efficient decision-making. 

Foundations of Effective Prompt Engineering

Choosing the Right Model

At DiligentIQ, researchers have found several cases where prompt engineering has led to markedly better results. Our research team re-engineered a series of prompts that pilot clients initially found to be unsuccessful, discovering that revising these prompts led to significantly improved responses. Engineering prompts does not require specialized expertise, because LLMs themselves can be used as helpful tools in the process. In this case, the team used the GPT4-Omni and Claude V3.5 Sonnet models to improve the ineffective prompts. The researchers analyzed the outputs from both models, finding that GPT4-Omni generally provided more structured and refined answers, while Claude V3.5 Sonnet was more detailed and comprehensive. Claude V3.5 Sonnet was therefore chosen to help re-engineer all of the client's prompts, due to its consistency and the prompt structure preferred for this project. It is important to keep in mind, however, that GPT4-Omni would have been better suited for prompts requiring more structure or summarization. Recognizing a model's strengths and weaknesses is therefore crucial in prompt engineering, as it significantly improves the overall efficiency of the process.
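As an illustration of this comparison step, the sketch below sends the same draft prompt to both models side by side. It is a minimal sketch only: the SDK clients, model identifiers, and sample prompt are assumptions for illustration, not the exact setup used in our research.

```python
# Minimal sketch: compare two models' responses to the same draft prompt.
# Assumes the openai and anthropic Python SDKs are installed and API keys
# are set in the environment; model names are illustrative.
from openai import OpenAI
from anthropic import Anthropic

draft_prompt = "Summarize the key revenue risks in the attached CIM excerpt."

openai_reply = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": draft_prompt}],
).choices[0].message.content

anthropic_reply = Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{"role": "user", "content": draft_prompt}],
).content[0].text

print("GPT-4o:\n", openai_reply)
print("Claude 3.5 Sonnet:\n", anthropic_reply)
```

Reviewing the two outputs side by side is what makes the differences in structure, detail, and consistency visible before committing to one model for a project.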

Prompt Structure: Bullet Points vs. Paragraphs 

AI researcher Emily Van Demark conducted an additional analysis of the significance of prompt formatting. In this study, researchers compared bullet point and paragraph prompts when asking AI to perform multiple tasks. In general, prompts formatted as paragraphs led to more helpful and accurate responses than bullet point prompts. The researchers also found that the format of the input tends to carry over into the format of the output. As a user, it is therefore important to keep the desired output format in mind while crafting a prompt.

Large language models (LLMs), like GPT-4 or Claude, are designed to process text in a linear fashion, meaning that they understand information based on the sequence in which it is presented. When information is presented in paragraph form, it provides a continuous flow of context. The model can better understand relationships between ideas because the context is naturally connected. For example, instructions, data, and desired output are perceived as part of a single task, which helps the model provide a cohesive response. Bullet points tend to break information into isolated chunks. While this might seem clear to humans, LLMs may treat each bullet as a separate task, leading to responses that might feel disconnected or incomplete. Bullet points might miss the nuance of how different tasks are related or interdependent, making it harder for the model to integrate the information fully.
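To make the contrast concrete, here is a hypothetical example of the same multi-part request phrased both ways; the task and wording are invented for illustration and are not drawn from the client prompts discussed above.

```python
# Illustrative only: the same multi-part request phrased two ways.
bullet_prompt = """Analyze the attached financials:
- Summarize revenue trends over the last three years
- Flag any margin risks
- Output the findings as a table"""

paragraph_prompt = (
    "Using the attached financials, summarize the revenue trends over the "
    "last three years, flag any margin risks you observe, and present the "
    "combined findings as a single table."
)
```

The paragraph version ties the three sub-tasks together explicitly, providing the kind of connected context that tends to produce a single cohesive response rather than three disconnected ones.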

Case Studies and Implementation

Refining Prompts for Clarity and Structure

After identifying the appropriate model and prompt structure for the task, the next step is to refine the prompts themselves to further enhance clarity, specificity, and structure. In the same project focused on re-engineering prompts for pilot clients, the initial prompts were entered into Claude V3.5 Sonnet and modifications were made to improve the quality of responses. The updated prompts were created using contextual prompting, which strengthened the instructions by incorporating additional background information and relevant details. The team also used instruction-based prompting, specifying the desired output structure, such as bullet point or paragraph format, to promote accuracy and clarity in the analysis. After this iterative prompting further refined the prompts, they were tested in the DiligentIQ environment and the responses were compared to the initial ones. The new responses were found to be much more thorough and accurate. This project highlights the importance of prompt engineering when using AI to analyze and draw conclusions regarding business decisions.
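The hypothetical before-and-after pair below illustrates the kind of revision involved: the refined version adds background context (contextual prompting) and an explicit output structure (instruction-based prompting). Both prompts are invented for illustration and are not the pilot clients' actual prompts.

```python
# Hypothetical example of contextual + instruction-based refinement.
initial_prompt = "Summarize the company's customer concentration risk."

refined_prompt = (
    # Added context: who is asking, what documents are in scope.
    "You are reviewing a data room for a potential acquisition of a B2B "
    "software company. Using the uploaded customer revenue schedule, "
    "summarize the company's customer concentration risk over the last "
    "three fiscal years. "
    # Added output instructions: the structure the response should follow.
    "Present the answer as a short paragraph followed by a bullet list of "
    "the top five customers with their share of total revenue."
)
```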

Prompt Revisions for Complex Problem Solving

Recently, our research team was tasked with designing prompts for fund-of-funds calculations, such as average annual return, annualized returns, drawdown, Sharpe Ratio, and Sortino Ratio. We started with simple prompts for two calculations, average annual return and annualized returns, both of which worked successfully. The LLM proved capable of simple calculations, although additional prompt engineering was required for more complex ones. In particular, prompt engineering was found to be very helpful when asking LLMs complex problem-solving questions. For example, the initial prompt given to calculate the Sharpe Ratio was “Calculate the Sharpe Ratio using the dataset provided”. The model returned an incorrect ratio, highlighting the need for prompt engineering to yield the correct result. The prompt was entered into GPT4-Omni and Claude V3.5 Sonnet for revision. The first round of revisions still produced an incorrect result; a second iteration of the prompt led to the correct ratio. The final prompt is shown below.

 “Using the dataset of monthly returns provided, calculate the Sharpe Ratio by first computing the average annual return based on the monthly data. Next, calculate the compound annualized standard deviation of the returns. Assume a fixed risk-free rate of 2% per year. Apply the Sharpe Ratio formula by subtracting the risk-free rate from the average return, and then dividing the result by the standard deviation of returns. Ensure that the calculation handles negative returns and high volatility accurately, and present the final Sharpe Ratio as a decimal. Provide a detailed explanation of your methodology and indicate all assumptions made during calculation.” 

As seen above, in order to correctly calculate the Sharpe Ratio, the LLMs needed very specific context and directions in the prompt. The researchers used a form of Chain-of-Thought prompting, in which the user gives step-by-step directions so the model can reason through complex problems. The quality of the output increases drastically when the prompt is clear and comprehensive. Another key takeaway from this project is that multiple iterations are essential for achieving consistency. While the prompt did not produce the desired result after the first round of revision, by the second the output was accurate. In other cases, it can take several more attempts to reach the correct output.
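For reference, a minimal Python sketch of the calculation the prompt describes is shown below. It assumes one common convention: compounding the monthly returns into an annual figure and scaling the monthly standard deviation by the square root of 12. The prompt leaves the exact annualization convention to the model, so results can differ slightly, and the sample returns are placeholders.

```python
import numpy as np

def sharpe_ratio(monthly_returns, risk_free_rate=0.02):
    """Annualized Sharpe Ratio from a series of monthly returns.

    Assumes geometric (compound) annualization of the mean return and
    sqrt(12) scaling of the monthly standard deviation, with a fixed
    2% annual risk-free rate as in the prompt above.
    """
    r = np.asarray(monthly_returns, dtype=float)
    # Compound the monthly returns into an average annual return.
    annual_return = (1 + r).prod() ** (12 / len(r)) - 1
    # Annualize the monthly standard deviation of returns.
    annual_std = r.std(ddof=1) * np.sqrt(12)
    # Sharpe Ratio: excess return over the risk-free rate, per unit of risk.
    return (annual_return - risk_free_rate) / annual_std

# Example with two years of illustrative monthly returns.
returns = [0.021, -0.013, 0.034, 0.008, -0.027, 0.015,
           0.011, 0.004, -0.009, 0.022, 0.017, -0.005,
           0.013, 0.009, -0.018, 0.026, 0.007, 0.012,
           -0.011, 0.019, 0.003, 0.015, -0.006, 0.010]
print(f"Sharpe Ratio: {sharpe_ratio(returns):.2f}")
```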

Adjusting Prompts for Visual Output Accuracy 

During our most recent chart test, we attempted to solve some of the failed chart creation issues with prompt engineering. Iterative prompting was used to make the initial failed prompts more specific and clear, which resulted in better-looking and more accurate charts for the Bubble Chart and Radar Chart. For example, an initial prompt given to the LLM was “Create a Bubble Chart based on the data I uploaded. Include top 5 articles, total sales, and quantity”, which created a chart that had X’s as data points rather than bubbles, as seen in the image below. 

Bubble Chart Before Adjustments (May) 

Adding a specification to the prompt indicating that the data points should be circles instead of X's led to the desired chart, as seen in the image below. The revised prompt was “Create a Bubble Chart based on the data I uploaded with circles instead of x's. Include top 5 articles, total sales, and quantity.” This is a form of Instruction-Based Prompting, where the user gives the model explicit directions, including the desired output format and content to include, leading to a more accurate response.

Bubble Chart After Adjustments (May) 

As this case shows, one minor change to a prompt can result in large changes in the output. In some cases, users must go through several iterations of a prompt to obtain the desired output. This process can be tedious, but it becomes easier with a better understanding of prompt engineering and what makes a successful prompt.
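For context, the sketch below reproduces the kind of chart the revised prompt describes, using matplotlib with made-up data; the article names, sales figures, quantities, and bubble-sizing factor are all placeholders, not the uploaded dataset from the test.

```python
import matplotlib.pyplot as plt

# Hypothetical top-5 articles with their total sales and quantities sold.
articles = ["A-101", "A-204", "A-317", "A-452", "A-589"]
total_sales = [12500, 9800, 8700, 7600, 6900]
quantity = [340, 290, 410, 220, 180]

fig, ax = plt.subplots()
# Circle markers sized by quantity -- the "bubbles" the revised prompt
# asks for in place of X-shaped data points.
ax.scatter(range(len(articles)), total_sales,
           s=[q * 3 for q in quantity], alpha=0.5)
ax.set_xticks(range(len(articles)))
ax.set_xticklabels(articles)
ax.set_xlabel("Article")
ax.set_ylabel("Total sales")
ax.set_title("Top 5 articles: total sales (bubble size = quantity)")
plt.show()
```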

Conclusion

As AI is increasingly used in the business world, it is important to utilize prompt engineering to optimize outputs. Whether interacting directly with LLMs such as ChatGPT and Claude, or with RAG-based systems like DiligentIQ, prompt engineering remains essential. It helps ensure that models trained by OpenAI and Anthropic, as well as systems that leverage company documents stored in vector databases, produce accurate and relevant analysis, which enhances the user experience.

By optimizing outputs, prompt engineering also plays a crucial role in addressing specific challenges faced during the due diligence process. For instance, it helps overcome data overload by guiding AI models to sift through documents and relevant data efficiently. It also reduces human bias by keeping the LLM focused on objective data, limiting the influence of preconceived notions in the due diligence process. Optimizing AI performance saves time and resources for businesses, reducing costs and greatly improving efficiency. AI is incredibly valuable in private equity, and adopting prompt engineering as a tool can meaningfully improve the due diligence process.
