
Blog
Artificial Intelligence in the Investment Management Space
PINE Advisor Solutions | 19 May 2025
What is Artificial Intelligence (“AI”)? What are AI’s capabilities across different platforms? How can AI assist with efficiency? What are the compliance risks associated with using AI?
Investment advisers are increasingly grappling with the many questions raised by this emerging technology's growing presence in the financial space. While AI has the potential to enhance productivity across different facets of an investment firm's business, its capabilities also present compliance risks that the Securities and Exchange Commission ("SEC") is working to address. On March 27, 2025, the SEC hosted a roundtable discussion to highlight the benefits, costs, risks, and other considerations of AI's rise in the financial services industry. The SEC also noted in its 2025 Examination Priorities that emerging technologies such as AI will be a focus of investment advisers' compliance examinations going forward.
So, what does this mean? Below, we break down the relevant regulatory guidance and offer some insight into compliance frameworks for addressing this emerging technology.
What is AI?
AI is the simulation of human intelligence by machines, enabling a machine to learn, reason, and respond to inputs in its system. AI uses algorithms and large datasets to recognize patterns and solve problems, and it is designed to improve over time without explicit human instructions.
AI can be used in a wide variety of ways; one common type of AI technology is the Large Language Model ("LLM"), such as OpenAI's ChatGPT or Microsoft's Copilot. LLMs are AI models that use vast amounts of data to understand and generate human-like text. They are trained on broad datasets drawn from a variety of sources to perform tasks and complex data analysis.
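As a rough illustration of how a firm's staff might interact with an LLM programmatically, the short sketch below sends a prompt to a model through OpenAI's Python SDK and prints the reply. The model name, prompt, and use case are illustrative placeholders only; any firm adopting such a tool should first confirm the vendor's current interface and data-handling terms.

```python
# Minimal sketch: sending a prompt to an LLM via OpenAI's Python SDK.
# The model name and prompts are illustrative placeholders; confirm the
# vendor's current API and data-retention terms before firm-wide use.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a compliance research assistant."},
        {"role": "user", "content": "Summarize our trade-error policy in plain English."},
    ],
)

print(response.choices[0].message.content)
```

The same pattern applies to most hosted LLMs: the firm supplies a prompt, the vendor's model returns generated text, and a human remains responsible for reviewing the output before it is relied upon.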
AI can also support efficiency gains, such as organizing complex data into a more understandable form, completing routine administrative tasks, assisting with drafting emails and policy updates, taking notes, and serving as a research tool.
Compliance Risks of Using AI
While the opportunities for AI may often seem boundless, the technology also poses a wide array of risks in the investment management space. Several risks are especially important to note:
- Generating false or misleading information: AI is not perfect and can draw on sources and information that are factually false or misleading. Because AI only ingests the information it receives and cannot reliably distinguish accurate information from inaccurate information, its output cannot be assumed to be correct. AI can also misunderstand or misinterpret complex queries, leading to responses that may not be relevant or accurate.
- Introducing bias: Depending on the inputs and questions used to prompt AI tools, their outputs can introduce bias unknowingly. AI may also amplify biases present in the training data a model uses to continually learn and improve.
- Improperly disclosing AI usage to investors: For investment advisory firms, misrepresenting the firm's AI usage to investors can constitute misleading disclosure about its advisory services and invite SEC inquiries. Investment advisers are required to ensure full and fair disclosure of their advisory activities to their investors, and improperly disclosing the use of AI in the investment decision-making process, or misrepresenting AI usage at their firm, presents significant compliance risks. In its October 10, 2024 charges against an investment adviser, the SEC alleged that the firm made several misrepresentations to its investors regarding its purported use of AI to perform automated trading for client accounts. The SEC noted in the accompanying press release that it will "continue to be vigilant and pursue those who lie about their firms' technological capabilities and engage in AI Washing."
- Violating privacy and information security standards: Some AI platforms may store user inputs and reuse them to generate responses for other users in the future. Input data, such as personally identifiable information, could later be extracted from an AI platform given the right prompts, potentially compromising client data. Additionally, some AI platforms offer the ability to record or transcribe meetings, which may require two-party consent depending on applicable state laws. Investment management firms are also tasked with safeguarding confidential and personal information within their possession, and inputting confidential data into an AI model could compromise that data and lead to an information breach. One common control is to screen or redact sensitive data before it ever reaches an external AI tool (a minimal, illustrative redaction sketch follows this list).
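As an illustration of the kind of safeguard described above, the sketch below scrubs a few obvious categories of personally identifiable information from text before it is sent to an external AI service. The regular-expression patterns are illustrative assumptions, not a complete safeguard; a production control would rely on a vetted redaction tool and human review.

```python
# Minimal sketch: redacting obvious PII before text leaves the firm.
# The patterns below are illustrative only and will not catch every case.
import re

REDACTIONS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[REDACTED SSN]",           # U.S. SSN format
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[REDACTED EMAIL]",   # email addresses
    r"\b(?:\d[ -]?){13,16}\b": "[REDACTED ACCOUNT]",      # long digit runs
}

def scrub(text: str) -> str:
    """Replace common PII patterns with placeholders."""
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text

if __name__ == "__main__":
    note = "Client Jane Doe (SSN 123-45-6789, jane.doe@example.com) asked about fees."
    print(scrub(note))
    # Client Jane Doe (SSN [REDACTED SSN], [REDACTED EMAIL]) asked about fees.
```

A pre-submission filter like this reduces, but does not eliminate, the risk of confidential data reaching a third-party model, which is why the disclosure and policy steps discussed below still matter.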
Mitigating Compliance Risks with AI
With the SEC's focus on AI in the investment management space, several key compliance protocols can be implemented to ensure your firm is taking the best steps forward with this emerging technology:
- First, it’s important to understand how your firm currently uses AI so you can create a policy that supports responsible use and manages compliance risks. Start by sending out surveys about AI usage, talking with employees about how they use AI in their daily work, and forming a group to help define how AI should be used across the firm.
- Second, draft and adopt an AI Acceptable Use Policy that aligns with current regulatory expectations and can be updated as new rules or guidance come out. Provide regular training to staff on AI risks and the rules your firm has set, so everyone understands how to use AI properly.
- Third, be open about, and disclose where appropriate, your firm’s AI usage. If you use AI to create marketing materials, make investment decisions, manage models, or produce other outputs, it is important to clearly explain this so investors know how the technology is being used.
- Lastly, keep reviewing and improving your firm's approach to AI as you adopt new or more advanced tools or apply them to more complex business tasks. Teach employees to use AI with clear goals in order to avoid false or misleading information; have humans review AI-generated content before using it; be transparent about how AI is used; and protect any private or sensitive information. These steps will help ensure AI is used responsibly as the technology continues to grow (a simple review-log sketch follows this list).
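One lightweight way to operationalize the human-review step above is to keep a running log of AI uses and who approved them. The sketch below is a hypothetical illustration; the field names and CSV format are assumptions, not a prescribed recordkeeping standard.

```python
# Minimal sketch: a running log of reviewed AI uses for compliance oversight.
# Field names and the CSV format are illustrative assumptions.
import csv
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIUseRecord:
    use_date: str            # when the AI output was produced
    tool: str                # which approved AI tool was used
    purpose: str             # what the output was used for
    contains_client_data: bool
    human_reviewer: str      # who reviewed the output before use
    approved: bool

def append_record(path: str, record: AIUseRecord) -> None:
    """Append one reviewed AI use to a running CSV log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(record).keys()))
        if f.tell() == 0:    # new file: write the header row first
            writer.writeheader()
        writer.writerow(asdict(record))

append_record("ai_use_log.csv", AIUseRecord(
    use_date=str(date.today()),
    tool="ChatGPT",
    purpose="First draft of a client newsletter",
    contains_client_data=False,
    human_reviewer="J. Smith, CCO",
    approved=True,
))
```

A record like this gives the firm's compliance team, and ultimately examiners, a simple audit trail showing that AI-generated content was reviewed before it was relied upon.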
Conclusion
AI's prominence in the investment management space will continue to evolve and change the way firms operate. AI is a novel technology, and its uses can bring efficiency and optimization to many components of an investment firm's business. As the SEC expands its examinations and enforcement of AI uses in the investment space, ensuring your firm maximizes AI's value while remaining transparent with investors about its uses and maintaining firmwide compliance controls on AI usage will only grow in importance.