Scaling AI for Financial Services with Full-Stack Solutions to Implementation Challenges

To scale AI for financial services, companies must overcome several resource hurdles. The full-stack solution from NVIDIA and VMware helps you leverage the competitive advantages of AI.

AI adoption has grown rapidly over the past few years due to its ability to automate repetitive tasks and increase revenue opportunities. Yet many companies still struggle with how to meaningfully scale AI in financial services. Growing data requirements and a shortage of internal talent are two of the most common obstacles.

In this post, we provide a landscape overview of AI use cases and highlight some key scaling challenges, along with how leading banks and financial institutions are using end-to-end solutions to mobilize AI adoption.

What drives the use of AI in financial services?

The power of AI to solve complex business problems is widely recognized in the financial services industry, an early adopter of data science. Banking is among the top three industries in annual investment in big data and analytics. An NVIDIA survey found that 83% of C-suite leaders, IT architects, and developers in financial institutions believe that AI is vital to their future success.

It is easy to understand why: Business Insider estimates that the potential for AI-driven cost savings for banks will reach $447 billion by 2023. Across consumer finance and capital market sectors, firms are looking for ways to mine that potential.

Improved fraud detection and prevention rank highest among proven AI use cases, especially with online-payment fraud losses expected to hit $48 billion annually by 2023.

Other business drivers include maximizing client returns through portfolio optimization or creating personalized customer service through call center transcription. The list continues to grow.

A recent paper by McKinsey & Company underscores the coming sea change: to thrive, banks need to operate “AI first.” Realizing AI’s value therefore does not come from single projects or teams but instead requires collaboration among data engineers, data scientists, and app developers across the financial institution, all supported by an AI-enabled IT infrastructure.

Finance industry use cases

The quickest returns often stem from improved fraud detection and prevention, the most cited use of AI in financial services. AI is also beneficial for creating efficiencies and reducing compliance costs related to anti-money laundering efforts and know-your-customer regulations.
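
To make the fraud use case concrete, here is a minimal sketch of training a classifier on synthetic transaction features with scikit-learn. The feature names, data, and labeling rule are illustrative assumptions, not a production fraud pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic columns: amount, hour_of_day, merchant_risk_score (all scaled to [0, 1]).
X = rng.random((5000, 3))
# Toy labeling rule standing in for real fraud labels.
y = (X[:, 0] * X[:, 2] > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"Holdout accuracy: {clf.score(X_test, y_test):.2f}")
```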

When combining risk management with profit maximization, investment firms increasingly look to AI for building algorithmic trading systems, which are powered by advances in GPUs and cloud computing. Analysts involved in asset management and portfolio optimization can use natural language processing (NLP) to extract more relevant information from unstructured data faster than ever before, reducing the need for labor-intensive research.
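
As an illustration of NLP applied to unstructured text, the short sketch below scores the sentiment of two made-up headlines with the Hugging Face transformers pipeline. The default sentiment model is an assumption made for brevity; in practice an analyst team would likely substitute a finance-tuned model.

```python
from transformers import pipeline

# Default general-purpose sentiment model, used here only for illustration.
classifier = pipeline("sentiment-analysis")

headlines = [
    "Regulator approves merger of the two regional lenders",
    "Quarterly earnings miss estimates amid rising credit losses",
]
for headline, result in zip(headlines, classifier(headlines)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {headline}")
```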

Within consumer finance, NLP helps retail banks gain a better understanding of customer needs and identify new marketing opportunities through call center transcription. Along with chatbots and virtual assistants, transcription falls within the larger category of conversational AI, which helps personalize service by creating 360° views of customers.
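
A call center transcription step might look like the hedged sketch below, which assumes the open-source openai-whisper package and a hypothetical recording, call_0423.wav. It is one of many possible speech-to-text approaches, not a prescribed one.

```python
import whisper

model = whisper.load_model("base")          # small general-purpose model
result = model.transcribe("call_0423.wav")  # hypothetical recorded call
print(result["text"])
```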

Another use of AI within customer engagement is recommendation engines. This technology improves cross-selling by leveraging credit usage, scoring, and balances to suggest fitting financial products.
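
As a toy example of such an engine, the sketch below uses scikit-learn's NearestNeighbors over hypothetical customer features (credit utilization, credit score, and average balance) to suggest products held by similar customers. A production recommender would be far richer; this only illustrates the basic idea.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Rows: [credit_utilization, credit_score, avg_balance], scaled to [0, 1].
customers = np.array([
    [0.20, 0.85, 0.40],
    [0.75, 0.55, 0.10],
    [0.30, 0.90, 0.70],
    [0.80, 0.50, 0.05],
])
products_held = ["rewards card", "secured card", "brokerage account", "secured card"]

model = NearestNeighbors(n_neighbors=2).fit(customers)
new_customer = np.array([[0.25, 0.88, 0.55]])
_, neighbor_idx = model.kneighbors(new_customer)

# Suggest the products held by the most similar existing customers.
print([products_held[i] for i in neighbor_idx[0]])
```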

Key challenges of scaling AI

Banking, trading, and online-payment firms that do not adopt AI and machine learning (ML) risk being left behind, but even companies that embrace the technology face hurdles. In fact, almost half of AI projects never make it to production.

Elevated costs

One fundamental concern is that AI workloads exceed the capacity of most companies' existing IT infrastructure. The tooling that data scientists require for AI workloads is often difficult to deploy onto legacy systems. Among IT professionals, 28% rate the lack of infrastructure as the primary obstacle to adoption. The proliferation of AI apps is also making IT management increasingly difficult.

Shadow IT

Financial institutions that invest in AI only within a research lab, innovation team, or line of business frequently have data scientists operating on customized bare-metal infrastructure. This setup produces silos in data centers and leads to shadow IT existing outside departmental IT control.

Given that structure, it is easy to see why 71% of banking executives struggle with how to scale AI, and 32% of IT pros rank data silos and data complexity as the top barrier to adoption.

These silos further hinder productivity. Rather than focusing on training ML models, data scientists spend more time as de facto IT managers for their AI fiefdoms.

Furthermore, if that infrastructure does not include accelerated computing and the parallel processing power of GPUs, deep learning models can take days or weeks to train, thereby wasting precious time for data scientists. It is like hiring world-class race car drivers and equipping them with Yugos.
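
The speed-up comes from running the same training code on a GPU. The minimal PyTorch sketch below simply selects a CUDA device when one is available and runs one training step on a synthetic batch; the layer sizes and data are placeholders, not a real model.

```python
import torch
import torch.nn as nn

# Use a GPU when one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),
).to(device)

batch = torch.randn(1024, 128, device=device)                    # synthetic features
labels = torch.randint(0, 2, (1024, 1), device=device).float()   # synthetic labels

loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step; on a GPU the underlying matrix math runs in parallel.
optimizer.zero_grad()
loss = loss_fn(model(batch), labels)
loss.backward()
optimizer.step()
print(f"device={device}, loss={loss.item():.4f}")
```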

Underutilized infrastructure

As expected, costs begin to spiral. Disparate IT increases operational overhead. Inefficient utilization of infrastructure from silos results in higher costs per workload when compared with the cloudlike efficiency achieved by allocating resources on demand through virtualization. Expensive data scientists are forced to manage IT, and ROI slows. AI projects languish.

AI-driven platforms offered by NVIDIA and VMware

So how can financial institutions fully adopt AI, scale what they have implemented, and maximize its value?

Built on NVIDIA GPUs and VMware’s vSphere with Tanzu, NVIDIA AI Enterprise offers an end-to-end platform with a suite of data science tools and frameworks to support diverse AI applications. Crucially, it also helps reduce time to ROI while overcoming problems posed by ad hoc implementations.

NVIDIA and VMware’s full-stack solution is interoperable from the hardware to the application layer, providing a single platform for both financial services apps and AI workloads. It eliminates separate AI infrastructure, as it is developed on common NVIDIA-Certified Systems with NVIDIA GPUs and VMware vSphere with Tanzu, a cloud computing platform that already exists in most data centers.

This enterprise suite offers the AI tooling of TensorFlow, PyTorch, and RAPIDS, while vSphere’s Tanzu Kubernetes Grid can run containerized apps orchestrated with Kubernetes alongside virtualized apps, creating consistent infrastructure across all apps and AI-enabled workloads.
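
For example, RAPIDS lets data scientists keep familiar pandas-style code while the computation runs on the GPU. The sketch below assumes a GPU-enabled host with cuDF installed; the file name and column names are hypothetical.

```python
import cudf  # requires a RAPIDS install on a GPU-enabled host

transactions = cudf.read_csv("card_transactions.csv")  # hypothetical file

# Pandas-style aggregation executed on the GPU.
per_customer = transactions.groupby("customer_id").agg(
    {"amount": ["mean", "max", "count"]}
)
print(per_customer.head())
```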

That consistency pulls data science out of silos and brings deployment into the mainstream data center, joining the IT team with data scientists and developers. Expensive devices like GPUs and accelerators can be shared using NVIDIA virtual GPUs, driving down the total cost of ownership while providing the speed-up needed to train ML models quickly. Meanwhile, data scientists are freed from IT management to focus on their specialized work.

In other words, IT leads get simplicity, manageability, easy scaling, and deployment; data scientists are supported with infrastructure and tooling that facilitates their core work; and business executives see quicker and greater ROI.

Summary

These applications merely scratch the surface of what is possible with AI in financial services, especially as the technology matures. To scale AI, companies must expand compute capacity, build domain-specific expertise, and govern their data effectively.

The full-stack solution from NVIDIA and VMware provides a platform to leverage the competitive advantages of AI across entire firms while overcoming common implementation challenges.

For more information about how NVIDIA AI Enterprise can activate those uses, see the NVIDIA AI-Ready Enterprise Platform for Financial Services solution brief and the AI and HPC solutions for financial services page.

See what the platform can do for you with immediate trial access to the NVIDIA AI Enterprise software suite LaunchPad, which runs on private accelerated computing infrastructure in VMware vSphere environments and includes a curated set of hands-on labs for AI practitioners and IT staff.
