DMR News


Grath Launches Topa, a Managed Inference Platform for Financial Reconciliation

By Ethan Lin

Apr 27, 2026

Grath, the financial services reconciliation and GRC platform, today announced the launch of Topa™, a managed inference platform purpose-built for financial reconciliation. Topa gives banks, payment services providers, brokers and fintechs programmatic access to a domain-trained reconciliation engine, machine learning models, and an AI agent through a single API.

Topa represents a deliberate push towards an inference-infrastructure-as-a-service model. The platform exposes three service layers: Engine for deterministic rule-based matching, Inference for probabilistic matching via models trained exclusively on financial services reconciliation data, and Agent for conversational reconciliation intelligence. Each layer is accessible independently or in combination.

“Financial services teams increasingly want to build their own reconciliation workflows rather than adopt a vendor’s packaged application,” said Matt Povey, Chief Executive Officer of Grath. “We decided to take the matching engine and the machine learning capability we have built over several years and make them available as infrastructure. Topa is the platform we wish we’d had when we were building reconciliation systems inside banks.”

Reconciliation, the process of matching financial records across internal ledgers and external counterparties, remains one of the most operationally intensive workflows in financial services. Existing solutions fall into three categories, each with significant limitations: legacy software products that impose rigid workflows and pricing models; general-purpose large language models that lack domain training and return unstructured output; and internal builds that require engineering teams to solve problems already solved elsewhere.

Topa is designed to occupy a fourth position: domain-specific inference infrastructure that financial services engineering teams can integrate directly into their own systems.

The platform is delivered through a single REST API via the Topa developer console. The three service layers operate at different price points reflecting their compute profiles:

Engine handles deterministic matching using configurable rules and tolerance bands across standard financial data formats. Most production workloads route the majority of their volume through Engine at millisecond latency.

Inference provides managed access to the Topa model family, purpose-built and trained exclusively on financial services reconciliation data. The models handle fuzzy matching, exception classification, counterparty disambiguation, and settlement-cycle reasoning: tasks where general-purpose models consistently underperform.

Agent is a reasoning layer that orchestrates Engine and Inference, providing match rule tuning, configuration recommendations, root-cause analysis and exception narratives.
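To illustrate how a single API fronting three layers might be consumed, the sketch below builds request payloads for a hypothetical `/v1/{layer}/match` endpoint. Topa's actual API is not documented in this announcement, so every endpoint path, field name, and payload shape here is an assumption for illustration only.

```python
# Hypothetical sketch: Topa's public API is not described in detail in the
# announcement, so the endpoint paths, field names, and payload shapes
# below are illustrative assumptions, not the real interface.

def build_match_request(layer, source_records, target_records, tolerance=0.0):
    """Build a request body for a hypothetical /v1/{layer}/match endpoint.

    layer: 'engine' (deterministic rule-based matching),
           'inference' (model-based fuzzy matching), or
           'agent' (conversational orchestration of the other two).
    """
    if layer not in {"engine", "inference", "agent"}:
        raise ValueError(f"unknown service layer: {layer}")
    body = {
        "source_records": source_records,
        "target_records": target_records,
    }
    if layer == "engine":
        # Deterministic matching is described as using configurable rules
        # and tolerance bands; a per-request tolerance is one plausible knob.
        body["tolerance"] = tolerance
    return {"endpoint": f"/v1/{layer}/match", "body": body}


# Example: route a ledger-vs-statement comparison through the Engine layer.
ledger = [{"id": "L-1", "amount": 100.00}]
statement = [{"id": "S-1", "amount": 100.01}]
request = build_match_request("engine", ledger, statement, tolerance=0.01)
```

In this sketch, higher-cost layers (Inference, Agent) would be selected the same way, which matches the article's point that most volume stays on the cheap deterministic path while only exceptions escalate.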

Ethan Lin

One of the founding members of DMR, Ethan expertly juggles his dual roles as chief editor and tech guru. Since the inception of the site, he has been the driving force behind its technological advancement while ensuring editorial excellence. When he finally steps away from his trusty laptop, he spends his time on the badminton court polishing his not-so-impressive shuttlecock game.
