The investing world has a significant problem when it comes to data about small and medium-sized enterprises (SMEs). This has nothing to do with data quality or accuracy — it’s the lack of any data at all.
Assessing SME creditworthiness has been notoriously challenging because small enterprise financial data is not public, and therefore very difficult to access.
S&P Global Market Intelligence, a division of S&P Global and a leading provider of credit ratings and benchmarks, claims to have solved this longstanding problem. The company’s technical team built RiskGauge, an AI-powered platform that crawls otherwise elusive data from over 200 million websites, processes it through numerous algorithms and generates risk scores.
Built on Snowflake architecture, the platform has increased S&P’s coverage of SMEs by 5X.
“Our objective was expansion and efficiency,” explained Moody Hadi, S&P Global’s head of new product development for risk solutions. “The project has improved the accuracy and coverage of the data, benefiting clients.”
RiskGauge’s underlying architecture
Counterparty credit management essentially assesses a company’s creditworthiness and risk based on several factors, including financials, probability of default and risk appetite. S&P Global Market Intelligence provides these insights to institutional investors, banks, insurance companies, wealth managers and others.
“Large and financial corporate entities lend to suppliers, but they need to know how much to lend, how frequently to monitor them, what the duration of the loan would be,” Hadi explained. “They rely on third parties to come up with a trustworthy credit score.”
But there has long been a gap in SME coverage. Hadi pointed out that, while large public companies like IBM, Microsoft, Amazon, Google and the rest are required to disclose their quarterly financials, SMEs don’t have that obligation, thus limiting financial transparency. From an investor perspective, consider that there are about 10 million SMEs in the U.S., compared to roughly 60,000 public companies.
S&P Global Market Intelligence claims it now has all of those covered: Previously, the firm only had data on about 2 million, but RiskGauge expanded that to 10 million.
The platform, which went into production in January, is based on a system built by Hadi’s team that pulls firmographic data from unstructured web content, combines it with anonymized third-party datasets, and applies machine learning (ML) and advanced algorithms to generate credit scores.
The company uses Snowflake to mine company pages and process them into firmographic drivers (market segmenters) that are then fed into RiskGauge.
The platform’s data pipeline consists of:
Crawlers/web scrapers
A pre-processing layer
Miners
Curators
RiskGauge scoring
Specifically, Hadi’s team uses Snowflake’s data warehouse and Snowpark Container Services for the pre-processing, mining and curation steps in the middle of the pipeline.
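The internals of that Snowflake stage aren’t public, but a minimal Snowpark sketch of landing cleaned page text in the warehouse might look like the following; the connection parameters, table name and schema here are illustrative assumptions, not RiskGauge’s actual setup.

```python
from snowflake.snowpark import Session

# Hypothetical connection details -- the real warehouse, database and
# table names used by RiskGauge are not public.
connection_parameters = {
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "SCRAPE_WH",
    "database": "RISKGAUGE",
    "schema": "STAGING",
}

session = Session.builder.configs(connection_parameters).create()

# Cleaned page text produced by the pre-processing layer (illustrative row).
rows = [("https://example-sme.com", "Acme Tooling makes precision machined parts ...")]

df = session.create_dataframe(rows, schema=["URL", "PAGE_TEXT"])

# Land the cleaned text in Snowflake so the miners and curators can run
# against it inside Snowpark Container Services.
df.write.mode("append").save_as_table("RAW_PAGES")
```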
At the end of this process, SMEs are scored based on a combination of financial, business and market risk, with 1 representing the highest risk and 100 the lowest. Investors also receive RiskGauge reports detailing financials, firmographics, business credit reports, historical performance and key developments. They can also compare companies to their peers.
How S&P is collecting valuable company data
Hadi explained that RiskGauge employs a multi-layer scraping process that pulls various details from a company’s web domain, such as basic ‘contact us’ and landing pages and news-related information. The miners go down several URL layers to scrape relevant data.
“As you can imagine, a person can’t do this,” said Hadi. “It is going to be very time-consuming for a human, especially when you’re dealing with 200 million web pages.” That crawl, he noted, yields several terabytes of website information.
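The article doesn’t detail the crawler itself, but “going down several URL layers” can be sketched as a depth-limited, breadth-first crawl of a single company domain. The library choices (requests, BeautifulSoup), depth limit and timeout below are assumptions for illustration.

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl_domain(start_url: str, max_depth: int = 2) -> dict[str, str]:
    """Breadth-first crawl of one company domain, a few URL layers deep."""
    domain = urlparse(start_url).netloc
    seen: set[str] = set()
    pages: dict[str, str] = {}
    queue = deque([(start_url, 0)])

    while queue:
        url, depth = queue.popleft()
        if url in seen or depth > max_depth:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        pages[url] = resp.text
        # Follow links one layer deeper, staying on the same domain.
        if depth < max_depth:
            soup = BeautifulSoup(resp.text, "html.parser")
            for link in soup.find_all("a", href=True):
                next_url = urljoin(url, link["href"])
                if urlparse(next_url).netloc == domain:
                    queue.append((next_url, depth + 1))
    return pages
```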
After data is collected, the next step is to run algorithms that remove anything that isn’t text; Hadi noted that the system is not interested in JavaScript or even HTML tags. Data is cleaned so it becomes human-readable, not code. Then, it’s loaded into Snowflake and several data miners are run against the pages.
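As a rough illustration of that cleaning step (the actual tooling isn’t disclosed), a parser such as BeautifulSoup can drop script and style nodes and keep only the human-readable text:

```python
from bs4 import BeautifulSoup


def extract_text(html: str) -> str:
    """Strip tags, scripts and styles, keeping only human-readable text."""
    soup = BeautifulSoup(html, "html.parser")
    # The system is "not interested in JavaScript or even HTML tags",
    # so remove non-text nodes before extracting the page text.
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())
```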
Ensemble algorithms are critical to the prediction process. These algorithms combine predictions from several individual models (base models, or ‘weak learners,’ which perform only slightly better than random guessing) to validate company information such as name, business description, sector, location and operational activity. The system also factors in the sentiment polarity of any announcements disclosed on the site.
“After we crawl a site, the algorithms hit different components of the pages pulled, and they vote and come back with a recommendation,” Hadi explained. “There is no human in the loop in this process, the algorithms are basically competing with each other. That helps with the efficiency to increase our coverage.”
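The base models themselves are proprietary, but the voting Hadi describes can be illustrated with a simple majority vote over the values that independent models extract for a single field such as sector; the field values and agreement threshold below are hypothetical.

```python
from collections import Counter


def ensemble_vote(candidates: list[str]) -> tuple[str, float]:
    """Return the majority value among base-model predictions and its share of the vote."""
    counts = Counter(candidates)
    value, votes = counts.most_common(1)[0]
    return value, votes / len(candidates)


# Three hypothetical base models extracting the company's sector from different page components.
sector, agreement = ensemble_vote(["Industrial Machinery", "Industrial Machinery", "Metals"])

# Accept the recommendation only if enough of the models agree (threshold is illustrative).
if agreement >= 0.6:
    print(f"sector={sector} (agreement {agreement:.0%})")
```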
Following that initial load, the system monitors site activity, automatically running weekly scans. It doesn’t refresh information on every scan, Hadi added, only when it detects a change. On each subsequent scan, the system compares a hash key of the landing page from the previous crawl against a newly generated key; if the two are identical, nothing has changed and no action is required. If the hash keys don’t match, the system is triggered to update the company’s information.
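In code, that change check amounts to comparing a fingerprint of the landing page across crawls. A minimal sketch follows; SHA-256 is an assumption, as the article only mentions a “hash key.”

```python
import hashlib


def page_hash(html: str) -> str:
    """Fingerprint a landing page so unchanged sites can be skipped."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()


def needs_update(previous_hash: str, current_html: str) -> bool:
    """True when the landing page differs from the prior crawl and a refresh is required."""
    return page_hash(current_html) != previous_hash
```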
This continuous scraping is important to ensure the system remains as up-to-date as possible. “If they’re updating the site often, that tells us they’re alive, right?” Hadi noted.
Challenges with processing speed, giant datasets, unclean websites
There were challenges to overcome when building out the system, of course, particularly due to the sheer size of datasets and the need for quick processing. Hadi’s team had to make trade-offs to balance accuracy and speed.
“We kept optimizing different algorithms to run faster,” he explained. “And tweaking; some algorithms we had were really good, had high accuracy, high precision, high recall, but they were computationally too costly.”
Websites do not always conform to standard formats, requiring flexible scraping methods.
“You hear a lot about designing websites with an exercise like this, because when we originally started, we thought, ‘Hey, every website should conform to a sitemap or XML,’” said Hadi. “And guess what? Nobody follows that.”
They didn’t want to hard-code or incorporate robotic process automation (RPA) into the system because sites vary so widely, Hadi said, and they knew the most important information they needed was in the text. This led to the creation of a system that pulls only the necessary components of a site, then cleanses them down to the actual text and discards code, including any JavaScript or TypeScript.
As Hadi noted, “the biggest challenges were around performance and tuning and the fact that websites by design are not clean.”