Companies are increasingly interested in AI and how it might be used to boost productivity, but they remain wary of the risks. According to a recent Workday survey, the top barriers to AI deployment are the timeliness and reliability of the underlying data, potential bias, and security and privacy.
Scott Clark, who previously co-founded the AI training and experimentation platform SigOpt (acquired by Intel in 2020), saw a business opportunity and set out to build “software that makes AI safe, reliable, and secure.” He founded Distributional to develop the first version of that software, with the goal of scaling and standardizing testing across a range of AI use cases.
“Distributional is building the modern enterprise platform for AI testing and evaluation,” Clark told Eltrys in an email interview. “As AI applications’ power grows, so does the risk of harm. Our tool is designed to help AI product teams discover, evaluate, and resolve AI risk before it harms their customers in production.”
Clark was motivated to start Distributional after running into AI-related technical problems at Intel following the SigOpt acquisition. As Intel’s VP and GM of AI and high-performance computing, he found it nearly impossible to ensure that high-quality AI testing happened on a regular basis.
“The lessons I drew from these converging experiences pointed to the need for AI testing and evaluation,” Clark continued. “Whether it’s due to hallucinations, instability, inaccuracy, integration issues, or any of dozens of other potential problems, teams frequently struggle to identify, understand, and address AI risk through testing. Proper AI testing requires deep and distributional knowledge, which is a difficult challenge to tackle.”
Distributional’s core product aims to detect and diagnose AI “harm” from large language models (such as the ones powering OpenAI’s ChatGPT) and other kinds of AI models, semi-automatically determining what, how, and where to test them. Clark claims the software gives companies a “complete” view of AI risk in a sandbox-like pre-production environment.
“Most teams choose to assume model behavior risk and accept that models will have issues,” Clark said. “Some may attempt ad hoc manual testing to identify these issues, which is time-consuming, disorganized, and inherently incomplete. Others may try to catch these issues passively with monitoring tools once AI is in production. As a consequence, our platform contains an extensible testing framework for continually testing and analyzing stability and robustness, a customizable testing dashboard for visualizing and understanding test results, and an intelligent test suite for designing, prioritizing, and generating the appropriate mix of tests.”
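Clark didn’t share implementation details, but the general idea behind this kind of distribution-level testing can be sketched in a few lines: instead of asserting on a single model response, you sample a model repeatedly and check whether the distribution of some behavioral metric has drifted from a known-good baseline. The Python below is a minimal, hypothetical illustration; the function names and the choice of response length as the metric are assumptions made for the example, not Distributional’s actual API or methodology.

```python
# Hypothetical sketch of distribution-level model testing (not Distributional's API).
# Idea: sample the model many times, summarize each response with a simple metric,
# and flag the model when the metric's distribution drifts from a baseline.
from scipy.stats import ks_2samp


def sample_metric(model, prompt, n_samples=50):
    """Call the model repeatedly and record response length as a toy behavioral metric."""
    return [len(model(prompt)) for _ in range(n_samples)]


def stability_test(model, prompt, baseline, alpha=0.01):
    """Compare current behavior to a baseline distribution with a two-sample KS test."""
    current = sample_metric(model, prompt)
    result = ks_2samp(baseline, current)
    return {"drifted": result.pvalue < alpha, "p_value": result.pvalue}
```

In practice, a platform like the one Clark describes would presumably track many such metrics across whole suites of prompts and re-run them continually, surfacing the results in the dashboard he mentions; the single-metric check above is just the smallest unit of that idea.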
Clark was vague on the specifics of how all this works beyond the broad strokes of Distributional’s platform. In his defense, the company is still co-designing the product with its enterprise partners.
So, given that Distributional is pre-revenue, pre-launch, and has no paying customers, how will it compete with the AI testing and evaluation platforms already on the market? There are many, including Kolena, Prolific, Giskard, and Patronus, several of them well funded. And as if the rivalry weren’t fierce enough, tech giants such as Google Cloud, AWS, and Azure offer model evaluation tools as well.
Clark argues that Distributional’s enterprise focus sets it apart. “From day one, we’re building software capable of meeting the data privacy, scalability, and complexity requirements of large enterprises in both unregulated and highly regulated industries,” the CEO said. “The types of enterprises with whom we are designing our product have requirements that extend beyond existing offerings available in the market, which tend to be individual developer-focused tools.”
If all goes as planned, Distributional will begin generating revenue early next year, when its platform becomes generally available and some of its design partners convert to paying customers. In the meantime, the company is raising money from venture capitalists. Distributional announced today that it has closed an $11 million seed round led by Andreessen Horowitz’s Martin Casado, with participation from Operator Stack, Point72 Ventures, SV Angel, Two Sigma, and angel investors.
“We hope to usher in a virtuous cycle for our customers,” Clark continued. “By improving testing, teams will be more confident in deploying AI in their applications. They will see the impact of AI rise exponentially as they deploy more of it. And as they become more familiar with that impact at scale, they will apply it to increasingly complex and relevant challenges, which will require even more testing to ensure it is safe, trustworthy, and secure.”