
Intel and others promise enterprise-level open generative AI tools.

Can generative AI built for enterprises (for example, AI that autocompletes reports and spreadsheet formulas) ever become interoperable? The Linux Foundation, the nonprofit that supports and maintains open source projects, is determined to find out, working alongside organizations including Cloudera and Intel.

The Linux Foundation launched the Open Platform for Enterprise AI (OPEA) on Tuesday with the goal of fostering the development of open, multi-provider, modular generative AI systems. Housed under the Linux Foundation's LF AI and Data organization, OPEA aims to pave the way for robust, scalable generative AI systems built on the best open source innovation available. Ibrahim Haddad, the executive director of LF AI and Data, says the initiative will unlock advanced AI technologies for widespread adoption across the ecosystem.

“OPEA will revolutionize the field of AI with its advanced and versatile framework that leads the way in technology stacks,” Haddad said. “This initiative demonstrates our commitment to fostering open source innovation and collaboration in the AI and data communities through a fair and transparent governance model.”


Among the members of OPEA, one of the Linux Foundation's Sandbox Projects, are notable enterprise companies such as Cloudera, Intel, IBM-owned Red Hat, Hugging Face, Domino Data Lab, MariaDB, and VMware.

So what might they build together? Haddad floats a few possibilities, such as optimized support for AI toolchains and compilers, which enable AI workloads to run across different hardware components, as well as heterogeneous pipelines for retrieval-augmented generation (RAG).

RAG is gaining popularity in enterprise applications of generative AI, and it's easy to see why. The responses and actions of most generative AI models are constrained by the data they were trained on. But RAG extends a model's knowledge base beyond that training data: a RAG model retrieves external information, often proprietary company data or a public database, and uses it to generate responses or complete tasks.
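To make the RAG idea concrete, here is a minimal, hypothetical sketch of the retrieve-then-generate loop. The naive word-overlap scoring and the function names are illustrative assumptions (production pipelines use vector embeddings and a real LLM call), not anything from OPEA's actual implementations.

```python
# Minimal RAG sketch: retrieve the documents most relevant to a query,
# then pack them into the prompt as extra context for a model.
# Scoring here is naive word overlap; real systems use vector embeddings.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query; return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str, documents: list[str]) -> str:
    """Build an augmented prompt; a real system would send this to an LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Stand-ins for proprietary company data a RAG model might draw on.
docs = [
    "Q3 revenue grew 12% year over year.",
    "The cafeteria menu changes on Mondays.",
    "Headcount in the Q3 sales org rose by 40.",
]
prompt = answer("What happened to Q3 revenue", docs)
```

The point of the sketch is the shape of the pipeline, not the retrieval method: swapping the overlap scorer for an embedding index leaves the rest of the flow unchanged, which is exactly the kind of interchangeable component OPEA wants standardized.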


Intel provided additional information in its own press release:

Enterprises face difficulties in implementing RAG solutions due to the lack of standardized components. This makes it challenging for them to select and deploy open and interoperable solutions that can expedite their time to market. We aim to tackle these issues by working closely with the industry to establish standardized components, such as frameworks, architecture blueprints, and reference solutions.

Assessment will also be a crucial aspect of what OPEA addresses.

In its GitHub repository, OPEA proposes a rubric that grades generative AI systems along four axes: performance, features, trustworthiness, and enterprise readiness. Performance, as OPEA defines it, means benchmarks drawn from real-world use cases. Features covers a system's interoperability, deployment options, and ease of use. Trustworthiness concerns an AI model's reliability and quality. And enterprise readiness focuses on what it takes to get a system up and running without major issues.
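The four dimension names above come from OPEA's rubric; everything else in this small scorecard sketch, including the 0-5 scale and the equal weighting, is a hypothetical illustration of how such a rubric could be turned into a single grade, not OPEA's actual grading scheme.

```python
# Hypothetical scorecard over OPEA's four rubric dimensions.
# The 0-5 scale and equal weighting are illustrative assumptions.

RUBRIC_DIMENSIONS = (
    "performance",
    "features",
    "trustworthiness",
    "enterprise_readiness",
)

def grade(scores: dict[str, int]) -> float:
    """Average the four dimension scores (each assumed 0-5) into one grade."""
    missing = [d for d in RUBRIC_DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(scores[d] for d in RUBRIC_DIMENSIONS) / len(RUBRIC_DIMENSIONS)

overall = grade({
    "performance": 4,           # benchmarks from real-world use cases
    "features": 3,              # interoperability, deployment, ease of use
    "trustworthiness": 5,       # model reliability and quality
    "enterprise_readiness": 4,  # effort to launch without major issues
})
```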

Rachel Roumeliotis, the director of open source strategy at Intel, explains that OPEA will collaborate with the open source community to provide tests, assessments, and grading for generative AI deployments upon request.

OPEA's other projects are up in the air for now. But Haddad floated the possibility of open model development along the lines of Meta's expanding Llama family and Databricks' DBRX. Toward that end, Intel has already contributed reference implementations of a generative-AI-powered chatbot, document summarizer, and code generator to the OPEA repo, optimized for its Xeon 6 and Gaudi 2 hardware.

OPEA's members are clearly committed (and understandably motivated) to building tooling for enterprise generative AI. Cloudera recently announced partnerships to create an “AI ecosystem” in the cloud. Domino offers a suite of apps for building and auditing business-oriented generative AI. And last August, VMware introduced new compute products focused on the infrastructure side of enterprise AI, known as “private AI.”

Will these vendors collaborate effectively to develop cross-compatible AI tools under OPEA?

There is a clear advantage to doing so: customers could mix and match vendors depending on their requirements, resources, and budget. But history has shown the risks of over-reliance on a single vendor. Hopefully, that won't be the outcome here.
