The popular open-source OpenSearch software that’s used by enterprises for data analytics and enhancing artificial intelligence workloads is getting a major update.
The OpenSearch Software Foundation, a vendor-neutral organization that’s responsible for the development of the software, has just announced the launch of OpenSearch 3.0, bringing about a massive performance increase and new vector search capabilities designed to accelerate AI development.
Originally created by Amazon Web Services Inc., the OpenSearch project launched in 2021 as a community-driven alternative to Elasticsearch. It was founded in response to Elastic N.V.'s decision to switch the previously open-source Elasticsearch engine from the Apache 2.0 license to the more restrictive Server Side Public License, a move widely perceived as anticompetitive.
OpenSearch was forked from Elasticsearch 7.10.2, the last version released under the Apache 2.0 license that Elastic dropped, and it retains that license. It's a distributed search and analytics engine designed to handle large volumes of data efficiently and deliver fast, accurate results from search queries.
It has often been likened to a kind of digital librarian that can organize, catalog and retrieve information and surface insights from vast datasets, but unlike most human librarians, it can do that in real time. It’s widely used for applications such as log analytics, building search engines and data analytics.
Available starting today, OpenSearch 3.0 delivers a 9.5-times performance improvement over OpenSearch 1.3, according to the foundation.
There are also dozens of new features in the platform that aim to facilitate AI applications such as generative AI chatbots, retrieval-augmented generation, hybrid search and recommendation engines. According to the foundation, these applications rely heavily on vector databases, which store unstructured information as numerical vector representations, or embeddings, making it easier for algorithms to identify patterns across vast datasets. Vector databases can dramatically improve the performance of AI, but they often struggle with speed and scale when dealing with billions of vectors.
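To make that concrete, here's a minimal sketch of how an application might store and query embeddings in an OpenSearch k-NN vector index using the opensearch-py Python client. The host, credentials, index name and toy four-dimensional vectors are illustrative assumptions, not part of the announcement; real deployments typically use an embedding model that produces vectors with hundreds or thousands of dimensions.

from opensearchpy import OpenSearch

# Connect to a local OpenSearch cluster (host and credentials are placeholder assumptions).
client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=False,
)

# Create an index with a knn_vector field so the k-NN engine can search embeddings.
client.indices.create(
    index="docs",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "text": {"type": "text"},
                "embedding": {"type": "knn_vector", "dimension": 4},
            }
        },
    },
)

# Index a document together with its (toy) embedding.
client.index(
    index="docs",
    body={
        "text": "OpenSearch 3.0 adds GPU-accelerated vector search",
        "embedding": [0.12, 0.87, 0.33, 0.54],
    },
    refresh=True,
)

# Approximate nearest-neighbor query: return the three documents closest to a query vector.
results = client.search(
    index="docs",
    body={
        "size": 3,
        "query": {"knn": {"embedding": {"vector": [0.1, 0.9, 0.3, 0.5], "k": 3}}},
    },
)

for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])

This is the basic store-and-retrieve loop that the release's GPU acceleration and storage optimizations are meant to speed up once workloads reach billions of vectors.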
GPU acceleration for higher-velocity vector search
OpenSearch 3.0 aims to address these scalability issues by introducing graphics processing unit acceleration for the OpenSearch Vector Engine. The experimental feature taps Nvidia Corp. GPUs to speed up large-scale vector search workloads while cutting index build times, which in turn lowers operational costs.
Complementing that, OpenSearch is gaining support for Anthropic PBC's Model Context Protocol, which provides standardized interfaces for connecting large language models with external data sources and development tools. The release also adds features that reduce storage consumption by eliminating redundant copies of vector data.
Besides boosting OpenSearch’s vector processing performance, the foundation has also made great efforts to optimize how the platform ingests, transports and manages data. For instance, it has added support for the gRPC Remote Procedure Call framework as another experimental feature that enables more efficient data transport between clients, servers and nodes.
Other new features include pull-based ingestion, which gives OpenSearch more control over the flow of data through its system by letting it "pull" data from streaming platforms such as Apache Kafka. Reader and writer separation isolates indexing from search workloads, boosting the efficiency of index-building operations, while the integration of Apache Calcite is said to enable more intuitive query-building and exploration.
Finally, the new release upgrades OpenSearch's core infrastructure: Apache Lucene 10 improves its search and indexing capabilities, and Java 21 becomes the minimum supported runtime, giving users access to modern language features and performance improvements in the Java codebase.
Foundation Governing Board Chair Carl Meadows said the enterprise search market is set to grow to more than $8.9 billion by 2030, fueled by rampant demand for more advanced AI applications.