Rockset Ushers in the New Era of Search and AI with a 30% Lower Price

January 31, 2024

In 2023, Rockset announced a new cloud architecture for search and analytics with compute-storage and compute-compute separation. With this architecture, users can separate ingestion compute from query compute while both access the same real-time data. This is a game changer for disaggregated, real-time architectures, and it unlocks ways to make building applications on Rockset easier and cheaper.

Today, Rockset releases new features that make search and analytics more affordable than ever before:

  • General purpose instance class: A new ratio of compute and memory resources that is suitable for many workloads and comes at a 30% lower price.
  • XSmall virtual instance: A new low-cost entry point for dedicated virtual instances at $232/month.
  • Autoscaling virtual instances: Automatically scale virtual instances up and down on demand based on CPU utilization.
  • Microbatching: An option to microbatch ingestion based on the latency requirements of the use case.
  • Incremental materializations: An ability to create derived, incrementally updated collections from a set of base collections.

In this blog, we delve into each of these features and how they give users more cost control over their search and AI applications.

General purpose instance class

Rockset introduces the concept of an instance class: the ratio of compute and memory resources for a virtual instance. The two instance classes available are:

  • General purpose: Provides a ratio of memory and compute suitable for many workloads.
  • Memory optimized: For a given virtual instance size, provides double the memory of the general purpose class.

We recommend that users test Rockset performance on the general purpose instance class, which comes at a 30% lower price. If your workload runs low on memory while CPU utilization stays moderate, switch from general purpose to the memory optimized instance class. The memory optimized class is ideal for queries that process large datasets or that have a large working set size due to the mix of queries.

XSmall virtual instance

Rockset also introduces a new XSmall virtual instance size at $232/month. While Rockset already offers the developer edition, priced as low as $9/month, it runs on shared virtual instances with variable performance. The new XSmall virtual instance size provides consistent performance for applications at a lower starting price.

Autoscaling virtual instances

Rockset virtual instances can be scaled up or down with an API call or the click of a button. With autoscaling virtual instances, this happens automatically in response to CPU utilization.

Rockset monitors the virtual instance CPU utilization metrics to determine when to trigger a switch in virtual instance size. It uses a decay algorithm, allowing for historical analysis with emphasis on recent measurements when making autoscaling decisions. Autoscaling has the following configuration:

  • Autoscale up occurs when the CPU utilization decay value exceeds 75%
  • Autoscale down occurs when the CPU utilization decay value falls below 25%

A cooldown period of 3 minutes follows each scale-up, and a cooldown period of 1 hour follows each scale-down.
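Rockset has not published the exact decay algorithm, but a minimal sketch of this decay-and-threshold logic might look like the following, assuming an exponential moving average as the decay function and using the thresholds and cooldowns above (all class and parameter names are illustrative):

```python
# A minimal sketch of decay-and-threshold autoscaling; Rockset's actual
# algorithm is not public. "Decay" is modeled here as an exponential
# moving average (EMA) that emphasizes recent CPU measurements.

SCALE_UP_THRESHOLD = 0.75    # scale up when the decayed CPU utilization exceeds 75%
SCALE_DOWN_THRESHOLD = 0.25  # scale down when it falls below 25%
UP_COOLDOWN_S = 3 * 60       # 3-minute cooldown after a scale-up
DOWN_COOLDOWN_S = 60 * 60    # 1-hour cooldown after a scale-down

class DecayAutoscaler:
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha           # weight given to the newest sample (assumed value)
        self.decayed_cpu = 0.5       # EMA of CPU utilization, seeded at a neutral midpoint
        self.cooldown_until = 0.0    # timestamp before which no resizing happens

    def observe(self, cpu_utilization: float, now: float) -> str:
        # Blend the new sample into the history, emphasizing recent measurements.
        self.decayed_cpu = (self.alpha * cpu_utilization
                            + (1 - self.alpha) * self.decayed_cpu)
        if now < self.cooldown_until:
            return "hold"
        if self.decayed_cpu > SCALE_UP_THRESHOLD:
            self.cooldown_until = now + UP_COOLDOWN_S
            return "scale_up"        # e.g., Large -> XLarge
        if self.decayed_cpu < SCALE_DOWN_THRESHOLD:
            self.cooldown_until = now + DOWN_COOLDOWN_S
            return "scale_down"      # e.g., Large -> Medium
        return "hold"

# Sustained high CPU crosses the 75% threshold after a few samples,
# triggering one scale_up; the cooldown then suppresses further resizes.
scaler = DecayAutoscaler()
decisions = [scaler.observe(0.9, now=t) for t in range(10)]
```

Because the decayed value reflects history rather than a single spike, a brief CPU burst does not trigger a resize, while sustained load does.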

Rockset scales a virtual instance up or down in as little as 10 seconds thanks to compute-storage separation. One Rockset customer saved 50% on their monthly bill by turning on autoscaling, as the service dynamically responded to changes in their application's CPU utilization without any management overhead.

Rockset’s cloud-native architecture contrasts with the tightly coupled architecture of Elasticsearch. The Elastic Cloud autoscaling API can be used to define policies to monitor the resource utilization of the cluster. Even with the autoscaling API providing notifications, the responsibility still falls on the user to add or remove the resources. This is not a hands-free operation and also involves the transfer of data across nodes.

Microbatching

Rockset is known for its low-latency streaming data ingestion and indexing. On benchmarks, Rockset achieved up to 4x faster streaming data ingestion than Elasticsearch.

While many users choose Rockset for its real-time capabilities, we do see use cases with less stringent data latency requirements. Users may be building user-facing search and analytics applications on data that is updated every few minutes or hours. In these scenarios, streaming data ingestion can be an expensive part of the cost equation.

Microbatching allows for the batching of ingestion in intervals of 10 minutes to 2 hours. The virtual instance responsible for ingestion spins up to batch incoming data and then spins down when the batching operation is complete. Let’s take a look at how microbatching can save on ingestion compute costs.

Consider a user with a large virtual instance for data ingestion, an ingest rate of 10 MB/second and a data latency requirement of 30 minutes. Every 30 minutes, 18,000 MB have accumulated. The large virtual instance processes data at 18 MB/second, so it takes 16.7 minutes to batch load the data and the ingestion compute sits idle for the rest of the interval. This results in a savings of 44% on ingestion compute.

Microbatching Example

  • Batch size: 10 MB/second × 60 seconds/minute × 30 minutes = 18,000 MB
  • Batch processing time: 18,000 MB ÷ 18 MB/second ÷ 60 seconds/minute = 16.7 minutes
  • Ingestion compute savings: 1 − ((16.7 minutes × 2 batches/hour) ÷ 60 minutes/hour) ≈ 44%
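The same arithmetic generalizes to any ingest rate, batch interval and peak processing rate. Here is a small sketch of the calculation above (the function name is ours; the 18 MB/second peak rate for a large virtual instance comes from the example, and the billing assumption is stated in the docstring):

```python
def microbatch_savings(ingest_rate_mb_s: float,
                       batch_interval_min: float,
                       peak_rate_mb_s: float) -> float:
    """Fraction of ingestion compute saved by microbatching versus always-on
    streaming, assuming the ingestion virtual instance only consumes compute
    while it is actively draining a batch."""
    batch_size_mb = ingest_rate_mb_s * 60 * batch_interval_min
    processing_min = batch_size_mb / peak_rate_mb_s / 60
    busy_fraction = processing_min / batch_interval_min
    return 1 - busy_fraction

# Example from the text: 10 MB/s ingest, 30-minute batches, 18 MB/s peak rate.
savings = microbatch_savings(10, 30, 18)
print(f"{savings:.0%}")  # -> 44%
```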

Microbatching is yet another example of how Rockset is giving more cost controls to users to save on resources depending on their use case requirements.

Incremental materialization

Incremental materialization is a technique used to optimize query performance.

Materializations are precomputed collections, like tables, created from a SQL query on one or more base collections. The idea behind materializations is to store the result of a computationally expensive query in a collection so that it can be retrieved quickly, without recomputing the original query every time the data is needed.

Incremental materializations address one of the challenges with materializations: staying up to date when the underlying data changes frequently. With incremental materializations, only the periodic data changes are processed, rather than recomputing the entire materialization.
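As a toy illustration of the difference, independent of how Rockset implements it: a full recompute scans every order, while an incremental refresh folds in only the rows that arrived since the last run.

```python
from collections import defaultdict

# Conceptual sketch only: maintain a materialized per-seller aggregate
# incrementally, folding in just the new orders since the last refresh.
materialized = defaultdict(lambda: {"order_count": 0, "total_sales": 0.0})

def apply_increment(new_orders):
    """Update the materialization with only the changed rows,
    instead of recomputing the aggregate over all orders."""
    for order in new_orders:
        row = materialized[order["seller_id"]]
        row["order_count"] += 1
        row["total_sales"] += order["amount"]

apply_increment([{"seller_id": "s1", "amount": 25.0},
                 {"seller_id": "s2", "amount": 10.0}])
apply_increment([{"seller_id": "s1", "amount": 5.0}])  # only the delta is processed
```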

In Rockset, incremental materializations can be updated as frequently as once a minute. We often see incremental materializations used for complex queries with strict sub-100ms latency SLAs.

Let’s use an example of an incremental materialization for a multi-tenant SaaS application, recording order counts and sales by seller. In Rockset, we use the INSERT INTO command to create a derived collection.

Embedded content: https://gist.github.com/julie-mills/150cbe7ed6c524c6eb6cc3afbd2b6027

We save this materialization as a query lambda. Query lambdas enable users to save any SQL query and execute it as a dedicated REST endpoint. Query lambdas can now be scheduled for automatic execution, and actions can be configured based on their results. To create an incremental materialization with scheduled query lambdas, you set the time interval at which the query runs and configure the action to insert the results into a collection using the INSERT INTO command, as sketched below.
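In Rockset, the scheduled query lambda handles this cadence for you. Purely to make the pattern concrete, the sketch below runs a hypothetical INSERT INTO ... SELECT statement (a stand-in for the gist above; the workspace, collection and column names are invented) against Rockset's SQL queries endpoint on a fixed interval:

```python
import time
import requests

ROCKSET_API_SERVER = "https://api.usw2a1.rockset.com"  # region-specific API server
API_KEY = "..."  # your Rockset API key

# Hypothetical stand-in for the materialization query in the gist above:
# fold the last minute of orders into a derived seller_stats collection.
MATERIALIZE_SQL = """
INSERT INTO commons.seller_stats
SELECT o.seller_id,
       COUNT(*)      AS order_count,
       SUM(o.amount) AS total_sales
FROM commons.orders o
WHERE o._event_time > CURRENT_TIMESTAMP() - MINUTES(1)  -- only the new rows
GROUP BY o.seller_id
"""

def refresh_materialization():
    # POST /v1/orgs/self/queries is Rockset's SQL query endpoint.
    resp = requests.post(
        f"{ROCKSET_API_SERVER}/v1/orgs/self/queries",
        headers={"Authorization": f"ApiKey {API_KEY}"},
        json={"sql": {"query": MATERIALIZE_SQL}},
    )
    resp.raise_for_status()

# A scheduled query lambda replaces this loop in practice; it is shown
# only to make the refresh cadence concrete.
while True:
    refresh_materialization()
    time.sleep(60)  # refresh as frequently as once a minute
```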

With incremental materializations, the application query can be simplified to achieve low query latency.

Embedded content: https://gist.github.com/julie-mills/ec916f94ed41de0cdd518d070f4b24f4

Rockset achieves incremental materializations using scheduled query lambdas and the INSERT INTO command, allowing users to keep the full complexity of their queries while achieving better price-performance.

Speed and efficiency at scale

Rockset continues to lower the cost barrier to search and AI applications with the general purpose instance class, autoscaling virtual instances, microbatching and incremental materializations.

While this release gives users more cost controls, Rockset continues to abstract away the hard parts of search and AI including indexing, cluster management, scaling operations and more. As a result, users can build applications without incurring the compute costs and human costs that have traditionally accompanied systems like Elasticsearch.

The ability to scale genAI applications efficiently in the cloud is what will enable engineering teams to continue to build and iterate on next-gen applications. Cloud native is the most efficient way to build.