Limit and monitor warmup memory usage #5568
base: main
Conversation
Due to tantivy limitations, searching a split requires downloading all of the required data and keeping it in memory. We call this phase warmup.

Before this PR, the only thing that curbed memory usage was the search permits: only N split searches may run concurrently. Unfortunately, the amount of data required varies widely from one split search to another, so we need a mechanism to measure memory usage and avoid starting more split searches when memory is tight. Just using a semaphore is not an option: we do not know beforehand how much memory a split search will require, so it could easily lead to a deadlock.

Instead, this commit builds upon the search permit provider, which is now in charge of managing a configurable memory budget for warmup memory. We introduce a configurable "warmup_single_split_initial_allocation". A new leaf split search cannot start if that much memory is not available. The initial allocation is meant to be greater than what will actually be needed most of the time. The split search holds this allocation until the end of warmup. After warmup, we can obtain the actual memory usage by interrogating the warmup cache and update the amount of memory held (most of the time, this means releasing some memory).

At this point we also release the search permit: the actual search still has to run, but the thread pool takes care of limiting the number of concurrent tasks.

Closes #5355
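For readers skimming the PR, here is a minimal, self-contained sketch of the accounting scheme described above. The names (MemoryBudget, WarmupPermit, acquire, update_to_actual_usage) are illustrative and do not match the actual SearchPermitProvider API, and a blocking Condvar stands in for whatever async waiting the real implementation uses.

```rust
use std::sync::{Arc, Condvar, Mutex};

/// Tracks how many bytes of the warmup budget are still available.
struct MemoryBudget {
    available_bytes: Mutex<u64>,
    condvar: Condvar,
}

impl MemoryBudget {
    fn new(budget_bytes: u64) -> Arc<Self> {
        Arc::new(Self {
            available_bytes: Mutex::new(budget_bytes),
            condvar: Condvar::new(),
        })
    }

    /// Blocks until the initial allocation fits in the remaining budget.
    fn acquire(budget: &Arc<Self>, initial_allocation_bytes: u64) -> WarmupPermit {
        let mut available = budget.available_bytes.lock().unwrap();
        while *available < initial_allocation_bytes {
            available = budget.condvar.wait(available).unwrap();
        }
        *available -= initial_allocation_bytes;
        WarmupPermit {
            budget: Arc::clone(budget),
            held_bytes: initial_allocation_bytes,
        }
    }
}

/// RAII guard: the reserved bytes flow back into the budget on drop.
struct WarmupPermit {
    budget: Arc<MemoryBudget>,
    held_bytes: u64,
}

impl WarmupPermit {
    /// After warmup, shrink (or grow) the reservation to the measured cache size.
    fn update_to_actual_usage(&mut self, actual_bytes: u64) {
        let mut available = self.budget.available_bytes.lock().unwrap();
        if actual_bytes <= self.held_bytes {
            // Common case: the initial allocation was generous, release the excess.
            *available += self.held_bytes - actual_bytes;
            self.budget.condvar.notify_all();
        } else {
            // Rare case: warmup used more than the initial allocation.
            let shortfall = actual_bytes - self.held_bytes;
            let current = *available;
            *available = current.saturating_sub(shortfall);
        }
        self.held_bytes = actual_bytes;
    }
}

impl Drop for WarmupPermit {
    fn drop(&mut self) {
        *self.budget.available_bytes.lock().unwrap() += self.held_bytes;
        self.budget.condvar.notify_all();
    }
}
```

Making the permit an RAII guard means the remaining reservation is returned automatically when the permit is dropped, which is also why attaching the permit to the warmup cache (see below) releases memory at exactly the right moment.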
Also attach the permit to the actual memory cache to ensure memory is freed at the right moment.
Adding an extra generic field to the cache to optionally allow permit tracking is awkward. Instead, we make the directory generic over the cache type and use a wrapping cache when tracking is necessary.
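For illustration, the shape could look roughly like the sketch below. DirectoryCache, TrackedCache, and CachingDirectorySketch are simplified stand-ins, not quickwit's actual CachingDirectory or ByteRangeCache types, and WarmupPermit is the placeholder permit from the earlier sketch.

```rust
use std::ops::Range;
use std::path::Path;

/// Placeholder for the permit type sketched earlier.
struct WarmupPermit;

/// Minimal cache interface the directory needs.
trait DirectoryCache {
    fn get_slice(&self, path: &Path, byte_range: Range<usize>) -> Option<Vec<u8>>;
    fn put_slice(&self, path: &Path, byte_range: Range<usize>, bytes: Vec<u8>);
}

/// The directory is generic over its cache instead of carrying an optional
/// permit field for the tracking case.
struct CachingDirectorySketch<C: DirectoryCache> {
    cache: C,
    // underlying storage omitted
}

impl<C: DirectoryCache> CachingDirectorySketch<C> {
    fn read_cached(&self, path: &Path, byte_range: Range<usize>) -> Option<Vec<u8>> {
        self.cache.get_slice(path, byte_range)
    }
}

/// Used only when warmup tracking is needed: forwards every call and keeps the
/// permit alive exactly as long as the cached bytes are alive.
struct TrackedCache<C: DirectoryCache> {
    inner: C,
    _warmup_permit: WarmupPermit,
}

impl<C: DirectoryCache> DirectoryCache for TrackedCache<C> {
    fn get_slice(&self, path: &Path, byte_range: Range<usize>) -> Option<Vec<u8>> {
        self.inner.get_slice(path, byte_range)
    }

    fn put_slice(&self, path: &Path, byte_range: Range<usize>, bytes: Vec<u8>) {
        self.inner.put_slice(path, byte_range, bytes)
    }
}
```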
if is_top_5pct_memory_intensive(
    resource_stats.short_lived_cache_num_bytes,
    resource_stats.split_num_docs,
) {
    // We log at most 5 times per minute.
    quickwit_common::rate_limited_info!(
        limit_per_min = 5,
        split_num_docs = resource_stats.split_num_docs,
        short_lived_cache_num_bytes = resource_stats.short_lived_cache_num_bytes,
        query = %search_request.query_ast,
        "memory intensive query"
    );
}
should we create a metric as well?
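If a metric is added, one possible shape is a plain counter incremented next to the rate-limited log line. This sketch uses the prometheus and once_cell crates directly with a made-up metric name; quickwit's own metrics helpers would be the natural place for it.

```rust
use once_cell::sync::Lazy;
use prometheus::{register_int_counter, IntCounter};

// Hypothetical metric name; quickwit's naming conventions may differ.
static MEMORY_INTENSIVE_QUERIES_TOTAL: Lazy<IntCounter> = Lazy::new(|| {
    register_int_counter!(
        "quickwit_search_memory_intensive_queries_total",
        "Number of leaf split searches whose warmup cache usage was in the top 5%."
    )
    .expect("failed to register counter")
});

// At the call site, next to the rate-limited log:
// MEMORY_INTENSIVE_QUERIES_TOTAL.inc();
```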
@@ -226,6 +226,8 @@ pub struct SearcherConfig {
    #[serde(default)]
    #[serde(skip_serializing_if = "Option::is_none")]
    pub storage_timeout_policy: Option<StorageTimeoutPolicy>,
    pub warmup_memory_budget: ByteSize,
Can we define a default value for this, and reuse the default value function in SearcherConfig::default for consistency?
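For illustration, the usual serde pattern would look something like the sketch below; the struct name, function name, and the 10 GB value are placeholders, not the PR's code.

```rust
use bytesize::ByteSize;
use serde::Deserialize;

// Placeholder value: the actual default is up for discussion in this thread.
fn default_warmup_memory_budget() -> ByteSize {
    ByteSize::gb(10)
}

#[derive(Deserialize)]
struct SearcherConfigSketch {
    // The same function backs the serde default and `Default::default()`.
    #[serde(default = "default_warmup_memory_budget")]
    warmup_memory_budget: ByteSize,
}

impl Default for SearcherConfigSketch {
    fn default() -> Self {
        Self {
            warmup_memory_budget: default_warmup_memory_budget(),
        }
    }
}
```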
@@ -274,6 +276,8 @@ impl Default for SearcherConfig {
    split_cache: None,
    request_timeout_secs: Self::default_request_timeout_secs(),
    storage_timeout_policy: None,
    warmup_memory_budget: ByteSize::gb(1),
I think we want larger defaults here: maybe a memory budget of 10GB, and a single-split initial allocation of 1GB or something like this.
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "CachingDirectory({:?})", self.underlying)
    }
}

-struct CachingFileHandle {
+struct CachingFileHandle<C> {
what is the extra complexity for?
Description
To better control memory usage during search, we add some tooling around the warmup cache.
How was this PR tested?
Describe how you tested this PR.