Sunday, June 14, 2015

Solr soft commit Gotcha - OOM

Without frequent hard commits, a high indexing rate combined with Solr soft commits can lead to an out-of-memory error.

Our Solr collection stores browsing history with a maximum search-visibility latency of 30 seconds. Since multiple writer processes index concurrently, we relied on auto soft commits and auto hard commits rather than explicit commits, to avoid a commit storm.
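For reference, a minimal sketch of the soft commit side of that policy in solrconfig.xml (the 30,000 ms value matches our 30-second visibility requirement; the exact surrounding config is omitted):

    <!-- solrconfig.xml, inside <updateHandler>: make new docs searchable within 30s -->
    <autoSoftCommit>
      <maxTime>30000</maxTime>
    </autoSoftCommit>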

An OOM during an intense indexing session caught us by surprise. A quick heap dump inspection revealed a fat RAMDirectory on the heap. Why? Soft commits use Lucene NRT (near-real-time) search, which keeps recently indexed data in RAM; that memory is only freed once a hard commit arrives and persists the data to disk, ensuring durability.

The fault was in our auto hard commit policy, which was time-only (maxTime=10min). If you index fast enough within those 10 minutes, you'll run out of memory.
We fixed that by adding a maxDocs=50,000 limit, as sketched below.
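A sketch of the fixed hard commit policy in solrconfig.xml. The openSearcher=false setting is our assumption here, since search visibility is already handled by the soft commits above:

    <!-- solrconfig.xml, inside <updateHandler>: hard commit on time OR doc count, whichever comes first -->
    <autoCommit>
      <maxTime>600000</maxTime>             <!-- 10 minutes -->
      <maxDocs>50000</maxDocs>              <!-- caps RAM held by uncommitted docs -->
      <openSearcher>false</openSearcher>    <!-- durability only; soft commits handle visibility -->
    </autoCommit>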
The maxDocs value is calculated by:

[size of doc] x [num of docs] <= [memory we want to spend per core]
500 B         x 50,000        <= 25 MB

We're currently running with 8 shard replicas, so the maximum memory usage for NRT would be 8 x 25 MB = 200 MB.


Conclusion

When soft committing, make sure to limit heap usage by specifying both maxDocs and maxTime limits in your auto hard commit policy.
This is one of the factors that will affect your Solr memory usage.
Happy soft committing.

