[ACCEPTED] Couchbase Metadata overhead warning
Every document has metadata and a key stored in memory. The metadata is 56 bytes. Add that to your average key size and multiply the result by your document count to arrive at the total bytes for metadata and keys in memory. So the RAM required is affected by the doc count, your key size, and the number of copies (replica count + 1). You can find details at http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/#memory-quota. The specific formula there is:
(documents_num) * (metadata_per_document + ID_size) * (no_of_copies)
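To make that concrete, here is a minimal sketch of the formula in Python. The document count, key size, and replica count below are made-up example values; the 56-byte metadata figure is the one cited above:

    # Estimate RAM needed for document metadata + keys in Couchbase 2.x.
    # Formula from the Couchbase 2.5 manual:
    #   (documents_num) * (metadata_per_document + ID_size) * (no_of_copies)

    METADATA_PER_DOCUMENT = 56  # bytes of metadata per document

    def metadata_ram_bytes(documents_num, avg_key_size, replica_count):
        no_of_copies = replica_count + 1  # the active copy plus its replicas
        return documents_num * (METADATA_PER_DOCUMENT + avg_key_size) * no_of_copies

    # Hypothetical example: 10M docs, 30-byte keys, 1 replica.
    total = metadata_ram_bytes(10_000_000, 30, 1)
    print(f"{total / 1024**3:.2f} GiB of RAM just for metadata and keys")
    # -> 10M * (56 + 30) * 2 = 1,720,000,000 bytes, about 1.60 GiB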
You can get details about the user data and metadata being used by your cluster from the console (or via the REST or command line interface). Look at the 'VBUCKET RESOURCES' section. The specific values of interest are 'user data in RAM' and 'metadata in RAM'. From your screenshot, you are definitely running up against your memory capacity. You are over the low water mark, so the system will eject inactive replica documents from memory. If you cross the high water mark, the system will then start ejecting active documents from memory until it reaches the low water mark. Any requests for ejected documents will then require a background disk fetch. From your screenshot, you have less than 5% of your active documents in memory already.
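If you'd rather script this check than read it off the console, something like the following sketch pulls the bucket stats over the REST API. The host, bucket name, and credentials are placeholders, and the two stat names (ep_meta_data_memory and vb_active_resident_items_ratio) are what I believe 2.x exposes, so verify them against your cluster's actual output:

    import requests  # pip install requests

    # Pull per-bucket stats from the Couchbase REST API and report memory usage.
    HOST = "http://localhost:8091"          # assumption: one of your cluster nodes
    BUCKET = "default"                       # assumption: your bucket name
    AUTH = ("Administrator", "password")     # assumption: admin credentials

    resp = requests.get(f"{HOST}/pools/default/buckets/{BUCKET}/stats", auth=AUTH)
    resp.raise_for_status()
    samples = resp.json()["op"]["samples"]

    # Latest sample of each stat; names per my reading of the 2.x docs.
    meta = samples["ep_meta_data_memory"][-1]                 # metadata in RAM (bytes)
    resident = samples["vb_active_resident_items_ratio"][-1]  # % of active docs in RAM

    print(f"metadata in RAM: {meta / 1024**2:.1f} MiB")
    print(f"active docs resident in RAM: {resident:.1f}%")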
It is possible to change the metadata warning threshold in the 2.5.1 release. There is a script you can use located at https://gist.github.com/fprimex/11368614. Or you can simply leverage the curl command from the script and plug in the right values for your cluster, as in the sketch below. As far as I know, this will not work prior to 2.5.1.
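The shape of that call is a single authenticated POST to the cluster's settings endpoint. Note that the path and parameter name below are placeholders from memory, not verified values; take the exact URL and field names from the gist before running anything:

    import requests  # pip install requests

    # Sketch of the REST call the gist script wraps. NOTE: the path and the
    # parameter below are PLACEHOLDERS -- copy the real ones out of
    # https://gist.github.com/fprimex/11368614 before using this.
    HOST = "http://localhost:8091"        # assumption: one of your cluster nodes
    AUTH = ("Administrator", "password")  # assumption: admin credentials

    resp = requests.post(
        f"{HOST}/settings/alerts/limits",  # placeholder path -- see the gist
        auth=AUTH,
        data={"maxOverheadPerc": 75},      # placeholder parameter and threshold
    )
    print(resp.status_code, resp.text)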
Please keep in mind that while these alerts (max overhead and max disk usage) are now tunable, they are there for a reason. Hitting either of these alerts (especially in production) at the default values is a major cause for concern and should be dealt with as soon as possible by increasing RAM and/or disk on every node, or by adding nodes. The values are tunable for special cases. Even in development/testing scenarios, your nodes' performance may be significantly impaired if you are hitting these alerts. For example, don't draw conclusions about benchmark results if your nodes' RAM is over 50% consumed by metadata.