[ACCEPTED] C++ stack memory and CPU cache

Accepted answer
Score: 10

How the stack is implemented is an implementation detail, not something the C++ Standard mandates: it varies from compiler to compiler, from platform to platform, etc...

Now, even though you could in theory use a split stack for C++, major implementations use a contiguous segment of memory (of varying size).

This contiguity and frequent reuse do indeed easily reap the benefits of caches; however, it is not a panacea either. Actually, you can also create artificial scenarios that cause cache bounces: if your L1 cache is small (32k?) and has 2-way associativity, then you can easily craft a scenario that requires accessing the L2 cache. Just use a 64k array on your stack (it's small enough not to blow it up), and then access data at offsets 0, 16k, 32k, and 48k repeatedly in a loop: it should trigger lots of evictions and require fetches from the L2 cache.
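A minimal sketch of that conflict-miss scenario, assuming a hypothetical 32 KiB, 2-way set-associative L1 with 64-byte lines (so addresses 16 KiB apart map to the same set); real cache geometry varies by CPU, so this may not reproduce the effect everywhere:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
    // Assumed L1 geometry: 32 KiB, 2-way set associative, 64-byte lines,
    // giving a 16 KiB "way size": addresses 16 KiB apart share a cache set.
    constexpr std::size_t kStride = 16 * 1024;

    // 64 KiB array on the stack: small enough not to blow up a typical
    // default stack (1-8 MiB), large enough to span 4 conflicting offsets.
    alignas(64) volatile std::uint8_t buffer[4 * kStride] = {};

    std::uint64_t sum = 0;
    for (int i = 0; i < 1'000'000; ++i) {
        // Four cache lines competing for only 2 ways in the same set:
        // repeated evictions force fetches from L2.
        sum += buffer[0 * kStride];
        sum += buffer[1 * kStride];
        sum += buffer[2 * kStride];
        sum += buffer[3 * kStride];
    }
    std::printf("%llu\n", static_cast<unsigned long long>(sum));
    return 0;
}
```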

So, it is not really that the stack itself is so cache-friendly, but rather that its usage is predictable and well-known. You could reap the same cache benefits with a custom-made allocator (though allocation would be slightly slower).
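For illustration, here is a minimal bump/arena allocator sketch (the `Arena` name and interface are made up for this example, not a library API); allocations land back-to-back in one buffer, so you get stack-like locality out of heap memory:

```cpp
#include <cstddef>
#include <memory>
#include <new>

// Minimal bump ("arena") allocator sketch: allocations are contiguous and
// the buffer is frequently reused, giving stack-like cache locality.
class Arena {
public:
    explicit Arena(std::size_t capacity)
        : buffer_(std::make_unique<std::byte[]>(capacity)),
          capacity_(capacity) {}

    void* allocate(std::size_t size,
                   std::size_t align = alignof(std::max_align_t)) {
        // Bump the offset up to the next aligned position.
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > capacity_) throw std::bad_alloc{};
        offset_ = aligned + size;
        return buffer_.get() + aligned;
    }

    // "Deallocation" is a wholesale reset, much like popping a stack frame.
    void reset() { offset_ = 0; }

private:
    std::unique_ptr<std::byte[]> buffer_;
    std::size_t capacity_;
    std::size_t offset_ = 0;
};
```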

On the other hand, there are other advantages and disadvantages to using the stack:

  • disadvantage: if you attempt to consume too much of it, you get a Stack Overflow.
  • disadvantage: if you overwrite an array on the stack, you might corrupt the stack itself, which is a debugging nightmare (it is also what so-called Stack Smashing attacks exploit).
  • advantage: C++ has specific patterns (RAII, SBRM) that take advantage of the behavior of the stack. Deterministic "undo" actions are a joy to program with (see the sketch after this list).
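A minimal RAII sketch of that deterministic "undo" behavior (`ScopedFile` is an illustrative name, not a standard class): the cleanup runs when the stack object goes out of scope, even if the scope is exited via an exception.

```cpp
#include <cstdio>

// The resource is acquired in the constructor and released in the
// destructor, so the "undo" is tied to the lifetime of a stack object.
class ScopedFile {
public:
    explicit ScopedFile(const char* path) : file_(std::fopen(path, "w")) {}
    ~ScopedFile() { if (file_) std::fclose(file_); }  // deterministic undo
    ScopedFile(const ScopedFile&) = delete;
    ScopedFile& operator=(const ScopedFile&) = delete;

    std::FILE* get() const { return file_; }

private:
    std::FILE* file_;
};

int main() {
    ScopedFile log("log.txt");                 // resource tied to a stack object
    if (log.get()) std::fputs("hello\n", log.get());
}   // file closed here, no matter how the scope is exited
```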

So in the end I would be wary of deciding between stack and heap solely based on potential cache behavior.
