memory management - C++ new allocates more space than expected


I am trying to test a C++ application's behaviour when its memory requirements are high, but it seems I cannot use all of the available RAM. I have the following program:

    class node {
        public:
            node *next;
    };

    int main() {
        int i = 0;

        node *first = new node();
        node *last = first;

        // should be 120000000 nodes * 8 bytes each -> approx 1 GB
        for (i = 0; i < 120000000; i++) {
            node *n = new node();
            n->next = 0;
            last->next = n;
            last = last->next;
        }

        for (i = 0; i < 120000000; i++) {
            node *oldfirst = first;
            first = first->next;
            delete oldfirst;
        }

        delete first;

        return 0;
    }

It is supposed to allocate approximately 1 GB of data, because a node object occupies 8 bytes. I have verified that via sizeof, gdb, and valgrind.
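(For reference, a minimal sizeof check, assuming a typical 64-bit system where a pointer is 8 bytes:)

    #include <iostream>

    class node {
        public:
            node *next;
    };

    int main() {
        std::cout << sizeof(node) << '\n';  // prints 8 on a typical 64-bit system
        return 0;
    }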

But this program allocates 4 GB of data! And if I double the size (120000000 -> 240000000), there are two possible outcomes (my laptop has 8 GB of RAM installed):

  • If I have turned off the swap area, the process is killed by the kernel.
  • If not, paging takes place and the OS becomes extremely slow.

The point is that I cannot test an application that allocates 2 GB of data, because it ends up consuming 8 GB of RAM!

I thought that maybe the number of bytes allocated when I ask for a new node is more than 8 (the size of a node object), so I tried the following:

    class node {
        public:
            node *next;
            node *second_next;
    };

    int main() {
        int i = 0;

        node *first = new node();
        node *last = first;

        // now 120000000 nodes * 16 bytes each -> approx 2 GB
        for (i = 0; i < 120000000; i++) {
            node *n = new node();
            n->next = 0;
            last->next = n;
            last = last->next;
        }

        for (i = 0; i < 120000000; i++) {
            node *oldfirst = first;
            first = first->next;
            delete oldfirst;
        }

        delete first;

        return 0;
    }

Now a node object occupies 16 bytes, yet the memory footprint of the application is the same! 120000000 nodes still result in 4 GB of RAM used, and 240000000 still get the app killed by the Linux kernel.

So I came across this post:

Is it true that every new in C++ allocates at least 32 bytes?

Short answer - you forgot to factor in the memory allocation overhead. The memory allocator needs to keep track of the allocated blocks of memory, which itself consumes memory, and if you're allocating a lot of small blocks that overhead gets unreasonably large compared to the amount of memory actually requested. There is also block alignment to think of: many allocators try to be smart and align blocks of memory for optimal CPU access speed, for example so that they're aligned to cache lines.
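A quick way to see this is to print the spacing between consecutive small allocations (a rough sketch; the numbers are allocator-specific, but with glibc malloc on x86-64 small chunks are commonly handed out 32 bytes apart, which would match 120000000 * 32 bytes ≈ 3.8 GB):

    #include <iostream>
    #include <cstdint>

    class node {
        public:
            node *next;
    };

    int main() {
        node *a = new node();
        node *b = new node();
        node *c = new node();
        // The distance between consecutive small allocations shows the real
        // per-node cost; with glibc malloc on x86-64 this is commonly 32, not 8.
        std::cout << (std::uintptr_t)b - (std::uintptr_t)a << " "
                  << (std::uintptr_t)c - (std::uintptr_t)b << "\n";
        delete a; delete b; delete c;
        return 0;
    }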

Last but not least, a successful request that gives you 8 bytes of memory may well have allocated a larger chunk behind the scenes. After all, asking malloc/new for a specific amount of memory only guarantees you'll get a chunk of at least that size, not exactly that size.
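On glibc you can ask the allocator directly how much it really reserved for a request (a sketch using malloc_usable_size, a glibc extension declared in <malloc.h>; other platforms have different equivalents):

    #include <cstdio>
    #include <cstdlib>
    #include <malloc.h>   // glibc-specific: malloc_usable_size

    int main() {
        void *p = std::malloc(8);                      // ask for 8 bytes...
        std::printf("%zu\n", malloc_usable_size(p));   // ...typically 24 usable bytes on glibc/64-bit
        std::free(p);
        return 0;
    }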

For your use case of allocating lots and lots of small chunks, you'd want a pool allocator that minimizes the per-allocation overhead.
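One minimal sketch of that idea (a hypothetical node_pool class, not a production allocator): hand out nodes from large slabs, so the bookkeeping and alignment overhead is paid once per slab rather than once per node.

    #include <vector>
    #include <cstddef>

    class node {
        public:
            node *next;
    };

    // Very simple pool: hands out nodes from big slabs, frees everything at once.
    class node_pool {
        public:
            node *allocate() {
                if (used_ == kBlockSize) {              // current slab exhausted?
                    blocks_.push_back(new node[kBlockSize]);
                    used_ = 0;
                }
                return &blocks_.back()[used_++];
            }
            ~node_pool() {
                for (node *b : blocks_) delete[] b;     // release whole slabs, not single nodes
            }
        private:
            static const std::size_t kBlockSize = 1 << 20;  // 1M nodes per slab (~8 MB)
            std::vector<node*> blocks_;
            std::size_t used_ = kBlockSize;             // forces a slab on the first allocate()
    };

    int main() {
        node_pool pool;
        node *first = pool.allocate();
        node *last = first;
        for (int i = 0; i < 120000000; i++) {
            node *n = pool.allocate();
            n->next = 0;
            last->next = n;
            last = last->next;
        }
        return 0;   // ~120000000 * 8 bytes plus one pointer per slab, close to the expected 1 GB
    }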

Actually, you should also consider whether there is a better data structure than a huge linked list made of lots of small nodes.
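For example (a sketch, assuming the elements only need to be chained in order): storing them in a std::vector and linking by 32-bit index avoids the per-node allocation overhead entirely and halves the size of the links.

    #include <vector>
    #include <cstdint>

    // One contiguous allocation instead of 120000000 tiny ones;
    // 32-bit indices instead of 64-bit pointers shrink each element to 4 bytes.
    struct node {
        std::uint32_t next;   // index of the next element, UINT32_MAX means "none"
    };

    int main() {
        const std::uint32_t count = 120000000;
        std::vector<node> nodes;
        nodes.reserve(count);                  // single up-front allocation
        for (std::uint32_t i = 0; i < count; i++) {
            nodes.push_back(node{i + 1});      // link each element to the following one
        }
        nodes.back().next = UINT32_MAX;        // terminate the chain
        return 0;                              // ~120000000 * 4 bytes, well under 1 GB
    }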

