c++ - Are there benefits to allocating large data contiguously?


In my program I have the following arrays of double: a1, a2, ..., am; b1, b2, ..., cm; c1, c2, ..., cm. They are members of a class, each of length n, where m and n are only known at run time. The reason I named them a, b, and c is that they mean different things, and that is how they are accessed outside the class. I'm wondering what the best way is to allocate memory for them. I was thinking:

1) Allocating everything in one big chunk, something like double *all = new double[3*n*m], and having a member function return a pointer to the requested part using pointer arithmetic (a rough sketch of this option is shown after the question).

2) Creating 2D arrays a, b, and c, each of size m*n.

3) Using std::vector? Since m is only known at run time, I would need a vector of vectors.

Or does it not matter which one I use? I'm mostly wondering what the general practice is.
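A minimal sketch of option 1, assuming the class stores m and n and hands out sub-arrays through member functions; the class and member names here are made up for illustration (the answer below suggests replacing the raw new with a vector):

    #include <cstddef>

    // Sketch of option 1: one contiguous block, with member functions handing
    // out the requested sub-array via pointer arithmetic.
    class Blocks {
    public:
        Blocks(std::size_t m, std::size_t n)
            : m_(m), n_(n), all_(new double[3 * m * n]()) {}
        ~Blocks() { delete[] all_; }

        // a occupies rows [0, m), b rows [m, 2m), c rows [2m, 3m),
        // each row being n doubles long.
        double* a(std::size_t i) { return all_ + (0 * m_ + i) * n_; }
        double* b(std::size_t i) { return all_ + (1 * m_ + i) * n_; }
        double* c(std::size_t i) { return all_ + (2 * m_ + i) * n_; }

    private:
        Blocks(const Blocks&);             // copying disabled: raw owning pointer
        Blocks& operator=(const Blocks&);

        std::size_t m_, n_;
        double* all_;
    };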

This depends on how the data is used. If each array is used independently, then the straightforward approach is a number of named vectors of vectors.

If the arrays are used together, for example if a[i] and b[i] are related and accessed together, then separate arrays are not a good approach, because you'll keep accessing different areas of memory, potentially causing a lot of cache misses. Instead you would want to aggregate the elements of a and b into a struct or class and have a single vector of those aggregates.
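A minimal sketch of that idea, assuming only a and b are accessed together; the struct and function names are hypothetical:

    #include <cstddef>
    #include <vector>

    // If a[i] and b[i] are always used together, keeping them side by side
    // means a single cache line brings in both values.
    struct AB {
        double a;
        double b;
    };

    // Example use: sum of a[i]*b[i] over a single vector of aggregates.
    double sum_of_products(const std::vector<AB>& ab) {
        double s = 0.0;
        for (std::size_t i = 0; i < ab.size(); ++i)
            s += ab[i].a * ab[i].b;   // both operands come from the same element
        return s;
    }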

I don't see a big problem with allocating one big array and providing an appropriate interface to access the correct sets of elements. But please don't use new to manage the memory: use a vector in this case: std::vector<double> all(3*n*m); I'm not sure what it buys you, though; either of the other options may express the intention more clearly.
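A minimal sketch of that suggestion, with the vector owning the memory and hypothetical accessors doing the index arithmetic:

    #include <cstddef>
    #include <vector>

    // Single-allocation approach using std::vector instead of new.
    // Class and member names are illustrative only.
    class Arrays {
    public:
        Arrays(std::size_t m, std::size_t n)
            : m_(m), n_(n), all_(3 * m * n, 0.0) {}

        // Element (i, j) of each logical group; the vector handles cleanup.
        double& a(std::size_t i, std::size_t j) { return all_[(0 * m_ + i) * n_ + j]; }
        double& b(std::size_t i, std::size_t j) { return all_[(1 * m_ + i) * n_ + j]; }
        double& c(std::size_t i, std::size_t j) { return all_[(2 * m_ + i) * n_ + j]; }

    private:
        std::size_t m_, n_;
        std::vector<double> all_;
    };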

