Backstory: My supervisor tends to capture the complex relationships that drive environmental models in .NET DataTable instances. With the right GUI toolkit, it’s (now) trivial to stick the DataTable into a GUI somewhere and allow a calibrator to tweak parameters until the model starts acting like the real world.

However, DataTables limit us to describing only two-dimensional relationships. My supervisor has semi-seriously threatened that we might need to store relationships of arbitrary dimensionality. We’d need some kind of “generic” structure that could store and retrieve the data, but that structure should not be limited to any particular number of dimensions. What he’s asked of me isn’t as unwholesome as the title of this blog post insinuates, but the man is forcing me to use grey matter, damn it!

In chewing over just how exotic this could all get, I was pointed at the R project. Our discussion very much reminded my supervisor of things he’d seen done in R. So, I first pored over some user documentation and reverse-engineered a class diagram of R’s data structures (don’t hold me to perfect accuracy; disambiguation of natural language can be tricky at the best of times, and I wasn’t looking to supply a definitive model):

This exercise really delivered for me. Arrays of any dimension within R seem to be defined as two vectors: one containing the data, another containing dimension information (a sequence of positive integers). With only these “data” and “dimension” vectors, there is enough information to make the “data” vector act like an array of as many dimensions as needed. What was missing in my understanding was how they mapped from array coordinates (an n-tuple, where n is the number of dimensions) to a single coordinate identifying the equivalent cell in the “data” vector.
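To make that two-vector idea concrete, here’s a minimal sketch in Python (the class and names are my own invention, not R’s actual internals): a flat data vector plus a dimension vector, with a check that the two agree in size.

```python
import math

class NDArray:
    """An n-dimensional array stored as two vectors:
    a flat data vector and a vector of dimension sizes."""

    def __init__(self, data, dims):
        if any(d <= 0 for d in dims):
            raise ValueError("each dimension must be a positive integer")
        data = list(data)
        if len(data) != math.prod(dims):
            raise ValueError("data length must equal the product of the dimensions")
        self.data = data
        self.dims = list(dims)

# A 2x3x4 array holds 2*3*4 = 24 values, whatever its shape.
arr = NDArray(range(24), [2, 3, 4])
```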

A little more formally, if I have an n-dimensional array *A_{n}*, I can also have a value-for-value equivalent vector *V*, so when I reference a “value” in *A_{n}*, via say *a(c_{1}, …, c_{n})*, I can also reference the same value in *V*, via say *v(c_{v})*. I essentially need a mapping function, taking the array coordinates (*c_{1}, …, c_{n}*) and dimension details (*d_{1}, …, d_{n}*) as input, and returning the vector coordinate (*c_{v}*) as output.

Getting the easy stuff out of the way first, I needed a vector the same size as an array of *n* dimensions. The size of each dimension matters, which I’ll express as *d_{1}, …, d_{n}*. Each dimension size is drawn from the set of positive integers (each dimension must be an integer > 0). A “data” vector holding the same amount of information as the array is obtained by simply multiplying the dimension sizes together (apologies to the maths nerds in advance, I took a whirlwind refresher course on math constructs this morning after a hiatus of at least 10 years, it may not be entirely “legit”):
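In symbols, my reconstruction of that product (the original rendered formula hasn’t survived here) is:

```latex
|V| = \prod_{i=1}^{n} d_i, \qquad d_i \in \mathbb{Z}^{+}
```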

Next, I played with 2-dimensional arrays until I had a formula that would work in converting coordinates *a(c_{1}, c_{2})* into *c_{v}*. Turns out it looks like this (so long as I index any given coordinate from 0, which is fine given how programming languages typically allow you to index their abstract data types):
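My reconstruction of that 2-dimensional formula, with both coordinates indexed from 0:

```latex
c_v = c_1 + c_2 \cdot d_1
```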

Makes sense. I can visualise the array flattening out into a vector as a string of sausages. Initially the sausages are laid out such that each row of the array is one sausage. Each sausage must be as long as *d_{1}*, and there must be *d_{2}* sausages. Extra dimensions got much harder to visualise, so I just spent time punching numbers around until I ended up with a formula that works in the general case:
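My reconstruction of the general formula, which agrees term-by-term with the pseudo-code below (each coordinate indexed from 0):

```latex
c_v = c_1 + \sum_{i=2}^{n} c_i \prod_{j=1}^{i-1} d_j
```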

_{2}We can re-express that formula in some psuedo-code:

vectorCoordinate = 0
dimensionOffset = 1
for index = 0 to (dimensions.Count - 1)
    if index = 0 then
        vectorCoordinate = arrayCoordinate(index)
    else
        dimensionOffset = dimensionOffset * dimensions(index - 1)
        vectorCoordinate = vectorCoordinate + arrayCoordinate(index) * dimensionOffset
    end if
end for
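Here is a runnable translation of that pseudo-code, sketched in Python rather than .NET (the function name is mine):

```python
def to_vector_coordinate(array_coordinate, dimensions):
    """Map 0-indexed n-dimensional array coordinates to a flat vector index.

    array_coordinate: the n-tuple (c1, ..., cn)
    dimensions: the dimension sizes (d1, ..., dn)
    """
    vector_coordinate = 0
    dimension_offset = 1
    for index in range(len(dimensions)):
        if index == 0:
            vector_coordinate = array_coordinate[index]
        else:
            # Each new dimension strides over the product of all earlier ones.
            dimension_offset *= dimensions[index - 1]
            vector_coordinate += array_coordinate[index] * dimension_offset
    return vector_coordinate

# In a 3x4 array, cell (2, 3) lands at 2 + 3*3 = 11 in the flat vector.
print(to_vector_coordinate([2, 3], [3, 4]))  # → 11
```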

And done! I now get how I can make two vectors act as if they were really an n-dimensional array. You know, I bet this is already all well documented somewhere. I wonder if Knuth has already covered this ground?

One final thing. The formulas were made pretty via WordPress.com’s native support for LaTeX.
