Last week, the magic smoke escaped from the 3D graphics card of my trusty desktop machine. Last night, the magic came back with a new card. Notes, I thought to myself. I need notes, so it’s dead easy to do again.
Because… let’s face it, Linds… you LIKE messing around with the software too much to let something as trivial as the threat of Operating System permadeath stop you. And you like getting it up again fast, because OS reinstalls ain’t where the action is.
So the following is for you future Linds, and possibly you, reader with the google-fu fingers, and a good working knowledge of Linux systems…
I wrote an algorithm.
It was fast.
It was wrong.
Replicant tears did fall.
Fast… a couple of minutes versus the hours needed by its R cousin.
Worked… Final results showed an optimised selection of sites and management actions remarkably similar to the test-results that came with the R script.
Wrong… Optimised, but with species penalties reporting a value of 490, where the R script reported zero species penalty. 490… the smallest it would go. 490… a bizarre artefact in something that otherwise seemed to have worked as intended.
Tears… in the rain. Lost in time… all those moments…
Buy a unicorn, stingemeister!
R is a language for statistical computing. For the past few months, I’ve been tasked with porting a prototype modelling tool from R to a .NET framework, built for a previous project here, that does something similar. Though I’ve skirted around the edges of R on occasion across my succession of contracts, this project is the first time I’ve had to really go deep.
There was pain, and its primary source was my assumption that R was ‘just another language’, similar enough to the set of languages I’m familiar with that I’d be fine.
It’s not, and I wasn’t.
If you’re experienced with other programming languages and are coming to R for the first time, here be dragons. If you’re considering going to another language from R, for the love of efficient computation, leave your dragons there with R.
What follows is a number of insights I’ve gained in contrasting R against the more ‘traditional’ languages out there. They’re essentially breaches of the rule of least surprise, where ‘no surprise’ is a baseline of ‘I can expect this assumption to hold regardless of programming language’.
Well, what a ride this past year’s been, on several fronts. I’ve been away from the blog for nearly six months, by the looks. Mostly, that’s because a) I managed to lock myself out of my WordPress account, then b) was too damn distracted with everything else to deal with “this too”. So, now that the amazing staff at WordPress.com have gotten me back into my own account, let’s try for a catch-up summary, shall we?
I’ve heard it said that forgiveness is the ultimate act of self-compassion. That it is, in essence, a gift the forgiving one grants themselves in order to release the negativity they hold for their transgressors.
Forgiveness has not been a strong suit of mine. I can say it out loud “Transgressors, I forgive you!”, but… I don’t need a holy-man to tell me that the unshifted resentment towards the other, despite my proclamation to the contrary, is my ultimate proof of failure.
A seductive proclamation too, because I can fool the outer world with such a statement. It’s even possible to fool myself, given enough disconnection between my emotional response and my merely ‘head-felt’ delivery.
But what if releasing that negativity is actually really important to me? How can I achieve a heart-felt forgiveness when my head is clearly unwilling to release such a tasty coulda-shoulda-woulda chew-bone?
Alright. This is the very last time I figure out how to get gource producing visualisations of my development activities from scratch. Today’s post is me leaving notes on this for next time, so I can cut straight to the chase.
A couple of times now, I’ve had occasion to want to give the people I write software for some insight into what I’ve been doing. The project I last completed was one such example. A lot of work was done under the hood to enable the code-base to have a replaceable user-interface, and possibly also a replaceable spatial database. The user-interface gained a few new features, but the lion’s share of it ‘looks’ exactly as it did when I started.
I can guarantee you, though, that the source looks absolutely nothing like how it started out. Most of my time was spent taking a code-base “designed” to be a single-user desktop application and turning it into something that would be relatively easy to make multi-user (it is, now) and, optionally, web-enabled (one particular large data-set stops this from progressing just yet).
How, then, do I convince the people paying me to tear up their code-base that, even though the user-interface is the same, things are now radically different under the hood? Enter gource, and some discussion of how it visualises the git repository as the source-code evolves through the project.
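Since the whole point is to never figure this out from scratch again, the pipeline is worth wrapping in a small script. A minimal sketch, assuming gource and ffmpeg are on the PATH; the flags come from gource’s own documentation, and the specific values (resolution, seconds per day, framerate) are illustrative placeholders rather than my exact settings:

```shell
# Write a small wrapper script that renders a git repository's
# history to video via gource piped into ffmpeg.
cat > make-gource.sh <<'EOF'
#!/bin/sh
# Usage: ./make-gource.sh /path/to/repo output.mp4
REPO="${1:-.}"
OUT="${2:-gource.mp4}"

# gource emits raw PPM frames on stdout; ffmpeg encodes them to H.264.
gource "$REPO" \
    -1280x720 \
    --seconds-per-day 0.5 \
    --auto-skip-seconds 1 \
    --stop-at-end \
    --output-framerate 30 \
    --output-ppm-stream - |
ffmpeg -y -r 30 -f image2pipe -vcodec ppm -i - \
    -vcodec libx264 -preset fast -pix_fmt yuv420p "$OUT"
EOF
chmod +x make-gource.sh
```

The `--output-ppm-stream -` / `image2pipe` hand-off is the standard way to get gource output into a video file, since gource itself only renders to screen or to a raw frame stream.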
Forgive me for I have softwared.
I’m on the tail-end, or possibly tale-end, of a project that was pretty rough as such things go. Not the toughest gig I’ve done, but no cake-walk either.
Anyone who’s professionally played in this space knows that Murphy’s law is drawn to tight-deadline software development projects like a wunch of salivating, button-eyed bankers racing to the reading of the last will and testament of Marley & Marley.