Tag Archives: Perl

Gradle for LaTeX PDF Generation

Last night, I was Minecrafting with my daughter, and one of “those” thoughts smacked me between the eyes.

“I wonder if anyone has ever tried cooking a LaTeX document with gradle?” bubbled up from some nether-region that is obviously still obsessing over my recent gradle play.

A Cribsheet for Perl Profiling

A very brief post, capturing a crib-sheet of sorts on profiling with Perl for when I need it next. First, I’m using NYTProf, and huge props to the developers! Free Perl profiling is an absolute delight.

Second, all the info below can be drawn from that NYTProf link, but there’s a lot of verbiage. So, here’s the absolute bare-bones essence of getting Perl profiling going with the module:

  1. Have NYTProf installed so Perl’s @INC array finds it (ActiveState Perl makes it dead easy with their package manager).
  2. Add a line to the opening preamble of the bootstrap perl file
    use Devel::NYTProf;
  3. Run the perl file from the command-line thusly:
    # perl -d:NYTProf bootstrap.pl
  4. Once finished, there will be a file called ‘nytprof.out’ in the same directory, which builds as the script runs. Use the following executable supplied with the module, to convert the raw output file to a richly-endowed set of web pages containing human-grokable profiler gold:
    # nytprofhtml
  5. This creates a ‘nytprof’ subdirectory in the same directory. Find the ‘index.html’ file inside it, and open it with your favourite web browser.
  6. Profit:
Sample NYTProf Results Screen


And here are some froody ideas around Perl efficiency once a bottleneck has been isolated.

On a side note: with IDE support including breakpoint debugging, and now profiling, Perl is punching through the “just for glue” ceiling into “TISM” territory.

Think I’ll rename myself Greg and go hooning.

Torchlight 2 Toon Archiving: The Sequel

The kids left me alone last night, so I decided I’d goof around with my Torchlight 2 character archiving.  Recent reading around Git has convinced me that Git delta-compresses binary files when packing its object store, so my last objection to using Git over Subversion for binary files has just been laid to rest.

Also, the original Perl script was a bit brain-dead in that it was good at automatically committing save files that had changed, but additions and deletions were still things I’d have to manually tell the repository about.

As these things tend to go, the lion’s share of the script was written in maybe the first 15 minutes.  The rest of the night was spent tweaking and testing the command template constants, destroying and re-creating Git archives until I had myself convinced that the script was automatically committing exact replicas of the save-game directory, regardless of additions, deletions and modifications.

The script turned out to be pretty short, and is included at the end of this blog post in all its perlesque glory.  Now that I have the trick of it though, the pattern is pretty much applicable to any save-game directory I might want to subject to version control.  The human-readable formula is basically:

  1. Delete all contents of the directory within your version control repository that contains the copied image of your save-game directory.
  2. Fully (recursively) copy the contents of that save-game directory into the recently emptied archive directory.
  3. Ask the repository to identify all files that have been deleted from the copied file set that it is currently archiving. If there are any deleted files, stage these files to also be deleted in the repository on the next commit.
  4. Ask the repository to identify all new files that it is currently not archiving. If there are any new files, stage these files to be added into the repository on the next commit.
  5. Have the repository apply, in a single commit, all deletions, additions and modifications identified.
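The five steps above boil down to a short shell sequence. Here’s a minimal sketch using throwaway temporary directories in place of my real save-game and archive paths (all file and directory names here are this example’s assumptions, not my actual setup):

```shell
set -e

# Stand-ins for the real save-game directory and Git archive.
SAVE_DIR=$(mktemp -d)
REPO_DIR=$(mktemp -d)

git init --quiet "$REPO_DIR"
git -C "$REPO_DIR" config user.email "archiver@example.com"   # example identity
git -C "$REPO_DIR" config user.name  "Toon Archiver"
mkdir "$REPO_DIR/toons"

echo "hero v1" > "$SAVE_DIR/hero.sav"   # simulate a save file appearing

cd "$REPO_DIR"
rm -rf toons && mkdir toons                                # 1. empty the archive copy
cp -r "$SAVE_DIR/." toons/                                 # 2. recursively re-copy the saves
git ls-files --deleted -- toons | xargs -r git rm --quiet  # 3. stage deletions
git add --all toons                                        # 4. stage additions/modifications
git commit --quiet -m "Snapshot of current save state"     # 5. one commit for everything
```

The `xargs -r` flag stops `git rm` being invoked at all when nothing has been deleted, which is what lets the same command sequence run unmodified regardless of what happened to the save files.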

# An archive utility for Torchlight 2 characters. Works by committing the current
# save-game contents to a Git repository housed in the parent directory to the script.
# (c) 2013, Lindsay Bradford, released under the Creative Commons Attribution licence.
# http://creativecommons.org/licenses/by/3.0/
# The parent directory has two subdirectories, being "bin", and "toons".
#  The "bin" directory contains this script.
#  The "toons" directory contains the save-game content of the game.
# Usage:
#    archiveTLToons.pl <"optional commit message string">
# Modify the constants below to suit your own environment

package ArchiveTL2Toons;

use strict;

###### constants for directory locations and command templates below #####

use constant GAME_SAVE_DIR =>
	"/home/linds/.wine/drive_c/users/linds/My Documents/My Games/Runic Games/Torchlight 2/save/76561198044661040/.";

# Matches the directory layout described above: the archive copy lives in the
# "toons" directory, alongside the "bin" directory containing this script.
use constant ARCHIVE_TOON_DIR =>
	"../toons";

use constant CLEAR_ARCHIVE_COMMAND =>
	sprintf "rm -rf '%s'", ARCHIVE_TOON_DIR;

use constant COPY_COMMAND =>
	sprintf "cp -r '%s' '%s'", GAME_SAVE_DIR, ARCHIVE_TOON_DIR;

use constant STAGE_DELETIONS_COMMAND =>
	sprintf "git ls-files --deleted \"%s\" | xargs -r git rm --quiet", ARCHIVE_TOON_DIR;

use constant STAGE_ADDITIONS_COMMAND =>
	sprintf "git add --all \"%s\"", ARCHIVE_TOON_DIR;

use constant COMMIT_ARCHIVE_COMMAND => "git commit --quiet -m \"%s\"";

##### Methods below #####

# Archives the current set of Torchlight 2 toons by
# deleting the archive contents, taking a recursive
# copy of the save directory back into the directory
# and committing a snapshot of the copied content.

sub archiveTL2toons() {
  my $commandLineComment = $_[0];

  if ($commandLineComment eq "") {
    $commandLineComment = "Commit of current save state.";
  }

  &runCommand(
    "Clearing archive content...",
    CLEAR_ARCHIVE_COMMAND
  );

  &runCommand(
    "Copying Torchlight 2 save game content to archive...",
    COPY_COMMAND
  );

  &snapshotArchive($commandLineComment);
}

# Commits a snapshot of the current content
# of the archive, assuming that the current content
# is exactly what the commit should contain.  Specifically:
#   * Any files missing from the archive are deleted in the commit
#   * Any new files found are automatically added with the commit
#   * All modified files are committed as-is.

sub snapshotArchive() {
  my $commandLineComment = $_[0];

  &runCommand(
    "Staging removal of missing files from archive...",
    STAGE_DELETIONS_COMMAND
  );

  &runCommand(
    "Staging addition of untracked new files to archive...",
    STAGE_ADDITIONS_COMMAND
  );

  my $message = &getNowTimestamp . " | " . $commandLineComment;
  my $commitCommand = sprintf COMMIT_ARCHIVE_COMMAND, $message;

  &runCommand(
    "Committing staged snapshot of save-directory to archive...",
    $commitCommand
  );
}

# Generates a timestamp of the current time.

sub getNowTimestamp() {
  my ($sec, $min, $hr, $day, $mon, $year) = localtime;
  return sprintf("%04d-%02d-%02d %02d:%02d",
        1900 + $year, $mon + 1, $day, $hr, $min);
}

# Simple method that prints the $message supplied,
# runs the $command specified, and prints any results
# the command generates.

sub runCommand() {
  my ($message, $command) = @_;

  print "$message\n";

  my $result = `$command`;
  print $result;
}

# Bootstrap: archive, using any commit message supplied on the command line.

&archiveTL2toons(defined $ARGV[0] ? $ARGV[0] : "");
On a final note, I installed EPIC for Eclipse to modify the script, so despite my intense dislike for the lack of automated refactoring, I’m begrudgingly having to admit that it’s working better for me than doing it in a text editor.

An EPIC Perl in Eclipse?

Late last week, the boss sat me down with a new task that boils down to reusing part of a previous solution which I wrote in Perl.  Both utilities rely on library support for reading in DBF files.  Perl’s XBase library is watertight, which is more than I can say about other free libraries for more “serious” languages.

Given that XBase just lets me get the job done, there’s been no real desire here to revisit the Perl decision, despite the fact that the previous build exposed a drawback Perl has in dealing with large data sets in-memory. Specifically, dynamically allocated arrays grow geometrically in pre-allocated memory whenever the array gets close to full.  That doesn’t matter for small data sets, but I’m occasionally into data sets climbing into the gigabyte range.  That’s a huge amount of potentially wasted memory and unnecessary page swapping to disk.

Anyway, the reuse potential with the new job involves extracting a common library for exporting data to a 3rd-party tool. Now, the first time I just whipped the Perl solution up in VIM, and deemed it good enough.  This time, as I’m essentially doing the same thing over for file exports, the extraction of a shared library could be made easier with a rich IDE environment.

I took a brief look around, and found EPIC, touting itself as a Perl development plugin for Eclipse (which, given its lack of any financial cost, and ability to run cross-platform, keeps ending up as a favoured IDE).

EPIC didn’t take too long to download and install.  Configuration boiled down to little more than telling EPIC where the perl.exe file resides. From the screenshot below, you should be able to see a few of the more obvious features such as syntax highlighting (which is about where VIM stops), outlines of methods & variables, easy navigation of the project space, etc.

EPIC editing a Perl document


The EPIC development environment is a lot more powerful than VIM, so in that sense, I’ve had a win.  However, there’s some buyer’s remorse happening here. There are two big-ticket items that are really baking my noodle.

First, syntax checking seems to only really work properly once a file has been saved.  Things get odd between saves, with phantom problems being reported in code that is in fact syntactically correct.  What this means is that I’ve developed a subtle, instinctive distrust of the issues being reported to me unless I’ve just saved a change. I’m finding the issues reported between saves an unwanted distraction, because I can’t trust them to actually be issues.

Second, and also my greatest disappointment, lies in the near non-existent support for automated refactoring.  EPIC supports a grand total of a single refactoring technique (extract subroutine, to be precise).  Not even renaming of methods and variables is available, and those tend to be bread-and-butter techniques for me when I’m engaged in refactorings like extracting a generalised library.

And that leads me to an important insight (one I’ve had before but forgotten).  A killer-app feature of IDEs, one that sees me enthusiastically embrace an IDE over the more scary-end text editors like VIM, is support for automated refactoring.

It really gets to me that a task that is completely automatable isn’t made so for a programmer, when our very jobs are all about automation. As a consequence, I’m pretty bloody harsh about it missing from modern IDEs.

It’s entirely possible that there’s just not the scope for achieving a rich set of refactorings, given whatever limited chatter EPIC might be getting back from the Perl compiler.  I get that, and I find myself already cutting EPIC some slack if a black-box compiler makes it difficult to get back anything but a cursory guess at code structure.


Deeper down, I’m getting constant violations of the rule of least surprise, and no matter how thoroughly I reason it out, I’m not pleased.

Engineering Archivable Torchlight 2 Characters

[Update: It’s been less than 24 hours since this post, and Runic have released patch 1.16. The patch allows me to play VonMalefic again (joy). Looks like it was more the game choking on a legitimate save file than the file corruption I was afraid of.]

I’ve been in a painful place since the weekend, when patch 1.14 of Torchlight 2 caused a night of game crashes with my main character. Patch 1.15 came down the wire the next morning and bam, I can no longer even load the character without the game crashing.

Seems that this is a known issue on the forums (and here too). The toon has a full set of Mondon armour and Twitch (a legendary greatsword), a kit I spent quite a bit of trading time to eventually complete.

Here is the toon VonMalefic, in a Steam screenshot I took just after the last piece of Mondon kit was added:

VonMalefic in better days


Now, I’m not pleased about game save-file corruption at any time, but at the peak of my obsession with a new computer game, it’s doubly painful.  It’s a software issue, and what does any self-respecting programmer do when they run into a software issue?  They get all Bob-the-Builder on it!

Now, Steam is a great environment for ensuring off-site backups, but if a file gets corrupted, now I have an off-site backup of a corrupt file.  What I need is an archive of toon history that I can roll back to a “known-good point”.  What’s the difference between a backup and an archive?  Well, you see…. ahh, screw it: here’s something that nails the difference between backups and archives better than I care to in this post.

My threadbare archiving solution?  I set up a Subversion repository on the machine, and wrote a very basic Perl script to automatically copy save-game files into the repository and take a snapshot.  My heart belongs to Git nowadays for file repository management, but last I checked it was still crap at binary files, so it’s Subversion all the way.
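For reference, standing up the repository and working copy is a one-off pair of commands. The sketch below uses a throwaway temporary directory standing in for my real archive path (the names are this example’s assumptions):

```shell
set -e

# Throwaway location standing in for something like ~/TL2ToonArchive.
BASE=$(mktemp -d)

# Create the repository, then check out a working copy beside it.
svnadmin create "$BASE/Repository"
svn checkout --quiet "file://$BASE/Repository" "$BASE/WorkingCopy"

svn info "$BASE/WorkingCopy"
```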

And finally, that threadbare script (I’m emulating the game using Wine under Ubuntu, which is why the directories look a little odd to Windows users):


package ArchiveTL2Toons;

use strict;

$ArchiveTLToons::SaveFileDir = "/home/linds/.wine/drive_c/users/linds/My Documents/My Games/Runic Games/Torchlight 2/save";
$ArchiveTLToons::ArchiveDir = "/home/linds/TL2ToonArchive/WorkingCopy";

# Recursively copies the save-game directory into the Subversion working copy.
sub copySaveFilesToArchive() {
  `cp -r \"$ArchiveTLToons::SaveFileDir\" $ArchiveTLToons::ArchiveDir`;
}

# Generates a timestamp of the current time.
sub getNowTimestamp() {
  my ($sec, $min, $hr, $day, $mon, $year) = localtime;
  return sprintf("%04d-%02d-%02d %02d:%02d",
        1900 + $year, $mon + 1, $day, $hr, $min);
}

# Commits the working copy, stamping the commit message with the current time.
sub snapshotArchive() {
  my $timestamp = &getNowTimestamp();
  my $cmd = "svn ci $ArchiveTLToons::ArchiveDir -m \"Commit of changes @ $timestamp\"";
  `$cmd`;
}

# Bootstrap: copy the save files across, then snapshot them.
sub archiveTL2toons() {
  &copySaveFilesToArchive();
  &snapshotArchive();
}

&archiveTL2toons();

The script currently only handles character file changes easily. Any time I add or delete characters, I currently need to manually tell Subversion about the additions and deletions. Still, right now it gets me a very basic archiving solution, allowing me to recover from save-file corruption in the future.
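Until the script learns to do that for me, the manual bookkeeping looks something like this. Here’s a sketch against a throwaway repository (the file names and paths are this example’s assumptions): Subversion flags untracked files with ‘?’ and missing-on-disk files with ‘!’ in `svn status`, which gives us something to feed `svn add` and `svn rm`.

```shell
set -e

# Throwaway repository and working copy.
BASE=$(mktemp -d)
svnadmin create "$BASE/repo"
svn checkout --quiet "file://$BASE/repo" "$BASE/wc"
cd "$BASE/wc"

# Seed one versioned toon, then simulate a deletion and an addition on disk.
echo "old toon" > retired.sav
svn add --quiet retired.sav
svn commit --quiet -m "Initial toon"
rm retired.sav                  # deleted on disk; svn still tracks it
echo "new toon" > fresh.sav     # added on disk; svn knows nothing of it

# Tell Subversion about the additions ('?') and deletions ('!') by hand.
svn status | awk '$1 == "?" {print $2}' | xargs -r svn add --quiet
svn status | awk '$1 == "!" {print $2}' | xargs -r svn rm --quiet
svn commit --quiet -m "Sync additions and deletions"
```

After the final commit, `svn status` comes back clean: nothing left unversioned or missing.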

On a final note, I’m conflicted on what to do now. I’ve lodged a support ticket with Runic in the (remote) hope that the save-file is somehow salvageable. I’ve resigned myself to that path being unlikely, so I’ve re-rolled Von. The replacement toon (Dawnshammer) will be more thoroughly min-maxed than Von, who bears the scars of my n00bishness with the game.

Still, Von wasn’t too bad all things considered, and entirely fit for the end-game grind. I’d rather have Von back over re-investing all that development/trading time with Dawnshammer. Fingers crossed Runic deliver some magic on this one.

Using Perl to Test External Process STDOUT Piping

Last week, I was introduced to FreePascal and its recently released IDE Lazarus. I was made aware of it because I was sniffing around for something that would create native binaries of GUI applications, and that would be very quick to draw up in (ideally, as powerful as the Microsoft .NET GUI designer).

Before jumping into the issue that this blog post is about, I’m happy to report that the form designer in Lazarus is top-notch. I think I just found a new rapid-prototyping GUI tool that costs me no money, and minimal time.

Now… onto the pain and embarrassment. I convinced myself that I had an issue where FreePascal’s TProcess component (an adapter class through to external processes) was ignoring its poUsePipes property (the one that ensures piped output from the external process is made available to Pascal as soon as it is generated). The symptom was that no matter what I tried, my long-running external process’s output wasn’t hitting the GUI memo widget I’d set up until the process ended.

I built a small isolation test to confirm what I was seeing, and submitted a bug report. It was quickly marked as resolved with a comment that left me feeling very derpy indeed.

Of course! Did I ensure the STDOUT buffer was being flushed after every write? No, I did not! My problem turned out to be the external process, not FreePascal. Now, because I never want to forget this again, I’m leaving myself a note on Perl and flushing STDOUT when I know my utility’s output will be piped to other processes.

Turns out in Perl there’s a special variable ($|) devoted to autoflushing STDOUT whenever something is written to the stream. The toy metronome Perl script I wrote and submitted as part of my bug report is reproduced below; the key line is the one that sets $| so the STDOUT stream is flushed on every write.


package ToyMetronome;

use strict;

$| = 1;  # Autoflush, so processes piping in the STDOUT stream receive it unbuffered

# Parses command-line arguments, returning a hash reference of settings.
# Only "--ticks <count>" is recognised; the tick count defaults to 5.
sub processCommandLineArgs() {
  my (@arguments) = @_;

  my $returnArgs = {
    Ticks => 5
  };

  for (my $argIdx = 0; $argIdx < scalar @arguments; $argIdx++) {
    my $arg = lc($arguments[$argIdx]);

    if ($arg eq "--ticks") {
      if ($argIdx + 1 >= scalar @arguments) {
        print "Error: invalid number of ticks specified.\n";
        exit 1;
      } else {
        $returnArgs->{Ticks} = $arguments[$argIdx + 1];

        if ($returnArgs->{Ticks} !~ /^\d+$/) {
          print "Error: invalid number of ticks specified.\n";
          exit 1;
        }
      }
    }
  }

  return $returnArgs;
}

# Prints a tick a second until the requested number of ticks is reached.
sub ToyMetronome() {
  my $args = &processCommandLineArgs(@_);

  for (my $tick = 1; $tick <= $args->{Ticks}; $tick++) {
    printf "ToyMetronome: tick %d.\n", $tick;
    sleep 1;
  }
}

####### Application below ########

&ToyMetronome(@ARGV);


If I ever again become suspicious about external processes not piping their output to my utilities, I now have a Perl script I can use to quickly establish whether the external process is at fault, before embarrassing myself with spurious bug reports.

Deep-derp learning! It’s the new shiz!

Storing my Résumé Source with GitHub

Off the back of a recent hard-drive death, I was forced to reinstall LaTeX on a shiny new version of Ubuntu. Whilst doing it, I got interested in the chatter happening in Google+ around LaTeX. I saw a few posts discussing the idea of hosting collaborative efforts in research paper production via GitHub, which struck me as a very clever thing to do.

Since my PhD efforts in the early 2000s, I’ve religiously stored my LaTeX documents in a local Subversion repository, and more recently in Git repositories. Those LaTeX documents include my résumé. It struck me that I could just upload the résumé production scripts and source to GitHub. Then I’ll (theoretically) never have to worry about a hard-drive failure seriously threatening my résumé history again.
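Getting an existing local history onto GitHub is only a couple of commands. A sketch against a throwaway directory, where the remote URL and all file names are placeholders rather than my real repository:

```shell
set -e

# Throwaway stand-in for the résumé source directory.
RESUME_DIR=$(mktemp -d)
cd "$RESUME_DIR"
echo '\documentclass{article}' > resume.tex

git init --quiet
git config user.email "author@example.com"   # example identity
git config user.name  "Author"
git add resume.tex
git commit --quiet -m "Initial resume import"

# Point at GitHub; the remote URL is a placeholder.
git remote add origin git@github.com:example/resume.git
# git push -u origin master   # uncomment once the GitHub repository exists
```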

So there you have it. If you’re interested in seeing what a LaTeX PDF résumé production environment driven by simple Perl scripts looks like, here’s mine.