# Just a Vacuum Fluctuation

This blog was never intended to be of a personal nature, but looking through the content in the six months or so since its inception, it’s not much of anything else either. When I set everything up here, my life was fairly consistent, and plans for getting a heap of work done - blogging and a number of other side projects - were on the cards. Soon after, a metaphorical whirlwind came through and tore up pretty much every aspect of my life. Many things have lain dormant since, this blog being one of the first. The storm is passing (I think), so a resurrection of this place is in order. Before that happens, though, a few things have changed and new directions have been planned.

Octopress has served me well as an introduction to static blogging, but Ruby has never been on my list of favoured languages, nor is it one I plan on picking up. Haskell, on the other hand, has been playing a major role recently, so as of five minutes after this post is published I’ll be working on porting Axiomatic Semantics to Hakyll. A leaner, mobile-friendly site design using the latest web developments is also on the cards, to fix a number of bloated pieces of the current layout.

Until that’s all ready, this post will remain to let those of you who do come here once in a while know that you have not been abandoned.

# Workman Layout for Vim

I’ve recently switched keyboard layouts from Dvorak to Workman. Dvorak has been good to me over the past five years or so, but the philosophy behind it wasn’t actualised in its final design. Workman has been optimised for English and minimises finger strain, among other things. There’s no point rabbiting on about it here, as all of my praises and critiques are already well fleshed out on the Workman website.

It’s been two weeks or so since the switch, and I’m at the proficiency stage where I’m no longer yelling in frustration at my inability to find a letter; but if the switch from Qwerty to Dvorak is any indication, it’ll be a couple of months until I’m completely up to speed.

I had a fantastic Vim map for Dvorak suggested by Adam Davis, which kept the Qwerty h,j,k,l navigation keys in the same place, remapping the displaced Dvorak keys with minimal disruption. So obviously something similar was needed for Workman that didn’t disrupt my Vim flow.

Taking the laziest approach first, Google tells me there are only two current suggestions. First, Matt Weolk has taken the complete Qwerty-to-Workman remap approach, which is outlined in this gist and takes the idea from colqer, a Colemak solution to the same issue. I really don’t like the blind approach this method uses, as I remember Vim keybindings more by their associations (y = yank) than by muscle memory of their original Qwerty locations. The second solution is a simple j <-> t switch discussed in this thread. It’s closer to what I’m after, and I used it for a while; but ultimately the navigation keys being separated and not entirely on the home row doesn’t sit well with me.

So, here’s my attempt at a decent Workman remapping for Vim:

h,j,k,l are replaced by the Workman y,n,e,o home keys, with a few new associations:

• (Y)ank -> (H)aul
• Search (N)ext -> (J)ump
• (E)nd word -> brea(K) of word [yeah, that one’s a push…]
• (O)pen new line -> (L)ine

Considering I now use three different layouts depending on where I am, I’ve had to set up a layout remap function in my .vimrc. Here it is in its present state at the time of writing; check my dotfiles repository for updates though.
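The core of the remap looks like the following sketch (the full function in my dotfiles also handles switching between layouts; this is just the Workman half, reconstructed from the associations above):

```vim
" Navigation stays under the fingers where Qwerty h,j,k,l sat;
" in Workman those physical keys produce y,n,e,o:
noremap y h
noremap n j
noremap e k
noremap o l
" ...and the displaced commands move onto h,j,k,l with new mnemonics:
noremap h y  " (H)aul    = yank
noremap j n  " (J)ump    = search next
noremap k e  " brea(K)   = end of word
noremap l o  " (L)ine    = open new line
```

Using `noremap` rather than `map` keeps the swaps from chaining into each other.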

# Unveiling Some Makefile Black Magic

Whilst my higher education started off in the computer science realm, I quickly became disillusioned and, excluding a decent temporal shift, moved more into the physical sciences. Whilst I never finished my CS degree, what I completed gave me an adequate understanding of development life cycles and program design, and sufficient competency in C++ to get shit done. When I started coding heavily again, forces shunted me towards Matlab and high-level, quick-and-dirty rapid prototyping. As we all know, you can only go so far in this world, and I’ve recently found myself back in the depths with C, Fortran and even a little assembly.

Ultimately though, my C++ programs never needed to link to external libraries or worry about machine-specific configurations; the -o switch was pretty much the only one I needed when calling gcc. Now I’m building MPI tools to run on supercomputing clusters that need the highly optimised linear algebra routines written down by our forefathers in a more civilised age.

I need a Makefile, the file filled with dark arts known only to those with neck beards and ghostly white skin.

Realistically, Makefiles are relatively simple things, but they seem to have a stigma associated with them if you’re outside the computer science sphere. In fact, here’s a quote from my PhD supervisor when I told him about what I’d learned while writing this post:

> Hehe, careful. Those that learn how to write makefiles are usually doomed to vanish… banished to a basement (or IT department of a Fortune 500 company) for all eternity.

I guess writing this post and publishing it on the internet is sealing my fate…

The “my first Makefile” tutorials around the internet are not too bad (take a look at WLUG and Mrbook to get started); but the black magic I alluded to in the title of this post is much cooler than just typing `make` instead of `g++ main.cpp interrobang.cpp -o omgwtfbbq`.
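For reference, a minimal Makefile in the spirit of those tutorials, using the file names from the compile line above (flags are illustrative):

```makefile
CXX      = g++
CXXFLAGS = -O2 -Wall

# Link the final binary from its object files.
omgwtfbbq: main.o interrobang.o
	$(CXX) $(CXXFLAGS) -o $@ $^

# Pattern rule: compile any .cpp into its matching .o.
%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $<

clean:
	rm -f *.o omgwtfbbq
```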

### Pre-processor macros

The specific problem I needed to overcome was managing one set of code that requires different linking libraries depending on what machine it was running on.

• Vayu uses the Intel compilers and requires the MKL libraries
• Trifid uses the GCC compilers and requires the BLAS and LAPACK libraries

Because of these conditions, the code in certain files differs. For example, calls to linear algebra routines on Vayu require an MKL_INT type, whereas the same call on Trifid asks for int. A pre-processor macro defining a generalised int type, LP_INT, enables me to overcome this problem. It uses an if-elif-else formalism to check which machine the code is compiling on and adds additional headers if needed:
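A sketch of that header (the VAYU and TRIFID macros are expected to arrive from the build system; the fallback branch is my addition so the snippet compiles anywhere):

```c
/* Generalised integer type for linear algebra calls. */
#if defined(VAYU)
  #include <mkl.h>            /* MKL routines take MKL_INT */
  typedef MKL_INT LP_INT;
#elif defined(TRIFID)
  typedef int LP_INT;         /* reference BLAS/LAPACK take plain int */
#else
  typedef int LP_INT;         /* fallback for unrecognised machines */
#endif
```

Any file that passes integer arguments to the library routines then declares them as LP_INT and compiles unchanged on both machines.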

Now, how can we define these VAYU and TRIFID variables? SUMMON THE MAKEFILE:

Grab the hostname of the machine and check it against known results (in my case I just check for trifid); any shell call can be used here if hostname isn’t appropriate. Then set up the required libraries, includes and compiler information specific to the identified machine. Most importantly, append to CPPFLAGS to incorporate a machine bool set to 1, which the pre-processor macros are looking for.
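As a sketch, the relevant fragment looks something like this (hostnames, compilers and link flags are illustrative; adapt them to your own machines):

```makefile
# Identify the machine from its hostname (any shell command works here).
HOSTNAME := $(shell hostname)

ifneq (,$(findstring trifid,$(HOSTNAME)))
    # Trifid: GCC plus the reference BLAS/LAPACK libraries.
    CC        = gcc
    LDLIBS    = -llapack -lblas
    CPPFLAGS += -DTRIFID=1
else
    # Vayu: Intel compilers plus MKL.
    CC        = icc
    LDLIBS    = -mkl
    CPPFLAGS += -DVAYU=1
endif
```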

Et voilà! Call make on either machine and build without a hassle. No more merge conflicts between git branches for me. A shout-out to Ash, who put me on the right path with this issue.

# Using Datatool and TikZ to Generate Figures From Data

If you’re not already using PGF and TikZ for figures in your $\LaTeX$ documents, I suggest you take a few evenings and get acquainted with a number of examples so you can grasp the magnitude of its capability - you certainly won’t be disappointed.

Building static diagrams and graphs (adding PGFPlots into the mix) is fine, but I find myself constantly wanting decent plots from real data that don’t fit the usual line/surface paradigm. The datatool package is perfect for this kind of work.

Something I’m working on currently is the classification of voids in amorphous solids; Voronoi networks seem to be a great way of expressing the arrangement of atoms in these systems. The following example uses an amorphous aluminium oxide and is represented in 2D so as not to complicate the problem too much.

To simplify things further, I’ve separated my input data into three CSV files that look something like this:
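The atom-positions file, for instance, has the following shape (column names and values here are illustrative, not my real data):

```
idx,x,y,r,species
1,0.00,1.25,0.54,Al
2,2.31,0.87,0.66,O
3,1.12,2.03,0.54,Al
```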

The datatool package reads this information in through its load database command \DTLloaddb
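As a sketch (file name and key names illustrative):

```latex
\DTLloaddb[keys={idx,x,y,r,species}]{data}{atoms.csv}
```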

pulling the file into the data variable, and assigning keys to each column. Now using a foreach command to loop over all rows in data,
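With datatool the loop is \DTLforeach, assigning each column to a macro per row (a sketch; it assumes colours named after each species have been defined elsewhere, e.g. via \definecolor):

```latex
\DTLforeach{data}{\x=x, \y=y, \r=r, \species=species}{%
  % \species doubles as a colour name for the fill:
  \fill[\species] (\x,\y) circle (\r);%
}
```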

I can draw a circle of radius \r at position (\x,\y), as well as colour each circle depending on its associated \species key [lines 23–30 in the full code below].

Two other datatool functions that I use in this example are extremely useful: the first computes the bounds of the (x,y) data, which I use to draw a bounding box; the second grabs the location of x from data at the row whose index value equals \one from another data set.
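These can be sketched with datatool’s column min/max and fetch commands (macro and key names should be checked against your datatool version; the storage macros \minX etc. are my own names):

```latex
% Column extrema for the bounding box:
\DTLminforkeys{data}{x}{\minX}  \DTLmaxforkeys{data}{x}{\maxX}
\DTLminforkeys{data}{y}{\minY}  \DTLmaxforkeys{data}{y}{\maxY}
\draw (\minX,\minY) rectangle (\maxX,\maxY);

% Look up the x value in the row whose idx column equals \one:
\DTLfetch{data}{idx}{\one}{x}
```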

If you include all of this with some TikZ trickery, it’s fairly simple to generate a number of figures like this incredibly fast with a myriad of different data sets.

The entire code-set for this project is below. The in-line comments expand on the syntax I outline above and should answer most questions you may have about each function’s purpose.

# Octopress and jQuery

In the process of theming Axiomatic Semantics, I came across a virtually undocumented (in the Octopress sphere) caveat when including jQuery elements. A number of JavaScript functions in the Octopress source use $ as a variable. This is not uncommon; however, jQuery also aliases itself to $, which causes some confusion in the processing of Octopress’ functions. My issue was the GitHub aside constantly being stuck at the Status Updating… phase. The simplest way to overcome this is to insert the jQuery include in after_footer.html, but if you need the call earlier for whatever reason and want it in head.html, you’re gonna have a bad time.

The fix is quite simple:
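In sketch form (the script path is illustrative), a single call to jQuery’s noConflict() right after the include:

```html
<!-- head.html: immediately after the jQuery include -->
<script src="/javascripts/libs/jquery.min.js"></script>
<script>
  var $j = jQuery.noConflict(); // hand "$" back to Octopress; use $j for jQuery
</script>
```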

This will return control of $ back to Octopress: the old references to $ are saved during jQuery’s initialisation, and noConflict() simply restores them for use again. You can read more about it in the jQuery documentation.

Note: This is not to be confused with the Status Updating… bug that was rectified in Octopress 2.0 when GitHub updated their API to v3. If you’re using an older Octopress version, a rake update_source should take care of that problem.

# Hello World

In the past, my blogs have always been akin to a public, online diary. They generally tended to be inane drivel about what we got up to on Friday night. That said, the era I’m referring to is circa 2002; modern technology has moved on - now we have Facebook for that kind of shit.

My goals for this blog are somewhat different. I’ve realised that my grandiose schemes for this domain (neophilus.net) are most likely never going to happen, and my life now is considerably busier than it was back in the day. Also, no-one gives two shits about some random guy’s everyday life; not unless they’re a 13 yo girl swooning over a pop star.

If you are a 13 yo girl; fan-mail is appreciated, although I highly doubt you’re my target audience.

I’m a physicist, interested in high performance computing, Linux, electronics, UAVs, hardware & software hacking, optimisation and science in general (I elaborate a little more on my about page). My blog will mostly be about these topics (there will likely be a few posts on ancient history or linguistics from time to time as well).

Initially, I intend Axiomatic Semantics to be a log of things I’m learning as I push through a barrier I seem to be approaching at present: being a capable programmer who isn’t really working on public projects at all, but feels he should be. There’s also a massive amount of information I’m discovering as I complete my PhD and my research work at DSTO, a lot of which I feel may be useful to others out there. And there’s the added bonus of this ultimately being an easy archive for me to come back to in the future, once I’ve totally forgotten about something and attempt to reinvent the wheel.

Bear with me for a bit whilst I get the back end set up to my liking - actual content will soon follow.