This little static site is in support of the PhD I completed at the University of New Orleans. The code is here.
I’ve pushed a slide deck, rendered to HTML, here.
Copyright 2019 Thomas Luke McCulloch
All Rights Reserved (until I can figure out what to do with this code).
The dissertation describing what I’ve done is here.
A paper my advisor and I wrote, detailing some of the first things I came up with and built for the project, is here.
I hope to post a pre-print for this somewhere soon.
There is a new paper, briefly summarizing the constraint programming system, to be given at the 12th International Workshop on Constraint Programming and Decision Making, here and here.
As for this web page, well, a blog belongs here. Unfortunately, I am in the process of fixing up many long-neglected projects, and this one is not yet first in line!
These days I am chiefly working on three things: algorithms inspired by geometric physics, understanding geometric physics itself, and high-performance computing. The ongoing projects related to this research are linked here.
There are several points to this research, much of which is self-educational, as my cross-disciplinary mindset requires.
First, there are cross-disciplinary reasons to get into discrete differential geometry research. The algorithms are inspired by the more beautiful, topological side of physics, and the idea is to repurpose the research that is out there now to do more expressive, more powerful, and more efficient constrained variational design of, e.g., ship hulls and other complicated shapes.
Next, notions from differential geometry and topology are playing an increasingly interesting role in quantum information, quantum computing, and condensed matter physics. I hope to leverage my geometric entryway into the necessary mathematics to better understand computational quantum systems, algorithms, and error-correcting codes. When quantum machines take over certain realms of physics and engineering, I want to be well ahead of the curve, developing software to match the hardware.
With the C++ expression templates and CUDA programming project, I have several goals; here is where things stand so far.
So far, I have successfully implemented the basic techniques for expression templates, including a few element-wise operations on matrices (heap arrays), and, for educational purposes, extended this to matrix multiplication. As soon as I got it right for matmul, I took it out of the code base and replaced it with matrix caching. The chief things the code now lacks are smart caching based on the operand types across the entire computational tree, and smart algorithm selection based on available hardware, available software libraries, and the like. The literature, or the docs for a good, well-documented code base like Blaze, Eigen, MTL4, or ETL, should clarify some of what I am saying here. Sometimes the best simple thing to do is still to call out to Fortran; what can I say?
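Here is a minimal sketch of the basic element-wise mechanism, with illustrative names (`Expr`, `Vec`, `Add`) rather than anything from my actual code base: the arithmetic operators build a tree of types, and assignment interprets that tree in one fused loop.

```cpp
#include <cstddef>
#include <vector>

// Lazy element-wise addition over heap arrays via expression templates.
// Names here are illustrative, not from my actual code base.

template <typename E>
struct Expr {
    const E& self() const { return static_cast<const E&>(*this); }
};

template <typename L, typename R>
struct Add : Expr<Add<L, R>> {
    const L& lhs;
    const R& rhs;
    Add(const L& l, const R& r) : lhs(l), rhs(r) {}
    double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
    std::size_t size() const { return lhs.size(); }
};

struct Vec : Expr<Vec> {
    std::vector<double> data;
    explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assignment interprets the whole expression tree in one fused
    // loop; no temporaries are materialized for intermediate sums.
    template <typename E>
    Vec& operator=(const Expr<E>& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e.self()[i];
        return *this;
    }
};

// operator+ builds a tree node instead of computing anything.
template <typename L, typename R>
Add<L, R> operator+(const Expr<L>& l, const Expr<R>& r) {
    return Add<L, R>(l.self(), r.self());
}

int main() {
    Vec a(4, 1.0), b(4, 2.0), c(4, 3.0), d(4);
    d = a + b + c;  // type of the rhs: Add<Add<Vec, Vec>, Vec>
    return d[0] == 6.0 ? 0 : 1;
}
```

The payoff is that `d = a + b + c` materializes no temporaries: the type of the right-hand side is the syntax tree, and evaluation is deferred until the assignment walks it.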
Alright, now I want to tie this, too, back to my PhD research. In developing the logical, relational rules-processing engine for my automated ship hull design and geometry optimization code base, I needed something that would automatically turn simple Python/NumPy math into relational rules. I didn’t realize it at the time, but in parsing the mathematical expressions into a binary syntax tree, and in processing that tree into a relational, connected rules base, I was following a pattern that repeats itself again and again throughout computer science: the interpreter pattern.
From Norvig to SICP, from C++ expression templates to n-ary logical rules, and finally in TensorFlow, Theano, and no doubt other machine learning libraries, the interpreter pattern keeps showing up.
Here is how this pattern appears in my work:
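Below is a hedged C++ sketch of the general shape only (the actual engine is in Python, and the names `Node`, `Var`, `BinOp`, and `relate` are hypothetical): the binary syntax tree is walked at run time, and "interpretation" means emitting connected relations, with fresh temporaries naming the interior nodes, rather than computing a value.

```cpp
#include <iostream>
#include <memory>
#include <string>

struct Node {
    virtual ~Node() = default;
    // "Interpretation" here means: emit relations to 'out', and return
    // the variable name that stands for this subtree.
    virtual std::string relate(std::ostream& out, int& fresh) const = 0;
};

struct Var : Node {
    std::string name;
    explicit Var(std::string n) : name(std::move(n)) {}
    std::string relate(std::ostream&, int&) const override { return name; }
};

struct BinOp : Node {
    char op;
    std::unique_ptr<Node> lhs, rhs;
    BinOp(char o, std::unique_ptr<Node> l, std::unique_ptr<Node> r)
        : op(o), lhs(std::move(l)), rhs(std::move(r)) {}
    std::string relate(std::ostream& out, int& fresh) const override {
        std::string l = lhs->relate(out, fresh);
        std::string r = rhs->relate(out, fresh);
        std::string t = "t" + std::to_string(fresh++);
        out << t << " == " << l << ' ' << op << ' ' << r << '\n';
        return t;  // interior nodes become fresh relational variables
    }
};

int main() {
    using std::make_unique;
    // d = a + b*c flattens to:  t0 == b * c,  t1 == a + t0,  d == t1
    auto tree = make_unique<BinOp>('+',
        make_unique<Var>("a"),
        make_unique<BinOp>('*', make_unique<Var>("b"), make_unique<Var>("c")));
    int fresh = 0;
    std::string root = tree->relate(std::cout, fresh);
    std::cout << "d == " << root << '\n';
}
```

Each interior node becomes a fresh variable tied to its children by a relation, so the output is a connected rules base rather than a single number.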
On the other hand, for expression template programming in C++, the interpreter pattern shows up in the following way:
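Sketched minimally (illustrative names again, and far simpler than a real library like Blaze or Eigen), the same binary tree is encoded in a type, and the interpreter is the recursive `eval()` that the compiler inlines away:

```cpp
#include <iostream>

// The tree lives in a *type*; eval() is the recursive interpreter.

struct Val {
    double v;
    double eval() const { return v; }
};

template <typename L, typename R>
struct Plus {
    L lhs; R rhs;
    double eval() const { return lhs.eval() + rhs.eval(); }
};

template <typename L, typename R>
struct Times {
    L lhs; R rhs;
    double eval() const { return lhs.eval() * rhs.eval(); }
};

// The operators build tree nodes; nothing is computed here.
template <typename L, typename R>
Plus<L, R> operator+(L l, R r) { return {l, r}; }

template <typename L, typename R>
Times<L, R> operator*(L l, R r) { return {l, r}; }

int main() {
    auto e = Val{1.0} + Val{2.0} * Val{3.0};
    // decltype(e) is Plus<Val, Times<Val, Val>>: the syntax tree,
    // built at compile time instead of walked at run time.
    std::cout << e.eval() << '\n';  // prints 7
}
```

Same pattern, different binding time: the rules engine walks its tree at run time, while expression templates hand the tree to the compiler.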