12/07/2022

The Mind-Expanding Power of the Right Interface

How we interact with our tools can completely transform our results

An issue of interaction

It's no great revelation that research -- be it purely academic or applied in industry -- is becoming increasingly dependent on software of every type. Across practically every scientific and engineering discipline, both using and creating software has become critical for simulation and design, and its net effect has undoubtedly been to expand the forefront of human understanding.

But this progress belies what is often an uneasy relationship between software and user -- particularly at the bleeding edge -- and, at its core, the problem is the interface.

So, what do we talk about when we talk about interfaces? Here, I'm referring to all the means of interaction that a piece of software presents to the world -- not just graphically, but also through input files, other software, and so on. When it comes to software created and used by researchers, interfaces often leave a lot to be desired.

What causes bad interfaces?

A quick survey of two of the most problematic regions of the research software landscape:

One-off scripts

Researchers need to try new ideas as fast as possible -- and when embarking on a new research direction, the planning horizon is often shorter than we'd like. It's rarely obvious what's going to work, and architecting a well-structured, user-facing software package from the off is next to impossible.

For this reason, research groups often find themselves with an unwieldy range of small, bespoke scripts, each written to accomplish an immediate task. Input files are whatever made sense at the time, tailored to the original developer's individual workflow. There's little thought about the interface for the rest of the group, let alone group members years into the future.

The incentives for quickly iterating on rough scripts are high, and there's often just not enough time to write software that can be relied upon long into the future. There's also a systematic undervaluing of what's being written in the moment: most research ideas fail, so -- the thinking goes -- nobody will want or need this current version, so it isn't worth the polish (something of a self-fulfilling prophecy).

Software engineers often look down on code from scientists, and in some cases that's fair. Just as some excellent researchers don't make great teachers, many are equally unable to create usable interfaces to their work. In most cases, though, scientific software quality is simply a product of its environment. The reward distribution is geared towards those who publish new ideas, not those who enable the research of others (at least not in the immediate term -- and the reality is that most academics are perpetually on a ticking clock to show results). Show me the incentive, I'll show you the outcome. In industry, the disparity between advancing the field and the real incentives for personal gain is, of course, even more acute.

Large-scale packages

At the other end of the size spectrum are behemoth scientific calculation packages -- created through collaborative effort over years, even decades, of incremental development. These are designed to be used by others, but they often feel poorly tailored for the realities of use. Part of this can be a disconnect between the original authors and today's end users. These packages were usually written to make the most sense for solving the implementer's own problems, which may deviate significantly from the problems of users years down the line.

Tied to this is the issue of complexity creep. As users and maintainers change, new in-demand features are typically grafted on as effectively as possible in an attempt to address the disconnect above. At best, the result is old machinery adapted to a new use case -- and all the compromises between old and new functionality that that entails. At worst, the accumulation of these efforts over the years results in an incoherent mishmash of features with wildly varying degrees of usability and support.

Even when these packages are able to tackle a focused, well-defined problem with lasting relevance to users, they can be a real headache in practice. This may stem in part from a broad perception that, when it comes to technical software, the only thing that matters is computational performance. After all, only very smart people are going to be using this package, so why would it need to be easy to use for the average person? In reality, the end users are experts in their field; they're not experts in your convoluted input deck format.

Why are bad interfaces such a problem?

At a glance, the problem of poorly designed interfaces might seem like a minor gripe. Yes, it takes some time to familiarise yourself with one arcane script and input deck after another -- but isn't this just a relatively small upfront cost? I'd argue the problem is much more insidious. Interfaces aren't just a one-time hurdle to clear when first interacting with a new piece of software. They subtly, but significantly, guide how we go about our work.

Firstly: the issue of cognitive load. R&D work is mentally demanding. A little friction is merely frustrating in a social media app; in the midst of a demanding R&D task, the same friction can derail a train of thought. A few minutes spent rechecking how to call a particularly unintuitive function not only slows progress, it can act as a hard barrier to developing ideas of any meaningful complexity. The brain has a very limited working memory, and mental clutter affects work far beyond a few one-off delays.

Beyond cognitive load, though, there's the overarching problem of impedance mismatch between user and software: the disparity between how the developer and the user think about the software and the problem it solves. The issue has many causes -- in the case of one-off scripts, it may be the idiosyncrasies of the writer; for large-scale packages, the time gap between initial development and usage. But the result is a substantial roadblock to innovation.

Mismatched interfaces turn the act of applying software into following IKEA assembly instructions. You're no longer exploring new ideas as they come to you: translating them into what's on the screen puts up too great a barrier, it becomes impossible to implement what you want at the speed of your thoughts, and more complex, multistep concepts drift out of reach. You're not reaching for the right tool the moment you have a flash of inspiration, experimenting with new ideas for joinery and finishing. You're inserting the dowels, hammering them into place, and praying the result looks something like the picture in the catalogue.

I won't name and shame the package, but a particular (widely used) example comes to mind from the materials modelling space, which I used as a Physics student. Rather than defining the fairly simple system I wanted to model -- a hot plasma -- in a way that made immediate sense to me, I had to first create some inert objects modelling the particles, then a new object to move them around, then another to calculate the forces on them, another to get the information I wanted out, and so on. It worked, but it felt incredibly clunky. Getting results for what should have been a really simple case took far too long, largely because physicists just don't think about systems of particles in this way.
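To make that concrete, here's a minimal sketch of roughly what the workflow looked like. Every class and argument name here is invented for illustration -- this is not the real package's API:

```python
# Hypothetical sketch of the object-by-object setup described above
# (all names invented for illustration; not the real package's API).
particles = Particles(species="electron", n=10_000)     # inert particle data
pusher = BorisPusher(dt=1e-12)                          # object to move them around
forces = CoulombSolver(cutoff=1e-9)                     # object to compute the forces
probe = TemperatureDiagnostic(every=100)                # object to extract the output

sim = Simulation(particles, pusher, forces, diagnostics=[probe])
sim.run(steps=10_000)
```

Each object is perfectly reasonable in isolation; it's the assembly that bears little resemblance to the question a physicist is actually asking: what does a hot plasma at this temperature do?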

It was deeply confusing to me that someone had created simulation software like this, with an interface layer so disparate from how we naturally think about the underlying problem. Later in my research, though, I developed a similar piece of software of my own, and the reasoning suddenly became clear. From the software writer's point of view, the package's interface was a very natural way to think about how such systems work -- we want nice, separate objects for each concern, and having the user interface mirror this made perfect sense.

The authors of that package had developed mental models that diverged from those of the user (and presumably of their past selves). The software still had many great qualities -- particularly when it came to performance -- but there was a significant barrier to proper usage that had become invisible to the owners. There's a long-standing support mailing list for the package with problems from users, many of which stem indirectly from this design decision. From the authors' responses, it's clear that they believe the problems exist between keyboard and chair -- ignorant users who haven't read the docs. When software doesn't work the way we envision, though, the docs are effectively written in a different language.

I certainly don't claim to be immune to this problem. I expect most software engineers have had experiences where it felt like users were deliberately misunderstanding their software, and it's very easy to lay the blame with them (occasionally with good reason). It takes a lot of effort to detach yourself from a developer's mindset, try to understand how the user sees the problem they're trying to solve, and attempt to provide the tools that fit that viewpoint. A neat, well-architected separation of concerns on the backend doesn't have to map one-to-one to the user interface.
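As a sketch of what that separation can look like in practice -- hypothetical names again -- the backend can keep its single-concern components while the entry point speaks the user's language:

```python
from dataclasses import dataclass, field

# Backend: cleanly separated, single-concern components (hypothetical names).
@dataclass
class Particles:
    n: int
    temperature_eV: float
    state: list = field(default_factory=list)

class Pusher:
    def step(self, particles, dt): ...       # advance particle positions

class ForceSolver:
    def compute(self, particles): ...        # evaluate inter-particle forces

# Frontend: one function phrased in terms of the physics, not the object graph.
def simulate_plasma(n_particles, temperature_eV, duration, dt=1e-12):
    """Simulate a hot plasma, keeping the internal wiring hidden."""
    particles = Particles(n=n_particles, temperature_eV=temperature_eV)
    pusher, forces = Pusher(), ForceSolver()
    t = 0.0
    while t < duration:
        forces.compute(particles)
        pusher.step(particles, dt)
        t += dt
    return particles

# The user states the problem as they'd describe it to a colleague:
plasma = simulate_plasma(n_particles=10_000, temperature_eV=100.0, duration=1e-9)
```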

Towards a solution

The good news is that action is already being taken. In research institutions, we're seeing tighter collaboration between software engineers and the researchers themselves. In some cases this is purely for optimising performance (e.g. GPU integration) and interface design is a lower priority -- but the deep involvement of two groups with very different points of view at least allows some of the right questions to be asked. Journals, too, are increasingly encouraging the publication of source code to better promote reproducibility. Additional guidelines for interface design might help nudge software utility in the right direction for everyone involved.

On the tooling side, there's a need for specialist technology to bridge the gap: working with practitioners to find the natural abstractions that serve them best, then enabling creators to provide that interface without diverting effort away from the technology they're developing. Equally importantly, new interfaces need to be quick and easy to create as mental models of the problem evolve. This is part of our vision for the Discovery Platform -- enabling easy integration of the interfaces that users want and expect.

Finally, we can take inspiration from the many excellent research tools already out there. Yes, there are research packages with interface issues -- but there are also plenty of examples of great interface design. Scikit-learn, for example, has become a staple among machine learning practitioners. The interface it presents aligns extremely well with how most users think about training models, across the whole pipeline from data separation and cleaning to cross-validation and, ultimately, inference.
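A few lines of standard scikit-learn are enough to see why -- each step reads almost exactly like the practitioner's description of the task:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The pipeline mirrors the mental model: scale the data, then classify.
model = make_pipeline(StandardScaler(), LogisticRegression())

print(cross_val_score(model, X_train, y_train, cv=5).mean())  # cross-validation
model.fit(X_train, y_train)                                   # training
print(model.predict(X_test[:5]))                              # inference
```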

Perhaps most impressive -- and instructive -- are the past and newly emerging frameworks for automatic differentiation. These packages take an operator that's a natural part of scientific language and successfully map it almost one-to-one into a software interface. As a user, there's no need to think of the software as an additional layer of complexity -- you're able to just write what you want to do in the most obvious way -- and the productivity boost is incredible.
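With Autograd, for instance, taking a derivative looks much as it does on paper: wrap a function in grad and you have its gradient.

```python
import autograd.numpy as np   # drop-in NumPy wrapper that traces operations
from autograd import grad

def f(x):
    return np.sin(x) ** 2 / x

# grad maps a function to its derivative, mirroring the mathematical operator.
df = grad(f)
print(df(1.5))  # f'(1.5), computed to machine precision, not by finite differences
```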

In his PhD thesis, Dougal Maclaurin, the author of Autograd, talks about the motifs of his work in a way that has stuck with me:

"The gradient operator is an excellent abstraction, and we use it freely and to great effect in symbolic mathematics. But having Autograd, a practical implementation of the gradient operator in an actual programming language, has been invaluable."

This directly follows from what he calls

"the mind-expanding power of programming tools that present the right abstraction"

Brett Larder, Co-founder + CTO
While researching Atomic and Laser Physics at Oxford, Brett developed the first prototype of the Discovery Platform. As CTO, he leads the vision and development of the platform, the productisation of research, and the architecture of the company's technological infrastructure.
