
(WIP) This is Not a Talk About Ducks 🦆🦆🦆

User Experience (UX)

User Experience is, I hope by now, obviously of massive importance when building software products. It doesn't matter how smart or innovative a product is. If it is hard to use then people won't use it. Or perhaps even worse - they will learn to resent it when they have to.

I know I am preaching to the converted on this. Everyone here knows the importance of focusing on usability and having empathy for the end user.

But that simple term end user contains legions. Are we talking about the customers? What about the administrators and the moderators who keep the whole thing running? And then there are the executives who need up-to-date stats on performance, sales, revenue and on and on.

Even if customers are the only ones who use the system - there are still two distinct sets of users on any software product. The ones the system is being built for, and...

The developers working on the code - not forgetting the designers, product owners and so on. But as a fellow developer, that is where I am going to focus today.

Onboarding

  • A Human
    • A bit of history
    • A bit of an intro to how we do things
  • Expectations (this will take time, but I'll check in from time to time to help out)
  • README, just enough to get up and running - more like an index page than documentation

Documentation

When I first learnt to code formally, Java was the thing. One of the things I really loved about Java was its super comprehensive documentation of the standard library. So you can imagine how excited I was to learn that this wasn't just a thing that existed, but a thing I could use too. Javadoc is a tool that reads code and automatically generates API documentation from it.

This taught me not only how great it is to have good documentation, but how powerful inline documentation is - documentation that is found within your code.

Inline Documentation

In the Python world at least, there are at least two types of inline documentation.

Regular comments are used to annotate complex or non-obvious implementation details. Python emphasises the importance of readability in the code itself, but more often than not that still isn't enough. A little pointer here and there describing why something was done a particular way can save multiple downstream headaches.

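For instance, a sketch with an invented function and an invented upstream constraint - the point is the why captured in the comment:

```python
def chunk_ids(ids, size=100):
    # The upstream API rejects requests containing more than 100 ids,
    # so we batch them up rather than sending one big request.
    return [ids[i:i + size] for i in range(0, len(ids), size)]
```

Without the comment, a future developer might "simplify" this back into a single request and reintroduce the failure.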

Docblocks or docstrings are the second type. These are usually multi-line and attached to a function, a class definition or even a whole file. They describe the code in terms of what it does and how it is supposed to be used - to inform anyone reading it what to expect when using the code, without having to go through it with a fine-tooth comb. They would ideally describe the contract of that code: what goes in, what comes out, and whether there are any side effects or possible exceptions.

They may also delve a little into the why if what follows is particularly thorny or dense.

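For instance, a docstring spelling out the contract of a made-up pricing function (the function, the codes and the rules are all invented for illustration):

```python
def apply_discount(price_pence, code):
    """Return the price after applying a discount code.

    Args:
        price_pence: The original price in pence, as an int.
        code: A discount code string, e.g. "SUMMER10" for 10% off.

    Returns:
        The discounted price in pence, rounded down and never below zero.

    Raises:
        KeyError: If the discount code is unknown.
    """
    discounts = {"SUMMER10": 10, "VIP20": 20}
    percent = discounts[code]
    return max(price_pence - price_pence * percent // 100, 0)
```

A reader now knows the units, the rounding behaviour and the failure mode without reading the body at all.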

What I went on to learn pretty quickly is that API documentation is not really enough to truly understand a code base. In fact API documentation - inline documentation - is only one type of documentation. Young Jon didn't know it then, but there are in fact at least four distinct types of documentation. And for any project you will probably need at least a little bit of all four of them for it to be a nice place to work.

The Four Kinds of Documentation

  • Reference
  • Tutorials
  • Guidelines
  • Discussions

Our inline documentation is an example of Reference documentation. It's the cold hard facts of how it is. No delicate prose or soliloquies on the unrequited beauty of set theory, just tell it to them straight.

For your applications and projects, consider adding reference documentation for things like command line tools (management commands in Django, for example), environment variables, and data structures used to share information with other systems. They are purely descriptive.

Reference documentation is Information Oriented.

The next type on the list is Tutorials - these I'm sure you are intimately familiar with. They are usually presented as a sequence of steps the reader should follow exactly to achieve a particular outcome.

Again it is best to avoid diving into the whys and wherefores - treat the person reading as competent but unfamiliar with the subject matter. Take it slow, don't assume steps are obvious - include them all.

Examples range in scale - they could be as small as creating a new type of button component in a ReactJS project up to creating a whole fully functional application. The needs will change from project to project. Tutorials are there to show a beginner that they can achieve something meaningful. They should be repeatable and consistent.

Tutorials are Learning Oriented.

Which brings us on to the third type of documentation - guidelines. These have a lot in common with tutorials. So much so that they are often overlooked as a separate thing.

They are still essentially recipes - they are an answer to a question that a beginner might not even be able to formulate. They state, and then answer specific questions like:

  • How to perform a deployment
  • How to import example data into your development environment
  • How to create a new dashboard

Often these are manifested as how-to guides. Guidelines do not need to be as prescriptive as a tutorial, or as bulletproof. The reader can be considered more familiar at this point - able to understand the context a little better and to adapt the content to their own specific situation. Again, like tutorials, they should only really describe the task at hand, not explain why it is the way it is.

Aside from pure reference material, these are probably the easiest - or at least the most comfortable - for developers to write. Developers do these things day to day; it is really just transcribing the steps they follow.

Guidelines are Goal or Problem Oriented.

The final type of documentation I am going to cover is discussions. This is often the most valuable and therefore the most difficult type of documentation to get right.

This is the only place where you get to explain the why, to present conflicting views and then (hopefully) record which direction you took. This is the background, the history of a project. This is where those soliloquies are not only permitted but encouraged! Discussion documentation, just as much as your Git log, is the story of the product as narrated by the developers who were there.

These are hard.

I should probably clarify: these are hard to keep clean and understandable. Developers (aka I) like nothing more than to debate the one true way. So much so that I'm fairly certain you could ask the same question of any three developers and get at least four answers, most of which can be boiled down to it depends.

Discussions are where you get to describe what it depends on, and how that affected the decisions you made along the way. This means that when another developer comes along in six months they don't just stare on with a look of confusion bordering on revulsion. And remember, that developer is often you - a lot of stuff can go down in six months.

Even with this new-found freedom it's probably good to rein it in a little. To that end, the projects I am involved in generally keep Architectural Decision Records, or ADRs. These provide a structure to describe key decisions that have been made along the way, and I'd usually split them into three key sections:

  • Context: background information, prior art, a discussion of the problem itself
  • Decision: what the final decision was and why
  • Implications: any downstream side effects of the decision - have we knowingly chosen an approach with limitations in order to deliver faster?

These are a great way to help onboard new developers - to give them a feel for the code base. They should help them understand the why as well as the what. Although in a larger or older project you may want to flag specific ADRs as the most important, in order not to overwhelm people.

Discussions are Understanding Oriented.

Testing

What might appear odd at first glance is that the topic of documentation, to me at least, slides quite neatly into the topic of testing.

It was pointed out to me not so long ago that the tests that accompany a code base are in themselves a form of documentation. They describe to a developer how the pieces of the software work with each other - how sub-systems are supposed to be used and what to expect them to do in given situations. So a thorough test suite can do a lot more than just provide some level of assurance that a system behaves a certain way and that changes don't break things. It can be informative in its own right.

Re-framing test suites this way helps you to think about them differently.

For example, in application code we often try to be pretty clever - to abstract away differences. To avoid repeating ourselves, because we know that if we repeat ourselves, at some point the copies will diverge. A bug is found in one copy - and fixed. But the other copy is forgotten and still manifests the same defect.

Well, that and that developers are functionally lazy. Why fix two things when you can fix both in one place and go back to arguing about the merits of functional vs OO coding paradigms, or some other such debate.

We want our code to be DRY - Don't Repeat Yourself.
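As a throwaway sketch of the failure mode above (all names invented): two near-identical copies of the same normalisation will eventually drift apart, so we keep one shared implementation instead.

```python
def normalise_email(value):
    # One shared implementation: fix a bug here and every caller benefits.
    return value.strip().lower()


def register_user(email):
    return {"email": normalise_email(email)}


def invite_user(email):
    return {"email": normalise_email(email), "invited": True}
```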

I often see this applied to tests as well. It was particularly prevalent in xUnit-style tests, where test cases are wrapped up into test classes and the like. This felt like home to many of us. It's a class, therefore I should split things up into lots of little methods and DRY out my tests.

I've come to realise this is a mistake. Test suites should be DAMP.

DAMP - Descriptive and Meaningful Phrases

If that isn't a backronym I don't know what is. But what I am trying to convey is this: if you think about your test suite as another type of documentation, focused squarely on the developers who come after you, then you begin to see that being overly clever, or DRYing out your code, can actually diminish the value of your test suite by obfuscating what the various parts of your system are doing and how they interact. That to some extent it is better to repeat yourself if it means that a test can be more easily understood in isolation. That each test case tells a whole story, and all stories need a beginning, a middle and an end:

  • Given
  • When
  • Then

or, because alliteration aids awareness:

  • Arrange
  • Act
  • Assert

If the act is hidden away in many functions, or the assert is not immediate and in your face, then you don't have a complete story. You cannot see how the parts actually hang together without jumping through hoops.
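Sketched as a minimal pytest-style test (the function and numbers are invented), with all three beats visible at a glance:

```python
def discounted_total(total_pence, percent):
    return total_pence * (100 - percent) // 100


def test_a_ten_percent_discount_reduces_the_order_total():
    # Arrange: an order total in pence and a discount percentage
    total_pence = 1000
    percent = 10

    # Act: apply the discount
    discounted = discounted_total(total_pence, percent)

    # Assert: the total dropped by exactly 10%
    assert discounted == 900
```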

I left out the arrange part intentionally. Whilst it is still important to spell out clearly how you are arranging things, it can get complex - especially when there is a sizeable amount of data set-up involved. So this is one area where I would compromise, by creating factories that generate data for you. Ideally using descriptive names and sane defaults, so that even if a developer doesn't go and inspect the implementation they will have a clear understanding of what is being done.
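A minimal sketch of what I mean by a factory with descriptive names and sane defaults (the fields and defaults here are invented):

```python
import itertools

_counter = itertools.count(1)


def product_factory(name=None, price_pence=999, in_stock=True):
    """Build a product dict; tests override only the attributes they care about."""
    return {
        "name": name or f"Product {next(_counter)}",
        "price_pence": price_pence,
        "in_stock": in_stock,
    }


def test_an_out_of_stock_product_is_flagged_as_unavailable():
    # Only the attribute this test cares about is spelled out
    product = product_factory(in_stock=False)

    assert product["in_stock"] is False
```

Because the defaults are sane, the one overridden attribute is the whole story of the arrange step.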

Whilst we are talking about clarity and being descriptive, can we take a side stroll into test case naming?

Consider a URL endpoint that adds a product to your basket. You create a test file:

test_basket_api.py

Then add a test case:

def test_success(authenticated_client):
    product = factories.Product()
    payload = {"product_id": product.id}
    basket = authenticated_client.get_basket()

    response = authenticated_client.post("/basket", payload)

    assert response.status_code == 201
    assert product in basket
    ...

Looks fine to me - it tests the happy path nicely. We have clear arrange, act and assert sections, right?

Now consider this one small change:

def test_a_product_can_be_added_to_a_users_basket(authenticated_client):
    product = factories.Product()
    payload = {"product_id": product.id}
    basket = authenticated_client.get_basket()

    response = authenticated_client.post("/basket", payload)

    assert response.status_code == 201
    assert product in basket

Both tests are the same - but by renaming the test case to be more descriptive I've immediately been able to convey a huge amount of additional context.

As seasoned web developers who have spent plenty of time building both REST APIs and e-commerce platforms, we know that POSTing to a basket resource usually indicates the creation of a new resource within it. But that knowledge wasn't always there, and it isn't there for everyone. The second example makes the actual intent of the test extremely clear.

Whilst I realise this example is a fairly contrived one, I do think it illustrates the point.

Hot tip - if you want to really see this play out in a project using pytest, try adding the pytest-spec plugin and re-running your test suite. (Unless you are already using pytest-sugar - they don't play nicely together.)

Tooling

  • CI/CD
  • Unification of local tooling with CI/CD
  • Parity

Peer Reviews

As the Author

As the Reviewer

Style

Empathy

UX == empathy for the user. DX == empathy for the developers - they are users of the code base too. This is just extending the same thing to your colleagues.

What I've talked about today is simply a collection of tools, techniques and approaches that I have gathered over the last few years and personally found useful. They are not best practice, they are not all perfect, and the list is far from complete. The list will grow, shrink and mutate over time, and be varyingly appropriate in different situations.

But you don't really need a list of fancy tools to ensure that your fellow developers have the best possible experience. You just need empathy, everything else flows from that.

So have empathy for your users:

  • The customers
  • The product consumers
  • The administrators
  • The developers who inherit your code
  • And of course yourself

References