The long way through Software Craftsmanship

Talks I have attended at the Craft Conf 2015

Apr 25, 2015 - 1 minute read - Comments - craft-conf2015, attended-talks, budapest

These are the talks I’ve attended at the Craft Conf 2015:

Thursday 23rd

Friday 24th

I couldn’t attend the last talk nor the keynote, as I had to get to the airport.

What legacy code is about

Apr 21, 2015 - 3 minute read - Comments - legacy-code, trust, clean-code, michael-feathers, quote, definition, update-definition, tests, unit-test, working-effectively-with-legacy-code, welc, wewlc

In his book Working effectively with legacy code, Michael Feathers describes:

[…] legacy code as code without tests. It is a good working definition, and it points to a solution […]

M Feathers, in the preface of Working effectively with legacy code

I really like this definition: it is objective and measurable. But that is also a downside. Take any concurrent code, for example: you can achieve 100% coverage on a single thread and the program can still have defects when executed in parallel. But this might be an outlier, as concurrency is difficult to test. Let’s take another example.

Let’s imagine some non-concurrent (i.e., single-threaded) code composed of a single class (no collaborators; just one responsibility; it is “Clean Code”, as defined in R.C. Martin’s book) that has some happy paths as well as some error-handling cases. As usual. The original author decided to test only the happy paths. They deployed it to production, as the product was ready. You inherit this code.

After the system is deployed to production, you start receiving defect reports on the happy paths (the tested part). The first one you cross off as a one-time thing. Then the second and the third come. You take a deeper look and start seeing some inconsistencies in the tests, some tests that actually do not test anything interesting:

  • some test the test framework
  • some test the language
  • some test a happy path that works by pure luck
  • some functionalities are tested (but not all the happy paths)
  • there is even a defect converted into a specification (test)

Now, I ask myself this question: when did I start considering this code legacy? When I saw the bug reports in production (several in a row), or when I saw the poor suite of tests?

I do think this codebase is legacy, no matter when I realized it. But my first instinct would be to consider it legacy from the moment I received the bug reports, as that is when I lost my trust in the code.

Because for me, legacy code is about trust, not tests. You can have tests but no trust; you can have trust but no tests (although the latter is more difficult in non-trivial codebases).

This is why I would like to update the definition:

Legacy code is code without tests that provide trust to all your stakeholders

The last part is very important, because the definition is no longer objective but varies from person to person. If the codebase doesn’t have tests, it is definitely legacy. But even if it has them, would you consider the example above legacy or non-legacy code?

These stakeholders include everyone affected by or affecting the product: owners, the development team (in the wide, Scrum sense), users, clients, …

Full disclosure: I’ve written plenty of legacy code. Code which I thought was good but really wasn’t. Even with tests, even with TDD, even with Clean Code. This has made me change my mind about what legacy code is all about.

Annotations and Aspects in Java

Apr 21, 2015 - 2 minute read - Comments - java, aspect, aspectj, spring, annotation, snippet

I’ve written a sample project with an annotation and an aspect that intercepts and decorates it.

The code provides a way of doing try/catch, specifying the exception to be caught.

The source code is available here
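The actual project uses Spring and AspectJ; as a self-contained illustration of the idea, here is a plain-Java sketch (all names are mine, not the project’s) in which the annotation declares the exception to catch and a reflective decorator plays the role of the aspect:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

// Hypothetical annotation: declares the exception type the "aspect" should catch.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface CatchException {
    Class<? extends Throwable> value();
}

class Service {
    @CatchException(IllegalStateException.class)
    String risky(boolean fail) {
        if (fail) throw new IllegalStateException("boom");
        return "ok";
    }
}

class CatchExceptionDemo {

    // Poor man's aspect: invoke the method reflectively and apply the
    // try/catch behavior declared by its annotation.
    static Object invokeDecorated(Object target, String methodName, Object... args)
            throws Exception {
        Method method = findMethod(target.getClass(), methodName);
        CatchException catching = method.getAnnotation(CatchException.class);
        try {
            return method.invoke(target, args);
        } catch (InvocationTargetException e) {
            if (catching != null && catching.value().isInstance(e.getCause())) {
                return null; // the declared exception is swallowed
            }
            throw e;
        }
    }

    static Method findMethod(Class<?> type, String name) {
        for (Method m : type.getDeclaredMethods()) {
            if (m.getName().equals(name)) return m;
        }
        throw new IllegalArgumentException(name);
    }

    public static void main(String[] args) throws Exception {
        Service service = new Service();
        System.out.println(invokeDecorated(service, "risky", false)); // prints ok
        System.out.println(invokeDecorated(service, "risky", true));  // prints null: caught
    }
}
```

With AspectJ, the reflective decorator disappears: an `@Around` advice matching methods annotated with the custom annotation applies the same try/catch.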

In the process

This has taken me approximately one hour to prepare the spike and another half hour to refactor and massage it a bit.

If you want to see the raw details, the refactor has been committed step by step (the spike hasn’t). The unmassaged code can be found here; the massaged version is at the tip of the arrow.

More information

Regarding annotations (like @Override):

Regarding aspects (based on AspectJ):

  • This question has been my main source of information and inspiration for the Spring + AspectJ configuration

Conclusions

One of the drawbacks I find in common examples and tutorials is that they’re not complete: they show you part of the information, but not the whole. While this is enough for most readers, there are library incompatibilities, defects, specific versions, etc. This is why I like to post the whole solution, including the pom.xml (dependencies) and the tests.

Having less documentation for developers can be acceptable if you have tests (or more tests); of course, only as long as other developers can follow your train of thought while developing.

Logging is a feature

Apr 19, 2015 - 3 minute read - Comments - logging, log, feature, hypothesis, client, challenge, splunk

At a client, we’re facing this challenge: we cannot access the production logs, as we don’t have access to the production environments. The solution we’ve chosen is to implement the logging component as a layer on top of Splunk.

This formatter layer (per component) accesses a generic layer (for the whole company) that accesses Splunk. In this manner, it is very easy to reuse the Splunk connection and configuration and to inject mocks. It also helps with the local environment: you always log to the console (even if in production you don’t have access to it) and can disable the Splunk layer locally, so you don’t need a local Splunk installation.
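The layering can be sketched like this (all names here are made up; the real client code obviously looks different): each component owns a small formatter that talks to one company-wide sink, and the Splunk-backed sink can be swapped for a console (local) or recording (test) implementation.

```java
// Generic, company-wide layer: the single place that knows about the backend.
interface LogSink {
    void write(String line);
}

// Local environment: log to the console, no Splunk installation needed.
final class ConsoleSink implements LogSink {
    @Override
    public void write(String line) {
        System.out.println(line);
    }
}

// Formatter layer, per component: owns the message format, not the transport.
final class OrderLogger {
    private final LogSink sink;

    OrderLogger(LogSink sink) {
        this.sink = sink; // inject a mock or recording sink in tests
    }

    void orderPlaced(String orderId) {
        sink.write("component=orders event=order-placed id=" + orderId);
    }
}
```

In a test, `LogSink` being a one-method interface means a lambda or method reference is enough: `new OrderLogger(lines::add)` captures every formatted line in a list for assertions.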

Strategies for dealing with this:

  • functional logger: no shared state
    • can only log information available in the current scope: parameters, attributes, static information
  • stateful logger: information is added to the thread context and logged whenever necessary
    • can log information in the current scope plus any of the previous invocations
    • the problem is precisely sharing state: you have to control all the possible parents to know where the data comes from, as a piece of information may come from any source
    • you lose the reference to who is invoking you, as there is no customization for this (only by relying on the implementation)
    • you speak two different languages: one for logging and one for business logic
  • mixed: mix and match the best of both worlds
    • the problem with the shared state can be solved if no data is overwritten and every piece of information is stored in a map whose key is the fully-qualified (FQ) method name (or FQ class name + method name)
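The mixed strategy above can be sketched as follows (class and method names are my own, not a real library’s): each piece of context is stored per thread under a key qualified with the name of the method that produced it, so parents and children never overwrite each other’s data and you can still see where each value came from.

```java
import java.util.LinkedHashMap;
import java.util.Map;

final class LogContext {
    private static final ThreadLocal<Map<String, Object>> CONTEXT =
            ThreadLocal.withInitial(LinkedHashMap::new);

    static void put(String key, Object value) {
        // Index 2 of the stack trace is the direct caller of put(),
        // so the key is qualified with "CallerClass.callerMethod:".
        StackTraceElement caller = Thread.currentThread().getStackTrace()[2];
        String qualifiedKey = caller.getClassName() + "."
                + caller.getMethodName() + ":" + key;
        CONTEXT.get().put(qualifiedKey, value);
    }

    // Read by the formatter layer when it finally writes the log line.
    static Map<String, Object> snapshot() {
        return new LinkedHashMap<>(CONTEXT.get());
    }

    // Called at the end of a unit of work (e.g., a request) to avoid leaks.
    static void clear() {
        CONTEXT.remove();
    }
}
```

A method anywhere in the call chain does `LogContext.put("orderId", id)`; when the log line is emitted, the snapshot carries both the value and the FQ method that contributed it.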

The GOOS book explains this very well in the first half of chapter 20, “logging is a feature”. If we log as described in chapters 19 and 20, some of the problems are reduced:

  • multiple levels of abstraction
  • indirect control over logging: decide in a single place whether or not to log each kind of exception
  • remove “implementation details” such as log.isDebugEnabled
  • remove performance issues: no need to build strings for logging every time, only to pass objects. Inside the logging layer, if the logger is enabled for that level, then spend the time building the strings and printing them.
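A minimal sketch of the last two points (names are assumed, not from any real library): the caller hands over a `Supplier` instead of a built string, the single level check lives inside the logging layer, and the string is only built when the level is actually enabled.

```java
import java.util.function.Supplier;

final class AppLogger {
    enum Level { DEBUG, INFO }

    private final Level enabled;

    AppLogger(Level enabled) {
        this.enabled = enabled;
    }

    void debug(Supplier<String> message) {
        // The level decision is made here, in one single place:
        // no log.isDebugEnabled at the call sites, and message.get()
        // (the expensive string building) only runs when needed.
        if (enabled == Level.DEBUG) {
            System.out.println("DEBUG " + message.get());
        }
    }
}
```

A call site looks like `logger.debug(() -> "state=" + expensiveDump())`; when DEBUG is disabled, `expensiveDump()` is never invoked.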

We should remember that:

  • Whenever logging becomes useful data (for the stakeholders), it is no longer debug information and needs to be tested (automatically). In my opinion, it is not enough to check it manually when debugging, as these stakeholders rely on the data. Also, humans are not good at detecting small differences, while machines excel at it.
  • Why treat the logging feature differently from any other? It is an investment, like any other, and as such it should work properly.

Talk: Simplicity matters

Apr 18, 2015 - 1 minute read - Comments - clojure, rich-hickey, talk, keynote, rails-conf

I’ve seen this video by Rich Hickey: “Simplicity matters”, taken from Manuel Rivero’s Google Plus page.

Here are my notes:

  • the software is the elephant
  • do more, do it differently, do it better
  • complexity of the elephant is going to dominate what you can do
  • your ability to reason about your program is critical
  • design is about pulling things apart
  • become familiar by learning, trying
  • “We can be creating the exact same programs out of significantly simpler components” (Rich Hickey)

Complexity vs simplicity:

| Complexity | Simplicity |
| --- | --- |
| state, objects | values |
| methods | functions, namespaces |
| variables | managed refs |
| inheritance, switch, matching | polymorphism à la carte |
| syntax | data |
| imperative loops, fold | set functions |
| actors | queues |
| ORM | declarative data manipulation |
| conditionals | rules |
| inconsistency | consistency |

  • he does not agree with “simplicity” as used in the Agile manifesto
  • simplicity is not about you. simplicity is a lot of hard work
  • simplicity is not an objective

Order is all over the place:

| Complex | Simple |
| --- | --- |
| positional arguments | named arguments or map |
| syntax | data |
| product types | associative records |
| imperative programs | declarative programs |
| Prolog | Datalog |
| call chains | queues |
| XML | JSON, Clojure literals |

The Guardian of the Source

Apr 17, 2015 - 2 minute read - Comments - craftsmanship, guardian, source, value, added-value, pull-request, client, guardian-of-the-source, collaboration, analogy, socra, sandro-mancuso

Overheard today at the office, while they were discussing the approval of a pull request:

[Pull request reviewer] - I’m sorry, I can’t approve this: I don’t really agree with this design. Isn’t there an alternative to it?

[Pull request author] - [redacted], of course you don’t, you’re the guardian of the source

[PR reviewer] - Well, sometimes I’m a bit picky about introducing bad code into our codebase.

This has made me think of Gandalf blocking the path of the Balrog, shouting “You shall not pass!” while standing on the bridge.

This is part of the objective of doing a pull request before merging code: stopping the bad code1 before it is merged into the codebase.

Later, once you have broken windows in your codebase, it becomes easier to introduce more. This is exactly what the broken windows theory is about.

On the other hand, if you block that bad code before it is merged, it is cheaper to remove: nothing depends on it yet, you introduce no repetition, etc.2

This also connects with the idea, present in Software Craftsmanship (now called The Software Craftsman), of preserving value for investors and treating software as an investment (with a positive ROI) rather than a sunk cost.

It is about being picky and, to the best of our abilities, not letting any defects or bad code into the codebase. A hedge fund manager would not let any toxic or dubious asset into the funds they manage: that would be counterproductive in the long run (maybe even in the short one), and they would not be doing their job correctly. Why is it, then, that some developers suggest or accept these toxic assets?

So, at least for me, it would be an honor to be called the “Guardian of the Source” by someone I respect, as long as it is a serious proposition (i.e., not a joke of some kind).


  1. Bad code can be expressed in many ways: long methods, bad naming, breaking design principles (e.g., SOLID, KISS, DRY - OAOO), code with defects, no tests or no coverage, etc. [return]
  2. A source is Discover defects early. Another is this paper, which puts the cost ratio of fixing a defect later at 5:1 (five times more expensive). [return]