Java's Volatile Modifier

A while ago I wrote a Java servlet Filter that loads configuration in its init function (based on a parameter from web.xml). The filter's configuration is cached in a private field. I set the volatile modifier on the field.

When I later checked our company's Sonar server to see if it had found any warnings or issues in the code, I was a bit surprised to learn that there was a violation on the use of volatile. The explanation read:

Use of the keyword 'volatile' is generally used to fine tune a Java application, and therefore, requires a good expertise of the Java Memory Model. Moreover, its range of action is somewhat misknown. Therefore, the volatile keyword should not be used for maintenance purpose and portability.

I would agree that volatile is misknown by many Java programmers, and for some even unknown. Not only because it's rarely used in the first place, but also because its definition changed in Java 1.5.

Let me get back to this Sonar violation in a bit and first explain what volatile means in Java 1.5 and up (until Java 1.8 at the time of writing).

What is Volatile?

While the volatile modifier itself comes from C, it has a completely different meaning in Java. This doesn't help in growing an understanding of it: googling for volatile can lead to conflicting results. Let's take a quick side step and see what volatile means in C first.

In the C language the compiler ordinarily assumes that variables cannot change value by themselves. While this makes sense as default behavior, sometimes a variable may represent a location that can be changed (like a hardware register). Using a volatile variable instructs the compiler not to apply these optimizations.

Back to Java. The C meaning of volatile would be useless in Java. The JVM uses native libraries to interact with the OS and hardware. Furthermore, it is simply impossible to point Java variables at specific memory addresses, so variables actually won't change value by themselves.

However, the value of variables on the JVM can be changed by different threads. By default the compiler assumes that variables won’t change in other threads. Hence it can apply optimizations such as reordering memory operations and caching the variable in a CPU register. Using a volatile variable instructs the compiler not to apply these optimizations. This guarantees that a reading thread always reads the variable from memory (or from a shared cache), never from a local cache.
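To make this concrete, here is a minimal sketch (the class and names are mine, purely illustrative) of the classic stop-flag case this guarantee matters for:

```java
// A worker that another thread stops by flipping a flag. Without 'volatile'
// the JIT may hoist the read of 'running' out of the loop, and the worker
// could spin forever; with it, every iteration re-reads shared memory.
public class VolatileFlag {
    static class Worker implements Runnable {
        private volatile boolean running = true;

        void stop() { running = false; }   // written by another thread

        @Override public void run() {
            while (running) {
                // do work; the loop condition re-reads 'running' each pass
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Worker worker = new Worker();
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(50);        // let the worker spin for a moment
        worker.stop();           // guaranteed visible to the worker thread
        t.join(1000);            // returns promptly because the write is seen
        System.out.println("stopped=" + !t.isAlive());
    }
}
```

Without the modifier, this loop is allowed to keep running even after stop() is called, which is exactly the kind of subtle bug that removing volatile introduces.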


Furthermore, on a 32-bit JVM volatile makes writes to 64-bit variables (longs and doubles) atomic. To write a variable the JVM instructs the CPU to write an operand to a position in memory. But when using the 32-bit instruction set, what if the size of a variable is 64 bits? Obviously the variable must then be written with two instructions, 32 bits at a time.

In multi-threaded scenarios another thread may read the variable halfway through the write. At that point only the first half of the variable has been written. This race condition is prevented by volatile, effectively making writes to 64-bit variables atomic on 32-bit architectures.

Note that above I talked about writes, not updates. Using volatile won't make updates atomic. E.g. ++i, when i is volatile, would read the value of i from the heap or L3 cache into a local register, increment that register, and write the register back into the shared location of i. In between reading and writing, i might be changed by another thread. Placing a lock around the read and write instructions makes the update atomic. Or better: use the non-blocking atomic variable classes from the java.util.concurrent.atomic package.
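A quick sketch to illustrate the difference (the counter names are mine): two threads incrementing a volatile int can lose updates, while AtomicInteger's incrementAndGet performs the whole read-modify-write atomically.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    // visible to all threads, but ++ is still three steps: read, inc, write
    static volatile int volatileCounter = 0;
    static final AtomicInteger atomicCounter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                volatileCounter++;               // updates may be lost
                atomicCounter.incrementAndGet(); // never loses an update
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("atomic=" + atomicCounter.get()); // always 200000
        // the volatile counter may come out lower, depending on interleaving
        System.out.println("volatile=" + volatileCounter);
    }
}
```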

Side Effect

A volatile variable also has a side effect on memory visibility. Not just changes to the volatile variable itself are visible to other threads, but also any side effects of the code that led up to the change, as of the moment a thread reads that volatile variable. Or more formally: a write to a volatile variable establishes a happens-before relationship with subsequent reads of that variable.

I.e. from the perspective of memory visibility, writing a volatile variable is effectively like exiting a synchronized block, and reading a volatile variable like entering one.
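In code, this is the safe-publication idiom (a sketch; Config is an illustrative stand-in): the plain writes to the object's fields happen before the volatile write that publishes it, so a reader that sees the reference also sees a fully constructed object.

```java
public class SafePublication {
    static class Config {
        String name;            // deliberately not final, to show the guarantee
        int timeoutSeconds;
    }

    static volatile Config config;   // the publication point

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            Config c = new Config();
            c.name = "demo";           // ordinary writes...
            c.timeoutSeconds = 30;
            config = c;                // ...published by the volatile write
        });
        writer.start();
        writer.join();
        // Any thread reading 'config' after the volatile write also sees
        // the field writes that happened before it.
        System.out.println(config.name + ":" + config.timeoutSeconds);
    }
}
```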

Choosing Volatile

Back to my use of volatile to initialize a configuration once and cache it in a private field.

Up to now I believe the best way to ensure visibility of this field to all threads is to use volatile. I could have used AtomicReference instead, but since the field is written only once (after construction, hence it cannot be final) an atomic variable communicates the wrong intent: I don't want to make updates atomic, I want to make the cached value visible to all threads. And for what it's worth, the atomic classes use volatile too.
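The two options side by side, as a sketch (FilterConfigData is an illustrative name, not the actual class from the filter):

```java
import java.util.concurrent.atomic.AtomicReference;

public class CacheStyles {
    static class FilterConfigData { }   // stand-in for the cached configuration

    // The choice made in the post: a write-once volatile field.
    static class VolatileCache {
        private volatile FilterConfigData cached;
        void init(FilterConfigData c) { cached = c; }
        FilterConfigData get() { return cached; }
    }

    // The alternative: same visibility guarantees, but an API built for
    // concurrent updates (compareAndSet, getAndSet) that never happen here.
    static class AtomicCache {
        private final AtomicReference<FilterConfigData> cached = new AtomicReference<>();
        void init(FilterConfigData c) { cached.set(c); }
        FilterConfigData get() { return cached.get(); }
    }

    public static void main(String[] args) {
        VolatileCache cache = new VolatileCache();
        cache.init(new FilterConfigData());
        System.out.println("cached=" + (cache.get() != null));
    }
}
```

Both publish safely; the volatile field simply states the intent ("write once, make visible") more directly.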

Thoughts on this Sonar Rule

Now that we've seen what volatile means in Java, let's talk a bit more about this Sonar rule.

In my opinion this rule is one of the flaws in the default configurations of tools like Sonar. Using volatile can be a really good thing to do if you need shared (mutable) state across threads. Sure, you must keep this to a minimum. But the consequence of this rule is that people who don't understand what volatile is follow the recommendation not to use it. If they remove the modifier, they effectively introduce a race condition.

I do think it's a good idea to automatically raise red flags when misknown or dangerous language features are used. But maybe that is only a good idea when there are better alternatives that solve the same class of problems. In this case, volatile has no such alternative.

Note that this is in no way intended as a rant against Sonar. I do think, however, that people should select the set of rules they find important to apply, rather than embracing default configurations. Relying on the rules that are enabled by default strikes me as a bit naive: there's an extremely high probability that your project is not the one the tool maintainers had in mind when picking their standard configuration.

Furthermore I believe that as you encounter a language feature that you don't know, you should learn about it. As you learn about it you can decide if there are better alternatives.

Java Concurrency in Practice

The de facto standard book about concurrency on the JVM is Java Concurrency in Practice by Brian Goetz. It explains the various aspects of concurrency at several levels of detail. If you use any form of concurrency in Java (or impure Scala), make sure you at least read the first three chapters of this brilliant book to get a decent high-level understanding of the matter.

Book review: Understanding the 4 Rules of Simple Design

It's about time that a book is written about the 4 rules of simple design. These rules are possibly the most powerful yet least understood software design practices out there.


If you care about your software being easy to adapt to changing requirements and a continuously evolving environment, you should probably know about the 4 rules of simple design. Understanding the 4 Rules of Simple Design helps you grow a decent understanding of these rules. Or at least enough so that you can practice them yourself (at the next Coderetreat, maybe). So, go buy this wonderful book.

Slightly Longer Review

The 4 rules of simple design were originally codified by Kent Beck in the late 90's. Kent Beck also writes about these rules in his book Extreme Programming Explained. The rules that will lead you to a simple design are
  1. Tests pass
  2. Express intent
  3. DRY
  4. Small
By themselves they are simple rules. Maybe this is why I find people underestimate them. But following these 4 simple rules does lead to better designs. That's what makes them so interesting. The rules of simple design aren't usually taught at trainings. The only training format, that I know of, that practices these rules is Coderetreat.

The author of Understanding the 4 Rules of Simple Design, Corey Haines, is to Coderetreat what Kent Beck is to Extreme Programming; Corey is like the father of Coderetreat. For the past 5 years he has traveled the world giving trainings using the Coderetreat format. If anyone can write a book about the rules of simple design, no doubt it's him.

In the introductory part of the book Corey introduces himself, the Coderetreat format, and explains what Coderetreat is about and why it matters. He then discusses the 4 rules and explains what they mean.

These introductions lead to the examples chapter. This is the larger part of the book. The examples are based on the patterns that Corey noticed while facilitating Coderetreat sessions. They include how test names should influence the object's API, what DRY actually means, how to replace procedural polymorphism, and much more. For each example Corey presents the case and then leads you through the thought process of improving the design.

Of course there are many other design principles that should not be forgotten. Some of these, such as the SOLID principles and the Law of Demeter, are explained in this book as well. Corey mentions how focusing on the 4 rules of simple design naturally leads to satisfying most of the SOLID principles too. I share this experience.

In only a hundred pages Corey has managed to describe the 4 rules of simple design with some great examples that no doubt show you the true power of these 4 simple rules.

However, simple design is hard, and after reading this book everything won't just fit like magic. Corey has done an excellent job writing it, but software design is hard, and in my experience the way to get better at it is through practice. Reading this book is an excellent start.

The style of writing makes this book easy to read, from cover to cover. If you'd like a preview of Corey's writing you should definitely read his blog post Some thoughts on pair-programming styles.

Completed with a further reading list, this book is most definitely worth its money. As it's published through Leanpub you can set a price yourself, although you really shouldn't want to pay less than $20.

I would definitely recommend this book to programmers with all sorts of experience; from beginner to expert, from apprentice to master craftsman.

For a little critical note: I found it a bit disturbing that the first release didn't include a picture of Zak the cat. It's awesome that he has fixed that, so this wonderful book now comes with Corey's trademark! ;-)

Disclaimer: what I've written here is my own personal opinion. I do not have any stakes in selling this book. I don't benefit financially from Corey selling more copies, nor books sold to visitors of this blog.

Method References in Ruby

Today I was writing some Ruby code for filtering arrays when I recalled that Ruby has a shorthand notation to pass a method by its name (as a symbol). I'm writing this post to help me remember, while I figure out how it really works.

Say, for example, that we need to select the odd numbers from a sequence. Of course we could pass a block to do that.
(1..9).select { |n| n.odd? } #=> [1, 3, 5, 7, 9]
But we can also pass a symbol, prefixed with an ampersand. While not actually accurate (in a few minutes you'll see why), this is what I call a method reference.
(1..9).select &:odd? #=> [1, 3, 5, 7, 9]
Personally I prefer the latter form, as it better expresses the essence of what I want my code to do: from the range 1 to 9, select only the odd numbers.

How does it work?

When googling for information on the ampersand unary operator in Ruby, I first found a few blog posts and Stack Overflow answers telling me that & calls to_proc on the given object. That would be convenient, although it implies that this would probably only work for methods that accept either a block or a proc. Let's worry about that later.

First, let's try and see if this works. If & called to_proc on the symbol, I should be able to pass a proc myself, too.
(1..9).select :odd?.to_proc
ArgumentError: wrong number of arguments (1 for 0)
Whoops! That didn't work too well. What's even more interesting is that if I just put the ampersand in front, the code runs.
(1..9).select &:odd?.to_proc #=> [1, 3, 5, 7, 9]
So maybe those first hits weren't entirely correct. At this point it seems more likely that the ampersand turns a proc into a block, and that, given any object, it first gets a proc by calling to_proc.

To see how my new theory works out I use a custom object that responds to to_proc. Because Ruby's blocks are special things that cannot be created or assigned to variables, I cannot simply capture the resulting value of &:odd?. Let's give it a try.
odds = Object.new
def odds.to_proc() ->(n) { n.odd? } end

(1..9).select &odds #=> [1, 3, 5, 7, 9]
Awesome! However, this doesn't really confirm that the ampersand in fact turns the proc into a block, only that to_proc is actually called on the object.

To check that the ampersand turns the proc into a block I use a function that yields.
def yielder(n) yield n end
yielder 1, &odds #=> true
yielder 2, &odds #=> false
While passing a proc obviously won't work.
yielder 1, odds.to_proc
ArgumentError: wrong number of arguments (2 for 1)
Tentatively my conclusion is that the ampersand unary operator takes anything that duck-types as a proc (i.e. an object that responds to to_proc) and passes it to the method as a block. If you see any flaws in this, don't hesitate to let me know.

How Agile is a Scrum team?

Most teams I meet today are agile. Or, so they proclaim to be. All of these teams do Scrum, and that makes them agile. Doesn't it?

If I look back at the 12 practices the Agile Manifesto is built on (short: the Agile practices), I conclude that Scrum values a subset: the planning game, on-site customer, small releases and whole team.

Yet most of the Scrum teams I meet have no customer on-site, although the teams do value this practice. Furthermore I mostly see a formal separation of developers and QA, and more often than not these teams use large releases (cycles of more than a few months). On the upside, most teams use Continuous Integration and have a set of coding standards, often formal.

Teams applying on average 3 of the 12 Agile practices make me wonder. Are these teams actually agile? Or maybe they are just a "little agile". Is that a thing?

Agile Principles

Let's take a look at the Principles behind the Agile Manifesto. The number one priority is "to satisfy the customer through early and continuous delivery of valuable software". Closely followed by the importance to (even late in development) embrace changing requirements and the notion that working software is the primary measure of progress.

These may seem to be supported by the planning game with user stories, doing the most valuable story first. That way the customer gets the most value out of each sprint, right? While that's true, I believe there's more to it.

While valuable software is very important, I think the key is in the early and continuous delivery of software. We add value to software by changing and extending it. This is what the planning game and user stories won't help us with. But changing software is highly valued by the Agile Principles.

Rest of the Agile Practices

And therefore there are Agile Practices that support us in changing software, enabling us to embrace change of requirements. These practices include automation of acceptance tests, test-driven development, pair programming, simple design and refactoring (not in any specific order).

I wonder how well Scrum teams can keep up the agile principles if they don't follow any other of these practices. I've seen teams respond to new requirements by demanding a "refactor sprint" to clean up the mess they made. Because there was no way to incorporate the changes otherwise. I've been on such teams years ago.

I won't state that it's impossible for teams to continuously deliver valuable software without following most of the agile practices. But I do wonder how they could at all. I mean, without simple design and constantly keeping the code clean, how well can code be changed, even a few months after creating it? What raises a flag for a broken feature when stable unit and acceptance test suites are lacking?

So without most of the agile practices, can you really get into a stable and continuous delivery of value?


I don't think Scrum is to blame here. Don't get me wrong. I like Scrum.

Scrum mostly embodies the planning and management rules of Extreme Programming (XP). I believe it's thanks to Scrum that much of the planning and management practices have made their way into mainstream today.

It's just that, because Scrum doesn't include the other Agile practices, many folks doing Scrum think those are somehow unimportant. The most successful teams I've seen are all doing most, if not all, of the other Agile practices as well. This is perfectly possible for a Scrum team.

Your mileage may vary

Over the past years I've been practicing different ways to write software, but every time I came back to the agile practices, as I find them to work best.

Your mileage may vary, of course. If you use different practices that support the Agile principles even better, I would love to hear about them and try them myself.


Lineman

I think it's fair to say that JavaScript is no longer that "thing" we used to enhance our otherwise static web apps with by adding some dynamic elements. Most users use a JavaScript-enabled browser, and users that lack JavaScript support are simply outnumbered. Without JavaScript you cannot use a large number of online services, and this number is growing.

From time to time I've been playing around with JavaScript frameworks like Angular and Backbone. When you want to start an app with any of these frameworks - or in fact, any JavaScript app for that matter - you'll quickly run into tools like Grunt and Bower. If you don't know what they are, go ahead and see what they're about. I'll be here, waiting patiently for your return. On the other hand, you don't need full knowledge of these tools, so you might as well read on.

The problem with starting an app like this is that it's quite a chore to set up. You may need configuration to compile CoffeeScript and Less, a unit test library, something to run the tests, something to minify your JavaScript, something to run JSLint. The list may seem almost infinite.

Enter Lineman

Lineman is a thin wrapper around tools like Grunt and Test'em to take away the initial configuration burden from developers and let them focus on what they love most: building great apps.

Justin Searls, the creator of Lineman, did a brilliant talk on what's been wrong for years with building JavaScript apps. In this talk he gives an overview of some of the major problems you face with JavaScript development and how Lineman helps out. I'd recommend you watch his talk.

The difference with Yeoman (a competing tool) is that Lineman is not based on code generation. Where Yeoman generates your initial project with a full Grunt configuration, example code (which you don't need) and some other stuff, Lineman installs itself as your Grunt configuration. The benefit is that you can update your tools without breaking the configuration (or having to update it manually).

With Lineman all you need to do to start building a new app is tell it to create one.
$ lineman new <app-name>
That's it. It provides something much like a walking skeleton for your application that you can run and test. You can immediately enter your regular build cycle of changing code and having it automatically built and tested each time you save a file. And if that's not all, it's blazingly fast: one of the goals the creators of Lineman set is for each incremental build and test run to take no more than a few hundred milliseconds, independent of the size of the codebase. Now if that doesn't stimulate TDD, I don't know what will.

To start the build cycle all you need to do is to fire up lineman run in one shell and lineman spec in another.

Both of these watch your sources, so your app is built and tested whenever you save a change. You need to keep both running because lineman run compiles your sources, while lineman spec watches the built files to run your specs. It took me a little help to find this out (much thanks to Justin for helping out).

Framework support

So what about framework support? Lineman creates a vanilla flavored app. This is the walking skeleton, but includes nothing like the framework support you can get from Yeoman.

Personally this is the way I prefer to start an app. Any libraries and frameworks I need I will include myself. Not only does this give me full control over which vendor libs I actually use, it also lets me decide when to include them, and gives me the opportunity to defer their inclusion if I don't need them (yet).

If you'd like to have some popular framework already included, that's okay too: all you need to do is clone one of the Lineman template projects from GitHub and you'll get a walking skeleton app with that framework. At the moment templates are provided for Backbone, Ember, and Angular, and for building a library or a blog.

What's next?

At this point I just want to start building things. The Lineman website covers a lot of the possible configuration.

Next to build and test configuration one of the first things I need in most apps is to be able to run end-to-end tests. I suspect Lineman can help me out with this too.

I hope I'll keep enjoying Lineman as much as I do in the first few hours of using it.

Fast Remote Service Tests

Testing code that interacts with remote services is often pretty hard. There are a lot of tradeoffs that influence what tests you can write and how many tests to write. Most of the time you have zero control over the data you get from the service, which makes assertions tough, to say the least.

A while ago I used the VCR library to write some Ruby tests against a remote service. VCR addresses the above problems. It records your test suite's HTTP interactions to replay them during future runs. The obvious benefits are fast and repeatable tests.

This week I was wondering whether that's a thing for Java as well. As it turns out there's Betamax to do that. Actually Betamax is a Groovy port of VCR that can be used with any JVM language.

Betamax installs a proxy between you and the target host, records each request and response on tape, and replays the tape for known requests. It works with any HTTP client that respects Java's proxy settings, and with a bunch that don't, such as Apache HttpClient and WSLite.


In a JUnit test you can use Betamax as a method-level TestRule. Each test method that should record and replay gets a @Betamax annotation that sets a tape.

Consider the following example where I use the Spotify Metadata API to get the popularity of an artist. In this example I use the Apache HttpClient library and configure it for Betamax.
public class SpotifyTest {
  @Rule public final Recorder recorder = new Recorder();

  private final DefaultHttpClient http = new DefaultHttpClient();

  @Before
  public void setUp() throws Exception {
    // route the HttpClient's traffic through the Betamax proxy
    BetamaxRoutePlanner.configure(http);
  }

  @Test
  @Betamax(tape = "fixtures/popularity")
  public void get_popularity() throws Exception {
    Spotify spotify = new Spotify(http);
    assertThat(spotify.popularity("The Beatles"), is(.55f));
  }
}
At the moment of writing this code the popularity of The Beatles is .55, but as this number is based on user opinion it is highly likely to change. Using a Betamax tape returns the same response (as long as the request does not change) and allows us to assert .55 for popularity.


As I've shown, Betamax properly records and replays any HTTP communication using either a proxy or a wrapper class (as in the example). HTTPS is also supported, but gets a bit more interesting when you use Betamax in a proxy-based setup. With a wrapper it will work just fine.

The problem with HTTPS and a proxy-based setup obviously is that the proxy cannot intercept data on standard HTTPS communication. This is why we trust HTTPS.

Betamax has its way around this. You can enable sslSupport on the Betamax Recorder. When your client code is okay with a broken SSL certificate chain you can make this work.

Again, this is only really a problem when you use a proxy-based setup. A client wrapper hooks Betamax directly into the API calls, which makes HTTPS communication easier.

Try it yourself

Betamax can help you write fast and repeatable unit tests for clients of remote services. The biggest benefit to me is that the tests are really fast because remote communication is eliminated. Asserting on specific values can be helpful, although personally I prefer a property-based style for these tests (e.g. popularity must be a number >= 0 and <= 5).

Give Betamax a try the next time you interact with a remote service.

Code Katas

In this post I want to talk about code katas. Most of you have heard of code katas before, and many of you have probably done some. A while ago code katas were getting a fair bit of attention, including, of course, some skepticism.

Just in case you haven't heard of code katas yet, let me explain briefly. The term code kata was coined by Dave Thomas, co-author of The Pragmatic Programmer. It's a bow to the kata concept in martial arts. Kata is a Japanese word that literally translates to form. Originally a kata was a training method to help a student master a specific move or technique.

The intention of code katas is to help programmers master just that: a specific move or technique. To practice these we use simple algorithms that we can dream up at any moment. Think of sorting and prime factor calculation.

So what's effective about these code katas? I mean, coding up the same algorithm time after time doesn't seem like practicing any technique, except maybe your typing skills. This is why skeptics will tell you that katas are not effective and no more than a waste of time. Solving the same set of programming problems doesn't make you a master programmer.

And they are spot on. Solving the same set of problems with (almost) the exact same code time after time doesn't help a bit towards mastery.

Yet I totally believe that katas can be a very effective way to practice. Just not by coding up a solution to a problem in the same way over and over again. On the path towards mastery you will have to find many ways of solving problems, not just one. A kata is not about the solution to the problem. It's about the path of how you get to the solution.

How many ways can you think of to calculate a Fibonacci sequence? In what forms can you code a bubble sort? And a quick sort? There's no need to learn these algorithms, you probably can dream them already. Instead you focus on the code you use to get you there. You'll walk along a certain path until you master it. You make little tweaks to the way you code up the solution along the way. You carefully consider every step you take.
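For instance, here are two of the many paths to the same Fibonacci solution in Java; the point of the kata is the route, not the 55 at the end:

```java
public class Fib {
    // Path one: the naive recursive definition, close to the math.
    static long recursive(int n) {
        return n < 2 ? n : recursive(n - 1) + recursive(n - 2);
    }

    // Path two: an iterative form, trading elegance for linear time.
    static long iterative(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(recursive(10) + " " + iterative(10)); // 55 55
    }
}
```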

To me this is what katas are really about. I often use katas to practice and learn. Again, not about the solution, but about the path. My path. The way I code. Katas help me to challenge each of my steps along the way. Is this next step the best I can think of? Why this step? Why not another? By challenging every step on my way it's almost like pairing with myself. As a navigator I can pick up important feedback.

Then I find code katas also particularly useful for learning a new language. The famous Hello World asserts that my installation works and I can write a main. Next I want to feel the language. By doing a few katas I can get a decent grasp on its syntax. Then I want to explore some of the language's unique features. Because I've seen many solutions to the katas in various languages I can get a feeling of what the language has to offer.

That said, katas are not the only way to practice and to learn. Many programming languages now have a set of koans available to help you learn the language. I find these very useful, too. And most katas are useless for learning more about a particular library or framework. Which is not the point of katas anyway.

As you can read, in my experience code katas help me practice and learn. Yet I completely agree that doing katas by coding up the same solution over and over again, is a waste of energy. I want to reiterate that it's not about the solution here, it's about the path. Most of the time I delete my code directly after I finish a kata. The lessons I learn from walking the path remain in my head and I won't ever look back at the solution. The exception here is the first few katas I do in a completely new language. Before I have the syntax in my fingertips, sometimes I find it useful to look back at the previous exercises.

If you're interested, there are a lot of different katas out there. Google is your friend. To get started you can use this Kata Catalog.