Sunday, May 31, 2009

Advanced Java "Stump the Chump" Interview Questions Part 3

A while back I wrote an entry listing some Java-based “stump the chump” questions. These are questions I have encountered or used in interviews to separate the competent developers from the extremely competent ones. The idea is that the questions represent fairly obscure areas of development where the details can cost days or months of productivity but aren’t really deal breakers in terms of employability.

Recently I encountered a few more such questions, this time in the realm of concurrency and multi-threaded development so I thought I’d document them for future reference.

Question 1: Why will the following code not always terminate?
public class Foo {
    private boolean shouldStopFlag = false;

    public void methodCalledByThreadA() {
        while (!shouldStopFlag) {
            // Do some work here.
        }
    }

    public void methodCalledByThreadB() {
        shouldStopFlag = true;
    }
}
The answer to this problem is a fairly subtle one that deals with data visibility. As written, the JRE is allowed to put the shouldStopFlag in any place in memory, including registers and caches, that isn’t necessarily visible to all threads. I first read about this issue several years ago in the excellent book, Java Concurrency in Practice by Brian Goetz, Joshua Bloch, et al. I even gave a speech at the Rocky Mountain Oracle Users’ Group (RMOUG) Training Days where I warned others about the issue. However, I still had to actually make the mistake and lose a couple of hours wondering why before I truly appreciated the fact that this isn’t really an isolated, “one in a million” kind of problem. (I was running a dual-core machine and saw the problem about one time in a hundred calls. Fortunately, I was using test-driven development and was able to reproduce the symptoms every couple of runs.)

There are several solutions that can address this problem. The first is to use a synchronized block to guard the flag. (That may be overkill in this particular code sample but I believe that, in general, synchronized blocks have an unfair stigma due to performance problems that were addressed a long time ago.)
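
As an illustration, here is a minimal sketch of the synchronized approach applied to the sample above (the lock object and the shouldStop() helper are my own additions, not part of the original code):

public class Foo {
    private final Object lock = new Object();
    private boolean shouldStopFlag = false;

    public void methodCalledByThreadA() {
        while (!shouldStop()) {
            // Do some work here.
        }
    }

    public void methodCalledByThreadB() {
        synchronized (lock) {
            shouldStopFlag = true;
        }
    }

    // Every read and write of the flag goes through the same lock, which
    // provides visibility as well as mutual exclusion.
    private boolean shouldStop() {
        synchronized (lock) {
            return shouldStopFlag;
        }
    }
}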

The second option is to use the volatile keyword. This instructs the JRE not to put the variable into areas that aren’t visible across threads. (The meaning of this keyword has changed slightly starting in Java 1.5 so be careful of older documents covering the topic.)
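
The same class with the volatile fix might look like this (a sketch, nothing more):

public class Foo {
    // volatile guarantees that a write by thread B is visible to thread A.
    private volatile boolean shouldStopFlag = false;

    public void methodCalledByThreadA() {
        while (!shouldStopFlag) {
            // Do some work here.
        }
    }

    public void methodCalledByThreadB() {
        shouldStopFlag = true;
    }
}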

The third option is to make the variable into a java.util.concurrent.atomic.AtomicBoolean. This class provides a lock-free, thread-safe version of the boolean variable. AtomicBoolean variables also have other standard, atomic methods such as compareAndSet and getAndSet. Finally, according to this source, the atomic package also takes advantage of underlying hardware to implement the atomic behavior.
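
And a sketch of the AtomicBoolean version, for comparison:

import java.util.concurrent.atomic.AtomicBoolean;

public class Foo {
    private final AtomicBoolean shouldStopFlag = new AtomicBoolean(false);

    public void methodCalledByThreadA() {
        // get() has the same visibility guarantees as reading a volatile field.
        while (!shouldStopFlag.get()) {
            // Do some work here.
        }
    }

    public void methodCalledByThreadB() {
        shouldStopFlag.set(true);
    }
}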

In general, I’ve noticed that certain concurrency issues and tools such as race conditions, semaphores, and deadlocks are well understood by developers but visibility issues unique to Java are not as widely known or understood. (How many people know what the Java Memory Model is and why it changed as of Java 1.5?)

Question 2: Why are happens-before relationships important in the Java Memory Model?

First a note: Notice that I didn’t ask, “What is a happens-before relationship as it pertains to the Java Memory Model?” The reason for this is that the best descriptions that I’ve read of happens-before relationships take several hundred carefully-chosen words and are full of subtleties that I think even the experts would have trouble getting right without a cheat-sheet. The clearest explanation that I have seen on-line so far is here.

In any case, the reason that happens-before relationships matter in multi-threaded applications is that under certain circumstances, the compiler and the JRE are allowed to execute instructions in a different order from the one actually written. The reordering is invisible most of the time. (In single-threaded applications, it is invisible all of the time.) However, in multi-threaded applications, the reordering may be visible if the affected sections of code aren’t properly synchronized.
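
The textbook illustration of this (a variation on the visibility example in Java Concurrency in Practice; the class and field names here are mine) looks harmless but is broken:

public class Reordering {
    private static int number = 0;
    private static boolean ready = false;

    // Called by thread A.
    public static void writer() {
        number = 42;
        ready = true;
    }

    // Called by thread B.
    public static void reader() {
        while (!ready) {
            Thread.yield();
        }
        // Without a happens-before edge between writer() and reader(), the
        // Java Memory Model allows this to print 0, or the loop above may
        // never terminate at all.
        System.out.println(number);
    }
}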

I feel lucky not to have personally run afoul of this issue (that I know of). Some of the just-in-time compiler’s optimizing capabilities are really cool but I’d hate to have to try to identify this issue in a live system.

Friday, October 24, 2008

Note to Self: Software Developers are not the Center of the Universe

Those who have read my previous blog entries will have noticed by now that I like to take a somewhat contrary tone in my writing. Of course, in my everyday career life, I find plenty of useful, if uninteresting, things to think about in the software world. New languages such as Flex are making so much progress that I get tired just trying to keep up, and of course there remain the day-to-day minutiae that really aren't that interesting to write about and would be even less interesting to read about. For those interested in keeping up with the latest technological happenings in the development world, I would recommend Dustin Marx's blog. He has a passion for what he writes and the energy of a man ten years my junior when it comes to documenting his findings. For my own part, I try to document the little epiphanies that I occasionally stumble across during my development activities. Recently, I had another one.

I was reminded of something so common that it seems to be often overlooked by me and the other developers I've met: we are not the center of the universe. Putting aside the "No Duh" obviousness of that statement for a moment, allow me to point to a couple of examples of where this principle seems to be forgotten.
I was at a conference a few months ago listening to an expert panel argue (again) about the merits of static versus dynamic languages. One of the experts was railing against Java as being "high-ceremony" and generally too verbose. He complained about the number of exceptions that need to be checked when performing file I/O. He argued that many catch blocks are usually left blank and that the language shouldn't force developers to write so much code to handle these things. He finished the argument with an almost off-handed comment, "… especially since all of those exceptions pretty much never occur anyway," or something to that effect.

At that point, another expert on the same panel responded, "As someone who's worked in an operations center when a hard disk was failing, I can tell you that those exceptions do occur and they should be treated seriously by the developer, even if they are rare." After pausing for several months to digest what I'd just witnessed, the blinding flash of obviousness finally hit me: the day-to-day events of the software development world are not representative of the day-to-day events of the operational world, the "real" world that our software operates in. Software developers are not the center of the universe and our typical experiences are not representative of anyone else's.

In the above example, if my hard drive started to fail I'd be thinking about the last time I backed up my data and checked in my code. I almost certainly wouldn't be thinking about how gracefully my software was handling the problem. In fact, I probably wouldn't even be running my software when I noticed things starting to go wrong. ("Quick, now that I'm getting intermittent blue screens of death, let's fire up the web server and see if the sessions fail gracefully.")

I don't think that the importance of that one-in-a-million hard disk failure had ever occurred to the first expert when he was complaining about having to catch too many kinds of exceptions. (And no, you should never leave them blank.) But as developers, we often spend so much of our time focused on the act of software creation that we rarely experience (or think about), the everyday life of the users. I'm not talking about software usability (though that sometimes shows up as a symptom). What I'm pointing out is that we regularly choose languages, libraries, tools, and practices that make our life easier without really considering what happens after we deliver the product and go away.

Here's another example. I've met many open-source proponents who honestly don't understand why any company would purchase a half-million dollar application server, database, ESB, or portal server when there are plenty of open-source offerings available, "for free." Answer: Only a small portion of the total cost of operating any of these products is in the actual development. The majority of the costs come from keeping our software running day after day, year after year. In that time frame, when (not if) a problem occurs, the $500,000 in licensing fees is a drop in the bucket next to the potential losses if the enterprise fails catastrophically. Companies will spend that money up front if it will help to keep those clusters running smoothly.

Most developers that I've met don't think about their software that way. (That included me until recently.) Instead, they see the headaches with installing, configuring, and learning a new "proprietary" tool. They look at the time it takes to get a server "out of the box" and running and feel that it takes way too long and involves too many steps. They tend to worry more about the turnaround time when hot deploying their latest compilation and less about what it would take to roll back a new version of their software without taking an entire cluster down if something goes wrong.

All of these issues are valid too…for a developer. An operations floor doesn't care if a hot deploy takes seconds or tens of minutes. They don't deploy that often. They do, however, care about not having to restart their enterprise if a piece of hardware fails for some reason.

I suppose it's fair to consider the possibility that an operations center isn't the center of the universe either. However, all of this really is meant to serve as a reminder (to me if no one else) that while we are paid to solve problems, they're our customers' problems. Our problems are often secondary in the grand scheme of things.

Saturday, May 10, 2008

When Fear Cripples Engineers

Recently, one of my colleagues applied to speak about Flex at a major local conference. Instead of receiving the standard form letter rejecting his submission, he received a polite Email telling him that his speech was declined because most of the conference attendees didn’t like Flash. Although my colleague was disappointed, we were only a little surprised. However, the subject of our conversation afterwards wasn’t the conference organizer. He was making a sound business decision based on what he believed would sell the most tickets. (By the way, after talking with some more of his peers, he changed his mind and the speech is now scheduled.)


Actually, the real subject of our conversation was why engineers reject some technologies out of hand. It may come as a surprise to those who don’t work in our industry but for all that we’re supposed to be altruistically pursuing technical solutions to fairly abstract problems, the reality is that baser emotions like fear, laziness, and a desire for power play at least as large a role in the everyday life of a developer. Sure, objections to new ideas are wrapped in terms like “technical risk”, “unproven technologies”, and “vendor lock-in,” and sometimes the people using them mean exactly that and no more. However, equally as often, they’re used in place of real truths like, “I don’t know that new technology and I’m afraid to learn it,” or “I like being the only person in the building who knows this other technology.” In fact, the ugly truth is that egos in the engineering field can be every bit as large as they are in Hollywood.


I do not mean to imply that I’m somehow immune to the very same emotions. I’m not. However, feeling a certain way and letting it affect my behavior aren’t the same thing. For that reason, I’m going to post a few suggestions for anyone who is interested. These practices have helped me advance my career without compromising my integrity. Most of these should be obvious and none of them are new. (Somehow, that didn’t stop me or my co-workers from having to re-learn them the hard way.) And hey, if you’re perfect, you can still hang them on the wall as a subtle hint for that annoying other guy in your office.



  1. Know what you’re not willing to do – I wouldn’t exactly advertise these around the office but I do know which tasks I won’t do and why. I also know which tasks I would only do long enough to find another job. Some of them are technologies where I would never be better than mediocre. Some of them are technologies that I don’t believe will have a future regardless of what I read on the Internet. I also know that these are areas where my job could one day be at risk. However, nobody can be all things to all people. The key, in my opinion, is not to hide (or coddle) my weaknesses. Instead, I think it’s better to play to my strengths and work on the areas where I believe improvement will get me where I want to go in the future.

  2. Do not get emotionally attached to your code – I’ve lost count of the number of times I’ve seen people make bad technical decisions in order to protect code that they wrote. Ours is a creative industry. Creating new code is what we do. When we cling to existing code unnecessarily, it sometimes prevents us from trying entirely new and better ways of doing things. Of course, infinite refactoring is a slow road to failure too but don’t preserve code beyond its useful lifetime.

  3. Learn new skills constantly - In a field where change is the only constant, the secret to longevity is to be comfortable learning these new skills. This is easy to say, but a better way to measure it is to ask yourself a few questions. When was the last time you learned a new computer language? When was the last time you read a technical book? How about the last conference you attended? Life is of course that thing that happens to us all when we’re busy making plans. However, learning new skills helps to alleviate worries about when the next round of layoffs is going to occur.

  4. Being the new kid on the block builds character – Within the programs that I’ve worked on, I’ve nearly always become a local subject matter expert in a few areas. Admittedly, it feels good to have people come ask questions once in a while. However, that recognition vanishes the moment I transfer to a new group. This has generally proven to be a good thing. It helps to force me to stay on top of the facts and not rest on my laurels. Also, it reminds me regularly just how many other people out there are smarter than I am.

  5. Capitalize on uncertainty – Software projects regularly face uncertainty. Sometimes the customer doesn’t quite know what it is that they want. Sometimes funding can be uncertain. Occasionally companies are bought out or layoffs sweep through the office like a plague (with fear and superstition following in its wake I might add). During these times, I’ve noticed that many (perhaps even most) people withdraw. Productivity goes down and they’re afraid to make decisions. The words coming out of their mouth start to be all about their troubles and doubts. Learning to thrive in these instances is one of the big secrets to both personal and corporate success. All of the success books that talk about vision and commitment (and there are tons of them) are really geared toward how to handle these situations. The thing that still amazes me, even after a decade in the industry, is just how many people out there are determined to see failure and impending doom around every corner. Don’t be one of those people. Have I failed occasionally? Of course. In fact, one of the easier measures of success is to count the number of times you’ve failed and kept going. Successful people fail regularly. Unsuccessful people don’t seem to fail all that often. Instead, they quit attempting anything and just wait for someone else to tell them what to do.

  6. Dare to ask questions. The truth can withstand scrutiny. – My entire family seems to be genetically incapable of just sitting down and shutting up. Sometimes, (ok regularly,) this can get us into trouble. However, throughout college and my professional life, I’ve still noticed that there are classes and meetings where it seems like one or two people in the room are doing all of the talking and everyone else is sitting around trying to act like they know what’s going on. The funny thing is that when I’ve dared to show my ignorance by raising my hand and asking if someone could please explain the subject to me in small words, others in the room seemed relieved to have the explanation. I’ve also noticed that big words and lots of acronyms are regularly used as a substitute for actual understanding. In short, don’t be afraid to admit what you don’t know and don’t be put off by someone else asking tons of questions. The truth can stand up to questions, BS cannot.

  7. Teach others how to do your job – It is occasionally tempting to not share what you know with someone else. After all, if they can do your job, then why exactly would a company want to keep you? I’ve even seen people attempt to build a career around that premise. This usually resulted in more fear on the employee’s part instead of less. The fact is that companies cannot and should not live with a single point of failure for any length of time. Hoarding knowledge is a bad idea, no matter how smart it seems. The other fallacy with this idea is that success is a zero-sum game. (If someone else succeeds, it was only because you failed or vice-versa.) Actually, I (and many others) would say that long-term and lasting success is one of those things that you only get by giving away. Make others successful and you can’t help but succeed.

  8. Admit when you’re wrong – Ok, I’ll admit it. I hate doing this. More accurately, I hate making mistakes. Recently I made a bone-headed mistake that cost a couple of hundred dollars. What frustrated me more was the fact that I knew better and I still made the mistake. I suppose in the grand scheme of things it wasn’t too big of a deal. I even had a co-worker who suggested a way to cover it up. However, experience shows that this pretty much always backfires. Instead, I recommend going the other direction. If I’m the first to discover and admit my mistake, it usually eliminates about 90% of the grief that I would have gotten if someone else had announced it first.

Friday, December 21, 2007

Critical Thinking about Software Technology

For all that we work in an industry full of well-educated and presumably intelligent people, it still amazes me at times just how much we, as an industry, regularly act like lemmings. By way of example, consider what kinds of responses you would typically find to the following statements if you’d encountered them on-line in the last 5 to 7 years:
  • J2EE is overly expensive, over used, over-hyped, and too complicated. The DTO pattern is over-used and a sign of a failed architectural design.

  • Agile is well intentioned but too hard to implement and doesn’t really work in all development environments.

  • Dynamically typed languages like Ruby are good for a niche market but aren’t suitable for large-scale programs.

  • Getters and setters are over-used and a sign of bad or thoughtless OO design.


Perhaps I’m being too negative. After all, it’s easy to complain about something. Let’s try some affirmative statements instead:
  • EJB 3.0 is well designed and ready for prime time. Developers should consider incorporating it into their designs.

  • JSF is ready to take off and become the next major web-development framework.

  • Web services are a growing technology and definitely on the must-use list for distributed applications.


Before proceeding, I’d like to pause for a moment and point out that I’m not actually endorsing any of the statements above. That’s not really the point. (Actually, I deliberately picked a mix of statements that I agree with and disagree with.)

The point is that nearly all of the above statements, at one time or another, are or were virtually guaranteed to start a flame war. What strikes me as odd though, is that many of the people repeating the arguments for or against any given technology often haven’t tried it. Personally, I believe that we tend to look for consensus on the Internet. After all, not many people have the luxury of spending months trying out a technology before they pass judgment on it. However, in my experience, the Internet doesn’t really show a representative sample. When I’ve spoken at conferences, I or a colleague will often poll the audience on their previous experience with whatever we’re talking about. How many people are developing for the web? According to my unscientific observations, about half. How many people have actually used a relatively new language like Flex? My bet is that fewer than 10% of the people will raise their hands when I ask that in a couple of months. Ruby? Probably very few. Yet if you just read blogs, technical articles, vendor websites, conference speaker’s notes, etc. it would seem that nobody is still using “old” technologies like Struts.

I also believe that there are many valid arguments against all of the above statements. However, they’re only valid when delivered by people who have thought critically about the issue, taken their own personal experiences and observations into account, and left room for the possibility that their experiences probably don’t represent all of the possible perspectives on the issue. (This applies to me too. I encourage anyone reading this to take what I’m saying with a grain of salt. After all, I’m just some guy writing a blog too.)

So, here is one of my New Year’s resolutions: I will think critically about all of the new technologies, ideas, and buzzwords in the coming year. Just because someone is excited doesn’t make it a brilliant idea no matter how smart the individual is or how great his or her reputation is. Similarly, I will not bash someone else’s ideas simply because everyone else is down on it. I, like everyone else in my industry, am paid to think. I am not paid for my charm, wit, or dashing good looks. (which is just as well) Therefore, when I fail to think for myself and arrive at my own conclusions, I am over-paid.

Saturday, November 17, 2007

The 80/20 Rule

I’m regularly surprised at how ill handled or abused the 80/20 rule is. In this case, I’m not referring to the Pareto principle, where 80% of an effect comes from 20% of a cause. What I’m referring to is more along the lines of a “rule of thumb.” (I don’t actually know if there’s a technical term for this.) The idea behind the 80/20 rule is simple: almost every rule, policy, or best practice has exceptions. (In short it only applies about 80% of the time.) Naturally, there is nothing special about the 80-20 split. Some things are 90-10, 95-5, or 60-40.

So why bother with a blog entry stating the obvious? Everybody knows that rules have exceptions. However, I have two reasons for mentioning the issue. The first is as an official disclaimer. Nearly every rule, best practice, software pattern, or idea that I (or anyone else) will ever post will have at least one exception. Generally, when I call out a best practice, it should go without saying that there are times when it won’t apply. As developers, it is our responsibility to think critically and to know when not to use an idea. That’s what we’re paid to do. I also believe that we ought to know why any given best practice is considered a good idea. Why do coding standards encourage the use of getters and setters? Why are goto statements considered evil in object oriented code? Why do we make variables private and not public in a class? Answering, “Because everyone else does it that way,” is not good enough. Coding standards should not be about applying a technical form of peer-pressure, they should be about helping developers recognize a potential mistake before it’s made.

The second reason for discussing the 80/20 rule is to point out the consequences of trying to create a blanket policy that doesn’t allow for exceptions. When called out, the notion seems ridiculous but it’s done all the time in our industry. (How many people have had to write three-line javadoc comments for a one-line getter or setter?) Honestly, some of the worst hacks I’ve ever seen came from attempting to make a design pattern fit a problem it didn’t apply to. Well-intentioned management policies can violate this rule too. (The concrete examples that I’ve witnessed are worthy of their own blog entries and have been saved for later.) In general though, I would say this:
Life is full of exceptions. Knowing what the exceptions are and how to work around them is at the heart of learning to write good code. Or maybe I should have said, “80% of learning how to write good code is in understanding the 20% of the 80/20 rule.”

Saturday, November 3, 2007

Why Write About Beginning Java

After some careful consideration, I’ve decided to include “Beginning Java” entries in my blog. When I was in college, I noticed that academia seems to feel that it’s not worth writing anything that isn’t cutting edge. Working out in the “real world” afterward, I found plenty of people who act as though the glory in programming (if you can call it that) comes from knowing the rare and obscure hacks, compiler issues, and language features that separate the true alpha geeks from the wannabes.

Frankly, having written code since the second grade, gotten two degrees on the subject, and done it professionally for the last decade, I tend to be of the opinion that coding is what I do, not who I am. I also observed something else while getting my brown belt in Kempo: the thing that really separates the white belts from the black belts isn’t who can do the complicated stuff like tornado kicks, it’s how efficiently and consistently they can do the basics like blocking and punching.

So what makes the basics interesting? For me it’s questioning why the things we call best practices actually are (or are not in some cases). It’s all well and good to document what a dynamic proxy is and why anyone would need one (and maybe I will someday) but I think it’s also important to periodically revisit the industry standards that everyone follows and ask, “are we doing what’s best or are we acting like lemmings?”

I love the fact that someone was willing to take a stand and declare that getters and setters are evil. Even if I didn’t agree (which is not the case as it turns out), it encourages programmers to stop and think critically. I don’t know if the things that I plan to write will turn out to be as profound or widely noticed, but in the spirit of, “when a developer stops thinking, he’s overpaid,” I intend to post a few of my own back-to-basics blog entries.

Saturday, October 20, 2007

Separating Advanced Java Programmers from Competent Ones: “Stump the Chump” Interview Questions Part 2

Q: What is Jar sealing and when would you need it?

A: This question relates to how class loaders are implemented. (It’s my preferred stump the chump question and I’ve actually run into this situation before.) The background necessary for this question is as follows:

Class loaders are hierarchical. Most J2SE applications have at least three class loaders and J2EE apps generally have more. (The class loaders in a J2EE app form a tree, which allows one Java VM to protect deployments from namespace collisions, etc.) Each class loader, except for the root loader, has a parent, and class loaders are usually supposed to ask the parent to supply a class before trying to do it themselves. Thus, when a class loader tries to load java.lang.String, it should try to get it from the parent rather than load its own copy. Only if the parent cannot load the class should the loader try itself. The root-level class loader loads all classes necessary for things like the security manager to perform its job. The second class loader usually loads all Java classes that run inside the security manager, and the rest of the class loaders in the hierarchy are used for the code that isn’t part of the VM.
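
To make the delegation order concrete, here is a sketch of a parent-first loader. URLClassLoader already behaves this way by default; the override below only spells out the lookup order, and it assumes a non-null parent is passed in:

import java.net.URL;
import java.net.URLClassLoader;

public class ParentFirstLoader extends URLClassLoader {

    public ParentFirstLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected synchronized Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        Class<?> c = findLoadedClass(name);       // 1. already loaded by this instance?
        if (c == null) {
            try {
                c = getParent().loadClass(name);  // 2. ask the parent first
            } catch (ClassNotFoundException e) {
                c = findClass(name);              // 3. only then search our own jars
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}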


Classes are loaded by a class loader exactly once. In other words, classes are read from a jar file no more than once by a class loader instance. The problem comes when developers want the ability to reload a class, for example when an application is hot-deployed to a server. (I believe JUnit also likes to reload any user-created classes between tests.) In that case, the class loader instance is destroyed and a new one is created in its place. However, for this technique to work, the class loader must have the opposite behavior from the one described above: it must first attempt to load the class itself, and only ask the parent if it cannot do so.
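
A reloading, “child-first” loader essentially flips steps 2 and 3 of the sketch above. (This is only an illustration; a production version would still have to delegate java.* classes to the parent.)

import java.net.URL;
import java.net.URLClassLoader;

public class ChildFirstLoader extends URLClassLoader {

    public ChildFirstLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected synchronized Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        Class<?> c = findLoadedClass(name);
        if (c == null) {
            try {
                c = findClass(name);              // look in our own jars first
            } catch (ClassNotFoundException e) {
                c = super.loadClass(name, false); // then fall back to the parent
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}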


So far, so good. The problem comes when a class is loaded by two separate class loaders in a hierarchy. At this point version differences can cause all kinds of unpredictable behavior and often, the problem doesn’t even manifest itself near the actual offending code. Symptoms generally include things like null pointer exceptions in lines of code that make no sense. In many ways, it reminds me of incrementing a pointer too far in C++ except that it generally doesn’t core the VM.


In general, there are two ways that I’ve seen the problems manifest themselves:


  1. An application server’s class loader loads one version of a jar file (like log4j) and the developer deploys a different version with an application. If a method signature changes from one version to the next, then the developer can get errors about not having the correct number of parameters in the method call even though they’ve done nothing wrong. The VM may also generate errors about calling methods that don’t exist (even when they do) or worse, the method call works fine until the class called by the developer in turn calls another class or method that doesn’t exist in that version of the package.

  2. A developer is writing code and testing it periodically as he develops. At this point, it is almost certain that method signatures are changing and new methods are being added, renamed, or removed. When deploying this code to an application server, something goes wrong with the deployment and more than one version of the code lives on the server without the developer realizing it. For me this happened when I was using WebLogic 7 and switched between hot deployment (where I dropped a new version of the ear file into a directory) and deploying via the web interface. The class loaders were apparently not peers in the class loader tree and both versions of my application were “partially” deployed.


The worst part of this problem is that it is not obvious and not intuitive what’s going on. The problem may not manifest any symptoms right away and when symptoms appear, they almost never seem related to the class loader in any way. Generally, I’ve realized the mistake after spending three days debugging code that hasn’t changed and always used to work. Somewhere around the point when I start to question my sanity and my abilities as a developer, I realize that this is the dreaded “hot deployment” issue. The problem is easily fixed by reinstalling the server instance. (Sometimes undeploying and redeploying an application isn’t sufficient and it’s not obvious where the server keeps all of its cached files.)

Enter the ounce of prevention. Sealing a jar involves placing a line inside the manifest file that lists the jar (or some subset of it, like a particular package) as being sealed. (The <jar> ant task also has an option to seal the jar.) That line instructs the class loader hierarchy to only retrieve classes in that package from exactly one file. A jar sealing exception, java.lang.SecurityException: sealing violation, is thrown if the class loader attempts to get its classes from more than one jar. This prevents all of the headaches listed above and, generally, the class loader “does the right thing” in the edge cases. (For example, if one version of a jar is sealed but not another.)
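
For example, a MANIFEST.MF that seals a single package looks something like this (the package name here is just a placeholder):

Manifest-Version: 1.0

Name: com/example/util/
Sealed: true

Putting Sealed: true in the main section of the manifest instead seals the entire jar.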


The only place that I’ve run into problems with jar sealing is when developers want to build test classes in the same package and directory as the code itself. Usually, they use ant tasks to create a deliverable jar and a test jar that only contains the test classes. Considering the pain that can be caused when class loaders go wrong, I would recommend placing test code in its own package (Sub-packages can live in a separate jar with no problems.) or building a “test” jar that contains both the base code and the test classes.