Saturday, May 30, 2009

How TDD and Pairing Increase Production

"Test-driven Development" and "Pair Programming" are two of the most widely known of agile practices, yet are still largely not being practiced by many agile teams. Often, people will cite being "too busy" to adopt such practices as TDD and pairing; in essence, implying that striving for high code quality will reduce productivity. Mike Hill explains how this logic is seriously flawed.

Mike tells us, essentially, that you must "go better" if you want to "go faster".

Monday, May 25, 2009

Successfully Adopting Pair Programming

In 3.5 years as a consultant, I spent more time talking with clients about pair programming than about any other topic. In general, client developers had never properly paired and had no desire to do so. To make matters worse, the business predominantly thought two developers on one machine was a waste.

Despite these prejudices, by the time we left a client, the business and the developers had usually become pro-pairing.

Successfully adopting pair programming can be hard, but it's also entirely possible if you leverage the lessons I've learned.

This article assumes you have done some pairing and are looking to help your organization adopt pairing where it makes sense. The advice can be helpful for people in various roles; however, it is written mostly for developers or team leads looking to introduce pair programming to their teams.

This article makes no attempt to address whether you should or should not be pairing. There are benefits and drawbacks to pair programming (like most things), and I think there is already decent information available covering that topic. Discussing the pros and cons of pairing would take away from the focus of this article: If you already believe in pair programming, how can you successfully introduce it to your team?




read article at : http://www.infoq.com/articles/adopting-pair-programming

Tuesday, May 12, 2009

Remote Lazy Loading in Hibernate

Lazy loading in Hibernate means fetching and loading data from persistent storage, such as a database, only when it is needed. Lazy loading improves data-fetching performance and significantly reduces the memory footprint.

When Hibernate initializes a data object, it actually creates a reference (a proxy) to the data rather than loading the data itself. Hibernate then intercepts method calls on this reference and loads the actual data. In order to intercept those calls and load the data, Hibernate requires the data object to be associated with a Hibernate Session.

Problems can arise when these "lazy loaded" data objects (containing the reference) are transferred to other application layers, especially to a remote client. These objects are serialized and de-serialized on their way to the remote client, thereby becoming detached from the Hibernate Session. Accessing such a detached reference always leads to an exception (typically a LazyInitializationException).

What if these lazy loaded objects could maintain their references even at the remote client layer (where there is no Hibernate Session) and still be able to lazy load data? This is quite possible, and this concept of lazy loading data even from a remote client is called remote lazy loading.

In this article we'll discuss a solution that extends Hibernate's lazy loading framework. We'll use the 3.2.x version of the Hibernate library.
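Before reading the full solution, it may help to see the failure mode in miniature. The sketch below is not Hibernate code; it is a self-contained model in which a stand-in "session" supplies data on first access, and detaching (as serialization to a remote client effectively does) makes that access fail, much like Hibernate's LazyInitializationException:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Simplified model of lazy loading: the "session" supplies data on first
// access; once detached (session set to null), access fails, analogous to
// Hibernate's LazyInitializationException on a detached proxy.
class LazyList<T> {
    private Supplier<List<T>> session; // stands in for the Hibernate Session
    private List<T> loaded;

    LazyList(Supplier<List<T>> session) { this.session = session; }

    void detach() { this.session = null; } // what serialization effectively does

    List<T> get() {
        if (loaded == null) {
            if (session == null)
                throw new IllegalStateException("could not initialize proxy - no Session");
            loaded = session.get(); // lazy load on first access
        }
        return loaded;
    }
}

public class LazyDemo {
    public static void main(String[] args) {
        LazyList<String> orders = new LazyList<>(() -> new ArrayList<>(List.of("order-1")));
        orders.detach(); // simulate shipping the object to a remote client
        try {
            orders.get();
        } catch (IllegalStateException e) {
            System.out.println("Detached access failed: " + e.getMessage());
        }
    }
}
```

Remote lazy loading, as the article develops it, amounts to replacing that null "session" with a callback that can reach the server and fetch the data on demand.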

continue here : http://www.theserverside.com/tt/articles/article.tss?l=RemoteLazyLoadinginHibernate


Prime guidelines for Test Driving

  • Do. Not. Skip. Refactoring.
  • Get to green fast.
  • Slow down after a mistake.

Let’s go through these guidelines one by one, examining what makes them so
important that they made it to our short list.

Do. Not. Skip. Refactoring.

If you haven’t yet considered tattooing the word refactor on the back of both your hands, now would be a good time to do that. And I’m only half joking. The single biggest problem I’ve witnessed after watching dozens of teams take their first steps in test-driven development is insufficient refactoring.

Not refactoring mercilessly and leaving duplication in the code base is about
the closest thing to attaching a time bomb to your chair. Unfortunately, we are
good at remembering the “test” and “code” steps of the TDD cycle and extremely proficient at neglecting a code smell that screams for the missing step.

Thus, I urge you to pay attention to not skipping refactoring. If you have someone to pair with, do so. Cross-check with your programming buddy to spot any duplication you may have missed. Bring Fowler's Refactoring book with you to the toilet. Learn to use your IDE's automated refactorings. It's good for you; the doctor said so!

I apologize if I'm being too patronizing, but following the TDD cycle all the way is important. Now that we've got that in check, there are two more guidelines for us to go through. The first of them relates to the code step of the TDD cycle: get to green fast. Let's talk about that.

Get to green fast

As we test-drive, we're basically aiming for the simplest design we can think of for the problem at hand. We don't, however, go for the simplest design right off the bat in the code step. Instead, we should strive to get back to green fast. The code step is where we get to that green bar with as few edits as possible. The refactoring step is where we perfect our design.

You might want to read the previous paragraph out loud. Don't worry about others looking at you like you're a freak. You're just pointing out facts.

Speaking of facts, it's more than likely that you will make one or two mistakes in your career even after adopting TDD. Our third guideline tells us to slow down once the proverbial smelly substance hits the fan.
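As a hypothetical illustration of that rhythm (the class, method, and formatting rules are invented for this sketch, and plain assertions stand in for a test framework), consider test-driving a price formatter:

```java
// Hypothetical example of the TDD rhythm, with plain assertions standing in
// for a test framework. Red: write the failing test. Green: the quickest
// thing that passes. Refactor: perfect the design while staying green.
public class PriceFormatterTdd {

    // After getting to green fast and then refactoring, the code looks like:
    static String format(int cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }
    // An earlier, perfectly legitimate "green" version might simply have been:
    //   static String format(int cents) { return "$1.50"; }
    // It passed the single existing test; adding a second test forced the
    // general implementation to emerge.

    public static void main(String[] args) {
        assert format(150).equals("$1.50");
        assert format(5).equals("$0.05");
        System.out.println("all tests green");
    }
}
```

The point is the order of work: the hard-coded return is a fine first "green" with minimal edits; the general solution emerges afterwards, under the cover of passing tests.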

Slow down after a mistake

It is common for developers practicing TDD to start taking slightly bigger and bigger steps as time goes by. At some point, however, we’ll take too big a bite off our test list and end up reverting our changes. At these points, we should realize that the steps we’re taking are too big compared to our ability to understand the needed changes to our implementation. We need to realize that we must tighten our play. Small steps. Merciless refactoring. It’s that simple. Walking to the water cooler might not be a bad idea either.

These guidelines are certainly not a complete reference to successful test-driving. Practices and guidelines don't create success. People do. Having said that, I hope they will help you find your way to working more productively and to avoiding some of the pitfalls I've seen many people stumble into as beginning TDD'ers.

read more : http://www.manning.com/koskela/

Wednesday, May 6, 2009

Lean service architectures with Java EE 6

The complexity and bloat often associated with Java EE are largely due to the inherent complexity of distributed computing; otherwise, the platform is surprisingly simple. As I discussed in my last article for JavaWorld, Enterprise JavaBeans (EJB) 3.1 actually consists of annotated classes and interfaces that are even leaner than classic POJOs; it would be hard to find anything more to simplify. Nonetheless, (mis)use of Java EE can lead to bloated and overstated architectures. In this article, I discuss the essential ingredients of a lean service-oriented architecture (SOA), then explain how to implement one in Java EE without compromising maintainability. I'll start by describing aspects of SOA implementation that lend themselves to procedural programming, then discuss domain-driven (aka object-oriented) design.
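As a rough sketch of the "annotated classes that are even leaner than classic POJOs" point (the service and its method are invented, and a local stand-in annotation replaces javax.ejb.Stateless so the snippet compiles without a container on the classpath):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class LeanServiceSketch {
    // Stand-in for javax.ejb.Stateless, declared locally so this sketch is
    // self-contained; in a real Java EE application the container's
    // annotation is used instead.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Stateless {}

    // An EJB 3.1-style session bean: no required interface, no deployment
    // descriptor -- just an annotated class. In a container, transactions,
    // pooling, and thread safety come for free.
    @Stateless
    static class OrderService {
        public double totalWithTax(double net, double rate) {
            return net * (1 + rate);
        }
    }

    public static void main(String[] args) {
        // Outside a container it is still usable as a plain object,
        // which keeps unit testing trivial.
        OrderService service = new OrderService();
        System.out.println(service.totalWithTax(100.0, 0.2));
    }
}
```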

read more : Lean service architectures with Java EE 6


Saturday, May 2, 2009

Test Driven Development - legacy code

Legacy code is traditionally considered to mean code written by someone else somewhere at some point in time. Old code, that is.

Based on this thinking, Michael Feathers, in the preface of his book Working Effectively with Legacy Code, coined a new definition for legacy code: “code without tests.”

If we define legacy code as "code without tests," then writing code without writing tests first would essentially be writing instant legacy code.

How do we test-drive on top of a legacy code base?

Michael Feathers in his book describes a process for working with legacy code.

1 - Identify change point
2 - Identify inflection point
3 - Cover the inflection point
  • 3.a - Break external dependencies
  • 3.b - Break internal dependencies
  • 3.c - Write tests
4 - Make changes
5 - Refactor covered code

Lasse Koskela, in his book "Test Driven," splits these steps into three phases.

1 - Analyzing the change
2 - Preparing for the change
3 - Test-driving the change




Analyzing the change

When we start analyzing the change we want to make, we first identify the change points. Change points are the places in the code base where we need to edit code in order to implement the desired change. This is fundamentally no different from the analysis we carry out with any kind of code base, legacy or not. The main difference is that a legacy code base is generally more difficult to learn and understand than one with thorough unit tests documenting the intent and purpose of individual classes and methods.

When we know where the change should take place, we identify the inflection
point. The inflection point (or test point) is the point “downstream” in our code
where we can detect any relevant change in the system’s behavior after touching the code in the change points. Typical inflection points are close-proximity seams such as method invocations after or around the change points.

Sometimes, however, it might make sense to find the inflection point farther away from the change point. Examples of such distant inflection points might be network connections to external systems and log output produced by the code in and around the change point. In some cases, it might even be sufficient to treat the system's persistent data source as the inflection point.

The trade-off to make is basically between the ease of writing tests at the chosen
inflection point and the certainty provided by those tests.

Close-proximity inflection points tend to provide a more localized checkpoint
without too much noise around the signal. Distant inflection points, on the other hand, are more likely to catch side effects our analysis hadn’t found—but in exchange for potentially more effort in writing the tests because we often don’t have access to the kind of detailed information we usually have when testing close to the change point.

After having analyzed the code base looking for the change and inflection points, we know what we need to change and where we can test for the expected behavior. In the next phase, we prepare to make the change in a safe, test-driven manner.



Preparing for the change

Once we’ve spotted the change and inflection points, we proceed to cover the
inflection point with tests that nail down the current behavior before we make our change. This might involve breaking dependencies with careful edits that expose the dependency through a seam we can manipulate in our tests.

The tests we write to cover the inflection point are typically what we call characterization tests, meaning that they nail down the current functionality as is, without worrying about whether that behavior is correct. Characterization tests are often also learning tests in the sense that we use them to verify assumptions we’ve made while identifying the change points.
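A hypothetical sketch of a characterization test (the "legacy" class is invented, and plain assertions stand in for a test framework); note that the assertions record what the code does today, not what a specification says it should do:

```java
// Hypothetical characterization test: LegacyPricer is an invented stand-in
// for existing untested code. The assertions pin down current behavior
// as-is -- including the truncating integer arithmetic -- without judging
// whether that behavior is correct.
public class CharacterizationTests {

    // The "legacy" code under test, reproduced here so the sketch is
    // self-contained.
    static class LegacyPricer {
        int discounted(int cents, int percent) {
            return cents - cents * percent / 100; // integer division, as found
        }
    }

    public static void main(String[] args) {
        LegacyPricer pricer = new LegacyPricer();
        // Expectations learned by running the code, not taken from a spec:
        assert pricer.discounted(1000, 10) == 900;
        assert pricer.discounted(99, 10) == 90; // truncation pinned down as-is
        System.out.println("current behavior characterized");
    }
}
```

These same tests double as learning tests: writing them forces us to verify the assumptions we made while identifying the change points.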

With sufficient characterization tests in place, we’re ready to move on to the third phase of our test-driven legacy development process—making the change.


Test-driving the change

After we’ve written tests around the change and inflection points to the degree
that we’re comfortable with our test coverage, we make the change by adding a
test for the desired new functionality. As we proceed with implementing the
change, our characterization tests tell us if we broke anything while editing the
legacy code, and the newly added tests tell us when we have managed to implement the desired change correctly. Finally, once we’ve successfully made the change and all tests are passing, we refactor the code as usual, enjoying the cover of our automated tests.
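A minimal, self-contained sketch of that flow (names invented, plain assertions instead of a test framework): one assertion characterizes the existing behavior, the other is the new test that drives the change:

```java
// Invented legacy code extended test-first: the original greet() behavior
// is held in place by a characterization test, while a new test drives the
// added language parameter.
public class TestDrivenChange {
    static String greet(String name) {               // existing behavior
        return "Hello, " + name;
    }
    static String greet(String name, String lang) {  // the driven change
        return ("fi".equals(lang) ? "Hei, " : "Hello, ") + name;
    }

    public static void main(String[] args) {
        assert greet("Ada").equals("Hello, Ada");     // characterization: unchanged
        assert greet("Ada", "fi").equals("Hei, Ada"); // new test: drives the change
        System.out.println("change made under cover of tests");
    }
}
```

If the characterization assertion ever fails while we edit, we broke existing behavior; when the new assertion passes, the change is done and we can refactor as usual.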

That’s all there is to working with legacy code in general. The main differences
between the regular test-driven development cycle and the process described are that we need to write tests for the existing behavior before adding a test for the new behavior and that we often need to make small dependency-breaking edits without our safety net in order to be able to start writing those tests. It just
requires a bit more care and thought.

reference : http://www.manning.com/koskela/

Good News for Java Developers in a Tight Economy

Budget dollars may be tight for most companies, but that doesn’t mean enterprise IT departments can do without the technology skills, talent and certifications they need to better navigate a down economy.

Research released this week by Foote Partners shows that some skills continue to pay well, despite the recession. The research firm's data showed that pay for 60 skills and certifications declined in the first quarter, yet another 46 skills and certifications experienced increases in pay during the same period.

read more : http://www.networkworld.com/news/2009/042309-it-skills-pay-hikes-downturn.html