Wednesday, December 30, 2009

Simplifications in Spring 3.0

Three short but useful posts about Spring 3.0 simplifications from the SpringSource team:

- MVC Simplifications in Spring 3.0

- Configuration Simplifications in Spring 3.0

- Task Scheduling Simplifications in Spring 3.0

Have a look if you want to know what's new in Spring 3.0.
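As a taste of the configuration post's topic: Spring 3.0 folds Java-based configuration (@Configuration and @Bean) into the core framework. A minimal, hypothetical sketch; the AccountService and AccountRepository classes are invented for illustration:

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Invented collaborators for the sketch.
class AccountRepository { }

class AccountService {
    private final AccountRepository repository;
    AccountService(AccountRepository repository) { this.repository = repository; }
}

// Plain Java replaces the XML bean definitions.
@Configuration
public class AppConfig {

    @Bean
    public AccountRepository accountRepository() {
        return new AccountRepository();
    }

    @Bean
    public AccountService accountService() {
        return new AccountService(accountRepository());
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(AppConfig.class);
        AccountService service = ctx.getBean(AccountService.class);
        System.out.println("Wired: " + service);
    }
}
```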

Friday, December 25, 2009

Introducing the Java EE 6 Platform: Part 1

Java Platform, Enterprise Edition (Java EE) is the industry-standard platform for building enterprise-class applications coded in the Java programming language. Based on the solid foundation of Java Platform, Standard Edition (Java SE), Java EE adds libraries and system services that support the scalability, accessibility, security, integrity, and other requirements of enterprise-class applications.

Since its initial release in 1999, Java EE has matured into a functionally rich, high performance platform. Recent releases of the platform have also stressed simplicity and ease of use. In fact, with the current release of the platform, Java EE 5, development of Java enterprise applications has never been easier or faster.



Progress continues. The next release of the platform, Java EE 6, adds significant new technologies, some of which have been inspired by the vibrant Java EE community. It also further simplifies the platform, extending the usability improvements made in previous Java EE releases.

This article highlights some of the significant enhancements in Java EE 6.

Contents

- Java EE 6 Goals
- Powerful New Technologies
- Enhanced Web Tier Capabilities
- EJB Technology, Even Easier to Use
- A More Complete Java Persistence API
- Further Ease of Development
- Profiles and Pruning
- Summary
- For More Information
- Comments

read article here http://java.sun.com/developer/technicalArticles/JavaEE/JavaEE6Overview.html

Saturday, November 7, 2009

Dependency Injection in Java EE 6 - Part 1

This series of articles introduces Contexts and Dependency Injection for Java EE (CDI), a key part of the soon to be finalized Java EE 6 platform. Standardized via JSR 299, CDI is the de-facto API for comprehensive next-generation type-safe dependency injection for Java EE. Led by Gavin King, JSR 299 aims to synthesize the best-of-breed dependency injection features from solutions like Seam, Guice and Spring while adding many useful innovations of its own.

In this first article of the series, we are going to take a high-level look at CDI, see how it fits with Java EE overall and discuss basic dependency management as well as scoping. In the course of this series, we will cover features like component naming, stereotypes, producers, disposers, decorators, interceptors, events, the CDI API for portable extensions and many more. We will also talk about how CDI aligns with Seam, Spring as well as Guice and augment the discussion with some implementation details using CanDI, Caucho's independent implementation of JSR 299 included in the Resin application server.
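For a flavor of what type-safe injection looks like, here is a minimal sketch. It assumes a JSR 299 implementation (such as CanDI or Weld) and an empty beans.xml marker file in the archive; the Greeting and Greeter names are invented for illustration:

```java
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

// Invented interface and beans; deployment also needs an (empty) beans.xml
// so the container scans the archive for CDI beans.
interface Greeting {
    String greet(String name);
}

@RequestScoped
class ConsoleGreeting implements Greeting {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// Dependencies are resolved by type, not by string lookups.
@Named("greeter")               // optional: exposes the bean to EL as #{greeter}
@RequestScoped
public class Greeter {
    @Inject
    private Greeting greeting;

    public String sayHello(String name) {
        return greeting.greet(name);
    }
}
```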



continue ... Dependency Injection in Java EE 6

Friday, October 23, 2009

An update on Java Persistence API 2.0

The Java™ Persistence API (JPA) was first introduced in Java Platform, Enterprise Edition (Java EE) 5 as part of the Enterprise JavaBeans™ (EJB) 3.0 family of specifications. Since that time, JPA 1.0 has proven to be a very popular persistence framework. Even though this first specification was very complete and functional, there is always room for improvement. The next iteration of the JPA specification (JPA 2.0) is currently under development via JSR 317.

The JPA expert group is working hard to finalize the JPA 2.0 specification. Public Final Draft #2 of the specification was recently made available, and the final JPA 2.0 specification is planned to be available by November 16, 2009.

A single Comment Line column cannot do justice to all the new features in the JPA 2.0 specification. Instead, this article offers brief introductions, followed by information to help you experience these features firsthand via Apache's OpenJPA project.
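One widely discussed JPA 2.0 addition is the type-safe Criteria API. A minimal sketch under assumed names (the Employee entity is invented; any JPA 2.0 provider, OpenJPA included, should run it):

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

// Invented entity for the sketch.
@Entity
class Employee {
    @Id
    Long id;
    double salary;
}

public class EmployeeQueries {

    // Builds the query with the compiler's help instead of a JPQL string.
    public List<Employee> findHighEarners(EntityManager em, double minSalary) {
        CriteriaBuilder cb = em.getCriteriaBuilder();         // new in JPA 2.0
        CriteriaQuery<Employee> query = cb.createQuery(Employee.class);
        Root<Employee> emp = query.from(Employee.class);
        query.select(emp).where(cb.gt(emp.<Double>get("salary"), minSalary));
        TypedQuery<Employee> typed = em.createQuery(query);   // TypedQuery is also new in 2.0
        return typed.getResultList();
    }
}
```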

read more : An update on Java Persistence API 2.0

Saturday, May 30, 2009

How TDD and Pairing Increase Production

"Test-driven Development" and "Pair Programming" are two of the most widely known of agile practices, yet are still largely not being practiced by many agile teams. Often, people will cite being "too busy" to adopt such practices as TDD and pairing; in essence, implying that striving for high code quality will reduce productivity. Mike Hill explains how this logic is seriously flawed.

Mike tells us, essentially, that one must "go better" in order to "go faster".

Monday, May 25, 2009

Successfully Adopting Pair Programming

In 3.5 years as a consultant I spent more time talking (with clients) about pair programming than any other topic. In general, client developers had never properly paired and had no desire to do so. To make matters worse, the business predominantly thought two developers on one machine was a waste.

Despite the prejudices, usually by the time we left a client the business and developers had become pro-pairing.

Successfully adopting pair programming can be hard, but it's also entirely possible if you leverage the lessons I've learned.

This article assumes you have done some pairing and are looking to help your organization adopt pairing where it makes sense. The advice can be helpful for people in various roles; however, it is written mostly for developers or team leads looking to introduce pair programming to their teams.

This article makes no attempt to address whether you should or should not be pairing. There are benefits and drawbacks to pair programming (like most things), and I think there is already decent information available covering that topic. Discussing the pros and cons of pairing would take away from the focus of this article: If you already believe in pair programming, how can you successfully introduce it to your team?




read article at : http://www.infoq.com/articles/adopting-pair-programming

Tuesday, May 12, 2009

Remote Lazy Loading in Hibernate

Lazy loading in Hibernate means fetching and loading data from persistent storage, such as a database, only when it is needed. Lazy loading improves the performance of data fetching and significantly reduces the memory footprint.

When Hibernate initializes a data object, it actually creates a reference to the data rather than loading the data itself. Hibernate then intercepts method calls to this reference and loads the actual data. In order to intercept those calls and load the data, Hibernate requires the data object to be associated with a Hibernate Session.

Problems can arise when these "lazy loaded" data objects (containing only the reference) are transferred to other application layers, especially to a remote client. The objects get serialized and de-serialized on their way to the remote client, thereby becoming detached from the Hibernate Session. Accessing such a detached reference always leads to an exception.
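A short sketch of that failure mode, with invented Order and OrderLine classes that are assumed to be mapped entities with a lazy one-to-many collection:

```java
import java.util.ArrayList;
import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

// Invented entities; assume they are mapped with a lazy collection of lines.
class OrderLine { }

class Order {
    private List<OrderLine> lines = new ArrayList<OrderLine>();
    public List<OrderLine> getLines() { return lines; }
}

public class DetachedAccessExample {

    public void demonstrate(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        Order order = (Order) session.get(Order.class, Long.valueOf(1));
        session.close();                      // order is now detached

        // With a lazy mapping, Hibernate hands back an uninitialized collection
        // wrapper; touching it here, without a Session, throws
        // org.hibernate.LazyInitializationException, which is exactly the
        // situation a remote client ends up in after serialization.
        order.getLines().size();
    }
}
```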

What if these lazy-loaded objects could maintain their references even at the remote client layer (where there is no Hibernate Session) and still be able to lazy-load data? This is quite possible, and this concept of lazy loading data even from a remote client is called remote lazy loading.

In this article we'll discuss the solution by extending Hibernate's lazy loading framework. We'll use the 3.2.x version of the Hibernate library.

continue here: http://www.theserverside.com/tt/articles/article.tss?l=RemoteLazyLoadinginHibernate


Prime guidelines for Test Driving

  • Do. Not. Skip. Refactoring.
  • Get to green fast.
  • Slow down after a mistake.

Let’s go through these guidelines one by one, examining what makes them so
important that they made it to our short list.

Do. Not. Skip. Refactoring.

If you haven’t yet considered tattooing the word refactor on the back of both your hands, now would be a good time to do that. And I’m only half joking. The single biggest problem I’ve witnessed after watching dozens of teams take their first steps in test-driven development is insufficient refactoring.

Not refactoring mercilessly and leaving duplication in the code base is about
the closest thing to attaching a time bomb to your chair. Unfortunately, we are
good at remembering the “test” and “code” steps of the TDD cycle and extremely proficient at neglecting a code smell that screams for the missing step.

Thus, I urge you to pay attention to not skipping refactoring. If you have someone to pair with, do so. Crosscheck with your programming buddy to spot any duplication you may have missed. Bring Fowler's Refactoring book with you to the toilet. Learn to use your IDE's automated refactorings. It's good for you, the doctor said so!

I apologize if I'm being too patronizing, but following the TDD cycle all the way is important. Now that we've got that in check, there are two more guidelines for us to go through. The first of them relates to the code step of the TDD cycle: get to green fast. Let's talk about that.

Get to green fast

As we test-drive, we’re basically aiming for the simplest design we can think of for the problem at hand. We don’t, however, go for the simplest design right off the bat in the code step. Instead, we should strive to get back to green fast. The code step is where we get to that green bar with as few edits as possible. The refactoring step is where we perfect our design.
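(A hedged JUnit sketch of the idea; the PriceCalculator example is invented, not taken from the book.)

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscount() {
        assertEquals(90.0, new PriceCalculator().discountedPrice(100.0), 0.001);
    }
}

// The code step: the cheapest edit that turns the bar green. The hard-coded
// value deliberately duplicates the test's expectation; generalizing it into
// the real formula is what the refactoring step is for.
class PriceCalculator {
    double discountedPrice(double price) {
        return 90.0;
    }
}
```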
You might want to read the previous paragraph out loud. Don't worry about others looking at you like you're a freak. You're just pointing out facts.

Speaking of facts, it's more than likely that you will make one or two mistakes in your career even after adopting TDD. Our third guideline tells us to slow down once the proverbial smelly substance hits the fan.

Slow down after a mistake

It is common for developers practicing TDD to start taking slightly bigger and bigger steps as time goes by. At some point, however, we’ll take too big a bite off our test list and end up reverting our changes. At these points, we should realize that the steps we’re taking are too big compared to our ability to understand the needed changes to our implementation. We need to realize that we must tighten our play. Small steps. Merciless refactoring. It’s that simple. Walking to the water cooler might not be a bad idea either.

These guidelines are certainly not a complete reference to successful test-driving. Practices and guidelines don't create success. People do. Having said that, I hope they will help you find your way to working more productively and to avoiding some of the pitfalls I've seen many people stumble into as beginning TDD'ers.

read more : http://www.manning.com/koskela/

Wednesday, May 6, 2009

Lean service architectures with Java EE 6

The complexity and bloat often associated with Java EE are largely due to the inherent complexity of distributed computing; otherwise, the platform is surprisingly simple. As I discussed in my last article for JavaWorld, Enterprise JavaBeans (EJB) 3.1 actually consists of annotated classes and interfaces that are even leaner than classic POJOs; it would be hard to find anything more to simplify. Nonetheless, (mis)use of Java EE can lead to bloated and overstated architectures. In this article, I discuss the essential ingredients of a lean service-oriented architecture (SOA), then explain how to implement one in Java EE without compromising maintainability. I'll start by describing aspects of SOA implementation that lend themselves to procedural programming, then discuss domain-driven (aka object-oriented) design.
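To illustrate how lean such a component can be, here is a hedged sketch of an EJB 3.1 session bean using the no-interface view; the BookingService and Booking names are invented:

```java
import javax.ejb.Stateless;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

// Invented entity for the sketch.
@Entity
class Booking {
    @Id
    Long id;
}

// A complete EJB 3.1 session bean: no business interface, no ejb-jar.xml.
// Transactions, pooling, and thread safety come from the container.
@Stateless
public class BookingService {

    @PersistenceContext
    private EntityManager em;

    public void book(Booking booking) {
        em.persist(booking);
    }
}
```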

read more : Lean service architectures with Java EE 6


Saturday, May 2, 2009

Test Driven Development - legacy code

Legacy code is traditionally considered to mean code written by someone else somewhere at some point in time. Old code, that is.

Based on this thinking, Michael Feathers, in the preface of his book Working Effectively with Legacy Code, coined a new definition for legacy code: “code without tests.”

If we define legacy code as "code without tests," then writing code without writing tests first would basically be like writing instant legacy code.

How do we test-drive on top of a legacy code base?

Michael Feathers in his book describes a process for working with legacy code.

1 - Identify change point
2 - Identify inflection point
3 - Cover the inflection point
  • 3.a - Break external dependencies
  • 3.b - Break internal dependencies
  • 3.c - Write tests
4 - Make changes
5 - Refactor covered code

Lasse Koskela, in his book "Test Driven," splits these steps into three phases.

1 - Analyzing the change
2 - Preparing for the change
3 - Test-driving the change




Analyzing the change

When we start analyzing the change we want to make, we first identify the change points. Change points are the places in the code base where we need to edit code in order to implement the desired change. This is fundamentally no different from the analysis we carry out with any kind of code base, legacy or not. The main difference is that a legacy code base is generally more difficult to learn and understand than one with thorough unit tests documenting the intent and purpose of individual classes and methods.

When we know where the change should take place, we identify the inflection
point. The inflection point (or test point) is the point “downstream” in our code
where we can detect any relevant change in the system’s behavior after touching the code in the change points. Typical inflection points are close-proximity seams such as method invocations after or around the change points.

Sometimes, however, it might make sense to find the inflection point farther away from the change point. Examples of such distant inflection points might be network connections to external systems and log output produced by the code in and around the change point. In some cases, it might even be sufficient to treat the system's persistent data source as the inflection point.

The trade-off to make is basically between the ease of writing tests at the chosen
inflection point and the certainty provided by those tests.

Close-proximity inflection points tend to provide a more localized checkpoint
without too much noise around the signal. Distant inflection points, on the other hand, are more likely to catch side effects our analysis hadn’t found—but in exchange for potentially more effort in writing the tests because we often don’t have access to the kind of detailed information we usually have when testing close to the change point.

After having analyzed the code base looking for the change and inflection points, we know what we need to change and where we can test for the expected behavior. In the next phase, we prepare to make the change in a safe, test-driven manner.



Preparing for the change

Once we’ve spotted the change and inflection points, we proceed to cover the
inflection point with tests that nail down the current behavior before we make our change. This might involve breaking dependencies with careful edits that expose the dependency through a seam we can manipulate in our tests.
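One common dependency-breaking edit is to route the external call through a method a test can override, which creates a seam without changing behavior. A rough sketch with invented names (InvoiceSender, MailServer):

```java
// Hypothetical external dependency we cannot call from a unit test.
class MailServer {
    void send(String message) {
        // talks to a real SMTP server in production
    }
}

// Legacy class after a minimal dependency-breaking edit: the external call
// now goes through the protected deliver() method, which is our seam.
class InvoiceSender {
    public void send(String invoiceId) {
        deliver("Invoice " + invoiceId);
    }

    protected void deliver(String message) {
        new MailServer().send(message);
    }
}

// Test double for the tests that follow: subclass and override the seam so
// we can observe what would have been sent.
class RecordingInvoiceSender extends InvoiceSender {
    String lastMessage;

    @Override
    protected void deliver(String message) {
        lastMessage = message;
    }
}
```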

The tests we write to cover the inflection point are typically what we call characterization tests, meaning that they nail down the current functionality as is, without worrying about whether that behavior is correct. Characterization tests are often also learning tests in the sense that we use them to verify assumptions we’ve made while identifying the change points.
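A characterization test can simply assert whatever the code does today, however questionable the value looks. A small sketch, with a made-up PricingEngine standing in for a legacy class:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Stand-in for an untested legacy class; imagine this logic buried in a much
// larger method we do not dare to touch yet.
class PricingEngine {
    double priceWithTax(double netPrice) {
        return netPrice * 1.035;
    }
}

public class PricingEngineCharacterizationTest {

    @Test
    public void characterizesCurrentTaxCalculation() {
        // The expected value was captured by running the code once and pasting
        // the result in; the test pins down current behavior, right or wrong.
        assertEquals(10.35, new PricingEngine().priceWithTax(10.0), 0.001);
    }
}
```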

With sufficient characterization tests in place, we’re ready to move on to the third phase of our test-driven legacy development process—making the change.


Test-driving the change

After we’ve written tests around the change and inflection points to the degree
that we’re comfortable with our test coverage, we make the change by adding a
test for the desired new functionality. As we proceed with implementing the
change, our characterization tests tell us if we broke anything while editing the
legacy code, and the newly added tests tell us when we have managed to implement the desired change correctly. Finally, once we’ve successfully made the change and all tests are passing, we refactor the code as usual, enjoying the cover of our automated tests.

That’s all there is to working with legacy code in general. The main differences
between the regular test-driven development cycle and the process described are that we need to write tests for the existing behavior before adding a test for the new behavior and that we often need to make small dependency-breaking edits without our safety net in order to be able to start writing those tests. It just
requires a bit more care and thought.

reference : http://www.manning.com/koskela/

Good News for Java Developers in a Tight Economy

Budget dollars may be tight for most companies, but that doesn’t mean enterprise IT departments can do without the technology skills, talent and certifications they need to better navigate a down economy.

Research released this week by Foote Partners shows that some skills continue to pay well, despite the recession. The research firm's data showed that pay for 60 skills and certifications declined in the first quarter, yet another 46 skills and certifications experienced increases in pay during the same period.

read more : http://www.networkworld.com/news/2009/042309-it-skills-pay-hikes-downturn.html

Thursday, April 30, 2009

Build and deploy OSGi as Spring bundles using Felix (Part 2)

Build and package Java classes as OSGi bundles using the Spring DM framework in a Felix container. This article, Part 2 of this series, shows you how to create bundles using the Spring framework and then deploy them in a Felix runtime environment. You will see how the core OSGi framework dependency is removed through a simple Spring-based configuration.

Learn more:
http://www.ibm.com/developerworks/opensource/library/ws-osgi-spring2/index.html?ca=dgr-jw22&S_TACT=105AGX59&S_CMP=grsitejw22

Build and deploy OSGi bundles using Apache Felix (Part 1)

In this article, Part 1 of a series, you develop an order application with client-side and server-side components. Then you package these components as OSGi bundles. The client invokes the service component to process the order. The service component has a method that processes the order and prints the order ID. After reading this article, you can apply the concepts and features of Apache Felix to build and package Java component classes as OSGi bundles.
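As a rough sketch of the plain-OSGi starting point Part 1 works from (the kind of framework-dependent code that Part 2 removes via Spring DM), here is a hypothetical activator registering an order service; the class names are invented:

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Service contract and implementation packaged inside the server-side bundle.
interface OrderService {
    String processOrder(String item);
}

class OrderServiceImpl implements OrderService {
    public String processOrder(String item) {
        // The article's service prints a generated order ID; this is a stand-in.
        return "ORDER-" + System.currentTimeMillis() + " for " + item;
    }
}

// The bundle activator couples the code to the OSGi framework API directly.
public class OrderServiceActivator implements BundleActivator {

    private ServiceRegistration registration;

    public void start(BundleContext context) {
        // Register the service so client bundles can look it up and invoke it.
        registration = context.registerService(
                OrderService.class.getName(), new OrderServiceImpl(), null);
    }

    public void stop(BundleContext context) {
        registration.unregister();
    }
}
```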

read more : OSGi and Spring, Part 1: Build and deploy OSGi bundles using Apache Felix

Transaction strategies: Models and strategies overview

It's a common mistake to confuse transaction models with transaction strategies. This second article in the Transaction strategies series outlines the three transaction models supported by the Java platform and introduces four primary transaction strategies that use those models. Using examples from the Spring Framework and the Enterprise JavaBeans (EJB) 3.0 specification, Mark Richards explains how the transaction models work and how they can form the basis for developing transaction strategies ranging from basic transaction processing to high-speed transaction-processing systems.

Learn more:
http://www.ibm.com/developerworks/java/library/j-ts2.html?ca=dgr-jw22SprinEJBtrans&S_TACT=105AGX59&S_CMP=grsitejw22

Java Transaction

Transaction processing should achieve a high degree of data integrity and consistency. This article, the first in a series on developing an effective transaction strategy for the Java platform, introduces common transaction pitfalls that can prevent you from reaching this goal. Using code examples from the Spring Framework and the Enterprise JavaBeans (EJB) 3.0 specification, series author Mark Richards explains these all-too-common mistakes.
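For orientation, a hedged sketch of Spring-style declarative transaction demarcation, the kind of code such a series builds on; the TradeService and TradeRepository names are invented:

```java
import org.springframework.transaction.annotation.Transactional;

// Invented domain types for the sketch.
class Trade { }

interface TradeRepository {
    void save(Trade trade);
}

public class TradeService {

    private final TradeRepository repository;

    public TradeService(TradeRepository repository) {
        this.repository = repository;
    }

    // A transaction is started and completed around this method. Note that by
    // default Spring rolls back on unchecked exceptions only; checked
    // exceptions commit unless rollback rules are configured explicitly.
    @Transactional
    public void placeTrade(Trade trade) {
        repository.save(trade);
    }
}
```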

Learn more:
http://www.ibm.com/developerworks/java/library/j-ts1.html?ca=dgr-jw22SafeJavaTrans&S_TACT=105AGX59&S_CMP=grsitejw22