Tuesday, September 23, 2014

Rasterizing scalable images during build


In most web projects your content includes not only code, HTML or CSS but also images. Those images are typically delivered to the client as non-scalable graphics (i.e. jpg, png, gif). But usually those images are created from a scalable source image that allows you to "design" the actual image and produce versions of various sizes from it. The process of transforming a scalable graphic into a non-scalable one is called rasterization, and fortunately it can be automated when you run your build with Maven.

The Batik Maven plugin allows you to convert your svg (Scalable Vector Graphics) images to non-scalable images; the supported output types are png, jpeg, tiff and pdf. The plugin can be added to your build lifecycle. The plugin documentation suggests that adding the following snippet to your pom.xml is sufficient:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>batik-maven-plugin</artifactId>
  <version>1.0-beta-1</version>
  <executions>
    <execution>
      <goals>
        <goal>rasterize</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Unfortunately that's not sufficient, as not all dependencies seem to be satisfied. The following snippet worked for me (see also https://jira.codehaus.org/browse/MOJO-1670):

<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>batik-maven-plugin</artifactId>
      <version>1.0-beta-1</version>
      <executions>
        <execution>
          <goals>
            <goal>rasterize</goal>
          </goals>
        </execution>
      </executions>
      <dependencies>
        <dependency>
          <groupId>batik</groupId>
          <artifactId>batik-rasterizer</artifactId>
          <version>1.6</version>
        </dependency>
        <dependency>
          <groupId>org.axsl.org.w3c.dom.svg</groupId>
          <artifactId>svg-dom-java</artifactId>
          <version>1.1</version>
        </dependency>
        <dependency>
          <groupId>org.w3c.css</groupId>
          <artifactId>sac</artifactId>
          <version>1.3</version>
        </dependency>
      </dependencies>
    </plugin>
  </plugins>
</build>

With that snippet in place I could put my svg files into /src/main/svg, and during the build they are rasterized to png files in /target/generated-resources/images (locations and output type are the default settings and can be configured). It's now quite easy to "compile" your images during the build, and you never have to paint them pixel by pixel again.
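If you need different locations or another output format, the rasterize goal can be configured in a configuration block. The following sketch is an assumption based on the plugin's goal documentation — verify the parameter names against the plugin version you use:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>batik-maven-plugin</artifactId>
  <version>1.0-beta-1</version>
  <configuration>
    <!-- assumed parameter names; check the plugin's rasterize goal docs -->
    <srcDir>${basedir}/src/main/svg</srcDir>
    <destDir>${project.build.directory}/generated-resources/images</destDir>
  </configuration>
</plugin>
```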

If you're looking for a free and powerful SVG "editor", try Inkscape.


Tuesday, August 26, 2014

User Acceptance Testing with Selenium and Cucumber

In the last implementation project I participated in, we applied the Behavior-Driven Development (BDD) approach, where user stories are defined in Given-When-Then style. In this article I want to describe how I combined Cucumber with Selenium in order to automate our user acceptance tests.

For running automated BDD tests there are some free frameworks available (a brief comparison of BDD frameworks). We decided to go with Cucumber because it served all our requirements, has good documentation with good examples, and is pretty easy to get up and running.

Cucumber JUnit Test

The following listing shows a simple example that is practically the archetype for all our Cucumber tests.

@RunWith(Cucumber.class)
@CucumberOptions(
  features = { "classpath:features/example/" },
  glue = { "my.project.uat." },
  tags = { "@Example" })
public class ExampleCucumberTest {
  //empty
}
The annotations of the example in detail are:
  • @RunWith declares the TestRunner for this test, which is the Cucumber class. The test won't run without it.
  • @CucumberOptions defines various options for this test. The options are optional but quite helpful in controlling the behavior of the test:
    • features: declares a path where the BDD feature files (text files) are found. The example points to a location in the classpath. All feature files (.feature extension) below that location are considered. Multiple locations can be defined as an array.
    • glue: defines the packages where Steps and Hooks are located. Steps and Hooks contain the actual code for the tests. Multiple packages can be defined as an array.
    • tags: defines which stories should be executed. If you omit this option, all stories are executed; otherwise only those that have one of the given tags set will be run.
Of course there are more options available (see Cucumber apidoc)  but these are the options I use most. As soon as you have your first story written, you can start right away with this simple test.

BDD Feature 

The following example story is taken from the Cucumber documentation:

@Example
Feature: Search courses
  Courses should be searchable by topic
  Search results should provide the course code

Scenario: Search by topic
    Given there are 240 courses which do not have the topic "biology"
    And there are 2 courses, A001 and B205, that each have "biology" as one of the topics
    When I search for "biology"
    Then I should see the following courses:
      | Course code |
      | A001        |
      | B205        |

With that feature in the right location you can run the above test with JUnit. Of course it will not be successful yet. Actually, with the default settings it will ignore all the steps unless you use @CucumberOptions(strict = true), which is recommended when you run the test as part of a quality gate.
The Cucumber documentation provides good descriptions of Features and their syntax. You can define Backgrounds that are executed before each scenario of the feature (similar to JUnit 4 @Before) or Scenario Outlines to run the feature against a set of data. It is even possible to write your BDDs in different languages. To do so, start the feature file with the following line and use the keywords of that language.

#language: de
Funktionalität: ...

But you have to be careful with the encoding of the feature files and special characters; it's best to use UTF-8 as the default encoding. A complete list of the keywords in other languages can be found in the Cucumber apidoc.

Cucumber Steps

When the test for the feature is run and the steps, or some of them, are not yet implemented, it will produce an output like this:

You can implement missing steps with the snippets below:

@Given("^there are (\\d+) courses which do not have the topic \"([^\"]*)\"$")
public void there_are_courses_which_do_not_have_the_topic(int arg1, String arg2) throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@Given("^there are (\\d+) courses, A(\\d+) and B(\\d+), that each have \"([^\"]*)\" as one of the topics$")
public void there_are_courses_A_and_B_that_each_have_as_one_of_the_topics(int arg1, int arg2, int arg3, String arg4) throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}
...

The test prints out skeletons for the unimplemented steps. What you do now is create a new step definition, which is a plain Java class, and put it in one of the packages defined in the glue option. Copy the skeletons into the class and implement them.
So the steps are basically what is executed for each line of a scenario. It is possible to pass parameters to the steps; they are extracted from the step text and converted, so that steps can be reused with different values. It's also possible to define entire tables as input.
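The parameter extraction works like plain Java regular expressions: each capture group in the step pattern becomes a method argument, converted to the parameter type. The following stdlib-only sketch illustrates the idea with the pattern Cucumber generated above (the class and method names are invented for the demo):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepPatternDemo {

  // The pattern Cucumber generated for the first Given step above.
  private static final Pattern STEP = Pattern.compile(
      "^there are (\\d+) courses which do not have the topic \"([^\"]*)\"$");

  /** Extracts the capture groups and converts them, as Cucumber does. */
  static Object[] extractArgs(String stepText) {
    Matcher m = STEP.matcher(stepText);
    if (!m.matches()) {
      throw new IllegalArgumentException("no matching step definition: " + stepText);
    }
    return new Object[] { Integer.parseInt(m.group(1)), m.group(2) };
  }

  public static void main(String[] args) {
    Object[] extracted = extractArgs(
        "there are 240 courses which do not have the topic \"biology\"");
    System.out.println(extracted[0] + " / " + extracted[1]); // 240 / biology
  }
}
```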

Hooks

Hooks are basically the same as steps but fulfill a role similar to JUnit's @Before and @After annotated methods; they even use similar annotations (actually, they are named the same but live in a different package). You can trigger certain hooks using tags, as shown in the following example:

@WithFirefox
Scenario: Response times with 10 users
...

and the according hook in Java will be

public class BrowserHook {
 ...
  @Before("@WithFirefox")
  public void setupScenario_Firefox() {
   ...
  }
 ...

Dependencies between Hooks and Steps

In order to reuse existing code or to access the state of a particular Hook or Step instance, you can create a dependency between the classes by defining a constructor that accepts that Hook or Step. The Cucumber JUnit runner creates instances of the classes and injects them into the dependent classes.
public class MyBrowserSteps {

  private BrowserHooks browserHook;

  public MyBrowserSteps(final BrowserHooks browserHook) {
    this.browserHook = browserHook;
  }
}
The same applies to steps, so you can make one set of steps dependent on another set.

Selenium Steps

So far I have only described how to write any test with Cucumber, but for user acceptance testing you want to test the actual solution. For a web application that is the deployed application accessed through a browser. For browser automation, the Selenium framework is widely known and the framework of choice for most cases. It provides a recording tool (a Firefox plugin) to record user interactions with the browser, a model to access elements of the website using Java, and various methods of locating elements in the browser.
For automated user-acceptance tests with Selenium and Cucumber the basic approach would be to
  1. Record actions with the Selenium Recorder
  2. Copy them to Steps classes that match your BDD
  3. Define assertions in Then step implementations
A step implemented with Selenium will look like the following example:

@When("^I push the button$")
public void i_push_the_button() throws Throwable {
  driver.findElement(By.cssSelector("div.v-select-button")).click();
}

In order to reuse steps for different browsers, I used the BrowserHook shown in one of the examples above and set up the browser using a specific tag for each browser. The driver is initialized upon the first call to getDriver(). In the step itself I retrieve the driver from the BrowserHook that got injected. The BrowserHook may be implemented like this:

public final class BrowserHooks {
  private enum DriverType {
    headless,
    firefox,
    ie,
    chrome
  }

  // default; overridden by the @Before hooks below
  private DriverType driverType = DriverType.headless;
  private WebDriver driver;

  public WebDriver getDriver() {
    if (driver == null) {
      switch (driverType) {
        case ie:
          driver = new InternetExplorerDriver();
          break;
        case firefox:
          driver = new FirefoxDriver();
          break;
        ...
      }
    }
    return driver;
  }

  @Before("@WithFirefox")
  public void setupScenario_Firefox() {
    driverType = DriverType.firefox;
  }
  
  @Before("@WithIE")
  public void setupScenario_InternetExplorer() {
    driverType = DriverType.ie;
  }
  ...
}
And the step definition that uses it may look like:
public class MyBrowserSteps {

  private BrowserHooks browserHook;

  public MyBrowserSteps(final BrowserHooks browserHook) {
    this.browserHook = browserHook;
  }

  @When("^I push the button$")
  public void i_push_the_button() throws Throwable {
    this.browserHook.getDriver().findElement(By.cssSelector("div.v-select-button")).click();
  }
...

Aggregate Steps

One of the big advantages of a BDD framework like Cucumber is that you can define steps that aggregate multiple steps. A good example of this is the login story. Although it is a typical point of discussion whether "Login User" is a valid use case or user story (with regard to its business value), the requirement to allow a user to log in does exist, and its parameters need to be defined (whether it is via single sign-on, smartcard, username/password, two-factor or whatever else).
So let's assume you define a login user story such as
 Given the login screen is being displayed
 When I enter my username "xxx" and my password "yyy"
  And I push the login button
 Then I see the main screen of the application
  And I see my name being displayed in the user info box
Now you don't want to describe all these steps over and over again just because the rest of the application under test requires a logged-in user. So you could begin the other stories with
  • When the user "xxx" is logged in
Or even better by using a hook/tag before the story like
  • @Authenticated
Now, what you do in your code is define a dependency on the steps class that contains the login step definitions and invoke each of them in the correct order, either in a hook definition or in a step definition. The advantage of a hook is that you can combine it with other hooks to set up the test user or even a persona.
public class LoginSteps {

  private BrowserHooks browserHook;

  public LoginSteps(final BrowserHooks browserHook) {
    this.browserHook = browserHook;
  }


  @Given("^the login screen is being displayed$")
  public void the_login_screen_is_being_displayed() {
    this.browserHook.getDriver().get(baseURL);
  }
  @When("^I enter my username \"([^\"]*)\" and my password \"([^\"]*)\"$")
  public void I_enter_my_username_and_my_password(String arg1, String arg2) throws Throwable {
    //with Selenium, put in the values in the login form
  }
  @When("^I push the login button$")
  public void I_push_the_login_button() throws Throwable {
    // with Selenium, locate the submit/login button and click it
  }
  ...
}

public class LoginHook {

  private LoginSteps loginSteps;
  private String testUser;
  private String testPassword;

  public LoginHook(final LoginSteps loginSteps) {
    this.loginSteps = loginSteps;
  }

  @Before(value="@PersonaXY", order=1)
  public void selectPersonaXY() {
     this.testUser = ...;
     this.testPassword = ...;
  }

  @Before(value="@Authenticated", order=2)
  public void login() throws Throwable {
    this.loginSteps.the_login_screen_is_being_displayed();
    this.loginSteps.I_enter_my_username_and_my_password(testUser, testPassword);
    this.loginSteps.I_push_the_login_button();
    ...
  }
}

And how it is used in a story

 @Authenticated @PersonaXY
 Given I see the meaningful screen
 When I do something purposeful
 Then I get a sensible result

Conclusion

In this article I gave a brief introduction to Cucumber and how to write test cases with it. I showed how to implement steps with Selenium to create meaningful, browser-based user acceptance tests, and how to combine and thereby reuse steps and hooks to create a rich user acceptance test suite.

Friday, August 22, 2014

Developing a dynamic JEE service client with CDI (part 2)

In the previous blog entry I described how to develop a JEE service client that performs lazy lookups and is injectable to satisfy a service dependency. In this article I want to describe how to extend this service client to allow client-side service call interception.

Before I dive into the details of the interception, I want to outline the reasons why client-side call interception might be useful.

Background

In our current project we aimed for a RESTful, component-based architecture. For each of the components it should be possible to integrate them in a service-oriented architecture via web services, deploy them in a JEE environment and access them as JNDI-locatable services, or access them via a RESTful lightweight interface. Each component should be deployable standalone and should be operational as long as its lower-level component dependencies are satisfied. To be RESTful, our architecture followed the principles of REST described in Roy Fielding's famous dissertation.

With JEE we could easily add caching behavior on the provider side of a service consumer-provider relationship by adding an interceptor that serves a call with data from a cache, thereby reducing processing resource usage on the service side.

On the consumer side there is no such mechanism to transparently intercept a service call before it is actually sent to the service. Such client-side call interception could be of use for a range of use cases:
  • caching, to reduce network resource usage
  • error handling and fault tolerance
  • logging
  • security
The following description builds on the service client I described in a previous blog post.

Adding Interceptor support

In order to add call interceptors to the service client, we have to extend the invocation handler that delegates the service method invocation to the actual service by adding a stack of interceptors that are invoked consecutively.

When a JEE interceptor is invoked, an InvocationContext is passed to it. The interceptor may decide to fully intercept the request and not proceed with the invocation. This behavior is needed for caching or security concerns. Alternatively, interceptors may execute additional operations before or after proceeding with the invocation, which is useful for logging or error handling.

The InvocationContext is an interface introduced in JEE6. I used my own implementation of this interface where the getter methods defined by the interface simply return the values that are passed to the InvocationHandler of the dynamic proxy (see the ProxyServiceInvocationHandler of the previous blog post); the getTimer() method returns null, as I don't need it. The main functionality of the InvocationContext, however, resides in the proceed() method. I used a stack (Deque) to maintain the list of registered interceptors, each of which is checked to determine whether it may intercept the method invocation.

The check is done in the method isInterceptedBy, which verifies whether the class of the interceptor instance and the intercepted method are both annotated with the same InterceptorBinding annotation. I deviated a bit from the JEE6 standard in that I allowed an interceptor without an explicit interceptor binding to intercept any method invocation, but that is up to you.
The main methods of my InvocationContext implementation are shown in the next listing:

class ProxyInvocationContext implements InvocationContext {

  ...

  @Override
  public Object proceed() 
    throws IllegalAccessException, 
           IllegalArgumentException, 
           InvocationTargetException {
    if (!interceptorInvocationStack.isEmpty()) {
      final Object interceptor = interceptorInvocationStack.removeFirst();
      if (isInterceptedBy(interceptor)) {
        final Method aroundInvoke = interceptorMethods.get(interceptor);
        return aroundInvoke.invoke(interceptor, this);
      }
    }
    return this.method.invoke(target, this.parameters);
  }
  
  private boolean isInterceptedBy(final Object interceptor) {
    boolean hasNoBinding = true;
    for (final Annotation an : interceptor.getClass().getAnnotations()) {
      if (an.annotationType().getAnnotation(InterceptorBinding.class) != null) {
        hasNoBinding = false;
        if (getMethod().getAnnotation(an.annotationType()) != null) {
          return true;
        }
      }
    }
    return hasNoBinding;
  }
}
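The chaining logic of proceed() can be illustrated with a small, stdlib-only sketch. Everything here (the Interceptor interface, the Context class, the example interceptors) is invented for illustration; the real implementation works against javax.interceptor.InvocationContext and invokes the @AroundInvoke methods via reflection:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Callable;

public class InterceptorChainDemo {

  /** Simplified stand-in for a JEE interceptor's aroundInvoke method. */
  interface Interceptor {
    Object aroundInvoke(Context ctx) throws Exception;
  }

  /** Simplified stand-in for javax.interceptor.InvocationContext. */
  static class Context {
    private final Deque<Interceptor> stack;
    private final Callable<Object> target;

    Context(Deque<Interceptor> stack, Callable<Object> target) {
      this.stack = stack;
      this.target = target;
    }

    /** Pops the next interceptor; invokes the target once the stack is empty. */
    Object proceed() throws Exception {
      if (!stack.isEmpty()) {
        return stack.removeFirst().aroundInvoke(this);
      }
      return target.call();
    }
  }

  public static void main(String[] args) throws Exception {
    Deque<Interceptor> stack = new ArrayDeque<>();
    // A logging-style interceptor that decorates the result of the chain.
    stack.add(ctx -> "logged(" + ctx.proceed() + ")");
    // A caching-style interceptor that fully intercepts: it never proceeds,
    // so the target is never called.
    stack.add(ctx -> "cached result");

    Context ctx = new Context(stack, () -> "actual service call");
    System.out.println(ctx.proceed()); // logged(cached result)
  }
}
```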

Instantiating Interceptors in CDI Context using DeltaSpike

Now we want to create interceptors and put them into the interceptor stack. The JEE6 standard already defines interceptors, so we can simply reuse this specification. JEE6-compliant interceptors may take part in the CDI lifecycle; that means lifecycle methods annotated with @PostConstruct or @PreDestroy are executed, and dependencies are injected.
With the implementation of the InvocationContext described above, we need a collection of interceptor instances. In our project I used a configuration file (similar to beans.xml, where you have to define which interceptors should be loaded) in which I defined the interceptor classes to load.

Creating an instance of a class by its fully qualified name is trivial using reflection (i.e. Class.forName("...").newInstance()). But when you have a CDI container, you want to let the container do the creation and manage the further lifecycle of the instance, including dependency injection.

JEE6 itself does not provide the means for that, but it defines an SPI. The Apache DeltaSpike project offers an implementation of that SPI (by the way, it is developed by the same people who are behind CDI in JEE6 itself). DeltaSpike allows you to interact directly with the CDI container.

The following code snippet assumes that you have already loaded the interceptor class and verified it is annotated with the @Interceptor annotation. It first checks whether a CDI environment is active; if not, the interceptor is instantiated the traditional way. If it is active, a BeanManager reference is obtained and an injection target is created from the interceptor class; this is needed to create the instance and inject dependencies into it. With the injection target, an interceptor Bean is created using the BeanBuilder of DeltaSpike. The Bean is a Contextual instance that is required by the manager to create a creational context. Having this context, we can create a managed instance using the create method. DeltaSpike's contributions to the snippet are the BeanManagerProvider, the BeanBuilder and the DelegatingContextualLifecycle.

private <I> I createInterceptor(final Class<I> interceptorClass) 
  throws InstantiationException, IllegalAccessException {
  
  final I interceptor;
  
  if (BeanManagerProvider.isActive()) {
    final BeanManager manager = BeanManagerProvider.getInstance().getBeanManager();
    final InjectionTarget<I> target = manager.createInjectionTarget(
                                            manager.createAnnotatedType(interceptorClass));
            
    final Bean<I> interceptorBean = 
      new BeanBuilder<I>(manager)
          .beanClass(interceptorClass)
          .beanLifecycle(new DelegatingContextualLifecycle<I>(target))
          .create();
    interceptor = interceptorBean.create(
                     manager.createCreationalContext(interceptorBean));

  } else {
    interceptor = interceptorClass.newInstance();
  }
  return interceptor;
}

Conclusion

In this article I described how the Dynamic Service Client can be extended in order to provide client-side service call interception and how Interceptors (or Beans in general) can be instantiated in a CDI context using Apache DeltaSpike.

Wednesday, July 23, 2014

Developing a dynamic JEE service client with CDI (part 1)

With JEE6, CDI was introduced and standardized for the Java Enterprise world. The main scope of the CDI extension was enterprise services, as it simplified various mechanisms in the JEE world such as EJB lookup or invocation interception. Unfortunately the new features fall a bit short on the consumer side of services implemented with CDI, in particular the lookup (service locator) pattern and client-side invocation interception.
In this first article I want to describe a dynamic service client that performs lazy service dependency resolution (lookup) and that can be injected into a CDI injection point.
It is a good base for extensions such as sophisticated configuration or error-handling mechanisms.

Defining a Dynamic Service Locator

In JEE6 it's quite easy to perform a JNDI lookup for an EJB using the @EJB annotation. Unfortunately this lookup is done during construction of the service consumer. When you have a multi-module project where each module should be deployable independently of the others, deployment will fail if the dependencies declared by the @EJB annotation cannot be satisfied, that is, if the EJB lookup fails.
The key to this problem is a lazy lookup that is performed the first time the dependency is actually needed. But that is not supported by the @EJB annotation.
In our current project we solved the problem by implementing a dynamic service proxy that implements the interface of the service and performs the lookup, using JNDI or another strategy, on the first actual service call.

Creating a Proxy

Creating a dynamic proxy is relatively simple: you need a custom implementation of an InvocationHandler that performs the actual method invocation, and you need to specify the interfaces the dynamic proxy should implement. A factory method is used to create such a proxy instance. The following code example shows such a factory method:
public abstract class DynamicServiceClient {

  public static <T> T newInstance(final Class<T> serviceInterface) {
    return (T) java.lang.reflect.Proxy.newProxyInstance(
      getClassLoader(),
      new Class[] { serviceInterface }, 
      new ProxyServiceInvocationHandler<T>(config));
  }
 
  private static ClassLoader getClassLoader(){
     ...
  }
}

Invocation Handler

The invocation handler delegates all service calls to an instance of the actual service. The service is kept as an instance field of the invocation handler and is initialized upon the first service call using a Service Locator. The invoke method of the invocation handler delegates the call to the service instance.

class ProxyServiceInvocationHandler<T> 
  implements InvocationHandler {
 /**
  * The service reference that is lazily initialized
  */
  private T service; 

 /**
  * The service locator that performs the actual lookup
  */
  private final ServiceLocator<T> locator;

    ...

  @Override
  public Object invoke(final Object proxy, final Method method, final Object[] args) 
    throws IllegalAccessException, IllegalArgumentException, InvocationTargetException {
    //do the lookup if not already done
    if (this.service == null) {
      this.service = this.locator.locate();
    }
    //do the actual service call and return the result
    return method.invoke(this.service, args);
  }
}
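To see the lazy-lookup behavior in isolation, here is a stdlib-only sketch. The GreetingService interface, the LOOKUPS counter and the inlined service instance are invented for the demo; in the real client, the locator performs a JNDI lookup instead:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class LazyProxyDemo {

  interface GreetingService {
    String greet(String name);
  }

  /** Counts how often the (simulated) lookup is performed. */
  static final AtomicInteger LOOKUPS = new AtomicInteger();

  static GreetingService createClient() {
    InvocationHandler handler = new InvocationHandler() {
      private GreetingService service; // lazily initialized

      @Override
      public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (service == null) { // the lookup happens on the first call only
          LOOKUPS.incrementAndGet();
          service = name -> "Hello, " + name; // stands in for locator.locate()
        }
        return method.invoke(service, args);
      }
    };
    return (GreetingService) Proxy.newProxyInstance(
        GreetingService.class.getClassLoader(),
        new Class<?>[] { GreetingService.class },
        handler);
  }

  public static void main(String[] args) {
    GreetingService client = createClient(); // no lookup yet
    client.greet("JEE");                     // triggers the lookup
    client.greet("CDI");                     // reuses the cached service
    System.out.println(LOOKUPS.get());       // 1
  }
}
```

Creating the client is cheap; the expensive resolution is deferred until the proxy is actually used, which is exactly what makes independent deployment of modules possible.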

Service Locator

The service locator that is used in the above listing performs the actual lookup. In our project I used an implementation that allowed a location strategy to be defined in a configuration, with the JNDI lookup being implemented as one of the strategies. The following example, however, shows a simple implementation that performs a straightforward JNDI lookup using a lookupName.

public class ServiceLocator<T> {
 /**
  * the name for the JNDI lookup
  */
  private String lookupName;

    ...

  @SuppressWarnings("unchecked")
  public T locate() throws NamingException {
    final Context context = new InitialContext();
    return (T) context.lookup(this.lookupName);
  }
}

Service Injection

In order to use the dynamic service shown above and inject it via CDI, I used a producer method that creates instances of the dynamic service and a qualifier to distinguish the dynamic service client from the actual service implementation that might be available in the CDI container as well.
The creation of the instance itself is done by the static factory method shown in the first listing. The producer extends the DynamicServiceClient I described earlier and uses its static factory method. In order to keep the producer class simple, we need to add some methods to the DynamicServiceClient.

The extended DynamicServiceClient that is shown in the following listing contains the logic for the actual service proxy instantiation, and an abstract method, to be implemented by the producer, that returns the interface class of the service to be instantiated.

public abstract class DynamicServiceClient<T>{
  /**
   * Instance of the service client
   */
  private T serviceInstance;

  /**
   * The method has to return the interface of the service 
   */
  protected abstract Class<T> getServiceInterface();

  /**
   * Creates a new instance of the service client. 
   */
  protected T getInstance() {
    if (serviceInstance == null) {
      serviceInstance = newInstance(getServiceInterface());
    }
    return serviceInstance;
  }

  /**
   * The factory method to create a new service instance
   */
  public static <T> T newInstance(final Class<T> serviceInterface) {
    return (T) Proxy.newProxyInstance(
        getClassLoader(),
        new Class[] { serviceInterface }, 
        new ProxyServiceInvocationHandler<T>());
  }

  private static ClassLoader getClassLoader(){
     ...
  }
}

To ensure that the producer method and the matching injection points produce and receive instances of the service client implementing the service interface, and not instances of the actual service, I used a qualifier, as shown in the next listing.

@Qualifier
@Retention(RUNTIME)
@Target({ METHOD, FIELD, PARAMETER })
public @interface ServiceClient {}

Now we're ready to define a producer using the DynamicServiceClient as base class. The producer is relatively simple as it only contains two simple methods.


public class MyServiceClientProducer 
  extends DynamicServiceClient<MyBusinessService> {

  @Produces @ServiceClient 
  public MyBusinessService getInstance() {
    return super.getInstance();
  }

  protected Class<MyBusinessService> getServiceInterface() {
    return MyBusinessService.class;
  }
}
To inject the dynamic service client into an injection point, we simply declare the service interface as the field type and add the qualifier. That's it.

public class MyServiceConsumer {

  @Inject @ServiceClient
  private MyBusinessService service;

  ... 
}

Instead of
public class MyServiceConsumer {

  @EJB(lookup="...")
  private MyBusinessService service;

  ... 
}

Conclusion

In this article I described how to implement a dynamic service client that can be used in a CDI container. The service locator performs a lookup upon the first call of a service method and therefore provides a mechanism for fault-tolerant dependency resolution, as well as a foundation for implementing client-side service call interception, which I will describe in a future post.

Tuesday, April 29, 2014

JUnit Testing with Jackrabbit

Writing unit tests for your code is not only best practice, it's essential for writing quality code. In order to write good unit tests, you should mock the code that is not under test. But what if you're using a technology or an API that would require quite a lot of complicated mocks?
In this article I'd like to describe how you write unit tests for code that accesses a JCR repository.

At first I really tried to mock the JCR API using Mockito, but stopped my attempt at the point where I had to mock the behavior of the Query Object Model. It became apparent that writing mocks would outweigh the effort of writing the actual production code by far. So I had to search for an alternative, and found one.

The reference implementation of JCR is the Apache Jackrabbit project. It comes with a set of JCR repository implementations, one of which is the TransientRepository. The TransientRepository starts the repository on the first login and shuts it down when the last session is closed; the repository lives only as long as there are open sessions, which is fast and makes it a good fit for unit testing. Nevertheless, a directory structure is created for the repository, and unless one is specified, a config file is created as well.

For writing unit tests against this repository, we need the following:
  • a temporary directory to locate the directory structure of the repository
  • a configuration file (unless you want one created on every startup)
  • the repository instance
  • a CND content model description to initialize the repository data model (optional) 
  • an admin session to perform administrator operations
  • a cleanup operation to remove the directory structure
  • the maven dependencies to satisfy all dependencies
Let's start with the Maven dependencies. You need the JCR Spec, the Jackrabbit core implementation and the Jackrabbit commons for setting up the repository.

<properties>
  <!-- JCR Spec -->
  <javax.jcr.version>2.0</javax.jcr.version>
  <!-- JCR Impl -->
  <apache.jackrabbit.version>2.6.5</apache.jackrabbit.version>
</properties>
...
<dependencies>
<!-- The JCR API -->
  <dependency>
    <groupId>javax.jcr</groupId>
    <artifactId>jcr</artifactId>
    <version>${javax.jcr.version}</version>
  </dependency>
  <!-- Jackrabbit content repository -->
  <dependency>
    <groupId>org.apache.jackrabbit</groupId>
    <artifactId>jackrabbit-core</artifactId>
    <version>${apache.jackrabbit.version}</version>
    <scope>test</scope>
  </dependency>
  <!-- Jackrabbit Tools like the CND importer -->
  <dependency>
    <groupId>org.apache.jackrabbit</groupId>
    <artifactId>jackrabbit-jcr-commons</artifactId>
    <version>${apache.jackrabbit.version}</version>
    <scope>test</scope>
  </dependency>
</dependencies> 

Now let's create the directory for the repository. I recommend locating it in a temporary folder so that multiple test runs don't affect each other if a cleanup failed. We use Java's temporary-directory facility (java.nio.file.Files) for that:
import java.nio.file.Files;
import java.nio.file.Path;
...
// prefix for the repository folder
private static final String TEST_REPOSITORY_LOCATION = "test-jcr_";
...
final Path repositoryPath = 
      Files.createTempDirectory(TEST_REPOSITORY_LOCATION);

Next, you require a configuration file. If you already have a configuration file available in the classpath, e.g. in src/test/resources, you should load it first:
final InputStream configStream = 
  YourTestCase.class.getResourceAsStream("/repository.xml");

Knowing the location and the configuration, we can create the repository:
import org.apache.jackrabbit.core.config.RepositoryConfig;
import org.apache.jackrabbit.core.TransientRepository;
...
final Path repositoryLocation = 
      repositoryPath.toAbsolutePath();
final RepositoryConfig config = 
      RepositoryConfig.create(configStream, repositoryLocation.toString());
final TransientRepository repository = 
  new TransientRepository(config);

If you omit the config parameter, the repository is created in the working directory, including a default repository.xml file, which is good for a start if you have no such file yet.

Now that we have the repository, we want to log in and create an (admin) session in order to populate the repository. We create the credentials (the default admin user is admin/admin) and perform the login:
final Credentials creds = 
  new SimpleCredentials("admin", "admin".toCharArray());
final Session session = repository.login(creds);

With the repository running and an open session, we can initialize the repository with our content model, if we require extensions beyond the standard JCR/Jackrabbit content model. In the next step I import a model defined in the Compact Node Type Definition (CND) format described in JCR 2.0:
import org.apache.jackrabbit.commons.cnd.CndImporter;
...
private static final String JCR_MODEL_CND = "/jcr_model.cnd.txt";
...
final URL cndFile = YourTestCase.class.getResource(JCR_MODEL_CND);
final Reader cndReader = new InputStreamReader(cndFile.openStream());
CndImporter.registerNodeTypes(cndReader, session, true);
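
For completeness, a minimal CND file could look like the following sketch; the namespace and the node type are made up for illustration and not required by JCR:

```
// a hypothetical namespace and node type for testing
<test='http://www.example.com/test/1.0'>
[test:document] > nt:unstructured
  - test:title (string)
```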

All of the code examples above should be performed in the @BeforeClass annotated method so that the repository is created only once for the entire test class; otherwise a lot of overhead is generated. The node structures for the individual tests, however, should be created and erased again in the @Before and @After annotated methods (addNode() etc.).
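
Such @Before/@After methods could look like the following sketch (the field name and the node name "testRoot" are mine, not mandated by JCR; "session" is the admin session created in @BeforeClass):

```java
import javax.jcr.Node;
import org.junit.After;
import org.junit.Before;

// fragment of the test class
private Node testRoot;

@Before
public void createTestContent() throws Exception {
    // give every test a fresh subtree so the tests don't affect each other
    testRoot = session.getRootNode().addNode("testRoot");
    session.save();
}

@After
public void removeTestContent() throws Exception {
    // remove the subtree again to leave the repository clean
    testRoot.remove();
    session.save();
}
```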

Finally, after you have performed your tests, you should clean up the test environment again. Because a directory was created for the transient repository, we have to remove it, otherwise the temp folder will grow over time.
There are three options for cleaning it up.
  1. Cleaning up in @AfterClass annotated method
  2. Cleaning up using File::deleteOnExit()
  3. Cleaning up using shutdown hook
I prefer combining 1 and 3 for fail-safe deletion. For option 1 we need a method that shuts down the repository and cleans up the directory. For deleting the directory I use Apache Commons IO's FileUtils, as it can delete directory structures containing files and subdirectories. The method could look like this:

import org.apache.commons.io.FileUtils;
...
@AfterClass
public static void destroyRepository(){
  repository.shutdown();
  String repositoryLocation = repository.getHomeDir();
  try {
    FileUtils.deleteDirectory(new File(repositoryLocation));
  } catch (final IOException e) {
   ...
  }
  repository = null;
}
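
FileUtils.deleteDirectory does the job, but if you prefer to stay with the JDK, Java 8's Files.walk can do the same recursive deletion. A sketch (class and method names are mine):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class RecursiveDelete {

    // deletes a directory tree; the paths are sorted in reverse order
    // so files and subdirectories are removed before their parents
    public static void deleteDirectory(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return; // nothing to do, e.g. already cleaned up
        }
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        // demo: create a small repository-like structure and remove it again
        Path dir = Files.createTempDirectory("test-jcr_");
        Files.createDirectories(dir.resolve("workspaces").resolve("default"));
        Files.write(dir.resolve("repository.xml"), "<Repository/>".getBytes());
        deleteDirectory(dir);
        System.out.println(Files.exists(dir)); // prints "false"
    }
}
```

This avoids the extra test dependency, at the price of the slightly noisier checked-exception handling inside the lambda.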

As a fail-safe operation I prefer to add an additional shutdown hook that is executed when the JVM shuts down. This deletes the repository even when the @AfterClass method is not invoked by JUnit. I do not use File's deleteOnExit() method, as it requires the directory to be empty, while in a shutdown hook I can run my own cleanup implementation.


A shutdown hook can easily be added to the runtime by registering a Thread to be executed on VM shutdown. We simply call the destroy method from its run() method:
Runtime.getRuntime().addShutdownHook(new Thread("Repository Cleanup") {
  @Override
  public void run() {
    destroyRepository();
  }
});

Now you should have everything to set up your test JCR repository and tear down the test environment. Happy testing!

Monday, April 28, 2014

Configuring SLF4J with Log4J2 using Maven

In our current project we had to decide on a logging framework. While it appears to be a common standard to use the Simple Logging Facade for Java (SLF4J) on the caller side, there were various options for the logging framework itself. In the end we decided to use Log4J 2, mainly for its better performance in comparison to other frameworks (see The Logging Olympics).
The setup of this combination is fairly easy, and there are good examples of how to add an SLF4J binding to your Maven project, as in this good blog article. But as Log4J 2 is rather new and no final release has been announced yet (current is release candidate 1), it's hard to find a definitive configuration example, so I decided to write down ours.

<properties>
  <slf4j.version>1.7.6</slf4j.version>
  <!-- current log4j 2 release -->
  <log4j.version>2.0-rc1</log4j.version> 
</properties>
...
<dependencies>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>${slf4j.version}</version>
  </dependency>
  <!-- Binding for Log4J -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>${log4j.version}</version>
  </dependency>
  <!-- Log4j API and Core implementation required for binding -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>${log4j.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>${log4j.version}</version>
  </dependency>
</dependencies> 

And a basic Log4J 2 configuration (typically named log4j2.xml and located in your src/main/resources path) could look like this:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="OFF">
 <Appenders>
  <Console name="Console" target="SYSTEM_OUT">
   <PatternLayout pattern="%d{HH:mm:ss} [%t] %-5level %logger{36} - %msg%n" />
  </Console>
 </Appenders>
 <Loggers>
  <Logger name="yourLogger" level="debug" additivity="false">
   <AppenderRef ref="Console" />
  </Logger>
  <Root level="error">
   <AppenderRef ref="Console" />
  </Root>
 </Loggers>
</Configuration>
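
With the binding in place, application code only ever touches the SLF4J API; the Log4J 2 implementation is picked up from the classpath at runtime. A minimal caller could look like this (class name and messages are made up; the logger name "yourLogger" matches the configuration above):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingExample {

    // resolves to the <Logger name="yourLogger"> entry in the configuration
    private static final Logger LOG = LoggerFactory.getLogger("yourLogger");

    public static void main(String[] args) {
        // parameterized messages: the final string is only built
        // if the level is actually enabled
        LOG.debug("Repository initialized in {} ms", 42);
        LOG.error("Something went wrong", new IllegalStateException("example"));
    }
}
```

Note that LoggerFactory.getLogger(SomeClass.class) is the more common idiom; the plain string name is used here only to match the example configuration.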

Tuesday, April 1, 2014

Guard Conditions of UML Decision Nodes

At the beginning of the year I attended a requirements engineering class where we discussed - among other topics - activity diagrams. One element of an activity diagram is the decision node, the diamond shape where you can branch the flow of activities depending on a guard condition for each outgoing edge. There we stumbled upon one aspect of the guard conditions that was - at least to me - completely new.

According to the UML specification, the guard condition protects the entry of an edge: if the condition is not fulfilled, the edge is not entered. Before the above-mentioned discussion I had always assumed that an edge without a condition evaluates to true, is evaluated last of all outgoing edges, and is therefore entered only if none of the other edges' conditions applies. But that is wrong!

The truth is that, according to the UML specification, the order in which the edges are evaluated is not determined and may therefore be completely random. So if you define an outgoing edge of a decision node without a guard condition, that edge may be entered although another edge's condition was met but would have been evaluated after the unguarded edge.

The conclusion: if you want deterministic behavior in your activities, always define the guard conditions of all outgoing edges of your decision nodes.
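
The nondeterminism can be illustrated with a small simulation (my own illustration, not anything from the UML spec or a modeling tool): a decision node's outgoing edges are evaluated in shuffled order, and an edge without a guard is modeled as a guard that is always true, so it can "win" even when a guarded edge also matches:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.Predicate;

public class DecisionNode {

    // an outgoing edge: a target activity plus a guard condition
    static class Edge {
        final String target;
        final Predicate<Integer> guard;

        Edge(String target, Predicate<Integer> guard) {
            this.target = target;
            this.guard = guard;
        }
    }

    // the spec leaves the evaluation order of the edges undetermined,
    // so we shuffle them and take the first edge whose guard holds
    static String traverse(List<Edge> edges, int token, Random rnd) {
        List<Edge> shuffled = new ArrayList<>(edges);
        Collections.shuffle(shuffled, rnd);
        for (Edge e : shuffled) {
            if (e.guard.test(token)) {
                return e.target;
            }
        }
        return "deadlock"; // no guard matched
    }

    public static void main(String[] args) {
        List<Edge> edges = Arrays.asList(
                new Edge("approve", x -> x > 100), // guarded edge, matches token 150
                new Edge("fallback", x -> true));  // edge without a guard condition
        Set<String> outcomes = new TreeSet<>();
        for (int i = 0; i < 100; i++) {
            outcomes.add(traverse(edges, 150, new Random(i)));
        }
        // although the guarded edge matches every time, both edges get taken
        System.out.println(outcomes); // prints "[approve, fallback]"
    }
}
```

Both targets show up in the result set: the unguarded edge is sometimes evaluated first, which is exactly why all outgoing edges should carry explicit guards.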

Monday, March 24, 2014

Some interesting points about JCR

While doing research on the Java Content Repository (JCR, specified in JSR 283) for our current project I stumbled upon various interesting facts worth mentioning besides the actual specification details, which I'd like to share.
  • JCR was specified by Day Software, a company based in Basel, Switzerland - which is especially interesting to me, as I live in Switzerland. Day Software has since been bought by Adobe.
  • In comparison with CMIS, JCR seems to be a functional superset, as various sources indicate (see here, here, here or here), meaning JCR offers more functionality than CMIS does. Nevertheless, the two standards are hardly directly comparable, as they focus on distinct fields of use. While CMIS is service oriented and aims at repository interoperability, JCR is a standard and model for accessing hierarchical content. JCR seems to be easier to implement than CMIS.
  • JCR and CMIS are compatible with each other.
    • It's possible to access JCR repositories via CMIS. The JCR reference implementation (Jackrabbit) offers a service provider interface (SPI) implementation towards CMIS, and there is a CMIS-JCR bridge as part of the OpenCMIS implementation Apache Chemistry (for an overview, see here). The only restriction is that some of the JCR mixin types are not visible via CMIS.
    • Access via JCR to CMIS repositories should be a programmer's exercise, as they are directly mappable.
  • JBoss ModeShape is an alternative open-source implementation of JCR that is not based on Jackrabbit. Although it does not implement the optional parts of the standard as completely as Jackrabbit does (the mandatory elements are fully implemented), it offers various features beyond Jackrabbit, such as administration, configuration and enterprise readiness. Its documentation is also much more useful.
  • Both Apache Jackrabbit and JBoss ModeShape are implementations of the JCR standard and form only the data-management component of a content management system. Typically, a CMS defines its own data model and workflows for dealing with the data.
  • Content management systems that are built upon a JCR repository include:
    • Magnolia CMS (supports both ModeShape and Jackrabbit) - a Swiss product, by the way!
    • Jahia (Jackrabbit)
    • Hippo CMS (Jackrabbit)
    • eXo Platform (more of a social/collaboration portal product, but offers its own JCR implementation)
    • Adobe Experience Manager (formerly CQ5; uses Content Repository Extreme (CRX), a commercial, enterprise-ready JCR implementation based on Jackrabbit)
  • Other implementations that support JCR
  • Day Software (now part of Adobe) offers various JCR connectors for other commercial repositories:
  • Apache Sling is a framework for accessing JCR repositories via REST and for building repository-oriented web applications.
  • Apache Stanbol is a framework combining elements of the semantic web with the structured repository data of content management systems (JCR, CMIS).
  • A good overview of available CM repositories can be found here: http://stackoverflow.com/questions/1174131/looking-for-a-good-programmable-java-cms-content-management-system
  • Looking at the Jackrabbit team, you might stumble upon the name 'Roy Fielding'. Doesn't ring a bell? Have a look at his must-read thesis (tl;dr: it defines the REST architectural style).

Tuesday, January 14, 2014

Prototyping in Agile Environments

In my blog post a couple of days ago I wrote about prototyping and technical debt. After some discussions with my colleagues about the value of prototyping I want to refine my statements a bit further.

What is a prototype?

First of all, what is a prototype? Wikipedia states: "A prototype is an early sample, model or release of a product built to test a concept or process or to act as a thing to be replicated or learned from". The several types of prototypes basically break down into two dimensions:
  • Horizontal vs. Vertical Prototype
  • Evolutionary vs. Throwaway Prototype
To help understand the relations and implications of the various kinds of prototypes, have a look at the following diagram. (The diagram assumes that front-end (UI) aspects are at the top while backend aspects (logic, processing, persistence, etc.) are at the bottom, and that time flows from left to right.)


Basically, a prototype should answer questions about the solution to be built. An intrinsic value of a prototype is therefore to reduce some kind of risk by giving those answers.

Horizontal and Vertical Prototypes

A horizontal prototype may help reduce functional risk, so the customer can validate whether the solution to be developed meets her requirements. A vertical prototype may help reduce technical risk, as it proves that certain architectural or technical decisions are viable. A prototype may also combine horizontal and vertical aspects, but it should typically focus on only one of them; for example, verifying whether certain UI components can be integrated in the UI framework focuses more on vertical than on horizontal aspects.

Nevertheless, in the lean world most types of prototypes are waste (throwaway prototypes) or potential waste (horizontal, evolutionary prototypes). To reduce waste, the effort of producing a prototype that will not become part of the solution should be minimized. The prototype should focus only on reducing risk and answering a predefined set of questions; in other words, it should have a narrow, defined goal.

Technical Debt and Technical Credit 

Two aspects of prototyping in agile environments are technical debt and technical credit. The latter is a concept you do not read about often in the agile literature, but it exists if you think the concept of technical debt through to its end.

The following diagram depicts the concept of technical credit and is a variation of the first diagram.


It differs from the first one in two areas:
  • a vertical throwaway prototype does not necessarily have to start at the front-end. It could actually cover any part of the technical stack.
  • a horizontal evolutionary prototype may also cover layers of the software other than the UI, and may therefore produce "technical credit" as the opposite of technical debt.

Technical Debt

Obviously, a horizontal prototype that is part of the solution produces a lot of technical debt in the first place, especially if it targets the UI. The customer perceives the solution as finished, but it is dysfunctional from a technical perspective, and most of the feature promises are broken until the technical debt is worked off.

Technical Credit

The opposite of technical debt is technical credit. Technical credit is generated when something that does not provide any immediate, perceivable business value is developed and delivered in advance. An example could be an architecture refactoring that provides a simple extension mechanism so that business functionality can be developed more rapidly. It may cost more up front than the perceived returned business value, but may cost less afterwards compared with the business value that additional functionality provides. Still, there is a great risk of producing waste, for example if the requirements change.

Functional Debt and Functional Credit

The other dimension of debt and credit is the functional one. A project starts with a certain amount of functional debt - the product backlog - which is worked off continuously. Further, the requirements engineering discipline refers to the Kano model of success factors:
  • Basic or Threshold Attributes - if they are missing, the solution is not accepted at all
  • Performance Attributes - some attributes could be missing, but the more are implemented the better
  • Excitement Attributes - the customer does not even know she wants those. They can compensate for missing performance attributes and may help exceed the customer's expectations (making her a happier customer).

Functional Debt

Functional debt consists of features that are implicitly or explicitly required (Basic and Performance Attributes) but not yet delivered. Working off the functional debt of the backlog is exactly the concept of a vertical, evolutionary prototype, and therefore an intrinsic mitigation strategy in agile processes to reduce the functional risk of the overall project and to maximize the work not done, according to the principles of the agile manifesto.

Functional Credit

Functional credit, on the other hand, is generated when Excitement Attributes are delivered. Functional credit may help compensate for functional debt (missing Performance Attributes) or even help exceed the customer's expectations by raising your functional balance above zero.

Nevertheless, concentrating on generating functional credit - in the sense of developing something that is not implicitly or explicitly required - poses the risk of producing waste. For example, you cannot guarantee that the developed feature really is an Excitement Attribute. The customer may even get upset that you do not deliver what was agreed on. Still, it could be a risk worth taking, depending on the situation.


Theory applied

How will all this theory impact our daily work? Basically, it should help you make informed decisions and know the implications of applying one of these concepts to your project.

The following diagram depicts a possible situation in a project where the above discussed aspects have been taken into consideration.

The diagram shows:
  • a limited horizontal evolutionary prototype with a limited scope that anticipates what has to be developed in the next iteration. This could support marketing operations by showing what the customer can expect soon and by proving that it's not vaporware.
  • a lightweight horizontal throwaway prototype (the bar is smaller than in the diagrams above) that goes beyond the evolutionary horizontal prototype and supports the vision (using a wireframe), to mitigate functional risks
  • a maximized amount of work not done
  • several lightweight proof-of-concept vertical throwaway prototypes (the bars are narrower) that combined cover the entire technical stack, to mitigate technical risks.
Further, the diagram shows several iteration increments with growing perceivable value but also with growing technical debt (which might occur in any project). The next diagram depicts how the iteration scope may have looked (yellow), how it should look in an ideal world (green) and how it should look to reduce the technical debt again (red). The latter effectively corresponds to the approach of producing something of only little business value but with a big (estimated) effort.

Of course it's better to avoid technical debt in the first place, but once you have created it you have to deal with it. The Disciplined Agile Delivery framework lists 11 strategies for dealing with technical debt.

Concluding Thoughts

There might be occasions where any kind of prototype has its value and helps the project. But if you choose a certain type of prototyping approach for your project, keep the implications in mind. Always try to reduce waste and technical debt and maximize the work not done, while minimizing the risks.

Good practices for prototyping should include:
  • Reduce functional risk without producing technical debt by using a wireframe mockup (horizontal, throwaway prototype)
  • Reduce high technical risks with a limited proof-of-concept throwaway prototype (horizontal or vertical). Define a set of questions to be answered by the prototype to limit the production of waste (e.g. "is the architecture feasible and does it match our criteria?", "does the framework to be used match our needs?", "are the performance criteria achievable?", "does the solution integrate into the environment?"). Such a prototype is always a tradeoff between waste and risk reduction and requires thorough consideration.
  • Avoid producing a horizontal, evolutionary prototype. If you do, limit its scope to limit the production of technical debt. Anticipate only a small and manageable number of iterations (e.g. 1-3) and, along with the prototype's creation, define a strategy for working off the technical debt. Make it transparent.
  • Do vertical, evolutionary prototyping whenever possible.
  • Have a strategy to deal with technical debt.
tl;dr
The world of prototyping is not black and white, it all depends on the situation.

Monday, January 13, 2014

Social Bookmarking

Over the holiday season I sought a solution to keep the bookmarks on my private and my work computer in sync. Although Firefox comes with a synchronization feature, I was aiming more at a solution based on social bookmarking. Back in the days when I worked for IBM Software Services for Lotus, I used social bookmarking on the internal platform - guess what - Lotus Connections (now IBM Connections), which had a good browser integration (an IBM-internal extension; I don't know whether it is now part of the product or a free or paid asset). The combination of persisting my bookmarks and sharing my (tagged) links with the community was great added value to my work, as it significantly decreased the time to find specific information, especially in the field I was working in and within the network of colleagues working in the same field.
So the requirements for my sought-after social bookmarking solution were set:
  • Browser integration being a substitute for Browser Bookmarks
  • Synchronization between different workstations
  • Support for Tagging, to ease search for information 
  • Possibility to share links with colleagues and friends
  • Possibility to have private links
I remembered that there had been a social bookmarking site - if not THE social bookmarking site - called del.icio.us with a good browser integration, so I started using it. But my experience was not that good, and I guess that has its origin in the history of the site. Delicious was bought by Yahoo and went through years of stagnation before being sold again to AVOS, the company of the founders of YouTube. The browser integration developed by Yahoo doesn't seem well supported anymore (or at the moment?); in particular, the synchronization between browser and website breaks frequently, and you have to sign out, delete the local bookmarks and sign in again (the sign-in of the browser extension is done via the website previous.delicious.com, which says everything). So I ended up multiple times editing bookmarks in the extension that were not persisted on the website, or editing bookmarks on the website that were not only not synchronized to the browser but overwritten again with the data from the browser. On top of that, I don't find the web UI very user friendly; it's overly simplistic and not very intuitive. So I looked for an alternative.

I took StumbleUpon into consideration, but its poor browser integration (incompatible with the NoScript plugin) was a no-go. Further, it focuses more on news than on knowledge sharing and has no concept of private links - everything is public. Though I find the idea behind StumbleUpon generally intriguing, it's not what I was looking for.
Google Bookmarks falls short of a good browser integration, and Evernote, although it allows you to capture nearly everything, is not really a social bookmarking site and therefore lacks some basic functionality.

After reading the good blog post "10 Alternatives to Delicious.com" I tried out Diigo. And I must admit, it's exactly what I was looking for - and more. It comes with a browser plugin that lets you easily capture new bookmarks and also displays your bookmarks in the bookmarks toolbar based on custom filters. Bookmarks can also be organized into lists, which can be filtered too. Further, the plugin has a sidebar to quickly search all your captured bookmarks. For me Diigo is a complete replacement for the browser's bookmark management and helps me keep different computers in sync, including smartphones, as it also provides a neat app.
But Diigo is not only a social bookmarking tool. It goes well beyond that; it's a knowledge-working tool. With Diigo you can add sticky notes to web pages, highlight passages, comment on them, capture images, capture parts of a web page as an image, and share all of that with the public or a group of people - or, of course, keep it private. So whenever I capture a bookmark, I can also highlight what was important to me on that page or why I looked up a certain piece of information. When working in a group, it helps to share such information with each other.

There may be other alternatives on the web as well, but with Diigo I have found a solution that works for me, so I'm sticking with it.

tl;dr
Want to do social bookmarking and looking for an alternative to Delicious? Try Diigo. It's a knowledge-working tool.

Tuesday, January 7, 2014

Technical Debt and Prototyping


Currently I am preparing for my IREB Requirements Engineering certification, and I read (and already knew before) that in requirements engineering, creating a prototype is a method and tool to validate requirements.
A prototype should always answer a set of questions defined beforehand. Further, there are two basic types of prototypes:
  1. Throwaway Prototype - the prototype is not used further in the development process, apart from serving as a technical example.
  2. Evolutionary Prototype - the prototype itself is continuously developed further until it becomes the final (or a shippable) product.
The question is, what role does prototyping play in agile development?

Let's first have a look at the dimensions of a prototype. The Wikipedia article about prototyping describes the two dimensions of a prototype as:
  • vertical (in German also known as "Durchstichprototyp"), meaning a slice of the system to be developed, including frontend, business logic and persistence.
  • horizontal, meaning an almost complete implementation of a single layer of a multi-layered architecture. I guess the most common example of a horizontal prototype is a UI prototype. Interface stubs are also common.
I would assume that vertical prototypes are closer to the principles of agile development, as they are more "something that is done" than a horizontal prototype.

Why is that?

In agile, the concept of "technical debt" is known as a measure of design or software quality - or rather, the lack of it. There are some strategies to deal with it, as described in DAD's "11 Strategies to deal with technical debt". Further, agile - or Scrum - aims at producing increments that provide value to the customer; in most cases, this "value" reflects additions or changes to the UI that the customer can validate relatively easily.

A vertical prototype is a potentially shippable or consumable piece of software in the agile sense. It is functional and it is done. Maybe it needs some refactoring, but basically the amount of technical debt is relatively low. One could even say that producing an evolutionary, vertical prototype is doing agile (if done right).

On the opposite side, a horizontal prototype produces an artifact on only one layer of the software, in most cases the UI. The customer can validate whether the prototype reflects the requirements. But the prototype itself provides no real value, as it is a dysfunctional piece of software: all the stuff behind the UI is not implemented. In other words, a horizontal UI prototype is a huge pile of technical debt. The more complete the prototype, the higher the debt.
If an evolutionary, horizontal prototype is the starting point of the implementation (i.e. the product of a pre-project or of the first iterations), it is relatively difficult to plan backlog entries for further iterations with increments that "provide value" from a customer perspective. Ways to deal with this could be:
  • deliver only small increments that add value (e.g. slight adjustments to the UI) but take more time than one would assume. For example, changing the background color takes a week.
  • define the disabling of UI elements as added value and focus on the one element that is actually implemented, then continue as usual. This actually means making the evolutionary prototype a throwaway prototype, or at least major parts of it.
  • call a spade a spade and track the work as chores that provide no actual value but reduce the technical debt, and the stakeholders have to bite the bullet of several iterations with no added value.
My suggestion would be to use either horizontal throwaway or vertical evolutionary prototypes.

The throwaway prototype - especially a UI prototype - can enrich the requirements description, as it provides a clearer vision of how the system should look and, partially, how it behaves. Creating this prototype should be the job of the product owner or her assistants. Wireframe models are a good example of such a prototype. The prototype can be decomposed into smaller user stories or tasks that the development team implements, while the entire prototype is kept as a detailed version of the vision.

The evolutionary prototype is actually developed during the iterations of the development process and should always be of the vertical dimension, which ensures that every iteration produces something that is done.

Concluding, I would make three suggestions:
  • if you use a prototype as part of the requirements definition (the job of the PO), use a horizontal throwaway prototype. I'd recommend a wireframe model.
  • creating a vertical evolutionary prototype is developing the product in an agile, or at least iterative, way
  • avoid creating horizontal evolutionary or vertical throwaway prototypes, as they produce high technical debt or unfinished work, which is bad.

tl;dr
                       | horizontal prototype                      | vertical prototype
throwaway prototype    | good: done by PO as part of requirements  | bad: use only to demonstrate general feasibility to reduce high risks (proof-of-concept); produces waste
evolutionary prototype | bad: produces high technical debt; avoid if possible | good: actual potentially shippable increment, done by development team