Sunday, 29 January 2017

Java 8 Stream terminal operations - reduce vs collect

Background

Streams support several common terminal operations, such as reduce(), collect(), count(), min(), max(), forEach(), and allMatch().

In this post we will look at two of these - reduce() and collect(). Both are reduction operations: they reduce the stream to a single object. Let's see each of them in detail now.


reduce()

reduce() method combines the stream into a single object. It can reduce the stream either to the same type as that of the stream or to a different one. Methods available for reduce are -

  • T reduce(T identity, BinaryOperator<T> accumulator)
  • Optional<T> reduce(BinaryOperator<T> accumulator)
  • <U> U reduce(U identity, BiFunction<U,? super T,U> accumulator, BinaryOperator<U> combiner)
Let's see some examples - 

        Stream<String> myStringStream = Stream.of("a","n","i","k","e","t");
        String stringResult = myStringStream.reduce("", (a,b) -> a + b);
        System.out.println("String reduce : " + stringResult);
        Stream<Integer> myIntegerStream = Stream.of(2,3,4,5,6);
        int intResult = myIntegerStream.reduce(1, (a,b) -> a * b);
        System.out.println("Integer reduce : " + intResult);

Output is -
String reduce : aniket
Integer reduce : 720

As you can see, the 1st method takes the identity, applies the accumulator to it and the 1st element of the stream to get a result, then applies the accumulator to that result and the 2nd element, and so on until a final result is returned.

The 2nd method does not take an identity as input, which is why it returns an Optional value. There can be 3 cases here -
  1. If the stream is empty, an empty Optional is returned.
  2. If the stream has one element, it is returned.
  3. If the stream has multiple elements, the accumulator is applied to combine them.
For eg -

        Stream<Integer> myIntegerStream = Stream.of(2,3,4,5,6);
        Optional<Integer> intResult = myIntegerStream.reduce((a,b) -> a * b);
        if(intResult.isPresent()) {
            System.out.println("Integer reduce : " + intResult.get());
        }

And the output is again -
Integer reduce : 720
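The empty-stream case can be sketched as follows (the class name is just for illustration):

```java
import java.util.Optional;
import java.util.stream.Stream;

public class EmptyReduceDemo {
    public static void main(String[] args) {
        // No identity and no elements, so reduce() returns an empty Optional.
        Optional<Integer> result = Stream.<Integer>empty().reduce((a, b) -> a * b);
        System.out.println("Present : " + result.isPresent()); // prints: Present : false
    }
}
```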

The 3rd method is used mainly when parallel streams are involved. In that case your stream is divided into segments, the accumulator is applied within each segment, and the combiner then merges the partial results of those segments.

For the reduce arguments to work correctly with parallel streams, they must satisfy the following properties -

  • The identity must be defined such that for all elements in the stream u ,
    combiner.apply(identity, u) is equal to u .
  • The accumulator operator op must be associative and stateless such that (a op b) op c is equal to a op (b op c) .
  •  The combiner operator must also be associative and stateless and compatible with the identity, such that for all u and t combiner.apply(u,accumulator.apply(identity,t)) is equal to accumulator.apply(u,t) .
NOTE : As part of the parallel process, the identity may be applied to multiple elements in the stream, which can produce unexpected results if the above properties are not obeyed.
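A minimal sketch of the 3-argument reduce() on a parallel stream (the class name is illustrative): the identity is 0, the accumulator adds an element's length to a partial sum, and the combiner merges the partial sums computed for different segments.

```java
import java.util.stream.Stream;

public class ParallelReduceDemo {
    public static void main(String[] args) {
        // Identity 0; accumulator folds a String into an Integer partial sum;
        // combiner merges partial sums computed by different threads.
        int totalLength = Stream.of("a", "n", "i", "k", "e", "t")
                .parallel()
                .reduce(0,
                        (partial, s) -> partial + s.length(), // BiFunction<Integer, String, Integer>
                        Integer::sum);                        // BinaryOperator<Integer>
        System.out.println("Total length : " + totalLength);  // prints: Total length : 6
    }
}
```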

collect()

collect() performs what is called a mutable reduction. Here we use a mutable container like a StringBuilder or an ArrayList to accumulate the data. Note that the result is typically of a different type than the stream's content. Methods available are -

  • <R> R collect(Supplier<R> supplier, BiConsumer<R, ? super T> accumulator, BiConsumer<R, R> combiner)
  • <R,A> R collect(Collector<? super T, A,R> collector)
 For eg.

        Stream<String> myStringStream = Stream.of("a","n","i","k","e","t");
        StringBuilder stringResult = myStringStream.collect(StringBuilder::new, StringBuilder::append, StringBuilder::append);
        System.out.println("String collect : " + stringResult.toString());
        myStringStream = Stream.of("a","n","i","k","e","t");
        TreeSet<String> stringTreeSetResult = myStringStream.collect(TreeSet::new, TreeSet::add, TreeSet::addAll);
        System.out.println("String collect : " + stringTreeSetResult);


And the output is -
String collect : aniket
String collect : [a, e, i, k, n, t]

Or you can use the collectors -

        Stream<String> myStringStream = Stream.of("a","n","i","k","e","t");
        List<String> resultList = myStringStream.collect(Collectors.toList());
        System.out.println(resultList);


Output -
[a, n, i, k, e, t]

For using collect() on parallel streams make sure your mutable container is thread safe. You can use concurrent collections for this.
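As a sketch (the class name is illustrative), a thread-safe container such as ConcurrentSkipListSet - which also keeps its elements sorted - can be accumulated into safely from a parallel stream:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.stream.Stream;

public class ParallelCollectDemo {
    public static void main(String[] args) {
        // ConcurrentSkipListSet is thread safe, so concurrent accumulation
        // from multiple stream segments is safe here.
        Set<String> result = Stream.of("a", "n", "i", "k", "e", "t")
                .parallel()
                .collect(ConcurrentSkipListSet::new, Set::add, Set::addAll);
        System.out.println(result); // sorted set: [a, e, i, k, n, t]
    }
}
```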

Joining Collector

Simple joining collector will join elements with the specified delimiter -

    public static void main(String[] args) {
        List<String> myList = Arrays.asList("I","am","groot");
        String result = myList.stream().collect(Collectors.joining(" "));
        System.out.println("Collect using joining : " + result);
    }

Output :
Collect using joining : I am groot
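joining also has a three-argument overload taking a delimiter, a prefix, and a suffix (the class name is illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class JoiningDemo {
    public static void main(String[] args) {
        List<String> myList = Arrays.asList("I", "am", "groot");
        // Overload: joining(delimiter, prefix, suffix)
        String result = myList.stream().collect(Collectors.joining(", ", "[", "]"));
        System.out.println(result); // prints: [I, am, groot]
    }
}
```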

Collecting into Map

You can also collect your stream results into a Map. Let's see how -

    public static void main(String[] args) {
        List<String> myList = Arrays.asList("I","am","groot");
        Map<Integer,String> result = myList.stream().collect(Collectors.toMap(x -> x.length(), x -> x));
        System.out.println("Collect using toMap : " + result);
    }

Output :
Collect using toMap : {1=I, 2=am, 5=groot}
Here we are creating a map with key as length of the element in the stream and value as the element itself.
But wait, what happens when the input is Arrays.asList("I","am","groot","ok")? Two elements will map to the same key, since "am" and "ok" have the same length. Java does not know which value to keep and will just throw an exception -
Exception in thread "main" java.lang.IllegalStateException: Duplicate key am
    at java.util.stream.Collectors.lambda$throwingMerger$0(Collectors.java:133)
    at java.util.HashMap.merge(HashMap.java:1245)
You can always tell Java how to handle this scenario as follows -
    public static void main(String[] args) {
        List<String> myList = Arrays.asList("I","am","groot","ok");
        Map<Integer,String> result = myList.stream().collect(Collectors.toMap(x -> x.length(),x -> x, (a,b) -> a+ " " + b));
        System.out.println("Collect using toMap : " + result);
    }
Output :
Collect using toMap : {1=I, 2=am ok, 5=groot}
This is because we have supplied the merge function (a,b) -> a + " " + b, which means that if two elements map to the same key they are concatenated with a space and the combined value is stored for that key.

Collect using grouping

You can also collect using grouping, which will gather your data into a map based on a classifier function. Eg. -

    public static void main(String[] args) {
        List<String> myList = Arrays.asList("I","am","groot","ok");
        Map<Integer,List<String>> result = myList.stream().collect(Collectors.groupingBy(String::length));
        System.out.println("Collect using groupingBy : " + result);
    }

Output :
Collect using groupingBy : {1=[I], 2=[am, ok], 5=[groot]}
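groupingBy also accepts a second, downstream collector that is applied to each group - for example, counting the elements per key (the class name is illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingDownstreamDemo {
    public static void main(String[] args) {
        List<String> myList = Arrays.asList("I", "am", "groot", "ok");
        // The second argument is a downstream collector applied to each group.
        Map<Integer, Long> counts = myList.stream()
                .collect(Collectors.groupingBy(String::length, Collectors.counting()));
        System.out.println("Collect using groupingBy + counting : " + counts);
    }
}
```

Here "am" and "ok" both have length 2, so the entry for key 2 has count 2.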

Collect using Partitioning

Partitioning is a special type of grouping in which the map has exactly two keys: true and false. You supply a predicate, and each stream element is put into the corresponding bucket. Eg -

    public static void main(String[] args) {
        List<String> myList = Arrays.asList("I","am","groot","ok");
        Map<Boolean,List<String>> result = myList.stream().collect(Collectors.partitioningBy(s -> s.length() > 3));
        System.out.println("Collect using partitioningBy : " + result);
    }

Output :
Collect using partitioningBy : {false=[I, am, ok], true=[groot]}

Difference between reduce() and collect()

  • If you have immutable values such as ints, doubles, or Strings, then normal reduction works just fine. However, if you have to reduce your values into, say, a List (a mutable data structure), then you need to use mutable reduction with the collect method.
  • In the case of reduce() we apply the function to the stream elements themselves where as in the case of collect() we apply the function to a mutable container.
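The contrast can be sketched with a List (the class name is illustrative): collect() supplies a fresh ArrayList, mutates it via the accumulator, and merges partial lists with the combiner, which stays correct even for parallel streams:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class ReduceVsCollectDemo {
    public static void main(String[] args) {
        // collect() gives each segment its own ArrayList and merges them,
        // so mutation stays confined to containers, not stream elements.
        List<String> letters = Stream.of("a", "n", "i", "k", "e", "t")
                .collect(ArrayList::new, ArrayList::add, ArrayList::addAll);
        System.out.println(letters); // prints: [a, n, i, k, e, t]
    }
}
```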


Saturday, 28 January 2017

Creating Threads with the ExecutorService

Background

There are 2 ways to create your own threads and run them asynchronously from the main thread. You can either -
  • extend Thread class or
  • implement Runnable interface and pass it to a thread.
Consider following example -

    public static void main(String[] args) {
        
        Thread t1 = new Thread(() -> System.out.println("My ThreadId : " + Thread.currentThread().getId()));
        Thread t2 = new Thread(() -> System.out.println("My ThreadId : " + Thread.currentThread().getId()));
        Thread t3 = new Thread(() -> System.out.println("My ThreadId : " + Thread.currentThread().getId()));
        t1.start();
        t2.start();
        t3.start();
        System.out.println("Main ThreadId : " + Thread.currentThread().getId());
        
    }


One of the possible outputs is -
My ThreadId : 10
My ThreadId : 12
Main ThreadId : 1
My ThreadId : 11

Obviously we cannot determine the order of execution of the threads for sure, as it depends on the OS-level thread scheduler; we will not get into that. The important points to note here are how we created and started the threads, and how we used a lambda expression for the Runnable interface.


Runnable as we know is a functional interface and can be used in lambda expressions -

@FunctionalInterface
public interface Runnable {
    public abstract void run();
}


So that's the normal way. But why manage thread starting/stopping and handling of results on your own when Java provides a convenient alternative? This is where ExecutorService comes into the picture.

In this post we will see what ExecutorService is, how we can use it to create and run threads, the various types of methods it supports, etc.

Creating Threads with the ExecutorService

ExecutorService is an interface, but it has multiple concrete implementations we can use. It is essentially a framework that creates and manages threads for you. It has other features like thread pooling and scheduling, which we will come to later.

Consider following example -

    public static void main(String[] args) {
        
        ExecutorService service = null;
        try {
            service = Executors.newSingleThreadExecutor();
            service.execute(() -> System.out.println("My 1st ThreadId : " + Thread.currentThread().getId()));
            service.execute(() -> System.out.println("My 2nd ThreadId : " + Thread.currentThread().getId()));
            service.execute(() -> System.out.println("My 3rd ThreadId : " + Thread.currentThread().getId()));
            System.out.println("Main ThreadId : " + Thread.currentThread().getId());
        } finally {
            if (service != null) service.shutdown();
        }
        
    }


Here we have used newSingleThreadExecutor, which is basically an ExecutorService with a single thread. You send multiple Runnable implementations to it, and that single thread runs them sequentially. Since there is only a single thread, its output will be sequential too (and hence predictable). However, you cannot predict the complete output of the above program, as the main thread runs independently and its output can interleave anywhere with the others.

One of the possible outputs of above program -

My 1st ThreadId : 10
Main ThreadId : 1
My 2nd ThreadId : 10
My 3rd ThreadId : 10

Another thing you might have noticed above is the shutdown method. You need to be very careful here. You need to call shutdown on the executor service when you are done, because the executor service creates non-daemon threads, and your application will never shut down if those threads are not stopped. Once shutdown() is called, no more tasks are accepted by the executor service, but it continues running already accepted tasks. Below is its life cycle -



NOTE : ExecutorService interface does not implement AutoCloseable, so you cannot use a try-with-resources statement.
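A graceful-shutdown sketch (the class name and the 5-second timeout are arbitrary choices for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService service = Executors.newSingleThreadExecutor();
        try {
            service.execute(() -> System.out.println("Task running"));
        } finally {
            service.shutdown(); // stop accepting new tasks
            // Wait up to 5 seconds for already submitted tasks to finish.
            if (!service.awaitTermination(5, TimeUnit.SECONDS)) {
                service.shutdownNow(); // interrupt anything still running
            }
        }
        System.out.println("Terminated : " + service.isTerminated());
    }
}
```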

Submitting tasks to executor service

As you must have seen in the code snippet above, we use the executor service's execute() method to submit our Runnables. The problem with this is that we never really know when the Runnables have completed their work. Fortunately there is another method we can use called submit() - it also takes a Runnable as an argument but returns a Future object that can be used to determine whether the task is complete.

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        
        ExecutorService service = null;
        try {
            service = Executors.newSingleThreadExecutor();
            Future<?> result = service.submit(() -> System.out.println("My 1st ThreadId : " + Thread.currentThread().getId()));
            while (!result.isDone()) {
                System.out.println("Executor task in progress");
                Thread.sleep(100);
            }
            System.out.println("Executor task completed : " + result.isDone());
        } finally {
            if (service != null) service.shutdown();
        }
        
    }


Above prints -

Executor task in progress
My 1st ThreadId : 10
Executor task completed : true

In case you noticed, the Future object also has a get() method. If you are wondering what it returns, the answer is: it depends. So far we have only provided Runnable instances to the submit method, and since Runnable does not return anything (void return type), get() on the corresponding Future will also return null. However, there is another interface called Callable, similar to Runnable.

  • Callable allows you to return a value. 
  • It also allows you to throw checked exceptions.
  • You can pass it to the submit() method of ExecutorService (note that execute() accepts only a Runnable).
  • And yes it's also a functional interface like Runnable.

@FunctionalInterface
public interface Callable<V> {
    /**
     * Computes a result, or throws an exception if unable to do so.
     *
     * @return computed result
     * @throws Exception if unable to compute a result
     */
    V call() throws Exception;
}


The Callable interface was introduced as an alternative to the Runnable interface,
since it allows more details to be retrieved easily from the task after it is completed.

Sample example for Callable interface use -

    public static void main(String[] args) throws InterruptedException, ExecutionException {
       
        ExecutorService service = null;
        try {
            service = Executors.newSingleThreadExecutor();
            Future<String> result = service.submit(() -> "My 1st ThreadId : " + Thread.currentThread().getId());
            while (!result.isDone()) {
                System.out.println("Executor task in progress");
                Thread.sleep(100);
            }
            System.out.println("Executor task completed : " + result.get());
        } finally {
            if (service != null) service.shutdown();
        }
       
    }


Output -

 Executor task in progress
Executor task completed : My 1st ThreadId : 10


NOTE: The Callable functional interface is similar to the Supplier functional interface: both take no arguments and return something. If the Java compiler at any point finds a lambda ambiguous between the two, it will throw a compilation error. You need to cast the lambda for it to work.

Eg.

    public static void useMe(Supplier<String> input) {}
    public static void useMe(Callable<String> input) {}
    
    public static void main(String[] args) {
        useMe(() -> {throw new IOException();}); // DOES NOT COMPILE
    }


Compilation will fail with following error -

The method useMe(Supplier<String>) is ambiguous for the type Java8Demo

Also note that Callable can throw a checked exception while Supplier cannot, but Java does not use this difference to resolve the ambiguity, as seen in the above example.
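The cast fix can be sketched like this (the class name is illustrative; the useMe methods mirror the earlier snippet): casting the lambda to Callable<String> tells the compiler which overload to pick.

```java
import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.function.Supplier;

public class CastDemo {
    public static void useMe(Supplier<String> input) { System.out.println("Supplier picked"); }
    public static void useMe(Callable<String> input) { System.out.println("Callable picked"); }

    public static void main(String[] args) {
        // The cast resolves the overload; only Callable permits the
        // checked IOException in the lambda body.
        useMe((Callable<String>) () -> { throw new IOException(); });
    }
}
```

This compiles and prints "Callable picked"; the exception is never thrown because useMe never invokes call().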

NOTE : Until now we have only seen a single-threaded executor service. But you can have a pool as well, i.e. an executor service with a predefined number of threads in its pool.

 For eg.  ExecutorService executor = Executors.newFixedThreadPool(5);
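A minimal sketch of such a pool running a few tasks concurrently (the class name is illustrative; thread ids will vary between runs):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedPoolDemo {
    public static void main(String[] args) {
        // A pool of 5 reusable threads; tasks beyond 5 would queue up.
        ExecutorService executor = Executors.newFixedThreadPool(5);
        try {
            for (int i = 1; i <= 5; i++) {
                final int taskId = i; // lambdas capture only effectively final locals
                executor.submit(() -> System.out.println(
                        "Task " + taskId + " on thread " + Thread.currentThread().getId()));
            }
        } finally {
            executor.shutdown();
        }
    }
}
```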

NOTE : ExecutorService has been available since Java 5.


Saturday, 14 January 2017

Sending xml(JAXB) and json(Jackson) response with Spring MVC @ResponseBody annotation

Background

In some of our earlier posts we saw examples of how Spring controllers work. 

In those examples we returned a view name from controller method and subsequently corresponding JSP was returned and rendered. In this post we will see how we can send a xml or a json response instead of a JSP.


Sending xml(JAXB) and json(Jackson) response with Spring MVC @ResponseBody annotation

Your controller would look like below -

package com.osfg.controllers;

import javax.servlet.http.HttpServletRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

import com.osfg.model.Employee;

/**
 * 
 * @author athakur
 * Controller to handle employee information
 */
@Controller
public class EmployeeController {

    Logger logger = LoggerFactory.getLogger(EmployeeController.class);

    
    @RequestMapping(value = "/getEmployeeInfoData", method = RequestMethod.GET, produces = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
    @ResponseBody
    public Employee getEmployeeInfo(HttpServletRequest request) {
        logger.debug("Receive GET request for employee data information");
        Employee empForm = new Employee();
        empForm.setName("Aniket Thakur");
        empForm.setAge(25);
        return empForm;
    }

}
 
Most of the annotations and code you would already know from previous examples. The new thing here is the produces field, which is set to
  • MediaType.APPLICATION_JSON_VALUE,
  • MediaType.APPLICATION_XML_VALUE
which means your controller method can respond in JSON or XML.
NOTE : The Spring framework matches this produces field with the Accept header of the incoming request.

and your Employee model object would look like -

package com.osfg.model;

import java.io.Serializable;

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

/**
 * 
 * @author athakur
 * Model class for Employee
 */
@XmlRootElement(name = "employee")
public class Employee implements Serializable {
    
    private static final long serialVersionUID = 1L;
    
    private String name;
    private int age;
     
    @XmlElement
    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    
    @XmlElement
    public int getAge() {
        return age;
    }
    public void setAge(int age) {
        this.age = age;
    }
     
}


Notice the annotations used. Those are required for XML transformation by JAXB library.

NOTE : Don't forget to add <mvc:annotation-driven/>  tag in your spring configuration file.

Once the above code and configurations are in place, all you have to do is put the JAXB and Jackson jars on the classpath. I am using Ivy, so I add the following dependency in my ivy.xml -

<dependency org="com.fasterxml.jackson.core" name="jackson-databind" rev="2.8.5"/>

NOTE : I am using Java 8. JAXB has been bundled with Java since Java 6, so there is no need to add it explicitly.






You can find a working snippet of the above code in the file
https://github.com/aniket91/WebDynamo/blob/master/src/com/osfg/controllers/EmployeeController.java
of my GitHub repo
https://github.com/aniket91/WebDynamo


What's the difference between <mvc:annotation-driven /> and <context:annotation-config /> in servlet?

Background

With Spring 3.0 came the mvc namespace, and along with it a lot of simplifications in the Spring framework. The one we are going to discuss today is - <mvc:annotation-driven />

This is the simplest tag you can add in your spring configuration file to enable many new features.

This is obviously a Spring configuration XML tag. For the Java equivalent, you can add the annotation @EnableWebMvc to one of your @Configuration classes.

@Configuration
@EnableWebMvc
public class WebConfig {
}

The above registers a 
  • RequestMappingHandlerMapping, 
  • a RequestMappingHandlerAdapter, and 
  • an ExceptionHandlerExceptionResolver 
(among others) in support of processing requests with annotated controller methods using annotations such as @RequestMapping, @ExceptionHandler, and others.


What's the difference between <mvc:annotation-driven /> and <context:annotation-config /> in servlet?

With <mvc:annotation-driven /> following features get enabled -


  1. Configures the Spring 3 Type ConversionService (alternative to PropertyEditors)
  2. Adds support for formatting Number fields with @NumberFormat
  3. Adds support for formatting Date, Calendar, and Joda Time fields with @DateTimeFormat, if Joda Time is on the classpath
  4. Adds support for validating @Controller inputs with @Valid, if a JSR-303 Provider is on the classpath
  5. Adds support for reading and writing XML, if JAXB is on the classpath (HTTP message conversion with @RequestBody/@ResponseBody)
  6. Adds support for reading and writing JSON, if Jackson is on the classpath (along the same lines as #5)
This is the complete list of HttpMessageConverters set up by mvc:annotation-driven:

  1.  ByteArrayHttpMessageConverter converts byte arrays.  
  2.  StringHttpMessageConverter converts strings. 
  3.  ResourceHttpMessageConverter converts to/from org.springframework.core.io.Resource for all media types. 
  4.  SourceHttpMessageConverter converts to/from a javax.xml.transform.Source. 
  5.  FormHttpMessageConverter converts form data to/from a MultiValueMap<String, String>. 
  6.  Jaxb2RootElementHttpMessageConverter converts Java objects to/from XML — added if JAXB2 is present and Jackson 2 XML extension is not present on the classpath. 
  7.  MappingJackson2HttpMessageConverter converts to/from JSON — added if Jackson 2 is present on the classpath. 
  8.  MappingJackson2XmlHttpMessageConverter converts to/from XML — added if Jackson 2 XML extension is present on the classpath. 
  9.  AtomFeedHttpMessageConverter converts Atom feeds — added if Rome is present on the classpath. 
  10.  RssChannelHttpMessageConverter converts RSS feeds — added if Rome is present on the classpath.

context:annotation-config, on the other hand, looks for annotations on beans in the same application context in which it is defined, and activates support for the general annotations like @Autowired, @Resource, @Required, @PostConstruct, etc.

