Saturday, 24 September 2016

How to delete recently opened files history in ubuntu 14.04


In this post we will see how we can delete recently opened files from Ubuntu's Unity dashboard. Why would you want to do that? Privacy, although it largely depends on the use case. There is usually no such need on a strictly personal computer, but if the machine is shared it is better to delete your history when you leave.

How to delete recently opened files history in ubuntu 14.04

Open Security & Privacy settings from the Unity dashboard.

Then go to Files & Applications, click Clear Usage Data, and select the period you want to clear data from -

Click OK and you are done.
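If you prefer the terminal, the same cleanup can be done by deleting the files that back the recent-files list. These are the standard locations on Ubuntu 14.04, but verify the paths on your machine before deleting anything:

```shell
# Recently-used files list (GTK/GNOME apps and the Dash read this)
rm -f ~/.local/share/recently-used.xbel
# Zeitgeist activity log, which Unity's Dash also uses
rm -rf ~/.local/share/zeitgeist
```

Log out and back in afterwards so the Dash picks up the change.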

Related Links

Wednesday, 21 September 2016

Programmatically upload files to amazon (AWS) S3


This post will show you how to programmatically upload files to your AWS S3 account using the AWS S3 SDK.

This post assumes you already have an S3 bucket set up in your AWS console.

Credentials setup

Let's start by creating an access key. The following screenshots show you how to create your access key using AWS IAM (Identity and Access Management).

Once you have created a user you need to give it access to S3. Follow the next set of steps for that -

This will give your new user access to S3. Now you can use these credentials in the code. Before we get to the code there is one more step: you need to set up your credentials file. It lives at
  • ~/.aws/credentials
If it is not there, create one and add credentials to it as shown below, using the values from the user you created in the IAM console.

I have used a template here; replace it with your exact access key ID and secret key. Let's go on to the code now.
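For reference, a minimal sketch of what the ~/.aws/credentials file looks like - the key values below are AWS's documented placeholder examples, not real credentials:

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

The section name `[default]` is the profile that ProfileCredentialsProvider picks up when no profile name is given.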

Programmatically upload files to amazon s3

I am using Eclipse with Ivy dependency management, so your ivy file should look like the following -

<ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
    <info organisation="com.osfg" module="AwsS3Demo"/>
    <dependencies>
        <dependency org="com.amazonaws" name="aws-java-sdk-s3" rev="1.11.36"/>
    </dependencies>
</ivy-module>

Note the dependency we have used. It is for the AWS S3 module only.

Now lets head on to the code -

package com.osfg;

import java.io.File;
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;

/**
 * @author athakur
 */
public class AwsS3Demo {
    private static String AWS_BUCKET_NAME = "test-athakur";
    private static String AWS_KEY_NAME = "testData";
    private static String UPLOAD_FILE = "/Users/athakur/Desktop/data.txt";

    public static void main(String[] args) throws IOException {
        AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            System.out.println("Uploading a new object to S3 from a file\n");
            File file = new File(UPLOAD_FILE);
            s3client.putObject(new PutObjectRequest(
                    AWS_BUCKET_NAME, AWS_KEY_NAME, file));
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response" +
                    " for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}

Just run the above code and it should upload your file to the AWS bucket. Make sure -
  1. Credentials are correct in ~/.aws/credentials file
  2. That credential has access to S3 module
  3. File is present on your machine
  4. Bucket name is valid
NOTE : ProfileCredentialsProvider internally uses ~/.aws/credentials file for the credentials to authenticate against AWS S3.

You can also find the above code snippet at -

If all goes correctly the file should get uploaded to S3 -

Related Links

Saturday, 17 September 2016

Spring configuration files XML schema versions


If you have previously worked on a Spring web project then you must have come across Spring configuration files. They are XMLs which have references to the namespaces they use and the versions they point to. Pick up any one of the Spring posts we have seen before -
You would see spring configurations files something like -

<beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">

Notice the 2.5 version - it's pretty old now. You can find all the schema versions here -
But as your project evolves, who cares about updating these files, right? You upgrade the Spring version (jars) as you go forward, but these configuration files remain as is. Let's see how we can resolve this problem.

Spring configuration XML schema: with or without version?

So how do you resolve this? Do not specify the version. Yes, I repeat, do not specify the version. It should be something like -

<beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

When you do not specify any version it will take the latest schema available in the jar on your classpath, and that will upgrade along with the jar when you upgrade Spring. The schema version mappings are in a file called spring.schemas in the spring-beans jar file -

From this file it will pick the latest schema version without you needing to worry about it.
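For illustration, spring.schemas is a plain properties file mapping schema URLs to XSD resources bundled inside the jar. A few abridged, illustrative lines might look like this (the exact versions depend on your spring-beans jar):

```ini
http\://www.springframework.org/schema/beans/spring-beans-2.5.xsd=org/springframework/beans/factory/xml/spring-beans-2.5.xsd
http\://www.springframework.org/schema/beans/spring-beans-3.0.xsd=org/springframework/beans/factory/xml/spring-beans-3.0.xsd
http\://www.springframework.org/schema/beans/spring-beans.xsd=org/springframework/beans/factory/xml/spring-beans-4.3.xsd
```

The last, versionless entry is the one resolved when your configuration omits the version, and it points at the newest XSD shipped in that jar.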

NOTE : You should try to move away from Spring XML configurations completely. You should start using Java class based configurations and annotations.

Related Links

Saturday, 10 September 2016

What's new in Java 8?


In this post we'll see what new features and changes have come in the Java 8 release.

What's new?

  • Default methods are introduced in Java 8, which means you can provide a method with a body in your interface and concrete classes need not implement it. They can override it though. For this you need to use the default keyword in your method. More details - 
  • Java 8 has also introduced Lambda expressions which use functional interface. You can see more details below - 
  • As you know, for a local variable to be accessed by methods in an anonymous class, the local variable needs to be declared final. However, from Java 8 it is accessible even if it is only effectively final. More details - 
  • As we know, variables in an interface are implicitly public, static and final, and methods are public and abstract. Though variables remain the same, with the default methods we saw in point 1, non-abstract methods are now possible in an interface. Static methods are also allowed in an interface now. The following code snippet works from Java 8 -

    public interface TestInterface {
        String NAME = "Aniket";    //public static final
        String getName();    //public abstract
        default String getDefaultName() { // non-static default method
            return "Abhijit";
        }
        static String getNonDefaultStaticName() { // static method
            return NAME;
        }
    }
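To see these interface rules in action, here is a small self-contained sketch (the class and method names are mine, for illustration). Since the interface has exactly one abstract method, a lambda can implement it, and the default and static methods come along for free:

```java
public class TestInterfaceDemo {
    // Minimal copy of the interface from above
    interface TestInterface {
        String NAME = "Aniket";             // implicitly public static final
        String getName();                   // implicitly public abstract
        default String getDefaultName() {   // default method with a body
            return "Abhijit";
        }
        static String getStaticName() {     // static interface method
            return NAME;
        }
    }

    public static void main(String[] args) {
        // One abstract method => a lambda can implement it
        TestInterface t = () -> "Aniket";
        System.out.println(t.getName());                   // Aniket
        System.out.println(t.getDefaultName());            // Abhijit (inherited default)
        System.out.println(TestInterface.getStaticName()); // Aniket (called on the interface)
    }
}
```

Note that the static method must be called on the interface itself, not on an instance.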
  • Changes in HashMap : Performance has been improved by using balanced trees instead of linked lists under specific circumstances (when many keys hash to the same bucket). It has only been implemented in these classes -
    • java.util.HashMap,
    • java.util.LinkedHashMap and 
    • java.util.concurrent.ConcurrentHashMap.

      This will improve the worst case performance from O(n) to O(log n).
  • Java 8 introduces another new syntax called method references. Covered in a new post -
  • Java 8 also introduces a new class called Optional, which is a better way to represent values that may not be present, instead of using null and adding null checks -
  • Lastly, another major change that was added was the Stream API. You can read all about it in the following post -
  • Walking a directory using the Streams API in Java. This is a continuation of the previous post NIO.2 API directory traversal using Path and FileVisitor, which was introduced in Java 7.
  • In Java 8 new APIs are added for Date and Time.
  • Collection improvements in Java 8

Related Links

Understanding Java 8 Stream API


Java 8 has introduced a new set of APIs involving streams. They look very powerful in terms of processing and also use the functional programming we have seen in the last couple of posts (refer to links in the Related Links section at the bottom of this post). In this post we will essentially see what these streams are and how we can leverage them.

Streams in Java are essentially a sequence of data which you can operate upon together; the whole chain is called a pipeline. A stream pipeline comprises 3 parts -

  1. Source : Think of it as the data set that is used to generate the stream. Depending on the data set, a stream can be finite or infinite.
  2. Intermediate operations : Intermediate operations are operations that you perform on the given data set to filter or process your data. You can have as many intermediate operations as you desire. These intermediate operations give you the processed stream so that you can perform more intermediate operations on it. Since streams use lazy evaluation, the intermediate operations do not run until the terminal operation runs.
  3. Terminal operation : This actually produces a result. There can be only one terminal operation. As a stream can be used only once, it becomes invalid after the terminal operation.
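The three parts above can be sketched in one tiny pipeline (my own illustrative example) - a list as the source, filter() as the intermediate operation, and count() as the terminal operation that triggers the traversal:

```java
import java.util.Arrays;
import java.util.List;

public class PipelineDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Aniket", "Amit", "Ram"); // source
        long count = names.stream()
                .filter(n -> n.startsWith("A"))  // intermediate: lazy, returns a new stream
                .count();                        // terminal: actually runs the pipeline
        System.out.println(count); // 2
    }
}
```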

NOTE : Intermediate operations return a new stream. They are always lazy; executing an intermediate operation such as filter() does not actually perform any filtering, but instead creates a new stream that, when traversed, contains the elements of the initial stream that match the given predicate. Traversal of the pipeline source does not begin until the terminal operation of the pipeline is executed.

Intermediate vs terminal operations

Creating a Stream

You can create Streams in one of the following ways -

        Stream<String> emptyStream = Stream.empty();
        Stream<Integer> singleElementStream = Stream.of(1);
        Stream<Integer> streamFromArray = Stream.of(1,2,3,4);
        List<String> listForStream = Arrays.asList("ABC","PQR","XYZ");
        Stream<String> streamFromList = listForStream.stream();
        Stream<Double> randomInfiniteStream = Stream.generate(Math::random);
        Stream<Integer> sequencedInfiniteStream = Stream.iterate(1, n -> n+1);

Line 1 creates an empty stream. Line 2 creates a stream having one element. Line 3 creates a stream containing multiple elements. Line 5 creates a stream out of an existing List. Lines 6 and 7 generate infinite streams. Line 6 takes a Supplier as an argument to generate the sequence, whereas Line 7 takes a seed integer (something to start with) and a UnaryOperator used to generate the sequence.

If you try to print out an infinite stream your program will hang until you terminate it. You can try -
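For instance (my own sketch), printing an infinite stream directly never returns, while adding limit() as an intermediate operation makes it safe:

```java
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class InfiniteStreamDemo {
    public static void main(String[] args) {
        // This would hang forever:
        // Stream.iterate(1, n -> n + 1).forEach(System.out::println);

        // limit() turns the infinite stream into a finite one first:
        System.out.println(
            Stream.iterate(1, n -> n + 1)
                  .limit(5)
                  .collect(Collectors.toList())); // [1, 2, 3, 4, 5]
    }
}
```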


Terminal and intermediate Stream operations

We will not get into the details of each terminal and intermediate stream operation. Instead I will list them out and then we will see examples.

Common terminal operations
  1. allMatch()/anyMatch()/noneMatch()
  2. collect()
  3. count()
  4. findAny()/findFirst()
  5. forEach()
  6. min()/max()
  7. reduce()
Common intermediate operations
  1. filter()
  2. distinct()
  3. limit() and skip()
  4. map()
  5. sorted()
  6. peek()

NOTE : Notice how min(), max(), findFirst() and findAny() return Optional values.
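For example (my own sketch), min() returns an Optional because the stream might be empty, in which case there is no minimum to return:

```java
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

public class OptionalResultDemo {
    public static void main(String[] args) {
        Optional<String> min = Stream.of("PQR", "ABC", "XYZ")
                .min(Comparator.naturalOrder());
        System.out.println(min.orElse("empty")); // ABC

        // Empty stream: the Optional is empty, orElse supplies a fallback
        Optional<String> none = Stream.<String>empty()
                .min(Comparator.naturalOrder());
        System.out.println(none.orElse("empty")); // empty
    }
}
```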

Now let's start with how to print a Stream's content, because that's what we do when we are in doubt.

You can print a Stream in one of the following ways -

        List<String> listForStream = Arrays.asList("ABC","PQR","XYZ");
        Stream<String> streamFromList = listForStream.stream();
        //printing using forEach terminal operation
        streamFromList.forEach(System.out::println);
        //recreate stream as stream once operated on is invalid
        streamFromList = listForStream.stream();
        //printing using peek intermediate operation
        streamFromList.peek(System.out::println).collect(Collectors.toList());
        streamFromList = listForStream.stream();
        //printing using collect terminal operation
        System.out.println(streamFromList.collect(Collectors.toList()));

Line 4 uses the forEach terminal operation to print out the Stream. It takes a Consumer as the argument, which in this case is "System.out::println". We have used a method reference here because that's common, but the corresponding lambda expression would be "s -> System.out.println(s)". 
Line 8 uses peek, which is an intermediate operation, to look at the stream elements. It also takes a Consumer as the argument. Lastly, in Line 11 we have used the collect terminal operation to collect the results into a List and then print it out. You can define your own Collectors or use the ones Java has provided for you. You can find these in the java.util.stream.Collectors class. For example, here we have used Collectors.toList().

Note that if you have an infinite Stream these print approaches will hang and you will have to manually terminate the program.

Also note that you cannot modify the base data structure directly while using it in a Stream. So -

        List<String> listForStream = new ArrayList<>(Arrays.asList("ABC","PQR","XYZ"));
        Stream<String> streamFromList =;
        streamFromList.forEach(elm -> listForStream.remove(elm));

will give you -

Exception in thread "main" java.util.ConcurrentModificationException
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(
    at HelloWorld.main(

as you are iterating on a List and modifying it simultaneously. Instead you could filter the stream -

        List<String> listForStream = Arrays.asList("ABC","PQR","XYZ");
        Stream<String> streamFromList =;
        listForStream = streamFromList.filter(x -> x.contains("A")).collect(Collectors.toList());

You will get - [ABC]

Understanding flatMap() intermediate operation

This is an interesting intermediate operation, hence covering it separately. Its signature is as follows -
  • <R> Stream<R> flatMap(Function<? super T, ? extends Stream<? extends R>> mapper);
This basically takes each element from the stream it is called on and converts each element into a separate stream. The new stream corresponding to an element may have different elements depending on how the mapping function is written. Finally, each stream resulting from each element of the original stream is flattened to return a single stream which has elements from all the resultant streams. E.g. -

    public static void main(String[] args) {
        Stream<String> stream = Stream.of("I", "Am", "Groot");
        Stream<String> flattenStream = stream.flatMap(s -> Stream.of(s.toUpperCase()));
        flattenStream.forEach(System.out::println);
    }

Output :
I
AM
GROOT
Now flatMap takes each element of the stream and converts it into another stream. Something like -
"I" -> Stream.of("I")
"Am" -> Stream.of("AM")
"Groot" -> Stream.of("GROOT")
and then flattens it
-> Stream.of("I", "AM", "GROOT") and returns.
The above is just to help you understand how it works in this case. Do not take it as the actual implementation.

This way you can merge Streams or Lists. E.g. -

    public static void main(String[] args) {
        List<String> dcHeros = Arrays.asList("Superman","Batman","Flash","Constantine");
        List<String> marvelHeros = Arrays.asList("Hulk","Ironman","Thor","Captian America");
        List<String> awesomeness = Stream.of(dcHeros.stream(), marvelHeros.stream())
                .flatMap(s -> s).collect(Collectors.toList());
        System.out.println(awesomeness);
    }

output :
[Superman, Batman, Flash, Constantine, Hulk, Ironman, Thor, Captian America]

Examples of Streams usage

Let's see examples of common usage now -

Let's say you have a list of names. You want all the names from that list that start with A, sorted by name, and you want to return 3 of them.

        List<String> listForStream = Arrays.asList("Aniket", "Amit", "Ram", "John", "Anubhav", "Kate", "Aditi");
        listForStream.stream()
                .filter(x -> x.startsWith("A"))
                .sorted()
                .limit(3)
                .forEach(System.out::println);

You will get :

Aditi
Amit
Aniket

Let's see what we did here. First we got the stream out of the List, then we added a filter to keep only those elements which start with A. Next we call sorted, which sorts the sequence of data remaining in the stream - a natural sort on the name. Lastly we limit to 3 entries and print them.

Now guess what the following code does -

        Stream.iterate(1, n -> n+1)
                .filter(x -> x%5==0)
                .limit(5)
                .forEach(System.out::println);

And the output is -

5
10
15
20
25
First we create an infinite Stream here using iterate. It will generate the sequence 1,2,3,4,5... and so on. Next we apply a filter to keep only multiples of 5. Then we limit to only 5 such results, which reduces our infinite stream to a finite one. Lastly we print out those 5 results. Hence the result.

Now let's move on to using peek -

        Stream.iterate(1, n -> n+1)
                .filter(x -> x%5==0)
                .peek(System.out::println)
                .limit(5)
                .forEach(System.out::println);

What would the above code snippet print? The answer is -

5
5
10
10
15
15
20
20
25
25
So here we are printing the elements once after the filter and then once after limiting. Hence the result.

NOTE : A Stream never modifies the original collection unless you change it yourself from within the stream operations. See the following example to understand -

        List<String> myList = new ArrayList<>(Arrays.asList("a", "b", "b", "d"));
        List<String> newMyList = myList.stream().map(str -> str + "a").collect(Collectors.toList());
        System.out.println(myList);
        System.out.println(newMyList);

Output of which is -
[a, b, b, d]
[aa, ba, ba, da]

Also, to reiterate, a Stream does not really run until its terminal operation runs - it is lazily evaluated. So something like -
  • myList.stream().filter(s -> s.startsWith("I"))
will just return a stream and do nothing.

Working with primitives and Stream

Similarly we have Streams for primitives as well. There are three types of primitive streams:
  • IntStream: Used for the primitive types int, short, byte, and char
  • LongStream: Used for the primitive type long
  • DoubleStream: Used for the primitive types double and float
They additionally have range() and rangeClosed() methods. Calling range(1, 100) on IntStream or LongStream creates a stream of the primitives from 1 to 99, whereas rangeClosed(1, 100) creates a stream of the primitives from 1 to 100. The primitive streams also have math operations including average(), max(), and sum(), plus one more method called summaryStatistics() to get many statistics in one call.

private static int range(IntStream ints) {
    IntSummaryStatistics stats = ints.summaryStatistics();
    if (stats.getCount() == 0) throw new RuntimeException();
    return stats.getMax() - stats.getMin();
}
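A quick, self-contained sketch of these methods (my own example, not from the original post) showing the range bounds and summaryStatistics():

```java
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

public class IntStreamDemo {
    public static void main(String[] args) {
        // rangeClosed(1, 100) includes both endpoints: 1..100
        System.out.println(IntStream.rangeClosed(1, 100).sum());   // 5050
        // range(1, 100) excludes the upper bound: 1..99
        System.out.println(IntStream.range(1, 100).count());       // 99

        // summaryStatistics() computes count/min/max/sum/average in one pass
        IntSummaryStatistics stats = IntStream.rangeClosed(1, 100).summaryStatistics();
        System.out.println(stats.getMax() - stats.getMin());       // 99
    }
}
```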

Also, there are functional interfaces specific to these streams.

Parallel Streams

Streams have inbuilt support for multi-threading. There are two ways you can create a parallel stream -
  1. Call parallel() on an existing stream to convert into a parallel stream (as an intermediate operation) OR
  2. You can directly call parallelStream() on your collection object to get a parallel stream.
The 2nd way is used more often. Now let's see the difference between a serial and a parallel stream -

Consider following example -

    public static void main(String[] args) {
        System.out.println("Using a Serial Stream : ");
        Arrays.asList(1, 2, 3, 4, 5).stream().forEach(System.out::print);
        System.out.println("\nUsing a Parallel Stream : ");
        Arrays.asList(1, 2, 3, 4, 5).parallelStream().forEach(System.out::print);
    }

One possible output is -

Using a Serial Stream :
12345
Using a Parallel Stream :
42513

The reason for saying one possible output is that for a parallel stream you cannot really predict the order. It's like printing each number in different Runnable tasks submitted to a fixed thread pool executor service.

NOTE : Parallel streams can process results independently, although the order of the results cannot be determined ahead of time. Also, if you mutate shared collections from a parallel stream, always use concurrent collections.

NOTE : Any stream operation that is based on order, including findFirst(), limit(), or skip(), may actually perform more slowly in a parallel environment. This is a result of a parallel processing task being forced to coordinate all of its threads in a synchronized-like fashion.
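If you need the parallelism but also a deterministic output order, the Stream API's forEachOrdered() preserves the encounter order at the cost of some parallel speedup (the example below is my own sketch):

```java
import java.util.Arrays;

public class ParallelOrderedDemo {
    public static void main(String[] args) {
        // forEach on a parallel stream prints in a nondeterministic order;
        // forEachOrdered waits so elements come out in the list's encounter order.
        Arrays.asList(1, 2, 3, 4, 5).parallelStream()
              .forEachOrdered(System.out::print); // always prints 12345
        System.out.println();
    }
}
```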

Related Links

Saturday, 3 September 2016

Understanding and using Java 8 Optional class


Java 8 has introduced a new class Optional, which is kind of a container that may or may not hold a value. You might ask what's the big deal - I might as well add null checks to avoid an NPE. But think about it: there is no clear way to suggest that null is a special value that can be returned, whereas using Optional clearly indicates that the value may not be present. Plus you can use functional programming with it - lambdas and much more. We'll see that in a moment.

To the code...

Let's see how we can use this in code - 

import java.util.Optional;
import java.util.function.Consumer;

public class Java8Demo {

    public static void main(String[] args) {
        Consumer<String> printer =  (str) -> System.out.println("Concat result : " + str);
        //case1 :
        String str1 = "abc";
        String str2 = "pqr";
        printer.accept(concat(Optional.of(str1), Optional.of(str2)));
        //case2 :
        str1 = "abc";
        str2 = null;
        printer.accept(concat(Optional.of(str1), Optional.ofNullable(str2)));
        //case3 :
        str1 = "abc";
        str2 = null;
        printer.accept(concat(Optional.of(str1), Optional.of(str2)));// WARN:NPE
    }

    public static String concat(Optional<String> str1, Optional<String> str2) {
        System.out.println("str1 present : " + str1.isPresent());
        System.out.println("str2 present : " + str2.isPresent());
        //ifPresent(Consumer c)
        //orElse(T other)
        String val1 = str1.orElse("");
        //orElseGet(Supplier s)
        String val2 = str2.orElseGet(() -> "");
        return val1 + val2;
    }
}


And the output is :

str1 present : true
str2 present : true
Concat result : abcpqr
str1 present : true
str2 present : false
Concat result : abc
Exception in thread "main" java.lang.NullPointerException
    at java.util.Objects.requireNonNull(
    at java.util.Optional.<init>(
    at java.util.Optional.of(
    at Java8Demo.main(

as expected. 


The method isPresent() is used to check whether your Optional container has a value or not. If it does, you can use the get() method to retrieve it. Also notice how we create an Optional instance: if it has to contain a value you create it as Optional.of(value), otherwise you use Optional.ofNullable(value) or Optional.empty(). Then you also have other useful methods like orElse(T other), which returns the value if it is present in the Optional instance or else returns the argument supplied as the method parameter. You have similar other methods - orElseGet(Supplier s), ifPresent(Consumer c), orElseThrow(Supplier s) - that again use functional programming. I have demoed most of them in the code above. Others are similar and straightforward. You can see details of all the methods in the picture below.

Details of all the methods of the Optional class are as follows -

Other uses

You can also use this in Spring framework as follows -

If you are using Spring 4.1 and Java 8 you can use java.util.Optional, which is supported in @RequestParam, @PathVariable, @RequestHeader and @MatrixVariable in Spring MVC -

@RequestMapping(value = {"/json/{type}", "/json" }, method = RequestMethod.GET)
public @ResponseBody TestBean typedTestBean(
    @PathVariable Optional<String> type,
    @RequestParam("track") String track) {
    if (type.isPresent()) {
        //type.get() will return type value
        //corresponds to path "/json/{type}"
    } else {
        //corresponds to path "/json"
    }
    //... build and return the TestBean
}
Related Links

Thursday, 1 September 2016

How to change default apps to open files with on the Mac


So some time back I had to install pgAdmin. It is basically a client to connect to your Postgres DB. From then on, all my .sql files started opening in it. I wanted them to open in my regular Sublime Text. In this post I will show you how to change the default application your files open with.

 How to change default apps to open files with on the Mac

  • Go to Finder and select a file of the type you want to change the default application for.
  • Go to File -> Get info
  • Expand "Open with:" section.
  • Select the application you want to open it with.
  • Then click on "Change All".
  • Accept the confirmation dialog.

Above sequence with screenshots -

And you are done! All your .sql files will now open with Sublime Text.

Related Links
