Saturday, 10 February 2018

Shell script to update lambda code and environment variables to multiple regions and environments using AWS CLI


If you are using the AWS stack in your project then you must have used the Lambda service at some point. It is what we call serverless architecture - you do not care about the hardware or the operating system. You just provide Lambda with the code you wish to run, which can be triggered from multiple sources like S3, API Gateway etc.

Since AWS services are region specific, you may have the same lambda code running in multiple regions. You would have to go to each region and deploy your lambda code either by directly uploading the zip file or by uploading it from an S3 bucket in the same region. Either way, this process is time consuming and repetitive. You might also have the same code running under different names corresponding to different execution environments like dev, qa, production etc. In addition, each lambda may have environment variables like database configuration settings, or other custom settings like memory and timeout.

In this post I will show you a simple shell script that uses the AWS CLI to do this from your local machine. You can just run this command and it will take care of deploying your code, changing environment variables and setting custom configurations for each region you wish to deploy the lambda to.

Assumptions and Setup

This deployment script assumes you have installed the AWS CLI and configured a profile in it. If you have not done so already, refer -
NOTE : If you do not explicitly provide a profile name to aws configure, the profile is named "default"

Next, this script assumes you have a local zip file that has your lambda code. The script also takes an environment name as input and expects your lambda function name to have it as a suffix. So if your base lambda function name is "my-lambda" then your actual lambda function names in different environments should be -
  • test : my-lambda-test
  • dev : my-lambda-dev
  • qa : my-lambda-qa
  • prod : my-lambda-prod
The script has the base name of the lambda and some environment variables that are defined globally and per region. The script also has an array of regions the lambda should be deployed to. You can change these things as per your use case.
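A minimal sketch of this suffix naming convention (the base name, environment and region list below are illustrative values, not part of the actual script):

```shell
# Sketch of the suffix naming convention described above.
# BASE_NAME, ENV and the region list are illustrative values.
BASE_NAME="my-lambda"
ENV="dev"
FUNCTION_NAME="$BASE_NAME-$ENV"

SUPPORTED_REGIONS="us-east-1 ap-southeast-2"
for REGION in $SUPPORTED_REGIONS
do
    echo "Would update $FUNCTION_NAME in $REGION"
done
```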

Shell script code

You can find the code on my Github gist -
I am also adding it below to explain how it works. However, to see the most recent version refer to the gist link above.

Code is as follows -

#!/bin/bash
#  Author : athakur
#  Version : 1.0
#  Date : 10/02/2018
#  Description : Deployment script to update lambda code and env variables
#  Sample usage :
#    Local : ./ test aws-admin fileb://../
#    Dev : ./ dev aws-admin fileb://../
#    QA : ./ qa aws-admin fileb://../
#    Prod : ./ prod aws-admin fileb://../
echo "Updating lambda code for ENV : $1 PROFILE : $2 ZIP_FILE_PATH : $3"

if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]
then
    echo "Incorrect arguments supplied. Format - ./ ENV PROFILE ZIP_FILE_PATH"
    exit 1
fi

ENV="$1"
PROFILE="$2"
ZIP_FILE_PATH="$3"
FUNCTION_NAME="my-lambda-$ENV"

SUPPORTED_REGIONS=("us-east-1" "ap-northeast-1" "ap-southeast-1" "ap-southeast-2")

for REGION in "${SUPPORTED_REGIONS[@]}"
do
    echo "Region : $REGION"
    # SNS_ENDPOINT changes per region - replace the placeholder
    # values below with your own per region settings
    case "$REGION" in
        "us-east-1")
            SNS_ENDPOINT="<sns-endpoint-us-east-1>"
            ;;
        "ap-northeast-1")
            SNS_ENDPOINT="<sns-endpoint-ap-northeast-1>"
            ;;
        "ap-southeast-1")
            SNS_ENDPOINT="<sns-endpoint-ap-southeast-1>"
            ;;
        "ap-southeast-2")
            SNS_ENDPOINT="<sns-endpoint-ap-southeast-2>"
            ;;
        *)
            echo "Environment not provided"
            exit 1
            ;;
    esac
    env_variables="Variables={ENVIRONMENT=$ENV,SNS_ENDPOINT=$SNS_ENDPOINT}"
    echo "Env variables : $env_variables"
    lambda_update_env_command="aws lambda update-function-configuration --function-name $FUNCTION_NAME --region $REGION --profile $PROFILE --environment '$env_variables' --timeout 300 --memory-size 3008"
    echo "Executing command : $lambda_update_env_command"
    eval $lambda_update_env_command
    lambda_update_code_command="aws lambda update-function-code --function-name $FUNCTION_NAME --region $REGION --zip-file $ZIP_FILE_PATH --profile $PROFILE"
    echo "Executing command : $lambda_update_code_command"
    eval $lambda_update_code_command
    echo "Completed Lambda function update for region $REGION"
done

Now let's try to understand what we are doing in the above shell script.

Understanding the shell script

The above shell script takes 3 arguments -
  1. Env : The target environment. Eg. test, dev, qa, prod etc
  2. Profile name : The AWS profile name you configured. If you have not configured one, this will just be "default"
  3. Zip file path : Path to the lambda zip file

The first part of the script validates that you provided the arguments needed for the script to run. Next we define some environment variables that we need to set for our lambda. Then we set an environment variable called SNS_ENDPOINT that changes per region. You can write a similar code snippet per environment as well.

Next we have an array of aws regions we need to deploy our lambda code in. You can add / remove as per your use case. Finally we run 2 aws commands for each region -
  1. update-function-configuration : This updates the environment variables and other configurations needed by your lambda
  2. update-function-code : This updates the actual code that gets deployed to the lambda.
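For reference, update-function-configuration takes environment variables in the shorthand Variables={KEY=value,...} syntax, which the script builds into a string before invoking the CLI. A sketch of that construction (the SNS topic ARN here is a made-up placeholder):

```shell
# Build the shorthand --environment string used by
# "aws lambda update-function-configuration".
# The SNS topic ARN below is a made-up placeholder.
ENV="dev"
SNS_ENDPOINT="arn:aws:sns:us-east-1:123456789012:my-topic"
env_variables="Variables={ENVIRONMENT=$ENV,SNS_ENDPOINT=$SNS_ENDPOINT}"
echo "$env_variables"
```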
NOTE : We are also setting --timeout 300 --memory-size 3008 which sets the lambda timeout and memory to the maximum available, i.e. 5 minutes and ~3 GB respectively.

NOTE : Lambda is billed based on the amount of time it runs multiplied by the memory it uses. So change the above configurations as per your need and budget.
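As a rough illustration of that billing math (GB-seconds, using the maximum settings this script applies; actual pricing per GB-second varies and is not shown here):

```shell
# Billed compute for a single invocation at the settings above:
# 3008 MB for 300 seconds, expressed in GB-seconds.
GB_SECONDS=$(awk 'BEGIN { printf "%.2f", (3008 / 1024) * 300 }')
echo "$GB_SECONDS GB-seconds per invocation"
```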

Related Links

Friday, 2 February 2018

Simulating environment variables in NodeJs using dotenv package


When you write code there are certain variables that may differ per environment like dev, qa, prod etc. These might include sensitive data such as API keys, passwords etc. You definitely do not want to put them directly in your code, since the code will be added to some repository like git and others may have access to it.

The general practice is to use environment variables that can be defined at the environment level and then read and used in the code. For eg. consider Elastic Beanstalk or a Lambda in the AWS world. You would define environment variables for the environment and use them in code. If it's your own physical box you might define the environment variables at the OS level, or maybe at the tomcat level if you are using tomcat as the container. Environment variables work fine in all such cases.
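At the OS level this works because exported variables are inherited by child processes; a quick shell sketch of that mechanism (the variable name and value are demo placeholders):

```shell
# A variable exported in the parent shell is visible
# to any child process it spawns.
export MY_API_KEY="dummy-key-for-demo"
CHILD_SEES=$(sh -c 'echo "$MY_API_KEY"')
echo "Child process read MY_API_KEY=$CHILD_SEES"
```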

But how do you do the same locally? In this post I will show how we can simulate environment variables in a NodeJs process with a package called dotenv.

This post expects you to know the basics of NodeJs and to have NodeJs and npm (Node package manager) installed on your machine. If you have not done that then please refer to my earlier post -

Simulating environment variables in NodeJs using dotenv package

First you need to install dotenv package using npm. To do so go to the directory where you would have your NodeJs file and execute following command -
  • npm install dotenv
If you get some warning you can ignore it for now. You should see a directory called node_modules getting created in the same directory where you executed this command. This folder will have the package dotenv that we just installed.

Now that we have the package installed let's see how we can simulate an environment variable. For this, simply create a file named .env in the same directory and add the environment variables you expect to read in code to it. For this demo I will use 3 environment variables -
  • ENVIRONMENT=local
  • USERNAME=athakur
  • PASSWORD=athakur
Now create a NodeJS file in the same directory. Let's call it test.js. The directory structure is as follows -
  • .env
  • test.js
  • node_modules/

Add following content in test.js -

'use strict';
const dotenv = require('dotenv');
dotenv.config();

const env = process.env.ENVIRONMENT
const username = process.env.USERNAME
const password = process.env.PASSWORD

console.log("Env : " + env);
console.log("Username : " + username);
console.log("Password : " + password);

Save the file and execute it as -
  • node test.js
You should see following output on the screen -

Env : local
Username : athakur
Password : athakur

And that's it. You can add any number of environment variables in the ".env" file and read them in your NodeJs code as process.env.variable_name.
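Under the hood the ".env" file is just KEY=value lines. As an illustration of the format dotenv parses (shell is used here purely for demonstration; the values are the demo ones from above):

```shell
# Write a sample .env in a scratch directory and read one value back,
# mimicking what dotenv does inside the Node process.
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

cat > .env <<'EOF'
ENVIRONMENT=local
USERNAME=athakur
PASSWORD=athakur
EOF

# dotenv splits each line on the first "=" - same idea as this:
USERNAME_VALUE=$(grep '^USERNAME=' .env | cut -d= -f2)
echo "Username : $USERNAME_VALUE"
```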

NOTE :  .env file would be hidden in Ubuntu since Ubuntu hides all files that start with a dot (.). You can just press Ctrl + H to view hidden files or do a "ls -la" in console. More details -

To read more about this package  you can read -

Related Links

How to show hidden files and folders in Ubuntu


Some files and folders are hidden in Ubuntu. These are the ones whose names start with a ".". Eg -
  • ~/.bashrc
  • ~/.vimrc etc
In this post I will show you how you can make these files visible.

How to hide files and folders in Ubuntu?

The Files file manager gives you the ability to hide and unhide files at your discretion. When a file is hidden, it is not displayed by the file manager, but it is still there in its folder.

To hide a file, rename it with a "." at the beginning of its name. For example, to hide a file named example.txt, you should rename it to .example.txt.

You can hide folders in the same way that you can hide files. Hide a folder by placing a "." at the beginning of the folder’s name.

How to show hidden files and folders in the Ubuntu CLI

To see the hidden files in the command line interface (CLI) you can just use -
  • ls -la
To not see the hidden files you can just use -
  • ls -l
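You can verify this behavior in a scratch directory (the file names here are just examples):

```shell
# Demonstrate that plain "ls" skips dot-files while "ls -a" shows them.
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
touch example.txt .example.txt

VISIBLE=$(ls)
ALL=$(ls -a)

echo "ls sees    : $VISIBLE"
echo "ls -a sees : $ALL"
```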

How to show hidden files and folders in the Ubuntu Files explorer

To show hidden files in the Files explorer you can go to -
  • View -> Show hidden files
or you can simply press
  • Ctrl + H 

You can use the same shortcut or select the same setting again to toggle between showing and hiding hidden files in your Files explorer.

To make this permanent you can go to -
  • Edit -> Preferences
and turn on the setting to show hidden files.

Related Links

Thursday, 1 February 2018

How to restore a corrupted or deleted partition with TestDisk and Ubuntu Live


I recently erased one of my partitions, which was mounted at the /home path in Ubuntu Linux. However, I was able to restore the partition and life was back to normal.

I was trying to install Windows and the installer (from a USB) was forcing UEFI mode instead of Legacy. The NTFS partition did not work out and Windows could not be installed, since the partition table was MBR instead of GPT (which is required by UEFI mode). When I tried to convert it to GPT, it started erasing the entire disk instead of just the partition I had selected. I stopped the process immediately, but my partitions were gone and it showed up as one disk without any partitions. As I mentioned earlier, I was able to restore my previous partitions and the data was intact.

In this post I will show you how we can do this.


You need to have a bootable USB with Ubuntu or GParted Live. GParted Live ships with both tools -
  • gparted and
  • testdisk
so it is the simpler option. But if you already have a bootable USB with Ubuntu then you can use that, like I did.

Boot your machine from this USB drive.

NOTE : If you do not have a bootable USB you can create one using unetbootin.


How to restore a corrupted or deleted partition with TestDisk and Ubuntu Live

After you boot from the Ubuntu live USB, go to "Software and Updates" and under "Downloadable from the Internet" select the entry with "Universe".

Now run -
  • sudo apt-get update
  • sudo apt-get install gparted
  • sudo apt-get install testdisk

Open gparted to see your disks and partitions. If the partition is missing you should see an unallocated partition. Now run testdisk -
  • sudo testdisk
 And follow next steps -

  1. Select "No Log" option.
  2. Select the disk drive you want to recover, e.g. /dev/sdc.
  3. Select your partition table type. Usually it's Intel.
  4. Select "Analyse" and then "Quick Search".
  5. Your drive will be analysed and you will see a list of all found partitions.  Press Enter.
  6. On the next screen you have the option to either perform a second Deeper Search, or Write the current partition table to disk. If the quick search was successful, choose Write.
  7. Finally, reboot your machine to see the changes reflected
  8. You can reuse gparted to verify that the partition is restored post reboot


That's it. Your partitions should be restored.

Related Links

Sunday, 28 January 2018

Creating web application with Spark Java framework


Spark is a Java framework that lets you create web applications. In this post we will see how we can write a basic web application using the Java Spark framework. Do not confuse this with Apache Spark, which is a big data framework. If you want to quickly bring up a local server to test something out, Spark Java lets you do it in the simplest way possible. You do not need an application server - it embeds a Jetty server inside it.


Add the following dependencies in your pom.xml for the Maven build (the versions below are examples from around the time of writing - use the latest available) -

<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-core</artifactId>
    <version>2.7.1</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.25</version>
</dependency>

spark-core is the Spark framework itself whereas slf4j-simple is for logging. Once the above setup is done we can proceed to actually implement our REST application.

Getting Started with Java Spark

Following is a simple Spark code that starts a server and returns "Hello World!" in the response -

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import spark.Request;
import spark.Response;
import spark.Route;
import spark.Spark;

/**
 * @author athakur
 */
public class HelloWorldWithSpark {

    private static final Logger logger = LoggerFactory.getLogger(HelloWorldWithSpark.class);

    public static void main(String args[]) {
        Spark.get("/", new Route() {

            @Override
            public Object handle(Request request, Response response) throws Exception {
                logger.debug("Received request!");
                return "Hello World!";
            }
        });
    }
}

Just run the above Java code. It should start a Jetty server and listen for incoming requests. The default port the server listens on is 4567. So after running the above code go to the browser and access the following url -
http://localhost:4567/
You should see "Hello World!" in the response.

Spark exposes static methods that let you define the URLs or routes you want to do some processing on and return some response. In above example we are listening on path "/" which is the root path and returning "Hello World!".

The same code from a Java 8 perspective, using a functional programming/lambda style, would be -

public class HelloWorldWithSpark {

    private static final Logger logger = LoggerFactory.getLogger(HelloWorldWithSpark.class);

    public static void main(String args[]) {
        Spark.get("/", (req, res) -> {
            return "Hello World!";
        });
    }
}

NOTE : Here we are using the GET verb but you can use any verb, like POST, PUT etc.

You can easily create REST APIs from this. Sample example given below -

public class HelloWorldWithSpark {

    private static final Logger logger = LoggerFactory.getLogger(HelloWorldWithSpark.class);

    public static void main(String args[]) {
        Spark.get("/employee/:id", (req, res) -> {
            logger.debug("Got request to get employee with id : {}", req.params(":id"));
            return "Retrieved Employee No " + req.params(":id");
        });
        Spark.post("/employee/:id", (req, res) -> {
            logger.debug("Got request to add employee with id : {}", req.params(":id"));
            return "Added Employee No " + req.params(":id");
        });
    }
}

That was simple. Wasn't it? If you want to deploy it in production like an actual web application, in the form of a war, you need to follow slightly different steps -

Related Links
