Sunday, 18 March 2018

How To Set Up an HTTPS Service in IIS


Windows provides Internet Information Services (IIS) to host your applications locally on your Windows machine. You need to enable IIS from "Turn Windows features on or off".

Once you have done that, you can add an application to it and get started. I have created a YouTube video to demo the same -

The video shows how to set up a simple website on IIS and also add HTTPS support. I am also going to cover the HTTPS part in this post.

How To Set Up an HTTPS Service in IIS

First, make sure you have a self-signed certificate generated in IIS Manager. To do that, go to "Server Certificates" under your machine's home node inside IIS Manager -

Next, double-click the "Server Certificates" section and make sure a self-signed certificate exists. 

If no certificate exists for localhost, go ahead and create one using "Create Self-Signed Certificate". 

Once you have the certificate, go to Sites in the navigation panel on the left and click on "Default Web Site" under Sites.

Next, click on "Bindings" in the panel on the right and add an https binding. Make sure you select the correct SSL certificate in the process, i.e. the one we created in the previous steps.

Once you are done, just click OK and you should have an https binding set up for your website.

Now you can open your website using the https protocol.

Related Links

Sunday, 11 March 2018

AngularJS Routing Using UI-Router


In the last post, we saw an AngularJS Hello World example. In this post, we will see how we can create an AngularJS app with routing using the UI-Router module. If you have not read the previous post, I would highly recommend reading that first.


There is a small change in the file structure compared to our previous example. Create files as shown in the following screenshot -

Following are file details -
  • helloWorld.html : Our main HTML file. Starting page similar to the last example.
  • helloworld.module.js : Declares the angular module and its dependencies
  • helloWorld.controller.js : Controller for the module
  • helloworld.config.js : Config for the module.  
  • login.html : login page to route to
  • setting.html : setting page to route to
 Also make sure you install the http-server module to host your AngularJS app and test it -
  • npm install -g http-server

Then you can simply run it in your project directory as -
  • http-server -o 
NOTE : The -o option opens the browser automatically.

AngularJS Routing Using UI-Router

Now let's go through the contents of each file one by one. Let's start with helloWorld.html -

<!DOCTYPE html>
<html>
    <head>
        <title>Hello World!</title>
        <!-- AngularJS and UI-Router loaded from a CDN (the exact versions/URLs are examples) -->
        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js"></script>
        <script src="https://unpkg.com/angular-ui-router/release/angular-ui-router.min.js"></script>
        <script type="text/javascript" src="helloworld.module.js"></script>
        <script type="text/javascript" src="helloWorld.controller.js"></script>
        <script type="text/javascript" src="helloworld.config.js"></script>
    </head>
    <body ng-app="helloWorldApp">
        <div ng-controller="helloWorldAppController" ng-init="init()">
            <p>Enter Message here : <input type="text" ng-model="message"></p>
            <p>Entered Message :  {{ message }}!</p>

            <a href="/login">Login</a> <br/>
            <a href="/setting">Setting</a>

            <div class="content-wrapper">
                <ui-view></ui-view>
            </div>
        </div>
    </body>
</html>

In this base HTML file we have just referenced the other AngularJS files that we need. An interesting thing to note here is the ui-view tag. This is where the injected template goes. But we will see that in a moment.

Let's see our module and controller files next -


(function() {
    'use strict';
    console.log("In hello world module");
    angular.module('helloWorldApp', ['ui.router']);
})();

This code just declares your module as we did in the last example. The only change is that it has an additional dependency on the ui.router module. Note that we have included a new script tag in helloWorld.html to load the code for this module.


(function() {
    'use strict';
    console.log("In hello world controller");
    angular.module("helloWorldApp").controller("helloWorldAppController", function($scope) {
        $scope.init = function() {
            console.log("Init method called");
            $scope.message = "Hello World!";
        };
    });
})();


This is a controller for the module we defined. This is again the same as what we did in the last post. No new changes here.

Now let's see our new logic - helloworld.config.js

(function() {
    'use strict';
    console.log("In hello world config");
    angular.module('helloWorldApp').config(['$stateProvider', '$urlRouterProvider',
        function($stateProvider, $urlRouterProvider) {
            $stateProvider
                .state('login', {
                    url: "/login",
                    templateUrl: "login.html"
                })
                .state('setting', {
                    url: "/setting",
                    templateUrl: "setting.html"
                });
        }]);
})();


This is actually the routing logic. For example, if it encounters the "/login" URL it will render the page "login.html" that we have in the same folder. The same goes for setting.html.

Finally, let's see our login.html and setting.html files -

login.html -

<h1>This is a login page!</h1>
<a href="/helloWorld.html">Back</a>

setting.html -

<h1>This is a setting page!</h1>
<a href="/helloWorld.html">Back</a>

 Once you have all the files in place, just run http-server as follows -

And you should see the following behavior - 


But wait, what happened? I thought the code was supposed to be injected at the ui-view tag.

This is because we do not actually use Angular routing per se to change the state. Let's make some changes to see how we can do that. Change your login link as follows - 

<a href="" ng-click="replaceLoginPage()">Login</a>

and now add this method to the controller -

(function() {
    'use strict';
    console.log("In hello world controller");
    angular.module("helloWorldApp").controller("helloWorldAppController", function($scope, $state) {
        $scope.init = function() {
            console.log("Init method called");
            $scope.message = "Hello World!";
        };

        $scope.replaceLoginPage = function() {
            console.log("In replaceLoginPage");
            // Switch to the 'login' state; ui-router injects login.html into ui-view
            $state.go('login');
        };
    });
})();

And now clicking Login should replace the content of the ui-view tag with the login template.

Related Links

Tuesday, 27 February 2018

AngularJS Hello World Example


AngularJS is a client-side JavaScript framework developed by Google that can interact with HTML. It helps to easily create and maintain SPAs (Single Page Applications).

Some of its features are -
  1. Helps create responsive applications
  2. Provides MVC capabilities
  3. Dependency injection
  4. Powerful and flexible with less code to write
NOTE: AngularJS is a very old framework. The first version is called AngularJS; later versions are just called Angular. You can look up the detailed release history online. This post will try to explain a simple Hello World program in AngularJS; it is mainly for projects that are already on it. If you are starting a new project, I would highly recommend going with the later Angular versions. You can visit the Angular website for the same.

 AngularJS Hello World Example

Let's start by writing our HTML file. Create a folder where you can store your code files. Now in this folder create two files -
  1. helloWorld.html
  2. helloWorld.js
And add the following content to them -


<!DOCTYPE html>
<html>
    <head>
        <title>Hello World!</title>
        <!-- AngularJS loaded from a CDN (the exact version/URL is an example) -->
        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js"></script>
        <script type="text/javascript" src="helloWorld.js"></script>
    </head>
    <body ng-app="helloWorldApp">
        <div ng-controller="helloWorldAppController" ng-init="init()">
            <p>Enter Message here : <input type="text" ng-model="message"></p>
            <p>Entered Message :  {{ message }}!</p>
        </div>
    </body>
</html>


helloWorld.js  :

(function() {
    'use strict';
    console.log("In JS");
    var helloWorldApp = angular.module("helloWorldApp", []);
    helloWorldApp.controller("helloWorldAppController", function($scope) {
        $scope.init = function() {
            console.log("Init method called");
            $scope.message = "Hello World!";
        };
    });
})();


Now you can simply open the helloWorld.html file and see the changes. 


Understanding the code

Now that we saw the behavior let's try to understand the code.

In our code we have initialized an AngularJS module using the ng-app directive. In the JavaScript we create this module using the angular.module() function. Inside a module you can have multiple controllers controlling various parts of your HTML. In our case we have defined a controller called helloWorldAppController that controls the div element we have defined. In the JavaScript we are registering this controller on our module.

Once you have the controller, you can define variables and functions that you can access in the part of the HTML controlled by this particular controller. You can also notice that the variable called "message" defined in the controller is accessible in the HTML using the {{message}} syntax. You can also see the binding created using the ng-model directive. This is called two-way binding: a value changed in the input field is immediately visible in the same HTML via the {{message}} syntax as well as in the controller via $scope.message. In the HTML where we declared the controller, we also specified an init method using ng-init, and you can see this method defined in the controller as $scope.init. It is called when the controller is initialized, and this function in the controller script initializes the message variable to "Hello World!", which is reflected in the HTML page.

Lastly, we have included our controller script (helloWorld.js) and the AngularJS script in the HTML for our AngularJS code to work. Make sure the AngularJS script is the first one added.

All keywords starting with ng- are AngularJS directives.
  • ng-app
  • ng-controller
  • ng-init
  • ng-model 

Create a gif from a video using ffmpeg with good quality


In the last post, we saw how to resize videos and images using FFmpeg. In this post, we will see how we can convert a video into a GIF with good quality.

 Create a gif from a video using FFmpeg

You can use the following shell script to convert your video to a GIF -

#!/bin/bash
#  Author : athakur
#  Version : 1.0
#  Create Date : 27/02/2018
#  Update Date : 27/02/2018
#  Description : Create gif from a video
#    Sample usage : ./ input.mp4 output.gif
# ffmpeg static build can be downloaded from
echo "Converting $1 to $2"

if [ -z "$1" ] || [ -z "$2" ]
then
  echo "Incorrect arguments supplied. Format - ./ input.mp4 output.gif"
  exit 1
fi

# Temporary palette file and the filter chain used in both passes
# (the fps/scale values are typical defaults - tune them to your needs)
palette="/tmp/palette.png"
filters="fps=15,scale=1024:-1:flags=lanczos"

# Pass 1 : generate an optimized 256-colour palette for this video
ffmpeg -v warning -i "$1" -vf "$filters,palettegen" -y "$palette"
# Pass 2 : create the gif using that palette
ffmpeg -v warning -i "$1" -i "$palette" -lavfi "$filters [x]; [x][1:v] paletteuse" -y "$2"

echo "Completed gif creation" 

You can find this code in my GitHub gists section-
 To run it you can simply execute the following command - 
  • ./ input.mp4 output.gif
Input need not be an mp4. It can be an .avi or a .webm too.

Sample video and gif are as follows -

Video :

Gif :

That's it, your high-quality GIF is ready.

You can do more with FFmpeg, like scaling and cropping the video before converting it to a GIF. E.g.

./ffmpeg -loglevel warning -y -i input.mp4 -i palette.png -filter_complex "crop=800:500:0:0[x];[x][1:v]paletteuse=dither=bayer:bayer_scale=3[tl];[tl]fps=10,scale=1024:-1:flags=lanczos"  target.gif

Related Links

Friday, 23 February 2018

Resizing videos and images using ffmpeg


If you are a developer and a geek and need to work on image/video formatting in terms of cropping, scaling or resizing, then you can use FFmpeg for that. A static build of FFmpeg is available for download,
which you can download and store on your local machine to use.

I have even added this to my Linux PATH so that I can access it from anywhere. If you are not aware of how to do that, refer to my earlier post -
Once you have added FFmpeg to your PATH, you can simply run ffmpeg to see the help content -

Specifically, if you are working on Android or iOS apps or a Chrome plugin, and you have an icon of a standard size that you want to resize to fit the other sizes supported by the corresponding platform, then FFmpeg really comes in handy.

Rescaling images with FFmpeg

I have a simple icon of size 256*256 to be used as a replay button.

I want it in 128*128 size. You can do this with the following command -
  • ffmpeg -i icon-256.png -vf scale="128:128" icon-128.png
And if you want 48*48 you can similarly do -
  • ffmpeg -i icon-256.png -vf scale="48:48" icon-48.png

If you want to retain the aspect ratio you can do -
  • ffmpeg -i icon-256.png -vf scale="128:-1" icon-128.png

You can do similar resizing for a video instead of an image -
  • ffmpeg -i input.avi -vf scale="320:240" output.avi
If you want the new size to be based on the actual input size, then you can do that as well. For example, if you want the image to be double the size it actually is -

  • ffmpeg -i icon-256.png -vf scale="iw*2:ih*2" icon-double.png

Since this doubles the dimensions, the output would be 512*512, as the original was 256*256.
Similarly, if you want half the size you can do -

  •  ffmpeg -i icon-256.png -vf scale="iw/2:ih/2" icon-half.png

 Since this halves the dimensions, the output would be 128*128, as the original was 256*256. In these expressions -

  • iw : input width
  • ih : input height
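A couple more variations of the same scale filter are sketched below; the input and output file names here are just examples -

# Scale a video to 640 px wide while keeping the aspect ratio; -2 (instead of -1)
# rounds the height to an even number, which many video codecs require.
ffmpeg -i input.mp4 -vf "scale=640:-2" output-640.mp4

# Scale an image relative to its own size (75% of 256*256 gives 192*192).
ffmpeg -i icon-256.png -vf "scale=iw*0.75:ih*0.75" icon-192.png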

Related Links

Saturday, 10 February 2018

Shell script to update lambda code and environment variables to multiple regions and environments using AWS CLI


If you are using the AWS stack in your project, then you must have used the Lambda service at some point. It is what we call serverless architecture. You do not care about the hardware or the operating system. You just provide the Lambda with the code you wish to run, which can be triggered from multiple sources like S3, API Gateway etc.

Since AWS services are region specific, you may have the same Lambda code running in multiple regions. You would have to go to each region and deploy your Lambda code either by directly uploading the zip file or by uploading it from an S3 bucket in the same region. Either way, this is time-consuming and repetitive. Also, you might have the same code running under different names corresponding to different execution environments like dev, qa, production etc. In addition, each Lambda may have environment variables like database configuration settings, or other custom settings like memory and timeout.

In this post I will show you a simple shell script that uses the AWS CLI to do this from your local machine. You can just run this script and it will take care of deploying your code, changing environment variables and setting custom configurations for each region you wish to deploy the Lambda to.

Assumptions and Setup

This deployment script assumes you have installed the AWS CLI and configured a profile in it. If you have not done that already, refer -
NOTE : If you do not explicitly provide a profile name to aws configure, it is set to "default" by default.

Next, this script assumes you have a local zip file that contains your Lambda code. The script also takes an env value as input and expects your Lambda function name to have it as a suffix. So if your base Lambda function name is "my-lambda", then your actual Lambda function names in the different environments should be -
  • test : my-lambda-test
  • dev : my-lambda-dev
  • qa : my-lambda-qa
  • prod : my-lambda-prod
The script has the base name of the Lambda and some environment variables that are defined globally and per region. The script also has an array of regions the Lambda should be updated in. You can change these things as per your use case.

Shell script code

You can find the code on my Github gist -
I am also adding it below to explain how it works. However, to see the most recent version, refer to the gist link above.

Code is as follows -

#!/bin/bash
#  Author : athakur
#  Version : 1.0
#  Date : 10/02/2018
#  Description : Deployment script to update lambda code and env variables
#  Sample usage :  
#    Local : ./ test aws-admin fileb://../
#    Dev : ./ dev aws-admin fileb://../
#    QA : ./ qa aws-admin fileb://../
#    Prod : ./ prod aws-admin fileb://../
echo "Updating lambda code for ENV : $1 PROFILE : $2 ZIP_FILE_PATH : $3"

if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]
then
  echo "Incorrect arguments supplied. Format - ./ ENV PROFILE ZIP_FILE_PATH"
  exit 1
fi

ENV="$1"
PROFILE="$2"
ZIP_FILE_PATH="$3"

# Validate the environment name (these match the -test/-dev/-qa/-prod suffixes described above)
case "$ENV" in
    test|dev|qa|prod) ;;
    *)
        echo "Environment not provided"
        exit 1
        ;;
esac

# Base lambda name - the environment is appended as a suffix (my-lambda-dev, my-lambda-qa, ...)
FUNCTION_NAME="my-lambda-$ENV"

SUPPORTED_REGIONS=("us-east-1" "ap-northeast-1" "ap-southeast-1" "ap-southeast-2")

for REGION in "${SUPPORTED_REGIONS[@]}"
do
    echo "Region : $REGION"

    # Region-specific environment values - the SNS endpoint ARNs below are placeholders
    case "$REGION" in
        us-east-1)
            SNS_ENDPOINT="arn:aws:sns:us-east-1:ACCOUNT_ID:my-topic"
            ;;
        *)
            SNS_ENDPOINT="arn:aws:sns:$REGION:ACCOUNT_ID:my-topic"
            ;;
    esac

    # Environment variables passed to the lambda - adjust the keys/values to your use case
    env_variables="Variables={ENVIRONMENT=$ENV,SNS_ENDPOINT=$SNS_ENDPOINT}"

    echo "Env variables :  $env_variables"
    lambda_update_env_command="aws lambda update-function-configuration --function-name $FUNCTION_NAME --region $REGION --profile $PROFILE --environment '$env_variables' --timeout 300 --memory-size 3008"
    echo "Executing command : $lambda_update_env_command"
    eval $lambda_update_env_command
    lambda_update_code_command="aws lambda update-function-code --function-name $FUNCTION_NAME --region $REGION --zip-file $ZIP_FILE_PATH --profile $PROFILE"
    echo "Executing command : $lambda_update_code_command"
    eval $lambda_update_code_command
    echo "Completed Lambda function update for region $REGION"
done

Now let's try to understand what we are doing in the above shell script.

Understanding the shell script

The above shell script takes 3 arguments -
  1. Env : This is the environment, e.g. test, dev, qa, prod etc.
  2. Profile name : This is the aws profile name configured. If you have not done so this will just be "default"
  3. zip file path : Path to lambda zip file

 The first part of the script validates that you provided the arguments needed for the script to run. Next we define some environment variables that we need to set for our Lambda. We also set an environment variable called SNS_ENDPOINT that changes per region. You can use a similar code snippet per environment as well.

Next we have an array of AWS regions we need to deploy our Lambda code in. You can add or remove entries as per your use case. Finally we run 2 AWS commands for each region -
  1. update-function-configuration : This updates the environment variables and other configurations needed by your lambda
  2. update-function-code : This updates the actual lambda code that gets deployed to the Lambda.
NOTE : We are also setting --timeout 300 --memory-size 3008, which sets the Lambda timeout and memory to the maximum available at the time, i.e. 5 minutes and 3008 MB respectively.

NOTE : Lambda is charged based on the amount of time it runs multiplied by the memory it uses, so change the above configuration as per your needs and budget.
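If you want to confirm what actually got deployed, the AWS CLI can also read the configuration back. A minimal sketch, assuming the my-lambda-dev function name, us-east-1 region and aws-admin profile used in the sample usage above -

# Print the current timeout, memory size and environment variables of the function
aws lambda get-function-configuration --function-name my-lambda-dev --region us-east-1 --profile aws-admin

# Print code metadata (CodeSha256, LastModified) to verify that the new zip was picked up
aws lambda get-function --function-name my-lambda-dev --region us-east-1 --profile aws-admin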

Related Links

Friday, 2 February 2018

Simulating environment variables in NodeJs using dotenv package


When you write code, there are certain variables that may differ across environments like dev, qa, prod etc. These might include sensitive data such as API keys, passwords etc. You definitely do not want to put them in your code directly, since the code will be added to some repository like git and others may have access to it. 

The general practice is to use environment variables that can be defined at the environment level and then read and used in the code. For example, consider Elastic Beanstalk or a Lambda in the AWS world: you would define environment variables for the environment and use them in code. If it's your own physical box, you might define the environment variables at the OS level, or maybe at the Tomcat level if you are using Tomcat as the container. Environment variables work fine in all such cases. 

But how would you do the same locally? In this post I will show how we can simulate environment variables in a NodeJs process with a package called dotenv.

This post expects you to know the basics of NodeJs and to have NodeJs and npm (the Node package manager) installed on your machine. If you have not done that, then please refer to my earlier post -

Simulating environment variables in NodeJs using dotenv package

First you need to install the dotenv package using npm. To do so, go to the directory containing your NodeJs file and execute the following command -
  • npm install dotenv
If you get some warning you can ignore it for now. You should see a directory called node_modules getting created in the same directory where you executed this command. This folder will have the package dotenv that we just installed.

Now that we have the package installed, let's see how we can simulate environment variables. For this, simply create a file named .env in the same directory and add the environment variables you expect to read in code to it. For this demo I will use 3 environment variables -
  • ENVIRONMENT=local
  • USERNAME=athakur
  • PASSWORD=athakur
Now create a NodeJS file in the same directory. Let's call it test.js. The directory structure is as follows -

Add following content in test.js -

'use strict';
const dotenv = require('dotenv');
// Load the variables defined in the .env file into process.env
dotenv.config();

const env = process.env.ENVIRONMENT
const username = process.env.USERNAME
const password = process.env.PASSWORD

console.log("Env : " + env);
console.log("Username : " + username);
console.log("Password : " + password);

Save the file and execute it as -
  • node test.js
You should see following output on the screen -

Env : local
Username : athakur
Password : athakur

And that's it. You can add any number of environment variables in the ".env" file and read them in your NodeJs code as process.env.VARIABLE_NAME.
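For example, here is a quick way to try that out from the shell; API_KEY is just a made-up variable name for this sketch -

# Append one more variable to the .env file
echo "API_KEY=my-secret-key" >> .env
# Read it back through dotenv without editing test.js
node -e "require('dotenv').config(); console.log('API key : ' + process.env.API_KEY);"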

NOTE : The .env file will be hidden in Ubuntu, since Ubuntu hides all files that start with a dot (.). You can press Ctrl + H to view hidden files in the file manager, or do "ls -la" in the console. More details -

To read more about this package  you can read -

Related Links

How to show hidden files and folders in Ubuntu


Some files and folders are hidden in Ubuntu. These are the ones whose names start with a ".". E.g. -
  • ~/.bashrc
  • ~/.vimrc etc
In this post, I will show you how you can make these files visible.

How to hide files and folders in Ubuntu?

The Files file manager gives you the ability to hide and unhide files at your discretion. When a file is hidden, it is not displayed by the file manager, but it is still there in its folder.

To hide a file, rename it with a "." at the beginning of its name. For example, to hide a file named example.txt, you should rename it to .example.txt.

You can hide folders in the same way that you can hide files. Hide a folder by placing a "." at the beginning of the folder’s name.
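As a quick example from the terminal (example.txt is just a sample file name) -

# Hide the file by adding a leading dot to its name
mv example.txt .example.txt
# Unhide it again by removing the dot
mv .example.txt example.txt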

How to show hidden files and folders in the Ubuntu CLI

To see the hidden files in the command line interface (CLI) you can just use -
  • ls -la
To list files without the hidden ones you can just use -
  • ls -l

 How to show hidden files and folders in the Ubuntu Files explorer

To show hidden files in the Files explorer, you can go to -
  • View -> Show hidden files
or you can simply press
  • Ctrl + H 

You can use the same shortcut or select the same setting again to toggle between showing and hiding hidden files in your Files explorer.

To make this permanent you can go to -
  • Edit -> Preferences
and turn on the setting to show hidden files.

Related Links

Thursday, 1 February 2018

How to restore a corrupted or deleted partition with TestDisk and Ubuntu Live


I recently erased one of my partitions, which was mounted on the /home path in Ubuntu Linux. However, I was able to restore the partition and life was back to normal. 

I was trying to install Windows and the installer (from USB) was forcing UEFI mode instead of Legacy. The NTFS partition did not work out and Windows could not be installed, since the disk was of type MBR instead of GPT (which is required by UEFI mode). When I tried to convert it to GPT, it started erasing the entire disk instead of just the partition I had selected. I stopped the process immediately, but my partitions were gone and I was left with one disk without any partitions. As I mentioned earlier, I was able to restore my previous partitions and the data was intact.

In this post I will show you how we can do this.


You need to have a bootable USB with Ubuntu or GParted Live. The GParted Live image has both tools -
  • gparted and
  • testdisk
installed, so it is a much simpler option. But if you already have a bootable USB with Ubuntu, then you can use that like I did.

Boot your machine from this USB drive.

NOTE : If you do not have a bootable USB you can create one using unetbootin.


How to restore a corrupted or deleted partition with TestDisk and Ubuntu Live

After you boot from the Ubuntu live USB, go to "Software & Updates" and under "Downloadable from the Internet" select the entry with "Universe".

Now run -
  • sudo apt-get update
  • sudo apt-get install gparted
  • sudo apt-get install testdisk

Open gparted to see your disks and partitions. If the partition is missing, you should see unallocated space. Now run testdisk -
  • sudo testdisk
 And follow next steps -

  1. Select "No Log" option.
  2. Select the disk drive you want to recover, e.g. /dev/sdc.
  3. Select your partition table type. Usually it's Intel.
  4. Select "Analyse" and then "Quick Search".
  5. Your drive will be analysed and you will see a list of all found partitions.  Press Enter.
  6. On the next screen you have the option to either perform a second Deeper Search, or Write the current partition table to disk. If the quick search was successful, choose Write.
  7. Finally reboot your machine to see the reflected changes
  8. You can reuse gparted to verify that the partition is restored post reboot


That's it. Your partitions should be restored.
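If you prefer to double-check from the command line, you can also list the partition table before and after the TestDisk run; /dev/sdc below is just the example disk from the steps above -

# List all block devices and their partitions
lsblk
# Or print the partition table of the affected disk
sudo fdisk -l /dev/sdc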

Related Links

Sunday, 28 January 2018

Creating web application with Spark Java framework


Spark is a Java framework that lets you create web applications. In this post we will see how we can write a basic web application using the Java Spark framework. Do not confuse this with Apache Spark, which is a big data framework. If you want to quickly bring up a local server to test something out, Spark Java lets you do it in the simplest way possible. You do not need an application server; it embeds a Jetty server inside it.


Add the spark-core and slf4j-simple dependencies to your pom.xml (or the equivalent entries in your Gradle build file).


spark-core (groupId com.sparkjava) is the Spark framework, whereas slf4j-simple (groupId org.slf4j) is for logging. Once the above setup is done, we can proceed to actually implement our REST application.

Getting Started with Java Spark

Following is a simple Spark code that starts a server and returns "Hello World!" in the response -

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import spark.Request;
import spark.Response;
import spark.Route;
import spark.Spark;

/**
 * @author athakur
 */
public class HelloWorldWithSpark {

    private static final Logger logger = LoggerFactory.getLogger(HelloWorldWithSpark.class);

    public static void main(String args[]) {
        // Register a handler for GET requests on the root path
        Spark.get("/", new Route() {

            @Override
            public Object handle(Request request, Response response) throws Exception {
                logger.debug("Received request!");
                return "Hello World!";
            }
        });
    }
}

Just run the above Java code. It should start a Jetty server and start listening for incoming requests. The default port that the server listens on is 4567. So after running the above code, go to the browser and access http://localhost:4567/.
You should see "Hello World!" in the response.

Spark exposes static methods that let you define the URLs or routes you want to do some processing on and return a response for. In the above example we are listening on the path "/", which is the root path, and returning "Hello World!".
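You can also hit the endpoint from the command line instead of the browser; a quick check with curl, assuming the default port -

curl http://localhost:4567/
# Expected response body: Hello World!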

The same code written using Java 8 functional programming / lambdas would be -

public class HelloWorldWithSpark {

    private static final Logger logger = LoggerFactory.getLogger(HelloWorldWithSpark.class);

    public static void main(String args[]) {
        Spark.get("/", (req,res) -> {
            return "Hello World!";
        });
    }
}

NOTE : Here we are using the GET verb, but you can use any other, like POST, PUT etc.

You can easily create REST APIs this way. A sample example is given below -

public class HelloWorldWithSpark {

    private static final Logger logger = LoggerFactory.getLogger(HelloWorldWithSpark.class);

    public static void main(String args[]) {
        Spark.get("/employee/:id", (req,res) -> {
            logger.debug("Got request to get employee with id : {}", req.params(":id"));
            return "Retrieved Employee No " + req.params(":id");
        });
        // Register a POST route on the same path to add an employee
        Spark.post("/employee/:id", (req,res) -> {
            logger.debug("Got request to add employee with id : {}", req.params(":id"));
            return "Added Employee No " + req.params(":id");
        });
    }
}


That was simple, wasn't it? If you want to deploy it in production like an actual web application, in the form of a WAR, you need to follow slightly different steps -

Related Links

Saturday, 20 January 2018

Difference between a forward proxy and a reverse proxy server


Most companies out there have a proxy between their corporate traffic and the internet. This could be for multiple reasons, network security being one of them. In my previous post I showed how to set up a Squid proxy -
That was basically a forward proxy. There is another type of proxy called a reverse proxy. In this post we will see the difference between them and how they work.

A proxy, in layman's terms, means someone acting on behalf of someone else. This is the main principle behind both forward and reverse proxies.

Forward Proxy :

Working :

A forward proxy sits between client machines and an origin server. Client machines make a request to the forward proxy with the origin server as the target. The forward proxy then makes a request to the origin server, gets the response and sends it back to the clients. Clients in this case need to be explicitly configured to use such a forward proxy.

So to summarize, a forward proxy retrieves data from another website (the origin server) on behalf of the clients.
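As a concrete illustration of that explicit client-side configuration, here is how a single request could be sent through a forward proxy with curl; the proxy host, port and target URL are just examples -

# -x / --proxy tells curl (the client) to send the request via the forward proxy
curl -x http://proxy.mycompany.local:8888 http://example.com/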

Example : 

 Consider three computers - A, B and C. Now A wants to request a website hosted on computer C. In the normal case it would directly be
  • A -> C
where computer A directly asks C for the website. However, in the case of a forward proxy there is an intermediate computer B. Computer A makes a request to this computer B instead of directly making a request to C. Computer B now makes a request to C, gets the website and returns it back to A. So the path would be
  • A -> B -> C.

When :

There can be multiple cases in which a forward proxy might be useful. Some are -
  • Client machines (Computer A in the above case) are behind a firewall and have no direct access to the internet, and thereby no access to the origin server.
  • A company wants to block some malicious sites. They do this on the forward proxy and make sure all clients make requests via this proxy.
  • A forward proxy can also cache responses so that response times are kept to a minimum.

Reverse Proxy :

Working : 

A forward proxy is used to shield the client machines, whereas a reverse proxy is used to shield an origin server. Client machines make calls to the reverse proxy as if it were the origin server. The reverse proxy then makes a call to the actual origin server and returns the response back to the client.

Example :
Let's consider a similar example of 3 computers - A, B and C. Again, in a normal scenario A would directly request the website from C.
  • A -> C
In the case of a reverse proxy there is a computer B which hides C behind it. A makes a call to B instead, and B fetches the website from C and returns it back to A. So the path is again -
  •  A -> B -> C

When :

Some cases in which a reverse proxy is useful are -
  • Providing internet users access to a server that is behind the firewall.
  • Load balancing backend servers.
  • A typical CDN deployment, where the proxy directs the client to the nearest CDN server.

 Difference between a proxy and a reverse proxy server

 If you look at the examples above, in the case of both forward and reverse proxy the path is always -
  • A -> B -> C
In the case of a forward proxy, B shields machine A by fetching the content from C itself and sending it back to A. Whereas in the case of a reverse proxy, B shields C by fetching the data from C and sending it back to A.

In the case of a forward proxy, C thinks that B is the machine sending it requests, whereas there could be multiple A's behind B. Similarly, in the case of a reverse proxy, A thinks it is sending requests to C, but it is actually talking to B, and B in turn may talk to multiple C's and send the response back to A.

Related Links

Thursday, 18 January 2018

How to set up a squid Proxy with basic username and password authentication in Ubuntu


Most big companies have their own proxies through which all company traffic is routed. This ensures malicious sites are blocked and all other traffic is audited via proper authentication. 

To give a little background on Squid proxy -
Squid is a caching and forwarding HTTP web proxy. It has a wide variety of uses, including speeding up a web server by caching repeated requests, caching web, DNS and other computer network lookups for a group of people sharing network resources, and aiding security by filtering traffic. Although primarily used for HTTP and FTP, Squid includes limited support for several other protocols including Internet Gopher, SSL, TLS and HTTPS. Squid does not support the SOCKS protocol.

Squid was originally designed to run as a daemon on Unix-like systems. A Windows port was maintained up to version 2.7. New versions available on Windows use the Cygwin environment. Squid is free software released under the GNU General Public License.

Source : Wiki

Installing Squid proxy on Ubuntu

To install the Squid server, simply run the following command in your terminal -
  • sudo apt install squid

Squid runs as a daemon service in Ubuntu. You can execute the following command to see the status of this service -
  • service squid status
It will show you whether the squid service is running or not.

Some important file paths are -
  • /etc/squid :  This is where your squid configuration resides
  • /var/log/squid : This is where your squid logs reside
  • /usr/lib/squid3,/usr/lib/squid : This is where your squid modules or libraries reside.
Now that we have the Squid proxy installed, let's configure it.

Squid configuration is located at -
  • /etc/squid/squid.conf
Before you make changes to this file, make a copy of it and store it aside. Use the following commands to do that -

  • sudo cp /etc/squid/squid.conf /etc/squid/squid.conf.original
  • sudo chmod a-w /etc/squid/squid.conf.original 
This essentially creates a copy of squid.conf called squid.conf.original and removes all write access to it so that no one can accidentally write to it.

The default TCP port that Squid listens on is 3128. Go ahead and change it to 8888. I prefer using port 8888 since it is used by other proxies as well, like Charles and Fiddler. To do this, find the line

  • http_port 3128
and change it to

  • http_port 8888

Next you need to provide rules to allow and disallow traffic. If you want to allow traffic just from your local machine, you can add the following lines to the configuration -
  • acl localhost src
  • http_access allow localhost 
acl is nothing but an access control list. It's a keyword that states an acl definition is starting. Next, localhost is the name used to identify the acl. I have named it localhost but it could be anything. Next we have src, which is used to identify the local (source) IP addresses. Other options are -
  1. srcdomain : used for declaring the local domain, 
  2. dst : used for the public (destination) IP & 
  3. dstdomain : used for the public (destination) domain name
Next we have http_access, which takes the action given as its next word and applies it to the acl we specify. In this case we are saying allow for the acl named localhost that we defined above. So the Squid proxy is going to allow all http traffic from the local machine (i.e. with IP

The last line you can add is -
  • http_access deny all
which denies all other traffic. The way acls work is -

For each request that Squid receives, it will look through all the http_access statements in order until it finds a line that matches. It then either accepts or denies the request depending on your setting. The remaining rules are ignored. 

Those were the basic settings for the Squid proxy. Now let's see how we can add authentication to this scheme.

Post configuration you can just restart the squid service -
  • service squid restart
You can also view the service logs for this in file-
  • less /var/log/squid/cache.log
 And you can view the access logs in file -

  • less /var/log/squid/access.log

How to set up a squid Proxy with basic username and password authentication?

For this you can add the following lines to your squid configuration file squid.conf -

auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid/passwords
auth_param basic realm proxy
acl authenticated proxy_auth REQUIRED
http_access allow authenticated

ident_lookup_access deny all
http_access deny all

The above configuration will ensure all traffic is authenticated. The username/password needed to get access will be stored in the file /etc/squid/passwords. We will now see how we can create this file.

To generate the username/password you need to use a command called htpasswd. You can install it using -
  • apt-get install apache2-utils
Next, to generate the username/password, type in the following command -
  • sudo htpasswd -c /etc/squid/passwords YOUR_USERNAME
Replace YOUR_USERNAME with the username you want, e.g. admin. You will be prompted for the password for this username twice. Once done, your user is all set up. You can use these credentials to access your proxy.

NOTE : htpasswd stores the password hashed.
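If you later need more users in the same password file, run htpasswd again without the -c flag (the -c flag creates or overwrites the file); anotheruser is just an example name -

sudo htpasswd /etc/squid/passwords anotheruser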

Once done, you can restart your Squid service -
  • service squid restart
My conf file looks like below -

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_port 8888

coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern (Release|Packages(.gz)*)$      0       20%     2880
refresh_pattern .               0       20%     4320

auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid/passwords
auth_param basic realm proxy
acl authenticated proxy_auth REQUIRED
http_access allow authenticated

ident_lookup_access deny all
http_access deny all 

Now you can test this by configuring the proxy in Firefox and trying to go to an http URL.

Enter the username/password that you created earlier and the URL should be accessible.
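You can also test it from the command line with curl instead of Firefox; replace admin:yourpassword with the credentials you created with htpasswd -

# Send a request through the local Squid proxy using basic proxy authentication
curl -x http://localhost:8888 --proxy-user admin:yourpassword -I http://example.com/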

Related Links

Saturday, 6 January 2018

Writing your first Django app - PART 1


Django is a Python-based web framework that lets you create web apps quickly and with less code. It's free and open source. For more details on the framework itself, visit the official Django website.
In this post we will create a sample app using the Django Python framework.


This tutorial assumes you are using Django 2.0, which supports Python 3.4 and later.

Install python3 and pip3 -
  • sudo apt-get install python3-pip
Next install Django python framework using pip3 -
  • sudo pip3 install Django 
You can see the installed versions of Python and Django in various ways. Some are given in the screenshot below -

Creating a Django project

Create a skeleton of your Django app using the following command -
  • django-admin startproject djangodemo
You should see a directory getting created with the name djangodemo. Inside it you should have a manage.py file and another directory with the same name, djangodemo. This inner directory named djangodemo is actually a Python package. The outer directory is just a holder containing the manage.py file. The manage.py file gives you command-line tasks to interact with your Django project. You can see the version of the Django framework used with the following command -
  •  python3 manage.py version 

Directory structure is as follows -

 Some other pointers, other than the ones mentioned above -
  • __init__.py : tells Python this directory should be considered a package.
  • This also means your inner djangodemo directory is a Python package.
  • settings.py : your Django project settings go here.
  • urls.py : URLs used in your Django project go here.
  • wsgi.py : this is an entry point for WSGI-compatible web servers that can serve your project.
 Now that you have created your project, let's run it with the following command -
  • python3 manage.py runserver

Ignore the warnings for now.

NOTE : You don't have to restart the server every time you make changes to the code. Django handles that. Just refresh the pages.

Open http://localhost:8000/ in your browser.
 You should see the installation-successful message as follows -

NOTE : By default your server will run on port 8000. But you can change it as follows -
  • python manage.py runserver 8080

Creating Django App

A project is a collection of apps and the configuration needed for a website to run. Apps are modules that run in your project. A project can have multiple apps. Similarly, an app can be part of multiple projects. Apps can be on any Python path.

You can create an app as follows -
  • python3 manage.py startapp testapp
I like to put all apps in a directory called apps inside the actual Python package directory. You can do that as follows -

Creating your webpage

Go to your testapp directory (under apps) and edit views.py to add the following content -

from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello world!")

 Next, in the same directory, create a file called urls.py and add the following content to it -

from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name='index'),
]

Finally, go to your project directory - djangodemo/djangodemo - and edit the urls.py file there to have the following content -

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('test/', include('djangodemo.apps.testapp.urls')),
]

Next, in the apps directory inside the djangodemo directory, create a file called __init__.py. You can do this using -
  • touch __init__.py
Now simply run your server and visit http://localhost:8000/test/ to see your site.
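You can also check it quickly from the command line -

curl http://localhost:8000/test/
# Expected response body: Hello world!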

Understanding : First we created an app called testapp. It has some default files like views.py, which stores all your views. Here we added a new view called index and mapped it, inside a urls.py file, to the root URL ("") at the app level. Next we mapped this app-level urls.py into the project-level urls.py under 'test/'. include() takes the matched part of the URL and forwards the rest to the included module. In this case it checks that the URL has 'test/' and forwards the rest, which is "", to the urls.py in testapp, where we have mapped the index view to "". So the index view gets rendered.

NOTE : Note how we added an __init__.py file in the apps directory. This is to ensure Python recognizes this directory as a package, so that we could use djangodemo.apps.testapp.urls in the urls.py of the project.

That's it, you created your first Django project and app. We will see some more details about this in the next post. Thanks.

Related Links
