

Showing posts from 2016

Apache Spark Job with Maven

Today, I'm going to show you how to write a sample word-count application using Apache Spark. For dependency resolution and build tasks, I'm using Apache Maven; however, you could also use SBT (Simple Build Tool). Most Java developers are familiar with Maven, hence I decided to show an example using it.


This application is quite similar to Hadoop's classic WordCount example; this job does exactly the same thing. The content of Drive.scala is given below.
The job reads all the files in the input folder, tokenizes each line on spaces (" "), and then counts every word individually. Note that the application reads its arguments from the args variable: the first argument is the input folder, and the second is where the output is dumped.
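Since the original Drive.scala listing isn't preserved in this excerpt, here is a minimal sketch of what such a driver could look like; the object name Driver and the Spark 1.x SparkContext API are assumptions, not necessarily the post's original code.

import org.apache.spark.{SparkConf, SparkContext}

// Minimal word-count driver matching the description above.
// args(0) is the input folder, args(1) is the output folder.
object Driver {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)

    sc.textFile(args(0))          // read all files in the input folder
      .flatMap(_.split(" "))      // tokenize every line on spaces
      .map(word => (word, 1))     // pair each word with a count of one
      .reduceByKey(_ + _)         // sum the counts for each word
      .saveAsTextFile(args(1))    // dump the result to the output folder

    sc.stop()
  }
}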
Every Maven project needs a pom.xml. The content of the pom.xml is given below.
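As with the driver, the original pom.xml isn't preserved here. A minimal sketch could look like the following; the project coordinates, Spark 1.6.0 on Scala 2.10, and the net.alchim31.maven scala-maven-plugin are illustrative assumptions.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>spark-wordcount</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <!-- Scala runtime; must match the Spark artifact's Scala version -->
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>2.10.6</version>
    </dependency>
    <!-- Spark core, 'provided' because the cluster supplies it at runtime -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.6.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <!-- Compiles the Scala sources as part of the Maven build -->
      <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.2.2</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>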
Run the command below to build the Maven project.
mvn clean package
Maven will download all the dependencies and all t…
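Once the build succeeds, the resulting jar can be submitted to a cluster. The class name, jar name, and paths below are illustrative assumptions:

spark-submit --class Driver target/spark-wordcount-1.0-SNAPSHOT.jar hdfs:///input hdfs:///output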

Getting Public Data Sets for Data Science Projects

All of us are interested in doing brilliant things with data sets. Most people use Twitter data streams for their projects, but there are a lot of free data sets on the Internet. Today, I'm going to list a few of them. I found almost all of these links in a Lynda.com course called Up and Running with Public Data Sets. If you want more details, please watch the complete course on Lynda.com.
Quandl (https://www.quandl.com/)
Inforum (http://www.inforum.umd.edu/)
Google Public Dataset (https://www.google.com/publicdata/directory)
Amazon Public Dataset (https://aws.amazon.com/public-data-sets/)
US Open Data Portal (https://www.data.gov/)
Google Ngram Viewer (https://books.google.com/ngrams)
UK Open Data Portal (https://data.gov.uk/)
Corpus of Contemporary American English (http://corpus.byu.edu/coca/)
World Bank (http://data.worldbank.org/)
UN (http://data.un.org/)
EuroStat (http://ec.europa.eu/eurostat)
CIA World FactBook (https://www.cia.gov/library/publications/the-world-factbook/)
American…

vboxdrv setup says make not found

After you update the kernel, you need to run vboxdrv setup. But if you are compiling the modules for the first time, or after removing the build-essential package, you might see the error below.

user@ubuntu:~$ sudo /etc/init.d/vboxdrv setup
[sudo] password for user:
Stopping VirtualBox kernel modules ...done.
Recompiling VirtualBox kernel modules ...failed!
  (Look at /var/log/vbox-install.log to find out what went wrong)
user@ubuntu:~$ cat /var/log/vbox-install.log
/usr/share/virtualbox/src/vboxhost/build_in_tmp: 62: /usr/share/virtualbox/src/vboxhost/build_in_tmp: make: not found
/usr/share/virtualbox/src/vboxhost/build_in_tmp: 62: /usr/share/virtualbox/src/vboxhost/build_in_tmp: make: not found
/usr/share/virtualbox/src/vboxhost/build_in_tmp: 62: /usr/share/virtualbox/src/vboxhost/build_in_tmp: make: not found
Ubuntu needs the build-essential package to run the above command. Run the commands below to install build-essential and then re-run the setup.

sudo apt-get install build-essential
sudo /etc/init.d/vboxdrv setup
Then you can …

HDFS - How to recover corrupted HDFS metadata in Hadoop 1.2.X?

You might be running Hadoop in production, sometimes with terabytes of data residing in HDFS. HDFS metadata can get corrupted, and the NameNode won't start in such cases. When you check the NameNode logs, you might see exceptions like the following.
ERROR org.apache.hadoop.dfs.NameNode: java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at org.apache.hadoop.io.UTF8.readFields(UTF8.java:106)
    at org.apache.hadoop.io.ArrayWritable.readFields(ArrayWritable.java:90)
    at org.apache.hadoop.dfs.FSEditLog.loadFSEdits(FSEditLog.java:433)
    at org.apache.hadoop.dfs.FSImage.loadFSEdits(FSImage.java:759)
    at org.apache.hadoop.dfs.FSImage.loadFSImage(FSImage.java:639)
    at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:222)
    at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:79)
    at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:254)
    at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:235)
    …
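The excerpt ends before the fix. For reference, one standard first step on Hadoop 1.x (not necessarily what this post went on to describe) is restoring the last good checkpoint from the SecondaryNameNode, which starts the NameNode from the image in fs.checkpoint.dir instead of the corrupted current one; any edits made after that checkpoint are lost:

hadoop namenode -importCheckpoint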

How to fix InsecurePlatformWarning on Ubuntu?

Python modules sometimes cause issues. We got the warning below from a Python application.
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning

After a little research, we found that this can be caused by an outdated module. We ran the commands below on that server, and afterwards the warning no longer occurred.
$ sudo apt-get install build-essential python-dev libffi-dev libssl-dev
$ sudo pip install --upgrade ndg-httpsclient
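If the warning persists, another commonly suggested fix (an addition here, not from the original post) is installing requests with its optional security extras, which pull in pyOpenSSL, ndg-httpsclient, and pyasn1:

$ sudo pip install 'requests[security]'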

Vagrant on Windows 7 vs Ubuntu 14.04

My whole team had to work on a project that uses Vagrant. Most of us had 8GB of memory, except one unfortunate intern: he had only 4GB on his workstation. All the team members could spawn Vagrant machines without a problem except him.
So we requested more memory and insisted that the IT department upgrade his machine to 8GB. Oh no! Our IT department was about to retire desktops, so they wouldn't buy new parts for existing desktop systems. Somehow we managed to get a 1GB memory module, which brought him up to 5GB. The computer had an Athlon processor. (I cannot recall the model number.)
Then we tried to spin it up again. Provisioning the Vagrant machine took at least three hours, and sometimes packages got corrupted. He stopped provisioning machines once he realized it was useless.
Then we moved him to another task, where he had to work closely with Ubuntu. So I pushed him to install it, and this kid was happy to do so. Then I created an Ubuntu 14.04 bootable USB drive. Then…

How to specify ReleaseLabel for EMR cluster with Boto2

Boto is the AWS SDK for Python. You can create clusters, instances, or almost anything using Boto, but sometimes it imposes limitations. I wanted to create an EMR cluster with ReleaseLabel 4.2.0, but we were using Boto2. ReleaseLabel is an option in Boto3; in Boto2 there was no documented option for it.
So I found a way to create EMR (Elastic MapReduce) clusters using Boto2 with a given ReleaseLabel.
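The original listing is missing from this excerpt. The sketch below shows the usual Boto2 workaround of passing ReleaseLabel through run_jobflow's api_params argument; the region, cluster name, log bucket, roles, and instance types are illustrative assumptions.

import boto.emr

# Connect to EMR in the desired region (region name is an assumption).
conn = boto.emr.connect_to_region('us-east-1')

cluster_id = conn.run_jobflow(
    name='my-emr-cluster',               # illustrative name
    log_uri='s3://my-bucket/emr-logs/',  # illustrative log bucket
    ec2_keyname='my-keypair',
    master_instance_type='m3.xlarge',
    slave_instance_type='m3.xlarge',
    num_instances=3,
    keep_alive=True,
    job_flow_role='EMR_EC2_DefaultRole',
    service_role='EMR_DefaultRole',
    # ami_version='3.11.0',              # commented out: ReleaseLabel picks the AMI
    api_params={
        'ReleaseLabel': 'emr-4.2.0',
        # Boto2 fills in a default HadoopVersion when no AMI version is given;
        # setting an api_params entry to None removes it from the request.
        'Instances.HadoopVersion': None,
        # 'Instances.Ec2SubnetId': 'subnet-xxxxxxxx',  # if there is no default VPC
    },
)

print(cluster_id)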

I have commented out the AMI version because ReleaseLabel will pick the AMI version automatically. The program above will print the cluster ID to the terminal.
Sometimes you might get an error saying "No Default VPC found." This is a network-related issue: in that case you need to specify a subnet ID for the EMR cluster, and then you don't need to specify an availability zone.