My CLRS Study on GitHub

My study and practice of algorithms from CLRS, committed on GitHub:

https://github.com/sandeeppalakkal/algorithms_clrs

I’m only beginning and it is incomplete.

I’m a little over-ambitious. Let’s see how much I can progress!

Posted in Uncategorized

Installation of Caffe on CPU Only [without GPU]

I installed Caffe on my laptop. Installation and testing took some time but were easy, thanks to the plentiful help available on the internet. The following is just a summary of the steps I followed on my machine.

Install Pre-Requisites

# General Dependencies 
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev 

# sudo apt-get install libopencv-dev #I already have 3.1.0
sudo apt-get install libhdf5-serial-dev protobuf-compiler 

# Install Boost Library
sudo apt-get install --no-install-recommends libboost-all-dev 

# Additional Dependencies
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev

Install OpenBLAS

# Download the development version of OpenBLAS
git clone https://github.com/xianyi/OpenBLAS.git
cd OpenBLAS
make FC=gfortran
sudo make PREFIX=/usr/local/ install

Compile and Install Caffe

# Download Caffe
git clone http://github.com/BVLC/caffe.git
cd caffe
# Copy the given example make-config file
cp Makefile.config.example Makefile.config

Edit Makefile.config:

  1. Set CPU_ONLY := 1
  2. Set BLAS := open
  3. Set OPENCV_VERSION := 3 (I’m using 3.1.0)
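For reference, after the edits above the relevant lines of Makefile.config look something like this (the OpenCV version reflects my setup; your values may differ):

```makefile
# Makefile.config (excerpt) -- CPU-only build with OpenBLAS and OpenCV 3
CPU_ONLY := 1
BLAS := open
OPENCV_VERSION := 3
```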

I had installed OpenCV in a non-default path, but the Caffe Makefile searches only the default path. To find the actual path, we can use pkg-config (provided we appropriately followed the instructions given in [4] when installing OpenCV). So, add the following line to the Makefile so that the compiler picks up the OpenCV flags:

COMMON_FLAGS += $(shell pkg-config opencv --cflags)

Now build and test Caffe.

# Build Caffe and run the tests
make all
make test
make runtest

Add Python Support

make pycaffe
# Quote the string so $PYTHONPATH is expanded when the shell starts, not now
echo 'export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH' >> ~/.bashrc

Test

Test Google's Deep Dream as given in [1]. The result is shown below:

[Image: Google's Deep Dream produced by Caffe on my laptop :)]

References

  1. http://hanzratech.in/2015/07/27/installing-caffe-on-ubuntu.html
  2. http://caffe.berkeleyvision.org/installation.html
  3. https://hunseblog.wordpress.com/2014/09/15/installing-numpy-and-openblas/
  4. https://prateekvjoshi.com/2013/10/18/package-opencv-not-found-lets-find-it/
Posted in Uncategorized

Valgrind for Memory Profiling in Linux

If you’ve written a program, you might be interested to check whether it has any memory leaks, how much memory it actually consumes, and so on. In other words, you want to do memory profiling. A popular tool for this in Linux is Valgrind. Valgrind provides several tools that serve different purposes; the list of tools and their uses, copied from the Valgrind official website [1], is given at the end of this post.

I used Valgrind only to profile memory consumption: the actual heap and stack sizes the code occupies in RAM. This can be achieved with the Valgrind tool Massif [2]. The following is a quick summary of how you can use it.

  • Installation: sudo apt-get install valgrind [3].
  • To use Valgrind, the tool must be specified with the --tool option (e.g., --tool=massif).
  • Massif samples the memory usage of the code several times during execution. Sampling can be uniform in time (the default) or indexed by the memory allocated to the code. This is chosen with --time-unit=i for time and --time-unit=B for memory (B stands for bytes).
  • By default, Massif measures only heap usage. To also record stack usage, use the option --stacks=yes.
  • If you want all the memory allocated to the code, including shared-library memory, low-level memory calls, etc., use the option --pages-as-heap=yes. When this option is used, stack memory is also recorded; hence this option cannot be combined with --stacks=yes.
  • The result of Massif is written to a file massif.out.%p, where %p is the process ID the operating system assigned to the program at run time (note that the process ID differs from run to run). This output file can be visualized using ms_print massif.out.%p, and the output is easy to interpret. See [2] for more details.
  • Example: I have the following code, which I call example.cpp.
    #include <iostream>
    using namespace std;

    const int N = 100000;
    int main()
    {
        int a[N];                 // 400,000 bytes on the stack
        int *b = new int[N];      // 400,000 bytes on the heap
        a[N - 1] = 0;             // touch the array (a[N] would be out of bounds)
        delete[] b;
        return 0;
    }
    • I compile it with g++ -g -o example example.cpp. (The -g option helps Valgrind pinpoint the exact lines in the code where heap allocations happen.)
    • In the command prompt, I run
      valgrind --tool=massif --time-unit=B --stacks=yes ./example
    • The result is written in massif.out.4574 (4574 is the process id).
    • View this output file using ms_print massif.out.4574. The result is given below.
    • Note that the code allocates 100,000 ints (variable a) on the stack and 100,000 ints (variable b) on the heap, i.e., 400,000 bytes on the stack and 400,000 bytes on the heap. You can verify this at snapshot 63 in the output below. Now check snapshot 65: by the time execution reaches this point, the program has freed the heap, so heap usage drops to 0 while the stack remains the same. Next, the program prepares to exit and the stack is emptied as well. Snapshots before 60 correspond to program start-up (loading into RAM, etc.).
--------------------------------------------------------------------------------
Command:            ./example
Massif arguments:   --time-unit=B --stacks=yes
ms_print arguments: massif.out.4574
--------------------------------------------------------------------------------

    KB
781.5^                                                      ########
     |                                                      #
     |                                                      #
     |                                                      #
     |                                                      #
     |                                                      #
     |                                                      #
     |                                                      #
     |                                                      #
     |                                                      #
     |                                             @@@@@@@@@#       :::::::::
     |                                             @        #       :
     |                                             @        #       :
     |                                             @        #       :
     |                                             @        #       :
     |                                             @        #       :
     |                                             @        #       :
     |                                             @        #       :
     |                                             @        #       :
     |                                             @        #       :
   0 +----------------------------------------------------------------------->MB
     0                                                                   3.204

Number of snapshots: 68
 Detailed snapshots: [1, 9, 23, 37, 41, 45, 50, 60, 62, 64 (peak)]

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
  0              0                0                0             0            0
  1         39,412            2,580                0             0        2,580
00.00% (0B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
  2         68,904              944                0             0          944
  3         95,600            1,168                0             0        1,168
  4        124,656              960                0             0          960
  5        150,740            1,124                0             0        1,124
  6        172,264            1,104                0             0        1,104
  7        216,420            1,124                0             0        1,124
  8        244,620            1,124                0             0        1,124
  9        276,204            1,124                0             0        1,124
00.00% (0B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 10        307,788            1,124                0             0        1,124
 11        352,520            1,104                0             0        1,104
 12        384,104            1,104                0             0        1,104
 13        415,688            1,104                0             0        1,104
 14        447,272            1,104                0             0        1,104
 15        478,788              788                0             0          788
 16        506,196              788                0             0          788
 17        537,780              788                0             0          788
 18        564,852              788                0             0          788
 19        602,480            1,104                0             0        1,104
 20        647,600            1,104                0             0        1,104
 21        692,720            1,104                0             0        1,104
 22        737,840            1,104                0             0        1,104
 23        782,960            1,104                0             0        1,104
00.00% (0B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 24        828,080            1,104                0             0        1,104
 25        850,640            1,104                0             0        1,104
 26        873,200            1,104                0             0        1,104
 27        895,760            1,104                0             0        1,104
 28        925,812              788                0             0          788
 29        948,624              960                0             0          960
 30        982,824            1,296                0             0        1,296
 31      1,016,988              980                0             0          980
 32      1,039,800              768                0             0          768
 33      1,062,704            1,104                0             0        1,104
 34      1,085,472            1,296                0             0        1,296
 35      1,119,636              980                0             0          980
 36      1,142,448              768                0             0          768
 37      1,165,352            1,104                0             0        1,104
00.00% (0B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 38      1,188,120            1,296                0             0        1,296
 39      1,222,284              980                0             0          980
 40      1,245,096              768                0             0          768
 41      1,279,352            1,168                0             0        1,168
00.00% (0B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 42      1,313,524              748                0             0          748
 43      1,347,756              748                0             0          748
 44      1,370,428            1,108                0             0        1,108
 45      1,393,340              788                0             0          788
00.00% (0B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 46      1,427,424            1,296                0             0        1,296
 47      1,450,200            1,112                0             0        1,112
 48      1,473,084              980                0             0          980
 49      1,495,760              896                0             0          896
 50      1,517,372              876                0             0          876
00.00% (0B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 51      1,538,932              612                0             0          612
 52      1,560,540              612                0             0          612
 53      1,582,100              612                0             0          612
 54      1,603,708              612                0             0          612
 55      1,625,232              736                0             0          736
 56      1,646,868              612                0             0          612
 57      1,668,560              856                0             0          856
 58      1,690,084              396                0             0          396
 59      1,711,640            1,032                0             0        1,032
 60      1,733,240              904                0             0          904
00.00% (0B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 61      2,134,280          400,192                0             0      400,192
 62      2,134,404          400,316                0             0      400,316
00.00% (0B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 63      2,536,000          800,264          400,000             8      400,256
 64      2,536,000          800,264          400,000             8      400,256
49.98% (400,000B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->49.98% (400,000B) 0x80485F3: main (example.cpp:8)

--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 65      2,937,776          400,248                0             0      400,248
 66      3,337,872              152                0             0          152
 67      3,359,496              936                0             0          936

Tools Available in Valgrind (copied from [1]):

  • Memcheck is a memory error detector. It helps you make your programs, particularly those written in C and C++, more correct.
  • Cachegrind is a cache and branch-prediction profiler. It helps you make your programs run faster.
  • Callgrind is a call-graph generating cache profiler. It has some overlap with Cachegrind, but also gathers some information that Cachegrind does not.
  • Helgrind is a thread error detector. It helps you make your multi-threaded programs more correct.
  • DRD is also a thread error detector. It is similar to Helgrind but uses different analysis techniques and so may find different problems.
  • Massif is a heap profiler. It helps you make your programs use less memory.
  • DHAT is a different kind of heap profiler. It helps you understand issues of block lifetimes, block utilisation, and layout inefficiencies.
  • SGcheck is an experimental tool that can detect overruns of stack and global arrays. Its functionality is complementary to that of Memcheck: SGcheck finds problems that Memcheck can’t, and vice versa.
  • BBV is an experimental SimPoint basic block vector generator. It is useful to people doing computer architecture research and development.

References:

[1] http://valgrind.org/

[2] http://valgrind.org/docs/manual/ms-manual.html

[3] https://wiki.ubuntu.com/Valgrind

Posted in R Programming

Radial Basis Function Neural Networks

In this post, I am only recording what I learned today about RBFNs. References are given below.

  • RBFN was first introduced in [1].
  • It was inspired by the fact that neurons in the visual cortex of the human brain respond to small, local regions of the visual field. RBFN imitates this local response characteristic by using radial basis functions, which essentially measure some kind of distance between neighbouring points in a vector space.
  • RBFN has three layers: an input layer, a hidden layer and an output layer.
  • The hidden layer has neurons implementing a radial basis function (e.g., a Gaussian), which has two parameters: a center (mean) and a spread (variance).
  • The output layer is a linear machine (a weighted sum of the outputs of the hidden layer).
  • So, an RBFN can implement any (nonlinear) function lying in the vector space (of functions) spanned by its radial basis functions!
  • RBFNs are used in classification (e.g., handwritten digits).
  • There are many ways to train an RBFN. Commonly, the training is done in three stages. Assume the number of radial basis functions to be used has been decided (this decision depends on the problem and the data).
    • In the first stage, the center of each radial basis function is determined, by random selection of centers from the data, k-means clustering, vector quantization, etc. The variances of the radial basis functions are also determined in this stage.
    • In the second stage, the weights of the third layer (the linear sum) are determined using matrix inversion, a pseudo-inverse, or gradient descent.
    • In the third stage, the parameters are further optimized using a back-propagation step.
  • A very good paper to read to understand more about RBFN and training schemes is [2].
  • Wikipedia gives a fast introduction to RBFN [3].
  • [4] provides a good tutorial with very good illustrations and a very good practical example, along with MATLAB code.
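As a concrete illustration of the first two training stages, here is a minimal pure-Python sketch. The 1-D sine target, the choice of one center per data point and the unit spread are my own illustrative assumptions, not taken from the references:

```python
import math

def rbf(x, c, sigma):
    """Gaussian radial basis function centred at c with spread sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def solve(A, y):
    """Solve the square linear system A w = y by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivoting
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

# Stage 1: choose centers (here: the data points themselves) and a spread.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.sin(x) for x in xs]   # samples of the target function
centers, sigma = xs, 1.0

# Stage 2: solve for the output-layer weights (exact interpolation).
Phi = [[rbf(x, c, sigma) for c in centers] for x in xs]
w = solve(Phi, ys)

def rbfn(x):
    """Evaluate the trained network: a weighted sum of radial basis functions."""
    return sum(wi * rbf(x, c, sigma) for wi, c in zip(w, centers))

print(rbfn(1.0), math.sin(1.0))
```

With one center per training point, the second stage reduces to solving a square linear system, so the network interpolates the training data exactly; in practice fewer centers are used and the weights come from a pseudo-inverse or gradient descent, as described above.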

References

[1] D. Broomhead and D. Lowe, “Multivariable functional interpolation and adaptive      networks”, Complex Systems, 2, 321–355, 1988. [PDF]

[2] F. Schwenker, H. A. Kestler, G. Palm, “Three learning phases for radial-basis-function networks”. Neural Networks, 14, 439–458, 2001. [PDF]

[3] https://en.wikipedia.org/wiki/Radial_basis_function_network.

[4] https://chrisjmccormick.wordpress.com/2013/08/15/radial-basis-function-network-rbfn-tutorial/

Posted in Machine Learning, Neural Networks

Torch in Ubuntu 14.04

  • Torch is an open-source machine learning library.
  • It provides extensive tools for deep learning.
  • Its scripting language is Lua, run on LuaJIT.
  • It can be interfaced with C through LuaJIT.
  • It has a C/CUDA backend; hence it is fast and efficient.

Installing Torch in Ubuntu [3]

  • Clone the Torch distribution into ~/torch, install the dependencies and run the installer (see [3]): 

    git clone https://github.com/torch/distro.git ~/torch --recursive
    cd ~/torch
    bash install-deps
    ./install.sh

  • If a 32-bit OS is used, install with the following command instead 

    TORCH_LUA_VERSION=LUA52 ./install.sh

  • Refresh bashrc 

    source ~/.bashrc

  • To uninstall torch, just remove the folder 

    rm -rf ~/torch

  • To install a new package (e.g., image) and to list installed packages 

    luarocks install image

    luarocks list

References

[1] http://torch.ch/

[2] https://en.wikipedia.org/wiki/Torch_(machine_learning)

[3] http://torch.ch/docs/getting-started.html

Posted in Machine Learning

Descriptors

In the last post, I talked about HOG, a very successful image feature used in object detection and identification. Closely related are the binary descriptors, which encode local characteristics of the image intensity values around a key point in a binary code. Codes computed at two locations, in the same image or in different images, can be matched to find similarities (like template matching). There are quite a few good descriptors now: BRIEF [5], ORB [6], BRISK [7], FREAK [8]. They are all related to the Local Binary Pattern (LBP) [3,4], which is a particular case of the Texture Spectrum model [1,2].

The binary descriptors are recent advances (well, not so recent anymore!), and their research was inspired by the success of SIFT [9] and SURF [10], which are patch descriptors, again used to match similar key points across images/locations. However, SIFT and SURF are slow and patent protected.

A good introduction to the above descriptors can be found in Gil’s computer vision blog [11].
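Since LBP underlies all of these binary descriptors, a tiny pure-Python sketch may help make the idea concrete. The 3x3 patch and the clockwise bit ordering here are my own illustrative choices; real implementations fix their own neighbour ordering:

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for pixel (r, c).
    Each neighbour at least as bright as the centre contributes one bit,
    read clockwise starting from the top-left neighbour."""
    centre = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                  img[r][c+1],   img[r+1][c+1], img[r+1][c],
                  img[r+1][c-1], img[r][c-1]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= centre:
            code |= 1 << bit
    return code

# A 3x3 patch: only the bright top row sets bits (the first three).
patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
print(lbp_code(patch, 1, 1))  # 0b00000111 = 7
```

The descriptors above elaborate on this idea: they compare many sampled point pairs around a key point instead of the 8 immediate neighbours, producing longer codes that can be matched cheaply with Hamming distance.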

Deep Learning

Understanding the descriptors and their applications is good. But the most recent developments in the literature indicate a different trend. The descriptors described above are all hand-designed; that is, they were discovered or designed with the help of years of experience and intuition. Recent developments in deep learning networks, on the other hand, give us a way to intelligently automate this process: the machine itself learns the relevant features for detecting and identifying objects, with very little human intervention! That is what deep learning and deep learning networks are all about. I’ll update as I progress through the deep learning literature. However, I would like to mention that a good overview/starting point may be Prof. Yoshua Bengio‘s monograph on deep learning [12]. You can also search for online courses or YouTube lectures on deep learning.

References

[1] DC. He and L. Wang (1990), “Texture Unit, Texture Spectrum, And Texture Analysis”, Geoscience and Remote Sensing, IEEE Transactions on, vol. 28, pp. 509 – 512.

[2] L. Wang and DC. He (1990), “Texture Classification Using Texture Spectrum”, Pattern Recognition, Vol. 23, No. 8, pp. 905 – 910.

[3] T. Ojala, M. Pietikäinen, and D. Harwood (1994), “Performance evaluation of texture measures with classification based on Kullback discrimination of distributions”, Proceedings of the 12th IAPR International Conference on Pattern Recognition (ICPR 1994), vol. 1, pp. 582 – 585.

[4] T. Ojala, M. Pietikäinen, and D. Harwood (1996), “A Comparative Study of Texture Measures with Classification Based on Feature Distributions”, Pattern Recognition, vol. 29, pp. 51-59.

[5] Calonder, Michael, et al. “BRIEF: Binary robust independent elementary features.” Computer Vision–ECCV 2010. Springer Berlin Heidelberg, 2010. 778-792.‏

[6] Rublee, Ethan, et al. “ORB: an efficient alternative to SIFT or SURF.” Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011.‏

[7] Leutenegger, Stefan, Margarita Chli, and Roland Y. Siegwart. “BRISK: Binary robust invariant scalable keypoints.” Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011.‏

[8] Alahi, Alexandre, Raphael Ortiz, and Pierre Vandergheynst. “FREAK: Fast retina keypoint.” Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012.‏

[9] Lowe, David G. “Object recognition from local scale-invariant features.”Computer vision, 1999. The proceedings of the seventh IEEE international conference on. Vol. 2. Ieee, 1999.‏

[10] Bay, Herbert, Tinne Tuytelaars, and Luc Van Gool. “Surf: Speeded up robust features.” Computer Vision–ECCV 2006. Springer Berlin Heidelberg, 2006. 404-417.‏

[11] https://gilscvblog.wordpress.com/2013/08/18/a-short-introduction-to-descriptors/

[12] http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf

Posted in Computer Vision

Histogram of Oriented Gradients (HOG)

Histogram of oriented gradients (HOG) is a very successful feature used in object detection and recognition algorithms. Introduced by Dalal and Triggs in 2005 [1], it has found many applications, including face detection, facial landmark detection (e.g., SDM) and human detection. Combined with SVMs or deep neural networks, HOG features are very powerful and among the most used in the recent literature. Dalal and Triggs’s paper has more than 13,000 citations on Google Scholar at the time of writing.

The original paper can be found here [1]. Apart from the original paper, there are other, easier explanations available [2–4], a detailed talk by Dalal [5] and a short lecture on HOG by Prof. Mubarak Shah [6]. An implementation is available from VLFeat [7, 8], which can be called from MATLAB or C programs.
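To make the core idea concrete, here is a minimal Python sketch of the per-cell orientation histogram. The 9 unsigned-orientation bins follow the original paper, but the magnitude-weighted hard binning is a simplification of my own: Dalal and Triggs interpolate votes between neighbouring bins, and full HOG also normalizes histograms over blocks of cells:

```python
import math

def cell_histogram(gx, gy, n_bins=9):
    """Orientation histogram for one HOG cell: each pixel votes for the
    bin of its (unsigned, 0-180 degree) gradient orientation, weighted
    by its gradient magnitude."""
    hist = [0.0] * n_bins
    bin_width = 180.0 / n_bins
    for dx_row, dy_row in zip(gx, gy):
        for dx, dy in zip(dx_row, dy_row):
            mag = math.hypot(dx, dy)                        # gradient magnitude
            ang = math.degrees(math.atan2(dy, dx)) % 180.0  # unsigned orientation
            hist[int(ang // bin_width) % n_bins] += mag
    return hist

# A cell with purely horizontal gradients: all votes land in bin 0.
gx = [[1.0, 2.0], [1.0, 2.0]]
gy = [[0.0, 0.0], [0.0, 0.0]]
print(cell_histogram(gx, gy))  # all 6 units of magnitude end up in bin 0
```

The full descriptor concatenates such histograms over a dense grid of cells, which is what makes HOG robust to small shifts while staying sensitive to local shape.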

[1] N. Dalal and B. Triggs, “Histogram of Oriented Gradients for Human Detection”, CVPR 2005. [PDF]

[2] https://chrisjmccormick.wordpress.com/2013/05/09/hog-person-detector-tutorial/

[3] http://www.pyimagesearch.com/2014/11/10/histogram-oriented-gradients-object-detection/

[4] http://farshbaf.net/en/artificial-intelligence/blog/hog-matlab-implementation

[5] Histogram of Oriented Gradients (HOG) for Object Detection in Images

[6] https://www.youtube.com/watch?v=0Zib1YEE4LU

[7] http://www.vlfeat.org/overview/hog.html

[8] http://www.vlfeat.org/api/hog_8c.html

Posted in Computer Vision