Monday, June 20, 2011

Speeding up Python (NumPy, Cython, and Weave)

The high-level nature of Python makes it very easy to program, read, and reason about code. Many programmers report being more productive in Python. For example, Robert Kern once told me that "Python gets out of my way" when I asked him why he likes Python. Others express it as "Python fits your brain." My experience resonates with both of these comments.

It is not rare, however, to need to do many calculations over a lot of data. No matter how fast computers get, there will always be cases where you still need the code to be as fast as you can get it. In those cases, I first reach for NumPy which provides high-level expressions of fast low-level calculations over large arrays. With NumPy's rich slicing and broadcasting capabilities, as well as its full suite of vectorized calculation routines, I can quite often do the number crunching I am trying to do with very little effort.
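As a tiny illustration (my example, not part of the original benchmark), here is the kind of loop-free expression NumPy makes possible:

import numpy as np

x = np.linspace(0.0, 1.0, 5)
# Broadcasting: a column vector plus a row vector produces a full 5x5
# table of pairwise sums, with all of the looping done in compiled code.
table = x[:, None] + x[None, :]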

Even with NumPy's fast vectorized calculations, however, there are still times when either the vectorization is too complex, or it uses too much memory. It is also sometimes just easier to express the calculation with a simple loop. For those parts of the application, there are two general approaches that work really well to get you back to compiled speeds: weave or Cython.

Weave is a sub-package of SciPy that allows you to inline arbitrary C or C++ code into an extension module that is dynamically loaded into Python and executed in-line with the rest of your Python code. The code is compiled and linked at run-time the very first time it is executed. The compiled code is then cached on disk and made available for immediate use the next time it is called.

Cython is an extension-module generator for Python that allows you to write Python-looking code (Python syntax with type declarations) that is then pre-compiled to an extension module for later dynamic linking into the Python run-time. Cython translates this Python-looking code into "not-for-human-eyes" C that compiles to reasonably fast machine code. Cython has been gaining a lot of momentum in recent years as people who have never learned C can use it to get C speeds exactly where they want them, starting from working Python code. Even though I feel quite comfortable in C, my appreciation for Cython has been growing over the past few years, and I am now an avid supporter of the Cython community and like to help it whenever I can.

Recently I re-did the same example that Prabhu Ramachandran first created several years ago, which is reported here. This example solves Laplace's equation over a 2-d rectangular grid using a simple iterative method. The code finds a two-dimensional function, u, where ∇²u = 0, given some fixed boundary conditions.
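For reference (this step is implicit in the original example), the update rule used in every implementation below comes from discretizing the Laplacian with second-order central differences on the grid,

    (u[i+1,j] - 2*u[i,j] + u[i-1,j])/dx2 + (u[i,j+1] - 2*u[i,j] + u[i,j-1])/dy2 = 0

and solving for u[i,j]:

    u[i,j] = ((u[i+1,j] + u[i-1,j])*dy2 + (u[i,j+1] + u[i,j-1])*dx2) / (2*(dx2 + dy2))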

Pure Python Solution

The pure Python solution is the following:

from numpy import zeros
from scipy import weave

dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy

def py_update(u):
    nx, ny = u.shape
    for i in xrange(1,nx-1):
        for j in xrange(1, ny-1):
            u[i,j] = ((u[i+1, j] + u[i-1, j]) * dy2 +
                      (u[i, j+1] + u[i, j-1]) * dx2) / (2*(dx2+dy2))

def calc(N, Niter=100, func=py_update, args=()):
    u = zeros([N, N])
    u[0] = 1
    for i in range(Niter):
        func(u, *args)
    return u

This code takes a very long time to converge to the correct solution. For a 100x100 grid, visually indistinguishable convergence occurs after about 8000 iterations. The pure Python solution took an estimated 560 seconds (about 9 minutes) to finish (timed with IPython's %timeit magic command).
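The measurement itself was done along these lines (a sketch of an IPython session; the exact invocation is my assumption):

# Time a smaller number of iterations and scale up; the "estimated"
# pure-Python figure above was obtained this way.
%timeit calc(100, Niter=100)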

NumPy Solution

Using NumPy, we can speed this code up significantly by using slicing and vectorized (automatic looping) calculations that replace the explicit loops in the Python-only solution. The NumPy update code is:

def num_update(u):
    u[1:-1,1:-1] = ((u[2:,1:-1]+u[:-2,1:-1])*dy2 + 
                    (u[1:-1,2:] + u[1:-1,:-2])*dx2) / (2*(dx2+dy2))

Using num_update as the calculation function reduced the time for 8000 iterations on a 100x100 grid to only 2.24 seconds (a 250x speed-up).   Such speed-ups are not uncommon when using NumPy to replace Python loops where the inner loop is doing simple math on basic data-types.

Quite often it is sufficient to stop there and move on to another part of the code-base.  Even though you might be able to speed up this section of code more, it may not be the critical path anymore in your over-all problem.  Programmer effort should be spent where more benefit will be obtained.  Occasionally, however, it is essential to speed-up even this kind of code.

Even though NumPy implements the calculations at compiled speeds, it is possible to get even faster code.   This is mostly because NumPy needs to create temporary arrays to hold intermediate simple calculations in expressions like the average of adjacent cells shown above.  If you were to implement such a calculation in C/C++ or Fortran, you would likely create a single loop with no intermediate temporary memory allocations and perform a more complex computation at each iteration of the loop.
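To make the temporary-array point concrete, the one-line vectorized update above behaves roughly like the following expanded version (my illustration); each named intermediate is a separate (N-2)x(N-2) allocation:

def num_update_expanded(u):
    # Roughly what NumPy does internally for num_update; every
    # intermediate result below is a freshly allocated temporary array.
    t1 = u[2:, 1:-1] + u[:-2, 1:-1]
    t2 = t1 * dy2
    t3 = u[1:-1, 2:] + u[1:-1, :-2]
    t4 = t3 * dx2
    t5 = t2 + t4
    u[1:-1, 1:-1] = t5 / (2*(dx2 + dy2))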

In order to get an optimized version of the update function, we need a machine-code implementation that Python can call. Of course, we could do this manually by writing the inner loop in a compiled language and using Python's extension facilities. More simply, we can use Cython or Weave, which do most of the heavy lifting for us.

Cython solution

Cython is an extension-module writing language that looks a lot like Python except for optional type declarations for variables.  These type declarations allow the Cython compiler to replace generic, highly dynamic Python code with specific and very fast compiled code that is then able to be loaded into the Python run-time dynamically.  Here is the Cython code for the update function:

cimport numpy as np

def cy_update(np.ndarray[double, ndim=2] u, double dx2, double dy2):
    cdef unsigned int i, j
    for i in xrange(1,u.shape[0]-1):
        for j in xrange(1, u.shape[1]-1):
            u[i,j] = ((u[i+1, j] + u[i-1, j]) * dy2 +
                      (u[i, j+1] + u[i, j-1]) * dx2) / (2*(dx2+dy2))

This code looks very similar to the original Python-only implementation except for the additional type-declarations.   Notice that even NumPy arrays can be declared with Cython and Cython will correctly translate Python element selection into fast memory-access macros in the generated C code.   When this function was used for each iteration in the inner calculation loop,  the 8000 iterations on a 100x100 grid took only 1.28 seconds.
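Plugging the compiled function into the calc driver from the pure-Python section looks like this (a sketch; the module name laplace matches the setup script shown next, but the exact driver call is my assumption):

from laplace import cy_update

u = calc(100, Niter=8000, func=cy_update, args=(dx2, dy2))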

For completeness, the following shows the contents of the setup.py file that was also created in order to produce the compiled module where the cy_update function lives.

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

import numpy

ext = Extension("laplace", ["laplace.pyx"],
    include_dirs = [numpy.get_include()])
      cmdclass = {'build_ext': build_ext})

The extension module was then built using the command: python setup.py build_ext --inplace

Weave solution

An older, but still useful, approach to speeding up code is to use weave to embed a C or C++ implementation of the algorithm directly into the Python program. Weave is a module that surrounds the bit of C or C++ code you write with a template to create, on the fly, an extension module that is compiled and then dynamically loaded into the Python run-time. Weave has a caching mechanism, so a different code string or different types of inputs lead to a new extension module being created, compiled, and loaded. The first time code using weave runs, the compilation has to take place. Subsequent runs of the same code load the cached extension module and run the machine code.

For this particular case, an update routine using weave looks like:

def weave_update(u):
    code = """
    int i, j;
    for (i=1; i<Nu[0]-1; i++) {
       for (j=1; j<Nu[1]-1; j++) {
           U2(i,j) = ((U2(i+1, j) + U2(i-1, j))*dy2 +
                      (U2(i, j+1) + U2(i, j-1))*dx2) / (2*(dx2+dy2));
       }
    }
    """
    weave.inline(code, ['u', 'dx2', 'dy2'])

The inline function takes a string of C or C++ code plus a list of variable names that will be pushed from the Python namespace into the compiled code. It then either loads and executes a function from a previously-created extension module (if that string and those variable types have been seen before) or creates a new extension module before compiling, loading, and executing the code.
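A minimal stand-alone illustration of this variable passing (adapted from the standard weave examples, not part of the original benchmark):

from scipy import weave

a, b = 1, 2
# 'a' and 'b' are looked up in the calling namespace and made available
# to the C++ code; assigning to return_val sets the Python return value.
c = weave.inline("return_val = a + b;", ['a', 'b'])   # c == 3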

Notice that weave defines special macros so that U2 allows referencing the elements of the 2-d array u using simple expressions. Weave also defines the special C array of integers Nu to contain the shape of the u array. There are similar macros for accessing the elements of u had it been a 1-, 3-, or 4-dimensional array (U1, U3, and U4). Although not used in this snippet of code, the C array Su containing the strides in each dimension and the integer Du giving the number of dimensions of the array are also defined.

Using the weave_update function, 8000 iterations on a 100x100 grid took only 1.02 seconds. This was the fastest implementation of all of the methods in the original post (the updated Cython version described below edges it out). Knowing a little C and having a compiler on hand can certainly speed up critical sections of code in a big way.
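Driving the weave version with the same calc function would look like this (a sketch; dx2 and dy2 are found by weave.inline as module-level globals, so no extra arguments are needed):

u = calc(100, Niter=8000, func=weave_update)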

Faster Cython solution (Update)

After I originally published this post, I received some great feedback in the comments section that encouraged me to add some compiler directives to the Cython solution in order to get an even faster result. I was also reminded about pyximport and given example code to make it easier to use. Basically, by adding compiler directives that tell Cython to skip some checks at each iteration of the loop, I got it to generate even faster C code. To the top of my previous Cython code, I added a few lines:

#cython: boundscheck=False
#cython: wraparound=False

I then saved this new file as _laplace.pyx, and added the following lines to the top of the Python file that was running the examples:

import pyximport
import numpy as np
pyximport.install(setup_args={'include_dirs': [np.get_include()]})
from _laplace import cy_update as cy_update2

This provided an update function cy_update2 that resulted in the very fastest implementation (943 ms) for 8000 iterations of a 100x100 grid.


The following table summarizes the results, which were all obtained on a 2.66 GHz Intel Core i7 MacBook Pro with 8 GB of 1067 MHz DDR3 memory. The last column shows the run time relative to the NumPy implementation (smaller is faster).

Method          Time (sec)   Time relative to NumPy
Pure Python     560          250
NumPy           2.24         1.0
Cython          1.28         0.57
Weave           1.02         0.45
Faster Cython   0.94         0.42

Clearly when it comes to doing a lot of heavy number crunching, Pure Python is not really an option. However, perhaps somewhat surprisingly, NumPy can get you most of the way to compiled speeds through vectorization. In situations where you still need the last ounce of speed in a critical section, or when it either requires a PhD in NumPy-ology to vectorize the solution or it results in too much memory overhead, you can reach for Cython or Weave. If you already know C/C++, then weave is a simple and speedy solution. If, however, you are not already familiar with C then you may find Cython to be exactly what you are looking for to get the speed you need out of Python.


  1. Just out of curiosity, have you tried to disable bounds checking and wrap around for cython?

    cimport cython

    @cython.boundscheck(False)
    @cython.wraparound(False)
    def cy_update(...): ...


  2. Thanks for the writeup.
    There's also a numexpr project ( ) which claims to speed up numpy operations.
    It'll be good if you can add it to the mix to see how far we can go with only pure python user code

  3. pyximport provides a quick alternative to building with a setup.py script. For example, I put the cy_update function in _laplace.pyx. Then I imported it as follows:

    import pyximport
    pyximport.install(setup_args={'include_dirs': [np.get_include()]})
    from _laplace import cy_update

    The default destination for the builds is the directory .pyxbld in your home directory.

  4. pankaj,

    Thanks for the suggestion about numexpr. I definitely thought about numexpr and actually did do a numexpr example --- but in this case I did not get any speed up. In fact, it was slower than the NumPy example (took 4 seconds). Now, I didn't try to investigate if any configurations to the numexpr engine would speed that up. If you have any suggestions that would be very helpful.

  5. Pankaj,

    Here is my numexpr example that didn't work so well:

    import numexpr as ne

    def expr_update(u):
        bottom = u[2:,1:-1]
        top = u[:-2,1:-1]
        left = u[1:-1,2:]
        right = u[1:-1,:-2]
        u[1:-1,1:-1] = ne.evaluate("((bottom + top)*dy2 + "\
            "(left + right)*dx2) / (2*(dx2+dy2))")

  6. Hsy,

    Thanks for reminding me about the compiler directives. I should have remembered that one. I added

    #cython: boundscheck=False
    #cython: wraparound=False

    to the top of my file and ended up with a 943 ms solution for Cython. I will add that to the table.

  7. Eryksun,

    Thank you for the suggestion about pyximport. I had heard of it, but had not used it much. That is indeed a bit easier to explain.

  8. I updated the post to reflect the suggestions of Eryksun and Hsy. Thanks for the feedback!

  9. Hi Travis! Can you please run my Fortran 90 implementation on your computer?

    Here is my table using Aspire 1830T, Intel Core i7:

    Method          Time (sec)   Relative Speed
    NumPy           2.03         1
    Cython          1.25         0.61
    Fortran loop    0.47         0.23
    Fortran array   0.19         0.09

    Using gfortran 4.5.2 in Ubuntu Natty and the following optimizations:
    -O3 -march=native -ffast-math -funroll-loops

    So my Fortran array implementation is 6.5x faster than your slower Cython implementation.

  10. The reason I am asking is that my Fortran reimplementation of the *same* NumPy solution (i.e. using arrays instead of loops) is 10.6x faster. As such (if my benchmark is correct), your conclusion that NumPy can get you "most" of the way to compiled speed would be questionable, because it would be better to simply use Fortran, with NumPy-like programming, to get a 10x speedup with minimal effort.

    But maybe there is some hidden problem somewhere (i.e. some compiler options, lapack (?), who knows).

  11. I've just finished a 4-hour tutorial at EuroPython 2011 on High Performance Python; the slides are online:
    I covered Python, PyPy, Cython, NumPy (+Cython), NumExpr, ShedSkin, multiprocessing, ParallelPython and pyCUDA for the Mandelbrot problem.
    I'm in the process of writing up the training into a free ebook; it'll be on my blog ( ), all going well, within a week.

  12. Hi Travis! I'd like to point out that PyPy is very promising in terms of massively speeding up native Python and considerably speeding up Numpy.

    I ran your example with the native Python and Numpy update methods, and got the behavior you observe: the speedup is at least two orders of magnitude. Then I wrote a tiny wrapper class around Python lists to emulate 2D arrays, and ran it through PyPy 1.5. At 8000 iterations, it's roughly 2x slower than CPython+Numpy. That is an astounding improvement over native Python!

    There is an effort underway to port Numpy to PyPy, but it seems not enough communication is happening between PyPy and Numpy developers. I need Numpy for my job, and I would love to see Numpy incorporate support for PyPy! (I intend to help as well.) I think PyPy has made spectacular progress recently and is the future of Python.

  13. Forgot to mention - please see these blog posts for PyPy developers' efforts with Numpy:

  14. Thanks for reviving performance Python again. :)

    I wonder how well np_inline works. Given the trouble with weave from time to time, it seems like a simple alternative. I haven't used it but saw it on the scipy mailing lists:

    I'm also curious about Ondrej's benchmarks. In the past, with the original Performance Python article the speed difference with the modified weave/pyrex/fortran was not too much.

    The PyPy benchmarks are also very exciting.

    I think there is merit in actually spinning this off as a small project in itself where folks can contribute code and add to the list of benchmarks. Some form of a shootout. We could simply open up a small project on github for this? What do you think?


  15. I have created a new Github project called scipy/speed located here:

    There, I included a "modular" version of Ondrej's F90 example (compile with f2py). The standard looping construct gave similar results to Cython (0.93s).

    However, the "vectorized" F90 code gave the very fastest results, completing the 8000 iterations in 0.57s. This is indeed impressive. It looks like modern Fortran 90 is still the fastest way to compile vectorized expressions.

  16. Great article, and interesting about PyPy as I haven't followed it much but might. Also, how do you get your code to show up highlighted correctly above?


  17. Travis thx for this wonderful article ! I was just looking for ways to use Cython and Numpy together

    btw William Stein has a Sage worksheet that shows some of the more advanced Cython features with Numpy:

    So what licence is the code under? Can I use parts of it in a non-commercial tutorial?

  18. @Staffan: your link comes up with a "Notebook Bug". Could you please post a working one again?

  19. Nice post; my response is pretty late but may help people. You have a bug in most of your implementations, except the NumPy solution.
    The NumPy solution does something different from the rest: you can't actually do this update in place and get what you expect, since each iteration through the loop overwrites data that the next iteration expects to still be there. That is, the iteration that updates U2(i,j) ruins the input for the next iteration's U2(i,j-1).

    1. Micha, It's been a (very) long time since I did this kind of stuff and my memory is dim on the details. (And Varga's book on "Matrix Iterative Analysis" is sitting on my bookshelf at the office.)
      That being said, isn't the numpy version akin to a Jacobi over-relaxation (JOR) pass, while the other solutions are akin to a successive over-relaxation (SOR) pass?
      IFF that is the case (and my memory isn't failing me), then in fact the SOR passes *should* converge faster from a numerical analysis point of view, due to their asymptotic performance.
      It's true that means the numpy and other implementations are (in effect) using different algorithms, with the numpy being the slower performer.
      In other words, it may not be a bug, it may be a feature!
      Warning! People who are obsessed about this kind of stuff should definitely look it up in (someplace like) Varga's book, rather than trusting my imperfect memory.

    2. I noticed the same thing -- the NumPy version does Jacobi while the others do Gauss-Seidel. You could do red-black Gauss-Seidel in the pure NumPy version, but I don't think that it is possible to do pure Gauss-Seidel with NumPy. Also, if you look at the results, you'll see that the NumPy version converges slower.

      The timings are likely still fine, since the number of operations is the same, but the convergence will be worse for the NumPy version.

      I've been playing with this with the python C-API, ctypes, and f2py examples as well and f2py is the fastest.

  20. Thanks for your article. I'm curious to know what you guys think of the Julia language. I've slightly modified an iterative laplace implementation from here:

    The code is very similar to your first Python example, very easy, yet very fast!

    About 0.002367143 seconds (160096 bytes allocated), using an Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz, tested at:

    (google account needed)
