Friday, December 6, 2013

Why I promote conda

Anaconda users have been enjoying the benefits of conda for quickly and easily
managing their binary Python packages for over a year.  During that time conda
has also been steadily improving as a general-purpose package manager.  I
have recently been promoting the very nice things that conda can do for Python
users generally --- especially with the complex binary extensions to Python that
exist in the NumPy stack.   For example, it is very easy to create Python 3
and Python 2 environments on the same system and install
scikit-learn into each of them.   Normally, this process can be painful if you
do not have a suitable build environment, or don't want to wait for
compilation to succeed.
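
A minimal sketch of that workflow looks something like this (the environment names and Python versions are just illustrative, and the packages must be available in the channels you have configured):

    # create an isolated Python 3 environment containing scikit-learn and its dependencies
    conda create -n py3 python=3 scikit-learn
    # create a separate Python 2 environment next to it
    conda create -n py2 python=2 scikit-learn
    # switch between the two (on Linux and Mac OS X)
    source activate py3
    source activate py2

No compiler is involved at any point; conda just downloads prebuilt binaries and links them into each environment.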

Naturally, I sometimes get asked, "Why did you promote/write another
Python package manager (conda) instead of just contributing to the
standard pip and virtualenv?"  The Python packaging story is older and
more personal to me than you might think.  Python packaging has been a thorn
in my side personally since 1998 when I released my first Python extension
(called numpyio actually).  Since then, I've written and personally released
many, many Python packages (Multipack which became SciPy, NumPy, llvmpy,
Numba, Blaze, etc.).   There is nothing you want more as a package author than
users.  So, to make Multipack (SciPy), and then NumPy, available, I had to become a
packaging expert the hard way, through a lot of pain with the lack of
suitable tools for my (admittedly complex) task.

Along the way, I've suffered through believing that distutils,
setuptools, distribute, and pip/virtualenv would solve my actual
problem.  All of these tools provided some standardization (at least around what somebody
types at the command line to build a package) but no help in actually doing the
build and no real help in getting compatible binaries of things like SciPy
installed onto many users' machines.

I've personally made terrible software engineering mistakes because of the lack of
good package management.  For example, I allowed the pressure of "no ABI
changes" to severely hamper the progress of the NumPy API.  Instead of pushing
harder and breaking the ABI when necessary to get improvements into NumPy, I
buckled under the pressure and agreed to the requests coming mostly from NumPy
Windows users and froze the ABI.  I could empathize with people who would spend
days building their NumPy stack and literally become fearful of changing it.
From NumPy 1.4 to NumPy 1.7, the partial date-time addition caused various
degrees of brokenness and is part of why missing-data data-types have never
shown up in NumPy at all.   If conda had existed back then with standard
conda binaries released for different projects, there would have been almost
no problem at all.   That pressure would have largely disappeared.   Just
install the packages again --- problem solved for everybody (not just the
Linux users who had apt-get and yum).

Some of the problems with SciPy are also rooted in the lack of good packages
and package management.  SciPy, when we first released it in 2001, was
basically a distribution of multiple modules from Multipack, some new BLAS /
LAPACK and linear algebra wrappers and nascent plotting tools.  It was a SciPy
distribution masquerading as a single library.  Most of the effort spent was
a packaging effort (especially on Windows).  Since then, the scikits effort
has done a great job of breaking up the domain of SciPy into more manageable
chunks and providing a space for the community to grow.   This kind of
refactoring is only possible with good distributions and is really only
effective when you have good package management.   On Mac and Linux,
package managers exist --- on Windows, things like EPD, Anaconda, or C.
Gohlke's collection of binaries have been the only solution.

Through all of this work, I've cut my fingers and toes and sometimes face on
compilers, shared and static libraries on all kinds of crazy systems (AIX,
Windows NT, etc.).  I still remember the night I learned what it meant to have
ABI incompatibility between different compilers (try passing structs
such as complex numbers between a file compiled with MinGW and a library compiled with
Visual Studio).   I've been bitten more than once by unicode-width
incompatibilities, strange shared-library incompatibilities, and the vagaries
of how different compilers and run-times define the `FILE *` file pointer.

In fact, if you have not read "Linkers and Loaders", you should actually do
that right now as it will open your mind to that interesting limbo between
"developer-code" and "running process" overlooked by even experienced
developers.  I'm grateful Dave Beazley recommended it to me over 6 years ago.
Here is a link:  http://www.iecc.com/linker/

We in the scientific python community have had difficulty and a rocky
history with just waiting for the Python.org community to solve the
problem.  With distutils for example, we had to essentially re-write
most of it (as numpy.distutils) in order to support compilation of
extensions that needed Fortran-compiled libraries.  This was not an
easy task.  All kinds of other tools could have (and, in retrospect,
should have) been used.  Most of the design of distutils did not help
us in the NumPy stack at all.  In fact, numpy.distutils replaces most
of the innards of distutils but is still shackled by the architecture
and imperative approach to what should fundamentally be a declarative
problem.  We should have just used or written something like waf or
bento or cmake and encouraged its use everywhere.  However, we buckled
under the pressure of the distutils promise of "one right way to do
it" and "one-size fits all" solution that we all hoped for, but
ultimately did not get.  I appreciate the effort of the distutils
authors.  Their hearts were in the right place and they did provide a
useful solution for their use-cases.  It was just not useful for ours,
and we should not have tried to force the issue.  Not all code is
useful to everyone.  The real mistake was the Python community picking
a "standard" that was actually limiting for a sizeable set of users.
This was the real problem --- but it should be noted that this
"problem" is only because of the incredible success and therefore
influence of python developers and python.org.  With this influence, however,
comes a certain danger of limiting progress if all advances have to be
made via committee --- working out specifications instead of watching for
innovation and encouraging it.

David Cooke and many others finally wrestled numpy.distutils to the
point that the library does provide some useful functionality for
helping build extensions requiring NumPy.  Even after all that effort,
however, some in the Python community, who seem to have no idea of the
history of how these things came about, simply claim that setup.py
files that need numpy.distutils are "broken" because they import numpy
before "requiring" it.  To this, I reply that what is actually
broken is a design that does not have a declarative meta-data file
that describes dependencies and then a build process that creates the
environment needed before running any code to do the actual build.
This is what `conda build` does and it works beautifully to create any
kind of binary package you want from any list of dependencies you may
have.  Anything else is going to require all kinds of "bootstrap"
gyrations to fit into the square hole of a process that seems to
require that all things begin with the python setup.py incantation.
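
To make that concrete, here is roughly what a minimal conda recipe looks like (the package name, version, and URL below are hypothetical; the conda documentation lists the full set of supported keys):

    # meta.yaml --- the declarative description of the package and its dependencies
    package:
      name: mypkg                  # hypothetical package name
      version: "1.0"
    source:
      url: https://example.com/mypkg-1.0.tar.gz    # hypothetical source tarball
    requirements:
      build:
        - python
        - numpy                    # present in the environment before any build code runs
      run:
        - python
        - numpy

    # build.sh --- the imperative part, executed inside the already-prepared environment
    python setup.py install

Running `conda build` on the recipe directory then produces a binary conda package for the current platform.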

Therefore, you can't really address the problem of Python packaging without
addressing the core problems of trying to use distutils (at least for the
NumPy stack).  The problems for us in the NumPy stack started there and have
to be rooted out there as well.  This was confirmed for me at the first PyData
meetup at Google HQ, where several of us asked Guido what we can do to fix
Python packaging for the NumPy stack.   Guido's answer was to "solve the
problem ourselves".  We at Continuum took him at his word.  We looked at dpkg,
rpm, pip/virtualenv, brew, nixos, and 0installer, and used our past experience
with EPD.  We thought hard about the fundamental issues, and created the conda
package manager and conda environments.  We who have been working on this for
the past year have decades of Python packaging experience between us: me,
Peter Wang, Ilan Schnell, Bryan Van de Ven, Mark Wiebe, Trent Nelson, Aaron
Meurer, and now Andy Terrel are all helping improve things.  We welcome
contributions, improvements, and updates from anyone else as conda is BSD
licensed and completely open source and can be used and re-used by
anybody.  We've also recently made a mailing list
conda@continuum.io which is open to anyone to join and participate:
https://groups.google.com/a/continuum.io/forum/#!forum/conda

Conda pkg files are similar to .whl files except they are Python-agnostic.  A
conda pkg file is a bzipped tar file with an 'info' directory, and then
whatever other directory structure is created by the install process in
"prefix".   It's the equivalent of taking a file-system diff pre and post-
install and then tarring the result up.  It's more general than .whl files and
can support any kind of binary file.    Making conda packages is as simple as making a recipe for it.   We make a growing collection of public-domain, example recipes available to everyone and also encourage attachment of a conda recipe directory to every project that needs binaries.
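
For instance, listing the contents of a conda package (the file name and exact paths here are just illustrative) shows the 'info' metadata directory sitting next to the files that get merged into the environment prefix:

    $ tar -tjf numpy-1.7.1-py27_0.tar.bz2    # a conda package is just a bzipped tarball
    info/index.json                          # name, version, build string, and dependencies
    info/files                               # the list of files to link into an environment
    lib/python2.7/site-packages/numpy/...
    bin/f2py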

At the heart of conda package installation is the concept of environments.
Environments are like namespaces in Python -- but for binary packages.  Their
applicability is extensive.  We are using them within Anaconda and Wakari for
all kinds of purposes (from testing to application isolation to easy
reproducibility to supporting multiple versions of packages in different
scripts that are part of the same installation).  Truly, to borrow the famous
Tim Peters' quip: "Environments are one honking great idea -- let's do more of
those".  Rather than tacking this on after the fact like virtualenv does to
pip, OS-level environments are built-in from the beginning.  As a result,
every conda package is always installed into an environment.  There is a
default (root) environment if you don't explicitly specify another one.
Installation of a package is simply merging the unpacked binary into the union
of unpacked binaries already at the root-path of the environment.   If union
filesystems were better implemented in different operating systems, then each
environment would simply be a union of the untarred binary packages.  Instead
we accomplish the same thing with hard-linking, soft-linking, and (when
necessary) copying of files.

The design is simple, which helps it be easy to understand and easy to
mix with other ideas.  We don't easily see how to take these simple,
powerful ideas and adapt them to .whl and virtualenv, which are trying
to fit into a world created by distutils and setuptools.  It was
actually much easier to just write our own solution, create
hundreds of packages, make them available, and provide all the tools
to reproduce what we have done inside conda than to try to untangle
how to provide our solution in that world and potentially still not
quite get the result we want (which, it can be argued, is what happened
with numpy.distutils).

You can use conda to build your own distribution of binaries that
compete with Anaconda if you like.  Please do.  I would be completely
thrilled if every other Python distribution (python.org, EPD,
ActiveState, etc.) just used conda packages that they build and in so
doing helped improve the conda package manager.  I recognize that
conda emerged at the same time as the Anaconda distribution was
stabilizing and so there is natural confusion over the two.  So,
I will try to clarify: Conda is an open-source, general,
cross-platform package manager.  One could accurately describe it as a
cross-platform Homebrew written in Python.  Anyone can use the tool and
related infrastructure to build and distribute whatever packages they
want.

Anaconda is the collection of conda packages that we at Continuum provide for
free to everyone, based on a particular base Python we choose (which you can
download at http://continuum.io/downloads as Miniconda).  In the past it has
been some work to get conda working outside Miniconda or Anaconda because our
first focus was creating a working solution for our users.  We have been
fixing those minor issues and have now released a version of conda that can be
'pip installed'.   As conda has significant overlap with virtualenv in
particular, we are still working out kinks in the interoperability of these two
solutions.   But, it can and should all work together, and we fix issues as
quickly as we can identify them.

We also provide a service called http://binstar.org (register with beta-code
"binstar in beta") which allows you to host your own binary conda packages.
With this missing piece, you just tell people to point their conda
repositories to your collection -- and they can easily install everything you
want them to.  You can also build your own conda repositories and host them on
your own servers.  It all works, today, now -- for hundreds of thousands of
people.  In this context, Anaconda could be considered a "reference"
distribution and a proof of concept of how to use the conda package manager.
Wakari also uses the conda package manager at its core to share bundles.
Bundles are just conda packages (with a set of dependencies) and capture the
core problems associated with reproducible computing in a light-weight and
easily reproduced way.  We have made the tools available for *anyone* to
re-create this distribution pretty easily and compete with us.
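
Pointing conda at such a collection is just a matter of listing its URL in your channel configuration; for example (both URLs below are hypothetical):

    # ~/.condarc --- conda looks for packages in these repositories, in order
    channels:
      - https://conda.binstar.org/yourname       # a Binstar channel you publish to
      - https://packages.example.com/conda-repo  # a repository hosted on your own server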

It is very important to keep in mind that we created conda to solve
the problem of distributing an environment to end-users that allows
them to do advanced data analytics, scientific discovery, and general
engineering work.  Python has a chance to play a major role in this
space.  However, it is not the only player.  Other solutions exist in
the space we are targeting (SAS, Matlab, SPSS, and R).  We want Python
to dominate this space.  We could not wait for the packaging solution
we needed to evolve from the lengthy discussions that are on-going
which also have to untangle the history of distutils, setuptools,
easy_install, and distribute.  What we could do is solve our problem
and then look for interoperability and influence opportunities once we
had something that worked for our needs.   That is the approach we took,
and I'm glad we did.  We have a working solution now which benefits
hundreds of thousands of users (and could benefit millions more if
IT administrators recognized conda as an acceptable packaging approach
from others in the community).

We are going to keep improving conda until it becomes an obvious
solution for everyone: users, developers, and IT administrators alike.
We welcome additions and suggestions that allow it to interoperate
with anything else in the Python packaging space.   I do believe that the group of people working on Python packaging and Nick Coghlan in particular are doing a valuable service.  It's a very difficult job to take into account the history of Python packaging, fix all the little issues around it, *and* provide a binary distribution system that allows users to not have to think about packaging and distribution.    With our resources we did just the latter.   I admire those who are on the front lines of the former and look to provide as much context as I can to ensure that any future decisions take our use-cases into account.   I am looking forward to continuing to work with the community to reach future solutions that benefit everyone.

If you would like to see more detail about conda and how it can be used, here are some
resources:

Documentation: http://docs.continuum.io/conda/index.html
Talk at PyData NYC 2013:
 - Slides: https://speakerdeck.com/teoliphant/packaging-and-deployment-with-conda
 - Video: http://vimeo.com/79862018

Blog Posts:
 - http://continuum.io/blog/anaconda-python-3
 - http://continuum.io/blog/new-advances-in-conda
 - http://continuum.io/blog/conda

Mailing list:
 - conda@continuum.io
 - https://groups.google.com/a/continuum.io/forum/#!forum/conda

Wednesday, July 3, 2013

Thoughts after SciPy 2013 and a specific NumPy improvement

I attended a few days of SciPy 2013 and enjoyed interacting with the many old friends and many new friends that participate in this conference.   I thought the program committee did an excellent job of selecting talks, and there were more attendees this year, which also mirrors my experience with the PyData conference series, which sells out every time.     Andy Terrell, a NumFOCUS board member and researcher at the University of Texas, and Jonathan Rocher, an Enthought developer, were co-chairs of SciPy this year and did an excellent job of coordination.

Continuum Analytics, my new company, is the institutional sponsor of the PyData conference series and I know how much work it can be, so my thanks go out to Enthought for their efforts to sponsor the SciPy conference this year and in years past.   I'm really looking forward to the day when the SciPy conference, like the PyData conference series, directly benefits NumFOCUS which is a non-profit organization with 501(c)(3) status started by the scientific Python community and run by the same community behind so much of the SciPy stack.    It looks like steps are being taken in that direction which is wonderful to see.  At the SciPy conference, Fernando Perez, of IPython fame, led the charge to get fiscal sponsorship documents improved to make it much simpler for people wanting to sponsor the great projects on the scientific python stack (IPython, NumPy, SciPy, Pandas, SymPy, Matplotlib, etc.) to have a vehicle to do it.  This year, NumFOCUS was able to sponsor the attendance of two students to the SciPy conference because of generous donors.  Right now, NumFOCUS is looking for help for its website to improve the look and feel.   It's a great way to get involved with the community and help out.    Just send an email to the numfocus google group (a public group for all to get involved with):  mailto:numfocus+subscribe@googlegroups.com?subject=Subscribe.

Right now, a conversation involving graph-representations for Python compilation tools is happening on the numfocus mailing list among several interested parties from SymPy, Numba, Theano, Pythran, Parakeet, etc.     One of the highlights of the conference for me was meeting and interacting with other people interested in Python-for-science compiler technology as it looks like there is a healthy community developing around this topic.   I hope those interested in the topic check out compilers.pydata.org and issue pull requests to that github-hosted page to describe their favorite tool.

I only attended some of the tutorial given by fellow Continuum team members Ben Zaitlen and Clayton Davis.   I was gratified to see that wakari.io was useful for so many people during the tutorials, and appreciated the feedback on how we can continue to improve the tool.   I'm also grateful to see all the people able to productively use Anaconda which is our free, cross-platform, distribution for using Python for scientific work and data analysis.

It was nice to see David Cournapeau give a detailed discussion of NumPy internals in one of the tutorials.   There is much more that could be said about NumPy internals, but David gave a good introduction to the topic.   I like how he showed how it is possible to extend the NumPy dtype system --- especially with certain kinds of types.   In NumPy, I tried very hard to make the type-system more extensible.   It's nice to see it being used more and more.   Extending the type system more generally (to include things like variable-length strings, and infinite precision floats) is a bit harder and not very easy to do in current NumPy (especially while trying to keep the foundation stable).     In fact, one of the reasons Continuum is sponsoring the development of dynd is precisely to build a foundation with an easier to extend type-system.   Making it a C++ library should hopefully allow languages like Javascript, Ruby, Haskell, and others to also benefit from the dynamic type concepts as well.

I really enjoyed the talk on Spyder by Carlos Cordoba.   The Spyder IDE is a very nice tool and I was happy to see Carlos promoting it.   The Spyder IDE is featured in our Anaconda Launcher (part of the Anaconda 1.6 release) along with the IPython notebook and IPython console.   The Launcher allows anyone to publish their app to multiple platforms simply by making a conda package (with an icon and an entry-point) and upload it to a repository that the Launcher is looking at.   All the dependencies can be specified and they will be installed via conda automatically when the app is selected.   The hope is to make it very easy for anyone to get their cool application based on Python in front of people quickly without having to make installers for every platform.

Besides the excellent keynote talks by Fernando Perez, William Schroeder, and Olivier Grisel, I also found the talks by Matthew Rocklin, Pat Marion, Ramalingam Saravanan, Serge Guelton, Samuel Skillman, Jake Vanderplas, and Joshua Warner very interesting.   It was especially nice to meet Joshua, who was coming from the Mayo Clinic where SciPy began.   I started writing the SciPy library in 1999 at the Mayo Clinic while I was a graduate student there (then called Multipack, special, and a bunch of other modules).    It was very nice to meet someone from Mayo contributing again to this community with a very nice fuzzy logic package based on the work of an old professor of mine, Hal Otteson.    His work is now a new scikit.  The scikit concept has been a tremendous boon for development of the Scientific Python community as it allows more distributed development and more rapid expansion of the available tools.    If better packaging had existed at the time, I would very likely have kept my early modules independent so they could grow with their own developer bases.   What is now the SciPy library should most likely have been a SciPy distribution (with perhaps a smaller core).    But, hindsight is 20/20 and given the state of the world at the time, the best option seemed to be to create the SciPy library with Eric Jones and Pearu Peterson.

Mark Wiebe did an excellent job in presenting dynd, a C++ library for dynamic multi-dimensional array manipulation with nice python bindings.   Mark's work, sponsored by Continuum Analytics,  is something that could lead to NumPy 2.0, although nobody has suggested exactly how that might work yet.    As dynd forms a foundation for Blaze, and Blaze and NumPy can co-exist for many years, I haven't been thinking much about how NumPy 2.0 could grow out of dynd until now.  I do now have some ideas about how NumPy could be improved that I think will help the space evolve more fluidly and productively with many interested people able to coordinate their varied efforts.   The most important of these is the introduction of multi-methods into NumPy which I'll outline below.

I participated on a panel about the future of Array Oriented Computing in Python.   Of course, I've been spending a lot of time over the past year working and thinking exactly about that, so I would have preferred a talk versus a panel with only a limited amount of time.    However, I have limited time to prepare talks and will be speaking at the upcoming PyData conference in Boston, so I was grateful for the chance to at least express some of the ideas we've been working on.    To be clear, I think that Blaze is the future of Array Oriented Computing in Python, though we have some work ahead to prove that out.   Exactly what the transition from NumPy to Blaze looks like for people will be a story I care quite a bit about and will be telling more and more in the coming months and years.    I take personal responsibility for anyone who adopted NumPy, and I will do everything I can to make sure their transition to using Blaze is as simple as possible.   Backward compatibility is very important to me.  I spent many hours making sure that NumPy was compatible with both Numarray and Numeric.   Fortunately, Blaze and NumPy can co-exist and so there is less of a story of either / or and more about which / when (especially during the transition phase).

There is also another possibility that will be interesting to see if it emerges:  retro-fitting NumPy with multi-methods (dispatching on python type and also on dtype).    I think this is the single-most important thing that can be done for NumPy.   If someone is motivated and has budget, I can work with her to do this in about 1-2 months (maybe even sooner depending on the experience).    This is not on my immediately funded road-map, however, so it would need outside funding and/or interest.

There are several different multi-method implementations for Python.   For those unfamiliar with the concept, here is a good essay by Guido on the general concept.   Multi-methods are also at the heart of Julia.    They are a simple concept.    Basically, a multi-method is an object that dispatches to a different implementation based on the number and types of the arguments.   The idea is that you can add new implementations of the underlying function quite easily without changing the function object itself.   So, for example, if numpy.dot were a multi-method, then I could change the implementation of numpy.dot for my new fancy array-object without directly changing the source-code of numpy.dot in NumPy and all downstream functions and methods that use numpy.dot in their implementation would automatically work with my new type of array.    Multi-methods allow extensibility in a manner similar to how operator overloading allows extensibility in object-oriented programming.   But, it's a much more natural fit for operations where dispatching only on the first argument does not make a lot of sense.

In fact, at the heart of NumPy's ufuncs is a multi-method dispatch mechanism (on NumPy dtype, instead of Python type), so NumPy users have been using multi-methods for a long time.  Indeed, if NumPy's ufuncs were true multi-methods to begin with, then all the hassle with __array_wrap__, __array_prepare__, and so forth (hacks to compensate for the lack of true Python-type-based multi-methods) would not be necessary.    If you look at the implementation of NumPy's masked arrays, for example, you will see some of the ugliness that is caused by NumPy's lack of a better multi-method mechanism.    Numba's autojit also effectively creates a kind of multi-method as it creates a new function to dispatch to whenever it encounters a new set of types for the arguments.    These are the ideas that we are building on and using in Blaze, as we learn from our experience with NumPy.

The biggest challenge for multi-methods is always what function to return if you don't find an exact match.    A simple multi-method is basically a dictionary whose key is a tuple of the types of the input arguments and whose value is the implementation.  But, what do you do if the key does not return an implementation?  How do you find a compatible function and use it instead?    There is a lot of theory on this and several approaches people have taken.  I'm not aware of a universal solution that everybody agrees should be used.      However, there are reasonable approaches that can be taken using the idea of typesets or type-hierarchies (for those interested you can read more about contravariance and covariance for other approaches to resolving the type dispatch problem as well).
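
To make the idea concrete, here is a minimal sketch of such a dictionary-based multi-method in pure Python (the names are made up for illustration, and real implementations handle caching, type hierarchies, and ambiguity much more carefully):

    class MultiMethod(object):
        """Dispatch to an implementation based on the types of all the arguments."""

        def __init__(self, name):
            self.name = name
            self.registry = {}      # maps a tuple of argument types to an implementation

        def register(self, *types):
            def decorator(func):
                self.registry[types] = func
                return func
            return decorator

        def __call__(self, *args):
            key = tuple(type(arg) for arg in args)
            func = self.registry.get(key)
            if func is None:
                # crude fallback: accept any registered signature whose types are base classes
                for types, candidate in self.registry.items():
                    if len(types) == len(args) and all(
                            isinstance(a, t) for a, t in zip(args, types)):
                        func = candidate
                        break
                else:
                    raise TypeError("no implementation of %s for %r" % (self.name, key))
            return func(*args)

    # hypothetical usage: a 'dot' that third parties can extend for their own array types
    dot = MultiMethod('dot')

    @dot.register(list, list)
    def dot_lists(a, b):
        return sum(x * y for x, y in zip(a, b))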

I'm confident that useful if not universal approaches to this problem can be found (several are already available for Python and in Julia, for example).   For NumPy, what is needed is a two-tiered dispatch mechanism.   My view is that all NumPy (and SciPy and Scikit) functions should be multi-methods that dispatch based on Python-type *and* then additionally for memory-view-like objects on the data-type of the elements.    The dispatch rules for each of these cases can and should be separate, I think.

If you are interested in this problem and especially if you have money to fund it, feel free to contact me directly at travis at continuum dot io.

While I am spending more and more of my conference time with the PyData conference series, I still enjoy reconnecting with people I will always consider friends at the SciPy conference.   Fortunately, many speakers participate in both.     Having both conferences allows the community to grow and have bigger and better impact as I think can be witnessed by the increased attendance this year at SciPy.  

Sunday, December 16, 2012

Passing the torch of NumPy and moving on to Blaze

I wrote this letter tonight to the NumPy mailing list --- a list I have been actively participating in for nearly 15 years.


Hello all, 

There is a lot happening in my life right now and I am spread quite thin among the various projects that I take an interest in.     In particular, I am thrilled to publicly announce on this list that Continuum Analytics has received DARPA funding (to the tune of at least $3 million) for Blaze, Numba, and Bokeh which we are writing to take NumPy, SciPy, and visualization into the domain of very large data sets.    This is part of the XDATA program, and I will be taking an active role in it.    You can read more about Blaze here:  http://blaze.pydata.org.   You can read more about XDATA here:  http://www.darpa.mil/Our_Work/I2O/Programs/XDATA.aspx  

I personally think Blaze is the future of array-oriented computing in Python.   I will be putting efforts and resources next year behind making that case.   How it interacts with future incarnations of NumPy, Pandas, or other projects is an interesting and open question.  I have no doubt the future will be a rich ecosystem of interoperating array-oriented data-structures.     I invite anyone interested in Blaze to participate in the discussions and development at https://groups.google.com/a/continuum.io/forum/#!forum/blaze-dev or watch the project on our public GitHub repo:  https://github.com/ContinuumIO/blaze.  Blaze is being incubated under the ContinuumIO GitHub project for now, but eventually I hope it will receive its own GitHub project page later next year.   Development of Blaze is early but we are moving rapidly with it (and have deliverable deadlines --- thus while we will welcome input and pull requests we won't have a ton of time to respond to simple queries until at least May or June).    There is more that we are working on behind the scenes with respect to Blaze that will be coming out next year as well but isn't quite ready to show yet.

As I look at the coming months and years, my time for direct involvement in NumPy development is therefore only going to get smaller.  As a result it is not appropriate that I remain as "head steward" of the NumPy project (a term I prefer to BDF12 or anything else).   I'm sure that it is apparent that while I've tried to help personally where I can this year on the NumPy project, my role has been more one of coordination, seeking funding, and providing expert advice on certain sections of code.    I fundamentally agree with Fernando Perez that the responsibility of care-taking open source projects is one of stewardship --- something akin to public service.    I have tried to emulate that belief this year --- even while not always succeeding.  

It is time for me to make official what is already becoming apparent to observers of this community, namely, that I am stepping down as someone who might be considered "head steward" for the NumPy project and officially leaving the development of the project in the hands of others in the community.   I don't think the project actually needs a new "head steward" --- especially from a development perspective.     Instead I see a lot of strong developers offering key opinions for the project as well as a great set of new developers offering pull requests.  

My strong suggestion is that development discussions of the project continue on this list with consensus among the active participants being the goal for development.  I don't think 100% consensus is a rigid requirement --- but certainly a super-majority should be the goal, and serious changes should not be made without a clear consensus.     I would pay special attention to under-represented people (users with intense usage of NumPy but small voices on this list).   There are many of them.    If you push me for specifics, then at this point in NumPy's history, I would say that if Chuck, Nathaniel, and Ralf agree on a course of action, it will likely be a good thing for the project.   I suspect that even if only 2 of the 3 agree at one time it might still be a good thing (but I would expect more detail and discussion).    There are others whose opinion should be sought as well:  Ondrej Certik, Perry Greenfield, Stefan van der Walt, David Warde-Farley, Pauli Virtanen, Robert Kern, David Cournapeau, Francesc Alted, and Mark Wiebe to name a few (there are many other people as well whose opinions can only help NumPy).    For some questions, I might even seek input from people like Konrad Hinsen and Paul Dubois --- if they have time to give it.   I will still be willing to offer my view from time to time if I am asked.

Greg Wilson (of Software Carpentry fame) asked me recently what letter I would have written to myself 5 years ago.   What would I tell myself to do given the knowledge I have now?     I've thought about that for a bit, and I have some answers.   I don't know if these will help anyone, but I offer them as hopefully instructive:   

1) Do not promise to not break the ABI of NumPy --- and in fact emphasize that it will be broken at least once in the 1.X series.    NumPy was designed to add new data-types --- but not without breaking the ABI.    NumPy has needed more data-types and still needs even more.   While it's not beautifully simple to add new data-types, it can be done.   But, it is impossible to add them without breaking the ABI in some fashion.   The desire to add new data-types *and* keep ABI compatibility has led to significant pain.   I think the ABI non-breakage goal has been amplified by the poor state of package management in Python.   The fact that it's painful for someone to update their downstream packages when an upstream ABI breaks (on Windows and Mac in particular) has put a lot of unfortunate pressure on this community.    Pressure that was not envisioned or understood when I was writing NumPy.

(As an aside:  This is one reason Continuum has invested resources in building the conda tool and a completely free set of binary packages called Anaconda CE which is becoming more and more usable thanks to the efforts of Bryan Van de Ven and Ilan Schnell and our testing team at Continuum.   The conda tool:  http://docs.continuum.io/conda/index.html is open source and BSD licensed and the next release will provide the ability to build packages, build indexes on package repositories and interface with pip.    Expect a blog-post in the near future about how cool conda is!).  

2) Don't create array-scalars.  Instead, make the data-type object a meta-type object whose instances are the items returned from NumPy arrays.   There is no need for a separate array-scalar object and in fact it's confusing to the type-system.    I understand that now.  I did not understand that 5 years ago.   

3) Special-case small arrays to avoid the memory indirection and look at PDL so that generalized ufuncs are supported from the beginning.

4) Define missing-value data-types and labels on the dimensions and arrays

5) Define a standard "dictionary of NumPy arrays" interface as the basic "structure of arrays" concept to go with the "array of structures" that structured arrays provide (see the sketch just after this list).

6) Start work on SQL interface to NumPy arrays *now*
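
As a rough illustration of item 5 (just a sketch, with made-up field names), the two layouts look like this in today's NumPy:

    import numpy as np

    # "array of structures": one structured array with the fields interleaved per record
    aos = np.zeros(3, dtype=[('x', 'f8'), ('y', 'f8')])

    # "structure of arrays": a plain dictionary of homogeneous arrays, one per field
    soa = {'x': np.zeros(3), 'y': np.zeros(3)}

    # the same element written both ways
    aos['x'][0] = 1.0
    soa['x'][0] = 1.0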

Additional comments I would make to someone today: 

1) Most of NumPy should be written in Python with Numba used as the compiler (particularly as soon as Numba gets the ability to create Python extension modules which is in the next release).  
2) There are still many, many optimizations that can be made in NumPy run-time (especially in the face of modern hardware). 

I will continue to be available to answer questions and I may chime in here and there on pull requests.    However, most of my time for NumPy will be on administrative aspects of the project where I will continue to take an active interest.    To help make sure that this happens in a transparent way,  I would like to propose that "administrative" support of the project be left to the NumFOCUS board of which I am currently 1 of 9 members.   The other board members are currently:  Ralf Gommers, Anthony Scopatz, Andy Terrel, Prabhu Ramachandran, Fernando Perez, Emmanuelle Gouillart, Jarrod Millman, and Perry Greenfield.      While NumFOCUS basically seeks to promote and fund the entire scientific Python stack,   I think it can also play a role in helping to administer some of the core projects which the board members themselves have a personal interest in. 

By administrative support, I mean decisions like "what should be done with any NumPy IP or web-domains" or "what kind of commercially-related ads or otherwise should go on the NumPy home page", or "what should be done with the NumPy github account", etc.  --- basically anything that requires an executive decision that is not directly development related.    I don't expect there to be many of these decisions.  But, when they show up, I would like them to be made in as transparent and public of a way as possible.  In practice, the way I see this working is that there are members of the NumPy community who are (like me) particularly interested in admin-related questions and serve on a NumPy team in the NumFOCUS organization.     I just know I'll be attending NumFOCUS board meetings, and I would like to help move administrative decisions forward with NumPy as part of the time I spend thinking about NumFOCUS. 

If people on this list would like to play an active role in those admin discussions, then I would heartily welcome them into NumFOCUS membership where they would work with interested members of the NumFOCUS board (like me and Ralf) to direct that organization.    I would really love to have someone from this list volunteer to serve on the NumPy team as part of the NumFOCUS project.   I am certainly going to be interested in the opinions of people who are active participants on this list and on GitHub pages for NumPy on anything admin related to NumPy, and I expect Ralf would also be very interested in those views.

One admin discussion that I will bring up in another email (as this one is already too long) is about making 2 or 3 lists for NumPy such as numpy-admin@numpy.org,  numpy-dev@numpy.org, and numpy-users@numpy.org.

Just because I'll be spending more time on Blaze, Numba, Bokeh, and the PyData ecosystem does not mean that I won't be around for NumPy.    I will continue to promote NumPy.   My involvement with Continuum connects me to NumPy as Continuum continues to offer commercial support contracts for NumPy (and SciPy and other open source projects).   Continuum will also continue to maintain its Github NumPy project which will contain pull requests from our company that we are working to get into the mainline branch.      Continuum will also continue to provide resources for release-management of NumPy (we have been funding Ondrej in this role for the past 6 months --- though I would like to see this happen through NumFOCUS in the future even if Continuum provides much of the money).    We also offer optimized versions of NumPy in our commercial Anaconda distribution (Anaconda CE is free and open source).   

Also, I will still be available for questions and help (I'm not disappearing --- just making it clear that I'm stepping back into an occasional NumPy developer role).   It has been extremely gratifying to see the number of pull-requests, GitHub-conversations, and code contributions increase this year.   Even though the 1.7 release has taken a long time to stabilize, there have been a lot of people participating in the discussion and in helping to track down the problems, figure out what to do, and fix them.    It even makes it possible for people to think about 1.7 as a long-term release.  

I will continue to hope that the spirit of openness, tolerance, respect, and gratitude continue to permeate this mailing list, and that we continue to seek to resolve any differences with trust and mutual respect.    I know I have offended people in the past with quick remarks and actions made sometimes in haste without fully realizing how they might be taken.   But, I also know that like many of you I have always done the very best I could for moving Python for scientific computing forward in the best way I know how.    

Thank you for the great memories.   If you will forgive a little sentiment:  My daughter who is in college now was 3 years old when I began working with this community and went down a road that would lead to my involvement with SciPy and NumPy.   I have marked the building of my family and the passage of time with where the Python for Scientific Computing Community was at.   Like many of you, I have given a great deal of attention and time to building this community.   That sacrifice and time has led me to love what we have created.    I know that I leave this segment of the community with the tools in better hands than mine.   I am hopeful that NumPy will continue to be a useful array library for the Python community for many years to come even as we all continue to build new tools for the future. 

Very best regards,

-Travis 


Wednesday, October 10, 2012

Continuum and Open Source

As an avid open source contributor for nearly 15 years --- and a father with children to provide for --- I've observed intently the discussions about how to monetize open source.   As a young PhD student, I even spent hours avoiding my dissertation by reading about philosophy and economics to try and make sense of how an open-source economy might work.

I love creating and contributing to open source code --- particularly code that has the potential to influence and touch for the better millions of lives.  I really enjoy spending as much time as I can on that activity.   On the other hand, the wider economy wants money from me for things like college expenses, housing, utilities, and the "camp champions" that I get to attend this week with my 11 year old son.   So, I have thought and read a lot about how to make money from open source.

There are a lot of indirect ways to make money from open source, which all amount to giving away the code and then making money doing "something else":   training, support, consulting, documentation, etc.  These are all ways you can sell the expertise that results from open source.  Ultimately, however, under all these models open source is a marketing expense and you end up needing to focus your real attention on the thing you end up getting paid for -- the service itself.   As a result, the open source code you care about tends to receive less attention than you had originally hoped and you can only spend your "free time" on it.     I've seen this play out over several years in multiple ways.

I still believe that a model that is patterned after the original copyright/patent compromise of "limited-time" protection is actually a good one --- especially for certain kinds of software.   Under this model, there are two code-bases: an open source one and a proprietary one.   People pay for the software they want and use (and therefore developers get paid to write it) while premium features migrate from the paid-for branch to the free-and-open-source code base as the developers get paid.  

While this model would not work for every project, it does have some nice features:

  • it allows developers to work full-time on code that benefits users (as evidenced by those users' willingness to pay for the software)
  • developers have a livelihood directly writing code that "will become" open source as people pay for it
  • users only pay for software that they are getting "premium benefits" from and those premium benefits are lifting the state of open-source software over time
It is a wonderful thing for developers to have a user-base of satisfied customers.   For all the benefits of open-source,  I've also seen firsthand the difficulty of supporting a large user-base with no customers who are directly paying for continued support of the code-base, which eventually leads to less satisfied customers. 

I am thrilled to be part of a forward-thinking company like Continuum Analytics that is committed enough to open source software both to directly sponsor open source projects (like NumPy and Numba) and to move features from its premium products into open source.   You can read more about Continuum's Open Source philosophy here: Continuum and Open Source

For example, we recently moved a feature from our premium product, NumbaPro, into the open-source project Numba which allows you to compile a python file directly to a shared library.  You can read about that feature here: Compiling Python code to Shared Library.

We will continue to develop Numba in the open --- in conjunction with others who wish to participate in the development of that project.    Our ability to spend time on this, of course, will be directly impacted by how many licenses of NumbaPro we can sell (along with our other products and services).   So, if computing on GPUs, creating NumPy ufuncs and generalized ufuncs easily, or taking advantage of multiple-cores in your Python computations is something that would benefit you, take a look at NumbaPro and see if it makes sense for you to purchase it.   Hopefully, in addition to great software you appreciate, you will also recognize that you are contributing directly to the development of Numba.

Sunday, September 2, 2012

John Hunter 1968-2012

It was a shock to hear the news from Fernando that John Hunter needed chemotherapy to respond to the cancer that had attacked him.    Literally days before the news, we had been talking at the SciPy conference about how to take NumFOCUS to the next level.   Together with the other members of NumFOCUS we have ambitious plans for the Foundation: scholarships and post-doc funds for students and early professionals contributing to open-source, conference sponsorship, packaging and continuous integration sponsorships, etc.   We had been meeting via phone in board meetings every other week and he was planning to send a message to the matplotlib mailing list encouraging people to donate to our efforts with NumFOCUS.     Working with John in person on a mutual project was gratifying.   His intelligence, enthusiasm, humility, and pragmatism were a perfect complement to our board discussions.

He had also just spoken at SciPy 2012 and gave a great talk discussing his observations and lessons learned from Matplotlib.  If you haven't seen the talk, stop reading this and go watch it here --- you will see a great and humble man describe a labor of love (and not give himself enough credit for what he accomplished).

When I heard the news, I wrote a quick note to John expressing my support and appreciation for all he had done for Python --- not only because I truly feel that matplotlib is a major reason that projects I have invested so heavily in (NumPy and SciPy) have become so popular, but also because I knew that I had not shared enough with him how much I think of him.  A sinking feeling in my heart was telling me that I may not have much time.

This is what I sent him:
Hey John,

I am so sorry to hear the news of your diagnosis.    I will be praying for you and your family.   I understand if you cannot respond.   Please let me know if there is anything I can do to help.   

I have so much respect for you and what you have done to make Python viable as a language for technical computing.  I also just think you are an amazing human being with so much to give.  

All the best for a speedy recovery. 
-Travis 

This is the response I received.

Thanks so much Travis. We're moving full speed ahead with a treatment plan -- chemo may start Tues.  As unpleasant as it can be, I'm looking forward to the start of the fight against this bastard.

Thanks so much for your other kind words. You've always been a hero to me and they mean a lot. I have great respect for what you are doing for numpy and NUMFOCUS, and even though I am stepping back from work and MPL and everything non-essential right now, I want to continue supporting NF while I'm able.  
All the best,
JDH

I had no idea how much I would come to appreciate this small but meaningful exchange -- my last communication with John.  Only a few weeks later, Fernando Perez (author of IPython and a great friend to John) sent word that our mutual friend had an unexpected but terrible reaction to his initial treatment, and it had placed him in critical condition and the prognosis was not good.

I ached when literally hours later, John died.   I thought of his 3 daughters (each only about 3 years younger than my own 3 daughters) and how they would miss their father.   I thought of the time he did not spend with them because he was writing matplotlib.   I know exactly what that means because of the time I have sacrificed with my own little girls (and boys) bringing SciPy to life, merging Numarray and Numeric into NumPy, resurrecting llvmpy, and bringing Numba to life.   I thought of the future time I would not get to spend with him building NumFOCUS into a foundation worthy of the software it promotes.    I have not lost many of my loved ones to death yet.  Perhaps this is why I have been so affected by his death.  Not since my mother died 2 years ago (August 31, 2010), has the passing of another driven me so.

When I thought of John's girls, I thought immediately of what we could do to show love and appreciation.   What would I want for my own children if I were no longer here to care for them?   My oldest daughter had just started college and was experiencing that first transformative week.  Perhaps this was why I thought that, more than anything, if I were not around I would want my girls to have enough money for their education.  After speaking with Fernando and with approval from John's wife, Miriam, we set up the John Hunter Memorial Fund.  Anthony Scopatz, Leah Holdridge, and I have spent several hours since then making sure the site stays operational (mainly overcoming some unexpected difficulties caused by Google on Friday).

My personal goal is to raise at least $100,000 for John's girls.   This will not cover their entire education, but it will be a good start and will be a symbolic expression of appreciation for all those who work tirelessly on open source software for the benefit of many.     After a few days, we are at about $20,000 total (from about 450 donors).   This is a great start and will be greatly appreciated by John's family --- but I know that all those who benefit from the free use of a high-quality plotting library can do better than that.      If you have already given, thank you!    If you haven't given something yet, please consider what John has done for you personally, and give your most generous donation.  

There are fees associated with using online payment networks.    We will find a way to get those fees waived or covered by specific corporate donations, so don't let concern of the fees stop you from helping.    We've worked hard to make sure you have as many options to pay as possible.  You can use PayPal or WePay (which both have fees of 2.9% + $0.30), you can use an inexpensive payment network like Dwolla (only $0.25 for sending more than $10 and free for sending less --- but you have to have a Dwolla account and put money into it), or you can do as David Beazley suggested and just send a check to one of the addresses listed on the memorial page.

Whatever you decide to do, just remember that it is time to give back!

John has always been supportive of my work in open source.  It was his voice that was one of the few positive voices that kept me going in the early days of NumPy when other voices were more discouraging.    He has also consistently been a calming and supportive voice on the mailing lists when others have been less considerate and sometimes even hostile.    I'm very sorry he will not be able to see even more results of his tireless efforts.  I'm very sorry we won't get to feel more of his influence in the world.   The world has lost one who truly recognized that great things require cooperation of many people.   Obtaining that cooperation takes sacrifice, trust, humility, a willingness to listen, a willingness to speak out with respect, and a willingness to forgive.   He exemplified those characteristics.   I am truly saddened that I will not be able to learn more from him.

When SciPy was emerging from my collection of modules in 2001, one of the things Eric Jones and I wanted was an integrated plotting package.    We spent time on a couple of plotting tools in early SciPy (a simple WX plotting widget, xplot based on Yorick's gist).    These early steps were not going to get us what users needed.  Fortunately, John Hunter came along around 2001 and started a new project called Matplotlib, which steadily grew in popularity until it literally exploded in about 2004 with funding from Perry Greenfield and the Space Telescope Science Institute and the efforts of the current principal developer of Matplotlib: Michael Droettboom.

I learned from John's project many important things about open source development.   A few of them:

  • Examples, documentation, and ease of use matter -- a lot
  • Large efforts like Python for Science need a lot of people and a distributed, independent development environment (not everything belongs in a single namespace).
    • SciPy needed to be a modular "library" not a replacement for Matlab all by itself. 
    • The community needed a unifying installation to make it easy for the end-user to get everything, but we did not need a single namespace. 
    • Open source projects can only cover as much space as a team of about 5-7 active developers can understand.   Then, they need to be organized into a larger integration and distribution projects --- a hierarchical federation of projects. 
    • The only way large projects can survive is by separating concerns, having well defined interfaces, and groups that work on individual pieces they have expertise in. 
  • Backwards compatibility matters a great deal to an open source project (he created the numerix layer in Matplotlib to make it easy for end-users to migrate from Numeric through Numarray to NumPy)
I'm sure if John were here, he could improve my rough outline and make it much better.   From improving plotting libraries to making good use of record arrays, he was always doing that.   In fact, one of John's last contributions to the world is in improving the mission statement of NumFOCUS.    In a recent board meeting, he suggested adding the word "accessible" to the mission statement:  The purpose of NumFOCUS is to promote the use of accessible and reproducible computing in science and technology.  

His life's work has indeed been to make science and technology computing more accessible through making Python the de facto standard for doing science with his excellent plotting tool.  Let's continue to improve the legacy he has left us by working together to make computing even more accessible.  We have a long way to go, but by standing on the shoulders of giants like John we can see just that much farther and continue the journey.  

Besides helping his daughters there is nothing more fitting that we can do to honor John's memory than continuing to promote the other work he spent so many hours of his life pushing by contributing to open source projects and/or supporting financially the foundation he wanted to see successful.  

Great people lift us both in life and death.   In life they are gracious contributors to our well being and encourage us to grow.  In death they cause us to reflect on the precious qualities they reflected.  They make us want to improve.  When we think of them, we want to hold our children close, give an encouraging word to a colleague, feel gratitude for our friends and family, and forgive someone who has hurt us.  John Hunter (1968 - 2012) was truly a great man!

Wednesday, August 15, 2012

Numba and LLVMPy

It's been a busy year so far.  All the time spent starting a new company, starting new open source projects, and keeping up with the open source projects I have an interest in has meant that I haven't written nearly as many blog-posts as I planned.  But this is probably a good thing, at least if you follow the wisdom attributed to Solomon --- which has been paraphrased in this quote attributed to Abraham Lincoln.

One of the things that has been on my mind for the past year is promoting array-oriented computing as a fundamental concept that more developers need exposure to.  This is one reason I am so excited to have found great people to work on Numba (which aims to be an array-oriented compiler for Python code).  I have given a few talks trying to convey what is meant by array-oriented computing, but the essence is captured by the difference between the life.py example in the Python code-base and a NumPy version of the same code.

I have seen many, many real world examples of very complicated code that could be simplified and sped up (especially on modern hardware) by just thinking about the problem differently using array-oriented concepts. 
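
As a concrete illustration of that contrast, here is a minimal sketch (my own, not the actual life.py code that ships with CPython) of one step of Conway's Game of Life written both ways: once with explicit element-by-element loops, and once with whole-array NumPy operations that express the same rule in a handful of lines.

```python
import numpy as np

def life_step_loops(grid):
    """One Game-of-Life step written element by element (the loop-heavy style)."""
    nrows, ncols = grid.shape
    new = np.zeros_like(grid)
    for i in range(nrows):
        for j in range(ncols):
            # Count the eight neighbors, wrapping around the edges.
            total = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    total += grid[(i + di) % nrows, (j + dj) % ncols]
            new[i, j] = 1 if total == 3 or (grid[i, j] and total == 2) else 0
    return new

def life_step_arrays(grid):
    """The same step expressed with whole-array (array-oriented) operations."""
    # Summing the eight shifted copies of the grid gives every neighbor count at once.
    neighbors = sum(np.roll(np.roll(grid, di, axis=0), dj, axis=1)
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(64, 64))
assert np.array_equal(life_step_loops(grid), life_step_arrays(grid))
```

The two functions compute identical results, but the array version hands the inner loops to NumPy's compiled code and reads as a statement of the rule rather than as bookkeeping.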

One of the goals for Numba is to make it possible to write more vectorized code easily in Python without relying only on the pre-compiled loops that NumPy provides.  In order to write Numba, though, we first needed to resurrect the llvm-py project, which provides easy access to the LLVM C++ libraries from Python.  This project is interesting in its own right: in addition to forming a base tool chain for Numba, it allows you to do very interesting things, like instrument C code that Clang has compiled to bitcode, build a compiler, or import bitcode directly into Python (a la bitey).
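
For readers who have not seen llvm-py, the snippet below is a hedged sketch of the kind of IR-building it makes possible: constructing a tiny "add two integers" function and printing its LLVM assembly.  The API names are recalled from the old llvm-py/llvmpy documentation example and may not match exactly (llvmpy itself was later superseded by llvmlite), so treat this as illustrative rather than canonical.

```python
from llvm.core import Module, Type, Builder  # old llvm-py/llvmpy interface

# Build a module containing roughly:  define i32 @add(i32 %a, i32 %b) { ret (%a + %b) }
mod = Module.new('example')
int_ty = Type.int(32)
fn_ty = Type.function(int_ty, [int_ty, int_ty])

fn = mod.add_function(fn_ty, 'add')
fn.args[0].name = 'a'
fn.args[1].name = 'b'

bb = fn.append_basic_block('entry')
builder = Builder.new(bb)
result = builder.add(fn.args[0], fn.args[1], 'sum')
builder.ret(result)

print(mod)  # dumps the generated LLVM IR as text
```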

While the documentation for llvm-py left me frustrated early on, I have to admit that llvm-py re-kindled some of the joy I experienced when I was first exposed to Python.  Over the past several weeks we have worked to create the llvmpy project from llvm-py.  We now have a domain http://www.llvmpy.org, a GitHub repository, a website served from GitHub, and Sphinx-based documents that can be edited via a pull request.  The documentation still needs a lot of improvement (even to get it to the state the old llvm-py project was in), and contributions are welcome.

I'm grateful to Fernando Perez, author of IPython, for explaining the 4-repository approach to managing an open source web-site and documentation via GitHub.  We are using the same pattern that IPython uses for both numba and llvmpy.  It took a bit of work to get set up, but it's a nice approach that should make it easier for the community to maintain the documentation and web-site of both of these projects.  The idea is simple.  Use a project-pages repo (llvmpy.github.com) as the web-site, but generate that repo from another repo (llvmpy-webpage) which contains the actual sources.  I borrowed the scripts from the IPython project to build the pages from the sources, check out the llvmpy.github.com repo, copy the built pages into it, and push the updates back to GitHub, which actually updates the site.  The same process (slightly modified) is used for the documentation, except the sources for the docs live in the llvmpy repo under the docs directory and the built pages are pushed to the gh-pages branch of the llvmpy-doc repo.  If you are editing sources, you only modify the llvmpy/docs and llvmpy-webpage files.  The other repos are generated and pushed via scripts.
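
To make that flow concrete, here is a rough sketch in Python of the generate-and-push step for the web-site.  The repo names come from the paragraph above, but the script itself (and the _build/html output path) is a hypothetical stand-in for the actual IPython-derived scripts, not a copy of them.

```python
# Hypothetical sketch of the publish step: build the page sources, copy the
# output into the generated pages repo, and push so GitHub serves the update.
import shutil
import subprocess

SOURCES = "llvmpy-webpage"            # repo holding the editable page sources
PAGES = "llvmpy.github.com"           # generated repo that GitHub actually serves
BUILD_DIR = SOURCES + "/_build/html"  # assumed build output location

subprocess.check_call(["make", "html"], cwd=SOURCES)   # build the pages
subprocess.check_call(["git", "pull"], cwd=PAGES)      # refresh the pages repo
shutil.copytree(BUILD_DIR, PAGES, dirs_exist_ok=True)  # copy built output (Python 3.8+)
subprocess.check_call(["git", "add", "-A"], cwd=PAGES)
subprocess.check_call(["git", "commit", "-m", "regenerate site"], cwd=PAGES)
subprocess.check_call(["git", "push"], cwd=PAGES)      # this push updates the live site
```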

We are using the same general scheme to host the numba pages (although there I couldn't get the numba.org domain name and so I am using http://numba.pydata.org).   With llvmpy on a relatively solid footing, attention could be shifted to getting a Numba release out.  Today, we finally released Numba 0.1.   It took longer than expected after the SciPy conference mainly because we were hoping that some of the changes (still currently in a devel branch) to use an AST-based code-generator could be merged into the main-line before the release.  

Jon Riehl did the lion's share of the work to transform Numba from my early prototype into a functioning system in 0.1, with funding from Continuum Analytics, Inc.   Thanks to him, I can proudly say that Numba is ready to be tried and used.    It is still early software --- but it is ready for wider testing.   One of the problems you will have with Numba right now is error reporting.  If you make a mistake in the Python code that you are decorating, the error you get will not be informative -- so test the Python code before decorating it with Numba.    But, if you get things right, Numba can speed up your Python code by 200 times or more.    It is really pretty fun to be able to write image-processing routines in Python.   PyPy can do this too, of course, but with Numba you have full integration with the CPython stack, and you don't have to wait for someone to port the library you also want to use to PyPy.
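
To give a flavor of what "image-processing routines in Python" means here, below is a hedged sketch of a naive 3x3 box blur written as plain nested loops and compiled with Numba.  The decorator spelling in the 0.1 release differed from later versions; this sketch uses the numba.jit interface found in modern releases, and box_blur is my own illustrative function, not code from the release.

```python
import numpy as np
from numba import jit  # decorator name as in later Numba releases

@jit(nopython=True)
def box_blur(image, out):
    """Naive 3x3 mean filter written as plain Python loops over a 2-D array."""
    rows, cols = image.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            total = 0.0
            for di in range(-1, 2):
                for dj in range(-1, 2):
                    total += image[i + di, j + dj]
            out[i, j] = total / 9.0
    return out

image = np.random.rand(512, 512)
blurred = box_blur(image, np.zeros_like(image))  # compiled to machine code on first call
```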

Numba's road-map is being defined right now by the people involved in the project.  On the horizon are support for NumPy index expressions (slices, etc.), merging of the devel branch (which uses the AST approach and Mark Florisson's minivect compiler), improved error checking, emitting calls to the Python C-API for code that cannot be type-specialized, and improved complex-number support.  Your suggestions are welcome.

Monday, July 30, 2012

More PyPy discussions

I'm very glad that my co-founder of Continuum Analytics,  Peter Wang, has published his recent follow-up blog-post that hopefully clarifies his perspective on the on-going dialogue about CPython and PyPy.

Peter is a fundamentally good-natured person, and he is a lot of fun to be around --- even when he is disagreeing with you.   I'm very fortunate to be working with him on a daily basis.   He can be opinionated, but his ability to connect deeply to a wide-variety of subjects means that you come away from a dialogue with him having learned something (even if you still remain unconvinced by his views).  

Peter is also one of the smartest people I've ever met.   One of my great memories in life is sitting at dinner with Peter and Eric Weinstein while those two great minds treated me, Wes McKinney, and Adam Klein to the most impressive display of metaphor ping-pong I've ever seen covering a wide-variety of topics from social justice to string theory.  I could keep up with the dialogue, but not enough to really participate meaningfully --- and the other two Ivy-league-educated dinner partners were in the same boat.

I fundamentally agree with Peter's perspective that CPython-the-runtime is and will remain the centerpiece of the Python conversation.    In fact, I would say that even more focus needs to be on CPython-the-runtime.   It is great to see improvements in Python 3.3 like the completion of the memory-view implementation and the fixing of the internal string (Unicode) representation, but there are many other improvements that could be made.

It is a wonderful and inspiring thing to see great developers think out of the box with novel projects like Jython, IronPython, and PyPy.   Nonetheless, from my perspective we still have a long way to go to really connect the average developer with the ideas of array-oriented computing that could help with the continuing onslaught of parallel-devices-in-search-of-software.   As a result, it feels like those wanting Java, .NET, and machine-code integration would be better served by more attention on JPype, Python.NET, LLVMPy, and even CorePy.   Such efforts would also be better for the entire user-base of Python --- especially the majority of industry uses of Python.

But regardless of my perspective, I'm encouraged by the PyPy developers' enthusiasm, and I do want to encourage dialogue regardless of my views.   As a result, I am very happy to report that NumFOCUS and Continuum Analytics recently joined forces to sponsor Maciej Fijalkowski on a small project to create an embedded version of PyPy --- a "PyPy-in-a-Box."  This is an integration of PyPy into the CPython run-time (so that you can speed up a particular CPython function by calling out to a library-version of PyPy).   This is proof-of-concept code, so it is not appropriate for production --- but it is a good example of what is possible when we all work together to promote the Python ecosystem.

The online project is here:  https://bitbucket.org/fijal/hack2/src/default/pypyembed  and you can get a binary version that works on 64-bit Linux here:  http://baroquesoftware.com/~fijal/pypy-1.9-in-a-box-linux64.tar.bz2.

This approach needs more development to be a viable tool in the CPython ecosystem, but one of my suggestions to the PyPy community is that they focus on "shedding-tools" like this one for the CPython world --- so that everyone can benefit from their innovations.   With an integration effort like embedded PyPy, one can also make better comparisons with tools like Numba --- another dynamic-compilation run-time that uses LLVM and llvm-py.     Numba has made a lot of progress in the last few months.   In fact, I recently gave a talk on the project at the well-attended SciPy2012 conference in Austin.   You can view my slides, which outline and motivate the project, online.   An actual release of the project is imminent, but you can already use Numba to very easily write significant Python code using NumPy arrays that executes at "C-speeds."  But, that is worth another blog-post of its own....