Monday, 16 June 2014

Pypy 2.3.1 versus cPython 2.7.6 on very large builds

A good build practice is to keep the number of build tasks to an absolute minimum. Fewer tasks mean fewer objects to process (reduced pressure on the Python interpreter), less data to store (data serialization), and fewer processes to spawn (reduced pressure on the OS). It is therefore a good idea to enable batches if the compiler supports them (waflib/extras/unity.py and waflib/extras/batched_cc.py for example).
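For illustration, a minimal wscript sketch could look like the following, assuming these extras ship with the Waf file in use (the exact loading mechanism may vary depending on how Waf was packaged):

# wscript - sketch only: enable unity builds and batched compilation
def configure(conf):
    conf.load('compiler_cxx')
    conf.load('unity')       # concatenate C++ sources to reduce the task count
    conf.load('batched_cc')  # pass several source files to one compiler invocation

def build(bld):
    bld.program(source=bld.path.ant_glob('src/**/*.cpp'), target='app')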

Although very large builds should be uncommon, it can be interesting to consider how the Python interpreter behaves at the limits. Here are, for example, a few results on playground/compress for a large number of tasks:

The runtime difference between cPython and Pypy becomes noticeable at approximately 100K tasks (1 minute). It then stretches to about 90 minutes for 500K tasks. One explanation for these figures can be found in the memory usage:

Since the Pypy interpreter requires much less memory than cPython, it is more likely to remain efficient with a high number of objects.

Monday, 17 December 2012

Linux filesystems for build workloads

Linux systems include several filesystems by default: XFS, JFS, Ext3, Ext4, reiserfs3. These filesystems have different characteristics: some are known to handle small files better (reiserfs3), others to handle big files better (XFS), and some have annoying quirks (long filesystem verification times on Ext3).

I tend to prefer XFS because Ext2/Ext3 verification (fsck) can take a very long time, which is just unacceptable in production environments. After seeing XFS perform poorly on a file server (extremely long file deletes), I decided to take actual measurements to form an informed opinion.

The scenarios below represent typical operations on servers in a build farm: file writes (building the software), file deletes (clean builds), and filesystem verification (unexpected shutdowns).

The numbers below were obtained on a freshly installed Ubuntu 12.10 (Quantal Quetzal) workstation with two mechanical hard drives. A large build folder of 55GB containing source code and build artifacts was used in the tests (350,000 files spread across 19,000 folders). The data was first copied to a freshly created filesystem, then the filesystem was unmounted and verified (fsck -f where applicable), and finally all the files were removed from the filesystem. Such a large fileset was essential to get relevant data, and the best times of two runs were recorded.
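For reference, a rough timing script along the following lines can reproduce the three measurements (a sketch only; the device name and mount point are hypothetical):

import subprocess, time

def timed(label, cmd):
    # run a shell command and print its wall-clock duration
    t0 = time.time()
    subprocess.check_call(cmd, shell=True)
    print('%-8s %6.1fs' % (label, time.time() - t0))

timed('write',  'cp -a /data/build_tree /mnt/test/')                         # file writes
timed('fsck',   'umount /mnt/test && fsck -f -y /dev/sdb1')                  # verification
timed('delete', 'mount /dev/sdb1 /mnt/test && rm -rf /mnt/test/build_tree')  # file removal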

File writes

This test represents the time to copy all the files to the initially empty filesystem from a separate hard drive:

Filesystem verification

A weak point of Ext3 on servers is that verifying the filesystem can take a long time. This verification can happen if the system was not switched off properly, and can cause unwanted downtime. I suspected that Ext4 would also need a long verification time, but I was pleasantly surprised:

File removal

File removal has been a weak point of XFS for a long time. Removing a few terabytes of data can take such a long time that I sometimes consider replacing rm with mkfs. I was hoping that the version of XFS in kernel 3.2 would perform much better due to recent optimizations. The following represents the time to remove the directory copied previously:

Conclusion

For build servers and related file servers, it makes sense to prefer Ext4 over other filesystem types. XFS was a good alternative to Ext3, but that is no longer the case.

Sunday, 16 December 2012

Caching object files for the build

An interesting idea to accelerate builds is to cache already generated object files. The Waf library provides a simple cache system by intercepting the task execution and retrieving files from the cache. Extensions are even provided to limit directory growth or to share the files over the network.

In practice, implementing a cache layer at the build system level does not work very well. The following points are conclusions from years of experimentation on both open and closed-source projects:

  1. The task signatures used for identifying tasks make poor keys for accessing the cache. Platform-specific command-line flags, path separator characters (/ or \), and absolute paths severely limit cache re-use.
  2. Implementing different task signatures to work around the previous limitations (overriding BuildContext.hash_env_vars for example) will at best cause performance issues (long startup times), and at worst mysterious cache reuse errors.
  3. Because of the two previous points, the build system can become too brittle and too complex.
  4. The Python runtime is essentially single-threaded. The build process is therefore unable to launch more tasks when retrieving files from the cache.

The best system so far is to wrap the compilers or the commands in the manner of ccache. While this requires some more work up front, the resulting builds are faster and more robust.

The ccache application is limited to C/C++ compilations, but it is easy to write command-line wrappers. Such wrappers can then access custom low-latency TCP servers, for example.
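To illustrate the idea, here is a minimal sketch of such a wrapper for a gcc-like compiler (the cache layout and the key derivation are simplified assumptions, not a production design):

#!/usr/bin/env python
# objwrap.py - illustrative usage: "objwrap.py gcc -c foo.c -o foo.o"
import hashlib, os, shutil, subprocess, sys

CACHE_DIR = os.path.expanduser('~/.objcache')

def main(argv):
    cc, args = argv[1], argv[2:]
    if '-c' not in args or '-o' not in args:
        return subprocess.call([cc] + args)   # not a plain compilation, pass through
    obj = args[args.index('-o') + 1]

    # derive the cache key from the preprocessed source plus the command line,
    # so that header or flag changes invalidate the entry
    strip = ('-c', '-o', obj)
    pre = subprocess.check_output([cc, '-E'] + [a for a in args if a not in strip])
    key = hashlib.sha1(' '.join(args).encode('utf-8') + pre).hexdigest()
    cached = os.path.join(CACHE_DIR, key[:2], key)

    if os.path.exists(cached):
        shutil.copy2(cached, obj)             # cache hit: reuse the object file
        return 0
    ret = subprocess.call([cc] + args)
    if ret == 0:
        if not os.path.isdir(os.path.dirname(cached)):
            os.makedirs(os.path.dirname(cached))
        shutil.copy2(obj, cached)             # cache miss: store the new object file
    return ret

if __name__ == '__main__':
    sys.exit(main(sys.argv))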

Saturday, 8 December 2012

Running Waf on Pypy 2.0

Is Pypy now an option for running Waf builds? While Pypy 2.0 beta 1 still hangs on simple parallel builds, the Pypy nightly (59365-f2f4cb496c1c) seems to work much better.

The numbers below represent the best times of 10 runs on a 64-bit Ubuntu 12.10 laptop. The typical benchmark project was used for this purpose (./utils/genbench.py /tmp/build 50 100 15 5):

              cPython 2.7.3   pypy-c-jit   pypy-c-nojit
no-op build   0.76s           6.5s         7.7s
full build    39s             45.4s        48.3s

The no-op build times represent the time taken to load the serialized Python data without executing any command. Pypy still uses a pure-Python implementation of pickle, which is likely to take much more time than the C extension present in cPython.

This can explain the differences in the full build times. If we subtract these values, we can imagine that the Pypy runtime is getting nearly as fast as cPython.
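To get a feeling for the gap, the two pickle implementations available on cPython can be timed on a comparable amount of data (a rough illustration only; the data below is synthetic):

import cPickle, pickle, random, time

# build some nested data to serialize and deserialize
data = [(str(i), random.random(), range(5)) for i in range(200000)]
blob = cPickle.dumps(data, -1)

for mod in (cPickle, pickle):
    t0 = time.time()
    mod.loads(blob)
    print('%s.loads: %.2fs' % (mod.__name__, time.time() - t0))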

Saturday, 1 September 2012

KDE 4.9

Waf was originally created to ease the creation of KDE applications, but it has not worked out so well in practice. The first versions of KDE 4 were terrible, and I think they discouraged many people from ever using it again.

Fortunately, version 4.9 is a change for the better, and it finally provides a pleasant development environment. At least, after the stability fixes (the plasma desktop does not crash anymore, the network manager just works), there are fewer annoyances than on other desktop environments. In particular, the focus stealing prevention policy helps to concentrate, and the apps do not pop up password/keyring windows all the time anymore.

If Qt5 and KDE5 do not break the API too much, we should see more applications for KDE appearing over time.

Monday, 13 August 2012

Computed gotos in python 2.7

Since Pypy does not work too well for multithreaded applications at the moment, I am now stuck with cPython.

Since Python 2.7.3 is about as fast as Python 3.2 for my applications, I wondered which Python 3 optimizations could be backported to 2.7. The computed gotos patch did not look too complicated to adapt, so I created my own version. Here are the two files to add in order to build a computed-gotos-enabled cPython 2.7.3 interpreter: Python/ceval.c and Python/opcode_targets.h.

The optimization does not seem to make a visible difference on my applications though, even after recompiling with -fno-gcse/-fno-crossjumping.

Thursday, 22 March 2012

Listing files efficiently on win32 with ctypes

Listing files on Windows platforms is not particularly fast, but detecting whether files are folders or obtaining the last modification times is extremely slow (os.path.isfile, os.stat). Such function calls become major bottlenecks on very large Windows builds. An effective workaround is to use the functions FindFirstFile and FindNextFile to list the files and their properties at the same time. The results can then be added to a cache for later use.

Though cPython provides access to these functions through ctypes, finding a good example is fairly difficult. Here is a short code snippet that works with Python 2:

import ctypes, ctypes.wintypes

FILE_ATTRIBUTE_DIRECTORY = 0x10
INVALID_HANDLE_VALUE = -1
BAN = (u'.', u'..')

FindFirstFile = ctypes.windll.kernel32.FindFirstFileW
FindNextFile  = ctypes.windll.kernel32.FindNextFileW
FindClose     = ctypes.windll.kernel32.FindClose

out  = ctypes.wintypes.WIN32_FIND_DATAW()
fldr = FindFirstFile(u"C:\\Windows\\*", ctypes.byref(out))

if fldr == INVALID_HANDLE_VALUE:
    raise ValueError("invalid handle!")
try:
    while True:
        if out.cFileName not in BAN:
            isdir = out.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY
            # FILETIME: dwHighDateTime holds the high-order 32 bits
            ts = out.ftLastWriteTime
            timestamp = (ts.dwHighDateTime << 32) | ts.dwLowDateTime
            print out.cFileName, isdir, timestamp
        if not FindNextFile(fldr, ctypes.byref(out)):
            break
finally:
    FindClose(fldr)

To learn more about the attributes available on the "out" object, consult the MSDN documentation on WIN32_FIND_DATAW.