Friday, 16 January 2015

Cost of starting processes on multiple cores

Ten years ago most computers had one or two CPU cores, and quite a few of us were hoping that performance would not be a concern in the future. The news has been a bit disappointing: processors may well have more cores, but the cores themselves are not getting much faster. Alternative Python interpreters (Pypy, Jython, IronPython) were expected to overcome the weaknesses of cPython, but for the time being they are not fast and usable enough. This slower progress has also helped the development of cloud solutions, as using older hardware remains profitable.

Software is growing in complexity, and build tools have not escaped the trend. Since anyone can write a build system, quite a lot of tools have been written, but only a few new ideas have appeared over the years. Waf, for example, has been using a reverse dependency graph since about 2006, and small enhancements such as cleaning up stale files as part of the build were already implemented as extensions years ago.

An innovative idea currently explored in wonderbuild is to write the build scripts as Python generators in order to limit the amount of busy waiting. It seems to yield interesting performance benefits, at least on benchmarks. The Waf source code can be modified to use generators (modifications in Task.py and Runner.py), but the performance benefits are not significant, and rewriting all Python functions as generators is complicated. The Python asyncio module is not an option either, as it requires Python >= 3.4.
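To make the idea concrete, here is a minimal sketch of a scheduler driving tasks written as generators; this is an assumed interpretation for illustration (the function names are made up), not wonderbuild's actual code:

    # Each task is a generator that yields the names of the tasks it waits on;
    # the scheduler resumes a generator only once its dependency has completed,
    # so no thread sits in a loop polling for runnable tasks.
    def compile_task(name, deps=()):
        for dep in deps:
            yield dep                     # suspend until 'dep' has finished
        print('executing %s' % name)      # a real task would spawn the compiler

    def schedule(tasks):
        finished = set()
        waiting = {}                      # dependency -> generators blocked on it
        ready = list(tasks.items())
        while ready:
            name, gen = ready.pop()
            try:
                dep = next(gen)
                if dep in finished:
                    ready.append((name, gen))         # dependency already done
                else:
                    waiting.setdefault(dep, []).append((name, gen))
            except StopIteration:
                finished.add(name)                    # task completed
                ready.extend(waiting.pop(name, []))   # wake up the waiters

    schedule({
        'a.o': compile_task('a.o'),
        'b.o': compile_task('b.o'),
        'app': compile_task('app', deps=('a.o', 'b.o')),
    })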

In any case, experimenting with the benchmark files has revealed that a significant amount of time can be spent merely spawning processes. On a single-core system, build tool overhead used to degrade build performance approximately linearly; with multiple cores, the effects become much more clearly visible.
The following picture illustrates the hardware thread activity on a 4-core hyperthreaded CPU (i7-4770K) during two builds (link to the benchmark). The first build is unable to spawn enough processes to keep the hardware fully busy. The occupancy also appears to degrade over time, probably due to the growing memory usage of the Python process.

The second build was obtained by enabling a new Waf extension called prefork in the build process. Instead of spawning processes as needed, the extension starts slave processes ahead of time and reserves a pool of connections to them. When needed, threads in the build process simply ask the slaves to launch the compiler processes for them and to return the exit status along with any text produced during the execution (build outputs can become garbled if all processes write at the same time).
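The following minimal sketch illustrates the prefork idea (it is not the actual waflib/extras/prefork.py code): a pool of pre-started worker processes runs commands on demand, so the main build process does not pay the fork/exec cost itself:

    import multiprocessing
    import subprocess

    def worker(requests, responses):
        # each slave waits for (id, command) pairs and runs them locally
        for ident, cmd in iter(requests.get, None):
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT)
            out, _ = proc.communicate()
            responses.put((ident, proc.returncode, out))

    if __name__ == '__main__':
        requests = multiprocessing.Queue()
        responses = multiprocessing.Queue()
        slaves = [multiprocessing.Process(target=worker, args=(requests, responses))
                  for _ in range(4)]
        for s in slaves:
            s.start()

        # build threads enqueue commands instead of spawning processes themselves
        requests.put((0, ['gcc', '--version']))
        print(responses.get())            # (id, exit status, captured output)

        for _ in slaves:
            requests.put(None)            # sentinel: shut the pool down
        for s in slaves:
            s.join()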

The second build on the picture was nearly twice as fast as the first one (30s -> 17s), and the difference seems to grow on larger benchmark builds (2m07 -> 0m55). Yet this is unlikely to help as much in practice: on the Samba builds the gap is much smaller (~5%: 1m50 -> 1m45), probably because the build tasks take much longer to complete.
I would be curious to experiment on hardware featuring a lot of cores though (128? 256?), so if you can access or provide access to such hardware, feel free to drop a comment or to join the discussion in #waf on freenode.

Monday, 16 June 2014

Pypy 2.3.1 versus cPython 2.7.6 on very large builds

A good build practice is to keep the number of build tasks to an absolute minimum. This means fewer objects to process (reduced pressure on the Python interpreter), less data to store (data serialization), and fewer processes to spawn (reduced pressure on the OS). It is therefore a good idea to enable batches if the compiler supports them (waflib/extras/unity.py and waflib/extras/batches_cc.py for example).
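As a sketch, enabling these extras from a wscript could look like the following (assuming the tool files are reachable under waflib/extras/):

    def options(opt):
        opt.load('compiler_c')

    def configure(conf):
        conf.load('compiler_c')
        conf.load('unity')        # aggregates source files into larger batches
        conf.load('batches_cc')   # passes several source files per compiler call

    def build(bld):
        bld.program(source=bld.path.ant_glob('src/*.c'), target='app')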

Although very large builds should be uncommon, it can be interesting to consider how the Python interpreter behaves at the limits. Here are, for example, a few results on playground/compress for a large number of tasks:

The runtime difference between cPython and Pypy becomes noticeable at approximately 100K tasks (1 minute). It then stretches to about 90 minutes for 500K tasks. One explanation for these figures can be found in the memory usage:

Since the Pypy interpreter requires much less memory than cPython, it is more likely to remain efficient with a high number of objects.

Monday, 17 December 2012

Linux filesystems for build workloads

Linux systems include several filesystems by default: XFS, JFS, Ext3, Ext4, reiserfs3. These filesystems have different characteristics: some are known to be better at handling small files (reiserfs3), others at handling big files (XFS), and some feature annoying quirks (long filesystem verification times on Ext3).

I tend to prefer XFS because the Ext2/Ext3 verification (fsck) can take a very long time (this is just unacceptable in production environments). After seeing XFS perform poorly on a file server (extremely long file deletes), I decided to take actual measurements to form an informed opinion.

The scenarios below represent typical operations on the servers of a build farm: file writes (building the software), file deletes (clean builds), and filesystem verification (unexpected shutdowns).

The numbers were obtained on a freshly installed Ubuntu 12.10 workstation (Quantal Quetzal) with two mechanical hard drives. A large 55GB build folder containing source code and build artifacts was used in the tests below (350000 files spread across 19000 folders). The data was first copied to a freshly created filesystem, then the filesystem was unmounted and verified (fsck -f where applicable), and finally all the files were removed from the filesystem. The very large fileset was essential to obtain relevant data, and the best times of 2 runs were recorded.
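The measurements can be reproduced with a small script along these lines (a sketch only: /dev/sdb1 and /mnt/test are hypothetical device and mount point names, and it must be run as root on a disposable partition):

    import subprocess, time

    def timed(*cmd):
        # run a command and return the elapsed wall-clock time
        t0 = time.time()
        subprocess.check_call(cmd)
        return time.time() - t0

    print('write: %.1fs' % timed('cp', '-a', '/data/build', '/mnt/test/'))
    subprocess.check_call(['umount', '/mnt/test'])
    print('fsck:  %.1fs' % timed('fsck', '-f', '/dev/sdb1'))
    subprocess.check_call(['mount', '/dev/sdb1', '/mnt/test'])
    print('rm:    %.1fs' % timed('rm', '-rf', '/mnt/test/build'))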

File writes

This test represents the time to copy all the files to the initially empty filesystem from a separate hard drive:

Filesystem verification

A weak point of Ext3 on servers is that verifying the filesystem can take a long time. This verification can happen if the system was not switched off properly, and can cause unwanted downtime. I suspected that Ext4 would suffer from long verification times too, but I was pleasantly surprised:

File removal

File removal has long been a weak point of XFS. Removing a few terabytes of data can take such a long time that I sometimes consider replacing rm with mkfs. I was hoping that the version of XFS in kernel 3.2 would perform much better thanks to the recent optimizations. The following represents the time to remove the directory copied previously:

Conclusion

For build servers and related file servers, it makes sense to prefer Ext4 over the other filesystem types. XFS used to be a good alternative to Ext3, but this is not the case anymore.

Sunday, 16 December 2012

Caching object files for the build

An interesting idea for accelerating builds is to cache already generated object files. The Waf library provides a simple cache system by intercepting task execution and retrieving files from the cache. Extensions are even provided to limit directory growth or to share the files over the network.

In practice, implementing a cache layer at the build system level does not work very well. The following points are the conclusions of years of experimentation on both open and closed-source projects:

  1. The task signatures used for identifying tasks make poor keys for accessing the cache. Platform-specific command-line flags, path separator characters (/ or \), and absolute paths severely limit cache re-use.
  2. Implementing different task signatures to work around the previous limitations (overriding BuildContext.hash_env_vars for example, as sketched after this list) will at best cause only performance issues (long startup times), and at worst mysterious cache reuse errors.
  3. Because of the two previous points, the build system can become too brittle and too complex.
  4. The Python runtime is essentially single-threaded. The build process is therefore unable to launch more tasks when retrieving files from the cache.
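For reference, the workaround mentioned in point 2 typically looks like the following sketch (the filtered variable names are assumptions; the list above explains why this approach is fragile):

    from waflib import Build

    original = Build.BuildContext.hash_env_vars

    def hash_env_vars(self, env, vars_lst):
        # drop variables that typically embed absolute paths (assumed names),
        # trading signature accuracy for better cache re-use
        vars_lst = [x for x in vars_lst if x not in ('INCPATHS', 'LIBPATH')]
        return original(self, env, vars_lst)

    Build.BuildContext.hash_env_vars = hash_env_vars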

The best system so far is to wrap the compilers or the commands in the manner of ccache. While this requires some more work up front, the resulting builds are faster and more robust.

The ccache application is limited to C/C++ compilations, but it is easy to write command-line wrappers for other tools. Such wrappers can then access custom low-latency TCP servers, for example.
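A minimal wrapper sketch (hypothetical, not a production tool) could hash the command line and the source contents to decide whether a cached object file can be reused; note that a real wrapper such as ccache hashes the preprocessed source, so that header changes are taken into account:

    import hashlib, os, shutil, subprocess, sys

    CACHE = os.path.expanduser('~/.ccwrap')   # hypothetical cache directory

    def main(argv):
        cmd = argv[1:]
        try:
            out = cmd[cmd.index('-o') + 1]
            src = [x for x in cmd if x.endswith(('.c', '.cpp'))][0]
        except (ValueError, IndexError):
            return subprocess.call(cmd)       # not a compilation: run unchanged

        h = hashlib.sha1(' '.join(cmd).encode('utf-8'))
        with open(src, 'rb') as f:
            h.update(f.read())      # a real wrapper would hash 'gcc -E' output
        key = os.path.join(CACHE, h.hexdigest())

        if os.path.exists(key):
            shutil.copy2(key, out)            # cache hit: reuse the object file
            return 0
        ret = subprocess.call(cmd)
        if ret == 0:
            if not os.path.isdir(CACHE):
                os.makedirs(CACHE)
            shutil.copy2(out, key)            # cache miss: store for next time
        return ret

    if __name__ == '__main__':
        sys.exit(main(sys.argv))    # e.g. ccwrap.py gcc -c foo.c -o foo.o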

Saturday, 8 December 2012

Running Waf on Pypy 2.0

Is Pypy now an option for running Waf builds? While Pypy 2.0 beta 1 still hangs on simple parallel builds, the Pypy nightly (59365-f2f4cb496c1c) seems to work much better.

The numbers below represent the best times of 10 runs on a 64-bit Ubuntu 12.10 laptop. The typical benchmark project was used for this purpose (./utils/genbench.py /tmp/build 50 100 15 5):

                  cPython 2.7.3   pypy-c-jit   pypy-c-nojit
    no-op build   0.76s           6.5s         7.7s
    full build    39s             45.4s        48.3s

The no-op build times represent the time taken to load the serialized Python data without executing any command. Pypy still uses a pure-Python implementation of pickle, which is likely to take much more time than the C extension present in cPython.
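This is easy to observe in isolation with a micro-benchmark along these lines (a sketch; the object graph is arbitrary):

    import time
    try:
        import cPickle as pickle   # C extension on cPython 2.x (pure Python on Pypy)
    except ImportError:
        import pickle
    data = pickle.dumps([{'id': i, 'deps': [1, 2, 3]} for i in range(200000)])
    t0 = time.time()
    pickle.loads(data)
    print('unpickle: %.2fs' % (time.time() - t0))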

This can also explain the differences in the full build times. If we subtract the no-op values, the Pypy runtime appears to be getting nearly as fast as cPython.

Saturday, 1 September 2012

KDE 4.9

Waf was originally created to ease the creation of KDE applications, but that has not worked out so well in practice. The first versions of KDE 4 were terrible, and I think they discouraged anyone from ever using it again.

Fortunately, version 4.9 has changed things for the better, and it finally provides a pleasant development environment. After the stability fixes (the plasma desktop does not crash anymore, the network manager just works), there are at least fewer annoyances than in other desktop environments. In particular, the focus stealing prevention policy helps concentration, and applications no longer pop up password/keyring windows all the time.

If Qt5 and KDE5 do not break the API too much, we should see more KDE applications appearing over time.

Monday, 13 August 2012

Computed gotos in Python 2.7

Since Pypy does not work too well for multithreaded applications at the moment, I am now stuck with cPython.

Since Python 2.7.3 is about as fast as Python 3.2 for my applications, I wondered which Python 3 optimizations could be backported to 2.7. The computed gotos patch did not look too complicated to adapt, so I created my own version. Here are the two files to add in order to build a computed-gotos-enabled cPython 2.7.3 interpreter: Python/ceval.c and Python/opcode_targets.h.

The optimization does not seem to make a visible difference on my applications though, even after recompiling with -fno-gcse/-fno-crossjumping.