Merging Communities

Come together, right now
— The Beatles, Abbey Road, Come Together

So September 28th, 2016 is the 6th birthday of LibreOffice, and at the recent conference we took a picture of those who were there on day zero:

[photo: the day-zero contributors at this year's conference in Brno]

As you might notice, I am not in that picture — on day zero I was working at Oracle, and was surprised by the news, like many others in this picture:

[photo: conference attendees who once worked on OpenOffice.org at StarDivision, Sun or Oracle]

This is everyone at this year's LibreOffice conference who used to work on the OpenOffice.org codebase at StarDivision, Sun or Oracle. A few people are in both pictures: Caolán McNamara and Thorsten Behrens were with LibreOffice from the start, but also worked on the OpenOffice.org team in Hamburg at some point in time. Of those still working on OpenOffice.org when LibreOffice started, I was the first to join LibreOffice — I had quit my job for that. It was an exciting time.

Looking back, both of these groups appear small — mostly because, merging them, the LibreOffice community became so much more than the sum of its parts:

[photo: the merged LibreOffice community at the conference]

And of course, while a lot of people were at the conference, not everyone could
join, so there are more contributors from each of these groups than are in the
pictures. This year's “state of the project” presentation showed again that the members of The Document Foundation are a truly worldwide community:

[image: worldwide distribution of The Document Foundation members]

So, like the branches of the different descendants of OpenOffice.org, the
contributors and communities did come together in Brno to push
LibreOffice forward as one!


Zu Spät: Hackfest Hamburg

Warum hast Du mir das angetan?
Ich hab’s von einem Bekannten erfahren.
(“Why did you do this to me? I found out about it from an acquaintance.”)

— Die Ärzte, Debil, Zu Spät

It's been more than two years since the last Hackfest in Hamburg! So we are indeed much too late (German: zu spät) in repeating this wonderful event. Just a day after everyone updated his or her desktop to Wily Werewolf, we will meet in Hamburg again for a weekend of happy hacking!

Hamburg Hackfest 2013 – carelessly stolen from Eike's Retrospective

So now, we will meet again. You are invited to drop by this weekend: we will celebrate a bit on Friday evening (ignoring the German culinary advice in the song linked above about “Currywurst and Pommes frites” — I imagine we prefer Club-Mate and pizza) and hack on LibreOffice on Saturday and Sunday. Curious new faces are more than welcome!

I would kill 500 lines and I would kill 500 more …

I would walk 500 miles and I would walk 500 more
— The Proclaimers, I'm Gonna Be (500 Miles)

So I recently noted that GitHub reports I have 1337 commits on LibreOffice since I joined Canonical in February 2011. Looking at those stats, it seems I also deleted a net 155,634 lines from the codebase over that time.

[screenshot: GitHub commit statistics for LibreOffice]

Even though I can't find that mail, I seem to remember that Michael Stahl, when joining the LibreOffice project, proclaimed his goal to be to contribute ‘a net negative number of lines of code.’1) Now, I have not looked into the details of the above stats — they might very likely turn out to be caused by some bulk change. Which would be lame, unless it's the killing of the old build system, for which I think I can claim some credit. But in general I really love the idea of ‘contributing a net negative number of lines of code’.

So, at the last LibreOffice Hackfest in Cambridge 2), I pushed a set of commits refactoring the UNO bindings of Writer tables. It all started so innocently. I was actually aiming to do something completely different: namely, to give the UNO cursors in Writer (SwUnoCrsr) somewhat saner resource management and drag them screaming and kicking out of the 1980s. However, once in unotbl.cxx, I found more of the “determined Real Programmer can write FORTRAN programs in any language” kind and more copypasta there than I could bear. I thought: “This UNO stuff has decent test coverage, you could quickly refactor it a bit.”

Of course I was wrong on both counts: On the one hand, when I started, the coverage was 70.1% LOC on that file, which is not really as high as I had expected. On the other hand, I did not end up with “a bit quickly”; rather, I went on to refactor away:
dc -e "`git log --author Michaelsen -p dc8697e554417d31501a0d90d731403ede223370^..HEAD sw/source/core/unocore/unotbl.cxx|grep ^+|wc -l` `git log --author Michaelsen -p dc8697e554417d31501a0d90d731403ede223370^..HEAD sw/source/core/unocore/unotbl.cxx|grep ^-|wc -l` - p"
-1015

… a thousand lines (the dc one-liner above simply subtracts the number of deleted lines from the number of added lines in my commits to that file). On discovering the lacking test coverage, I quickly added some more tests — bringing coverage to at least 77.52% LOC now.3) And yes, I also silently fixed the one regression I thereby discovered I had introduced, which nobody seemed to have noticed so far. One thing I noticed in this little refactoring spree is that while C++11's features might look tame compared to those of more modern programming languages in metrics like avoiding boilerplate, they still outclass what we had before. Beyond the simplifying refactoring, features like lambdas are really nice for non-interactive (test-driven) debugging, including quickly asserting on the state of variables some 10 stack frames up or down without going into major contortions in test code.
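To illustrate the idea, here is a minimal sketch with made-up names (not actual LibreOffice test code): a lambda installed as a test-only hook lets a test assert on state deep inside a call chain, with no interactive debugger needed.

// Minimal sketch (made-up names, not actual LibreOffice test code):
// a lambda installed as a test-only hook asserts on state deep inside
// a call chain, with no interactive debugger needed.
#include <cassert>
#include <functional>

struct Table {
    int nRows = 0;
    std::function<void(const Table&)> aInspectHook; // empty outside of tests
    void insertRow() {
        ++nRows;
        // imagine this call sits some 10 stack frames below the test
        if (aInspectHook)
            aInspectHook(*this);
    }
};

int main() {
    Table aTable;
    aTable.aInspectHook = [](const Table& rTable) {
        // the assertion runs right where the interesting state lives
        assert(rTable.nRows > 0);
    };
    aTable.insertRow();
}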

1) By the way, a quick:
dc -e "`git log --author Stahl -p |grep ^+|wc -l` `git log --author Stahl -p |grep ^-|wc -l` - p"
-108686

confirms Michael is more than living up to his personal goals.

2) Speaking of the Hackfest: The other thing I did there was helping/observing Sam Tuke getting set up for his first code contribution. While we have made great progress in making this easier than it used to be, we could still be a lot better at it. Sadly though, I didn't see a shortcut or simplification we could implement right away.

3) And along with that, it brought coverage of unochart.cxx from an abysmal 4.4% LOC to at least 35.31% LOC as collateral damage.

Addendum: Note that the Writer tables core also increased coverage quite a bit, from 54.6% LOC to 65% LOC.

Easterhegg: Vimpressing LibreOffice!

Das ist alles nur geklaut und gestohlen,
nur gezogen und geraubt.
Entschuldigung, das hab ich mir erlaubt.
(“It's all just nicked and stolen, just lifted and robbed. Excuse me, I took that liberty.”)
— Die Prinzen, Alles nur geklaut

So, you might have noticed that there was no April Fools post from me this year, unlike previous years. One idea I had was giving LibreOffice vi key bindings — except that apparently already exists: vibreoffice. So I went looking for something else and found odpdown by Thorsten, who just started to work on LibreOffice full-time again, and reading about it I kept thinking that it would be great to be able to run this right from your favourite editor: Vim.

And indeed: that is not hard to do. Here is a very raw video showing how to run presentations right out of vim:

[video: running a presentation right out of vim]

Now, this is a quick hack: Linux only, requires you to have the Python 3 UNO bindings installed, etc. If you want to play with it, clone the repo from GitHub and get started. Kudos go out to Thorsten for the original odpdown on which this is piggybacking (“das ist alles nur geklaut” — “it's all just stolen”). So: have fun with this — I will have to install vibreoffice now.

Following the White Rabbit

When logic and proportion have fallen sloppy dead
And the white knight is talking backwards
And the red queen’s off with her head
Remember what the dormouse said
Feed your head, feed your head

— Jefferson Airplane, White Rabbit

So, this was intended as a quick and smooth addendum to the “50 ways to fill your vector” post, bringing callgrind into the game and assuring everyone that its instruction counts are a good proxy for the walltime performance of your code. This started out mostly as expected, when measuring the instruction counts in two scenarios:

implementation   -O2, not inlined   -O3, inlined
A1               2,610,061,438      2,510,061,428
A2               2,610,000,025      2,510,000,015
A3               2,610,000,025      2,510,000,015
B1               3,150,000,009      2,440,000,009
B2               3,150,000,009      2,440,000,009
B3               3,150,000,009      2,440,000,009
C1               3,150,000,009      2,440,000,009
C3               3,300,000,009      2,440,000,009

The good news here is that this mostly faithfully reproduces some general observations on the timings from the last post on this topic, although the differences are more pronounced in callgrind than in reality:

  • The A implementations are faster than the B and C implementations on -O2 without inlining
  • The A implementations are slower (by a smaller amount) than the B and C implementations on -O3 with inlining

The last post also suggested the expectation that all implementations could — and with a good compiler, should — have the same code and the same speed when everything is inlined. Apart from the A implementations still differing from the B and C ones, callgrind's instruction counts suggest this actually is the case. Letting gcc compile to assembler and comparing the output, one finds:

  • Inline A1-3 compile to the same output on -Os, -O2, -O3 each. There is no difference between -O2 and -O3 for these.
  • Inline B1-3 compile to the same output on -Os, -O2, -O3 each, but they differ between optimization levels.
  • Inline C3 output differs from the others and between optimization levels.
  • Without inlinable constructors, the picture is the same, except that A3 and B3 now differ slightly from their kin as expected.

So indeed most of the implementations generate the same assembler code. However, this is quite a bit at odds with the significant differences in performance measured in the last post; e.g. B1/B2/B3 on -O2 created widely different walltimes. So it was time to test the assumption that running one implementation for a minute produces reasonably stable results statistically, by doing ten 1-minute runs for each implementation and seeing what the standard deviation is. The following was found for walltimes (no inline constructors):

implementation   -Os      -O2      -O3      -O3 -march=
A1               80.6 s   78.9 s   78.9 s   79.0 s
A2               78.7 s   78.1 s   78.0 s   79.2 s
A3               80.7 s   78.9 s   78.9 s   78.9 s
B1               84.8 s   80.8 s   78.0 s   78.0 s
B2               84.8 s   86.0 s   78.0 s   78.1 s
B3               84.8 s   82.3 s   79.7 s   79.7 s
C1               84.4 s   85.4 s   78.0 s   78.0 s
C3               86.6 s   85.7 s   78.0 s   78.9 s

And with inlining:

implementation   -Os      -O2      -O3      -O3 -march=
A1               76.4 s   74.5 s   74.7 s   73.8 s
A2               75.4 s   73.7 s   73.8 s   74.5 s
A3               76.3 s   74.6 s   75.5 s   73.7 s
B1               80.6 s   77.1 s   72.7 s   73.7 s
B2               81.4 s   78.9 s   72.0 s   72.0 s
B3               80.6 s   78.9 s   72.8 s   73.7 s
C1               81.4 s   78.9 s   72.0 s   72.0 s
C3               79.7 s   80.5 s   72.9 s   77.8 s

The standard deviation for all the above values is less than 0.2 seconds. That is … interesting: for example, on -O2 without inlining, B1 and B2 generate the same assembler output, but execute with a very significant walltime difference on hardware (5.2 s difference, or more than 25 standard deviations). So how have logic and proportion fallen sloppy dead here? If the same code is executed — admittedly from two different locations in the binary — how can that create such a significant difference in walltime performance while not being visible at all in callgrind? A wild guess, which I have not confirmed yet, is cache locality: when not inlining constructors, those might be in the CPU cache for one copy of the code in the binary, but not for the other.

By the way, this might also hint at the reason the -march= flag (which creates bigger code) seems so ineffective. And it might explain why performance is rather consistent when using inline constructors. If so, the impact of this is certainly interesting. It also suggests that allowing inlining of hotspots, like recently done with the low-level sw::Ring class, produces a much bigger performance improvement on real hardware than the meager results measured with callgrind. And it reinforces the warning made in that post about not falling into the trap of mistaking the map for the territory: callgrind is not a “map in the scale of a mile to the mile”.
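To make the procedure concrete, here is a generic sketch of such a repeated-runs measurement (illustrative only, not the harness the numbers above came from): run a workload N times and report the mean and standard deviation of the walltimes.

// Generic sketch of a repeated-runs measurement (illustrative only,
// not the harness used for the numbers above): run a workload N times
// and report the mean and standard deviation of the walltimes.
#include <chrono>
#include <cmath>
#include <cstdio>
#include <vector>

template <typename F>
void benchmark(F&& workload, int nRuns) {
    std::vector<double> aTimes;
    for (int i = 0; i < nRuns; ++i) {
        auto aStart = std::chrono::steady_clock::now();
        workload();
        std::chrono::duration<double> aElapsed =
            std::chrono::steady_clock::now() - aStart;
        aTimes.push_back(aElapsed.count());
    }
    double fMean = 0;
    for (double fTime : aTimes)
        fMean += fTime;
    fMean /= aTimes.size();
    double fVariance = 0;
    for (double fTime : aTimes)
        fVariance += (fTime - fMean) * (fTime - fMean);
    std::printf("mean %.1f s, stddev %.2f s\n",
                fMean, std::sqrt(fVariance / aTimes.size()));
}

int main() {
    benchmark([] { /* the vector-filling workload would go here */ }, 10);
}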

Addendum: As said in the previous post, I am still interested in such measurements on other hardware or with other compilers. All measurements above were done with gcc 4.8.3 on an Intel i5-4200U@1.6GHz.

SwNodeIndex: Ludicrous Speed

No-no-no, light speed is too slow!
Yes, we’ll have to go right to… ludicrous speed!

— Dark Helmet, Spaceballs

So, I recently brought up the topic of Writer's nodes in the LibreOffice ESC call. More specifically: the SwNodeIndex class, which is, if one broadly simplifies, an iterator over the container holding all the paragraphs of a text document. Before my modifications, the SwNodes container class kept all these SwNodeIndices in a homegrown intrusive doubly linked list, to be able to ensure they stay valid e.g. if a SwNode gets deleted/removed. Still — as usual with performance topics — wild guesses aren't helpful, and measurements should trump intuition. I used valgrind for that, and measured the number of instructions needed for loading the ODF spec. Since I did the same years and years ago on the old OpenOffice.org performance project, I just checked whether we had regressed against that. It's comforting that we did not at all — we were much faster, but that measurement has to be taken with a few pounds of salt, as a lot of other things differ between these two measurements (e.g. we now have a completely new build system, different compiler versions etc.). But it's good that we are moving in the right direction.
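To make the core idea concrete, here is a heavily simplified sketch (illustrative names, nothing like the real sw/ code): the container tracks all live indices, so it can keep them valid when a node is removed. A std::set stands in for the intrusive list the actual code used.

// Heavily simplified sketch (illustrative names, nothing like the real
// sw/ code): the container tracks all live indices so it can keep them
// valid when a node is removed. A std::set stands in for the intrusive
// list the actual code used.
#include <cassert>
#include <set>

class Nodes;

class NodeIndex {
public:
    NodeIndex(Nodes& rNodes, int nPos);
    ~NodeIndex();
    int getPos() const { return m_nPos; }
    void adjustForRemovalAt(int nRemovedPos) {
        if (m_nPos >= nRemovedPos)
            --m_nPos; // keep pointing at the same node
    }
private:
    Nodes& m_rNodes;
    int m_nPos;
};

class Nodes {
public:
    void registerIndex(NodeIndex* pIndex) { m_aIndices.insert(pIndex); }
    void unregisterIndex(NodeIndex* pIndex) { m_aIndices.erase(pIndex); }
    void removeNodeAt(int nPos) {
        // ... drop the node itself, then fix up every live index:
        for (NodeIndex* pIndex : m_aIndices)
            pIndex->adjustForRemovalAt(nPos);
    }
private:
    std::set<NodeIndex*> m_aIndices;
};

NodeIndex::NodeIndex(Nodes& rNodes, int nPos)
    : m_rNodes(rNodes), m_nPos(nPos)
{ m_rNodes.registerIndex(this); }

NodeIndex::~NodeIndex()
{ m_rNodes.unregisterIndex(this); }

int main() {
    Nodes aNodes;
    NodeIndex aIndex(aNodes, 5);
    aNodes.removeNodeAt(3); // a node before the index goes away
    assert(aIndex.getPos() == 4);
}

The table below summarizes the old code and all the variants discussed in the rest of this post: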

implementation       SwNodes      SwNodeIndex    total instructions   performance               line delta
DEV300_m45           71,727,655   73,784,052     9,823,158,471        ?                         ?
master@fc93c17a      84,553,232   60,987,760     6,170,762,825        0%                        0
std::list            18,461,317   103,461,317    14,502,230,571       -5,725% (-235% of total)  +12/-70
std::vector          18,986,848   3,707,286,032  9,811,541,380        -2,502%                   +22/-70
std::unordered_map   18,984,984   82,843,000     7,083,620,244        -627% (-15% of total)     +16/-70
std::vector rbegin   18,986,848   143,851,229    6,214,602,532        -30% (-7% of total)       +23/-70
sw::Ring<>           23,447,256   inlined        6,154,660,709        +11% (+2.6% of total)     +108/-229

With that comforting knowledge, I started to play around with the code. The first thing I did was to replace the handcrafted intrusive list with a std::list of pointers to the SwNodeIndex instances as a member of the SwNodes class. This was expected to slow things down, as now two allocations are needed: one for the SwNodeIndex instance and one for its node entry in the std::list. To be honest though, I didn't expect this to slow down the code handling the nodes by a factor of ~57 for the loading of the example document. The whole document loading time (not just the node handling) slowed down by a factor of ~2.4. So, ok, this establishes for certain that this part of the code is highly performance sensitive.

The next thing I tried, to get a feel for how the performance reacts, was using a std::vector in the SwNodes class. When reserving some memory early, this should severely reduce the number of allocations needed. And indeed this was quicker than the std::list, even with a naive approach just doing a push_back() for insertion and a std::find()/std::erase() for removal. However, the node indices are often temporarily created and quickly destroyed again, so adding new indices at the end and searching from the start is certainly not ideal: this is also slower than the intrusive list that was on master, by a factor of ~25 for the code doing the node handling.

Searching for a SwNodeIndex from the end of the vector, where we likely just inserted it, and then swapping it with the last entry makes the std::vector almost competitive with the original implementation: but still 30% slower. (The total loading time would only have increased by 0.7% using the vector like this.)
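Sketched with illustrative names (this is not the actual sw/ code), that removal idiom looks roughly like this:

// Illustrative sketch of the removal idiom described above (made-up
// names, not the actual sw/ code): search from the end, where a
// short-lived entry most likely sits, then swap with the last element
// and pop, so nothing needs to be shifted.
#include <algorithm>
#include <cassert>
#include <vector>

template <typename T>
void eraseUnordered(std::vector<T>& rVec, const T& rEntry) {
    auto it = std::find(rVec.rbegin(), rVec.rend(), rEntry);
    if (it != rVec.rend()) {
        *it = rVec.back(); // overwrite the found entry with the last one ...
        rVec.pop_back();   // ... and drop the now-duplicated tail
    }
}

int main() {
    std::vector<int> aEntries{1, 2, 3, 4};
    eraseUnordered(aEntries, 3); // order is not preserved, removal is cheap
    assert(aEntries.size() == 3);
}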

For completeness, I also had a look at a std::unordered_map. It did a bit better than I expected, but would still have slowed down loading by 15% in the example experiment.

Having ruled out that standard containers would do much good here without lots of tweaking, I tried the sw::Ring<> class that I recently rewrote based on Boost.Intrusive as an inline header class. This was 11% quicker than the old implementation, resulting in 2.6% quicker loading for the whole document. Not exactly a heroic achievement, but also not too bad for just some 200 lines touched. So this is now on master.
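For a feel of what that looks like, here is a generic Boost.Intrusive sketch (not the actual sw::Ring<> code): the link pointers live inside the element itself, so linking it into a list allocates nothing, and with everything in a header the compiler can inline all of it.

// Generic Boost.Intrusive sketch (not the actual sw::Ring<> code): the
// hook base class provides the prev/next pointers inside the element
// itself, so linking an element into the list allocates nothing.
#include <boost/intrusive/list.hpp>
#include <cassert>

namespace bi = boost::intrusive;

struct Index : public bi::list_base_hook<> {
    int nPos = 0;
};

int main() {
    bi::list<Index> aIndices; // holds no elements, only links them
    Index aIndex;
    aIndex.nPos = 42;
    aIndices.push_back(aIndex); // no allocation happens here
    assert(&aIndices.front() == &aIndex);
    aIndices.erase(aIndices.iterator_to(aIndex)); // unlink before it dies
}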

Why does this linked list outperform the old linked list? Inlining. Especially the non-inlined constructors and the destructor calling a trivial non-inlined member function. And on top of that, the constructors and the function called by the destructor called two non-inlined friend functions from a different compilation unit, making it extra hard for a compiler to optimize. Now, link time optimization (LTO) could maybe do something about that someday. However, with LTO being in different states on different platforms, and with developers possibly building without LTO for build time performance for some time, requiring the compiler/linker to be extra clever might be a mixed blessing: developers might run into “the map is not the territory” problems.

My personal take-aways:

  • The SwNodeIndex has quite a relevant impact on performance. If you touch it, handle with care (and with valgrind).
  • The current code has decent performance; further improvements likely need deeper structural work (see e.g. Kendy's bplustree work).
  • Intrusive linked lists might be cumbersome, but for some scenarios, they are really fast.
  • Inlining can really help (doh).
  • LTO might help someday (or not).
  • friend declarations for non-inline functions across compilation units can be a code smell for possible performance optimization.

Please excuse the extensive writing for a meager 2.6% performance improvement — the intention is to keep somebody (including me) from redoing some or all of the work above just to come to the same conclusion.


Note: Here is how this was measured:

  • gcc 4.8.3
  • boost 1.55.0
  • test document: ODF spec
  • valgrind --tool=callgrind "--toggle-collect=*LoadOwnFormat*" --callgrind-out-file=somefilename.cg ./instdir/program/soffice.bin
  • ./autogen.sh --disable-gnome-vfs --disable-odk --disable-postgresql-sdbc --disable-report-builder --disable-scripting-beanshell --enable-gio --enable-symbols --with-external-tar=... --with-junit=... --with-hamcrest=... --with-system-libs --without-doxygen --without-help --without-myspell-dicts --without-system-libmwaw --without-system-mdds --without-system-orcus --without-system-sane --without-system-vigra --without-system-libodfgen --without-system-libcmis --disable-firebird-sdbc --without-system-libebook --without-system-libetonyek --without-system-libfreehand --without-system-libabw --disable-gnome-vfs --without-system-glm --without-system-glew --without-system-librevenge --without-system-libcdr --without-system-libmspub --without-system-libvisio --without-system-libwpd --without-system-libwps --without-system-libwpg --without-system-libgltf --without-system-libpagemaker --without-system-coinmp --with-jdk-home=...

Auld Lang Syne

we’ll tak’ a cup o’ kindness yet,
for auld lang syne.

— Die Roten Rosen, Auld Lang Syne

Eike already greeted the Year of Our Lady of Discord 3181 four days ago, but I’d like to take this opportunity to have a look at the state of the LibreOffice project — the bug tracker status, that is.

By the end of 2014:

[chart: unconfirmed bugs in the tracker by the end of 2014]

And a special “Thank You!” goes out to everyone who created one of the over 100 Easy Hacks written for LibreOffice in 2014, and to everyone who helped, mentored or reviewed patches by new contributors to the LibreOffice project. Easy Hacks are a good way for someone curious about the code of LibreOffice to get started in the project with the help of more experienced developers. If you are interested in that, you can find more information on Easy Hacks on the TDF wiki. Note that there are also Easy Hacks about design-related topics and on topics related to QA.

If “I should contribute to LibreOffice once in 2015” wasn't part of your New Year's resolutions yet, you are invited to add it, as Easy Hacks might convince you that it's worthwhile and … easy.