One year of desktop Linux


If you are considering switching to Linux from a Windows or OS X machine, you may find some of my experience useful in this post, written one year after I started using Linux exclusively for my professional life as a scientist. With the ubiquity of web applications, the desktop environment might seem less important than it was a few years ago. Still, there are plenty of things that are better done on a desktop, including computation-heavy or specialized analyses. This post assumes you have some knowledge of Linux, specifically how to install applications and how to edit configuration files.

I first tested Linux on my laptop in 2001 and 2002 but was only convinced to switch to it permanently for my personal use with the arrival of Ubuntu Dapper Drake in 2006. Using Ubuntu at home had an impact on my daily professional life. For example, our collection of reagents is maintained on a LabKey server running on Ubuntu in a virtual machine. Experience with the command line definitely helped with the administration of our laboratory backup server, a Synology network disk station. Last but not least, extensive experience with the shell and shell scripts was crucial for data analysis projects, both in research and in teaching.

Before going further, I would like to share a desktop screenshot showing my current Gnome theme, OneStepBack. The easiest way to install the theme is to download the corresponding file and unzip it into a .themes folder in your home directory. The theme can then be applied with the Gnome Tweaks application.
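As a sketch, that install comes down to two shell commands; the archive name below is hypothetical (use whatever file you downloaded), and the gsettings call is a command-line alternative to clicking through Gnome Tweaks:

```shell
# install_gtk_theme: unpack a downloaded theme archive into ~/.themes,
# where Gnome Tweaks will pick it up.
install_gtk_theme() {
    mkdir -p "$HOME/.themes"
    unzip -o "$1" -d "$HOME/.themes"
}

# Example, assuming the archive was saved as OneStepBack.zip:
# install_gtk_theme ~/Downloads/OneStepBack.zip
# Apply the theme without opening Gnome Tweaks:
# gsettings set org.gnome.desktop.interface gtk-theme "OneStepBack"
```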


An important choice when switching to Linux is the distribution, as it affects several aspects of life with Linux. Based on my previous experience, I chose Ubuntu because “it just works”. The size of the Ubuntu user community is a major asset, since many questions that I might have had were already asked and answered by someone. Many thanks to those who asked and answered questions, as well as to those sharing their experience with specific application installation or configuration issues. In addition to generic Linux documentation, one of the best sources of information on arcane configuration options is the ArchWiki, the documentation portal of Arch Linux (which you can try in a user-friendly version called Manjaro).

Installation and basic configuration

It was not an easy choice to change after 18 years of Mac use. I still like the way Macs work and how the desktop looks, but the lack of a Pro solution in recent years, as well as the steady improvement of user-friendliness in Linux, settled the decision. Ubuntu 17.04 was thus installed on a Dell machine with plenty of RAM, an SSD for the system (more on this later) and two 2 TB data disks.

The initial installation required endless updates of the preinstalled Windows system, shrinking the Windows partition and installing Ubuntu. The most difficult part was assigning the two data disks to a ZFS pool in a ‘mirrored’ configuration. The data, which are also mirrored daily to an external network disk, are thus duplicated across the two disks without any further configuration. ZFS is probably a headache for most users, and unfortunately I did not have enough time to go beyond basic configuration.
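For the record, a minimal sketch of the pool creation; the pool and device names are placeholders, the commands need root rights, and zpool create wipes the listed disks:

```shell
# create_data_mirror: assemble two disks into a mirrored ZFS pool named
# "tank" (all names are placeholders; this destroys data on the disks).
create_data_mirror() {
    sudo zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
    sudo zfs set compression=lz4 tank   # cheap and usually worthwhile
    zpool status tank                   # check that the mirror is ONLINE
}
```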

While it might seem a minor issue to system administrators, who mostly use Linux servers through a command line interface, the desktop environment has a major impact when Linux is used through a graphical user interface. It has been my pleasure to test and work for some time with most of the major desktop environments and, strangely enough, I like them all. So, no, you won’t find here any of the “The 5 best desktop environments for Linux” nonsense. The current blog post is written under a Gnome Shell environment which, with extensions, mostly works the way I want. KDE Plasma is probably my “go to” desktop environment when I take my laptop on a journey, since I am sure that external monitors, for example, are correctly recognized. On the workstation, Cinnamon, a GTK3-based desktop environment from the Linux Mint creators, works flawlessly. I spent some time as well with XFCE, my environment of choice for my 12-year-old laptop, and with MATE.

Advantages and annoyances of using Linux

My major problem since switching has been the death of the system SSD in December last year, 8 months after buying the workstation. Maybe it was not Linux’s fault, but I tend to believe that the very frequent writing of logs and other information to the disk had a catastrophic effect. I am not a specialist, but I now prefer to use a 7200 rpm hard disk for the system. So be careful about the type of media your operating system lives on. You might want to follow some advice from Pjotr, a Linux user from Holland.

A clear advantage of using Linux is anything linked with development, from scripting in R or Python to installing the software you need for mapping reads to DNA sequences (where Conda and Bioconda are fantastic tools). Many excellent text editors are available; I have a personal preference for Kate, for its many customization options, its speed and its overall usability.


A major advantage of using Linux daily is the existence of excellent open-source tools and their ease of installation, mostly with an apt install program command. For Python I use Spyder, and for R, RStudio. Manuscript figures benefit from editing in Inkscape, and the manuscripts, as well as other documents, are written with LibreOffice. Bibliography is handled gracefully by Zotero and its extensions, together with Firefox. For any kind of ideas, for collecting figures from papers, and for general notes, CherryTree is a fantastic open-source program. PDF reading benefits from Okular. Image analysis and editing are the realm of ImageJ (and FIJI) and Gimp. For file searches, CatFish is excellent. For a knowledge database with indexed PDF files, Recoll is my tool of choice. Handling PDF files can be done with an extremely flexible Java tool, jPdftweak.
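Most of the tools above install with a single command from the Ubuntu repositories (Zotero is an exception and ships its own installer); the package names below are my best recollection, so double-check them with apt search:

```shell
# install_desktop_tools: grab most of the programs mentioned above in
# one go; package names are assumptions to verify with apt search.
install_desktop_tools() {
    sudo apt install -y inkscape libreoffice okular gimp catfish recoll spyder
}
```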

Switching to Linux is not easy, and you have to be ready to invest some time in customizing your experience with the system. Some very strange bugs manifested themselves in the printing system, for example: from time to time, pages printed on a local network printer carry a “Top secret” watermark :-(. Fonts can also be a problem, especially when exchanging files with colleagues who use Macs or Windows machines. Establishing a file share server is not painless either (the best solution I found was to install and configure a Samba server on my machine).
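That Samba setup boils down to a few steps. The sketch below uses placeholder names (share [data], path /data, user alice) and is a minimal configuration, not a hardened one:

```shell
# setup_samba_share: install Samba, append a minimal share definition,
# give an existing user a Samba password and restart the daemon.
# Share name, path and user name are placeholders.
setup_samba_share() {
    sudo apt install -y samba
    sudo tee -a /etc/samba/smb.conf <<'EOF'
[data]
   path = /data
   read only = no
   valid users = alice
EOF
    sudo smbpasswd -a alice        # sets the Samba password for "alice"
    sudo systemctl restart smbd
}
```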

In view of the time and effort spent setting up and customizing my Linux machines, it would be impossible for me to say that I regret my decision. Having invested in this installation places me somewhere between the post-purchase rationalization and IKEA effect cognitive biases. I’ll be back with my experience at two years post-switch, probably in April 2019…

Conclusion: if you are already a Linux ‘power user’ and you don’t need proprietary applications for your work, switching to Linux is a lot of work coupled with a lot of fun.

If you have any specific questions or want to share your positive or negative experience, please head to the comments section. Thank you.


Svg based presentations with Inkscape and Sozi


Have you ever heard of deck.js, Impress or Reveal? These are tools that allow one to create nice web-based presentations with plenty of animations. As strange as it may sound, I first looked at these apps because I wanted some smooth drop shadows for the pictures in my LibreOffice presentations, an effect CSS can provide. I tried all the above-mentioned libraries and tools and was disappointed by the fact that adding images and graphics seems to be an afterthought. These tools are excellent for text-based presentations, but placing vector graphics such as SVG in a particular position and making graphically rich presentations is not their main domain of use. Users have circumvented these limitations by writing extensions; see, for example, the extensions available for deck.js.

Wandering on the web led me to a tool that allows graphics-rich presentations to be made with the best open-source tool for vector graphics creation: Inkscape. One of the integrated plug-ins, called JessyInk, uses a succession of layers to create an SVG file whose layers, when the file is opened in a web browser, are shown one after the other. JessyInk is an impressive tool, allowing one to annotate the presentation live or to show an overview via clever keyboard shortcuts, but I did not like the fact that the original SVG had to superpose all the layers. Additionally, the SVG file is modified by JessyInk, with some elements not being rendered at all in the resulting presentation. Not good.

In search of an alternative, I found Sozi, an application that takes a different approach from JessyInk. It allows something I did not think possible: taking a poster made with Inkscape or another SVG editor and building a presentation from successive views of different parts of the large image. Everything is integrated in an HTML/JSON pair of files that combine the original SVG with JavaScript magic. While the Sozi interface needs some ergonomic improvements, the tool is robust enough to have let me create a 60-slide presentation in an evening, from a large SVG file I already had in Inkscape. A few of the slides can be viewed on my Bitbucket site (although some images are missing from the presentation). Scrolling zooms in and out of the slides, and clicking on the slide number opens a nice menu with all the slide titles for fast navigation. The first slide is a screenshot of my current Ubuntu desktop (clicking on the image goes to the presentation page):

sozi presentation image
Sozi presentation, first slide

I will very likely use Sozi and Inkscape for presentations in the future. There are a few things to consider, though:

  1. Images inserted in the SVG file are better handled through relative links and should be kept in a directory close to the SVG file itself. Moving the presentation around is then possible as long as the HTML, the JSON and the image folder are moved together.
  2. Sozi does not yet allow several versions of a presentation based on the same SVG, though this feature might come in the future. It would also be very useful to allow SVGs in which text has been converted to paths, so that one does not need to carry exotic fonts around just to show a presentation. If you want to duplicate a presentation, you have to duplicate the HTML, the JSON and the original SVG and rename them so that they match each other.
  3. One has to be familiar with Inkscape and its quirks.
  4. Web browsers are not exactly built with presentations in mind; the mouse pointer, for example, insists on showing the title of the web page.
  5. Do not use layers in the original Inkscape document. Somehow Sozi becomes confused and may transform the layers differently, giving a very strange result. It is better to use locked elements if you want a pattern of rectangular regions as a template for the graphical elements of the presentation.
  6. Always keep a backup of your presentation in PDF – sozi-to-pdf, a module installed separately, does it gracefully. It generates a rather large PDF file because it is composed of high-quality PNG graphics, but the result does not depend on locally installed fonts or any other resources to be displayed as you expect.
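The renaming constraint in point 2 is easy to script. The .sozi.json/.sozi.html suffixes below are an assumption about how Sozi names its companion files; check the names of your own presentation files before relying on it:

```shell
# duplicate_sozi: copy the three files of a Sozi presentation under a
# new base name, so that the copies keep matching each other.
duplicate_sozi() {
    local old="$1" new="$2"
    cp "${old}.svg"       "${new}.svg"
    cp "${old}.sozi.json" "${new}.sozi.json"
    cp "${old}.sozi.html" "${new}.sozi.html"
}

# Usage: duplicate_sozi talk talk-short
```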

Update (April 2017). For the type of presentation I usually do, it is better to distribute the images or drawings among separate SVG files. This helps a lot in finding a particular image or result, which is more difficult with very large SVG files such as those used for a Sozi presentation.

Any alternatives to LaTeX for collaborative manuscript writing in science?

In a world in which we read mostly on screens and in which unprintable data like videos or high resolution images are part of published papers, it makes sense to think of new ways of producing, sharing and reading research results. Finding ways to easily collaborate with co-workers and to be able to keep a manuscript in a shareable and flexible format is an ongoing quest (see for example, datacite).

When looking at online tools that allow collaborative manuscript writing, I was very much impressed by Overleaf’s interface and gave it a try for writing a real manuscript. While some of my colleagues had no problem working with the system, it is still a little odd that, for example, including citations requires uploading a file in the .bib format. Some ‘infinite compiling’ errors scared one of my collaborators and were perceived as a lack of robustness of the system. Adding references and cross-references remains quite involved, and one needs to have spent some time in LaTeX’s innards to get to a nice end result.

Lens Writer screenshot
An image of Lens Writer in action. Installing the package from GitHub and launching a local server is very simple.

This post was motivated by my recent enthusiasm on reading about a JavaScript library called Substance, which serves as a basis for several projects, including one designed to allow easy writing and sharing of scientific data: Lens Writer. The philosophy of this way of writing a scientific report is, from my very limited understanding of it, that everything revolves around web-based technologies running on JavaScript. An early version of the editor, working with Node.js, provides a playground for those curious to test its capabilities. Definitely a project that will be very interesting to follow!
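From memory, getting a local copy running follows the usual Node.js routine; the repository URL and npm scripts below are assumptions based on that workflow, not verified against the current project:

```shell
# run_lens_writer: clone the project and serve it locally. The URL and
# the npm scripts are assumptions from the standard Node.js workflow.
run_lens_writer() {
    git clone https://github.com/substance/lens-writer.git &&
    cd lens-writer &&
    npm install &&
    npm start    # then open the printed local address in a browser
}
```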

Film cameras and film

Like many other passionate photographers, I began looking around, testing old used film cameras and having fun with them. I would like to share some of the good and bad points of these cameras, and I thought about putting it all in a “page” (as WordPress designates it). However, several posts would certainly be better.

Update: Three years later, a slight reorganisation of this small site gives priority to posts, with pages that only contain links to the relevant entries.

First, my excellent impression of the Fuji Superia line of films. I have used the ISO 200, 400 and 800 versions and currently use mostly Fuji Superia 200. This negative film has oversaturated colors with a greenish cast, fine grain, and excellent contrast and resolution. Just to give an example (from a comparison of f/2.8 and f/5.6 images taken with the excellent Olympus XA compact camera):

Olympus XA: f/2.8 vs f/5.6

I don’t have much experience with color negatives, but I tested several rolls of Kodak Ektar 100 and Kodak Ultramax 400 and had various complaints about them compared with the Superias. First, the Ultramax colors are colder, and light sources, especially red ones, tend to look very bad. I only have one relatively good example of a shot on Ultramax, taken with a Contax TVS II camera:

Reflections on rue de Rennes

Ektar 100 gave me very nice results, but I had the impression that the images were soft even if full of detail. I don’t really know whether fine grain is that important. I only have one example of a picture taken with a Nikon FA SLR and Ektar 100 film:

Traffic jam

On the black and white side of film, I shot Ilford HP5+, its ISO 125 counterpart and Kodak Tri-X 400. In a Minolta X-500, an Ilford Delta 400 awaits development. Only one conclusion so far: for graphical work, the high contrast of Tri-X is very useful.

Tamron Adaptall 28mm

Why another lens?

Finding an Olympus OM-30 camera at a garage sale a few weeks ago rekindled my interest in film photography.
Olympus OM-30 SLR camera (OM-F)

Now, the Olympus camera came only with the standard 50mm f/1.8 compact lens, and I thought it would be fun to see how film works with a 28 or 24mm wide angle, without the crop factor that applies to APS-C digital cameras. The only widely known group of lenses interchangeable between different camera brands, given the right adapter, is Tamron’s Adaptall series.

Of the different Tamrons I have had the occasion to use, all were optically good to excellent, with one exception: the late plastic version of a 28-70mm f/3.5-4.5 zoom. One special mention goes to the famous Tamron 90mm f/2.5 (or f/2.8, there are several models) macro lens. A compact, fast zoom is the 35-80mm f/2.8-3.8, which delivers excellent image quality and is a pleasure to use. Let me show it to you, mounted on a Nikon D40 dSLR:

Tamron zoom model 01A

Tamron 28mm f/2.8, cheap, heavy, good

The lens shown in the following images differs from the typical Tamron 28mm f/2.8 only in that it is not painted black. I did not like the look of the black ones, which is why my sample is brushed aluminium:

Tamron 28mm f/2.8 white, front

The characteristics of the lens, as borrowed from the Tamron official site (in Japanese, thank you Google Translate) are:

Model: CW-28
Lens configuration: 7 elements in 7 groups
Minimum aperture: f/16
Minimum focusing distance: 0.25m
Filter diameter: 52mm
Weight: 240g
Maximum overall length × diameter: 65mm × 42mm
Produced between 1976 and 1979

Not mentioned: the aperture has 5 blades.

The lens was later replaced by a more compact version that is also lighter but, alas, takes 49mm filters. On Nikon I am used to 52mm filter diameters; it is much easier if all the lenses take the same caps and filters. Mounted on a D40, the lens looks impressive, due in part to the perspective distortion of the Fuji E900 lens used to take the picture:

Nikon D40 with Tamron 28mm f/2.8 adaptall lens

In fact the lens is pretty compact, even if it cannot compete with the Nikon Series E 28mm, which has a lot more plastic and a simpler optical formula (there is also a 5 cm difference in minimum focusing distance: 25 cm for the Tamron versus 30 cm for the Nikon):

Tamron 28mm vs Nikon 28mm

A few more images of the lens before jumping to conclusions about handling and optical quality:

Tamron 28mm f/2.8 front

Tamron 28mm f/2.8 side view

Tolo Toys and Tamron

Handling is excellent; the lens has the right size and bulk (for my hands, in any case). Manual focusing is very pleasant though not always easy (an f/2 lens would have been better, but that’s a lot more expensive and does not exist in Tamron’s Adaptall line).

Depth of field preview on a D40?

The Tamron was built in the 1970s and intended for use on many different cameras. Some of these, I imagine, could not focus and meter with the lens wide open and then stop it down “automatically” at the very moment the picture was taken. For those cameras, the Tamron has a switch on the side (marked A, for Automatic, I imagine). When the switch is moved it covers the A, and the lens aperture closes to the value set on the aperture ring:

Tamron 28mm f/2.8 depth of field preview switch

Now I have a depth of field preview on my Nikon D40!

Even if the camera had such a function, it would be close to useless due to the size of the image in the viewfinder; on a full-frame camera the situation would be different. Anyway, while depth of field preview is extremely useful on film, where you cannot see the result immediately, the lack of such a function is much less problematic on digital SLRs.

Image quality (on APS-C sensor)

Excellent! That sums up a few hours of tests and comparisons. At f/2.8 there is some loss of contrast and I would not use the lens wide open. However, from f/4 the images are crisp, with good contrast and plenty of detail everywhere. Take this opinion cautiously, because I am no expert and, for me, even the Series E Nikon, generally badly rated (see for example this evaluation, in French, of several Nikon lenses, or another one on Bjørn Rørslett’s site), has a lot to offer. I just don’t like its handling as much as that of the Tamron.

Quick 100% crops from images taken with the Tamron vs the excellent 18-55mm Nikkor kit lens:

Tamron vs Nikon kit lens

In the center, there is not much difference at f/4 in resolution and contrast (kit at left, Tamron at right). The extreme left side of the frame (right image) shows that the Tamron keeps more resolution, as would be expected from a prime versus a consumer zoom. Color rendition is different, and the Tamron also lets in less light at the same nominal aperture.

In conclusion, by taking pictures with the Tamron, one “benefits” from a marvelous mechanical device and gets similar or better results than with the kit lens. That’s all I needed to know. Some other tests and sample images follow.

Sample images with the Tamron 28mm f/2.8

Flowers at f/5.6 without or with camera flash

Evening place de Catalogne, f/4

Night on rue d'Alesia, long exposure

f/4 vs f/2.8 Tamron 28mm

Flare Tamron 28mm

Flare 2, Tamron 28mm f/2.8 at f/4

Pretty funny internal reflections and ‘rainbows’ appear when the sun is in the frame. However, contrast is preserved to a good extent in the shadows, and that’s important. The BBAR (Broad-Band Anti-Reflection) coating seems efficient.

Summary and conclusion

There are several good points about the Tamron 28mm f/2.8:

  • Excellent mechanical construction and a pleasure to handle
  • Very good image quality from f/4
  • Depth-of-field preview integrated in the lens (not really useful but funny to use)
  • Adaptall system that allows the use of the lens on other brand film or digital cameras
  • Takes Nikon-type 52mm filters, unlike its descendant, the 28mm f/2.5, which takes 49mm filters

And of course, there are downsides:

  • An aperture with “only” 5 blades. I did not see any adverse effect from that, though; it is not only the number of blades that determines the appearance of out-of-focus areas. It is true that the macro Tamrons have 9 blades and the zoom mentioned earlier has 8.
  • Quite heavy and bulky – the more compact Tamron that followed (28mm f/2.5) may have a more convenient size.
  • Not that good at f/2.8

In conclusion: a pretty/ugly lens that’s a pleasure to use.

If you have any comments, or negative or positive experience with Tamrons, please share them – it would be great to compare opinions.