Monday, October 17, 2016

Ansible Inventory

Recently I've started using ansible after a couple of years of using Salt Stack. In general, ansible feels very easy to use and very versatile. However, one of the things I miss is an easier way to handle your hosts inventory: managing groups, managing variables, etc.

For that reason I started a small program that helps me do all the inventory management using a console interface. It also integrates directly with ansible as a dynamic inventory. Here are some of the features:

  • Add, edit and delete hosts and groups
  • Add, edit and delete variables for hosts and groups
  • List hosts and groups
  • Show the hierarchy tree of groups
  • Unique color per group, host and variable for visual identification
  • Use of regular expressions for bulk targeting
  • Import an already existing inventory in the ansible JSON format
  • Direct use as a dynamic inventory with ansible (--list and --host)
  • Different backends with concurrency: file (for local use) and redis (for network use)
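Regarding the last point, Ansible's dynamic-inventory contract is simply JSON printed to stdout in response to --list and --host. Here is a minimal sketch of that contract; the hosts, groups and variables are hypothetical, and the real tool would load them from its file or redis backend instead:

```python
#!/usr/bin/env python3
"""Minimal sketch of the dynamic-inventory contract ansible expects.
All data below is hypothetical example data."""
import json
import sys

INVENTORY = {
    "webservers": {
        "hosts": ["web1.example.com", "web2.example.com"],
        "vars": {"http_port": 80},
    },
    # "_meta" lets ansible get all host variables from a single --list
    # call instead of invoking the script with --host once per host.
    "_meta": {
        "hostvars": {
            "web1.example.com": {"ansible_host": "10.0.0.11"},
            "web2.example.com": {"ansible_host": "10.0.0.12"},
        }
    },
}

def dump(args):
    """Return the JSON string ansible expects for --list / --host."""
    if args and args[0] == "--list":
        return json.dumps(INVENTORY)
    if len(args) == 2 and args[0] == "--host":
        return json.dumps(INVENTORY["_meta"]["hostvars"].get(args[1], {}))
    return json.dumps({})

if __name__ == "__main__":
    print(dump(sys.argv[1:]))
```

With a script like this you can run, for example, `ansible all -i ./inventory.py -m ping`.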
Let me show you how it looks.

You can get more information and the tool itself on GitHub:

As always, all comments and suggestions are welcome, so please let me know what you think.

Thursday, September 29, 2016

Image compression tool for websites

Why to compress images?

One of the most important tricks to make a web page load faster is to make its components smaller. Nowadays images are one of the most common assets in web pages and also the ones that tend to account for a big part of the page size. This means that if you reduce the size of the images, there will be a big impact on the size of the web page, and it will therefore load noticeably faster.

I've been compressing images on websites for a while, running benchmarks before and after the compression. On some websites I could double the performance with image compression alone.

How to compress images?

There are several very good tools to compress images, but usually each one handles only a single type (JPEG, PNG, etc.) and operates on a single file. This makes it hard to apply compression to a whole website, where you would need to find all the image files and compress each one with the tool appropriate to its type.

For this reason I created a script that finds all images, makes a backup of them and compresses them with the right tool depending on their type. It also has some additional features that come in very handy when you are dealing with image compression every day.

The image compression script

The compression script mainly uses 4 tools, and all the compression merit goes to them: jpegoptim (by Timo Kokkonen), pngquant (by Kornel Lesiński), pngcrush (by Andrew Chilton) and gifsicle (by Eddie Kohler). You will need to install these tools for the script to work, and the script itself will warn you if they are not installed. On a Debian/Ubuntu system you can install them with this command:
sudo apt-get install jpegoptim gifsicle pngquant pngcrush
Once you have these compression tools you can start using the compression script. Let's see the options we have.
 Use: ./dIb_compress_images <options> <path>

      -r              Revert. Replace all processed images with their
                      original backups.
      -j              Compress JPEG files.
      -p              Compress PNG files.
      -g              Compress GIF files.
      -L              Use lossless compression.
      -q              Quiet mode.
      -c              Continue an already started compression. This will
                      compress only files that have not yet been backed up.
      -t <percentage> Only use the converted file if the compression
                      saves more than <percentage> of its size. Files below
                      <percentage> will appear as 0% in the output.

 NOTE: If none of -j, -p or -g are specified all of them (-j -p -g)
       will be used.

So you can choose which types of images you want to compress (by default all of them) in a given path. The script will find those image types recursively inside that path and compress them.
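The core of such a script is a recursive walk plus a per-extension tool dispatch. A sketch of that idea in Python (the tool options shown are illustrative defaults of mine, not the script's actual flags):

```python
from pathlib import Path

# Per-extension tool choice; the exact options here are illustrative only.
TOOL_FOR_EXT = {
    ".jpg": ["jpegoptim", "--max=80"],
    ".jpeg": ["jpegoptim", "--max=80"],
    ".png": ["pngquant", "--force", "--ext", ".png"],
    ".gif": ["gifsicle", "--batch", "-O3"],
}

def find_images(root):
    """Recursively yield image files under root, matched by extension."""
    for path in Path(root).rglob("*"):
        if path.suffix.lower() in TOOL_FOR_EXT:
            yield path

def command_for(path):
    """Build the compression command line for a single image file."""
    return TOOL_FOR_EXT[path.suffix.lower()] + [str(path)]
```

A real implementation would then run each command with subprocess.run and check its exit status before deciding whether to keep the result.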

You can compress them using lossy compression (the default, and recommended) or lossless (with the -L flag). Usually lossy compression makes no visible difference to the naked eye between the uncompressed and compressed versions of an image, while producing a much smaller file, so it is the recommended mode.

For each image, the script will make a backup, and this backup will never be overwritten, so the backup is always the original image. In case you want to revert the changes, because you found some visual differences or for any other reason, you can use the -r flag over the same path, which will cause all the image backups to be restored.
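That never-overwrite rule is what makes -r safe to run at any time. A sketch of the backup/restore logic (the backup suffix here is my invention, not the script's actual naming):

```python
import shutil
from pathlib import Path

BACKUP_SUFFIX = ".dIb_bak"  # hypothetical suffix for illustration

def backup(path: Path) -> None:
    """Copy the original aside once; never overwrite an existing backup,
    so the backup always remains the original image."""
    bak = path.with_name(path.name + BACKUP_SUFFIX)
    if not bak.exists():
        shutil.copy2(path, bak)

def revert(path: Path) -> None:
    """Restore the original image from its backup, if one exists."""
    bak = path.with_name(path.name + BACKUP_SUFFIX)
    if bak.exists():
        shutil.copy2(bak, path)
```

Because backup() refuses to overwrite, running the script twice still leaves the first (original) backup in place.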

By default, each time you execute the script on a given path, it will recompress all the images regardless of whether they were already compressed. If you want to skip the already compressed images, you can use the -c flag. If you use it together with the quiet mode flag (-q), you can add the script as a cron task to periodically compress the images of your site.
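If you go the cron route, a crontab entry could look like this (the install path and schedule are placeholders, not something the script prescribes):

```shell
# Recompress new images every night at 03:00, quietly, skipping
# files that already have a backup. Path is a placeholder.
0 3 * * * /usr/local/bin/dIb_compress_images -c -q /var/www/html
```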

With the -t parameter you can also specify a percentage threshold, so the compressed version of an image is only kept if it saves at least that percentage of its size.
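The threshold check is simple arithmetic. For example, with -t 25, a file compressed from 100KB to 70KB (30% saved) is kept, while one compressed to 95KB (5% saved) is not. Sketched here with hypothetical function names:

```python
def percent_saved(original: int, compressed: int) -> float:
    """Space saved by compression, as a percentage of the original size."""
    return 100.0 * (original - compressed) / original

def keep_compressed(original: int, compressed: int, threshold: float) -> bool:
    """Keep the compressed file only if it saves more than threshold percent."""
    return percent_saved(original, compressed) > threshold
```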

Here is an example output of the most basic execution. It is compressing images from a real website, although I changed the image names in the script output for security reasons.

In this case, the images used 35MB, but after compression they only use 12MB, roughly a third of their original size!

Again, these compression rates are thanks to the fantastic tools the script uses. What I did was gather information from many tests to find the options that give the best balance between size and quality (and also compression time).

As always, let me know your thoughts; suggestions to improve it are more than welcome!

The script is available here:

Friday, July 1, 2016

PyWench: Huge improvements

I've recently been paying some attention to this tool and discovered some bugs, as well as plenty of room for improvement.
Basically, I found that the requests-per-second metric was not very precise and that the performance of the application was not very good.
Regarding the requests-per-second issue, I changed the way it is calculated, and now I can guarantee that the results are quite accurate (compared to what I see in the log files of the benchmarked server).
I also found that the performance was quite bad, mainly because of the Python threading system and the GIL. So I decided to migrate the tool to Python multiprocessing, and the performance improvement has been huge: almost 3 times faster.
Since I was already changing things, I also switched the plotting library from gnuplot to matplotlib, as gnuplot wasn't very stable. This means the plot can no longer be viewed live; instead, the "-l" option now lets you play with the plot once the benchmark is done. Also, the tool should now work properly on systems without an X server.

Please be aware that you need a big server (several cores, plenty of bandwidth, etc.) to benchmark a website properly; otherwise you end up benchmarking the tool itself or your own network connection. I will keep working on this tool, and one of the things I would like to do is make it distributed, so a benchmark can run from different servers and then gather all the results in a central node. This would help eliminate bottlenecks on the client side of the benchmark.

Please, if you use the tool, report any bugs you find, and of course let me know if you see ways to improve it. All comments are welcome :).

You can already find this new version at: