Lee Rowlands, Senior Developer
Boosting performance of a complex Drupal 7 project with Blackfire.io
Earlier in the year we worked with a household Australian name to help them build a next-generation, more performant version of their energy-comparison site.
We used Blackfire.io to optimise the critical elements of the site.
Read on to find out more about our experience.
Project background
We've built some pretty complex sites, and were thrilled when this project came our way.
We took a fairly bold approach to the project, focusing on key pain points of the previous build:
- A search results page that was slow to load.
- Energy offers that were difficult for admin users to add and edit.
We settled on two sites: an admin site where users maintain the energy offers (with file-upload support), and a separate consumer site to power the offer search.
So that both sites can access shared data, the admin site stores and updates offers in AWS DynamoDB, the canonical data source.
Published energy offers (those available for consumers to search) are stored in an Elasticsearch index. Access to both DynamoDB and Elasticsearch is abstracted behind a generic PHP library containing the infrastructure concerns as well as the domain model.
This means the bulk of the code is decoupled from Drupal: the two sites interact with storage and retrieval via the domain-model interfaces. This decoupling also meant we could test domain logic without needing the full Drupal stack.
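A minimal sketch of that decoupling; the class and interface names here are illustrative assumptions, not the project's actual library code:

```php
<?php

// Domain object representing a single energy offer (stand-in for the real class).
class Offer {
}

// Both sites program against an interface like this rather than against
// Drupal or a specific storage backend.
interface OfferStorageInterface {

  /**
   * Persists an offer to the underlying store.
   */
  public function save(Offer $offer);

  /**
   * Loads a single offer by identifier, or returns NULL if it is absent.
   */
  public function load($id);

}
```

The admin site wires such an interface to a DynamoDB-backed implementation, while the consumer site reads from an Elasticsearch-backed one; the Drupal code only ever sees the interface.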
The key piece of site functionality is the ability to enter your current energy usage and have the system show you an indicative bill for each of the offers available in your area.
This means you can't use reverse-proxy caching technologies like Varnish: each search relies on user-posted data that feeds into the complex calculations performed by the algorithm.
Throughout the build we identified a number of places to optimise the algorithm, but in the interest of avoiding premature optimisation we waited until the bulk of the site was built before profiling.
Enter Blackfire.io
Having had success using Blackfire.io to profile the Drupal 8 installation process, we opted to use it to help us squeeze the most out of our application.
Setting up was a breeze: we simply added the required repositories, installed the packages and completed the configuration.
Because we were interested in profiling the POST requests to the search form, where the most intensive calculation algorithm ran, we submitted the form in the browser and copied the request as a cURL command from the developer console.
From there it was straightforward to hand that cURL command to the blackfire CLI binary to profile a search submission and the algorithm.
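For example, with an illustrative URL and form fields (not the real site's), profiling a search submission looked something like this:

```bash
# Copy the request from the browser's network tab as cURL, then hand the
# same command line to the blackfire binary to profile it.
blackfire curl 'https://example.com/energy-offers/search' \
  -X POST \
  --data 'postcode=4000&daily_usage=18'
```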
First pass and eliminating HTTP requests
We ran our first pass to generate a baseline for comparison. The test was run from a local VM against a staging AWS DynamoDB instance with realistic data, with Elasticsearch running on an EC2 instance. We were expecting some network latency, but found that HTTP requests accounted for 94% of the page-load time. Even allowing for production running in the same data centre rather than on a local VM, this was surely low-hanging fruit.
The bulk of our search data was stored in Elasticsearch, but each record was associated with the retailer that offered it. Thanks to profiling, we could see that the bulk of the load time was spent loading each retailer entity in turn from the DynamoDB store. Given changes to retailer details (address, logo, etc.) would be very infrequent, we added a cache layer, which was easy to do because access to this data sat behind a domain-model interface. Thanks to the trusty service container, we simply created a cached decorator for our retailer storage service.
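A sketch of what such a decorator might look like; the interface and class names are assumptions:

```php
<?php

// Hypothetical retailer storage interface; the real project's API may differ.
interface RetailerStorageInterface {
  public function load($id);
}

// Decorator that memoizes retailer loads in front of the DynamoDB-backed
// implementation, since retailer details rarely change.
class CachedRetailerStorage implements RetailerStorageInterface {

  protected $decorated;
  protected $cache = array();

  public function __construct(RetailerStorageInterface $decorated) {
    $this->decorated = $decorated;
  }

  public function load($id) {
    // Only hit the underlying store on a cache miss.
    if (!array_key_exists($id, $this->cache)) {
      $this->cache[$id] = $this->decorated->load($id);
    }
    return $this->cache[$id];
  }

}
```

The container then serves the decorator wherever the original storage service was injected, so no calling code needed to change.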
Second pass and hashing
After implementing this cache layer, we had reduced page-load time by 90% compared with our first pass. Digging into the second pass, we found a lot of time being spent hashing data returned from the Elasticsearch index. To track whether any changes needed saving on the admin site, we had implemented a hash calculation in the Offer object's constructor; the hash could be recalculated at any time to determine whether changes needed to be persisted. Because the domain model allowed storage of offers in both DynamoDB and Elasticsearch, offers loaded back from Elasticsearch were also calculating this hash, but since the consumer site is read-only this was redundant. So we slightly modified the constructor to only calculate the hash if it was missing, saving 56% of the CPU time of the already-improved page.
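Conceptually, the fix looked like this (the property and method names, and the hashing scheme, are illustrative):

```php
<?php

// Only calculate the hash when one wasn't loaded with the record, so
// read-only consumer-site loads skip the expensive calculation entirely.
class Offer {

  protected $values;
  protected $hash;

  public function __construct(array $values, $hash = NULL) {
    $this->values = $values;
    $this->hash = ($hash !== NULL) ? $hash : $this->calculateHash($values);
  }

  protected function calculateHash(array $values) {
    // Stand-in for the project's real hashing scheme.
    return sha1(serialize($values));
  }

}
```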
Third pass and calculate on write
The bulk of the algorithm for calculating the energy-offer estimate involves determining how many days a given tariff overlaps with a given season. On our third pass we saw a lot of time being spent calculating these intersections. But these are static: they only change when the offer changes, so they could be calculated and persisted with the offers when they were saved. After moving this calculation to write time we saw another 10% reduction in CPU time. Now we were an order of magnitude faster than the baseline.
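As a sketch of the idea (the function name and signature are assumptions), the overlap is simple date arithmetic that can now run once at save time instead of on every search:

```php
<?php

// Number of whole days a tariff period overlaps a season. In the real
// project this value is persisted with the offer at write time.
function tariff_season_overlap_days(\DateTime $tariff_start, \DateTime $tariff_end, \DateTime $season_start, \DateTime $season_end) {
  // \DateTime objects compare chronologically, so max()/min() pick the
  // later start and the earlier end of the two ranges.
  $start = max($tariff_start, $season_start);
  $end = min($tariff_end, $season_end);
  if ($start > $end) {
    // The ranges don't intersect at all.
    return 0;
  }
  return $end->diff($start)->days;
}
```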
Fourth pass and optimising \DateTime creation
On our fourth pass we noticed a lot of CPU time being spent constructing \DateTime objects. Many of the offer properties are timestamps, which are stored as strings and converted back into \DateTime objects in the denormalizer. We were using the generic \DateTime constructor in a utility factory, but in every case we knew the stored format of the string, so switching to the more performant \DateTime::createFromFormat() and statically caching a single timezone object (instead of creating a new one each time) saved us some more time.
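A sketch of the optimised factory; the class name, format string and timezone are assumptions for illustration:

```php
<?php

// Because the stored format is known, \DateTime::createFromFormat() avoids
// the format-guessing the generic constructor performs, and the timezone
// object is created once and reused across calls.
class DateFactory {

  protected static $timezone;

  public static function fromStoredValue($value) {
    if (!isset(self::$timezone)) {
      self::$timezone = new \DateTimeZone('Australia/Brisbane');
    }
    return \DateTime::createFromFormat('Y-m-d\TH:i:s', $value, self::$timezone);
  }

}
```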
Wrapping up
There are a number of profiling options on the market for PHP, but Blackfire is by far the easiest we have found to install, use and interpret. The ability to switch between CPU cycles, load time, memory use and I/O time, and to quickly pinpoint pain points, means you can yield real performance improvements in just a few passes, as this case study shows.
I look forward to using Blackfire on my next project and in my open-source contributions.
This post originally appeared on the Blackfire.io blog.