
Add free performance monitoring to your websites with Lighthouse and AWS CDK

30 May 2020

The Burden of Performance

Nowadays, regardless of the size of the business, basically everyone needs some kind of web presence, be it a Software as a Service offering, an e-commerce website, a blog or a simple landing page to capture a new audience or just provide basic information.

And we all know the mantra too: if your website doesn’t perform, (potentially new) users will be lost, search engine positioning will suffer terribly and you’ll eventually be out of business. I know, a bit contrived… but it gets the point across.

Ok Google, make my website FAST

The domain of web page optimization is truly vast: it involves so many different topics, technologies, competences and business aspects that it’s notoriously hard to get right.

Even if there’s no silver bullet for developing for the web in a way that delivers perfect usability and optimal accessibility while keeping it all secure, thanks to powerful tools like Google’s Lighthouse developers today have at least a very good head start (and more help will come soon). Quoting from the project homepage:

Lighthouse is an open-source, automated tool for improving the quality of web pages. You can run it against any web page, public or requiring authentication. It has audits for performance, accessibility, progressive web apps, SEO and more. You can run Lighthouse in Chrome DevTools, from the command line, or as a Node module. You give Lighthouse a URL to audit, it runs a series of audits against the page, and then it generates a report on how well the page did. From there, use the failing audits as indicators on how to improve the page. Each audit has a reference doc explaining why the audit is important, as well as how to fix it.

Here is an example of what a Lighthouse report generated by the https://web.dev/measure/ web app looks like:

Screenshot of a Lighthouse example report

So to generate a Lighthouse report we can use a web application, Chrome DevTools and even a Node.js module. What the homepage doesn’t say is that Lighthouse is backed by the PageSpeed Insights web service, which provides an HTTP REST endpoint (currently living at https://www.googleapis.com/pagespeedonline/v5/runPagespeed) that, when hit with the proper query parameters, runs all the required computation on some Google server and eventually produces a very terse report of all the audit results in a handy JSON format. You can learn all about the PageSpeed Insights API on the official web page.
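
For example, here is a minimal sketch of how you could hit that endpoint directly from Python; the url, category, strategy and key query parameters are the ones documented by Google, while everything else is purely illustrative:

import json
from urllib.parse import urlencode
from urllib.request import urlopen

PAGESPEED_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

params = urlencode({
    "url": "https://a.l3x.in/",  # the page to audit
    "category": "performance",   # which Lighthouse category to run
    "strategy": "mobile",        # or "desktop"
    # "key": "<your-api-key>",   # needed for programmatic, recurring usage
})

with urlopen(f"{PAGESPEED_ENDPOINT}?{params}") as response:
    report = json.load(response)

# Category scores live under lighthouseResult.categories in the JSON report
print(report["lighthouseResult"]["categories"]["performance"]["score"])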

If we provide a valid Google developer API token when interacting with PageSpeed, we are allowed to use it in a programmatic (i.e. automated and frequently recurring) way. For example, we could build an API endpoint to monitor with a simple HTTP check and be alerted when, for whatever reason, the Lighthouse score of one of the websites we want to monitor drops below a safe threshold we have defined.

This is yet another perfect use case for a simple Serverless utility: sending recurring HTTP requests is a task very easily accomplished with, for example, an asynchronous AWS Lambda invoked by a CloudWatch cron-like Event; a simple solution with no need for maintenance and, in many cases, even completely free of charge.

Feature request

This is what I’d like to have:

  • during office hours (e.g. from 8 to 16), run an hourly Pagespeed performance test against each of my websites’ main pages and store the test results in a database
  • expose the results stored in the database at GET https://api.l3x.in/pagespeed_report, returning an HTTP code != 200 if anything is wrong
  • keep that endpoint monitored (for free) with StatusCake and receive alerts through its Pushover integration whenever the above HTTP request returns a non-200 code. Side note: StatusCake offers Pagespeed monitoring natively, but it’s available to paying users only.

Serverless to the rescue

To define and deploy all the needed resources I once more leveraged the dear AWS Cloud Development Kit (CDK). The new pagespeed stack is essentially composed of a single Lambda (pagespeed_poller.py) that sends the actual requests to the Google Pagespeed Insights API and does some simple math with the reports (it calculates the mean of every audit score value), a DynamoDB table to cache the results (line #24 of the stack), and a CloudWatch Event cron rule to invoke the Lambda at a fixed frequency (line #44).
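
For context, a minimal CDK (Python) sketch of that wiring could look like the following; the construct IDs, asset path and cron window are illustrative, not copied from the actual stack:

from aws_cdk import core
from aws_cdk import aws_dynamodb, aws_events, aws_events_targets, aws_lambda

class PagespeedStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # DynamoDB table caching the latest audit scores, keyed by page URL
        table = aws_dynamodb.Table(
            self, "pagespeed-table",
            partition_key=aws_dynamodb.Attribute(
                name="url", type=aws_dynamodb.AttributeType.STRING),
            billing_mode=aws_dynamodb.BillingMode.PAY_PER_REQUEST)

        # The Lambda polling the Pagespeed Insights API and storing the means
        poller = aws_lambda.Function(
            self, "pagespeed-poller",
            runtime=aws_lambda.Runtime.PYTHON_3_8,
            handler="pagespeed_poller.handler",
            code=aws_lambda.Code.from_asset("lambdas"),
            timeout=core.Duration.seconds(30))
        table.grant_read_write_data(poller)

        # Hourly cron rule restricted to office hours (here 8-16 UTC)
        aws_events.Rule(
            self, "pagespeed-cron",
            schedule=aws_events.Schedule.cron(minute="0", hour="8-16"),
            targets=[aws_events_targets.LambdaFunction(poller)])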

Speaking of performance optimizations, a brief mention goes to the relatively new Python concurrent.futures module, which the pagespeed_poller Lambda uses to query the Pagespeed APIs in parallel and sensibly reduce the Lambda execution time (in my case the reported execution time is roughly 6 seconds):

from concurrent.futures import ThreadPoolExecutor, wait

# Audit every target URL in its own thread, blocking until all are done
executor = ThreadPoolExecutor()
results = wait([executor.submit(run_job, url) for url in GOOGLE_PAGESPEED_TARGET_URLS])

Above is a redacted excerpt of the actual implementation that shows how easy it is to express this kind of behavior with the concurrent.futures APIs, introduced to the Standard Library in Python 3.2. It basically tells the Python interpreter to execute run_job(url) in a dedicated thread for each url defined in the GOOGLE_PAGESPEED_TARGET_URLS list. The wait function blocks the main thread and returns only when all the pooled threads have completed execution, with or without exceptions.
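
As for the “simple math” mentioned earlier, here is a hypothetical reconstruction of what run_job might do: fetch the report for a single URL and average every audit score found in it. The lighthouseResult.audits structure is the one documented for the PSI v5 response; the helper itself is illustrative, not the actual code:

import json
from statistics import mean
from urllib.parse import urlencode
from urllib.request import urlopen

PAGESPEED_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def run_job(url: str) -> dict:
    # Fetch the Pagespeed report for a single page
    with urlopen(f"{PAGESPEED_ENDPOINT}?{urlencode({'url': url})}") as response:
        report = json.load(response)
    # Average every audit score, skipping audits that carry no score at all
    audits = report["lighthouseResult"]["audits"].values()
    scores = [audit["score"] for audit in audits if audit.get("score") is not None]
    return {"url": url, "latest_score_value": mean(scores)}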

A new /pagespeed_report route has also been added to the proxy Lambda public API, in order to expose the stored DynamoDB records via an HTTP GET request that basically any modern monitoring system can probe:

$ curl -v https://api.l3x.in/pagespeed_report | jq '.'
[...]
< HTTP/2 200
[...]
{
  "name": "api",
  "http_code": 200,
  "message": [
    {
      "url": "https://cv.l3x.in/",
      "latest_score_value": 0.9982352941176471,
      "latest_score_timestamp": "2020-05-30T11:30:48.735Z"
    },
    {
      "url": "https://a.l3x.in/",
      "latest_score_value": 0.9658823529411764,
      "latest_score_timestamp": "2020-05-30T11:30:49.777Z"
    }
  ],
  "timestamp": "2020-05-30T12:13:43.463878Z"
}
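
Behind that route the logic can stay trivial; here is a hypothetical sketch of how the handler could turn the stored records into a pass/fail status code (the threshold value and the function shape are illustrative, not the actual implementation):

from datetime import datetime, timezone

SCORE_THRESHOLD = 0.9  # illustrative: alert below this mean audit score

def pagespeed_report(records: list) -> dict:
    """Build the response from the records read out of the DynamoDB table."""
    all_good = all(r["latest_score_value"] >= SCORE_THRESHOLD for r in records)
    return {
        "name": "api",
        "http_code": 200 if all_good else 500,  # non-200 triggers the alert
        "message": records,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }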

Finally, I set up StatusCake from its web interface, given that it’s a quick one-shot task, but I see they expose a Tests API too; it should not be too hard to automate the HTTP monitoring and notification setup/teardown by implementing the missing CRUD actions in CDK and making it talk to the StatusCake API.
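
For the record, a rough sketch of what creating such a check could look like; treat the endpoint, headers and field names below as my assumptions about the legacy Tests API, to be verified against StatusCake’s current documentation before use:

import requests  # third-party: pip install requests

# Create/update an HTTP check probing the pagespeed_report endpoint
resp = requests.put(
    "https://app.statuscake.com/API/Tests/Update",
    headers={"API": "<api-key>", "Username": "<username>"},
    data={
        "WebsiteName": "pagespeed_report probe",
        "WebsiteURL": "https://api.l3x.in/pagespeed_report",
        "TestType": "HTTP",
        "CheckRate": 300,  # seconds between checks
    },
)
resp.raise_for_status()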

The diagram of the final solution:

Solution architecture diagram

and this is how StatusCake notifies my iPhone of an incident via Pushover:

Screenshot of alerts sent via Pushover to an iPhone

Final words

I hope the above gave you a decent idea of how something easily accomplished with a serverless approach can sometimes provide quite some value while saving money at the same time.

I have to mention that this is a loose kind of monitoring that, in case of errors, will send alerts with quite some delay; it’s more than OK for my current needs, but it might be something you want to consider when developing your own monitoring solution. That said, the Pagespeed documentation mentions “multiple queries per second”, so it should be possible to massively increase the query frequency (and the number of monitored pages) if needed.

Finally, even if being alerted of website performance degradation is definitely better than being blind, the one above is nevertheless a fairly rudimentary use of Lighthouse. A better approach to enforcing high performance would be to introduce Lighthouse tests into your CI/CD pipeline, to prevent suboptimal builds from being released into production; if that’s your aim you might want to start from the Lighthouse CI project and move on from there.

That’s it for today. Please leave a comment here or contact me directly if you have any correction/suggestion/idea to share about AWS, CDK, serverless and all the rest; I’m looking forward to hearing your thoughts 👍🏻
