A Look Inside the Think Tank...

Why Build Progressive Web Apps: Push, but Don't Be Pushy! (Video Write-Up)

Created on and categorized as Technical.
Written by Thomas Steiner.

(This is the write-up of the second episode of my new YouTube show “Why Build Progressive Web Apps.” If you prefer watching, the video is embedded below.)

(Also check out the write-up of the first episode, or watch the video.)

Let's face it, on the web, push notifications have become a bit of an omnipresent annoyance. The reason for the bad reputation of push notifications--in my opinion--is that we have been, well, a bit too “pushy” in trying to get people to allow them.

“Why Build Progressive Web Apps,” episode 2: Push, but Don't Be Pushy!

The folks over at Mozilla have phrased it like this in a blog post:

“Online, your attention is priceless. That's why every site in the universe wants permission to send you notifications about new stuff. It can be distracting at best and annoying at worst.”
Blog post by Mozilla announcing the option to block push notifications globally.

A particularly bad practice is to pop up the permission dialog on page load, without any context at all. Several high-traffic sites have been caught doing this. To subscribe people to push notifications, you use the PushManager interface. Now, to be fair, this interface does not allow the developer to specify the context or the expected frequency of notifications. So where does this leave us?

const options = {
  userVisibleOnly: true,
  applicationServerKey: APPLICATION_SERVER_KEY,
  // No way to specify context or frequency ¯\_(ツ)_/¯
};
const subscription = await reg.pushManager.subscribe(options);

First, maybe let's take one step back and brainstorm why we would want push notifications in the first place. If done right, push notifications are actually pretty great. For example, they can inform you if you have been outbid on an auction site. They can alert you about severe weather conditions in your hometown. On a less serious note, they can notify you when you have a match on a dating site. Or they can let you know if there's a significant price drop for something you're interested in. And yes, of course push notifications can also inform you of new content on a news site.

As I wrote above, there is no way at the API level to inform users about the context of push notifications. All you can do with the options parameter is set a flag indicating whether the notifications should be userVisibleOnly and provide the applicationServerKey. In consequence, it's crucial that we as application developers provide the context for our notifications ourselves.

Maybe you remember the AffiliCats sample app from the first episode of “Why Build Progressive Web Apps.” It's a simple app that simulates a comparison site where you can get great offers for cats. What's new this time is a button for getting price drop alerts.

AffiliCats app with price drop alerts (Source: https://googlechromelabs.github.io/affilicats/).

When you press it for the very first time, the notification permission prompt pops up, and it's immediately clear that it's related to the price drop alerts.

Permission prompt after signing up for Price Alerts.
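
The important part is that the permission request happens inside the click handler of the price-alert button, so the prompt shows up in the context of the user's action. A minimal sketch of such a flow could look like this (the button selector and the /api/subscribe endpoint are made up for illustration; APPLICATION_SERVER_KEY is the same placeholder as in the snippet above):

// In the page: only ask for notification permission in response
// to the user explicitly opting in to price drop alerts.
const priceAlertButton = document.querySelector('#price-alert-button');

priceAlertButton.addEventListener('click', async () => {
  // The prompt now appears in the context of the user's action.
  const permission = await Notification.requestPermission();
  if (permission !== 'granted') {
    return;
  }
  const registration = await navigator.serviceWorker.ready;
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: APPLICATION_SERVER_KEY,
  });
  // Tell the (hypothetical) backend where to send the price drop alerts.
  await fetch('/api/subscribe', {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify(subscription),
  });
});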

If you grant permission, the app subscribes you to a push notification endpoint that is configured to send out dummy notifications, and a couple of seconds after subscribing, you should receive your first notification.

Push notification announcing that prices for cats are going down.
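
Under the hood, the service worker handles the incoming push event and shows the notification. A minimal sketch could look like this (the payload shape and the icon path are assumptions for illustration, not AffiliCats' actual format):

// In serviceworker.js
self.addEventListener('push', (event) => {
  // Assume the server sends a JSON payload like
  // {"title": "Price drop!", "body": "Prices for cats are going down."}
  const data = event.data ? event.data.json() : {};
  event.waitUntil(
    self.registration.showNotification(data.title || 'Price drop!', {
      body: data.body || 'Prices for cats are going down.',
      icon: '/images/cat-icon.png', // Hypothetical icon path.
    })
  );
});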

So as you can see, prices for cats are dropping; you'd better get one while they last. And there we have it, an actually useful push notification. It was contextual, meaningful, and timely. The AffiliCats app is open source, so go check out the source code if you want to see how it's implemented. Push notifications are a great power, and with great power comes great responsibility. If you remember one thing from this write-up, I hope it's that context matters!

In the next episode of "Why Build Progressive Web Apps," we look at another PWA superpower: Add to Home Screen! Looking forward to reading or seeing you! In order not to miss it, subscribe to our Medium Dev Channel and the Chrome Developers YouTube channel, follow @ChromiumDev on Twitter--and if you like, I am @tomayac almost universally on the World Wide Internet.

Why Build Progressive Web Apps: Never Lose a Click-Out! (Video Write-Up)

Created on and categorized as Technical.
Written by Thomas Steiner.

(This is the write-up of the first episode of my new YouTube show "Why Build Progressive Web Apps." If you prefer watching, the video is embedded below. This post was cross-posted to Medium.com.)

On the Google Chrome Developers YouTube channel, we have been pushing the concept of Progressive Web Apps (PWA) a fair bit, and there have been some great success stories of companies building PWAs. But you might wonder whether what worked for the mentioned partners would necessarily work for your company, too. In my new video series called "Why Build Progressive Web Apps," I want to show you common, use-case-driven patterns for applying PWA features that set you up for success. In the first episode, I look at affiliate sites and how they can manage to never lose a click-out.

"Why Build Progressive Web Apps," episode 1: Never Lose a Click-Out!

You have maybe seen or even used a comparison site in the past. For example, to find out what is the cheapest internet provider, or to get the best hotel offer for your next vacation. Many of these comparison sites rely on commission-based affiliate marketing: when you click out to a third-party vendor site and end up converting, the referring comparison site earns a small fee. In consequence, such sites want you to click through to the best offer, and under no circumstances do they want to risk losing a click-out.

Screenshots of exemplary comparison sites.

Many sessions with comparison sites happen "on the go," say, on the commute to work. And while in the majority of cases you might be connected, there are definitely situations where you lose your connection, like in a tunnel or when your signal strength drops to just one or two bars, and you end up being de facto offline. Having an architecture that gives the network some time to respond, but that gracefully degrades to cached content or fallback placeholder content, can help improve the user’s experience drastically.

About to lose your mobile connection (Credits: https://unsplash.com/photos/wVcQqwNeDj8).

In order to demonstrate how to deal with such situations, I have created my own sample comparison site called AffiliCats with purely dummy content, but coming from real APIs like the Wikimedia API, the Google Static Maps API, the Random Number API, the Bacon Ipsum API, and the Place Kittens API.

AffiliCats sample app (Source: https://googlechromelabs.github.io/affilicats/).

The app has a big search bar on top where you can search for items, like cats. Each item has an image and a title, as well as offers and a "View Deal" button that leads to the third-party vendor’s site. Then we have three tabs with more photos, reviews, and the location of the item.

AffiliCats tabs: photos, reviews, location (Source: https://googlechromelabs.github.io/affilicats/).

Each tab’s content is lazily loaded on-demand with one or multiple fetch requests. So in each case, the request can either succeed, time out if the network is too slow, or fail from the start if we’re entirely offline.

Waterfall diagram showing lazy-loading.

In the latter two cases, we want to respond with fallback content, for example, a "reviews took too long to load" message, or a timeout image. When the network comes back, or the slow request eventually goes through, we can then dynamically replace the fallback or placeholder content with real content. The user can also decide to press "reload" and refresh the complete page. This is called a "navigation request." If we’re offline, we can then show a fallback page with skeleton content.

Fallback content in case loading takes too long, offline placeholders, and dynamic loading.
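
One way to implement this "give the network some time, then degrade gracefully" behavior is to race each lazy-loading fetch() against a timeout. The snippet below is a rough sketch under assumed names (the timeout value, the fallback copy, and the loadReviews() helper are made up), not the actual AffiliCats implementation:

const FETCH_TIMEOUT = 3000; // Give the network three seconds to respond.

const fetchWithTimeout = (url) => {
  return Promise.race([
    fetch(url),
    new Promise((_, reject) =>
        setTimeout(() => reject(new Error('Timeout')), FETCH_TIMEOUT)),
  ]);
};

const loadReviews = async (url, container) => {
  try {
    const response = await fetchWithTimeout(url);
    container.innerHTML = await response.text();
  } catch (err) {
    // Timed out or offline: show fallback content now, and replace it
    // with real content once the connection comes back.
    container.textContent = 'Reviews took too long to load.';
    window.addEventListener('online', () => loadReviews(url, container),
        {once: true});
  }
};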

Finally, let’s see how we can make sure not to lose the click-out. What happens if the user clicks on the "View Deal" button when they are offline? Notice how most of the page is disabled, but the money-making button is still active?

While the app is offline and most interactive features are disabled, the "View Deal" button can still be clicked.

A precached forwarding page opens and waits for the connection to come back. Once you’re online again, it still realizes the click-out and drops our imaginary affiliate cookie…

The precached forwarding page loads--even when offline--and waits for the connection to come back, to then eventually still realize the click-out.
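
The forwarding page itself can be tiny: because it's precached, it loads even when offline, and a few lines of script wait for the connection to come back before completing the click-out. Roughly like this (the query parameter and cookie names are made up for illustration):

// On the precached forwarding page, e.g., forward.html?url=<vendor URL>.
const outboundURL = new URL(location.href).searchParams.get('url');

const clickOut = () => {
  // Drop the (imaginary) affiliate cookie, then complete the click-out.
  document.cookie = 'affiliate=affilicats; max-age=86400; path=/';
  location.replace(outboundURL);
};

if (navigator.onLine) {
  clickOut();
} else {
  // Wait for the connection to come back, then still realize the click-out.
  window.addEventListener('online', clickOut, {once: true});
}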

You are invited to play with this app yourself and read the source code. I hope this has been useful, and maybe you can apply some of these patterns to your own websites. Thanks for reading, and see or read you for the next episode of "Why build Progressive Web Apps," where we will look at push notifications.

Why Browsers Download Stylesheets With Non-Matching Media Queries

Created on and categorized as Technical.
Written by Thomas Steiner.

The other day, I read an article by Dario Gieselaar on Optimizing CSS by removing unused media queries. One of the core ideas is that you can use the media attribute when including your stylesheets like so:

<link href="print.css" rel="stylesheet" media="print">
<link href="mobile.css" rel="stylesheet" media="screen and (max-width: 600px)">

In the article, Dario links to Scott Jehl's CSS Downloads by Media Query test suite where Scott shows how browsers would still download stylesheets even if their media queries are non-matching.

I pointed out that the priority of these downloads is Lowest, so they're at least not competing with core resources on the page.

At first sight this still seemed suboptimal, and I thought that even if the priority is Lowest, maybe the browser shouldn't trigger downloads at all. So I did some research, and, surprise, it turns out that the CSS spec writers and browser implementors are actually pretty darn smart about this:

The thing is, the user could always decide to resize their window (impacting width, height, aspect ratio), to print the document, etc., and even things that at first sight seem static (like the resolution) can change when a user with a multi-screen setup moves a window from say a Retina laptop screen to a bigger desktop monitor, or the user can unplug their mouse, and so on.
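
You can observe this dynamism from JavaScript, too: whether a media query matches can flip at runtime, which is exactly why the browser can't skip the download for good. A quick sketch:

// A media query that may not match now, but can start matching the
// moment the user resizes the window or rotates their device.
const mql = window.matchMedia('(max-width: 600px)');
console.log('Matches right now:', mql.matches);
// (In older browsers, use mql.addListener() instead.)
mql.addEventListener('change', (event) => {
  console.log('Matches after a resize or rotation:', event.matches);
});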

Truly static things that can't change (a TV device can't suddenly turn into something else) are actually being deprecated in Media Queries Level 4 (see the yellow note box); the recommendation is to target media features instead (see the text under the red issue box).

Finally, even invalid values like media="nonsense" still need to be considered, according to the ignore rules in the spec.

So long story short, browsers try to be as smart as possible by applying priorities, and Lowest is a reasonable value for the cases in Scott's test.

New top-level HTTP Archive Report on Progressive Web Apps

Created on and categorized as Technical.
Written by Thomas Steiner.

(This was crossposted to Medium.com)

As a follow-up to the Progressive Web Apps study from a couple of weeks ago, we're now happy to announce that we've landed a new top-level HTTP Archive report on Progressive Web Apps based on the study's raw data.

This report currently encompasses two sections: (i) PWA Scores and (ii) Service Worker Controlled Pages, which translate roughly to Approach 1 and Approach 2 of the PWA study mentioned above.

You can use this data, for example, to see the percentage of pages that were controlled by a service worker over time, based on Chrome's ServiceWorkerControlledPage use counter statistics. Good news: the trend is going up.

As a result of Rick Viscomi's new lenses feature, you can now also dive into the data in an even more fine-grained manner, for example, to see the development of median Lighthouse scores of just the WordPress universe. Note that while there was a switch in the Lighthouse scoring algorithm from v2 to v3 of the tool, the chart shows the median score, which naturally is more robust in the presence of outliers.

Next steps entail also getting the data from Approach 3 of the study into the httparchive.technologies.* tables, so that we can allow everyone to run BigQuery analyses on top of these in a cost-efficient manner, without having to go through the massive (70+ TB) httparchive.response_bodies.* tables!

Big thanks to Rick again, whose guidance and leadership were essential to make this happen. We're looking forward to this data being put to good use.

Service Worker Caching Strategies Based on Request Types

Created on and categorized as Technical.
Written by Thomas Steiner.

(This article was cross-posted to Medium.com.)

TL;DR

Instead of purely relying on URL-based pattern matching, also consider leveraging the lesser-known--but super useful--Request.destination property in your service worker to determine the type and/or caching strategy of requests. Note, though, that Request.destination gets set to the non-informative empty string default value for XMLHttpRequest or fetch() calls. You can play with the Request.destination playground app to see Request.destination in action.

Different Caching Strategies for Different Types of Resources

When it comes to establishing caching strategies for Progressive Web Apps, not all resources should be treated equally. For example, for a shopping PWA, your API calls that return live data on some items' availabilities might be configured to use a Network Only strategy, your self-hosted company-owned web fonts might be configured to use a Cache Only strategy, and your other HTML, CSS, JavaScript, and image resources might use a Network Falling Back to Cache strategy.
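
To make these names concrete, here's roughly what a "Network Falling Back to Cache" strategy could look like in isolation (a sketch with a made-up cache name, not prescriptive code):

// In serviceworker.js: "Network Falling Back to Cache".
const RUNTIME_CACHE = 'runtime-cache-v1';

const networkFallingBackToCache = async (request) => {
  try {
    const response = await fetch(request);
    // Keep a copy around for later offline use.
    const cache = await caches.open(RUNTIME_CACHE);
    cache.put(request, response.clone());
    return response;
  } catch (err) {
    // The network failed, so serve from the cache (if we have a match).
    return caches.match(request);
  }
};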

URL-based Determination of the Request Type

Commonly, developers have relied on the known URL structure of their PWAs and regular expressions to determine the appropriate caching strategy for a given request. For example, here's an excerpt of a modified code snippet courtesy of Jake Archibald's offline cookbook:

// In serviceworker.js
self.addEventListener('fetch', (event) => {
  // Parse the URL
  const requestURL = new URL(event.request.url);
  // Handle article URLs
  if (/^\/article\//.test(requestURL.pathname)) {
    event.respondWith(/* some response strategy */);
    return;
  }
  if (/\.webp$/.test(requestURL.pathname)) {
    event.respondWith(/* some other response strategy */);
    return;
  }
  /* … */
});

This approach allows developers to deal with their WebP images (i.e., requests that match the regular expression /\.webp$/) differently than with their HTML articles (i.e., requests that match /^\/article\//). The downside of this approach is that it makes hard-coded assumptions about the URL structure of a PWA or the file extensions of the MIME types in use, which creates a tight coupling between app and service worker logic. Should you move away from WebP to a future, superior image format, you would need to remember to update your service worker's logic as well.

Request.destination-based Determination of the Request Type

It turns out, the platform has a built-in way for determining the type of a request: it's called Request.destination as specified in the Fetch Standard. Quoting straight from the spec:

“A request has an associated destination, which is the empty string, "audio", "audioworklet", "document", "embed", "font", "image", "manifest", "object", "paintworklet", "report", "script", "serviceworker", "sharedworker", "style", "track", "video", "worker", or "xslt". Unless stated otherwise it is the empty string.”

The empty string default value is the biggest caveat. Essentially, you can't determine the type of resources that are requested via the following methods:

navigator.sendBeacon(), EventSource, HTML's <a ping=""> and <area ping="">, fetch(), XMLHttpRequest, WebSocket, [and the] Cache API

In practice, the non-informative empty string default value matters the most for fetch() and XMLHttpRequest, so at least for resources requested through these techniques, it's oftentimes back to URL-based pattern handling inside your service worker.

On the bright side, you can determine the type of everything else perfectly fine. I have built a little Request.destination playground app that shows some of these destinations in action. Note that, for the sake of the to-be-demonstrated effect, it also contains some anti-patterns, like registering the service worker as early as possible and actively circumventing the browser's preloading heuristics (never do this in production).

An <img>, two <p>s with background images and triggers for XMLHttpRequest or fetch(), an <iframe>, and a <video> with a poster image and a timed text track.

When you think about it, there are a huge number of ways a page can request resources to load. A <video> can load an image as its poster frame and a timed text track file via <track>, apart from the video bytes it obviously loads. A stylesheet can cause background images used somewhere on the page to load, as well as web fonts. An <iframe> loads an HTML document. Oh, and the HTML document itself can load manifests, stylesheets, scripts, images, and resources for a ton of other elements like <object>, which was quite popular in the past for loading Flash movies.

Request.destination playground app showing different request types

Coming back to the initial example of the shopping PWA, we could come up with a simple service worker router as outlined in the code below. This router is completely agnostic of the URL structure, so there's no tight coupling at all.

// In serviceworker.js
self.addEventListener('fetch', (event) => {
  const destination = event.request.destination;
  switch (destination) {
    case 'style':
    case 'script':
    case 'document':
    case 'image': {
      event.respondWith(
          /* "Network Falling Back to Cache" strategy */);
      return;
    }
    case 'font': {
      event.respondWith(/* "Cache Only" strategy */);
      return;
    }
    // All `XMLHttpRequest` or `fetch()` calls where
    // `Request.destination` is the empty string default value
    default: {
      event.respondWith(/* "Network Only" strategy */);
      return;
    }
  }
});

Browser Support for Request.destination

Request.destination is universally supported by Chrome, Opera, Firefox, Safari, and Edge. For Chrome, support was added in Chrome 65, so in the unlikely case that your target audience uses browsers older than that, you might want to be careful about fully relying on this feature for your router. Other than that, Request.destination is ready for business. You can see the full details on the corresponding Chrome Platform Status page.
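
If you do need to cater to browsers from before that, you can feature-detect Request.destination and fall back to URL-based matching, along these lines (a sketch; the image regular expression and the cache-first handling are just examples):

// In serviceworker.js
self.addEventListener('fetch', (event) => {
  // Browsers without support report `undefined` instead of a destination.
  const destination = event.request.destination;
  const looksLikeImageURL =
      /\.(png|jpe?g|webp|gif|svg)$/.test(new URL(event.request.url).pathname);
  const isImage =
      destination !== undefined ? destination === 'image' : looksLikeImageURL;
  if (isImage) {
    // Example: serve images cache-first.
    event.respondWith(
        caches.match(event.request)
            .then((cached) => cached || fetch(event.request)));
  }
});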

When Request.destination isn't Enough

If you have more complex caching needs, you will soon realize that purely relying on Request.destination is not enough. For example, all your stylesheets may indeed use the same response strategy (and thus be good candidates for Request.destination); however, your HTML documents or API requests might still require different caching logic the more advanced your app gets.

Fortunately, you can freely combine Request.destination with URL-based pattern matching; there's absolutely no harm in doing so. A basic example could be to use Request.destination for dealing with all kinds of images to return a default offline fallback placeholder, and to use Request.url with URL-based pattern matching for other resources. You can likewise decide to have different behavior based on the Request.mode of the request, for instance to check whether you are dealing with a navigation request (Request.mode === 'navigate') in a single-page app.
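
Put together, such a combined router could look roughly like the sketch below (the offline page, the placeholder image, and the /api/ path prefix are assumptions for illustration, and both fallback files are expected to be precached):

// In serviceworker.js
self.addEventListener('fetch', (event) => {
  const request = event.request;
  const url = new URL(request.url);

  // Navigation requests, e.g., in a single-page app.
  if (request.mode === 'navigate') {
    event.respondWith(
        fetch(request).catch(() => caches.match('/offline.html')));
    return;
  }
  // All kinds of images, regardless of URL structure:
  // fall back to a default offline placeholder.
  if (request.destination === 'image') {
    event.respondWith(
        fetch(request).catch(() => caches.match('/placeholder.svg')));
    return;
  }
  // URL-based pattern matching for API calls, where
  // `Request.destination` is the empty string.
  if (url.pathname.startsWith('/api/')) {
    event.respondWith(fetch(request)); // "Network Only".
    return;
  }
});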

Conclusion

Coming up with a reasonable caching strategy for a PWA is hard enough. Having ways to tame this complexity is definitely welcome, so whenever feasible--given your PWA's structure--in addition to URL-based pattern handling, also consider leveraging Request.destination for your service worker's routing logic. It may not be able to handle all routes and there are important exceptions and corner cases, but it's definitely a good idea to reduce the coupling of service worker logic and URL structure as much as possible.

Acknowledgements

Thanks to Mathias Bynens, Jeff Posnick, Addy Osmani, Rowan Merewood, and Alberto Medina for reviewing this article, and again Mathias for his help with debugging emoji encoding in Edge!