JetBrains Marketplace — Web Performance Rework

Evgenii Ray
16 min read · May 15, 2021
JetBrains Marketplace

1. Introduction

Five years ago, back in 2016, after the Plugin Repository Revamp meeting, the project began its gradual evolution into a Marketplace. Back then, the project’s primary focus was to serve as plugin storage for JetBrains products. But things have changed a lot since. Marketplace is now the common platform for plugins across all JetBrains products. As the number of users grew and many new features were developed, we started our never-ending war against legacy code, which was holding us back significantly in terms of performance and web user experience.

In this article, I’d like to share our path from an outdated legacy application to a modern one, which was the first in the company to ship ES2017 code to production. So grab a cup of tea and fasten your seatbelt. This is going to be a long but warm story 😊

TL;DR: The Marketplace team fully migrated the application to a SPA. The new configuration reduced the project’s build time by ~80%. The network footprint of the heaviest page — the Scala plugin page — was reduced from 2 MB to 500 KB. The application is shipped as ES2017. We completely removed the Babel polyfills and all legacy code. We also enabled a modern compression mechanism — Brotli — which gave us about 15% better compression than the default GZIP. The average loading speed of the slowest pages improved by ~50% for the China region.

1.1 Where we started

Our web application was based on GSP (Groovy Server Pages), with pre-rendered content generated on the server. Interactivity was added via the jQuery and React libraries and supporting JavaScript code. Below you’ll find a picture of that structure.

Web Bundle structure

Problems of such structure

  1. Duplication — as you can see, the Common UI bundle was copied into each page bundle, along with every component we reused across the application. This resulted in a huge amount of duplicated code in the production entry bundles, which negatively affected network and script-parsing performance.
  2. Not scalable — as the codebase grew, we compiled more code and each bundle grew in size. Adding new libraries slowed down the whole application significantly.
  3. No effective caching — each page produced its own JS and CSS bundle. Because the bundle name differed per page, browsers couldn’t cache it, and the duplicated code was loaded each time the user navigated to a different page.
  4. Bad user experience — really a consequence of the previous three points. The application didn’t work like a SPA, and the user experienced a long page load when navigating through our application.

So, the main question is: why did we end up like that? The reason is simple. Here is what we looked like initially.

Plugin Repository back in 2016

The business goal was to transform the plugin repository into a fully featured Marketplace, with support for paid plugins and plugins for non-IntelliJ-based products, in a reasonable amount of time. At that time, there was only one front-end developer on the team.

So, the team decision was:

  1. Focus on business goals
  2. Not rewrite the whole Plugin Repository, but refactor it gradually
  3. Do not invent a components library; use what we have inside the company

We took an internal UI library based on React, but it was still in beta. The build process was fully delegated to the out-of-the-box webpack configuration inside this library package, so we had no control over it. With that setup, the Marketplace team started refactoring the old code base and implementing new features.

1.2 Let’s take care of the performance

As the Marketplace project continued to deliver new features and went through its rebranding, the front-end team grew to two developers. This opened up the opportunity to deliver features faster and rework our current setup. At our team gathering in Munich, I proposed the idea of establishing a performance culture in our team.

The reasons for that were simple:

  1. A growing user base, especially in the China region
  2. The number of new features we needed to deliver increased, which led to the growing complexity of the project
  3. Marketplace was becoming a platform for selling plugins, so it was more important than ever to make our application fast

After our gathering, we had a rough plan of action points to take over the next year. In the end, it took us almost two years to fulfill them all.

1.3 The Roadmap

Together with my team, we came up with the following roadmap:

  1. Refactor old pages to the new design and technology stack
  2. Integrate analytics to see how the pages load, where the possible bottlenecks are, etc.
  3. Extract common code into shareable NPM packages so that we can share it with other projects like KPM
  4. Merge all pages into a SPA to ensure that all pages are responsive and there are no full page reloads
  5. Optimize the build configuration to ensure the best development experience
  6. Optimize the bundle size and CDN to serve the application efficiently

In this article, I’m going to focus on measurable things like performance and bundle size, so from this point on, let’s dive into the meat.

2. Build performance — JavaScript

As the project’s complexity grows, it becomes much harder to maintain a comfortable development experience. Let’s look at the table comparing the project’s compile times in 2018 and 2020.

Testing Machine: MBP 2019 16” 32GB RAM i9

As you can see, even on a high-end device, it’s really noticeable how the compile time increases as the project grows. As a first approach, we tried the following optimizations:

  1. Tweak webpack configuration to reduce meta-data output
  2. Disable all possible optimization for dev-mode
  3. Loaders caching
  4. Parallel loader execution
  5. Incremental TypeScript compilation + type checking in a separate thread
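The tweaks above can be sketched as a single development webpack config. This is a hypothetical reconstruction, not our exact setup; the loader and plugin names are the usual tools for this technique:

```typescript
// Hypothetical dev-mode webpack config illustrating the five tweaks above.
const devConfig = {
  mode: 'development',
  stats: 'errors-warnings',          // 1. reduce meta-data output
  optimization: {
    removeAvailableModules: false,   // 2. disable optimizations that only
    removeEmptyChunks: false,        //    matter for production builds
    splitChunks: false,
  },
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        use: [
          'cache-loader',            // 3. cache loader results between runs
          'thread-loader',           // 4. run the compiler in a worker pool
          {
            loader: 'ts-loader',
            options: {
              happyPackMode: true,        // required when combined with thread-loader
              transpileOnly: true,        // 5. compile incrementally, and...
              experimentalWatchApi: true,
            },
          },
        ],
      },
    ],
  },
  // 5. ...type-check in a separate process:
  // plugins: [new ForkTsCheckerWebpackPlugin()],
};
```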

All these steps didn’t give us the performance boost we wanted. So, we went with a radically different approach: changing our build tools. Loaders like babel-loader and ts-loader are written in JavaScript, which inherently restricts them in terms of speed, so we tried something different. For JS and TS compilation, we chose esbuild, a tool written in Go that is supposed to be blazingly fast. Let’s compare the old and new build stacks.

Let’s check how the performance of the build changed.

* 6.3 seconds refers to the combined compilation of JavaScript and TypeScript
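For reference, swapping the compiler mostly comes down to replacing the loader rule. A minimal sketch, assuming the esbuild-loader package; the exact options in our build differ:

```typescript
// Hypothetical webpack rule: esbuild-loader replaces babel-loader/ts-loader.
const esbuildRule = {
  test: /\.tsx?$/,
  loader: 'esbuild-loader',
  options: {
    loader: 'tsx',     // compiles both TypeScript and TSX
    target: 'es2017',  // the syntax level we ship to production
  },
};
```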

Here is the list of problems we’ve faced:

  1. Libraries that expect to be built with Babel tooling simply won’t work with esbuild out of the box. We faced this with the internal ring-ui package. To fix it, we had to extract the incompatible parts that relied on Babel and refactor them.
  2. All your files should be ES6 modules.
  3. ES5 is poorly supported. We had to migrate the rest of our legacy code.
  4. esbuild enforces the import type notation for types. Ignoring that rule leads to compilation problems.
  5. No type checking. esbuild is just a compiler. For type checking, we still have to use fork-ts-checker-webpack-plugin, which checks the types in a separate thread. This means the code compiles much faster than the type checking completes.
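Point 4 deserves a short illustration. Because esbuild compiles each file in isolation, it cannot know whether an imported name is a type or a value; import type makes this explicit and is fully erased from the output. A toy sketch (the helper function itself is hypothetical):

```typescript
import type { IncomingMessage } from 'node:http'; // erased at compile time

// The type-only import above leaves no runtime require behind, so esbuild
// can safely drop it while compiling this file in isolation.
export function contentType(req: IncomingMessage): string {
  return req.headers['content-type'] ?? 'application/octet-stream';
}
```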

With all of that, we fixed the build performance of our TypeScript code, and now it’s easy to scale the codebase without worrying about long compilations. Now, let’s move on to another pain point — CSS build time.

2.1 Build performance — CSS

In our project, we’ve been using Sass from the start. This tool worked perfectly for us, and we didn’t want to swap it even for “lightweight” modular alternatives like PostCSS. However, the amount of code written in Sass increased significantly, and it started to affect the overall development experience.

Bottleneck — the magic @import statement

Across our code, we often reused common classes and utilities via the @import statement.

But there is a caveat here:

Each stylesheet is executed, and its CSS emitted every time it’s @imported, which increases compilation time and produces bloated output.

The Sass team provides an alternative: the module system, used via the @use rule. Recent versions fix this problem and provide the same code reusability across the whole project without the negative effects on compilation time and output size.

We took the following approach:

  1. Remove all @import statements from SCSS files
  2. Make all global core classes accessible everywhere without imports
  3. Import utility classes, variables, and mixins via @use statements

To achieve that, we created a global module and a global import at the beginning of the bundled SCSS file. Here is how it looks in the webpack configuration:

With that, we make:

  1. Global styles imported only once, at the beginning of the CSS output, and available across the whole application without importing
  2. Modular styles, such as utility classes and mixins, available as a global module that can be referenced in Sass. This allows us to reuse utility classes without the code duplication created by @import

Here is an example of how we use module styles:
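Roughly, a consuming stylesheet looks like this (paths, mixin, and variable names are hypothetical):

```scss
// Utilities come in through the module system: referenced via a namespace,
// never re-emitted into the compiled output of this file.
@use "src/styles/utils" as utils;

.plugin-card {
  @include utils.elevation(2);
  color: utils.$text-primary;
}
```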

Now, let’s measure the results:

We’ve improved the speed of CSS compilation by 60%. As a bonus, we’ve also reduced the size of the CSS bundle. But we’ll look at that in the next section.

2.2 Summing up

So, our build optimization gave us astonishing results in terms of the build performance. For your convenience, I’ve attached comparison charts below.

Cold Compilation

Cold compilation means a fresh node_modules installation with all caches cleared: the initial application run.

Hot Compilation

Hot compilation refers to compiling changes after the initial application run is done.

We’re now compiling our project much faster, which positively affects the developer experience. In the next section, let’s dive into how we optimized the bundle size of our application.

3. Optimizing web application performance

We started with a basic setup to analyze the performance of our app:

  1. Google Analytics — to understand the user behavior on websites, error rate, browser stack, etc.
  2. New Relic — an additional tool to track page errors, page loads, speed trends, etc.
  3. Regular Lighthouse checks.

This is what we had back in 2019 on the Scala plugin page. I will use it as a benchmark from here on because, according to our web analytics, it is one of the slowest pages in our application.

As you can see, this wasn’t acceptable for the future development of the Marketplace platform. Let’s proceed to the optimizations we made to fight this situation.

3.1 Bundle reorganization

This is the structure we had.

So, what can we do about that? First of all, we needed to remove the duplicate code generated in each bundle and eliminate the old libraries we included. I’ve attached a table of the asset footprint of the Scala plugin page to show how bad the situation was.

The numbers don’t look that scary on paper, but because about 30% of our traffic comes from the China region, it was crucial for us to improve the network footprint and decrease the average page load time.

We decided on the following steps to improve the situation:

  1. Remove legacy libraries that bloated our production bundle
  2. Migrate to SPA
  3. Optimize bundle structure to avoid any duplication
  4. Optimize bundle size by splitting our application into a small set of chunks
  5. Extract critical assets from the bundle
  6. Bundle all application styles into one bundle

By making all these steps, we’ve come to the following web-app bundle structure:

3.2 HTTP2 power

Why is splitting the application into a set of micro-chunks a good idea?

Because of HTTP/2 multiplexing. With HTTP/1.x, we could fetch five assets concurrently by opening five connections, which is why bundlers tried to solve the problem by packing all the assets into one file. But those times are finally gone, and we can freely split the bundle. With HTTP/2, we can fetch 100+ assets concurrently over a single connection, which solves the problem of shipping the front end as one “fat” bundle.

3.3 Using resource hints to separate resource classes

The browser can recognize critical resources if we provide the proper hints. There are five different methods of preloading, and each of them is suitable for different goals.

Source: https://3perf.com/blog/link-rels/. Thanks Ivan Akulov for the great article.

<link rel="dns-prefetch">

Instructs the browser to make a DNS request for a server’s IP address in advance. This is useful for CDNs, Google Fonts, and all other cases when you know you’ll need a resource in a short time, know the domain it’s hosted at, but don’t know its exact path. In this case, resolving the server’s IP address in advance would save you from 50 to 300 ms.

Use it: when you know you’ll need a resource soon, but you don’t know its full url yet (for older browsers)

<link rel="preconnect">

Instructs the browser to perform the connection to a server in advance. It’s useful in the same cases when dns-prefetch is useful, but sets up a full connection and saves more time. The drawback here is that opening a new connection is pretty resource-intensive, so you don’t want to overuse this optimisation.

Use it: when you know you’ll need a resource soon, but you don’t know its full url yet

<link rel="prefetch">

Preloads and caches a resource in background with a low priority. This is useful e.g. to preload a JS bundle for the next page of an app.

Use it: when you know you’ll need a resource on a subsequent page and want to cache it ahead of time

<link rel="preload">

Preloads a resource in background with a high priority. This is useful to preload a resource you’ll need in several seconds – e.g., a non-critical CSS file.

Use it: when you’ll need a resource in a few seconds

<link rel="prerender">

Preloads the specified page in the background and renders it in an invisible tab. Later, when a visitor clicks a link leading to the prerendered page, the page displays immediately. This is what Google uses to preload its first search result.

Use it: when you’re certain users will navigate to a specific page, and you want to speed it up

3.4 Telling Webpack which hint to use for a chunk

Webpack 4.6.0+ supports prefetching and preloading.

Using these inline directives when declaring your imports allows webpack to output a resource hint, which tells the browser that:

  • prefetch: the resource is probably needed for some navigation in the future
  • preload: the resource will also be needed during the current navigation

And it works great with React.lazy(). We used preload to grab the most critical assets, such as styles, rendering, and vendor code, and prefetch for the most popular routes in the application.
A high-level overview:

High-level bundle structure

Code example:

A hint for webpack to prefetch a chunk
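A hedged reconstruction of what such a hint looks like in code (the route paths are hypothetical); with React.lazy(), the same dynamic imports double as code-split route components:

```typescript
// Webpack parses these "magic comments" at build time and emits the matching
// <link> hint for the generated chunk. Paths here are hypothetical.

// Likely needed for a future navigation → <link rel="prefetch">:
const loadPluginPage = () => import(/* webpackPrefetch: true */ './pages/PluginPage');

// Needed during the current navigation → <link rel="preload">:
const loadCriticalVendor = () => import(/* webpackPreload: true */ './vendor/critical');
```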

By prefetching chunks that will be used for later navigation, we save valuable loading time in the future. At runtime, webpack injects these resources as link tags, so you don’t need to include them manually in your HTML.

3.5 Generating critical assets

Unfortunately, because our pages were tied to the Java back-end templating engine, we couldn’t generate the HTML and inline critical resources with webpack. What we could do was extract critical libraries into “cache groups”. Our layout and internal libraries don’t change often, so it’s a good idea to pack them into chunks with a preload hint, making sure they are loaded with high priority and cached by the browser.

Cache groups example

This configuration tells webpack to generate vendor, polyfills, marketplace, and layout cache groups in the production build. These groups can be manually added to the HTML template with a preload hint.
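A sketch of such a splitChunks setup; the group names follow the article, while the test patterns are hypothetical:

```typescript
// Hypothetical production splitChunks config producing the four cache groups.
const optimization = {
  splitChunks: {
    chunks: 'all',
    cacheGroups: {
      vendor: {
        test: /[\\/]node_modules[\\/]/,
        name: 'vendor',
        priority: 10,
      },
      polyfills: {
        test: /[\\/]node_modules[\\/](core-js|regenerator-runtime)[\\/]/,
        name: 'polyfills',
        priority: 20, // more specific than vendor, so it wins for matching modules
      },
      layout: {
        test: /[\\/]src[\\/]layout[\\/]/,
        name: 'layout',
      },
      marketplace: {
        test: /[\\/]src[\\/]common[\\/]/,
        name: 'marketplace',
      },
    },
  },
};
```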

3.6 Comparison between old and new bundle

Let’s benchmark the footprint of our Scala page again.

Below I’ve provided the list of assets we’re loading for this page.

To make it clearer, let’s split assets into classes — critical and lazy.

  • Critical assets are required for the application to work. Without them, no actual rendering will happen.
  • Lazy assets load dynamically in the background, and the application gets progressively rendered as each chunk is fetched.

Now, we can compare the footprint of the “legacy” and “new” in terms of critical/lazy assets.

As you can see from the table, removing duplicate code allowed us to reduce the bundle size by 65%. The user loads ~440 KB of shareable code, which is loaded a single time and cached by the browser. The rest is dynamic content, loaded as a set of chunks of ~20–30 KB each. HTTP/2 enables the effective loading of such chunks.

Note: the CSS size was reduced from 200 KB to 80 KB. This is the bonus I was talking about: removing the @import keyword across the application helped us eliminate the duplication in the CSS production bundle. Pretty cool.

3.7 Moving to ES2017 in production

Browsers widely support ES2017: according to caniuse.com, support is about 94% worldwide. So why not ship it to production? This allows us to avoid shipping unnecessary polyfills, speeds up the build, generally improves the application’s performance, and paves the way for a future migration to fully native ES modules. But the most exciting part is how it affects the size of our application.

So, we got a 116 KB bundle size reduction just by switching the compile target to ES2017 and removing the unnecessary Babel polyfills. Cool!

3.8 Efficient compression — Brotli

A set of small things adds up to a significant impact. Most CDNs use default GZIP compression to serve static content, but there is another compression mechanism that is not as widely used yet is supported by modern browsers and CDNs: Brotli. On paper, it should be 15–20% more efficient. Let’s apply it to our application.
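To sanity-check the claim before touching the CDN, you can compare the two codecs with Node’s built-in zlib bindings. The payload here is a made-up repetitive string; real JS/CSS assets compress less dramatically:

```typescript
import * as zlib from 'node:zlib';

// Compare GZIP and Brotli on the same payload. On our real assets Brotli
// saved ~15%; the CDN serves the .br variant to browsers that send
// "Accept-Encoding: br" and falls back to gzip otherwise.
const payload = Buffer.from(
  'function render(plugin){return "<li>" + plugin + "</li>";}\n'.repeat(500)
);

const gzipped = zlib.gzipSync(payload, { level: 9 });
const brotlied = zlib.brotliCompressSync(payload, {
  params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11 }, // max quality is fine at build time
});

console.log(`raw:    ${payload.length} bytes`);
console.log(`gzip:   ${gzipped.length} bytes`);
console.log(`brotli: ${brotlied.length} bytes`);
```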

Let’s check how it works for us:

In our case, it gave us a 15% additional bundle size reduction. In total, we saved about ~200 KB. That’s quite a good number.

4. Final results

4.1 Lighthouse

Now we can run Lighthouse again to see whether our refactoring, new build configuration, and CDN compression got the job done.

Lighthouse — 2020

Lighthouse — 2021

These are good numbers 🎉 🎉 🎉 But still, we have a lot of work to do.

4.2 Scala plugin page average load

So, let’s compare Google Analytics data for the month before optimizations and almost a month after.

That’s about a 50% reduction in page load time for the slowest page on the website. The picture is similar for most of the other slow pages.

4.3 Web-application footprint

4.4 Results reflection

What was good:

  1. Reduced the bundle size by 65%
  2. Reduced the development compile time by 80%
  3. According to Google Analytics, application loading speed improved significantly in the China region, by ~30–50% depending on the page
  4. We removed all legacy code, and now the application works as a SPA
  5. The application is shipped as ES2017 and is ready for native modules
  6. Significantly improved Lighthouse metrics

What was unexpected:

  1. The average loading speed in Europe and the USA improved only slightly, by about 8–10% depending on the page, even with such a bundle reduction. I suppose this is because of the fast internet connections in these regions, which handle 700 KB and 2 MB payloads almost identically.
  2. We can’t go with native modules yet, because some of our core in-house libraries are shipped as ES5, and we found many incompatibility issues when shipping them together with ES modules.

5. Further development

Over the last two years, we did a big job refactoring Marketplace from a jQuery + GSP application into a modern web application. But this process never stops. As the next steps, the team is going to implement the following:

  1. Native ES6 Modules
  2. Serving modern image formats (WebP / WebM) instead of PNG / GIF
  3. Further performance improvements

6. Final words

Thanks for reading such a long article! I’d love to provide more technical details and analytics, but it’s really hard to fit 2 years into a single post. Feel free to ask me about details.

I want to say personal thanks to the Marketplace team and especially to @Daria Chembrovskaya, @Semyon Atamas, @Patrick Kostas, @Sergei Ugdyzhekov, Anna Kozhemyako for helping me with releasing these changes.❤️❤️❤️

This article was written on my last working day at JetBrains. I’d also love to say thanks to JetBrains and all my friends here. Working with such talented people was a great challenge and journey for me. These were an awesome 3 years :)

LinkedIn: evgenii-ray
Telegram Channel: https://t.me/frontend_engineer_blog

With love and drive to develop,
Evgenii
