5 critical ingredients for a better software development process

Software development methodologies often come equipped with a set of tools. These tools are principles and activities that, among other things, enable predictability, drive continuous improvement, and help identify bottlenecks.
Scrum sprints provide visibility into future deliverables, Kanban’s WIP limit helps the team reduce waste, and XP (Extreme Programming) advocates for rapid feedback. These are just a few of the many concepts that each methodology offers.

Whether your team uses a more flexible methodology like Kanban or a more well-defined one like Scrum, you may be missing out on beneficial tools that are not built into your methodology of choice.
The popular software development methodologies available today have been game-changers, but none of them (not even Scrum) provides a complete toolkit.

Leading teams of various sizes, at different stages of product maturity and company growth, I learned there is a set of tools that helped my teams create a culture of high productivity and consistent delivery of high-quality products.
Below is this set of battle-tested tools that have provided great value for me and my teams time after time. If your team is not practicing any of them, I strongly recommend giving them a try.

Task size limit
Whether your team is implementing user stories, requirements, or any other type of deliverable, limiting the size of these tasks has significant benefits:
It provides focus for the team, shortens feedback loops, increases flexibility (there are more opportunities to make adjustments between tasks), makes work more predictable since smaller tasks are easier to estimate, and so on.
I recommend aiming for an average of one work week per task, and no more than two work weeks.

Work-in-progress limit
Limiting work-in-progress (WIP) is about identifying bottlenecks and ensuring healthy flow in your development process. To put it into practice, limit the number of tasks (active or inactive) in each step of your workflow. You can, for example, limit the number of tasks currently in development to twice the number of developers on your team. If there are any inefficiencies in your development process, you will quickly hit the WIP limit and the inefficiencies will be exposed, forcing you to act.
While it is more commonly used in Kanban boards, the underlying principle behind it can be applied to any methodology.

Throughput/velocity calculation
Knowing your team’s task completion rate, combined with some basic math, can help your team plan more efficiently, provide estimated delivery dates to stakeholders, and identify bottlenecks in your development process.
Kanban teams frequently strive for tasks that are roughly the same size. This allows the team’s throughput to be calculated. When combined with the current backlog, you can deduce when each backlog item will be delivered.
In Scrum, a sprint backlog can be created by combining the team’s velocity and task size estimates.
For teams that have never used this tool before, the efficiency boost it provides is often not obvious. It manifests itself over time in an improved planning process and the invaluable predictability it provides for both small and large teams.
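
To make the basic math concrete, here is a minimal sketch with made-up numbers; your own measurement period, throughput, and backlog will obviously differ:

// A minimal sketch with made-up numbers: forecasting delivery from measured throughput
const tasksCompleted = 24;   // tasks the team finished...
const weeksMeasured = 8;     // ...over this many weeks
const throughput = tasksCompleted / weeksMeasured; // 3 tasks per week

const backlogPosition = 12;  // the backlog item we want a date for (12th in the queue)
const weeksToDelivery = Math.ceil(backlogPosition / throughput); // about 4 weeks from now
console.log(`Expect item #${backlogPosition} in about ${weeksToDelivery} weeks`);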

Pre-mortems
Perhaps the simplest of tools, yet one of the most effective ones with the highest return on investment. A pre-mortem is a session in which the team gathers to assess what will break in production and come up with ideas to mitigate these risks.
Ask the team to imagine the project has failed and to brainstorm every possible reason for that failure. Later, the team can devise methods to proactively detect these failures and prevent them from occurring in the first place.
It’s surprising how uncommon pre-mortems are, given their low cost and high value.

Retrospectives
The retrospective is a meeting in which team members reflect on their work and discuss what went well and what could be improved. Typically, retrospective meetings are held at the end of a project or sprint.
I’m assuming most readers are familiar with the concept, but I’m also assuming not all teams are practicing it regularly. This is often because the team focuses on pointing fingers rather than creating a blameless environment, one where team members feel safe sharing their learnings and looking for ways to improve together.
Being disciplined about conducting effective blameless retrospectives, with actionable results, is a critical component in a true continuous improvement culture.

These are just a few of the tools that I’ve found helpful in my work as a developer and engineering manager. I’m sure there are many more that your team can benefit from. The ones I listed provided the most value for the least amount of effort while meeting the core requirements of visibility, predictability, and continuous improvement.
I hope this list will help you and your team build better software.

Dockerizing Playwright/E2E Tests


Author: Zachary Leighton | Cross-posted from Medium

Tired of managing your automation test machines? Do you spend every other week updating Chrome versions, or maintain multiple VMs that always seem to need system updates? Are you having problems scaling the number of tests you can run per hour, per day?

If so (and perhaps even if not), this guide is for you!

We’re going to eliminate those problems and allow for scalability, flexibility, and isolation in your testing setup.

How are we going to do that you may ask? We’re going to run your test suite inside Docker containers!

#PimpMyDocker

If you haven’t thought about it much, you may wonder why you want to do such a thing.

Well… it all boils down to two things, isolation & scalability.

No, we’re not talking about the 2020’s version of isolation.

We’re talking about isolating your test runs so they can run multiple times, on multiple setups all at the same time.

Maybe you only run a handful of tests today, but in the future you may need to cover dozens of browser & OS combinations fast and at scale.

In this tutorial I’ll attempt to cover the basics of running a typical web application in a Docker container, as well as how to run tests against it in a Docker container using Playwright.

Note that for a true production setup you will want to explore NGINX as a webserver in the container, as well as an orchestration system (like Kubernetes). For a robust CI/CD pipeline, you’d also want to run the tests as scripts and have your CI/CD provider run all of this in a container with the dependencies installed, but more on that later.

You can clone the repo here to have the completed tutorial or you can copy the script blocks as we go along.


The Basics

Creating your app

For this example we’re going to be using the Vue CLI to create our application.

Install the Vue CLI by running the following in your shell:

npm i -g @vue/cli @vue/cli-service-global

Now let’s create the app, here we’ll call it e2e-in-docker-tutorial (but use whatever you like):

vue create e2e-in-docker-tutorial

Follow the on-screen instructions if you want a custom setup, but for this we’ll be using the Default (Vue 3 Preview) option.

After everything finishes installing, let’s make sure it works.

We’ll serve the app locally by running the following:

npm run serve

Then open a browser to localhost on the port it chose (http://localhost:8080) and you should see something like this.

Abba, build me something!

We’ll now write a small sample page that we can use later on to test. It’ll have some basic logic with some interactivity.

For this example we’ll create a dog bone counter, so we can track how many treats Arthur the Cavachon has gotten.

“Is that a dog?” — Arthur

We’ll only have two buttons, “give a bone” and “take a bone” (Schrodinger’s bone is coming in v2).

We’ll also add a message so that he can tell us how he’s feeling. When he is given a bone he will woof in joy, but when you take a bone he will whine in sadness.

And lastly, we’ll keep a counter going so we can track his bone intake (gotta watch the calcium intake you know…).

We’ll write the styles here in BEM with SCSS, so we’ll need to add sass and sass-loader to the devDependencies. Note we pin sass-loader to version 10 here due to some compatibility issues with the postcss-loader in the Vue CLI.

npm i -D sass sass-loader@^10

The App.vue should look something like this:

<template>
  <Arthur />
</template>

<script>
import Arthur from './components/Arthur.vue'

export default {
  name: 'App',
  components: {
    Arthur
  }
}
</script>

<style>
#app {
  font-family: Avenir, Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-align: center;
  color: #2c3e50;
  margin-top: 60px;
}
</style>

And we also have our Arthur.vue component which looks like this:

<template>
  <div class="arthur">
    <h1>Arthur's Bone Counter</h1>
    <img src="~@/assets/arthur.jpg" class="arthur__img" />
    <h2 id="dog-message">
      {{ dogMessage }}
    </h2>
    <h3 id="bone-count">
      Current bone count: {{ boneCount }}
    </h3>
    <div>
      <button class="arthur__method-button" @click="giveBone" id="give-bone">Give a bone</button>
      <button class="arthur__method-button" @click="takeBone" id="take-bone">Take a bone</button>
    </div>
  </div>
</template>

<script>
export default {
  name: 'Arthur',
  data() {
    return {
      boneCount: 0,
      dogMessage: `I'm waiting...`
    }
  },
  methods: {
    giveBone() {
      this.boneCount++;
      this.dogMessage = 'Woof!';
    },
    takeBone() {
      if (this.boneCount > 0) {
        this.boneCount--;
        this.dogMessage = 'Whine!';
      } else {
        this.dogMessage = `I'm confused!`;
      }
    }
  }
}
</script>

<style lang="scss">
.arthur {
  &__img {
    height: 50vh; // width scales automatically to height when unset
  }
  &__method-button {
    margin: 1rem;
    font-size: 125%;
  }
}
</style>

Once we have that all set up we can run the application and we should see:

Click the “Give a bone” button to give Arthur a bone and he will thank you!

If he doesn’t listen you can also take the bone away, but he’ll be confused if you try to take a bone that isn’t there!

Hosting your built app

In order to host a build we’ll need to add http-server to our devDependencies and create a start script. Start by adding http-server:

npm i -D http-server

And add a start script to package.json that will host the dist folder, which is created by running a build, on port 8080:

"start": "http-server dist --port 8080"

To test this, run the following commands and then open a browser to http://localhost:8080:

npm run build
npm start

You should see the application running now. Great work! Let’s wrap this all into a Docker container!

I Came, I Saw, I Dockered

Creating a Dockerfile

Now let’s create a Dockerfile that can build the bone counter and serve it up for us.

Create a Dockerfile (literally Dockerfile, no extension) in the root of the repo; we’ll base it on the current Alpine Node LTS image (14 as of this writing).

We’ll add some code to copy over the files, build the application and run it inside of the Docker container with http-server from our start command.

FROM mhart/alpine-node:14

WORKDIR /app

COPY package.json package-lock.json ./

# If you have native dependencies, you'll need extra tools
# RUN apk add --no-cache make gcc g++ python3

RUN npm ci

COPY . .

RUN npm run build

# Run the npm start command to host the server
CMD ["npm", "start"]

We’ll also add a .dockerignore to make sure we don’t accidentally copy over something we don’t want, such as node_modules, since we’ll install dependencies inside the container.

node_modules
npm-debug.log
dist

Going back to the Dockerfile, it’s important to note why we copy the package* files first: layer caching.

If we change anything in package.json or package-lock.json, Docker will know and will rebuild from that layer downward.

If, however, you only changed application files, Docker will reuse the cached layers for the package files and the npm ci step, and only re-run the layers from the build step onward.

This can save significant time when you have large installs, or need to rebuild the image multiple times for larger repos.
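
To visualize where the cache boundary sits, here is the same sequence again, annotated with comments (no behavior change):

COPY package.json package-lock.json ./   # rebuilt only when the package files change
RUN npm ci                               # cached as long as the layer above is cached
COPY . .                                 # invalidated by any change to the source files...
RUN npm run build                        # ...so the build re-runs, but npm ci does not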

Let’s now build the image and tag it as arthur. Make sure you’re in the root directory where the Dockerfile is.

docker build . -t arthur

The output should look something like this:

Once it’s built, we’ll run the image and forward port 8080 inside the container to port 9000 on the host, instead of passing it straight through to 8080.

docker run -p 9000:8080 arthur

Notice we see in the log that it’s running on port 8080, but this is from inside the container. To view the site we need to go to our host machine on port 9000 that we set using the -p 9000:8080 flag.

Navigate to http://localhost:9000 and you should see the app:

Congratulations! You are now running a web application inside of a Docker container! Grab a celebratory coffee or beer if you want before continuing on; I’ll just wait here watching cat videos on YouTube.

Thou Shalt Test Your App

Writing a Playwright test

For the next part, we’ll cover adding Playwright and Jest to the project, and we’ll run some tests against the running application. We’ll use jest-playwright, which provides a lot of the boilerplate needed to configure Jest and also run the server while testing.

Playwright is a library from Microsoft with an API that is almost one-to-one with Puppeteer’s, letting you drive a browser through a standard JavaScript API.

The big difference is that Puppeteer is limited to Chromium-based browsers, but Playwright includes a special WebKit browser build, which can help cover browser compatibility with Safari.

For your own needs you may want to use Selenium, Cypress.io, Puppeteer, or something else altogether. There are many great automation and end-to-end testing tools in the JavaScript ecosystem so don’t be afraid to try something else out!

So going back to our tutorial, let’s start by adding the devDependencies we need, which are Jest, the preset for Playwright, Playwright itself, and a nice expect library to help us assert conditions.

npm install -D jest jest-playwright-preset playwright expect-playwright

We’ll create a jest.e2e.config.js file at the root of our project and specify the preset along with a testMatch property that will only run the e2e tests. We’ll also set up the expect-playwright assertions here as well.

module.exports = {
  preset: 'jest-playwright-preset',
  setupFilesAfterEnv: ['expect-playwright'],
  testMatch: ['**/*.e2e.js']
};

Please note that this separation by naming doesn’t matter much for this demo project. However, in a real project you’d also have unit tests (you *DO* have unit tests with 100% coverage don’t you…) and you’d want to run the suites separately for performance and other reasons.

We’ll also add a configuration file for jest-playwright so we can run the server before we run the tests. Create a jest-playwright.config.js with the following content.

// jest-playwright.config.js

module.exports = {
    browsers: ['chromium', 'webkit', 'firefox'],
    serverOptions: {
        command: 'npm run start',
        port: 8080,
        usedPortAction: 'kill', // kill any process using port 8080
        waitOnScheme: {
            delay: 1000, // wait 1 second for tcp connect 
        }
    }
}

This configuration file will also automatically start the server for us when we run the e2e test suite, awesome dude!

Writing the tests

Now let’s go ahead and write a quick test scenario: we’ll open up the site, test a few actions, and assert they do what we expect (arrange, act, assert!).

Go ahead and create arthur.e2e.js in the __tests__/e2e/ directory (create the directory if not present).

The tests will look like the following:

describe('arthur', () => {
    beforeEach(async () => {
        await page.goto('http://localhost:8080/')
    })

    test('should show the page with buttons and initial state', async () => {
        await expect(page).toHaveText("#dog-message", "I'm waiting...");
        await expect(page).toHaveText("#bone-count", "Current bone count: 0");
    });

    test('should count up and woof when a bone is given', async () => {
        await page.click("#give-bone");
        await expect(page).toHaveText("#dog-message", "Woof!");
        await expect(page).toHaveText("#bone-count", "Current bone count: 1");
        
    });

    test('should count down and whine when a bone is taken', async () => {
        await page.click("#give-bone");
        await page.click("#give-bone");
        // first give 2 bones so we have bones to take!
        await expect(page).toHaveText("#dog-message", "Woof!");
        await expect(page).toHaveText("#bone-count", "Current bone count: 2");


        await page.click("#take-bone");
        
        await expect(page).toHaveText("#dog-message", "Whine!");
        await expect(page).toHaveText("#bone-count", "Current bone count: 1");

    });

    test('should be confused when a bone is taken and the count is zero', async () => {
        // check it's 0 first
        await expect(page).toHaveText("#dog-message", "I'm waiting...");
        await expect(page).toHaveText("#bone-count", "Current bone count: 0");
        
        await page.click("#take-bone");
        
        await expect(page).toHaveText("#dog-message", "I'm confused!");
        await expect(page).toHaveText("#bone-count", "Current bone count: 0");
    });
})

We won’t go into the specifics of the syntax of Playwright in this article, but you should have a basic idea of what the tests above are doing.

If it’s not so clear you can check out the Playwright docs, or try to step through the tests with a debugger.

You might also get some ESLint errors in the above file if you are following along and have the ESLint extension enabled in VS Code.

You can add eslint-plugin-jest-playwright and extend its recommended setup to lint properly in the e2e directory.

First, install eslint-plugin-jest-playwright as a devDependency:

npm i -D eslint-plugin-jest-playwright

Then create an .eslintrc.js file in __tests__/e2e with the following:

module.exports = {
    extends: [
        'plugin:jest-playwright/recommended'
    ]
};

Goodbye red squiggles!

Run, tests, run!

Now that the tests are set up properly, we’ll go ahead and add a script to run them from the package.json.

Add the test:e2e script as follows:

"test:e2e": "jest --config jest.e2e.config.js"

This will tell jest to use the e2e config instead of the default for unit tests (jest.config.js).
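
At this point the scripts section of package.json should look roughly like the following (the serve, build, and lint entries come from the Vue CLI default preset, so yours may differ slightly):

"scripts": {
  "serve": "vue-cli-service serve",
  "build": "vue-cli-service build",
  "lint": "vue-cli-service lint",
  "start": "http-server dist --port 8080",
  "test:e2e": "jest --config jest.e2e.config.js"
}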

Now go ahead and run the tests, keep up the great work!

Note that you may need to install some system libraries if you don’t have the right dependencies. For that, please consult the Playwright documentation directly, or just skip ahead to the Docker section, which has everything you need in the container.

Running it in a Docker container

Now we’ll put it all together and run the e2e tests inside a Docker container that has all the dependencies we need, which will let us scale easily and also run against a matrix of browsers and versions (we don’t touch on that in this article, but maybe in a part 2).

Create a Dockerfile.e2e like so:

# Prebuilt MS image
FROM mcr.microsoft.com/playwright:bionic

WORKDIR /app

COPY package.json package-lock.json ./

RUN npm ci

COPY . .

RUN npm run build

# Run the npm run test:e2e command to host the server and run the e2e tests
CMD ["npm", "run", "test:e2e"]

Note that the CMD here is set to run the e2e tests. This is because we want to run the tests as the starting command for the container, not as part of the build process for the container. This isn’t necessarily how you’d run things with a CI provider, so YMMV.

Go ahead and run the docker build for the container and specify the different tag and Dockerfile:

docker build . -f Dockerfile.e2e -t arthur-e2e

In this demo we build the image with the tests baked in, but in theory you could exclude the tests from the COPY command and mount them as a volume instead, so you wouldn’t need to rebuild between test changes (sketched below).
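
Here is a rough sketch of what that could look like, assuming the tests live under __tests__/ as in this tutorial (hypothetical usage, not something the repo sets up for you):

# Mount the local tests over the ones baked into the image
docker run --rm -v "$(pwd)/__tests__:/app/__tests__" arthur-e2e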

We can run the container and see the tests with the following command (the --rm flag will remove the container at the end of the test so we don’t leave containers hanging):

docker run --rm arthur-e2e

You should see output like the following:

Great job! You just ran e2e tests in WebKit, Chromium, and Firefox in a Docker container!

If you enjoyed this tutorial and you’d like to participate in an amazing startup that’s looking for great people, head over to Tipalti Careers!

If you’d like to comment or add some feedback, feel free; we’re always looking to improve!

Migrating from AngularJS to Vue — Part 1

Author: Rotem Benjamin

Here at Tipalti, we have multiple web applications which were written over the course of the past 10 years. The newest of these is an AngularJS application that was born at the beginning of 2014. Back then, AngularJS was the most common JavaScript framework, and it made sense to start a new application with it. The following is our journey of migrating and rewriting this app from AngularJS to a new VueJS 2 app.

Why Migrate?

Google declared in 2018 that AngularJS was going into support-only mode, and that support would end in 2021. We wouldn’t want our application running on a framework without any support, so the sooner we start migrating the better, as a migration can take quite some time.

Why Vue?

As of this writing, there are three major players in the front-end framework ecosystem: Angular, React, and Vue.
It may appear that migrating from AngularJS to Angular is a better idea than moving to a different framework. However, that migration is as complicated as any other option, thanks to Google’s total rewrite of the Angular framework.

I won’t go into the whole process we underwent until finally deciding to go with Vue, but I’ll share the main reasons we chose to do so. You can find a lot of information on various blogs and tech sites that support these claims.

  • Great performance — Vue is supposed to have better performance than Angular and according to some comparisons better than React as well.
  • Growing community and GitHub commits — It is easy to see on Trends that the Vue community is growing with each passing month, and its GitHub repo is one of the most active ones.
  • Simplicity — Vue is so easy to learn! Tutorials are short and clear, and in a really short time one can gain the knowledge needed to start writing Vue applications.
  • Similarity to AngularJS — Though it is not a good reason on its own, the similarity to AngularJS makes learning Vue easier.

You can read more reasons in THIS fine post.

What was our status when we started?

It is important to mention that our application was and still is in development, so stopping everything and releasing a new version 6 months (hopefully) later was not an option. We had to take small steps that would allow us to gradually migrate to Vue without losing the ability to develop new features in our application going forward.

During the course of the application development, we followed best-practice guides such as John Papa’s and Todd Motto’s.

The final version of our application was such that:

  • All code was bundled with Webpack.
  • All constant files were AngularJS consts.
  • All model classes were AngularJS Factories.
  • All API calls were made from AngularJS Services.
  • Some of the UI was HTML with a controller attached to it (especially for ui-router), and some was written as components (with the introduction of components in AngularJS 1.5).
  • Client unit testing was written using Jasmine and Karma.

Can it be done incrementally?

Migrating a large scale application such as the one we have into Vue is a long process. We had to think carefully about the steps we would need to take in order to make it a successful one, that would allow us to write new code in Vue and migrate small portions of the app over time.

As you can understand from the above, our code was tightly coupled to AngularJS, such that in order to migrate to Vue some actions needed to be taken prior to the actual migration, and that was the first part of our migration plan.

It’s important to emphasize that as we started the migration, we understood that although the best-practice guides mentioned above are really good, they had led us to couple our code to a specific framework without a real need to do so.

This is also one of the lessons we learned during our migration — write as much pure JS code as you possibly can and depend as little as possible on frameworks, as frameworks come and go, and you can easily find yourself trying to migrate again to a new framework in 2–3 years. It seems obvious, as that’s what SOLID principles are all about; however, it’s sometimes easy to miss those principles when you don’t treat the framework itself as a dependency.

What were our first steps in the migration plan?

So, the actual action items we created for the preliminary stage of our migration plan were about detaching everything possible from AngularJS. That meant taking code that has few dependencies of its own, but that many other modules depend on, and converting it from AngularJS style to ES6 style, thereby detaching it from the AngularJS ecosystem. Practically, we transformed shared code to be consumed via import/export instead of AngularJS’s built-in dependency injection.

To wrap things up, we created the following action items:

  • Remove all const injection into AngularJS and use export statements instead. Every new const will be added as an ES6 const and consumed via import/export.
  • Update our httpService (which was an AngularJS service), the HTTP request proxy in our app. Every API call in our application was made using this service. We replaced the $http dependency with axios and by doing so created an AngularJS-free httpService. Following that, we removed the injection of httpService from every component in our app and included it with an import statement instead (see the sketch after this list).
  • Create a new testing environment using Chai, Sinon, and Mocha, which allows us to test ES6 classes that are not part of the AngularJS app.
  • Transfer all business logic (BL) services from AngularJS to pure ES6 classes and remove them from AngularJS dependency injection.
  • Create a private NPM package with shared Vue controls that will be used by every application we have. This is not a necessity for the first step toward full migration, but it will allow us to reuse components across multiple apps and give them the same behavior and styling (which can be overridden, of course), something that is expected from different apps in the same organization.
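
For illustration only, a framework-free httpService along those lines might look roughly like the sketch below (method names and error handling are simplified; this is not our actual implementation):

// httpService.js: a plain ES6 module wrapping axios, with no AngularJS dependency
import axios from 'axios';

class HttpService {
  get(url, config) {
    return axios.get(url, config).then(response => response.data);
  }

  post(url, data, config) {
    return axios.post(url, data, config).then(response => response.data);
  }
}

// Consumers import this instance directly instead of asking the AngularJS injector for it
export default new HttpService();

Any module, whether it still lives in AngularJS or is already Vue, can then import httpService directly instead of receiving it through dependency injection.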

What’s next?

As for our next steps, every new feature will be developed in Vue and will, for now, be part of the AngularJS app by using the ng-vue directive. At the same time, we can start migrating logical components and pages from AngularJS to Vue by leveraging that same great ng-vue directive.

We did migrate one page to ng-vue and added new tests for the Vue components we created, in addition to introducing Vuex, which will eventually take ownership of the state data.
Once we finished all of the above, we were ready to take the next step of our migration….
More to follow in the next post.

White labeling software UI/UX using LESS

What is White Labeling?

“White Labeling” (also known as “skinning”) is a common UI/UX term used for a product or service that is produced by one company and rebranded by another, giving the appearance that the product or service was created by the marketer (or the company they represent). Rebranding has been used with products ranging from food, clothing, vehicles, and electronics to all sorts of online wares and services.

For many of Tipalti’s customers, white labeling is key to providing a seamless payment experience for payees such as publishers, affiliates, crowd and sharing economy partners, and resellers. Rather than take a user out of one portal to a third party site and break the communication chain, Tipalti enables the paying company to deeply embed the functionality within their own experience.

How does white labeling work in Tipalti?

Tipalti’s payee portal is fully customizable, allowing our customers to match their corporate brand to every step of the onboarding and management process. For example, entering their contact data, selecting their payment type, completing tax forms, and tracking their invoicing and payment status are all aspects that a payee would need to see and can be presented with the customer’s brand. Customers use their own assets to ensure a seamless look and feel in any payee-facing content.

To support the wide array of branding while maintaining a consistent code-base for all customers, we built the payee portal using LESS.

What is LESS?

CSS preprocessors are a staple in web development. Essentially, they extend plain CSS into something that looks more like a programming language.

LESS was created in 2009 by Alexis Sellier, also known as @cloudhead, and it has become one of the most popular preprocessors available; it has also been widely deployed in numerous front-end frameworks like Bootstrap. It adds programming traits to CSS, such as variables, functions or mixins, and operations, which allows web developers to build modular, scalable, and more maintainable styles.

Click here for more information on how LESS works, syntax and tools.

How is LESS used for white labeling?

Let’s assume you want to use LESS for styling a blog post. Your LESS CSS stylesheet will look something like this:

@post-color: black;
@post-background-color: white;
@sub-post-color: white;
@sub-post-background-color: black;

.post {
	color: @post-color;
	background-color: @post-background-color;
	.sub-post {
		background-color: @sub-post-background-color;
		color: @sub-post-color;
	}
}

With this styling, the blog post will roughly look like this:

[Image: the blog post rendered with the black-and-white theme]

After a while, let’s say you make contact with a new business partner. The partner wants to display your blog post inside their website, but since the color scheme of the blog post does not match the general style of the partner’s website, they want to create a custom styling that will match their color scheme. This is essentially the process of “white labeling.”

So how do you change the styling to match both color schemes?

All of this could be done in plain CSS alone, but imagine having to override each CSS class in your blog stylesheet for each theme. Not that attractive, right?

If you are using LESS, the solution is simple. You’ll just override the variables related to each property you want to change. The changes are all concentrated in a single location, which makes applying them much easier, faster, and less prone to errors.

If your partner prefers the post background color to be blue, you will simply need to do the following:

  1. Create another LESS file
  2. Inside it, add the line @post-background-color: blue;
  3. Compile the new LESS file after compiling the original one

That’s all it takes to achieve the wanted layout change. No need to worry that you forgot to update any CSS classes.
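
For example, one way to wire this up (the file names here are hypothetical) is to have the partner’s stylesheet import the base one and then redeclare the variable; LESS variables are lazily evaluated, so the last declaration wins:

// partner-theme.less (hypothetical file name)
@import "blog-post.less"; // the base stylesheet shown above

// Override only what needs to change; every other variable keeps its default
@post-background-color: blue;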

[Image: the blog post rendered with the blue background theme]

Complex LESS customizations in Tipalti

For our product, each Tipalti customer can easily customize their own payee portal simply by overriding variables in their own LESS file.

The array of possible customizations reaches far beyond colors. For example, our customers are able to change the layout of fieldsets, icons, buttons, element positions, width/height calculations, and more. Because of LESS, the rebranding never interferes with the common code, which is important as there is a lot of intelligence built into our payee portal. Likewise, the customer never has to write any custom code around the logic and execution of the processes. When you’re dealing with something as complex as global payments and validations on payment methods, this is incredibly important.
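
As a purely hypothetical illustration (the variable names below are invented for this example and are not our actual theme API), a customer override can reach into layout just as easily as color:

// customer-override.less (hypothetical variables, for illustration only)
@brand-color: #0a6ebd;
@fieldset-width: 60%;
@button-border-radius: 4px;
@icon-size: 18px;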

Here are examples of two very different LESS override stylesheets applied on the same base stylesheet:

[Images: two payee portal themes with very different LESS overrides applied to the same base stylesheet]

Other uses for LESS

LESS is very useful when you’re developing services for customers that feature their brand, but it’s also very clean and efficient for creating a flexible user interface and user experience for your product. For example, when you have many repeated elements that need to be styled, LESS can simplify the effort with variables, operators, and mixins for cross-browser and mobile/responsive support.