
How to Run Cross-Browser Tests With Jest?

Jest is a popular JavaScript testing framework that provides an easy way to create and run unit and integration tests. Its speed, interactive watch mode, and snapshot testing make it a go-to tool for many web developers.

However, Jest runs tests only in the Node.js environment by default. To validate code across different browsers, we need to expand Jest’s capabilities using additional tools and configuration.
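For example, a plain Jest test like the one below runs entirely inside Node, with no browser involved (a minimal illustrative sketch):

// sum.test.js - a minimal default Jest test; it runs in Node, not in a browser
const sum = (a, b) => a + b;

test('adds two numbers', () => {
  expect(sum(2, 3)).toBe(5);
});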

In this article, we will learn how to run cross-browser tests with Jest. We will cover integrating services like LambdaTest to open up Jest to real browser testing.

Shifting testing left with Jest while expanding test coverage beyond Node builds development confidence and prevents browser-specific defects from reaching users. Let’s explore how to add this cross-browser testing capability to Jest.

How to Run Cross-Browser Tests With Jest on the LambdaTest Platform

Here are the key steps to enable running cross-browser tests with Jest:

1. Set up a test environment

– Install Jest and the browser automation tools you want to use locally. Popular options are Puppeteer, Playwright, and Selenium WebDriver.

# Install Jest
npm install --save-dev jest

# Install Puppeteer for Chrome/Chromium testing
npm install --save-dev puppeteer

# Install Playwright for Chromium, Firefox, and WebKit (Safari)
npm install -D playwright

– Configure Jest to run tests across the local browsers using Jest’s multi-project runner, or orchestrate concurrent runs with a utility like p-map.

// In the Jest config file
// Run tests across 2 Puppeteer instances
module.exports = {
  projects: [
    {
      ...jestConfig,
      testEnvironment: "jest-environment-puppeteer",
      launch: {
        headless: true,
        args: ["--no-sandbox"]
      }
    },
    {
      ...jestConfig,
      testEnvironment: "jest-environment-puppeteer",
      launch: {
        headless: false
      }
    }
  ],
};

// Use p-map to run tests concurrently across browser fixtures
// (illustrative sketch: browserFixtures and the Jest runner call are defined elsewhere)
const pMap = require('p-map');

pMap(browserFixtures, async (browser) => {
  await jest.runTests({
    testEnvironment: browser.testEnvironment,
    launch: browser.launch,
  });
}, { concurrency: 2 });

– Run simple sanity checks locally to validate the setup before expanding to more browsers.
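A sanity check can be as simple as the sketch below, which assumes the jest-puppeteer environment exposes the global `page` object and uses example.com as a placeholder URL:

// sanity.test.js - minimal local sanity check (assumes jest-environment-puppeteer globals)
describe('local browser sanity check', () => {
  test('loads a page and reads its title', async () => {
    await page.goto('https://example.com'); // placeholder URL
    const title = await page.title();
    expect(title).toMatch(/Example/);
  });
});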

Define browser-specific setup files

// jest-puppeteer.config.js
const puppeteer = require('puppeteer');

module.exports = async function () {
  const browser = await puppeteer.launch({
    headless: false
  });

  // ...initialize context, pages, browser objects

  // Set up browser mock/stub
  const browserMock = { /* ... */ };

  return {
    browser,
    browserMock,
    // ...
  };
};

This initializes each browser properly before the tests execute and lets mock objects be reused across test files for that browser.
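A test file can then pull the shared browser from the environment and open its own page, roughly as sketched here (assuming the `browser` global exposed by the Puppeteer environment):

// navigation.test.js - sketch that reuses the shared browser from the setup
test('opens a fresh page from the shared browser', async () => {
  const page = await browser.newPage(); // `browser` comes from the environment setup
  await page.goto('https://example.com'); // placeholder URL
  await expect(page.title()).resolves.toBeTruthy();
  await page.close();
});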

2. Integrate LambdaTest

  1. Sign up for a LambdaTest account to access its online Selenium grid of 3000+ browser/OS combinations and real device cloud.
  2. Configure Jest to run tests in parallel on both local browsers and the LambdaTest online grid (for example, via Jest’s multi-project configuration).
  3. Set the LambdaTest username, access key, and grid URL in the Jest config file or as environment variables.

Set LambdaTest credentials

# Set LambdaTest credentials as environment variables
export LT_USERNAME="<your username>"
export LT_ACCESS_KEY="<your access key>"

Install LambdaTest NPM package

# Install the lt-selenium npm package
npm install lt-selenium

Configure LambdaTest as Jest environment

// In the Jest config
// Require lt-selenium
const lt = require('lt-selenium');

module.exports = {
  projects: [
    ...localBrowserSetups,

    // LambdaTest config
    {
      displayName: 'LambdaTest Chrome',
      testEnvironment: lt.LambdaTestEnvironment,
      environmentVariables: {
        LT_USERNAME: process.env.LT_USERNAME,
        LT_ACCESS_KEY: process.env.LT_ACCESS_KEY,
      },
      launch: {
        capabilities: {
          browserName: 'Chrome',
          browserVersion: 'latest',
          platform: 'Windows 10'
        }
      }
    }
  ]
};

Set capabilities

// Capabilities to test Safari on macOS Big Sur
launch: {
  capabilities: {
    browserName: 'Safari',
    browserVersion: 'latest',
    platform: 'macOS Big Sur'
  }
}

This allows running Jest tests in parallel across local browsers and additional environments on the LambdaTest online grid.
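As a rough illustration of what such a remote test looks like, the sketch below drives a LambdaTest grid session directly with selenium-webdriver; the exact capability names may differ from lt-selenium’s configuration, so treat it as an assumption-laden example rather than the library’s own API:

// lambdatest.test.js - hedged sketch of a Jest test against the LambdaTest Selenium grid
const { Builder } = require('selenium-webdriver');

const gridUrl = `https://${process.env.LT_USERNAME}:${process.env.LT_ACCESS_KEY}@hub.lambdatest.com/wd/hub`;

test('remote Chrome on Windows 10 loads the page', async () => {
  const driver = await new Builder()
    .usingServer(gridUrl)
    .withCapabilities({
      browserName: 'Chrome',
      browserVersion: 'latest',
      platformName: 'Windows 10', // capability names assumed; verify against the LambdaTest docs
    })
    .build();

  try {
    await driver.get('https://example.com'); // placeholder URL
    expect(await driver.getTitle()).toMatch(/Example/);
  } finally {
    await driver.quit();
  }
}, 60 * 1000); // generous timeout for remote browser startup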

3. Add CI integration

– Set up a CI pipeline like GitHub Actions, CircleCI, etc., to automate Jest test runs on every code change.

Add Jest test script in CI config

# .github/workflows/ci.yml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      # Run Jest tests in CI
      - run: npm test

– Configure the CI system to spin up parallel runs across local and LambdaTest browsers based on the Jest config.

Configure CI to run tests across multiple browsers

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        browser: [chrome, firefox, safari, edge]
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      # Pass the browser via the CLI
      - run: npm test -- --browser=${{ matrix.browser }}

Spin up parallel runs on LambdaTest grid

jobs:
  test:
    # ...
    steps:
      # Jest config handles LambdaTest parallelization
      - run: npm test -- --runInBand

– View test reports and videos on the CI system as well as on the LambdaTest dashboard.

Access LambdaTest logs/reports

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ...
      - run: npm test
      # Download LambdaTest logs
      - uses: lambda-actions/lambda-test-report-action@v1
        with:
          lt_access_key: ${{ secrets.LT_ACCESS_KEY }}

This automates executing Jest tests across multiple local and LambdaTest browsers on every code change while providing access to debugging data.

4. Collect coverage reports

– Use Jest’s built-in coverage reporter or tools like `jest-coverage-reporters` to gather coverage data.

Use Jest’s built-in coverage reporting

// Enable coverage reporting in the Jest config
module.exports = {
  collectCoverage: true,
  coverageReporters: ["json", "html"]
};

# Generate a coverage report
npm test -- --coverage

This will output a JSON coverage report showing statement, branch, function, and line coverage.
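If you also want the build to fail when coverage drops, Jest’s built-in `coverageThreshold` option can enforce minimums alongside the reports; the numbers below are arbitrary examples:

// Jest config - fail the run if coverage falls below these global minimums
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 80,
      branches: 70,
      functions: 80,
      lines: 80
    }
  }
};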

Install jest-coverage-reporters for customized reports

npm install jest-coverage-reporters


// Jest config
module.exports = {
  coverageReporters: ["text", "lcov"]
};

// text: console report
// lcov: lcov info file for external upload

– Merge coverage reports from local and LambdaTest test runs to get overall coverage.

Merge coverage reports

// Merge coverage data
import { mergeReports } from 'jest-coverage-reporters';

const coverage = mergeReports(jsonReportsFromAllRuns);

# Create the combined report
npm run coverage -- --coverage --mergeReports

# Upload the final coverage
npm install coveralls
coveralls < coverage/lcov.info

This allows gathering comprehensive coverage data across test runs on multiple local browsers and LambdaTest machines.

5. Handle test timeouts

– Set appropriate custom timeouts in Jest config to allow for browser launch time variability.

Set custom timeouts in Jest config

// Set a higher default timeout for all tests
module.exports = {
  testTimeout: 30 * 1000 // 30 seconds
};

// Set different timeouts per browser
module.exports = {
  projects: [
    {
      testTimeout: 15 * 1000 // 15 seconds for local Chrome
    },
    {
      testEnvironment: "jest-environment-lambdatest",
      testTimeout: 30 * 1000 // 30 seconds for LambdaTest browsers
    }
  ]
};

– Use the LambdaTest tunnel for internal site access to reduce timeouts caused by external network latency.

Use LambdaTest tunnel for internal site access

// Import the LambdaTest tunnel module
const ltTunnel = require('lt-tunnel');

// Create the tunnel object
const tunnel = ltTunnel({
  user: process.env.LT_USERNAME,
  key: process.env.LT_ACCESS_KEY
});

module.exports = {
  environmentVariables: {
    LT_TUNNEL: tunnel.toString()
  }
};

– Analyze timeout trends across browsers and retry failing tests to isolate browser-specific issues.

Analyze timeout trends and retry failures

# View timeout-related failures across browsers
npm test -- --json --outputFile=test-results.json

# Retry failing tests in the specific browser
npm test -- -t="iPhone 12 Safari" --detectOpenHandles

# Debug browser-specific timeout issues

This allows customizing timeout thresholds, reducing network latency timeouts, and debugging browser-specific trends.
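One way to isolate flaky, timeout-prone tests is to combine Jest’s built-in retry support with a longer per-test timeout, roughly as sketched below (`jest.retryTimes` requires the jest-circus runner, the default since Jest 27):

// Retry flaky remote-browser tests and give them a longer per-test budget
jest.retryTimes(2); // re-run a failing test up to 2 extra times

test('slow remote browser workflow', async () => {
  // ...drive the browser here (placeholder body)
}, 45 * 1000); // per-test timeout overrides the config-level default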

Performing Real-Time Cross-Browser Testing with LambdaTest

Here is a detailed guide on performing real-time cross-browser testing with LambdaTest:

Getting Started

1. Sign up for a LambdaTest account if you don’t already have one.

2. Log into your LambdaTest account.

3. Navigate to the Real Time Testing section.

Starting a Test Session

1. Enter the URL of the website or web app that needs testing.

2. Select the OS, browser, browser version, and screen resolution for your test.

3. Click on “Start Session” to initiate testing on the configured environment.

Performing Testing

Once the test environment is ready:

Interact with the web app

Once the test session starts, you can interact with the web app or website through the browser screen. Test critical workflows by clicking buttons, filling forms, navigating between pages, and validating UI elements. Identify any issues faced during the user journey.

Take Screenshots

When you encounter a bug or compatibility issue, use the camera icon to take a screenshot. A screenshot of the current browser state is captured for later reference. Take multiple screenshots to document the various defects uncovered during testing.

Log Bugs

Click on the bug icon to log any defects along with details like steps to reproduce, screenshots, severity, etc. These bugs will be saved specifically for the test session. Bug reporting helps document the issues found during a session.

Record Videos

To record videos of test cases, click on the video icon. Execute test workflows while the video records your interactions, and stop recording once the test case is complete. Videos are useful for documentation and sharing with stakeholders.

Save Visuals to Gallery

All screenshots and videos taken during a session can be saved to your LambdaTest gallery using the “Save” option. Use the LambdaTest dashboard to access these visual assets later across sessions.

Change Resolution

The resolution dropdown in a session allows changing the browser screen resolution on the fly. Test the web app across different device resolutions like mobile, tablet, desktop, etc.

Move Session to Project

Sessions can be moved to a specific project using the Project icon. This organizes related sessions under one project for easier reporting and sharing.

End Session

Once testing is complete, end the session by clicking the “End Session” button. Ending the session releases the online machine back to the LambdaTest grid.

Switching Configurations

To switch to a different browser or OS without ending the session:

Click the Switch Button

The “Switch” button is located next to the configuration dropdown in the test session screen. When you want to change the browser or OS without ending the session, click on this Switch button.

Select New Configuration

After clicking the Switch button, the configuration dropdown will open. Here you can select the new browser, operating system, or resolution you want to test. For example, you can switch from Chrome on Windows to Safari on macOS.

Confirm Switch

Once you have selected the desired new configuration, confirm the change by clicking the “Switch” button below the dropdown. This will initiate the switch to the new browser or OS.

Wait for New Session

The previous session will end, and LambdaTest will spin up a new online machine with the selected configuration. Wait for the new browser or OS screen to load.

Continue Testing

Now you can continue testing on the new environment. The switch enables changing test configurations on the fly without having to end sessions and reconfigure from scratch.

Switch Multiple Times

You can repeat the process and switch environments as many times as required during a test cycle. Switching saves time when validating across multiple browsers or OS versions.

The Switch capability makes it easy to expand test coverage during a single testing session.

Mobile Testing

To test on mobile devices:

Navigate to Mobile Testing Icon

The mobile testing icon is located on the left panel in the real-time testing section of LambdaTest. Click on this icon to switch from desktop browser testing to mobile testing.

Select Mobile Device

After clicking the mobile icon, you will see a list of mobile devices like iPhone, Samsung Galaxy, Google Pixel, etc. Select the target mobile device on which you want to test the web app.

Interact with Mobile View

Once the mobile device simulator loads, you will see the mobile view of your web app. Interact with the mobile view by clicking buttons, entering text in forms, and validating UI responsiveness.

Clear Cache on iOS

For iOS simulators, there is an additional “Clear Cache” button available. If you make changes and want to test them in the same session, use this clear cache button to purge cached app data.

Switch Mobile Devices

You can switch between mobile devices using the switch button, just like with desktop browsers. Test across different phones and tablet orientations.

End Session

Once mobile testing is complete, hit the “End Session” button to terminate the test run and release the mobile simulator.

Conclusion

LambdaTest’s interactive real-time testing allows validating web apps across multiple desktop and mobile browsers with a click. The variety of environments and debugging capabilities provides comprehensive cross-browser coverage to identify compatibility issues before release.

Handy in-session tools like screenshot capture, bug logging, video recording, and asset saving enable comprehensive documentation during tests. Switching configurations on the fly allows expanding coverage without having to restart sessions repeatedly.

With its vast device selection and intuitive interface, LambdaTest enables teams to scale real-time cross-browser testing and deliver flawless omnichannel digital experiences. Interactive testing uncovers UI and UX defects early, leading to higher-quality software.
