amanchopra1905/specflow-selenium-hyperexecute-sample

hyperexecute_logo

HyperExecute is a smart test orchestration platform to run end-to-end Selenium tests at the fastest speed possible. HyperExecute lets you achieve an accelerated time to market by providing a test infrastructure that offers optimal speed, test orchestration, and detailed execution logs.

The overall experience helps teams test code and fix issues at a much faster pace. HyperExecute is configured using a YAML file. Instead of moving the Hub close to you, HyperExecute brings the test scripts close to the Hub!

To know more about how HyperExecute does intelligent test orchestration, check out the HyperExecute Getting Started Guide.

Try it now

Gitpod

Follow the steps below to run this sample using the Gitpod button:

  1. Click the 'Open in Gitpod' button (you will be redirected to the Login/Signup page).
  2. Log in with your LambdaTest credentials; the Gitpod editor opens in a new tab while the current tab shows the HyperExecute dashboard.

Run in Gitpod

How to run Selenium automation tests on HyperExecute (using SpecFlow framework)

Pre-requisites

Before using HyperExecute, you have to download the HyperExecute CLI binary corresponding to the host OS. Along with it, you also need to export the environment variables LT_USERNAME and LT_ACCESS_KEY, which are available on the LambdaTest Profile page.

Download HyperExecute CLI

HyperExecute CLI is the command-line tool for interacting with and running tests on the HyperExecute Grid. The CLI provides a host of other useful features that accelerate test execution. In order to trigger tests using the CLI, you need to download the HyperExecute CLI binary corresponding to the platform (or OS) from which the tests are triggered.

It is recommended to download the binary into the project's parent directory.
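Shown below is a hedged sketch of how the binary can be downloaded from the terminal. The URLs are taken from LambdaTest's public HyperExecute documentation, not from this repo, so please verify them on the HyperExecute CLI docs page before use:

# macOS
curl -O https://downloads.lambdatest.com/hyperexecute/darwin/hyperexecute
chmod +x hyperexecute

# Linux
curl -O https://downloads.lambdatest.com/hyperexecute/linux/hyperexecute
chmod +x hyperexecute

# Windows
curl -O https://downloads.lambdatest.com/hyperexecute/windows/hyperexecute.exe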

Configure Environment Variables

Before the tests are run, please set the environment variables LT_USERNAME & LT_ACCESS_KEY from the terminal. The account details are available on your LambdaTest Profile page.

For macOS:

export LT_USERNAME=LT_USERNAME
export LT_ACCESS_KEY=LT_ACCESS_KEY

For Linux:

export LT_USERNAME=LT_USERNAME
export LT_ACCESS_KEY=LT_ACCESS_KEY

For Windows:

set LT_USERNAME=LT_USERNAME
set LT_ACCESS_KEY=LT_ACCESS_KEY
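If you use PowerShell on Windows instead of cmd, the equivalent commands (added here as a convenience; they are not part of the original sample) would be:

$env:LT_USERNAME="LT_USERNAME"
$env:LT_ACCESS_KEY="LT_ACCESS_KEY"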

The project structure is as shown below:

specflow-demo-sample
      |
      |--- Features (Contains the feature files)
      |       |--- GoogleSearch.feature
      |       |--- LambdaTestSearch.feature
      |       |--- SeleniumPlayground.feature
      |       |--- ToDoApp.feature
      |
      |--- Hooks (Contains the event bindings to perform additional automation logic)
      |       |--- Hooks.cs
      |
      |--- Steps (Contains the step definitions that correspond to the feature files)
      |       |--- GoogleSearchSteps.cs
      |       |--- DuckDuckGoSearchSteps.cs
      |       |--- SeleniumPlaygroundSteps.cs
      |       |--- ToDoAppSteps.cs
      |
      |--- App.config (Application configuration file containing settings specific to the app)
      |
      |--- yaml
              |--- specflow_hyperexecute_matrix_sample.yaml
              |--- specflow_hyperexecute_autosplit_sample.yaml

Matrix Execution with SpecFlow

Matrix-based test execution is used for running the same tests across different test (or input) combinations. The Matrix directive in HyperExecute YAML file is a key:value pair where value is an array of strings.

Also, the key:value pairs are opaque strings for HyperExecute. For more information about matrix multiplexing, check out the Matrix Getting Started Guide

Core

In the current example, the matrix YAML file (yaml/specflow_hyperexecute_matrix_sample.yaml) in the repo contains the following configuration:

globalTimeout: 90
testSuiteTimeout: 90
testSuiteStep: 90

The globalTimeout, testSuiteTimeout, and testSuiteStep values are each set to 90 minutes. The target platform is set to Windows. Please set the runson key to mac if the tests have to be executed on the macOS platform.

runson: win

The matrix consists of the following entries: project and scenario. This is because parallel execution is achieved at the scenario level.

matrix:
  project: ["OnlySpecTest.sln"]
  #Parallel execution at the scenario level
  scenario: ["GoogleSearch_1", "GoogleSearch_2", "GoogleSearch_3",
             "LambdaTestBlogSearch_1", "LambdaTestBlogSearch_2", "LambdaTestBlogSearch_3",
             "SeleniumPlayground_1", "SeleniumPlayground_2", "SeleniumPlayground_3",
             "ToDoApp_1", "ToDoApp_2", ToDoApp_3]

The testSuites object contains a list of commands (that can be presented in an array). In the current YAML file, commands for executing the tests are put in an array (with a '-' preceding each item). The dotnet test command is used to run tests located in the current project. In the current project, parallel execution is achieved at the scenario level.

Please refer to Executing specific Scenarios in Build pipeline for more information on filtering the test execution based on Category.

testSuites:
  - dotnet test $project --filter "(Category=$scenario)"
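For illustration, substituting one matrix combination into the command above (project = OnlySpecTest.sln and scenario = GoogleSearch_1, values taken from the matrix shown earlier) resolves to roughly:

dotnet test OnlySpecTest.sln --filter "(Category=GoogleSearch_1)"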

Pre Steps and Dependency Caching

Dependency caching is enabled in the YAML file to ensure that the package dependencies are not downloaded in subsequent runs. The first step is to set the Key used to cache directories.

cacheKey: '{{ checksum "packages.txt" }}'

Set the array of files & directories to be cached. Separate folders are created for downloading global-packages, http-cache, and plugins-cache. Please refer to Configuring NuGet CLI environment variables to know more about overriding settings in NuGet.Config files.

NUGET_PACKAGES: 'C:\nuget_global_cache'
NUGET_HTTP_CACHE_PATH: 'C:\nuget_http_cache'
NUGET_PLUGINS_CACHE_PATH: 'C:\nuget_plugins_cache'
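As an optional local sanity check (this command is not part of the sample's YAML), you can list the NuGet cache locations that these environment variables override:

dotnet nuget locals all --list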

Steps (or commands) that must run before the test execution are listed in the pre run step. In the example, the list of NuGet packages referenced by the project is written to packages.txt using the dotnet list package command (this is the file from which the cacheKey checksum is computed). All the local NuGet caches are cleared using the nuget locals all -clear command, after which the entire project is built from scratch using the dotnet build -c Release command.

pre:
 # https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-list-package
 - dotnet list $project package > packages.txt
 - nuget locals all -clear
 - dotnet build -c Release

Post Steps

Steps (or commands) that need to run after the test execution are listed in the post step. In the example, we cat the contents of yaml/specflow_hyperexecute_matrix_sample.yaml:

post:
  - cat yaml/specflow_hyperexecute_matrix_sample.yaml

Artifacts Management

The mergeArtifacts directive (which is false by default) is set to true to merge and combine the artifacts generated under each task.

The uploadArtefacts directive informs HyperExecute to upload artifacts (files, reports, etc.) generated after task completion. In the example, path consists of glob patterns matching the directories (i.e. Report/ and Screenshots/) that contain the test reports and execution screenshots respectively.

mergeArtifacts: true

uploadArtefacts:
 - name: Execution_Report
   path:
    - Report/**
 - name: Execution_Screenshots
   path:
    - Screenshots/**/**

HyperExecute also lets you download the artifacts to your local machine. To download the artifacts, click the Artifacts button corresponding to the associated TestID.

specflow_matrix_artefacts_1

Now, you can download the artifacts by clicking on the Download button as shown below:

specflow_matrix_artefacts_2

Test Execution

The CLI option --config is used for providing the custom HyperExecute YAML file (i.e. yaml/specflow_hyperexecute_matrix_sample.yaml). Run the following command on the terminal to trigger the tests in C# files on the HyperExecute grid. The --download-artifacts option is used to inform HyperExecute to download the artifacts for the job. The --force-clean-artifacts option force cleans any existing artifacts for the project.

./hyperexecute --config yaml/specflow_hyperexecute_matrix_sample.yaml --force-clean-artifacts --download-artifacts

Visit HyperExecute Automation Dashboard to check the status of execution:

specflow_matrix_execution

Shown below is the execution screenshot when the YAML file is triggered from the terminal:

specflow_cli1_execution

specflow_cli2_execution

Auto-Split Execution with SpecFlow

The auto-split execution mechanism lets you run tests at a predefined concurrency and distribute the tests over the available infrastructure. Concurrency can be achieved at different levels: file, module, test suite, test, scenario, etc.

For more information about auto-split execution, check out the Auto-Split Getting Started Guide

Core

The auto-split YAML file (yaml/specflow_hyperexecute_autosplit_sample.yaml) in the repo contains the following configuration:

globalTimeout: 90
testSuiteTimeout: 90
testSuiteStep: 90

The globalTimeout, testSuiteTimeout, and testSuiteStep values are each set to 90 minutes. The runson key determines the platform (or operating system) on which the tests are executed. Here we have set the target OS as Windows.

runson: win

Auto-split is set to true in the YAML file.

autosplit: true

retryOnFailure is set to true, instructing HyperExecute to retry failed command(s). The retry operation is carried out until the number of retries mentioned in maxRetries is exhausted or the command execution results in a pass. In addition, the concurrency (i.e. number of parallel sessions) is set to 25.

retryOnFailure: true
maxRetries: 5
concurrency: 25

Pre Steps and Dependency Caching

Dependency caching is enabled in the YAML file to ensure that the package dependencies are not downloaded in subsequent runs. The first step is to set the Key used to cache directories.

cacheKey: '{{ checksum "packages.txt" }}'

Set the array of files & directories to be cached. Separate folders are created for downloading global-packages, http-cache, and plugins-cache. Please refer to Configuring NuGet CLI environment variables to know more about overriding settings in NuGet.Config files.

NUGET_PACKAGES: 'C:\nuget_global_cache'
NUGET_HTTP_CACHE_PATH: 'C:\nuget_http_cache'
NUGET_PLUGINS_CACHE_PATH: 'C:\nuget_plugins_cache'

Post Steps

The post directive contains a list of commands that run as a part of post-test execution. Here, the contents of yaml/specflow_hyperexecute_autosplit_sample.yaml are read using the cat command as a part of the post step.

post:
  - cat yaml/specflow_hyperexecute_autosplit_sample.yaml

The testDiscovery directive specifies the mode of test discovery along with the command that is used for discovering the tests. Here, we fetch the list of scenario tags that are then passed to the testRunnerCommand.

testDiscovery:
  type: raw
  mode: static
  command: grep -rni 'Features' -e '@' --include=\*.feature | sed 's/.*@//'
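To make the discovery pipeline easier to follow, here is a hedged illustration of its intermediate output (the file paths are from the project tree above, but the line numbers are hypothetical). grep -rni prints matches in the file:line:content format, and sed 's/.*@//' strips everything up to and including the '@', leaving only the tag name:

# grep -rni 'Features' -e '@' --include=\*.feature
Features/GoogleSearch.feature:3:@GoogleSearch_1
Features/ToDoApp.feature:3:@ToDoApp_1

# ... | sed 's/.*@//'
GoogleSearch_1
ToDoApp_1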

Running the above command on the terminal will give a list of scenarios present in the feature files:

  • GoogleSearch_1
  • GoogleSearch_2
  • GoogleSearch_3
  • ToDoApp_1
  • ToDoApp_2
  • ToDoApp_3
  • LambdaTestBlogSearch_1
  • LambdaTestBlogSearch_2
  • LambdaTestBlogSearch_3
  • SeleniumPlayground_1
  • SeleniumPlayground_2
  • SeleniumPlayground_3

The testRunnerCommand contains the command that is used for triggering the tests. The output fetched from the testDiscovery command acts as an input to the testRunnerCommand.

testRunnerCommand: dotnet test --filter "(Category=$test)"
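For example, for the discovered tag GoogleSearch_1 (one of the values listed above), the resolved command would be roughly:

dotnet test --filter "(Category=GoogleSearch_1)"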

Artifacts Management

The mergeArtifacts directive (which is false by default) is set to true to merge and combine the artifacts generated under each task.

The uploadArtefacts directive informs HyperExecute to upload artifacts (files, reports, etc.) generated after task completion. In the example, path consists of glob patterns matching the directories (i.e. Report/ and Screenshots/) that contain the test reports and execution screenshots respectively.

mergeArtifacts: true

uploadArtefacts:
 - name: Execution_Report
   path:
    - Report/**
 - name: Execution_Screenshots
   path:
    - Screenshots/**/**

HyperExecute also lets you download the artifacts to your local machine. To download the artifacts, click the Artifacts button corresponding to the associated TestID.

specflow_autosplit_artefacts_1

Now, you can download the artifacts by clicking on the Download button as shown below:

specflow_autosplit_artefacts_2

Test Execution

The CLI option --config is used for providing the custom HyperExecute YAML file (i.e. yaml/specflow_hyperexecute_autosplit_sample.yaml). Run the following command on the terminal to trigger the tests in C# files on the HyperExecute grid. The --download-artifacts option is used to inform HyperExecute to download the artifacts for the job. The --force-clean-artifacts option force cleans any existing artifacts for the project.

./hyperexecute --config yaml/specflow_hyperexecute_autosplit_sample.yaml --force-clean-artifacts --download-artifacts

Visit HyperExecute Automation Dashboard to check the status of execution:

specflow_autosplit_execution

Shown below is the execution screenshot when the YAML file is triggered from the terminal:

specflow_autosplit_cli1_execution

specflow_autosplit_cli2_execution

Secrets Management

In case you want to use any secret keys in the YAML file, the same can be set by clicking the Secrets button on the dashboard.

specflow_secrets_key_1

Now create a secret key that you can use in the HyperExecute YAML file.

secrets_management_1

All you need to do is create an environment variable that uses the secret key:

env:
  PAT: ${{ .secrets.testKey }}
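As a hedged illustration (the repository URL below is hypothetical and not part of this sample), the variable can then be referenced by the commands in your YAML steps, for example to clone a private repository in a pre step. Note that the exact syntax for reading an environment variable depends on the shell of the target OS (e.g. $PAT in bash, %PAT% in cmd):

pre:
  - git clone https://$PAT@github.com/your-org/private-repo.git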

Navigation in Automation Dashboard

HyperExecute lets you navigate from/to Test Logs in the Automation Dashboard from/to HyperExecute Logs. You also get relevant Selenium test details like video, network logs, commands, exceptions, and more in the dashboard. Effortlessly navigate from the automation dashboard to HyperExecute logs (and vice-versa) to get more details of the test execution.

Shown below is the HyperExecute Automation dashboard which also lists the tests that were executed as a part of the test suite:

specflow_hyperexecute_automation_dashboard

Here is a screenshot that lists the automation test that was executed on the HyperExecute grid:

specflow_testing_automation_dashboard

LambdaTest Community 👥

The LambdaTest Community allows people to interact with tech enthusiasts. Connect, ask questions, and learn from tech-savvy people. Discuss best practices in web development, testing, and DevOps with professionals from across the globe.

Documentation & Resources 📚

If you want to learn more about LambdaTest's features, setup, and usage, visit the LambdaTest documentation. You can also find in-depth tutorials around test automation, mobile app testing, responsive testing, and manual testing on the LambdaTest Blog and LambdaTest Learning Hub.

About LambdaTest

LambdaTest is a leading test execution and orchestration platform that is fast, reliable, scalable, and secure. It allows users to run both manual and automated testing of web and mobile apps across 3000+ different browsers, operating systems, and real device combinations. Using LambdaTest, businesses can ensure quicker developer feedback and hence achieve a faster go-to-market. Over 500 enterprises and 1 million+ users across 130+ countries rely on LambdaTest for their testing needs.

We are here to help you 🎧

About

Demonstration of SpecFlow Selenium testing on HyperExecute Grid
