diff --git a/devDoc/ActsAndFacts.md b/devDoc/ActsAndFacts.md
deleted file mode 100644
index 202a038..0000000
--- a/devDoc/ActsAndFacts.md
+++ /dev/null
@@ -1,119 +0,0 @@
-## How many Acts and Facts need to be defined in the step library
-
-+ Do nothing, await
-+ Go to URL
-+ Reload page $$%%
-+ Forward $$%%
-+ Backward $$%%
-+ Scroll page.evaluate(_ => { ??
- window.scrollBy(0, window.innerHeight);
-});
-+ Interact with dialogue, e.g. alert? $$ %%
-+ verify document downloaded ??%%
-
-+ iframe acts and facts (a lot!) %%
-
-
-+ verify new tab or window opened by page.on('popup') %%
-
-
-+ Handle Http Basic Authentication %%
-+ Click
-+ Focus
-+ Hover
-+ page.select API, select a dropdown %%
-+ Set cookie
-+ Set Cache enable
-+ Set Header
-+ page.setDefaultNavigationTimeout(timeout)
-+ file picker page.waitForFileChooser([options]) %%
-+ Keyboard actions %%
-+ Mouse actions %%
-+ Mobile actions tap and touch %%
-+ EventHandler to verify element attributes (properties?) or we can call the standard Element API?
-+ Verify styles, positions, offset
-
-
-
-
-
-
-### Use page.emulate() to set the view port
-
-The device list is at [https://github.com/puppeteer/puppeteer/blob/main/src/common/DeviceDescriptors.ts](https://github.com/puppeteer/puppeteer/blob/main/src/common/DeviceDescriptors.ts); we can add custom devices to the list, e.g. desktop, desktopX ...
-
-```js
-[
- {
- name: 'Kindle Fire HDX landscape',
- userAgent:
- 'Mozilla/5.0 (Linux; U; en-us; KFAPWI Build/JDQ39) AppleWebKit/535.19 (KHTML, like Gecko) Silk/3.13 Safari/535.19 Silk-Accelerated=true',
- viewport: {
- width: 1280,
- height: 800,
- deviceScaleFactor: 2,
- isMobile: true,
- hasTouch: true,
- isLandscape: true,
- },
- },
- {
- name: 'iPhone 7',
- userAgent:
- 'Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1',
- viewport: {
- width: 375,
- height: 667,
- deviceScaleFactor: 2,
- isMobile: true,
- hasTouch: true,
- isLandscape: false,
- },
- }
-]
-```
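For example, a custom desktop entry could follow the same descriptor shape. This is a hypothetical sketch - the name, userAgent string, and viewport values below are assumptions, not entries from Puppeteer's list:

```js
// Hypothetical custom "desktop" descriptor - values are illustrative assumptions
{
  name: 'desktop',
  userAgent:
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36',
  viewport: {
    width: 1920,
    height: 1080,
    deviceScaleFactor: 1,
    isMobile: false,
    hasTouch: false,
    isLandscape: true,
  },
}
```

Such a descriptor can then be passed to page.emulate() like any built-in one.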
-
-### Big task - record console output (errors) and show it in reports
-
-page.on('console') API
-
-
-### Verify elements style
-
-```gherkin
-# Verify one single style on one or multiple elements
-Then I verified the style item {item} of it {selector} with value {value}
-```
-
-The styles of an element come from two sources: the style attribute on the element and CSS files (computed styles).
-
-We can evaluate the selector to an ElementHandle and read its style property to verify specific style items.
-
-We can call the Web API Window.getComputedStyle() to get computed styles:
-
-```js
-const data = await page.evaluate(() => {
- const elements = document.body.getElementsByTagName("*");
-
- return [...elements].map(element => {
- element.focus();
- return window.getComputedStyle(element).getPropertyValue("font-family");
- });
- });
-```
-
-**In the Act step, we try these 2 methods, and the style attribute always overrides the CSS styles.** So we check the style attribute first; if it doesn't exist, we fall back to the computed CSS styles.
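The priority rule above can be sketched as a small pure function. This is a hypothetical helper (the element shape with `inlineStyle`/`computedStyle` maps is an assumption used for illustration, not a Puppeteer API):

```javascript
// Hypothetical sketch of the verification order: the style attribute
// always overrides css-file (computed) styles.
function styleItem(el, item) {
  // check the inline style attribute first
  if (el.inlineStyle && el.inlineStyle[item] !== undefined) {
    return el.inlineStyle[item];
  }
  // fall back to the computed styles from css files
  return el.computedStyle[item];
}
```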
-
-
-### Layout testing
-
-getOffset
-
-getComputedStyle can help us do this
-
-
-### Dialog verification
-
-We need to be aware of dialogs in each story (add the dialog listener), so that we can implement the **Then** step to verify and handle the dialog. I believe only one dialog exists normally. (Otherwise we have to iterate all dialogs to dismiss them.)
-
-> Add dialog listener to all stories globally, big task
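A global listener could be built around a reusable handler like the sketch below. The factory name `makeDialogHandler` is a hypothetical illustration; it assumes a Puppeteer-style dialog object exposing `type()`, `message()`, and `dismiss()`:

```javascript
// Hypothetical sketch: record the dialog so a Then step can verify it,
// then dismiss it so the test flow is not blocked.
function makeDialogHandler(log) {
  return async (dialog) => {
    log.push({ type: dialog.type(), message: dialog.message() });
    await dialog.dismiss();
  };
}
// usage inside story setup: page.on('dialog', makeDialogHandler(records));
```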
diff --git a/devDoc/BrowserContextPage.md b/devDoc/BrowserContextPage.md
deleted file mode 100644
index b3c2ee9..0000000
--- a/devDoc/BrowserContextPage.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Browser Instance, Browser Context and Page
-
-
-
-
-Unlike the Chrome we use on laptops, we see more browser features when we use Puppeteer to interact with Chrome (or headless Chromium).
-
-```js
-page.$eval(selector, (element) => { ... }); // Use the selector to find the first matching element
-page.$$eval(selector, (elements) => { ... }); // Use the selector to find an array of matching elements
-```
-
diff --git a/devDoc/BrowserSessionInPlanScope.md b/devDoc/BrowserSessionInPlanScope.md
deleted file mode 100644
index a086136..0000000
--- a/devDoc/BrowserSessionInPlanScope.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# Share Browser Session In Plan Scope
-
-When we run a story solely, it always uses a new browser context with a new **browser session**. That means all data like cookies, cache ... are cleared, which is good. Actually we run the story by wrapping it in a plan.
-
-However, mostly we don't want every story in a plan to use a new **browser session** (except when there is only one story in the plan). **Inheriting the browser session** is the actual scenario when users access most applications with a browser. For example, the test user signs in to the app in one story, and the following stories should not need to sign in again ...
-
-How to fix this:
-
-+ There is a config item - _browserSessionScope_. The default value is **plan**, but the user can change it to **story**.
-+ If _browserSessionScope !== 'story'_, the browser is launched in plan scope, every story accesses this browser instance for its tests, and the browser context is closed when the plan finishes.
-+ If _browserSessionScope === 'story'_, the browser is launched in every story and closed at the end of every story.
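The scope decision above could be sketched as a tiny helper. This is a hypothetical illustration (the function name and return shape are assumptions, not handow's actual implementation); the config key follows the document:

```javascript
// Hypothetical sketch: anything other than 'story' falls back to plan scope.
function browserLifecycle(config) {
  const scope = config.browserSessionScope === 'story' ? 'story' : 'plan';
  return {
    launchPerStory: scope === 'story', // launch/close inside each story
    closeAtPlanEnd: scope === 'plan'   // one shared instance, closed with the plan
  };
}
```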
-
-## Valid setting
-
-```json
-{
- "headlessChromium": true,
- "browserSessionScope": "plan",
- "newIncognitoContext": false
-}
-```
-
-The browser session will be shared among stories running in different stages; this works well.
-
-## Invalid setting
-
-```json
-{
- "headlessChromium": true,
- "browserSessionScope": "plan",
- "newIncognitoContext": true
-}
-```
-
-If each story opens a **newIncognitoContext**, the stories cannot share the same browser session. So we don't need to specify **newIncognitoContext** at all: if **browserSessionScope** is **plan**, we always use the default browser context.
-
-
diff --git a/devDoc/BurnTree.md b/devDoc/BurnTree.md
deleted file mode 100644
index edcef8a..0000000
--- a/devDoc/BurnTree.md
+++ /dev/null
@@ -1,84 +0,0 @@
-## Burn the plan tree
-
-SSE scenario testing message (data) format:
-
-```js
-// Each message carries a scenario result to SHMUI
-{
- type: "data", // the data payload only available for type === "data"
- data: {
- // a scenario is a leaf of the tree | plan-->stage-->story-->storyLoop-->phaseLoop-->scenario
- stageIdx: "number",
- story: "story name",
- storyLP: "number",
- whenIdx: "number", // only for whenLP
- whenLP: "number",
- steps: {
- // string status array, could be "ready|passed|failed|skipped", e.g. ["passed", "passed", "skipped", "failed"]
- acts: [], // "acts === null" means current scenario is skipped.
- facts: [] //
- }
- }
-}
-```
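A helper shaping that message might look like the sketch below. This is a hypothetical illustration (function and field names beyond the format above are assumptions):

```javascript
// Hypothetical sketch: wrap a raw scenario result into the SSE data format above.
function scenarioMessage(result) {
  return {
    type: 'data',
    data: {
      stageIdx: result.stageIdx,
      story: result.story,
      storyLP: result.storyLP,
      whenIdx: result.whenIdx,
      whenLP: result.whenLP,
      steps: { acts: result.acts, facts: result.facts }
    }
  };
}
```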
-
-SSE info message format (other than scenario testing results):
-
-```js
-{
- type: "signal"
- // ... anything explain by SHMUI side.
-}
-```
-
-### Handow write scenario results to SSE
-
-+ The SSE endpoint is created by the SHM server by specifying a URL.
-+ The browser side creates an EventSource object connecting to the SSE endpoint, which sets up the connection between browser and server.
-+ SSE is not established before the browser side connects, because no 'response' object is created in the SSE endpoint until a browser calls it.
-+ After a browser connects to the endpoint (the EventSource instance was created on the browser side), SSE is active on the server and the server can write messages to the SSE endpoint.
-+ An active SSE endpoint doesn't guarantee the browser listeners receive the messages. The browser listeners may lose the SSE connection for various reasons, so the browser side must maintain the connection by re-connecting (creating a new EventSource).
-
-There is an **sse** service in handow-core. SHM must pass the **res** reference to handow's sse when the SSE connection is set up. The handow test runner (suiteRunner) can access **sse** and write scenario results to it - writing to **res** inside the sse service.
-
-The handow-core sse service is exported in the handow API, so SHM can pass res to sse by calling sse.init(res).
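A minimal version of such a service might look like this sketch. It assumes an Express-style response object; handow-core's real implementation may differ:

```javascript
// Hypothetical sketch of an sse service module holding the res reference.
let _res = null;

const sse = {
  init(res) { _res = res; },
  write(msg) {
    // SSE framing: "data: <payload>\n\n"
    if (_res) _res.write(`data: ${JSON.stringify(msg)}\n\n`);
  }
};
```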
-
-## Generate planTree in run time
-
-SHMUI need the plan tree to show tree burning.
-
-+ When handow is idle, the user can always open a plan tree by selecting a plan from the explorer.
-+ When handow is running, the user cannot choose any plan from the explorer.
-+ When handow is running and the user goes to Runner, he can see the current running plan tree burning. And the tree is updated when the plan is changed by the schedule.
-
-### Can not use .plan.tree file
-
-We used a static plan.tree file as the tree of a plan. The tree was generated once the user chose the plan from the explorer, then saved as a file, and we accessed this file as the tree for future test runs.
-
-**That doesn't work!!!!!!!!!!!!**
-
-+ If the user didn't open the plan by choosing it from the explorer, the tree file is not created!!
-+ After the tree file is created, it is hard to update it when the plan, parameters, stories ... change!!!
-+ So there will be no-tree or wrong-tree issues.
-
-### We always generate the tree in run time
-
-+ When the user chooses a plan in the runner's explorer, SHM asks handow to generate the tree on the fly.
-+ When handow puts a plan to run, it generates the tree before running the plan, and saves the tree to handow status (SHM can get the tree via handow.handowStatus).
-+ No tree file exists, so the tree always stays synchronized with the plan, stories, parameters ...
-+ The user can open a tree and run the plan manually, then the tree burning is shown.
-+ When handow is running, the user can go to Runner, which loads the current planTree from handow and shows the current burning.
-+ The SSE start command will trigger the runner to update the plan tree.
-
-### We still need planTree file in record (latest record and archived records)
-
-When SHMUI opens a record, it will open the burned tree. This tree must be a file together with the other record files, and the planTree must be archived.
-
-+ When handow persists the record JSON file, it persists the current planTree into a file.
-+ So SHMUI can open the planTree of the record.
-
-### Which tree is opened by Runner (of SHMUI)
-
-+ When handow is idle, Runner doesn't open any planTree by default.
-+ The user can choose a plan from Explorer and open the plan tree in the UI, but the plan tree is not set as the **Running Tree** of handow before the user runs it.
-+ The user will see the running plan tree by default when handow is running (and he cannot choose another plan from the explorer while handow is running). The running tree object was set to handow status before that plan ran.
\ No newline at end of file
diff --git a/devDoc/CSR-Github-NPM.md b/devDoc/CSR-Github-NPM.md
deleted file mode 100644
index 58812d1..0000000
--- a/devDoc/CSR-Github-NPM.md
+++ /dev/null
@@ -1,136 +0,0 @@
-# Collaboration between CSR, Github and NPM
-
-The Handow project has 3 remote repositories for development version-control, collaboration and distribution.
-
-+ For normal development version-control, a Git **CSR** instance is used synchronously with all Handow source files.
-+ As the open-source distribution repository, a Github repository is used to release public source files on demand. For example, the Github repository is updated only when a new version is stable, and it does not include private resources.
-+ For publishing the code module online, the NPM repository is used to release the core library files as a package.
-
-> The requirement is a little tricky. The first thought is using 2 independent projects for CSR and Github, keeping them synchronized manually by migrating source files from CSR to the Github local repository before pushing to Github, and then publishing the NPM package from the Github source tree. It should work, but the manual operations are boring and error prone.
-
-There is a solution that meets the requirement based on the same branch of Git-CSR.
-
-## Handle Github updating
-
-Working with CSR is just the normal Git workflow, but it isn't the same thing with Github. Here we treat Github as a **Hard-Updating-Only** remote repo. After a developer clones the CSR remote, his local repository has only one remote.
-
-```
-$ git remote -v
-> origin ssh://newlifewj@gmail.com@source.developers.google.com:2022/p/handow-uat/r/handow (fetch)
-> origin ssh://newlifewj@gmail.com@source.developers.google.com:2022/p/handow-uat/r/handow (push)
-```
-
-Assuming a Github public repository was created for Handow, we add it as another remote.
-
-```
-$ git remote add github https://github.com/newlifewj/handow-core.git // named the adding remote as "github"
-```
-
-Then we can see the Github remote was added.
-
-```
-$ git remote -v
-> github https://github.com/newlifewj/handow-core.git (fetch)
-> github https://github.com/newlifewj/handow-core.git (push)
-> origin ssh://newlifewj@gmail.com@source.developers.google.com:2022/p/handow-uat/r/handow (fetch)
-> origin ssh://newlifewj@gmail.com@source.developers.google.com:2022/p/handow-uat/r/handow (push)
-```
-
-> The CSR and Github remotes are not synchronized; we cannot check out both of them. The Github remote is just used for hard updating.
-
-We use **force pushing** to update the Github remote master branch from local dev ("github" is the Github remote name; we update its "master" branch from the local CSR "dev" branch).
-
-```
-$ git push --force github dev:master
-```
-
-If we also want the Github repo tagged (it is not necessary but harmless), the updating command should be:
-
-```
-$ git push --force github dev:master --tags
-```
-
-### Actually git can force-push to another remote URL without adding it to the local repo
-
-```
-$ git push --force https://github.com/newlifewj/handow-core.git dev:master
-```
-
-This works even if we don't add _https://github.com/newlifewj/handow-core.git_ as a named remote to the current local repository.
-
-### Exclude source files for Github repo
-
-As mentioned before, we can force-update the Github remote. But the updated Github remote repo is exactly the same as the CSR local dev. That doesn't meet the requirement; we want some secured resources excluded on Github.
-
-> **It Is Not Possible** to make the 2 remotes different by switching the _.gitignore_ file. Resources cannot be ignored again after they have been checked into a remote repository.
-
-We implement a "multi-step" solution to exclude resources on Github:
-
-+ Be sure all IDE changes are saved, and all changes are committed and pushed to CSR.
-+ Delete the resources which need to be ignored on the Github remote.
-+ Commit the current deleting change.
-+ Update the Github remote.
-+ Hard-reset the local repository to the CSR remote dev branch, then local dev is recovered.
-
-To avoid making mistakes when performing the multi-step operation, a batch script can make things easier. For example, if we want to ignore the _**doc**_ folder on Github, we create a batch runner - _pushgithub.bat_:
-
-```bat
-:: (comment line) pushgithub.bat, exclude doc folder for updating github remote ('^' to break line)
-
-rmdir /Q /S doc && ^
-git add -A && ^
-git commit -m "Prepare pushing to github repository" && ^
-git push --force github dev:master --tags && ^
-git reset --hard origin/dev
-```
-
-Then we can finish updating the Github remote with the exclusion in one shot, **Great!!**
-
-```
-$ pushgithub
-```
-
-> Actually the tags (versions) are not necessary in Github, guessing people always clone the latest, but they're harmless anyway. However, the version tag is important for npm publishing and CSR history.
-
-## Publish module package to NPM
-
-Although NPM can depend on .gitignore to exclude resources, we prefer an independent .npmignore because the source code and the executable module are totally different things (though they share a lot in a Node application).
-
-> Publishing and updating an npm pkg is the same thing. _"npm publish"_ can update an existing package.
-
-It is not necessary to keep Github releases synchronized with NPM pkg updates, but it's better to do so. The steps to update the package in the NPM online repository:
-
-+ Change the version field in _package.json_, **!!important!!**
-+ Be sure all IDE changes are saved, and all changes are committed and pushed to CSR.
-+ Tag the current commit and push tags to the remote.
-+ Perform the Github update by calling the batch file - _pushgithub_.
-+ At last, call _npm publish_ from dev.
-
-### Put a tag as version
-
-Versions help us access a definite status of the code; this becomes more important after we have published modules with versions to NPM. Different users could refer to different versions, and we need to access the relevant project status exactly. It is not necessary to add a version tag on each commit, but we **must add a version tag before updating NPM**.
-
-The Git command below adds a tag to the local repository. It tags the current commit (it does not cover un-committed changes).
-
-```
-$ git tag v1.0.0
-```
-
-After a tag is added, it's only valid in the local repository until it is pushed explicitly, e.g. push to the remote together with existing tags:
-
-```
-$ git push --tags
-```
-
-### handow-seed
-
-This project pushes the same thing to CSR and Github, with 2 repository endpoints.
-
-After pushing to CSR origin/master, do:
-
-```
-git push github master:master
-```
-
-
-## Use npm-link to access local resources as an npm module package
\ No newline at end of file
diff --git a/devDoc/DeployWebsiteOnGoogleBucket.md b/devDoc/DeployWebsiteOnGoogleBucket.md
deleted file mode 100644
index cb3d87a..0000000
--- a/devDoc/DeployWebsiteOnGoogleBucket.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# Deploy static site on Google storage bucket
-
-The most important and most frequently used cloud service is not the complex VM instance, cloud function ... It is the simple **Cloud Storage** (the buckets of Google Cloud).
-
-+ Buckets are cost-efficient for serving static data.
-+ Data is structured and easy to maintain.
-+ We can implement access control at the bucket level (normal IAM control) or the object level (ACLs - the fine-grained control).
-+ The most important thing is that **webapp UI stuff can be deployed as a static website in a Bucket**. We always split a web application into 2 (or more) parts: a static UI and one or more service providers (e.g. micro-services).
-
-> When we deploy a web application to the cloud, it mostly includes deploying the UI and static data to buckets.
-
-## About Bucket
-
-+ A Bucket is an object storage; we can access it with the cloud console, the cloud CLI, and the storage API in code.
-+ Even though a bucket is an object container, we can access it as a path tree like a file system. That makes it handy to retrieve buckets and objects by path, and is why we can deploy resources to a bucket just like a static resource server.
-+ Buckets are deployed in the cloud, so we access them over the internet. The **"root"** of the whole Google cloud storage is _**https://storage.googleapis.com/**_. The URL of a bucket is _**https://storage.googleapis.com/{bucket-name}**_, and an object URL is _**https://storage.googleapis.com/{bucket-name}/{path}/{object}**_. For example, _https://storage.googleapis.com/bkt2020/jan/snow.png_.
-+ The bucket-object accessing policy implies **a bucket name must be unique across the whole Google storage**, because all buckets live in one flat dish - _https://storage.googleapis.com/_. And we also know each resource in a bucket has a unique URI.
-+ Resources in Google cloud buckets have unique URIs, so it is possible to access any of them - but only if you have permission.
-+ Buckets can be secured at different levels, on the bucket or on the object, against roles. The role could be a user, a program ... It is a really complex issue. But for a static UI bucket, we always keep it public.
-+ If we do have some secured resources, we can create a fine-controlled bucket for them. **Mostly we need a server with Authentication/Authorization** to handle secured data instead of Bucket permissions. (E.g. access to the bucket is granted to a web program, and we deploy permission control on the web server.)
-
-## Website on Bucket
-
-Creating a static website is easy; we can always run a static website on the local machine (if using relative paths for web resources). We can migrate the whole resource tree (keeping relative paths) to a public bucket, and it should work like it did on the local machine. Then the website is available for everybody.
-
-The same goes for a web UI project - if it connects to an independent service provider.
-
-However, this simple usage for a web UI is not good:
-
-+ There is a CORS issue because all Buckets share the same root _https://storage.googleapis.com/_, so the service provider has to set Allow-Origin for this host URL. Probably you don't want to do this.
-+ We always need our own domain for the webapp, and we want the web UI hosted with a domain name instead of _https://storage.googleapis.com/_.
-
-## Use a domain name as an alias for a Bucket
-
-We can use **_https://storage.googleapis.com/{bucket-name}/..._** to access the static resources in a bucket. But we don't want to use it if we have registered our own domain name already.
-
-> The new URL is **_https://storage.cloud.google.com/{bucket-name}/..._**, and the legacy one still works.
-
-### Point your domain to a bucket
-
-After you have a domain name, the first thing is **telling DNS to resolve your domain** to an IP address. The way to register your domain with DNS is by **adding records to DNS**. DNS will spread your records to DNS servers all over the internet, and then your domain is recognized by the world.
-
-There are multiple **record types** for registering your domain with DNS; the basic type is the **A** record. An **A** record simply maps your domain to an IP address, e.g. an **A-type** record maps **_mydomain.com_** to **_107.24.10.150_**.
-
-However, DNS also needs to handle **sub-domains**. Assume we want to deploy multiple static resources, websites, service applications ... to one domain name with different sub-domain names, all pointing to the same IP, e.g. **_doc.mydomain.com_**, **_www.mydomain.com_**, **_api.mydomain.com_** to IP **_107.24.10.150_**. Of course we can add multiple **A-type** records to do this.
-
-But using **A-type** records to map multiple sub-domains to the same IP is not a good way; e.g. you have to change all the records if your IP changes. The better way is setting all records for sub-domains to point to one domain name, e.g. **_mydomain.com_**. That way we just need to change one record when the IP changes; the others will follow that **A-type** record.
-
-A record mapping a domain name to another domain name is called a **CNAME** record. And we have more important reasons to use **CNAME** DNS records in a cloud network.
-
-#### Add CNAME record to map your domain to Google storage
-
-The issue is we don't have a dedicated IP for our bucket. All buckets in Google storage share a group of IPs. These IPs are dynamic, and a complicated routing process exists between the IPs of the Google storage API and your bucket. In short, we cannot use an **A-type** record to map our bucket to our domain. The correct way is using the **CNAME** (a sub-domain name for the Google storage API) to create a **CNAME-type** record.
-
-The **CNAME** of Google storage is **_c.storage.googleapis.com_**. Define a bucket, e.g. **mybucket**, and set up a **CNAME record** (e.g. doc.mydomain.com with the storage CNAME). After the record is valid in the DNS network, we can access the bucket with our domain: doc.mydomain.com/{a file in the bucket}.
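As a sketch, the zone entry might look like the fragment below (the domain names reuse the document's examples; this is an illustrative assumption, not a real record):

```text
; hypothetical DNS zone entry for mydomain.com
doc   IN   CNAME   c.storage.googleapis.com.
```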
-
-### Configure a bucket as a static website
-
-We can configure a bucket as a static website easily with its web config tool: just select **Edit Website Configuration** and set the index page and error page. That's all.
-
-#### Use bucket as a SPA
-
-The interesting thing is that we can upload a SPA UI to a bucket (it is just several bundle files). Then we get a comprehensive website living in storage at **low cost**.
-
-## Get domain and verify in current account
-
-## Define the host name as alias of CNAME
-
-## Specify the index.html
-
-From rendering the storage URL to a domain website:
-https://storage.googleapis.com/www.handow.org/index.html vs http://www.handow.org
-
-What is "c.storage.googleapis.com"?
-
-
-> Normally we use IAM access control instead of ACLs
\ No newline at end of file
diff --git a/devDoc/G_Colaboration.md b/devDoc/G_Colaboration.md
deleted file mode 100644
index 592c28c..0000000
--- a/devDoc/G_Colaboration.md
+++ /dev/null
@@ -1,16 +0,0 @@
-# Handow in Google Ecosystem
-
-Handow lives in Google cloud, so we put everything into the Google Ecosystem.
-
-## Google domain - handow.org, I got it already.
-
-## Set up emails based on email forwarding
-
-jian.wang@handow.org forwards to newlifewj@gmail.com
-support@handow.org -- support.handow@gmail.com
-dev@handow.org -- dev.handow@gmail.com
-
-yichuang.xu@handow.org -- ??
-
-And we can add more users ...
-
diff --git a/devDoc/HandlwOutline.md b/devDoc/HandlwOutline.md
deleted file mode 100644
index 1608f5b..0000000
--- a/devDoc/HandlwOutline.md
+++ /dev/null
@@ -1,127 +0,0 @@
-# Handow Outline
-
-**E2E** testing is not technically equivalent to **UAT** (User Acceptance Testing), but they are the same thing for a web application. We do need a test tool running on the browser directly to verify that the access behavior, appearance, and data presentation meet the requirements - and mostly we only need one, whether it's called **E2E** or **UAT**. After a well-covering UAT is deployed, the **Unit Test** and **Integration Test** could be omitted, because most features are covered already by the **duck test**.
-
-> The **UAT** project is always required, but **Unit Test** and **Integration Test** could be optional when we have a well-covered **UAT**. At least we just need a little of them for some critical logic and behavior.
-
-**Handow** is a framework to make **E2E** testing or **UAT** easier. It stands on **Puppeteer** and **Jest**, and looks like other **BDD** test tools, e.g. **Cucumber**, **Serenity** ...
-
-> Why is it called **Handow**? **Puppeteer** is a great help for writing tests on Chrome. But test writers may get bored repeating the boilerplate of test suites. Just like a puppeteer who is a little weary of using those puppets and preparing the theatre - he will think, why not just play hand shadows? **Handow** is short for **Hand Shadow**.
-
-## Parallel and Sequential
-
-One significant feature of **Handow** is parallel testing, which will reduce a lot of time. There are some essential points of the Handow framework.
-
-### Jest runs multiple suites (.js test files) in parallel with multiple workers
-
-When Jest uses a **Worker Pool**, multiple workers can run tests in different processes. The number of workers can be specified by Jest CLI or config. The issue is that different workers cannot communicate with each other, so test suites running in parallel cannot share anything; e.g. they cannot share the same browser instance. In other words, each test suite always uses a new browser instance.
-
-#### How many workers should be defined for Jest
-
-Running test suites in parallel can save a lot of time. But that doesn't mean the more workers, the faster the test run.
-
-+ The more CPU cores in the test machine, the more workers can be defined.
-+ The relationship between CPUs and workers is not one-core-one-worker. Multiple workers (actually multiple processes) are still available on a single-core computer, but the performance will not be as good as on a multi-core machine.
-+ Spawning a worker process also takes some time and consumes system resources. The time saving will not be as expected if we deploy too many workers.
-+ Normally, 4 or 5 workers will be good for most test projects (or even 3 if the tests run on a single-core machine).
-+ If each test suite lasts a long time and there is a lot of waiting in testing, then multiple workers can save a lot of time. On the contrary, multiple workers cannot save much when each test suite is short and runs rapidly.
-+ In practice, using 4 workers could be almost 3 times faster than completely serial running. That's a big improvement already.
-+ Don't use many short test suites in one stage group; multiple workers cannot help this scenario much.
-
-### Browser cookies, cache, Incognito browsers
-
-Browsers have memory; they cache some data on the local machine, e.g. cookies, HTML ... For a modern web application, we always build the UI into versioned bundle files. In this case, web-resource caching is not a problem for **UAT** any more. The only thing is the cookies, which will impact test design and running. Actually they are important functional features that need to be verified.
-
-#### Regular browser instances, Incognito instances vs cookies
-
-+ A regular browser instance can persist/read permanent cookies to/from the local machine.
-+ All regular browser instances share the same permanent cookie set. E.g. any new regular browser instance is logged in after the user logs in with one browser instance.
-+ Even after all regular browser instances are closed, newly opened regular browser instances are still logged in because the cookie has not expired.
-+ An Incognito browser cannot access permanent cookies persisted by a regular browser.
-+ An Incognito browser also maintains cookies, but not permanently.
-+ All Incognito browser instances share the same cookies in the Incognito browser context. After **all Incognito browser instances** are closed, their cookies vanish immediately.
-
-### When should we open an Incognito instance
-
-Normally, we just open regular browser instances; cookies are available for **all test suites running in all stages**. That's good for cookie sharing, e.g. all test cases are based on one user login.
-
-However, we do need test scenarios based on different cookies, e.g. different user roles logged in.
-
-In Handow, we don't use the browser instance **directly**. Instead we use the **Browser Context** object.
-
-```js
- browser = await puppeteer.launch({ /* config object */ });
-
- const context = await browser.defaultBrowserContext(); // This is the browser regular context
- const context = await browser.createIncognitoBrowserContext(); // Create an incognito context
-```
-
-Unlike Chrome's behavior on your laptop:
-
-+ Browser instances don't share cookies and cache.
-+ Browser instances don't persist anything to the file system. Cookies and cache vanish after the browser instance is closed.
-+ Every browser instance has a default context; you cannot close it.
-+ You can create additional contexts on a browser instance; these additional contexts are incognito contexts.
-+ The contexts of a browser instance don't share cookies and cache.
-+ The pages of a context can share cookies and cache.
-+ When you open a new page with _browser.newPage()_, the page is opened in the default context.
-+ Is it possible to persist cookies and cache by specifying a local path ...?
-
-Can we use a cookie file and maintain it in code? So that Handow can provide steps to persist cookies and restore them from/to browser contexts? Interesting thought.
-
-```text
-# Store cookies after admin login success
-When I persist cookies {name: admin_cookies}
-
-# Restore cookies to a browser context
-When I restore cookies {name: admin_cookies}
-```
-### New page opened inside a suite test
-
-## Concept and Vocabulary
-
-Plan
-Group
-Stage
-Sequential
-Parallel
-
-Story
-Phase
-Literal (step)
-Suite
-Step
-Dummy Step
-Real Step
-Zombie Step
-
-
-## How Handow help
-
-+ Users write BDD stories with Given-When-Then syntax (creating Literal Suites).
-+ Handow generates the Step Catalog by parsing stories.
-+ If the user refers to the Standard Step Library in a Literal Step, Handow can generate a Real Step in the Step Catalog. Otherwise Dummy Steps are generated in the Step Catalog.
-+ The user can edit the Dummy Steps by filling them with real code.
-+ The same Literal Steps will share the same steps in the Step Catalog.
-+ Basing on the Step Catalog, Handow can compile stories into suites.
-+ Users can create Plans to run the test suites in different ways, e.g. run full, subset, reuse ...
-
-**Suites could be reused in a plan, e.g. appearing in different stages ...**
-
-## Browser new instance, reuse instance, open new Page (Tab)
-
-In order to improve performance, we can use _puppeteer.connect/disconnect()_ instead of _puppeteer.launch/close()_ to reuse existing browser instances. Of course, we need more logic to know whether an available instance exists. A new browser instance is always created when there is no idle browser instance available. But the browser instance will be disconnected after the current suite finishes, and its endpoint will be added to the **Idle Browser Instances List**, so it can be reused by following suites. **Note: a looping suite always uses one browser instance (it doesn't launch a new one for each loop)**. And all browser instances will be closed when the Plan finishes.
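The reuse logic could be sketched as follows. This is a hypothetical illustration of the idle-endpoint idea, not handow's actual implementation; it relies on Puppeteer's real `connect({ browserWSEndpoint })`, `launch()`, `wsEndpoint()`, and `disconnect()`:

```javascript
// Hypothetical sketch of an idle-endpoint pool for reusing browser instances.
const idleEndpoints = [];

async function acquireBrowser(puppeteer) {
  const endpoint = idleEndpoints.pop();
  return endpoint
    ? puppeteer.connect({ browserWSEndpoint: endpoint }) // reuse an idle instance
    : puppeteer.launch();                                // none idle: launch a new one
}

function releaseBrowser(browser) {
  idleEndpoints.push(browser.wsEndpoint()); // keep the endpoint for reuse
  browser.disconnect();                     // detach without closing the browser
}
```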
-
-Puppeteer also supports creating an **Incognito Browser Context**, so we don't need to worry about cookies and browser cache left behind by previous tests.
-
-Besides opening or reusing browser instances, we can also open new tabs to improve performance. For example, we can often use different tabs to flatten a **When Loop**, so that all the looping **When Phases** start in different tabs with the same view. The pages are closed after the **When Phase Loop** finishes.
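-The reuse logic above can be sketched as a small pool. This is an illustrative sketch, not Handow's actual code: the injected `launch` and `connect` functions stand in for _puppeteer.launch()_ and _puppeteer.connect()_, and all names here are assumptions.

```javascript
// Illustrative sketch of the "Idle Browser Instances List" described above.
// launch/connect are injected so the pooling logic stays self-contained:
//   launch:  async () => browser (with a wsEndpoint() method)
//   connect: async (wsEndpoint) => browser
class BrowserPool {
    constructor({ launch, connect }) {
        this._launch = launch;
        this._connect = connect;
        this.idleEndpoints = [];   // the "Idle Browser Instances List"
    }

    // Reuse an idle instance when one exists, otherwise launch a new one.
    async acquire() {
        if (this.idleEndpoints.length > 0) {
            const endpoint = this.idleEndpoints.shift();
            return { browser: await this._connect(endpoint), endpoint };
        }
        const browser = await this._launch();
        return { browser, endpoint: browser.wsEndpoint() };
    }

    // After a suite finishes: disconnect (not close) and keep the endpoint.
    release(endpoint) {
        this.idleEndpoints.push(endpoint);
    }
}
```

-A suite runner would call _acquire()_ before running and _release()_ after disconnecting, and the plan runner would close every endpoint still in the list when the plan finishes.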
-
-## Not using Jest
-
-Jest is just a runner for **describe**/**test** template code. I found it hard to borrow Jest for the Handow runner.
-
-+ The code structure would be terrible: we would have to translate suite stories into nested **describe/test** code, which seems unnecessary.
-+ We could only control the test flow through the Jest API, which is not acceptable for some important Handow features.
-+ Jest generates its own report files; Handow needs report formats that are easier to present.
-+ Handow needs **Dynamic Reporting** for future remote monitoring. Jest cannot provide this feature. (Maybe it could with deep investigation of Jest internals, but we don't want to do that.)
-+ Actually, Handow only uses Jest's **expect** library.
diff --git a/devDoc/HandowCore.md b/devDoc/HandowCore.md
deleted file mode 100644
index e8104e2..0000000
--- a/devDoc/HandowCore.md
+++ /dev/null
@@ -1,182 +0,0 @@
-# Understand Handow Core
-
-Users write literal stories and maybe a few custom steps, then they make a plan (a simple JSON file) and run it. Handow performs the test automatically, outputs info to the console (or even to a socket stream) and generates records as the test result.
-
-## Handow doesn't handle everything
-
-No UAT tool can do anything before you create test stories (and maybe some custom steps) and write a plan - not even Handow. However, once the user has done that job well, Handow handles everything else automatically. Basically, just run the plan with the CLI, a script, or through the Super UI. E.g., enter a CLI command in the shell:
-
- >handow --plan --/project/myPlan
-
-## Make a plan
-
-A plan is a JSON file like that
-
-```json
-{
- "title": "Sniff test for critical features",
- "stages": [
- {
- "stage": "Head stage",
- "description": "These tests cases must be evaluated before others because ...",
- "stories": [ "story1" ]
- },
- {
- "stage": "Main stage 1",
- "description": "Run all main feature stories [2,3] in parallel",
- "stories": [ "story2", "story3" ]
- }
- ],
- "config": {
- "consoleOutput": "none",
- "newIncognitoContext": true,
- "headlessChromium": true,
- "workers": 5
- }
-}
-```
-
-The content of the plan explains itself well.
-
-The **planRunr** imports the JSON plan directly as a plan object.
-
-> The _config_ items override the system configuration for the current plan.
-
-## CLI interface - handow.js
-
-Users run Handow functions from the CLI.
-
-    CLI syntax: handow --[task] --[target(s)], e.g.:
-
- >handow --plan --/test/sniff // Run a plan, plan file is [app-root]/test/sniff.json
- >handow --story --/test/random/DeeplinkReportCards.feature // Run story 'DeeplinkReportCards'
- >handow --story --/test/random // Run all stories directly in '[app-root]/test/random' folder
- >handow --parsestory --/test/random/DeeplinkReportCards.feature // Parse story 'DeeplinkReportCards'
- >handow --parsestory --/test/random // Parse all stories directly in '[app-root]/test/random' folder
- >handow --buildsteps --/test/custom-steps // rebuild handow steps with custom steps in specific path
- >handow --help (or any undefined handow command) // Print out CLI help
-
->Handow also provides a server and the Super UI to invoke similar tasks.
-
-## planRunr - the Plan Runner
-
-```js
-const planRunr = require('./planRunr');
-const plan = { /* plan object */ };
-
-// Run a plan with 3 workers
-planRunr(plan, 3);
-```
-
-+ Handow passes the plan object to **planRunr**.
-+ **planRunr** prepares the running environment, e.g. merging configuration, initializing the reports folder, archiving history, starting console output ...
-+ **planRunr** rebuilds all steps available for the current run.
-+ Then **planRunr** runs the 1st stage (a group of stories), and continues stage by stage ...
-+ Before running a stage, **planRunr** parses all stories of the stage group into story objects - **Suites** - and runs the suites.
-+ **planRunr** figures out how many **workers** are permitted to run stories in parallel, then feeds the workers with the relevant suites by calling the **Suite Runner**.
-+ Once a suite finishes, the **SUITE_FINISHED** event triggers **planRunr** to start a waiting suite, until all suites in the current stage have run.
-+ Once all stories in the current stage are finished, the **STAGE_FINISHED** event is emitted and **planRunr** starts the next stage, until all stages have run.
-+ When the last stage finishes, the plan is finishing. **planRunr** processes the record file, outputs to the console ... and opens a local html page showing the test result.
-
-Along with the run, **planRunr** continuously writes test info into an internal record object, which is saved as a report file in JSON format.
-**planRunr** also outputs realtime info to the console (if enabled), and to a socket (if **planRunr** was called by the Handow Server).
-
-## suiteRunr - the Suite Runner
-
-```js
-const suiteRunr = require('./suiteRunr');
-const suite = { /* a suite object */ };
-
-// Run a suite - a story in object format
-suiteRunr(suite);
-// suiteRunr is an async function ( e.g. await suiteRunr(suite);). But planRunr doesn't call like this
-```
-
-The **suiteRunr** is called to run a story; **planRunr** can open multiple **suiteRunr** instances to run stories asynchronously. Strictly speaking, **suiteRunr** doesn't run a story - it runs a suite object. However, the story-to-suite conversion is handled by **suiteRunr** at run time, so we can say **suiteRunr** consumes story files directly.
-
-A suite object is another format of a story, converted from user-friendly literals to program-friendly object data. It organizes all the steps, parameters, looping and skipping logic so that **suiteRunr** can consume them easily.
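-As a rough illustration, a suite object might look like the JSON below. The field names are assumptions made for explanation, not Handow's actual schema:

```json
{
    "story": "exampleStory",
    "given": {
        "acts": [ { "label": "I go to url {url}", "params": ["url"] } ],
        "facts": [],
        "parameters": [ { "url": "https://www.example.com" } ]
    },
    "whens": [
        {
            "acts": [ { "label": "I click it {selector}", "params": ["selector"] } ],
            "facts": [ { "label": "I can see it {selector} is displayed", "params": ["selector"] } ],
            "parameters": [ { "selector": "#profile-submit" } ],
            "skip": null
        }
    ]
}
```

-The parameters arrays drive the looping described later: one iteration per value set.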
-
-+ **planRunr** calls **suiteRunr** to run a story and, if multiple workers are specified, can open other suite runners without waiting for it to finish.
-+ Interacting with a browser needs a lot of waiting time to synchronize view updates. While one suite is waiting, the other suite runners continue their work. Multiple workers run suite tests in each other's waiting slots to avoid an idle CPU. That's the key point of Handow.
-+ **suiteRunr** creates a new browser context instance and runs the test on it. Multiple runners open multiple browser context instances; that's why a group of suites can run in parallel (data conflicts aside).
-+ After the new browser instance is created, **suiteRunr** iterates and computes the parameters in the suite to run phases and steps one by one until they are all finished.
-+ The suite runner doesn't run steps directly. It iterates the parameter loops and evaluates the skip condition on the current parameters, then calls **stepRunr** to continue running.
-+ **suiteRunr** runs phases and steps synchronously - there is no parallelism inside one suite execution.
-+ Then **suiteRunr** closes the browser and emits the '**SUITE_FINISHED**' event with the suite object as a parameter.
-+ While processing phases and steps, suiteRunr also interacts with the **record** to generate the test data object, including taking screenshots.
-+ It also outputs to the console ... (and to the socket ...)
-
-> Handow provides the **--parsestory** command to convert a story into a suite and save the suite object as a JSON file.
-
-> So far, Handow doesn't spawn processes to run multiple **suiteRunr** instances. Instead they are executed with Node's non-blocking mechanism.
-
-## stepRunr - the step runner
-
-```js
-const step = { /* the step object from suite */ };
-const sdata = { /* parameters valid in current story context */ }
-const page = { /* the browser context instance opened in current story */ }
-const config = { /* the system config data computed in current story */ }
-
-const _result = await stepRunr( step, sdata, page, config );
-```
-
-The **suiteRunr** processes the suite object, loops over story parameters, evaluates skip conditions, ... However, it doesn't run the steps - the real test operations based on pptr. Once **suiteRunr** reaches a step and decides to execute it (either an Act or a Fact), it invokes **stepRunr** to run the step.
-
-Actually **stepRunr** doesn't run the step either; it finds the step object in the **step-bundle** repository and lets the step run itself (via its _doStep()_ method). The task of **stepRunr** is to introduce the current suite context (parameters, pptr instances, record and output flow ...) to the step, making the abstract step run in the current environment. So we can say **stepRunr** instantiates the step.
-
-+ **suiteRunr** passes the target step and environment data to **stepRunr**.
-+ **stepRunr** finds the step object in the step-bundle repository by label matching.
-+ The arguments of the step from the bundle repository use general references. **stepRunr** resolves the mapping between them and the current story parameters.
-+ Then **stepRunr** calls the step's _doStep()_ method. All steps need access to the **page** and **config** objects, so they are passed to _doStep()_ too.
-+ Besides applying the current data to the _doStep()_ call, **stepRunr** also instantiates the general step label into a **populated step title**, so that steps have titles with concrete meaning.
-+ After the step finishes, **stepRunr** returns a result including the status and the populated step title, plus an Error object when the step failed. The result goes back to **suiteRunr** (the caller) and feeds further processing, e.g. writing to the record object.
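-The matching and instantiation steps above can be sketched like this. It is illustrative, not the real **stepRunr**: the bundle shape, field names and result format are assumptions.

```javascript
// Illustrative sketch of stepRunr (not the real implementation).
// bundle maps general step labels to step objects; shapes are assumptions.
async function runStep(step, sdata, page, config, bundle) {
    // 1. Find the step object in the bundle repository by label matching
    const stepObj = bundle[step.label];
    if (!stepObj) {
        return { status: 'broken', title: step.label, error: new Error(`No step matches "${step.label}"`) };
    }
    // 2. Resolve the general argument references to current story parameters
    const args = step.params.map((pname) => sdata[pname]);
    // 3. Populate the general label into a concrete step title
    let title = step.label;
    stepObj.args.forEach((aname, i) => { title = title.replace(`{${aname}}`, String(args[i])); });
    try {
        // 4. Let the step run itself, with page and config in scope
        await stepObj.doStep(page, config, ...args);
        return { status: 'passed', title };
    } catch (error) {
        return { status: 'failed', title, error };
    }
}
```

-The caller (**suiteRunr**) only sees the returned status, populated title and optional error, which is what feeds the record object.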
-
-## stepsBundle - repository for all available steps
-
-There are 2 bundle files located in the **[app-root]/stepBundles/** folder, _actBundles.js_ and _factBundles.js_. They are generated when Handow builds steps by compiling the Handow built-in steps together with custom steps. Handow rebuilds the steps before each test run.
-
-The steps, whether Handow built-in or user custom, are just code snippets. Handow compiles them into generalized objects bundled into the 2 bundle files. They can then be instantiated into real steps by injecting real parameters.
-
-> Mostly we cannot run a single step on its own, because steps need the app in the browser to be in the correct status, with valid parameter and config data ... If we do need to test a step, we can write a simple story including this step and run that story with the CLI.
-
-> Actually Handow doesn't run stories directly; technically it always puts stories into an internal plan and runs the plan.
-
-## record
-
-The record service generates a record object along with the test run and saves it as a JSON file. The record file name protocol is:
-
-    [plan-name]_[timestamp].json  // e.g. myPlan_1573925791416.json
-
-Handow links the screenshot file names (if enabled in config) into the record automatically. That means the JSON file is all we need to render the html reports. The screenshot file name protocol is:
-
-    [story-name]_[timestamp].[jpg|png|...]  // e.g. exampleStory_1573925546782.png
-
-+ users can specify a path to contain the reports.
-+ users can specify the image format of the screenshot files.
-+ users can specify whether reports are archived and the max number of archives kept.
-
-When Handow starts running a plan, it always archives the current reports and then cleans them from the reports directory. There is a folder **[report dir]/archives** in the reports directory for history archives. Each test history is archived into a folder named after its JSON record. E.g.:
-
- --/reports
- |--archives
- | |--myPlan_1573925791416
- | |. myPlan_1573925791416.json
- | |. story_1573925546782.png
- | |. ...
- |
-
-## honsole - handow's console printer
-
-It is called during the test run to output test info and results to the console.
-
-We can configure the output mode among "story", "step" and "none" ("story" is the default mode).
-
-+ "story" mode shows the plan stage, story status and a progress bar, and finally the result summary.
-+ "step" mode shows a stream of each step's status (we can choose whether to show @skip phases/steps). The step info is wrapped by story, phase and looping indicators. Finally it shows the result summary.
-+ "none" just shows the result summary after the plan finishes.
-
-> Showing the step stream only makes sense when we run a single story, or a few stories with a single worker. The steps are interleaved and unreadable when we run multiple stories in parallel.
-
-## HTML Render
diff --git a/devDoc/HandowSite.md b/devDoc/HandowSite.md
deleted file mode 100644
index e2f01a8..0000000
--- a/devDoc/HandowSite.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Handow Site
-
-Handow site is a web application created with Node.js and React. Main features:
-
-+ Hosting all Handow public documents.
-+ The entry for donations to Handow (e.g. donate to Handow to download the pdf or html document).
-+ SHandow secured install, authorization and user management.
-
-## Application design
-
-### Architecture
-
-Handow site is a Node.js/React SPA deployed on Google Cloud. MongoDB is installed for user and **SHandow** instance management. Documents (markdown files) in cloud storage are static resources of the Handow Site.
-
-The React project is deployed as a cloud static resource; the Node.js server is deployed as an API service. When users read the documents, they don't need to access the server at all.
-
-### Security
-
-3 roles are permitted to access the Handow Site.
-
-+ **Visitor**: anybody can access the Handow documents as a visitor. A visitor can donate for the Handow documents and then download an html document to a local machine. After donating, a visitor can choose to set up his own account on the Handow Site, after which he can always download updated documents.
-+ After a visitor donates to the Handow site and creates an account, he becomes a **User**.
-+ A visitor can also simply register an account to become a **User**.
-+ Besides all the permissions granted to a **Visitor**, a **User** can sign in to his account.
-+ A User can donate for a Handow document in his account view. After donating, he can download the html document from his account.
-+ A User can activate SHandow from his account and then see his SHandow instance status.
-+ A User can pay the annual SHandow fee from his account.
-+ **Admin** can do everything, but mostly an **Admin** doesn't want to download docs or donate. Instead, the **Admin** role can access the **user management** dashboard and all related secured pages.
-
-### Description
-
-
-> Scaffold a demo html page (including .js and .css) for the documentation code demos. The documentation navigates the user through finishing a UAT project against this demo page. The demo page will be hosted on the Handow site.
-
-## Cloud static resource for doc .md files
-
-Handow documents are independent .md files stored in cloud storage instead of a database. The urls of these files are constant properties hardcoded in the React UI project.
-
-
-## Thinking
-
-+ Why do we need a web application for documents, not a static page?
-+ How do we map an URL to a static index page?
-+ Apply for a domain from Google?
\ No newline at end of file
diff --git a/devDoc/HowCucumberHelp.md b/devDoc/HowCucumberHelp.md
deleted file mode 100644
index 708e720..0000000
--- a/devDoc/HowCucumberHelp.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# How Cucumber Help Handow
-
-We borrow the **Cucumber-js** plugin to help use **Handow** in the IDE.
-
-+ With the help of Cucumber-js, we can write feature files (**Literal-Suites**) comfortably in the IDE, e.g. highlighting, syntax checking, ...
-+ We can also reference steps in the IDE, so steps can be reused easily.
-+ Jump to a step from Given/When/Then.
-
-But we don't implement Cucumber itself.
-
-## We should have a Handow plugin eventually
-
-The Cucumber-js plugin can help us a little. But the drawback is that we have to obey its rules, e.g. the step format. The question is: **why create steps like this if we don't actually use Cucumber?** Furthermore, the Cucumber plugin doesn't support generating dummy steps.
-
-So ---
-
-In the future, we should create our own Handow plugin to handle IDE integration.
-
-+ Literal steps in feature files reference steps, including jumping, promotion ...
-+ Generate Dummy Steps (listen for literal suite updates and generate Dummy Steps if they don't exist).
-+ Steps should be distinguished by parameter quantity.
-+ Tools to report step information and remove zombie dummy steps.
-+ Compose suites based on **JSON Suite** objects and the **Step Catalog**.
-+ Plan runner
-+ Report process and integration
-+ Report UI project and **Remote Runner**
-
-## Conclusion
-
-+ Before creating a Handow IDE plugin, we just borrow the Cucumber IDE plugin temporarily.
-+ We create Cucumber feature files as Literal Suites.
-+ We create a Handow tool to generate dummy steps - obeying the Cucumber step format.
-+ We create a Handow tool to compose suites from JSON-Suites and steps.
-+ We create the Plan runner.
-+ We create the Report processor.
-+ We create the Report UI.
-+ We create a remote UI for the Plan Runner and realtime monitoring.
-
-
diff --git a/devDoc/HtmlRender.md b/devDoc/HtmlRender.md
deleted file mode 100644
index fcbeaaf..0000000
--- a/devDoc/HtmlRender.md
+++ /dev/null
@@ -1,83 +0,0 @@
-# Html Render
-
-The **Html Render** includes an html file, a javascript file and a css file, e.g. _**index.html**_, _**main.js**_ and _**main.css**_. The screenshot files are rendered if available (screenshots are optional for a test run).
-
-## How to set up the render
-
-At the test finishing point, Handow generates the render in the reports path together with the record JSON file. The _**main.js**_ and _**main.css**_ are just common resources copied from the Handow package. There is a render project (a React project) which generates these 2 files by building.
-
-The _**index.html**_ is a little special: it is a template file from the render project. Handow populates the template with the current test JSON data, and then saves it as _**index.html**_.
-
-```html
<!-- Rough shape of the template; the actual markup comes from the render project -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>Handow UAT Reports</title>
  <link rel="stylesheet" href="main.css" />
  <script>
    /* Handow populates the record JSON here, roughly like:
       window.recordData = { ...current test record... }; */
  </script>
</head>
<body>
  <div id="root"></div>
  <script src="main.js"></script>
</body>
</html>
-```
-
-When the user opens _**index.html**_ (or Handow opens it automatically), the record data already exists. The _**main.js**_ loaded by the page just consumes the existing data. The rendering from there on is plain JavaScript work designed in the render project.
-
-> The path relationship between the render and the screenshots must be kept when moving them to another path (e.g. archiving them). The rule is: **all of them in one folder, flat**.
-
-## Features of the render
-
-The HTML Render is designed for 2 usages:
-
-+ Render a local test result by opening _index.html_ in a local browser.
-+ Put the test result in remote storage or serve it as a static website, so users can render it with a remote browser.
-
-In both cases, users cannot use path navigation or deeplinks (though javascript can help users browse the different render views). If users want more, they should install the Super UI. Because the Super UI provides the powerful features, we just **keep the HTML Render simple and easy**.
-
-Actually the HTML Render provides just 2 dashboard views (no paths, just switched by JS): **Summary** and **Story Details**.
-
-### Summary - the homepage
-
-It is the default page when the render is opened.
-
-+ Static stuff like the plan name ...
-+ The pie chart, with the summary as legend.
-+ Report download buttons (json; csv later; pdf in the far future).
-+ The stories listed by stage with colored backgrounds.
-+ Every story is also a link; clicking it switches to that story's detail dashboard.
-
-### Story Details
-
-Story Details is an interactive table showing phases, loopings and every step.
-
-+ A button back to the homepage.
-+ Static stuff like the story name, descriptions ...
-+ A summary for this story; the summary items are also filters for the details table.
-+ A list of phases with status, timing and ...
-+ Story-level loopings are shown flat, but with indentation and alternating background colors.
-+ For each story-level looping, the relevant parameters are shown above it.
-+ Phase-level loopings are not opened flat; a small tip shows the looping count.
-+ Every phase has 2 image icons (start -> end); clicking one pops over the screen card.
-+ The phase items are expandable; click the expand icon to open the steps list below it.
-+ The phase expand icon can also shrink the steps list.
-+ The looping of the phase is indicated somehow when the steps are expanded.
-+ For each looping, the relevant parameters are shown.
-+ Each step shows its status and screenshot; an error message is shown for failed or broken steps.
-+ Clicking the screenshot image icon of a step pops over the screen card.
-
-The screen card:
-
-+ shows the step title of the screenshot.
-+ shows the screenshot image.
-+ has a close icon (or click the area outside the card) to close the screen card.
-+ has Next/Previous arrows to show all screens of the current phase looping.
-+ has a link to open the relevant application view in a blank tab (same URL).
-
diff --git a/devDoc/KnowHow.md b/devDoc/KnowHow.md
deleted file mode 100644
index fc3c9d7..0000000
--- a/devDoc/KnowHow.md
+++ /dev/null
@@ -1,117 +0,0 @@
-# How to use Puppeteer creating test
-
-Using Puppeteer to create test code is quite straightforward: just **access page elements and verify the elements of the result page**. Sometimes we need to call a remote server, e.g. to reset server status and data. Maybe we need to access the local machine to set up the browser's running environment. Anyway, Puppeteer provides a lot of APIs for most test operations. In some special test cases we need to import 3rd-party libraries, e.g. axios ...
-
-However, using Puppeteer doesn't mean you automatically create a correct and robust test project. We still need some skills when we write the code. The most important thing is **how to work around asynchronous presenting**.
-
-## Work around asynchronous presenting
-
-Browsers render pages asynchronously. That means the result is not presented synchronously with the operations. For example, the expected page is not shown immediately after you click a page link in the current view, and the data in a list is not updated immediately after you click the **Refresh** button. Tests that verify the expected result view right after interacting with the page often fail due to the browser's asynchronous nature. We can add extra code to fix this issue - **synchronize verifying and operating by waiting**.
-
-### Wait a while
-
-The easiest way is to add a static waiting time after an operation:
-
-```js
-await page.$eval('#profile-submit', element => element.click()); // Click button to submit a form
-await page.waitFor(3000); // Wait 3 seconds
-const message = await page.$eval( '#form-message-bar', element => element.innerHTML ); // Message should be presented
-expect(message).toBe(`Profile updated successfully!`);
-```
-
-Easy, right? But we cannot guarantee the message always shows up within 3 seconds.
-
-> Static time waiting should only be used for short asynchronous intervals where we are quite sure about the latency.
-
-### Wait for an element appear/disappear
-
-Usually there is a significant element in the coming view, and we can synchronize the test flow by watching for its appearance. For example, we may know an element id that is unique to the result view after navigating to a new page or refreshing a partial view, so we can suspend the current execution until that element appears. We also need to set a timeout for the appearance watching, to avoid watching an element forever. A timeout of the appearance watching is an exception: something was wrong if the expected element didn't appear within the time window.
-
-```js
-await page.$eval('#profile-submit', element => element.click()); // Click button to submit a form
-// Wait for the element to appear (exists in DOM without "display: none" or "visibility: hidden") with a 20s timeout.
-await page.waitForSelector('#form-message-bar', { visible: true, timeout: 20000 });
-const message = await page.$eval( '#form-message-bar', element => element.innerHTML ); // Message should be presented
-expect(message).toBe(`Profile updated successfully!`);
-```
-
-> On the contrary, we can wait for an element to disappear with _page.waitForSelector(selector, { hidden: true, timeout: ? })_. This waiting returns immediately if the element doesn't exist or isn't displayed, so it should be used to wait for a currently displayed element to disappear.
-
-### Wait for some events, e.g. an Http response arriving
-
-Sometimes no representative element appears to synchronize the operation and the result, e.g. clicking an update button that changes the candidates of a dropdown component driven by dynamic data. Fortunately, Puppeteer provides some event-waiting APIs for those scenarios.
-
-```js
-await page.$eval('#update-select-items', element => element.click()); // Click button for data updating
-// Wait for the Ajax call to return 200 OK, with a 20s timeout.
-await page.waitForResponse( ( response ) => {
-    return response.url().includes('https://api.example.net/candidates') && response.status() === 200;
-}, { timeout: 20000 });
-await page.waitFor(300); // Wait a moment for the program to process the arriving data
-// Now we can continue the test flow with the dynamic data
-```
-
-### Special mark element appears
-
-Based on the UI design, sometimes a special element exists for dynamic view and data refreshing, e.g. a Spin component. We can also synchronize test steps by watching that mark element.
-
-```js
-await page.$eval('#profile-submit', element => element.click()); // Click button to submit a form
-await page.waitFor(300); // Wait for the Spin to start spinning
-// Wait until the Spin element ('#spin' here as an example) is hidden again; by then data and view should be updated
-await page.waitForSelector('#spin', { hidden: true, timeout: 20000 });
-const message = await page.$eval( '#form-message-bar', element => element.innerHTML ); // Message should be presented
-expect(message).toBe(`Profile updated successfully!`);
-```
-
-> Synchronizing actions is important, but the Handow framework cannot resolve this at the system level because it is a business-related thing. Developers need to add special Acts to handle it. Handow provides built-in Act steps, e.g. _**When I wait it {selector} is displayed**_, or _**When I wait it {url} is responded 200**_, ...
-
-## What is the difference between Given and When phase
-
-+ The steps of the **Given** and **When** phases are exactly the same things.
-+ All **Act** and **Fact** steps can be used in both.
-+ There are no special **Given Acts** or **When Acts** (**Fact** steps are always labeled **Then** in both phases).
-+ As a matter of fact, no step is labeled **Given** in the **Handow** step library (all Act steps are labeled with **When**).
-
-### Examples
-
-#### Given|When prompts
-
-A developer is writing a story. The IDE will prompt steps when he enters _'When I click'_:
-
-```text
----------------------------------
-| I click it (selector) |
-| I click the hyperlink (link) |
-| ... |
----------------------------------
-```
-
-> All the candidates come from the **Act** library because the developer started the statement with **When**; it is exactly the same when starting with **Given** (or the **And** keyword following When|Given). After choosing a step, the developer can edit the parameters (only the parameter names).
-
-#### Then prompts
-
-A developer is writing a story. The IDE will prompt steps when he enters _'Then I can'_:
-
-```text
------------------------------------------------
-| I can see it (selector) is displayed |
-| I can see it (selector) showing html (html) |
-| ... |
------------------------------------------------
-```
-
-> All candidate steps come from the **Fact** library because the developer started the statement with **Then** ...
-
-### Differences between Given and When
-
-But there are some differences between **Given** and **When**.
-
-+ At run time, any **Given Act** failure ends the current suite immediately, because we cannot continue testing from a wrong status. A **When Act** error only breaks the current phase; the test may be continued after one phase failed.
-+ However, mostly we should quit the current suite after any **Act** failed. When an **Act** fails, a lot of failures are likely in the following test, and those failures waste a lot of waiting time.
-+ There are some differences in the report when processing **Given** or **When**.
-+ Anyway, **Given or When** is not a step attribute; it is how steps are placed in a story.
-
-> Actually, Handow doesn't break the current suite test; it always breaks a loop. (For stories without story looping, breaking a loop means ending the whole story.)
-
-
diff --git a/devDoc/ParameterLoopCondition.md b/devDoc/ParameterLoopCondition.md
deleted file mode 100644
index 425b597..0000000
--- a/devDoc/ParameterLoopCondition.md
+++ /dev/null
@@ -1,301 +0,0 @@
-# Parameters and looping
-
-We can pass parameters to steps when creating literal steps in a feature file (a Literal Suite, or simply a **Story**). **Why?** The major purpose is **Step Sharing**. If a step is bound to an actual constant value, it is hard to reuse in other test scenarios. For example:
-
- # The literal step to click Submit Profile button
- When I click the button
-
-We don't use a variable as the selector of the button element; instead an element **id** is hardcoded in the step code (e.g. "_#profile-submit-button_"). The result is that we couldn't use this step function to click any other button. However, the issue is easily resolved if the **id** is passed as a parameter; then this statement can be used for other clicking acts.
-
- # We can pass selector as parameter to make click-button as a shared step
- When I click the button {selector}
-
-## Examples for passing parameter from Literal to Step
-
-Here we call a **Literal-Step** (a _Given/When/Then_ statement in a Story) just a **Literal**, and call the **Real-Step** (the actual code function behind a **Literal**) a **Step**.
-
-### No parameter at all
-
- # An action literal statement
- When I click Submit Profile button
-
-```js
-// Possible Step for the Literal above
-When("I click Submit Profile button", async () => {
-    // Hardcode the selector inside the step
-    await page.$eval( "#profile-submit-button", (ele) => ele.click() );
-})
-```
-
-We can see a Literal bound to a Step, and it works. But there are some drawbacks.
-
-+ The Literal is clear enough, but it is dedicated to clicking the _"Submit Profile"_ button. We have to use different Literals for clicking _"Add User"_, _"Submit User"_, _"Cancel"_ and so on.
-+ It would be better to make the Literal more general so we can share it for clicking all id-buttons. But we cannot do that because the **selector** in the Step is hardcoded (with the Submit Profile button id).
-+ If we implement a specific Step for each Literal, the test code is difficult to refactor. For example, we would have to change all click-button Steps one by one after the html changes.
-+ Actually test developers don't want to spend a lot of time maintaining Steps; it is better to maintain Literals - especially the parameters in the Story file.
-
-### Pass value from Literal to Step
-
- # An action literal statement
- When I click the button {"#profile-submit-button"}
-
- # Another literal statement
- When I click the button {"#profile-delete-button"}
-
-```js
-// All click-button literals share the same step
-When("I click the button {selector}", async (selector) => {
-    // The selector variable is passed at run time
-    await page.$eval( selector, (ele) => ele.click() );
-})
-```
-
-Now we use a variable in the Step, populated at run time with the value specified in the Literal. That makes the **Step** shared by all click-button Literals. But it still has some drawbacks.
-
-+ The 1st question is: how does the Step know _"selector"_ is a friendly variable name? It needs to be specified in the Literal if the steps are generated automatically.
-+ The value may not be readable, e.g. a generated id like "#id-ere4433g-submit" ... It doesn't make sense in a report.
-+ We need to iterate multiple values for test looping.
-
-### Pass a parameter array with alias names
-
-    # An action literal statement, specifying a general name for the parameter
-    When I click button {selector: profile_button}
-    parameters: [
-        {
-            profile_button: "#profile-submit-button"
-        },
-        {
-            profile_button: "#profile-delete-button"
-        }
-    ]
-
-```js
-When("I click button {selector}", async (selector) => {
-    // Use the alias name "selector" inside the step
-    await page.$eval( selector, (ele) => ele.click() );
-})
-```
-
-We get some benefits by writing Literals like this.
-
-+ Using the _{alias: parameter, ...}_ format to declare parameters.
-+ The **alias name** becomes the argument of the relevant Step, e.g. **selector** is the variable of the Step. The more specific **parameter names** exist only in Story files.
-+ In the report, the **parameter names** are shown as labels, so the report is meaningful for the dedicated test suite. For example, the label of the sample above could be **"When I click profile_button"**, which is much clearer than **"When I click the button #profile-submit-button"**.
-+ We can provide multiple values to a set of parameters; in this case the **Phase** will be iterated with the param-array. This is also a way to reuse literals for the same actions and verifications.
-
-> The **alias** for each **parameter** is important: it lets general Steps be auto-generated while Literals stay specific, and reports stay friendly.
-
-> The report label replaces the word to the left with the parameter expression, so "When I click **button** {selector}" is presented as "When I click **profile_button**". Is it clearer?
-
-### Access Parameters
-
-+ Parameters are defined on a **Phase** (not on an individual Literal Step).
-+ Parameters on the **Given** phase are global for the whole **Story (Suite)**; we can say they are defined on the Story.
-+ Parameters on a **When** block can be accessed only by the **Acts** and **Facts** of that phase.
-+ Parameters with the same names are valid in different **When** phases, standing for different parameters.
-+ If parameter names defined in the **Given** phase are used again in **When** blocks, they are overridden in those **When** phases.
-
-## Looping by Params-Array
-
-We can loop a **Story** or **Phase** by passing an array of value sets, e.g. _[ {values}, {values}, ... ]_. The Story or Phase will be executed multiple times, iterating over the different value sets.
-
-### Loop whole story (suite)
-
-If we put the Params-Array on the **Given** phase (actually it is on top of the Story), the whole story (suite) will be looped. That is reasonable because after the **Given** (start status) changes, all the following **When** phases should be evaluated again.
-
-    Given I have reset service {state: login_role}
-    And I open the page with {url: homepage_url}
-    Then I can see label {selector: login_label_id} showing {text: login_label}
-    # Here the parameters are not a single value object, but a Params-Array instead
-    parameters: [
-        {
-            login_role: "admin_login",
-            homepage_url: "www.website.com",
-            login_label_id: "#head-login-label",
-            login_label: "Admin"
-        },
-        {
-            login_role: "editor_login",
-            homepage_url: "www.website.com",
-            login_label_id: "#head-login-label",
-            login_label: "Editor"
-        }
-    ]
-
-The steps could be:
-
-```js
-Given("I have reset service {state}", async(state) => {
- // API call to reset server
-});
-
-Given("I open the page with {url}", async(url) => {
- await page.goto(url);
-});
-
-Then("I can see label {selector} showing {text}", async(selector, text) => {
- const html = await page.$eval(selector, (el) => el.innerHTML);
- expect(html).toBe(text);
-})
-```
-
-+ The whole story (suite) is looped over the **Params-Array**.
-+ In suite looping, the **When** phases can be nested loops if they are iterated by their own Params-Arrays.
-+ The parameters of the **Given** phase are global across the whole suite, but the parameters of a **When** are scoped to its phase.
-
-### Loop the When phase
-
-Parameters can also be declared for a **When** phase; the scope of these parameters is limited to that **When** phase. A Params-Array can loop the **When** phase by iterating the value sets.
-
-    When I enter text {text: login_username} to input {selector: login_username_input}
-    And I enter text {text: login_password} to input {selector: login_password_input}
-    Then I see validation {selector: login_validation} showing {text: login_validation_message}
-    And I can see the button {selector: login_submit_button} is disabled
-    # Params-Array on the "When" phase
-    parameters: [
-        {...},
-        {...}
-    ]
-
-The steps could be:
-
-```js
-When("I enter text {text} to input {selector}", async (text, selector) => {
-    // call Puppeteer to fill the input
-});
-
-Then("I see validation {selector} showing {text}", async (selector, text) => {
-    //
-})
-
-//...
-```
-
-+ The **When** phase will be iterated by the Params-Array.
-+ The **When** must be **Recyclable** - meaning that after the phase finishes, all the actions and facts can be repeated again. For example, if the page changed during the phase, we cannot repeat the same operations; the phase is not **Recyclable** in that case.
-+ If we do need to loop a **When** which is not **Recyclable**, we can add **Condition Control** on the steps of the phase to make it repeatable. (_Explained later_)
-+ Looping a **When** phase should take care that the next phase still works - if it is not the last phase. The value sets should be ordered, and the last value set must guarantee that the following phases can continue.
-
-> Looping a suite or phase is good when looping is easy. Don't use tricky looping logic that makes the test hard to understand; in that case we prefer independent stories or phases.
-
-> A Params-Array with a single member is equivalent to a single value object as parameters.
-
-## Conditional
-
-**Recyclable** can be an issue when we loop a suite or phase with different value sets. Usually the view renders differently with each value set: some actions cannot be performed and some facts are not true as expected, so the loop cannot continue. Of course we can give up looping and use extra stories or phases. But sometimes a small change makes looping possible - adding a **Condition** on phases or steps.
-
-### Condition on Step
-
-For example:
-
-    Given I have reset service {state: login_role}
-    And I open the page with {url: homepage_url}
-    Then I can see label {selector: login_label_id, text: login_label}
-    Then I can see tab {selector: system_tab} (login_role==admin_login)
-    Then I can not see tab {selector: system_tab} (login_role==editor_login)
-    parameters: [
-        {
-            login_role: "admin_login",
-            homepage_url: "www.website.com/",
-            login_label_id: "#head-login-label",
-            login_label: "Admin"
-        },
-        {
-            login_role: "editor_login",
-            homepage_url: "www.website.com/",
-            login_label_id: "#head-login-label",
-            login_label: "Editor"
-        }
-    ]
-
-Parameters in the Given phase can be accessed by a **condition expression** anywhere in the story. But parameters in a **When** phase can only be accessed by condition expressions on that phase (either a phase-condition or a step-condition).
-
-+ A condition expression is declared at the end of a literal statement, in "()".
-+ The expression can evaluate parameters of the current phase, plus the **Given** phase if the current phase is a **When** phase.
-+ Operators in the expression can be arithmetic, comparison and logic operators, and the result of the expression **must** be boolean.
-+ The step will be skipped (ignored) if the expression is false.
-+ Handow will prepend the expression to the Step function, e.g. ( _[expression]_ ) && test("Then ...", fn)
-
-```js
-/* Actually the suite is not composed of steps directly... The When describe.each() needs to pass in all values. */
-// Step condition example - after Handow compiles steps into Jest code
-// Pay attention to how values are transferred to parameters with describe.each()
-describe.each([["#profile-edit-button", "#profile-edit-form"]])("When I click button {profile-edit-button}",
-    ( profileEditButton, profileEditForm ) => {
-        beforeAll( async () => {
-            // The act of clicking "profile-edit-button" is only performed when "Admin" is logged in
-            // Here the 'loginUsername' parameter is defined in the Given phase
-            ( loginUsername == "Admin" ) && await page.$eval( profileEditButton, (ele) => ele.click() );
-
-            // ... other Acts
-        });
-
-        // The fact that "profile-edit-form" is displayed is only verified when "Admin" is logged in
-        ( loginUsername == "Admin" ) && test("Then I can see the view {profile-edit-form}", async () => {
-            // The verification of this fact.
-        });
-
-        // ... other facts
-});
-```
-
-> Jest uses a **describe** function to wrap a phase and the whole suite (instead of executing Acts and Facts functions in a simple flat way). That's why we emphasize the Phase in Handow. All the parameters are passed to the **Phase Block (a describe function)** by _describe.each(value1, value2, ...)(["label1", "label2", ...], (param1, param2, ...) => { ... } )_. Another issue is the literal labels of the Acts: Handow passes all Act labels to _describe.each_ as strings separated by ";". In reports, Handow will match the labels with each Act function. (Still an open question.)
-
-> Step condition expression can access parameters of "Given" phase or of current "When" phase.
-
-### Condition on Phase
-
-Similar to skipping a step, Handow can skip a whole **When** phase with a conditional expression.
-
-    # A When phase
-    (login_username == "Admin")
-    When ...
-    parameters: [ {}, {}, ... ]
-
-The condition expression is declared above the phase. After compiling into the Suite, it becomes a conditional expression like this:
-
-```js
-// Actually, skipping a phase is as simple as skipping a step.
-// The conditional expression can access "Given" parameters
-( loginUsername == "Admin" ) && describe.each([...])("When I ...", async (...) => {
-    // the whole phase code block
-})
-```
-
-As with the step conditional, the phase condition expression can access parameters of this phase and parameters in the story scope.
-
-> **Note** We never put a condition on the **Given** phase, because ignoring the Given phase means skipping the whole story. Maybe the test flow needs to skip specific stories in some situations, but Handow will resolve that with the Plan or other solutions.
-
-## Skip a phase or a step
-
-Use a Handow conditional expression to skip a Phase, or an Act or Fact step, e.g.:
-
-    # Skip this phase
-    (false)
-    When ...
-    Then ...
-
-And Handow can recognize 'skip' keyword too:
-
-    # Skip this phase
-    (skip)
-    When ...
-    Then ...
-
-Can also skip steps:
-
-    # Skip specific steps
-    When ... (skip)
-    Then ... (skip)
-
-**Actually SKIP means IGNORE** in Handow. When a phase or step is skipped, the relevant code is not executed, due to a static conditional guard. So skipped phases and steps do not appear in the report.
-
-We can use **#** to specify a comment line in a Literal Story, so we can also comment out a step to _skip_ it. But using **#** is not technically the same as **(skip)**: a commented step is not compiled into the suite at all, instead of being ignored at run time.
-
-> Anyway, we can also use **#** to comment out a step or all steps of a phase. Maybe developers think **#** is better than using the **(skip)** conditional.
-
-## New browser instances, reusing instances and opening new pages
-
-+ When a suite starts, the first thing is to check the **browserInstances** array to see if any idle instance exists.
-+ If there is an idle instance, use _puppeteer.connect()_ to reuse it.
-+ If there is no idle instance, use _puppeteer.launch()_ to create a new instance (mostly we create an **Incognito** instance).
diff --git a/devDoc/ParamsSelectorProbe.md b/devDoc/ParamsSelectorProbe.md
deleted file mode 100644
index 49b1167..0000000
--- a/devDoc/ParamsSelectorProbe.md
+++ /dev/null
@@ -1,196 +0,0 @@
-# Parameters, Selector parameter and Probe attribute
-
-Handow passes parameters to steps; that is how steps are reused. Basically, there are only 2 types of parameters, **Selector** and **Content**.
-
-+ A selector is a string; **pptr** uses it to try to reach element(s).
-+ A content could be a string (including a JSON string, an html snippet string, ...), a number or a boolean; test steps use them to make decisions when verifying expectations.
-+ However, content parameters can also work together with selectors to help select elements, e.g. an **xpath** selector gets an element by its content.
-
-## Selector syntax
-
-**Selectors must be consumed by pptr with the relevant method.** For example, pptr cannot evaluate a **CSS selector** as an **XPath selector**. The selector syntax is not an issue when users create custom steps for the current project. But the type must be specified when a user passes a selector parameter to built-in steps.
-
-### Built-in steps selector types
-
-Users can tell whether a built-in step uses a **CSS Selector** or **XPath** by the argument name. For example, there are 2 built-in steps for the same purpose but using different pptr APIs:
-
-```js
-When("I click it {selector}", async (selector) => {
-    // Uses a CSS selector because of 'selector'
-})
-
-When("I click it {xpath}", async (xpath) => {
-    // Uses XPath because of 'xpath'
-})
-```
-
-## Using story editor
-
-Big part!
-
-## Global parameters
-
-Handow stories get parameters in 2 ways:
-
-+ Immediately defined in the current story
-+ Referring to global parameter tables (js module files)
-
-### Parameters in story
-
-We have them in story scope and phase scope already.
-
-### Parameters from parameter tables
-
-We always have some parameters shared by different stories, e.g. a URL or a menu tab. If they are defined in one place, we can refactor them in one shot. For example, we change the URL globally to test the application deployed to another environment.
-
-A global parameter module is just a node.js module file returning an object. We can put some logic in the module (e.g. computing the parameters). But mostly it is just a plain object, so a user can create it without knowing JS.
-
-```js
-/********* Global Parameters for URL**********/
-'use strict';
-module.exports = {
- URL_Homepage: "https://storage.googleapis.com/handow-uat-assets/static/uat-pet-store/index.html",
- URL_LoginForm: "",
- // ...
-};
-```
-
-We may need to define multiple tables with well-scoped naming.
-
-```js
-/********* Global Parameters for Message **********/
-'use strict';
-module.exports = {
-    Message_NotFound: "Page not found",
- Message_NotAllowed: "",
- // ...
-};
-```
-
-In a story file, any step statement can refer to the global parameter keys without defining a value. The global parameters are injected at run time by Handow, just like locally defined parameters.
-
-> Locally defined parameters will override global parameters if they have the same names.
-
-#### Specify global parameter tables in config
-
-+ There is one config field, _config.globalParams_.
-+ It is false by default, meaning no global parameter table is defined.
-+ We can specify a path in which all global tables are defined, e.g. _config.globalParams: '/params'_. The path is relative to the app root.
-+ The _config.globalParams_ behaves like other properties: it is **false** by default, but the user can override it in the app _config.js_ and override it again in the plan config.
-+ If global parameters are defined, Handow will import them into **storyRunr** before running any step. Then they are available for all steps of the story.
-
-
-
-
-+ Users cannot implement both **CSS** and **XPath** in one test project.
-+ After changing config.selector, the user must run _>handow --config_; this command copies the correct steps set to the current _/steps_ directory. **!!!Important!!!**
-+ Actually, Handow performs **re-config** (_>handow --config_) automatically before running any plan.
-
-
-+ The most important operation of test steps is accessing elements.
-+ The story file defines **Selector** parameters, which are consumed by the Puppeteer API to locate the target elements.
-+ Puppeteer can consume different types of selectors, e.g. CSS selector, XPath, JS-path ... Users can choose any of them when they create custom steps.
-+ But the built-in steps can only consume **CSS Selectors**. The built-in steps always use _page.$$eval()_ to evaluate the selector and reach the elements.
-+ So the user must use a **CSS Selector** as the parameter when invoking built-in steps.
-
-## Probe attribute
-
-Users can use any valid CSS selector. But **IT IS NOT EASY!!!** in most situations, especially for an SPA implemented with a lot of third-party components and build tools.
-
-+ The markup is complex when generated by templates and build tools.
-+ Developers cannot control third-party components, e.g. bootstrap, material-UI ...
-+ A lot of tags and attributes are generated when the html is built, e.g. with CSS Modules ...
-
-Anyway, most html rendered in the browser is not semantic and not even readable. So, how can users get the correct selector?
-
-**Honestly**, it is very difficult and maybe not even possible to test existing html rendering - unless it is designed to be test-friendly, e.g. by adding an _"id"_ attribute to most testing targets.
-
-> Handow does not guarantee you can test every existing application; neither does anything else.
-
-## Handow suggests using a test-probe when you design the html markup
-
-+ It is necessary to set **Test-Probe** attributes.
-+ If the application already exists, you cannot do this. Just too bad ...
-+ Handow suggests using a custom attribute as a **Test Probe**, dedicated to UAT testing.
-+ We can remove these **Test Probe** attributes from the markup when we build the application for production - provided they are dedicated custom attributes. (After they are removed, you cannot test the app anymore.)
-+ A user can specify an attribute as the probe by setting it in the config.
-+ Handow provides an easier **Probe Selector** syntax; users can use it like a **CSS Selector**.
-+ Users can always use selectors other than the probe syntax.
-
-### Probe syntax
-
-The probe selector syntax:
-
-    ([scope-selector])[probe-value]([order-selector])
-
-For example, if we set _config.htmlProbe_ to "h4w" in the system config, then we can apply it to the HTML like this:
-
-```html
-