Commit

Updating Collapsible Sections
Some1Somewhere committed Dec 5, 2023
1 parent 8f9750e commit 0fef07c
Showing 6 changed files with 80 additions and 35 deletions.
52 changes: 26 additions & 26 deletions content/BDD.md
@@ -11,11 +11,11 @@
<details markdown="1">
<summary>Detailed Explanation:</summary>

This issue occurs when the Selenium WebDriver, specifically `chromedriver`, is not found in the Docker container's PATH. To resolve this:

1. **Switch Docker Image**: Update the Dockerfile to use `rofrano/pipeline-selenium`. This image is pre-configured with Chrome and chromedriver.
2. **Rebuild Docker Container**: After updating the Dockerfile, **rebuild** the container to ensure the new configuration is applied.
3. **Verify Installation**: Check that `chromedriver` is correctly installed and accessible by running a test command inside the container.
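A sketch of the image switch, assuming the dev container is built from a single-stage `.devcontainer/Dockerfile` (the original base image shown here is hypothetical):

```dockerfile
# .devcontainer/Dockerfile (illustrative fragment; other lines omitted)
# FROM python:3.11-slim            # <- hypothetical original base image
FROM rofrano/pipeline-selenium     # pre-configured with Chrome and chromedriver
```

After rebuilding, running `which chromedriver && chromedriver --version` inside the container should confirm the install.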

</details>

@@ -41,11 +41,11 @@ requests==2.31.0
<details markdown="1">
<summary>Detailed Explanation:</summary>

The error indicating that the `behave` command is not found suggests it is not installed in the Docker container. To fix this:

1. **Check requirements.txt**: Ensure `behave` is listed in the `requirements.txt` file.
2. **Rebuild Container**: Rebuild the Docker container to install `behave` from the updated `requirements.txt`.
3. **Test Behave Installation**: Run a simple `behave` command to confirm it's now recognized in the container.
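For example, the relevant `requirements.txt` line (unpinned here; pin to whatever version your course materials specify):

```text
# requirements.txt (excerpt) -- behave must appear here for the container build to install it
behave
```

After rebuilding, `behave --version` inside the container should print a version number, confirming the install.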

## Problem: Chromedriver Unstable or Failing

@@ -58,32 +58,32 @@ requests==2.31.0
<details markdown="1">
<summary>Detailed Explanation:</summary>

To address Chromedriver's inconsistent behavior across different systems, follow these steps:

1. **Temporary WebDriver Switch**:
   - Run `export DRIVER=firefox` in the Docker environment to temporarily switch to the Firefox WebDriver.
   - This change applies only to the current session and helps determine if the Firefox WebDriver resolves the issue.

2. **Test the Change**:
   - Run your Selenium tests again to check whether the Chromedriver issue is resolved when using the Firefox WebDriver.

3. **Permanent Configuration**:
   - If the issue is resolved with Firefox, make the change permanent.
   - In your project's `.devcontainer/docker-compose.yml`, add `DRIVER: firefox` and `WAIT_SECONDS: 3` under the `environment` section.
   - `DRIVER: firefox` sets Firefox as the default WebDriver.
   - `WAIT_SECONDS: 3` reduces the wait time on errors, speeding up test execution.

4. **Rebuild Docker Environment**:
   - After updating the `docker-compose.yml` file, rebuild the Docker environment to apply these changes.
   - This ensures that all future tests automatically use the Firefox WebDriver.

5. **Verify Stability**:
   - Run the tests again in the updated Docker environment to confirm that the Selenium tests are stable and consistent with the Firefox WebDriver.
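Written out as YAML, the permanent configuration in step 3 above might look like this (the `app` service name is an assumption; match it to your existing `.devcontainer/docker-compose.yml`):

```yaml
services:
  app:                  # hypothetical service name -- use the one already in your file
    environment:
      DRIVER: firefox   # sets Firefox as the default WebDriver
      WAIT_SECONDS: 3   # shortens waits on errors, speeding up test runs
```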

</details>

---

### Problem Statement

9 changes: 7 additions & 2 deletions content/CICD.md
@@ -8,7 +8,10 @@ Users are experiencing issues where **`make lint`** runs successfully in a local

To resolve the discrepancies between local linting and GitHub Actions, users should update their workflow configuration to match the **`workflow.yml`** file provided in the "[Lab Flask TDD Workflow](https://github.com/nyu-devops/lab-flask-tdd/blob/master/.github/workflows/ci.yml)." Additionally, they have two options for addressing the linting step.

<details markdown="1">
<summary>Detailed Explanation:</summary>

**Understanding the Problem:**

@@ -54,4 +57,6 @@ To resolve the discrepancies between local linting and GitHub Actions, users sho
- This change aligns the local linting command with the one used in GitHub Actions, ensuring consistency.
5. **Re-run GitHub Actions:**
- Commit and push the changes to trigger the GitHub Actions workflow.
- Monitor the Actions tab in the GitHub repository to ensure that the linting step passes successfully.
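For reference, a workflow lint step of this kind might look like the sketch below. The directory name and flag choices are assumptions modeled on the standard GitHub Actions Python starter template, not copied from the referenced `ci.yml`; compare against that file rather than using this verbatim.

```yaml
- name: Lint with flake8
  run: |
    # Stop the build on syntax errors or undefined names
    flake8 service --count --select=E9,F63,F7,F82 --show-source --statistics
    # Check style and complexity with a wider line-length limit
    flake8 service --count --max-complexity=10 --max-line-length=127 --statistics
```

Running the same two commands locally (or wiring them into `make lint`) keeps local and CI results consistent.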

</details>
1 change: 0 additions & 1 deletion content/Docker.md
@@ -18,6 +18,5 @@ Update [Dockerfile](https://github.com/nyu-devops/lab-flask-bdd/blob/c4654d806cd

Alternatively, switch to using the binary package **`psycopg[binary]`** to avoid the need for compiling the package and installing additional dependencies.


---
6 changes: 4 additions & 2 deletions content/Git.md
@@ -9,8 +9,8 @@ To bypass the ownership checks by Git and remove the warning, execute the follow
```bash
git config --global --add safe.directory /app
```

<details markdown="1">
<summary>Detailed Explanation:</summary>

**Understanding the Problem:**

@@ -45,3 +45,5 @@ git config --global --add safe.directory /app
- This solution assumes that the user has assessed the security implications and determined that the directory is indeed safe to use.
- For security reasons, it's generally better to understand and fix the underlying permission issues rather than globally disabling ownership checks. The immediate solution is a workaround and not a fundamental fix.
- Users should be cautious about using **`--global`** configuration changes, as they apply to all repositories for the current user. If the environment is shared or used for multiple projects, consider whether this change is appropriate.
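As a quick sanity check, you can list what is currently configured before and after applying the workaround; this sketch is read-only and safe to run anywhere:

```shell
# List directories Git currently treats as safe (prints nothing if none are set).
# The `|| true` keeps the exit status clean when no entries exist.
git config --global --get-all safe.directory || true

# To undo the workaround later, remove the entries again (commented out here):
# git config --global --unset-all safe.directory
```

Note that `--unset-all` removes every `safe.directory` entry, not just `/app`, so review the list first.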

</details>
45 changes: 42 additions & 3 deletions content/Kubernetes.md
@@ -1,9 +1,48 @@
# Kubernetes Issues

## Kubernetes Issue: Pods Not Reaching READY State

**Problem**: Pods in Kubernetes (k8s) are not reaching the READY state, showing a status of READY 0/1.

**Solution**: Add a readiness probe to your pod configuration and apply it with `kubectl`.

<details markdown="1">
<summary>Detailed Explanation</summary>

When a Kubernetes pod shows a status of READY 0/1, it indicates that the pod is running but not ready to receive traffic. This is often due to the readiness probe failing or not being configured.

### What is a Readiness Probe?
A readiness probe is used by Kubernetes to determine if a pod is ready to handle traffic. This is crucial for maintaining service availability and load balancing.

### Solution Steps:
1. **Identify the Health Endpoint**: Ensure your application has a health check endpoint (e.g., `/health`). This endpoint should return a success status when the application is ready to serve traffic.

2. **Add Readiness Probe to Pod Configuration**:
- Open your pod configuration YAML file.
- Add a readiness probe section under the container specification.
- Specify the probe type (HTTP, TCP, or exec), along with the necessary details like `path`, `port`, and `initialDelaySeconds`.

Example:
```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 5
```
3. **Apply the Configuration**:
- Use `kubectl apply -f <your-pod-config.yaml>` to apply the changes.

4. **Monitor Pod Status**:
- Use `kubectl get pods` to monitor the pod status.
- The READY status should change to 1/1 once the readiness probe is successful.

### Note:
- The configuration details may vary based on your application's specific needs.
- Ensure that the probe intervals and thresholds are set according to your application's startup time and performance characteristics.

By implementing a readiness probe, Kubernetes can effectively manage traffic to the pods, ensuring that only healthy instances receive requests.
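For context, the `readinessProbe` block sits under the container entry in `spec.containers`, not at the top level of the pod spec; a minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-service              # hypothetical pod name
spec:
  containers:
    - name: my-service          # hypothetical container name
      image: my-service:latest  # hypothetical image
      ports:
        - containerPort: 80
      readinessProbe:           # probe is nested inside the container entry
        httpGet:
          path: /health
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 5
```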

---
</details>
2 changes: 1 addition & 1 deletion content/OpenShift.md
@@ -23,7 +23,7 @@ Note: It was also mentioned that changing the image to `postgres:15-alpine` reso

Remember to adjust the PostgreSQL deployment configurations to ensure that the data persists across pod restarts and deployments.
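One common way to persist the data is to mount a PersistentVolumeClaim at PostgreSQL's data directory; a sketch with hypothetical names (adjust the claim name and mount path to your deployment):

```yaml
# Deployment fragment (illustrative)
spec:
  template:
    spec:
      containers:
        - name: postgres
          image: postgres:15-alpine
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pvc   # hypothetical PVC name
```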

</details>

## Problem Statement

