
Scope: Plan for deployments in case of no internet access #69

Open · 4 tasks

singhalkarun opened this issue Aug 6, 2024 · 11 comments


singhalkarun commented Aug 6, 2024

We need internet access for various components when installing BHASAI - https://github.com/BharatSahAIyak/devops (especially for pulling Docker images). Figure out all the components that require internet access and create a process for deploying without it.

Components

  • System Packages

  • Docker Images

  • Domain names used, e.g. in Caddy's docker-compose.yaml

  • Add More here @GJS2162

Question: How is the SSH connection between the no-internet server and another server established?
Found this - VPN: set up a VPN on another server within the same network as the isolated server. Once connected to the VPN, you can SSH into the isolated server using its local IP address.
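
A related SSH-only option, when a reachable bastion host sits in the same network as the isolated server, is SSH's built-in ProxyJump. A minimal sketch (the host names, user, and IPs below are placeholders, not values from this setup):

    # one-off: jump through the internet-facing bastion to reach the isolated box
    ssh -J ubuntu@bastion.example.com ubuntu@10.0.0.5

    # or persist it in ~/.ssh/config so scp/rsync also work transparently:
    #   Host isolated
    #       HostName 10.0.0.5        # local IP of the no-internet server
    #       User ubuntu
    #       ProxyJump ubuntu@bastion.example.com
    ssh isolated
    scp ./bundle.tar.gz isolated:/tmp/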

singhalkarun changed the title from "Plan for deployments in case of no internet access" to "Scope: Plan for deployments in case of no internet access" on Aug 23, 2024

GJS2162 commented Aug 26, 2024

BHASAI is this https://github.com/BharatSahAIyak/docker-bhasai, right?

singhalkarun (Collaborator Author) commented:

> BHASAI is this https://github.com/BharatSahAIyak/docker-bhasai, right?

https://github.com/BharatSahAIyak/devops (added now in the issue description as well).


GJS2162 commented Aug 26, 2024

Exploring tasks:

  • Copying the repo to the server: scp -r devops-scp [email protected]:/home/barman/devops
  • Blocking outbound traffic on ports 80 and 443 to simulate no internet access (see the helper script below):

    sudo iptables -A OUTPUT -p tcp --dport 80 -o enp0s1 -j DROP
    sudo iptables -A OUTPUT -p tcp --dport 443 -o enp0s1 -j DROP

  • Checking that traffic is blocked: curl -I https://www.google.com/
  • Re-enabling traffic on ports 80 and 443:

    sudo iptables -D OUTPUT -p tcp --dport 80 -o enp0s1 -j DROP
    sudo iptables -D OUTPUT -p tcp --dport 443 -o enp0s1 -j DROP

  • Installing yq (download the Linux binary on the Mac, then copy it to the server):

    sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
    scp yq_linux_amd64 [email protected]:/tmp/yq_linux_amd64
    sudo mv /tmp/yq_linux_amd64 /usr/bin/yq
    sudo chmod +x /usr/bin/yq

  • Installing Docker: make install-docker
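
To avoid retyping the iptables rules each time, the block/unblock steps above can be wrapped in a small helper script. A sketch, assuming the same enp0s1 interface as in the commands above (adjust per machine):

    #!/usr/bin/env bash
    # simulate-offline.sh -- toggle outbound HTTP/HTTPS blocking
    set -euo pipefail

    IFACE="${2:-enp0s1}"   # network interface; differs per machine

    case "${1:-}" in
      block)
        sudo iptables -A OUTPUT -p tcp --dport 80  -o "$IFACE" -j DROP
        sudo iptables -A OUTPUT -p tcp --dport 443 -o "$IFACE" -j DROP
        ;;
      unblock)
        sudo iptables -D OUTPUT -p tcp --dport 80  -o "$IFACE" -j DROP
        sudo iptables -D OUTPUT -p tcp --dport 443 -o "$IFACE" -j DROP
        ;;
      *)
        echo "usage: $0 {block|unblock} [interface]" >&2
        exit 1
        ;;
    esac

    # quick check: should time out while blocked
    curl -I --max-time 5 https://www.google.com/ || true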


GJS2162 commented Aug 27, 2024

To install build-essential on a server without internet access, you'll need to download the necessary packages on a machine with internet access and then transfer them to the server. Here's how you can do it:

Step 1: Download the Packages on a Machine with Internet Access

  1. On a machine with internet access, update your package list:

    sudo apt-get update
  2. Download the build-essential package and its dependencies using apt-get with the --download-only option:

    sudo apt-get install --download-only build-essential

    This will download the needed .deb files into your system's cache (usually /var/cache/apt/archives). Note that apt only downloads packages that are not already installed on this machine, so the cache may not contain the full dependency set (see the note after the summary below).

  3. Collect the downloaded .deb files:

    mkdir -p ~/build-essential-packages
    sudo cp /var/cache/apt/archives/*.deb ~/build-essential-packages/

    This will copy all the .deb files into the ~/build-essential-packages directory.

  4. Compress the folder containing the .deb files to make it easier to transfer:

     tar -czvf build-essential-packages.tar.gz -C ~/build-essential-packages .

Step 2: Transfer the Packages to the Server

  1. Use scp to transfer the compressed file to your server (replace user@server_ip:/path/to/destination with your server details):

    scp build-essential-packages.tar.gz user@server_ip:/path/to/destination
  2. SSH into your server and navigate to the destination directory:

    ssh user@server_ip
    cd /path/to/destination
  3. Extract the compressed file:

    tar -xzvf build-essential-packages.tar.gz

Step 3: Install the Packages on the Server

  1. Navigate to the directory containing the .deb files:

    cd build-essential-packages
  2. Install all the packages using dpkg:

    sudo dpkg -i *.deb
  3. If there are any missing dependencies, fix them by running:

    sudo apt-get install -f

If your system does not have internet access and you run apt-get install -f, it might not be able to fix the broken dependencies unless you have already downloaded the necessary packages and dependencies to your local cache or have them available in a local repository. In such cases, you would need to manually download the required packages and install them.

This will install any missing dependencies from the .deb files that are already present in the directory.

Summary

  • Download: Use a machine with internet access to download build-essential and its dependencies.
  • Transfer: Move the files to the server using scp.
  • Install: Use dpkg on the server to install the packages.

This method allows you to install build-essential without direct internet access on the server.
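
One caveat with --download-only: apt only fetches packages that are not already installed on the donor machine, so the cache can miss dependencies the donor already has. A commonly used workaround is to download the full recursive dependency closure explicitly; a sketch, assuming the apt-rdepends helper package is available on the donor:

    # on the machine WITH internet access
    sudo apt-get install -y apt-rdepends
    mkdir -p ~/build-essential-packages && cd ~/build-essential-packages

    # apt-get download fetches .deb files into the current directory,
    # regardless of what is already installed locally
    apt-get download $(apt-rdepends build-essential | grep -v "^ ")

    # note: apt-rdepends may list virtual packages (names with no real .deb);
    # those downloads fail and can simply be skipped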


GJS2162 commented Aug 27, 2024

To build a Docker image without internet access, especially when your Dockerfile references external base images like hashicorp/consul-template:0.37.2 and vault:1.13.3, you need to ensure that these images are available locally on the machine where the build will occur. Here’s how you can do this:

Steps to Build the Image Without Internet

  1. Pre-download the Required Images:

    • On a machine with internet access, pull the required Docker images:
      docker pull hashicorp/consul-template:0.37.2
      docker pull vault:1.13.3
  2. Save the Images:

    • Save the pulled images as tar files using the docker save command:
      docker save -o consul-template_0.37.2.tar hashicorp/consul-template:0.37.2
      docker save -o vault_1.13.3.tar vault:1.13.3
  3. Transfer the Images to the Server:

    • Transfer the saved tar files to the server without internet access using scp, rsync, or any other method you prefer:
      scp consul-template_0.37.2.tar user@your-server:/path/to/save
      scp vault_1.13.3.tar user@your-server:/path/to/save
  4. Load the Images on the Server:

    • Once the tar files are on the server, load them into Docker using the docker load command:
      docker load -i /path/to/save/consul-template_0.37.2.tar
      docker load -i /path/to/save/vault_1.13.3.tar
  5. Build the Docker Image:

    • Now that the required base images are loaded locally, you can proceed to build your Docker image without needing internet access:
      docker-compose build
    • Docker will use the locally available hashicorp/consul-template:0.37.2 and vault:1.13.3 images during the build process.

Explanation

  • docker pull: Downloads the specified image from the Docker Hub (or another configured registry).
  • docker save: Exports the image to a tar file, which can be transferred to another machine.
  • docker load: Imports the image from the tar file into the Docker daemon on the target machine.
  • docker-compose build: Builds the image using the specified Dockerfile, leveraging the loaded images locally without requiring internet access.

By following these steps, you can build your Docker image on a server without internet access using the necessary base images.
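
When more than a couple of base images are involved, the pull/save/transfer/load steps can be batched. A minimal sketch (the image list and server path are illustrative):

    # on the machine WITH internet access
    IMAGES=(hashicorp/consul-template:0.37.2 vault:1.13.3)

    for img in "${IMAGES[@]}"; do docker pull "$img"; done

    # docker save accepts several images and bundles them into a single tar
    docker save -o base-images.tar "${IMAGES[@]}"
    gzip base-images.tar                 # optional: shrink the transfer

    scp base-images.tar.gz user@your-server:/tmp/

    # on the offline server
    gunzip /tmp/base-images.tar.gz
    docker load -i /tmp/base-images.tar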


GJS2162 commented Aug 27, 2024

For setting up Docker:
Here are the commands for downloading the Docker packages on an Ubuntu machine with internet access, transferring them to a Mac and then on to the target Ubuntu machine using scp, and installing them on the target machine.

Step 1: Making the tarball of packages (on the Ubuntu machine with internet access)

  1. Set up Docker's apt repository:

    # Add Docker's official GPG key:
    sudo apt-get update
    sudo apt-get install ca-certificates curl
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc

    # Add the repository to Apt sources:
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update
  2. Create a directory to store the packages:

    mkdir -p ~/docker-packages
    cd ~/docker-packages
  3. Download the Docker packages:

    sudo apt-get download docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  4. Zip the packages for easy transfer:

    tar -czvf docker-packages.tar.gz *.deb
  5. Exit from the Ubuntu machine (if SSH'd):

    exit

Step 2: Transfer the Packages from Ubuntu to Mac Using scp

  1. Transfer the zipped package file to your Mac:

    scp <ubuntu-user>@<ubuntu-ip>:~/docker-packages/docker-packages.tar.gz ~/

    Replace <ubuntu-user> with your Ubuntu username, and <ubuntu-ip> with your Ubuntu machine's IP address.

Step 3: Transfer the Packages from Mac to the Target Ubuntu Machine Using scp

  1. Transfer the zipped package file to the target Ubuntu machine:

    scp ~/docker-packages.tar.gz <ubuntu-user>@<target-ubuntu-ip>:~/

    Replace <ubuntu-user> with your target Ubuntu username and <target-ubuntu-ip> with the IP address of the target Ubuntu machine.

Step 4: Install the Docker Packages on the Target Ubuntu Machine

  1. SSH into the target Ubuntu machine:

    ssh <ubuntu-user>@<target-ubuntu-ip>
  2. Unzip the transferred package file:

    mkdir -p ~/docker-packages
    tar -xzvf ~/docker-packages.tar.gz -C ~/docker-packages/
  3. Navigate to the directory containing the .deb files:

    cd ~/docker-packages/
  4. Install the Docker packages:

    sudo dpkg -i *.deb


GJS2162 commented Aug 27, 2024

For now, I was seeing significant delays with scp and rsync, so I used the following to fetch the tar.gz file instead:

 python3 -m venv myenv
 source myenv/bin/activate
 pip install gdown
 gdown --id 11GjjqJgBgoMLnyHpuuStBcJXOltko8yr


GJS2162 commented Aug 27, 2024

Is there a faster way than rsync and scp to transfer files over SSH?


singhalkarun commented Aug 28, 2024

> rsync

We can use any method that can utilise the SSH access that we have. Does gdown utilise SSH? As per my understanding, it needs internet access.

Also, can you share the total amount of data in GB (split module-wise, e.g. for Docker, for GPU drivers, etc.) we will need to transfer in case of an offline deployment, and the speed you are getting with scp? We can then plan timelines for future offline deployments accordingly.
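
On the transfer-speed question: one SSH-only trick that often beats scp for large directory trees is streaming tar through the ssh connection, which avoids per-file round trips. A sketch (host and paths are placeholders):

    # stream a directory over ssh, compressing in flight
    tar -czf - -C ~ docker-packages | ssh user@target-server 'tar -xzf - -C ~'

    # rsync with compression and resumable partial transfers is another option
    rsync -avz --partial --progress ~/docker-packages.tar.gz user@target-server:~/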

singhalkarun (Collaborator Author) commented:

@GJS2162 to share:

A list of components (system packages, Docker images, etc.), including for each:

  • bundle (e.g., docker)
  • list of packages
  • size in GB
  • type: Docker image, system package, etc.
  • special remarks

Summarize the process for one package installation and one Docker image (raise a PR to add docs/no-internet-deployment.md).

@singhalkarun to prepare a list of domain names that are required during runtime (not a part of this ticket).


singhalkarun commented Aug 29, 2024

  • Do a no-internet deployment (track all system packages/Docker images and their parameters)
  • Update no-internet-deployment.md to cover one package and one Docker image installation
