Docker 20.10.6 IPv6 bindings shouldn't be mapped as network bindings for tasks for non IPv6 networks. #2870

Closed
tomelliff opened this issue May 18, 2021 · 16 comments



tomelliff commented May 18, 2021

Summary

Docker 20.10.6 includes this fix, which means the API now returns IPv6 bindings, which I don't think it has ever done before. The ECS Agent then maps these bindings to tasks, so you end up with multiple bindings where previously you just had the IPv4 binding. According to https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html#task-networking-vpc-dual-stack, I don't think anything IPv6 should be working by default: multiple things need to be enabled (and never in bridge networking mode), although I could be misreading that.

Description

Normally this isn't an issue because you get both a 0.0.0.0 binding and a :: binding, which then seem to get deduplicated on the target group.

Unfortunately there also appears to be an issue upstream (see moby/libnetwork#2639) where IPv6 host port bindings can be wrong and point to the wrong container when proxying IPv4 traffic to it. This seems to be the root cause of an issue that's been causing spurious healthcheck failures on our container fleet for the last couple of weeks (raised in support ticket 8277473901).

Expected Behavior

Either IPv6 host port bindings should be filtered out here by default, or there should be a configuration option to disable them (a rough sketch of the kind of filtering I mean follows the example below).

Only a single host port binding should be mapped to the task, as on Docker 20.10.5 instances:

                    "networkBindings": [
                        {
                            "bindIP": "0.0.0.0",
                            "containerPort": 8080,
                            "hostPort": 49157,
                            "protocol": "tcp"
                        }
                    ],
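To make the intent concrete, here is a rough illustration (not the agent's actual code; "test" is a hypothetical container name) of what dropping the IPv6 entries from Docker's reported port map would amount to, using docker inspect and jq:

docker inspect test \
  | jq '.[0].NetworkSettings.Ports
        | map_values(if . == null then . else map(select(.HostIp != "::")) end)'

Against a container that reports both a 0.0.0.0 and a :: entry, this keeps only the IPv4 entry for each published port.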

Observed Behavior

Both IPv4 and IPv6 host port bindings are mapped to the task:

                    "networkBindings": [
                        {
                            "bindIP": "0.0.0.0",
                            "containerPort": 5000,
                            "hostPort": 49157,
                            "protocol": "tcp"
                        },
                        {
                            "bindIP": "::",
                            "containerPort": 5000,
                            "hostPort": 49157,
                            "protocol": "tcp"
                        }
                    ],
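For anyone who wants to check their own tasks, these networkBindings snippets have the shape of ECS DescribeTasks output, so something like the following dumps them (the cluster name and task ID are placeholders):

aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks 0123456789abcdef0 \
  --query 'tasks[].containers[].networkBindings'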

Environment Details

Multiple network bindings observed on this instance:

# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
  scan: Docker Scan (Docker Inc., v0.7.0)

Server:
 Containers: 6
  Running: 3
  Paused: 0
  Stopped: 3
 Images: 5
 Server Version: 20.10.6
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-1045-aws
 Operating System: Ubuntu 20.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 36
 Total Memory: 68.59GiB
 Name: ip-10-7-121-124
 ID: RBQR:INSR:CPW6:XC3H:4GSV:KDQT:RMPH:X6SZ:CD5Z:QCLE:YKMU:PLUN
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Single IPv4 network binding on this instance:

# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)

Server:
 Containers: 14
  Running: 14
  Paused: 0
  Stopped: 0
 Images: 16
 Server Version: 20.10.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-1038-aws
 Operating System: Ubuntu 20.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 7.676GiB
 Name: ip-10-7-193-75
 ID: ZFQT:3A7D:B3NN:YZUW:LEWZ:GWVA:ADNG:E3S3:CMQF:HXZQ:53U3:EQZG
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Supporting Log Snippets


shubham2892 commented May 20, 2021

@tomelliff Thank you for reporting this. I am trying to repro this issue; can you please tell me which AMI you are using and how you are updating Docker on the instance? Thanks.

@tomelliff
Author

We're basing off the latest Canonical Ubuntu 20.04 AMI and then we install Docker from the APT repo at https://download.docker.com/linux/ubuntu.
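For completeness, the repo setup is just the standard procedure from Docker's Ubuntu install docs; roughly (a sketch, commands as documented at the time):

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io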

Our ECS agent systemd unit file looks like this:

[Unit]
Description=ECS Agent
Requires=docker.service
After=docker.service cloud-final.service

[Service]
Restart=always
ExecStartPre=/sbin/iptables -t nat -A PREROUTING --dst 169.254.170.2/32 \
 -p tcp -m tcp --dport 80 -j DNAT --to-destination 127.0.0.1:51679
ExecStartPre=/sbin/iptables -t filter -I INPUT --dst 127.0.0.0/8 \
 ! --src 127.0.0.0/8 -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
ExecStartPre=/sbin/iptables -t nat -A OUTPUT --dst 169.254.170.2/32 \
 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 51679
ExecStartPre=/sbin/sysctl -w net.ipv4.conf.all.route_localnet=1
ExecStartPre=-/usr/bin/docker rm -f ecs-agent
ExecStartPre=-/bin/mkdir -p /var/lib/ecs/dhclient
ExecStart=/usr/bin/docker run --name ecs-agent \
  --init \
  --cap-add=NET_ADMIN \
  --cap-add=SYS_ADMIN \
  --restart=on-failure:10 \
  --volume=/var/run:/var/run \
  --volume=/var/log/ecs/:/log \
  --volume=/var/lib/ecs/data:/data \
  --volume=/etc/ecs:/etc/ecs \
  --volume=/sbin:/sbin:ro \
  --volume=/lib:/lib:ro \
  --volume=/lib64:/lib64:ro \
  --volume=/usr/lib:/usr/lib:ro \
  --volume=/proc:/host/proc:ro \
  --volume=/sys/fs/cgroup:/sys/fs/cgroup \
  --net=host \
  --env-file=/etc/ecs/ecs.config \
  amazon/amazon-ecs-agent:latest
ExecStopPost=/usr/bin/docker rm -f ecs-agent

[Install]
WantedBy=default.target

and our ECS config file looks like this:

ECS_DATADIR=/data
ECS_LOGFILE=/log/ecs-agent.log
ECS_LOG_OUTPUT_FORMAT=json
ECS_LOGLEVEL=info
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs","fluentd"]
ECS_ENABLE_CONTAINER_METADATA=true
ECS_ENABLE_TASK_IAM_ROLE=true
ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST=true
ECS_ENABLE_TASK_ENI=true
ECS_ENABLE_AWSLOGS_EXECUTIONROLE_OVERRIDE=true
ECS_ENABLE_SPOT_INSTANCE_DRAINING=true

We also add the ECS_CLUSTER variable to the config file to join the instance to the correct cluster via user data.
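Concretely, that user-data step is just an append to the config file (my-cluster is a placeholder for the real cluster name):

#!/bin/bash
# Appended via EC2 user data so the agent joins the right cluster on first boot.
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config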


sparrc commented May 20, 2021

I believe this is the same issue as moby/moby#42288, which the docker team is planning to fix in 20.10.7.

If we can confirm that this is fixed in 20.10.7, then I think the best course of action would be for ECS to do nothing to work around this.


sparrc commented May 20, 2021

@tomelliff could you confirm that if the instance has IPv6 enabled, there are no ill effects from this? I'm wondering if there could still be issues when a customer has IPv6 enabled on their instance but does not have IPv6 enabled across their entire VPC stack.

As in, even after the above moby issue is fixed, should we still have a configuration option to prevent the IPv6 networkBinding from being added?

@jpradelle

I experienced the same issue on Ubuntu 18.04 with Docker 20.10.6 using bridge network mode, ECS agent 1.51.0.
I opened AWS support issue 8327955911.

I experienced deployments stuck forever in an "in progress" status, or targets that were healthy and responding well but got killed because they were detected as unhealthy (the health check timed out even though the health check path responded fine when called from the EC2 instance). These issues start when the port bindings for IPv4 and IPv6 end up on different ports. At the beginning both ports are the same, and after a while they start to differ on one instance of the cluster and break everything until I redeploy the cluster instance.
For example, something like:

                    "networkBindings": [
                        {
                            "bindIP": "0.0.0.0",
                            "containerPort": 5000,
                            "hostPort": 49157,
                            "protocol": "tcp"
                        },
                        {
                            "bindIP": "::",
                            "containerPort": 5000,
                            "hostPort": 49159,
                            "protocol": "tcp"
                        }
                    ],


sparrc commented May 27, 2021

Currently there are three known workarounds:

  1. pin to Docker 20.10.5 if installing via the Docker repos (a pinning sketch follows this list)
  2. install Docker from your distribution's repos (i.e. the default OS repos for Ubuntu, Debian, CentOS, etc.)
  3. do not disable IPv6 via the kernel parameter
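For the first workaround, pinning on Ubuntu looks roughly like this (the exact version string varies by distro release, so check apt-cache madison docker-ce first; the focal string below is an example):

apt-cache madison docker-ce | grep 20.10.5
sudo apt-get install -y \
  docker-ce=5:20.10.5~3-0~ubuntu-focal \
  docker-ce-cli=5:20.10.5~3-0~ubuntu-focal
sudo apt-mark hold docker-ce docker-ce-cli   # stop upgrades pulling in 20.10.6+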


sparrc commented May 27, 2021

I built a development version off Docker's master branch (20.10.7) and can confirm that this appears to be fixed. I followed the same repro steps that I outlined in moby/moby#42288 (comment), but this time the container launched successfully and I confirmed that docker inspect returns only a single IPv4 network binding when the instance has IPv6 disabled:

# docker run --name test --rm -d -p 5234:5234 public.ecr.aws/bitnami/minideb:latest sleep 999999
919b26090c74a95ced63e408a99be83f5025dc048a268861408d310b7d29ab78
 
# docker ps
CONTAINER ID   IMAGE                                   COMMAND          CREATED         STATUS         PORTS                    NAMES
919b26090c74   public.ecr.aws/bitnami/minideb:latest   "sleep 999999"   3 seconds ago   Up 2 seconds   0.0.0.0:5234->5234/tcp   test

# docker inspect test | jq .[0].NetworkSettings
{
  "Bridge": "",
  "SandboxID": "0a948f5220386cb91789749e55a78ec66395cec8bac8b429d6d5579acde0c763",
  "HairpinMode": false,
  "LinkLocalIPv6Address": "",
  "LinkLocalIPv6PrefixLen": 0,
  "Ports": {
    "5234/tcp": [
      {
        "HostIp": "0.0.0.0",
        "HostPort": "5234"
      }
    ]
  },
  "SandboxKey": "/var/run/docker/netns/0a948f522038",
  "SecondaryIPAddresses": null,
  "SecondaryIPv6Addresses": null,
  "EndpointID": "0eeffde7661a0c59ac6ac7b0caec5e25b69aee744b61e2875344867488553336",
  "Gateway": "172.17.0.1",
  "GlobalIPv6Address": "",
  "GlobalIPv6PrefixLen": 0,
  "IPAddress": "172.17.0.2",
  "IPPrefixLen": 16,
  "IPv6Gateway": "",
  "MacAddress": "02:42:ac:11:00:02",
  "Networks": {
    "bridge": {
      "IPAMConfig": null,
      "Links": null,
      "Aliases": null,
      "NetworkID": "54ad28d4139a0d0eeba57d3c5733f79761327dfacbf58ec7fd2f65b13ab6a9fb",
      "EndpointID": "0eeffde7661a0c59ac6ac7b0caec5e25b69aee744b61e2875344867488553336",
      "Gateway": "172.17.0.1",
      "IPAddress": "172.17.0.2",
      "IPPrefixLen": 16,
      "IPv6Gateway": "",
      "GlobalIPv6Address": "",
      "GlobalIPv6PrefixLen": 0,
      "MacAddress": "02:42:ac:11:00:02",
      "DriverOpts": null
    }
  }
}


sparrc commented May 27, 2021

That being said, we do have a change in behavior in 20.10.7 when IPv6 is enabled:

20.10.5

# docker run --name test --rm -d -p 5234:5234 public.ecr.aws/bitnami/minideb:latest sleep 999999
# docker inspect test | jq .[0].NetworkSettings.Ports
{
  "5234/tcp": [
    {
      "HostIp": "0.0.0.0",
      "HostPort": "5234"
    }
  ]
}

20.10.7

# docker run --name test --rm -d -p 5234:5234 public.ecr.aws/bitnami/minideb:latest sleep 999999
# docker inspect test | jq .[0].NetworkSettings.Ports
{
  "5234/tcp": [
    {
      "HostIp": "0.0.0.0",
      "HostPort": "5234"
    },
    {
      "HostIp": "::",
      "HostPort": "5234"
    }
  ]
}


sparrc commented May 27, 2021

> I experienced the same issue on Ubuntu 18.04 with Docker 20.10.6 using bridge network mode, ECS agent 1.51.0. [...] These issues start when the port bindings for IPv4 and IPv6 end up on different ports.

@jpradelle I've tried with 20.10.6 and 20.10.7 but have not been able to reproduce the situation where the IPv4 and IPv6 network bindings receive different host ports.

How often do you see that happening? And do you have a task definition that reproduces it?

@jpradelle

It takes time for the IPv4 and IPv6 port bindings to become different. At the beginning both ports are the same. It took almost 3 weeks of running, with at least a hundred deployments/redeployments, before the ports began to differ. So far I have not been able to identify a pattern to reproduce it.

I have twice seen targets responding well to calls made through the ELB (from my browser: http://my-elb/my-target) but being killed due to ELB health check timeouts at the end of the health check grace period.
The first time I didn't check the network bindings.
The second time I had this duplicated network binding on different ports, and the IPv4 port responded fine while the IPv6 port did not: from the EC2 instance hosting the Docker task, with curl, http://localhost:49157/target -> 200 OK, http://localhost:49159/target not responding, and http://ip6-localhost:49159/target not responding either. The target group ran health checks against both ports, and since only one of them worked the task was killed.

Since I renewed my cluster instances, all my tasks are running and being deployed with duplicate network bindings on the same ports, and everything works fine.
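To make the check concrete, this is roughly what I ran from the instance (the ports are the ones ECS reported for that task; --max-time is only there so curl does not hang forever):

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:49157/target                   # 200
curl -s -o /dev/null -w '%{http_code}\n' --max-time 5 http://localhost:49159/target      # no response
curl -s -o /dev/null -w '%{http_code}\n' --max-time 5 http://ip6-localhost:49159/target  # no response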

Here is the CloudFormation template I use:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: stg-mta/my-app
      RetentionInDays: 30
  Task:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: my-app
      NetworkMode: bridge
      TaskRoleArn: ...
      ContainerDefinitions:
      - Name: my-app
        Image: ...
        MemoryReservation: 512
        Environment:
        - ...
        PortMappings:
        - ContainerPort: 8080
          Protocol: tcp
        ReadonlyRootFilesystem: true
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: stg-mta/my-app
            awslogs-region:
              Ref: AWS::Region
            awslogs-stream-prefix: my-app
        MountPoints:
        - SourceVolume: tmp
          ContainerPath: /tmp
      Volumes:
      - Name: tmp
      Tags:
      - ...
  
  Service:
    Type: AWS::ECS::Service
    DependsOn:
    - ListnerRuleApi
    Properties:
      ServiceName: my-app
      TaskDefinition:
        Ref: Task
      Cluster: stg-mta-ecs
      LaunchType: EC2
      DesiredCount: 1
      HealthCheckGracePeriodSeconds: 240
      DeploymentConfiguration:
        MaximumPercent: 200
        MinimumHealthyPercent: 100
      PropagateTags: TASK_DEFINITION
      LoadBalancers:
      - ContainerName: my-app
        ContainerPort: 8080
        TargetGroupArn:
          Ref: TargetGroupApi
      - ContainerName: my-app
        ContainerPort: 8080
        TargetGroupArn:
          Ref: TargetGroupInternalNoSso
      Tags:
      - ...
  TargetGroupApi:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Name: my-app-api
      Port: 8080
      Protocol: HTTP
      VpcId:
        Fn::ImportValue: vpcid
      TargetType: instance
      Matcher:
        HttpCode: 200-299
      HealthCheckPath: /ping.php
      HealthCheckProtocol: HTTP
      HealthCheckIntervalSeconds: 5
      HealthCheckTimeoutSeconds: 4
      HealthyThresholdCount: 3
      UnhealthyThresholdCount: 2
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: '5'
      Tags:
      - ...
  ListnerRuleApi:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn:
        Fn::ImportValue: stg-mta-elb-api-listener
      Actions:
      - Type: forward
        Order: 50000
        TargetGroupArn:
          Ref: TargetGroupApi
      Conditions:
      - Field: path-pattern
        Values:
        - Fn::Sub: /my-app/*
        - Fn::Sub: /other-route2/*
        - Fn::Sub: /other-route3/*
        - Fn::Sub: /other-route4/*
      Priority: 1190
  TargetGroupInternalNoSso:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Name: my-app-int
      Port: 8080
      Protocol: HTTP
      VpcId:
        Fn::ImportValue: vpcid
      TargetType: instance
      Matcher:
        HttpCode: 200-299
      HealthCheckPath: /ping.php
      HealthCheckProtocol: HTTP
      HealthCheckIntervalSeconds: 5
      HealthCheckTimeoutSeconds: 4
      HealthyThresholdCount: 3
      UnhealthyThresholdCount: 2
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: '5'
      Tags:
      - ...
  ListnerRuleInternalNoSso:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn:
        Fn::ImportValue: stg-mta-elb-internal-listener
      Actions:
      - Type: forward
        Order: 50000
        TargetGroupArn:
          Ref: TargetGroupInternalNoSso
      Conditions:
      - Field: path-pattern
        Values:
        - Fn::Sub: /my-app/public/*
        - Fn::Sub: /other-route/public/*
      Priority: 1048

@tomelliff
Author

We just hit the issue again on 20.10.7 after unpinning from 20.10.5, so the upstream issue with different ports for IPv4 and IPv6 is still there.

We haven't disabled IPv6 on the instance, but as mentioned above ECS supposedly has strict opt-ins for IPv6 that are not met here (we're not using awsvpc networking mode for some of the impacted tasks). So either the documentation needs to be updated to explain how IPv6 addresses are used for bridge networking tasks (the documentation has moved to https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking-bridge.html, vs https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking-awsvpc.html for the awsvpc networking mode), or the ECS agent should strip these IPv6 interfaces and not publish them to the ECS API, which is what surfaces the upstream issue.


sparrc commented Jun 16, 2021

Thanks for the update @tomelliff

> We haven't disabled IPv6 on the instance, but as mentioned above ECS supposedly has strict opt-ins for IPv6 that are not met here

Which issue are you referring to here?

I'm not 100% sure what you mean by ECS having "strict opt ins" for IPv6; maybe this happens at a higher level than the ecs-agent, in a way that I don't fully understand.

From the ecs-agent level I'm not exactly sure what the best practice would be here. Obviously a user who intends to use IPv6 should not have their IPv6 interface stripped out, and it's not clear to me how the ecs-agent should determine that a particular ECS instance should be opted out of exposing IPv6 interfaces.

I'm tempted to say that the best solution for you would be to disable IPv6 on your instance using the kernel parameter, but I'm also happy to dig into the issue further if you think there's something the ecs-agent should be doing to filter out these IPv6 interfaces. I can also help reroute this to the ECS backend side of things if this is something that could or should be filtered out on their end.

@jpradelle

I think the Docker 20.10.7 upgrade solved the issue on my side.

I renewed my 2 instances of the cluster based on the same AMI version with the following upgrades:
ECS Agent from version 1.52.2 to 1.53.0
Docker from version 20.10.6 to 20.10.7

And now in my task network bindings I no longer have duplicated network bindings for the same ports. For example, on one task I only have the IPv4 binding, which is what I expected:

docker inspect be12c41c4dcc | jq .[0].NetworkSettings.Ports
{
  "8080/tcp": [
    {
      "HostIp": "0.0.0.0",
      "HostPort": "49184"
    }
  ]
}


sparrc commented Jun 22, 2021

OK @jpradelle, it sounds like you may have been affected by the Docker bug that exposed the IPv6 interface even though you had disabled IPv6 on your instances. Can you confirm whether you have IPv6 disabled?

For anyone else who sees this issue, I believe the current best workaround is to disable IPv6 on your instances with the Linux kernel parameter ipv6.disable=1, which looks something like this on Ubuntu:

  1. edit the grub options (vim /etc/default/grub) and add ipv6.disable=1 to GRUB_CMDLINE_LINUX:
GRUB_CMDLINE_LINUX="ipv6.disable=1"
  2. reboot with shutdown -r now
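As commands, the same steps look roughly like this (note that on Ubuntu you also need to run update-grub so the change to /etc/default/grub is actually written to the grub config before rebooting):

sudo sed -i 's/^GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="ipv6.disable=1 /' /etc/default/grub
sudo update-grub        # regenerate the grub config with the new kernel parameter
sudo shutdown -r now    # reboot so the kernel parameter takes effect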

@jpradelle

I'm not sure exactly what my IPv6 configuration is. We use an Ubuntu AMI updated by my corporate service, and I never changed anything on that side, neither kernel parameters nor Docker configuration.
I think IPv6 is partially disabled, and yes, it is disabled on the Docker side:

sysctl -a 2>/dev/null | grep disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.docker0.disable_ipv6 = 1
net.ipv6.conf.ens5.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.veth0144ea3.disable_ipv6 = 1
net.ipv6.conf.veth0213c98.disable_ipv6 = 1
...

@sharanyad
Contributor

As mentioned by @sparrc, the workaround for now is to disable IPv6 on your instances if Docker 20.10.6 is used (note: the ECS Agent does not support this Docker version yet).
I'm closing this issue. We will revisit this when upgrading the Docker version in ECS.
