[ECS] [request] User Defined Networks #184
Comments
User-defined networks are not yet supported on the task definition. Can you help us understand what you'd intend to use them for? Are you looking for something like service discovery, security isolation, or something else?
Mostly interested in the automated service discovery part, where I can set up predefined domain names for containers and connect my services via them. Unfortunately this only works using user-defined networks. Currently I'm setting up a host DNS server which scans the running containers and updates the DNS entries manually, which is not ideal.
I am running into wanting this too, for the service discovery aspect. I see it supports container links, but I was under the impression that those are now deprecated in favor of using networks. Is this something that will be implemented soon?
My use case is basically bidirectional linking (see http://stackoverflow.com/questions/25324860/how-to-create-a-bidirectional-link-between-containers).
My use case is that I would like to be able to scale the containers in each service separately, but still have communication with containers in a different service/task definition. If multiple task definitions were able to connect to a user-defined network, all containers across those task definitions would have network connectivity on that network by hostname.
Would really love to see this. Currently service discovery is a huge pain, requiring yet another service (which itself is usually cluster-based, self-discovers, and then listens for other services). It's a messy solution, not to mention the Lambda "solutions" that are even more obnoxious to implement and maintain. ECS needs native service discovery support out of the box. A specific example would be clustering services such as RabbitMQ or similar.
+1 to seeing this in place. At a minimum, passing through the equivalent of the
I believe this needs to be looked into with a higher priority. The legacy links feature is currently deprecated and may be removed. This warning is in place on the documentation for the feature: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
+1 - Really need this feature to create MySQL replicas without putting them on the same host/task.
+1 Linking is going away, and there are many services which require knowing their externally reachable host address (host IP + external port) at runtime, which theoretically could be solved with user-defined networks.
+1 I would very much like to be able to define my own network instead of being forced to use 'Host', 'Bridge', or 'None'. The agent doesn't even need to create the network; just allow me to put in a custom network name and then, at runtime, see if the task fails to start because the network doesn't exist.
I need to route traffic through a container that is running a VPN client. That way the actual containers can be used without modification when they need to use a VPN. Similar to the
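For reference, that "route everything through a VPN container" pattern can be expressed today in plain Docker by sharing the VPN container's network namespace; a rough sketch, with image and container names as placeholders (not from this thread):

```sh
# Run a VPN client container; NET_ADMIN is typically needed to create the tunnel device.
docker run -d --name vpn --cap-add NET_ADMIN my-vpn-client:latest

# Run the application container inside the VPN container's network namespace,
# so all of its traffic goes through the tunnel without modifying the app image.
docker run -d --name app --network container:vpn my-app:latest
```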
+1 In Docker, links are indeed already deprecated.
👍 Need this for Elasticsearch nodes.
👍 Need this for Hazelcast.
👍 Would be useful for ZooKeeper.
👍 Consul ... service discovery.
👍 Use case for us is an nginx reverse proxy container which sits in front of an upstream API service running in another container on the same host. Currently our only option is using the deprecated link feature over the bridge network, or using something like DNS/ELB/Consul. But obviously we'd like to avoid making a network hop to call something that's running on the same host.
A major disappointment I have with most (all?) orchestration tools is the assumption that all containers will be mapped to ports on the host. With overlay networks, this is not necessary. Containers can communicate within the network on ports that are not exposed or mapped to the host. This is clearly preferable, as it almost completely eliminates any sort of port management and the possibility of port conflicts.

Start your containers in an overlay network, listen on standard ports (i.e. 80/443) without worrying about conflicts, and set up a proxy to forward requests to your containers by name. Map your proxy to host ports 80/443 and point your ELB at it. Manage it all using your service discovery DNS. This is the most elegant and maintainable solution, yet most orchestration tools will not support it. It's a crying shame. Literally, I am crying over it.

I shudder to think about managing 10,000 containers with port mapping. If each container exposes two ports, that's 20,000 ports I have to manage! Oh, I can make them map to random host ports, but now my proxy logic is much more complicated, and someday I'll simply run out of ports. The bottom line is that a "scalable" solution built on port mapping is not scalable, because mapping ports is not scalable.

I have modified the ECS agent to support this, and it works perfectly for my needs. However, it's less than ideal, because I lose the regular updates to the agent unless I continually merge them in, and I have little to no visibility into or control over the networks from the console or the CLI. Guys, let's ditch the port mapping nonsense. It's not necessary with overlay networks.
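Outside of ECS, the pattern described in the last two comments looks roughly like this with a user-defined bridge network (names and images are placeholders):

```sh
# Create a user-defined network; containers attached to it can resolve each other by name.
docker network create app-net

# The upstream API publishes no host ports at all; it only listens inside the network.
docker run -d --name api --network app-net my-api:latest

# Only the proxy maps a host port. Inside its config, proxy_pass http://api:8080 works
# because Docker's embedded DNS resolves the container name "api" on the user-defined network.
docker run -d --name proxy --network app-net -p 80:80 my-nginx-proxy:latest
```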
@samuelkarp Is this currently in the works? For anyone trying to do service discovery, take a look at the following article: from what I understand, you can use a single Application Load Balancer to load balance up to 75 services by assigning a unique path prefix to each service, which you can then use to address your services. This doesn't cover all use cases, but should be enough for many applications.
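For illustration, a path-based listener rule of the kind that approach relies on can be created like this (ARNs, paths, and priorities are placeholders):

```sh
# Forward /users/* to the target group backing one ECS service; repeat with a different
# priority and path pattern for each additional service, up to the ALB rule limit.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/xxxx/yyyy \
  --priority 10 \
  --conditions Field=path-pattern,Values='/users/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/users-svc/zzzz
```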
@elasticsearcher We're currently working on the ability to attach an ENI to a task and use native VPC networking. We believe that this will address many of the use cases described in this issue, as well as provide integration with existing VPC network topology and features that people are using today. If you're interested in details, check out aws/amazon-ecs-agent#701 (description of how we're planning to do this), the
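In the approach the team describes, a task opts in by using the awsvpc network mode; a minimal sketch of the task-definition side, with family, image, and port as placeholders:

```json
{
  "family": "my-service",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "memory": 512,
      "portMappings": [
        { "containerPort": 8080, "protocol": "tcp" }
      ]
    }
  ]
}
```

Each such task then gets its own elastic network interface, private IP, and security groups in the VPC, so containers can be reached over VPC networking rather than via host port mappings.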
👍 My use case assumes certain containers will join user-defined networks and call each other by host. This setup is meant to run both outside of and inside AWS (through various development phases). No need to reinvent the wheel; please support what's already there.
We require a number of containers to be bundled together with open communication, only exposing what needs to be consumed by the outside world. Link is ugly and not scalable, and we need to be able to set the networks within our task definitions. No need to over-engineer what's already available.
Any updates here? This is a really needed feature.
This is a much-needed feature. I don't understand why AWS does not agree with its users. The use case is fairly common: say you have a database container (serviceDB) that needs to be reached by multiple app containers (serviceApp). Putting the database container and an app container in one task definition and linking them is not going to work.
Surprised no one's mentioned Weaveworks' integration with ECS, because it does pretty much what everyone here is asking for. Basically, Weave assigns an IP address to each container and runs an auto-managed DNS service, which lets any container in the same cluster address any other container by its name. The DNS service also automatically load-balances across containers. I just tried it out and haven't encountered any issues so far. I just had to examine the ECS cluster setup script that they provide in the example to figure out the required SG and IAM configs. Does anyone have experience with Weave and ECS? Any feedback would be super helpful.
@errordeveloper or @2opremio, would you mind chiming in please? I thought I'd loop you in since Weaveworks' solution seems to perfectly address this long-standing ECS feature request. Are there any limitations/concerns that we should be aware of, or is it stable enough to use in production? :)
Yes, Weave Net should be able to solve most (if not all) of the use cases presented above. It's production-ready, and we provide AMIs and CloudFormation templates to run it. See https://www.weave.works/docs/scope/latest/ami/
Thanks, @2opremio, that's great to hear! Weave Net makes connecting containerized apps so much easier.
If someone is interested, I have found a workaround:
Create a script setup-server.py:
Create a script run-server.sh:
In the Dockerfile, add the scripts and the Python tools.
In the ECS task:
It works nicely for me, but I hope the ECS team will add user-defined network functionality.
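The scripts themselves aren't included above. Purely as a hypothetical illustration (not the commenter's actual code), one way a wrapper of this shape can work is to mount the Docker socket into the task and have the entrypoint attach the container to a user-defined network before starting the server:

```sh
#!/bin/sh
# run-server.sh (hypothetical sketch): assumes /var/run/docker.sock is mounted into the
# container, a docker CLI is installed in the image, and the container keeps the default
# hostname (the container ID), which docker accepts as a container reference.
set -e

NETWORK_NAME="${NETWORK_NAME:-app-net}"

# Create the user-defined network once; ignore the error if it already exists.
docker network create "$NETWORK_NAME" 2>/dev/null || true

# Attach this container to the network under a stable alias other containers can resolve.
docker network connect --alias "${SERVICE_ALIAS:-app}" "$NETWORK_NAME" "$(hostname)"

# Hand off to the real server process.
exec "$@"
```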
Hi Everyone, Thank you for your feedback regarding using Docker's user-defined networks.
I thought @yunhee-l was talking about some new features regarding service discovery, but obviously I was wrong. I'm very familiar with the service discovery (and its docs) they launched some time ago, but it forces us to use Route 53, and that's why I don't want to use it.
My two cents: whatever direction you go, you hit major problems with the simple task
The best option is to enable user-defined networks (as requested in this issue), but removing the "outside network communication" limitation for the EC2 launch type would also be a welcome addition.
+1
+1. I would like to preach about the evils of linking containers; you can't link in Fargate, which means no sidecars in Fargate.
Can you explain more about the connection between linking and sidecars in Fargate? We definitely support sidecars in Fargate; if you need containers in the same Fargate task to communicate, they can do so via localhost.
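A minimal sketch of that (images, ports, and sizes are placeholders): two containers in one Fargate task share a network namespace, so the sidecar reaches the app at http://localhost:8080 with no links at all.

```json
{
  "family": "app-with-sidecar",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "portMappings": [{ "containerPort": 8080 }]
    },
    {
      "name": "metrics-sidecar",
      "image": "my-metrics-agent:latest",
      "environment": [
        { "name": "SCRAPE_TARGET", "value": "http://localhost:8080/metrics" }
      ]
    }
  ]
}
```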
I was enlightened this morning about this article:
Looks like the ENI constraints have finally been addressed:

Amazon ECS now supports increased elastic network interface (ENI) limits for tasks in awsvpc networking mode.

Amazon Elastic Container Service (ECS) now supports increased ECS task limits for select Amazon EC2 instances when using awsvpc task networking mode. When you use these instance types and opt in to the awsvpcTrunking account setting, additional Elastic Network Interfaces (ENIs) are available for tasks using awsvpc networking mode on newly launched container instances.

Previously, the number of tasks in awsvpc network mode that could be run on an instance was limited by the number of available Elastic Network Interfaces (ENIs) on the instance; those ENIs could be used by ECS tasks or by other processes outside of ECS. As a result, the number of tasks that could be placed on EC2 instances was often constrained despite there being ample vCPU and memory available for additional containers to utilize.

Now, you have access to an increased number of ENIs for use exclusively by tasks in awsvpc networking mode for select instance types. The increase is anywhere from 3 to 8 times the previous limits, depending on the instance type. The improved ENI limits are available in all regions where ECS is available. Please visit the AWS region table to see where Amazon ECS is available. To learn more about how to opt in, see Account Settings. To get started with increased ENI limits, read our documentation.
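For reference, the opt-in mentioned in the announcement is an ECS account setting, roughly:

```sh
# Opt the currently authenticated IAM identity in to ENI trunking.
aws ecs put-account-setting --name awsvpcTrunking --value enabled

# Or set it as the account default for all IAM users/roles.
aws ecs put-account-setting-default --name awsvpcTrunking --value enabled
```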
Unfortunately, awsvpc (in ECS, not Fargate) has half-baked ENIs that don't allow public addresses. This necessitates setting up single-point-of-failure/costly NATs just for the privilege of accessing the internet.
I ended up mixing links, extraHosts, and mapped ports to work around the lack of bidirectional linking imposed by the lack of user-defined networks. Here's the part of the config that does it, in case it helps anyone else:
User-defined networks would still be appreciated, since they'd make this much simpler.
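The commenter's configuration isn't included above. As a hypothetical reconstruction of that kind of setup (names, images, IPs, and ports are made up), the relevant bridge-mode task-definition fields look like this: `a` reaches `b` through a link, while `b` reaches `a` back through an extraHosts entry pointing at the Docker bridge gateway (typically 172.17.0.1 on the default bridge) plus `a`'s mapped host port.

```json
{
  "family": "bidirectional-example",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "a",
      "image": "service-a:latest",
      "memory": 256,
      "links": ["b"],
      "portMappings": [{ "containerPort": 8080, "hostPort": 8080 }]
    },
    {
      "name": "b",
      "image": "service-b:latest",
      "memory": 256,
      "extraHosts": [{ "hostname": "a", "ipAddress": "172.17.0.1" }]
    }
  ]
}
```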
Still a required feature of ECS. I do understand that we can use
Thank you so much for this.
Would be helpful to get some timeline/info on this request; this is a real blocker for us until we migrate our workload to EKS, which does not have this issue.
It seems like a lot of the desire here is to have DNS resolution between containers on the Docker bridge. One problem with this is that the container name you define in ECS is not the final name that Docker creates the container with; furthermore, we append a random string to the actual name (code reference here), so the name of your container on the bridge network is not predictable. So I believe that to usefully support a user-defined network we would also need to change this naming convention to make container names more easily discoverable.
aws/amazon-ecs-agent#3793 <- this has some overlap with the feature request here. (Looking into the PR now.)
Thanks, @fierlion. In my PR we implemented a custom bridge and network alias name, so we don't change the task container name.
Well, shouldn't this really be lined up to be released?
I'm trying to see whether user-defined networks are supported.
I've looked at the task definition options and could not find any place to set the network the container should connect to.
Is this supported yet?