diff --git a/.openpublishing.redirection.architecture.json b/.openpublishing.redirection.architecture.json index d021c9163487b..d90f8925a13a3 100644 --- a/.openpublishing.redirection.architecture.json +++ b/.openpublishing.redirection.architecture.json @@ -816,6 +816,534 @@ { "source_path_from_root": "/docs/architecture/maui/configuration-management.md", "redirect_url": "/dotnet/architecture/maui/app-settings-management" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/application-bundles.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/application-resiliency-patterns.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/authentication-authorization.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/azure-active-directory.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/azure-caching.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/azure-monitor.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/azure-security.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/candidate-apps.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/centralized-configuration.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/combine-containers-serverless-approaches.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/communication-patterns.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/definition.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/deploy-containers-azure.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/devops.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/distributed-data.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/elastic-search-in-azure.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { 
+ "source_path_from_root": "/docs/architecture/cloud-native/feature-flags.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/front-end-communication.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/grpc.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/identity-server.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/identity.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/includes/download-alert.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/index.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/infrastructure-as-code.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/infrastructure-resiliency-azure.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/introduce-eshoponcontainers-reference-app.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/introduction.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/leverage-containers-orchestrators.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/leverage-serverless-functions.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/logging-with-elastic-stack.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/map-eshoponcontainers-azure-services.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/monitoring-azure-kubernetes.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/monitoring-health.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/observability-patterns.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/other-deployment-options.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": 
"/docs/architecture/cloud-native/relational-vs-nosql-data.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/resiliency.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/resilient-communications.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/scale-applications.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/scale-containers-serverless.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/security.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/service-mesh-communication-infrastructure.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/service-to-service-communication.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/cloud-native/summary.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/asynchronous-message-based-communication.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/communication-in-microservice-architecture.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/containerize-monolithic-applications.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/data-sovereignty-per-microservice.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/distributed-data-management.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/docker-application-state-data.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": 
"/docs/architecture/microservices/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/index.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/logical-versus-physical-architecture.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/maintain-microservice-apis.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/microservices-addressability-service-registry.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/microservices-architecture.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/resilient-high-availability-microservices.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/architect-microservice-container-applications/service-oriented-architecture.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/container-docker-introduction/docker-containers-images-registries.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/container-docker-introduction/docker-defined.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/container-docker-introduction/docker-terminology.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/container-docker-introduction/index.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/docker-application-development-process/docker-app-development-workflow.md", + "redirect_url": 
"https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/docker-application-development-process/index.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/implement-resilient-applications/handle-partial-failure.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/implement-resilient-applications/implement-circuit-breaker-pattern.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/implement-resilient-applications/implement-resilient-entity-framework-core-sql-connections.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/implement-resilient-applications/implement-retries-exponential-backoff.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/implement-resilient-applications/index.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/implement-resilient-applications/monitor-app-health.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/implement-resilient-applications/partial-failure-strategies.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/includes/download-alert.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/index.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/key-takeaways.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/apply-simplified-microservice-cqrs-ddd-patterns.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/client-side-validation.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": 
"/docs/architecture/microservices/microservice-ddd-cqrs-patterns/cqrs-microservice-reads.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/ddd-oriented-microservice.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-events-design-implementation.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-model-layer-validations.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/enumeration-classes-over-enum-types.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/implement-value-objects.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/index.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-implementation-entity-framework-core.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-web-api-design.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-domain-model.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/net-core-microservice-domain-model.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + 
"source_path_from_root": "/docs/architecture/microservices/microservice-ddd-cqrs-patterns/seedwork-domain-model-base-classes-interfaces.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/data-driven-crud-microservice.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/database-server-container.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/implement-api-gateways-with-ocelot.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/index.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/integration-event-based-microservice-communications.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/microservice-application-design.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/multi-container-applications-docker-compose.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/rabbitmq-event-bus-development-test-environment.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/subscribe-events.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/multi-container-microservice-net-applications/test-aspnet-core-services-web-apps.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/net-core-net-framework-containers/container-framework-choice-factors.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/net-core-net-framework-containers/general-guidance.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": 
"/docs/architecture/microservices/net-core-net-framework-containers/index.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/net-core-net-framework-containers/net-container-os-targets.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/net-core-net-framework-containers/net-core-container-scenarios.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/net-core-net-framework-containers/net-framework-container-scenarios.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/net-core-net-framework-containers/official-net-docker-images.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/secure-net-microservices-web-applications/authorization-net-microservices-web-applications.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/secure-net-microservices-web-applications/azure-key-vault-protects-secrets.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/secure-net-microservices-web-applications/developer-app-secrets-storage.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/microservices/secure-net-microservices-web-applications/index.md", + "redirect_url": "https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/architectural-principles.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/azure-hosting-recommendations-for-asp-net-web-apps.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/choose-between-traditional-web-and-single-page-apps.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/common-client-side-web-technologies.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/common-web-application-architectures.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/develop-asp-net-core-mvc-apps.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/development-process-for-azure.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": 
"/docs/architecture/modern-web-apps-azure/includes/download-alert.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/index.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/modern-web-applications-characteristics.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/test-asp-net-core-mvc-apps.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" + }, + { + "source_path_from_root": "/docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md", + "redirect_url": "https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf" } ] -} +} \ No newline at end of file diff --git a/docs/architecture/cloud-native/application-bundles.md b/docs/architecture/cloud-native/application-bundles.md deleted file mode 100644 index ef9048db19b2c..0000000000000 --- a/docs/architecture/cloud-native/application-bundles.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: Cloud Native Application Bundles -description: Architecting Cloud Native .NET Apps for Azure | Cloud Native Application Bundles -ms.date: 04/06/2022 ---- - -# Cloud Native Application Bundles - -[!INCLUDE [download-alert](includes/download-alert.md)] - -A key property of cloud-native applications is that they leverage the capabilities of the cloud to speed up development. This design often means that a full application uses different kinds of technologies. Applications may be shipped in Docker containers, some services may use Azure Functions, while other parts may run directly on virtual machines allocated on large metal servers with hardware GPU acceleration. No two cloud-native applications are the same, so it's been difficult to provide a single mechanism for shipping them. - -The Docker containers may run on Kubernetes using a Helm Chart for deployment. The Azure Functions may be allocated using Terraform templates. Finally, the virtual machines may be allocated using Terraform but built out using Ansible. This is a large variety of technologies and there has been no way to package them all together into a reasonable package. Until now. - -Cloud Native Application Bundles (CNABs) are a joint effort by many community-minded companies such as Microsoft, Docker, and HashiCorp to develop a specification to package distributed applications. - -The effort was announced in December of 2018, so there's still a fair bit of work to do to expose the effort to the greater community. However, there's already an [open specification](https://github.com/deislabs/cnab-spec) and a reference implementation known as [Duffle](https://duffle.sh/). This tool, which was written in Go, is a joint effort between Docker and Microsoft. - -The CNABs can contain different kinds of installation technologies. This aspect allows things like Helm Charts, Terraform templates, and Ansible Playbooks to coexist in the same package. Once built, the packages are self-contained and portable; they can be installed from a USB stick. The packages are cryptographically signed to ensure they originate from the party they claim. - -The core of a CNAB is a file called `bundle.json`. This file defines the contents of the bundle, be they Terraform or images or anything else. 
Figure 11-9 defines a CNAB that invokes some Terraform. Notice, however, that it actually defines an invocation image that is used to invoke the Terraform. When packaged up, the Docker file that is located in the *cnab* directory is built into a Docker image, which will be included in the bundle. Having Terraform installed inside a Docker container in the bundle means that users don't need to have Terraform installed on their machine to run the bundling. - -```json -{ - "name": "terraform", - "version": "0.1.0", - "schemaVersion": "v1.0.0-WD", - "parameters": { - "backend": { - "type": "boolean", - "defaultValue": false, - "destination": { - "env": "TF_VAR_backend" - } - } - }, - "invocationImages": [ - { - "imageType": "docker", - "image": "cnab/terraform:latest" - } - ], - "credentials": { - "tenant_id": { - "env": "TF_VAR_tenant_id" - }, - "client_id": { - "env": "TF_VAR_client_id" - }, - "client_secret": { - "env": "TF_VAR_client_secret" - }, - "subscription_id": { - "env": "TF_VAR_subscription_id" - }, - "ssh_authorized_key": { - "env": "TF_VAR_ssh_authorized_key" - } - }, - "actions": { - "status": { - "modifies": true - } - } -} -``` - -**Figure 10-18** - An example Terraform file - -The `bundle.json` also defines a set of parameters that are passed down into the Terraform. Parameterization of the bundle allows for installation in various different environments. - -The CNAB format is also flexible, allowing it to be used against any cloud. It can even be used against on-premises solutions such as [OpenStack](https://www.openstack.org/). - -## DevOps Decisions - -There are so many great tools in the DevOps space these days and even more fantastic books and papers on how to succeed. A favorite book to get started on the DevOps journey is [The Phoenix Project](https://www.oreilly.com/library/view/the-phoenix-project/9781457191350/), which follows the transformation of a fictional company from NoOps to DevOps. One thing is for certain: DevOps is no longer a "nice to have" when deploying complex, Cloud Native Applications. It's a requirement and should be planned for and resourced at the start of any project. - -## References - -- [Azure DevOps](https://azure.microsoft.com/services/devops/) -- [Azure Resource Manager](/azure/azure-resource-manager/management/overview) -- [Terraform](https://www.terraform.io/) -- [Azure CLI](/cli/azure/) - ->[!div class="step-by-step"] ->[Previous](infrastructure-as-code.md) ->[Next](summary.md) diff --git a/docs/architecture/cloud-native/application-resiliency-patterns.md b/docs/architecture/cloud-native/application-resiliency-patterns.md deleted file mode 100644 index efc32e1507960..0000000000000 --- a/docs/architecture/cloud-native/application-resiliency-patterns.md +++ /dev/null @@ -1,88 +0,0 @@ ---- -title: Application resiliency patterns -description: Architecting Cloud Native .NET Apps for Azure | Application Resiliency Patterns -author: robvet -ms.date: 04/06/2022 ---- - -# Application resiliency patterns - -[!INCLUDE [download-alert](includes/download-alert.md)] - -The first line of defense is application resiliency. - -While you could invest considerable time writing your own resiliency framework, such products already exist. [Polly](https://old.dotnetfoundation.org/projects/polly) is a comprehensive .NET resilience and transient-fault-handling library that allows developers to express resiliency policies in a fluent and thread-safe manner. Polly targets applications built with either .NET Framework or .NET 7. 
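As a rough illustration of Polly's fluent style, the sketch below declares a retry policy with an exponentially increasing backoff. It isn't taken from the reference application; the handled exception type, retry count, and delays are assumptions for illustration.

```csharp
// Minimal Polly retry sketch (exception type, retry count, and delays are illustrative).
using System;
using System.Net.Http;
using Polly;
using Polly.Retry;

public static class ResiliencyPolicies
{
    // Retries a failed HTTP call up to three times, doubling the wait each attempt (2s, 4s, 8s).
    public static AsyncRetryPolicy CreateRetryPolicy() =>
        Policy
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(
                retryCount: 3,
                sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));
}
```

A policy like this wraps the guarded call, for example `await ResiliencyPolicies.CreateRetryPolicy().ExecuteAsync(() => httpClient.GetAsync(uri));`.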
The following table describes the resiliency features, called `policies`, available in the Polly Library. They can be applied individually or grouped together. - -| Policy | Experience | -| :-------- | :-------- | -| Retry | Configures retry operations on designated operations. | -| Circuit Breaker | Blocks requested operations for a predefined period when faults exceed a configured threshold | -| Timeout | Places limit on the duration for which a caller can wait for a response. | -| Bulkhead | Constrains actions to fixed-size resource pool to prevent failing calls from swamping a resource. | -| Cache | Stores responses automatically. | -| Fallback | Defines structured behavior upon a failure. | - -Note how in the previous figure the resiliency policies apply to request messages, whether coming from an external client or back-end service. The goal is to compensate the request for a service that might be momentarily unavailable. These short-lived interruptions typically manifest themselves with the HTTP status codes shown in the following table. - -| HTTP Status Code| Cause | -| :-------- | :-------- | -| 404 | Not Found | -| 408 | Request timeout | -| 429 | Too many requests (you've most likely been throttled) | -| 502 | Bad gateway | -| 503 | Service unavailable | -| 504 | Gateway timeout | - -Question: Would you retry an HTTP Status Code of 403 - Forbidden? No. Here, the system is functioning properly, but informing the caller that they aren't authorized to perform the requested operation. Care must be taken to retry only those operations caused by failures. - -As recommended in Chapter 1, Microsoft developers constructing cloud-native applications should target the .NET platform. Version 2.1 introduced the [HTTPClientFactory](https://www.stevejgordon.co.uk/introduction-to-httpclientfactory-aspnetcore) library for creating HTTP Client instances for interacting with URL-based resources. Superseding the original HTTPClient class, the factory class supports many enhanced features, one of which is [tight integration](../microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly.md) with the Polly resiliency library. With it, you can easily define resiliency policies in the application Startup class to handle partial failures and connectivity issues. - -Next, let's expand on retry and circuit breaker patterns. - -### Retry pattern - -In a distributed cloud-native environment, calls to services and cloud resources can fail because of transient (short-lived) failures, which typically correct themselves after a brief period of time. Implementing a retry strategy helps a cloud-native service mitigate these scenarios. - -The [Retry pattern](/azure/architecture/patterns/retry) enables a service to retry a failed request operation a (configurable) number of times with an exponentially increasing wait time. Figure 6-2 shows a retry in action. - -![Retry pattern in action](./media/retry-pattern.png) - -**Figure 6-2**. Retry pattern in action - -In the previous figure, a retry pattern has been implemented for a request operation. It's configured to allow up to four retries before failing with a backoff interval (wait time) starting at two seconds, which exponentially doubles for each subsequent attempt. - -- The first invocation fails and returns an HTTP status code of 500. The application waits for two seconds and retries the call. -- The second invocation also fails and returns an HTTP status code of 500. 
The application now doubles the backoff interval to four seconds and retries the call. -- Finally, the third call succeeds. -- In this scenario, the retry operation would have attempted up to four retries while doubling the backoff duration before failing the call. -- Had the 4th retry attempt failed, a fallback policy would be invoked to gracefully handle the problem. - -It's important to increase the backoff period before retrying the call to allow the service time to self-correct. It's a best practice to implement an exponentially increasing backoff (doubling the period on each retry) to allow adequate correction time. - -## Circuit breaker pattern - -While the retry pattern can help salvage a request entangled in a partial failure, there are situations where failures can be caused by unanticipated events that will require longer periods of time to resolve. These faults can range in severity from a partial loss of connectivity to the complete failure of a service. In these situations, it's pointless for an application to continually retry an operation that is unlikely to succeed. - -To make things worse, executing continual retry operations on a non-responsive service can move you into a self-imposed denial of service scenario where you flood your service with continual calls exhausting resources such as memory, threads and database connections, causing failure in unrelated parts of the system that use the same resources. - -In these situations, it would be preferable for the operation to fail immediately and only attempt to invoke the service if it's likely to succeed. - -The [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker) can prevent an application from repeatedly trying to execute an operation that's likely to fail. After a pre-defined number of failed calls, it blocks all traffic to the service. Periodically, it will allow a trial call to determine whether the fault has resolved. Figure 6-3 shows the Circuit Breaker pattern in action. - -![Circuit breaker pattern in action](./media/circuit-breaker-pattern.png) - -**Figure 6-3**. Circuit breaker pattern in action - -In the previous figure, a Circuit Breaker pattern has been added to the original retry pattern. Note how after 100 failed requests, the circuit breakers opens and no longer allows calls to the service. The CheckCircuit value, set at 30 seconds, specifies how often the library allows one request to proceed to the service. If that call succeeds, the circuit closes and the service is once again available to traffic. - -Keep in mind that the intent of the Circuit Breaker pattern is *different* than that of the Retry pattern. The Retry pattern enables an application to retry an operation in the expectation that it will succeed. The Circuit Breaker pattern prevents an application from doing an operation that is likely to fail. Typically, an application will *combine* these two patterns by using the Retry pattern to invoke an operation through a circuit breaker. - -## Testing for resiliency - -Testing for resiliency cannot always be done the same way that you test application functionality (by running unit tests, integration tests, and so on). Instead, you must test how the end-to-end workload performs under failure conditions, which only occur intermittently. For example: inject failures by crashing processes, expired certificates, make dependent services unavailable etc. Frameworks like [chaos-monkey](https://github.com/Netflix/chaosmonkey) can be used for such chaos testing. 
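To make the retry and circuit breaker patterns described above concrete, the following sketch shows one common way to attach both to an `HttpClient` through `IHttpClientFactory`. The thresholds, backoff values, client name, and service address are illustrative assumptions, not settings from eShopOnContainers, and the snippet assumes the `Microsoft.Extensions.Http.Polly` package.

```csharp
// Sketch: registering an HttpClient with retry and circuit breaker policies (values are assumptions).
using System;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;

public static class ResilientHttpClientRegistration
{
    public static IServiceCollection AddCatalogClient(this IServiceCollection services)
    {
        services.AddHttpClient("catalog", client =>
            {
                client.BaseAddress = new Uri("https://catalog-api.example.com"); // placeholder address
            })
            // Retry transient failures (HTTP 5xx, 408, HttpRequestException) with exponential backoff.
            .AddPolicyHandler(HttpPolicyExtensions
                .HandleTransientHttpError()
                .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))))
            // Open the circuit for 30 seconds after five consecutive handled failures.
            .AddPolicyHandler(HttpPolicyExtensions
                .HandleTransientHttpError()
                .CircuitBreakerAsync(
                    handledEventsAllowedBeforeBreaking: 5,
                    durationOfBreak: TimeSpan.FromSeconds(30)));

        return services;
    }
}
```

Because the retry handler is registered first, it sits outside the circuit breaker, so each retried attempt is counted by the breaker; swapping the registration order changes that behavior, which makes the ordering a deliberate design choice.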
- -Application resiliency is a must for handling problematic requested operations. But, it's only half of the story. Next, we cover resiliency features available in the Azure cloud. - ->[!div class="step-by-step"] ->[Previous](resiliency.md) ->[Next](infrastructure-resiliency-azure.md) diff --git a/docs/architecture/cloud-native/authentication-authorization.md b/docs/architecture/cloud-native/authentication-authorization.md deleted file mode 100644 index 5a32cd8cb5004..0000000000000 --- a/docs/architecture/cloud-native/authentication-authorization.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: Authentication and Authorization in cloud-native apps -description: Architecting Cloud Native .NET Apps for Azure | Authentication and Authorization in Cloud Native Apps -ms.date: 04/06/2022 ---- - -# Authentication and authorization in cloud-native apps - -[!INCLUDE [download-alert](includes/download-alert.md)] - -*Authentication* is the process of determining the identity of a security principal. *Authorization* is the act of granting an authenticated principal permission to perform an action or access a resource. Sometimes authentication is shortened to `AuthN` and authorization is shortened to `AuthZ`. Cloud-native applications need to rely on open HTTP-based protocols to authenticate security principals since both clients and applications could be running anywhere in the world on any platform or device. The only common factor is HTTP. - -Many organizations still rely on local authentication services like Active Directory Federation Services (ADFS). While this approach has traditionally served organizations well for on premises authentication needs, cloud-native applications benefit from systems designed specifically for the cloud. A recent 2019 United Kingdom National Cyber Security Centre (NCSC) advisory states that "organizations using Azure AD as their primary authentication source will actually lower their risk compared to ADFS." Some reasons outlined in [this analysis](https://oxfordcomputergroup.com/resources/o365-security-native-cloud-authentication/) include: - -- Access to full set of Microsoft credential protection technologies. -- Most organizations are already relying on Azure AD to some extent. -- Double hashing of NTLM hashes ensures compromise won't allow credentials that work in local Active Directory. - -## References - -- [Authentication basics](/azure/active-directory/develop/authentication-scenarios) -- [Access tokens and claims](/azure/active-directory/develop/access-tokens) -- [It may be time to ditch your on premises authentication services](https://oxfordcomputergroup.com/resources/o365-security-native-cloud-authentication/) - ->[!div class="step-by-step"] ->[Previous](identity.md) ->[Next](azure-active-directory.md) diff --git a/docs/architecture/cloud-native/azure-active-directory.md b/docs/architecture/cloud-native/azure-active-directory.md deleted file mode 100644 index dc060d9978151..0000000000000 --- a/docs/architecture/cloud-native/azure-active-directory.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Azure Active Directory -description: Architecting Cloud Native .NET Apps for Azure | Azure Active Directory -ms.date: 04/06/2022 ---- - -# Azure Active Directory - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Microsoft Azure Active Directory (Azure AD) offers identity and access management as a service. 
Customers use it to configure and maintain who users are, what information to store about them, who can access that information, who can manage it, and what apps can access it. AAD can authenticate users for applications configured to use it, providing a single sign-on (SSO) experience. It can be used on its own or be integrated with Windows AD running on premises. - -Azure AD is built for the cloud. It's truly a cloud-native identity solution that uses a REST-based Graph API and OData syntax for queries, unlike Windows AD, which uses LDAP. On premises Active Directory can sync user attributes to the cloud using Identity Sync Services, allowing all authentication to take place in the cloud using Azure AD. Alternately, authentication can be configured via Connect to pass back to local Active Directory via ADFS to be completed by Windows AD on premises. - -Azure AD supports company branded sign-in screens, multi-factory authentication, and cloud-based application proxies that are used to provide SSO for applications hosted on premises. It offers different kinds of security reporting and alert capabilities. - -## References - -- [Microsoft identity platform](/azure/active-directory/develop/) - ->[!div class="step-by-step"] ->[Previous](authentication-authorization.md) ->[Next](identity-server.md) diff --git a/docs/architecture/cloud-native/azure-caching.md b/docs/architecture/cloud-native/azure-caching.md deleted file mode 100644 index 31a8fbfe95cf8..0000000000000 --- a/docs/architecture/cloud-native/azure-caching.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -title: Caching in a cloud-native application -description: Learn about caching strategies in a cloud-native application. -author: robvet -ms.date: 04/06/2022 ---- - -# Caching in a cloud-native app - -[!INCLUDE [download-alert](includes/download-alert.md)] - -The benefits of caching are well understood. The technique works by temporarily copying frequently accessed data from a backend data store to *fast storage* that's located closer to the application. Caching is often implemented where... - -- Data remains relatively static. -- Data access is slow, especially compared to the speed of the cache. -- Data is subject to high levels of contention. - -## Why? - -As discussed in the [Microsoft caching guidance](/azure/architecture/best-practices/caching), caching can increase performance, scalability, and availability for individual microservices and the system as a whole. It reduces the latency and contention of handling large volumes of concurrent requests to a data store. As data volume and the number of users increase, the greater the benefits of caching become. - -Caching is most effective when a client repeatedly reads data that is immutable or that changes infrequently. Examples include reference information such as product and pricing information, or shared static resources that are costly to construct. - -While microservices should be stateless, a distributed cache can support concurrent access to session state data when absolutely required. - -Also consider caching to avoid repetitive computations. If an operation transforms data or performs a complicated calculation, cache the result for subsequent requests. - -## Caching architecture - -Cloud native applications typically implement a distributed caching architecture. The cache is hosted as a cloud-based [backing service](./definition.md#backing-services), separate from the microservices. Figure 5-15 shows the architecture. 
- -![Caching in a cloud native app](media/caching-in-a-cloud-native-app.png) - -**Figure 5-15**: Caching in a cloud native app - -In the previous figure, note how the cache is independent of and shared by the microservices. In this scenario, the cache is invoked by the [API Gateway](./front-end-communication.md). As discussed in chapter 4, the gateway serves as a front end for all incoming requests. The distributed cache increases system responsiveness by returning cached data whenever possible. Additionally, separating the cache from the services allows the cache to scale up or out independently to meet increased traffic demands. - -The previous figure presents a common caching pattern known as the [cache-aside pattern](/azure/architecture/patterns/cache-aside). For an incoming request, you first query the cache (step \#1) for a response. If found, the data is returned immediately. If the data doesn't exist in the cache (known as a [cache miss](https://www.techopedia.com/definition/6308/cache-miss)), it's retrieved from a local database in a downstream service (step \#2). It's then written to the cache for future requests (step \#3), and returned to the caller. Care must be taken to periodically evict cached data so that the system remains timely and consistent. - -As a shared cache grows, it might prove beneficial to partition its data across multiple nodes. Doing so can help minimize contention and improve scalability. Many Caching services support the ability to dynamically add and remove nodes and rebalance data across partitions. This approach typically involves clustering. Clustering exposes a collection of federated nodes as a seamless, single cache. Internally, however, the data is dispersed across the nodes following a predefined distribution strategy that balances the load evenly. - -## Azure Cache for Redis - -[Azure Cache for Redis](https://azure.microsoft.com/services/cache/) is a secure data caching and messaging broker service, fully managed by Microsoft. Consumed as a Platform as a Service (PaaS) offering, it provides high throughput and low-latency access to data. The service is accessible to any application within or outside of Azure. - -The Azure Cache for Redis service manages access to open-source Redis servers hosted across Azure data centers. The service acts as a facade providing management, access control, and security. The service natively supports a rich set of data structures, including strings, hashes, lists, and sets. If your application already uses Redis, it will work as-is with Azure Cache for Redis. - -Azure Cache for Redis is more than a simple cache server. It can support a number of scenarios to enhance a microservices architecture: - -- An in-memory data store -- A distributed non-relational database -- A message broker -- A configuration or discovery server - -For advanced scenarios, a copy of the cached data can be [persisted to disk](/azure/azure-cache-for-redis/cache-how-to-premium-persistence). If a catastrophic event disables both the primary and replica caches, the cache is reconstructed from the most recent snapshot. - -Azure Redis Cache is available across a number of predefined configurations and pricing tiers. The [Premium tier](/azure/azure-cache-for-redis/cache-overview#service-tiers) features many enterprise-level features such as clustering, data persistence, geo-replication, and virtual-network isolation. 
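As a minimal sketch of the cache-aside pattern described earlier, the following code reads through Azure Cache for Redis using the StackExchange.Redis client. The key format, five-minute expiry, and the `LoadProductFromDatabaseAsync` helper are assumptions for illustration rather than code from the reference application.

```csharp
// Cache-aside sketch with StackExchange.Redis (key format, TTL, and data-store helper are assumptions).
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class ProductCache
{
    private readonly IDatabase _cache;

    public ProductCache(IConnectionMultiplexer redis) => _cache = redis.GetDatabase();

    public async Task<string> GetProductAsync(int productId)
    {
        string key = $"product:{productId}";

        // Step 1: query the cache first.
        RedisValue cached = await _cache.StringGetAsync(key);
        if (cached.HasValue)
        {
            return cached;
        }

        // Step 2: cache miss - load the item from the downstream data store (hypothetical helper).
        string product = await LoadProductFromDatabaseAsync(productId);

        // Step 3: write it back to the cache with an expiry so stale entries are eventually evicted.
        await _cache.StringSetAsync(key, product, expiry: TimeSpan.FromMinutes(5));

        return product;
    }

    private static Task<string> LoadProductFromDatabaseAsync(int productId) =>
        Task.FromResult($"{{ \"id\": {productId} }}"); // stand-in for a real repository or EF Core query
}
```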
- ->[!div class="step-by-step"] ->[Previous](relational-vs-nosql-data.md) ->[Next](elastic-search-in-azure.md) diff --git a/docs/architecture/cloud-native/azure-monitor.md b/docs/architecture/cloud-native/azure-monitor.md deleted file mode 100644 index c8940d86f29d5..0000000000000 --- a/docs/architecture/cloud-native/azure-monitor.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -title: Azure Monitor -description: Using Azure Monitor to gain visibility into your system is running. -ms.date: 04/06/2022 ---- - -# Azure Monitor - -[!INCLUDE [download-alert](includes/download-alert.md)] - -No other cloud provider has as mature of a cloud application monitoring solution than that found in Azure. Azure Monitor is an umbrella name for a collection of tools designed to provide visibility into the state of your system. It helps you understand how your cloud-native services are performing and proactively identifies issues affecting them. Figure 7-12 presents a high level of view of Azure Monitor. - -![High-level view of Azure Monitor.](./media/azure-monitor.png) -**Figure 7-12**. High-level view of Azure Monitor. - -## Gathering logs and metrics - -The first step in any monitoring solution is to gather as much data as possible. The more data gathered, the deeper the insights. Instrumenting systems has traditionally been difficult. Simple Network Management Protocol (SNMP) was the gold standard protocol for collecting machine level information, but it required a great deal of knowledge and configuration. Fortunately, much of this hard work has been eliminated as the most common metrics are gathered automatically by Azure Monitor. - -Application level metrics and events aren't possible to instrument automatically because they're specific to the application being deployed. In order to gather these metrics, there are [SDKs and APIs available](/azure/azure-monitor/app/api-custom-events-metrics) to directly report such information, such as when a customer signs up or completes an order. Exceptions can also be captured and reported back into Azure Monitor via Application Insights. The SDKs support most every language found in Cloud Native Applications including Go, Python, JavaScript, and the .NET languages. - -The ultimate goal of gathering information about the state of your application is to ensure that your end users have a good experience. What better way to tell if users are experiencing issues than doing [outside-in web tests](/azure/azure-monitor/app/monitor-web-app-availability)? These tests can be as simple as pinging your website from locations around the world or as involved as having agents log into the site and simulate user actions. - -## Reporting data - -Once the data is gathered, it can be manipulated, summarized, and plotted into charts, which allow users to instantly see when there are problems. These charts can be gathered into dashboards or into Workbooks, a multi-page report designed to tell a story about some aspect of the system. - -No modern application would be complete without some artificial intelligence or machine learning. To this end, data [can be passed](https://www.youtube.com/watch?v=Cuza-I1g9tw) to the various machine learning tools in Azure to allow you to extract trends and information that would otherwise be hidden. - -Application Insights provides a powerful (SQL-like) query language called *Kusto* that can query records, summarize them, and even plot charts. 
For example, the following query will locate all records for the month of November 2007, group them by state, and plot the top 10 as a pie chart. - -```kusto -StormEvents -| where StartTime >= datetime(2007-11-01) and StartTime < datetime(2007-12-01) -| summarize count() by State -| top 10 by count_ -| render piechart -``` - -Figure 7-13 shows the results of this Application Insights Query. - -![Application Insights query results](./media/application_insights_example.png) -**Figure 7-13**. Application Insights query results. - -There is a [playground for experimenting with Kusto](https://dataexplorer.azure.com/clusters/help/databases/Samples) queries. Reading [sample queries](/azure/kusto/query/samples) can also be instructive. - -## Dashboards - -There are several different dashboard technologies that may be used to surface the information from Azure Monitor. Perhaps the simplest is to just run queries in Application Insights and [plot the data into a chart](/azure/azure-monitor/learn/tutorial-app-dashboards). - -![An example of Application Insights charts embedded in the main Azure Dashboard](./media/azure_dashboard.png) -**Figure 7-14**. An example of Application Insights charts embedded in the main Azure Dashboard. - -These charts can then be embedded in the Azure portal proper through use of the dashboard feature. For users with more exacting requirements, such as being able to drill down into several tiers of data, Azure Monitor data is available to [Power BI](https://powerbi.microsoft.com/). Power BI is an industry-leading, enterprise class, business intelligence tool that can aggregate data from many different data sources. - -![An example Power BI dashboard](./media/powerbidashboard.png) - -**Figure 7-15**. An example Power BI dashboard. - -## Alerts - -Sometimes, having data dashboards is insufficient. If nobody is awake to watch the dashboards, then it can still be many hours before a problem is addressed, or even detected. To this end, Azure Monitor also provides a top notch [alerting solution](/azure/azure-monitor/platform/alerts-overview). Alerts can be triggered by a wide range of conditions including: - -- Metric values -- Log search queries -- Activity Log events -- Health of the underlying Azure platform -- Tests for web site availability - -When triggered, the alerts can perform a wide variety of tasks. On the simple side, the alerts may just send an e-mail notification to a mailing list or a text message to an individual. More involved alerts might trigger a workflow in a tool such as PagerDuty, which is aware of who is on call for a particular application. Alerts can trigger actions in [Microsoft Flow](https://flow.microsoft.com/) unlocking near limitless possibilities for workflows. - -As common causes of alerts are identified, the alerts can be enhanced with details about the common causes of the alerts and the steps to take to resolve them. Highly mature cloud-native application deployments may opt to kick off self-healing tasks, which perform actions such as removing failing nodes from a scale set or triggering an autoscaling activity. Eventually it may no longer be necessary to wake up on-call personnel at 2AM to resolve a live-site issue as the system will be able to adjust itself to compensate or at least limp along until somebody arrives at work the next morning. - -Azure Monitor automatically leverages machine learning to understand the normal operating parameters of deployed applications. 
This approach enables it to detect services that are operating outside of their normal parameters. For instance, the typical weekday traffic on the site might be 10,000 requests per minute. And then, on a given week, suddenly the number of requests hits a highly unusual 20,000 requests per minute. [Smart Detection](/azure/azure-monitor/app/proactive-diagnostics) will notice this deviation from the norm and trigger an alert. At the same time, the trend analysis is smart enough to avoid firing false positives when the traffic load is expected. - -## References - -- [Azure Monitor](/azure/azure-monitor/overview) - ->[!div class="step-by-step"] ->[Previous](monitoring-azure-kubernetes.md) ->[Next](identity.md) diff --git a/docs/architecture/cloud-native/azure-security.md b/docs/architecture/cloud-native/azure-security.md deleted file mode 100644 index 94f25bed2a471..0000000000000 --- a/docs/architecture/cloud-native/azure-security.md +++ /dev/null @@ -1,271 +0,0 @@ ---- -title: Azure security for cloud-native apps -description: Architecting Cloud Native .NET Apps for Azure | Azure Security for Cloud Native Apps -ms.date: 04/06/2022 ---- - -# Azure security for cloud-native apps - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Cloud-native applications can be both easier and more difficult to secure than traditional applications. On the downside, you need to secure more smaller applications and dedicate more energy to build out the security infrastructure. The heterogeneous nature of programming languages and styles in most service deployments also means you need to pay more attention to security bulletins from many different providers. - -On the flip side, smaller services, each with their own data store, limit the scope of an attack. If an attacker compromises one system, it's probably more difficult for the attacker to make the jump to another system than it is in a monolithic application. Process boundaries are strong boundaries. Also, if a database backup gets exposed, then the damage is more limited, as that database contains only a subset of data and is unlikely to contain personal data. - -## Threat modeling - -No matter if the advantages outweigh the disadvantages of cloud-native applications, the same holistic security mindset must be followed. Security and secure thinking must be part of every step of the development and operations story. When planning an application ask questions like: - -- What would be the impact of this data being lost? -- How can we limit the damage from bad data being injected into this service? -- Who should have access to this data? -- Are there auditing policies in place around the development and release process? - -All these questions are part of a process called [threat modeling](/azure/security/azure-security-threat-modeling-tool). This process tries to answer the question of what threats there are to the system, how likely the threats are, and the potential damage from them. - -Once the list of threats has been established, you need to decide whether they're worth mitigating. Sometimes a threat is so unlikely and expensive to plan for that it isn't worth spending energy on it. For instance, some state level actor could inject changes into the design of a process that is used by millions of devices. Now, instead of running a certain piece of code in [Ring 3](https://en.wikipedia.org/wiki/Protection_ring), that code is run in Ring 0. 
This process allows an exploit that can bypass the hypervisor and run the attack code on the bare metal machines, allowing attacks on all the virtual machines that are running on that hardware. - -The altered processors are difficult to detect without a microscope and advanced knowledge of the on silicon design of that processor. This scenario is unlikely to happen and expensive to mitigate, so probably no threat model would recommend building exploit protection for it. - -More likely threats, such as broken access controls permitting `Id` incrementing attacks (replacing `Id=2` with `Id=3` in the URL) or SQL injection, are more attractive to build protections against. The mitigations for these threats are quite reasonable to build and prevent embarrassing security holes that smear the company's reputation. - -## Principle of least privilege - -One of the founding ideas in computer security is the Principle of Least Privilege (POLP). It's actually a foundational idea in most any form of security be it digital or physical. In short, the principle is that any user or process should have the smallest number of rights possible to execute its task. - -As an example, think of the tellers at a bank: accessing the safe is an uncommon activity. So, the average teller can't open the safe themselves. To gain access, they need to escalate their request through a bank manager, who performs additional security checks. - -In a computer system, a fantastic example is the rights of a user connecting to a database. In many cases, there's a single user account used to both build the database structure and run the application. Except in extreme cases, the account running the application doesn't need the ability to update schema information. There should be several accounts that provide different levels of privilege. The application should only use the permission level that grants read and writes access to the data in the tables. This kind of protection would eliminate attacks that aimed to drop database tables or introduce malicious triggers. - -Almost every part of building a cloud-native application can benefit from remembering the principle of least privilege. You can find it at play when setting up firewalls, network security groups, roles, and scopes in Role-based access control (RBAC). - -## Penetration testing - -As applications become more complicated the number of attack vectors increases at an alarming rate. Threat modeling is flawed in that it tends to be executed by the same people building the system. In the same way that many developers have trouble envisioning user interactions and then build unusable user interfaces, most developers have difficulty seeing every attack vector. It's also possible that the developers building the system aren't well versed in attack methodologies and miss something crucial. - -Penetration testing or "pen testing" involves bringing in external actors to attempt to attack the system. These attackers may be an external consulting company or other developers with good security knowledge from another part of the business. They're given carte blanche to attempt to subvert the system. Frequently, they'll find extensive security holes that need to be patched. Sometimes the attack vector will be something totally unexpected like exploiting a phishing attack against the CEO. 
- -Azure itself is constantly undergoing attacks from a [team of hackers inside Microsoft](https://azure.microsoft.com/resources/videos/red-vs-blue-internal-security-penetration-testing-of-microsoft-azure/). Over the years, they've been the first to find dozens of potentially catastrophic attack vectors, closing them before they can be exploited externally. The more tempting a target, the more likely that eternal actors will attempt to exploit it and there are a few targets in the world more tempting than Azure. - -## Monitoring - -Should an attacker attempt to penetrate an application, there should be some warning of it. Frequently, attacks can be spotted by examining the logs from services. Attacks leave telltale signs that can be spotted before they succeed. For instance, an attacker attempting to guess a password will make many requests to a login system. Monitoring around the login system can detect weird patterns that are out of line with the typical access pattern. This monitoring can be turned into an alert that can, in turn, alert an operations person to activate some sort of countermeasure. A highly mature monitoring system might even take action based on these deviations proactively adding rules to block requests or throttle responses. - -## Securing the build - -One place where security is often overlooked is around the build process. Not only should the build run security checks, such as scanning for insecure code or checked-in credentials, but the build itself should be secure. If the build server is compromised, then it provides a fantastic vector for introducing arbitrary code into the product. - -Imagine that an attacker is looking to steal the passwords of people signing into a web application. They could introduce a build step that modifies the checked-out code to mirror any login request to another server. The next time code goes through the build, it's silently updated. The source code vulnerability scanning won't catch this vulnerability as it runs before the build. Equally, nobody will catch it in a code review because the build steps live on the build server. The exploited code will go to production where it can harvest passwords. Probably there's no audit log of the build process changes, or at least nobody monitoring the audit. - -This scenario is a perfect example of a seemingly low-value target that can be used to break into the system. Once an attacker breaches the perimeter of the system, they can start working on finding ways to elevate their permissions to the point that they can cause real harm anywhere they like. - -## Building secure code - -.NET Framework is already a quite secure framework. It avoids some of the pitfalls of unmanaged code, such as walking off the ends of arrays. Work is actively done to fix security holes as they're discovered. There's even a [bug bounty program](https://www.microsoft.com/msrc/bounty) that pays researchers to find issues in the framework and report them instead of exploiting them. - -There are many ways to make .NET code more secure. Following guidelines such as the [Secure coding guidelines for .NET](../../standard/security/secure-coding-guidelines.md) article is a reasonable step to take to ensure that the code is secure from the ground up. The [OWASP top 10](https://owasp.org/www-project-top-ten/) is another invaluable guide to build secure code. - -The build process is a good place to put scanning tools to detect problems in source code before they make it into production. 
Most every project has dependencies on some other packages. A tool that can scan for outdated packages will catch problems in a nightly build. Even when building Docker images, it's useful to check and make sure that the base image doesn't have known vulnerabilities. Another thing to check is that nobody has accidentally checked in credentials. - -## Built-in security - -Azure is designed to balance usability and security for most users. Different users are going to have different security requirements, so they need to fine-tune their approach to cloud security. Microsoft publishes a great deal of security information in the [Trust Center](https://azure.microsoft.com/support/trust-center/). This resource should be the first stop for those professionals interested in understanding how the built-in attack mitigation technologies work. - -Within the Azure portal, the [Azure Advisor](https://azure.microsoft.com/services/advisor/) is a system that is constantly scanning an environment and making recommendations. Some of these recommendations are designed to save users money, but others are designed to identify potentially insecure configurations, such as having a storage container open to the world and not protected by a Virtual Network. - -## Azure network infrastructure - -In an on-premises deployment environment, a great deal of energy is dedicated to setting up networking. Setting up routers, switches, and the such is complicated work. Networks allow certain resources to talk to other resources and prevent access in some cases. A frequent network rule is to restrict access to the production environment from the development environment on the off chance that a half-developed piece of code runs awry and deletes a swath of data. - -Out of the box, most PaaS Azure resources have only the most basic and permissive networking setup. For instance, anybody on the Internet can access an app service. New SQL Server instances typically come restricted, so that external parties can't access them, but the IP address ranges used by Azure itself are permitted through. So, while the SQL server is protected from external threats, an attacker only needs to set up an Azure bridgehead from where they can launch attacks against all SQL instances on Azure. - -Fortunately, most Azure resources can be placed into an Azure Virtual Network that allows fine-grained access control. Similar to the way that on-premises networks establish private networks that are protected from the wider world, virtual networks are islands of private IP addresses that are located within the Azure network. - -![Figure 9-1 A virtual network in Azure](./media/virtual-network.png) - -**Figure 9-1**. A virtual network in Azure. - -In the same way that on-premises networks have a firewall governing access to the network, you can establish a similar firewall at the boundary of the virtual network. By default, all the resources on a virtual network can still talk to the Internet. It's only incoming connections that require some form of explicit firewall exception. - -With the network established, internal resources like storage accounts can be set up to only allow for access by resources that are also on the Virtual Network. This firewall provides an extra level of security, should the keys for that storage account be leaked, attackers wouldn't be able to connect to it to exploit the leaked keys. This scenario is another example of the principle of least privilege. 
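As a sketch of what such a locked-down storage account can look like, the following Azure CLI commands deny public traffic by default and then allow only a single subnet. The resource group, account, and network names are placeholders, and depending on your setup the subnet may also need the `Microsoft.Storage` service endpoint enabled before the rule takes effect:

```console
# Deny traffic that doesn't originate from the virtual network (placeholder names)
az storage account update --resource-group my-rg --name mystorageacct --default-action Deny

# Allow access only from one subnet of the virtual network
az storage account network-rule add --resource-group my-rg --account-name mystorageacct --vnet-name my-vnet --subnet my-subnet
```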
- -The nodes in an Azure Kubernetes cluster can participate in a virtual network just like other resources that are more native to Azure. This functionality is called [Azure Container Networking Interface](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md). In effect, it allocates a subnet within the virtual network on which virtual machines and container images are allocated. - -Continuing down the path of illustrating the principle of least privilege, not every resource within a Virtual Network needs to talk to every other resource. For instance, in an application that provides a web API over a storage account and a SQL database, it's unlikely that the database and the storage account need to talk to one another. Any data sharing between them would go through the web application. So, a [network security group (NSG)](/azure/virtual-network/security-overview) could be used to deny traffic between the two services. - -A policy of denying communication between resources can be annoying to implement, especially coming from a background of using Azure without traffic restrictions. On some other clouds, the concept of network security groups is much more prevalent. For instance, the default policy on AWS is that resources can't communicate among themselves until enabled by rules in an NSG. While slower to develop this, a more restrictive environment provides a more secure default. Making use of proper DevOps practices, especially using [Azure Resource Manager or Terraform](infrastructure-as-code.md) to manage permissions can make controlling the rules easier. - -Virtual Networks can also be useful when setting up communication between on-premises and cloud resources. A virtual private network can be used to seamlessly attach the two networks together. This approach allows running a virtual network without any sort of gateway for scenarios where all the users are on-site. There are a number of technologies that can be used to establish this network. The simplest is to use a [site-to-site VPN](/azure/vpn-gateway/vpn-gateway-about-vpngateways?toc=%252fazure%252fvirtual-network%252ftoc.json#s2smulti) that can be established between many routers and Azure. Traffic is encrypted and tunneled over the Internet at the same cost per byte as any other traffic. For scenarios where more bandwidth or more security is desirable, Azure offers a service called [Express Route](/azure/vpn-gateway/vpn-gateway-about-vpngateways?toc=%252fazure%252fvirtual-network%252ftoc.json#ExpressRoute) that uses a private circuit between an on-premises network and Azure. It's more costly and difficult to establish but also more secure. - -## Role-based access control for restricting access to Azure resources - -RBAC is a system that provides an identity to applications running in Azure. Applications can access resources using this identity instead of or in addition to using keys or passwords. - -## Security Principals - -The first component in RBAC is a security principal. A security principal can be a user, group, service principal, or managed identity. - -![Figure 9-2 Different types of security principals](./media/rbac-security-principal.png) - -**Figure 9-2**. Different types of security principals. - -- User - Any user who has an account in Azure Active Directory is a user. -- Group - A collection of users from Azure Active Directory. As a member of a group, a user takes on the roles of that group in addition to their own. 
-- Service principal - A security identity under which services or applications run. -- Managed identity - An Azure Active Directory identity managed by Azure. Managed identities are typically used when developing cloud applications that manage the credentials for authenticating to Azure services. - -The security principal can be applied to most any resource. This aspect means that it's possible to assign a security principal to a container running within Azure Kubernetes, allowing it to access secrets stored in Key Vault. An Azure Function could take on a permission allowing it to talk to an Active Directory instance to validate a JWT for a calling user. Once services are enabled with a service principal, their permissions can be managed granularly using roles and scopes. - -## Roles - -A security principal can take on many roles or, using a more sartorial analogy, wear many hats. Each role defines a series of permissions such as "Read messages from Azure Service Bus endpoint". The effective permission set of a security principal is the combination of all the permissions assigned to all the roles that a security principal has. Azure has a large number of built-in roles and users can define their own roles. - -![Figure 9-3 RBAC role definitions](./media/rbac-role-definition.png) - -**Figure 9-3**. RBAC role definitions. - -Built into Azure are also a number of high-level roles such as Owner, Contributor, Reader, and User Account Administrator. With the Owner role, a security principal can access all resources and assign permissions to others. A contributor has the same level of access to all resources but they can't assign permissions. A Reader can only view existing Azure resources and a User Account Administrator can manage access to Azure resources. - -More granular built-in roles such as [DNS Zone Contributor](/azure/role-based-access-control/built-in-roles#dns-zone-contributor) have rights limited to a single service. Security principals can take on any number of roles. - -## Scopes - -Roles can be applied to a restricted set of resources within Azure. For instance, applying scope to the previous example of reading from a Service Bus queue, you can narrow the permission to a single queue: "Read messages from Azure Service Bus endpoint `blah.servicebus.windows.net/queue1`" - -The scope can be as narrow as a single resource or it can be applied to an entire resource group, subscription, or even management group. - -When testing if a security principal has certain permission, the combination of role and scope are taken into account. This combination provides a powerful authorization mechanism. - -## Deny - -Previously, only "allow" rules were permitted for RBAC. This behavior made some scopes complicated to build. For instance, allowing a security principal access to all storage accounts except one required granting explicit permission to a potentially endless list of storage accounts. Every time a new storage account was created, it would have to be added to this list of accounts. This added management overhead that certainly wasn't desirable. - -Deny rules take precedence over allow rules. Now representing the same "allow all but one" scope could be represented as two rules "allow all" and "deny this one specific one". Deny rules not only ease management but allow for resources that are extra secure by denying access to everybody. 
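Whether a rule allows or denies, it's always expressed as a role applied at a scope. As an illustrative sketch, the following Azure CLI command grants a service principal read access to a single Service Bus namespace and nothing else; every identifier shown is a placeholder:

```console
# Assign a narrowly scoped built-in role to a service principal (all IDs are placeholders)
az role assignment create \
    --assignee "00000000-0000-0000-0000-000000000000" \
    --role "Azure Service Bus Data Receiver" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.ServiceBus/namespaces/my-namespace"
```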
- -## Checking access - -As you can imagine, having a large number of roles and scopes can make figuring out the effective permission of a service principal quite difficult. Piling deny rules on top of that, only serves to increase the complexity. Fortunately, there's a [permissions calculator](/azure/role-based-access-control/check-access) that can show the effective permissions for any service principal. It's typically found under the IAM tab in the portal, as shown in Figure 9-3. - -![Figure 9-4 Permission calculator for an app service](./media/check-rbac.png) - -**Figure 9-4**. Permission calculator for an app service. - -## Securing secrets - -Passwords and certificates are a common attack vector for attackers. Password-cracking hardware can do a brute-force attack and try to guess billions of passwords per second. So it's important that the passwords that are used to access resources are strong, with a large variety of characters. These passwords are exactly the kind of passwords that are near impossible to remember. Fortunately, the passwords in Azure don't actually need to be known by any human. - -Many security [experts suggest](https://www.troyhunt.com/password-managers-dont-have-to-be-perfect-they-just-have-to-be-better-than-not-having-one/) that using a password manager to keep your own passwords is the best approach. While it centralizes your passwords in one location, it also allows using highly complex passwords and ensuring they're unique for each account. The same system exists within Azure: a central store for secrets. - -## Azure Key Vault - -Azure Key Vault provides a centralized location to store passwords for things such as databases, API keys, and certificates. Once a secret is entered into the Vault, it's never shown again and the commands to extract and view it are purposefully complicated. The information in the safe is protected using either software encryption or FIPS 140-2 Level 2 validated Hardware Security Modules. - -Access to the key vault is provided through RBACs, meaning that not just any user can access the information in the vault. Say a web application wishes to access the database connection string stored in Azure Key Vault. To gain access, applications need to run using a service principal. Under this assumed role, they can read the secrets from the safe. There are a number of different security settings that can further limit the access that an application has to the vault, so that it can't update secrets but only read them. - -Access to the key vault can be monitored to ensure that only the expected applications are accessing the vault. The logs can be integrated back into Azure Monitor, unlocking the ability to set up alerts when unexpected conditions are encountered. - -## Kubernetes - -Within Kubernetes, there's a similar service for maintaining small pieces of secret information. Kubernetes Secrets can be set via the typical `kubectl` executable. 
- -Creating a secret is as simple as finding the base64 version of the values to be stored: - -```console -echo -n 'admin' | base64 -YWRtaW4= -echo -n '1f2d1e2e67df' | base64 -MWYyZDFlMmU2N2Rm -``` - -Then adding it to a secrets file named `secret.yml` for example that looks similar to the following example: - -```yml -apiVersion: v1 -kind: Secret -metadata: - name: mysecret -type: Opaque -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -``` - -Finally, this file can be loaded into Kubernetes by running the following command: - -```console -kubectl apply -f ./secret.yaml -``` - -These secrets can then be mounted into volumes or exposed to container processes through environment variables. The [Twelve-factor app](https://12factor.net/) approach to building applications suggests using the lowest common denominator to transmit settings to an application. Environment variables are the lowest common denominator, because they're supported no matter the operating system or application. - -An alternative to use the built-in Kubernetes secrets is to access the secrets in Azure Key Vault from within Kubernetes. The simplest way to do this is to assign an RBAC role to the container looking to load secrets. The application can then use the Azure Key Vault APIs to access the secrets. However, this approach requires modifications to the code and doesn't follow the pattern of using environment variables. Instead, it's possible to inject values into a container. This approach is actually more secure than using the Kubernetes secrets directly, as they can be accessed by users on the cluster. - -## Encryption in transit and at rest - -Keeping data safe is important whether it's on disk or transiting between various different services. The most effective way to keep data from leaking is to encrypt it into a format that can't be easily read by others. Azure supports a wide range of encryption options. - -### In transit - -There are several ways to encrypt traffic on the network in Azure. The access to Azure services is typically done over connections that use Transport Layer Security (TLS). For instance, all the connections to the Azure APIs require TLS connections. Equally, connections to endpoints in Azure storage can be restricted to work only over TLS encrypted connections. - -TLS is a complicated protocol and simply knowing that the connection is using TLS isn't sufficient to ensure security. For instance, TLS 1.0 is chronically insecure, and TLS 1.1 isn't much better. Even within the versions of TLS, there are various settings that can make the connections easier to decrypt. The best course of action is to check and see if the server connection is using up-to-date and well configured protocols. - -This check can be done by an external service such as SSL labs' SSL Server Test. A test run against a typical Azure endpoint, in this case a service bus endpoint, yields a near perfect score of A. - -Even services like Azure SQL databases use TLS encryption to keep data hidden. The interesting part about encrypting the data in transit using TLS is that it isn't possible, even for Microsoft, to listen in on the connection between computers running TLS. This should provide comfort for companies concerned that their data may be at risk from Microsoft proper or even a state actor with more resources than the standard attacker. - -![Figure 9-5 SSL labs report showing a score of A for a Service Bus endpoint.](./media/ssl-report.png) - -**Figure 9-5**. 
SSL labs report showing a score of A for a Service Bus endpoint. - -While this level of encryption isn't going to be sufficient for all time, it should inspire confidence that Azure TLS connections are quite secure. Azure will continue to evolve its security standards as encryption improves. It's nice to know that there's somebody watching the security standards and updating Azure as they improve. - -### At rest - -In any application, there are a number of places where data rests on the disk. The application code itself is loaded from some storage mechanism. Most applications also use some kind of a database such as SQL Server, Cosmos DB, or even the amazingly price-efficient Table Storage. These databases all use heavily encrypted storage to ensure that nobody other than the applications with proper permissions can read your data. Even the system operators can't read data that has been encrypted. So customers can remain confident their secret information remains secret. - -### Storage - -The underpinning of much of Azure is the Azure Storage engine. Virtual machine disks are mounted on top of Azure Storage. Azure Kubernetes Service runs on virtual machines that, themselves, are hosted on Azure Storage. Even serverless technologies, such as Azure Functions Apps and Azure Container Instances, run out of disk that is part of Azure Storage. - -If Azure Storage is well encrypted, then it provides for a foundation for most everything else to also be encrypted. Azure Storage [is encrypted](/azure/storage/common/storage-service-encryption) with [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard). This is a well-regarded encryption technology having been the subject of extensive academic scrutiny over the last 20 or so years. At present, there's no known practical attack that would allow someone without knowledge of the key to read data encrypted by AES. - -By default, the keys used for encrypting Azure Storage are managed by Microsoft. There are extensive protections in place to ensure to prevent malicious access to these keys. However, users with particular encryption requirements can also [provide their own storage keys](/azure/storage/common/storage-encryption-keys-powershell) that are managed in Azure Key Vault. These keys can be revoked at any time, which would effectively render the contents of the Storage account using them inaccessible. - -Virtual machines use encrypted storage, but it's possible to provide another layer of encryption by using technologies like BitLocker on Windows or DM-Crypt on Linux. These technologies mean that even if the disk image was leaked off of storage, it would remain near impossible to read it. - -### Azure SQL - -Databases hosted on Azure SQL use a technology called [Transparent Data Encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption) to ensure data remains encrypted. It's enabled by default on all newly created SQL databases, but must be enabled manually for legacy databases. TDE executes real-time encryption and decryption of not just the database, but also the backups and transaction logs. - -The encryption parameters are stored in the `master` database and, on startup, are read into memory for the remaining operations. This means that the `master` database must remain unencrypted. The actual key is managed by Microsoft. 
However, users with exacting security requirements may provide their own key in Key Vault in much the same way as is done for Azure Storage. The Key Vault provides services such as key rotation and revocation. - -The "Transparent" part of TDE comes from the fact that no client changes are needed to use an encrypted database. While this approach provides for good security, leaking the database password is enough for users to be able to decrypt the data. There's another approach that encrypts individual columns or tables in a database. [Always Encrypted](/azure/sql-database/sql-database-always-encrypted-azure-key-vault) ensures that at no point does the encrypted data appear in plain text inside the database. - -Setting up this tier of encryption requires running through a wizard in SQL Server Management Studio to select the sort of encryption and where in Key Vault to store the associated keys. - -![Figure 9-6 Selecting columns in a table to be encrypted using Always Encrypted](./media/always-encrypted.png) - -**Figure 9-6**. Selecting columns in a table to be encrypted using Always Encrypted. - -Client applications that read information from these encrypted columns need to make special allowances to read encrypted data. Connection strings need to be updated with `Column Encryption Setting=Enabled` and client credentials must be retrieved from the Key Vault. The SQL Server client must then be primed with the column encryption keys. Once that is done, the remaining actions use the standard interfaces to SQL Client. That is, tools like Dapper and Entity Framework, which are built on top of SQL Client, will continue to work without changes. Always Encrypted may not yet be available for every SQL Server driver in every language. - -The combination of TDE and Always Encrypted, both of which can be used with client-specific keys, ensures that even the most exacting encryption requirements are supported. - -### Cosmos DB - -Cosmos DB is the newest database provided by Microsoft in Azure. It has been built from the ground up with security and cryptography in mind. 256-bit AES encryption is standard for all Cosmos DB databases and can't be disabled. Coupled with the TLS 1.2 requirement for communication, the entire storage solution is encrypted. - -![Figure 9-7 The flow of data encryption within Cosmos DB](./media/cosmos-encryption.png) - -**Figure 9-7**. The flow of data encryption within Cosmos DB. - -While Cosmos DB doesn't provide for supplying customer encryption keys, there has been significant work done by the team to ensure it remains PCI-DSS compliant without that. Cosmos DB also doesn't yet support any sort of single-column encryption similar to Azure SQL's Always Encrypted. - -## Keeping secure - -Azure has all the tools necessary to release a highly secure product. However, a chain is only as strong as its weakest link. If the applications deployed on top of Azure aren't developed with a proper security mindset and good security audits, then they become the weak link in the chain. There are many great static analysis tools, encryption libraries, and security practices that can be used to ensure that the software installed on Azure is as secure as Azure itself. Examples include [static analysis tools](https://www.mend.io/sca/), [encryption libraries](https://www.libressl.org/), and [security practices](https://azure.microsoft.com/resources/videos/red-vs-blue-internal-security-penetration-testing-of-microsoft-azure/).
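One low-effort practice in this spirit is to have the build check dependencies for known vulnerabilities. For example, the .NET SDK can report vulnerable NuGet packages directly, a check that could be wired into a nightly build:

```console
# Report direct and transitive NuGet packages with known vulnerabilities
dotnet list package --vulnerable --include-transitive
```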
- ->[!div class="step-by-step"] ->[Previous](security.md) ->[Next](devops.md) diff --git a/docs/architecture/cloud-native/candidate-apps.md b/docs/architecture/cloud-native/candidate-apps.md deleted file mode 100644 index 5b7f0cd194dee..0000000000000 --- a/docs/architecture/cloud-native/candidate-apps.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -title: Candidate apps for cloud native -description: Learn which types of applications benefit from a cloud-native approach -author: robvet -ms.date: 12/14/2023 ---- - -# Candidate apps for cloud native - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Think about the apps your organization needs to build. Then, look at the existing apps in your portfolio. How many of them warrant a cloud-native architecture? All of them? Perhaps some? - -Applying cost/benefit analysis, there's a good chance some wouldn't support the effort. The cost of becoming cloud native would far exceed the business value of the application. - -What type of application might be a candidate for cloud native? - -- Strategic enterprise systems that need to constantly evolve business capabilities/features - -- An application that requires a high release velocity - with high confidence - -- A system where individual features must release *without* a full redeployment of the entire system - -- An application developed by teams with expertise in different technology stacks - -- An application with components that must scale independently - -Smaller, less impactful line-of-business applications might fare well with a simple monolithic architecture hosted in a Cloud PaaS environment. - -Then there are legacy systems. While we'd all like to build new applications, we're often responsible for modernizing legacy workloads that are critical to the business. - -## Modernizing legacy apps - -The free Microsoft e-book [Modernize existing .NET applications with Azure cloud and Windows Containers](https://dotnet.microsoft.com/download/thank-you/modernizing-existing-net-apps-ebook) provides guidance about migrating on-premises workloads into cloud. Figure 1-10 shows that there isn't a single, one-size-fits-all strategy for modernizing legacy applications. - -![Strategies for migrating legacy workloads](./media/strategies-for-migrating-legacy-workloads.png) - -**Figure 1-10**. Strategies for migrating legacy workloads - -Monolithic apps that are non-critical might benefit from a quick **lift-and-shift** migration. Here, the on-premises workload is rehosted to a cloud-based VM, without changes. This approach uses the [IaaS (Infrastructure as a Service) model](https://azure.microsoft.com/overview/what-is-iaas/). Azure includes several tools such as [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/), [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/), and [Azure Database Migration Service](https://azure.microsoft.com/campaigns/database-migration/) to help streamline the move. While this strategy can yield some cost savings, such applications typically weren't designed to unlock and leverage the benefits of cloud computing. - -Legacy apps that are critical to the business often benefit from an enhanced **Cloud Optimized** migration. This approach includes deployment optimizations that enable key cloud services - without changing the core architecture of the application. 
For example, you might [containerize](/virtualization/windowscontainers/about/) the application and deploy it to a container orchestrator, like [Azure Kubernetes Services](https://azure.microsoft.com/services/kubernetes-service/), discussed later in this book. Once in the cloud, the application can consume cloud backing services such as databases, message queues, monitoring, and distributed caching. - -Finally, monolithic apps that provide strategic enterprise functions might best benefit from a *Cloud-Native* approach, the subject of this book. This approach provides agility and velocity. But, it comes at a cost of replatforming, rearchitecting, and rewriting code. Over time, a legacy application could be decomposed into microservices, containerized, and ultimately _replatformed_ into a cloud-native architecture. - -If you and your team believe a cloud-native approach is appropriate, it behooves you to rationalize the decision with your organization. What exactly is the business problem that a cloud-native approach will solve? How would it align with business needs? - -- Rapid releases of features with increased confidence? - -- Fine-grained scalability - more efficient usage of resources? - -- Improved system resiliency? - -- Improved system performance? - -- More visibility into operations? - -- Blend development platforms and data stores to arrive at the best tool for the job? - -- Future-proof application investment? - -The right migration strategy depends on organizational priorities and the systems you're targeting. For many, it may be more cost effective to cloud-optimize a monolithic application or add coarse-grained services to an N-Tier app. In these cases, you can still make full use of cloud PaaS capabilities like the ones offered by Azure App Service. - -## Summary - -In this chapter, we introduced cloud-native computing. We provided a definition along with the key capabilities that drive a cloud-native application. We looked at the types of applications that might justify this investment and effort. - -With the introduction behind, we now dive into a much more detailed look at cloud native. 
- -### References - -- [Cloud Native Computing Foundation](https://www.cncf.io/) - -- [.NET Microservices: Architecture for Containerized .NET applications](https://dotnet.microsoft.com/download/thank-you/microservices-architecture-ebook) - -- [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/) - -- [Modernize existing .NET applications with Azure cloud and Windows Containers](https://dotnet.microsoft.com/download/thank-you/modernizing-existing-net-apps-ebook) - -- [Cloud Native Patterns by Cornelia Davis](https://www.manning.com/books/cloud-native-patterns) - -- [Cloud native applications: Ship faster, reduce risk, and grow your business](https://tanzu.vmware.com/cloud-native) - -- [Dapr documents](https://dapr.io/) - -- [Beyond the Twelve-Factor Application](https://content.pivotal.io/blog/beyond-the-twelve-factor-app) - -- [What is Infrastructure as Code](/devops/deliver/what-is-infrastructure-as-code) - -- [Uber Engineering's Micro Deploy: Deploying Daily with Confidence](https://www.uber.com/blog/micro-deploy-code/) - -- [How Netflix Deploys Code](https://www.infoq.com/news/2013/06/netflix/) - -- [Overload Control for Scaling WeChat Microservices](https://www.cs.columbia.edu/~ruigu/papers/socc18-final100.pdf) - ->[!div class="step-by-step"] ->[Previous](definition.md) ->[Next](introduce-eshoponcontainers-reference-app.md) diff --git a/docs/architecture/cloud-native/centralized-configuration.md b/docs/architecture/cloud-native/centralized-configuration.md deleted file mode 100644 index d54b20f647f7c..0000000000000 --- a/docs/architecture/cloud-native/centralized-configuration.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -title: Centralized configuration -description: Centralizing configuration for cloud-native applications using Azure App Configuration and Azure Key Vault. -ms.date: 04/06/2022 ---- - -# Centralized configuration - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Unlike a monolithic app in which everything runs within a single instance, a cloud-native application consists of independent services distributed across virtual machines, containers, and geographic regions. Managing configuration settings for dozens of interdependent services can be challenging. Duplicate copies of configuration settings across different locations are error prone and difficult to manage. Centralized configuration is a critical requirement for distributed cloud-native applications. - -As discussed in [Chapter 1](introduction.md), the Twelve-Factor App recommendations require strict separation between code and configuration. Configuration must be stored externally from the application and read in as needed. Storing configuration values as constants or literal values in code is a violation. The same configuration values are often used by many services in the same application. Additionally, we must support the same values across multiple environments, such as dev, testing, and production. The best practice is to store them in a centralized configuration store. - -The Azure cloud presents several great options. - -## Azure App Configuration - -[Azure App Configuration](/azure/azure-app-configuration/overview) is a fully managed Azure service that stores non-secret configuration settings in a secure, centralized location. Stored values can be shared among multiple services and applications.
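As a hedged sketch of how a shared setting might land in a store, the Azure CLI can create or update key-values; the store, key, and label names below are made up for illustration:

```console
# Create or update a shared key-value, tagged with a label (placeholder names)
az appconfig kv set --name my-app-config-store --key "Catalog:PageSize" --value "10" --label "Production" --yes

# Read the value back
az appconfig kv show --name my-app-config-store --key "Catalog:PageSize" --label "Production"
```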
- -The service is simple to use and provides several benefits: - -- Flexible key/value representations and mappings -- Tagging with Azure labels -- Dedicated UI for management -- Encryption of sensitive information -- Querying and batch retrieval - -Azure App Configuration maintains changes made to key-value settings for seven days. The point-in-time snapshot feature enables you to reconstruct the history of a setting and even rollback for a failed deployment. - -App Configuration automatically caches each setting to avoid excessive calls to the configuration store. The refresh operation waits until the cached value of a setting expires to update that setting, even when its value changes in the configuration store. The default cache expiration time is 30 seconds. You can override the expiration time. - -App Configuration encrypts all configuration values in transit and at rest. Key names and labels are used as indexes for retrieving configuration data and aren't encrypted. - -Although App Configuration provides hardened security, Azure Key Vault is still the best place for storing application secrets. Key Vault provides hardware-level encryption, granular access policies, and management operations such as certificate rotation. You can create App Configuration values that reference secrets stored in a Key Vault. - -## Azure Key Vault - -Key Vault is a managed service for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. A vault is a logical group of secrets. - -Key Vault greatly reduces the chances that secrets may be accidentally leaked. When using Key Vault, application developers no longer need to store security information in their application. This practice eliminates the need to store this information inside your code. For example, an application may need to connect to a database. Instead of storing the connection string in the app's code, you can store it securely in Key Vault. - -Your applications can securely access the information they need by using URIs. These URIs allow the applications to retrieve specific versions of a secret. There's no need to write custom code to protect any of the secret information stored in Key Vault. - -Access to Key Vault requires proper caller authentication and authorization. Typically, each cloud-native microservice uses a ClientId/ClientSecret combination. It's important to keep these credentials outside source control. A best practice is to set them in the application's environment. Direct access to Key Vault from AKS can be achieved using [Key Vault FlexVolume](https://github.com/Azure/kubernetes-keyvault-flexvol). - -## Configuration in eShop - -The eShopOnContainers application includes local application settings files with each microservice. These files are checked into source control, but don't include production secrets such as connection strings or API keys. In production, individual settings may be overwritten with per-service environment variables. Injecting secrets in environment variables is a common practice for hosted applications, but doesn't provide a central configuration store. To support centralized management of configuration settings, each microservice includes a setting to toggle between its use of local settings or Azure Key Vault settings. 
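For example, a connection string that a microservice would otherwise read from its local settings file could instead be stored as a Key Vault secret; the vault name, secret name, and value below are placeholders:

```console
# Store a connection string as a Key Vault secret (placeholder names and value)
az keyvault secret set --vault-name eshop-keyvault --name "CatalogConnectionString" --value "Server=tcp:..."

# Confirm the secret can be read back
az keyvault secret show --vault-name eshop-keyvault --name "CatalogConnectionString" --query "value"
```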
- -## References - -- [The eShopOnContainers Architecture](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Architecture) -- [Orchestrating microservices and multi-container applications for high scalability and availability](../microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md) -- [Azure API Management](/azure/api-management/api-management-key-concepts) -- [Azure SQL Database Overview](/azure/sql-database/sql-database-technical-overview) -- [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) -- [Azure Cosmos DB's API for MongoDB](/azure/cosmos-db/mongodb-introduction) -- [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) -- [Azure Monitor overview](/azure/azure-monitor/overview) -- [eShopOnContainers: Create Kubernetes cluster in AKS](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Deploy-to-Azure-Kubernetes-Service-(AKS)#create-kubernetes-cluster-in-aks) -- [eShopOnContainers: Azure Dev Spaces](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Azure-Dev-Spaces) -- [Azure Dev Spaces](/azure/dev-spaces/about) - ->[!div class="step-by-step"] ->[Previous](deploy-eshoponcontainers-azure.md) ->[Next](scale-applications.md) diff --git a/docs/architecture/cloud-native/combine-containers-serverless-approaches.md b/docs/architecture/cloud-native/combine-containers-serverless-approaches.md deleted file mode 100644 index 65b815398ebdb..0000000000000 --- a/docs/architecture/cloud-native/combine-containers-serverless-approaches.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Combining containers and serverless approaches for cloud-native services -description: Combining containers and Kubernetes with serverless approaches -ms.date: 04/06/2022 ---- - -# Combining containers and serverless approaches - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Cloud-native applications typically implement services leveraging containers and orchestration. There are often opportunities to expose some of the application's services as Azure Functions. However, with a cloud-native app deployed to Kubernetes, it would be nice to leverage Azure Functions within this same toolset. Fortunately, you can wrap Azure Functions inside Docker containers and deploy them using the same processes and tools as the rest of your Kubernetes-based app. - -## When does it make sense to use containers with serverless? - -Your Azure Function has no knowledge of the platform on which it's deployed. For some scenarios, you may have specific requirements and need to customize the environment on which your function code will run. You'll need a custom image that supports dependencies or a configuration not supported by the default image. In these cases, it makes sense to deploy your function in a custom Docker container. - -## When should you avoid using containers with Azure Functions? - -If you want to use consumption billing, you can't run your function in a container. What's more, if you deploy your function to a Kubernetes cluster, you'll no longer benefit from the built-in scaling provided by Azure Functions. You'll need to use Kubernetes' scaling features, described earlier in this chapter. 
- -## How to combine serverless and Docker containers - -To wrap an Azure Function in a Docker container, install the [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools) and then run the following command: - -```console -func init ProjectName --worker-runtime dotnet --docker -``` - -When the project is created, it will include a Dockerfile and the worker runtime configured to `dotnet`. Now, you can create and test your function locally. Build and run it using the `docker build` and `docker run` commands. For detailed steps to get started building Azure Functions with Docker support, see the [Create a function on Linux using a custom image](/azure/azure-functions/functions-create-function-linux-custom-image) tutorial. - -## How to combine serverless and Kubernetes with KEDA - -In this chapter, you've seen that the Azure Functions' platform automatically scales out to meet demand. When deploying containerized functions to AKS, however, you lose the built-in scaling functionality. To the rescue comes [Kubernetes-based Event Driven (KEDA)](/azure/azure-functions/functions-kubernetes-keda). It enables fine-grained autoscaling for `event-driven Kubernetes workloads`, including containerized functions. - -KEDA provides event-driven scaling functionality to the Functions' runtime in a Docker container. KEDA can scale from zero instances (when no events are occurring) out to `n instances`, based on load. It enables autoscaling by exposing custom metrics to the Kubernetes autoscaler (Horizontal Pod Autoscaler). Using Functions containers with KEDA makes it possible to replicate serverless function capabilities in any Kubernetes cluster. - -It's worth noting that the KEDA project is now managed by the Cloud Native Computing Foundation (CNCF). - ->[!div class="step-by-step"] ->[Previous](leverage-serverless-functions.md) ->[Next](deploy-containers-azure.md) diff --git a/docs/architecture/cloud-native/communication-patterns.md b/docs/architecture/cloud-native/communication-patterns.md deleted file mode 100644 index 9df9659de68aa..0000000000000 --- a/docs/architecture/cloud-native/communication-patterns.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Cloud-native communication patterns -description: Learn about key service communication concerns in cloud-native applications -author: robvet -ms.date: 04/06/2022 ---- - -# Cloud-native communication patterns - -[!INCLUDE [download-alert](includes/download-alert.md)] - -When constructing a cloud-native system, communication becomes a significant design decision. How does a front-end client application communicate with a back-end microservice? How do back-end microservices communicate with each other? What are the principles, patterns, and best practices to consider when implementing communication in cloud-native applications? - -## Communication considerations - -In a monolithic application, communication is straightforward. The code modules execute together in the same executable space (process) on a server. This approach can have performance advantages as everything runs together in shared memory, but results in tightly coupled code that becomes difficult to maintain, evolve, and scale. - -Cloud-native systems implement a microservice-based architecture with many small, independent microservices. Each microservice executes in a separate process and typically runs inside a container that is deployed to a *cluster*. - -A cluster groups a pool of virtual machines together to form a highly available environment. 
They're managed with an orchestration tool, which is responsible for deploying and managing the containerized microservices. Figure 4-1 shows a [Kubernetes](https://kubernetes.io) cluster deployed into the Azure cloud with the fully managed [Azure Kubernetes Services](/azure/aks/intro-kubernetes). - -![A Kubernetes cluster in Azure](./media/kubernetes-cluster-in-azure.png) - -**Figure 4-1**. A Kubernetes cluster in Azure - -Across the cluster, [microservices](/azure/architecture/microservices/) communicate with each other through APIs and [messaging technologies](/azure/service-bus-messaging/compare-messaging-services). - -While they provide many benefits, microservices are no free lunch. Local in-process method calls between components are now replaced with network calls. Each microservice must communicate over a network protocol, which adds complexity to your system: - -- Network congestion, latency, and transient faults are a constant concern. -- Resiliency (that is, retrying failed requests) is essential. -- Some calls must be idempotent so as to keep state consistent. -- Each microservice must authenticate and authorize calls. -- Each message must be serialized and then deserialized - which can be expensive. -- Message encryption/decryption becomes important. - -The book [.NET Microservices: Architecture for Containerized .NET Applications](https://dotnet.microsoft.com/download/thank-you/microservices-architecture-ebook), available for free from Microsoft, provides in-depth coverage of communication patterns for microservice applications. In this chapter, we provide a high-level overview of these patterns along with implementation options available in the Azure cloud. - -In this chapter, we'll first address communication between front-end applications and back-end microservices. We'll then look at how back-end microservices communicate with each other. We'll explore the up-and-coming gRPC communication technology. Finally, we'll look at new and innovative communication patterns that use service mesh technology. We'll also see how the Azure cloud provides different kinds of *backing services* to support cloud-native communication. - ->[!div class="step-by-step"] ->[Previous](other-deployment-options.md) ->[Next](front-end-communication.md) diff --git a/docs/architecture/cloud-native/definition.md b/docs/architecture/cloud-native/definition.md deleted file mode 100644 index 0917795ff0f87..0000000000000 --- a/docs/architecture/cloud-native/definition.md +++ /dev/null @@ -1,357 +0,0 @@ ---- -title: What is Cloud Native? -description: Learn about the foundational pillars that provide the bedrock for cloud-native systems -author: robvet -ms.date: 12/14/2023 ---- - -# What is Cloud Native? - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Stop what you're doing and ask your colleagues to define the term "Cloud Native". There's a good chance you'll get several different answers. - -Let's start with a simple definition: - -> *Cloud-native architecture and technologies are an approach to designing, constructing, and operating workloads that are built in the cloud and take full advantage of the cloud computing model.* - -The [Cloud Native Computing Foundation](https://www.cncf.io/) provides the [official definition](https://github.com/cncf/toc/blob/main/DEFINITION.md): - -> *Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.
Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.* - -> *These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.* - -Cloud native is about *speed* and *agility*. Business systems are evolving from enabling business capabilities to weapons of strategic transformation that accelerate business velocity and growth. It's imperative to get new ideas to market immediately. - -At the same time, business systems have also become increasingly complex with users demanding more. They expect rapid responsiveness, innovative features, and zero downtime. Performance problems, recurring errors, and the inability to move fast are no longer acceptable. Your users will visit your competitor. Cloud-native systems are designed to embrace rapid change, large scale, and resilience. - -Here are some companies who have implemented cloud-native techniques. Think about the speed, agility, and scalability they've achieved. - -| Company | Experience | -| :-------- | :-------- | -| [Netflix](https://www.infoq.com/news/2013/06/netflix/) | Has 600+ services in production. Deploys 100 times per day. | -| [Uber](https://www.uber.com/blog/micro-deploy-code/) | Has 1,000+ services in production. Deploys several thousand times each week. | -| [WeChat](https://www.cs.columbia.edu/~ruigu/papers/socc18-final100.pdf) | Has 3,000+ services in production. Deploys 1,000 times a day. | - -As you can see, Netflix, Uber, and, WeChat expose cloud-native systems that consist of many independent services. This architectural style enables them to rapidly respond to market conditions. They instantaneously update small areas of a live, complex application, without a full redeployment. They individually scale services as needed. - -## The pillars of cloud native - -The speed and agility of cloud native derive from many factors. Foremost is *cloud infrastructure*. But there's more: Five other foundational pillars shown in Figure 1-3 also provide the bedrock for cloud-native systems. - -![Cloud-native foundational pillars](./media/cloud-native-foundational-pillars.png) - -**Figure 1-3**. Cloud-native foundational pillars - -Let's take some time to better understand the significance of each pillar. - -## The cloud - -Cloud-native systems take full advantage of the cloud service model. - -Designed to thrive in a dynamic, virtualized cloud environment, these systems make extensive use of [Platform as a Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/) compute infrastructure and managed services. They treat the underlying infrastructure as *disposable* - provisioned in minutes and resized, scaled, or destroyed on demand – via automation. - -Consider the difference between how we treat pets and commodities. In a traditional data center, servers are treated as pets: a physical machine, given a meaningful name, and cared for. You scale by adding more resources to the same machine (scaling up). If the server becomes sick, you nurse it back to health. Should the server become unavailable, everyone notices. - -The commodities service model is different. You provision each instance as a virtual machine or container. They're identical and assigned a system identifier such as Service-01, Service-02, and so on. You scale by creating more instances (scaling out). 
Nobody notices when an instance becomes unavailable. - -The commodities model embraces immutable infrastructure. Servers aren't repaired or modified. If one fails or requires updating, it's destroyed and a new one is provisioned – all done via automation. - -Cloud-native systems embrace the commodities service model. They continue to run as the infrastructure scales in or out with no regard to the machines upon which they're running. - -The Azure cloud platform supports this type of highly elastic infrastructure with automatic scaling, self-healing, and monitoring capabilities. - -## Modern design - -How would you design a cloud-native app? What would your architecture look like? To what principles, patterns, and best practices would you adhere? What infrastructure and operational concerns would be important? - -### The Twelve-Factor Application - -A widely accepted methodology for constructing cloud-based applications is the [Twelve-Factor Application](https://12factor.net/). It describes a set of principles and practices that developers follow to construct applications optimized for modern cloud environments. Special attention is given to portability across environments and declarative automation. - -While applicable to any web-based application, many practitioners consider Twelve-Factor a solid foundation for building cloud-native apps. Systems built upon these principles can deploy and scale rapidly and add features to react quickly to market changes. - -The following table highlights the Twelve-Factor methodology: - -| Factor | Explanation | -| :-------- | :-------- | -| 1 - Code Base | A single code base for each microservice, stored in its own repository. Tracked with version control, it can deploy to multiple environments (QA, Staging, Production). | -| 2 - Dependencies | Each microservice isolates and packages its own dependencies, embracing changes without impacting the entire system. | -| 3 - Configurations | Configuration information is moved out of the microservice and externalized through a configuration management tool outside of the code. The same deployment can propagate across environments with the correct configuration applied. | -| 4 - Backing Services | Ancillary resources (data stores, caches, message brokers) should be exposed via an addressable URL. Doing so decouples the resource from the application, enabling it to be interchangeable. | -| 5 - Build, Release, Run | Each release must enforce a strict separation across the build, release, and run stages. Each should be tagged with a unique ID and support the ability to roll back. Modern CI/CD systems help fulfill this principle. | -| 6 - Processes | Each microservice should execute in its own process, isolated from other running services. Externalize required state to a backing service such as a distributed cache or data store. | -| 7 - Port Binding | Each microservice should be self-contained with its interfaces and functionality exposed on its own port. Doing so provides isolation from other microservices. | -| 8 - Concurrency | When capacity needs to increase, scale out services horizontally across multiple identical processes (copies) as opposed to scaling-up a single large instance on the most powerful machine available. Develop the application to be concurrent making scaling out in cloud environments seamless. | -| 9 - Disposability | Service instances should be disposable. Favor fast startup to increase scalability opportunities and graceful shutdowns to leave the system in a correct state. 
Docker containers along with an orchestrator inherently satisfy this requirement. | -| 10 - Dev/Prod Parity | Keep environments across the application lifecycle as similar as possible, avoiding costly shortcuts. Here, the adoption of containers can greatly contribute by promoting the same execution environment. | -| 11 - Logging | Treat logs generated by microservices as event streams. Process them with an event aggregator. Propagate log data to data-mining/log management tools like Azure Monitor or Splunk and eventually to long-term archival. | -| 12 - Admin Processes | Run administrative/management tasks, such as data cleanup or computing analytics, as one-off processes. Use independent tools to invoke these tasks from the production environment, but separately from the application. | - -In the book, [Beyond the Twelve-Factor App](https://content.pivotal.io/blog/beyond-the-twelve-factor-app), author Kevin Hoffman details each of the original 12 factors (written in 2011). Additionally, he discusses three extra factors that reflect today's modern cloud application design. - -| New Factor | Explanation | -| :-------- | :-------- | -| 13 - API First | Make everything a service. Assume your code will be consumed by a front-end client, gateway, or another service. | -| 14 - Telemetry | On a workstation, you have deep visibility into your application and its behavior. In the cloud, you don't. Make sure your design includes the collection of monitoring, domain-specific, and health/system data. | -| 15 - Authentication/ Authorization | Implement identity from the start. Consider [RBAC (role-based access control)](/azure/role-based-access-control/overview) features available in public clouds. | - -We'll refer to many of the 12+ factors in this chapter and throughout the book. - -### Azure Well-Architected Framework - -Designing and deploying cloud-based workloads can be challenging, especially when implementing cloud-native architecture. Microsoft provides industry standard best practices to help you and your team deliver robust cloud solutions. - -The [Microsoft Well-Architected Framework](/azure/architecture/framework/) provides a set of guiding tenets that can be used to improve the quality of a cloud-native workload. The framework consists of five pillars of architecture excellence: - -| Tenets | Description | -| :-------- | :-------- | -| [Cost management](/azure/architecture/framework/#cost-optimization) | Focus on generating incremental value early. Apply *Build-Measure-Learn* principles to accelerate time to market while avoiding capital-intensive solutions. Using a pay-as-you-go strategy, invest as you scale out, rather than delivering a large investment up front. | -| [Operational excellence](/azure/architecture/framework/#operational-excellence) | Automate the environment and operations to increase speed and reduce human error. Roll problem updates back or forward quickly. Implement monitoring and diagnostics from the start. | -| [Performance efficiency](/azure/architecture/framework/#performance-efficiency) | Efficiently meet demands placed on your workloads. Favor horizontal scaling (scaling out) and design it into your systems. Continually conduct performance and load testing to identify potential bottlenecks. | -| [Reliability](/azure/architecture/framework/#reliability) | Build workloads that are both resilient and available. Resiliency enables workloads to recover from failures and continue functioning. Availability ensures users access to your workload at all times. 
Design applications to expect failures and recover from them. | -| [Security](/azure/architecture/framework/#security) | Implement security across the entire lifecycle of an application, from design and implementation to deployment and operations. Pay close attention to identity management, infrastructure access, application security, and data sovereignty and encryption. | - -To get started, Microsoft provides a set of [online assessments](/assessments/?mode=pre-assessment&session=local) to help you assess your current cloud workloads against the five well-architected pillars. - -## Microservices - -Cloud-native systems embrace microservices, a popular architectural style for constructing modern applications. - -Built as a distributed set of small, independent services that interact through a shared fabric, microservices share the following characteristics: - -- Each implements a specific business capability within a larger domain context. - -- Each is developed autonomously and can be deployed independently. - -- Each is self-contained encapsulating its own data storage technology, dependencies, and programming platform. - -- Each runs in its own process and communicates with others using standard communication protocols such as HTTP/HTTPS, gRPC, WebSockets, or [AMQP](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol). - -- They compose together to form an application. - -Figure 1-4 contrasts a monolithic application approach with a microservices approach. Note how the monolith is composed of a layered architecture, which executes in a single process. It typically consumes a relational database. The microservice approach, however, segregates functionality into independent services, each with its own logic, state, and data. Each microservice hosts its own datastore. - -![Monolithic deployment versus microservices](./media/monolithic-vs-microservices.png) - -**Figure 1-4.** Monolithic versus microservices architecture - -Note how microservices promote the **Processes** principle from the [Twelve-Factor Application](https://12factor.net/), discussed earlier in the chapter. - -> *Factor \#6 specifies "Each microservice should execute in its own process, isolated from other running services."* - -### Why microservices? - -Microservices provide agility. - -Earlier in the chapter, we compared an eCommerce application built as a monolith to that with microservices. In the example, we saw some clear benefits: - -- Each microservice has an autonomous lifecycle and can evolve independently and deploy frequently. You don't have to wait for a quarterly release to deploy a new feature or update. You can update a small area of a live application with less risk of disrupting the entire system. The update can be made without a full redeployment of the application. - -- Each microservice can scale independently. Instead of scaling the entire application as a single unit, you scale out only those services that require more processing power to meet desired performance levels and service-level agreements. Fine-grained scaling provides for greater control of your system and helps reduce overall costs as you scale portions of your system, not everything. - -An excellent reference guide for understanding microservices is [.NET Microservices: Architecture for Containerized .NET Applications](https://dotnet.microsoft.com/download/thank-you/microservices-architecture-ebook). The book deep dives into microservices design and architecture. 
It's a companion for a [full-stack microservice reference architecture](https://github.com/dotnet-architecture/eShopOnContainers) available as a free download from Microsoft. - -### Developing microservices - -Microservices can be created on any modern development platform. - -The Microsoft .NET platform is an excellent choice. Free and open source, it has many built-in features that simplify microservice development. .NET is cross-platform. Applications can be built and run on Windows, macOS, and most flavors of Linux. - -.NET is highly performant and has scored well in comparison to Node.js and other competing platforms. Interestingly, [TechEmpower](https://www.techempower.com/) conducted an extensive set of [performance benchmarks](https://www.techempower.com/benchmarks/#section=data-r17&hw=ph&test=plaintext) across many web application platforms and frameworks. .NET scored in the top 10 - well above Node.js and other competing platforms. - -[.NET](https://github.com/dotnet/core) is maintained by Microsoft and the .NET community on GitHub. - -### Microservice challenges - -While distributed cloud-native microservices can provide immense agility and speed, they present many challenges: - -#### *Communication* - -How will front-end client applications communicate with back-end core microservices? Will you allow direct communication? Or, might you abstract the back-end microservices with a gateway facade that provides flexibility, control, and security? - -How will back-end core microservices communicate with each other? Will you allow direct HTTP calls that can increase coupling and impact performance and agility? Or might you consider decoupled messaging with queue and topic technologies? - -Communication is covered in the [Cloud-native communication patterns](./communication-patterns.md) chapter. - -#### *Resiliency* - -A microservices architecture moves your system from in-process to out-of-process network communication. In a distributed architecture, what happens when Service B isn't responding to a network call from Service A? Or, what happens when Service C becomes temporarily unavailable and other services calling it become blocked? - -Resiliency is covered in the [Cloud-native resiliency](./resiliency.md) chapter. - -#### *Distributed Data* - -By design, each microservice encapsulates its own data, exposing operations via its public interface. How, then, do you query data or implement a transaction across multiple services? - -Distributed data is covered in the [Cloud-native data patterns](./distributed-data.md) chapter. - -#### *Secrets* - -How will your microservices securely store and manage secrets and sensitive configuration data? - -Secrets are covered in detail in [Cloud-native security](./security.md). - -### Manage Complexity with Dapr - -[Dapr](https://dapr.io/) is a distributed, open-source application runtime. Through an architecture of pluggable components, it dramatically simplifies the *plumbing* behind distributed applications. It provides a **dynamic glue** that binds your application with pre-built infrastructure capabilities and components from the Dapr runtime. Figure 1-5 shows Dapr from 20,000 feet. - -![Dapr at 20,000 feet](./media/dapr-high-level.png) -**Figure 1-5**. Dapr at 20,000 feet. - -In the top row of the figure, note how Dapr provides [language-specific SDKs](https://docs.dapr.io/developing-applications/sdks/) for popular development platforms. Dapr v1 includes support for .NET, Go, Node.js, Python, PHP, Java, and JavaScript.
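The building blocks and components shown in Figure 1-5 are described in the paragraphs that follow. As a concrete preview, the following is a minimal sketch of a Dapr *component* definition - the YAML that tells the Dapr runtime which infrastructure resource backs a building block (here, a Redis-backed state store). The component name and Redis settings are illustrative and not taken from any reference application:

```yaml
# Illustrative Dapr component: binds the state building block to a Redis instance.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore          # the name services use when they call the state API
spec:
  type: state.redis         # pluggable - swap for another state store without code changes
  version: v1
  metadata:
    - name: redisHost
      value: localhost:6379
    - name: redisPassword
      value: ""
```

Changing the `type` (and its metadata) is all it takes to point the same application at a different store, such as a managed Azure service.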
- -While language-specific SDKs enhance the developer experience, Dapr is platform agnostic. Under the hood, Dapr's programming model exposes capabilities through standard HTTP/gRPC communication protocols. Any programming platform can call Dapr via its native HTTP and gRPC APIs. - -The blue boxes across the center of the figure represent the Dapr building blocks. Each exposes pre-built plumbing code for a distributed application capability that your application can consume. - -The components row represents a large set of pre-defined infrastructure components that your application can consume. Think of components as infrastructure code you don't have to write. - -The bottom row highlights the portability of Dapr and the diverse environments across which it can run. - -Looking ahead, Dapr has the potential to have a profound impact on cloud-native application development. - -## Containers - -It's natural to hear the term *container* mentioned in any *cloud native* conversation. In the book, [Cloud Native Patterns](https://www.manning.com/books/cloud-native-patterns), author Cornelia Davis observes that, "Containers are a great enabler of cloud-native software." The Cloud Native Computing Foundation places microservice containerization as the first step in their [Cloud-Native Trail Map](https://raw.githubusercontent.com/cncf/trailmap/master/CNCF_TrailMap_latest.png) - guidance for enterprises beginning their cloud-native journey. - -Containerizing a microservice is simple and straightforward. The code, its dependencies, and runtime are packaged into a binary called a [container image](https://docs.docker.com/glossary/?term=image). Images are stored in a container registry, which acts as a repository or library for images. A registry can be located on your development computer, in your data center, or in a public cloud. Docker itself maintains a public registry via [Docker Hub](https://hub.docker.com/). The Azure cloud features a private [container registry](https://azure.microsoft.com/services/container-registry/) to store container images close to the cloud applications that will run them. - -When an application starts or scales, you transform the container image into a running container instance. The instance runs on any computer that has a [container runtime](https://kubernetes.io/docs/setup/production-environment/container-runtimes/) engine installed. You can have as many instances of the containerized service as needed. - -Figure 1-6 shows three different microservices, each in its own container, all running on a single host. - -![Multiple containers running on a container host](./media/hosting-mulitple-containers.png) - -**Figure 1-6**. Multiple containers running on a container host - -Note how each container maintains its own set of dependencies and runtime, which can be different from one another. Here, we see different versions of the Product microservice running on the same host. Each container shares a slice of the underlying host operating system, memory, and processor, but is isolated from one another. - -Note how well the container model embraces the **Dependencies** principle from the [Twelve-Factor Application](https://12factor.net/). - -> *Factor \#2 specifies that "Each microservice isolates and packages its own dependencies, embracing changes without impacting the entire system."* - -Containers support both Linux and Windows workloads. The Azure cloud openly embraces both. 
Interestingly, it's Linux, not Windows Server, that has become the more popular operating system in Azure. - -While several container vendors exist, [Docker](https://www.docker.com/) has captured the lion's share of the market. The company has been driving the software container movement. It has become the de facto standard for packaging, deploying, and running cloud-native applications. - -### Why containers? - -Containers provide portability and guarantee consistency across environments. By encapsulating everything into a single package, you *isolate* the microservice and its dependencies from the underlying infrastructure. - -You can deploy the container in any environment that hosts the Docker runtime engine. Containerized workloads also eliminate the expense of pre-configuring each environment with frameworks, software libraries, and runtime engines. - -By sharing the underlying operating system and host resources, a container has a much smaller footprint than a full virtual machine. The smaller size increases the *density*, or number of microservices, that a given host can run at one time. - -### Container orchestration - -While tools such as Docker create images and run containers, you also need tools to manage them. Container management is done with a special software program called a **container orchestrator**. When operating at scale with many independent running containers, orchestration is essential. - -Figure 1-7 shows management tasks that container orchestrators automate. - -![What container orchestrators do](./media/what-container-orchestrators-do.png) - -**Figure 1-7**. What container orchestrators do - -The following table describes common orchestration tasks. - -| Tasks | Explanation | -| :-------- | :-------- | -| Scheduling | Automatically provision container instances.| -| Affinity/anti-affinity | Provision containers nearby or far apart from each other, helping availability and performance. | -| Health monitoring | Automatically detect and correct failures.| -| Failover | Automatically reprovision a failed instance to a healthy machine.| -| Scaling | Automatically add or remove a container instance to meet demand.| -| Networking | Manage a networking overlay for container communication.| -| Service Discovery | Enable containers to locate each other.| -| Rolling Upgrades | Coordinate incremental upgrades with zero downtime deployment. Automatically roll back problematic changes.| - -Note how container orchestrators embrace the **Disposability** and **Concurrency** principles from the [Twelve-Factor Application](https://12factor.net/). - -> *Factor \#9 specifies that "Service instances should be disposable, favoring fast startups to increase scalability opportunities and graceful shutdowns to leave the system in a correct state."* Docker containers along with an orchestrator inherently satisfy this requirement." - -> *Factor \#8 specifies that "Services scale out across a large number of small identical processes (copies) as opposed to scaling-up a single large instance on the most powerful machine available."* - -While several container orchestrators exist, [Kubernetes](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/) has become the de facto standard for the cloud-native world. It's a portable, extensible, open-source platform for managing containerized workloads. - -You could host your own instance of Kubernetes, but then you'd be responsible for provisioning and managing its resources - which can be complex. 
The Azure cloud features Kubernetes as a managed service. Both [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) and [Azure Red Hat OpenShift (ARO)](https://azure.microsoft.com/services/openshift/) enable you to fully leverage the features and power of Kubernetes as a managed service, without having to install and maintain it. - -Container orchestration is covered in detail in [Scaling Cloud-Native Applications](./scale-applications.md). - -## Backing services - -Cloud-native systems depend upon many different ancillary resources, such as data stores, message brokers, monitoring, and identity services. These services are known as [backing services](https://12factor.net/backing-services). - - Figure 1-8 shows many common backing services that cloud-native systems consume. - -![Common backing services](./media/common-backing-services.png) - -**Figure 1-8**. Common backing services - -You could host your own backing services, but then you'd be responsible for licensing, provisioning, and managing those resources. - -Cloud providers offer a rich assortment of *managed backing services.* Instead of owning the service, you simply consume it. The cloud provider operates the resource at scale and bears the responsibility for performance, security, and maintenance. Monitoring, redundancy, and availability are built into the service. Providers guarantee service level performance and fully support their managed services - open a ticket and they fix your issue. - -Cloud-native systems favor managed backing services from cloud vendors. The savings in time and labor can be significant. The operational risk of hosting your own and experiencing trouble can get expensive fast. - -A best practice is to treat a backing service as an **attached resource**, dynamically bound to a microservice with configuration information (a URL and credentials) stored in an external configuration. This guidance is spelled out in the [Twelve-Factor Application](https://12factor.net/), discussed earlier in the chapter. - ->*Factor \#4* specifies that backing services "should be exposed via an addressable URL. Doing so decouples the resource from the application, enabling it to be interchangeable." - ->*Factor \#3* specifies that "Configuration information is moved out of the microservice and externalized through a configuration management tool outside of the code." - -With this pattern, a backing service can be attached and detached without code changes. You might promote a microservice from QA to a staging environment. You update the microservice configuration to point to the backing services in staging and inject the settings into your container through an environment variable. - -Cloud vendors provide APIs for you to communicate with their proprietary backing services. These libraries encapsulate the proprietary plumbing and complexity. However, communicating directly with these APIs will tightly couple your code to that specific backing service. It's a widely accepted practice to insulate the implementation details of the vendor API. Introduce an intermediation layer, or intermediate API, exposing generic operations to your service code and wrap the vendor code inside it. This loose coupling enables you to swap out one backing service for another or move your code to a different cloud environment without having to make changes to the mainline service code. 
Dapr, discussed earlier, follows this model with its set of [prebuilt building blocks](https://docs.dapr.io/developing-applications/building-blocks/). - -On a final thought, backing services also promote the **Statelessness** principle from the [Twelve-Factor Application](https://12factor.net/), discussed earlier in the chapter. - ->*Factor \#6* specifies that, "Each microservice should execute in its own process, isolated from other running services. Externalize required state to a backing service such as a distributed cache or data store." - -Backing services are discussed in [Cloud-native data patterns](./distributed-data.md) and [Cloud-native communication patterns](./communication-patterns.md). - -## Automation - -As you've seen, cloud-native systems embrace microservices, containers, and modern system design to achieve speed and agility. But, that's only part of the story. How do you provision the cloud environments upon which these systems run? How do you rapidly deploy app features and updates? How do you round out the full picture? - -Enter the widely accepted practice of [Infrastructure as Code](/devops/deliver/what-is-infrastructure-as-code), or IaC. - -With IaC, you automate platform provisioning and application deployment. You essentially apply software engineering practices such as testing and versioning to your DevOps practices. Your infrastructure and deployments are automated, consistent, and repeatable. - -### Automating infrastructure - -Tools like [Azure Resource Manager](/azure/azure-resource-manager/management/overview), [Azure Bicep](/azure/azure-resource-manager/bicep/overview), [Terraform](https://www.terraform.io/) from HashiCorp, and the [Azure CLI](/cli/azure/), enable you to declaratively script the cloud infrastructure you require. Resource names, locations, capacities, and secrets are parameterized and dynamic. The script is versioned and checked into source control as an artifact of your project. You invoke the script to provision a consistent and repeatable infrastructure across system environments, such as QA, staging, and production. - -Under the hood, IaC is idempotent, meaning that you can run the same script over and over without side effects. If the team needs to make a change, they edit and rerun the script. Only the updated resources are affected. - -In the article, [What is Infrastructure as Code](/devops/deliver/what-is-infrastructure-as-code), Author Sam Guckenheimer describes how, "Teams who implement IaC can deliver stable environments rapidly and at scale. They avoid manual configuration of environments and enforce consistency by representing the desired state of their environments via code. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies. DevOps teams can work together with a unified set of practices and tools to deliver applications and their supporting infrastructure rapidly, reliably, and at scale." - -### Automating deployments - -The [Twelve-Factor Application](https://12factor.net/), discussed earlier, calls for separate steps when transforming completed code into a running application. - -> *Factor \#5* specifies that "Each release must enforce a strict separation across the build, release and run stages. Each should be tagged with a unique ID and support the ability to roll back." - -Modern CI/CD systems help fulfill this principle. 
They provide separate build and delivery steps that help ensure consistent and quality code that's readily available to users. - -Figure 1-9 shows the separation across the deployment process. - -![Deployments Steps in CI/CD Pipeline](./media/build-release-run-pipeline.png) - -**Figure 1-9**. Deployment steps in a CI/CD Pipeline - -In the previous figure, pay special attention to separation of tasks: - -1. The developer constructs a feature in their development environment, iterating through what is called the "inner loop" of code, run, and debug. -2. When complete, that code is *pushed* into a code repository, such as GitHub, Azure DevOps, or BitBucket. -3. The push triggers a build stage that transforms the code into a binary artifact. The work is implemented with a [Continuous Integration (CI)](https://martinfowler.com/articles/continuousIntegration.html) pipeline. It automatically builds, tests, and packages the application. -4. The release stage picks up the binary artifact, applies external application and environment configuration information, and produces an immutable release. The release is deployed to a specified environment. The work is implemented with a [Continuous Delivery (CD)](https://martinfowler.com/bliki/ContinuousDelivery.html) pipeline. Each release should be identifiable. You can say, "This deployment is running Release 2.1.1 of the application." -5. Finally, the released feature is run in the target execution environment. Releases are immutable meaning that any change must create a new release. - -Applying these practices, organizations have radically evolved how they ship software. Many have moved from quarterly releases to on-demand updates. The goal is to catch problems early in the development cycle when they're less expensive to fix. The longer the duration between integrations, the more expensive problems become to resolve. With consistency in the integration process, teams can commit code changes more frequently, leading to better collaboration and software quality. - -Infrastructure as code and deployment automation, along with GitHub and Azure DevOps are discussed in detail in [DevOps](./devops.md). - ->[!div class="step-by-step"] ->[Previous](introduction.md) ->[Next](candidate-apps.md) diff --git a/docs/architecture/cloud-native/deploy-containers-azure.md b/docs/architecture/cloud-native/deploy-containers-azure.md deleted file mode 100644 index b16da9d4ae15f..0000000000000 --- a/docs/architecture/cloud-native/deploy-containers-azure.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: Deploying containers in Azure -description: Deploying Containers in Azure with Azure Container Registry, Azure Kubernetes Service, and Azure Dev Spaces. -ms.date: 12/14/2023 ---- - -# Deploying containers in Azure - -[!INCLUDE [download-alert](includes/download-alert.md)] - -We've discussed containers in this chapter and in chapter 1. We've seen that containers provide many benefits to cloud-native applications, including portability. In the Azure cloud, you can deploy the same containerized services across staging and production environments. Azure provides several options for hosting these containerized workloads: - -- Azure Kubernetes Services (AKS) -- Azure Container Instance (ACI) -- Azure Web Apps for Containers - -## Azure Container Registry - -When containerizing a microservice, you first build a container "image." The image is a binary representation of the service code, dependencies, and runtime. 
While you can manually create an image using the `docker build` command from the Docker CLI, a better approach is to create it as part of an automated build process. - -Once created, container images are stored in container registries. They enable you to build, store, and manage container images. There are many registries available, both public and private. Azure Container Registry (ACR) is a fully managed container registry service in the Azure cloud. It persists your images inside the Azure network, reducing the time to deploy them to Azure container hosts. You can also secure them using the same security and identity procedures that you use for other Azure resources. - -You create an Azure Container Registry using the [Azure portal](/azure/container-registry/container-registry-get-started-portal), [Azure CLI](/azure/container-registry/container-registry-get-started-azure-cli), or [PowerShell tools](/azure/container-registry/container-registry-get-started-powershell). Creating a registry in Azure is simple. It requires an Azure subscription, resource group, and a unique name. Figure 3-10 shows the basic options for creating a registry, which will be hosted at `registryname.azurecr.io`. - -![Create container registry](./media/create-container-registry.png) - -**Figure 3-10**. Create container registry - -Once you've created the registry, you'll need to authenticate with it before you can use it. Typically, you'll log into the registry using the Azure CLI command: - -```azurecli -az acr login --name *registryname* -``` - -Once authenticated, you can use docker commands to push container images to it. Before you can do so, however, you must tag your image with the fully qualified name (URL) of your ACR login server. It will have the format *registryname*.azurecr.io. - -```console -docker tag mycontainer myregistry.azurecr.io/mycontainer:v1 -``` - -After you've tagged the image, you use the `docker push` command to push the image to your ACR instance. - -```console -docker push myregistry.azurecr.io/mycontainer:v1 -``` - -After you push an image to the registry, it's a good idea to remove the image from your local Docker environment, using this command: - -```console -docker rmi myregistry.azurecr.io/mycontainer:v1 -``` - -As a best practice, you shouldn't manually push images to a container registry. Instead, use a build pipeline defined in a tool like GitHub or Azure DevOps. Learn more in the [Cloud-Native DevOps chapter](devops.md). - -## ACR Tasks - -[ACR Tasks](/azure/container-registry/container-registry-tasks-overview) is a set of features available from the Azure Container Registry. It extends your inner-loop development cycle by building and managing container images in the Azure cloud. Instead of invoking a `docker build` and `docker push` locally on your development machine, they're automatically handled by ACR Tasks in the cloud. - -The following Azure CLI commands create a container registry, then build a container image and push it to ACR: - -```azurecli -# create a container registry -az acr create --resource-group myResourceGroup --name myContainerRegistry008 --sku Basic - -# build container image in ACR and push it into your container registry -az acr build --image sample/hello-world:v1 --registry myContainerRegistry008 --file Dockerfile . -``` - -As you can see from the previous command block, there's no need to install Docker Desktop on your development machine. Additionally, you can configure ACR Task triggers to rebuild container images on both source code and base image updates.
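ACR Tasks also supports multi-step task definitions checked into your repository as a YAML file. The following is a minimal sketch of such a file; the file name, image name, and steps are illustrative and would be adapted to your own registry and Dockerfile:

```yaml
# acr-task.yaml (illustrative) - $Registry and $ID are aliases ACR Tasks supplies at run time
version: v1.1.0
steps:
  # Build the image from the Dockerfile in the submitted build context
  - build: -t $Registry/sample/hello-world:$ID .
  # Push the freshly built image into the registry
  - push:
    - $Registry/sample/hello-world:$ID
```

A file like this can be run on demand with `az acr run --registry myContainerRegistry008 -f acr-task.yaml .`, or wired to a commit or base image trigger as described above.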
- -## Azure Kubernetes Service - -We discussed Azure Kubernetes Service (AKS) at length in this chapter. We've seen that it's the de facto container orchestrator managing containerized cloud-native applications. - -Once you deploy an image to a registry, such as ACR, you can configure AKS to automatically pull and deploy it. With a CI/CD pipeline in place, you might configure a [canary release](https://martinfowler.com/bliki/CanaryRelease.html) strategy to minimize the risk involved when rapidly deploying updates. The new version of the app is initially configured in production with no traffic routed to it. Then, the system will route a small percentage of users to the newly deployed version. As the team gains confidence in the new version, it can roll out more instances and retire the old. AKS easily supports this style of deployment. - -As with most resources in Azure, you can create an Azure Kubernetes Service cluster using the portal, command-line, or automation tools like Helm or Terraform. To get started with a new cluster, you need to provide the following information: - -- Azure subscription -- Resource group -- Kubernetes cluster name -- Region -- Kubernetes version -- DNS name prefix -- Node size -- Node count - -This information is sufficient to get started. As part of the creation process in the Azure portal, you can also configure options for the following features of your cluster: - -- Scale -- Authentication -- Networking -- Monitoring -- Tags - -This [quickstart walks through deploying an AKS cluster using the Azure portal](/azure/aks/kubernetes-walkthrough-portal). - -## Azure Bridge to Kubernetes - -Cloud-native applications can grow large and complex, requiring significant compute resources to run. In these scenarios, the entire application can't be hosted on a development machine (especially a laptop). [Azure Bridge to Kubernetes](/visualstudio/bridge/overview-bridge-to-kubernetes) addresses the shortcoming. It enables developers to work with a local version of their service while hosting the entire application in an AKS development cluster. - -When ready, developers test their changes locally while running against the full application in the AKS cluster - without replicating dependencies. Under the hood, the bridge merges code from the local machine with services in AKS. Developers can rapidly iterate and debug code directly in Kubernetes using Visual Studio or Visual Studio Code. - -Gabe Monroy, former VP of Product Management at Microsoft, describes it well: - -> Imagine you're a new employee trying to fix a bug in a complex microservices application consisting of dozens of components, each with their own configuration and backing services. To get started, you must configure your local development environment so that it can mimic production including setting up your IDE, building tool chain, containerized service dependencies, a local Kubernetes environment, mocks for backing services, and more. With all the time involved setting up your development environment, fixing that first bug could take days! Or you could just use Bridge to Kubernetes and AKS. 
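To connect the registry and orchestrator discussions in this chapter, the following is a minimal sketch of the kind of Kubernetes Deployment manifest you might apply to an AKS cluster. It reuses the `myregistry.azurecr.io/mycontainer:v1` image tagged earlier; the name, replica count, and port are illustrative:

```yaml
# Illustrative Deployment: runs three replicas of the container image pushed to ACR earlier
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycontainer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mycontainer
  template:
    metadata:
      labels:
        app: mycontainer
    spec:
      containers:
        - name: mycontainer
          image: myregistry.azurecr.io/mycontainer:v1   # pulled from ACR by AKS
          ports:
            - containerPort: 80
```

With the AKS cluster granted pull access to the registry, applying a manifest like this (for example, with `kubectl apply`) is enough to get the containerized service running in the cluster.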
- ->[!div class="step-by-step"] ->[Previous](combine-containers-serverless-approaches.md) ->[Next](scale-containers-serverless.md) diff --git a/docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md b/docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md deleted file mode 100644 index 82411428273cd..0000000000000 --- a/docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Deploying eShopOnContainers to Azure -description: Deploying the eShopOnContainers application using Azure Kubernetes Service, Helm, and DevSpaces. -ms.date: 04/06/2022 ---- - -# Deploying eShopOnContainers to Azure - -[!INCLUDE [download-alert](includes/download-alert.md)] - -The eShopOnContainers application can be deployed to various Azure platforms. The recommended approach is to deploy the application to Azure Kubernetes Services (AKS). Helm, a Kubernetes deployment tool, is available to reduce deployment complexity. Optionally, developers may implement Azure Dev Spaces for Kubernetes to streamline their development process. - -## Azure Kubernetes Service - -To host eShop in AKS, the first step is to create an AKS cluster. To do so, you might use the Azure portal, which will walk you through the required steps. You could also create a cluster from the Azure CLI, taking care to enable Role-Based Access Control (RBAC) and application routing. The eShopOnContainers documentation details the steps for creating your own AKS cluster. Once created, you can access and manage the cluster from the Kubernetes dashboard. - -You can now deploy the eShop application to the cluster using Helm. - -## Deploying to Azure Kubernetes Service using Helm - -Helm is an application package manager tool that works directly with Kubernetes. It helps you define, install, and upgrade Kubernetes applications. While simple apps can be deployed to AKS with custom CLI scripts or simple deployment files, complex apps can contain many Kubernetes objects and benefit from Helm. - -Using Helm, applications include text-based configuration files, called Helm charts, which declaratively describe the application and configuration in Helm packages. Charts use standard YAML-formatted files to describe a related set of Kubernetes resources. They're versioned alongside the application code they describe. Helm charts range from simple to complex depending on the requirements of the installation they describe. - -Helm is composed of a command-line client tool, which consumes helm charts and launches commands to a server component named Tiller. Tiller communicates with the Kubernetes API to ensure the correct provisioning of your containerized workloads. Helm is maintained by the Cloud Native Computing Foundation. - -The following YAML file presents a Helm template: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: {{ .Values.app.svc.marketing }} - labels: - app: {{ template "marketing-api.name" . }} - chart: {{ template "marketing-api.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} -spec: - type: {{ .Values.service.type }} - ports: - - port: {{ .Values.service.port }} - targetPort: http - protocol: TCP - name: http - selector: - app: {{ template "marketing-api.name" . }} - release: {{ .Release.Name }} -``` - -Note how the template describes a dynamic set of key/value pairs. When the template is invoked, values enclosed in curly braces are pulled in from other yaml-based configuration files.
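For illustration, the placeholders in that template resolve against a `values.yaml` file supplied with the chart (or overridden at install time). A minimal sketch matching the placeholders above might look like the following; the actual eShopOnContainers values differ:

```yaml
# Illustrative values.yaml fragment - keys mirror the {{ .Values... }} placeholders in the template
app:
  svc:
    marketing: marketing-api   # becomes the Service name

service:
  type: ClusterIP              # becomes spec.type
  port: 80                     # becomes the exposed port
```

Overriding any of these values at install time (for example, `helm install --set service.port=8080 ...`) changes the rendered manifest without touching the template itself.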
- -You'll find the eShopOnContainers helm charts in the /k8s/helm folder. Figure 2-6 shows how the different components of the application are organized into a folder structure used by Helm to define and manage deployments. - -![The eShopOnContainers helm folder](./media/eshoponcontainers-helm-folder.png) -**Figure 2-6**. The eShopOnContainers helm folder. - -Each individual component is installed using a `helm install` command. eShop includes a "deploy all" script that loops through and installs the components using their respective helm charts. The result is a repeatable process, versioned with the application in source control, that anyone on the team can deploy to an AKS cluster with a one-line script command. - -> Note that version 3 of Helm officially removes the need for the Tiller server component. More information on this enhancement can be found [here](https://medium.com/better-programming/why-is-tiller-missing-in-helm-3-2347c446714). - -## Azure Functions and Logic Apps (Serverless) - -The eShopOnContainers sample includes support for tracking online marketing campaigns. An Azure Function is used to track marketing campaign details for a given campaign ID. Rather than creating a full microservice, a single Azure Function is simpler and sufficient. Azure Functions have a simple build and deployment model, especially when configured to run in Kubernetes. Deploying the function is scripted using Azure Resource Manager (ARM) templates and the Azure CLI. This campaign service isn't customer-facing and invokes a single operation, making it a great candidate for Azure Functions. The function requires minimal configuration, including a database connection string and image base URI settings. You configure Azure Functions in the Azure portal. - ->[!div class="step-by-step"] ->[Previous](map-eshoponcontainers-azure-services.md) ->[Next](centralized-configuration.md) diff --git a/docs/architecture/cloud-native/devops.md b/docs/architecture/cloud-native/devops.md deleted file mode 100644 index 8c5d891fdaee0..0000000000000 --- a/docs/architecture/cloud-native/devops.md +++ /dev/null @@ -1,278 +0,0 @@ ---- -title: DevOps -description: DevOps considerations for cloud-native applications -ms.date: 04/06/2022 ---- - -# DevOps - -[!INCLUDE [download-alert](includes/download-alert.md)] - -The favorite mantra of software consultants is to answer "It depends" to any question posed. It isn't because software consultants are fond of not taking a position. It's because there's no one true answer to any question in software. There's no absolute right and wrong, but rather a balance between opposites. - -Take, for instance, the two major schools of developing web applications: Single Page Applications (SPAs) versus server-side applications. On the one hand, the user experience tends to be better with SPAs and the amount of traffic to the web server can be minimized, making it possible to host them on something as simple as static hosting. On the other hand, SPAs tend to be slower to develop and more difficult to test. Which one is the right choice? Well, it depends on your situation. - -Cloud-native applications aren't immune to that same dichotomy. They have clear advantages in terms of speed of development, stability, and scalability, but managing them can be quite a bit more difficult. - -Years ago, it wasn't uncommon for the process of moving an application from development to production to take a month, or even more. Companies released software on a six-month or even yearly cadence.
One needs to look no further than Microsoft Windows to get an idea of the cadence of releases that was acceptable before the evergreen days of Windows 10. Five years passed between Windows XP and Vista, a further three between Vista and Windows 7. - -It's now fairly well established that being able to release software rapidly gives fast-moving companies a huge market advantage over their more sloth-like competitors. It's for that reason that major updates to Windows 10 are now released approximately every six months. - -The patterns and practices that enable faster, more reliable releases to deliver value to the business are collectively known as DevOps. They consist of a wide range of ideas spanning the entire software development life cycle from specifying an application all the way up to delivering and operating that application. - -DevOps emerged before microservices, and it's likely that the movement towards smaller, more fit-for-purpose services wouldn't have been possible without DevOps to make releasing and operating not just one but many applications in production easier. - -![Figure 10-1 Search trends show that the growth in microservices doesn't start until after DevOps is a fairly well-established idea.](./media/microservices-vs-devops.png) - -**Figure 10-1** - DevOps and microservices. - -Through good DevOps practices, it's possible to realize the advantages of cloud-native applications without suffocating under a mountain of work actually operating the applications. - -There's no golden hammer when it comes to DevOps. Nobody can sell a complete and all-encompassing solution for releasing and operating high-quality applications. This is because each application is wildly different from all others. However, there are tools that can make DevOps a far less daunting proposition. One of these tools is known as Azure DevOps. - -## Azure DevOps - -Azure DevOps has a long pedigree. It can trace its roots back to when Team Foundation Server first moved online and through the various name changes: Visual Studio Online and Visual Studio Team Services. Through the years, however, it has become far more than its predecessors. - -Azure DevOps is divided into five major components: - -![Figure 10-2 The five major areas of Azure DevOps](./media/devops-components.png) - -**Figure 10-2** - Azure DevOps. - -**Azure Repos** - Source code management that supports the venerable Team Foundation Version Control (TFVC) and the industry favorite [Git](https://en.wikipedia.org/wiki/Git). Pull requests provide a way to enable social coding by fostering discussion of changes as they're made. - -**Azure Boards** - Provides an issue and work item tracking tool that strives to allow users to pick the workflows that work best for them. It comes with a number of pre-configured templates including ones to support SCRUM and Kanban styles of development. - -**Azure Pipelines** - A build and release management system that supports tight integration with Azure. Builds can be run on various platforms from Windows to Linux to macOS. Build agents may be provisioned in the cloud or on-premises. - -**Azure Test Plans** - No QA person will be left behind with the test management and exploratory testing support offered by the Test Plans feature. - -**Azure Artifacts** - An artifact feed that allows companies to create their own internal versions of NuGet, npm, and others. It serves a double purpose of acting as a cache of upstream packages if there's a failure of a centralized repository.
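To give a feel for the Pipelines component described above, the following is a minimal sketch of an `azure-pipelines.yml` that builds and tests a .NET microservice on every push to `main`. The SDK version and steps are illustrative rather than a prescribed pipeline:

```yaml
# Illustrative azure-pipelines.yml for a single .NET microservice
trigger:
  - main                       # run on every push to main

pool:
  vmImage: 'ubuntu-latest'     # Microsoft-hosted Linux build agent

steps:
  - task: UseDotNet@2          # install the .NET SDK on the agent
    inputs:
      packageType: 'sdk'
      version: '8.x'
  - script: dotnet build --configuration Release
    displayName: Build
  - script: dotnet test --configuration Release --no-build
    displayName: Run unit tests
```

Because the pipeline definition lives alongside the code it builds, it's versioned, reviewed, and branched just like the rest of the service.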
- -The top-level organizational unit in Azure DevOps is known as a Project. Within each project, the various components, such as Azure Artifacts, can be turned on and off. Each of these components provides different advantages for cloud-native applications. The three most useful are repositories, boards, and pipelines. If users want to manage their source code in another repository stack, such as GitHub, but still take advantage of Azure Pipelines and other components, that's perfectly possible. - -Fortunately, development teams have many options when selecting a repository. One of them is GitHub. - -## GitHub Actions - -Founded in 2009, GitHub is a widely popular web-based repository for hosting projects, documentation, and code. Many large tech companies, such as Apple, Amazon, and Google, as well as mainstream corporations, use GitHub. GitHub uses the open-source, distributed version control system named Git as its foundation. On top, it then adds its own set of features, including defect tracking, feature and pull requests, task management, and wikis for each code base. - -As GitHub evolves, it too is adding DevOps features. For example, GitHub has its own continuous integration/continuous delivery (CI/CD) pipeline, called `GitHub Actions`. GitHub Actions is a community-powered workflow automation tool. It lets DevOps teams integrate with their existing tooling, mix and match new products, and hook into their software lifecycle, including existing CI/CD partners. - -GitHub has over 40 million users, making it the largest host of source code in the world. In October of 2018, Microsoft purchased GitHub. Microsoft has pledged that GitHub will remain an [open platform](https://techcrunch.com/2018/06/04/microsoft-promises-to-keep-github-independent-and-open/) that any developer can plug into and extend. It continues to operate as an independent company. GitHub offers plans for enterprise, team, professional, and free accounts. - -## Source control - -Organizing the code for a cloud-native application can be challenging. Instead of a single giant application, cloud-native applications tend to be made up of a web of smaller applications that talk with one another. As with all things in computing, the best arrangement of code remains an open question. There are examples of successful applications using different kinds of layouts, but two variants seem to have the most popularity. - -Before getting down into the actual source control itself, it's probably worth deciding on how many projects are appropriate. Within a single project, there's support for multiple repositories and build pipelines. Boards are a little more complicated, but there too, the tasks can easily be assigned to multiple teams within a single project. It's possible to support hundreds, even thousands of developers, out of a single Azure DevOps project. Doing so is likely the best approach as it provides a single place for all developers to work out of and reduces the confusion of finding that one application when developers are unsure which project it resides in. - -Splitting up code for microservices within the Azure DevOps project can be slightly more challenging. - -![Figure 10-3 Single versus Multiple Repositories](./media/single-repository-vs-multiple.png) - -**Figure 10-3** - One vs. many repositories. - -### Repository per microservice - -At first glance, this approach seems like the most logical way to split up the source code for microservices.
Each repository can contain the code needed to build the one microservice. The advantages to this approach are readily visible: - -1. Instructions for building and maintaining the application can be added to a README file at the root of each repository. When flipping through the repositories, it's easy to find these instructions, reducing spin-up time for developers. -2. Every service is located in a logical place, easily found by knowing the name of the service. -3. Builds can easily be set up such that they're only triggered when a change is made to the owning repository. -4. The number of changes coming into a repository is limited to the small number of developers working on the project. -5. Security is easy to set up by restricting the repositories to which developers have read and write permissions. -6. Repository level settings can be changed by the owning team with a minimum of discussion with others. - -One of the key ideas behind microservices is that services should be siloed and separated from each other. When using Domain Driven Design to decide on the boundaries for services the services act as transactional boundaries. Database updates shouldn't span multiple services. This collection of related data is referred to as a bounded context. This idea is reflected by the isolation of microservice data to a database separate and autonomous from the rest of the services. It makes a great deal of sense to carry this idea all the way through to the source code. - -However, this approach isn't without its issues. One of the more gnarly development problems of our time is managing dependencies. Consider the number of files that make up the average `node_modules` directory. A fresh install of something like `create-react-app` is likely to bring with it thousands of packages. The question of how to manage these dependencies is a difficult one. - -If a dependency is updated, then downstream packages must also update this dependency. Unfortunately, that takes development work so, invariably, the `node_modules` directory ends up with multiple versions of a single package, each one a dependency of some other package that is versioned at a slightly different cadence. When deploying an application, which version of a dependency should be used? The version that is currently in production? The version that is currently in Beta but is likely to be in production by the time the consumer makes it to production? Difficult problems that aren't resolved by just using microservices. - -There are libraries that are depended upon by a wide variety of projects. By dividing the microservices up with one in each repository the internal dependencies can best be resolved by using the internal repository, Azure Artifacts. Builds for libraries will push their latest versions into Azure Artifacts for internal consumption. The downstream project must still be manually updated to take a dependency on the newly updated packages. - -Another disadvantage presents itself when moving code between services. Although it would be nice to believe that the first division of an application into microservices is 100% correct, the reality is that rarely we're so prescient as to make no service division mistakes. Thus, functionality and the code that drives it will need to move from service to service: repository to repository. When leaping from one repository to another, the code loses its history. There are many cases, especially in the event of an audit, where having full history on a piece of code is invaluable. 
- -The final and most important disadvantage is coordinating changes. In a true microservices application, there should be no deployment dependencies between services. It should be possible to deploy services A, B, and C in any order as they have loose coupling. In reality, however, there are times when it's desirable to make a change that crosses multiple repositories at the same time. Some examples include updating a library to close a security hole or changing a communication protocol used by all services. - -To do a cross-repository change requires a commit to each repository be made in succession. Each change in each repository will need to be pull-requested and reviewed separately. This activity can be difficult to coordinate. - -An alternative to using many repositories is to put all the source code together in a giant, all knowing, single repository. - -### Single repository - -In this approach, sometimes referred to as a [monorepository](https://danluu.com/monorepo/), all the source code for every service is put into the same repository. At first, this approach seems like a terrible idea likely to make dealing with source code unwieldy. There are, however, some marked advantages to working this way. - -The first advantage is that it's easier to manage dependencies between projects. Instead of relying on some external artifact feed, projects can directly import one another. This means that updates are instant, and conflicting versions are likely to be found at compile time on the developer's workstation. In effect, shifting some of the integration testing left. - -When moving code between projects, it's now easier to preserve the history as the files will be detected as having been moved rather than being rewritten. - -Another advantage is that wide ranging changes that cross service boundaries can be made in a single commit. This activity reduces the overhead of having potentially dozens of changes to review individually. - -There are many tools that can perform static analysis of code to detect insecure programming practices or problematic use of APIs. In a multi-repository world, each repository will need to be iterated over to find the problems in them. The single repository allows running the analysis all in one place. - -There are also many disadvantages to the single repository approach. One of the most worrying ones is that having a single repository raises security concerns. If the contents of a repository are leaked in a repository per service model, the amount of code lost is minimal. With a single repository, everything the company owns could be lost. There have been many examples in the past of this happening and derailing entire game development efforts. Having multiple repositories exposes less surface area, which is a desirable trait in most security practices. - -The size of the single repository is likely to become unmanageable rapidly. This presents some interesting performance implications. It may become necessary to use specialized tools such as [Virtual File System for Git](https://github.com/Microsoft/VFSForGit), which was originally designed to improve the experience for developers on the Windows team. - -Frequently the argument for using a single repository boils down to an argument that Facebook or Google use this method for source code arrangement. If the approach is good enough for these companies, then, surely, it's the correct approach for all companies. 
The truth of the matter is that few companies operate on anything like the scale of Facebook or Google. The problems that occur at those scales are different from those most developers will face. What is good for the goose may not be good for the gander. - -In the end, either solution can be used to host the source code for microservices. However, in most cases, the management and engineering overhead of operating in a single repository isn't worth the meager advantages. Splitting code up over multiple repositories promotes better separation of concerns and encourages autonomy among development teams. - -### Standard directory structure - -Regardless of the single versus multiple repositories debate, each service will have its own directory. One of the best optimizations to allow developers to cross between projects quickly is to maintain a standard directory structure. - -![Figure 10-4 A standard directory structure for both the email and sign-in services](./media/dir-struct.png) - -**Figure 10-4** - Standard directory structure. - -Whenever a new project is created, a template that puts in place the correct structure should be used. This template can also include such useful items as a skeleton README file and an `azure-pipelines.yml`. In any microservice architecture, a high degree of variance between projects makes bulk operations against the services more difficult. - -There are many tools that can provide templating for an entire directory, containing several source code directories. [Yeoman](https://yeoman.io/) is popular in the JavaScript world, and GitHub has recently released [Repository Templates](https://github.blog/2019-06-06-generate-new-repositories-with-repository-templates/), which provide much of the same functionality. - -## Task management - -Managing tasks in any project can be difficult. Up front, there are countless questions to be answered about the sort of workflows to set up to ensure optimal developer productivity. - -Cloud-native applications tend to be smaller than traditional software products, or at least they're divided into smaller services. Tracking of issues or tasks related to these services remains as important as with any other software project. Nobody wants to lose track of some work item or explain to a customer that their issue wasn't properly logged. Azure Boards are configured at the project level, but within each project, areas can be defined. These allow breaking down issues across several components. The advantage of keeping all the work for the entire application in one place is that it's easy to move work items from one team to another as they're understood better. - -Azure DevOps comes with a number of popular templates pre-configured. In the most basic configuration, all you need to know is what's in the backlog, what people are working on, and what's done. It's important to have this visibility into the process of building software, so that work can be prioritized and completed tasks reported to the customer. Of course, few software projects stick to a process as simple as `to do`, `doing`, and `done`. It doesn't take long for people to start adding steps like `QA` or `Detailed Specification` to the process. - -One of the more important parts of Agile methodologies is self-introspection at regular intervals. These reviews are meant to provide insight into what problems the team is facing and how they can be improved. Frequently, this means changing the flow of issues and features through the development process.
So, it's perfectly healthy to expand the layouts of the boards with additional stages. - -The stages in the boards aren't the only organizational tool. Depending on the configuration of the board, there's a hierarchy of work items. The most granular item that can appear on a board is a task. Out of the box, a task contains fields for a title, a description, a priority, an estimate of the amount of work remaining, and the ability to link to other work items or development items (branches, commits, pull requests, builds, and so forth). Work items can be classified into different areas of the application and different iterations (sprints) to make finding them easier. - -![Figure 10-5 An example task in Azure DevOps](./media/task-details.png) - -**Figure 10-5** - Task in Azure DevOps. - -The description field supports the normal styles you'd expect (bold, italic, underscore, and strikethrough) and the ability to insert images. This makes it a powerful tool for use when specifying work or bugs. - -Tasks can be rolled up into features, which define a larger unit of work. Features, in turn, can be [rolled up into epics](/azure/devops/boards/backlogs/define-features-epics?view=azure-devops&preserve-view=true). Classifying tasks in this hierarchy makes it much easier to understand how close a large feature is to rolling out. - -![Figure 10-6 Work item types configured by default in the Basic process template](./media/board-issue-types.png) - -**Figure 10-6** - Work item in Azure DevOps. - -There are different kinds of views into the issues in Azure Boards. Items that aren't yet scheduled appear in the backlog. From there, they can be assigned to a sprint. A sprint is a time box during which it's expected some quantity of work will be completed. This work can include tasks but also the resolution of tickets. Once there, the entire sprint can be managed from the Sprint board section. This view shows how work is progressing and includes a burndown chart to give an ever-updating estimate of whether the sprint will be successful. - -![Figure 10-7 A board with a sprint defined](./media/sprint-board.png) - -**Figure 10-7** - Board in Azure DevOps. - -By now, it should be apparent that there's a great deal of power in the Boards in Azure DevOps. For developers, there are easy views of what is being worked on. For project managers, there are views into upcoming work as well as an overview of existing work. For managers, there are plenty of reports about resourcing and capacity. Unfortunately, there's nothing magical about cloud-native applications that eliminates the need to track work. But if you must track work, there are few places where the experience is better than in Azure DevOps. - -## CI/CD pipelines - -Almost no change in the software development life cycle has been so revolutionary as the advent of continuous integration (CI) and continuous delivery (CD). Building and running automated tests against the source code of a project as soon as a change is checked in catches mistakes early. Prior to the advent of continuous integration builds, it wasn't uncommon to pull code from the repository and find that it didn't pass tests or couldn't even be built. This resulted in tracking down the source of the breakage. - -Traditionally, shipping software to the production environment required extensive documentation and a list of steps. Each one of these steps needed to be completed manually in a very error-prone process. - -![Figure 10-8 A checklist](./media/checklist.png) - -**Figure 10-8** - Checklist.
- -The sister of continuous integration is continuous delivery, in which the freshly built packages are deployed to an environment. The manual process can't scale to match the speed of development, so automation becomes more important. Checklists are replaced by scripts that can execute the same tasks faster and more accurately than any human. - -The environment to which continuous delivery delivers might be a test environment or, as is being done by many major technology companies, it could be the production environment. The latter requires an investment in high-quality tests that can give confidence that a change isn't going to break production for users. In the same way that continuous integration caught issues in the code early, continuous delivery catches issues in the deployment process early. - -The importance of automating the build and delivery process is accentuated by cloud-native applications. Deployments happen more frequently and to more environments, so manual deployment borders on impossible. - -### Azure Builds - -Azure DevOps provides a set of tools to make continuous integration and deployment easier than ever. These tools are located under Azure Pipelines. The first of them is Azure Builds, which is a tool for running YAML-based build definitions at scale. Users can either bring their own build machines (great if the build requires a meticulously set-up environment) or use a machine from a constantly refreshed pool of Azure-hosted virtual machines. These hosted build agents come pre-installed with a wide range of development tools for not just .NET development but for everything from Java to Python to iPhone development. - -DevOps includes a wide range of out-of-the-box build definitions that can be customized for any build. The build definitions are defined in a file called `azure-pipelines.yml` and checked into the repository so they can be versioned along with the source code. This makes it much easier to make changes to the build pipeline in a branch as the changes can be checked into just that branch. An example `azure-pipelines.yml` for building an ASP.NET web application on full framework is shown in Figure 10-9.
- -```yml -name: $(rev:r) - -variables: - version: 9.2.0.$(Build.BuildNumber) - solution: Portals.sln - artifactName: drop - buildPlatform: any cpu - buildConfiguration: release - -pool: - name: Hosted VisualStudio - demands: - - msbuild - - visualstudio - - vstest - -steps: -- task: NuGetToolInstaller@0 - displayName: 'Use NuGet 4.4.1' - inputs: - versionSpec: 4.4.1 - -- task: NuGetCommand@2 - displayName: 'NuGet restore' - inputs: - restoreSolution: '$(solution)' - -- task: VSBuild@1 - displayName: 'Build solution' - inputs: - solution: '$(solution)' - msbuildArgs: '-p:DeployOnBuild=true -p:WebPublishMethod=Package -p:PackageAsSingleFile=true -p:SkipInvalidConfigurations=true -p:PackageLocation="$(build.artifactstagingdirectory)\\"' - platform: '$(buildPlatform)' - configuration: '$(buildConfiguration)' - -- task: VSTest@2 - displayName: 'Test Assemblies' - inputs: - testAssemblyVer2: | - **\$(buildConfiguration)\**\*test*.dll - !**\obj\** - !**\*testadapter.dll - platform: '$(buildPlatform)' - configuration: '$(buildConfiguration)' - -- task: CopyFiles@2 - displayName: 'Copy UI Test Files to: $(build.artifactstagingdirectory)' - inputs: - SourceFolder: UITests - TargetFolder: '$(build.artifactstagingdirectory)/uitests' - -- task: PublishBuildArtifacts@1 - displayName: 'Publish Artifact' - inputs: - PathtoPublish: '$(build.artifactstagingdirectory)' - ArtifactName: '$(artifactName)' - condition: succeededOrFailed() -``` - -**Figure 10-9** - A sample azure-pipelines.yml - -This build definition uses a number of built-in tasks that make creating builds as simple as building a Lego set (simpler than the giant Millennium Falcon). For instance, the NuGet task restores NuGet packages, while the VSBuild task calls the Visual Studio build tools to perform the actual compilation. There are hundreds of different tasks available in Azure DevOps, with thousands more that are maintained by the community. It's likely that no matter what build tasks you're looking to run, somebody has built one already. - -Builds can be triggered manually, by a check-in, on a schedule, or by the completion of another build. In most cases, building on every check-in is desirable. Builds can be filtered so that different builds run against different parts of the repository or against different branches. This allows for scenarios like running fast builds with reduced testing on pull requests and running a full regression suite against the trunk on a nightly basis. - -The end result of a build is a collection of files known as build artifacts. These artifacts can be passed along to the next step in the build process or added to an Azure Artifacts feed, so they can be consumed by other builds. - -### Azure DevOps releases - -Builds take care of compiling the software into a shippable package, but the artifacts still need to be pushed out to a testing environment to complete continuous delivery. For this, Azure DevOps uses a separate tool called Releases. The Releases tool makes use of the same task library that was available to Builds but introduces the concept of "stages". A stage is an isolated environment into which the package is installed. For instance, a product might make use of a development, a QA, and a production environment. Code is continuously delivered into the development environment where automated tests can be run against it. Once those tests pass, the release moves on to the QA environment for manual testing. Finally, the code is pushed to production where it's visible to everybody.
- -![Figure 10-10 An example release pipeline with Develop, QA, and Production phases](./media/release-pipeline.png) - -**Figure 10-10** - Release pipeline - -Each stage in the release can be automatically triggered by the completion of the previous stage. In many cases, however, this isn't desirable. Moving code into production might require approval from somebody. The Releases tool supports this by allowing approvers at each step of the release pipeline. Rules can be set up such that a specific person or group of people must sign off on a release before it makes it into production. These gates allow for manual quality checks and also for compliance with any regulatory requirements related to controlling what goes into production. - -### Everybody gets a build pipeline - -There's no cost to configuring many build pipelines, so it's advantageous to have at least one build pipeline per microservice. Ideally, microservices are independently deployable to any environment, so being able to release each one via its own pipeline without releasing a mass of unrelated code is perfect. Each pipeline can have its own set of approvals, allowing for variations in the build process for each service. - -### Versioning releases - -One drawback to using the Releases functionality is that it can't be defined in a checked-in `azure-pipelines.yml` file. There are many reasons you might want to do that, from having per-branch release definitions to including a release skeleton in your project template. Fortunately, work is ongoing to shift some of the stages support into the Build component. This will be known as multi-stage builds, and the [first version is available now](https://devblogs.microsoft.com/devops/whats-new-with-azure-pipelines/)! - ->[!div class="step-by-step"] ->[Previous](azure-security.md) ->[Next](feature-flags.md) diff --git a/docs/architecture/cloud-native/distributed-data.md b/docs/architecture/cloud-native/distributed-data.md deleted file mode 100644 index 8e0949af71a26..0000000000000 --- a/docs/architecture/cloud-native/distributed-data.md +++ /dev/null @@ -1,150 +0,0 @@ ---- -title: Cloud-native data patterns -description: Contrast data storage in monolithic and cloud-native applications. -author: robvet -ms.date: 04/06/2022 ---- - -# Cloud-native data patterns - -[!INCLUDE [download-alert](includes/download-alert.md)] - -As we've seen throughout this book, a cloud-native approach changes the way you design, deploy, and manage applications. It also changes the way you manage and store data. - -Figure 5-1 contrasts the differences. - -![Data storage in cloud-native applications](./media/distributed-data.png) - -**Figure 5-1**. Data management in cloud-native applications - -Experienced developers will easily recognize the architecture on the left side of Figure 5-1. In this *monolithic application*, business service components collocate in a shared services tier, sharing data from a single relational database. - -In many ways, a single database keeps data management simple. Querying data across multiple tables is straightforward. Changes to data update together, or they all roll back. [ACID transactions](/windows/desktop/cossdk/acid-properties) guarantee strong and immediate consistency. - -Designing for cloud-native, we take a different approach. On the right side of Figure 5-1, note how business functionality segregates into small, independent [microservices](/azure/architecture/guide/architecture-styles/microservices).
Each microservice encapsulates a specific business capability and its own data. The monolithic database decomposes into a distributed data model with many smaller databases, each aligning with a microservice. When the smoke clears, we emerge with a design that exposes a *database per microservice*. - -## Database-per-microservice, why? - -This database-per-microservice model provides many benefits, especially for systems that must evolve rapidly and support massive scale. With this model... - -- Domain data is encapsulated within the service -- Data schema can evolve without directly impacting other services -- Each data store can independently scale -- A data store failure in one service won't directly impact other services - -Segregating data also enables each microservice to implement the data store type that is best optimized for its workload, storage needs, and read/write patterns. Choices include relational, document, key-value, and even graph-based data stores. - -Figure 5-2 presents the principle of polyglot persistence in a cloud-native system. - -![Polyglot data persistence](./media/polyglot-data-persistence.png) - -**Figure 5-2**. Polyglot data persistence - -Note in the previous figure how each microservice supports a different type of data store. - -- The product catalog microservice consumes a relational database to accommodate the rich relational structure of its underlying data. -- The shopping cart microservice consumes a distributed cache that supports its simple, key-value data store. -- The ordering microservice consumes both a NoSQL document database for write operations and a highly denormalized key/value store to accommodate high volumes of read operations. - -While relational databases remain relevant for microservices with complex data, NoSQL databases have gained considerable popularity. They provide massive scale and high availability. Their schemaless nature allows developers to move away from an architecture of typed data classes and ORMs that make change expensive and time-consuming. We cover NoSQL databases later in this chapter. - -While encapsulating data into separate microservices can increase agility, performance, and scalability, it also presents many challenges. In the next section, we discuss these challenges along with patterns and practices to help overcome them. - -## Cross-service queries - -While microservices are independent and focus on specific functional capabilities, like inventory, shipping, or ordering, they frequently require integration with other microservices. Often the integration involves one microservice *querying* another for data. Figure 5-3 shows the scenario. - -![Querying across microservices](./media/cross-service-query.png) - -**Figure 5-3**. Querying across microservices - -In the preceding figure, we see a shopping basket microservice that adds an item to a user's shopping basket. While the data store for this microservice contains basket and line item data, it doesn't maintain product or pricing data. Instead, those data items are owned by the catalog and pricing microservices. This presents a problem. How can the shopping basket microservice add a product to the user's shopping basket when it doesn't have product or pricing data in its database? - -One option discussed in Chapter 4 is a [direct HTTP call](service-to-service-communication.md#queries) from the shopping basket to the catalog and pricing microservices.
However, in Chapter 4, we said synchronous HTTP calls *couple* microservices together, reducing their autonomy and diminishing their architectural benefits. - -We could also implement a [request-reply pattern](/azure/architecture/patterns/async-request-reply) with separate inbound and outbound queues for each service. However, this pattern is complicated and requires plumbing to correlate request and response messages. -While it does decouple the backend microservice calls, the calling service must still synchronously wait for the call to complete. Network congestion, transient faults, or an overloaded microservice can result in long-running and even failed operations. - -Instead, a widely accepted pattern for removing cross-service dependencies is the [Materialized View Pattern](/azure/architecture/patterns/materialized-view), shown in Figure 5-4. - -![Materialized view pattern](./media/materialized-view-pattern.png) - -**Figure 5-4**. Materialized View Pattern - -With this pattern, you place a local data table (known as a *read model*) in the shopping basket service. This table contains a denormalized copy of the data needed from the product and pricing microservices. Copying the data directly into the shopping basket microservice eliminates the need for expensive cross-service calls. With the data local to the service, you improve the service's response time and reliability. Additionally, having its own copy of the data makes the shopping basket service more resilient. If the catalog service should become unavailable, it wouldn't directly impact the shopping basket service. The shopping basket can continue operating with the data from its own store. - -The catch with this approach is that you now have duplicate data in your system. However, *strategically* duplicating data in cloud-native systems is an established practice and not considered an anti-pattern, or bad practice. Keep in mind that *one and only one service* can own a data set and have authority over it. You'll need to synchronize the read models when the system of record is updated. Synchronization is typically implemented via asynchronous messaging with a [publish/subscribe pattern](service-to-service-communication.md#events), as shown in Figure 5-4. - -## Distributed transactions - -While querying data across microservices is difficult, implementing a transaction across several microservices is even more complex. The inherent challenge of maintaining data consistency across independent data sources in different microservices can't be overstated. The lack of distributed transactions in cloud-native applications means that you must manage distributed transactions programmatically. You move from a world of *immediate consistency* to that of *eventual consistency*. - -Figure 5-5 shows the problem. - -![Transaction in saga pattern](./media/saga-transaction-operation.png) - -**Figure 5-5**. Implementing a transaction across microservices - -In the preceding figure, five independent microservices participate in a distributed transaction that creates an order. Each microservice maintains its own data store and implements a local transaction for its store. To create the order, the local transaction for *each* individual microservice must succeed, or *all* must abort and roll back the operation. While built-in transactional support is available inside each of the microservices, there's no support for a distributed transaction that would span all five services to keep data consistent.
- -Instead, you must construct this distributed transaction *programmatically*. - -A popular pattern for adding distributed transactional support is the [Saga pattern](/azure/architecture/reference-architectures/saga/saga). It's implemented by grouping local transactions together programmatically and sequentially invoking each one. If any of the local transactions fail, the Saga aborts the operation and invokes a set of [compensating transactions](/azure/architecture/patterns/compensating-transaction). The compensating transactions undo the changes made by the preceding local transactions and restore data consistency. Figure 5-6 shows a failed transaction with the Saga pattern. - -![Roll back in saga pattern](./media/saga-rollback-operation.png) - -**Figure 5-6**. Rolling back a transaction - -In the previous figure, the *Update Inventory* operation has failed in the Inventory microservice. The Saga invokes a set of compensating transactions (in red) to adjust the inventory counts, cancel the payment and the order, and return the data for each microservice back to a consistent state. - -Saga patterns are typically choreographed as a series of related events, or orchestrated as a set of related commands. In Chapter 4, we discussed the service aggregator pattern that would be the foundation for an orchestrated saga implementation. We also discussed eventing along with [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) and [Azure Event Grid](/azure/event-grid/overview) topics that would be a foundation for a choreographed saga implementation. - -## High volume data - -Large cloud-native applications often support high-volume data requirements. In these scenarios, traditional data storage techniques can cause bottlenecks. For complex systems that deploy on a large scale, both Command and Query Responsibility Segregation (CQRS) and Event Sourcing may improve application performance. - -### CQRS - -[CQRS](/azure/architecture/patterns/cqrs) is an architectural pattern that can help maximize performance, scalability, and security. The pattern separates operations that read data from those that write data. - -For normal scenarios, the same entity model and data repository object are used for *both* read and write operations. - -However, a high-volume data scenario can benefit from separate models and data tables for reads and writes. To improve performance, the read operation could query against a highly denormalized representation of the data to avoid expensive repetitive table joins and table locks. The *write* operation, known as a [command](/azure/architecture/guide/technology-choices/messaging#commands), would update against a fully normalized representation of the data that would guarantee consistency. You then need to implement a mechanism to keep both representations in sync. Typically, whenever the write table is modified, it publishes an [event](/azure/architecture/guide/technology-choices/messaging#events) that replicates the modification to the read table. - -Figure 5-7 shows an implementation of the CQRS pattern. - -![Command and Query Responsibility Segregation](./media/cqrs-implementation.png) - -**Figure 5-7**. CQRS implementation - -In the previous figure, separate command and query models are implemented. Each data write operation is saved to the write store and then propagated to the read store.
Pay close attention to how the data propagation process operates on the principle of [eventual consistency](https://www.cloudcomputingpatterns.org/eventual_consistency/). The read model eventually synchronizes with the write model, but there may be some lag in the process. We discuss eventual consistency in the next section. - -This separation enables reads and writes to scale independently. Read operations use a schema optimized for queries, while the writes use a schema optimized for updates. Read queries go against denormalized data, while complex business logic can be applied to the write model. As well, you might impose tighter security on write operations than on read operations. - -Implementing CQRS can improve application performance for cloud-native services. However, it does result in a more complex design. Apply this pattern carefully and strategically to those sections of your cloud-native application that will benefit from it. For more on CQRS, see the Microsoft book [.NET Microservices: Architecture for Containerized .NET Applications](../microservices/microservice-ddd-cqrs-patterns/apply-simplified-microservice-cqrs-ddd-patterns.md). - -### Event sourcing - -Another approach to optimizing high-volume data scenarios involves [Event Sourcing](/azure/architecture/patterns/event-sourcing). - -A system typically stores the current state of a data entity. If a user changes their phone number, for example, the customer record is updated with the new number. We always know the current state of a data entity, but each update overwrites the previous state. - -In most cases, this model works fine. In high-volume systems, however, overhead from transactional locking and frequent update operations can impact database performance and responsiveness, and limit scalability. - -Event Sourcing takes a different approach to capturing data. Each operation that affects data is persisted to an event store. Instead of updating the state of a data record, we append each change to a sequential list of past events - similar to an accountant's ledger. The Event Store becomes the system of record for the data. It's used to propagate various materialized views within the bounded context of a microservice. Figure 5-8 shows the pattern. - -![Event Sourcing](./media/event-sourcing.png) - -**Figure 5-8**. Event Sourcing - -In the previous figure, note how each entry (in blue) for a user's shopping cart is appended to an underlying event store. In the adjoining materialized view, the system projects the current state by replaying all the events associated with each shopping cart. This view, or read model, is then exposed back to the UI. Events can also be integrated with external systems and applications or queried to determine the current state of an entity. With this approach, you maintain history. You know not only the current state of an entity, but also how you reached this state. - -Mechanically speaking, event sourcing simplifies the write model. There are no updates or deletes. Appending each data entry as an immutable event minimizes contention, locking, and concurrency conflicts associated with relational databases. Building read models with the materialized view pattern enables you to decouple the view from the write model and choose the best data store to optimize the needs of your application UI. - -For this pattern, consider a data store that directly supports event sourcing. Azure Cosmos DB, MongoDB, Cassandra, CouchDB, and RavenDB are good candidates.
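To make the append-and-replay mechanics concrete, the following is a minimal, illustrative C# sketch of an event-sourced shopping cart. The event and projection types are hypothetical and aren't tied to any particular event store; a production implementation would persist the stream in one of the stores mentioned above.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical, immutable cart events - each change is appended, never updated in place.
public abstract record CartEvent(Guid CartId, DateTimeOffset OccurredAt);
public record ItemAdded(Guid CartId, DateTimeOffset OccurredAt, string Sku, int Quantity)
    : CartEvent(CartId, OccurredAt);
public record ItemRemoved(Guid CartId, DateTimeOffset OccurredAt, string Sku, int Quantity)
    : CartEvent(CartId, OccurredAt);

public static class CartProjection
{
    // Project the current state (a read model) by replaying the event stream in order.
    public static IReadOnlyDictionary<string, int> CurrentItems(IEnumerable<CartEvent> stream)
    {
        var items = new Dictionary<string, int>();
        foreach (var e in stream.OrderBy(evt => evt.OccurredAt))
        {
            switch (e)
            {
                case ItemAdded added:
                    items[added.Sku] = items.GetValueOrDefault(added.Sku) + added.Quantity;
                    break;
                case ItemRemoved removed:
                    items[removed.Sku] = Math.Max(0, items.GetValueOrDefault(removed.Sku) - removed.Quantity);
                    break;
            }
        }
        return items;
    }
}
```

Because prior events are never mutated, the projection can be rebuilt at any time or cached as a materialized view for the UI.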
- -As with all patterns and technologies, implement strategically and when needed. While event sourcing can provide increased performance and scalability, it comes at the expense of complexity and a learning curve. - ->[!div class="step-by-step"] ->[Previous](service-mesh-communication-infrastructure.md) ->[Next](relational-vs-nosql-data.md) diff --git a/docs/architecture/cloud-native/elastic-search-in-azure.md b/docs/architecture/cloud-native/elastic-search-in-azure.md deleted file mode 100644 index c59696142b125..0000000000000 --- a/docs/architecture/cloud-native/elastic-search-in-azure.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -title: Elasticsearch in cloud-native applications -description: Learn about adding Elastic Search capabilities to cloud-native applications. -author: robvet -ms.date: 04/06/2022 ---- - -# Elasticsearch in a cloud-native app - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Elasticsearch is a distributed search and analytics system that enables complex search capabilities across diverse types of data. It's open source and widely popular. Consider how the following companies integrate Elasticsearch into their applications: - -- [Wikipedia](https://blog.wikimedia.org/2014/01/06/wikimedia-moving-to-elasticsearch/) for full-text and incremental (search as you type) searching. -- [GitHub](https://www.elastic.co/customers/github) to index and expose over 8 million code repositories. -- [Docker](https://www.elastic.co/customers/docker) for making its container library discoverable. - -Elasticsearch is built on top of the [Apache Lucene](https://lucene.apache.org/core/) full-text search engine. Lucene provides high-performance document indexing and querying. It indexes data with an inverted indexing scheme – instead of mapping pages to keywords, it maps keywords to pages, much like the index at the back of a book. Lucene has powerful query syntax capabilities and can query data by: - -- Term (a full word) -- Prefix (starts-with word) -- Wildcard (using "\*" or "?" filters) -- Phrase (a sequence of text in a document) -- Boolean value (complex searches combining queries) - -While Lucene provides low-level plumbing for searching, Elasticsearch provides the server that sits on top of Lucene. Elasticsearch adds higher-level functionality to simplify working with Lucene, including a RESTful API to access Lucene's indexing and searching functionality. It also provides a distributed infrastructure capable of massive scalability, fault tolerance, and high availability. - -For larger cloud-native applications with complex search requirements, Elasticsearch is available as a managed service in Azure. The Microsoft Azure Marketplace features preconfigured templates that developers can use to quickly deploy an Elasticsearch cluster on Azure. Using the Azure-managed offering, you can deploy up to 50 data nodes, 20 coordinating nodes, and three dedicated master nodes. - -## Summary - -This chapter presented a detailed look at data in cloud-native systems. We started by contrasting data storage in monolithic applications with data storage patterns in cloud-native systems. We looked at data patterns implemented in cloud-native systems, including cross-service queries, distributed transactions, and patterns to deal with high-volume systems. We contrasted SQL with NoSQL data.
We looked at data storage options available in Azure that include both Microsoft-centric and open-source options. Finally, we discussed caching and Elasticsearch in a cloud-native application. - -### References - -- [Command and Query Responsibility Segregation (CQRS) pattern](/azure/architecture/patterns/cqrs) - -- [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing) - -- [Why isn't RDBMS Partition Tolerant in CAP Theorem and why is it Available?](https://stackoverflow.com/questions/36404765/why-isnt-rdbms-partition-tolerant-in-cap-theorem-and-why-is-it-available) - -- [Materialized View](/azure/architecture/patterns/materialized-view) - -- All you really need to know about open source databases (IBM blog) - -- [Compensating Transaction pattern](/azure/architecture/patterns/compensating-transaction) - -- [Saga Pattern](https://microservices.io/patterns/data/saga.html) - -- [Saga Patterns | How to implement business transactions using microservices](https://blog.couchbase.com/saga-pattern-implement-business-transactions-using-microservices-part/) - -- [Getting Behind the 9-Ball: Cosmos DB Consistency Levels Explained](https://blog.jeremylikness.com/blog/2018-03-23_getting-behind-the-9ball-cosmosdb-consistency-levels/) - -- [On RDBMS, NoSQL and NewSQL databases. Interview with John Ryan](http://www.odbms.org/blog/2018/03/on-rdbms-nosql-and-newsql-databases-interview-with-john-ryan/) - -- [SQL vs NoSQL vs NewSQL: The Full Comparison](https://www.xenonstack.com/blog/sql-vs-nosql-vs-newsql/) - -- [DASH: Four Properties of Kubernetes-Native Databases](https://thenewstack.io/dash-four-properties-of-kubernetes-native-databases/) - -- [CockroachDB](https://www.cockroachlabs.com/) - -- [TiDB](https://pingcap.com/en/) - -- [YugabyteDB](https://www.yugabyte.com/) - -- [Vitess](https://vitess.io/) - -- [Elasticsearch: The Definitive Guide](https://shop.oreilly.com/product/0636920028505.do) - -- [Introduction to Apache Lucene](https://www.baeldung.com/lucene) - ->[!div class="step-by-step"] ->[Previous](azure-caching.md) ->[Next](resiliency.md) diff --git a/docs/architecture/cloud-native/feature-flags.md b/docs/architecture/cloud-native/feature-flags.md deleted file mode 100644 index c2beb046e4046..0000000000000 --- a/docs/architecture/cloud-native/feature-flags.md +++ /dev/null @@ -1,89 +0,0 @@ ---- -title: Feature flags -description: Implement feature flags in cloud-native applications leveraging Azure App Config -author: robvet -ms.date: 04/06/2022 ---- - -# Feature flags - -[!INCLUDE [download-alert](includes/download-alert.md)] - -In chapter 1, we affirmed that cloud native is very much about speed and agility. Users expect rapid responsiveness, innovative features, and zero downtime. `Feature flags` are a modern deployment technique that helps increase agility for cloud-native applications. They enable you to deploy new features into a production environment, but restrict their availability. With the flick of a switch, you can activate a new feature for specific users without restarting the app or deploying new code. They separate the release of new features from their code deployment. - -Feature flags are built upon conditional logic that controls visibility of functionality for users at run time. In modern cloud-native systems, it's common to deploy new features into production early, but test them with a limited audience.
As confidence increases, the feature can be incrementally rolled out to wider audiences. - -Other use cases for feature flags include: - -- Restrict premium functionality to specific customer groups willing to pay higher subscription fees. -- Stabilize a system by quickly deactivating a problem feature, avoiding the risks of a rollback or immediate hotfix. -- Disable an optional feature with high resource consumption during peak usage periods. -- Conduct `experimental feature releases` to small user segments to validate feasibility and popularity. - -Feature flags also promote `trunk-based` development. It's a source-control branching model where developers collaborate on features in a single branch. The approach minimizes the risk and complexity of merging large numbers of long-running feature branches. Features are unavailable until activated. - -## Implementing feature flags - -At its core, a feature flag is a reference to a simple `decision object`. It returns a Boolean state of `on` or `off`. The flag typically wraps a block of code that encapsulates a feature capability. The state of the flag determines whether that code block executes for a given user. Figure 10-11 shows the implementation. - -```csharp -if (featureFlag) { - // Run this code block if the featureFlag value is true -} else { - // Run this code block if the featureFlag value is false -} -``` - -**Figure 10-11** - Simple feature flag implementation. - -Note how this approach separates the decision logic from the feature code. - -In chapter 1, we discussed the `Twelve-Factor App`. The guidance recommended keeping configuration settings external from application executable code. When needed, settings can be read in from the external source. Feature flag configuration values should also be independent of their codebase. By externalizing flag configuration in a separate repository, you can change flag state without modifying and redeploying the application. - -[Azure App Configuration](/azure/azure-app-configuration/overview) provides a centralized repository for feature flags. With it, you define different kinds of feature flags and manipulate their states quickly and confidently. You add the App Configuration client libraries to your application to enable feature flag functionality. Various programming language frameworks are supported. - -Feature flags can be easily implemented in an [ASP.NET Core service](/azure/azure-app-configuration/use-feature-flags-dotnet-core). Installing the .NET Feature Management libraries and App Configuration provider enables you to declaratively add feature flags to your code. They enable `FeatureGate` attributes so that you don't have to manually write `if` statements across your codebase. - -Once configured in your Startup class, you can add feature flag functionality at the controller, action, or middleware level. Figure 10-12 presents controller and action implementation: - -```csharp -[FeatureGate(MyFeatureFlags.FeatureA)] -public class ProductController : Controller -{ - ... -} -``` - -```csharp -[FeatureGate(MyFeatureFlags.FeatureA)] -public IActionResult UpdateProductStatus() -{ - return new ObjectResult(productDto); -} -``` - -**Figure 10-12** - Feature flag implementation in a controller and action. - -If a feature flag is disabled, the user will receive a 404 (Not Found) status code with no response body. - -Feature flags can also be injected directly into C# classes.
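Both the `FeatureGate` attribute and constructor injection assume that feature management has been registered at application startup. Here's a minimal registration sketch, assuming the `Microsoft.FeatureManagement.AspNetCore` package and the Azure App Configuration provider mentioned above; the environment variable name is illustrative only.

```csharp
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.FeatureManagement;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register feature management so FeatureGate attributes and
        // IFeatureManager injection are available throughout the app.
        services.AddFeatureManagement();
        services.AddControllersWithViews();
    }

    public void Configure(Microsoft.AspNetCore.Builder.IApplicationBuilder app) =>
        app.UseRouting();
}

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration(config =>
            {
                // Pull feature flag state from Azure App Configuration so flags
                // can be toggled without redeploying the application.
                config.AddAzureAppConfiguration(options =>
                    options.Connect(Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION"))
                           .UseFeatureFlags());
            })
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>())
            .Build()
            .Run();
}
```

With this registration in place, flag state changes made in App Configuration flow to the running app without a redeployment.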
Figure 10-13 shows feature flag injection: - -```csharp -public class ProductController : Controller -{ - private readonly IFeatureManager _featureManager; - - public ProductController(IFeatureManager featureManager) - { - _featureManager = featureManager; - } -} -``` - -**Figure 10-13** - Feature flag injection into a class. - -The Feature Management libraries manage the feature flag lifecycle behind the scenes. For example, to minimize high numbers of calls to the configuration store, the libraries cache flag states for a specified duration. They can guarantee the immutability of flag states during a request call. They also offer a `Point-in-time snapshot`. You can reconstruct the history of any key-value and provide its past value at any moment within the previous seven days. - ->[!div class="step-by-step"] ->[Previous](devops.md) ->[Next](infrastructure-as-code.md) diff --git a/docs/architecture/cloud-native/front-end-communication.md b/docs/architecture/cloud-native/front-end-communication.md deleted file mode 100644 index f25783d2373c8..0000000000000 --- a/docs/architecture/cloud-native/front-end-communication.md +++ /dev/null @@ -1,148 +0,0 @@ ---- -title: Front-end client communication -description: Learn how front-end clients communicate with cloud-native systems -author: robvet -ms.date: 04/06/2022 ---- - -# Front-end client communication - -[!INCLUDE [download-alert](includes/download-alert.md)] - -In a cloud-native system, front-end clients (mobile, web, and desktop applications) require a communication channel to interact with independent back-end microservices. - -What are the options? - -To keep things simple, a front-end client could *directly communicate* with the back-end microservices, shown in Figure 4-2. - -![Direct client to service communication](./media/direct-client-to-service-communication.png) - -**Figure 4-2.** Direct client to service communication - -With this approach, each microservice has a public endpoint that is accessible by front-end clients. In a production environment, you'd place a load balancer in front of the microservices, routing traffic proportionately. - -While simple to implement, direct client communication would be acceptable only for simple microservice applications. This pattern tightly couples front-end clients to core back-end services, opening the door for many problems, including: - -- Client susceptibility to back-end service refactoring. -- A wider attack surface as core back-end services are directly exposed. -- Duplication of cross-cutting concerns across each microservice. -- Overly complex client code - clients must keep track of multiple endpoints and handle failures in a resilient way. - -Instead, a widely accepted cloud design pattern is to implement an [API Gateway Service](../microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md) between the front-end applications and back-end services. The pattern is shown in Figure 4-3. - -![API Gateway Pattern](./media/api-gateway-pattern.png) - -**Figure 4-3.** API gateway pattern - -In the previous figure, note how the API Gateway service abstracts the back-end core microservices. Implemented as a web API, it acts as a *reverse proxy*, routing incoming traffic to the internal microservices. - -The gateway insulates the client from internal service partitioning and refactoring. If you change a back-end service, you accommodate for it in the gateway without breaking the client. 
It's also your first line of defense for cross-cutting concerns, such as identity, caching, resiliency, metering, and throttling. Many of these cross-cutting concerns can be off-loaded from the back-end core services to the gateway, simplifying the back-end services. - -Care must be taken to keep the API Gateway simple and fast. Typically, business logic is kept out of the gateway. A complex gateway risks becoming a bottleneck and eventually a monolith itself. Larger systems often expose multiple API Gateways segmented by client type (mobile, web, desktop) or back-end functionality. The [Backend for Frontends](/azure/architecture/patterns/backends-for-frontends) pattern provides direction for implementing multiple gateways. The pattern is shown in Figure 4-4. - -![Backend for Frontend Pattern](./media/backend-for-frontend-pattern.png) - -**Figure 4-4.** Backend for frontend pattern - -Note in the previous figure how incoming traffic is sent to a specific API gateway - based upon client type: web, mobile, or desktop app. This approach makes sense as the capabilities of each device differ significantly across form factor, performance, and display limitations. Typically mobile applications expose less functionality than a browser or desktop applications. Each gateway can be optimized to match the capabilities and functionality of the corresponding device. - -## Simple Gateways - -To start, you could build your own API Gateway service. A quick search of GitHub will provide many examples. - -For simple .NET cloud-native applications, you might consider the [Ocelot Gateway](https://github.com/ThreeMammals/Ocelot). Open source and created for .NET microservices, it's lightweight, fast, scalable. Like any API Gateway, its primary functionality is to forward incoming HTTP requests to downstream services. Additionally, it supports a wide variety of capabilities that are configurable in a .NET middleware pipeline. - -[YARP](https://github.com/microsoft/reverse-proxy) (Yet Another Reverse proxy) is another open source reverse proxy led by a group of Microsoft product teams. Downloadable as a NuGet package, YARP plugs into the ASP.NET framework as middleware and is highly customizable. You'll find YARP [well-documented](https://microsoft.github.io/reverse-proxy/articles/getting-started.html) with various usage examples. - -For enterprise cloud-native applications, there are several managed Azure services that can help jump-start your efforts. - -## Azure Application Gateway - -For simple gateway requirements, you may consider [Azure Application Gateway](/azure/application-gateway/overview). Available as an Azure [PaaS service](https://azure.microsoft.com/overview/what-is-paas/), it includes basic gateway features such as URL routing, SSL termination, and a Web Application Firewall. The service supports [Layer-7 load balancing](https://www.nginx.com/resources/glossary/layer-7-load-balancing/) capabilities. With Layer 7, you can route requests based on the actual content of an HTTP message, not just low-level TCP network packets. - -Throughout this book, we evangelize hosting cloud-native systems in [Kubernetes](https://www.infoworld.com/article/3268073/what-is-kubernetes-your-next-application-platform.html). A container orchestrator, Kubernetes automates the deployment, scaling, and operational concerns of containerized workloads. Azure Application Gateway can be configured as an API gateway for [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/) cluster. 
- -The [Application Gateway Ingress Controller](https://azure.github.io/application-gateway-kubernetes-ingress/) enables Azure Application Gateway to work directly with [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/). Figure 4.5 shows the architecture. - -![Application Gateway Ingress Controller](./media/application-gateway-ingress-controller.png) - -**Figure 4-5.** Application Gateway Ingress Controller - -Kubernetes includes a built-in feature that supports HTTP (Level 7) load balancing, called [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/). Ingress defines a set of rules for how microservice instances inside AKS can be exposed to the outside world. In the previous image, the ingress controller interprets the ingress rules configured for the cluster and automatically configures the Azure Application Gateway. Based on those rules, the Application Gateway routes traffic to microservices running inside AKS. The ingress controller listens for changes to ingress rules and makes the appropriate changes to the Azure Application Gateway. - -## Azure API Management - -For moderate to large-scale cloud-native systems, you may consider [Azure API Management](https://azure.microsoft.com/services/api-management/). It's a cloud-based service that not only solves your API Gateway needs, but provides a full-featured developer and administrative experience. API Management is shown in Figure 4-6. - -![Azure API Management](./media/azure-api-management.png) - -**Figure 4-6.** Azure API Management - -To start, API Management exposes a gateway server that allows controlled access to back-end services based upon configurable rules and policies. These services can be in the Azure cloud, your on-prem data center, or other public clouds. API keys and JWT tokens determine who can do what. All traffic is logged for analytical purposes. - -For developers, API Management offers a developer portal that provides access to services, documentation, and sample code for invoking them. Developers can use Swagger/Open API to inspect service endpoints and analyze their usage. The service works across the major development platforms: .NET, Java, Golang, and more. - -The publisher portal exposes a management dashboard where administrators expose APIs and manage their behavior. Service access can be granted, service health monitored, and service telemetry gathered. Administrators apply *policies* to each endpoint to affect behavior. [Policies](/azure/api-management/api-management-howto-policies) are pre-built statements that execute sequentially for each service call. Policies are configured for an inbound call, outbound call, or invoked upon an error. Policies can be applied at different service scopes as to enable deterministic ordering when combining policies. The product ships with a large number of prebuilt [policies](/azure/api-management/api-management-policies). - -Here are examples of how policies can affect the behavior of your cloud-native services: - -- Restrict service access. -- Enforce authentication. -- Throttle calls from a single source, if necessary. -- Enable caching. -- Block calls from specific IP addresses. -- Control the flow of the service. -- Convert requests from SOAP to REST or between different data formats, such as from XML to JSON. - -Azure API Management can expose back-end services that are hosted anywhere – in the cloud or your data center. 
For legacy services that you may expose in your cloud-native systems, it supports both REST and SOAP APIs. Even other Azure services can be exposed through API Management. You could place a managed API on top of an Azure backing service like [Azure Service Bus](https://azure.microsoft.com/services/service-bus/) or [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/). Azure API Management doesn't include built-in load-balancing support and should be used in conjunction with a load-balancing service. - -Azure API Management is available across [four different tiers](https://azure.microsoft.com/pricing/details/api-management/): - -- Developer -- Basic -- Standard -- Premium - -The Developer tier is meant for non-production workloads and evaluation. The other tiers offer progressively more power, features, and higher service level agreements (SLAs). The Premium tier provides [Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) and [multi-region support](/azure/api-management/api-management-howto-deploy-multi-region). All tiers have a fixed price per hour. - -The Azure cloud also offers a [serverless tier](https://azure.microsoft.com/blog/announcing-azure-api-management-for-serverless-architectures/) for Azure API Management. Referred to as the *consumption pricing tier*, the service is a variant of API Management designed around the serverless computing model. Unlike the "pre-allocated" pricing tiers previously shown, the consumption tier provides instant provisioning and pay-per-action pricing. - -It enables API Gateway features for the following use cases: - -- Microservices implemented using serverless technologies such as [Azure Functions](/azure/azure-functions/functions-overview) and [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/). -- Azure backing service resources such as Service Bus queues and topics, Azure storage, and others. -- Microservices where traffic has occasional large spikes but remains low most the time. - -The consumption tier uses the same underlying service API Management components, but employs an entirely different architecture based on dynamically allocated resources. It aligns perfectly with the serverless computing model: - -- No infrastructure to manage. -- No idle capacity. -- High-availability. -- Automatic scaling. -- Cost is based on actual usage. - -The new consumption tier is a great choice for cloud-native systems that expose serverless resources as APIs. - -## Real-time communication - -Real-time, or push, communication is another option for front-end applications that communicate with back-end cloud-native systems over HTTP. Applications, such as financial-tickers, online education, gaming, and job-progress updates, require instantaneous, real-time responses from the back-end. With normal HTTP communication, there's no way for the client to know when new data is available. The client must continually *poll* or send requests to the server. With *real-time* communication, the server can push new data to the client at any time. - -Real-time systems are often characterized by high-frequency data flows and large numbers of concurrent client connections. Manually implementing real-time connectivity can quickly become complex, requiring non-trivial infrastructure to ensure scalability and reliable messaging to connected clients. You could find yourself managing an instance of Azure Redis Cache and a set of load balancers configured with sticky sessions for client affinity. 
- -[Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) is a fully managed Azure service that simplifies real-time communication for your cloud-native applications. Technical implementation details like capacity provisioning, scaling, and persistent connections are abstracted away. They're handled for you with a 99.9% service-level agreement. You focus on application features, not infrastructure plumbing. - -Once enabled, a cloud-based HTTP service can push content updates directly to connected clients, including browser, mobile and desktop applications. Clients are updated without the need to poll the server. Azure SignalR abstracts the transport technologies that create real-time connectivity, including WebSockets, Server-Side Events, and Long Polling. Developers focus on sending messages to all or specific subsets of connected clients. - -Figure 4-7 shows a set of HTTP Clients connecting to a Cloud-native application with Azure SignalR enabled. - -![Azure SignalR](./media/azure-signalr-service.png) - -**Figure 4-7.** Azure SignalR - -Another advantage of Azure SignalR Service comes with implementing Serverless cloud-native services. Perhaps your code is executed on demand with Azure Functions triggers. This scenario can be tricky because your code doesn't maintain long connections with clients. Azure SignalR Service can handle this situation since the service already manages connections for you. - -Azure SignalR Service closely integrates with other Azure services, such as Azure SQL Database, Service Bus, or Redis Cache, opening up many possibilities for your cloud-native applications. - ->[!div class="step-by-step"] ->[Previous](communication-patterns.md) ->[Next](service-to-service-communication.md) diff --git a/docs/architecture/cloud-native/grpc.md b/docs/architecture/cloud-native/grpc.md deleted file mode 100644 index 12d9c4ea4d4c0..0000000000000 --- a/docs/architecture/cloud-native/grpc.md +++ /dev/null @@ -1,113 +0,0 @@ ---- -title: gRPC -description: Learn about gRPC, its role in cloud-native applications, and how it differs from HTTP RESTful communication. -author: robvet -no-loc: [Blazor, "Blazor WebAssembly"] -ms.date: 12/14/2023 ---- - -# gRPC - -[!INCLUDE [download-alert](includes/download-alert.md)] - -So far in this book, we've focused on [REST-based](/azure/architecture/best-practices/api-design) communication. We've seen that REST is a flexible architectural style that defines CRUD-based operations against entity resources. Clients interact with resources across HTTP with a request/response communication model. While REST is widely implemented, a newer communication technology, gRPC, has gained tremendous momentum across the cloud-native community. - -## What is gRPC? - -gRPC is a modern, high-performance framework that evolves the age-old [remote procedure call (RPC)](https://en.wikipedia.org/wiki/Remote_procedure_call) protocol. At the application level, gRPC streamlines messaging between clients and back-end services. Originating from Google, gRPC is open source and part of the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/) ecosystem of cloud-native offerings. CNCF considers gRPC an [incubating project](https://github.com/cncf/toc/blob/main/process/graduation_criteria.md). Incubating means end users are using the technology in production applications, and the project has a healthy number of contributors. - -A typical gRPC client app will expose a local, in-process function that implements a business operation. 
Under the covers, that local function invokes another function on a remote machine. What appears to be a local call essentially becomes a transparent out-of-process call to a remote service. The RPC plumbing abstracts the point-to-point networking communication, serialization, and execution between computers. - -In cloud-native applications, developers often work across programming languages, frameworks, and technologies. This *interoperability* complicates message contracts and the plumbing required for cross-platform communication. gRPC provides a "uniform horizontal layer" that abstracts these concerns. Developers code in their native platform focused on business functionality, while gRPC handles communication plumbing. - -gRPC offers comprehensive support across most popular development stacks, including Java, JavaScript, C#, Go, Swift, and NodeJS. - -## gRPC Benefits - -gRPC uses HTTP/2 for its transport protocol. While compatible with HTTP 1.1, HTTP/2 features many advanced capabilities: - -- A binary framing protocol for data transport - unlike HTTP 1.1, which is text based. -- Multiplexing support for sending multiple parallel requests over the same connection - HTTP 1.1 limits processing to one request/response message at a time. -- Bidirectional full-duplex communication for sending both client requests and server responses simultaneously. -- Built-in streaming enabling requests and responses to asynchronously stream large data sets. -- Header compression that reduces network usage. - -gRPC is lightweight and highly performant. It can be up to 8x faster than JSON serialization with messages 60-80% smaller. In Microsoft [Windows Communication Foundation (WCF)](../../framework/wcf/whats-wcf.md) parlance, gRPC performance exceeds the speed and efficiency of the highly optimized [NetTCP bindings](/dotnet/api/system.servicemodel.nettcpbinding?view=netframework-4.8&preserve-view=true). Unlike NetTCP, which favors the Microsoft stack, gRPC is cross-platform. - -## Protocol Buffers - -gRPC embraces an open-source technology called [Protocol Buffers](https://developers.google.com/protocol-buffers/docs/overview). They provide a highly efficient and platform-neutral serialization format for serializing structured messages that services send to each other. Using a cross-platform Interface Definition Language (IDL), developers define a service contract for each microservice. The contract, implemented as a text-based `.proto` file, describes the methods, inputs, and outputs for each service. The same contract file can be used for gRPC clients and services built on different development platforms. - -Using the proto file, the Protobuf compiler, `protoc`, generates both client and service code for your target platform. The code includes the following components: - -- Strongly typed objects, shared by the client and service, that represent the service operations and data elements for a message. -- A strongly typed base class with the required network plumbing that the remote gRPC service can inherit and extend. -- A client stub that contains the required plumbing to invoke the remote gRPC service. - -At run time, each message is serialized as a standard Protobuf representation and exchanged between the client and remote service. Unlike JSON or XML, Protobuf messages are serialized as compiled binary bytes. - -## gRPC support in .NET - -gRPC is integrated into .NET Core 3.0 SDK and later. 
The following tools support it: - -- Visual Studio 2022 with the ASP.NET and web development workload installed -- Visual Studio Code -- The `dotnet` CLI - -The SDK includes tooling for endpoint routing, built-in IoC, and logging. The open-source Kestrel web server supports HTTP/2 connections. Figure 4-20 shows a Visual Studio 2022 template that scaffolds a skeleton project for a gRPC service. Note how .NET fully supports Windows, Linux, and macOS. - -![gRPC Support in Visual Studio 2022](./media/visual-studio-2022-grpc-template.png) - -**Figure 4-20**. gRPC support in Visual Studio 2022 - -Figure 4-21 shows the skeleton gRPC service generated from the built-in scaffolding included in Visual Studio 2022. - -![gRPC project in Visual Studio 2022](./media/grpc-project.png ) - -**Figure 4-21**. gRPC project in Visual Studio 2022 - -In the previous figure, note the proto description file and service code. As you'll see shortly, Visual Studio generates additional configuration in both the Startup class and underlying project file. - -## gRPC usage - -Favor gRPC for the following scenarios: - -- Synchronous backend microservice-to-microservice communication where an immediate response is required to continue processing. -- Polyglot environments that need to support mixed programming platforms. -- Low latency and high throughput communication where performance is critical. -- Point-to-point real-time communication - gRPC can push messages in real time without polling and has excellent support for bi-directional streaming. -- Network constrained environments – binary gRPC messages are always smaller than an equivalent text-based JSON message. - -At the time of this writing, gRPC is primarily used with backend services. Modern browsers can't provide the level of HTTP/2 control required to support a front-end gRPC client. That said, there's support for [gRPC-Web with .NET](https://devblogs.microsoft.com/aspnet/grpc-web-for-net-now-available/) that enables gRPC communication from browser-based apps built with JavaScript or Blazor WebAssembly technologies. [gRPC-Web](https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md) enables an ASP.NET Core gRPC app to support gRPC features in browser apps: - -- Strongly typed, code-generated clients -- Compact Protobuf messages -- Server streaming - -## gRPC implementation - -The microservice reference architecture, [eShop on Containers](https://github.com/dotnet-architecture/eShopOnContainers), from Microsoft, shows how to implement gRPC services in .NET applications. Figure 4-22 presents the back-end architecture. - -![Backend architecture for eShop on Containers](./media/eshop-with-aggregators.png) - -**Figure 4-22**. Backend architecture for eShop on Containers - -In the previous figure, note how eShop embraces the [Backend for Frontends pattern](/azure/architecture/patterns/backends-for-frontends) (BFF) by exposing multiple API gateways. We discussed the BFF pattern earlier in this chapter. Pay close attention to the Aggregator microservice (in gray) that sits between the Web-Shopping API Gateway and backend Shopping microservices. The Aggregator receives a single request from a client, dispatches it to various microservices, aggregates the results, and sends them back to the requesting client. Such operations typically require synchronous communication as to produce an immediate response. In eShop, backend calls from the Aggregator are performed using gRPC as shown in Figure 4-23. 
- -![gRPC in eShop on Containers](./media/grpc-implementation.png) - -**Figure 4-23**. gRPC in eShop on Containers - -gRPC communication requires both client and server components. In the previous figure, note how the Shopping Aggregator implements a gRPC client. The client makes synchronous gRPC calls (in red) to backend microservices, each of which implement a gRPC server. Both the client and server take advantage of the built-in gRPC plumbing from the .NET SDK. Client-side *stubs* provide the plumbing to invoke remote gRPC calls. Server-side components provide gRPC plumbing that custom service classes can inherit and consume. - -Microservices that expose both a RESTful API and gRPC communication require multiple endpoints to manage traffic. You would open an endpoint that listens for HTTP traffic for the RESTful calls and another for gRPC calls. The gRPC endpoint must be configured for the HTTP/2 protocol that is required for gRPC communication. - -While we strive to decouple microservices with asynchronous communication patterns, some operations require direct calls. gRPC should be the primary choice for direct synchronous communication between microservices. Its high-performance communication protocol, based on HTTP/2 and protocol buffers, make it a perfect choice. - -## Looking ahead - -Looking ahead, gRPC will continue to gain traction for cloud-native systems. The performance benefits and ease of development are compelling. However, REST will likely be around for a long time. It excels for publicly exposed APIs and for backward compatibility reasons. - ->[!div class="step-by-step"] ->[Previous](service-to-service-communication.md) ->[Next](service-mesh-communication-infrastructure.md) diff --git a/docs/architecture/cloud-native/identity-server.md b/docs/architecture/cloud-native/identity-server.md deleted file mode 100644 index 6725202e50614..0000000000000 --- a/docs/architecture/cloud-native/identity-server.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: IdentityServer for Cloud Native Apps -description: Architecting Cloud Native .NET Apps for Azure | IdentityServer -ms.date: 04/06/2022 ---- - -# IdentityServer for cloud-native applications - -[!INCLUDE [download-alert](includes/download-alert.md)] - -IdentityServer is an authentication server that implements OpenID Connect (OIDC) and OAuth 2.0 standards for ASP.NET Core. It's designed to provide a common way to authenticate requests to all of your applications, whether they're web, native, mobile, or API endpoints. IdentityServer can be used to implement Single Sign-On (SSO) for multiple applications and application types. It can be used to authenticate actual users via sign-in forms and similar user interfaces as well as service-based authentication that typically involves token issuance, verification, and renewal without any user interface. IdentityServer is designed to be a customizable solution. Each instance is typically customized to suit an individual organization and/or set of applications' needs. - -## Common web app scenarios - -Typically, applications need to support some or all of the following scenarios: - -- Human users accessing web applications with a browser. -- Human users accessing back-end Web APIs from browser-based apps. -- Human users on mobile/native clients accessing back-end Web APIs. -- Other applications accessing back-end Web APIs (without an active user or user interface). -- Any application may need to interact with other Web APIs, using its own identity or delegating to the user's identity. 
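The last scenario in the list above, an application calling a Web API without an active user, is commonly handled with the OAuth2 client credentials flow. The sketch below acquires a token from an STS and calls a protected API with it; the endpoint addresses, client ID, secret, and scope are placeholder values, not settings from a real system.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;

var http = new HttpClient();

// Request a token using the client credentials grant (machine-to-machine, no user involved).
var tokenResponse = await http.PostAsync(
    "https://identity.example.com/connect/token",
    new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["grant_type"] = "client_credentials",
        ["client_id"] = "ordering-batch-job",
        ["client_secret"] = "<secret from configuration>",
        ["scope"] = "catalog.fullaccess"
    }));

using var payload = JsonDocument.Parse(await tokenResponse.Content.ReadAsStringAsync());
var accessToken = payload.RootElement.GetProperty("access_token").GetString();

// Call the protected Web API with the bearer token.
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);
var items = await http.GetStringAsync("https://api.example.com/catalog/items");
```

In a production service, the token would be cached and refreshed before it expires rather than requested on every call.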
- -![Application types and scenarios](./media/application-types.png) - -**Figure 8-1**. Application types and scenarios. - -In each of these scenarios, the exposed functionality needs to be secured against unauthorized use. At a minimum, this typically requires authenticating the user or principal making a request for a resource. This authentication may use one of several common protocols such as SAML2p, WS-Fed, or OpenID Connect. Communicating with APIs typically uses the OAuth2 protocol and its support for security tokens. Separating these critical cross-cutting security concerns and their implementation details from the applications themselves ensures consistency and improves security and maintainability. Outsourcing these concerns to a dedicated product like IdentityServer removes the need for every application to solve these problems itself. - -IdentityServer provides middleware that runs within an ASP.NET Core application and adds support for OpenID Connect and OAuth2 (see [supported specifications](https://docs.duendesoftware.com/identityserver/v6/overview/specs/)). Organizations would create their own ASP.NET Core app using IdentityServer middleware to act as the STS for all of their token-based security protocols. The IdentityServer middleware exposes endpoints to support standard functionality, including: - -- Authorize (authenticate the end user) -- Token (request a token programmatically) -- Discovery (metadata about the server) -- User Info (get user information with a valid access token) -- Device Authorization (used to start device flow authorization) -- Introspection (token validation) -- Revocation (token revocation) -- End Session (trigger single sign-out across all apps) - -## Getting started - -IdentityServer4 is available under a dual license: - -* RPL - lets you use IdentityServer4 for free in open-source work -* Paid - lets you use IdentityServer4 in commercial scenarios - -For more information about pricing, see the official product's [pricing page](https://duendesoftware.com/products/identityserver). - -You can add it to your applications using its NuGet packages. The main package is [IdentityServer4](https://www.nuget.org/packages/IdentityServer4/), which has been downloaded over four million times. The base package doesn't include any user interface code and only supports in-memory configuration. To use it with a database, you'll also want a data provider like [IdentityServer4.EntityFramework](https://www.nuget.org/packages/IdentityServer4.EntityFramework), which uses Entity Framework Core to store configuration and operational data for IdentityServer. For the user interface, you can copy files from the [Quickstart UI repository](https://github.com/IdentityServer/IdentityServer4.Quickstart.UI) into your ASP.NET Core MVC application to add support for sign-in and sign-out using the IdentityServer middleware. - -## Configuration - -IdentityServer supports different kinds of protocols and social authentication providers that can be configured as part of each custom installation. This is typically done in the ASP.NET Core application's `Program` class (or in the `Startup` class in the `ConfigureServices` method). The configuration involves specifying the supported protocols and the paths to the servers and endpoints that will be used.
Figure 8-2 shows an example configuration taken from the IdentityServer4 Quickstart UI project: - -```csharp -public class Startup -{ - public void ConfigureServices(IServiceCollection services) - { - services.AddMvc(); - - // some details omitted - services.AddIdentityServer(); - - services.AddAuthentication() - .AddGoogle("Google", options => - { - options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme; - - options.ClientId = ""; - options.ClientSecret = ""; - }) - .AddOpenIdConnect("demoidsrv", "IdentityServer", options => - { - options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme; - options.SignOutScheme = IdentityServerConstants.SignoutScheme; - - options.Authority = "https://demo.identityserver.io/"; - options.ClientId = "implicit"; - options.ResponseType = "id_token"; - options.SaveTokens = true; - options.CallbackPath = new PathString("/signin-idsrv"); - options.SignedOutCallbackPath = new PathString("/signout-callback-idsrv"); - options.RemoteSignOutPath = new PathString("/signout-idsrv"); - - options.TokenValidationParameters = new TokenValidationParameters - { - NameClaimType = "name", - RoleClaimType = "role" - }; - }); - } -} -``` - -**Figure 8-2**. Configuring IdentityServer. - -## JavaScript clients - -Many cloud-native applications use server-side APIs and rich client single page applications (SPAs) on the front end. IdentityServer ships a [JavaScript client](https://docs.duendesoftware.com/identityserver/v6/quickstarts/js_clients/) (`oidc-client.js`) via NPM that can be added to SPAs to enable them to use IdentityServer for sign in, sign out, and token-based authentication of web APIs. - -## References - -- [IdentityServer documentation](https://docs.duendesoftware.com/identityserver/v6/) -- [Application types](/azure/active-directory/develop/app-types) -- [JavaScript OIDC client](https://docs.duendesoftware.com/identityserver/v6/quickstarts/js_clients/) - ->[!div class="step-by-step"] ->[Previous](azure-active-directory.md) ->[Next](security.md) diff --git a/docs/architecture/cloud-native/identity.md b/docs/architecture/cloud-native/identity.md deleted file mode 100644 index c713ec0359b0c..0000000000000 --- a/docs/architecture/cloud-native/identity.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: Cloud-native identity -description: Architecting Cloud Native .NET Apps for Azure | Identity -ms.date: 04/06/2022 ---- - -# Cloud-native identity - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Most software applications need to have some knowledge of the user or process that is calling them. The user or process interacting with an application is known as a security principal, and the process of authenticating and authorizing these principals is known as identity management, or simply *identity*. Simple applications may include all of their identity management within the application, but this approach doesn't scale well with many applications and many kinds of security principals. Windows supports the use of Active Directory to provide centralized authentication and authorization. - - - -While this solution is effective within corporate networks, it isn't designed for use by users or applications that are outside of the AD domain. With the growth of Internet-based applications and the rise of cloud-native apps, security models have evolved. - -In today's cloud-native identity model, architecture is assumed to be distributed. Apps can be deployed anywhere and may communicate with other apps anywhere. 
Clients may communicate with these apps from anywhere, and in fact, clients may consist of any combination of platforms and devices. Cloud-native identity solutions use open standards to achieve secure application access from clients. These clients range from human users on PCs or phones, to other apps hosted anywhere online, to set-top boxes and IOT devices running any software platform anywhere in the world. - -Modern cloud-native identity solutions typically use access tokens that are issued by a secure token service/server (STS) to a security principal once their identity is determined. The access token, typically a JSON Web Token (JWT), includes *claims* about the security principal. These claims will minimally include the user's identity but may also include other claims that can be used by applications to determine the level of access to grant the principal. - - - -Typically, the STS is only responsible for authenticating the principal. Determining their level of access to resources is left to other parts of the application. - -## References - -- [Microsoft identity platform](/azure/active-directory/develop/) - ->[!div class="step-by-step"] ->[Previous](azure-monitor.md) ->[Next](authentication-authorization.md) diff --git a/docs/architecture/cloud-native/includes/download-alert.md b/docs/architecture/cloud-native/includes/download-alert.md deleted file mode 100644 index f276f70524046..0000000000000 --- a/docs/architecture/cloud-native/includes/download-alert.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -author: IEvangelist -ms.author: dapine -ms.date: 04/06/2022 -ms.topic: include ---- - -> [!TIP] -> :::row::: -> :::column span="3"::: -> This content is an excerpt from the eBook, Architecting Cloud Native .NET Applications for Azure, available on [.NET Docs](/dotnet/architecture/cloud-native) or as a free downloadable PDF that can be read offline. -> -> > [!div class="nextstepaction"] -> > [Download PDF](https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf) -> :::column-end::: -> :::column::: -> :::image type="content" source="../media/cover-thumbnail.png" alt-text="Cloud Native .NET apps for Azure eBook cover thumbnail."::: -> :::column-end::: -> :::row-end::: diff --git a/docs/architecture/cloud-native/index.md b/docs/architecture/cloud-native/index.md deleted file mode 100644 index e569c039f08ee..0000000000000 --- a/docs/architecture/cloud-native/index.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: Architecting Cloud Native .NET Applications for Azure -description: A guide for building cloud-native applications leveraging containers, microservices, and serverless features of Azure. -author: ardalis -ms.date: 01/10/2022 ---- - -# Architecting Cloud Native .NET Applications for Azure - -![cover image](./media/cover.png) - -**EDITION v1.0.3** - -Refer [changelog](https://aka.ms/cn-ebook-changelog) for the book updates and community contributions. - -PUBLISHED BY - -Microsoft Developer Division, .NET, and Visual Studio product teams - -A division of Microsoft Corporation - -One Microsoft Way - -Redmond, Washington 98052-6399 - -Copyright © 2023 by Microsoft Corporation - -All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher. - -This book is provided "as-is" and expresses the author's views and opinions. The views, opinions, and information expressed in this book, including URL and other Internet website references, may change without notice. 
- -Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or should be inferred. - -Microsoft and the trademarks listed at on the "Trademarks" webpage are trademarks of the Microsoft group of companies. - -Mac and macOS are trademarks of Apple Inc. - -The Docker whale logo is a registered trademark of Docker, Inc. Used by permission. - -All other marks and logos are property of their respective owners. - -Authors: - -> **Rob Vettor**, Principal MTC (Microsoft Technology Center) Architect for Cloud App Innovation, Microsoft -> -> **Steve "ardalis" Smith**, Software Architect and Trainer - [Ardalis.com](https://ardalis.com) - -Participants and Reviewers: - -> **Cesar De la Torre**, Principal Program Manager, .NET team, Microsoft -> -> **Nish Anil**, Senior Program Manager, .NET team, Microsoft -> -> **Jeremy Likness**, Senior Program Manager, .NET team, Microsoft -> -> **Cecil Phillip**, Senior Cloud Advocate, Microsoft -> -> **Sumit Ghosh**, Principal Consultant at Neudesic - -Editors: - -> **Maira Wenzel**, Program Manager, .NET team, Microsoft - -> **David Pine**, Senior Content Developer, .NET docs, Microsoft - -## Version - -This guide has been written to cover **.NET 7** version along with many additional updates related to the same “wave” of technologies (that is, Azure and additional third-party technologies) coinciding in time with the .NET 7 release. - -## Who should use this guide - -The audience for this guide is mainly developers, development leads, and architects who are interested in learning how to build applications designed for the cloud. - -A secondary audience is technical decision-makers who plan to choose whether to build their applications using a cloud-native approach. - -## How you can use this guide - -This guide begins by defining cloud native and introducing a reference application built using cloud-native principles and technologies. Beyond these first two chapters, the rest of the book is broken up into specific chapters focused on topics common to most cloud-native applications. You can jump to any of these chapters to learn about cloud-native approaches to: - -- Data and data access -- Communication patterns -- Scaling and scalability -- Application resiliency -- Monitoring and health -- Identity and security -- DevOps - -This guide is available both in [PDF](https://dotnet.microsoft.com/download/e-book/cloud-native-azure/pdf) form and online. Feel free to forward this document or links to its online version to your team to help ensure common understanding of these topics. Most of these topics benefit from a consistent understanding of the underlying principles and patterns, as well as the trade-offs involved in decisions related to these topics. Our goal with this document is to equip teams and their leaders with the information they need to make well-informed decisions for their applications' architecture, development, and hosting. 
- -[!INCLUDE [feedback](../includes/feedback.md)] - ->[!div class="step-by-step"] ->[Next](introduction.md) diff --git a/docs/architecture/cloud-native/infrastructure-as-code.md b/docs/architecture/cloud-native/infrastructure-as-code.md deleted file mode 100644 index b985be202b5da..0000000000000 --- a/docs/architecture/cloud-native/infrastructure-as-code.md +++ /dev/null @@ -1,129 +0,0 @@ ---- -title: Infrastructure as code -description: Embracing Infrastructure as Code (IaC) with cloud-native applications -ms.date: 04/06/2022 -ms.custom: devx-track-terraform, devx-track-arm-template, devx-track-azurecli ---- - -# Infrastructure as code - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Cloud-native systems embrace microservices, containers, and modern system design to achieve speed and agility. They provide automated build and release stages to ensure consistent and quality code. But, that's only part of the story. How do you provision the cloud environments upon which these systems run? - -Modern cloud-native applications embrace the widely accepted practice of [Infrastructure as Code](/devops/deliver/what-is-infrastructure-as-code), or `IaC`. With IaC, you automate platform provisioning. You essentially apply software engineering practices such as testing and versioning to your DevOps practices. Your infrastructure and deployments are automated, consistent, and repeatable. Just as continuous delivery automated the traditional model of manual deployments, Infrastructure as Code (IaC) is evolving how application environments are managed. - -Tools like Azure Resource Manager (ARM), Terraform, and the Azure Command Line Interface (CLI) enable you to declaratively script the cloud infrastructure you require. - -## Azure Resource Manager templates - -ARM stands for [Azure Resource Manager](/azure/azure-resource-manager/management/overview). It's an API provisioning engine that is built into Azure and exposed as an API service. ARM enables you to deploy, update, delete, and manage the resources contained in Azure resource group in a single, coordinated operation. You provide the engine with a JSON-based template that specifies the resources you require and their configuration. ARM automatically orchestrates the deployment in the correct order respecting dependencies. The engine ensures idempotency. If a desired resource already exists with the same configuration, provisioning will be ignored. - -Azure Resource Manager templates are a JSON-based language for defining various resources in Azure. The basic schema looks something like Figure 10-14. - -```json -{ - "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "", - "apiProfile": "", - "parameters": { }, - "variables": { }, - "functions": [ ], - "resources": [ ], - "outputs": { } -} -``` - -**Figure 10-14** - The schema for a Resource Manager template - -Within this template, one might define a storage container inside the resources section like so: - -```json -"resources": [ - { - "type": "Microsoft.Storage/storageAccounts", - "name": "[variables('storageAccountName')]", - "location": "[parameters('location')]", - "apiVersion": "2018-07-01", - "sku": { - "name": "[parameters('storageAccountType')]" - }, - "kind": "StorageV2", - "properties": {} - } - ], -``` - -**Figure 10-15** - An example of a storage account defined in a Resource Manager template - -An ARM template can be parameterized with dynamic environment and configuration information. 
Doing so enables it to be reused to define different environments, such as development, QA, or production. Normally, the template creates all resources within a single Azure resource group. It's possible to define multiple resource groups in a single Resource Manager template, if needed. You can delete all resources in an environment by deleting the resource group itself. Cost analysis can also be run at the resource group level, allowing for quick accounting of how much each environment is costing. - -There are many examples of ARM templates available in the [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) project on GitHub. They can help accelerate creating a new template or modifying an existing one. - -Resource Manager templates can be run in many ways. Perhaps the simplest is to paste them into the Azure portal. For experimental deployments, this method can be quick. They can also be run as part of a build or release process in Azure DevOps. There are tasks that will leverage connections into Azure to run the templates. Changes to Resource Manager templates are applied incrementally, meaning that adding a new resource only requires adding it to the template. The tooling will reconcile differences between the current resources and those defined in the template. Resources will then be created or altered so they match what is defined in the template. - -## Terraform - -Cloud-native applications are often constructed to be `cloud agnostic`. Being so means the application isn't tightly coupled to a particular cloud vendor and can be deployed to any public cloud. - -[Terraform](https://www.terraform.io/) is a commercial templating tool that can provision cloud-native applications across all the major cloud players: Azure, Google Cloud Platform, AWS, and AliCloud. Instead of using JSON as the template definition language, it uses the slightly more terse HCL (HashiCorp Configuration Language). - -An example Terraform file that does the same as the previous Resource Manager template (Figure 10-15) is shown in Figure 10-16: - -```terraform -provider "azurerm" { - version = "=1.28.0" -} - -resource "azurerm_resource_group" "testrg" { - name = "production" - location = "West US" -} - -resource "azurerm_storage_account" "testsa" { - name = "${var.storageAccountName}" - resource_group_name = "${azurerm_resource_group.testrg.name}" - location = "${var.region}" - account_tier = "${var.tier}" - account_replication_type = "${var.replicationType}" - -} -``` - -**Figure 10-16** - An example of a Terraform template - -Terraform also provides intuitive error messages for problematic templates. There's also a handy validate task that can be used in the build phase to catch template errors early. - -As with Resource Manager templates, command-line tools are available to deploy Terraform templates. There are also community-created tasks in Azure Pipelines that can validate and apply Terraform templates. - -Sometimes Terraform and ARM templates output meaningful values, such as a connection string to a newly created database. This information can be captured in the build pipeline and used in subsequent tasks. - -## Azure CLI Scripts and Tasks - -Finally, you can leverage [Azure CLI](/cli/azure/) to declaratively script your cloud infrastructure. Azure CLI scripts can be created, found, and shared to provision and configure almost any Azure resource. The CLI is simple to use with a gentle learning curve. Scripts are executed within either PowerShell or Bash.
They're also straightforward to debug, especially when compared with ARM templates. - -Azure CLI scripts work well when you need to tear down and redeploy your infrastructure. Updating an existing environment can be tricky. Many CLI commands aren't idempotent. That means they'll recreate the resource each time they're run, even if the resource already exists. It's always possible to add code that checks for the existence of each resource before creating it. But, doing so, your script can become bloated and difficult to manage. - -These scripts can also be embedded in Azure DevOps pipelines as `Azure CLI tasks`. Executing the pipeline invokes the script. - -Figure 10-17 shows a YAML snippet that lists the version of Azure CLI and the details of the subscription. Note how Azure CLI commands are included in an inline script. - -```yaml -- task: AzureCLI@2 - displayName: Azure CLI - inputs: - azureSubscription: - scriptType: ps - scriptLocation: inlineScript - inlineScript: | - az --version - az account show -``` - -**Figure 10-17** - Azure CLI script - -In the article, [What is Infrastructure as Code](/devops/deliver/what-is-infrastructure-as-code), Author Sam Guckenheimer describes how, "Teams who implement IaC can deliver stable environments rapidly and at scale. Teams avoid manual configuration of environments and enforce consistency by representing the desired state of their environments via code. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies. DevOps teams can work together with a unified set of practices and tools to deliver applications and their supporting infrastructure rapidly, reliably, and at scale." - ->[!div class="step-by-step"] ->[Previous](feature-flags.md) ->[Next](application-bundles.md) diff --git a/docs/architecture/cloud-native/infrastructure-resiliency-azure.md b/docs/architecture/cloud-native/infrastructure-resiliency-azure.md deleted file mode 100644 index 983e524ed57cd..0000000000000 --- a/docs/architecture/cloud-native/infrastructure-resiliency-azure.md +++ /dev/null @@ -1,102 +0,0 @@ ---- -title: Azure platform resiliency -description: Architecting Cloud Native .NET Apps for Azure | Cloud Infrastructure Resiliency with Azure -author: robvet -ms.date: 04/06/2022 ---- - -# Azure platform resiliency - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Building a reliable application in the cloud is different from traditional on-premises application development. While historically you purchased higher-end hardware to scale up, in a cloud environment you scale out. Instead of trying to prevent failures, the goal is to minimize their effects and keep the system stable. - -That said, reliable cloud applications display distinct characteristics: - -- They're resilient, recover gracefully from problems, and continue to function. -- They're highly available (HA) and run as designed in a healthy state with no significant downtime. - -Understanding how these characteristics work together - and how they affect cost - is essential to building a reliable cloud-native application. We'll next look at ways that you can build resiliency and availability into your cloud-native applications leveraging features from the Azure cloud. - -## Design with resiliency - -We've said resiliency enables your application to react to failure and still remain functional. 
The whitepaper, [Resilience in Azure whitepaper](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/Resilience%20in%20Azure.pdf), provides guidance for achieving resilience in the Azure platform. Here are some key recommendations: - -- *Hardware failure.* Build redundancy into the application by deploying components across different fault domains. For example, ensure that Azure VMs are placed in different racks by using Availability Sets. - -- *Datacenter failure.* Build redundancy into the application with fault isolation zones across datacenters. For example, ensure that Azure VMs are placed in different fault-isolated datacenters by using Azure Availability Zones. - -- *Regional failure.* Replicate the data and components into another region so that applications can be quickly recovered. For example, use Azure Site Recovery to replicate Azure VMs to another Azure region. - -- *Heavy load.* Load balance across instances to handle spikes in usage. For example, put two or more Azure VMs behind a load balancer to distribute traffic to all VMs. - -- *Accidental data deletion or corruption.* Back up data so it can be restored if there's any deletion or corruption. For example, use Azure Backup to periodically back up -your Azure VMs. - -## Design with redundancy - -Failures vary in scope of impact. A hardware failure, such as a failed disk, can affect a single node in a cluster. A failed network switch could affect an entire server rack. Less common failures, such as loss of power, could disrupt a whole datacenter. Rarely, an entire region becomes unavailable. - -[Redundancy](/azure/architecture/guide/design-principles/redundancy) is one way to provide application resilience. The exact level of redundancy needed depends upon your business requirements and will affect both the cost and complexity of your system. For example, a multi-region deployment is more expensive and more complex to manage than a single-region deployment. You'll need operational procedures to manage failover and failback. The additional cost and complexity might be justified for some business scenarios, but not others. - -To architect redundancy, you need to identify the critical paths in your application, and then determine if there's redundancy at each point in the path? If a subsystem should fail, will the application fail over to something else? Finally, you need a clear understanding of those features built into the Azure cloud platform that you can leverage to meet your redundancy requirements. Here are recommendations for architecting redundancy: - -- *Deploy multiple instances of services.* If your application depends on a single instance of a service, it creates a single point of failure. Provisioning multiple instances improves both resiliency and scalability. When hosting in Azure Kubernetes Service, you can declaratively configure redundant instances (replica sets) in the Kubernetes manifest file. The replica count value can be managed programmatically, in the portal, or through autoscaling features. - -- *Leveraging a load balancer.* Load-balancing distributes your application's requests to healthy service instances and automatically removes unhealthy instances from rotation. When deploying to Kubernetes, load balancing can be specified in the Kubernetes manifest file in the Services section. - -- *Plan for multiregion deployment.* If you deploy your application to a single region, and that region becomes unavailable, your application will also become unavailable. 
This may be unacceptable under the terms of your application's service level agreements. Instead, consider deploying your application and its services across multiple regions. For example, an Azure Kubernetes Service (AKS) cluster is deployed to a single region. To protect your system from a regional failure, you might deploy your application to multiple AKS clusters across different regions and use the [Paired Regions](/azure/virtual-machines/regions#region-pairs) feature to coordinate platform updates and prioritize recovery efforts. - -- *Enable [geo-replication](/azure/sql-database/sql-database-active-geo-replication).* Geo-replication for services such as Azure SQL Database and Cosmos DB will create secondary replicas of your data across multiple regions. While both services will automatically replicate data within the same region, geo-replication protects you against a regional outage by enabling you to fail over to a secondary region. Another best practice for geo-replication centers around storing container images. To deploy a service in AKS, you need to store and pull the image from a repository. Azure Container Registry integrates with AKS and can securely store container images. To improve performance and availability, consider geo-replicating your images to a registry in each region where you have an AKS cluster. Each AKS cluster then pulls container images from the local container registry in its region as shown in Figure 6-4: - -![Replicated resources across regions](./media/replicated-resources.png) - -**Figure 6-4**. Replicated resources across regions - -- *Implement a DNS traffic load balancer.* [Azure Traffic Manager](/azure/traffic-manager/traffic-manager-overview) provides high-availability for critical applications by load-balancing at the DNS level. It can route traffic to different regions based on geography, cluster response time, and even application endpoint health. For example, Azure Traffic Manager can direct customers to the closest AKS cluster and application instance. If you have multiple AKS clusters in different regions, use Traffic Manager to control how traffic flows to the applications that run in each cluster. Figure 6-5 shows this scenario. - -![AKS and Azure Traffic Manager](./media/aks-traffic-manager.png) - -**Figure 6-5**. AKS and Azure Traffic Manager - -## Design for scalability - -The cloud thrives on scaling. The ability to increase/decrease system resources to address increasing/decreasing system load is a key tenet of the Azure cloud. But, to effectively scale an application, you need an understanding of the scaling features of each Azure service that you include in your application. Here are recommendations for effectively implementing scaling in your system. - -- *Design for scaling.* An application must be designed for scaling. To start, services should be stateless so that requests can be routed to any instance. Having stateless services also means that adding or removing an instance doesn't adversely impact current users. - -- *Partition workloads*. Decomposing domains into independent, self-contained microservices enable each service to scale independently of others. Typically, services will have different scalability needs and requirements. Partitioning enables you to scale only what needs to be scaled without the unnecessary cost of scaling an entire application. - -- *Favor scale-out.* Cloud-based applications favor scaling out resources as opposed to scaling up. 
Scaling out (also known as horizontal scaling) involves adding more service resources to an existing system to meet and share a desired level of performance. Scaling up (also known as vertical scaling) involves replacing existing resources with more powerful hardware (more disk, memory, and processing cores). Scaling out can be invoked automatically with the autoscaling features available in some Azure cloud resources. Scaling out across multiple resources also adds redundancy to the overall system. Finally, scaling up a single resource is typically more expensive than scaling out across many smaller resources. Figure 6-6 shows the two approaches: - -![Scale up versus scale out](./media/scale-up-scale-out.png) - -**Figure 6-6.** Scale up versus scale out - -- *Scale proportionally.* When scaling a service, think in terms of *resource sets*. If you were to dramatically scale out a specific service, what impact would that have on back-end data stores, caches, and dependent services? Some resources such as Cosmos DB can scale out proportionally, while many others can't. You want to ensure that you don't scale out a resource to a point where it will exhaust other associated resources. - -- *Avoid affinity.* A best practice is to ensure a node doesn't require local affinity, often referred to as a *sticky session*. A request should be able to route to any instance. If you need to persist state, it should be saved to a distributed cache, such as [Azure Redis cache](https://azure.microsoft.com/services/cache/). - -- *Take advantage of platform autoscaling features.* Use built-in autoscaling features whenever possible, rather than custom or third-party mechanisms. Where possible, use scheduled scaling rules to ensure that resources are available without a startup delay, but add reactive autoscaling to the rules as appropriate, to cope with unexpected changes in demand. For more information, see [Autoscaling guidance](/azure/architecture/best-practices/auto-scaling). - -- *Scale out aggressively.* A final practice is to scale out aggressively so that you can quickly meet immediate spikes in traffic without losing business. Then, scale in (that is, remove unneeded instances) conservatively to keep the system stable. A simple way to implement this is to set the cool-down period, which is the time to wait between scaling operations, to five minutes for adding resources and up to 15 minutes for removing instances. - -## Built-in retry in services - -We encouraged the best practice of implementing programmatic retry operations in an earlier section. Keep in mind that many Azure services and their corresponding client SDKs also include retry mechanisms. The following list summarizes retry features in many of the Azure services that are discussed in this book: - -- *Azure Cosmos DB.* The client API automatically retries failed attempts. The number of retries and maximum wait time are configurable. Exceptions thrown by the client API are either requests that exceed the retry policy or non-transient errors. - -- *Azure Redis Cache.* The StackExchange.Redis client uses a connection manager class that includes retries on failed attempts. The number of retries, specific retry policy, and wait time are all configurable. - -- *Azure Service Bus.* The Service Bus client exposes a [RetryPolicy class](xref:Microsoft.ServiceBus.RetryPolicy) that can be configured with a back-off interval, retry count, and a value that specifies the maximum time an operation can take.
The default policy is nine maximum retry attempts with a 30-second backoff period between attempts. - -- *Azure SQL Database.* Retry support is provided when using the [Entity Framework Core](/ef/core/miscellaneous/connection-resiliency) library. - -- *Azure Storage.* The storage client library support retry operations. The strategies vary across Azure storage tables, blobs, and queues. As well, alternate retries switch between primary and secondary storage services locations when the geo-redundancy feature is enabled. - -- *Azure Event Hubs.* The Event Hub client library features a RetryPolicy property, which includes a configurable exponential backoff feature. - ->[!div class="step-by-step"] ->[Previous](application-resiliency-patterns.md) ->[Next](resilient-communications.md) diff --git a/docs/architecture/cloud-native/introduce-eshoponcontainers-reference-app.md b/docs/architecture/cloud-native/introduce-eshoponcontainers-reference-app.md deleted file mode 100644 index f932e3916a84a..0000000000000 --- a/docs/architecture/cloud-native/introduce-eshoponcontainers-reference-app.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: Introducing eShopOnContainers reference app -description: Introducing the eShopOnContainers Cloud Native Microservices Reference App for ASP.NET Core and Azure. -ms.date: 04/06/2022 ---- - -# Introducing eShopOnContainers reference app - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Microsoft, in partnership with leading community experts, has produced a full-featured cloud-native microservices reference application, eShopOnContainers. This application is built to showcase using .NET and Docker, and optionally Azure, Kubernetes, and Visual Studio, to build an online storefront. - -![eShopOnContainers Sample App Screenshot.](./media/eshoponcontainers-sample-app-screenshot.jpg) - -**Figure 2-1**. eShopOnContainers Sample App Screenshot. - -Before starting this chapter, we recommend that you download the [eShopOnContainers reference application](https://github.com/dotnet-architecture/eShopOnContainers). If you do so, it should be easier for you to follow along with the information presented. - -## Features and requirements - -Let's start with a review of the application's features and requirements. The eShopOnContainers application represents an online store that sells various physical products like t-shirts and coffee mugs. If you've bought anything online before, the experience of using the store should be relatively familiar. Here are some of the basic features the store implements: - -- List catalog items -- Filter items by type -- Filter items by brand -- Add items to the shopping basket -- Edit or remove items from the basket -- Checkout -- Register an account -- Sign in -- Sign out -- Review orders - -The application also has the following non-functional requirements: - -- It needs to be highly available and it must scale automatically to meet increased traffic (and scale back down once traffic subsides). -- It should provide easy-to-use monitoring of its health and diagnostic logs to help troubleshoot any issues it encounters. -- It should support an agile development process, including support for continuous integration and deployment (CI/CD). -- In addition to the two web front ends (traditional and Single Page Application), the application must also support mobile client apps running different kinds of operating systems. -- It should support cross-platform hosting and cross-platform development. 
- -![eShopOnContainers reference application development architecture.](./media/eshoponcontainers-development-architecture.png) - -**Figure 2-2**. eShopOnContainers reference application development architecture. - -The eShopOnContainers application is accessible from web or mobile clients that access the application over HTTPS targeting either the ASP.NET Core MVC server application or an appropriate API Gateway. API Gateways offer several advantages, such as decoupling back-end services from individual front-end clients and providing better security. The application also makes use of a related pattern known as Backends-for-Frontends (BFF), which recommends creating separate API gateways for each front-end client. The reference architecture demonstrates breaking up the API gateways based on whether the request is coming from a web or mobile client. - -The application's functionality is broken up into many distinct microservices. There are services responsible for authentication and identity, listing items from the product catalog, managing users' shopping baskets, and placing orders. Each of these separate services has its own persistent storage. There's no single primary data store with which all services interact. Instead, coordination and communication between the services is done on an as-needed basis and by using a message bus. - -Each of the different microservices is designed differently, based on their individual requirements. This aspect means their technology stack may differ, although they're all built using .NET and designed for the cloud. Simpler services provide basic Create-Read-Update-Delete (CRUD) access to the underlying data stores, while more advanced services use Domain-Driven Design approaches and patterns to manage business complexity. - -![Different kinds of microservices](./media/different-kinds-of-microservices.png) - -**Figure 2-3**. Different kinds of microservices. - -## Overview of the code - -Because it uses microservices, the eShopOnContainers app includes quite a few separate projects and solutions in its GitHub repository. In addition to separate solutions and executable files, the various services are designed to run inside their own containers, both during local development and at run time in production. Figure 2-4 shows the full Visual Studio solution, in which the various different projects are organized. - -![Projects in Visual Studio solution.](./media/projects-in-visual-studio-solution.png) - -**Figure 2-4**. Projects in Visual Studio solution. - -The code is organized to support the different microservices, and within each microservice, the code is broken up into domain logic, infrastructure concerns, and user interface or service endpoint. In many cases, each service's dependencies can be fulfilled by Azure services in production, and alternative options for local development. Let's examine how the application's requirements map to Azure services. - -## Understanding microservices - -This book focuses on cloud-native applications built using Azure technology. To learn more about microservices best practices and how to architect microservice-based applications, read the companion book, [.NET Microservices: Architecture for Containerized .NET Applications](https://dotnet.microsoft.com/download/thank-you/microservices-architecture-ebook). 
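To illustrate how a dependency can be fulfilled by an Azure service in production while using a lighter alternative locally, the sketch below registers a different implementation of a catalog abstraction per environment. `ICatalogStore`, `InMemoryCatalogStore`, and `CosmosCatalogStore` are hypothetical names used for illustration; they aren't types from the eShopOnContainers solution.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = WebApplication.CreateBuilder(args);

// Local development uses an in-memory store; production uses an Azure-backed store.
if (builder.Environment.IsDevelopment())
{
    builder.Services.AddSingleton<ICatalogStore, InMemoryCatalogStore>();
}
else
{
    builder.Services.AddSingleton<ICatalogStore>(_ =>
        new CosmosCatalogStore(builder.Configuration.GetConnectionString("CatalogDb")));
}

var app = builder.Build();
app.MapGet("/catalog", (ICatalogStore store) => store.ListItems());
app.Run();

public interface ICatalogStore
{
    IReadOnlyList<string> ListItems();
}

public class InMemoryCatalogStore : ICatalogStore
{
    public IReadOnlyList<string> ListItems() => new[] { "T-Shirt", "Coffee Mug" };
}

public class CosmosCatalogStore : ICatalogStore
{
    private readonly string _connectionString;

    public CosmosCatalogStore(string connectionString) => _connectionString = connectionString;

    // A real implementation would query Azure Cosmos DB using the connection string.
    public IReadOnlyList<string> ListItems() => throw new NotImplementedException();
}
```

The abstraction stays constant while the hosting environment decides which implementation is used, so the service code itself doesn't change between local development and production.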
- ->[!div class="step-by-step"] ->[Previous](candidate-apps.md) ->[Next](map-eshoponcontainers-azure-services.md) diff --git a/docs/architecture/cloud-native/introduction.md b/docs/architecture/cloud-native/introduction.md deleted file mode 100644 index 7cae872465c4e..0000000000000 --- a/docs/architecture/cloud-native/introduction.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -title: Introduction to cloud-native applications -description: Learn about cloud-native computing -author: robvet -ms.date: 04/06/2022 ---- - -# Introduction to cloud-native applications - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Another day, at the office, working on "the next big thing." - -Your cellphone rings. It's your friendly recruiter - the one who calls daily with exciting new opportunities. - -But this time it's different: Start-up, equity, and plenty of funding. - -The mention of the cloud, microservices, and cutting-edge technology pushes you over the edge. - -Fast forward a few weeks and you're now a new employee in a design session architecting a major eCommerce application. You're going to compete with the leading eCommerce sites. - -How will you build it? - -If you follow the guidance from past 15 years, you'll most likely build the system shown in Figure 1.1. - -![Traditional monolithic design](./media/monolithic-design.png) - -**Figure 1-1**. Traditional monolithic design - -You construct a large core application containing all of your domain logic. It includes modules such as Identity, Catalog, Ordering, and more. They directly communicate with each other within a single server process. The modules share a large relational database. The core exposes functionality via an HTML interface and a mobile app. - -Congratulations! You just created a monolithic application. - -Not all is bad. Monoliths offer some distinct advantages. For example, they're straightforward to... - -- build -- test -- deploy -- troubleshoot -- vertically scale - -Many successful apps that exist today were created as monoliths. The app is a hit and continues to evolve, iteration after iteration, adding more functionality. - -At some point, however, you begin to feel uncomfortable. You find yourself losing control of the application. As time goes on, the feeling becomes more intense, and you eventually enter a state known as the `Fear Cycle`: - -- The app has become so overwhelmingly complicated that no single person understands it. -- You fear making changes - each change has unintended and costly side effects. -- New features/fixes become tricky, time-consuming, and expensive to implement. -- Each release becomes as small as possible and requires a full deployment of the entire application. -- One unstable component can crash the entire system. -- New technologies and frameworks aren't an option. -- It's difficult to implement agile delivery methodologies. -- Architectural erosion sets in as the code base deteriorates with never-ending "quick fixes." -- Finally, the _consultants_ come in and tell you to rewrite it. - -Sound familiar? - -Many organizations have addressed this monolithic fear cycle by adopting a cloud-native approach to building systems. Figure 1-2 shows the same system built applying cloud-native techniques and practices. - -![Cloud-Native Design](./media/cloud-native-design.png) - -**Figure 1-2**. Cloud-native design - -Note how the application is decomposed across a set of small isolated microservices. Each service is self-contained and encapsulates its own code, data, and dependencies. 
Each is deployed in a software container and managed by a container orchestrator. Instead of a large relational database, each service owns it own datastore, the type of which vary based upon the data needs. Note how some services depend on a relational database, but other on NoSQL databases. One service stores its state in a distributed cache. Note how all traffic routes through an API Gateway service that is responsible for routing traffic to the core back-end services and enforcing many cross-cutting concerns. Most importantly, the application takes full advantage of the scalability, availability, and resiliency features found in modern cloud platforms. - -### Cloud-native computing - -Hmm... We just used the term, _Cloud Native_. Your first thought might be, "What exactly does that mean?" Another industry buzzword concocted by software vendors to market more stuff?" - -Fortunately it's far different, and hopefully this book will help convince you. - -Within a short time, cloud native has become a driving trend in the software industry. It's a new way to construct large, complex systems. The approach takes full advantage of modern software development practices, technologies, and cloud infrastructure. Cloud native changes the way you design, implement, deploy, and operationalize systems. - -Unlike the continuous hype that drives our industry, cloud native is _for-real_. Consider the [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF), a consortium of over 400 major corporations. Its charter is to make cloud-native computing ubiquitous across technology and cloud stacks. As one of the most influential open-source groups, it hosts many of the fastest-growing open source-projects in GitHub. These projects include [Kubernetes](https://kubernetes.io/), [Prometheus](https://prometheus.io/), [Helm](https://helm.sh/), [Envoy](https://www.envoyproxy.io/), and [gRPC](https://grpc.io/). - -The CNCF fosters an ecosystem of open-source and vendor-neutrality. Following that lead, this book presents cloud-native principles, patterns, and best practices that are technology agnostic. At the same time, we discuss the services and infrastructure available in the Microsoft Azure cloud for constructing cloud-native systems. - -So, what exactly is Cloud Native? Sit back, relax, and let us help you explore this new world. - ->[!div class="step-by-step"] ->[Previous](index.md) ->[Next](definition.md) diff --git a/docs/architecture/cloud-native/leverage-containers-orchestrators.md b/docs/architecture/cloud-native/leverage-containers-orchestrators.md deleted file mode 100644 index ebdd680139360..0000000000000 --- a/docs/architecture/cloud-native/leverage-containers-orchestrators.md +++ /dev/null @@ -1,228 +0,0 @@ ---- -title: Leveraging containers and orchestrators -description: Leveraging Docker Containers and Kubernetes Orchestrators in Azure -ms.date: 04/06/2022 ---- - -# Leveraging containers and orchestrators - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Containers and orchestrators are designed to solve problems common to monolithic deployment approaches. - -## Challenges with monolithic deployments - -Traditionally, most applications have been deployed as a single unit. Such applications are referred to as a monolith. This general approach of deploying applications as single units even if they're composed of multiple modules or assemblies is known as monolithic architecture, as shown in Figure 3-1. 
- -![Monolithic architecture.](./media/monolithic-design.png) - -**Figure 3-1**. Monolithic architecture. - -Although they have the benefit of simplicity, monolithic architectures face many challenges: - -### Deployment - -Deploying even a small change requires redeploying the entire application. Deployments also require a restart of the application, which may temporarily impact availability if zero-downtime techniques are not applied while deploying. - -### Scaling - -A monolithic application is hosted entirely on a single machine instance, often requiring high-capability hardware. If any part of the monolith requires scaling, another copy of the entire application must be deployed to another machine. With a monolith, you can't scale application components individually - it's all or nothing. Scaling components that don't require scaling results in inefficient and costly resource usage. - -### Environment - -Monolithic applications are typically deployed to a hosting environment with a pre-installed operating system, runtime, and library dependencies. This environment may not match that upon which the application was developed or tested. Inconsistencies across application environments are a common source of problems for monolithic deployments. - -### Coupling - -A monolithic application is likely to experience high coupling across its functional components. Without hard boundaries, system changes often result in unintended and costly side effects. New features/fixes become tricky, time-consuming, and expensive to implement. Updates require extensive testing. Coupling also makes it difficult to refactor components or swap in alternative implementations. Even when constructed with a strict separation of concerns, architectural erosion sets in as the monolithic code base deteriorates with never-ending "special cases." - -### Platform lock-in - -A monolithic application is constructed with a single technology stack. While offering uniformity, this commitment can become a barrier to innovation. New features and components will be built using the application's current stack - even when more modern technologies may be a better choice. A longer-term risk is your technology stack becoming outdated and obsolete. Rearchitecting an entire application to a new, more modern platform is at best expensive and risky. - -## What are the benefits of containers and orchestrators? - -We introduced containers in Chapter 1. We highlighted how the Cloud Native Computing Foundation (CNCF) ranks containerization as the first step in its [Cloud-Native Trail Map](https://raw.githubusercontent.com/cncf/trailmap/master/CNCF_TrailMap_latest.png) - guidance for enterprises beginning their cloud-native journey. In this section, we discuss the benefits of containers. - -Docker is the most popular container management platform. It works with containers on both Linux and Windows. Containers provide separate but reproducible application environments that run the same way on any system. This aspect makes them perfect for developing and hosting cloud-native services. Containers are isolated from one another. Two containers on the same host hardware can have different versions of software, without causing conflicts. - -Containers are defined by simple text-based files that become project artifacts and are checked into source control. While full servers and virtual machines require manual effort to update, containers are easily version-controlled. Apps built to run in containers can be developed, tested, and deployed using automated tools as part of a build pipeline. - -Containers are immutable.
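A container's definition is captured in one of those simple text files. For instance, here's a minimal Dockerfile (a sketch only; the published output name and copied folder are placeholders):

```dockerfile
# Start from the official ASP.NET runtime image and add the published app.
FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "MyService.dll"]
```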
Once you define a container, you can recreate and run it exactly the same way. This immutability lends itself to component-based design. If some parts of an application evolve differently than others, why redeploy the entire app when you can just deploy the parts that change most frequently? Different features and cross-cutting concerns of an app can be broken up into separate units. Figure 3-2 shows how a monolithic app can take advantage of containers and microservices by delegating certain features or functionality to separate containers. The remaining functionality in the app itself has also been containerized. - -![Breaking up a monolithic app to use microservices in the back end.](./media/cloud-native-design.png) - -**Figure 3-2**. Decomposing a monolithic app to embrace microservices. - -Each cloud-native service is built and deployed in a separate container. Each can update as needed. Individual services can be hosted on nodes with resources appropriate to each service. The environment each service runs in is immutable, shared across dev, test, and production environments, and easily versioned. Coupling between different areas of the application occurs explicitly as calls or messages between services, not compile-time dependencies within the monolith. You can also choose the technology that best suits a given capability without requiring changes to the rest of the app. - -Containerized services require automated management. It wouldn't be feasible to manually administer a large set of independently deployed containers. For example, consider the following tasks: - -- How will container instances be provisioned across a cluster of many machines? -- Once deployed, how will containers discover and communicate with each other? -- How can containers scale in or out on demand? -- How do you monitor the health of each container? -- How do you protect a container against hardware and software failures? -- How do you upgrade containers for a live application with zero downtime? - -Container orchestrators address and automate these and other concerns. - -In the cloud-native ecosystem, Kubernetes has become the de facto container orchestrator. It's an open-source platform managed by the Cloud Native Computing Foundation (CNCF). Kubernetes automates the deployment, scaling, and operational concerns of containerized workloads across a machine cluster. However, installing and managing Kubernetes is notoriously complex. - -A much better approach is to leverage Kubernetes as a managed service from a cloud vendor. The Azure cloud features a fully managed Kubernetes platform called [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/). AKS abstracts the complexity and operational overhead of managing Kubernetes. You consume Kubernetes as a cloud service; Microsoft takes responsibility for managing and supporting it. AKS also tightly integrates with other Azure services and dev tools. - -AKS is a cluster-based technology. A pool of federated virtual machines, or nodes, is deployed to the Azure cloud. Together they form a highly available environment, or cluster. The cluster appears as a seamless, single entity to your cloud-native application. Under the hood, AKS deploys your containerized services across these nodes following a predefined strategy that evenly distributes the load. - -## What are the scaling benefits? - -Services built on containers can leverage scaling benefits provided by orchestration tools like Kubernetes. By design, containers only know about themselves.
Once you have multiple containers that need to work together, you should organize them at a higher level. Organizing large numbers of containers and their shared dependencies, such as network configuration, is where orchestration tools come in to save the day! Kubernetes creates an abstraction layer over groups of containers and organizes them into *pods*. Pods run on worker machines referred to as *nodes*. This organized structure is referred to as a *cluster*. Figure 3-3 shows the different components of a Kubernetes cluster. - -![Kubernetes cluster components.](./media/kubernetes-cluster-components.png) -**Figure 3-3**. Kubernetes cluster components. - -Scaling containerized workloads is a key feature of container orchestrators. AKS supports automatic scaling across two dimensions: Container instances and compute nodes. Together they give AKS the ability to quickly and efficiently respond to spikes in demand and add additional resources. We discuss scaling in AKS later in this chapter. - -### Declarative versus imperative - -Kubernetes supports both declarative and imperative configuration. The imperative approach involves running various commands that tell Kubernetes what to do each step of the way. Run this image. Delete this pod. Expose this port. With the declarative approach, you create a configuration file, called a manifest, to describe what you want instead of what to do. Kubernetes reads the manifest and transforms your desired end state into actual end state. - -Imperative commands are great for learning and interactive experimentation. However, you'll want to declaratively create Kubernetes manifest files to embrace an infrastructure as code approach, providing for reliable and repeatable deployments. The manifest file becomes a project artifact and is used in your CI/CD pipeline for automating Kubernetes deployments. - -If you've already configured your cluster using imperative commands, you can export a declarative manifest by using `kubectl get svc SERVICENAME -o yaml > service.yaml`. This command produces a manifest similar to one shown below: - -```yaml -apiVersion: v1 -kind: Service -metadata: - creationTimestamp: "2019-09-13T13:58:47Z" - labels: - component: apiserver - provider: kubernetes - name: kubernetes - namespace: default - resourceVersion: "153" - selfLink: /api/v1/namespaces/default/services/kubernetes - uid: 9b1fac62-d62e-11e9-8968-00155d38010d -spec: - clusterIP: 10.96.0.1 - ports: - - name: https - port: 443 - protocol: TCP - targetPort: 6443 - sessionAffinity: None - type: ClusterIP -status: - loadBalancer: {} -``` - -When using declarative configuration, you can preview the changes that will be made before committing them by using `kubectl diff -f FOLDERNAME` against the folder where your configuration files are located. Once you're sure you want to apply the changes, run `kubectl apply -f FOLDERNAME`. Add `-R` to recursively process a folder hierarchy. - -You can also use declarative configuration with other Kubernetes features, one of which being deployments. Declarative deployments help manage releases, updates, and scaling. They instruct the Kubernetes deployment controller on how to deploy new changes, scale out load, or roll back to a previous revision. If a cluster is unstable, a declarative deployment will automatically return the cluster back to a desired state. 
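A minimal Deployment manifest shows the idea (a sketch only; the `catalog-api` name and image are hypothetical placeholders, not taken from the sample application):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-api
spec:
  replicas: 3                  # desired state: three instances at all times
  selector:
    matchLabels:
      app: catalog-api
  template:
    metadata:
      labels:
        app: catalog-api
    spec:
      containers:
      - name: catalog-api
        image: myregistry.azurecr.io/catalog-api:1.0   # hypothetical image
        ports:
        - containerPort: 80
```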
For example, if a node should crash, the deployment mechanism will redeploy a replacement to achieve your desired state - -Using declarative configuration allows infrastructure to be represented as code that can be checked in and versioned alongside the application code. It provides improved change control and better support for continuous deployment using a build and deploy pipeline. - -## What scenarios are ideal for containers and orchestrators? - -The following scenarios are ideal for using containers and orchestrators. - -### Applications requiring high uptime and scalability - -Individual applications that have high uptime and scalability requirements are ideal candidates for cloud-native architectures using microservices, containers, and orchestrators. They can be developed in containers, tested across versioned environments, and deployed into production with zero downtime. The use of Kubernetes clusters ensures such apps can also scale on demand and recover automatically from node failures. - -### Large numbers of applications - -Organizations that deploy and maintain large numbers of applications benefit from containers and orchestrators. The up front effort of setting up containerized environments and Kubernetes clusters is primarily a fixed cost. Deploying, maintaining, and updating individual applications has a cost that varies with the number of applications. Beyond a few applications, the complexity of maintaining custom applications manually exceeds the cost of implementing a solution using containers and orchestrators. - -## When should you avoid using containers and orchestrators? - -If you're unable to build your application following the Twelve-Factor App principles, you should consider avoiding containers and orchestrators. In these cases, consider a VM-based hosting platform, or possibly some hybrid system. With it, you can always spin off certain pieces of functionality into separate containers or even serverless functions. - -## Development resources - -This section shows a short list of development resources that may help you get started using containers and orchestrators for your next application. If you're looking for guidance on how to design your cloud-native microservices architecture app, read this book's companion, [.NET Microservices: Architecture for Containerized .NET Applications](https://dotnet.microsoft.com/download/thank-you/microservices-architecture-ebook). - -### Local Kubernetes Development - -Kubernetes deployments provide great value in production environments, but can also run locally on your development machine. While you may work on individual microservices independently, there may be times when you'll need to run the entire system locally - just as it will run when deployed to production. There are several tools that can help: Minikube and Docker Desktop. Visual Studio also provides tooling for Docker development. - -### Minikube - -What is Minikube? The Minikube project says "Minikube implements a local Kubernetes cluster on macOS, Linux, and Windows." Its primary goals are "to be the best tool for local Kubernetes application development and to support all Kubernetes features that fit." Installing Minikube is separate from Docker, but Minikube supports different hypervisors than Docker Desktop supports. 
The following Kubernetes features are currently supported by Minikube: - -- DNS -- NodePorts -- ConfigMaps and secrets -- Dashboards -- Container runtimes: Docker, rkt, CRI-O, and containerd -- Enabling Container Network Interface (CNI) -- Ingress - -After installing Minikube, you can quickly start using it by running the `minikube start` command, which downloads an image and start the local Kubernetes cluster. Once the cluster is started, you interact with it using the standard Kubernetes `kubectl` commands. - -### Docker Desktop - -You can also work with Kubernetes directly from Docker Desktop on Windows. It is your only option if you're using Windows Containers, and is a great choice for non-Windows containers as well. Figure 3-4 shows how to enable local Kubernetes support when running Docker Desktop. - -![Configuring Kubernetes in Docker Desktop](./media/docker-desktop-kubernetes.png) - -**Figure 3-4**. Configuring Kubernetes in Docker Desktop. - -Docker Desktop is the most popular tool for configuring and running containerized apps locally. When you work with Docker Desktop, you can develop locally against the exact same set of Docker container images that you'll deploy to production. Docker Desktop is designed to "build, test, and ship" containerized apps locally. It supports both Linux and Windows containers. Once you push your images to an image registry, like Azure Container Registry or Docker Hub, AKS can pull and deploy them to production. - -### Visual Studio Docker Tooling - -Visual Studio supports Docker development for web-based applications. When you create a new ASP.NET Core application, you have an option to configure it with Docker support, as shown in Figure 3-5. - -![Visual Studio Enable Docker Support](./media/visual-studio-enable-docker-support.png) - -**Figure 3-5**. Visual Studio Enable Docker Support - -When this option is selected, the project is created with a `Dockerfile` in its root, which can be used to build and host the app in a Docker container. An example Dockerfile is shown in Figure 3-6. - -```dockerfile -FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base -WORKDIR /app -EXPOSE 80 -EXPOSE 443 - -FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build -WORKDIR /src -COPY ["eShopWeb/eShopWeb.csproj", "eShopWeb/"] -RUN dotnet restore "eShopWeb/eShopWeb.csproj" -COPY . . -WORKDIR "/src/eShopWeb" -RUN dotnet build "eShopWeb.csproj" -c Release -o /app/build - -FROM build AS publish -RUN dotnet publish "eShopWeb.csproj" -c Release -o /app/publish - -FROM base AS final -WORKDIR /app -COPY --from=publish /app/publish . -ENTRYPOINT ["dotnet", "eShopWeb.dll"] -``` - -**Figure 3-6**. Visual Studio generated Dockerfile - -Once support is added, you can run your application in a Docker container in Visual Studio. Figure 3-7 shows the different run options available from a new ASP.NET Core project created with Docker support added. - -![Visual Studio Docker Run Options](./media/visual-studio-docker-run-options.png) - -**Figure 3-7**. Visual Studio Docker Run Options - -Also, at any time you can add Docker support to an existing ASP.NET Core application. From the Visual Studio Solution Explorer, right-click on the project and select **Add** > **Docker Support**, as shown in Figure 3-8. - -![Visual Studio Add Docker Support](./media/visual-studio-add-docker-support.png) - -**Figure 3-8**. Adding Docker support to Visual Studio - -### Visual Studio Code Docker Tooling - -There are many extensions available for Visual Studio Code that support Docker development. 
- -Microsoft provides the [Docker for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker). This extension simplifies the process of adding container support to applications. It scaffolds required files, builds Docker images, and enables you to debug your app inside a container. The extension features a visual explorer that makes it easy to take actions on containers and images such as start, stop, inspect, remove, and more. The extension also supports Docker Compose, enabling you to manage multiple running containers as a single unit. - ->[!div class="step-by-step"] ->[Previous](scale-applications.md) ->[Next](leverage-serverless-functions.md) diff --git a/docs/architecture/cloud-native/leverage-serverless-functions.md b/docs/architecture/cloud-native/leverage-serverless-functions.md deleted file mode 100644 index a9ae198896701..0000000000000 --- a/docs/architecture/cloud-native/leverage-serverless-functions.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -title: Leveraging serverless functions -description: Leveraging Serverless and Azure Functions in Cloud-Native Applications -ms.date: 04/06/2022 ---- - -# Leveraging serverless functions - -[!INCLUDE [download-alert](includes/download-alert.md)] - -In the spectrum from managing physical machines to leveraging cloud capabilities, serverless lives at the extreme end. Your only responsibility is your code, and you only pay when your code runs. Azure Functions provides a way to build serverless capabilities into your cloud-native applications. - -## What is serverless? - -Serverless is a relatively new service model of cloud computing. It doesn't mean that servers are optional - your code still runs on a server somewhere. The distinction is that the application team no longer concerns itself with managing server infrastructure. Instead, the cloud vendor owns this responsibility. The development team increases its productivity by delivering business solutions to customers, not plumbing. - -Serverless computing uses event-triggered stateless containers to host your services. They can scale out and in to meet demand as needed. Serverless platforms like Azure Functions have tight integration with other Azure services like queues, events, and storage. - -## What challenges are solved by serverless? - -Serverless platforms address many time-consuming and expensive concerns: - -- Purchasing machines and software licenses -- Housing, securing, configuring, and maintaining the machines and their networking, power, and A/C requirements -- Patching and upgrading operating systems and software -- Configuring web servers or machine services to host application software -- Configuring application software within its platform - -Many companies allocate large budgets to support hardware infrastructure concerns. Moving to the cloud can help reduce these costs; shifting applications to serverless can help eliminate them. - -## What is the difference between a microservice and a serverless function? - -Typically, a microservice encapsulates a business capability, such as a shopping cart for an online eCommerce site. It exposes multiple operations that enable a user to manage their shopping experience. A function, however, is a small, lightweight block of code that executes a single-purpose operation in response to an event. -Microservices are typically constructed to respond to requests, often from an interface. Requests can be HTTP REST- or gRPC-based. Serverless services respond to events.
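To make the distinction concrete, here's a minimal queue-triggered Azure Function using the in-process C# model (a sketch only; the function name, queue name, and processing logic are hypothetical):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessWorkItem
{
    // The function runs only when a message arrives on the queue and performs
    // a single-purpose operation, in contrast to a multi-operation microservice.
    [FunctionName("ProcessWorkItem")]
    public static void Run(
        [QueueTrigger("work-items", Connection = "AzureWebJobsStorage")] string message,
        ILogger log)
    {
        log.LogInformation("Processing work item: {Message}", message);
        // Hypothetical single-purpose work would happen here.
    }
}
```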
This event-driven architecture is ideal for processing short-running, background tasks. - -## What scenarios are appropriate for serverless? - -Serverless exposes individual short-running functions that are invoked in response to a trigger. This makes them ideal for processing background tasks. - -An application might need to send an email as a step in a workflow. Instead of sending the notification as part of a microservice request, place the message details onto a queue. An Azure Function can dequeue the message and asynchronously send the email. Doing so could improve the performance and scalability of the microservice. [Queue-based load leveling](/azure/architecture/patterns/queue-based-load-leveling) can be implemented to avoid bottlenecks related to sending the emails. Additionally, this stand-alone service could be reused as a utility across many different applications. - -Asynchronous messaging from queues and topics is a common pattern to trigger serverless functions. However, Azure Functions can be triggered by other events, such as changes to Azure Blob Storage. A service that supports image uploads could have an Azure Function responsible for optimizing the image size. The function could be triggered directly by inserts into Azure Blob Storage, keeping complexity out of the microservice operations. - -Many services have long-running processes as part of their workflows. Often these tasks are done as part of the user's interaction with the application. These tasks can force the user to wait, negatively impacting their experience. Serverless computing provides a great way to move slower tasks outside of the user interaction loop. These tasks can scale with demand without requiring the entire application to scale. - -## When should you avoid serverless? - -Serverless solutions provision and scale on demand. When a new instance is invoked, cold starts are a common issue. A cold start is the period of time it takes to provision this instance. Normally, this delay might be a few seconds, but it can be longer depending on various factors. Once provisioned, a single instance is kept alive as long as it receives periodic requests. But, if a service is called less frequently, Azure may remove it from memory and require a cold start when reinvoked. Cold starts are also required when a function scales out to a new instance. - -Figure 3-9 shows a cold-start pattern. Note the extra steps required when the app is cold. - -![Cold versus warm start](./media/cold-start-warm-start.png) -**Figure 3-9**. Cold start versus warm start. - -To avoid cold starts entirely, you might switch from a [consumption plan to a dedicated plan](https://azure.microsoft.com/blog/understanding-serverless-cold-start/). You can also configure one or more [pre-warmed instances](/azure/azure-functions/functions-premium-plan#pre-warmed-instances) with the premium plan upgrade. In these cases, when you need to add another instance, it's already up and ready to go. These options can help mitigate the cold start issue associated with serverless computing. - -Cloud providers bill for serverless based on compute execution time and consumed memory. Long-running operations or high-memory-consumption workloads aren't always the best candidates for serverless. Serverless functions favor small chunks of work that can complete quickly. Most serverless platforms require individual functions to complete within a few minutes. Azure Functions defaults to a 5-minute time-out duration, which can be configured up to 10 minutes.
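On the Consumption plan, the limit is raised in the function app's host.json file; a minimal sketch (the 10-minute value shown is that plan's upper bound):

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```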
The Azure Functions premium plan can mitigate this issue as well, defaulting time-outs to 30 minutes with an unbounded higher limit that can be configured. Compute time isn't calendar time. More advanced functions using the [Azure Durable Functions framework](/azure/azure-functions/durable/durable-functions-overview?tabs=csharp) may pause execution over a course of several days. The billing is based on actual execution time - when the function wakes up and resumes processing. - -Finally, leveraging Azure Functions for application tasks adds complexity. It's wise to first architect your application with a modular, loosely coupled design. Then, identify if there are benefits serverless would offer that justify the additional complexity. - ->[!div class="step-by-step"] ->[Previous](leverage-containers-orchestrators.md) ->[Next](combine-containers-serverless-approaches.md) diff --git a/docs/architecture/cloud-native/logging-with-elastic-stack.md b/docs/architecture/cloud-native/logging-with-elastic-stack.md deleted file mode 100644 index 4ba26339c7d74..0000000000000 --- a/docs/architecture/cloud-native/logging-with-elastic-stack.md +++ /dev/null @@ -1,117 +0,0 @@ ---- -title: Logging with Elastic Stack -description: Logging using Elastic Stack, Logstash, and Kibana -ms.date: 04/06/2022 ---- - -# Logging with Elastic Stack - -[!INCLUDE [download-alert](includes/download-alert.md)] - -There are many good centralized logging tools and they vary in cost from being free, open-source tools, to more expensive options. In many cases, the free tools are as good as or better than the paid offerings. One such tool is a combination of three open-source components: Elasticsearch, Logstash, and Kibana. - -Collectively these tools are known as the Elastic Stack or ELK stack. - -## Elastic Stack - -The Elastic Stack is a powerful option for gathering information from a Kubernetes cluster. Kubernetes supports sending logs to an Elasticsearch endpoint, and for the [most part](https://www.elastic.co/guide/en/kibana/master/logging-configuration.html), all you need to get started is to set the environment variables as shown in Figure 7-5: - -```kubernetes -KUBE_LOGGING_DESTINATION=elasticsearch -KUBE_ENABLE_NODE_LOGGING=true -``` - -**Figure 7-5**. Configuration variables for Kubernetes - -This step will install Elasticsearch on the cluster and target sending all the cluster logs to it. - -![An example of a Kibana dashboard showing the results of a query against logs ingested from Kubernetes](./media/kibana-dashboard.png) -**Figure 7-6**. An example of a Kibana dashboard showing the results of a query against logs that are ingested from Kubernetes - -## What are the advantages of Elastic Stack? - -Elastic Stack provides centralized logging in a low-cost, scalable, cloud-friendly manner. Its user interface streamlines data analysis so you can spend your time gleaning insights from your data instead of fighting with a clunky interface. It supports a wide variety of inputs so as your distributed application spans more and different kinds of services, you can expect to continue to be able to feed log and metric data into the system. The Elastic Stack also supports fast searches even across large data sets, making it possible even for large applications to log detailed data and still be able to have visibility into it in a performant fashion. - -## Logstash - -The first component is [Logstash](https://www.elastic.co/products/logstash). 
This tool is used to gather log information from a large variety of different sources. For instance, Logstash can read logs from disk and also receive messages from logging libraries like [Serilog](https://serilog.net/). Logstash can do some basic filtering and expansion on the logs as they arrive. For instance, if your logs contain IP addresses then Logstash may be configured to do a geographical lookup and obtain a country/region or even city of origin for that message. - -Serilog is a logging library for .NET languages, which allows for parameterized logging. Instead of generating a textual log message that embeds fields, parameters are kept separate. This library allows for more intelligent filtering and searching. A sample Serilog configuration for writing to Logstash appears in Figure 7-7. - -```csharp -var log = new LoggerConfiguration() - .WriteTo.Http("http://localhost:8080") - .CreateLogger(); -``` - -**Figure 7-7**. Serilog config for writing log information directly to logstash over HTTP - -Logstash would use a configuration like the one shown in Figure 7-8. - -``` -input { - http { - #default host 0.0.0.0:8080 - codec => json - } -} - -output { - elasticsearch { - hosts => "elasticsearch:9200" - index=>"sales-%{+xxxx.ww}" - } -} -``` - -**Figure 7-8**. A Logstash configuration for consuming logs from Serilog - -For scenarios where extensive log manipulation isn't needed there's an alternative to Logstash known as [Beats](https://www.elastic.co/products/beats). Beats is a family of tools that can gather a wide variety of data from logs to network data and uptime information. Many applications will use both Logstash and Beats. - -Once the logs have been gathered by Logstash, it needs somewhere to put them. While Logstash supports many different outputs, one of the more exciting ones is Elasticsearch. - -## Elasticsearch - -Elasticsearch is a powerful search engine that can index logs as they arrive. It makes running queries against the logs quick. Elasticsearch can handle huge quantities of logs and, in extreme cases, can be scaled out across many nodes. - -Log messages that have been crafted to contain parameters or that have had parameters split from them through Logstash processing, can be queried directly as Elasticsearch preserves this information. - -A query that searches for the top 10 pages visited by `jill@example.com`, appears in Figure 7-9. - -```json -"query": { - "match": { - "user": "jill@example.com" - } - }, - "aggregations": { - "top_10_pages": { - "terms": { - "field": "page", - "size": 10 - } - } - } -``` - -**Figure 7-9**. An Elasticsearch query for finding top 10 pages visited by a user - -## Visualizing information with Kibana web dashboards - -The final component of the stack is Kibana. This tool is used to provide interactive visualizations in a web dashboard. Dashboards may be crafted even by users who are non-technical. Most data that is resident in the Elasticsearch index, can be included in the Kibana dashboards. Individual users may have different dashboard desires and Kibana enables this customization through allowing user-specific dashboards. - -## Installing Elastic Stack on Azure - -The Elastic stack can be installed on Azure in many ways. As always, it's possible to [provision virtual machines and install Elastic Stack on them directly](/azure/virtual-machines/linux/tutorial-elasticsearch). This option is preferred by some experienced users as it offers the highest degree of customizability. 
Deploying on infrastructure as a service introduces significant management overhead, forcing those who take that path to own all the associated operational tasks, such as securing the machines and keeping them up to date with patches. - -An option with less overhead is to make use of one of the many Docker containers on which the Elastic Stack has already been configured. These containers can be dropped into an existing Kubernetes cluster and run alongside application code. The [sebp/elk](https://elk-docker.readthedocs.io/) container is a well-documented and tested Elastic Stack container. - -Another option is a [recently announced ELK-as-a-service offering](https://devops.com/logz-io-unveils-azure-open-source-elk-monitoring-solution/). - -## References - -- [Install Elastic Stack on Azure](/azure/virtual-machines/linux/tutorial-elasticsearch) - ->[!div class="step-by-step"] ->[Previous](observability-patterns.md) ->[Next](monitoring-azure-kubernetes.md) diff --git a/docs/architecture/cloud-native/map-eshoponcontainers-azure-services.md b/docs/architecture/cloud-native/map-eshoponcontainers-azure-services.md deleted file mode 100644 index c7d4ce25e7f6e..0000000000000 --- a/docs/architecture/cloud-native/map-eshoponcontainers-azure-services.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: Mapping eShopOnContainers to Azure Services -description: Mapping eShopOnContainers to Azure Services like Azure Kubernetes Service, API Gateway, and Azure Service Bus. -ms.date: 04/06/2022 ---- - -# Mapping eShopOnContainers to Azure Services - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Although not required, Azure is well-suited to supporting eShopOnContainers because the project was built to be a cloud-native application. The application is built with .NET, so it can run on Linux or Windows containers depending on the Docker host. The application is made up of multiple autonomous microservices, each with its own data. The different microservices showcase different approaches, ranging from simple CRUD operations to more complex DDD and CQRS patterns. Microservices communicate with clients over HTTP and with one another via message-based communication. The application supports multiple platforms for clients as well, since it adopts HTTP as a standard communication protocol and includes ASP.NET Core and Xamarin mobile apps that run on Android, iOS, and Windows platforms. - -The application's architecture is shown in Figure 2-5. On the left are the client apps, broken up into mobile, traditional Web, and Web Single Page Application (SPA) flavors. On the right are the server-side components that make up the system, each of which can be hosted in Docker containers and Kubernetes clusters. The traditional web app is powered by the ASP.NET Core MVC application shown in yellow. This app and the mobile and web SPA applications communicate with the individual microservices through one or more API gateways. The API gateways follow the "backends for front ends" (BFF) pattern, meaning that each gateway is designed to support a given front-end client. The individual microservices are listed to the right of the API gateways and include both business logic and some kind of persistence store. The different services make use of SQL Server databases, Redis cache instances, and MongoDB/CosmosDB stores. On the far right is the system's Event Bus, which is used for communication between the microservices.
- -![eShopOnContainers Architecture](./media/eshoponcontainers-architecture.png) -**Figure 2-5**. The eShopOnContainers Architecture. - -The server-side components of this architecture all map easily to Azure services. - -## Container orchestration and clustering - -The application's container-hosted services, from ASP.NET Core MVC apps to individual Catalog and Ordering microservices, can be hosted and managed in Azure Kubernetes Service (AKS). The application can run locally on Docker and Kubernetes, and the same containers can then be deployed to staging and production environments hosted in AKS. This process can be automated, as we'll see in the next section. - -AKS provides management services for individual clusters of containers. The application will deploy separate containers for each microservice in the AKS cluster, as shown in the architecture diagram above. This approach allows each individual service to scale independently according to its resource demands. Each microservice can also be deployed independently, and ideally such deployments should incur zero system downtime. - -## API Gateway - -The eShopOnContainers application has multiple front-end clients and multiple different back-end services. There's no one-to-one correspondence between the client applications and the microservices that support them. In such a scenario, there may be a great deal of complexity when writing client software to interface with the various back-end services in a secure manner. Each client would need to address this complexity on its own, resulting in duplication and many places in which to make updates as services change or new policies are implemented. - -Azure API Management (APIM) helps organizations publish APIs in a consistent, manageable fashion. APIM consists of three components: the API Gateway, an administration portal (the Azure portal), and a developer portal. - -The API Gateway accepts API calls and routes them to the appropriate back-end API. It can also provide additional services like verification of API keys or JWT tokens and API transformation on the fly without code modifications (for instance, to accommodate clients expecting an older interface). - -The Azure portal is where you define the API schema and package different APIs into products. You also configure user access, view reports, and configure policies for quotas or transformations. - -The developer portal serves as the main resource for developers. It provides developers with API documentation, an interactive test console, and reports on their own usage. Developers also use the portal to create and manage their own accounts, including subscription and API key support. - -Using APIM, applications can expose several different groups of services, each providing a back end for a particular front-end client. APIM is recommended for complex scenarios. For simpler needs, the lightweight API Gateway Ocelot can be used. The eShopOnContainers app uses Ocelot because of its simplicity and because it can be deployed into the same application environment as the application itself. [Learn more about eShopOnContainers, APIM, and Ocelot.](../microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md#azure-api-management) - -Another option if your application is using AKS is to deploy the Azure Gateway Ingress Controller as a pod within your AKS cluster.
This approach allows your cluster to integrate with an Azure Application Gateway, allowing the gateway to load-balance traffic to the AKS pods. [Learn more about the Azure Gateway Ingress Controller for AKS](https://github.com/Azure/application-gateway-kubernetes-ingress). - -## Data - -The various back-end services used by eShopOnContainers have different storage requirements. Several microservices use SQL Server databases. The Basket microservice leverages a Redis cache for its persistence. The Locations microservice expects a MongoDB API for its data. Azure supports each of these data formats. - -For SQL Server database support, Azure has products for everything from single databases up to highly scalable SQL Database elastic pools. Individual microservices can be configured to communicate with their own individual SQL Server databases quickly and easily. These databases can be scaled as needed to support each separate microservice according to its needs. - -The eShopOnContainers application stores the user's current shopping basket between requests. This aspect is managed by the Basket microservice that stores the data in a Redis cache. In development, this cache can be deployed in a container, while in production it can utilize Azure Cache for Redis. Azure Cache for Redis is a fully managed service offering high performance and reliability without the need to deploy and manage Redis instances or containers on your own. - -The Locations microservice uses a MongoDB NoSQL database for its persistence. During development, the database can be deployed in its own container, while in production the service can leverage [Azure Cosmos DB's API for MongoDB](/azure/cosmos-db/mongodb-introduction). One of the benefits of Azure Cosmos DB is its ability to leverage multiple different communication protocols, including a SQL API and common NoSQL APIs including MongoDB, Cassandra, Gremlin, and Azure Table Storage. Azure Cosmos DB offers a fully managed and globally distributed database as a service that can scale to meet the needs of the services that use it. - -Distributed data in cloud-native applications is covered in more detail in [chapter 5](distributed-data.md). - -## Event Bus - -The application uses events to communicate changes between different services. This functionality can be implemented with various implementations, and locally the eShopOnContainers application uses [RabbitMQ](https://www.rabbitmq.com/). When hosted in Azure, the application would leverage [Azure Service Bus](/azure/service-bus/) for its messaging. Azure Service Bus is a fully managed integration message broker that allows applications and services to communicate with one another in a decoupled, reliable, asynchronous manner. Azure Service Bus supports individual queues as well as separate *topics* to support publisher-subscriber scenarios. The eShopOnContainers application would leverage topics with Azure Service Bus to support distributing messages from one microservice to any other microservice that needed to react to a given message. - -## Resiliency - -Once deployed to production, the eShopOnContainers application would be able to take advantage of several Azure services available to improve its resiliency. The application publishes health checks, which can be integrated with Application Insights to provide reporting and alerts based on the app's availability. Azure resources also provide diagnostic logs that can be used to identify and correct bugs and performance issues. 
Resource logs provide detailed information on when and how different Azure resources are used by the application. You'll learn more about cloud-native resiliency features in [chapter 6](resiliency.md). - ->[!div class="step-by-step"] ->[Previous](introduce-eshoponcontainers-reference-app.md) ->[Next](deploy-eshoponcontainers-azure.md) diff --git a/docs/architecture/cloud-native/media/acr-runinstance-contextmenu.png b/docs/architecture/cloud-native/media/acr-runinstance-contextmenu.png deleted file mode 100644 index 393ee014288d2..0000000000000 Binary files a/docs/architecture/cloud-native/media/acr-runinstance-contextmenu.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/aggregator-service.png b/docs/architecture/cloud-native/media/aggregator-service.png deleted file mode 100644 index d6a438af49277..0000000000000 Binary files a/docs/architecture/cloud-native/media/aggregator-service.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/aks-cluster-autoscaler.png b/docs/architecture/cloud-native/media/aks-cluster-autoscaler.png deleted file mode 100644 index 0c999841b0e76..0000000000000 Binary files a/docs/architecture/cloud-native/media/aks-cluster-autoscaler.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/aks-traffic-manager.png b/docs/architecture/cloud-native/media/aks-traffic-manager.png deleted file mode 100644 index 4dbb6168a9a5d..0000000000000 Binary files a/docs/architecture/cloud-native/media/aks-traffic-manager.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/always-encrypted.png b/docs/architecture/cloud-native/media/always-encrypted.png deleted file mode 100644 index c3f91d7df6261..0000000000000 Binary files a/docs/architecture/cloud-native/media/always-encrypted.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/api-gateway-pattern.png b/docs/architecture/cloud-native/media/api-gateway-pattern.png deleted file mode 100644 index a56c05fad5d43..0000000000000 Binary files a/docs/architecture/cloud-native/media/api-gateway-pattern.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/application-gateway-ingress-controller.png b/docs/architecture/cloud-native/media/application-gateway-ingress-controller.png deleted file mode 100644 index a0cc7d93e7ab6..0000000000000 Binary files a/docs/architecture/cloud-native/media/application-gateway-ingress-controller.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/application-types.png b/docs/architecture/cloud-native/media/application-types.png deleted file mode 100644 index 2c2a370e8ad29..0000000000000 Binary files a/docs/architecture/cloud-native/media/application-types.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/application_insights_example.png b/docs/architecture/cloud-native/media/application_insights_example.png deleted file mode 100644 index 0e356738b6311..0000000000000 Binary files a/docs/architecture/cloud-native/media/application_insights_example.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/azure-api-management.png b/docs/architecture/cloud-native/media/azure-api-management.png deleted file mode 100644 index 76f7e9173f1db..0000000000000 Binary files a/docs/architecture/cloud-native/media/azure-api-management.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/azure-event-hub.png b/docs/architecture/cloud-native/media/azure-event-hub.png deleted file mode 100644 index 
4b792c714d872..0000000000000 Binary files a/docs/architecture/cloud-native/media/azure-event-hub.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/azure-managed-databases.png b/docs/architecture/cloud-native/media/azure-managed-databases.png deleted file mode 100644 index 5bae3e9aeed95..0000000000000 Binary files a/docs/architecture/cloud-native/media/azure-managed-databases.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/azure-monitor.png b/docs/architecture/cloud-native/media/azure-monitor.png deleted file mode 100644 index 93b8b61241741..0000000000000 Binary files a/docs/architecture/cloud-native/media/azure-monitor.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/azure-signalr-service.png b/docs/architecture/cloud-native/media/azure-signalr-service.png deleted file mode 100644 index 4dee3f1db17ff..0000000000000 Binary files a/docs/architecture/cloud-native/media/azure-signalr-service.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/azure_dashboard.png b/docs/architecture/cloud-native/media/azure_dashboard.png deleted file mode 100644 index 585e50ffc4831..0000000000000 Binary files a/docs/architecture/cloud-native/media/azure_dashboard.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/backend-for-frontend-pattern.png b/docs/architecture/cloud-native/media/backend-for-frontend-pattern.png deleted file mode 100644 index f5429f59e5709..0000000000000 Binary files a/docs/architecture/cloud-native/media/backend-for-frontend-pattern.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/board-issue-types.png b/docs/architecture/cloud-native/media/board-issue-types.png deleted file mode 100644 index a4b3bbfa08d7c..0000000000000 Binary files a/docs/architecture/cloud-native/media/board-issue-types.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/build-release-run-pipeline.png b/docs/architecture/cloud-native/media/build-release-run-pipeline.png deleted file mode 100644 index 2d1e96884e856..0000000000000 Binary files a/docs/architecture/cloud-native/media/build-release-run-pipeline.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/caching-in-a-cloud-native-app.png b/docs/architecture/cloud-native/media/caching-in-a-cloud-native-app.png deleted file mode 100644 index 03f35443629de..0000000000000 Binary files a/docs/architecture/cloud-native/media/caching-in-a-cloud-native-app.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cap-theorem.png b/docs/architecture/cloud-native/media/cap-theorem.png deleted file mode 100644 index 7dafd70a83ad4..0000000000000 Binary files a/docs/architecture/cloud-native/media/cap-theorem.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/centralized-logging.png b/docs/architecture/cloud-native/media/centralized-logging.png deleted file mode 100644 index c1618318d7f77..0000000000000 Binary files a/docs/architecture/cloud-native/media/centralized-logging.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/chaining-http-queries.png b/docs/architecture/cloud-native/media/chaining-http-queries.png deleted file mode 100644 index 36fc4781099be..0000000000000 Binary files a/docs/architecture/cloud-native/media/chaining-http-queries.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/check-rbac.png b/docs/architecture/cloud-native/media/check-rbac.png deleted file mode 100644 index 
a2f587fe1f0fc..0000000000000 Binary files a/docs/architecture/cloud-native/media/check-rbac.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/checklist.png b/docs/architecture/cloud-native/media/checklist.png deleted file mode 100644 index d64e326c4f70d..0000000000000 Binary files a/docs/architecture/cloud-native/media/checklist.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/circuit-breaker-pattern.png b/docs/architecture/cloud-native/media/circuit-breaker-pattern.png deleted file mode 100644 index bc76ca2dd1356..0000000000000 Binary files a/docs/architecture/cloud-native/media/circuit-breaker-pattern.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cloud-native-design.png b/docs/architecture/cloud-native/media/cloud-native-design.png deleted file mode 100644 index 0ec09b847b389..0000000000000 Binary files a/docs/architecture/cloud-native/media/cloud-native-design.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cloud-native-foundational-pillars.png b/docs/architecture/cloud-native/media/cloud-native-foundational-pillars.png deleted file mode 100644 index 8fb17e7cf9e18..0000000000000 Binary files a/docs/architecture/cloud-native/media/cloud-native-foundational-pillars.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cold-start-warm-start.png b/docs/architecture/cloud-native/media/cold-start-warm-start.png deleted file mode 100644 index 9610f97b654a3..0000000000000 Binary files a/docs/architecture/cloud-native/media/cold-start-warm-start.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/command-interaction-with-queue.png b/docs/architecture/cloud-native/media/command-interaction-with-queue.png deleted file mode 100644 index 0230bbda42ce5..0000000000000 Binary files a/docs/architecture/cloud-native/media/command-interaction-with-queue.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/common-backing-services.png b/docs/architecture/cloud-native/media/common-backing-services.png deleted file mode 100644 index d2fa18c65c013..0000000000000 Binary files a/docs/architecture/cloud-native/media/common-backing-services.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/containers-dashboard.png b/docs/architecture/cloud-native/media/containers-dashboard.png deleted file mode 100644 index b28024223e3bd..0000000000000 Binary files a/docs/architecture/cloud-native/media/containers-dashboard.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/containers-diagram.png b/docs/architecture/cloud-native/media/containers-diagram.png deleted file mode 100644 index 4da4e7c7c0f3a..0000000000000 Binary files a/docs/architecture/cloud-native/media/containers-diagram.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cosmos-consistency-level-graph.png b/docs/architecture/cloud-native/media/cosmos-consistency-level-graph.png deleted file mode 100644 index be5abf59a3f99..0000000000000 Binary files a/docs/architecture/cloud-native/media/cosmos-consistency-level-graph.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cosmos-db-overview.png b/docs/architecture/cloud-native/media/cosmos-db-overview.png deleted file mode 100644 index bc74f642ade80..0000000000000 Binary files a/docs/architecture/cloud-native/media/cosmos-db-overview.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cosmos-db-partitioning.png 
b/docs/architecture/cloud-native/media/cosmos-db-partitioning.png deleted file mode 100644 index 25f2f09d79869..0000000000000 Binary files a/docs/architecture/cloud-native/media/cosmos-db-partitioning.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cosmos-encryption.png b/docs/architecture/cloud-native/media/cosmos-encryption.png deleted file mode 100644 index d0790c7f3e785..0000000000000 Binary files a/docs/architecture/cloud-native/media/cosmos-encryption.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cover-thumbnail.png b/docs/architecture/cloud-native/media/cover-thumbnail.png deleted file mode 100644 index cf5af8e182f41..0000000000000 Binary files a/docs/architecture/cloud-native/media/cover-thumbnail.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cover.png b/docs/architecture/cloud-native/media/cover.png deleted file mode 100644 index 0ba5d2e6d8025..0000000000000 Binary files a/docs/architecture/cloud-native/media/cover.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cqrs-implementation.png b/docs/architecture/cloud-native/media/cqrs-implementation.png deleted file mode 100644 index 558d4a5173470..0000000000000 Binary files a/docs/architecture/cloud-native/media/cqrs-implementation.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/create-container-registry.png b/docs/architecture/cloud-native/media/create-container-registry.png deleted file mode 100644 index f8e2aeb905abb..0000000000000 Binary files a/docs/architecture/cloud-native/media/create-container-registry.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/cross-service-query.png b/docs/architecture/cloud-native/media/cross-service-query.png deleted file mode 100644 index ee9cab5dca626..0000000000000 Binary files a/docs/architecture/cloud-native/media/cross-service-query.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/dapr-high-level.png b/docs/architecture/cloud-native/media/dapr-high-level.png deleted file mode 100644 index 54abd341bccc6..0000000000000 Binary files a/docs/architecture/cloud-native/media/dapr-high-level.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/devops-components.png b/docs/architecture/cloud-native/media/devops-components.png deleted file mode 100644 index b57a9b41f0f34..0000000000000 Binary files a/docs/architecture/cloud-native/media/devops-components.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/different-kinds-of-microservices.png b/docs/architecture/cloud-native/media/different-kinds-of-microservices.png deleted file mode 100644 index e0621d8a4da9b..0000000000000 Binary files a/docs/architecture/cloud-native/media/different-kinds-of-microservices.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/dir-struct.png b/docs/architecture/cloud-native/media/dir-struct.png deleted file mode 100644 index b2cad02574c9c..0000000000000 Binary files a/docs/architecture/cloud-native/media/dir-struct.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/direct-client-to-service-communication.png b/docs/architecture/cloud-native/media/direct-client-to-service-communication.png deleted file mode 100644 index 459e0ed0a7bb3..0000000000000 Binary files a/docs/architecture/cloud-native/media/direct-client-to-service-communication.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/direct-http-communication.png 
b/docs/architecture/cloud-native/media/direct-http-communication.png deleted file mode 100644 index d4ed795f4488a..0000000000000 Binary files a/docs/architecture/cloud-native/media/direct-http-communication.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/distributed-cloud-native-environment.png b/docs/architecture/cloud-native/media/distributed-cloud-native-environment.png deleted file mode 100644 index 9f6e8f9c3ee43..0000000000000 Binary files a/docs/architecture/cloud-native/media/distributed-cloud-native-environment.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/distributed-data.png b/docs/architecture/cloud-native/media/distributed-data.png deleted file mode 100644 index f50378bf550d7..0000000000000 Binary files a/docs/architecture/cloud-native/media/distributed-data.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/docker-desktop-kubernetes.png b/docs/architecture/cloud-native/media/docker-desktop-kubernetes.png deleted file mode 100644 index 897fc69402160..0000000000000 Binary files a/docs/architecture/cloud-native/media/docker-desktop-kubernetes.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/eshop-with-aggregators.png b/docs/architecture/cloud-native/media/eshop-with-aggregators.png deleted file mode 100644 index c632aadc1e189..0000000000000 Binary files a/docs/architecture/cloud-native/media/eshop-with-aggregators.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/eshoponcontainers-architecture.png b/docs/architecture/cloud-native/media/eshoponcontainers-architecture.png deleted file mode 100644 index f97b157eb3ded..0000000000000 Binary files a/docs/architecture/cloud-native/media/eshoponcontainers-architecture.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/eshoponcontainers-development-architecture.png b/docs/architecture/cloud-native/media/eshoponcontainers-development-architecture.png deleted file mode 100644 index 12d0f2afeb2d1..0000000000000 Binary files a/docs/architecture/cloud-native/media/eshoponcontainers-development-architecture.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/eshoponcontainers-helm-folder.png b/docs/architecture/cloud-native/media/eshoponcontainers-helm-folder.png deleted file mode 100644 index f763296d8fd82..0000000000000 Binary files a/docs/architecture/cloud-native/media/eshoponcontainers-helm-folder.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/eshoponcontainers-sample-app-screenshot.jpg b/docs/architecture/cloud-native/media/eshoponcontainers-sample-app-screenshot.jpg deleted file mode 100644 index 1fe678e927339..0000000000000 Binary files a/docs/architecture/cloud-native/media/eshoponcontainers-sample-app-screenshot.jpg and /dev/null differ diff --git a/docs/architecture/cloud-native/media/event-driven-messaging.png b/docs/architecture/cloud-native/media/event-driven-messaging.png deleted file mode 100644 index 5acf76fc0ff30..0000000000000 Binary files a/docs/architecture/cloud-native/media/event-driven-messaging.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/event-grid-anatomy.png b/docs/architecture/cloud-native/media/event-grid-anatomy.png deleted file mode 100644 index 64d70d92de429..0000000000000 Binary files a/docs/architecture/cloud-native/media/event-grid-anatomy.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/event-hub-partitioning.png 
b/docs/architecture/cloud-native/media/event-hub-partitioning.png deleted file mode 100644 index feac964c43b26..0000000000000 Binary files a/docs/architecture/cloud-native/media/event-hub-partitioning.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/event-sourcing.png b/docs/architecture/cloud-native/media/event-sourcing.png deleted file mode 100644 index c4fbb8e62505f..0000000000000 Binary files a/docs/architecture/cloud-native/media/event-sourcing.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/grpc-implementation.png b/docs/architecture/cloud-native/media/grpc-implementation.png deleted file mode 100644 index 4bf5998f7b9b1..0000000000000 Binary files a/docs/architecture/cloud-native/media/grpc-implementation.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/grpc-project.png b/docs/architecture/cloud-native/media/grpc-project.png deleted file mode 100644 index 4131bf62a90da..0000000000000 Binary files a/docs/architecture/cloud-native/media/grpc-project.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/hosting-mulitple-containers.png b/docs/architecture/cloud-native/media/hosting-mulitple-containers.png deleted file mode 100644 index 1616c52998afc..0000000000000 Binary files a/docs/architecture/cloud-native/media/hosting-mulitple-containers.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/istio-control-and-data-plane.png b/docs/architecture/cloud-native/media/istio-control-and-data-plane.png deleted file mode 100644 index 409d59f663b18..0000000000000 Binary files a/docs/architecture/cloud-native/media/istio-control-and-data-plane.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/kibana-dashboard.png b/docs/architecture/cloud-native/media/kibana-dashboard.png deleted file mode 100644 index e310e27106fe1..0000000000000 Binary files a/docs/architecture/cloud-native/media/kibana-dashboard.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/kubernetes-cluster-components.png b/docs/architecture/cloud-native/media/kubernetes-cluster-components.png deleted file mode 100644 index 8ef83ee4e6a4e..0000000000000 Binary files a/docs/architecture/cloud-native/media/kubernetes-cluster-components.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/kubernetes-cluster-in-azure.png b/docs/architecture/cloud-native/media/kubernetes-cluster-in-azure.png deleted file mode 100644 index 1562d0d8e4dbb..0000000000000 Binary files a/docs/architecture/cloud-native/media/kubernetes-cluster-in-azure.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/local-log-file-per-service.png b/docs/architecture/cloud-native/media/local-log-file-per-service.png deleted file mode 100644 index 083de237587ea..0000000000000 Binary files a/docs/architecture/cloud-native/media/local-log-file-per-service.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/materialized-view-pattern.png b/docs/architecture/cloud-native/media/materialized-view-pattern.png deleted file mode 100644 index 773c9649dd13e..0000000000000 Binary files a/docs/architecture/cloud-native/media/materialized-view-pattern.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/microservices-vs-devops.png b/docs/architecture/cloud-native/media/microservices-vs-devops.png deleted file mode 100644 index 43b44690647fb..0000000000000 Binary files a/docs/architecture/cloud-native/media/microservices-vs-devops.png and 
/dev/null differ diff --git a/docs/architecture/cloud-native/media/monolithic-design.png b/docs/architecture/cloud-native/media/monolithic-design.png deleted file mode 100644 index 9f8abe90fb3ca..0000000000000 Binary files a/docs/architecture/cloud-native/media/monolithic-design.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/monolithic-vs-microservices.png b/docs/architecture/cloud-native/media/monolithic-vs-microservices.png deleted file mode 100644 index 18270534b633b..0000000000000 Binary files a/docs/architecture/cloud-native/media/monolithic-vs-microservices.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/multiple-node-monolith-logging.png b/docs/architecture/cloud-native/media/multiple-node-monolith-logging.png deleted file mode 100644 index be159f5c1d267..0000000000000 Binary files a/docs/architecture/cloud-native/media/multiple-node-monolith-logging.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/polyglot-data-persistence.png b/docs/architecture/cloud-native/media/polyglot-data-persistence.png deleted file mode 100644 index 7767a668cd1f9..0000000000000 Binary files a/docs/architecture/cloud-native/media/polyglot-data-persistence.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/powerbidashboard.png b/docs/architecture/cloud-native/media/powerbidashboard.png deleted file mode 100644 index 27cc0044323f2..0000000000000 Binary files a/docs/architecture/cloud-native/media/powerbidashboard.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/projects-in-visual-studio-solution.png b/docs/architecture/cloud-native/media/projects-in-visual-studio-solution.png deleted file mode 100644 index 0e7665c7331d6..0000000000000 Binary files a/docs/architecture/cloud-native/media/projects-in-visual-studio-solution.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/rbac-role-definition.png b/docs/architecture/cloud-native/media/rbac-role-definition.png deleted file mode 100644 index b923bb14f35c3..0000000000000 Binary files a/docs/architecture/cloud-native/media/rbac-role-definition.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/rbac-security-principal.png b/docs/architecture/cloud-native/media/rbac-security-principal.png deleted file mode 100644 index 2ad160dcfa551..0000000000000 Binary files a/docs/architecture/cloud-native/media/rbac-security-principal.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/release-pipeline.png b/docs/architecture/cloud-native/media/release-pipeline.png deleted file mode 100644 index c3e3668369017..0000000000000 Binary files a/docs/architecture/cloud-native/media/release-pipeline.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/replicated-resources.png b/docs/architecture/cloud-native/media/replicated-resources.png deleted file mode 100644 index a01232e4aa332..0000000000000 Binary files a/docs/architecture/cloud-native/media/replicated-resources.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/request-reply-pattern.png b/docs/architecture/cloud-native/media/request-reply-pattern.png deleted file mode 100644 index 3f6710c01a7d6..0000000000000 Binary files a/docs/architecture/cloud-native/media/request-reply-pattern.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/retry-pattern.png b/docs/architecture/cloud-native/media/retry-pattern.png deleted file mode 100644 index 
6fae26af1f35e..0000000000000 Binary files a/docs/architecture/cloud-native/media/retry-pattern.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/saga-rollback-operation.png b/docs/architecture/cloud-native/media/saga-rollback-operation.png deleted file mode 100644 index a56c6084e2c08..0000000000000 Binary files a/docs/architecture/cloud-native/media/saga-rollback-operation.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/saga-transaction-operation.png b/docs/architecture/cloud-native/media/saga-transaction-operation.png deleted file mode 100644 index f231e276ff39c..0000000000000 Binary files a/docs/architecture/cloud-native/media/saga-transaction-operation.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/scale-up-scale-out.png b/docs/architecture/cloud-native/media/scale-up-scale-out.png deleted file mode 100644 index b62e67c821625..0000000000000 Binary files a/docs/architecture/cloud-native/media/scale-up-scale-out.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/service-bus-queue.png b/docs/architecture/cloud-native/media/service-bus-queue.png deleted file mode 100644 index 18ce379dd6cc3..0000000000000 Binary files a/docs/architecture/cloud-native/media/service-bus-queue.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/service-mesh-with-side-car.png b/docs/architecture/cloud-native/media/service-mesh-with-side-car.png deleted file mode 100644 index dbf2eaf112e89..0000000000000 Binary files a/docs/architecture/cloud-native/media/service-mesh-with-side-car.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/single-monolith-logging.png b/docs/architecture/cloud-native/media/single-monolith-logging.png deleted file mode 100644 index 594f454911c2f..0000000000000 Binary files a/docs/architecture/cloud-native/media/single-monolith-logging.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/single-repository-vs-multiple.png b/docs/architecture/cloud-native/media/single-repository-vs-multiple.png deleted file mode 100644 index 5c7a32194f178..0000000000000 Binary files a/docs/architecture/cloud-native/media/single-repository-vs-multiple.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/sprint-board.png b/docs/architecture/cloud-native/media/sprint-board.png deleted file mode 100644 index 431f5c184b767..0000000000000 Binary files a/docs/architecture/cloud-native/media/sprint-board.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/ssl-report.png b/docs/architecture/cloud-native/media/ssl-report.png deleted file mode 100644 index d276a5a1381de..0000000000000 Binary files a/docs/architecture/cloud-native/media/ssl-report.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/storage-queue-hierarchy.png b/docs/architecture/cloud-native/media/storage-queue-hierarchy.png deleted file mode 100644 index 7a30973d29d1e..0000000000000 Binary files a/docs/architecture/cloud-native/media/storage-queue-hierarchy.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/strategies-for-migrating-legacy-workloads.png b/docs/architecture/cloud-native/media/strategies-for-migrating-legacy-workloads.png deleted file mode 100644 index 041c2c52caac8..0000000000000 Binary files a/docs/architecture/cloud-native/media/strategies-for-migrating-legacy-workloads.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/task-details.png 
b/docs/architecture/cloud-native/media/task-details.png deleted file mode 100644 index e99bee57cc8c3..0000000000000 Binary files a/docs/architecture/cloud-native/media/task-details.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/topic-architecture.png b/docs/architecture/cloud-native/media/topic-architecture.png deleted file mode 100644 index 0f0242004a2a8..0000000000000 Binary files a/docs/architecture/cloud-native/media/topic-architecture.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/types-of-nosql-datastores.png b/docs/architecture/cloud-native/media/types-of-nosql-datastores.png deleted file mode 100644 index 8773b1df21838..0000000000000 Binary files a/docs/architecture/cloud-native/media/types-of-nosql-datastores.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/virtual-network.png b/docs/architecture/cloud-native/media/virtual-network.png deleted file mode 100644 index 1af610e434b31..0000000000000 Binary files a/docs/architecture/cloud-native/media/virtual-network.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/visual-studio-2022-grpc-template.png b/docs/architecture/cloud-native/media/visual-studio-2022-grpc-template.png deleted file mode 100644 index bd90f732e64ad..0000000000000 Binary files a/docs/architecture/cloud-native/media/visual-studio-2022-grpc-template.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/visual-studio-add-docker-support.png b/docs/architecture/cloud-native/media/visual-studio-add-docker-support.png deleted file mode 100644 index 8305d72397a3c..0000000000000 Binary files a/docs/architecture/cloud-native/media/visual-studio-add-docker-support.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/visual-studio-docker-run-options.png b/docs/architecture/cloud-native/media/visual-studio-docker-run-options.png deleted file mode 100644 index 91a70356f729f..0000000000000 Binary files a/docs/architecture/cloud-native/media/visual-studio-docker-run-options.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/visual-studio-enable-docker-support.png b/docs/architecture/cloud-native/media/visual-studio-enable-docker-support.png deleted file mode 100644 index e602280970694..0000000000000 Binary files a/docs/architecture/cloud-native/media/visual-studio-enable-docker-support.png and /dev/null differ diff --git a/docs/architecture/cloud-native/media/what-container-orchestrators-do.png b/docs/architecture/cloud-native/media/what-container-orchestrators-do.png deleted file mode 100644 index bd5265eecb998..0000000000000 Binary files a/docs/architecture/cloud-native/media/what-container-orchestrators-do.png and /dev/null differ diff --git a/docs/architecture/cloud-native/monitoring-azure-kubernetes.md b/docs/architecture/cloud-native/monitoring-azure-kubernetes.md deleted file mode 100644 index 81a2ef21ceba9..0000000000000 --- a/docs/architecture/cloud-native/monitoring-azure-kubernetes.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Monitoring in Azure Kubernetes Services -description: Monitoring in Azure Kubernetes Services -ms.date: 04/06/2022 ---- - -# Monitoring in Azure Kubernetes Services - -[!INCLUDE [download-alert](includes/download-alert.md)] - -The built-in logging in Kubernetes is primitive. However, there are some great options for getting the logs out of Kubernetes and into a place where they can be properly analyzed. 
If you need to monitor your AKS clusters, configuring Elastic Stack for Kubernetes is a great solution. - -## Azure Monitor for Containers - -[Azure Monitor for Containers](/azure/azure-monitor/insights/container-insights-overview) supports consuming logs from not just Kubernetes but also from other orchestration engines such as DC/OS, Docker Swarm, and Red Hat OpenShift. - -![Consuming logs from various containers](./media/containers-diagram.png) -**Figure 7-10**. Consuming logs from various containers - -[Prometheus](https://prometheus.io/) is a popular open source metric monitoring solution. It is part of the Cloud Native Compute Foundation. Typically, using Prometheus requires managing a Prometheus server with its own store. However, [Azure Monitor for Containers provides direct integration with Prometheus metrics endpoints](/azure/azure-monitor/insights/container-insights-prometheus-integration), so a separate server is not required. - -Log and metric information is gathered not just from the containers running in the cluster but also from the cluster hosts themselves. It allows correlating log information from the two making it much easier to track down an error. - -Installing the log collectors differs on [Windows](/azure/azure-monitor/insights/containers#configure-a-log-analytics-windows-agent-for-kubernetes) and [Linux](/azure/azure-monitor/insights/containers#configure-a-log-analytics-linux-agent-for-kubernetes) clusters. But in both cases the log collection is implemented as a Kubernetes [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), meaning that the log collector is run as a container on each of the nodes. - -No matter which orchestrator or operating system is running the Azure Monitor daemon, the log information is forwarded to the same Azure Monitor tools with which users are familiar. This approach ensures a parallel experience in environments that mix different log sources such as a hybrid Kubernetes/Azure Functions environment. - -![A sample dashboard showing logging and metric information from a number of running containers.](./media/containers-dashboard.png) -**Figure 7-11**. A sample dashboard showing logging and metric information from many running containers. - -## Log.Finalize() - -Logging is one of the most overlooked and yet most important parts of deploying any application at scale. As the size and complexity of applications increase, then so does the difficulty of debugging them. Having top quality logs available makes debugging much easier and moves it from the realm of "nearly impossible" to "a pleasant experience". - ->[!div class="step-by-step"] ->[Previous](logging-with-elastic-stack.md) ->[Next](azure-monitor.md) diff --git a/docs/architecture/cloud-native/monitoring-health.md b/docs/architecture/cloud-native/monitoring-health.md deleted file mode 100644 index 1f664b77ffc8b..0000000000000 --- a/docs/architecture/cloud-native/monitoring-health.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Monitoring and health -description: Monitoring and Health -ms.date: 04/06/2022 ---- - -# Monitoring and health - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Microservices and cloud-native applications go hand in hand with good DevOps practices. DevOps is many things to many people but perhaps one of the better definitions comes from cloud advocate and DevOps evangelist Donovan Brown: - -"DevOps is the union of people, process, and products to enable continuous delivery of value to our end users." 
- -Unfortunately, with terse definitions, there's always room to say more things. One of the key components of DevOps is ensuring that the applications running in production are functioning properly and efficiently. To gauge the health of the application in production, it's necessary to monitor the various logs and metrics being produced from the servers, hosts, and the application proper. The number of different services running in support of a cloud-native application makes monitoring the health of individual components and the application as a whole a critical challenge. - ->[!div class="step-by-step"] ->[Previous](resilient-communications.md) ->[Next](observability-patterns.md) diff --git a/docs/architecture/cloud-native/observability-patterns.md b/docs/architecture/cloud-native/observability-patterns.md deleted file mode 100644 index 1aa69da1ec4df..0000000000000 --- a/docs/architecture/cloud-native/observability-patterns.md +++ /dev/null @@ -1,100 +0,0 @@ ---- -title: Observability patterns -description: Observability patterns for cloud-native applications -ms.date: 04/06/2022 ---- - -# Observability patterns - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Just as patterns have been developed to aid in the layout of code in applications, there are patterns for operating applications in a reliable way. Three useful patterns in maintaining applications have emerged: **logging**, **monitoring**, and **alerts**. - -## When to use logging - -No matter how careful we are, applications almost always behave in unexpected ways in production. When users report problems with an application, it's useful to be able to see what was going on with the app when the problem occurred. One of the most tried and true ways of capturing information about what an application is doing while it's running is to have the application write down what it's doing. This process is known as logging. Anytime failures or problems occur in production, the goal should be to reproduce the conditions under which the failures occurred, in a non-production environment. Having good logging in place provides a roadmap for developers to follow in order to duplicate problems in an environment that can be tested and experimented with. - -### Challenges when logging with cloud-native applications - -In traditional applications, log files are typically stored on the local machine. In fact, on Unix-like operating systems, there's a folder structure defined to hold any logs, typically under `/var/log`. - -![Logging to a file in a monolithic app.](./media/single-monolith-logging.png) -**Figure 7-1**. Logging to a file in a monolithic app. - -The usefulness of logging to a flat file on a single machine is vastly reduced in a cloud environment. Applications producing logs may not have access to the local disk or the local disk may be highly transient as containers are shuffled around physical machines. Even simple scaling up of monolithic applications across multiple nodes can make it challenging to locate the appropriate file-based log file. - -![Logging to files in a scaled monolithic app.](./media/multiple-node-monolith-logging.png) -**Figure 7-2**. Logging to files in a scaled monolithic app. - -Cloud-native applications developed using a microservices architecture also pose some challenges for file-based loggers. User requests may now span multiple services that are run on different machines and may include serverless functions with no access to a local file system at all. 
It would be very challenging to correlate the logs from a user or a session across these many services and machines. - -![Logging to local files in a microservices app.](./media/local-log-file-per-service.png) -**Figure 7-3**. Logging to local files in a microservices app. - -Finally, the number of users in some cloud-native applications is high. Imagine that each user generates a hundred lines of log messages when they log into an application. In isolation, that is manageable, but multiply that over 100,000 users and the volume of logs becomes large enough that specialized tools are needed to support effective use of the logs. - -### Logging in cloud-native applications - -Every programming language has tooling that permits writing logs, and typically the overhead for writing these logs is low. Many logging libraries provide different levels of logging criticality, which can be tuned at run time. For instance, the [Serilog library](https://serilog.net/) is a popular structured logging library for .NET that provides the following logging levels: - -* Verbose -* Debug -* Information -* Warning -* Error -* Fatal - -These different log levels provide granularity in logging. When the application is functioning properly in production, it may be configured to only log important messages. When the application is misbehaving, the log level can be increased so more verbose logs are gathered. This balances performance against ease of debugging. - -The high performance of logging tools and the tunability of verbosity should encourage developers to log frequently. Many favor a pattern of logging the entry and exit of each method. This approach may sound like overkill, but it's infrequent that developers will wish for less logging. In fact, it's not uncommon to perform deployments for the sole purpose of adding logging around a problematic method. Err on the side of too much logging rather than too little. Some tools can be used to automatically provide this kind of logging. - -Because of the challenges associated with using file-based logs in cloud-native apps, centralized logs are preferred. Logs are collected by the applications and shipped to a central logging application, which indexes and stores the logs. This class of system can ingest tens of gigabytes of logs every day. - -It's also helpful to follow some standard practices when building logging that spans many services. For instance, generating a [correlation ID](https://blog.rapid7.com/2016/12/23/the-value-of-correlation-ids/) at the start of a lengthy interaction, and then logging it in each message that is related to that interaction, makes it easier to search for all related messages. One need only find a single message and extract the correlation ID to find all the related messages. Another example is ensuring that the log format is the same for every service, whatever the language or logging library it uses. This standardization makes reading logs much easier. Figure 7-4 demonstrates how a microservices architecture can leverage centralized logging as part of its workflow. - -![Logs from various sources are ingested into a centralized log store.](./media/centralized-logging.png) -**Figure 7-4**. Logs from various sources are ingested into a centralized log store. - -## Challenges with detecting and responding to potential app health issues - -Some applications aren't mission critical. Maybe they're only used internally, and when a problem occurs, the user can contact the team responsible and the application can be restarted.
However, customers often have higher expectations for the applications they consume. You should know when problems occur with your application *before* users do, or before users notify you. Otherwise, the first you know about a problem may be when you notice an angry deluge of social media posts deriding your application or even your organization. - -Some scenarios you may need to consider include: - -- One service in your application keeps failing and restarting, resulting in intermittent slow responses. -- At some times of the day, your application's response time is slow. -- After a recent deployment, load on the database has tripled. - -Implemented properly, monitoring can let you know about conditions that will lead to problems, letting you address underlying conditions before they result in any significant user impact. - -### Monitoring cloud-native apps - -Some centralized logging systems take on an additional role of collecting telemetry outside of pure logs. They can collect metrics, such as time to run a database query, average response time from a web server, and even CPU load averages and memory pressure as reported by the operating system. In conjunction with the logs, these systems can provide a holistic view of the health of nodes in the system and the application as a whole. - -The metric-gathering capabilities of the monitoring tools can also be fed manually from within the application. Business flows that are of particular interest such as new users signing up or orders being placed, may be instrumented such that they increment a counter in the central monitoring system. This aspect unlocks the monitoring tools to not only monitor the health of the application but the health of the business. - -Queries can be constructed in the log aggregation tools to look for certain statistics or patterns, which can then be displayed in graphical form, on custom dashboards. Frequently, teams will invest in large, wall-mounted displays that rotate through the statistics related to an application. This way, it's simple to see the problems as they occur. - -Cloud-native monitoring tools provide real-time telemetry and insight into apps regardless of whether they're single-process monolithic applications or distributed microservice architectures. They include tools that allow collection of data from the app as well as tools for querying and displaying information about the app's health. - -## Challenges with reacting to critical problems in cloud-native apps - -If you need to react to problems with your application, you need some way to alert the right personnel. This is the third cloud-native application observability pattern and depends on logging and monitoring. Your application needs to have logging in place to allow problems to be diagnosed, and in some cases to feed into monitoring tools. It needs monitoring to aggregate application metrics and health data in one place. Once this has been established, rules can be created that will trigger alerts when certain metrics fall outside of acceptable levels. - -Generally, alerts are layered on top of monitoring such that certain conditions trigger appropriate alerts to notify team members of urgent problems. Some scenarios that may require alerts include: - -- One of your application's services is not responding after 1 minute of downtime. -- Your application is returning unsuccessful HTTP responses to more than 1% of requests. -- Your application's average response time for key endpoints exceeds 2000 ms. 
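For an alert to fire, the monitoring system first needs something in each service to observe. A minimal way to expose that signal from an ASP.NET Core service is a health endpoint; in the sketch below, the `/healthz` route and the `"self"` check name are illustrative choices rather than required values:

```csharp
using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

// Register a simple liveness check that reports the process is up and serving requests.
builder.Services.AddHealthChecks()
    .AddCheck("self", () => HealthCheckResult.Healthy());

var app = builder.Build();

// Expose the checks over HTTP. An external monitor that sees failures,
// or no response at all, for a sustained period can raise an alert.
app.MapHealthChecks("/healthz");

app.Run();
```

An alert rule in the monitoring tool can then fire when this endpoint stops returning success for longer than an agreed threshold, covering the first scenario in the preceding list.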
- -### Alerts in cloud-native apps - -You can craft queries against the monitoring tools to look for known failure conditions. For instance, queries could search through the incoming logs for indications of HTTP status code 500, which indicates a problem on a web server. As soon as one of these is detected, then an e-mail or an SMS could be sent to the owner of the originating service who can begin to investigate. - -Typically, though, a single 500 error isn't enough to determine that a problem has occurred. It could mean that a user mistyped their password or entered some malformed data. The alert queries can be crafted to only fire when a larger than average number of 500 errors are detected. - -One of the most damaging patterns in alerting is to fire too many alerts for humans to investigate. Service owners will rapidly become desensitized to errors that they've previously investigated and found to be benign. Then, when true errors occur, they'll be lost in the noise of hundreds of false positives. The parable of the [Boy Who Cried Wolf](https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf) is frequently told to children to warn them of this very danger. It's important to ensure that the alerts that do fire are indicative of a real problem. - ->[!div class="step-by-step"] ->[Previous](monitoring-health.md) ->[Next](logging-with-elastic-stack.md) diff --git a/docs/architecture/cloud-native/other-deployment-options.md b/docs/architecture/cloud-native/other-deployment-options.md deleted file mode 100644 index 87aa6e104c867..0000000000000 --- a/docs/architecture/cloud-native/other-deployment-options.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -title: Other container deployment options -description: Other Container Deployment Options using Azure -ms.date: 04/06/2022 ---- - -# Other container deployment options - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Aside from Azure Kubernetes Service (AKS), you can also deploy containers to Azure App Service for Containers and Azure Container Instances. - -## When does it make sense to deploy to App Service for Containers? - -Simple production applications that don't require orchestration are well suited to Azure App Service for Containers. - -## How to deploy to App Service for Containers - -To deploy to [Azure App Service for Containers](https://azure.microsoft.com/services/app-service/containers/), you'll need an Azure Container Registry (ACR) instance and credentials to access it. Push your container image to the ACR repository so that your Azure App Service can pull it when needed. Once complete, you can configure the app for Continuous Deployment. Doing so will automatically deploy updates whenever the image changes in ACR. - -## When does it make sense to deploy to Azure Container Instances? - -[Azure Container Instances (ACI)](https://azure.microsoft.com/services/container-instances/) enables you to run Docker containers in a managed, serverless cloud environment, without having to set up virtual machines or clusters. It's a great solution for short-running workloads that can run in an isolated container. Consider ACI for simple services, testing scenarios, task automation, and build jobs. ACI spins-up a container instance, performs the task, and then spins it down. - -## How to deploy an app to Azure Container Instances - -To deploy to [Azure Container Instances (ACI)](/azure/container-instances/), you need an Azure Container Registry (ACR) and credentials for accessing it. 
Once you push your container image to the repository, it's available to pull into ACI. You can work with ACI using the Azure portal or command-line interface. ACR provides tight integration with ACI. Figure 3-12 shows how to push an individual container image to ACR. - -![Azure Container Registry Run Instance](./media/acr-runinstance-contextmenu.png) - -**Figure 3-12**. Azure Container Registry Run Instance - -Creating an instance in ACI can be done quickly. Specify the image registry, Azure resource group information, the amount of memory to allocate, and the port on which to listen. This [quickstart shows how to deploy a container instance to ACI using the Azure portal](/azure/container-instances/container-instances-quickstart-portal). - -Once the deployment completes, find the newly deployed container's IP address and communicate with it over the port you specified. - -Azure Container Instances offers the fastest way to run simple container workloads in Azure. You don't need to configure an app service, orchestrator, or virtual machine. For scenarios where you require full container orchestration, service discovery, automatic scaling, or coordinated upgrades, we recommend Azure Kubernetes Service (AKS). - -## References - -- [What is Kubernetes?](https://blog.newrelic.com/engineering/what-is-kubernetes/) -- [Installing Kubernetes with Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/) -- [MiniKube vs Docker Desktop](https://medium.com/containers-101/local-kubernetes-for-windows-minikube-vs-docker-desktop-25a1c6d3b766) -- [Visual Studio Tools for Docker](/dotnet/standard/containerized-lifecycle-architecture/design-develop-containerized-apps/visual-studio-tools-for-docker) -- [Understanding serverless cold start](https://azure.microsoft.com/blog/understanding-serverless-cold-start/) -- [Pre-warmed Azure Functions instances](/azure/azure-functions/functions-premium-plan#pre-warmed-instances) -- [Create a function on Linux using a custom image](/azure/azure-functions/functions-create-function-linux-custom-image) -- [Run Azure Functions in a Docker Container](https://markheath.net/post/azure-functions-docker) -- [Create a function on Linux using a custom image](/azure/azure-functions/functions-create-function-linux-custom-image) -- [Azure Functions with Kubernetes Event Driven Autoscaling](/azure/azure-functions/functions-kubernetes-keda) -- [Canary Release](https://martinfowler.com/bliki/CanaryRelease.html) -- [Azure Dev Spaces with VS Code](/azure/dev-spaces/quickstart-netcore) -- [Azure Dev Spaces with Visual Studio](/azure/dev-spaces/quickstart-netcore-visualstudio) -- [AKS Multiple Node Pools](/azure/aks/use-multiple-node-pools) -- [AKS Cluster Autoscaler](/azure/aks/cluster-autoscaler) -- [Tutorial: Scale applications in AKS](/azure/aks/tutorial-kubernetes-scale) -- [Azure Functions scale and hosting](/azure/azure-functions/functions-scale) -- [Azure Container Instances Docs](/azure/container-instances/) -- [Deploy Container Instance from ACR](/azure/container-instances/container-instances-using-azure-container-registry#deploy-with-azure-portal) - ->[!div class="step-by-step"] ->[Previous](scale-containers-serverless.md) ->[Next](communication-patterns.md) diff --git a/docs/architecture/cloud-native/relational-vs-nosql-data.md b/docs/architecture/cloud-native/relational-vs-nosql-data.md deleted file mode 100644 index 3b12625a1f9b6..0000000000000 --- a/docs/architecture/cloud-native/relational-vs-nosql-data.md +++ /dev/null @@ -1,299 +0,0 @@ ---- -title: 
Relational vs. NoSQL data -description: Learn about relational and NoSQL data in cloud-native applications -author: robvet -ms.date: 04/06/2022 ---- - -# SQL vs. NoSQL data - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Relational (SQL) and non-relational (NoSQL) are two types of database systems commonly implemented in cloud-native apps. They're built differently, store data differently, and are accessed differently. In this section, we'll look at both. Later in this chapter, we'll look at an emerging database technology called *NewSQL*. - -*Relational databases* have been a prevalent technology for decades. They're mature, proven, and widely implemented. Competing database products, tooling, and expertise abound. Relational databases provide a store of related data tables. These tables have a fixed schema, use SQL (Structured Query Language) to manage data, and support ACID guarantees: atomicity, consistency, isolation, and durability. - -*NoSQL databases* refer to high-performance, non-relational data stores. They excel in their ease-of-use, scalability, resilience, and availability characteristics. Instead of joining tables of normalized data, NoSQL stores unstructured or semi-structured data, often in key-value pairs or JSON documents. NoSQL databases typically don't provide ACID guarantees beyond the scope of a single database partition. High-volume services that require sub-second response time favor NoSQL datastores. - -The impact of [NoSQL](https://www.geeksforgeeks.org/introduction-to-nosql/) technologies on distributed cloud-native systems can't be overstated. The proliferation of new data technologies in this space has disrupted solutions that once exclusively relied on relational databases. - -NoSQL databases include several different models for accessing and managing data, each suited to specific use cases. Figure 5-9 presents four common models. - -![NoSQL data models](./media/types-of-nosql-datastores.png) - -**Figure 5-9**: Data models for NoSQL databases - -| Model | Characteristics | -| :-------- | :-------- | -| Document Store | Data and metadata are stored hierarchically in JSON-based documents inside the database. | -| Key Value Store | The simplest of the NoSQL databases, data is represented as a collection of key-value pairs. | -| Wide-Column Store | Related data is stored as a set of nested-key/value pairs within a single column. | -| Graph Store | Data is stored in a graph structure as node, edge, and data properties. | - -## The CAP theorem - -As a way to understand the differences between these types of databases, consider the CAP theorem, a set of principles applied to distributed systems that store state. Figure 5-10 shows the three properties of the CAP theorem. - -![CAP theorem](./media/cap-theorem.png) - -**Figure 5-10**. The CAP theorem - -The theorem states that distributed data systems will offer a trade-off between consistency, availability, and partition tolerance. And that any database can only guarantee *two* of the three properties: - -- *Consistency.* Every node in the cluster responds with the most recent data, even if the system must block the request until all replicas update. If you query a "consistent system" for an item that is currently updating, you'll wait for that response until all replicas successfully update. However, you'll receive the most current data.
It should be understood that the term "consistency" as it's used in the context of the CAP theorem has a technical meaning that is distinct from the way "consistency" is defined in the context of ACID guarantees. - -- *Availability.* Every request received by a non-failing node in the system must result in a response. Put simply, if you query an "available system" for an item that is updating, you'll get the best possible answer the service can provide at that moment. But note that "availability" as defined by CAP theorem is technically different from "high availability" as it's conventionally known for distributed systems. - -- *Partition Tolerance.* Guarantees the system continues to operate even if a replicated data node fails or loses connectivity with other replicated data nodes. - -CAP theorem explains the tradeoffs associated with managing consistency and availability during a network partition; however, tradeoffs with respect to consistency and performance also exist in the absence of a network partition. CAP theorem is often further extended to [PACELC](http://www.cs.umd.edu/~abadi/papers/abadi-pacelc.pdf) to explain the tradeoffs more comprehensively. - -> [!NOTE] -> Even if you choose availability over consistency, in times of network partition, availability will suffer. A CAP-available system is more available to some of its clients, but it's not necessarily "highly available" to all its clients. - -Relational databases typically provide consistency and availability, but not partition tolerance. They're typically provisioned to a single server and scale vertically by adding more resources to the machine. - -Many relational database systems support built-in replication features where copies of the primary database can be made to other secondary server instances. Write operations are made to the primary instance and replicated to each of the secondaries. Upon a failure, the primary instance can fail over to a secondary to provide high availability. Secondaries can also be used to distribute read operations. While write operations always go against the primary replica, read operations can be routed to any of the secondaries to reduce system load. - -Data can also be horizontally partitioned across multiple nodes, such as with [sharding](/azure/sql-database/sql-database-elastic-scale-introduction). But, sharding dramatically increases operational overhead by splitting data across many pieces that cannot easily communicate. It can be costly and time-consuming to manage. Relational features that include table joins, transactions, and referential integrity incur steep performance penalties in sharded deployments. - -Replication consistency and recovery point objectives can be tuned by configuring whether replication occurs synchronously or asynchronously. If data replicas were to lose network connectivity in a "highly consistent" or synchronous relational database cluster, you wouldn't be able to write to the database. The system would reject the write operation as it can't replicate that change to the other data replica. Every data replica has to update before the transaction can complete. - -NoSQL databases typically support high availability and partition tolerance. They scale out horizontally, often across commodity servers. This approach provides tremendous availability, both within and across geographical regions at a reduced cost. You partition and replicate data across these machines, or nodes, providing redundancy and fault tolerance.
Consistency is typically tuned through consensus protocols or quorum mechanisms. These mechanisms provide more control over that tradeoff than tuning synchronous versus asynchronous replication does in relational systems. - -If data replicas were to lose connectivity in a "highly available" NoSQL database cluster, you could still complete a write operation to the database. The database cluster would allow the write operation and update each data replica as it becomes available. NoSQL databases that support multiple writable replicas can further strengthen high availability by avoiding the need for failover when optimizing recovery time objective. - -Modern NoSQL databases typically implement partitioning capabilities as a feature of their system design. Partition management is often built into the database, and routing is achieved through placement hints - often called partition keys. A flexible data model enables NoSQL databases to lower the burden of schema management and improve availability when deploying application updates that require data model changes. - -High availability and massive scalability are often more critical to the business than relational table joins and referential integrity. Developers can implement techniques and patterns such as Sagas, CQRS, and asynchronous messaging to embrace eventual consistency. - -> Nowadays, care must be taken when considering the CAP theorem constraints. A new type of database, called NewSQL, has emerged that extends the relational database engine to support both horizontal scalability and the scalable performance of NoSQL systems. - -## Considerations for relational vs. NoSQL systems - -Based upon specific data requirements, a cloud-native microservice can implement a relational datastore, a NoSQL datastore, or both. - -| Consider a NoSQL datastore when: | Consider a relational database when: | -| :-------- | :-------- | -| You have high-volume workloads that require predictable latency at large scale (for example, latency measured in milliseconds while performing millions of transactions per second) | Your workload volume generally fits within thousands of transactions per second | -| Your data is dynamic and frequently changes | Your data is highly structured and requires referential integrity | -| Relationships can be expressed in de-normalized data models | Relationships are expressed through table joins on normalized data models | -| Data retrieval is simple and expressed without table joins | You work with complex queries and reports | -| Data is typically replicated across geographies and requires finer control over consistency, availability, and performance | Data is typically centralized, or can be replicated across regions asynchronously | -| Your application will be deployed to commodity hardware, such as with public clouds | Your application will be deployed to large, high-end hardware | - -In the next sections, we'll explore the options available in the Azure cloud for storing and managing your cloud-native data. - -## Database as a Service - -To start, you could provision an Azure virtual machine and install your database of choice for each service. While you'd have full control over the environment, you'd forgo many built-in features of the cloud platform. You'd also be responsible for managing the virtual machine and database for each service. This approach could quickly become time-consuming and expensive. - -Instead, cloud-native applications favor data services exposed as a Database as a Service (DBaaS).
Fully managed by a cloud vendor, these services provide built-in security, scalability, and monitoring. Instead of owning the service, you simply consume it as a [backing service](./definition.md#backing-services). The provider operates the resource at scale and bears the responsibility for performance and maintenance. - -They can be configured across cloud availability zones and regions to achieve high availability. They all support just-in-time capacity and a pay-as-you-go model. Azure features different kinds of managed data service options, each with specific benefits. - -We'll first look at relational DBaaS services available in Azure. You'll see that Microsoft's flagship SQL Server database is available along with several open-source options. Then, we'll talk about the NoSQL data services in Azure. - -## Azure relational databases - -For cloud-native microservices that require relational data, Azure offers four managed relational databases as a service (DBaaS) offerings, shown in Figure 5-11. - -![Managed relational databases in Azure](./media/azure-managed-databases.png) - -**Figure 5-11**. Managed relational databases available in Azure - -In the previous figure, note how each sits upon a common DBaaS infrastructure which features key capabilities at no additional cost. - -These features are especially important to organizations who provision large numbers of databases, but have limited resources to administer them. -You can provision an Azure database in minutes by selecting the amount of processing cores, memory, and underlying storage. You can scale the database on-the-fly and dynamically adjust resources with little to no downtime. - -## Azure SQL Database - -Development teams with expertise in Microsoft SQL Server should consider -[Azure SQL Database](/azure/sql-database/). It's a fully managed relational database-as-a-service (DBaaS) based on the Microsoft SQL Server Database Engine. The service shares many features found in the on-premises version of SQL Server and runs the latest stable version of the SQL Server Database Engine. - -For use with a cloud-native microservice, Azure SQL Database is available with three deployment options: - -- A Single Database represents a fully managed SQL Database running on an [Azure SQL Database server](/azure/sql-database/sql-database-servers) in the Azure cloud. The database is considered [*contained*](/sql/relational-databases/databases/contained-databases) as it has no configuration dependencies on the underlying database server. - -- A [Managed Instance](/azure/sql-database/sql-database-managed-instance) is a fully managed instance of the Microsoft SQL Server Database Engine that provides near-100% compatibility with an on-premises SQL Server. This option supports larger databases, up to 35 TB and is placed in an [Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) for better isolation. - -- [Azure SQL Database serverless](/azure/sql-database/sql-database-serverless) is a compute tier for a single database that automatically scales based on workload demand. It bills only for the amount of compute used per second. The service is well suited for workloads with intermittent, unpredictable usage patterns, interspersed with periods of inactivity. The serverless compute tier also automatically pauses databases during inactive periods so that only storage charges are billed. It automatically resumes when activity returns. 
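Whichever of these deployment options a team selects, the data-access code in the consuming microservice stays the same; only the connection string differs. The following sketch assumes a hypothetical `CatalogDb` database and a placeholder server name, with the connection string normally supplied through configuration or a secret store:

```csharp
using Microsoft.Data.SqlClient;

// Placeholder values; in a real service these come from configuration, not source code.
var connectionString =
    "Server=tcp:contoso-sql.database.windows.net,1433;Database=CatalogDb;Encrypt=True;";

await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

// A trivial query. Single database, managed instance, and serverless tiers
// are all consumed through the same SQL Server programming model.
await using var command = new SqlCommand("SELECT COUNT(*) FROM dbo.CatalogItems", connection);
var itemCount = (int)(await command.ExecuteScalarAsync() ?? 0);
Console.WriteLine($"Catalog items: {itemCount}");
```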
- -Beyond the traditional Microsoft SQL Server stack, Azure also features managed versions of three popular open-source databases. - -## Open-source databases in Azure - -Open-source relational databases have become a popular choice for cloud-native applications. Many enterprises favor them over commercial database products, especially for cost savings. Many development teams enjoy their flexibility, community-backed development, and ecosystem of tools and extensions. Open-source databases can be deployed across multiple cloud providers, helping minimize the concern of "vendor lock-in." - -Developers can easily self-host any open-source database on an Azure VM. While providing full control, this approach puts you on the hook for the management, monitoring, and maintenance of the database and VM. - -However, Microsoft continues its commitment to keeping Azure an "open platform" by offering several popular open-source databases as *fully managed* DBaaS services. - -### Azure Database for MySQL - -[MySQL](https://en.wikipedia.org/wiki/MySQL) is an open-source relational database and a pillar for applications built on the [LAMP software stack](https://en.wikipedia.org/wiki/LAMP_(software_bundle)). Widely chosen for *read heavy* workloads, it's used by many large organizations, including Facebook, Twitter, and YouTube. The community edition is available for free, while the enterprise edition requires a license purchase. Originally created in 1995, the product was purchased by Sun Microsystems in 2008. Oracle acquired Sun and MySQL in 2010. - -[Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) is a managed relational database service based on the open-source MySQL Server engine. It uses the MySQL Community edition. The Azure MySQL server is the administrative point for the service. It's the same MySQL server engine used for on-premises deployments. The engine can create a single database per server or multiple databases per server that share resources. You can continue to manage data using the same open-source tools without having to learn new skills or manage virtual machines. - -### Azure Database for MariaDB - -[MariaDB](https://mariadb.com/) Server is another popular open-source database server. It was created as a *fork* of MySQL when Oracle purchased Sun Microsystems, who owned MySQL. The intent was to ensure that MariaDB remained open-source. As MariaDB is a fork of MySQL, the data and table definitions are compatible, and the client protocols, structures, and APIs, are close-knit. - -MariaDB has a strong community and is used by many large enterprises. While Oracle continues to maintain, enhance, and support MySQL, the MariaDB foundation manages MariaDB, allowing public contributions to the product and documentation. - -[Azure Database for MariaDB](https://azure.microsoft.com/services/mariadb/) is a fully managed relational database as a service in the Azure cloud. The service is based on the MariaDB community edition server engine. It can handle mission-critical workloads with predictable performance and dynamic scalability. - -### Azure Database for PostgreSQL - -[PostgreSQL](https://www.postgresql.org/) is an open-source relational database with over 30 years of active development. PostgreSQL has a strong reputation for reliability and data integrity. It's feature rich, SQL compliant, and considered more performant than MySQL - especially for workloads with complex queries and heavy writes. 
Many large enterprises including Apple, Red Hat, and Fujitsu have built products using PostgreSQL. - -[Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) is a fully managed relational database service, based on the open-source Postgres database engine. The service supports many development platforms, including C++, Java, Python, Node, C\#, and PHP. You can migrate PostgreSQL databases to it using the command-line interface tool or Azure Data Migration Service. - -Azure Database for PostgreSQL is available with two deployment options: - -- The [Single Server](/azure/postgresql/concepts-servers) deployment option is a central administrative point for multiple databases to which you can deploy many databases. The pricing is structured per-server based upon cores and storage. - -- The [Hyperscale (Citus) option](https://azure.microsoft.com/blog/get-high-performance-scaling-for-your-azure-database-workloads-with-hyperscale/) is powered by Citus Data technology. It enables high performance by *horizontally scaling* a single database across hundreds of nodes to deliver fast performance and scale. This option allows the engine to fit more data in memory, parallelize queries across hundreds of nodes, and index data faster. - -## NoSQL data in Azure - -Cosmos DB is a fully managed, globally distributed NoSQL database service in the Azure cloud. It has been adopted by many large companies across the world, including Coca-Cola, Skype, ExxonMobil, and Liberty Mutual. - -If your services require fast response from anywhere in the world, high availability, or elastic scalability, Cosmos DB is a great choice. Figure 5-12 shows Cosmos DB. - -![Overview of Cosmos DB](./media/cosmos-db-overview.png) - -**Figure 5-12**: Overview of Azure Cosmos DB - -The previous figure presents many of the built-in cloud-native capabilities available in Cosmos DB. In this section, we'll take a closer look at them. - -### Global support - -Cloud-native applications often have a global audience and require global scale. - -You can distribute Cosmos databases across regions or around the world, placing data close to your users, improving response time, and reducing latency. You can add or remove a database from a region without pausing or redeploying your services. In the background, Cosmos DB transparently replicates the data to each of the configured regions. - -Cosmos DB supports [active/active](https://kemptechnologies.com/white-papers/unfog-confusion-active-passive-activeactive-load-balancing/) clustering at the global level, enabling you to configure any of your database regions to support *both writes and reads*. - -The [Multi-region write](/azure/cosmos-db/conflict-resolution-policies) protocol is an important feature in Cosmos DB that enables the following functionality: - -- Unlimited elastic write and read scalability. - -- 99.999% read and write availability all around the world. - -- Guaranteed reads and writes served in less than 10 milliseconds at the 99th percentile. - -With the Cosmos DB [Multi-Homing APIs](/azure/cosmos-db/distribute-data-globally), your microservice is automatically aware of the nearest Azure region and sends requests to it. The nearest region is identified by Cosmos DB without any configuration changes. Should a region become unavailable, the Multi-Homing feature will automatically route requests to the next nearest available region. 
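The sketch below shows one way a microservice might opt into this region-aware routing with the Microsoft.Azure.Cosmos .NET SDK. The account endpoint, key, database, container, item ID, and region are all placeholder values:

```csharp
using Microsoft.Azure.Cosmos;

// Placeholder endpoint and key; production services would use configuration
// or a managed identity instead of hard-coded values.
var client = new CosmosClient(
    "https://contoso-cosmos.documents.azure.com:443/",
    "<account-key>",
    new CosmosClientOptions
    {
        // Tells the SDK which region to favor. Requests go there first and
        // fail over to the next available region if it becomes unreachable.
        ApplicationRegion = Regions.WestEurope
    });

Container container = client.GetContainer("CatalogDb", "Items");
ItemResponse<dynamic> response =
    await container.ReadItemAsync<dynamic>("item-1", new PartitionKey("seattle"));
Console.WriteLine($"Read cost: {response.RequestCharge} RUs");
```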
- -### Multi-model support - -When replatforming monolithic applications to a cloud-native architecture, development teams sometimes have to migrate open-source, NoSQL data stores. Cosmos DB can help you preserve your investment in these NoSQL datastores with its *multi-model* data platform. The following table shows the supported NoSQL [compatibility APIs](https://www.wikiwand.com/en/Cosmos_DB). - -| Provider | Description | -| :-------- | :-------- | -| NoSQL API | The API for NoSQL stores data in document format | -| MongoDB API | Supports MongoDB APIs and JSON documents | -| Gremlin API | Supports Gremlin API with graph-based nodes and edge data representations | -| Cassandra API | Supports Cassandra API for wide-column data representations | -| Table API | Supports Azure Table Storage with premium enhancements | -| PostgreSQL API | Managed service for running PostgreSQL at any scale | - -Development teams can migrate existing Mongo, Gremlin, or Cassandra databases into Cosmos DB with minimal changes to data or code. For new apps, development teams can choose among open-source options or the built-in SQL API model. - -> Internally, Cosmos stores the data in a simple struct format made up of primitive data types. For each request, the database engine translates the primitive data into the model representation you've selected. - -In the previous table, note the [Table API](/azure/cosmos-db/table-introduction) option. This API is an evolution of Azure Table Storage. Both share the same underlying table model, but the Cosmos DB Table API adds premium enhancements not available in the Azure Storage API. The following table contrasts the features. - -| Feature | Azure Table Storage | Azure Cosmos DB | -|:--------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------| -| Latency | Fast | Single-digit millisecond latency for reads and writes anywhere in the world | -| Throughput | Limit of 20,000 operations per table | Unlimited operations per table | -| Global Distribution | Single region with optional single secondary read region | Turnkey distribution to all regions with automatic failover | -| Indexing | Available for partition and row key properties only | Automatic indexing of all properties | -| Pricing | Optimized for cold workloads (low throughput : storage ratio) | Optimized for hot workloads (high throughput : storage ratio) | - -Microservices that consume Azure Table Storage can easily migrate to the Cosmos DB Table API. No code changes are required. - -### Tunable consistency - -Earlier in the *Relational vs. NoSQL* section, we discussed the subject of *data consistency*. Data consistency refers to the *integrity* of your data. Cloud-native services with distributed data rely on replication and must make a fundamental tradeoff between read consistency, availability, and latency. - -Most distributed databases allow developers to choose between two consistency models: strong consistency and eventual consistency. *Strong consistency* is the gold standard of data programmability. It guarantees that a query will always return the most current data - even if the system must incur latency waiting for an update to replicate across all database copies. A database configured for *eventual consistency*, by contrast, will return data immediately, even if that data isn't the most current copy. The latter option enables higher availability, greater scale, and increased performance.
- -Azure Cosmos DB offers five well-defined [consistency models](/azure/cosmos-db/consistency-levels) shown in Figure 5-13. - -![Cosmos DB consistency graph](./media/cosmos-consistency-level-graph.png) - -**Figure 5-13**: Cosmos DB Consistency Levels - - These options enable you to make precise choices and granular tradeoffs for consistency, availability, and performance for your data. The levels are presented in the following table. - -| Consistency Level | Description | -| :-------- | :-------- | -| Eventual | No ordering guarantee for reads. Replicas will eventually converge. | -| Consistent Prefix | Reads are still eventual, but data is returned in the order in which it is written. | -| Session | Guarantees you can read any data written during the current session. It is the default consistency level. | -| Bounded Staleness | Reads trail writes by an interval that you specify. | -| Strong | Reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial read. | - -In the article [Getting Behind the 9-Ball: Cosmos DB Consistency Levels Explained](https://blog.jeremylikness.com/blog/2018-03-23_getting-behind-the-9ball-cosmosdb-consistency-levels/), Microsoft Program Manager Jeremy Likness provides an excellent explanation of the five models. - -### Partitioning - -Azure Cosmos DB embraces automatic [partitioning](/azure/cosmos-db/partitioning-overview) to scale a database to meet the performance needs of your cloud-native services. - -You manage data in Cosmos DB by creating databases, containers, and items. - -Containers live in a Cosmos DB database and represent a schema-agnostic grouping of items. Items are the data that you add to the container. They're represented as documents, rows, nodes, or edges. All items added to a container are automatically indexed. - -To partition the container, items are divided into distinct subsets called logical partitions. Logical partitions are populated based on the value of a partition key that is associated with each item in a container. Figure 5-14 shows two containers each with a logical partition based on a partition key value. - -![Cosmos DB partitioning mechanics](./media/cosmos-db-partitioning.png) - -**Figure 5-14**: Cosmos DB partitioning mechanics - -Note in the previous figure how each item includes a partition key of either 'city' or 'airport'. The key determines the item's logical partition. Items with a city code are assigned to the container on the left, and items with an airport code, to the container on the right. Combining the partition key value with the ID value creates an item's index, which uniquely identifies the item. - -Internally, Cosmos DB automatically manages the placement of [logical partitions](/azure/cosmos-db/partition-data) on physical partitions to satisfy the scalability and performance needs of the container. As application throughput and storage requirements increase, Azure Cosmos DB redistributes logical partitions across a greater number of servers. Redistribution operations are managed by Cosmos DB and invoked without interruption or downtime. - -## NewSQL databases - -*NewSQL* is an emerging database technology that combines the distributed scalability of NoSQL with the ACID guarantees of a relational database. NewSQL databases are important for business systems that must process high volumes of data, across distributed environments, with full transactional support and ACID compliance.
While a NoSQL database can provide massive scalability, it does not guarantee data consistency. Intermittent problems from inconsistent data can place a burden on the development team. Developers must construct safeguards into their microservice code to manage problems caused by inconsistent data. - -The Cloud Native Computing Foundation (CNCF) features several NewSQL database projects. - -| Project | Characteristics | -| :-------- | :-------- | -| CockroachDB | An ACID-compliant, relational database that scales globally. Add a new node to a cluster and CockroachDB takes care of balancing the data across instances and geographies. It creates, manages, and distributes replicas to ensure reliability. It's open source and freely available. | -| TiDB | An open-source database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL-compatible and features horizontal scalability, strong consistency, and high availability. TiDB acts like a MySQL server. You can continue to use existing MySQL client libraries, without requiring extensive code changes to your application. | -| YugabyteDB | An open source, high-performance, distributed SQL database. It supports low query latency, resilience against failures, and global data distribution. YugabyteDB is PostgreSQL-compatible and handles scale-out RDBMS and internet-scale OLTP workloads. The product also supports NoSQL and is compatible with Cassandra. | -| Vitess | Vitess is a database solution for deploying, scaling, and managing large clusters of MySQL instances. It can run in a public or private cloud architecture. Vitess combines and extends many important MySQL features and supports both vertical and horizontal sharding. Originated by YouTube, Vitess has been serving all YouTube database traffic since 2011. | - -The open-source projects in the previous table are available from the Cloud Native Computing Foundation. Three of the offerings are full database products, which include .NET support. The other, Vitess, is a database clustering system that horizontally scales large clusters of MySQL instances. - -A key design goal for NewSQL databases is to work natively in Kubernetes, taking advantage of the platform's resiliency and scalability. - -NewSQL databases are designed to thrive in ephemeral cloud environments where underlying virtual machines can be restarted or rescheduled at a moment's notice. The databases are designed to survive node failures without data loss or downtime. CockroachDB, for example, is able to survive a machine loss by maintaining three consistent replicas of any data across the nodes in a cluster. - -Kubernetes uses a *Services construct* to allow a client to address a group of identical NewSQL database processes from a single DNS entry. By decoupling the database instances from the address of the service with which it's associated, we can scale without disrupting existing application instances. Sending a request to any service at a given time will always yield the same result. - -In this scenario, all database instances are equal. There are no primary or secondary relationships. Techniques like *consensus replication* found in CockroachDB allow any database node to handle any request. If the node that receives a load-balanced request has the data it needs locally, it responds immediately. If not, the node becomes a gateway and forwards the request to the appropriate nodes to get the correct answer.
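Because several of these offerings speak the PostgreSQL or MySQL wire protocol, a .NET microservice can often reach them with an ordinary data provider. The following sketch assumes a CockroachDB cluster fronted by a hypothetical Kubernetes Service named `cockroachdb-public` and uses the Npgsql provider; the single DNS name hides however many nodes are running behind it:

```csharp
using Npgsql;

// Hypothetical Kubernetes Service DNS name that fronts every database node.
const string connectionString =
    "Host=cockroachdb-public.default.svc.cluster.local;Port=26257;" +
    "Database=orders;Username=app;SSL Mode=Require";

await using var connection = new NpgsqlConnection(connectionString);
await connection.OpenAsync();

// Any node behind the service can answer this query.
await using var command = new NpgsqlCommand("SELECT count(*) FROM orders", connection);
var orderCount = (long?)await command.ExecuteScalarAsync();
```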
From the client's perspective, every database node is the same: They appear as a single *logical* database with the consistency guarantees of a single-machine system, despite having dozens or even hundreds of nodes that are working behind the scenes. - -For a detailed look at the mechanics behind NewSQL databases, see the [DASH: Four Properties of Kubernetes-Native Databases](https://thenewstack.io/dash-four-properties-of-kubernetes-native-databases/) article. - -## Data migration to the cloud - -One of the more time-consuming tasks is migrating data from one data platform to another. The [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) can help expedite such efforts. It can migrate data from several external database sources into Azure Data platforms with minimal downtime. Target platforms include the following services: - -- Azure SQL Database -- Azure Database for MySQL -- Azure Database for MariaDB -- Azure Database for PostgreSQL -- Azure Cosmos DB - -The service provides recommendations to guide you through the changes required to execute a migration, whether small or large. - ->[!div class="step-by-step"] ->[Previous](distributed-data.md) ->[Next](azure-caching.md) diff --git a/docs/architecture/cloud-native/resiliency.md b/docs/architecture/cloud-native/resiliency.md deleted file mode 100644 index 761dc52ebb162..0000000000000 --- a/docs/architecture/cloud-native/resiliency.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: Cloud-native resiliency -description: Architecting Cloud Native .NET Apps for Azure | Cloud Native Resiliency -author: robvet -ms.date: 04/06/2022 ---- - -# Cloud-native resiliency - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Resiliency is the ability of your system to react to failure and still remain functional. It's not about avoiding failure, but accepting failure and constructing your cloud-native services to respond to it. You want to return to a fully functioning state as quickly as possible. - -Unlike traditional monolithic applications, where everything runs together in a single process, cloud-native systems embrace a distributed architecture as shown in Figure 6-1: - -![Distributed cloud-native environment](./media/distributed-cloud-native-environment.png) - -**Figure 6-1.** Distributed cloud-native environment - -In the previous figure, each microservice and cloud-based [backing service](https://12factor.net/backing-services) executes in a separate process, across server infrastructure, communicating via network-based calls. - -Operating in this environment, a service must be sensitive to many different challenges: - -- Unexpected network latency - the time for a service request to travel to the receiver and back. - -- [Transient faults](/azure/architecture/best-practices/transient-faults) - short-lived network connectivity errors. - -- Blockage by a long-running synchronous operation. - -- A host process that has crashed and is being restarted or moved. - -- An overloaded microservice that can't respond for a short time. - -- An in-flight orchestrator operation such as a rolling upgrade or moving a service from one node to another. - -- Hardware failures. - -Cloud platforms can detect and mitigate many of these infrastructure issues. They may restart, scale out, and even redistribute your service to a different node. However, to take full advantage of this built-in protection, you must design your services to react to it and thrive in this dynamic environment.
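As one example of reacting to such failures in code, the following sketch uses the Polly library to retry a failed HTTP call with exponential backoff; the catalog service URL is hypothetical:

```csharp
using System;
using System.Net.Http;
using Polly;

using var httpClient = new HttpClient();

// Retry up to three times, waiting 2, 4, then 8 seconds between attempts.
var retryPolicy = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(
        retryCount: 3,
        sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

// Hypothetical back-end endpoint protected by the retry policy.
HttpResponseMessage response = await retryPolicy.ExecuteAsync(
    () => httpClient.GetAsync("https://catalog-service/api/products/1"));
```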
- -In the following sections, we'll explore defensive techniques that your service and managed cloud resources can leverage to minimize downtime and disruption. - ->[!div class="step-by-step"] ->[Previous](elastic-search-in-azure.md) ->[Next](application-resiliency-patterns.md) diff --git a/docs/architecture/cloud-native/resilient-communications.md b/docs/architecture/cloud-native/resilient-communications.md deleted file mode 100644 index 7545274bd6580..0000000000000 --- a/docs/architecture/cloud-native/resilient-communications.md +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: Resilient communication -description: Architecting Cloud Native .NET Apps for Azure | Resilient Communication -author: robvet -ms.date: 04/06/2022 ---- - -# Resilient communications - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Throughout this book, we've embraced a microservice-based architectural approach. While such an architecture provides important benefits, it presents many challenges: - -- *Out-of-process network communication.* Each microservice communicates over a network protocol that introduces network congestion, latency, and transient faults. - -- *Service discovery.* How do microservices discover and communicate with each other when running across a cluster of machines with their own IP addresses and ports? - -- *Resiliency.* How do you manage short-lived failures and keep the system stable? - -- *Load balancing.* How does inbound traffic get distributed across multiple instances of a microservice? - -- *Security.* How are security concerns such as transport-level encryption and certificate management enforced? - -- *Distributed Monitoring.* - How do you correlate and capture traceability and monitoring for a single request across multiple consuming microservices? - -You can address these concerns with different libraries and frameworks, but the implementation can be expensive, complex, and time-consuming. You also end up with infrastructure concerns coupled to business logic. - -## Service mesh - -A better approach is an evolving technology entitled *Service Mesh*. A [service mesh](https://www.nginx.com/blog/what-is-a-service-mesh/) is a configurable infrastructure layer with built-in capabilities to handle service communication and the other challenges mentioned above. It decouples these concerns by moving them into a service proxy. The proxy is deployed into a separate process (called a [sidecar](/azure/architecture/patterns/sidecar)) to provide isolation from business code. However, the sidecar is linked to the service - it's created with it and shares its lifecycle. Figure 6-7 shows this scenario. - -![Service mesh with a side car](./media/service-mesh-with-side-car.png) - -**Figure 6-7**. Service mesh with a side car - -In the previous figure, note how the proxy intercepts and manages communication among the microservices and the cluster. - -A service mesh is logically split into two disparate components: A [data plane](https://blog.envoyproxy.io/service-mesh-data-plane-vs-control-plane-2774e720f7fc) and [control plane](https://blog.envoyproxy.io/service-mesh-data-plane-vs-control-plane-2774e720f7fc). Figure 6-8 shows these components and their responsibilities. - -![Service mesh control and data plane](./media/istio-control-and-data-plane.png) - -**Figure 6-8.** Service mesh control and data plane - -Once configured, a service mesh is highly functional. It can retrieve a corresponding pool of instances from a service discovery endpoint. 
The mesh can then send a request to a specific instance, recording the latency and response type of the result. A mesh can choose the instance most likely to return a fast response based on many factors, including its observed latency for recent requests. - -If an instance is unresponsive or fails, the mesh will retry the request on another instance. If it returns errors, a mesh will evict the instance from the load-balancing pool and reinstate it after it heals. If a request times out, a mesh can fail the request and then retry it. A mesh captures and emits metrics and distributed tracing to a centralized metrics system. - -## Istio and Envoy - -While a few service mesh options currently exist, [Istio](https://istio.io/docs/concepts/what-is-istio/) is the most popular at the time of this writing. Istio is a joint venture from IBM, Google, and Lyft. It's an open-source offering that can be integrated into a new or existing distributed application. The technology provides a consistent and complete solution to secure, connect, and monitor microservices. Its features include: - -- Secure service-to-service communication in a cluster with strong identity-based authentication and authorization. -- Automatic load balancing for HTTP, [gRPC](https://grpc.io/), WebSocket, and TCP traffic. -- Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection. -- A pluggable policy layer and configuration API supporting access controls, rate limits, and quotas. -- Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress. - -A key component for an Istio implementation is a proxy service entitled the [Envoy proxy](https://www.envoyproxy.io/docs/envoy/latest/intro/what_is_envoy). It runs alongside each service and provides a platform-agnostic foundation for the following features: - -- Dynamic service discovery. -- Load balancing. -- TLS termination. -- HTTP and gRPC proxies. -- Circuit breaker resiliency. -- Health checks. -- Rolling updates with [canary](https://martinfowler.com/bliki/CanaryRelease.html) deployments. - -As previously discussed, Envoy is deployed as a sidecar to each microservice in the cluster. - -## Integration with Azure Kubernetes Services - -The Azure cloud embraces Istio and provides direct support for it within Azure Kubernetes Services.
The following links can help you get started: - -- [Installing Istio in AKS](/azure/aks/istio-install) -- [Using AKS and Istio](/azure/aks/istio-scenario-routing) - -### References - -- [Polly](https://old.dotnetfoundation.org/projects/polly) - -- [Retry pattern](/azure/architecture/patterns/retry) - -- [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker) - -- [Resilience in Azure whitepaper](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/Resilience%20in%20Azure.pdf) - -- [network latency](https://www.techopedia.com/definition/8553/network-latency) - -- [Redundancy](/azure/architecture/guide/design-principles/redundancy) - -- [geo-replication](/azure/sql-database/sql-database-active-geo-replication) - -- [Azure Traffic Manager](/azure/traffic-manager/traffic-manager-overview) - -- [Autoscaling guidance](/azure/architecture/best-practices/auto-scaling) - -- [Istio](https://istio.io/docs/concepts/what-is-istio/) - -- [Envoy proxy](https://www.envoyproxy.io/docs/envoy/latest/intro/what_is_envoy) - ->[!div class="step-by-step"] ->[Previous](infrastructure-resiliency-azure.md) ->[Next](monitoring-health.md) diff --git a/docs/architecture/cloud-native/scale-applications.md b/docs/architecture/cloud-native/scale-applications.md deleted file mode 100644 index 2a238f1078e8a..0000000000000 --- a/docs/architecture/cloud-native/scale-applications.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Scaling cloud-native applications -description: Scaling cloud-native applications with Azure Kubernetes Service and Azure Functions to meet user demand in a cost effective way. -ms.date: 04/06/2022 ---- - -# Scaling cloud-native applications - -[!INCLUDE [download-alert](includes/download-alert.md)] - -One of the most-often touted advantages of moving to a cloud hosting environment is scalability. Scalability is the ability for an application to accept additional user load without compromising performance for each user. It's most often achieved by breaking up an application into small pieces that can each be given whatever resources they require. Cloud vendors enable massive scalability anytime and anywhere in the world. - - In this chapter, we discuss technologies that enable cloud-native applications to scale to meet user demand. These technologies include: - -- Containers -- Orchestrators -- Serverless computing - ->[!div class="step-by-step"] ->[Previous](centralized-configuration.md) ->[Next](leverage-containers-orchestrators.md) diff --git a/docs/architecture/cloud-native/scale-containers-serverless.md b/docs/architecture/cloud-native/scale-containers-serverless.md deleted file mode 100644 index a596022ca22a1..0000000000000 --- a/docs/architecture/cloud-native/scale-containers-serverless.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Scaling containers and serverless applications -description: Scaling cloud-native applications with Azure Kubernetes Service to meet user demand. -ms.date: 04/06/2022 ---- - -# Scaling containers and serverless applications - -[!INCLUDE [download-alert](includes/download-alert.md)] - -There are two ways to scale an application: up or out. The former refers to adding capacity to a single resource, while the latter refers to adding more resources to increase capacity. - -## The simple solution: scaling up - -Upgrading an existing host server with increased CPU, memory, disk I/O speed, and network I/O speed is known as *scaling up*.
Scaling up a cloud-native application involves choosing more capable resources from the cloud vendor. For example, you can create a new node pool with larger VMs in your Kubernetes cluster. Then, migrate your containerized services to the new pool. - -Serverless apps scale up by choosing the [premium Functions plan](/azure/azure-functions/functions-scale) or premium instance sizes from a dedicated app service plan. - -## Scaling out cloud-native apps - -Cloud-native applications often experience large fluctuations in demand and require scale on a moment's notice. They favor scaling out. Scaling out is done horizontally by adding additional machines (called nodes) or application instances to an existing cluster. In Kubernetes, you can scale manually by adjusting configuration settings for the app (for example, [scaling a node pool](/azure/aks/use-multiple-node-pools#scale-a-node-pool-manually)), or through autoscaling. - -AKS clusters can autoscale in one of two ways: - -First, the [Horizontal Pod Autoscaler](/azure/aks/tutorial-kubernetes-scale#autoscale-pods) monitors resource demand and automatically scales your pod replicas to meet it. When traffic increases, additional replicas are automatically provisioned to scale out your services. Likewise, when demand decreases, they're removed to scale in your services. You define the metric on which to scale, for example, CPU usage. You can also specify the minimum and maximum number of replicas to run. AKS monitors that metric and scales accordingly. - -Next, the [AKS Cluster Autoscaler](/azure/aks/cluster-autoscaler) feature enables you to automatically scale compute nodes across a Kubernetes cluster to meet demand. With it, you can automatically add new VMs to the underlying Azure Virtual Machine Scale Set whenever more compute capacity is required. It also removes nodes when no longer required. - -Figure 3-11 shows the relationship between these two scaling services. - -![Scaling out pods and nodes in an AKS cluster.](./media/aks-cluster-autoscaler.png) - -**Figure 3-11**. Scaling out pods and nodes in an AKS cluster. - -Working together, both ensure an optimal number of container instances and compute nodes to support fluctuating demand. The horizontal pod autoscaler optimizes the number of pods required. The cluster autoscaler optimizes the number of nodes required. - -### Scaling Azure Functions - -Azure Functions automatically scale out upon demand. Server resources are dynamically allocated and removed based on the number of triggered events. You're only charged for compute resources consumed when your functions run. Billing is based upon the number of executions, execution time, and memory used. - -While the default consumption plan provides an economical and scalable solution for most apps, the premium option allows developers flexibility for custom Azure Functions requirements. Upgrading to the premium plan provides control over instance sizes, pre-warmed instances (to avoid cold start delays), and dedicated VMs.
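To illustrate event-driven scale, the following sketch shows a queue-triggered function using the in-process model; the `orders` queue and `StorageConnection` setting are hypothetical. The Functions runtime adds or removes instances based on the depth of the queue:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderProcessor
{
    // Each message on the "orders" queue triggers an execution. On the
    // Consumption and Premium plans, the runtime scales instances out and in
    // based on the number of waiting messages.
    [FunctionName("ProcessOrder")]
    public static void Run(
        [QueueTrigger("orders", Connection = "StorageConnection")] string orderMessage,
        ILogger log)
    {
        log.LogInformation("Processing order: {OrderMessage}", orderMessage);
    }
}
```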
- ->[!div class="step-by-step"] ->[Previous](deploy-containers-azure.md) ->[Next](other-deployment-options.md) diff --git a/docs/architecture/cloud-native/security.md b/docs/architecture/cloud-native/security.md deleted file mode 100644 index 1f90167f1e955..0000000000000 --- a/docs/architecture/cloud-native/security.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Cloud-native security -description: Architecting Cloud Native .NET Apps for Azure | Security -ms.date: 04/06/2022 ---- - -# Cloud-native security - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Not a day goes by where the news doesn't contain some story about a company being hacked or somehow losing their customers' data. Even countries/regions aren't immune to the problems created by treating security as an afterthought. For years, companies have treated the security of customer data and, in fact, their entire networks as something of a "nice to have". Windows servers were left unpatched, ancient versions of PHP kept running, and MongoDB databases left wide open to the world. - -However, there are starting to be real-world consequences for not maintaining a security mindset when building and deploying applications. Many companies learned the hard way what can happen when servers and desktops aren't patched during the 2017 outbreak of [NotPetya](https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/). The cost of these attacks has easily reached into the billions, with some estimates putting the losses from this single attack at 10 billion US dollars. - -Even governments aren't immune to hacking incidents. The city of Baltimore was held ransom by [criminals](https://www.vox.com/recode/2019/5/21/18634505/baltimore-ransom-robbinhood-mayor-jack-young-hackers), making it impossible for citizens to pay their bills or use city services. - -There has also been an increase in legislation that mandates certain data protections for personal data. In Europe, GDPR has been in effect for more than a year and, more recently, California passed its own version called CCPA, which comes into effect January 1, 2020. The fines under GDPR can be so punishing as to put companies out of business. Google has already been fined 50 million Euros for violations, but that's just a drop in the bucket compared with the potential fines. - -In short, security is serious business. - ->[!div class="step-by-step"] ->[Previous](identity-server.md) ->[Next](azure-security.md) diff --git a/docs/architecture/cloud-native/service-mesh-communication-infrastructure.md b/docs/architecture/cloud-native/service-mesh-communication-infrastructure.md deleted file mode 100644 index e4aff223b1853..0000000000000 --- a/docs/architecture/cloud-native/service-mesh-communication-infrastructure.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Service Mesh communication infrastructure -description: Learn about how service mesh technologies streamline cloud-native microservice communication -author: robvet -ms.date: 12/14/2023 ---- - -# Service Mesh communication infrastructure - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Throughout this chapter, we've explored the challenges of microservice communication. We said that development teams need to be sensitive to how back-end services communicate with each other. Ideally, the less inter-service communication, the better. However, avoidance isn't always possible as back-end services often rely on one another to complete operations.
- -We explored different approaches for implementing synchronous HTTP communication and asynchronous messaging. In each case, the developer is burdened with implementing communication code. Communication code is complex and time intensive. Incorrect decisions can lead to significant performance issues. - -A more modern approach to microservice communication centers around a new and rapidly evolving technology entitled *Service Mesh*. A [service mesh](https://www.nginx.com/blog/what-is-a-service-mesh/) is a configurable infrastructure layer with built-in capabilities to handle service-to-service communication, resiliency, and many cross-cutting concerns. It moves the responsibility for these concerns out of the microservices and into the service mesh layer. Communication is abstracted away from your microservices. - -A key component of a service mesh is a proxy. In a cloud-native application, an instance of a proxy is typically colocated with each microservice. While they execute in separate processes, the two are closely linked and share the same lifecycle. This pattern, known as the [Sidecar pattern](/azure/architecture/patterns/sidecar), is shown in Figure 4-24. - -![Service mesh with a side car](./media/service-mesh-with-side-car.png) - -**Figure 4-24**. Service mesh with a side car - -Note in the previous figure how messages are intercepted by a proxy that runs alongside each microservice. Each proxy can be configured with traffic rules specific to the microservice. It understands messages and can route them across your services and the outside world. - -Along with managing service-to-service communication, the Service Mesh provides support for service discovery and load balancing. - -Once configured, a service mesh is highly functional. The mesh retrieves a corresponding pool of instances from a service discovery endpoint. It sends a request to a specific service instance, recording the latency and response type of the result. It chooses the instance most likely to return a fast response based on different factors, including the observed latency for recent requests. - -A service mesh manages traffic, communication, and networking concerns at the application level. It understands messages and requests. A service mesh typically integrates with a container orchestrator. Kubernetes supports an extensible architecture in which a service mesh can be added. - -In chapter 6, we deep-dive into Service Mesh technologies including a discussion on its architecture and available open-source implementations. - -## Summary - -In this chapter, we discussed cloud-native communication patterns. We started by examining how front-end clients communicate with back-end microservices. Along the way, we talked about API Gateway platforms and real-time communication. We then looked at how microservices communicate with other back-end services. We looked at both synchronous HTTP communication and asynchronous messaging across services. We covered gRPC, an upcoming technology in the cloud-native world. Finally, we introduced a new and rapidly evolving technology entitled Service Mesh that can streamline microservice communication.
- -Special emphasis was on managed Azure services that can help implement communication in cloud-native systems: - -- [Azure Application Gateway](/azure/application-gateway/overview) -- [Azure API Management](https://azure.microsoft.com/services/api-management/) -- [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) -- [Azure Storage Queues](/azure/storage/queues/storage-queues-introduction) -- [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) -- [Azure Event Grid](/azure/event-grid/overview) -- [Azure Event Hub](https://azure.microsoft.com/services/event-hubs/) - -We next move to distributed data in cloud-native systems and the benefits and challenges that it presents. - -### References - -- [.NET Microservices: Architecture for Containerized .NET applications](https://dotnet.microsoft.com/download/thank-you/microservices-architecture-ebook) - -- [Designing Interservice Communication for Microservices](/azure/architecture/microservices/design/interservice-communication) - -- [Azure SignalR Service, a fully managed service to add real-time functionality](https://azure.microsoft.com/blog/azure-signalr-service-a-fully-managed-service-to-add-real-time-functionality/) - -- [Azure API Gateway Ingress Controller](https://azure.github.io/application-gateway-kubernetes-ingress/) - -- [gRPC Documentation](https://grpc.io/docs/guides/) - -- [Comparing gRPC Services with HTTP APIs](/aspnet/core/grpc/comparison?view=aspnetcore-3.0&preserve-view=false) - -- [Building gRPC Services with .NET video](/Shows/The-Cloud-Native-Show/Building-Microservices-with-gRPC-and-NET) - ->[!div class="step-by-step"] ->[Previous](grpc.md) ->[Next](distributed-data.md) diff --git a/docs/architecture/cloud-native/service-to-service-communication.md b/docs/architecture/cloud-native/service-to-service-communication.md deleted file mode 100644 index 58bdb72985857..0000000000000 --- a/docs/architecture/cloud-native/service-to-service-communication.md +++ /dev/null @@ -1,232 +0,0 @@ ---- -title: Service-to-service communication -description: Learn how back-end cloud-native microservices communicate with other back-end microservices. -author: robvet -ms.date: 04/06/2022 ---- - -# Service-to-service communication - -[!INCLUDE [download-alert](includes/download-alert.md)] - -Moving from the front-end client, we now address how back-end microservices communicate with each other. - -When constructing a cloud-native application, you'll want to be sensitive to how back-end services communicate with each other. Ideally, the less inter-service communication, the better. However, avoidance isn't always possible as back-end services often rely on one another to complete an operation. - -There are several widely accepted approaches to implementing cross-service communication. The *type of communication interaction* will often determine the best approach. - -Consider the following interaction types: - -- *Query* – when a calling microservice requires a response from a called microservice, such as, "Hey, give me the buyer information for a given customer Id." - -- *Command* – when the calling microservice needs another microservice to execute an action but doesn't require a response, such as, "Hey, just ship this order." - -- *Event* – when a microservice, called the publisher, raises an event that state has changed or an action has occurred. Other microservices, called subscribers, who are interested, can react to the event appropriately.
The publisher and the subscribers aren't aware of each other. - -Microservice systems typically use a combination of these interaction types when executing operations that require cross-service interaction. Let's take a close look at each and how you might implement them. - -## Queries - -Many times, one microservice might need to *query* another, requiring an immediate response to complete an operation. A shopping basket microservice may need product information and a price to add an item to its basket. There are many approaches for implementing query operations. - -### Request/Response Messaging - -One option for implementing this scenario is for the calling back-end microservice to make direct HTTP requests to the microservices it needs to query, shown in Figure 4-8. - -![Direct HTTP communication](./media/direct-http-communication.png) - -**Figure 4-8**. Direct HTTP communication - -While direct HTTP calls between microservices are relatively simple to implement, care should be taken to minimize this practice. To start, these calls are always *synchronous* and will block the operation until a result is returned or the request times out. What were once self-contained, independent services, able to evolve independently and deploy frequently, now become coupled to each other. As coupling among microservices increases, their architectural benefits diminish. - -Executing an infrequent request that makes a single direct HTTP call to another microservice might be acceptable for some systems. However, high-volume calls that invoke direct HTTP calls to multiple microservices aren't advisable. They can increase latency and negatively impact the performance, scalability, and availability of your system. Even worse, a long series of direct HTTP communication can lead to deep and complex chains of synchronous microservice calls, shown in Figure 4-9: - -![Chaining HTTP queries](./media/chaining-http-queries.png) - -**Figure 4-9**. Chaining HTTP queries - -You can certainly imagine the risk in the design shown in the previous image. What happens if Step \#3 fails? Or Step \#8 fails? How do you recover? What if Step \#6 is slow because the underlying service is busy? How do you continue? Even if all works correctly, think of the latency this call would incur, which is the sum of the latency of each step. - -The large degree of coupling in the previous image suggests the services weren't optimally modeled. It would behoove the team to revisit their design. - -### Materialized View pattern - -A popular option for removing microservice coupling is the [Materialized View pattern](/azure/architecture/patterns/materialized-view). With this pattern, a microservice stores its own local, denormalized copy of data that's owned by other services. Instead of the Shopping Basket microservice querying the Product Catalog and Pricing microservices, it maintains its own local copy of that data. This pattern eliminates unnecessary coupling and improves reliability and response time. The entire operation executes inside a single process. We explore this pattern and other data concerns in Chapter 5. - -### Service Aggregator Pattern - -Another option for eliminating microservice-to-microservice coupling is an [Aggregator microservice](https://devblogs.microsoft.com/cesardelatorre/designing-and-implementing-api-gateways-with-ocelot-in-a-microservices-and-container-based-architecture/), shown in purple in Figure 4-10. - -![Aggregator service](./media/aggregator-service.png) - -**Figure 4-10**.
Aggregator microservice - -The pattern isolates an operation that makes calls to multiple back-end microservices, centralizing its logic into a specialized microservice. The purple checkout aggregator microservice in the previous figure orchestrates the workflow for the Checkout operation. It includes calls to several back-end microservices in a sequenced order. Data from the workflow is aggregated and returned to the caller. While it still implements direct HTTP calls, the aggregator microservice reduces direct dependencies among back-end microservices. - -### Request/Reply Pattern - -Another approach for decoupling synchronous HTTP messages is a [Request-Reply Pattern](https://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReply.html), which uses queuing communication. Communication using a queue is always a one-way channel, with a producer sending the message and a consumer receiving it. With this pattern, both a request queue and response queue are implemented, shown in Figure 4-11. - -![Request-reply pattern](./media/request-reply-pattern.png) - -**Figure 4-11**. Request-reply pattern - -Here, the message producer creates a query-based message that contains a unique correlation ID and places it into a request queue. The consuming service dequeues the message, processes it, and places the response into the response queue with the same correlation ID. The producer service dequeues the message, matches it with the correlation ID, and continues processing. We cover queues in detail in the next section. - -## Commands - -Another type of communication interaction is a *command*. A microservice may need another microservice to perform an action. The Ordering microservice may need the Shipping microservice to create a shipment for an approved order. In Figure 4-12, one microservice, called a Producer, sends a message to another microservice, the Consumer, commanding it to do something. - -![Command interaction with a queue](./media/command-interaction-with-queue.png) - -**Figure 4-12**. Command interaction with a queue - -Most often, the Producer doesn't require a response and can *fire-and-forget* the message. If a reply is needed, the Consumer sends a separate message back to the Producer on another channel. A command message is best sent asynchronously with a message queue, supported by a lightweight message broker. In the previous diagram, note how a queue separates and decouples both services. - -As many message queues may dispatch the same message more than once, known as at-least-once delivery, the consumer must be able to identify and handle these scenarios correctly using the relevant [idempotent message processing patterns](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-data-platform#idempotent-message-processing). - -A message queue is an intermediary construct through which a producer and consumer pass a message. Queues implement an asynchronous, point-to-point messaging pattern. The Producer knows where a command needs to be sent and routes appropriately. The queue guarantees that a message is processed by exactly one of the consumer instances that are reading from the channel. In this scenario, either the producer or consumer service can scale out without affecting the other. As well, technologies can be disparate on each side, meaning that we might have a Java microservice calling a [Golang](https://golang.org) microservice. - -In chapter 1, we talked about *backing services*.
Backing services are ancillary resources upon which cloud-native systems depend. Message queues are backing services. The Azure cloud supports two types of message queues that your cloud-native systems can consume to implement command messaging: Azure Storage Queues and Azure Service Bus Queues. - -### Azure Storage Queues - -Azure storage queues offer a simple queueing infrastructure that is fast, affordable, and backed by Azure storage accounts. - -[Azure Storage Queues](/azure/storage/queues/storage-queues-introduction) feature a REST-based queuing mechanism with reliable and persistent messaging. They provide a minimal feature set, but are inexpensive and store millions of messages. Their capacity ranges up to 500 TB. A single message can be up to 64 KB in size. - -You can access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. Storage queues can scale out to large numbers of concurrent clients to handle traffic spikes. - -That said, there are limitations with the service: - -- Message order isn't guaranteed. - -- A message can only persist for seven days before it's automatically removed. - -- Support for state management, duplicate detection, or transactions isn't available. - -Figure 4-13 shows the hierarchy of an Azure Storage Queue. - -![Storage queue hierarchy](./media/storage-queue-hierarchy.png) - -**Figure 4-13**. Storage queue hierarchy - -In the previous figure, note how storage queues store their messages in the underlying Azure Storage account. - -For developers, Microsoft provides several client and server-side libraries for Storage queue processing. Most major platforms are supported including .NET, Java, JavaScript, Ruby, Python, and Go. Your mainline service code should never communicate directly with these libraries. Doing so will tightly couple your microservice code to the Azure Storage Queue service. It's a better practice to insulate the implementation details of the API. Introduce an intermediation layer, or intermediate API, that exposes generic operations and encapsulates the concrete library. This loose coupling enables you to swap out one queuing service for another without having to make changes to the mainline service code. - -Azure Storage queues are an economical option to implement command messaging in your cloud-native applications, especially when a queue size will exceed 80 GB or a simple feature set is acceptable. You only pay for the storage of the messages; there are no fixed hourly charges. - -### Azure Service Bus Queues - -For more complex messaging requirements, consider Azure Service Bus queues. - -Sitting atop a robust message infrastructure, [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) supports a *brokered messaging model*. Messages are reliably stored in a broker (the queue) until received by the consumer. The queue guarantees First-In/First-Out (FIFO) message delivery, respecting the order in which messages were added to the queue. - -The size of a message can be much larger, up to 256 KB. Messages are persisted in the queue for an unlimited period of time. Service Bus supports not only HTTP-based calls, but also provides full support for the [AMQP protocol](/azure/service-bus-messaging/service-bus-amqp-overview). AMQP is an open standard across vendors that supports a binary protocol and higher degrees of reliability.
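In code, sending and receiving through a Service Bus queue is straightforward. The following sketch uses the `Azure.Messaging.ServiceBus` client library with a hypothetical connection string and an `ordering` queue:

```csharp
using Azure.Messaging.ServiceBus;

// Hypothetical connection string and queue name.
string connectionString = "<service-bus-connection-string>";
await using var client = new ServiceBusClient(connectionString);

// Producer: enqueue a command message.
ServiceBusSender sender = client.CreateSender("ordering");
await sender.SendMessageAsync(new ServiceBusMessage("Ship order 12345"));

// Consumer: dequeue the message and mark it complete.
ServiceBusReceiver receiver = client.CreateReceiver("ordering");
ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
await receiver.CompleteMessageAsync(message);
```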
- -Service Bus provides a rich set of features, including [transaction support](/azure/service-bus-messaging/service-bus-transactions) and a [duplicate detection feature](/azure/service-bus-messaging/duplicate-detection). The queue guarantees "at most once delivery" per message. It automatically discards a message that has already been sent. If a producer is in doubt, it can resend the same message, and Service Bus guarantees that only one copy will be processed. Duplicate detection frees you from having to build additional infrastructure plumbing. - -Two more enterprise features are partitioning and sessions. A conventional Service Bus queue is handled by a single message broker and stored in a single message store. But, [Service Bus Partitioning](/azure/service-bus-messaging/service-bus-partitioning) spreads the queue across multiple message brokers and message stores. The overall throughput is no longer limited by the performance of a single message broker or messaging store. A temporary outage of a messaging store doesn't render a partitioned queue unavailable. - -[Service Bus Sessions](https://codingcanvas.com/azure-service-bus-sessions/) provide a way to group related messages. Imagine a workflow scenario where messages must be processed together and the operation completed at the end. To take advantage, sessions must be explicitly enabled for the queue and each related message must contain the same session ID. - -However, there are some important caveats: Service Bus queue size is limited to 80 GB, which is much smaller than what's available from storage queues. Additionally, Service Bus queues incur a base cost and charge per operation. - -Figure 4-14 outlines the high-level architecture of a Service Bus queue. - -![Service Bus queue](./media/service-bus-queue.png) - -**Figure 4-14**. Service Bus queue - -In the previous figure, note the point-to-point relationship. Two instances of the same provider are enqueuing messages into a single Service Bus queue. Each message is consumed by only one of three consumer instances on the right. Next, we discuss how to implement messaging where different consumers may all be interested in the same message. - -## Events - -Message queuing is an effective way to implement communication where a producer can asynchronously send a consumer a message. However, what happens when *many different consumers* are interested in the same message? A dedicated message queue for each consumer wouldn't scale well and would become difficult to manage. - -To address this scenario, we move to the third type of message interaction, the *event*. One microservice announces that an action has occurred. Other microservices, if interested, react to the action, or event. This is also known as the [event-driven architectural style](/azure/architecture/guide/architecture-styles/event-driven). - -Eventing is a two-step process. For a given state change, a microservice publishes an event to a message broker, making it available to any other interested microservice. The interested microservice is notified by subscribing to the event in the message broker. You use the [Publish/Subscribe](/azure/architecture/patterns/publisher-subscriber) pattern to implement [event-based communication](/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/integration-event-based-microservice-communications). - -Figure 4-15 shows a shopping basket microservice publishing an event with two other microservices subscribing to it.
- -![Event-Driven messaging](./media/event-driven-messaging.png) - -**Figure 4-15**. Event-Driven messaging - -Note the *event bus* component that sits in the middle of the communication channel. It's a custom class that encapsulates the message broker and decouples it from the underlying application. The ordering and inventory microservices independently operate on the event with no knowledge of each other, nor the shopping basket microservice. When the registered event is published to the event bus, they act upon it. - -With eventing, we move from queuing technology to *topics*. A [topic](/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions) is similar to a queue, but supports a one-to-many messaging pattern. One microservice publishes a message. Multiple subscribing microservices can choose to receive and act upon that message. Figure 4-16 shows a topic architecture. - -![Topic architecture](./media/topic-architecture.png) - -**Figure 4-16**. Topic architecture - -In the previous figure, publishers send messages to the topic. At the end, subscribers receive messages from subscriptions. In the middle, the topic forwards messages to subscriptions based on a set of rules, shown in dark blue boxes. Rules act as filters that forward specific messages to a subscription. Here, a "GetPrice" event would be sent to the price and logging subscriptions as the logging subscription has chosen to receive all messages. A "GetInformation" event would be sent to the information and logging subscriptions. - -The Azure cloud supports two different topic services: Azure Service Bus Topics and Azure Event Grid. - -### Azure Service Bus Topics - -Sitting on top of the same robust brokered message model of Azure Service Bus queues are [Azure Service Bus Topics](/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions). A topic can receive messages from multiple independent publishers and send messages to up to 2,000 subscribers. Subscriptions can be dynamically added or removed at run time without stopping the system or recreating the topic. - -Many advanced features from Azure Service Bus queues are also available for topics, including [Duplicate Detection](/azure/service-bus-messaging/duplicate-detection) and [Transaction support](/azure/service-bus-messaging/service-bus-transactions). By default, Service Bus topics are handled by a single message broker and stored in a single message store. But, [Service Bus Partitioning](/azure/service-bus-messaging/service-bus-partitioning) scales a topic by spreading it across many message brokers and message stores. - -[Scheduled Message Delivery](/azure/service-bus-messaging/message-sequencing) tags a message with a specific time for processing. The message won't appear in the topic before that time. [Message Deferral](/azure/service-bus-messaging/message-deferral) enables you to defer retrieval of a message to a later time. Both are commonly used in workflow processing scenarios where operations are processed in a particular order. You can postpone processing of received messages until prior work has been completed. - -Service Bus topics are a robust and proven technology for enabling publish/subscribe communication in your cloud-native systems. - -### Azure Event Grid - -While Azure Service Bus is a battle-tested messaging broker with a full set of enterprise features, [Azure Event Grid](/azure/event-grid/overview) is the new kid on the block.
- -At first glance, Event Grid may look like just another topic-based messaging system. However, it's different in many ways. Focused on event-driven workloads, it enables real-time event processing, deep Azure integration, and an open platform - all on serverless infrastructure. It's designed for contemporary cloud-native and serverless applications. - -As a centralized *eventing backplane*, or pipe, Event Grid reacts to events inside Azure resources and from your own services. - -Event notifications are published to an Event Grid Topic, which, in turn, routes each event to a subscription. Subscribers map to subscriptions and consume the events. Like Service Bus, Event Grid supports a *filtered subscriber model* where a subscription sets rules for the events it wishes to receive. Event Grid provides fast throughput with a guarantee of 10 million events per second, enabling near real-time delivery - far more than what Azure Service Bus can generate. - -A sweet spot for Event Grid is its deep integration into the fabric of Azure infrastructure. An Azure resource, such as Cosmos DB, can publish built-in events directly to other interested Azure resources - without the need for custom code. Event Grid can publish events from an Azure Subscription, Resource Group, or Service, giving developers fine-grained control over the lifecycle of cloud resources. However, Event Grid isn't limited to Azure. It's an open platform that can consume custom HTTP events published from applications or third-party services and route events to external subscribers. - -When publishing and subscribing to native events from Azure resources, no coding is required. With simple configuration, you can integrate events from one Azure resource to another leveraging built-in plumbing for Topics and Subscriptions. Figure 4-17 shows the anatomy of Event Grid. - -![Event Grid anatomy](./media/event-grid-anatomy.png) - -**Figure 4-17**. Event Grid anatomy - -A major difference between EventGrid and Service Bus is the underlying *message exchange pattern*. - -Service Bus implements an older style *pull model* in which the downstream subscriber actively polls the topic subscription for new messages. On the upside, this approach gives the subscriber full control of the pace at which it processes messages. It controls when and how many messages to process at any given time. Unread messages remain in the subscription until processed. A significant shortcoming is the latency between the time the event is generated and the polling operation that pulls that message to the subscriber for processing. Also, the overhead of constant polling for the next event consumes resources and money. - -EventGrid, however, is different. It implements a *push model* in which events are sent to the EventHandlers as received, giving near real-time event delivery. It also reduces cost as the service is triggered only when it's needed to consume an event – not continually as with polling. That said, an event handler must handle the incoming load and provide throttling mechanisms to protect itself from becoming overwhelmed. Many Azure services that consume these events, such as Azure Functions and Logic Apps, provide automatic autoscaling capabilities to handle increased loads. - -Event Grid is a fully managed serverless cloud service. It dynamically scales based on your traffic and charges you only for your actual usage, not pre-purchased capacity.
The first 100,000 operations per month are free – operations being defined as event ingress (incoming event notifications), subscription delivery attempts, management calls, and filtering by subject. With 99.99% availability, EventGrid guarantees the delivery of an event within a 24-hour period, with built-in retry functionality for unsuccessful delivery. Undelivered messages can be moved to a "dead-letter" queue for resolution. Unlike Azure Service Bus, Event Grid is tuned for fast performance and doesn't support features like ordered messaging, transactions, and sessions. - -### Streaming messages in the Azure cloud - -Azure Service Bus and Event Grid provide great support for applications that expose single, discrete events like a new document has been inserted into a Cosmos DB. But, what if your cloud-native system needs to process a *stream of related events*? [Event streams](/archive/msdn-magazine/2015/february/microsoft-azure-the-rise-of-event-stream-oriented-systems) are more complex. They're typically time-ordered, interrelated, and must be processed as a group. - -[Azure Event Hub](https://azure.microsoft.com/services/event-hubs/) is a data streaming platform and event ingestion service that collects, transforms, and stores events. It's fine-tuned to capture streaming data, such as continuous event notifications emitted from a telemetry context. The service is highly scalable and can store and [process millions of events per second](/azure/event-hubs/event-hubs-about). Shown in Figure 4-18, it's often a front door for an event pipeline, decoupling the ingest stream from event consumption. - -![Azure Event Hub](./media/azure-event-hub.png) - -**Figure 4-18**. Azure Event Hub - -Event Hub supports low latency and configurable time retention. Unlike queues and topics, Event Hubs keep event data after it's been read by a consumer. This feature enables other data analytic services, both internal and external, to replay the data for further analysis. Events stored in an event hub are only deleted upon expiration of the retention period, which is one day by default, but configurable. - -Event Hub supports common event publishing protocols including HTTPS and AMQP. It also supports Kafka 1.0. [Existing Kafka applications can communicate with Event Hub](/azure/event-hubs/event-hubs-for-kafka-ecosystem-overview) using the Kafka protocol providing an alternative to managing large Kafka clusters. Many open-source cloud-native systems embrace Kafka. - -Event Hubs implements message streaming through a [partitioned consumer model](/azure/event-hubs/event-hubs-features) in which each consumer only reads a specific subset, or partition, of the message stream. This pattern enables tremendous horizontal scale for event processing and provides other stream-focused features that are unavailable in queues and topics. A partition is an ordered sequence of events that is held in an event hub. As newer events arrive, they're added to the end of this sequence. Figure 4-19 shows partitioning in an Event Hub. - -![Event Hub partitioning](./media/event-hub-partitioning.png) - -**Figure 4-19**. Event Hub partitioning - -Instead of reading from the same resource, each consumer group reads across a subset, or partition, of the message stream. - -For cloud-native applications that must stream large numbers of events, Azure Event Hub can be a robust and affordable solution.
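The following sketch publishes a batch of telemetry events with the `Azure.Messaging.EventHubs` client library; the connection string and the hub name `telemetry` are hypothetical:

```csharp
using System.Text;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

// Hypothetical connection string and event hub name.
string connectionString = "<event-hubs-connection-string>";
await using var producer = new EventHubProducerClient(connectionString, "telemetry");

// Batch events so they're sent to the hub in a single operation.
using EventDataBatch batch = await producer.CreateBatchAsync();
batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("{\"deviceId\":\"sensor-01\",\"temperature\":21.5}")));

await producer.SendAsync(batch);
```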
- ->[!div class="step-by-step"] ->[Previous](front-end-communication.md) ->[Next](grpc.md) diff --git a/docs/architecture/cloud-native/summary.md b/docs/architecture/cloud-native/summary.md deleted file mode 100644 index 30577a6252ec7..0000000000000 --- a/docs/architecture/cloud-native/summary.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: Summary - Architecting cloud-native .NET apps for Azure -description: Learn the key conclusions about Cloud Native applications that are fully developed in the Architecting Cloud-Native .NET Apps for Azure guide/e-book. -ms.custom: kr2b-contr-experiment -ms.date: 04/06/2022 ---- - -# Summary: Architecting cloud-native apps - -[!INCLUDE [download-alert](includes/download-alert.md)] - -In summary, here are important conclusions from this guide: - -- **Cloud-native** is about designing modern applications that embrace rapid change, large scale, and resilience, in modern, dynamic environments such as public, private, and hybrid clouds. - -- The **[Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF)** is an influential open-source consortium of over 300 major corporations. It's responsible for driving the adoption of cloud-native computing across technology and cloud stacks. - -- **CNCF guidelines** recommend that cloud-native applications embrace six important pillars as shown in Figure 11-1: - - ![Cloud-native foundational pillars](./media/cloud-native-foundational-pillars.png) - - **Figure 11-1**. Cloud-native foundational pillars - -- These cloud-native pillars include: - - The cloud and its underlying service model - - Modern design principles - - Microservices - - Containerization and container orchestration - - Cloud-based backing services, such as databases and message brokers - - Automation, including Infrastructure as Code and code deployment - -- **Kubernetes** is the hosting environment of choice for most cloud-native applications. Smaller, simple services are sometimes hosted in serverless platforms, such as Azure Functions. Among many key automation features, both environments provide automatic scaling to handle fluctuating workload volumes. - -- **Service communication** becomes a significant design decision when constructing a cloud-native application. Applications typically expose an API gateway to manage front-end client communication. Then backend microservices strive to communicate with each other implementing asynchronous communication patterns, when possible. - -- **gRPC** is a modern, high-performance framework that evolves the age-old remote procedure call (RPC) protocol. Cloud-native applications often embrace gRPC to streamline messaging between back-end services. gRPC uses HTTP/2 for its transport protocol. It can be up to 8x faster than JSON serialization with message sizes 60-80% smaller. gRPC is open source and managed by the Cloud Native Computing Foundation (CNCF). - -- **Distributed data** is a model often implemented by cloud-native applications. Applications segregate business functionality into small, independent microservices. Each microservice encapsulates its own dependencies, data, and state. The classic shared database model evolves into one of many smaller databases, each aligning with a microservice. When the smoke clears, we emerge with a design that exposes a `database-per-microservice` model. - -- **No-SQL databases** refer to high-performance, non-relational data stores. They excel in their ease-of-use, scalability, resilience, and availability characteristics. 
High volume services that require sub second response time favor NoSQL datastores. The proliferation of NoSQL technologies for distributed cloud-native systems can't be overstated. - -- **NewSQL** is an emerging database technology that combines the distributed scalability of NoSQL and the ACID guarantees of a relational database. NewSQL databases target business systems that must process high-volumes of data, across distributed environments, with full transactional/ACID compliance. The Cloud Native Computing Foundation (CNCF) features several NewSQL database projects. - -- **Resiliency** is the ability of your system to react to failure and still remain functional. Cloud-native systems embrace distributed architecture where failure is inevitable. Applications must be constructed to respond elegantly to failure and quickly return to a fully functioning state. - -- **Service meshes** are a configurable infrastructure layer with built-in capabilities to handle service communication and other cross-cutting challenges. They decouple cross-cutting responsibilities from your business code. These responsibilities move into a service proxy. Referred to as the `Sidecar pattern`, the proxy is deployed into a separate process to provide isolation from your business code. - -- **Observability** is a key design consideration for cloud-native applications. As services are distributed across a cluster of nodes, centralized logging, monitoring, and alerts, become mandatory. Azure Monitor is a collection of cloud-based tools designed to provide visibility into the state of your system. - -- **Infrastructure as Code** is a widely accepted practice that automates platform provisioning. Your infrastructure and deployments are automated, consistent, and repeatable. Tools like Azure Resource Manager, Terraform, and the Azure CLI, enable you to declaratively script the cloud infrastructure you require. - -- **Code automation** is a requirement for cloud-native applications. Modern CI/CD systems help fulfill this principle. They provide separate build and deployment steps that help ensure consistent and quality code. The build stage transforms the code into a binary artifact. The release stage picks up the binary artifact, applies external environment configuration, and deploys it to a specified environment. Azure DevOps and GitHub are full-featured DevOps environments. - ->[!div class="step-by-step"] ->[Previous](application-bundles.md) diff --git a/docs/architecture/cloud-native/toc.yml b/docs/architecture/cloud-native/toc.yml deleted file mode 100644 index e5d36f21295ef..0000000000000 --- a/docs/architecture/cloud-native/toc.yml +++ /dev/null @@ -1,100 +0,0 @@ -items: -- name: "Architecting Cloud-Native .NET Apps for Azure" - href: index.md - items: - - name: Introduction to cloud-native applications - href: introduction.md - items: - - name: What is Cloud Native? 
- href: definition.md - - name: Candidate apps for Cloud Native - href: candidate-apps.md - - name: Introducing the eShopOnContainers reference app - href: introduce-eshoponcontainers-reference-app.md - items: - - name: Mapping eShopOnContainers to Azure Services - href: map-eshoponcontainers-azure-services.md - - name: Deploying eShopOnContainers to Azure - href: deploy-eshoponcontainers-azure.md - - name: Centralized configuration - href: centralized-configuration.md - - name: Scaling cloud-native .NET applications - href: scale-applications.md - items: - - name: Leveraging containers and orchestrators - href: leverage-containers-orchestrators.md - - name: Leveraging serverless functions - href: leverage-serverless-functions.md - - name: Combining containers and serverless approaches - href: combine-containers-serverless-approaches.md - - name: Deploying containers in Azure - href: deploy-containers-azure.md - - name: Scaling containers and serverless applications - href: scale-containers-serverless.md - - name: Other deployment options - href: other-deployment-options.md - - name: Cloud-native communication patterns - href: communication-patterns.md - items: - - name: Front-end client communication - href: front-end-communication.md - - name: Service to service communication - href: service-to-service-communication.md - - name: gRPC - href: grpc.md - - name: Service Mesh communication infrastructure - href: service-mesh-communication-infrastructure.md - - name: Cloud-native data patterns - href: distributed-data.md - items: - - name: Relational vs. NoSQL data - href: relational-vs-nosql-data.md - - name: Caching in a cloud-native application - href: azure-caching.md - - name: Elasticsearch in Azure - href: elastic-search-in-azure.md - - name: Cloud-native resiliency - href: resiliency.md - items: - - name: Application resiliency patterns - href: application-resiliency-patterns.md - - name: Cloud infrastructure resiliency with Azure - href: infrastructure-resiliency-azure.md - - name: Resilient communication - href: resilient-communications.md - - name: Monitoring and health - href: monitoring-health.md - items: - - name: Observability patterns - href: observability-patterns.md - - name: Logging with Elastic Stack - href: logging-with-elastic-stack.md - - name: Monitoring in Azure Kubernetes Services - href: monitoring-azure-kubernetes.md - - name: Azure Monitor - href: azure-monitor.md - - name: Cloud-native identity - href: identity.md - items: - - name: Authentication and authorization in cloud-native apps - href: authentication-authorization.md - - name: Azure Active Directory - href: azure-active-directory.md - - name: Identity Server - href: identity-server.md - - name: Cloud-native security - href: security.md - items: - - name: Azure Security for cloud-native apps - href: azure-security.md - - name: DevOps - href: devops.md - items: - - name: Feature flags - href: feature-flags.md - - name: Infrastructure as Code - href: infrastructure-as-code.md - - name: Cloud Native Application Bundles - href: application-bundles.md - - name: Summary - Architecting cloud-native .NET apps for Azure - href: summary.md diff --git a/docs/architecture/index.yml b/docs/architecture/index.yml index 18c17adc4dd44..8d66dcca6809a 100644 --- a/docs/architecture/index.yml +++ b/docs/architecture/index.yml @@ -8,7 +8,7 @@ metadata: title: .NET application architecture documentation description: Learn recommended practices for architecting, building, and migrating .NET apps. 
ms.topic: hub-page - ms.date: 12/14/2023 + ms.date: 10/23/2024 ms.service: dotnet ms.collection: collection @@ -17,57 +17,12 @@ metadata: highlightedContent: # itemType: architecture | concept | deploy | download | get-started | how-to-guide | learn | overview | quickstart | reference | tutorial | video | whats-new items: - - title: ".NET Microservices: Architecture for containerized .NET apps" - itemType: architecture - url: microservices/index.md - title: Blazor for ASP.NET Web Forms developers itemType: architecture url: blazor-for-web-forms-developers/index.md - title: Architecting cloud-native .NET apps for Azure itemType: architecture url: cloud-native/index.md - - title: Architect modern web apps with ASP.NET Core and Azure - itemType: architecture - url: modern-web-apps-azure/index.md - title: Enterprise Application Patterns Using .NET MAUI itemType: architecture - url: maui/index.md - -conceptualContent: -# itemType: architecture | concept | deploy | download | get-started | how-to-guide | learn | overview | quickstart | reference | tutorial | video | whats-new - items: - - title: Migrate .NET apps to Azure - links: - - text: Migrate your .NET app to Azure - itemType: get-started - url: https://dotnet.microsoft.com/apps/cloud/migrate-to-azure - - - title: Develop mobile and desktop apps - links: - - text: Build mobile and desktop apps with .NET MAUI - itemType: learn - url: /training/paths/build-apps-with-dotnet-maui - - - title: Develop cloud-native .NET apps for Azure - links: - - text: Hello World Microservice tutorial - itemType: learn - url: https://dotnet.microsoft.com/learn/aspnet/microservice-tutorial/intro - - text: Create and deploy a cloud-native ASP.NET Core microservice - itemType: learn - url: /training/modules/microservices-aspnet-core - - text: Deploy a cloud-native ASP.NET Core microservice with GitHub Actions - itemType: learn - url: /training/modules/microservices-devops-aspnet-core - - text: Implement resiliency in a cloud-native ASP.NET Core microservice - itemType: learn - url: /training/modules/microservices-resiliency-aspnet-core - - - title: Design guidelines - links: - - text: Framework design guidelines - itemType: concept - url: ../standard/design-guidelines/index.md - - text: Library design guidelines - itemType: concept - url: ../standard/library-guidance/index.md + url: maui/index.md \ No newline at end of file diff --git a/docs/architecture/microservices/architect-microservice-container-applications/asynchronous-message-based-communication.md b/docs/architecture/microservices/architect-microservice-container-applications/asynchronous-message-based-communication.md deleted file mode 100644 index fed29b20d70c3..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/asynchronous-message-based-communication.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: Asynchronous message-based communication -description: .NET Microservices Architecture for Containerized .NET Applications | Asynchronous message-based communications is an essential concept in the microservices architecture, because it's the best way to keep microservices independent from one another while also being synchronized eventually. 
-ms.date: 01/13/2021 ---- - -# Asynchronous message-based communication - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Asynchronous messaging and event-driven communication are critical when propagating changes across multiple [microservices](/azure/architecture/guide/architecture-styles/microservices) and their related domain models. As mentioned earlier in the discussion of microservices and Bounded Contexts (BCs), models (User, Customer, Product, Account, etc.) can mean different things to different microservices or BCs. That means that when changes occur, you need some way to reconcile changes across the different models. A solution is eventual consistency and event-driven communication based on asynchronous messaging. - -When using messaging, processes communicate by exchanging messages asynchronously. A client makes a command or a request to a service by sending it a message. If the service needs to reply, it sends a different message back to the client. Since it's message-based communication, the client assumes that the reply won't be received immediately, and that there might be no response at all. - -A message is composed of a header (metadata such as identification or security information) and a body. Messages are usually sent through asynchronous protocols like AMQP. - -The preferred infrastructure for this type of communication in the microservices community is a lightweight message broker, which is different from the large brokers and orchestrators used in SOA. In a lightweight message broker, the infrastructure is typically "dumb," acting only as a message broker, with simple implementations such as [RabbitMQ](https://www.rabbitmq.com/) or a scalable service bus in the cloud like [Azure Service Bus](/azure/service-bus-messaging/). In this scenario, most of the "smart" thinking still lives in the endpoints that are producing and consuming messages; that is, in the microservices. - -Another rule you should try to follow, as much as possible, is to use only asynchronous messaging between the internal services, and to use synchronous communication (such as HTTP) only from the client apps to the front-end services (API Gateways plus the first level of microservices). - -There are two kinds of asynchronous messaging communication: single-receiver message-based communication and multiple-receiver message-based communication. The following sections provide details about them. - -## Single-receiver message-based communication - -Message-based asynchronous communication with a single receiver means there's point-to-point communication that delivers a message to exactly one of the consumers that's reading from the channel, and that the message is processed just once. However, there are special situations. For instance, in a cloud system that tries to automatically recover from failures, the same message could be re-sent multiple times. Due to network or other failures, the client has to be able to retry sending messages, and the server has to implement an [idempotent](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-data-platform#idempotent-message-processing) operation so that a particular message is processed just once. - -Single-receiver message-based communication is especially well suited for sending asynchronous commands from one microservice to another, as shown in Figure 4-18.
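To make the single-receiver pattern concrete, here is a minimal sketch that sends a command message to an Azure Service Bus queue with the `Azure.Messaging.ServiceBus` library. The queue name, command shape, and connection string are hypothetical illustrations rather than part of the original guidance; the essential point is that exactly one competing consumer of the queue receives and processes the command.

```csharp
using System;
using Azure.Messaging.ServiceBus;

// Hypothetical names used only for this example.
const string connectionString = "<service-bus-connection-string>";
const string queueName = "ordering-commands";

await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender(queueName);

// A command is addressed to a single receiver: one consumer processes it once
// (the handler should still be idempotent in case the message is redelivered).
var command = new ServiceBusMessage(BinaryData.FromObjectAsJson(new
{
    CommandType = "CreateOrder",
    OrderId = Guid.NewGuid(),
    BuyerId = "buyer-42"
}))
{
    ContentType = "application/json"
};

await sender.SendMessageAsync(command);
```

On the receiving side, the ordering microservice would register a `ServiceBusProcessor` (or an equivalent consumer) against the same queue and complete each message only after it has been handled.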
- -Once you start sending message-based communication (either with commands or events), you should avoid mixing message-based communication with synchronous HTTP communication. - -![A single microservice receiving an asynchronous message](./media/asynchronous-message-based-communication/single-receiver-message-based-communication.png) - -**Figure 4-18**. A single microservice receiving an asynchronous message - -When the commands come from client applications, they can be implemented as HTTP synchronous commands. Use message-based commands when you need higher [scalability](/azure/architecture/guide/design-principles/scale-out) or when you're already in a message-based business process. - -## Multiple-receivers message-based communication - -As a more flexible approach, you might also want to use a [publish/subscribe](/azure/architecture/patterns/publisher-subscriber) mechanism so that your communication from the sender will be available to additional subscriber microservices or to external applications. Thus, it helps you to follow the [open/closed principle](https://en.wikipedia.org/wiki/Open/closed_principle) in the sending service. That way, additional subscribers can be added in the future without the need to modify the sender service. - -When you use a publish/subscribe communication, you might be using an event bus interface to publish events to any subscriber. - -## Asynchronous event-driven communication - -When using asynchronous event-driven communication, a microservice publishes an integration event when something happens within its domain and another microservice needs to be aware of it, like a price change in a product catalog microservice. Additional microservices subscribe to the events so they can receive them asynchronously. When that happens, the receivers might update their own domain entities, which can cause more integration events to be published. This publish/subscribe system is performed by using an implementation of an event bus. The event bus can be designed as an abstraction or interface, with the API that's needed to subscribe or unsubscribe to events and to publish events. The event bus can also have one or more implementations based on any inter-process and messaging broker, like a messaging queue or service bus that supports asynchronous communication and a publish/subscribe model. - -If a system uses eventual consistency driven by integration events, it's recommended that this approach is made clear to the end user. The system shouldn't use an approach that mimics integration events, like [SignalR](/aspnet/signalr/overview/getting-started/introduction-to-signalr) or polling systems from the client. The end user and the business owner have to explicitly embrace eventual consistency in the system and realize that in many cases the business doesn't have any problem with this approach, as long as it's explicit. This approach is important because users might expect to see some results immediately and this aspect might not happen with eventual consistency. - -As noted earlier in the [Challenges and solutions for distributed data management](distributed-data-management.md) section, you can use integration events to implement business tasks that span multiple microservices. Thus, you'll have eventual consistency between those services. An eventually consistent transaction is made up of a collection of distributed actions. 
At each action, the related microservice updates a domain entity and publishes another integration event that raises the next action within the same end-to-end business task. - -An important point is that you might want to communicate to multiple microservices that are subscribed to the same event. To do so, you can use publish/subscribe messaging based on event-driven communication, as shown in Figure 4-19. This publish/subscribe mechanism isn't exclusive to the microservice architecture. It's similar to the way [Bounded Contexts](https://martinfowler.com/bliki/BoundedContext.html) in DDD should communicate, or to the way you propagate updates from the write database to the read database in the [Command and Query Responsibility Segregation (CQRS)](https://martinfowler.com/bliki/CQRS.html) architecture pattern. The goal is to have eventual consistency between multiple data sources across your distributed system. - -![Diagram showing asynchronous event-driven communications.](./media/asynchronous-message-based-communication/asynchronous-event-driven-communication.png) - -**Figure 4-19**. Asynchronous event-driven message communication - -In asynchronous event-driven communication, one microservice publishes events to an event bus and many microservices can subscribe to it, to get notified and act on it. Your implementation will determine what protocol to use for event-driven, message-based communications. [AMQP](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol) can help achieve reliable queued communication. - -The amount of data to share in these events is another important consideration, whether just an identifier or also including various elements of business data as well. These considerations are discussed in this blog post on [thin vs fat integration events](https://codeopinion.com/thin-vs-fat-integration-events/). - -When you use an event bus, you might want to use an abstraction level (like an event bus interface) based on a related implementation in classes with code using the API from a message broker like [RabbitMQ](https://www.rabbitmq.com/) or a service bus like [Azure Service Bus with Topics](/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions). Alternatively, you might want to use a higher-level service bus like [NServiceBus](https://particular.net/nservicebus), [MassTransit](https://masstransit.io/), or [Brighter](https://www.goparamore.io/) to articulate your event bus and publish/subscribe system. - -## A note about messaging technologies for production systems - -The messaging technologies available for implementing your abstract event bus are at different levels. For instance, products like RabbitMQ (a messaging broker transport) and Azure Service Bus sit at a lower level than other products like [NServiceBus](https://particular.net/nservicebus), [MassTransit](https://masstransit.io/), or [Brighter](https://www.goparamore.io/), which can work on top of RabbitMQ and Azure Service Bus. Your choice depends on how many rich features at the application level and out-of-the-box scalability you need for your application. For implementing just a proof-of-concept event bus for your development environment, as it was done in the eShopOnContainers sample, a simple implementation on top of RabbitMQ running on a Docker container might be enough. - -However, for mission-critical and production systems that need hyper-scalability, you might want to evaluate Azure Service Bus. 
For high-level abstractions and features that make the development of distributed applications easier, we recommend that you evaluate other commercial and open-source service buses, such as [NServiceBus](https://particular.net/nservicebus), [MassTransit](https://masstransit.io/), and [Brighter](https://www.goparamore.io/). Of course, you can build your own service-bus features on top of lower-level technologies like RabbitMQ and Docker. But that plumbing work might cost too much for a custom enterprise application. - -## Resiliently publishing to the event bus - -A challenge when implementing an event-driven architecture across multiple microservices is how to atomically update state in the original microservice while resiliently publishing its related integration event into the event bus, somehow based on transactions. The following are a few ways to accomplish this functionality, although there could be additional approaches as well. - -- Using a transactional (DTC-based) queue like MSMQ. (However, this is a legacy approach.) - -- Using transaction log mining. - -- Using full [Event Sourcing](/azure/architecture/patterns/event-sourcing) pattern. - -- Using the [Outbox pattern](https://www.kamilgrzybek.com/design/the-outbox-pattern/): a transactional database table as a message queue that will be the base for an event-creator component that would create the event and publish it. - -For a more complete description of the challenges in this space, including how messages with potentially incorrect data can end up being published, see [Data platform for mission-critical workloads on Azure: Every message must be processed](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-data-platform#every-message-must-be-processed). - -Additional topics to consider when using asynchronous communication are message idempotence and message deduplication. These topics are covered in the section [Implementing event-based communication between microservices (integration events)](../multi-container-microservice-net-applications/integration-event-based-microservice-communications.md) later in this guide. - -## Additional resources - -- **Event Driven Messaging** \ - - -- **Publish/Subscribe Channel** \ - - -- **Udi Dahan. Clarified CQRS** \ - - -- **Command and Query Responsibility Segregation (CQRS)** \ - [https://learn.microsoft.com/azure/architecture/patterns/cqrs](/azure/architecture/patterns/cqrs) - -- **Communicating Between Bounded Contexts** \ - [https://learn.microsoft.com/previous-versions/msp-n-p/jj591572(v=pandp.10)](/previous-versions/msp-n-p/jj591572(v=pandp.10)) - -- **Eventual consistency** \ - - -- **Jimmy Bogard. 
Refactoring Towards Resilience: Evaluating Coupling** \ - - -> [!div class="step-by-step"] -> [Previous](communication-in-microservice-architecture.md) -> [Next](maintain-microservice-apis.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/communication-in-microservice-architecture.md b/docs/architecture/microservices/architect-microservice-container-applications/communication-in-microservice-architecture.md deleted file mode 100644 index 78ac23de2f629..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/communication-in-microservice-architecture.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -title: Communication in a microservice architecture -description: Explore different ways of communication between microservices, understanding the implications of synchronous and asynchronous approaches. -ms.date: 01/30/2020 ---- -# Communication in a microservice architecture - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -In a monolithic application running on a single process, components invoke one another using language-level method or function calls. These can be strongly coupled if you're creating objects with code (for example, `new ClassName()`), or can be invoked in a decoupled way if you're using Dependency Injection by referencing abstractions rather than concrete object instances. Either way, the objects are running within the same process. The biggest challenge when changing from a monolithic application to a microservices-based application lies in changing the communication mechanism. A direct conversion from in-process method calls into RPC calls to services will cause chatty, inefficient communication that won't perform well in distributed environments. The challenges of designing distributed systems properly are well known enough that there's even a canon known as the [Fallacies of distributed computing](https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing) that lists assumptions that developers often make when moving from monolithic to distributed designs. - -There isn't one solution, but several. One solution involves isolating the business microservices as much as possible. You then use asynchronous communication between the internal microservices and replace fine-grained communication that's typical in intra-process communication between objects with coarser-grained communication. You can do this by grouping calls and by returning to the client data that aggregates the results of multiple internal calls. - -A microservices-based application is a distributed system running on multiple processes or services, usually even across multiple servers or hosts. Each service instance is typically a process. Therefore, services must interact using an inter-process communication protocol such as HTTP, AMQP, or a binary protocol like TCP, depending on the nature of each service. - -The microservice community promotes the philosophy of "[smart endpoints and dumb pipes](https://simplicable.com/new/smart-endpoints-and-dumb-pipes)". This slogan encourages a design that's as decoupled as possible between microservices, and as cohesive as possible within a single microservice. As explained earlier, each microservice owns its own data and its own domain logic.
But the microservices composing an end-to-end application are usually simply choreographed by using REST communications rather than complex protocols such as WS-\* and flexible event-driven communications instead of centralized business-process-orchestrators. - -The two commonly used protocols are HTTP request/response with resource APIs (when querying most of all), and lightweight asynchronous messaging when communicating updates across multiple microservices. These are explained in more detail in the following sections. - -## Communication types - -Client and services can communicate through many different types of communication, each one targeting a different scenario and goals. Initially, those types of communications can be classified in two axes. - -The first axis defines if the protocol is synchronous or asynchronous: - -- Synchronous protocol. HTTP is a synchronous protocol. The client sends a request and waits for a response from the service. That's independent of the client code execution that could be synchronous (thread is blocked) or asynchronous (thread isn't blocked, and the response will reach a callback eventually). The important point here is that the protocol (HTTP/HTTPS) is synchronous and the client code can only continue its task when it receives the HTTP server response. - -- Asynchronous protocol. Other protocols like AMQP (a protocol supported by many operating systems and cloud environments) use asynchronous messages. The client code or message sender usually doesn't wait for a response. It just sends the message as when sending a message to a RabbitMQ queue or any other message broker. - -The second axis defines if the communication has a single receiver or multiple receivers: - -- Single receiver. Each request must be processed by exactly one receiver or service. An example of this communication is the [Command pattern](https://en.wikipedia.org/wiki/Command_pattern). - -- Multiple receivers. Each request can be processed by zero to multiple receivers. This type of communication must be asynchronous. An example is the [publish/subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) mechanism used in patterns like [Event-driven architecture](https://microservices.io/patterns/data/event-driven-architecture.html). This is based on an event-bus interface or message broker when propagating data updates between multiple microservices through events; it's usually implemented through a service bus or similar artifact like [Azure Service Bus](https://azure.microsoft.com/services/service-bus/) by using [topics and subscriptions](/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions). - -A microservice-based application will often use a combination of these communication styles. The most common type is single-receiver communication with a synchronous protocol like HTTP/HTTPS when invoking a regular Web API HTTP service. Microservices also typically use messaging protocols for asynchronous communication between microservices. - -These axes are good to know so you have clarity on the possible communication mechanisms, but they're not the important concerns when building microservices. Neither the asynchronous nature of client thread execution nor the asynchronous nature of the selected protocol are the important points when integrating microservices. What *is* important is being able to integrate your microservices asynchronously while maintaining the independence of microservices, as explained in the following section. 
- -## Asynchronous microservice integration enforces microservice's autonomy - -As mentioned, the important point when building a microservices-based application is the way you integrate your microservices. Ideally, you should try to minimize the communication between the internal microservices. The fewer communications between microservices, the better. But in many cases, you'll have to somehow integrate the microservices. When you need to do that, the critical rule here is that the communication between the microservices should be asynchronous. That doesn't mean that you have to use a specific protocol (for example, asynchronous messaging versus synchronous HTTP). It just means that the communication between microservices should be done only by propagating data asynchronously, but try not to depend on other internal microservices as part of the initial service's HTTP request/response operation. - -If possible, never depend on synchronous communication (request/response) between multiple microservices, not even for queries. The goal of each microservice is to be autonomous and available to the client consumer, even if the other services that are part of the end-to-end application are down or unhealthy. If you think you need to make a call from one microservice to other microservices (like performing an HTTP request for a data query) to be able to provide a response to a client application, you have an architecture that won't be resilient when some microservices fail. - -Moreover, having HTTP dependencies between microservices, like when creating long request/response cycles with HTTP request chains, as shown in the first part of the Figure 4-15, not only makes your microservices not autonomous but also their performance is impacted as soon as one of the services in that chain isn't performing well. - -The more you add synchronous dependencies between microservices, such as query requests, the worse the overall response time gets for the client apps. - -![Diagram showing three types of communications across microservices.](./media/communication-in-microservice-architecture/sync-vs-async-patterns-across-microservices.png) - -**Figure 4-15**. Anti-patterns and patterns in communication between microservices - -As shown in the above diagram, in synchronous communication a "chain" of requests is created between microservices while serving the client request. This is an anti-pattern. In asynchronous communication microservices use asynchronous messages or http polling to communicate with other microservices, but the client request is served right away. - -If your microservice needs to raise an additional action in another microservice, if possible, do not perform that action synchronously and as part of the original microservice request and reply operation. Instead, do it asynchronously (using asynchronous messaging or integration events, queues, etc.). But, as much as possible, do not invoke the action synchronously as part of the original synchronous request and reply operation. - -And finally (and this is where most of the issues arise when building microservices), if your initial microservice needs data that's originally owned by other microservices, do not rely on making synchronous requests for that data. Instead, replicate or propagate that data (only the attributes you need) into the initial service's database by using eventual consistency (typically by using integration events, as explained in upcoming sections). 
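The event-bus abstraction mentioned above can be sketched as a small set of types. The names below are illustrative only and aren't the eShopOnContainers implementation; the concrete class behind `IEventBus` could sit on top of RabbitMQ, Azure Service Bus topics, or another broker, and the handler shows a subscriber updating its own replicated copy of the data to achieve eventual consistency.

```csharp
using System;
using System.Threading.Tasks;

// Base type for integration events published across microservices.
public abstract record IntegrationEvent
{
    public Guid Id { get; init; } = Guid.NewGuid();
    public DateTime CreationDate { get; init; } = DateTime.UtcNow;
}

// Example event: the catalog microservice announces a price change.
public record ProductPriceChangedIntegrationEvent(int ProductId, decimal NewPrice, decimal OldPrice)
    : IntegrationEvent;

// Abstraction over the message broker; implementations could use RabbitMQ or Azure Service Bus.
public interface IEventBus
{
    Task PublishAsync(IntegrationEvent integrationEvent);

    void Subscribe<TEvent, THandler>()
        where TEvent : IntegrationEvent
        where THandler : IIntegrationEventHandler<TEvent>;
}

public interface IIntegrationEventHandler<in TEvent> where TEvent : IntegrationEvent
{
    Task HandleAsync(TEvent @event);
}

// A subscriber in the basket microservice updates its own replicated price data.
public class ProductPriceChangedHandler : IIntegrationEventHandler<ProductPriceChangedIntegrationEvent>
{
    public Task HandleAsync(ProductPriceChangedIntegrationEvent @event)
    {
        // Update the locally stored copy of the price here (eventual consistency).
        Console.WriteLine($"Product {@event.ProductId} price changed to {@event.NewPrice}");
        return Task.CompletedTask;
    }
}
```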
- -As noted earlier in the [Identifying domain-model boundaries for each microservice](identify-microservice-domain-model-boundaries.md) section, duplicating some data across several microservices isn't an incorrect design—on the contrary, when doing that you can translate the data into the specific language or terms of that additional domain or Bounded Context. For instance, in the [eShopOnContainers application](https://github.com/dotnet-architecture/eShopOnContainers) you have a microservice named `identity-api` that's in charge of most of the user's data with an entity named `User`. However, when you need to store data about the user within the `Ordering` microservice, you store it as a different entity named `Buyer`. The `Buyer` entity shares the same identity with the original `User` entity, but it might have only the few attributes needed by the `Ordering` domain, and not the whole user profile. - -You might use any protocol to communicate and propagate data asynchronously across microservices in order to have eventual consistency. As mentioned, you could use integration events using an event bus or message broker or you could even use HTTP by polling the other services instead. It doesn't matter. The important rule is to not create synchronous dependencies between your microservices. - -The following sections explain the multiple communication styles you can consider using in a microservice-based application. - -## Communication styles - -There are many protocols and choices you can use for communication, depending on the communication type you want to use. If you're using a synchronous request/response-based communication mechanism, protocols such as HTTP and REST approaches are the most common, especially if you're publishing your services outside the Docker host or microservice cluster. If you're communicating between services internally (within your Docker host or microservices cluster), you might also want to use binary format communication mechanisms (like WCF using TCP and binary format). Alternatively, you can use asynchronous, message-based communication mechanisms such as AMQP. - -There are also multiple message formats like JSON or XML, or even binary formats, which can be more efficient. If your chosen binary format isn't a standard, it's probably not a good idea to publicly publish your services using that format. You could use a non-standard format for internal communication between your microservices. You might do this when communicating between microservices within your Docker host or microservice cluster (for example, Docker orchestrators), or for proprietary client applications that talk to the microservices. - -### Request/response communication with HTTP and REST - -When a client uses request/response communication, it sends a request to a service, then the service processes the request and sends back a response. Request/response communication is especially well suited for querying data for a real-time UI (a live user interface) from client apps. Therefore, in a microservice architecture you'll probably use this communication mechanism for most queries, as shown in Figure 4-16. - -![Diagram showing request/response comms for live queries and updates.](./media/communication-in-microservice-architecture/request-response-comms-live-queries-updates.png) - -**Figure 4-16**. 
Using HTTP request/response communication (synchronous or asynchronous) - -When a client uses request/response communication, it assumes that the response will arrive in a short time, typically less than a second, or a few seconds at most. For delayed responses, you need to implement asynchronous communication based on [messaging patterns](/azure/architecture/patterns/category/messaging) and [messaging technologies](https://en.wikipedia.org/wiki/Message-oriented_middleware), which is a different approach that we explain in the next section. - -A popular architectural style for request/response communication is [REST](https://en.wikipedia.org/wiki/Representational_state_transfer). This approach is based on, and tightly coupled to, the [HTTP](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) protocol, embracing HTTP verbs like GET, POST, and PUT. REST is the most commonly used architectural communication approach when creating services. You can implement REST services when you develop ASP.NET Core Web API services. - -There's additional value when using HTTP REST services as your interface definition language. For instance, if you use [Swagger metadata](https://swagger.io/) to describe your service API, you can use tools that generate client stubs that can directly discover and consume your services. - -### Additional resources - -- **Martin Fowler. Richardson Maturity Model** A description of the REST model. \ - - -- **Swagger** The official site. \ - - -### Push and real-time communication based on HTTP - -Another possibility (usually for different purposes than REST) is a real-time and one-to-many communication with higher-level frameworks such as [ASP.NET SignalR](https://www.asp.net/signalr) and protocols such as [WebSockets](https://en.wikipedia.org/wiki/WebSocket). - -As Figure 4-17 shows, real-time HTTP communication means that you can have server code pushing content to connected clients as the data becomes available, rather than having the server wait for a client to request new data. - -![Diagram showing push and real-time comms based on SignalR.](./media/communication-in-microservice-architecture/one-to-many-communication.png) - -**Figure 4-17**. One-to-many real-time asynchronous message communication - -SignalR is a good way to achieve real-time communication for pushing content to the clients from a back-end server. Since communication is in real time, client apps show the changes almost instantly. This is usually handled by a protocol such as WebSockets, using many WebSockets connections (one per client). A typical example is when a service communicates a change in the score of a sports game to many client web apps simultaneously. 
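As a rough sketch of this push model (assuming ASP.NET Core SignalR, with illustrative hub, method, and payload names), a back-end service can broadcast a score change to every connected client the moment the data changes:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Clients connect to this hub (for example, mapped at "/hubs/score").
public class ScoreHub : Hub
{
}

// A service that pushes updates to every connected client as soon as data changes.
public class ScoreNotifier
{
    private readonly IHubContext<ScoreHub> _hubContext;

    public ScoreNotifier(IHubContext<ScoreHub> hubContext) => _hubContext = hubContext;

    public Task PublishScoreAsync(string game, int homeScore, int awayScore) =>
        _hubContext.Clients.All.SendAsync("scoreUpdated", new { game, homeScore, awayScore });
}
```

The hub would be mapped in the ASP.NET Core endpoint pipeline (for example, `app.MapHub<ScoreHub>("/hubs/score")`), and browser or .NET clients subscribe to the `scoreUpdated` method to render changes almost instantly.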
- ->[!div class="step-by-step"] ->[Previous](direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md) ->[Next](asynchronous-message-based-communication.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/containerize-monolithic-applications.md b/docs/architecture/microservices/architect-microservice-container-applications/containerize-monolithic-applications.md deleted file mode 100644 index 55c7e522886ec..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/containerize-monolithic-applications.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -title: Containerizing monolithic applications -description: Containerizing monolithic applications, although doesn't get all the benefits from the microservices architecture, has important deployment benefits that can be delivered right away. -ms.date: 11/19/2021 ---- -# Containerizing monolithic applications - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -You might want to build a single, monolithically deployed web application or service and deploy it as a container. The application itself might not be internally monolithic, but structured as several libraries, components, or even layers (application layer, domain layer, data-access layer, etc.). Externally, however, it's a single container—a single process, a single web application, or a single service. - -To manage this model, you deploy a single container to represent the application. To increase capacity, you scale out, that is, just add more copies with a load balancer in front. The simplicity comes from managing a single deployment in a single container or VM. - -![Diagram showing a monolithic containerized application's components.](./media/containerize-monolithic-applications/monolithic-containerized-application.png) - -**Figure 4-1**. Example of the architecture of a containerized monolithic application - -You can include multiple components, libraries, or internal layers in each container, as illustrated in Figure 4-1. A monolithic containerized application has most of its functionality within a single container, with internal layers or libraries, and scales out by cloning the container on multiple servers/VMs. However, this monolithic pattern might conflict with the container principle "a container does one thing, and does it in one process", but might be ok for some cases. - -The downside of this approach becomes evident if the application grows, requiring it to scale. If the entire application can scale, it isn't really a problem. However, in most cases, just a few parts of the application are the choke points that require scaling, while other components are used less. - -For example, in a typical e-commerce application, you likely need to scale the product information subsystem, because many more customers browse products than purchase them. More customers use their basket than use the payment pipeline. Fewer customers add comments or view their purchase history. And you might have only a handful of employees that need to manage the content and marketing campaigns. If you scale the monolithic design, all the code for these different tasks is deployed multiple times and scaled at the same grade. - -There are multiple ways to scale an application-horizontal duplication, splitting different areas of the application, and partitioning similar business concepts or data. 
But, in addition to the problem of scaling all components, changes to a single component require complete retesting of the entire application, and a complete redeployment of all the instances. - -However, the monolithic approach is common, because the development of the application is initially easier than for microservices approaches. Thus, many organizations develop using this architectural approach. While some organizations have had good enough results, others are hitting limits. Many organizations designed their applications using this model because tools and infrastructure made it too difficult to build service-oriented architectures (SOA) years ago, and they did not see the need-until the application grew. - -From an infrastructure perspective, each server can run many applications within the same host and have an acceptable ratio of efficiency in resources usage, as shown in Figure 4-2. - -![Diagram showing one host running many apps in containers.](./media/containerize-monolithic-applications/host-multiple-apps-containers.png) - -**Figure 4-2**. Monolithic approach: Host running multiple apps, each app running as a container - -Monolithic applications in Microsoft Azure can be deployed using dedicated VMs for each instance. Additionally, using [Azure virtual machine scale sets](https://azure.microsoft.com/documentation/services/virtual-machine-scale-sets/), you can easily scale the VMs. [Azure App Service](https://azure.microsoft.com/services/app-service/) can also run monolithic applications and easily scale instances without requiring you to manage the VMs. Since 2016, Azure App Services can run single instances of Docker containers as well, simplifying deployment. - -As a QA environment or a limited production environment, you can deploy multiple Docker host VMs and balance them using the Azure balancer, as shown in Figure 4-3. This lets you manage scaling with a coarse-grain approach, because the whole application lives within a single container. - -![Diagram showing several hosts running the monolithic app containers.](./media/containerize-monolithic-applications/docker-infrastructure-monolithic-application.png) - -**Figure 4-3**. Example of multiple hosts scaling up a single container application - -Deployment to the various hosts can be managed with traditional deployment techniques. Docker hosts can be managed with commands like `docker run` or `docker-compose` performed manually, or through automation such as continuous delivery (CD) pipelines. - -## Deploying a monolithic application as a container - -There are benefits to using containers to manage monolithic application deployments. Scaling container instances is far faster and easier than deploying additional VMs. Even if you use virtual machine scale sets, VMs take time to start. When deployed as traditional application instances instead of containers, the configuration of the application is managed as part of the VM, which isn't ideal. - -Deploying updates as Docker images is far faster and network efficient. Docker images typically start in seconds, which speeds rollouts. Tearing down a Docker image instance is as easy as issuing a `docker stop` command, and typically completes in less than a second. - -Because containers are immutable by design, you never need to worry about corrupted VMs. In contrast, update scripts for a VM might forget to account for some specific configuration or file left on disk. - -While monolithic applications can benefit from Docker, we're touching only on the benefits. 
Additional benefits of managing containers come from deploying with container orchestrators, which manage the various instances and lifecycle of each container instance. Breaking up the monolithic application into subsystems that can be scaled, developed, and deployed individually is your entry point into the realm of microservices. - -## Publishing a single-container-based application to Azure App Service - -Whether you want to get validation of a container deployed to Azure or when an application is simply a single-container application, Azure App Service provides a great way to provide scalable single-container-based services. Using Azure App Service is simple. It provides great integration with Git to make it easy to take your code, build it in Visual Studio, and deploy it directly to Azure. - -![Screenshot of Create App Service dialog showing a Container Registry.](./media/containerize-monolithic-applications/publish-azure-app-service-container.png) - -**Figure 4-4**. Publishing a single-container application to Azure App Service from Visual Studio 2022 - -Without Docker, if you needed other capabilities, frameworks, or dependencies that aren't supported in Azure App Service, you had to wait until the Azure team updated those dependencies in App Service. Or you had to switch to other services like Azure Cloud Services or VMs, where you had further control and you could install a required component or framework for your application. - -Container support in Visual Studio 2017 and later gives you the ability to include whatever you want in your application environment, as shown in Figure 4-4. Since you're running it in a container, if you add a dependency to your application, you can include the dependency in your Dockerfile or Docker image. - -As also shown in Figure 4-4, the publish flow pushes an image through a container registry. This can be the Azure Container Registry (a registry close to your deployments in Azure and secured by Azure Active Directory groups and accounts), or any other Docker registry, like Docker Hub or an on-premises registry. - ->[!div class="step-by-step"] ->[Previous](index.md) ->[Next](docker-application-state-data.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/data-sovereignty-per-microservice.md b/docs/architecture/microservices/architect-microservice-container-applications/data-sovereignty-per-microservice.md deleted file mode 100644 index cc0bfcd88ba83..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/data-sovereignty-per-microservice.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -title: Data sovereignty per microservice -description: Data sovereignty per microservice is one of the key points of microservices. Each microservice must be the sole owner of its database, sharing it with no other. Of course all instances of a microservice connect to the same high availability database. -ms.date: 09/20/2018 ---- -# Data sovereignty per microservice - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -An important rule for microservices architecture is that each microservice must own its domain data and logic. Just as a full application owns its logic and data, so must each microservice own its logic and data under an autonomous lifecycle, with independent deployment per microservice. - -This means that the conceptual model of the domain will differ between subsystems or microservices. 
Consider enterprise applications, where customer relationship management (CRM) applications, transactional purchase subsystems, and customer support subsystems each call on unique customer entity attributes and data, and where each employs a different Bounded Context (BC). - -This principle is similar in [Domain-driven design (DDD)](https://en.wikipedia.org/wiki/Domain-driven_design), where each [Bounded Context](https://martinfowler.com/bliki/BoundedContext.html) or autonomous subsystem or service must own its domain model (data plus logic and behavior). Each DDD Bounded Context correlates to one business microservice (one or several services). This point about the Bounded Context pattern is expanded in the next section. - -On the other hand, the traditional (monolithic data) approach used in many applications is to have a single centralized database or just a few databases. This is often a normalized SQL database that's used for the whole application and all its internal subsystems, as shown in Figure 4-7. - -![Diagram showing the two database approaches.](./media/data-sovereignty-per-microservice/data-sovereignty-comparison.png) - -**Figure 4-7**. Data sovereignty comparison: monolithic database versus microservices - -In the traditional approach, there's a single database shared across all services, typically in a tiered architecture. In the microservices approach, each microservice owns its model/data. The centralized database approach initially looks simpler and seems to enable reuse of entities in different subsystems to make everything consistent. But the reality is you end up with huge tables that serve many different subsystems, and that include attributes and columns that aren't needed in most cases. It's like trying to use the same physical map for hiking a short trail, taking a day-long car trip, and learning geography. - -A monolithic application with typically a single relational database has two important benefits: [ACID transactions](https://en.wikipedia.org/wiki/ACID) and the SQL language, both working across all the tables and data related to your application. This approach provides a way to easily write a query that combines data from multiple tables. - -However, data access becomes much more complicated when you move to a microservices architecture. Even when using ACID transactions within a microservice or Bounded Context, it is crucial to consider that the data owned by each microservice is private to that microservice and should only be accessed either synchronously through its API endpoints(REST, gRPC, SOAP, etc) or asynchronously via messaging(AMQP or similar). - -Encapsulating the data ensures that the microservices are loosely coupled and can evolve independently of one another. If multiple services were accessing the same data, schema updates would require coordinated updates to all the services. This would break the microservice lifecycle autonomy. But distributed data structures mean that you can't make a single ACID transaction across microservices. This in turn means you must use eventual consistency when a business process spans multiple microservices. This is much harder to implement than simple SQL joins, because you can't create integrity constraints or use distributed transactions between separate databases, as we'll explain later on. Similarly, many other relational database features aren't available across multiple microservices. - -Going even further, different microservices often use different *kinds* of databases. 
Modern applications store and process diverse kinds of data, and a relational database isn't always the best choice. For some use cases, a NoSQL database such as Azure CosmosDB or MongoDB might have a more convenient data model and offer better performance and scalability than a SQL database like SQL Server or Azure SQL Database. In other cases, a relational database is still the best approach. Therefore, microservices-based applications often use a mixture of SQL and NoSQL databases, which is sometimes called the [polyglot persistence](https://martinfowler.com/bliki/PolyglotPersistence.html) approach. - -A partitioned, polyglot-persistent architecture for data storage has many benefits. These include loosely coupled services and better performance, scalability, costs, and manageability. However, it can introduce some distributed data management challenges, as explained in "[Identifying domain-model boundaries](identify-microservice-domain-model-boundaries.md)" later in this chapter. - -## The relationship between microservices and the Bounded Context pattern - -The concept of microservice derives from the [Bounded Context (BC) pattern](https://martinfowler.com/bliki/BoundedContext.html) in [domain-driven design (DDD)](https://en.wikipedia.org/wiki/Domain-driven_design). DDD deals with large models by dividing them into multiple BCs and being explicit about their boundaries. Each BC must have its own model and database; likewise, each microservice owns its related data. In addition, each BC usually has its own [ubiquitous language](https://martinfowler.com/bliki/UbiquitousLanguage.html) to help communication between software developers and domain experts. - -Those terms (mainly domain entities) in the ubiquitous language can have different names in different Bounded Contexts, even when different domain entities share the same identity (that is, the unique ID that's used to read the entity from storage). For instance, in a user-profile Bounded Context, the User domain entity might share identity with the Buyer domain entity in the ordering Bounded Context. - -A microservice is therefore like a Bounded Context, but it also specifies that it's a distributed service. It's built as a separate process for each Bounded Context, and it must use the distributed protocols noted earlier, like HTTP/HTTPS, WebSockets, or [AMQP](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol). The Bounded Context pattern, however, doesn't specify whether the Bounded Context is a distributed service or if it's simply a logical boundary (such as a generic subsystem) within a monolithic-deployment application. - -It's important to highlight that defining a service for each Bounded Context is a good place to start. But you don't have to constrain your design to it. Sometimes you must design a Bounded Context or business microservice composed of several physical services. But ultimately, both patterns -Bounded Context and microservice- are closely related. - -DDD benefits from microservices by getting real boundaries in the form of distributed microservices. But ideas like not sharing the model between microservices are what you also want in a Bounded Context. - -### Additional resources - -- **Chris Richardson. Pattern: Database per service** \ - - -- **Martin Fowler. BoundedContext** \ - - -- **Martin Fowler. PolyglotPersistence** \ - - -- **Alberto Brandolini. 
Strategic Domain Driven Design with Context Mapping** \ - - ->[!div class="step-by-step"] ->[Previous](microservices-architecture.md) ->[Next](logical-versus-physical-architecture.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md b/docs/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md deleted file mode 100644 index 3b5cf83332f65..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md +++ /dev/null @@ -1,192 +0,0 @@ ---- -title: The API gateway pattern versus the direct client-to-microservice communication -description: Understand the differences and the uses of the API gateway pattern and the direct client-to-microservice communication. -ms.date: 01/13/2021 ---- -# The API gateway pattern versus the Direct client-to-microservice communication - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -In a microservices architecture, each microservice exposes a set of (typically) fine-grained endpoints. This fact can impact the client-to-microservice communication, as explained in this section. - -## Direct client-to-microservice communication - -A possible approach is to use a direct client-to-microservice communication architecture. In this approach, a client app can make requests directly to some of the microservices, as shown in Figure 4-12. - -![Diagram showing client-to-microservice communication architecture.](./media/direct-client-to-microservice-communication.png) - -**Figure 4-12**. Using a direct client-to-microservice communication architecture - -In this approach, each microservice has a public endpoint, sometimes with a different TCP port for each microservice. An example of a URL for a particular service could be the following URL in Azure: - -`http://eshoponcontainers.westus.cloudapp.azure.com:88/` - -In a production environment based on a cluster, that URL would map to the load balancer used in the cluster, which in turn distributes the requests across the microservices. In production environments, you could have an Application Delivery Controller (ADC) like [Azure Application Gateway](/azure/application-gateway/application-gateway-introduction) between your microservices and the Internet. This layer acts as a transparent tier that not only performs load balancing, but secures your services by offering SSL termination. This approach improves the load of your hosts by offloading CPU-intensive SSL termination and other routing duties to the Azure Application Gateway. In any case, a load balancer and ADC are transparent from a logical application architecture point of view. - -A direct client-to-microservice communication architecture could be good enough for a small microservice-based application, especially if the client app is a server-side web application like an ASP.NET MVC app. However, when you build large and complex microservice-based applications (for example, when handling dozens of microservice types), and especially when the client apps are remote mobile apps or SPA web applications, that approach faces a few issues. 
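To make the chattiness problem concrete, the following C# sketch shows what composing a single screen can look like when a client app calls several microservices directly. The host names, ports, and routes are hypothetical placeholders, not endpoints from the eShopOnContainers sample.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

// Illustrative only: one screen requires three separate round trips from the
// (possibly remote) client device, and the client is coupled to three endpoints.
public class ProfileScreenLoader
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<(string identity, string basket, string orders)> LoadAsync(string userId)
    {
        var identity = await Http.GetStringAsync($"http://identity.contoso.example:5105/api/users/{userId}");
        var basket   = await Http.GetStringAsync($"http://basket.contoso.example:5103/api/basket/{userId}");
        var orders   = await Http.GetStringAsync($"http://ordering.contoso.example:5102/api/orders?buyerId={userId}");

        // The client app must also know how to merge the three payloads into one view model.
        return (identity, basket, orders);
    }
}
```

Each additional microservice involved in a screen adds another round trip and another endpoint the client has to know about, which is exactly the kind of coupling and latency the following questions address.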
- -Consider the following questions when developing a large application based on microservices: - -- *How can client apps minimize the number of requests to the back end and reduce chatty communication to multiple microservices?* - -Interacting with multiple microservices to build a single UI screen increases the number of round trips across the Internet. This approach increases latency and complexity on the UI side. Ideally, responses should be efficiently aggregated in the server side. This approach reduces latency, since multiple pieces of data come back in parallel and some UI can show data as soon as it's ready. - -- *How can you handle cross-cutting concerns such as authorization, data transformations, and dynamic request dispatching?* - -Implementing security and cross-cutting concerns like security and authorization on every microservice can require significant development effort. A possible approach is to have those services within the Docker host or internal cluster to restrict direct access to them from the outside, and to implement those cross-cutting concerns in a centralized place, like an API Gateway. - -- *How can client apps communicate with services that use non-Internet-friendly protocols?* - -Protocols used on the server side (like AMQP or binary protocols) are not supported in client apps. Therefore, requests must be performed through protocols like HTTP/HTTPS and translated to the other protocols afterwards. A *man-in-the-middle* approach can help in this situation. - -- *How can you shape a facade especially made for mobile apps?* - -The API of multiple microservices might not be well designed for the needs of different client applications. For instance, the needs of a mobile app might be different than the needs of a web app. For mobile apps, you might need to optimize even further so that data responses can be more efficient. You might do this functionality by aggregating data from multiple microservices and returning a single set of data, and sometimes eliminating any data in the response that isn't needed by the mobile app. And, of course, you might compress that data. Again, a facade or API in between the mobile app and the microservices can be convenient for this scenario. - -## Why consider API Gateways instead of direct client-to-microservice communication - -In a microservices architecture, the client apps usually need to consume functionality from more than one microservice. If that consumption is performed directly, the client needs to handle multiple calls to microservice endpoints. What happens when the application evolves and new microservices are introduced or existing microservices are updated? If your application has many microservices, handling so many endpoints from the client apps can be a nightmare. Since the client app would be coupled to those internal endpoints, evolving the microservices in the future can cause high impact for the client apps. - -Therefore, having an intermediate level or tier of indirection (Gateway) can be convenient for microservice-based applications. If you don't have API Gateways, the client apps must send requests directly to the microservices and that raises problems, such as the following issues: - -- **Coupling**: Without the API Gateway pattern, the client apps are coupled to the internal microservices. The client apps need to know how the multiple areas of the application are decomposed in microservices. 
When evolving and refactoring the internal microservices, those actions impact maintenance because they cause breaking changes to the client apps due to the direct reference to the internal microservices from the client apps. Client apps need to be updated frequently, making the solution harder to evolve. - -- **Too many round trips**: A single page/screen in the client app might require several calls to multiple services. That approach can result in multiple network round trips between the client and the server, adding significant latency. Aggregation handled in an intermediate level could improve the performance and user experience for the client app. - -- **Security issues**: Without a gateway, all the microservices must be exposed to the "external world", making the attack surface larger than if you hide internal microservices that aren't directly used by the client apps. The smaller the attack surface is, the more secure your application can be. - -- **Cross-cutting concerns**: Each publicly published microservice must handle concerns such as authorization and SSL. In many situations, those concerns could be handled in a single tier so the internal microservices are simplified. - -## What is the API Gateway pattern? - -When you design and build large or complex microservice-based applications with multiple client apps, a good approach to consider can be an [API Gateway](https://microservices.io/patterns/apigateway.html). This pattern is a service that provides a single-entry point for certain groups of microservices. It's similar to the [Facade pattern](https://en.wikipedia.org/wiki/Facade_pattern) from object-oriented design, but in this case, it's part of a distributed system. The API Gateway pattern is also sometimes known as the "backend for frontend" ([BFF](https://samnewman.io/patterns/architectural/bff/)) because you build it while thinking about the needs of the client app. - -Therefore, the API gateway sits between the client apps and the microservices. It acts as a reverse proxy, routing requests from clients to services. It can also provide other cross-cutting features such as authentication, SSL termination, and cache. - -Figure 4-13 shows how a custom API Gateway can fit into a simplified microservice-based architecture with just a few microservices. - -![Diagram showing an API Gateway implemented as a custom service.](./media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/custom-service-api-gateway.png) - -**Figure 4-13**. Using an API Gateway implemented as a custom service - -Apps connect to a single endpoint, the API Gateway, that's configured to forward requests to individual microservices. In this example, the API Gateway would be implemented as a custom ASP.NET Core WebHost service running as a container. - -It's important to highlight that in that diagram, you would be using a single custom API Gateway service facing multiple and different client apps. That fact can be an important risk because your API Gateway service will be growing and evolving based on many different requirements from the client apps. Eventually, it will be bloated because of those different needs and effectively it could be similar to a monolithic application or monolithic service. That's why it's very much recommended to split the API Gateway in multiple services or multiple smaller API Gateways, one per client app form-factor type, for instance. - -You need to be careful when implementing the API Gateway pattern. 
Usually it isn't a good idea to have a single API Gateway aggregating all the internal microservices of your application. If it does, it acts as a monolithic aggregator or orchestrator and violates microservice autonomy by coupling all the microservices. - -Therefore, the API Gateways should be segregated based on business boundaries and the client apps and not act as a single aggregator for all the internal microservices. - -When splitting the API Gateway tier into multiple API Gateways, if your application has multiple client apps, that can be a primary pivot when identifying the multiple API Gateways types, so that you can have a different facade for the needs of each client app. This case is a pattern named "Backend for Frontend" ([BFF](https://samnewman.io/patterns/architectural/bff/)) where each API Gateway can provide a different API tailored for each client app type, possibly even based on the client form factor by implementing specific adapter code which underneath calls multiple internal microservices, as shown in the following image: - -![Diagram showing multiple custom API Gateways.](./media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/multiple-custom-api-gateways.png) - -**Figure 4-13.1**. Using multiple custom API Gateways - -Figure 4-13.1 shows API Gateways that are segregated by client type; one for mobile clients and one for web clients. A traditional web app connects to an MVC microservice that uses the web API Gateway. The example depicts a simplified architecture with multiple fine-grained API Gateways. In this case, the boundaries identified for each API Gateway are based purely on the "Backend for Frontend" ([BFF](https://samnewman.io/patterns/architectural/bff/)) pattern, hence based just on the API needed per client app. But in larger applications you should also go further and create other API Gateways based on business boundaries as a second design pivot. - -## Main features in the API Gateway pattern - -An API Gateway can offer multiple features. Depending on the product it might offer richer or simpler features, however, the most important and foundational features for any API Gateway are the following design patterns: - -**Reverse proxy or gateway routing.** The API Gateway offers a reverse proxy to redirect or route requests (layer 7 routing, usually HTTP requests) to the endpoints of the internal microservices. The gateway provides a single endpoint or URL for the client apps and then internally maps the requests to a group of internal microservices. This routing feature helps to decouple the client apps from the microservices but it's also convenient when modernizing a monolithic API by sitting the API Gateway in between the monolithic API and the client apps, then you can add new APIs as new microservices while still using the legacy monolithic API until it's split into many microservices in the future. Because of the API Gateway, the client apps won't notice if the APIs being used are implemented as internal microservices or a monolithic API and more importantly, when evolving and refactoring the monolithic API into microservices, thanks to the API Gateway routing, client apps won't be impacted with any URI change. - -For more information, see [Gateway routing pattern](/azure/architecture/patterns/gateway-routing). - -**Requests aggregation.** As part of the gateway pattern you can aggregate multiple client requests (usually HTTP requests) targeting multiple internal microservices into a single client request. 
This pattern is especially convenient when a client page/screen needs information from several microservices. With this approach, the client app sends a single request to the API Gateway that dispatches several requests to the internal microservices and then aggregates the results and sends everything back to the client app. The main benefit and goal of this design pattern is to reduce chattiness between the client apps and the backend API, which is especially important for remote apps out of the datacenter where the microservices live, like mobile apps or requests coming from SPA apps that come from JavaScript in client remote browsers. For regular web apps performing the requests in the server environment (like an ASP.NET Core MVC web app), this pattern is not so important as the latency is very much smaller than for remote client apps. - -Depending on the API Gateway product you use, it might be able to perform this aggregation. However, in many cases it's more flexible to create aggregation microservices under the scope of the API Gateway, so you define the aggregation in code (that is, C# code): - -For more information, see [Gateway aggregation pattern](/azure/architecture/patterns/gateway-aggregation). - -**Cross-cutting concerns or gateway offloading.** Depending on the features offered by each API Gateway product, you can offload functionality from individual microservices to the gateway, which simplifies the implementation of each microservice by consolidating cross-cutting concerns into one tier. This approach is especially convenient for specialized features that can be complex to implement properly in every internal microservice, such as the following functionality: - -- Authentication and authorization -- Service discovery integration -- Response caching -- Retry policies, circuit breaker, and QoS -- Rate limiting and throttling -- Load balancing -- Logging, tracing, correlation -- Headers, query strings, and claims transformation -- IP allowlisting - -For more information, see [Gateway offloading pattern](/azure/architecture/patterns/gateway-offloading). - -## Using products with API Gateway features - -There can be many more cross-cutting concerns offered by the API Gateways products depending on each implementation. We'll explore here: - -- [Azure API Management](https://azure.microsoft.com/services/api-management/) -- [Ocelot](https://github.com/ThreeMammals/Ocelot) - -### Azure API Management - -[Azure API Management](https://azure.microsoft.com/services/api-management/) (as shown in Figure 4-14) not only solves your API Gateway needs but provides features like gathering insights from your APIs. If you're using an API management solution, an API Gateway is only a component within that full API management solution. - -![Diagram showing how to use Azure API Management as your API gateway.](./media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/api-gateway-azure-api-management.png) - -**Figure 4-14**. Using Azure API Management for your API Gateway - -Azure API Management solves both your API Gateway and Management needs like logging, security, metering, etc. In this case, when using a product like Azure API Management, the fact that you might have a single API Gateway is not so risky because these kinds of API Gateways are "thinner", meaning that you don't implement custom C# code that could evolve towards a monolithic component. 
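By contrast, when you do implement the requests-aggregation feature described earlier in your own code, the gateway (or an aggregator service under its scope) typically fans out to the internal microservices in parallel and composes a single response. The following minimal ASP.NET Core sketch illustrates the idea; the downstream service URLs and the composed response shape are hypothetical.

```csharp
// Minimal API sketch (ASP.NET Core web project, implicit usings assumed).
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

app.MapGet("/aggregator/order-details/{orderId}", async (string orderId, IHttpClientFactory factory) =>
{
    var client = factory.CreateClient();

    // Fan out to the internal microservices in parallel to keep latency low.
    var orderTask  = client.GetStringAsync($"http://ordering-api/api/orders/{orderId}");
    var basketTask = client.GetStringAsync($"http://basket-api/api/basket/{orderId}");
    await Task.WhenAll(orderTask, basketTask);

    // Compose one response tailored to the client app.
    return Results.Json(new
    {
        order  = System.Text.Json.JsonDocument.Parse(orderTask.Result).RootElement,
        basket = System.Text.Json.JsonDocument.Parse(basketTask.Result).RootElement
    });
});

app.Run();
```

Whether this logic lives in the gateway itself or in a separate aggregator microservice, keeping it small and scoped to specific client needs helps it avoid growing into the monolithic aggregator warned about earlier.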
- -The API Gateway products usually act like a reverse proxy for ingress communication, where you can also filter the APIs from the internal microservices plus apply authorization to the published APIs in this single tier. - -The insights available from an API Management system help you get an understanding of how your APIs are being used and how they are performing. They do this by letting you view near real-time analytics reports and identify trends that might impact your business. Plus, you can have logs about request and response activity for further online and offline analysis. - -With Azure API Management, you can secure your APIs using a key, a token, and IP filtering. These features let you enforce flexible and fine-grained quotas and rate limits, modify the shape and behavior of your APIs using policies, and improve performance with response caching. - -In this guide and the reference sample application (eShopOnContainers), the architecture is limited to a simpler and custom-made containerized architecture in order to focus on plain containers without using PaaS products like Azure API Management. But for large microservice-based applications that are deployed into Microsoft Azure, we encourage you to evaluate Azure API Management as the base for your API Gateways in production. - -### Ocelot - -[Ocelot](https://github.com/ThreeMammals/Ocelot) is a lightweight API Gateway, recommended for simpler approaches. Ocelot is an open-source, .NET Core-based API Gateway especially made for microservices architectures that need unified points of entry into their systems. It's lightweight, fast, and scalable and provides routing and authentication among many other features. - -The main reason to choose Ocelot for the [eShopOnContainers reference application 2.0](https://github.com/dotnet-architecture/eShopOnContainers/releases/tag/2.0) is that Ocelot is a lightweight .NET Core API Gateway that you can deploy into the same application deployment environment where you're deploying your microservices/containers, such as a Docker Host, Kubernetes, etc. And since it's based on .NET Core, it's cross-platform, allowing you to deploy on Linux or Windows. - -The previous diagrams showing custom API Gateways running in containers are precisely how you can also run Ocelot in a container and microservice-based application. - -In addition, there are many other products in the market offering API Gateway features, such as Apigee, Kong, MuleSoft, WSO2, and other products like Linkerd and Istio for service mesh ingress controller features. - -After the initial architecture and patterns explanation sections, the next sections explain how to implement API Gateways with [Ocelot](https://github.com/ThreeMammals/Ocelot). - -## Drawbacks of the API Gateway pattern - -- The most important drawback is that when you implement an API Gateway, you're coupling that tier with the internal microservices. Coupling like this might introduce serious difficulties for your application. Clemens Vasters, architect at the Azure Service Bus team, refers to this potential difficulty as "the new ESB" in the "[Messaging and Microservices](https://www.youtube.com/watch?v=rXi5CLjIQ9k)" session at GOTO 2016. - -- Using a microservices API Gateway creates an additional possible single point of failure. - -- An API Gateway can introduce increased response time due to the additional network call. 
However, this extra call usually has less impact than having a client interface that's too chatty directly calling the internal microservices. - -- If not scaled out properly, the API Gateway can become a bottleneck. - -- An API Gateway requires additional development cost and future maintenance if it includes custom logic and data aggregation. Developers must update the API Gateway in order to expose each microservice's endpoints. Moreover, implementation changes in the internal microservices might cause code changes at the API Gateway level. However, if the API Gateway is just applying security, logging, and versioning (as when using Azure API Management), this additional development cost might not apply. - -- If the API Gateway is developed by a single team, there can be a development bottleneck. This aspect is another reason why a better approach is to have several fine-grained API Gateways that respond to different client needs. You could also segregate the API Gateway internally into multiple areas or layers that are owned by the different teams working on the internal microservices. - -## Additional resources - -- **Chris Richardson. Pattern: API Gateway / Backend for Front-End** \ - - -- **API Gateway pattern** \ - [https://learn.microsoft.com/azure/architecture/microservices/gateway](/azure/architecture/microservices/gateway) - -- **Aggregation and composition pattern** \ - - -- **Azure API Management** \ - - -- **Udi Dahan. Service Oriented Composition** \ - - -- **Clemens Vasters. Messaging and Microservices at GOTO 2016 (video)** \ - - -- **API Gateway in a Nutshell** (ASP.NET Core API Gateway Tutorial Series) \ - - ->[!div class="step-by-step"] ->[Previous](identify-microservice-domain-model-boundaries.md) ->[Next](communication-in-microservice-architecture.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/distributed-data-management.md b/docs/architecture/microservices/architect-microservice-container-applications/distributed-data-management.md deleted file mode 100644 index 17a39618534d5..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/distributed-data-management.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: Challenges and solutions for distributed data management -description: Learn about the challenges and solutions for distributed data management in the microservices world. -ms.date: 09/20/2018 ---- -# Challenges and solutions for distributed data management - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -## Challenge \#1: How to define the boundaries of each microservice - -Defining microservice boundaries is probably the first challenge anyone encounters. Each microservice has to be a piece of your application and each microservice should be autonomous with all the benefits and challenges that it conveys. But how do you identify those boundaries? - -First, you need to focus on the application's logical domain models and related data. Try to identify decoupled islands of data and different contexts within the same application. Each context could have a different business language (different business terms). The contexts should be defined and managed independently. The terms and entities that are used in those different contexts might sound similar, but you might discover that a business concept that serves one purpose in a particular context is used for a different purpose in another context, and might even have a different name. 
For instance, a user can be referred to as a user in the identity or membership context, as a customer in a CRM context, as a buyer in an ordering context, and so forth. - -The way you identify boundaries between multiple application contexts with a different domain for each context is exactly how you can identify the boundaries for each business microservice and its related domain model and data. You always attempt to minimize the coupling between those microservices. This guide goes into more detail about this identification and domain model design in the section [Identifying domain-model boundaries for each microservice](identify-microservice-domain-model-boundaries.md) later. - -## Challenge \#2: How to create queries that retrieve data from several microservices - -A second challenge is how to implement queries that retrieve data from several microservices, while avoiding chatty communication to the microservices from remote client apps. An example could be a single screen from a mobile app that needs to show user information that's owned by the basket, catalog, and user identity microservices. Another example would be a complex report involving many tables located in multiple microservices. The right solution depends on the complexity of the queries. But in any case, you'll need a way to aggregate information if you want to improve the efficiency of communication in your system. The most popular solutions are the following. - -**API Gateway.** For simple data aggregation from multiple microservices that own different databases, the recommended approach is an aggregation microservice referred to as an API Gateway. However, you need to be careful about implementing this pattern, because it can be a choke point in your system, and it can violate the principle of microservice autonomy. To mitigate this possibility, you can have multiple fine-grained API Gateways, each one focusing on a vertical "slice" or business area of the system. The API Gateway pattern is explained in more detail in the [API Gateway section](direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md#why-consider-api-gateways-instead-of-direct-client-to-microservice-communication) later. - -**GraphQL Federation.** One option to consider if your microservices are already using GraphQL is [GraphQL Federation](https://www.apollographql.com/docs/federation/). Federation allows you to define "subgraphs" from other services and compose them into an aggregate "supergraph" that acts as a standalone schema. - -**CQRS with query/reads tables.** Another solution for aggregating data from multiple microservices is the [Materialized View pattern](/azure/architecture/patterns/materialized-view). In this approach, you generate, in advance (prepare denormalized data before the actual queries happen), a read-only table with the data that's owned by multiple microservices. The table has a format suited to the client app's needs. - -Consider something like the screen for a mobile app. If you have a single database, you might pull together the data for that screen using a SQL query that performs a complex join involving multiple tables. However, when you have multiple databases, and each database is owned by a different microservice, you cannot query those databases and create a SQL join. Your complex query becomes a challenge. You can address the requirement using a CQRS approach: you create a denormalized table in a different database that's used just for queries. 
The table can be designed specifically for the data you need for the complex query, with a one-to-one relationship between fields needed by your application's screen and the columns in the query table. It could also serve for reporting purposes. - -This approach not only solves the original problem (how to query and join across microservices), but it also improves performance considerably when compared with a complex join, because you already have the data that the application needs in the query table. Of course, using Command and Query Responsibility Segregation (CQRS) with query/reads tables means additional development work, and you'll need to embrace eventual consistency. Nonetheless, requirements on performance and high scalability in [collaborative scenarios](http://udidahan.com/2011/10/02/why-you-should-be-using-cqrs-almost-everywhere/) (or competitive scenarios, depending on the point of view) are where you should apply CQRS with multiple databases. - -**"Cold data" in central databases.** For complex reports and queries that might not require real-time data, a common approach is to export your "hot data" (transactional data from the microservices) as "cold data" into large databases that are used only for reporting. That central database system can be a Big Data-based system, like Hadoop; a data warehouse like one based on Azure SQL Data Warehouse; or even a single SQL database that's used just for reports (if size won't be an issue). - -Keep in mind that this centralized database would be used only for queries and reports that do not need real-time data. The original updates and transactions, as your source of truth, have to be in your microservices data. The way you would synchronize data would be either by using event-driven communication (covered in the next sections) or by using other database infrastructure import/export tools. If you use event-driven communication, that integration process would be similar to the way you propagate data as described earlier for CQRS query tables. - -However, if your application design involves constantly aggregating information from multiple microservices for complex queries, it might be a symptom of a bad design -a microservice should be as isolated as possible from other microservices. (This excludes reports/analytics that always should use cold-data central databases.) Having this problem often might be a reason to merge microservices. You need to balance the autonomy of evolution and deployment of each microservice with strong dependencies, cohesion, and data aggregation. - -## Challenge \#3: How to achieve consistency across multiple microservices - -As stated previously, the data owned by each microservice is private to that microservice and can only be accessed using its microservice API. Therefore, a challenge presented is how to implement end-to-end business processes while keeping consistency across multiple microservices. - -To analyze this problem, let's look at an example from the [eShopOnContainers reference application](https://aka.ms/eshoponcontainers). The Catalog microservice maintains information about all the products, including the product price. The Basket microservice manages temporal data about product items that users are adding to their shopping baskets, which includes the price of the items at the time they were added to the basket. 
When a product's price is updated in the catalog, that price should also be updated in the active baskets that hold that same product, plus the system should probably warn the user saying that a particular item's price has changed since they added it to their basket. - -In a hypothetical monolithic version of this application, when the price changes in the products table, the catalog subsystem could simply use an ACID transaction to update the current price in the Basket table. - -However, in a microservices-based application, the Product and Basket tables are owned by their respective microservices. No microservice should ever include tables/storage owned by another microservice in its own transactions, not even in direct queries, as shown in Figure 4-9. - -![Diagram showing that microservices database data can't be shared.](./media/distributed-data-management/indepentent-microservice-databases.png) - -**Figure 4-9**. A microservice can't directly access a table in another microservice - -The Catalog microservice shouldn't update the Basket table directly, because the Basket table is owned by the Basket microservice. To make an update to the Basket microservice, the Catalog microservice should use eventual consistency probably based on asynchronous communication such as integration events (message and event-based communication). This is how the [eShopOnContainers](https://aka.ms/eshoponcontainers) reference application performs this type of consistency across microservices. - -As stated by the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), you need to choose between availability and ACID strong consistency. Most microservice-based scenarios demand availability and high scalability as opposed to strong consistency. Mission-critical applications must remain up and running, and developers can work around strong consistency by using techniques for working with weak or eventual consistency. This is the approach taken by most microservice-based architectures. - -Moreover, ACID-style or two-phase commit transactions are not just against microservices principles; most NoSQL databases (like Azure Cosmos DB, MongoDB, etc.) do not support two-phase commit transactions, typical in distributed databases scenarios. However, maintaining data consistency across services and databases is essential. This challenge is also related to the question of how to propagate changes across multiple microservices when certain data needs to be redundant—for example, when you need to have the product's name or description in the Catalog microservice and the Basket microservice. - -A good solution for this problem is to use eventual consistency between microservices articulated through event-driven communication and a publish-and-subscribe system. These topics are covered in the section [Asynchronous event-driven communication](asynchronous-message-based-communication.md#asynchronous-event-driven-communication) later in this guide. - -## Challenge \#4: How to design communication across microservice boundaries - -Communicating across microservice boundaries is a real challenge. In this context, communication doesn't refer to what protocol you should use (HTTP and REST, AMQP, messaging, and so on). Instead, it addresses what communication style you should use, and especially how coupled your microservices should be. Depending on the level of coupling, when failure occurs, the impact of that failure on your system will vary significantly. 
- -In a distributed system like a microservices-based application, with so many artifacts moving around and with distributed services across many servers or hosts, components will eventually fail. Partial failure and even larger outages will occur, so you need to design your microservices and the communication across them considering the common risks in this type of distributed system. - -A popular approach is to implement HTTP (REST)-based microservices, due to their simplicity. An HTTP-based approach is perfectly acceptable; the issue here is related to how you use it. If you use HTTP requests and responses just to interact with your microservices from client applications or from API Gateways, that's fine. But if you create long chains of synchronous HTTP calls across microservices, communicating across their boundaries as if the microservices were objects in a monolithic application, your application will eventually run into problems. - -For instance, imagine that your client application makes an HTTP API call to an individual microservice like the Ordering microservice. If the Ordering microservice in turn calls additional microservices using HTTP within the same request/response cycle, you're creating a chain of HTTP calls. It might sound reasonable initially. However, there are important points to consider when going down this path: - -- Blocking and low performance. Due to the synchronous nature of HTTP, the original request doesn't get a response until all the internal HTTP calls are finished. Imagine if the number of these calls increases significantly and at the same time one of the intermediate HTTP calls to a microservice is blocked. The result is that performance is impacted, and the overall scalability will be exponentially affected as additional HTTP requests increase. - -- Coupling microservices with HTTP. Business microservices shouldn't be coupled with other business microservices. Ideally, they shouldn't "know" about the existence of other microservices. If your application relies on coupling microservices as in the example, achieving autonomy per microservice will be almost impossible. - -- Failure in any one microservice. If you implemented a chain of microservices linked by HTTP calls, when any of the microservices fails (and eventually they will fail) the whole chain of microservices will fail. A microservice-based system should be designed to continue to work as well as possible during partial failures. Even if you implement client logic that uses retries with exponential backoff or circuit breaker mechanisms, the more complex the HTTP call chains are, the more complex it is to implement a failure strategy based on HTTP. - -In fact, if your internal microservices are communicating by creating chains of HTTP requests as described, it could be argued that you have a monolithic application, but one based on HTTP between processes instead of intra-process communication mechanisms. - -Therefore, in order to enforce microservice autonomy and have better resiliency, you should minimize the use of chains of request/response communication across microservices. It's recommended that you use only asynchronous interaction for inter-microservice communication, either by using asynchronous message- and event-based communication, or by using (asynchronous) HTTP polling independently of the original HTTP request/response cycle. 
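To illustrate the shape of that asynchronous, event-based integration, the following C# sketch revisits the price-change scenario from Challenge \#3. The `IEventBus` abstraction and the event and handler names are illustrative; they resemble, but aren't necessarily identical to, the types used in the eShopOnContainers sample.

```csharp
using System.Threading.Tasks;

// Illustrative publish/subscribe abstraction (for example, implemented on top of
// RabbitMQ or Azure Service Bus). Interface and type names are hypothetical.
public interface IEventBus
{
    Task PublishAsync<TEvent>(TEvent integrationEvent);
}

public record ProductPriceChangedIntegrationEvent(int ProductId, decimal NewPrice, decimal OldPrice);

// Catalog microservice: commits the price change to its own database, then
// publishes an event instead of touching the Basket microservice's data.
public class CatalogPriceUpdater
{
    private readonly IEventBus _eventBus;
    public CatalogPriceUpdater(IEventBus eventBus) => _eventBus = eventBus;

    public async Task UpdatePriceAsync(int productId, decimal oldPrice, decimal newPrice)
    {
        // 1. Update the Catalog database in a local ACID transaction (omitted here).
        // 2. Publish the integration event; consumers react asynchronously.
        await _eventBus.PublishAsync(
            new ProductPriceChangedIntegrationEvent(productId, newPrice, oldPrice));
    }
}

// Basket microservice: reacts to the event and updates its own copy of the price,
// achieving eventual consistency without a distributed transaction.
public class ProductPriceChangedIntegrationEventHandler
{
    public Task HandleAsync(ProductPriceChangedIntegrationEvent priceChanged)
    {
        // Update matching basket items and flag them so the UI can warn the user.
        // Persistence details are omitted in this sketch.
        return Task.CompletedTask;
    }
}
```

The key point is that each service only ever writes to its own store; the published event is the contract between them.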
- -The use of asynchronous communication is explained with additional details later in this guide in the sections [Asynchronous microservice integration enforces microservice's autonomy](communication-in-microservice-architecture.md#asynchronous-microservice-integration-enforces-microservices-autonomy) and [Asynchronous message-based communication](asynchronous-message-based-communication.md). - -## Additional resources - -- **CAP theorem** \ - - -- **Eventual consistency** \ - - -- **Data Consistency Primer** \ - [https://learn.microsoft.com/previous-versions/msp-n-p/dn589800(v=pandp.10)](/previous-versions/msp-n-p/dn589800(v=pandp.10)) - -- **Martin Fowler. CQRS (Command and Query Responsibility Segregation)** \ - - -- **Materialized View** \ - [https://learn.microsoft.com/azure/architecture/patterns/materialized-view](/azure/architecture/patterns/materialized-view) - -- **Charles Row. ACID vs. BASE: The Shifting pH of Database Transaction Processing** \ - - -- **Compensating Transaction** \ - [https://learn.microsoft.com/azure/architecture/patterns/compensating-transaction](/azure/architecture/patterns/compensating-transaction) - -- **Udi Dahan. Service Oriented Composition** \ - - ->[!div class="step-by-step"] ->[Previous](logical-versus-physical-architecture.md) ->[Next](identify-microservice-domain-model-boundaries.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/docker-application-state-data.md b/docs/architecture/microservices/architect-microservice-container-applications/docker-application-state-data.md deleted file mode 100644 index d10612cdd4a85..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/docker-application-state-data.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -title: Manage state and data in Docker applications -description: State and data management in Docker applications. Microservice instances are expendable, but data is NOT. Learn how to handle this with microservices. -ms.date: 09/20/2018 ---- -# Manage state and data in Docker applications - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -In most cases, you can think of a container as an instance of a process. A process doesn't maintain persistent state. While a container can write to its local storage, assuming that an instance will be around indefinitely would be like assuming that a single location in memory will be durable. You should assume that container images, like processes, have multiple instances or will eventually be killed. If they're managed with a container orchestrator, you should assume that they might get moved from one node or VM to another. - -The following solutions are used to manage data in Docker applications: - -From the Docker host, as [Docker Volumes](https://docs.docker.com/engine/admin/volumes/): - -- **Volumes** are stored in an area of the host filesystem that's managed by Docker. - -- **Bind mounts** can map to any folder in the host filesystem, so access can't be controlled from Docker process and can pose a security risk as a container could access sensitive OS folders. - -- **tmpfs mounts** are like virtual folders that only exist in the host's memory and are never written to the filesystem. - -From remote storage: - -- [Azure Storage](https://azure.microsoft.com/documentation/services/storage/), which provides geo-distributable storage, providing a good long-term persistence solution for containers. 
- -- Remote relational databases like [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) or NoSQL databases like [Azure Cosmos DB](/azure/cosmos-db/introduction), or cache services like [Redis](https://redis.io/). - -From the Docker container: - -- **Overlay File System**. This Docker feature implements a copy-on-write task that stores updated information to the root file system of the container. That information is "on top" of the original image on which the container is based. If the container is deleted from the system, those changes are lost. Therefore, while it's possible to save the state of a container within its local storage, designing a system around this would conflict with the premise of container design, which by default is stateless. - -However, using Docker Volumes is now the preferred way to handle local data in Docker. If you need more information about storage in containers, check [Docker storage drivers](https://docs.docker.com/engine/storage/drivers/select-storage-driver/) and [About storage drivers](https://docs.docker.com/engine/storage/drivers/). - -The following provides more detail about these options: - -**Volumes** are directories mapped from the host OS to directories in containers. When code in the container has access to the directory, that access is actually to a directory on the host OS. This directory is not tied to the lifetime of the container itself, and the directory is managed by Docker and isolated from the core functionality of the host machine. Thus, data volumes are designed to persist data independently of the life of the container. If you delete a container or an image from the Docker host, the data persisted in the data volume isn't deleted. - -Volumes can be named or anonymous (the default). Named volumes are the evolution of **Data Volume Containers** and make it easy to share data between containers. Volumes also support volume drivers that allow you to store data on remote hosts, among other options. - -**Bind mounts** have been available for a long time and allow you to map any folder to a mount point in a container. Bind mounts have more limitations than volumes and some important security issues, so volumes are the recommended option. - -**tmpfs mounts** are basically virtual folders that live only in the host's memory and are never written to the filesystem. They are fast and secure but use memory and are only meant for temporary, non-persistent data. - -As shown in Figure 4-5, regular Docker volumes can be stored outside of the containers themselves but within the physical boundaries of the host server or VM. However, Docker containers can't access a volume from one host server or VM to another. In other words, with these volumes, it isn't possible to manage data shared between containers that run on different Docker hosts, although it could be achieved with a volume driver that supports remote hosts. - -![Diagram showing volumes and external data sources for container-based apps.](./media/docker-application-state-data/volumes-external-data-sources.png) - -**Figure 4-5**. Volumes and external data sources for container-based applications - -Volumes can be shared between containers, but only in the same host, unless you use a remote driver that supports remote hosts. In addition, when Docker containers are managed by an orchestrator, containers might "move" between hosts, depending on the optimizations performed by the cluster. Therefore, it isn't recommended that you use data volumes for business data. 
But they're a good mechanism to work with trace files, temporal files, or similar that will not impact business data consistency. - -**Remote data sources and cache** tools like Azure SQL Database, Azure Cosmos DB, or a remote cache like Redis can be used in containerized applications the same way they are used when developing without containers. This is a proven way to store business application data. - -**Azure Storage.** Business data usually will need to be placed in external resources or databases, like Azure Storage. Azure Storage, in concrete, provides the following services in the cloud: - -- Blob storage stores unstructured object data. A blob can be any type of text or binary data, such as document or media files (images, audio, and video files). Blob storage is also referred to as Object storage. - -- File storage offers shared storage for legacy applications using standard SMB protocol. Azure virtual machines and cloud services can share file data across application components via mounted shares. On-premises applications can access file data in a share via the File service REST API. - -- Table storage stores structured datasets. Table storage is a NoSQL key-attribute data store, which allows rapid development and fast access to large quantities of data. - -**Relational databases and NoSQL databases.** There are many choices for external databases, from relational databases like SQL Server, PostgreSQL, Oracle, or NoSQL databases like Azure Cosmos DB, MongoDB, etc. These databases are not going to be explained as part of this guide since they are in a completely different subject. - ->[!div class="step-by-step"] ->[Previous](containerize-monolithic-applications.md) ->[Next](service-oriented-architecture.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md b/docs/architecture/microservices/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md deleted file mode 100644 index 358b2388b9807..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: Identifying domain-model boundaries for each microservice -description: Explore the essence of partitioning a large application into microservices to achieve a sound architecture. -ms.date: 09/20/2018 ---- -# Identify domain-model boundaries for each microservice - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -The goal when identifying model boundaries and size for each microservice isn't to get to the most granular separation possible, although you should tend toward small microservices if possible. Instead, your goal should be to get to the most meaningful separation guided by your domain knowledge. The emphasis isn't on the size, but instead on business capabilities. In addition, if there's clear cohesion needed for a certain area of the application based on a high number of dependencies, that indicates the need for a single microservice, too. Cohesion is a way to identify how to break apart or group together microservices. Ultimately, while you gain more knowledge about the domain, you should adapt the size of your microservice, iteratively. Finding the right size isn't a one-shot process. 
- -[Sam Newman](https://samnewman.io/), a recognized promoter of microservices and author of the book [Building Microservices](https://samnewman.io/books/building_microservices/), highlights that you should design your microservices based on the Bounded Context (BC) pattern (part of domain-driven design), as introduced earlier. Sometimes, a BC could be composed of several physical services, but not vice versa. - -A domain model with specific domain entities applies within a concrete BC or microservice. A BC delimits the applicability of a domain model and gives developer team members a clear and shared understanding of what must be cohesive and what can be developed independently. These are the same goals for microservices. - -Another tool that informs your design choice is [Conway's law](https://en.wikipedia.org/wiki/Conway%27s_law), which states that an application will reflect the social boundaries of the organization that produced it. But sometimes the opposite is true -the company's organization is formed by the software. You might need to reverse Conway's law and build the boundaries the way you want the company to be organized, leaning toward business process consulting. - -To identify bounded contexts, you can use a DDD pattern called the [Context Mapping pattern](https://www.infoq.com/articles/ddd-contextmapping). With Context Mapping, you identify the various contexts in the application and their boundaries. It's common to have a different context and boundary for each small subsystem, for instance. The Context Map is a way to define and make explicit those boundaries between domains. A BC is autonomous and includes the details of a single domain -details like the domain entities- and defines integration contracts with other BCs. This is similar to the definition of a microservice: it's autonomous, it implements certain domain capability, and it must provide interfaces. This is why Context Mapping and the Bounded Context pattern are good approaches for identifying the domain model boundaries of your microservices. - -When designing a large application, you'll see how its domain model can be fragmented - a domain expert from the catalog domain will name entities differently in the catalog and inventory domains than a shipping domain expert, for instance. Or the user domain entity might be different in size and number of attributes when dealing with a CRM expert who wants to store every detail about the customer than for an ordering domain expert who just needs partial data about the customer. It's very hard to disambiguate all domain terms across all the domains related to a large application. But the most important thing is that you shouldn't try to unify the terms. Instead, accept the differences and richness provided by each domain. If you try to have a unified database for the whole application, attempts at a unified vocabulary will be awkward and won't sound right to any of the multiple domain experts. Therefore, BCs (implemented as microservices) will help you to clarify where you can use certain domain terms and where you'll need to split the system and create additional BCs with different domains. - -You'll know that you got the right boundaries and sizes of each BC and domain model if you have few strong relationships between domain models, and you do not usually need to merge information from multiple domain models when performing typical application operations. 
- -Perhaps the best answer to the question of how large a domain model for each microservice should be is the following: it should have an autonomous BC, as isolated as possible, that enables you to work without having to constantly switch to other contexts (other microservice's models). In Figure 4-10, you can see how multiple microservices (multiple BCs) each has their own model and how their entities can be defined, depending on the specific requirements for each of the identified domains in your application. - -![Diagram showing entities in several model boundaries.](./media/identify-microservice-domain-model-boundaries/identify-entities-microservice-model-boundries.png) - -**Figure 4-10**. Identifying entities and microservice model boundaries - -Figure 4-10 illustrates a sample scenario related to an online conference management system. The same entity appears as "Users", "Buyers", "Payers", and "Customers" depending on the bounded context. You've identified several BCs that could be implemented as microservices, based on domains that domain experts defined for you. As you can see, there are entities that are present just in a single microservice model, like Payments in the Payment microservice. Those will be easy to implement. - -However, you might also have entities that have a different shape but share the same identity across the multiple domain models from the multiple microservices. For example, the User entity is identified in the Conferences Management microservice. That same user, with the same identity, is the one named Buyers in the Ordering microservice, or the one named Payer in the Payment microservice, and even the one named Customer in the Customer Service microservice. This is because, depending on the [ubiquitous language](https://martinfowler.com/bliki/UbiquitousLanguage.html) that each domain expert is using, a user might have a different perspective even with different attributes. The user entity in the microservice model named Conferences Management might have most of its personal data attributes. However, that same user in the shape of Payer in the microservice Payment or in the shape of Customer in the microservice Customer Service might not need the same list of attributes. - -A similar approach is illustrated in Figure 4-11. - -![Diagram showing how to decompose a data model into multiple domain models.](./media/identify-microservice-domain-model-boundaries/decompose-traditional-data-models.png) - -**Figure 4-11**. Decomposing traditional data models into multiple domain models - -When decomposing a traditional data model between bounded contexts, you can have different entities that share the same identity (a buyer is also a user) with different attributes in each bounded context. You can see how the user is present in the Conferences Management microservice model as the User entity and is also present in the form of the Buyer entity in the Pricing microservice, with alternate attributes or details about the user when it's actually a buyer. Each microservice or BC might not need all the data related to a User entity, just part of it, depending on the problem to solve or the context. For instance, in the Pricing microservice model, you do not need the address or the name of the user, just the ID (as identity) and Status, which will have an impact on discounts when pricing the seats per buyer. - -The Seat entity has the same name but different attributes in each domain model. However, Seat shares identity based on the same ID, as happens with User and Buyer. 
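As a rough C# illustration of the scenario in Figures 4-10 and 4-11, the following sketch shows how the same identity can take a different shape in two bounded contexts. The namespaces and attribute lists are hypothetical, chosen only to mirror the conference-management example.

```csharp
using System;

namespace ConferencesManagement.Domain
{
    // The "User" shape in the Conferences Management microservice holds most personal data.
    public class User
    {
        public Guid Id { get; set; }        // Identity shared across bounded contexts
        public string Name { get; set; }
        public string Email { get; set; }
        public string Address { get; set; }
    }
}

namespace Pricing.Domain
{
    // The same person in the Pricing microservice only needs identity plus status,
    // because status is what drives discounts when pricing the seats per buyer.
    public class Buyer
    {
        public Guid Id { get; set; }        // Same ID as ConferencesManagement.Domain.User
        public string Status { get; set; }
    }
}
```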
- -Basically, there's a shared concept of a user that exists in multiple services (domains), which all share the identity of that user. But in each domain model there might be additional or different details about the user entity. Therefore, there needs to be a way to map a user entity from one domain (microservice) to another. - -There are several benefits to not sharing the same user entity with the same number of attributes across domains. One benefit is to reduce duplication, so that microservice models do not have any data that they do not need. Another benefit is having a primary microservice that owns a certain type of data per entity so that updates and queries for that type of data are driven only by that microservice. - ->[!div class="step-by-step"] ->[Previous](distributed-data-management.md) ->[Next](direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/index.md b/docs/architecture/microservices/architect-microservice-container-applications/index.md deleted file mode 100644 index cde99dba5887d..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/index.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: Architecting container and microservice-based applications -description: Architecting container and microservice-based applications is no small feat and shouldn't be taken lightly. Learn the core concepts in this chapter. -ms.date: 01/13/2021 ---- -# Architecting container and microservice-based applications - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -*Microservices offer great benefits but also raise huge new challenges. Microservice architecture patterns are fundamental pillars when creating a microservice-based application.* - -Earlier in this guide, you learned basic concepts about containers and Docker. That information was the minimum you needed to get started with containers. Even though containers are enablers of, and a great fit for microservices, they aren't mandatory for a microservice architecture. Many architectural concepts in this architecture section could be applied without containers. However, this guide focuses on the intersection of both due to the already introduced importance of containers. - -Enterprise applications can be complex and are often composed of multiple services instead of a single service-based application. For those cases, you need to understand other architectural approaches, such as the microservices and certain Domain-Driven Design (DDD) patterns plus container orchestration concepts. Note that this chapter describes not just microservices on containers, but any containerized application, as well. - -## Container design principles - -In the container model, a container image instance represents a single process. By defining a container image as a process boundary, you can create primitives that can be used to scale or batch the process. - -When you design a container image, you'll see an [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) definition in the Dockerfile. This definition defines the process whose lifetime controls the lifetime of the container. When the process completes, the container lifecycle ends. 
Containers might represent long-running processes like web servers, but can also represent short-lived processes like batch jobs, which formerly might have been implemented as Azure [WebJobs](https://github.com/Azure/azure-webjobs-sdk/wiki). - -If the process fails, the container ends, and the orchestrator takes over. If the orchestrator was configured to keep five instances running and one fails, the orchestrator will create another container instance to replace the failed process. In a batch job, the process is started with parameters. When the process completes, the work is complete. This guidance drills-down on orchestrators, later on. - -You might find a scenario where you want multiple processes running in a single container. For that scenario, since there can be only one entry point per container, you could run a script within the container that launches as many programs as needed. For example, you can use [Supervisor](http://supervisord.org/) or a similar tool to take care of launching multiple processes inside a single container. However, even though you can find architectures that hold multiple processes per container, that approach isn't very common. - ->[!div class="step-by-step"] ->[Previous](../net-core-net-framework-containers/official-net-docker-images.md) ->[Next](containerize-monolithic-applications.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/logical-versus-physical-architecture.md b/docs/architecture/microservices/architect-microservice-container-applications/logical-versus-physical-architecture.md deleted file mode 100644 index 2ddb17ca12ead..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/logical-versus-physical-architecture.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: Logical architecture versus physical architecture -description: Understand the differences between Logical and physical architectures. -ms.date: 09/20/2018 ---- -# Logical architecture versus physical architecture - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -It's useful at this point to stop and discuss the distinction between logical architecture and physical architecture, and how this applies to the design of microservice-based applications. - -To begin, building microservices doesn't require the use of any specific technology. For instance, Docker containers aren't mandatory to create a microservice-based architecture. Those microservices could also be run as plain processes. Microservices is a logical architecture. - -Moreover, even when a microservice could be physically implemented as a single service, process, or container (for simplicity's sake, that's the approach taken in the initial version of [eShopOnContainers](https://aka.ms/MicroservicesArchitecture)), this parity between business microservice and physical service or container isn't necessarily required in all cases when you build a large and complex application composed of many dozens or even hundreds of services. - -This is where there's a difference between an application's logical architecture and physical architecture. The logical architecture and logical boundaries of a system do not necessarily map one-to-one to the physical or deployment architecture. It can happen, but it often doesn't. 
- -Although you might have identified certain business microservices or Bounded Contexts, it doesn't mean that the best way to implement them is always by creating a single service (such as an ASP.NET Web API) or single Docker container for each business microservice. Having a rule saying each business microservice has to be implemented using a single service or container is too rigid. - -Therefore, a business microservice or Bounded Context is a logical architecture that might coincide (or not) with physical architecture. The important point is that a business microservice or Bounded Context must be autonomous by allowing code and state to be independently versioned, deployed, and scaled. - -As Figure 4-8 shows, the catalog business microservice could be composed of several services or processes. These could be multiple ASP.NET Web API services or any other kind of services using HTTP or any other protocol. More importantly, the services could share the same data, as long as these services are cohesive with respect to the same business domain. - -![Diagram of the Catalog business microservice with physical servers.](./media/logical-versus-physical-architecture/multiple-physical-services.png) - -**Figure 4-8**. Business microservice with several physical services - -The services in the example share the same data model because the Web API service targets the same data as the Search service. So, in the physical implementation of the business microservice, you're splitting that functionality so you can scale each of those internal services up or down as needed. Maybe the Web API service usually needs more instances than the Search service, or vice versa. - -In short, the logical architecture of microservices doesn't always have to coincide with the physical deployment architecture. In this guide, whenever we mention a microservice, we mean a business or logical microservice that could map to one or more (physical) services. In most cases, this will be a single service, but it might be more. - ->[!div class="step-by-step"] ->[Previous](data-sovereignty-per-microservice.md) ->[Next](distributed-data-management.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/maintain-microservice-apis.md b/docs/architecture/microservices/architect-microservice-container-applications/maintain-microservice-apis.md deleted file mode 100644 index d8a4f03ea58a0..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/maintain-microservice-apis.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: Creating, evolving, and versioning microservice APIs and contracts -description: Create microservice APIs and contracts considering evolution and versioning because needs change. -ms.date: 01/13/2021 ---- -# Creating, evolving, and versioning microservice APIs and contracts - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -A microservice API is a contract between the service and its clients. You'll be able to evolve a microservice independently only if you do not break its API contract, which is why the contract is so important. If you change the contract, it will impact your client applications or your API Gateway. - -The nature of the API definition depends on which protocol you're using. For instance, if you're using messaging, like AMQP, the API consists of the message types. If you're using HTTP and RESTful services, the API consists of the URLs and the request and response JSON formats. 
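-
-As a minimal sketch of what such an HTTP contract looks like in ASP.NET Core (the route, the version segment, and the `OrderDto` shape are illustrative assumptions), the endpoint below, together with its JSON response shape, is the contract that clients take a dependency on:
-
-```csharp
-using Microsoft.AspNetCore.Builder;
-using Microsoft.AspNetCore.Http;
-
-var builder = WebApplication.CreateBuilder(args);
-var app = builder.Build();
-
-// The contract clients depend on: this URL (including the version segment)
-// plus the JSON shape of OrderDto. Renaming a property or changing the route
-// is a breaking change for every consumer.
-app.MapGet("/api/v1/orders/{id}", (int id) =>
-    Results.Ok(new OrderDto(id, "Pending", 42.5m)));
-
-app.Run();
-
-// Serializes as: { "id": 1, "status": "Pending", "total": 42.5 }
-public record OrderDto(int Id, string Status, decimal Total);
-```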
- -However, even if you're thoughtful about your initial contract, a service API will need to change over time. When that happens—and especially if your API is a public API consumed by multiple client applications — you typically can't force all clients to upgrade to your new API contract. You usually need to incrementally deploy new versions of a service in a way that both old and new versions of a service contract are running simultaneously. Therefore, it's important to have a strategy for your service versioning. - -When the API changes are small, like if you add attributes or parameters to your API, clients that use an older API should switch and work with the new version of the service. You might be able to provide default values for any missing attributes that are required, and the clients might be able to ignore any extra response attributes. - -However, sometimes you need to make major and incompatible changes to a service API. Because you might not be able to force client applications or services to upgrade immediately to the new version, a service must support older versions of the API for some period. If you're using an HTTP-based mechanism such as REST, one approach is to embed the API version number in the URL or into an HTTP header. Then you can decide between implementing both versions of the service simultaneously within the same service instance, or deploying different instances that each handle a version of the API. A good approach for this functionality is the [Mediator pattern](https://en.wikipedia.org/wiki/Mediator_pattern) (for example, [MediatR library](https://github.com/jbogard/MediatR)) to decouple the different implementation versions into independent handlers. - -Finally, if you're using a REST architecture, [Hypermedia](https://www.infoq.com/articles/mark-baker-hypermedia) is the best solution for versioning your services and allowing evolvable APIs. - -## Additional resources - -- **Scott Hanselman. ASP.NET Core RESTful Web API versioning made easy** \ - - -- **Versioning a RESTful web API** \ - [https://learn.microsoft.com/azure/architecture/best-practices/api-design#versioning-a-restful-web-api](/azure/architecture/best-practices/api-design#versioning-a-restful-web-api) - -- **Roy Fielding. 
Versioning, Hypermedia, and REST** \ - - ->[!div class="step-by-step"] ->[Previous](asynchronous-message-based-communication.md) ->[Next](microservices-addressability-service-registry.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/asynchronous-message-based-communication/asynchronous-event-driven-communication.png b/docs/architecture/microservices/architect-microservice-container-applications/media/asynchronous-message-based-communication/asynchronous-event-driven-communication.png deleted file mode 100644 index 1d54b1f58808a..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/asynchronous-message-based-communication/asynchronous-event-driven-communication.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/asynchronous-message-based-communication/single-receiver-message-based-communication.png b/docs/architecture/microservices/architect-microservice-container-applications/media/asynchronous-message-based-communication/single-receiver-message-based-communication.png deleted file mode 100644 index db825f2f63d78..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/asynchronous-message-based-communication/single-receiver-message-based-communication.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/communication-in-microservice-architecture/one-to-many-communication.png b/docs/architecture/microservices/architect-microservice-container-applications/media/communication-in-microservice-architecture/one-to-many-communication.png deleted file mode 100644 index d4c8101c68423..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/communication-in-microservice-architecture/one-to-many-communication.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/communication-in-microservice-architecture/request-response-comms-live-queries-updates.png b/docs/architecture/microservices/architect-microservice-container-applications/media/communication-in-microservice-architecture/request-response-comms-live-queries-updates.png deleted file mode 100644 index 710bbb87db391..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/communication-in-microservice-architecture/request-response-comms-live-queries-updates.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/communication-in-microservice-architecture/sync-vs-async-patterns-across-microservices.png b/docs/architecture/microservices/architect-microservice-container-applications/media/communication-in-microservice-architecture/sync-vs-async-patterns-across-microservices.png deleted file mode 100644 index 4a98e249533ff..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/communication-in-microservice-architecture/sync-vs-async-patterns-across-microservices.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/docker-infrastructure-monolithic-application.png 
b/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/docker-infrastructure-monolithic-application.png deleted file mode 100644 index 14bc2fdd99145..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/docker-infrastructure-monolithic-application.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/host-multiple-apps-containers.png b/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/host-multiple-apps-containers.png deleted file mode 100644 index ec74fad036478..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/host-multiple-apps-containers.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/monolithic-containerized-application.png b/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/monolithic-containerized-application.png deleted file mode 100644 index 03c5489cb32ca..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/monolithic-containerized-application.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/publish-azure-app-service-container.png b/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/publish-azure-app-service-container.png deleted file mode 100644 index cefbb64f086da..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/containerize-monolithic-applications/publish-azure-app-service-container.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/data-sovereignty-per-microservice/data-sovereignty-comparison.png b/docs/architecture/microservices/architect-microservice-container-applications/media/data-sovereignty-per-microservice/data-sovereignty-comparison.png deleted file mode 100644 index 8da7600e10cda..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/data-sovereignty-per-microservice/data-sovereignty-comparison.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/api-gateway-azure-api-management.png b/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/api-gateway-azure-api-management.png deleted file mode 100644 index 7eb114b898027..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/api-gateway-azure-api-management.png and /dev/null differ diff --git 
a/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/custom-service-api-gateway.png b/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/custom-service-api-gateway.png deleted file mode 100644 index 5c3a78871e95b..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/custom-service-api-gateway.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/multiple-custom-api-gateways.png b/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/multiple-custom-api-gateways.png deleted file mode 100644 index 228eccf414c5a..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern/multiple-custom-api-gateways.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication.png b/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication.png deleted file mode 100644 index 28a6d9302b6a7..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/direct-client-to-microservice-communication.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/distributed-data-management/indepentent-microservice-databases.png b/docs/architecture/microservices/architect-microservice-container-applications/media/distributed-data-management/indepentent-microservice-databases.png deleted file mode 100644 index f0784511478ae..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/distributed-data-management/indepentent-microservice-databases.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/docker-application-state-data/volumes-external-data-sources.png b/docs/architecture/microservices/architect-microservice-container-applications/media/docker-application-state-data/volumes-external-data-sources.png deleted file mode 100644 index 3578b12b23712..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/docker-application-state-data/volumes-external-data-sources.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/identify-microservice-domain-model-boundaries/decompose-traditional-data-models.png b/docs/architecture/microservices/architect-microservice-container-applications/media/identify-microservice-domain-model-boundaries/decompose-traditional-data-models.png deleted file mode 100644 index 34f336876c6b4..0000000000000 Binary files 
a/docs/architecture/microservices/architect-microservice-container-applications/media/identify-microservice-domain-model-boundaries/decompose-traditional-data-models.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/identify-microservice-domain-model-boundaries/identify-entities-microservice-model-boundries.png b/docs/architecture/microservices/architect-microservice-container-applications/media/identify-microservice-domain-model-boundaries/identify-entities-microservice-model-boundries.png deleted file mode 100644 index 9feb7cfd1d3e9..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/identify-microservice-domain-model-boundaries/identify-entities-microservice-model-boundries.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/logical-versus-physical-architecture/multiple-physical-services.png b/docs/architecture/microservices/architect-microservice-container-applications/media/logical-versus-physical-architecture/multiple-physical-services.png deleted file mode 100644 index 93d4334b95ea2..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/logical-versus-physical-architecture/multiple-physical-services.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/microservice-based-composite-ui-shape-layout/microservice-generate-composite-ui.png b/docs/architecture/microservices/architect-microservice-container-applications/media/microservice-based-composite-ui-shape-layout/microservice-generate-composite-ui.png deleted file mode 100644 index b744899fa48d3..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/microservice-based-composite-ui-shape-layout/microservice-generate-composite-ui.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/microservice-based-composite-ui-shape-layout/monolith-ui-consume-microservices.png b/docs/architecture/microservices/architect-microservice-container-applications/media/microservice-based-composite-ui-shape-layout/monolith-ui-consume-microservices.png deleted file mode 100644 index 9f5a3d94617fb..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/microservice-based-composite-ui-shape-layout/monolith-ui-consume-microservices.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/microservices-architecture/monolith-deployment-vs-microservice-approach.png b/docs/architecture/microservices/architect-microservice-container-applications/media/microservices-architecture/monolith-deployment-vs-microservice-approach.png deleted file mode 100644 index 3bdf55d06b60d..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/microservices-architecture/monolith-deployment-vs-microservice-approach.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/resilient-high-availability-microservices/microservice-platform.png b/docs/architecture/microservices/architect-microservice-container-applications/media/resilient-high-availability-microservices/microservice-platform.png 
deleted file mode 100644 index c71ae86d94cf4..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/resilient-high-availability-microservices/microservice-platform.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/azure-container-apps-logo.png b/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/azure-container-apps-logo.png deleted file mode 100644 index 8f41e7cf9892d..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/azure-container-apps-logo.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/azure-kubernetes-service-logo.png b/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/azure-kubernetes-service-logo.png deleted file mode 100644 index 899cb1a0e9ccb..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/azure-kubernetes-service-logo.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/composed-docker-applications-cluster.png b/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/composed-docker-applications-cluster.png deleted file mode 100644 index 70b69020c6d9e..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/composed-docker-applications-cluster.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/kubernetes-cluster-simplified-structure.png b/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/kubernetes-cluster-simplified-structure.png deleted file mode 100644 index a4366d2e22f21..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/kubernetes-cluster-simplified-structure.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/kubernetes-container-orchestration-system-logo.png b/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/kubernetes-container-orchestration-system-logo.png deleted file mode 100644 index 597e45b7715c5..0000000000000 Binary files 
a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/kubernetes-container-orchestration-system-logo.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/kubernetes-development-environment.png b/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/kubernetes-development-environment.png deleted file mode 100644 index 8d6791efc3c66..0000000000000 Binary files a/docs/architecture/microservices/architect-microservice-container-applications/media/scalable-available-multi-container-microservice-applications/kubernetes-development-environment.png and /dev/null differ diff --git a/docs/architecture/microservices/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md b/docs/architecture/microservices/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md deleted file mode 100644 index 8dc9b831962e7..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -title: Creating composite UI based on microservices -description: Microservices architecture is not only for the back end. Get a peek view at using it in the front end. -ms.date: 01/13/2021 ---- -# Creating composite UI based on microservices - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Microservices architecture often starts with the server-side handling data and logic, but, in many cases, the UI is still handled as a monolith. However, a more advanced approach, called [micro frontends](https://martinfowler.com/articles/micro-frontends.html), is to design your application UI based on microservices as well. That means having a composite UI produced by the microservices, instead of having microservices on the server and just a monolithic client app consuming the microservices. With this approach, the microservices you build can be complete with both logic and visual representation. - -Figure 4-20 shows the simpler approach of just consuming microservices from a monolithic client application. Of course, you could have an ASP.NET MVC service in between producing the HTML and JavaScript. The figure is a simplification that highlights that you have a single (monolithic) client UI consuming the microservices, which just focus on logic and data and not on the UI shape (HTML and JavaScript). - -![Diagram of a monolithic UI app connecting to microservices.](./media/microservice-based-composite-ui-shape-layout/monolith-ui-consume-microservices.png) - -**Figure 4-20**. A monolithic UI application consuming back-end microservices - -In contrast, a composite UI is precisely generated and composed by the microservices themselves. Some of the microservices drive the visual shape of specific areas of the UI. The key difference is that you have client UI components (TypeScript classes, for example) based on templates, and the data-shaping-UI ViewModel for those templates comes from each microservice. - -At client application start-up time, each of the client UI components (TypeScript classes, for example) registers itself with an infrastructure microservice capable of providing ViewModels for a given scenario. 
If the microservice changes the shape, the UI changes also. - -Figure 4-21 shows a version of this composite UI approach. This approach is simplified because you might have other microservices that are aggregating granular parts that are based on different techniques. It depends on whether you're building a traditional web approach (ASP.NET MVC) or an SPA (Single Page Application). - -![Diagram of a composite UI made up of many view models.](./media/microservice-based-composite-ui-shape-layout/microservice-generate-composite-ui.png) - -**Figure 4-21**. Example of a composite UI application shaped by back-end microservices - -Each of those UI composition microservices would be similar to a small API Gateway. But in this case, each one is responsible for a small UI area. - -A composite UI approach that's driven by microservices can be more challenging or less so, depending on what UI technologies you're using. For instance, you won't use the same techniques for building a traditional web application that you use for building an SPA or for native mobile app (as when developing Xamarin apps, which can be more challenging for this approach). - -The [eShopOnContainers](https://aka.ms/MicroservicesArchitecture) sample application uses the monolithic UI approach for multiple reasons. First, it's an introduction to microservices and containers. A composite UI is more advanced but also requires further complexity when designing and developing the UI. Second, eShopOnContainers also provides a native mobile app based on Xamarin, which would make it more complex on the client C\# side. - -However, we encourage you to use the following references to learn more about composite UI based on microservices. - -## Additional resources - -- **Micro Frontends (Martin Fowler's blog)** - - -- **Micro Frontends (Michael Geers site)** - - -- **Composite UI using ASP.NET (Particular's Workshop)** - - -- **Ruben Oostinga. The Monolithic Frontend in the Microservices Architecture** - - -- **Mauro Servienti. The secret of better UI composition** - - -- **Viktor Farcic. Including Front-End Web Components Into Microservices** - - -- **Managing Frontend in the Microservices Architecture** - - ->[!div class="step-by-step"] ->[Previous](microservices-addressability-service-registry.md) ->[Next](resilient-high-availability-microservices.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/microservices-addressability-service-registry.md b/docs/architecture/microservices/architect-microservice-container-applications/microservices-addressability-service-registry.md deleted file mode 100644 index 378bc3df1e370..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/microservices-addressability-service-registry.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: Microservices addressability and the service registry -description: Understand the role of the container image registries in the microservices architecture. -ms.date: 11/19/2021 ---- -# Microservices addressability and the service registry - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Each microservice has a unique name (URL) that's used to resolve its location. Your microservice needs to be addressable wherever it's running. If you have to think about which computer is running a particular microservice, things can go bad quickly. 
In the same way that DNS resolves a URL to a particular computer, your microservice needs to have a unique name so that its current location is discoverable. Microservices need addressable names that make them independent from the infrastructure that they're running on. This approach implies that there's an interaction between how your service is deployed and how it's discovered, because there needs to be a [service registry](https://microservices.io/patterns/service-registry.html). In the same vein, when a computer fails, the registry service must be able to indicate where the service is now running. - -The [service registry pattern](https://microservices.io/patterns/service-registry.html) is a key part of service discovery. The registry is a database containing the network locations of service instances. A service registry needs to be highly available and up-to-date. Clients could cache network locations obtained from the service registry. However, that information eventually goes out of date and clients can no longer discover service instances. So, a service registry consists of a cluster of servers that use a replication protocol to maintain consistency. - -In some microservice deployment environments (called clusters, to be covered in a later section), service discovery is built in. For example, an Azure Kubernetes Service (AKS) environment can handle service instance registration and deregistration. It also runs a proxy on each cluster host that plays the role of server-side discovery router. - -## Additional resources - -- **Chris Richardson. Pattern: Service registry** \ - - -- **Auth0. The Service Registry** \ - - -- **Gabriel Schenker. Service discovery** \ - - ->[!div class="step-by-step"] ->[Previous](maintain-microservice-apis.md) ->[Next](microservice-based-composite-ui-shape-layout.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/microservices-architecture.md b/docs/architecture/microservices/architect-microservice-container-applications/microservices-architecture.md deleted file mode 100644 index 4b9ea7019d5b4..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/microservices-architecture.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: Microservices architecture -description: .NET Microservices Architecture for Containerized .NET Applications | 30.000 feet view of Microservices architecture. -ms.date: 09/20/2018 ---- -# Microservices architecture - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -As the name implies, a microservices architecture is an approach to building a server application as a set of small services. That means a microservices architecture is mainly oriented to the back-end, although the approach is also being used for the front end. Each service runs in its own process and communicates with other processes using protocols such as HTTP/HTTPS, WebSockets, or [AMQP](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol). Each microservice implements a specific end-to-end domain or business capability within a certain context boundary, and each must be developed autonomously and be deployable independently. Finally, each microservice should own its related domain data model and domain logic (sovereignty and decentralized data management) and could be based on different data storage technologies (SQL, NoSQL) and different programming languages. - -What size should a microservice be? 
When developing a microservice, size shouldn't be the important point. Instead, the important point should be to create loosely coupled services so you have autonomy of development, deployment, and scale, for each service. Of course, when identifying and designing microservices, you should try to make them as small as possible as long as you don't have too many direct dependencies with other microservices. More important than the size of the microservice is the internal cohesion it must have and its independence from other services. - -Why a microservices architecture? In short, it provides long-term agility. Microservices enable better maintainability in complex, large, and highly-scalable systems by letting you create applications based on many independently deployable services that each have granular and autonomous lifecycles. - -As an additional benefit, microservices can scale out independently. Instead of having a single monolithic application that you must scale out as a unit, you can instead scale out specific microservices. That way, you can scale just the functional area that needs more processing power or network bandwidth to support demand, rather than scaling out other areas of the application that don't need to be scaled. That means cost savings because you need less hardware. - -![Diagram of the differences between the two deployment methods.](./media/microservices-architecture/monolith-deployment-vs-microservice-approach.png) - -**Figure 4-6**. Monolithic deployment versus the microservices approach - -As Figure 4-6 shows, in the traditional monolithic approach, the application scales by cloning the whole app in several servers/VM. In the microservices approach, functionality is segregated in smaller services, so each service can scale independently. The microservices approach allows agile changes and rapid iteration of each microservice, because you can change specific, small areas of complex, large, and scalable applications. - -Architecting fine-grained microservices-based applications enables continuous integration and continuous delivery practices. It also accelerates delivery of new functions into the application. Fine-grained composition of applications also allows you to run and test microservices in isolation, and to evolve them autonomously while maintaining clear contracts between them. As long as you don't change the interfaces or contracts, you can change the internal implementation of any microservice or add new functionality without breaking other microservices. - -The following are important aspects to enable success in going into production with a microservices-based system: - -- Monitoring and health checks of the services and infrastructure. - -- Scalable infrastructure for the services (that is, cloud and orchestrators). - -- Security design and implementation at multiple levels: authentication, authorization, secrets management, secure communication, etc. - -- Rapid application delivery, usually with different teams focusing on different microservices. - -- DevOps and CI/CD practices and infrastructure. - -Of these, only the first three are covered or introduced in this guide. The last two points, which are related to application lifecycle, are covered in the additional [Containerized Docker Application Lifecycle with Microsoft Platform and Tools](https://aka.ms/dockerlifecycleebook) e-book. - -## Additional resources - -- **Mark Russinovich. Microservices: An application revolution powered by the cloud** \ - - -- **Martin Fowler. 
Microservices** \ - - -- **Martin Fowler. Microservice Prerequisites** \ - - -- **Jimmy Nilsson. Chunk Cloud Computing** \ - - -- **Cesar de la Torre. Containerized Docker Application Lifecycle with Microsoft Platform and Tools** (downloadable e-book) \ - - ->[!div class="step-by-step"] ->[Previous](service-oriented-architecture.md) ->[Next](data-sovereignty-per-microservice.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/resilient-high-availability-microservices.md b/docs/architecture/microservices/architect-microservice-container-applications/resilient-high-availability-microservices.md deleted file mode 100644 index 561d3c3b6ce75..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/resilient-high-availability-microservices.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -title: Resiliency and high availability in microservices -description: Microservices have to be designed to withstand transient network and dependencies failures they must be resilient to achieve high availability. -ms.date: 01/13/2021 ---- -# Resiliency and high availability in microservices - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Dealing with unexpected failures is one of the hardest problems to solve, especially in a distributed system. Much of the code that developers write involves handling exceptions, and this is also where the most time is spent in testing. The problem is more involved than writing code to handle failures. What happens when the machine where the microservice is running fails? Not only do you need to detect this microservice failure (a hard problem on its own), but you also need something to restart your microservice. - -A microservice needs to be resilient to failures and to be able to restart often on another machine for availability. This resiliency also comes down to the state that was saved on behalf of the microservice, where the microservice can recover this state from, and whether the microservice can restart successfully. In other words, there needs to be resiliency in the compute capability (the process can restart at any time) as well as resilience in the state or data (no data loss, and the data remains consistent). - -The problems of resiliency are compounded during other scenarios, such as when failures occur during an application upgrade. The microservice, working with the deployment system, needs to determine whether it can continue to move forward to the newer version or instead roll back to a previous version to maintain a consistent state. Questions such as whether enough machines are available to keep moving forward and how to recover previous versions of the microservice need to be considered. This approach requires the microservice to emit health information so that the overall application and orchestrator can make these decisions. - -In addition, resiliency is related to how cloud-based systems must behave. As mentioned, a cloud-based system must embrace failures and must try to automatically recover from them. For instance, in case of network or container failures, client apps or client services must have a strategy to retry sending messages or to retry requests, since in many cases failures in the cloud are partial. The [Implementing Resilient Applications](../implement-resilient-applications/index.md) section in this guide addresses how to handle partial failure. 
It describes techniques like retries with exponential backoff or the Circuit Breaker pattern in .NET by using libraries like [Polly](https://github.com/App-vNext/Polly), which offers a large variety of policies to handle this subject. - -## Health management and diagnostics in microservices - -It may seem obvious, and it's often overlooked, but a microservice must report its health and diagnostics. Otherwise, there's little insight from an operations perspective. Correlating diagnostic events across a set of independent services and dealing with machine clock skews to make sense of the event order is challenging. In the same way that you interact with a microservice over agreed-upon protocols and data formats, there's a need for standardization in how to log health and diagnostic events that ultimately end up in an event store for querying and viewing. In a microservices approach, it's key that different teams agree on a single logging format. There needs to be a consistent approach to viewing diagnostic events in the application. - -### Health checks - -Health is different from diagnostics. Health is about the microservice reporting its current state to take appropriate actions. A good example is working with upgrade and deployment mechanisms to maintain availability. Although a service might currently be unhealthy due to a process crash or machine reboot, the service might still be operational. The last thing you need is to make this worse by performing an upgrade. The best approach is to do an investigation first or allow time for the microservice to recover. Health events from a microservice help us make informed decisions and, in effect, help create self-healing services. - -In the [Implementing health checks in ASP.NET Core services](../implement-resilient-applications/monitor-app-health.md#implement-health-checks-in-aspnet-core-services) section of this guide, we explain how to use a new ASP.NET HealthChecks library in your microservices so they can report their state to a monitoring service to take appropriate actions. - -You also have the option of using an excellent open-source library called AspNetCore.Diagnostics.HealthChecks, available on [GitHub](https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks) and as a [NuGet package](https://www.nuget.org/packages/Microsoft.AspNetCore.Diagnostics.HealthChecks/). This library also does health checks, with a twist, it handles two types of checks: - -- **Liveness**: Checks if the microservice is alive, that is, if it's able to accept requests and respond. -- **Readiness**: Checks if the microservice's dependencies (Database, queue services, etc.) are themselves ready, so the microservice can do what it's supposed to do. - -### Using diagnostics and logs event streams - -Logs provide information about how an application or service is running, including exceptions, warnings, and simple informational messages. Usually, each log is in a text format with one line per event, although exceptions also often show the stack trace across multiple lines. - -In monolithic server-based applications, you can write logs to a file on disk (a logfile) and then analyze it with any tool. Since application execution is limited to a fixed server or VM, it generally isn't too complex to analyze the flow of events. However, in a distributed application where multiple services are executed across many nodes in an orchestrator cluster, being able to correlate distributed events is a challenge. 
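-
-Tying the liveness/readiness split described in the Health checks section above to code, a minimal sketch using the built-in ASP.NET Core health-check APIs might expose two separate probe endpoints by tagging checks. The endpoint paths, tag names, and the placeholder dependency test are illustrative assumptions, not a prescribed configuration:
-
-```csharp
-using Microsoft.AspNetCore.Builder;
-using Microsoft.AspNetCore.Diagnostics.HealthChecks;
-using Microsoft.Extensions.DependencyInjection;
-using Microsoft.Extensions.Diagnostics.HealthChecks;
-
-var builder = WebApplication.CreateBuilder(args);
-
-builder.Services.AddHealthChecks()
-    // Liveness: the process is up and able to answer requests.
-    .AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "live" })
-    // Readiness: the dependencies the service needs before taking traffic.
-    .AddCheck("ordering-db", () => CanReachDatabase()
-        ? HealthCheckResult.Healthy()
-        : HealthCheckResult.Unhealthy("Database unreachable"),
-        tags: new[] { "ready" });
-
-var app = builder.Build();
-
-app.MapHealthChecks("/liveness", new HealthCheckOptions
-{
-    Predicate = registration => registration.Tags.Contains("live")
-});
-
-app.MapHealthChecks("/readiness", new HealthCheckOptions
-{
-    Predicate = registration => registration.Tags.Contains("ready")
-});
-
-app.Run();
-
-// Placeholder for a real connectivity test against the service's database.
-static bool CanReachDatabase() => true;
-```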
- -A microservice-based application should not try to store the output stream of events or logfiles by itself, and not even try to manage the routing of the events to a central place. It should be transparent, meaning that each process should just write its event stream to a standard output that underneath will be collected by the execution environment infrastructure where it's running. An example of these event stream routers is [Microsoft.Diagnostic.EventFlow](https://github.com/Azure/diagnostics-eventflow), which collects event streams from multiple sources and publishes it to output systems. These can include simple standard output for a development environment or cloud systems like [Azure Monitor](https://azure.microsoft.com/services/monitor//) and [Azure Diagnostics](/azure/azure-monitor/platform/diagnostics-extension-overview). There are also good third-party log analysis platforms and tools that can search, alert, report, and monitor logs, even in real time, like [Splunk](https://www.splunk.com/goto/Splunk_Log_Management?ac=ga_usa_log_analysis_phrase_Mar17&_kk=logs%20analysis&gclid=CNzkzIrex9MCFYGHfgodW5YOtA). - -### Orchestrators managing health and diagnostics information - -When you create a microservice-based application, you need to deal with complexity. Of course, a single microservice is simple to deal with, but dozens or hundreds of types and thousands of instances of microservices is a complex problem. It isn't just about building your microservice architecture—you also need high availability, addressability, resiliency, health, and diagnostics if you intend to have a stable and cohesive system. - -![Diagram of clusters supplying a support platform for microservices.](./media/resilient-high-availability-microservices/microservice-platform.png) - -**Figure 4-22**. A Microservice Platform is fundamental for an application's health management - -The complex problems shown in Figure 4-22 are hard to solve by yourself. Development teams should focus on solving business problems and building custom applications with microservice-based approaches. They should not focus on solving complex infrastructure problems; if they did, the cost of any microservice-based application would be huge. Therefore, there are microservice-oriented platforms, referred to as orchestrators or microservice clusters, that try to solve the hard problems of building and running a service and using infrastructure resources efficiently. This approach reduces the complexities of building applications that use a microservices approach. - -Different orchestrators might sound similar, but the diagnostics and health checks offered by each of them differ in features and state of maturity, sometimes depending on the OS platform, as explained in the next section. - -## Additional resources - -- **The Twelve-Factor App. XI. Logs: Treat logs as event streams** \ - - -- **Microsoft Diagnostic EventFlow Library** GitHub repo. \ - - -- **What is Azure Diagnostics** \ - [https://learn.microsoft.com/azure/azure-diagnostics](/azure/azure-diagnostics) - -- **Connect Windows computers to the Azure Monitor service** \ - [https://learn.microsoft.com/azure/azure-monitor/platform/agent-windows](/azure/azure-monitor/platform/agent-windows) - -- **Logging What You Mean: Using the Semantic Logging Application Block** \ - [https://learn.microsoft.com/previous-versions/msp-n-p/dn440729(v=pandp.60)](/previous-versions/msp-n-p/dn440729(v=pandp.60)) - -- **Splunk** Official site. 
\ - - -- **EventSource Class** API for events tracing for Windows (ETW) \ - [https://learn.microsoft.com/dotnet/api/system.diagnostics.tracing.eventsource](xref:System.Diagnostics.Tracing.EventSource) - ->[!div class="step-by-step"] ->[Previous](microservice-based-composite-ui-shape-layout.md) ->[Next](scalable-available-multi-container-microservice-applications.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md b/docs/architecture/microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md deleted file mode 100644 index e1c5b9073e7a8..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: Orchestrate microservices and multi-container applications for high scalability and availability -description: Discover the options to orchestrate microservices and multi-container applications for high scalability and availability and the possibilities of Azure Dev Spaces while developing Kubernetes application lifecycle. -ms.date: 11/19/2021 ---- -# Orchestrate microservices and multi-container applications for high scalability and availability - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Using orchestrators for production-ready applications is essential if your application is based on microservices or simply split across multiple containers. As introduced previously, in a microservice-based approach, each microservice owns its model and data so that it will be autonomous from a development and deployment point of view. But even if you have a more traditional application that's composed of multiple services (like SOA), you'll also have multiple containers or services comprising a single business application that need to be deployed as a distributed system. These kinds of systems are complex to scale out and manage; therefore, you absolutely need an orchestrator if you want to have a production-ready and scalable multi-container application. - -Figure 4-23 illustrates deployment into a cluster of an application composed of multiple microservices (containers). - -![Diagram showing Composed Docker applications in a cluster.](./media/scalable-available-multi-container-microservice-applications/composed-docker-applications-cluster.png) - -**Figure 4-23**. A cluster of containers - -You use one container for each service instance. Docker containers are "units of deployment" and a container is an instance of a Docker image. A host handles many containers. It looks like a logical approach. But how are you handling load-balancing, routing, and orchestrating these composed applications? - -The plain Docker Engine in single Docker hosts meets the needs of managing single image instances on one host, but it falls short when it comes to managing multiple containers deployed on multiple hosts for more complex distributed applications. In most cases, you need a management platform that will automatically start containers, scale out containers with multiple instances per image, suspend them or shut them down when needed, and ideally also control how they access resources like the network and data storage. 
-
-To go beyond the management of individual containers or simple composed apps and move toward larger enterprise applications with microservices, you must turn to orchestration and clustering platforms.
-
-From an architecture and development point of view, if you're building large enterprise applications composed of microservices, it's important to understand the following platforms and products that support advanced scenarios:
-
-**Clusters and orchestrators.** When you need to scale out applications across many Docker hosts, as when running a large microservice-based application, it's critical to be able to manage all those hosts as a single cluster by abstracting the complexity of the underlying platform. That's what the container clusters and orchestrators provide. Kubernetes is an example of an orchestrator, and is available in Azure through Azure Kubernetes Service.
-
-**Schedulers.** *Scheduling* means giving an administrator the capability to launch containers in a cluster, so schedulers typically also provide a UI for doing so. A cluster scheduler has several responsibilities: to use the cluster's resources efficiently, to set the constraints provided by the user, to efficiently load-balance containers across nodes or hosts, and to be robust against errors while providing high availability.
-
-The concepts of a cluster and a scheduler are closely related, so the products provided by different vendors often provide both sets of capabilities. The following list shows the most important platform and software choices you have for clusters and schedulers. These orchestrators are generally offered in public clouds like Azure.
-
-## Software platforms for container clustering, orchestration, and scheduling
-
-| Platform | Description |
-|:---:|---|
-| **Kubernetes**<br/>![An image of the Kubernetes logo.](./media/scalable-available-multi-container-microservice-applications/kubernetes-container-orchestration-system-logo.png) | [*Kubernetes*](https://kubernetes.io/) is an open-source product that provides functionality that ranges from cluster infrastructure and container scheduling to orchestrating capabilities. It lets you automate deployment, scaling, and operations of application containers across clusters of hosts.<br/><br/>*Kubernetes* provides a container-centric infrastructure that groups application containers into logical units for easy management and discovery.<br/><br/>*Kubernetes* is mature in Linux, less mature in Windows. |
-| **Azure Kubernetes Service (AKS)**<br/>![An image of the Azure Kubernetes Service logo.](./media/scalable-available-multi-container-microservice-applications/azure-kubernetes-service-logo.png) | [AKS](https://azure.microsoft.com/services/kubernetes-service/) is a managed Kubernetes container orchestration service in Azure that simplifies Kubernetes cluster management, deployment, and operations. |
-| **Azure Container Apps**<br/>![An image of the Azure Container Apps Service logo.](./media/scalable-available-multi-container-microservice-applications/azure-container-apps-logo.png) | [Azure Container Apps](https://azure.microsoft.com/services/container-apps/) is a managed serverless container service for building and deploying modern apps at scale. |
-
-## Using container-based orchestrators in Microsoft Azure
-
-Several cloud vendors offer Docker container support plus Docker cluster and orchestration support, including Microsoft Azure, Amazon EC2 Container Service, and Google Container Engine. Microsoft Azure provides Docker cluster and orchestrator support through Azure Kubernetes Service (AKS).
-
-## Using Azure Kubernetes Service
-
-A Kubernetes cluster pools multiple Docker hosts and exposes them as a single virtual Docker host, so you can deploy multiple containers into the cluster and scale out with any number of container instances. The cluster will handle all the complex management plumbing, like scalability, health, and so forth.
-
-AKS provides a way to simplify the creation, configuration, and management of a cluster of virtual machines in Azure that are preconfigured to run containerized applications. Using an optimized configuration of popular open-source scheduling and orchestration tools, AKS enables you to use your existing skills or draw on a large and growing body of community expertise to deploy and manage container-based applications on Microsoft Azure.
-
-Azure Kubernetes Service optimizes the configuration of popular Docker clustering open-source tools and technologies specifically for Azure. You get an open solution that offers portability for both your containers and your application configuration. You select the size, the number of hosts, and the orchestrator tools, and AKS handles everything else.
-
-![Diagram showing a Kubernetes cluster structure.](./media/scalable-available-multi-container-microservice-applications/kubernetes-cluster-simplified-structure.png)
-
-**Figure 4-24**. Kubernetes cluster's simplified structure and topology
-
-In figure 4-24, you can see the structure of a Kubernetes cluster where a master node (VM) controls most of the coordination of the cluster, and you can deploy containers to the rest of the nodes, which are managed as a single pool from an application point of view. This approach allows you to scale to thousands or even tens of thousands of containers.
-
-## Development environment for Kubernetes
-
-For the development environment, Docker announced in July 2018 that Kubernetes can also run on a single development machine (Windows 10 or macOS) by installing [Docker Desktop](https://docs.docker.com/install/). You can later deploy to the cloud (AKS) for further integration tests, as shown in figure 4-25.
-
-![Diagram showing Kubernetes on a dev machine then deployed to AKS](./media/scalable-available-multi-container-microservice-applications/kubernetes-development-environment.png)
-
-**Figure 4-25**. Running Kubernetes on a dev machine and in the cloud
-
-## Getting started with Azure Kubernetes Service (AKS)
-
-To begin using AKS, you deploy an AKS cluster from the Azure portal or by using the CLI. For more information on deploying a Kubernetes cluster in Azure, see [Deploy an Azure Kubernetes Service (AKS) cluster](/azure/aks/kubernetes-walkthrough-portal).
-
-There are no fees for any of the software installed by default as part of AKS. All default options are implemented with open-source software. AKS is available for multiple virtual machines in Azure.
You're charged only for the compute instances you choose, and the other underlying infrastructure resources consumed, such as storage and networking. There are no incremental charges for AKS itself. - -The default production deployment option for Kubernetes is to use Helm charts, which are introduced in the next section. - -## Deploy with Helm charts into Kubernetes clusters - -When deploying an application to a Kubernetes cluster, you can use the original kubectl.exe CLI tool using deployment files based on the native format (.yaml files), as already mentioned in the previous section. However, for more complex Kubernetes applications such as when deploying complex microservice-based applications, it's recommended to use [Helm](https://helm.sh/). - -Helm Charts helps you define, version, install, share, upgrade, or rollback even the most complex Kubernetes application. - -Going further, Helm usage is also recommended because other Kubernetes environments in Azure, such as [Azure Dev Spaces](/azure/dev-spaces/azure-dev-spaces) are also based on Helm charts. - -Helm is maintained by the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/) - in collaboration with Microsoft, Google, Bitnami, and the Helm contributor community. - -For more implementation information on Helm charts and Kubernetes, see the [Using Helm Charts to deploy eShopOnContainers to AKS](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Deploy-to-Azure-Kubernetes-Service-(AKS)) post. - -## Additional resources - -- **Getting started with Azure Kubernetes Service (AKS)** \ - [https://learn.microsoft.com/azure/aks/kubernetes-walkthrough-portal](/azure/aks/kubernetes-walkthrough-portal) - -- **Azure Dev Spaces** \ - [https://learn.microsoft.com/azure/dev-spaces/azure-dev-spaces](/azure/dev-spaces/azure-dev-spaces) - -- **Kubernetes** The official site. \ - - ->[!div class="step-by-step"] ->[Previous](resilient-high-availability-microservices.md) ->[Next](../docker-application-development-process/index.md) diff --git a/docs/architecture/microservices/architect-microservice-container-applications/service-oriented-architecture.md b/docs/architecture/microservices/architect-microservice-container-applications/service-oriented-architecture.md deleted file mode 100644 index 32c63caaf59ad..0000000000000 --- a/docs/architecture/microservices/architect-microservice-container-applications/service-oriented-architecture.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: Service-oriented architecture -description: Learn the fundamental differences between microservices and a Service-oriented architecture (SOA). -ms.date: 09/20/2018 ---- -# Service-oriented architecture - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Service-oriented architecture (SOA) was an overused term and has meant different things to different people. But as a common denominator, SOA means that you structure your application by decomposing it into multiple services (most commonly as HTTP services) that can be classified as different types like subsystems or tiers. - -Those services can now be deployed as Docker containers, which solves deployment issues, because all the dependencies are included in the container image. However, when you need to scale up SOA applications, you might have scalability and availability challenges if you're deploying based on single Docker hosts. 
This is where Docker clustering software or an orchestrator can help you, as explained in later sections where deployment approaches for microservices are described. - -Docker containers are useful (but not required) for both traditional service-oriented architectures and the more advanced microservices architectures. - -Microservices derive from SOA, but SOA is different from microservices architecture. Features like large central brokers, central orchestrators at the organization level, and the [Enterprise Service Bus (ESB)](https://en.wikipedia.org/wiki/Enterprise_service_bus) are typical in SOA. But in most cases, these are anti-patterns in the microservice community. In fact, some people argue that "The microservice architecture is SOA done right." - -This guide focuses on microservices, because a SOA approach is less prescriptive than the requirements and techniques used in a microservice architecture. If you know how to build a microservice-based application, you also know how to build a simpler service-oriented application. - ->[!div class="step-by-step"] ->[Previous](docker-application-state-data.md) ->[Next](microservices-architecture.md) diff --git a/docs/architecture/microservices/container-docker-introduction/docker-containers-images-registries.md b/docs/architecture/microservices/container-docker-introduction/docker-containers-images-registries.md deleted file mode 100644 index 37838ad3ff0a4..0000000000000 --- a/docs/architecture/microservices/container-docker-introduction/docker-containers-images-registries.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: Docker containers, images, and registries -description: .NET Microservices Architecture for Containerized .NET Applications | Docker containers, images, and registries -ms.date: 01/13/2021 ---- -# Docker containers, images, and registries - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -When using Docker, a developer creates an app or service and packages it and its dependencies into a container image. An image is a static representation of the app or service and its configuration and dependencies. - -To run the app or service, the app's image is instantiated to create a container, which will be running on the Docker host. Containers are initially tested in a development environment or PC. - -Developers should store images in a registry, which acts as a library of images and is needed when deploying to production orchestrators. Docker maintains a public registry via [Docker Hub](https://hub.docker.com/); other vendors provide registries for different collections of images, including [Azure Container Registry](https://azure.microsoft.com/services/container-registry/). Alternatively, enterprises can have a private registry on-premises for their own Docker images. - -Figure 2-4 shows how images and registries in Docker relate to other components. It also shows the multiple registry offerings from vendors. - -![A diagram showing the basic taxonomy in Docker.](./media/docker-containers-images-registries/taxonomy-of-docker-terms-and-concepts.png) - -**Figure 2-4**. Taxonomy of Docker terms and concepts - -The registry is like a bookshelf where images are stored and available to be pulled for building containers to run services or web apps. There are private Docker registries on-premises and on the public cloud. Docker Hub is a public registry maintained by Docker, along the Docker Trusted Registry an enterprise-grade solution, Azure offers the Azure Container Registry. 
AWS, Google, and others also have container registries. - -Putting images in a registry lets you store static and immutable application bits, including all their dependencies at a framework level. Those images can then be versioned and deployed in multiple environments and therefore provide a consistent deployment unit. - -Private image registries, either hosted on-premises or in the cloud, are recommended when: - -- Your images must not be shared publicly due to confidentiality. - -- You want to have minimum network latency between your images and your chosen deployment environment. For example, if your production environment is Azure cloud, you probably want to store your images in [Azure Container Registry](https://azure.microsoft.com/services/container-registry/) so that network latency will be minimal. In a similar way, if your production environment is on-premises, you might want to have an on-premises Docker Trusted Registry available within the same local network. - ->[!div class="step-by-step"] ->[Previous](docker-terminology.md) ->[Next](../net-core-net-framework-containers/index.md) diff --git a/docs/architecture/microservices/container-docker-introduction/docker-defined.md b/docs/architecture/microservices/container-docker-introduction/docker-defined.md deleted file mode 100644 index b57e2e5bf4db6..0000000000000 --- a/docs/architecture/microservices/container-docker-introduction/docker-defined.md +++ /dev/null @@ -1,83 +0,0 @@ ---- -title: What is Docker? -description: .NET Microservices Architecture for Containerized .NET Applications | What is Docker? -ms.date: 08/31/2018 ---- -# What is Docker? - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -[Docker](https://www.docker.com/) is an [open-source project](https://github.com/docker/docker) for automating the deployment of applications as portable, self-sufficient containers that can run on the cloud or on-premises. Docker is also a [company](https://www.docker.com/) that promotes and evolves this technology, working in collaboration with cloud, Linux, and Windows vendors, including Microsoft. - -![Diagram showing the places Docker containers can run.](./media/docker-defined/docker-containers-run-anywhere.png) - -**Figure 2-2**. Docker deploys containers at all layers of the hybrid cloud. - -Docker containers can run anywhere, on-premises in the customer datacenter, in an external service provider or in the cloud, on Azure. Docker image containers can run natively on Linux and Windows. However, Windows images can run only on Windows hosts and Linux images can run on Linux hosts and Windows hosts (using a Hyper-V Linux VM, so far), where host means a server or a VM. - -Developers can use development environments on Windows, Linux, or macOS. On the development computer, the developer runs a Docker host where Docker images are deployed, including the app and its dependencies. Developers who work on Linux or on macOS use a Docker host that is Linux based, and they can create images only for Linux containers. (Developers working on macOS can edit code or run the Docker CLI from macOS, but as of the time of this writing, containers don't run directly on macOS.) Developers who work on Windows can create images for either Linux or Windows Containers. 
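As a quick local check (a hypothetical troubleshooting step, not a required part of any workflow), a developer can ask the local Docker host which kind of containers it's currently configured to run:

```bash
# Prints "linux" or "windows", depending on the container mode of the local Docker host.
docker info --format '{{.OSType}}'
```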
- -To host containers in development environments and provide additional developer tools, Docker ships Docker Desktop for [Windows](https://hub.docker.com/editions/community/docker-ce-desktop-windows) or for [macOS](https://hub.docker.com/editions/community/docker-ce-desktop-mac). These products install the necessary VM (the Docker host) to host the containers. - -To run [Windows Containers](/virtualization/windowscontainers/about/), there are two types of runtimes: - -- Windows Server Containers provide application isolation through process and namespace isolation technology. A Windows Server Container shares a kernel with the container host and with all containers running on the host. - -- Hyper-V Containers expand on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host isn't shared with the Hyper-V Containers, providing better isolation. - -The images for these containers are created the same way and function the same. The difference is in how the container is created from the image running a Hyper-V Container requires an extra parameter. For details, see [Hyper-V Containers](/virtualization/windowscontainers/manage-containers/hyperv-container). - -## Comparing Docker containers with virtual machines - -Figure 2-3 shows a comparison between VMs and Docker containers. - -| Virtual Machines | Docker Containers | -| -----------------| ------------------| -|![Diagram showing the hardware/software stack of a traditional VM.](./media/docker-defined/virtual-machine-hardware-software.png)|![Diagram showing the hardware/software stack for Docker containers.](./media/docker-defined/docker-container-hardware-software.png)| -|Virtual machines include the application, the required libraries or binaries, and a full guest operating system. Full virtualization requires more resources than containerization. | Containers include the application and all its dependencies. However, they share the OS kernel with other containers, running as isolated processes in user space on the host operating system. (Except in Hyper-V containers, where each container runs inside of a special virtual machine per container.) | - -**Figure 2-3**. Comparison of traditional virtual machines to Docker containers - -For VMs, there are three base layers in the host server, from the bottom-up: infrastructure, Host Operating System and a Hypervisor and on top of all that each VM has its own OS and all necessary libraries. For Docker, the host server only has the infrastructure and the OS and on top of that, the container engine, that keeps container isolated but sharing the base OS services. - -Because containers require far fewer resources (for example, they don't need a full OS), they're easy to deploy and they start fast. This allows you to have higher density, meaning that it allows you to run more services on the same hardware unit, thereby reducing costs. - -As a side effect of running on the same kernel, you get less isolation than VMs. - -The main goal of an image is that it makes the environment (dependencies) the same across different deployments. This means that you can debug it on your machine and then deploy it to another machine with the same environment guaranteed. - -A container image is a way to package an app or service and deploy it in a reliable and reproducible way. You could say that Docker isn't only a technology but also a philosophy and a process. 
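A minimal sketch of that "build once, run the same bits anywhere" flow might look like the following; the registry and image names are illustrative placeholders:

```bash
# Package the app and its dependencies into an image on the dev machine.
docker build -t myregistry.azurecr.io/webapp:1.0 .
docker push myregistry.azurecr.io/webapp:1.0

# On any other Docker host (test, staging, production), pull and run the exact same image.
docker pull myregistry.azurecr.io/webapp:1.0
docker run -d -p 8080:80 myregistry.azurecr.io/webapp:1.0
```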
- -When using Docker, you won't hear developers say, "It works on my machine, why not in production?" They can simply say, "It runs on Docker", because the packaged Docker application can be executed on any supported Docker environment, and it runs the way it was intended to on all deployment targets (such as Dev, QA, staging, and production). - -## A simple analogy - -Perhaps a simple analogy can help getting the grasp of the core concept of Docker. - -Let's go back in time to the 1950s for a moment. There were no word processors, and the photocopiers were used everywhere (kind of). - -Imagine you're responsible for quickly issuing batches of letters as required, to mail them to customers, using real paper and envelopes, to be delivered physically to each customer's address (there was no email back then). - -At some point, you realize the letters are just a composition of a large set of paragraphs, which are picked and arranged as needed, according to the purpose of the letter, so you devise a system to issue letters quickly, expecting to get a hefty raise. - -The system is simple: - -1. You begin with a deck of transparent sheets containing one paragraph each. - -2. To issue a set of letters, you pick the sheets with the paragraphs you need, then you stack and align them so they look and read fine. - -3. Finally, you place the set in the photocopier and press start to produce as many letters as required. - -So, simplifying, that's the core idea of Docker. - -In Docker, each layer is the resulting set of changes that happen to the filesystem after executing a command, such as, installing a program. - -So, when you "look" at the filesystem after the layer has been copied, you see all the files, included in the layer when the program was installed. - -You can think of an image as an auxiliary read-only hard disk ready to be installed in a "computer" where the operating system is already installed. - -Similarly, you can think of a container as the "computer" with the image hard disk installed. The container, just like a computer, can be powered on or off. - ->[!div class="step-by-step"] ->[Previous](index.md) ->[Next](docker-terminology.md) diff --git a/docs/architecture/microservices/container-docker-introduction/docker-terminology.md b/docs/architecture/microservices/container-docker-introduction/docker-terminology.md deleted file mode 100644 index bc3d2458ed9c8..0000000000000 --- a/docs/architecture/microservices/container-docker-introduction/docker-terminology.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: Docker terminology -description: .NET Microservices Architecture for Containerized .NET Applications | Docker terminology -ms.date: 01/13/2021 ---- -# Docker terminology - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -This section lists terms and definitions you should be familiar with before getting deeper into Docker. For further definitions, see the extensive [glossary](https://docs.docker.com/glossary/) provided by Docker. - -**Container image**: A package with all the dependencies and information needed to create a container. An image includes all the dependencies (such as frameworks) plus deployment and execution configuration to be used by a container runtime. Usually, an image derives from multiple base images that are layers stacked on top of each other to form the container's filesystem. An image is immutable once it has been created. - -**Dockerfile**: A text file that contains instructions for building a Docker image. 
It's like a batch script, the first line states the base image to begin with and then follow the instructions to install required programs, copy files, and so on, until you get the working environment you need. - -**Build**: The action of building a container image based on the information and context provided by its Dockerfile, plus additional files in the folder where the image is built. You can build images with the following Docker command: - -```bash -docker build -``` - -**Container**: An instance of a Docker image. A container represents the execution of a single application, process, or service. It consists of the contents of a Docker image, an execution environment, and a standard set of instructions. When scaling a service, you create multiple instances of a container from the same image. Or a batch job can create multiple containers from the same image, passing different parameters to each instance. - -**Volumes**: Offer a writable filesystem that the container can use. Since images are read-only but most programs need to write to the filesystem, volumes add a writable layer, on top of the container image, so the programs have access to a writable filesystem. The program doesn't know it's accessing a layered filesystem, it's just the filesystem as usual. Volumes live in the host system and are managed by Docker. - -**Tag**: A mark or label you can apply to images so that different images or versions of the same image (depending on the version number or the target environment) can be identified. - -**Multi-stage Build**: Is a feature, since Docker 17.05 or higher, that helps to reduce the size of the final images. For example, a large base image, containing the SDK can be used for compiling and publishing and then a small runtime-only base image can be used to host the application. - -**Repository (repo)**: A collection of related Docker images, labeled with a tag that indicates the image version. Some repos contain multiple variants of a specific image, such as an image containing SDKs (heavier), an image containing only runtimes (lighter), etc. Those variants can be marked with tags. A single repo can contain platform variants, such as a Linux image and a Windows image. - -**Registry**: A service that provides access to repositories. The default registry for most public images is [Docker Hub](https://hub.docker.com/) (owned by Docker as an organization). A registry usually contains repositories from multiple teams. Companies often have private registries to store and manage images they've created. Azure Container Registry is another example. - -**Multi-arch image**: For multi-architecture (or [multi-platform](https://docs.docker.com/build/building/multi-platform/)), it's a Docker feature that simplifies the selection of the appropriate image, according to the platform where Docker is running. For example, when a Dockerfile requests a base image **FROM mcr.microsoft.com/dotnet/sdk:8.0** from the registry, it actually gets **8.0-nanoserver-ltsc2022**, **8.0-nanoserver-1809** or **8.0-bullseye-slim**, depending on the operating system and version where Docker is running. - -**Docker Hub**: A public registry to upload images and work with them. Docker Hub provides Docker image hosting, public or private registries, build triggers and web hooks, and integration with GitHub and Bitbucket. - -**Azure Container Registry**: A public resource for working with Docker images and its components in Azure. 
This provides a registry that's close to your deployments in Azure and that gives you control over access, making it possible to use your Azure Active Directory groups and permissions. - -**Docker Trusted Registry (DTR)**: A Docker registry service (from Docker) that can be installed on-premises so it lives within the organization's datacenter and network. It's convenient for private images that should be managed within the enterprise. Docker Trusted Registry is included as part of the Docker Datacenter product. - -**Docker Desktop**: Development tools for Windows and macOS for building, running, and testing containers locally. Docker Desktop for Windows provides development environments for both Linux and Windows Containers. The Linux Docker host on Windows is based on a [Hyper-V](https://www.microsoft.com/cloud-platform/server-virtualization) virtual machine. The host for Windows Containers is directly based on Windows. Docker Desktop for Mac is based on the Apple Hypervisor framework and the [xhyve hypervisor](https://github.com/mist64/xhyve), which provides a Linux Docker host virtual machine on macOS. Docker Desktop for Windows and for Mac replaces Docker Toolbox, which was based on Oracle VirtualBox. - -**Compose**: A command-line tool and YAML file format with metadata for defining and running multi-container applications. You define a single application based on multiple images with one or more .yml files that can override values depending on the environment. After you've created the definitions, you can deploy the whole multi-container application with a single command (docker-compose up) that creates a container per image on the Docker host. - -**Cluster**: A collection of Docker hosts exposed as if it were a single virtual Docker host, so that the application can scale to multiple instances of the services spread across multiple hosts within the cluster. Docker clusters can be created with Kubernetes, Azure Service Fabric, Docker Swarm and Mesosphere DC/OS. - -**Orchestrator**: A tool that simplifies the management of clusters and Docker hosts. Orchestrators enable you to manage their images, containers, and hosts through a command-line interface (CLI) or a graphical UI. You can manage container networking, configurations, load balancing, service discovery, high availability, Docker host configuration, and more. An orchestrator is responsible for running, distributing, scaling, and healing workloads across a collection of nodes. Typically, orchestrator products are the same products that provide cluster infrastructure, like Kubernetes and Azure Service Fabric, among other offerings in the market. 
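To tie several of these terms together, the following hypothetical sequence (image, registry, and volume names are placeholders) shows how they map to everyday Docker CLI commands:

```bash
docker build -t catalog-api:1.0 .                                    # Dockerfile + build -> tagged image
docker tag catalog-api:1.0 myregistry.azurecr.io/catalog-api:1.0     # retag for a repository in a registry
docker push myregistry.azurecr.io/catalog-api:1.0                    # push the image to the registry
docker volume create catalog-data                                    # create a writable volume
docker run -d --name catalog -v catalog-data:/data \
    myregistry.azurecr.io/catalog-api:1.0                            # instantiate the image as a container
```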
- ->[!div class="step-by-step"] ->[Previous](docker-defined.md) ->[Next](docker-containers-images-registries.md) diff --git a/docs/architecture/microservices/container-docker-introduction/index.md b/docs/architecture/microservices/container-docker-introduction/index.md deleted file mode 100644 index 86ffc9b9c4a8f..0000000000000 --- a/docs/architecture/microservices/container-docker-introduction/index.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: Introduction to Containers and Docker -description: .NET Microservices Architecture for Containerized .NET Applications | Introduction to Containers and Docker -ms.date: 01/13/2021 ---- -# Introduction to Containers and Docker - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Containerization is an approach to software development in which an application or service, its dependencies, and its configuration (abstracted as deployment manifest files) are packaged together as a container image. The containerized application can be tested as a unit and deployed as a container image instance to the host operating system (OS). - -Just as shipping containers allow goods to be transported by ship, train, or truck regardless of the cargo inside, software containers act as a standard unit of software deployment that can contain different code and dependencies. Containerizing software this way enables developers and IT professionals to deploy them across environments with little or no modification. - -Containers also isolate applications from each other on a shared OS. Containerized applications run on top of a container host that in turn runs on the OS (Linux or Windows). Containers therefore have a significantly smaller footprint than virtual machine (VM) images. - -Each container can run a whole web application or a service, as shown in Figure 2-1. In this example, Docker host is a container host, and App1, App2, Svc 1, and Svc 2 are containerized applications or services. - -![Diagram showing four containers running in a VM or a server.](./media/index/multiple-containers-single-host.png) - -**Figure 2-1**. Multiple containers running on a container host - -Another benefit of containerization is scalability. You can scale out quickly by creating new containers for short-term tasks. From an application point of view, instantiating an image (creating a container) is similar to instantiating a process like a service or a web app. For reliability, however, when you run multiple instances of the same image across multiple host servers, you typically want each container (image instance) to run in a different host server or VM in different fault domains. - -In short, containers offer the benefits of isolation, portability, agility, scalability, and control across the whole application lifecycle workflow. The most important benefit is the environment's isolation provided between Dev and Ops. 
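Returning to the scalability benefit mentioned above: scaling out can be as simple as starting more containers from the same image (the image and container names below are purely illustrative):

```bash
# Each "docker run" instantiates an independent container, much like starting a new process.
docker run -d --name worker-1 myrepo/batch-worker:1.0
docker run -d --name worker-2 myrepo/batch-worker:1.0
docker run -d --name worker-3 myrepo/batch-worker:1.0
```

In production, an orchestrator would place those instances on different hosts or fault domains, as discussed in later chapters.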
- ->[!div class="step-by-step"] ->[Previous](../index.md) ->[Next](docker-defined.md) diff --git a/docs/architecture/microservices/container-docker-introduction/media/docker-containers-images-registries/taxonomy-of-docker-terms-and-concepts.png b/docs/architecture/microservices/container-docker-introduction/media/docker-containers-images-registries/taxonomy-of-docker-terms-and-concepts.png deleted file mode 100644 index b9b235098511b..0000000000000 Binary files a/docs/architecture/microservices/container-docker-introduction/media/docker-containers-images-registries/taxonomy-of-docker-terms-and-concepts.png and /dev/null differ diff --git a/docs/architecture/microservices/container-docker-introduction/media/docker-defined/docker-container-hardware-software.png b/docs/architecture/microservices/container-docker-introduction/media/docker-defined/docker-container-hardware-software.png deleted file mode 100644 index 3513aa70a03f7..0000000000000 Binary files a/docs/architecture/microservices/container-docker-introduction/media/docker-defined/docker-container-hardware-software.png and /dev/null differ diff --git a/docs/architecture/microservices/container-docker-introduction/media/docker-defined/docker-containers-run-anywhere.png b/docs/architecture/microservices/container-docker-introduction/media/docker-defined/docker-containers-run-anywhere.png deleted file mode 100644 index 5aa41d252114c..0000000000000 Binary files a/docs/architecture/microservices/container-docker-introduction/media/docker-defined/docker-containers-run-anywhere.png and /dev/null differ diff --git a/docs/architecture/microservices/container-docker-introduction/media/docker-defined/virtual-machine-hardware-software.png b/docs/architecture/microservices/container-docker-introduction/media/docker-defined/virtual-machine-hardware-software.png deleted file mode 100644 index 53f6dec67af81..0000000000000 Binary files a/docs/architecture/microservices/container-docker-introduction/media/docker-defined/virtual-machine-hardware-software.png and /dev/null differ diff --git a/docs/architecture/microservices/container-docker-introduction/media/index/multiple-containers-single-host.png b/docs/architecture/microservices/container-docker-introduction/media/index/multiple-containers-single-host.png deleted file mode 100644 index fadded84dfe55..0000000000000 Binary files a/docs/architecture/microservices/container-docker-introduction/media/index/multiple-containers-single-host.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/docker-app-development-workflow.md b/docs/architecture/microservices/docker-application-development-process/docker-app-development-workflow.md deleted file mode 100644 index 41301079a5f69..0000000000000 --- a/docs/architecture/microservices/docker-application-development-process/docker-app-development-workflow.md +++ /dev/null @@ -1,573 +0,0 @@ ---- -title: Development workflow for Docker apps -description: Learn details of the workflow for developing Docker-based applications. Optimize Dockerfiles and use the simplified workflow available in Visual Studio. -ms.date: 09/10/2024 ---- -# Development workflow for Docker apps - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -The application development life cycle starts at your computer, as a developer, where you code the application using your preferred language and test it locally. 
With this workflow, no matter which language, framework, and platform you choose, you're always developing and testing Docker containers, but doing so locally. - -Each container (an instance of a Docker image) includes the following components: - -- An operating system selection, for example, a Linux distribution, Windows Nano Server, or Windows Server Core. -- Files added during development, for example, source code and application binaries. -- Configuration information, such as environment settings and dependencies. - -## Workflow for developing Docker container-based applications - -This section describes the *inner-loop* development workflow for Docker container-based applications. The inner-loop workflow means it's not considering the broader DevOps workflow, which can include up to production deployment, and just focuses on the development work done on the developer's computer. The initial steps to set up the environment aren't included, since those steps are done only once. - -An application is composed of your own services plus additional libraries (dependencies). The following are the basic steps you usually take when building a Docker application, as illustrated in Figure 5-1. - -:::image type="complex" source="./media/docker-app-development-workflow/life-cycle-containerized-apps-docker-cli.png" alt-text="Diagram showing the seven steps it takes to create a containerized app."::: -The development process for Docker apps: 1 - Code your App, 2 - Write Dockerfile/s, 3 - Create images defined at Dockerfile/s, 4 - (optional) Compose services in the docker-compose.yml file, 5 - Run container or docker-compose app, 6 - Test your app or microservices, 7 - Push to repo and repeat. -:::image-end::: - -**Figure 5-1.** Step-by-step workflow for developing Docker containerized apps - -In this section, this whole process is detailed and every major step is explained by focusing on a Visual Studio environment. - -When you're using an editor/CLI development approach (for example, Visual Studio Code plus Docker CLI on macOS or Windows), you need to know every step, generally in more detail than if you're using Visual Studio. For more information about working in a CLI environment, see the e-book [Containerized Docker Application lifecycle with Microsoft Platforms and Tools](https://aka.ms/dockerlifecycleebook/). - -When you're using Visual Studio 2022, many of those steps are handled for you, which dramatically improves your productivity. This is especially true when you're using Visual Studio 2022 and targeting multi-container applications. For instance, with just one mouse click, Visual Studio adds the `Dockerfile` and `docker-compose.yml` file to your projects with the configuration for your application. When you run the application in Visual Studio, it builds the Docker image and runs the multi-container application directly in Docker; it even allows you to debug several containers at once. These features will boost your development speed. - -However, just because Visual Studio makes those steps automatic doesn't mean that you don't need to know what's going on underneath with Docker. Therefore, the following guidance details every step. - -![Image for Step 1.](./media/docker-app-development-workflow/step-1-code-your-app.png) - -## Step 1. Start coding and create your initial application or service baseline - -Developing a Docker application is similar to the way you develop an application without Docker. 
The difference is that while developing for Docker, you're deploying and testing your application or services running within Docker containers in your local environment (either a Linux VM setup by Docker or directly Windows if using Windows Containers). - -### Set up your local environment with Visual Studio - -To begin, make sure you have [Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/) for Windows installed, as explained in the following instructions: - -[Get started with Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/) - -In addition, you need Visual Studio 2022 version 17.0, with the **.ASP.NET and web development** workload installed, as shown in Figure 5-2. - -![Screenshot of the .NET Core cross-platform development selection.](./media/docker-app-development-workflow/dotnet-core-cross-platform-development.png) - -**Figure 5-2**. Selecting the **ASP.NET and web development** workload during Visual Studio 2022 setup - -You can start coding your application in plain .NET (usually in .NET Core or later if you're planning to use containers) even before enabling Docker in your application and deploying and testing in Docker. However, it is recommended that you start working on Docker as soon as possible, because that will be the real environment and any issues can be discovered as soon as possible. This is encouraged because Visual Studio makes it so easy to work with Docker that it almost feels transparent—the best example when debugging multi-container applications from Visual Studio. - -### Additional resources - -- **Get started with Docker Desktop for Windows** \ - - -- **Visual Studio 2022** \ - [https://visualstudio.microsoft.com/downloads/](https://visualstudio.microsoft.com/downloads/) - -![Image for Step 2.](./media/docker-app-development-workflow/step-2-write-dockerfile.png) - -## Step 2. Create a Dockerfile related to an existing .NET base image - -You need a Dockerfile for each custom image you want to build; you also need a Dockerfile for each container to be deployed, whether you deploy automatically from Visual Studio or manually using the Docker CLI (docker run and docker-compose commands). If your application contains a single custom service, you need a single Dockerfile. If your application contains multiple services (as in a microservices architecture), you need one Dockerfile for each service. - -The Dockerfile is placed in the root folder of your application or service. It contains the commands that tell Docker how to set up and run your application or service in a container. You can manually create a Dockerfile in code and add it to your project along with your .NET dependencies. - -With Visual Studio and its tools for Docker, this task requires only a few mouse clicks. When you create a new project in Visual Studio 2022, there's an option named **Enable Docker Support**, as shown in Figure 5-3. - -![Screenshot showing Enable Docker Support check box.](./media/docker-app-development-workflow/enable-docker-support-check-box.png) - -**Figure 5-3**. Enabling Docker Support when creating a new ASP.NET Core project in Visual Studio 2022 - -You can also enable Docker support on an existing ASP.NET Core web app project by right-clicking the project in **Solution Explorer** and selecting **Add** > **Docker Support...**, as shown in Figure 5-4. - -![Screenshot showing the Docker Support option in the Add menu.](./media/docker-app-development-workflow/add-docker-support-option.png) - -**Figure 5-4**. 
Enabling Docker support in an existing Visual Studio 2022 project - -This action adds a *Dockerfile* to the project with the required configuration, and is only available on ASP.NET Core projects. - -In a similar fashion, Visual Studio can also add a `docker-compose.yml` file for the whole solution with the option **Add > Container Orchestrator Support...**. In step 4, we'll explore this option in greater detail. - -### Using an existing official .NET Docker image - -You usually build a custom image for your container on top of a base image you get from an official repository like the [Docker Hub](https://hub.docker.com/) registry. That's precisely what happens under the covers when you enable Docker support in Visual Studio. Your Dockerfile will use an existing `dotnet/core/aspnet` image. - -Earlier we explained which Docker images and repos you can use, depending on the framework and OS you've chosen. For instance, if you want to use ASP.NET Core (Linux or Windows), the image to use is `mcr.microsoft.com/dotnet/aspnet:8.0`. Therefore, you just need to specify what base Docker image you'll use for your container. You do that by adding `FROM mcr.microsoft.com/dotnet/aspnet:8.0` to your Dockerfile. This is automatically performed by Visual Studio, but if you were to update the version, you update this value. - -Using an official .NET image repository from Docker Hub with a version number ensures that the same language features are available on all machines (including development, testing, and production). - -The following example shows a sample Dockerfile for an ASP.NET Core container. - -```dockerfile -FROM mcr.microsoft.com/dotnet/aspnet:8.0 -ARG source -WORKDIR /app -EXPOSE 80 -COPY ${source:-obj/Docker/publish} . -ENTRYPOINT ["dotnet", " MySingleContainerWebApp.dll "] -``` - -In this case, the image is based on version 8.0 of the official ASP.NET Core Docker image (multi-arch for Linux and Windows). This is the setting `FROM mcr.microsoft.com/dotnet/aspnet:8.0`. (For more information about this base image, see the [ASP.NET Core Docker Image](https://hub.docker.com/_/microsoft-dotnet-aspnet/) page.) In the Dockerfile, you also need to instruct Docker to listen on the TCP port you will use at runtime (in this case, port 80, as configured with the EXPOSE setting). - -You can specify additional configuration settings in the Dockerfile, depending on the language and framework you're using. For instance, the ENTRYPOINT line with `["dotnet", "MySingleContainerWebApp.dll"]` tells Docker to run a .NET application. If you're using the SDK and the .NET CLI (dotnet CLI) to build and run the .NET application, this setting would be different. The bottom line is that the ENTRYPOINT line and other settings will be different depending on the language and platform you choose for your application. - -### Additional resources - -- **Building Docker Images for ASP.NET Core Applications** \ - [https://learn.microsoft.com/dotnet/core/docker/building-net-docker-images](/aspnet/core/host-and-deploy/docker/building-net-docker-images) - -- **Building container images**. In the official Docker documentation.\ - - -- **Staying up-to-date with .NET Container Images** \ - - -- **Using .NET and Docker Together - DockerCon 2018 Update** \ - - -### Using multi-arch image repositories - -A single repo can contain platform variants, such as a Linux image and a Windows image. This feature allows vendors like Microsoft (base image creators) to create a single repo to cover multiple platforms (that is, Linux and Windows). 
For example, the [.NET](https://hub.docker.com/_/microsoft-dotnet/) repository available in the Docker Hub registry provides support for Linux and Windows Nano Server by using the same repo name. - -If you specify a tag, targeting a platform that is explicit like in the following cases: - -- `mcr.microsoft.com/dotnet/aspnet:8.0-bullseye-slim` \ - Targets: .NET 8 runtime-only on Linux - -- `mcr.microsoft.com/dotnet/aspnet:8.0-nanoserver-ltsc2022` \ - Targets: .NET 8 runtime-only on Windows Nano Server - -But, if you specify the same image name, even with the same tag, the multi-arch images (like the `aspnet` image) will use the Linux or Windows version depending on the Docker host OS you're deploying, as shown in the following example: - -- `mcr.microsoft.com/dotnet/aspnet:8.0` \ - Multi-arch: .NET 8 runtime-only on Linux or Windows Nano Server depending on the Docker host OS - -This way, when you pull an image from a Windows host, it will pull the Windows variant, and pulling the same image name from a Linux host will pull the Linux variant. - -### Multi-stage builds in Dockerfile - -The Dockerfile is similar to a batch script. Similar to what you would do if you had to set up the machine from the command line. - -It starts with a base image that sets up the initial context, it's like the startup filesystem, that sits on top of the host OS. It's not an OS, but you can think of it like "the" OS inside the container. - -The execution of every command line creates a new layer on the filesystem with the changes from the previous one, so that, when combined, produce the resulting filesystem. - -Since every new layer "rests" on top of the previous one and the resulting image size increases with every command, images can get very large if they have to include, for example, the SDK needed to build and publish an application. - -This is where multi-stage builds get into the plot (from Docker 17.05 and higher) to do their magic. - -The core idea is that you can separate the Dockerfile execution process in stages, where a stage is an initial image followed by one or more commands, and the last stage determines the final image size. - -In short, multi-stage builds allow splitting the creation in different "phases" and then assemble the final image taking only the relevant directories from the intermediate stages. The general strategy to use this feature is: - -1. Use a base SDK image (doesn't matter how large), with everything needed to build and publish the application to a folder and then - -2. Use a base, small, runtime-only image and copy the publishing folder from the previous stage to produce a small final image. - -Probably the best way to understand multi-stage is going through a Dockerfile in detail, line by line, so let's begin with the initial Dockerfile created by Visual Studio when adding Docker support to a project and will get into some optimizations later. 
- -The initial Dockerfile might look something like this: - -```dockerfile - 1 FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base - 2 WORKDIR /app - 3 EXPOSE 80 - 4 - 5 FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build - 6 WORKDIR /src - 7 COPY src/Services/Catalog/Catalog.API/Catalog.API.csproj … - 8 COPY src/BuildingBlocks/HealthChecks/src/Microsoft.AspNetCore.HealthChecks … - 9 COPY src/BuildingBlocks/HealthChecks/src/Microsoft.Extensions.HealthChecks … -10 COPY src/BuildingBlocks/EventBus/IntegrationEventLogEF/ … -11 COPY src/BuildingBlocks/EventBus/EventBus/EventBus.csproj … -12 COPY src/BuildingBlocks/EventBus/EventBusRabbitMQ/EventBusRabbitMQ.csproj … -13 COPY src/BuildingBlocks/EventBus/EventBusServiceBus/EventBusServiceBus.csproj … -14 COPY src/BuildingBlocks/WebHostCustomization/WebHost.Customization … -15 COPY src/BuildingBlocks/HealthChecks/src/Microsoft.Extensions … -16 COPY src/BuildingBlocks/HealthChecks/src/Microsoft.Extensions … -17 RUN dotnet restore src/Services/Catalog/Catalog.API/Catalog.API.csproj -18 COPY . . -19 WORKDIR /src/src/Services/Catalog/Catalog.API -20 RUN dotnet build Catalog.API.csproj -c Release -o /app -21 -22 FROM build AS publish -23 RUN dotnet publish Catalog.API.csproj -c Release -o /app -24 -25 FROM base AS final -26 WORKDIR /app -27 COPY --from=publish /app . -28 ENTRYPOINT ["dotnet", "Catalog.API.dll"] -``` - -And these are the details, line by line: - -- **Line #1:** Begin a stage with a "small" runtime-only base image, call it **base** for reference. - -- **Line #2:** Create the **/app** directory in the image. - -- **Line #3:** Expose port **80**. - -- **Line #5:** Begin a new stage with the "large" image for building/publishing. Call it **build** for reference. - -- **Line #6:** Create directory **/src** in the image. - -- **Line #7:** Up to line 16, copy referenced **.csproj** project files to be able to restore packages later. - -- **Line #17:** Restore packages for the **Catalog.API** project and the referenced projects. - -- **Line #18:** Copy **all directory tree for the solution** (except the files/directories included in the **.dockerignore** file) to the **/src** directory in the image. - -- **Line #19:** Change the current folder to the **Catalog.API** project. - -- **Line #20:** Build the project (and other project dependencies) and output to the **/app** directory in the image. - -- **Line #22:** Begin a new stage continuing from the build. Call it **publish** for reference. - -- **Line #23:** Publish the project (and dependencies) and output to the **/app** directory in the image. - -- **Line #25:** Begin a new stage continuing from **base** and call it **final**. - -- **Line #26:** Change the current directory to **/app**. - -- **Line #27:** Copy the **/app** directory from stage **publish** to the current directory. - -- **Line #28:** Define the command to run when the container is started. - -Now let's explore some optimizations to improve the whole process performance that, in the case of eShopOnContainers, means about 22 minutes or more to build the complete solution in Linux containers. - -You'll take advantage of Docker's layer cache feature, which is quite simple: if the base image and the commands are the same as some previously executed, it can just use the resulting layer without the need to execute the commands, thus saving some time. 
- -So, let's focus on the **build** stage, lines 5-6 are mostly the same, but lines 7-17 are different for every service from eShopOnContainers, so they have to execute every single time, however if you changed lines 7-16 to: - -```dockerfile -COPY . . -``` - -Then it would be just the same for every service, it would copy the whole solution and would create a larger layer but: - -1. The copy process would only be executed the first time (and when rebuilding if a file is changed) and would use the cache for all other services and - -2. Since the larger image occurs in an intermediate stage, it doesn't affect the final image size. - -The next significant optimization involves the `restore` command executed in line 17, which is also different for every service of eShopOnContainers. If you change that line to just: - -```dockerfile -RUN dotnet restore -``` - -It would restore the packages for the whole solution, but then again, it would do it just once, instead of the 15 times with the current strategy. - -However, `dotnet restore` only runs if there's a single project or solution file in the folder, so achieving this is a bit more complicated and the way to solve it, without getting into too many details, is this: - -1. Add the following lines to **.dockerignore**: - - - `*.sln`, to ignore all solution files in the main folder tree - - - `!eShopOnContainers-ServicesAndWebApps.sln`, to include only this solution file. - -2. Include the `/ignoreprojectextensions:.dcproj` argument to `dotnet restore`, so it also ignores the docker-compose project and only restores the packages for the eShopOnContainers-ServicesAndWebApps solution. - -For the final optimization, it just happens that line 20 is redundant, as line 23 also builds the application and comes, in essence, right after line 20, so there goes another time-consuming command. - -The resulting file is then: - -```dockerfile - 1 FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base - 2 WORKDIR /app - 3 EXPOSE 80 - 4 - 5 FROM mcr.microsoft.com/dotnet/sdk:8.0 AS publish - 6 WORKDIR /src - 7 COPY . . - 8 RUN dotnet restore /ignoreprojectextensions:.dcproj - 9 WORKDIR /src/src/Services/Catalog/Catalog.API -10 RUN dotnet publish Catalog.API.csproj -c Release -o /app -11 -12 FROM base AS final -13 WORKDIR /app -14 COPY --from=publish /app . -15 ENTRYPOINT ["dotnet", "Catalog.API.dll"] -``` - -### Creating your base image from scratch - -You can create your own Docker base image from scratch. This scenario is not recommended for someone who is starting with Docker, but if you want to set the specific bits of your own base image, you can do so. - -### Additional resources - -- **Multi-arch .NET Core images**.\ - - -- **Create a base image**. Official Docker documentation.\ - - -![Image for Step 3.](./media/docker-app-development-workflow/step-3-create-dockerfile-defined-images.png) - -## Step 3. Create your custom Docker images and embed your application or service in them - -For each service in your application, you need to create a related image. If your application is made up of a single service or web application, you just need a single image. - -Note that the Docker images are built automatically for you in Visual Studio. The following steps are only needed for the editor/CLI workflow and explained for clarity about what happens underneath. - -You, as a developer, need to develop and test locally until you push a completed feature or change to your source control system (for example, to GitHub). 
This means that you need to create the Docker images and deploy containers to a local Docker host (Windows or Linux VM) and run, test, and debug against those local containers. - -To create a custom image in your local environment by using Docker CLI and your Dockerfile, you can use the docker build command, as in Figure 5-5. - -![Screenshot showing the console output of the docker build command.](./media/docker-app-development-workflow/run-docker-build-command.png) - -**Figure 5-5**. Creating a custom Docker image - -Optionally, instead of directly running docker build from the project folder, you can first generate a deployable folder with the required .NET libraries and binaries by running `dotnet publish`, and then use the `docker build` command. - -This will create a Docker image with the name `cesardl/netcore-webapi-microservice-docker:first`. In this case, `:first` is a tag that represents a specific version. You can repeat this step for each custom image you need to create for your composed Docker application. - -When an application is made of multiple containers (that is, it is a multi-container application), you can also use the `docker-compose up --build` command to build all the related images with a single command by using the metadata exposed in the related docker-compose.yml files. - -You can find the existing images in your local repository by using the docker images command, as shown in Figure 5-6. - -![Console output from command docker images, showing existing images.](./media/docker-app-development-workflow/view-existing-images-with-docker-images.png) - -**Figure 5-6.** Viewing existing images using the docker images command - -### Creating Docker images with Visual Studio - -When you use Visual Studio to create a project with Docker support, you don't explicitly create an image. Instead, the image is created for you when you press F5 (or Ctrl+F5) to run the dockerized application or service. This step is automatic in Visual Studio and you won't see it happen, but it's important that you know what's going on underneath. - -![Image for the optional Step 4.](./media/docker-app-development-workflow/step-4-define-services-docker-compose-yml.png) - -## Step 4. Define your services in docker-compose.yml when building a multi-container Docker application - -The [docker-compose.yml](https://docs.docker.com/compose/compose-file/) file lets you define a set of related services to be deployed as a composed application with deployment commands. It also configures its dependency relations and runtime configuration. 
- -To use a docker-compose.yml file, you need to create the file in your main or root solution folder, with content similar to that in the following example: - -```yml -version: '3.4' - -services: - - webmvc: - image: eshop/web - environment: - - CatalogUrl=http://catalog-api - - OrderingUrl=http://ordering-api - ports: - - "80:80" - depends_on: - - catalog-api - - ordering-api - - catalog-api: - image: eshop/catalog-api - environment: - - ConnectionString=Server=sqldata;Port=1433;Database=CatalogDB;… - ports: - - "81:80" - depends_on: - - sqldata - - ordering-api: - image: eshop/ordering-api - environment: - - ConnectionString=Server=sqldata;Database=OrderingDb;… - ports: - - "82:80" - extra_hosts: - - "CESARDLBOOKVHD:10.0.75.1" - depends_on: - - sqldata - - sqldata: - image: mcr.microsoft.com/mssql/server:latest - environment: - - SA_PASSWORD=[PLACEHOLDER] - - ACCEPT_EULA=Y - ports: - - "5433:1433" -``` - -[!INCLUDE [managed-identities](../../../includes/managed-identities.md)] - -This docker-compose.yml file is a simplified and merged version. It contains static configuration data for each container (like the name of the custom image), which is always required, and configuration information that might depend on the deployment environment, like the connection string. In later sections, you will learn how to split the docker-compose.yml configuration into multiple docker-compose files and override values depending on the environment and execution type (debug or release). - -The docker-compose.yml file example defines four services: the `webmvc` service (a web application), two microservices (`ordering-api` and `basket-api`), and one data source container, `sqldata`, based on SQL Server for Linux running as a container. Each service will be deployed as a container, so a Docker image is required for each. - -The docker-compose.yml file specifies not only what containers are being used, but how they are individually configured. For instance, the `webmvc` container definition in the .yml file: - -- Uses a pre-built `eshop/web:latest` image. However, you could also configure the image to be built as part of the docker-compose execution with an additional configuration based on a build: section in the docker-compose file. - -- Initializes two environment variables (CatalogUrl and OrderingUrl). - -- Forwards the exposed port 80 on the container to the external port 80 on the host machine. - -- Links the web app to the catalog and ordering service with the depends_on setting. This causes the service to wait until those services are started. - -We will revisit the docker-compose.yml file in a later section when we cover how to implement microservices and multi-container apps. - -### Working with docker-compose.yml in Visual Studio 2022 - -Besides adding a Dockerfile to a project, as we mentioned before, Visual Studio 2017 (from version 15.8 on) can add orchestrator support for Docker Compose to a solution. - -When you add container orchestrator support, as shown in Figure 5-7, for the first time, Visual Studio creates the Dockerfile for the project and creates a new (service section) project in your solution with several global `docker-compose*.yml` files, and then adds the project to those files. You can then open the docker-compose.yml files and update them with additional features. - -Repeat this operation for every project you want to include in the docker-compose.yml file. - -At the time of this writing, Visual Studio supports **Docker Compose** orchestrators. 
- -![Screenshot showing the Container Orchestrator Support option in the project context menu.](./media/docker-app-development-workflow/add-container-orchestrator-support-option.png) - -**Figure 5-7**. Adding Docker support in Visual Studio 2022 by right-clicking an ASP.NET Core project - -After you add orchestrator support to your solution in Visual Studio, you will also see a new node (in the `docker-compose.dcproj` project file) in Solution Explorer that contains the added docker-compose.yml files, as shown in Figure 5-8. - -![Screenshot of docker-compose node in Solution Explorer.](./media/docker-app-development-workflow/docker-compose-tree-node.png) - -**Figure 5-8**. The **docker-compose** tree node added in Visual Studio 2022 Solution Explorer - -You could deploy a multi-container application with a single docker-compose.yml file by using the `docker-compose up` command. However, Visual Studio adds a group of them so you can override values depending on the environment (development or production) and execution type (release or debug). This capability will be explained in later sections. - -![Image for the Step 5.](./media/docker-app-development-workflow/step-5-run-containers-compose-app.png) - -## Step 5. Build and run your Docker application - -If your application only has a single container, you can run it by deploying it to your Docker host (VM or physical server). However, if your application contains multiple services, you can deploy it as a composed application, either using a single CLI command (`docker-compose up`), or with Visual Studio, which will use that command under the covers. Let's look at the different options. - -### Option A: Running a single-container application - -#### Using Docker CLI - -You can run a Docker container using the `docker run` command, as shown in Figure 5-9: - -```console -docker run -t -d -p 80:5000 cesardl/netcore-webapi-microservice-docker:first -``` - -The above command creates a new container instance from the specified image every time it's run. You can use the `--name` parameter to give a name to the container and then use `docker start {name}` (or use the container ID or automatic name) to run an existing container instance. - -![Screenshot running a Docker container using the docker run command.](./media/docker-app-development-workflow/use-docker-run-command.png) - -**Figure 5-9**. Running a Docker container using the docker run command - -In this case, the command binds the internal port 5000 of the container to port 80 of the host machine. This means that the host is listening on port 80 and forwarding to port 5000 on the container. - -The hash shown is the container ID; the container is also assigned a random readable name if the `--name` option is not used. - -#### Using Visual Studio - -If you haven't added container orchestrator support, you can still run a single-container app in Visual Studio by pressing Ctrl+F5, and you can use F5 to debug the application within the container. The container runs locally using `docker run`. - -### Option B: Running a multi-container application - -In most enterprise scenarios, a Docker application will be composed of multiple services, which means you need to run a multi-container application, as shown in Figure 5-10. - -![VM with several Docker containers](./media/docker-app-development-workflow/vm-with-docker-containers-deployed.png) - -**Figure 5-10**.
VM with Docker containers deployed - -#### Using Docker CLI - -To run a multi-container application with the Docker CLI, you use the `docker-compose up` command. This command uses the **docker-compose.yml** file that you have at the solution level to deploy a multi-container application. Figure 5-11 shows the results when running the command from your main solution directory, which contains the docker-compose.yml file. - -![Screen view when running the docker-compose up command](./media/docker-app-development-workflow/results-docker-compose-up.png) - -**Figure 5-11**. Example results when running the docker-compose up command - -After the docker-compose up command runs, the application and its related containers are deployed into your Docker host, as depicted in Figure 5-10. - -#### Using Visual Studio - -Running a multi-container application using Visual Studio 2022 couldn't be simpler. You just press Ctrl+F5 to run or F5 to debug, as usual, setting the **docker-compose** project as the startup project. Visual Studio handles all the needed setup, so you can set breakpoints as usual and debug what end up being independent processes running in "remote servers", with the debugger already attached. - -As mentioned before, each time you add Docker solution support to a project within a solution, that project is configured in the global (solution-level) docker-compose.yml file, which lets you run or debug the whole solution at once. Visual Studio will start one container for each project that has Docker solution support enabled, and perform all the internal steps for you (dotnet publish, docker build, etc.). - -If you want to take a peek at everything Visual Studio does for you, take a look at the file: - -`{root solution folder}\obj\Docker\docker-compose.vs.debug.g.yml` - -The important point here is that, as shown in Figure 5-12, in Visual Studio 2022 there is an additional **Docker** command for the F5 key action. This option lets you run or debug a multi-container application by running all the containers that are defined in the docker-compose.yml files at the solution level. The ability to debug multi-container solutions means that you can set several breakpoints, each breakpoint in a different project (container), and while debugging from Visual Studio you will stop at breakpoints defined in different projects and running on different containers. - -![Screenshot of the debug toolbar running a docker-compose project.](./media/docker-app-development-workflow/debug-toolbar-docker-compose-project.png) - -**Figure 5-12**. Running multi-container apps in Visual Studio 2022 - -### Additional resources - -- **Deploy an ASP.NET container to a remote Docker host** \ - [https://learn.microsoft.com/visualstudio/containers/hosting-web-apps-in-docker](/visualstudio/containers/hosting-web-apps-in-docker) - -### A note about testing and deploying with orchestrators - -The `docker-compose up` and `docker run` commands (or running and debugging the containers in Visual Studio) are adequate for testing containers in your development environment. But you should not use this approach for production deployments, where you should target orchestrators like [Kubernetes](https://kubernetes.io/) or [Service Fabric](https://azure.microsoft.com/services/service-fabric/). If you're using Kubernetes, you have to use [pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/) to organize containers and [services](https://kubernetes.io/docs/concepts/services-networking/service/) to network them.
You also use [deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) to organize pod creation and modification. - -![Image for the Step 6.](./media/docker-app-development-workflow/step-6-test-app-microservices.png) - -## Step 6. Test your Docker application using your local Docker host - -This step will vary depending on what your application is doing. In a simple .NET Web application that is deployed as a single container or service, you can access the service by opening a browser on the Docker host and navigating to that site, as shown in Figure 5-13. (If the configuration in the Dockerfile maps the container to a port on the host that is anything other than 80, include the host port in the URL.) - -![Screenshot of the response from localhost/API/values.](./media/docker-app-development-workflow/test-docker-app-locally-localhost.png) - -**Figure 5-13**. Example of testing your Docker application locally using localhost - -If localhost is not pointing to the Docker host IP (by default, when using Docker CE, it should), to navigate to your service, use the IP address of your machine's network card. - -This URL in the browser uses port 80 for the particular container example being discussed. However, internally the requests are being redirected to port 5000, because that was how it was deployed with the docker run command, as explained in a previous step. - -You can also test the application using curl from the terminal, as shown in Figure 5-14. In a Docker installation on Windows, the default Docker Host IP is always 10.0.75.1 in addition to your machine's actual IP address. - -![Console output from getting the http://10.0.75.1/API/values with curl.](./media/docker-app-development-workflow/test-docker-app-locally-curl.png) - -**Figure 5-14**. Example of testing your Docker application locally using curl - -### Testing and debugging containers with Visual Studio 2022 - -When running and debugging the containers with Visual Studio 2022, you can debug the .NET application in much the same way as you would when running without containers. - -### Testing and debugging without Visual Studio - -If you're developing using the editor/CLI approach, debugging containers is more difficult and you'll probably want to debug by generating traces. - -### Additional resources - -- **Quickstart: Docker in Visual Studio.** \ - [https://learn.microsoft.com/visualstudio/containers/container-tools](/visualstudio/containers/container-tools) - -- **Debugging apps in a local Docker container** \ - [https://learn.microsoft.com/visualstudio/containers/edit-and-refresh](/visualstudio/containers/edit-and-refresh) - -## Simplified workflow when developing containers with Visual Studio - -Effectively, the workflow when using Visual Studio is a lot simpler than if you use the editor/CLI approach. Most of the steps required by Docker related to the Dockerfile and docker-compose.yml files are hidden or simplified by Visual Studio, as shown in Figure 5-15. - -:::image type="complex" source="./media/docker-app-development-workflow/simplified-life-cycle-containerized-apps-docker-cli.png" alt-text="Diagram showing the five simplified steps it takes to create an app."::: -The development process for Docker apps: 1 - Code your App, 2 - Write Dockerfile/s, 3 - Create images defined at Dockerfile/s, 4 - (optional) Compose services in the docker-compose.yml file, 5 - Run container or docker-compose app, 6 - Test your app or microservices, 7 - Push to repo and repeat. -:::image-end::: - -**Figure 5-15**. 
Simplified workflow when developing with Visual Studio - -In addition, you need to perform step 2 (adding Docker support to your projects) just once. Therefore, the workflow is similar to your usual development tasks when using .NET for any other development. You need to know what is going on under the covers (the image build process, what base images you're using, deployment of containers, etc.) and sometimes you will also need to edit the Dockerfile or docker-compose.yml file to customize behaviors. But most of the work is greatly simplified by using Visual Studio, making you a lot more productive. - -## Using PowerShell commands in a Dockerfile to set up Windows Containers - -[Windows Containers](/virtualization/windowscontainers/about/index) allow you to convert your existing Windows applications into Docker images and deploy them with the same tools as the rest of the Docker ecosystem. To use Windows Containers, you run PowerShell commands in the Dockerfile, as shown in the following example: - -```dockerfile -FROM mcr.microsoft.com/windows/servercore -LABEL Description="IIS" Vendor="Microsoft" Version="10" -RUN powershell -Command Add-WindowsFeature Web-Server -CMD [ "ping", "localhost", "-t" ] -``` - -In this case, we are using a Windows Server Core base image (the FROM setting) and installing IIS with a PowerShell command (the RUN setting). In a similar way, you could also use PowerShell commands to set up additional components like ASP.NET 4.x, .NET Framework 4.6, or any other Windows software. For example, the following command in a Dockerfile sets up ASP.NET 4.5: - -```dockerfile -RUN powershell add-windowsfeature web-asp-net45 -``` - -### Additional resources - -- **aspnet-docker/Dockerfile.** Example PowerShell commands to run from dockerfiles to include Windows features.\ - - ->[!div class="step-by-step"] ->[Previous](index.md) ->[Next](../multi-container-microservice-net-applications/index.md) diff --git a/docs/architecture/microservices/docker-application-development-process/index.md b/docs/architecture/microservices/docker-application-development-process/index.md deleted file mode 100644 index 3b613ad67d71b..0000000000000 --- a/docs/architecture/microservices/docker-application-development-process/index.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: Development process for Docker-based applications -description: Get a high-level overview of the options for developing Docker-based applications. Using your choice of Visual Studio for Windows or Visual Studio Code for multiplatform support (Windows, macOS, and Linux). -ms.date: 11/19/2021 ---- -# Development process for Docker-based applications - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -*Develop containerized .NET applications the way you like, either Integrated Development Environment (IDE) focused with Visual Studio and Visual Studio tools for Docker or CLI/Editor focused with Docker CLI and Visual Studio Code.* - -## Development environment for Docker apps - -### Development tool choices: IDE or editor - -Whether you prefer a full and powerful IDE or a lightweight and agile editor, Microsoft has tools that you can use for developing Docker applications. - -**Visual Studio (for Windows).** Docker-based .NET 8 application development with Visual Studio requires Visual Studio 2022 version 17.0 or later. Visual Studio 2022 comes with tools for Docker already built in. The tools for Docker let you develop, run, and validate your applications directly in the target Docker environment. 
You can press F5 to run and debug your application (single container or multiple containers) directly in a Docker host, or press Ctrl+F5 to edit and refresh your application without having to rebuild the container. This IDE is the most powerful development choice for Docker-based apps. - -**Visual Studio Code and Docker CLI**. If you prefer a lightweight and cross-platform editor that supports any development language, you can use Visual Studio Code and the Docker CLI. This combination is a cross-platform development approach for macOS, Linux, and Windows. Additionally, Visual Studio Code supports extensions for Docker such as IntelliSense for Dockerfiles and shortcut tasks to run Docker commands from the editor. - -By installing [Docker Desktop](https://hub.docker.com/search/?type=edition&offering=community), you can use a single Docker CLI to build apps for both Windows and Linux. - -### Additional resources - -- **Visual Studio**. Official site. \ - [https://visualstudio.microsoft.com/vs/](https://visualstudio.microsoft.com/vs/?utm_medium=microsoft&utm_source=learn.microsoft.com&utm_campaign=inline+link) - -- **Visual Studio Code**. Official site. \ - - -- **Docker Desktop for Windows** \ - [https://hub.docker.com/editions/community/docker-ce-desktop-windows](https://hub.docker.com/editions/community/docker-ce-desktop-windows) - -- **Docker Desktop for Mac** \ - [https://hub.docker.com/editions/community/docker-ce-desktop-mac](https://hub.docker.com/editions/community/docker-ce-desktop-mac) - -## .NET languages and frameworks for Docker containers - -As mentioned in earlier sections of this guide, you can use .NET Framework, .NET 8, or the open-source Mono project when developing Docker containerized .NET applications. You can develop in C\#, F\#, or Visual Basic when targeting Linux or Windows Containers, depending on which .NET framework is in use. For more details about .NET languages, see the blog post [The .NET Language Strategy](https://devblogs.microsoft.com/dotnet/the-net-language-strategy/).
- ->[!div class="step-by-step"] ->[Previous](../architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md) ->[Next](docker-app-development-workflow.md) diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/add-container-orchestrator-support-option.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/add-container-orchestrator-support-option.png deleted file mode 100644 index 83af8d4f52cb1..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/add-container-orchestrator-support-option.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/add-docker-support-option.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/add-docker-support-option.png deleted file mode 100644 index 41ca5cb3056ab..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/add-docker-support-option.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/debug-toolbar-docker-compose-project.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/debug-toolbar-docker-compose-project.png deleted file mode 100644 index 03a0aaf23bf96..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/debug-toolbar-docker-compose-project.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/docker-compose-tree-node.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/docker-compose-tree-node.png deleted file mode 100644 index bfa112c5ae598..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/docker-compose-tree-node.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/dotnet-core-cross-platform-development.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/dotnet-core-cross-platform-development.png deleted file mode 100644 index 5f5c31da408c2..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/dotnet-core-cross-platform-development.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/enable-docker-support-check-box.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/enable-docker-support-check-box.png deleted file mode 100644 index df906e71e8912..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/enable-docker-support-check-box.png and /dev/null differ diff --git 
a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/life-cycle-containerized-apps-docker-cli.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/life-cycle-containerized-apps-docker-cli.png deleted file mode 100644 index a1279c0abe6be..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/life-cycle-containerized-apps-docker-cli.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/results-docker-compose-up.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/results-docker-compose-up.png deleted file mode 100644 index 985e6b9c40ec7..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/results-docker-compose-up.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/run-docker-build-command.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/run-docker-build-command.png deleted file mode 100644 index 05f0d7e22d12b..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/run-docker-build-command.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/simplified-life-cycle-containerized-apps-docker-cli.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/simplified-life-cycle-containerized-apps-docker-cli.png deleted file mode 100644 index f75a2affcf32e..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/simplified-life-cycle-containerized-apps-docker-cli.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-1-code-your-app.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-1-code-your-app.png deleted file mode 100644 index 20e1bbfcf94c2..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-1-code-your-app.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-2-write-dockerfile.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-2-write-dockerfile.png deleted file mode 100644 index e774b9494a14e..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-2-write-dockerfile.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-3-create-dockerfile-defined-images.png 
b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-3-create-dockerfile-defined-images.png deleted file mode 100644 index 7f59b552cc96a..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-3-create-dockerfile-defined-images.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-4-define-services-docker-compose-yml.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-4-define-services-docker-compose-yml.png deleted file mode 100644 index a297c596ad024..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-4-define-services-docker-compose-yml.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-5-run-containers-compose-app.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-5-run-containers-compose-app.png deleted file mode 100644 index b7f634e14a06a..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-5-run-containers-compose-app.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-6-test-app-microservices.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-6-test-app-microservices.png deleted file mode 100644 index 8beb3340022df..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/step-6-test-app-microservices.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/test-docker-app-locally-curl.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/test-docker-app-locally-curl.png deleted file mode 100644 index 23d5215f999f0..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/test-docker-app-locally-curl.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/test-docker-app-locally-localhost.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/test-docker-app-locally-localhost.png deleted file mode 100644 index 0578dc52f29e9..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/test-docker-app-locally-localhost.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/use-docker-run-command.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/use-docker-run-command.png deleted file mode 100644 index af5c5be41746d..0000000000000 Binary files 
a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/use-docker-run-command.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/view-existing-images-with-docker-images.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/view-existing-images-with-docker-images.png deleted file mode 100644 index b0e26dca761b9..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/view-existing-images-with-docker-images.png and /dev/null differ diff --git a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/vm-with-docker-containers-deployed.png b/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/vm-with-docker-containers-deployed.png deleted file mode 100644 index ff6caa09008c5..0000000000000 Binary files a/docs/architecture/microservices/docker-application-development-process/media/docker-app-development-workflow/vm-with-docker-containers-deployed.png and /dev/null differ diff --git a/docs/architecture/microservices/implement-resilient-applications/handle-partial-failure.md b/docs/architecture/microservices/implement-resilient-applications/handle-partial-failure.md deleted file mode 100644 index 8075acfc77345..0000000000000 --- a/docs/architecture/microservices/implement-resilient-applications/handle-partial-failure.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Handling partial failure -description: Learn how to handle partial failures gracefully. A microservice might not be fully functional but it might still be able to do some useful work. -ms.date: 10/16/2018 ---- -# Handle partial failure - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -In distributed systems like microservices-based applications, there's an ever-present risk of partial failure. For instance, a single microservice/container can fail or might not be available to respond for a short time, or a single VM or server can crash. Since clients and services are separate processes, a service might not be able to respond in a timely way to a client's request. The service might be overloaded and responding very slowly to requests or might simply not be accessible for a short time because of network issues. - -For example, consider the Order details page from the eShopOnContainers sample application. If the ordering microservice is unresponsive when the user tries to submit an order, a bad implementation of the client process (the MVC web application)—for example, if the client code were to use synchronous RPCs with no timeout—would block threads indefinitely waiting for a response. Besides creating a bad user experience, every unresponsive wait consumes or blocks a thread, and threads are extremely valuable in highly scalable applications. If there are many blocked threads, eventually the application's runtime can run out of threads. In that case, the application can become globally unresponsive instead of just partially unresponsive, as shown in Figure 8-1. - -![Diagram showing partial failures.](./media/handle-partial-failure/partial-failures-diagram.png) - -**Figure 8-1**. 
Partial failures because of dependencies that impact service thread availability - -In a large microservices-based application, any partial failure can be amplified, especially if most of the internal microservices interaction is based on synchronous HTTP calls (which is considered an anti-pattern). Think about a system that receives millions of incoming calls per day. If your system has a bad design that's based on long chains of synchronous HTTP calls, these incoming calls might result in many more millions of outgoing calls (let's suppose a ratio of 1:4) to dozens of internal microservices as synchronous dependencies. This situation is shown in Figure 8-2, especially dependency \#3, that starts a chain, calling dependency #4, which then calls #5. - -![Diagram showing multiple distributed dependencies.](./media/handle-partial-failure/multiple-distributed-dependencies.png) - -**Figure 8-2**. The impact of having an incorrect design featuring long chains of HTTP requests - -Intermittent failure is guaranteed in a distributed and cloud-based system, even if every dependency itself has excellent availability. It's a fact you need to consider. - -If you do not design and implement techniques to ensure fault tolerance, even small downtimes can be amplified. As an example, 50 dependencies each with 99.99% of availability would result in several hours of downtime each month because of this ripple effect. When a microservice dependency fails while handling a high volume of requests, that failure can quickly saturate all available request threads in each service and crash the whole application. - -![Diagram showing partial failure amplified in microservices.](./media/handle-partial-failure/partial-failure-amplified-microservices.png) - -**Figure 8-3**. Partial failure amplified by microservices with long chains of synchronous HTTP calls - -To minimize this problem, in the section [Asynchronous microservice integration enforce microservice's autonomy](../architect-microservice-container-applications/communication-in-microservice-architecture.md#asynchronous-microservice-integration-enforces-microservices-autonomy), this guide encourages you to use asynchronous communication across the internal microservices. - -In addition, it's essential that you design your microservices and client applications to handle partial failures—that is, to build resilient microservices and client applications. - ->[!div class="step-by-step"] ->[Previous](index.md) ->[Next](partial-failure-strategies.md) diff --git a/docs/architecture/microservices/implement-resilient-applications/implement-circuit-breaker-pattern.md b/docs/architecture/microservices/implement-resilient-applications/implement-circuit-breaker-pattern.md deleted file mode 100644 index ee19c97e8b8a7..0000000000000 --- a/docs/architecture/microservices/implement-resilient-applications/implement-circuit-breaker-pattern.md +++ /dev/null @@ -1,153 +0,0 @@ ---- -title: Implementing the Circuit Breaker pattern -description: Learn how to implement the Circuit Breaker pattern as a complementary system to Http retries. -ms.date: 03/09/2022 ---- - -# Implement the Circuit Breaker pattern - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -As noted earlier, you should handle faults that might take a variable amount of time to recover from, as might happen when you try to connect to a remote service or resource. Handling this type of fault can improve the stability and resiliency of an application. 
- -In a distributed environment, calls to remote resources and services can fail due to transient faults, such as slow network connections and timeouts, or if resources are responding slowly or are temporarily unavailable. These faults typically correct themselves after a short time, and a robust cloud application should be prepared to handle them by using a strategy like the "Retry pattern". - -However, there can also be situations where faults are due to unanticipated events that might take much longer to fix. These faults can range in severity from a partial loss of connectivity to the complete failure of a service. In these situations, it might be pointless for an application to continually retry an operation that's unlikely to succeed. - -Instead, the application should be coded to accept that the operation has failed and handle the failure accordingly. - -Using HTTP retries carelessly could result in creating a Denial of Service ([DoS](https://en.wikipedia.org/wiki/Denial-of-service_attack)) attack within your own software. As a microservice fails or performs slowly, multiple clients might repeatedly retry failed requests. That creates a dangerous risk of exponentially increasing traffic targeted at the failing service. - -Therefore, you need some kind of defense barrier so that excessive requests stop when it isn't worthwhile to keep trying. That defense barrier is precisely the circuit breaker. - -The Circuit Breaker pattern has a different purpose than the "Retry pattern". The "Retry pattern" enables an application to retry an operation in the expectation that the operation will eventually succeed. The Circuit Breaker pattern prevents an application from performing an operation that's likely to fail. An application can combine these two patterns. However, the retry logic should be sensitive to any exception returned by the circuit breaker, and it should abandon retry attempts if the circuit breaker indicates that a fault is not transient. - -## Implement Circuit Breaker pattern with `IHttpClientFactory` and Polly - -As when implementing retries, the recommended approach for circuit breakers is to take advantage of proven .NET libraries like Polly and its native integration with `IHttpClientFactory`. - -Adding a circuit breaker policy into your `IHttpClientFactory` outgoing middleware pipeline is as simple as adding a single incremental piece of code to what you already have when using `IHttpClientFactory`. - -The only addition here to the code used for HTTP call retries is the code where you add the Circuit Breaker policy to the list of policies to use, as shown in the following incremental code. - -```csharp -// Program.cs -var retryPolicy = GetRetryPolicy(); -var circuitBreakerPolicy = GetCircuitBreakerPolicy(); - -builder.Services.AddHttpClient<IBasketService, BasketService>() - .SetHandlerLifetime(TimeSpan.FromMinutes(5)) // Sample: default lifetime is 2 minutes - .AddHttpMessageHandler<HttpClientAuthorizationDelegatingHandler>() - .AddPolicyHandler(retryPolicy) - .AddPolicyHandler(circuitBreakerPolicy); -``` - -The `AddPolicyHandler()` method is what adds policies to the `HttpClient` objects you'll use. In this case, it's adding a Polly policy for a circuit breaker.
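From the consuming side, nothing changes: the typed client is injected and used as usual, and the retry and circuit breaker policies registered above wrap every outgoing request transparently. The following is a hedged sketch of such a typed client; the `IBasketService`/`BasketService` names and the route are illustrative assumptions, not the exact eShopOnContainers implementation.

```csharp
// Hypothetical typed client, shown only to illustrate consumption.
// The registered Polly policies run around every call made through _httpClient.
public interface IBasketService
{
    Task<string> GetBasketJsonAsync(string buyerId);
}

public class BasketService : IBasketService
{
    private readonly HttpClient _httpClient;

    // The HttpClient instance is supplied by IHttpClientFactory;
    // its BaseAddress is assumed to be configured during registration.
    public BasketService(HttpClient httpClient) => _httpClient = httpClient;

    public async Task<string> GetBasketJsonAsync(string buyerId)
    {
        // If this call fails transiently, the retry policy retries it; after
        // repeated failures, the circuit breaker opens and subsequent calls
        // fail fast with a BrokenCircuitException.
        return await _httpClient.GetStringAsync($"/api/v1/basket/{buyerId}");
    }
}
```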
- -To have a more modular approach, the Circuit Breaker Policy is defined in a separate method called `GetCircuitBreakerPolicy()`, as shown in the following code: - -```csharp -// also in Program.cs -static IAsyncPolicy<HttpResponseMessage> GetCircuitBreakerPolicy() -{ - return HttpPolicyExtensions - .HandleTransientHttpError() - .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30)); -} -``` - -In the code example above, the circuit breaker policy is configured so it breaks or opens the circuit after five consecutive faults while retrying the HTTP requests. When that happens, the circuit will break for 30 seconds: in that period, calls are failed immediately by the circuit breaker rather than actually being placed. The policy automatically interprets [relevant exceptions and HTTP status codes](/aspnet/core/fundamentals/http-requests#handle-transient-faults) as faults. - -Circuit breakers should also be used to redirect requests to a fallback infrastructure if there are issues with a particular resource that's deployed in a different environment than the client application or service that's performing the HTTP call. That way, if there's an outage in the datacenter that impacts only your backend microservices but not your client applications, the client applications can redirect to the fallback services. Polly is planning a new policy to automate this [failover policy](https://github.com/App-vNext/Polly/wiki/Polly-Roadmap#failover-policy) scenario. - -All those features are for cases where you're managing the failover from within the .NET code, as opposed to having it managed automatically for you by Azure, with location transparency. - -From a usage point of view, when using HttpClient, there's no need to add anything new here because the code is the same as when using `HttpClient` with `IHttpClientFactory`, as shown in previous sections. - -## Test Http retries and circuit breakers in eShopOnContainers - -Whenever you start the eShopOnContainers solution in a Docker host, it needs to start multiple containers. Some of the containers are slower to start and initialize, like the SQL Server container. This is especially true the first time you deploy the eShopOnContainers application into Docker because it needs to set up the images and the database. The fact that some containers start slower than others can cause the rest of the services to initially throw HTTP exceptions, even if you set dependencies between containers at the docker-compose level, as explained in previous sections. Those docker-compose dependencies between containers are just at the process level. The container's entry point process might be started, but SQL Server might not be ready for queries. The result can be a cascade of errors, and the application can get an exception when trying to consume that particular container. - -You might also see this type of error on startup when the application is being deployed to the cloud. In that case, orchestrators might be moving containers from one node or VM to another (that is, starting new instances) when balancing the number of containers across the cluster's nodes. - -The way 'eShopOnContainers' solves those issues when starting all the containers is by using the Retry pattern illustrated earlier. - -### Test the circuit breaker in eShopOnContainers - -There are a few ways you can break/open the circuit and test it with eShopOnContainers. - -One option is to lower the allowed number of retries to 1 in the circuit breaker policy and redeploy the whole solution into Docker.
With a single retry, there's a good chance that an HTTP request will fail during deployment, the circuit breaker will open, and you get an error. - -Another option is to use custom middleware that's implemented in the **Basket** microservice. When this middleware is enabled, it catches all HTTP requests and returns status code 500. You can enable the middleware by making a GET request to the failing URI, like the following: - -- `GET http://localhost:5103/failing`\ - This request returns the current state of the middleware. If the middleware is enabled, the request returns status code 500. If the middleware is disabled, there's no response. - -- `GET http://localhost:5103/failing?enable`\ - This request enables the middleware. - -- `GET http://localhost:5103/failing?disable`\ - This request disables the middleware. - -For instance, once the application is running, you can enable the middleware by making a request using the following URI in any browser. Note that the Basket microservice uses port 5103. - -`http://localhost:5103/failing?enable` - -You can then check the status using the URI `http://localhost:5103/failing`, as shown in Figure 8-5. - -![Screenshot of checking the status of failing middleware simulation.](./media/implement-circuit-breaker-pattern/failing-middleware-simulation.png) - -**Figure 8-5**. Checking the state of the "Failing" ASP.NET middleware – In this case, disabled. - -At this point, the Basket microservice responds with status code 500 whenever you invoke it. - -Once the middleware is running, you can try making an order from the MVC web application. Because the requests fail, the circuit will open. - -In the following example, you can see that the MVC web application has a catch block in the logic for placing an order. If the code catches an open-circuit exception, it shows the user a friendly message telling them to wait. - -```csharp -public class CartController : Controller -{ - //… - public async Task<IActionResult> Index() - { - try - { - var user = _appUserParser.Parse(HttpContext.User); - //Http requests using the Typed Client (Service Agent) - var vm = await _basketSvc.GetBasket(user); - return View(vm); - } - catch (BrokenCircuitException) - { - // Catches error when Basket.api is in circuit-opened mode - HandleBrokenCircuitException(); - } - return View(); - } - - private void HandleBrokenCircuitException() - { - TempData["BasketInoperativeMsg"] = "Basket Service is inoperative, please try later on. (Business message due to Circuit-Breaker)"; - } -} -``` - -Here's a summary. The Retry policy tries several times to make the HTTP request and gets HTTP errors. When the number of retries reaches the maximum number set for the Circuit Breaker policy (in this case, 5), the application throws a `BrokenCircuitException`. The result is a friendly message, as shown in Figure 8-6. - -![Screenshot of the MVC web app with basket service inoperative error.](./media/implement-circuit-breaker-pattern/basket-service-inoperative.png) - -**Figure 8-6**. Circuit breaker returning an error to the UI - -You can implement different logic for when to open/break the circuit. Or you can try an HTTP request against a different back-end microservice if there's a fallback datacenter or redundant back-end system. - -Finally, another possibility for the `CircuitBreakerPolicy` is to use `Isolate` (which forces open and holds open the circuit) and `Reset` (which closes it again). These could be used to build a utility HTTP endpoint that invokes `Isolate` and `Reset` directly on the policy.
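The sketch below illustrates one hypothetical shape such a utility endpoint could take with ASP.NET Core minimal APIs; the routes, the client name, and the way the policy instance is shared through DI are assumptions for illustration, not code from eShopOnContainers.

```csharp
// Program.cs — hypothetical circuit breaker control endpoints (illustration only).
using Polly;
using Polly.CircuitBreaker;
using Polly.Extensions.Http;

var builder = WebApplication.CreateBuilder(args);

// Keep a reference to the circuit breaker policy so it can be controlled manually.
AsyncCircuitBreakerPolicy<HttpResponseMessage> circuitBreakerPolicy =
    HttpPolicyExtensions
        .HandleTransientHttpError()
        .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

builder.Services.AddSingleton(circuitBreakerPolicy);

// "catalog" is a placeholder client name.
builder.Services.AddHttpClient("catalog")
    .AddPolicyHandler(circuitBreakerPolicy);

var app = builder.Build();

// Trip the circuit manually, for example before upgrading a downstream system.
app.MapPost("/circuit/isolate", (AsyncCircuitBreakerPolicy<HttpResponseMessage> policy) =>
{
    policy.Isolate(); // Forces the circuit open and holds it open.
    return Results.Ok("Circuit isolated (held open).");
});

// Close the circuit again once the downstream system is healthy.
app.MapPost("/circuit/reset", (AsyncCircuitBreakerPolicy<HttpResponseMessage> policy) =>
{
    policy.Reset(); // Closes the circuit so calls flow normally again.
    return Results.Ok("Circuit reset (closed).");
});

app.Run();
```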
Such an HTTP endpoint could also be used, suitably secured, in production for temporarily isolating a downstream system, such as when you want to upgrade it. Or it could trip the circuit manually to protect a downstream system you suspect is faulting. - -## Additional resources - -- **Circuit Breaker pattern**\ - [https://learn.microsoft.com/azure/architecture/patterns/circuit-breaker](/azure/architecture/patterns/circuit-breaker) - ->[!div class="step-by-step"] ->[Previous](implement-http-call-retries-exponential-backoff-polly.md) ->[Next](monitor-app-health.md) diff --git a/docs/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly.md b/docs/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly.md deleted file mode 100644 index 23ed5d85e32e2..0000000000000 --- a/docs/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -title: Implement HTTP call retries with exponential backoff with Polly -description: Learn how to handle HTTP failures with Polly and IHttpClientFactory. -ms.date: 01/13/2021 ---- - -# Implement HTTP call retries with exponential backoff with IHttpClientFactory and Polly policies - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -The recommended approach for retries with exponential backoff is to take advantage of more advanced .NET libraries like the open-source [Polly library](https://github.com/App-vNext/Polly). - -Polly is a .NET library that provides resilience and transient-fault handling capabilities. You can implement those capabilities by applying Polly policies such as Retry, Circuit Breaker, Bulkhead Isolation, Timeout, and Fallback. Polly targets .NET Framework 4.x and .NET Standard 1.0, 1.1, and 2.0 (which supports .NET Core and later). - -The following steps show how you can use HTTP retries with Polly integrated into `IHttpClientFactory`, which is explained in the previous section. - -**Install .NET packages** - -First, you will need to install the `Microsoft.Extensions.Http.Polly` package. - -- [Install with Visual Studio](/nuget/consume-packages/install-use-packages-visual-studio) -- [Install with dotnet CLI](/nuget/consume-packages/install-use-packages-dotnet-cli) -- [Install with nuget.exe CLI](/nuget/consume-packages/install-use-packages-nuget-cli) -- [Install with Package Manager Console (PowerShell)](/nuget/consume-packages/install-use-packages-powershell) - -**Reference the .NET 8 packages** - -`IHttpClientFactory` has been available since .NET Core 2.1; however, we recommend that you use the latest .NET 8 packages from NuGet in your project. You typically also need to reference the extension package `Microsoft.Extensions.Http.Polly`. - -**Configure a client with Polly's Retry policy, in app startup** - -The `AddPolicyHandler()` method is what adds policies to the `HttpClient` objects you'll use. In this case, it's adding a Polly policy for HTTP retries with exponential backoff.
- -To have a more modular approach, the HTTP Retry Policy can be defined in a separate method within the _Program.cs_ file, as shown in the following code: - -```csharp -static IAsyncPolicy<HttpResponseMessage> GetRetryPolicy() -{ - return HttpPolicyExtensions - .HandleTransientHttpError() - .OrResult(msg => msg.StatusCode == System.Net.HttpStatusCode.NotFound) - .WaitAndRetryAsync(6, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, - retryAttempt))); -} -``` - -As shown in previous sections, you need to define a named or typed client HttpClient configuration in your standard _Program.cs_ app configuration. Now you add incremental code specifying the policy for the HTTP retries with exponential backoff, as follows: - -```csharp -// Program.cs -builder.Services.AddHttpClient<IBasketService, BasketService>() - .SetHandlerLifetime(TimeSpan.FromMinutes(5)) //Set lifetime to five minutes - .AddPolicyHandler(GetRetryPolicy()); -``` - -With Polly, you can define a Retry policy with the number of retries, the exponential backoff configuration, and the actions to take when there's an HTTP exception, such as logging the error. In this case, the policy is configured to try six times with an exponential retry, starting at two seconds. - -## Add a jitter strategy to the retry policy - -A regular Retry policy can affect your system under high concurrency and high contention, especially at scale. To overcome peaks of similar retries coming from many clients in partial outages, a good workaround is to add a jitter strategy to the retry algorithm/policy. This strategy can improve the overall performance of the end-to-end system. As recommended in [Polly: Retry with Jitter](https://github.com/App-vNext/Polly/wiki/Retry-with-jitter), a good jitter strategy can be implemented with smooth and evenly distributed retry intervals applied with a well-controlled median initial retry delay on an exponential backoff. This approach helps to spread out the spikes when the issue arises. The principle is illustrated by the following example: - -```csharp - -var delay = Backoff.DecorrelatedJitterBackoffV2(medianFirstRetryDelay: TimeSpan.FromSeconds(1), retryCount: 5); - -var retryPolicy = Policy - .Handle<HttpRequestException>() - .WaitAndRetryAsync(delay); -``` - -## Additional resources - -- **Retry pattern** - [https://learn.microsoft.com/azure/architecture/patterns/retry](/azure/architecture/patterns/retry) - -- **Polly and IHttpClientFactory** - - -- **Polly (.NET resilience and transient-fault-handling library)** - - -- **Polly: Retry with Jitter** - - -- **Marc Brooker. Jitter: Making Things Better With Randomness** - - ->[!div class="step-by-step"] ->[Previous](use-httpclientfactory-to-implement-resilient-http-requests.md) ->[Next](implement-circuit-breaker-pattern.md) diff --git a/docs/architecture/microservices/implement-resilient-applications/implement-resilient-entity-framework-core-sql-connections.md b/docs/architecture/microservices/implement-resilient-applications/implement-resilient-entity-framework-core-sql-connections.md deleted file mode 100644 index e24cdeb9421c0..0000000000000 --- a/docs/architecture/microservices/implement-resilient-applications/implement-resilient-entity-framework-core-sql-connections.md +++ /dev/null @@ -1,146 +0,0 @@ ---- -title: Implement resilient Entity Framework Core SQL connections -description: Learn how to implement resilient Entity Framework Core SQL connections. This technique is especially important when using Azure SQL Database in the cloud.
-ms.date: 09/10/2024 ---- -# Implement resilient Entity Framework Core SQL connections - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -For Azure SQL DB, Entity Framework (EF) Core already provides internal database connection resiliency and retry logic. But you need to enable the Entity Framework execution strategy for each connection if you want to have [resilient EF Core connections](/ef/core/miscellaneous/connection-resiliency). - -For instance, the following code at the EF Core connection level enables resilient SQL connections that are retried if the connection fails. - -```csharp -// Program.cs from any ASP.NET Core Web API -// Other code ... -builder.Services.AddDbContext<CatalogContext>(options => - { - options.UseSqlServer( - builder.Configuration["ConnectionString"], - sqlServerOptionsAction: sqlOptions => - { - sqlOptions.EnableRetryOnFailure( - maxRetryCount: 10, - maxRetryDelay: TimeSpan.FromSeconds(30), - errorNumbersToAdd: null); - }); - }); -``` - -[!INCLUDE [managed-identities](../../../includes/managed-identities.md)] - -## Execution strategies and explicit transactions using BeginTransaction and multiple DbContexts - -When retries are enabled in EF Core connections, each operation you perform using EF Core becomes its own retryable operation. Each query and each call to `SaveChanges` will be retried as a unit if a transient failure occurs. - -However, if your code initiates a transaction using `BeginTransaction`, you're defining your own group of operations that need to be treated as a unit. Everything inside the transaction has to be rolled back if a failure occurs. - -If you try to execute that transaction when using an EF execution strategy (retry policy) and you call `SaveChanges` from multiple DbContexts, you'll get an exception like this one: - -> System.InvalidOperationException: The configured execution strategy 'SqlServerRetryingExecutionStrategy' does not support user initiated transactions. Use the execution strategy returned by 'DbContext.Database.CreateExecutionStrategy()' to execute all the operations in the transaction as a retriable unit. - -The solution is to manually invoke the EF execution strategy with a delegate representing everything that needs to be executed. If a transient failure occurs, the execution strategy will invoke the delegate again. For example, the following code shows how it's implemented in eShopOnContainers with two DbContexts (\_catalogContext and the IntegrationEventLogContext) when updating a product and then saving the ProductPriceChangedIntegrationEvent object, which needs to use a different DbContext. - -```csharp -public async Task<IActionResult> UpdateProduct( - [FromBody]CatalogItem productToUpdate) -{ - // Other code ...
- - var oldPrice = catalogItem.Price; - var raiseProductPriceChangedEvent = oldPrice != productToUpdate.Price; - - // Update current product - catalogItem = productToUpdate; - - // Save product's data and publish integration event through the Event Bus - // if price has changed - if (raiseProductPriceChangedEvent) - { - //Create Integration Event to be published through the Event Bus - var priceChangedEvent = new ProductPriceChangedIntegrationEvent( - catalogItem.Id, productToUpdate.Price, oldPrice); - - // Achieving atomicity between original Catalog database operation and the - // IntegrationEventLog thanks to a local transaction - await _catalogIntegrationEventService.SaveEventAndCatalogContextChangesAsync( - priceChangedEvent); - - // Publish through the Event Bus and mark the saved event as published - await _catalogIntegrationEventService.PublishThroughEventBusAsync( - priceChangedEvent); - } - // Just save the updated product because the Product's Price hasn't changed. - else - { - await _catalogContext.SaveChangesAsync(); - } -} -``` - -The first is `_catalogContext` and the second `DbContext` is within the `_catalogIntegrationEventService` object. The Commit action is performed across all `DbContext` objects using an EF execution strategy. - -To achieve this multiple `DbContext` commit, the `SaveEventAndCatalogContextChangesAsync` uses a `ResilientTransaction` class, as shown in the following code: - -```csharp -public class CatalogIntegrationEventService : ICatalogIntegrationEventService -{ - //… - public async Task SaveEventAndCatalogContextChangesAsync( - IntegrationEvent evt) - { - // Use of an EF Core resiliency strategy when using multiple DbContexts - // within an explicit BeginTransaction(): - // https://learn.microsoft.com/ef/core/miscellaneous/connection-resiliency - await ResilientTransaction.New(_catalogContext).ExecuteAsync(async () => - { - // Achieving atomicity between original catalog database - // operation and the IntegrationEventLog thanks to a local transaction - await _catalogContext.SaveChangesAsync(); - await _eventLogService.SaveEventAsync(evt, - _catalogContext.Database.CurrentTransaction.GetDbTransaction()); - }); - } -} -``` - -The `ResilientTransaction.ExecuteAsync` method basically begins a transaction from the passed `DbContext` (`_catalogContext`) and then makes the `EventLogService` use that transaction to save changes from the `IntegrationEventLogContext` and then commits the whole transaction. - -```csharp -public class ResilientTransaction -{ - private DbContext _context; - private ResilientTransaction(DbContext context) => - _context = context ?? 
throw new ArgumentNullException(nameof(context)); - - public static ResilientTransaction New(DbContext context) => - new ResilientTransaction(context); - - public async Task ExecuteAsync(Func<Task> action) - { - // Use of an EF Core resiliency strategy when using multiple DbContexts - // within an explicit BeginTransaction(): - // https://learn.microsoft.com/ef/core/miscellaneous/connection-resiliency - var strategy = _context.Database.CreateExecutionStrategy(); - await strategy.ExecuteAsync(async () => - { - await using var transaction = await _context.Database.BeginTransactionAsync(); - await action(); - await transaction.CommitAsync(); - }); - } -} -``` - -## Additional resources - -- **Connection Resiliency and Command Interception with EF in an ASP.NET MVC Application** \ - [https://learn.microsoft.com/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/connection-resiliency-and-command-interception-with-the-entity-framework-in-an-asp-net-mvc-application](/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/connection-resiliency-and-command-interception-with-the-entity-framework-in-an-asp-net-mvc-application) - -- **Cesar de la Torre. Using Resilient Entity Framework Core SQL Connections and Transactions** \ - - ->[!div class="step-by-step"] ->[Previous](implement-retries-exponential-backoff.md) ->[Next](use-httpclientfactory-to-implement-resilient-http-requests.md) diff --git a/docs/architecture/microservices/implement-resilient-applications/implement-retries-exponential-backoff.md b/docs/architecture/microservices/implement-resilient-applications/implement-retries-exponential-backoff.md deleted file mode 100644 index 34cade53d2e7a..0000000000000 --- a/docs/architecture/microservices/implement-resilient-applications/implement-retries-exponential-backoff.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Implement retries with exponential backoff -description: Learn how to implement retries with exponential backoff. -ms.date: 10/16/2018 ---- -# Implement retries with exponential backoff - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -[*Retries with exponential backoff*](/azure/architecture/patterns/retry) is a technique that retries an operation, with an exponentially increasing wait time, until a maximum retry count has been reached (the [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff)). This technique embraces the fact that cloud resources might intermittently be unavailable for more than a few seconds for any reason. For example, an orchestrator might be moving a container to another node in a cluster for load balancing. During that time, some requests might fail. Another example could be a database like SQL Azure, where a database can be moved to another server for load balancing, causing the database to be unavailable for a few seconds. - -There are many approaches to implement retry logic with exponential backoff.
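For illustration only, a hand-rolled helper might look like the following sketch; real applications should prefer a library such as Polly (as shown in the previous sections), which also adds jitter, circuit breaking, and policy composition. The helper name and defaults here are assumptions.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical helper, for illustration only.
public static class RetryHelper
{
    public static async Task<T> RetryWithExponentialBackoffAsync<T>(
        Func<Task<T>> operation,
        int maxRetries = 5,
        double baseDelaySeconds = 2)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception) when (attempt < maxRetries)
            {
                // In real code, catch only transient failures (timeouts, 5xx responses, and so on).
                // Wait 2, 4, 8, ... seconds before the next attempt; the final failure is rethrown.
                var delay = TimeSpan.FromSeconds(Math.Pow(baseDelaySeconds, attempt + 1));
                await Task.Delay(delay);
            }
        }
    }
}
```

A caller would wrap a single operation, for example: `var json = await RetryHelper.RetryWithExponentialBackoffAsync(() => httpClient.GetStringAsync(url));`.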
- ->[!div class="step-by-step"] ->[Previous](partial-failure-strategies.md) ->[Next](implement-resilient-entity-framework-core-sql-connections.md) diff --git a/docs/architecture/microservices/implement-resilient-applications/index.md b/docs/architecture/microservices/implement-resilient-applications/index.md deleted file mode 100644 index 170d04f0a96ad..0000000000000 --- a/docs/architecture/microservices/implement-resilient-applications/index.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: Implement resilient applications -description: Learn about resilience, a core concept in a microservices architecture. You must know how to handle transient failures gracefully when they occur. -ms.date: 01/30/2020 ---- -# Implement resilient applications - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -*Your microservice and cloud-based applications must embrace the partial failures that will certainly occur eventually. You must design your application to be resilient to those partial failures.* - -Resiliency is the ability to recover from failures and continue to function. It isn't about avoiding failures but accepting the fact that failures will happen and responding to them in a way that avoids downtime or data loss. The goal of resiliency is to return the application to a fully functioning state after a failure. - -It's challenging enough to design and deploy a microservices-based application. But you also need to keep your application running in an environment where some sort of failure is certain. Therefore, your application should be resilient. It should be designed to cope with partial failures, like network outages or nodes or VMs crashing in the cloud. Even microservices (containers) being moved to a different node within a cluster can cause intermittent short failures within the application. - -The many individual components of your application should also incorporate health monitoring features. By following the guidelines in this chapter, you can create an application that can work smoothly in spite of transient downtime or the normal hiccups that occur in complex and cloud-based deployments. - ->[!IMPORTANT] -> eShopOnContainers had been using the [Polly library](https://thepollyproject.azurewebsites.net/) to implement resiliency using [Typed Clients](./use-httpclientfactory-to-implement-resilient-http-requests.md) up until release 3.0.0. -> -> Starting with release 3.0.0, HTTP call resiliency is implemented using a [Linkerd mesh](https://linkerd.io/), which handles retries in a transparent and configurable fashion, within a Kubernetes cluster, without having to handle those concerns in the code. -> -> The Polly library is still used to add resilience to database connections, especially while starting up the services. - ->[!WARNING] -> All code samples and images in this section were valid before Linkerd was adopted and haven't been updated to reflect the current code. They still make sense in the context of this section.
- ->[!div class="step-by-step"] ->[Previous](../microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md) ->[Next](handle-partial-failure.md) diff --git a/docs/architecture/microservices/implement-resilient-applications/media/handle-partial-failure/multiple-distributed-dependencies.png b/docs/architecture/microservices/implement-resilient-applications/media/handle-partial-failure/multiple-distributed-dependencies.png deleted file mode 100644 index 2246f67555253..0000000000000 Binary files a/docs/architecture/microservices/implement-resilient-applications/media/handle-partial-failure/multiple-distributed-dependencies.png and /dev/null differ diff --git a/docs/architecture/microservices/implement-resilient-applications/media/handle-partial-failure/partial-failure-amplified-microservices.png b/docs/architecture/microservices/implement-resilient-applications/media/handle-partial-failure/partial-failure-amplified-microservices.png deleted file mode 100644 index 1956d9485fe24..0000000000000 Binary files a/docs/architecture/microservices/implement-resilient-applications/media/handle-partial-failure/partial-failure-amplified-microservices.png and /dev/null differ diff --git a/docs/architecture/microservices/implement-resilient-applications/media/handle-partial-failure/partial-failures-diagram.png b/docs/architecture/microservices/implement-resilient-applications/media/handle-partial-failure/partial-failures-diagram.png deleted file mode 100644 index 360b074a4196b..0000000000000 Binary files a/docs/architecture/microservices/implement-resilient-applications/media/handle-partial-failure/partial-failures-diagram.png and /dev/null differ diff --git a/docs/architecture/microservices/implement-resilient-applications/media/implement-circuit-breaker-pattern/basket-service-inoperative.png b/docs/architecture/microservices/implement-resilient-applications/media/implement-circuit-breaker-pattern/basket-service-inoperative.png deleted file mode 100644 index d5c52c8c29c72..0000000000000 Binary files a/docs/architecture/microservices/implement-resilient-applications/media/implement-circuit-breaker-pattern/basket-service-inoperative.png and /dev/null differ diff --git a/docs/architecture/microservices/implement-resilient-applications/media/implement-circuit-breaker-pattern/failing-middleware-simulation.png b/docs/architecture/microservices/implement-resilient-applications/media/implement-circuit-breaker-pattern/failing-middleware-simulation.png deleted file mode 100644 index 61b9559ead4cb..0000000000000 Binary files a/docs/architecture/microservices/implement-resilient-applications/media/implement-circuit-breaker-pattern/failing-middleware-simulation.png and /dev/null differ diff --git a/docs/architecture/microservices/implement-resilient-applications/media/monitor-app-health/aspnet-core-diagnostics-health-checks.png b/docs/architecture/microservices/implement-resilient-applications/media/monitor-app-health/aspnet-core-diagnostics-health-checks.png deleted file mode 100644 index 2bf67b1ae7954..0000000000000 Binary files a/docs/architecture/microservices/implement-resilient-applications/media/monitor-app-health/aspnet-core-diagnostics-health-checks.png and /dev/null differ diff --git a/docs/architecture/microservices/implement-resilient-applications/media/monitor-app-health/health-check-json-response.png b/docs/architecture/microservices/implement-resilient-applications/media/monitor-app-health/health-check-json-response.png deleted file mode 100644 index 
f22552f8b9df6..0000000000000 Binary files a/docs/architecture/microservices/implement-resilient-applications/media/monitor-app-health/health-check-json-response.png and /dev/null differ diff --git a/docs/architecture/microservices/implement-resilient-applications/media/monitor-app-health/health-check-status-ui.png b/docs/architecture/microservices/implement-resilient-applications/media/monitor-app-health/health-check-status-ui.png deleted file mode 100644 index 716aad2548194..0000000000000 Binary files a/docs/architecture/microservices/implement-resilient-applications/media/monitor-app-health/health-check-status-ui.png and /dev/null differ diff --git a/docs/architecture/microservices/implement-resilient-applications/media/use-httpclientfactory-to-implement-resilient-http-requests/client-application-code.png b/docs/architecture/microservices/implement-resilient-applications/media/use-httpclientfactory-to-implement-resilient-http-requests/client-application-code.png deleted file mode 100644 index 21a16ed5c5bea..0000000000000 Binary files a/docs/architecture/microservices/implement-resilient-applications/media/use-httpclientfactory-to-implement-resilient-http-requests/client-application-code.png and /dev/null differ diff --git a/docs/architecture/microservices/implement-resilient-applications/monitor-app-health.md b/docs/architecture/microservices/implement-resilient-applications/monitor-app-health.md deleted file mode 100644 index 9ac267232c224..0000000000000 --- a/docs/architecture/microservices/implement-resilient-applications/monitor-app-health.md +++ /dev/null @@ -1,273 +0,0 @@ ---- -title: Health monitoring -description: Explore one way of implementing health monitoring. -ms.date: 09/10/2024 ---- -# Health monitoring - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Health monitoring can allow near-real-time information about the state of your containers and microservices. Health monitoring is critical to multiple aspects of operating microservices and is especially important when orchestrators perform partial application upgrades in phases, as explained later. - -Microservices-based applications often use heartbeats or health checks to enable their performance monitors, schedulers, and orchestrators to keep track of the multitude of services. If services cannot send some sort of "I'm alive" signal, either on demand or on a schedule, your application might face risks when you deploy updates, or it might just detect failures too late and not be able to stop cascading failures that can end up in major outages. - -In the typical model, services send reports about their status, and that information is aggregated to provide an overall view of the state of health of your application. If you're using an orchestrator, you can provide health information to your orchestrator's cluster, so that the cluster can act accordingly. If you invest in high-quality health reporting that's customized for your application, you can detect and fix issues for your running application much more easily. - -## Implement health checks in ASP.NET Core services - -When developing an ASP.NET Core microservice or web application, you can use the built-in health checks feature that was released in ASP .NET Core 2.2 ([Microsoft.Extensions.Diagnostics.HealthChecks](https://www.nuget.org/packages/Microsoft.Extensions.Diagnostics.HealthChecks)). Like many ASP.NET Core features, health checks come with a set of services and a middleware. 
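At its simplest, those two halves (registering the health check services and mapping the middleware endpoint) look like the following sketch, a hypothetical minimal Program.cs; the fuller sample later in this section adds a SQL Server check.

```csharp
// Minimal sketch of the two halves: health check services plus the middleware endpoint.
// This is a hypothetical Program.cs, not code from the sample application.
var builder = WebApplication.CreateBuilder(args);

// 1. Register the health check services.
builder.Services.AddHealthChecks();

var app = builder.Build();

// 2. Map the endpoint (middleware) that runs the checks and reports the result.
app.MapHealthChecks("/hc");

app.Run();
```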
- -Health check services and middleware are easy to use and provide capabilities that let you validate whether any external resource needed for your application (like a SQL Server database or a remote API) is working properly. When you use this feature, you can also decide what it means for the resource to be healthy, as we explain later. - -To use this feature effectively, you need to first configure services in your microservices. Second, you need a front-end application that queries for the health reports. That front-end application could be a custom reporting application, or it could be an orchestrator itself that can react accordingly to the health states. - -### Use the HealthChecks feature in your back-end ASP.NET microservices - -In this section, you'll learn how to implement the HealthChecks feature in a sample ASP.NET Core 8.0 Web API application when using the [Microsoft.Extensions.Diagnostics.HealthChecks](https://www.nuget.org/packages/Microsoft.Extensions.Diagnostics.HealthChecks) package. The implementation of this feature in a large-scale microservices solution like eShopOnContainers is explained in the next section. - -To begin, you need to define what constitutes a healthy status for each microservice. In the sample application, we define that the microservice is healthy if its API is accessible via HTTP and its related SQL Server database is also available. - -In .NET 8, with the built-in APIs, you can configure the services and add a health check for the microservice and its dependent SQL Server database in this way: - -```csharp -// Program.cs from .NET 8 Web API sample - -//... -// Registers required services for health checks -builder.Services.AddHealthChecks() - // Add a health check for a SQL Server database - .AddCheck( - "OrderingDB-check", - new SqlConnectionHealthCheck(builder.Configuration["ConnectionString"]), - HealthStatus.Unhealthy, - new string[] { "orderingdb" }); -``` - -In the previous code, the `builder.Services.AddHealthChecks()` call registers the health check services and configures a basic check that returns a status code **200** with "Healthy". Further, the `AddCheck()` extension method configures a custom `SqlConnectionHealthCheck` that checks the related SQL Database's health. - -The `AddCheck()` method adds a new health check with a specified name and an implementation of type `IHealthCheck`. You can add multiple health checks using the `AddCheck()` method, so a microservice won't provide a "healthy" status until all its checks are healthy. - -`SqlConnectionHealthCheck` is a custom class that implements `IHealthCheck`, which takes a connection string as a constructor parameter and executes a simple query to check if the connection to the SQL database is successful. It returns `HealthCheckResult.Healthy()` if the query executes successfully, and a result with the registration's `FailureStatus` and the actual exception when it fails. - -```csharp -// Sample SQL Connection Health Check -public class SqlConnectionHealthCheck : IHealthCheck -{ - private const string DefaultTestQuery = "Select 1"; - - public string ConnectionString { get; } - - public string TestQuery { get; } - - public SqlConnectionHealthCheck(string connectionString) - : this(connectionString, testQuery: DefaultTestQuery) - { - } - - public SqlConnectionHealthCheck(string connectionString, string testQuery) - { - ConnectionString = connectionString ??
throw new ArgumentNullException(nameof(connectionString)); - TestQuery = testQuery; - } - - public async Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context, CancellationToken cancellationToken = default(CancellationToken)) - { - using (var connection = new SqlConnection(ConnectionString)) - { - try - { - await connection.OpenAsync(cancellationToken); - - if (TestQuery != null) - { - var command = connection.CreateCommand(); - command.CommandText = TestQuery; - - await command.ExecuteNonQueryAsync(cancellationToken); - } - } - catch (DbException ex) - { - return new HealthCheckResult(status: context.Registration.FailureStatus, exception: ex); - } - } - - return HealthCheckResult.Healthy(); - } -} -``` - -[!INCLUDE [managed-identities](../../../includes/managed-identities.md)] - -Note that in the previous code, `Select 1` is the query used to check the health of the database. To monitor the availability of your microservices, orchestrators like Kubernetes periodically perform health checks by sending requests to test the microservices. It's important to keep your database queries efficient so that these operations are quick and don’t result in a higher utilization of resources. - -Finally, add middleware that responds to the URL path `/hc`: - -```csharp -// Program.cs from .NET 8 Web Api sample - -app.MapHealthChecks("/hc"); -``` - -When the endpoint `/hc` is invoked, it runs all the health checks that are configured in the `AddHealthChecks()` call and shows the result. - -### HealthChecks implementation in eShopOnContainers - -Microservices in eShopOnContainers rely on multiple services to perform their tasks. For example, the `Catalog.API` microservice from eShopOnContainers depends on many services, such as Azure Blob Storage, SQL Server, and RabbitMQ. Therefore, it has several health checks added using the `AddCheck()` method. For every dependent service, a custom `IHealthCheck` implementation that defines its respective health status needs to be added. - -The open-source project [AspNetCore.Diagnostics.HealthChecks](https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks) solves this problem by providing custom health check implementations for each of these enterprise services, built on top of .NET 8. Each health check is available as an individual NuGet package that can be easily added to the project. eShopOnContainers uses them extensively in all its microservices. - -For instance, in the `Catalog.API` microservice, the following NuGet packages were added: - -![Screenshot of the AspNetCore.Diagnostics.HealthChecks NuGet packages.](./media/monitor-app-health/aspnet-core-diagnostics-health-checks.png) - -**Figure 8-7**.
Custom Health Checks implemented in Catalog.API using AspNetCore.Diagnostics.HealthChecks - -In the following code, the health check implementations are added for each dependent service and then the middleware is configured: - -```csharp -// Extension method from Catalog.API microservice -// -public static IServiceCollection AddCustomHealthCheck(this IServiceCollection services, IConfiguration configuration) -{ - var accountName = configuration.GetValue<string>("AzureStorageAccountName"); - var accountKey = configuration.GetValue<string>("AzureStorageAccountKey"); - - var hcBuilder = services.AddHealthChecks(); - - hcBuilder - .AddSqlServer( - configuration["ConnectionString"], - name: "CatalogDB-check", - tags: new string[] { "catalogdb" }); - - if (!string.IsNullOrEmpty(accountName) && !string.IsNullOrEmpty(accountKey)) - { - hcBuilder - .AddAzureBlobStorage( - $"DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={accountKey};EndpointSuffix=core.windows.net", - name: "catalog-storage-check", - tags: new string[] { "catalogstorage" }); - } - if (configuration.GetValue<bool>("AzureServiceBusEnabled")) - { - hcBuilder - .AddAzureServiceBusTopic( - configuration["EventBusConnection"], - topicName: "eshop_event_bus", - name: "catalog-servicebus-check", - tags: new string[] { "servicebus" }); - } - else - { - hcBuilder - .AddRabbitMQ( - $"amqp://{configuration["EventBusConnection"]}", - name: "catalog-rabbitmqbus-check", - tags: new string[] { "rabbitmqbus" }); - } - - return services; -} -``` - -Finally, add the HealthCheck middleware to listen to the `/hc` endpoint: - -```csharp -// HealthCheck middleware -app.UseHealthChecks("/hc", new HealthCheckOptions() -{ - Predicate = _ => true, - ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse -}); -``` - -### Query your microservices to report about their health status - -When you've configured health checks as described in this article and you have the microservice running in Docker, you can directly check from a browser whether it's healthy. You have to publish the container port in the Docker host, so you can access the container through the external Docker host IP or through `host.docker.internal`, as shown in figure 8-8. - -![Screenshot of the JSON response returned by a health check.](media/monitor-app-health/health-check-json-response.png) - -**Figure 8-8**. Checking health status of a single service from a browser - -In that test, you can see that the `Catalog.API` microservice (running on port 5101) is healthy, returning HTTP status 200 and status information in JSON. The service also checked the health of its SQL Server database dependency and RabbitMQ, so the health check reported itself as healthy. - -## Use watchdogs - -A watchdog is a separate service that can watch health and load across services, and report health about the microservices by querying with the `HealthChecks` library introduced earlier. This can help prevent errors that would not be detected based on the view of a single service. Watchdogs are also a good place to host code that can perform remediation actions for known conditions without user interaction. - -The eShopOnContainers sample contains a web page that displays sample health check reports, as shown in Figure 8-9. This is the simplest watchdog you could have, since it only shows the state of the microservices and web applications in eShopOnContainers. Usually a watchdog also takes actions when it detects unhealthy states.
- -Fortunately, [AspNetCore.Diagnostics.HealthChecks](https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks) also provides [AspNetCore.HealthChecks.UI](https://www.nuget.org/packages/AspNetCore.HealthChecks.UI/) NuGet package that can be used to display the health check results from the configured URIs. - -![Screenshot of the Health Checks UI eShopOnContainers health statuses.](./media/monitor-app-health/health-check-status-ui.png) - -**Figure 8-9**. Sample health check report in eShopOnContainers - -In summary, this watchdog service queries each microservice's "/hc" endpoint. This will execute all the health checks defined within it and return an overall health state depending on all those checks. The HealthChecksUI is easy to consume with a few configuration entries and two lines of code that needs to be added into the *Startup.cs* of the watchdog service. - -Sample configuration file for health check UI: - -```json -// Configuration -{ - "HealthChecksUI": { - "HealthChecks": [ - { - "Name": "Ordering HTTP Check", - "Uri": "http://host.docker.internal:5102/hc" - }, - { - "Name": "Ordering HTTP Background Check", - "Uri": "http://host.docker.internal:5111/hc" - }, - //... - ]} -} -``` - -_Program.cs_ file that adds HealthChecksUI: - -```csharp -// Program.cs from WebStatus(Watch Dog) service -// -// Registers required services for health checks -builder.Services.AddHealthChecksUI(); -// build the app, register other middleware -app.UseHealthChecksUI(config => config.UIPath = "/hc-ui"); -``` - -## Health checks when using orchestrators - -To monitor the availability of your microservices, orchestrators like Kubernetes and Service Fabric periodically perform health checks by sending requests to test the microservices. When an orchestrator determines that a service/container is unhealthy, it stops routing requests to that instance. It also usually creates a new instance of that container. - -For instance, most orchestrators can use health checks to manage zero-downtime deployments. Only when the status of a service/container changes to healthy will the orchestrator start routing traffic to service/container instances. - -Health monitoring is especially important when an orchestrator performs an application upgrade. Some orchestrators (like Azure Service Fabric) update services in phases—for example, they might update one-fifth of the cluster surface for each application upgrade. The set of nodes that's upgraded at the same time is referred to as an *upgrade domain*. After each upgrade domain has been upgraded and is available to users, that upgrade domain must pass health checks before the deployment moves to the next upgrade domain. - -Another aspect of service health is reporting metrics from the service. This is an advanced capability of the health model of some orchestrators, like Service Fabric. Metrics are important when using an orchestrator because they are used to balance resource usage. Metrics also can be an indicator of system health. For example, you might have an application that has many microservices, and each instance reports a requests-per-second (RPS) metric. If one service is using more resources (memory, processor, etc.) than another service, the orchestrator could move service instances around in the cluster to try to maintain even resource utilization. - -Note that Azure Service Fabric provides its own [Health Monitoring model](/azure/service-fabric/service-fabric-health-introduction), which is more advanced than simple health checks. 
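For Kubernetes-style probes, one common arrangement is to expose a cheap liveness endpoint next to the full readiness check by filtering registered checks with tags. The following sketch illustrates the idea; the `self` tag and the endpoint paths are assumptions for this example rather than code taken from eShopOnContainers.

```csharp
// Sketch: separate endpoints for an orchestrator's liveness and readiness probes.
// The "self" tag and the endpoint paths are illustrative assumptions.
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHealthChecks()
    // A trivial check used only to signal that the process is alive.
    .AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "self" });

var app = builder.Build();

// Liveness: run only the "self" check, so the probe stays cheap and fast.
app.MapHealthChecks("/liveness", new HealthCheckOptions
{
    Predicate = registration => registration.Tags.Contains("self")
});

// Readiness: run every registered check (database, message bus, and so on).
app.MapHealthChecks("/hc", new HealthCheckOptions
{
    Predicate = _ => true
});

app.Run();
```

The orchestrator's liveness probe then targets `/liveness`, while its readiness probe targets `/hc`, so a slow dependency marks the instance as not ready without causing the container to be restarted.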
- -## Advanced monitoring: visualization, analysis, and alerts - -The final part of monitoring is visualizing the event stream, reporting on service performance, and alerting when an issue is detected. You can use different solutions for this aspect of monitoring. - -You can use simple custom applications showing the state of your services, like the custom page shown when explaining the [AspNetCore.Diagnostics.HealthChecks](https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks). Or you could use more advanced tools like [Azure Monitor](https://azure.microsoft.com/services/monitor/) to raise alerts based on the stream of events. - -Finally, if you're storing all the event streams, you can use Microsoft Power BI or other solutions like Kibana or Splunk to visualize the data. - -## Additional resources - -- **HealthChecks and HealthChecks UI for ASP.NET Core** \ - - -- **Introduction to Service Fabric health monitoring** \ - [https://learn.microsoft.com/azure/service-fabric/service-fabric-health-introduction](/azure/service-fabric/service-fabric-health-introduction) - -- **Azure Monitor** \ - - ->[!div class="step-by-step"] ->[Previous](implement-circuit-breaker-pattern.md) ->[Next](../secure-net-microservices-web-applications/index.md) diff --git a/docs/architecture/microservices/implement-resilient-applications/partial-failure-strategies.md b/docs/architecture/microservices/implement-resilient-applications/partial-failure-strategies.md deleted file mode 100644 index 2832f1a798d02..0000000000000 --- a/docs/architecture/microservices/implement-resilient-applications/partial-failure-strategies.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Strategies for handling partial failure -description: Get to know several strategies for handling partial failures gracefully. -ms.date: 10/16/2018 ---- -# Strategies to handle partial failure - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -To deal with partial failures, use one of the strategies described here. - -**Use asynchronous communication (for example, message-based communication) across internal microservices**. It's highly advisable not to create long chains of synchronous HTTP calls across the internal microservices because that incorrect design will eventually become the main cause of bad outages. On the contrary, except for the front-end communications between the client applications and the first level of microservices or fine-grained API Gateways, it's recommended to use only asynchronous (message-based) communication once past the initial request/response cycle, across the internal microservices. Eventual consistency and event-driven architectures will help to minimize ripple effects. These approaches enforce a higher level of microservice autonomy and therefore prevent against the problem noted here. - -**Use retries with exponential backoff**. This technique helps to avoid short and intermittent failures by performing call retries a certain number of times, in case the service was not available only for a short time. This might occur due to intermittent network issues or when a microservice/container is moved to a different node in a cluster. However, if these retries are not designed properly with circuit breakers, it can aggravate the ripple effects, ultimately even causing a [Denial of Service (DoS)](https://en.wikipedia.org/wiki/Denial-of-service_attack). - -**Work around network timeouts**. In general, clients should be designed not to block indefinitely and to always use timeouts when waiting for a response. 
Using timeouts ensures that resources are never tied up indefinitely. - -**Use the Circuit Breaker pattern**. In this approach, the client process tracks the number of failed requests. If the error rate exceeds a configured limit, a "circuit breaker" trips so that further attempts fail immediately. (If a large number of requests are failing, that suggests the service is unavailable and that sending requests is pointless.) After a timeout period, the client should try again and, if the new requests are successful, close the circuit breaker. - -**Provide fallbacks**. In this approach, the client process performs fallback logic when a request fails, such as returning cached data or a default value. This approach is suitable for queries, but is more complex for updates or commands. - -**Limit the number of queued requests**. Clients should also impose an upper bound on the number of outstanding requests that a client microservice can send to a particular service. If the limit has been reached, it's probably pointless to make additional requests, and those attempts should fail immediately. In terms of implementation, the Polly [Bulkhead Isolation](https://github.com/App-vNext/Polly/wiki/Bulkhead) policy can be used to fulfill this requirement. This approach is essentially a parallelization throttle with `SemaphoreSlim` as the implementation. It also permits a "queue" outside the bulkhead. You can proactively shed excess load even before execution (for example, because capacity is deemed full). This makes its response to certain failure scenarios faster than a circuit breaker would be, since the circuit breaker waits for the failures. The BulkheadPolicy object in [Polly](https://thepollyproject.azurewebsites.net/) exposes how full the bulkhead and queue are, and offers events on overflow, so it can also be used to drive automated horizontal scaling. - -## Additional resources - -- **Resiliency patterns**\ - [https://learn.microsoft.com/azure/architecture/framework/resiliency/reliability-patterns](/azure/architecture/framework/resiliency/reliability-patterns) - -- **Adding Resilience and Optimizing Performance**\ - [https://learn.microsoft.com/previous-versions/msp-n-p/jj591574(v=pandp.10)](/previous-versions/msp-n-p/jj591574(v=pandp.10)) - -- **Bulkhead.** GitHub repo. Implementation with Polly policy.\ - - -- **Designing resilient applications for Azure**\ - [https://learn.microsoft.com/azure/architecture/framework/resiliency/app-design](/azure/architecture/framework/resiliency/app-design) - -- **Transient fault handling**\ - [https://learn.microsoft.com/azure/architecture/best-practices/transient-faults](/azure/architecture/best-practices/transient-faults) - ->[!div class="step-by-step"] ->[Previous](handle-partial-failure.md) ->[Next](implement-retries-exponential-backoff.md) diff --git a/docs/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests.md b/docs/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests.md deleted file mode 100644 index 6820b6f857021..0000000000000 --- a/docs/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests.md +++ /dev/null @@ -1,211 +0,0 @@ ---- -title: Use IHttpClientFactory to implement resilient HTTP requests -description: Learn how to use IHttpClientFactory, available since .NET Core 2.1, for creating `HttpClient` instances, making it easy for you to use it in your applications.
-ms.date: 01/13/2021 ---- -# Use IHttpClientFactory to implement resilient HTTP requests - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -`IHttpClientFactory` is a contract implemented by `DefaultHttpClientFactory`, an opinionated factory, available since .NET Core 2.1, for creating `HttpClient` instances to be used in your applications. - -## Issues with the original HttpClient class available in .NET - -The original and well-known `HttpClient` class can be easily used, but in some cases, it isn't being properly used by many developers. - -Though this class implements `IDisposable`, declaring and instantiating it within a `using` statement is not preferred because when the `HttpClient` object gets disposed of, the underlying socket is not immediately released, which can lead to a _socket exhaustion_ problem. For more information about this issue, see the blog post [You're using HttpClient wrong and it's destabilizing your software](https://aspnetmonsters.com/2016/08/2016-08-27-httpclientwrong/). - -Therefore, `HttpClient` is intended to be instantiated once and reused throughout the life of an application. Instantiating an `HttpClient` class for every request will exhaust the number of sockets available under heavy loads. That issue will result in `SocketException` errors. Possible approaches to solve that problem are based on the creation of the `HttpClient` object as a singleton or a static object, as explained in this [Microsoft article on HttpClient usage](../../../csharp/tutorials/console-webapiclient.md). This can be a good solution for short-lived console apps or similar, that run a few times a day. - -Another issue that developers run into is when using a shared instance of `HttpClient` in long-running processes. In a situation where the `HttpClient` is instantiated as a singleton or a static object, it fails to handle the DNS changes as described in this [issue](https://github.com/dotnet/runtime/issues/18348) of the dotnet/runtime GitHub repository. - -However, the issue isn't really with `HttpClient` per se, but with the [default constructor for HttpClient](/dotnet/api/system.net.http.httpclient.-ctor#system-net-http-httpclient-ctor), because it creates a new concrete instance of `HttpMessageHandler`, which is the one that has the *socket exhaustion* and DNS change issues mentioned above. - -To address the issues mentioned above and to make `HttpClient` instances manageable, .NET Core 2.1 introduced two approaches, one of them being `IHttpClientFactory`. It's an interface that's used to configure and create `HttpClient` instances in an app through Dependency Injection (DI). It also provides extensions for Polly-based middleware to take advantage of delegating handlers in `HttpClient`. - -The alternative is to use `SocketsHttpHandler` with configured `PooledConnectionLifetime`. This approach is applied to long-lived, `static` or singleton `HttpClient` instances. To learn more about different strategies, see [HttpClient guidelines for .NET](../../../fundamentals/networking/http/httpclient-guidelines.md). - -[Polly](https://thepollyproject.azurewebsites.net/) is a transient-fault-handling library that helps developers add resiliency to their applications, by using some pre-defined policies in a fluent and thread-safe manner. - -## Benefits of using IHttpClientFactory - -The current implementation of `IHttpClientFactory`, which also implements `IHttpMessageHandlerFactory`, offers the following benefits: - -- Provides a central location for naming and configuring logical `HttpClient` objects. For example, you may configure a client (Service Agent) that's pre-configured to access a specific microservice.
-- Codify the concept of outgoing middleware via delegating handlers in `HttpClient` and implement Polly-based middleware to take advantage of Polly's policies for resiliency. -- `HttpClient` already has the concept of delegating handlers that could be linked together for outgoing HTTP requests. You can register HTTP clients into the factory and you can use a Polly handler to use Polly policies for Retry, CircuitBreakers, and so on. -- Manage the lifetime of `HttpMessageHandler` to avoid the mentioned problems/issues that can occur when managing `HttpClient` lifetimes yourself. - -> [!TIP] -> The `HttpClient` instances injected by DI can be disposed of safely, because the associated `HttpMessageHandler` is managed by the factory. Injected `HttpClient` instances are *Transient* from a DI perspective, while `HttpMessageHandler` instances can be regarded as *Scoped*. `HttpMessageHandler` instances have their own DI scopes, **separate** from the application scopes (for example, ASP.NET incoming request scopes). For more information, see [Using HttpClientFactory in .NET](../../../core/extensions/httpclient-factory.md#message-handler-scopes-in-ihttpclientfactory). - -> [!NOTE] -> The implementation of `IHttpClientFactory` (`DefaultHttpClientFactory`) is tightly tied to the DI implementation in the `Microsoft.Extensions.DependencyInjection` NuGet package. If you need to use `HttpClient` without DI or with other DI implementations, consider using a `static` or singleton `HttpClient` with `PooledConnectionLifetime` set up. For more information, see [HttpClient guidelines for .NET](../../../fundamentals/networking/http/httpclient-guidelines.md). - -## Multiple ways to use IHttpClientFactory - -There are several ways that you can use `IHttpClientFactory` in your application: - -- Basic usage -- Use Named Clients -- Use Typed Clients -- Use Generated Clients - -For the sake of brevity, this guidance shows the most structured way to use `IHttpClientFactory`, which is to use Typed Clients (Service Agent pattern). However, all options are documented and are currently listed in this [article covering the `IHttpClientFactory` usage](/aspnet/core/fundamentals/http-requests#consumption-patterns). - -> [!NOTE] -> If your app requires cookies, it might be better to avoid using `IHttpClientFactory` in your app. For alternative ways of managing clients, see [Guidelines for using HTTP clients](../../../fundamentals/networking/http/httpclient-guidelines.md). - -## How to use Typed Clients with IHttpClientFactory - -So, what's a "Typed Client"? It's just an `HttpClient` that's pre-configured for some specific use. This configuration can include specific values such as the base server, HTTP headers, or timeouts. - -The following diagram shows how Typed Clients are used with `IHttpClientFactory`: - -![Diagram showing how typed clients are used with IHttpClientFactory.](./media/use-httpclientfactory-to-implement-resilient-http-requests/client-application-code.png) - -**Figure 8-4**. Using `IHttpClientFactory` with Typed Client classes. - -In the above image, a `ClientService` (used by a controller or client code) uses an `HttpClient` created by the registered `IHttpClientFactory`. This factory assigns an `HttpMessageHandler` from a pool to the `HttpClient`. The `HttpClient` can be configured with Polly's policies when registering the `IHttpClientFactory` in the DI container with the extension method `AddHttpClient()`.
- -To configure the above structure, add `IHttpClientFactory` in your application by installing the `Microsoft.Extensions.Http` NuGet package that includes the `AddHttpClient()` extension method for `IServiceCollection`. This extension method registers the internal `DefaultHttpClientFactory` class to be used as a singleton for the interface `IHttpClientFactory`. It defines a transient configuration for the `HttpMessageHandlerBuilder`. This message handler (`HttpMessageHandler` object), taken from a pool, is used by the `HttpClient` returned from the factory. - -In the next snippet, you can see how `AddHttpClient()` can be used to register Typed Clients (Service Agents) that need to use `HttpClient`. - -```csharp -// Program.cs -//Add http client services at ConfigureServices(IServiceCollection services) -builder.Services.AddHttpClient<ICatalogService, CatalogService>(); -builder.Services.AddHttpClient<IBasketService, BasketService>(); -builder.Services.AddHttpClient<IOrderingService, OrderingService>(); -``` - -Registering the client services as shown in the previous snippet makes the `DefaultHttpClientFactory` create a standard `HttpClient` for each service. The typed client is registered as transient with the DI container. In the preceding code, `AddHttpClient()` registers _CatalogService_, _BasketService_, and _OrderingService_ as transient services so they can be injected and consumed directly without any need for additional registrations. - -You could also add instance-specific configuration in the registration to, for example, configure the base address, and add some resiliency policies, as shown in the following code: - -```csharp -builder.Services.AddHttpClient<ICatalogService, CatalogService>(client => -{ - client.BaseAddress = new Uri(builder.Configuration["BaseUrl"]); -}) - .AddPolicyHandler(GetRetryPolicy()) - .AddPolicyHandler(GetCircuitBreakerPolicy()); -``` - -In this next example, you can see the configuration of one of the above policies: - -```csharp -static IAsyncPolicy<HttpResponseMessage> GetRetryPolicy() -{ - return HttpPolicyExtensions - .HandleTransientHttpError() - .OrResult(msg => msg.StatusCode == System.Net.HttpStatusCode.NotFound) - .WaitAndRetryAsync(6, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt))); -} -``` - -You can find more details about using Polly in the [Next article](implement-http-call-retries-exponential-backoff-polly.md). - -### HttpClient lifetimes - -Each time you get an `HttpClient` object from the `IHttpClientFactory`, a new instance is returned. But each `HttpClient` uses an `HttpMessageHandler` that's pooled and reused by the `IHttpClientFactory` to reduce resource consumption, as long as the `HttpMessageHandler`'s lifetime hasn't expired. - -Pooling of handlers is desirable as each handler typically manages its own underlying HTTP connections; creating more handlers than necessary can result in connection delays. Some handlers also keep connections open indefinitely, which can prevent the handler from reacting to DNS changes. - -The `HttpMessageHandler` objects in the pool have a lifetime that's the length of time that an `HttpMessageHandler` instance in the pool can be reused. The default value is two minutes, but it can be overridden per Typed Client. To override it, call `SetHandlerLifetime()` on the `IHttpClientBuilder` that's returned when creating the client, as shown in the following code: - -```csharp -//Set 5 min as the lifetime for the HttpMessageHandler objects in the pool used for the Catalog Typed Client -builder.Services.AddHttpClient<ICatalogService, CatalogService>() - .SetHandlerLifetime(TimeSpan.FromMinutes(5)); -``` - -Each Typed Client can have its own configured handler lifetime value. Set the lifetime to `InfiniteTimeSpan` to disable handler expiry.
- -### Implement your Typed Client classes that use the injected and configured HttpClient - -As a first step, you need to have your Typed Client classes defined, such as the classes in the sample code, like 'BasketService', 'CatalogService', 'OrderingService', etc. – A Typed Client is a class that accepts an `HttpClient` object (injected through its constructor) and uses it to call some remote HTTP service. For example: - -```csharp -public class CatalogService : ICatalogService -{ - private readonly HttpClient _httpClient; - private readonly string _remoteServiceBaseUrl; - - public CatalogService(HttpClient httpClient) - { - _httpClient = httpClient; - } - - public async Task<Catalog> GetCatalogItems(int page, int take, - int? brand, int? type) - { - var uri = API.Catalog.GetAllCatalogItems(_remoteServiceBaseUrl, - page, take, brand, type); - - var responseString = await _httpClient.GetStringAsync(uri); - - var catalog = JsonConvert.DeserializeObject<Catalog>(responseString); - return catalog; - } -} -``` - -The Typed Client (`CatalogService` in the example) is activated by DI (Dependency Injection), which means it can accept any registered service in its constructor, in addition to `HttpClient`. - -A Typed Client is effectively a transient object, which means a new instance is created each time one is needed. It receives a new `HttpClient` instance each time it's constructed. However, the `HttpMessageHandler` objects in the pool are the objects that are reused by multiple `HttpClient` instances. - -### Use your Typed Client classes - -Finally, once you have your typed classes implemented, you can have them registered and configured with `AddHttpClient()`. After that you can use them wherever services are injected by DI, such as in Razor page code or an MVC web app controller, as shown in the following code from eShopOnContainers: - -```csharp -namespace Microsoft.eShopOnContainers.WebMVC.Controllers -{ - public class CatalogController : Controller - { - private ICatalogService _catalogSvc; - - public CatalogController(ICatalogService catalogSvc) => - _catalogSvc = catalogSvc; - - public async Task<IActionResult> Index(int? BrandFilterApplied, - int? TypesFilterApplied, - int? page, - [FromQuery]string errorMsg) - { - var itemsPage = 10; - var catalog = await _catalogSvc.GetCatalogItems(page ?? 0, - itemsPage, - BrandFilterApplied, - TypesFilterApplied); - //… Additional code - } - - } -} -``` - -Up to this point, the above code snippet only shows the example of performing regular HTTP requests. But the 'magic' comes in the following sections, which show how all the HTTP requests made by `HttpClient` can have resilient policies such as retries with exponential backoff, circuit breakers, security features using auth tokens, or even any other custom feature. And all of these can be done just by adding policies and delegating handlers to your registered Typed Clients.
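As one sketch of that idea, a custom delegating handler can attach an access token to every outgoing request made by a typed client. The class name and the token-retrieval logic below are assumptions for illustration, not code from eShopOnContainers.

```csharp
// Sketch of a custom DelegatingHandler that forwards the caller's access token.
// The class name and the token-retrieval logic are illustrative assumptions.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Http;

public class AuthTokenDelegatingHandler : DelegatingHandler
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public AuthTokenDelegatingHandler(IHttpContextAccessor httpContextAccessor)
        => _httpContextAccessor = httpContextAccessor;

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var context = _httpContextAccessor.HttpContext;

        if (context is not null)
        {
            // Forward the incoming request's access token, if one is available.
            var token = await context.GetTokenAsync("access_token");

            if (!string.IsNullOrEmpty(token))
            {
                request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
            }
        }

        return await base.SendAsync(request, cancellationToken);
    }
}
```

The handler is then registered on the typed client, so it runs for every request that client makes:

```csharp
// Registration sketch: attach the handler to the Catalog typed client.
builder.Services.AddHttpContextAccessor();
builder.Services.AddTransient<AuthTokenDelegatingHandler>();
builder.Services.AddHttpClient<ICatalogService, CatalogService>()
    .AddHttpMessageHandler<AuthTokenDelegatingHandler>();
```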
- -## Additional resources - -- **HttpClient guidelines for .NET** - [https://learn.microsoft.com/en-us/dotnet/fundamentals/networking/http/httpclient-guidelines](../../../fundamentals/networking/http/httpclient-guidelines.md) - -- **Using HttpClientFactory in .NET** - [https://learn.microsoft.com/en-us/dotnet/core/extensions/httpclient-factory](../../../core/extensions/httpclient-factory.md) - -- **Using HttpClientFactory in ASP.NET Core** - [https://learn.microsoft.com/aspnet/core/fundamentals/http-requests](/aspnet/core/fundamentals/http-requests) - -- **HttpClientFactory source code in the `dotnet/runtime` GitHub repository** - - -- **Polly (.NET resilience and transient-fault-handling library)** - - ->[!div class="step-by-step"] ->[Previous](implement-resilient-entity-framework-core-sql-connections.md) ->[Next](implement-http-call-retries-exponential-backoff-polly.md) diff --git a/docs/architecture/microservices/includes/download-alert.md b/docs/architecture/microservices/includes/download-alert.md deleted file mode 100644 index 8a574cabbd223..0000000000000 --- a/docs/architecture/microservices/includes/download-alert.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -author: IEvangelist -ms.author: dapine -ms.date: 04/08/2022 -ms.topic: include ---- - -> [!TIP] -> :::row::: -> :::column span="3"::: -> This content is an excerpt from the eBook, .NET Microservices Architecture for Containerized .NET Applications, available on [.NET Docs](/dotnet/architecture/microservices) or as a free downloadable PDF that can be read offline. -> -> > [!div class="nextstepaction"] -> > [Download PDF](https://dotnet.microsoft.com/download/e-book/microservices-architecture/pdf) -> :::column-end::: -> :::column::: -> :::image type="content" source="../media/cover-thumbnail.png" alt-text=".NET Microservices Architecture for Containerized .NET Applications eBook cover thumbnail."::: -> :::column-end::: -> :::row-end::: diff --git a/docs/architecture/microservices/index.md b/docs/architecture/microservices/index.md deleted file mode 100644 index d37d3dd9edc29..0000000000000 --- a/docs/architecture/microservices/index.md +++ /dev/null @@ -1,177 +0,0 @@ ---- -title: .NET Microservices. Architecture for Containerized .NET Applications -description: .NET Microservices Architecture for Containerized .NET Applications | Microservices are modular and independently deployable services. Docker containers (for Linux and Windows) simplify deployment and testing by bundling a service and its dependencies into a single unit, which is then run in an isolated environment. -ms.date: 01/10/2022 ---- -# .NET Microservices: Architecture for Containerized .NET Applications - -![Book cover](./media/cover-large.png) - -**EDITION v7.0** - Updated to ASP.NET Core 7.0 - -Refer [changelog](https://aka.ms/MicroservicesEbookChangelog) for the book updates and community contributions. - -This guide is an introduction to developing microservices-based applications and managing them using containers. It discusses architectural design and implementation approaches using .NET and Docker containers. - -To make it easier to get started, the guide focuses on a reference containerized and microservice-based application that you can explore. The reference application is available at the [eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers) GitHub repo. 
- -## Action links - -- This e-book is also available in a PDF format (English version only) [Download](https://aka.ms/microservicesebook) - -- Clone/Fork the reference application [eShopOnContainers on GitHub](https://github.com/dotnet-architecture/eShopOnContainers) - -- Watch the [introductory video](https://aka.ms/microservices-video) - -- Get to know the [Microservices Architecture](https://aka.ms/MicroservicesArchitecture) right away - -## Introduction - -Enterprises are increasingly realizing cost savings, solving deployment problems, and improving DevOps and production operations by using containers. Microsoft has been releasing container innovations for Windows and Linux by creating products like Azure Kubernetes Service and Azure Service Fabric, and by partnering with industry leaders like Docker, Mesosphere, and Kubernetes. These products deliver container solutions that help companies build and deploy applications at cloud speed and scale, whatever their choice of platform or tools. - -Docker is becoming the de facto standard in the container industry, supported by the most significant vendors in the Windows and Linux ecosystems. (Microsoft is one of the main cloud vendors supporting Docker). In the future, Docker will probably be ubiquitous in any datacenter in the cloud or on-premises. - -In addition, the [microservices](https://martinfowler.com/articles/microservices.html) architecture is emerging as an important approach for distributed mission-critical applications. In a microservice-based architecture, the application is built on a collection of services that can be developed, tested, deployed, and versioned independently. - -## About this guide - -This guide is an introduction to developing microservices-based applications and managing them using containers. It discusses architectural design and implementation approaches using .NET and Docker containers. To make it easier to get started with containers and microservices, the guide focuses on a reference containerized and microservice-based application that you can explore. The sample application is available at the [eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers) GitHub repo. - -This guide provides foundational development and architectural guidance primarily at a development environment level with a focus on two technologies: Docker and .NET. Our intention is that you read this guide when thinking about your application design without focusing on the infrastructure (cloud or on-premises) of your production environment. You will make decisions about your infrastructure later, when you create your production-ready applications. Therefore, this guide is intended to be infrastructure agnostic and more development-environment-centric. - -After you have studied this guide, your next step would be to learn about production-ready microservices on Microsoft Azure. - -## Version - -This guide has been revised to cover **.NET 7** version along with many additional updates related to the same "wave" of technologies (that is, Azure and additional third-party technologies) coinciding in time with the .NET 7 release. - -> [!NOTE] -> A new version of this eBook is being created for .NET 8 and the new [eShop](https://github.com/dotnet/eshop) sample. - -## What this guide does not cover - -This guide does not focus on the application lifecycle, DevOps, CI/CD pipelines, or team work. 
The complementary guide [Containerized Docker Application Lifecycle with Microsoft Platform and Tools](https://aka.ms/dockerlifecycleebook) focuses on that subject. The current guide also does not provide implementation details on Azure infrastructure, such as information on specific orchestrators. - -### Additional resources - -- **Containerized Docker Application Lifecycle with Microsoft Platform and Tools** (downloadable e-book) - - -## Who should use this guide - -We wrote this guide for developers and solution architects who are new to Docker-based application development and to microservices-based architecture. This guide is for you if you want to learn how to architect, design, and implement proof-of-concept applications with Microsoft development technologies (with special focus on .NET) and with Docker containers. - -You will also find this guide useful if you are a technical decision maker, such as an enterprise architect, who wants an architecture and technology overview before you decide on what approach to select for new and modern distributed applications. - -### How to use this guide - -The first part of this guide introduces Docker containers, discusses how to choose between .NET 7 and the .NET Framework as a development framework, and provides an overview of microservices. This content is for architects and technical decision makers who want an overview but don't need to focus on code implementation details. - -The second part of the guide starts with the [Development process for Docker based applications](./docker-application-development-process/index.md) section. It focuses on the development and microservice patterns for implementing applications using .NET and Docker. This section will be of most interest to developers and architects who want to focus on code and on patterns and implementation details. - -## Related microservice and container-based reference application: eShopOnContainers - -The eShopOnContainers application is an open-source reference app for .NET and microservices that is designed to be deployed using Docker containers. The application consists of multiple subsystems, including several e-store UI front-ends (a Web MVC app, a Web SPA, and a native mobile app). It also includes the back-end microservices and containers for all required server-side operations. - -The purpose of the application is to showcase architectural patterns. **IT IS NOT A PRODUCTION-READY TEMPLATE** to start real-world applications. In fact, the application is in a permanent beta state, as it's also used to test new potentially interesting technologies as they show up. - -[!INCLUDE [feedback](../includes/feedback.md)] - -## Credits - -Co-Authors: - -> **Cesar de la Torre**, Sr. PM, .NET product team, Microsoft Corp. -> -> **Bill Wagner**, Sr. Content Developer, C+E, Microsoft Corp. -> -> **Mike Rousos**, Principal Software Engineer, DevDiv CAT team, Microsoft - -Editors: - -> **Mike Pope** -> -> **Steve Hoag** - -Participants and reviewers: - -> **Jeffrey Richter**, Partner Software Eng, Azure team, Microsoft -> -> **Jimmy Bogard**, Chief Architect at Headspring -> -> **Udi Dahan**, Founder & CEO, Particular Software -> -> **Jimmy Nilsson**, Co-founder and CEO of Factor10 -> -> **Glenn Condron**, Sr. Program Manager, ASP.NET team -> -> **Mark Fussell**, Principal PM Lead, Azure Service Fabric team, Microsoft -> -> **Diego Vega**, PM Lead, Entity Framework team, Microsoft -> -> **Barry Dorrans**, Sr. Security Program Manager -> -> **Rowan Miller**, Sr. 
Program Manager, Microsoft -> -> **Ankit Asthana**, Principal PM Manager, .NET team, Microsoft -> -> **Scott Hunter**, Partner Director PM, .NET team, Microsoft -> -> **Nish Anil**, Sr. Program Manager, .NET team, Microsoft -> -> **Dylan Reisenberger**, Architect and Dev Lead at Polly -> -> **Steve "ardalis" Smith** - Software Architect and Trainer - [Ardalis.com](https://ardalis.com) -> -> **Ian Cooper**, Coding Architect at Brighter -> -> **Unai Zorrilla**, Architect and Dev Lead at Plain Concepts -> -> **Eduard Tomas**, Dev Lead at Plain Concepts -> -> **Ramon Tomas**, Developer at Plain Concepts -> -> **David Sanz**, Developer at Plain Concepts -> -> **Javier Valero**, Chief Operating Officer at Grupo Solutio -> -> **Pierre Millet**, Sr. Consultant, Microsoft -> -> **Michael Friis**, Product Manager, Docker Inc -> -> **Charles Lowell**, Software Engineer, VS CAT team, Microsoft -> -> **Miguel Veloso**, Software Development Engineer at Plain Concepts -> -> **Sumit Ghosh**, Principal Consultant at Neudesic - -## Copyright - -PUBLISHED BY - -Microsoft Developer Division, .NET and Visual Studio product teams - -A division of Microsoft Corporation - -One Microsoft Way - -Redmond, Washington 98052-6399 - -Copyright © 2023 by Microsoft Corporation - -All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher. - -This book is provided "as-is" and expresses the author's views and opinions. The views, opinions and information expressed in this book, including URL and other Internet website references, may change without notice. - -Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or should be inferred. - -Microsoft and the trademarks listed at on the "Trademarks" webpage are trademarks of the Microsoft group of companies. - -Mac and macOS are trademarks of Apple Inc. - -The Docker whale logo is a registered trademark of Docker, Inc. Used by permission. - -All other marks and logos are property of their respective owners. - ->[!div class="step-by-step"] ->[Next](container-docker-introduction/index.md) diff --git a/docs/architecture/microservices/key-takeaways.md b/docs/architecture/microservices/key-takeaways.md deleted file mode 100644 index 26b43a6dfce3d..0000000000000 --- a/docs/architecture/microservices/key-takeaways.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: .NET Microservices Architecture key takeaways -description: Get the key takeaways from the .NET Microservices Architecture for Containerized .NET Applications guide/e-book, to have a quick look at the high-level issues involved when using a microservices architecture, like benefits and drawbacks, DDD patterns for design and development, as well as resiliency, security, and the use of orchestrators. -ms.date: 10/19/2018 ---- -# .NET Microservices Architecture key takeaways - -[!INCLUDE [download-alert](includes/download-alert.md)] - -As a summary and key takeaways, the following are the most important conclusions from this guide. - -**Benefits of using containers.** Container-based solutions provide important cost savings because they help reduce deployment problems caused by failing dependencies in production environments. Containers significantly improve DevOps and production operations. 
- -**Containers will be ubiquitous.** Docker-based containers are becoming the de facto standard in the industry, supported by key vendors in the Windows and Linux ecosystems, such as Microsoft, Amazon AWS, Google, and IBM. Docker will probably soon be ubiquitous in both the cloud and on-premises datacenters. - -**Containers as a unit of deployment.** A Docker container is becoming the standard unit of deployment for any server-based application or service. - -**Microservices.** The microservices architecture is becoming the preferred approach for distributed and large or complex mission-critical applications based on many independent subsystems in the form of autonomous services. In a microservice-based architecture, the application is built as a collection of services that are developed, tested, versioned, deployed, and scaled independently. Each service can include any related autonomous database. - -**Domain-driven design and SOA.** The microservices architecture patterns derive from service-oriented architecture (SOA) and domain-driven design (DDD). When you design and develop microservices for environments with evolving business needs and rules, it's important to consider DDD approaches and patterns. - -**Microservices challenges.** Microservices offer many powerful capabilities, like independent deployment, strong subsystem boundaries, and technology diversity. However, they also raise many new challenges related to distributed application development, such as fragmented and independent data models, resilient communication between microservices, eventual consistency, and operational complexity that results from aggregating logging and monitoring information from multiple microservices. These aspects introduce a much higher complexity level than a traditional monolithic application. As a result, only specific scenarios are suitable for microservice-based applications. These include large and complex applications with multiple evolving subsystems. In these cases, it's worth investing in a more complex software architecture, because it will provide better long-term agility and application maintenance. - -**Containers for any application.** Containers are convenient for microservices, but can also be useful for monolithic applications based on the traditional .NET Framework, when using Windows Containers. The benefits of using Docker, such as solving many deployment-to-production issues and providing state-of-the-art Dev and Test environments, apply to many different types of applications. - -**CLI versus IDE.** With Microsoft tools, you can develop containerized .NET applications using your preferred approach. You can develop with a CLI and an editor-based environment by using the Docker CLI and Visual Studio Code. Or you can use an IDE-focused approach with Visual Studio and its unique features for Docker, such as multi-container debugging. - -**Resilient cloud applications.** In cloud-based systems and distributed systems in general, there is always the risk of partial failure. Since clients and services are separate processes (containers), a service might not be able to respond in a timely way to a client's request. For example, a service might be down because of a partial failure or for maintenance; the service might be overloaded and responding slowly to requests; or it might not be accessible for a short time because of network issues. Therefore, a cloud-based application must embrace those failures and have a strategy in place to respond to those failures. 
These strategies can include retry policies (resending messages or retrying requests) and implementing circuit-breaker patterns to avoid the exponential load of repeated requests. Basically, cloud-based applications must have resilient mechanisms, whether custom ones or high-level ones provided by cloud infrastructure, such as orchestrators or service buses. - -**Security.** Our modern world of containers and microservices can expose new vulnerabilities. There are several ways to implement basic application security, based on authentication and authorization. However, container security must consider additional key components that result in inherently safer applications. A critical element of building safer apps is having a secure way of communicating with other apps and systems, something that often requires credentials, tokens, passwords, and the like, commonly referred to as application secrets. Any secure solution must follow security best practices, such as encrypting secrets while in transit and at rest, and preventing secrets from leaking when consumed by the final application. Those secrets need to be stored and kept safe, for example by using Azure Key Vault. - -**Orchestrators.** Container-based orchestrators, such as Azure Kubernetes Service and Azure Service Fabric, are a key part of any significant microservice and container-based application. These applications carry high complexity, have significant scalability needs, and go through constant evolution. This guide has introduced orchestrators and their role in microservice-based and container-based solutions. If your application needs are moving you toward complex containerized apps, you will find it useful to seek out additional resources for learning more about orchestrators. - ->[!div class="step-by-step"] ->[Previous](secure-net-microservices-web-applications/azure-key-vault-protects-secrets.md) diff --git a/docs/architecture/microservices/media/cover-large.png b/docs/architecture/microservices/media/cover-large.png deleted file mode 100644 index c839c7aba2fd9..0000000000000 Binary files a/docs/architecture/microservices/media/cover-large.png and /dev/null differ diff --git a/docs/architecture/microservices/media/cover-thumbnail.png b/docs/architecture/microservices/media/cover-thumbnail.png deleted file mode 100644 index e9c523d88cff0..0000000000000 Binary files a/docs/architecture/microservices/media/cover-thumbnail.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/apply-simplified-microservice-cqrs-ddd-patterns.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/apply-simplified-microservice-cqrs-ddd-patterns.md deleted file mode 100644 index faed19bed2d57..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/apply-simplified-microservice-cqrs-ddd-patterns.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Applying simplified CQRS and DDD patterns in a microservice -description: .NET Microservices Architecture for Containerized .NET Applications | Understand the overall relation between CQRS and DDD patterns. -ms.date: 01/13/2021 ---- -# Apply simplified CQRS and DDD patterns in a microservice - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -CQRS is an architectural pattern that separates the models for reading and writing data. The related term [Command Query Separation (CQS)](https://martinfowler.com/bliki/CommandQuerySeparation.html) was originally defined by Bertrand Meyer in his book *Object-Oriented Software Construction*.
The basic idea is that you can divide a system's operations into two sharply separated categories: - -- Queries. These queries return a result and don't change the state of the system, and they're free of side effects. - -- Commands. These commands change the state of a system. - -CQS is a simple concept: it is about methods within the same object being either queries or commands. Each method either returns state or mutates state, but not both. Even a single repository pattern object can comply with CQS. CQS can be considered a foundational principle for CQRS. - -[Command and Query Responsibility Segregation (CQRS)](https://martinfowler.com/bliki/CQRS.html) was introduced by Greg Young and strongly promoted by Udi Dahan and others. It's based on the CQS principle, although it's more detailed. It can be considered a pattern based on commands and events plus optionally on asynchronous messages. In many cases, CQRS is related to more advanced scenarios, like having a different physical database for reads (queries) than for writes (updates). Moreover, a more evolved CQRS system might implement [Event-Sourcing (ES)](https://martinfowler.com/eaaDev/EventSourcing.html) for your updates database, so you would only store events in the domain model instead of storing the current-state data. However, this approach is not used in this guide. This guide uses the simplest CQRS approach, which consists of just separating the queries from the commands. - -The separation aspect of CQRS is achieved by grouping query operations in one layer and commands in another layer. Each layer has its own data model (note that we say model, not necessarily a different database) and is built using its own combination of patterns and technologies. More importantly, the two layers can be within the same tier or microservice, as in the example (ordering microservice) used for this guide. Or they could be implemented on different microservices or processes so they can be optimized and scaled out separately without affecting one another. - -CQRS means having two objects for a read/write operation where in other contexts there's one. There are reasons to have a denormalized reads database, which you can learn about in more advanced CQRS literature. But we aren't using that approach here, where the goal is to have more flexibility in the queries instead of limiting the queries with constraints from DDD patterns like aggregates. - -An example of this kind of service is the ordering microservice from the eShopOnContainers reference application. This service implements a microservice based on a simplified CQRS approach. It uses a single data source or database, but two logical models plus DDD patterns for the transactional domain, as shown in Figure 7-2. - -![Diagram showing a high level Simplified CQRS and DDD microservice.](./media/apply-simplified-microservice-cqrs-ddd-patterns/simplified-cqrs-ddd-microservice.png) - -**Figure 7-2**. Simplified CQRS- and DDD-based microservice - -The Logical "Ordering" Microservice includes its Ordering database, which can be, but doesn't have to be, the same Docker host. Having the database in the same Docker host is good for development, but not for production. - -The application layer can be the Web API itself. The important design aspect here is that the microservice has split the queries and ViewModels (data models especially created for the client applications) from the commands, domain model, and transactions following the CQRS pattern. 
This approach keeps the queries independent from restrictions and constraints coming from DDD patterns that only make sense for transactions and updates, as explained in later sections. - -## Additional resources - -- **Greg Young. Versioning in an Event Sourced System** (Free to read online e-book) \ - - ->[!div class="step-by-step"] ->[Previous](index.md) ->[Next](eshoponcontainers-cqrs-ddd-microservice.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/client-side-validation.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/client-side-validation.md deleted file mode 100644 index a3bd536537107..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/client-side-validation.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -title: Client-side validation (validation in the presentation layers) -description: .NET Microservices Architecture for Containerized .NET Applications | Explore the key concepts of client-side validation. -ms.date: 10/08/2018 ---- -# Client-side validation (validation in the presentation layers) - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Even when the source of truth is the domain model and ultimately you must have validation at the domain model level, validation can still be handled at both the domain model level (server side) and the UI (client side). - -Client-side validation is a great convenience for users. It saves time they would otherwise spend waiting for a round trip to the server that might return validation errors. In business terms, even a few fractions of seconds multiplied hundreds of times each day adds up to a lot of time, expense, and frustration. Straightforward and immediate validation enables users to work more efficiently and produce better quality input and output. - -Just as the view model and the domain model are different, view model validation and domain model validation might be similar but serve a different purpose. If you are concerned about DRY (the Don't Repeat Yourself principle), consider that in this case code reuse might also mean coupling, and in enterprise applications it is more important not to couple the server side to the client side than to follow the DRY principle. - -Even when using client-side validation, you should always validate your commands or input DTOs in server code, because the server APIs are a possible attack vector. Usually, doing both is your best bet because if you have a client application, from a UX perspective, it is best to be proactive and not allow the user to enter invalid information. - -Therefore, in client-side code you typically validate the ViewModels. You could also validate the client output DTOs or commands before you send them to the services. - -The implementation of client-side validation depends on what kind of client application you are building. It will be different if you are validating data in a web MVC web application with most of the code in .NET, a SPA web application with that validation being coded in JavaScript or TypeScript, or a mobile app coded with Xamarin and C#. - -## Additional resources - -### Validation in ASP.NET Core apps - -- **Rick Anderson. Adding validation** \ - [https://learn.microsoft.com/aspnet/core/tutorials/first-mvc-app/validation](/aspnet/core/tutorials/first-mvc-app/validation) - -### Validation in SPA Web apps (Angular 2, TypeScript, JavaScript, Blazor WebAssembly) - -- **Form Validation** \ - - -- **Validation.** Breeze documentation. 
\ - - -- **ASP.NET Core Blazor forms and input components** \ - -In summary, these are the most important concepts in regards to validation: - -- Entities and aggregates should enforce their own consistency and be "always valid". Aggregate roots are responsible for multi-entity consistency within the same aggregate. - -- If you think that an entity needs to enter an invalid state, consider using a different object model—for example, using a temporary DTO until you create the final domain entity. - -- If you need to create several related objects, such as an aggregate, and they are only valid once all of them have been created, consider using the Factory pattern. - -- In most of the cases, having redundant validation in the client side is good, because the application can be proactive. - ->[!div class="step-by-step"] ->[Previous](domain-model-layer-validations.md) ->[Next](domain-events-design-implementation.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/cqrs-microservice-reads.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/cqrs-microservice-reads.md deleted file mode 100644 index e84c0a59a8b0f..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/cqrs-microservice-reads.md +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: Implementing reads/queries in a CQRS microservice -description: .NET Microservices Architecture for Containerized .NET Applications | Understand the implementation of the queries side of CQRS on the ordering microservice in eShopOnContainers using Dapper. -ms.date: 01/13/2021 ---- -# Implement reads/queries in a CQRS microservice - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -For reads/queries, the ordering microservice from the eShopOnContainers reference application implements the queries independently from the DDD model and transactional area. This implementation was done primarily because the demands for queries and for transactions are drastically different. Writes execute transactions that must be compliant with the domain logic. Queries, on the other hand, are idempotent and can be segregated from the domain rules. - -The approach is simple, as shown in Figure 7-3. The API interface is implemented by the Web API controllers using any infrastructure, such as a micro Object Relational Mapper (ORM) like Dapper, and returning dynamic ViewModels depending on the needs of the UI applications. - -![Diagram showing high-level queries-side in simplified CQRS.](./media/cqrs-microservice-reads/simple-approach-cqrs-queries.png) - -**Figure 7-3**. The simplest approach for queries in a CQRS microservice - -The simplest approach for the queries-side in a simplified CQRS approach can be implemented by querying the database with a Micro-ORM like Dapper, returning dynamic ViewModels. The query definitions query the database and return a dynamic ViewModel built on the fly for each query. Since the queries are idempotent, they won't change the data no matter how many times you run a query. Therefore, you don't need to be restricted by any DDD pattern used in the transactional side, like aggregates and other patterns, and that is why queries are separated from the transactional area. You query the database for the data that the UI needs and return a dynamic ViewModel that does not need to be statically defined anywhere (no classes for the ViewModels) except in the SQL statements themselves. 
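To show how this queries side might plug into the Web API host, the following is a minimal sketch of registering an `IOrderQueries` implementation in the dependency injection container and handing it the connection string. The registration code, the connection string name, and the assumption that `OrderQueries` exposes a constructor taking that string are all illustrative, not the exact wiring used in eShopOnContainers.

```csharp
// Minimal, illustrative composition-root sketch (not the actual eShopOnContainers startup).
// It registers the queries-side class so controllers can depend on IOrderQueries.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// "OrderingDb" is an assumed connection string name, used only for this sketch.
// This also assumes OrderQueries has a constructor that accepts the connection string.
var connectionString = builder.Configuration.GetConnectionString("OrderingDb");
builder.Services.AddScoped<IOrderQueries>(_ => new OrderQueries(connectionString));

var app = builder.Build();
app.MapControllers();
app.Run();
```

With a registration like this, a controller can simply take `IOrderQueries` as a constructor parameter, which is how the ordering controller shown later in this section consumes the queries.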
- -Since this approach is simple, the code required for the queries side (such as code using a micro ORM like [Dapper](https://github.com/StackExchange/Dapper)) can be implemented [within the same Web API project](https://github.com/dotnet-architecture/eShopOnContainers/blob/main/src/Services/Ordering/Ordering.API/Application/Queries/OrderQueries.cs). Figure 7-4 shows this approach. The queries are defined in the **Ordering.API** microservice project within the eShopOnContainers solution. - -![Screenshot of the Ordering.API project's Queries folder.](./media/cqrs-microservice-reads/ordering-api-queries-folder.png) - -**Figure 7-4**. Queries in the Ordering microservice in eShopOnContainers - -## Use ViewModels specifically made for client apps, independent from domain model constraints - -Since the queries are performed to obtain the data needed by the client applications, the returned type can be specifically made for the clients, based on the data returned by the queries. These models, or Data Transfer Objects (DTOs), are called ViewModels. - -The returned data (ViewModel) can be the result of joining data from multiple entities or tables in the database, or even across multiple aggregates defined in the domain model for the transactional area. In this case, because you are creating queries independent of the domain model, the aggregate boundaries and constraints are ignored and you're free to query any table and column you might need. This approach provides great flexibility and productivity for the developers creating or updating the queries. - -The ViewModels can be static types defined in classes (as is implemented in the ordering microservice). Or they can be created dynamically based on the queries performed, which is agile for developers. - -## Use Dapper as a micro ORM to perform queries - -You can use any micro ORM, Entity Framework Core, or even plain ADO.NET for querying. In the sample application, Dapper was selected for the ordering microservice in eShopOnContainers as a good example of a popular micro ORM. It can run plain SQL queries with great performance, because it's a light framework. Using Dapper, you can write a SQL query that can access and join multiple tables. - -Dapper is an open-source project (originally created by Sam Saffron), and is part of the building blocks used in [Stack Overflow](https://stackoverflow.com/). To use Dapper, you just need to install it through the [Dapper NuGet package](https://www.nuget.org/packages/Dapper), as shown in the following figure: - -![Screenshot of the Dapper package in the NuGet packages view.](./media/cqrs-microservice-reads/drapper-package-nuget.png) - -You also need to add a `using` directive so your code has access to the Dapper extension methods. - -When you use Dapper in your code, you use its extension methods directly on your database connection object. Through the QueryAsync method and other extension methods that extend the connection, you can run queries in a straightforward and performant way. - -## Dynamic versus static ViewModels - -When returning ViewModels from the server-side to client apps, you can think about those ViewModels as DTOs (Data Transfer Objects) that can be different from the internal domain entities of your entity model, because the ViewModels hold the data the way the client app needs. Therefore, in many cases, you can aggregate data coming from multiple domain entities and compose the ViewModels precisely according to how the client app needs that data.
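Before looking at the dynamic versus static choice below, the following sketch shows the same Dapper approach with a parameterized query that joins two tables to shape the data for one specific order. The method name, SQL text, and column names are hypothetical and only illustrate the technique; they are not taken from the eShopOnContainers source.

```csharp
// Hypothetical example: a parameterized Dapper query that joins two tables and
// composes a client-facing shape for a single order. The @id parameter is passed
// as an anonymous object, so no string concatenation is needed.
public async Task<IEnumerable<dynamic>> GetOrderItemsAsync(int id)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        return await connection.QueryAsync<dynamic>(
            @"SELECT o.[Id] as ordernumber,
                     oi.[productname] as productname,
                     oi.[units] as units,
                     oi.[unitprice] as unitprice
              FROM [ordering].[Orders] o
              LEFT JOIN [ordering].[orderitems] oi ON o.Id = oi.orderid
              WHERE o.[Id] = @id",
            new { id });
    }
}
```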
- -Those ViewModels or DTOs can be defined explicitly (as data holder classes), like the `OrderSummary` class shown in a later code snippet. Or, you could just return dynamic ViewModels or dynamic DTOs based on the attributes returned by your queries as a dynamic type. - -### ViewModel as dynamic type - -As shown in the following code, a `ViewModel` can be directly returned by the queries by just returning a *dynamic* type that internally is based on the attributes returned by a query. That means that the subset of attributes to be returned is based on the query itself. Therefore, if you add a new column to the query or join, that data is dynamically added to the returned `ViewModel`. - -```csharp -using Dapper; -using Microsoft.Extensions.Configuration; -using System.Data.SqlClient; -using System.Threading.Tasks; -using System.Dynamic; -using System.Collections.Generic; - -public class OrderQueries : IOrderQueries -{ - public async Task<IEnumerable<dynamic>> GetOrdersAsync() - { - using (var connection = new SqlConnection(_connectionString)) - { - connection.Open(); - return await connection.QueryAsync<dynamic>( - @"SELECT o.[Id] as ordernumber, - o.[OrderDate] as [date],os.[Name] as [status], - SUM(oi.units*oi.unitprice) as total - FROM [ordering].[Orders] o - LEFT JOIN [ordering].[orderitems] oi ON o.Id = oi.orderid - LEFT JOIN [ordering].[orderstatus] os ON o.OrderStatusId = os.Id - GROUP BY o.[Id], o.[OrderDate], os.[Name]"); - } - } -} -``` - -The important point is that by using a dynamic type, the returned collection of data is dynamically assembled as the ViewModel. - -**Pros:** This approach reduces the need to modify static ViewModel classes whenever you update the SQL statement of a query, making this design approach agile when coding, straightforward, and quick to evolve with regard to future changes. - -**Cons:** In the long term, dynamic types can negatively impact the clarity and the compatibility of a service with client apps. In addition, middleware software like Swashbuckle cannot provide the same level of documentation on returned types if using dynamic types. - -### ViewModel as predefined DTO classes - -**Pros**: Having static, predefined ViewModel classes, like "contracts" based on explicit DTO classes, is definitely better for public APIs, and also for long-term microservices, even if they are only used by the same application. - -If you want to specify response types for Swagger, you need to use explicit DTO classes as the return type. Therefore, predefined DTO classes allow you to offer richer information from Swagger. That improves the API documentation and compatibility when consuming an API. - -**Cons**: As mentioned earlier, when updating the code, it takes some more steps to update the DTO classes. - -*Tip based on our experience*: In the queries implemented in the Ordering microservice in eShopOnContainers, we started developing by using dynamic ViewModels because it was straightforward and agile in the early development stages. But once the development stabilized, we chose to refactor the APIs and use static or predefined DTOs for the ViewModels, because it is clearer for the microservice's consumers to know explicit DTO types, used as "contracts". - -In the following example, you can see how the query is returning data by using an explicit ViewModel DTO class: the OrderSummary class.
- -```csharp -using Dapper; -using Microsoft.Extensions.Configuration; -using System.Data.SqlClient; -using System.Threading.Tasks; -using System.Dynamic; -using System.Collections.Generic; - -public class OrderQueries : IOrderQueries -{ - public async Task<IEnumerable<OrderSummary>> GetOrdersAsync() - { - using (var connection = new SqlConnection(_connectionString)) - { - connection.Open(); - return await connection.QueryAsync<OrderSummary>( - @"SELECT o.[Id] as ordernumber, - o.[OrderDate] as [date],os.[Name] as [status], - SUM(oi.units*oi.unitprice) as total - FROM [ordering].[Orders] o - LEFT JOIN [ordering].[orderitems] oi ON o.Id = oi.orderid - LEFT JOIN [ordering].[orderstatus] os ON o.OrderStatusId = os.Id - GROUP BY o.[Id], o.[OrderDate], os.[Name] - ORDER BY o.[Id]"); - } - } -} -``` - -#### Describe response types of Web APIs - -Developers consuming web APIs and microservices are most concerned with what is returned, specifically response types and error codes (if not standard). The response types are handled in the XML comments and data annotations. - -Without proper documentation in the Swagger UI, the consumer lacks knowledge of what types are being returned or what HTTP codes can be returned. That problem is fixed by adding the `ProducesResponseType` attribute, so Swashbuckle can generate richer information about the API return model and values, as shown in the following code: - -```csharp -namespace Microsoft.eShopOnContainers.Services.Ordering.API.Controllers -{ - [Route("api/v1/[controller]")] - [Authorize] - public class OrdersController : Controller - { - //Additional code... - [Route("")] - [HttpGet] - [ProducesResponseType(typeof(IEnumerable<OrderSummary>), - (int)HttpStatusCode.OK)] - public async Task<IActionResult> GetOrders() - { - var userid = _identityService.GetUserIdentity(); - var orders = await _orderQueries - .GetOrdersFromUserAsync(Guid.Parse(userid)); - return Ok(orders); - } - } -} -``` - -However, the `ProducesResponseType` attribute cannot use dynamic as a type; it requires explicit types, like the `OrderSummary` ViewModel DTO shown in the following example: - -```csharp -public class OrderSummary -{ - public int ordernumber { get; set; } - public DateTime date { get; set; } - public string status { get; set; } - public double total { get; set; } -} -// or using C# 9 record types: -public record OrderSummary(int ordernumber, DateTime date, string status, double total); -``` - -This is another reason why explicit returned types are better than dynamic types in the long term. When using the `ProducesResponseType` attribute, you can also specify the expected outcome regarding possible HTTP error/status codes, like 200 and 400. - -In the following image, you can see how Swagger UI shows the ResponseType information. - -![Screenshot of the Swagger UI page for the Ordering API.](./media/cqrs-microservice-reads/swagger-ordering-http-api.png) - -**Figure 7-5**. Swagger UI showing response types and possible HTTP status codes from a Web API - -The image shows some example values based on the ViewModel types and the possible HTTP status codes that can be returned. - -## Additional resources - -- **Dapper** - - -- **Julie Lerman.
Data Points - Dapper, Entity Framework and Hybrid Apps (MSDN magazine article)** - [https://learn.microsoft.com/archive/msdn-magazine/2016/may/data-points-dapper-entity-framework-and-hybrid-apps](/archive/msdn-magazine/2016/may/data-points-dapper-entity-framework-and-hybrid-apps) - -- **ASP.NET Core Web API Help Pages using Swagger** - [https://learn.microsoft.com/aspnet/core/tutorials/web-api-help-pages-using-swagger?tabs=visual-studio](/aspnet/core/tutorials/web-api-help-pages-using-swagger?tabs=visual-studio) - -- **Create record types** [https://learn.microsoft.com/dotnet/csharp/whats-new/tutorials/records](../../../csharp/tutorials/records.md) - ->[!div class="step-by-step"] ->[Previous](eshoponcontainers-cqrs-ddd-microservice.md) ->[Next](ddd-oriented-microservice.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/ddd-oriented-microservice.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/ddd-oriented-microservice.md deleted file mode 100644 index 08e7f0bc7f362..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/ddd-oriented-microservice.md +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: Designing a DDD-oriented microservice -description: .NET Microservices Architecture for Containerized .NET Applications | Understand the design of the DDD-oriented ordering microservice and its application layers. -ms.date: 01/13/2021 ---- -# Design a DDD-oriented microservice - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Domain-driven design (DDD) advocates modeling based on the reality of business as relevant to your use cases. In the context of building applications, DDD talks about problems as domains. It describes independent problem areas as Bounded Contexts (each Bounded Context correlates to a microservice), and emphasizes a common language to talk about these problems. It also suggests many technical concepts and patterns, like domain entities with rich models (no [anemic-domain model](https://martinfowler.com/bliki/AnemicDomainModel.html)), value objects, aggregates, and aggregate root (or root entity) rules to support the internal implementation. This section introduces the design and implementation of those internal patterns. - -Sometimes these DDD technical rules and patterns are perceived as obstacles that have a steep learning curve for implementing DDD approaches. But the important part is not the patterns themselves, but organizing the code so it is aligned to the business problems, and using the same business terms (ubiquitous language). In addition, DDD approaches should be applied only if you are implementing complex microservices with significant business rules. Simpler responsibilities, like a CRUD service, can be managed with simpler approaches. - -Where to draw the boundaries is the key task when designing and defining a microservice. DDD patterns help you understand the complexity in the domain. For the domain model for each Bounded Context, you identify and define the entities, value objects, and aggregates that model your domain. You build and refine a domain model that is contained within a boundary that defines your context. And that is explicit in the form of a microservice. The components within those boundaries end up being your microservices, although in some cases a BC or business microservices can be composed of several physical services. DDD is about boundaries and so are microservices. 
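To make that vocabulary concrete before moving on to boundaries, the following minimal sketch (illustrative only, not taken from eShopOnContainers) shows the usual distinction between an entity, which is tracked by identity, and a value object, which is defined purely by its values:

```csharp
using System;

// Illustrative only: an entity keeps its identity even when its data changes,
// while a value object is interchangeable with any other instance holding the same values.
public class Buyer                                     // Entity: identified by Id.
{
    public Guid Id { get; } = Guid.NewGuid();
    public string FullName { get; private set; }

    public Buyer(string fullName) => FullName = fullName;

    public void Rename(string fullName) => FullName = fullName;   // Same buyer, new data.
}

public record Address(string Street, string City, string ZipCode); // Value object: value equality.
```

C# records are a convenient way to get value-based equality, although the same idea can be implemented with classes that override `Equals`.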
- -## Keep the microservice context boundaries relatively small - -Determining where to place boundaries between Bounded Contexts balances two competing goals. First, you want to initially create the smallest possible microservices, although that should not be the main driver; you should create a boundary around things that need cohesion. Second, you want to avoid chatty communications between microservices. These goals can contradict one another. You should balance them by decomposing the system into as many small microservices as you can until you see communication boundaries growing quickly with each additional attempt to separate a new Bounded Context. Cohesion is key within a single bounded context. - -It is similar to the [Inappropriate Intimacy code smell](https://sourcemaking.com/refactoring/smells/inappropriate-intimacy) when implementing classes. If two microservices need to collaborate a lot with each other, they should probably be the same microservice. - -Another way to look at this aspect is autonomy. If a microservice must rely on another service to directly service a request, it is not truly autonomous. - -## Layers in DDD microservices - -Most enterprise applications with significant business and technical complexity are defined by multiple layers. The layers are a logical artifact, and are not related to the deployment of the service. They exist to help developers manage the complexity in the code. Different layers (like the domain model layer versus the presentation layer, etc.) might have different types, which mandate translations between those types. - -For example, an entity could be loaded from the database. Then part of that information, or an aggregation of information including additional data from other entities, can be sent to the client UI through a REST Web API. The point here is that the domain entity is contained within the domain model layer and should not be propagated to other areas that it does not belong to, like to the presentation layer. - -Additionally, you need to have always-valid entities (see the [Designing validations in the domain model layer](domain-model-layer-validations.md) section) controlled by aggregate roots (root entities). Therefore, entities should not be bound to client views, because at the UI level some data might still not be validated. This reason is what the ViewModel is for. The ViewModel is a data model exclusively for presentation layer needs. The domain entities do not belong directly to the ViewModel. Instead, you need to translate between ViewModels and domain entities and vice versa. - -When tackling complexity, it is important to have a domain model controlled by aggregate roots that make sure that all the invariants and rules related to that group of entities (aggregate) are performed through a single entry-point or gate, the aggregate root. - -Figure 7-5 shows how a layered design is implemented in the eShopOnContainers application. - -![Diagram showing the layers in a domain-driven design microservice.](./media/ddd-oriented-microservice/domain-driven-design-microservice.png) - -**Figure 7-5**. DDD layers in the ordering microservice in eShopOnContainers - -The three layers in a DDD microservice like Ordering. Each layer is a VS project: Application layer is Ordering.API, Domain layer is Ordering.Domain and the Infrastructure layer is Ordering.Infrastructure. You want to design the system so that each layer communicates only with certain other layers. 
That approach may be easier to enforce if layers are implemented as different class libraries, because you can clearly identify what dependencies are set between libraries. For instance, the domain model layer should not take a dependency on any other layer (the domain model classes should be Plain Old Class Objects, or [POCO](../../../standard/glossary.md#poco), classes). As shown in Figure 7-6, the **Ordering.Domain** layer library has dependencies only on the .NET libraries or NuGet packages, but not on any other custom library, such as data library or persistence library. - -![Screenshot of Ordering.Domain dependencies.](./media/ddd-oriented-microservice/ordering-domain-dependencies.png) - -**Figure 7-6**. Layers implemented as libraries allow better control of dependencies between layers - -### The domain model layer - -Eric Evans's excellent book [Domain Driven Design](https://domainlanguage.com/ddd/) says the following about the domain model layer and the application layer. - -**Domain Model Layer**: Responsible for representing concepts of the business, information about the business situation, and business rules. State that reflects the business situation is controlled and used here, even though the technical details of storing it are delegated to the infrastructure. This layer is the heart of business software. - -The domain model layer is where the business is expressed. When you implement a microservice domain model layer in .NET, that layer is coded as a class library with the domain entities that capture data plus behavior (methods with logic). - -Following the [Persistence Ignorance](https://deviq.com/persistence-ignorance/) and the [Infrastructure Ignorance](https://ayende.com/blog/3137/infrastructure-ignorance) principles, this layer must completely ignore data persistence details. These persistence tasks should be performed by the infrastructure layer. Therefore, this layer should not take direct dependencies on the infrastructure, which means that an important rule is that your domain model entity classes should be POCOs. - -Domain entities should not have any direct dependency (like deriving from a base class) on any data access infrastructure framework like Entity Framework or NHibernate. Ideally, your domain entities should not derive from or implement any type defined in any infrastructure framework. - -Most modern ORM frameworks like Entity Framework Core allow this approach, so that your domain model classes are not coupled to the infrastructure. However, having POCO entities is not always possible when using certain NoSQL databases and frameworks, like Actors and Reliable Collections in Azure Service Fabric. - -Even when it is important to follow the Persistence Ignorance principle for your Domain model, you should not ignore persistence concerns. It is still important to understand the physical data model and how it maps to your entity object model. Otherwise you can create impossible designs. - -Also, this aspect does not mean you can take a model designed for a relational database and directly move it to a NoSQL or document-oriented database. In some entity models, the model might fit, but usually it does not. There are still constraints that your entity model must adhere to, based both on the storage technology and ORM technology. 
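As a rough sketch of what such a persistence-ignorant domain entity with behavior can look like, the following simplified aggregate root is loosely inspired by the ordering domain; the member names and rules are illustrative, not the actual eShopOnContainers types:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A POCO aggregate root: plain C#, no EF Core base class, attributes, or data access code.
// State changes go through methods that enforce the aggregate's invariants.
public class Order
{
    private readonly List<OrderItem> _orderItems = new();

    public int Id { get; private set; }
    public DateTime OrderDate { get; private set; } = DateTime.UtcNow;
    public IReadOnlyCollection<OrderItem> OrderItems => _orderItems.AsReadOnly();

    public void AddOrderItem(int productId, string productName, decimal unitPrice, int units)
    {
        if (units <= 0)
            throw new ArgumentException("Units must be positive.", nameof(units));

        var existing = _orderItems.FirstOrDefault(i => i.ProductId == productId);
        if (existing is not null)
            existing.AddUnits(units);          // Invariant: one order line per product.
        else
            _orderItems.Add(new OrderItem(productId, productName, unitPrice, units));
    }
}

public class OrderItem
{
    public int ProductId { get; private set; }
    public string ProductName { get; private set; }
    public decimal UnitPrice { get; private set; }
    public int Units { get; private set; }

    public OrderItem(int productId, string productName, decimal unitPrice, int units)
        => (ProductId, ProductName, UnitPrice, Units) = (productId, productName, unitPrice, units);

    public void AddUnits(int units) => Units += units;
}
```

Because nothing in this sketch references a persistence framework, the infrastructure layer can map and store these classes without the domain model layer taking a dependency on it.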
- -### The application layer - -Moving on to the application layer, we can again cite Eric Evans's book [Domain Driven Design](https://domainlanguage.com/ddd/): - -**Application Layer:** Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program. - -A microservice's application layer in .NET is commonly coded as an ASP.NET Core Web API project. The project implements the microservice's interaction, remote network access, and the external Web APIs used from the UI or client apps. It includes queries if using a CQRS approach, commands accepted by the microservice, and even the event-driven communication between microservices (integration events). The ASP.NET Core Web API that represents the application layer must not contain business rules or domain knowledge (especially domain rules for transactions or updates); these should be owned by the domain model class library. The application layer must only coordinate tasks and must not hold or define any domain state (domain model). It delegates the execution of business rules to the domain model classes themselves (aggregate roots and domain entities), which will ultimately update the data within those domain entities. - -Basically, the application logic is where you implement all use cases that depend on a given front end. For example, the implementation related to a Web API service. - -The goal is that the domain logic in the domain model layer, its invariants, the data model, and related business rules must be completely independent from the presentation and application layers. Most of all, the domain model layer must not directly depend on any infrastructure framework. - -### The infrastructure layer - -The infrastructure layer is how the data that is initially held in domain entities (in memory) is persisted in databases or another persistent store. An example is using Entity Framework Core code to implement the Repository pattern classes that use a DBContext to persist data in a relational database. - -In accordance with the previously mentioned [Persistence Ignorance](https://deviq.com/persistence-ignorance/) and [Infrastructure Ignorance](https://ayende.com/blog/3137/infrastructure-ignorance) principles, the infrastructure layer must not "contaminate" the domain model layer. You must keep the domain model entity classes agnostic from the infrastructure that you use to persist data (EF or any other framework) by not taking hard dependencies on frameworks. Your domain model layer class library should have only your domain code, just POCO entity classes implementing the heart of your software and completely decoupled from infrastructure technologies. - -Thus, your layers or class libraries and projects should ultimately depend on your domain model layer (library), not vice versa, as shown in Figure 7-7. - -![Diagram showing dependencies that exist between DDD service layers.](./media/ddd-oriented-microservice/ddd-service-layer-dependencies.png) - -**Figure 7-7**. 
Dependencies between layers in DDD - -Dependencies in a DDD Service, the Application layer depends on Domain and Infrastructure, and Infrastructure depends on Domain, but Domain doesn't depend on any layer. This layer design should be independent for each microservice. As noted earlier, you can implement the most complex microservices following DDD patterns, while implementing simpler data-driven microservices (simple CRUD in a single layer) in a simpler way. - -#### Additional resources - -- **DevIQ. Persistence Ignorance principle** \ - - -- **Oren Eini. Infrastructure Ignorance** \ - - -- **Angel Lopez. Layered Architecture In Domain-Driven Design** \ - - ->[!div class="step-by-step"] ->[Previous](cqrs-microservice-reads.md) ->[Next](microservice-domain-model.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-events-design-implementation.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-events-design-implementation.md deleted file mode 100644 index 456a931e2f448..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-events-design-implementation.md +++ /dev/null @@ -1,358 +0,0 @@ ---- -title: "Domain events: Design and implementation" -description: .NET Microservices Architecture for Containerized .NET Applications | Get an in-depth view of domain events, a key concept to establish communication between aggregates. -ms.date: 10/08/2018 ---- -# Domain events: Design and implementation - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Use domain events to explicitly implement side effects of changes within your domain. In other words, and using DDD terminology, use domain events to explicitly implement side effects across multiple aggregates. Optionally, for better scalability and less impact in database locks, use eventual consistency between aggregates within the same domain. - -## What is a domain event? - -An event is something that has happened in the past. A domain event is, something that happened in the domain that you want other parts of the same domain (in-process) to be aware of. The notified parts usually react somehow to the events. - -An important benefit of domain events is that side effects can be expressed explicitly. - -For example, if you're just using Entity Framework and there has to be a reaction to some event, you would probably code whatever you need close to what triggers the event. So the rule gets coupled, implicitly, to the code, and you have to look into the code to, hopefully, realize the rule is implemented there. - -On the other hand, using domain events makes the concept explicit, because there's a `DomainEvent` and at least one `DomainEventHandler` involved. - -For example, in the eShop application, when an order is created, the user becomes a buyer, so an `OrderStartedDomainEvent` is raised and handled in the `ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler`, so the underlying concept is evident. - -In short, domain events help you to express, explicitly, the domain rules, based in the ubiquitous language provided by the domain experts. Domain events also enable a better separation of concerns among classes within the same domain. - -It's important to ensure that, just like a database transaction, either all the operations related to a domain event finish successfully or none of them do. - -Domain events are similar to messaging-style events, with one important difference. 
With real messaging, message queuing, message brokers, or a service bus using AMQP, a message is always sent asynchronously and communicated across processes and machines. This is useful for integrating multiple Bounded Contexts, microservices, or even different applications. However, with domain events, you want to raise an event from the domain operation you're currently running, but you want any side effects to occur within the same domain. - -The domain events and their side effects (the actions triggered afterwards that are managed by event handlers) should occur almost immediately, usually in-process, and within the same domain. Thus, domain events could be synchronous or asynchronous. Integration events, however, should always be asynchronous. - -## Domain events versus integration events - -Semantically, domain and integration events are the same thing: notifications about something that just happened. However, their implementation must be different. Domain events are just messages pushed to a domain event dispatcher, which could be implemented as an in-memory mediator based on an IoC container or any other method. - -On the other hand, the purpose of integration events is to propagate committed transactions and updates to additional subsystems, whether they are other microservices, Bounded Contexts or even external applications. Hence, they should occur only if the entity is successfully persisted, otherwise it's as if the entire operation never happened. - -As mentioned before, integration events must be based on asynchronous communication between multiple microservices (other Bounded Contexts) or even external systems/applications. - -Thus, the event bus interface needs some infrastructure that allows inter-process and distributed communication between potentially remote services. It can be based on a commercial service bus, queues, a shared database used as a mailbox, or any other distributed and ideally push based messaging system. - -## Domain events as a preferred way to trigger side effects across multiple aggregates within the same domain - -If executing a command related to one aggregate instance requires additional domain rules to be run on one or more additional aggregates, you should design and implement those side effects to be triggered by domain events. As shown in Figure 7-14, and as one of the most important use cases, a domain event should be used to propagate state changes across multiple aggregates within the same domain model. - -![Diagram showing a domain event controlling data to a Buyer aggregate.](./media/domain-events-design-implementation/domain-model-ordering-microservice.png) - -**Figure 7-14**. Domain events to enforce consistency between multiple aggregates within the same domain - -Figure 7-14 shows how consistency between aggregates is achieved by domain events. When the user initiates an order, the Order Aggregate sends an `OrderStarted` domain event. The OrderStarted domain event is handled by the Buyer Aggregate to create a Buyer object in the ordering microservice, based on the original user info from the identity microservice (with information provided in the CreateOrder command). - -Alternately, you can have the aggregate root subscribed for events raised by members of its aggregates (child entities). For instance, each OrderItem child entity can raise an event when the item price is higher than a specific amount, or when the product item amount is too high. 
The aggregate root can then receive those events and perform a global calculation or aggregation. - -It's important to understand that this event-based communication is not implemented directly within the aggregates; you need to implement domain event handlers. - -Handling the domain events is an application concern. The domain model layer should only focus on the domain logic—things that a domain expert would understand, not application infrastructure like handlers and side-effect persistence actions using repositories. Therefore, the application layer level is where you should have domain event handlers triggering actions when a domain event is raised. - -Domain events can also be used to trigger any number of application actions, and what is more important, must be open to increase that number in the future in a decoupled way. For instance, when the order is started, you might want to publish a domain event to propagate that info to other aggregates or even to raise application actions like notifications. - -The key point is the open number of actions to be executed when a domain event occurs. Eventually, the actions and rules in the domain and application will grow. The complexity or number of side-effect actions when something happens will grow, but if your code were coupled with "glue" (that is, creating specific objects with `new`), then every time you needed to add a new action you would also need to change working and tested code. - -This change could result in new bugs and this approach also goes against the [Open/Closed principle](https://en.wikipedia.org/wiki/Open/closed_principle) from [SOLID](https://en.wikipedia.org/wiki/SOLID). Not only that, the original class that was orchestrating the operations would grow and grow, which goes against the [Single Responsibility Principle (SRP)](https://en.wikipedia.org/wiki/Single_responsibility_principle). - -On the other hand, if you use domain events, you can create a fine-grained and decoupled implementation by segregating responsibilities using this approach: - -1. Send a command (for example, CreateOrder). -2. Receive the command in a command handler. - - Execute a single aggregate's transaction. - - (Optional) Raise domain events for side effects (for example, OrderStartedDomainEvent). -3. Handle domain events (within the current process) that will execute an open number of side effects in multiple aggregates or application actions. For example: - - Verify or create buyer and payment method. - - Create and send a related integration event to the event bus to propagate states across microservices or trigger external actions like sending an email to the buyer. - - Handle other side effects. - -As shown in Figure 7-15, starting from the same domain event, you can handle multiple actions related to other aggregates in the domain or additional application actions you need to perform across microservices connecting with integration events and the event bus. - -![Diagram showing a domain event passing data to several event handlers.](./media/domain-events-design-implementation/aggregate-domain-event-handlers.png) - -**Figure 7-15**. Handling multiple actions per domain - -There can be several handlers for the same domain event in the Application Layer, one handler can solve consistency between aggregates and another handler can publish an integration event, so other microservices can do something with it. 
The event handlers are typically in the application layer, because you'll use infrastructure objects like repositories or an application API for the microservice's behavior. In that sense, event handlers are similar to command handlers, so both are part of the application layer. The important difference is that a command should be processed only once. A domain event could be processed zero or *n* times, because it can be received by multiple receivers or event handlers with a different purpose for each handler. - -Having an open number of handlers per domain event allows you to add as many domain rules as needed, without affecting current code. For instance, implementing the following business rule might be as easy as adding a few event handlers (or even just one): - -> When the total amount purchased by a customer in the store, across any number of orders, exceeds $6,000, apply a 10% off discount to every new order and notify the customer with an email about that discount for future orders. - -## Implement domain events - -In C#, a domain event is simply a data-holding structure or class, like a DTO, with all the information related to what just happened in the domain, as shown in the following example: - -```csharp -public class OrderStartedDomainEvent : INotification -{ - public string UserId { get; } - public string UserName { get; } - public int CardTypeId { get; } - public string CardNumber { get; } - public string CardSecurityNumber { get; } - public string CardHolderName { get; } - public DateTime CardExpiration { get; } - public Order Order { get; } - - public OrderStartedDomainEvent(Order order, string userId, string userName, - int cardTypeId, string cardNumber, - string cardSecurityNumber, string cardHolderName, - DateTime cardExpiration) - { - Order = order; - UserId = userId; - UserName = userName; - CardTypeId = cardTypeId; - CardNumber = cardNumber; - CardSecurityNumber = cardSecurityNumber; - CardHolderName = cardHolderName; - CardExpiration = cardExpiration; - } -} -``` - -This is essentially a class that holds all the data related to the OrderStarted event. - -In terms of the ubiquitous language of the domain, since an event is something that happened in the past, the class name of the event should be represented as a past-tense verb, like OrderStartedDomainEvent or OrderShippedDomainEvent. That's how the domain event is implemented in the ordering microservice in eShop. - -As noted earlier, an important characteristic of events is that since an event is something that happened in the past, it shouldn't change. Therefore, it must be an immutable class. You can see in the previous code that the properties are read-only. There's no way to update the object, you can only set values when you create it. - -It's important to highlight here that if domain events were to be handled asynchronously, using a queue that required serializing and deserializing the event objects, the properties would have to be "private set" instead of read-only, so the deserializer would be able to assign the values upon dequeuing. This is not an issue in the Ordering microservice, as the domain event pub/sub is implemented synchronously using MediatR. - -### Raise domain events - -The next question is how to raise a domain event so it reaches its related event handlers. You can use multiple approaches. 
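Before looking at how events are raised, it can help to see where they end up. The following is a minimal sketch of a MediatR handler for the `OrderStartedDomainEvent` shown above; the handler name and its body are illustrative only. In eShop, the corresponding handler (`ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler`) validates or creates the Buyer aggregate through a repository, as described earlier.

```csharp
using MediatR;
using Microsoft.Extensions.Logging;
using System.Threading;
using System.Threading.Tasks;

// Illustrative application-layer handler: it reacts to the domain event in-process.
// The side effect here is reduced to a log entry; a real handler would act on
// another aggregate (for example, the Buyer) through a repository.
public class OrderStartedDomainEventHandler : INotificationHandler<OrderStartedDomainEvent>
{
    private readonly ILogger<OrderStartedDomainEventHandler> _logger;

    public OrderStartedDomainEventHandler(ILogger<OrderStartedDomainEventHandler> logger)
        => _logger = logger;

    public Task Handle(OrderStartedDomainEvent notification, CancellationToken cancellationToken)
    {
        _logger.LogInformation("Order started for user {UserId}", notification.UserId);
        return Task.CompletedTask;
    }
}
```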
- -Udi Dahan originally proposed (for example, in several related posts, such as [Domain Events – Take 2](https://udidahan.com/2008/08/25/domain-events-take-2/)) using a static class for managing and raising the events. This might include a static class named DomainEvents that would raise domain events immediately when it's called, using syntax like `DomainEvents.Raise(Event myEvent)`. Jimmy Bogard wrote a blog post ([Strengthening your domain: Domain Events](https://lostechies.com/jimmybogard/2010/04/08/strengthening-your-domain-domain-events/)) that recommends a similar approach. - -However, when the domain events class is static, it also dispatches to handlers immediately. This makes testing and debugging more difficult, because the event handlers with side-effects logic are executed immediately after the event is raised. When you're testing and debugging, you just want to focus on what is happening in the current aggregate classes; you don't want to suddenly be redirected to other event handlers for side effects related to other aggregates or application logic. This is why other approaches have evolved, as explained in the next section. - -#### The deferred approach to raise and dispatch events - -Instead of dispatching to a domain event handler immediately, a better approach is to add the domain events to a collection and then to dispatch those domain events *right before* or *right after* committing the transaction (as with SaveChanges in EF). (This approach was described by Jimmy Bogard in this post [A better domain events pattern](https://lostechies.com/jimmybogard/2014/05/13/a-better-domain-events-pattern/).) - -Deciding if you send the domain events right before or right after committing the transaction is important, since it determines whether you will include the side effects as part of the same transaction or in different transactions. In the latter case, you need to deal with eventual consistency across multiple aggregates. This topic is discussed in the next section. - -The deferred approach is what eShop uses. First, you add the events happening in your entities into a collection or list of events per entity. That list should be part of the entity object, or even better, part of your base entity class, as shown in the following example of the Entity base class: - -```csharp -public abstract class Entity -{ - //... - private List<INotification> _domainEvents; - public List<INotification> DomainEvents => _domainEvents; - - public void AddDomainEvent(INotification eventItem) - { - _domainEvents = _domainEvents ?? new List<INotification>(); - _domainEvents.Add(eventItem); - } - - public void RemoveDomainEvent(INotification eventItem) - { - _domainEvents?.Remove(eventItem); - } - //... Additional code -} -``` - -When you want to raise an event, you just add it to the event collection from code at any method of the aggregate-root entity. - -The following code, part of the [Order aggregate-root at eShop](https://github.com/dotnet/eShop/blob/main/src/Ordering.Domain/AggregatesModel/OrderAggregate/Order.cs), shows an example: - -```csharp -var orderStartedDomainEvent = new OrderStartedDomainEvent(this, //Order object - userId, userName, - cardTypeId, cardNumber, - cardSecurityNumber, - cardHolderName, - cardExpiration); -this.AddDomainEvent(orderStartedDomainEvent); -``` - -Notice that the only thing that the AddDomainEvent method is doing is adding an event to the list. No event is dispatched yet, and no event handler is invoked yet. - -You actually want to dispatch the events later on, when you commit the transaction to the database.
If you are using Entity Framework Core, that means in the SaveChanges method of your EF DbContext, as in the following code: - -```csharp -// EF Core DbContext -public class OrderingContext : DbContext, IUnitOfWork -{ - // ... - public async Task SaveEntitiesAsync(CancellationToken cancellationToken = default(CancellationToken)) - { - // Dispatch Domain Events collection. - // Choices: - // A) Right BEFORE committing data (EF SaveChanges) into the DB. This makes - // a single transaction including side effects from the domain event - // handlers that are using the same DbContext with Scope lifetime - // B) Right AFTER committing data (EF SaveChanges) into the DB. This makes - // multiple transactions. You will need to handle eventual consistency and - // compensatory actions in case of failures. - await _mediator.DispatchDomainEventsAsync(this); - - // After this line runs, all the changes (from the Command Handler and Domain - // event handlers) performed through the DbContext will be committed - var result = await base.SaveChangesAsync(); - } -} -``` - -With this code, you dispatch the entity events to their respective event handlers. - -The overall result is that you've decoupled the raising of a domain event (a simple add into a list in memory) from dispatching it to an event handler. In addition, depending on what kind of dispatcher you are using, you could dispatch the events synchronously or asynchronously. - -Be aware that transactional boundaries come into significant play here. If your unit of work and transaction can span more than one aggregate (as when using EF Core and a relational database), this can work well. But if the transaction cannot span aggregates, you have to implement additional steps to achieve consistency. This is another reason why persistence ignorance is not universal; it depends on the storage system you use. - -### Single transaction across aggregates versus eventual consistency across aggregates - -The question of whether to perform a single transaction across aggregates versus relying on eventual consistency across those aggregates is a controversial one. Many DDD authors like Eric Evans and Vaughn Vernon advocate the rule that one transaction = one aggregate and therefore argue for eventual consistency across aggregates. For example, in his book *Domain-Driven Design*, Eric Evans says this: - -> Any rule that spans Aggregates will not be expected to be up-to-date at all times. Through event processing, batch processing, or other update mechanisms, other dependencies can be resolved within some specific time. (page 128) - -Vaughn Vernon says the following in [Effective Aggregate Design. Part II: Making Aggregates Work Together](https://dddcommunity.org/wp-content/uploads/files/pdf_articles/Vernon_2011_2.pdf): - -> Thus, if executing a command on one aggregate instance requires that additional business rules execute on one or more aggregates, use eventual consistency \[...\] There is a practical way to support eventual consistency in a DDD model. An aggregate method publishes a domain event that is in time delivered to one or more asynchronous subscribers. - -This rationale is based on embracing fine-grained transactions instead of transactions spanning many aggregates or entities. The idea is that in the second case, the number of database locks will be substantial in large-scale applications with high scalability needs. 
Embracing the fact that highly scalable applications need not have instant transactional consistency between multiple aggregates helps with accepting the concept of eventual consistency. Atomic changes are often not needed by the business, and it is in any case the responsibility of the domain experts to say whether particular operations need atomic transactions or not. If an operation always needs an atomic transaction between multiple aggregates, you might ask whether your aggregate should be larger or wasn't correctly designed. - -However, other developers and architects like Jimmy Bogard are okay with spanning a single transaction across several aggregates—but only when those additional aggregates are related to side effects for the same original command. For instance, in [A better domain events pattern](https://lostechies.com/jimmybogard/2014/05/13/a-better-domain-events-pattern/), Bogard says this: - -> Typically, I want the side effects of a domain event to occur within the same logical transaction, but not necessarily in the same scope of raising the domain event \[...\] Just before we commit our transaction, we dispatch our events to their respective handlers. - -If you dispatch the domain events right *before* committing the original transaction, it is because you want the side effects of those events to be included in the same transaction. For example, if the EF DbContext SaveChanges method fails, the transaction will roll back all changes, including the result of any side effect operations implemented by the related domain event handlers. This is because the DbContext life scope is by default defined as "scoped." Therefore, the DbContext object is shared across multiple repository objects being instantiated within the same scope or object graph. This coincides with the HttpRequest scope when developing Web API or MVC apps. - -Actually, both approaches (single atomic transaction and eventual consistency) can be right. It really depends on your domain or business requirements and what the domain experts tell you. It also depends on how scalable you need the service to be (more granular transactions have less impact with regard to database locks). And it depends on how much investment you're willing to make in your code, since eventual consistency requires more complex code in order to detect possible inconsistencies across aggregates and the need to implement compensatory actions. Consider that if you commit changes to the original aggregate and afterwards, when the events are being dispatched, if there's an issue and the event handlers cannot commit their side effects, you'll have inconsistencies between aggregates. - -A way to allow compensatory actions would be to store the domain events in additional database tables so they can be part of the original transaction. Afterwards, you could have a batch process that detects inconsistencies and runs compensatory actions by comparing the list of events with the current state of the aggregates. The compensatory actions are part of a complex topic that will require deep analysis from your side, which includes discussing it with the business user and domain experts. - -In any case, you can choose the approach you need. But the initial deferred approach—raising the events before committing, so you use a single transaction—is the simplest approach when using EF Core and a relational database. It's easier to implement and valid in many business cases. It's also the approach used in the ordering microservice in eShop. 
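The deferred dispatch itself is typically implemented as a small helper that walks the tracked entities and publishes their pending events through MediatR. The following is a minimal sketch of such a helper, assuming the `Entity` base class shown earlier; the exact implementation in eShop may differ, and `using` directives are omitted as in the other snippets.

```csharp
// Minimal sketch (not the exact eShop code): collect the deferred domain events
// from all tracked entities, clear them so they aren't published twice, and
// publish each one through MediatR so every registered handler runs.
static class MediatorExtension
{
    public static async Task DispatchDomainEventsAsync(this IMediator mediator, DbContext ctx)
    {
        var entitiesWithEvents = ctx.ChangeTracker
            .Entries<Entity>()
            .Where(e => e.Entity.DomainEvents != null && e.Entity.DomainEvents.Any())
            .Select(e => e.Entity)
            .ToList();

        var domainEvents = entitiesWithEvents
            .SelectMany(e => e.DomainEvents)
            .ToList();

        entitiesWithEvents.ForEach(e => e.DomainEvents.Clear());

        foreach (var domainEvent in domainEvents)
        {
            await mediator.Publish(domainEvent);
        }
    }
}
```

Because a helper like this runs inside `SaveEntitiesAsync` before `SaveChangesAsync`, any changes made by the event handlers are committed as part of the same transaction.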
 - -But how do you actually dispatch those events to their respective event handlers? What's the `_mediator` object you see in the previous example? It has to do with the techniques and artifacts you use to map between events and their event handlers. - -### The domain event dispatcher: mapping from events to event handlers - -Once you're able to dispatch or publish the events, you need some kind of artifact that will publish the event, so that every related handler can get it and process side effects based on that event. - -One approach is a real messaging system or even an event bus, possibly based on a service bus as opposed to in-memory events. However, for the first case, real messaging would be overkill for processing domain events, since you just need to process those events within the same process (that is, within the same domain and application layer). - -### How to subscribe to domain events - -When you use MediatR, each event handler must use an event type that is provided on the generic parameter of the `INotificationHandler` interface, as you can see in the following code: - -```csharp -public class ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler - : INotificationHandler<OrderStartedDomainEvent> -``` - -Based on the relationship between event and event handler, which can be considered the subscription, the MediatR artifact can discover all the event handlers for each event and trigger each one of those event handlers. - -### How to handle domain events - -Finally, the event handler usually implements application layer code that uses infrastructure repositories to obtain the required additional aggregates and to execute side-effect domain logic. The following [domain event handler code at eShop](https://github.com/dotnet/eShop/blob/main/src/Ordering.API/Application/DomainEventHandlers/ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler.cs) shows an implementation example. - -```csharp -public class ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler - : INotificationHandler<OrderStartedDomainEvent> -{ - private readonly ILogger<ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler> _logger; - private readonly IBuyerRepository _buyerRepository; - private readonly IOrderingIntegrationEventService _orderingIntegrationEventService; - - public ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler( - ILogger<ValidateOrAddBuyerAggregateWhenOrderStartedDomainEventHandler> logger, - IBuyerRepository buyerRepository, - IOrderingIntegrationEventService orderingIntegrationEventService) - { - _buyerRepository = buyerRepository ?? throw new ArgumentNullException(nameof(buyerRepository)); - _orderingIntegrationEventService = orderingIntegrationEventService ?? throw new ArgumentNullException(nameof(orderingIntegrationEventService)); - _logger = logger ?? throw new ArgumentNullException(nameof(logger)); - } - - public async Task Handle( - OrderStartedDomainEvent domainEvent, CancellationToken cancellationToken) - { - var cardTypeId = domainEvent.CardTypeId != 0 ? domainEvent.CardTypeId : 1; - var buyer = await _buyerRepository.FindAsync(domainEvent.UserId); - var buyerExisted = buyer is not null; - - if (!buyerExisted) - { - buyer = new Buyer(domainEvent.UserId, domainEvent.UserName); - } - - buyer.VerifyOrAddPaymentMethod( - cardTypeId, - $"Payment Method on {DateTime.UtcNow}", - domainEvent.CardNumber, - domainEvent.CardSecurityNumber, - domainEvent.CardHolderName, - domainEvent.CardExpiration, - domainEvent.Order.Id); - - var buyerUpdated = buyerExisted ? 
- _buyerRepository.Update(buyer) : - _buyerRepository.Add(buyer); - - await _buyerRepository.UnitOfWork - .SaveEntitiesAsync(cancellationToken); - - var integrationEvent = new OrderStatusChangedToSubmittedIntegrationEvent( - domainEvent.Order.Id, domainEvent.Order.OrderStatus.Name, buyer.Name); - await _orderingIntegrationEventService.AddAndSaveEventAsync(integrationEvent); - - OrderingApiTrace.LogOrderBuyerAndPaymentValidatedOrUpdated( - _logger, buyerUpdated.Id, domainEvent.Order.Id); - } -} -``` - -The previous domain event handler code is considered application layer code because it uses infrastructure repositories, as explained in the next section on the infrastructure-persistence layer. Event handlers could also use other infrastructure components. - -#### Domain events can generate integration events to be published outside of the microservice boundaries - -Finally, it's important to mention that you might sometimes want to propagate events across multiple microservices. That propagation is an integration event, and it could be published through an event bus from any specific domain event handler. - -## Conclusions on domain events - -As stated, use domain events to explicitly implement side effects of changes within your domain. To use DDD terminology, use domain events to explicitly implement side effects across one or multiple aggregates. Additionally, and for better scalability and less impact on database locks, use eventual consistency between aggregates within the same domain. - -The reference app uses [MediatR](https://github.com/jbogard/MediatR) to propagate domain events synchronously across aggregates, within a single transaction. However, you could also use some AMQP implementation like [RabbitMQ](https://www.rabbitmq.com/) or [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) to propagate domain events asynchronously, using eventual consistency but, as mentioned above, you have to consider the need for compensatory actions in case of failures. - -## Additional resources - -- **Greg Young. What is a Domain Event?** \ - - -- **Jan Stenberg. Domain Events and Eventual Consistency** \ - - -- **Jimmy Bogard. A better domain events pattern** \ - - -- **Vaughn Vernon. Effective Aggregate Design Part II: Making Aggregates Work Together** \ - [https://dddcommunity.org/wp-content/uploads/files/pdf\_articles/Vernon\_2011\_2.pdf](https://dddcommunity.org/wp-content/uploads/files/pdf_articles/Vernon_2011_2.pdf) - -- **Jimmy Bogard. Strengthening your domain: Domain Events** \ - - -- **Udi Dahan. How to create fully encapsulated Domain Models** \ - - -- **Udi Dahan. Domain Events – Take 2** \ - - -- **Udi Dahan. Domain Events – Salvation** \ - - -- **Cesar de la Torre. Domain Events vs. 
Integration Events in DDD and microservices architectures** \ - - ->[!div class="step-by-step"] ->[Previous](client-side-validation.md) ->[Next](infrastructure-persistence-layer-design.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-model-layer-validations.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-model-layer-validations.md deleted file mode 100644 index 1775c9c6b4d53..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-model-layer-validations.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -title: Designing validations in the domain model layer -description: .NET Microservices Architecture for Containerized .NET Applications | Understand key concepts of domain model validations. -ms.date: 10/08/2018 ---- - -# Design validations in the domain model layer - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -In DDD, validation rules can be thought as invariants. The main responsibility of an aggregate is to enforce invariants across state changes for all the entities within that aggregate. - -Domain entities should always be valid entities. There are a certain number of invariants for an object that should always be true. For example, an order item object always has to have a quantity that must be a positive integer, plus an article name and price. Therefore, invariants enforcement is the responsibility of the domain entities (especially of the aggregate root) and an entity object should not be able to exist without being valid. Invariant rules are simply expressed as contracts, and exceptions or notifications are raised when they are violated. - -The reasoning behind this is that many bugs occur because objects are in a state they should never have been in. - -Let's propose we now have a SendUserCreationEmailService that takes a UserProfile ... how can we rationalize in that service that Name is not null? Do we check it again? Or more likely ... you just don't bother to check and "hope for the best"—you hope that someone bothered to validate it before sending it to you. Of course, using TDD one of the first tests we should be writing is that if I send a customer with a null name that it should raise an error. But once we start writing these kinds of tests over and over again we realize ... "what if we never allowed name to become null? we wouldn't have all of these tests!". - -## Implement validations in the domain model layer - -Validations are usually implemented in domain entity constructors or in methods that can update the entity. There are multiple ways to implement validations, such as verifying data and raising exceptions if the validation fails. There are also more advanced patterns such as using the Specification pattern for validations, and the Notification pattern to return a collection of errors instead of returning an exception for each validation as it occurs. - -### Validate conditions and throw exceptions - -The following code example shows the simplest approach to validation in a domain entity by raising an exception. In the references table at the end of this section you can see links to more advanced implementations based on the patterns we have discussed previously. - -```csharp -public void SetAddress(Address address) -{ - _shippingAddress = address?? 
throw new ArgumentNullException(nameof(address)); -} -``` - -A better example would demonstrate the need to ensure that either the internal state did not change, or that all the mutations for a method occurred. For example, the following implementation would leave the object in an invalid state: - -```csharp -public void SetAddress(string line1, string line2, - string city, string state, int zip) -{ - _shippingAddress.line1 = line1 ?? throw new ... - _shippingAddress.line2 = line2; - _shippingAddress.city = city ?? throw new ... - _shippingAddress.state = (IsValid(state) ? state : throw new …); -} -``` - -If the value of the state is invalid, the first address line and the city have already been changed. That might make the address invalid. - -A similar approach can be used in the entity's constructor, raising an exception to make sure that the entity is valid once it is created. - -### Use validation attributes in the model based on data annotations - -Data annotations, like the Required or MaxLength attributes, can be used to configure EF Core database field properties, as explained in detail in the [Table mapping](infrastructure-persistence-layer-implementation-entity-framework-core.md#table-mapping) section, but [they no longer work for entity validation in EF Core](https://github.com/dotnet/efcore/issues/3680) (neither does the method), as they have done since EF 4.x in .NET Framework. - -Data annotations and the interface can still be used for model validation during model binding, prior to the controller's actions invocation as usual, but that model is meant to be a ViewModel or DTO and that's an MVC or API concern not a domain model concern. - -Having made the conceptual difference clear, you can still use data annotations and `IValidatableObject` in the entity class for validation, if your actions receive an entity class object parameter, which is not recommended. In that case, validation will occur upon model binding, just before invoking the action and you can check the controller's ModelState.IsValid property to check the result, but then again, it happens in the controller, not before persisting the entity object in the DbContext, as it had done since EF 4.x. - -You can still implement custom validation in the entity class using data annotations and the `IValidatableObject.Validate` method, by overriding the DbContext's SaveChanges method. - -You can see a sample implementation for validating `IValidatableObject` entities in [this comment on GitHub](https://github.com/dotnet/efcore/issues/3680#issuecomment-155502539). That sample doesn't do attribute-based validations, but they should be easy to implement using reflection in the same override. - -However, from a DDD point of view, the domain model is best kept lean with the use of exceptions in your entity's behavior methods, or by implementing the Specification and Notification patterns to enforce validation rules. - -It can make sense to use data annotations at the application layer in ViewModel classes (instead of domain entities) that will accept input, to allow for model validation within the UI layer. However, this should not be done at the exclusion of validation within the domain model. - -### Validate entities by implementing the Specification pattern and the Notification pattern - -Finally, a more elaborate approach to implementing validations in the domain model is by implementing the Specification pattern in conjunction with the Notification pattern, as explained in some of the additional resources listed later. 
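As a rough illustration only (the `Notification` type and the `ValidateShipping` method below are hypothetical and not part of eShop), the Notification side of that combination could look like this: collect every violated rule instead of throwing on the first one, and let the caller decide how to report the errors.

```csharp
// Hypothetical sketch of the Notification pattern: validation errors are
// accumulated in a simple collector rather than raised as exceptions.
public class Notification
{
    private readonly List<string> _errors = new();

    public void AddError(string message) => _errors.Add(message);
    public bool HasErrors => _errors.Count > 0;
    public IReadOnlyList<string> Errors => _errors;
}

public class Order
{
    // ... other members of the aggregate root

    public Notification ValidateShipping(Address address, DateTime requestedDate)
    {
        var notification = new Notification();

        if (address is null)
        {
            notification.AddError("A shipping address is required.");
        }

        if (requestedDate.Date < DateTime.UtcNow.Date)
        {
            notification.AddError("The requested shipping date cannot be in the past.");
        }

        return notification;
    }
}
```

A command handler can then check `HasErrors` and turn the collected messages into a single response, rather than catching one exception per broken rule.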
- -It is worth mentioning that you can also use just one of those patterns—for example, validating manually with control statements, but using the Notification pattern to stack and return a list of validation errors. - -### Use deferred validation in the domain - -There are various approaches to deal with deferred validations in the domain. In his book [Implementing Domain-Driven Design](https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577), Vaughn Vernon discusses these in the section on validation. - -### Two-step validation - -Also consider two-step validation. Use field-level validation on your command Data Transfer Objects (DTOs) and domain-level validation inside your entities. You can do this by returning a result object instead of exceptions in order to make it easier to deal with the validation errors. - -Using field validation with data annotations, for example, you do not duplicate the validation definition. The execution, though, can be both server-side and client-side in the case of DTOs (commands and ViewModels, for instance). - -## Additional resources - -- **Rachel Appel. Introduction to model validation in ASP.NET Core MVC** \ - [https://learn.microsoft.com/aspnet/core/mvc/models/validation](/aspnet/core/mvc/models/validation) - -- **Rick Anderson. Adding validation** \ - [https://learn.microsoft.com/aspnet/core/tutorials/first-mvc-app/validation](/aspnet/core/tutorials/first-mvc-app/validation) - -- **Martin Fowler. Replacing Throwing Exceptions with Notification in Validations** \ - - -- **Specification and Notification Patterns** \ - - -- **Lev Gorodinski. Validation in Domain-Driven Design (DDD)** \ - - -- **Colin Jack. Domain Model Validation** \ - - -- **Jimmy Bogard. Validation in a DDD world** \ - - -> [!div class="step-by-step"] -> [Previous](enumeration-classes-over-enum-types.md) -> [Next](client-side-validation.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/enumeration-classes-over-enum-types.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/enumeration-classes-over-enum-types.md deleted file mode 100644 index abfb2a7736ff7..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/enumeration-classes-over-enum-types.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: Using Enumeration classes instead of enum types -description: .NET Microservices Architecture for Containerized .NET Applications | Lear how you can use enumeration classes, instead of enums, as a way to solve some limitations of the latter. -ms.date: 11/25/2020 ---- -# Use enumeration classes instead of enum types - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -[Enumerations](../../../csharp/language-reference/builtin-types/enum.md) (or *enum types* for short) are a thin language wrapper around an integral type. You might want to limit their use to when you are storing one value from a closed set of values. Classification based on sizes (small, medium, large) is a good example. Using enums for control flow or more robust abstractions can be a [code smell](https://deviq.com/antipatterns/code-smells). This type of usage leads to fragile code with many control flow statements checking values of the enum. - -Instead, you can create Enumeration classes that enable all the rich features of an object-oriented language. 
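For context, the switch-heavy control flow that the preceding paragraph calls a code smell often looks like the following hypothetical example; every new enum member (or every new behavior) forces you to revisit each of these switches.

```csharp
// Hypothetical example of the code smell: behavior about sizes is scattered
// across control-flow statements instead of living with the concept itself.
public enum ShirtSize { Small, Medium, Large }

public static class ShippingCalculator
{
    public static decimal GetShippingCost(ShirtSize size) => size switch
    {
        ShirtSize.Small => 2.50m,
        ShirtSize.Medium => 3.00m,
        ShirtSize.Large => 3.50m,
        _ => throw new ArgumentOutOfRangeException(nameof(size))
    };
}
```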
 - -However, this isn't a critical topic and in many cases, for simplicity, you can still use regular [enum types](../../../csharp/language-reference/builtin-types/enum.md) if that's your preference. The use of enumeration classes is more related to business-related concepts. - -## Implement an Enumeration base class - -The ordering microservice in eShopOnContainers provides a sample Enumeration base class implementation, as shown in the following example: - -```csharp -public abstract class Enumeration : IComparable -{ - public string Name { get; private set; } - - public int Id { get; private set; } - - protected Enumeration(int id, string name) => (Id, Name) = (id, name); - - public override string ToString() => Name; - - public static IEnumerable<T> GetAll<T>() where T : Enumeration => - typeof(T).GetFields(BindingFlags.Public | - BindingFlags.Static | - BindingFlags.DeclaredOnly) - .Select(f => f.GetValue(null)) - .Cast<T>(); - - public override bool Equals(object obj) - { - if (obj is not Enumeration otherValue) - { - return false; - } - - var typeMatches = GetType().Equals(obj.GetType()); - var valueMatches = Id.Equals(otherValue.Id); - - return typeMatches && valueMatches; - } - - public int CompareTo(object other) => Id.CompareTo(((Enumeration)other).Id); - - // Other utility methods ... -} -``` - -You can use this class as a type in any entity or value object, as for the following `CardType` : `Enumeration` class: - -```csharp -public class CardType - : Enumeration -{ - public static CardType Amex = new(1, nameof(Amex)); - public static CardType Visa = new(2, nameof(Visa)); - public static CardType MasterCard = new(3, nameof(MasterCard)); - - public CardType(int id, string name) - : base(id, name) - { - } -} -``` - -## Additional resources - -- **Jimmy Bogard. Enumeration classes** \ - - -- **Steve Smith. Enum Alternatives in C#** \ - - -- **Enumeration.cs.** Base Enumeration class in eShopOnContainers \ - - -- **CardType.cs**. Sample Enumeration class in eShopOnContainers. \ - - -- **SmartEnum**. Ardalis - Classes to help produce strongly typed smarter enums in .NET. \ - - ->[!div class="step-by-step"] ->[Previous](implement-value-objects.md) ->[Next](domain-model-layer-validations.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice.md deleted file mode 100644 index 9adf03b747b76..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -title: Applying CQRS and CQS approaches in a DDD microservice in eShopOnContainers -description: .NET Microservices Architecture for Containerized .NET Applications | Understand the way CQRS is implemented in the ordering microservice in eShopOnContainers. -ms.date: 01/13/2021 ---- -# Apply CQRS and CQS approaches in a DDD microservice in eShopOnContainers - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -The design of the ordering microservice at the eShopOnContainers reference application is based on CQRS principles. However, it uses the simplest approach, which is just separating the queries from the commands and using the same database for both actions. - -The essence of those patterns, and the important point here, is that queries are idempotent: no matter how many times you query a system, the state of that system won't change. 
In other words, queries are side-effect free. - -Therefore, you could use a different "reads" data model than the transactional logic "writes" domain model, even though the ordering microservices are using the same database. Hence, this is a simplified CQRS approach. - -On the other hand, commands, which trigger transactions and data updates, change state in the system. With commands, you need to be careful when dealing with complexity and ever-changing business rules. This is where you want to apply DDD techniques to have a better modeled system. - -The DDD patterns presented in this guide should not be applied universally. They introduce constraints on your design. Those constraints provide benefits such as higher quality over time, especially in commands and other code that modifies system state. However, those constraints add complexity with fewer benefits for reading and querying data. - -One such pattern is the Aggregate pattern, which we examine more in later sections. Briefly, in the Aggregate pattern, you treat many domain objects as a single unit as a result of their relationship in the domain. You might not always gain advantages from this pattern in queries; it can increase the complexity of query logic. For read-only queries, you do not get the advantages of treating multiple objects as a single Aggregate. You only get the complexity. - -As shown in Figure 7-2 in the previous section, this guide suggests using DDD patterns only in the transactional/updates area of your microservice (that is, as triggered by commands). Queries can follow a simpler approach and should be separated from commands, following a CQRS approach. - -For implementing the "queries side", you can choose between many approaches, from your full-blown ORM like EF Core, AutoMapper projections, stored procedures, views, materialized views or a micro ORM. - -In this guide and in eShopOnContainers (specifically the ordering microservice) we chose to implement straight queries using a micro ORM like [Dapper](https://github.com/StackExchange/dapper-dot-net). This guide lets you implement any query based on SQL statements to get the best performance, thanks to a light framework with little overhead. - -When you use this approach, any updates to your model that impact how entities are persisted to a SQL database also need separate updates to SQL queries used by Dapper or any other separate (non-EF) approaches to querying. - -## CQRS and DDD patterns are not top-level architectures - -It's important to understand that CQRS and most DDD patterns (like DDD layers or a domain model with aggregates) are not architectural styles, but only architecture patterns. Microservices, SOA, and event-driven architecture (EDA) are examples of architectural styles. They describe a system of many components, such as many microservices. CQRS and DDD patterns describe something inside a single system or component; in this case, something inside a microservice. - -Different Bounded Contexts (BCs) will employ different patterns. They have different responsibilities, and that leads to different solutions. It is worth emphasizing that forcing the same pattern everywhere leads to failure. Do not use CQRS and DDD patterns everywhere. Many subsystems, BCs, or microservices are simpler and can be implemented more easily using simple CRUD services or using another approach. - -There is only one application architecture: the architecture of the system or end-to-end application you are designing (for example, the microservices architecture). 
However, the design of each Bounded Context or microservice within that application reflects its own tradeoffs and internal design decisions at an architecture patterns level. Do not try to apply the same architectural patterns as CQRS or DDD everywhere. - -### Additional resources - -- **Martin Fowler. CQRS** \ - - -- **Greg Young. CQRS Documents** \ - - -- **Udi Dahan. Clarified CQRS** \ - - ->[!div class="step-by-step"] ->[Previous](apply-simplified-microservice-cqrs-ddd-patterns.md) ->[Next](cqrs-microservice-reads.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/implement-value-objects.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/implement-value-objects.md deleted file mode 100644 index fe5fc55086036..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/implement-value-objects.md +++ /dev/null @@ -1,351 +0,0 @@ ---- -title: Implementing value objects -description: .NET Microservices Architecture for Containerized .NET Applications | Get into the details and options to implement value objects using new Entity Framework features. -ms.date: 04/11/2022 ---- - -# Implement value objects - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -As discussed in earlier sections about entities and aggregates, identity is fundamental for entities. However, there are many objects and data items in a system that do not require an identity and identity tracking, such as value objects. - -A value object can reference other entities. For example, in an application that generates a route that describes how to get from one point to another, that route would be a value object. It would be a snapshot of points on a specific route, but this suggested route would not have an identity, even though internally it might refer to entities like City, Road, etc. - -Figure 7-13 shows the Address value object within the Order aggregate. - -![Diagram showing the Address value-object inside the Order Aggregate.](./media/implement-value-objects/value-object-within-aggregate.png) - -**Figure 7-13**. Address value object within the Order aggregate - -As shown in Figure 7-13, an entity is usually composed of multiple attributes. For example, the `Order` entity can be modeled as an entity with an identity and composed internally of a set of attributes such as OrderId, OrderDate, OrderItems, etc. But the address, which is simply a complex-value composed of country/region, street, city, etc., and has no identity in this domain, must be modeled and treated as a value object. - -## Important characteristics of value objects - -There are two main characteristics for value objects: - -- They have no identity. - -- They are immutable. - -The first characteristic was already discussed. Immutability is an important requirement. The values of a value object must be immutable once the object is created. Therefore, when the object is constructed, you must provide the required values, but you must not allow them to change during the object's lifetime. - -Value objects allow you to perform certain tricks for performance, thanks to their immutable nature. This is especially true in systems where there may be thousands of value object instances, many of which have the same values. Their immutable nature allows them to be reused; they can be interchangeable objects, since their values are the same and they have no identity. 
This type of optimization can sometimes make a difference between software that runs slowly and software with good performance. Of course, all these cases depend on the application environment and deployment context. - -## Value object implementation in C\# - -In terms of implementation, you can have a value object base class that has basic utility methods like equality based on the comparison between all the attributes (since a value object must not be based on identity) and other fundamental characteristics. The following example shows a value object base class used in the ordering microservice from eShopOnContainers. - -```csharp -public abstract class ValueObject -{ - protected static bool EqualOperator(ValueObject left, ValueObject right) - { - if (ReferenceEquals(left, null) ^ ReferenceEquals(right, null)) - { - return false; - } - return ReferenceEquals(left, right) || left.Equals(right); - } - - protected static bool NotEqualOperator(ValueObject left, ValueObject right) - { - return !(EqualOperator(left, right)); - } - - protected abstract IEnumerable<object> GetEqualityComponents(); - - public override bool Equals(object obj) - { - if (obj == null || obj.GetType() != GetType()) - { - return false; - } - - var other = (ValueObject)obj; - - return this.GetEqualityComponents().SequenceEqual(other.GetEqualityComponents()); - } - - public override int GetHashCode() - { - return GetEqualityComponents() - .Select(x => x != null ? x.GetHashCode() : 0) - .Aggregate((x, y) => x ^ y); - } - // Other utility methods -} -``` - - - -The `ValueObject` is an `abstract class` type, but in this example, it doesn't overload the `==` and `!=` operators. You could choose to do so, making comparisons delegate to the `Equals` override. For example, consider the following operator overloads to the `ValueObject` type: - -```csharp -public static bool operator ==(ValueObject one, ValueObject two) -{ - return EqualOperator(one, two); -} - -public static bool operator !=(ValueObject one, ValueObject two) -{ - return NotEqualOperator(one, two); -} -``` - -You can use this class when implementing your actual value object, as with the `Address` value object shown in the following example: - -```csharp -public class Address : ValueObject -{ - public String Street { get; private set; } - public String City { get; private set; } - public String State { get; private set; } - public String Country { get; private set; } - public String ZipCode { get; private set; } - - public Address() { } - - public Address(string street, string city, string state, string country, string zipcode) - { - Street = street; - City = city; - State = state; - Country = country; - ZipCode = zipcode; - } - - protected override IEnumerable<object> GetEqualityComponents() - { - // Using a yield return statement to return each element one at a time - yield return Street; - yield return City; - yield return State; - yield return Country; - yield return ZipCode; - } -} -``` - -This value object implementation of `Address` has no identity, and therefore no ID field is defined for it, either in the `Address` class definition or the `ValueObject` class definition. - -Having no ID field in a class to be used by Entity Framework (EF) was not possible until EF Core 2.0, which greatly helps to implement better value objects with no ID. That is precisely the explanation of the next section. - -It could be argued that value objects, being immutable, should be read-only (that is, have get-only properties), and that's indeed true. 
However, value objects are usually serialized and deserialized to go through message queues, and being read-only stops the deserializer from assigning values, so you just leave them as `private set`, which is read-only enough to be practical. - -### Value object comparison semantics - -Two instances of the `Address` type can be compared using all the following methods: - -```csharp -var one = new Address("1 Microsoft Way", "Redmond", "WA", "US", "98052"); -var two = new Address("1 Microsoft Way", "Redmond", "WA", "US", "98052"); - -Console.WriteLine(EqualityComparer
<Address>.Default.Equals(one, two)); // True -Console.WriteLine(object.Equals(one, two)); // True -Console.WriteLine(one.Equals(two)); // True -Console.WriteLine(one == two); // True -``` - -When all the values are the same, the comparisons are correctly evaluated as `true`. If you didn't choose to overload the `==` and `!=` operators, then the last comparison of `one == two` would evaluate as `false`. For more information, see [Overload ValueObject equality operators](#equal-op-overload). - -## How to persist value objects in the database with EF Core 2.0 and later - -You just saw how to define a value object in your domain model. But how can you actually persist it into the database using Entity Framework Core since it usually targets entities with identity? - -### Background and older approaches using EF Core 1.1 - -As background, a limitation when using EF Core 1.0 and 1.1 was that you could not use [complex types](xref:System.ComponentModel.DataAnnotations.Schema.ComplexTypeAttribute) as defined in EF 6.x in the traditional .NET Framework. Therefore, if using EF Core 1.0 or 1.1, you needed to store your value object as an EF entity with an ID field. Then, so it looked more like a value object with no identity, you could hide its ID so you make clear that the identity of a value object is not important in the domain model. You could hide that ID by using the ID as a [shadow property](/ef/core/modeling/shadow-properties). Since that configuration for hiding the ID in the model is set up in the EF infrastructure level, it would be kind of transparent for your domain model. - -In the initial version of eShopOnContainers (.NET Core 1.1), the hidden ID needed by EF Core infrastructure was implemented in the following way in the DbContext level, using Fluent API at the infrastructure project. Therefore, the ID was hidden from the domain model point of view, but still present in the infrastructure. - -```csharp -// Old approach with EF Core 1.1 -// Fluent API within the OrderingContext:DbContext in the Infrastructure project -void ConfigureAddress(EntityTypeBuilder<Address>
addressConfiguration) -{ - addressConfiguration.ToTable("address", DEFAULT_SCHEMA); - - addressConfiguration.Property("Id") // Id is a shadow property - .IsRequired(); - addressConfiguration.HasKey("Id"); // Id is a shadow property -} -``` - -However, the persistence of that value object into the database was performed like a regular entity in a different table. - -With EF Core 2.0 and later, there are new and better ways to persist value objects. - -## Persist value objects as owned entity types in EF Core 2.0 and later - -Even with some gaps between the canonical value object pattern in DDD and the owned entity type in EF Core, it's currently the best way to persist value objects with EF Core 2.0 and later. You can see limitations at the end of this section. - -The owned entity type feature was added to EF Core since version 2.0. - -An owned entity type allows you to map types that do not have their own identity explicitly defined in the domain model and are used as properties, such as a value object, within any of your entities. An owned entity type shares the same CLR type with another entity type (that is, it's just a regular class). The entity containing the defining navigation is the owner entity. When querying the owner, the owned types are included by default. - -Just by looking at the domain model, an owned type looks like it doesn't have any identity. However, under the covers, owned types do have the identity, but the owner navigation property is part of this identity. - -The identity of instances of owned types is not completely their own. It consists of three components: - -- The identity of the owner - -- The navigation property pointing to them - -- In the case of collections of owned types, an independent component (supported in EF Core 2.2 and later). - -For example, in the Ordering domain model at eShopOnContainers, as part of the Order entity, the Address value object is implemented as an owned entity type within the owner entity, which is the Order entity. `Address` is a type with no identity property defined in the domain model. It is used as a property of the Order type to specify the shipping address for a particular order. - -By convention, a shadow primary key is created for the owned type and it will be mapped to the same table as the owner by using table splitting. This allows to use owned types similarly to how complex types are used in EF6 in the traditional .NET Framework. - -It is important to note that owned types are never discovered by convention in EF Core, so you have to declare them explicitly. - -In eShopOnContainers, in the OrderingContext.cs file, within the `OnModelCreating()` method, multiple infrastructure configurations are applied. One of them is related to the Order entity. 
 - -```csharp -// Part of the OrderingContext.cs class at the Ordering.Infrastructure project -// -protected override void OnModelCreating(ModelBuilder modelBuilder) -{ - modelBuilder.ApplyConfiguration(new ClientRequestEntityTypeConfiguration()); - modelBuilder.ApplyConfiguration(new PaymentMethodEntityTypeConfiguration()); - modelBuilder.ApplyConfiguration(new OrderEntityTypeConfiguration()); - modelBuilder.ApplyConfiguration(new OrderItemEntityTypeConfiguration()); - //...Additional type configurations -} -``` - -In the following code, the persistence infrastructure is defined for the Order entity: - -```csharp -// Part of the OrderEntityTypeConfiguration.cs class -// -public void Configure(EntityTypeBuilder<Order> orderConfiguration) -{ - orderConfiguration.ToTable("orders", OrderingContext.DEFAULT_SCHEMA); - orderConfiguration.HasKey(o => o.Id); - orderConfiguration.Ignore(b => b.DomainEvents); - orderConfiguration.Property(o => o.Id) - .ForSqlServerUseSequenceHiLo("orderseq", OrderingContext.DEFAULT_SCHEMA); - - //Address value object persisted as owned entity in EF Core 2.0 - orderConfiguration.OwnsOne(o => o.Address); - - orderConfiguration.Property("OrderDate").IsRequired(); - - //...Additional validations, constraints and code... - //... -} -``` - -In the previous code, the `orderConfiguration.OwnsOne(o => o.Address)` method specifies that the `Address` property is an owned entity of the `Order` type. - -By default, EF Core conventions name the database columns for the properties of the owned entity type as `EntityProperty_OwnedEntityProperty`. Therefore, the internal properties of `Address` will appear in the `Orders` table with the names `Address_Street`, `Address_City` (and so on for `State`, `Country`, and `ZipCode`). - -You can append the `Property().HasColumnName()` fluent method to rename those columns. In the case where `Address` is a public property, the mappings would be like the following: - -```csharp -orderConfiguration.OwnsOne(p => p.Address) - .Property(p => p.Street).HasColumnName("ShippingStreet"); - -orderConfiguration.OwnsOne(p => p.Address) - .Property(p => p.City).HasColumnName("ShippingCity"); -``` - -It's possible to chain the `OwnsOne` method in a fluent mapping. In the following hypothetical example, `OrderDetails` owns `BillingAddress` and `ShippingAddress`, which are both `Address` types. Then `OrderDetails` is owned by the `Order` type. - -```csharp -orderConfiguration.OwnsOne(p => p.OrderDetails, cb => - { - cb.OwnsOne(c => c.BillingAddress); - cb.OwnsOne(c => c.ShippingAddress); - }); -//... -//... -public class Order -{ - public int Id { get; set; } - public OrderDetails OrderDetails { get; set; } -} - -public class OrderDetails -{ - public Address BillingAddress { get; set; } - public Address ShippingAddress { get; set; } -} - -public class Address -{ - public string Street { get; set; } - public string City { get; set; } -} -``` - -### Additional details on owned entity types - -- Owned types are defined when you configure a navigation property to a particular type using the OwnsOne fluent API. - -- The definition of an owned type in our metadata model is a composite of: the owner type, the navigation property, and the CLR type of the owned type. - -- The identity (key) of an owned type instance in our stack is a composite of the identity of the owner type and the definition of the owned type. 
- -#### Owned entities capabilities - -- Owned types can reference other entities, either owned (nested owned types) or non-owned (regular reference navigation properties to other entities). - -- You can map the same CLR type as different owned types in the same owner entity through separate navigation properties. - -- Table splitting is set up by convention, but you can opt out by mapping the owned type to a different table using ToTable. - -- Eager loading is performed automatically on owned types, that is, there's no need to call `.Include()` on the query. - -- Can be configured with attribute `[Owned]`, using EF Core 2.1 and later. - -- Can handle collections of owned types (using version 2.2 and later). - -#### Owned entities limitations - -- You can't create a `DbSet` of an owned type (by design). - -- You can't call `ModelBuilder.Entity()` on owned types (currently by design). - -- No support for optional (that is, nullable) owned types that are mapped with the owner in the same table (that is, using table splitting). This is because mapping is done for each property, there is no separate sentinel for the null complex value as a whole. - -- No inheritance-mapping support for owned types, but you should be able to map two leaf types of the same inheritance hierarchies as different owned types. EF Core will not reason about the fact that they are part of the same hierarchy. - -#### Main differences with EF6's complex types - -- Table splitting is optional, that is, they can optionally be mapped to a separate table and still be owned types. - -## Additional resources - -- **Martin Fowler. ValueObject pattern** \ - - -- **Eric Evans. Domain-Driven Design: Tackling Complexity in the Heart of Software.** (Book; includes a discussion of value objects) \ - - -- **Vaughn Vernon. Implementing Domain-Driven Design.** (Book; includes a discussion of value objects) \ - - -- **Owned Entity Types** \ - [https://learn.microsoft.com/ef/core/modeling/owned-entities](/ef/core/modeling/owned-entities) - -- **Shadow Properties** \ - [https://learn.microsoft.com/ef/core/modeling/shadow-properties](/ef/core/modeling/shadow-properties) - -- **Complex types and/or value objects**. Discussion in the EF Core GitHub repo (Issues tab) \ - - -- **ValueObject.cs.** Base value object class in eShopOnContainers. \ - - -- **ValueObject.cs.** Base value object class in CSharpFunctionalExtensions. \ - - -- **Address class.** Sample value object class in eShopOnContainers. 
\ - - -> [!div class="step-by-step"] -> [Previous](seedwork-domain-model-base-classes-interfaces.md) -> [Next](enumeration-classes-over-enum-types.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/index.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/index.md deleted file mode 100644 index 858b678345a64..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/index.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -title: Tackling Business Complexity in a Microservice with DDD and CQRS Patterns -description: .NET Microservices Architecture for Containerized .NET Applications | Understand how to tackle complex business scenarios applying DDD and CQRS Patterns -ms.date: 10/08/2018 ---- -# Tackle Business Complexity in a Microservice with DDD and CQRS Patterns - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -*Design a domain model for each microservice or Bounded Context that reflects understanding of the business domain.* - -This section focuses on more advanced microservices that you implement when you need to tackle complex subsystems, or microservices derived from the knowledge of domain experts with ever-changing business rules. The architecture patterns used in this section are based on domain-driven design (DDD) and Command and Query Responsibility Segregation (CQRS) approaches, as illustrated in Figure 7-1. - -:::image type="complex" source="./media/index/internal-versus-external-architecture.png" alt-text="Diagram comparing external and internal architecture patterns."::: -Difference between external architecture: microservice patterns, API gateways, resilient communications, pub/sub, etc., and internal architecture: data driven/CRUD, DDD patterns, dependency injection, multiple libraries, etc. -:::image-end::: - -**Figure 7-1**. External microservice architecture versus internal architecture patterns for each microservice - -However, most of the techniques for data driven microservices, such as how to implement an ASP.NET Core Web API service or how to expose Swagger metadata with Swashbuckle or NSwag, are also applicable to the more advanced microservices implemented internally with DDD patterns. This section is an extension of the previous sections, because most of the practices explained earlier also apply here or for any kind of microservice. - -This section first provides details on the simplified CQRS patterns used in the eShopOnContainers reference application. Later, you will get an overview of the DDD techniques that enable you to find common patterns that you can reuse in your applications. - -DDD is a large topic with a rich set of resources for learning. You can start with books like [Domain-Driven Design](https://domainlanguage.com/ddd/) by Eric Evans and additional materials from Vaughn Vernon, Jimmy Nilsson, Greg Young, Udi Dahan, Jimmy Bogard, and many other DDD/CQRS experts. But most of all you need to try to learn how to apply DDD techniques from the conversations, whiteboarding, and domain modeling sessions with the experts in your concrete business domain. - -#### Additional resources - -##### DDD (Domain-Driven Design) - -- **Eric Evans. Domain Language** \ - - -- **Martin Fowler. Domain-Driven Design** \ - - -- **Jimmy Bogard. Strengthening your domain: a primer** \ - - -- **Distributed Domain-Driven Design webinar** \ - - -##### DDD books - -- **Eric Evans. Domain-Driven Design: Tackling Complexity in the Heart of Software** \ - - -- **Eric Evans. 
Domain-Driven Design Reference: Definitions and Pattern Summaries** \ - - -- **Vaughn Vernon. Implementing Domain-Driven Design** \ - - -- **Vaughn Vernon. Domain-Driven Design Distilled** \ - - -- **Jimmy Nilsson. Applying Domain-Driven Design and Patterns** \ - - -- **Cesar de la Torre. N-Layered Domain-Oriented Architecture Guide with .NET** \ - - -- **Abel Avram and Floyd Marinescu. Domain-Driven Design Quickly** \ - - -- **Scott Millett, Nick Tune - Patterns, Principles, and Practices of Domain-Driven Design** \ - - -##### DDD training - -- **Julie Lerman and Steve Smith. Domain-Driven Design Fundamentals** \ - - ->[!div class="step-by-step"] ->[Previous](../multi-container-microservice-net-applications/implement-api-gateways-with-ocelot.md) ->[Next](apply-simplified-microservice-cqrs-ddd-patterns.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design.md deleted file mode 100644 index f341c092d6f2e..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design.md +++ /dev/null @@ -1,136 +0,0 @@ ---- -title: Designing the infrastructure persistence layer -description: .NET Microservices Architecture for Containerized .NET Applications | Explore the repository pattern in the design of the infrastructure persistence layer. -ms.date: 10/08/2018 ---- -# Design the infrastructure persistence layer - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Data persistence components provide access to the data hosted within the boundaries of a microservice (that is, a microservice's database). They contain the actual implementation of components such as repositories and [Unit of Work](https://martinfowler.com/eaaCatalog/unitOfWork.html) classes, like custom Entity Framework (EF) objects. EF DbContext implements both the Repository and the Unit of Work patterns. - -## The Repository pattern - -The Repository pattern is a Domain-Driven Design pattern intended to keep persistence concerns outside of the system's domain model. One or more persistence abstractions - interfaces - are defined in the domain model, and these abstractions have implementations in the form of persistence-specific adapters defined elsewhere in the application. - -Repository implementations are classes that encapsulate the logic required to access data sources. They centralize common data access functionality, providing better maintainability and decoupling the infrastructure or technology used to access databases from the domain model. If you use an Object-Relational Mapper (ORM) like Entity Framework, the code that must be implemented is simplified, thanks to LINQ and strong typing. This lets you focus on the data persistence logic rather than on data access plumbing. - -The Repository pattern is a well-documented way of working with a data source. In the book [Patterns of Enterprise Application Architecture](https://www.amazon.com/Patterns-Enterprise-Application-Architecture-Martin/dp/0321127420/), Martin Fowler describes a repository as follows: - -> A repository performs the tasks of an intermediary between the domain model layers and data mapping, acting in a similar way to a set of domain objects in memory. Client objects declaratively build queries and send them to the repositories for answers. 
Conceptually, a repository encapsulates a set of objects stored in the database and operations that can be performed on them, providing a way that is closer to the persistence layer. Repositories, also, support the purpose of separating, clearly and in one direction, the dependency between the work domain and the data allocation or mapping. - -### Define one repository per aggregate - -For each aggregate or aggregate root, you should create one repository class. You may be able to leverage C# Generics to reduce the total number concrete classes you need to maintain (as demonstrated later in this chapter). In a microservice based on Domain-Driven Design (DDD) patterns, the only channel you should use to update the database should be the repositories. This is because they have a one-to-one relationship with the aggregate root, which controls the aggregate's invariants and transactional consistency. It's okay to query the database through other channels (as you can do following a CQRS approach), because queries don't change the state of the database. However, the transactional area (that is, the updates) must always be controlled by the repositories and the aggregate roots. - -Basically, a repository allows you to populate data in memory that comes from the database in the form of the domain entities. Once the entities are in memory, they can be changed and then persisted back to the database through transactions. - -As noted earlier, if you're using the CQS/CQRS architectural pattern, the initial queries are performed by side queries out of the domain model, performed by simple SQL statements using Dapper. This approach is much more flexible than repositories because you can query and join any tables you need, and these queries aren't restricted by rules from the aggregates. That data goes to the presentation layer or client app. - -If the user makes changes, the data to be updated comes from the client app or presentation layer to the application layer (such as a Web API service). When you receive a command in a command handler, you use repositories to get the data you want to update from the database. You update it in memory with the data passed with the commands, and you then add or update the data (domain entities) in the database through a transaction. - -It's important to emphasize again that you should only define one repository for each aggregate root, as shown in Figure 7-17. To achieve the goal of the aggregate root to maintain transactional consistency between all the objects within the aggregate, you should never create a repository for each table in the database. - -![Diagram showing relationships of domain and other infrastructure.](./media/infrastructure-persistence-layer-design/repository-aggregate-database-table-relationships.png) - -**Figure 7-17**. The relationship between repositories, aggregates, and database tables - -The above diagram shows the relationships between Domain and Infrastructure layers: Buyer Aggregate depends on the IBuyerRepository and Order Aggregate depends on the IOrderRepository interfaces, these interfaces are implemented in the Infrastructure layer by the corresponding repositories that depend on UnitOfWork, also implemented there, that accesses the tables in the Data tier. - -### Enforce one aggregate root per repository - -It can be valuable to implement your repository design in such a way that it enforces the rule that only aggregate roots should have repositories. 
You can create a generic or base repository type that constrains the type of entities it works with to ensure they have the `IAggregateRoot` marker interface. - -Thus, each repository class implemented at the infrastructure layer implements its own contract or interface, as shown in the following code: - -```csharp -namespace Microsoft.eShopOnContainers.Services.Ordering.Infrastructure.Repositories -{ - public class OrderRepository : IOrderRepository - { - // ... - } -} -``` - -Each specific repository interface extends the generic IRepository interface: - -```csharp -public interface IOrderRepository : IRepository<Order> -{ - Order Add(Order order); - // ... -} -``` - -However, a better way to have the code enforce the convention that each repository is related to a single aggregate is to implement a generic repository type. That way, it's explicit that you're using a repository to target a specific aggregate. That can be easily done by implementing a generic `IRepository<T>` base interface, as in the following code: - -```csharp -public interface IRepository<T> where T : IAggregateRoot -{ - //.... -} -``` - -### The Repository pattern makes it easier to test your application logic - -The Repository pattern allows you to easily test your application with unit tests. Remember that unit tests only test your code, not infrastructure, so the repository abstractions make it easier to achieve that goal. - -As noted in an earlier section, it's recommended that you define and place the repository interfaces in the domain model layer so the application layer, such as your Web API microservice, doesn't depend directly on the infrastructure layer where you've implemented the actual repository classes. By doing this and using Dependency Injection in the controllers of your Web API, you can implement mock repositories that return fake data instead of data from the database. This decoupled approach allows you to create and run unit tests that focus on the logic of your application without requiring connectivity to the database. - -Connections to databases can fail and, more importantly, running hundreds of tests against a database is bad for two reasons. First, it can take a long time because of the large number of tests. Second, the database records might change and impact the results of your tests, so they might not be consistent, especially if your tests are running in parallel. Unit tests typically can run in parallel; integration tests may not support parallel execution depending on their implementation. Testing against the database isn't a unit test but an integration test. You should have many unit tests running fast, but fewer integration tests against the databases. - -In terms of separation of concerns for unit tests, your logic operates on domain entities in memory. It assumes the repository class has delivered those. Once your logic modifies the domain entities, it assumes the repository class will store them correctly. The important point here is to create unit tests against your domain model and its domain logic. Aggregate roots are the main consistency boundaries in DDD. - -The repositories implemented in eShopOnContainers rely on EF Core's DbContext implementation of the Repository and Unit of Work patterns using its change tracker, so they don't duplicate this functionality.
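To make the testing benefit concrete, the following is a minimal sketch (not code from eShopOnContainers; the `ICustomerRepository`, `Customer`, and `CustomerService` names are hypothetical) showing how an in-memory fake can stand in for an EF-based repository implementation, so a unit test never touches a database:

```csharp
using System.Collections.Generic;

// Hypothetical abstraction and entity, defined here only to keep the sketch self-contained.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
}

public interface ICustomerRepository
{
    Customer? FindById(int id);
    void Add(Customer customer);
}

// Fake implementation that keeps data in memory; tests inject this instead of the EF version.
public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<int, Customer> _customers = new();

    public Customer? FindById(int id) =>
        _customers.TryGetValue(id, out var customer) ? customer : null;

    public void Add(Customer customer) => _customers[customer.Id] = customer;
}

// Application logic under test depends only on the abstraction, not on EF Core.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository) => _repository = repository;

    public bool Exists(int id) => _repository.FindById(id) is not null;
}
```

A unit test can then construct `CustomerService` with `InMemoryCustomerRepository` (or a mocking library) and assert on its behavior without any database connectivity.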
- -### The difference between the Repository pattern and the legacy Data Access class (DAL class) pattern - -A typical DAL object directly performs data access and persistence operations against storage, often at the level of a single table and row. Simple CRUD operations implemented with a set of DAL classes frequently do not support transactions (though this is not always the case). Most DAL class approaches make minimal use of abstractions, resulting in tight coupling between the DAL objects and the application or Business Logic Layer (BLL) classes that call them. - -When using a repository, the implementation details of persistence are encapsulated away from the domain model. The use of an abstraction provides ease of extending behavior through patterns like Decorators or Proxies. For instance, cross-cutting concerns like [caching](https://ardalis.com/building-a-cachedrepository-in-aspnet-core/), logging, and error handling can all be applied using these patterns rather than hard-coded in the data access code itself. It's also trivial to support multiple repository adapters which may be used in different environments, from local development to shared staging environments to production. - -### Implementing Unit of Work - -A [unit of work](https://martinfowler.com/eaaCatalog/unitOfWork.html) refers to a single transaction that involves multiple insert, update, or delete operations. In simple terms, it means that for a specific user action, such as a registration on a website, all the insert, update, and delete operations are handled in a single transaction. This is more efficient than handling multiple database operations in a chattier way. - -These multiple persistence operations are performed later in a single action when your code from the application layer commands it. The decision about applying the in-memory changes to the actual database storage is typically based on the Unit of Work pattern. In EF, the Unit of Work pattern is implemented by the `DbContext` and is executed when a call is made to `SaveChanges`. - -In many cases, this pattern or way of applying operations against the storage can increase application performance and reduce the possibility of inconsistencies. It also reduces transaction blocking in the database tables, because all the intended operations are committed as part of one transaction. This is more efficient in comparison to executing many isolated operations against the database. Therefore, the selected ORM can optimize the execution against the database by grouping several update actions within the same transaction, as opposed to many small and separate transaction executions. - -The Unit of Work pattern can be implemented with or without using the Repository pattern. - -### Repositories shouldn't be mandatory - -Custom repositories are useful for the reasons cited earlier, and that is the approach for the ordering microservice in eShopOnContainers. However, it isn't an essential pattern to implement in a DDD design or even in general .NET development. - -For instance, Jimmy Bogard, when providing direct feedback for this guide, said the following: - -> This'll probably be my biggest feedback. I'm really not a fan of repositories, mainly because they hide the important details of the underlying persistence mechanism. It's why I go for MediatR for commands, too. I can use the full power of the persistence layer, and push all that domain behavior into my aggregate roots. I don't usually want to mock my repositories – I still need to have that integration test with the real thing.
Going CQRS meant that we didn't really have a need for repositories any more. - -Repositories might be useful, but they are not critical for your DDD design in the way that the Aggregate pattern and a rich domain model are. Therefore, use the Repository pattern or not, as you see fit. - -## Additional resources - -### Repository pattern - -- **Edward Hieatt and Rob Mee. Repository pattern.** \ - - -- **The Repository pattern** \ - [https://learn.microsoft.com/previous-versions/msp-n-p/ff649690(v=pandp.10)](/previous-versions/msp-n-p/ff649690(v=pandp.10)) - -- **Eric Evans. Domain-Driven Design: Tackling Complexity in the Heart of Software.** (Book; includes a discussion of the Repository pattern) \ - - -### Unit of Work pattern - -- **Martin Fowler. Unit of Work pattern.** \ - - -- **Implementing the Repository and Unit of Work Patterns in an ASP.NET MVC Application** \ - [https://learn.microsoft.com/aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application](/aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application) - ->[!div class="step-by-step"] ->[Previous](domain-events-design-implementation.md) ->[Next](infrastructure-persistence-layer-implementation-entity-framework-core.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-implementation-entity-framework-core.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-implementation-entity-framework-core.md deleted file mode 100644 index 571f229f3e9a7..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-implementation-entity-framework-core.md +++ /dev/null @@ -1,495 +0,0 @@ ---- -title: Implementing the infrastructure persistence layer with Entity Framework Core -description: .NET Microservices Architecture for Containerized .NET Applications | Explore the implementation details for the infrastructure persistence layer, using Entity Framework Core. -ms.date: 01/13/2021 ---- - -# Implement the infrastructure persistence layer with Entity Framework Core - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -When you use relational databases such as SQL Server, Oracle, or PostgreSQL, a recommended approach is to implement the persistence layer based on Entity Framework (EF). EF supports LINQ and provides strongly typed objects for your model, as well as simplified persistence into your database. - -Entity Framework has a long history as part of the .NET Framework. When you use .NET, you should also use Entity Framework Core, which runs on Windows or Linux in the same way as .NET. EF Core is a complete rewrite of Entity Framework that's implemented with a much smaller footprint and important improvements in performance. - -## Introduction to Entity Framework Core - -Entity Framework (EF) Core is a lightweight, extensible, and cross-platform version of the popular Entity Framework data access technology. It was introduced with .NET Core in mid-2016. - -Since an introduction to EF Core is already available in Microsoft documentation, here we simply provide links to that information. 
- -### Additional resources - -- **Entity Framework Core** \ - [https://learn.microsoft.com/ef/core/](/ef/core/) - -- **Getting started with ASP.NET Core and Entity Framework Core using Visual Studio** \ - [https://learn.microsoft.com/aspnet/core/data/ef-mvc/](/aspnet/core/data/ef-mvc/) - -- **DbContext Class** \ - [https://learn.microsoft.com/dotnet/api/microsoft.entityframeworkcore.dbcontext](xref:Microsoft.EntityFrameworkCore.DbContext) - -- **Compare EF Core & EF6.x** \ - [https://learn.microsoft.com/ef/efcore-and-ef6/index](/ef/efcore-and-ef6/index) - -## Infrastructure in Entity Framework Core from a DDD perspective - -From a DDD point of view, an important capability of EF is the ability to use POCO domain entities, also known in EF terminology as POCO *code-first entities*. If you use POCO domain entities, your domain model classes are persistence-ignorant, following the [Persistence Ignorance](https://deviq.com/persistence-ignorance/) and the [Infrastructure Ignorance](https://ayende.com/blog/3137/infrastructure-ignorance) principles. - -Per DDD patterns, you should encapsulate domain behavior and rules within the entity class itself, so it can control invariants, validations, and rules when accessing any collection. Therefore, it is not a good practice in DDD to allow public access to collections of child entities or value objects. Instead, you want to expose methods that control how and when your fields and property collections can be updated, and what behavior and actions should occur when that happens. - -Since EF Core 1.1, to satisfy those DDD requirements, you can have plain fields in your entities instead of public properties. If you do not want an entity field to be externally accessible, you can just create a field instead of a property. You can also use private property setters. - -In a similar way, you can now have read-only access to collections by using a public property typed as `IReadOnlyCollection<T>`, which is backed by a private field member for the collection (like a `List<T>`) in your entity that relies on EF for persistence. Previous versions of Entity Framework required collection properties to support `ICollection<T>`, which meant that any developer using the parent entity class could add or remove items through its property collections. That possibility would be against the recommended patterns in DDD. - -You can use a private collection while exposing a read-only `IReadOnlyCollection<T>` object, as shown in the following code example: - -```csharp -public class Order : Entity -{ - // Using private fields, allowed since EF Core 1.1 - private DateTime _orderDate; - // Other fields ... - - private readonly List<OrderItem> _orderItems; - public IReadOnlyCollection<OrderItem> OrderItems => _orderItems; - - protected Order() { } - - public Order(int buyerId, int paymentMethodId, Address address) - { - // Initializations ... - } - - public void AddOrderItem(int productId, string productName, - decimal unitPrice, decimal discount, - string pictureUrl, int units = 1) - { - // Validation logic... - - var orderItem = new OrderItem(productId, productName, - unitPrice, discount, - pictureUrl, units); - _orderItems.Add(orderItem); - } -} -``` - -The `OrderItems` property can only be accessed as read-only through `IReadOnlyCollection<OrderItem>`. This type is read-only so it is protected against regular external updates. - -EF Core provides a way to map the domain model to the physical database without "contaminating" the domain model.
It is pure .NET POCO code, because the mapping action is implemented in the persistence layer. In that mapping action, you need to configure the fields-to-database mapping. In the following example of the `OnModelCreating` method from `OrderingContext` and the `OrderEntityTypeConfiguration` class, the call to `SetPropertyAccessMode` tells EF Core to access the `OrderItems` property through its field. - -```csharp -// At OrderingContext.cs from eShopOnContainers -protected override void OnModelCreating(ModelBuilder modelBuilder) -{ - // ... - modelBuilder.ApplyConfiguration(new OrderEntityTypeConfiguration()); - // Other entities' configuration ... -} - -// At OrderEntityTypeConfiguration.cs from eShopOnContainers -class OrderEntityTypeConfiguration : IEntityTypeConfiguration<Order> -{ - public void Configure(EntityTypeBuilder<Order> orderConfiguration) - { - orderConfiguration.ToTable("orders", OrderingContext.DEFAULT_SCHEMA); - // Other configuration - - var navigation = - orderConfiguration.Metadata.FindNavigation(nameof(Order.OrderItems)); - - // EF accesses the OrderItem collection property through its backing field - navigation.SetPropertyAccessMode(PropertyAccessMode.Field); - - // Other configuration - } -} -``` - -When you use fields instead of properties, the `Order` entity is persisted as if it had a `List<OrderItem>` property. However, it exposes a single accessor, the `AddOrderItem` method, for adding new items to the order. As a result, behavior and data are tied together and will be consistent throughout any application code that uses the domain model. - -## Implement custom repositories with Entity Framework Core - -At the implementation level, a repository is simply a class with data persistence code coordinated by a unit of work (the DbContext in EF Core) when performing updates, as shown in the following class: - -```csharp -// using directives... -namespace Microsoft.eShopOnContainers.Services.Ordering.Infrastructure.Repositories -{ - public class BuyerRepository : IBuyerRepository - { - private readonly OrderingContext _context; - public IUnitOfWork UnitOfWork - { - get - { - return _context; - } - } - - public BuyerRepository(OrderingContext context) - { - _context = context ?? throw new ArgumentNullException(nameof(context)); - } - - public Buyer Add(Buyer buyer) - { - return _context.Buyers.Add(buyer).Entity; - } - - public async Task<Buyer> FindAsync(string buyerIdentityGuid) - { - var buyer = await _context.Buyers - .Include(b => b.Payments) - .Where(b => b.FullName == buyerIdentityGuid) - .SingleOrDefaultAsync(); - - return buyer; - } - } -} -``` - -The `IBuyerRepository` interface comes from the domain model layer as a contract. However, the repository implementation is done at the persistence and infrastructure layer. - -The EF DbContext is passed into the constructor through Dependency Injection. It is shared between multiple repositories within the same HTTP request scope, thanks to its default lifetime (`ServiceLifetime.Scoped`) in the IoC container (which can also be explicitly set with `services.AddDbContext<>`). - -### Methods to implement in a repository (updates or transactions versus queries) - -Within each repository class, you should put the persistence methods that update the state of entities contained by its related aggregate. Remember there is a one-to-one relationship between an aggregate and its related repository. Consider that an aggregate root entity object might have embedded child entities within its EF graph.
For example, a buyer might have multiple payment methods as related child entities. - -Since the approach for the ordering microservice in eShopOnContainers is also based on CQS/CQRS, most of the queries are not implemented in custom repositories. Developers have the freedom to create the queries and joins they need for the presentation layer without the restrictions imposed by aggregates, custom repositories per aggregate, and DDD in general. Most of the custom repositories suggested by this guide have several update or transactional methods but just the query methods needed to get data to be updated. For example, the BuyerRepository repository implements a FindAsync method, because the application needs to know whether a particular buyer exists before creating a new buyer related to the order. - -However, the real query methods to get data to send to the presentation layer or client apps are implemented, as mentioned, in the CQRS queries based on flexible queries using Dapper. - -### Using a custom repository versus using EF DbContext directly - -The Entity Framework DbContext class is based on the Unit of Work and Repository patterns and can be used directly from your code, such as from an ASP.NET Core MVC controller. The Unit of Work and Repository patterns result in the simplest code, as in the CRUD catalog microservice in eShopOnContainers. In cases where you want the simplest code possible, you might want to directly use the DbContext class, as many developers do. - -However, implementing custom repositories provides several benefits when implementing more complex microservices or applications. The Unit of Work and Repository patterns are intended to encapsulate the infrastructure persistence layer so it is decoupled from the application and domain-model layers. Implementing these patterns can facilitate the use of mock repositories simulating access to the database. - -In Figure 7-18, you can see the differences between not using repositories (directly using the EF DbContext) versus using repositories, which makes it easier to mock those repositories. - -![Diagram showing the components and dataflow in the two repositories.](./media/infrastructure-persistence-layer-implementation-entity-framework-core/custom-repo-versus-db-context.png) - -**Figure 7-18**. Using custom repositories versus a plain DbContext - -Figure 7-18 shows that using a custom repository adds an abstraction layer that can be used to ease testing by mocking the repository. There are multiple alternatives when mocking. You could mock just repositories or you could mock a whole unit of work. Usually mocking just the repositories is enough, and the complexity to abstract and mock a whole unit of work is usually not needed. - -Later, when we focus on the application layer, you will see how Dependency Injection works in ASP.NET Core and how it is implemented when using repositories. - -In short, custom repositories allow you to test code more easily with unit tests that are not impacted by the data tier state. If you run tests that also access the actual database through the Entity Framework, they are not unit tests but integration tests, which are a lot slower. - -If you were using DbContext directly, you would have to mock it or to run unit tests by using an in-memory SQL Server with predictable data for unit tests. But mocking the DbContext or controlling fake data requires more work than mocking at the repository level. Of course, you could always test the MVC controllers. 
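As a rough illustration of the "simplest code" option described above, the following sketch uses the DbContext directly from an ASP.NET Core controller, with no repository in between. It assumes a `CatalogContext` exposing a `CatalogItems` `DbSet`, similar to the catalog microservice mentioned earlier; the exact types are illustrative, not the eShopOnContainers code:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using System.Threading.Tasks;

// Hypothetical CRUD-style controller that queries the DbContext directly,
// trading mockability for simplicity.
[ApiController]
[Route("api/v1/[controller]")]
public class CatalogController : ControllerBase
{
    private readonly CatalogContext _context;

    public CatalogController(CatalogContext context) => _context = context;

    [HttpGet("{id:int}")]
    public async Task<ActionResult<CatalogItem>> GetItemById(int id)
    {
        // No repository abstraction here: tests for this action need either
        // a mocked DbContext or a test database.
        var item = await _context.CatalogItems.SingleOrDefaultAsync(ci => ci.Id == id);

        if (item is null)
        {
            return NotFound();
        }

        return item;
    }
}
```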
- -## EF DbContext and IUnitOfWork instance lifetime in your IoC container - -The `DbContext` object (exposed as an `IUnitOfWork` object) should be shared among multiple repositories within the same HTTP request scope. For example, this is true when the operation being executed must deal with multiple aggregates, or simply because you are using multiple repository instances. It is also important to mention that the `IUnitOfWork` interface is part of your domain layer, not an EF Core type. - -In order to do that, the instance of the `DbContext` object has to have its service lifetime set to ServiceLifetime.Scoped. This is the default lifetime when registering a `DbContext` with `builder.Services.AddDbContext` in your IoC container from the _Program.cs_ file in your ASP.NET Core Web API project. The following code illustrates this. - -```csharp -// Add framework services. -builder.Services.AddMvc(options => -{ - options.Filters.Add(typeof(HttpGlobalExceptionFilter)); -}).AddControllersAsServices(); - -builder.Services.AddEntityFrameworkSqlServer() - .AddDbContext<OrderingContext>(options => - { - options.UseSqlServer(Configuration["ConnectionString"], - sqlOptions => sqlOptions.MigrationsAssembly(typeof(Startup).GetTypeInfo(). - Assembly.GetName().Name)); - }, - ServiceLifetime.Scoped // Note that Scoped is the default choice - // in AddDbContext. It is shown here only for - // pedagogic purposes. - ); -``` - -The DbContext instantiation mode should not be configured as ServiceLifetime.Transient or ServiceLifetime.Singleton. - -## The repository instance lifetime in your IoC container - -In a similar way, the repository's lifetime should usually be set as scoped (InstancePerLifetimeScope in Autofac). It could also be transient (InstancePerDependency in Autofac), but your service will be more efficient with regard to memory when using the scoped lifetime. - -```csharp -// Registering a Repository in Autofac IoC container -builder.RegisterType<OrderRepository>() - .As<IOrderRepository>() - .InstancePerLifetimeScope(); -``` - -Using the singleton lifetime for the repository could cause you serious concurrency problems when your DbContext is set to scoped (InstancePerLifetimeScope) lifetime (the default lifetime for a DbContext). As long as your service lifetimes for your repositories and your DbContext are both Scoped, you'll avoid these issues. - -### Additional resources - -- **Implementing the Repository and Unit of Work Patterns in an ASP.NET MVC Application** \ - - -- **Jonathan Allen. Implementation Strategies for the Repository Pattern with Entity Framework, Dapper, and Chain** \ - - -- **Cesar de la Torre. Comparing ASP.NET Core IoC container service lifetimes with Autofac IoC container instance scopes** \ - - -## Table mapping - -Table mapping identifies the table data to be queried from and saved to the database. Previously you saw how domain entities (for example, a product or order domain) can be used to generate a related database schema. EF is strongly designed around the concept of *conventions*. Conventions address questions like "What will the name of a table be?" or "What property is the primary key?" Conventions are typically based on conventional names. For example, it is typical for the primary key to be a property that ends with `Id`. - -By convention, each entity will be set up to map to a table with the same name as the `DbSet<TEntity>` property that exposes the entity on the derived context. If no `DbSet<TEntity>` value is provided for the given entity, the class name is used.
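As a minimal illustration of those conventions (the `Blog` and `BloggingContext` types below are hypothetical, not part of eShopOnContainers), the following context maps the `Blog` entity to a table named `Blogs` simply because that is the name of the exposing `DbSet<TEntity>` property, and its `Id` property is picked up as the primary key by convention:

```csharp
using Microsoft.EntityFrameworkCore;

// Illustrative only: no data annotations and no Fluent API configuration are needed
// for this mapping; EF Core conventions supply the table name and primary key.
public class Blog
{
    public int Id { get; set; }           // Primary key by convention (ends with "Id")
    public string Url { get; set; } = string.Empty;
}

public class BloggingContext : DbContext
{
    // The table name "Blogs" is taken from this DbSet property name by convention.
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer("<your-connection-string>");
}
```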
- -### Data Annotations versus Fluent API - -There are many additional EF Core conventions, and most of them can be changed by using either data annotations or Fluent API, implemented within the OnModelCreating method. - -Data annotations must be used on the entity model classes themselves, which is a more intrusive way from a DDD point of view. This is because you are contaminating your model with data annotations related to the infrastructure database. On the other hand, Fluent API is a convenient way to change most conventions and mappings within your data persistence infrastructure layer, so the entity model will be clean and decoupled from the persistence infrastructure. - -### Fluent API and the OnModelCreating method - -As mentioned, in order to change conventions and mappings, you can use the OnModelCreating method in the DbContext class. - -The ordering microservice in eShopOnContainers implements explicit mapping and configuration, when needed, as shown in the following code. - -```csharp -// At OrderingContext.cs from eShopOnContainers -protected override void OnModelCreating(ModelBuilder modelBuilder) -{ - // ... - modelBuilder.ApplyConfiguration(new OrderEntityTypeConfiguration()); - // Other entities' configuration ... -} - -// At OrderEntityTypeConfiguration.cs from eShopOnContainers -class OrderEntityTypeConfiguration : IEntityTypeConfiguration<Order> -{ - public void Configure(EntityTypeBuilder<Order> orderConfiguration) - { - orderConfiguration.ToTable("orders", OrderingContext.DEFAULT_SCHEMA); - - orderConfiguration.HasKey(o => o.Id); - - orderConfiguration.Ignore(b => b.DomainEvents); - - orderConfiguration.Property(o => o.Id) - .UseHiLo("orderseq", OrderingContext.DEFAULT_SCHEMA); - - //Address value object persisted as owned entity type supported since EF Core 2.0 - orderConfiguration - .OwnsOne(o => o.Address, a => - { - a.WithOwner(); - }); - - orderConfiguration - .Property<int?>("_buyerId") - .UsePropertyAccessMode(PropertyAccessMode.Field) - .HasColumnName("BuyerId") - .IsRequired(false); - - orderConfiguration - .Property<DateTime>("_orderDate") - .UsePropertyAccessMode(PropertyAccessMode.Field) - .HasColumnName("OrderDate") - .IsRequired(); - - orderConfiguration - .Property<int>("_orderStatusId") - .UsePropertyAccessMode(PropertyAccessMode.Field) - .HasColumnName("OrderStatusId") - .IsRequired(); - - orderConfiguration - .Property<int?>("_paymentMethodId") - .UsePropertyAccessMode(PropertyAccessMode.Field) - .HasColumnName("PaymentMethodId") - .IsRequired(false); - - orderConfiguration.Property<string>("Description").IsRequired(false); - - var navigation = orderConfiguration.Metadata.FindNavigation(nameof(Order.OrderItems)); - - // DDD Patterns comment: - //Set as field (New since EF 1.1) to access the OrderItem collection property through its field - navigation.SetPropertyAccessMode(PropertyAccessMode.Field); - - orderConfiguration.HasOne<PaymentMethod>() - .WithMany() - .HasForeignKey("_paymentMethodId") - .IsRequired(false) - .OnDelete(DeleteBehavior.Restrict); - - orderConfiguration.HasOne<Buyer>() - .WithMany() - .IsRequired(false) - .HasForeignKey("_buyerId"); - - orderConfiguration.HasOne(o => o.OrderStatus) - .WithMany() - .HasForeignKey("_orderStatusId"); - } -} -``` - -You could set all the Fluent API mappings within the same `OnModelCreating` method, but it's advisable to partition that code and have multiple configuration classes, one per entity, as shown in the example. Especially for large models, it is advisable to have separate configuration classes for configuring different entity types.
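If you follow that one-configuration-class-per-entity approach, EF Core (2.2 and later) can also discover and apply all of the `IEntityTypeConfiguration<TEntity>` classes in an assembly for you, instead of requiring one `ApplyConfiguration` call per entity. The following is a brief sketch of that alternative, assuming the configuration classes live in the same assembly as the `OrderingContext`; it is not the code used by eShopOnContainers:

```csharp
// Sketch: apply every IEntityTypeConfiguration<TEntity> found in the context's assembly.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.ApplyConfigurationsFromAssembly(typeof(OrderingContext).Assembly);
}
```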
- -The code in the example shows a few explicit declarations and mapping. However, EF Core conventions do many of those mappings automatically, so the actual code you would need in your case might be smaller. - -### The Hi/Lo algorithm in EF Core - -An interesting aspect of code in the preceding example is that it uses the [Hi/Lo algorithm](https://vladmihalcea.com/the-hilo-algorithm/) as the key generation strategy. - -The Hi/Lo algorithm is useful when you need unique keys before committing changes. As a summary, the Hi-Lo algorithm assigns unique identifiers to table rows while not depending on storing the row in the database immediately. This lets you start using the identifiers right away, as happens with regular sequential database IDs. - -The Hi/Lo algorithm describes a mechanism for getting a batch of unique IDs from a related database sequence. These IDs are safe to use because the database guarantees the uniqueness, so there will be no collisions between users. This algorithm is interesting for these reasons: - -- It does not break the Unit of Work pattern. - -- It gets sequence IDs in batches, to minimize round trips to the database. - -- It generates a human readable identifier, unlike techniques that use GUIDs. - -EF Core supports [HiLo](https://stackoverflow.com/questions/282099/whats-the-hi-lo-algorithm) with the `UseHiLo` method, as shown in the preceding example. - -### Map fields instead of properties - -With this feature, available since EF Core 1.1, you can directly map columns to fields. It is possible to not use properties in the entity class, and just to map columns from a table to fields. A common use for that would be private fields for any internal state that do not need to be accessed from outside the entity. - -You can do this with single fields or also with collections, like a `List<>` field. This point was mentioned earlier when we discussed modeling the domain model classes, but here you can see how that mapping is performed with the `PropertyAccessMode.Field` configuration highlighted in the previous code. - -### Use shadow properties in EF Core, hidden at the infrastructure level - -Shadow properties in EF Core are properties that do not exist in your entity class model. The values and states of these properties are maintained purely in the [ChangeTracker](/ef/core/api/microsoft.entityframeworkcore.changetracking.changetracker) class at the infrastructure level. - -## Implement the Query Specification pattern - -As introduced earlier in the design section, the Query Specification pattern is a Domain-Driven Design pattern designed as the place where you can put the definition of a query with optional sorting and paging logic. - -The Query Specification pattern defines a query in an object. For example, in order to encapsulate a paged query that searches for some products you can create a PagedProduct specification that takes the necessary input parameters (pageNumber, pageSize, filter, etc.). Then, within any Repository method (usually a List() overload) it would accept an IQuerySpecification and run the expected query based on that specification. - -An example of a generic Specification interface is the following code, which is similar to code used in the [eShopOnWeb](https://github.com/dotnet-architecture/eShopOnWeb) reference application. 
- -```csharp -// GENERIC SPECIFICATION INTERFACE -// https://github.com/dotnet-architecture/eShopOnWeb - -public interface ISpecification<T> -{ - Expression<Func<T, bool>> Criteria { get; } - List<Expression<Func<T, object>>> Includes { get; } - List<string> IncludeStrings { get; } -} -``` - -Then, the implementation of a generic specification base class is the following. - -```csharp -// GENERIC SPECIFICATION IMPLEMENTATION (BASE CLASS) -// https://github.com/dotnet-architecture/eShopOnWeb - -public abstract class BaseSpecification<T> : ISpecification<T> -{ - public BaseSpecification(Expression<Func<T, bool>> criteria) - { - Criteria = criteria; - } - public Expression<Func<T, bool>> Criteria { get; } - - public List<Expression<Func<T, object>>> Includes { get; } = - new List<Expression<Func<T, object>>>(); - - public List<string> IncludeStrings { get; } = new List<string>(); - - protected virtual void AddInclude(Expression<Func<T, object>> includeExpression) - { - Includes.Add(includeExpression); - } - - // string-based includes allow for including children of children - // e.g. Basket.Items.Product - protected virtual void AddInclude(string includeString) - { - IncludeStrings.Add(includeString); - } -} -``` - -The following specification loads a single basket entity given either the basket's ID or the ID of the buyer to whom the basket belongs. It will [eagerly load](/ef/core/querying/related-data) the basket's `Items` collection. - -```csharp -// SAMPLE QUERY SPECIFICATION IMPLEMENTATION - -public class BasketWithItemsSpecification : BaseSpecification<Basket> -{ - public BasketWithItemsSpecification(int basketId) - : base(b => b.Id == basketId) - { - AddInclude(b => b.Items); - } - - public BasketWithItemsSpecification(string buyerId) - : base(b => b.BuyerId == buyerId) - { - AddInclude(b => b.Items); - } -} -``` - -And finally, you can see below how a generic EF Repository can use such a specification to filter and eager-load data related to a given entity type T. - -```csharp -// GENERIC EF REPOSITORY WITH SPECIFICATION -// https://github.com/dotnet-architecture/eShopOnWeb - -public IEnumerable<T> List(ISpecification<T> spec) -{ - // fetch a Queryable that includes all expression-based includes - var queryableResultWithIncludes = spec.Includes - .Aggregate(_dbContext.Set<T>().AsQueryable(), - (current, include) => current.Include(include)); - - // modify the IQueryable to include any string-based include statements - var secondaryResult = spec.IncludeStrings - .Aggregate(queryableResultWithIncludes, - (current, include) => current.Include(include)); - - // return the result of the query using the specification's criteria expression - return secondaryResult - .Where(spec.Criteria) - .AsEnumerable(); -} -``` - -In addition to encapsulating filtering logic, the specification can specify the shape of the data to be returned, including which properties to populate. - -Although we don't recommend returning `IQueryable` from a repository, it's perfectly fine to use them within the repository to build up a set of results. You can see this approach used in the List method above, which uses intermediate `IQueryable` expressions to build up the query's list of includes before executing the query with the specification's criteria on the last line. - -Learn [how the specification pattern is applied in the eShopOnWeb sample](https://github.com/dotnet-architecture/eShopOnWeb/wiki/Patterns#specification).
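To see how the pieces fit together, here is a brief usage sketch. It assumes a generic repository of `Basket` exposing the `List` method shown above (here called `_basketRepository`); the variable names are illustrative rather than code taken from eShopOnWeb:

```csharp
// The specification describes *what* to load...
var spec = new BasketWithItemsSpecification(basketId: 42);

// ...and the repository decides *how* to execute it against EF Core.
// _basketRepository is assumed to implement the List(ISpecification<Basket>) method shown above.
IEnumerable<Basket> baskets = _basketRepository.List(spec);

// The returned basket has its Items collection eagerly loaded by the specification's Include.
var basket = baskets.FirstOrDefault();
```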
- -### Additional resources - -- **Table Mapping** \ - [https://learn.microsoft.com/ef/core/modeling/relational/tables](/ef/core/modeling/relational/tables) - -- **Use HiLo to generate keys with Entity Framework Core** \ - - -- **Backing Fields** \ - [https://learn.microsoft.com/ef/core/modeling/backing-field](/ef/core/modeling/backing-field) - -- **Steve Smith. Encapsulated Collections in Entity Framework Core** \ - - -- **Shadow Properties** \ - [https://learn.microsoft.com/ef/core/modeling/shadow-properties](/ef/core/modeling/shadow-properties) - -- **The Specification pattern** \ - - - **Ardalis.Specification NuGet Package** Used by eShopOnWeb. \ - -> [!div class="step-by-step"] -> [Previous](infrastructure-persistence-layer-design.md) -> [Next](nosql-database-persistence-infrastructure.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/apply-simplified-microservice-cqrs-ddd-patterns/simplified-cqrs-ddd-microservice.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/apply-simplified-microservice-cqrs-ddd-patterns/simplified-cqrs-ddd-microservice.png deleted file mode 100644 index b2222726511e1..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/apply-simplified-microservice-cqrs-ddd-patterns/simplified-cqrs-ddd-microservice.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/drapper-package-nuget.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/drapper-package-nuget.png deleted file mode 100644 index 77c8aa27d153d..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/drapper-package-nuget.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/ordering-api-queries-folder.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/ordering-api-queries-folder.png deleted file mode 100644 index 8d7562893beb6..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/ordering-api-queries-folder.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/simple-approach-cqrs-queries.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/simple-approach-cqrs-queries.png deleted file mode 100644 index 09c98075ce90e..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/simple-approach-cqrs-queries.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/swagger-ordering-http-api.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/swagger-ordering-http-api.png deleted file mode 100644 index cb1ba6ddbb305..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/cqrs-microservice-reads/swagger-ordering-http-api.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/ddd-oriented-microservice/ddd-service-layer-dependencies.png 
b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/ddd-oriented-microservice/ddd-service-layer-dependencies.png deleted file mode 100644 index 2a1b106e4c6ee..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/ddd-oriented-microservice/ddd-service-layer-dependencies.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/ddd-oriented-microservice/domain-driven-design-microservice.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/ddd-oriented-microservice/domain-driven-design-microservice.png deleted file mode 100644 index 1e3d6342e9058..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/ddd-oriented-microservice/domain-driven-design-microservice.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/ddd-oriented-microservice/ordering-domain-dependencies.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/ddd-oriented-microservice/ordering-domain-dependencies.png deleted file mode 100644 index 9d6146e49131d..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/ddd-oriented-microservice/ordering-domain-dependencies.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/domain-events-design-implementation/aggregate-domain-event-handlers.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/domain-events-design-implementation/aggregate-domain-event-handlers.png deleted file mode 100644 index 5490549dec5ad..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/domain-events-design-implementation/aggregate-domain-event-handlers.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/domain-events-design-implementation/domain-model-ordering-microservice.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/domain-events-design-implementation/domain-model-ordering-microservice.png deleted file mode 100644 index 7649bedd881a5..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/domain-events-design-implementation/domain-model-ordering-microservice.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/implement-value-objects/value-object-within-aggregate.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/implement-value-objects/value-object-within-aggregate.png deleted file mode 100644 index d87a3f07cdead..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/implement-value-objects/value-object-within-aggregate.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/index/internal-versus-external-architecture.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/index/internal-versus-external-architecture.png deleted file mode 100644 index 74ab7bc1aae70..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/index/internal-versus-external-architecture.png and /dev/null differ diff --git 
a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/infrastructure-persistence-layer-design/repository-aggregate-database-table-relationships.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/infrastructure-persistence-layer-design/repository-aggregate-database-table-relationships.png deleted file mode 100644 index c3bf827b3fcb4..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/infrastructure-persistence-layer-design/repository-aggregate-database-table-relationships.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/infrastructure-persistence-layer-implementation-entity-framework-core/custom-repo-versus-db-context.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/infrastructure-persistence-layer-implementation-entity-framework-core/custom-repo-versus-db-context.png deleted file mode 100644 index 9d5a2d8277428..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/infrastructure-persistence-layer-implementation-entity-framework-core/custom-repo-versus-db-context.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/add-ha-message-queue.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/add-ha-message-queue.png deleted file mode 100644 index ac07b3414d8ca..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/add-ha-message-queue.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/high-level-writes-side.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/high-level-writes-side.png deleted file mode 100644 index 720440b48abbc..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/high-level-writes-side.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/mediator-cqrs-microservice.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/mediator-cqrs-microservice.png deleted file mode 100644 index bebfc0ef64878..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/mediator-cqrs-microservice.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/ordering-api-microservice.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/ordering-api-microservice.png deleted file mode 100644 index 5a00ea2f083a5..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-application-layer-implementation-web-api/ordering-api-microservice.png and /dev/null differ diff --git 
a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-domain-model/buyer-order-aggregate-pattern.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-domain-model/buyer-order-aggregate-pattern.png deleted file mode 100644 index 0421717ac70d0..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-domain-model/buyer-order-aggregate-pattern.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-domain-model/domain-entity-pattern.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-domain-model/domain-entity-pattern.png deleted file mode 100644 index d755b76f03eb6..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/microservice-domain-model/domain-entity-pattern.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/net-core-microservice-domain-model/ordering-microservice-container.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/net-core-microservice-domain-model/ordering-microservice-container.png deleted file mode 100644 index d3b151c2cd385..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/net-core-microservice-domain-model/ordering-microservice-container.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/net-core-microservice-domain-model/vs-solution-explorer-order-aggregate.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/net-core-microservice-domain-model/vs-solution-explorer-order-aggregate.png deleted file mode 100644 index f180a40f6ac87..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/net-core-microservice-domain-model/vs-solution-explorer-order-aggregate.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/azure-cosmos-db-global-distribution.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/azure-cosmos-db-global-distribution.png deleted file mode 100644 index 56a8fc69c37f5..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/azure-cosmos-db-global-distribution.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/eshoponcontainers-mongodb-containers.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/eshoponcontainers-mongodb-containers.png deleted file mode 100644 index 547d6584ac5d5..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/eshoponcontainers-mongodb-containers.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/mongodb-api-nuget-packages.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/mongodb-api-nuget-packages.png deleted file mode 100644 index 3ac3fbb67ae18..0000000000000 Binary files 
a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/mongodb-api-nuget-packages.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/mongodb-api-wire-protocol.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/mongodb-api-wire-protocol.png deleted file mode 100644 index fa70cf8bb8ae9..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/nosql-database-persistence-infrastructure/mongodb-api-wire-protocol.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/seedwork-domain-model-base-classes-interfaces/vs-solution-seedwork-classes.png b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/seedwork-domain-model-base-classes-interfaces/vs-solution-seedwork-classes.png deleted file mode 100644 index 3fbcfb236ed31..0000000000000 Binary files a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/media/seedwork-domain-model-base-classes-interfaces/vs-solution-seedwork-classes.png and /dev/null differ diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md deleted file mode 100644 index 2795d0a46d552..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md +++ /dev/null @@ -1,938 +0,0 @@ ---- -title: Implementing the microservice application layer using the Web API -description: Understand the Dependency Injection and the Mediator patterns and their implementation details in the Web API application layer. -ms.date: 01/13/2021 ---- - -# Implement the microservice application layer using the Web API - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -## Use Dependency Injection to inject infrastructure objects into your application layer - -As mentioned previously, the application layer can be implemented as part of the artifact (assembly) you are building, such as within a Web API project or an MVC web app project. In the case of a microservice built with ASP.NET Core, the application layer will usually be your Web API library. If you want to separate what is coming from ASP.NET Core (its infrastructure plus your controllers) from your custom application layer code, you could also place your application layer in a separate class library, but that is optional. - -For instance, the application layer code of the ordering microservice is directly implemented as part of the **Ordering.API** project (an ASP.NET Core Web API project), as shown in Figure 7-23. - -:::image type="complex" source="./media/microservice-application-layer-implementation-web-api/ordering-api-microservice.png" alt-text="Screenshot of the Ordering.API microservice in the Solution Explorer."::: -The Solution Explorer view of the Ordering.API microservice, showing the subfolders under the Application folder: Behaviors, Commands, DomainEventHandlers, IntegrationEvents, Models, Queries, and Validations. -:::image-end::: - -**Figure 7-23**. 
The application layer in the Ordering.API ASP.NET Core Web API project - -ASP.NET Core includes a simple [built-in IoC container](/aspnet/core/fundamentals/dependency-injection) (represented by the IServiceProvider interface) that supports constructor injection by default, and ASP.NET makes certain services available through DI. ASP.NET Core uses the term *service* for any of the types you register that will be injected through DI. You configure the built-in container's services in your application's _Program.cs_ file. Your dependencies are implemented in the services that a type needs and that you register in the IoC container. - -Typically, you want to inject dependencies that implement infrastructure objects. A typical dependency to inject is a repository. But you could inject any other infrastructure dependency that you may have. For simpler implementations, you could directly inject your Unit of Work pattern object (the EF DbContext object), because the DbContext is also the implementation of your infrastructure persistence objects. - -In the following example, you can see how .NET is injecting the required repository objects through the constructor. The class is a command handler, which will get covered in the next section. - -```csharp -public class CreateOrderCommandHandler - : IRequestHandler<CreateOrderCommand, bool> -{ - private readonly IOrderRepository _orderRepository; - private readonly IIdentityService _identityService; - private readonly IMediator _mediator; - private readonly IOrderingIntegrationEventService _orderingIntegrationEventService; - private readonly ILogger<CreateOrderCommandHandler> _logger; - - // Using DI to inject infrastructure persistence Repositories - public CreateOrderCommandHandler(IMediator mediator, - IOrderingIntegrationEventService orderingIntegrationEventService, - IOrderRepository orderRepository, - IIdentityService identityService, - ILogger<CreateOrderCommandHandler> logger) - { - _orderRepository = orderRepository ?? throw new ArgumentNullException(nameof(orderRepository)); - _identityService = identityService ?? throw new ArgumentNullException(nameof(identityService)); - _mediator = mediator ?? throw new ArgumentNullException(nameof(mediator)); - _orderingIntegrationEventService = orderingIntegrationEventService ?? throw new ArgumentNullException(nameof(orderingIntegrationEventService)); - _logger = logger ?? throw new ArgumentNullException(nameof(logger)); - } - - public async Task<bool> Handle(CreateOrderCommand message, CancellationToken cancellationToken) - { - // Add Integration event to clean the basket - var orderStartedIntegrationEvent = new OrderStartedIntegrationEvent(message.UserId); - await _orderingIntegrationEventService.AddAndSaveEventAsync(orderStartedIntegrationEvent); - - // Add/Update the Buyer AggregateRoot - // DDD patterns comment: Add child entities and value-objects through the Order Aggregate-Root - // methods and constructor so validations, invariants and business logic - // make sure that consistency is preserved across the whole aggregate - var address = new Address(message.Street, message.City, message.State, message.Country, message.ZipCode); - var order = new Order(message.UserId, message.UserName, address, message.CardTypeId, message.CardNumber, message.CardSecurityNumber, message.CardHolderName, message.CardExpiration); - - foreach (var item in message.OrderItems) - { - order.AddOrderItem(item.ProductId, item.ProductName, item.UnitPrice, item.Discount, item.PictureUrl, item.Units); - } - - _logger.LogInformation("----- Creating Order - Order: {@Order}", order); - - _orderRepository.Add(order); - - return await _orderRepository.UnitOfWork - .SaveEntitiesAsync(cancellationToken); - } -} -``` - -The class uses the injected repositories to execute the transaction and persist the state changes. It does not matter whether that class is a command handler, an ASP.NET Core Web API controller method, or a [DDD Application Service](https://lostechies.com/jimmybogard/2008/08/21/services-in-domain-driven-design/). It is ultimately a simple class that uses repositories, domain entities, and other application coordination in a fashion similar to a command handler. Dependency Injection works the same way for all the mentioned classes, as in the example using DI based on the constructor. - -### Register the dependency implementation types and interfaces or abstractions - -Before you use the objects injected through constructors, you need to know where to register the interfaces and classes that produce the objects injected into your application classes through DI. (Like DI based on the constructor, as shown previously.) - -#### Use the built-in IoC container provided by ASP.NET Core - -When you use the built-in IoC container provided by ASP.NET Core, you register the types you want to inject in the _Program.cs_ file, as in the following code: - -```csharp -// Register out-of-the-box framework services. -builder.Services.AddDbContext<CatalogContext>(c => - c.UseSqlServer(Configuration["ConnectionString"]), - ServiceLifetime.Scoped); - -builder.Services.AddMvc(); -// Register custom application dependencies. -builder.Services.AddScoped<IMyCustomRepository, MyCustomSQLServerRepository>(); -``` - -The most common pattern when registering types in an IoC container is to register a pair of types—an interface and its related implementation class. Then when you request an object from the IoC container through any constructor, you request an object of a certain type of interface. For instance, in the previous example, the last line states that when any of your constructors have a dependency on IMyCustomRepository (interface or abstraction), the IoC container will inject an instance of the MyCustomSQLServerRepository implementation class. - -#### Use the Scrutor library for automatic types registration - -When using DI in .NET, you might want to be able to scan an assembly and automatically register its types by convention.
This feature is not currently available in ASP.NET Core. However, you can use the [Scrutor](https://github.com/khellang/Scrutor) library for that. This approach is convenient when you have dozens of types that need to be registered in your IoC container. - -#### Additional resources - -- **Matthew King. Registering services with Scrutor** \ - - -- **Kristian Hellang. Scrutor.** GitHub repo. \ - - -#### Use Autofac as an IoC container - -You can also use additional IoC containers and plug them into the ASP.NET Core pipeline, as in the ordering microservice in eShopOnContainers, which uses [Autofac](https://autofac.org/). When using Autofac, you typically register the types via modules, which let you split the registration of types across multiple files, depending on where your types are, just as you could have the application types distributed across multiple class libraries. - -For example, the following is the [Autofac application module](https://github.com/dotnet-architecture/eShopOnContainers/blob/main/src/Services/Ordering/Ordering.API/Infrastructure/AutofacModules/ApplicationModule.cs) for the [Ordering.API Web API](https://github.com/dotnet-architecture/eShopOnContainers/tree/main/src/Services/Ordering/Ordering.API) project with the types you will want to inject. - -```csharp -public class ApplicationModule : Autofac.Module -{ - public string QueriesConnectionString { get; } - public ApplicationModule(string qconstr) - { - QueriesConnectionString = qconstr; - } - - protected override void Load(ContainerBuilder builder) - { - builder.Register(c => new OrderQueries(QueriesConnectionString)) - .As<IOrderQueries>() - .InstancePerLifetimeScope(); - builder.RegisterType<BuyerRepository>() - .As<IBuyerRepository>() - .InstancePerLifetimeScope(); - builder.RegisterType<OrderRepository>() - .As<IOrderRepository>() - .InstancePerLifetimeScope(); - builder.RegisterType<RequestManager>() - .As<IRequestManager>() - .InstancePerLifetimeScope(); - } -} -``` - -Autofac also has a feature to [scan assemblies and register types by name conventions](https://autofac.readthedocs.io/en/latest/register/scanning.html). - -The registration process and concepts are very similar to the way you can register types with the built-in ASP.NET Core IoC container, but the syntax when using Autofac is a bit different. - -In the example code, the abstraction IOrderRepository is registered along with the implementation class OrderRepository. This means that whenever a constructor declares a dependency through the IOrderRepository abstraction or interface, the IoC container will inject an instance of the OrderRepository class. - -The instance scope type determines how an instance is shared between requests for the same service or dependency. When a request is made for a dependency, the IoC container can return the following: - -- A single instance per lifetime scope (referred to in the ASP.NET Core IoC container as *scoped*). - -- A new instance per dependency (referred to in the ASP.NET Core IoC container as *transient*). - -- A single instance shared across all objects using the IoC container (referred to in the ASP.NET Core IoC container as *singleton*). - -#### Additional resources - -- **Introduction to Dependency Injection in ASP.NET Core** \ - [https://learn.microsoft.com/aspnet/core/fundamentals/dependency-injection](/aspnet/core/fundamentals/dependency-injection) - -- **Autofac.** Official documentation.
\ - - -- **Comparing ASP.NET Core IoC container service lifetimes with Autofac IoC container instance scopes - Cesar de la Torre.** \ - - -## Implement the Command and Command Handler patterns - -In the DI-through-constructor example shown in the previous section, the IoC container was injecting repositories through a constructor in a class. But exactly where were they injected? In a simple Web API (for example, the catalog microservice in eShopOnContainers), you inject them at the MVC controllers' level, in a controller constructor, as part of the request pipeline of ASP.NET Core. However, in the initial code of this section (the [CreateOrderCommandHandler](https://github.com/dotnet-architecture/eShopOnContainers/blob/main/src/Services/Ordering/Ordering.API/Application/Commands/CreateOrderCommandHandler.cs) class from the Ordering.API service in eShopOnContainers), the injection of dependencies is done through the constructor of a particular command handler. Let us explain what a command handler is and why you would want to use it. - -The Command pattern is intrinsically related to the CQRS pattern that was introduced earlier in this guide. CQRS has two sides. The first area is queries, using simplified queries with the [Dapper](https://github.com/StackExchange/dapper-dot-net) micro ORM, which was explained previously. The second area is commands, which are the starting point for transactions, and the input channel from outside the service. - -As shown in Figure 7-24, the pattern is based on accepting commands from the client-side, processing them based on the domain model rules, and finally persisting the states with transactions. - -![Diagram showing the high-level data flow from the client to database.](./media/microservice-application-layer-implementation-web-api/high-level-writes-side.png) - -**Figure 7-24**. High-level view of the commands or "transactional side" in a CQRS pattern - -Figure 7-24 shows that the UI app sends a command through the API that gets to a `CommandHandler`, that depends on the Domain model and the Infrastructure, to update the database. - -### The command class - -A command is a request for the system to perform an action that changes the state of the system. Commands are imperative, and should be processed just once. - -Since commands are imperatives, they are typically named with a verb in the imperative mood (for example, "create" or "update"), and they might include the aggregate type, such as CreateOrderCommand. Unlike an event, a command is not a fact from the past; it is only a request, and thus may be refused. - -Commands can originate from the UI as a result of a user initiating a request, or from a process manager when the process manager is directing an aggregate to perform an action. - -An important characteristic of a command is that it should be processed just once by a single receiver. This is because a command is a single action or transaction you want to perform in the application. For example, the same order creation command should not be processed more than once. This is an important difference between commands and events. Events may be processed multiple times, because many systems or microservices might be interested in the event. - -In addition, it is important that a command be processed only once in case the command is not idempotent. A command is idempotent if it can be executed multiple times without changing the result, either because of the nature of the command, or because of the way the system handles the command. 
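For example (a simplified, hypothetical sketch that is not part of eShopOnContainers), a command that sets an absolute state is idempotent by its nature, while a command that appends data is not, so the system has to detect and discard duplicates of the latter:

```csharp
// Hypothetical commands used only to illustrate idempotency.

// Setting the status to "Paid" twice leaves the order in the same state,
// so this command is idempotent by nature.
public record SetOrderStatusCommand(int OrderId, string Status);

// Processing this command twice would add the item twice, so it is not
// idempotent by itself; the system must detect duplicates, for example,
// by attaching a request ID to the message (as shown later in this section).
public record AddOrderItemCommand(int OrderId, int ProductId, int Units);
```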
- -It is a good practice to make your commands and updates idempotent when it makes sense under your domain's business rules and invariants. For instance, to use the same example, if for any reason (retry logic, hacking, etc.) the same CreateOrder command reaches your system multiple times, you should be able to identify it and ensure that you do not create multiple orders. To do so, you need to attach some kind of identity to the operations and identify whether the command or update was already processed. - -You send a command to a single receiver; you do not publish a command. Publishing is for events that state a fact—that something has happened and might be interesting for event receivers. In the case of events, the publisher has no concerns about which receivers get the event or what they do with it. But domain events and integration events are a different story, already introduced in previous sections. - -A command is implemented with a class that contains data fields or collections with all the information that is needed in order to execute that command. A command is a special kind of Data Transfer Object (DTO), one that is specifically used to request changes or transactions. The command itself is based on exactly the information that is needed for processing the command, and nothing more. - -The following example shows the simplified `CreateOrderCommand` class. This is an immutable command that is used in the ordering microservice in eShopOnContainers. - -```csharp -// DDD and CQRS patterns comment: Note that it is recommended to implement immutable Commands -// In this case, its immutability is achieved by having all the setters as private -// plus only being able to update the data just once, when creating the object through its constructor. -// References on Immutable Commands: -// http://cqrs.nu/Faq -// https://docs.spine3.org/motivation/immutability.html -// http://blog.gauffin.org/2012/06/griffin-container-introducing-command-support/ -// https://learn.microsoft.com/dotnet/csharp/programming-guide/classes-and-structs/how-to-implement-a-lightweight-class-with-auto-implemented-properties - -[DataContract] -public class CreateOrderCommand - : IRequest<bool> -{ - [DataMember] - private readonly List<OrderItemDTO> _orderItems; - - [DataMember] - public string UserId { get; private set; } - - [DataMember] - public string UserName { get; private set; } - - [DataMember] - public string City { get; private set; } - - [DataMember] - public string Street { get; private set; } - - [DataMember] - public string State { get; private set; } - - [DataMember] - public string Country { get; private set; } - - [DataMember] - public string ZipCode { get; private set; } - - [DataMember] - public string CardNumber { get; private set; } - - [DataMember] - public string CardHolderName { get; private set; } - - [DataMember] - public DateTime CardExpiration { get; private set; } - - [DataMember] - public string CardSecurityNumber { get; private set; } - - [DataMember] - public int CardTypeId { get; private set; } - - [DataMember] - public IEnumerable<OrderItemDTO> OrderItems => _orderItems; - - public CreateOrderCommand() - { - _orderItems = new List<OrderItemDTO>(); - } - - public CreateOrderCommand(List<BasketItem> basketItems, string userId, string userName, string city, string street, string state, string country, string zipcode, - string cardNumber, string cardHolderName, DateTime cardExpiration, - string cardSecurityNumber, int cardTypeId) : this() - { - _orderItems = basketItems.ToOrderItemsDTO().ToList(); - UserId = userId; - UserName = userName; - City = city; -
Street = street; - State = state; - Country = country; - ZipCode = zipcode; - CardNumber = cardNumber; - CardHolderName = cardHolderName; - CardExpiration = cardExpiration; - CardSecurityNumber = cardSecurityNumber; - CardTypeId = cardTypeId; - } - - - public class OrderItemDTO - { - public int ProductId { get; set; } - - public string ProductName { get; set; } - - public decimal UnitPrice { get; set; } - - public decimal Discount { get; set; } - - public int Units { get; set; } - - public string PictureUrl { get; set; } - } -} -``` - -Basically, the command class contains all the data you need for performing a business transaction by using the domain model objects. Thus, commands are simply data structures that contain read-only data, and no behavior. The command's name indicates its purpose. In many languages like C#, commands are represented as classes, but they are not true classes in the real object-oriented sense. - -As an additional characteristic, commands are immutable, because the expected usage is that they are processed directly by the domain model. They do not need to change during their projected lifetime. In a C# class, immutability can be achieved by not having any setters or other methods that change the internal state. - -Keep in mind that if you intend or expect commands to go through a serializing/deserializing process, the properties must have a private setter, and the `[DataMember]` (or `[JsonProperty]`) attribute. Otherwise, the deserializer won't be able to reconstruct the object at the destination with the required values. You can also use truly read-only properties if the class has a constructor with parameters for all properties, with the usual camelCase naming convention, and annotate the constructor as `[JsonConstructor]`. However, this option requires more code. - -For example, the command class for creating an order is probably similar in terms of data to the order you want to create, but you probably do not need the same attributes. For instance, `CreateOrderCommand` does not have an order ID, because the order has not been created yet. - -Many command classes can be simple, requiring only a few fields about some state that needs to be changed. That would be the case if you are just changing the status of an order from "in process" to "paid" or "shipped" by using a command similar to the following: - -```csharp -[DataContract] -public class UpdateOrderStatusCommand - : IRequest<bool> -{ - [DataMember] - public string Status { get; private set; } - - [DataMember] - public string OrderId { get; private set; } - - [DataMember] - public string BuyerIdentityGuid { get; private set; } -} -``` - -Some developers make their UI request objects separate from their command DTOs, but that is just a matter of preference. It is a tedious separation with not much additional value, and the objects are almost exactly the same shape. For instance, in eShopOnContainers, some commands come directly from the client-side. - -### The Command handler class - -You should implement a specific command handler class for each command. That is how the pattern works, and it's where you'll use the command object, the domain objects, and the infrastructure repository objects. The command handler is in fact the heart of the application layer in terms of CQRS and DDD.
However, all the domain logic should be contained in the domain classes—within the aggregate roots (root entities), child entities, or [domain services](https://lostechies.com/jimmybogard/2008/08/21/services-in-domain-driven-design/), but not within the command handler, which is a class from the application layer. - -The command handler class offers a strong stepping stone in the way to achieve the Single Responsibility Principle (SRP) mentioned in a previous section. - -A command handler receives a command and obtains a result from the aggregate that is used. The result should be either successful execution of the command, or an exception. In the case of an exception, the system state should be unchanged. - -The command handler usually takes the following steps: - -- It receives the command object, like a DTO (from the [mediator](https://en.wikipedia.org/wiki/Mediator_pattern) or other infrastructure object). - -- It validates that the command is valid (if not validated by the mediator). - -- It instantiates the aggregate root instance that is the target of the current command. - -- It executes the method on the aggregate root instance, getting the required data from the command. - -- It persists the new state of the aggregate to its related database. This last operation is the actual transaction. - -Typically, a command handler deals with a single aggregate driven by its aggregate root (root entity). If multiple aggregates should be impacted by the reception of a single command, you could use domain events to propagate states or actions across multiple aggregates. - -The important point here is that when a command is being processed, all the domain logic should be inside the domain model (the aggregates), fully encapsulated and ready for unit testing. The command handler just acts as a way to get the domain model from the database, and as the final step, to tell the infrastructure layer (repositories) to persist the changes when the model is changed. The advantage of this approach is that you can refactor the domain logic in an isolated, fully encapsulated, rich, behavioral domain model without changing code in the application or infrastructure layers, which are the plumbing level (command handlers, Web API, repositories, etc.). - -When command handlers get complex, with too much logic, that can be a code smell. Review them, and if you find domain logic, refactor the code to move that domain behavior to the methods of the domain objects (the aggregate root and child entity). - -As an example of a command handler class, the following code shows the same `CreateOrderCommandHandler` class that you saw at the beginning of this chapter. In this case, it also highlights the Handle method and the operations with the domain model objects/aggregates. - -```csharp -public class CreateOrderCommandHandler - : IRequestHandler -{ - private readonly IOrderRepository _orderRepository; - private readonly IIdentityService _identityService; - private readonly IMediator _mediator; - private readonly IOrderingIntegrationEventService _orderingIntegrationEventService; - private readonly ILogger _logger; - - // Using DI to inject infrastructure persistence Repositories - public CreateOrderCommandHandler(IMediator mediator, - IOrderingIntegrationEventService orderingIntegrationEventService, - IOrderRepository orderRepository, - IIdentityService identityService, - ILogger logger) - { - _orderRepository = orderRepository ?? 
throw new ArgumentNullException(nameof(orderRepository)); - _identityService = identityService ?? throw new ArgumentNullException(nameof(identityService)); - _mediator = mediator ?? throw new ArgumentNullException(nameof(mediator)); - _orderingIntegrationEventService = orderingIntegrationEventService ?? throw new ArgumentNullException(nameof(orderingIntegrationEventService)); - _logger = logger ?? throw new ArgumentNullException(nameof(logger)); - } - - public async Task Handle(CreateOrderCommand message, CancellationToken cancellationToken) - { - // Add Integration event to clean the basket - var orderStartedIntegrationEvent = new OrderStartedIntegrationEvent(message.UserId); - await _orderingIntegrationEventService.AddAndSaveEventAsync(orderStartedIntegrationEvent); - - // Add/Update the Buyer AggregateRoot - // DDD patterns comment: Add child entities and value-objects through the Order Aggregate-Root - // methods and constructor so validations, invariants and business logic - // make sure that consistency is preserved across the whole aggregate - var address = new Address(message.Street, message.City, message.State, message.Country, message.ZipCode); - var order = new Order(message.UserId, message.UserName, address, message.CardTypeId, message.CardNumber, message.CardSecurityNumber, message.CardHolderName, message.CardExpiration); - - foreach (var item in message.OrderItems) - { - order.AddOrderItem(item.ProductId, item.ProductName, item.UnitPrice, item.Discount, item.PictureUrl, item.Units); - } - - _logger.LogInformation("----- Creating Order - Order: {@Order}", order); - - _orderRepository.Add(order); - - return await _orderRepository.UnitOfWork - .SaveEntitiesAsync(cancellationToken); - } -} -``` - -These are additional steps a command handler should take: - -- Use the command's data to operate with the aggregate root's methods and behavior. - -- Internally within the domain objects, raise domain events while the transaction is executed, but that is transparent from a command handler point of view. - -- If the aggregate's operation result is successful and after the transaction is finished, raise integration events. (These might also be raised by infrastructure classes like repositories.) - -#### Additional resources - -- **Mark Seemann. At the Boundaries, Applications are Not Object-Oriented** \ - - -- **Commands and events** \ - - -- **What does a command handler do?** \ - - -- **Jimmy Bogard. Domain Command Patterns – Handlers** \ - - -- **Jimmy Bogard. Domain Command Patterns – Validation** \ - - -## The Command process pipeline: how to trigger a command handler - -The next question is how to invoke a command handler. You could manually call it from each related ASP.NET Core controller. However, that approach would be too coupled and is not ideal. - -The other two main options, which are the recommended options, are: - -- Through an in-memory Mediator pattern artifact. - -- With an asynchronous message queue, in between controllers and handlers. - -### Use the Mediator pattern (in-memory) in the command pipeline - -As shown in Figure 7-25, in a CQRS approach you use an intelligent mediator, similar to an in-memory bus, which is smart enough to redirect to the right command handler based on the type of the command or DTO being received. The single black arrows between components represent the dependencies between objects (in many cases, injected through DI) with their related interactions. 
- -![Diagram showing a more detailed data flow from client to database.](./media/microservice-application-layer-implementation-web-api/mediator-cqrs-microservice.png) - -**Figure 7-25**. Using the Mediator pattern in process in a single CQRS microservice - -The above diagram shows a zoom-in from Figure 7-24: the ASP.NET Core controller sends the command to MediatR's command pipeline, so the command gets to the appropriate handler. - -The reason that using the Mediator pattern makes sense is that in enterprise applications, the processing of requests can get complicated. You want to be able to add an open-ended number of cross-cutting concerns like logging, validations, audit, and security. In these cases, you can rely on a mediator pipeline (see [Mediator pattern](https://en.wikipedia.org/wiki/Mediator_pattern)) to provide a means for these extra behaviors or cross-cutting concerns. - -A mediator is an object that encapsulates the "how" of this process: it coordinates execution based on state, the way a command handler is invoked, or the payload you provide to the handler. With a mediator component, you can apply cross-cutting concerns in a centralized and transparent way by applying decorators (or [pipeline behaviors](https://github.com/jbogard/MediatR/wiki/Behaviors) since [MediatR 3](https://www.nuget.org/packages/MediatR/3.0.0)). For more information, see the [Decorator pattern](https://en.wikipedia.org/wiki/Decorator_pattern). - -Decorators and behaviors are similar to [Aspect Oriented Programming (AOP)](https://en.wikipedia.org/wiki/Aspect-oriented_programming), only applied to a specific process pipeline managed by the mediator component. Aspects in AOP that implement cross-cutting concerns are applied based on *aspect weavers* injected at compilation time or based on object call interception. Both typical AOP approaches are sometimes said to work "like magic," because it is not easy to see how AOP does its work. When dealing with serious issues or bugs, AOP can be difficult to debug. On the other hand, these decorators/behaviors are explicit and applied only in the context of the mediator, so debugging is much more predictable and easy. - -For example, the eShopOnContainers ordering microservice has an implementation of two sample behaviors: a [LoggingBehavior](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Behaviors/LoggingBehavior.cs) class and a [ValidatorBehavior](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Behaviors/ValidatorBehavior.cs) class. The implementation of the behaviors is explained in the next section by showing how eShopOnContainers uses [MediatR](https://www.nuget.org/packages/MediatR) [behaviors](https://github.com/jbogard/MediatR/wiki/Behaviors). - -### Use message queues (out-of-proc) in the command's pipeline - -Another choice is to use asynchronous messages based on brokers or message queues, as shown in Figure 7-26. That option could also be combined with the mediator component right before the command handler. - -![Diagram showing the dataflow using an HA message queue.](./media/microservice-application-layer-implementation-web-api/add-ha-message-queue.png) - -**Figure 7-26**. Using message queues (out of the process and inter-process communication) with CQRS commands - -The command's pipeline can also be handled by a high-availability message queue to deliver the commands to the appropriate handler.
Using message queues to accept the commands can further complicate your command's pipeline, because you will probably need to split the pipeline into two processes connected through the external message queue. Still, it should be used if you need to have improved scalability and performance based on asynchronous messaging. Consider that in the case of Figure 7-26, the controller just posts the command message into the queue and returns. Then the command handlers process the messages at their own pace. That is a great benefit of queues: the message queue can act as a buffer in cases when hyper scalability is needed, such as for stocks or any other scenario with a high volume of ingress data. - -However, because of the asynchronous nature of message queues, you need to figure out how to communicate with the client application about the success or failure of the command's process. As a rule, you should never use "fire and forget" commands. Every business application needs to know if a command was processed successfully, or at least validated and accepted. - -Thus, being able to respond to the client after validating a command message that was submitted to an asynchronous queue adds complexity to your system, as compared to an in-process command process that returns the operation's result after running the transaction. Using queues, you might need to return the result of the command process through other operation result messages, which will require additional components and custom communication in your system. - -Additionally, async commands are one-way commands, which in many cases might not be needed, as is explained in the following interesting exchange between Burtsev Alexey and Greg Young in an [online conversation](https://groups.google.com/forum/#!msg/dddcqrs/xhJHVxDx2pM/WP9qP8ifYCwJ): - -> \[Burtsev Alexey\] I find lots of code where people use async command handling or one-way command messaging without any reason to do so (they are not doing some long operation, they are not executing external async code, they do not even cross-application boundary to be using message bus). Why do they introduce this unnecessary complexity? And actually, I haven't seen a CQRS code example with blocking command handlers so far, though it will work just fine in most cases. -> -> \[Greg Young\] \[...\] an asynchronous command doesn't exist; it's actually another event. If I must accept what you send me and raise an event if I disagree, it's no longer you telling me to do something \[that is, it's not a command\]. It's you telling me something has been done. This seems like a slight difference at first, but it has many implications. - -Asynchronous commands greatly increase the complexity of a system, because there is no simple way to indicate failures. Therefore, asynchronous commands are not recommended other than when scaling requirements are needed or in special cases when communicating the internal microservices through messaging. In those cases, you must design a separate reporting and recovery system for failures. - -In the initial version of eShopOnContainers, it was decided to use synchronous command processing, started from HTTP requests and driven by the Mediator pattern. That easily allows you to return the success or failure of the process, as in the [CreateOrderCommandHandler](https://github.com/dotnet-architecture/eShopOnContainers/blob/netcore1.1/src/Services/Ordering/Ordering.API/Application/Commands/CreateOrderCommandHandler.cs) implementation. 
- -In any case, this should be a decision based on your application's or microservice's business requirements. - -## Implement the command process pipeline with a mediator pattern (MediatR) - -As a sample implementation, this guide proposes using the in-process pipeline based on the Mediator pattern to drive command ingestion and route commands, in memory, to the right command handlers. The guide also proposes applying [behaviors](https://github.com/jbogard/MediatR/wiki/Behaviors) in order to separate cross-cutting concerns. - -For implementation in .NET, there are multiple open-source libraries available that implement the Mediator pattern. The library used in this guide is the [MediatR](https://github.com/jbogard/MediatR) open-source library (created by Jimmy Bogard), but you could use another approach. MediatR is a small and simple library that allows you to process in-memory messages like a command, while applying decorators or behaviors. - -Using the Mediator pattern helps you to reduce coupling and to isolate the concerns of the requested work, while automatically connecting to the handler that performs that work—in this case, to command handlers. - -Another good reason to use the Mediator pattern was explained by Jimmy Bogard when reviewing this guide: - -> I think it might be worth mentioning testing here – it provides a nice consistent window into the behavior of your system. Request-in, response-out. We've found that aspect quite valuable in building consistently behaving tests. - -First, let's look at a sample WebAPI controller where you actually would use the mediator object. If you weren't using the mediator object, you'd need to inject all the dependencies for that controller, things like a logger object and others. Therefore, the constructor would be complicated. On the other hand, if you use the mediator object, the constructor of your controller can be a lot simpler, with just a few dependencies instead of many dependencies if you had one per cross-cutting operation, as in the following example: - -```csharp -public class MyMicroserviceController : Controller -{ - public MyMicroserviceController(IMediator mediator, - IMyMicroserviceQueries microserviceQueries) - { - // ... - } -} -``` - -You can see that the mediator provides a clean and lean Web API controller constructor. In addition, within the controller methods, the code to send a command to the mediator object is almost one line: - -```csharp -[Route("new")] -[HttpPost] -public async Task ExecuteBusinessOperation([FromBody]RunOpCommand - runOperationCommand) -{ - var commandResult = await _mediator.SendAsync(runOperationCommand); - - return commandResult ? (IActionResult)Ok() : (IActionResult)BadRequest(); -} -``` - -### Implement idempotent Commands - -In **eShopOnContainers**, a more advanced example than the above is submitting a CreateOrderCommand object from the Ordering microservice. But since the Ordering business process is a bit more complex and, in our case, it actually starts in the Basket microservice, this action of submitting the CreateOrderCommand object is performed from an integration-event handler named [UserCheckoutAcceptedIntegrationEventHandler](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/IntegrationEvents/EventHandling/UserCheckoutAcceptedIntegrationEventHandler.cs) instead of a simple WebAPI controller called from the client App as in the previous simpler example. 
- -Nevertheless, the action of submitting the Command to MediatR is pretty similar, as shown in the following code. - -```csharp -var createOrderCommand = new CreateOrderCommand(eventMsg.Basket.Items, - eventMsg.UserId, eventMsg.UserName, eventMsg.City, - eventMsg.Street, eventMsg.State, - eventMsg.Country, eventMsg.ZipCode, - eventMsg.CardNumber, - eventMsg.CardHolderName, - eventMsg.CardExpiration, - eventMsg.CardSecurityNumber, - eventMsg.CardTypeId); - -var requestCreateOrder = new IdentifiedCommand<CreateOrderCommand, bool>(createOrderCommand, - eventMsg.RequestId); -result = await _mediator.Send(requestCreateOrder); -``` - -However, this case is also slightly more advanced because we're also implementing idempotent commands. The CreateOrderCommand process should be idempotent, so if the same message comes duplicated through the network for any reason, such as retries, the same business order will be processed just once. - -This is implemented by wrapping the business command (in this case CreateOrderCommand) and embedding it into a generic IdentifiedCommand, which is tracked by an ID for every message coming through the network that has to be idempotent. - -In the code below, you can see that the IdentifiedCommand is nothing more than a DTO with an ID plus the wrapped business command object. - -```csharp -public class IdentifiedCommand<T, R> : IRequest<R> - where T : IRequest<R> -{ - public T Command { get; } - public Guid Id { get; } - public IdentifiedCommand(T command, Guid id) - { - Command = command; - Id = id; - } -} -``` - -Then the CommandHandler for the IdentifiedCommand named [IdentifiedCommandHandler.cs](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Commands/IdentifiedCommandHandler.cs) will basically check if the ID coming as part of the message already exists in a table. If it already exists, that command won't be processed again, so it behaves as an idempotent command. That infrastructure code is performed by the `_requestManager.ExistAsync` method call below. - -```csharp -// IdentifiedCommandHandler.cs -public class IdentifiedCommandHandler<T, R> : IRequestHandler<IdentifiedCommand<T, R>, R> - where T : IRequest<R> -{ - private readonly IMediator _mediator; - private readonly IRequestManager _requestManager; - private readonly ILogger<IdentifiedCommandHandler<T, R>> _logger; - - public IdentifiedCommandHandler( - IMediator mediator, - IRequestManager requestManager, - ILogger<IdentifiedCommandHandler<T, R>> logger) - { - _mediator = mediator; - _requestManager = requestManager; - _logger = logger ?? throw new System.ArgumentNullException(nameof(logger)); - } - - /// <summary> - /// Creates the result value to return if a previous request was found - /// </summary> - /// <returns></returns> - protected virtual R CreateResultForDuplicateRequest() - { - return default(R); - } - - /// <summary> - /// This method handles the command. It just ensures that no other request exists with the same ID, and if this is the case - /// just enqueues the original inner command.
- /// </summary> - /// <param name="message">IdentifiedCommand which contains both original command & request ID</param> - /// <returns>Return value of inner command or default value if request same ID was found</returns> - public async Task<R> Handle(IdentifiedCommand<T, R> message, CancellationToken cancellationToken) - { - var alreadyExists = await _requestManager.ExistAsync(message.Id); - if (alreadyExists) - { - return CreateResultForDuplicateRequest(); - } - else - { - await _requestManager.CreateRequestForCommandAsync(message.Id); - try - { - var command = message.Command; - var commandName = command.GetGenericTypeName(); - var idProperty = string.Empty; - var commandId = string.Empty; - - switch (command) - { - case CreateOrderCommand createOrderCommand: - idProperty = nameof(createOrderCommand.UserId); - commandId = createOrderCommand.UserId; - break; - - case CancelOrderCommand cancelOrderCommand: - idProperty = nameof(cancelOrderCommand.OrderNumber); - commandId = $"{cancelOrderCommand.OrderNumber}"; - break; - - case ShipOrderCommand shipOrderCommand: - idProperty = nameof(shipOrderCommand.OrderNumber); - commandId = $"{shipOrderCommand.OrderNumber}"; - break; - - default: - idProperty = "Id?"; - commandId = "n/a"; - break; - } - - _logger.LogInformation( - "----- Sending command: {CommandName} - {IdProperty}: {CommandId} ({@Command})", - commandName, - idProperty, - commandId, - command); - - // Send the embedded business command to mediator so it runs its related CommandHandler - var result = await _mediator.Send(command, cancellationToken); - - _logger.LogInformation( - "----- Command result: {@Result} - {CommandName} - {IdProperty}: {CommandId} ({@Command})", - result, - commandName, - idProperty, - commandId, - command); - - return result; - } - catch - { - return default(R); - } - } - } -} -``` - -Since the IdentifiedCommand acts like a business command's envelope, when the business command needs to be processed (because it is not a repeated ID), the handler takes that inner business command and resubmits it to Mediator, as in the last part of the code shown above when running `_mediator.Send(message.Command)`, from the [IdentifiedCommandHandler.cs](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Commands/IdentifiedCommandHandler.cs). - -When doing that, it will link and run the business command handler, in this case, the [CreateOrderCommandHandler](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Commands/CreateOrderCommandHandler.cs), which is running transactions against the Ordering database, as shown in the following code. - -```csharp -// CreateOrderCommandHandler.cs -public class CreateOrderCommandHandler - : IRequestHandler<CreateOrderCommand, bool> -{ - private readonly IOrderRepository _orderRepository; - private readonly IIdentityService _identityService; - private readonly IMediator _mediator; - private readonly IOrderingIntegrationEventService _orderingIntegrationEventService; - private readonly ILogger<CreateOrderCommandHandler> _logger; - - // Using DI to inject infrastructure persistence Repositories - public CreateOrderCommandHandler(IMediator mediator, - IOrderingIntegrationEventService orderingIntegrationEventService, - IOrderRepository orderRepository, - IIdentityService identityService, - ILogger<CreateOrderCommandHandler> logger) - { - _orderRepository = orderRepository ?? throw new ArgumentNullException(nameof(orderRepository)); - _identityService = identityService ?? throw new ArgumentNullException(nameof(identityService)); - _mediator = mediator ??
throw new ArgumentNullException(nameof(mediator)); - _orderingIntegrationEventService = orderingIntegrationEventService ?? throw new ArgumentNullException(nameof(orderingIntegrationEventService)); - _logger = logger ?? throw new ArgumentNullException(nameof(logger)); - } - - public async Task<bool> Handle(CreateOrderCommand message, CancellationToken cancellationToken) - { - // Add Integration event to clean the basket - var orderStartedIntegrationEvent = new OrderStartedIntegrationEvent(message.UserId); - await _orderingIntegrationEventService.AddAndSaveEventAsync(orderStartedIntegrationEvent); - - // Add/Update the Buyer AggregateRoot - // DDD patterns comment: Add child entities and value-objects through the Order Aggregate-Root - // methods and constructor so validations, invariants and business logic - // make sure that consistency is preserved across the whole aggregate - var address = new Address(message.Street, message.City, message.State, message.Country, message.ZipCode); - var order = new Order(message.UserId, message.UserName, address, message.CardTypeId, message.CardNumber, message.CardSecurityNumber, message.CardHolderName, message.CardExpiration); - - foreach (var item in message.OrderItems) - { - order.AddOrderItem(item.ProductId, item.ProductName, item.UnitPrice, item.Discount, item.PictureUrl, item.Units); - } - - _logger.LogInformation("----- Creating Order - Order: {@Order}", order); - - _orderRepository.Add(order); - - return await _orderRepository.UnitOfWork - .SaveEntitiesAsync(cancellationToken); - } -} -``` - -### Register the types used by MediatR - -In order for MediatR to be aware of your command handler classes, you need to register the mediator classes and the command handler classes in your IoC container. In eShopOnContainers, these types are registered using Autofac modules, but you can also use the built-in ASP.NET Core IoC container or any other container supported by MediatR. - -The following code shows how to register MediatR's types and command handlers when using Autofac modules. - -```csharp -public class MediatorModule : Autofac.Module -{ - protected override void Load(ContainerBuilder builder) - { - builder.RegisterAssemblyTypes(typeof(IMediator).GetTypeInfo().Assembly) - .AsImplementedInterfaces(); - - // Register all the Command classes (they implement IRequestHandler) - // in assembly holding the Commands - builder.RegisterAssemblyTypes(typeof(CreateOrderCommand).GetTypeInfo().Assembly) - .AsClosedTypesOf(typeof(IRequestHandler<,>)); - // Other types registration - //... - } -} -``` - -This is where "the magic happens" with MediatR. - -Because each command handler implements the generic `IRequestHandler<T, TResponse>` interface, when you register the assemblies using the `RegisterAssemblyTypes` method, all the types that implement `IRequestHandler` also get registered with their commands. For example: - -```csharp -public class CreateOrderCommandHandler - : IRequestHandler<CreateOrderCommand, bool> -{ -``` - -That is the code that correlates commands with command handlers. The handler is just a simple class, but it implements `IRequestHandler<T, TResponse>`, where T is the command type, and MediatR makes sure it is invoked with the correct payload (the command). - -## Apply cross-cutting concerns when processing commands with the Behaviors in MediatR - -There is one more thing: being able to apply cross-cutting concerns to the mediator pipeline. You can also see at the end of the Autofac registration module code how it registers a behavior type, specifically, a custom LoggingBehavior class and a ValidatorBehavior class.
But you could add other custom behaviors, too. - -```csharp -public class MediatorModule : Autofac.Module -{ - protected override void Load(ContainerBuilder builder) - { - builder.RegisterAssemblyTypes(typeof(IMediator).GetTypeInfo().Assembly) - .AsImplementedInterfaces(); - - // Register all the Command classes (they implement IRequestHandler) - // in assembly holding the Commands - builder.RegisterAssemblyTypes( - typeof(CreateOrderCommand).GetTypeInfo().Assembly). - AsClosedTypesOf(typeof(IRequestHandler<,>)); - // Other types registration - //... - builder.RegisterGeneric(typeof(LoggingBehavior<,>)). - As(typeof(IPipelineBehavior<,>)); - builder.RegisterGeneric(typeof(ValidatorBehavior<,>)). - As(typeof(IPipelineBehavior<,>)); - } -} -``` - -That [LoggingBehavior](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Behaviors/LoggingBehavior.cs) class can be implemented as the following code, which logs information about the command handler being executed and whether it was successful or not. - -```csharp -public class LoggingBehavior<TRequest, TResponse> - : IPipelineBehavior<TRequest, TResponse> -{ - private readonly ILogger<LoggingBehavior<TRequest, TResponse>> _logger; - public LoggingBehavior(ILogger<LoggingBehavior<TRequest, TResponse>> logger) => - _logger = logger; - - public async Task<TResponse> Handle(TRequest request, - RequestHandlerDelegate<TResponse> next) - { - _logger.LogInformation($"Handling {typeof(TRequest).Name}"); - var response = await next(); - _logger.LogInformation($"Handled {typeof(TResponse).Name}"); - return response; - } -} -``` - -Just by implementing this behavior class and by registering it in the pipeline (in the MediatorModule above), all the commands processed through MediatR will log information about the execution. - -The eShopOnContainers ordering microservice also applies a second behavior for basic validations, the [ValidatorBehavior](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Behaviors/ValidatorBehavior.cs) class that relies on the [FluentValidation](https://github.com/JeremySkinner/FluentValidation) library, as shown in the following code: - -```csharp -public class ValidatorBehavior<TRequest, TResponse> - : IPipelineBehavior<TRequest, TResponse> -{ - private readonly IValidator<TRequest>[] _validators; - public ValidatorBehavior(IValidator<TRequest>[] validators) => - _validators = validators; - - public async Task<TResponse> Handle(TRequest request, - RequestHandlerDelegate<TResponse> next) - { - var failures = _validators - .Select(v => v.Validate(request)) - .SelectMany(result => result.Errors) - .Where(error => error != null) - .ToList(); - - if (failures.Any()) - { - throw new OrderingDomainException( - $"Command Validation Errors for type {typeof(TRequest).Name}", - new ValidationException("Validation exception", failures)); - } - - var response = await next(); - return response; - } -} -``` - -Here the behavior is raising an exception if validation fails, but you could also return a result object, containing the command result if it succeeded or the validation messages in case it didn't. This would probably make it easier to display validation results to the user.
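As a rough sketch of that alternative (illustrative only; the `CommandResult` type and the behavior below are hypothetical and not part of eShopOnContainers), a behavior could collect the validation messages and short-circuit the pipeline instead of throwing, assuming all the handlers in that pipeline return the same result type and using the same simplified `Handle` signature as the snippets above:

```csharp
// Hypothetical result type: either a success or a set of validation messages.
public class CommandResult
{
    public bool Succeeded { get; init; }
    public IReadOnlyCollection<string> Errors { get; init; } = Array.Empty<string>();

    public static CommandResult Success() => new() { Succeeded = true };
    public static CommandResult Invalid(IEnumerable<string> errors) =>
        new() { Succeeded = false, Errors = errors.ToList() };
}

// Sketch of a validation behavior that returns the messages instead of throwing.
public class ValidationResultBehavior<TRequest>
    : IPipelineBehavior<TRequest, CommandResult>
{
    private readonly IValidator<TRequest>[] _validators;
    public ValidationResultBehavior(IValidator<TRequest>[] validators) =>
        _validators = validators;

    public async Task<CommandResult> Handle(TRequest request,
        RequestHandlerDelegate<CommandResult> next)
    {
        var errors = _validators
            .Select(v => v.Validate(request))
            .SelectMany(result => result.Errors)
            .Where(error => error != null)
            .Select(error => error.ErrorMessage)
            .ToList();

        return errors.Any()
            ? CommandResult.Invalid(errors)  // short-circuit; the handler never runs
            : await next();
    }
}
```

The trade-off is that the commands, the handlers, and the behavior must then agree on a common result type, which is additional plumbing compared to the exception-based approach shown above.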
- -Then, based on the [FluentValidation](https://github.com/JeremySkinner/FluentValidation) library, you would create validation for the data passed with CreateOrderCommand, as in the following code: - -```csharp -public class CreateOrderCommandValidator : AbstractValidator -{ - public CreateOrderCommandValidator() - { - RuleFor(command => command.City).NotEmpty(); - RuleFor(command => command.Street).NotEmpty(); - RuleFor(command => command.State).NotEmpty(); - RuleFor(command => command.Country).NotEmpty(); - RuleFor(command => command.ZipCode).NotEmpty(); - RuleFor(command => command.CardNumber).NotEmpty().Length(12, 19); - RuleFor(command => command.CardHolderName).NotEmpty(); - RuleFor(command => command.CardExpiration).NotEmpty().Must(BeValidExpirationDate).WithMessage("Please specify a valid card expiration date"); - RuleFor(command => command.CardSecurityNumber).NotEmpty().Length(3); - RuleFor(command => command.CardTypeId).NotEmpty(); - RuleFor(command => command.OrderItems).Must(ContainOrderItems).WithMessage("No order items found"); - } - - private bool BeValidExpirationDate(DateTime dateTime) - { - return dateTime >= DateTime.UtcNow; - } - - private bool ContainOrderItems(IEnumerable orderItems) - { - return orderItems.Any(); - } -} -``` - -You could create additional validations. This is a very clean and elegant way to implement your command validations. - -In a similar way, you could implement other behaviors for additional aspects or cross-cutting concerns that you want to apply to commands when handling them. - -#### Additional resources - -##### The mediator pattern - -- **Mediator pattern** \ - [https://en.wikipedia.org/wiki/Mediator\_pattern](https://en.wikipedia.org/wiki/Mediator_pattern) - -##### The decorator pattern - -- **Decorator pattern** \ - [https://en.wikipedia.org/wiki/Decorator\_pattern](https://en.wikipedia.org/wiki/Decorator_pattern) - -##### MediatR (Jimmy Bogard) - -- **MediatR.** GitHub repo. \ - - -- **CQRS with MediatR and AutoMapper** \ - - -- **Put your controllers on a diet: POSTs and commands.** \ - - -- **Tackling cross-cutting concerns with a mediator pipeline** \ - - -- **CQRS and REST: the perfect match** \ - - -- **MediatR Pipeline Examples** \ - - -- **Vertical Slice Test Fixtures for MediatR and ASP.NET Core** \ - - -- **MediatR Extensions for Microsoft Dependency Injection Released** \ - - -##### Fluent validation - -- **Jeremy Skinner. FluentValidation.** GitHub repo. \ - - -> [!div class="step-by-step"] -> [Previous](microservice-application-layer-web-api-design.md) -> [Next](../implement-resilient-applications/index.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-web-api-design.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-web-api-design.md deleted file mode 100644 index 7a90533a58f44..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-web-api-design.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Designing the microservice application layer and Web API -description: .NET Microservices Architecture for Containerized .NET Applications | A brief mention of the SOLID principles for designing the application layer. 
-ms.date: 10/08/2018 ---- - -# Design the microservice application layer and Web API - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -## Use SOLID principles and Dependency Injection - -SOLID principles are critical techniques to be used in any modern and mission-critical application, such as developing a microservice with DDD patterns. SOLID is an acronym that groups five fundamental principles: - -- Single Responsibility principle - -- Open/closed principle - -- Liskov substitution principle - -- Interface Segregation principle - -- Dependency Inversion principle - -SOLID is more about how you design your application or microservice internal layers and about decoupling dependencies between them. It is not related to the domain, but to the application's technical design. The final principle, the Dependency Inversion principle, allows you to decouple the infrastructure layer from the rest of the layers, which allows a better decoupled implementation of the DDD layers. - -Dependency Injection (DI) is one way to implement the Dependency Inversion principle. It is a technique for achieving loose coupling between objects and their dependencies. Rather than directly instantiating collaborators, or using static references (that is, using new…), the objects that a class needs in order to perform its actions are provided to (or "injected into") the class. Most often, classes will declare their dependencies via their constructor, allowing them to follow the Explicit Dependencies principle. Dependency Injection is usually based on specific Inversion of Control (IoC) containers. ASP.NET Core provides a simple built-in IoC container, but you can also use your favorite IoC container, like Autofac or Ninject. - -By following the SOLID principles, your classes will tend naturally to be small, well-factored, and easily tested. But how can you know if too many dependencies are being injected into your classes? If you use DI through the constructor, it will be easy to detect that by just looking at the number of parameters for your constructor. If there are too many dependencies, this is generally a sign (a [code smell](https://deviq.com/code-smells/)) that your class is trying to do too much, and is probably violating the Single Responsibility principle. - -It would take another guide to cover SOLID in detail. Therefore, this guide requires you to have only a minimum knowledge of these topics. - -#### Additional resources - -- **SOLID: Fundamental OOP Principles** \ - - -- **Inversion of Control Containers and the Dependency Injection pattern** \ - - -- **Steve Smith. New is Glue** \ - - -> [!div class="step-by-step"] -> [Previous](nosql-database-persistence-infrastructure.md) -> [Next](microservice-application-layer-implementation-web-api.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-domain-model.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-domain-model.md deleted file mode 100644 index 4fc0ed235754b..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-domain-model.md +++ /dev/null @@ -1,152 +0,0 @@ ---- -title: Designing a microservice domain model -description: .NET Microservices Architecture for Containerized .NET Applications | Understand the key concepts when designing a DDD-oriented domain model. 
-ms.date: 01/30/2020 ---- -# Design a microservice domain model - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -*Define one rich domain model for each business microservice or Bounded Context.* - -Your goal is to create a single cohesive domain model for each business microservice or Bounded Context (BC). Keep in mind, however, that a BC or business microservice could sometimes be composed of several physical services that share a single domain model. The domain model must capture the rules, behavior, business language, and constraints of the single Bounded Context or business microservice that it represents. - -## The Domain Entity pattern - -Entities represent domain objects and are primarily defined by their identity, continuity, and persistence over time, and not only by the attributes that comprise them. As Eric Evans says, "an object primarily defined by its identity is called an Entity." Entities are very important in the domain model, since they are the base for a model. Therefore, you should identify and design them carefully. - -*An entity's identity can cross multiple microservices or Bounded Contexts.* - -The same identity (that is, the same `Id` value, although perhaps not the same domain entity) can be modeled across multiple Bounded Contexts or microservices. However, that does not imply that the same entity, with the same attributes and logic would be implemented in multiple Bounded Contexts. Instead, entities in each Bounded Context limit their attributes and behaviors to those required in that Bounded Context's domain. - -For instance, the buyer entity might have most of a person's attributes that are defined in the user entity in the profile or identity microservice, including the identity. But the buyer entity in the ordering microservice might have fewer attributes, because only certain buyer data is related to the order process. The context of each microservice or Bounded Context impacts its domain model. - -*Domain entities must implement behavior in addition to implementing data attributes.* - -A domain entity in DDD must implement the domain logic or behavior related to the entity data (the object accessed in memory). For example, as part of an order entity class you must have business logic and operations implemented as methods for tasks such as adding an order item, data validation, and total calculation. The entity's methods take care of the invariants and rules of the entity instead of having those rules spread across the application layer. - -Figure 7-8 shows a domain entity that implements not only data attributes but operations or methods with related domain logic. - -![Diagram showing a Domain Entity's pattern.](./media/microservice-domain-model/domain-entity-pattern.png) - -**Figure 7-8**. Example of a domain entity design implementing data plus behavior - -A domain model entity implements behaviors through methods, that is, it's not an "anemic" model. Of course, sometimes you can have entities that do not implement any logic as part of the entity class. This can happen in child entities within an aggregate if the child entity does not have any special logic because most of the logic is defined in the aggregate root. If you have a complex microservice that has logic implemented in the service classes instead of in the domain entities, you could be falling into the anemic domain model, explained in the following section. 
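As a simplified illustration of that idea (a sketch only, not the actual eShopOnContainers implementation), an order entity could expose an `AddOrderItem` method that protects its own invariants, instead of letting upper layers manipulate a public collection:

```csharp
// Simplified sketch of a domain entity that implements data plus behavior.
public class Order
{
    private readonly List<OrderItem> _orderItems = new();
    public IReadOnlyCollection<OrderItem> OrderItems => _orderItems;

    // Behavior: the entity validates its own invariants instead of relying
    // on the application layer to do it.
    public void AddOrderItem(int productId, string productName,
        decimal unitPrice, int units)
    {
        if (units <= 0)
            throw new ArgumentException("Units must be greater than zero.", nameof(units));

        var existingItem = _orderItems.SingleOrDefault(i => i.ProductId == productId);
        if (existingItem is not null)
        {
            existingItem.AddUnits(units); // consolidate repeated products into one line
        }
        else
        {
            _orderItems.Add(new OrderItem(productId, productName, unitPrice, units));
        }
    }

    // Behavior: total calculation lives next to the data it depends on.
    public decimal GetTotal() => _orderItems.Sum(i => i.UnitPrice * i.Units);
}

public class OrderItem
{
    public int ProductId { get; }
    public string ProductName { get; }
    public decimal UnitPrice { get; }
    public int Units { get; private set; }

    public OrderItem(int productId, string productName, decimal unitPrice, int units)
    {
        ProductId = productId;
        ProductName = productName;
        UnitPrice = unitPrice;
        Units = units;
    }

    public void AddUnits(int units) => Units += units;
}
```

With this style, rules such as "units must be positive" or "repeated products are consolidated" cannot be bypassed by the application layer, which is the behavior-rich design that the following sections contrast with an anemic model.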
- -### Rich domain model versus anemic domain model - -In his post [AnemicDomainModel](https://martinfowler.com/bliki/AnemicDomainModel.html), Martin Fowler describes an anemic domain model this way: - -The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. - -Of course, when you use an anemic domain model, those data models will be used from a set of service objects (traditionally named the *business layer*) which capture all the domain or business logic. The business layer sits on top of the data model and uses the data model just as data. - -The anemic domain model is just a procedural style design. Anemic entity objects are not real objects because they lack behavior (methods). They only hold data properties and thus it is not object-oriented design. By putting all the behavior out into service objects (the business layer), you essentially end up with [spaghetti code](https://en.wikipedia.org/wiki/Spaghetti_code) or [transaction scripts](https://martinfowler.com/eaaCatalog/transactionScript.html), and therefore you lose the advantages that a domain model provides. - -Regardless, if your microservice or Bounded Context is very simple (a CRUD service), the anemic domain model in the form of entity objects with just data properties might be good enough, and it might not be worth implementing more complex DDD patterns. In that case, it will be simply a persistence model, because you have intentionally created an entity with only data for CRUD purposes. - -That is why microservices architectures are perfect for a multi-architectural approach depending on each Bounded Context. For instance, in eShopOnContainers, the ordering microservice implements DDD patterns, but the catalog microservice, which is a simple CRUD service, does not. - -Some people say that the anemic domain model is an anti-pattern. It really depends on what you are implementing. If the microservice you are creating is simple enough (for example, a CRUD service), following the anemic domain model it is not an anti-pattern. However, if you need to tackle the complexity of a microservice's domain that has a lot of ever-changing business rules, the anemic domain model might be an anti-pattern for that microservice or Bounded Context. In that case, designing it as a rich model with entities containing data plus behavior as well as implementing additional DDD patterns (aggregates, value objects, etc.) might have huge benefits for the long-term success of such a microservice. - -#### Additional resources - -- **DevIQ. Domain Entity** \ - - -- **Martin Fowler. The Domain Model** \ - - -- **Martin Fowler. The Anemic Domain Model** \ - - -### The Value Object pattern - -As Eric Evans has noted, "Many objects do not have conceptual identity. These objects describe certain characteristics of a thing." - -An entity requires an identity, but there are many objects in a system that do not, like the Value Object pattern. A value object is an object with no conceptual identity that describes a domain aspect. These are objects that you instantiate to represent design elements that only concern you temporarily. You care about *what* they are, not *who* they are. 
Examples include numbers and strings, but can also be higher-level concepts like groups of attributes. - -Something that is an entity in a microservice might not be an entity in another microservice, because in the second case, the Bounded Context might have a different meaning. For example, an address in an e-commerce application might not have an identity at all, since it might only represent a group of attributes of the customer's profile for a person or company. In this case, the address should be classified as a value object. However, in an application for an electric power utility company, the customer address could be important for the business domain. Therefore, the address must have an identity so the billing system can be directly linked to the address. In that case, an address should be classified as a domain entity. - -A person with a name and surname is usually an entity because a person has identity, even if the name and surname coincide with another set of values, such as if those names also refer to a different person. - -Value objects are hard to manage in relational databases and ORMs like Entity Framework (EF), whereas in document-oriented databases they are easier to implement and use. - -EF Core 2.0 and later versions include the [Owned Entities](https://devblogs.microsoft.com/dotnet/announcing-entity-framework-core-2-0/#owned-entities-and-table-splitting) feature that makes it easier to handle value objects, as we'll see in detail later on. - -#### Additional resources - -- **Martin Fowler. Value Object pattern** \ - - -- **Value Object** \ - - -- **Value Objects in Test-Driven Development** \ - [https://leanpub.com/tdd-ebook/read\#leanpub-auto-value-objects](https://leanpub.com/tdd-ebook/read#leanpub-auto-value-objects) - -- **Eric Evans. Domain-Driven Design: Tackling Complexity in the Heart of Software.** (Book; includes a discussion of value objects) \ - - -### The Aggregate pattern - -A domain model contains clusters of different data entities and processes that can control a significant area of functionality, such as order fulfillment or inventory. A more fine-grained DDD unit is the aggregate, which describes a cluster or group of entities and behaviors that can be treated as a cohesive unit. - -You usually define an aggregate based on the transactions that you need. A classic example is an order that also contains a list of order items. An order item will usually be an entity. But it will be a child entity within the order aggregate, which will also contain the order entity as its root entity, typically called an aggregate root. - -Identifying aggregates can be hard. An aggregate is a group of objects that must be consistent together, but you cannot just pick a group of objects and label them an aggregate. You must start with a domain concept and think about the entities that are used in the most common transactions related to that concept. Those entities that need to be transactionally consistent are what forms an aggregate. Thinking about transaction operations is probably the best way to identify aggregates. - -### The Aggregate Root or Root Entity pattern - -An aggregate is composed of at least one entity: the aggregate root, also called root entity or primary entity. Additionally, it can have multiple child entities and value objects, with all entities and objects working together to implement required behavior and transactions. 
- -The purpose of an aggregate root is to ensure the consistency of the aggregate; it should be the only entry point for updates to the aggregate through methods or operations in the aggregate root class. You should make changes to entities within the aggregate only via the aggregate root. It is the aggregate's consistency guardian, considering all the invariants and consistency rules you might need to comply with in your aggregate. If you change a child entity or value object independently, the aggregate root cannot ensure that the aggregate is in a valid state. It would be like a table with a loose leg. Maintaining consistency is the main purpose of the aggregate root. - -In Figure 7-9, you can see sample aggregates like the buyer aggregate, which contains a single entity (the aggregate root Buyer). The order aggregate contains multiple entities and a value object. - -![Diagram comparing a buyer aggregate and an order aggregate.](./media/microservice-domain-model/buyer-order-aggregate-pattern.png) - -**Figure 7-9**. Example of aggregates with multiple or single entities - -A DDD domain model is composed from aggregates, an aggregate can have just one entity or more, and can include value objects as well. Note that the Buyer aggregate could have additional child entities, depending on your domain, as it does in the ordering microservice in the eShopOnContainers reference application. Figure 7-9 just illustrates a case in which the buyer has a single entity, as an example of an aggregate that contains only an aggregate root. - -In order to maintain separation of aggregates and keep clear boundaries between them, it is a good practice in a DDD domain model to disallow direct navigation between aggregates and only having the foreign key (FK) field, as implemented in the [Ordering microservice domain model](https://github.com/dotnet-architecture/eShopOnContainers/blob/main/src/Services/Ordering/Ordering.Domain/AggregatesModel/OrderAggregate/Order.cs) in eShopOnContainers. The Order entity only has a foreign key field for the buyer, but not an EF Core navigation property, as shown in the following code: - -```csharp -public class Order : Entity, IAggregateRoot -{ - private DateTime _orderDate; - public Address Address { get; private set; } - private int? _buyerId; // FK pointing to a different aggregate root - public OrderStatus OrderStatus { get; private set; } - private readonly List _orderItems; - public IReadOnlyCollection OrderItems => _orderItems; - // ... Additional code -} -``` - -Identifying and working with aggregates requires research and experience. For more information, see the following Additional resources list. - -#### Additional resources - -- **Vaughn Vernon. Effective Aggregate Design - Part I: Modeling a Single Aggregate** (from ) \ - - -- **Vaughn Vernon. Effective Aggregate Design - Part II: Making Aggregates Work Together** (from ) \ - - -- **Vaughn Vernon. Effective Aggregate Design - Part III: Gaining Insight Through Discovery** (from ) \ - - -- **Sergey Grybniak. DDD Tactical Design Patterns** \ - - -- **Chris Richardson. Developing Transactional Microservices Using Aggregates** \ - - -- **DevIQ. 
The Aggregate pattern** \ - - ->[!div class="step-by-step"] ->[Previous](ddd-oriented-microservice.md) ->[Next](net-core-microservice-domain-model.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/net-core-microservice-domain-model.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/net-core-microservice-domain-model.md deleted file mode 100644 index 878886a8f0b2b..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/net-core-microservice-domain-model.md +++ /dev/null @@ -1,183 +0,0 @@ ---- -title: Implementing a microservice domain model with .NET -description: .NET Microservices Architecture for Containerized .NET Applications | Get into the implementation details of a DDD-oriented domain model. -ms.date: 02/02/2021 ---- - -# Implement a microservice domain model with .NET - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -In the previous section, the fundamental design principles and patterns for designing a domain model were explained. Now it's time to explore possible ways to implement the domain model by using .NET (plain C\# code) and EF Core. Your domain model will be composed simply of your code. It will have just the EF Core model requirements, but not real dependencies on EF. You shouldn't have hard dependencies or references to EF Core or any other ORM in your domain model. - -## Domain model structure in a custom .NET Standard Library - -The folder organization used for the eShopOnContainers reference application demonstrates the DDD model for the application. You might find that a different folder organization more clearly communicates the design choices made for your application. As you can see in Figure 7-10, in the ordering domain model there are two aggregates, the order aggregate and the buyer aggregate. Each aggregate is a group of domain entities and value objects, although you could have an aggregate composed of a single domain entity (the aggregate root or root entity) as well. - -:::image type="complex" source="./media/net-core-microservice-domain-model/ordering-microservice-container.png" alt-text="Screenshot of the Ordering.Domain project in Solution Explorer."::: -The Solution Explorer view for the Ordering.Domain project, showing the AggregatesModel folder containing the BuyerAggregate and OrderAggregate folders, each one containing its entity classes, value object files and so on. -:::image-end::: - -**Figure 7-10**. Domain model structure for the ordering microservice in eShopOnContainers - -Additionally, the domain model layer includes the repository contracts (interfaces) that are the infrastructure requirements of your domain model. In other words, these interfaces express what repositories and the methods the infrastructure layer must implement. It's critical that the implementation of the repositories be placed outside of the domain model layer, in the infrastructure layer library, so the domain model layer isn't "contaminated" by API or classes from infrastructure technologies, like Entity Framework. - -You can also see a [SeedWork](https://martinfowler.com/bliki/Seedwork.html) folder that contains custom base classes that you can use as a base for your domain entities and value objects, so you don't have redundant code in each domain's object class. - -## Structure aggregates in a custom .NET Standard library - -An aggregate refers to a cluster of domain objects grouped together to match transactional consistency. 
Those objects could be instances of entities (one of which is the aggregate root or root entity) plus any additional value objects. - -Transactional consistency means that an aggregate is guaranteed to be consistent and up to date at the end of a business action. For example, the order aggregate from the eShopOnContainers ordering microservice domain model is composed as shown in Figure 7-11. - -:::image type="complex" source="./media/net-core-microservice-domain-model/vs-solution-explorer-order-aggregate.png" alt-text="Screenshot of the OrderAggregate folder and its classes."::: -A detailed view of the OrderAggregate folder: Address.cs is a value object, IOrderRepository is a repo interface, Order.cs is an aggregate root, OrderItem.cs is a child entity, and OrderStatus.cs is an enumeration class. -:::image-end::: - -**Figure 7-11**. The order aggregate in Visual Studio solution - -If you open any of the files in an aggregate folder, you can see how it's marked as either a custom base class or interface, like entity or value object, as implemented in the [SeedWork](https://github.com/dotnet-architecture/eShopOnContainers/tree/main/src/Services/Ordering/Ordering.Domain/SeedWork) folder. - -## Implement domain entities as POCO classes - -You implement a domain model in .NET by creating POCO classes that implement your domain entities. In the following example, the Order class is defined as an entity and also as an aggregate root. Because the Order class derives from the Entity base class, it can reuse common code related to entities. Bear in mind that these base classes and interfaces are defined by you in the domain model project, so it is your code, not infrastructure code from an ORM like EF. - -```csharp -// COMPATIBLE WITH ENTITY FRAMEWORK CORE 5.0 -// Entity is a custom base class with the ID -public class Order : Entity, IAggregateRoot -{ - private DateTime _orderDate; - public Address Address { get; private set; } - private int? _buyerId; - - public OrderStatus OrderStatus { get; private set; } - private int _orderStatusId; - - private string _description; - private int? _paymentMethodId; - - private readonly List<OrderItem> _orderItems; - public IReadOnlyCollection<OrderItem> OrderItems => _orderItems; - - public Order(string userId, Address address, int cardTypeId, string cardNumber, string cardSecurityNumber, - string cardHolderName, DateTime cardExpiration, int? buyerId = null, int? paymentMethodId = null) - { - _orderItems = new List<OrderItem>(); - _buyerId = buyerId; - _paymentMethodId = paymentMethodId; - _orderStatusId = OrderStatus.Submitted.Id; - _orderDate = DateTime.UtcNow; - Address = address; - - // ...Additional code ... - } - - public void AddOrderItem(int productId, string productName, - decimal unitPrice, decimal discount, - string pictureUrl, int units = 1) - { - //... - // Domain rules/logic for adding the OrderItem to the order - // ... - - var orderItem = new OrderItem(productId, productName, unitPrice, discount, pictureUrl, units); - - _orderItems.Add(orderItem); - - } - // ... - // Additional methods with domain rules/logic related to the Order aggregate - // ... -} -``` - -It's important to note that this is a domain entity implemented as a POCO class. It doesn't have any direct dependency on Entity Framework Core or any other infrastructure framework. This implementation is as it should be in DDD, just C# code implementing a domain model. - -In addition, the class is decorated with an interface named IAggregateRoot.
That interface is an empty interface, sometimes called a *marker interface*, that's used just to indicate that this entity class is also an aggregate root. - -A marker interface is sometimes considered as an anti-pattern; however, it's also a clean way to mark a class, especially when that interface might be evolving. An attribute could be the other choice for the marker, but it's quicker to see the base class (Entity) next to the IAggregate interface instead of putting an Aggregate attribute marker above the class. It's a matter of preferences, in any case. - -Having an aggregate root means that most of the code related to consistency and business rules of the aggregate's entities should be implemented as methods in the Order aggregate root class (for example, AddOrderItem when adding an OrderItem object to the aggregate). You should not create or update OrderItems objects independently or directly; the AggregateRoot class must keep control and consistency of any update operation against its child entities. - -## Encapsulate data in the Domain Entities - -A common problem in entity models is that they expose collection navigation properties as publicly accessible list types. This allows any collaborator developer to manipulate the contents of these collection types, which may bypass important business rules related to the collection, possibly leaving the object in an invalid state. The solution to this is to expose read-only access to related collections and explicitly provide methods that define ways in which clients can manipulate them. - -In the previous code, note that many attributes are read-only or private and are only updatable by the class methods, so any update considers business domain invariants and logic specified within the class methods. - -For example, following DDD patterns, **you should *not* do the following** from any command handler method or application layer class (actually, it should be impossible for you to do so): - -```csharp -// WRONG ACCORDING TO DDD PATTERNS – CODE AT THE APPLICATION LAYER OR -// COMMAND HANDLERS -// Code in command handler methods or Web API controllers -//... (WRONG) Some code with business logic out of the domain classes ... -OrderItem myNewOrderItem = new OrderItem(orderId, productId, productName, - pictureUrl, unitPrice, discount, units); - -//... (WRONG) Accessing the OrderItems collection directly from the application layer // or command handlers -myOrder.OrderItems.Add(myNewOrderItem); -//... -``` - -In this case, the Add method is purely an operation to add data, with direct access to the OrderItems collection. Therefore, most of the domain logic, rules, or validations related to that operation with the child entities will be spread across the application layer (command handlers and Web API controllers). - -If you go around the aggregate root, the aggregate root cannot guarantee its invariants, its validity, or its consistency. Eventually you'll have spaghetti code or transactional script code. - -To follow DDD patterns, entities must not have public setters in any entity property. Changes in an entity should be driven by explicit methods with explicit ubiquitous language about the change they're performing in the entity. - -Furthermore, collections within the entity (like the order items) should be read-only properties (the AsReadOnly method explained later). You should be able to update it only from within the aggregate root class methods or the child entity methods. 
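As a simplified sketch of what that encapsulation can look like inside the entity (the status values and rule below are illustrative; eShopOnContainers models the order status with an Enumeration class rather than a plain enum, and its actual rules differ):

```csharp
// Illustrative sketch: no public setters, state changes only through explicit methods.
public enum OrderStatus { Submitted, AwaitingValidation, Paid }

public class Order : Entity, IAggregateRoot
{
    // Readable from the outside, but only this class can assign it.
    public OrderStatus OrderStatus { get; private set; }

    // An explicit, ubiquitous-language method instead of a property setter.
    public void SetAwaitingValidationStatus()
    {
        // The business rule is checked here, so callers can't put the order in an invalid state.
        if (OrderStatus != OrderStatus.Submitted)
            throw new InvalidOperationException("Only a submitted order can move to AwaitingValidation.");

        OrderStatus = OrderStatus.AwaitingValidation;
    }
}
```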
- -As you can see in the code for the Order aggregate root, all setters should be private or at least read-only externally, so that any operation against the entity's data or its child entities has to be performed through methods in the entity class. This maintains consistency in a controlled and object-oriented way instead of implementing transactional script code. - -The following code snippet shows the proper way to code the task of adding an OrderItem object to the Order aggregate. - -```csharp -// RIGHT ACCORDING TO DDD--CODE AT THE APPLICATION LAYER OR COMMAND HANDLERS -// The code in command handlers or WebAPI controllers, related only to application stuff -// There is NO code here related to OrderItem object's business logic -myOrder.AddOrderItem(productId, productName, pictureUrl, unitPrice, discount, units); - -// The code related to OrderItem params validations or domain rules should -// be WITHIN the AddOrderItem method. - -//... -``` - -In this snippet, most of the validations or logic related to the creation of an OrderItem object will be under the control of the Order aggregate root—in the AddOrderItem method—especially validations and logic related to other elements in the aggregate. For instance, you might get the same product item as the result of multiple calls to AddOrderItem. In that method, you could examine the product items and consolidate the same product items into a single OrderItem object with several units. Additionally, if there are different discount amounts but the product ID is the same, you would likely apply the higher discount. This principle applies to any other domain logic for the OrderItem object. - -In addition, the new OrderItem(params) operation will also be controlled and performed by the AddOrderItem method from the Order aggregate root. Therefore, most of the logic or validations related to that operation (especially anything that impacts the consistency between other child entities) will be in a single place within the aggregate root. That is the ultimate purpose of the aggregate root pattern. - -When you use Entity Framework Core 1.1 or later, a DDD entity can be better expressed because it allows [mapping to fields](/ef/core/modeling/backing-field) in addition to properties. This is useful when protecting collections of child entities or value objects. With this enhancement, you can use simple private fields instead of properties and you can implement any update to the field collection in public methods and provide read-only access through the AsReadOnly method. - -In DDD, you want to update the entity only through methods in the entity (or the constructor) in order to control any invariant and the consistency of the data, so properties are defined only with a get accessor. The properties are backed by private fields. Private members can only be accessed from within the class. However, there is one exception: EF Core needs to set these fields as well (so it can return the object with the proper values). - -### Map properties with only get accessors to the fields in the database table - -Mapping properties to database table columns is not a domain responsibility but part of the infrastructure and persistence layer. We mention this here just so you're aware of the new capabilities in EF Core 1.1 or later related to how you can model entities. Additional details on this topic are explained in the infrastructure and persistence section. 
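As a preview of what that persistence-layer configuration can look like, here is a minimal sketch of a DbContext mapping the private fields and read-only collection of the Order entity shown earlier. The `OrderingContext` name and the exact mappings are assumptions for illustration; the real eShopOnContainers configuration differs in its details:

```csharp
// Illustrative sketch: infrastructure-layer mapping, kept out of the domain model project.
public class OrderingContext : DbContext
{
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        var orderEntity = modelBuilder.Entity<Order>();

        // Map the private _orderDate field as a field-only property (no public property needed).
        // For a get-only CLR property, you could instead call .HasField("_fieldName") on its Property builder.
        orderEntity.Property<DateTime>("_orderDate")
                   .HasColumnName("OrderDate");

        // Read the OrderItems navigation through its backing field, so EF Core never
        // needs a public setter or a mutable collection property on the entity.
        orderEntity.Metadata
            .FindNavigation(nameof(Order.OrderItems))
            .SetPropertyAccessMode(PropertyAccessMode.Field);
    }
}
```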
- -When you use EF Core 1.0 or later, within the DbContext you need to map the properties that are defined only with getters to the actual fields in the database table. This is done with the HasField method of the PropertyBuilder class. - -### Map fields without properties - -With the feature in EF Core 1.1 or later to map columns to fields, it's also possible to not use properties. Instead, you can just map columns from a table to fields. A common use case for this is private fields for an internal state that doesn't need to be accessed from outside the entity. - -For example, in the preceding OrderAggregate code example, there are several private fields, like the `_paymentMethodId` field, that have no related property for either a setter or getter. That field could also be calculated within the order's business logic and used from the order's methods, but it needs to be persisted in the database as well. So in EF Core (since v1.1), there's a way to map a field without a related property to a column in the database. This is also explained in the [Infrastructure layer](ddd-oriented-microservice.md#the-infrastructure-layer) section of this guide. - -### Additional resources - -- **Vaughn Vernon. Modeling Aggregates with DDD and Entity Framework.** Note that this is *not* Entity Framework Core. \ - - -- **Julie Lerman. Data Points - Coding for Domain-Driven Design: Tips for Data-Focused Devs** \ - [https://learn.microsoft.com/archive/msdn-magazine/2013/august/data-points-coding-for-domain-driven-design-tips-for-data-focused-devs](/archive/msdn-magazine/2013/august/data-points-coding-for-domain-driven-design-tips-for-data-focused-devs) - -- **Udi Dahan. How to create fully encapsulated Domain Models** \ - - -- **Steve Smith. What is the difference between a DTO and a POCO?** \ - -> [!div class="step-by-step"] -> [Previous](microservice-domain-model.md) -> [Next](seedwork-domain-model-base-classes-interfaces.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md deleted file mode 100644 index 6d7361aa888cb..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md +++ /dev/null @@ -1,340 +0,0 @@ ---- -title: Using NoSQL databases as a persistence infrastructure -description: Understand the use of NoSql databases in general, and Azure Cosmos DB in particular, as an option to implement persistence. -ms.date: 09/10/2024 ---- -# Use NoSQL databases as a persistence infrastructure - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -When you use NoSQL databases for your infrastructure data tier, you typically do not use an ORM like Entity Framework Core. Instead you use the API provided by the NoSQL engine, such as Azure Cosmos DB, MongoDB, Cassandra, RavenDB, CouchDB, or Azure Storage Tables. - -However, when you use a NoSQL database, especially a document-oriented database like Azure Cosmos DB, CouchDB, or RavenDB, the way you design your model with DDD aggregates is partially similar to how you can do it in EF Core, in regards to the identification of aggregate roots, child entity classes, and value object classes. But, ultimately, the database selection will impact in your design. - -When you use a document-oriented database, you implement an aggregate as a single document, serialized in JSON or another format. 
However, the use of the database is transparent from a domain model code point of view. When using a NoSQL database, you still are using entity classes and aggregate root classes, but with more flexibility than when using EF Core because the persistence is not relational. - -The difference is in how you persist that model. If you implemented your domain model based on POCO entity classes, agnostic to the infrastructure persistence, it might look like you could move to a different persistence infrastructure, even from relational to NoSQL. However, that should not be your goal. There are always constraints and trade-offs in the different database technologies, so you will not be able to have the same model for relational or NoSQL databases. Changing persistence models is not a trivial task, because transactions and persistence operations will be very different. - -For example, in a document-oriented database, it is okay for an aggregate root to have multiple child collection properties. In a relational database, querying multiple child collection properties is not easily optimized, because you get a UNION ALL SQL statement back from EF. Having the same domain model for relational databases or NoSQL databases is not simple, and you should not try to do it. You really have to design your model with an understanding of how the data is going to be used in each particular database. - -A benefit when using NoSQL databases is that the entities are more denormalized, so you do not set a table mapping. Your domain model can be more flexible than when using a relational database. - -When you design your domain model based on aggregates, moving to NoSQL and document-oriented databases might be even easier than using a relational database, because the aggregates you design are similar to serialized documents in a document-oriented database. Then you can include in those "bags" all the information you might need for that aggregate. - -For instance, the following JSON code is a sample implementation of an order aggregate when using a document-oriented database. It is similar to the order aggregate we implemented in the eShopOnContainers sample, but without using EF Core underneath. - -```json -{ - "id": "2024001", - "orderDate": "2/25/2024", - "buyerId": "1234567", - "address": [ - { - "street": "100 One Microsoft Way", - "city": "Redmond", - "state": "WA", - "zip": "98052", - "country": "U.S." - } - ], - "orderItems": [ - {"id": 20240011, "productId": "123456", "productName": ".NET T-Shirt", - "unitPrice": 25, "units": 2, "discount": 0}, - {"id": 20240012, "productId": "123457", "productName": ".NET Mug", - "unitPrice": 15, "units": 1, "discount": 0} - ] -} -``` - -## Introduction to Azure Cosmos DB and the native Cosmos DB API - -[Azure Cosmos DB](/azure/cosmos-db/introduction) is Microsoft's globally distributed database service for mission-critical applications. Azure Cosmos DB provides [turn-key global distribution](/azure/cosmos-db/distribute-data-globally), [elastic scaling of throughput and storage](/azure/cosmos-db/partition-data) worldwide, single-digit millisecond latencies at the 99th percentile, [five well-defined consistency levels](/azure/cosmos-db/consistency-levels), and guaranteed high availability, all backed by [industry-leading SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/). Azure Cosmos DB [automatically indexes data](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf) without requiring you to deal with schema and index management. 
It is multi-model and supports document, key-value, graph, and columnar data models. - -![Diagram showing the Azure Cosmos DB global distribution.](./media/nosql-database-persistence-infrastructure/azure-cosmos-db-global-distribution.png) - -**Figure 7-19**. Azure Cosmos DB global distribution - -When you use a C\# model to implement the aggregate to be used by the Azure Cosmos DB API, the aggregate can be similar to the C\# POCO classes used with EF Core. The difference is in the way you use them from the application and infrastructure layers, as in the following code: - -```csharp -// C# EXAMPLE OF AN ORDER AGGREGATE BEING PERSISTED WITH AZURE COSMOS DB API -// *** Domain Model Code *** -// Aggregate: Create an Order object with its child entities and/or value objects. -// Then, use AggregateRoot's methods to add the nested objects so invariants and -// logic is consistent across the nested properties (value objects and entities). - -Order orderAggregate = new Order -{ - Id = "2024001", - OrderDate = new DateTime(2005, 7, 1), - BuyerId = "1234567", - PurchaseOrderNumber = "PO18009186470" -}; - -Address address = new Address -{ - Street = "100 One Microsoft Way", - City = "Redmond", - State = "WA", - Zip = "98052", - Country = "U.S." -}; - -orderAggregate.UpdateAddress(address); - -OrderItem orderItem1 = new OrderItem -{ - Id = 20240011, - ProductId = "123456", - ProductName = ".NET T-Shirt", - UnitPrice = 25, - Units = 2, - Discount = 0 -}; - -//Using methods with domain logic within the entity. No anemic-domain model -orderAggregate.AddOrderItem(orderItem1); -// *** End of Domain Model Code *** - -// *** Infrastructure Code using Cosmos DB Client API *** -Uri collectionUri = UriFactory.CreateDocumentCollectionUri(databaseName, - collectionName); - -await client.CreateDocumentAsync(collectionUri, orderAggregate); - -// As your app evolves, let's say your object has a new schema. You can insert -// OrderV2 objects without any changes to the database tier. -OrderV2 newOrder = GetOrderV2Sample("IdForSalesOrder2"); -await client.CreateDocumentAsync(collectionUri, newOrder); -``` - -You can see that the way you work with your domain model can be similar to the way you use it in your domain model layer when the infrastructure is EF. You still use the same aggregate root methods to ensure consistency, invariants, and validations within the aggregate. - -However, when you persist your model into the NoSQL database, the code and API change dramatically compared to EF Core code or any other code related to relational databases. - -## Implement .NET code targeting MongoDB and Azure Cosmos DB - -### Use Azure Cosmos DB from .NET containers - -You can access Azure Cosmos DB databases from .NET code running in containers, like from any other .NET application. For instance, the Locations.API and Marketing.API microservices in eShopOnContainers are implemented so they can consume Azure Cosmos DB databases. - -However, there’s a limitation in Azure Cosmos DB from a Docker development environment point of view. Even though there’s an on-premises [Azure Cosmos DB Emulator](/azure/cosmos-db/local-emulator) that can run on a local development machine, it only supports Windows. Linux and macOS aren't supported. - -You can also run this emulator on Docker, but only on Windows Containers, not Linux Containers.
That's an initial handicap for the development environment if your application is deployed as Linux containers, since, currently, you can't deploy Linux and Windows Containers on Docker for Windows at the same time. Either all containers being deployed have to be for Linux or for Windows. - -The ideal and more straightforward deployment for a dev/test solution is to be able to deploy your database systems as containers along with your custom containers so your dev/test environments are always consistent. - -### Use MongoDB API for local dev/test Linux/Windows containers plus Azure Cosmos DB - -Cosmos DB databases support MongoDB API for .NET as well as the native MongoDB wire protocol. This means that by using existing drivers, your application written for MongoDB can now communicate with Cosmos DB and use Cosmos DB databases instead of MongoDB databases, as shown in Figure 7-20. - -![Diagram showing that Cosmos DB supports .NET and MongoDB wire protocol.](./media/nosql-database-persistence-infrastructure/mongodb-api-wire-protocol.png) - -**Figure 7-20**. Using MongoDB API and protocol to access Azure Cosmos DB - -This is a very convenient approach for proof of concepts in Docker environments with Linux containers because the [MongoDB Docker image](https://hub.docker.com/r/_/mongo/) is a multi-arch image that supports Docker Linux containers and Docker Windows containers. - -As shown in the following image, by using the MongoDB API, eShopOnContainers supports MongoDB Linux and Windows containers for the local development environment but then, you can move to a scalable, PaaS cloud solution as Azure Cosmos DB by simply [changing the MongoDB connection string to point to Azure Cosmos DB](/azure/cosmos-db/connect-mongodb-account). - -![Diagram showing that the Location microservice in eShopOnContainers can use either Cosmos DB or Mongo DB.](./media/nosql-database-persistence-infrastructure/eshoponcontainers-mongodb-containers.png) - -**Figure 7-21**. eShopOnContainers using MongoDB containers for dev-env or Azure Cosmos DB for production - -The production Azure Cosmos DB would be running in Azure's cloud as a PaaS and scalable service. - -Your custom .NET containers can run on a local development Docker host (that is using Docker for Windows in a Windows 10 machine) or be deployed into a production environment, like Kubernetes in Azure AKS or Azure Service Fabric. In this second environment, you would deploy only the .NET custom containers but not the MongoDB container since you'd be using Azure Cosmos DB in the cloud for handling the data in production. - -A clear benefit of using the MongoDB API is that your solution could run in both database engines, MongoDB or Azure Cosmos DB, so migrations to different environments should be easy. However, sometimes it is worthwhile to use a native API (that is the native Cosmos DB API) in order to take full advantage of the capabilities of a specific database engine. - -For further comparison between simply using MongoDB versus Cosmos DB in the cloud, see the [Benefits of using Azure Cosmos DB in this page](/azure/cosmos-db/mongodb-introduction). - -### Analyze your approach for production applications: MongoDB API vs. Cosmos DB API - -In eShopOnContainers, we're using MongoDB API because our priority was fundamentally to have a consistent dev/test environment using a NoSQL database that could also work with Azure Cosmos DB. 
- -However, if you are planning to use MongoDB API to access Azure Cosmos DB in Azure for production applications, you should analyze the differences in capabilities and performance when using MongoDB API to access Azure Cosmos DB databases compared to using the native Azure Cosmos DB API. If it is similar you can use MongoDB API and you get the benefit of supporting two NoSQL database engines at the same time. - -You could also use MongoDB clusters as the production database in Azure's cloud, too, with [MongoDB Azure Service](https://www.mongodb.com/scale/mongodb-azure-service). But that is not a PaaS service provided by Microsoft. In this case, Azure is just hosting that solution coming from MongoDB. - -Basically, this is just a disclaimer stating that you shouldn't always use MongoDB API against Azure Cosmos DB, as we did in eShopOnContainers because it was a convenient choice for Linux containers. The decision should be based on the specific needs and tests you need to do for your production application. - -### The code: Use MongoDB API in .NET applications - -MongoDB API for .NET is based on NuGet packages that you need to add to your projects, like in the Locations.API project shown in the following figure. - -![Screenshot of the dependencies in the MongoDB NuGet packages.](./media/nosql-database-persistence-infrastructure/mongodb-api-nuget-packages.png) - -**Figure 7-22**. MongoDB API NuGet packages references in a .NET project - -Let's investigate the code in the following sections. - -#### A Model used by MongoDB API - -First, you need to define a model that will hold the data coming from the database in your application's memory space. Here's an example of the model used for Locations at eShopOnContainers. - -```csharp -using MongoDB.Bson; -using MongoDB.Bson.Serialization.Attributes; -using MongoDB.Driver.GeoJsonObjectModel; -using System.Collections.Generic; - -public class Locations -{ - [BsonId] - [BsonRepresentation(BsonType.ObjectId)] - public string Id { get; set; } - public int LocationId { get; set; } - public string Code { get; set; } - [BsonRepresentation(BsonType.ObjectId)] - public string Parent_Id { get; set; } - public string Description { get; set; } - public double Latitude { get; set; } - public double Longitude { get; set; } - public GeoJsonPoint Location - { get; private set; } - public GeoJsonPolygon Polygon - { get; private set; } - public void SetLocation(double lon, double lat) => SetPosition(lon, lat); - public void SetArea(List coordinatesList) - => SetPolygon(coordinatesList); - - private void SetPosition(double lon, double lat) - { - Latitude = lat; - Longitude = lon; - Location = new GeoJsonPoint( - new GeoJson2DGeographicCoordinates(lon, lat)); - } - - private void SetPolygon(List coordinatesList) - { - Polygon = new GeoJsonPolygon( - new GeoJsonPolygonCoordinates( - new GeoJsonLinearRingCoordinates( - coordinatesList))); - } -} -``` - -You can see there are a few attributes and types coming from the MongoDB NuGet packages. - -NoSQL databases are usually very well suited for working with non-relational hierarchical data. In this example, we are using MongoDB types especially made for geo-locations, like `GeoJson2DGeographicCoordinates`. - -#### Retrieve the database and the collection - -In eShopOnContainers, we have created a custom database context where we implement the code to retrieve the database and the MongoCollections, as in the following code. 
- -```csharp -public class LocationsContext -{ - private readonly IMongoDatabase _database = null; - - public LocationsContext(IOptions settings) - { - var client = new MongoClient(settings.Value.ConnectionString); - if (client != null) - _database = client.GetDatabase(settings.Value.Database); - } - - public IMongoCollection Locations - { - get - { - return _database.GetCollection("Locations"); - } - } -} -``` - -#### Retrieve the data - -In C# code, like Web API controllers or custom Repositories implementation, you can write similar code to the following when querying through the MongoDB API. Note that the `_context` object is an instance of the previous `LocationsContext` class. - -```csharp -public async Task GetAsync(int locationId) -{ - var filter = Builders.Filter.Eq("LocationId", locationId); - return await _context.Locations - .Find(filter) - .FirstOrDefaultAsync(); -} -``` - -#### Use an env-var in the docker-compose.override.yml file for the MongoDB connection string - -When creating a MongoClient object, it needs a fundamental parameter which is precisely the `ConnectionString` parameter pointing to the right database. In the case of eShopOnContainers, the connection string can point to a local MongoDB Docker container or to a "production" Azure Cosmos DB database. That connection string comes from the environment variables defined in the `docker-compose.override.yml` files used when deploying with docker-compose or Visual Studio, as in the following yml code. - -```yml -# docker-compose.override.yml -version: '3.4' -services: - # Other services - locations-api: - environment: - # Other settings - - ConnectionString=${ESHOP_AZURE_COSMOSDB:-mongodb://nosqldata} -``` - -[!INCLUDE [managed-identities](../../../includes/managed-identities.md)] - -The `ConnectionString` environment variable is resolved this way: If the `ESHOP_AZURE_COSMOSDB` global variable is defined in the `.env` file with the Azure Cosmos DB connection string, it will use it to access the Azure Cosmos DB database in the cloud. If it’s not defined, it will take the `mongodb://nosqldata` value and use the development MongoDB container. - -The following code shows the `.env` file with the Azure Cosmos DB connection string global environment variable, as implemented in eShopOnContainers: - -```yml -# .env file, in eShopOnContainers root folder -# Other Docker environment variables - -ESHOP_EXTERNAL_DNS_NAME_OR_IP=host.docker.internal -ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP= - -#ESHOP_AZURE_COSMOSDB= - -#Other environment variables for additional Azure infrastructure assets -#ESHOP_AZURE_REDIS_BASKET_DB= -#ESHOP_AZURE_STORAGE_CATALOG_URL= -#ESHOP_AZURE_SERVICE_BUS= -``` - -Uncomment the ESHOP_AZURE_COSMOSDB line and update it with your Azure Cosmos DB connection string obtained from the Azure portal as explained in [Connect a MongoDB application to Azure Cosmos DB](/azure/cosmos-db/connect-mongodb-account). - -If the `ESHOP_AZURE_COSMOSDB` global variable is empty, meaning it's commented out in the `.env` file, then the container uses a default MongoDB connection string. This connection string points to the local MongoDB container deployed in eShopOnContainers that is named `nosqldata` and was defined at the docker-compose file, as shown in the following .yml code: - -``` yml -# docker-compose.yml -version: '3.4' -services: - # ...Other services... 
- nosqldata: - image: mongo -``` - -#### Additional resources - -- **Modeling document data for NoSQL databases** \ - [https://learn.microsoft.com/azure/cosmos-db/modeling-data](/azure/cosmos-db/modeling-data) - -- **Vaughn Vernon. The Ideal Domain-Driven Design Aggregate Store?** \ - - -- **Introduction to Azure Cosmos DB: API for MongoDB** \ - [https://learn.microsoft.com/azure/cosmos-db/mongodb-introduction](/azure/cosmos-db/mongodb-introduction) - -- **Azure Cosmos DB: Build a MongoDB API web app with .NET and the Azure portal** \ - [https://learn.microsoft.com/azure/cosmos-db/create-mongodb-dotnet](/azure/cosmos-db/create-mongodb-dotnet) - -- **Use the Azure Cosmos DB Emulator for local development and testing** \ - [https://learn.microsoft.com/azure/cosmos-db/local-emulator](/azure/cosmos-db/local-emulator) - -- **Connect a MongoDB application to Azure Cosmos DB** \ - [https://learn.microsoft.com/azure/cosmos-db/connect-mongodb-account](/azure/cosmos-db/connect-mongodb-account) - -- **The MongoDB Docker image (Linux and Windows Container)** \ - - -- **Use MongoChef (Studio 3T) with an Azure Cosmos DB: API for MongoDB account** \ - [https://learn.microsoft.com/azure/cosmos-db/mongodb-mongochef](/azure/cosmos-db/mongodb-mongochef) - ->[!div class="step-by-step"] ->[Previous](infrastructure-persistence-layer-implementation-entity-framework-core.md) ->[Next](microservice-application-layer-web-api-design.md) diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/seedwork-domain-model-base-classes-interfaces.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/seedwork-domain-model-base-classes-interfaces.md deleted file mode 100644 index 9853ed0ecf6dc..0000000000000 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/seedwork-domain-model-base-classes-interfaces.md +++ /dev/null @@ -1,143 +0,0 @@ ---- -title: Seedwork (reusable base classes and interfaces for your domain model) -description: .NET Microservices Architecture for Containerized .NET Applications | Use the seedwork concept as a starting point to start implementation for a DDD-oriented domain model. -ms.date: 10/08/2018 ---- -# Seedwork (reusable base classes and interfaces for your domain model) - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -The solution folder contains a *SeedWork* folder. This folder contains custom base classes that you can use as a base for your domain entities and value objects. Use these base classes so you don't have redundant code in each domain's object class. The folder for these types of classes is called *SeedWork* and not something like *Framework*. It's called *SeedWork* because the folder contains just a small subset of reusable classes that cannot really be considered a framework. *Seedwork* is a term introduced by [Michael Feathers](https://www.artima.com/forums/flat.jsp?forum=106&thread=8826) and popularized by [Martin Fowler](https://martinfowler.com/bliki/Seedwork.html) but you could also name that folder Common, SharedKernel, or similar. - -Figure 7-12 shows the classes that form the seedwork of the domain model in the ordering microservice. It has a few custom base classes like Entity, ValueObject, and Enumeration, plus a few interfaces. These interfaces (IRepository and IUnitOfWork) inform the infrastructure layer about what needs to be implemented. Those interfaces are also used through Dependency Injection from the application layer. 
- -:::image type="complex" source="./media/seedwork-domain-model-base-classes-interfaces/vs-solution-seedwork-classes.png" alt-text="Screenshot of the classes contained in the SeedWork folder."::: -The detailed contents of the SeedWork folder, containing base classes and interfaces: Entity.cs, Enumeration.cs, IAggregateRoot.cs, IRepository.cs, IUnitOfWork.cs, and ValueObject.cs. -:::image-end::: - -**Figure 7-12**. A sample set of domain model "seedwork" base classes and interfaces - -This is the type of copy and paste reuse that many developers share between projects, not a formal framework. You can have seedworks in any layer or library. However, if the set of classes and interfaces gets large enough, you might want to create a single class library. - -## The custom Entity base class - -The following code is an example of an Entity base class where you can place code that can be used the same way by any domain entity, such as the entity ID, [equality operators](../../../csharp/language-reference/operators/equality-operators.md), a domain event list per entity, etc. - -```csharp -// COMPATIBLE WITH ENTITY FRAMEWORK CORE (1.1 and later) -public abstract class Entity -{ - int? _requestedHashCode; - int _Id; - private List _domainEvents; - public virtual int Id - { - get - { - return _Id; - } - protected set - { - _Id = value; - } - } - - public List DomainEvents => _domainEvents; - public void AddDomainEvent(INotification eventItem) - { - _domainEvents = _domainEvents ?? new List(); - _domainEvents.Add(eventItem); - } - public void RemoveDomainEvent(INotification eventItem) - { - if (_domainEvents is null) return; - _domainEvents.Remove(eventItem); - } - - public bool IsTransient() - { - return this.Id == default(Int32); - } - - public override bool Equals(object obj) - { - if (obj == null || !(obj is Entity)) - return false; - if (Object.ReferenceEquals(this, obj)) - return true; - if (this.GetType() != obj.GetType()) - return false; - Entity item = (Entity)obj; - if (item.IsTransient() || this.IsTransient()) - return false; - else - return item.Id == this.Id; - } - - public override int GetHashCode() - { - if (!IsTransient()) - { - if (!_requestedHashCode.HasValue) - _requestedHashCode = this.Id.GetHashCode() ^ 31; - // XOR for random distribution. See: - // https://learn.microsoft.com/archive/blogs/ericlippert/guidelines-and-rules-for-gethashcode - return _requestedHashCode.Value; - } - else - return base.GetHashCode(); - } - public static bool operator ==(Entity left, Entity right) - { - if (Object.Equals(left, null)) - return (Object.Equals(right, null)); - else - return left.Equals(right); - } - public static bool operator !=(Entity left, Entity right) - { - return !(left == right); - } -} -``` - -The previous code using a domain event list per entity will be explained in the next sections when focusing on domain events. - -## Repository contracts (interfaces) in the domain model layer - -Repository contracts are simply .NET interfaces that express the contract requirements of the repositories to be used for each aggregate. - -The repositories themselves, with EF Core code or any other infrastructure dependencies and code (Linq, SQL, etc.), must not be implemented within the domain model; the repositories should only implement the interfaces you define in the domain model. - -A pattern related to this practice (placing the repository interfaces in the domain model layer) is the Separated Interface pattern. 
As [explained](https://www.martinfowler.com/eaaCatalog/separatedInterface.html) by Martin Fowler, "Use Separated Interface to define an interface in one package but implement it in another. This way a client that needs the dependency to the interface can be completely unaware of the implementation." - -Following the Separated Interface pattern enables the application layer (in this case, the Web API project for the microservice) to have a dependency on the requirements defined in the domain model, but not a direct dependency on the infrastructure/persistence layer. In addition, you can use Dependency Injection to isolate the implementation, which lives in the infrastructure/persistence layer and uses repositories. - -For example, the IOrderRepository interface defines what operations the OrderRepository class will need to implement at the infrastructure layer. In the current implementation of the application, the code just needs to add or update orders in the database, since queries are split following the simplified CQRS approach. - -```csharp -// Defined at IOrderRepository.cs -public interface IOrderRepository : IRepository<Order> -{ - Order Add(Order order); - - void Update(Order order); - - Task<Order> GetAsync(int orderId); -} - -// Defined at IRepository.cs (Part of the Domain Seedwork) -public interface IRepository<T> where T : IAggregateRoot -{ - IUnitOfWork UnitOfWork { get; } -} -``` - -## Additional resources - -- **Martin Fowler. Separated Interface.** \ - - ->[!div class="step-by-step"] ->[Previous](net-core-microservice-domain-model.md) ->[Next](implement-value-objects.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md b/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md deleted file mode 100644 index d59d0322c669d..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md +++ /dev/null @@ -1,230 +0,0 @@ ---- -title: Implement background tasks in microservices with IHostedService and the BackgroundService class -description: .NET Microservices Architecture for Containerized .NET Applications | Understand the new options to use IHostedService and BackgroundService to implement background tasks in microservices .NET Core. -ms.date: 01/13/2021 ---- -# Implement background tasks in microservices with IHostedService and the BackgroundService class - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Background tasks and scheduled jobs are something you might need to use in any application, whether or not it follows the microservices architecture pattern. The difference when using a microservices architecture is that you can implement the background task in a separate process/container for hosting so you can scale it up or down based on your needs. - -From a generic point of view, in .NET we call these types of tasks *Hosted Services*, because they are services/logic that you host within your host/application/microservice. Note that in this case, the hosted service simply means a class with the background task logic. - -Since .NET Core 2.0, the framework provides a new interface named `IHostedService` that helps you easily implement hosted services. The basic idea is that you can register multiple background tasks (hosted services) that run in the background while your web host or host is running, as shown in Figure 6-26.
- -![Diagram comparing ASP.NET Core IWebHost and .NET Core IHost.](./media/background-tasks-with-ihostedservice/ihosted-service-webhost-vs-host.png) - -**Figure 6-26**. Using IHostedService in a WebHost vs. a Host - -ASP.NET Core 1.x and 2.x support `IWebHost` for background processes in web apps. .NET Core 2.1 and later versions support `IHost` for background processes with plain console apps. Note the difference between `WebHost` and `Host`. - -A `WebHost` (base class implementing `IWebHost`) in ASP.NET Core 2.0 is the infrastructure artifact you use to provide HTTP server features to your process, such as when you're implementing an MVC web app or Web API service. It provides all the new infrastructure goodness in ASP.NET Core, enabling you to use dependency injection, insert middleware in the request pipeline, and so on. The `WebHost` uses these very same `IHostedServices` for background tasks. - -A `Host` (base class implementing `IHost`) was introduced in .NET Core 2.1. Basically, a `Host` allows you to have similar infrastructure to what you have with `WebHost` (dependency injection, hosted services, etc.), but in this case, you just want to have a simple and lighter process as the host, with nothing related to MVC, Web API or HTTP server features. - -Therefore, you can either create a specialized host process with `IHost` to handle the hosted services and nothing else, such as a microservice made just for hosting the `IHostedServices`, or you can alternatively extend an existing ASP.NET Core `WebHost`, such as an existing ASP.NET Core Web API or MVC app. - -Each approach has pros and cons depending on your business and scalability needs. The bottom line is basically that if your background tasks have nothing to do with HTTP (`IWebHost`) you should use `IHost`. - -## Registering hosted services in your WebHost or Host - -Let's drill down further on the `IHostedService` interface since its usage is pretty similar in a `WebHost` or in a `Host`. - -SignalR is one example of an artifact using hosted services, but you can also use hosted services for much simpler things like: - -- A background task polling a database looking for changes. -- A scheduled task updating some cache periodically. -- An implementation of QueueBackgroundWorkItem that allows a task to be executed on a background thread. -- Processing messages from a message queue in the background of a web app while sharing common services such as `ILogger`. -- A background task started with `Task.Run()`. - -You can basically offload any of those actions to a background task that implements `IHostedService`. - -The way you add one or multiple `IHostedServices` into your `WebHost` or `Host` is by registering them through the `AddHostedService<THostedService>()` extension method in an ASP.NET Core `WebHost` (or in a `Host` in .NET Core 2.1 and above). Basically, you have to register the hosted services within application startup in _Program.cs_. - -```csharp -//Other DI registrations; - -// Register Hosted Services -builder.Services.AddHostedService<GracePeriodManagerService>(); -builder.Services.AddHostedService<MyHostedServiceB>(); -builder.Services.AddHostedService<MyHostedServiceC>(); -//... -``` - -In that code, the `GracePeriodManagerService` hosted service is real code from the Ordering business microservice in eShopOnContainers, while the other two (`MyHostedServiceB` and `MyHostedServiceC`) are just two additional illustrative samples. - -The `IHostedService` background task execution is coordinated with the lifetime of the application (host or microservice, for that matter).
You register tasks when the application starts and you have the opportunity to do some graceful action or clean-up when the application is shutting down. - -Without using `IHostedService`, you could always start a background thread to run any task. The difference is precisely at the app's shutdown time when that thread would simply be killed without having the opportunity to run graceful clean-up actions. - -## The IHostedService interface - -When you register an `IHostedService`, .NET calls the `StartAsync()` and `StopAsync()` methods of your `IHostedService` type during application start and stop respectively. For more details, see [IHostedService interface](/aspnet/core/fundamentals/host/hosted-services#ihostedservice-interface). - -As you can imagine, you can create multiple implementations of IHostedService and register each of them in _Program.cs_, as shown previously. All those hosted services will be started and stopped along with the application/microservice. - -As a developer, you are responsible for handling the stopping action of your services when `StopAsync()` method is triggered by the host. - -## Implementing IHostedService with a custom hosted service class deriving from the BackgroundService base class - -You could go ahead and create your custom hosted service class from scratch and implement the `IHostedService`, as you need to do when using .NET Core 2.0 and later. - -However, since most background tasks will have similar needs in regard to the cancellation tokens management and other typical operations, there is a convenient abstract base class you can derive from, named `BackgroundService` (available since .NET Core 2.1). - -That class provides the main work needed to set up the background task. - -The next code is the abstract BackgroundService base class as implemented in .NET. - -```csharp -// Copyright (c) .NET Foundation. Licensed under the Apache License, Version 2.0. -/// -/// Base class for implementing a long running . -/// -public abstract class BackgroundService : IHostedService, IDisposable -{ - private Task _executingTask; - private readonly CancellationTokenSource _stoppingCts = - new CancellationTokenSource(); - - protected abstract Task ExecuteAsync(CancellationToken stoppingToken); - - public virtual Task StartAsync(CancellationToken cancellationToken) - { - // Store the task we're executing - _executingTask = ExecuteAsync(_stoppingCts.Token); - - // If the task is completed then return it, - // this will bubble cancellation and failure to the caller - if (_executingTask.IsCompleted) - { - return _executingTask; - } - - // Otherwise it's running - return Task.CompletedTask; - } - - public virtual async Task StopAsync(CancellationToken cancellationToken) - { - // Stop called without start - if (_executingTask == null) - { - return; - } - - try - { - // Signal cancellation to the executing method - _stoppingCts.Cancel(); - } - finally - { - // Wait until the task completes or the stop token triggers - await Task.WhenAny(_executingTask, Task.Delay(Timeout.Infinite, - cancellationToken)); - } - - } - - public virtual void Dispose() - { - _stoppingCts.Cancel(); - } -} -``` - -When deriving from the previous abstract base class, thanks to that inherited implementation, you just need to implement the `ExecuteAsync()` method in your own custom hosted service class, as in the following simplified code from eShopOnContainers which is polling a database and publishing integration events into the Event Bus when needed. 
- -```csharp -public class GracePeriodManagerService : BackgroundService -{ - private readonly ILogger _logger; - private readonly OrderingBackgroundSettings _settings; - - private readonly IEventBus _eventBus; - - public GracePeriodManagerService(IOptions settings, - IEventBus eventBus, - ILogger logger) - { - // Constructor's parameters validations... - } - - protected override async Task ExecuteAsync(CancellationToken stoppingToken) - { - _logger.LogDebug($"GracePeriodManagerService is starting."); - - stoppingToken.Register(() => - _logger.LogDebug($" GracePeriod background task is stopping.")); - - while (!stoppingToken.IsCancellationRequested) - { - _logger.LogDebug($"GracePeriod task doing background work."); - - // This eShopOnContainers method is querying a database table - // and publishing events into the Event Bus (RabbitMQ / ServiceBus) - CheckConfirmedGracePeriodOrders(); - - try { - await Task.Delay(_settings.CheckUpdateTime, stoppingToken); - } - catch (TaskCanceledException exception) { - _logger.LogCritical(exception, "TaskCanceledException Error", exception.Message); - } - } - - _logger.LogDebug($"GracePeriod background task is stopping."); - } - - .../... -} -``` - -In this specific case for eShopOnContainers, it's executing an application method that's querying a database table looking for orders with a specific state and when applying changes, it is publishing integration events through the event bus (underneath it can be using RabbitMQ or Azure Service Bus). - -Of course, you could run any other business background task, instead. - -By default, the cancellation token is set with a 5 seconds timeout, although you can change that value when building your `WebHost` using the `UseShutdownTimeout` extension of the `IWebHostBuilder`. This means that our service is expected to cancel within 5 seconds otherwise it will be more abruptly killed. - -The following code would be changing that time to 10 seconds. - -```csharp -WebHost.CreateDefaultBuilder(args) - .UseShutdownTimeout(TimeSpan.FromSeconds(10)) - ... -``` - -### Summary class diagram - -The following image shows a visual summary of the classes and interfaces involved when implementing IHostedServices. - -![Diagram showing that IWebHost and IHost can host many services.](./media/background-tasks-with-ihostedservice/class-diagram-custom-ihostedservice.png) - -**Figure 6-27**. Class diagram showing the multiple classes and interfaces related to IHostedService - -Class diagram: IWebHost and IHost can host many services, which inherit from BackgroundService, which implements IHostedService. - -### Deployment considerations and takeaways - -It is important to note that the way you deploy your ASP.NET Core `WebHost` or .NET `Host` might impact the final solution. For instance, if you deploy your `WebHost` on IIS or a regular Azure App Service, your host can be shut down because of app pool recycles. But if you are deploying your host as a container into an orchestrator like Kubernetes, you can control the assured number of live instances of your host. In addition, you could consider other approaches in the cloud especially made for these scenarios, like Azure Functions. Finally, if you need the service to be running all the time and are deploying on a Windows Server you could use a Windows Service. - -But even for a `WebHost` deployed into an app pool, there are scenarios like repopulating or flushing application's in-memory cache that would be still applicable. 
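
If you host your background services with the generic host or the minimal hosting model (`WebApplication`) rather than the older `WebHost`, an equivalent way to extend that shutdown grace period is through `HostOptions`. The following is a minimal sketch, not code from eShopOnContainers, reusing the `GracePeriodManagerService` shown earlier:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Allow hosted services up to 10 seconds to complete StopAsync()
// before the host gives up on the remaining work.
builder.Services.Configure<HostOptions>(options =>
    options.ShutdownTimeout = TimeSpan.FromSeconds(10));

builder.Services.AddHostedService<GracePeriodManagerService>();

var app = builder.Build();
app.Run();
```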
- -The `IHostedService` interface provides a convenient way to start background tasks in an ASP.NET Core web application (in .NET Core 2.0 and later versions) or in any process/host (starting in .NET Core 2.1 with `IHost`). Its main benefit is the opportunity you get with the graceful cancellation to clean-up the code of your background tasks when the host itself is shutting down. - -## Additional resources - -- **Building a scheduled task in ASP.NET Core/Standard 2.0** \ - - -- **Implementing IHostedService in ASP.NET Core 2.0** \ - - -- **GenericHost Sample using ASP.NET Core 2.1** \ - - -> [!div class="step-by-step"] -> [Previous](test-aspnet-core-services-web-apps.md) -> [Next](implement-api-gateways-with-ocelot.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/data-driven-crud-microservice.md b/docs/architecture/microservices/multi-container-microservice-net-applications/data-driven-crud-microservice.md deleted file mode 100644 index 4e33146378605..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/data-driven-crud-microservice.md +++ /dev/null @@ -1,421 +0,0 @@ ---- -title: Creating a simple data-driven CRUD microservice -description: .NET Microservices Architecture for Containerized .NET Applications | Understand the creation of a simple CRUD (data-driven) microservice within the context of a microservices application. -ms.date: 09/10/2024 ---- - -# Creating a simple data-driven CRUD microservice - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -This section outlines how to create a simple microservice that performs create, read, update, and delete (CRUD) operations on a data source. - -## Designing a simple CRUD microservice - -From a design point of view, this type of containerized microservice is very simple. Perhaps the problem to solve is simple, or perhaps the implementation is only a proof of concept. - -![Diagram showing a simple CRUD microservice internal design pattern.](./media/data-driven-crud-microservice/internal-design-simple-crud-microservices.png) - -**Figure 6-4**. Internal design for simple CRUD microservices - -An example of this kind of simple data-drive service is the catalog microservice from the eShopOnContainers sample application. This type of service implements all its functionality in a single ASP.NET Core Web API project that includes classes for its data model, its business logic, and its data access code. It also stores its related data in a database running in SQL Server (as another container for dev/test purposes), but could also be any regular SQL Server host, as shown in Figure 6-5. - -![Diagram showing a data-driven/CRUD microservice container.](./media/data-driven-crud-microservice/simple-data-driven-crud-microservice.png) - -**Figure 6-5**. Simple data-driven/CRUD microservice design - -The previous diagram shows the logical Catalog microservice, that includes its Catalog database, which can be or not in the same Docker host. Having the database in the same Docker host might be good for development, but not for production. When you are developing this kind of service, you only need [ASP.NET Core](/aspnet/core/) and a data-access API or ORM like [Entity Framework Core](/ef/core/index). You could also generate [Swagger](https://swagger.io/) metadata automatically through [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle.AspNetCore) to provide a description of what your service offers, as explained in the next section. 
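
As a rough sketch of how little is needed to get started, you could scaffold such a service from the .NET CLI and add the EF Core SQL Server provider. The project name below is only illustrative; a Visual Studio walkthrough follows later in this article.

```console
dotnet new webapi -n Catalog.API
cd Catalog.API
dotnet add package Microsoft.EntityFrameworkCore.SqlServer
```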
- -Note that running a database server like SQL Server within a Docker container is great for development environments, because you can have all your dependencies up and running without needing to provision a database in the cloud or on-premises. This approach is convenient when running integration tests. However, for production environments, running a database server in a container is not recommended, because you usually do not get high availability with that approach. For a production environment in Azure, it is recommended that you use Azure SQL DB or any other database technology that can provide high availability and high scalability. For example, for a NoSQL approach, you might choose CosmosDB. - -Finally, by editing the Dockerfile and docker-compose.yml metadata files, you can configure how the image of this container will be created—what base image it will use, plus design settings such as internal and external names and TCP ports. - -## Implementing a simple CRUD microservice with ASP.NET Core - -To implement a simple CRUD microservice using .NET and Visual Studio, you start by creating a simple ASP.NET Core Web API project (running on .NET so it can run on a Linux Docker host), as shown in Figure 6-6. - -![Screenshot of Visual Studios showing the set up of the project.](./media/data-driven-crud-microservice/create-asp-net-core-web-api-project.png) - -**Figure 6-6**. Creating an ASP.NET Core Web API project in Visual Studio 2019 - -To create an ASP.NET Core Web API Project, first select an ASP.NET Core Web Application and then select the API type. After creating the project, you can implement your MVC controllers as you would in any other Web API project, using the Entity Framework API or other API. In a new Web API project, you can see that the only dependency you have in that microservice is on ASP.NET Core itself. Internally, within the *Microsoft.AspNetCore.All* dependency, it is referencing Entity Framework and many other .NET NuGet packages, as shown in Figure 6-7. - -![Screenshot of VS showing the NuGet dependencies of Catalog.Api.](./media/data-driven-crud-microservice/simple-crud-web-api-microservice-dependencies.png) - -**Figure 6-7**. Dependencies in a simple CRUD Web API microservice - -The API project includes references to Microsoft.AspNetCore.App NuGet package, that includes references to all essential packages. It could include some other packages as well. - -### Implementing CRUD Web API services with Entity Framework Core - -Entity Framework (EF) Core is a lightweight, extensible, and cross-platform version of the popular Entity Framework data access technology. EF Core is an object-relational mapper (ORM) that enables .NET developers to work with a database using .NET objects. - -The catalog microservice uses EF and the SQL Server provider because its database is running in a container with the SQL Server for Linux Docker image. However, the database could be deployed into any SQL Server, such as Windows on-premises or Azure SQL DB. The only thing you would need to change is the connection string in the ASP.NET Web API microservice. - -#### The data model - -With EF Core, data access is performed by using a model. A model is made up of (domain model) entity classes and a derived context (DbContext) that represents a session with the database, allowing you to query and save data. 
You can generate a model from an existing database, manually code a model to match your database, or use EF migrations technique to create a database from your model, using the code-first approach (that makes it easy to evolve the database as your model changes over time). For the catalog microservice, the last approach has been used. You can see an example of the CatalogItem entity class in the following code example, which is a simple Plain Old Class Object ([POCO](../../../standard/glossary.md#poco)) entity class. - -```csharp -public class CatalogItem -{ - public int Id { get; set; } - public string Name { get; set; } - public string Description { get; set; } - public decimal Price { get; set; } - public string PictureFileName { get; set; } - public string PictureUri { get; set; } - public int CatalogTypeId { get; set; } - public CatalogType CatalogType { get; set; } - public int CatalogBrandId { get; set; } - public CatalogBrand CatalogBrand { get; set; } - public int AvailableStock { get; set; } - public int RestockThreshold { get; set; } - public int MaxStockThreshold { get; set; } - - public bool OnReorder { get; set; } - public CatalogItem() { } - - // Additional code ... -} -``` - -You also need a DbContext that represents a session with the database. For the catalog microservice, the CatalogContext class derives from the DbContext base class, as shown in the following example: - -```csharp -public class CatalogContext : DbContext -{ - public CatalogContext(DbContextOptions options) : base(options) - { } - public DbSet CatalogItems { get; set; } - public DbSet CatalogBrands { get; set; } - public DbSet CatalogTypes { get; set; } - - // Additional code ... -} -``` - -You can have additional `DbContext` implementations. For example, in the sample Catalog.API microservice, there's a second `DbContext` named `CatalogContextSeed` where it automatically populates the sample data the first time it tries to access the database. This method is useful for demo data and for automated testing scenarios, as well. - -Within the `DbContext`, you use the `OnModelCreating` method to customize object/database entity mappings and other [EF extensibility points](https://devblogs.microsoft.com/dotnet/implementing-seeding-custom-conventions-and-interceptors-in-ef-core-1-0/). - -##### Querying data from Web API controllers - -Instances of your entity classes are typically retrieved from the database using Language-Integrated Query (LINQ), as shown in the following example: - -```csharp -[Route("api/v1/[controller]")] -public class CatalogController : ControllerBase -{ - private readonly CatalogContext _catalogContext; - private readonly CatalogSettings _settings; - private readonly ICatalogIntegrationEventService _catalogIntegrationEventService; - - public CatalogController( - CatalogContext context, - IOptionsSnapshot settings, - ICatalogIntegrationEventService catalogIntegrationEventService) - { - _catalogContext = context ?? throw new ArgumentNullException(nameof(context)); - _catalogIntegrationEventService = catalogIntegrationEventService - ?? 
throw new ArgumentNullException(nameof(catalogIntegrationEventService)); - - _settings = settings.Value; - context.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking; - } - - // GET api/v1/[controller]/items[?pageSize=3&pageIndex=10] - [HttpGet] - [Route("items")] - [ProducesResponseType(typeof(PaginatedItemsViewModel), (int)HttpStatusCode.OK)] - [ProducesResponseType(typeof(IEnumerable), (int)HttpStatusCode.OK)] - [ProducesResponseType((int)HttpStatusCode.BadRequest)] - public async Task ItemsAsync( - [FromQuery]int pageSize = 10, - [FromQuery]int pageIndex = 0, - string ids = null) - { - if (!string.IsNullOrEmpty(ids)) - { - var items = await GetItemsByIdsAsync(ids); - - if (!items.Any()) - { - return BadRequest("ids value invalid. Must be comma-separated list of numbers"); - } - - return Ok(items); - } - - var totalItems = await _catalogContext.CatalogItems - .LongCountAsync(); - - var itemsOnPage = await _catalogContext.CatalogItems - .OrderBy(c => c.Name) - .Skip(pageSize * pageIndex) - .Take(pageSize) - .ToListAsync(); - - itemsOnPage = ChangeUriPlaceholder(itemsOnPage); - - var model = new PaginatedItemsViewModel( - pageIndex, pageSize, totalItems, itemsOnPage); - - return Ok(model); - } - //... -} -``` - -##### Saving data - -Data is created, deleted, and modified in the database using instances of your entity classes. You could add code like the following hard-coded example (mock data, in this case) to your Web API controllers. - -```csharp -var catalogItem = new CatalogItem() {CatalogTypeId=2, CatalogBrandId=2, - Name="Roslyn T-Shirt", Price = 12}; -_context.Catalog.Add(catalogItem); -_context.SaveChanges(); -``` - -##### Dependency Injection in ASP.NET Core and Web API controllers - -In ASP.NET Core, you can use Dependency Injection (DI) out of the box. You do not need to set up a third-party Inversion of Control (IoC) container, although you can plug your preferred IoC container into the ASP.NET Core infrastructure if you want. In this case, it means that you can directly inject the required EF DBContext or additional repositories through the controller constructor. - -In the `CatalogController` class mentioned earlier, `CatalogContext` (which inherits from `DbContext`) type is injected along with the other required objects in the `CatalogController()` constructor. - -An important configuration to set up in the Web API project is the DbContext class registration into the service's IoC container. You typically do so in the _Program.cs_ file by calling the `builder.Services.AddDbContext()` method, as shown in the following **simplified** example: - -```csharp -// Additional code... - -builder.Services.AddDbContext(options => -{ - options.UseSqlServer(builder.Configuration["ConnectionString"], - sqlServerOptionsAction: sqlOptions => - { - sqlOptions.MigrationsAssembly( - typeof(Program).GetTypeInfo().Assembly.GetName().Name); - - //Configuring Connection Resiliency: - sqlOptions. - EnableRetryOnFailure(maxRetryCount: 5, - maxRetryDelay: TimeSpan.FromSeconds(30), - errorNumbersToAdd: null); - }); - - // Changing default behavior when client evaluation occurs to throw. - // Default in EFCore would be to log warning when client evaluation is done. 
- options.ConfigureWarnings(warnings => warnings.Throw( - RelationalEventId.QueryClientEvaluationWarning)); -}); -``` - -[!INCLUDE [managed-identities](../../../includes/managed-identities.md)] - -### Additional resources - -- **Querying Data** \ - [https://learn.microsoft.com/ef/core/querying/index](/ef/core/querying/index) - -- **Saving Data** \ - [https://learn.microsoft.com/ef/core/saving/index](/ef/core/saving/index) - -## The DB connection string and environment variables used by Docker containers - -You can use the ASP.NET Core settings and add a ConnectionString property to your settings.json file as shown in the following example: - -```json -{ - "ConnectionString": "Server=tcp:127.0.0.1,5433;Initial Catalog=Microsoft.eShopOnContainers.Services.CatalogDb;User Id=sa;Password=[PLACEHOLDER]", - "ExternalCatalogBaseUrl": "http://host.docker.internal:5101", - "Logging": { - "IncludeScopes": false, - "LogLevel": { - "Default": "Debug", - "System": "Information", - "Microsoft": "Information" - } - } -} -``` - -The settings.json file can have default values for the ConnectionString property or for any other property. However, those properties will be overridden by the values of environment variables that you specify in the docker-compose.override.yml file, when using Docker. - -From your docker-compose.yml or docker-compose.override.yml files, you can initialize those environment variables so that Docker will set them up as OS environment variables for you, as shown in the following docker-compose.override.yml file (the connection string and other lines wrap in this example, but it would not wrap in your own file). - -```yml -# docker-compose.override.yml - -# -catalog-api: - environment: - - ConnectionString=Server=sqldata;Database=Microsoft.eShopOnContainers.Services.CatalogDb;User Id=sa;Password=[PLACEHOLDER] - # Additional environment variables for this service - ports: - - "5101:80" -``` - -[!INCLUDE [managed-identities](../../../includes/managed-identities.md)] - -The docker-compose.yml files at the solution level are not only more flexible than configuration files at the project or microservice level, but also more secure if you override the environment variables declared at the docker-compose files with values set from your deployment tools, like from Azure DevOps Services Docker deployment tasks. - -Finally, you can get that value from your code by using `builder.Configuration["ConnectionString"]`, as shown in an earlier code example. - -However, for production environments, you might want to explore additional ways on how to store secrets like the connection strings. An excellent way to manage application secrets is using [Azure Key Vault](https://azure.microsoft.com/services/key-vault/). - -Azure Key Vault helps to store and safeguard cryptographic keys and secrets used by your cloud applications and services. A secret is anything you want to keep strict control of, like API keys, connection strings, passwords, etc. and strict control includes usage logging, setting expiration, managing access, *among others*. - -Azure Key Vault allows a detailed control level of the application secrets usage without the need to let anyone know them. The secrets can even be rotated for enhanced security without disrupting development or operations. - -Applications have to be registered in the organization's Active Directory, so they can use the Key Vault. - -You can check the *Key Vault Concepts documentation* for more details. 
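
As an illustration only (this isn't part of the eShopOnContainers code), a microservice can load those secrets through the ASP.NET Core configuration system by using the Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets packages. The vault URI below is a placeholder:

```csharp
using Azure.Identity;

var builder = WebApplication.CreateBuilder(args);

// Hypothetical vault URI; in practice it usually arrives as an environment
// variable injected by the deployment pipeline or orchestrator.
var keyVaultUri = new Uri("https://my-catalog-vault.vault.azure.net/");

// Secrets such as "ConnectionString" surface as regular configuration keys,
// so builder.Configuration["ConnectionString"] keeps working unchanged.
builder.Configuration.AddAzureKeyVault(keyVaultUri, new DefaultAzureCredential());
```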
- -### Implementing versioning in ASP.NET Web APIs - -As business requirements change, new collections of resources may be added, the relationships between resources might change, and the structure of the data in resources might be amended. Updating a Web API to handle new requirements is a relatively straightforward process, but you must consider the effects that such changes will have on client applications consuming the Web API. Although the developer designing and implementing a Web API has full control over that API, the developer does not have the same degree of control over client applications that might be built by third-party organizations operating remotely. - -Versioning enables a Web API to indicate the features and resources that it exposes. A client application can then submit requests to a specific version of a feature or resource. There are several approaches to implement versioning: - -- URI versioning -- Query string versioning -- Header versioning - -Query string and URI versioning are the simplest to implement. Header versioning is a good approach. However, header versioning is not as explicit and straightforward as URI versioning. Because URL versioning is the simplest and most explicit, the eShopOnContainers sample application uses URI versioning. - -With URI versioning, as in the eShopOnContainers sample application, each time you modify the Web API or change the schema of resources, you add a version number to the URI for each resource. Existing URIs should continue to operate as before, returning resources that conform to the schema that matches the requested version. - -As shown in the following code example, the version can be set by using the Route attribute in the Web API controller, which makes the version explicit in the URI (v1 in this case). - -```csharp -[Route("api/v1/[controller]")] -public class CatalogController : ControllerBase -{ - // Implementation ... -``` - -This versioning mechanism is simple and depends on the server routing the request to the appropriate endpoint. However, for a more sophisticated versioning and the best method when using REST, you should use hypermedia and implement [HATEOAS (Hypertext as the Engine of Application State)](/azure/architecture/best-practices/api-design#use-hateoas-to-enable-navigation-to-related-resources). - -### Additional resources - -- **ASP.NET API Versioning** \ - -- **Scott Hanselman. ASP.NET Core RESTful Web API versioning made easy** \ - - -- **Versioning a RESTful web API** \ - [https://learn.microsoft.com/azure/architecture/best-practices/api-design#versioning-a-restful-web-api](/azure/architecture/best-practices/api-design#versioning-a-restful-web-api) - -- **Roy Fielding. Versioning, Hypermedia, and REST** \ - - -## Generating Swagger description metadata from your ASP.NET Core Web API - -[Swagger](https://swagger.io/) is a commonly used open source framework backed by a large ecosystem of tools that helps you design, build, document, and consume your RESTful APIs. It is becoming the standard for the APIs description metadata domain. You should include Swagger description metadata with any kind of microservice, either data-driven microservices or more advanced domain-driven microservices (as explained in the following section). - -The heart of Swagger is the Swagger specification, which is API description metadata in a JSON or YAML file. 
The specification creates the RESTful contract for your API, detailing all its resources and operations in both a human- and machine-readable format for easy development, discovery, and integration. - -The specification is the basis of the OpenAPI Specification (OAS) and is developed in an open, transparent, and collaborative community to standardize the way RESTful interfaces are defined. - -The specification defines the structure for how a service can be discovered and how its capabilities understood. For more information, including a web editor and examples of Swagger specifications from companies like Spotify, Uber, Slack, and Microsoft, see the Swagger site (). - -### Why use Swagger? - -The main reasons to generate Swagger metadata for your APIs are the following. - -**Ability for other products to automatically consume and integrate your APIs**. Dozens of products and [commercial tools](https://swagger.io/commercial-tools/) and many [libraries and frameworks](https://swagger.io/open-source-integrations/) support Swagger. Microsoft has high-level products and tools that can automatically consume Swagger-based APIs, such as the following: - -- [AutoRest](https://github.com/Azure/AutoRest). You can automatically generate .NET client classes for calling Swagger. This tool can be used from the CLI and it also integrates with Visual Studio for easy use through the GUI. - -- [Microsoft Flow](https://flow.microsoft.com/). You can automatically [use and integrate your API](https://flow.microsoft.com/blog/integrating-custom-api/) into a high-level Microsoft Flow workflow, with no programming skills required. - -- [Microsoft PowerApps](https://powerapps.microsoft.com/). You can automatically consume your API from [PowerApps mobile apps](https://powerapps.microsoft.com/blog/register-and-use-custom-apis-in-powerapps/) built with [PowerApps Studio](https://powerapps.microsoft.com/build-powerapps/), with no programming skills required. - -- [Azure App Service Logic Apps](/azure/app-service-logic/app-service-logic-what-are-logic-apps). You can automatically [use and integrate your API into an Azure App Service Logic App](/azure/app-service-logic/app-service-logic-custom-hosted-api), with no programming skills required. - -**Ability to automatically generate API documentation**. When you create large-scale RESTful APIs, such as complex microservice-based applications, you need to handle many endpoints with different data models used in the request and response payloads. Having proper documentation and having a solid API explorer, as you get with Swagger, is key for the success of your API and adoption by developers. - -Swagger's metadata is what Microsoft Flow, PowerApps, and Azure Logic Apps use to understand how to use APIs and connect to them. - -There are several options to automate Swagger metadata generation for ASP.NET Core REST API applications, in the form of functional API help pages, based on *swagger-ui*. - -Probably the best know is [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle.AspNetCore), which is currently used in [eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers) and we'll cover in some detail in this guide but there's also the option to use [NSwag](https://github.com/RSuter/NSwag), which can generate Typescript and C\# API clients, as well as C\# controllers, from a Swagger or OpenAPI specification and even by scanning the .dll that contains the controllers, using [NSwagStudio](https://github.com/RSuter/NSwag/wiki/NSwagStudio). 
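
To try this in your own Web API project, the usual first step is to add the Swashbuckle metapackage, for example from the .NET CLI:

```console
dotnet add package Swashbuckle.AspNetCore
```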
- -### How to automate API Swagger metadata generation with the Swashbuckle NuGet package - -Generating Swagger metadata manually (in a JSON or YAML file) can be tedious work. However, you can automate API discovery of ASP.NET Web API services by using the [Swashbuckle NuGet package](https://aka.ms/swashbuckledotnetcore) to dynamically generate Swagger API metadata. - -Swashbuckle automatically generates Swagger metadata for your ASP.NET Web API projects. It supports ASP.NET Core Web API projects and the traditional ASP.NET Web API and any other flavor, such as Azure API App, Azure Mobile App, Azure Service Fabric microservices based on ASP.NET. It also supports plain Web API deployed on containers, as in for the reference application. - -Swashbuckle combines API Explorer and Swagger or [swagger-ui](https://github.com/swagger-api/swagger-ui) to provide a rich discovery and documentation experience for your API consumers. In addition to its Swagger metadata generator engine, Swashbuckle also contains an embedded version of swagger-ui, which it will automatically serve up once Swashbuckle is installed. - -This means you can complement your API with a nice discovery UI to help developers to use your API. It requires a small amount of code and maintenance because it is automatically generated, allowing you to focus on building your API. The result for the API Explorer looks like Figure 6-8. - -![Screenshot of Swagger API Explorer displaying eShopOContainers API.](./media/data-driven-crud-microservice/swagger-metadata-eshoponcontainers-catalog-microservice.png) - -**Figure 6-8**. Swashbuckle API Explorer based on Swagger metadata—eShopOnContainers catalog microservice - -The Swashbuckle generated Swagger UI API documentation includes all published actions. The API explorer is not the most important thing here. Once you have a Web API that can describe itself in Swagger metadata, your API can be used seamlessly from Swagger-based tools, including client proxy-class code generators that can target many platforms. For example, as mentioned, [AutoRest](https://github.com/Azure/AutoRest) automatically generates .NET client classes. But additional tools like [swagger-codegen](https://github.com/swagger-api/swagger-codegen) are also available, which allow code generation of API client libraries, server stubs, and documentation automatically. - -Currently, Swashbuckle consists of five internal NuGet packages under the high-level metapackage [Swashbuckle.AspNetCore](https://www.nuget.org/packages/Swashbuckle.AspNetCore) for ASP.NET Core applications. - -After you have installed these NuGet packages in your Web API project, you need to configure Swagger in the _Program.cs_ class, as in the following **simplified** code: - -```csharp -// Add framework services. - -builder.Services.AddSwaggerGen(options => -{ - options.DescribeAllEnumsAsStrings(); - options.SwaggerDoc("v1", new OpenApiInfo - { - Title = "eShopOnContainers - Catalog HTTP API", - Version = "v1", - Description = "The Catalog Microservice HTTP API. This is a Data-Driven/CRUD microservice sample" - }); -}); - -// Other startup code... 
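// Build the app once services are registered, then wire up the Swagger JSON
// endpoint and the development-time Swagger UI shown below:
var app = builder.Build();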
- -app.UseSwagger(); - -if (app.Environment.IsDevelopment()) -{ - app.UseSwaggerUI(c => - { - c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1"); - }); -} -``` - -Once this is done, you can start your application and browse the following Swagger JSON and UI endpoints using URLs like these: - -```console - http:///swagger/v1/swagger.json - - http:///swagger/ -``` - -You previously saw the generated UI created by Swashbuckle for a URL like `http:///swagger`. In Figure 6-9, you can also see how you can test any API method. - -![Screenshot of Swagger UI showing available testing tools.](./media/data-driven-crud-microservice/swashbuckle-ui-testing.png) - -**Figure 6-9**. Swashbuckle UI testing the Catalog/Items API method - -The Swagger UI API detail shows a sample of the response and can be used to execute the real API, which is great for developer discovery. To see the Swagger JSON metadata generated from the eShopOnContainers microservice (which is what the tools use underneath), make a you request `http:///swagger/v1/swagger.json` using the [Visual Studio Code: REST Client extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client). - -### Additional resources - -- **ASP.NET Web API Help Pages using Swagger** \ - [https://learn.microsoft.com/aspnet/core/tutorials/web-api-help-pages-using-swagger](/aspnet/core/tutorials/web-api-help-pages-using-swagger) - -- **Get started with Swashbuckle and ASP.NET Core** \ - [https://learn.microsoft.com/aspnet/core/tutorials/getting-started-with-swashbuckle](/aspnet/core/tutorials/getting-started-with-swashbuckle) - -- **Get started with NSwag and ASP.NET Core** \ - [https://learn.microsoft.com/aspnet/core/tutorials/getting-started-with-nswag](/aspnet/core/tutorials/getting-started-with-nswag) - -> [!div class="step-by-step"] -> [Previous](microservice-application-design.md) -> [Next](multi-container-applications-docker-compose.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/database-server-container.md b/docs/architecture/microservices/multi-container-microservice-net-applications/database-server-container.md deleted file mode 100644 index 024b587e177b5..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/database-server-container.md +++ /dev/null @@ -1,285 +0,0 @@ ---- -title: Use a database server running as a container -description: Understand the importance of using a database server running as a container only for development. Never for production. -ms.date: 01/13/2021 ---- -# Use a database server running as a container - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -You can have your databases (SQL Server, PostgreSQL, MySQL, etc.) on regular standalone servers, in on-premises clusters, or in PaaS services in the cloud like Azure SQL DB. However, for development and test environments, having your databases running as containers is convenient, because you don't have any external dependency and simply running the `docker-compose up` command starts the whole application. Having those databases as containers is also great for integration tests, because the database is started in the container and is always populated with the same sample data, so tests can be more predictable. 
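
For example, from the folder that contains the compose files, you can start everything with one command and later reset the database state so the next test run begins again from freshly seeded sample data:

```console
# Start the whole application, including its containerized databases
docker-compose up -d

# Tear everything down and discard the volumes so the next run
# starts from a clean, re-seeded database
docker-compose down -v
```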
- -## SQL Server running as a container with a microservice-related database - -In eShopOnContainers, there's a container named `sqldata`, as defined in the [docker-compose.yml](https://github.com/dotnet-architecture/eShopOnContainers/blob/main/src/docker-compose.yml) file, that runs a SQL Server for Linux instance with the SQL databases for all microservices that need one. - -A key point in microservices is that each microservice owns its related data, so it should have its own database. However, the databases can be anywhere. In this case, they are all in the same container to keep Docker memory requirements as low as possible. Keep in mind that this is a good-enough solution for development and, perhaps, testing but not for production. - -The SQL Server container in the sample application is configured with the following YAML code in the docker-compose.yml file, which is executed when you run `docker-compose up`. Note that the YAML code has consolidated configuration information from the generic docker-compose.yml file and the docker-compose.override.yml file. (Usually you would separate the environment settings from the base or static information related to the SQL Server image.) - -```yml - sqldata: - image: mcr.microsoft.com/mssql/server:2017-latest - environment: - - SA_PASSWORD=Pass@word - - ACCEPT_EULA=Y - ports: - - "5434:1433" -``` - -In a similar way, instead of using `docker-compose`, the following `docker run` command can run that container: - -```powershell -docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Pass@word' -p 5433:1433 -d mcr.microsoft.com/mssql/server:2017-latest -``` - -However, if you are deploying a multi-container application like eShopOnContainers, it is more convenient to use the `docker-compose up` command so that it deploys all the required containers for the application. - -When you start this SQL Server container for the first time, the container initializes SQL Server with the password that you provide. Once SQL Server is running as a container, you can update the database by connecting through any regular SQL connection, such as from SQL Server Management Studio, Visual Studio, or C\# code. - -The eShopOnContainers application initializes each microservice database with sample data by seeding it with data on startup, as explained in the following section. - -Having SQL Server running as a container is not just useful for a demo where you might not have access to an instance of SQL Server. As noted, it is also great for development and testing environments so that you can easily run integration tests starting from a clean SQL Server image and known data by seeding new sample data. 
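
As a quick sanity check, assuming the `sqldata` service name and SA password shown above and the sqlcmd tools that the 2017 image ships under `/opt/mssql-tools`, you can list the databases created by the microservices directly against the running container:

```console
docker-compose exec sqldata /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "Pass@word" -Q "SELECT name FROM sys.databases"
```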
- -### Additional resources - -- **Run the SQL Server Docker image on Linux, Mac, or Windows** \ - [https://learn.microsoft.com/sql/linux/sql-server-linux-setup-docker](/sql/linux/sql-server-linux-setup-docker) - -- **Connect and query SQL Server on Linux with sqlcmd** \ - [https://learn.microsoft.com/sql/linux/sql-server-linux-connect-and-query-sqlcmd](/sql/linux/sql-server-linux-connect-and-query-sqlcmd) - -## Seeding with test data on Web application startup - -To add data to the database when the application starts up, you can add code like the following to the `Main` method in the `Program` class of the Web API project: - -```csharp -public static int Main(string[] args) -{ - var configuration = GetConfiguration(); - - Log.Logger = CreateSerilogLogger(configuration); - - try - { - Log.Information("Configuring web host ({ApplicationContext})...", AppName); - var host = CreateHostBuilder(configuration, args); - - Log.Information("Applying migrations ({ApplicationContext})...", AppName); - host.MigrateDbContext((context, services) => - { - var env = services.GetService(); - var settings = services.GetService>(); - var logger = services.GetService>(); - - new CatalogContextSeed() - .SeedAsync(context, env, settings, logger) - .Wait(); - }) - .MigrateDbContext((_, __) => { }); - - Log.Information("Starting web host ({ApplicationContext})...", AppName); - host.Run(); - - return 0; - } - catch (Exception ex) - { - Log.Fatal(ex, "Program terminated unexpectedly ({ApplicationContext})!", AppName); - return 1; - } - finally - { - Log.CloseAndFlush(); - } -} -``` - -There's an important caveat when applying migrations and seeding a database during container startup. Since the database server might not be available for whatever reason, you must handle retries while waiting for the server to be available. This retry logic is handled by the `MigrateDbContext()` extension method, as shown in the following code: - -```csharp -public static IWebHost MigrateDbContext( - this IWebHost host, - Action seeder) - where TContext : DbContext -{ - var underK8s = host.IsInKubernetes(); - - using (var scope = host.Services.CreateScope()) - { - var services = scope.ServiceProvider; - - var logger = services.GetRequiredService>(); - - var context = services.GetService(); - - try - { - logger.LogInformation("Migrating database associated with context {DbContextName}", typeof(TContext).Name); - - if (underK8s) - { - InvokeSeeder(seeder, context, services); - } - else - { - var retry = Policy.Handle() - .WaitAndRetry(new TimeSpan[] - { - TimeSpan.FromSeconds(3), - TimeSpan.FromSeconds(5), - TimeSpan.FromSeconds(8), - }); - - //if the sql server container is not created on run docker compose this - //migration can't fail for network related exception. The retry options for DbContext only - //apply to transient exceptions - // Note that this is NOT applied when running some orchestrators (let the orchestrator to recreate the failing service) - retry.Execute(() => InvokeSeeder(seeder, context, services)); - } - - logger.LogInformation("Migrated database associated with context {DbContextName}", typeof(TContext).Name); - } - catch (Exception ex) - { - logger.LogError(ex, "An error occurred while migrating the database used on context {DbContextName}", typeof(TContext).Name); - if (underK8s) - { - throw; // Rethrow under k8s because we rely on k8s to re-run the pod - } - } - } - - return host; -} -``` - -The following code in the custom CatalogContextSeed class populates the data. 
- -```csharp -public class CatalogContextSeed -{ - public static async Task SeedAsync(IApplicationBuilder applicationBuilder) - { - var context = (CatalogContext)applicationBuilder - .ApplicationServices.GetService(typeof(CatalogContext)); - using (context) - { - context.Database.Migrate(); - if (!context.CatalogBrands.Any()) - { - context.CatalogBrands.AddRange( - GetPreconfiguredCatalogBrands()); - await context.SaveChangesAsync(); - } - if (!context.CatalogTypes.Any()) - { - context.CatalogTypes.AddRange( - GetPreconfiguredCatalogTypes()); - await context.SaveChangesAsync(); - } - } - } - - static IEnumerable GetPreconfiguredCatalogBrands() - { - return new List() - { - new CatalogBrand() { Brand = "Azure"}, - new CatalogBrand() { Brand = ".NET" }, - new CatalogBrand() { Brand = "Visual Studio" }, - new CatalogBrand() { Brand = "SQL Server" } - }; - } - - static IEnumerable GetPreconfiguredCatalogTypes() - { - return new List() - { - new CatalogType() { Type = "Mug"}, - new CatalogType() { Type = "T-Shirt" }, - new CatalogType() { Type = "Backpack" }, - new CatalogType() { Type = "USB Memory Stick" } - }; - } -} -``` - -When you run integration tests, having a way to generate data consistent with your integration tests is useful. Being able to create everything from scratch, including an instance of SQL Server running on a container, is great for test environments. - -## EF Core InMemory database versus SQL Server running as a container - -Another good choice when running tests is to use the Entity Framework InMemory database provider. You can specify that configuration in the ConfigureServices method of the Startup class in your Web API project: - -```csharp -public class Startup -{ - // Other Startup code ... - public void ConfigureServices(IServiceCollection services) - { - services.AddSingleton(Configuration); - // DbContext using an InMemory database provider - services.AddDbContext(opt => opt.UseInMemoryDatabase()); - //(Alternative: DbContext using a SQL Server provider - //services.AddDbContext(c => - //{ - // c.UseSqlServer(Configuration["ConnectionString"]); - // - //}); - } - - // Other Startup code ... -} -``` - -There is an important catch, though. The in-memory database does not support many constraints that are specific to a particular database. For instance, you might add a unique index on a column in your EF Core model and write a test against your in-memory database to check that it does not let you add a duplicate value. But when you are using the in-memory database, you cannot handle unique indexes on a column. Therefore, the in-memory database does not behave exactly the same as a real SQL Server database—it does not emulate database-specific constraints. - -Even so, an in-memory database is still useful for testing and prototyping. But if you want to create accurate integration tests that take into account the behavior of a specific database implementation, you need to use a real database like SQL Server. For that purpose, running SQL Server in a container is a great choice and more accurate than the EF Core InMemory database provider. - -## Using a Redis cache service running in a container - -You can run Redis on a container, especially for development and testing and for proof-of-concept scenarios. This scenario is convenient, because you can have all your dependencies running on containers—not just for your local development machines, but for your testing environments in your CI/CD pipelines. 
- -However, when you run Redis in production, it is better to look for a high-availability solution like Redis Microsoft Azure, which runs as a PaaS (Platform as a Service). In your code, you just need to change your connection strings. - -Redis provides a Docker image with Redis. That image is available from Docker Hub at this URL: - - - -You can directly run a Docker Redis container by executing the following Docker CLI command in your command prompt: - -```console -docker run --name some-redis -d redis -``` - -The Redis image includes expose:6379 (the port used by Redis), so standard container linking will make it automatically available to the linked containers. - -In eShopOnContainers, the `basket-api` microservice uses a Redis cache running as a container. That `basketdata` container is defined as part of the multi-container *docker-compose.yml* file, as shown in the following example: - -```yml -#docker-compose.yml file -#... - basketdata: - image: redis - expose: - - "6379" -``` - -This code in the docker-compose.yml defines a container named `basketdata` based on the redis image and publishing the port 6379 internally. This configuration means that it will only be accessible from other containers running within the Docker host. - -Finally, in the *docker-compose.override.yml* file, the `basket-api` microservice for the eShopOnContainers sample defines the connection string to use for that Redis container: - -```yml - basket-api: - environment: - # Other data ... - - ConnectionString=basketdata - - EventBusConnection=rabbitmq -``` - -As mentioned before, the name of the microservice `basketdata` is resolved by Docker's internal network DNS. - ->[!div class="step-by-step"] ->[Previous](multi-container-applications-docker-compose.md) ->[Next](integration-event-based-microservice-communications.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/implement-api-gateways-with-ocelot.md b/docs/architecture/microservices/multi-container-microservice-net-applications/implement-api-gateways-with-ocelot.md deleted file mode 100644 index 9d47f0dca5654..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/implement-api-gateways-with-ocelot.md +++ /dev/null @@ -1,581 +0,0 @@ ---- -title: Implementing API Gateways with Ocelot -description: Learn how to implement API Gateways with Ocelot and how to use Ocelot in a container-based environment. -ms.date: 06/23/2021 ---- - -# Implement API Gateways with Ocelot - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -> [!IMPORTANT] -> The reference microservice application [eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers) is currently using features provided by [Envoy](https://www.envoyproxy.io/) to implement the API Gateway instead of the earlier referenced [Ocelot](https://github.com/ThreeMammals/Ocelot). -> We made this design choice because of Envoy's built-in support for the WebSocket protocol, required by the new gRPC inter-service communications implemented in eShopOnContainers. -> However, we've retained this section in the guide so you can consider Ocelot as a simple, capable, and lightweight API Gateway suitable for production-grade scenarios. -> Also, latest Ocelot version contains a breaking change on its json schema. Consider using Ocelot < v16.0.0, or use the key Routes instead of ReRoutes. 
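
For reference, if you use Ocelot 16.0 or later, the configuration entries shown later in this section keep the same shape but sit under a `Routes` key instead of `ReRoutes`. A minimal sketch with placeholder values:

```json
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/{version}/{everything}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        { "Host": "catalog-api", "Port": 80 }
      ],
      "UpstreamPathTemplate": "/api/{version}/c/{everything}",
      "UpstreamHttpMethod": [ "GET", "POST", "PUT" ]
    }
  ],
  "GlobalConfiguration": {}
}
```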
- -## Architect and design your API Gateways - -The following architecture diagram shows how API Gateways were implemented with Ocelot in eShopOnContainers. - -![Diagram showing the eShopOnContainers architecture.](./media/implement-api-gateways-with-ocelot/eshoponcontainers-architecture.png) - -**Figure 6-28**. eShopOnContainers architecture with API Gateways - -That diagram shows how the whole application is deployed into a single Docker host or development PC with "Docker for Windows" or "Docker for Mac". However, deploying into any orchestrator would be similar, but any container in the diagram could be scaled out in the orchestrator. - -In addition, the infrastructure assets such as databases, cache, and message brokers should be offloaded from the orchestrator and deployed into high available systems for infrastructure, like Azure SQL Database, Azure Cosmos DB, Azure Redis, Azure Service Bus, or any HA clustering solution on-premises. - -As you can also notice in the diagram, having several API Gateways allows multiple development teams to be autonomous (in this case Marketing features vs. Shopping features) when developing and deploying their microservices plus their own related API Gateways. - -If you had a single monolithic API Gateway that would mean a single point to be updated by several development teams, which could couple all the microservices with a single part of the application. - -Going much further in the design, sometimes a fine-grained API Gateway can also be limited to a single business microservice depending on the chosen architecture. Having the API Gateway's boundaries dictated by the business or domain will help you to get a better design. - -For instance, fine granularity in the API Gateway tier can be especially useful for more advanced composite UI applications that are based on microservices, because the concept of a fine-grained API Gateway is similar to a UI composition service. - -We delve into more details in the previous section [Creating composite UI based on microservices](../architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md). - -As a key takeaway, for many medium- and large-size applications, using a custom-built API Gateway product is usually a good approach, but not as a single monolithic aggregator or unique central custom API Gateway unless that API Gateway allows multiple independent configuration areas for the several development teams creating autonomous microservices. - -### Sample microservices/containers to reroute through the API Gateways - -As an example, eShopOnContainers has around six internal microservice-types that have to be published through the API Gateways, as shown in the following image. - -![Screenshot of the Services folder showing its subfolders.](./media/implement-api-gateways-with-ocelot/eshoponcontainers-microservice-folders.png) - -**Figure 6-29**. Microservice folders in eShopOnContainers solution in Visual Studio - -About the Identity service, in the design it's left out of the API Gateway routing because it's the only cross-cutting concern in the system, although with Ocelot it's also possible to include it as part of the rerouting lists. - -All those services are currently implemented as ASP.NET Core Web API services, as you can tell from the code. Let's focus on one of the microservices like the Catalog microservice code. 
- -![Screenshot of Solution Explorer showing Catalog.API project contents.](./media/implement-api-gateways-with-ocelot/catalog-api-microservice-folders.png) - -**Figure 6-30**. Sample Web API microservice (Catalog microservice) - -You can see that the Catalog microservice is a typical ASP.NET Core Web API project with several controllers and methods like in the following code. - -```csharp -[HttpGet] -[Route("items/{id:int}")] -[ProducesResponseType((int)HttpStatusCode.BadRequest)] -[ProducesResponseType((int)HttpStatusCode.NotFound)] -[ProducesResponseType(typeof(CatalogItem),(int)HttpStatusCode.OK)] -public async Task GetItemById(int id) -{ - if (id <= 0) - { - return BadRequest(); - } - var item = await _catalogContext.CatalogItems. - SingleOrDefaultAsync(ci => ci.Id == id); - //… - - if (item != null) - { - return Ok(item); - } - return NotFound(); -} -``` - -The HTTP request will end up running that kind of C# code accessing the microservice database and any additional required action. - -Regarding the microservice URL, when the containers are deployed in your local development PC (local Docker host), each microservice's container always has an internal port (usually port 80) specified in its dockerfile, as in the following dockerfile: - -```dockerfile -FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base -WORKDIR /app -EXPOSE 80 -``` - -The port 80 shown in the code is internal within the Docker host, so it can't be reached by client apps. - -Client apps can access only the external ports (if any) published when deploying with `docker-compose`. - -Those external ports shouldn't be published when deploying to a production environment. For this specific reason, why you want to use the API Gateway, to avoid the direct communication between the client apps and the microservices. - -However, when developing, you want to access the microservice/container directly and run it through Swagger. That's why in eShopOnContainers, the external ports are still specified even when they won't be used by the API Gateway or the client apps. - -Here's an example of the `docker-compose.override.yml` file for the Catalog microservice: - -```yml -catalog-api: - environment: - - ASPNETCORE_ENVIRONMENT=Development - - ASPNETCORE_URLS=http://0.0.0.0:80 - - ConnectionString=YOUR_VALUE - - ... Other Environment Variables - ports: - - "5101:80" # Important: In a production environment you should remove the external port (5101) kept here for microservice debugging purposes. - # The API Gateway redirects and access through the internal port (80). -``` - -You can see how in the docker-compose.override.yml configuration the internal port for the Catalog container is port 80, but the port for external access is 5101. But this port shouldn't be used by the application when using an API Gateway, only to debug, run, and test just the Catalog microservice. - -Normally, you won't be deploying with docker-compose into a production environment because the right production deployment environment for microservices is an orchestrator like Kubernetes or Service Fabric. When deploying to those environments you use different configuration files where you won't publish directly any external port for the microservices but, you'll always use the reverse proxy from the API Gateway. - -Run the catalog microservice in your local Docker host. 
Either run the full eShopOnContainers solution from Visual Studio (it runs all the services in the docker-compose files), or start the Catalog microservice with the following docker-compose command in CMD or PowerShell positioned at the folder where the `docker-compose.yml` and `docker-compose.override.yml` are placed. - -```console -docker-compose run --service-ports catalog-api -``` - -This command only runs the catalog-api service container plus dependencies that are specified in the docker-compose.yml. In this case, the SQL Server container and RabbitMQ container. - -Then, you can directly access the Catalog microservice and see its methods through the Swagger UI accessing directly through that "external" port, in this case `http://host.docker.internal:5101/swagger`: - -![Screenshot of Swagger UI showing the Catalog.API REST API.](./media/implement-api-gateways-with-ocelot/test-catalog-microservice.png) - -**Figure 6-31**. Testing the Catalog microservice with its Swagger UI - -At this point, you could set a breakpoint in C# code in Visual Studio, test the microservice with the methods exposed in Swagger UI, and finally clean-up everything with the `docker-compose down` command. - -However, direct-access communication to the microservice, in this case through the external port 5101, is precisely what you want to avoid in your application. And you can avoid that by setting the additional level of indirection of the API Gateway (Ocelot, in this case). That way, the client app won't directly access the microservice. - -## Implementing your API Gateways with Ocelot - -Ocelot is basically a set of middleware that you can apply in a specific order. - -Ocelot is designed to work with ASP.NET Core only. The latest version of the package is 18.0 which targets .NET 6 and hence is not suitable for .NET Framework applications. - -You install Ocelot and its dependencies in your ASP.NET Core project with [Ocelot's NuGet package](https://www.nuget.org/packages/Ocelot/), from Visual Studio. - -```powershell -Install-Package Ocelot -``` - -In eShopOnContainers, its API Gateway implementation is a simple ASP.NET Core WebHost project, and Ocelot’s middleware handles all the API Gateway features, as shown in the following image: - -![Screenshot of Solution Explorer showing Ocelot API gateway project.](./media/implement-api-gateways-with-ocelot/ocelotapigw-base-project.png) - -**Figure 6-32**. The OcelotApiGw base project in eShopOnContainers - -This ASP.NET Core WebHost project is built with two simple files: `Program.cs` and `Startup.cs`. - -The Program.cs just needs to create and configure the typical ASP.NET Core BuildWebHost. - -```csharp -namespace OcelotApiGw -{ - public class Program - { - public static void Main(string[] args) - { - BuildWebHost(args).Run(); - } - - public static IWebHost BuildWebHost(string[] args) - { - var builder = WebHost.CreateDefaultBuilder(args); - - builder.ConfigureServices(s => s.AddSingleton(builder)) - .ConfigureAppConfiguration( - ic => ic.AddJsonFile(Path.Combine("configuration", - "configuration.json"))) - .UseStartup(); - var host = builder.Build(); - return host; - } - } -} -``` - -The important point here for Ocelot is the `configuration.json` file that you must provide to the builder through the `AddJsonFile()` method. That `configuration.json` is where you specify all the API Gateway ReRoutes, meaning the external endpoints with specific ports and the correlated internal endpoints, usually using different ports. 
-
-```json
-{
-    "ReRoutes": [],
-    "GlobalConfiguration": {}
-}
-```
-
-There are two sections to the configuration: an array of ReRoutes and a GlobalConfiguration. The ReRoutes are the objects that tell Ocelot how to treat an upstream request. The GlobalConfiguration allows overrides of ReRoute-specific settings, which is useful if you don't want to manage lots of ReRoute-specific settings.
-
-Here's a simplified example of a [ReRoute configuration file](https://github.com/dotnet-architecture/eShopOnContainers/blob/main/src/ApiGateways/Mobile.Bff.Shopping/apigw/configuration.json) from one of the API Gateways in eShopOnContainers.
-
-```json
-{
-  "ReRoutes": [
-    {
-      "DownstreamPathTemplate": "/api/{version}/{everything}",
-      "DownstreamScheme": "http",
-      "DownstreamHostAndPorts": [
-        {
-          "Host": "catalog-api",
-          "Port": 80
-        }
-      ],
-      "UpstreamPathTemplate": "/api/{version}/c/{everything}",
-      "UpstreamHttpMethod": [ "POST", "PUT", "GET" ]
-    },
-    {
-      "DownstreamPathTemplate": "/api/{version}/{everything}",
-      "DownstreamScheme": "http",
-      "DownstreamHostAndPorts": [
-        {
-          "Host": "basket-api",
-          "Port": 80
-        }
-      ],
-      "UpstreamPathTemplate": "/api/{version}/b/{everything}",
-      "UpstreamHttpMethod": [ "POST", "PUT", "GET" ],
-      "AuthenticationOptions": {
-        "AuthenticationProviderKey": "IdentityApiKey",
-        "AllowedScopes": []
-      }
-    }
-  ],
-  "GlobalConfiguration": {
-    "RequestIdKey": "OcRequestId",
-    "AdministrationPath": "/administration"
-  }
-}
-```
-
-The main functionality of an Ocelot API Gateway is to take incoming HTTP requests and forward them on to a downstream service, currently as another HTTP request. Ocelot describes the routing of one request to another as a ReRoute.
-
-For instance, let's focus on one of the ReRoutes in the configuration.json from above, the configuration for the Basket microservice.
-
-```json
-{
-  "DownstreamPathTemplate": "/api/{version}/{everything}",
-  "DownstreamScheme": "http",
-  "DownstreamHostAndPorts": [
-    {
-      "Host": "basket-api",
-      "Port": 80
-    }
-  ],
-  "UpstreamPathTemplate": "/api/{version}/b/{everything}",
-  "UpstreamHttpMethod": [ "POST", "PUT", "GET" ],
-  "AuthenticationOptions": {
-    "AuthenticationProviderKey": "IdentityApiKey",
-    "AllowedScopes": []
-  }
-}
-```
-
-The DownstreamPathTemplate, DownstreamScheme, and DownstreamHostAndPorts make up the internal microservice URL that this request will be forwarded to.
-
-The port is the internal port used by the service. When using containers, it's the port specified in the service's dockerfile.
-
-The `Host` is a service name that depends on the service name resolution you are using. When using docker-compose, the service names are resolved by the Docker host, using the service names defined in the docker-compose files. If you're using an orchestrator like Kubernetes or Service Fabric, that name should be resolved by the DNS or name resolution provided by each orchestrator.
-
-DownstreamHostAndPorts is an array that contains the host and port of any downstream services that you wish to forward requests to. Usually this configuration will just contain one entry, but sometimes you might want to load balance requests to your downstream services, and Ocelot lets you add more than one entry and then select a load balancer. However, if you're using Azure and an orchestrator, it's probably a better idea to load balance with the cloud and orchestrator infrastructure.
-
-The UpstreamPathTemplate is the URL that Ocelot will use to identify which DownstreamPathTemplate to use for a given request from the client.
Finally, the UpstreamHttpMethod is used so Ocelot can distinguish between different requests (GET, POST, PUT) to the same URL. - -At this point, you could have a single Ocelot API Gateway (ASP.NET Core WebHost) using one or [multiple merged configuration.json files](https://ocelot.readthedocs.io/en/latest/features/configuration.html#merging-configuration-files) or you can also store the [configuration in a Consul KV store](https://ocelot.readthedocs.io/en/latest/features/configuration.html#store-configuration-in-consul). - -But as introduced in the architecture and design sections, if you really want to have autonomous microservices, it might be better to split that single monolithic API Gateway into multiple API Gateways and/or BFF (Backend for Frontend). For that purpose, let's see how to implement that approach with Docker containers. - -### Using a single Docker container image to run multiple different API Gateway / BFF container types - -In eShopOnContainers, we're using a single Docker container image with the Ocelot API Gateway but then, at run time, we create different services/containers for each type of API-Gateway/BFF by providing a different configuration.json file, using a docker volume to access a different PC folder for each service. - -![Diagram of a single Ocelot gateway Docker image for all API gateways.](./media/implement-api-gateways-with-ocelot/reusing-single-ocelot-docker-image.png) - -**Figure 6-33**. Reusing a single Ocelot Docker image across multiple API Gateway types - -In eShopOnContainers, the "Generic Ocelot API Gateway Docker Image" is created with the project named 'OcelotApiGw' and the image name "eshop/ocelotapigw" that is specified in the docker-compose.yml file. Then, when deploying to Docker, there will be four API-Gateway containers created from that same Docker image, as shown in the following extract from the docker-compose.yml file. - -```yml - mobileshoppingapigw: - image: eshop/ocelotapigw:${TAG:-latest} - build: - context: . - dockerfile: src/ApiGateways/ApiGw-Base/Dockerfile - - mobilemarketingapigw: - image: eshop/ocelotapigw:${TAG:-latest} - build: - context: . - dockerfile: src/ApiGateways/ApiGw-Base/Dockerfile - - webshoppingapigw: - image: eshop/ocelotapigw:${TAG:-latest} - build: - context: . - dockerfile: src/ApiGateways/ApiGw-Base/Dockerfile - - webmarketingapigw: - image: eshop/ocelotapigw:${TAG:-latest} - build: - context: . - dockerfile: src/ApiGateways/ApiGw-Base/Dockerfile -``` - -Additionally, as you can see in the following docker-compose.override.yml file, the only difference between those API Gateway containers is the Ocelot configuration file, which is different for each service container and it's specified at run time through a Docker volume. 
-
-```yml
-mobileshoppingapigw:
-  environment:
-    - ASPNETCORE_ENVIRONMENT=Development
-    - IdentityUrl=http://identity-api
-  ports:
-    - "5200:80"
-  volumes:
-    - ./src/ApiGateways/Mobile.Bff.Shopping/apigw:/app/configuration
-
-mobilemarketingapigw:
-  environment:
-    - ASPNETCORE_ENVIRONMENT=Development
-    - IdentityUrl=http://identity-api
-  ports:
-    - "5201:80"
-  volumes:
-    - ./src/ApiGateways/Mobile.Bff.Marketing/apigw:/app/configuration
-
-webshoppingapigw:
-  environment:
-    - ASPNETCORE_ENVIRONMENT=Development
-    - IdentityUrl=http://identity-api
-  ports:
-    - "5202:80"
-  volumes:
-    - ./src/ApiGateways/Web.Bff.Shopping/apigw:/app/configuration
-
-webmarketingapigw:
-  environment:
-    - ASPNETCORE_ENVIRONMENT=Development
-    - IdentityUrl=http://identity-api
-  ports:
-    - "5203:80"
-  volumes:
-    - ./src/ApiGateways/Web.Bff.Marketing/apigw:/app/configuration
-```
-
-Because of that configuration, and as shown in Solution Explorer below, the only file needed to define each specific business/BFF API Gateway is a configuration.json file, because the four API Gateways are based on the same Docker image.
-
-![Screenshot showing all API gateways with configuration.json files.](./media/implement-api-gateways-with-ocelot/ocelot-configuration-files.png)
-
-**Figure 6-34**. The only file needed to define each API Gateway / BFF with Ocelot is a configuration file
-
-By splitting the API Gateway into multiple API Gateways, different development teams focusing on different subsets of microservices can manage their own API Gateways by using independent Ocelot configuration files, while still reusing the same Ocelot Docker image.
-
-Now, if you run eShopOnContainers with the API Gateways (included by default in Visual Studio when opening the eShopOnContainers-ServicesAndWebApps.sln solution, or when running "docker-compose up"), the following sample routes will work.
-
-For instance, when visiting the upstream URL `http://host.docker.internal:5202/api/v1/c/catalog/items/2/` served by the webshoppingapigw API Gateway, you get the same result as from the internal downstream URL `http://catalog-api/api/v1/catalog/items/2` within the Docker host, as in the following browser.
-
-![Screenshot of a browser showing a response going through API gateway.](./media/implement-api-gateways-with-ocelot/access-microservice-through-url.png)
-
-**Figure 6-35**. Accessing a microservice through a URL provided by the API Gateway
-
-For testing or debugging purposes, if you want to access the Catalog Docker container directly (only in the development environment) without passing through the API Gateway, then because 'catalog-api' is a DNS name internal to the Docker host (service discovery is handled by the docker-compose service names), the only way to access the container directly is through the external port published in the docker-compose.override.yml, which is provided only for development tests, such as `http://host.docker.internal:5101/api/v1/Catalog/items/1` in the following browser.
-
-![Screenshot of a browser showing a direct response to the Catalog.api.](./media/implement-api-gateways-with-ocelot/direct-access-microservice-testing.png)
-
-**Figure 6-36**. Direct access to a microservice for testing purposes
-
-But the application is configured so that it accesses all the microservices through the API Gateways, not through the direct port "shortcuts".
-
-### The Gateway aggregation pattern in eShopOnContainers
-
-As introduced previously, a flexible way to implement request aggregation is with custom services, by code, as in the sketch that follows.
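The following is a minimal sketch of such an aggregation-by-code service: an ASP.NET Core controller that fans out to two downstream microservices over HTTP and composes a single response for the client. The controller name, the downstream routes, and the named `HttpClient` registrations are illustrative assumptions, not the actual eShopOnContainers aggregator code.

```csharp
// Illustrative aggregation-by-code sketch (hypothetical names and routes).
// The aggregator makes the downstream calls so the client app needs one round trip.
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[Route("api/v1/[controller]")]
[ApiController]
public class ShoppingSummaryController : ControllerBase
{
    private readonly IHttpClientFactory _httpClientFactory;

    public ShoppingSummaryController(IHttpClientFactory httpClientFactory)
        => _httpClientFactory = httpClientFactory;

    [HttpGet("{basketId}")]
    public async Task<IActionResult> GetSummary(string basketId)
    {
        // Named clients assumed to be registered with base addresses such as
        // http://basket-api and http://catalog-api (resolvable service names).
        var basketClient = _httpClientFactory.CreateClient("basket");
        var catalogClient = _httpClientFactory.CreateClient("catalog");

        var basketJson = await basketClient.GetStringAsync($"/api/v1/basket/{basketId}");
        var catalogJson = await catalogClient.GetStringAsync("/api/v1/catalog/items?pageSize=10");

        // Compose a single payload from the two downstream responses.
        return Ok(new
        {
            basket = JsonSerializer.Deserialize<JsonElement>(basketJson),
            catalog = JsonSerializer.Deserialize<JsonElement>(catalogJson)
        });
    }
}
```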
-You could also implement request aggregation with the [Request Aggregation feature in Ocelot](https://ocelot.readthedocs.io/en/latest/features/requestaggregation.html#request-aggregation), but it might not be as flexible as you need. Therefore, the selected way to implement aggregation in eShopOnContainers is with an explicit ASP.NET Core Web API service for each aggregator.
-
-With that approach, the API Gateway composition is in reality a bit more extended when you consider the aggregator services, which are not shown in the simplified global architecture diagram shown previously.
-
-In the following diagram, you can also see how the aggregator services work with their related API Gateways.
-
-![Diagram of eShopOnContainers architecture showing aggregator services.](./media/implement-api-gateways-with-ocelot/eshoponcontainers-architecture-aggregator-services.png)
-
-**Figure 6-37**. eShopOnContainers architecture with aggregator services
-
-Zooming in further on the "Shopping" business area in the following image, you can see that chattiness between the client apps and the microservices is reduced when using the aggregator services in the API Gateways.
-
-![Diagram showing eShopOnContainers architecture zoom in.](./media/implement-api-gateways-with-ocelot/zoom-in-vision-aggregator-services.png)
-
-**Figure 6-38**. Zoom-in view of the aggregator services
-
-You can see that, when the diagram shows the possible requests coming directly from the API Gateways, the communication can get complex. When you use the aggregator pattern instead, the arrows in blue show how the communication is simplified from a client app perspective. This pattern not only helps to reduce the chattiness and latency in the communication, it also improves the user experience significantly for the remote apps (mobile and SPA apps).
-
-In the case of the "Marketing" business area and its microservices, the use case is simple, so there was no need to use aggregators, but they could be added if needed.
-
-### Authentication and authorization in Ocelot API Gateways
-
-In an Ocelot API Gateway, you can sit the authentication service, such as an ASP.NET Core Web API service using [IdentityServer](../../cloud-native/identity-server.md) to provide the auth token, either outside or inside the API Gateway.
-
-Since eShopOnContainers is using multiple API Gateways with boundaries based on BFFs and business areas, the Identity/Auth service is left out of the API Gateways, as highlighted in yellow in the following diagram.
-
-![Diagram showing Identity microservice beneath the API gateway.](./media/implement-api-gateways-with-ocelot/eshoponcontainers-identity-service-position.png)
-
-**Figure 6-39**. Position of the Identity service in eShopOnContainers
-
-However, Ocelot also supports sitting the Identity/Auth microservice within the API Gateway boundary, as in this other diagram.
-
-![Diagram showing authentication in an Ocelot API Gateway.](./media/implement-api-gateways-with-ocelot/ocelot-authentication.png)
-
-**Figure 6-40**. Authentication in Ocelot
-
-As the previous diagram shows, when the Identity microservice is beneath the API Gateway (AG): 1) The AG requests an auth token from the Identity microservice, 2) the Identity microservice returns the token to the AG, 3-4) the AG makes requests to the microservices using the auth token.
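From a client app's perspective, calling a protected route through the gateway amounts to attaching that token as a bearer credential. The following is a minimal sketch of such a call; the gateway URL, the route, and how the token was obtained are illustrative assumptions.

```csharp
// Minimal sketch of a client calling a protected ReRoute through the API Gateway.
// The gateway address and the way the access token was acquired are assumptions.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class GatewayClientSample
{
    public static async Task<string> GetBasketAsync(string accessToken)
    {
        using var client = new HttpClient
        {
            BaseAddress = new Uri("http://host.docker.internal:5202")
        };

        // The token previously issued by the Identity microservice travels as a Bearer credential.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Ocelot validates the token for the ReRoute (AuthenticationProviderKey) and,
        // if valid, forwards the request to the downstream basket-api service.
        return await client.GetStringAsync("/api/v1/b/basket/1");
    }
}
```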
-Because the eShopOnContainers application has split the API Gateway into multiple BFF (Backend for Frontend) and business-area API Gateways, another option would have been to create an additional API Gateway for cross-cutting concerns. That choice would be fair in a more complex microservice-based architecture with multiple cross-cutting-concern microservices. Since there's only one cross-cutting concern in eShopOnContainers, it was decided to just handle the security service outside the API Gateway realm, for simplicity's sake.
-
-In any case, if the app is secured at the API Gateway level, the authentication module of the Ocelot API Gateway is visited first when trying to use any secured microservice. That redirects the HTTP request to the Identity or auth microservice to get the access token, so you can then visit the protected services with the access_token.
-
-The way you secure any service with authentication at the API Gateway level is by setting the AuthenticationProviderKey in its related settings in the configuration.json file.
-
-```json
-{
-  "DownstreamPathTemplate": "/api/{version}/{everything}",
-  "DownstreamScheme": "http",
-  "DownstreamHostAndPorts": [
-    {
-      "Host": "basket-api",
-      "Port": 80
-    }
-  ],
-  "UpstreamPathTemplate": "/api/{version}/b/{everything}",
-  "UpstreamHttpMethod": [],
-  "AuthenticationOptions": {
-    "AuthenticationProviderKey": "IdentityApiKey",
-    "AllowedScopes": []
-  }
-}
-```
-
-When Ocelot runs, it looks at the ReRoutes' AuthenticationOptions.AuthenticationProviderKey and checks that there is an authentication provider registered with the given key. If there isn't, Ocelot won't start up. If there is, the ReRoute uses that provider when it executes.
-
-Because the Ocelot WebHost is configured with `authenticationProviderKey = "IdentityApiKey"`, the gateway requires authentication for any ReRoute that uses that key whenever a request arrives without a valid auth token.
-
-```csharp
-namespace OcelotApiGw
-{
-    public class Startup
-    {
-        private readonly IConfiguration _cfg;
-
-        public Startup(IConfiguration configuration) => _cfg = configuration;
-
-        public void ConfigureServices(IServiceCollection services)
-        {
-            var identityUrl = _cfg.GetValue<string>("IdentityUrl");
-            var authenticationProviderKey = "IdentityApiKey";
-            //…
-            services.AddAuthentication()
-                .AddJwtBearer(authenticationProviderKey, x =>
-                {
-                    x.Authority = identityUrl;
-                    x.RequireHttpsMetadata = false;
-                    x.TokenValidationParameters = new Microsoft.IdentityModel.Tokens.TokenValidationParameters()
-                    {
-                        ValidAudiences = new[] { "orders", "basket", "locations", "marketing", "mobileshoppingagg", "webshoppingagg" }
-                    };
-                });
-            //...
-        }
-    }
-}
-```
-
-Then, you also need to set authorization with the [Authorize] attribute on any resource to be accessed, like the microservice controllers, such as in the following Basket microservice controller.
-
-```csharp
-namespace Microsoft.eShopOnContainers.Services.Basket.API.Controllers
-{
-    [Route("api/v1/[controller]")]
-    [Authorize]
-    public class BasketController : Controller
-    {
-        //...
-    }
-}
-```
-
-The ValidAudiences, such as "basket", are correlated with the audience defined in each microservice with `AddJwtBearer()` in the ConfigureServices() method of the Startup class, as in the code below.
-
-```csharp
-// prevent from mapping "sub" claim to nameidentifier.
-JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
-
-var identityUrl = Configuration.GetValue<string>("IdentityUrl");
-
-services.AddAuthentication(options =>
-{
-    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
-    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
-
-}).AddJwtBearer(options =>
-{
-    options.Authority = identityUrl;
-    options.RequireHttpsMetadata = false;
-    options.Audience = "basket";
-});
-```
-
-If you try to access any secured microservice, like the Basket microservice, with a ReRoute URL based on the API Gateway, such as `http://host.docker.internal:5202/api/v1/b/basket/1`, you'll get a 401 Unauthorized response unless you provide a valid token. On the other hand, if the request is authenticated, Ocelot invokes whatever downstream scheme is associated with the ReRoute (the internal microservice URL).
-
-**Authorization at Ocelot's ReRoutes tier.** Ocelot supports claims-based authorization evaluated after authentication. You set the authorization at a route level by adding the following lines to the ReRoute configuration.
-
-```json
-"RouteClaimsRequirement": {
-    "UserType": "employee"
-}
-```
-
-In that example, when the authorization middleware is called, Ocelot checks whether the user has the claim type 'UserType' in the token and whether the value of that claim is 'employee'. If it isn't, then the user won't be authorized and the response will be 403 Forbidden.
-
-## Using Kubernetes Ingress plus Ocelot API Gateways
-
-When using Kubernetes (like in an Azure Kubernetes Service cluster), you usually unify all the HTTP requests through the [Kubernetes Ingress tier](https://kubernetes.io/docs/concepts/services-networking/ingress/) based on *Nginx*.
-
-In Kubernetes, if you don't use any ingress approach, then your services and pods have IPs that are only routable by the cluster network.
-
-But if you use an ingress approach, you'll have a middle tier between the Internet and your services (including your API Gateways), acting as a reverse proxy.
-
-As a definition, an Ingress is a collection of rules that allow inbound connections to reach the cluster services. An ingress can be configured to give services externally reachable URLs, load balance traffic, terminate SSL, and more. Users request ingress by POSTing the Ingress resource to the API server.
-
-In eShopOnContainers, when developing locally and using just your development machine as the Docker host, you are not using any ingress, only the multiple API Gateways.
-
-However, when targeting a "production" environment based on Kubernetes, eShopOnContainers uses an ingress in front of the API Gateways. That way, the clients still call the same base URL, but the requests are routed to multiple API Gateways or BFFs.
-
-API Gateways are front ends, or façades, surfacing only the services, but not the web applications, which are usually out of their scope. In addition, the API Gateways might hide certain internal microservices.
-
-The ingress, however, just redirects HTTP requests; it doesn't try to hide any microservice or web app.
-
-Having an Nginx ingress tier in Kubernetes in front of the web applications plus the several Ocelot API Gateways / BFFs is the ideal architecture, as shown in the following diagram.
-
-![A diagram showing how an ingress tier fits into the AKS environment.](./media/implement-api-gateways-with-ocelot/eshoponcontainer-ingress-tier.png)
-
-**Figure 6-41**.
The ingress tier in eShopOnContainers when deployed into Kubernetes - -A Kubernetes Ingress acts as a reverse proxy for all traffic to the app, including the web applications, that are out of the Api gateway scope. When you deploy eShopOnContainers into Kubernetes, it exposes just a few services or endpoints via _ingress_, basically the following list of postfixes on the URLs: - -- `/` for the client SPA web application -- `/webmvc` for the client MVC web application -- `/webstatus` for the client web app showing the status/healthchecks -- `/webshoppingapigw` for the web BFF and shopping business processes -- `/webmarketingapigw` for the web BFF and marketing business processes -- `/mobileshoppingapigw` for the mobile BFF and shopping business processes -- `/mobilemarketingapigw` for the mobile BFF and marketing business processes - -When deploying to Kubernetes, each Ocelot API Gateway is using a different "configuration.json" file for each _pod_ running the API Gateways. Those "configuration.json" files are provided by mounting (originally with the deploy.ps1 script) a volume created based on a Kubernetes _config map_ named ‘ocelot'. Each container mounts its related configuration file in the container's folder named `/app/configuration`. - -In the source code files of eShopOnContainers, the original "configuration.json" files can be found within the `k8s/ocelot/` folder. There's one file for each BFF/APIGateway. - -## Additional cross-cutting features in an Ocelot API Gateway - -There are other important features to research and use, when using an Ocelot API Gateway, described in the following links. - -- **Service discovery in the client side integrating Ocelot with Consul or Eureka** \ - - -- **Caching at the API Gateway tier** \ - - -- **Logging at the API Gateway tier** \ - - -- **Quality of Service (Retries and Circuit breakers) at the API Gateway tier** \ - - -- **Rate limiting** \ - [https://ocelot.readthedocs.io/en/latest/features/ratelimiting.html](https://ocelot.readthedocs.io/en/latest/features/ratelimiting.html ) - -- **Swagger for Ocelot** \ - [https://github.com/Burgyn/MMLib.SwaggerForOcelot](https://github.com/Burgyn/MMLib.SwaggerForOcelot) - -> [!div class="step-by-step"] -> [Previous](background-tasks-with-ihostedservice.md) -> [Next](../microservice-ddd-cqrs-patterns/index.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/index.md b/docs/architecture/microservices/multi-container-microservice-net-applications/index.md deleted file mode 100644 index 29b6087700b03..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/index.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Designing and Developing Multi Container and Microservice Based .NET Applications -description: .NET Microservices Architecture for Containerized .NET Applications | Understand the external architecture for Designing and Developing Multi Container and Microservice Based .NET Applications. -ms.date: 10/02/2018 ---- -# Designing and Developing Multi-Container and Microservice-Based .NET Applications - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -*Developing containerized microservice applications means you are building multi-container applications. However, a multi-container application could also be simpler—for example, a three-tier application—and might not be built using a microservice architecture.* - -Earlier we raised the question "Is Docker necessary when building a microservice architecture?" 
The answer is a clear no. Docker is an enabler and can provide significant benefits, but containers and Docker are not a hard requirement for microservices. As an example, you could create a microservices-based application with or without Docker when using Azure Service Fabric, which supports microservices running as simple processes or as Docker containers. - -However, if you know how to design and develop a microservices-based application that is also based on Docker containers, you will be able to design and develop any other, simpler application model. For example, you might design a three-tier application that also requires a multi-container approach. Because of that, and because microservice architectures are an important trend within the container world, this section focuses on a microservice architecture implementation using Docker containers. - ->[!div class="step-by-step"] ->[Previous](../docker-application-development-process/docker-app-development-workflow.md) ->[Next](microservice-application-design.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/integration-event-based-microservice-communications.md b/docs/architecture/microservices/multi-container-microservice-net-applications/integration-event-based-microservice-communications.md deleted file mode 100644 index 6a41e7ef5ef6a..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/integration-event-based-microservice-communications.md +++ /dev/null @@ -1,142 +0,0 @@ ---- -title: Implementing event-based communication between microservices (integration events) -description: .NET Microservices Architecture for Containerized .NET Applications | Understand integration events to implement event-based communication between microservices. -ms.date: 01/13/2021 ---- - -# Implementing event-based communication between microservices (integration events) - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -As described earlier, when you use [event-based communication](/azure/architecture/guide/architecture-styles/event-driven), a [microservice](/azure/architecture/microservices/) publishes an event when something notable happens, such as when it updates a business entity. Other microservices subscribe to those events. When a microservice receives an event, it can update its own business entities, which might lead to more events being published. This is the essence of the eventual consistency concept. This [publish/subscribe](/azure/architecture/patterns/publisher-subscriber) system is usually performed by using an implementation of an event bus. The event bus can be designed as an interface with the API needed to subscribe and unsubscribe to events and to publish events. It can also have one or more implementations based on any inter-process or messaging communication, such as a messaging queue or a service bus that supports asynchronous communication and a publish/subscribe model. - -You can use events to implement business transactions that span multiple services, which give you eventual consistency between those services. An eventually consistent transaction consists of a series of distributed actions. At each action, the microservice updates a business entity and publishes an event that triggers the next action. 
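A minimal sketch of one of those actions follows: persist the local change, then publish an integration event so that other microservices can react. `CatalogContext`, `CatalogItem`, `IEventBus`, and `ProductPriceChangedIntegrationEvent` are illustrative placeholders standing in for your own microservice's types; this is not the eShopOnContainers code.

```csharp
// One step of an eventually consistent transaction (illustrative placeholder types).
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class CatalogPriceUpdater
{
    private readonly CatalogContext _catalogContext;
    private readonly IEventBus _eventBus;

    public CatalogPriceUpdater(CatalogContext catalogContext, IEventBus eventBus)
    {
        _catalogContext = catalogContext;
        _eventBus = eventBus;
    }

    public async Task UpdatePriceAsync(int productId, decimal newPrice)
    {
        var item = await _catalogContext.CatalogItems.SingleAsync(ci => ci.Id == productId);
        var oldPrice = item.Price;
        item.Price = newPrice;

        // 1) Update the business entity in this microservice's own database.
        await _catalogContext.SaveChangesAsync();

        // 2) Publish the event so subscribed microservices (for example, Basket) update their own data.
        _eventBus.Publish(new ProductPriceChangedIntegrationEvent(productId, newPrice, oldPrice));
    }
}
```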
-Be aware that transactions do not span the underlying persistence store and the event bus, so [idempotence needs to be handled](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-data-platform#idempotent-message-processing). Figure 6-18 below shows a PriceUpdated event published through an event bus, so the price update is propagated to the Basket and other microservices.
-
-![Diagram of asynchronous event-driven communication with an event bus.](./media/integration-event-based-microservice-communications/event-driven-communication.png)
-
-**Figure 6-18**. Event-driven communication based on an event bus
-
-This section describes how you can implement this type of communication with .NET by using a generic event bus interface, as shown in Figure 6-18. There are multiple potential implementations, each using a different technology or infrastructure such as RabbitMQ, Azure Service Bus, or any other third-party open-source or commercial service bus.
-
-## Using message brokers and service buses for production systems
-
-As noted in the architecture section, you can choose from multiple messaging technologies for implementing your abstract event bus. But these technologies are at different levels. For instance, RabbitMQ, a messaging broker transport, is at a lower level than commercial products like Azure Service Bus, NServiceBus, MassTransit, or Brighter. Most of these products can work on top of either RabbitMQ or Azure Service Bus. Your choice of product depends on how many features and how much out-of-the-box scalability you need for your application.
-
-For implementing just an event bus proof-of-concept for your development environment, as in the eShopOnContainers sample, a simple implementation on top of [RabbitMQ](https://www.rabbitmq.com/) running as a container might be enough. But for mission-critical and production systems that need high scalability, you might want to evaluate and use [Azure Service Bus](/azure/service-bus-messaging/).
-
-If you require high-level abstractions and richer features like [Sagas](https://docs.particular.net/nservicebus/sagas/) for long-running processes that make distributed development easier, other commercial and open-source service buses like [NServiceBus](/azure/service-bus-messaging/build-message-driven-apps-nservicebus), [MassTransit](https://masstransit.io/), and [Brighter](https://github.com/BrighterCommand/Brighter) are worth evaluating. In this case, the abstractions and API to use would usually be directly the ones provided by those high-level service buses instead of your own abstractions (like the [simple event bus abstractions provided in eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/BuildingBlocks/EventBus/EventBus/Abstractions/IEventBus.cs)). For that matter, you can research the [forked eShopOnContainers using NServiceBus](https://go.particular.net/eShopOnContainers) (an additional derived sample implemented by Particular Software).
-
-Of course, you could always build your own service bus features on top of lower-level technologies like RabbitMQ and Docker, but the work needed to "reinvent the wheel" might be too costly for a custom enterprise application.
-
-To reiterate: the sample event bus abstractions and implementation showcased in the eShopOnContainers sample are intended to be used only as a proof of concept.
Once you have decided that you want to have asynchronous and event-driven communication, as explained in the current section, you should choose the service bus product that best fits your needs for production. - -## Integration events - -Integration events are used for bringing domain state in sync across multiple microservices or external systems. This functionality is done by publishing integration events outside the microservice. When an event is published to multiple receiver microservices (to as many microservices as are subscribed to the integration event), the appropriate event handler in each receiver microservice handles the event. - -An integration event is basically a data-holding class, as in the following example: - -```csharp -public class ProductPriceChangedIntegrationEvent : IntegrationEvent -{ - public int ProductId { get; private set; } - public decimal NewPrice { get; private set; } - public decimal OldPrice { get; private set; } - - public ProductPriceChangedIntegrationEvent(int productId, decimal newPrice, - decimal oldPrice) - { - ProductId = productId; - NewPrice = newPrice; - OldPrice = oldPrice; - } -} -``` - -The integration events can be defined at the application level of each microservice, so they are decoupled from other microservices, in a way comparable to how ViewModels are defined in the server and client. What is not recommended is sharing a common integration events library across multiple microservices; doing that would be coupling those microservices with a single event definition data library. You do not want to do that for the same reasons that you do not want to share a common domain model across multiple microservices: microservices must be completely autonomous. For more information, see this blog post on [the amount of data to put in events](https://particular.net/blog/putting-your-events-on-a-diet). Be careful not to take this too far, as this other blog post describes [the problem data deficient messages can produce](https://ardalis.com/data-deficient-messages/). Your design of your events should aim to be "just right" for the needs of their consumers. - -There are only a few kinds of libraries you should share across microservices. One is libraries that are final application blocks, like the [Event Bus client API](https://github.com/dotnet-architecture/eShopOnContainers/tree/main/src/BuildingBlocks/EventBus), as in eShopOnContainers. Another is libraries that constitute tools that could also be shared as NuGet components, like JSON serializers. - -## The event bus - -An event bus allows publish/subscribe-style communication between microservices without requiring the components to explicitly be aware of each other, as shown in Figure 6-19. - -![A diagram showing the basic publish/subscribe pattern.](./media/integration-event-based-microservice-communications/publish-subscribe-basics.png) - -**Figure 6-19**. Publish/subscribe basics with an event bus - -The above diagram shows that microservice A publishes to Event Bus, which distributes to subscribing microservices B and C, without the publisher needing to know the subscribers. The event bus is related to the Observer pattern and the publish-subscribe pattern. - -### Observer pattern - -In the [Observer pattern](https://en.wikipedia.org/wiki/Observer_pattern), your primary object (known as the Observable) notifies other interested objects (known as Observers) with relevant information (events). 
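A minimal in-process sketch of that relationship, using plain C# events, is shown below. It is illustrative only and has nothing to do with the distributed event bus yet; the point is that the observer subscribes directly to the observable, which is exactly the coupling the Pub/Sub pattern removes by introducing a broker.

```csharp
// Minimal in-process Observer sketch: the observable raises a .NET event and
// any registered observers are notified with the event data.
using System;

public class PriceChangedEventArgs : EventArgs
{
    public decimal NewPrice { get; init; }
}

public class ProductCatalog // the observable
{
    public event EventHandler<PriceChangedEventArgs>? PriceChanged;

    public void ChangePrice(decimal newPrice) =>
        PriceChanged?.Invoke(this, new PriceChangedEventArgs { NewPrice = newPrice });
}

public static class ObserverDemo
{
    public static void Run()
    {
        var catalog = new ProductCatalog();

        // The observer subscribes directly to the observable, so they "know" each other.
        catalog.PriceChanged += (_, e) => Console.WriteLine($"Price is now {e.NewPrice}");

        catalog.ChangePrice(12.50m);
    }
}
```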
-
-### Publish/Subscribe (Pub/Sub) pattern
-
-The purpose of the [Publish/Subscribe pattern](/previous-versions/msp-n-p/ff649664(v=pandp.10)) is the same as the Observer pattern: you want to notify other services when certain events take place. But there is an important difference between the Observer and Pub/Sub patterns. In the Observer pattern, the broadcast is performed directly from the observable to the observers, so they "know" each other. But when using a Pub/Sub pattern, there is a third component, called the broker, message broker, or event bus, which is known by both the publisher and the subscriber. Therefore, when using the Pub/Sub pattern, the publisher and the subscribers are cleanly decoupled thanks to that event bus or message broker.
-
-### The middleman or event bus
-
-How do you achieve anonymity between publisher and subscriber? An easy way is to let a middleman take care of all the communication. An event bus is one such middleman.
-
-An event bus is typically composed of two parts:
-
-- The abstraction or interface.
-
-- One or more implementations.
-
-In Figure 6-19 you can see how, from an application point of view, the event bus is nothing more than a Pub/Sub channel. The way you implement this asynchronous communication can vary. It can have multiple implementations so that you can swap between them, depending on the environment requirements (for example, production versus development environments).
-
-In Figure 6-20, you can see an abstraction of an event bus with multiple implementations based on infrastructure messaging technologies like RabbitMQ, Azure Service Bus, or another event/message broker.
-
-![Diagram showing the addition of an event bus abstraction layer.](./media/integration-event-based-microservice-communications/multiple-implementations-event-bus.png)
-
-**Figure 6-20.** Multiple implementations of an event bus
-
-It's good to have the event bus defined through an interface so it can be implemented with several technologies, like RabbitMQ, Azure Service Bus, or others. However, and as mentioned previously, using your own abstractions (the event bus interface) is good only if you need basic event bus features supported by your abstractions. If you need richer service bus features, you should probably use the API and abstractions provided by your preferred commercial service bus instead of your own abstractions.
-
-### Defining an event bus interface
-
-Let's start with some implementation code for the event bus interface and possible implementations for exploration purposes. The interface should be generic and straightforward, as in the following interface.
-
-```csharp
-public interface IEventBus
-{
-    void Publish(IntegrationEvent @event);
-
-    void Subscribe<T, TH>()
-        where T : IntegrationEvent
-        where TH : IIntegrationEventHandler<T>;
-
-    void SubscribeDynamic<TH>(string eventName)
-        where TH : IDynamicIntegrationEventHandler;
-
-    void UnsubscribeDynamic<TH>(string eventName)
-        where TH : IDynamicIntegrationEventHandler;
-
-    void Unsubscribe<T, TH>()
-        where TH : IIntegrationEventHandler<T>
-        where T : IntegrationEvent;
-}
-```
-
-The `Publish` method is straightforward. The event bus will broadcast the integration event passed to it to any microservice, or even an external application, subscribed to that event. This method is used by the microservice that is publishing the event.
-
-The `Subscribe` methods (you can have several implementations depending on the arguments) are used by the microservices that want to receive events. This method has two type arguments.
The first is the integration event to subscribe to (`IntegrationEvent`). The second argument is the integration event handler (or callback method), named `IIntegrationEventHandler`, to be executed when the receiver microservice gets that integration event message. - -## Additional resources - -Some production-ready messaging solutions: - -- **Azure Service Bus** \ - [https://learn.microsoft.com/azure/service-bus-messaging/](/azure/service-bus-messaging/) - -- **NServiceBus** \ - - -- **MassTransit** \ - - -> [!div class="step-by-step"] -> [Previous](database-server-container.md) -> [Next](rabbitmq-event-bus-development-test-environment.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/background-tasks-with-ihostedservice/class-diagram-custom-ihostedservice.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/background-tasks-with-ihostedservice/class-diagram-custom-ihostedservice.png deleted file mode 100644 index 2af79dd1bb7cc..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/background-tasks-with-ihostedservice/class-diagram-custom-ihostedservice.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/background-tasks-with-ihostedservice/ihosted-service-webhost-vs-host.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/background-tasks-with-ihostedservice/ihosted-service-webhost-vs-host.png deleted file mode 100644 index 27d3d14931d5f..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/background-tasks-with-ihostedservice/ihosted-service-webhost-vs-host.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/create-asp-net-core-web-api-project.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/create-asp-net-core-web-api-project.png deleted file mode 100644 index 6e24a5bb48e5d..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/create-asp-net-core-web-api-project.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/internal-design-simple-crud-microservices.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/internal-design-simple-crud-microservices.png deleted file mode 100644 index cc088e36ec5b3..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/internal-design-simple-crud-microservices.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/simple-crud-web-api-microservice-dependencies.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/simple-crud-web-api-microservice-dependencies.png deleted file mode 100644 index 8a36f190ad805..0000000000000 Binary files 
a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/simple-crud-web-api-microservice-dependencies.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/simple-data-driven-crud-microservice.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/simple-data-driven-crud-microservice.png deleted file mode 100644 index 8c3d00b2df116..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/simple-data-driven-crud-microservice.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/swagger-metadata-eshoponcontainers-catalog-microservice.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/swagger-metadata-eshoponcontainers-catalog-microservice.png deleted file mode 100644 index c0a33b454122b..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/swagger-metadata-eshoponcontainers-catalog-microservice.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/swashbuckle-ui-testing.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/swashbuckle-ui-testing.png deleted file mode 100644 index b41c226aac1d5..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/data-driven-crud-microservice/swashbuckle-ui-testing.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/access-microservice-through-url.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/access-microservice-through-url.png deleted file mode 100644 index 8c62e3a9b1cad..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/access-microservice-through-url.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/catalog-api-microservice-folders.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/catalog-api-microservice-folders.png deleted file mode 100644 index 1d763bc34cb70..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/catalog-api-microservice-folders.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/direct-access-microservice-testing.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/direct-access-microservice-testing.png deleted file mode 100644 index 1f5bb64cf55a0..0000000000000 Binary files 
a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/direct-access-microservice-testing.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainer-ingress-tier.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainer-ingress-tier.png deleted file mode 100644 index e1073913a6d44..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainer-ingress-tier.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-architecture-aggregator-services.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-architecture-aggregator-services.png deleted file mode 100644 index 33c3fec569f5d..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-architecture-aggregator-services.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-architecture.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-architecture.png deleted file mode 100644 index 2949667a09da8..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-architecture.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-identity-service-position.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-identity-service-position.png deleted file mode 100644 index 68188cfc1a80e..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-identity-service-position.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-microservice-folders.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-microservice-folders.png deleted file mode 100644 index 768c13087e02f..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/eshoponcontainers-microservice-folders.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/ocelot-authentication.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/ocelot-authentication.png deleted file mode 100644 index 9c20d46aff85c..0000000000000 Binary files 
a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/ocelot-authentication.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/ocelot-configuration-files.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/ocelot-configuration-files.png deleted file mode 100644 index 06c1b1b9d12f7..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/ocelot-configuration-files.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/ocelotapigw-base-project.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/ocelotapigw-base-project.png deleted file mode 100644 index 8fe595947be4e..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/ocelotapigw-base-project.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/reusing-single-ocelot-docker-image.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/reusing-single-ocelot-docker-image.png deleted file mode 100644 index 6d8264f3e0a05..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/reusing-single-ocelot-docker-image.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/test-catalog-microservice.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/test-catalog-microservice.png deleted file mode 100644 index 82190af86c5ad..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/test-catalog-microservice.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/zoom-in-vision-aggregator-services.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/zoom-in-vision-aggregator-services.png deleted file mode 100644 index c099a2da1c66f..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/implement-api-gateways-with-ocelot/zoom-in-vision-aggregator-services.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/integration-event-based-microservice-communications/event-driven-communication.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/integration-event-based-microservice-communications/event-driven-communication.png deleted file mode 100644 index 5df9a8c276e64..0000000000000 Binary files 
a/docs/architecture/microservices/multi-container-microservice-net-applications/media/integration-event-based-microservice-communications/event-driven-communication.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/integration-event-based-microservice-communications/multiple-implementations-event-bus.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/integration-event-based-microservice-communications/multiple-implementations-event-bus.png deleted file mode 100644 index 026cc9d7c4b2a..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/integration-event-based-microservice-communications/multiple-implementations-event-bus.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/integration-event-based-microservice-communications/publish-subscribe-basics.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/integration-event-based-microservice-communications/publish-subscribe-basics.png deleted file mode 100644 index 4b369e75b80de..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/integration-event-based-microservice-communications/publish-subscribe-basics.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/microservice-application-design/eshoponcontainers-reference-application-architecture.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/microservice-application-design/eshoponcontainers-reference-application-architecture.png deleted file mode 100644 index c0fdea6573a9c..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/microservice-application-design/eshoponcontainers-reference-application-architecture.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/microservice-application-design/external-versus-internal-architecture.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/microservice-application-design/external-versus-internal-architecture.png deleted file mode 100644 index 563cb0e5fa469..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/microservice-application-design/external-versus-internal-architecture.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/microservice-application-design/multi-architectural-patterns-polyglot-microservices.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/microservice-application-design/multi-architectural-patterns-polyglot-microservices.png deleted file mode 100644 index 9d6d45b60d98b..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/microservice-application-design/multi-architectural-patterns-polyglot-microservices.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/multi-container-applications-docker-compose/docker-compose-file-visual-studio.png 
b/docs/architecture/microservices/multi-container-microservice-net-applications/media/multi-container-applications-docker-compose/docker-compose-file-visual-studio.png deleted file mode 100644 index bfa112c5ae598..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/multi-container-applications-docker-compose/docker-compose-file-visual-studio.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/multi-container-applications-docker-compose/multiple-docker-compose-files-override-base.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/multi-container-applications-docker-compose/multiple-docker-compose-files-override-base.png deleted file mode 100644 index a288577aa8018..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/multi-container-applications-docker-compose/multiple-docker-compose-files-override-base.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/rabbitmq-event-bus-development-test-environment/rabbitmq-implementation.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/rabbitmq-event-bus-development-test-environment/rabbitmq-implementation.png deleted file mode 100644 index 5fef906d0d74b..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/rabbitmq-event-bus-development-test-environment/rabbitmq-implementation.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/subscribe-events/atomicity-publish-event-bus.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/subscribe-events/atomicity-publish-event-bus.png deleted file mode 100644 index 0579f94b2e3c0..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/subscribe-events/atomicity-publish-event-bus.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/subscribe-events/atomicity-publish-worker-microservice.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/subscribe-events/atomicity-publish-worker-microservice.png deleted file mode 100644 index 05a94838d672d..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/subscribe-events/atomicity-publish-worker-microservice.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/subscribe-events/display-item-price-change.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/subscribe-events/display-item-price-change.png deleted file mode 100644 index b5e3e96f5ab19..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/subscribe-events/display-item-price-change.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/media/test-aspnet-core-services-web-apps/eshoponcontainers-test-folder-structure.png b/docs/architecture/microservices/multi-container-microservice-net-applications/media/test-aspnet-core-services-web-apps/eshoponcontainers-test-folder-structure.png 
deleted file mode 100644 index fbc711e596c24..0000000000000 Binary files a/docs/architecture/microservices/multi-container-microservice-net-applications/media/test-aspnet-core-services-web-apps/eshoponcontainers-test-folder-structure.png and /dev/null differ diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/microservice-application-design.md b/docs/architecture/microservices/multi-container-microservice-net-applications/microservice-application-design.md deleted file mode 100644 index de71de4c94f3f..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/microservice-application-design.md +++ /dev/null @@ -1,186 +0,0 @@ ---- -title: Designing a microservice-oriented application -description: .NET Microservices Architecture for Containerized .NET Applications | Understand the benefits and downsides of a microservice-oriented application, so you can take an informed decision. -ms.date: 01/13/2021 ---- - -# Design a microservice-oriented application - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -This section focuses on developing a hypothetical server-side enterprise application. - -## Application specifications - -The hypothetical application handles requests by executing business logic, accessing databases, and then returning HTML, JSON, or XML responses. We will say that the application must support various clients, including desktop browsers running Single Page Applications (SPAs), traditional web apps, mobile web apps, and native mobile apps. The application might also expose an API for third parties to consume. It should also be able to integrate its microservices or external applications asynchronously, so that approach will help resiliency of the microservices in the case of partial failures. - -The application will consist of these types of components: - -- Presentation components. These components are responsible for handling the UI and consuming remote services. - -- Domain or business logic. This component is the application's domain logic. - -- Database access logic. This component consists of data access components responsible for accessing databases (SQL or NoSQL). - -- Application integration logic. This component includes a messaging channel, based on message brokers. - -The application will require high scalability, while allowing its vertical subsystems to scale out autonomously, because certain subsystems will require more scalability than others. - -The application must be able to be deployed in multiple infrastructure environments (multiple public clouds and on-premises) and ideally should be cross-platform, able to move from Linux to Windows (or vice versa) easily. - -## Development team context - -We also assume the following about the development process for the application: - -- You have multiple dev teams focusing on different business areas of the application. - -- New team members must become productive quickly, and the application must be easy to understand and modify. - -- The application will have a long-term evolution and ever-changing business rules. - -- You need good long-term maintainability, which means having agility when implementing new changes in the future while being able to update multiple subsystems with minimum impact on the other subsystems. - -- You want to practice continuous integration and continuous deployment of the application. - -- You want to take advantage of emerging technologies (frameworks, programming languages, etc.) 
while evolving the application. You do not want to make full migrations of the application when moving to new technologies, because that would result in high costs and impact the predictability and stability of the application. - -## Choosing an architecture - -What should the application deployment architecture be? The specifications for the application, along with the development context, strongly suggest that you should architect the application by decomposing it into autonomous subsystems in the form of collaborating [microservices](/azure/architecture/guide/architecture-styles/microservices) and containers, where a microservice is a container. - -In this approach, each service (container) implements a set of cohesive and narrowly related functions. For example, an application might consist of services such as the catalog service, ordering service, basket service, user profile service, etc. - -Microservices communicate using protocols such as HTTP (REST), but also asynchronously (for example, using AMQP) whenever possible, especially when propagating updates with integration events. - -Microservices are developed and deployed as containers independently of one another. This approach means that a development team can be developing and deploying a certain microservice without impacting other subsystems. - -Each microservice has its own database, allowing it to be fully decoupled from other microservices. When necessary, consistency between databases from different microservices is achieved using application-level integration events (through a logical event bus), as handled in [Command and Query Responsibility Segregation (CQRS)](/azure/architecture/patterns/cqrs). Because of that, the business constraints must embrace eventual consistency between the multiple microservices and related databases. - -### eShopOnContainers: A reference application for .NET and microservices deployed using containers - -So that you can focus on the architecture and technologies instead of thinking about a hypothetical business domain that you might not know, we have selected a well-known business domain—namely, a simplified e-commerce (e-shop) application that presents a catalog of products, takes orders from customers, verifies inventory, and performs other business functions. This container-based application source code is available in the [eShopOnContainers](https://aka.ms/MicroservicesArchitecture) GitHub repo. - -The application consists of multiple subsystems, including several store UI front ends (a Web application and a native mobile app), along with the back-end microservices and containers for all the required server-side operations with several API Gateways as consolidated entry points to the internal microservices. Figure 6-1 shows the architecture of the reference application. - -![Diagram of client apps using eShopOnContainers in a single Docker host.](./media/microservice-application-design/eshoponcontainers-reference-application-architecture.png) - -**Figure 6-1**. The eShopOnContainers reference application architecture for development environment - -The above diagram shows that Mobile and SPA clients communicate to single API gateway endpoints, that then communicate to microservices. Traditional web clients communicate to MVC microservice, that communicates to microservices through the API gateway. - -**Hosting environment**. In Figure 6-1, you see several containers deployed within a single Docker host. 
That would be the case when deploying to a single Docker host with the docker-compose up command. However, if you are using an orchestrator or container cluster, each container could be running in a different host (node), and any node could be running any number of containers, as we explained earlier in the architecture section. - -**Communication architecture**. The eShopOnContainers application uses two communication types, depending on the kind of the functional action (queries versus updates and transactions): - -- Http client-to-microservice communication through API Gateways. This approach is used for queries and when accepting update or transactional commands from the client apps. The approach using API Gateways is explained in detail in later sections. - -- Asynchronous event-based communication. This communication occurs through an event bus to propagate updates across microservices or to integrate with external applications. The event bus can be implemented with any messaging-broker infrastructure technology like [RabbitMQ](https://www.rabbitmq.com/), or using higher-level (abstraction-level) service buses like [Azure Service Bus](/azure/service-bus-messaging/), [NServiceBus](/azure/service-bus-messaging/build-message-driven-apps-nservicebus), [MassTransit](https://masstransit.io/), or [Brighter](https://github.com/BrighterCommand/Brighter). - -The application is deployed as a set of microservices in the form of containers. Client apps can communicate with those microservices running as containers through the public URLs published by the API Gateways. - -### Data sovereignty per microservice - -In the sample application, each microservice owns its own database or data source, although all SQL Server databases are deployed as a single container. This design decision was made only to make it easy for a developer to get the code from GitHub, clone it, and open it in Visual Studio or Visual Studio Code. Or alternatively, it makes it easy to compile the custom Docker images using the .NET CLI and the Docker CLI, and then deploy and run them in a Docker development environment. Either way, using containers for data sources lets developers build and deploy in a matter of minutes without having to provision an external database or any other data source with hard dependencies on infrastructure (cloud or on-premises). - -In a real production environment, for high availability and for scalability, the databases should be based on database servers in the cloud or on-premises, but not in containers. - -Therefore, the units of deployment for microservices (and even for databases in this application) are Docker containers, and the reference application is a multi-container application that embraces microservices principles. - -### Additional resources - -- **eShopOnContainers GitHub repo. Source code for the reference application** \ - - -## Benefits of a microservice-based solution - -A microservice-based solution like this has many benefits: - -**Each microservice is relatively small—easy to manage and evolve**. Specifically: - -- It is easy for a developer to understand and get started quickly with good productivity. - -- Containers start fast, which makes developers more productive. - -- An IDE like Visual Studio can load smaller projects fast, making developers productive. - -- Each microservice can be designed, developed, and deployed independently of other microservices, which provide agility because it is easier to deploy new versions of microservices frequently. 
- -**It is possible to scale out individual areas of the application**. For instance, the catalog service or the basket service might need to be scaled out, but not the ordering process. A microservices infrastructure will be much more efficient with regard to the resources used when scaling out than a monolithic architecture would be. - -**You can divide the development work between multiple teams**. Each service can be owned by a single development team. Each team can manage, develop, deploy, and scale their service independently of the rest of the teams. - -**Issues are more isolated**. If there is an issue in one service, only that service is initially impacted (except when the wrong design is used, with direct dependencies between microservices), and other services can continue to handle requests. In contrast, one malfunctioning component in a monolithic deployment architecture can bring down the entire system, especially when it involves resources, such as a memory leak. Additionally, when an issue in a microservice is resolved, you can deploy just the affected microservice without impacting the rest of the application. - -**You can use the latest technologies**. Because you can start developing services independently and run them side by side (thanks to containers and .NET), you can start using the latest technologies and frameworks expediently instead of being stuck on an older stack or framework for the whole application. - -## Downsides of a microservice-based solution - -A microservice-based solution like this also has some drawbacks: - -**Distributed application**. Distributing the application adds complexity for developers when they are designing and building the services. For example, developers must implement inter-service communication using protocols like HTTP or AMQP, which adds complexity for testing and exception handling. It also adds latency to the system. - -**Deployment complexity**. An application that has dozens of microservice types and needs high scalability (it needs to be able to create many instances per service and balance those services across many hosts) means a high degree of deployment complexity for IT operations and management. If you are not using a microservice-oriented infrastructure (like an orchestrator and scheduler), that additional complexity can require far more development effort than the business application itself. - -**Atomic transactions**. Atomic transactions between multiple microservices usually are not possible. The business requirements have to embrace eventual consistency between multiple microservices. For more information, see the [challenges of idempotent message processing](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-data-platform#idempotent-message-processing). - -**Increased global resource needs** (total memory, drives, and network resources for all the servers or hosts). In many cases, when you replace a monolithic application with a microservices approach, the amount of initial global resources needed by the new microservice-based application will be larger than the infrastructure needs of the original monolithic application. This is because the higher degree of granularity and the distribution of services require more global resources.
However, given the low cost of resources in general and the benefit of being able to scale out certain areas of the application compared to long-term costs when evolving monolithic applications, the increased use of resources is usually a good tradeoff for large, long-term applications. - -**Issues with direct client-to-microservice communication**. When the application is large, with dozens of microservices, there are challenges and limitations if the application requires direct client-to-microservice communications. One problem is a potential mismatch between the needs of the client and the APIs exposed by each of the microservices. In certain cases, the client application might need to make many separate requests to compose the UI, which can be inefficient over the Internet and would be impractical over a mobile network. Therefore, requests from the client application to the back-end system should be minimized. - -Another problem with direct client-to-microservice communications is that some microservices might be using protocols that are not Web-friendly. One service might use a binary protocol, while another service might use AMQP messaging. Those protocols are not firewall-friendly and are best used internally. Usually, an application should use protocols such as HTTP and WebSockets for communication outside of the firewall. - -Yet another drawback with this direct client-to-service approach is that it makes it difficult to refactor the contracts for those microservices. Over time, developers might want to change how the system is partitioned into services. For example, they might merge two services or split a service into two or more services. However, if clients communicate directly with the services, performing this kind of refactoring can break compatibility with client apps. - -As mentioned in the architecture section, when designing and building a complex application based on microservices, you might consider the use of multiple fine-grained API Gateways instead of the simpler direct client-to-microservice communication approach. - -**Partitioning the microservices**. Finally, no matter which approach you take for your microservice architecture, another challenge is deciding how to partition an end-to-end application into multiple microservices. As noted in the architecture section of the guide, there are several techniques and approaches you can take. Basically, you need to identify areas of the application that are decoupled from the other areas and that have a low number of hard dependencies. In many cases, this approach is aligned with partitioning services by use case. For example, in our e-shop application, we have an ordering service that is responsible for all the business logic related to the order process. We also have the catalog service and the basket service that implement other capabilities. Ideally, each service should have only a small set of responsibilities. This approach is similar to the single responsibility principle (SRP) applied to classes, which states that a class should only have one reason to change. But in this case, it is about microservices, so the scope will be larger than a single class. Most of all, a microservice has to be autonomous, end to end, including responsibility for its own data sources. - -## External versus internal architecture and design patterns - -The external architecture is the microservice architecture composed of multiple services, following the principles described in the architecture section of this guide.
However, depending on the nature of each microservice, and independently of high-level microservice architecture you choose, it is common and sometimes advisable to have different internal architectures, each based on different patterns, for different microservices. The microservices can even use different technologies and programming languages. Figure 6-2 illustrates this diversity. - -![Diagram comparing external and internal architecture patterns.](./media/microservice-application-design/external-versus-internal-architecture.png) - -**Figure 6-2**. External versus internal architecture and design - -For instance, in our *eShopOnContainers* sample, the catalog, basket, and user profile microservices are simple (basically, CRUD subsystems). Therefore, their internal architecture and design is straightforward. However, you might have other microservices, such as the ordering microservice, which is more complex and represents ever-changing business rules with a high degree of domain complexity. In cases like these, you might want to implement more advanced patterns within a particular microservice, like the ones defined with domain-driven design (DDD) approaches, as we are doing in the *eShopOnContainers* ordering microservice. (We will review these DDD patterns in the section later that explains the implementation of the *eShopOnContainers* ordering microservice.) - -Another reason for a different technology per microservice might be the nature of each microservice. For example, it might be better to use a functional programming language like F\#, or even a language like R if you are targeting AI and machine learning domains, instead of a more object-oriented programming language like C\#. - -The bottom line is that each microservice can have a different internal architecture based on different design patterns. Not all microservices should be implemented using advanced DDD patterns, because that would be over-engineering them. Similarly, complex microservices with ever-changing business logic should not be implemented as CRUD components, or you can end up with low-quality code. - -## The new world: multiple architectural patterns and polyglot microservices - -There are many architectural patterns used by software architects and developers. The following are a few (mixing architecture styles and architecture patterns): - -- Simple CRUD, single-tier, single-layer. - -- [Traditional N-Layered](/previous-versions/msp-n-p/ee658109(v=pandp.10)). - -- [Domain-Driven Design N-layered](https://devblogs.microsoft.com/cesardelatorre/published-first-alpha-version-of-domain-oriented-n-layered-architecture-v2-0/). - -- [Clean Architecture](../../modern-web-apps-azure/common-web-application-architectures.md#clean-architecture) (as used with [eShopOnWeb](https://aka.ms/WebAppArchitecture)) - -- [Command and Query Responsibility Segregation](https://martinfowler.com/bliki/CQRS.html) (CQRS). - -- [Event-Driven Architecture](https://en.wikipedia.org/wiki/Event-driven_architecture) (EDA). - -You can also build microservices with many technologies and languages, such as ASP.NET Core Web APIs, NancyFx, ASP.NET Core SignalR (available with .NET Core 2 or later), F\#, Node.js, Python, Java, C++, GoLang, and more. - -The important point is that no particular architecture pattern or style, nor any particular technology, is right for all situations. Figure 6-3 shows some approaches and technologies (although not in any particular order) that could be used in different microservices. 
- -![Diagram showing 12 complex microservices in a polyglot world architecture.](./media/microservice-application-design/multi-architectural-patterns-polyglot-microservices.png) - -**Figure 6-3**. Multi-architectural patterns and the polyglot microservices world - -Multi-architectural pattern and polyglot microservices means you can mix and match languages and technologies to the needs of each microservice and still have them talking to each other. As shown in Figure 6-3, in applications composed of many microservices (Bounded Contexts in domain-driven design terminology, or simply "subsystems" as autonomous microservices), you might implement each microservice in a different way. Each might have a different architecture pattern and use different languages and databases depending on the application's nature, business requirements, and priorities. In some cases, the microservices might be similar. But that is not usually the case, because each subsystem's context boundary and requirements are usually different. - -For instance, for a simple CRUD maintenance application, it might not make sense to design and implement DDD patterns. But for your core domain or core business, you might need to apply more advanced patterns to tackle business complexity with ever-changing business rules. - -Especially when you deal with large applications composed by multiple subsystems, you should not apply a single top-level architecture based on a single architecture pattern. For instance, CQRS should not be applied as a top-level architecture for a whole application, but might be useful for a specific set of services. - -There is no silver bullet or a right architecture pattern for every given case. You cannot have "one architecture pattern to rule them all." Depending on the priorities of each microservice, you must choose a different approach for each, as explained in the following sections. - ->[!div class="step-by-step"] ->[Previous](index.md) ->[Next](data-driven-crud-microservice.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/multi-container-applications-docker-compose.md b/docs/architecture/microservices/multi-container-microservice-net-applications/multi-container-applications-docker-compose.md deleted file mode 100644 index d8fe403e6d5bf..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/multi-container-applications-docker-compose.md +++ /dev/null @@ -1,473 +0,0 @@ ---- -title: Defining your multi-container application with docker-compose.yml -description: How to specify microservices composition for a multicontainer application with docker-compose.yml. -ms.date: 11/19/2021 ---- - -# Defining your multi-container application with docker-compose.yml - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -In this guide, the [docker-compose.yml](https://docs.docker.com/compose/compose-file/) file was introduced in the section [Step 4. Define your services in docker-compose.yml when building a multi-container Docker application](../docker-application-development-process/docker-app-development-workflow.md#step-4-define-your-services-in-docker-composeyml-when-building-a-multi-container-docker-application). However, there are additional ways to use the docker-compose files that are worth exploring in further detail. - -For example, you can explicitly describe how you want to deploy your multi-container application in the docker-compose.yml file. 
Optionally, you can also describe how you are going to build your custom Docker images. (Custom Docker images can also be built with the Docker CLI.) - -Basically, you define each of the containers you want to deploy plus certain characteristics for each container deployment. Once you have a multi-container deployment description file, you can deploy the whole solution in a single action orchestrated by the [docker-compose up](https://docs.docker.com/compose/overview/) CLI command, or you can deploy it transparently from Visual Studio. Otherwise, you would need to use the Docker CLI to deploy container-by-container in multiple steps by using the `docker run` command from the command line. Therefore, each service defined in docker-compose.yml must specify exactly one image or build. Other keys are optional, and are analogous to their `docker run` command-line counterparts. - -The following YAML code is the definition of a possible global but single docker-compose.yml file for the eShopOnContainers sample. This code is not the actual docker-compose file from eShopOnContainers. Instead, it is a simplified and consolidated version in a single file, which is not the best way to work with docker-compose files, as will be explained later. - -```yml -version: '3.4' - -services: - webmvc: - image: eshop/webmvc - environment: - - CatalogUrl=http://catalog-api - - OrderingUrl=http://ordering-api - - BasketUrl=http://basket-api - ports: - - "5100:80" - depends_on: - - catalog-api - - ordering-api - - basket-api - - catalog-api: - image: eshop/catalog-api - environment: - - ConnectionString=Server=sqldata;Initial Catalog=CatalogData;User Id=sa;Password=[PLACEHOLDER] - expose: - - "80" - ports: - - "5101:80" - #extra hosts can be used for standalone SQL Server or services at the dev PC - extra_hosts: - - "CESARDLSURFBOOK:10.0.75.1" - depends_on: - - sqldata - - ordering-api: - image: eshop/ordering-api - environment: - - ConnectionString=Server=sqldata;Database=Services.OrderingDb;User Id=sa;Password=[PLACEHOLDER] - ports: - - "5102:80" - #extra hosts can be used for standalone SQL Server or services at the dev PC - extra_hosts: - - "CESARDLSURFBOOK:10.0.75.1" - depends_on: - - sqldata - - basket-api: - image: eshop/basket-api - environment: - - ConnectionString=sqldata - ports: - - "5103:80" - depends_on: - - sqldata - - sqldata: - environment: - - SA_PASSWORD=[PLACEHOLDER] - - ACCEPT_EULA=Y - ports: - - "5434:1433" - - basketdata: - image: redis -``` - -The root key in this file is services. Under that key, you define the services you want to deploy and run when you execute the `docker-compose up` command or when you deploy from Visual Studio by using this docker-compose.yml file. In this case, the docker-compose.yml file has multiple services defined, as described in the following table. 
- -| Service name | Description | -|--------------|-------------| -| webmvc | Container including the ASP.NET Core MVC application consuming the microservices from server-side C\#| -| catalog-api | Container including the Catalog ASP.NET Core Web API microservice | -| ordering-api | Container including the Ordering ASP.NET Core Web API microservice | -| sqldata | Container running SQL Server for Linux, holding the microservices databases | -| basket-api | Container with the Basket ASP.NET Core Web API microservice | -| basketdata | Container running the REDIS cache service, with the basket database as a REDIS cache | - -### A simple Web Service API container - -Focusing on a single container, the catalog-api container-microservice has a straightforward definition: - -```yml - catalog-api: - image: eshop/catalog-api - environment: - - ConnectionString=Server=sqldata;Initial Catalog=CatalogData;User Id=sa;Password=[PLACEHOLDER] - expose: - - "80" - ports: - - "5101:80" - #extra hosts can be used for standalone SQL Server or services at the dev PC - extra_hosts: - - "CESARDLSURFBOOK:10.0.75.1" - depends_on: - - sqldata -``` - -This containerized service has the following basic configuration: - -- It is based on the custom **eshop/catalog-api** image. For simplicity's sake, there is no build: key setting in the file. This means that the image must have been previously built (with docker build) or have been downloaded (with the docker pull command) from any Docker registry. - -- It defines an environment variable named ConnectionString with the connection string to be used by Entity Framework to access the SQL Server instance that contains the catalog data model. In this case, the same SQL Server container is holding multiple databases. Therefore, you need less memory in your development machine for Docker. However, you could also deploy one SQL Server container for each microservice database. - -- The SQL Server name is **sqldata**, which is the same name used for the container that is running the SQL Server instance for Linux. This is convenient; being able to use this name resolution (internal to the Docker host) will resolve the network address so you don't need to know the internal IP for the containers you are accessing from other containers. - -Because the connection string is defined by an environment variable, you could set that variable through a different mechanism and at a different time. For example, you could set a different connection string when deploying to production in the final hosts, or by doing it from your CI/CD pipelines in Azure DevOps Services or your preferred DevOps system. - -- It exposes port 80 for internal access to the **catalog-api** service within the Docker host. The host is currently a Linux VM because it is based on a Docker image for Linux, but you could configure the container to run on a Windows image instead. - -- It forwards the exposed port 80 on the container to port 5101 on the Docker host machine (the Linux VM). - -- It links the web service to the **sqldata** service (the SQL Server instance for Linux database running in a container). When you specify this dependency, the catalog-api container will not start until the sqldata container has already started; this aspect is important because catalog-api needs to have the SQL Server database up and running first. However, this kind of container dependency is not enough in many cases, because Docker checks only at the container level. 
Sometimes the service (in this case SQL Server) might still not be ready, so it is advisable to implement retry logic with exponential backoff in your client microservices. That way, if a dependency container is not ready for a short time, the application will still be resilient. - -- It is configured to allow access to external servers: the extra\_hosts setting allows you to access external servers or machines outside of the Docker host (that is, outside the default Linux VM, which is a development Docker host), such as a local SQL Server instance on your development PC. - -There are also other, more advanced `docker-compose.yml` settings that we'll discuss in the following sections. - -### Using docker-compose files to target multiple environments - -The `docker-compose.*.yml` files are definition files and can be used by multiple infrastructures that understand that format. The most straightforward tool is the docker-compose command. - -Therefore, by using the docker-compose command, you can target the following main scenarios. - -#### Development environments - -When you develop applications, it is important to be able to run an application in an isolated development environment. You can use the docker-compose CLI command to create that environment, or you can use Visual Studio, which uses docker-compose under the covers. - -The docker-compose.yml file allows you to configure and document all your application's service dependencies (other services, cache, databases, queues, etc.). Using the docker-compose CLI command, you can create and start one or more containers for each dependency with a single command (docker-compose up). - -The docker-compose.yml files are configuration files interpreted by the Docker engine, but they also serve as convenient documentation files about the composition of your multi-container application. - -#### Testing environments - -Unit tests and integration tests are an important part of any continuous integration (CI) or continuous deployment (CD) process. These automated tests require an isolated environment so they are not impacted by users or any other change in the application's data. - -With Docker Compose, you can create and destroy that isolated environment very easily in a few commands from your command prompt or scripts, like the following commands: - -```console -docker-compose -f docker-compose.yml -f docker-compose-test.override.yml up -d -./run_unit_tests -docker-compose -f docker-compose.yml -f docker-compose-test.override.yml down -``` - -#### Production deployments - -You can also use Compose to deploy to a remote Docker Engine. A typical case is to deploy to a single Docker host instance. - -If you're using any other orchestrator (for example, Azure Service Fabric or Kubernetes), you might need to add setup and metadata configuration settings like those in docker-compose.yml, but in the format required by the other orchestrator. - -In any case, docker-compose is a convenient tool and metadata format for development, testing, and production workflows, although the production workflow might vary depending on the orchestrator you are using. - -### Using multiple docker-compose files to handle several environments - -When targeting different environments, you should use multiple compose files. This approach lets you create multiple configuration variants depending on the environment. - -#### Overriding the base docker-compose file - -You could use a single docker-compose.yml file as in the simplified examples shown in previous sections.
However, that is not recommended for most applications. - -By default, Compose reads two files, a docker-compose.yml and an optional docker-compose.override.yml file. As shown in Figure 6-11, when you are using Visual Studio and enabling Docker support, Visual Studio also creates an additional docker-compose.vs.debug.g.yml file for debugging the application, you can take a look at this file in folder obj\\Docker\\ in the main solution folder. - -![Files in a docker compose project.](./media/multi-container-applications-docker-compose/docker-compose-file-visual-studio.png) - -**Figure 6-11**. docker-compose files in Visual Studio 2019 - -**docker-compose** project file structure: - -- *.dockerignore* - used to ignore files -- *docker-compose.yml* - used to compose microservices -- *docker-compose.override.yml* - used to configure microservices environment - -You can edit the docker-compose files with any editor, like Visual Studio Code or Sublime, and run the application with the docker-compose up command. - -By convention, the docker-compose.yml file contains your base configuration and other static settings. That means that the service configuration should not change depending on the deployment environment you are targeting. - -The docker-compose.override.yml file, as its name suggests, contains configuration settings that override the base configuration, such as configuration that depends on the deployment environment. You can have multiple override files with different names also. The override files usually contain additional information needed by the application but specific to an environment or to a deployment. - -#### Targeting multiple environments - -A typical use case is when you define multiple compose files so you can target multiple environments, like production, staging, CI, or development. To support these differences, you can split your Compose configuration into multiple files, as shown in Figure 6-12. - -![Diagram of three docker-compose files set to override the base file.](./media/multi-container-applications-docker-compose/multiple-docker-compose-files-override-base.png) - -**Figure 6-12**. Multiple docker-compose files overriding values in the base docker-compose.yml file - -You can combine multiple docker-compose*.yml files to handle different environments. You start with the base docker-compose.yml file. This base file contains the base or static configuration settings that do not change depending on the environment. For example, the eShopOnContainers app has the following docker-compose.yml file (simplified with fewer services) as the base file. - -```yml -#docker-compose.yml (Base) -version: '3.4' -services: - basket-api: - image: eshop/basket-api:${TAG:-latest} - build: - context: . - dockerfile: src/Services/Basket/Basket.API/Dockerfile - depends_on: - - basketdata - - identity-api - - rabbitmq - - catalog-api: - image: eshop/catalog-api:${TAG:-latest} - build: - context: . - dockerfile: src/Services/Catalog/Catalog.API/Dockerfile - depends_on: - - sqldata - - rabbitmq - - marketing-api: - image: eshop/marketing-api:${TAG:-latest} - build: - context: . - dockerfile: src/Services/Marketing/Marketing.API/Dockerfile - depends_on: - - sqldata - - nosqldata - - identity-api - - rabbitmq - - webmvc: - image: eshop/webmvc:${TAG:-latest} - build: - context: . 
- dockerfile: src/Web/WebMVC/Dockerfile - depends_on: - - catalog-api - - ordering-api - - identity-api - - basket-api - - marketing-api - - sqldata: - image: mcr.microsoft.com/mssql/server:2019-latest - - nosqldata: - image: mongo - - basketdata: - image: redis - - rabbitmq: - image: rabbitmq:3-management -``` - -The values in the base docker-compose.yml file should not change because of different target deployment environments. - -If you focus on the webmvc service definition, for instance, you can see how that information is much the same no matter what environment you might be targeting. You have the following information: - -- The service name: webmvc. - -- The container's custom image: eshop/webmvc. - -- The command to build the custom Docker image, indicating which Dockerfile to use. - -- Dependencies on other services, so this container does not start until the other dependency containers have started. - -You can have additional configuration, but the important point is that in the base docker-compose.yml file, you just want to set the information that is common across environments. Then in the docker-compose.override.yml or similar files for production or staging, you should place configuration that is specific for each environment. - -Usually, the docker-compose.override.yml is used for your development environment, as in the following example from eShopOnContainers: - -```yml -#docker-compose.override.yml (Extended config for DEVELOPMENT env.) -version: '3.4' - -services: -# Simplified number of services here: - - basket-api: - environment: - - ASPNETCORE_ENVIRONMENT=Development - - ASPNETCORE_URLS=http://0.0.0.0:80 - - ConnectionString=${ESHOP_AZURE_REDIS_BASKET_DB:-basketdata} - - identityUrl=http://identity-api - - IdentityUrlExternal=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5105 - - EventBusConnection=${ESHOP_AZURE_SERVICE_BUS:-rabbitmq} - - EventBusUserName=${ESHOP_SERVICE_BUS_USERNAME} - - EventBusPassword=${ESHOP_SERVICE_BUS_PASSWORD} - - AzureServiceBusEnabled=False - - ApplicationInsights__InstrumentationKey=${INSTRUMENTATION_KEY} - - OrchestratorType=${ORCHESTRATOR_TYPE} - - UseLoadTest=${USE_LOADTEST:-False} - - ports: - - "5103:80" - - catalog-api: - environment: - - ASPNETCORE_ENVIRONMENT=Development - - ASPNETCORE_URLS=http://0.0.0.0:80 - - ConnectionString=${ESHOP_AZURE_CATALOG_DB:-Server=sqldata;Database=Microsoft.eShopOnContainers.Services.CatalogDb;User Id=sa;Password=[PLACEHOLDER]} - - PicBaseUrl=${ESHOP_AZURE_STORAGE_CATALOG_URL:-http://host.docker.internal:5202/api/v1/catalog/items/[0]/pic/} - - EventBusConnection=${ESHOP_AZURE_SERVICE_BUS:-rabbitmq} - - EventBusUserName=${ESHOP_SERVICE_BUS_USERNAME} - - EventBusPassword=${ESHOP_SERVICE_BUS_PASSWORD} - - AzureStorageAccountName=${ESHOP_AZURE_STORAGE_CATALOG_NAME} - - AzureStorageAccountKey=${ESHOP_AZURE_STORAGE_CATALOG_KEY} - - UseCustomizationData=True - - AzureServiceBusEnabled=False - - AzureStorageEnabled=False - - ApplicationInsights__InstrumentationKey=${INSTRUMENTATION_KEY} - - OrchestratorType=${ORCHESTRATOR_TYPE} - ports: - - "5101:80" - - marketing-api: - environment: - - ASPNETCORE_ENVIRONMENT=Development - - ASPNETCORE_URLS=http://0.0.0.0:80 - - ConnectionString=${ESHOP_AZURE_MARKETING_DB:-Server=sqldata;Database=Microsoft.eShopOnContainers.Services.MarketingDb;User Id=sa;Password=[PLACEHOLDER]} - - MongoConnectionString=${ESHOP_AZURE_COSMOSDB:-mongodb://nosqldata} - - MongoDatabase=MarketingDb - - EventBusConnection=${ESHOP_AZURE_SERVICE_BUS:-rabbitmq} - - 
EventBusUserName=${ESHOP_SERVICE_BUS_USERNAME} - - EventBusPassword=${ESHOP_SERVICE_BUS_PASSWORD} - - identityUrl=http://identity-api - - IdentityUrlExternal=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5105 - - CampaignDetailFunctionUri=${ESHOP_AZUREFUNC_CAMPAIGN_DETAILS_URI} - - PicBaseUrl=${ESHOP_AZURE_STORAGE_MARKETING_URL:-http://host.docker.internal:5110/api/v1/campaigns/[0]/pic/} - - AzureStorageAccountName=${ESHOP_AZURE_STORAGE_MARKETING_NAME} - - AzureStorageAccountKey=${ESHOP_AZURE_STORAGE_MARKETING_KEY} - - AzureServiceBusEnabled=False - - AzureStorageEnabled=False - - ApplicationInsights__InstrumentationKey=${INSTRUMENTATION_KEY} - - OrchestratorType=${ORCHESTRATOR_TYPE} - - UseLoadTest=${USE_LOADTEST:-False} - ports: - - "5110:80" - - webmvc: - environment: - - ASPNETCORE_ENVIRONMENT=Development - - ASPNETCORE_URLS=http://0.0.0.0:80 - - PurchaseUrl=http://webshoppingapigw - - IdentityUrl=http://10.0.75.1:5105 - - MarketingUrl=http://webmarketingapigw - - CatalogUrlHC=http://catalog-api/hc - - OrderingUrlHC=http://ordering-api/hc - - IdentityUrlHC=http://identity-api/hc - - BasketUrlHC=http://basket-api/hc - - MarketingUrlHC=http://marketing-api/hc - - PaymentUrlHC=http://payment-api/hc - - SignalrHubUrl=http://${ESHOP_EXTERNAL_DNS_NAME_OR_IP}:5202 - - UseCustomizationData=True - - ApplicationInsights__InstrumentationKey=${INSTRUMENTATION_KEY} - - OrchestratorType=${ORCHESTRATOR_TYPE} - - UseLoadTest=${USE_LOADTEST:-False} - ports: - - "5100:80" - sqldata: - environment: - - SA_PASSWORD=[PLACEHOLDER] - - ACCEPT_EULA=Y - ports: - - "5433:1433" - nosqldata: - ports: - - "27017:27017" - basketdata: - ports: - - "6379:6379" - rabbitmq: - ports: - - "15672:15672" - - "5672:5672" -``` - -In this example, the development override configuration exposes some ports to the host, defines environment variables with redirect URLs, and specifies connection strings for the development environment. These settings are all just for the development environment. - -When you run `docker-compose up` (or launch it from Visual Studio), the command reads the overrides automatically as if it were merging both files. - -Suppose that you want another Compose file for the production environment, with different configuration values, ports, or connection strings. You can create another override file, like file named `docker-compose.prod.yml` with different settings and environment variables. That file might be stored in a different Git repo or managed and secured by a different team. - -#### How to deploy with a specific override file - -To use multiple override files, or an override file with a different name, you can use the -f option with the docker-compose command and specify the files. Compose merges files in the order they are specified on the command line. The following example shows how to deploy with override files. - -```console -docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d -``` - -#### Using environment variables in docker-compose files - -It is convenient, especially in production environments, to be able to get configuration information from environment variables, as we have shown in previous examples. You can reference an environment variable in your docker-compose files using the syntax ${MY\_VAR}. The following line from a docker-compose.prod.yml file shows how to reference the value of an environment variable. 
- -```yml -IdentityUrl=http://${ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP}:5105 -``` - -Environment variables are created and initialized in different ways, depending on your host environment (Linux, Windows, Cloud cluster, etc.). However, a convenient approach is to use an .env file. The docker-compose files support declaring default environment variables in the .env file. These values for the environment variables are the default values, but they can be overridden by the values you might have defined in each of your environments (host OS or environment variables from your cluster). You place this .env file in the folder where the docker-compose command is executed from. - -The following example shows an .env file like the [.env](https://github.com/dotnet-architecture/eShopOnContainers/blob/main/src/.env) file for the eShopOnContainers application. - -```sh -# .env file - -ESHOP_EXTERNAL_DNS_NAME_OR_IP=host.docker.internal - -ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP=10.121.122.92 -``` - -Docker-compose expects each line in an .env file to be in the format \<variable\>=\<value\>. - -The values set in the run-time environment always override the values defined inside the .env file. In a similar way, values passed via command-line arguments also override the default values set in the .env file. - -#### Additional resources - -- **Overview of Docker Compose** \ - - -- **Multiple Compose files** \ - [https://docs.docker.com/compose/multiple-compose-files/](https://docs.docker.com/compose/multiple-compose-files/) - -### Building optimized ASP.NET Core Docker images - -If you are exploring Docker and .NET in sources on the Internet, you will find Dockerfiles that demonstrate the simplicity of building a Docker image by copying your source into a container. These examples suggest that by using a simple configuration, you can have a Docker image with the environment packaged with your application. The following example shows a simple Dockerfile in this vein. - -```dockerfile -FROM mcr.microsoft.com/dotnet/sdk:8.0 -WORKDIR /app -ENV ASPNETCORE_URLS http://+:80 -EXPOSE 80 -COPY . . -RUN dotnet restore -ENTRYPOINT ["dotnet", "run"] -``` - -A Dockerfile like this will work. However, you can substantially optimize your images, especially your production images. - -In the container and microservices model, you are constantly starting containers. The typical way of using containers does not restart a sleeping container, because the container is disposable. Orchestrators (like Kubernetes and Azure Service Fabric) create new instances of images. What this means is that you would need to optimize by precompiling the application when it is built so the instantiation process will be faster. When the container is started, it should be ready to run. Don't restore and compile at run time using the `dotnet restore` and `dotnet build` CLI commands as you may see in blog posts about .NET and Docker. - -The .NET team has been doing important work to make .NET and ASP.NET Core a container-optimized framework. Not only is .NET a lightweight framework with a small memory footprint; the team has focused on optimized Docker images for three main scenarios and published them in the Docker Hub registry at *dotnet/*, beginning with version 2.1: - -- **Development**: The priority is the ability to quickly iterate and debug changes, where size is secondary. -- **Build**: The priority is compiling the application, and the image includes binaries and other dependencies to optimize binaries.
-- **Production**: The focus is fast deploying and starting of containers, so these images are limited to the binaries and content needed to run the application. - -The .NET team provides some basic variants in [dotnet/](https://hub.docker.com/r/microsoft/dotnet), for example: - -- **sdk**: for development and build scenarios -- **aspnet**: for ASP.NET production scenarios -- **runtime**: for .NET production scenarios -- **runtime-deps**: for production scenarios of [self-contained applications](../../../core/deploying/index.md#publish-self-contained) - -For faster startup, runtime images also automatically set aspnetcore\_urls to port 80 and use Ngen to create a native image cache of assemblies. - -#### Additional resources - -- **Building Optimized Docker Images with ASP.NET Core** - [https://learn.microsoft.com/archive/blogs/stevelasker/building-optimized-docker-images-with-asp-net-core](/archive/blogs/stevelasker/building-optimized-docker-images-with-asp-net-core) - -- **Building Docker Images for .NET Applications** - [https://learn.microsoft.com/dotnet/core/docker/building-net-docker-images](/aspnet/core/host-and-deploy/docker/building-net-docker-images) - -> [!div class="step-by-step"] -> [Previous](data-driven-crud-microservice.md) -> [Next](database-server-container.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/rabbitmq-event-bus-development-test-environment.md b/docs/architecture/microservices/multi-container-microservice-net-applications/rabbitmq-event-bus-development-test-environment.md deleted file mode 100644 index 72d9c1bddd8e3..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/rabbitmq-event-bus-development-test-environment.md +++ /dev/null @@ -1,132 +0,0 @@ ---- -title: Implementing an event bus with RabbitMQ for the development or test environment -description: .NET Microservices Architecture for Containerized .NET Applications | Use RabbitMQ to implement an event bus messaging for integration events for the development or test environments. -ms.date: 01/13/2021 ---- -# Implementing an event bus with RabbitMQ for the development or test environment - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -We should start by saying that if you create your custom event bus based on [RabbitMQ](https://www.rabbitmq.com/) running in a container, as the eShopOnContainers application does, it should be used only for your development and test environments. Don't use it for your production environment, unless you are building it as a part of a production-ready service bus as described in the [Additional resources section below](rabbitmq-event-bus-development-test-environment.md#additional-resources). A simple custom event bus might be missing many production-ready critical features that a commercial service bus has. - -One of the event bus custom implementations in eShopOnContainers is basically a library using the RabbitMQ API. (There's another implementation based on Azure Service Bus.) - -The event bus implementation with RabbitMQ lets microservices subscribe to events, publish events, and receive events, as shown in Figure 6-21. - -![Diagram showing RabbitMQ between message sender and message receiver.](./media/rabbitmq-event-bus-development-test-environment/rabbitmq-implementation.png) - -**Figure 6-21.** RabbitMQ implementation of an event bus - -RabbitMQ functions as an intermediary between message publisher and subscribers, to handle distribution. 
In the code, the EventBusRabbitMQ class implements the generic IEventBus interface. This implementation is based on Dependency Injection so that you can swap from this dev/test version to a production version. - -```csharp -public class EventBusRabbitMQ : IEventBus, IDisposable -{ - // Implementation using RabbitMQ API - //... -} -``` - -The RabbitMQ implementation of a sample dev/test event bus is boilerplate code. It has to handle the connection to the RabbitMQ server and provide code for publishing a message event to the queues. It also has to implement a dictionary of collections of integration event handlers for each event type; these event types can have a different instantiation and different subscriptions for each receiver microservice, as shown in Figure 6-21. - -## Implementing a simple publish method with RabbitMQ - -The following code is a ***simplified*** version of an event bus implementation for RabbitMQ, to showcase the whole scenario. You don't really handle the connection this way. To see the full implementation, see the actual code in the [dotnet-architecture/eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers/blob/main/src/BuildingBlocks/EventBus/EventBusRabbitMQ/EventBusRabbitMQ.cs) repository. - -```csharp -public class EventBusRabbitMQ : IEventBus, IDisposable -{ - // Member objects and other methods ... - // ... - - public void Publish(IntegrationEvent @event) - { - var eventName = @event.GetType().Name; - var factory = new ConnectionFactory() { HostName = _connectionString }; - using (var connection = factory.CreateConnection()) - using (var channel = connection.CreateModel()) - { - channel.ExchangeDeclare(exchange: _brokerName, - type: "direct"); - string message = JsonConvert.SerializeObject(@event); - var body = Encoding.UTF8.GetBytes(message); - channel.BasicPublish(exchange: _brokerName, - routingKey: eventName, - basicProperties: null, - body: body); - } - } -} -``` - -The [actual code](https://github.com/dotnet-architecture/eShopOnContainers/blob/main/src/BuildingBlocks/EventBus/EventBusRabbitMQ/EventBusRabbitMQ.cs) of the Publish method in the eShopOnContainers application is improved by using a [Polly](https://github.com/App-vNext/Polly) retry policy, which retries the task some times in case the RabbitMQ container is not ready. This scenario can occur when docker-compose is starting the containers; for example, the RabbitMQ container might start more slowly than the other containers. - -As mentioned earlier, there are many possible configurations in RabbitMQ, so this code should be used only for dev/test environments. - -## Implementing the subscription code with the RabbitMQ API - -As with the publish code, the following code is a simplification of part of the event bus implementation for RabbitMQ. Again, you usually do not need to change it unless you are improving it. - -```csharp -public class EventBusRabbitMQ : IEventBus, IDisposable -{ - // Member objects and other methods ... - // ... 
- - public void Subscribe() - where T : IntegrationEvent - where TH : IIntegrationEventHandler - { - var eventName = _subsManager.GetEventKey(); - - var containsKey = _subsManager.HasSubscriptionsForEvent(eventName); - if (!containsKey) - { - if (!_persistentConnection.IsConnected) - { - _persistentConnection.TryConnect(); - } - - using (var channel = _persistentConnection.CreateModel()) - { - channel.QueueBind(queue: _queueName, - exchange: BROKER_NAME, - routingKey: eventName); - } - } - - _subsManager.AddSubscription(); - } -} -``` - -Each event type has a related channel to get events from RabbitMQ. You can then have as many event handlers per channel and event type as needed. - -The Subscribe method accepts an IIntegrationEventHandler object, which is like a callback method in the current microservice, plus its related IntegrationEvent object. The code then adds that event handler to the list of event handlers that each integration event type can have per client microservice. If the client code has not already been subscribed to the event, the code creates a channel for the event type so it can receive events in a push style from RabbitMQ when that event is published from any other service. - -As mentioned above, the event bus implemented in eShopOnContainers has only an educational purpose, since it only handles the main scenarios, so it's not ready for production. - -For production scenarios check the additional resources below, specific for RabbitMQ, and the [Implementing event-based communication between microservices](./integration-event-based-microservice-communications.md#additional-resources) section. - -## Additional resources - -A production-ready solution with support for RabbitMQ. - -- **Peregrine Connect** - Simplify your integration with efficient design, deployment, and management of apps, APIs, and workflows \ - - -- **NServiceBus** - Fully-supported commercial service bus with advanced management and monitoring tooling for .NET \ - - -- **EasyNetQ** - Open Source .NET API client for RabbitMQ \ - - -- **MassTransit** - Free, open-source distributed application framework for .NET \ - - -- **Rebus** - Open source .NET Service Bus \ - - -> [!div class="step-by-step"] -> [Previous](integration-event-based-microservice-communications.md) -> [Next](subscribe-events.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/subscribe-events.md b/docs/architecture/microservices/multi-container-microservice-net-applications/subscribe-events.md deleted file mode 100644 index d366acbc37bde..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/subscribe-events.md +++ /dev/null @@ -1,380 +0,0 @@ ---- -title: Subscribing to events -description: .NET Microservices Architecture for Containerized .NET Applications | Understand the details of publishing and subscription to integration events. -ms.date: 06/23/2021 ---- - -# Subscribing to events - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -The first step for using the event bus is to subscribe the microservices to the events they want to receive. That functionality should be done in the receiver microservices. - -The following simple code shows what each receiver microservice needs to implement when starting the service (that is, in the `Startup` class) so it subscribes to the events it needs. 
In this case, the `basket-api` microservice needs to subscribe to `ProductPriceChangedIntegrationEvent` and the `OrderStartedIntegrationEvent` messages. - -For instance, when subscribing to the `ProductPriceChangedIntegrationEvent` event, that makes the basket microservice aware of any changes to the product price and lets it warn the user about the change if that product is in the user's basket. - -```csharp -var eventBus = app.ApplicationServices.GetRequiredService(); - -eventBus.Subscribe(); - -eventBus.Subscribe(); -``` - -After this code runs, the subscriber microservice will be listening through RabbitMQ channels. When any message of type ProductPriceChangedIntegrationEvent arrives, the code invokes the event handler that is passed to it and processes the event. - -## Publishing events through the event bus - -Finally, the message sender (origin microservice) publishes the integration events with code similar to the following example. (This approach is a simplified example that does not take atomicity into account.) You would implement similar code whenever an event must be propagated across multiple microservices, usually right after committing data or transactions from the origin microservice. - -First, the event bus implementation object (based on RabbitMQ or based on a service bus) would be injected at the controller constructor, as in the following code: - -```csharp -[Route("api/v1/[controller]")] -public class CatalogController : ControllerBase -{ - private readonly CatalogContext _context; - private readonly IOptionsSnapshot _settings; - private readonly IEventBus _eventBus; - - public CatalogController(CatalogContext context, - IOptionsSnapshot settings, - IEventBus eventBus) - { - _context = context; - _settings = settings; - _eventBus = eventBus; - } - // ... -} -``` - -Then you use it from your controller's methods, like in the UpdateProduct method: - -```csharp -[Route("items")] -[HttpPost] -public async Task UpdateProduct([FromBody]CatalogItem product) -{ - var item = await _context.CatalogItems.SingleOrDefaultAsync( - i => i.Id == product.Id); - // ... - if (item.Price != product.Price) - { - var oldPrice = item.Price; - item.Price = product.Price; - _context.CatalogItems.Update(item); - var @event = new ProductPriceChangedIntegrationEvent(item.Id, - item.Price, - oldPrice); - // Commit changes in original transaction - await _context.SaveChangesAsync(); - // Publish integration event to the event bus - // (RabbitMQ or a service bus underneath) - _eventBus.Publish(@event); - // ... - } - // ... -} -``` - -In this case, since the origin microservice is a simple CRUD microservice, that code is placed right into a Web API controller. - -In more advanced microservices, like when using CQRS approaches, it can be implemented in the `CommandHandler` class, within the `Handle()` method. - -### Designing atomicity and resiliency when publishing to the event bus - -When you publish integration events through a distributed messaging system like your event bus, you have the problem of atomically updating the original database and publishing an event (that is, either both operations complete or none of them). For instance, in the simplified example shown earlier, the code commits data to the database when the product price is changed and then publishes a ProductPriceChangedIntegrationEvent message. Initially, it might look essential that these two operations be performed atomically. 
However, if you are using a distributed transaction involving the database and the message broker, as you do in older systems like [Microsoft Message Queuing (MSMQ)](/previous-versions/windows/desktop/legacy/ms711472(v=vs.85)), this approach is not recommended for the reasons described by the [CAP theorem](https://www.quora.com/What-Is-CAP-Theorem-1). - -Basically, you use microservices to build scalable and highly available systems. Simplifying somewhat, the CAP theorem says that you cannot build a (distributed) database (or a microservice that owns its model) that's continually available, strongly consistent, *and* tolerant to any partition. You must choose two of these three properties. - -In microservices-based architectures, you should choose availability and tolerance, and you should de-emphasize strong consistency. Therefore, in most modern microservice-based applications, you usually do not want to use distributed transactions in messaging, as you do when you implement [distributed transactions](/previous-versions/windows/desktop/ms681205(v=vs.85)) based on the Windows Distributed Transaction Coordinator (DTC) with [MSMQ](/previous-versions/windows/desktop/legacy/ms711472(v=vs.85)). - -Let's go back to the initial issue and its example. If the service crashes after the database is updated (in this case, right after the line of code with `_context.SaveChangesAsync()`), but before the integration event is published, the overall system could become inconsistent. This approach might be business critical, depending on the specific business operation you are dealing with. - -As mentioned earlier in the architecture section, you can have several approaches for dealing with this issue: - -- Using the full [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing). - -- Using transaction log mining. - -- Using the [Outbox pattern](https://www.kamilgrzybek.com/design/the-outbox-pattern/). This is a transactional table to store the integration events (extending the local transaction). - -For this scenario, using the full Event Sourcing (ES) pattern is one of the best approaches, if not *the* best. However, in many application scenarios, you might not be able to implement a full ES system. ES means storing only domain events in your transactional database, instead of storing current state data. Storing only domain events can have great benefits, such as having the history of your system available and being able to determine the state of your system at any moment in the past. However, implementing a full ES system requires you to rearchitect most of your system and introduces many other complexities and requirements. For example, you would want to use a database specifically made for event sourcing, such as [Event Store](https://eventstore.org/), or a document-oriented database such as Azure Cosmos DB, MongoDB, Cassandra, CouchDB, or RavenDB. ES is a great approach for this problem, but not the easiest solution unless you are already familiar with event sourcing. - -The option to use transaction log mining initially looks transparent. However, to use this approach, the microservice has to be coupled to your RDBMS transaction log, such as the SQL Server transaction log. This approach is probably not desirable. Another drawback is that the low-level updates recorded in the transaction log might not be at the same level as your high-level integration events. If so, the process of reverse-engineering those transaction log operations can be difficult. 
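To make the Outbox pattern listed above more concrete, the following is a minimal sketch of the kind of row such a transactional table could hold. The type, property, and state names are illustrative assumptions, not the eShopOnContainers implementation:

```csharp
public enum IntegrationEventState
{
    ReadyToPublish = 0,
    Published = 1
}

// One row per integration event, saved in the same database (and the same local
// transaction) as the domain data that produced the event.
public class IntegrationEventLogEntry
{
    public Guid EventId { get; set; }
    public string EventTypeName { get; set; }
    public string Content { get; set; }          // Serialized event payload (for example, JSON)
    public DateTime CreationTime { get; set; }
    public IntegrationEventState State { get; set; }
}
```

The balanced approach described next relies on exactly this kind of record: the entry is stored as "ready to publish" together with the domain changes, and its state is updated only after the event bus accepts the event.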
- -A balanced approach is a mix of a transactional database table and a simplified ES pattern. You can use a state such as "ready to publish the event," which you set in the original event when you commit it to the integration events table. You then try to publish the event to the event bus. If the publish-event action succeeds, you start another transaction in the origin service and move the state from "ready to publish the event" to "event already published." - -If the publish-event action in the event bus fails, the data still will not be inconsistent within the origin microservice—it is still marked as "ready to publish the event," and with respect to the rest of the services, it will eventually be consistent. You can always have background jobs checking the state of the transactions or integration events. If the job finds an event in the "ready to publish the event" state, it can try to republish that event to the event bus. - -Notice that with this approach, you are persisting only the integration events for each origin microservice, and only the events that you want to communicate to other microservices or external systems. In contrast, in a full ES system, you store all domain events as well. - -Therefore, this balanced approach is a simplified ES system. You need a list of integration events with their current state ("ready to publish" versus "published"). But you only need to implement these states for the integration events. And in this approach, you do not need to store all your domain data as events in the transactional database, as you would in a full ES system. - -If you are already using a relational database, you can use a transactional table to store integration events. To achieve atomicity in your application, you use a two-step process based on local transactions. Basically, you have an IntegrationEvent table in the same database where you have your domain entities. That table works as an insurance for achieving atomicity so that you include persisted integration events into the same transactions that are committing your domain data. - -Step by step, the process goes like this: - -1. The application begins a local database transaction. - -2. It then updates the state of your domain entities and inserts an event into the integration event table. - -3. Finally, it commits the transaction, so you get the desired atomicity and then - -4. You publish the event somehow (next). - -When implementing the steps of publishing the events, you have these choices: - -- Publish the integration event right after committing the transaction and use another local transaction to mark the events in the table as being published. Then, use the table just as an artifact to track the integration events in case of issues in the remote microservices, and perform compensatory actions based on the stored integration events. - -- Use the table as a kind of queue. A separate application thread or process queries the integration event table, publishes the events to the event bus, and then uses a local transaction to mark the events as published. - -Figure 6-22 shows the architecture for the first of these approaches. - -![Diagram of atomicity when publishing without a worker microservice.](./media/subscribe-events/atomicity-publish-event-bus.png) - -**Figure 6-22**. Atomicity when publishing events to the event bus - -The approach illustrated in Figure 6-22 is missing an additional worker microservice that is in charge of checking and confirming the success of the published integration events. 
In case of failure, that additional checker worker microservice can read events from the table and republish them, that is, repeat step number 2. - -About the second approach: you use the EventLog table as a queue and always use a worker microservice to publish the messages. In that case, the process is like that shown in Figure 6-23. This shows an additional microservice, and the table is the single source when publishing events. - -![Diagram of atomicity when publishing with a worker microservice.](./media/subscribe-events/atomicity-publish-worker-microservice.png) - -**Figure 6-23**. Atomicity when publishing events to the event bus with a worker microservice - -For simplicity, the eShopOnContainers sample uses the first approach (with no additional processes or checker microservices) plus the event bus. However, the eShopOnContainers sample is not handling all possible failure cases. In a real application deployed to the cloud, you must embrace the fact that issues will arise eventually, and you must implement that check and resend logic. Using the table as a queue can be more effective than the first approach if you have that table as a single source of events when publishing them (with the worker) through the event bus. - -### Implementing atomicity when publishing integration events through the event bus - -The following code shows how you can create a single transaction involving multiple DbContext objects—one context related to the original data being updated, and the second context related to the IntegrationEventLog table. - -The transaction in the example code below will not be resilient if connections to the database have any issue at the time when the code is running. This can happen in cloud-based systems like Azure SQL DB, which might move databases across servers. For implementing resilient transactions across multiple contexts, see the [Implementing resilient Entity Framework Core SQL connections](../implement-resilient-applications/implement-resilient-entity-framework-core-sql-connections.md) section later in this guide. - -For clarity, the following example shows the whole process in a single piece of code. However, the eShopOnContainers implementation is refactored and splits this logic into multiple classes so it's easier to maintain. - -```csharp -// Update Product from the Catalog microservice -// -public async Task UpdateProduct([FromBody]CatalogItem productToUpdate) -{ - var catalogItem = - await _catalogContext.CatalogItems.SingleOrDefaultAsync(i => i.Id == - productToUpdate.Id); - if (catalogItem == null) return NotFound(); - - bool raiseProductPriceChangedEvent = false; - IntegrationEvent priceChangedEvent = null; - - if (catalogItem.Price != productToUpdate.Price) - raiseProductPriceChangedEvent = true; - - if (raiseProductPriceChangedEvent) // Create event if price has changed - { - var oldPrice = catalogItem.Price; - priceChangedEvent = new ProductPriceChangedIntegrationEvent(catalogItem.Id, - productToUpdate.Price, - oldPrice); - } - // Update current product - catalogItem = productToUpdate; - - // Just save the updated product if the Product's Price hasn't changed. 
- if (!raiseProductPriceChangedEvent) - { - await _catalogContext.SaveChangesAsync(); - } - else // Publish to event bus only if product price changed - { - // Achieving atomicity between original DB and the IntegrationEventLog - // with a local transaction - using (var transaction = _catalogContext.Database.BeginTransaction()) - { - _catalogContext.CatalogItems.Update(catalogItem); - await _catalogContext.SaveChangesAsync(); - - await _integrationEventLogService.SaveEventAsync(priceChangedEvent); - - transaction.Commit(); - } - - // Publish the integration event through the event bus - _eventBus.Publish(priceChangedEvent); - - _integrationEventLogService.MarkEventAsPublishedAsync( - priceChangedEvent); - } - - return Ok(); -} -``` - -After the ProductPriceChangedIntegrationEvent integration event is created, the transaction that stores the original domain operation (update the catalog item) also includes the persistence of the event in the EventLog table. This makes it a single transaction, and you will always be able to check whether event messages were sent. - -The event log table is updated atomically with the original database operation, using a local transaction against the same database. If any of the operations fail, an exception is thrown and the transaction rolls back any completed operation, thus maintaining consistency between the domain operations and the event messages saved to the table. - -### Receiving messages from subscriptions: event handlers in receiver microservices - -In addition to the event subscription logic, you need to implement the internal code for the integration event handlers (like a callback method). The event handler is where you specify where the event messages of a certain type will be received and processed. - -An event handler first receives an event instance from the event bus. Then it locates the component to be processed related to that integration event, propagating and persisting the event as a change in state in the receiver microservice. For example, if a ProductPriceChanged event originates in the catalog microservice, it is handled in the basket microservice and changes the state in this receiver basket microservice as well, as shown in the following code. - -```csharp -namespace Microsoft.eShopOnContainers.Services.Basket.API.IntegrationEvents.EventHandling -{ - public class ProductPriceChangedIntegrationEventHandler : - IIntegrationEventHandler - { - private readonly IBasketRepository _repository; - - public ProductPriceChangedIntegrationEventHandler( - IBasketRepository repository) - { - _repository = repository; - } - - public async Task Handle(ProductPriceChangedIntegrationEvent @event) - { - var userIds = await _repository.GetUsers(); - foreach (var id in userIds) - { - var basket = await _repository.GetBasket(id); - await UpdatePriceInBasketItems(@event.ProductId, @event.NewPrice, basket); - } - } - - private async Task UpdatePriceInBasketItems(int productId, decimal newPrice, - CustomerBasket basket) - { - var itemsToUpdate = basket?.Items?.Where(x => int.Parse(x.ProductId) == - productId).ToList(); - if (itemsToUpdate != null) - { - foreach (var item in itemsToUpdate) - { - if(item.UnitPrice != newPrice) - { - var originalPrice = item.UnitPrice; - item.UnitPrice = newPrice; - item.OldUnitPrice = originalPrice; - } - } - await _repository.UpdateBasket(basket); - } - } - } -} -``` - -The event handler needs to verify whether the product exists in any of the basket instances. 
It also updates the item price for each related basket line item. Finally, it creates an alert to be displayed to the user about the price change, as shown in Figure 6-24. - -![Screenshot of a browser showing the price change notification on the user cart.](media/subscribe-events/display-item-price-change.png) - -**Figure 6-24**. Displaying an item price change in a basket, as communicated by integration events - -## Idempotency in update message events - -An important aspect of update message events is that a failure at any point in the communication should cause the message to be retried. Otherwise a background task might try to publish an event that has already been published, creating a race condition. Make sure that the updates are either idempotent or that they provide enough information to ensure that you can detect a duplicate, discard it, and send back only one response. - -As noted earlier, idempotency means that an operation can be performed multiple times without changing the result. In a messaging environment, as when communicating events, an event is idempotent if it can be delivered multiple times without changing the result for the receiver microservice. This may be necessary because of the nature of the event itself, or because of the way the system handles the event. Message idempotency is important in any application that uses messaging, not just in applications that implement the event bus pattern. - -An example of an idempotent operation is a SQL statement that inserts data into a table only if that data is not already in the table. It does not matter how many times you run that insert SQL statement; the result will be the same—the table will contain that data. Idempotency like this can also be necessary when dealing with messages if the messages could potentially be sent and therefore processed more than once. For instance, if retry logic causes a sender to send exactly the same message more than once, you need to make sure that it is idempotent. - -It is possible to design idempotent messages. For example, you can create an event that says "set the product price to $25" instead of "add $5 to the product price." You could safely process the first message any number of times and the result will be the same. That is not true for the second message. But even in the first case, you might not want to process the first event, because the system could also have sent a newer price-change event and you would be overwriting the new price. - -Another example might be an order-completed event that's propagated to multiple subscribers. The app has to make sure that order information is updated in other systems only once, even if there are duplicated message events for the same order-completed event. - -It is convenient to have some kind of identity per event so that you can create logic that enforces that each event is processed only once per receiver. - -Some message processing is inherently idempotent. For example, if a system generates image thumbnails, it might not matter how many times the message about the generated thumbnail is processed; the outcome is that the thumbnails are generated and they are the same every time. On the other hand, operations such as calling a payment gateway to charge a credit card may not be idempotent at all. In these cases, you need to ensure that processing a message multiple times has the effect that you expect. 
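One way to get that guarantee, as suggested above, is to give each event an identity and record the IDs that have already been handled. The following sketch is illustrative only; the event type, the `IProcessedEventStore` abstraction, and the notification service are hypothetical, and it assumes that every integration event carries a unique `Id`:

```csharp
public class OrderCompletedIntegrationEventHandler
    : IIntegrationEventHandler<OrderCompletedIntegrationEvent>
{
    private readonly IProcessedEventStore _processedEvents;
    private readonly IOrderNotificationService _notifications;

    public OrderCompletedIntegrationEventHandler(
        IProcessedEventStore processedEvents,
        IOrderNotificationService notifications)
    {
        _processedEvents = processedEvents;
        _notifications = notifications;
    }

    public async Task Handle(OrderCompletedIntegrationEvent @event)
    {
        // A duplicate delivery (for example, a RabbitMQ redelivery) becomes a no-op.
        if (await _processedEvents.HasBeenProcessedAsync(@event.Id))
        {
            return;
        }

        await _notifications.NotifyOrderCompletedAsync(@event.OrderId);

        // Record the event ID so that handling the same event again has no further effect.
        // Ideally this write happens in the same transaction as the state change above.
        await _processedEvents.MarkAsProcessedAsync(@event.Id);
    }
}
```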
- -### Additional resources - -- **Honoring message idempotency** \ - [https://learn.microsoft.com/previous-versions/msp-n-p/jj591565(v=pandp.10)#honoring-message-idempotency](/previous-versions/msp-n-p/jj591565(v=pandp.10)#honoring-message-idempotency) - -## Deduplicating integration event messages - -You can make sure that message events are sent and processed only once per subscriber at different levels. One way is to use a deduplication feature offered by the messaging infrastructure you are using. Another is to implement custom logic in your destination microservice. Having validations at both the transport level and the application level is your best bet. - -### Deduplicating message events at the EventHandler level - -One way to make sure that an event is processed only once by any receiver is by implementing certain logic when processing the message events in event handlers. For example, that is the approach used in the eShopOnContainers application, as you can see in the [source code of the UserCheckoutAcceptedIntegrationEventHandler class](https://github.com/dotnet-architecture/eShopOnContainers/blob/main/src/Services/Ordering/Ordering.API/Application/IntegrationEvents/EventHandling/UserCheckoutAcceptedIntegrationEventHandler.cs) when it receives a `UserCheckoutAcceptedIntegrationEvent` integration event. (In this case, the `CreateOrderCommand` is wrapped with an `IdentifiedCommand`, using the `eventMsg.RequestId` as an identifier, before sending it to the command handler). - -### Deduplicating messages when using RabbitMQ - -When intermittent network failures happen, messages can be duplicated, and the message receiver must be ready to handle these duplicated messages. If possible, receivers should handle messages in an idempotent way, which is better than explicitly handling them with deduplication. - -According to the [RabbitMQ documentation](https://www.rabbitmq.com/reliability.html#consumer), "If a message is delivered to a consumer and then requeued (because it was not acknowledged before the consumer connection dropped, for example) then RabbitMQ will set the redelivered flag on it when it is delivered again (whether to the same consumer or a different one). - -If the "redelivered" flag is set, the receiver must take that into account, because the message might already have been processed. But that is not guaranteed; the message might never have reached the receiver after it left the message broker, perhaps because of network issues. On the other hand, if the "redelivered" flag is not set, it is guaranteed that the message has not been sent more than once. Therefore, the receiver needs to deduplicate messages or process messages in an idempotent way only if the "redelivered" flag is set in the message. - -### Additional resources - -- **Forked eShopOnContainers using NServiceBus (Particular Software)** \ - - -- **Event Driven Messaging** \ - - -- **Jimmy Bogard. Refactoring Towards Resilience: Evaluating Coupling** \ - - -- **Publish-Subscribe channel** \ - - -- **Communicating Between Bounded Contexts** \ - [https://learn.microsoft.com/previous-versions/msp-n-p/jj591572(v=pandp.10)](/previous-versions/msp-n-p/jj591572(v=pandp.10)) - -- **Eventual Consistency** \ - - -- **Philip Brown. Strategies for Integrating Bounded Contexts** \ - - -- **Chris Richardson. Developing Transactional Microservices Using Aggregates, Event Sourcing and CQRS - Part 2** \ - - -- **Chris Richardson. 
Event Sourcing pattern** \ - - -- **Introducing Event Sourcing** \ - [https://learn.microsoft.com/previous-versions/msp-n-p/jj591559(v=pandp.10)](/previous-versions/msp-n-p/jj591559(v=pandp.10)) - -- **Event Store database**. Official site. \ - - -- **Patrick Nommensen. Event-Driven Data Management for Microservices** \ - - -- **The CAP Theorem** \ - - -- **What is CAP Theorem?** \ - - -- **Data Consistency Primer** \ - [https://learn.microsoft.com/previous-versions/msp-n-p/dn589800(v=pandp.10)](/previous-versions/msp-n-p/dn589800(v=pandp.10)) - -- **Rick Saling. The CAP Theorem: Why "Everything is Different" with the Cloud and Internet** \ - [https://learn.microsoft.com/archive/blogs/rickatmicrosoft/the-cap-theorem-why-everything-is-different-with-the-cloud-and-internet/](/archive/blogs/rickatmicrosoft/the-cap-theorem-why-everything-is-different-with-the-cloud-and-internet/) - -- **Eric Brewer. CAP Twelve Years Later: How the "Rules" Have Changed** \ - - -- **CAP, PACELC, and Microservices** \ - - -- **Azure Service Bus. Brokered Messaging: Duplicate Detection**\ - - -- **Reliability Guide** (RabbitMQ documentation) \ - - -> [!div class="step-by-step"] -> [Previous](rabbitmq-event-bus-development-test-environment.md) -> [Next](test-aspnet-core-services-web-apps.md) diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/test-aspnet-core-services-web-apps.md b/docs/architecture/microservices/multi-container-microservice-net-applications/test-aspnet-core-services-web-apps.md deleted file mode 100644 index bd4ecf5a9bd47..0000000000000 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/test-aspnet-core-services-web-apps.md +++ /dev/null @@ -1,208 +0,0 @@ ---- -title: Testing ASP.NET Core services and web apps -description: .NET Microservices Architecture for Containerized .NET Applications | Explore an architecture for testing ASP.NET Core services and web apps in containers. -ms.date: 01/13/2021 ---- - -# Testing ASP.NET Core services and web apps - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Controllers are a central part of any ASP.NET Core API service and ASP.NET MVC Web application. As such, you should have confidence they behave as intended for your application. Automated tests can provide you with this confidence and can detect errors before they reach production. - -You need to test how the controller behaves based on valid or invalid inputs, and test controller responses based on the result of the business operation it performs. However, you should have these types of tests for your microservices: - -- Unit tests. These tests ensure that individual components of the application work as expected. Assertions test the component API. - -- Integration tests. These tests ensure that component interactions work as expected against external artifacts like databases. Assertions can test component API, UI, or the side effects of actions like database I/O, logging, etc. - -- Functional tests for each microservice. These tests ensure that the application works as expected from the user's perspective. - -- Service tests. These tests ensure that end-to-end service use cases, including testing multiple services at the same time, are tested. For this type of testing, you need to prepare the environment first. In this case, it means starting the services (for example, by using docker-compose up). 
- -### Implementing unit tests for ASP.NET Core Web APIs - -Unit testing involves testing a part of an application in isolation from its infrastructure and dependencies. When you unit test controller logic, only the content of a single action or method is tested, not the behavior of its dependencies or of the framework itself. Unit tests do not detect issues in the interaction between components—that is the purpose of integration testing. - -As you unit test your controller actions, make sure you focus only on their behavior. A controller unit test avoids things like filters, routing, or model binding (the mapping of request data to a ViewModel or DTO). Because they focus on testing just one thing, unit tests are generally simple to write and quick to run. A well-written set of unit tests can be run frequently without much overhead. - -Unit tests are implemented based on test frameworks like xUnit.net, MSTest, Moq, or NUnit. For the eShopOnContainers sample application, we are using xUnit. - -When you write a unit test for a Web API controller, you instantiate the controller class directly using the new keyword in C\#, so that the test will run as fast as possible. The following example shows how to do this when using [xUnit](https://xunit.net/) as the Test framework. - -```csharp -[Fact] -public async Task Get_order_detail_success() -{ - //Arrange - var fakeOrderId = "12"; - var fakeOrder = GetFakeOrder(); - - //... - - //Act - var orderController = new OrderController( - _orderServiceMock.Object, - _basketServiceMock.Object, - _identityParserMock.Object); - - orderController.ControllerContext.HttpContext = _contextMock.Object; - var actionResult = await orderController.Detail(fakeOrderId); - - //Assert - var viewResult = Assert.IsType(actionResult); - Assert.IsAssignableFrom(viewResult.ViewData.Model); -} -``` - -### Implementing integration and functional tests for each microservice - -As noted, integration tests and functional tests have different purposes and goals. However, the way you implement both when testing ASP.NET Core controllers is similar, so in this section we concentrate on integration tests. - -Integration testing ensures that an application's components function correctly when assembled. ASP.NET Core supports integration testing using unit test frameworks and a built-in test web host that can be used to handle requests without network overhead. - -Unlike unit testing, integration tests frequently involve application infrastructure concerns, such as a database, file system, network resources, or web requests and responses. Unit tests use fakes or mock objects in place of these concerns. But the purpose of integration tests is to confirm that the system works as expected with these systems, so for integration testing you do not use fakes or mock objects. Instead, you include the infrastructure, like database access or service invocation from other services. - -Because integration tests exercise larger segments of code than unit tests, and because integration tests rely on infrastructure elements, they tend to be orders of magnitude slower than unit tests. Thus, it is a good idea to limit how many integration tests you write and run. - -ASP.NET Core includes a built-in test web host that can be used to handle HTTP requests without network overhead, meaning that you can run those tests faster than when using a real web host. The test web host (TestServer) is available in a NuGet component as Microsoft.AspNetCore.TestHost. 
It can be added to integration test projects and used to host ASP.NET Core applications. - -As you can see in the following code, when you create integration tests for ASP.NET Core controllers, you instantiate the controllers through the test host. This functionality is comparable to an HTTP request, but it runs faster. - -```csharp -public class PrimeWebDefaultRequestShould -{ - private readonly TestServer _server; - private readonly HttpClient _client; - - public PrimeWebDefaultRequestShould() - { - // Arrange - _server = new TestServer(new WebHostBuilder() - .UseStartup()); - _client = _server.CreateClient(); - } - - [Fact] - public async Task ReturnHelloWorld() - { - // Act - var response = await _client.GetAsync("/"); - response.EnsureSuccessStatusCode(); - var responseString = await response.Content.ReadAsStringAsync(); - // Assert - Assert.Equal("Hello World!", responseString); - } -} -``` - -#### Additional resources - -- **Steve Smith. Testing controllers** (ASP.NET Core) \ - [https://learn.microsoft.com/aspnet/core/mvc/controllers/testing](/aspnet/core/mvc/controllers/testing) - -- **Steve Smith. Integration testing** (ASP.NET Core) \ - [https://learn.microsoft.com/aspnet/core/test/integration-tests](/aspnet/core/test/integration-tests) - -- **Unit testing in .NET using dotnet test** \ - [https://learn.microsoft.com/dotnet/core/testing/unit-testing-with-dotnet-test](../../../core/testing/unit-testing-with-dotnet-test.md) - -- **xUnit.net**. Official site. \ - - -- **Unit Test Basics.** \ - [https://learn.microsoft.com/visualstudio/test/unit-test-basics](/visualstudio/test/unit-test-basics) - -- **Moq**. GitHub repo. \ - - -- **NUnit**. Official site. \ - - -### Implementing service tests on a multi-container application - -As noted earlier, when you test multi-container applications, all the microservices need to be running within the Docker host or container cluster. End-to-end service tests that include multiple operations involving several microservices require you to deploy and start the whole application in the Docker host by running docker-compose up (or a comparable mechanism if you are using an orchestrator). Once the whole application and all its services is running, you can execute end-to-end integration and functional tests. - -There are a few approaches you can use. In the docker-compose.yml file that you use to deploy the application at the solution level you can expand the entry point to use [dotnet test](../../../core/tools/dotnet-test.md). You can also use another compose file that would run your tests in the image you are targeting. By using another compose file for integration tests that includes your microservices and databases on containers, you can make sure that the related data is always reset to its original state before running the tests. - -Once the compose application is up and running, you can take advantage of breakpoints and exceptions if you are running Visual Studio. Or you can run the integration tests automatically in your CI pipeline in Azure DevOps Services or any other CI/CD system that supports Docker containers. - -## Testing in eShopOnContainers - -The reference application (eShopOnContainers) tests were recently restructured and now there are four categories: - -1. **Unit** tests, just plain old regular unit tests, contained in the **{MicroserviceName}.UnitTests** projects - -2. 
**Microservice functional/integration tests**, with test cases involving the infrastructure for each microservice but isolated from the others and are contained in the **{MicroserviceName}.FunctionalTests** projects. - -3. **Application functional/integration tests**, which focus on microservices integration, with test cases that exert several microservices. These tests are located in project **Application.FunctionalTests**. - -While unit and integration tests are organized in a test folder within the microservice project, application and load tests are managed separately under the root folder, as shown in Figure 6-25. - -![Screenshot of VS pointing out some of the test projects in the solution.](./media/test-aspnet-core-services-web-apps/eshoponcontainers-test-folder-structure.png) - -**Figure 6-25**. Test folder structure in eShopOnContainers - -Microservice and Application functional/integration tests are run from Visual Studio, using the regular tests runner, but first you need to start the required infrastructure services, with a set of docker-compose files contained in the solution test folder: - -**docker-compose-test.yml** - -```yml -version: '3.4' - -services: - redis.data: - image: redis:alpine - rabbitmq: - image: rabbitmq:3-management-alpine - sqldata: - image: mcr.microsoft.com/mssql/server:2017-latest - nosqldata: - image: mongo -``` - -**docker-compose-test.override.yml** - -```yml -version: '3.4' - -services: - redis.data: - ports: - - "6379:6379" - rabbitmq: - ports: - - "15672:15672" - - "5672:5672" - sqldata: - environment: - - SA_PASSWORD=[PLACEHOLDER] - - ACCEPT_EULA=Y - ports: - - "5433:1433" - nosqldata: - ports: - - "27017:27017" -``` - -[!INCLUDE [managed-identities](../../../includes/managed-identities.md)] - -So, to run the functional/integration tests you must first run this command, from the solution test folder: - -```console -docker-compose -f docker-compose-test.yml -f docker-compose-test.override.yml up -``` - -As you can see, these docker-compose files only start the Redis, RabbitMQ, SQL Server, and MongoDB microservices. - -### Additional resources - -- **Unit & Integration testing** on the eShopOnContainers \ - - -- **Load testing** on the eShopOnContainers \ - - -> [!div class="step-by-step"] -> [Previous](subscribe-events.md) -> [Next](background-tasks-with-ihostedservice.md) diff --git a/docs/architecture/microservices/net-core-net-framework-containers/container-framework-choice-factors.md b/docs/architecture/microservices/net-core-net-framework-containers/container-framework-choice-factors.md deleted file mode 100644 index e65e3d90cb0d8..0000000000000 --- a/docs/architecture/microservices/net-core-net-framework-containers/container-framework-choice-factors.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: Decision table. .NET implementations to use for Docker -description: .NET Microservices Architecture for Containerized .NET Applications | Decision table, .NET implementations to use for Docker -ms.date: 11/19/2021 ---- -# Decision table: .NET implementations to use for Docker - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -The following decision table summarizes whether to use .NET Framework or .NET 8. Remember that for Linux containers, you need Linux-based Docker hosts (VMs or servers), and that for Windows Containers, you need Windows Server-based Docker hosts (VMs or servers). - -> [!IMPORTANT] -> Your development machines will run one Docker host, either Linux or Windows. 
Related microservices that you want to run and test together in one solution will all need to run on the same container platform. - -| Architecture / App type | Linux containers | Windows containers | -|-------------------------|------------------|--------------------| -| Microservices on containers | .NET 8 | .NET 8 | -| Monolithic app | .NET 8 | .NET Framework<br/>.NET 8 | -| Best-in-class performance and scalability | .NET 8 | .NET 8 | -| Windows Server legacy app ("brown-field") migration to containers | -- | .NET Framework | -| New container-based development ("green-field") | .NET 8 | .NET 8 | -| ASP.NET Core | .NET 8 | .NET 8 (recommended)<br/>.NET Framework | -| ASP.NET 4 (MVC 5, Web API 2, and Web Forms) | -- | .NET Framework | -| SignalR services | .NET Core 2.1 or higher version | .NET Framework<br/>.NET Core 2.1 or higher version | -| WCF, WF, and other legacy frameworks | WCF in .NET Core (client library only) or [CoreWCF](https://www.nuget.org/profiles/corewcf) | .NET Framework<br/>WCF in .NET 8 (client library only) or [CoreWCF](https://www.nuget.org/profiles/corewcf) | -| Consumption of Azure services | .NET 8<br/>(eventually most Azure services will provide client SDKs for .NET 8) | .NET Framework<br/>.NET 8<br/>(eventually most Azure services will provide client SDKs for .NET 8) | - ->[!div class="step-by-step"] ->[Previous](net-framework-container-scenarios.md) ->[Next](net-container-os-targets.md) diff --git a/docs/architecture/microservices/net-core-net-framework-containers/general-guidance.md b/docs/architecture/microservices/net-core-net-framework-containers/general-guidance.md deleted file mode 100644 index f6520df2e0ff0..0000000000000 --- a/docs/architecture/microservices/net-core-net-framework-containers/general-guidance.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: General guidance -description: .NET Microservices Architecture for Containerized .NET Applications | General guidance -ms.date: 11/19/2021 ---- -# General guidance - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -This section provides a summary of when to choose .NET 8 or .NET Framework. We provide more details about these choices in the sections that follow. - -Use .NET 8, with Linux or Windows Containers, for your containerized Docker server application when: - -- You have cross-platform needs. For example, you want to use both Linux and Windows Containers. - -- Your application architecture is based on microservices. - -- You need to start containers fast and want a small footprint per container to achieve better density or more containers per hardware unit in order to lower your costs. - -In short, when you create new containerized .NET applications, you should consider .NET 8 as the default choice. It has many benefits and fits best with the containers philosophy and style of working. - -An extra benefit of using .NET 8 is that you can run side-by-side .NET versions for applications within the same machine. This benefit is more important for servers or VMs that do not use containers, because containers isolate the versions of .NET that the app needs. (As long as they are compatible with the underlying OS.) - -Use .NET Framework for your containerized Docker server application when: - -- Your application currently uses .NET Framework and has strong dependencies on Windows. - -- You need to use Windows APIs that are not supported by .NET 8. - -- You need to use third-party .NET libraries or NuGet packages that are not available for .NET 8. - -Using .NET Framework on Docker can improve your deployment experiences by minimizing deployment issues. This ["lift and shift" scenario](https://aka.ms/liftandshiftwithcontainersebook) is important for containerizing legacy applications that were originally developed with the traditional .NET Framework, like ASP.NET WebForms, MVC web apps, or WCF (Windows Communication Foundation) services.
- -### Additional resources - -- **E-book: Modernize existing .NET Framework applications with Azure and Windows Containers** - - -- **Sample apps: Modernization of legacy ASP.NET web apps by using Windows Containers** - - ->[!div class="step-by-step"] ->[Previous](index.md) ->[Next](net-core-container-scenarios.md) diff --git a/docs/architecture/microservices/net-core-net-framework-containers/index.md b/docs/architecture/microservices/net-core-net-framework-containers/index.md deleted file mode 100644 index 18029f0043ad9..0000000000000 --- a/docs/architecture/microservices/net-core-net-framework-containers/index.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Choosing Between .NET and .NET Framework for Docker Containers -description: .NET Microservices Architecture for Containerized .NET Applications | Choosing Between .NET and .NET Framework for Docker Containers -ms.date: 11/19/2021 ---- -# Choosing Between .NET and .NET Framework for Docker Containers - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -There are two supported frameworks for building server-side containerized Docker applications with .NET: [.NET Framework and .NET 8](https://dotnet.microsoft.com/download). They share many .NET platform components, and you can share code across the two. However, there are fundamental differences between them, and which framework you use will depend on what you want to accomplish. This section provides guidance on when to choose each framework. - ->[!div class="step-by-step"] ->[Previous](../container-docker-introduction/docker-containers-images-registries.md) ->[Next](general-guidance.md) diff --git a/docs/architecture/microservices/net-core-net-framework-containers/media/net-container-os-targets/targeting-operating-systems.png b/docs/architecture/microservices/net-core-net-framework-containers/media/net-container-os-targets/targeting-operating-systems.png deleted file mode 100644 index e494271855646..0000000000000 Binary files a/docs/architecture/microservices/net-core-net-framework-containers/media/net-container-os-targets/targeting-operating-systems.png and /dev/null differ diff --git a/docs/architecture/microservices/net-core-net-framework-containers/net-container-os-targets.md b/docs/architecture/microservices/net-core-net-framework-containers/net-container-os-targets.md deleted file mode 100644 index eb3a7c9293584..0000000000000 --- a/docs/architecture/microservices/net-core-net-framework-containers/net-container-os-targets.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: What OS to target with .NET containers -description: .NET Microservices Architecture for Containerized .NET Applications | What OS to target with .NET containers -ms.date: 01/13/2021 -ms.custom: linux-related-content ---- - -# What OS to target with .NET containers - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Given the diversity of operating systems supported by Docker and the differences between .NET Framework and .NET 8, you should target a specific OS and specific versions depending on the framework you are using. - -For Windows, you can use Windows Server Core or Windows Nano Server. These Windows versions provide different characteristics (IIS in Windows Server Core versus a self-hosted web server like Kestrel in Nano Server) that might be needed by .NET Framework or .NET 8, respectively. - -For Linux, multiple distros are available and supported in official .NET Docker images (like Debian). 
- -In Figure 3-1, you can see the possible OS version depending on the .NET framework used. - -![Diagram showing what OS to use with which .NET containers.](./media/net-container-os-targets/targeting-operating-systems.png) - -**Figure 3-1.** Operating systems to target depending on versions of the .NET framework - -When deploying legacy .NET Framework applications you have to target Windows Server Core, compatible with legacy apps and IIS, but it has a larger image. When deploying .NET 8 applications, you can target Windows Nano Server, which is cloud optimized, uses Kestrel and is smaller and starts faster. You can also target Linux, supporting Debian, Alpine, and others. - -You can also create your own Docker image in cases where you want to use a different Linux distro or where you want an image with versions not provided by Microsoft. For example, you might create an image with ASP.NET Core running on the traditional .NET Framework and Windows Server Core, which is a not-so-common scenario for Docker. - -When you add the image name to your Dockerfile file, you can select the operating system and version depending on the tag you use, as in the following examples: - -| Image | Comments | -|-------|----------| -| mcr.microsoft.com/dotnet/runtime:8.0 | .NET 8 multi-architecture: Supports Linux and Windows Nano Server depending on the Docker host. | -| mcr.microsoft.com/dotnet/aspnet:8.0 | ASP.NET Core 8.0 multi-architecture: Supports Linux and Windows Nano Server depending on the Docker host.
The aspnetcore image has a few optimizations for ASP.NET Core. | -| mcr.microsoft.com/dotnet/aspnet:8.0-bullseye-slim | .NET 8 runtime-only on Linux Debian distro | -| mcr.microsoft.com/dotnet/aspnet:8.0-nanoserver-1809 | .NET 8 runtime-only on Windows Nano Server (Windows Server version 1809) | - -> [!div class="step-by-step"] -> [Previous](container-framework-choice-factors.md) -> [Next](official-net-docker-images.md) diff --git a/docs/architecture/microservices/net-core-net-framework-containers/net-core-container-scenarios.md b/docs/architecture/microservices/net-core-net-framework-containers/net-core-container-scenarios.md deleted file mode 100644 index 58ea34998df98..0000000000000 --- a/docs/architecture/microservices/net-core-net-framework-containers/net-core-container-scenarios.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: When to choose .NET 8 for Docker containers -description: .NET Microservices Architecture for Containerized .NET Applications | When to choose .NET for Docker containers -ms.date: 11/19/2021 ---- -# When to choose .NET for Docker containers - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -The modularity and lightweight nature of .NET 8 makes it perfect for containers. When you deploy and start a container, its image is far smaller with .NET 8 than with .NET Framework. In contrast, to use .NET Framework for a container, you must base your image on the Windows Server Core image, which is a lot heavier than the Windows Nano Server or Linux images that you use for .NET 8. - -Additionally, .NET 8 is cross-platform, so you can deploy server apps with Linux or Windows container images. However, if you are using the traditional .NET Framework, you can only deploy images based on Windows Server Core. - -The following is a more detailed explanation of why to choose .NET 8. - -## Developing and deploying cross platform - -Clearly, if your goal is to have an application (web app or service) that can run on multiple platforms supported by Docker (Linux and Windows), the right choice is .NET 8, because .NET Framework only supports Windows. - -.NET 8 also supports macOS as a development platform. However, when you deploy containers to a Docker host, that host must (currently) be based on Linux or Windows. For example, in a development environment, you could use a Linux VM running on a Mac. - -[Visual Studio](https://www.visualstudio.com/vs/) provides an integrated development environment (IDE) for Windows and supports Docker development. - -You can also use [Visual Studio Code](https://code.visualstudio.com/) on macOS, Linux, and Windows. Visual Studio Code fully supports .NET 8, including IntelliSense and debugging. Because VS Code is a lightweight editor, you can use it to develop containerized apps on the machine in conjunction with the Docker CLI and the [.NET CLI](../../../core/tools/index.md). You can also target .NET 8 with most third-party editors like Sublime, Emacs, vi, and the open-source OmniSharp project, which also provides IntelliSense support. - -In addition to the IDEs and editors, you can use the [.NET CLI](../../../core/tools/index.md) for all supported platforms. - -## Using containers for new ("green-field") projects - -Containers are commonly used in conjunction with a microservices architecture, although they can also be used to containerize web apps or services that follow any architectural pattern. 
You can use .NET Framework on Windows Containers, but the modularity and lightweight nature of .NET 8 makes it perfect for containers and microservices architectures. When you create and deploy a container, its image is far smaller with .NET 8 than with .NET Framework. - -## Create and deploy microservices on containers - -You could use the traditional .NET Framework for building microservices-based applications (without containers) by using plain processes. That way, because the .NET Framework is already installed and shared across processes, processes are light and fast to start. However, if you are using containers, the image for the traditional .NET Framework is also based on Windows Server Core and that makes it too heavy for a microservices-on-containers approach. That said, teams have been looking for opportunities to improve the experience for .NET Framework users as well. Recently, the size of [Windows Server Core container images has been reduced by more than 40%](https://devblogs.microsoft.com/dotnet/we-made-windows-server-core-container-images-40-smaller). - -On the other hand, .NET 8 is the best candidate if you're embracing a microservices-oriented system that is based on containers because .NET 8 is lightweight. In addition, its related container images, for either Linux or Windows Nano Server, are lean and small, making containers light and fast to start. - -A microservice is meant to be as small as possible: to be light when spinning up, to have a small footprint, to have a small Bounded Context (check DDD, [Domain-Driven Design](https://en.wikipedia.org/wiki/Domain-driven_design)), to represent a small area of concerns, and to be able to start and stop fast. For those requirements, you will want to use small and fast-to-instantiate container images like the .NET 8 container image. - -A microservices architecture also allows you to mix technologies across a service boundary. This approach enables a gradual migration to .NET 8 for new microservices that work in conjunction with other microservices or with services developed with Node.js, Python, Java, GoLang, or other technologies. - -## Deploying high density in scalable systems - -When your container-based system needs the best possible density, granularity, and performance, .NET and ASP.NET Core are your best options. ASP.NET Core is up to 10 times faster than ASP.NET in the traditional .NET Framework, and it outperforms other popular industry technologies for microservices, such as Java servlets, Go, and Node.js.
- ->[!div class="step-by-step"] ->[Previous](general-guidance.md) ->[Next](net-framework-container-scenarios.md) diff --git a/docs/architecture/microservices/net-core-net-framework-containers/net-framework-container-scenarios.md b/docs/architecture/microservices/net-core-net-framework-containers/net-framework-container-scenarios.md deleted file mode 100644 index 07ea62f75b136..0000000000000 --- a/docs/architecture/microservices/net-core-net-framework-containers/net-framework-container-scenarios.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -title: When to choose .NET Framework for Docker containers -description: .NET Microservices Architecture for Containerized .NET Applications | When to choose .NET Framework for Docker containers -ms.date: 12/14/2023 ---- -# When to choose .NET Framework for Docker containers - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -While .NET 8 offers significant benefits for new applications and application patterns, .NET Framework will continue to be a good choice for many existing scenarios. - -## Migrating existing applications directly to a Windows Server container - -You might want to use Docker containers just to simplify deployment, even if you are not creating microservices. For example, perhaps you want to improve your DevOps workflow with Docker—containers can give you better isolated test environments and can also eliminate deployment issues caused by missing dependencies when you move to a production environment. In cases like these, even if you are deploying a monolithic application, it makes sense to use Docker and Windows Containers for your current .NET Framework applications. - -In most cases for this scenario, you will not need to migrate your existing applications to .NET 8; you can use Docker containers that include the traditional .NET Framework. However, a recommended approach is to use .NET 8 as you extend an existing application, such as writing a new service in ASP.NET Core. - -## Using third-party .NET libraries or NuGet packages not available for .NET 8 - -Third-party libraries are quickly embracing [.NET Standard](../../../standard/net-standard.md), which enables code sharing across all .NET flavors, including .NET 8. With .NET Standard 2.0 and later, the API surface compatibility across different frameworks has become significantly larger. Even more, .NET Core 2.x and newer applications can also directly reference existing .NET Framework libraries (see [.NET Framework 4.6.1 supporting .NET Standard 2.0](https://github.com/dotnet/standard/blob/v2.1.0/docs/planning/netstandard-2.0/README.md#net-framework-461-supporting-net-standard-20)). - -In addition, the [Windows Compatibility Pack](../../../core/porting/windows-compat-pack.md) extends the API surface available for .NET Standard 2.0 on Windows. This pack allows recompiling most existing code to .NET Standard 2.x with little or no modification, to run on Windows. - -However, even with that exceptional progression since .NET Standard 2.0 and .NET Core 2.1 or later, there might be cases where certain NuGet packages need Windows to run and might not support .NET Core or later. If those packages are critical for your application, then you will need to use .NET Framework on Windows Containers. - -## Using .NET technologies not available for .NET 8 - -Some .NET Framework technologies aren't available in .NET 8. Some of them might become available in later releases, but others don't fit the new application patterns targeted by .NET Core and might never be available. 
- -The following list shows most of the technologies that aren't available in .NET 8: - -- ASP.NET Web Forms. This technology is only available on .NET Framework. Currently there are no plans to bring ASP.NET Web Forms to .NET or later. - -- Workflow-related services. Windows Workflow Foundation (WF), Workflow Services (WCF + WF in a single service), and WCF Data Services (formerly known as ADO.NET Data Services) are only available on .NET Framework. There are currently no plans to bring them to .NET 8. - -In addition to the technologies listed in the official [.NET roadmap](https://github.com/dotnet/core/blob/main/roadmap.md), other features might be ported to the new [unified .NET platform](https://devblogs.microsoft.com/dotnet/introducing-net-5/). You might consider participating in the discussions on GitHub so that your voice can be heard. And if you think something is missing, file a new issue in the [dotnet/runtime](https://github.com/dotnet/runtime/issues/new) GitHub repository. - -## Using a platform or API that doesn't support .NET 8 - -Some Microsoft and third-party platforms don't support .NET 8. For example, some Azure services provide an SDK that isn't yet available for consumption on .NET 8 yet. Most Azure SDK should eventually be ported to .NET 8/.NET Standard, but some might not for several reasons. You can see the available Azure SDKs in the [Azure SDK Latest Releases](https://azure.github.io/azure-sdk/releases/latest/index.html) page. - -In the meantime, if any platform or service in Azure still doesn't support .NET 8 with its client API, you can use the equivalent REST API from the Azure service or the client SDK on .NET Framework. - -## Porting existing ASP.NET application to .NET 8 - -.NET Core is a revolutionary step forward from .NET Framework. It offers a host of advantages over .NET Framework across the board from productivity to performance, and from cross-platform support to developer satisfaction. - -### Additional resources - -- **.NET fundamentals** \ - [https://learn.microsoft.com/dotnet/fundamentals](../../../fundamentals/index.yml) - -- **Porting Projects to .NET 5** \ - [https://learn.microsoft.com/events/dotnetconf-2020/porting-projects-to-net-5](/Events/dotnetConf/2020/Porting-Projects-to-NET-5) - -- **.NET on Docker Guide** \ - [https://learn.microsoft.com/dotnet/core/docker/introduction](../../../core/docker/introduction.md) - ->[!div class="step-by-step"] ->[Previous](net-core-container-scenarios.md) ->[Next](container-framework-choice-factors.md) diff --git a/docs/architecture/microservices/net-core-net-framework-containers/official-net-docker-images.md b/docs/architecture/microservices/net-core-net-framework-containers/official-net-docker-images.md deleted file mode 100644 index db451b460f4f1..0000000000000 --- a/docs/architecture/microservices/net-core-net-framework-containers/official-net-docker-images.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: Official .NET Docker images -description: .NET Microservices Architecture for Containerized .NET Applications | Official .NET Docker images -ms.date: 11/19/2021 ---- - -# Official .NET Docker images - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -The Official .NET Docker images are Docker images created and optimized by Microsoft. They're publicly available on [Microsoft Artifact Registry](https://mcr.microsoft.com/). You can search over the catalog to find all .NET image repositories, for example [.NET SDK](https://mcr.microsoft.com/product/dotnet/sdk/about) repository. 
- -Each repository can contain multiple images, depending on .NET versions, and depending on the OS and versions (Linux Debian, Linux Alpine, Windows Nano Server, Windows Server Core, and so on). Image repositories provide extensive tagging to help you select not just a specific framework version, but also to choose an OS (Linux distribution or Windows version). - -## .NET and Docker image optimizations for development versus production - -When building Docker images for developers, Microsoft focused on the following main scenarios: - -- Images used to *develop* and build .NET apps. - -- Images used to *run* .NET apps. - -Why multiple images? When developing, building, and running containerized applications, you usually have different priorities. By providing different images for these separate tasks, Microsoft helps optimize the separate processes of developing, building, and deploying apps. - -### During development and build - -During development, what is important is how fast you can iterate changes, and the ability to debug the changes. The size of the image isn't as important as the ability to make changes to your code and see the changes quickly. Some tools and "build-agent containers", use the development .NET image (*mcr.microsoft.com/dotnet/sdk:8.0*) during development and build process. When building inside a Docker container, the important aspects are the elements that are needed to compile your app. This includes the compiler and any other .NET dependencies. - -Another great option is development containers. These containers are prebuilt development environments that are ready to use—you don't have to worry about dependencies and configurations. They are also easy to customize to include additional tools or dependencies. Development containers provide a consistent and reproducible setup that's easy to share with your team. Development containers conform to the [Development Container Specification], and many popular developer tools, including Visual Studio Code and GitHub Codespaces, support them. The .NET dev containers are based on the .NET SDK image and include the .NET SDK, runtime, and other tools you need to develop .NET applications. - -[Development Container Specification]: https://containers.dev/implementors/spec/ - -Why is this type of build image important? You don't deploy this image to production. Instead, it's an image that you use to build the content you place into a production image. This image would be used in your continuous integration (CI) environment or build environment when using Docker multi-stage builds. - -### In production - -What is important in production is how fast you can deploy and start your containers based on a production .NET image. Therefore, the runtime-only image based on *mcr.microsoft.com/dotnet/aspnet:8.0* is small so that it can travel quickly across the network from your Docker registry to your Docker hosts. The contents are ready to run, enabling the fastest time from starting the container to processing results. In the Docker model, there is no need for compilation from C\# code, as there's when you run dotnet build or dotnet publish when using the build container. - -In this optimized image, you put only the binaries and other content needed to run the application. For example, the content created by `dotnet publish` contains only the compiled .NET binaries, images, .js, and .css files. Over time, you'll see images that contain pre-jitted (the compilation from IL to native that occurs at run time) packages. 
- -Although there are multiple versions of the .NET and ASP.NET Core images, they all share one or more layers, including the base layer. Therefore, the amount of disk space needed to store an image is small; it consists only of the delta between your custom image and its base image. The result is that it's quick to pull the image from your registry. - -When you explore the .NET image repositories at Microsoft Artifact Registry, you'll find multiple image versions classified or marked with tags. These tags help to decide which one to use, depending on the version you need, like those in the following table: - -| Image | Comments | -|-------|----------| -| mcr.microsoft.com/dotnet/aspnet:**8.0** | ASP.NET Core, with runtime only and ASP.NET Core optimizations, on Linux and Windows (multi-arch) | -| mcr.microsoft.com/dotnet/sdk:**8.0** | .NET 8, with SDKs included, on Linux and Windows (multi-arch) | - -You can find all the available docker images in [dotnet-docker](https://github.com/dotnet/dotnet-docker) and also refer to the latest preview releases by using nightly build `mcr.microsoft.com/dotnet/nightly/*` - -> [!div class="step-by-step"] -> [Previous](net-container-os-targets.md) -> [Next](../architect-microservice-container-applications/index.md) diff --git a/docs/architecture/microservices/secure-net-microservices-web-applications/authorization-net-microservices-web-applications.md b/docs/architecture/microservices/secure-net-microservices-web-applications/authorization-net-microservices-web-applications.md deleted file mode 100644 index d31c6d5694676..0000000000000 --- a/docs/architecture/microservices/secure-net-microservices-web-applications/authorization-net-microservices-web-applications.md +++ /dev/null @@ -1,147 +0,0 @@ ---- -title: About authorization in .NET microservices and web applications -description: Security in .NET Microservices and Web Applications - Get an overview of the main authorization options in ASP.NET Core applications - role-based and policy-based. -author: mjrousos -ms.date: 01/30/2020 ---- -# About authorization in .NET microservices and web applications - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -After authentication, ASP.NET Core Web APIs need to authorize access. This process allows a service to make APIs available to some authenticated users, but not to all. [Authorization](/aspnet/core/security/authorization/introduction) can be done based on users' roles or based on custom policy, which might include inspecting claims or other heuristics. - -Restricting access to an ASP.NET Core MVC route is as easy as applying an Authorize attribute to the action method (or to the controller's class if all the controller's actions require authorization), as shown in following example: - -```csharp -public class AccountController : Controller -{ - public ActionResult Login() - { - } - - [Authorize] - public ActionResult Logout() - { - } -} -``` - -By default, adding an Authorize attribute without parameters will limit access to authenticated users for that controller or action. To further restrict an API to be available for only specific users, the attribute can be expanded to specify required roles or policies that users must satisfy. - -## Implement role-based authorization - -ASP.NET Core Identity has a built-in concept of roles. In addition to users, ASP.NET Core Identity stores information about different roles used by the application and keeps track of which users are assigned to which roles. 
These assignments can be changed programmatically with the `RoleManager` type that updates roles in persisted storage, and the `UserManager` type that can grant or revoke roles from users. - -If you're authenticating with JWT bearer tokens, the ASP.NET Core JWT bearer authentication middleware will populate a user's roles based on role claims found in the token. To limit access to an MVC action or controller to users in specific roles, you can include a Roles parameter in the Authorize annotation (attribute), as shown in the following code fragment: - -```csharp -[Authorize(Roles = "Administrator, PowerUser")] -public class ControlPanelController : Controller -{ - public ActionResult SetTime() - { - } - - [Authorize(Roles = "Administrator")] - public ActionResult ShutDown() - { - } -} -``` - -In this example, only users in the Administrator or PowerUser roles can access APIs in the ControlPanel controller (such as executing the SetTime action). The ShutDown API is further restricted to allow access only to users in the Administrator role. - -To require that a user be in multiple roles, you use multiple Authorize attributes, as shown in the following example: - -```csharp -[Authorize(Roles = "Administrator, PowerUser")] -[Authorize(Roles = "RemoteEmployee")] -[Authorize(Policy = "CustomPolicy")] -public ActionResult API1() -{ -} -``` - -In this example, to call API1, a user must: - -- Be in the Administrator *or* PowerUser role, *and* - -- Be in the RemoteEmployee role, *and* - -- Satisfy a custom handler for CustomPolicy authorization. - -## Implement policy-based authorization - -Custom authorization rules can also be written using [authorization policies](https://docs.asp.net/en/latest/security/authorization/policies.html). This section provides an overview. For more information, see the [ASP.NET Authorization Workshop](https://github.com/blowdart/AspNetAuthorizationWorkshop). - -Custom authorization policies are registered in the Startup.ConfigureServices method using the services.AddAuthorization method. This method takes a delegate that configures an AuthorizationOptions argument. - -```csharp -services.AddAuthorization(options => -{ - options.AddPolicy("AdministratorsOnly", policy => - policy.RequireRole("Administrator")); - - options.AddPolicy("EmployeesOnly", policy => - policy.RequireClaim("EmployeeNumber")); - - options.AddPolicy("Over21", policy => - policy.Requirements.Add(new MinimumAgeRequirement(21))); -}); -``` - -As shown in the example, policies can be associated with different types of requirements. After the policies are registered, they can be applied to an action or controller by passing the policy's name as the Policy argument of the Authorize attribute (for example, `[Authorize(Policy="EmployeesOnly")]`). Policies can have multiple requirements, not just one (as shown in these examples). - -In the previous example, the first AddPolicy call is just an alternative way of authorizing by role. If `[Authorize(Policy="AdministratorsOnly")]` is applied to an API, only users in the Administrator role will be able to access it. - -The second call demonstrates an easy way to require that a particular claim should be present for the user. The RequireClaim method also optionally takes expected values for the claim. If values are specified, the requirement is met only if the user has both a claim of the correct type and one of the specified values. If you're using the JWT bearer authentication middleware, all JWT properties will be available as user claims.
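To make the claim-with-values case concrete, the following sketch registers a policy that succeeds only when the user carries a specific claim with one of a set of allowed values. The policy name, claim type, and claim values are hypothetical and aren't part of the preceding examples:

```csharp
// A sketch of a claim-based policy that also checks the claim's value.
// "Department", "HR", and "Finance" are illustrative names only.
services.AddAuthorization(options =>
{
    options.AddPolicy("HROrFinanceOnly", policy =>
        policy.RequireClaim("Department", "HR", "Finance"));
});
```

An action or controller decorated with `[Authorize(Policy = "HROrFinanceOnly")]` would then reject authenticated users whose claims don't include a matching value.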
- -The most interesting policy shown here is in the third `AddPolicy` method, because it uses a custom authorization requirement. By using custom authorization requirements, you can have a great deal of control over how authorization is performed. For this to work, you must implement these types: - -- A requirement type that derives from `IAuthorizationRequirement` and that contains fields specifying the details of the requirement. In the example, this is an age field for the sample `MinimumAgeRequirement` type. - -- A handler that implements `AuthorizationHandler<TRequirement>`, where `TRequirement` is the type of requirement that the handler can satisfy. The handler must implement the `HandleRequirementAsync` method, which checks whether a specified context that contains information about the user satisfies the requirement. - -If the user meets the requirement, a call to `context.Succeed` will indicate that the user is authorized. If there are multiple ways that a user might satisfy an authorization requirement, multiple handlers can be created. - -In addition to registering custom policy requirements with `AddPolicy` calls, you also need to register custom requirement handlers via Dependency Injection (for example, `services.AddTransient<IAuthorizationHandler, MinimumAgeHandler>()` for a handler of the sample age requirement). - -An example of a custom authorization requirement and handler for checking a user's age (based on a `DateOfBirth` claim) is available in the ASP.NET Core [authorization documentation](https://docs.asp.net/en/latest/security/authorization/policies.html). - -## Authorization and minimal APIs - -ASP.NET Core supports minimal APIs as an alternative to controller-based APIs. Authorization policies are the recommended way to configure authorization for minimal APIs, as this example demonstrates: - -```csharp -// Program.cs -builder.Services.AddAuthorizationBuilder() - .AddPolicy("admin_greetings", policy => - policy - .RequireRole("admin") - .RequireScope("greetings_api")); - -// build the app - -app.MapGet("/hello", () => "Hello world!") - .RequireAuthorization("admin_greetings"); -``` - -## Additional resources - -- **ASP.NET Core Authentication** \ - [https://learn.microsoft.com/aspnet/core/security/authentication/identity](/aspnet/core/security/authentication/identity) - -- **ASP.NET Core Authorization** \ - [https://learn.microsoft.com/aspnet/core/security/authorization/introduction](/aspnet/core/security/authorization/introduction) - -- **Role-based Authorization** \ - [https://learn.microsoft.com/aspnet/core/security/authorization/roles](/aspnet/core/security/authorization/roles) - -- **Custom Policy-Based Authorization** \ - [https://learn.microsoft.com/aspnet/core/security/authorization/policies](/aspnet/core/security/authorization/policies) - -- **Authentication and authorization in minimal APIs** \ [https://learn.microsoft.com/aspnet/core/fundamentals/minimal-apis/security](/aspnet/core/fundamentals/minimal-apis/security) - ->[!div class="step-by-step"] ->[Previous](index.md) ->[Next](developer-app-secrets-storage.md) diff --git a/docs/architecture/microservices/secure-net-microservices-web-applications/azure-key-vault-protects-secrets.md b/docs/architecture/microservices/secure-net-microservices-web-applications/azure-key-vault-protects-secrets.md deleted file mode 100644 index 4db70c7ef9e13..0000000000000 --- a/docs/architecture/microservices/secure-net-microservices-web-applications/azure-key-vault-protects-secrets.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: Using Azure Key Vault to protect secrets at production time -description: Security in .NET Microservices and Web Applications - Azure Key Vault is an excellent way to handle application secrets that are
completely controlled by administrators. Administrators can even assign and revoke development values without developers having to handle them. -author: mjrousos -ms.date: 01/30/2020 ---- -# Use Azure Key Vault to protect secrets at production time - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -Secrets stored as environment variables or stored by the Secret Manager tool are still stored locally and unencrypted on the machine. A more secure option for storing secrets is [Azure Key Vault](https://azure.microsoft.com/services/key-vault/), which provides a secure, central location for storing keys and secrets. - -The **Azure.Extensions.AspNetCore.Configuration.Secrets** package allows an ASP.NET Core application to read configuration information from Azure Key Vault. To start using secrets from an Azure Key Vault, you follow these steps: - -1. Register your application as an Azure AD application. (Access to key vaults is managed by Azure AD.) This can be done through the Azure management portal.\ - - Alternatively, if you want your application to authenticate using a certificate instead of a password or client secret, you can use the [New-AzADApplication](/powershell/module/az.resources/new-azadapplication) PowerShell cmdlet. The certificate that you register with Azure Key Vault needs only your public key. Your application will use the private key. - -2. Give the registered application access to the key vault by creating a new service principal. You can do this using the following PowerShell commands: - - ```powershell - $sp = New-AzADServicePrincipal -ApplicationId "" - Set-AzKeyVaultAccessPolicy -VaultName "" -ServicePrincipalName $sp.ServicePrincipalNames[0] -PermissionsToSecrets all -ResourceGroupName "" - ``` - -3. Include the key vault as a configuration source in your application by calling the AzureKeyVaultConfigurationExtensions.AddAzureKeyVault extension method when you create an `IConfigurationRoot` instance. - -Note that calling `AddAzureKeyVault` requires the application ID that was registered and given access to the key vault in the previous steps. Alternatively, you can first run the Azure CLI command `az login` and then use an overload of `AddAzureKeyVault` that takes a `DefaultAzureCredential` in place of the client. - -> [!IMPORTANT] -> We recommend that you register Azure Key Vault as the last configuration provider, so it can override configuration values from previous providers.
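As a rough sketch of step 3 using the .NET minimal hosting model, the following code adds the vault as the last configuration source. It assumes the **Azure.Extensions.AspNetCore.Configuration.Secrets** and **Azure.Identity** packages are referenced, and the vault URI is a placeholder for your own key vault:

```csharp
// Program.cs - a minimal sketch, not the eShopOnContainers configuration.
using Azure.Identity;

var builder = WebApplication.CreateBuilder(args);

// Registered last so Key Vault values override earlier configuration providers.
builder.Configuration.AddAzureKeyVault(
    new Uri("https://<your-key-vault-name>.vault.azure.net/"),
    new DefaultAzureCredential());

var app = builder.Build();
app.Run();
```

With `DefaultAzureCredential`, the same code works locally after `az login` and in Azure when the app runs with a managed identity.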
- -## Additional resources - -- **Using Azure Key Vault to protect application secrets** \ - [https://learn.microsoft.com/azure/architecture/multitenant-identity](/azure/architecture/multitenant-identity) - -- **Safe storage of app secrets during development** \ - [https://learn.microsoft.com/aspnet/core/security/app-secrets](/aspnet/core/security/app-secrets) - -- **Configuring data protection** \ - [https://learn.microsoft.com/aspnet/core/security/data-protection/configuration/overview](/aspnet/core/security/data-protection/configuration/overview) - -- **Data Protection key management and lifetime in ASP.NET Core** \ - [https://learn.microsoft.com/aspnet/core/security/data-protection/configuration/default-settings](/aspnet/core/security/data-protection/configuration/default-settings) - ->[!div class="step-by-step"] ->[Previous](developer-app-secrets-storage.md) ->[Next](../key-takeaways.md) diff --git a/docs/architecture/microservices/secure-net-microservices-web-applications/developer-app-secrets-storage.md b/docs/architecture/microservices/secure-net-microservices-web-applications/developer-app-secrets-storage.md deleted file mode 100644 index 467083c2b0344..0000000000000 --- a/docs/architecture/microservices/secure-net-microservices-web-applications/developer-app-secrets-storage.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: Storing application secrets safely during development -description: Security in .NET Microservices and Web Applications - Don't store your application secrets like passwords, connection strings or API keys in source control, understand the options you can use in ASP.NET Core, in particular you have to understand how to handle "user secrets". -author: mjrousos -ms.date: 01/30/2020 ---- -# Store application secrets safely during development - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -To connect with protected resources and other services, ASP.NET Core applications typically need to use connection strings, passwords, or other credentials that contain sensitive information. These sensitive pieces of information are called *secrets*. It's a best practice not to include secrets in source code and to make sure they are never stored in source control. Instead, you should use the ASP.NET Core configuration model to read the secrets from more secure locations. - -You must separate the secrets for accessing development and staging resources from the ones used for accessing production resources, because different individuals will need access to those different sets of secrets. To store secrets used during development, common approaches are to store secrets in environment variables or to use the ASP.NET Core Secret Manager tool. For more secure storage in production environments, microservices can store secrets in an Azure Key Vault. - -## Store secrets in environment variables - -One way to keep secrets out of source code is for developers to set string-based secrets as [environment variables](/aspnet/core/security/app-secrets#environment-variables) on their development machines. When you use environment variables to store secrets with hierarchical names, such as the ones nested in configuration sections, you must name the variables to include the complete hierarchy of their sections, delimited with colons (:).
- -For example, setting an environment variable `Logging:LogLevel:Default` to the value `Debug` would be equivalent to a configuration value from the following JSON file: - -```json -{ - "Logging": { - "LogLevel": { - "Default": "Debug" - } - } -} -``` - -To access these values from environment variables, the application just needs to call `AddEnvironmentVariables` on its `ConfigurationBuilder` when constructing an `IConfigurationRoot` object. - -> [!NOTE] -> Environment variables are commonly stored as plain text, so if the machine or process with the environment variables is compromised, the environment variable values will be visible. - -## Store secrets with the ASP.NET Core Secret Manager - -The ASP.NET Core [Secret Manager](/aspnet/core/security/app-secrets#secret-manager) tool provides another method of keeping secrets out of source code **during development**. To use the Secret Manager tool, install the package **Microsoft.Extensions.Configuration.UserSecrets** in your project file. Once that dependency is present and has been restored, the `dotnet user-secrets` command can be used to set the value of secrets from the command line. These secrets will be stored in a JSON file in the user's profile directory (details vary by OS), away from source code. - -Secrets set by the Secret Manager tool are organized by the `UserSecretsId` property of the project that's using the secrets. Therefore, you must be sure to set the UserSecretsId property in your project file, as shown in the snippet below. The default value is a GUID assigned by Visual Studio, but the actual string is not important as long as it's unique on your computer. - -```xml -<PropertyGroup> -  <UserSecretsId>UniqueIdentifyingString</UserSecretsId> -</PropertyGroup> -``` - -Using secrets stored with Secret Manager in an application is accomplished by calling `AddUserSecrets<T>` on the `ConfigurationBuilder` instance to include secrets for the application in its configuration. The generic parameter `T` should be a type from the assembly that the UserSecretsId was applied to. Usually, using `AddUserSecrets<Program>` is fine. - -The `AddUserSecrets()` call is included in the default options for the Development environment when using the `CreateDefaultBuilder` method in *Program.cs*. - ->[!div class="step-by-step"] ->[Previous](authorization-net-microservices-web-applications.md) ->[Next](azure-key-vault-protects-secrets.md) diff --git a/docs/architecture/microservices/secure-net-microservices-web-applications/index.md b/docs/architecture/microservices/secure-net-microservices-web-applications/index.md deleted file mode 100644 index 0227feed3b967..0000000000000 --- a/docs/architecture/microservices/secure-net-microservices-web-applications/index.md +++ /dev/null @@ -1,306 +0,0 @@ ---- -title: Securing .NET Microservices and Web Applications -description: Security in .NET Microservices and Web Applications - Get to know the authentication options in ASP.NET Core web applications. -author: mjrousos -ms.date: 01/13/2021 ---- -# Make secure .NET Microservices and Web Applications - -[!INCLUDE [download-alert](../includes/download-alert.md)] - -There are so many aspects of security in microservices and web applications that the topic could easily take several books like this one. So, in this section, we'll focus on authentication, authorization, and application secrets. - -## Implement authentication in .NET microservices and web applications - -It's often necessary for resources and APIs published by a service to be limited to certain trusted users or clients.
The first step to making these sorts of API-level trust decisions is authentication. Authentication is the process of reliably verifying a user's identity. - -In microservice scenarios, authentication is typically handled centrally. If you're using an API Gateway, the gateway is a good place to authenticate, as shown in Figure 9-1. If you use this approach, make sure that the individual microservices cannot be reached directly (without the API Gateway) unless additional security is in place to authenticate messages whether they come from the gateway or not. - -![Diagram showing how the client mobile app interacts with the backend.](./media/index/api-gateway-centralized-authentication.png) - -**Figure 9-1**. Centralized authentication with an API Gateway - -When the API Gateway centralizes authentication, it adds user information when forwarding requests to the microservices. If services can be accessed directly, an authentication service like Azure Active Directory or a dedicated authentication microservice acting as a security token service (STS) can be used to authenticate users. Trust decisions are shared between services with security tokens or cookies. (These tokens can be shared between ASP.NET Core applications, if needed, by implementing [cookie sharing](/aspnet/core/security/cookie-sharing).) This pattern is illustrated in Figure 9-2. - -![Diagram showing authentication through backend microservices.](./media/index/identity-microservice-authentication.png) - -**Figure 9-2**. Authentication by identity microservice; trust is shared using an authorization token - -When microservices are accessed directly, trust, that includes authentication and authorization, is handled by a security token issued by a dedicated microservice, shared between microservices. - -### Authenticate with ASP.NET Core Identity - -The primary mechanism in ASP.NET Core for identifying an application's users is the [ASP.NET Core Identity](/aspnet/core/security/authentication/identity) membership system. ASP.NET Core Identity stores user information (including sign-in information, roles, and claims) in a data store configured by the developer. Typically, the ASP.NET Core Identity data store is an Entity Framework store provided in the `Microsoft.AspNetCore.Identity.EntityFrameworkCore` package. However, custom stores or other third-party packages can be used to store identity information in Azure Table Storage, CosmosDB, or other locations. - -> [!TIP] -> ASP.NET Core 2.1 and later provides [ASP.NET Core Identity](/aspnet/core/security/authentication/identity) as a [Razor Class Library](/aspnet/core/razor-pages/ui-class), so you won't see much of the necessary code in your project, as was the case for previous versions. For details on how to customize the Identity code to suit your needs, see [Scaffold Identity in ASP.NET Core projects](/aspnet/core/security/authentication/scaffold-identity). - -The following code is taken from the ASP.NET Core Web Application MVC project template with individual user account authentication selected. It shows how to configure ASP.NET Core Identity using Entity Framework Core in the _Program.cs_ file. - -```csharp -//... -builder.Services.AddDbContext(options => - options.UseSqlServer( - builder.Configuration.GetConnectionString("DefaultConnection"))); - -builder.Services.AddDefaultIdentity(options => - options.SignIn.RequireConfirmedAccount = true) - .AddEntityFrameworkStores(); - -builder.Services.AddRazorPages(); -//... 
-``` - -Once ASP.NET Core Identity is configured, you enable it by adding the `app.UseAuthentication()` and `endpoints.MapRazorPages()` as shown in the following code in the service's _Program.cs_ file: - -```csharp -//... -app.UseRouting(); - -app.UseAuthentication(); -app.UseAuthorization(); - -app.UseEndpoints(endpoints => -{ - endpoints.MapRazorPages(); -}); -//... -``` - -> [!IMPORTANT] -> The lines in the preceding code **MUST BE IN THE ORDER SHOWN** for Identity to work correctly. - -Using ASP.NET Core Identity enables several scenarios: - -- Create new user information using the UserManager type (userManager.CreateAsync). - -- Authenticate users using the SignInManager type. You can use `signInManager.SignInAsync` to sign in directly, or `signInManager.PasswordSignInAsync` to confirm the user's password is correct and then sign them in. - -- Identify a user based on information stored in a cookie (which is read by ASP.NET Core Identity middleware) so that subsequent requests from a browser will include a signed-in user's identity and claims. - -ASP.NET Core Identity also supports [two-factor authentication](/aspnet/core/security/authentication/2fa). - -For authentication scenarios that make use of a local user data store and that persist identity between requests using cookies (as is typical for MVC web applications), ASP.NET Core Identity is a recommended solution. - -### Authenticate with external providers - -ASP.NET Core also supports using [external authentication providers](/aspnet/core/security/authentication/social/) to let users sign in via [OAuth 2.0](https://www.digitalocean.com/community/tutorials/an-introduction-to-oauth-2) flows. This means that users can sign in using existing authentication processes from providers like Microsoft, Google, Facebook, or Twitter and associate those identities with an ASP.NET Core identity in your application. - -To use external authentication, besides including the authentication middleware as mentioned before, using the `app.UseAuthentication()` method, you also have to register the external provider in _Program.cs_ as shown in the following example: - -```csharp -//... -services.AddDefaultIdentity(options => options.SignIn.RequireConfirmedAccount = true) - .AddEntityFrameworkStores(); - -services.AddAuthentication() - .AddMicrosoftAccount(microsoftOptions => - { - microsoftOptions.ClientId = builder.Configuration["Authentication:Microsoft:ClientId"]; - microsoftOptions.ClientSecret = builder.Configuration["Authentication:Microsoft:ClientSecret"]; - }) - .AddGoogle(googleOptions => { ... }) - .AddTwitter(twitterOptions => { ... }) - .AddFacebook(facebookOptions => { ... }); -//... -``` - -Popular external authentication providers and their associated NuGet packages are shown in the following table: - -| **Provider** | **Package** | -| ------------- | ---------------------------------------------------- | -| **Microsoft** | **Microsoft.AspNetCore.Authentication.MicrosoftAccount** | -| **Google** | **Microsoft.AspNetCore.Authentication.Google** | -| **Facebook** | **Microsoft.AspNetCore.Authentication.Facebook** | -| **Twitter** | **Microsoft.AspNetCore.Authentication.Twitter** | - -In all cases, you must complete an application registration procedure that is vendor dependent and that usually involves: - -1. Getting a Client Application ID. -2. Getting a Client Application Secret. -3. Configuring a redirection URL, that's handled by the authorization middleware and the registered provider -4. 
Optionally, configuring a sign-out URL to properly handle sign out in a Single Sign On (SSO) scenario. - -For details on configuring your app for an external provider, see the [External provider authentication in the ASP.NET Core documentation](/aspnet/core/security/authentication/social/)). - -> [!TIP] -> All details are handled by the authorization middleware and services previously mentioned. So, you just have to choose the **Individual User Account** authentication option when you create the ASP.NET Core web application project in Visual Studio, as shown in Figure 9-3, besides registering the authentication providers previously mentioned. - -![Screenshot of the New ASP.NET Core Web Application dialog.](./media/index/select-individual-user-account-authentication-option.png) - -**Figure 9-3**. Selecting the Individual User Accounts option, for using external authentication, when creating a web application project in Visual Studio 2019. - -In addition to the external authentication providers listed previously, third-party packages are available that provide middleware for using many more external authentication providers. For a list, see the [AspNet.Security.OAuth.Providers](https://github.com/aspnet-contrib/AspNet.Security.OAuth.Providers/tree/dev/src) repository on GitHub. - -You can also create your own external authentication middleware to solve some special need. - -### Authenticate with bearer tokens - -Authenticating with ASP.NET Core Identity (or Identity plus external authentication providers) works well for many web application scenarios in which storing user information in a cookie is appropriate. In other scenarios, though, cookies are not a natural means of persisting and transmitting data. - -For example, in an ASP.NET Core Web API that exposes RESTful endpoints that might be accessed by Single Page Applications (SPAs), by native clients, or even by other Web APIs, you typically want to use bearer token authentication instead. These types of applications do not work with cookies, but can easily retrieve a bearer token and include it in the authorization header of subsequent requests. To enable token authentication, ASP.NET Core supports several options for using [OAuth 2.0](https://oauth.net/2/) and [OpenID Connect](https://openid.net/connect/). - -### Authenticate with an OpenID Connect or OAuth 2.0 Identity provider - -If user information is stored in Azure Active Directory or another identity solution that supports OpenID Connect or OAuth 2.0, you can use the **Microsoft.AspNetCore.Authentication.OpenIdConnect** package to authenticate using the OpenID Connect workflow. 
For example, to authenticate to the Identity.Api microservice in eShopOnContainers, an ASP.NET Core web application can use middleware from that package as shown in the following simplified example in _Program.cs_: - -```csharp -// Program.cs - -var identityUrl = builder.Configuration.GetValue("IdentityUrl"); -var callBackUrl = builder.Configuration.GetValue("CallBackUrl"); -var sessionCookieLifetime = builder.Configuration.GetValue("SessionCookieLifetimeMinutes", 60); - -// Add Authentication services - -services.AddAuthentication(options => -{ - options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme; - options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme; -}) -.AddCookie(setup => setup.ExpireTimeSpan = TimeSpan.FromMinutes(sessionCookieLifetime)) -.AddOpenIdConnect(options => -{ - options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme; - options.Authority = identityUrl.ToString(); - options.SignedOutRedirectUri = callBackUrl.ToString(); - options.ClientId = useLoadTest ? "mvctest" : "mvc"; - options.ClientSecret = "secret"; - options.ResponseType = useLoadTest ? "code id_token token" : "code id_token"; - options.SaveTokens = true; - options.GetClaimsFromUserInfoEndpoint = true; - options.RequireHttpsMetadata = false; - options.Scope.Add("openid"); - options.Scope.Add("profile"); - options.Scope.Add("orders"); - options.Scope.Add("basket"); - options.Scope.Add("marketing"); - options.Scope.Add("locations"); - options.Scope.Add("webshoppingagg"); - options.Scope.Add("orders.signalrhub"); -}); - -// Build the app -//… -app.UseAuthentication(); -//… -app.UseEndpoints(endpoints => -{ - //... -}); -``` - -When you use this workflow, the ASP.NET Core Identity middleware is not needed, because all user information storage and authentication is handled by the Identity service. - -### Issue security tokens from an ASP.NET Core service - -If you prefer to issue security tokens for local ASP.NET Core Identity users rather than using an external identity provider, you can take advantage of some good third-party libraries. - -[IdentityServer4](https://github.com/IdentityServer/IdentityServer4) and [OpenIddict](https://github.com/openiddict/openiddict-core) are OpenID Connect providers that integrate easily with ASP.NET Core Identity to let you issue security tokens from an ASP.NET Core service. The [IdentityServer4 documentation](https://identityserver4.readthedocs.io/en/latest/) has in-depth instructions for using the library. However, the basic steps to using IdentityServer4 to issue tokens are as follows. - -1. You configure IdentityServer4 in _Program.cs_ by making a call to builder.Services.AddIdentityServer. - -2. You call app.UseIdentityServer in _Program.cs_ to add IdentityServer4 to the application's HTTP request processing pipeline. This lets the library serve requests to OpenID Connect and OAuth2 endpoints like /connect/token. - -3. You configure identity server by setting the following data: - - - The [credentials](https://identityserver4.readthedocs.io/en/latest/topics/crypto.html) to use for signing. - - - The [Identity and API resources](https://identityserver4.readthedocs.io/en/latest/topics/resources.html) that users might request access to: - - - API resources represent protected data or functionality that a user can access with an access token. An example of an API resource would be a web API (or set of APIs) that requires authorization. 
- - - Identity resources represent information (claims) that are given to a client to identify a user. The claims might include the user name, email address, and so on. - - - The [clients](https://identityserver4.readthedocs.io/en/latest/topics/clients.html) that will be connecting in order to request tokens. - - - The storage mechanism for user information, such as [ASP.NET Core Identity](https://identityserver4.readthedocs.io/en/latest/quickstarts/0_overview.html) or an alternative. - -When you specify clients and resources for IdentityServer4 to use, you can pass an collection of the appropriate type to methods that take in-memory client or resource stores. Or for more complex scenarios, you can provide client or resource provider types via Dependency Injection. - -A sample configuration for IdentityServer4 to use in-memory resources and clients provided by a custom IClientStore type might look like the following example: - -```csharp -// Program.cs - -builder.Services.AddSingleton(); -builder.Services.AddIdentityServer() - .AddSigningCredential("CN=sts") - .AddInMemoryApiResources(MyApiResourceProvider.GetAllResources()) - .AddAspNetIdentity(); -//... -``` - -### Consume security tokens - -Authenticating against an OpenID Connect endpoint or issuing your own security tokens covers some scenarios. But what about a service that simply needs to limit access to those users who have valid security tokens that were provided by a different service? - -For that scenario, authentication middleware that handles JWT tokens is available in the **Microsoft.AspNetCore.Authentication.JwtBearer** package. JWT stands for "[JSON Web Token](https://tools.ietf.org/html/rfc7519)" and is a common security token format (defined by RFC 7519) for communicating security claims. A simplified example of how to use middleware to consume such tokens might look like this code fragment, taken from the Ordering.Api microservice of eShopOnContainers. - -```csharp -// Program.cs - -var identityUrl = builder.Configuration.GetValue("IdentityUrl"); - -// Add Authentication services - -builder.Services.AddAuthentication(options => -{ - options.DefaultAuthenticateScheme = AspNetCore.Authentication.JwtBearer.JwtBearerDefaults.AuthenticationScheme; - options.DefaultChallengeScheme = AspNetCore.Authentication.JwtBearer.JwtBearerDefaults.AuthenticationScheme; - -}).AddJwtBearer(options => -{ - options.Authority = identityUrl; - options.RequireHttpsMetadata = false; - options.Audience = "orders"; -}); - -// Build the app - -app.UseAuthentication(); -//… -app.UseEndpoints(endpoints => -{ - //... -}); -``` - -The parameters in this usage are: - -- `Audience` represents the receiver of the incoming token or the resource that the token grants access to. If the value specified in this parameter does not match the parameter in the token, the token will be rejected. - -- `Authority` is the address of the token-issuing authentication server. The JWT bearer authentication middleware uses this URI to get the public key that can be used to validate the token's signature. The middleware also confirms that the `iss` parameter in the token matches this URI. - -Another parameter, `RequireHttpsMetadata`, is useful for testing purposes; you set this parameter to false so you can test in environments where you don't have certificates. In real-world deployments, JWT bearer tokens should always be passed only over HTTPS. - -With this middleware in place, JWT tokens are automatically extracted from authorization headers. 
They are then deserialized, validated (using the values in the `Audience` and `Authority` parameters), and stored as user information to be referenced later by MVC actions or authorization filters. - -The JWT bearer authentication middleware can also support more advanced scenarios, such as using a local certificate to validate a token if the authority is not available. For this scenario, you can specify a `TokenValidationParameters` object in the `JwtBearerOptions` object. - -## Additional resources - -- **Sharing cookies between applications** \ - [https://learn.microsoft.com/aspnet/core/security/cookie-sharing](/aspnet/core/security/cookie-sharing) - -- **Introduction to Identity** \ - [https://learn.microsoft.com/aspnet/core/security/authentication/identity](/aspnet/core/security/authentication/identity) - -- **Rick Anderson. Two-factor authentication with SMS** \ - [https://learn.microsoft.com/aspnet/core/security/authentication/2fa](/aspnet/core/security/authentication/2fa) - -- **Enabling authentication using Facebook, Google and other external providers** \ - [https://learn.microsoft.com/aspnet/core/security/authentication/social/](/aspnet/core/security/authentication/social/) - -- **Michell Anicas. An Introduction to OAuth 2** \ - - -- **AspNet.Security.OAuth.Providers** (GitHub repo for ASP.NET OAuth providers) \ - - -- **IdentityServer4. Official documentation** \ - - ->[!div class="step-by-step"] ->[Previous](../implement-resilient-applications/monitor-app-health.md) ->[Next](authorization-net-microservices-web-applications.md) diff --git a/docs/architecture/microservices/secure-net-microservices-web-applications/media/index/api-gateway-centralized-authentication.png b/docs/architecture/microservices/secure-net-microservices-web-applications/media/index/api-gateway-centralized-authentication.png deleted file mode 100644 index 028f6c29ae879..0000000000000 Binary files a/docs/architecture/microservices/secure-net-microservices-web-applications/media/index/api-gateway-centralized-authentication.png and /dev/null differ diff --git a/docs/architecture/microservices/secure-net-microservices-web-applications/media/index/identity-microservice-authentication.png b/docs/architecture/microservices/secure-net-microservices-web-applications/media/index/identity-microservice-authentication.png deleted file mode 100644 index 41f55a3a3d8d5..0000000000000 Binary files a/docs/architecture/microservices/secure-net-microservices-web-applications/media/index/identity-microservice-authentication.png and /dev/null differ diff --git a/docs/architecture/microservices/secure-net-microservices-web-applications/media/index/select-individual-user-account-authentication-option.png b/docs/architecture/microservices/secure-net-microservices-web-applications/media/index/select-individual-user-account-authentication-option.png deleted file mode 100644 index a3818f8a82001..0000000000000 Binary files a/docs/architecture/microservices/secure-net-microservices-web-applications/media/index/select-individual-user-account-authentication-option.png and /dev/null differ diff --git a/docs/architecture/microservices/toc.yml b/docs/architecture/microservices/toc.yml deleted file mode 100644 index bf4922aa3094c..0000000000000 --- a/docs/architecture/microservices/toc.yml +++ /dev/null @@ -1,158 +0,0 @@ -items: -- name: ".NET Microservices: Architecture for Containerized .NET Applications" - href: index.md - items: - - name: Introduction to Containers and Docker - href: container-docker-introduction/index.md - items: - - 
name: What is Docker? - href: container-docker-introduction/docker-defined.md - - name: Docker terminology - href: container-docker-introduction/docker-terminology.md - - name: Docker containers, images, and registries - href: container-docker-introduction/docker-containers-images-registries.md - - name: Choosing Between .NET and .NET Framework for Docker Containers - href: net-core-net-framework-containers/index.md - items: - - name: General guidance - href: net-core-net-framework-containers/general-guidance.md - - name: When to choose .NET for Docker containers - href: net-core-net-framework-containers/net-core-container-scenarios.md - - name: When to choose .NET Framework for Docker containers - href: net-core-net-framework-containers/net-framework-container-scenarios.md - - name: "Decision table: .NET frameworks to use for Docker" - href: net-core-net-framework-containers/container-framework-choice-factors.md - - name: What OS to target with .NET containers - href: net-core-net-framework-containers/net-container-os-targets.md - - name: Official .NET Docker images - href: net-core-net-framework-containers/official-net-docker-images.md - - name: Architecting Container and Microservice Based Applications - href: architect-microservice-container-applications/index.md - items: - - name: Containerizing monolithic applications - href: architect-microservice-container-applications/containerize-monolithic-applications.md - - name: Manage state and data in Docker applications - href: architect-microservice-container-applications/docker-application-state-data.md - - name: Service-oriented architecture - href: architect-microservice-container-applications/service-oriented-architecture.md - - name: Microservices architecture - href: architect-microservice-container-applications/microservices-architecture.md - - name: Data sovereignty per microservice - href: architect-microservice-container-applications/data-sovereignty-per-microservice.md - - name: Logical architecture versus physical architecture - href: architect-microservice-container-applications/logical-versus-physical-architecture.md - - name: Challenges and solutions for distributed data management - href: architect-microservice-container-applications/distributed-data-management.md - - name: Identifying domain-model boundaries for each microservice - href: architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md - - name: Direct client-to-microservice communication versus the API Gateway pattern - href: architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md - - name: Communication in a microservice architecture - href: architect-microservice-container-applications/communication-in-microservice-architecture.md - - name: Asynchronous message-based communication - href: architect-microservice-container-applications/asynchronous-message-based-communication.md - - name: Creating, evolving, and versioning microservice APIs and contracts - href: architect-microservice-container-applications/maintain-microservice-apis.md - - name: Microservices addressability and the service registry - href: architect-microservice-container-applications/microservices-addressability-service-registry.md - - name: Creating composite UI based on microservices, including visual UI shape and layout generated by multiple microservices - href: architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md - - name: Resiliency and high 
availability in microservices - href: architect-microservice-container-applications/resilient-high-availability-microservices.md - - name: Orchestrating microservices and multi-container applications for high scalability and availability - href: architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md - - name: Development Process for Docker Based Applications - href: docker-application-development-process/index.md - items: - - name: Development workflow for Docker apps - href: docker-application-development-process/docker-app-development-workflow.md - - name: Designing and Developing Multi Container and Microservice Based .NET Applications - href: multi-container-microservice-net-applications/index.md - items: - - name: Designing a microservice-oriented application - href: multi-container-microservice-net-applications/microservice-application-design.md - - name: Creating a simple data-driven CRUD microservice - href: multi-container-microservice-net-applications/data-driven-crud-microservice.md - - name: Defining your multi-container application with docker-compose.yml - href: multi-container-microservice-net-applications/multi-container-applications-docker-compose.md - - name: Using a database server running as a container - href: multi-container-microservice-net-applications/database-server-container.md - - name: Implementing event-based communication between microservices (integration events) - href: multi-container-microservice-net-applications/integration-event-based-microservice-communications.md - - name: Implementing an event bus with RabbitMQ for the development or test environment - href: multi-container-microservice-net-applications/rabbitmq-event-bus-development-test-environment.md - - name: Subscribing to events - href: multi-container-microservice-net-applications/subscribe-events.md - - name: Testing ASP.NET Core services and web apps - href: multi-container-microservice-net-applications/test-aspnet-core-services-web-apps.md - - name: Implement background tasks in microservices with IHostedService - href: multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md - - name: Implementing API Gateways with Ocelot - href: multi-container-microservice-net-applications/implement-api-gateways-with-ocelot.md - - name: Tackling Business Complexity in a Microservice with DDD and CQRS Patterns - href: microservice-ddd-cqrs-patterns/index.md - items: - - name: Applying simplified CQRS and DDD patterns in a microservice - href: microservice-ddd-cqrs-patterns/apply-simplified-microservice-cqrs-ddd-patterns.md - - name: Applying CQRS and CQS approaches in a DDD microservice in eShopOnContainers - href: microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice.md - - name: Implementing reads/queries in a CQRS microservice - href: microservice-ddd-cqrs-patterns/cqrs-microservice-reads.md - - name: Designing a DDD-oriented microservice - href: microservice-ddd-cqrs-patterns/ddd-oriented-microservice.md - - name: Designing a microservice domain model - href: microservice-ddd-cqrs-patterns/microservice-domain-model.md - - name: Implementing a microservice domain model with .NET - href: microservice-ddd-cqrs-patterns/net-core-microservice-domain-model.md - - name: Seedwork (reusable base classes and interfaces for your domain model) - href: microservice-ddd-cqrs-patterns/seedwork-domain-model-base-classes-interfaces.md - - name: Implementing value objects - href: 
microservice-ddd-cqrs-patterns/implement-value-objects.md - - name: Using Enumeration classes instead of enum types - href: microservice-ddd-cqrs-patterns/enumeration-classes-over-enum-types.md - - name: Designing validations in the domain model layer - href: microservice-ddd-cqrs-patterns/domain-model-layer-validations.md - - name: Client-side validation (validation in the presentation layers) - href: microservice-ddd-cqrs-patterns/client-side-validation.md - - name: "Domain events: design and implementation" - href: microservice-ddd-cqrs-patterns/domain-events-design-implementation.md - - name: Designing the infrastructure persistence layer - href: microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design.md - - name: Implementing the infrastructure persistence layer with Entity Framework Core - href: microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-implementation-entity-framework-core.md - - name: Using NoSQL databases as a persistence infrastructure - href: microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md - - name: Designing the microservice application layer and Web API - href: microservice-ddd-cqrs-patterns/microservice-application-layer-web-api-design.md - - name: Implementing the microservice application layer using the Web API - href: microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md - - name: Implementing Resilient Applications - href: implement-resilient-applications/index.md - items: - - name: Handling partial failure - href: implement-resilient-applications/handle-partial-failure.md - - name: Strategies for handling partial failure - href: implement-resilient-applications/partial-failure-strategies.md - - name: Implementing retries with exponential backoff - href: implement-resilient-applications/implement-retries-exponential-backoff.md - - name: Implementing resilient Entity Framework Core SQL connections - href: implement-resilient-applications/implement-resilient-entity-framework-core-sql-connections.md - - name: Use IHttpClientFactory to implement resilient HTTP requests - href: implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests.md - - name: Implement HTTP call retries with exponential backoff with Polly - href: implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly.md - - name: Implement the Circuit Breaker pattern - href: implement-resilient-applications/implement-circuit-breaker-pattern.md - - name: Health monitoring - href: implement-resilient-applications/monitor-app-health.md - - name: Securing .NET Microservices and Web Applications - href: secure-net-microservices-web-applications/index.md - items: - - name: About authorization in .NET microservices and web applications - href: secure-net-microservices-web-applications/authorization-net-microservices-web-applications.md - - name: Storing application secrets safely during development - href: secure-net-microservices-web-applications/developer-app-secrets-storage.md - - name: Using Azure Key Vault to protect secrets at production time - href: secure-net-microservices-web-applications/azure-key-vault-protects-secrets.md - - name: Key takeaways - href: key-takeaways.md diff --git a/docs/architecture/modern-web-apps-azure/architectural-principles.md b/docs/architecture/modern-web-apps-azure/architectural-principles.md deleted file mode 100644 index d401eae3ebaf6..0000000000000 --- a/docs/architecture/modern-web-apps-azure/architectural-principles.md +++ 
/dev/null @@ -1,107 +0,0 @@ ---- -title: Architectural principles -description: Architect Modern Web Applications with ASP.NET Core and Azure | Architectural principles -author: ardalis -ms.author: wiwagn -ms.date: 12/12/2021 ---- -# Architectural principles - -[!INCLUDE [download-alert](includes/download-alert.md)] - -> "If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." -> _\- Gerald Weinberg_ - -You should architect and design software solutions with maintainability in mind. The principles outlined in this section can help guide you toward architectural decisions that will result in clean, maintainable applications. Generally, these principles will guide you toward building applications out of discrete components that are not tightly coupled to other parts of your application, but rather communicate through explicit interfaces or messaging systems. - -## Common design principles - -### Separation of concerns - -A guiding principle when developing is **Separation of Concerns**. This principle asserts that software should be separated based on the kinds of work it performs. For instance, consider an application that includes logic for identifying noteworthy items to display to the user, and which formats such items in a particular way to make them more noticeable. The behavior responsible for choosing which items to format should be kept separate from the behavior responsible for formatting the items, since these behaviors are separate concerns that are only coincidentally related to one another. - -Architecturally, applications can be logically built to follow this principle by separating core business behavior from infrastructure and user-interface logic. Ideally, business rules and logic should reside in a separate project, which should not depend on other projects in the application. This separation helps ensure that the business model is easy to test and can evolve without being tightly coupled to low-level implementation details (it also helps if infrastructure concerns depend on abstractions defined in the business layer). Separation of concerns is a key consideration behind the use of layers in application architectures. - -### Encapsulation - -Different parts of an application should use **encapsulation** to insulate them from other parts of the application. Application components and layers should be able to adjust their internal implementation without breaking their collaborators as long as external contracts are not violated. Proper use of encapsulation helps achieve loose coupling and modularity in application designs, since objects and packages can be replaced with alternative implementations so long as the same interface is maintained. - -In classes, encapsulation is achieved by limiting outside access to the class's internal state. If an outside actor wants to manipulate the state of the object, it should do so through a well-defined function (or property setter), rather than having direct access to the private state of the object. Likewise, application components and applications themselves should expose well-defined interfaces for their collaborators to use, rather than allowing their state to be modified directly. This approach frees the application's internal design to evolve over time without worrying that doing so will break collaborators, so long as the public contracts are maintained. - -Mutable global state is antithetical to encapsulation. 
A value fetched from mutable global state in one function cannot be relied upon to have the same value in another function (or even further in the same function). Understanding concerns with mutable global state is one of the reasons programming languages like C# have support for different scoping rules, which are used everywhere from statements to methods to classes. It's worth noting that data-driven architectures which rely on a central database for integration within and between applications are, themselves, choosing to depend on the mutable global state represented by the database. A key consideration in domain-driven design and clean architecture is how to encapsulate access to data, and how to ensure application state is not made invalid by direct access to its persistence format. - -### Dependency inversion - -The direction of dependency within the application should be in the direction of abstraction, not implementation details. Most applications are written such that compile-time dependency flows in the direction of runtime execution, producing a direct dependency graph. That is, if class A calls a method of class B and class B calls a method of class C, then at compile time class A will depend on class B, and class B will depend on class C, as shown in Figure 4-1. - -![Direct dependency graph](./media/image4-1.png) - -**Figure 4-1.** Direct dependency graph. - -Applying the dependency inversion principle allows A to call methods on an abstraction that B implements, making it possible for A to call B at run time, but for B to depend on an interface controlled by A at compile time (thus, *inverting* the typical compile-time dependency). At run time, the flow of program execution remains unchanged, but the introduction of interfaces means that different implementations of these interfaces can easily be plugged in. - -![Inverted dependency graph](./media/image4-2.png) - -**Figure 4-2.** Inverted dependency graph. - -**Dependency inversion** is a key part of building loosely coupled applications, since implementation details can be written to depend on and implement higher-level abstractions, rather than the other way around. The resulting applications are more testable, modular, and maintainable as a result. The practice of *dependency injection* is made possible by following the dependency inversion principle. - -### Explicit dependencies - -**Methods and classes should explicitly require any collaborating objects they need in order to function correctly.** It is called the [Explicit Dependencies Principle](https://deviq.com/principles/explicit-dependencies-principle). Class constructors provide an opportunity for classes to identify the things they need in order to be in a valid state and to function properly. If you define classes that can be constructed and called, but that will only function properly if certain global or infrastructure components are in place, these classes are being *dishonest* with their clients. The constructor contract is telling the client that it only needs the things specified (possibly nothing if the class is just using a parameterless constructor), but then at runtime it turns out the object really did need something else. - -By following the explicit dependencies principle, your classes and methods are being honest with their clients about what they need in order to function. 
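A small C# sketch of what this looks like in practice (the interface and class names are hypothetical, chosen only for illustration): the constructor declares the collaborator the class needs, and that dependency points at an abstraction rather than a concrete implementation, in line with the dependency inversion principle described above.

```csharp
using System;

// Abstraction owned by the business layer; infrastructure implements it.
public interface IOrderRepository
{
    void Add(Order order);
}

public class Order
{
    public decimal Total { get; set; }
}

// The constructor makes the dependency explicit: the class cannot be
// constructed into a state that silently requires hidden services.
public class OrderService
{
    private readonly IOrderRepository _orders;

    public OrderService(IOrderRepository orders)
    {
        _orders = orders ?? throw new ArgumentNullException(nameof(orders));
    }

    public void PlaceOrder(Order order) => _orders.Add(order);
}
```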
Following the principle makes your code more self-documenting and your coding contracts more user-friendly, since users will come to trust that as long as they provide what's required in the form of method or constructor parameters, the objects they're working with will behave correctly at run time. - -### Single responsibility - -The single responsibility principle applies to object-oriented design, but can also be considered as an architectural principle similar to separation of concerns. It states that objects should have only one responsibility and that they should have only one reason to change. Specifically, the only situation in which the object should change is if the manner in which it performs its one responsibility must be updated. Following this principle helps to produce more loosely coupled and modular systems, since many kinds of new behavior can be implemented as new classes, rather than by adding additional responsibility to existing classes. Adding new classes is always safer than changing existing classes, since no code yet depends on the new classes. - -In a monolithic application, we can apply the single responsibility principle at a high level to the layers in the application. Presentation responsibility should remain in the UI project, while data access responsibility should be kept within an infrastructure project. Business logic should be kept in the application core project, where it can be easily tested and can evolve independently from other responsibilities. - -When this principle is applied to application architecture and taken to its logical endpoint, you get microservices. A given microservice should have a single responsibility. If you need to extend the behavior of a system, it's usually better to do it by adding additional microservices, rather than by adding responsibility to an existing one. - -[Learn more about microservices architecture](https://aka.ms/MicroservicesEbook) - -### Don't repeat yourself (DRY) - -The application should avoid specifying behavior related to a particular concept in multiple places as this practice is a frequent source of errors. At some point, a change in requirements will require changing this behavior. It's likely that at least one instance of the behavior will fail to be updated, and the system will behave inconsistently. - -Rather than duplicating logic, encapsulate it in a programming construct. Make this construct the single authority over this behavior, and have any other part of the application that requires this behavior use the new construct. - -> [!NOTE] -> Avoid binding together behavior that is only coincidentally repetitive. For example, just because two different constants both have the same value, that doesn't mean you should have only one constant, if conceptually they're referring to different things. Duplication is always preferable to coupling to the wrong abstraction. - -### Persistence ignorance - -**Persistence ignorance** (PI) refers to types that need to be persisted, but whose code is unaffected by the choice of persistence technology. Such types in .NET are sometimes referred to as Plain Old CLR Objects (POCOs), because they do not need to inherit from a particular base class or implement a particular interface. Persistence ignorance is valuable because it allows the same business model to be persisted in multiple ways, offering additional flexibility to the application. 
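For example, a persistence-ignorant entity is just a plain class. The sketch below is hypothetical, but note that nothing in it ties the type to Entity Framework Core, a document database, or any other storage technology.

```csharp
using System;
using System.Collections.Generic;

// A POCO: no base class, interface, attribute, or virtual member is
// demanded by a persistence framework.
public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public List<Address> Addresses { get; set; } = new();
}

public class Address
{
    public string Street { get; set; } = string.Empty;
    public string City { get; set; } = string.Empty;
}
```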
Persistence choices might change over time, from one database technology to another, or additional forms of persistence might be required in addition to whatever the application started with (for example, using a Redis cache or Azure Cosmos DB in addition to a relational database). - -Some examples of violations of this principle include: - -- A required base class. - -- A required interface implementation. - -- Classes responsible for saving themselves (such as the Active Record pattern). - -- Required parameterless constructor. - -- Properties requiring virtual keyword. - -- Persistence-specific required attributes. - -The requirement that classes have any of the above features or behaviors adds coupling between the types to be persisted and the choice of persistence technology, making it more difficult to adopt new data access strategies in the future. - -### Bounded contexts - -**Bounded contexts** are a central pattern in Domain-Driven Design. They provide a way of tackling complexity in large applications or organizations by breaking it up into separate conceptual modules. Each conceptual module then represents a context that is separated from other contexts (hence, bounded), and can evolve independently. Each bounded context should ideally be free to choose its own names for concepts within it, and should have exclusive access to its own persistence store. - -At a minimum, individual web applications should strive to be their own bounded context, with their own persistence store for their business model, rather than sharing a database with other applications. Communication between bounded contexts occurs through programmatic interfaces, rather than through a shared database, which allows for business logic and events to take place in response to changes that take place. Bounded contexts map closely to microservices, which also are ideally implemented as their own individual bounded contexts. - -## Additional resources - -- [Principles](https://deviq.com/principles/principles-overview) -- [Bounded Context](https://martinfowler.com/bliki/BoundedContext.html) - ->[!div class="step-by-step"] ->[Previous](choose-between-traditional-web-and-single-page-apps.md) ->[Next](common-web-application-architectures.md) diff --git a/docs/architecture/modern-web-apps-azure/azure-hosting-recommendations-for-asp-net-web-apps.md b/docs/architecture/modern-web-apps-azure/azure-hosting-recommendations-for-asp-net-web-apps.md deleted file mode 100644 index c29439e96f98c..0000000000000 --- a/docs/architecture/modern-web-apps-azure/azure-hosting-recommendations-for-asp-net-web-apps.md +++ /dev/null @@ -1,154 +0,0 @@ ---- -title: Azure hosting recommendations for ASP.NET Core web apps -description: Architect Modern Web Applications with ASP.NET Core and Azure | Azure hosting recommendations for ASP.NET web apps -author: ardalis -ms.author: wiwagn -ms.date: 12/12/2021 ---- - -# Azure hosting recommendations for ASP.NET Core web apps - -[!INCLUDE [download-alert](includes/download-alert.md)] - -> "Line-of-business leaders everywhere are bypassing IT departments to get applications from the cloud (also known as SaaS) and paying for them like they would a magazine subscription. And when the service is no longer required, they can cancel the subscription with no equipment left unused in the corner." -> _\- Daryl Plummer, Gartner analyst_ - -Whatever your application's needs and architecture, Microsoft Azure can support it. 
Your hosting needs can be as simple as a static website or a sophisticated application made up of dozens of services. For ASP.NET Core monolithic web applications and supporting services, there are several well-known configurations that are recommended. The recommendations on this article are grouped based on the kind of resource to be hosted, whether full applications, individual processes, or data. - -## Web applications - -Web applications can be hosted with: - -- App Service Web Apps - -- Containers (several options) - -- Virtual Machines (VMs) - -Of these, App Service Web Apps is the recommended approach for most scenarios, including simple container-based apps. For microservice architectures, consider a container-based approach. If you need more control over the machines running your application, consider Azure Virtual Machines. - -### App Service Web Apps - -App Service Web Apps offers a fully managed platform optimized for hosting web applications. It's a platform as a service (PaaS) offering that lets you focus on your business logic, while Azure takes care of the infrastructure needed to run and scale the app. Some key features of App Service Web Apps: - -- DevOps optimization (continuous integration and delivery, multiple environments, A/B testing, scripting support). - -- Global scale and high availability. - -- Connections to SaaS platforms and your on-premises data. - -- Security and compliance. - -- Visual Studio integration. - -Azure App Service is the best choice for most web apps. Deployment and management are integrated into the platform, sites can scale quickly to handle high traffic loads, and the built-in load balancing and traffic manager provide high availability. You can move existing sites to Azure App Service easily with an online migration tool. You can use an open-source app from the Web Application Gallery, or create a new site using the framework and tools of your choice. The WebJobs feature makes it easy to add background job processing to your App Service web app. If you have an existing ASP.NET application hosted on-premises using a local database, there's a clear path to migrate. You can use App Service Web App with an Azure SQL Database (or secure access to your on-premises database server, if preferred). - -![Recommended migration strategy for on-premises .NET apps to Azure App Service](./media/image1-6.png) - -In most cases, moving from a locally hosted ASP.NET app to an App Service Web App is a straightforward process. Little or no modification should be required of the app itself, and it can quickly start to take advantage of the many features that Azure App Service Web Apps offer. - -In addition to apps that are not optimized for the cloud, Azure App Service Web Apps are an excellent solution for many simple monolithic (non-distributed) applications, such as many ASP.NET Core apps. In this approach, the architecture is basic and simple to understand and manage: - -![Basic Azure architecture](./media/image1-5.png) - -A small number of resources in a single resource group is typically sufficient to manage such an app. Apps that are typically deployed as a single unit, rather than those apps that are made up of many separate processes, are good candidates for this [basic architectural approach](/azure/architecture/reference-architectures/app-service-web-app/basic-web-app). Though architecturally simple, this approach still allows the hosted app to scale both up (more resources per node) and out (more hosted nodes) to meet any increase in demand. 
With autoscale, the app can be configured to automatically adjust the number of nodes hosting the app based on demand and average load across nodes. - -### App Service Web Apps for Containers - -In addition to support for hosting web apps directly, [App Service Web Apps for Containers](https://azure.microsoft.com/services/app-service/containers/) can be used to run containerized applications on Windows and Linux. Using this service, you can easily deploy and run containerized applications that can scale with your business. The apps have all of the features of App Service Web Apps listed above. In addition, Web Apps for Containers supports streamlined CI/CD with Docker Hub, Azure Container Registry, and GitHub. You can use Azure DevOps to define build and deployment pipelines that publish changes to a registry. These changes can then be tested in a staging environment and automatically deployed to production using deployment slots, allowing zero-downtime upgrades. Rolling back to previous versions can be done just as easily. - -There are a few scenarios where Web Apps for Containers makes the most sense. If you have existing apps that you can containerize, whether in Windows or Linux containers, you can host these easily using this toolset. Just publish your container and then configure Web Apps for Containers to pull the latest version of that image from your registry of choice. This is a "lift and shift" approach to migrating from classic app hosting models to a cloud-optimized model. - -![Migrate containerized on-premises .NET application to Azure Web Apps for Containers](./media/image1-8.png) - -This approach also works well if your development team is able to move to a container-based development process. The "inner loop" of developing apps with containers includes building the app with containers. Changes made to the code as well as to container configuration are pushed to source control, and an automated build is responsible for publishing new container images to a registry like Docker Hub or Azure Container Registry. These images are then used as the basis for additional development, as well as for deployments to production, as shown in the following diagram: - -![End to End Docker DevOps Lifecycle Workflow](./media/image1-7.png) - -Developing with containers offers many advantages, especially when containers are used in production. The same container configuration is used to host the app in each environment in which it runs, from the local development machine to build and test systems to production. This approach greatly reduces the likelihood of defects resulting from differences in machine configuration or software versions. Developers can also use whatever tools they're most productive with, including the operating system, since containers can run on any OS. In some cases, distributed applications involving many containers may be very resource-intensive to run on a single development machine. In this scenario, it may make sense to upgrade to using Kubernetes and Azure Dev Spaces, covered in the next section. - -As portions of larger applications are broken up into their own smaller, independent *microservices*, additional design patterns can be used to improve app behavior. Instead of working directly with individual services, an *API gateway* can simplify access and decouple the client from its back end. Having separate service back ends for different front ends also allows services to evolve in concert with their consumers. 
Common services can be accessed via a separate *sidecar* container, which might include common client connectivity libraries using the *ambassador* pattern. - -![Microservices sample architecture with several common design patterns noted.](./media/image1-10.png) - -[Learn more about design patterns to consider when building microservice-based systems.](/azure/architecture/microservices/design/patterns) - -### Azure Kubernetes Service - -Azure Kubernetes Service (AKS) manages your hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on-demand, without taking your applications offline. - -AKS reduces the complexity and operational overhead of managing a Kubernetes cluster by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. Also, you pay only for the agent nodes within your clusters, not for the masters. As a managed Kubernetes service, AKS provides: - -- Automated Kubernetes version upgrades and patching. -- Easy cluster scaling. -- Self-healing hosted control plane (masters). -- Cost savings - pay only for running agent pool nodes. - -With Azure handling the management of the nodes in your AKS cluster, you no longer need to perform many tasks manually, like cluster upgrades. Because Azure handles these critical maintenance tasks for you, AKS doesn't provide direct access (such as with SSH) to the cluster. - -Teams who are leveraging AKS can also take advantage of Azure Dev Spaces. Azure Dev Spaces helps teams to focus on the development and rapid iteration of their microservice application by allowing teams to work directly with their entire microservices architecture or application running in AKS. Azure Dev Spaces also provides a way to independently update portions of your microservices architecture in isolation without affecting the rest of the AKS cluster or other developers. - -![Azure Dev Spaces workflow example](./media/image1-9.gif) - -Azure Dev Spaces: - -- Minimize local machine setup time and resource requirements -- Allow teams to iterate more rapidly -- Reduce the number of integration environments required by a team -- Remove the need to mock certain services in a distributed system when developing/testing - -[Learn more about Azure Dev Spaces](/azure/dev-spaces/about) - -### Azure Virtual Machines - -If you have an existing application that would require substantial modifications to run in App Service, you could choose Virtual Machines in order to simplify migrating to the cloud. However, correctly configuring, securing, and maintaining VMs requires much more time and IT expertise compared to Azure App Service. If you're considering Azure Virtual Machines, make sure you take into account the ongoing maintenance effort required to patch, update, and manage your VM environment. Azure Virtual Machines is infrastructure as a service (IaaS), while App Service is PaaS. You should also consider whether deploying your app as a Windows Container to Web App for Containers might be a viable option for your scenario. - -## Logical processes - -Individual logical processes that can be decoupled from the rest of the application may be deployed independently to Azure Functions in a "serverless" manner. 
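For illustration, such a decoupled operation can be as small as the following C# HTTP-triggered function (a sketch using the in-process programming model; the function name and logic are hypothetical):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ResizeImageFunction
{
    // Runs only when a request arrives; there is no app host or server to manage.
    [FunctionName("ResizeImage")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Processing a single, decoupled image-resize request.");
        // ... perform the isolated unit of work here ...
        return new OkResult();
    }
}
```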
Azure Functions lets you just write the code you need for a given problem, without worrying about the application or infrastructure to run it. You can choose from a variety of programming languages, including C\#, F\#, Node.js, Python, and PHP, allowing you to pick the most productive language for the task at hand. Like most cloud-based solutions, you pay only for the amount of time you use, and you can trust Azure Functions to scale up as needed. - -## Data - -Azure offers a wide variety of data storage options, so that your application can use the appropriate data provider for the data in question. - -For transactional, relational data, Azure SQL Databases are the best option. For high-performance, read-mostly data, a Redis cache backed by an Azure SQL Database is a good solution. - -Unstructured JSON data can be stored in a variety of ways, from SQL Database columns to Blobs or Tables in Azure Storage, to Azure Cosmos DB. Of these, Azure Cosmos DB offers the best querying functionality, and is the recommended option for large numbers of JSON-based documents that must support querying. - -Transient command- or event-based data used to orchestrate application behavior can use Azure Service Bus or Azure Storage Queues. Azure Service Bus offers more flexibility and is the recommended service for non-trivial messaging within and between applications. - -## Architecture recommendations - -Your application's requirements should dictate its architecture. There are many different Azure services available. Choosing the right one is an important decision. Microsoft offers a gallery of reference architectures to help identify typical architectures optimized for common scenarios. You may find a reference architecture that maps closely to your application's requirements, or at least offers a starting point. - -Figure 11-1 shows an example reference architecture. This diagram describes a recommended architecture approach for a Sitecore content management system website optimized for marketing. - -![Figure 11-1](./media/image11-2.png) - -**Figure 11-1.** Sitecore marketing website reference architecture. 
- -**References – Azure hosting recommendations** - -- Azure Solution Architectures\ - - -- Azure Basic Web Application Architecture\ - [https://learn.microsoft.com/azure/architecture/reference-architectures/app-service-web-app/basic-web-app](/azure/architecture/reference-architectures/app-service-web-app/basic-web-app) - -- Design Patterns for Microservices\ - [https://learn.microsoft.com/azure/architecture/microservices/design/patterns](/azure/architecture/microservices/design/patterns) - -- Azure Developer Guide\ - - -- Web Apps overview\ - [https://learn.microsoft.com/azure/app-service/app-service-web-overview](/azure/app-service/app-service-web-overview) - -- Web App for Containers\ - - -- Introduction to Azure Kubernetes Service (AKS)\ - [https://learn.microsoft.com/azure/aks/intro-kubernetes](/azure/aks/intro-kubernetes) - ->[!div class="step-by-step"] ->[Previous](development-process-for-azure.md) diff --git a/docs/architecture/modern-web-apps-azure/choose-between-traditional-web-and-single-page-apps.md b/docs/architecture/modern-web-apps-azure/choose-between-traditional-web-and-single-page-apps.md deleted file mode 100644 index 4b4f906ea617b..0000000000000 --- a/docs/architecture/modern-web-apps-azure/choose-between-traditional-web-and-single-page-apps.md +++ /dev/null @@ -1,128 +0,0 @@ ---- -title: Choose between traditional web apps and single page apps -description: Learn how to choose between traditional web apps and single page applications (SPAs) when building web applications. -author: ardalis -ms.author: wiwagn -no-loc: [Blazor, WebAssembly] -ms.date: 12/12/2021 ---- - -# Choose Between Traditional Web Apps and Single Page Apps (SPAs) - -[!INCLUDE [download-alert](includes/download-alert.md)] - -> "Atwood's Law: Any application that can be written in JavaScript, will eventually be written in JavaScript." -> _\- Jeff Atwood_ - -There are two general approaches to building web applications today: traditional web applications that perform most of the application logic on the server, and single-page applications (SPAs) that perform most of the user interface logic in a web browser, communicating with the web server primarily using web APIs. A hybrid approach is also possible, the simplest being to host one or more rich SPA-like subapplications within a larger traditional web application. - -Use traditional web applications when: - -- Your application's client-side requirements are simple or even read-only. - -- Your application needs to function in browsers without JavaScript support. - -- Your application is public-facing and benefits from search engine discovery and referrals. - -Use a SPA when: - -- Your application must expose a rich user interface with many features. - -- Your team is familiar with JavaScript, TypeScript, or Blazor WebAssembly development. - -- Your application must already expose an API for other (internal or public) clients. - -Additionally, SPA frameworks require greater architectural and security expertise. They experience greater churn than traditional web applications, due to frequent updates and new client frameworks. Configuring automated build and deployment processes and utilizing deployment options like containers may be more difficult with SPA applications than with traditional web apps. - -Improvements in user experience made possible by the SPA approach must be weighed against these considerations. - -## Blazor - -ASP.NET Core includes a model for building rich, interactive, and composable user interfaces called Blazor. 
Blazor server-side allows developers to build UI with C# and Razor on the server and for the UI to be interactively connected to the browser in real-time using a persistent SignalR connection. Blazor WebAssembly introduces another option for Blazor apps, allowing them to run in the browser using WebAssembly. Because it's real .NET code running on WebAssembly, you can reuse code and libraries from server-side parts of your application. - -Blazor provides a new, third option to consider when evaluating whether to build a purely server-rendered web application or a SPA. You can build rich, SPA-like client-side behaviors using Blazor, without the need for significant JavaScript development. Blazor applications can call APIs to request data or perform server-side operations. They can interoperate with JavaScript where necessary to take advantage of JavaScript libraries and frameworks. - -Consider building your web application with Blazor when: - -- Your application must expose a rich user interface - -- Your team is more comfortable with .NET development than JavaScript or TypeScript development - -If you have an existing web forms application you're considering migrating to .NET Core or the latest .NET, you may wish to review the free e-book, [Blazor for Web Forms Developers](../blazor-for-web-forms-developers/index.md) to see whether it makes sense to consider migrating it to Blazor. - -For more information about Blazor, see [Get started with Blazor](https://blazor.net/docs/get-started.html). - -## When to choose traditional web apps - -The following section is a more detailed explanation of the previously stated reasons for picking traditional web applications. - -**Your application has simple, possibly read-only, client-side requirements** - -Many web applications are primarily consumed in a read-only fashion by the vast majority of their users. Read-only (or read-mostly) applications tend to be much simpler than those applications that maintain and manipulate a great deal of state. For example, a search engine might consist of a single entry point with a textbox and a second page for displaying search results. Anonymous users can easily make requests, and there is little need for client-side logic. Likewise, a blog or content management system's public-facing application usually consists mainly of content with little client-side behavior. Such applications are easily built as traditional server-based web applications, which perform logic on the web server and render HTML to be displayed in the browser. The fact that each unique page of the site has its own URL that can be bookmarked and indexed by search engines (by default, without having to add this functionality as a separate feature of the application) is also a clear benefit in such scenarios. - -**Your application needs to function in browsers without JavaScript support** - -Web applications that need to function in browsers with limited or no JavaScript support should be written using traditional web app workflows (or at least be able to fall back to such behavior). SPAs require client-side JavaScript in order to function; if it's not available, SPAs are not a good choice. - -**Your team is unfamiliar with JavaScript or TypeScript development techniques** - -If your team is unfamiliar with JavaScript or TypeScript, but is familiar with server-side web application development, then they will probably be able to deliver a traditional web app more quickly than a SPA. 
Unless learning to program SPAs is a goal, or the user experience afforded by a SPA is required, traditional web apps are a more productive choice for teams who are already familiar with building them. - -## When to choose SPAs - -The following section is a more detailed explanation of when to choose a Single Page Applications style of development for your web app. - -**Your application must expose a rich user interface with many features** - -SPAs can support rich client-side functionality that doesn't require reloading the page as users take actions or navigate between areas of the app. SPAs can load more quickly, fetching data in the background, and individual user actions are more responsive since full page reloads are rare. SPAs can support incremental updates, saving partially completed forms or documents without the user having to click a button to submit a form. SPAs can support rich client-side behaviors, such as drag-and-drop, much more readily than traditional applications. SPAs can be designed to run in a disconnected mode, making updates to a client-side model that are eventually synchronized back to the server once a connection is re-established. Choose a SPA-style application if your app's requirements include rich functionality that goes beyond what typical HTML forms offer. - -Frequently, SPAs need to implement features that are built into traditional web apps, such as displaying a meaningful URL in the address bar reflecting the current operation (and allowing users to bookmark or deep link to this URL to return to it). SPAs also should allow users to use the browser's back and forward buttons with results that won't surprise them. - -**Your team is familiar with JavaScript and/or TypeScript development** - -Writing SPAs requires familiarity with JavaScript and/or TypeScript and client-side programming techniques and libraries. Your team should be competent in writing modern JavaScript using a SPA framework like Angular. - -> ### References – SPA Frameworks -> -> - **Angular**: -> - **React**: -> - **Vue.js**: - -**Your application must already expose an API for other (internal or public) clients** - -If you're already supporting a web API for use by other clients, it may require less effort to create a SPA implementation that leverages these APIs rather than reproducing the logic in server-side form. SPAs make extensive use of web APIs to query and update data as users interact with the application. - -## When to choose Blazor - -The following section is a more detailed explanation of when to choose Blazor for your web app. - -**Your application must expose a rich user interface** - -Like JavaScript-based SPAs, Blazor applications can support rich client behavior without page reloads. These applications are more responsive to users, fetching only the data (or HTML) required to respond to a given user interaction. Designed properly, server-side Blazor apps can be configured to run as client-side Blazor apps with minimal changes once this feature is supported. - -**Your team is more comfortable with .NET development than JavaScript or TypeScript development** - -Many developers are more productive with .NET and Razor than with client-side languages like JavaScript or TypeScript. Since the server-side of the application is already being developed with .NET, using Blazor ensures every .NET developer on the team can understand and potentially build the behavior of the front end of the application. 
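One practical consequence is that ordinary C# types can be shared between the ASP.NET Core backend and the Blazor front end, so rules are written once. The sketch below is illustrative only; the type and member names are not from the reference application.

```csharp
// Defined once in a shared class library and referenced by both the server
// project and the Blazor client, keeping validation logic in a single place.
public class OrderItem
{
    public string ProductId { get; set; } = string.Empty;
    public int Quantity { get; set; }

    public bool IsValid() =>
        !string.IsNullOrWhiteSpace(ProductId) && Quantity > 0;
}
```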
- -## Decision table - -The following decision table summarizes some of the basic factors to consider when choosing between a traditional web application, a SPA, or a Blazor app. - -| **Factor** | **Traditional Web App** | **Single Page Application** | **Blazor App** | -| ---------------------------------------------------- | ----------------------- | --------------------------- | --------------- | -| Required Team Familiarity with JavaScript/TypeScript | **Minimal** | **Required** | **Minimal** | -| Support Browsers without Scripting | **Supported** | **Not Supported** | **Supported** | -| Minimal Client-Side Application Behavior | **Well-Suited** | **Overkill** | **Viable** | -| Rich, Complex User Interface Requirements | **Limited** | **Well-Suited** | **Well-Suited** | - -## Other considerations - -Traditional Web Apps tend to be simpler and have better Search Engine Optimization (SEO) criteria than SPA apps. Search engine bots can easily consume content from traditional apps, while support for indexing SPAs may be lacking or limited. If your app benefits from public discovery by search engines, this is often an important consideration. - -In addition, unless you've built a management tool for your SPA's content, it may require developers to make changes. Traditional Web Apps are often easier for non-developers to make changes to, since much of the content is simply HTML and may not require a build process to preview or even deploy. If non-developers in your organization are likely to need to maintain the content of the app, a traditional web app may be a better choice. - -SPAs shine when the app has complex interactive forms or other user interaction features. For complex apps that require authentication to use, and thus aren't accessible by public search engine spiders, SPAs are a great option in many cases. - ->[!div class="step-by-step"] ->[Previous](modern-web-applications-characteristics.md) ->[Next](architectural-principles.md) diff --git a/docs/architecture/modern-web-apps-azure/common-client-side-web-technologies.md b/docs/architecture/modern-web-apps-azure/common-client-side-web-technologies.md deleted file mode 100644 index da5a099aac7f1..0000000000000 --- a/docs/architecture/modern-web-apps-azure/common-client-side-web-technologies.md +++ /dev/null @@ -1,212 +0,0 @@ ---- -title: Common client-side web technologies -description: Architect Modern Web Applications with ASP.NET Core and Azure | Common client-side web technologies -author: ardalis -ms.author: wiwagn -no-loc: [Blazor] -ms.date: 12/12/2021 ---- -# Common client-side web technologies - -[!INCLUDE [download-alert](includes/download-alert.md)] - -> "Websites should look good from the inside and out." -> _- Paul Cookson_ - -ASP.NET Core applications are web applications and they typically rely on client-side web technologies like HTML, CSS, and JavaScript. By separating the content of the page (the HTML) from its layout and styling (the CSS), and its behavior (via JavaScript), complex web apps can leverage the Separation of Concerns principle. Future changes to the structure, design, or behavior of the application can be made more easily when these concerns are not intertwined. - -While HTML and CSS are relatively stable, JavaScript, by means of the application frameworks and utilities developers work with to build web-based applications, is evolving at breakneck speed. 
This chapter looks at a few ways that JavaScript is used by web developers and provides a high-level overview of the Angular and React client-side libraries. - -> [!NOTE] -> Blazor provides an alternative to JavaScript frameworks for building rich, interactive client user interfaces. - -## HTML - -HTML is the standard markup language used to create web pages and web applications. Its elements form the building blocks of pages, representing formatted text, images, form inputs, and other structures. When a browser makes a request to a URL, whether fetching a page or an application, the first thing that is returned is an HTML document. This HTML document may reference or include additional information about its look and layout in the form of CSS, or behavior in the form of JavaScript. - -## CSS - -CSS (Cascading Style Sheets) is used to control the look and layout of HTML elements. CSS styles can be applied directly to an HTML element, defined separately on the same page, or defined in a separate file and referenced by the page. Styles cascade based on how they are used to select a given HTML element. For instance, a style might apply to an entire document, but would be overridden by a style that applied to a particular element. Likewise, an element-specific style would be overridden by a style that applied to a CSS class that was applied to the element, which in turn would be overridden by a style targeting a specific instance of that element (via its ID). Figure 6-1 - -![CSS Specificity rules](./media/image6-1.png) - -**Figure 6-1.** CSS Specificity rules, in order. - -It's best to keep styles in their own separate stylesheet files, and to use selection-based cascading to implement consistent and reusable styles within the application. Placing style rules within HTML should be avoided, and applying styles to specific individual elements (rather than whole classes of elements, or elements that have had a particular CSS class applied to them) should be the exception, not the rule. - -### CSS preprocessors - -CSS stylesheets lack support for conditional logic, variables, and other programming language features. Thus, large stylesheets often include quite a bit of repetition, as the same color, font, or other setting is applied to many different variations of HTML elements and CSS classes. CSS preprocessors can help your stylesheets follow the [DRY principle](https://deviq.com/don-t-repeat-yourself/) by adding support for variables and logic. - -The most popular CSS preprocessors are Sass and LESS. Both extend CSS and are backward compatible with it, meaning that a plain CSS file is a valid Sass or LESS file. Sass is Ruby-based and LESS is JavaScript based, and both typically run as part of your local development process. Both have command-line tools available, as well as built-in support in Visual Studio for running them using Gulp or Grunt tasks. - -## JavaScript - -JavaScript is a dynamic, interpreted programming language that has been standardized in the ECMAScript language specification. It is the programming language of the web. Like CSS, JavaScript can be defined as attributes within HTML elements, as blocks of script within a page, or in separate files. Just like CSS, it's recommended to organize JavaScript into separate files, keeping it separated as much as possible from the HTML found on individual web pages or application views. 
- -When working with JavaScript in your web application, there are a few tasks that you'll commonly need to perform: - -- Selecting an HTML element and retrieving and/or updating its value. - -- Querying a Web API for data. - -- Sending a command to a Web API (and responding to a callback with its result). - -- Performing validation. - -You can perform all of these tasks with JavaScript alone, but many libraries exist to make these tasks easier. One of the first and most successful of these libraries is jQuery, which continues to be a popular choice for simplifying these tasks on web pages. For Single Page Applications (SPAs), jQuery doesn't provide many of the desired features that Angular and React offer. - -### Legacy web apps with jQuery - -Although ancient by JavaScript framework standards, jQuery continues to be a commonly used library for working with HTML/CSS and building applications that make AJAX calls to web APIs. However, jQuery operates at the level of the browser document object model (DOM), and by default offers only an imperative, rather than declarative, model. - -For example, imagine that if a textbox's value exceeds 10, an element on the page should be made visible. In jQuery, this functionality would typically be implemented by writing an event handler with code that would inspect the textbox's value and set the visibility of the target element based on that value. This process is an imperative, code-based approach. Another framework might instead use databinding to bind the visibility of the element to the value of the textbox declaratively. This approach would not require writing any code, but instead only requires decorating the elements involved with data binding attributes. As client-side behaviors grow more complex, data binding approaches frequently result in simpler solutions with less code and conditional complexity. - -### jQuery vs a SPA Framework - -| **Factor** | **jQuery** | **Angular**| -|--------------------------|------------|-------------| -| Abstracts the DOM | **Yes** | **Yes** | -| AJAX Support | **Yes** | **Yes** | -| Declarative Data Binding | **No** | **Yes** | -| MVC-style Routing | **No** | **Yes** | -| Templating | **No** | **Yes** | -| Deep-Link Routing | **No** | **Yes** | - -Most of the features jQuery lacks intrinsically can be added with the addition of other libraries. However, a SPA framework like Angular provides these features in a more integrated fashion, since it's been designed with all of them in mind from the start. Also, jQuery is an imperative library, meaning that you need to call jQuery functions in order to do anything with jQuery. Much of the work and functionality that SPA frameworks provide can be done declaratively, requiring no actual code to be written. - -Data binding is a great example of this functionality. In jQuery, it usually only takes one line of code to get the value of a DOM element or to set an element's value. However, you have to write this code anytime you need to change the value of the element, and sometimes this will occur in multiple functions on a page. Another common example is element visibility. In jQuery, there might be many different places where you'd write code to control whether certain elements were visible. In each of these cases, when using data binding, no code would need to be written. You'd simply bind the value or visibility of the elements in question to a *viewmodel* on the page, and changes to that viewmodel would automatically be reflected in the bound elements. 
- -### Angular SPAs - -Angular remains one of the world's most popular JavaScript frameworks. Since Angular 2, the team rebuilt the framework from the ground up (using [TypeScript](https://www.typescriptlang.org/)) and rebranded from the original AngularJS name to Angular. Now several years old, the redesigned Angular continues to be a robust framework for building Single Page Applications. - -Angular applications are built from components. Components combine HTML templates with special objects and control a portion of the page. A simple component from Angular's docs is shown here: - -```js -import { Component } from '@angular/core'; - -@Component({ - selector: 'my-app', - template: `

<h1>Hello {{name}}</h1>

` -}) - -export class AppComponent { name = 'Angular'; } -``` - -Components are defined using the `@Component` decorator function, which takes in metadata about the component. The selector property identifies the ID of the element on the page where this component will be displayed. The template property is a simple HTML template that includes a placeholder that corresponds to the component's name property, defined on the last line. - -By working with components and templates, instead of DOM elements, Angular apps can operate at a higher level of abstraction and with less overall code than apps written using just JavaScript (also called "vanilla JS") or with jQuery. Angular also imposes some order on how you organize your client-side script files. By convention, Angular apps use a common folder structure, with module and component script files located in an app folder. Angular scripts concerned with building, deploying, and testing the app are typically located in a higher-level folder. - -You can develop Angular apps by using a CLI. Getting started with Angular development locally (assuming you already have git and npm installed) consists of simply cloning a repo from GitHub and running `npm install` and `npm start`. Beyond this, Angular ships its own CLI, which can create projects, add files, and assist with testing, bundling, and deployment tasks. This CLI friendliness makes Angular especially compatible with ASP.NET Core, which also features great CLI support. - -Microsoft has developed a reference application, eShopOnContainers, which includes an Angular SPA implementation. This app includes Angular modules to manage the online store's shopping basket, load and display items from its catalog, and handling order creation. You can view and download the sample application from [GitHub](https://github.com/dotnet-architecture/eShopOnContainers/tree/main/src/Web/WebSPA). - -### React - -Unlike Angular, which offers a full Model-View-Controller pattern implementation, React is only concerned with views. It's not a framework, just a library, so to build a SPA you'll need to leverage additional libraries. There are a number of libraries that are designed to be used with React to produce rich single page applications. - -One of React's most important features is its use of a virtual DOM. The virtual DOM provides React with several advantages, including performance (the virtual DOM can optimize which parts of the actual DOM need to be updated) and testability (no need to have a browser to test React and its interactions with its virtual DOM). - -React is also unusual in how it works with HTML. Rather than having a strict separation between code and markup (with references to JavaScript appearing in HTML attributes perhaps), React adds HTML directly within its JavaScript code as JSX. JSX is HTML-like syntax that can compile down to pure JavaScript. For example: - -```js -
<ul>
{ authors.map(author =>
  <li>{author.name}</li>
)}
</ul>
-``` - -If you already know JavaScript, learning React should be easy. There isn't nearly as much learning curve or special syntax involved as with Angular or other popular libraries. - -Because React isn't a full framework, you'll typically want other libraries to handle things like routing, web API calls, and dependency management. The nice thing is, you can pick the best library for each of these, but the disadvantage is that you need to make all of these decisions and verify all of your chosen libraries work well together when you're done. If you want a good starting point, you can use a starter kit like React Slingshot, which prepackages a set of compatible libraries together with React. - -### Vue - -From its getting started guide, "Vue is a progressive framework for building user interfaces. Unlike other monolithic frameworks, Vue is designed from the ground up to be incrementally adoptable. The core library is focused on the view layer only, and is easy to pick up and integrate with other libraries or existing projects. On the other hand, Vue is perfectly capable of powering sophisticated Single-Page Applications when used in combination with modern tooling and supporting libraries." - -Getting started with Vue simply requires including its script within an HTML file: - -```html - - -``` - -With the framework added, you're then able to declaratively render data to the DOM using Vue's straightforward templating syntax: - -```html -
<div id="app">
  {{ message }}
</div>
-``` - -and then adding the following script: - -```js -var app = new Vue({ - el: '#app', - data: { - message: 'Hello Vue!' - } -}) -``` - -This is enough to render `"Hello Vue!"` on the page. Note, however, that Vue isn't simply rendering the message to the div once. It supports databinding and dynamic updates such that if the value of `message` changes, the value in the `
` is immediately updated to reflect it. - -Of course, this only scratches the surface of what Vue is capable of. It's gained a great deal of popularity in the last several years and has a large community. There's a [huge and growing list of supporting components and libraries](https://github.com/vuejs/awesome-vue#redux) that work with Vue to extend it as well. If you're looking to add client-side behavior to your web application or considering building a full SPA, Vue is worth investigating. - -### Blazor WebAssembly - -Unlike other JavaScript frameworks, `Blazor WebAssembly` is a single-page app (SPA) framework for building interactive client-side web apps with .NET. Blazor WebAssembly uses open web standards without plugins or recompiling code into other languages. Blazor WebAssembly works in all modern web browsers, including mobile browsers. - -Running .NET code inside web browsers is made possible by WebAssembly (abbreviated `wasm`). WebAssembly is a compact bytecode format optimized for fast download and maximum execution speed. WebAssembly is an open web standard and is supported in web browsers without plugins. - -WebAssembly code can access the full functionality of the browser via JavaScript, called JavaScript interoperability, often shortened to JavaScript interop or JS interop. .NET code executed via WebAssembly in the browser runs in the browser's JavaScript sandbox with the protections that the sandbox provides against malicious actions on the client machine. - -For more information, see [Introduction to ASP.NET Core Blazor](/aspnet/core/blazor/). - -### Choosing a SPA Framework - -When considering which option will work best to support your SPA, keep in mind the following considerations: - -- Is your team familiar with the framework and its dependencies (including TypeScript in some cases)? - -- How opinionated is the framework, and do you agree with its default way of doing things? - -- Does it (or a companion library) include all of the features your app requires? - -- Is it well documented? - -- How active is its community? Are new projects being built with it? - -- How active is its core team? Are issues being resolved and new versions shipped regularly? - -Frameworks continue to evolve with breakneck speed. Use the considerations listed above to help mitigate the risk of choosing a framework you'll later regret being dependent upon. If you're particularly risk-averse, consider a framework that offers commercial support and/or is being developed by a large enterprise. - -> ### References – Client Web Technologies -> -> - **HTML and CSS** -> -> - **Sass vs. 
LESS** -> -> - **Styling ASP.NET Core Apps with LESS, Sass, and Font Awesome** -> [https://learn.microsoft.com/aspnet/core/client-side/less-sass-fa](/aspnet/core/client-side/less-sass-fa) -> - **Client-Side Development in ASP.NET Core** -> [https://learn.microsoft.com/aspnet/core/client-side/](/aspnet/core/client-side/) -> - **jQuery** -> -> - **Angular** -> -> - **React** -> -> - **Vue** -> -> - **Angular vs React vs Vue: Which Framework to Choose in 2020** -> -> - **The Top JavaScript Frameworks for Front-End Development in 2020** -> - ->[!div class="step-by-step"] ->[Previous](common-web-application-architectures.md) ->[Next](develop-asp-net-core-mvc-apps.md) diff --git a/docs/architecture/modern-web-apps-azure/common-web-application-architectures.md b/docs/architecture/modern-web-apps-azure/common-web-application-architectures.md deleted file mode 100644 index e9e280c4f2ac5..0000000000000 --- a/docs/architecture/modern-web-apps-azure/common-web-application-architectures.md +++ /dev/null @@ -1,321 +0,0 @@ ---- -title: Common web application architectures -description: Architect Modern Web Applications with ASP.NET Core and Azure | Explore the common web application architectures -author: ardalis -ms.author: wiwagn -ms.date: 12/12/2021 ---- -# Common web application architectures - -[!INCLUDE [download-alert](includes/download-alert.md)] - -> "If you think good architecture is expensive, try bad architecture." -> _- Brian Foote and Joseph Yoder_ - -Most traditional .NET applications are deployed as single units corresponding to an executable or a single web application running within a single IIS appdomain. This approach is the simplest deployment model and serves many internal and smaller public applications very well. However, even given this single unit of deployment, most non-trivial business applications benefit from some logical separation into several layers. - -## What is a monolithic application? - -A monolithic application is one that is entirely self-contained, in terms of its behavior. It may interact with other services or data stores in the course of performing its operations, but the core of its behavior runs within its own process and the entire application is typically deployed as a single unit. If such an application needs to scale horizontally, typically the entire application is duplicated across multiple servers or virtual machines. - -## All-in-one applications - -The smallest possible number of projects for an application architecture is one. In this architecture, the entire logic of the application is contained in a single project, compiled to a single assembly, and deployed as a single unit. - -A new ASP.NET Core project, whether created in Visual Studio or from the command line, starts out as a simple "all-in-one" monolith. It contains all of the behavior of the application, including presentation, business, and data access logic. Figure 5-1 shows the file structure of a single-project app. - -![A single project ASP.NET Core app](./media/image5-1.png) - -**Figure 5-1.** A single project ASP.NET Core app. - -In a single project scenario, separation of concerns is achieved through the use of folders. The default template includes separate folders for MVC pattern responsibilities of Models, Views, and Controllers, as well as additional folders for Data and Services. In this arrangement, presentation details should be limited as much as possible to the Views folder, and data access implementation details should be limited to classes kept in the Data folder. 
Business logic should reside in services and classes within the Models folder. - -Although simple, the single-project monolithic solution has some disadvantages. As the project's size and complexity grows, the number of files and folders will continue to grow as well. User interface (UI) concerns (models, views, controllers) reside in multiple folders, which aren't grouped together alphabetically. This issue only gets worse when additional UI-level constructs, such as Filters or ModelBinders, are added in their own folders. Business logic is scattered between the Models and Services folders, and there's no clear indication of which classes in which folders should depend on which others. This lack of organization at the project level frequently leads to [spaghetti code](https://deviq.com/spaghetti-code/). - -To address these issues, applications often evolve into multi-project solutions, where each project is considered to reside in a particular _layer_ of the application. - -## What are layers? - -As applications grow in complexity, one way to manage that complexity is to break up the application according to its responsibilities or concerns. This approach follows the separation of concerns principle and can help keep a growing codebase organized so that developers can easily find where certain functionality is implemented. Layered architecture offers a number of advantages beyond just code organization, though. - -By organizing code into layers, common low-level functionality can be reused throughout the application. This reuse is beneficial because it means less code needs to be written and because it can allow the application to standardize on a single implementation, following the [don't repeat yourself (DRY)](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) principle. - -With a layered architecture, applications can enforce restrictions on which layers can communicate with other layers. This architecture helps to achieve encapsulation. When a layer is changed or replaced, only those layers that work with it should be impacted. By limiting which layers depend on which other layers, the impact of changes can be mitigated so that a single change doesn't impact the entire application. - -Layers (and encapsulation) make it much easier to replace functionality within the application. For example, an application might initially use its own SQL Server database for persistence, but later could choose to use a cloud-based persistence strategy, or one behind a web API. If the application has properly encapsulated its persistence implementation within a logical layer, that SQL Server-specific layer could be replaced by a new one implementing the same public interface. - -In addition to the potential of swapping out implementations in response to future changes in requirements, application layers can also make it easier to swap out implementations for testing purposes. Instead of having to write tests that operate against the real data layer or UI layer of the application, these layers can be replaced at test time with fake implementations that provide known responses to requests. This approach typically makes tests much easier to write and much faster to run when compared to running tests against the application's real infrastructure. - -Logical layering is a common technique for improving the organization of code in enterprise software applications, and there are several ways in which code can be organized into layers. 
- -> [!NOTE] -> _Layers_ represent logical separation within the application. In the event that application logic is physically distributed to separate servers or processes, these separate physical deployment targets are referred to as _tiers_. It's possible, and quite common, to have an N-Layer application that is deployed to a single tier. - -## Traditional "N-Layer" architecture applications - -The most common organization of application logic into layers is shown in Figure 5-2. - -![Typical application layers](./media/image5-2.png) - -**Figure 5-2.** Typical application layers. - -These layers are frequently abbreviated as UI, BLL (Business Logic Layer), and DAL (Data Access Layer). Using this architecture, users make requests through the UI layer, which interacts only with the BLL. The BLL, in turn, can call the DAL for data access requests. The UI layer shouldn't make any requests to the DAL directly, nor should it interact with persistence directly through other means. Likewise, the BLL should only interact with persistence by going through the DAL. In this way, each layer has its own well-known responsibility. - -One disadvantage of this traditional layering approach is that compile-time dependencies run from the top to the bottom. That is, the UI layer depends on the BLL, which depends on the DAL. This means that the BLL, which usually holds the most important logic in the application, is dependent on data access implementation details (and often on the existence of a database). Testing business logic in such an architecture is often difficult, requiring a test database. The dependency inversion principle can be used to address this issue, as you'll see in the next section. - -Figure 5-3 shows an example solution, breaking the application into three projects by responsibility (or layer). - -![A simple monolithic application with three projects](./media/image5-3.png) - -**Figure 5-3.** A simple monolithic application with three projects. - -Although this application uses several projects for organizational purposes, it's still deployed as a single unit and its clients will interact with it as a single web app. This allows for a very simple deployment process. Figure 5-4 shows how such an app might be hosted using Azure. - -![Simple deployment of Azure Web App](./media/image5-4.png) - -**Figure 5-4.** Simple deployment of Azure Web App. - -As application needs grow, more complex and robust deployment solutions may be required. Figure 5-5 shows an example of a more complex deployment plan that supports additional capabilities. - -![Deploying a web app to an Azure App Service](./media/image5-5.png) - -**Figure 5-5.** Deploying a web app to an Azure App Service. - -Internally, this solution's organization into multiple projects based on responsibility improves the maintainability of the application. - -This unit can be scaled up or out to take advantage of cloud-based on-demand scalability. Scaling up means adding additional CPU, memory, disk space, or other resources to the server(s) hosting your app. Scaling out means adding additional instances of such servers, whether these are physical servers, virtual machines, or containers. When your app is hosted across multiple instances, a load balancer is used to assign requests to individual app instances. - -The simplest approach to scaling a web application in Azure is to configure scaling manually in the application's App Service Plan. Figure 5-6 shows the appropriate Azure dashboard screen to configure how many instances are serving an app.
- -![App Service Plan scaling in Azure](./media/image5-6.png) - -**Figure 5-6.** App Service Plan scaling in Azure. - -## Clean architecture - -Applications that follow the Dependency Inversion Principle as well as the Domain-Driven Design (DDD) principles tend to arrive at a similar architecture. This architecture has gone by many names over the years. One of the first names was Hexagonal Architecture, followed by Ports-and-Adapters. More recently, it's been cited as the [Onion Architecture](https://jeffreypalermo.com/blog/the-onion-architecture-part-1/) or [Clean Architecture](https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html). The latter name, Clean Architecture, is used as the name for this architecture in this e-book. - -The eShopOnWeb reference application uses the Clean Architecture approach in organizing its code into projects. You can find a solution template you can use as a starting point for your own ASP.NET Core solutions in the [ardalis/cleanarchitecture](https://github.com/ardalis/cleanarchitecture) GitHub repository or by [installing the template from NuGet](https://www.nuget.org/packages/Ardalis.CleanArchitecture.Template/). - -Clean architecture puts the business logic and application model at the center of the application. Instead of having business logic depend on data access or other infrastructure concerns, this dependency is inverted: infrastructure and implementation details depend on the Application Core. This functionality is achieved by defining abstractions, or interfaces, in the Application Core, which are then implemented by types defined in the Infrastructure layer. A common way of visualizing this architecture is to use a series of concentric circles, similar to an onion. Figure 5-7 shows an example of this style of architectural representation. - -![Clean Architecture; onion view](./media/image5-7.png) - -**Figure 5-7.** Clean Architecture; onion view - -In this diagram, dependencies flow toward the innermost circle. The Application Core takes its name from its position at the core of this diagram. And you can see on the diagram that the Application Core has no dependencies on other application layers. The application's entities and interfaces are at the very center. Just outside, but still in the Application Core, are domain services, which typically implement interfaces defined in the inner circle. Outside of the Application Core, both the UI and the Infrastructure layers depend on the Application Core, but not on one another (necessarily). - -Figure 5-8 shows a more traditional horizontal layer diagram that better reflects the dependency between the UI and other layers. - -![Clean Architecture; horizontal layer view](./media/image5-8.png) - -**Figure 5-8.** Clean Architecture; horizontal layer view - -Note that the solid arrows represent compile-time dependencies, while the dashed arrow represents a runtime-only dependency. With the clean architecture, the UI layer works with interfaces defined in the Application Core at compile time, and ideally shouldn't know about the implementation types defined in the Infrastructure layer. At run time, however, these implementation types are required for the app to execute, so they need to be present and wired up to the Application Core interfaces via dependency injection. - -Figure 5-9 shows a more detailed view of an ASP.NET Core application's architecture when built following these recommendations. 
- -![ASP.NET Core architecture diagram following Clean Architecture](./media/image5-9.png) - -**Figure 5-9.** ASP.NET Core architecture diagram following Clean Architecture. - -Because the Application Core doesn't depend on Infrastructure, it's very easy to write automated unit tests for this layer. Figures 5-10 and 5-11 show how tests fit into this architecture. - -![UnitTestCore](./media/image5-10.png) - -**Figure 5-10.** Unit testing Application Core in isolation. - -![IntegrationTests](./media/image5-11.png) - -**Figure 5-11.** Integration testing Infrastructure implementations with external dependencies. - -Since the UI layer doesn't have any direct dependency on types defined in the Infrastructure project, it's likewise very easy to swap out implementations, either to facilitate testing or in response to changing application requirements. ASP.NET Core's built-in use of and support for dependency injection makes this architecture the most appropriate way to structure non-trivial monolithic applications. - -For monolithic applications, the Application Core, Infrastructure, and UI projects are all run as a single application. The runtime application architecture might look something like Figure 5-12. - -![ASP.NET Core Architecture 2](./media/image5-12.png) - -**Figure 5-12.** A sample ASP.NET Core app's runtime architecture. - -### Organizing code in Clean Architecture - -In a Clean Architecture solution, each project has clear responsibilities. As such, certain types belong in each project and you'll frequently find folders corresponding to these types in the appropriate project. - -#### Application Core - -The Application Core holds the business model, which includes entities, services, and interfaces. These interfaces include abstractions for operations that will be performed using Infrastructure, such as data access, file system access, network calls, etc. Sometimes services or interfaces defined at this layer will need to work with non-entity types that have no dependencies on UI or Infrastructure. These can be defined as simple Data Transfer Objects (DTOs). - -##### Application Core types - -- Entities (business model classes that are persisted) -- Aggregates (groups of entities) -- Interfaces -- Domain Services -- Specifications -- Custom Exceptions and Guard Clauses -- Domain Events and Handlers - -#### Infrastructure - -The Infrastructure project typically includes data access implementations. In a typical ASP.NET Core web application, these implementations include the Entity Framework (EF) DbContext, any EF Core `Migration` objects that have been defined, and data access implementation classes. The most common way to abstract data access implementation code is through the use of the [Repository design pattern](https://deviq.com/repository-pattern/). - -In addition to data access implementations, the Infrastructure project should contain implementations of services that must interact with infrastructure concerns. These services should implement interfaces defined in the Application Core, and so Infrastructure should have a reference to the Application Core project. - -##### Infrastructure types - -- EF Core types (`DbContext`, `Migration`) -- Data access implementation types (Repositories) -- Infrastructure-specific services (for example, `FileLogger` or `SmtpNotifier`) - -#### UI Layer - -The user interface layer in an ASP.NET Core MVC application is the entry point for the application. 
This project should reference the Application Core project, and its types should interact with infrastructure strictly through interfaces defined in Application Core. No direct instantiation of or static calls to the Infrastructure layer types should be allowed in the UI layer. - -##### UI Layer types - -- Controllers -- Custom Filters -- Custom Middleware -- Views -- ViewModels -- Startup - -The `Startup` class or _Program.cs_ file is responsible for configuring the application, and for wiring up implementation types to interfaces. The place where this logic is performed is known as the app's *composition root*, and is what allows dependency injection to work properly at run time. - -> [!NOTE] -> In order to wire up dependency injection during app startup, the UI layer project may need to reference the Infrastructure project. This dependency can be eliminated, most easily by using a custom DI container that has built-in support for loading types from assemblies. For the purposes of this sample, the simplest approach is to allow the UI project to reference the Infrastructure project (but developers should limit actual references to types in the Infrastructure project to the app's composition root). - -## Monolithic applications and containers - -You can build a single and monolithic-deployment based Web Application or Service and deploy it as a container. Within the application, it might not be monolithic but organized into several libraries, components, or layers. Externally, it's a single container with a single process, single web application, or single service. - -To manage this model, you deploy a single container to represent the application. To scale, just add additional copies with a load balancer in front. The simplicity comes from managing a single deployment in a single container or VM. - -![Figure 5-13](./media/image5-13.png) - -You can include multiple components/libraries or internal layers within each container, as illustrated in Figure 5-13. But, following the container principle of _"a container does one thing, and does it in one process_", the monolithic pattern might be a conflict. - -The downside of this approach comes if/when the application grows, requiring it to scale. If the entire application scales, it's not really a problem. However, in most cases, a few parts of the application are the choke points requiring scaling, while other components are used less. - -Using the typical eCommerce example, what you likely need to scale is the product information component. Many more customers browse products than purchase them. More customers use their basket than use the payment pipeline. Fewer customers add comments or view their purchase history. And you likely only have a handful of employees, in a single region, that need to manage the content and marketing campaigns. By scaling the monolithic design, all the code is deployed multiple times. - -In addition to the "scale everything" problem, changes to a single component require complete retesting of the entire application, and a complete redeployment of all the instances. - -The monolithic approach is common, and many organizations are developing with this architectural approach. Many are having good enough results, while others are hitting limits. Many designed their applications in this model, because the tools and infrastructure were too difficult to build service-oriented architectures (SOA), and they didn't see the need until the app grew. 
If you find you're hitting the limits of the monolithic approach, breaking up the app to enable it to better leverage containers and microservices may be the next logical step. - -![Figure 5-14](./media/image5-14.png) - -Deploying monolithic applications in Microsoft Azure can be achieved using dedicated VMs for each instance. Using [Azure Virtual Machine Scale Sets](/azure/virtual-machine-scale-sets/), you can easily scale the VMs. [Azure App Services](https://azure.microsoft.com/services/app-service/) can run monolithic applications and easily scale instances without having to manage the VMs. Azure App Services can run single instances of Docker containers as well, simplifying the deployment. Using Docker, you can deploy a single VM as a Docker host, and run multiple instances. Using an Azure load balancer, as shown in Figure 5-14, you can manage scaling. - -The deployment to the various hosts can be managed with traditional deployment techniques. The Docker hosts can be managed with commands like **docker run** performed manually, or through automation such as Continuous Delivery (CD) pipelines. - -### Monolithic application deployed as a container - -There are benefits to using containers to manage monolithic application deployments. Scaling the instances of containers is far faster and easier than deploying additional VMs. Even when using virtual machine scale sets to scale VMs, they take time to create. When the app is deployed as VM instances, the configuration of the app is managed as part of the VM. - -Deploying updates as Docker images is far faster and more network efficient. Docker images typically start in seconds, speeding rollouts. Tearing down a Docker instance is as easy as issuing a `docker stop` command, typically completing in less than a second. - -As containers are inherently immutable by design, you never need to worry about corrupted VMs, whereas update scripts might forget to account for some specific configuration or file left on the disk. - -You can use Docker containers for a monolithic deployment of simpler web applications. This approach improves continuous integration and continuous deployment pipelines and helps achieve deployment-to-production success. No more “It works on my machine, why does it not work in production?” - -A microservices-based architecture has many benefits, but those benefits come at a cost of increased complexity. In some cases, the costs outweigh the benefits, so a monolithic application running in a single container or in just a few containers is a better option. - -A monolithic application might not be easily decomposable into well-separated microservices. Microservices should work independently of each other to provide a more resilient application. If you can't deliver independent feature slices of the application, separating it only adds complexity. - -An application might not yet need to scale features independently. Many applications, when they need to scale beyond a single instance, can do so through the relatively simple process of cloning that entire instance. The additional work to separate the application into discrete services provides a minimal benefit when scaling full instances of the application is simple and cost-effective. - -Early in the development of an application, you might not have a clear idea where the natural functional boundaries are. As you develop a minimum viable product, the natural separation might not yet have emerged. Some of these conditions might be temporary.
You might start by creating a monolithic application, and later separate some features to be developed and deployed as microservices. Other conditions might be essential to the application's problem space, meaning that the application might never be broken into multiple microservices. - -Separating an application into many discrete processes also introduces overhead. There's more complexity in separating features into different processes. The communication protocols become more complex. Instead of method calls, you must use asynchronous communications between services. As you move to a microservices architecture, you need to add many of the building blocks implemented in the microservices version of the eShopOnContainers application: event bus handling, message resiliency and retries, eventual consistency, and more. - -The much simpler [eShopOnWeb reference application](https://github.com/dotnet-architecture/eShopOnWeb) supports single-container monolithic container usage. The application includes one web application that includes traditional MVC views, web APIs, and Razor Pages. Optionally, you can run the application's Blazor-based admin component, which requires a separate API project to run as well. - -The application can be launched from the solution root using the `docker-compose build` and `docker-compose up` commands. This command configures a container for the web instance, using the `Dockerfile` found in the web project's root, and runs the container on a specified port. You can download the source for this application from GitHub and run it locally. Even this monolithic application benefits from being deployed in a container environment. - -For one, the containerized deployment means that every instance of the application runs in the same environment. This approach includes the developer environment where early testing and development take place. The development team can run the application in a containerized environment that matches the production environment. - -In addition, containerized applications scale out at a lower cost. Using a container environment enables greater resource sharing than traditional VM environments. - -Finally, containerizing the application forces a separation between the business logic and the storage server. As the application scales out, the multiple containers will all rely on a single physical storage medium. This storage medium would typically be a high-availability server running a SQL Server database. - -## Docker support - -The `eShopOnWeb` project runs on .NET. Therefore, it can run in either Linux-based or Windows-based containers. Note that for Docker deployment, you want to use the same host type for SQL Server. Linux-based containers allow a smaller footprint and are preferred. - -You can use Visual Studio 2017 or later to add Docker support to an existing application by right-clicking on a project in **Solution Explorer** and choosing **Add** > **Docker Support**. This step adds the files required and modifies the project to use them. The current `eShopOnWeb` sample already has these files in place. - -The solution-level `docker-compose.yml` file contains information about what images to build and what containers to launch. The file allows you to use the `docker-compose` command to launch multiple applications at the same time. In this case, it is only launching the Web project. You can also use it to configure dependencies, such as a separate database container. 
- -```yml -version: '3' - -services: - eshopwebmvc: - image: eshopwebmvc - build: - context: . - dockerfile: src/Web/Dockerfile - environment: - - ASPNETCORE_ENVIRONMENT=Development - ports: - - "5106:5106" - -networks: - default: - external: - name: nat -``` - -The `docker-compose.yml` file references the `Dockerfile` in the `Web` project. The `Dockerfile` is used to specify which base container will be used and how the application will be configured on it. The `Web` project's `Dockerfile`: - -```dockerfile -FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build -WORKDIR /app - -COPY *.sln . -COPY . . -WORKDIR /app/src/Web -RUN dotnet restore - -RUN dotnet publish -c Release -o out - -FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime -WORKDIR /app -COPY --from=build /app/src/Web/out ./ - -ENTRYPOINT ["dotnet", "Web.dll"] -``` - -### Troubleshooting Docker problems - -Once you run the containerized application, it continues to run until you stop it. You can view which containers are running with the `docker ps` command. You can stop a running container by using the `docker stop` command and specifying the container ID. - -Note that running Docker containers may be bound to ports you might otherwise try to use in your development environment. If you try to run or debug an application using the same port as a running Docker container, you'll get an error stating that the server can't bind to that port. Once again, stopping the container should resolve the issue. - -If you want to add Docker support to your application using Visual Studio, make sure Docker Desktop is running when you do so. The wizard won't run correctly if Docker Desktop isn't running when you start the wizard. In addition, the wizard examines your current container choice to add the correct Docker support. If you want to add support for Windows Containers, you need to run the wizard while you have Docker Desktop running with Windows Containers configured. If you want to add support for Linux containers, run the wizard while you have Docker running with Linux containers configured. - -### Other web application architectural styles - -- [Web-Queue-Worker](/azure/architecture/guide/architecture-styles/web-queue-worker): The core components of this architecture are a web front end that serves client requests, and a worker that performs resource-intensive tasks, long-running workflows, or batch jobs. The web front end communicates with the worker through a message queue (see the sketch after this list). -- [N-tier](/azure/architecture/guide/architecture-styles/n-tier): An N-tier architecture divides an application into logical layers and physical tiers. -- [Microservice](/azure/architecture/guide/architecture-styles/microservices): A microservices architecture consists of a collection of small, autonomous services. Each service is self-contained and should implement a single business capability within a bounded context.
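-
-To make the Web-Queue-Worker style a bit more concrete, here is a deliberately simplified, in-process sketch: a hosted worker drains a `Channel<T>` that the web front end writes to. This is only an analogue of the style; in a real Web-Queue-Worker deployment the front end and worker are separate deployables communicating through an external message queue, and the types shown (`ReportRequest`, `ReportWorker`) are hypothetical:
-
-```csharp
-using System;
-using System.Threading;
-using System.Threading.Channels;
-using System.Threading.Tasks;
-using Microsoft.Extensions.Hosting;
-
-// Hypothetical work item the web front end enqueues instead of doing slow work inline.
-public record ReportRequest(int CustomerId);
-
-public class ReportWorker : BackgroundService
-{
-    private readonly Channel<ReportRequest> _queue;
-    public ReportWorker(Channel<ReportRequest> queue) => _queue = queue;
-
-    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
-    {
-        // The worker drains the queue independently of request handling,
-        // so long-running jobs never tie up web front-end threads.
-        await foreach (var request in _queue.Reader.ReadAllAsync(stoppingToken))
-        {
-            Console.WriteLine($"Generating report for customer {request.CustomerId}...");
-            await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken); // stand-in for real work
-        }
-    }
-}
-
-// Registration sketch for Program.cs:
-// builder.Services.AddSingleton(Channel.CreateUnbounded<ReportRequest>());
-// builder.Services.AddHostedService<ReportWorker>();
-// An endpoint or controller then writes to the channel:
-// await queue.Writer.WriteAsync(new ReportRequest(customerId));
-```
-
-Moving from this sketch to the real architectural style is then largely a matter of replacing the in-process channel with a durable queue and hosting the worker as its own service.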
- -### References – Common web architectures - -- **The Clean Architecture** - -- **The Onion Architecture** - -- **The Repository Pattern** - -- **Clean Architecture Solution Template** - -- **Architecting Microservices e-book** - -- **DDD (Domain-Driven Design)** - [https://learn.microsoft.com/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/](../microservices/microservice-ddd-cqrs-patterns/index.md) - ->[!div class="step-by-step"] ->[Previous](architectural-principles.md) ->[Next](common-client-side-web-technologies.md) diff --git a/docs/architecture/modern-web-apps-azure/develop-asp-net-core-mvc-apps.md b/docs/architecture/modern-web-apps-azure/develop-asp-net-core-mvc-apps.md deleted file mode 100644 index 7180b3c0e144e..0000000000000 --- a/docs/architecture/modern-web-apps-azure/develop-asp-net-core-mvc-apps.md +++ /dev/null @@ -1,838 +0,0 @@ ---- -title: Developing ASP.NET Core MVC apps -description: Architect Modern Web Applications with ASP.NET Core and Azure | developing ASP.NET Core MVC Apps -author: ardalis -ms.author: wiwagn -ms.date: 12/12/2021 -no-loc: [Blazor, WebAssembly] ---- -# Develop ASP.NET Core MVC apps - -[!INCLUDE [download-alert](includes/download-alert.md)] - -> "It's not important to get it right the first time. It's vitally important to get it right the last time." -> _- Andrew Hunt and David Thomas_ - -ASP.NET Core is a cross-platform, open-source framework for building modern cloud-optimized web applications. ASP.NET Core apps are lightweight and modular, with built-in support for dependency injection, enabling greater testability and maintainability. Combined with MVC, which supports building modern web APIs in addition to view-based apps, ASP.NET Core is a powerful framework with which to build enterprise web applications. - -## MVC and Razor Pages - -ASP.NET Core MVC offers many features that are useful for building web-based APIs and apps. The term MVC stands for "Model-View-Controller", a UI pattern that breaks up the responsibilities of responding to user requests into several parts. In addition to following this pattern, you can also implement features in your ASP.NET Core apps as Razor Pages. - -Razor Pages are built into ASP.NET Core MVC, and use the same features for routing, model binding, filters, authorization, etc. However, instead of having separate folders and files for Controllers, Models, Views, etc. and using attribute-based routing, Razor Pages are placed in a single folder ("/Pages"), route based on their relative location in this folder, and handle requests with handlers instead of controller actions. As a result, when working with Razor Pages, all of the files and classes you need are typically colocated, not spread throughout the web project. - -Learn more about [how MVC, Razor Pages, and related patterns are applied in the eShopOnWeb sample application](https://github.com/dotnet-architecture/eShopOnWeb/wiki/Patterns#mvc). - -When you create a new ASP.NET Core App, you should have a plan in mind for the kind of app you want to build. When creating a new project, in your IDE or using the `dotnet new` CLI command, you will choose from several templates. The most common project templates are Empty, Web API, Web App, and Web App (Model-View-Controller). Although you can only make this decision when you first create a project, it's not an irrevocable decision. The Web API project uses standard Model-View-Controller controllers – it just lacks Views by default. 
Likewise, the default Web App template uses Razor Pages, and so also lacks a Views folder. You can add a Views folder to these projects later to support view-based behavior. Web API and Model-View-Controller projects don't include a Pages folder by default, but you can add one later to support Razor Pages-based behavior. You can think of these three templates as supporting three different kinds of default user interaction: data (web API), page-based, and view-based. However, you can mix and match any or all of these templates within a single project if you wish. - -### Why Razor Pages? - -Razor Pages is the default approach for new web applications in Visual Studio. Razor Pages offers a simpler way of building page-based application features, such as non-SPA forms. Using controllers and views, it was common for applications to have very large controllers that worked with many different dependencies and view models and returned many different views. This resulted in more complexity and often resulted in controllers that didn't follow the Single Responsibility Principle or Open/Closed Principles effectively. Razor Pages addresses this issue by encapsulating the server-side logic for a given logical "page" in a web application with its Razor markup. A Razor Page that has no server-side logic can only consist of a Razor file (for instance, "Index.cshtml"). However, most non-trivial Razor Pages will have an associated page model class, which by convention is named the same as the Razor file with a ".cs" extension (for example, "Index.cshtml.cs"). - -A Razor Page's page model combines the responsibilities of an MVC controller and a viewmodel. Instead of handling requests with controller action methods, page model handlers like "OnGet()" are executed, rendering their associated page by default. Razor Pages simplifies the process of building individual pages in an ASP.NET Core app, while still providing all the architectural features of ASP.NET Core MVC. They're a good default choice for new page-based functionality. - -### When to use MVC - -If you're building web APIs, the MVC pattern makes more sense than trying to use Razor Pages. If your project will only expose web API endpoints, you should ideally start from the Web API project template. Otherwise, it's easy to add controllers and associated API endpoints to any ASP.NET Core app. Use the view-based MVC approach if you're migrating an existing application from ASP.NET MVC 5 or earlier to ASP.NET Core MVC and you want to do so with the least amount of effort. Once you've made the initial migration, you can evaluate whether it makes sense to adopt Razor Pages for new features or even as a wholesale migration. For more information about porting .NET 4.x apps to .NET 8, see [Porting Existing ASP.NET Apps to ASP.NET Core eBook](/dotnet/architecture/porting-existing-aspnet-apps/). - -Whether you choose to build your web app using Razor Pages or MVC views, your app will have similar performance and will include support for dependency injection, filters, model binding, validation, and so on. - -## Mapping requests to responses - -At its heart, ASP.NET Core apps map incoming requests to outgoing responses. At a low level, this mapping is done with middleware, and simple ASP.NET Core apps and microservices may be comprised solely of custom middleware. When using ASP.NET Core MVC, you can work at a somewhat higher level, thinking in terms of _routes_, _controllers_, and _actions_. 
Each incoming request is compared with the application's routing table, and if a matching route is found, the associated action method (belonging to a controller) is called to handle the request. If no matching route is found, an error handler (in this case, returning a NotFound result) is called. - -ASP.NET Core MVC apps can use conventional routes, attribute routes, or both. Conventional routes are defined in code, specifying routing _conventions_ using syntax like in the example below: - -```csharp -app.UseEndpoints(endpoints => -{ - endpoints.MapControllerRoute(name: "default", pattern: "{controller=Home}/{action=Index}/{id?}"); -}); -``` - -In this example, a route named "default" has been added to the routing table. It defines a route template with placeholders for `controller`, `action`, and `id`. The `controller` and `action` placeholders have the default specified (`Home` and `Index`, respectively), and the `id` placeholder is optional (by virtue of a "?" applied to it). The convention defined here states that the first part of a request should correspond to the name of the controller, the second part to the action, and then if necessary a third part will represent an ID parameter. Conventional routes are typically defined in one place for the application, such as in *Program.cs* where the request middleware pipeline is configured. - -Attribute routes are applied to controllers and actions directly, rather than specified globally. This approach has the advantage of making them much more discoverable when you're looking at a particular method, but does mean that routing information is not kept in one place in the application. With attribute routes, you can easily specify multiple routes for a given action, as well as combine routes between controllers and actions. For example: - -```csharp -[Route("Home")] -public class HomeController : Controller -{ - [Route("")] // Combines to define the route template "Home" - [Route("Index")] // Combines to define route template "Home/Index" - [Route("/")] // Does not combine, defines the route template "" - public IActionResult Index() {} -} -``` - -Routes can be specified on [HttpGet] and similar attributes, avoiding the need to add separate [Route] attributes. Attribute routes can also use tokens to reduce the need to repeat controller or action names, as shown below: - -```csharp -[Route("[controller]")] -public class ProductsController : Controller -{ - [Route("")] // Matches 'Products' - [Route("Index")] // Matches 'Products/Index' - public IActionResult Index() {} -} -``` - -Razor Pages don't use attribute routing. You can specify additional route template information for a Razor Page as part of its `@page` directive: - -```csharp -@page "{id:int}" -``` - -In the previous example, the page in question would match a route with an integer `id` parameter. For example, the *Products.cshtml* page located in the root of `/Pages` would respond to requests like this one: - -```http -/Products/123 -``` - -Once a given request has been matched to a route, but before the action method is called, ASP.NET Core MVC will perform [model binding](/aspnet/core/mvc/models/model-binding) and [model validation](/aspnet/core/mvc/models/validation) on the request. Model binding is responsible for converting incoming HTTP data into the .NET types specified as parameters of the action method to be called. 
For example, if the action method expects an `int id` parameter, model binding will attempt to provide this parameter from a value provided as part of the request. To do so, model binding looks for values in a posted form, values in the route itself, and query string values. Assuming an `id` value is found, it will be converted to an integer before being passed into the action method. - -After binding the model but before calling the action method, model validation occurs. Model validation uses optional attributes on the model type, and can help ensure that the provided model object conforms to certain data requirements. Certain values may be specified as required, or limited to a certain length or numeric range, etc. If validation attributes are specified but the model does not conform to their requirements, the property ModelState.IsValid will be false, and the set of failing validation rules will be available to send to the client making the request. - -If you're using model validation, you should be sure to always check that the model is valid before performing any state-altering commands, to ensure your app is not corrupted by invalid data. You can use a [filter](/aspnet/core/mvc/controllers/filters) to avoid the need to add code for this validation in every action. ASP.NET Core MVC filters offer a way of intercepting groups of requests, so that common policies and cross-cutting concerns can be applied on a targeted basis. Filters can be applied to individual actions, whole controllers, or globally for an application. - -For web APIs, ASP.NET Core MVC supports [_content negotiation_](/aspnet/core/mvc/models/formatting), allowing requests to specify how responses should be formatted. Based on headers provided in the request, actions returning data will format the response in XML, JSON, or another supported format. This feature enables the same API to be used by multiple clients with different data format requirements. - -Web API projects should consider using the `[ApiController]` attribute, which can be applied to individual controllers, to a base controller class, or to the entire assembly. This attribute adds automatic model validation checking and any action with an invalid model will return a BadRequest with the details of the validation errors. The attribute also requires all actions have an attribute route, rather than using a conventional route, and returns more detailed ProblemDetails information in response to errors. - -### Keeping controllers under control - -For page-based applications, Razor Pages do a great job of keeping controllers from getting too large. Each individual page is given its own files and classes dedicated just to its handler(s). Prior to the introduction of Razor Pages, many view-centric applications would have large controller classes responsible for many different actions and views. These classes would naturally grow to have many responsibilities and dependencies, making them harder to maintain. If you find your view-based controllers are growing too large, consider refactoring them to use Razor Pages, or introducing a pattern like a mediator. - -The mediator design pattern is used to reduce coupling between classes while allowing communication between them. In ASP.NET Core MVC applications, this pattern is frequently employed to break up controllers into smaller pieces by using *handlers* to do the work of action methods. The popular [MediatR NuGet package](https://www.nuget.org/packages/MediatR/) is often used to accomplish this. 
Typically, controllers include many different action methods, each of which may require certain dependencies. The set of all dependencies required by any action must be passed into the controller's constructor. When using MediatR, the only dependency a controller will typically have is an instance of the mediator. Each action then uses the mediator instance to send a message, which is processed by a handler. The handler is specific to a single action and thus only needs the dependencies required by that action. An example of a controller using MediatR is shown here: - -```csharp -public class OrderController : Controller -{ - private readonly IMediator _mediator; - - public OrderController(IMediator mediator) - { - _mediator = mediator; - } - - [HttpGet] - public async Task<IActionResult> MyOrders() - { - var viewModel = await _mediator.Send(new GetMyOrders(User.Identity.Name)); - return View(viewModel); - } - // other actions implemented similarly -} -``` - -In the `MyOrders` action, the call to `Send` a `GetMyOrders` message is handled by this class: - -```csharp -public class GetMyOrdersHandler : IRequestHandler<GetMyOrders, IEnumerable<OrderViewModel>> -{ - private readonly IOrderRepository _orderRepository; - public GetMyOrdersHandler(IOrderRepository orderRepository) - { - _orderRepository = orderRepository; - } - - public async Task<IEnumerable<OrderViewModel>> Handle(GetMyOrders request, CancellationToken cancellationToken) - { - var specification = new CustomerOrdersWithItemsSpecification(request.UserName); - var orders = await _orderRepository.ListAsync(specification); - return orders.Select(o => new OrderViewModel - { - OrderDate = o.OrderDate, - OrderItems = o.OrderItems?.Select(oi => new OrderItemViewModel() - { - PictureUrl = oi.ItemOrdered.PictureUri, - ProductId = oi.ItemOrdered.CatalogItemId, - ProductName = oi.ItemOrdered.ProductName, - UnitPrice = oi.UnitPrice, - Units = oi.Units - }).ToList(), - OrderNumber = o.Id, - ShippingAddress = o.ShipToAddress, - Total = o.Total() - }); - } -} -``` - -The end result of this approach is for controllers to be much smaller and focused primarily on routing and model binding, while individual handlers are responsible for the specific tasks needed by a given endpoint. This approach can also be achieved without MediatR by using the [ApiEndpoints NuGet package](https://www.nuget.org/packages/Ardalis.ApiEndpoints/), which attempts to bring to API controllers the same benefits Razor Pages brings to view-based controllers. - -> ### References – Mapping Requests to Responses -> -> - **Routing to Controller Actions**\ - > [https://learn.microsoft.com/aspnet/core/mvc/controllers/routing](/aspnet/core/mvc/controllers/routing) -> - **Model Binding**\ - > [https://learn.microsoft.com/aspnet/core/mvc/models/model-binding](/aspnet/core/mvc/models/model-binding) -> - **Model Validation**\ - > [https://learn.microsoft.com/aspnet/core/mvc/models/validation](/aspnet/core/mvc/models/validation) -> - **Filters**\ - > [https://learn.microsoft.com/aspnet/core/mvc/controllers/filters](/aspnet/core/mvc/controllers/filters) -> - **ApiController Attribute**\ - > [https://learn.microsoft.com/aspnet/core/web-api/](/aspnet/core/web-api/) - -## Working with dependencies - -ASP.NET Core has built-in support for and internally makes use of a technique known as [dependency injection](/aspnet/core/fundamentals/dependency-injection). Dependency injection is a technique that enables loose coupling between different parts of an application.
Looser coupling is desirable because it makes it easier to isolate parts of the application, allowing for testing or replacement. It also makes it less likely that a change in one part of the application will have an unexpected impact somewhere else in the application. Dependency injection is based on the dependency inversion principle, and is often key to achieving the open/closed principle. When evaluating how your application works with its dependencies, beware of the [static cling](https://deviq.com/static-cling/) code smell, and remember the aphorism "[new is glue](https://ardalis.com/new-is-glue)." - -Static cling occurs when your classes make calls to static methods, or access static properties, which have side effects or dependencies on infrastructure. For example, if you have a method that calls a static method, which in turn writes to a database, your method is tightly coupled to the database. Anything that breaks that database call will break your method. Testing such methods is notoriously difficult, since such tests either require commercial mocking libraries to mock the static calls, or can only be tested with a test database in place. Static calls that don't have any dependence on infrastructure, especially those calls that are completely stateless, are fine to call and have no impact on coupling or testability (beyond coupling code to the static call itself). - -Many developers understand the risks of static cling and global state, but will still tightly couple their code to specific implementations through direct instantiation. "New is glue" is meant to be a reminder of this coupling, and not a general condemnation of the use of the `new` keyword. Just as with static method calls, new instances of types that have no external dependencies typically do not tightly couple code to implementation details or make testing more difficult. But each time a class is instantiated, take just a brief moment to consider whether it makes sense to hard-code that specific instance in that particular location, or if it would be a better design to request that instance as a dependency. - -### Declare your dependencies - -ASP.NET Core is built around having methods and classes declare their dependencies, requesting them as arguments. ASP.NET applications are typically set up in _Program.cs_ or in a `Startup` class. - -> [!NOTE] -> Configuring apps completely in _Program.cs_ is the default approach for .NET 6 (and later) and Visual Studio 2022 apps. Project templates have been updated to help you get started with this new approach. ASP.NET Core projects can still use a `Startup` class, if desired. - -#### Configure services in _Program.cs_ - -For very simple apps, you can wire up dependencies directly in _Program.cs_ file using a `WebApplicationBuilder`. Once all needed services have been added, the builder is used to create the app. - -```csharp -var builder = WebApplication.CreateBuilder(args); - -// Add services to the container. -builder.Services.AddRazorPages(); - -var app = builder.Build(); -``` - -#### Configure services in _Startup.cs_ - -The _Startup.cs_ is itself configured to support dependency injection at several points. 
If you're using a `Startup` class, you can give it a constructor and it can request dependencies through it, like so: - -```csharp -public class Startup -{ - public Startup(IHostingEnvironment env) - { - var builder = new ConfigurationBuilder() - .SetBasePath(env.ContentRootPath) - .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true) - .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true); - } -} -``` - -The `Startup` class is interesting in that there are no explicit type requirements for it. It doesn't inherit from a special `Startup` base class, nor does it implement any particular interface. You can give it a constructor, or not, and you can specify as many parameters on the constructor as you want. When the web host you've configured for your application starts, it will call the `Startup` class (if you've told it to use one), and will use dependency injection to populate any dependencies the `Startup` class requires. Of course, if you request parameters that aren't configured in the services container used by ASP.NET Core, you'll get an exception, but as long as you stick to dependencies the container knows about, you can request anything you want. - -Dependency injection is built into your ASP.NET Core apps right from the start, when you create the Startup instance. It doesn't stop there for the Startup class. You can also request dependencies in the `Configure` method: - -```csharp -public void Configure(IApplicationBuilder app, - IHostingEnvironment env, - ILoggerFactory loggerFactory) -{ - -} -``` - -The ConfigureServices method is the exception to this behavior; it must take just one parameter of type `IServiceCollection`. It doesn't really need to support dependency injection, since on the one hand it is responsible for adding objects to the services container, and on the other it has access to all currently configured services via the `IServiceCollection` parameter. Thus, you can work with dependencies defined in the ASP.NET Core services collection in every part of the `Startup` class, either by requesting the needed service as a parameter or by working with the `IServiceCollection` in `ConfigureServices`. - -> [!NOTE] -> If you need to ensure certain services are available to your `Startup` class, you can configure them using an `IWebHostBuilder` and its `ConfigureServices` method inside the `CreateDefaultBuilder` call. - -The Startup class is a model for how you should structure other parts of your ASP.NET Core application, from Controllers to Middleware to Filters to your own Services. In each case, you should follow the [Explicit Dependencies Principle](https://deviq.com/explicit-dependencies-principle/), requesting your dependencies rather than directly creating them, and leveraging dependency injection throughout your application. Be careful of where and how you directly instantiate implementations, especially services and objects that work with infrastructure or have side effects. Prefer working with abstractions defined in your application core and passed in as arguments to hardcoding references to specific implementation types. - -## Structuring the application - -Monolithic applications typically have a single entry point. In the case of an ASP.NET Core web application, the entry point will be the ASP.NET Core web project. However, that doesn't mean the solution should consist of just a single project. It's useful to break up the application into different layers in order to follow separation of concerns. 
Once broken up into layers, it's helpful to go beyond folders to separate projects, which can help achieve better encapsulation. The best approach to achieve these goals with an ASP.NET Core application is a variation of the Clean Architecture discussed in chapter 5. Following this approach, the application's solution will comprise separate libraries for the UI, Infrastructure, and ApplicationCore. - -In addition to these projects, separate test projects are included as well (Testing is discussed in Chapter 9). - -The application's object model and interfaces should be placed in the ApplicationCore project. This project will have as few dependencies as possible (and none on specific infrastructure concerns), and the other projects in the solution will reference it. Business entities that need to be persisted are defined in the ApplicationCore project, as are services that do not directly depend on infrastructure. - -Implementation details, such as how persistence is performed or how notifications might be sent to a user, are kept in the Infrastructure project. This project will reference implementation-specific packages such as Entity Framework Core, but should not expose details about these implementations outside of the project. Infrastructure services and repositories should implement interfaces that are defined in the ApplicationCore project, and its persistence implementations are responsible for retrieving and storing entities defined in ApplicationCore. - -The ASP.NET Core UI project is responsible for any UI level concerns, but should not include business logic or infrastructure details. In fact, ideally it shouldn't even have a dependency on the Infrastructure project, which will help ensure no dependency between the two projects is introduced accidentally. This can be achieved using a third-party DI container like Autofac, which allows you to define DI rules in Module classes in each project. - -Another approach to decoupling the application from implementation details is to have the application call microservices, perhaps deployed in individual Docker containers. This provides even greater separation of concerns and decoupling than leveraging DI between two projects, but has additional complexity. - -### Feature organization - -By default, ASP.NET Core applications organize their folder structure to include Controllers and Views, and frequently ViewModels. Client-side code to support these server-side structures is typically stored separately in the wwwroot folder. However, large applications may encounter problems with this organization, since working on any given feature often requires jumping between these folders. This gets more and more difficult as the number of files and subfolders in each folder grows, resulting in a great deal of scrolling through Solution Explorer. One solution to this problem is to organize application code by _feature_ instead of by file type. This organizational style is typically referred to as feature folders or [feature slices](/archive/msdn-magazine/2016/september/asp-net-core-feature-slices-for-asp-net-core-mvc) (see also: [Vertical Slices](https://deviq.com/vertical-slices/)). - -ASP.NET Core MVC supports Areas for this purpose. Using areas, you can create separate sets of Controllers and Views folders (as well as any associated models) in each Area folder. Figure 7-1 shows an example folder structure, using Areas. - -![Sample Area Organization](./media/image7-1.png) - -**Figure 7-1**. 
Sample Area Organization - -When using Areas, you must use attributes to decorate your controllers with the name of the area to which they belong: - -```csharp -[Area("Catalog")] -public class HomeController -{} -``` - -You also need to add area support to your routes: - -```csharp -app.UseEndpoints(endpoints => -{ - endpoints.MapControllerRoute(name: "areaRoute", pattern: "{area:exists}/{controller=Home}/{action=Index}/{id?}"); - endpoints.MapControllerRoute(name: "default", pattern: "{controller=Home}/{action=Index}/{id?}"); -}); -``` - -In addition to the built-in support for Areas, you can also use your own folder structure, and conventions in place of attributes and custom routes. This would allow you to have feature folders that didn't include separate folders for Views, Controllers, etc., keeping the hierarchy flatter and making it easier to see all related files in a single place for each feature. For APIs, folders can be used to replace controllers, and each folder can contain all of the API Endpoints and their associated DTOs. - -ASP.NET Core uses built-in convention types to control its behavior. You can modify or replace these conventions. For example, you can create a convention that will automatically get the feature name for a given controller based on its namespace (which typically correlates to the folder in which the controller is located): - -```csharp -public class FeatureConvention : IControllerModelConvention -{ - public void Apply(ControllerModel controller) - { - controller.Properties.Add("feature", - GetFeatureName(controller.ControllerType)); - } - - private string GetFeatureName(TypeInfo controllerType) - { - string[] tokens = controllerType.FullName.Split('.'); - if (!tokens.Any(t => t == "Features")) return ""; - string featureName = tokens - .SkipWhile(t => !t.Equals("features", StringComparison.CurrentCultureIgnoreCase)) - .Skip(1) - .Take(1) - .FirstOrDefault(); - return featureName; - } -} -``` - -You then specify this convention as an option when you add support for MVC to your application in `ConfigureServices` (or in _Program.cs_): - -```csharp -// ConfigureServices -services.AddMvc(o => o.Conventions.Add(new FeatureConvention())); - -// Program.cs -builder.Services.AddMvc(o => o.Conventions.Add(new FeatureConvention())); -``` - -ASP.NET Core MVC also uses a convention to locate views. You can override it with a custom convention so that views will be located in your feature folders (using the feature name provided by the FeatureConvention, above). You can learn more about this approach and download a working sample from the MSDN Magazine article, [Feature Slices for ASP.NET Core MVC](/archive/msdn-magazine/2016/september/asp-net-core-feature-slices-for-asp-net-core-mvc). - -### APIs and Blazor applications - -If your application includes a set of web APIs, which must be secured, these APIs should ideally be configured as a separate project from your View or Razor Pages application. Separating APIs, especially public APIs, from your server-side web application has a number of benefits. These applications often will have unique deployment and load characteristics. They're also very likely to adopt different mechanisms for security, with standard form-based applications leveraging cookie-based authentication and APIs most likely using token-based authentication. - -Additionally, Blazor applications, whether using Blazor Server or Blazor WebAssembly, should be built as separate projects. 
The applications have different runtime characteristics as well as security models. They're likely to share common types with the server-side web application (or API project), and these types should be defined in a common shared project. - -The addition of a Blazor WebAssembly admin interface to eShopOnWeb required adding several new projects. The Blazor WebAssembly project itself is `BlazorAdmin`. A new set of public API endpoints, used by `BlazorAdmin` and configured to use token-based authentication, is defined in the `PublicApi` project. And certain shared types used by both of these projects are kept in a new `BlazorShared` project. - -One might ask, why add a separate `BlazorShared` project when there is already a common `ApplicationCore` project that could be used to share any types required by both `PublicApi` and `BlazorAdmin`? The answer is that this project includes all of the application's business logic and is thus much larger than necessary and also much more likely to need to be kept secure on the server. Remember that any library referenced by `BlazorAdmin` will be downloaded to users' browsers when they load the Blazor application. - -Depending on whether one is using the [Backends-For-Frontends (BFF) pattern](/azure/architecture/patterns/backends-for-frontends), the APIs consumed by the Blazor WebAssembly app may not share their types 100% with Blazor. In particular, a public API that's meant to be consumed by many different clients may define its own request and result types, rather than sharing them in a client-specific shared project. In the eShopOnWeb sample, the assumption is being made that the `PublicApi` project is, in fact, hosting a public API, so not all of its request and response types come from the `BlazorShared` project. - -### Cross-cutting concerns - -As applications grow, it becomes increasingly important to factor out cross-cutting concerns to eliminate duplication and maintain consistency. Some examples of cross-cutting concerns in ASP.NET Core applications are authentication, model validation rules, output caching, and error handling, though there are many others. ASP.NET Core MVC [filters](/aspnet/core/mvc/controllers/filters) allow you to run code before or after certain steps in the request processing pipeline. For instance, a filter can run before and after model binding, before and after an action, or before and after an action's result. You can also use an authorization filter to control access to the rest of the pipeline. Figure 7-2 shows how request execution flows through filters, if configured. - -![The request is processed through Authorization Filters, Resource Filters, Model Binding, Action Filters, Action Execution and Action Result Conversion, Exception Filters, Result Filters, and Result Execution. On the way out, the request is only processed by Result Filters and Resource Filters before becoming a response sent to the client.](./media/image7-2.png) - -**Figure 7-2**. Request execution through filters and request pipeline. - -Filters are usually implemented as attributes, so you can apply them to controllers or actions (or even globally). When added in this fashion, filters specified at the action level override or build upon filters specified at the controller level, which themselves override global filters. For example, the `[Route]` attribute can be used to build up routes between controllers and actions.
Likewise, authorization can be configured at the controller level, and then overridden by individual actions, as the following sample demonstrates: - -```csharp -[Authorize] -public class AccountController : Controller -{ - [AllowAnonymous] // overrides the Authorize attribute - public async Task Login() {} - public async Task ForgotPassword() {} -} -``` - -The first method, Login, uses the `[AllowAnonymous]` filter (attribute) to override the Authorize filter set at the controller level. The `ForgotPassword` action (and any other action in the class that doesn't have an AllowAnonymous attribute) will require an authenticated request. - -Filters can be used to eliminate duplication in the form of common error handling policies for APIs. For example, a typical API policy is to return a NotFound response to requests referencing keys that do not exist, and a `BadRequest` response if model validation fails. The following example demonstrates these two policies in action: - -```csharp -[HttpPut("{id}")] -public async Task Put(int id, [FromBody]Author author) -{ - if ((await _authorRepository.ListAsync()).All(a => a.Id != id)) - { - return NotFound(id); - } - if (!ModelState.IsValid) - { - return BadRequest(ModelState); - } - author.Id = id; - await _authorRepository.UpdateAsync(author); - return Ok(); -} -``` - -Don't allow your action methods to become cluttered with conditional code like this. Instead, pull the policies into filters that can be applied on an as-needed basis. In this example, the model validation check, which should occur anytime a command is sent to the API, can be replaced by the following attribute: - -```csharp -public class ValidateModelAttribute : ActionFilterAttribute -{ - public override void OnActionExecuting(ActionExecutingContext context) - { - if (!context.ModelState.IsValid) - { - context.Result = new BadRequestObjectResult(context.ModelState); - } - } -} -``` - -You can add the `ValidateModelAttribute` to your project as a NuGet dependency by including the [Ardalis.ValidateModel](https://www.nuget.org/packages/Ardalis.ValidateModel) package. For APIs, you can use the `ApiController` attribute to enforce this behavior without the need for a separate `ValidateModel` filter. - -Likewise, a filter can be used to check if a record exists and return a 404 before the action is executed, eliminating the need to perform these checks in the action. Once you've pulled out common conventions and organized your solution to separate infrastructure code and business logic from your UI, your MVC action methods should be extremely thin: - -```csharp -[HttpPut("{id}")] -[ValidateAuthorExists] -public async Task Put(int id, [FromBody]Author author) -{ - await _authorRepository.UpdateAsync(author); - return Ok(); -} -``` - -You can read more about implementing filters and download a working sample from the MSDN Magazine article, [Real-World ASP.NET Core MVC Filters](/archive/msdn-magazine/2016/august/asp-net-core-real-world-asp-net-core-mvc-filters). - -If you find that you have a number of common responses from APIs based on common scenarios like validation errors (Bad Request), resource not found, and server errors, you might consider using a *result* abstraction. The result abstraction would be returned by services consumed by API endpoints, and the controller action or endpoint would use a filter to translate these into `IActionResults`. 
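The result abstraction itself can be quite small. The following sketch is illustrative only (the `Result<T>` type, its members, and the filter name are hypothetical, not the types used by eShopOnWeb): a service returns a `Result<T>` describing what happened, and a result filter converts it into the appropriate `IActionResult`.

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

public enum ResultStatus { Ok, NotFound, Invalid }

// Non-generic base type so the filter can inspect the status without
// knowing the payload type.
public abstract class Result
{
    public ResultStatus Status { get; init; } = ResultStatus.Ok;
    public List<string> Errors { get; init; } = new();
}

public class Result<T> : Result
{
    public T? Value { get; init; }
}

// Applied to an action (or controller) that returns a Result, this filter
// rewrites error statuses into the corresponding HTTP responses.
public class TranslateResultToActionResultAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext context)
    {
        if (context.Result is not ObjectResult { Value: Result result }) return;

        context.Result = result.Status switch
        {
            ResultStatus.NotFound => new NotFoundResult(),
            ResultStatus.Invalid => new BadRequestObjectResult(result.Errors),
            _ => context.Result // successful results pass through unchanged
        };
    }
}
```

With something along these lines in place, actions return the service's result directly and stay free of repetitive status-checking code.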
- -> ### References – Structuring applications -> -> - **Areas**\ -> [https://learn.microsoft.com/aspnet/core/mvc/controllers/areas](/aspnet/core/mvc/controllers/areas) -> - **MSDN Magazine – Feature Slices for ASP.NET Core MVC**\ -> [https://learn.microsoft.com/archive/msdn-magazine/2016/september/asp-net-core-feature-slices-for-asp-net-core-mvc](/archive/msdn-magazine/2016/september/asp-net-core-feature-slices-for-asp-net-core-mvc) -> - **Filters**\ -> [https://learn.microsoft.com/aspnet/core/mvc/controllers/filters](/aspnet/core/mvc/controllers/filters) -> - **MSDN Magazine – Real World ASP.NET Core MVC Filters**\ -> [https://learn.microsoft.com/archive/msdn-magazine/2016/august/asp-net-core-real-world-asp-net-core-mvc-filters](/archive/msdn-magazine/2016/august/asp-net-core-real-world-asp-net-core-mvc-filters) -> - **Result in eShopOnWeb**\ -> [https://github.com/dotnet-architecture/eShopOnWeb/wiki/Patterns#result](https://github.com/dotnet-architecture/eShopOnWeb/wiki/Patterns#result) - -## Security - -Securing web applications is a large topic, with many considerations. At its most basic level, security involves ensuring you know who a given request is coming from, and then ensuring that the request only has access to resources it should. Authentication is the process of comparing credentials provided with a request to those in a trusted data store, to see if the request should be treated as coming from a known entity. Authorization is the process of restricting access to certain resources based on user identity. A third security concern is protecting requests from eavesdropping by third parties, for which you should at least [ensure that SSL is used by your application](/aspnet/core/security/enforcing-ssl). - -### Identity - -ASP.NET Core Identity is a membership system you can use to support login functionality for your application. It supports local user accounts as well as external login providers such as Microsoft Account, Twitter, Facebook, Google, and more. In addition to ASP.NET Core Identity, your application can use Windows authentication, or a third-party identity provider like [Identity Server](https://github.com/IdentityServer/IdentityServer4). - -ASP.NET Core Identity is included in new project templates if the Individual User Accounts option is selected. This template includes support for registration, login, external logins, forgotten passwords, and additional functionality. - -![Select Individual User Accounts to have Identity preconfigured](./media/image7-3.png) - -**Figure 7-3**. Select Individual User Accounts to have Identity preconfigured. - -Identity support is configured in _Program.cs_ or `Startup`, and includes configuring services as well as middleware. - -#### Configure Identity in _Program.cs_ - -In _Program.cs_, you configure services on the `WebApplicationBuilder` instance, and then once the app is created, you configure its middleware. The key points to note are the call to `AddDefaultIdentity` for required services and the `UseAuthentication` and `UseAuthorization` calls which add the required middleware. - -```csharp -var builder = WebApplication.CreateBuilder(args); - -// Add services to the container.
-var connectionString = builder.Configuration.GetConnectionString("DefaultConnection"); -builder.Services.AddDbContext<ApplicationDbContext>(options => - options.UseSqlServer(connectionString)); -builder.Services.AddDatabaseDeveloperPageExceptionFilter(); - -builder.Services.AddDefaultIdentity<IdentityUser>(options => options.SignIn.RequireConfirmedAccount = true) - .AddEntityFrameworkStores<ApplicationDbContext>(); -builder.Services.AddRazorPages(); - -var app = builder.Build(); - -// Configure the HTTP request pipeline. -if (app.Environment.IsDevelopment()) -{ - app.UseMigrationsEndPoint(); -} -else -{ - app.UseExceptionHandler("/Error"); - // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts. - app.UseHsts(); -} - -app.UseHttpsRedirection(); -app.UseStaticFiles(); - -app.UseRouting(); - -app.UseAuthentication(); -app.UseAuthorization(); - -app.MapRazorPages(); - -app.Run(); -``` - -#### Configuring Identity in app startup - -```csharp -// Add framework services. -builder.Services.AddDbContext<ApplicationDbContext>(options => - options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection"))); -builder.Services.AddIdentity<ApplicationUser, IdentityRole>() - .AddEntityFrameworkStores<ApplicationDbContext>() - .AddDefaultTokenProviders(); -builder.Services.AddMvc(); - -var app = builder.Build(); - -if (app.Environment.IsDevelopment()) -{ - app.UseMigrationsEndPoint(); -} -else -{ - app.UseExceptionHandler("/Error"); - app.UseHsts(); -} - -app.UseHttpsRedirection(); -app.UseStaticFiles(); - -app.UseRouting(); - -app.UseAuthentication(); -app.UseAuthorization(); - -app.MapRazorPages(); -``` - -It's important that `UseAuthentication` and `UseAuthorization` appear before `MapRazorPages`. When configuring Identity services, you'll notice a call to `AddDefaultTokenProviders`. This has nothing to do with tokens that may be used to secure web communications, but instead refers to providers that create prompts that can be sent to users via SMS or email in order for them to confirm their identity. - -You can learn more about [configuring two-factor authentication](/aspnet/core/security/authentication/2fa) and [enabling external login providers](/aspnet/core/security/authentication/social/) from the official ASP.NET Core docs. - -### Authentication - -Authentication is the process of determining who is accessing the system. If you're using ASP.NET Core Identity and the configuration methods shown in the previous section, it will automatically configure some authentication defaults in the application. However, you can also configure these defaults manually, or override the ones set by `AddIdentity`. If you're using Identity, it configures cookie-based authentication as the default *scheme*. - -In web-based authentication, there are typically up to five actions that may be performed in the course of authenticating a client of a system. These are: - -- Authenticate. Use the information provided by the client to create an identity for them to use within the application. -- Challenge. This action is used to require the client to identify themselves. -- Forbid. Inform the client they are forbidden from performing an action. -- Sign-in. Persist the existing client in some way. -- Sign-out. Remove the client from persistence. - -There are a number of common techniques for performing authentication in web applications. These are referred to as schemes. A given scheme will define actions for some or all of the above options. Some schemes only support a subset of actions, and may require a separate scheme to perform those it does not support.
For example, the OpenID Connect (OIDC) scheme doesn't support Sign-in or Sign-out, but is commonly configured to use Cookie authentication for this persistence. - -In your ASP.NET Core application, you can configure a `DefaultAuthenticateScheme` as well as optional specific schemes for each of the actions described above, such as `DefaultChallengeScheme` and `DefaultForbidScheme`. Calling `AddIdentity` (or `AddDefaultIdentity`) configures a number of aspects of the application and adds many required services. It also includes this call to configure the authentication scheme: - -```csharp -builder.Services.AddAuthentication(options => -{ - options.DefaultAuthenticateScheme = IdentityConstants.ApplicationScheme; - options.DefaultChallengeScheme = IdentityConstants.ApplicationScheme; - options.DefaultSignInScheme = IdentityConstants.ExternalScheme; -}); -``` - -These schemes use cookies for persistence and redirect to login pages for authentication by default. They're appropriate for web applications that interact with users via web browsers, but not recommended for APIs. Instead, APIs will typically use another form of authentication, such as JWT bearer tokens. - -Web APIs are consumed by code, such as `HttpClient` in .NET applications and equivalent types in other frameworks. These clients expect a usable response from an API call, or a status code indicating what, if any, problem has occurred. These clients are not interacting through a browser and do not render or interact with any HTML that an API might return. Thus, it is not appropriate for API endpoints to redirect their clients to login pages if they are not authenticated. Another scheme is more appropriate. - -To configure authentication for APIs, you might set up authentication like the following, used by the `PublicApi` project in the eShopOnWeb reference application: - -```csharp -builder.Services - .AddAuthentication(config => - { - config.DefaultScheme = JwtBearerDefaults.AuthenticationScheme; - }) - .AddJwtBearer(config => - { - config.RequireHttpsMetadata = false; - config.SaveToken = true; - config.TokenValidationParameters = new TokenValidationParameters - { - ValidateIssuerSigningKey = true, - IssuerSigningKey = new SymmetricSecurityKey(key), - ValidateIssuer = false, - ValidateAudience = false - }; - }); -``` - -While it is possible to configure multiple different authentication schemes within a single project, it is much simpler to configure a single default scheme. For this reason, among others, the eShopOnWeb reference application separates its APIs into their own project, `PublicApi`, separate from the main `Web` project that includes the application's views and Razor Pages. - -#### Authentication in Blazor apps - -Blazor Server applications can leverage the same authentication features as any other ASP.NET Core application. Blazor WebAssembly applications cannot use the built-in Identity and Authentication providers, however, since they run in the browser. Blazor WebAssembly applications can store user authentication status locally and can access claims to determine what actions users should be able to perform. However, all authentication and authorization checks should be performed on the server regardless of any logic implemented inside the Blazor WebAssembly app, since users can easily bypass the app and interact with the APIs directly.
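In practice, this means every sensitive API operation should demand an authenticated (and, where appropriate, authorized) caller, regardless of what the Blazor client does. The following sketch is illustrative only (the controller, route, and role name are hypothetical); with the JWT bearer scheme configured as the default, as shown above, the `[Authorize]` attribute rejects requests that don't carry a valid token:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/catalog-items")]
public class CatalogItemsController : ControllerBase
{
    // Reading the catalog is public.
    [HttpGet]
    [AllowAnonymous]
    public IActionResult List() => Ok();

    // Mutations require an authenticated caller in a specific (hypothetical) role,
    // enforced on the server even if the client never shows a delete button.
    [HttpDelete("{id}")]
    [Authorize(Roles = "Administrators")]
    public IActionResult Delete(int id) => NoContent();
}
```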
- -> ### References – Authentication -> -> - **Authentication Actions and Defaults**\ -> -> - **Authentication and Authorization for SPAs**\ -> [https://learn.microsoft.com/aspnet/core/security/authentication/identity-api-authorization](/aspnet/core/security/authentication/identity-api-authorization) -> - **ASP.NET Core Blazor Authentication and Authorization**\ -> [https://learn.microsoft.com/aspnet/core/blazor/security/](/aspnet/core/blazor/security/) -> - **Security: Authentication and Authorization in ASP.NET Web Forms and Blazor**\ -> [https://learn.microsoft.com/dotnet/architecture/blazor-for-web-forms-developers/security-authentication-authorization](../blazor-for-web-forms-developers/security-authentication-authorization.md) - -### Authorization - -The simplest form of authorization involves restricting access to anonymous users. This functionality can be achieved by applying the `[Authorize]` attribute to certain controllers or actions. If roles are being used, the attribute can be further extended to restrict access to users who belong to certain roles, as shown: - -```csharp -[Authorize(Roles = "HRManager,Finance")] -public class SalaryController : Controller -{ - -} -``` - -In this case, users belonging to either the `HRManager` or `Finance` roles (or both) would have access to the SalaryController. To require that a user belong to multiple roles (not just one of several), you can apply the attribute multiple times, specifying a required role each time. - -Specifying certain sets of roles as strings in many different controllers and actions can lead to undesirable repetition. At a minimum, define constants for these string literals and use the constants anywhere you need to specify the string. You can also configure authorization policies, which encapsulate authorization rules, and then specify the policy instead of individual roles when applying the `[Authorize]` attribute: - -```csharp -[Authorize(Policy = "CanViewPrivateReport")] -public IActionResult ExecutiveSalaryReport() -{ - return View(); -} -``` - -Using policies in this way, you can separate the kinds of actions being restricted from the specific roles or rules that apply to it. Later, if you create a new role that needs to have access to certain resources, you can just update a policy, rather than updating every list of roles on every `[Authorize]` attribute. - -#### Claims - -Claims are name value pairs that represent properties of an authenticated user. For example, you might store users' employee number as a claim. Claims can then be used as part of authorization policies. You could create a policy called "EmployeeOnly" that requires the existence of a claim called `"EmployeeNumber"`, as shown in this example: - -```csharp -public void ConfigureServices(IServiceCollection services) -{ - services.AddMvc(); - services.AddAuthorization(options => - { - options.AddPolicy("EmployeeOnly", policy => policy.RequireClaim("EmployeeNumber")); - }); -} -``` - -This policy could then be used with the `[Authorize]` attribute to protect any controller and/or action, as described above. - -#### Securing web APIs - -Most web APIs should implement a token-based authentication system. Token authentication is stateless and designed to be scalable. In a token-based authentication system, the client must first authenticate with the authentication provider. If successful, the client is issued a token, which is simply a cryptographically meaningful string of characters. 
The most common format for tokens is JSON Web Token, or JWT (often pronounced "jot"). When the client then needs to issue a request to an API, it adds this token as a header on the request. The server then validates the token found in the request header before completing the request. Figure 7-4 demonstrates this process. - -![TokenAuth](./media/image7-4.png) - -**Figure 7-4.** Token-based authentication for Web APIs. - -You can create your own authentication service, integrate with Azure AD and OAuth, or implement a service using an open-source tool like [IdentityServer](https://github.com/IdentityServer). - -JWT tokens can embed claims about the user, which can be read on the client or server. You can use a tool like [jwt.io](https://jwt.io/) to view the contents of a JWT token. Do not store sensitive data like passwords or keys in JWT tokens, since their contents are easily read. - -When using JWT tokens with SPA or Blazor WebAssembly applications, you must store the token somewhere on the client and then add it to every API call. The token is typically added as a request header, as the following code demonstrates: - -```csharp -// AuthService.cs in BlazorAdmin project of eShopOnWeb -private async Task SetAuthorizationHeader() -{ - var token = await GetToken(); - _httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token); -} -``` - -After calling the above method, requests made with the `_httpClient` will have the token embedded in the request's headers, allowing the server-side API to authenticate and authorize the request. - -#### Custom Security - -> [!CAUTION] -> As a general rule, avoid implementing your own custom security. - -Be especially careful about "rolling your own" implementation of cryptography, user membership, or token generation. There are many commercial and open-source alternatives available, which will almost certainly have better security than a custom implementation. - -> ### References – Security -> -> - **Security Docs Overview**\ -> [https://learn.microsoft.com/aspnet/core/security/](/aspnet/core/security/) -> - **Enforcing SSL in an ASP.NET Core App**\ -> [https://learn.microsoft.com/aspnet/core/security/enforcing-ssl](/aspnet/core/security/enforcing-ssl) -> - **Introduction to Identity**\ -> [https://learn.microsoft.com/aspnet/core/security/authentication/identity](/aspnet/core/security/authentication/identity) -> - **Introduction to Authorization**\ -> [https://learn.microsoft.com/aspnet/core/security/authorization/introduction](/aspnet/core/security/authorization/introduction) -> - **Authentication and Authorization for API Apps in Azure App Service**\ -> [https://learn.microsoft.com/azure/app-service-api/app-service-api-authentication](/azure/app-service-api/app-service-api-authentication) -> - **Identity Server**\ -> - -## Client communication - -In addition to serving pages and responding to requests for data via web APIs, ASP.NET Core apps can communicate directly with connected clients. This outbound communication can use a variety of transport technologies, the most common being WebSockets. ASP.NET Core SignalR is a library that makes it simple to add real-time server-to-client communication functionality to your applications. SignalR supports a variety of transport technologies, including WebSockets, and abstracts away many of the implementation details from the developer.
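On the server, real-time functionality is typically exposed through a *hub*. The following minimal sketch (the hub class, method name, and route are illustrative) broadcasts each incoming chat message to every connected client:

```csharp
using Microsoft.AspNetCore.SignalR;

public class ChatHub : Hub
{
    // Clients invoke SendMessage; the hub broadcasts the message to all
    // connected clients, which listen for the "receiveMessage" event.
    public Task SendMessage(string user, string message)
        => Clients.All.SendAsync("receiveMessage", user, message);
}
```

The hub would then be registered in _Program.cs_ with something like `builder.Services.AddSignalR();` and `app.MapHub<ChatHub>("/chat");`, after which browser or native clients can connect to the `/chat` endpoint.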
- -Real-time client communication, whether using WebSockets directly or other techniques, is useful in a variety of application scenarios. Some examples include: - -- Live chat room applications - -- Monitoring applications - -- Job progress updates - -- Notifications - -- Interactive forms applications - -When building client communication into your applications, there are typically two components: - -- Server-side connection manager (SignalR Hub, WebSocketManager WebSocketHandler) - -- Client-side library - -Clients aren't limited to browsers – mobile apps, console apps, and other native apps can also communicate using SignalR/WebSockets. The following simple program echoes all content sent to a chat application to the console, as part of a WebSocketManager sample application: - -```csharp -public class Program -{ - private static Connection _connection; - public static async Task Main(string[] args) - { - // Connect first so _connection is initialized before the handler is registered. - await StartConnectionAsync(); - _connection.On("receiveMessage", (arguments) => - { - Console.WriteLine($"{arguments[0]} said: {arguments[1]}"); - }); - Console.ReadLine(); - await StopConnectionAsync(); - } - - public static async Task StartConnectionAsync() - { - _connection = new Connection(); - await _connection.StartConnectionAsync("ws://localhost:65110/chat"); - } - - public static async Task StopConnectionAsync() - { - await _connection.StopConnectionAsync(); - } -} -``` - -Consider ways in which your applications communicate directly with client applications, and consider whether real-time communication would improve your app's user experience. - -> ### References – Client Communication -> -> - **ASP.NET Core SignalR**\ -> -> - **WebSocket Manager**\ -> - -## Domain-driven design – Should you apply it? - -Domain-Driven Design (DDD) is an agile approach to building software that emphasizes focusing on the _business domain_. It places a heavy emphasis on communication and interaction with business domain expert(s) who can convey to the developers how the real-world system works. For example, if you're building a system that handles stock trades, your domain expert might be an experienced stock broker. DDD is designed to address large, complex business problems, and is often not appropriate for smaller, simpler applications, as the investment in understanding and modeling the domain is not worth it. - -When building software following a DDD approach, your team (including non-technical stakeholders and contributors) should develop a _ubiquitous language_ for the problem space. That is, the same terminology should be used for the real-world concept being modeled, the software equivalent, and any structures that might exist to persist the concept (for example, database tables). Thus, the concepts described in the ubiquitous language should form the basis for your _domain model_. - -Your domain model comprises objects that interact with one another to represent the behavior of the system. These objects may fall into the following categories: - -- [Entities](https://deviq.com/entity/), which represent objects with a thread of identity. Entities are typically stored in persistence with a key by which they can later be retrieved. - -- [Aggregates](https://deviq.com/aggregate-pattern/), which represent groups of objects that should be persisted as a unit. - -- [Value objects](https://deviq.com/value-object/), which represent concepts that can be compared on the basis of the sum of their property values. For example, a `DateRange` consisting of a start date and an end date.
- -- [Domain events](https://martinfowler.com/eaaDev/DomainEvent.html), which represent things happening within the system that are of interest to other parts of the system. - -A DDD domain model should encapsulate complex behavior within the model. Entities, in particular, should not merely be collections of properties. When the domain model lacks behavior and merely represents the state of the system, it is said to be an [anemic model](https://deviq.com/anemic-model/), which is undesirable in DDD. - -In addition to these model types, DDD typically employs a variety of patterns: - -- [Repository](https://deviq.com/repository-pattern/), for abstracting persistence details. - -- [Factory](https://en.wikipedia.org/wiki/Factory_method_pattern), for encapsulating complex object creation. - -- [Services](http://gorodinski.com/blog/2012/04/14/services-in-domain-driven-design-ddd/), for encapsulating complex behavior and/or infrastructure implementation details. - -- [Command](https://en.wikipedia.org/wiki/Command_pattern), for decoupling issuing commands and executing the command itself. - -- [Specification](https://deviq.com/specification-pattern/), for encapsulating query details. - -DDD also recommends the use of the Clean Architecture discussed previously, allowing for loose coupling, encapsulation, and code that can easily be verified using unit tests. - -### When should you apply DDD - -DDD is well suited to large applications with significant business (not just technical) complexity. The application should require the knowledge of domain experts. There should be significant behavior in the domain model itself, representing business rules and interactions beyond simply storing and retrieving the current state of various records from data stores. - -### When shouldn't you apply DDD - -DDD involves investments in modeling, architecture, and communication that may not be warranted for smaller applications or applications that are essentially just CRUD (create/read/update/delete). If you choose to approach your application following DDD, but find that your domain has an anemic model with no behavior, you may need to rethink your approach. Either your application may not need DDD, or you may need assistance refactoring your application to encapsulate business logic in the domain model, rather than in your database or user interface. - -A hybrid approach would be to only use DDD for the transactional or more complex areas of the application, but not for simpler CRUD or read-only portions of the application. For instance, you don't need the constraints of an Aggregate if you're querying data to display a report or to visualize data for a dashboard. It's perfectly acceptable to have a separate, simpler read model for such requirements. - -> ### References – Domain-Driven Design -> -> - **DDD in Plain English (StackOverflow Answer)**\ -> - -## Deployment - -There are a few steps involved in the process of deploying your ASP.NET Core application, regardless of where it will be hosted. The first step is to publish the application, which can be done using the `dotnet publish` CLI command. This step will compile the application and place all of the files needed to run the application into a designated folder. When you deploy from Visual Studio, this step is performed for you automatically. The publish folder contains .exe and .dll files for the application and its dependencies. A self-contained application will also include a version of the .NET runtime. 
ASP.NET Core applications will also include configuration files, static client assets, and MVC views. - -ASP.NET Core applications are console applications that must be started when the server boots and restarted if the application (or server) crashes. A process manager can be used to automate this process. The most common process managers for ASP.NET Core are Nginx and Apache on Linux and IIS or Windows Service on Windows. - -In addition to a process manager, ASP.NET Core applications may use a reverse proxy server. A reverse proxy server receives HTTP requests from the Internet and forwards them to Kestrel after some preliminary handling. Reverse proxy servers provide a layer of security for the application. Kestrel also doesn't support hosting multiple applications on the same port, so techniques like host headers cannot be used with it to enable hosting multiple applications on the same port and IP address. - -![Kestrel to Internet](./media/image7-5.png) - -**Figure 7-5**. ASP.NET hosted in Kestrel behind a reverse proxy server - -Another scenario in which a reverse proxy can be helpful is to secure multiple applications using SSL/HTTPS. In this case, only the reverse proxy would need to have SSL configured. Communication between the reverse proxy server and Kestrel could take place over HTTP, as shown in Figure 7-6. - -![ASP.NET hosted behind an HTTPS-secured reverse proxy server](./media/image7-6.png) - -**Figure 7-6**. ASP.NET hosted behind an HTTPS-secured reverse proxy server - -An increasingly popular approach is to host your ASP.NET Core application in a Docker container, which then can be hosted locally or deployed to Azure for cloud-based hosting. The Docker container could contain your application code, running on Kestrel, and would be deployed behind a reverse proxy server, as shown above. - -If you're hosting your application on Azure, you can use Microsoft Azure Application Gateway as a dedicated virtual appliance to provide several services. 
In addition to acting as a reverse proxy for individual applications, Application Gateway can also offer the following features: - -- HTTP load balancing - -- SSL offload (SSL only to Internet) - -- End to End SSL - -- Multi-site routing (consolidate up to 20 sites on a single Application Gateway) - -- Web application firewall - -- Websocket support - -- Advanced diagnostics - -_Learn more about Azure deployment options in [Chapter 10](development-process-for-azure.md)._ - -> ### References – Deployment -> -> - **Hosting and Deployment Overview**\ -> [https://learn.microsoft.com/aspnet/core/publishing/](/aspnet/core/publishing/) -> - **When to use Kestrel with a reverse proxy**\ -> [https://learn.microsoft.com/aspnet/core/fundamentals/servers/kestrel#when-to-use-kestrel-with-a-reverse-proxy](/aspnet/core/fundamentals/servers/kestrel#when-to-use-kestrel-with-a-reverse-proxy) -> - **Host ASP.NET Core apps in Docker**\ -> [https://learn.microsoft.com/aspnet/core/publishing/docker](/aspnet/core/publishing/docker) -> - **Introducing Azure Application Gateway**\ -> [https://learn.microsoft.com/azure/application-gateway/application-gateway-introduction](/azure/application-gateway/application-gateway-introduction) - ->[!div class="step-by-step"] ->[Previous](common-client-side-web-technologies.md) ->[Next](work-with-data-in-asp-net-core-apps.md) diff --git a/docs/architecture/modern-web-apps-azure/development-process-for-azure.md b/docs/architecture/modern-web-apps-azure/development-process-for-azure.md deleted file mode 100644 index 10047c3d66da4..0000000000000 --- a/docs/architecture/modern-web-apps-azure/development-process-for-azure.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -title: Development process for Azure -description: Architect Modern Web Applications with ASP.NET Core and Azure | Development process for Azure -author: ardalis -ms.author: wiwagn -ms.date: 12/12/2021 ---- -# Development process for Azure - -[!INCLUDE [download-alert](includes/download-alert.md)] - -> _"With the cloud, individuals and small businesses can snap their fingers and instantly set up enterprise-class services."_ -> _- Roy Stephan_ - -## Vision - -> *Develop well-designed ASP .NET Core applications the way you like, using Visual Studio or the dotnet CLI and Visual Studio Code or your editor of choice.* - -## Development environment for ASP.NET Core apps - -### Development tools choices: IDE or editor - -Whether you prefer a full and powerful IDE or a lightweight and agile editor, Microsoft has you covered when developing ASP.NET Core applications. - -**Visual Studio 2022.** Visual Studio 2022 is the best-in-class IDE for developing applications for ASP.NET Core. It offers a host of features that increase developer productivity. You can use it to develop the application, then analyze its performance and other characteristics. The integrated debugger lets you pause code execution and step back and forth through code on the fly as it's running. Its support for hot reloads allows you to continue working with your app where you left off, even after making code changes, without having to restart the app. The built-in test runner lets you organize your tests and their results and can even perform live unit testing while you're coding. Using Live Share, you can collaborate in real-time with other developers, sharing your code session seamlessly over the network. And when you're ready, Visual Studio includes everything you need to publish your application to Azure or wherever you might host it. 
- -[Download Visual Studio 2022](https://aka.ms/vsdownload?utm_source=mscom&utm_campaign=msdocs) - -**Visual Studio Code and dotnet CLI** (Cross-Platform Tools for Mac, Linux, and Windows). If you prefer a lightweight and cross-platform editor supporting any development language, you can use Microsoft Visual Studio Code and the dotnet CLI. These products provide a simple yet robust experience that streamlines the developer workflow. Additionally, Visual Studio Code supports extensions for C\# and web development, providing intellisense and shortcut-tasks within the editor. - -[Download the .NET SDK](https://dotnet.microsoft.com/download) - -[Download Visual Studio Code](https://code.visualstudio.com/download) - -## Development workflow for Azure-hosted ASP.NET Core apps - -The application development lifecycle starts from each developer's machine, coding the app using their preferred language and testing it locally. Developers may choose their preferred source control system and can configure Continuous Integration (CI) and/or Continuous Delivery/Deployment (CD) using a build server or based on built-in Azure features. - -To get started with developing an ASP.NET Core application using CI/CD, you can use Azure DevOps Services or your organization's own Team Foundation Server (TFS). GitHub Actions provide another option for easily building and deploying apps to Azure, for apps whose code is hosted on GitHub. - -### Initial setup - -To create a release pipeline for your app, you need to have your application code in source control. Set up a local repository and connect it to a remote repository in a team project. Follow these instructions: - -- [Share your code with Git and Visual Studio](/azure/devops/git/share-your-code-in-git-vs) or - -- [Share your code with TFVC and Visual Studio](/azure/devops/tfvc/share-your-code-in-tfvc-vs) - -Create an Azure App Service where you'll deploy your application. Create a Web App by going to the App Services blade on the Azure portal. Click +Add, select the Web App template, click Create, and provide a name and other details. The web app will be accessible from {name}.azurewebsites.net. - -![AzureWebApp](./media/image10-2.png) - -**Figure 10-1.** Creating a new Azure App Service Web App in the Azure Portal. - -Your CI build process will perform an automated build whenever new code is committed to the project's source control repository. This process gives you immediate feedback that the code builds (and, ideally, passes automated tests) and can potentially be deployed. This CI build will produce a web deploy package artifact and publish it for consumption by your CD process. - -[Define your CI build process](/azure/devops/pipelines/ecosystems/dotnet-core) - -Be sure to enable continuous integration so the system will queue a build whenever someone on your team commits new code. Test the build and verify that it is producing a web deploy package as one of its artifacts. - -When a build succeeds, your CD process will deploy the results of your CI build to your Azure web app. To configure this step, you create and configure a *Release*, which will deploy to your Azure App Service. - -[Deploy an Azure web app](/azure/devops/pipelines/targets/webapp) - -Once your CI/CD pipeline is configured, you can easily make updates to your web app and commit them to source control to have them deployed. 
- -### Workflow for developing Azure-hosted ASP.NET Core applications - -Once you have configured your Azure account and your CI/CD process, developing Azure-hosted ASP.NET Core applications is simple. The following are the basic steps you usually take when building an ASP.NET Core app, hosted in Azure App Service as a Web App, as illustrated in Figure 10-2. - -![EndToEndDevDeployWorkflow](./media/image10-3.png) - -**Figure 10-2.** Step-by-step workflow for building ASP.NET Core apps and hosting them in Azure - -#### Step 1. Local dev environment inner loop - -Developing your ASP.NET Core application for deployment to Azure is no different from developing your application otherwise. Use the local development environment you're comfortable with, whether that's Visual Studio 2022, or the dotnet CLI with Visual Studio Code or your preferred editor. You can write code, run and debug your changes, run automated tests, and make local commits to source control until you're ready to push your changes to your shared source control repository. - -#### Step 2. Application code repository - -Whenever you're ready to share your code with your team, you should push your changes from your local source repository to your team's shared source repository. If you've been working in a custom branch, this step usually involves merging your code into a shared branch (perhaps by means of a [pull request](/azure/devops/git/pull-requests)). - -#### Step 3. Build Server: Continuous integration (build, test, package) - -A new build is triggered on the build server whenever a new commit is made to the shared application code repository. As part of the CI process, this build should fully compile the application and run automated tests to confirm everything is working as expected. The end result of the CI process should be a packaged version of the web app, ready for deployment. - -#### Step 4. Build Server: Continuous delivery - -Once a build has succeeded, the CD process picks up the build artifacts it produced, including a web deploy package. The build server deploys this package to Azure App Service, replacing any existing deployment with the newly created one. Typically this step targets a staging environment, but some applications deploy directly to production through a CD process. - -#### Step 5. Azure App Service Web App - -Once deployed, the ASP.NET Core application runs within the context of an Azure App Service Web App. This Web App can be monitored and further configured using the Azure Portal. - -#### Step 6. Production monitoring and diagnostics - -While the Web App is running, you can monitor the health of the application and collect diagnostics and user behavior data. Application Insights is included in Visual Studio, and offers automatic instrumentation for ASP.NET apps. It can provide you with information on usage, exceptions, requests, performance, and logs.
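Enabling Application Insights in the application itself is a single service registration. The following is a minimal sketch, assuming the `Microsoft.ApplicationInsights.AspNetCore` NuGet package is referenced and a connection string is supplied through configuration (for example, the `APPLICATIONINSIGHTS_CONNECTION_STRING` setting in App Service):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Registers request, dependency, and exception telemetry collection.
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();

app.MapGet("/", () => "Hello from an instrumented app!");

app.Run();
```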
- -## References - -**Build and Deploy Your ASP.NET Core App to Azure** -[https://learn.microsoft.com/azure/devops/build-release/apps/aspnet/build-aspnet-core](/azure/devops/build-release/apps/aspnet/build-aspnet-core) - ->[!div class="step-by-step"] ->[Previous](test-asp-net-core-mvc-apps.md) ->[Next](azure-hosting-recommendations-for-asp-net-web-apps.md) diff --git a/docs/architecture/modern-web-apps-azure/includes/download-alert.md b/docs/architecture/modern-web-apps-azure/includes/download-alert.md deleted file mode 100644 index dfc642a3a68f9..0000000000000 --- a/docs/architecture/modern-web-apps-azure/includes/download-alert.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -author: erjain -ms.author: v-tjain -ms.date: 04/13/2022 -ms.topic: include ---- - -> [!TIP] -> :::row::: -> :::column span="3"::: -> This content is an excerpt from the eBook, Architect Modern Web Applications with ASP.NET Core and Azure, available on [.NET Docs](/dotnet/architecture/modern-web-apps-azure) or as a free downloadable PDF that can be read offline. -> -> > [!div class="nextstepaction"] -> > [Download PDF](https://dotnet.microsoft.com/en-us/download/e-book/aspnet/pdf) -> :::column-end::: -> :::column::: -> :::image type="content" source="../media/cover-thumbnail.png" alt-text="Architect Modern Web Applications with ASP.NET Core and Azure eBook cover thumbnail."::: -> :::column-end::: -> :::row-end::: diff --git a/docs/architecture/modern-web-apps-azure/index.md b/docs/architecture/modern-web-apps-azure/index.md deleted file mode 100644 index 714f3c43ffb28..0000000000000 --- a/docs/architecture/modern-web-apps-azure/index.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: Architect modern web applications with ASP.NET Core and Azure -description: A guide that provides end-to-end guidance on building monolithic web applications using ASP.NET Core and Azure. -author: ardalis -ms.author: wiwagn -ms.date: 01/10/2022 ---- - -# Architect Modern Web Applications with ASP.NET Core and Azure - -![Book cover image of the Architect Modern Web Applications guide.](./media/index/web-application-guide-cover-image.png) - -**EDITION v8.0** - Updated to ASP.NET Core 8.0 - -Refer [changelog](https://aka.ms/aspnet-ebook-changelog) for the book updates and community contributions. - -PUBLISHED BY - -Microsoft Developer Division, .NET, and Visual Studio product teams - -A division of Microsoft Corporation - -One Microsoft Way - -Redmond, Washington 98052-6399 - -Copyright © 2023 by Microsoft Corporation - -All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher. - -This book is provided "as-is" and expresses the author's views and opinions. The views, opinions, and information expressed in this book, including URL and other Internet website references, may change without notice. - -Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or should be inferred. - -Microsoft and the trademarks listed at on the "Trademarks" webpage are trademarks of the Microsoft group of companies. - -Mac and macOS are trademarks of Apple Inc. - -The Docker whale logo is a registered trademark of Docker, Inc. Used by permission. - -All other marks and logos are property of their respective owners. 
- -Author: - -> **Steve "ardalis" Smith** - Software Architect and Trainer - [Ardalis.com](https://ardalis.com) - -Editors: - -> **Maira Wenzel** - -## Action links - -- This e-book is also available in a PDF format (English version only) [Download](https://aka.ms/webappebook) - -- Clone/Fork the reference application [eShopOnWeb on GitHub](https://github.com/dotnet-architecture/eShopOnWeb) - -## Introduction - -.NET 8 and ASP.NET Core offer several advantages over traditional .NET development. You should use .NET 8 for your server applications if some or all of the following are important to your application's success: - -- Cross-platform support. - -- Use of microservices. - -- Use of Docker containers. - -- High performance and scalability requirements. - -- Side-by-side versioning of .NET versions by application on the same server. - -Traditional .NET 4.x apps can and do support many of these requirements, but ASP.NET Core and .NET 8 have been optimized to offer improved support for the above scenarios. - -More and more organizations are choosing to host their web applications in the cloud using services like Microsoft Azure. You should consider hosting your application in the cloud if the following are important to your application or organization: - -- Reduced investment in data center costs (hardware, software, space, utilities, server management, etc.) - -- Flexible pricing (pay based on usage, not for idle capacity). - -- Extreme reliability. - -- Improved app mobility; easily change where and how your app is deployed. - -- Flexible capacity; scale up or down based on actual needs. - -Building web applications with ASP.NET Core, hosted in Azure, offers many competitive advantages over traditional alternatives. ASP.NET Core is optimized for modern web application development practices and cloud hosting scenarios. In this guide, you'll learn how to architect your ASP.NET Core applications to best take advantage of these capabilities. - -## Version - -This guide has been revised to cover **.NET 8.0** version along with many additional updates related to the same "wave" of technologies (that is, Azure and additional third-party technologies) coinciding in time with the .NET 8.0 release. That's why the book version has also been updated to version **8.0**. - -## Purpose - -This guide provides end-to-end guidance on building *monolithic* web applications using ASP.NET Core and Azure. In this context, "monolithic" refers to the fact that these applications are deployed as a single unit, not as a collection of interacting services and applications. In some contexts, the term *monolith* may be used as a pejorative, but in the vast majority of situations a single application is much easier to build, deploy, and debug than an app composed of many different services, while still achieving the business requirements. - -This guide is complementary to ["_.NET Microservices. Architecture for Containerized .NET Applications_"](../microservices/index.md), which focuses more on Docker, microservices, and deployment of containers to host enterprise applications. - -### .NET Microservices. Architecture for Containerized .NET Applications - -- **e-book** - -- **Sample Application** - - -## Who should use this guide - -The audience for this guide is mainly developers, development leads, and architects who are interested in building modern web applications using Microsoft technologies and services in the cloud. 
- -A secondary audience is technical decision makers who are already familiar ASP.NET or Azure and are looking for information on whether it makes sense to upgrade to ASP.NET Core for new or existing projects. - -## How you can use this guide - -This guide has been condensed into a relatively small document that focuses on building web applications with modern .NET technologies and Azure. As such, it can be read in its entirety to provide a foundation of understanding such applications and their technical considerations. The guide, along with its sample application, can also serve as a starting point or reference. Use the associated sample application as a template for your own applications, or to see how you might organize your application's component parts. Refer back to the guide's principles and coverage of architecture and technology options and decision considerations when you're weighing these choices for your own application. - -Feel free to forward this guide to your team to help ensure a common understanding of these considerations and opportunities. Having everybody working from a common set of terminology and underlying principles helps ensure consistent application of architectural patterns and practices. - -[!INCLUDE [feedback](../includes/feedback.md)] - -## References - -- **Choosing between .NET and .NET Framework for server apps** - [https://learn.microsoft.com/dotnet/standard/choosing-core-framework-server](../../standard/choosing-core-framework-server.md) - ->[!div class="step-by-step"] ->[Next](modern-web-applications-characteristics.md) diff --git a/docs/architecture/modern-web-apps-azure/media/cover-thumbnail.png b/docs/architecture/modern-web-apps-azure/media/cover-thumbnail.png deleted file mode 100644 index e9a307d39f190..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/cover-thumbnail.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image1-10.png b/docs/architecture/modern-web-apps-azure/media/image1-10.png deleted file mode 100644 index e8bdd176ffd94..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image1-10.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image1-5.png b/docs/architecture/modern-web-apps-azure/media/image1-5.png deleted file mode 100644 index 1790b703e7241..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image1-5.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image1-6.png b/docs/architecture/modern-web-apps-azure/media/image1-6.png deleted file mode 100644 index fa9be197283f1..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image1-6.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image1-7.png b/docs/architecture/modern-web-apps-azure/media/image1-7.png deleted file mode 100644 index c4bdb418b1a7e..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image1-7.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image1-8.png b/docs/architecture/modern-web-apps-azure/media/image1-8.png deleted file mode 100644 index ecdecb38ed35a..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image1-8.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image1-9.gif b/docs/architecture/modern-web-apps-azure/media/image1-9.gif deleted file mode 100644 index cb044eb0e8328..0000000000000 Binary files 
a/docs/architecture/modern-web-apps-azure/media/image1-9.gif and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image10-2.png b/docs/architecture/modern-web-apps-azure/media/image10-2.png deleted file mode 100644 index e2233a0d73401..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image10-2.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image10-3.png b/docs/architecture/modern-web-apps-azure/media/image10-3.png deleted file mode 100644 index 2c9dc961f7a52..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image10-3.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image11-2.png b/docs/architecture/modern-web-apps-azure/media/image11-2.png deleted file mode 100644 index c85b0a5854fb6..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image11-2.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image2-1.png b/docs/architecture/modern-web-apps-azure/media/image2-1.png deleted file mode 100644 index 3a44a8f87060e..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image2-1.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image4-1.png b/docs/architecture/modern-web-apps-azure/media/image4-1.png deleted file mode 100644 index ad72901872b98..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image4-1.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image4-2.png b/docs/architecture/modern-web-apps-azure/media/image4-2.png deleted file mode 100644 index 2a7abdace6077..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image4-2.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-1.png b/docs/architecture/modern-web-apps-azure/media/image5-1.png deleted file mode 100644 index b6c305ba38922..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-1.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-10.png b/docs/architecture/modern-web-apps-azure/media/image5-10.png deleted file mode 100644 index 1c5b188c249fd..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-10.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-11.png b/docs/architecture/modern-web-apps-azure/media/image5-11.png deleted file mode 100644 index 717a8f5fd9be1..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-11.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-12.png b/docs/architecture/modern-web-apps-azure/media/image5-12.png deleted file mode 100644 index fe0f47e7868e1..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-12.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-13.png b/docs/architecture/modern-web-apps-azure/media/image5-13.png deleted file mode 100644 index 03c5489cb32ca..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-13.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-14.png b/docs/architecture/modern-web-apps-azure/media/image5-14.png deleted file mode 100644 index 5fe5b44981435..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-14.png and /dev/null 
differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-2.png b/docs/architecture/modern-web-apps-azure/media/image5-2.png deleted file mode 100644 index e8f286be62c49..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-2.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-3.png b/docs/architecture/modern-web-apps-azure/media/image5-3.png deleted file mode 100644 index e891282be05d0..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-3.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-4.png b/docs/architecture/modern-web-apps-azure/media/image5-4.png deleted file mode 100644 index fec9f6ddcd279..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-4.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-5.png b/docs/architecture/modern-web-apps-azure/media/image5-5.png deleted file mode 100644 index c6388ab61affe..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-5.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-6.png b/docs/architecture/modern-web-apps-azure/media/image5-6.png deleted file mode 100644 index d591b2f5845cf..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-6.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-7.png b/docs/architecture/modern-web-apps-azure/media/image5-7.png deleted file mode 100644 index dd95aa78d53be..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-7.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-8.png b/docs/architecture/modern-web-apps-azure/media/image5-8.png deleted file mode 100644 index 8f01b369b564e..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-8.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image5-9.png b/docs/architecture/modern-web-apps-azure/media/image5-9.png deleted file mode 100644 index 92bdfaaa31183..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image5-9.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image6-1.png b/docs/architecture/modern-web-apps-azure/media/image6-1.png deleted file mode 100644 index fdbf00fe09894..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image6-1.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image7-1.png b/docs/architecture/modern-web-apps-azure/media/image7-1.png deleted file mode 100644 index d24e7ce427466..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image7-1.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image7-2.png b/docs/architecture/modern-web-apps-azure/media/image7-2.png deleted file mode 100644 index e3a07cce1818e..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image7-2.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image7-3.png b/docs/architecture/modern-web-apps-azure/media/image7-3.png deleted file mode 100644 index 15a0a16d38c79..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image7-3.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image7-4.png 
b/docs/architecture/modern-web-apps-azure/media/image7-4.png deleted file mode 100644 index 16fdee4bc8983..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image7-4.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image7-5.png b/docs/architecture/modern-web-apps-azure/media/image7-5.png deleted file mode 100644 index dc54fc3d58b61..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image7-5.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image7-6.png b/docs/architecture/modern-web-apps-azure/media/image7-6.png deleted file mode 100644 index 7ffebdf78d896..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image7-6.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image8-1.png b/docs/architecture/modern-web-apps-azure/media/image8-1.png deleted file mode 100644 index 5548e23518e0b..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image8-1.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image8-2.png b/docs/architecture/modern-web-apps-azure/media/image8-2.png deleted file mode 100644 index 2afec8fb78cd3..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image8-2.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image8-3.jpg b/docs/architecture/modern-web-apps-azure/media/image8-3.jpg deleted file mode 100644 index 0407ddbfa1edf..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image8-3.jpg and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image9-1.png b/docs/architecture/modern-web-apps-azure/media/image9-1.png deleted file mode 100644 index 48e1b365b5920..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image9-1.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image9-2.png b/docs/architecture/modern-web-apps-azure/media/image9-2.png deleted file mode 100644 index 4678dcbc2f633..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image9-2.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image9-3.png b/docs/architecture/modern-web-apps-azure/media/image9-3.png deleted file mode 100644 index bea4daab4729b..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image9-3.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/image9-4.png b/docs/architecture/modern-web-apps-azure/media/image9-4.png deleted file mode 100644 index 32bceca03ac24..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/image9-4.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/media/index/web-application-guide-cover-image.png b/docs/architecture/modern-web-apps-azure/media/index/web-application-guide-cover-image.png deleted file mode 100644 index cdc9857b72391..0000000000000 Binary files a/docs/architecture/modern-web-apps-azure/media/index/web-application-guide-cover-image.png and /dev/null differ diff --git a/docs/architecture/modern-web-apps-azure/modern-web-applications-characteristics.md b/docs/architecture/modern-web-apps-azure/modern-web-applications-characteristics.md deleted file mode 100644 index 11b2d3adb888b..0000000000000 --- a/docs/architecture/modern-web-apps-azure/modern-web-applications-characteristics.md +++ /dev/null @@ -1,87 +0,0 @@ ---- 
-title: Characteristics of modern web applications -description: Architect Modern Web Applications with ASP.NET Core and Azure | Characteristics of modern web applications -author: ardalis -ms.author: wiwagn -no-loc: [Blazor, WebAssembly] -ms.date: 12/12/2021 ---- - -# Characteristics of Modern Web Applications - -[!INCLUDE [download-alert](includes/download-alert.md)] - -> "… with proper design, the features come cheaply. This approach is arduous, but continues to succeed." -> _\- Dennis Ritchie_ - -Modern web applications have higher user expectations and greater demands than ever before. Today's web apps are expected to be available 24/7 from anywhere in the world, and usable from virtually any device or screen size. Web applications must be secure, flexible, and scalable to meet spikes in demand. Increasingly, complex scenarios should be handled by rich user experiences built on the client using JavaScript, and communicating efficiently through web APIs. - -ASP.NET Core is optimized for modern web applications and cloud-based hosting scenarios. Its modular design enables applications to depend on only those features they actually use, improving application security and performance while reducing hosting resource requirements. - -## Reference application: eShopOnWeb - -This guidance includes a reference application, _eShopOnWeb_, that demonstrates some of the principles and recommendations. The application is a simple online store, which supports browsing through a catalog of shirts, coffee mugs, and other marketing items. The reference application is deliberately simple in order to make it easy to understand. - -![eShopOnWeb](./media/image2-1.png) - -**Figure 2-1.** eShopOnWeb - -> ### Reference Application -> -> - **eShopOnWeb** -> - -## Cloud-hosted and scalable - -ASP.NET Core is optimized for the cloud (public cloud, private cloud, any cloud) because it is low-memory and high-throughput. The smaller footprint of ASP.NET Core applications means you can host more of them on the same hardware, and you pay for fewer resources when using pay-as-you-go cloud hosting services. The higher-throughput means you can serve more customers from an application given the same hardware, further reducing the need to invest in servers and hosting infrastructure. - -## Cross platform - -ASP.NET Core is cross-platform and can run on Linux, macOS, and Windows. This capability opens up many new options for both the development and deployment of apps built with ASP.NET Core. Docker containers - both Linux and Windows - can host ASP.NET Core applications, allowing them to take advantage of the benefits of [containers and microservices](../microservices/index.md). - -## Modular and loosely coupled - -NuGet packages are first-class citizens in .NET Core, and ASP.NET Core apps are composed of many libraries through NuGet. This granularity of functionality helps ensure apps only depend on and deploy functionality they actually require, reducing their footprint and security vulnerability surface area. - -ASP.NET Core also fully supports [dependency injection](https://deviq.com/dependency-injection/), both internally and at the application level. Interfaces can have multiple implementations that can be swapped out as needed. Dependency injection allows apps to loosely couple to those interfaces, rather than specific implementations, making them easier to extend, maintain, and test. 
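-
-For example, consuming code can depend on a small interface while the concrete implementation is chosen at registration time. The `IOrderNotifier` types below are hypothetical and are shown only to sketch the pattern:
-
-```csharp
-public interface IOrderNotifier
-{
-    Task NotifyAsync(int orderId);
-}
-
-public class EmailOrderNotifier : IOrderNotifier
-{
-    // A real implementation would send an email here.
-    public Task NotifyAsync(int orderId) => Task.CompletedTask;
-}
-
-// Program.cs: consumers ask for IOrderNotifier, not EmailOrderNotifier
-builder.Services.AddScoped<IOrderNotifier, EmailOrderNotifier>();
-
-// A test host could register a fake implementation instead:
-// builder.Services.AddScoped<IOrderNotifier, FakeOrderNotifier>();
-```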
- -## Easily tested with automated tests - -ASP.NET Core applications support unit testing, and their loose coupling and support for dependency injection makes it easy to swap infrastructure concerns with fake implementations for test purposes. ASP.NET Core also ships with a TestServer that can be used to host apps in memory. Functional tests can then make requests to this in-memory server, exercising the full application stack (including middleware, routing, model binding, filters, etc.) and receiving a response, all in a fraction of the time it would take to host the app on a real server and make requests through the network layer. These tests are especially easy to write, and valuable, for APIs, which are increasingly important in modern web applications. - -## Traditional and SPA behaviors supported - -Traditional web applications have involved little client-side behavior, but instead have relied on the server for all navigation, queries, and updates the app might need to make. Each new operation made by the user would be translated into a new web request, with the result being a full page reload in the end user's browser. Classic Model-View-Controller (MVC) frameworks typically follow this approach, with each new request corresponding to a different controller action, which in turn would work with a model and return a view. Some individual operations on a given page might be enhanced with AJAX (Asynchronous JavaScript and XML) functionality, but the overall architecture of the app used many different MVC views and URL endpoints. In addition, ASP.NET Core MVC also supports Razor Pages, a simpler way to organize MVC-style pages. - -Single Page Applications (SPAs), by contrast, involve very few dynamically generated server-side page loads (if any). Many SPAs are initialized within a static HTML file that loads the necessary JavaScript libraries to start and run the app. These apps make heavy usage of web APIs for their data needs and can provide much richer user experiences. Blazor WebAssembly provides a means of building SPAs using .NET code, which then runs in the client's browser. - -Many web applications involve a combination of traditional web application behavior (typically for content) and SPAs (for interactivity). ASP.NET Core supports both MVC (Views or Page based) and web APIs in the same application, using the same set of tools and underlying framework libraries. - -## Simple development and deployment - -ASP.NET Core applications can be written using simple text editors and command-line interfaces, or full-featured development environments like Visual Studio. Monolithic applications are typically deployed to a single endpoint. Deployments can easily be automated to occur as part of a continuous integration (CI) and continuous delivery (CD) pipeline. In addition to traditional CI/CD tools, Microsoft Azure has integrated support for git repositories and can automatically deploy updates as they are made to a specified git branch or tag. Azure DevOps provides a full-featured CI/CD build and deployment pipeline, and GitHub Actions provide another option for projects hosted there. - -## Traditional ASP.NET and Web Forms - -In addition to ASP.NET Core, traditional ASP.NET 4.x continues to be a robust and reliable platform for building web applications. ASP.NET supports MVC and Web API development models, as well as Web Forms, which is well suited to rich page-based application development and features a rich third-party component ecosystem. 
Microsoft Azure has great longstanding support for ASP.NET 4.x applications, and many developers are familiar with this platform.
-
-## Blazor
-
-Blazor is included with ASP.NET Core 3.0 and later. It provides a new mechanism for building rich interactive web client applications using Razor, C#, and ASP.NET Core. It offers another solution to consider when developing modern web applications. There are two versions of Blazor to consider: server-side and client-side.
-
-Server-side Blazor was released in 2019 with ASP.NET Core 3.0. As its name implies, it runs on the server, rendering changes to the client document back to the browser over the network. Server-side Blazor provides a rich client experience without requiring client-side JavaScript and without requiring separate page loads for each client page interaction. Changes in the loaded page are requested from and processed by the server and then sent back to the client using SignalR.
-
-Client-side Blazor, released in 2020, eliminates the need to render changes on the server. Instead, it leverages WebAssembly to run .NET code within the client. The client can still make API calls to the server if needed to request data, but all client-side behavior runs in the client via WebAssembly, which is already supported by all major browsers.
-
-> ### References – Modern Web Applications
->
-> - **Introduction to ASP.NET Core**
-> [https://learn.microsoft.com/aspnet/core/](/aspnet/core/)
-> - **Testing in ASP.NET Core**
-> [https://learn.microsoft.com/aspnet/core/testing/](/aspnet/core/testing/)
-> - **Blazor - Get Started**
->
-
->[!div class="step-by-step"]
->[Previous](index.md)
->[Next](choose-between-traditional-web-and-single-page-apps.md)
diff --git a/docs/architecture/modern-web-apps-azure/test-asp-net-core-mvc-apps.md b/docs/architecture/modern-web-apps-azure/test-asp-net-core-mvc-apps.md
deleted file mode 100644
index 738c2c60466e1..0000000000000
--- a/docs/architecture/modern-web-apps-azure/test-asp-net-core-mvc-apps.md
+++ /dev/null
@@ -1,309 +0,0 @@
----
-title: Test ASP.NET Core MVC apps
-description: Architect Modern Web Applications with ASP.NET Core and Azure | Test ASP.NET Core MVC Apps
-author: ardalis
-ms.author: wiwagn
-ms.date: 12/12/2021
----
-
-# Test ASP.NET Core MVC apps
-
-[!INCLUDE [download-alert](includes/download-alert.md)]
-
-> *"If you don't like unit testing your product, most likely your customers won't like to test it, either."*
-> _- Anonymous_
-
-Software of any complexity can fail in unexpected ways in response to changes. Thus, testing after making changes is required for all but the most trivial (or least critical) applications. Manual testing is the slowest, least reliable, most expensive way to test software. Unfortunately, if applications aren't designed to be testable, it can be the only means of testing available. Applications written to follow the architectural principles laid out in [chapter 4](architectural-principles.md) should be largely unit testable. ASP.NET Core applications support automated integration and functional testing.
-
-## Kinds of automated tests
-
-There are many kinds of automated tests for software applications. The simplest, lowest level test is the unit test. At a slightly higher level, there are integration tests and functional tests. Other kinds of tests, such as UI tests, load tests, stress tests, and smoke tests, are beyond the scope of this document.
-
-### Unit tests
-
-A unit test tests a single part of your application's logic.
One can further describe it by listing some of the things that it isn't. A unit test doesn't test how your code works with dependencies or infrastructure – that's what integration tests are for. A unit test doesn't test the framework your code is written on – you should assume it works or, if you find it doesn't, file a bug and code a workaround. A unit test runs completely in memory and in process. It doesn't communicate with the file system, the network, or a database. Unit tests should only test your code. - -Unit tests, by virtue of the fact that they test only a single unit of your code, with no external dependencies, should execute extremely fast. Thus, you should be able to run test suites of hundreds of unit tests in a few seconds. Run them frequently, ideally before every push to a shared source control repository, and certainly with every automated build on your build server. - -### Integration tests - -Although it's a good idea to encapsulate your code that interacts with infrastructure like databases and file systems, you will still have some of that code, and you will probably want to test it. Additionally, you should verify that your code's layers interact as you expect when your application's dependencies are fully resolved. This functionality is the responsibility of integration tests. Integration tests tend to be slower and more difficult to set up than unit tests, because they often depend on external dependencies and infrastructure. Thus, you should avoid testing things that could be tested with unit tests in integration tests. If you can test a given scenario with a unit test, you should test it with a unit test. If you can't, then consider using an integration test. - -Integration tests will often have more complex setup and teardown procedures than unit tests. For example, an integration test that goes against an actual database will need a way to return the database to a known state before each test run. As new tests are added and the production database schema evolves, these test scripts will tend to grow in size and complexity. In many large systems, it is impractical to run full suites of integration tests on developer workstations before checking in changes to shared source control. In these cases, integration tests may be run on a build server. - -### Functional tests - -Integration tests are written from the perspective of the developer, to verify that some components of the system work correctly together. Functional tests are written from the perspective of the user, and verify the correctness of the system based on its requirements. The following excerpt offers a useful analogy for how to think about functional tests, compared to unit tests: - -> "Many times the development of a system is likened to the building of a house. While this analogy isn't quite correct, we can extend it for the purposes of understanding the difference between unit and functional tests. Unit testing is analogous to a building inspector visiting a house's construction site. He is focused on the various internal systems of the house, the foundation, framing, electrical, plumbing, and so on. He ensures (tests) that the parts of the house will work correctly and safely, that is, meet the building code. Functional tests in this scenario are analogous to the homeowner visiting this same construction site. He assumes that the internal systems will behave appropriately, that the building inspector is performing his task. The homeowner is focused on what it will be like to live in this house. 
He is concerned with how the house looks, are the various rooms a comfortable size, does the house fit the family's needs, are the windows in a good spot to catch the morning sun. The homeowner is performing functional tests on the house. He has the user's perspective. The building inspector is performing unit tests on the house. He has the builder's perspective." - -Source: [Unit Testing versus Functional Tests](https://www.softwaretestingtricks.com/2007/01/unit-testing-versus-functional-tests.html) - -I'm fond of saying "As developers, we fail in two ways: we build the thing wrong, or we build the wrong thing." Unit tests ensure you are building the thing right; functional tests ensure you are building the right thing. - -Since functional tests operate at the system level, they may require some degree of UI automation. Like integration tests, they usually work with some kind of test infrastructure as well. This activity makes them slower and more brittle than unit and integration tests. You should have only as many functional tests as you need to be confident the system is behaving as users expect. - -### Testing Pyramid - -Martin Fowler wrote about the testing pyramid, an example of which is shown in Figure 9-1. - -![Testing Pyramid](./media/image9-1.png) - -**Figure 9-1**. Testing Pyramid - -The different layers of the pyramid, and their relative sizes, represent different kinds of tests and how many you should write for your application. As you can see, the recommendation is to have a large base of unit tests, supported by a smaller layer of integration tests, with an even smaller layer of functional tests. Each layer should ideally only have tests in it that cannot be performed adequately at a lower layer. Keep the testing pyramid in mind when you are trying to decide which kind of test you need for a particular scenario. - -### What to test - -A common problem for developers who are inexperienced with writing automated tests is coming up with what to test. A good starting point is to test conditional logic. Anywhere you have a method with behavior that changes based on a conditional statement (if-else, switch, and so on), you should be able to come up with at least a couple of tests that confirm the correct behavior for certain conditions. If your code has error conditions, it's good to write at least one test for the "happy path" through the code (with no errors), and at least one test for the "sad path" (with errors or atypical results) to confirm your application behaves as expected in the face of errors. Finally, try to focus on testing things that can fail, rather than focusing on metrics like code coverage. More code coverage is better than less, generally. However, writing a few more tests of a complex and business-critical method is usually a better use of time than writing tests for auto-properties just to improve test code coverage metrics. - -## Organizing test projects - -Test projects can be organized however works best for you. It's a good idea to separate tests by type (unit test, integration test) and by what they are testing (by project, by namespace). Whether this separation consists of folders within a single test project, or multiple test projects, is a design decision. One project is simplest, but for large projects with many tests, or in order to more easily run different sets of tests, you might want to have several different test projects. 
Many teams organize test projects based on the project they are testing, which for applications with more than a few projects can result in a large number of test projects, especially if you still break these down according to what kind of tests are in each project. A compromise approach is to have one project per kind of test, per application, with folders inside the test projects to indicate the project (and class) being tested. - -A common approach is to organize the application projects under a 'src' folder, and the application's test projects under a parallel 'tests' folder. You can create matching solution folders in Visual Studio, if you find this organization useful. - -![Test organization in your solution](./media/image9-2.png) - -**Figure 9-2**. Test organization in your solution - -You can use whichever test framework you prefer. The xUnit framework works well and is what all of the ASP.NET Core and EF Core tests are written in. You can add an xUnit test project in Visual Studio using the template shown in Figure 9-3, or from the CLI using `dotnet new xunit`. - -![Add an xUnit Test Project in Visual Studio](./media/image9-3.png) - -**Figure 9-3**. Add an xUnit Test Project in Visual Studio - -### Test naming - -Name your tests in a consistent fashion, with names that indicate what each test does. One approach I've had great success with is to name test classes according to the class and method they are testing. This approach results in many small test classes, but it makes it extremely clear what each test is responsible for. With the test class name set up, to identify the class and method to be tested, the test method name can be used to specify the behavior being tested. This name should include the expected behavior and any inputs or assumptions that should yield this behavior. Some example test names: - -- `CatalogControllerGetImage.CallsImageServiceWithId` - -- `CatalogControllerGetImage.LogsWarningGivenImageMissingException` - -- `CatalogControllerGetImage.ReturnsFileResultWithBytesGivenSuccess` - -- `CatalogControllerGetImage.ReturnsNotFoundResultGivenImageMissingException` - -A variation of this approach ends each test class name with "Should" and modifies the tense slightly: - -- `CatalogControllerGetImage`**Should**`.`**Call**`ImageServiceWithId` - -- `CatalogControllerGetImage`**Should**`.`**Log**`WarningGivenImageMissingException` - -Some teams find the second naming approach clearer, though slightly more verbose. In any case, try to use a naming convention that provides insight into test behavior, so that when one or more tests fail, it's obvious from their names what cases have failed. Avoid naming your tests vaguely, such as ControllerTests.Test1, as these names offer no value when you see them in test results. - -If you follow a naming convention like the one above that produces many small test classes, it's a good idea to further organize your tests using folders and namespaces. Figure 9-4 shows one approach to organizing tests by folder within several test projects. - -![Organizing test classes by folder based on class being tested](./media/image9-4.png) - -**Figure 9-4.** Organizing test classes by folder based on class being tested. - -If a particular application class has many methods being tested (and thus many test classes), it may make sense to place these classes in a folder corresponding to the application class. This organization is no different than how you might organize files into folders elsewhere. 
If you have more than three or four related files in a folder containing many other files, it's often helpful to move them into their own subfolder. - -## Unit testing ASP.NET Core apps - -In a well-designed ASP.NET Core application, most of the complexity and business logic will be encapsulated in business entities and a variety of services. The ASP.NET Core MVC app itself, with its controllers, filters, viewmodels, and views, should require few unit tests. Much of the functionality of a given action lies outside the action method itself. Testing whether routing or global error handling work correctly cannot be done effectively with a unit test. Likewise, any filters, including model validation and authentication and authorization filters, cannot be unit tested with a test targeting a controller's action method. Without these sources of behavior, most action methods should be trivially small, delegating the bulk of their work to services that can be tested independent of the controller that uses them. - -Sometimes you'll need to refactor your code in order to unit test it. Frequently this activity involves identifying abstractions and using dependency injection to access the abstraction in the code you'd like to test, rather than coding directly against infrastructure. For example, consider this easy action method for displaying images: - -```csharp -[HttpGet("[controller]/pic/{id}")] -public IActionResult GetImage(int id) -{ - var contentRoot = _env.ContentRootPath + "//Pics"; - var path = Path.Combine(contentRoot, id + ".png"); - Byte[] b = System.IO.File.ReadAllBytes(path); - return File(b, "image/png"); -} -``` - -Unit testing this method is made difficult by its direct dependency on `System.IO.File`, which it uses to read from the file system. You can test this behavior to ensure it works as expected, but doing so with real files is an integration test. It's worth noting you can't unit test this method's route—you'll see how to do this testing with a functional test shortly. - -If you can't unit test the file system behavior directly, and you can't test the route, what is there to test? Well, after refactoring to make unit testing possible, you may discover some test cases and missing behavior, such as error handling. What does the method do when a file isn't found? What should it do? In this example, the refactored method looks like this: - -```csharp -[HttpGet("[controller]/pic/{id}")] -public IActionResult GetImage(int id) -{ - byte[] imageBytes; - try - { - imageBytes = _imageService.GetImageBytesById(id); - } - catch (CatalogImageMissingException ex) - { - _logger.LogWarning($"No image found for id: {id}"); - return NotFound(); - } - return File(imageBytes, "image/png"); -} -``` - -`_logger` and `_imageService` are both injected as dependencies. Now you can test that the same ID that is passed to the action method is passed to `_imageService`, and that the resulting bytes are returned as part of the FileResult. You can also test that error logging is happening as expected, and that a `NotFound` result is returned if the image is missing, assuming this behavior is important application behavior (that is, not just temporary code the developer added to diagnose an issue). The actual file logic has moved into a separate implementation service, and has been augmented to return an application-specific exception for the case of a missing file. You can test this implementation independently, using an integration test. 
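-
-With these dependencies injected, a unit test can supply fakes and assert on the action's behavior. The following sketch uses xUnit and Moq, and it assumes an `IImageService` abstraction, a parameterless `CatalogImageMissingException` constructor, and a two-argument controller constructor, any of which may differ slightly in the eShopOnWeb sample:
-
-```csharp
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Logging;
-using Moq;
-using Xunit;
-
-public class CatalogControllerGetImage
-{
-    [Fact]
-    public void ReturnsNotFoundResultGivenImageMissingException()
-    {
-        // Arrange: the fake image service throws for an unknown id
-        var imageService = new Mock<IImageService>();
-        imageService.Setup(s => s.GetImageBytesById(123))
-            .Throws(new CatalogImageMissingException());
-        var logger = new Mock<ILogger<CatalogController>>();
-        var controller = new CatalogController(imageService.Object, logger.Object);
-
-        // Act
-        var result = controller.GetImage(123);
-
-        // Assert: the missing image is translated into a 404
-        Assert.IsType<NotFoundResult>(result);
-    }
-}
-```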
-
-In most cases, you'll want to use global exception handlers in your controllers, so the amount of logic in them should be minimal and probably not worth unit testing. Do most of your testing of controller actions using functional tests and the `TestServer` class described below.
-
-## Integration testing ASP.NET Core apps
-
-Most of the integration tests in your ASP.NET Core apps should be testing services and other implementation types defined in your Infrastructure project. For example, you could [test that EF Core was successfully updating and retrieving the data that you expect](/ef/core/miscellaneous/testing/) from your data access classes residing in the Infrastructure project. The best way to test that your ASP.NET Core MVC project is behaving correctly is with functional tests that run against your app running in a test host.
-
-## Functional testing ASP.NET Core apps
-
-For ASP.NET Core applications, the `TestServer` class makes functional tests fairly easy to write. You configure a `TestServer` using a `WebHostBuilder` (or `HostBuilder`) directly (as you normally do for your application), or with the `WebApplicationFactory` type (available since version 2.1). Try to match your test host to your production host as closely as possible, so your tests exercise behavior similar to what the app will do in production. The `WebApplicationFactory` class is helpful for configuring the TestServer's ContentRoot, which is used by ASP.NET Core to locate static resources like Views.
-
-You can create simple functional tests by creating a test class that implements `IClassFixture<WebApplicationFactory<TEntryPoint>>`, where `TEntryPoint` is your web application's `Startup` class. With this interface in place, your test fixture can create a client using the factory's `CreateClient` method:
-
-```csharp
-public class BasicWebTests : IClassFixture<WebApplicationFactory<Startup>>
-{
-    protected readonly HttpClient _client;
-
-    public BasicWebTests(WebApplicationFactory<Startup> factory)
-    {
-        _client = factory.CreateClient();
-    }
-
-    // write tests that use _client
-}
-```
-
-> [!TIP]
-> If you're using minimal API configuration in your _Program.cs_ file, by default the implicit `Program` class will be declared internal and won't be accessible from the test project. You can choose any other instance class in your web project instead, or add this to your _Program.cs_ file:
->
-> ```csharp
-> // Make the implicit Program class public so test projects can access it
-> public partial class Program { }
-> ```
-
-Frequently, you'll want to perform some additional configuration of your site before each test runs, such as configuring the application to use an in-memory data store and then seeding the application with test data. To achieve this functionality, create your own subclass of `WebApplicationFactory` and override its `ConfigureWebHost` method. The example below is from the eShopOnWeb FunctionalTests project and is used as part of the tests on the main web application.
-
-```csharp
-using Microsoft.AspNetCore.Hosting;
-using Microsoft.AspNetCore.Identity;
-using Microsoft.AspNetCore.Mvc.Testing;
-using Microsoft.EntityFrameworkCore;
-using Microsoft.eShopWeb.Infrastructure.Data;
-using Microsoft.eShopWeb.Infrastructure.Identity;
-using Microsoft.eShopWeb.Web;
-using Microsoft.Extensions.DependencyInjection;
-using Microsoft.Extensions.Logging;
-using System;
-
-namespace Microsoft.eShopWeb.FunctionalTests.Web;
-public class WebTestFixture : WebApplicationFactory<Startup>
-{
-    protected override void ConfigureWebHost(IWebHostBuilder builder)
-    {
-        builder.UseEnvironment("Testing");
-
-        builder.ConfigureServices(services =>
-        {
-            services.AddEntityFrameworkInMemoryDatabase();
-
-            // Create a new service provider.
-            var provider = services
-                .AddEntityFrameworkInMemoryDatabase()
-                .BuildServiceProvider();
-
-            // Add a database context (CatalogContext) using an in-memory
-            // database for testing.
-            services.AddDbContext<CatalogContext>(options =>
-            {
-                options.UseInMemoryDatabase("InMemoryDbForTesting");
-                options.UseInternalServiceProvider(provider);
-            });
-
-            services.AddDbContext<AppIdentityDbContext>(options =>
-            {
-                options.UseInMemoryDatabase("Identity");
-                options.UseInternalServiceProvider(provider);
-            });
-
-            // Build the service provider.
-            var sp = services.BuildServiceProvider();
-
-            // Create a scope to obtain a reference to the database
-            // context (CatalogContext).
-            using (var scope = sp.CreateScope())
-            {
-                var scopedServices = scope.ServiceProvider;
-                var db = scopedServices.GetRequiredService<CatalogContext>();
-                var loggerFactory = scopedServices.GetRequiredService<ILoggerFactory>();
-
-                var logger = scopedServices
-                    .GetRequiredService<ILogger<WebTestFixture>>();
-
-                // Ensure the database is created.
-                db.Database.EnsureCreated();
-
-                try
-                {
-                    // Seed the database with test data.
-                    CatalogContextSeed.SeedAsync(db, loggerFactory).Wait();
-
-                    // seed sample user data
-                    var userManager = scopedServices.GetRequiredService<UserManager<ApplicationUser>>();
-                    var roleManager = scopedServices.GetRequiredService<RoleManager<IdentityRole>>();
-                    AppIdentityDbContextSeed.SeedAsync(userManager, roleManager).Wait();
-                }
-                catch (Exception ex)
-                {
-                    logger.LogError(ex, "An error occurred seeding the " +
-                        $"database with test messages. Error: {ex.Message}");
-                }
-            }
-        });
-    }
-}
-```
-
-Tests can make use of this custom WebApplicationFactory by using it to create a client and then making requests to the application using this client instance. The application will have data seeded that can be used as part of the test's assertions. The following test verifies that the home page of the eShopOnWeb application loads correctly and includes a product listing that was added to the application as part of the seed data.
-
-```csharp
-using Microsoft.eShopWeb.FunctionalTests.Web;
-using System.Net.Http;
-using System.Threading.Tasks;
-using Xunit;
-
-namespace Microsoft.eShopWeb.FunctionalTests.WebRazorPages;
-[Collection("Sequential")]
-public class HomePageOnGet : IClassFixture<WebTestFixture>
-{
-    public HomePageOnGet(WebTestFixture factory)
-    {
-        Client = factory.CreateClient();
-    }
-
-    public HttpClient Client { get; }
-
-    [Fact]
-    public async Task ReturnsHomePageWithProductListing()
-    {
-        // Arrange & Act
-        var response = await Client.GetAsync("/");
-        response.EnsureSuccessStatusCode();
-        var stringResponse = await response.Content.ReadAsStringAsync();
-
-        // Assert
-        Assert.Contains(".NET Bot Black Sweatshirt", stringResponse);
-    }
-}
-```
-
-This functional test exercises the full ASP.NET Core MVC / Razor Pages application stack, including all middleware, filters, and binders that may be in place.
It verifies that a given route ("/") returns the expected success status code and HTML output. It does so without setting up a real web server, and avoids much of the brittleness that using a real web server for testing can experience (for example, problems with firewall settings). Functional tests that run against TestServer are usually slower than integration and unit tests, but are much faster than tests that would run over the network to a test web server. Use functional tests to ensure your application's front-end stack is working as expected. These tests are especially useful when you find duplication in your controllers or pages and you address the duplication by adding filters. Ideally, this refactoring won't change the behavior of the application, and a suite of functional tests will verify this is the case. - -> ### References – Test ASP.NET Core MVC apps -> -> - **Testing in ASP.NET Core** \ -> [https://learn.microsoft.com/aspnet/core/testing/](/aspnet/core/testing/) -> - **Unit Test Naming Convention** \ -> -> - **Testing EF Core** \ -> [https://learn.microsoft.com/ef/core/miscellaneous/testing/](/ef/core/miscellaneous/testing/) -> - **Integration tests in ASP.NET Core** \ -> [https://learn.microsoft.com/aspnet/core/test/integration-tests](/aspnet/core/test/integration-tests) - ->[!div class="step-by-step"] ->[Previous](work-with-data-in-asp-net-core-apps.md) ->[Next](development-process-for-azure.md) diff --git a/docs/architecture/modern-web-apps-azure/toc.yml b/docs/architecture/modern-web-apps-azure/toc.yml deleted file mode 100644 index fb5a6d055b7b4..0000000000000 --- a/docs/architecture/modern-web-apps-azure/toc.yml +++ /dev/null @@ -1,24 +0,0 @@ -items: -- name: Introduction - href: index.md - items: - - name: Characteristics of modern web applications - href: modern-web-applications-characteristics.md - - name: Choose between traditional web apps and single page apps - href: choose-between-traditional-web-and-single-page-apps.md - - name: Architectural principles - href: architectural-principles.md - - name: Common web application architectures - href: common-web-application-architectures.md - - name: Common client side web technologies - href: common-client-side-web-technologies.md - - name: Develop ASP.NET Core MVC Apps - href: develop-asp-net-core-mvc-apps.md - - name: Work with data in ASP.NET Core - href: work-with-data-in-asp-net-core-apps.md - - name: Test ASP.NET Core MVC Apps - href: test-asp-net-core-mvc-apps.md - - name: Development process for Azure - href: development-process-for-azure.md - - name: Azure hosting recommendations for ASP.NET web apps - href: azure-hosting-recommendations-for-asp-net-web-apps.md diff --git a/docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md b/docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md deleted file mode 100644 index 53cd139f8e50b..0000000000000 --- a/docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md +++ /dev/null @@ -1,550 +0,0 @@ ---- -title: Work with data in ASP.NET Core Apps -description: Architect Modern Web Applications with ASP.NET Core and Azure | Working with data in ASP.NET Core apps -author: ardalis -ms.author: wiwagn -ms.date: 12/12/2021 -no-loc: [Blazor, WebAssembly] ---- -# Working with Data in ASP.NET Core Apps - -[!INCLUDE [download-alert](includes/download-alert.md)] - -> "Data is a precious thing and will last longer than the systems themselves." 
->
-> Tim Berners-Lee
-
-Data access is an important part of almost any software application. ASP.NET Core supports various data access options, including Entity Framework Core (and Entity Framework 6 as well), and can work with any .NET data access framework. The choice of which data access framework to use depends on the application's needs. Abstracting these choices from the ApplicationCore and UI projects, and encapsulating implementation details in Infrastructure, helps to produce loosely coupled, testable software.
-
-## Entity Framework Core (for relational databases)
-
-If you're writing a new ASP.NET Core application that needs to work with relational data, then Entity Framework Core (EF Core) is the recommended way for your application to access its data. EF Core is an object-relational mapper (O/RM) that enables .NET developers to persist objects to and from a data source. It eliminates the need for most of the data access code developers would typically need to write. Like ASP.NET Core, EF Core has been rewritten from the ground up to support modular cross-platform applications. You add it to your application as a NuGet package, configure it during app startup, and request it through dependency injection wherever you need it.
-
-To use EF Core with a SQL Server database, run the following dotnet CLI command:
-
-```dotnetcli
-dotnet add package Microsoft.EntityFrameworkCore.SqlServer
-```
-
-To add support for an InMemory data source, for testing:
-
-```dotnetcli
-dotnet add package Microsoft.EntityFrameworkCore.InMemory
-```
-
-### The DbContext
-
-To work with EF Core, you need a subclass of `DbContext`. This class holds properties representing collections of the entities your application will work with. The eShopOnWeb sample includes a `CatalogContext` with collections for items, brands, and types:
-
-```csharp
-public class CatalogContext : DbContext
-{
-    public CatalogContext(DbContextOptions<CatalogContext> options) : base(options)
-    {
-
-    }
-
-    public DbSet<CatalogItem> CatalogItems { get; set; }
-    public DbSet<CatalogBrand> CatalogBrands { get; set; }
-    public DbSet<CatalogType> CatalogTypes { get; set; }
-}
-```
-
-Your DbContext must have a constructor that accepts `DbContextOptions` and passes this argument to the base `DbContext` constructor. If you have only one DbContext in your application, you can pass an instance of `DbContextOptions`, but if you have more than one you must use the generic `DbContextOptions<T>` type, passing in your DbContext type as the generic parameter.
-
-### Configuring EF Core
-
-In your ASP.NET Core application, you'll typically configure EF Core in _Program.cs_ with your application's other dependencies. EF Core uses a `DbContextOptionsBuilder`, which supports several helpful extension methods to streamline its configuration. To configure CatalogContext to use a SQL Server database with a connection string defined in Configuration, you would add the following code:
-
-```csharp
-builder.Services.AddDbContext<CatalogContext>(
-    options => options.UseSqlServer(
-        builder.Configuration.GetConnectionString("DefaultConnection")));
-```
-
-To use the in-memory database:
-
-```csharp
-builder.Services.AddDbContext<CatalogContext>(options =>
-    options.UseInMemoryDatabase("Catalog"));
-```
-
-Once you have installed EF Core, created a DbContext child type, and added the type to the application's services, you are ready to use EF Core. You can request an instance of your DbContext type in any service that needs it and start working with your persisted entities using LINQ as if they were simply in a collection. EF Core does the work of translating your LINQ expressions into SQL queries to store and retrieve your data.
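-
-For example, a service can take the context as a constructor dependency and query it with LINQ. The service below is only a sketch to show the usage pattern (the class name is made up, and eShopOnWeb's real services sit behind repository and specification abstractions):
-
-```csharp
-using Microsoft.EntityFrameworkCore;
-
-public class CatalogBrandLookupService
-{
-    private readonly CatalogContext _context;
-
-    public CatalogBrandLookupService(CatalogContext context)
-    {
-        _context = context;
-    }
-
-    public Task<List<CatalogBrand>> GetEnabledBrandsAsync()
-    {
-        // The DbSet can be queried much like an in-memory collection.
-        return _context.CatalogBrands
-            .Where(b => b.Enabled)
-            .ToListAsync();
-    }
-}
-```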
-
-You can see the queries EF Core is executing by configuring a logger and ensuring its level is set to at least Information, as shown in Figure 8-1.
-
-![Logging EF Core queries to the console](./media/image8-1.png)
-
-**Figure 8-1**. Logging EF Core queries to the console
-
-### Fetching and storing data
-
-To retrieve data from EF Core, you access the appropriate property and use LINQ to filter the result. You can also use LINQ to perform projection, transforming the result from one type to another. The following example would retrieve CatalogBrands, ordered by name, filtered by their Enabled property, and projected onto a `SelectListItem` type:
-
-```csharp
-var brandItems = await _context.CatalogBrands
-    .Where(b => b.Enabled)
-    .OrderBy(b => b.Name)
-    .Select(b => new SelectListItem {
-        Value = b.Id.ToString(), Text = b.Name })
-    .ToListAsync();
-```
-
-It's important in the above example to add the call to `ToListAsync` in order to execute the query immediately. Otherwise, the statement will assign an `IQueryable<SelectListItem>` to brandItems, which will not be executed until it is enumerated. There are pros and cons to returning `IQueryable` results from methods. It allows the query EF Core will construct to be further modified, but can also result in errors that only occur at run time, if operations are added to the query that EF Core cannot translate. It's generally safer to pass any filters into the method performing the data access, and return back an in-memory collection (for example, `List<T>`) as the result.
-
-EF Core tracks changes on entities it fetches from persistence. To save changes to a tracked entity, you just call the `SaveChangesAsync` method on the DbContext, making sure it's the same DbContext instance that was used to fetch the entity. Adding and removing entities is directly done on the appropriate DbSet property, again with a call to `SaveChangesAsync` to execute the database commands. The following example demonstrates adding, updating, and removing entities from persistence.
-
-```csharp
-// create
-var newBrand = new CatalogBrand() { Brand = "Acme" };
-_context.Add(newBrand);
-await _context.SaveChangesAsync();
-
-// read and update
-var existingBrand = _context.CatalogBrands.Find(1);
-existingBrand.Brand = "Updated Brand";
-await _context.SaveChangesAsync();
-
-// read and delete (alternate Find syntax)
-var brandToDelete = _context.Find<CatalogBrand>(2);
-_context.CatalogBrands.Remove(brandToDelete);
-await _context.SaveChangesAsync();
-```
-
-EF Core supports both synchronous and async methods for fetching and saving. In web applications, it's recommended to use the async/await pattern with the async methods, so that web server threads are not blocked while waiting for data access operations to complete.
-
-For more information, see [Buffering and Streaming](/ef/core/performance/efficient-querying#buffering-and-streaming).
-
-### Fetching related data
-
-When EF Core retrieves entities, it populates all of the properties that are stored directly with that entity in the database. Navigation properties, such as lists of related entities, are not populated and may have their value set to null. This process ensures EF Core is not fetching more data than is needed, which is especially important for web applications, which must quickly process requests and return responses in an efficient manner.
To include relationships with an entity using _eager loading_, you specify the property using the Include extension method on the query, as shown: - -```csharp -// .Include requires using Microsoft.EntityFrameworkCore -var brandsWithItems = await _context.CatalogBrands - .Include(b => b.Items) - .ToListAsync(); -``` - -You can include multiple relationships, and you can also include subrelationships using ThenInclude. EF Core will execute a single query to retrieve the resulting set of entities. Alternately you can include navigation properties of navigation properties by passing a '.'-separated string to the `.Include()` extension method, like so: - -```csharp - .Include("Items.Products") -``` - -In addition to encapsulating filtering logic, a specification can specify the shape of the data to be returned, including which properties to populate. The eShopOnWeb sample includes several specifications that demonstrate encapsulating eager loading information within the specification. You can see how the specification is used as part of a query here: - -```csharp -// Includes all expression-based includes -query = specification.Includes.Aggregate(query, - (current, include) => current.Include(include)); - -// Include any string-based include statements -query = specification.IncludeStrings.Aggregate(query, - (current, include) => current.Include(include)); -``` - -Another option for loading related data is to use _explicit loading_. Explicit loading allows you to load additional data into an entity that has already been retrieved. Since this approach involves a separate request to the database, it's not recommended for web applications, which should minimize the number of database round trips made per request. - -_Lazy loading_ is a feature that automatically loads related data as it is referenced by the application. EF Core has added support for lazy loading in version 2.1. Lazy loading is not enabled by default and requires installing the `Microsoft.EntityFrameworkCore.Proxies`. As with explicit loading, lazy loading should typically be disabled for web applications, since its use will result in additional database queries being made within each web request. Unfortunately, the overhead incurred by lazy loading often goes unnoticed at development time, when the latency is small and often the data sets used for testing are small. However, in production, with more users, more data, and more latency, the additional database requests can often result in poor performance for web applications that make heavy use of lazy loading. - -[Avoid Lazy Loading Entities in Web Applications](https://ardalis.com/avoid-lazy-loading-entities-in-asp-net-applications) - -It's a good idea to test your application while examining the actual database queries it makes. Under certain circumstances, EF Core may make many more queries or a more expensive query than is optimal for the application. One such problem is known as a [Cartesian Explosion](/ef/core/querying/single-split-queries#cartesian-explosion). The EF Core team makes available the [AsSplitQuery method](/ef/core/querying/single-split-queries#split-queries) as one of several ways to tune runtime behavior. - -### Encapsulating data - -EF Core supports several features that allow your model to properly encapsulate its state. A common problem in domain models is that they expose collection navigation properties as publicly accessible list types. 
This exposure allows any collaborator to manipulate the contents of these collection types, which may bypass important business rules related to the collection, possibly leaving the object in an invalid state. The solution to this problem is to expose read-only access to related collections, and explicitly provide methods defining ways in which clients can manipulate them, as in this example:
-
-```csharp
-public class Basket : BaseEntity
-{
-    public string BuyerId { get; set; }
-    private readonly List<BasketItem> _items = new List<BasketItem>();
-    public IReadOnlyCollection<BasketItem> Items => _items.AsReadOnly();
-
-    public void AddItem(int catalogItemId, decimal unitPrice, int quantity = 1)
-    {
-        var existingItem = Items.FirstOrDefault(i => i.CatalogItemId == catalogItemId);
-        if (existingItem == null)
-        {
-            _items.Add(new BasketItem()
-            {
-                CatalogItemId = catalogItemId,
-                Quantity = quantity,
-                UnitPrice = unitPrice
-            });
-        }
-        else existingItem.Quantity += quantity;
-    }
-}
-```
-
-This entity type doesn't expose a public `List` or `ICollection` property, but instead exposes an `IReadOnlyCollection` type that wraps the underlying List type. When using this pattern, you can indicate to Entity Framework Core to use the backing field like so:
-
-```csharp
-private void ConfigureBasket(EntityTypeBuilder<Basket> builder)
-{
-    var navigation = builder.Metadata.FindNavigation(nameof(Basket.Items));
-
-    navigation.SetPropertyAccessMode(PropertyAccessMode.Field);
-}
-```
-
-Another way in which you can improve your domain model is by using value objects for types that lack identity and are only distinguished by their properties. Using such types as properties of your entities can help keep logic specific to the value object where it belongs, and can avoid duplicate logic between multiple entities that use the same concept. In Entity Framework Core, you can persist value objects in the same table as their owning entity by configuring the type as an owned entity, like so:
-
-```csharp
-private void ConfigureOrder(EntityTypeBuilder<Order> builder)
-{
-    builder.OwnsOne(o => o.ShipToAddress);
-}
-```
-
-In this example, the `ShipToAddress` property is of type `Address`. `Address` is a value object with several properties such as `Street` and `City`. EF Core maps the `Order` object to its table with one column per `Address` property, prefixing each column name with the name of the property. In this example, the `Order` table would include columns such as `ShipToAddress_Street` and `ShipToAddress_City`. It's also possible to store owned types in separate tables, if desired.
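-
-For reference, a value object like the `Address` type used above is typically a small immutable class with no identity of its own. The following is a simplified sketch; the eShopOnWeb version has additional members and guards:
-
-```csharp
-public class Address // a value object: no Id, defined entirely by its values
-{
-    public string Street { get; private set; }
-    public string City { get; private set; }
-    public string Country { get; private set; }
-    public string ZipCode { get; private set; }
-
-    private Address() { } // parameterless constructor required by EF Core
-
-    public Address(string street, string city, string country, string zipCode)
-    {
-        Street = street;
-        City = city;
-        Country = country;
-        ZipCode = zipCode;
-    }
-}
-```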
-
-Learn more about [owned entity support in EF Core](/ef/core/modeling/owned-entities).
-
-### Resilient connections
-
-External resources like SQL databases may occasionally be unavailable. In cases of temporary unavailability, applications can use retry logic to avoid raising an exception. This technique is commonly referred to as _connection resiliency_. You can implement your [own retry with exponential backoff](/azure/architecture/patterns/retry) technique by attempting to retry with an exponentially increasing wait time, until a maximum retry count has been reached. This technique embraces the fact that cloud resources might intermittently be unavailable for short periods of time, resulting in the failure of some requests.
-
-For Azure SQL DB, Entity Framework Core already provides internal database connection resiliency and retry logic. But you need to enable the Entity Framework execution strategy for each DbContext connection if you want to have resilient EF Core connections.
-
-For instance, the following code at the EF Core connection level enables resilient SQL connections that are retried if the connection fails.
-
-```csharp
-builder.Services.AddDbContext<CatalogContext>(options =>
-{
-    options.UseSqlServer(builder.Configuration["ConnectionString"],
-        sqlServerOptionsAction: sqlOptions =>
-        {
-            sqlOptions.EnableRetryOnFailure(
-                maxRetryCount: 5,
-                maxRetryDelay: TimeSpan.FromSeconds(30),
-                errorNumbersToAdd: null);
-        }
-    );
-});
-```
-
-#### Execution strategies and explicit transactions using BeginTransaction and multiple DbContexts
-
-When retries are enabled in EF Core connections, each operation you perform using EF Core becomes its own retryable operation. Each query and each call to `SaveChangesAsync` will be retried as a unit if a transient failure occurs.
-
-However, if your code initiates a transaction using BeginTransaction, you are defining your own group of operations that need to be treated as a unit; everything inside the transaction has to be rolled back if a failure occurs. You will see an exception like the following if you attempt to execute that transaction when using an EF execution strategy (retry policy) and you include several `SaveChangesAsync` from multiple DbContexts in it.
-
-System.InvalidOperationException: The configured execution strategy `SqlServerRetryingExecutionStrategy` does not support user initiated transactions. Use the execution strategy returned by `DbContext.Database.CreateExecutionStrategy()` to execute all the operations in the transaction as a retryable unit.
-
-The solution is to manually invoke the EF execution strategy with a delegate representing everything that needs to be executed. If a transient failure occurs, the execution strategy will invoke the delegate again. The following code shows how to implement this approach:
-
-```csharp
-// Use of an EF Core resiliency strategy when using multiple DbContexts
-// within an explicit transaction
-// See:
-// https://learn.microsoft.com/ef/core/miscellaneous/connection-resiliency
-var strategy = _catalogContext.Database.CreateExecutionStrategy();
-await strategy.ExecuteAsync(async () =>
-{
-    // Achieving atomicity between original Catalog database operation and the
-    // IntegrationEventLog thanks to a local transaction
-    using (var transaction = _catalogContext.Database.BeginTransaction())
-    {
-        _catalogContext.CatalogItems.Update(catalogItem);
-        await _catalogContext.SaveChangesAsync();
-
-        // Save to EventLog only if product price changed
-        if (raiseProductPriceChangedEvent)
-        {
-            await _integrationEventLogService.SaveEventAsync(priceChangedEvent);
-            transaction.Commit();
-        }
-    }
-});
-```
-
-The first DbContext is the `_catalogContext` and the second DbContext is within the `_integrationEventLogService` object. Finally, the Commit action is performed across multiple DbContexts while using an EF execution strategy.
-
-> ### References – Entity Framework Core
->
-> - **EF Core Docs**
-> [https://learn.microsoft.com/ef/](/ef/)
-> - **EF Core: Related Data**
-> [https://learn.microsoft.com/ef/core/querying/related-data](/ef/core/querying/related-data)
-> - **Avoid Lazy Loading Entities in ASP.NET Applications**
->
-
-## EF Core or micro-ORM?
-
-While EF Core is a great choice for managing persistence, and for the most part encapsulates database details from application developers, it isn't the only choice.
Another popular open-source alternative is [Dapper](https://github.com/StackExchange/Dapper), a so-called micro-ORM. A micro-ORM is a lightweight, less full-featured tool for mapping objects to data structures. In the case of Dapper, its design goals focus on performance, rather than fully encapsulating the underlying queries it uses to retrieve and update data. Because it doesn't abstract SQL from the developer, Dapper is "closer to the metal" and lets developers write the exact queries they want to use for a given data access operation.
-
-EF Core provides two significant features that separate it from Dapper but also add to its performance overhead. The first is the translation from LINQ expressions into SQL. These translations are cached, but even so there is overhead in performing them the first time. The second is change tracking on entities (so that efficient update statements can be generated). This behavior can be turned off for specific queries by using the `AsNoTracking` extension method. EF Core also generates SQL queries that usually are very efficient and in any case perfectly acceptable from a performance standpoint, but if you need fine control over the precise query to be executed, you can pass in custom SQL (or execute a stored procedure) using EF Core, too. In this case, Dapper still outperforms EF Core, but only very slightly. Current performance benchmark data for a variety of data access methods can be found on [the Dapper site](https://github.com/StackExchange/Dapper).
-
-To see how the syntax for Dapper varies from EF Core, consider these two versions of the same method for retrieving a list of items:
-
-```csharp
-// EF Core
-private readonly CatalogContext _context;
-public async Task<IEnumerable<CatalogType>> GetCatalogTypes()
-{
-    return await _context.CatalogTypes.ToListAsync();
-}
-
-// Dapper
-private readonly SqlConnection _conn;
-public async Task<IEnumerable<CatalogType>> GetCatalogTypesWithDapper()
-{
-    return await _conn.QueryAsync<CatalogType>("SELECT * FROM CatalogType");
-}
-```
-
-If you need to build more complex object graphs with Dapper, you need to write the associated queries yourself (as opposed to adding an Include as you would in EF Core). This functionality is supported through various syntaxes, including a feature called Multi Mapping that lets you map individual rows to multiple mapped objects. For example, given a class Post with a property Owner of type User, the following SQL would return all of the necessary data:
-
-```sql
-select * from #Posts p
-left join #Users u on u.Id = p.OwnerId
-Order by p.Id
-```
-
-Each returned row includes both User and Post data. Since the User data should be attached to the Post data via its Owner property, the following function is used:
-
-```csharp
-(post, user) => { post.Owner = user; return post; }
-```
-
-The full code listing to return a collection of posts with their Owner property populated with the associated user data would be:
-
-```csharp
-var sql = @"select * from #Posts p
-left join #Users u on u.Id = p.OwnerId
-Order by p.Id";
-var data = connection.Query<Post, User, Post>(sql,
-    (post, user) => { post.Owner = user; return post;});
-```
-
-Because it offers less encapsulation, Dapper requires developers to know more about how their data is stored, how to query it efficiently, and to write more code to fetch it. When the model changes, instead of simply creating a new migration (another EF Core feature), and/or updating mapping information in one place in a DbContext, every query that is impacted must be updated.
These queries have no compile-time guarantees, so they may break at run time in response to changes to the model or database, making errors more difficult to detect quickly. In exchange for these tradeoffs, Dapper offers extremely fast performance. - -For most applications, and most parts of almost all applications, EF Core offers acceptable performance. Thus, its developer productivity benefits are likely to outweigh its performance overhead. For queries that can benefit from caching, the actual query may only be executed a tiny percentage of the time, making relatively small query performance differences moot. - -## SQL or NoSQL - -Traditionally, relational databases like SQL Server have dominated the marketplace for persistent data storage, but they are not the only solution available. NoSQL databases like [MongoDB](https://www.mongodb.com/what-is-mongodb) offer a different approach to storing objects. Rather than mapping objects to tables and rows, another option is to serialize the entire object graph, and store the result. The benefits of this approach, at least initially, are simplicity and performance. It's simpler to store a single serialized object with a key than to decompose the object into many tables with relationships and update rows that may have changed since the object was last retrieved from the database. Likewise, fetching and deserializing a single object from a key-based store is typically much faster and easier than complex joins or multiple database queries required to fully compose the same object from a relational database. The lack of locks or transactions or a fixed schema also makes NoSQL databases amenable to scaling across many machines, supporting very large datasets. - -On the other hand, NoSQL databases (as they are typically called) have their drawbacks. Relational databases use normalization to enforce consistency and avoid duplication of data. This approach reduces the total size of the database and ensures that updates to shared data are available immediately throughout the database. In a relational database, an Address table might reference a Country table by ID, such that if the name of a country/region were changed, the address records would benefit from the update without themselves having to be updated. However, in a NoSQL database, Address, and its associated Country might be serialized as part of many stored objects. An update to a country/region name would require all such objects to be updated, rather than a single row. Relational databases can also ensure relational integrity by enforcing rules like foreign keys. NoSQL databases typically do not offer such constraints on their data. - -Another complexity NoSQL databases must deal with is versioning. When an object's properties change, it may not be able to be deserialized from past versions that were stored. Thus, all existing objects that have a serialized (previous) version of the object must be updated to conform to its new schema. This approach is not conceptually different from a relational database, where schema changes sometimes require update scripts or mapping updates. However, the number of entries that must be modified is often much greater in the NoSQL approach, because there is more duplication of data. - -It's possible in NoSQL databases to store multiple versions of objects, something fixed schema relational databases typically do not support. 
However, in this case, your application code will need to account for the existence of previous versions of objects, adding additional complexity.

NoSQL databases typically do not enforce [ACID](https://en.wikipedia.org/wiki/ACID), which means they have both performance and scalability benefits over relational databases. They're well suited to extremely large datasets and objects that are not well suited to storage in normalized table structures. There is no reason why a single application cannot take advantage of both relational and NoSQL databases, using each where it is best suited.

## Azure Cosmos DB

Azure Cosmos DB is a fully managed NoSQL database service that offers cloud-based schema-free data storage. Azure Cosmos DB is built for fast and predictable performance, high availability, elastic scaling, and global distribution. Despite being a NoSQL database, developers can use rich and familiar SQL query capabilities on JSON data. All resources in Azure Cosmos DB are stored as JSON documents. Resources are managed as _items_, which are documents containing metadata, and _feeds_, which are collections of items. Figure 8-2 shows the relationship between different Azure Cosmos DB resources.

![The hierarchical relationship between resources in Azure Cosmos DB, a NoSQL JSON database](./media/image8-2.png)

**Figure 8-2.** Azure Cosmos DB resource organization.

The Azure Cosmos DB query language is a simple yet powerful interface for querying JSON documents. The language supports a subset of ANSI SQL grammar and adds deep integration of JavaScript objects, arrays, object construction, and function invocation.

**References – Azure Cosmos DB**

- Azure Cosmos DB Introduction - [https://learn.microsoft.com/azure/cosmos-db/introduction](/azure/cosmos-db/introduction)

## Other persistence options

In addition to relational and NoSQL storage options, ASP.NET Core applications can use Azure Storage to store various data formats and files in a cloud-based, scalable fashion. Azure Storage is massively scalable, so you can start out storing small amounts of data and scale up to storing hundreds of terabytes if your application requires it. Azure Storage supports four kinds of data:

- Blob Storage for unstructured text or binary storage, also referred to as object storage.

- Table Storage for structured datasets, accessible via row keys.

- Queue Storage for reliable queue-based messaging.

- File Storage for shared file access between Azure virtual machines and on-premises applications.

**References – Azure Storage**

- Azure Storage Introduction - [https://learn.microsoft.com/azure/storage/common/storage-introduction](/azure/storage/common/storage-introduction)

## Caching

In web applications, each web request should be completed in the shortest time possible. One way to achieve this goal is to limit the number of external calls the server must make to complete the request. Caching involves storing a copy of data on the server (or in another data store that is more easily queried than the source of the data). Web applications, and especially non-SPA traditional web applications, need to build the entire user interface with every request. This approach frequently involves making many of the same database queries repeatedly from one user request to the next. In most cases, this data changes rarely, so there is little reason to constantly request it from the database.
ASP.NET Core supports response caching, for caching entire pages, and data caching, which supports more granular caching behavior.

When implementing caching, it's important to keep in mind separation of concerns. Avoid implementing caching logic in your data access logic, or in your user interface. Instead, encapsulate caching in its own classes, and use configuration to manage its behavior. This approach follows the Open/Closed and Single Responsibility principles, and will make it easier for you to manage how you use caching in your application as it grows.

### ASP.NET Core response caching

ASP.NET Core supports two levels of response caching. The first level does not cache anything on the server, but adds HTTP headers that instruct clients and proxy servers to cache responses. This functionality is implemented by adding the `[ResponseCache]` attribute to individual controllers or actions:

```csharp
[ResponseCache(Duration = 60)]
public IActionResult Contact()
{
    ViewData["Message"] = "Your contact page.";
    return View();
}
```

The previous example results in the following header being added to the response, instructing clients to cache the result for up to 60 seconds.

```http
Cache-Control: public,max-age=60
```

To add server-side in-memory caching to the application, add the Response Caching middleware (in older versions of ASP.NET Core this required referencing the `Microsoft.AspNetCore.ResponseCaching` NuGet package; in recent versions the middleware is included in the shared framework). The middleware is configured during app startup, both when registering services and when building the request pipeline:

```csharp
builder.Services.AddResponseCaching();

// other code omitted, including building the app

app.UseResponseCaching();
```

The Response Caching middleware will automatically cache responses based on a set of conditions, which you can customize. By default, only 200 (OK) responses requested via GET or HEAD methods are cached. In addition, requests must have a response with a `Cache-Control: public` header, and cannot include headers for `Authorization` or `Set-Cookie`. See a [complete list of the caching conditions used by the response caching middleware](/aspnet/core/performance/caching/middleware#conditions-for-caching).

### Data caching

Rather than (or in addition to) caching full web responses, you can cache the results of individual data queries. For this functionality, you can use in-memory caching on the web server, or use [a distributed cache](/aspnet/core/performance/caching/distributed). This section demonstrates how to implement in-memory caching.

Add support for memory (or distributed) caching with the following code:

```csharp
builder.Services.AddMemoryCache();
builder.Services.AddMvc();
```

Be sure to add the `Microsoft.Extensions.Caching.Memory` NuGet package as well.

Once you've added the service, you request `IMemoryCache` via dependency injection wherever you need to access the cache. In this example, the `CachedCatalogService` uses the Proxy (or Decorator) design pattern, by providing an alternative implementation of `ICatalogService` that controls access to (or adds behavior to) the underlying `CatalogService` implementation.
```csharp
public class CachedCatalogService : ICatalogService
{
    private readonly IMemoryCache _cache;
    private readonly CatalogService _catalogService;
    private static readonly string _brandsKey = "brands";
    private static readonly string _typesKey = "types";
    private static readonly TimeSpan _defaultCacheDuration = TimeSpan.FromSeconds(30);

    public CachedCatalogService(
        IMemoryCache cache,
        CatalogService catalogService)
    {
        _cache = cache;
        _catalogService = catalogService;
    }

    public async Task<IEnumerable<SelectListItem>> GetBrands()
    {
        return await _cache.GetOrCreateAsync(_brandsKey, async entry =>
        {
            entry.SlidingExpiration = _defaultCacheDuration;
            return await _catalogService.GetBrands();
        });
    }

    public async Task<CatalogIndexViewModel> GetCatalogItems(int pageIndex, int itemsPage, int? brandID, int? typeId)
    {
        string cacheKey = $"items-{pageIndex}-{itemsPage}-{brandID}-{typeId}";
        return await _cache.GetOrCreateAsync(cacheKey, async entry =>
        {
            entry.SlidingExpiration = _defaultCacheDuration;
            return await _catalogService.GetCatalogItems(pageIndex, itemsPage, brandID, typeId);
        });
    }

    public async Task<IEnumerable<SelectListItem>> GetTypes()
    {
        return await _cache.GetOrCreateAsync(_typesKey, async entry =>
        {
            entry.SlidingExpiration = _defaultCacheDuration;
            return await _catalogService.GetTypes();
        });
    }
}
```

To configure the application to use the cached version of the service, but still allow the service to get the instance of CatalogService it needs in its constructor, you would add the following lines in _Program.cs_:

```csharp
builder.Services.AddMemoryCache();
builder.Services.AddScoped<ICatalogService, CachedCatalogService>();
builder.Services.AddScoped<CatalogService>();
```

With this code in place, the database calls to fetch the catalog data will be made at most once per cache duration (30 seconds in this example), rather than on every request. Depending on the traffic to the site, this can have a significant impact on the number of queries made to the database, and on the average page load time for the home page, which currently depends on all three of the queries exposed by this service.

An issue that arises when caching is implemented is _stale data_ – that is, data that has changed at the source but for which an out-of-date version remains in the cache. A simple way to mitigate this issue is to use small cache durations, since for a busy application there is limited additional benefit to extending the length of time data is cached. For example, consider a page that makes a single database query and is requested 10 times per second. If this page is cached for one minute, the number of database queries made per minute drops from 600 to 1, a reduction of 99.8%. If instead the cache duration were made one hour, the overall reduction would be 99.997%, but now the likelihood and potential age of stale data are both increased dramatically.

Another approach is to proactively remove cache entries when the data they contain is updated. Any individual entry can be removed if its key is known:

```csharp
_cache.Remove(cacheKey);
```

If your application exposes functionality for updating entries that it caches, you can remove the corresponding cache entries in the code that performs the updates. Sometimes there may be many different entries that depend on a particular set of data. In that case, it can be useful to create dependencies between cache entries, by using a `CancellationChangeToken`. With a `CancellationChangeToken`, you can expire multiple cache entries at once by canceling the token.
```csharp
// configure CancellationToken and add entry to cache
var cts = new CancellationTokenSource();
_cache.Set("cts", cts);
_cache.Set(cacheKey, itemToCache, new CancellationChangeToken(cts.Token));

// elsewhere, expire the cache by cancelling the token
_cache.Get<CancellationTokenSource>("cts").Cancel();
```

Caching can dramatically improve the performance of web pages that repeatedly request the same values from the database. Be sure to measure data access and page performance before applying caching, and only apply caching where you see a need for improvement. Caching consumes web server memory resources and increases the complexity of the application, so it's important you don't prematurely optimize using this technique.

## Getting data to Blazor WebAssembly apps

If you're building apps that use Blazor Server, you can use Entity Framework and the other direct data access technologies discussed thus far in this chapter. However, Blazor WebAssembly apps, like other SPA frameworks, need a different strategy for data access. Typically, these applications access data and interact with the server through web API endpoints.

If the data or operations being performed are sensitive, be sure to review the section on security in the [previous chapter](develop-asp-net-core-mvc-apps.md) and protect your APIs against unauthorized access.

You'll find an example of a Blazor WebAssembly app in the [eShopOnWeb reference application](https://github.com/dotnet-architecture/eShopOnWeb), in the BlazorAdmin project. This project is hosted within the eShopOnWeb Web project, and allows users in the Administrators group to manage the items in the store. You can see a screenshot of the application in Figure 8-3.

![eShopOnWeb Catalog Admin Screenshot](./media/image8-3.jpg)

**Figure 8-3.** eShopOnWeb Catalog Admin Screenshot.

When fetching data from web APIs within a Blazor WebAssembly app, you just use an instance of `HttpClient` as you would in any .NET application. The basic steps involved are to create the request to send (if necessary, usually for POST or PUT requests), await the request itself, verify the status code, and deserialize the response. If you're going to make many requests to a given set of APIs, it's a good idea to encapsulate your APIs and configure the `HttpClient` base address centrally. This way, if you need to adjust any of these settings between environments, you can make the changes in just one place. You should add support for this service in your `Program.Main`:

```csharp
builder.Services.AddScoped(sp => new HttpClient
{
    BaseAddress = new Uri(builder.HostEnvironment.BaseAddress)
});
```

If you need to access services securely, you should acquire a secure token and configure the `HttpClient` to pass this token as an `Authorization` header with every request:

```csharp
_httpClient.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", token);
```

This activity can be done from any component that has the `HttpClient` injected into it, provided that `HttpClient` wasn't added to the application's services with a `Transient` lifetime. Every reference to `HttpClient` in the application references the same instance, so changes to it in one component flow through the entire application. A good place to perform this authentication check (followed by specifying the token) is in a shared component like the main navigation for the site.
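
To make this approach concrete, the following is a minimal sketch (not the actual BlazorAdmin implementation) of a shared component's code-behind that applies a stored token to the shared `HttpClient` during initialization. The `ITokenProvider` abstraction and its `GetTokenAsync` method are hypothetical names used here only for illustration; a real app might read the token from browser local storage or an authentication service instead.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components;

// Hypothetical abstraction over wherever the app stores its access token.
public interface ITokenProvider
{
    Task<string> GetTokenAsync();
}

// Code-behind for a shared component, such as the main navigation.
public class MainNavigationBase : ComponentBase
{
    [Inject] public HttpClient Http { get; set; }
    [Inject] public ITokenProvider TokenProvider { get; set; }

    protected override async Task OnInitializedAsync()
    {
        var token = await TokenProvider.GetTokenAsync();
        if (!string.IsNullOrEmpty(token))
        {
            // Because the app shares a single HttpClient instance, setting the
            // header here affects every subsequent request made by any component.
            Http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);
        }
    }
}
```
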
Learn more about this approach in the `BlazorAdmin` project in the [eShopOnWeb reference application](https://github.com/dotnet-architecture/eShopOnWeb).

One benefit of Blazor WebAssembly over traditional JavaScript SPAs is that you don't need to keep copies of your data transfer objects (DTOs) synchronized. Your Blazor WebAssembly project and your web API project can both share the same DTOs in a common shared project. This approach eliminates some of the friction involved in developing SPAs.

To quickly get data from an API endpoint, you can use the built-in helper method `GetFromJsonAsync`. There are similar methods for POST, PUT, and so on. The following shows how to get a CatalogItem from an API endpoint using a configured `HttpClient` in a Blazor WebAssembly app:

```csharp
var item = await _httpClient.GetFromJsonAsync<CatalogItem>($"catalog-items/{id}");
```

Once you have the data you need, you'll typically track changes locally. When you want to make updates to the backend data store, you'll call additional web APIs for this purpose.

**References – Blazor Data**

- Call a web API from ASP.NET Core Blazor - [https://learn.microsoft.com/aspnet/core/blazor/call-web-api](/aspnet/core/blazor/call-web-api)

>[!div class="step-by-step"]
>[Previous](develop-asp-net-core-mvc-apps.md)
>[Next](test-asp-net-core-mvc-apps.md)