In the previous labs you learned how to take an existing monolithic Java EE application to the cloud with JBoss EAP and OpenShift, and you got a glimpse into the power of OpenShift for existing applications.
You will now begin modernizing the application by breaking it into multiple microservices built with different technologies, with the eventual goal of re-architecting the entire application as a set of distributed microservices. Later on we’ll explore how you can better manage and monitor the application after it is re-architected.
In this lab you will learn more about Supersonic, Subatomic Java with Quarkus, which is designed to be container and developer friendly.
Quarkus is a Kubernetes Native Java stack, crafted from best-of-breed Java libraries and standards. It offers amazingly fast boot time and incredibly low RSS memory usage (not just heap size!), providing near-instant scale-up and high-density memory utilization in container orchestration platforms like Kubernetes. Quarkus achieves this using a technique called compile-time boot.
Red Hat offers the fully supported Red Hat Build of Quarkus (RHBQ), which provides support and maintenance for Quarkus. In this workshop, you will use Quarkus to develop Kubernetes-native microservices and deploy them to OpenShift. Quarkus is one of the runtimes included in Red Hat Runtimes. Learn more about RHBQ.
You will implement one component of the monolith as a Quarkus microservice and modify it to address microservice concerns, understand its structure, deploy it to OpenShift and exercise the interfaces between Quarkus apps, microservices, and OpenShift/Kubernetes.
The goal is to deploy this new microservice alongside the existing monolith; later on we’ll tie them together. After this lab, you should end up with something like this:
Run the following commands to set up your environment for this lab and start in the right directory:
In the project explorer, expand the inventory project.
The sample Quarkus project shows a minimal CRUD service exposing a couple of endpoints over REST, with a front-end based on Angular so you can play with it from your browser.
While the code is surprisingly simple, under the hood this is using:
- RESTEasy to expose the REST endpoints
- Hibernate ORM with Panache to perform CRUD operations on the database
- A PostgreSQL database; see below to run one via Linux Container
- Example Dockerfiles to generate new images for JVM and native mode compilation
Hibernate ORM is the de facto JPA implementation and offers you the full breadth of an Object Relational Mapper. It makes complex mappings possible, but it does not make simple and common mappings trivial. Hibernate ORM with Panache focuses on making your entities trivial and fun to write in Quarkus.
Now let’s write some code and create a domain model, service interface and a RESTful endpoint to access inventory:
We will add Quarkus extensions to the Inventory application for Panache (a simplified way to access data via Hibernate ORM) and for connecting to a PostgreSQL database. We’ll also add the ability to expose health probes (which we’ll use later on) using the MicroProfile Health extension. Run the following command to add the extensions using the VS Code Terminal:
mvn quarkus:add-extension -Dextensions="hibernate-orm-panache, jdbc-postgresql, smallrye-health" -f $PROJECT_SOURCE/inventory
You will see:
[INFO] [SUCCESS] ✅ Extension io.quarkus:quarkus-hibernate-orm-panache has been installed
[INFO] [SUCCESS] ✅ Extension io.quarkus:quarkus-smallrye-health has been installed
[INFO] [SUCCESS] ✅ Extension io.quarkus:quarkus-jdbc-postgresql has been installed
This adds the extensions to pom.xml.
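If you open pom.xml, you should find new dependency entries similar to the following sketch (versions are intentionally omitted because they are managed by the Quarkus BOM):
<!-- Added by quarkus:add-extension; versions are managed by the Quarkus BOM -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-orm-panache</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-postgresql</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-health</artifactId>
</dependency>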
Note: There are many more Quarkus extensions for popular frameworks like Vert.x, Apache Camel, Infinispan, and Spring.
With our skeleton project in place, let’s get to work defining the business logic.
The first step is to define the model (entity) of an Inventory object. Since Quarkus uses Hibernate ORM Panache, we can re-use the same model definition from our monolithic application - no need to re-write or re-architect!
Under the inventory directory, open up the empty Inventory.java file in the com.redhat.coolstore package and paste the following code into it (identical to the monolith code):
package com.redhat.coolstore;
import jakarta.persistence.Cacheable;
import jakarta.persistence.Entity;
import io.quarkus.hibernate.orm.panache.PanacheEntity;
@Entity
@Cacheable
public class Inventory extends PanacheEntity {
public String itemId;
public String location;
public int quantity;
public String link;
public Inventory() {
}
}
By extending PanacheEntity in your entities, you will get an ID field that is auto-generated. If you require a custom ID strategy, you can extend PanacheEntityBase instead and handle the ID yourself.
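If you are curious what that looks like, here is a minimal, hypothetical sketch (not part of this lab’s code) of an entity that supplies its own identifier:
// Hypothetical sketch: an entity that manages its own ID by extending PanacheEntityBase
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import io.quarkus.hibernate.orm.panache.PanacheEntityBase;

@Entity
public class Warehouse extends PanacheEntityBase {

    @Id
    public String code;   // application-assigned identifier instead of the auto-generated id

    public String city;
}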
By using public fields, there is no need for functionless getters and setters (those that simply get or set the field). You simply refer to fields like Inventory.location without needing to write an Inventory.getLocation() implementation. Panache will auto-generate any getters and setters you do not write, or you can write your own getters/setters that do more than get/set, and they will be called when the field is accessed directly.
The PanacheEntity superclass comes with lots of super useful static methods, and you can add your own in your derived entity class. Much like traditional object-oriented programming, it’s natural and recommended to place custom queries as close to the entity as possible, ideally within the entity definition itself. Users can just start using your entity Inventory by typing Inventory, and get completion for all the operations in a single place.
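For example, a custom query could be added directly inside the Inventory class; the following findByLocation method is a hypothetical sketch (it is not used in this lab, and it requires a java.util.List import):
// Hypothetical custom query placed on the entity itself
public static List<Inventory> findByLocation(String location) {
    // Panache shorthand for: from Inventory where location = ?1
    return list("location", location);
}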
When an entity is annotated with @Cacheable, all its field values are cached except for collections and relations to other entities. This means the entity can be loaded more quickly without querying the database for frequently-accessed but rarely-changing data.
In this step we will mirror the abstraction of a service so that we can inject the Inventory service into various places (like a RESTful resource endpoint) in the future. This is the same approach that our monolith uses, so we can re-use this idea again. Open up the empty InventoryResource.java class in the com.redhat.coolstore package.
Add this code to it:
package com.redhat.coolstore;
import java.util.List;
import java.util.stream.Collectors;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.json.Json;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.WebApplicationException;
import jakarta.ws.rs.core.Response;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.ext.ExceptionMapper;
import jakarta.ws.rs.ext.Provider;
@Path("/services/inventory")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class InventoryResource {
@GET
public List<Inventory> getAll() {
return Inventory.listAll();
}
@GET
@Path("/{itemId}")
public List<Inventory> getAvailability(@PathParam("itemId") String itemId) {
return Inventory.<Inventory>streamAll()
.filter(p -> p.itemId.equals(itemId))
.collect(Collectors.toList());
}
@Provider
public static class ErrorMapper implements ExceptionMapper<Exception> {
@Override
public Response toResponse(Exception exception) {
int code = 500;
if (exception instanceof WebApplicationException) {
code = ((WebApplicationException) exception).getResponse().getStatus();
}
return Response.status(code)
.entity(Json.createObjectBuilder().add("error", exception.getMessage()).add("code", code).build())
.build();
}
}
}
The above REST service defines two endpoints:
- /services/inventory, accessible via HTTP GET, which returns all known product Inventory entities as JSON
- /services/inventory/<itemId>, accessible via HTTP GET at, for example, services/inventory/329199, with the last path parameter being the item ID for which we want inventory status
Let’s add inventory data to the database so we can test things out. Open up the src/main/resources/import.sql file and add the following SQL statements:
INSERT INTO INVENTORY (id, itemId, link, location, quantity) values (1, '329299', 'http://maps.google.com/?q=Raleigh', 'Raleigh', 736);
INSERT INTO INVENTORY (id, itemId, link, location, quantity) values (2, '329199', 'http://maps.google.com/?q=Boston', 'Boston', 512);
INSERT INTO INVENTORY (id, itemId, link, location, quantity) values (3, '165613', 'http://maps.google.com/?q=Seoul', 'Seoul', 256);
INSERT INTO INVENTORY (id, itemId, link, location, quantity) values (4, '165614', 'http://maps.google.com/?q=Singapore', 'Singapore', 54);
INSERT INTO INVENTORY (id, itemId, link, location, quantity) values (5, '165954', 'http://maps.google.com/?q=London', 'London', 87);
INSERT INTO INVENTORY (id, itemId, link, location, quantity) values (6, '444434', 'http://maps.google.com/?q=NewYork', 'NewYork', 443);
INSERT INTO INVENTORY (id, itemId, link, location, quantity) values (7, '444435', 'http://maps.google.com/?q=Paris', 'Paris', 600);
INSERT INTO INVENTORY (id, itemId, link, location, quantity) values (8, '444437', 'http://maps.google.com/?q=Tokyo', 'Tokyo', 230);
ALTER SEQUENCE inventory_seq RESTART WITH 9;
When testing or running in dev mode, Quarkus can provide you with a zero-config database out of the box, a feature we refer to as Dev Services. Depending on your database type, you may need Docker or Podman installed in order to use this feature. Dev Services is supported for PostgreSQL databases, so you don’t need to add the usual configuration (e.g., username, password, JDBC URL) to set up PostgreSQL in the application.properties file.
Red Hat Dev Spaces enables you to run the Quarkus Dev Services in the terminal using the KUBEDOCK tool. Unlike a local environment, you need to set the quarkus.datasource.devservices.volumes property to specify where the PostgreSQL data is stored inside the container.
Append the following variable in src/main/resources/application.properties:
%dev.quarkus.datasource.devservices.volumes."/"=/var/lib/postgresql/
Find more information about the Quarkus Dev Services here.
Let’s check if you already have a running container (e.g. PostgreSQL) before you start the Quarkus Dev Mode later.
podman ps
The output should look something like this. There’s no running container at this moment.
In the Terminal, run the project in Live Coding mode:
mvn quarkus:dev -Dquarkus.http.host=0.0.0.0 -f $PROJECT_SOURCE/inventory
You should see a bunch of log output that ends with:
Listening for transport dt_socket at address: 5005
INFO [io.qua.dat.dep.dev.DevServicesDatasourceProcessor] (build-5) Dev Services for the default datasource (postgresql) started - container ID is 7a35fc3bd279
INFO [io.qua.hib.orm.dep.dev.HibernateOrmDevServicesProcessor] (build-15) Setting quarkus.hibernate-orm.database.generation=drop-and-create to initialize Dev Services managed database
__ ____ __ _____ ___ __ ____ ______
--/ __ \/ / / / _ | / _ \/ //_/ / / / __/
-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
WARN [org.hib.eng.jdb.spi.SqlExceptionHelper] (JPA Startup Thread) SQL Warning Code: 0, SQLState: 00000
WARN [org.hib.eng.jdb.spi.SqlExceptionHelper] (JPA Startup Thread) table "inventory" does not exist, skipping
WARN [org.hib.eng.jdb.spi.SqlExceptionHelper] (JPA Startup Thread) SQL Warning Code: 0, SQLState: 00000
WARN [org.hib.eng.jdb.spi.SqlExceptionHelper] (JPA Startup Thread) sequence "inventory_seq" does not exist, skipping
INFO [io.quarkus] (Quarkus Main Thread) inventory 1.0-SNAPSHOT on JVM (powered by Quarkus 3.2.6.Final-redhat-00002) started in 11.878s. Listening on: http://0.0.0.0:8080
INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
INFO [io.quarkus] (Quarkus Main Thread) Installed features: [agroal, cdi, hibernate-orm, hibernate-orm-panache, jdbc-postgresql, narayana-jta, resteasy-reactive, resteasy-reactive-jackson, smallrye-context-propagation, smallrye-health, vertx]
--
Tests paused
Press [e] to edit command line args (currently ''), [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>
You probably noticed some magic here: we didn’t specify a JDBC URL. This is because of Quarkus Dev Services, which works for all of the supported databases.
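Dev Services behavior can also be tuned via configuration when needed. For example, the following property (shown purely for illustration; it is not required in this lab) would pin the container image used for the dev database:
# Illustration only: pin the PostgreSQL image used by Dev Services in dev mode
%dev.quarkus.datasource.devservices.image-name=postgres:14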
When you take a look at the Quarkus Dev Mode terminal, you should see the following logs:
Dev Services for the default datasource (postgresql) started
Then, execute podman ps again in another terminal. You will see that a PostgreSQL container is now running automatically.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7a35fc3bd279 docker.io/postgres:14 2 minutes ago Healthy (Up) 7a35fc3bd27961dc343093d5410e48c70ffc27d489bb205bdce62c5f3ba0714b
You can also see that tests are paused by default when a Quarkus application starts.
VS Code will also detect that the Quarkus app opens port 5005 (for debugging) and port 8080 (for web requests). Close the popup for port 5005 (we don’t need to expose it), but when prompted for port 8080, select Open In New Tab, which opens a new tab in your web browser:
You should see the Coolstore Inventory page:
Open a new terminal by selecting the + icon:
Then invoke the RESTful endpoint using the following curl command:
curl http://localhost:8080/services/inventory | jq
The output should look like this.
[
{
"id": 1,
"itemId": "329299",
"location": "Raleigh",
"quantity": 736,
"link": "http://maps.google.com/?q=Raleigh"
},
{
"id": 2,
"itemId": "329199",
"location": "Boston",
"quantity": 512,
"link": "http://maps.google.com/?q=Boston"
},
{
"id": 3,
"itemId": "165613",
"location": "Seoul",
"quantity": 256,
"link": "http://maps.google.com/?q=Seoul"
},
{
"id": 4,
"itemId": "165614",
"location": "Singapore",
"quantity": 54,
"link": "http://maps.google.com/?q=Singapore"
},
{
"id": 5,
"itemId": "165954",
"location": "London",
"quantity": 87,
"link": "http://maps.google.com/?q=London"
},
{
"id": 6,
"itemId": "444434",
"location": "NewYork",
"quantity": 443,
"link": "http://maps.google.com/?q=NewYork"
},
{
"id": 7,
"itemId": "444435",
"location": "Paris",
"quantity": 600,
"link": "http://maps.google.com/?q=Paris"
},
{
"id": 8,
"itemId": "444437",
"location": "Tokyo",
"quantity": 230,
"link": "http://maps.google.com/?q=Tokyo"
}
]
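You can also query the availability of a single item through the second endpoint, using one of the item IDs seeded by import.sql:
curl http://localhost:8080/services/inventory/329199 | jq
This should return a single-element array for the Boston entry, similar to:
[
  {
    "id": 2,
    "itemId": "329199",
    "location": "Boston",
    "quantity": 512,
    "link": "http://maps.google.com/?q=Boston"
  }
]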
Quarkus also provides a new experimental Dev UI, which is available in dev mode (when you start Quarkus with mvn quarkus:dev) at /q/dev by default. It allows you to quickly visualize all the extensions currently loaded, see their status, and go directly to their documentation.
In case you’re using a cloud IDE (e.g., Red Hat Dev Spaces), you need to disable the CORS protection, because the Dev UI is intended to be used in a developer’s local environment. Append the following variable in src/main/resources/application.properties:
%dev.quarkus.dev-ui.cors.enabled=false
To display the public endpoints, select Show plug-in endpoints. Then unfold ENDPOINTS > Public and select the Open Dev UI icon.
A new web browser tab will open automatically and show you something like this:
MicroProfile Health allows applications to provide information about their state to external observers. This is typically useful in cloud environments like OpenShift, where automated processes must be able to determine whether the application should be discarded or restarted.
Because you imported the health extension earlier, the /q/health endpoint is automatically exposed and can be used to run the health check procedures.
Our application is still running, so you can exercise the default (no-op) health check with this command in a separate Terminal:
curl -s http://localhost:8080/q/health | jq
The output shows:
{
"status": "UP",
"checks": [
{
"name": "Database connections health check",
"status": "UP",
"data": {
"<default>": "UP"
}
}
]
}
The general outcome of the health check is computed as a logical AND of all the declared health check procedures. Quarkus extensions can also provide default health checks out of the box, which is why you see the Database connections health check above, since we are using a database extension.
Next, let’s fill in the class by creating a new health check which will be used by OpenShift to probe our service. Open the empty Java class src/main/java/com/redhat/coolstore/InventoryHealthCheck.java and add the following code:
package com.redhat.coolstore;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;
@Readiness
@ApplicationScoped
public class InventoryHealthCheck implements HealthCheck {
@Inject
private InventoryResource inventoryResource;
@Override
public HealthCheckResponse call() {
if (inventoryResource.getAll() != null) {
return HealthCheckResponse.named("Success of Inventory Health Check!!!").up().build();
} else {
return HealthCheckResponse.named("Failure of Inventory Health Check!!!").down().build();
}
}
}
The call() method returns the status of the service, and its result is included in the health check response. The logic of this check does a simple query to the underlying database to ensure the connection to it is stable and available. The class is also annotated with MicroProfile’s @Readiness annotation, which directs Quarkus to expose this check as part of the readiness health endpoint at /q/health/ready.
Note: You don’t need to stop and re-run the Inventory application; Quarkus will reload the changes automatically via the Live Coding feature.
Access the health endpoint again using curl:
curl -s http://localhost:8080/q/health | jq
The result should be:
{
"status": "UP",
"checks": [
{
"name": "Database connections health check",
"status": "UP",
"data": {
"<default>": "UP"
}
},
{
"name": "Success of Inventory Health Check!!!",
"status": "UP"
}
]
}
You now see the default health check, along with your new Inventory health check.
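Since the new check is registered as a readiness check, you can also query only the readiness checks at the dedicated endpoint:
curl -s http://localhost:8080/q/health/ready | jq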
Note: You can define separate readiness and liveness probes using the @Readiness and @Liveness annotations, which are exposed at /q/health/ready and /q/health/live respectively.
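For illustration, a separate liveness check (hypothetical, not used in this lab) would look very similar and be reported under /q/health/live:
// Hypothetical sketch of a liveness check, reported under /q/health/live
package com.redhat.coolstore;

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

@Liveness
@ApplicationScoped
public class InventoryLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Report "up" as long as the application itself is responsive
        return HealthCheckResponse.named("Inventory liveness check").up().build();
    }
}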
In this step, we will deploy our new Inventory microservice for our CoolStore application into a separate project, keeping it apart from our monolith and from the other microservices we will create later on.
Quarkus also offers the ability to automatically generate OpenShift resources based on sane defaults and user-supplied configuration. The OpenShift extension is actually a wrapper extension that brings together the kubernetes and container-image-s2i extensions with defaults so that it’s easier for the user to get started with Quarkus on OpenShift.
Add the openshift extension via the VS Code Terminal:
mvn quarkus:add-extension -Dextensions="openshift" -f $PROJECT_SOURCE/inventory
You will see:
[INFO] [SUCCESS] ✅ Extension io.quarkus:quarkus-openshift has been installed
Quarkus supports the notion of configuration profiles. These allow you to have multiple configurations in the same file and select between them via a profile name.
By default Quarkus has three profiles, although it is possible to use as many as you like. The default profiles are:
- dev - Activated when in development mode (i.e. quarkus:dev)
- test - Activated when running tests
- prod - The default profile when not running in development or test mode
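For instance, the same property can take different values per profile by prefixing it with the profile name; the snippet below is a hypothetical illustration and is not part of this lab’s configuration:
# Illustration only: profile-prefixed values for the same property
%dev.quarkus.log.level=DEBUG
%prod.quarkus.log.level=INFO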
Let’s add the following variables in src/main/resources/application.properties:
%prod.quarkus.datasource.db-kind=postgresql
%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://inventory-database:5432/inventory
%prod.quarkus.datasource.jdbc.driver=org.postgresql.Driver
%prod.quarkus.datasource.username=inventory
%prod.quarkus.datasource.password=mysecretpassword
%prod.quarkus.hibernate-orm.database.generation=drop-and-create
%prod.quarkus.hibernate-orm.sql-load-script=import.sql
%prod.quarkus.hibernate-orm.log.sql=true
%prod.quarkus.kubernetes-client.trust-certs=true(1)
%prod.quarkus.kubernetes.deploy=true(2)
%prod.quarkus.kubernetes.deployment-target=openshift(3)
%prod.quarkus.openshift.build-strategy=docker(4)
%prod.quarkus.openshift.route.expose=true(5)
%prod.quarkus.openshift.deployment-kind=Deployment(6)
%prod.quarkus.container-image.group={{ USER_ID }}-inventory(7)
%prod.quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000(8)
1. We are using self-signed certs in this simple example, so this simply tells the extension to trust them.
2. Instructs the extension to deploy to OpenShift after the container image is built.
3. Instructs the extension to generate and create the OpenShift resources (like DeploymentConfig and Service) after building the container.
4. Sets the Docker build strategy.
5. Instructs the extension to generate an OpenShift Route.
6. Generates the Deployment resource.
7. Specifies the project where the application is deployed.
8. Specifies the internal container registry to which the application image is pushed.
The Docker build strategy builds the artifacts (JAR files or a native executable) outside the OpenShift cluster, either locally or in a CI environment, and then provides them to the OpenShift build system together with a Dockerfile. The container is built inside the OpenShift cluster and provided as an image stream.
In OpenShift, ensure you’re in the Developer perspective and then choose the {{ USER_ID }}-inventory project, which has already been created for you.
There’s nothing there yet, but that’s about to change.
Let’s deploy our new inventory microservice to OpenShift!
Our production inventory microservice will use an external database (PostgreSQL) to house inventory data. First, deploy a new instance of PostgreSQL using the following oc command:
oc new-app -n {{ USER_ID }}-inventory --name inventory-database -e POSTGRESQL_USER=inventory -e POSTGRESQL_PASSWORD=mysecretpassword -e POSTGRESQL_DATABASE=inventory registry.redhat.io/rhel9/postgresql-15
Now let’s deploy the application itself. Run the following command which will build and deploy using the OpenShift extension:
oc project {{ USER_ID }}-inventory && \
mvn clean package -DskipTests -f $PROJECT_SOURCE/inventory
The output should end with BUILD SUCCESS.
Finally, make sure it’s actually done rolling out:
oc rollout status -w deployment/inventory
Wait for that command to report deployment "inventory" successfully rolled out before continuing.
And label the items with proper icons:
oc label deployment/inventory-database app.openshift.io/runtime=postgresql --overwrite && \
oc label deployment/inventory app.kubernetes.io/part-of=inventory --overwrite && \
oc label deployment/inventory-database app.kubernetes.io/part-of=inventory --overwrite && \
oc annotate deployment/inventory app.openshift.io/connects-to=inventory-database --overwrite && \
oc annotate deployment/inventory app.openshift.io/vcs-uri=https://github.com/RedHat-Middleware-Workshops/cloud-native-workshop-v2m1-labs.git --overwrite && \
oc annotate deployment/inventory app.openshift.io/vcs-ref=ocp-4.14 --overwrite
Back on the {{ CONSOLE_URL }}/topology/ns/{{ USER_ID }}-inventory[Topology View^], make sure it’s done deploying (dark blue circle):
Click on the Route icon above (the arrow) to access our inventory running on OpenShift:
The UI will refresh the inventory table every 2 seconds.
You should also be able to access the health check logic at the inventory endpoint using curl in the Terminal:
curl $(oc get route inventory -o jsonpath={% raw %}"{.spec.host}"{% endraw %})/q/health/ready | jq
You should see the same JSON response:
{
"status": "UP",
"checks": [
{
"name": "Database connections health check",
"status": "UP"
},
{
"name": "Success of Inventory Health Check!!!",
"status": "UP"
}
]
}
The various timeout values for the probes can be configured in many ways. Let’s tune the readiness probe’s initial delay so that OpenShift waits 30 seconds before it starts polling the probe. Use the following oc command:
oc set probe deployment/inventory --readiness --initial-delay-seconds=30
And verify it’s been changed (look at the delay= value for the Readiness probe) via VS Code Terminal:
oc describe deployment/inventory | egrep 'Readiness|Liveness'
The result:
Liveness: http-get http://:8080/q/health/live delay=0s timeout=10s period=30s #success=1 #failure=3
Readiness: http-get http://:8080/q/health/ready delay=30s timeout=10s period=30s #success=1 #failure=3
In the next step, we’ll exercise the probe and watch as it fails and OpenShift recovers the application.
Open the http://inventory-{{ USER_ID }}-inventory.{{ ROUTE_SUBDOMAIN }}[Inventory UI^].
This will open up the sample application UI in a new browser tab:
The app will begin polling the inventory as before and report success:
Now you will corrupt the service and cause its health check to start failing. To simulate the app crashing, let’s kill the underlying service so it stops responding. Execute via VS Code Terminal:
oc rsh deployment/inventory /bin/bash -c 'kill 1'
This will execute the Linux kill command to stop the running Java process in the container.
Check out the application sample UI page and notice it is now failing to access the inventory data, and the Last Successful Fetch counter starts increasing, indicating that the UI cannot access inventory. This could have been caused by an overloaded server, a bug in the code, or any other reason that could make the application unhealthy.
Back in the {{ CONSOLE_URL }}/topology/ns/{{ USER_ID }}-inventory[Topology View^], you will see the pod is failing (light blue or yellow warning circle):
After too many healthcheck probe failures, OpenShift will forcibly kill the pod and container running the service, and spin up a new one to take its place. Once this occurs, the light blue or yellow warning circle should return to dark blue. This should take about 30 seconds.
Return to the same sample app UI (without reloading the page) and notice that the UI has automatically re-connected to the new service and successfully accessed the inventory once again:
Let’s set the probe back to more appropriate values:
oc set probe deployment/inventory --readiness --initial-delay-seconds=5 --period-seconds=5 --failure-threshold=15
You learned a bit more about what Quarkus is, and how it can be used to create modern Java microservice-oriented applications.
You created a new Inventory microservice representing functionality previously implemented in the monolithic CoolStore application. For now this new microservice is completely disconnected from our monolith and is not very useful on its own. In future steps you will link this and other microservices into the monolith to begin the process of strangling the monolith.
In the next lab, you’ll use Spring Boot, another popular framework, to implement additional microservices. Let’s go!