This logging library centralizes logs from multiple microservices and ships them to an Elasticsearch and Logstash stack for analysis. It maintains traceability across services via `traceId` and `spanId` and handles exceptions in Spring Boot applications.
- Sends logs to Logstash via TCP.
- Integrates with Elasticsearch for storing logs and visualizing them in Kibana.
- Automates the creation of Data Views in Kibana through an API.
To integrate the library into your Spring Boot project:
- Install the library in your local Maven repository:

  ```bash
  mvn clean install
  ```
- In your Spring Boot project, add the library dependency to your `pom.xml`:

  ```xml
  <dependency>
      <groupId>com.carlosmgv02</groupId>
      <artifactId>logging-library</artifactId>
      <version>0.0.1-SNAPSHOT</version>
  </dependency>
  ```
To configure Logback to use the logging library and send logs to Logstash, add or modify the `logback-spring.xml` (or `logback.xml`) file in your Spring Boot project as follows:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Custom LogColorizer to colorize logs -->
    <property scope="context" name="COLORIZER_COLORS" value="red@,yellow@,green@,blue@,cyan@"/>
    <conversionRule conversionWord="colorize" converterClass="org.tuxdude.logback.extensions.LogColorizer"/>

    <!-- Appender to send logs to Logstash -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5050</destination> <!-- Logstash address -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"service":"${spring.application.name:-undefined-service}"}</customFields>
        </encoder>
    </appender>

    <!-- Console appender to print logs to the console -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>
                %d{yyyy-MM-dd HH:mm:ss.SSS} [%colorize(%-5level)] %magenta(${spring.application.name:-undefined-service}) [%boldCyan(traceId: %X{traceId}) %boldBlue(spanId: %X{spanId})] [%logger{36}] - %msg%n
            </pattern>
        </encoder>
    </appender>

    <!-- Root logger -->
    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
```
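The `%X{traceId}` and `%X{spanId}` conversion words in the pattern above read from SLF4J's MDC, so those keys must be set before a log statement runs. Below is a minimal sketch of that mechanism; the `TraceContextDemo` class and the hand-rolled UUID values are illustrative only, since in a real service the IDs would come from your tracing instrumentation.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import java.util.UUID;

public class TraceContextDemo {
    private static final Logger log = LoggerFactory.getLogger(TraceContextDemo.class);

    public static void main(String[] args) {
        // Populate the MDC keys read by %X{traceId} and %X{spanId} in the
        // Logback pattern above. In a real service these values come from
        // your tracing instrumentation, not from hand-rolled UUIDs.
        MDC.put("traceId", UUID.randomUUID().toString().replace("-", ""));
        MDC.put("spanId", UUID.randomUUID().toString().replace("-", "").substring(0, 16));
        try {
            log.info("Order created"); // tagged with traceId/spanId on both appenders
        } finally {
            MDC.clear(); // avoid leaking context into pooled threads
        }
    }
}
```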
I provide a Docker Compose file that sets up Elasticsearch, Logstash, Kibana, and the OpenTelemetry Collector. You can download it from the project repository.
Once you have downloaded the `docker-compose.yaml` file, you can start the entire logging stack by running:

```bash
docker-compose up -d
```
The `docker-compose.yaml` file includes:
- Elasticsearch on port `9200`
- Logstash on port `5000` (for logs)
- Kibana on port `5601`
- OTLP Collector on port `4317`
> **Warning**
> Check the ports in `docker-compose.yaml`; they may differ from the ones listed above.
Once the logs are being ingested into Elasticsearch, you can automate the creation of a Data View in Kibana using the following API request:
```bash
curl -X POST "http://localhost:5601/api/data_views/data_view" \
  -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -d '{
    "data_view": {
      "title": "logstash-logs-*",
      "timeFieldName": "@timestamp",
      "name": "Logstash Logs Data View"
    }
  }'
```
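If you prefer to issue the same request from Java (for example, at application startup), the JDK's built-in `java.net.http.HttpClient` is enough. This is a sketch under assumptions: the `DataViewCreator` class is hypothetical, Kibana is reachable at `localhost:5601`, and no authentication is enabled.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DataViewCreator {
    public static void main(String[] args) throws Exception {
        // Same payload as the curl request above.
        String body = """
                {"data_view": {
                   "title": "logstash-logs-*",
                   "timeFieldName": "@timestamp",
                   "name": "Logstash Logs Data View"}}""";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:5601/api/data_views/data_view"))
                .header("Content-Type", "application/json")
                .header("kbn-xsrf", "true") // Kibana requires this header on write requests
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```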
> **Note**
> Alternatively, you can open `http://localhost:5601` and do the following:
- In Kibana, go to the Management section in the left-hand side menu.
- In the menu, select Stack Management.
- Under the Kibana section, select Data Views (formerly known as Index Patterns).
- Click the Create data view button (or similar, depending on your version).
- In the Index pattern field, enter `logstash-logs-*` to match the indices being created by Logstash in Elasticsearch.
  - Note: If you're unsure of the exact index name, you can check the current indices in Index Management, also within Stack Management.
- If your Logstash configuration includes a `@timestamp` field, select this as the primary time field.
- Click Create data view.
- Once the Data View is created, go to the Discover section in the left-hand side menu.
- Select the newly created Data View (`logstash-logs-*`).
- You should now be able to see the logs being sent from your application and stored in Elasticsearch.
Once everything is set up:
- Start the application:

  ```bash
  mvn spring-boot:run -Dspring.application.name=your-application-name
  ```
> **Note**
> The `-Dspring.application.name` flag lets Logstash know the name of the app, which is useful when tracking a request across different services.
- Generate logs by interacting with your application (e.g., sending requests to your API); a minimal example controller is sketched after this list.
- Check logs in Kibana:
  - Access Kibana at `http://localhost:5601`.
  - Open Analytics > Discover in the left side bar.
  - You should see the Data View named "Logstash Logs Data View" in Kibana, where you can explore and visualize the logs generated by your application.
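For the "generate logs" step, any endpoint that writes a log line will do. The controller below is a hypothetical example (the `PingController` class and `/ping` path are not part of the library); hitting it with `curl http://localhost:8080/ping` should produce an entry on the console and, shortly after, in Kibana.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PingController {

    private static final Logger log = LoggerFactory.getLogger(PingController.class);

    @GetMapping("/ping")
    public String ping() {
        // This line goes to both the console (STDOUT) and Logstash appenders.
        log.info("Received ping request");
        return "pong";
    }
}
```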
For production:
- Enable security (`xpack.security.enabled=true`) and use HTTPS for Elasticsearch.
- Configure RBAC (Role-Based Access Control) to restrict access based on roles.
- Enable authentication in Kibana.
This Logstash configuration handles the log ingestion process:
```conf
input {
  tcp {
    port => 5000
    codec => json_lines
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logstash-logs-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
```
- Input: Listens on port `5000` for JSON logs.
- Output: Sends logs to Elasticsearch and prints them to the console for debugging.
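Since the input uses the `json_lines` codec, an event is simply one JSON object followed by a newline over TCP, the same framing `LogstashTcpSocketAppender` emits. The snippet below is a hypothetical probe (`RawLogstashProbe` is not part of the library) for checking the pipeline end to end without starting a Spring Boot app; if it works, the event shows up in Logstash's `rubydebug` output and in the daily `logstash-logs-*` index.

```java
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawLogstashProbe {
    public static void main(String[] args) throws Exception {
        // The json_lines codec expects one JSON object per line, so a single
        // newline-terminated object is a complete event.
        String event = "{\"@timestamp\":\"2024-01-01T00:00:00Z\","
                + "\"message\":\"manual probe\",\"service\":\"probe\"}\n";
        try (Socket socket = new Socket("localhost", 5000);
             Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8)) {
            out.write(event);
            out.flush(); // the event should then appear in Logstash's stdout output
        }
    }
}
```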
This OpenTelemetry Collector configuration handles trace collection:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  logging:
    loglevel: debug

processors:
  batch:
    timeout: 5s

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
```
- Receivers: Accepts gRPC traces on port `4317`.
- Exporters: Logs traces to the console.
- Processors: Batches traces for efficiency.
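An application can send traces to this receiver by configuring an OTLP gRPC exporter against port `4317`. The sketch below assumes the OpenTelemetry Java SDK artifacts (`io.opentelemetry:opentelemetry-sdk` and `io.opentelemetry:opentelemetry-exporter-otlp`) are on the classpath; it is a standalone illustration, not wiring provided by the logging library.

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

public class OtlpExportDemo {
    public static void main(String[] args) {
        // Export spans over gRPC to the collector's OTLP receiver.
        OtlpGrpcSpanExporter exporter = OtlpGrpcSpanExporter.builder()
                .setEndpoint("http://localhost:4317")
                .build();
        SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
                // Batching mirrors the collector-side batch processor.
                .addSpanProcessor(BatchSpanProcessor.builder(exporter).build())
                .build();
        OpenTelemetrySdk sdk = OpenTelemetrySdk.builder()
                .setTracerProvider(tracerProvider)
                .build();

        Tracer tracer = sdk.getTracer("otlp-export-demo");
        Span span = tracer.spanBuilder("demo-span").startSpan();
        span.end();

        tracerProvider.shutdown(); // flush pending spans before exit
    }
}
```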
- Carlos Martínez García-Villarrubia