MODINVSTOR-1294 Release 28.0.2 #1122

Merged 4 commits on Dec 1, 2024
8 changes: 8 additions & 0 deletions NEWS.md
@@ -1,3 +1,10 @@
## v28.0.2 2024-12-01
### Features
* Add `ecsRequestRouting` field to service point schema ([MODINVSTOR-1179](https://folio-org.atlassian.net/browse/MODINVSTOR-1179))
* Do not return routing service points by default ([MODINVSTOR-1219](https://folio-org.atlassian.net/browse/MODINVSTOR-1219))
* Implement synchronization operation for service point events ([MODINVSTOR-1245](https://folio-org.atlassian.net/browse/MODINVSTOR-1245))
* Add missing permission, improve test coverage ([MODINVSTOR-1262](https://folio-org.atlassian.net/browse/MODINVSTOR-1262))

## v28.0.1 2024-11-08
### Features
* Modify endpoint for bulk instances upsert with publish events flag ([MODINVSTOR-1283](https://folio-org.atlassian.net/browse/MODINVSTOR-1283))
@@ -35,6 +42,7 @@
### Features
* Add floating collection flag in location schema ([MODINVSTOR-1250](https://issues.folio.org/browse/MODINVSTOR-1250))
* Implement domain event production for location create/update/delete ([MODINVSTOR-1181](https://issues.folio.org/browse/MODINVSTOR-1181))
* Add a new boolean field ecsRequestRouting to the service point schema ([MODINVSTOR-1179](https://issues.folio.org/browse/MODINVSTOR-1179))
* Implement domain event production for library create/update/delete ([MODINVSTOR-1216](https://issues.folio.org/browse/MODINVSTOR-1216))
* Implement domain event production for campus create/update/delete ([MODINVSTOR-1217](https://issues.folio.org/browse/MODINVSTOR-1217))
* Implement domain event production for institution create/update/delete ([MODINVSTOR-1218](https://issues.folio.org/browse/MODINVSTOR-1218))
1 change: 1 addition & 0 deletions README.MD
@@ -186,6 +186,7 @@ These environment variables configure the module interaction with S3-compatible storage
* `S3_ACCESS_KEY_ID`
* `S3_SECRET_ACCESS_KEY`
* `S3_IS_AWS` (default value - `false`)
* `ECS_TLR_FEATURE_ENABLED` (default value - `false`)

# Local Deployment using Docker

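The new `ECS_TLR_FEATURE_ENABLED` variable gates the ECS (consortia) service-point behavior introduced in this release. As a rough sketch of what consuming such a flag can look like (the actual lookup in mod-inventory-storage may go through its own config helper, so treat this as illustrative):

```java
// Illustrative only: read the feature flag with a safe default of "false".
boolean ecsTlrFeatureEnabled = Boolean.parseBoolean(
    System.getenv().getOrDefault("ECS_TLR_FEATURE_ENABLED", "false"));
```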
20 changes: 13 additions & 7 deletions descriptors/ModuleDescriptor-template.json
@@ -1120,28 +1120,33 @@
},
{
"id": "service-points",
"version": "3.3",
"version": "3.4",
"handlers": [
{
"methods": ["GET"],
"pathPattern": "/service-points",
"permissionsRequired": ["inventory-storage.service-points.collection.get"]
"permissionsRequired": ["inventory-storage.service-points.collection.get"],
"modulePermissions": ["user-tenants.collection.get"]
}, {
"methods": ["GET"],
"pathPattern": "/service-points/{id}",
"permissionsRequired": ["inventory-storage.service-points.item.get"]
"permissionsRequired": ["inventory-storage.service-points.item.get"],
"modulePermissions": ["user-tenants.collection.get"]
}, {
"methods": ["POST"],
"pathPattern": "/service-points",
"permissionsRequired": ["inventory-storage.service-points.item.post"]
"permissionsRequired": ["inventory-storage.service-points.item.post"],
"modulePermissions": ["user-tenants.collection.get"]
}, {
"methods": ["PUT"],
"pathPattern": "/service-points/{id}",
"permissionsRequired": ["inventory-storage.service-points.item.put"]
"permissionsRequired": ["inventory-storage.service-points.item.put"],
"modulePermissions": ["user-tenants.collection.get"]
}, {
"methods": ["DELETE"],
"pathPattern": "/service-points/{id}",
"permissionsRequired": ["inventory-storage.service-points.item.delete"]
"permissionsRequired": ["inventory-storage.service-points.item.delete"],
"modulePermissions": ["user-tenants.collection.get"]
}
]
},
@@ -2900,7 +2905,8 @@
{ "name": "S3_BUCKET", "value": "marc-migrations" },
{ "name": "S3_ACCESS_KEY_ID", "value": "" },
{ "name": "S3_SECRET_ACCESS_KEY", "value": "" },
{ "name": "S3_IS_AWS", "value": "true" }
{ "name": "S3_IS_AWS", "value": "true" },
{ "name": "ECS_TLR_FEATURE_ENABLED", "value": "false"}
]
}
}
9 changes: 8 additions & 1 deletion pom.xml
@@ -3,7 +3,7 @@

<artifactId>mod-inventory-storage</artifactId>
<groupId>org.folio</groupId>
-  <version>28.0.2-SNAPSHOT</version>
+  <version>28.0.3-SNAPSHOT</version>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
@@ -37,6 +37,7 @@
<rest-assured.version>5.5.0</rest-assured.version>
<awaitility.version>4.2.2</awaitility.version>
<assertj.version>3.26.3</assertj.version>
<system-stubs-junit4.version>2.1.7</system-stubs-junit4.version>

<maven-compiler-plugin.version>3.13.0</maven-compiler-plugin.version>
<build-helper-maven-plugin.version>3.6.0</build-helper-maven-plugin.version>
@@ -273,6 +274,12 @@
<artifactId>log4j-slf4j2-impl</artifactId>
<scope>test</scope>
</dependency>
<dependency> <!-- for mocking env variable -->
<groupId>uk.org.webcompere</groupId>
<artifactId>system-stubs-junit4</artifactId>
<version>${system-stubs-junit4.version}</version>
<scope>test</scope>
</dependency>
</dependencies>

<build>
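The comment on the new dependency explains its purpose: system-stubs-junit4 lets tests stub environment variables such as `ECS_TLR_FEATURE_ENABLED` without touching the real process environment. A minimal sketch, assuming a plain JUnit 4 test class (the class and assertion below are illustrative, not taken from this PR):

```java
import static org.junit.Assert.assertTrue;

import org.junit.Rule;
import org.junit.Test;
import uk.org.webcompere.systemstubs.rules.EnvironmentVariablesRule;

public class EcsTlrFeatureFlagTest {

  // The rule sets the variable before each test and restores the environment afterwards.
  @Rule
  public EnvironmentVariablesRule environment =
      new EnvironmentVariablesRule("ECS_TLR_FEATURE_ENABLED", "true");

  @Test
  public void featureFlagIsVisibleToCodeUnderTest() {
    assertTrue(Boolean.parseBoolean(System.getenv("ECS_TLR_FEATURE_ENABLED")));
  }
}
```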
8 changes: 7 additions & 1 deletion ramls/service-point.raml
@@ -1,6 +1,6 @@
#%RAML 1.0
title: Service Points API
- version: v3.3
+ version: v3.4
protocols: [ HTTP, HTTPS ]
baseUri: http://localhost

@@ -34,6 +34,12 @@ resourceTypes:
searchable: { description: "with valid searchable fields", example: "name=aaa"},
pageable
]
queryParameters:
includeRoutingServicePoints:
description: "Should ECS request routing service points be included in the response"
default: false
required: false
type: boolean
description: Return a list of service points
post:
description: Create a new service point
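Because `includeRoutingServicePoints` defaults to false, existing clients keep receiving only non-routing service points; a caller must opt in explicitly. A hedged example of an opt-in request using the JDK HTTP client (host, tenant, and token are placeholders for a local Okapi setup):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ServicePointsRequestExample {

  public static void main(String[] args) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    // Placeholder host, tenant, and token; substitute values for your environment.
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(
            "http://localhost:9130/service-points?includeRoutingServicePoints=true"))
        .header("X-Okapi-Tenant", "diku")
        .header("X-Okapi-Token", "<token>")
        .header("Accept", "application/json")
        .GET()
        .build();

    HttpResponse<String> response =
        client.send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body()); // servicepoints collection, routing SPs included
  }
}
```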
5 changes: 5 additions & 0 deletions ramls/servicepoint.json
@@ -72,6 +72,11 @@
]
}
},
"ecsRequestRouting": {
"type": "boolean",
"description": "Indicates a service point used for the ECS functionality",
"default" : false
},
"metadata": {
"type": "object",
"$ref": "raml-util/schemas/metadata.schema",
19 changes: 19 additions & 0 deletions src/main/java/org/folio/rest/impl/InitApiImpl.java
@@ -6,11 +6,13 @@
import io.vertx.core.Future;
import io.vertx.core.Handler;
import io.vertx.core.Promise;
import io.vertx.core.ThreadingModel;
import io.vertx.core.Vertx;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.folio.rest.resource.interfaces.InitAPI;
import org.folio.services.caches.ConsortiumDataCache;
import org.folio.services.consortium.ServicePointSynchronizationVerticle;
import org.folio.services.consortium.ShadowInstanceSynchronizationVerticle;
import org.folio.services.consortium.SynchronizationVerticle;
import org.folio.services.migration.async.AsyncMigrationConsumerVerticle;
@@ -25,6 +27,7 @@ public void init(Vertx vertx, Context context, Handler<AsyncResult<Boolean>> han
initAsyncMigrationVerticle(vertx)
.compose(v -> initShadowInstanceSynchronizationVerticle(vertx, getConsortiumDataCache(context)))
.compose(v -> initSynchronizationVerticle(vertx, getConsortiumDataCache(context)))
.compose(v -> initServicePointSynchronizationVerticle(vertx, getConsortiumDataCache(context)))
.map(true)
.onComplete(handler);
}
@@ -76,6 +79,22 @@ private Future<Object> initSynchronizationVerticle(Vertx vertx, ConsortiumDataCa
.mapEmpty();
}

private Future<Object> initServicePointSynchronizationVerticle(Vertx vertx,
ConsortiumDataCache consortiumDataCache) {

DeploymentOptions options = new DeploymentOptions()
.setThreadingModel(ThreadingModel.WORKER)
.setInstances(1);

return vertx.deployVerticle(() -> new ServicePointSynchronizationVerticle(consortiumDataCache),
options)
.onSuccess(v -> log.info("initServicePointSynchronizationVerticle:: "
+ "ServicePointSynchronizationVerticle verticle was successfully started"))
.onFailure(e -> log.error("initServicePointSynchronizationVerticle:: "
+ "ServicePointSynchronizationVerticle verticle was not successfully started", e))
.mapEmpty();
}

private void initConsortiumDataCache(Vertx vertx, Context context) {
ConsortiumDataCache consortiumDataCache = new ConsortiumDataCache(vertx, vertx.createHttpClient());
context.put(ConsortiumDataCache.class.getName(), consortiumDataCache);
43 changes: 27 additions & 16 deletions src/main/java/org/folio/rest/impl/ServicePointApi.java
@@ -2,6 +2,7 @@

import static io.vertx.core.Future.succeededFuture;
import static java.lang.Boolean.TRUE;
import static org.apache.commons.lang3.StringUtils.isNotBlank;
import static org.folio.rest.support.EndpointFailureHandler.handleFailure;

import io.vertx.core.AsyncResult;
@@ -31,15 +32,18 @@ public class ServicePointApi implements org.folio.rest.jaxrs.resource.ServicePoi
"Hold shelf expiry period must be specified when service point can be used for pickup.";
public static final String SERVICE_POINT_CREATE_ERR_MSG_WITHOUT_BEING_PICKUP_LOC =
"Hold shelf expiry period cannot be specified when service point cannot be used for pickup";
private static final String ECS_ROUTING_QUERY_FILTER = "cql.allRecords=1 NOT ecsRequestRouting=true";
private static final Logger logger = LogManager.getLogger();

@Validate
@Override
-  public void getServicePoints(String query, String totalRecords, int offset, int limit,
+  public void getServicePoints(boolean includeRoutingServicePoints, String query,
+    String totalRecords, int offset, int limit,
Map<String, String> okapiHeaders,
Handler<AsyncResult<Response>> asyncResultHandler,
Context vertxContext) {

query = updateGetServicePointsQuery(query, includeRoutingServicePoints);
PgUtil.get(SERVICE_POINT_TABLE, Servicepoint.class, Servicepoints.class,
query, offset, limit, okapiHeaders, vertxContext, GetServicePointsResponse.class, asyncResultHandler);
}
@@ -67,11 +71,11 @@ public void postServicePoints(Servicepoint entity,
id = UUID.randomUUID().toString();
entity.setId(id);
}
-      String tenantId = getTenant(okapiHeaders);
-      PostgresClient pgClient = getPgClient(vertxContext, tenantId);
-      pgClient.save(SERVICE_POINT_TABLE, id, entity, saveReply -> {
-        if (saveReply.failed()) {
-          String message = logAndSaveError(saveReply.cause());
+      new ServicePointService(vertxContext, okapiHeaders)
+        .createServicePoint(id, entity)
+        .onSuccess(response -> asyncResultHandler.handle(succeededFuture(response)))
+        .onFailure(throwable -> {
+          String message = logAndSaveError(throwable);
if (isDuplicate(message)) {
asyncResultHandler.handle(Future.succeededFuture(
PostServicePointsResponse.respond422WithApplicationJson(
@@ -82,15 +86,7 @@ public void postServicePoints(Servicepoint entity,
PostServicePointsResponse.respond500WithTextPlain(
getErrorResponse(message))));
}
-        } else {
-          String ret = saveReply.result();
-          entity.setId(ret);
-          asyncResultHandler.handle(Future.succeededFuture(
-            PostServicePointsResponse
-              .respond201WithApplicationJson(entity,
-                PostServicePointsResponse.headersFor201().withLocation(LOCATION_PREFIX + ret))));
-        }
-      });
+        });
} catch (Exception e) {
String message = logAndSaveError(e);
asyncResultHandler.handle(Future.succeededFuture(
@@ -244,7 +240,7 @@ private boolean isDuplicate(String errorMessage) {
"duplicate key value violates unique constraint");
}

-  private String validateServicePoint(Servicepoint svcpt) {
+  public static String validateServicePoint(Servicepoint svcpt) {

HoldShelfExpiryPeriod holdShelfExpiryPeriod = svcpt.getHoldShelfExpiryPeriod();
boolean pickupLocation = svcpt.getPickupLocation() == null ? Boolean.FALSE : svcpt.getPickupLocation();
@@ -261,4 +257,19 @@ private Future<Boolean> checkServicePointInUse() {
return Future.succeededFuture(false);
}

private static String updateGetServicePointsQuery(String query, boolean includeRoutingServicePoints) {
if (includeRoutingServicePoints) {
return query;
}

logger.debug("updateGetServicePointsQuery:: original query: {}", query);
String newQuery = ECS_ROUTING_QUERY_FILTER;
if (isNotBlank(query)) {
newQuery += " and " + query;
}
logger.debug("updateGetServicePointsQuery:: updated query: {}", newQuery);

return newQuery;
}

}
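The effect of `updateGetServicePointsQuery` is easiest to see end to end: with the flag off (the default), the module prepends `cql.allRecords=1 NOT ecsRequestRouting=true` and ANDs any caller-supplied query onto it. A hedged integration-test sketch using REST Assured (already a test dependency in this PR's pom.xml; the base URI, tenant, and seeded data are assumptions):

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.everyItem;
import static org.hamcrest.Matchers.not;

import org.junit.Test;

public class ServicePointsRoutingFilterTest {

  @Test
  public void routingServicePointsAreExcludedByDefault() {
    // Assumes a locally running module with seeded service points,
    // some of which have ecsRequestRouting=true.
    given()
        .baseUri("http://localhost:8081")
        .header("X-Okapi-Tenant", "diku")
    .when()
        .get("/service-points")
    .then()
        .statusCode(200)
        // No returned record should be an ECS request routing service point.
        .body("servicepoints.ecsRequestRouting", everyItem(not(true)));
  }
}
```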
111 changes: 111 additions & 0 deletions src/main/java/org/folio/services/consortium/ServicePointSynchronizationVerticle.java
@@ -0,0 +1,111 @@
package org.folio.services.consortium;

import static org.folio.rest.tools.utils.ModuleName.getModuleName;
import static org.folio.rest.tools.utils.ModuleName.getModuleVersion;
import static org.folio.services.domainevent.ServicePointEventType.SERVICE_POINT_CREATED;
import static org.folio.services.domainevent.ServicePointEventType.SERVICE_POINT_DELETED;
import static org.folio.services.domainevent.ServicePointEventType.SERVICE_POINT_UPDATED;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;
import io.vertx.core.Promise;
import io.vertx.core.http.HttpClient;
import java.util.ArrayList;
import java.util.List;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.folio.kafka.AsyncRecordHandler;
import org.folio.kafka.GlobalLoadSensor;
import org.folio.kafka.KafkaConfig;
import org.folio.kafka.KafkaConsumerWrapper;
import org.folio.kafka.SubscriptionDefinition;
import org.folio.kafka.services.KafkaEnvironmentProperties;
import org.folio.kafka.services.KafkaTopic;
import org.folio.services.caches.ConsortiumDataCache;
import org.folio.services.consortium.handler.ServicePointSynchronizationCreateHandler;
import org.folio.services.consortium.handler.ServicePointSynchronizationDeleteHandler;
import org.folio.services.consortium.handler.ServicePointSynchronizationUpdateHandler;
import org.folio.services.domainevent.ServicePointEventType;

public class ServicePointSynchronizationVerticle extends AbstractVerticle {

private static final Logger log = LogManager.getLogger(ServicePointSynchronizationVerticle.class);
private static final String TENANT_PATTERN = "\\w{1,}";
private static final String MODULE_ID = getModuleId();
private static final int DEFAULT_LOAD_LIMIT = 5;
private final ConsortiumDataCache consortiumDataCache;

private final List<KafkaConsumerWrapper<String, String>> consumers = new ArrayList<>();

public ServicePointSynchronizationVerticle(final ConsortiumDataCache consortiumDataCache) {
this.consortiumDataCache = consortiumDataCache;
}

@Override
public void start(Promise<Void> startPromise) throws Exception {
var httpClient = vertx.createHttpClient();

createConsumers(httpClient)
.onSuccess(v -> log.info("start:: verticle started"))
.onFailure(t -> log.error("start:: verticle start failed", t))
.onComplete(startPromise);
}

private Future<Void> createConsumers(HttpClient httpClient) {
final var config = getKafkaConfig();

return createEventConsumer(SERVICE_POINT_CREATED, config,
new ServicePointSynchronizationCreateHandler(consortiumDataCache, httpClient, vertx))
.compose(r -> createEventConsumer(SERVICE_POINT_UPDATED, config,
new ServicePointSynchronizationUpdateHandler(consortiumDataCache, httpClient, vertx)))
.compose(r -> createEventConsumer(SERVICE_POINT_DELETED, config,
new ServicePointSynchronizationDeleteHandler(consortiumDataCache, httpClient, vertx)))
.mapEmpty();
}

private Future<KafkaConsumerWrapper<String, String>> createEventConsumer(
ServicePointEventType eventType, KafkaConfig kafkaConfig,
AsyncRecordHandler<String, String> handler) {

var subscriptionDefinition = SubscriptionDefinition.builder()
.eventType(eventType.name())
.subscriptionPattern(buildSubscriptionPattern(eventType.getKafkaTopic(), kafkaConfig))
.build();

return createConsumer(kafkaConfig, subscriptionDefinition, handler);
}

private Future<KafkaConsumerWrapper<String, String>> createConsumer(KafkaConfig kafkaConfig,
SubscriptionDefinition subscriptionDefinition,
AsyncRecordHandler<String, String> recordHandler) {

var consumer = KafkaConsumerWrapper.<String, String>builder()
.context(context)
.vertx(vertx)
.kafkaConfig(kafkaConfig)
.loadLimit(DEFAULT_LOAD_LIMIT)
.globalLoadSensor(new GlobalLoadSensor())
.subscriptionDefinition(subscriptionDefinition)
.build();

return consumer.start(recordHandler, MODULE_ID)
.onSuccess(v -> consumers.add(consumer))
.map(consumer);
}

private static String buildSubscriptionPattern(KafkaTopic kafkaTopic, KafkaConfig kafkaConfig) {
return kafkaTopic.fullTopicName(kafkaConfig, TENANT_PATTERN);
}

private static String getModuleId() {
return getModuleName().replace("_", "-") + "-" + getModuleVersion();
}

private KafkaConfig getKafkaConfig() {
return KafkaConfig.builder()
.envId(KafkaEnvironmentProperties.environment())
.kafkaHost(KafkaEnvironmentProperties.host())
.kafkaPort(KafkaEnvironmentProperties.port())
.build();
}
}
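The verticle collects its consumers in the `consumers` list, but this diff does not show them being stopped. One plausible use of that list, assuming `KafkaConsumerWrapper` exposes a `stop()` returning a `Future` (a sketch of a possible follow-up, not part of this PR):

```java
// Hypothetical addition: stop all Kafka consumers when the verticle is undeployed.
@Override
public void stop(Promise<Void> stopPromise) {
  Future.all(consumers.stream()
      .map(KafkaConsumerWrapper::stop)
      .toList())
    .onSuccess(v -> log.info("stop:: consumers stopped"))
    .onFailure(t -> log.error("stop:: failed to stop consumers", t))
    .<Void>mapEmpty()
    .onComplete(stopPromise);
}
```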