
feat(wren-ai-service): Consolidate SQL Pairs Service and Remove Redundant Code #1268

Merged
merged 14 commits from feat/sql-paris-endpoint into main on Feb 6, 2025

Conversation

paopa (Member) commented Feb 5, 2025

This PR refactors the SQL Pairs functionality to improve code organization and remove redundancy:

Key Changes

  • Consolidated SQL pairs functionality into a single service (SqlPairsService) and removed the redundant SqlPairsPreparationService (see the interface sketch after this list)
  • Simplified SQL pairs pipeline by merging indexing and deletion operations into a single pipeline
  • Moved service imports to __init__.py to improve code organization and avoid circular imports
  • Updated file paths and naming conventions for consistency (e.g., sql_pairs.json instead of pairs.json)
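
A minimal sketch of the consolidated service surface, based only on what is described in this PR. The method names index/delete, the clean pipeline method, and the "sql_pairs" pipeline key come from the review discussion; the request shapes, cache type, and everything else here are illustrative assumptions, not the actual implementation.

import logging
from dataclasses import dataclass
from typing import Any, Dict, Literal, Optional

logger = logging.getLogger(__name__)


@dataclass
class Resource:
    # Status values mirror the ones discussed in the review comments below.
    status: Literal["indexing", "deleting", "finished", "failed"] = "indexing"
    error: Optional[str] = None


class SqlPairsService:
    """Single service handling both indexing and deletion of SQL pairs (sketch only)."""

    def __init__(self, pipelines: Dict[str, Any]):
        self._pipelines = pipelines            # expected to contain a "sql_pairs" pipeline
        self._cache: Dict[str, Resource] = {}  # stand-in for the TTL cache used for status tracking

    async def index(self, event_id: str, project_id: Optional[str] = None, **pipeline_inputs) -> None:
        try:
            await self._pipelines["sql_pairs"].run(project_id=project_id, **pipeline_inputs)
            self._cache[event_id] = Resource(status="finished")
        except Exception as e:
            logger.error("SQL pairs indexing failed: %s", e)
            self._cache[event_id] = Resource(status="failed", error=str(e))

    async def delete(self, event_id: str, sql_pairs: list, project_id: Optional[str] = None) -> None:
        try:
            await self._pipelines["sql_pairs"].clean(sql_pairs=sql_pairs, project_id=project_id)
            self._cache[event_id] = Resource(status="finished")
        except Exception as e:
            logger.error("SQL pairs deletion failed: %s", e)
            self._cache[event_id] = Resource(status="failed", error=str(e))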

Technical Details

  • Removed SqlPairsDeletion pipeline and integrated deletion functionality into SqlPairs pipeline
  • Updated service container to use the new consolidated SqlPairsService
  • Added comprehensive tests for the new service implementation
  • Improved error handling and status tracking for SQL pairs operations

Testing

  • Added new test cases covering:
    • SQL pairs preparation
    • Single and batch deletion
    • Cross-project operations
    • Error handling scenarios
    • Empty input handling

Breaking Changes

  • Removed SqlPairsPreparationService in favor of SqlPairsService
  • Updated API endpoints to use the new service implementation
  • Changed configuration path from pairs.json to sql_pairs.json

Summary by CodeRabbit

  • New Features

    • Introduced new endpoints for managing SQL pairs, enabling preparation, deletion, and status tracking.
    • Added support for a new LLM model configuration.
  • Refactor

    • Streamlined service imports and consolidated SQL pairs handling for improved consistency.
    • Enhanced background operations with dynamic execution.
  • Removed Features

    • Eliminated legacy SQL pairs deletion functionality and redundant endpoints.
  • Tests

    • Added comprehensive tests for SQL pairs deletion while removing obsolete test cases.
  • Chores

    • Updated configuration examples and deployment settings to align with the new SQL pairs management approach.

paopa added the module/ai-service and ci/ai-service labels (both ai-service related) on Feb 5, 2025
coderabbitai bot commented Feb 5, 2025

Walkthrough

This pull request updates the SQL pairs configuration and related service logic. The default file path for SQL pairs is renamed, and service imports in the ServiceContainer are consolidated. Pipeline functions are modified to support dynamic method invocation and improved logging. In addition, legacy SQL pairs deletion and preparation functionality is removed in favor of a new SQL pairs API and service that handle indexing, cleaning, and deletion. Associated tests have been added, and outdated tests removed.

Changes

Changes by file:

  • wren-ai-service/src/config.py: Changed default sql_pairs_path from "pairs.json" to "sql_pairs.json".
  • wren-ai-service/src/globals.py: Refactored ServiceContainer: consolidated service imports and updated service references; renamed sql_pairs_preparation_service to sql_pairs_service.
  • wren-ai-service/src/pipelines/common.py: Updated dry_run_pipeline signature to include a method parameter and adjusted logger configuration with is_dev=True.
  • wren-ai-service/src/pipelines/indexing/...: In sql_pairs.py, added a new SqlPairsCleaner class, updated SqlPairs.run to accept an additional external_pairs parameter, and added a clean method; removed legacy SQL pairs deletion functionality and exports (including the deletion file and entries in __init__.py).
  • wren-ai-service/src/web/v1/routers/...: Replaced sql_pairs_preparation.router with sql_pairs.router; consolidated import statements in routers for question, relationship, and semantics; added a new sql_pairs.py router and removed the outdated sql_pairs_preparation.py file.
  • wren-ai-service/src/web/v1/services/...: Added new service imports and an explicit __all__ list; introduced the SqlPairsService class to handle SQL pairs indexing and deletion; removed the legacy SqlPairsPreparationService.
  • wren-ai-service/tests/pytest/...: Added tests for SQL pairs deletion and SqlPairsService functionality; removed outdated test files for SQL pairs deletion and preparation.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Router
    participant Service as SqlPairsService
    participant Pipeline
    participant Cache

    Client->>Router: POST /sql-pairs (payload)
    Router->>Service: index(request)
    Service->>Pipeline: Execute SQL pairs indexing
    Pipeline-->>Service: Return result
    Service->>Cache: Update status (finished/failed)
    Service-->>Router: Response with tracking ID
    Router-->>Client: Return response
sequenceDiagram
    participant Client
    participant Router
    participant Service as SqlPairsService
    participant Pipeline
    participant Cache

    Client->>Router: DELETE /sql-pairs (payload)
    Router->>Service: delete(request)
    Service->>Pipeline: Execute SQL pairs deletion (clean)
    Pipeline-->>Service: Return deletion result
    Service->>Cache: Update status (finished/failed)
    Service-->>Router: Response with deletion ID
    Router-->>Client: Return deletion response
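The two flows above could be wired up roughly as follows. This is only an illustrative FastAPI-style sketch: the POST/DELETE /sql-pairs paths come from the diagrams, while the request models, handler names, and the stub service are assumptions rather than the actual router code.

import uuid
from typing import List, Optional

from fastapi import APIRouter, BackgroundTasks
from pydantic import BaseModel

router = APIRouter()


class _StubSqlPairsService:
    """Placeholder standing in for the real SqlPairsService wired up in the service container."""

    async def index(self, event_id: str, sql_pairs: List[dict], project_id: Optional[str]) -> None: ...

    async def delete(self, event_id: str, sql_pair_ids: List[str], project_id: Optional[str]) -> None: ...


sql_pairs_service = _StubSqlPairsService()


class PostRequest(BaseModel):
    sql_pairs: List[dict] = []
    project_id: Optional[str] = None


class DeleteRequest(BaseModel):
    sql_pair_ids: List[str] = []
    project_id: Optional[str] = None


@router.post("/sql-pairs")
async def prepare_sql_pairs(request: PostRequest, background_tasks: BackgroundTasks):
    event_id = str(uuid.uuid4())
    # Offload indexing to a background task and return a tracking ID immediately.
    background_tasks.add_task(
        sql_pairs_service.index, event_id, request.sql_pairs, request.project_id
    )
    return {"event_id": event_id}


@router.delete("/sql-pairs")
async def delete_sql_pairs(request: DeleteRequest, background_tasks: BackgroundTasks):
    event_id = str(uuid.uuid4())
    background_tasks.add_task(
        sql_pairs_service.delete, event_id, request.sql_pair_ids, request.project_id
    )
    return {"event_id": event_id}

A GET status endpoint (discussed in the review comments below) would then read the tracking ID back from the service's status cache.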

Suggested reviewers

  • cyyeh
  • wwwy3y3

Poem

I'm a bunny in the code, so fleet,
Hopping through changes with nimble feet,
SQL pairs renamed with a clever twist,
Pipelines and services now coexist,
I wiggle my whiskers in joyful delight,
Celebrating smooth code from morning to night!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5943227 and 61f7e13.

📒 Files selected for processing (9)
  • deployment/kustomizations/base/cm.yaml (0 hunks)
  • docker/config.example.yaml (0 hunks)
  • wren-ai-service/docs/config_examples/config.azure.yaml (0 hunks)
  • wren-ai-service/docs/config_examples/config.deepseek.yaml (0 hunks)
  • wren-ai-service/docs/config_examples/config.google_ai_studio.yaml (0 hunks)
  • wren-ai-service/docs/config_examples/config.groq.yaml (0 hunks)
  • wren-ai-service/docs/config_examples/config.ollama.yaml (0 hunks)
  • wren-ai-service/tools/config/config.example.yaml (0 hunks)
  • wren-ai-service/tools/config/config.full.yaml (0 hunks)
💤 Files with no reviewable changes (9)
  • wren-ai-service/docs/config_examples/config.deepseek.yaml
  • wren-ai-service/docs/config_examples/config.azure.yaml
  • wren-ai-service/docs/config_examples/config.ollama.yaml
  • wren-ai-service/docs/config_examples/config.groq.yaml
  • deployment/kustomizations/base/cm.yaml
  • docker/config.example.yaml
  • wren-ai-service/tools/config/config.full.yaml
  • wren-ai-service/docs/config_examples/config.google_ai_studio.yaml
  • wren-ai-service/tools/config/config.example.yaml
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: pytest


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🔭 Outside diff range comments (1)
wren-ai-service/src/web/v1/routers/sql_pairs.py (1)

145-162: Return a unified status.

The GET endpoint returns "deleting", "finished", or "failed". If you anticipate indexing and deleting statuses in the same ID’s lifecycle, consider also supporting "indexing". Otherwise, the returned data might look incomplete for those using the GET endpoint to track indexing progress.

- status: Literal["deleting", "finished", "failed"]
+ status: Literal["indexing", "deleting", "finished", "failed"]
🧹 Nitpick comments (9)
wren-ai-service/src/web/v1/services/sql_pairs.py (4)

16-25: Consider using a more descriptive error code mechanism.

Currently, the error field only supports "OTHERS" as a literal. If you anticipate more nuanced error categories (e.g., "PIPELINE_ERROR", "DATA_ERROR", etc.), consider refactoring to an Enum or a broader set of Literals for better clarity and extensibility.
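
A small illustration of that suggestion, assuming hypothetical category names:

from enum import Enum


class SqlPairsErrorCode(str, Enum):
    # "OTHERS" is the only code in the current implementation; the other members are hypothetical.
    OTHERS = "OTHERS"
    PIPELINE_ERROR = "PIPELINE_ERROR"
    DATA_ERROR = "DATA_ERROR"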


35-47: Centralize logging or re-raise exceptions for broader monitoring.

While _handle_exception sets the Resource to "failed" and logs the error, consider whether additional handling (like re-raising or alerting) might be beneficial, especially if you have a monitoring or alerting pipeline that depends on raised exceptions rather than mere log outputs.


90-111: Ensure consistent error handling with the delete operation.

Deletion logic mirrors the indexing flow, but if partial deletions fail, you may want to confirm whether the entire request should be considered failed. Currently, any thrown exception marks the entire operation as "failed". If partial or bulk deletes are allowed, consider different statuses or more granular error reporting.


113-119: Avoid using logger.exception for resource-not-found scenarios.

If the resource was never created or has expired from the TTL cache, this often isn’t truly an “exceptional” scenario. Using logger.exception can confuse operational metrics or log scrapers into thinking an error occurred. Consider switching to logger.info or logger.warning.

wren-ai-service/src/web/v1/routers/sql_pairs.py (1)

91-110: BackgroundTasks error monitoring.

The indexing operation is offloaded to a background task. If an unhandled exception occurs there, it may never surface in the main request logs. Ensure you have a robust logging or error collection mechanism to capture background task exceptions.
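
One possible pattern, sketched here as an assumption rather than a prescription, is to wrap the coroutine handed to BackgroundTasks so failures are at least logged:

import functools
import logging

logger = logging.getLogger(__name__)


def log_background_errors(func):
    """Wrap an async callable passed to BackgroundTasks so unhandled exceptions are logged."""

    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        try:
            return await func(*args, **kwargs)
        except Exception:
            logger.exception("Background task %s failed", func.__name__)
            raise

    return wrapper

Usage would then look like background_tasks.add_task(log_background_errors(service.index), event_id, ...).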

wren-ai-service/src/pipelines/indexing/sql_pairs.py (2)

52-74: Check for double-deletion or missing docs.

SqlPairsCleaner.run calls delete_documents on matching records. If a record doesn’t exist, there’s no error. Depending on your logic, you may prefer more explicit reporting (for partial or total misses). Also ensure the DocumentStore’s filter logic aligns with your ID or project scoping.


213-225: Return confirmation on successful deletion.

Your clean method is async and returns None, but you might want to return an object or success boolean. That can help the caller confirm that the cleaning ran without error instead of having to rely solely on logs or exceptions.
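
For example, a hypothetical confirmation object could be returned; the class name and the cleaner call signature below are assumptions, not the current code:

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CleanResult:
    """Hypothetical return type; the current clean() returns None."""

    deleted_ids: List[str] = field(default_factory=list)
    success: bool = True


class SqlPairsWithConfirmation:
    def __init__(self, cleaner):
        self._cleaner = cleaner  # assumed to behave like the SqlPairsCleaner in this module

    async def clean(self, sql_pairs: list, project_id: Optional[str] = None) -> CleanResult:
        ids = [pair.id for pair in sql_pairs]
        await self._cleaner.run(sql_pair_ids=ids, project_id=project_id)
        return CleanResult(deleted_ids=ids)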

wren-ai-service/tests/pytest/pipelines/indexing/test_sql_pairs.py (1)

73-96: Consider adding edge case tests.

While the test covers basic deletion scenarios, consider adding tests for:

  • Deleting non-existent SQL pairs
  • Deleting with invalid project IDs
  • Concurrent deletions
import asyncio

import pytest


@pytest.mark.asyncio
async def test_sql_pairs_deletion_edge_cases():
    # Setup code...
    
    # Test deleting non-existent pairs
    await pipe.clean(sql_pairs=[], project_id="non-existent-id")
    assert await store.count_documents() == 2  # Should not affect existing documents
    
    # Test invalid project ID
    with pytest.raises(ValueError):
        await pipe.clean(sql_pairs=[], project_id="")
        
    # Test concurrent deletions
    await asyncio.gather(
        pipe.clean(sql_pairs=[], project_id="fake-id"),
        pipe.clean(sql_pairs=[], project_id="fake-id-2")
    )
    assert await store.count_documents() == 0
wren-ai-service/tests/pytest/services/test_sql_pairs.py (1)

28-229: Consider adding more edge cases to improve test coverage.

The current test suite covers basic functionality well. Consider adding these scenarios:

  1. Test with invalid SQL queries
  2. Test with duplicate SQL pair IDs
  3. Test with non-existent project IDs
  4. Test concurrent indexing operations
  5. Test error handling for database connection failures

Would you like me to help implement these additional test cases?

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7292b1f and 5943227.

📒 Files selected for processing (19)
  • wren-ai-service/src/config.py (1 hunks)
  • wren-ai-service/src/globals.py (11 hunks)
  • wren-ai-service/src/pipelines/common.py (1 hunks)
  • wren-ai-service/src/pipelines/indexing/__init__.py (0 hunks)
  • wren-ai-service/src/pipelines/indexing/sql_pairs.py (4 hunks)
  • wren-ai-service/src/pipelines/indexing/sql_pairs_deletion.py (0 hunks)
  • wren-ai-service/src/web/v1/routers/__init__.py (2 hunks)
  • wren-ai-service/src/web/v1/routers/question_recommendation.py (1 hunks)
  • wren-ai-service/src/web/v1/routers/relationship_recommendation.py (1 hunks)
  • wren-ai-service/src/web/v1/routers/semantics_description.py (1 hunks)
  • wren-ai-service/src/web/v1/routers/sql_pairs.py (1 hunks)
  • wren-ai-service/src/web/v1/routers/sql_pairs_preparation.py (0 hunks)
  • wren-ai-service/src/web/v1/services/__init__.py (2 hunks)
  • wren-ai-service/src/web/v1/services/sql_pairs.py (1 hunks)
  • wren-ai-service/src/web/v1/services/sql_pairs_preparation.py (0 hunks)
  • wren-ai-service/tests/pytest/pipelines/indexing/test_sql_pairs.py (1 hunks)
  • wren-ai-service/tests/pytest/pipelines/indexing/test_sql_pairs_deletion.py (0 hunks)
  • wren-ai-service/tests/pytest/services/test_sql_pairs.py (1 hunks)
  • wren-ai-service/tests/pytest/services/test_sql_pairs_preparation.py (0 hunks)
💤 Files with no reviewable changes (6)
  • wren-ai-service/tests/pytest/pipelines/indexing/test_sql_pairs_deletion.py
  • wren-ai-service/src/pipelines/indexing/__init__.py
  • wren-ai-service/src/pipelines/indexing/sql_pairs_deletion.py
  • wren-ai-service/src/web/v1/services/sql_pairs_preparation.py
  • wren-ai-service/src/web/v1/routers/sql_pairs_preparation.py
  • wren-ai-service/tests/pytest/services/test_sql_pairs_preparation.py
✅ Files skipped from review due to trivial changes (3)
  • wren-ai-service/src/web/v1/routers/semantics_description.py
  • wren-ai-service/src/web/v1/routers/relationship_recommendation.py
  • wren-ai-service/src/web/v1/routers/question_recommendation.py
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: pytest
  • GitHub Check: pytest
  • GitHub Check: Analyze (go)
🔇 Additional comments (16)
wren-ai-service/src/web/v1/services/sql_pairs.py (3)

26-34: Validate pipeline keys before use.

You're referencing self._pipelines["sql_pairs"] later on; ensure that "sql_pairs" key is always present. A defensive check or a try-except for missing pipeline keys could enhance reliability if there's a configuration or registration issue.
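
A minimal defensive lookup along the lines of this suggestion might look like:

def get_sql_pairs_pipeline(pipelines: dict):
    """Fail fast with a clear message if the 'sql_pairs' pipeline was never registered."""
    try:
        return pipelines["sql_pairs"]
    except KeyError as exc:
        raise ValueError("Pipeline 'sql_pairs' is missing from the service container") from exc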


60-81: Guard against uninitialized or empty requests.

When processing .sql_pairs in index, consider validating that request.sql_pairs is non-empty and well-formed. Empty or erroneous input might cause an unnecessary pipeline run or partial failure.


127-129: Consider concurrency safety for cache writes.

__setitem__ is straightforward, but if you anticipate concurrent writes to the same key from multiple tasks, data synchronization or checks might be needed. A TTLCache could still face race conditions in extreme cases.

wren-ai-service/src/web/v1/routers/sql_pairs.py (2)

20-39: Optional input validation for SQL pairs.

When receiving SQL pairs in the POST request, ensure the list is not excessively large or malformed. A pre-validation step (e.g., restricting the size or performing schema checks) can safeguard against unexpected overhead or pipeline failures.


121-143: Symmetry in endpoint naming and handling.

The DELETE endpoint mirrors the POST flow. This consistency is good, but also be sure each endpoint’s background task exceptions are tracked consistently. If you want partial deletes or more granular feedback, you might add additional metadata (e.g., which IDs succeeded/failed).

wren-ai-service/src/pipelines/indexing/sql_pairs.py (2)

24-25: Clarify default values.

sql: str = "" and question: str = "" are convenient defaults, but consider whether empty strings might cause confusion downstream. If an empty query is invalid, it might be safer to raise an error during initialization.


184-185: Validate existence of local SQL pairs file.

_load_sql_pairs warns if the file is not found, but the subsequent pipeline logic always merges self._external_pairs. If the file is missing or corrupt, you might define a fallback to prevent silent partial merges or duplicated keys.

wren-ai-service/src/web/v1/services/__init__.py (2)

41-41: LGTM! String formatting improvement.

Changed from single quotes to double quotes for better consistency.


61-92: LGTM! Well-organized service imports.

Good practice to:

  • Consolidate service imports in __init__.py to avoid circular imports
  • Use __all__ to explicitly control exports
  • Include # noqa: E402 to suppress import order warnings
wren-ai-service/src/pipelines/common.py (2)

36-41: LGTM! Enhanced pipeline flexibility.

Added method parameter enables dynamic invocation of pipeline methods, supporting the consolidated SQL pairs functionality.


49-49: Verify the impact of dev logging in production.

Setting is_dev=True might expose sensitive information in logs.

wren-ai-service/src/web/v1/routers/__init__.py (1)

15-15: LGTM! Router consolidation aligns with PR objectives.

Successfully replaced sql_pairs_preparation with consolidated sql_pairs router while maintaining import and inclusion order.

Also applies to: 33-33

wren-ai-service/src/config.py (1)

63-63: Verify the impact of the configuration path change.

The default path for SQL pairs has been updated from "pairs.json" to "sql_pairs.json" for better naming consistency. This is a breaking change that requires users to update their configuration files.

Run this script to find any hardcoded references to the old path:

✅ Verification successful

🏁 Script executed:

#!/bin/bash
# Description: Search for hardcoded references to the old path.
rg -i "pairs\.json"

Length of output: 681


Configuration Change Impact Verified

  • The default sql_pairs_path in wren-ai-service/src/config.py has been updated to "sql_pairs.json", and this change propagates to the corresponding module in wren-ai-service/src/pipelines/indexing/sql_pairs.py.
  • Test cases in wren-ai-service/tests/pytest/pipelines/indexing/test_sql_pairs.py explicitly override the path using "tests/data/pairs.json", so they remain unaffected.
  • This confirms that production users will need to update their configuration files accordingly, as the breaking change is applied only to the default and not in tests.
wren-ai-service/tests/pytest/services/test_sql_pairs.py (1)

12-26: LGTM! Well-structured fixture setup.

The fixture correctly initializes the service container and document store provider with proper cleanup between tests.

wren-ai-service/src/globals.py (2)

10-10: LGTM! Improved import organization.

Consolidating service imports into a single namespace improves code organization and maintainability.


17-30: LGTM! Service container updates align with the PR objectives.

The changes correctly:

  1. Update service type annotations to use the services namespace
  2. Rename SqlPairsPreparationService to SqlPairsService
  3. Update pipeline configuration for SQL pairs

Also applies to: 232-240

Comment on lines 200 to +212

-        return await self._pipe.execute(
-            ["write"],
-            inputs={
-                "mdl_str": mdl_str,
-                "project_id": project_id,
-                **self._components,
+        input = {
+            "mdl_str": mdl_str,
+            "project_id": project_id,
+            "external_pairs": {
+                **self._external_pairs,
+                **external_pairs,
+            },
+            **self._components,
+        }
+
+        return await self._pipe.execute(["write"], inputs=input)

coderabbitai bot commented:

⚠️ Potential issue

Consider error handling in pipeline execution.

The pipeline self._pipe.execute(["write"], inputs=input) might fail if the store or embedding process errors. Currently, that error is not caught here. For user-facing reliability, consider a try-except to either log or handle partial writes more gracefully.

+        try:
+            return await self._pipe.execute(["write"], inputs=input)
+        except Exception as e:
+            logger.error(f"Pipeline execution failed: {e}")
+            raise


     pipe_components = generate_components(settings.components)
     pipeline = pipeline_cls(**pipe_components[pipeline_name])
     init_langfuse(settings)

-    async_validate(lambda: pipeline.run(**kwargs))
+    async_validate(lambda: getattr(pipeline, method)(**kwargs))
coderabbitai bot commented:

🛠️ Refactor suggestion

Verify error handling for dynamic method invocation.

Using getattr without checking if the method exists could raise AttributeError.

-    async_validate(lambda: getattr(pipeline, method)(**kwargs))
+    if not hasattr(pipeline, method):
+        raise ValueError(f"Method '{method}' not found in pipeline")
+    async_validate(lambda: getattr(pipeline, method)(**kwargs))

cyyeh (Member) left a comment:

lgtm

cyyeh merged commit 69b220e into main on Feb 6, 2025
10 checks passed
cyyeh deleted the feat/sql-paris-endpoint branch on February 6, 2025 at 02:46