Cleanup env #3996
Conversation
Walkthrough

This pull request introduces significant changes to the handling of environment variables and related functionality within the application. Key modifications include the removal of several constants related to data URLs, specifically replacing TIMESERIES_DATA_URL and INDEXER_DATA_URL with a single HISTORICAL_DATA_URL.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (2)
packages/server/src/queries/data-services/position-performance.ts (1)
3-3: Consider documenting the service consolidation

The consolidation of TIMESERIES_DATA_URL and INDEXER_DATA_URL into HISTORICAL_DATA_URL represents a significant architectural change (see the sketch after this list). Consider:
- Updating API documentation to reflect the consolidated service
- Adding integration tests to verify all endpoints work with the new service
- Planning a coordinated deployment to ensure all consumers are updated
Also applies to: 3-3
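For illustration, a minimal sketch of what the consolidated constant could look like in packages/server/src/env.ts, based on the default value reported by the verification runs later in this review; the exact shape is an assumption:

```typescript
// Hypothetical sketch of the consolidated constant in packages/server/src/env.ts.
// The fallback URL matches the default reported by the review's verification output.
export const HISTORICAL_DATA_URL: string =
  process.env.NEXT_PUBLIC_HISTORICAL_DATA_URL ?? "https://data.stage.osmosis.zone";
```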
packages/server/src/queries/complex/pools/incentives.ts (1)
287-291: Consider extracting the gauge staleness check parameters.

The gauge staleness check logic uses magic numbers. Consider extracting these values into named constants for better maintainability.
```diff
+const GAUGE_STALENESS_CHECK_PARAMS = {
+  maxNewGaugesPerDay: 100,
+  durationOneDay: 86400000,
+};
+
 checkForStaleness(
   gauge,
   Number.parseInt(data[data.length - 1].id),
   epochs
 )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- turbo.json is excluded by !**/*.json
📒 Files selected for processing (15)
- packages/server/src/env.ts (1 hunks)
- packages/server/src/queries/blockaid/index.ts (0 hunks)
- packages/server/src/queries/blockaid/transaction.ts (0 hunks)
- packages/server/src/queries/complex/pools/incentives.ts (6 hunks)
- packages/server/src/queries/data-services/filtered-pools.ts (2 hunks)
- packages/server/src/queries/data-services/pools-fees.ts (2 hunks)
- packages/server/src/queries/data-services/position-performance.ts (2 hunks)
- packages/server/src/queries/data-services/price-range-apr.ts (2 hunks)
- packages/server/src/queries/data-services/token-data.ts (2 hunks)
- packages/server/src/queries/data-services/token-historical-chart.ts (2 hunks)
- packages/server/src/queries/data-services/token-pair-historical-chart.ts (2 hunks)
- packages/server/src/queries/index.ts (0 hunks)
- packages/web/.env (1 hunks)
- packages/web/config/env.ts (1 hunks)
- packages/web/stores/root.ts (3 hunks)
💤 Files with no reviewable changes (3)
- packages/server/src/queries/index.ts
- packages/server/src/queries/blockaid/index.ts
- packages/server/src/queries/blockaid/transaction.ts
🔇 Additional comments (15)
packages/server/src/queries/data-services/pools-fees.ts (1)
3-3: Verify API compatibility with the new historical data service
The URL endpoint remains unchanged while the base URL is switched to HISTORICAL_DATA_URL. Let's verify that the new consolidated service maintains the same API contract.
Also applies to: 18-18
✅ Verification successful
Let me gather more information about the URL constant changes to ensure API compatibility.
Based on the gathered information, I can now generate the final response:
URL change is part of service consolidation with maintained API contract
The change from TIMESERIES_DATA_URL to HISTORICAL_DATA_URL is part of a broader service consolidation effort (a sketch of the resulting query follows the evidence list). The evidence shows:
- The HISTORICAL_DATA_URL is consistently used across multiple data service endpoints
- The PoolFees interface remains unchanged, maintaining the API contract
- The endpoint path "/fees/v1/pools" is preserved
- The default URL points to a valid data service: "https://data.stage.osmosis.zone"
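To make the unchanged contract concrete, here is a hedged sketch of the query after the swap; the import paths, the package name, and the apiClient helper's exact signature are assumptions, and the PoolFees fields are deliberately left abstract since the review only confirms the interface is unchanged:

```typescript
// Sketch only: HISTORICAL_DATA_URL replaces TIMESERIES_DATA_URL as the base,
// while the relative endpoint path stays exactly the same.
import { apiClient } from "@osmosis-labs/utils"; // assumed package name
import { HISTORICAL_DATA_URL } from "../../env"; // assumed relative path

export interface PoolFees {
  // Field layout is unchanged by this PR; concrete fields omitted here.
  [key: string]: unknown;
}

export function queryPoolsFees(): Promise<PoolFees> {
  const url = new URL("/fees/v1/pools", HISTORICAL_DATA_URL);
  return apiClient<PoolFees>(url.toString());
}
```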
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the existence and structure of the fees endpoint in the new service

# Check if the fees endpoint exists in other files (could be documentation or tests)
rg -g '!*.{lock,sum}' -A 3 '/fees/v1/pools'

# Look for any interface or type definitions related to pool fees
ast-grep --pattern 'interface PoolFees {
  $$$
}'
```
Length of output: 1242
Script:
```bash
#!/bin/bash
# Check the changes in environment variables and their usage
rg "HISTORICAL_DATA_URL|TIMESERIES_DATA_URL" -A 2

# Look for any tests or documentation about the API endpoints
fd -e test.ts -e spec.ts -e md . -x rg -l "pools.*fees"
```
Length of output: 5056
packages/server/src/queries/data-services/price-range-apr.ts (1)
3-3: Verify complex endpoint compatibility in the consolidated service
The endpoint requires specific query parameters and path variables. We should verify that the new HISTORICAL_DATA_URL service supports this exact API structure.
Also applies to: 20-20
✅ Verification successful
Let me gather more information about the service URL changes to ensure compatibility.
Based on the search results, I can now generate the final response:
The endpoint structure is consistent across the codebase
The /cl/v1/apr/rewards/${poolId}?lower_tick=${lowerTickIndex}&upper_tick=${upperTickIndex} endpoint is consistently used with the HISTORICAL_DATA_URL service, which is properly configured in the environment. The same endpoint structure is used in both packages/server/src/queries/data-services/price-range-apr.ts and packages/stores/src/queries-external/concentrated-liquidity/range-aprs.ts, confirming API compatibility. The service URL defaults to "https://data.stage.osmosis.zone" if not specified in the environment. A sketch of the URL construction follows.
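A short sketch of the shared URL construction, assuming the parameter names shown in the endpoint above and an illustrative import path:

```typescript
import { HISTORICAL_DATA_URL } from "../../env"; // assumed relative path

// Builds the concentrated-liquidity APR rewards URL exactly as described above.
function priceRangeAprUrl(
  poolId: string,
  lowerTickIndex: number,
  upperTickIndex: number
): URL {
  return new URL(
    `/cl/v1/apr/rewards/${poolId}?lower_tick=${lowerTickIndex}&upper_tick=${upperTickIndex}`,
    HISTORICAL_DATA_URL
  );
}
```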
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the APR rewards endpoint structure

# Search for similar endpoint patterns to confirm consistency
rg -g '!*.{lock,sum}' -A 3 '/cl/v1/apr/rewards/.*\?lower_tick='

# Look for any interface definitions or tests related to APR calculations
ast-grep --pattern 'interface PriceRangeAPR {
  $$$
}'
```
Length of output: 1150
Script:
```bash
#!/bin/bash
# Check for any other references to HISTORICAL_DATA_URL and INDEXER_DATA_URL
rg -g '!*.{lock,sum}' "HISTORICAL_DATA_URL" -B 2 -A 2

# Look for any environment variable definitions
fd -g '.env*' --exec cat {} \;
```
Length of output: 8626
packages/server/src/queries/data-services/position-performance.ts (1)
3-3: Verify position data structure compatibility
The endpoint returns complex position performance data. We should verify that the new HISTORICAL_DATA_URL service maintains the same data structure and types.
Also applies to: 49-49
✅ Verification successful
Let me gather more information about the data structures and interfaces to verify compatibility.
Based on the search results, I can now see the complete interface definition and data structure in the codebase. Let me generate the final response:
Verified data structure compatibility
The HISTORICAL_DATA_URL service maintains compatible data structures with the following interfaces (a hypothetical reconstruction follows the list):

- PositionPerformance with total_spread_rewards, total_incentives_rewards, total_forfeit_rewards, and principal fields
- PrincipalAction containing position details and asset information
- Asset with denom, amount, and value fields
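A hypothetical reconstruction of those interfaces; only the interface and field names come from the review above, while every field type is an assumption for illustration:

```typescript
// Field names taken from the review; all types are assumed, not confirmed.
interface Asset {
  denom: string;
  amount: string;
  value: string;
}

interface PrincipalAction {
  // Position details plus the assets involved (shape assumed).
  assets: Asset[];
}

interface PositionPerformance {
  total_spread_rewards: Asset[];
  total_incentives_rewards: Asset[];
  total_forfeit_rewards: Asset[];
  principal: PrincipalAction;
}
```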
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify position performance endpoint and data structures

# Search for similar position endpoints
rg -g '!*.{lock,sum}' -A 3 '/cl/v1/position/last/id'

# Look for related interface definitions
ast-grep --pattern 'interface PositionPerformance {
  $$$
}'
```
Length of output: 826
Script:
```bash
#!/bin/bash
# Search for interface definitions and types related to position performance
rg -g '!*.{lock,sum}' -A 15 'interface.*Position.*Performance'

# Search for the actual usage of the endpoint response
rg -g '!*.{lock,sum}' -B 3 -A 10 'HISTORICAL_DATA_URL.*position/last/id'

# Look for any type definitions in nearby files
fd "position.*\.(ts|tsx)$" --exec head -n 50 {}
```
Length of output: 22840
packages/web/config/env.ts (1)
17-17: Verify environment variable configuration across deployments
The consolidation of data services looks good. However, we should ensure that NEXT_PUBLIC_HISTORICAL_DATA_URL
is properly configured across all deployment environments.
✅ Verification successful
Environment variable is properly configured and used across the codebase
The environment variable NEXT_PUBLIC_HISTORICAL_DATA_URL is:

- Properly declared in turbo.json as a shared environment variable
- Correctly used in the web application (packages/web/config/env.ts and packages/web/stores/root.ts)
- Well integrated in the server package with a fallback value (packages/server/src/env.ts)
- Consistently used across multiple data service queries in the server package
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for environment variable usage in deployment configurations
echo "Checking for HISTORICAL_DATA_URL configuration in deployment files..."
fd -e yaml -e yml | xargs rg "NEXT_PUBLIC_HISTORICAL_DATA_URL"
```
Length of output: 211
Script:
```bash
#!/bin/bash
# Let's check for environment variable configuration in different ways:

# 1. Check all files for the environment variable
echo "Checking for HISTORICAL_DATA_URL in all files..."
rg "HISTORICAL_DATA_URL|NEXT_PUBLIC_HISTORICAL_DATA_URL" -A 2

# 2. Look for environment configuration files
echo -e "\nListing potential environment configuration files..."
fd -e env -e env.example -e env.local -e env.development -e env.production

# 3. Check package.json files for environment-related scripts
echo -e "\nChecking package.json files for environment configurations..."
fd package.json -x cat {} | jq -c 'select(.scripts != null) | .scripts | keys[] as $k | select($k | contains("env"))'
```
Length of output: 6209
packages/server/src/queries/data-services/token-data.ts (2)
3-3: LGTM! Import updated for consolidated service
The import change aligns with the consolidation of data services.
30-30: Verify API endpoint compatibility
The URL endpoints remain unchanged while the base URL has been updated. Let's verify that these endpoints are available in the new consolidated service.
Also applies to: 38-38
✅ Verification successful
Let me gather more information about the base URL to ensure it's consistently updated across the codebase.
Based on the gathered information, I can now provide the final response:
Base URL and endpoints are consistently configured
The /tokens/v2/ endpoints are consistently used across the codebase and properly configured with HISTORICAL_DATA_URL. The base URL is centrally defined in packages/server/src/env.ts with a default value of "https://data.stage.osmosis.zone" and can be overridden via the environment variable NEXT_PUBLIC_HISTORICAL_DATA_URL. All token-related endpoints under the v2 path (/tokens/v2/all, /tokens/v2/${coinDenom}, /tokens/v2/historical/, /tokens/v2/mcap) are consistently used with this base URL; the sketch below collects them in one place.
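The four endpoint paths from the verification, gathered as a sketch; the constant name and helper shape are illustrative, not the codebase's actual identifiers:

```typescript
// All /tokens/v2/ paths named in the verification, relative to HISTORICAL_DATA_URL.
const TOKEN_V2_ENDPOINTS = {
  all: "/tokens/v2/all",
  byDenom: (coinDenom: string) => `/tokens/v2/${coinDenom}`,
  historical: "/tokens/v2/historical/",
  marketCap: "/tokens/v2/mcap",
} as const;
```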
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if these endpoints are referenced elsewhere to ensure consistent updates
echo "Checking for token endpoint references..."
rg -A 2 "/tokens/v2/(all|\$|[^\"'\s]+)" --type ts
```
Length of output: 1893
Script:
```bash
#!/bin/bash
# Check for HISTORICAL_DATA_URL usage and definition
echo "Checking HISTORICAL_DATA_URL references..."
rg "HISTORICAL_DATA_URL" --type ts -A 2

echo -e "\nChecking for any other token data URL configurations..."
rg "new URL\(.*token" --type ts
```
Length of output: 5591
packages/server/src/queries/data-services/token-pair-historical-chart.ts (2)
3-3: LGTM! Import updated for consolidated service
The import change aligns with the consolidation of data services.
36-36: Verify error handling with new service
The URL base change looks good. The error handling is robust, but we should verify that the new service returns errors in the same format.
✅ Verification successful
Let me gather more context about the error handling in the new service.
Let me check the implementation of the API client to understand how it handles responses.
Let me try a different approach to find the apiClient implementation.
Let me check the implementation of apiClient and its tests.
Error handling is consistent with the API client expectations
The error handling in token-pair-historical-chart.ts is well-aligned with the API client's implementation. The code correctly handles:

- JSON responses with a message property for error cases
- Specific handling for "symbol not Found" errors by returning an empty array
- Array responses for successful cases
- Unexpected response formats

The API client itself provides robust error handling for various scenarios, including JSON parsing errors, bad gateway errors, and chain-specific errors. The error format checking in the file is consistent with how the API client processes and transforms responses. A sketch of the pattern follows.
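A hedged sketch of the handling pattern described above; the function and type names are illustrative, not the file's actual identifiers:

```typescript
// Mirrors the cases the review lists: `message` errors (with the
// "symbol not Found" special case), successful arrays, and anything else.
type ChartPoint = unknown; // concrete shape omitted

function parseChartResponse(response: unknown): ChartPoint[] {
  if (response !== null && typeof response === "object" && "message" in response) {
    const { message } = response as { message: string };
    // A missing symbol is treated as "no data" rather than a hard failure.
    if (message.includes("symbol not Found")) return [];
    throw new Error(message);
  }
  // Successful responses arrive as plain arrays of chart points.
  if (Array.isArray(response)) return response;
  throw new Error("Unexpected response format");
}
```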
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for similar error handling patterns across the codebase
echo "Checking for similar error handling patterns..."
ast-grep --pattern 'if ("message" in response) {
  $$$
}'
```
Length of output: 1413
Script:
```bash
#!/bin/bash
# Let's check the file content to understand the complete error handling context
cat packages/server/src/queries/data-services/token-pair-historical-chart.ts
```
Length of output: 1507
Script:
```bash
#!/bin/bash
# Check the implementation of apiClient
rg -A 5 "export const apiClient" packages/utils/
```
Length of output: 50
Script:
```bash
#!/bin/bash
# Search for apiClient in the entire codebase
fd . -t f -e ts -e js | xargs rg -l "apiClient"
```
Length of output: 2032
Script:
```bash
#!/bin/bash
# Check the implementation of apiClient and its tests
cat packages/utils/src/api-client.ts packages/utils/src/__tests__/api-client.spec.ts
```
Length of output: 8040
packages/web/.env (1)
15-18: Verify the production URL configuration.
The HISTORICAL_DATA_URL
is set to the production URL. Please ensure this is the correct endpoint and that it's properly secured.
packages/server/src/queries/data-services/token-historical-chart.ts (1)
3-3: Verify API compatibility with the new endpoint.
The switch to HISTORICAL_DATA_URL
looks correct, but we should verify that the endpoint supports the same API contract for historical chart data.
Also applies to: 59-59
✅ Verification successful
API compatibility is maintained with the new endpoint
The verification shows that:

- Both server and store implementations use the same endpoint pattern /tokens/v2/historical/${symbol}/chart?tf=${tf}
- Error handling for "symbol not Found" remains consistent
- The response type TokenHistoricalPrice[] and data structure are identical
- The URL construction and parameter handling (symbol encoding, timeframe) are compatible
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any other files using similar endpoints
echo "Checking for similar endpoint usage patterns..."
rg "/tokens/v2/historical/.*?/chart\?tf=" --type ts
# Check for any error handling patterns that might need updating
echo "Checking error handling patterns..."
rg "symbol not Found" --type ts
Length of output: 698
Script:
```bash
#!/bin/bash
# Let's check the actual implementation to verify error handling and API usage
echo "Checking token-historical-chart implementation..."
ast-grep --pattern 'class $_ extends $_ {
  $$$
  async fetch($$$) {
    $$$
  }
  $$$
}' packages/server/src/queries/data-services/token-historical-chart.ts

echo "\nChecking the store implementation..."
ast-grep --pattern 'class $_ extends $_ {
  $$$
  async fetch($$$) {
    $$$
  }
  $$$
}' packages/stores/src/queries-external/token-historical-chart/index.ts
```
Length of output: 516
Script:
```bash
#!/bin/bash
# Let's check the actual file content to see the implementation details
echo "Content of token-historical-chart.ts in server:"
cat packages/server/src/queries/data-services/token-historical-chart.ts

echo -e "\nContent of token-historical-chart/index.ts in stores:"
cat packages/stores/src/queries-external/token-historical-chart/index.ts
```
Length of output: 6038
packages/server/src/queries/data-services/filtered-pools.ts (1)
84-84: LGTM! Verify URL transition.
The URL construction has been correctly updated to use HISTORICAL_DATA_URL. The change aligns with the PR objective of consolidating historical data URLs.
✅ Verification successful
URL transition verified successfully
The HISTORICAL_DATA_URL is properly defined and used consistently across the codebase:
- Defined in packages/server/src/env.ts with a fallback to "https://data.stage.osmosis.zone"
- Used correctly in the filtered pools query at packages/server/src/queries/data-services/filtered-pools.ts
- No references to TIMESERIES_DATA_URL found in the codebase
- The endpoint /stream/pool/v1/all is used only with HISTORICAL_DATA_URL
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify that HISTORICAL_DATA_URL is properly defined and that there are no remaining references to TIMESERIES_DATA_URL

# Check for HISTORICAL_DATA_URL definition
rg "HISTORICAL_DATA_URL.*=.*process\.env"

# Check for any remaining TIMESERIES_DATA_URL references
rg "TIMESERIES_DATA_URL"
```
Length of output: 175
Script:
```bash
#!/bin/bash
# Let's verify the URL transition more thoroughly

# Check all occurrences of HISTORICAL_DATA_URL to understand its usage
rg "HISTORICAL_DATA_URL" -B 2 -A 2

# Check for any URL construction with "/stream/pool/v1/all"
rg "/stream/pool/v1/all" -B 2 -A 2

# Look for environment variable definitions in common config locations
fd "env" --exec cat {}
```
Length of output: 22967
packages/web/stores/root.ts (2)
23-23: LGTM! Type-only import.
Good practice to use a type-only import when only the type information is needed.
161-162: Verify the impact of using the same URL twice.
The same HISTORICAL_DATA_URL
is being used twice in the QueriesExternalStore constructor. This appears to be consolidating two different data sources (previously TIMESERIES_DATA_URL and INDEXER_DATA_URL) into one.
✅ Verification successful
Based on the gathered information, I can now provide a final response:
Using the same HISTORICAL_DATA_URL twice is intentional and correct.
The constructor parameters timeseriesDataBaseUrl and indexerDataBaseUrl in the QueriesExternalStore are being consolidated to use the same HISTORICAL_DATA_URL. This is safe because (a sketch of the call site follows the list):

- The historical data functionality is primarily handled by ObservableQueryTokensHistoricalChart, which only requires a single base URL
- The store's implementation shows it's designed to work with a single historical data source
- The change appears to be part of an intentional consolidation of data sources
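A minimal sketch of the call site in packages/web/stores/root.ts, assuming only the two base-URL parameters named above; the real constructor takes additional arguments that are elided here:

```typescript
// Illustrative only: both base-URL parameters receive the same consolidated value.
declare class QueriesExternalStore {
  constructor(timeseriesDataBaseUrl: string, indexerDataBaseUrl: string);
}
declare const HISTORICAL_DATA_URL: string;

const queriesExternalStore = new QueriesExternalStore(
  HISTORICAL_DATA_URL, // was TIMESERIES_DATA_URL
  HISTORICAL_DATA_URL // was INDEXER_DATA_URL
);
```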
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify QueriesExternalStore constructor usage and implementation

# Check the constructor parameters to understand the URL usage
ast-grep --pattern 'class QueriesExternalStore {
  constructor($_) {
    $$$
  }
}'
```
Length of output: 85
Script:
```bash
#!/bin/bash
# Let's try to find the QueriesExternalStore class definition and its usage
rg -A 10 "class QueriesExternalStore"

# Also search for the HISTORICAL_DATA_URL constant definition
rg "HISTORICAL_DATA_URL.*="
```
Length of output: 2549
Script:
```bash
#!/bin/bash
# Let's check how HISTORICAL_DATA_URL is used in the constructor and the store implementation
rg -B 5 -A 10 "HISTORICAL_DATA_URL" packages/web/stores/root.ts

# Also check the parent class implementation
rg -B 5 -A 10 "constructor.*QueriesExternalStore" packages/stores/src/queries-external/store.ts
```
Length of output: 1195
Script:
```bash
#!/bin/bash
# Let's check the implementation of the store to understand how these URLs are used
rg -B 5 -A 15 "constructor" packages/web/stores/queries-external/store.ts

# Also check for any references to historical data in the observable queries
rg -A 5 "ObservableQuery.*Historical"
```
Length of output: 5157
packages/server/src/queries/complex/pools/incentives.ts (2)
1-1: LGTM! Enhanced type safety.
Good practice to use explicit type imports when only the type information is needed. This reduces the bundle size and makes type usage more explicit; a one-line example follows the list of locations below.
Also applies to: 3-3, 14-14, 17-17, 20-20
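For example (the module path and imported name are hypothetical):

```typescript
// A type-only import is erased at compile time, so it can never pull the
// module's runtime code into the bundle.
import type { Gauge } from "./gauges"; // hypothetical module path
```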
237-239: LGTM! Standardized number parsing.
Good practice to use Number.parseInt instead of the global parseInt. This is more explicit and avoids potential issues with the global scope (see the example after the list of locations below).
Also applies to: 260-260, 299-300, 322-322
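A one-line illustration of the difference:

```typescript
// Same function object, but reading it off Number avoids depending on the
// global scope and makes the intent explicit.
console.log(Number.parseInt === parseInt); // true
const epochId = Number.parseInt("42", 10); // 42
```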
What is the purpose of the change:
Update env to match current env deps.
Brief Changelog
Three separate vars representing the indexer base URL have been consolidated into one.
Testing and Verifying
Tested locally and ensured builds pass.